Drones and self-driving cars as weapons: why we need to be afraid of hackers
Anonymous

If artificial intelligence falls into the wrong hands, the civilized world could descend into chaos.

Few would deny that artificial intelligence can take our lives to the next level. AI can solve many problems that are beyond human capability.

However, many believe that a superintelligence would inevitably want to destroy us, like Skynet, or would start experimenting on people, like GLaDOS from the Portal games. The irony is that only humans can make artificial intelligence good or evil.

Why artificial intelligence can be a serious threat

Researchers from Yale, Oxford, Cambridge, and OpenAI have published a report on the malicious use of artificial intelligence. It argues that the real danger comes from hackers: with malicious code, they can disrupt the operation of automated systems controlled by AI.

The researchers fear that technologies built with good intentions could be turned against us. Surveillance equipment, for example, can be used not only to catch terrorists but also to spy on ordinary citizens. They are also concerned about commercial delivery drones: it would be easy to intercept one and plant explosives on it.

Self-driving cars are another destructive use case for AI. Changing just a few lines of code could make a car ignore safety rules.

The researchers believe the threats fall into three categories: digital, physical, and political.

  • Artificial intelligence is already being used to probe software for vulnerabilities. In the future, hackers could build a bot capable of bypassing any protection.
  • AI lets a single person automate many processes at once: for example, controlling a swarm of drones or a fleet of cars.
  • Technologies such as deepfakes can be used to influence a country's political life by spreading false information about world leaders through bots on the Internet.

These frightening scenarios are still only hypothetical. The authors of the study do not call for abandoning the technology; instead, they argue that national governments and large companies should address security while the AI industry is still in its infancy.

Policymakers must study the technology and work with experts in the field to effectively regulate the creation and use of artificial intelligence.

Developers, in turn, must assess the risks posed by these technologies, anticipate the worst consequences, and warn world leaders about them. The report calls on AI developers to team up with security experts from other fields and examine whether the practices that secure those fields can also be applied to protecting artificial intelligence.

The full report describes the problem in more detail, but the bottom line is this: AI is a powerful tool, and all interested parties should study the new technology and make sure it is not used for criminal purposes.
