Robots will not take over the Earth, at the very least because they cannot experience human ambitions, psychophysiologist Alexander Kaplan is convinced.
Do not be afraid of technological progress. © Photo from the personal archive of Alexander Kaplan
Scientists around the world have warned that work on creating artificial intelligence could spiral out of control in the near future. At issue is the revolution in robotics and in ever more capable computers. According to some experts, a little more and the "hardware" will become dangerously independent.
How justified are such concerns? Alexander Kaplan, psychophysiologist, Doctor of Biological Sciences, professor and head of the Laboratory for Neurophysiology and Neuro-Computer Interfaces at the Faculty of Biology of Moscow State University, discusses them in an interview with Rosbalt.
— Alexander, are we really threatened by a rise of the machines, or is that the stuff of fantasy?
— You mean, whether machines and robots will turn against humans?
— Whether they can become independent of humans and get out of control.
— Mechanical and electromechanical systems with actuators, tracks and the like pose no danger to humanity, either now or in the future. By themselves they are ordinary pieces of iron, driven by a person or by the programs that a person has created.
The danger, therefore, lies in software systems, which will reside not only in the "minds" of robots but also in distributed systems and cloud technologies. These systems can indeed be dangerous to humans, but no more so than, say, the safety systems developed for nuclear power plants, or the systems that control modern weapons. Their developers need to make sure that fuses and redundant mechanisms are in place to keep these systems from operating abnormally. Still, the greatest danger here is the human being himself, who misunderstands something, writes the code wrong somewhere, and so on.
— But might a software system at some point begin to evolve on its own?
— That cannot be ruled out. Modern intelligent technologies already include self-evolving software systems, which are meant to analyze and improve themselves. And, of course, the question arises: could this evolution take the path, so to speak, of seizing power? If that happens, the one to blame will be the human who invented such a device and wrote the corresponding program.
Fortunately, there is one thing that will not let software systems go too far down the path of intercepting people's actions, let alone seizing power on Earth: they do not have, and will not have, the intellectual experience of a human being. They can, of course, keep every book in the world in memory, but they cannot use them the way we do, because a human's vector of existence in this world requires a living body, and with it a host of other functions that machines lack. For example, humanistic views, or the calculation to preserve life, one's own and life on a global scale, and not to use technology capable of destroying it. Every person has an instinct of self-preservation, and so do groups of people, governments, and humanity in general.
None of this can exist in software systems, because they are not alive and all of this is indifferent to them. Will they deliberately set out to harm a person? For that, one has to be human. "To seize power", "to harm", "to hurt" are all too human notions…
— And yet, where is the line beyond which the autonomy of machines and their independence from us could begin?
— Such systems exist even now: autopilots in airplanes, for example. They are quite complex; in flight they are guided by satellites and ground crews, and the pilots can rest. Of course, if something goes wrong and the program steps outside the boundaries of normal operation, it can harm people. But again, the responsibility will lie with the people who created the software product. An inanimate, computational substance has no motive to crash the plane. On the contrary, for that to happen the automation usually has to be disabled. And if the matter is approached sensibly, reliability will only increase.
— It is believed that in computer technology, the cycles of self-improvement are accelerating beyond our ability to track them…
— Computer technology does not improve automatically, but in design bureaus, institutes and research centers, where new hardware and software are created for ever more sophisticated computers. All of this is closely monitored by the specialists who work there. It is another matter that the average person cannot keep track of such changes, just as in the early twentieth century he could not keep track of the cars appearing en masse on the roads.
— It is estimated that there are about 1.6 million robots on the planet today, used mainly in industry. And in 10-15 years a boom in the production of electronic assistants for home and personal use is expected. Experts predict that within the next five years more than 30 million such robots will be sold worldwide, able to provide many useful services. But harm, too, perhaps?
— It is unlikely that "household" robots and similar devices can be more dangerous than cars. Our homes already contain refrigerators, televisions, vacuum cleaners, microwave ovens and other, even more "intelligent" appliances. We survive in today's mechanized world only because we observe certain rules.
I see no problem here, quite the contrary: it seems to me that over time technology will become safer, because it will be supervised by small-scale electronic automation. Washing machines already have small embedded processors that control their operation. The same will happen with robots.
— Experts believe that the danger comes from hackers. If they hack some system, say a military one, or a system controlling powerful robots, for example in civil aviation, in space or at nuclear plants, we will be in big trouble…
— Perhaps so. Behind the "malicious" actions of computer and software systems there will always be a person. It need not be a criminal or a hacker; it may be the specialist who actually operates them, like a pilot who deliberately disables the automatic monitoring of the aircraft's condition in flight. But for every malicious act it is possible to develop adequate protection.
— Software is now being created that is smart enough to solve the so-called "captcha", the completely automated public Turing test used on many websites to determine whether the user of a system is a human or a bot. Shouldn't that worry us?
— As soon as a new encryption system is created, methods for breaking its ciphers are, of course, developed as well. This battle will go on continuously, but the security systems stay ahead in it, because the system itself is created first, and only then does someone begin to pick its keys. Seeing that someone is trying to crack it, the software system is modified in turn, and so on. The head start here is far greater than the attacks; one simply needs to foresee the possibility of malicious conduct in time. Clearly, in some cases someone will fail to keep up, and from that, unfortunately, we are not immune…
— According to some experts, the governments of countries that invest huge amounts of money in robotics underestimate the risks. Is that so?
— I think every country appreciates the dangers, but the scale of prevention is determined by its material and human resources. When resources are insufficient, someone starts to take risks. Human greed also plays a role: in a situation where, for example, an unscrupulous mine owner does not invest enough money in safety, catastrophes result.
— We are already too dependent on electronic aids. People cannot imagine their lives without gadgets. Is the emergence of robots the next step? Will we soon be unable to do without them?
— Well, of course. People long ago became dependent on technical devices. We live in homes with central heating, and there is nothing we can do in winter if it suddenly shuts down, along with the electricity in the sockets. The only option left is to light fires…
But I see no problem in yet more service and assistance robots entering our lives; we will adjust to that. The reliability of all life-support systems simply has to be increased.
— Researchers say that we are now at a "point of no return", and that if we pass it, we will lose the chance to correct anything.
— What point? Software systems have long been beating chess champions, and now they are starting to beat people at cards. While in chess everything is fully determined and you only need to enumerate the moves, in card games the information available is often insufficient for direct calculation. Nevertheless, computers beat humans at cards and even win the intellectual game "What? Where? When?". But this does not mean that a mind has emerged in the programs. It is the achievement of the programming teams that create them.
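The "enumerate the moves" idea for fully determined games can be illustrated with a minimal sketch. The game below is a toy chosen for brevity, not anything from a real chess engine: single-heap Nim, where each player removes 1-3 stones and whoever takes the last stone wins. The function name and the game itself are illustrative assumptions.

```python
# Exhaustive move enumeration in a fully determined game (toy Nim).
# Each player takes 1-3 stones; taking the last stone wins.

def best_move(stones):
    """Return a winning move for the player to move, or None if lost."""
    # Try every legal move; a move wins if it leaves the opponent
    # in a position from which they have no winning move.
    for take in (1, 2, 3):
        if take <= stones and best_move(stones - take) is None:
            return take
    return None  # every move leaves the opponent a win

# With 4 stones, every reply leaves the opponent a winning position,
# so the player to move loses under perfect play.
print(best_move(4))  # None: a losing position
print(best_move(5))  # 1: take one stone, leaving 4
```

Chess programs work on the same principle, only with enormously more positions to enumerate and heuristics to prune them, which is why, as Kaplan notes, their strength reflects the skill of their programmers rather than any mind of their own.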
The programs themselves contain no intelligent life; they do not sustain themselves and are not capable of autonomously confronting man. What we do need to worry about is the growing mass of software systems in the world and their security.
Interviewed by Vladimir Voskresensky