How do we make the best of AI happen for us?
Many leading artificial intelligence researchers think that within a few decades, AI will be able to do not just some of our jobs but all of them, forever transforming life on Earth.
The reason many people dismiss this as science fiction is that we have traditionally thought of intelligence as something mysterious that can exist only in biological organisms, especially humans. But this chauvinism is unscientific.
From my perspective as a physicist and AI researcher, intelligence is simply elementary particles moving around in certain ways to process information, and no law of physics says we cannot build machines more intelligent than we are in all sorts of ways. This suggests that we have seen only the tip of the intelligence iceberg, and that there is astonishing potential to unlock the full intelligence latent in nature and use it to help humanity flourish – or flounder.
If we get this right, the upside is enormous: since everything we love about civilization is the product of intelligence, amplifying our own intelligence with AI could help us solve the hardest problems of the future. For example, why risk our loved ones dying in traffic accidents that self-driving cars could prevent, or succumbing to cancers that AI might help us cure? Why not grow productivity and prosperity through automation, and use AI to accelerate the research and development of affordable sustainable energy?
I am optimistic that we can thrive with advanced artificial intelligence as long as we win the race between the growing power of our technology and the wisdom with which we manage it. But this requires abandoning our outdated strategy of learning from mistakes. That strategy helped us win the wisdom race with less powerful technologies: we messed up with fire and invented the fire extinguisher; we messed up with cars and invented the seat belt. For more powerful technologies such as nuclear weapons or superintelligent AI, however, this is a terrible strategy – even a single mistake is unacceptable, and we need to get things right the first time. Concern about AI risk is not Luddite scaremongering – it is safety engineering. When the leaders of the Apollo program carefully thought through everything that could go wrong, they were not being alarmist; what they did ultimately led to the success of the mission.
So what can we do to keep AI beneficial in the future? Here are four steps that AI researchers broadly support:
1. Invest in AI safety research. How can we transform today's buggy and hackable computers into robust AI systems we can truly trust? It is one thing when your laptop crashes; it is quite another when the system in question controls your self-driving car or your power grid. As AI systems approach human-level ability, can we make them learn, adopt, and retain our goals? Suppose you tell your future self-driving car to take you to the airport as fast as possible, and you arrive covered in vomit and chased by helicopters, complaining that this is not what you wanted. If your car answers, "But that's exactly what you asked for," you will appreciate how hard it is for a machine to understand what we really want – and how important it is that it does. AI safety research should be a national priority, funded through the National Science Foundation and other institutions. The Equifax breach, which affected about half of all Americans, is just the latest reminder that all of our AI-powered technology can be turned against us unless we up our game.
2. Ban lethal autonomous weapons. An out-of-control AI arms race could end up weakening today's powerful nations by providing everyone, including terrorist organizations, with cheap, convenient, and anonymous assassination machines. Let's stigmatize this through an international AI arms control treaty, just as we have stigmatized and limited biological and chemical weapons while preserving the benefits of biology and chemistry.
3. Ensure that the wealth created by AI makes everyone better off. Progress in AI may produce either a society of luxurious leisure or unprecedented suffering for the unemployed masses, depending on how the wealth that AI generates is taxed and shared. Many economists believe that automation is already eroding the middle class and radicalizing our politics. Making sure that the technology of the future creates a better life for everyone is not only a moral imperative but also crucial for preserving a healthy democracy.
4. Think about the future we want. When a student walks into my office seeking career advice, I ask her where she sees herself in the future. If she were to answer, "Perhaps in a cancer ward, or in prison," I would criticize her planning strategy: I want to hear her envision an exciting future, so that we can discuss strategies for getting there while avoiding the pitfalls along the way. Yet humanity as a whole keeps making exactly this mistake: from The Terminator to The Hunger Games, our visions of the future are almost all dystopian. We need not be paralyzed by fear like paranoid hypochondriacs, but we do need to join the conversation about where technology is heading. If we don't know what we want, we're unlikely to get it.