The world is excited about Artificial Intelligence (AI). For the last 5,000 years, starting with the invention of the wheel, machines have helped humans overcome existential threats despite our comparatively small bodies.
Human civilization has advanced because of our skill at taking machines to the next level; machines did the kinds of work that humans could not. As a result, we became city dwellers living among high-rises. Mining became possible. So did drilling for oil in the ocean.
As we well know, human ingenuity has been at play throughout, helping us reason our way past threats and adversity.
Now we are taking our intelligence to the next level by making machines more intelligent. That is the ultimate promise of AI: the hope that machines will become smarter than we are.
Will that be good or bad?
What have we learned by becoming super trainers of machines?
Tesla has the most real-life experience with AI, having put autonomous cars on the streets with human drivers aboard to train them. The starting point of self-driving cars is AI algorithms built on neural networks, loosely modeled on those found in the brain.
Say the goal is to have a neural network recognize photos that contain a dog. The concept of neural networks entails that the machine is not explicitly told what "makes" a dog. When the computer sees something furry, with a snout and four legs, it may conclude that it is a dog. The machine is then shown many images of dogs: more data. With training by feedback, in which the machine is told whenever it makes a mistake, the computer learns from its own errors and begins to recognize dogs more reliably.
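The learn-from-mistakes idea above can be sketched in a few lines of code. The snippet below is a minimal illustration, not Tesla's method: a single artificial neuron (a perceptron, the simplest ancestor of a neural network) sees invented feature vectors such as "furry, has a snout, four legs, barks" and adjusts its internal weights only when it misclassifies an example. All features, animals, and labels here are made up for the illustration.

```python
# Toy "learn from mistakes" sketch: a perceptron classifying dog vs. not-dog.
# Features per example: [furry, has_snout, four_legs, barks]; label 1 = dog.
examples = [
    ([1, 1, 1, 1], 1),  # dog
    ([1, 1, 1, 0], 1),  # quiet dog
    ([1, 0, 1, 0], 0),  # cat: furry, four legs, no snout
    ([0, 0, 0, 0], 0),  # fish
    ([1, 1, 1, 1], 1),  # another dog
    ([0, 1, 0, 0], 0),  # snake with a snout-like head
]

weights = [0.0, 0.0, 0.0, 0.0]  # one weight per feature, learned from errors
bias = 0.0

def predict(features):
    """Return 1 ("dog") if the weighted evidence crosses the threshold."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# Training: the machine is only "told when it makes a mistake" and then
# nudges its weights toward the correct answer (the perceptron rule).
for _ in range(200):  # repeated passes over the data
    mistakes = 0
    for features, label in examples:
        error = label - predict(features)
        if error != 0:  # a mistake: learn from it
            mistakes += 1
            bias += error
            weights = [w + error * x for w, x in zip(weights, features)]
    if mistakes == 0:  # a full pass with no mistakes: done learning
        break

print(predict([1, 1, 1, 0]))  # furry, snout, four legs -> 1 (dog)
print(predict([1, 0, 1, 0]))  # furry, four legs, no snout -> 0 (not a dog)
```

Notice that nothing in the code states what "makes" a dog; the weights that encode the answer emerge entirely from being corrected after mistakes, which is the point of the passage above. Real image-recognition networks work on raw pixels with millions of learned weights, but the feedback loop is the same in spirit.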
Tesla has hundreds of thousands of hours of experience, based on more than ten years of data collected from training autonomous cars. That experience is why, in 2017, Elon Musk warned a bipartisan gathering of U.S. governors that AI is a "fundamental risk to the existence of human civilization."
Elon Musk now wants to focus on the Tesla Bot instead, incorporating Tesla's automotive artificial intelligence and Autopilot technologies. A car, at highway speeds, is a Bot on steroids; he thinks the Bot should be perfected first.
Major Hurdles to AI:
1. Humans know only what decision an autonomous car has made, not why. AI does not lend itself to reverse engineering: the reasoning inside a trained network cannot be traced back step by step. Letting machines learn on the fly, on their own, is dangerous in life-and-death situations, and we cannot predict what they might do in the future.
2. Machines, by definition, do not have common sense, which comes from lived experience. That broad set of rules of thumb is all but impossible to encode into machines, yet common sense is essential for robots to operate usefully and safely in human environments. When a deer jumps in front of an autonomous car, the algorithm may not know what to do. It is even harder to teach machines to make moral or ethical decisions.
3. Intelligent machines will not know not to kill the human species that helps them survive. Machines will never evolve the way organisms do under natural evolution.
With current technology, Tesla's autonomous vehicles could work for delivery trucks, but they would need infrastructure. Such trucks could, for example, use dedicated lanes with barriers, perhaps only at night. Stations along the way would let drivers hop in for safe last-mile delivery to warehouses inside the cities. During commute hours, a similar concept could be applied to carpoolers. This may not even require the expansion of freeways.
Similarly, smaller walking or even flying robots for home deliveries sound promising. On city streets, they could travel in lanes dedicated to them, just like bike lanes. Integrating the concept with delivery hubs on major street corners may be an even more practical solution.
With more people working remotely and fewer delivery trucks on highways and city streets, AI can help us dramatically reduce our carbon footprint and protect the environment.
Musk is also planning to introduce a home robot as a personal valet. Some people think it will eliminate hired household help: another example of machines replacing human labor.
Last but not least, regulatory bodies need to start building expertise in AI, and quickly. When Facebook CEO Mark Zuckerberg testified before a joint hearing of Congress in 2018 about the steps the social network was taking in light of Cambridge Analytica's connection to interference in the 2016 presidential election, it was alarming to see how little the older legislators knew about social media. This was more than twelve years after Facebook opened for general business beyond university campuses.
If AI development proceeds without regulatory oversight, we will pay a catastrophic price when it is applied to warfare. According to Bill Gates, AI is like nuclear energy: "both promising and dangerous."