Artificial Intelligence

Our Inevitable Eradication or Highest Evolution?
A discussion with futurist, Apala Lahiri Chavan

How does one forecast the future of artificial intelligence? How do you forecast what you don't know and can’t predict? After Google’s AlphaGo defeated China’s top player of the game, we know we’re closer to having a more human-like opponent who is vastly more intelligent than what the limited human brain is capable of.

Neural networks, whose workings are based on the architecture of the human brain, are being applied to search engines, apps, and other digital technologies. So what happens when a machine starts thinking just like you? It can connect the dots and see patterns, just like we do, but of course, billions of times faster. And what if several such neural networks were talking to each other? It might take a fraction of a second for all pollution on Earth to be cleaned up as well as a fraction of a second for all humankind to be destroyed. An interesting viewpoint by neuroscientist and philosopher, Sam Harris, questions the emotions that an eradication of humanity by AI elicits in us. Suddenly, the end of humankind by an epidemic, natural calamity, etc. seems so meaningful. AI, devoid of emotions, is meaningless.

If you create AI code, and throw at it different problems to solve, it will solve them, in a rather non-emotional way. Using data points and might also show independence, while using its own methods. The point at which all this becomes a bit confounding is the use of its own method. Somehow, we can have a conversation with another human being about the method they might have taken to do something new. Can we have this conversation with artificial intelligence? Perhaps, and can we be sure that it will share its method with us?

As long as we see AI as an assistant to humans, we can have a relationship of receiving services from them, sort of like a thankless and yet unpaid servant. Paying for a service isn't new but if that service doesn’t need to be paid for, would that make AI tech our slaves? So if you have an obliging AI secretary scheduling your appointments at work, isn’t that better than a human secretary who might not always be obliging, is not free of cost, and needs to sleep at night?

Now coming to our trust in AI to work for us without any supervision or administering, working in auto-mode so to speak. If AI Amy can be Apala's secretary and use her discretion in scheduling her meetings, can whole governments do the same and expect an administrator of the country to work on auto-mode? This "trust" in AI has a lot to do with the creator of the AI, such as the government or the military. 


Can we assume that none of the AI systems built will never harm us? If we give it problems to solve, the bigger problems might become simpler for us but have the capacity to cause massive damage. In Apala Lahiri Chavan's talk at the UN conference on AI for social good, she suggests modifying AI code in alignment with human values, such as compassion, honesty, integrity, into a system to eventually not govern it and leave it to its own devices. The question is – how willing are we to let it go in its own direction?

What is the relationship between AI and humans? At what point will the trust increase? And won't machines eventually manifest the true intentions of their creators?

The creators' intentions can show up as an algorithmic bias as a result of any previous data fed in, for example, arresting people from a particular lower income neighborhood again and again, and marking them as deviant. One of the biggest issues though is our current inability to predict the behavior and the method the AI will use.

The developer would code in his/her motives, but is there any guarantee that the system won't evolve in an unpredictable way? Technologists and futurists have expressed their concerns about this. Elon Musk suggests that humanity in some way needs to control or govern AI so that it is accessible to all. AI for all and OpenAI are a couple of the bodies that open access AI options for anyone to be able to build their own AI code.

Did you know that cleaning in 60% of US homes is done by robots? Once "bought" they're owned by humans and aren’t paid for their services. What if those robots finally started demanding their rights? What if they finally realized, like in series such as West World, that they were being used? One way to look at it is that the most powerful of intentions in the human brain can manifest with full strength through the deployment of AI. This can end in warfare and a dystopian future as a result of the pure logic built into AI. For example, with a command to eradicate cancer, AI might execute that order with the eradication of all human beings. Intelligence has no limits, but emotions and human values might give it direction. For example, how to distribute resources fairly? Or how to sell something whether someone wants to buy it or not and can manipulate your decision to buy.

Depending on its creators, AI could become a creepy meaningless entity as we as a destroyer or the most significant thing to ever have manifested for humankind - a supermind that will take us to a higher state of evolution.

"Supermind is a plane of perfect knowledge, that has the full, integral truth of anything…"
Sri Aurobindo, Pondicherry

Get in touch with us @futuristapala and @ka_li_ka 

http://ice.humanfactors.com