Artificial Intelligence gets a bad rap. Any time an AI appears in a movie, we can safely predict that it will turn malevolent in the second act. Twenty minutes into Westworld or I, Robot, we all knew what was coming: the AI would turn evil, forcing humans to fight it as an enemy.

The press likes headlines about the threat of strong AI. Nick Bostrom has a nuanced view on the promise and peril of artificial intelligence, but editors tag it with headlines like “Artificial Intelligence: We’re like children playing with a bomb.” Elon Musk believes in artificial general intelligence enough to invest in it, yet the media choose to focus on the negative, with headlines like “Elon Musk: Artificial intelligence is our biggest existential threat.”

In a sense, the media is absolutely right to warn us. There is indeed some cause for concern; what’s wrong is the focus. It would be very hard to argue that intelligence is inherently evil, and it follows that there is nothing inherently bad about creating something more intelligent than humans.

AI has proven massively beneficial.

Machine learning helps us remove tumors, assemble cars, and vacuum our floors. You, my reader, have probably never been harmed by anything enabled by AI, and you have likely enjoyed considerable benefit from it.

On the other hand, there are killer drones, Palantir’s AI for mass surveillance, Google’s AI to sell us things we don’t need, and increasingly sophisticated AIs to manipulate public opinion for political ends. The machines we build reflect the mixed bag of human decency and human nastiness.

It takes a village to raise an AI

When we raise a child, we cannot give any foolproof guarantee that the child will grow up to be a benevolent, caring adult. We can influence children, but not control them. Some mother’s son turned into John Wayne Gacy; others turned into Edward Snowden, Mahatma Gandhi, Adolf Hitler, and Oskar Schindler.

If a superhuman Artificial General Intelligence of godlike powers is built in the coming decades (and we must take seriously the possibility that it will be), how do we raise this machine-baby to grow into a good person? Which development methodologies, chosen now, will give it the greatest possible chance of benevolence?

We can’t use simple safeguards like Asimov’s three laws. AIs are not wind-up automatons that execute the laws we put into them, any more than children follow the rules their parents give them.

[Illustration: Christina Chung, “Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent” (The New York Times)]

And besides, trying to defend against a more-than-human intelligence with any simple tactic is like playing chess against a smarter opponent and saying, “I don’t have to worry; I’ll defend by moving this rook here.”

The truth is that we don’t know whether Artificial General Intelligence will be nasty or nice. As with rearing a child, we have a non-deterministic influence on the outcome. As David Hanson of the SingularityNET team says, the aim is to create “super-benevolent super-intelligence.” We must focus not just on making AI smarter, but also nicer.

Tools are already being built to raise benevolent AI

A child who grows up surrounded by benevolence is much more likely to be a decent adult than one who grows up in ugly circumstances. Projects like SingularityNET are already developing technologies to give AI that kind of upbringing.

Some of SingularityNET’s first applications are already beneficial. Sophia, their chief humanoid spokesperson, recently took part in an experiment in which she was programmed to display unconditional supportiveness and to encourage positive emotions in the people she talked to. The experiment was dubbed “Loving AI.”

Care of the elderly is an interesting potential use case for AI-powered robots like Sophia. If an Alzheimer’s patient asks the same question 50 times in a day, or forgets basic things, Sophia does not get impatient or frustrated. If the early years of an AGI’s life are filled with compassionate and caring tasks such as elder care, education, and cancer research, this will be a positive influence on how it matures.

Using networks to instill empathy

Let’s switch metaphors. An AI is like a child, but a network is more like a country: a group of people linked together, each still pursuing their own self-interest.

How do we make countries benevolent? Countries that are governed by one dictator or a few oligarchs tend to serve the interests of those in power, and too frequently trample on everyone else’s interests. Countries with a broader distribution of power tend to serve broader interests.

So we see that the Global Peace Index correlates pretty nicely with the Democracy Index. Nominally democratic countries do wage illegal wars, practice extraordinary rendition, and harbor corruption, but this is almost never by democratic rule; it can usually be traced to a few powerful people acting like oligarchs.

By powering their networks with decentralized blockchain technologies, AI platforms like SingularityNET are giving the power to the people, increasing the breadth of interests that AIs are incentivized to meet.

In SingularityNET, the democracy will designate a percentage of the network’s assets, say 5%, as ‘Benefit Tokens’. The democracy then votes on which tasks count as ‘good’ in the moral sense. Agents on the network get a chance to earn those tokens by doing these tasks, such as analyzing biological data to research cancer or other diseases.
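To make the mechanism concrete, here is a minimal sketch of how a benefit-token pool might work. Everything here is illustrative: the class name, the simple-majority vote, the 5% figure, and the reward flow are assumptions for the sake of the example, not SingularityNET’s actual protocol.

```python
# Hypothetical sketch of a benefit-token mechanism.
# NOT SingularityNET's actual implementation; names and rules are illustrative.

class BenefitPool:
    def __init__(self, total_tokens, benefit_fraction):
        # The democracy sets aside a fraction of network assets (e.g. 5%).
        self.pool = total_tokens * benefit_fraction
        self.approved_tasks = set()

    def vote_task_as_beneficial(self, task, votes_for, votes_against):
        # A simple majority vote marks a task as morally 'good'.
        if votes_for > votes_against:
            self.approved_tasks.add(task)

    def reward(self, agent_balances, agent, task, amount):
        # Agents earn benefit tokens only for approved tasks,
        # and only while the pool still has tokens to pay out.
        if task in self.approved_tasks and amount <= self.pool:
            self.pool -= amount
            agent_balances[agent] = agent_balances.get(agent, 0) + amount
            return True
        return False

# Usage: a 1,000,000-token network sets aside 5% as benefit tokens.
pool = BenefitPool(total_tokens=1_000_000, benefit_fraction=0.05)
pool.vote_task_as_beneficial("cancer-data-analysis", votes_for=61, votes_against=39)

balances = {}
pool.reward(balances, "agent-42", "cancer-data-analysis", 1000)
```

The key design point the sketch captures is that *what counts as beneficial* is decided collectively by vote, not by any single operator, while the token pool caps how much can be paid out for such work.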


Posted by Manu Belmonte

Senior Editor at The Credible Hulk Magazine and writer at my personal blog learninghayek.wordpress.com
