Ethical AI needed now (opinion)


So far, there are no overarching rules or generally accepted guidelines for the use of artificial intelligence (AI). Ethical AI must be a top priority before the genie gets any further out of the bottle.

That is why I was so pleased to see Google’s CEO Sundar Pichai publish ‘AI at Google: our principles’.

Sure, I could reproduce Google’s guidelines here, but it is a document you should all read. Let’s just say it is good stuff, and other AI companies should use it as a basis for their own declarations.

Here are the main headings:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Pichai goes as far as to list AI applications Google will not support:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Let’s applaud Google for making a good start. Google has been one of the leading AI facilitators since 2006. If you want to follow AI, you can find its dedicated AI blog here.

The rest of this opinion piece could loosely be termed ‘the road to hell is paved with good intentions’. It is in no way a reflection on Google. Rather, it is about controlling the technology that could ultimately give us a self-aware AI like science fiction’s Skynet.

Ethical AI needs global regulation with big, brutally sharp teeth

Tesla CEO and technology entrepreneur Elon Musk begged the U.S. National Governors Association to regulate AI. He argued that “By the time we are reactive in AI regulation, it’s too late.”

Musk added, “Normally, the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.”

“With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out [with AI],” said Musk.

Google has made a good start, but its declaration suggests it can self-regulate. That is fine if a) you trust the company, and b) the technology were not going to change our lives so dramatically and irretrievably in this lifetime. Remember that with great power comes great responsibility.

Google at least acknowledges that regulation is inevitable. It is also going full speed ahead with AI research under its own ethical AI guidelines, knowing how regulation can stifle innovation.

Google’s Sundar Pichai and Microsoft’s Satya Nadella (see Microsoft’s AI blog here) seem more responsible than most and are leading the charge for ethical AI.

The basis of ethical AI laws

Anil Sabharwal, the creator of Google Photos, recently said that AI is one of the most profound technology advancements, and it is happening in our lifetime.

Applying AI:

  • Makes goods and services more useful
  • Helps us to be more efficient and do much more
  • Can solve humanity’s impossibly big challenges

AI is the science of making things smart. Machine learning (ML) is about making machines that learn to be smarter without programming rules.

Three words in that definition are critical: ‘without programming rules.’
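
To make ‘without programming rules’ concrete, here is a minimal Python sketch (assuming scikit-learn is installed; the tiny dataset is invented purely for illustration). It contrasts a rule we program by hand with a rule a machine learns from examples alone:

  # A hand-programmed rule vs. a learned one. Data invented for illustration.
  from sklearn.tree import DecisionTreeClassifier

  # Each sample: [hours_of_sunshine, rainfall_mm]; label 1 = good picnic day.
  X = [[8, 0], [7, 1], [2, 20], [1, 35], [6, 2], [0, 40]]
  y = [1, 1, 0, 0, 1, 0]

  # Traditional programming: we write the rule ourselves.
  def programmed_rule(sunshine: int, rain: int) -> int:
      return 1 if sunshine > 4 and rain < 5 else 0

  # Machine learning: nobody writes the rule; the model infers it from examples.
  model = DecisionTreeClassifier().fit(X, y)

  print(programmed_rule(5, 3))       # 1, because we said so
  print(model.predict([[5, 3]])[0])  # 1, because the machine worked it out

Nobody wrote that second rule; the machine inferred it from the examples. That is precisely why the next question is about rules for the machines themselves.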

If we are to have autonomous robots, transport, and financial systems, and to have AI help us in almost every basic daily function, we need rules.

When science fiction writer Isaac Asimov introduced the Three Laws of Robotics in 1942, he envisioned control as follows.

A robot:

  1. May not injure a human being or, through inaction, allow a human being to come to harm.
  2. Must obey the orders given it by human beings, except when such orders would conflict with the previous law.
  3. Must protect its own existence as long as such protection does not conflict with the previous two laws.

Asimov never imagined the internet, nor the privacy issues that would become the next big social crusade (#DeleteFacebook). We need to overlay privacy rules: AI may not repeat anything it has heard, learned, or seen without its owner’s permission.
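
As a thought experiment, here is what Asimov’s laws plus that privacy overlay might look like as code: a list of vetoes checked in strict priority order, with privacy deliberately ranked above obedience. Every name in this Python sketch is hypothetical; it illustrates prioritised rule-checking, not a real safety system.

  # Hypothetical sketch: Asimov's three laws plus a privacy overlay,
  # expressed as vetoes evaluated in strict priority order.
  from dataclasses import dataclass

  @dataclass
  class Action:
      harms_human: bool = False
      allows_harm_by_inaction: bool = False
      ordered_by_human: bool = True
      endangers_robot: bool = False
      reveals_private_data: bool = False
      owner_consented: bool = False

  def permitted(action: Action) -> tuple[bool, str]:
      # Law 1: may not injure a human, or allow harm through inaction.
      if action.harms_human or action.allows_harm_by_inaction:
          return False, "vetoed by Law 1 (human harm)"
      # Privacy overlay: never repeat what was heard, learned or seen
      # without the owner's permission.
      if action.reveals_private_data and not action.owner_consented:
          return False, "vetoed by privacy overlay (no owner consent)"
      # Law 2: must obey human orders (unless vetoed above).
      if not action.ordered_by_human:
          return False, "vetoed by Law 2 (not a human order)"
      # Law 3: self-preservation has the lowest priority.
      if action.endangers_robot:
          return False, "vetoed by Law 3 (self-preservation)"
      return True, "permitted"

  print(permitted(Action(reveals_private_data=True)))
  # -> (False, 'vetoed by privacy overlay (no owner consent)')

The design point is the ordering: each rule can only be overridden by the rules above it, which is exactly how Asimov framed his laws.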

Asimov never imagined competing world powers using AI against each other. What is good for one may severely damage another. We need safeguards against AI-driven cybercrime, terrorism, spying, and financial or electoral manipulation. Who defines the ethical boundaries, and who enforces them?

Asimov never imagined the vast AI cloud. His robots were self-aware and learned from their own experiences. The vast AI clouds already contain the answers to life, the universe, and everything (no, it is not 42), but they have not yet learned what to do with that vast bank of intelligence and information. AI left unchecked will make logical, but not necessarily practical, decisions for the common good. Many sci-fi books posit that if AI decided humans were the biggest threat to Gaia, Mother Earth, the planet would be better off without them.

Brilliant as Asimov was, he never imagined a world where everything relies on AI.

I suspect we need several levels of AI.

  • A base level for bulletproof basic services that provide utilities, transport, and precision to our day.
  • The next level may be autonomous personal helpers, and these need a ‘safe word’ to ensure that if they depart from the norm, they cannot harm (a rough sketch of such a kill switch follows this list).
  • The top level is about science, wonder, thinking, and planning, and it must embody the full gamut of human ethics so that it cannot harm. But who defines global good?
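
To illustrate the ‘safe word’ idea in that second level, here is a rough, hypothetical Python sketch: a wrapper that halts an autonomous helper the moment the owner utters the safe word or the helper attempts anything outside its approved envelope. All names, actions, and the safe word itself are invented for illustration.

  # Hypothetical 'safe word' kill switch for an autonomous helper.
  class SafeWordHalt(Exception):
      """Raised to stop the helper immediately and permanently."""

  SAFE_WORD = "aubergine"  # known only to the owner

  class GuardedHelper:
      def __init__(self, allowed_actions: set[str]):
          self.allowed_actions = allowed_actions
          self.halted = False

      def hear(self, utterance: str) -> None:
          # The owner can stop the helper at any time with the safe word.
          if SAFE_WORD in utterance.lower():
              self.halted = True
              raise SafeWordHalt("owner invoked the safe word")

      def act(self, action: str) -> str:
          if self.halted:
              raise SafeWordHalt("helper is halted")
          # Departing from the approved envelope trips the switch.
          if action not in self.allowed_actions:
              self.halted = True
              raise SafeWordHalt(f"'{action}' departs from the norm")
          return f"performing: {action}"

  helper = GuardedHelper({"make tea", "vacuum floor"})
  print(helper.act("make tea"))    # performing: make tea
  # helper.act("open front door")  # would raise SafeWordHalt and halt the helper

The key design choice is that the switch is one-way: once tripped, by owner or by deviation, the helper stays halted until a human intervenes.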

Finally, the penalty for misuse of AI must be severe and final. There can be no wiggle room for claiming that ‘the AI ate my homework’.

Proponents of AI say it is simply a tool, a.k.a. ‘guns don’t kill people; people kill people’.

It is in their interests to avoid the big questions we must face sooner rather than later.

Ironically, Elon Musk was not asking us to slow down; he was asking us to put rules in place so that we can speed up. He said that without clear guidance, risk aversion will stop us from fully benefiting from these important innovations.

I say we need to work out the rules for AI before misuse hits the headlines and provokes the typical, ill-considered, knee-jerk reaction.