So far, there are no overarching rules or accepted guidelines for the use of artificial intelligence. Ethical AI must be a top priority before the genie gets any further out of the bottle.

That is why I was so pleased to see Google’s CEO Sundar Pichai publish ‘AI at Google: our principles’.

Sure, I could reproduce Google’s guidelines here, but it’s a document you should all read. Let’s just say it is good stuff, and other AI companies should use it as a basis for their own declarations.

Here are the main headings:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Pichai goes as far as to list the AI applications Google will not support:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Let’s applaud Google for making a good start. It has been one of the leading AI facilitators since 2006. If you want to follow AI, you can find Google’s dedicated AI blog here.

The rest of this opinion piece could loosely be termed ‘the road to hell is paved with good intentions’. It is in no way a reflection on Google. Rather, it’s about controlling the technology that could ultimately give us the self-aware AI that science fiction calls Skynet.

Ethical AI needs global regulation with big, brutally sharp teeth

Tesla CEO and technology entrepreneur Elon Musk begged the U.S. National Governors Association to regulate artificial intelligence. He argued that “By the time we are reactive in AI regulation, it’s too late.”

Musk added, “Normally, the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.”

“With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out [with AI],” said Musk.

Google has made a good start, but its declaration suggests it can self-regulate. That would be fine if a) you trusted the company, and b) the technology were not going to change our lives so dramatically and irretrievably within our lifetime. Remember that with great power comes great responsibility.

Google at least acknowledges that regulation is inevitable. Yet it is also going full speed ahead with AI research under its Ethical AI guidelines, because it knows how regulation can stifle innovation.

Google’s Sundar Pichai and Microsoft’s Satya Nadella (see Microsoft’s AI blog here) seem more responsible and are leading the charge for ethical AI.

The basis of ethical AI laws

Anil Sabharwal, the creator of Google Photos, recently said that AI is one of the most profound technology advancements, and that it’s happening in our lifetime.