Hot on the heels of new research that puts trust in big tech ahead of the police, government and others, we thought it necessary to investigate what trust means in a digital world.
Trust is one of the most abused words in the western world. It is also one of the most overarching words that no one can adequately define. It is a ‘belief’, not an absolute; more of a guideline, really.
The dictionary says the closest synonyms are hope and faith: hope and faith that the other party will act according to a long-established set of moral or legal mores (traditions). In Australia, we expect these to be ‘western’ traditions; our multicultural community chooses to live here because of that.
If you go to the shop, you expect the shopkeeper to deal fairly and warrant the goods (but in a digital world that expectation is frequently ignored).
If you drive on the road, you expect other drivers to obey traffic laws, not drink- or drug-drive, and not use a mobile phone (but in a digital world there are no laws to obey, and ‘train wrecks’ are common).
And your definition of trust could be very different from mine (trust in a country of billions is very different from trust in a much smaller one).
I could go on, but I think you get my drift: the digital world is lawless and faceless, and anarchy prevails (and it is in the interests of cybercriminals that it does). Anyone who tells you otherwise is abusing your trust.
The digital trust problem is that you cannot look a person in the eye, judge their mettle and shake on it.
Human to human connection is the basis of trust. The digital world is, for the most part, faceless and often shady. And with the plethora of fake websites and counterfeit goods, you don’t know what is real any more.
Recently the founder of the web Sir Tim Berners-Lee launched a plan to save the web from such “political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a digital dystopia”.
Berners-Lee’s idea, a Contract for the Web, is a great start, but sadly it is going nowhere fast.
Cutting to the chase, the web is all about making rivers of money. For example, as long as FAANG et al. are advertising-driven, there is no incentive to change their Fakebook policies. Shareholders demand continued share price growth and generous dividends, which Facebook happily delivers. It’s a modern-day Robbing Hood: take from the poor (us) and deliver to the rich (shareholders).
The Edelman Trust Barometer’s 2020 report is sobering. Instead of a bright future, a majority in every developed market do not believe they will be better off in five years. WTF?
There are now two very different trust realities: the informed public, who generally trust institutions but represent only 17% of the global population, and the mass population, the remaining 83%, who do not. Distrust is driven by a growing sense of inequity and unfairness in the system. The perception is that institutions increasingly serve the interests of the few over everyone else. And there is massive distrust in technology.
In 2020, people will grant their trust based on two distinct attributes: competence (delivering on promises) and ethical behaviour (doing the right thing and working to improve society). This year’s Trust Barometer reveals that none of the central institutions (Government, Business, Media and NGOs) is seen as both competent and ethical.
But what about trust in AI, robotics and future tech?
There is that word again – trust. What do we mean? Is it Asimov’s three laws of robotics?
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Well, as we know, that did not stop HAL, the Lost in Space robot, the Terminator et al. from killing humans. In each case, the AI decided that the impediment to future happiness and prosperity was inefficient humans, and set about ridding the world of them. Put bluntly, while these rules appear reasonable, they are based on human concepts. Humans can be evil; machines do what they are programmed to do.
Algorithms (AI/ML) are incredibly literal. They pursue their ultimate goal and do exactly what they are told, ignoring every other important consideration. They cannot be trusted in the traditional sense of the word because they do not work for you. What started as AI being our tool has moved well past that: we are now AI’s tool. If you have a spare eight minutes, this is a great video explaining AI and machine learning.
GadgetGuy’s take – There can be no trust in a digital world without tearing down the existing precepts and finding new ones.
For example, if we decided that privacy was our number one consideration, could FAANG exist without a paid model? We need to ask: “What data are you asking me to share, and how else are you using that data?” Let’s stop our data being rivers of gold, or at least have complete control over its use.
If we decided that honesty and fair dealing was number two, then mechanisms to block fake websites, fake reviews, counterfeits and Scamazons would stop these operators from trading online – back to the corner flea markets or Asian back streets for ‘genuine’ knock-offs! We need a digital online ID that prevents fakes and scams!
If we decided that transparency and accountability needed to be the same as for bricks-and-mortar shops, 99.99999% of online merchants would fail. Sure, this may lead to higher prices, but you would be covered by the Australian Consumer Law (ACL), and competition would soon bring things back to reality.
Vested interests in FAANG and many governments prefer to pay lip service to the need for regulation (we are looking at you, Zuck), knowing damned well that it will never happen on a global basis. These groups obfuscate the issues and pretend to help while undermining trust.
GadgetGuy will be researching Big Tech and trust levels soon.