
The ethics of AI: Tool, partner or master?

Mention the words 'Artificial Intelligence' around your family dinner table and talk inevitably turns to a Terminator-like future, where machines become our masters and quantum computers dictate our lives. Many people still think of AI as a semi-futuristic concept, the stuff of science fiction shows and YouTube videos of clever robots.
Sorin Cheran is technology strategist at Hewlett Packard Enterprise

However, if you live in a modern, connected society, chances are your life is already touched by AI in some way. Whether it is your GPS determining the best route home, a music app suggesting playlists based on your listening habits, or targeted adverts appearing on social media, AI sits behind it all, an entrenched part of daily life.

So where is AI headed, and how do we prevent a world where mankind is ruled by machines?

Evolution of AI

AI has developed in fits and starts over the past half century, with periods of excitement and development interspersed with frequent ‘AI winters’, when governments pulled funding and exploration ceased. These innovation troughs were typically the result of complex and somewhat over-reaching goals: projects centred on neural networks, fuelled by the idea of perfecting AI consciousness to create human-equivalent machines.

Successful AI projects focus on more realistic, narrower objectives, such as solving specific problems within specific verticals.

Despite the growing use of AI, the world is still in a period of relative discovery, researching the possibilities of deep learning and AI. Over the next five to ten years, AI investments will likely be more siloed, focused on solving problems in business, automating processes and simplifying daily life.

We are still far from creating super-intelligent, human-equivalent AI. Until then, it is critical to examine the ethics of AI, so we can avoid the pitfalls of a machine-led society.

Code of ethics

AI is succeeding in many areas; however, the process of learning is lengthy and mistakes are being made. Machines are learning, it’s true, but the data we are feeding them is flawed, and flawed data can result in unforeseen dangers such as bias.

We need to focus on how to make AI safe, how to avoid the misuse of autonomous machines, and how to avoid bias within AI solutions.

Robust AI

IT systems are often imperfect because they aren’t verified, validated and controlled properly. With AI, as with any IT programming project, the tiniest typographical error can have catastrophic consequences. There have been incidents of autonomous rockets exploding because of a single missed hyphen, and of AI-driven marketing campaigns failing because of an incorrect or misplaced instruction.

There is still no complete replacement for human checks to verify data, minimise errors and ensure safer AI.

The autonomy debate

Autonomous machines are both an exciting and terrifying prospect. Driverless cars are capable of preventing accidents and making our roads safer but, as described above, a single error can create more chaos than we already have.

In the event of an autonomous car accident, who holds the blame? Is it the manufacturer, the organisation that bought or operates the machine, the people who built the machine or its AI, or the machine itself? Placing blame on an AI machine, such as an autonomous car, means assigning rights to the machine.

It’s clear that laws and governance standards need to be created to clarify the roles and responsibilities of the people who build and use AI, as well as of the AI devices themselves. Autonomy also requires a human level of judgement: a safety check must be in place so that reactions aren’t triggered by accidental actions, such as an autonomous weapon firing because an alarm was accidentally set off.
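
To make that idea concrete, here is a minimal sketch of such a safety gate, assuming (purely for illustration) that an autonomous action requires both corroborating sensor readings and explicit human approval. Every name and value in it is hypothetical, not a description of any real system:

    # Minimal sketch of a human-in-the-loop safety gate for an
    # autonomous action. All names and data are hypothetical.

    def sensors_agree(readings, threshold=2):
        # Require at least `threshold` independent sensors to confirm
        # the trigger, so one faulty alarm cannot act on its own.
        return sum(1 for r in readings if r) >= threshold

    def execute_if_safe(readings, human_approved):
        # Both conditions must hold: corroborated sensor data AND an
        # explicit human decision. Either one alone is not enough.
        if sensors_agree(readings) and human_approved:
            return "action executed"
        return "action withheld"

    # A single tripped alarm, with no human sign-off, does nothing:
    print(execute_if_safe([True, False, False], human_approved=False))
    # -> action withheld

The design point is simply that no single signal, human or machine, should be able to trigger an irreversible action on its own.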

Eliminating bias

There was recently an incident in which an AI facial recognition program failed to recognise a woman of colour, yet had no trouble distinguishing white males. And what happens when an AI program designed to help parole boards predict a criminal’s chances of re-offending gives the wrong prediction because it was trained on biased data?

This type of bias needs to be worked out of AI; with flawed data, however, that is not an easy task. Humans are still required to verify the data and ensure that the results are accurate and free of bias.
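
What might that human verification look like in practice? One simple check is to compare a model’s error rate across demographic groups before it is deployed. The sketch below assumes a small set of audit records labelled by group; the groups, labels and numbers are all hypothetical:

    # Minimal sketch: auditing a model's error rate per demographic
    # group. All data and names here are hypothetical.
    from collections import defaultdict

    def error_rate_by_group(records):
        # records: iterable of (group, predicted_label, true_label)
        errors = defaultdict(int)
        totals = defaultdict(int)
        for group, predicted, actual in records:
            totals[group] += 1
            if predicted != actual:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Hypothetical audit data: a model that performs far worse on one group.
    audit = [
        ("group_a", "match", "match"), ("group_a", "match", "match"),
        ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
        ("group_b", "no_match", "match"), ("group_b", "match", "match"),
        ("group_b", "no_match", "match"), ("group_b", "no_match", "match"),
    ]

    for group, rate in sorted(error_rate_by_group(audit).items()):
        print(f"{group}: {rate:.0%} error rate")
    # group_a: 0% error rate
    # group_b: 75% error rate

A gap that large between groups is exactly the kind of disparity a human reviewer should catch, and correct, before the system ever reaches production.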

The future

It’s easy to see that we have years of work ahead before we are capable of creating an AI human equivalent that is robust, free of bias and capable of rational autonomy.

In the meantime, we are building on AI, working towards systems that make life easier and perform the tasks that don’t require much human intervention. There is still a huge concern around the potential for unemployment: jobs being replaced by AI. After all, Gartner predicted job losses of an estimated 1.8 million by 2020. Fear of job losses could push AI back into another winter.

However, the same prediction also states that AI will create more than 2.3 million jobs by 2020, far outweighing the projected losses: a net gain of roughly half a million jobs. Many people will be retrained and reskilled to move into AI careers, giving them opportunities to become part of the future.

Technology, and AI in particular, can solve so many problems. It has the potential to address issues such as climate change, population control, food creation, and even grave diseases such as cancer and HIV. We need to embrace it, or risk going backwards, but we need to do so with caution and preparation: an ethical approach to AI.

About Sorin Cheran

Sorin Cheran is technology strategist at Hewlett Packard Enterprise