
Debunking myths that might be holding back AI adoption

Artificial intelligence (AI) could contribute as much as $15.7tn to the world economy, Bloomberg expects. Of that figure, an estimated $6.6tn will come from businesses becoming more efficient through automation and freeing their workforces for other tasks. But the lion's share ($9.1tn) will come from the goods and services these companies are able to produce as a result, and which consumers will purchase.
Mark Nasila, chief analytics officer – FNB Chief Risk Office | image supplied

This combination of improved efficiency and consumer spending isn't limited to manufacturing or fast-moving consumer goods (FMCG). Financial services providers like banks and insurers stand to reap the benefits of AI too, assuming they successfully embrace, adopt and deploy it.

According to SPD Group, financial services companies' spending on AI will reach $12bn by the end of this year alone. Eventually, though, AI in financial services is expected to become a trillion-dollar sector. Two-thirds of that value will be generated by HR, sales, risk and marketing departments deploying AI and gleaning its benefits, with the remaining third accounted for by high-level AI services, primarily in the risk sector.

Even though the opportunity is vast and seemingly undeniable, and although most firms are investing in AI initiatives, a mere 14.6% of financial services firms have actually deployed AI capabilities in production, according to Forbes. What accounts for this disparity between opportunity and execution? One of the key problems is the number of myths that surround AI.

Managing myths

AI is surfacing in all sorts of sectors, from personal digital assistants and website chatbots to self-driving cars. Each of these systems, however, is reliant on carefully curated sources of information and on algorithms that transform data into actionable insights. In many cases, industry practitioners who seek to implement AI look only at the outcomes other deployments achieve and don't consider how much has to happen behind the scenes to reach those outcomes successfully and consistently, which in turn leads to unrealistic expectations and despondency.

Obakeng Moepya | image supplied

AI is inevitably going to affect the jobs market, but not in the manner many people fear. Pessimists argue that AI will destroy entire segments of the workforce and render people redundant, but what's more likely, as we've seen in the risk management space, is that AI will free up people's time by assuming repetitive tasks. These tasks are time-consuming for humans but easy for computers: analysing past expenditure patterns to unearth and flag anomalies, for example. While tedious for a human, pattern recognition is something AI excels at, and by handing it this duty, people can focus on higher-level aspects of risk assessment that require more nuanced judgment.
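As a rough illustration of that kind of pattern recognition, the sketch below flags transactions that deviate sharply from a customer's historical spending. It's a minimal, illustrative approach: the threshold, data and function names are assumptions, and production systems would use far richer models.

```python
# A minimal, illustrative anomaly check: flag amounts that sit far
# outside the historical spending pattern. The threshold and data
# are assumptions for demonstration, not a production rule.
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """True if `amount` lies more than `threshold` standard
    deviations from the mean of past spending."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > threshold * sigma

past = [120.0, 95.5, 130.2, 110.8, 105.0, 98.4]  # illustrative history
print(is_anomalous(past, 4_500.0))  # True: far outside the usual pattern
print(is_anomalous(past, 112.0))    # False: consistent with history
```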

Contrary to many people's perceptions, AI will create new roles, some of which don't even exist yet. As AI evolves and becomes easier to implement, and as harnessing data becomes applicable to a wider range of industries, new jobs will be created. These employment opportunities will revolve around the design, distribution, deployment and management of AI systems.

Another oft-repeated myth is that AI is neutral or unbiased. AI, however, is shaped by those who create it and by the inputs it's taught to value most. If anything, AI tends to reflect and reinforce the biases or prejudices of its creators. Being aware of this and guarding against it is crucial as we entrust ever more decisions to AI. Ethics in AI is a growing field and a prime example of new positions being created in response to the technology and its ongoing evolution.
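One simple way such bias can surface is in outcome rates that differ across groups. The hypothetical sketch below computes a model's approval rate per group; the data and the demographic-parity framing are illustrative assumptions, and real fairness audits rely on several complementary metrics.

```python
# A hypothetical fairness spot-check: compare a model's approval
# rate across groups. Demographic parity is only one signal among
# many, and the sample data here is purely illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns the approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
print(approval_rates(sample))  # group A ~0.67 vs group B ~0.33: worth investigating
```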

Interoperability and interdependence

Like data literacy, an understanding of AI's possibilities and pitfalls is going to become increasingly important for all members of an organisation, not only those working in IT. To this end, good governance of AI is about evaluating and monitoring its biases, effectiveness, risk and return on investment. Data literacy also helps ensure AI projects are socialised and coordinated across the company rather than siloed.

Working across divisions makes security even more important. It's not only about ensuring that competitors don't gain access to trade secrets, but also about securing the processes by which users interact with AI. Good systems will include strong authentication, follow the principle of least privilege, and be resistant to modifications like data injections, deletions and poisoning.
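One basic defence against such tampering, sketched below with an assumed file name and a placeholder digest, is verifying that a curated dataset hasn't changed between approval and training. Note that this guards against modification in storage, not against data that was poisoned at the source.

```python
# A minimal sketch: refuse to train if the curated dataset's hash no
# longer matches the digest recorded at approval time. The file name
# and digest are hypothetical placeholders.
import hashlib

def file_digest(path):
    """SHA-256 digest of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "0f1e2d3c..."  # hypothetical digest recorded at dataset sign-off

if file_digest("training_data.csv") != EXPECTED:
    raise RuntimeError("Dataset changed since approval; refusing to train.")
```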

Measure to evaluate

Another potential point of resistance to AI adoption arises when trying to measure its success or failure, and the value it adds to (or subtracts from) a business or individual business units.

When undertaking an AI project, it's thus essential to set realistic goals for what you want to achieve with it, and then to consider how to measure whether or not those goals have been met. If, for instance, the goal is to reduce instances of fraudulent transactions, it's important to have a baseline at the outset against which the AI's performance can be compared.
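Framed as code, and with purely illustrative figures, that baseline comparison might look something like this:

```python
# Illustrative baseline comparison: fraud rate before AI deployment
# versus after. All counts and volumes are made-up example figures.
def fraud_rate(fraud_count, total_transactions):
    return fraud_count / total_transactions

baseline = fraud_rate(fraud_count=480, total_transactions=1_000_000)  # pre-deployment
current = fraud_rate(fraud_count=310, total_transactions=1_050_000)   # post-deployment

reduction = (baseline - current) / baseline
print(f"Fraud rate: {baseline:.4%} -> {current:.4%} ({reduction:.1%} lower)")
```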

This measurement and evaluation process may vary between projects, divisions, or organisations, and it may require extended testing periods for assessments to even be possible, but it’s as essential to do as it is to temper expectations. AI can’t solve every problem or alleviate every pain point.

In fact, when it comes to some use cases, such as automated interactions with customers who are contending with complex, edge-case issues for which they need support, it may not be the right solution at all.

However, a chatbot may be a very useful tool for quotidian queries, in turn reducing the burden on call centres. The challenge lies in accurately assessing which business processes AI is adept at, and which it shouldn’t be entrusted with. Dismissing AI completely is just as foolhardy an exercise as trying to apply it to everything.
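To make that distinction concrete, here is a hypothetical triage rule: the bot handles recognised routine topics, and everything else escalates to a person. The keyword list is an assumption; real deployments would use intent classification rather than string matching.

```python
# A hypothetical triage rule: route recognised routine topics to the
# bot and escalate everything else. Keywords are illustrative; real
# systems would rely on intent classification, not string matching.
ROUTINE_TOPICS = {"balance", "statement", "opening hours", "reset password"}

def route(query):
    """Return which channel should handle the query."""
    text = query.lower()
    if any(topic in text for topic in ROUTINE_TOPICS):
        return "chatbot"
    return "human agent"

print(route("How do I reset password?"))                        # chatbot
print(route("I was double-billed after a chargeback dispute"))  # human agent
```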

About Mark Nasila, Obakeng Moepya

Mark Nasila is chief analytics officer – FNB Chief Risk Office, and Obakeng Moepya is part of Isazi Consulting.