Forget novelty experiments, it’s time to get real with AI


Human oversight is essential to keeping AI unbiased and capable of truly accurate analysis

The world of artificial intelligence is beset by hype and hubris. But make no mistake: AI has the potential to make businesses more efficient and faster.

Yet, as transformative technologies keep getting smarter and cheaper, businesses’ return on innovation investment has declined by 27 per cent over the last five years, according to a report from Accenture. So what’s going wrong?

To Ray Eitel-Porter, who heads up Accenture Applied Intelligence in the UK, the evidence points to one problem: companies don’t know how to make the most of AI and data analytics, or how to apply them to business problems.

AI is a transformative technology, so businesses should be ambitious and think about solutions that would yield ten times the impact of traditional investments – not just a ten per cent improvement. “Using [AI] in situations where it’s going to make an incremental improvement under-utilises its potential,” Eitel-Porter says.

But that doesn’t mean that AI is the solution to every business challenge. Eitel-Porter warns of the dangers of having a solution looking for a problem. Instead, start by drawing up a list of business challenges and prioritise them by whether or not they can be addressed by using AI and the expected return on investment. “You need to start with the ones at the top of the list and think about what AI solution could be developed and what the impact of that solution would be,” says Eitel-Porter. “Fit the applications of AI to business needs, rather than vice-versa.”

Only once a business has picked a suitable challenge is it time to get down to the nuts and bolts of AI. That means starting with the data – working out the best way to bring disparate datasets together to root out new insights. Even challenges that might not immediately seem to lend themselves to AI solutions can sometimes be reframed so that AI can help.

Eitel-Porter points to the example of one project he worked on for a retail firm that was experiencing a high level of customer turnover in a loyalty scheme. The business problem was identified, and the firm had access to plenty of data about the products people were buying, how often they shopped and when they stopped being customers. By letting AI analyse the data without preconceived constraints, the firm pinpointed that when people stopped buying personal hygiene products, they were likely to stop being customers altogether. This early warning signal allowed the firm to target those customers with specific offers aimed at maintaining the relationship.
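The logic of such an early warning signal can be sketched in a few lines. This is a minimal, illustrative version only: the article doesn't describe the retailer's actual model, and the data, category name and 90-day window here are hypothetical assumptions.

```python
from datetime import date, timedelta

def churn_warnings(purchases, category="personal hygiene",
                   window_days=90, today=date(2024, 1, 1)):
    """Flag customers who used to buy items in `category` but have
    bought none within the last `window_days` -- a hypothetical
    early warning signal, not the retailer's actual model."""
    cutoff = today - timedelta(days=window_days)
    at_risk = []
    customers = {c for c, _, _ in purchases}
    for customer in customers:
        cat_dates = [d for c, cat, d in purchases
                     if c == customer and cat == category]
        # Previously active in the category, but nothing recently.
        if cat_dates and max(cat_dates) < cutoff:
            at_risk.append(customer)
    return sorted(at_risk)

# Hypothetical transactions: (customer, category, purchase date)
purchases = [
    ("alice", "personal hygiene", date(2023, 2, 1)),
    ("alice", "groceries", date(2023, 12, 20)),  # still shops, but...
    ("bob", "personal hygiene", date(2023, 12, 15)),
]
print(churn_warnings(purchases))  # alice has lapsed in the category
```

A production system would learn which categories are predictive from the data itself, rather than hard-coding one; the point is simply that the signal is hidden in routine transaction records until something goes looking for it.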

“The value is very often going to be hidden or dispersed within large amounts of data and so one of the key things that AI is looking to do is to find that hidden informational – or insight – content, within vast amounts of data or multiple types of data,” Eitel-Porter says.

Although the Google-owned AI company DeepMind is best known for building highly experimental machine learning algorithms capable of beating the world’s best Go players, the same kind of technology has already been put to work solving more mundane problems. In 2016 Google started using DeepMind’s machine learning to manage the cooling systems in its data centres, leading to a 40 per cent reduction in the total amount of energy used for cooling.

But the success of any AI experiment requires the buy-in of senior executives. Without high-level support, even the most well-intentioned projects can flounder. “You’ve got to have true believers at the most senior levels in an organisation who are leading by doing and championing the use of AI in the business,” says Eitel-Porter. “I’ve not seen any business successfully adopt AI at scale and make major impact on the business unless it has been driven by at least one very senior leader who is committed and walking the walk.”

If firms are going to move beyond AI experimentation and make it a core part of their business, Eitel-Porter says they need to be prepared to make the investment in skills, tools and processes that embed the learnings from AI into everyday business. And the next step is keeping an eye on all of those systems and retraining them to make sure they don’t develop bias. “You need processes that are going to monitor and maintain that AI product as its life cycle continues,” he says.

That means having people with the technical and ethical knowledge to understand when bias might be creeping into a system. Businesses need to keep asking tough questions and striking a balance between fairness and accuracy – a balance that Eitel-Porter isn’t sure every company has yet managed to master. “I suspect out there in business there are quite a few algorithms that wouldn’t necessarily pass muster in terms of their fairness.”
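One simple check that such monitoring might include is comparing outcome rates across groups. The sketch below computes a demographic parity gap on hypothetical decision data; it is one illustrative fairness metric among many, not a complete audit and not a method the article attributes to Accenture.

```python
def demographic_parity_gap(decisions):
    """Measure the spread in approval rates across groups -- one
    simple, illustrative fairness check (demographic parity)."""
    counts = {}
    for group, approved in decisions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + (1 if approved else 0))
    approval = {g: k / n for g, (n, k) in counts.items()}
    # A gap of 0 means identical approval rates for every group.
    return max(approval.values()) - min(approval.values())

# Hypothetical loan decisions: (group, approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(round(demographic_parity_gap(decisions), 2))  # 0.5
```

Here group A is approved 75 per cent of the time and group B only 25 per cent, a gap that would prompt exactly the kind of tough questions Eitel-Porter describes – though closing it can cost accuracy, which is why the trade-off needs human judgement.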
