Wednesday, 17 June 2020 05:32

Algorithms and armies: Businesses are finding AI hard to adopt


“Facebook: The Inside Story”, Steven Levy’s recent book about the American social-media giant, paints a vivid picture of the firm’s size, not in terms of revenues or share price but in the sheer amount of human activity that thrums through its servers. Some 1.73bn people use Facebook every day, writing comments and uploading videos. An operation on that scale is so big, writes Mr Levy, “that it can only be policed by algorithms or armies”.

In fact, Facebook uses both. Human moderators work alongside algorithms trained to spot posts that violate either an individual country’s laws or the site’s own policies. But algorithms have many advantages over their human counterparts. They do not sleep, or take holidays, or complain about their performance reviews. They are quick, scanning thousands of messages a second, and untiring. And, of course, they do not need to be paid.

And it is not just Facebook. Google uses machine learning to refine search results, and target advertisements; Amazon and Netflix use it to recommend products and television shows to watch; Twitter and TikTok to suggest new users to follow. The ability to provide all these services with minimal human intervention is one reason why tech firms’ dizzying valuations have been achieved with comparatively small workforces.

Firms in other industries would love that kind of efficiency. Yet the magic is proving elusive. A survey carried out by Boston Consulting Group and MIT polled almost 2,500 bosses and found that seven out of ten said their AI projects had generated little impact so far. Two-fifths of those with “significant investments” in AI had yet to report any benefits at all.

Perhaps as a result, bosses seem to be cooling on the idea more generally. Another survey, this one by PwC, found that the share of bosses planning to deploy AI across their firms fell to 4% in 2020, from 20% the year before. The share saying they had already implemented AI in “multiple areas” fell from 27% to 18%. Euan Cameron at PwC says that rushed trials may have been abandoned or rethought, and that the “irrational exuberance” that has dominated boardrooms for the past few years is fading.

There are several reasons for the reality check. One is prosaic: businesses, particularly big ones, often find change difficult. One parallel from history is with the electrification of factories. Electricity offered big advantages over steam power in terms of both efficiency and convenience. Most of the fundamental technologies had been invented by the end of the 19th century. But electric power nonetheless took more than 30 years to become widely adopted in the rich world.

Reasons specific to AI exist, too. Firms may have been misled by the success of the internet giants, which were perfectly placed to adopt the new technology. They were already staffed by programmers, and were already sitting on huge piles of user-generated data. The uses to which they put AI, at least at first—improving search results, displaying adverts, recommending new products and the like—were straightforward and easy to measure.

Not everyone is so lucky. Finding staff can be tricky for many firms. AI experts are scarce, and command lavish salaries. “Only the tech giants and the hedge funds can afford to employ these people,” grumbles one senior manager at an organisation that is neither. Academia has been a fertile recruiting ground.

A more subtle problem is that of deciding what to use AI for. Machine intelligence is very different from the biological sort. That means that gauging how difficult machines will find a task can be counter-intuitive. AI researchers call the problem Moravec’s paradox, after Hans Moravec, a Canadian roboticist, who noted that, though machines find complex arithmetic and formal logic easy, they struggle with tasks like co-ordinated movement and locomotion which humans take completely for granted.

For example, almost any human can staff a customer-support helpline. Very few can play Go at grandmaster level. Yet Paul Henninger, an AI expert at KPMG, an accountancy firm, says that building a customer-service chatbot is in some ways harder than building a superhuman Go machine. Go has only two possible outcomes—win or lose—and both can be easily identified. Individual games can play out in zillions of unique ways, but the underlying rules are few and clearly specified. Such well-defined problems are a good fit for AI. By contrast, says Mr Henninger, “a single customer call after a cancelled flight has…many, many more ways it could go”.

What to do? One piece of advice, says James Gralton, engineering director at Ocado, a British warehouse-automation and food-delivery firm, is to start small, and pick projects that can quickly deliver obvious benefits. Ocado’s warehouses are full of thousands of robots that look like little filing cabinets on wheels. Swarms of them zip around a grid of rails, picking up food to fulfil orders from online shoppers.

Ocado’s engineers used simple data from the robots, like electricity consumption or torque readings from their wheel motors, to train a machine-learning model to predict when a damaged or worn robot was likely to fail. Since broken-down robots get in the way, removing them for pre-emptive maintenance saves time and money. And implementing the system was comparatively easy.

The robots, warehouses and data all existed already. And the outcome is clear, too, which makes it easy to tell how well the AI model is working: either the system reduces breakdowns and saves money, or it does not. That kind of “predictive maintenance”, along with things like back-office automation, is a good example of what PwC approvingly calls “boring AI” (though Mr Gralton would surely object).
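For illustration, a minimal sketch of such a predictive-maintenance classifier, written in Python with scikit-learn. The telemetry fields, synthetic data and alert threshold below are invented for the example; Ocado has not published its model.

# A minimal sketch (not Ocado's actual system): train a classifier on robot
# telemetry to predict imminent failure. Field names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Each row: [mean power draw (W), peak wheel-motor torque (Nm), hours since last service]
# Label: 1 if the robot failed within the following week, 0 otherwise.
n = 5000
X = np.column_stack([
    rng.normal(120, 15, n),     # power draw
    rng.normal(4.0, 0.8, n),    # wheel-motor torque
    rng.uniform(0, 500, n),     # hours since last service
])
# Synthetic labels: worn robots (high torque, long since service) fail more often.
p_fail = 1 / (1 + np.exp(-(0.8 * (X[:, 1] - 4.0) + 0.004 * (X[:, 2] - 250))))
y = rng.random(n) < p_fail

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Robots whose predicted failure probability exceeds a threshold are pulled
# off the grid for pre-emptive maintenance before they break down in place.
at_risk = model.predict_proba(X_test)[:, 1] > 0.7
print(f"{at_risk.sum()} of {len(X_test)} robots flagged for maintenance")

The appeal of such projects is exactly what the article describes: the inputs already exist, and the outcome (fewer breakdowns, or not) is easy to measure.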

There is more to building an AI system than its accuracy in a vacuum. It must also do something that can be integrated into a firm’s work. During the late 1990s Mr Henninger worked on Fair Isaac Corporation’s (FICO) “Falcon”, a credit-card fraud-detection system aimed at banks and credit-card companies that was, he says, one of the first real-world uses for machine learning. As with predictive maintenance, fraud detection was a good fit: the data (in the form of credit-card transaction records) were clean and readily available, and decisions were usefully binary (either a transaction was fraudulent or it wasn’t).

The widening gyre

But although Falcon was much better at spotting dodgy transactions than banks’ existing systems, he says, it did not enjoy success as a product until FICO worked out how to help banks do something with the information the model was generating. “Falcon was limited by the same thing that holds a lot of AI projects back today: going from a working model to a useful system.” In the end, says Mr Henninger, it was the much more mundane task of creating a case-management system—flagging up potential frauds to bank workers, then allowing them to block the transaction, wave it through, or phone clients to double-check—that persuaded banks that the system was worth buying.
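A toy sketch of what that case-management layer amounts to, again in Python. The scoring function, threshold and actions below are invented for illustration; Falcon’s actual workflow is proprietary.

# Toy case-management workflow (not FICO's actual design): the model scores
# each transaction, and only flagged cases reach a human analyst, who can
# block the payment, wave it through, or call the customer to check.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Action(Enum):
    BLOCK = "block"
    APPROVE = "wave through"
    CALL = "call customer to verify"


@dataclass
class Case:
    transaction_id: str
    fraud_score: float                  # model output in [0, 1]
    resolution: Optional[Action] = None


def fraud_score(transaction_id: str) -> float:
    """Stand-in for the fraud model; returns a made-up score."""
    return 0.91 if transaction_id.endswith("7") else 0.12


def build_queue(transaction_ids: list[str], flag_threshold: float = 0.8) -> list[Case]:
    """Only transactions the model flags become cases for human review."""
    return [
        Case(tid, s)
        for tid in transaction_ids
        if (s := fraud_score(tid)) >= flag_threshold
    ]


if __name__ == "__main__":
    queue = build_queue(["txn-1001", "txn-1007", "txn-1017"])
    for case in queue:
        case.resolution = Action.CALL   # in reality an analyst decides
        print(f"{case.transaction_id}: score {case.fraud_score:.2f} -> {case.resolution.value}")

The point is that the model itself is the smaller half of the work; the queue, the actions and the people using them are what turned Falcon into a product banks would buy.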

Because they are complicated and open-ended, few problems in the real world are likely to be completely solvable by AI, says Mr Gralton. Managers should therefore plan for how their systems will fail. Often that will mean throwing difficult cases to human beings to judge. That can limit the expected cost savings, especially if a model is poorly tuned and makes frequent wrong decisions.
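A back-of-the-envelope illustration of that last point, with entirely made-up numbers: escalations to humans and the cost of unwinding wrong automated decisions quickly eat into the promised savings.

# Rough illustration (invented numbers): how routing difficult cases to humans,
# and fixing the model's mistakes, limits the savings from automation.
cases_per_day = 10_000
human_cost_per_case = 2.00       # cost of a fully manual decision
escalated_share = 0.30           # fraction the model is unsure about and hands to humans
error_share = 0.05               # automated decisions that are wrong and must be reworked
rework_cost_per_case = 6.00      # a bad automated decision costs more to unwind

all_manual = cases_per_day * human_cost_per_case
with_ai = (
    cases_per_day * escalated_share * human_cost_per_case
    + cases_per_day * (1 - escalated_share) * error_share * rework_cost_per_case
)
print(f"All-manual cost per day: ${all_manual:,.0f}")
print(f"With the model:          ${with_ai:,.0f}")
print(f"Saving:                  {1 - with_ai / all_manual:.0%}")
# Raise the escalation rate or the error rate and the saving shrinks quickly.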

The tech giants’ experience of the covid-19 pandemic, which has been accompanied by a deluge of online conspiracy theories, disinformation and nonsense, demonstrates the benefits of always keeping humans in the loop. Because human moderators see sensitive, private data, they typically work in offices with strict security policies (bringing smartphones to work, for instance, is usually prohibited).

In early March, as the disease spread, tech firms sent their content moderators home, where such security is tough to enforce. That meant an increased reliance on the algorithms. The firms were frank about the impact. More videos would end up being removed, said YouTube, “including some that may not violate [our] policies”. Facebook admitted that less human supervision would likely mean “longer response times and more mistakes”. AI can do a lot. But it works best when humans are there to hold its hand.

 

The Economist

