
AI has not yet spurred a productivity boom, but just you wait

Nobel laureate Bob Solow pronounced 30 years ago that “you can see the computer age everywhere but in the productivity statistics”.

At the start of the 1980s, the world entered the digital age. Fax machines transformed communications. The introduction of personal computers made high-powered computing available to all.

But it took time to work out how to make best use of these major changes in technology. In the 1980s, output per worker in the US grew by only 1.4 per cent a year. But between 1995 and 2005, growth accelerated to 2.1 per cent a year.

We are on the cusp of another acceleration in productivity growth, due to artificial intelligence (AI).

Even the mention of AI strikes fear into many hearts. Surely this will cause massive job losses? That is one way to boost productivity, but it’s hardly desirable.

In fact, to date most of the applications of AI in companies have not replaced workers.

Rather, they have supplemented what employees do, enabling them to be more productive.

Two recent pieces in the Harvard Business Review provide firm evidence for this. Satya Ramaswamy found that the most common use of AI and data analytics was in back-office functions, particularly IT, finance and accounting, where the processes were already at least partly automated.

Thomas H Davenport and Rajeev Ronanki came to the same conclusion in a detailed survey of 152 projects. AI was used, for example, to read contracts, or to extract information from emails in order to update customer contact details or record changes to orders.

Developments within the techniques of AI itself suggest that practical applications of the concept are about to spread much more widely.

There was a surge of research interest in AI in the 1980s and 1990s. It did not lead to much.

Essentially, in this phase of development, people tried to get machines to think like humans. If you wanted a translation, for example, your algorithm had to try to learn spelling, the correct use of grammar, and so on. But this proved too hard.

The real breakthrough came during the 2000s. Researchers realised that algorithms were much better than humans at one particular task: namely, matching patterns.

To develop a good translator, you give the machine some documents in English, say, and the same ones translated into French. The algorithm learns how to match the patterns. It does not know any grammar. It does not even know that it is “reading” English and French. So at one level, it is stupid, not intelligent. But it is exceptionally good at matching up the patterns.

In the jargon, this is “supervised machine learning”.
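
The idea can be shown in miniature. Here is a minimal sketch in Python, assuming nothing beyond a handful of invented English–French sentence pairs (the corpus and the translate_word helper are hypothetical, purely for illustration): the program never learns any grammar, it simply counts which words tend to appear together in the two languages.

```python
# Toy sketch of the "pattern matching" idea behind supervised machine learning:
# the program is shown English sentences paired with their French translations
# and counts which words tend to appear together. It knows no grammar and
# does not know it is reading English or French.
from collections import Counter, defaultdict

# Invented parallel corpus, purely for illustration.
parallel_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
    ("the dog eats", "le chien mange"),
]

# Count how often each English word co-occurs with each French word.
cooccurrence = defaultdict(Counter)
french_totals = Counter()
for english, french in parallel_corpus:
    french_totals.update(french.split())
    for en_word in english.split():
        for fr_word in french.split():
            cooccurrence[en_word][fr_word] += 1

def translate_word(en_word):
    """Guess the French word whose co-occurrence with en_word is highest
    relative to its overall frequency, so very common words like 'le'
    do not dominate every guess."""
    counts = cooccurrence.get(en_word)
    if not counts:
        return en_word  # unseen word: nothing has been learned, leave it alone
    return max(counts, key=lambda fr: counts[fr] / french_totals[fr])

print(translate_word("cat"))     # -> "chat"
print(translate_word("sleeps"))  # -> "dort"
```

Real translation systems use far more sophisticated statistical or neural models, but the principle is the same: given enough paired examples, pattern matching alone gets remarkably far.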

At the same time, a new study in the MIT Technology Review shows that purely scientific advances in this field are slowing down markedly. In other words, in the space of a single decade, this has become a mature analytical technology – one that can be used with confidence in practical applications, in the knowledge that it is unlikely to be made obsolete by new developments.

Productivity looks set to boom in the 2020s.

As published in City AM Wednesday 30th January 2019
Image: AI via vpnusrus.com by Mike MacKenzie under CC BY 2.0