Blog

Event Recap: AI and The Future of Work

Wednesday, December 20, 2017
Irving Wladawsky-Berger

Last month I attended AI and the Future of Work, a conference hosted by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and its Initiative on the Digital Economy (IDE).  The two-day agenda included over 20 keynotes and panels on AI-related topics, and around 60 speakers and panelists from academia and business.

New technologies have been displacing workers and transforming economies for the past two centuries.  But, over time, these same technologies led to the creation of whole new industries and new jobs.  While the technologies of the industrial economy helped to make up for our physical limitations, the technologies of the digital economy are now enhancing our cognitive capabilities.  They're increasingly being applied to activities that not long ago were viewed as the exclusive domain of humans.  Will the AI revolution play out like past technology revolutions, with short-term disruptions followed by long-term benefits, or will this time be different?

Conference participants generally agreed that AI will have a major impact on jobs and the very nature of work.  But, for the most part, they viewed AI as augmenting rather than replacing human capabilities: automating the more routine parts of a job and increasing workers' productivity and quality, so they can focus on those aspects of the job that most require human attention.  Overall, a small percentage of jobs will be fully automated, while many more will be significantly transformed.

Conference participants also generally agreed that the more advanced AI-based transformations will not happen rapidly, but are likely decades away.  Much progress has recently been made in the ability to extract features from all the data we now have access to, as well as in machine learning algorithms that give computers the ability to learn by ingesting large amounts of data instead of being explicitly programmed.  While such statistical pattern recognition approaches can be applied to many tasks, they're no substitute for model formation, the main approach used by humans, from toddlers to physicists, to understand how the world works.  We're a long way from the development of AIs that truly learn and reason like people.

Over time, AI will not seem any more unusual than electricity, cars, airplanes, the Internet and other major transformative technologies.  I like the way author and publisher Kevin Kelly put it in an October 2014 Wired article.  He wrote that the AI he foresees is more like a kind of “cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off…  Everything that we formerly electrified we will now cognitize.  This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species.  There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ… Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization.”

Let me briefly discuss a few of the sessions at the MIT conference.
