
The MIT Intelligence Quest

March 29, 2018


How does human intelligence work, in biological as well as in engineering terms? And how can we use such an understanding of human intelligence to build wiser and more useful machines?  On February 1, MIT launched the Intelligence Quest (MIT IQ) – an initiative aimed at addressing these big questions by advancing the science and engineering of both human and machine intelligence.  MIT IQ aims “to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.”

MIT has been deeply involved in artificial intelligence since the field’s inception in the 1950s.  MIT professors John McCarthy and Marvin Minsky were among the founders and most prominent leaders of the new discipline.  AI was one of the most exciting areas in computer science in the 1960s and 1970s.  Many of the AI leaders in those days were convinced that a machine as intelligent as a human being would be developed within a couple of decades.  They hoped to get there by explicitly programming machines to exhibit intelligent behavior, even though to this day we have little idea what intelligence actually is, let alone how to translate it into a set of instructions to be executed by a machine.  Eventually, all these early AI approaches met with disappointment and were abandoned in the 1980s.  After years of unfulfilled promises, a so-called AI winter of reduced interest and funding set in that nearly killed the field.

AI was reborn in the 1990s when it adopted a more applied, engineering-oriented paradigm, one that enabled machines to acquire intelligent capabilities by ingesting and analyzing large amounts of data with powerful computers and sophisticated algorithms.  Instead of trying to explicitly program intelligence, this new approach was based on feeding lots and lots of data to the machine, and then letting the algorithms discover patterns and extract insights from all that data.

Such a data-driven, machine learning approach produced something akin to intelligence or knowledge.  Moreover, unlike the explicit programming-based approaches, these statistical methods scaled very nicely: the more data you had, the more powerful the computers, and the more sophisticated the algorithms, the better the results.  Machine learning and related advances like deep learning have played a major role in AI’s recent achievements.
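To make the contrast with explicit programming concrete, here is a minimal sketch of the data-driven approach, assuming scikit-learn and a synthetic, made-up dataset: no rules are hand-coded; the algorithm is simply handed labeled examples and left to find the patterns.

```python
# Minimal sketch of the data-driven paradigm: no hand-coded rules,
# just labeled examples the algorithm generalizes from.
# Assumes scikit-learn; the dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Generate a synthetic labeled dataset standing in for "lots of data".
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "intelligence" is never programmed explicitly; the model
# discovers predictive patterns from the training examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```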

In its early years, AI research was mostly conducted in computer science departments.  Now, AI research is also being conducted in a number of other disciplines, including cognitive science, brain science, robotics, and linguistics.  In addition, AI applications are being used in a variety of industries and professions.  Consequently, MIT IQ is a university-wide initiative, composed of two related research programs:

  • The Core will conduct fundamental research on both human and machine intelligence; develop new machine learning algorithms by reverse-engineering human intelligence; and advance our understanding of human intelligence based on insights from machine intelligence research.
  • The Bridge will apply natural and artificial intelligence discoveries to a variety of disciplines and industries, as well as develop state-of-the-art platforms and tools.

In addition, the initiative will explore the societal and ethical implications of AI, including the impact of AI on the future of work and on public policy.

To better appreciate the scope of MIT IQ, let’s take a look at three of its projects.

1. Reverse-engineer human intelligence to build machines that grow into intelligence the way a person does – starting like a baby and learning like a child

A project in the Brain and Cognitive Sciences Department is trying to understand how human beings learn, form theories about the world, and process information.  This work may well point the way to the next generation of machine learning algorithms.

At a recent AI conference, Professor Josh Tenenbaum explained his views on the difference between our present state of AI and the long-term quest for what he called the real AI.  Today’s AI applications each do just one thing quite well, having been trained on lots of data with machine learning algorithms.  Real AI requires the ability to go beyond data and machine learning: it should be able to build models of the world it perceives, and then use those models to explain its actions and decisions.

According to Tenenbaum, we’re decades away from such a real AI.  Three-month-old babies have a more commonsense understanding of the world around them than any AI application ever built.  AI applications start with a blank slate and learn only from patterns in the data they analyze, while babies start off with a genetic head start and a brain structure that allows them to learn much more than data and patterns.

Unlike machine learning systems, humans are lifelong learners, building layers upon layers of intelligence.  By studying the computational basis of human learning, Tenenbaum and his colleagues aim both to better understand human learning in computational terms and to build AI systems that come closer to the capacities of human learners.

2. Using patient data and machine learning for the early detection and personalized treatment of cancer

In 2014, Professor Regina Barzilay was diagnosed with breast cancer.  She then decided to turn her computer science and AI expertise to oncology.  She soon learned that data analysis and tools like machine learning were barely being used in the treatment of cancer, and, in fact, that good data about the disease was hard to find.  Treatment and drug choices were more a matter of educated guessing.  It would clearly be much better to make such decisions based on reliable empirical evidence – that is, by putting data and machine learning tools in the hands of oncologists.

“Across different areas of cancer care – be it diagnosis, treatment, or prevention – the data protocol is similar,” notes this article on Barzilay’s research.  “Doctors start the process by mapping patient information into structured data by hand, and then run basic statistical analyses to identify correlations.”  According to Barzilay, this approach is far behind the state-of-the-art in data analysis and AI technologies.
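As a rough illustration of that baseline protocol, the “basic statistical analysis” step often amounts to computing simple correlations over a hand-built table.  The sketch below assumes pandas; every field name and value is hypothetical, not drawn from any real patient dataset:

```python
# Toy illustration of the baseline protocol described above:
# patient records hand-mapped into a structured table, followed by
# a basic correlation analysis.  All fields and values here are
# hypothetical, invented for illustration only.
import pandas as pd

records = pd.DataFrame({
    "tumor_size_mm":    [12, 30, 8, 22, 40, 15],
    "age_at_diagnosis": [45, 62, 38, 55, 70, 50],
    "recurrence":       [0, 1, 0, 1, 1, 0],
})

# Pairwise Pearson correlations -- the kind of simple statistic
# the article contrasts with modern machine learning methods.
print(records.corr())
```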

“These kinds of delays and lapses (which are not limited to cancer treatment) can really hamper scientific advances, Barzilay says.  For example, 1.7 million people are diagnosed with cancer in the U.S. every year, but only about 3 percent enroll in clinical trials, according to the American Society of Clinical Oncology.  Current research practice relies exclusively on data drawn from this tiny fraction of patients.”

Her project is aimed at helping the other 97 percent of patients receiving cancer care.

3. Aggregate data across organizational boundaries to measure and predict the vulnerabilities of the entire financial system while respecting privacy concerns

Financial markets have been significantly transformed by the major advances in technology over the past two decades, e.g., big data, faster and cheaper computers, greater connectivity, and machine learning algorithms.  While these technologies have brought many benefits to investors, they’ve been accompanied by serious unintended consequences, including loss of privacy, identity theft, flash crashes, and business-ending trading errors.

The project – led by Professor Andrew Lo, Director of MIT’s Laboratory for Financial Engineering – is focused on “both positive and negative aspects of big data and financial technology in an attempt to identify and measure the magnitude of emerging problems as well as develop new technologies to address them.” It includes the use of machine-learning models for consumer credit risk management and applications of secure multi-party computation to financial regulation.
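To give a flavor of how secure multi-party computation can aggregate data across organizational boundaries without exposing any one party’s numbers, here is a toy sketch using additive secret sharing.  It illustrates the general technique only – it is not the protocol used in Lo’s project, and the exposure figures are invented:

```python
# Toy additive secret sharing over a prime field: several firms jointly
# compute the SUM of their private exposures without revealing them.
# Illustrates the general idea behind secure multi-party computation;
# not the actual protocol used in Lo's project.
import secrets

P = 2**61 - 1  # prime modulus defining the field

def share(value, n_parties):
    """Split `value` into n_parties random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each firm's private exposure (hypothetical figures, in millions).
exposures = [120, 450, 75]
n = len(exposures)

# Firm i sends its j-th share to firm j; each firm only ever sees
# random-looking shares, never another firm's raw number.
all_shares = [share(v, n) for v in exposures]

# Each firm sums the shares it received; combining those partial sums
# publicly reveals only the aggregate, not any individual input.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
total = sum(partial_sums) % P
print("aggregate exposure:", total)  # prints 645; no firm's value exposed
```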