Blog

Cracking the CODE to Algorithmic Testing

Monday, November 30, 2020

Digital experimentation is going corporate. Rapid market-testing tools have long been a key resource for data scientists and programmers seeking to target marketing campaigns more accurately and to test apps in real time at large scale. With AI techniques, the practice is accelerating.

The two-day MIT IDE Conference on Digital Experimentation (CODE), held virtually on November 19-20 with more than 300 attendees, shows how popular these tools have become. Leaders from academia and industry discussed how they are rapidly deploying and iterating experiments on complex social and economic problems. Academic researchers still have a head start in this data analytics field, but businesses are jumping in as they see the value of better time-to-market and consumer-demand targeting techniques. Attendance at the event reflects the rapid growth since the first MIT IDE CODE in 2014.

“The newly emerging capability to rapidly deploy and iterate micro-level, in-vivo, randomized experiments in complex social and economic settings at population scale is one of the most significant innovations in modern social science,” said IDE Director Sinan Aral, a conference organizer.

“As more social interactions, behaviors, decisions, opinions and transactions are digitized and mediated by online platforms, our ability to quickly answer nuanced causal questions about the role of social behavior in outcomes such as health, voting, political mobilization, consumer demand, information sharing, product rating, and opinion aggregation is becoming unprecedented,” he said. The conference is co-organized by IDE professors Dean Eckles, Alex 'Sandy' Pentland, and John Horton.

This year, a practitioners’ panel featured speakers from platform giants Netflix, Airbnb, and Facebook, who explained how they use massive digital experimentation in their ad tracking, app design testing, and analytics. Facebook and Netflix were conference sponsors along with Accenture.

Additionally, experiments are being used to determine everything from whether Americans can be nudged to exercise more, to the best digital strategies for political campaigns, to how to monitor remote-learning habits. Presentations spanned theory and practice, design and analysis, and covered a variety of applications, including new ways to measure GDP. Among the new work cited were studies on social networks, the success of matching markets, and how to overcome resource constraints. Presenters also discussed when algorithms are more accurate than humans at predictions and interactions.

Watch the panel on political campaigns here.

Aral sees randomized experiments as “the gold standard of causal inference and a cornerstone of effective policy.” But the scale and complexity of these experiments also create scientific and statistical challenges for design and inference, he noted.

Moreover, different disciplines are approaching causal inference in contrasting yet complementary ways. That’s why CODE draws on research from economics, computer science, sociology, and other scientific disciplines to lay the foundation for a multidisciplinary research community.