By clicking “Accept All Cookies,” you agree to the storing of cookies on your device to enhance site navigation and analyze site usage.

Skip to main content

AI Insights Generated at CoDE

At this year’s meeting of the minds, one panel explored the challenges of developing high-quality, trustworthy Gen- AI

November 29, 2023

By Peter Krass

Generative AI — and all its potential and risks — has become the subject of widespread public debate and discussion. GenAI is also a hot topic in academic and tech industry circles, but their focus is more about how to better understand the inner workings and to make the technology more reliable and trustworthy in a rapidly changing environment.

How can the experts design better AI? These technical issues were explored at the 2023 MIT Conference on Digital Experimentation (CoDE) held at MIT in Cambridge, Mass., November 10–11. The 10th-annual event — hosted by the MIT Initiative on the Digital Economy (IDE) — is a mecca for those testing and proving the latest theories underlying emerging technologies

More than 300 attendees heard presentations ranging from the general to the highly specialized. All shared a common focus on digital experimentation, the impact of advanced technology, and the benefits of collaboration and shared learning. [See the full agenda here.]

Labor and Hacking Concerns

Brandon Stewart, an associate professor of sociology at Princeton, laid out three ways GenAI and large language models (LLMs) are being used in beneficial ways: to detect biases and inaccuracies; to integrate LLMs into current workflows; and to innovate creating new kinds of tools and applications. Stewart was on a GenAI panel led by David Holtz, assistant professor at the Haas School of Business and a CoDE co-organizer.

From left, David Holtz with panelists, Payel Das, Xiao Ma, Daniel Rock, and Brandon Stewart.

 

From a labor perspective, however, there are many concerns about AI’s potential to replace humans. Those worries are slowing the technology’s adoption in the workplace, according to Daniel Rock, assistant professor of operations, information and decisions at The Wharton School.

Rock expects GenAI to have a broad impact on job tasks, job roles and even job clusters, and that’s causing “an explicit friction in many companies. They’re not willing to take the risks” and dive into implementations, he said.

The concerns are valid. AI models can create false or misleading information and present them as facts. They also can expose proprietary files, violate copyrights, and cause what’s known as “prompt injection,” a kind of attack where hackers trick an AI model into changing its expected behavior.

Panelist Xiao Ma, a software engineer at Google, agreed that GenAI has generated a lot of energy, but also a lot of fear. One big worry, she pointed out, is bias embedded in AI models. It is a serious issue for the kind of processing — Ma called it “moral reasoning” — that’s needed when a problem’s solution isn’t black or white. Ma provided an example with the question, Is it okay to tell a lie?

“The standard answer is ‘no,’ so you can code the model that way,” she added. “But in your daily situation, there are cases where lying is probably okay.” Today, most LLMs cannot handle that kind of ambiguity.

Too Fast for Comfort?

Panelists also discussed the challenges resulting from the rapid development of GenAI tools. ChatGPT was introduced only in November 2022, and since then, it has gone through no fewer than three major revisions. The current version, ChatGPT-4, is estimated to have 100 million weekly active users worldwide. That’s great for technological progress, but problematic for digital experimentation.

Payel Das, an AI researcher at IBM, said the technology’s speedy pace has her worried, in part because ChatGPT and other GenAI tools are “opaque,” meaning their LLMs are trained on data that isn’t shown to the user. “Let’s say ChatGPT-5 comes along tomorrow,” she speculated. “Is that because some of its inner workings have changed? Or has there been a big shift in the [quality of the] data?”

Panelists agreed that another big barrier to business adoption is trust.
“There’s still a lack of a confidence model,” said Payel. Panelists said two things are needed to address these concerns: A better understanding of how LLMs and GenAI tools work, and the proven ability of the technology to provide far higher levels of accuracy than are now possible.

“We’ll figure it out,” said Rock. “But it will take some time.”

Watch the GenAI panel discussion video here.

At other sessions during the two-day event, presenters included major tech business such as Airbnb, Apple, Meta, IBM and Tencent; as well as global researchers from Cornell, Harvard, Hong Kong University of Science & Technology, Stanford, and University of Toronto. CoDE sponsors included Netflix , Amazon, Booking.com, Itaú and Eppo.

A Practitioner’s panel featured data scientists and tech leads from Microsoft, Amazon, Grow Therapy and Roblox.
Watch the Practitioner panel discussion here

From left, Dean Eckles with panelists Widad Machmouchi, James McQueen, Wenjing Zheng, and Tushar Shanker

 

 

 

 

 

 

 

 

 

 

Peter Krass is a contributing writer and editor to the MIT IDE.