Exploring the impact of artificial intelligence is a big, interdisciplinary job. To help with this undertaking, the MIT Initiative on the Digital Economy (IDE) has recently brought on two new researchers.
Frank Nagle joins the IDE as a Research Scientist. Previously, he was an Assistant Professor in the Strategy Unit at Harvard Business School and the Principal Investigator in the school’s OpenLab. Frank is also Advising Chief Economist for The Linux Foundation.
Zezhen (Dawn) He joins the IDE as a Postdoc, and she’ll be working closely with Renée Richardson Gosline, leader of the IDE’s Human-First AI research group. Dawn recently received a doctorate in operations management from the University of Rochester’s Simon Business School.
Both Frank and Dawn spoke recently with Peter Krass, a contributing writer and editor for the IDE. Following are edited versions of their conversations.
FRANK NAGLE 
Q: At Harvard Business School, what was the main focus of your research?
My research has focused on competition and collaboration — the two main ways companies interact with each other — in the context of open-source software. In the old days, open source was mainly the purview of tinkerers and hobbyists. But today it’s very much the purview of companies. Companies work together to build open-source projects, and then they build their own products on top of that; that’s where they compete. My work has looked at when it makes sense for companies to collaborate on products and services, and when it makes sense for them to instead compete.
Q: What research do you plan to undertake at the IDE?
At the IDE my research will remain rooted in questions around open source, but now I'll be shifting much of my work to the intersection of open source and AI. That includes looking at how AI is being used to create open-source tools. For example, I recently co-wrote a working paper, "Generative AI and the Nature of Work."
I’ll also be asking, What is the value of open-source AI? How do we think about the value that open-source AI creates for companies and for innovation? And how do we think about closed-source vs. open-source AI?
One thing that attracted me to the IDE is its partnerships with companies, because that's where the data is, and where all the interesting action is happening. That's a real IDE strength.
Q: At the IDE, who will be your main collaborators?
I’ll be working with IDE Director Sinan Aral, since he’s doing a lot of work in the GenAI space. Also, I’ve known John Horton — leader of the IDE’s AI, Marketplaces and Labor Markets research group — for a long time, and we’ll surely be doing some work together. Same with Neil Thompson and the work he’s been doing as leader of the IDE’s AI, Quantum and Beyond research group.
Q: This is an interesting moment for AI. Even if AI code is open, isn’t there an issue of whether it’s understandable?
Yes, this is one of the big debates right now: What does it mean to say a piece of AI is open source? In the traditional open-source world, we know what it means: the code is open. But with AI, the code is just the first step. There’s also the data it’s being trained on, the resulting models and weights, and all the compute power underneath it.
So in the AI space, I take a broader view of what open means. Unlike more traditional open-source software, AI has a spectrum of openness. If you open-source the models and the code, that's great. And if you can open-source the training data, that's great, too, although much of it is proprietary, so often you can't. But unless an AI system is fully open, it's going to be hard to understand how it makes decisions. And that has important implications for fairness, bias and a host of other safety-related concerns.
DAWN HE 
Q: Congratulations on receiving your doctorate in operations management. What was the focus of your research?
Traditionally, people in operations management look for solutions to business problems, such as optimizing a process or maximizing revenue. My research was more focused on human-AI interaction in operations — specifically, how people make decisions with AI in a business context.
Q: How did you study that?
I used both behavioral and methodological approaches, sometimes separately and sometimes combined. For example, in one chapter of my dissertation, I developed methodologies that help decision-makers understand how machine learning models compare, and the tradeoffs of selecting one model over another.
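To make that idea concrete, here is a minimal sketch of the kind of comparison a decision-maker might run. This is a generic illustration, not the methodology developed in the dissertation: the models, the synthetic data, and the use of evaluation time as the secondary criterion are all assumptions for the sake of the example.

```python
# Generic sketch: comparing two candidate models on cross-validated accuracy
# versus a secondary criterion (here, time to evaluate). Illustrative only;
# not the dissertation's methodology.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical dataset standing in for a real business prediction problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic regression (simpler, more interpretable)": LogisticRegression(max_iter=1000),
    "random forest (more flexible, less interpretable)": RandomForestClassifier(random_state=0),
}

for name, model in candidates.items():
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}, "
          f"{elapsed:.1f}s to evaluate")
```

A comparison like this surfaces the tradeoff she describes: one model may score slightly better while the other is cheaper to run or easier to explain, and the right choice depends on the decision-maker's context.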
I also explored the behavioral side of human-AI interactions. I did this with a lab experiment that studied whether people interact differently with AI recommendation systems when they make decisions that affect themselves versus decisions that affect others. An example of the first would be someone making a financial investment with their own money; an example of the second would be a physician deciding whether to run a test for a patient. I also explored whether providing AI explanations increases algorithm adoption.
Q: You've also researched explainable AI, right?
Yes. People in business want to know why the AI makes a particular recommendation, so that they’re not working with a black-box model. This is especially important for high-stakes decisions.
In another paper, I designed an explainable-AI method that explains why an AI model makes different predictions, either for different observations or over time. One example is reviewing bank loan applications: you might wonder why one person's application was approved but another's was not. The most important factors are the ones that contribute most to the difference in the AI's predictions.
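For a linear model, that kind of difference explanation has a simple closed form: the gap between two applicants' scores decomposes exactly into per-feature contributions. The sketch below is one simple way to realize the idea, not the method from the paper; the loan features, the training data, and the logistic-regression model are all hypothetical.

```python
# Minimal sketch: attributing the difference between two applicants' predicted
# scores to individual features. For a linear model, the log-odds gap
# decomposes exactly as sum_i w_i * (x_A_i - x_B_i). Features and data are
# hypothetical; this is one simple realization of the idea, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "credit_history_years"]
rng = np.random.default_rng(0)

# Hypothetical training data: 500 past applications with approve/deny labels.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

applicant_a = np.array([0.8, -0.5, 1.2])   # approved
applicant_b = np.array([0.2,  0.9, 0.3])   # denied

# Per-feature contribution to the difference in the model's log-odds.
w = model.coef_[0]
contrib = w * (applicant_a - applicant_b)
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print(f"total log-odds gap: {contrib.sum():+.2f}")
```

For nonlinear models, the same question is typically answered with Shapley-value-style attributions, which generalize this exact linear decomposition.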
Another issue I studied is wait-time prediction. Imagine you're using a ride-hailing app, and the app shows an expected wait time of five minutes. However, after a short while, the app updates the wait time to 10 minutes. We call this an inconsistency in the prediction over time. It tends to lower customers' satisfaction, as well as their opinion of the system. In this scenario, our explainable AI might reveal that the cause of the inconsistency was an unexpected traffic delay.
Q: Looking ahead, what will be your research focus at the IDE?
I'll be moving toward more behavioral work, using both field and lab experiments to study human-AI interaction. We plan to collaborate with companies that want to incorporate AI into their internal and client-facing operations. One focus area could be studying how to promote appropriate reliance on AI. We don't want people to rely blindly on AI, but we also don't want them to be averse to using AI when it could help them. What's needed is a point of balance. Most important, my work strives to improve the outcomes of human-AI collaboration.