The finance industry has taken note: Artificial intelligence can automate tasks that formerly required enormous human effort. Is that a good thing, speeding decisions and investments? Or a bad thing, reducing the diversity of analyses and perspectives?
To explore this complex issue, the MIT Initiative on the Digital Economy (IDE) has just launched the AI in Financial Markets and Decision-Making research group. It’s being led by Eric So, a Sloan Distinguished Professor of Global Economics and Behavioral Science at MIT and a member of the IDE research team. Eric recently shared his perspective on the group’s work in a discussion with IDE contributing writer Peter Krass. The following is a lightly edited version of their conversation.
Q: Why a new IDE research group, and why a focus on AI and financial markets?
We want to understand how AI is shaping the practice of finance and investing. We also intend to use financial markets as a laboratory for studying changes in human behavior.
More specifically, we hope to study AI’s ability to help with longstanding problems. For example, there’s a notion in economics called the “wisdom of the crowd”: when we aggregate information from many independent sources, we get better forecasts. Previously, we had to rely on human experts for this. But increasingly, we have access to many variations of large language models [LLMs] from different providers, which gives us a new way to aggregate information.
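As a rough illustration of that aggregation idea, here is a minimal Python sketch. The model names and the ask_model() stub are hypothetical placeholders, not real provider APIs; a real version would call each provider and parse a numeric forecast out of the model’s response.

```python
# A minimal sketch of LLM-based "crowd wisdom": poll several models for a
# point forecast and aggregate. ask_model() and the model names are
# hypothetical stubs, not real provider APIs.
from statistics import median

def ask_model(model_name: str, prompt: str) -> float:
    """Hypothetical stub standing in for an API call that returns a forecast."""
    canned = {"model-a": 12.4, "model-b": 11.8, "model-c": 13.1}  # placeholder outputs
    return canned[model_name]

def crowd_forecast(models: list[str], prompt: str) -> float:
    """Aggregate independent forecasts; the median is robust to outliers."""
    return median(ask_model(m, prompt) for m in models)

print(crowd_forecast(["model-a", "model-b", "model-c"],
                     "Forecast firm X's next-quarter earnings per share."))
```

The median is used here rather than the mean so that a single wildly off-base model cannot drag the consensus; either aggregator instantiates the same crowd-wisdom logic.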
This capability can be used for valuing a company. To value a company, we need to dig through the financials, understand what people are saying, and come up with a proper valuation. It’s quite tricky to quantify how much effort that takes: how do we measure mental exertion in a large-sample context? So we’ll look at how LLMs process this information. We want to get a sense of the amount of processing, as well as the cost, required to really understand a firm.
To do this, we plan to measure what’s known as an LLM’s reasoning traces. These traces give you a sense of how much compute the AI needs to think through a problem, which lets us study the cost of processing information across a really large sample. We’ll also use that as a way to understand where market mispricing concentrates.
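To make the reasoning-trace idea concrete, here is a small sketch under a strong simplifying assumption: it approximates “compute” by the token length of a trace, using a whitespace split rather than a real tokenizer. The firm names and traces are invented for illustration.

```python
# A rough sketch of the reasoning-trace idea: approximate the "compute"
# an LLM spends on a firm by the length of its trace. The whitespace
# split is a crude stand-in for a real tokenizer, and the traces below
# are invented for illustration.

def trace_effort(reasoning_trace: str) -> int:
    """Approximate processing effort as the token count of the trace."""
    return len(reasoning_trace.split())

# Hypothetical traces produced while valuing two firms.
traces = {
    "firm_simple": "Revenue is stable. Margins are unchanged. Value near book.",
    "firm_complex": ("Segment disclosures conflict with the cash flow statement, "
                     "so reconcile deferred revenue, re-estimate margins, and "
                     "revisit the valuation under both scenarios."),
}

# Firms whose filings demand longer traces are costlier to understand;
# the hypothesis is that mispricing concentrates among them.
effort = {firm: trace_effort(text) for firm, text in traces.items()}
print(sorted(effort.items(), key=lambda item: -item[1]))
```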
Q: Aren’t you also studying the impact of technology on human behavior?
Yes, there’s another side of our research that considers what AI does to us. That is, how does AI change the way we process information and think? For example, we’re increasingly presented with AI-generated summaries as the basis for our decisions. But there’s not a lot of oversight of how these summaries are generated, or of whether it’s actually good that we’re all looking at the same information.
That question connects to another project in the “crowd wisdom” theme. We’re exploring whether we lose crowd wisdom when everyone is presented with the same three bullet points. We’ll use the finance industry as a context to study this. But the problem is more general. That is, if we’re all homogenized in terms of the information we’re getting, do we lose some richness and diversity that we’d like for dialog — and not just in finance?
Some of my research also looks at the introduction of technologies other than AI. For example, what happens when everyone starts to trade on their smartphones? What does that do to their welfare, their performance and other factors?
Q: How will you study this?
One thing we’ve done already is to explore how summaries change the forecasts of AI models themselves. We can either give the LLMs a very lengthy document and have each one independently assess it and make a forecast, or we can give them a summary and then look at the heterogeneity in those forecasts. Looking ahead, we also plan to run experiments with human participants.
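A minimal sketch of that comparison might look like the following; the forecast values are invented placeholders standing in for model outputs under the two conditions.

```python
# A sketch of the dispersion comparison: each model forecasts once from the
# full document and once from a shared summary, and we compare the spread.
# All forecast values below are invented placeholders.
from statistics import mean, stdev

full_doc_forecasts = [10.2, 13.5, 11.9, 9.4, 12.8]   # independent reads of the full filing
summary_forecasts = [11.6, 11.9, 11.7, 11.5, 11.8]   # every model saw the same summary

for label, forecasts in [("full document", full_doc_forecasts),
                         ("shared summary", summary_forecasts)]:
    print(f"{label}: mean={mean(forecasts):.2f}, dispersion={stdev(forecasts):.2f}")

# Lower dispersion under the shared summary would be consistent with lost
# crowd wisdom: the "crowd" no longer contributes independent signals.
```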
Q: You’re also writing a book. What’s it about?
The working title is The Collision: What does AI do to us? My publisher is W.W. Norton, and the book’s release is set for the late summer or early fall of 2026. The book is a reflection of the idea that we’re not gradually adopting AI, but instead colliding with it. AI presents us with a tremendous opportunity to be more productive and scale up what we do. But it also presents us with a tremendous threat.
That’s because AI tempts people to outsource their thinking. And when you outsource your thinking, it presents a host of problems, including reduced skills and increased dependence. I hear people in my social circle talking about this. They’ve already become so accustomed to using AI tools, they don’t remember what it was like to work without them. They’re losing some of the skills that are central to their function, both within society and their jobs.
The book is my attempt to lay out the issues: both why the temptation and pressure to use these tools is so strong, and what that means for our own understanding, our skills, and our ability to make independent judgments. It’s a book about how our own thinking is being influenced by these systems. I’ll also look at the opportunity for chatbots to reshape our relationships with other people.
The book draws on a lot of research on how AI interacts with our brains. It’s intended for a general reader, though, so I’ll make that research accessible. And while my book is hopeful, I won’t pull any punches in examining how our brains are interacting with artificial minds.