3 Researchers Navigate the Intersection of Tech and Economics

Q&A: New Postdocs Yang Yu, Kazimier Smith and Omeed Maghzian explain how they’ll apply their backgrounds in economics to AI, deep learning & automation.

October 01, 2025

The MIT Initiative on the Digital Economy (IDE) has brought on three new postdoctoral associates who share something in common: All three recently earned doctorates in the field of economics.

Yang Yu (pictured above) joined the IDE in June. She earned a doctorate in economics from the University of Virginia earlier this year and holds a master’s degree in economics for development from Oxford University. Yang’s research areas include entrepreneurship and innovation. At FutureTech, she’s working closely with Neil Thompson, leader of the IDE’s AI, Quantum and Beyond research group, and Martin Fleming.

Kazimier Smith joined the IDE in June as well. Earlier this year, he earned a doctorate in economics from the NYU Stern School of Business. Kazimier’s research areas include the economics of platforms, social media and artificial intelligence (AI). At MIT, he’s working primarily with Neil Thompson and Aaron Kaye.

Omeed Maghzian joined the IDE in July. He earned a doctorate in economics earlier this year from Harvard University. Omeed’s main research areas are macroeconomics and labor markets, and at the IDE he’s also working primarily with Neil Thompson.

All three spoke recently with Peter Krass, a contributing writer and editor for the IDE. The following are lightly edited transcripts of those conversations.

Q: Can you describe your main areas of research?

Yang Yu: In my working paper, Venture Capital and the Dynamism of Startups, I look at uncertainty among startups in biotech and software. Based on that uncertainty, I study how we should design policies to improve the functioning of the venture capital market. Venture capital is an important source of financing for startups, and startups are a key driver of innovation.

One thing I looked at is the rate of learning in these startups. It mainly goes back to the business models of biotech and software, which are very different.

Q: Different? How so?

Most software startups are working with proven technologies. The primary source of uncertainty is usually whether there’s market demand for a new product or service. For this reason, many of them first build what’s known as a minimum viable product (MVP) to test demand. Once they show there is indeed a large market for their new products or services, most of the uncertainty has been resolved. So in software, most of the uncertainty is at the beginning.

For biotech, it’s a different story. The drug-development process has multiple stages. First, they conduct animal tests to see if the molecule is safe and effective. Next, they recruit a small group of healthy volunteers to assess safety. After that, they test the drug’s efficacy on patients with certain diseases. Then they gradually scale up to test effectiveness. It’s a multi-stage process, and each stage has its own sorts of uncertainty. Also, the uncertainties are spread more evenly over time.

That leads to the different learning rates in these two sectors. With software startups, most of the failures happen at the beginning, after raising the first round of capital. But biotech startups can fail continuously.

Q: At MIT, what are you researching now?

One research stream is the economics of AI. I’ll be working on a project that looks at what affects AI adoption levels in different sectors. We’re mainly using job-posting data to see current demand for AI technologies across different industries and companies.

To start, we’re looking at S&P 500 companies across about 11 sectors. Later, we’ll scale this up to all public companies in the U.S. stock market. That will cover most of the industrial sectors.

Kazimier Smith

Q: How did you become interested in researching the economics behind a social media influencer’s choices and career progression?

Kazimier Smith: Influencer Dynamics was a chapter of my dissertation. I also hope it will be published as a standalone paper at some point. My adviser in graduate school was interested in the economics of media and entertainment, and he’d become interested more specifically in social media. He asked if I would try working on this project.

Data collection was a challenge. Social media companies typically don’t want people to look at their data — mainly because they’re worried about what they might find! There obviously is some evidence of negative social impacts of social media. So it has gotten harder and harder to collect data to do this sort of research. I worked with a company that helped me collect the data, which was great. I also got lucky finding a couple of data sources that I could merge with the main dataset to add some new insights.

Q: What were your key findings?

Sponsored posts do seem to be somewhat less effective than organic posts. But the surprising thing for me is that the gap is not as big as people had thought. In terms of the growth of your audience, you’re not that much worse off making a sponsored post than an organic post. At least, that’s what I see in my data.

I’ll add a caveat: There’s more work to be done in trying to quantify the impact of an organic post vs. a sponsored one. It’s not super-easy to quantify that in a rigorous and reliable way. My research is not the last word on that.

Q: You’ve also done work on large language models (LLMs), right?

Yes, for the paper, Feeding LLM Annotations to BERT Classifiers at Your Own Risk, the motivation was that there’s increasing interest in using synthetic data for social science research. For example, let’s say you’re classifying the political leanings of social media posts. It becomes expensive to have humans do the labeling, so sometimes researchers use an LLM instead, employing synthetic data to fine-tune a smaller model. Also, when researchers have limited amounts of data, they may use synthetic data to increase the size of their datasets.

Then what people do with those datasets is fine-tune a smaller, cheaper model to do the classification task. So the question is: What happens when you use synthetic data, rather than actual human-generated data, to do the fine-tuning? We find there is some negative impact from using that approach.
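The workflow Kazimier describes can be illustrated with a toy experiment. This is only a sketch, not the paper’s method: we treat one set of labels as ground-truth “human” annotations, corrupt a copy of them to stand in for noisy LLM-generated labels (the 15% flip rate is an assumption), train a simple nearest-centroid classifier on each label set, and compare accuracy on a held-out test set.

```python
# Toy illustration (not the paper's method): compare a classifier trained
# on clean "human" labels with one trained on noisy "LLM-annotated" labels.
# The 15% flip rate is an assumed annotation error rate.
import random

random.seed(0)

def make_data(n):
    """Two well-separated 2-D Gaussian classes, labeled 0 or 1."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        center = -2.0 if y == 0 else 2.0
        x = (random.gauss(center, 1.0), random.gauss(center, 1.0))
        data.append((x, y))
    return data

def corrupt(labels, flip_rate=0.15):
    """Stand-in for LLM annotation error: flip each label with prob. flip_rate."""
    return [1 - y if random.random() < flip_rate else y for y in labels]

def train_centroids(points, labels):
    """Minimal 'model': the mean point of each class."""
    cents = {}
    for c in (0, 1):
        pts = [p for p, y in zip(points, labels) if y == c]
        cents[c] = tuple(sum(v) / len(pts) for v in zip(*pts))
    return cents

def accuracy(cents, test):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    correct = sum(
        1 for x, y in test
        if min(cents, key=lambda c: dist2(x, cents[c])) == y
    )
    return correct / len(test)

train = make_data(400)
test = make_data(200)
points = [x for x, _ in train]
human = [y for _, y in train]          # ground-truth labels
synthetic = corrupt(human)             # noisy stand-in for LLM labels

acc_human = accuracy(train_centroids(points, human), test)
acc_synth = accuracy(train_centroids(points, synthetic), test)
print(f"human-label accuracy:     {acc_human:.2f}")
print(f"synthetic-label accuracy: {acc_synth:.2f}")
```

In a real study the “model” would be something like a fine-tuned BERT classifier and the noise would come from actual LLM annotation errors, which can be systematic rather than random; that is part of why quantifying the impact rigorously is hard.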

Q: What will you be focusing on at MIT?

The big research topic is what we call The Economics of Deep Learning. The project’s goal is to think about the dynamics of competition in the AI industry.

You might be interested to know whether the AI industry will wind up with one huge monopolist firm or with a bunch of smaller firms competing with each other. Right now, the space is super new, and it’s not clear where it’s going to go.

Our goal is to write a model that captures the various forces in the industry. Then we’ll use data to estimate and inform that model, and potentially make some predictions. If the federal government is interested in regulating competition in that space, we could run counterfactual simulations with the model; that would provide a way to evaluate how proposed regulations might play out.

Right now, we’re in an early stage, just collecting and exploring the data. Next, we’ll see what the data looks like, and what comes out of it. That will also shape the project and how it turns out.

Omeed Maghzian

Q: What research interests have you pursued in your work?

Omeed Maghzian: A lot of the work I’ve done is trying to understand how macroeconomic shocks transmit through labor markets. Historically, economists have studied these effects using aggregate time-series data, such as movements in the unemployment rate. But oftentimes, to get the specific mechanisms or channels by which workers are affected by macroeconomic shocks, you need to go deeper and use microdata on individual firms and workers.

In the paper I co-wrote, Credit Cycles, Firms, and the Labor Market, the macroeconomic shock is an expansion in the supply of corporate credit that occurs when investors are willing to bear more risk. The interesting thing is that this reverts in the future; there’s some sort of crash.

Q: For example?

You could imagine that workers benefit from an aggregate increase in corporate credit, because the people who are pulled into the labor market can use their initial job as a steppingstone to better opportunities. Or you could imagine that these workers spend time accumulating job-specific skills and knowledge, but are laid off when credit conditions tighten.

We primarily find that the latter effect holds. The people who often get hired as a result of these expansions in credit supply are the ones who also lose their jobs within three to five years. That’s because loose credit conditions lead risky firms to engage in rapid job creation, only to destroy many of those same jobs when they experience financial distress. And this means that workers, particularly those who are younger and less experienced, bear more risk from fluctuations in aggregate credit conditions than we previously thought.

Identifying this effect required us to link financial data to administrative data on firm employment and the income trajectories of the workers hired by those firms. We address these selection effects by jointly using natural segmentation in the corporate bond market and segmentation in where workers take their first jobs — something that would not be possible without microdata.

Q: You’ve also done other research related to job loss, right?

Yes, one of the other papers I co-wrote, The Labor Market Spillovers of Job Destruction, shows an important driver of why the income loss from being laid off in a recession is so high: It comes from changes in labor market conditions caused by many workers losing their jobs at the same time. When firms decide to cut employment, it has spillover effects on other workers whom they do not directly employ.

Every company could be making the right choice to survive by laying off workers. But because everyone’s doing it at the same time in recessions, the costs that workers experience are amplified. The inflow of workers looking for new jobs can congest the labor market, lowering the chance for any one worker to find a job. This also suggests that smoothing the pace at which workers are laid off could keep the labor market from deteriorating as much as it usually does in recessions.

Q: Now that you’re at MIT, what are your new research topics?

I’ll be expanding my interest in the intersection of macroeconomics and labor by studying the effects of technology as well. One thing I find interesting: At what point in people’s careers might they be displaced by AI? For example, there’s a lot of talk about firms no longer hiring coders, often an entry-level job. But this may have dynamic effects. A lot of people learn skills in these jobs, and they may not be able to do that effectively if there are fewer opportunities at the start of their careers.

One project I’m working on, now in its early stages, will look at the dynamics of the economy after firms replace human labor with AI: not only as prices adjust, because it may be more productive to use AI, but also as workers reallocate to positions for which they may or may not be suited. We’re trying to capture all of these forces in a structural model, and to estimate the aggregate effects using data on the observed adoption patterns of AI by firms. So, like in my previous work, there’s both a theoretical component and an empirical component.