Andrey F.: Welcome to Economic Frontiers, the show where we interview leading economists about their research on economics, technology and innovation. I'm Andrey Fradkin and today, our guest is Greg Lewis, senior researcher at Microsoft Research in New England. Greg is one of the world's experts on pricing and marketplace design.
In this conversation, we talk about the economic perspective on online platforms. What is the role of data in setting prices and designing platform policies? Why are economists so useful to companies such as Amazon, Google and Microsoft? Do theoretical models still matter in the age of big data and has the chicken and egg problem gotten easier to solve over time? Welcome to the show.
Greg L.: Thanks very much for having me, Andrey.
Andrey F.: Today, we're going to be talking about a lot of issues relating to the role of economics in modern technology companies. What does economics have to offer? What are the key challenges and what do we think might happen going forward? As a bit of kind of a start to the conversation, we've seen that companies such as Microsoft, Google and eBay have recently been building up large teams of economists, so people with PhDs in economics, to do what? What have these people been doing and why are these companies hiring economists?
Greg L.: I think one of the things that's been interesting about this is that different companies have been hiring people for different things. My advisor Pat Bajari is at Amazon. There, he was, at least to start, very interested in pricing, which is a natural activity for Amazon to be thinking about, and then also branching out to logistics: which things should be shipped where, which objective functions should these companies be optimizing, what are the key economic drivers of growth, how do you forecast demand around Thanksgiving? These kinds of things.
Whereas somebody like Steve Tadelis, sitting at eBay, has been much more interested in things like the reputation system, which is a really eBay-specific phenomenon. It happens on Amazon as well, but it's not quite as important to Amazon. Susan Athey at Microsoft has been thinking about Bing, display advertising and search advertising, and how to improve the way in which we monetize in those dimensions.
Really, I think economists bring quite a lot to the table, but it's not just one thing that they bring. It's a range of different kinds of expertise that they've gained from their academic training.
Andrey F.: That's really interesting, because those are quite diverse activities and they are not typically things that economists actually study in their PhD. What is the advantage of having an economist look at this type of problem? In the case of logistics for Amazon, you would think that a person in operations research would be the appropriate person, or in the case of search engine design, a computer scientist focusing on machine learning. What is the economist's perspective on this?
Greg L.: Yes. I think what economists bring to the table is really a good sense of economic modeling. If you think about the case of logistics, Amazon does have many highly talented operations research professionals working on these kinds of questions. The way Pat described it to me is: you have to know what to optimize. In order to know what to optimize, you have to know what the economic trade-offs are that Amazon is making by, say, delivering things in one day relative to two days.
Economists would sit there and think, "One-day delivery attracts customers a little bit better. People like one-day delivery. In the long run, this may make a big difference to Amazon's growth trajectory versus the standalone retail stores that we're used to." Economists are very good at thinking through those trade-offs and then working out how one might go about quantifying them.
Optimization is a skill that people in operations research are extremely good at, and in some sense it should probably be left to them, but choosing the objective function is a harder problem, and connecting the objective function to data is also a problem that economists, particularly economists in industrial organization, really have some comparative advantage in.
Andrey F.: That makes a lot of sense. For the listeners out there, industrial organization sounds like a very broad term. What exactly do you mean by industrial organization in the case of economics and specifically, empirical industrial organization, which is the application of industrial organization to actual data?
Greg L.: Industrial organization is historically the part of economics that is concerned with market power. It's typically concerned with the behavior of firms, and by market power, we really mean the departure from the classical economics of perfectly competitive markets, where there are lots of firms all competing to supply something to their customers and, therefore, price gets driven basically down to cost.
Industrial organization is all about situations in which there are few companies and, because there are few companies, each company is able to maintain substantial markups and make money. A classical example of this would be a monopoly. A monopoly is in a position of having a lot of market power, and people in industrial organization have studied how companies act to maintain their market power, what kinds of strategies they follow, and the implications of this for consumers, which leads to questions in antitrust, for example: should we allow these companies to merge, given that together they may be able to exert market power?
That's the discipline. It's a very big field, but one thing that's come out of this focus on market power has been that we need models of strategic behavior by firms and also by consumers. Those models, and the connection of those models to data, are exactly what's useful to companies, because companies have a lot of data at their disposal nowadays, but they are not quite sure how to interpret it, and some form of modeling exercise is very useful.
Andrey F.: You bring up strategic behavior, but in some sense, a lot of industrial organization can be done even without thinking about strategic behavior. Just think about a consumer who is making a choice about what car to buy: how much do they value different cars or different characteristics of cars? Solving that type of problem is a very traditional industrial organization problem.
Of course, there are many difficulties in doing that, but it's not necessary to include strategic interactions to be doing industrial organization, I guess.
Greg L.: I think I disagree with you a little bit. Historically, it really has been about the strategy. Why do we try to understand how people choose cars in terms of characteristics? We went through that exercise because we wanted to understand demand, but really, demand was going to be an input into this model of how firms competed, which is where we were ultimately going.
Along the way, it turned out that these techniques that people in IO were developing, in this paper that I think you're implicitly referencing, Berry, Levinsohn and Pakes on demand estimation, then became very useful in a lot of different places. And you're right, companies today care as much about that piece as they do about the strategic piece: I want to know what my consumers value, what they want and, therefore, what I should deliver to them.
In the case of Amazon, if I'm choosing between getting the goods to people faster or trying to make them cheaper by driving down markups from my suppliers, which should I be investing in? Which is the one that's going to get people to my platform? The demand estimation piece is definitely important, but it's kind of a recent phenomenon, and maybe you're right in pointing out that this empirical part of IO, connected to data, is also kind of recent.
As late as the eighties, people were still basically game theorists in IO and that's only changed in the last twenty years.
Andrey F.: Got it. There is also this interesting aspect of the change, which is that traditionally, economists think about how to structure the optimal society or what the right way to regulate market power is, but now, with these tech platforms, it's really about how to optimize for this specific platform. What that platform should even be optimizing for is a really big question within each of these companies.
As you say, there are some trade-offs. Thinking about your work with Microsoft, if you can talk about it, what are some trade-offs that you're thinking about in terms of the design of marketplaces or pricing?
Greg L.: That's an interesting question. I have to think quite hard about what I actually can talk about, but one of the generic questions that I've thought about a little in my work is this question of what value a platform can add. If you think about somewhere like eBay or a dating platform such as OkCupid, a lot of the work that economists have done so far has said, "A platform is a place where people get together. The value of a platform is getting a lot of people together and getting those people talking to each other." That's obvious in the case of a dating platform.
What we haven't spent so much time thinking about is what else can be brought to the table. I think one of the things that most modern platforms bring to the table is really good search design. I want to find somebody to date. I'd like to have some way of filtering them so that I find the kind of person that would be good for me.
I'm on eBay and I want to find a bargain, so I need to be able to find a used good, because I'm not going to get the new good very cheaply. How do I find the used goods? A lot of this has come down to the technology: at first, it was just categorization; later, it was actually algorithms trying to do the matching. From the economist's point of view, these matching algorithms are interesting because they create externalities.
If I make it very easy for people to identify attractive women on a dating website, for some notion of what attractive is (fortunately, that differs across people), then you might imagine that people who are relatively less attractive will get far fewer invitations or far fewer dates. This technology, which seems very benign sitting in the middle, really has implications for everybody in the market, and not all of those are good.
That's one of the things I've been thinking about a little bit recently: how do we formalize this notion of the role that a platform plays in matching people together, and then ask who wins and who loses when that platform's design improves in some dimension?
Andrey F.: Got it. Thinking about who wins and who loses is an interesting question, because you might think that what really matters is the overall welfare of people on the platform, but that's really hard to define in a lot of cases. Furthermore, different sides of the platform, let's say buyers and sellers, might have different outside options, and therefore you might want to cater to one side more than the other in order to keep them on your platform.
Trying to figure out which of these sides of the market is the one you want to attract more, or what the policies are that favor that side of the market, is, I think, a really important problem for all these platforms. For example, with Google, you want the searchers coming in, but you also want the advertisers to participate in bidding for the advertising slots.
I guess it all seems very complicated. In fact, when I talk to managers in various Silicon Valley companies, that's what they tell me. They are like, "We know these trade-offs exist, but it's really, really complicated and so we're going to do something simple." Has that been your experience or can you give some examples where the approach of thinking things through and going a little more in depth actually has paid off?
Greg L.: At least in my experience thus far, I don't think we're yet at the point of being able to give really great advice to managers, other than to say, "Look, it's clear that this [inaudible] in the marketplace. It's clear that who is going to win and who is going to lose from this varies." In the work I've done so far, you can point to who's likely to win and who's likely to lose, and I think in most companies you can probably do the same exercise. People who really have a lot of domain expertise can say, for a change to this algorithm, who is winning and who is losing from it.
Connecting that to the probability of leaving your platform is something that seems to be in the realm of A/B experiments, but we have some of those experimental-design problems. You're thinking about rolling out a new algorithm. You know it's going to hurt some people. You want to know whether those people would leave, so you need to know how sensitive they are to this change in the algorithm.
Now, in order to evaluate that, you can actually run a set of tests on that specific targeted group. Now you're starting to get quite a fine-tuned experiment, and I can see why people at companies might be saying, "This is a little bit too complicated to think through for the moment. I'd rather do something a little bit simpler," but I don't think that's how things are going to be forever. I think this is relatively new stuff, and we're starting to get a better understanding of where those trade-offs are.
A lot of my colleagues in computer science are working on exactly these questions of optimal design of experiments. If you give them a well-thought-out statistical model of how you think the world works, up to some parameters that you don't know, they are going to find a very good way to test those parameters. But we're not quite at the point yet where the economics has converged on models that we trust enough to hit with the available computer science and say, "Let's work out optimal testing of these models, optimal learning of the parameters of these models."
Andrey F.: That seems like a very high level discussion. Can you give a concrete example in which you might want to learn the parameters of an economic model?
Greg L.: Yes, sure. Let's think about the problem faced by eBay or Amazon when they think about which sellers to allow onto their marketplace, for example. One consideration is that if I allow everybody to join my marketplace, I get a very thick marketplace. Sellers are willing to compete for customers, and that presumably presents customers with an array of diverse alternatives, possibly at lower prices in the face of competition.
On the other hand, I want to offer some black-and-white guarantees. I want to be able to say, "Everybody on our platform is reliable. Everybody on our platform is going to get their goods to you in two days flat." Those two things are incompatible. I either have to exclude people who are unreliable, or who are shipping from Alaska and therefore will find it virtually impossible to get goods to the customer in two days, or I have to let everybody in, and then I have this population management problem, which companies seem to think about primarily on the seller side.
There is some pruning of bad buyers as well, for the sake of the good sellers; there are two sides to the pruning. In those cases, you want to know: can I manage people into becoming better citizens? If my problem is that the person in Alaska can't get goods to people in two days, can I find a way to induce them to do it in two days, or to behave better through changes to my system, or is that just something I can't possibly fix? I want to know how sensitive they are to the various instruments you've got at your disposal.
Andrey F.: You're thinking about taxes or differential fees for some of them?
Greg L.: Yes, exactly. Differential fees would be a nice, standard policy instrument. The other policy instrument that I think people have underestimated, but which seems to have a lot of power in practice, is the search algorithm itself. These search algorithms are usually opaque. Sellers on eBay don't quite know what gets them to the top of the search results in response to a query, but when eBay made free shipping something that pushed you way up the rankings, suddenly everybody started offering free shipping. People figured it out.
The algorithm itself, the possibility of being exposed to a customer who might buy your product, is very powerful, and if you just start up-weighting certain features of what the seller is offering, then pretty soon sellers will either figure it out or will die, in the sense that they won't be on the platform selling there much longer.
Andrey F.: It's pretty clear how the platform can try to influence the behavior of sellers, even if it's not always clear what the response of those sellers will be. But the trade-off, of course, is that there might be some buyers who really want a good deal and don't care whether they get the good in two days or in five days. In fact, that seems to be a bet that this new company, Jet.com, is making regarding their deliveries.
The economist's role in this would be to enumerate the various trade-offs and, for each of the trade-offs, try to measure what's going on. One of the key benefits of ensuring that every seller on your platform will deliver within two days is that consumers can know this, and that might induce even more consumers to come to the platform. Measuring that inducement effect is going to be really important for determining how important this policy is for a platform to pursue.
Greg L.: Yes, and it's not easy, because a lot of these things are long-run effects, and I think this is where some of the experiments don't quite give you the right answer. You see that you get some sort of response over the ten days you run this A/B experiment, and you think that's the right answer at the end of the day, but actually, some of this stuff takes quite a lot longer.
There is a sense in which I feel like economics is subservient to strategy, in the way that maybe you were saying before. Economists are good at, and we aspire to be very good at, enumerating what the possibilities are: in some sense, tracing out the possibility frontier. You can have A or you can have B, but you can't have both. At some level, it's a higher-level decision as to where a company wants to be, what they want to market themselves on, and what they want their vision to be.
It's hard, I think, in this marketplace with so many platforms, especially for newer platforms, to be at all coherent about what your vision is. "We're going to be the company that delivers you your products, fast and reliably, and they are going to be good products." That is what Amazon is all about.
It's true, they could also be the company that gives you the option of having it in five days, slowly, and maybe they can do that, but there is a danger of diluting their message in the marketplace. That message matters because, as you say, the message is what drives people in. I think economists are, primarily at least at the moment, in the role of saying, "This is what we think you can get, and then you tell us what you want."
Andrey F.: Got it. One thing that economists should be good at, but I find that they are not, is actually thinking about what the right fees to set are for a marketplace. I was wondering whether you have any thoughts on that. Let's say you're at eBay or Amazon, and you want to choose which side of the market to levy a transaction fee on, what that fee should be, and how the data can help inform what that fee should be.
Greg L.: I have some thoughts that pop into my head. One is, you're constrained by norms a lot. You typically don't make buyers pay a lot, and if you change that norm, you're probably going to face a very strong mass exodus. There is a different world in which people would not be that price-sensitive, but in the world in which they are accustomed to getting everything for free, they are extremely price-sensitive.
That constrains you, and so then you know where you're extracting the fees from. You're going to be extracting them from the sellers.
Andrey F.: In some markets, that's not actually true. For example, on Airbnb, there are also guest fees.
Greg L.: That's true. You're right; I stand corrected. Was there a pre-Airbnb world in which there was a different norm? I'm trying to think... It really is this question of what you are used to. I think the buyers would revolt on eBay if you were suddenly to start charging them fees, but maybe you're right. Maybe now, going forward, Airbnb and every Airbnb competitor manage to extract money from the buyers; I'm not sure.
We know what the theory on this says. The theory says you extract your money from the more price-inelastic side, the part of the market that is least likely to flee if you raise prices on it. It also depends a lot on whether there is multi-homing or single-homing. By that, I mean whether people have decided to join just one platform or whether they are across multiple platforms.
It's typically the case that one side is single-homing. Think, for example, of the case of newspapers, with advertisers on one side and people reading the newspapers on the other: an advertiser will be advertising across multiple newspapers, but readers will typically read one newspaper, maybe two. If they've got strong political preferences, it might be just the New York Times or the Wall Street Journal. In that case, what you want to do is extract the rents primarily from the advertisers, and the carrot you hold out to get them to pay you money is the monopoly readership that you control, the people who really want your content.
This kind of theory gives some idea of where to extract. What we may not be quite as good at, as you say, is exactly how much, what the exact fee to extract is. Again, we have a theory on this, and basically it comes down to elasticities, but elasticities of marginal types. This is something my colleague Glen Weyl has worked on. You really have to understand who the people are who are most likely to leave your platform when you change the fees.
It's not your general population you're worried about. There are some New York Times readers who wouldn't leave even if you raised their prices quite a lot, but there is some sub-population that's really sensitive, and you have to know them; that's where you'd like to do the pricing experimentation. You can do a little bit of that through coupons, some sort of experimentation to see if you can get a few more people on or a few more people off, but it's often hard to know what the changes are going to be.
This is exactly the problem of demand estimation that you mentioned earlier. What's the demand for my product? How does it change if I change my price?
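To make the coupon-experiment idea concrete, here is a minimal sketch (with entirely hypothetical numbers, not anything from an actual eBay or Microsoft experiment) of the arc elasticity you might back out of such a test: compare the control group's price and quantity to the couponed group's.

```python
def arc_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) price elasticity of demand between two observed
    (price, quantity) points, e.g. a control group at price p0 and a
    couponed group at effective price p1."""
    pct_dq = (q1 - q0) / ((q0 + q1) / 2)  # percent change in quantity
    pct_dp = (p1 - p0) / ((p0 + p1) / 2)  # percent change in price
    return pct_dq / pct_dp

# Hypothetical A/B result: a coupon cuts the effective fee from 10.00
# to 9.00 and lifts purchases per 1,000 visitors from 100 to 112.
print(round(arc_elasticity(100, 112, 10.00, 9.00), 2))  # -1.08
```

This is only a short-run number measured on the couponed sub-population; the long-run and salience caveats Greg and Andrey go on to discuss still apply.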
Andrey F.: I'd like to point out that in some sense you've simplified that. First of all, many newspapers charge readers a price as well; you are extracting money from both sides of the platform. Second of all, what is really important in that case is these cross-side externalities: how much is the marginal user worth to the advertiser? In this sense, it's not just a simple question of elasticity, of how responsive each side is to the fee you're charging, but also of what the spillovers are across the different types of agents on your platform.
Greg L.: Yes, you're right. The standard monopoly pricing formula is: you charge people what it costs you to make the product, plus a markup which is related to the elasticity. In the platform case, it's a little bit different. You charge them cost again, plus a markup which reflects how much they like your product and how price-insensitive they are. Then, you discount that back by how many people they'll attract on the other side.
In the case of newspapers and advertising, of course, we don't think advertisers typically attract more readers. In fact, that correction probably raises the price a bit more: it is likely to exclude some advertisers, even if they would otherwise be profitable, because you're going to destroy your readership a little bit. I think we all understand this from magazines: some have decided that advertising turns off some readers, and others are quite careful about preserving some sort of high content-to-advertising ratio.
You're right, there are these things, but none of these kinds of issues is unsettled in theory. They are all pretty crisp. We have these models in quite a lot of generality. If you think that the world is as simple as people choosing newspapers depending on the price, and maybe depending on who else is on the other side, we know how to work that out in principle. What turns out to be hard in practice is actually putting numbers to it.
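Greg's verbal recipe (cost, plus an inverse-elasticity markup, minus a discount for the people you attract on the other side) can be written as a one-line pricing rule. This is a sketch of the textbook two-sided-market pricing condition, not anything specific to any company, and all the numbers below are invented for illustration.

```python
def platform_price(cost, elasticity, cross_side_value):
    """Profit-maximizing price to one side of a platform: the usual
    inverse-elasticity (Lerner) markup over cost, with the effective
    cost reduced by the value an extra user on this side creates for
    the other side. Solves p = cost + p / elasticity - cross_side_value."""
    assert elasticity > 1, "requires elastic demand"
    return (cost - cross_side_value) * elasticity / (elasticity - 1)

# Hypothetical numbers: serving a reader costs 5, reader-side demand
# elasticity is 3, and each extra reader is worth 1 in ad revenue.
print(platform_price(5.0, 3.0, 0.0))  # 7.5 (plain monopoly price)
print(platform_price(5.0, 3.0, 1.0))  # 6.0 (discounted back for cross-side value)
```

When `cross_side_value` exceeds cost, the formula turns negative: the platform optimally subsidizes that side, which is one rationalization of buyers paying nothing.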
Andrey F.: I guess the hard part about estimating these elasticities is that what you really care about is the long-run elasticity. Let's say that we ran an A/B experiment where we changed the fee of our platform by a little bit and saw what happened. In the short run, there might not be any response at all, because people might not have noticed or might not have made the adjustments needed to switch platforms.
In the long run, this might happen, or new platforms may arise as competition, having observed the fact that you as a platform are charging very high fees. That's not inefficient, exactly, but it leaves room for entry. Knowing how big those threats are when choosing the platform's fee structure is something that's, I don't want to say unanswerable by the data, but it is hard to think of a simple way to get at those magnitudes.
Greg L.: Yes. One thing you point to is salience. You are worried on the one hand that I change my fees in the experiment and see that not much happens, but actually, three months later, people notice. The other thing I'm really worried about in practice is that if I move from a small experiment to a platform-wide fee change, that change is very salient. Everybody might simultaneously re-optimize at exactly that point and leave my platform.
This is certainly a problem. I think companies are therefore reluctant to make these changes very often, until they feel like things are really out of whack. And you probably want to do some robustness checking, which is, I think, what most sensible people would do in this situation: you come up with some sort of elasticity that you treat as the truth. You say, "This is what I got in my experiment. It looks like that's probably the right number, but suppose the number were twice as big or half as big; does this still look like a good idea, or does it now look a whole lot more marginal?"
In most of econometrics, we're not Bayesian, but in planning, we really should be. People should be thinking a little bit about what they actually think they know, what their product is worth, and how much the data is able to move their priors, and I think you can learn stuff from A/B experiments.
In some places, you can learn quite a lot pretty easily, but with pricing, it's often a lot harder to learn something. Maybe you learn something from an experiment, and you run a survey, and you look at past experiences with similar changes at other websites, and you try to piece together some sort of composite picture, and you do some scenario planning.
This is what I feel is the corporate reality. There is no one answer and no magic bullet. You end up piecing together a lot of different pieces of information.
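The robustness check Greg describes, treating the measured elasticity as the truth and then re-running the decision with it halved and doubled, can be sketched with a simple isoelastic demand curve. Both the demand form and every number here are hypothetical.

```python
def profit(price, cost, scale, elasticity):
    """Profit under isoelastic demand q = scale * price ** (-elasticity)."""
    return (price - cost) * scale * price ** (-elasticity)

# Proposed fee change 10.00 -> 11.00 at marginal cost 2.00; suppose the
# experiment suggested an elasticity near 1.5. Stress-test it at half
# and double that value.
for eps in (0.75, 1.5, 3.0):
    gain = profit(11.0, 2.0, 1000, eps) - profit(10.0, 2.0, 1000, eps)
    print(f"elasticity {eps}: fee raise changes profit by {gain:+.1f}")
```

In this toy setup, the raise only pays off in the least-elastic scenario, so the decision hinges on exactly the number the experiment measures least precisely.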
Andrey F.: Yes, it's not an elegant academic solution, regardless of how many academics are sitting around the table and trying to make the decision.
Greg L.: Yes, exactly, and the academic voice at the table is, "We have this point of view. It's a very clear, very crisp point of view. We think the answer is X," and that's sort of attractive. I think that gets you a seat at the table, but ultimately, you're going to end up having to defend that point of view pretty aggressively, or change your mind. You're probably going to change your mind sometimes, as well.
Andrey F.: That's definitely true. There are other considerations that are outside of standard optimization theories and economics that matter for decision making.
Greg L.: Yes, exactly. Things that we haven't yet put into models. Things that somebody at a meeting says and you go, "Yes, that sounds right, but it doesn't quite fit in my model right now. Give me three or four months and I'll get back to you."
Andrey F.: There are a couple more topics that I want to touch on. One of them has been the topic of a lot of discussion within the economics community, but I think people outside still don't quite get it: what is the difference between the economist's approach and the machine learner's approach to studying data?
Greg L.: Yes. I think that's a great question. I think it's a very interesting question, and people's views are evolving. The background I come from is structural econometrics, and structural econometrics basically means a commitment to a model. When you view data, you view it through the lens of a model, and you have to say exactly what your model is.
That forces all kinds of restrictions on what's going on and how you interpret your results. Machine learning is model-free. Typically, you give me a problem and I'll give you an algorithm that solves it. I think one of the things that's very appealing about machine learning is exactly this practicality. Here is a problem; here is a nail; we'll give you a hammer for that nail. It's very specialized and it's also fully implemented.
One of the things that's a little frustrating about structural econometrics sometimes is that you write down a model, we give you an optimization problem, and then we say, "Good luck. Go optimize," without handing you the exact algorithm you're going to use. The thing is that with ML, you'll often do a very good job at prediction. You'll be able to say, "We think that this person should be classified as an A type or a B type," or "We think that revenues next year will be this," or "We think that with high accuracy I can predict the price of this product as a function of its characteristics," but we don't know what that means, because we don't have a model that underlies it.
We often don't understand causal relationships either. We know that this price is correlated with these characteristics. A very good model for explaining how much a PC will cost would take something about the screen size, something about the keyboard, and which Intel chip line it's running. That will tell you a lot, which is great. But now I ask you: what if I were to change the price of an Intel computer and ask about the quantity that's sold? Even if I did this exercise with quantity on the left-hand side and ran the ML model on the right-hand side, I might get something very silly, like quantity increasing in price.
That might be a really good predictive model, in the sense that it does a great job of explaining the data, but it might do horribly in the sense that, if I actually were to change the price, that is not what would happen to quantity. It wouldn't go up. That doesn't make any sense.
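The upward-sloping "demand curve" Greg warns about is easy to reproduce in a toy simulation with entirely synthetic data. Here the seller prices into an unobserved demand shock, so a naive regression of quantity on price flips the sign of the true causal effect, which is set here to -1.5.

```python
import random

random.seed(0)
n = 10_000
prices, quantities = [], []
for _ in range(n):
    shock = random.gauss(0, 1)               # unobserved demand shock
    p = 10 + 2 * shock + random.gauss(0, 1)  # seller raises price when demand is hot
    q = 50 - 1.5 * p + 8 * shock + random.gauss(0, 1)  # true price effect: -1.5
    prices.append(p)
    quantities.append(q)

# Naive OLS slope of quantity on price: cov(p, q) / var(p).
mp, mq = sum(prices) / n, sum(quantities) / n
cov = sum((p - mp) * (q - mq) for p, q in zip(prices, quantities)) / n
var = sum((p - mp) ** 2 for p in prices) / n
print(round(cov / var, 2))  # positive, despite the true effect of -1.5
```

Price variation that is unrelated to the shock, an instrument or a genuine experiment, is what lets you recover the -1.5 instead.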
Andrey F.: That concept is very unintuitive to a lot of people. The reason there can be a difference is that the data the machine learning model is trained on was generated in a certain way, and no one in that data set arbitrarily, let's say, changed [00:34:00] their prices just to see what would happen to demand.
Greg L.: Exactly.
Andrey F.: Whereas that's really what you're thinking about as a decision maker trying to set a price or change some other policy. In order to use the data to predict what would happen if you arbitrarily changed the price, you need to have some model, what economists call a structural model or what machine learners might call a generative model, of how people make decisions and how changes in prices would affect those decisions.
I think actually spreading the knowledge that there is this big difference is really important for the economics profession because one, it matters. It matters for making the right decisions and two, it's good for the economics profession because that's what economists are specialized in.
Greg L.: Yes, exactly. There are layers of this. First layer: I'm going to estimate demand. I have price on the right-hand side. I learn this relationship from my data, and it's wrong, because there is no experimental variation in price to begin with. As you say, nobody is actually moving the price around arbitrarily; price was very much correlated with everything else that was going on. That's a problem.
Now, I can do a little bit better. Maybe I have some experimental data and I can really try to improve that predictive performance, and that could give me a better demand equation. Now I think to myself: given that demand is downward sloping, should I maybe lower my price a little and get a few more customers? I can think about that modeling, and then somebody in a business meeting says, "What are my competitors going to do?"
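A sketch of why experimental variation helps (all parameters here are illustrative assumptions): suppose unobserved quality shifts demand, but prices are set at random, independent of quality. Then even a naive regression of quantity on price recovers the true downward slope, because the randomization breaks the correlation between price and quality.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

quality = rng.normal(0.0, 1.0, n)
# Experiment: prices set at random, independent of quality.
price = rng.uniform(8.0, 12.0, n)
# The true causal price effect is -1.0 (an assumed parameter).
quantity = 50.0 - 1.0 * price + 4.0 * quality + rng.normal(0.0, 1.0, n)

slope, intercept = np.polyfit(price, quantity, 1)
print(f"fitted slope: {slope:.2f}")  # close to the true -1.0
```

With randomized prices, the quality term acts as noise rather than a confounder, so the simple fit is a consistent estimate of the causal effect.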
You go, "That's something that's not in my model either." Now you want to think a little bit harder, and you write down a model in which people are actually responding to each other. This is exactly what structural IO is good at: saying, "Look, there are these things that we think happen in marketplaces. We think that people set prices [00:36:00]. We think they compete against each other. We think that if you write down that model and you're willing to believe it, then we can tell you what's going to happen in these scenarios."
"If you don't want to write that down, then we have to hold a lot of other stuff fixed. We have to just believe, for example, that everybody else is just going to be within their last period, and let me give you some predictions."
I think the discipline of having a model is a very useful one, almost regardless of whether you make the model simple or complicated. It's just important to have a model, to take it seriously, and to think about what data would be required to learn that model.
Andrey F.: Got it. You bring up an interesting example. First of all, there are really two prediction problems: one, what would happen to demand, and two, what would your competitors do? It's easy to see the asymmetry: as an online platform you have millions of consumers, but you probably only have one or two competitors.
That makes any machine learning algorithm useless in this case, and more generally, it makes most estimation or statistical methods useless. You really do have to rely on some model of the world. Do we think the models economists have of competition are good in this case? Is there any example in which they do a good job of predicting, or are they more simply heuristics, like, "You should just think about whether anything is preventing your competitor from lowering prices as well, and how that would affect demand"?
Greg L.: I think the track record shows that they are not bad models. There are other examples, but the one I am thinking of right now is a paper by [inaudible 00:37:47] on the Texas electricity market. They look at how firms at the start of the market [00:38:00] learn to compete with each other, and at the bidding profiles they eventually submit. It's a very complicated market. I don't know if you know anything about electricity, but it's just very messy. As an electricity generator, you basically have to submit these supply curves.
It's kind of a complicated thing to learn, and they show that over time, the big players end up best responding to everybody else. Equilibrium is this idea that everybody should be best responding to everybody else. People are doing pretty well; they are doing the best they can, given what everybody else is doing. I have a paper with [inaudible 00:38:38] looking at the British electricity market in a slightly different setting, and there we find somewhat more mixed evidence.
We find that people eventually do reach equilibrium, but it takes them three and a half years in a market where they only compete every month. In the Texas example, it's much more like they compete every day, so the process goes a whole lot faster. I don't think we know for sure that equilibrium is the right way to think about these questions; I think it definitely varies. But consider something as simple as the best-response exercise: "If everybody else does this, I should do that; if I do that, you should do this." If that's an easy exercise for everybody to carry out, it probably would not be crazy to expect that we end up in equilibrium pretty quickly.
If that's a hard exercise to do, if it's hard for me to conjecture what the world would be like if you were to price differently, then yes, I should expect learning to be a little slower. I view those models as probably better than assuming the competition is not going to do anything in response, especially if I'm making a big move.
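The best-response logic can be sketched with a toy differentiated-products pricing game (the demand parameters below are assumptions for illustration, not anything from the episode). Each firm repeatedly best responds to its rival's last price, and the process converges to the Nash equilibrium, where each price is a best response to the other.

```python
# Two firms with differentiated products and zero marginal cost.
# Demand for firm i: q_i = a - b * p_i + c * p_j  (illustrative parameters).
a, b, c = 10.0, 2.0, 1.0

def best_response(p_other: float) -> float:
    """Price maximizing profit p * (a - b*p + c*p_other): first-order condition."""
    return (a + c * p_other) / (2.0 * b)

# Each period, every firm best responds to what its rival did last period.
p1 = p2 = 0.0
for _ in range(50):
    p1, p2 = best_response(p2), best_response(p1)

# The fixed point is the symmetric Nash equilibrium p* = a / (2b - c).
print(p1, p2, a / (2.0 * b - c))  # all approximately 3.333
```

Because the best-response map here is a contraction (its slope c / (2b) is below one), iteration converges quickly; when conjecturing rivals' responses is hard, as described above, learning can be much slower.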
Andrey F.: That's fair on one hand. On the other hand, you do have some policies which are very opaque. If you change your recommendation algorithm, in theory, your competitors should respond, but in practice, will they even know that you changed your ranking algorithm, or how specifically you changed it? These things are hard to [00:40:00] detect.
Greg L.: Yes, and I think it also really varies across industries and where competition happens. There are industries in which price competition is fierce, markets in which people really think the way they win customers is by competing on price. There are other markets in which people think that, for example, a recommendation engine really matters. The recommendation engine that I have is not something that my competitor has to respond to at all, really.
To some extent, I'm going to drive a few customers away from them, but it's somewhat technological. They are going to keep innovating and we're going to keep innovating. We are both going to be opaque, and these are not free controls we can just move easily to match the other person. Price is somewhat special. Price is easy to change, and it's usually pretty transparent to my competitors what I am doing.
Think about cloud computing, for example. Amazon, Microsoft, and Google have all been cutting their prices recently, a very sharp price decline. Why? Because they all think it's a very, very important growth area for their companies, and that whoever gets the most customers on their platform wins, so they are in a price war. That just makes sense, and I don't think I'm saying anything controversial when I say this: if I were sitting here at Microsoft thinking, "What would they do if I dropped my price?"
I have to think a little bit about what Amazon would do, because that's the main dimension in which people are competing.
Andrey F.: This brings up an interesting question then. Are these prices identical and if not, why are they different? Is it something about the beliefs of the competitors or is it something about the technology of the competitors? What do you think?
Greg L.: This is a factual question to which I don't claim to know the answer. I think the prices are not quite identical. I think Microsoft and Amazon have a pretty similar price; Google might be a little bit cheaper, but they are offering different technologies [00:42:00].
Microsoft is [inaudible 00:42:02]. It's linked to some of our other services and integrated with the enterprise software. Amazon, on the other hand, was first to market. They got in a little bit quicker and they had some innovations. We price slightly differently. I think these offerings cater to slightly different customer bases at some level, but as we talked about earlier, what matters is the marginal customer.
You've got these existing customer bases, and you obviously want to keep them, but more and more people are moving to the cloud. You want to know who is going next. The question is, what would really make those next people move? At some level, this is a tech market. Product design, quality, reliability of service, these things really matter, and everybody is fighting really hard to get the best possible product out there, but the one thing you can move in the short term is price. People are doing what they can there.
Andrey F.: Got it. This brings up the last topic that I wanted to talk about, which was the chicken and the egg problem, which is a classic problem when people are thinking about platforms.
The brief synopsis is that a lot of things are very valuable at scale. Think of Facebook: potentially everyone in the world is on it and can interact with each other and find out information about each other. But at the beginning, there is very little value to anyone joining the network.
As the digital era has progressed, do you think that the chicken and the egg problem is becoming less severe or is in fact more severe because incumbents are now sitting in all the major platforms and it's really hard to topple an incumbent?
Greg L.: Two questions there. Firstly, if you are in a new industry, the chicken and egg problem is the same problem it's ever been [00:44:00]. In fact, it might actually be worse, because consumer attention is fragmented. People work across many, many devices, and there are many, many new apps floating to the top of our attention. Unless you've got a really cool product, it's hard to get anybody on. If it's hard to get anybody on, then you can't build up the platform that will make a compelling product for everybody else. That's the standard chicken and egg problem, and we're in a world where I think it's harder to get people's attention than it used to be.
Andrey F.: I think I would disagree, in that the technology to get people's attention is actually much cheaper now than it used to be. Let's say I'm a small, capital-constrained firm. Before, in order to get people's attention, I might have had to pay a lot of fixed costs for a commercial on TV, or for some other type of advertisement. Whereas now, if I have some sense of who my platform is going to benefit the most in the short run, I can start acquiring just those customers, and then slowly build out the platform to achieve scale.
This kind of assumes that at the beginning, there are at least some people for whom there is still some value of the platform, but then there at least becomes a path to achieving scale. Whereas before, it might have been more difficult.
Greg L.: Yes, you make a good point. You're giving me a basic lesson in supply and demand: it's harder to get people's attention, but we have better technology for getting it, so the cost of acquiring attention may have fallen even as the demand for it has risen. Yes, I agree.
I think that works best in cases where a recommendation algorithm can do well: if you like this, then you'd like that, so I know whom to target. For a genuinely new product, something that comes out of nowhere, I don't know who my new customers are going to be. But yes, that's right. It's not obvious [00:46:00], as you say, which way this is going over time, but it is the same problem. It's still chicken and egg, especially in the case of two-sided markets. Which side do you go after first? How do you get them on board? Is it straight-up marketing? Is it exclusive content deals with one side in order to get the other side interested? It's not clear.
Then, for the places where we have elephants in the room, where really big players are already in place, I think it's difficult to achieve scale, and I also think the end points are quite different. A lot of the time, if you're successful, you just get bought up by one of the big guys, as opposed to taking the old-school path of growing and getting to scale yourself. If you can deliver value to people, then you're suddenly a very attractive acquisition target.
Andrey F.: On that note, I think we'll end the conversation. This has been really interesting and thanks for coming on the show, Greg.
Greg L.: Thanks very much for having me, Andrey.