Economist and Best-Selling Author
Is This the AI Renaissance?
September 11, 2023 | 38:08
Artificial intelligence (AI) has the potential to greatly enhance human capabilities and improve lives. But the implications are far-reaching and not fully understood. In this episode of The Active Share, Hugo sits down with Tyler Cowen, best-selling author, podcast host, and Holbert L. Harris chair of economics at George Mason University, for a conversation about the impact of AI on labor, capital, business models, and global connectivity.
00:36 | Host Hugo Scott-Gall introduces today’s guest, Tyler Cowen.
01:35 | Business models that get interrupted by AI.
04:35 | Is distributional power still important?
07:12 | Does AI help us be more productive or give us more time distracted by technology?
08:55 | How does AI move the human race out of stagnation?
10:14 | Constraints of AI.
14:19 | Will the world become more or less connected?
18:21 | Will AI bring more equality or inequality?
22:24 | Does AI create its own truth?
33:37 | Large language models and AI.
Hugo Scott-Gall: Today, I have with me Tyler Cowen. Tyler is the Holbert L. Harris Chair of Economics at George Mason University and serves as chairman and faculty director at the Mercatus Center at George Mason. He graduated from there with a B.S. in economics and received his Ph.D. in economics from Harvard.
He is coauthor of the economics blog, Marginal Revolution, and author of several best-selling books. His latest is Talent: How to Identify Energizers, Creatives, and Winners Around the World. He writes a column for Bloomberg View, has contributed extensively to national publications, and he is also a podcast host of the excellent Conversations with Tyler. Thank you very much for joining me today.
Tyler Cowen: Thank you. Happy to be here.
Hugo Scott-Gall: Okay. Let’s get cracking. So, I admire your podcast interview style. You ask quick, to the point questions. Here’s mine in the style of you: which business models get created or disrupted by this wave of AI?
Tyler Cowen: Many large business firms will become much smaller. So, if you look at OpenAI, which of course built ChatGPT, when they made their key breakthrough, they had maybe only 250 employees, and AI did a lot of the work. If you look at Midjourney, which is the best image creator AI, I believe last I heard, they had 13 employees. Imagine a world-famous company with only 13 people working there.
So, in so far as AI does the work, the long-standing ascendency of big business will in some sectors come to an end, and that will be shocking. There will be many more projects, but the entities undertaking those projects will hire fewer people.
Hugo Scott-Gall: So, Instagram, before it was bought by Facebook, now Meta, didn’t have many employees, but it ended up inside a behemoth, a dominant company. So, do you see wider moats for the incumbents, the big tech firms, or do you see them disappearing in a world that goes open source? So, will AI reinforce competitive advantage for some companies while being disruptive for others? And what do you think about the types of companies it’s disruptive for?
We can all agree the travel agents were not consigned to the dustbin of history by the internet, but it was pretty damn close. They had to go niche. So, what types of business models are most vulnerable, you think, to the coming wave of AI, and do you think that some incumbents with tremendous distributional power, maybe high switching costs, strong network effects, might get stronger because of this?
Tyler Cowen: Well, the large tech companies have been laying off quite a few people for a while now, even before AI. I’m not pessimistic about their business prospects. But again, just in terms of their size as measured by number of employees, the notion that you get out of Harvard; you’re a bright 22-year-old, and of course you think, “I’ll go work for Google or Facebook,” I think those days are over. They don’t need so many people. Those companies are expanding what they do and limiting their workforce at the same time.
And in terms of mergers, I think we will see more alliances and fewer mergers. So, Facebook buying up Instagram and WhatsApp did, it seems, lead to overemployment in those product lines. And what OpenAI has done is build an alliance with Microsoft rather than just sell the whole company to them. So, it seems to me that’s working better. You keep dynamism, but you get the advantages of Microsoft’s huge presence in the cloud computing market and all of their resources.
Hugo Scott-Gall: So, distributional power still really matters. And does that matter because I still want, as the consumer—whether that is B2B or B2C, I still want one entry point for my kind of digital experience? Or do you think I’m happy to be a subscriber to lots of different large language models, LLMs? So, is distributional power still super important? And what you just said is that you can take someone else’s product and put it in your bundle and put it through your distributional system. Is that still a big competitive advantage?
Tyler Cowen: I think what consumers want is something like two or three different entry points and not too many. What I see actually happening is a proliferation of an excess number of entry points. Too many walled gardens. Too many gates. Too many logins required. And it’s not what people want. But if you don’t do it, your data could be sucked dry, or you could lose your competitive advantage, or your media, and you can’t charge people for what you’re doing.
So, I think we’re headed to a world where everything is too cumbersome. There’s too many passwords to remember, too many different sites to go to. You have to ask yourself: while I was on Twitter, what about Jack Dorsey’s new product? What about Mastodon? What about Threads, now from Meta? How many of these are people going to do? I think we’re testing people’s patience with the very internet itself.
Hugo Scott-Gall: And here, we’re already talking about consumers, individuals. But I think about a highly skilled manufacturing company. Surely, they just will adopt—they’re very good at adopting technology and staying relevant, staying industry-leading in what they do. So, I think there’s a swath of the economy that will just absorb this and use it intelligently. Whether it makes them more productive, whether it makes their product better, whether it makes the customer relationship better, it doesn’t necessarily change their overall position.
Tyler Cowen: Oh, I agree. I mean, say you’re a telecom company, right? You’re just going to use AI. It may or may not be visible to your direct customer base, but you’ll become a lot more productive. You’ll hire fewer people. It won’t be an emotional event for anyone other than those who are laid off, and the world is just going to move there pretty quickly. I think a lot of businesses are actually way behind on AI, but many others are not. You look at Stripe or Coca-Cola. You see these rather rapid alliances being built with OpenAI. And it’s just going to be much more of that.
Hugo Scott-Gall: So, overall, the who-benefits question. In some parts of the economy, I can see how this reinforces corporate profits. It maybe increases them; productivity flows back to the owners of capital. But in other parts of the economy it is clearly going to increase the consumer surplus, and we saw that with the simple example that Uber can now tell me what time my car is coming. That has some benefit to me, but I probably just spend the time scrolling in a not very productive way.
So, the consumer surplus, I would assume you would agree, increases from this wave of AI. Does that surplus get reinvested and turned into anything productive, or is it more just scrolling and not doing a whole lot because we’re distracted by technology?
Tyler Cowen: Well, I don’t see most of it as scrolling. So, if you take institutions, right now, so many of them have their information poorly organized, and it’s all siloed and hard to get at. It would just be in the future very easy to get at your institution’s information, and it would be organized for you however you want, and it will mean much less scrolling, say, through databases.
So, I think the promise of AI is less scrolling. You can just talk to your device and say, “Send me the 10 best Tweets from today that you think I would like the best.” And so on. So, I view it as a liberation of human energy and the ability to spend more time touching the grass. But of course, we’ll see.
Hugo Scott-Gall: So, do you think that the benefit to the average individual of access to an LLM is something that just gives them a happier life, or does it actually make them a more productive economic asset?
Tyler Cowen: I don’t think we know yet, and I’m not sure what the average individual is or means. So, I think the people who are willing to take initiative, and learn a lot from LLMs, and manage more projects with all these free, essentially, research assistants, those people will do very, very well. I doubt if that’s the average individual, but it is still the case that the average individual will benefit from all those new projects. That’s the way I tend to look at it.
Hugo Scott-Gall: You’ve written about stagnation. Are we on the cusp of a productivity wave that actually moves us out of stagnation because of AI, because of this wave of AI?
Tyler Cowen: That’s exactly right. I think this wave of AI will move us out of stagnation.
Hugo Scott-Gall: What’s the mechanism though? How does it flow to economic growth? Getting back to: is it a productivity surge? Is it something different? Is it actually a much more virile period of competition in an economy that maybe breaks open some monopolies, redistributes some rents? How does it work mechanistically?
Tyler Cowen: I think you have three major areas where there will be tangible advances quite soon. One is what you might call education and tutoring. Another is medical diagnosis. And then, the third is how institutions organize, and order, and store their information. The third one will probably take longer than the other two. With a bit more of a lag, I think the rate of scientific progress will be advanced because people running laboratories now have all these new intellectual resources, and aides, and assistants.
The most general way of looking at it is you could imagine now every person who wants it has a free research assistant, colleague, and architect—is a way someone once put it. Now, how valuable is that? I don’t think we know yet. And maybe a lot of people just won’t do anything with that. That’s to some extent what I’m observing.
But it is revolutionary, and AI also is likely to get better. I think it’s hard to predict at what pace, but we’re probably just seeing the beginning of all of this.
Hugo Scott-Gall: And what are the constraints? Is it data, to your point around how data is organized? Is it skills? Is it actually just broad access? Is it compute power? Behind compute power, you’ve got energy. How do you think about the things that if there were to be disappointment around here and an under-realization of potential—the things that are likely to constrain it?
Tyler Cowen: I’m most worried about human inertia. So, I give talks or discussions on AI all the time, and just the number of people who don’t seem that curious, or they haven’t yet spent time with GPT-4 Plus, or they’re content with the inferior version and don’t seem to notice or to care. And these are typically very smart people, often high net worth, or a lot of them have Ph.D.s. So, humans are falling down on the job is what I see so far.
Now, there are plenty of ways in which current AI could be better, but we’re not close to having fully exploited the capabilities we now have.
Hugo Scott-Gall: Pretty much on every podcast, I do mention Charlie Munger, and he always says, “Show me the incentives. I’ll show you the behavior.” Is it an incentives issue?
Tyler Cowen: Well, change typically is slow historically. If you look at electricity, which of course was a revolutionary development, by the estimates of some economic historians, it took us 40 years to really reconfigure factory floors fully to take advantage of electricity. Now, I don’t think AI will take 40 years, but if electricity took 40 years, you need to reevaluate just how rapidly a lot of human beings will adjust. And there is a big profit incentive, of course, to have electricity in your factory.
Hugo Scott-Gall: So, returns to labor, returns to capital. What are the obvious, I guess, most positive, most bearish implications for labor? And then, the same again for capital?
Tyler Cowen: Well, I think if you’re a worker doing routine work with words or routine back-office functions, the chance your job disappears in the next three to five years is really quite high. So, those individuals typically will be worse off. If you are someone who is creative, takes initiative, and knows how to manage research assistants, whether human or virtual, you will be much more productive. If you’re at the top of your field and willing to learn this stuff, you will be much more productive and more successful.
Now, in terms of capital, I think it will favor large regions with English language, a fair amount of stability, and where people will want to put all these new projects. So, in the world, I think India and Kenya are relatively likely to be winners. Southern England and North America. Those would be my picks for the places that will benefit the most. And small backwater countries that are already not doing so well I think will lose talent and resources. And there are many of those.
Hugo Scott-Gall: Because they’re held back by poor communication infrastructure, so it’s an access point? Or is it something else?
Tyler Cowen: Well, let’s just say there’s going to be a lot of new projects of whatever kind. Energy, science, arts, who knows what. And you want to put your new project somewhere in Africa. Are you going to choose Burkina Faso, which is very small, or are you going to choose Kenya? I think—it’s just a speculation, but I think you’re going to choose Kenya. So, geographically, there’ll be a greater concentration of projects in areas with some scale.
Hugo Scott-Gall: You didn’t mention China, and I’m always interested in competitive advantage between countries and what drives more successful countries relative to less successful countries. China has a fantastic database. You have no choice. You surrender your data to the government there. So, it has a tremendous database. It has access to computing power. It has extremely good education. Why won’t China be a winner from this?
Tyler Cowen: Well, China is all about politics, whether you’re talking AI or anything else. I don’t feel I can predict Chinese politics. But periodically in world history, it seems Chinese politics goes badly wrong. That may or may not happen now. But that would be the key variable. I would just say this: AI may prove to be China’s curse. If it strengthens the regime and allows for much more and more thorough surveillance, that’s bad news, in my view.
The possible upside is simply that Chinese citizens will want to use VPNs to access western large language models, and that in essence breaks down the great firewall because you can ask those models anything, and they will tell you more or less the truth, including about the Uyghurs or Tiananmen Square or whatever else. So, it could be an enormous soft power victory for America and the west.
Hugo Scott-Gall: Does it overall make the world more connected or less connected?
Tyler Cowen: It will make most places more connected. So, I was just in Kenya, and I needed to speak to some people in their language, which in that case was Swahili. And I took out my ChatGPT app, and it just did it for me in two or three seconds. Now, there may be some countries—I don’t think China will do this. Maybe North Korea—that will just shut down all contact because they won’t want the contaminating influence of Western LLMs. So, you can imagine that China either shuts down VPN access or so monitors it that people are afraid to use it. I wouldn’t say that’s my prediction, but you can’t rule it out.
So, countries will be forced to choose either quite a bit more integrated or much less integrated. I think most countries will end up more integrated.
Hugo Scott-Gall: That is a desirable outcome.
Tyler Cowen: Again, these are very big questions. Any major change is going to have a lot of downside, no matter what it is. Just as electricity did, the printing press did. But my core view is if we humans, when we actually get more intelligence at our command, if we cannot turn that into a positive, what is it we’re hoping for? Should we prefer a world where we develop artificial stupidity and try to use that to make our lives better? That doesn’t seem very good. So, this is our big chance. Let’s not blow it.
Hugo Scott-Gall: How big of a moment is this, where you’re seeing separate lanes of technology merging? Is this like the Renaissance? Is this like the Space Race? Is this a sizable, epoch-defining moment we’re in?
Tyler Cowen: I think it’s much bigger than the Space Race, whose final implications are still unclear. I view it as comparable to the printing press, which took quite a while to matter, but when it did, it really did. And most of the gains went to users, right? Gutenberg did not become a billionaire.
Hugo Scott-Gall: Yeah. Although we still talk about him.
Tyler Cowen: Sure. And we’ll still talk about Sam Altman X number of years from now. Good for Sam.
Hugo Scott-Gall: Yes, and of course, once you can turn him into a bot, you can talk to him into eternity.
Tyler Cowen: That’s right.
Hugo Scott-Gall: So, let’s do some—
Tyler Cowen: I’m teaching and training a Tyler Cowen bot. I might do this.
Hugo Scott-Gall: Well, you should, and then I could create my own, and we could interview each other into eternity, and it could be great. But it makes sense, right? It makes sense. I mean, I’m going to ask you this question, actually, about investing.
Why not take the 10 best minds in any field and just record everything they have, capture their mental models, capture their pattern recognitions, just capture everything about how they think, and then just have the very best created into eternity in each field? You can have people who may not be alive anymore still helping you solve problems. And that still happens today. Of course, that’s education. But beyond that, in a much more active way. Is that just a slightly daft picture I’m painting?
Tyler Cowen: I don’t think it’s crazy at all. There are some barriers. One is the legal issues. So, if someone creates an AI likeness of me, do I have the rights to that, or do they? It seems to be under current law they have the rights, but I’m not sure current law will stick, and it’s certainly unclear, and that limits investment.
And also, the reinforcement training and learning you need to do on these models, it’s fairly complex, especially if you want to capture Paul Krugman, or Milton Friedman, or whoever. And that takes a fair amount of time, and the people you need to do it are quite smart, and they have high opportunity costs. I don’t think any of that will stop it from happening. But there are just a lot of different steps that will need to come together for this to be a reality.
You probably know the website Character.AI, where you can chat with great figures from the past: Napoleon, Einstein. That’s not good enough yet to be what you’re suggesting, which is that it’s almost like having the person as an intellectual contributor. But we’ll get there. But there are a lot of bumps in the road. Like do I want a Tyler Cowen bot competing with me? I guess I think I do, but I think a lot of people don’t.
Hugo Scott-Gall: Yep. I can see why. I can see why. So, we were in stagnation. Hopefully, we’re leaving stagnation. Part of stagnation is inequality, whether that’s wealth or income. Let’s talk about some social implications from this wave and further waves of AI in terms of: does it resolve or widen inequality, do you think, in the average country?
Tyler Cowen: Well, I think for the world as a whole, inequality will go down a great deal. So, so many countries right now do not have many doctors or many good doctors. And in those countries, you can right now, using the current service, get medical diagnoses that are slightly better than good North American doctors. That’s a reality. Most people aren’t doing it yet, but it’s just not going to take that long. So, that will radically lower what you might call medical inequality.
Now, within a single nation, it’s always a more complicated story. Inequality of measured income could either go up or down. We’re not sure. It could be the big winners are carpenters, and gardeners, and working-class people with skills the AI can’t mimic. I don’t think we know that for sure, but that’s the most likely scenario. And the losers would be the semi-intellectual word manipulators, the “wordcel” class, who can be copied more readily. And that would make the situation more egalitarian.
Hugo Scott-Gall: And that would embed us in the ideals of the French Revolution: that an egalitarian society is going to make us happier. Do you think it makes us happier?
Tyler Cowen: I don’t know what makes us happier.
Hugo Scott-Gall: The internet didn’t make us happier if you believe the data.
Tyler Cowen: I don’t trust those data as a study of the internet. They look at only one set of causes. They don’t look at how happy people are from all the friends they make through the internet, for instance. I think the internet probably has made 70% or 80% of people much happier, but we don’t know. I think that’s fair to say.
I think AI will take away a lot of people’s power. So, people in media who may not be that wealthy, but they have a lot of social capital, and they’re highly influential, and they’re used to being able to get their kids into Harvard, we might be in this new world where suddenly no one cares about them anymore. I think that would make me happy to see that new world. I don’t know. Maybe I’m one of those people. I think it’s very complex. But most people are not ready for it, and I think the non-income effects in many cases will be the more important effects. Sort of status and influence.
Hugo Scott-Gall: Does social trust go up or down?
Tyler Cowen: Maybe both. So, the nice thing about an LLM, you can always ask it for the objective answer, or the left-wing answer, or the right-wing answer, and it will tell you. So, it’s not like Fox News or MSNBC, where you get one thing. You could shout at your TV, “Tell me the other side of the story,” and they won’t do it, right? The LLM will.
But the question is: how many people will want to do that? It will give us what we ask for. I think we still don’t know what it is we collectively are going to ask for. It certainly has the potential to do much more to limit disinformation than to spread it, but we don’t know yet.
Hugo Scott-Gall: If all knowledge is social—you know, we’re humans—where does AI fit in with that and the notion of truth?
Tyler Cowen: I don’t know what it means to say all knowledge is social. LLMs seem to know an awful lot of things. I don’t know if I would call them social. So, I would revise the initial claim. I would say we have a new kind of knowledge. We’ve invented a new kind of intelligence in a way that is truly marvelous and takes your breath away. I’m still in awe of the magic when I use it. It feels to me like witchcraft.
And it’s just going to change how we view the world, our own abilities, how we raise our kids. What if your kid grows up talking to the LLM, which then custom trains on how your kid learns? What’s your kid going to be like? Will you approve? Will you disapprove? Does it matter if you disapprove? Maybe you’ll just be the old fogie, right? So many huge questions. Again, I’m generally optimistic. When we have more intelligence on the table, my view is we can make a better go of that with a pretty high benefit to cost ratio, but I certainly admit that’s not proven.
Hugo Scott-Gall: I guess to follow up from my question, does AI create its own truth? And this is a basic question, but clearly, there are some biases embedded into it. Or maybe you think there’s fewer and fewer. But there must be from the data it’s seen versus the data it hasn’t seen. There must be from some of the code behind it. So, where does truth sit? You’ve said several times already that you think ChatGPT is objective, and objective clearly is one step removed from, or linked to, truth. So, how does truth work?
Tyler Cowen: Well, I think GPT-4 is more objective than any other media source I know. I certainly wouldn’t say it’s fully objective, but it’s quite objective. But that’s also misleading because the near future will be a world where there are dozens of these models, maybe hundreds. And there’ll be varying levels of quality, but for sure, they’re not all going to be as objective as GPT-4. I’m pretty sure of that. So, there’ll be like a Donald Trump LLM. Elon Musk is building one. I don’t know what he plans, but it’s probably going to be different from what Sam Altman has planned.
And it’s going to be a wild, wooly, super weird landscape, just like the early days of publishing, where you have all these pamphlets promoting bizarre ideologies like millenarianism, and Fifth Monarchy Men, and everything, say in 17th Century England. And it will be like that phase of the Renaissance where just ideas proliferate. It will feel super weird. And we’re going to be disoriented. But at the same time, for those people who want objectivity, it will be easier to find than ever before.
Hugo Scott-Gall: Technology doesn’t typically come with a rulebook. Usually, it’s ahead of the rules. So, what worries you most? We talked about constraints. We didn’t really talk so much about what would worry you most. Do you worry that there may well be some sort of big cases that set it back because somehow liability was proven in a way that’s unfair to the technology or is still ambiguous? How do you think about that?
Tyler Cowen: Well, there’s two separate questions in there. One is what worries me most, and what worries me most is we have a world based on inertia and people expecting that their current privileges will more or less continue or even be extended. And LLMs will overturn that, and I’m not sure how people, our elites, other groups will respond politically, ideologically, or otherwise. That worries me a great deal.
Now, in terms of liability, I don’t think liability law will stop LLMs. I’m not sure how liability law will evolve, but keep in mind these can be global services, and if you need to base your LLM in Estonia, or in Bermuda, or somewhere else, eventually, this can happen. So, the U.S. is not going to have a kind of great firewall. So, the LLMs people want will be more or less the LLMs they get, even without open source. So, I don’t think changes to liability law ultimately will halt that process. They may slow it down a bit.
Hugo Scott-Gall: Final question before I move on to talent, and I want to make that link between AI and talent. Just around the enablers: electricity, hardware. Is that a fair summary of where you see the most attractive returns to capital and optionality upside? And when you think about hardware enablers, will that be very concentrated in a few companies, or will it be quite broadly distributed? So, for every extra dollar created, do quite a few people get to share in that dollar, or is it really going to go to one or two providers?
Right now, NVIDIA with the H100 chip, that’s seeing a lot of the value accruing to it, as implied by its recent share price moves.
Tyler Cowen: I think whether in AI or elsewhere, including, perhaps, electric vehicles, if you can create an enduring supply chain, that’s something that’s very hard for other people to copy. In investment terms, I would look at which companies can do that. Think about hardware and supply chains. And a lot of the AI service itself will become commodified and probably is not super high in investment returns. But again, the people making the physical stuff, yes.
Hugo Scott-Gall: Final question in AI, then I’ll move to talent. A world of abundant AI and abundant energy where the cost is so low relative to income or whatever you want to compare it to. What does that world look like? It’s different, right? We deal with scarcity. Scarcity primarily manifests when it comes to inflation. Often, that’s due to energy. A world where energy is just not a cost issue; it’s abundant and plentiful. Access to AI is abundant and plentiful. That’s a very different world, isn’t it?
Tyler Cowen: Yes. If clean, green energy is truly abundant, we can settle so many parts of the world so cheaply. For instance, so many parts of Africa that are underpopulated but have major problems with, say, water supply or something else in infrastructure. If we can do that, maybe in small modular, or nuclear power, or solar, geothermal, whatever it is, the whole landscape of the world will change. Space settlement will become possible. Everything. So many changes it’s really very hard to predict. I think it would outrace our imaginations.
Hugo Scott-Gall: Not impossible in our lifetimes.
Tyler Cowen: I agree. I wouldn’t take the 50/50 bet in my lifetime. I’m 61. But it’s not crazy to think that I’ll see it.
Hugo Scott-Gall: Yeah. Okay, so you wrote a book about talent because overall, we’re not very good at identifying talent or even thinking about what talent is. Does AI help us in that quest?
Tyler Cowen: We don’t know yet. So, large language models, as they currently exist, are not designed to help you identify talent at all. But over time, let’s say you have people in a poorer country, and they grow up being taught by LLMs and working with LLMs. It’s easy for me to imagine that the LLM will collect data on how rapidly and how well they learn, and they’ll be third-party certifiers. And in essence, say all of the children of Kenya who do really well working with their LLMs, their names and emails will be sent to companies who will want to hire them. It seems to me that’s quite likely to happen.
Hugo Scott-Gall: That makes sense. That makes sense. So, one of the things you say in your book is for most jobs, there is a table-stakes level of intelligence. There’s a minimum level of intelligence to do that job. But beyond that, the correlation between success and IQ breaks down; there are other factors. That made a lot of sense to me, and I think that’s very true in investing. It’s very simplistic to assume high IQ relative to someone else implies better investor. There are other things at play.
As you wrote the book, and as you’ve thought about it since and talked to people, do you think we’re getting closer for a given profession to understand the multi-factor model of success? So, clearly, there’s a minimum level of intelligence required, but beyond that, there are other things. Have you thought some more about the context for each industry, the weightings of those things, and the mix of those things?
Tyler Cowen: I think human intelligence is mattering less. AI will accelerate that trend. What seems to be mattering more and more, again, above a certain IQ threshold, is initiative and determination. Can you use all these tools? Will you set out to learn them and retrain yourself? Some people will; some people won’t. But that’s not so correlated with IQ, right? That’s determination and initiative. I look for that more and more.
Hugo Scott-Gall: How do you look for that? Because if you ask someone, “Are you determined?” they’re going to say yes.
Tyler Cowen: Well, if it’s someone who’s older, you just look at their track record. Interviewing references is still underutilized by many people. If it’s someone who is very young and essentially doesn’t have a track record, effusive praise from their high school teachers, while it may be true, you have to discount it because a lot of people can get it.
I think you want to look at the question: at how young an age did they start their first project? And I find that to be somewhat predictive, by no means entirely. It’s more predictive for males than for females. But if someone, say, started trying to build a nuclear fusion reactor when they were 12, I would bet on that person. I don’t even care if they succeeded or if their parents helped them. The mere fact that they were out there doing something I would put pretty high weight on.
Women are more likely to be late bloomers for whatever reasons. So, that’s harder talent to spot. You need to have more of an open mind. Maybe think more about issues of social intelligence and just how their lives are going to evolve. But I would say we need to get better at spotting talent in young women. We’re not very good at it right now, and it’s not fair.
Hugo Scott-Gall: It clearly isn’t fair. So, what concrete steps should people take?
Tyler Cowen: Well, it depends who you are, but just spending more time with young people would be my primary advice for many older individuals. I think 13- to 19-year-olds are grossly underrated. Those people will learn all the more quickly with artificial intelligence if they want to. And just having some part of your life connected to those individuals and what they’re doing would be my main advice.
Hugo Scott-Gall: I like your “how many open browser tabs” question. Have you used it? Have you refined it? Have you thought of some better questions to test for curiosity?
Tyler Cowen: Here’s my most recent question. It does not apply to all jobs, to be clear, but I find it useful in many cases. I say to the person: if you could go on a two- or three-day retreat with all expenses paid, and you get to invite three or four people you do not currently know and spend that weekend talking with them, whom would you invite?
I find that shows a lot about what the person really cares about and how creatively they think about advancing themselves. There’s not a single right answer, but if someone has no answer at all, I would say start worrying. Look for an answer that’s creative and a bit out of the box, where the person looks for people they can learn from.

Some people, oddly enough, name people that they want to give lectures to, which, for some jobs, might be proper. But you tend to then think those individuals are not the best learners. So, that’s my new question.
Hugo Scott-Gall: Yes. And who pops up most frequently in the shaded area of the Venn diagram there?
Tyler Cowen: There’s basically no overlap so far. It’s a very open-ended question, so people have very different interests. I would say I’d love to spend some time with Magnus Carlsen, with Paul McCartney. Those are weird answers. But just not that many other people are going to say them, right?
Hugo Scott-Gall: I don’t think either are weird answers.
Tyler Cowen: Well, that’s a sign that you’re weird too. That’s great. Maybe you would say them. There are a lot of people who are super established, but they might be, in a sense, too established to help you. So, say you chose Bill Gates for your two days. Obviously, he has a big hand in a number of enterprises, but sometimes, people like that are too high in the enterprise to actually get you hired or help you.
So, he’s not obviously the best choice. I don’t mean that as any kind of negative on Bill Gates. But you want someone who can think through the question: at what level of knowledge, expertise, and doing stuff can a person actually help me? I find this question a very good test of that. If I spent two days with Bill Gates, I think it would be fascinating. I mean, maybe he would be pretty high on my list. But I don’t actually think he could or would help me at all. He’s too famous.
It’s like spending a weekend with Barack Obama. Again, nothing against Obama. But I don’t think he could help me at all. Someone I’ve never heard of who’s in some mid-level position somewhere maybe could help me more. So, just an open door to thinking about that kind of question.
Hugo Scott-Gall: So, I’m going to fire a few questions at you, if you don’t mind. There are very few people who say in public that they can read five books in an evening. Is that as much of a competitive advantage to you as it sounds, and do you think it’s going to be undermined by technology? You’ve clearly got a skill-based competitive advantage; there are very few people, I think, who can do that. You clearly retain a lot of information, so you’ve got a fantastic database, but you’re still human.
Tyler Cowen: I think it’s been a big advantage to me. Now, as you’ve mentioned, AI is here. Large language models are here. So, it’s a new test of me. Do I have enough initiative to do things with large language models? I’ve already recorded a podcast with a large language model playing the role of Jonathan Swift. I thought it did a very good job. I’ve written a paper with Alex Tabarrok on how to use large language models to teach yourself economics.
So, I’m trying. I’m not sure it’s for me to judge how successful I’m being. But if all I did was just flatline and be the guy who reads a lot of books and not take some new initiatives, I would say press the eject button on me.
Hugo Scott-Gall: What do you most look forward to rereading? What have you reread most?
Tyler Cowen: Well, I can tell you what I’m taking on my next trip. So, I have not read Lord of the Rings in 30 years. I think I’m looking forward to rereading it. We’ll find out. Sigrid Undset’s Kristin Lavransdatter, that very well-known Nordic novel. Very long. I read that maybe almost 40 years ago now. I’m more sure that I’ll like it than I am about Lord of the Rings. And Dan Simmons’s two-volume science fiction set, the Hyperion novels. I’m bringing those as well, and I think I’ll still like them quite a bit. Those I read more or less when they came out, which is not as long ago as the others.
So, by definition, that’s what I’m looking forward to rereading.
Hugo Scott-Gall: One final question. How would you like to be remembered? And that question obviously has in parentheses that you’re going to be a chatbot, so you will be remembered.
Tyler Cowen: I don’t think I really want to be remembered. So, anyone remembered is misremembered, or they’re remembered for things other than what they wanted. I think a lot of the projects I’ve undertaken, like blogging and podcasting, you get a big audience, a lot of attention now. I’m not sure how that lasts 20, 30 years out, but I suspect really not at all. And that’s fine. I want to be remembered now in my lifetime for what I’m doing now. And Robin Hanson and I have this discussion periodically. Robin wants to be remembered 100 years from now for something he’s doing currently. I don’t. I think it’s a waste. It’s a bad investment. It’s like earning incredible returns on your portfolio after you’re gone. Why bother?
Hugo Scott-Gall: Okay. Well, Tyler, I want to say thank you very much for answering all my questions. I admire the way you do your podcast. You ask lots of very, very good questions very quickly. At least I asked most of the questions quickly. I don’t think they were as intelligent as yours. But thank you very much. It’s been a pleasure and a treat to have you on.
Tyler Cowen: Thank you. It’s greatly appreciated. And keep in touch.
Hugo Scott-Gall: Thanks a lot. Thank you.