Artificial intelligence (AI) has the potential to greatly enhance human capabilities and improve lives. But the implications are far-reaching and not fully understood. In this episode of The Active Share, Hugo sits down with Tyler Cowen, best-selling author, podcast host, and Holbert L. Harris Chair of Economics at George Mason University, for a conversation about the impact of AI on labor, capital, business models, and global connectivity.
Comments are edited excerpts from our podcast, which you can listen to in full below.
Which business models are created or disrupted by this current wave of AI?
Tyler Cowen: Many large firms will become smaller. OpenAI, which built ChatGPT, had only 250 employees when it made its key breakthrough, and AI did a lot of the work. Midjourney, which makes the best AI image generator, had 13 employees.
If AI does the work, the long-standing ascendancy of big business will, in some sectors, come to an end, and that will be shocking. There will be many more projects, but the entities undertaking those projects will hire fewer people.
Will AI reinforce competitive advantages for some companies and be disruptive for others?
Tyler: Large tech companies have been laying off people for a while now, even before AI. I’m not pessimistic about their business prospects, but the days of large employee headcounts are over. These companies are expanding what they do while limiting their workforce at the same time.
I also think we will see more alliances and fewer mergers. Facebook buying up Instagram and WhatsApp led to overemployment in those product lines. And what OpenAI has done is build an alliance with Microsoft rather than just sell the whole company to them. You keep dynamism while getting the advantages of Microsoft’s resources and huge presence in the cloud computing market.
Will companies adopt AI technology to stay relevant?
Tyler: Say you’re a telecom company—using AI, you’ll become more productive, and you’ll likely hire fewer people. It won’t be an emotional event for anyone other than those who are laid off, and the world is quickly moving in that direction.
I think many businesses are behind on AI, but many others are not. Rapid alliances are being built, and it’s just going to be much more of that.
Are we on the cusp of a productivity wave?
Tyler: I think this AI wave will move us out of stagnation. There are three major areas where there will soon be tangible advances. One is education and tutoring, another is medical diagnosis, and the third is how institutions organize and store their information.
The most general way of looking at it is imagining every person has access to a free research assistant, colleague, and architect. How valuable is that? While many people won’t do anything with that, it is still revolutionary. And AI is likely to get even better. We’re just seeing the beginning of all of this.
What are the constraints of AI?
Tyler: I’m most worried about human inertia. I give talks about AI all the time, and many people don’t seem that curious, or they haven’t spent time with GPT-4, or they’re content with an inferior version and don’t seem to notice or care. While there are plenty of ways in which current AI could be better, we aren’t fully exploiting the capabilities we already have.
Change is historically slow. Take electricity—by some economic historians’ estimates, it took 40 years to reconfigure factory floors to take full advantage of it. I don’t think AI will take 40 years, but we need to reevaluate how rapidly human beings will adjust.
What are the implications for labor and capital?
Tyler: If you’re a worker doing routine work with words or back-office functions, the chance your job disappears in the next three to five years is high. If you are creative, take initiative, and know how to manage research assistants, whether human or virtual, you will be much more productive. And if you’re at the top of your field and willing to learn, you will be much more successful.
In terms of capital, I think AI will favor large regions with the English language, a fair amount of stability, and where people will want to put all these new projects. India and Kenya could be winners, as well as southern England and North America. But smaller countries will lose talent and resources.
Because they’re held back by poor infrastructure?
Tyler: There are going to be many new projects because of AI. If a company wants to establish a new project somewhere in Africa, will they choose Burkina Faso, which is very small, or Kenya? It’s just speculation, but I think a company would choose Kenya. Geographically, there’ll be a greater concentration of projects in areas with some scale.
You didn’t mention China. Why won’t China be a winner?
Tyler: China is all about politics, whether you’re talking AI or anything else. And periodically, it seems Chinese politics go badly wrong. That may or may not happen now. But that would be the key variable. I would just say this: AI may prove to be China’s curse. If it strengthens the regime and allows for more thorough surveillance, that’s bad news, in my view.
The possible upside is Chinese citizens could want to use virtual private networks (VPNs) to access Western large language models (LLMs), and that in essence breaks down the great firewall. You can ask those models anything, and they will tell you the truth about the Uyghurs, Tiananmen Square, or whatever else. It could be an enormous soft power victory for the United States and the West.
Will AI make the world more or less connected?
Tyler: It will make most places more connected. I was recently in Kenya and needed to speak to some people in Swahili. I opened ChatGPT, and it translated for me in two or three seconds.
There may be some countries, like China or North Korea, that will just shut down all contact because they don’t want the contaminating influence of Western LLMs. You can imagine that China either shuts down VPN access or monitors it so that people are afraid to use it. Countries will be forced to choose between more and less AI integration.
But any major change is going to have downsides, no matter what it is. My core view is that if we cannot turn more intelligence at our command into a positive, what is it we’re hoping for? Do we want a world where we develop artificial stupidity?
Do you think AI makes us happier?
Tyler: I don’t know what makes us happier. I think AI will take away a lot of people’s power. We might be in a new world where suddenly no one cares about people who have social capital (like those in the media) and are thereby able to get their kids into Harvard. I would be happy to see that new world.
How big of a technological moment is this? Is this like the Renaissance or the Space Race?
Tyler: I think it’s much bigger than the Space Race, but the final implications are still unclear. I view it as comparable to the printing press, which took a while to matter, but when it did, it really did. And most of the gains went to users. Johannes Gutenberg did not become a billionaire.
What are some social implications of this wave of AI?
Tyler: Globally, I think inequality will decrease. Take medical inequality—many countries don’t have a lot of good doctors, but right now, you can get medical diagnoses using AI that are slightly better than those from good North American doctors. Most people aren’t using AI this way yet, but it will radically lower medical inequality.
Within a single nation, it’s a more complicated story. Inequality of measured income could either go up or down. The big winners could be carpenters, gardeners, or other working-class people with skills that AI can’t mimic. The losers could be semi-intellectual word manipulators, whose work can be copied more readily.
Does social trust go up or down?
Tyler: The nice thing about an LLM is you can ask it for an objective answer, and it will tell you. But the question is: how many people will want that? We still don’t know what it is we collectively are going to ask for, but AI has the potential to do much more to limit disinformation than to spread it.
Where does AI fit in with the notion of truth?
Tyler: LLMs seem to know an awful lot of things. We have a new kind of knowledge that is truly marvelous and takes your breath away. I’m still in awe of the magic when I use it. It feels like witchcraft.
And it’s going to change how we view the world, our own abilities, and how we raise our kids. What if our kids grow up talking to LLMs, which train based on how they learn? What will our kids be like? Will we approve? Will we disapprove? Does it matter if we disapprove? We can make a better go of things, with a high benefit-to-cost ratio, when we have more intelligence on the table.
Does AI create its own truth?
Tyler: GPT-4 is more objective than any other media source I know. But the near future will be a world where there are dozens of these models, and they’ll have varying levels of objectivity.
It’s going to be a wild and weird landscape, just like the early days of publishing, when you had all these pamphlets promoting bizarre ideologies. It will also be like the phase of the Renaissance when ideas proliferated. But at the same time, it will be easier to find objectivity than ever before.
What worries you most about AI?
Tyler: What worries me most is we have a world based on inertia and people expecting that their current privileges will continue or even be extended. LLMs will overturn that, and I’m not sure how people will respond politically, ideologically, or otherwise.
Do you worry about the liability of AI?
Tyler: I don’t think liability law will stop LLMs. I’m not sure how liability law will evolve, but keep in mind LLMs can be global, and if you need to base your LLM in Estonia or Bermuda, eventually that will happen. The United States is not going to have a great firewall around it.
The LLMs people want will be the LLMs they get, even without open source. I don’t think changes to liability law ultimately will halt that process, only slow it down a bit.
Scarcity often shows up as inflation, and energy is frequently the driver. What does a world look like where access to both energy and AI is abundant?
Tyler: If clean energy is truly abundant, we could cheaply settle many parts of the world. For instance, many parts of Africa are underpopulated, with water supply and other infrastructure problems. If we can solve these issues with AI and nuclear or solar or geothermal power, the whole landscape of the world would change, and even space settlement could become possible. The number of changes could outrace our imaginations.
Hugo and Tyler also discuss the topic of talent, and how to identify it, in this episode. Listen to find out more.
Want more insights on the economy and investment landscape? Subscribe to our blog.