Institutions that have been in place for centuries and cutting-edge new technologies both have big effects on society. Daron Acemoglu, co-recipient of the 2024 Nobel Prize in Economic Sciences, talks to Graeme Roy about how we can harness institutions and technologies for the common good.
GR: The importance of institutions has been central to much of your research. I wonder if you can set out why institutions are important, and are there any lessons for the UK?
DA:
My perspective on institutions is actually very historically shaped, and I tend to think of them as inevitable. So, it’s not a question of ‘we have institutions’ or ‘we don’t have institutions’: they are just politics and part of all of our social environments.
We always have rules – implicit or explicit – and those are going to evolve over time. But as they are present today, they are going to shape how we interact with each other, how we behave as workers, as entrepreneurs, as managers, as innovators.
Once you start thinking about it that way, institutions are, of course, going to be a critical input into what sort of society we build and what sort of economy we will have. And if you look at history through this lens, what becomes very apparent is that the institutions that societies themselves have built and evolved into have always mattered greatly. In some instances, this has been in nuanced ways, but institutions have always been a critical determinant of economic growth, inequality, wages, poverty, health and public services.
The UK has a special place in this story because it has been at the forefront of institutional change. Europe was a backwater, and it evolved tremendously as an economy, a society and a polity throughout the Middle Ages, and then during the commercial revolution at the end of the Middle Ages into early modern Europe. Britain was emblematic: it was perhaps one of the least developed parts of the European continent, but it was one of the earliest areas to begin a process of institutional transformation, and so it played an important role.
What’s very interesting is that when I was writing about institutions in the 2000s – and when I wrote my book Why Nations Fail with James Robinson, which took an institutional approach to world history and economic development – Britain was viewed, and I saw it, as one of the most solid countries institutionally. This was because it was leveraging norms and institutions that had evolved over centuries. It wasn’t a process of enlightened democratisation, but it was a process of democratisation that was so enshrined in British political culture and in British political attitudes that I would have thought that Britain would have been the last country to go through an institutional crisis.
But over the last few years, we have seen that. In recent years, we have also seen the economy go to pieces. I think that’s partly because, right now, the British economy is without a rudder. It doesn’t have a very clear agenda or direction, and that’s because it hasn’t really grappled with the institutional crisis that started several years ago.
I think that, for the future of the country, we need to go back to the strengths of institutional precision. What I mean by that is an understanding of which laws are going to be obeyed, and how norms are going to shape political attitudes and business attitudes, the relationship between workers and managers, and between civil society and democracy.
I think all of those are very clear, but we have seen them become topsy-turvy, and rebuilding them is going to be critical. I believe that tolerance, predictability of institutions, the ability to voice dissent and some amount of trust in politicians and institutions are key. And I think that those are the things that have been lost in Britain.
GR: In addition to our institutions seeming unsettled, we are also in a time of rapid technological change. Do you think this is something that we should embrace as a positive or see as a challenge?
DA:
My research over the last three decades has been mostly, almost entirely, at the intersection of institutions and technology. Sometimes it has focused a little bit more on institutions, sometimes more on technology.
In the same way that I am a believer in the importance of institutions, I am a believer in the importance of innovation and new technologies. You cannot understand the history of enrichment, and improvements in our health, comfort, prosperity over the last 250 years, without recognising the really sweeping changes that have happened in industrial technology, in the scientific process, in our understanding of how to interact with nature and control nature.
But technologies always come with challenges. And there is no law of economics that says that technologies are always good for everybody. There is now a belief in techno-optimistic circles in Silicon Valley, and in some parts of US academia and the media, that when we invent something impressive, somehow we are all going to benefit from it.
Well, history is full of counter-examples to that – just think of nuclear weapons or technologies of control that were at the centre of slavery and other things. So, it really matters how we innovate, what we do with those innovations and how we create the institutional guardrails around how a new technology is going to affect democracy, production, communication and so on.
In the case of artificial intelligence (AI), I think I have a view that is very different from both the techno-optimists and techno-pessimists. I violently disagree with the techno-optimists – I think they are really naive and misleading, and often try to silence the conversation.
But I do not agree with the techno-pessimists either. I do not think you can say that AI is going to destroy us or kill humanity. Nor can you say that there is something in the nature of AI that is going to be bad for humanity. I think that AI has a set of impressive architectures and achievements, as well as impressive objectives. The changes that we have seen over the last few years have been truly mind-boggling.
There is room for something that will be better for society, but I think AI – just like nuclear physics – carries within it the dangers of something very negative for humanity: a much more unequal society, a much less democratic society, a much more manipulative society, with very few people deciding how others think and how others can be manipulated.
It really highlights another theme of my work, which is the choices that we make. So, AI is neither good nor bad. I do not think that we can even talk in the abstract about what AI will do to jobs or inequality – it will do whatever we choose it to do.
Then the bottom line to your question becomes, well, where are we heading right now? And there, I think I am more on the pessimistic side. I am hopeful, but pessimistic, meaning that what I see right now is AI being controlled by a very small cadre of like-minded, non-democratic and dangerous people, to be honest. There is a risk that it is going to boost inequality, it will be an anti-democratic force and it will lead us towards a two-tier society.
But I am also hopeful that there is a possibility to redirect AI and use it in ways that are good for workers, good for citizens, good for all of us. But this is not something that is going to happen automatically.
GR: What one radical idea would you suggest to make technology work better for society?
DA:
Let us first set the objectives, and then I’ll give you the radical idea. As I said, we need to redirect AI, so first of all we need to have a clear understanding of what it is that we want. My argument here is that it is both technically feasible and socially desirable to have pro-worker, pro-human, pro-democracy AI.
But because that is not where we’re heading, we need various policy levers in order to redirect the trajectory of AI. And I do not think we have a silver bullet here; we need many tools. But since you asked for only one idea, I’ll give you one that probably many of your readers won’t have considered or heard of.
I think part of the problem with the current direction of AI is with the ecosystem that has emerged, both in terms of the ideology of AI and the business model of how to monetise it. We need to break that, and I think one very effective tool to do so would be a very high rate of digital advertising taxes.
We need to make sure that social media companies don’t exploit people manipulatively. I think the way that that’s happening right now – with horrendous implications for mental health, political polarisation and degradation of communication – is that you collect people’s data in order then to monetise them. You lock people into the platform with their attention, and then you monetise that with digital ads.
This is so profitable that it has shut off all sorts of alternative business models, and it has taken the energy out of using AI for other, more valuable things, like, for example, helping workers. As a result, I think that a significant digital ad tax would be transformative. It would essentially make at least half of the industry that relies on digital ads unprofitable, so that companies would redirect their efforts somewhere else, to other business models and other ways of making money from offering services to customers.
GR: What advice would you give to a young economist today?
DA:
Well, now is a fantastic time to be a young person – I wish I were! I would say just take advantage of it, and within the intellectual environment, that means do what you are passionate about, be open-minded and tolerant, and be a thinker. I think what we need more than anything else is people who think for themselves, rather than just repeating lines that they get from their professors, from social media or from their peers.
I think independent thinking is the gift that our current environment bestows upon us, but we may choose not to use it. And I think part of the reason why I’m so worried about social media and new communication technologies is that they’re really pushing us away from being thinkers. So, be a thinker!