In Our Hands
Ep. 25. Kate Crawford: The True Cost of AI


Ramanan Raghavendran speaks with Kate Crawford, a leading scholar of artificial intelligence, to explore the hidden costs of AI—from its environmental footprint to societal impacts.

In this episode of In Our Hands, Ramanan Raghavendran delves into the environmental impact of AI technologies with renowned scholar Kate Crawford. They discuss the interdisciplinary nature of her work and the implications of AI for society, the environment, and labor, exploring the materiality of AI, its environmental costs, and the biases inherent in AI systems.

Time stamps and the full transcript are below. This episode is also available on Apple Podcasts and Spotify.

In Our Hands is a production of Amasia. Follow these links for more about our firm, the Amasia blog, our climate fiction podcast, and Ramanan’s blog.

Thanks for listening! Subscribe here for future episodes.

Show Notes

02:15 Interdisciplinary Foundations of AI and Technology

04:19 The Material Reality of AI

07:40 Environmental Costs of AI

09:52 Energy Demand and Data Centers

12:04 The Impact of Rare Earth Mining

15:03 The Hidden Costs of AI

16:44 Bias in AI Systems

21:45 Policy Measures for AI Governance

28:03 The Real Cost of Labor

31:37 The Future of AI and Collective Responsibility

37:12 AI in Education and Pedagogy


Kate (00:00) This is the biggest social experiment that has been conducted in the last 50 years, and it's happening in real time, and we don't know the results.

Ramanan (00:11) Welcome to In Our Hands, a podcast that brings you voices tackling the real-world trade-offs, tensions, and innovations that shape our planet's future from sustainability to AI. I'm Ramanan, your host. No easy answers here, just honest conversation about our shared future.

Hello and welcome to In Our Hands. This episode marks a new direction for our podcast, and that is the direction of AI. I am thrilled to welcome a guest who is reshaping how we understand artificial intelligence, not just as a technology, but as a deeply material, political, and environmental system. Kate Crawford is a leading scholar of the social, political, and ecological implications of AI.

She is a research professor at USC Annenberg, a senior principal researcher at Microsoft Research, and a co-founder of the AI Now Institute. She is also my book of the year pick for 2023. The book is called Atlas of AI. And as you listen to us, you will soon realize why it is a must-read. Today, we'll explore the intersection of technology and climate and how we can reimagine AI systems for a more just and sustainable future.

(01:30) So Kate, your career is the definition of interdisciplinary. There's media studies, there's AI ethics, there's art. Can you pull all these together and tell us where this came from? What are the experiences and influences that led you to look at technology in this very broad way?

Kate (01:48) Well, I have to say I trace a lot of it back to really the emergence of the web. I was a kid, I remember looking at really early versions of AltaVista and just thinking, this is the game changer.

Ramanan (02:03) And this was in Australia?

Kate (02:15) And this was in Australia where I grew up. And I was really interested in how to study the wider social implications of these technologies. Now, at the time, there wasn't really a degree program to do that. I ended up studying political philosophy and law, and then I did a PhD that really looked at a combination of media studies plus philosophy plus sociology to sort of look at technology in a wider context.

Ramanan (02:28) Now, one quick question. You say degree programs did not exist then. Do you think they do now?

Kate (02:38) Honestly, no. We still haven't, I think in many ways, gone through the type of re-conceptualization of education that is needed in this moment.

Kate (02:45) Obviously people are focused on AI issues and things like ChatGPT shifting the way that students are engaging with assessment. But I think there's a much bigger question, which is how do we escape the silos of university education to really move towards interdisciplinarity, which I think is absolutely the challenge of the 21st century. For me, it really came about through curiosity.

I taught myself to code. I was really interested in computer science in terms of where it met human life and society, not in and of itself. For me, it was always this broader question of how do technological systems change how we live? How do we engage with them? How do they shift societal norms? And what does it mean for the future?

So I've really followed that pursuit in multiple different avenues, from working in very high-end technology labs to being in universities and creating interdisciplinary research centers to pursuing large-scale art projects to try and show these questions visually. So, as you say, interdisciplinary from top to bottom.

Ramanan (03:52) And I have to say this for the audience because they may not know you. Prize-winning art projects. Can you, would you mind before we get into the main thread of our discussion, you know, the average listener may not know all the hats you wear in this moment. You have academic hats, you have corporate hats, you have policy hats. Can you just lay out the full scope of your footprint, so to speak?

Kate (04:19) Sure, I mean in short, I've always worked in both industry and academia. I think it's really important that those two domains are interconnected, particularly now. I'm a professor at USC in Los Angeles. I'm a senior principal researcher at Microsoft Research; that's the kind of conceptual R&D wing of Microsoft in New York.

I'm also the inaugural visiting chair of AI and Justice at the École Normale Supérieure in Paris. And I also do a lot of work in policy. I've advised people in the EU, obviously in earlier iterations of the White House. Currently I'm advising President Sanchez in Spain around implementing the AI Act as a member state. So, you know, in many cases I think artificial intelligence really touches on all of these domains. Obviously, it's a major industrial force, but we really have to think about how it touches education and also policy.

Ramanan (05:16) Well, now that we all feel profoundly inadequate, we can now proceed. You know, one of your most famous phrases is when you described AI as neither artificial nor intelligent. You know, sometimes phrases haunt people for the rest of their lives, and I think we will associate that phrase with you. Can you explain it?

Kate (05:19) Mm-hmm. Well, you know, that phrase is really distilling down to the smallest possible sentence about 10 years of field work: going to visit the entire process of AI manufacturing, from the mines where the minerals are extracted, to the large-scale data labs where the data sets are designed, to the factories where click workers are interfacing with AI models, to the data centers, these, you know, gigantic factories of the 21st century, and many places in between. And through doing that work, it became very powerfully clear to me that we see these systems as somehow being abstract, as AI being in the cloud, as mathematics and code and almost entirely immaterial.

But the opposite is true. This is, in fact, one of the most material technologies that we've built as a species. It's currently the largest infrastructure project in history. And in terms of its intelligence, I think, unfortunately, it's a very mixed metaphor for what AI actually does, which is statistical analysis and prediction at scale, which is very different to human embodied relational intelligence. And so, as a term, as it started to be used back in 1956,

I think it's given us the wrong metaphor to think with, if you will. And rather, I think it's much more instructive to look at this as a radical industrial transformation, which has enormous implications across both the social layer and I think also the political and cultural layers as well.

Ramanan (07:19) And I should just say before we move on to exploring that further, I did not name your book. And so for our listeners, the book is by Kate Crawford; it's "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence". It is an amazing book, please read it.

(07:40) So let's talk about mines and extraction. You describe AI, in the Atlas of AI, as a technology of extraction. Can you speak a little further about how AI systems are physically rooted in environmental and labor-intensive processes, which, as you said, sitting behind a screen, we're not thinking about?

Kate (08:04) Exactly. Well, let's have a look at the current wave of artificial intelligence, which is known as generative AI. This type of artificial intelligence is enormously energy intensive. For example, if you just do one search on ChatGPT, that's 10 times more energy than just doing a traditional Google search.

Every exchange of around 20 back and forths with GPT is about the equivalent of pouring a half-litre bottle of water on the ground. Why is that? That's because these architectures, which are built from lots of very intensely heat-producing chips, often produced by Nvidia, need to be cooled with fresh water, and many of these data centres just evaporate that water into the air. At least 20% of the world's data centres are already in drought-affected areas, so this is not water we can afford to lose.

And we also know that these systems draw an enormous amount of minerals; part of the reason we're in a planetary mineral cold war over rare earths, lithium, cobalt, et cetera, is precisely because of AI infrastructures. So this is all coming at a time when, obviously due to climate change, we really need to be thinking about cutting back and about how we come up with more innovative sustainability solutions. But instead we're seeing a system where, right now, the current estimates are that generative AI is basically using as much energy as the nation of Switzerland. It'll be as much as Japan by next year, and by 2034, AI will be using as much energy as India. So that is absolutely terrifying to anybody who's really closely looking at the climate research at the same time. So we have a major issue.

(09:52) I do actually happen to think that this is one of the biggest issues in AI that's not being addressed. Many people are simply unaware of how resource intensive these systems are. And obviously, to feed these systems, that's extraction. That's extraction from mines. It's extraction of energy. Unfortunately, the energy mix for AI is not coming from renewable sources.

There simply isn't enough renewable energy in the grid to keep these systems going, which is why coal-fired plants are now being kept online. And we're also seeing a massive resurgence of interest in nuclear.

And also I think a build out there in the next decade. So that's all happening right now as we speak. I've just been looking into some of the biggest AI data center deployments and there's one going up right now in Abilene, Texas, which is for OpenAI. It's possibly the biggest construction site in the United States. It's larger than 17 football fields. It's over a million square feet.

It is going to require an enormous amount of energy; a minimum of around 1.2 gigawatts is the prediction. That's more than the average nuclear power station produces. So these are just gargantuan. In fact, on that theme, Elon Musk, of course, is building a competitive system down in South Memphis called Colossus. It also uses over one gigawatt, but has been shown to be highly polluting.

Kate (11:18) They're using generators that are spewing out nitrogen oxide and formaldehyde into the air, into an already historically, economically vulnerable population. This is a historically Black community, which has already dealt with so much by way of marginalization, but also environmental abuses.

And so this is really doubling down in an area that simply cannot afford this type of toxic air pollution. So that's just for starters. And then we could of course talk about labour. You know, part of doing this sort of work is you really have to look at truly staggering amounts of change. But this story, the story of the environmental costs, just keeps spiralling.

Ramanan (12:04) So I'm going to come back to that because I think we want to give that its own space here to go a little deeper. Can we just go to the other end of the supply chain and talk about the rare earth minerals that are being mined for AI hardware and for many other things. But here we're talking about AI. Some commentary, I think, for listeners who may not have thought about faraway lands on how this extraction impacts local ecosystems and communities.

Kate (12:38) Mining rare earths is actually one of the most environmentally destructive practices that we have. In addition to gigantic amounts of tailings and mineral waste, it also produces radioactive waste. This is primarily being done in China.

China is responsible for, depending how you count, between 70 and 80% of all rare earth mining. If you go to some of these mine sites and you just see the waste around (there's a gigantic black lake in Inner Mongolia of just the tailings of these systems), it's absolutely horrifying. But of course, we're now in a slightly different situation because of the trade restrictions between the US and China. China is now really restricting the sale of particular minerals like gallium to the United States.

So you're now seeing this sort of push globally to find other sources and spin up other mines. So we're really going to see that type of environmental impact shift to many other countries as well, given the demand. So that's just one story of the many stories of the minerals that are really needed for this industrial shift.

Ramanan (13:53) Not long ago, I read a book by someone named Gabrielle Hecht called Residual Governance, which is essentially about the mines in and around Johannesburg. And it's not focused on rare earths, but, you know, we are far away from these places, and so we don't fully realize that Johannesburg is basically built on tailings, as far as I can tell. Have you read the book?

Kate (14:16) It's a fantastic book. I'm glad you mentioned that. Yes, yes, I have. And I think what's interesting is that in many ways, the way that the mine has functioned historically for centuries, it's sort of kept out of sight. You don't see it in cities and towns. But the same is happening with the data center, which is the factory of the 21st century, but it's being put in often rural and regional areas where you either don't see it very easily at all, or if you do, you'll just see the equivalent of a shed on the distant horizon. So it really is that same story playing out again, really shifting externalities in such a way that they're simply not visible to the vast majority of people.

Ramanan (15:03) And I should disclose to the audience, I'm in a class with Hecht on trash and waste. And that's the context in which I read that book. What's an environmental cost of AI most people don't think about? And what I mean by that is most people don't think about the environmental cost of AI, period. Even the very large examples that you just shared. But are there hidden or subterranean costs that even a knowledgeable person wouldn't necessarily conclude is a cost?

Kate (15:37) I love this question. I mean, honestly, I think, you know, there are all sorts of new studies that are coming out at the moment. It's really a very active period. And certainly what we're seeing in terms of air pollution, that's not something that I was researching for Atlas of AI, but the air pollution questions are really very serious for local populations that are really affected. And of course, that's what we're seeing with Colossus.

I think water is still a story which people are not aware of. And given the water scarcity issues of the 21st century, it still shocks me that this is something that's happening in terms of the design of data centers. I often speak to architects about this issue and say, look, this is the primary architectural challenge for this century. We should have completely different designs.

Ramanan (16:06) I think you're right. I think you're 100% right.

Kate (16:31) You know, there are really intelligent ways to start thinking about these architectures, but at the moment they're just treated like giant computer sheds, which is incredibly inefficient and just doing, you know, untold damage.

Ramanan (16:44) I'm going to shift gears a little bit and talk about inequalities, and you've done a lot of work in this area. The idea of biased AI systems: all systems have biases, and AI systems have idiosyncratic biases that are not visible to most people. So can you speak a little bit to that, just bring it home to our audience? And this is in the context of decision-making about the environment and climate justice, and you touched on it a little bit with the Memphis example. But more on that.

Kate (17:23) Well, interestingly, my first experiences in really studying the bias that can be built into large-scale data sets came with an enormous disaster in Australia back in 2007, 2008, which was an enormous flood in Queensland. It was a floodplain almost the size of France; you know, it was sort of extraordinarily destructive.

Ramanan (17:50) Wow.

Kate (17:52) And one of the things that was happening at that time was that people were using Twitter data to try and understand where the damage was, who was most affected. And that seemed like a really good idea at the time. I'm sure you remember back when Twitter was seen as, you know, the people's network and we'll be able to sort of see what's happening.

Absolutely people were tweeting about what was happening to them during this extraordinary flood, but of course there were so many people who were not on Twitter and who at that time didn't even necessarily have smartphones. And they were often people who were coming from poorer areas, often older populations, and you just simply weren't seeing them in the data at all. So, even though you could get these enormous data sets of many millions of tweets.

Ramanan (18:24) Yep.

Kate (18:34) You were getting a very skewed perspective of this population. So that clued me in early on that with these data sets, from the very outset, the ground truth is skewed. And so any kind of model that you build upon it is going to have those skews within it. And so it really began for me many years of opening up the data sets that are used to train AI. Now, this is something that very few people do. Very few engineers ever look inside data sets.

They just pull them off a shelf and train a model on them. That's generally the level of engagement. But I started opening up these datasets in around 2015 and really looking at where did the data come from? How is it labeled? How are things being defined? Because you know that AI models are then going to be absolutely premised on those ideas. So.

I started looking at ImageNet, which is still to this day one of the most well-cited, most significant data sets. It really revolutionized computer vision, starting in 2009 when it was released. But again, very few people had really gone through it, including the creators of ImageNet, Fei-Fei Li and her team.

So I was looking specifically at the way that people were being labeled in ImageNet and just found, through painstakingly manual work over multiple years, just so many problems. And the problem is that when you basically take a data set of nouns, in this case a word database called WordNet, and then you say, let's just get pictures that represent these nouns, that might be fine if you're trying to get a picture of a cat or a dog. But if you start labeling people, you might say this person is a nurse, you know, this person is a father, this person is a politician. But then it started to get very nebulous and quite judgmental. You know, this is a bad person, this is a drug addict, this is a kleptomaniac, this is an alcoholic. And then of course, horrifying, you know, gendered and racial insults just all the way through.

You know, I'm still shocked that some of these words were in anybody's database really, but here they were, paired with images of people that were just extracted from the web. Just people like you and me who might have put up our graduation photo holding a glass of champagne, and now you've been labeled an alcoholic in the most powerful training data set in AI history. And sure enough, I found multiple pictures of colleagues who had no idea that they'd been labeled in these ways. So that was a real eye-opener for me.

And when we published that research, we did something which I was actually really proud we did, which is, working with the artist Trevor Paglen in his studio, we trained an AI model so that anyone could upload their pictures and see how they would be labeled by ImageNet. And that's when people really got it. It went absolutely viral. It was being covered by the New York Times. And that was when ImageNet responded and said, OK, we're actually going to remove 600,000 of these images and a whole set of these labels. And that is, I think, one of those moments where we could see change coming out of this research.

(21:45) But there's a fundamental problem beneath that which remains untouched, which is: what are the politics of classification that are being built into AI systems every day, often in ways that are invisible? You know, I was looking at a publicly released data set. Many of the data sets for AI these days are privately held, and you're not going to be able to see into them at all.

So for researchers like myself, that means that there's a wall there in terms of being able to see the potential errors in these systems. And unfortunately, in a kind of AI race where everyone's going as fast as they can to build the biggest model and get it out as quickly as possible, they're not doing this sort of forensic research that has really been a hallmark of my work. And I worry about that, because I've seen just how bad it can get, even with really careful researchers who are doing their absolute utmost, as Fei-Fei Li and her team were.

Ramanan (22:18) As fast as they can, yep.

Kate (22:38) You know, we're really moving into a period now where these systems, unfortunately, have fewer and fewer guardrails and fewer and fewer eyes on them in terms of making sure that these systems are actually reliable, safe, and are not essentially encoding the worst types of bias and prejudice.

Ramanan (22:57) I mean, I think for our audience, if you want more on Kate's work here, just do a Google search for Kate Crawford excavating AI and you will get to a site, jointly between you and Trevor Paglen, which was just eye-opening and shame on me for not even really knowing about it until I read what you wrote on that page.

Kate (23:23) You can't know everything.

Ramanan (23:26) There is a lot going on, but you know, we should all have known about this. And so it is very unfortunate, but we're going to correct all that by doing podcasts like this and directing people to this work. I'm going to, well, you the only thing I'll say just to build on what you said, which is it is a depressing moment because AI is now at the center of a new geopolitical conflict. And so the inclination to put guardrails on may end up not existing on either side and that is a dark road to be going down.

Switching gears yet again: you work for a tech company, one of the largest in the world. And so there's an obvious question to ask, given that you're an insider in that sense: what role can tech companies, large and small, play in mitigating some of the effects that we just talked about?

Are there examples of responsible practices? Is there any hope around here, is my question.

Kate (24:33) Well, you know, this is interesting. I obviously really look into the many challenges and difficulties we face with this AI transition, but I'm by nature quite optimistic that we can have real impacts. Honestly, there's just so much low-hanging fruit when it comes to the AI infrastructure piece. So much of what I do is really speaking not just to Microsoft, but to all tech companies, saying we really need to do better in terms of the impacts of these systems. Most of the tech companies had published agreements to reach carbon neutrality by 2030. I think it's entirely understood now that that's not going to happen. And that's being, I think, let slide far too easily.

Think about the actual design of AI, from the models themselves: the difference between the biggest possible models versus where we could be using small language models. Where could we be using curated data sets that aren't just harvesting the entirety of the internet? Where can we be thinking with sustainability in mind first, rather than speed first, which is really the priority that has brought us to this place?

You know, those of us who can remember all the way back to the late 90s and early 2000s, when trying to keep computer programs small was really important, avoiding bloatware because there simply wasn't enough compute. It created this environment of trying to create the most efficient systems that you can. Now, that is one of the ways forward here: certainly, you could make AI much more efficient.

Indeed, it has become more efficient over the last five years, but you're seeing so many more people use it that those efficiency gains are not going to be enough. This is something, obviously, that I've written about as the Jevons paradox problem. I just wrote a paper with Sasha Luccioni and Emma Strubell, both of whom are deep experts on questions around AI and sustainability. And we found that the difficulty is just going to be: if you make AI more efficient, more people will use it. We do not get away from this problem. So we have to think in a much more holistic way, right through the entire AI stack.

But also, I think this does touch on questions of regulation. I think tech companies need to have regulatory structures. I don't think this is going to be solved, honestly, through self-regulation. But I do think they can do a lot. And they can do a lot in terms of transparency, number one. I think we're not getting sufficient transparency from tech companies. I would like to see much more granular information around exactly how much energy, exactly how much water. We should be able to ask a model that question, or see it, as we're using these systems.

We should also, I think in many ways, have some really difficult but necessary conversations around how many frontier models are really necessary. At the moment, you're seeing companies racing to build versions of the same thing. We're seeing enormous duplication across the AI field. And each time you spin up another frontier model, that's an enormous amount of energy and water in training.

Ramanan (27:33) Right.

Kate (27:44) Not to mention what happens at inference when people actually start using these systems. So again, these are coordination problems that could be resolved, I think, with a more coherent set of conversations across the sector, but also regulatory frameworks.

Ramanan (28:03) Super helpful. I'm going to come back to that question, because I do want to end on a somewhat optimistic note, and I want to talk about policy in that context. But before we go there, another digression on human labor. You know, we talked about this in the context of faraway mines and mining for rare minerals, but there's lots of other kinds of labor floating around here, not just in the form of people doing coding: data labeling, content moderation. Can you talk to that a little bit? I mean, you touch on it in various places in the book, but you may have developed further ideas since then.

Kate (28:45) Hmm. I mean, we could have an entire hour-long conversation just on the labour side. There's obviously two different important parts of the equation in labour. It's the labour that's going into actually constructing and training these models. And in large part, that is being done by what's called reinforcement learning with human feedback. That is an entire sort of class of labourer who's commonly in the global south, often working for less than $2 an hour.

Ramanan (28:48) Yes.

Kate (29:13) Engaging with and training these models and trying to remove some of the most toxic and horrifying examples, both in terms of visual material, text, et cetera, etc. You know, there's now unions that are springing up, particularly in Africa, around the conditions of labor, you know, for these workers, which are in many cases, you know, just deeply exploitative and fairly horrifying.

On the other end, of course, you have the impact that these systems are going to have on labor around the world, in terms of how the labor markets themselves are going to metabolize these systems and what happens to jobs. And there we're starting to see something that I think is potentially going to be extraordinarily dramatic within the next decade. And again, across the sorts of cognitive labor that we always used to say was somehow removed from the threat of automation. You might remember, 10 years ago: learn to code. That was the answer for keeping yourself employed for life. Remember that?

Ramanan (30:15) Right. yeah. 100%. You wanted your kids to learn how to code. That was going to be the one foolproof thing you could do.

Kate (30:24) Alright. Yeah. And what is the one thing that the latest AI models are really good at? Yeah. Coding. I mean, honestly, I think both Google and Microsoft have noted that at least 20 to 30% of their code is now written by AI systems. So it's an enormous shift that's going on in the sort of least expected areas. So I think our relationship to what is valued in human labor is going to be under strain. I think our relationship to our own labor.

Ramanan (30:28) Coding!

Kate (30:53) You know, what do we, what do we cognitively outsource to models, and what do we do ourselves? You know, that is going to be a really important question, particularly in an education context. But I also think in just everyday work, no matter what you do, there's going to be an enormous pressure to try and do things faster, use more AI, try to skim over the surface of things quickly and just get summaries for you and, you know, and, and go at the maximum possible rate.

I think there are some real questions there around how we learn, how we challenge ourselves, how we produce our most creative work. This is the biggest social experiment that has been conducted in the last 50 years, and it's happening in real time, and we don't know the results.

Ramanan (31:37) Can we sort of end, not so much on a happy note, because that may be too much to ask, but on a constructive note as it relates to what lies ahead? So what are one or two examples of policy measures that all of us can advocate for to address, not all of these issues, but at least some of them? How should we act, as an engaged citizenry that buys into this argument? What should we do now, besides use AI less?

Kate (32:11) And I'm not even sure that would be enough. I don't think this is an individual responsibility question, just like with climate change itself, this idea that we can fix it through monitoring our own individual carbon budgets. It's simply not the answer. This is a collective action issue.

So what should we do? Well, in many ways, we actually made some extraordinary progress if we really look to the AI Bill of Rights and the executive order under President Biden around artificial intelligence. That was spearheaded by people like Dr. Alondra Nelson. It had really important principles for governing questions around bias, discrimination, equality, transparency. And we've lost all of those now. They were all overturned within the first week of the Trump administration.

And of course, at the same time, we're in a world now where the EPA is apparently considering removing any restrictions in terms of greenhouse gas emissions for power plants. And we're being told that that is being driven by AI, because of course the demands on power are just so, so high. I mean, Eric Schmidt was testifying just last week saying that the AI sector is currently using about 3% of energy and could go up to 99%, which was a number even I hadn't heard.

So, you know, how do we think about the sorts of policy interventions that you would need that affect both the models themselves, so the inner workings of the models, what needs to be transparent, the quality of the data that they're trained on. These are all issues, by the way, that the EU AI Act actually does address. Now, not a perfect act by any means, but it is indeed the first omnibus piece of legislation that addresses artificial intelligence. And so it actually managed to address many of these internal technical questions, I think, quite adequately.

But what it doesn't do is look at these environmental questions. So they're really seen as broader questions that should be regulated by member states individually. So what do we need? Well, we certainly need something like the AI Act here in the US. We're not going to get that under the current administration.

And in fact, what's even more concerning is that just on Friday, the House passed a bill that would remove all state-based regulation around artificial intelligence. Yeah, so that's a very big deal, because many of those state legislatures were actually doing very well on restricting core problems, in terms of, again, how people might be assessed through emotion recognition on people's faces.

Ramanan (34:34) Yeah, I saw that.

Kate (34:52) Tracking people in the streets, facial recognition generally, all sorts of questions around how AI could be used to penalize people in terms of their health insurance. I mean, the list goes on, but that was happening at a state level. So removing all of that regulation is actually very serious. So I would almost say we would look to all of the good things that have been done in the last five years and actually kind of preserve them as law.

So the focus has moved to looking at the EU. And also, I think in the US, local communities actually have an enormous role to play here. The place where we've seen real pushback on data centers and their water usage and their energy usage has come from local communities. We've seen the local communities in South Memphis holding enormous public meetings, protesting, and, I believe, now suing Elon Musk's xAI over this polluting data center.

So perhaps the units of governance have shifted away from the federal level and even the state level, now to the community level. And I think certainly from a climate change perspective, we are really fighting uphill, given the way that the federal government has, in many ways either overturned the regulations we had or criminalized the organizations who were doing this work.

Ramanan (36:13) Well, it's interesting, you know, if rejecting or rebutting is being pushed down to the community level, you would actually argue, on a competing philosophy, that it should be a bipartisan cause, so to speak, right? "Who wants the big bad federal government?" is something I hear from one group of people. And if that is the case, let our communities just reject something showing up from outer space to destroy their air.

Kate (36:44) If they know.

Ramanan (36:45) I thought of something else: if they know, if they know. And there's a role for all of us to play in that respect. It turns out I do have another question, because you're in the academy, and an issue that is really on my mind these days is the effect of AI on pedagogy and research in our elite higher educational institutions.

(37:12) This fall, when entering freshmen come to college, it'll be the first near-AI-native generation to be on these college campuses. How should universities react? What should universities be doing? What are two or three things they should all be doing no matter what? What are you seeing at USC or in France?

Kate (37:30) I mean, I think across the universities there have really been two ways that people have been dealing with this issue. One is to try to ignore it altogether and go back to things like pen-and-paper exams, really double down on trying to avoid AI where possible. And the other is to encourage the use of AI at every possible level, to give people different sorts of assignments that should be using AI, in some cases minimally or maximally. And that's all fine. Honestly, I think some mix of those things is going to happen.

I think there's something in the middle that's much more interesting, which is that we're still thinking about AI as consumers. We're looking at the front end, we're just using it; we're not seeing all of these resources, all of these laborers, all of these critical questions around bias and data provenance.

None of that's visible if you're just looking at, if you're trapped at, that clean, bland front end of a large language model. We need to open these up. Not just technically. Obviously, some people, if you're in a humanities degree, don't want to be coding LLMs. But maybe they can start to look at the bigger infrastructures that sit behind them.

Maybe we can start to have classes that show you how much water, how much energy, what happens to the labor, what happens to the minerals every time you use one of these systems, so that you have a critical attunement and an awareness of what you're actually doing. For example, a study came out this week that showed every time you generate a 30-second-long video using a text-to-video model, that is the equivalent of running your washing machine for 24 hours. Right? Just generate, and people do this: generate a picture, generate a video, don't even think about it. And if you could just see, or if you knew, that

these are enormously polluting, you know, energy-intensive, well, you would think about it differently. But you might also pressure companies to produce models that didn't use that much energy. Why do they have to use as much? There are different ways of designing everything, from the chipsets to the data sets, to actually reduce those numbers. Because those numbers are really shocking, and they're going up; they're going in the wrong direction. So that's the issue.

Ramanan (39:32) Yeah, you would think about usage differently. In fact, now that you've said that to me... In your teaching, do you touch on any of this?

Kate (39:59) Yes, well, it's interesting because, you know, as somebody who's very much focused on the research side these days, I run large-scale research projects, but I do also go and teach and visit classes around the world to talk about these questions. And in many cases people simply don't know.

I mean, this is the issue: we don't yet have a strong educational curriculum that shows how you can teach sort of human-focused AI. This, to me, I think is one of the most important things that we're not getting in core university curricula, because it really is a complete mindset shift. And you think differently.

You then have, I think, some sort of agency and understanding. Not just, I'm getting an answer from an AI system, but why this answer? How does a large language model work? What does it not do very well, so what should I trust and what shouldn't I trust? Why and when should I be fact-checking these systems? What are their full resource implications? What are the things that, if I cognitively outsource them, maybe I want to keep learning for myself? What muscles do I want to keep strong, and what things am I prepared to say, let the AI do it? These are the sorts of questions that I think are really important from a pedagogical perspective.

Ramanan (41:17) And you know, one observation I have, and it's not specific to AI: if a research paper falls in an academic forest and no one is reading it, does it really exist? And so my humble request for people doing brilliant work is that this stuff has to get out of specialized journals and into mainstream media, even with all of its issues and challenges. So that's my request. Your book, for example, was just such an amazing act of visibility, right, of revealing. So more, more please.

Kate (41:58) More of that. And I do think this is one of the most urgent challenges right now: the research is out there, but people aren't seeing it. So, you know, for me, I've spent my career really trying to think about how we make this a more democratic debate. And that has meant, yes, writing books that anyone who's interested could pick up and read. It means making large-scale visual installations that, if you go to MoMA or if you go to Venice, you can go and see and talk about with your kids, with your friends. It's putting it in the public square. I think that's really important. It means writing op-eds. It means doing podcasts. These are actually really important public acts of service.

Ramanan (42:39) We'll just speak every month. We'll just do like a monthly podcast. And by the way, listeners, one of the things Kate's talking about is the Anatomy of an AI System project, which won a Design of the Year award, among other things. There's a website, and you can find your way to it. My eyes nearly fell out of my brain looking at it. It's amazing. So, Kate, we really would like to speak to you every month, but that's not feasible. And you've given us a lot of your time, and I'm deeply grateful for that. Before we let you completely go, who should we speak with next?

Kate (43:14) Ooh, well, first of all, it's been such a pleasure to speak with you. And I love your podcast, so please keep up the important work yourself. That's already doing a large part of this sort of recirculation of research that we need. There are so many people right now who I think are worth talking to. I mentioned a couple who I've collaborated with.

I always think that Emma Strubell and Sasha Luccioni, particularly if you want to hear more about the environmental impacts of AI, would be fantastic for you to speak to. I also think Alondra Nelson; she is hosting a conference that I'll be speaking at in Princeton next week called Rare Earth, which is looking at the impact of artificial intelligence on critical minerals and the policy and ultimately the materiality of extracting these systems at such speed. So I think Alondra is always fascinating on these questions as well.

Ramanan (44:07) Okay. Kate, thank you very much.

Ramanan (44:14) Thank you for listening. Please visit inourhands.earth for the full transcript of this podcast, other information, or to send us a message.
