July 25, 2023

AI and climate change: The good and the bad

Jim Fruchterman (Tech Matters) and Trisha Ray (Observer Research Foundation) discuss the applications of and challenges posed by AI and tech in climate mitigation.

7 min read

Artificial intelligence (AI) and information and communication technologies are increasingly seen as crucial tools to tackle the climate crisis. Technological innovations in renewable energy and the use of predictive AI for climate modelling are gaining traction as countries work towards their net-zero goals.

However, tech companies are also some of the biggest carbon emitters. For example, producing the semiconductors, or silicon chips, found in most gadgets and products today requires vast amounts of energy, which accounts for a majority of the carbon output of electrical devices. While many tech companies have announced net-zero policies, these fall far short of what is needed. Moreover, advancements in AI and tech are exacerbating global inequalities, as most of the social and economic benefits of AI accrue to only a privileged few. 

On our podcast ‘On the Contrary by IDR’, we sat down with Jim Fruchterman of Tech Matters and Trisha Ray of Observer Research Foundation to discuss the role of technological developments in climate solutions, how tech giants influence this space, and what needs to change to enable countries to use tech more efficiently and for the benefit of all, especially those most vulnerable to the crisis. 

Below is an edited transcript that provides an overview of the guests’ perspectives on the show.    

AI can accelerate climate action

Jim: Currently, AI is mainly being used by the academic community to create climate models, to understand [the] climate crisis, to [analyse] how things are, [and] to [provide a] forecast into the future… I think people are using technology more broadly and [in] a lot of ways to deal with climate. The biggest area that you see [this happening in] is energy. People [are] trying to innovate around energy: solar panels [for instance], which are not necessarily considered an advanced technology any more, [and] wind power, [which is also] a big part [of this].

Trisha: There is a lot of good that tech like AI can do. [It can] help us track, predict, and mitigate climate change. We could even use AI to invent new synthetic materials to replace the ones we currently use. [We can build] these new materials that are more resistant to heat or materials that can help us store solar energy better. There’s also a lot of interest in a concept called smart grid. And India has its own National Smart Grid Mission, where one could use AI to detect patterns in how people use electricity and how this may change over time, and then act accordingly.
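The smart grid idea Trisha mentions can be sketched in a few lines. This is a toy illustration, not the actual method of India's National Smart Grid Mission: it assumes hourly meter readings, learns a household's average hourly load profile, and flags hours that deviate sharply from it. The function names, the data, and the tolerance value are all hypothetical.

```python
# Toy sketch of smart-grid pattern detection: learn a typical hourly
# load profile from past days, then flag unusual demand in a new day.

def hourly_profile(readings):
    """Average demand (kWh) per hour of day.
    `readings` is a list of days, each a list of 24 hourly values."""
    hours = zip(*readings)  # group the same hour across days
    return [sum(h) / len(h) for h in hours]

def flag_anomalies(day, profile, tolerance=0.5):
    """Return hours where demand deviates from the learned profile
    by more than `tolerance` (as a fraction of the expected value)."""
    flagged = []
    for hour, (actual, expected) in enumerate(zip(day, profile)):
        if expected and abs(actual - expected) / expected > tolerance:
            flagged.append(hour)
    return flagged

# Two days of flat 1 kWh usage, then a day with an evening spike.
history = [[1.0] * 24, [1.0] * 24]
profile = hourly_profile(history)
today = [1.0] * 24
today[19] = 3.0  # unusual 7 p.m. spike
print(flag_anomalies(today, profile))  # [19]
```

A real grid operator would use far richer models, but the shape of the task is the same: learn how people use electricity, notice when and how that changes, and act accordingly.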

We could use AI to invent new synthetic materials to replace the ones we currently use. | Picture courtesy: CGIAR Climate / CC BY

However, unregulated use of AI can have serious consequences 

1. AI models have their own carbon footprint

Trisha: We talk about tech for climate in a way where we think it’s a clear-cut solution that will neatly fix all our problems. [But these] applications have a cost… One way to view this problem is to understand that it takes a lot of energy to train an AI model. There’s a study that attempts to quantify the carbon emissions [of AI]. And it says that training a single natural language processing model emits as much carbon dioxide as a car in its entire lifetime. That’s a lot.
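The kind of quantification Trisha refers to rests on simple arithmetic: emissions scale with the energy a training run draws and the carbon intensity of the grid supplying it. The numbers below are illustrative assumptions, not figures from the study she cites.

```python
# Back-of-the-envelope sketch: CO2 from training = energy used x grid
# carbon intensity. All inputs here are hypothetical.

def training_emissions_kg(gpu_hours, watts_per_gpu, grid_kg_co2_per_kwh):
    """Estimate training emissions in kg of CO2."""
    kwh = gpu_hours * watts_per_gpu / 1000  # watt-hours -> kilowatt-hours
    return kwh * grid_kg_co2_per_kwh

# e.g. 10,000 GPU-hours at 300 W on a 0.5 kg CO2/kWh grid
print(training_emissions_kg(10_000, 300, 0.5))  # 1500.0
```

Even this crude sketch makes the point: the same training run emits very different amounts of CO2 depending on how clean the local grid is.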

2. The existing tech business model is leading to data colonialism

Trisha: The social and economic benefits of AI are accruing to a privileged few countries. A major concern of developing and underdeveloped countries is that [they] have data flowing out of their citizens, and they have services and products flowing in. So they’ve become data suppliers and product buyers. The name for this concept is data colonialism. Sub-Saharan Africa, Latin America, the Caribbean, and South and Central Asia [are] really falling behind in AI development and use in terms of start-ups, funding, skills, and so on.

Jim: [Tech companies] farm you for your data and become billionaires. What we need to do is see data, and the AI that is built on top of that data, used more in the interest of global society, local communities, [and] states. Nithya Ramanathan and I have written several pieces on the need to decolonise data, aimed not at the companies, because they’re doing [commercial] data processing, but at governments [and] nonprofits, to understand that while they’re delivering community empowerment, this can’t be at the price of extracting data from the community and then using it to punish that community.


3. Tech companies aren’t doing enough to offset their emissions

Jim: Tech companies are very powerful, and have this habit of killing legislation that works against their interests… We haven’t had any big privacy or data use laws passed in major countries, including India, in quite a while… And the tech companies, they have social good arms. But the idea of the social good arm is to say, how do we spend, you know, 0.1 percent of our profits, or 1 percent of our profits? And then let’s not talk about what we do [the rest] of the time. And so when we [in the social sector] are doing social good, because we’re 10 or 15 years behind the times, we’re quite tiny in terms of our use of these models.

Trisha: [To combat emissions caused by tech] many major technology giants—Amazon, Microsoft, Alphabet, and Facebook—have all announced net-zero policies and initiatives. But there’s still a lot that needs to be done. The first and most fundamental issue is that these net-zero initiatives often rely on something called the carbon offset system. For example, if Amazon cuts down 10,000 trees in a forest, it can plant a number of trees somewhere else in the world and declare that [its] net impact on the environment is zero. But that’s not how things work in the real world; there is no simple equivalency. What Amazon should be doing is fundamentally changing how [it] operates, and being more transparent about the environmental impact of [its] operations.

What needs to change?

1. Using AI for communities

Jim: I want to see AI being used to help the local farmer, the local community leader, to better understand what’s going on, what might be going on, and help them make decisions within their own priorities and context. This is called a landscape focus—how do you actually make better things happen at the local level? And local leaders don’t think very much about climate. They’re thinking about how to increase farmers’ incomes when we have less water than we’ve ever had before. They’re kind of aware that things are changing, but they want to focus on local issues. And so a lot of this landscape approach is to try to make those local leaders, smallholder farmers, producers more powerful, and this is being led primarily by the traditional aid and biodiversity and conservation organisations, because they realise they’re never going to meet their goals if local communities are not making climate-smart decisions.

The best solutions come from the people who live with the problems.

Trisha: I think the best solutions come from the people who live with the problems. There’s an interesting example in relation to environmental crimes and the Amazon. The situation there is a little tricky, because you can’t always rely on the government or local authorities, as they might often be the ones participating in displacing communities and encroaching on the forest. The Brazilian government also dismantled the organisation in charge of environmental monitoring and protection. So there are not a lot of official resources you can use. A nonprofit called Rainforest Connection has [installed] an AI [system] in the Amazon that monitors ambient sound and alerts local team members if it detects logging or poaching activity [through] sounds [like those] of chainsaws. So they’re partnering with local indigenous tribes and their rangers to use that AI to monitor and then combat poaching and illegal logging.
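The core idea behind acoustic monitoring of the kind Trisha describes can be sketched very simply. This toy version is not Rainforest Connection's actual system, which uses trained neural networks: it just estimates the dominant frequency of an audio window with a naive DFT and raises an alert if that frequency falls in an assumed "chainsaw band". The band limits and the synthetic clip are hypothetical.

```python
import math

# Toy acoustic-detection sketch: find the loudest frequency in an audio
# window and alert if it sits in a band associated with chainsaw noise.

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) with the largest magnitude in a naive DFT."""
    n = len(samples)
    best_freq, best_mag = 0.0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_freq, best_mag = k * sample_rate / n, mag
    return best_freq

def chainsaw_alert(samples, sample_rate, band=(50.0, 200.0)):
    """Alert if the loudest tone falls inside the (assumed) chainsaw band."""
    f = dominant_frequency(samples, sample_rate)
    return band[0] <= f <= band[1]

# Synthetic one-second clip: a 100 Hz hum, sampled at 1 kHz.
rate = 1000
clip = [math.sin(2 * math.pi * 100 * t / rate) for t in range(rate)]
print(chainsaw_alert(clip, rate))  # True
```

The hard part in practice is not the signal processing but distinguishing a chainsaw from rain, birds, and motors in noisy rainforest audio, which is why production systems rely on learned classifiers rather than a fixed frequency band.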

2. Prioritising data sharing for social good

Jim: There is a need to shift the power from a unilateral extractive model to one where we understand that there’s such a thing as a rightful data owner, and that we’re going to use their data not, for instance, with checkbox consent, but with meaningful consent, so that meaningful benefits flow to that community. And I think one of the things that’s been identified as a big need is that we don’t have an easy way to share data right now for social benefit. So can there be an open-source licence or a Creative Commons licence for private data that actually gets used to benefit a farmer? So that they know that if their data is used, it’s actually going into a model that’s going to help their communities help them, more than it’s going to help the fertiliser maker, the maker of the agricultural equipment, the supply chain actor who’s going to try and by and large use data to get a lower price from them. How are we actually going to shift that power? And that’s a big theme for how I think the social sector should discard the for-profit business models and actually engage in empowering the communities that they serve.

3. Building tech fluency

Trisha: A big problem with policymakers is that there’s a lack of tech fluency. It is the duty of industry, of civil society, of the media to highlight these issues. One example is the energy impact of bitcoin mining. It’s everywhere; we’re all now aware of it, and that’s because it was [covered widely] in the media. There is a role that the media can play in bringing some of these more niche issues onto our radar. We may also want to think about whether we should even be using AI in some situations. There are two fundamental principles of international humanitarian law: necessity and proportionality. So is AI necessary in a given context? Are there alternate solutions that are low-tech but perhaps less intrusive? Is the sheen of neutrality of AI solutions distracting from some harmful consequences that they have? We should be [interrogating] these solutions rather than implementing them blindly.

4. Using popular media to create public awareness on tech and AI

Jim: I think the role of Hollywood and Bollywood in changing consumer opinions should not be discounted. One of my early backers is a guy named Jeff Skoll, who was one of the first two people at eBay. He created a movie company in Hollywood that was going to highlight social issues, and we all joked about how much money he was going to lose trying to do this. But Participant Media has actually worked. He backed An Inconvenient Truth, which was one of the more influential films, certainly in the United States, in terms of awareness of climate change. I do think that it’s not just traditional news media but also our film industries that are often opinion and sentiment leaders, if they can turn something into a story; it’s not easy to turn AI into a story unless it’s, you know, the Terminator movies or something like that. So we have to think about this, but I believe it does move a society.

You can listen to the full episode here.

Know more

  • Read this article to learn more about how AI is assisting reforestation efforts.
  • Read this article to learn more about the relationship between ICTs and climate change.
  • Read this article to learn more about climate tech start-ups in India.

ABOUT THE AUTHORS
India Development Review

India Development Review (IDR) is India’s first independent online media platform for leaders in the development community. Our mission is to advance knowledge on social impact in India. We publish ideas, opinion, analysis, and lessons from real-world practice.
