These days, it seems as if nothing works without artificial intelligence. There’s hardly an area of life into which AI cannot be integrated. AI is also booming in the field of climate protection: in recent years, many AI-based applications have been developed to support sustainability goals. However, in addition to critical social aspects, the ecological downsides of AI are becoming increasingly clear: the technology consumes vast amounts of energy, water and other resources.
This makes it all the more important to discover how artificial intelligence systems can be designed more responsibly. Friederike Rohde is doing exactly that. She’s a sustainability researcher and sociologist of technology at the Institute for Ecological Economy Research (IÖW) and is committed to shaping the digital transformation sustainably. She is involved in the ‘Bits & Bäume’ (‘Bits & Trees’) alliance and runs the SustAIn project at the IÖW. Together with colleagues, AlgorithmWatch and the DAI Lab at TU Berlin, she has developed sustainability criteria for AI-based applications as part of the project.
We spoke to her about the ecological footprint of AI-based applications, why new models are becoming increasingly energy- and water-hungry and what adjustments can be made to reduce the systems’ emissions.
Friederike, you developed a sustainability index for artificial intelligence together with colleagues in the SustAIn project. Why is this needed?
Not only does the technology have a large ecological footprint, but many social values, norms and ideas are also inscribed in AI. This gives rise to inherent risks, which is why responsible and sustainable technology design needs to take centre stage. We must look at the entire life cycle of all AI systems. And then think about how to develop systems that have as little impact as possible on the environment and society, as well as on economic dynamics such as market concentration.
You launched the project in 2020. Looking at AI from a comprehensive sustainability perspective was still pretty uncharted territory back then, wasn’t it?
Yes, taking such a comprehensive sustainability perspective on AI was definitely uncharted territory at the time. What already existed—and what had also stimulated many discussions about the sustainability of AI—was a paper by American scientists led by Emma Strubell, who had calculated the amount of energy and resources required to train AI models.
SustAIn – A sustainability index for AI
The SustAIn project aims to develop criteria for the sustainability assessment of artificial intelligence applications and to survey their sustainability effects using example cases.
The joint project is funded by the BMU’s AI Lighthouses funding initiative. It’s being carried out by the Institute for Ecological Economy Research (IÖW) together with AlgorithmWatch and the Distributed Artificial Intelligence Laboratory (DAI Lab) at Technische Universität Berlin.
The study by Strubell and colleagues found that developing an AI model generates as many emissions as five cars do over their entire life cycles.
Exactly, this figure went around and made a big splash. But it was also misquoted in some cases. For example, it was said that the high emissions were generated during training. In fact, these figures relate to the search for an optimal architecture for the artificial neural network—i.e. the model development. This is part of the research process and does not happen in every company, as pre-trained models are often used.
However, these were the first models that set new standards in natural language processing—so-called transformer models—and they weren’t that big. In the meantime, models have become much larger and have many billions of parameters. As a result, the resource consumption during training is very high for many of the models in use, such as GPT-3, the predecessor of the model on which the famous chatbot ChatGPT is based. A recent study by Sasha Luccioni and other scientists has shown that training such a large model can emit up to 552 tonnes of CO2. However, this also depends on which energy sources are used to train the model. If electricity from renewable energy sources is used for the computing power, these emissions are greatly reduced.
So can we expect energy and resource consumption to continue to rise in the future as a result of ever larger AI applications?
This can be assumed. If you look at the current figures for large language models, they already consume a lot of energy during training. And we still know very little about how much is added for the actual application—i.e. the inference. Luccioni’s study looked at various models and compared them with each other. What I also found especially interesting from the study was that around 30 percent of AI’s carbon footprint is simply due to the provision of the infrastructure. So even if AI isn’t used for computing processes, it still has a substantial carbon footprint.
The fact that more and more computing power is being utilised is a major problem from an ecological perspective. However, it has to be said that major cloud providers are doing a lot to reduce energy and resource consumption, if only from a cost perspective. Some are transitioning to renewable energies. And some of them also provide tools that can be used to measure or at least estimate energy consumption during model development and training.
What exactly does that mean, the provision of infrastructure? The provision of servers and correspondingly fast networks and so on?
Exactly, this is called ‘idle consumption’ in the tech world. The servers need to be powered and cooled even when no computations are running—and this applies to a relatively large number of servers.
The increasing use of AI makes us dependent on an unsustainable digital infrastructure. Of course, we already had this whole infrastructure issue in the debate about digitalisation and sustainability. But the issue is becoming increasingly important as computing capacities are scarce. With so many companies looking for capacity, they turn to data centre providers in Asia, for example, because they are not as expensive as the large cloud providers. But they also often have much lower environmental standards.
We currently have a mix of fossil fuels and renewable energies, which means that energy consumption is always linked to CO2 emissions. But the moment we operate data centres with 100 percent renewable energy, their energy consumption is decoupled from CO2 emissions, isn’t it? That makes it one of the most important levers.
This is definitely an important factor, but not everything. In the study on language models cited earlier, a model trained with renewable energies only produces 30 tonnes of CO2 emissions. If the training were powered with fossil fuels, this figure would be 550 tonnes. You can make huge leaps if you use renewable energies. However, two other aspects are also relevant. Firstly, energy generation based on 100 percent renewable energies will only work in Germany and other countries if absolute energy consumption is also reduced. If all possible sectors are electrified, including the transport sector and so on, and we use more energy for training and using AI systems, then at some point we will have a conflict of use.
Ultimately, a large company like Google has an incredible amount of capital at its disposal. They can simply build their own renewable energy plants. The big tech companies are now the most important customers for renewable energy in the USA. But sustainable digitalisation doesn’t just mean switching to renewable energies. Especially as we also have incredible competition for land in Germany. There is a battle for every wind turbine. So the problem cannot simply be solved with renewable energies.
So what needs to be done to stop AI’s energy consumption from continuing to grow?
Ultimately, of course, it’s always about putting this into perspective. In other words, questioning what the ecological benefits are of digitising a process using AI. How much greater is the positive environmental effect of an AI project compared to the emissions from hardware manufacturing and training? How useful is it to use an AI system in a specific area? But we don’t always ask this question.
And many applications of AI don’t try to help the climate at all. As part of the SustAIn project, we conducted a case study on online marketing and advertising. It turned out that this area generates insane amounts of data. AI is being used on a massive scale to encourage people to consume more. This is doubly unsustainable. Not only is the goal of using AI systems—more consumption—unsustainable, but also the practices themselves, i.e. how AI is used for tracking and personalisation.
The energy aspect is talked about most when we discuss sustainability and artificial intelligence. But other aspects also have an impact on the sustainability of AI.
Yes, another aspect that we have analysed is the indirect effects of AI such as material consumption. A self-driving car, for example, contains up to 30 hardware components, mainly sensors that record data or lidar technology, i.e. laser scanning. Increasing use of AI means that more hardware is used, which we need to consider.
We know from research that the biggest carbon footprint of digitalisation is generated in the manufacture of devices, usually more than half. And of course, we also need hardware for the computing power, i.e. the servers.
This also has social implications, because the devices are often manufactured in the Global South, in countries that may have poor environmental and social standards and where products are made under inhumane conditions. The social and ecological aspects are very closely linked. In my opinion, this also applies to water consumption.
How is water consumption related to AI?
Data centres need water for cooling and most use drinking water. There are also options for using grey water or passive cooling, for example. But these options require either very modern data centres or major technical modifications to existing infrastructure.
Cooling with drinking water is a major problem. Many data centres are being built in areas where there are severe water shortages. In 2021, for example, there were protests against data centres in Chile. A study by the Chilean water authority Dirección General de Aguas (DGA) showed that the new data centres that Google wanted to build would consume 169 litres of drinking water per second. This caused a huge outcry. There was a water shortage in this region and the construction of a new data centre led to a conflict of use. This clearly shows that ecological problems are connected to social distributive justice. In other words, who has access to which resources and at what price? This is certainly an issue that needs more attention.
Water consumption of AI
Servers not only require electricity to process data. Heat is also generated as a by-product. The servers therefore need to be permanently cooled to prevent overheating — usually using water. When training large language models, millions of litres of fresh water are evaporated to cool the power plants and AI servers.
Areas with lots of cheap solar energy are particularly attractive for data centres. But water is often scarce in these locations. Unsurprisingly, protests against data centres are on the rise, as Paris Marx reports in this talk: The Backlash to the AI-Fueled Data Center Boom.
What are the other key levers for sustainable artificial intelligence? Where do we need to focus our efforts?
An integrated sustainability perspective must include ecological, social and economic aspects. Even if you train your AI with renewable energies, it can still be discriminatory, not transparent or monopolise the market.
So, I think there are three levels. The first, in my opinion, concerns the paradigms in development: we need a paradigm shift in the machine learning community towards smaller, more energy-efficient models—or rather, towards the perspective that bigger is not always better, focusing instead on lower energy consumption or transparency. One very important factor here is documenting how the system was developed and who the vulnerable groups might be.
Another aspect is certainly the political dimension, i.e. how AI is regulated. The AI regulation passed in the spring is definitely a step in the right direction. The regulation will certainly have an impact on companies becoming more transparent because they will have to document the energy consumption of their AI applications. But we are still at the beginning. We can already see that more and more data centre operators in Asia are being used, where far fewer environmental and social standards exist. Large data centre operators are relatively expensive and many small start-ups cannot afford them.
The third area is strengthening the social debate. People are still not aware that AI consumes an insane amount of energy and resources. A single ChatGPT query requires significantly more energy than a Google search.
What exactly is the new AI regulation about?
The AI Regulation, or AI Act, was adopted at the European level and is one of the first regulations on AI worldwide. In addition to many other important aspects of AI regulation, it also includes clauses relating to environmental sustainability. It states that there must be standardised documentation procedures for the energy and resource consumption of AI—for example, this consumption must be measured, estimated and documented.
AI Act
The Artificial Intelligence Act (AI Act) is the European Union’s response to the risks of artificial intelligence. It’s aimed at companies and public authorities that use AI systems in the EU. The Act is intended to provide a legal basis for the development and use of AI in order to avert or minimise potential damage caused by AI.
The AI Act mentions the environment as a legal asset worthy of protection. It introduces standardised reporting and documentation procedures for the efficient use of resources by AI systems. These are intended to help reduce consumption and promote energy-efficient development of AI.
I think the findings from the SustAIn project have also contributed, because we were able to make scientifically sound recommendations. If you look at current developments in other countries, you can see that such regulations or initiatives are also being introduced. It’s relevant in the USA, for example, because the increasing energy and resource consumption of AI systems is becoming problematic there.
As far as I understand it, however, the AI Act only deals with the documentation of energy consumption. This means that environmental requirements aren’t included yet, right?
Yes, there are no environmental requirements; the act is about documenting the consumption of energy and resources. It doesn’t say that less energy and resources need to be consumed. If this now goes into national implementation, more consideration should be given to these aspects, such as identifying potential for improvement.
Where do you think the main responsibility for the development of sustainable AI applications lies?
The responsibility lies with both the organisations that develop it and those that implement it. We have also made this relatively clear in our criteria and indicators. Ultimately, the organisation that purchases the AI must also ensure that it has been developed responsibly. However, this also means that the developing organisation must document their process properly to be able to prove this.
But the political framework is also crucial. If this is in place, then the organisation is not solely responsible. We also wanted to make this clear with the criteria and indicators. And we have developed a self-assessment tool which includes the parts of the criteria that can easily be addressed by developers and deploying organisations. The tool is available online, where organisations can click through and see if the AI they use is developed sustainably.
13 Sustainability criteria for AI systems – self-assessment tool
With the digital self-assessment tool, organisations that develop AI themselves or purchase it externally can put their AI systems to the sustainability test. The tool is based on comprehensive indicators developed in the SustAIn project.
There are also tools for developers to check the sustainability of an AI application. The first thing that comes to mind is the Machine Learning Emissions Calculator.
Exactly, there is the Machine Learning Emissions Calculator and there is also CodeCarbon. The latter is a project that developed a software package: you insert a few lines of code into your program and it estimates the CO2 emissions of the computing power used. There are many other tools, and some of the major cloud providers also have their own.
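The basic arithmetic behind such calculators is simple enough to sketch by hand: energy drawn by the hardware, multiplied by the data centre’s overhead factor (PUE) and the carbon intensity of the local grid. The snippet below is an illustrative back-of-envelope estimate, not the code of either tool; the GPU count, power draw, PUE and grid figures are assumptions chosen purely for the example.

```python
def training_emissions_kg(gpu_count, gpu_power_watts, hours, pue, grid_kg_co2_per_kwh):
    """Rough CO2 estimate (kg) for a training run: energy (kWh) x grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_watts / 1000 * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 8 GPUs at 300 W for 100 hours in a data centre with PUE 1.5.
# Compare a fossil-heavy grid (~0.7 kg CO2/kWh) with a largely renewable one (~0.04).
fossil = training_emissions_kg(8, 300, 100, 1.5, 0.7)   # 360 kWh x 0.7  = 252 kg
green = training_emissions_kg(8, 300, 100, 1.5, 0.04)   # 360 kWh x 0.04 = 14.4 kg
print(f"fossil grid: {fossil:.0f} kg CO2, renewable-heavy grid: {green:.0f} kg CO2")
```

The same multiplication also illustrates the point made above about energy sources: with identical hardware and runtime, the choice of grid alone changes the estimate by more than an order of magnitude.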
If companies are interested in the energy consumption of their AI applications, then the current focus is probably more on saving costs.
Yes, of course, because companies are trying to optimise their AI applications in terms of costs. Having a model that requires less computing power but still has the same performance is of course economically worthwhile.
What role do open source applications play in sustainable AI development?
So, an open source model enables more players to gain access. And if you have a pre-trained model that is open source, then you don’t always have to retrain this model and can develop it further within the community. That can make perfect sense from a sustainability perspective.
You’ve listed many areas that need to be addressed to achieve more sustainable AI models. How do you think we can go about implementing these?
The fact that the AI Regulation has now come into force and is gradually being transposed into national legislation means that there will be no further regulation for the time being. But the fact that there’s an AI regulation at all is a good step, even if there is still room for improvement in many areas, of course.
However, we need to take a much closer look at the entire value chain. In other words, where do different stages of this chain take place and what regulation must be introduced? If you look at the working conditions of clickworkers, for example, the question is of course to what extent the regulation will help. And then perhaps we need to look at other legislation, such as the Supply Chain Act.
Perhaps you can summarise again: how does sustainable artificial intelligence work?
You can ask yourself whether sustainable AI in that sense even exists. It may be more sustainable than the status quo, but I would put a big question mark over whether there really is a completely sustainable AI.
Basically, sustainable AI means developing technology responsibly, i.e. looking at who the vulnerable groups are, making the system as transparent as possible and ensuring fairness. However, this also means documenting energy and resource consumption, looking at the extent to which this can be reduced and selecting data centre operators based on their carbon footprint. It can also make sense to make the model available as open source so that others can develop it further instead of training other models.
Friederike, thank you for the interview!
The post How can the energy guzzler that’s AI become more sustainable? Interview with Friederike Rohde (IÖW) appeared first on Digital for Good | RESET.ORG.