Every time we turn to artificial intelligence to create a picture, write a letter, or ask a chatbot a question, it costs the planet dearly. Researchers from startup Hugging Face and Carnegie Mellon University have studied common use cases for neural networks and determined how much energy they require.
Generating a single image, for example, consumes roughly as much energy as fully charging a smartphone, while a thousand text-generation requests require only about 16% of that amount.
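The comparison above can be put into rough numbers. A minimal arithmetic sketch, where the smartphone battery capacity (15 Wh) is an assumed round figure, not a value from the study:

```python
# Illustrative arithmetic for the image-vs-text comparison.
# PHONE_CHARGE_WH is an assumed round number, not a study figure.
PHONE_CHARGE_WH = 15.0              # assumed energy for one full charge

image_wh = PHONE_CHARGE_WH          # one image ~ one full phone charge
text_1000_wh = 0.16 * image_wh      # 1,000 text requests = 16% of that
per_request_mwh = text_1000_wh / 1000 * 1000  # mWh per single request

print(f"one generated image: {image_wh:.1f} Wh")
print(f"1,000 text requests: {text_1000_wh:.1f} Wh")
print(f"single text request: {per_request_mwh:.1f} mWh")
```

Under these assumptions, a single text request comes out to a few milliwatt-hours, which is why per-query text generation looks cheap even at scale.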
Training large AI models is incredibly energy-intensive, but that's only one piece of the puzzle. The majority of their carbon footprint comes from actual use.
The new study is the first to estimate the carbon emissions associated with using artificial intelligence models across different tasks.
Sasha Luccioni, a researcher at Hugging Face who led the project, and her team studied 10 popular AI tasks on the Hugging Face platform, including question answering, text generation, image captioning, and image generation. The experiments were run on 88 different models.
For each task, Luccioni ran 1,000 prompts and measured the energy consumed with Code Carbon, a tool she developed that estimates the energy a computer draws while running a model.
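The core idea behind such a measurement is integrating power draw over a workload's runtime. A simplified sketch of that approach (not the Code Carbon library itself, which reads actual hardware counters; the constant 300 W draw and the `fake_inference` workload are hypothetical stand-ins):

```python
import time

def measure_energy_kwh(workload, power_draw_watts, ):
    """Estimate energy use by multiplying an assumed constant power
    draw by the workload's wall-clock runtime. The real Code Carbon
    tool samples hardware power sensors instead of assuming a
    constant draw; this is only a sketch of the idea."""
    start = time.monotonic()
    workload()
    elapsed_s = time.monotonic() - start
    joules = power_draw_watts * elapsed_s   # W * s = J
    return joules / 3_600_000               # J -> kWh

# Hypothetical workload standing in for a model inference:
def fake_inference():
    sum(i * i for i in range(100_000))

kwh = measure_energy_kwh(fake_inference, power_draw_watts=300)
print(f"estimated energy: {kwh:.9f} kWh")
```

Multiplying such per-run estimates by a regional grid carbon intensity (kg CO2 per kWh) is what turns energy figures into the emission equivalences quoted below.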
Image generation was by far the most energy- and carbon-intensive AI task.
Creating 1,000 images with a powerful model such as Stable Diffusion XL emits about as much carbon dioxide as driving 6.6 km in an average gasoline-powered car.
By contrast, the most efficient text-generation model they studied emitted only as much CO2 as driving 0.9 meters in the same kind of vehicle.
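The driving equivalences can be converted back into grams of CO2. A quick sketch, assuming roughly 250 g CO2 per km for an average gasoline car (an assumed round emission factor, not a figure from the study):

```python
# Converting the article's driving equivalences into grams of CO2.
# G_CO2_PER_KM is an assumed round figure for an average gasoline car.
G_CO2_PER_KM = 250.0

def driving_grams(km):
    """CO2 emitted, in grams, by driving the given distance."""
    return km * G_CO2_PER_KM

images_1000_g = driving_grams(6.6)     # 1,000 generated images
text_1000_g = driving_grams(0.0009)    # most efficient text model, 0.9 m

print(f"1,000 images: ~{images_1000_g:.0f} g CO2")
print(f"1,000 texts:  ~{text_1000_g:.3f} g CO2")
```

Under this assumption the two workloads differ by several thousand times, which matches the article's framing of image generation as the outlier.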
The volume of these emissions is growing rapidly. The boom in generative AI has prompted major technology companies to integrate powerful AI models into products ranging from email to word processors. Today, these models are used by millions, if not billions, of people every day.
The team also found that large generative models consume far more energy than small, task-specific ones: generating movie reviews with a general-purpose model, for example, required 30 times more energy than using a dedicated model. The reason is that large models try to handle many tasks at once, such as generating, classifying, and summarizing text, instead of a single task like classification.
Luccioni hopes the study will encourage people to be more selective in their use of generative AI and, where possible, choose more specialized models that create a smaller carbon footprint.
Google once estimated that the average online search uses 0.3 watt-hours of electricity—the same as driving 0.5 meters in a car. Today, that number is likely much higher, as generative AI model