Offsetting My Own GPT-3 Carbon Emissions in 2022
Although most cloud environments, to my understanding, already run on renewable energy, I still think it would be good to instill more environmental consciousness into the DNA of the multimodal/OpenAI/language model community.
We are already familiar with the environmental impact large language models like GPT-3 have at training time, but inference-time requests carry an additional environmental cost, and that's something we should talk about more in the community.
To start, I’m interested in just the basic math: what is the total energy consumption of a single 2,048-token GPT-3 DaVinci prompt request? If we can get a reasonable back-of-the-napkin calculation going, I’d like to offset my own inference costs in 2022 to the best of my ability. Maybe there’s a “per token, per engine” environmental cost we could establish, which would make it easier to calculate the amount needed to offset the damage.
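To make the arithmetic concrete, here's a minimal sketch of the kind of "per token, per engine" estimate I have in mind. Every number in it is a placeholder assumption (the energy per request, the grid carbon intensity, the yearly token count), not a measured value; the point is just the shape of the calculation:

```python
# Back-of-the-napkin sketch of a "per token, per engine" carbon estimate.
# All constants below are hypothetical placeholders, NOT measured figures.

# Assumed inference energy for one 2,048-token DaVinci request, in kWh.
ENERGY_PER_REQUEST_KWH = 0.004

TOKENS_PER_REQUEST = 2048

# Assumed grid carbon intensity, in kg CO2e per kWh (varies widely by
# region and energy mix; renewable-heavy datacenters would be far lower).
CARBON_INTENSITY_KG_PER_KWH = 0.4


def co2e_per_token_kg(energy_per_request_kwh=ENERGY_PER_REQUEST_KWH,
                      tokens_per_request=TOKENS_PER_REQUEST,
                      carbon_intensity=CARBON_INTENSITY_KG_PER_KWH):
    """Estimated kg CO2e emitted per token under the stated assumptions."""
    return energy_per_request_kwh * carbon_intensity / tokens_per_request


def annual_offset_kg(total_tokens):
    """Total kg CO2e to offset for a year's worth of token usage."""
    return total_tokens * co2e_per_token_kg()


if __name__ == "__main__":
    yearly_tokens = 5_000_000  # hypothetical annual usage
    print(f"per-token estimate: {co2e_per_token_kg():.2e} kg CO2e")
    print(f"annual offset:      {annual_offset_kg(yearly_tokens):.2f} kg CO2e")
```

With real measurements (or published numbers) swapped in for the placeholder constants, the same two functions would give a usable per-engine offset figure.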
If you’ve seen numbers anywhere or have experience calculating this sort of thing, please let me know. I’m excited to share my usage numbers and experience offsetting my environmental impact next year!