As the climate crisis deepens, driven in part by extractive Big Tech infrastructures, groups from the Majority World impacted by these technologies are developing small, resource-efficient, community-centered solutions to pressing environmental challenges. Over the next year, we will center those groups in a full-cycle initiative exploring how locally driven, climate-resilient AI technologies are designed and developed, and how they can be adopted in care-based and responsible ways.
Re-imagining and re-directing AI technologies
Dominant narratives about AI and climate change continue to be driven by a narrow set of actors, primarily large technology companies and Global Minority institutions whose priorities, resources, and worldviews are often far removed from the communities bearing the greatest burden of the climate crisis. These approaches tend to privilege large-scale, resource-intensive systems that rely on extractive practices, high energy and water use, or datasets and models that do not reflect the local environments where they are applied. In addition to reinforcing existing inequalities in how technology is developed and used, emphasizing these approaches limits our ability to imagine alternative ways of engaging with AI, obscuring the role frontline communities can play in challenging and redirecting these technologies toward justice.
To support the re-imagination and re-direction of AI technologies, we are launching our AI for Climate Action initiative today. Through this year-long effort, we will center Majority World leadership by mapping out their stories and collectively building a bottom-up framework to assess AI adoption. We will also support groups who are curious about these models through peer learning and experimentation in a digital playground, where they can explore tools in a safe environment.
By showcasing a myriad (or a pluriverse) of alternatives, we will illustrate that there are multiple ways AI technologies can be developed, proposing a non-binary approach to how models can be imagined and built: neither uncritical hype nor outright refusal of their design or use, but a grounded approach that speaks from the contexts of affected communities, through technologies designed or adapted by them, in all their diversity.
Our initiative seeks to contribute to more equitable, sustainable societies by centering the imagination of groups exercising their agency and developing locally grounded responses. We will center their re-imaginations and re-directions to expand our own, and those of the digital rights ecosystem as a whole.
Researching community-centered AI
With climate justice as a guiding approach, we will kickstart this initiative by conducting research on community-centered AI and mapping existing climate-related AI solutions in Majority World contexts that center the experiences and leadership of the groups most impacted by the climate crisis. We will look at technologies designed or adapted to tackle climate challenges by non-profit, community, and academic organizations, placing a special emphasis on ‘edge’, ‘frugal’ or ‘small’ models, such as compact AI systems for environmental monitoring, agricultural forecasting, and resource management, as well as small language models designed to operate efficiently with lower resource consumption.
We will seek to understand how communities are centered across the AI lifecycle in each of these initiatives, from design and model training to deployment and eventual decommissioning, noting what conditions have supported community-based organizations in developing and sustaining AI tools for climate action in different contexts. We will also look to learn how groups envision the future of their tools, and what models of growth align with their vision for justice: are they looking to remain small and rooted in local context, or to scale through replication, adaptation, or connection with broader movements?
After initiatives are identified, community leaders will be invited to participate in a community of practice, a space for exchanging knowledge and sharing best practices around the adoption and adaptation of AI across diverse local contexts.
Communities of practice to multiply knowledge
Organizations and practitioners working at the intersections of AI, environmental monitoring, and social justice in Majority World contexts, including members of our newly formed community of practice, will be invited to participate in a peer-learning cohort that will serve as a collaborative platform for sharing experiences, challenges, and successes related to responsible data practices when it comes to AI development.
The focus of the cohort will be to collectively understand how organizations are currently applying data-driven approaches in their AI initiatives, and to explore the key considerations for responsible data management, supporting ethical technology deployment and preventing issues like data misuse, privacy breaches, and the loss of communities' control over their own data.
During the peer learning program, we will also lay the groundwork for the co-creation of a bottom-up Responsible AI for Climate Framework to guide the sustainable, privacy-preserving, and ethical deployment of climate-focused AI models, supporting organizations as they assess, adopt, contest, or refuse AI technologies. We will build on a proof of concept developed by The Engine Room and on our Responsible Data frameworks to ensure organizations have agency over the data that informs, feeds, and is generated through AI applications.
A playground as a watering hole for emerging AI practitioners
At The Engine Room, we aim to make tech accessible through play, joy, and experimentation. With this approach at the center, we will develop a digital platform, the AI Playground for Climate Action, featuring interactive stories that highlight real-life challenges from local partners leading on climate action and demonstrate the impact of AI-driven tools in addressing environmental issues. These stories will showcase use cases, including a tools playground for user interaction, along with guidance on responsible and ethical data management practices. The platform will also serve as a capacity-building resource, offering guidance and opportunities for hands-on exploration of AI tools through interactive instances (sandboxes) for experimentation.
Hosted in our soon-to-be-launched Community Hub, the platform will be a space for knowledge sharing and community building, connecting organizations interested in AI for climate solutions, creating opportunities for networking, encouraging collaboration between experienced AI users and newcomers, and providing an accessible environment for non-technical users to explore AI applications for the environment.
The AI Playground, and the learning initiatives that will go alongside it, will support the adaptation of existing AI solutions to meet the specific needs of Majority World climate justice activists, and help develop a new generation of skilled, social justice-oriented community organizations trained in sustainable AI technologies.
If you are curious about our initiative, would like to contribute to it, or want to synchronize efforts, feel free to reach out to paola[at]theengineroom.org.

