The Whole Earth Codec is a foundation model that transforms planetary-scale, multi-modal ecological data into a single knowledge architecture.

Traditional models of the observatory have focused on gazing outward, towards the cosmos. The recent proliferation of planetary sensor networks has inverted this gaze, forming a new kind of planetary observatory that takes the earth itself as its object. Could we cast the entire earth as a distributed observatory, using a foundation model to compose a singular, synthetic representation of the planet? The current generation of models deals primarily with human language, its training corpus scraped from the detritus of the internet. We must widen the aperture of what these models observe to include the non-human.

The Whole Earth Codec is an autoregressive, multi-modal foundation model that allows the planet to observe itself. This proposal radically expands the scope of foundation models, moving beyond anthropocentric language data towards the wealth of ecological information immanent to the planet. Moving from raw sense data to high-dimensional embedding in latent space, the observatory folds in on itself, thus revealing a form of computational reason that transcends sense perception alone: a sight beyond sight. Guided by planetary-scale sensing rather than myopic anthropocentrism, the Whole Earth Codec opens up a future of ambivalent possibility through cross-modal meta-observation, perhaps generating a form of planetary sapience.

System diagram: a Sensing Layer (Sensors, Federation, Spatiotemporal Anchoring) feeding a Foundation Model (Encoders, Latent Space, Fine-Tuned Models, Decoders).
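
A minimal sketch, in Python, of how such a pipeline could be wired together. All names here (SensorReading, anchor, make_encoder, autoregressive_step) are hypothetical, and random projections stand in for learned networks; this illustrates the stages named above, not the Codec's actual implementation.

```python
# Illustrative sketch only: hypothetical names and shapes, not the Codec's code.
from dataclasses import dataclass
from typing import Callable
import numpy as np

LATENT_DIM = 256  # assumed width of the shared latent space


@dataclass
class SensorReading:
    modality: str        # e.g. "hydrophone", "multispectral"
    timestamp: float     # unix seconds
    lat: float
    lon: float
    payload: np.ndarray  # raw sense data; shape varies by modality


def anchor(reading: SensorReading) -> np.ndarray:
    """Spatiotemporal anchoring: a fixed sinusoidal code for when/where."""
    pos = np.array([reading.timestamp, reading.lat, reading.lon])
    freqs = np.logspace(0, 4, LATENT_DIM // 6)
    feats = np.concatenate([np.sin(np.outer(pos, freqs)).ravel(),
                            np.cos(np.outer(pos, freqs)).ravel()])
    return np.resize(feats, LATENT_DIM)


# Per-modality encoders project raw payloads into the shared latent space.
rng = np.random.default_rng(0)


def make_encoder(input_dim: int) -> Callable[[np.ndarray], np.ndarray]:
    W = rng.normal(size=(input_dim, LATENT_DIM)) / np.sqrt(input_dim)
    return lambda x: np.tanh(x @ W)


encoders = {
    "hydrophone": make_encoder(1024),
    "multispectral": make_encoder(2048),
}


def embed(reading: SensorReading) -> np.ndarray:
    """Encode one reading as a latent token: content embedding plus anchoring."""
    return encoders[reading.modality](reading.payload) + anchor(reading)


def autoregressive_step(tokens: list[np.ndarray]) -> np.ndarray:
    """Toy next-token prediction: attend over past latent tokens."""
    history = np.stack(tokens)                 # (T, LATENT_DIM)
    query = history[-1]
    weights = np.exp(history @ query / np.sqrt(LATENT_DIM))
    weights /= weights.sum()
    return weights @ history                   # predicted next latent token


# Usage: readings from two modalities become one cross-modal sequence.
readings = [
    SensorReading("hydrophone", 1.7e9, -3.1, 142.7, rng.normal(size=1024)),
    SensorReading("multispectral", 1.7e9 + 60, -3.0, 142.8, rng.normal(size=2048)),
]
sequence = [embed(r) for r in readings]
next_token = autoregressive_step(sequence)
print(next_token.shape)  # (256,)
```

In this sketch the shared latent space is what makes cross-modal meta-observation concrete: once anchored and encoded, a hydrophone recording and a multispectral image are interchangeable tokens in a single autoregressive sequence, from which fine-tuned models and decoders could read back out.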

Studio Researchers
Connor Cook
Christina Lu
Dalena Tran

Program Director
Benjamin Bratton

Studio Director
Nicolay Boyadjiev

Associate Director
Stephanie Sherman

Senior Program Manager
Emily Knapp

Network Operatives
Dasha Silkina
Andrew Karabanov

Art Direction
Case Miller

Sound Design
Błażej Kotowski

Graphic Design
Callum Dean

Voiceover Engineer
Sam Horn

Editor
Guy Mackinnon-Little

Thanks to The Berggruen Institute and One Project for their support of the inaugural year of Antikythera.

Special thanks to Nicolas Berggruen, Nils Gilman, Dawn Nakagawa, Justin Rosenstein, and Raphael Arar for their visionary support and participation.

Press and inquiries → contact@codec.earth