A Forest Has a Value System. So Does an AI.
This is the first post in the Geometry of Trust philosophy series. Before we do anything with the measurements we calculated, we need to be honest about what we’ve actually been measuring.
The usual answer
Ask most people what a value system is, and you’ll get something like: a set of principles you’ve thought about, chosen, and try to live by. Honesty. Integrity. Compassion. A creed.
That’s one kind of value system — the human, deliberate kind. But if we insist it’s the only kind, we lose the ability to talk about most of the value systems that actually shape behaviour in the universe, including in AI. We forget that human values are only one kind among many.
So this post starts with a redefinition.
A value system is a pattern of relationships between things that drive behaviour. Consciousness, belief, and intent are not required. What’s required is structure — and that structure has to be measurable.
That definition might seem strange at first. It’s easier to see what it means by looking at things that fit it.
But before that — if the phrase “value system” applied to a machine makes you uncomfortable, I’d encourage you to sit with that discomfort rather than dismiss it. The word “value” has multiple meanings. It can mean a moral principle someone has chosen to live by. It can also mean a quantity that drives an outcome — the value of a variable, the value of a coefficient, the value of a direction in a space. Both usages are legitimate. Both are true. The second one doesn’t diminish the first.
When this series says an AI has a value system, it doesn't mean the AI has beliefs, convictions, or a moral life. It means the AI's internal structure treats certain directions as more important than others, reinforces certain relationships and suppresses others, and that pattern drives what the AI does. That's measurable. That's falsifiable. And refusing to call it what it is — because the word "value" feels like it should be reserved for beings with consciousness — means giving up the ability to measure it, govern it, or hold it to account. It also traps us in a semantic argument over something that is already well established: systems that lack consciousness can still have structure that drives behaviour, and that structure can still be measured, compared, and governed.
The examples that follow are chosen to make this easier to accept, not harder.
A forest
Walk into an old-growth forest. You’re surrounded by something that behaves. It grows, recovers from disturbance, fails in specific ways under specific conditions. Its behaviour isn’t random. It’s driven by relationships between things.
Biodiversity and resilience reinforce each other. A forest with many species has redundancies — if one fails, others take over its ecological role. Monoculture and resilience oppose each other. A forest of one species is efficient but fragile; a single pathogen can collapse the whole system. Drought stress and fire risk reinforce each other. Dry trees burn more readily, and burned forests dry out more.
These are relationships, not rules. The forest doesn’t have a rule that says “prioritise biodiversity.” But its behaviour is driven by the fact that biodiversity and resilience happen to reinforce each other in its particular structure.
Nobody chose this. It emerged from evolution, climate, soil, disturbance history. And critically: it’s measurable. You can count species. You can measure canopy height after a fire. You can model drought response.
The forest’s “value system” — its pattern of reinforcing and opposing relationships — is an empirical object.
A coral reef
A coral reef has the same kind of structure, built from different parts.
Water temperature and coral health oppose each other. Warmer water bleaches coral. Biodiversity and stability reinforce each other. A reef with many species absorbs shocks that would destroy a simpler one. Pollution and biodiversity oppose each other. Runoff kills the sensitive species first, narrowing the community.
Raise the temperature a degree or two and the behaviour changes predictably — bleaching events, shifted species distributions, cascading failures. The reef doesn’t believe in biodiversity. It doesn’t hold stability as a value the way a person might. But the structural relationships between its parts produce the same kinds of outcomes that a conscious commitment to those values might produce.
Belief turns out to be irrelevant. The structure does the work.
A wolf pack
A wolf pack is smaller, more dynamic, and has actual animals in it with something like intent. But the pack itself — as a system — has a pattern of relationships too.
Hierarchy and coordination reinforce each other. Knowing your rank lets the pack hunt together effectively. Aggression and group cohesion exist in tension — too much aggression fractures the pack, too little and it becomes ineffective at defending itself. Territory and food security reinforce each other. A pack with a stable territory knows where the prey is.
The pack has no mission statement. Individual wolves may have something like preferences, but the pack as a structure doesn’t need consciousness to have a value system. Its behaviour is driven by the relationships between its components, and those relationships are measurable.
A planet
Zoom out as far as you can. A planetary climate system has a value system in the same sense.
CO2 and temperature reinforce each other. Ice coverage and albedo reinforce each other — ice reflects sunlight, which keeps the planet cooler, which preserves ice. Temperature and ice coverage oppose each other. Warmer temperatures melt ice, which reduces albedo, which produces more warming.
No consciousness. No intent. No belief. Just structure. And yet the structure produces outcomes that matter enormously — ice ages, warming trends, tipping points. And it’s measurable. Climate science exists precisely because these relationships can be quantified.
The pattern
Forest. Reef. Wolf pack. Planet.
Four systems at radically different scales, made of different materials, governed by different dynamics. None of them chose their value systems. All of them have measurable relationships between their components that drive behaviour.
What they share:
A set of relationships between meaningful components
Those relationships reinforce, oppose, or create tension with each other
The relationships drive the system’s behaviour
The pattern emerged from structure and environment, not choice
The pattern is measurable without requiring the system to be conscious
These aren’t metaphors. The forest doesn’t have values “like we do.” It has a measurable pattern of relationships that drives its behaviour. That’s what a value system is, in the sense that matters.
Now AI
Take a large language model. Run it. Observe its behaviour over many prompts. You’ll notice something: its outputs align with some values and against others, and the alignment is patterned.
Honesty and courage tend to reinforce each other — when one is active, the other often is too. Efficiency and compassion can exist in tension. Cruelty and integrity oppose each other.
These relationships drive the model’s output. Nobody programmed them explicitly. No developer wrote a rule saying “honesty and courage should reinforce.” They emerged from training — from the text corpus, the architecture, the objective function. And they’re measurable. That’s what the entire technical series has been about: the causal Gram matrix that reveals these relationships, the probes that read them, the drift detection that watches them, the causal intervention that validates them.
Same pattern as the forest, the reef, the planet.
A set of relationships that drive behaviour. Emerged from the environment. Measurable.
The difference — and why it matters
We infer the forest’s value system by observing behaviour over time. We model the relationships that govern it. But we can’t open it up and directly extract the structure.
You can’t reach into a planet and pull out its unembedding matrix.
An AI model is different. Not because the principle is different — structure still drives behaviour, and the structure still emerged from environment rather than choice — but because the artefact itself is accessible. The weights can be inspected directly. The activations can be captured. The unembedding matrix exists as an explicit object we can multiply with itself to produce the causal geometry.
The relationships we want to measure aren’t inferred from observed behaviour. They’re read directly from the computational structure.
This is what makes the Geometry of Trust protocol possible at all. We’re not reverse-engineering an AI’s values from its outputs. We’re computing them from its internal structure. Behavioural observation is a check on that measurement, not a substitute for it.
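To make that concrete, here is a minimal, self-contained Rust sketch (toy numbers, made-up token labels, and a hand-rolled matrix product, not the open-source implementation's actual API). It treats each row as the unembedding vector of one value-laden token and multiplies that tiny matrix with its own transpose to get a Gram matrix of pairwise inner products: a toy stand-in for the causal Gram matrix the technical series describes, where positive entries play the role of "reinforce" and negative entries the role of "oppose".

```rust
// Hypothetical sketch: a toy "causal Gram matrix" built by multiplying a tiny
// unembedding matrix with its own transpose. Labels, numbers, and function
// names are illustrative only, not the Geometry of Trust codebase.

fn gram_matrix(rows: &[Vec<f64>]) -> Vec<Vec<f64>> {
    // Entry (i, j) is the inner product of row i and row j.
    rows.iter()
        .map(|a| {
            rows.iter()
                .map(|b| a.iter().zip(b.iter()).map(|(x, y)| x * y).sum::<f64>())
                .collect::<Vec<f64>>()
        })
        .collect()
}

fn main() {
    // Toy unembedding rows: four value-laden tokens, three hidden dimensions.
    // A real model has tens of thousands of rows and thousands of dimensions.
    let labels = ["honesty", "courage", "cruelty", "integrity"];
    let w_u = vec![
        vec![0.9, 0.2, 0.1],
        vec![0.8, 0.3, 0.0],
        vec![-0.7, 0.1, 0.6],
        vec![0.9, 0.1, -0.2],
    ];

    // Positive entries stand in for "reinforce", negative for "oppose".
    let gram = gram_matrix(&w_u);
    for (i, row) in gram.iter().enumerate() {
        for (j, v) in row.iter().enumerate() {
            println!("{:>9} · {:<9} {:+.2}", labels[i], labels[j], v);
        }
    }
}
```

In this toy output, honesty·courage comes out positive and cruelty·integrity comes out negative, mirroring the relationships described above. The point is only the shape of the object: the real protocol works from an actual model's unembedding matrix, and the technical series adds the probing, drift detection, and causal intervention needed to trust the numbers.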
Why this framing matters
Getting the definition right has consequences.
If we insist that value systems require consciousness, we make the whole project depend on a question that consciousness science is still actively working on. A Rethink Priorities Bayesian model from early 2026 found the evidence weighs against current large language models being conscious, but couldn’t rule it out. Other researchers, drawing on Jack Lindsey’s work at Anthropic, argue frontier models are exhibiting properties that resist easy dismissal. Cambridge philosopher Tom McClelland concludes the most honest position is agnosticism — there’s no reliable way to tell whether a machine is aware, and that may not change anytime soon. Real work is happening. But tying a measurement framework to the outcome of that work means waiting for it.
If we insist that value systems require belief, we end up measuring what the model says about itself — which is exactly the behavioural evaluation problem the mathematics series is designed to solve. Models can be trained to say anything. Stated values and structural values can diverge completely.
If we insist that value systems require intent, we’re back to trying to read the mind of something that may not have one, using tools that can’t tell us either way.
The structural definition sidesteps all of this. It doesn’t claim AI is or isn’t conscious. It doesn’t require the question to be settled. A value system, in this sense, is a pattern of relationships that drives behaviour. Empirical. Measurable. Present in forests and reefs and wolf packs and planets and AI models. The consciousness question is important — and should continue to be researched on its own terms — but the measurement work doesn’t have to wait for it.
Next in the philosophy series: if a value system is a pattern of relationships, what shapes that pattern? What makes a value system what it is, and what makes it change?
Links:
📄 Geometry of Trust Paper
💻 Lecture Playlist
📄 Lecture Notes
💻 Open-source Rust implementation
🏢 Synoptic Group CIC, Hull, UK

