
Active Inference

Exploring a New Frontier in Machine Learning

Dec 3, 2023. Written by Charles Bethin.

[Interactive demo: tap to move the agent around LineWorld.]

Learning is What?! Fundamental.

Imagine a world where machines don't just learn from data handed to them, but from data they gather through their own innate intuition. This is the revolutionary promise of Active Inference - a concept that's not only redefining our understanding of learning but also paving the way for more intuitive, adaptive artificial intelligence.

Dr. Karl Friston, a leading figure in neuroscience, introduces us to this game-changing idea. At its core, Active Inference is founded on the principle of free energy minimization, a process akin to reducing surprise in a system. This framework suggests that learning, in essence, is a system's endeavor to minimize the unpredictability of its environment.

Learning is just a stochastic system in a stochastic world minimizing its surprise (aka its free energy). Just like a ball rolls down a hill to minimize energy, a learning system rolls down a hill to minimize surprise.
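More precisely (this is the standard formulation, not something the post spells out), surprise is the negative log-probability of what the agent observes, and the variational free energy F is a computable quantity that upper-bounds it:

```latex
\text{surprise}(o) = -\ln p(o)

F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big]}_{\ge\, 0} - \ln p(o)
  \;\ge\; -\ln p(o)
```

Because the KL term is never negative, driving F down also drives surprise down - which is why "minimizing free energy" and "minimizing surprise" get used interchangeably.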

What do you, as an intelligent learning machine, do when you're surprised? You learn! You assess your actions, interpret the feedback from your environment, and refine your internal model of the world. You learn.

The Brain's Algorithm 🧠

I got into Active Inference after reading this exciting article, which then led me to read this amazing book. I was hooked! Even simple examples felt like they were doing something out of a science fiction novel. What is truly wild is how widely applicable it is, to things you wouldn't even consider to be intelligent. If an agent can be viewed as taking some kind of action on its environment, then there's a chance Active Inference can be applied to it.

An "agent" is really loosely defined in Active Inference. A single cell can be seen as an agent, as can a whole organism, or even a group of organisms. The only real delineator is whether or not the system can be viewed as having a reasonably separate state from its environment (in mathematical terms — where there is a Markov blanket).

To distill it down, an Active Inference agent engages in a cyclical four-step process: Observe, Infer, Estimate Surprise, and Act. Utilizing Bayesian inference, the agent seamlessly orchestrates these steps, continually aiming for minimized surprise. This cycle enables an agent to learn autonomously, sans pre-training.

[Figure: the Active Inference cycle - Observe, Infer, Estimate Surprise, Act]
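To make the cycle concrete, here's a minimal sketch of one iteration in code. Everything here - the array shapes, the softmax action selection, the names A, B, and C - is my own illustrative choice, not the post's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def active_inference_step(belief, A, B, C, obs):
    """One Observe -> Infer -> Estimate Surprise -> Act cycle (illustrative)."""
    # Observe + Infer: fold the new observation into the belief via Bayes' rule.
    # A[o, s] = p(o | s), so A[obs] is the likelihood of each hidden state.
    posterior = A[obs] * belief
    posterior /= posterior.sum()

    # Estimate Surprise: for each action, predict the next state distribution
    # and score it against the log-preferences in C (lower = better states).
    n_actions = B.shape[0]
    G = np.zeros(n_actions)
    for a in range(n_actions):
        predicted = B[a] @ posterior      # B[a][s_next, s] = p(s_next | s, a)
        G[a] = -(predicted * C).sum()     # expected "surprise" under action a

    # Act: stochastically prefer actions with low expected surprise.
    action = np.random.choice(n_actions, p=softmax(-G))
    return posterior, action
```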

I don't want to get too deep into the math – for that, I recommend checking out the resources I've linked – but I did want to build some demos so we can see Active Inference in action. So let's get to it!

AI in Action: Living in a 1D World

Welcome to LineWorld. There's not much going on here. Just a line, and an agent. The agent can move left or right, and its objective is to reach the goal (marked by the green circle).

Play around with LineWorld for a bit. Move the agent around. See if you can get it to the goal. It's not too hard, right? You can do it!

[Interactive demo: tap to move the agent, use the buttons to move the goal, and press start to watch the agent instinctively navigate!]

So... how did it go? No really, I don't know how it went. Remember: the agent is random! Odds are, though, if you ran it a few times, it made some weird choices along the way but was eventually able to get to the goal! 🎉

What does the agent actually know?

  1. It knows it can move LEFT, RIGHT, or STAY.
  2. It knows it can observe where it probably is on the line, but it can't see what spots are next to it.
  3. It knows where it will probably be next if it takes each action.
  4. It knows it has a strong preference for being on the goal.

That last one, a strong preference for being on the goal, is what motivates it to move at all! Remember, Active Inference is all about minimizing surprise. But what surprises an agent? An agent is surprised when it's in a state it doesn't like, or when it's in a state it didn't expect to be in.
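Here's one way those four ingredients might be written down for a small LineWorld. To be clear, this is a sketch under my own assumptions (5 positions, a slightly noisy observation model, the goal at position 4); the post doesn't show its actual arrays:

```python
import numpy as np

N = 5                          # positions 0..4 (illustrative size)
LEFT, STAY, RIGHT = 0, 1, 2    # 1. the actions the agent knows about

# 2. Observation model A[o, s] = p(o | s): a noisy reading of its position.
A = np.full((N, N), 0.05)
np.fill_diagonal(A, 1.0)
A /= A.sum(axis=0)             # each column is a distribution over observations

# 3. Transition model B[a][s_next, s] = p(s_next | s, a): moves, with walls.
B = np.zeros((3, N, N))
for s in range(N):
    B[LEFT,  max(s - 1, 0),     s] = 1.0
    B[STAY,  s,                 s] = 1.0
    B[RIGHT, min(s + 1, N - 1), s] = 1.0

# 4. Preferences C: log-probabilities concentrated on the goal position.
goal = 4
C = np.log(np.full(N, 0.01))
C[goal] = np.log(0.96)
```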

Let's walk through an example.

Imagine an agent at the beginning of its journey, positioned on the left end of a linear world. Its initial task? Determine its next move from three options: MOVE LEFT, MOVE RIGHT, or STAY.

The agent assesses the surprise associated with each action. Staying put or moving left likely means remaining at position 0 – not a surprising outcome, but not helpful for reaching its goal. Moving right, however, suggests a shift to position 1 (a new spot, but still probably not the goal). To our agent, this is surprising.

In Active Inference, the agent seeks to minimize its distress – akin to our body's pursuit of homeostasis. But remember, it doesn't know what spots are next to it, let alone how to get to the goal. At this point, all it can tell is that whether it moves LEFT or RIGHT, it (probably) will not be on the goal.
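We can put rough numbers on that intuition. Using the illustrative preference vector from the sketch above (my numbers, not the post's), the expected surprise of each action at position 0 comes out essentially identical:

```python
import numpy as np

N, goal = 5, 4
C = np.log(np.full(N, 0.01))     # mild aversion to every non-goal spot
C[goal] = np.log(0.96)           # strong preference for the goal

belief = np.zeros(N)
belief[0] = 1.0                  # "I'm at position 0"

# Predicted position under each action (the wall clamps LEFT at position 0).
predictions = {
    "LEFT":  belief,             # bumps the wall, stays at 0
    "STAY":  belief,
    "RIGHT": np.roll(belief, 1), # shifts to position 1
}

for name, p in predictions.items():
    print(f"{name:>5}: expected surprise = {-(p * C).sum():.2f}")
# All three print ~4.61: none of these moves lands on the goal, so no
# option looks any less "surprising" than the others yet.
```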

Without knowing its exact surroundings, it initially struggles to discern the most effective move. It makes a choice, let's say MOVE RIGHT, and transitions to position 1. Now, it re-evaluates its options with newly updated information.

Remember, surprise comes in two forms: being in a state you don't like, or being in a state you didn't expect to be in. An agent is doubly surprised when both are true at once.

When running these agents, I notice that the agent often suddenly turns around and heads back toward the start. To a viewer, this feels unexpected. But to the agent, it just makes mathematical sense: it falls directly out of minimizing free energy, which balances exploiting known information against exploring new possibilities. Sometimes "going to a familiar spot that isn't the goal" offers less surprise than "going to an unfamiliar spot that probably isn't the goal either". Just like humans, the agent can find itself in a rut, sticking to the safety it knows. Or, conversely, at times it makes a breakthrough that propels it toward its goal.

While this explanation humanizes the agent and simplifies some of the underlying mathematics, it captures the essence of Active Inference: a mathematical, stochastic system grappling with uncertainty, learning over time, and making decisions based on statistical inference. The beauty lies in its fundamental process: using basic statistical reasoning to navigate toward a desired state.

Charting the Future ✨

Our agent is making progress, but it's still not very efficient. It's just randomly moving around, hoping to get to the goal. For something so simple, it's pretty decent at that in our 1D LineWorld! But what if we made it a little smarter?

Baked into Active Inference is the concept of planning. It sounds a lot more complicated than it is: it uses the same math we've already seen, just applied a few steps into the future before each decision. Play around with the demo below. Adjust how far ahead we let our agent look and see how its behavior changes.

LineWorld: With Planning

[Interactive demo: tap to move the agent, use the buttons to move the goal, and adjust the slider to set how many steps ahead the agent looks. Press start to watch it navigate!]

How does our agent plan? The premise is actually quite simple. At each time step, instead of evaluating just the next action, it evaluates every possible sequence of the next n actions. It then takes the next action that (approximately) minimizes its surprise across all those steps!
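Here's a brute-force sketch of that lookahead, assuming transition and preference arrays shaped like the earlier examples (again, my reconstruction, not the demo's actual code):

```python
import numpy as np
from itertools import product

def plan(belief, B, C, n_steps):
    """Return the first action of the n-step sequence with the least surprise."""
    n_actions = B.shape[0]
    best_score, best_first = np.inf, None
    for seq in product(range(n_actions), repeat=n_steps):
        b, score = belief.copy(), 0.0
        for a in seq:
            b = B[a] @ b                 # roll the predicted state forward
            score += -(b * C).sum()      # accumulate expected surprise
        if score < best_score:
            best_score, best_first = score, seq[0]
    return best_first
```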

For those CS folks out there – yes, planning is quite computationally expensive: the number of action sequences grows exponentially with the lookahead horizon. But it's also quite powerful. It's a way for our agent to look into the future and make decisions based on what it sees. It's a way for our agent to be a little more intelligent.

Our Adventure Concludes 🎬

In exploring Active Inference through LineWorld, we've ventured into a fascinating intersection where natural principles shape learning – a space where learning extends beyond traditional data processing to embrace instinct and adaptability. This exploration with our digital agent has not just demonstrated a model's goal-reaching capabilities; it has illuminated its potential to adapt, learn, and make decisions in dynamic environments. As we move forward, Active Inference represents an intriguing advance in our understanding of biological learning processes. It serves as an insightful metaphor for the complex dance of decision-making and cognitive development, suggesting a future where machines could evolve from simple computational tools to entities capable of more intuitive, adaptive reasoning.


References