District Manager Engagements Analyzed with Active Inference
Using Bayesian Inference and RxInfer to model resource assignments
Retail Industry
Bayesian Inference
Active Inference
Hidden Markov Model
RxInfer
Julia
Author
Kobus Esterhuysen
Published
March 20, 2024
Modified
September 4, 2024
0 Active Inference: Bridging Minds and Machines
In recent years, the landscape of machine learning has undergone a profound transformation with the emergence of active inference, a novel paradigm that draws inspiration from the principles of biological systems to inform intelligent decision-making processes. Unlike traditional approaches to machine learning, which often passively receive data and adjust internal parameters to optimize performance, active inference represents a dynamic and interactive framework where agents actively engage with their environment to gather information and make decisions in real-time.
At its core, active inference is rooted in the notion of agents as embodied entities situated within their environments, constantly interacting with and influencing their surroundings. This perspective mirrors the fundamental processes observed in living organisms, where perception, action, and cognition are deeply intertwined to facilitate adaptive behavior. By leveraging this holistic view of intelligence, active inference offers a unified framework that seamlessly integrates perception, decision-making, and action, thereby enabling agents to navigate complex and uncertain environments more effectively.
One of the defining features of active inference is its emphasis on the active acquisition of information. Rather than waiting passively for sensory inputs, agents proactively select actions that are expected to yield the most informative outcomes, thus guiding their interactions with the environment. This active exploration not only enables agents to reduce uncertainty and make more informed decisions but also allows them to actively shape their environments to better suit their goals and objectives.
Furthermore, active inference places a strong emphasis on the hierarchical organization of decision-making processes, recognizing that complex behaviors often emerge from the interaction of multiple levels of abstraction. At each level, agents engage in a continuous cycle of prediction, inference, and action, where higher-level representations guide lower-level processes while simultaneously being refined and updated based on incoming sensory information.
The applications of active inference span a wide range of domains, including robotics, autonomous systems, neuroscience, and cognitive science. In robotics, active inference offers a promising approach for developing robots that can adapt and learn in real-time, even in unpredictable and dynamic environments. In neuroscience and cognitive science, active inference provides a theoretical framework for understanding the computational principles underlying perception, action, and decision-making in biological systems.
In conclusion, active inference represents a paradigm shift in machine learning, offering a principled and unified framework for understanding and implementing intelligent behavior in artificial systems. By drawing inspiration from the principles of biological systems, active inference holds the promise of revolutionizing our approach to building intelligent machines and understanding the nature of intelligence itself.
1 BUSINESS UNDERSTANDING
In this project the client is a newly appointed district manager that is responsible for the optimal running of 4 retail stores in Washington state at the locations:
Bellingham
Ferndale
Lynden
Blaine
His engagement with each of these branches happens in terms of full weeks. The decision to engage with a specific store is up to the district manager; what is expected is that he engages with a store to help improve and optimize its operations. From past engagement data it is not clear how engagement decisions were made. Sometimes they appear to be random. There are also quick changes rather than longer-running engagements, likely to put out sudden fires. The client wants to minimize the disruption caused by his appointment to the role and would like, at least for a while, to follow the past pattern of engagements as closely as possible.
This analysis will make use of Bayesian inference within the larger scope of an approach known as Active Inference. In particular, the past manager engagements will be modeled as a Hidden Markov Model. This model can then be used by the client for suggestions on how to engage with the stores.
versioninfo() ## Julia version
Julia Version 1.10.4
Commit 48d4fd48430 (2024-06-04 10:41 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 12 × Intel(R) Core(TM) i7-8700B CPU @ 3.20GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-15.0.7 (ORCJIT, skylake)
Threads: 1 default, 0 interactive, 1 GC (on 12 virtual cores)
Environment:
JULIA_NUM_THREADS =
Resolving package versions...
No Changes to `~/.julia/environments/v1.10/Project.toml`
No Changes to `~/.julia/environments/v1.10/Manifest.toml`
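The output above comes from adding the packages used in this notebook; the Pkg.add calls themselves are not shown. A plausible reconstruction (the package list is inferred from the code that follows, not taken from the original) is:

import Pkg
Pkg.add("RxInfer")        ## model specification and variational inference
Pkg.add("Distributions")  ## Categorical distribution, ncategories
Pkg.add("DataFrames")     ## tabular handling of the field data
Pkg.add("XLSX")           ## reading the Excel engagement records
Pkg.add("Plots")          ## visualization of states and Free Energy

2 DATA UNDERSTANDING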
The historical data spans a period of 9 years’ worth of weekly engagement data. Each weekly record consists of a 1-hot encoded engagement state vector. When the associated component of the state vector is a 1, the district manager was engaged with the associated store:
Similarly, we also have the observation vectors for each week:
The observations do not always agree with the states due to administrative errors. For example, a district manager might have failed to capture a specific engagement and eventually the capture was done with a misremembered date.
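As an illustration (assuming the vector components are ordered Bellingham, Ferndale, Lynden, Blaine), a week actually spent at Ferndale corresponds to the state vector \(s_t = (0, 1, 0, 0)\); if that week was mistakenly recorded against Lynden, the observation vector would be \(x_t = (0, 0, 1, 0)\).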
3 DATA PREPARATION
We will use the data from the simulator and from the field directly. There is no need to perform additional data preparation.
4 MODELING
4.1 Narrative
Please review the narrative in section 1.
4.2 Core Elements
This section attempts to answer three important questions:
What metrics are we going to track?
What decisions do we intend to make?
What are the sources of uncertainty?
The only metric we are interested in is the engagement store of the district manager, i.e. at which of the 4 stores the manager is working for the week.
There are no control/steering decisions to be made. We are simply interested in understanding and modeling the past engagement behavior.
There are two sources of uncertainty. The first has to do with the fact that the state transitions are not deterministic but rather stochastic. This will be captured in the transition matrix \(B\). The second relates to observations. Engagements were not always recorded accurately, which means observations of past engagements sometimes differ from what they really were. This uncertainty will be captured in the observation matrix \(A\).
4.3 Environment Model (Generative Process)
To get insight into the engagement behavior, we need to formulate a model. Since we have a finite (and small) number of stores, we can use a categorical distribution to represent the district manager’s engagement. There are four stores, which means we will have four distinct values with associated probabilities in the categorical distribution. For time we will use \(t\). An estimate of the state of the engagement at time \(t\) will be indicated by \(s_t\). The associated observation will be indicated by \(x_t\).
The probabilities for switching between stores will be captured in a transition matrix \(B\). To test our approach we will assign these probabilities manually in a simulation and then see to what extent the model can learn these. From the past data it appears that transition probabilities can vary quite a bit. Once we are confident in our approach we will use the past data from the field to learn the transition probabilities without a reference to compare them against.
We will also learn the observation matrix \(A\) which reflects how well observations of the true state were made. At time \(t\), the observation will be indicated by \(x_t\). The observation matrix encodes the likelihood that the record of an engagement was a mistake so that a wrong observation is made. The model may be specified as follows:
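In outline (a sketch consistent with the definitions above), the generative model is:

\[
\begin{aligned}
s_0 &\sim \mathrm{Cat}\left(\tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}\right) \\
s_t \mid s_{t-1} &\sim \mathrm{Cat}(B\, s_{t-1}) \\
x_t \mid s_t &\sim \mathrm{Cat}(A\, s_t)
\end{aligned}
\]

where \(s_t\) and \(x_t\) are one-hot vectors over the four stores, and the columns of \(B\) and \(A\) are probability distributions (each column sums to one).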
This type of discrete state space model is known as a Hidden Markov Model (HMM). In summary, our goal is to learn the matrices \(B\) and \(A\) so we can use them to understand the past engagement behavior of district managers.
4.3.1 State variables
The state variables represent what we need to know. These are captured in a state vector \(s_t\).
4.3.2 Decision variables
There are no decisions to be made. We are simply observing the past behavior of district manager engagements, i.e. there is no control/steering applied to the engagement environment.
To simulate the engagement behavior, we need to specify:
the actual transition probabilities between the states (how likely is the manager to move from one store to another)
the observation probabilities (i.e., how reliably engagements will be captured)
These specifications will allow us to generate observations from the hidden Markov model (HMM).
Here are the steps to generate observation data:
Assume an initial engagement state (store) for the district manager. We will have the initial engagement at Bellingham.
Determine where the manager went next by drawing from a Categorical distribution with the transition probabilities between the different stores.
Determine the observation encountered in the capture records by drawing from a Categorical distribution with the corresponding observation probabilities.
Repeat steps 2 and 3 for as many samples as needed.
The following code implements the above process and generates our simulated observation data:
"""Returns a one-hot encoding of a random sample from a categorical distribution. The sample is drawn with the `rng` random number generator."""functionrand_1hot_vec(rng, distribution::Categorical) k =ncategories(distribution) s =zeros(k) drawn_category =rand(rng, distribution) s[drawn_category] =1.0return send
rand_1hot_vec
function generate_data(
        N,   ## number of samples
        B,   ## transition matrix
        A,   ## observation matrix
        s₀;  ## initial state
        seed=42)
    rng = MersenneTwister(seed)
    s = Vector{Vector{Float64}}(undef, N) ## one-hot encoding of the states
    o = Vector{Vector{Float64}}(undef, N) ## one-hot encoding of the observations
    sₜ₋₁ = s₀ ## keep previous state
    for t = 1:N
        sdis_t = B*sₜ₋₁
        s[t] = rand_1hot_vec(rng, Categorical(sdis_t))
        odis_t = A*s[t]
        o[t] = rand_1hot_vec(rng, Categorical(odis_t))
        sₜ₋₁ = s[t]
    end
    return o, s
end
generate_data (generic function with 1 method)
Next we will generate a number of weekly data points to simulate engagements. xSim will contain the measurements (capture records) and sSim will contain information on the actual engagements.
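A minimal sketch of this step is given below. The values in the ground-truth matrices are illustrative placeholders (the matrices actually used in the original simulation are not reproduced here), and the number of weeks N is likewise an assumption.

## Illustrative ground-truth matrices (placeholder values; each column sums to 1)
_BSim = [0.7 0.1 0.1 0.1;     ## transition probabilities between the 4 stores
         0.1 0.7 0.1 0.1;
         0.1 0.1 0.7 0.1;
         0.1 0.1 0.1 0.7]
_ASim = [0.85 0.05 0.05 0.05; ## observation (record-keeping) probabilities
         0.05 0.85 0.05 0.05;
         0.05 0.05 0.85 0.05;
         0.05 0.05 0.05 0.85]
s₀ = [1.0, 0.0, 0.0, 0.0]     ## initial engagement at Bellingham
N = 100                       ## number of simulated weeks (placeholder)
xSim, sSim = generate_data(N, _BSim, _ASim, s₀)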
4.4 Uncertainty Model
The uncertainty model is captured in the transition matrix \(B\) and the observation matrix \(A\).
4.5 Agent Model (Generative Model)
4.5.1 Implementation of the Agent Model (Generative Model)
We will use the RxInfer Julia package. RxInfer stands at the forefront of Bayesian inference tools within the Julia ecosystem, offering a powerful and versatile platform for probabilistic modeling and analysis. Built upon the robust foundation of the Julia programming language, RxInfer provides researchers, data scientists, and practitioners with a streamlined workflow for conducting Bayesian inference tasks with unprecedented speed and efficiency.
At its core, RxInfer leverages cutting-edge techniques from the realm of reactive programming to enable dynamic and interactive model specification and estimation. This unique approach empowers users to define complex probabilistic models with ease, seamlessly integrating prior knowledge, data, and domain expertise into the modeling process.
With RxInfer, conducting Bayesian inference tasks becomes a seamless and intuitive experience. The package offers a rich set of tools for performing parameter estimation, model comparison, and uncertainty quantification, all while leveraging the high-performance capabilities of Julia to deliver results in a fraction of the time required by traditional methods.
Whether tackling problems in machine learning, statistics, finance, or any other field where uncertainty reigns supreme, RxInfer equips users with the tools they need to extract meaningful insights from their data and make informed decisions with confidence.
RxInfer represents a paradigm shift in the world of Bayesian inference, combining the expressive power of Julia with the flexibility of reactive programming to deliver a state-of-the-art toolkit for probabilistic modeling and analysis. With its focus on speed, simplicity, and scalability, RxInfer is poised to become an indispensable tool for researchers and practitioners seeking to harness the power of Bayesian methods in their work.
To configure the model in RxInfer, we will use Categorical distributions for the states and observations. To learn the \(B\) and \(A\) matrices we can use MatrixDirichlet priors. Since we have no a priori idea how the engagements played out, we will assume that they happened randomly. For the \(B\)-matrix, we can represent this by filling our MatrixDirichlet prior on \(B\) with all ones. These values will get updated once learning starts.
For the observations, it is reasonable to assume that record keeping for engagements is very accurate. To configure this, we will have large values on the diagonal of \(A\)’s prior. However, to allow for occasional errors in record keeping, we will add some noise on the off-diagonal entries.
We will use Variational Inference, which means we have to specify inference constraints. Using a structured variational approximation to the true posterior distribution, we decouple the variational posterior over the states, \(q(s_0, s)\), from the posteriors over the transition and observation matrices, \(q(B)\) and \(q(A)\). This decoupling of dependencies in the approximate posterior distribution keeps inference tractable.
Next, we specify this model in RxInfer. We will make use of the following nodes in the Forney Factor Graph (FFG):
MatrixDirichlet
Categorical
Transition
Kronecker-\(\delta\) nodes (for the N observations)
## Model specification
@model function hidden_markov_model(x, N)
    B ~ MatrixDirichlet(ones(4, 4))       ## transition matrix
    A ~ MatrixDirichlet([10.0 1.0 1.0 1.0; ## observation matrix
                          1.0 10.0 1.0 1.0;
                          1.0 1.0 10.0 1.0;
                          1.0 1.0 1.0 10.0])
    s₀ ~ Categorical(fill(1.0/4.0, 4))    ## initial state
    ## s = randomvar(N)
    ## x = datavar(Vector{Float64}, N)
    sₜ₋₁ = s₀ ## initialize the previous state
    for t in 1:N
        s[t] ~ Transition(sₜ₋₁, B)
        x[t] ~ Transition(s[t], A)
        sₜ₋₁ = s[t] ## keep the previous state
    end
end

## Constraints specification
@constraints function hidden_markov_model_constraints()
    q(s₀, s, B, A) = q(s₀, s)q(B)q(A)
end
hidden_markov_model_constraints (generic function with 1 method)
4.6 Agent Evaluation
Next we will perform inference to see how engagements changed.
Using Variational Inference means we need to set some initial marginals as a starting point. This is made easy in RxInfer by the vague function, which provides an uninformative guess. Different initial guesses can also be tried.
We are only interested in the final result - the best guess about the engagement store. So we will only keep the last results.
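A sketch of the inference call is shown below, assuming the simulated variables from above. Note that the keyword and macro names here follow recent RxInfer releases and may differ in older versions (which used an initmarginals keyword instead of an @initialization block); the number of iterations is an illustrative choice.

## Uninformative starting marginals for the variational updates
_init = @initialization begin
    q(B) = vague(MatrixDirichlet, (4, 4))
    q(A) = vague(MatrixDirichlet, (4, 4))
    q(s) = vague(Categorical, 4)
end

_resultSim = infer(
    model          = hidden_markov_model(N = length(xSim)),
    data           = (x = xSim,),
    constraints    = hidden_markov_model_constraints(),
    initialization = _init,
    returnvars     = (B = KeepLast(), A = KeepLast(), s = KeepLast()), ## keep only the last results
    iterations     = 20,
    free_energy    = true
)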
We obtain the estimated posteriors for \(B\) and \(A\). Then we compare them with the \(B\) and \(A\) matrices we set up for the simulation. For both matrices the estimates are quite good. This gives us confidence that we might get useful results for the field data as well.
println("Posterior Marginal for B:")_BSim_est =mean(_resultSim.posteriors[:B])
Next we visualize the results by comparing the real states with the inferred states. We also verify if the model has converged by looking at the Free Energy.
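A minimal sketch of such a check, assuming the _resultSim and sSim variables from above (probvec extracts the probability vector of a categorical marginal):

using Plots

## Index (1..4) of the most probable store per week, from the inferred state marginals
_sSim_inferred = [argmax(probvec(m)) for m in _resultSim.posteriors[:s]]
_sSim_actual   = [argmax(s) for s in sSim]  ## index of the actual engagement store

p1 = plot([_sSim_actual _sSim_inferred];
          label=["actual" "inferred"], xlabel="week", ylabel="store index")
p2 = plot(_resultSim.free_energy; label="Free Energy", xlabel="iteration")
plot(p1, p2; layout=(2, 1))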
Now it is time to evaluate the agent/model with the historical field data. There are 468 weekly records (9 years × 52 weeks/year = 468 weeks).
_xFld_df = DataFrame(XLSX.readtable("ManagerEngagements_observations.xlsx", "Sheet1"));
_xFld_mat = Float64.(Matrix(_xFld_df))
_xFld = [_xFld_mat[r, 1:end] for r in 1:size(_xFld_mat)[1]]
_sFld_df = DataFrame(XLSX.readtable("ManagerEngagements_states.xlsx", "Sheet1"));
_sFld_mat = Float64.(Matrix(_sFld_df))
_sFld = [_sFld_mat[r, 1:end] for r in 1:size(_sFld_mat)[1]]