Automatic and optimal shift scheduling using Reinforcement Learning (Part 1)

Using Sequential Decision Analytics to find ongoing optimal decisions

Retail Industry
Scheduling
Powell Unified Framework
Reinforcement Learning
Python
Author

Kobus Esterhuysen

Published

October 21, 2023

Modified

October 25, 2023

0 INTRODUCTION

In this project, the client needed to be convinced of the benefits of formal, optimized sequential decision making. The client's need relates to human resource scheduling, and this project makes a first attempt to address it.

This POC involves a retail grocery provider whose HR manager is responsible for providing a schedule to fill daily needs for the following roles:

  • Courtesy Clerk (7 resources)
  • Stocker (3 resources)
  • Cleaner (2 resources)
  • Curbsider (4 resources)

Daily demands are provided by a stochastic simulator based on past needs and trends.

The overall structure of this project and report follows the traditional CRISP-DM format. However, instead of CRISP-DM's "4 Modeling" section, we inserted Dr. Warren Powell's "6 step modeling process" in section 4 of this document. Dr. Powell's universal framework shows great promise for unifying the formalisms of at least a dozen different fields. Using his framework enables easier access to thinking patterns in these other fields that might be beneficial and informative to the sequential decision problem at hand. Traditionally, this kind of problem would be approached from the reinforcement learning perspective. However, using Dr. Powell's wider and more comprehensive perspective almost certainly provides additional value.

Here is information on Dr. Powell’s perspective on Sequential Decision Analytics.

In order to make a strong mapping between the code in this notebook and the mathematics in the Powell Universal Framework (PUF), we follow the following convention for naming Python identifier names:

  • How to read/say
    • var name & flavor first
    • at t/n
    • for entity OR of/with attribute
    • \(\hat{R}^{fail}_{t+1,a}\) has code Rhat__fail_tt1_a which is read: “Rhatfail at t+1 of/with (attribute) a”
  • Superscripts
    • variable names have a double underscore to indicate a superscript
    • \(X^{\pi}\): has code X__pi, is read X pi
    • when there is a ‘natural’ distinction between the variable symbol and the superscript (e.g. a change in case), the double underscore is sometimes omitted: Xpi instead of X__pi, or MSpend_t instead of M__Spend_t
  • Subscripts
    • variable names have a single underscore to indicate a subscript
    • \(S_t\): has code S_t, is read ‘S at t’
    • \(M^{Spend}_t\) has code M__Spend_t which is read: “MSpend at t”
    • \(\hat{R}^{fail}_{t+1,a}\) has code Rhat__fail_tt1_a which is read: “Rhatfail at t+1 of/with (attribute) a” [RLSO-p436]
  • Arguments
    • collection variable names may have argument information added
    • \(X^{\pi}(S_t)\): has code X__piIS_tI, is read ‘X pi in S at t’
    • the surrounding I’s are used to imitate the parentheses around the argument
  • Next time/iteration
    • variable names that indicate one step in the future are quite common
    • \(R_{t+1}\): has code R_tt1, is read ‘R at t+1’
    • \(R^{n+1}\): has code R__nt1, is read ‘R at n+1’
  • Rewards
    • State-independent terminal reward and cumulative reward
      • \(F\): has code F for terminal reward
      • \(\sum_{n}F\): has code cumF for cumulative reward
    • State-dependent terminal reward and cumulative reward
      • \(C\): has code C for terminal reward
      • \(\sum_{t}C\): has code cumC for cumulative reward
  • Vectors where components use different names
    • \(S_t(R_t, p_t)\): has code S_t.R_t and S_t.p_t, is read ‘S at t in R at t, and, S at t in p at t’
    • the code implementation is by means of a named tuple
      • self.State = namedtuple('State', SVarNames) for the ‘class’ of the vector
      • self.S_t for the ‘instance’ of the vector
  • Vectors where components reuse names
    • \(x_t(x_{t,GB}, x_{t,BL})\): has code x_t.x_t_GB and x_t.x_t_BL, is read ‘x at t in x at t for GB, and, x at t in x at t for BL’
    • the code implementation is by means of a named tuple
      • self.Decision = namedtuple('Decision', xVarNames) for the ‘class’ of the vector
      • self.x_t for the ‘instance’ of the vector
  • Use of mixed-case variable names
    • to reduce confusion, the use of mixed-case variable names is sometimes preferred (even though this is not best practice in the Python community), reserving underscores and double underscores for math-related variables
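As a hypothetical illustration of these conventions (the variables here are invented for demonstration and are not part of the model below):

```python
from collections import namedtuple

# S_t = (R_t, p_t) becomes a named tuple with fields R_t and p_t:
State = namedtuple('State', ['R_t', 'p_t'])
S_t = State(R_t=5, p_t=2.25)  # read: 'S at t in R at t, and, S at t in p at t'

# X^pi(S_t) becomes X__piIS_tI (I's imitate the argument parentheses):
def X__piIS_tI(S_t):
    return 1 if S_t.R_t > 0 else 0

# R_{t+1} becomes R_tt1, read: 'R at t+1'
R_tt1 = S_t.R_t - X__piIS_tI(S_t)
print(R_tt1)  # 4
```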

1 BUSINESS UNDERSTANDING

The HR manager has to schedule resources for 3 shifts: Shift1, Shift2, and Shift3. For now, only Shift1 will be handled by the AI agent.

The number of resources of each type for each schedule slot for each day will be provided by the simulator. Only two resource types will be handled:

  • Courtesy
  • Stocker

The HR manager typically runs the AI Shift Scheduler 2 weeks into the future to finalize a tentative schedule to publish for his team.

As demands for shift slot allocations come in, they are handled in the following way:

  • the available resources for the resource type are identified by only considering resources whose accumulated shifts (over 2 weeks) are less than a parameter (to be learned by the AI agent)
  • the specific resources are then marked for allocation considering the number of resources needed for the type
  • if there are insufficient resources available, the unassigned slots incur a penalty for the AI agent (each successful assignment earns a reward)
  • the state of the resources, including the number of accumulated shifts, is then updated
  • at the end of the shift all resources are made available again

The overall objective will be to maximize the cumulative allocated number of shift slots.

2 DATA UNDERSTANDING

Based on recent market research, the demand may be modeled by a Poisson distribution for each resource type: \[ \begin{aligned} \mu^{ResourceType} &= \mathrm{SIM\_MU\_D[RESOURCE\_TYPE]} \end{aligned} \]

So we have: \[ D^{ResourceType}_{t+1} \sim Pois(\mu^{ResourceType}) \]

The decision window is 1 day and these simulations are for the daily demands for Shift1.
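A minimal sketch of drawing one day's Shift1 demands from this model (the means match `SIM_MU_D` defined later; the seed is chosen arbitrarily):

```python
import numpy as np

# Daily Shift1 demand per resource type is drawn from a Poisson
# distribution with the given means.
SIM_MU_D = {'Courtesy': 4, 'Stocker': 2}  # means from section 2
prng = np.random.RandomState(1234)        # seeded for reproducibility

D_tt1 = {rt: prng.poisson(mu) for rt, mu in SIM_MU_D.items()}
print(D_tt1)  # e.g. {'Courtesy': 5, 'Stocker': 2}
```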

## import pdb
from collections import namedtuple, defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from copy import copy
import time
from scipy.ndimage import shift
import pickle
from bisect import bisect
import math
from pprint import pprint
import matplotlib as mpl
pd.options.display.float_format = '{:,.4f}'.format
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)
! python --version
Python 3.10.12

We will have the learnable parameters:

\[(\theta^{Hi})\]

## PARAMETERS
SNAMES = [ #state variable names
    'RAvail_t', #available resource
    'R_t',      #resource
    'D_t',      #demand
]
xNAMES = ['x_t'] #decision variable names

## TYPES = ['Courtesy']*7 + ['Stocker']*3 + ['Cleaner']*2 + ['Curbsider']*4
TYPES = ['Courtesy']*7 + ['Stocker']*3
RESOURCE_IDS = [str(i+1) for i in range(len(TYPES))]
## RESOURCE_TYPES = ['Courtesy', 'Stocker', 'Cleaner', 'Curbsider']
RESOURCE_TYPES = ['Courtesy', 'Stocker']

## *resource* attribute vectors
aNAMES = [tup[0]+'_'+tup[1] for tup in zip(RESOURCE_IDS, TYPES)]
print(f'{len(aNAMES)=}')
print(aNAMES)

## *demand* attribute vectors
bNAMES = RESOURCE_TYPES
print(f'\n{len(bNAMES)=}')
print(bNAMES)

## *decision* 'attribute' vectors
abNAMES = [] #to DEMAND b
for a in aNAMES:
  a0,a1 = a.split('_')
  for b in bNAMES:
    if(a1==b):
      abn = (a + '___' + b)
      abNAMES.append(abn)
print(f'\n{len(abNAMES)=}')
print(abNAMES)

piNAMES = ['X__AllocBelow'] #policy names
thNAMES = [ #theta names
  ## hi number of allocs/week (of a resource) above which the resource is not
  ## considered for the next alloc
  'thHi',
]
print(f'\n{len(thNAMES)=}')
print(f'{thNAMES=}')

SEED_TRAIN = 77777777
SEED_EVALU = 88888888
N_SAMPLEPATHS = 100; L = N_SAMPLEPATHS
N_TRANSITIONS = 100; T = N_TRANSITIONS

TH_Hi_SPEC = (1, 14, 1)

SIM_T = 60
SIM_MU_D = {bNAMES[0]: 4, bNAMES[1]: 2}
print(f'\n{SIM_MU_D=}')
assert len(SIM_MU_D.items())==len(bNAMES)

## SIM_EVENT_TIME_D = {bNAMES[0]: None, bNAMES[1]: None, bNAMES[2]: None, bNAMES[3]: None}
SIM_EVENT_TIME_D = {bNAMES[0]: None, bNAMES[1]: None}
print(f'\n{SIM_EVENT_TIME_D=}')
assert len(SIM_EVENT_TIME_D.items())==len(bNAMES)

## SIM_MU_DELTA_D = {bNAMES[0]: None, bNAMES[1]: None, bNAMES[2]: None, bNAMES[3]: None}
SIM_MU_DELTA_D = {bNAMES[0]: None, bNAMES[1]: None}
print(f'\n{SIM_MU_DELTA_D=}')
assert len(SIM_MU_DELTA_D.items())==len(bNAMES)

# math parameters use 'math/small case' (as opposed to code parameters):

## CONTRIB_MATRIX = {}
# for an in aNAMES:
#     contribs = {}
#     for bn in bNAMES:
#       contribs[bn] = contribution(an, bn)
#     CONTRIB_MATRIX[an] = contribs
# CONTRIB_MATRIX
len(aNAMES)=10
['1_Courtesy', '2_Courtesy', '3_Courtesy', '4_Courtesy', '5_Courtesy', '6_Courtesy', '7_Courtesy', '8_Stocker', '9_Stocker', '10_Stocker']

len(bNAMES)=2
['Courtesy', 'Stocker']

len(abNAMES)=10
['1_Courtesy___Courtesy', '2_Courtesy___Courtesy', '3_Courtesy___Courtesy', '4_Courtesy___Courtesy', '5_Courtesy___Courtesy', '6_Courtesy___Courtesy', '7_Courtesy___Courtesy', '8_Stocker___Stocker', '9_Stocker___Stocker', '10_Stocker___Stocker']

len(thNAMES)=1
thNAMES=['thHi']

SIM_MU_D={'Courtesy': 4, 'Stocker': 2}

SIM_EVENT_TIME_D={'Courtesy': None, 'Stocker': None}

SIM_MU_DELTA_D={'Courtesy': None, 'Stocker': None}
class DemandSimulator():
  def __init__(self,
    T__sim=SIM_T,
    muD=SIM_MU_D,
    eventTimeD=SIM_EVENT_TIME_D,
    muDeltaD=SIM_MU_DELTA_D,
    seed=None):
    self.time = 0
    self.T__sim = T__sim
    self.muD = muD
    self.eventTimeD = eventTimeD
    self.muDeltaD = muDeltaD
    self.prng = np.random.RandomState(seed)

  def simulate(self):
    if self.time > self.T__sim - 1:
      self.time = 0
    D_tt1 = {}
    for bn in bNAMES:
      if self.eventTimeD[bn] and self.time > self.eventTimeD[bn]: #event for entity
        D_tt1[bn] = self.muDeltaD[bn] + self.prng.poisson(self.muD[bn]) #after event
      else:
        D_tt1[bn] = self.prng.poisson(self.muD[bn])
    self.time += 1
    return {bn: max(0, D_tt1[bn]) for bn in bNAMES} #always positive
dem_sim = DemandSimulator(seed=1234)
DemandData = []
for i in range(SIM_T):
  d_sh1 = list(dem_sim.simulate().values())
  d_sh2 = list(dem_sim.simulate().values())
  DemandData.append(d_sh1 + d_sh2)
# labels = [f'{bn}_demand' for bn in bNAMES]
labels = [f'sh1_{bn}_dem' for bn in bNAMES] + [f'sh2_{bn}_dem' for bn in bNAMES]
df = pd.DataFrame.from_records(data=DemandData, columns=labels); df[:10]
sh1_Courtesy_dem sh1_Stocker_dem sh2_Courtesy_dem sh2_Stocker_dem
0 5 4 6 4
1 6 3 6 1
2 3 2 3 6
3 4 2 4 2
4 1 1 7 1
5 7 6 1 0
6 6 1 7 2
7 1 3 3 1
8 4 1 5 6
9 11 3 8 2
import random
def plot_output(df1, df2):
  n_charts = len(bNAMES)
  ylabelsize = 16
  mpl.rcParams['lines.linewidth'] = 1.2
  default_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
  fig, axs = plt.subplots(n_charts, sharex=True)
  fig.set_figwidth(13); fig.set_figheight(9)
  fig.suptitle('Demand Simulation', fontsize=20)

  for i,bn in enumerate(bNAMES):
    axs[i].set_title(f'Demanded {bn}')
    axs[i].set_ylim(auto=True); axs[i].spines['top'].set_visible(False); axs[i].spines['right'].set_visible(True); axs[i].spines['bottom'].set_visible(False)
    axs[i].step(df1[f'sh1_{bn}_dem'], 'r-')
    axs[i].step(df1[f'sh2_{bn}_dem'], 'r:')
    ## axs[i].axhline(y=dem_sim.muD[e], color='k', linestyle=':')
    axs[i].axhline(y=0, color='k', linestyle=':')

  axs[i].set_xlabel('$t\ \mathrm{[daily\ windows]}$', rotation=0, ha='center', va='center', fontweight='bold', size=ylabelsize)
plot_output(df, None)

3 DATA PREPARATION

We will use the data provided by the simulator directly. There is no need to perform additional data preparation.

4 MODELING

4.1 Narrative

Please review the narrative in section 1. The next figure is a representation of the solution to the problem:

AIShiftScheduler1

4.2 Core Elements

This section attempts to answer three important questions:

  • What metrics are we going to track?
  • What decisions do we intend to make?
  • What are the sources of uncertainty?

For this problem, the only metric we are interested in is the number of allocated shift slots for Shift1 after each decision window. The only source of uncertainty is the levels of demand for each of the resource types.

4.3 Mathematical Model | SUS Design

A Python class is used to implement the model for the SUS (System Under Steer):

class Model():
  def __init__(self, S_0_info):
    ...
    ...

4.3.1 State variables

The state variables represent what we need to know.

  • \(R^{Avail}_t = (R^{Avail}_{ta})_{a \in \cal A}\) where \(\cal{A} = \{\alpha_1, \alpha_2, ... \alpha_{10}\}\)
    • \(R^{Avail}_{ta}\) = Boolean indicator for whether this resource (with attribute \(a\)) is available for allocation at \(t\)
    • \(\alpha_1\) = 1_Courtesy
    • \(\alpha_2\) = 2_Courtesy
    • \(\alpha_3\) = 3_Courtesy
    • \(\alpha_9\) = 9_Stocker
    • \(\alpha_{10}\) = 10_Stocker
  • \(R^{CumShifts}_t = (R^{CumShifts}_{ta})_{a \in \cal A}\) where \(\cal{A} = \{\alpha_1, \alpha_2, ... \alpha_{10}\}\)
    • \(R^{CumShifts}_{ta}\) = Number of shifts this resource (with attribute \(a\)) has accumulated up to \(t\)
    • \(\alpha_1\) = 1_Courtesy
    • \(\alpha_2\) = 2_Courtesy
    • \(\alpha_3\) = 3_Courtesy
    • \(\alpha_9\) = 9_Stocker
    • \(\alpha_{10}\) = 10_Stocker
  • \(D^{Shift1}_t = (D^{Shift1}_{tb})_{b \in \cal B}\) where \(\cal{B} = \{\beta_1, \beta_2\}\)
    • \(D^{Shift1}_{tb}\) = Number of demands for this resource type (with attribute \(b\)) at \(t\)
    • \(\beta_1\) = Courtesy
    • \(\beta_2\) = Stocker

The state is:

\[ \begin{aligned} S_t &= (R^{Avail}_t, R^{CumShifts}_t, D^{Shift1}_t) \\ &= ((R^{Avail}_{ta})_{a \in \cal A}, (R^{CumShifts}_{ta})_{a \in \cal A}, (D^{Shift1}_{tb})_{b \in \cal B}) \end{aligned} \]

4.3.2 Decision variables

The decision variables represent what we control.

The decision vector is given by:

  • \(x_t = (x_{tab})_{a\in \cal A, b\in \cal B}\) where
    • \(\cal{A} = \{\alpha_1, \alpha_2, ... \alpha_{10}\}\)
    • \(\cal{B} = \{\beta_1, \beta_2\}\)
    • \(x_{tab}\) is a Boolean indicator of whether the resource with attribute \(a\) is to be allocated to the demand with attribute \(b\)
  • Decisions are made with a policy (TBD below):
    • \(X^{\pi}(S_t)\)

4.3.3 Exogenous information variables

The exogenous information variables represent what we did not know (when we made a decision). These are the variables that we cannot control directly. The information in these variables becomes available only after we make the decision \(x_t\).

When we assume that the demand in each time period is revealed, without any model to predict the demand based on past demands, we have, using approach 1:

\[ \begin{aligned} D_{t+1} &= W_{t+1} \\ &= \hat{D}_{t+1} \end{aligned} \]

Alternatively, when we assume that we observe the change in demand \(\hat{D}_{t+1}=D_{t+1}-D_{t}\), we have, using approach 2:

\[ \begin{aligned} D_{t+1} &= D_t + W_{t+1} \\ &= D_t + \hat{D}_{t+1} \end{aligned} \]

We will make use of approach 1 which means that the exogenous information, \(W_{t+1}\), is the directly observed demands of the resources.

The exogenous information is obtained by a call to

DemandSimulator.simulate(...)

4.3.4 Transition function

The transition function describes how the state variables evolve over time. We have the equations:

\[ R^{Avail}_{t+1,a} = \begin{cases} 1 & \text{if resource with attribute $a$ has not been allocated} \\ 0 & \text{if resource with attribute $a$ has been allocated} \end{cases} \]

\[ R^{CumShifts}_{t+1,a} = \begin{cases} R^{CumShifts}_{ta} + 1 & \text{if resource with attribute $a$ was allocated} \\ R^{CumShifts}_{ta} & \text{if resource with attribute $a$ was not allocated} \end{cases} \]

Collectively, they represent the general transition function:

\[ S_{t+1} = S^M(S_t,X^{\pi}(S_t)) \]
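A minimal sketch of the two transition equations for a single resource (a simplification of the actual `S__M_fn` implemented below; `allocated` is a hypothetical flag standing in for the decision):

```python
# Allocation flips availability and increments cumulative shifts; at the
# end of the shift every resource is returned (availability reset to True),
# mirroring the end-of-cycle reset in the model's transition function.
def transition(RAvail_t, RCumShifts_t, allocated):
    if allocated:
        RAvail_t = False
        RCumShifts_t += 1
    RAvail_tt1 = True  # all resources become available again after the shift
    return RAvail_tt1, RCumShifts_t

print(transition(True, 3, allocated=True))   # (True, 4)
print(transition(True, 3, allocated=False))  # (True, 3)
```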

4.3.5 Objective function

The objective function captures the performance metrics of the solution to the problem.

We can write the state-dependent reward (also called a contribution) due to the allocation of a resource to a demand with attribute \(b\):

\[ C(S_t,x_t) = \begin{cases} 1 & \text{if resource was allocated} \\ -1 & \text{if resource was not allocated } \end{cases} \]

We have the objective function:

\[ \max_{\pi}\mathbb{E}\{\sum_{t=0}^{T}C(S_t,x_t,W_{t+1}) \} \]
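In practice the expectation is approximated by averaging the cumulative contribution over \(L\) sample paths, as done later in `perform_grid_search_sample_paths()`. A minimal sketch, using hypothetical `Ccum` values in place of simulator output:

```python
import numpy as np

# One cumulative reward per sample path (hypothetical values):
CcumIomega__lI = [92.0, 88.0, 95.0, 90.0]
L = len(CcumIomega__lI)

Cbarcum = np.mean(CcumIomega__lI)                      # estimate of E[sum_t C]
Ctilcum = np.sqrt(np.var(CcumIomega__lI, ddof=1) / L)  # its standard error
print(Cbarcum)  # 91.25
```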

The learned parameters are:

\[(\theta^{Hi})\]

  • \(\theta^{Hi}\)
    • the \(R^{CumShifts}_{t,a}\) threshold below which a resource with attribute \(a\) is considered available for allocation

4.3.6 Implementation of the System Under Steer (SUS) Model

class Model():
  def __init__(self, W_fn=None, S__M_fn=None, C_fn=None):
    self.S_t = {
      'R_t': pd.DataFrame({
        'ResourceId': RESOURCE_IDS,
        'Type': TYPES,
        'RAvail_t': [True]*len(TYPES),
        'RCumShifts_t': [0]*len(TYPES), #cumulative allocs (for T)
      }),
      'D_t': pd.DataFrame({
        'Type': RESOURCE_TYPES,
        'DShift1_t': [0]*len(RESOURCE_TYPES),
        'DShift2_t': [0]*len(RESOURCE_TYPES),
      }),
    }
    self.x_t = {
      'xAlloc_t': pd.DataFrame({
        'Comb': abNAMES, #Combination
        'Allocd_t': [False]*len(abNAMES), #Allocated
      }),
    }
    self.Ccum = 0.0 #. cumulative reward
    self.Ucum = 0 ##cumulative unallocated demands

  ## def reset(self):
  #   self.Ccum = 0.0
  #   self.Ucum = 0

  ## exogenous information, dependent on a random process,
  # the directly observed demands
  def W_fn(self, t):
    return {'shift1': SIM.simulate(), 'shift2': SIM.simulate()}

  def performAllocBelowDecision(self, S_t, x_t, theta):
    ## find list of ResourceIds for allocs from x_t
    resourceIds = x_t['xAlloc_t'].loc[
      x_t['xAlloc_t']['Allocd_t']==True,
      ['Comb']
    ]['Comb'].str.split('_').str[:1].tolist(); ##print(f'{resourceIds=}')
    resourceIds_flat = [e[0] for e in resourceIds]; ##print(f'{resourceIds_flat=}')

    ## update state of allocs
    S_t['R_t'].loc[
      S_t['R_t']['ResourceId'].isin(resourceIds_flat),
      ['RAvail_t']
    ] = False
    S_t['R_t'].loc[
      S_t['R_t']['ResourceId'].isin(resourceIds_flat),
      ['RCumShifts_t']
    ] += 1

    ## update Ccum
    self.Ccum += len(resourceIds_flat) #number of allocations

  def S__M_fn(self, t, S_t, x_t, W_tt1, theta):
    ## perform decision taken this morning
    self.performAllocBelowDecision(S_t, x_t, theta)

    ## D_t #direct approach
    for rt in RESOURCE_TYPES:
      sh1_demands = W_tt1['shift1'][rt]
      sh2_demands = W_tt1['shift2'][rt]
      S_t['D_t'].loc[S_t['D_t']['Type']==rt, 'DShift1_t'] = sh1_demands
      S_t['D_t'].loc[S_t['D_t']['Type']==rt, 'DShift2_t'] = sh2_demands

    ## Return all allocated resources when completed with allocation
    S_t['R_t'].loc[
      S_t['R_t']['RAvail_t']==False,
      ['RAvail_t']
    ] = True

    record_t = [t] + \
      list(S_t['R_t']['RAvail_t']) + \
      list(S_t['R_t']['RCumShifts_t']) + \
      list(S_t['D_t']['DShift1_t']) + \
      [self.Ucum] + \
      [self.Ccum] + \
      list(x_t['xAlloc_t']['Allocd_t'])
    return record_t

  def C_fn(self, S_t, x_t, W_tt1, theta):
    return ##contribution is accumulated directly in performAllocBelowDecision() and the policy

  def step(self, t, theta):
    ## IND = '\t\t'
    ## print(f"{IND}..... M. step() .....\n{t=}\n{theta=}")
    W_tt1 = self.W_fn(t); ##print(f'{W_tt1=}')

    ## update state & reward
    record_t = self.S__M_fn(t, self.S_t, self.x_t, W_tt1, theta)
    return record_t

4.4 Uncertainty Model

We will simulate the demand vector \(D^{Shift1}_{t+1}\) as described in section 2.

4.5 Policy Design

There are two main meta-classes of policy design. Each of these has two subclasses:

  • Policy Search
    • Policy Function Approximations (PFAs)
    • Cost Function Approximations (CFAs)
  • Lookahead
    • Value Function Approximations (VFAs)
    • Direct Lookaheads (DLAs)

In this project we will only use one approach: a simple allocate-below parameterized policy (from the PFA class), which can be summarized as:

  • if RCumShifts_t < \(\theta^{Hi}\):
    • include the resource in the list of available resources for allocation
  • else:
    • exclude the resource
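A minimal pandas sketch of this rule, using a hypothetical three-resource table (the column names match the model's `R_t` state):

```python
import pandas as pd

# A resource is eligible only if it is available and its accumulated
# shifts are below the threshold theta^Hi.
R_t = pd.DataFrame({
    'ResourceId': ['1', '2', '3'],
    'Type': ['Courtesy']*3,
    'RAvail_t': [True, True, False],
    'RCumShifts_t': [2, 9, 1],
})
thHi = 5

avails = R_t[(R_t['RAvail_t']) & (R_t['RCumShifts_t'] < thHi)]
print(list(avails['ResourceId']))  # ['1']
```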

4.5.1 Implementation of Policy Design

import random
class Policy():
  def __init__(self, model):
    self.model = model
    self.Policy = namedtuple('Policy', piNAMES) #. 'class'
    self.Theta = namedtuple('Theta', thNAMES) #. 'class'

  def build_policy(self, info):
    return self.Policy(*[info[pin] for pin in piNAMES])

  def build_theta(self, info):
    return self.Theta(*[info[thn] for thn in thNAMES])

  def X__AllocBelow(self, t, S_t, x_t, theta):
    sh1_demandsToService = []
    sh2_demandsToService = []
    for rt in RESOURCE_TYPES:
      number = S_t['D_t'].loc[
        S_t['D_t']['Type']==rt,
        ['DShift1_t']
      ].squeeze()
      sh1_demandsToService.append((rt, number))
      number = S_t['D_t'].loc[
        S_t['D_t']['Type']==rt,
        ['DShift2_t']
      ].squeeze()
      sh2_demandsToService.append((rt, number))
    for demand in sh1_demandsToService:
      resourceType, number = demand; ##print(f'{resourceType=}, {number=}')
      avails = S_t['R_t'].loc[
        (S_t['R_t']['Type'] == resourceType) & \
        (S_t['R_t']['RAvail_t'] == True) & \
        (S_t['R_t']['RCumShifts_t'] < theta.thHi),
        ['ResourceId', 'Type', 'RAvail_t', 'RCumShifts_t']
      ]; ##print(f'avails=\n{avails}')
      if len(avails) > 0 and number > 0:
        if len(avails) >= number:
          allocs = avails.iloc[0:number, :]; ##print(f'allocs=\n{allocs}') #pick first 'number' avails
        elif len(avails) < number:
          ##print('------- NOT ENOUGH avails')
          allocs = avails.iloc[0:len(avails), :]; ##print(f'allocs=\n{allocs}') #pick all avails
          unallocated_demands = number - len(avails)
          self.model.Ccum -= unallocated_demands
          self.model.Ucum += unallocated_demands
        x_t['xAlloc_t'].loc[
          x_t['xAlloc_t']['Comb'].apply(lambda x: x.split("_")[0]).isin(allocs['ResourceId']),
          ['Allocd_t']
        ] = True
      else:
        # print(f'%%% No resource of type {resourceType} available, or number is 0. Shift has no resource for demand {demand}.')
        pass

  def run_grid_sample_paths(self, theta, piName, record):
    CcumIomega__lI = []
    for l in range(1, L + 1): #for each sample-path
      M = Model()
      ## P = Policy(M) #NO!, overwrite existing global P
      self.model = M
      record_l = [piName, theta, l]
      for t in range(T): #for each transition/step
        ## print(f'\t%%% {t=}')
        ## >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
        getattr(self, piName)(t, self.model.S_t, self.model.x_t, theta) #lookup (new) today's decision

        # sit in post-decision state until end of cycle (evening)

        ## S_t, Ccum, x_t = self.model.step(t, x_t, theta)
        record_t = self.model.step(t, theta) #change from today to tomorrow
        ## >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
        record.append(record_l + record_t)
      CcumIomega__lI.append(self.model.Ccum) #just above (SDAM-eq2.9)
    return CcumIomega__lI

  def perform_grid_search_sample_paths(self, piName, thetas):
    Cbarcum = defaultdict(float)
    Ctilcum = defaultdict(float)
    expCbarcum = defaultdict(float)
    expCtilcum = defaultdict(float)
    numThetas = len(thetas)
    record = []
    print(f'{numThetas=:,}')
    nth = 1
    i = 0; print(f'... printing every {nth}th theta (if considered) ...')
    for theta in thetas:
      if True: ##in case relationships between thetas can be exploited
        ## a dict cannot be used as a key, so we define theta_key, e.g.
        ## theta_key = ((168.0, 72.0), (200.0, 90.0)):
        ## theta_key = tuple(tuple(itm.values()) for itm in theta)
        theta_key = theta ##if theta is not a dict
        if i%nth == 0: print(f'=== ({i:,} / {numThetas:,}), theta={theta} ===')

        ## >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
        CcumIomega__lI = self.run_grid_sample_paths(theta, piName, record)
        ## >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

        Cbarcum_tmp = np.array(CcumIomega__lI).mean() #(SDAM-eq2.9)
        Ctilcum_tmp = np.sum(np.square(np.array(CcumIomega__lI) - Cbarcum_tmp))/(L - 1)

        Cbarcum[theta_key] = Cbarcum_tmp
        Ctilcum[theta_key] = np.sqrt(Ctilcum_tmp/L)

        expCbarcum_tmp = pd.Series(CcumIomega__lI).expanding().mean()
        expCbarcum[theta_key] = expCbarcum_tmp

        expCtilcum_tmp = pd.Series(CcumIomega__lI).expanding().std()
        expCtilcum[theta_key] = expCtilcum_tmp
        i += 1
      ##endif
    best_theta = max(Cbarcum, key=Cbarcum.get)
    worst_theta = min(Cbarcum, key=Cbarcum.get)

    best_Cbarcum = Cbarcum[best_theta]
    best_Ctilcum = Ctilcum[best_theta]

    worst_Cbarcum = Cbarcum[worst_theta]
    worst_Ctilcum = Ctilcum[worst_theta]

    thetaStar_expCbarcum = expCbarcum[best_theta]
    thetaStar_expCtilcum = expCtilcum[best_theta]
    thetaStar_expCtilcum[0] = 0 ##set NaN to 0

    ## best_theta_w_names = tuple((
    #   ({
    #     a1NAMES[0]: subvec[0],
    #     a1NAMES[1]: subvec[1]
    #   })) for subvec in best_theta)
    ## best_theta_n = self.build_theta({'thAdj': best_theta_w_names[0]})
    ## best_theta_n = self.build_theta({'thAdj1': best_theta_w_names[0], 'thAdj3': best_theta_w_names[1]})
    ## print(f'best_theta_n:\n{best_theta_n}\n{best_Cbarcum=:.2f}\n{best_Ctilcum=:.2f}')

    ## worst_theta_w_names = tuple((
    #   ({
    #     a1NAMES[0]: subvec[0],
    #     a1NAMES[1]: subvec[1]})) for subvec in worst_theta)
    ## worst_theta_n = self.build_theta({'thAdj': worst_theta_w_names[0]})
    ## worst_theta_n = self.build_theta({'thAdj1': worst_theta_w_names[0], 'thAdj3': worst_theta_w_names[1]})
    ## print(f'worst_theta_n:\n{worst_theta_n}\n{worst_Cbarcum=:.2f}\n{worst_Ctilcum=:.2f}')

    return \
      thetaStar_expCbarcum, thetaStar_expCtilcum, \
      Cbarcum, Ctilcum, \
      best_theta, worst_theta, \
      best_Cbarcum, worst_Cbarcum, \
      best_Ctilcum, worst_Ctilcum, \
      record

  ## dispatch {prepend @}
  # def grid_search_theta_values(self, thetas0): #. using vectors reduces loops in perform_grid_search_sample_paths()
  #     thetas = [(th0,) for th0 in thetas0]
  #     return thetas

  ## dispatch {prepend @}
  # def grid_search_theta_values(self, thetas0, thetas1): #. using vectors reduces loops in perform_grid_search_sample_paths()
  #     thetas = [(th0, th1) for th0 in thetas0 for th1 in thetas1]
  #     return thetas

  ## dispatch {prepend @}
  # def grid_search_theta_values(self, thetas0, thetas1, thetas2): #. using vectors reduces loops in perform_grid_search_sample_paths()
  #     thetas = [(th0, th1, th2) for th0 in thetas0 for th1 in thetas1 for th2 in thetas2]
  #     return thetas

  ## EXAMPLE:
  ## thetasA: Buy
  ## thetasA_name: 'thBuy'
  ## names: ELA
  ## 1_1: 1 theta sub-vectors, each having 1 theta
  ## thetas = grid_search_thetas_1_2(thetasBuy 'thBuy', CAR_TYPES)
  def grid_search_thetas_1_1(self, thetasA, thetasA_name, names):
    thetas = [
    self.build_theta({thetasA_name: {names[0]: thA0}})
    for thA0 in thetasA[names[0]]
    ]
    return thetas

  ## EXAMPLE:
  ## thetasA: Buy
  ## thetasA_name: 'thBuy'
  ## names: ELA, SON
  ## 1_2: 1 theta sub-vectors, each having 2 thetas
  ## thetas = grid_search_thetas_1_2(thetasBuy 'thBuy', CAR_TYPES)
  def grid_search_thetas_1_2(self, thetasA, thetasA_name, names):
    thetas = [
    self.build_theta({thetasA_name: {names[0]: thA0, names[1]: thA1}})
    for thA0 in thetasA[names[0]]
      for thA1 in thetasA[names[1]]
    ]
    return thetas

  ## EXAMPLE:
  ## thetasA: Adj
  ## thetasA_name: 'thAdj'
  ## names: ELA, SON
  ## 1_4: 1 theta sub-vectors, each having 4 thetas
  ## thetas = grid_search_thetas_1_4(thetasBuy 'thAdj', bNAMES)
  def grid_search_thetas_1_4(self, thetasA, thetasA_name, names):
    thetas = [
    self.build_theta({thetasA_name: {names[0]: thA0, names[1]: thA1, names[2]: thA2, names[3]: thA3}})
    for thA0 in thetasA[names[0]]
      for thA1 in thetasA[names[1]]
        for thA2 in thetasA[names[2]]
          for thA3 in thetasA[names[3]]
    ]
    return thetas

  ## EXAMPLE:
  ## thetasA: Buy
  ## thetasB: Max
  ## thetasA_name: 'thBuy'
  ## thetasB_name: 'thMax'
  ## names: ELA
  ## 2_1: 2 theta sub-vectors, each having 1 theta
  ## thetas = grid_search_thetas_2_1(thetasBuy, thetasMax, 'thBuy', 'thMax', CAR_TYPES)
  def grid_search_thetas_2_1(self, thetasA, thetasB, thetasA_name, thetasB_name, names):
    thetas = [
    self.build_theta({thetasA_name: {names[0]: thA0}, thetasB_name: {names[0]: thB0}})
    for thA0 in thetasA[names[0]]
      for thB0 in thetasB[names[0]]
    ]
    return thetas

  ## EXAMPLE:
  ## thetasA: Buy
  ## thetasB: Max
  ## thetasA_name: 'thBuy'
  ## thetasB_name: 'thMax'
  ## names: ELA, SON
  ## 2_2: 2 theta sub-vectors, each having 2 thetas
  ## thetas = grid_search_thetas_4(thetasBuy, thetasMax, 'thBuy', 'thMax', CAR_TYPES)
  def grid_search_thetas_2_2(self, thetasA, thetasB, thetasA_name, thetasB_name, names):
    thetas = [
    self.build_theta({thetasA_name: {names[0]: thA0, names[1]: thA1}, thetasB_name: {names[0]: thB0, names[1]: thB1}})
    for thA0 in thetasA[names[0]]
      for thA1 in thetasA[names[1]]
        for thB0 in thetasB[names[0]]
          for thB1 in thetasB[names[1]]
    ]
    return thetas


  ############################################################################
  ### PLOTTING
  ############################################################################
  def round_theta(self, complex_theta):
    thetas_rounded = []
    for theta in complex_theta:
      evalues_rounded = []
      for _, evalue in theta.items():
        evalues_rounded.append(float(f"{evalue:f}"))
      thetas_rounded.append(tuple(evalues_rounded))
    return str(tuple(thetas_rounded))

  def plot_Fhat_map_2(self,
      FhatI_theta_I,
      thetasX, thetasY, labelX, labelY, title):
      Fhat_values = [
        FhatI_theta_I[
          (thetaX,thetaY)
          ## ((thetaX,),(thetaY,))
        ]
          for thetaY in thetasY for thetaX in thetasX
      ]
      Fhats = np.array(Fhat_values)
      increment_count = len(thetasX)
      Fhats = np.reshape(Fhats, (-1, increment_count))#.

      fig, ax = plt.subplots()
      im = ax.imshow(Fhats, cmap='hot', origin='lower', aspect='auto')
      ## create colorbar
      cbar = ax.figure.colorbar(im, ax=ax)
      ## cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")

      ax.set_xticks(np.arange(0, len(thetasX), 5))#.
      ## ax.set_xticks(np.arange(len(thetasX)))

      ax.set_yticks(np.arange(0, len(thetasY), 5))#.
      ## ax.set_yticks(np.arange(len(thetasY)))

      ## NOTE: round tick labels, else very messy
      ## function round() does not work, have to do this way
      thetasX_form = [f'{th:.0f}' for th in thetasX]
      thetasY_form = [f'{th:.0f}' for th in thetasY]

      ax.set_xticklabels(thetasX[::5])
      ## ax.set_xticklabels(thetasX); ax.set_xticklabels(thetasX_form)

      ax.set_yticklabels(thetasY[::5])
      ## ax.set_yticklabels(thetasY); ax.set_yticklabels(thetasY_form)

      ## rotate the tick labels and set their alignment.
      ## plt.setp(ax.get_xticklabels(), rotation=45, ha="right",rotation_mode="anchor")

      ax.set_title(title)
      ax.set_xlabel(labelX)
      ax.set_ylabel(labelY)

      ## fig.tight_layout()
      plt.show()
      return True

  def plot_Fhat_map_4(self,
      FhatI_theta_I,
      thetasX, thetasY, labelX, labelY, title,
      thetaFixed1, thetaFixed2):
      ## Fhat_values = [FhatI_theta_I[(thetaX,thetaY)] for thetaY in thetasY for thetaX in thetasX]
      Fhat_values = [
        FhatI_theta_I[((thetaX,thetaY), (thetaFixed1,thetaFixed2))]
        for thetaY in thetasY
          for thetaX in thetasX]
      Fhats = np.array(Fhat_values)
      increment_count = len(thetasX)
      Fhats = np.reshape(Fhats, (-1, increment_count))#.

      fig, ax = plt.subplots()
      im = ax.imshow(Fhats, cmap='hot', origin='lower', aspect='auto')
      ## create colorbar
      cbar = ax.figure.colorbar(im, ax=ax)
      ## cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")

      ax.set_xticks(np.arange(0, len(thetasX), 5))#.
      ## ax.set_xticks(np.arange(len(thetasX)))

      ax.set_yticks(np.arange(0, len(thetasY), 5))#.
      ## ax.set_yticks(np.arange(len(thetasY)))

      ## NOTE: round tick labels, else very messy
      ## function round() does not work, have to do this way
      ## thetasX_form = [f'{th:.1f}' for th in thetasX]
      ## thetasY_form = [f'{th:.1f}' for th in thetasY]

      ax.set_xticklabels(thetasX[::5])#.
      ## ax.set_xticklabels(thetasX)
      ## ax.set_xticklabels(thetasX_form)

      ax.set_yticklabels(thetasY[::5])#.
      ## ax.set_yticklabels(thetasY)
      ## ax.set_yticklabels(thetasY_form)

      ## rotate the tick labels and set their alignment.
      ## plt.setp(ax.get_xticklabels(), rotation=45, ha="right",rotation_mode="anchor")

      ax.set_title(title)
      ax.set_xlabel(labelX)
      ax.set_ylabel(labelY)

      ## fig.tight_layout()
      plt.show()
      return True

  ## color_style examples: 'r-', 'b:', 'g--'
  def plot_Fhat_chart(self, FhatI_theta_I, thetasX, labelX, labelY, title, color_style, thetaStar):
      mpl.rcParams['lines.linewidth'] = 1.2
      xylabelsize = 16
      ## plt.figure(figsize=(13, 9))
      plt.figure(figsize=(13, 4))
      plt.title(title, fontsize=20)
      Fhats = FhatI_theta_I.values()
      plt.plot(thetasX, Fhats, color_style)
      plt.axvline(x=thetaStar, color='k', linestyle=':')
      plt.xlabel(labelX, rotation=0, labelpad=10, ha='right', va='center', fontweight='bold', size=xylabelsize)
      plt.ylabel(labelY, rotation=0, labelpad=1, ha='right', va='center', fontweight='normal', size=xylabelsize)
      plt.show()

  ## expanding Fhat chart
  def plot_expFhat_chart(self, df, labelX, labelY, title, color_style):
    mpl.rcParams['lines.linewidth'] = 1.2
    xylabelsize = 16
    plt.figure(figsize=(13, 4))
    plt.title(title, fontsize=20)
    plt.plot(df, color_style)
    plt.xlabel(labelX, rotation=0, labelpad=10, ha='right', va='center', fontweight='bold', size=xylabelsize)
    plt.ylabel(labelY, rotation=0, labelpad=1, ha='right', va='center', fontweight='normal', size=xylabelsize)
    plt.show()

  ## expanding Fhat charts
  def plot_expFhat_charts(self, means, stdvs, labelX, labelY, suptitle, pars=defaultdict(str)):
    n_charts = 2
    xlabelsize = 14
    ylabelsize = 14
    mpl.rcParams['lines.linewidth'] = 1.2
    default_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
    fig, axs = plt.subplots(n_charts, sharex=True)
    fig.set_figwidth(13); fig.set_figheight(9)
    fig.suptitle(suptitle, fontsize=18)

    xi = 0
    legendlabels = []
    axs[xi].set_title(r"$exp\bar{C}^{cum}(\theta^*)$", loc='right', fontsize=16)
    for i,itm in enumerate(means.items()):
      axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
      leg = axs[xi].plot(itm[1], color=pars['colors'][i])
      legendlabels.append(itm[0])
    axs[xi].set_ylabel(labelY, rotation=0, ha='right', va='center', fontweight='normal', size=ylabelsize)

    xi = 1
    axs[xi].set_title(r"$exp\tilde{C}^{cum}(\theta^*)$", loc='right', fontsize=16)
    for i,itm in enumerate(stdvs.items()):
      axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
      ## leg = axs[xi].plot(itm[1], default_colors[i], linestyle='--')
      leg = axs[xi].plot(itm[1], pars['colors'][i], linestyle='--')
    axs[xi].set_ylabel(labelY, rotation=0, ha='right', va='center', fontweight='normal', size=ylabelsize)

    fig.legend(
      ## [leg],
      labels=legendlabels,
      title="Policies",
      loc='upper right',
      fancybox=True,
      shadow=True,
      ncol=1)
    plt.xlabel(labelX, rotation=0, labelpad=10, ha='right', va='center', fontweight='normal', size=xlabelsize)
    plt.show()

  def plot_records(self, df, df_non, pars=defaultdict(str)):
    n_a = len(aNAMES)
    n_b = len(bNAMES)
    n_ab = len(abNAMES)
    n_x = 1
    # n_charts = n_x + n_b + n_b + 2 + n_a + 1
    n_charts = n_ab + n_b + n_a + 1 + 1
    ylabelsize = 14
    mpl.rcParams['lines.linewidth'] = 1.2
    mycolors = ['g', 'b']
    fig, axs = plt.subplots(n_charts, sharex=True)
    ## fig.set_figwidth(13); fig.set_figheight(9)
    fig.set_figwidth(13); fig.set_figheight(20)
    fig.suptitle(pars['suptitle'], fontsize=14)

    xi = 0
    for i,ab in enumerate(abNAMES):
      axs[xi+i].set_ylim(auto=True); axs[xi+i].spines['top'].set_visible(False); axs[xi+i].spines['right'].set_visible(True); axs[xi+i].spines['bottom'].set_visible(False)
      axs[xi+i].step(df[f'Allocd_t_{ab}'], 'm-', where='post')
      if not df_non is None: axs[xi+i].step(df_non[f'Allocd_t_{ab}'], 'c-.', where='post')
      axs[xi+i].axhline(y=0, color='k', linestyle=':')
      abl = ab.split("___")
      al = abl[0].split('_'); al = al[0]+'\_'+al[1]; bl = abl[1]
      y1ab = '$x^{Allocd}_{t,'+f'{",".join([al, bl])}'+'}$'
      axs[xi+i].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
      for j in range(df.shape[0]//T): axs[xi+i].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = n_ab
    for i,b in enumerate(bNAMES):
      y1ab = '$D^{Shift1}_{t,'+f'{b}'+'}$'
      axs[xi+i].set_ylim(auto=True); axs[xi+i].spines['top'].set_visible(False); axs[xi+i].spines['right'].set_visible(True); axs[xi+i].spines['bottom'].set_visible(False)
      axs[xi+i].step(df[f'DShift1_t_{b}'], mycolors[xi%len(mycolors)], where='post')
      if not df_non is None: axs[xi+i].step(df_non[f'DShift1_t_{b}'], 'c-.', where='post')
      axs[xi+i].axhline(y=SIM.muD[b], color='k', linestyle=':')
      axs[xi+i].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
      for j in range(df.shape[0]//T): axs[xi+i].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = n_ab + n_b
    for i,a in enumerate(aNAMES):
      axs[xi+i].set_ylim(auto=True); axs[xi+i].spines['top'].set_visible(False); axs[xi+i].spines['right'].set_visible(True); axs[xi+i].spines['bottom'].set_visible(False)
      ## axs[xi+i].step(df[f'RAvail_t_{a}'], 'm-', where='post')
      axs[xi+i].step(df[f'RCumShifts_t_{a}'], 'm-', where='post')
      if not df_non is None: axs[xi+i].step(df_non[f'RAvail_t_{a}'], 'c-.', where='post')
      if not df_non is None: axs[xi+i].step(df_non[f'RCumShifts_t_{a}'], 'c-.', where='post')
      axs[xi+i].axhline(y=0, color='k', linestyle=':')
      al = a.split('_'); al = al[0]+'\_'+al[1]; y1ab = '$R^{CumShifts}_{t,'+f'{al}'+'}$'
      axs[xi+i].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
      for j in range(df.shape[0]//T): axs[xi+i].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = n_ab + n_b + n_a
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df['Ucum'], 'm-', where='post')
    if not df_non is None: axs[xi].step(df_non['Ucum'], 'c-.', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')
    axs[xi].set_ylabel('$U^{cum}$'+'\n'+''+'$\mathrm{[Allocs]}$', rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize);
    for j in range(df.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = n_ab + n_b + n_a + 1
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df['Ccum'], 'm-', where='post')
    axs[xi].step(df['Ucum'], 'k-', where='post')
    if not df_non is None: axs[xi].step(df_non['Ccum'], 'c-.', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')
    axs[xi].set_ylabel('$C^{cum}$'+'\n'+''+'$\mathrm{[Allocs]}$', rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize);
    for j in range(df.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    axs[xi].set_xlabel('$t\ \mathrm{[decision\ windows]}$', rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize);
    if(pars['legendLabels']): fig.legend(labels=pars['legendLabels'], loc='lower left', fontsize=16)

  def plot_evalu_comparison(self, df1, df2, df3, pars=defaultdict(str)):
    legendlabels = ['X__BuyBelow', 'X__Bellman']
    n_charts = 5
    ylabelsize = 14
    mpl.rcParams['lines.linewidth'] = 1.2
    fig, axs = plt.subplots(n_charts, sharex=True)
    fig.set_figwidth(13); fig.set_figheight(9)
    thetaStarStr = []
    for cmp in pars["thetaStar"]: thetaStarStr.append(f'{cmp:.1f}')
    thetaStarStr = '(' + ', '.join(thetaStarStr) + ')'
    fig.suptitle(pars['suptitle'], fontsize=14)

    xi = 0
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df1[f'x_t'], 'r-', where='post')
    axs[xi].step(df2[f'x_t'], 'g-.', where='post')
    axs[xi].step(df3[f'x_t'], 'b:', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')
    y1ab = '$x_{t}$'
    axs[xi].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
    for j in range(df1.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = 1
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df1[f'R_t'], 'r-', where='post')
    axs[xi].step(df2[f'R_t'], 'g-.', where='post')
    axs[xi].step(df3[f'R_t'], 'b:', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')
    y1ab = '$R_{t}$'
    axs[xi].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
    for j in range(df1.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = 2
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df1[f'p_t'], 'r-', where='post')
    axs[xi].step(df2[f'p_t'], 'g-.', where='post')
    axs[xi].step(df3[f'p_t'], 'b:', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')

    if(pars['lower_non']): axs[xi].text(-4, pars['lower_non'], r'$\theta^{lower}$' + f"={pars['lower_non']:.1f}", size=10, color='c')
    if(pars['lower_non']): axs[xi].axhline(y=pars['lower_non'], color='c', linestyle=':')

    if(pars['upper_non']): axs[xi].text(-4, pars['upper_non'], r'$\theta^{upper}$' + f"={pars['upper_non']:.1f}", size=10, color='c')
    if(pars['upper_non']): axs[xi].axhline(y=pars['upper_non'], color='c', linestyle=':')

    if(pars['lower']): axs[xi].text(-4, pars['lower'], r'$\theta^{lower}$' + f"={pars['lower']:.1f}", size=10, color='m')
    if(pars['lower']): axs[xi].axhline(y=pars['lower'], color='m', linestyle=':')

    if(pars['upper']): axs[xi].text(-4, pars['upper'], r'$\theta^{upper}$' + f"={pars['upper']:.1f}", size=10, color='m')
    if(pars['upper']): axs[xi].axhline(y=pars['upper'], color='m', linestyle=':')

    if(pars['alpha_non']): axs[xi].text(-4, pars['alpha_non'], r'$\theta^{alpha}$' + f"={pars['alpha_non']:.1f}", size=10, color='c')
    if(pars['alpha_non']): axs[xi].axhline(y=pars['alpha_non'], color='c', linestyle=':')

    if(pars['trackSignal_non']): axs[xi].text(-4, pars['trackSignal_non'], r'$\theta^{trackSignal}$' + f"={pars['trackSignal_non']:.1f}", size=10, color='c')
    if(pars['trackSignal_non']): axs[xi].axhline(y=pars['trackSignal_non'], color='c', linestyle=':')

    if(pars['alpha']): axs[xi].text(-4, pars['alpha'], r'$\theta^{alpha}$' + f"={pars['alpha']:.1f}", size=10, color='m')
    if(pars['alpha']): axs[xi].axhline(y=pars['alpha'], color='m', linestyle=':')

    if(pars['trackSignal']): axs[xi].text(-4, pars['trackSignal'], r'$\theta^{trackSignal}$' + f"={pars['trackSignal']:.1f}", size=10, color='m')
    if(pars['trackSignal']): axs[xi].axhline(y=pars['trackSignal'], color='m', linestyle=':')

    y1ab = '$p_{t}$'
    axs[xi].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
    for j in range(df1.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = 3
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df1['b_t_val'], 'r-', where='post')
    axs[xi].step(df2['b_t_val'], 'g-.', where='post')
    axs[xi].step(df3['b_t_val'], 'b:', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')
    y1ab = '$b_{t,val}$'
    axs[xi].set_ylabel(y1ab, rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize)
    for j in range(df1.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    xi = 4
    axs[xi].set_ylim(auto=True); axs[xi].spines['top'].set_visible(False); axs[xi].spines['right'].set_visible(True); axs[xi].spines['bottom'].set_visible(False)
    axs[xi].step(df1['Ccum'], 'r-', where='post')
    axs[xi].step(df2['Ccum'], 'g-.', where='post')
    axs[xi].step(df3['Ccum'], 'b:', where='post')
    axs[xi].axhline(y=0, color='k', linestyle=':')
    axs[xi].set_ylabel('$\mathrm{cumC}$'+'\n'+'$\mathrm{(Profit)}$'+'\n'+''+'$\mathrm{[\$]}$', rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize);
    axs[xi].set_xlabel('$t\ \mathrm{[decision\ windows]}$', rotation=0, ha='right', va='center', fontweight='bold', size=ylabelsize);
    for j in range(df1.shape[0]//T): axs[xi].axvline(x=(j+1)*T, color='grey', ls=':')

    fig.legend(
      ## [leg],
      labels=legendlabels,
      title="Policies",
      loc='upper right',
      fontsize=16,
      fancybox=True,
      shadow=True,
      ncol=1)

4.6 Policy Evaluation

4.6.1 Training/Tuning

## setup labels to plot info
RAvail_t_labels = ['RAvail_t_'+an for an in aNAMES]
RCumShifts_t_labels = ['RCumShifts_t_'+an for an in aNAMES]
DShift1_t_labels = ['DShift1_t_'+rt for rt in RESOURCE_TYPES]
xAlloc_t_labels = ['Allocd_t_'+abn for abn in abNAMES]
labels = ['piName', 'theta', 'l'] + \
  ['t'] + \
  RAvail_t_labels + RCumShifts_t_labels + \
  DShift1_t_labels + \
  ['Ucum'] + \
  ['Ccum'] + \
  xAlloc_t_labels
## def grid_search_thetas(thetas1, thetas2, thetas1_name, thetas2_name):
#   thetas = [
#     P.build_theta({thetas1_name: thA0, thetas2_name: thB0})
#     # (thA0, thB0)
#     for thA0 in thetas1
#       for thB0 in thetas2
#   ]
#   return thetas

def grid_search_thetas(thetas1, thetas1_name):
  thetas = [
    P.build_theta({thetas1_name: thA0})
    # (thA0,)
    for thA0 in thetas1
  ]
  return thetas
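A self-contained sketch of how grid_search_thetas builds its list, using a namedtuple as a stand-in for the project's Theta type (build_theta_sketch is a hypothetical one-parameter version of P.build_theta):

```python
from collections import namedtuple

# Stand-in for the project's Theta type
Theta = namedtuple('Theta', ['thHi'])

def build_theta_sketch(spec):
  # hypothetical stand-in for P.build_theta
  return Theta(**spec)

def grid_search_thetas_sketch(thetas1, thetas1_name):
  # one Theta per grid value, mirroring grid_search_thetas above
  return [build_theta_sketch({thetas1_name: thA0}) for thA0 in thetas1]

# the 13-point grid matches the numThetas=13 output later in this section
thetas = grid_search_thetas_sketch(range(1, 14), 'thHi')
print(len(thetas), thetas[0], thetas[-1])
```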
%%time
L = 20 #30pub #20db #N_SAMPLEPATHS
T = 14 #14pub #14db #N_TRANSITIONS
first_n_t = 500
last_n_t = 500

M = Model()
P = Policy(M)
SIM = DemandSimulator(seed=SEED_TRAIN)

thetasHi = np.arange(
  TH_Hi_SPEC[0],
  TH_Hi_SPEC[1],
  TH_Hi_SPEC[2])
thetas = grid_search_thetas(
  thetasHi,
  'thHi',
)

thetaStar_expCbarcum_AllocBelow, thetaStar_expCtilcum_AllocBelow, \
Cbarcum_AllocBelow, Ctilcum_AllocBelow, \
best_theta_AllocBelow, worst_theta_AllocBelow, \
best_Cbarcum_AllocBelow, worst_Cbarcum_AllocBelow, \
best_Ctilcum_AllocBelow, worst_Ctilcum_AllocBelow, \
record_AllocBelow = \
  P.perform_grid_search_sample_paths('X__AllocBelow', thetas)
f'{thetaStar_expCbarcum_AllocBelow.iloc[-1]=:.2f}'
df_first_n_t = pd.DataFrame.from_records(record_AllocBelow[:first_n_t], columns=labels)
df_last_n_t = pd.DataFrame.from_records(record_AllocBelow[-last_n_t:], columns=labels)
numThetas=13
... printing every 1th theta (if considered) ...
=== (0 / 13), theta=Theta(thHi=1) ===
=== (1 / 13), theta=Theta(thHi=2) ===
=== (2 / 13), theta=Theta(thHi=3) ===
=== (3 / 13), theta=Theta(thHi=4) ===
=== (4 / 13), theta=Theta(thHi=5) ===
=== (5 / 13), theta=Theta(thHi=6) ===
=== (6 / 13), theta=Theta(thHi=7) ===
=== (7 / 13), theta=Theta(thHi=8) ===
=== (8 / 13), theta=Theta(thHi=9) ===
=== (9 / 13), theta=Theta(thHi=10) ===
=== (10 / 13), theta=Theta(thHi=11) ===
=== (11 / 13), theta=Theta(thHi=12) ===
=== (12 / 13), theta=Theta(thHi=13) ===
CPU times: user 1min 5s, sys: 1.17 s, total: 1min 6s
Wall time: 1min 6s
best_theta_AllocBelow
Theta(thHi=12)
P.plot_Fhat_chart(
  FhatI_theta_I=Cbarcum_AllocBelow,
  thetasX=thetasHi,
  labelX=r'$\theta^{Hi}$',
  labelY=r'$\bar{C}^{cum}$'+'\n$[Allocs]$',
  title="Sample mean of the cumulative reward"+f"\n $L={L}, T={T}$, "+ \
    r"$\theta^* =$("+ str(best_theta_AllocBelow[0])+"), " \
    r"$\bar{C}^{cum}(\theta^*) =$"+f"{best_Cbarcum_AllocBelow:,.0f}\n",
  color_style='ro-',
  thetaStar=best_theta_AllocBelow
)
P.plot_Fhat_chart(
  FhatI_theta_I=Ctilcum_AllocBelow,
  thetasX=thetasHi,
  labelX=r'$\theta^{Hi}$',
  labelY=r'$\tilde{C}^{cum}$'+'\n$[Allocs]$',
  title="Sample error of the cumulative reward"+f"\n $L={L}, T={T}$, "+ \
    r"$\theta^* =$("+ str(best_theta_AllocBelow[0])+"), " \
    r"$\tilde{C}^{cum}(\theta^*) =$"+f"{best_Cbarcum_AllocBelow:,.0f}\n",
  color_style='bo-',
  thetaStar=best_theta_AllocBelow
)

P.plot_expFhat_chart(
  df=thetaStar_expCbarcum_AllocBelow,
  labelX=r'$\ell$',
  labelY=r"$exp\bar{C}^{cum}(\theta^*)$"+"\n[Allocs]",
  title="Expanding sample mean of the cumulative reward"+f"\n $L={L}, T={T}$, "+ \
    r"$\theta^* =$("+ str(best_theta_AllocBelow[0])+"), " \
    r"$\bar{C}^{cum}(\theta^*) =$"+f"{best_Cbarcum_AllocBelow:,.0f}\n",
  color_style='b-'
)
print()
P.plot_expFhat_chart(
  df=thetaStar_expCtilcum_AllocBelow,
  labelX=r'$\ell$',
  labelY=r"$exp\bar{C}^{cum}(\theta^*)$"+"\n[Allocs]",
  title="Expanding standard error of the cumulative reward"+f"\n $L={L}, T={T}$, "+ \
    r"$\theta^* =$("+ str(best_theta_AllocBelow[0])+"), " \
    r"$\tilde{C}^{cum}(\theta^*) =$"+f"{best_Ctilcum_AllocBelow:,.0f}\n",
  color_style='b--'
)
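The expanding statistics plotted above can be reproduced with pandas' expanding window: the sample mean of the cumulative rewards over the sample paths seen so far, and a corresponding standard error. A minimal sketch with hypothetical per-path rewards (the notebook's own Ctilcum may use a slightly different normalization):

```python
import numpy as np
import pandas as pd

# Hypothetical cumulative reward for sample paths ell = 0..4
Ccum_per_path = pd.Series([90.0, 100.0, 95.0, 105.0, 92.0])

# Expanding sample mean, one value per additional sample path
expCbarcum = Ccum_per_path.expanding().mean()

# Expanding standard error: sample std (ddof=1) over sqrt(ell+1)
expCtilcum = Ccum_per_path.expanding().std() / np.sqrt(
    np.arange(1, len(Ccum_per_path) + 1))

print(expCbarcum.iloc[-1], expCtilcum.iloc[-1])
```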

f'{len(record_AllocBelow):,}', L, T
('3,640', 20, 14)
best_theta_AllocBelow
Theta(thHi=12)
P.plot_records(
  df=df_first_n_t,
  df_non=None,
  pars=defaultdict(str, {
    'thHi': best_theta_AllocBelow.thHi,
    ## 'legendLabels': [r'$\mathrm{opt}$', r'$\mathrm{non}$'],
    'suptitle': f'TRAINING OF X__AllocBelow POLICY'+'\n'+f'(first {first_n_t} records)'+'\n'+ \
    f'L = {L}, T = {T}, '+ \
    r"$\theta^*=$("+ str(best_theta_AllocBelow[0])+")"
  }),
)

P.plot_records(
  df=df_last_n_t,
  df_non=None,
  pars=defaultdict(str, {
    'thHi': best_theta_AllocBelow.thHi,
    ## 'legendLabels': [r'$\mathrm{opt}$', r'$\mathrm{non}$'],
    'suptitle': f'TRAINING OF X__AllocBelow POLICY'+'\n'+f'(last {last_n_t} records)'+'\n'+ \
    f'L = {L}, T = {T}, '+ \
    r"$\theta^*=$("+ str(best_theta_AllocBelow[0])+")"
  }),
)

4.6.2 Evaluation

4.6.2.1 X__AllocBelow
best_theta_AllocBelow
Theta(thHi=12)
worst_theta_AllocBelow
Theta(thHi=1)
%%time
L = 20 #30pub #20db #N_SAMPLEPATHS
T = 14 #20pub #10db #N_TRANSITIONS
first_n_t = int(1*L*T)

M = Model()
P = Policy(M)
SIM = DemandSimulator(seed=SEED_EVALU)
thetasOpt = []; thetasOpt.append(best_theta_AllocBelow)
thetaStar_expCbarcum_AllocBelow_evalu_opt, thetaStar_expCtilcum_AllocBelow_evalu_opt, \
_, _, \
best_theta_AllocBelow_evalu_opt, worst_theta_AllocBelow_evalu_opt, \
_, _, \
_, _, \
record_AllocBelow_evalu_opt = \
  P.perform_grid_search_sample_paths('X__AllocBelow', thetasOpt)
df_AllocBelow_evalu_opt = pd.DataFrame.from_records(
    record_AllocBelow_evalu_opt[:first_n_t], columns=labels)

M = Model()
P = Policy(M)
SIM = DemandSimulator(seed=SEED_EVALU)
thetasNon = []; thetasNon.append(worst_theta_AllocBelow)
## thetasNon = []; thetasNon.append(
#   P.build_theta(
#     {'thHi': 12, 'thAllocBelow': 5}
#   )
# )
thetaStar_expCbarcum_AllocBelow_evalu_non, thetaStar_expCtilcum_AllocBelow_evalu_non, \
_, _, \
best_theta_AllocBelow_evalu_non, worst_theta_AllocBelow_evalu_non, \
_, _, \
_, _, \
record_AllocBelow_evalu_non = \
  P.perform_grid_search_sample_paths('X__AllocBelow', thetasNon)
df_AllocBelow_evalu_non = pd.DataFrame.from_records(
    record_AllocBelow_evalu_non[:first_n_t], columns=labels)

print(
  f'{thetaStar_expCbarcum_AllocBelow_evalu_opt.iloc[-1]=:.2f}, \
    {thetaStar_expCbarcum_AllocBelow_evalu_non.iloc[-1]=:.2f}')
numThetas=1
... printing every 1th theta (if considered) ...
=== (0 / 1), theta=Theta(thHi=12) ===
numThetas=1
... printing every 1th theta (if considered) ...
=== (0 / 1), theta=Theta(thHi=1) ===
thetaStar_expCbarcum_AllocBelow_evalu_opt.iloc[-1]=85.25,     thetaStar_expCbarcum_AllocBelow_evalu_non.iloc[-1]=50.65
CPU times: user 11.2 s, sys: 131 ms, total: 11.3 s
Wall time: 12.1 s
P.plot_records(
  df=df_AllocBelow_evalu_opt,
  df_non=df_AllocBelow_evalu_non,
  pars=defaultdict(str, {
    'thHi': best_theta_AllocBelow_evalu_opt.thHi,
    'thHiNon': best_theta_AllocBelow_evalu_non.thHi, ## distinct key; a repeated 'thHi' would overwrite the first
    'legendLabels': [r'$\mathrm{opt}$', r'$\mathrm{non}$'],
    'suptitle': f'EVALUATION OF X__AllocBelow POLICY'+'\n'+f'(first {first_n_t} records)'+'\n'+ \
    f'L = {L}, T = {T}, '+ \
    r"$\theta^*=$("+ str(best_theta_AllocBelow_evalu_opt[0])+")"
  }),
)

## last_n_l = int(.99*L)
last_n_l = int(1.0*L)
P.plot_expFhat_charts(
  means={
      'AllocBelow optimal': thetaStar_expCbarcum_AllocBelow_evalu_opt[-last_n_l:],
      'AllocBelow non-optimal': thetaStar_expCbarcum_AllocBelow_evalu_non[-last_n_l:],
  },
  stdvs={
      'AllocBelow optimal': thetaStar_expCtilcum_AllocBelow_evalu_opt[-last_n_l:],
      'AllocBelow non-optimal': thetaStar_expCtilcum_AllocBelow_evalu_non[-last_n_l:],
  },
  labelX='Sample paths, ' + r'$\ell$',
  labelY='Profit\n[Allocs]',
  suptitle=f"Comparison of Optimal/Non-optimal Policies after Evaluation\n \
    L = {L}, T = {T}\n \
    last {last_n_l} records\n \
    ('exp' refers to expanding)",
  pars=defaultdict(str, {
    'colors': ['m', 'c']
  }),
)

Next, we evaluate with a single sample-path:

%%time
L = 1 #N_SAMPLEPATHS
T = 15 #100 #N_TRANSITIONS
first_n_t = int(1*L*T)

M = Model()
P = Policy(M)
SIM = DemandSimulator(seed=SEED_EVALU)
thetasOpt = []; thetasOpt.append(best_theta_AllocBelow)
thetaStar_expCbarcum_AllocBelow_evalu_opt, thetaStar_expCtilcum_AllocBelow_evalu_opt, \
_, _, \
best_theta_AllocBelow_evalu_opt, worst_theta_AllocBelow_evalu_opt, \
_, _, \
_, _, \
record_AllocBelow_evalu_opt = \
  P.perform_grid_search_sample_paths('X__AllocBelow', thetasOpt)
df_AllocBelow_evalu_opt = pd.DataFrame.from_records(
    record_AllocBelow_evalu_opt[:first_n_t], columns=labels)

M = Model()
P = Policy(M)
SIM = DemandSimulator(seed=SEED_EVALU)
thetasNon = []; thetasNon.append(worst_theta_AllocBelow)
## thetasNon = []; thetasNon.append(
#   P.build_theta(
#     {'thHi': 9}
#   )
# )
thetaStar_expCbarcum_AllocBelow_evalu_non, thetaStar_expCtilcum_AllocBelow_evalu_non, \
_, _, \
best_theta_AllocBelow_evalu_non, worst_theta_AllocBelow_evalu_non, \
_, _, \
_, _, \
record_AllocBelow_evalu_non = \
  P.perform_grid_search_sample_paths('X__AllocBelow', thetasNon)
df_AllocBelow_evalu_non = pd.DataFrame.from_records(
    record_AllocBelow_evalu_non[:first_n_t], columns=labels)

print(
  f'{thetaStar_expCbarcum_AllocBelow_evalu_opt.iloc[-1]=:.2f}, \
    {thetaStar_expCbarcum_AllocBelow_evalu_non.iloc[-1]=:.2f}')
numThetas=1
... printing every 1th theta (if considered) ...
=== (0 / 1), theta=Theta(thHi=12) ===
numThetas=1
... printing every 1th theta (if considered) ...
=== (0 / 1), theta=Theta(thHi=1) ===
thetaStar_expCbarcum_AllocBelow_evalu_opt.iloc[-1]=96.00,     thetaStar_expCbarcum_AllocBelow_evalu_non.iloc[-1]=45.00
CPU times: user 551 ms, sys: 15.1 ms, total: 566 ms
Wall time: 567 ms
<ipython-input-31-a09e326cfa07>:97: RuntimeWarning: invalid value encountered in double_scalars
  Ctilcum_tmp = np.sum(np.square(np.array(CcumIomega__lI) - Cbarcum_tmp))/(L - 1)

This warning is expected with a single sample path: the standard-error denominator L - 1 is zero when L = 1, so Ctilcum evaluates to NaN.
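A two-line check confirms why the warning appears: the numerator is zero (the single reward equals its own mean) and the denominator L - 1 is also zero, so NumPy emits the warning and returns NaN.

```python
import numpy as np

L = 1
CcumIomega__lI = [96.0]  # one cumulative reward, one sample path
Cbarcum_tmp = np.mean(CcumIomega__lI)

# Same expression as in the notebook; denominator L - 1 is zero when L = 1
with np.errstate(invalid='ignore'):  # suppress the RuntimeWarning for the demo
  Ctilcum_tmp = np.sum(
      np.square(np.array(CcumIomega__lI) - Cbarcum_tmp)) / (L - 1)

print(Ctilcum_tmp)  # nan
```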
P.plot_records(
  df=df_AllocBelow_evalu_opt,
  df_non=df_AllocBelow_evalu_non,
  pars=defaultdict(str, {
    'thHi': best_theta_AllocBelow_evalu_opt.thHi,
    'thHiNon': best_theta_AllocBelow_evalu_non.thHi, ## distinct key; a repeated 'thHi' would overwrite the first
    'legendLabels': [r'$\mathrm{opt}$', r'$\mathrm{non}$'],
    'suptitle': f'EVALUATION OF X__AllocBelow POLICY'+'\n'+f'(first {first_n_t} records)'+'\n'+ \
    f'L = {L}, T = {T}, '+ \
    r"$\theta^*=$("+ str(best_theta_AllocBelow_evalu_opt[0])+")"
    # r"$\theta^*=$("+ str(best_theta_ServiceAbove_evalu_opt[0])+", "+ str(best_theta_ServiceAbove_evalu_opt[1])+")"
  }),
)

df_AllocBelow_evalu_opt[df_AllocBelow_evalu_opt['t']==T-1][['Ccum']]
Ccum
14 96.0000
df_AllocBelow_evalu_non[df_AllocBelow_evalu_non['t']==T-1][['Ccum']]
Ccum
14 45.0000

From the Ccum plot we see that the cumulative reward for the optimal policy (thHi=12) keeps rising throughout the horizon, ending at 96 allocations, while the non-optimal policy (thHi=1) accumulates reward noticeably more slowly, ending at 45.