Content
- What is a system? Why are systems challenging to understand?
- Simulation to understand system dynamics.
- Optimization to make decisions about managing systems.
- Sensitivity Analysis: What matters?
Lecture 20
December 3, 2025
Mathematical programming requires the problem to have a particular geometry (e.g. linearity) so that solutions or error bounds can be guaranteed.
Many problems can be formulated in these frameworks, but not all…
What are some alternatives?
Most search algorithms look for critical points as candidate optima; the “best” of these critical points is then taken as the global optimum.
Two common approaches:
Estimate the gradient near the current point and step in the positive or negative direction (depending on whether maximizing or minimizing):
\[x_{n+1} = x_n \pm \alpha_n \nabla f(x_n)\]
The second is more problematic: in some cases, methods like stochastic gradient approximation or automatic differentiation can be used.
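As a concrete illustration, here is a minimal gradient-descent sketch for a toy one-dimensional problem where the gradient is known analytically (the objective, step size, and iteration count are purely illustrative, not part of the lecture example):

# toy objective and its analytical gradient (illustrative only)
f(x) = (x - 2.0)^2
∇f(x) = 2 * (x - 2.0)
# repeatedly step in the negative gradient direction (minimization)
function gradient_descent(x0, α, nsteps)
    x = x0
    for n in 1:nsteps
        x -= α * ∇f(x)
    end
    return x
end
gradient_descent(0.0, 0.1, 100)   # converges to x ≈ 2.0, the minimizer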
Use a sampling strategy to generate a new proposal, evaluate it, and keep it if it improves on the current solution.
Evolutionary Algorithms fall into this category.
Can also incorporate constraints into search.
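A minimal sketch of this sample-evaluate-keep loop for the same kind of toy problem (the proposal distribution, step scale, and sample count are illustrative assumptions):

# toy objective (illustrative only)
f(x) = (x - 2.0)^2
# random search: propose near the current best, keep only improvements
function random_search(x0; nsamples=1000, σ=0.5)
    x_best, f_best = x0, f(x0)
    for i in 1:nsamples
        x_new = x_best + σ * randn()   # sample a candidate near the current best
        if f(x_new) < f_best           # keep the candidate only if it improves
            x_best, f_best = x_new, f(x_new)
        end
    end
    return x_best, f_best
end
random_search(0.0)   # returns an approximate minimizer and its objective value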
Two main packages:
using Random, Distributions, Statistics, Metaheuristics

Random.seed!(1)
# define (stochastic) function to optimize
function f(x)
    lnorm = LogNormal(0.25, 1)
    y = rand(lnorm)               # random draw, so f(x) is noisy
    return sum(x .- y)^2
end
# objective: average over many replications to smooth out the noise
fobj(x) = mean([f(x) for i in 1:1000])
# bounds for one decision variable in [0, 1000] (row 1 = lower, row 2 = upper)
bounds = [0.0 1000.0]'
# optimize with differential evolution
results = Metaheuristics.optimize(fobj, bounds, DE())
results.best_sol   # best solution found: (f = 3.7745e+00, x = [1.74764])
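The fields of the returned solution object (visible in the printed output above) can be accessed directly, which we will also use below for the lake problem:

x_best = results.best_sol.x   # decision variable(s) of the best solution, here ≈ [1.74764]
f_best = results.best_sol.f   # corresponding objective value, here ≈ 3.77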
Metaheuristics.jl Algorithms
Metaheuristics.jl includes a variety of single-objective and multi-objective algorithms.
We won’t go into details here, and will just stick with DE() (differential evolution) in our examples.
These methods work pretty well, but can:

Simulation-Optimization involves the use of a simulation model to map decision variables and other inputs to system outputs.

What kinds of methods can we use for simulation-optimization?

Challenge: Underlying structure of the simulation model \(f(x)\) is unknown and can be complex.
We can use a search algorithm to navigate the response surface of the model and find an “optimum”.
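Schematically, the simulation model is wrapped in an objective function that the search algorithm can evaluate repeatedly; a sketch with a placeholder model (the quadratic form, noise level, and number of replications are arbitrary stand-ins):

using Statistics
# placeholder "simulation": maps decisions x (plus random inputs) to a system output
function simulate(x)
    return sum(x .^ 2) + 0.1 * randn()
end
# objective seen by the search algorithm: average many replications to smooth out the noise
objective(x) = mean([simulate(x) for _ in 1:500])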

We usually can’t guarantee that we can find an optimum (or even quantify an optimality gap) because:
But:
The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is.
— Ackoff, R. L. (1979). “The Future of Operational Research is Past.” The Journal of the Operational Research Society, 30(2), 93–104. https://doi.org/10.1057/jors.1979.22
Simulation-optimization methods typically rely on heuristics to decide that a solution is good enough. These can include
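For example, one such heuristic (sketched here with arbitrary window and tolerance values) is to stop once the best objective value found has essentially stopped improving:

# stop when the best objective value has improved by less than tol over the last k evaluations
function converged(best_history; k=20, tol=1e-6)
    length(best_history) < k && return false
    return abs(best_history[end] - best_history[end-k+1]) < tol
end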
\[\begin{align*} X_{t+1} &= X_t + a_t + y_t \\ &\quad + \frac{X_t^q}{1 + X_t^q} - bX_t,\\[0.5em] y_t &\sim \text{LogNormal}(\mu, \sigma^2) \end{align*}\]
| Parameter | Definition |
|---|---|
| \(X_t\) | P concentration in lake |
| \(a_t\) | point source P input |
| \(y_t\) | non-point source P input |
| \(q\) | P recycling rate |
| \(b\) | rate at which P is lost |
We now optimize the point source releases (using DE() with a maximum of 10,000 function evaluations). What do you observe?
# find critical value of the P concentration (find_zero is from Roots.jl;
# q, b, T, and the random non-point inflows y are assumed to be defined already)
using Roots
crit(x) = (x^q / (1 + x^q)) - b * x
Xcrit = find_zero(crit, (0.1, 1.5))

# simulate lake P concentrations for point-source releases a and random inflows y
function lake(a, y, q, b, T)
    X = zeros(T+1, size(y, 2))
    # calculate states
    for t = 1:T
        X[t+1, :] = X[t, :] .+ a[t] .+ y[t, :] .+ (X[t, :].^q ./ (1 .+ X[t, :].^q)) .- b .* X[t, :]
    end
    return X
end

# wrap the simulation for the optimizer: objective plus constraints
function lake_opt(a, y, q, b, T, Xcrit)
    X = lake(a, y, q, b, T)
    # calculate probability of exceeding the critical value at the end of the simulation
    Pexceed = sum(X[T+1, :] .> Xcrit) / size(X, 2)
    failconst = [Pexceed - 0.1]   # inequality constraint: exceedance probability ≤ 0.1
    # Metaheuristics.jl expects (objective, inequality constraints, equality constraints)
    return mean(a), failconst, [0.0]
end
# set bounds: each time step's point-source release is between 0.0 and 0.1
bounds = [0.0 * ones(T) 0.1 * ones(T)]'
# optimize, limiting the search to 10,000 function evaluations
obj(a) = lake_opt(a, y, q, b, T, Xcrit)
options = Options(f_calls_limit=10_000)
results = Metaheuristics.optimize(obj, bounds, DE(options=options))
# extract the best release strategy found
a = results.best_sol.x
# plotting assumes Plots.jl is loaded (mm for margins comes from Plots.PlotMeasures)
pa = plot(a, tickfontsize=16, guidefontsize=16, ylabel="Point Source P Release", xlabel="Time", linewidth=3, legend=false, left_margin=5mm, bottom_margin=10mm)

Now we can simulate to learn about how the solution performs.
# simulate the lake response to the optimized releases a (defines X for the plot below)
X = lake(a, y, q, b, T)
pconc = plot(X, alpha=0.2, linewidth=3,
    guidefontsize=16, tickfontsize=16,
    legendfontsize=16, label=false,
    legend=:topleft, bottom_margin=10mm, left_margin=5mm, grid=false)
hline!([Xcrit], color=:red, linestyle=:dot,
    label="Critical Value")
plot!(size=(600, 600))
ylabel!("Lake P Concentration")
xlabel!("Time")

Many choices were made in this optimization problem:
All of these affect the tradeoff between solution quality and computational expense.
Optimization converges faster with fewer variables.
Instead of optimizing the entire sequence of decisions, formulate those decisions as a decision rule \(a_t = f(X_{t}).\)
Example: Quinn et al. (2017) uses \[a_t = \min\left(\max\left(\sum_j w_j \left| \frac{X_t - c_j}{r_j}\right|^3, 0.01\right), 0.1\right).\]
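Translated into code, a rule of this form might look like the sketch below, where w, c, and r are the rule's weights, centers, and radii (these parameters, rather than the releases themselves, then become the decision variables being optimized):

# cubic radial-basis-function release rule a_t = f(X_t), parameterized by w, c, r
function release_rule(X, w, c, r)
    a = sum(w .* abs.((X .- c) ./ r) .^ 3)
    return clamp(a, 0.01, 0.1)   # keep the release within the bounds in the formula above
end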
No Class Monday:
HW5: Due tomorrow (12/4)