Undergraduate Computational Macro
To solve for the value \(v\) satisfying
\[ v = y + \beta v \]
define the map
\[ f(v) := y + \beta v \]
Consider iteration of the map \(f\) starting from an initial condition \(v_0\)
\[ v_{n+1} = f(v_n) \]
Does this converge? It depends on \(f(\cdot)\), as we will explore in detail
We will check convergence in Julia by computing norm(v_new - v_old)
For our simple linear map, \(f(v) \equiv y + \beta v\)
Iteration becomes \(v_{n+1} = y + \beta v_n\). Iterating backwards to \(v_0\) \[ v_{n+1} = y + \beta v_n = y + \beta y + \beta^2 v_{n-1} = \cdots = y \sum_{i=0}^{n} \beta^i + \beta^{n+1} v_0 \]
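Since \(0 < \beta < 1\), the \(\beta^{n+1} v_0\) term vanishes and the geometric sum converges, so
\[ \lim_{n\to\infty} v_n = \frac{y}{1 - \beta} \]
For the values used below (\(y = 1\), \(\beta = 0.9\)) this is \(1/(1 - 0.9) = 10\), which the code should approach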
using LinearAlgebra # norm is in the standard library
y = 1.0
beta = 0.9
v_iv = 0.8 # initial condition
v_old = v_iv
normdiff = Inf
iter = 1
for i in 1:1000
v_new = y + beta * v_old # the f(v) map
normdiff = norm(v_new - v_old)
if normdiff < 1.0E-7 # check convergence
iter = i
break # converged, exit loop
end
v_old = v_new # replace and continue
end
println("Fixed point = $v_old |f(x) - x| = $normdiff in $iter iterations");
Fixed point = 9.999999081896231 |f(x) - x| = 9.181037796679448e-8 in 154 iterations
v_old = v_iv
normdiff = Inf
iter = 1
while normdiff > 1.0E-7 && iter <= 1000
v_new = y + beta * v_old # the f(v) map
normdiff = norm(v_new - v_old)
v_old = v_new # replace and continue
iter = iter + 1
end
println("Fixed point = $v_old |f(x) - x| = $normdiff in $iter iterations")
Fixed point = 9.999999173706609 |f(x) - x| = 9.181037796679448e-8 in 155 iterations
function v_fp(beta, y, v_iv; tolerance = 1.0E-7, maxiter=1000)
v_old = v_iv
normdiff = Inf
iter = 1
while normdiff > tolerance && iter <= maxiter
v_new = y + beta * v_old # the f(v) map
normdiff = norm(v_new - v_old)
v_old = v_new
iter = iter + 1
end
return (v_old, normdiff, iter) # returns a tuple
end
y = 1.0
beta = 0.9
v_star, normdiff, iter = v_fp(beta, y, 0.8)
println("Fixed point = $v_star |f(x) - x| = $normdiff in $iter iterations")
Fixed point = 9.999999173706609 |f(x) - x| = 9.181037796679448e-8 in 155 iterations
function fixedpointmap(f, iv; tolerance = 1.0E-7, maxiter=1000)
x_old = iv
normdiff = Inf
iter = 1
while normdiff > tolerance && iter <= maxiter
x_new = f(x_old) # use the passed in map
normdiff = norm(x_new - x_old)
x_old = x_new
iter = iter + 1
end
return (; value = x_old, normdiff, iter) # A named tuple
end
fixedpointmap (generic function with 1 method)
y = 1.0
beta = 0.9
v_initial = 0.8
f(v) = y + beta * v # note that y and beta are used in the function!
sol = fixedpointmap(f, 0.8; tolerance = 1.0E-8) # don't need to pass y or beta; f closes over them
println("Fixed point = $(sol.value) |f(x) - x| = $(sol.normdiff) in $(sol.iter) iterations")
# Unpacking notation for the named tuples not sensitive to order
(; value, iter, normdiff) = fixedpointmap(v -> y + beta * v, # creates an anonymous "closure"
v_initial; tolerance = 1.0E-8)
println("Fixed point = $value |f(x) - x| = $normdiff in $iter iterations")
Fixed point = 9.999999918629035 |f(x) - x| = 9.041219328764782e-9 in 177 iterations
Fixed point = 9.999999918629035 |f(x) - x| = 9.041219328764782e-9 in 177 iterations
using NLsolve
# best style
y = 1.0
beta = 0.9
iv = [0.8] # note move to array
f(v) = y .+ beta * v # note that y and beta are used in the function!
sol = fixedpoint(f, iv) # uses Anderson Acceleration
fnorm = norm(f(sol.zero) .- sol.zero)
println("Fixed point = $(sol.zero) |f(x) - x| = $fnorm in $(sol.iterations) iterations")
Fixed point = [9.999999999999972] |f(x) - x| = 3.552713678800501e-15 in 3 iterations
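For comparison, the naive iteration in fixedpointmap above contracts only at rate \(\beta\) per step, so it needs far more iterations to reach the same answer:
sol_naive = fixedpointmap(f, iv) # same map, no acceleration
@show sol_naive.value, sol_naive.iter; # roughly 150 iterations, versus 3 above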
\[ 1 + c + c^2 + c^3 + \cdots + c^T = \sum_{t=0}^T c^t = \frac{1 - c^{T+1}}{1-c} \]
\[ 1 + c + c^2 + c^3 + \cdots = \sum_{t=0}^{\infty} c^t = \frac{1}{1 -c } \]
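Both identities are easy to verify numerically; a quick sketch, with \(c\) and \(T\) chosen purely for illustration:
c, T = 0.8, 20
@show sum(c^t for t in 0:T) # direct sum
@show (1 - c^(T + 1)) / (1 - c) # closed form, same value
@show 1 / (1 - c); # limit of the infinite sum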
An asset has a payment stream of \(y_t\) dollars at times \(t = 0, 1, 2, \ldots\). Let \(G \equiv 1+g\) with \(g > 0\) and \(G < R \equiv 1 + r\), so payments grow at rate \(g\)
\[ y_t = G^t y_0 \]
The present value of the asset is
\[ \begin{aligned} p_0 & = y_0 + y_1/R + y_2/(R^2) + \cdots = \sum_{t=0}^{\infty} y_t (1/R)^t = \sum_{t=0}^{\infty} y_0 G^t (1/R)^t \\ &= \sum_{t=0}^{\infty} y_0 (G/R)^t = \frac{y_0}{1 - G/R} \end{aligned} \]
For small \(r\) and \(g\), use a first-order Taylor expansion (i.e., treat \(r g \approx 0\) and \(r^2 \approx 0\)) to get
\[ G R^{-1} \approx 1 + g - r \]
Hence,
\[ p_0 = y_0/(1 - (1+g)/(1+r)) \approx \frac{y_0}{r - g} \]
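A quick numerical check of this approximation, with illustrative values of \(g\) and \(r\):
y_0, g, r = 1.0, 0.01, 0.05
@show y_0 / (1 - (1 + g) / (1 + r)) # exact: 26.25
@show y_0 / (r - g); # approximation: 25.0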
Consider an asset that pays \(y_t = 0\) for \(t > T\) and \(y_t = G^t y_0\) for \(t \leq T\)
The present value is
\[ \begin{aligned} p_0 &= \sum_{t=0}^{T} y_t (1/R)^t = \sum_{t=0}^{T} y_0 G^t (1/R)^t \\ &= \sum_{t=0}^{T} y_0 (G/R)^t = y_0\frac{1 - (G/R)^{T+1}}{1 - G/R} \end{aligned} \]
How large is \((G/R)^{T+1}\)?
infinite_payoffs(g, r, y_0) = y_0 / (1 - (1 + g) * (1 + r)^(-1))
function finite_payoffs(T, g, r, y_0)
G = 1 + g
R = 1 + r
return (y_0 * (1 - G^(T + 1) * R^(-T - 1))) / (1 - G * R^(-1))
end
@show infinite_payoffs(0.01, 0.05, 1.0)
@show finite_payoffs(100, 0.01, 0.05, 1.0);
infinite_payoffs(0.01, 0.05, 1.0) = 26.24999999999994
finite_payoffs(100, 0.01, 0.05, 1.0) = 25.73063957477331
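The gap between the two values is the truncation factor \((G/R)^{T+1}\), about 2% here:
ratio = (1.01 / 1.05)^101 # (G/R)^(T+1) with T = 100
@show ratio # approximately 0.0198
@show infinite_payoffs(0.01, 0.05, 1.0) * (1 - ratio); # recovers the finite value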
using Plots, LaTeXStrings # for plot, plot!, and the L"..." labels
T = 10
y_0 = 1.0
plot(title = L"Present Value $p_0(T)$", legend = :topleft, xlabel = "T")
plot!(finite_payoffs.(0:T, 0.4, 0.9, y_0),
label = L"r=0.9 \gg 0.4 = g")
plot!(finite_payoffs.(0:T, 0.4, 0.5, y_0), label = L"r=0.5 > 0.4 = g")
plot!(finite_payoffs.(0:T, 0.4, 0.4001, y_0),
label = L"r=0.4001 \approx 0.4 = g")
plot!(finite_payoffs.(0:T, 0.5, 0.4, y_0), label = L"r=0.4 < 0.5 = g")
Let's write a version of the model for arbitrary \(y_t\) and relabel \(\beta \equiv 1/R\)
The asset price \(p_t\), starting at any \(t\), is
\[ \begin{aligned} p_t &= \sum_{j = 0}^{\infty}\beta^j y_{t+j}\\ p_t &= y_t + \beta y_{t+1} + \beta^2 y_{t+2} + \beta^3 y_{t+3} + \cdots\\ &= y_t + \beta \left(y_{t+1} + \beta y_{t+2} + \beta^2 y_{t+3} + \cdots\right)\\ &= y_t + \beta \sum_{j=0}^{\infty}\beta^j y_{t+j+1}\\ &= y_t + \beta p_{t+1} \end{aligned} \]
In the simple case of \(y_t = \bar{y}\), the recursive equation is
\[ p_t = \bar{y} + \beta p_{t+1} \]
In cases where the price is time-invariant, write this as a fixed point
\[ p = \bar{y} + \beta p \equiv f(p) \]
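Solving this linear fixed point directly gives
\[ p = \frac{\bar{y}}{1 - \beta} \]
which iteration of \(f\) converges to since \(0 < \beta < 1\), just as in the value-function example above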
More generally, with deterministic payoffs the recursion is
\[ p_t = y_t + \beta p_{t+1} \]
With uncertain payoffs, replace the future price by its expectation
\[ p_t = y_t + \beta \mathbb{E}\left[p_{t+1}\right] \]
Assume two prices, \(p_L\) and \(p_H\), depending on whether the current payoff is \(y_L\) or \(y_H\), each occurring with probability 0.5 \[ \begin{aligned} p_L &= y_L + \beta \left[ 0.5 p_L + 0.5 p_H \right]\\ p_H &= y_H + \beta \left[ 0.5 p_L + 0.5 p_H \right] \end{aligned} \]
Stack \(p \equiv \begin{bmatrix} p_L & p_H \end{bmatrix}^{\top}\) and \(y \equiv \begin{bmatrix} y_L & y_H \end{bmatrix}^{\top}\)
\[ p = y + \beta \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix} p\equiv f(p) \]
We could solve this as a linear equation, but let's use a fixed point
y = [0.5, 1.5] #y_L, y_H
beta = 0.9
iv = [0.8, 0.8]
A = [0.5 0.5; 0.5 0.5]
sol = fixedpoint(p -> y .+ beta * A * p, iv) # f(p) := y + beta A p
p_L, p_H = sol.zero # can unpack a vector
@show p_L, p_H, sol.iterations
# p = y + beta A p => (I - beta A) p = y => p = (I - beta A)^{-1} y
@show (I - beta * A) \ y; # or inv(I - beta * A) * y
(p_L, p_H, sol.iterations) = (9.500000000000028, 10.500000000000028, 4)
(I - beta * A) \ y = [9.499999999999996, 10.499999999999996]
National income accounting gives an identity: national income is the sum of consumption, investment, and government expenditures
\[ y_t = c_t + i_t + g_t \]
Investment is private plus government investment; assume both are fixed here, at \(i\) and \(g\). Does holding them fixed embed behavioral assumptions?
Consumption is \(c_t = b y_{t-1}\), i.e. “behavior” rather than accounting, with a lag on last period's income/output
Substituting the consumption equation into the national income equation
\[ \begin{aligned} y_t &= c_t + i + g\\ y_t &= b y_{t-1} + i + g\\ y_t &= b (b y_{t-2} + i + g) + i + g\\ y_t &= b^2 y_{t-2} + b (i + g) + (i + g) \end{aligned} \]
Iterating backwards to a \(y_0\),
\[ y_t = \sum_{j=0}^{t-1} b^j (i + g) + b^t y_0 = \frac{1 - b^{t}}{1 - b} (i + g) + b^t y_0 \]
Take limit as \(t \to \infty\) to get
\[ \lim_{t\to\infty}y_t = \frac{1}{1 - b} (i + g) \]
Define the Keynesian multiplier as \(1/(1-b)\)
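We can check the multiplier with the fixedpointmap function from above; the parameter values here are purely illustrative:
b, i, g = 0.6, 0.3, 0.2
sol = fixedpointmap(y -> b * y + i + g, 1.0) # iterate the income map
@show sol.value, (i + g) / (1 - b); # both approximately 1.25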
Is this a correct (or useful) model?
\[ y_t = b y_{t-1} + i + g \]
For \(v_{n+1} = f(v_n)\), take the limit for some \(v_0\),
\[ \begin{aligned} v_1 &= f(v_0)\\ v_2 &= f(v_1) = f(f(v_0))\\ \ldots &\\ \lim_{n\to\infty} v_n &= f(f(\ldots f(v_0))) \stackrel{?}{\equiv} v^* \end{aligned} \]
A contraction mapping is a function \(f\) such that for some \(0 < \beta < 1\) and all \(x, y \in X\)
\[ |f(x) - f(y)| \leq \beta |x - y| \]
If \(f\) is a contraction mapping, then \(f\) has a unique fixed point \(x^*\)
The proof is constructive, and gives us a way to find the fixed point
Start with \(x_0 \in \mathbb{R}\) and define \(x_{n+1} = f(x_n)\)
Then, for \(n \geq 1\)
\[ \begin{aligned} |x_{n+1} - x_n| &= |f(x_n) - f(x_{n-1})| \leq \beta |x_n - x_{n-1}| = \beta |f(x_{n-1}) - f(x_{n-2})| \\ &\leq \beta^2 |x_{n-1} - x_{n-2}| \leq \cdots \leq \beta^n |x_1 - x_0| \end{aligned} \]
Since \(0 < \beta < 1\), the right hand side converges to zero as \(n\to\infty\), independent of \(x_0\)
Since the differences shrink geometrically, the sequence \(\{x_n\}\) is Cauchy, so \(x_n \to x^*\) as \(n\to\infty\), and continuity of \(f\) gives \(f(x^*) = x^*\)
Let \(f(x) = a + b x\) for \(a, b \in \mathbb{R}\)
Substitute into the definition of a contraction mapping directly
\[ \begin{aligned} |f(x) - f(y)| &= |a + b x - (a + b y)| = |b| |x - y| \end{aligned} \]
so \(f\) is a contraction with \(\beta = |b|\) whenever \(|b| < 1\)
The multidimensional generalization of this checks that the maximum absolute eigenvalue (the spectral radius) of the linear map is less than one
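For instance, we can verify the condition for the two-state asset pricing map \(f(p) = y + \beta A p\) used earlier:
using LinearAlgebra
beta = 0.9
A = [0.5 0.5; 0.5 0.5]
@show maximum(abs.(eigvals(beta * A))); # 0.9 < 1, so the map is a contraction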