Final Practice Problems

Author

Jesse Perla, UBC

Student Name/Number:

Instructions

The following are example directions for an exam.

  • Ensure you immediately fill in the field above with your name and student number
  • The exam has XXXXX questions, each with multiple parts, for a total of XXXXX points. You may not finish the exam, so answer all questions to the extent possible rather than getting stuck on any one question
  • This exam is closed book, and accessing the internet is not permitted
    • See the formula “sheet” embedded at the end of this notebook for reference
    • You can use the internal help as required (in the JupyterHub menu, use Settings/Show Contextual Help)
  • Execute the file to begin, which will also check your setup. To do this in Jupyter, go to Run > Run All Cells in the menu, or use the equivalent buttons
  • Edit this file directly, in place as an ipynb file, which we will automatically download at the end of the exam. In particular
    • DO NOT rename this file with your name. It is automatically associated with your Canvas account
    • DO NOT save-as the file, move it, or export to pdf or html
    • DO NOT add any additional packages
  • Save the notebook as you work
    • We will only grade what is saved in this exact file at the end of the exam, and it is your responsibility to ensure the file is saved
    • We will not execute the notebook, so ensure all code, figures, etc. are ready as-is upon saving for submission
  • Only edit the results in the indicated code or markup blocks, as we will not grade anything outside of those
    • You will not be judged on code quality directly, but code clarity may be required for us to verify that you understood the problem
    • If a question requires math, you may put LaTeX inside the cells, but you will not be judged on whether you write LaTeX vs. plain-text math that does not quite match LaTeX. It should, however, be clear
# Packages available
# DO NOT MODIFY OR ADD PACKAGES
using Distributions, Plots, LaTeXStrings, LinearAlgebra, Statistics, Random, QuantEcon, NLsolve

Short Question 1

What is the definition of a Martingale? Take the following stochastic process

\[ X_{t+1} = a + X_t + \epsilon_{t+1} \]

for some \(\epsilon_{t+1}\) which is IID. What values of \(a\) and what properties of \(\epsilon_{t+1}\) would make this a martingale? If the variance of \(\epsilon_{t+1}\) is \(\sigma^2\), would you then expect there to be a stationary distribution? Why or why not?

Answer:

(double click to edit your answer)

Short Question 2

Given a stochastic process \(X_t\), write a paragraph describing what economists mean by rational expectations. Explain this in the context of an agent making forecasts and the biases in those forecasts.

Answer:

(double click to edit your answer)

Short Question 3

Consider our baseline consumption based asset pricing model with CRRA utility \(u(c) = \frac{c^{1-\gamma}}{1-\gamma}\) with \(\gamma > 0\) and a discount factor of \(\beta \in (0,1)\).

Given a consumption process \(c_t\), the stochastic discount factor (SDF) is then

\[ m_{t+1} \equiv \beta \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} \]

Now imagine one asset whose dividends \(d_t\) are positively correlated with \(c_t\), and another whose dividends are negatively correlated with \(c_t\). In both cases, assume that the expected dividends are the same.

Which asset would you expect to have higher prices? Interpret.

Answer:

(double click to edit your answer)

Short Question 4

Consider the case of Short Question 3 but where the agent is risk-neutral (i.e., \(\gamma = 0\) above). Given the assumption that the expected dividends are the same, would you change your answer to the previous question? Interpret.

Answer:

(double click to edit your answer)

Short Question 5

Take the consumer welfare function for a stochastic consumption process \(c_t\) as

\[ \mathbb{E}_0\left\{\sum_{t=0}^{\infty} \beta^t u(c_t)\right\} \]

where \(u(c)\) is assumed to be strictly concave and increasing.

Briefly explain the two types of consumption smoothing incentives that occur in these cases.

Answer:

(double click to edit your answer)

Short Question 6

In our standard search model, with a probability \(\alpha\) of losing the job and a probability \(\gamma\) of getting a job offer, briefly interpret the left- and right-hand sides of the Bellman equation. What is the definition of a reservation wage?

\[ \begin{aligned} V(w) = \max\{&u(w) + \beta \left[ (1-\alpha) V(w) + \alpha V(0) \right],\\ & u(c) + \beta \left[ (1-\gamma) V(0) + \gamma \mathbb{E}(V(w'))\right]\} \end{aligned} \]

Answer:

(double click to edit your answer)

Short Question 7

In the permanent income model, we had consumers face a fixed gross interest rate \(R\) and an exogenously given income process \(y_t\). The consumer’s problem was to maximize

\[ \begin{aligned} \max_{\{c_{t+j}, F_{t+j}\}_{j=0}^\infty} & \mathbb{E}_t\left[\sum_{j=0}^\infty \beta^j u(c_{t+j})\right] \\ \text{s.t.} \,& F_{t+j+1} = R(F_{t+j} + y_{t+j} - c_{t+j})\,\quad \text{ for all } j \geq 0\\ & \text{no-ponzi scheme/transversality condition} \end{aligned} \]

Explain the assumptions required such that the solution is

\[ c_t = (1-\beta)\left[\mathbb{E}_t\left[\sum_{j=0}^\infty \beta^j y_{t+j}\right] + F_t\right] \]

Does your answer change slightly if the income process is deterministic vs. stochastic?

Answer:

(double click to edit your answer)

Short Question 8

We showed that in the case of \(\beta R = 1\) and other standard assumptions that the change in consumption in the Permanent Income Model is

\[ c_{t+1} - c_t = (1-\beta)\sum_{j=0}^\infty \beta^j \left[\mathbb{E}_{t+1}[y_{t+j+1}] - \mathbb{E}_t[y_{t+j+1}]\right] \]

Interpret this expression in terms of information sets and martingales.

Answer:

(double click to edit your answer)

Short Question 9

Take a Markov Chain with two states and a transition matrix of

\[ P = \begin{bmatrix} a & 1-a \\ 1-b & b \end{bmatrix} \]

What is a sufficient condition for this to have a unique stationary distribution? Give examples of \(a\) and \(b\) for 2 different types of failure of a unique stationary distribution and give some intuition for why they fail.

Answer:

(double click to edit your answer)

Question 1

Take a variation on the lake model where workers will spend a portion of their lives as a student.

The following describes the probabilities

  • \(\lambda\), the job finding rate for currently unemployed workers transitioning directly to employment
  • \(\gamma\) is the probability of an unemployed worker going to study or learn new skills (hence \(1 - \lambda - \gamma\) is the probability that an unemployed worker remains unemployed)
  • \(\alpha\) is the dismissal rate for currently employed workers where they enter unemployment. Employed workers do not directly enter the studying state
  • \(\delta\) is the probability of a student transitioning to employment from work placement. They never go directly to unemployment
  • There is no entry or exit from the labor force (i.e. \(g = b = d = 0\))
  • We normalize the population to be \(N_t = 1\), and define the employment and unemployment rate as \(e_t, u_t\). The proportion of students is \(s_t\). Note \(e_t + u_t + s_t = 1\).
  • Define \(x_t \equiv \left(\begin{matrix}u_t\\ e_t\\ s_t \end{matrix}\right)\).
# Reusable functions, do not modify
function iterate_map(f, x0, T)
    x = zeros(length(x0), T + 1)
    x[:, 1] = x0
    for t in 2:(T + 1)
        x[:, t] = f(x[:, t - 1])
    end
    return x
end
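To illustrate how `iterate_map` is used (a standalone sketch; the function is restated so the snippet runs on its own, and the matrix values here are made up, not the model's), iterating a linear map whose columns sum to one converges to its stationary vector:

```julia
using LinearAlgebra

# Restated from above so this sketch is self-contained
function iterate_map(f, x0, T)
    x = zeros(length(x0), T + 1)
    x[:, 1] = x0
    for t in 2:(T + 1)
        x[:, t] = f(x[:, t - 1])
    end
    return x
end

A = [0.9 0.05;   # hypothetical linear dynamics; columns sum to one,
     0.1 0.95]   # so total population shares are preserved each step
x0 = [0.5, 0.5]
x = iterate_map(x -> A * x, x0, 200)
x_bar = x[:, end]  # long-run point of the iteration
```

Here `x_bar` should agree with the solution of \(A \bar{x} = \bar{x}\) normalized to sum to one.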

Part (a)

Using a similar method to the previous question, define a function which creates a model with this process, and carefully define the Markov Chain transition matrices. Hint: Is there a difference now between the Markov Chain and the functions for the linear dynamics?

# OLD CODE FOR REFERENCE, NO NEED TO MODIFY
function lake_model(;lambda = 0.283, alpha = 0.013, b = 0, d = 0, gamma = 0.05, delta = 0.2)
    g = b - d
    A = [(1 - lambda) * (1 - d)+b (1 - d) * alpha+b
         (1 - d)*lambda (1 - d)*(1 - alpha)]
    A_hat = A ./ (1 + g)
    x_0 = ones(size(A_hat, 1)) / size(A_hat, 1)
    sol = fixedpoint(x -> A_hat * x, x_0)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    x_bar = sol.zero
    return (; lambda, alpha, b, d, A, A_hat, x_bar)
end

# edit your code here, modifying the old method.  You will not need separate A and A_hat matrices.  Consider whether the P matrix alone is enough.

# Note the default values for gamma and delta

function new_lake_model(;lambda = 0.283, alpha = 0.013, gamma = 0.05, delta = 0.2,
    x_0 = ones(2) / 2) # change initial condition for fixed point
    # Modify these below to be consistent with the new model
    # This set b = d = 0 from above.
    A = [(1 - lambda)  alpha
         lambda        1 - alpha]
    A_hat = A
    sol = fixedpoint(x -> A_hat * x, x_0)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    x_bar = sol.zero
    return (; lambda, alpha, gamma, delta, A_hat, x_bar)
end
new_lake_model (generic function with 1 method)

Plot the evolution of the unemployment, employment, and studying rates using your new function.

# edit your code here
lm = lake_model() # call new function
N_0 = 150      # population
e_0 = 0.90     # initial employment rate
s_0 = 0.04     # initial student rate
u_0 = 1 - e_0  # - s_0 when ready
T = 50         # simulation length
x_0 = [u_0; e_0]  # Add your s_0 after your code is functional.
x_ss = lm.x_bar
x_path = iterate_map(x -> lm.A_hat * x, x_0, T - 1)
plt_unemp = plot(1:T, x_path[1, :];title = "Unemployment rate", 
                 color = :blue, label = L"u_t")
hline!(plt_unemp, [x_ss[1]], color = :red, linestyle = :dash, label = L"\pi^{*}_U")
plt_emp = plot(1:T, x_path[2, :]; title = "Employment rate", color = :blue, label = L"e_t")
hline!(plt_emp, [x_ss[2]], color = :red, linestyle = :dash,label = L"\pi^{*}_E")
plot(plt_unemp, plt_emp, layout = (1, 2), size = (1200, 400))

Part (b)

Here we will investigate how studying impacts the long-run steady state.

First, what are the long-run employment and unemployment rates when gamma = delta = 0? Hint: consider starting the iteration with x_0 = [0.5, 0.5, 0.0] for the fixed point. Otherwise, is there indeterminacy?

# edit your code here
lm = new_lake_model(;# x_0 = ones(3) / 3
                    ) # hint: can change the initial condition here
@show lm.x_bar # Note the role of the initial condition, since S is an absorbing state when delta = 0
lm.x_bar = [0.03722261989978509, 0.9534717251252686, 0.00930565497494612]

Next look at the case where gamma = 0.3 and delta = 0.1. What are the long-run employment and unemployment rates?

# edit your code here

Next look at the case where gamma = 0.3 and delta = 1.0. What are the long-run employment and unemployment rates?

# edit your code here

Finally, interpret the reasons for the differences in the long-run unemployment rate across these three cases.

Answer:

(double click to edit your answer)

Question 2

We previously priced a consol (i.e., a bond that pays a constant amount for eternity) given the standard risk-averse stochastic discount factor above.

The pricing equation is then

\[ p(x) = {\mathbb E} \left[\beta G(x')^{-\gamma} (\zeta + p(x'))\mid x \right] \]

which we can implement for a Markov chain as

\[ p(x) = \sum_{x'\in S} \left[\beta G(x')^{-\gamma} (\zeta + p(x'))\right]P(x, x') \]

And in code, if we define \(M(x, x') \equiv P(x, x') G(x')^{-\gamma}\), then the price can be calculated with the following code:

function consol_price(ap)
    (; beta, gamma, mc, zeta, G) = ap
    P = mc.p
    y = mc.state_values'
    M = P .* G.(y) .^ (-gamma)
    @assert maximum(abs, eigvals(M)) < 1 / beta

    # Compute price
    p = (I - beta * M) \ sum(beta * zeta * M, dims = 2)
    return p
end
function consol_model(; beta = 0.96, gamma = 2.0, G = exp,
                               rho = 0.9, sigma = 0.02, N = 25, zeta = 1.0)
    mc = tauchen(N, rho, sigma)
    G_x = G.(mc.state_values)
    return (; beta, gamma, mc, G, G_x, zeta)
end
ap = consol_model(;beta = 0.9)
sol = consol_price(ap)
plot(ap.G_x, sol, xlabel = L"G(x_t)", label = L"p(x_t)", title="Consol price")
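As a sanity check on the linear-system solve inside `consol_price` (a self-contained sketch with a made-up two-state chain, not part of any question), the computed price should satisfy the pricing fixed point \(p = \beta M (\zeta \mathbb{1} + p)\):

```julia
using LinearAlgebra

beta, gamma, zeta = 0.9, 2.0, 1.0
P = [0.8 0.2; 0.3 0.7]              # hypothetical transition matrix
G_vals = [1.02, 0.98]               # hypothetical gross growth in each state
M = P .* (G_vals' .^ (-gamma))      # M[i, j] = P[i, j] * G(x_j)^(-gamma)
@assert maximum(abs, eigvals(M)) < 1 / beta  # spectral condition for the solve
p = (I - beta * M) \ (beta * zeta * M * ones(2))
residual = p - beta * M * (zeta .+ p)   # ~0 at the pricing fixed point
```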

Part (a)

Now take this code and plot this figure for \(\gamma = 0\).

# edit solution here

Interpret the results relative to the case with \(\gamma = 2\). If the solution looks strange, consider the scale.

Answer:

(double click to edit your answer)

Part (b)

Take the above code (going back to the baseline \(\gamma\)) and consider a perpetual option to purchase this consol at a strike price \(p_s\). This option never expires, and all price volatility comes from the stochastic discount factor volatility (i.e., the \(m_{t+1}\)).

With this, jumping to the recursive formulation and taking the consol price \(p(x)\) as given, the Bellman equation for the option value problem is

\[ w(x; p_s) = \max\left\{\sum_{x'\in S} \beta G(x')^{-\gamma} P(x, x') w(x'; p_s), p(x) - p_s\right\} \]

A code implementation of this, using the consol price above, is,

# price of perpetual call on consol bond
function call_option(ap, p_s)
    (; beta, gamma, mc, G) = ap
    P = mc.p
    y = mc.state_values'
    M = P .* G.(y) .^ (-gamma)
    @assert maximum(abs, eigvals(M)) < 1 / beta
    p = consol_price(ap)

    # Operator for fixed point, using consol prices
    T(w) = max.(beta * M * w, p .- p_s)
    sol = fixedpoint(T, zeros(length(y), 1); m=2, iterations = 200)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    return sol.zero
end

ap = consol_model(;beta = 0.9)
p = consol_price(ap)
w = call_option(ap, 40.0)

plot(ap.G_x, p, color = "blue", lw = 2, xlabel = "state", label = "consol price")
plot!(ap.G_x, w, color = "green", lw = 2, label = "value of call option")

Repeat this figure, but now set \(p_s = 80\).

# modify code here
ap = consol_model(;beta = 0.9)
p = consol_price(ap)
w = call_option(ap, 40.0)

plot(ap.G_x, p, color = "blue", lw = 2, xlabel = "state", label = "consol price")
plot!(ap.G_x, w, color = "green", lw = 2, label = "value of call option")

Compare the two cases for the \(p_s = 40\) vs. \(p_s = 80\).

Answer:

(double click to edit your answer)

Part (c)

Now consider a new type of option which expires with probability \(\delta\) each period. If it expires, the holder gets a final chance to exercise the option just before expiration; otherwise it becomes worthless.

The Bellman equation for this new option becomes

\[ w(x; p_s) = \max\left\{\sum_{x'\in S} \beta G(x')^{-\gamma} P(x', x)\left[(1-\delta) w(x'; p_s) + \delta \max\{0, p(x') - p_s\}\right], p(x) - p_s\right\} \]

Below is the code implementing our original option above. Modify it for the new option. Hint: almost all of the changes are in the \(T(w)\) definition. The code has been rearranged relative to the lecture notes to make it easier to modify.

# Code for new parameters already added.
function new_consol_model(; beta = 0.96, gamma = 2.0, G = exp, delta = 0.1, rho = 0.9, sigma = 0.02, N = 25, zeta = 1.0)
    mc = tauchen(N, rho, sigma)
    G_x = G.(mc.state_values)
    return (; beta, gamma, mc, G, G_x, zeta, delta)
end

# modify here
function new_call_option(ap, p_s)
    (; beta, gamma, mc, G, delta) = ap
    P = mc.p
    y = mc.state_values'
    M = P .* G.(y) .^ (-gamma)

    @assert maximum(abs, eigvals(M)) < 1 / beta
    p = consol_price(ap)

    # Original code
    # T(w) = max.(beta * M * w, p .- p_s)

    # Expanded version manually 
    T(w) = [max(
                sum(beta * M[i,j] * w[j]  for j in eachindex(y)),
                p[i] - p_s
                ) for i in eachindex(w)]

    sol = fixedpoint(T, zeros(length(y), 1); m=2, iterations = 200)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    return sol.zero
end

ap = new_consol_model(;beta = 0.9)
p = consol_price(ap)
w = new_call_option(ap, 40.0)

plot(ap.G_x, p, color = "blue", lw = 2, xlabel = "state", label = "consol price")
plot!(ap.G_x, w, color = "green", lw = 2, label = "value of call option")

Interpret the differences relative to our baseline case (which is nested when \(\delta = 0.0\)).

Answer:

(double click to edit your answer)

Question 3

The following sample code sets up a model with 2 states, a probability of 0.2 of switching from the first state to the second, a probability of 0.1 of switching from the second state to the first, and payoffs of 0.2 and 2.0 in the two states respectively.

# sample code for simpler problem
P = [0.8 0.2; 0.1 0.9] # transition matrix, consistent with ordering of payoffs
y = [0.2, 2.0]  # payoffs in each state
mc = MarkovChain(P, y) # create a MarkovChain object with those state values and the transition matrix
init = 1  # i.e the state index, not the initial payoff value
T = 100
y_sim = simulate(mc, T; init) # simulate T periods of the Markov chain starting in state = init
plot(1:T, y_sim, xlabel = L"t", label = L"y_t", title = "Simulated path of payoffs")

The next code simulates \(N\) possible realizations of this payoff path, then calculates the expected discounted value of these payoffs over the \(T\) periods for a risk-neutral agent with discount factor \(\beta = 0.9\).


N = 5000
T = 500
beta = 0.9
y_sims = [simulate(mc, T; init) for _ in 1:N]
discounted_y_sims = [sum(beta^(t-1) * y_sim[t] for t in 1:T) for y_sim in y_sims]
simulated_EPV = mean(discounted_y_sims)
10.698341097762118

Part (a)

Consider the following formula, which comes from the Finite Markov Chain notes,

\[ v_0 = \mathbb{E}_0\left\{\sum_{t=0}^{\infty}\beta^t y_t \, \Big| \, y_0 = 0.2\right\} \]

 (I - beta * P) \ y
2-element Vector{Float64}:
 10.756756756756769
 15.621621621621633

Should we expect the simulation above to roughly match this calculation? Where would the sources of uncertainty be?

Answer:

(double click to edit your answer)

Part (b)

Consider a new problem along the lines of this code.

There are 4 possible states of period payoffs (a random variable \(Y_t\)) for a new firm, with a time-invariant probability to transition between them:

  1. While in the R&D state (R), the firm earns a payoff \(y_R < 0\) (i.e., a loss), which must be maintained to continue research.
  2. Once an innovation has occurred, profits can be either high (H) or low (L), with payoffs \(y_H > y_L > 0\) respectively.
  3. There is always the possibility that a competitor drives the firm out of business (X), in which case the firm gets a payoff of \(y_X = 0\) because it has exited.

The transition probabilities are time-invariant and are as follows:

  1. In the R state, with probability \(\lambda \in (0,1)\) each period the firm makes a breakthrough and enters the H state. There is no direct movement from R to L or X.
  2. In H, the firm has a probability \(\mu \in (0,1)\) of moving to L. There is no direct movement to R or X.
  3. In L, the firm has a probability \(\eta \in (0,1)\) of making another good discovery and transitioning back to H. Otherwise, there is a probability \(\alpha \in (0, 1 - \eta)\) of a competitor driving it out of business to X.
  4. If the firm enters X (which can only occur from L), it permanently exits.

An investor is considering how to value a firm with these cash flows. Firms always start in the R state. Modify our code above to implement this new Markov chain and simulate a path of payoffs. Parameter values are provided below

# parameter values
lambda = 0.2
mu = 0.3
eta = 0.4
alpha = 0.05
beta = 0.9
y_R = -0.2
y_L = 1.0
y_H = 2.0
y_X = 0.0

# Old code below, modify to implement the new model/markov chains
# clearly describe the ordering of your states.

# sample code for simpler problem
P = [0.8 0.2; 0.1 0.9] # transition matrix, consistent with ordering of payoffs
y = [0.2, 2.0]  # payoffs in each state
mc = MarkovChain(P, y) # create a MarkovChain object with those state values and the transition matrix
init = 1  # i.e the state index, not the initial payoff value
T = 100
y_sim = simulate(mc, T; init) # simulate T periods of the Markov chain starting in state = init
plot(1:T, y_sim, xlabel = L"t", label = L"y_t", title = "Simulated path of payoffs")

Use this new Markov chain to simulate a number of cash flow paths and calculate their present discounted value

N = 1000
T = 100
beta = 0.9
y_sims = [simulate(mc, T; init) for _ in 1:N]
discounted_y_sims = [sum(beta^(t-1) * y_sim[t] for t in 1:T) for y_sim in y_sims]
simulated_EPV = mean(discounted_y_sims)
8.440139948970018

And investigate how well it does vs. the full solution

v = (I - beta * P) \ y
v[1] # i.e., first state
8.464265795554143

Does this simulation do a better job of matching the explicit solution than the simulation in part (a)? If so, any idea why?

Answer:

(double click to edit your answer)

Part (c)

Now consider the case where research is more costly, and \(y_R = -2.575\), but otherwise the parameters are the same.

# modify code here

Is this a good investment?

Answer:

(double click to edit your answer)

Formulas

Use the following formulas as needed. Formulas are intentionally provided without complete definitions of each variable or conditions on convergence, which you should study using your notes.
YOU WILL NOT BE EXPECTED TO USE THE MAJORITY OF THESE FORMULAS - BUT SOME MAY HELP PROVIDE INTUITION

General and Stochastic Process Formulas

  • Partial Geometric Series: \(\sum_{t=0}^T c^t = \frac{1 - c^{T+1}}{1-c}\)
  • Geometric Series: \(\sum_{t=0}^{\infty} c^t = \frac{1}{1-c}\)
  • PDV: \(p_t = \sum_{j = 0}^{\infty}\beta^j y_{t+j}\)
  • Recursive Formulation of PDV: \(p_t = y_t + \beta p_{t+1}\)
  • Univariate Linear Difference Equation: \(x_{t+1} = a x_t + b\), with solution \(x_t = b \frac{1 - a^t}{1 - a} + a^t x_0\)
  • Linearity of Normals: if \(X \sim \mathcal{N}(\mu_X, \sigma_X^2)\) and \(Y \sim \mathcal{N}(\mu_Y, \sigma_Y^2)\), then \(a X + b Y \sim \mathcal{N}(a \mu_X + b \mu_Y, a^2 \sigma_X^2 + b^2 \sigma_Y^2)\)
  • Special Case: \(Y \sim \mathcal{N}(\mu, \sigma^2)\) then \(Y = \mu + \sigma X\) for \(X \sim \mathcal{N}(0,1)\)
  • Partial Sums: for \(X_1,\ldots\) IID with \(\mu \equiv \mathbb{E}(X)\), \(\bar{X}_n \equiv \frac{1}{n} \sum_{i=1}^n X_i\)
  • Strong LLN: \(\mathbb{P} \left( \lim_{n \rightarrow \infty} \bar{X}_n = \mu \right) = 1\)
  • AR(1) Process: \(X_{t+1} = a X_t + b + c W_{t+1}\) with \(W_{t+1} \sim \mathcal{N}(0, 1)\)
  • AR(1) Stationary Distribution: \(X_{\infty} \sim \mathcal{N}\left(\frac{b}{1 - a}, \frac{c^2}{1 - a^2}\right)\)
  • AR(1) Evolution: if \(X_t \sim \mathcal{N}(\mu_t, v_t)\), then \(X_{t+1} \sim \mathcal{N}(a \mu_t + b, a^2 v_t + c^2)\); recursively, \(\mu_{t+1} = a \mu_t + b\), \(v_{t+1} = a^2 v_t + c^2\)
  • Mean Ergodicity: given stationary \(X_{\infty}\), \(\lim_{T\to\infty}\frac{1}{T} \sum_{t=1}^T X_t = \mathbb{E}[X_{\infty}]\)
  • ARCH(1): \(X_{t+1} = a X_t + \left(\beta + \gamma X_t^2\right)^{1/2} W_{t+1}\) with \(W_{t+1} \sim \mathcal{N}(0, 1)\)
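A numerical spot-check of two of these formulas (a sketch with arbitrary parameter values, using only packages already loaded above):

```julia
using Statistics, Random

# Partial geometric series: sum_{t=0}^T c^t = (1 - c^(T+1)) / (1 - c)
c, T = 0.9, 50
partial_sum = sum(c^t for t in 0:T)
closed_form = (1 - c^(T + 1)) / (1 - c)

# AR(1) stationary distribution: X' = a X + b + c_w W, W ~ N(0, 1)
function simulate_ar1(a, b, c_w, x0, T)
    xs = zeros(T)
    X = x0
    for t in 1:T
        X = a * X + b + c_w * randn()
        xs[t] = X
    end
    return xs
end

a, b, c_w = 0.8, 1.0, 0.5
mu_inf = b / (1 - a)        # stationary mean
v_inf = c_w^2 / (1 - a^2)   # stationary variance
Random.seed!(42)
xs = simulate_ar1(a, b, c_w, mu_inf, 200_000)
```

The sample mean and variance of `xs` should be close to `mu_inf` and `v_inf`.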

Inequality and Power Law Formulas

  • Kesten Process, for \(a_{t+1}, y_{t+1}\) IID: \(X_{t+1} = a_{t+1} X_t + y_{t+1}\)
  • Key Conditions for Kesten Stationarity: \(\mathbb{E}(\log a_t) < 0\) and \(\mathbb{E}(y) < \infty\)
  • Counter-CDF: \(\mathbb{P}(X > x) = 1 - \mathbb{P}(X \leq x)\)
  • CCDF, with density \(f\) and CDF \(F\): \(\int_{x}^{\infty} f(s)\, ds = 1 - F(x)\)
  • Pareto PDF, with minimum \(x_m\) and tail parameter \(\alpha\): \(f(x) = \frac{\alpha x_m^\alpha}{x^{\alpha+1}}\) for all \(x \geq x_m\)
  • Pareto CDF and CCDF: \(F(x) = 1 - \left(\frac{x}{x_m}\right)^{-\alpha}\), \(1 - F(x) = \left(\frac{x}{x_m}\right)^{-\alpha}\)
  • Log-Log Plot, with CDF \(F(x)\): \(\log(x)\) vs. \(\log(1 - F(x))\); for Pareto, \(\log(1 - F(x)) = \alpha \log(x_m) - \alpha \log(x)\)
  • Power-law Tail: \(\mathbb{P}(X > x) \propto x^{-\alpha}\) for large \(x\); with the CCDF, \(1 - F(x) \propto x^{-\alpha}\) for large \(x\)
  • Empirical CDF: \(\hat{F}(x) = \frac{\text{number of observations } X_n \leq x}{N}\)
  • Tail Parameter Regression, where \(\alpha \approx -a\): \(\log(1 - \hat{F}(x_i)) = b + a \log(x_i) + \epsilon_i\)
  • Quantile Function: \(x = F^{-1}(p) \equiv Q(p)\)
  • Lorenz Curve: \(L(p) = \frac{\int_{0}^{p} Q(s) ds}{\int_{0}^{1} Q(s) ds}\)
  • CDF with Ordered Data \(v_1, \ldots, v_n\): \(F(v_i) \equiv F_i = \frac{i}{n}\)
  • Lorenz with Ordered Data: \(S_i = \frac{1}{n}\sum_{j=1}^i v_j\), \(L(v_i) \equiv L_i = \frac{S_i}{S_n}\)
  • Gini Coefficient: the area between the Lorenz curve and the line of equality
  • Gini with Ordered Data: \(G = \frac{2\sum_{i=1}^n i v_i}{n \sum_{i=1}^n v_i} - \frac{n+1}{n}\)
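For instance, the ordered-data Gini formula can be checked against a known closed form (a sketch; for a Pareto distribution with tail parameter \(\alpha > 1\), the Gini coefficient is \(1/(2\alpha - 1)\)):

```julia
using Random

Random.seed!(42)
alpha, x_m, n = 3.0, 1.0, 100_000
u = rand(n)
v = sort(x_m .* (1 .- u) .^ (-1 / alpha))  # Pareto draws via inverse CDF, ordered
# Gini with ordered data, as in the formula above
gini = 2 * sum(i * v[i] for i in 1:n) / (n * sum(v)) - (n + 1) / n
gini_pareto = 1 / (2 * alpha - 1)          # closed form for the Pareto case
```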

Solow and Stochastic Growth Formulas

  • Production: \(Y_t = z_t F(K_t, N_t)\)
  • Constant Returns to Scale: \(F(\alpha K, \alpha N) = \alpha F(K, N) \quad \forall \alpha > 0\)
  • Production per Capita: \(k_t \equiv K_t/N_t\), \(f(k_t) \equiv F(k_t, 1)\), so \(Y_t/N_t = z_t f(k_t)\)
  • Marginal Product of Capital: \(z_t \frac{\partial F(K_t, N_t)}{\partial K_t}\); with \(f(k) = k^\alpha\), \(z_t \frac{\partial F(K_t, N_t)}{\partial K_t} = \alpha z_t k_t^{\alpha - 1}\)
  • Consumption/Investment: \(C_t + X_t = Y_t \equiv z_t F(K_t, N_t)\)
  • Capital Accumulation: \(K_{t+1} = (1 - \delta) K_t + X_t\)
  • Constant Population Growth: \(N_{t+1} = (1+g_N) N_t\)
  • Per-Capita Evolution: \(k_{t+1} = \frac{1}{1+g_N} \left[(1-\delta) k_t + s z_t f(k_t)\right]\)
  • Steady State: \((g_N + \delta)\bar{k} = s \bar{z} f(\bar{k})\); with \(f(k) = k^\alpha\), \(\bar{k} = \left(\frac{s \bar{z}}{g_N + \delta}\right)^{\frac{1}{1-\alpha}}\)
  • Real Rental Rate of Capital: \(r_t = z_t f'(k_t)\)
  • Real Wages: \(w_t = (1-\alpha) z_t f(k_t)\)
  • Stochastic Growth Capital Evolution: \(k_{t+1} = (1-\delta) k_t + s Z_t f(k_t)\), given \(k_0\)
  • Stochastic Growth Productivity Process: \(\log Z_{t+1} = a \log Z_t + b + c W_{t+1}\) with \(W_{t+1} \sim \mathcal{N}(0, 1)\)
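The closed-form steady state with \(f(k) = k^\alpha\) can be verified against the implicit steady-state condition (a sketch with arbitrary parameter values):

```julia
# Steady state: (g_N + delta) * k_bar = s * z_bar * f(k_bar), with f(k) = k^alpha
s, z_bar, g_N, delta, alpha = 0.2, 1.0, 0.01, 0.1, 0.33
k_bar = (s * z_bar / (g_N + delta))^(1 / (1 - alpha))
residual = (g_N + delta) * k_bar - s * z_bar * k_bar^alpha  # should be ~0
```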

Linear State Space Models

  • LSS Model: \(x_{t+1} = A x_t + C w_{t+1}\), \(y_t = G x_t\), \(w_{t+1} \sim \mathcal{N}(0,I)\)
  • Forecast of \(x_{t+1}\): \(x_{t+1} \sim \mathcal{N}(\mu_{t+1}, \Sigma_{t+1})\), where \(\mu_{t+1} = A \mu_t\) and \(\Sigma_{t+1} = A \Sigma_t A^{\top} + C C^{\top}\)
  • Forecast of \(y_{t+1}\): \(y_{t+1} \sim \mathcal{N}(G \mu_{t+1}, G \Sigma_{t+1} G^{\top})\)
  • Expected \(x_{t+j}\): \(\mathbb{E}_t x_{t+j} = A^j \mu_t\)
  • Expected \(y_{t+j}\): \(\mathbb{E}_t y_{t+j} = G A^j \mu_t\)
  • PDV of \(y_{t+j}\): \(\mathbb{E}_t \sum_{j=0}^{\infty} \beta^j y_{t+j} = G(I - \beta A)^{-1} \mu_t\)
  • Stationary Distribution: \(x_{\infty} \sim \mathcal{N}(\mu_{\infty}, \Sigma_{\infty})\), with \(\mu_{\infty} = A \mu_{\infty}\) and \(\Sigma_{\infty} = A \Sigma_{\infty} A^{\top} + C C^{\top}\)
  • Noisy Observation: \(y_t = G x_t + H v_t\), \(v_t \sim \mathcal{N}(0, I)\)
  • Kalman Filter: \(K_t = A \Sigma_t G^{\top} (G \Sigma_t G^{\top} + H H^{\top})^{-1}\), \(\mu_{t+1} = A \mu_t + K_t (y_t - G \mu_t)\), \(\Sigma_{t+1} = A \Sigma_t A^{\top} - K_t G \Sigma_t A^{\top} + C C^{\top}\)
  • Forecast Error: \(FE_{t,t+1} \equiv x_{t+1} - \mathbb{E}_t[x_{t+1}]\)
  • Var of LSS Forecast Error: \(\mathbb{V}_t(FE_{t+1}) = G C C^{\top} G^{\top} + H H^{\top}\)
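The stationary covariance equation \(\Sigma_{\infty} = A \Sigma_{\infty} A^{\top} + C C^{\top}\) can be solved by simply iterating the forecast recursion until it converges (a sketch with a hypothetical stable system):

```julia
using LinearAlgebra

# Iterate Sigma' = A Sigma A' + C C' to approximate Sigma_inf
function stationary_sigma(A, C; iters = 2_000)
    Sigma = zeros(size(A))
    for _ in 1:iters
        Sigma = A * Sigma * A' + C * C'
    end
    return Sigma
end

A = [0.9 0.1; 0.0 0.5]   # hypothetical stable A (eigenvalues inside unit circle)
C = [0.5 0.0; 0.0 0.3]
Sigma_inf = stationary_sigma(A, C)
residual = Sigma_inf - (A * Sigma_inf * A' + C * C')  # ~0 at the fixed point
```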

Permanent Income Model

  • Period-By-Period Budgets: \(F_{t+1} = R(F_t + y_t - c_t)\)
  • Lifetime Budget Constraint: \(\mathbb{E}_t\left[\sum_{j=0}^{\infty}\frac{c_{t+j}}{R^j}\right] = \mathbb{E}_t\left[\sum_{j=0}^{\infty} \frac{y_{t+j}}{R^j}\right] + F_t\)
  • PIH Decision Problem: \({\scriptsize\begin{aligned} \max_{\{c_{t+j}\}_{j=0}^\infty} & \mathbb{E}_t\left[\sum_{j=0}^\infty \beta^j u(c_{t+j})\right] \\ \text{s.t.} \,& \mathbb{E}_t\left[\sum_{j=0}^{\infty}R^{-j} (c_{t+j}-y_{t+j})\right] = F_t \end{aligned}}\)
  • FONCs: \(\begin{aligned} u'(c_t) &= \beta R\, \mathbb{E}_t[u'(c_{t+1})] \\ F_{t+1} &= R(F_t + y_t - c_t) \\ 0 &= \mathbb{E}_0\left[\lim_{j \to \infty} \beta^j F_{t+j}\right] \end{aligned}\)
  • Solution for \(\beta R = 1\): \({\scriptsize c_t = \bar{c} = (1-\beta)\left[\sum_{j=0}^{\infty}\beta^j y_{t+j} + F_t\right]}\)
  • FOC for Stochastic with \(\beta R = 1\): \(u'(c_t) = \mathbb{E}_t[u'(c_{t+1})]\)
  • FOC for Quadratic Utility, \(\beta R = 1\): \(c_t = \mathbb{E}_t[c_{t+1}]\)
  • Solution for Quadratic Utility, \(\beta R = 1\): \({\scriptsize c_t = (1-\beta)\left[\mathbb{E}_t\left[\sum_{j=0}^\infty \beta^j y_{t+j}\right] + F_t\right]}\)
  • Solution for LSS with \(\beta R = 1\): \(\begin{aligned} c_t &= (1-\beta)\left[G (I - \beta A)^{-1} x_t + F_t\right] \\ F_{t+1} &= F_t + G(I - \beta A)^{-1}(I - A)x_t \end{aligned}\)
  • Change in Consumption for \(\beta R = 1\): \(\begin{aligned} c_{t+1} - c_t &= (1-\beta)\sum_{j=0}^\infty \beta^j \left[\mathbb{E}_{t+1}[y_{t+j+1}] - \mathbb{E}_t[y_{t+j+1}]\right] \\ &= (1-\beta)G(I - \beta A)^{-1} C w_{t+1} \end{aligned}\)
  • Stacked LSS: \(\begin{aligned} \begin{bmatrix} x_{t+1} \\ F_{t+1} \end{bmatrix} &= \begin{bmatrix} A & \mathbf{0} \\ G(I - \beta A)^{-1}(I - A) & 1 \end{bmatrix} \begin{bmatrix} x_t \\ F_t \end{bmatrix} + \begin{bmatrix} C \\ 0 \end{bmatrix} w_{t+1} \\ \begin{bmatrix} y_t \\ c_t \end{bmatrix} &= \begin{bmatrix} G & 0\\ (1-\beta)G(I - \beta A)^{-1} & 1-\beta \end{bmatrix} \begin{bmatrix} x_t \\ F_t \end{bmatrix} \end{aligned}\)

Markov Chains

  • PMF over Finite States: \(\pi_t = \left(\mathbb{P}\left[X_t = X_1\right],\ldots, \mathbb{P}\left[X_t = X_N\right]\right)\)
  • Transition Matrix: \(P\)
  • Forecast: \(\pi_{t+j} = \pi_t P^j\)
  • Conditional Expectation: \(\mathbb{E}[X_{t+j} \mid X_t] = \sum_{i=1}^N x_i \pi_{t+j,i} = G \cdot (\pi_t P^j) = G (\pi_t P^j)^{\top}\)
  • Expected PDV: \(p(X_t) = \mathbb{E}\left[\sum_{j=0}^{\infty} \beta^j X_{t+j}\mid X_t\right] = G(I - \beta P^{\top})^{-1} \pi_t^{\top}\)
  • Possible Steady States: \(\pi^{*} = \pi^{*} P\)
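The expected-PDV formula can be cross-checked against the equivalent conditional system \(v = (I - \beta P)^{-1} y\) used in the lectures (a sketch with a made-up chain; here \(G\) is the row vector of state payoffs):

```julia
using LinearAlgebra

beta = 0.95
P = [0.9 0.1; 0.2 0.8]   # hypothetical transition matrix
y = [1.0, 2.0]           # payoff in each state; G = y' as a row vector
v = (I - beta * P) \ y   # v[i] = E[sum_j beta^j y_{t+j} | X_t = state i]
pi_1 = [1.0 0.0]         # degenerate distribution on state 1
pdv_1 = (y' * ((I - beta * P') \ pi_1'))[1]  # G (I - beta P^T)^{-1} pi_t^T
```

Both routes should give the same PDV conditional on starting in state 1.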

Consumption-based Asset Pricing

  • Consumer’s Problem: \(\begin{aligned} \max_{\{c_{t+j}, \pi_{t+j+1}\}_{j=0}^{\infty}} &\; \mathbb{E}_t\left[\sum_{j=0}^\infty \beta^j u(c_{t+j})\right] \\\\ \text{s.t. } &\; c_{t+j} + p_{t+j} \pi_{t+j+1} = \pi_{t+j} (d_{t+j} + p_{t+j}),\quad \forall j \geq 0 \end{aligned}\)
  • Dynamic Programming Formulation: \(V(\pi, d) = \max_{\pi'} \left\{ u\big(\pi(d + p(d)) - \pi' p(d)\big) + \beta \mathbb{E}\big[V(\pi', d') \mid d\big] \right\}\)
  • FONC: \(p(d) = \mathbb{E} \left[ \beta \frac{u'(c')}{u'(c)}(d' + p(d')) \mid d \right]\)
  • Sequential Notation: \(p_t = \mathbb{E}_t \left[ m_{t+1} (d_{t+1} + p_{t+1}) \right]\)
  • Riskless Asset: \(p_t^{RF} \equiv \frac{1}{R_t} = \mathbb{E}_t \left[ \beta \frac{u'(c_{t+1})}{u'(c_t)} \right]\)
  • Conditional Covariances: \(\mathbb{E}_t(x_{t+1} y_{t+1}) = \text{cov}_t(x_{t+1}, y_{t+1}) + \mathbb{E}_t x_{t+1} \cdot \mathbb{E}_t y_{t+1}\)
  • Asset Pricing Decomposition: \(\begin{aligned} p_t &= \mathbb{E}_t \left[ m_{t+1} (d_{t+1} + p_{t+1}) \right] \\\\ &= \mathbb{E}_t m_{t+1} \cdot \mathbb{E}_t (d_{t+1} + p_{t+1}) + \text{cov}_t(m_{t+1}, d_{t+1} + p_{t+1}) \end{aligned}\)
  • Dividend Growth Rates: \(d_{t+1} = G(X_{t+1}) d_t\)
  • Price to Dividend Ratio: \(v(X_t) = \mathbb{E} \left[ m(X_{t+1}) G(X_{t+1}) \left(1 + v(X_{t+1})\right) \mid X_t \right]\)
  • Price to Dividend Ratio with Markov Chain: \(v_i = \sum_{j=1}^N m(X_j) G(X_j) (1 + v_j) P_{ij}\)
  • Risk-Neutral: \(m_{t+1} = \beta\); \(K_{ij} \equiv G(x_j) P_{ij}\), then \(v = (I - \beta K)^{-1} \beta K \mathbb{1}\)
  • CRRA SDF: \(u(c) = \frac{c^{1 - \gamma} - 1}{1 - \gamma}\), \(m_{t+1} = \beta \left( \frac{c_{t+1}}{c_t} \right)^{-\gamma} = \beta G_{t+1}^{-\gamma}\)
  • Price-Dividend Ratio for CRRA: \(J_{ij} \equiv G(x_j)^{1 - \gamma} P_{ij}\), \(v = (I - \beta J)^{-1} \beta J \mathbb{1}\)
  • Consol: pays \(d_{t+1} = \zeta\) for all \(t\)
  • Price of Consol with CRRA: \(M_{ij} \equiv P_{ij} G(X_j)^{-\gamma}\), \(p = (I - \beta M)^{-1} \beta M \zeta \mathbb{1}\)
  • Bellman for Perpetual Option on a Consol: \(w(X_t, p_S) = \max \left\{ \mathbb{E}_t\left[ m(X_{t+1}) w(X_{t+1}, p_S) \right],\; p(X_t) - p_S \right\}\)
  • Option on Consol with Finite-State Markov Process: \(M_{ij} \equiv P_{ij} G(X_j)^{-\gamma}\), \(w = \max \left\{ \beta M w,\; p - p_S \mathbb{1} \right\}\)
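As a final check, the risk-neutral price-dividend formula can be verified against its fixed-point equation \(v = \beta K(\mathbb{1} + v)\) (a sketch with hypothetical values for \(P\) and the growth process):

```julia
using LinearAlgebra

beta = 0.9
P = [0.7 0.3; 0.4 0.6]   # hypothetical transition matrix
G_vals = [1.02, 0.98]    # hypothetical gross dividend growth per state
K = P .* G_vals'         # K[i, j] = G(x_j) * P[i, j]
@assert maximum(abs, eigvals(K)) < 1 / beta   # needed for the geometric sum
v = (I - beta * K) \ (beta * K * ones(2))     # price-dividend ratio by state
residual = v - beta * K * (ones(2) .+ v)      # ~0 at the fixed point
```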