Immediately fill in the field above with your name and student number
The exam has XXXXX questions, each with multiple parts, for a total of XXXXX points. You may not finish the exam, so do your best to answer all questions to the extent possible and do not get stuck on any one question.
This exam is closed book and accessing the internet is not permitted
See the formula “sheet” embedded at the end of this notebook for reference
You can use the internal help as required (in the JupyterHub menu, use Settings > Show Contextual Help)
Execute the file to begin, which will also check your setup. To do this in Jupyter, go to Run > Run All Cells in the menu, or use the equivalent buttons
Edit this file directly, and in place as an ipynb file, which we will automatically download at the end of the exam time. In particular:
DO NOT rename this file with your name. It is automatically associated with your Canvas account
DO NOT save-as the file, move it, or export it to PDF or HTML
DO NOT add any additional packages
Save the notebook as you are working
We will only grade what is saved at the end of the exam in this exact file, and it is your responsibility to ensure the file is saved
We will not execute the notebook, so ensure all code, figures, etc. are ready as-is upon saving for submission
Ensure you edit your results in the indicated code blocks or markup blocks, as we will not grade anything outside of those
You will not be judged on code quality directly, but code clarity may be required for us to ensure you understood the problem
If a question requires math, you can put LaTeX inside the cells, but you will not be judged on whether you write LaTeX or plain-text math. It should, however, be clear
```julia
# Packages available
# DO NOT MODIFY OR ADD PACKAGES
using Distributions, Plots, LaTeXStrings, LinearAlgebra, Statistics, Random,
      QuantEcon, NLsolve
```
Short Question 1
What is the definition of a Martingale? Take the following stochastic process
\[
X_{t+1} = a + X_t + \epsilon_{t+1}
\]
for some \(\epsilon_{t+1}\) which is IID. What values of \(a\) and what properties of \(\epsilon_{t+1}\) would make this a martingale? If the variance of \(\epsilon_{t+1}\) is \(\sigma^2\), would you expect there to be a stationary distribution in that case? Why or why not?
Answer:
(double click to edit your answer)
Short Question 2
Given a stochastic process \(X_t\), write a paragraph describing what economists mean by rational expectations. Explain this in the context of an agent making forecasts and the biases in those forecasts.
Answer:
(double click to edit your answer)
Short Question 3
Consider our baseline consumption based asset pricing model with CRRA utility \(u(c) = \frac{c^{1-\gamma}}{1-\gamma}\) with \(\gamma > 0\) and a discount factor of \(\beta \in (0,1)\).
Given a consumption process \(c_t\), the stochastic discount factor (SDF) is then

\[
m_{t+1} = \beta \left( \frac{c_{t+1}}{c_t} \right)^{-\gamma}
\]
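A reminder of the standard consumption-based pricing relation that the SDF feeds into, with \(p_t\) the asset price and \(d_t\) its dividend:

\[
p_t = \mathbb{E}_t \left[ m_{t+1} \left( p_{t+1} + d_{t+1} \right) \right]
\]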
Now imagine that one asset pays dividends \(d_t\) which are positively correlated with \(c_t\), versus another whose dividends are negatively correlated with \(c_t\). In both cases, assume that the expectation of dividends is the same.
Which asset would you expect to have higher prices? Interpret.
Answer:
(double click to edit your answer)
Short Question 4
Consider the case of Short Question 3 but where the agent is risk-neutral (i.e., \(\gamma = 0\) above). Given the assumption that the expected dividends are the same, would you change your answer to the previous question? Interpret.
Answer:
(double click to edit your answer)
Short Question 5
Take the consumer welfare function for a stochastic consumption process \(c_t\) as

\[
\mathbb{E}_0 \left[ \sum_{t=0}^{\infty} \beta^t u(c_t) \right]
\]

where \(u(c)\) is assumed to be strictly concave and increasing.
Briefly explain the two types of consumption smoothing incentives that occur in these cases.
Answer:
(double click to edit your answer)
Short Question 6
In our standard search model, with a probability \(\alpha\) of losing the job and a probability \(\gamma\) of getting a job offer, briefly interpret the left and right hand sides of the Bellman equation. What is the definition of a reservation wage?
In the permanent income model, we had consumers face a fixed gross interest rate \(R\) and an exogenously given income process \(y_t\). The consumer's problem was to maximize

\[
\mathbb{E}_0 \left[ \sum_{t=0}^{\infty} \beta^t u(c_t) \right]
\]

subject to their budget constraint.
Interpret this expression in terms of information sets and martingales.
Answer:
(double click to edit your answer)
Short Question 9
Take a Markov Chain with two states and a transition matrix of
\[
P = \begin{bmatrix} a & 1-a \\ 1-b & b \end{bmatrix}
\]
What is a sufficient condition for this to have a unique stationary distribution? Give examples of \(a\) and \(b\) for 2 different types of failure of a unique stationary distribution and give some intuition for why they fail.
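Hint: candidate values of \(a\) and \(b\) can be checked numerically. When it is unique, the stationary distribution solves \(\pi^\top P = \pi^\top\) with \(\sum_i \pi_i = 1\); a minimal sketch (the values of `a` and `b` below are arbitrary illustrations, not the answer):

```julia
using LinearAlgebra

# Stationary distribution of the two-state chain for illustrative a, b
a, b = 0.7, 0.4
P = [a (1 - a); (1 - b) b]
# Solve pi' P = pi' together with the normalization sum(pi) = 1
A = [P' - I; ones(1, 2)]
pi_bar = A \ [0.0, 0.0, 1.0] # least-squares solve; exact here, pi_bar ≈ [2/3, 1/3]
```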
Answer:
(double click to edit your answer)
Question 1
Take a variation on the lake model where workers will spend a portion of their lives as students.
The following describes the probabilities
\(\lambda\), the job finding rate for currently unemployed workers transitioning directly to employment
\(\gamma\) is the probability of an unemployed worker going to study or learn new skills (hence \(1 - \lambda - \gamma\) is the probability that an unemployed worker remains unemployed)
\(\alpha\) is the dismissal rate for currently employed workers where they enter unemployment. Employed workers do not directly enter the studying state
\(\delta\) is the probability of a student transitioning to employment from work placement. They never go directly to unemployment
There is no entry or exit from the labor force (i.e. \(g = b = d = 0\))
We normalize the population to be \(N_t = 1\), and define the employment and unemployment rate as \(e_t, u_t\). The proportion of students is \(s_t\). Note \(e_t + u_t + s_t = 1\).
```julia
# Reusable functions, do not modify
function iterate_map(f, x0, T)
    x = zeros(length(x0), T + 1)
    x[:, 1] = x0
    for t in 2:(T + 1)
        x[:, t] = f(x[:, t - 1])
    end
    return x
end
```
iterate_map (generic function with 1 method)
Part (a)
Using a similar method to the previous question, define a function which creates a model with this process, and carefully define the Markov Chain transition matrices. Hint: Is there a difference now between the Markov Chain and the functions for the linear dynamics?
```julia
# OLD CODE FOR REFERENCE, NO NEED TO MODIFY
function lake_model(; lambda = 0.283, alpha = 0.013, b = 0, d = 0,
                    gamma = 0.05, delta = 0.2)
    g = b - d
    A = [(1 - lambda) * (1 - d) + b    (1 - d) * alpha + b
         (1 - d) * lambda              (1 - d) * (1 - alpha)]
    A_hat = A ./ (1 + g)
    x_0 = ones(size(A_hat, 1)) / size(A_hat, 1)
    sol = fixedpoint(x -> A_hat * x, x_0)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    x_bar = sol.zero
    return (; lambda, alpha, b, d, A, A_hat, x_bar)
end

# edit your code here, modifying the old method. You will not need to have a
# separate A and A_hat matrix. Consider if even the P is enough?
# Note the default values for gamma and delta
function new_lake_model(; lambda = 0.283, alpha = 0.013, gamma = 0.05,
                        delta = 0.2,
                        x_0 = ones(2) / 2) # change initial condition for fixed point
    # Modify these below to be consistent with the new model
    # This sets b = d = 0 from above.
    A = [(1 - lambda)    alpha
         lambda          1 - alpha]
    A_hat = A
    sol = fixedpoint(x -> A_hat * x, x_0)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    x_bar = sol.zero
    return (; lambda, alpha, gamma, delta, A_hat, x_bar)
end
```
new_lake_model (generic function with 1 method)
Plot the evolution of the unemployment, employment, and studying rates using your new function.
```julia
# edit your code here
lm = lake_model() # call new function
N_0 = 150  # population
e_0 = 0.90 # initial employment rate
s_0 = 0.04 # initial student rate
u_0 = 1 - e_0 # - s_0 when ready
T = 50 # simulation length
x_0 = [u_0; e_0] # Add your s_0 after your code is functional.
x_ss = lm.x_bar
x_path = iterate_map(x -> lm.A_hat * x, x_0, T - 1)
plt_unemp = plot(1:T, x_path[1, :]; title = "Unemployment rate",
                 color = :blue, label = L"u_t")
hline!(plt_unemp, [x_ss[1]], color = :red, linestyle = :dash, label = L"\pi^{*}_U")
plt_emp = plot(1:T, x_path[2, :]; title = "Employment rate",
               color = :blue, label = L"e_t")
hline!(plt_emp, [x_ss[2]], color = :red, linestyle = :dash, label = L"\pi^{*}_E")
plot(plt_unemp, plt_emp, layout = (1, 2), size = (1200, 400))
```
Part (b)
Here we will investigate how studying impacts the long-run steady state.
First, what are the long-run employment and unemployment rates when gamma = delta = 0? Hint: consider starting the iteration with x_0 = [0.5, 0.5, 0.0] for the fixed point. Otherwise, is there indeterminacy?
```julia
# edit your code here
lm = new_lake_model(;
                    # x_0 = ones(3) / 3
                    ) # hint: can change the initial condition here
@show lm.x_bar # Note: using the default initial condition, an absorbing state of S
```
Question 2
In code, if we define \(J(x, x') \equiv P(x, x') G(x')^{-\gamma}\), then the price can be calculated according to the following code:
```julia
function consol_price(ap)
    (; beta, gamma, mc, zeta, G) = ap
    P = mc.p
    y = mc.state_values'
    M = P .* G.(y) .^ (-gamma)
    @assert maximum(abs, eigvals(M)) < 1 / beta
    # Compute price
    p = (I - beta * M) \ sum(beta * zeta * M, dims = 2)
    return p
end

function consol_model(; beta = 0.96, gamma = 2.0, G = exp, rho = 0.9,
                      sigma = 0.02, N = 25, zeta = 1.0)
    mc = tauchen(N, rho, sigma)
    G_x = G.(mc.state_values)
    return (; beta, gamma, mc, G, G_x, zeta)
end

ap = consol_model(; beta = 0.9)
sol = consol_price(ap)
plot(ap.G_x, sol, xlabel = L"G(x_t)", label = L"p(x_t)", title = "Consol price")
```
Part (a)
Now take this code and plot this figure for \(\gamma = 0\).
```julia
# edit solution here
```
Interpret the results relative to the case with \(\gamma = 2\). If the solution looks strange, consider the scale.
Answer:
(double click to edit your answer)
Part (b)
Take the above code (going back to the baseline \(\gamma\)) and consider an option to purchase this consol at a strike price \(p_s\). This option never expires, and all price volatility comes from the stochastic discount factor volatility (i.e., the \(m_{t+1}\)).
With this, jumping to the recursive formulation and taking the consol price \(p(x)\) as given, the Bellman equation for the option value problem is

\[
w(x) = \max\left\{ \beta \sum_{x'} P(x, x')\, G(x')^{-\gamma}\, w(x'),\; p(x) - p_s \right\}
\]
A code implementation of this, using the consol price above, is,
```julia
# price of perpetual call on consol bond
function call_option(ap, p_s)
    (; beta, gamma, mc, G) = ap
    P = mc.p
    y = mc.state_values'
    M = P .* G.(y) .^ (-gamma)
    @assert maximum(abs, eigvals(M)) < 1 / beta
    p = consol_price(ap)
    # Operator for fixed point, using consol prices
    T(w) = max.(beta * M * w, p .- p_s)
    sol = fixedpoint(T, zeros(length(y), 1); m = 2, iterations = 200)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    return sol.zero
end

ap = consol_model(; beta = 0.9)
p = consol_price(ap)
w = call_option(ap, 40.0)
plot(ap.G_x, p, color = "blue", lw = 2, xlabel = "state", label = "consol price")
plot!(ap.G_x, w, color = "green", lw = 2, label = "value of call option")
```
Repeat this figure, but now set \(p_s = 80\).
```julia
# modify code here
ap = consol_model(; beta = 0.9)
p = consol_price(ap)
w = call_option(ap, 40.0)
plot(ap.G_x, p, color = "blue", lw = 2, xlabel = "state", label = "consol price")
plot!(ap.G_x, w, color = "green", lw = 2, label = "value of call option")
```
Compare the two cases for the \(p_s = 40\) vs. \(p_s = 80\).
Answer:
(double click to edit your answer)
Part (c)
Now consider a new type of option which expires with probability \(\delta\) each period. If it expires, the holder gets the choice to exercise the option right before it expires; otherwise it becomes worthless.
Below is the code to implement our original option above. Modify it for the new option. Hint: almost all of the changes are in the \(T(w)\) definition. The code has been modified relative to the case within the lecture notes to make it easier to change.
```julia
# Code for new parameters already added.
function new_consol_model(; beta = 0.96, gamma = 2.0, G = exp, delta = 0.1,
                          rho = 0.9, sigma = 0.02, N = 25, zeta = 1.0)
    mc = tauchen(N, rho, sigma)
    G_x = G.(mc.state_values)
    return (; beta, gamma, mc, G, G_x, zeta, delta)
end

# modify here
function new_call_option(ap, p_s)
    (; beta, gamma, mc, G, delta) = ap
    P = mc.p
    y = mc.state_values'
    M = P .* G.(y) .^ (-gamma)
    @assert maximum(abs, eigvals(M)) < 1 / beta
    p = consol_price(ap)
    # Original code
    # T(w) = max.(beta * M * w, p .- p_s)
    # Expanded version manually
    T(w) = [max(sum(beta * M[i, j] * w[j] for j in eachindex(y)),
                p[i] - p_s) for i in eachindex(w)]
    sol = fixedpoint(T, zeros(length(y), 1); m = 2, iterations = 200)
    converged(sol) || error("Failed to converge in $(sol.iterations) iter")
    return sol.zero
end

ap = new_consol_model(; beta = 0.9)
p = consol_price(ap)
w = new_call_option(ap, 40.0)
plot(ap.G_x, p, color = "blue", lw = 2, xlabel = "state", label = "consol price")
plot!(ap.G_x, w, color = "green", lw = 2, label = "value of call option")
```
Interpret the differences relative to our baseline case (which is nested when \(\delta = 0.0\)).
Answer:
(double click to edit your answer)
Question 3
The following sample code sets up a model with 2 states, a probability of 0.2 of switching from the first state to the second, a probability of 0.1 of switching from the second to the first, and payoffs of 0.2 and 2.0 in the two states, respectively.
```julia
# sample code for simpler problem
P = [0.8 0.2; 0.1 0.9] # transition matrix, consistent with ordering of payoffs
y = [0.2, 2.0] # payoffs in each state
mc = MarkovChain(P, y) # create a MarkovChain object with those state values and the transition matrix
init = 1 # i.e. the state index, not the initial payoff value
T = 100
y_sim = simulate(mc, T; init) # simulate T periods of the Markov chain starting in state = init
plot(1:T, y_sim, xlabel = L"t", label = L"y_t", title = "Simulated path of payoffs")
```
Part (a)
The next code simulates \(N\) realizations of these payoff paths, then calculates the expected discounted value of these payoffs over the \(T\) periods for a risk-neutral agent with discount factor \(\beta = 0.9\).
```julia
N = 5000
T = 500
beta = 0.9
y_sims = [simulate(mc, T; init) for _ in 1:N]
discounted_y_sims = [sum(beta^(t - 1) * y_sim[t] for t in 1:T) for y_sim in y_sims]
simulated_EPV = mean(discounted_y_sims)
```
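For reference, the corresponding explicit calculation solves the linear system \(v = (I - \beta P)^{-1} y\), as in the full solution used in part (b). A minimal sketch, restating the two-state `P`, `y`, and `beta` from above so it stands alone:

```julia
using LinearAlgebra

# Explicit expected PDV by state: v = (I - beta * P)^(-1) y
P = [0.8 0.2; 0.1 0.9]
y = [0.2, 2.0]
beta = 0.9
v = (I - beta * P) \ y # v[i] = E[sum_t beta^(t-1) y_t | starting in state i]
v[1] # starting state init = 1; approx 10.76, to compare with simulated_EPV
```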
Should we expect the simulation above to roughly match this calculation? Where would the sources of uncertainty be?
Answer:
(double click to edit your answer)
Part (b)
Consider a new problem along the lines of this code.
There are 4 possible states for the period payoffs (a random variable \(Y_t\)) of a new firm, with time-invariant probabilities of transitioning between them:
While in the R&D state (R), the firm gets payoffs of \(y_R < 0\) (i.e., at a loss) which must be maintained to continue research.
Once an innovation has occurred, profits can be either high (H) or low (L), which we denote \(y_H > y_L > 0\) respectively.
There is always the probability that a competitor drives the firm out of business (X), in which case the firm gets a payoff of \(y_X = 0\) because the firm has exited.
The transition probabilities are time-invariant and are as follows:
In the R state, with probability \(\lambda \in (0,1)\) each period they make a breakthrough and enter the H state. There is no direct movement from R to L or X.
In H they have a probability \(\mu \in (0,1)\) of moving to L. No direct movement occurs to R or X.
In L they have a probability \(\eta \in (0,1)\) of making a good discovery again and transitioning back to H. Otherwise, there is an \(\alpha \in (0, 1 - \eta)\) probability of a competitor driving them out of business to X.
If they enter X (which can only occur from L) then the firm permanently exits.
An investor is considering how to value a firm with these cash flows. Firms always start in the R state. Modify our code above to implement this new Markov chain and simulate a path of payoffs. Parameter values are provided below
```julia
# parameter values
lambda = 0.2
mu = 0.3
eta = 0.4
alpha = 0.05
beta = 0.9
y_R = -0.2
y_L = 1.0
y_H = 2.0
y_X = 0.0

# Old code below, modify to implement the new model/Markov chains.
# Clearly describe the ordering of your states.

# sample code for simpler problem
P = [0.8 0.2; 0.1 0.9] # transition matrix, consistent with ordering of payoffs
y = [0.2, 2.0] # payoffs in each state
mc = MarkovChain(P, y) # create a MarkovChain object with those state values and the transition matrix
init = 1 # i.e. the state index, not the initial payoff value
T = 100
y_sim = simulate(mc, T; init) # simulate T periods of the Markov chain starting in state = init
plot(1:T, y_sim, xlabel = L"t", label = L"y_t", title = "Simulated path of payoffs")
```
Use this new Markov chain to simulate a number of cash flows and calculate their present discounted value.
```julia
N = 1000
T = 100
beta = 0.9
y_sims = [simulate(mc, T; init) for _ in 1:N]
discounted_y_sims = [sum(beta^(t - 1) * y_sim[t] for t in 1:T) for y_sim in y_sims]
simulated_EPV = mean(discounted_y_sims)
```
8.440139948970018
And investigate how well it does vs. the full solution
```julia
v = (I - beta * P) \ y
v[1] # i.e., first state
```
8.464265795554143
Does this simulation do a better job of matching the explicit solution than the simulation in part (a)? If so, any idea why?
Answer:
(double click to edit your answer)
Part (c)
Now consider the case where research is more costly, and \(y_R = -2.575\), but otherwise the parameters are the same.
```julia
# modify code here
```
Is this a good investment?
Answer:
(double click to edit your answer)
Formulas
Use the following formulas as needed. Formulas are intentionally provided without complete definitions of each variable or conditions on convergence, which you should study using your notes. YOU WILL NOT BE EXPECTED TO USE THE MAJORITY OF THESE FORMULAS - BUT SOME MAY HELP PROVIDE INTUITION
General and Stochastic Process Formulas
Partial Geometric Series
\(\sum_{t=0}^T c^t = \frac{1 - c^{T+1}}{1-c}\)
Geometric Series
\(\sum_{t=0}^{\infty} c^t = \frac{1}{1 -c }\)
PDV
\(p_t = \sum_{j = 0}^{\infty}\beta^j y_{t+j}\)
Recursive Formulation of PDV
\(p_t = y_t + \beta p_{t+1}\)
Univariate Linear Difference Equation
\(x_{t+1} = a x_t + b\)
Solution
\(x_t = b \frac{1 - a^t}{1 - a} + a^t x_0\)
Linearity of Normals
\(X \sim \mathcal{N}(\mu_X, \sigma_X^2), Y \sim \mathcal{N}(\mu_Y, \sigma_Y^2)\) then \(a X + b Y \sim \mathcal{N}(a \mu_X + b \mu_Y, a^2 \sigma_X^2 + b^2 \sigma_Y^2)\)
Special Case
\(Y \sim N(\mu, \sigma^2)\) then \(Y = \mu + \sigma X\) for \(X \sim N(0,1)\)
Partial Sums \(X_1,\ldots\) IID with \(\mu \equiv \mathbb{E}(X)\)
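Several of the formulas above can be sanity-checked numerically; a minimal sketch (all parameter values below are arbitrary illustrations):

```julia
using Random, Statistics

# Partial and infinite geometric series
let c = 0.9, T = 50
    @assert isapprox(sum(c^t for t in 0:T), (1 - c^(T + 1)) / (1 - c))
    # partial sums approach 1 / (1 - c) as T grows
    @assert abs(sum(c^t for t in 0:1000) - 1 / (1 - c)) < 1e-9
end

# Closed form of the univariate linear difference equation x_{t+1} = a x_t + b
let a = 0.5, b = 1.0, x0 = 2.0, T = 10
    x = x0
    for t in 1:T
        x = a * x + b
    end
    @assert isapprox(x, b * (1 - a^T) / (1 - a) + a^T * x0)
end

# Linearity of normals: a X + b Y has the stated mean and variance
let a = 2.0, b = -1.0, muX = 1.0, sigmaX = 0.5, muY = -2.0, sigmaY = 1.5
    Random.seed!(42)
    n = 1_000_000
    Z = a .* (muX .+ sigmaX .* randn(n)) .+ b .* (muY .+ sigmaY .* randn(n))
    @assert abs(mean(Z) - (a * muX + b * muY)) < 0.01
    @assert abs(var(Z) - (a^2 * sigmaX^2 + b^2 * sigmaY^2)) < 0.05
end
```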