
A few more past exam questions

(Multiple Choice)
Mark at most one answer per question; choose the answer you find most appropriate.
A correct answer scores +2; a wrong answer (or marking multiple answers) scores −1/2.

In optimization problems with equality constraints:


a the number of constraints equals the number of choice variables.
b the number of constraints may equal the number of choice variables.
c the number of constraints must exceed the number of choice variables.
d the number of constraints may exceed the number of choice variables.
e the number of constraints must be smaller than the number of choice variables.

In optimization problems with inequality constraints:


a the number of constraints equals the number of choice variables.
b the number of constraints may equal the number of choice variables.
c the number of constraints must exceed the number of choice variables.
d the number of constraints must be smaller than the number of choice variables.
e the number of choice variables must be larger than the number of constraints.

In optimization problems with inequality constraints, a sufficient set of conditions for the existence of a maximum is:
a f (.) convex and g(.) convex.
b f (.) convex and g(.) concave.
c f (.) concave and g(.) convex.
d f (.) concave and g(.) concave.
e none of the above.

In optimization problems with inequality constraints, a sufficient set of conditions
for a local maximum to be a global maximum is:
a f (.) convex and g(.) convex.
b f (.) convex and g(.) concave.
c f (.) concave and g(.) convex.
d f (.) concave and g(.) concave.
e none of the above.

In optimization problems with inequality constraints, the value of the Lagrange function at an optimum:
a equals the value of the objective function.
b may be smaller than the value of the objective function.
c is always smaller than the value of the objective function.
d may be greater than the value of the objective function.
e is always greater than the value of the objective function.
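
To see why, recall that at a Kuhn-Tucker optimum complementary slackness forces λ g(x∗) = 0, so L = f − λ g collapses to the value of f. A minimal Python check on a made-up problem (not from the exam):

    # Hypothetical example: maximize f(x) = -x**2 subject to g(x) = 1 - x <= 0.
    # The Kuhn-Tucker point is x* = 1 with multiplier lam* = 2 (from
    # dL/dx = -2x + lam = 0), and g(x*) = 0, so lam* * g(x*) = 0.
    f = lambda x: -x**2
    g = lambda x: 1 - x
    x_star, lam_star = 1.0, 2.0
    L = f(x_star) - lam_star * g(x_star)
    print(L, f(x_star))  # -1.0 -1.0: the Lagrangian equals the objective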

In optimization problems with inequality constraints, the Kuhn-Tucker conditions are:
a sufficient conditions for (x0, ..., xN ) to solve the optimization problem.
b necessary conditions for (x0, ..., xN ) to solve the optimization problem.
c sufficient but not necessary conditions for (x0, ..., xN ) to solve the optimization problem.
d neither sufficient nor necessary conditions for (x0, ..., xN ) to solve the optimization problem.
e none of the above.

Consider a square matrix M. If the determinant of M is zero, then:


a M is a symmetric matrix.
b the rows of M add up to zero.
c the sum of eigenvalues of M equals zero.
d one row is linearly dependent on the other rows of M.
e M is nonsingular.
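
A quick numpy illustration of the linear-dependence point, with made-up numbers:

    import numpy as np

    # The third row is the sum of the first two, so the rows are linearly
    # dependent and det M = 0. Note that det M = 0 means the *product* of
    # the eigenvalues is zero (at least one eigenvalue is 0), not their sum.
    M = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0],
                  [5.0, 7.0, 9.0]])
    print(np.isclose(np.linalg.det(M), 0.0))  # True
    print(np.linalg.eigvals(M))               # one eigenvalue is ~0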

If e is an eigenvalue of matrix M, then:

a e is real valued.
b e equals the trace of M.
c e x = M · I.
d e = 2⁻¹ {tr² ± √(tr − 4 det)}.
e e = 2⁻¹ {tr ± √(tr² − 4 det)}.
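
For a 2 × 2 matrix the characteristic polynomial is e² − tr(M)·e + det(M) = 0, which is where the quadratic formula in the options comes from. A small numpy sketch with made-up entries, including a case with complex eigenvalues:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    tr, det = np.trace(M), np.linalg.det(M)
    disc = np.sqrt(tr**2 - 4 * det)
    print((tr - disc) / 2, (tr + disc) / 2)  # quadratic-formula eigenvalues
    print(np.sort(np.linalg.eigvals(M)))     # same values from numpy

    # Eigenvalues of a real matrix need not be real:
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    print(np.linalg.eigvals(R))  # [0.+1.j, 0.-1.j]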

If e1 and e2 are the eigenvalues of a 2 × 2 matrix M, then:

a det M = e1 e2.
b det M = e1 + e2.
c det M = e1 − e2.
d trace M = e1 e2.
e trace M = e1 − e2.
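
The underlying identities hold for any square matrix: det M equals the product of the eigenvalues and trace M equals their sum. A quick numerical check with a made-up matrix:

    import numpy as np

    M = np.array([[4.0, 2.0],
                  [1.0, 3.0]])
    e = np.linalg.eigvals(M)
    print(np.isclose(e.prod(), np.linalg.det(M)))  # True: det = e1*e2
    print(np.isclose(e.sum(), np.trace(M)))        # True: trace = e1+e2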

Consider the Hessian, H, of f (x1, x2). If f (x1, x2) is concave, then the eigenvalues
of H are as follows:
a e1 ≥ e2 ≥ 0.
b e1 ≤ 0, e2 ≤ 0.
c e1 ≥ 0, e2 ≤ 0.
d e1 ≤ 0, e2 ≥ 0.
e e1 < 0, e2 > 0.
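
Recall that a twice-differentiable function on a convex domain is concave exactly when its Hessian is negative semidefinite, i.e. all eigenvalues are ≤ 0 at every point. A tiny sketch with the made-up concave function f(x1, x2) = −x1² − x1 x2 − x2²:

    import numpy as np

    # Hessian of f(x1, x2) = -x1**2 - x1*x2 - x2**2 (constant in this case):
    H = np.array([[-2.0, -1.0],
                  [-1.0, -2.0]])
    print(np.linalg.eigvalsh(H))  # [-3., -1.]: both <= 0, so f is concave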

John’s preferences for two goods, x and y, are given by: u(x, y) = x + a ln(y),
where a > 0. The prices of the two goods are: px > 0, py > 0. Let I > 0 be John’s
income, where I < a px. Calculate John’s utility maximizing quantities x∗, y∗. By
employing the Kuhn-Tucker conditions, check whether or not the following possible
results are true. Indicate your results in the table below.

(a) x∗ = y∗ = 0 true O false O
(b) x∗ > 0, y∗ > 0 true O false O
(c) x∗ = 0, y∗ > 0 true O false O
(d) x∗ > 0, y∗ = 0 true O false O

(e) x∗ = .............................. y∗ = ..............................
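
A numerical sanity check of the corner logic is easy: pick any numbers with I < a·px (the values below are made up) and trace utility along the budget line. Along the line, the marginal gain from y, a/y − py/px, stays positive for all affordable y, which pushes the numerical maximum to a corner:

    import numpy as np

    # Made-up parameter values satisfying I < a*px:
    a, px, py, I = 2.0, 1.0, 1.0, 1.0
    y = np.linspace(1e-6, I / py, 100_001)  # feasible y on the budget line
    x = (I - py * y) / px                   # remaining income buys x
    u = x + a * np.log(y)
    k = u.argmax()
    print(x[k], y[k])  # ~0.0 and ~I/py: utility peaks at the x = 0 corner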

Consider u(x, y) = x + a ln(y), with a, x, y > 0. Calculate the Hessian of u(x, y).
Calculate the eigenvalues of the Hessian. Then:

(a) e1 = 0 e2 = 0 true O false O


(b) e1 < 0 e2 > 0 true O false O
(c) e1 < 0 e2 = 0 true O false O
(d) e1 > 0 e2 < 0 true O false O
(e) e1 > 0 e2 = 0 true O false O
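
If you want to double-check a hand computation, sympy can produce the Hessian and its eigenvalues symbolically (a sketch, not part of the exam):

    import sympy as sp

    x, y, a = sp.symbols('x y a', positive=True)
    u = x + a * sp.log(y)
    H = sp.hessian(u, (x, y))
    print(H)              # Matrix([[0, 0], [0, -a/y**2]])
    print(H.eigenvals())  # {0: 1, -a/y**2: 1}: one zero, one negative
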
Cramer’s Rule [15]

Let an economy be given by the following functions and identities: C = C0 + c Y (1 − τ), I = I0, G = τ Y , Y = C + I + G. The variables have the usual meaning. Endogenous variables: C (aggregate consumption), Y (GDP), I (investment), G (government expenditures). Exogenous variables: I0 (exogenous investment level), C0 (autonomous consumption). Parameters: c (marginal propensity to consume), τ (income tax rate).

(a) Use Cramer’s rule to calculate C and Y . [5]

C = ..............................................

Y = ..............................................

(b) For almost all parameter values, we can calculate a unique solution for (C, Y ).
Under which parameter restrictions is the solution for (C, Y ) not unique? [10]

Parameter restriction 1: ..............................................

Parameter restriction 2 (if any): ..............................................
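
The model reduces to two linear equations in the endogenous pair (C, Y): substituting I = I0 and G = τ Y into Y = C + I + G gives (1 − τ)Y − C = I0, alongside C − c(1 − τ)Y = C0. A symbolic Cramer's-rule sketch with sympy for verifying a hand calculation:

    import sympy as sp

    C0, I0, c, tau = sp.symbols('C0 I0 c tau')
    A = sp.Matrix([[1, -c * (1 - tau)],   #  C - c(1-tau) Y = C0
                   [-1, 1 - tau]])        # -C + (1-tau) Y  = I0
    b = sp.Matrix([C0, I0])

    det_A = sp.factor(A.det())
    print(det_A)  # equals (1 - c)*(1 - tau); Cramer's rule needs this nonzero

    A_C, A_Y = A.copy(), A.copy()
    A_C[:, 0] = b                         # replace column 1 to solve for C
    A_Y[:, 1] = b                         # replace column 2 to solve for Y
    print(sp.simplify(A_C.det() / det_A))  # C
    print(sp.simplify(A_Y.det() / det_A))  # Y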

Eigenvalues [15]

A Silicon Valley firm produces an amount y of computer chips, of which a fraction σ is good and can be sold at a price p. The remaining fraction (1 − σ) of the produced chips is defective and cannot be sold. Suppose the firm can control not only its production level but also the quality of its production, as measured by σ. A rise in quality (a higher fraction of good chips), however, is costly. Suppose the total cost function is:

c(y, σ) = α y + (β/2) y² + γ σ.
(a) Derive the profit function of the firm.

π(y, σ) = ..............................................

(b) Derive the eigenvalues (e1, e2) of the Hessian matrix of π(y, σ).

e1 = ..............................................

e2 = ..............................................

(c) Is π(y, σ) concave (convex)?

π(y, σ) is: ........................................................
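
For part (a), revenue is p σ y, since only the good fraction σ of output sells. A sympy sketch for checking parts (b) and (c), not part of the exam itself:

    import sympy as sp

    y, sigma, p, alpha, beta, gamma = sp.symbols(
        'y sigma p alpha beta gamma', positive=True)
    profit = p * sigma * y - (alpha * y + beta / 2 * y**2 + gamma * sigma)
    H = sp.hessian(profit, (y, sigma))
    print(H)              # Matrix([[-beta, p], [p, 0]])
    print(H.eigenvals())  # roots of e**2 + beta*e - p**2 = 0
    print(H.det())        # -p**2 < 0: one eigenvalue of each sign, so the
                          # Hessian is indefinite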

Minors & Concavity [10]

Consider the following function: f(x1, x2) = ln[a x1^α] + ln[x2^β] + b, where x1 > 0, x2 > 0. Using leading principal minors, derive parameter restrictions that guarantee strict concavity of f(x1, x2).

Parameter restriction 1: ..............................................

Parameter restriction 2 (if any): ..............................................
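
Since ln[a x1^α] = ln a + α ln x1 for a, x1 > 0, the Hessian is diagonal, and the leading-principal-minor test for strict concavity requires |H1| < 0 and |H2| > 0. A sympy sketch for checking the algebra:

    import sympy as sp

    x1, x2, a = sp.symbols('x1 x2 a', positive=True)
    alpha, beta, b = sp.symbols('alpha beta b', real=True)
    f = sp.log(a * x1**alpha) + sp.log(x2**beta) + b
    H = sp.simplify(sp.hessian(f, (x1, x2)))
    print(H)            # diag(-alpha/x1**2, -beta/x2**2)
    print(H[0, 0])      # |H1| = -alpha/x1**2: negative iff alpha > 0
    print(H.det())      # |H2| = alpha*beta/(x1*x2)**2: positive iff alpha
                        # and beta share a sign; together: alpha, beta > 0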


The following vector/matrix multiplication (a b) is not defined:

Q dimension of a: n × 1; dimension of b: 1 × n.
Q dimension of a: k × 1; dimension of b: 1 × n.
Q dimension of b: n × 1; dimension of a: 1 × n.
Q dimension of b: n × k; dimension of a: h × n.
Q dimension of b: m × n; dimension of a: m × n.
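
The rule being tested: a product (a b) is defined exactly when the number of columns of a equals the number of rows of b, so both a 1 × n times n × 1 product and an n × 1 times 1 × n outer product are fine. A numpy sketch with made-up shapes:

    import numpy as np

    a = np.ones((2, 3))   # 2 x 3
    b = np.ones((3, 4))   # 3 x 4
    print((a @ b).shape)  # (2, 4): inner dimensions 3 and 3 match

    c = np.ones((2, 4))   # 2 x 4
    try:
        a @ c             # inner dimensions 3 and 2 do not match
    except ValueError as err:
        print('not defined:', err)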

Consider x = (2, 2, 2), and y = (1, 1, 1). If we project y orthogonally onto x, the projection is t x, where:

Q t = 1.
Q t = 2.
Q t = 1/2.
Q t = 1/3.
Q t = 3.
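
The orthogonal projection of y onto x is t·x with coefficient t = (x·y)/(x·x), which is quick to evaluate numerically:

    import numpy as np

    x = np.array([2.0, 2.0, 2.0])
    y = np.array([1.0, 1.0, 1.0])
    t = (x @ y) / (x @ x)  # projection coefficient (x.y)/(x.x)
    print(t)               # 6/12 = 0.5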

Consider a nonlinear programming problem. The complementary slackness condition says:

Q ∂L(λ, x)/∂xi ≠ 0 implies xi ≠ 0.
Q ∂L(λ, x)/∂xi = 0 implies xi ≠ 0.
Q ∂L(λ, x)/∂xi = 0 implies xi = 0.
Q ∂L(λ, x)/∂xi ≠ 0 implies xi = 0.
Q ∂L(λ, x)/∂xi = 0 implies xi < 0.

The famous (Euclidean) norm of x = (2, −2, 3) is:

Q ‖x‖ = √17.
Q ‖x‖ = 25.
Q ‖x‖ = 3.
Q ‖x‖ = 4.
Q none of the above.
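
By definition ‖x‖ = √(x1² + x2² + x3²), which numpy computes directly:

    import numpy as np

    x = np.array([2.0, -2.0, 3.0])
    print(np.linalg.norm(x))      # sqrt(4 + 4 + 9) = sqrt(17), about 4.123
    print(np.sqrt((x**2).sum()))  # same value, straight from the definition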
