Solutions Manual
Foundations of Mathematical Economics
Michael Carter
November 15, 2002
© 2001 Michael Carter
All rights reserved
Chapter 1: Sets and Spaces
1.1
{ 1, 3, 5, 7 . . . } or { 𝑛 ∈ 𝑁 : 𝑛 is odd }
1.2 Every 𝑥 ∈ 𝐴 also belongs to 𝐵. Every 𝑥 ∈ 𝐵 also belongs to 𝐴. Hence 𝐴, 𝐵 have
precisely the same elements.
1.3 Examples of finite sets are
∙ the letters of the alphabet { A, B, C, . . . , Z }
∙ the set of consumers in an economy
∙ the set of goods in an economy
∙ the set of players in a game.
Examples of infinite sets are
∙ the real numbers ℜ
∙ the natural numbers 𝔑
∙ the set of all possible colors
∙ the set of possible prices of copper on the world market
∙ the set of possible temperatures of liquid water.
1.4 𝑆 = { 1, 2, 3, 4, 5, 6 }, 𝐸 = { 2, 4, 6 }.
1.5 The player set is 𝑁 = { Jenny, Chris }. Their action spaces are
𝐴𝑖 = { Rock, Scissors, Paper }
𝑖 = Jenny, Chris
1.6 The set of players is 𝑁 = {1, 2, . . . , 𝑛 }. The strategy space of each player is the set
of feasible outputs
𝐴𝑖 = { 𝑞𝑖 ∈ ℜ+ : 𝑞𝑖 ≤ 𝑄𝑖 }
where 𝑞𝑖 is the output of dam 𝑖.
1.7 The player set is 𝑁 = {1, 2, 3}. There are 23 = 8 coalitions, namely
𝒫(𝑁 ) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}
There are 2¹⁰ = 1024 coalitions in a ten-player game.
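As an aside, the coalitions of a small game can be enumerated mechanically. The Python sketch below (an illustration of the counting argument, not part of the original solution) lists the power set for 𝑁 = {1, 2, 3} and confirms the counts 2³ = 8 and 2¹⁰ = 1024.

```python
from itertools import chain, combinations

def power_set(players):
    """Return every coalition (subset) of the given player set."""
    players = list(players)
    return list(chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1)))

coalitions = power_set({1, 2, 3})
print(coalitions)                  # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
print(len(coalitions))             # 8 = 2**3
print(len(power_set(range(10))))   # 1024 = 2**10
```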
1.8 Assume that 𝑥 ∈ (𝑆 ∪ 𝑇)ᶜ. That is, 𝑥 ∉ 𝑆 ∪ 𝑇. This implies 𝑥 ∉ 𝑆 and 𝑥 ∉ 𝑇, or 𝑥 ∈ 𝑆ᶜ and 𝑥 ∈ 𝑇ᶜ. Consequently, 𝑥 ∈ 𝑆ᶜ ∩ 𝑇ᶜ. Conversely, assume 𝑥 ∈ 𝑆ᶜ ∩ 𝑇ᶜ. This implies that 𝑥 ∈ 𝑆ᶜ and 𝑥 ∈ 𝑇ᶜ. Consequently 𝑥 ∉ 𝑆 and 𝑥 ∉ 𝑇 and therefore 𝑥 ∉ 𝑆 ∪ 𝑇. This implies that 𝑥 ∈ (𝑆 ∪ 𝑇)ᶜ. The other identity is proved similarly.
1.9
∪_{𝑆∈𝒞} 𝑆 = 𝑁 and ∩_{𝑆∈𝒞} 𝑆 = ∅
Figure 1.1: The relation { (𝑥, 𝑦) : 𝑥² + 𝑦² = 1 }
1.10 The sample space of a single coin toss is { 𝐻, 𝑇 }. The set of possible outcomes in
three tosses is the product
{𝐻, 𝑇} × {𝐻, 𝑇} × {𝐻, 𝑇} = { (𝐻, 𝐻, 𝐻), (𝐻, 𝐻, 𝑇), (𝐻, 𝑇, 𝐻), (𝐻, 𝑇, 𝑇), (𝑇, 𝐻, 𝐻), (𝑇, 𝐻, 𝑇), (𝑇, 𝑇, 𝐻), (𝑇, 𝑇, 𝑇) }
A typical outcome is the sequence (𝐻, 𝐻, 𝑇 ) of two heads followed by a tail.
1.11
𝑌 ∩ ℜ𝑛+ = {0}
where 0 = (0, 0, . . . , 0) is the production plan using no inputs and producing no outputs.
To see this, first note that 0 is a feasible production plan. Therefore, 0 ∈ 𝑌 . Also,
0 ∈ ℜ𝑛+ and therefore 0 ∈ 𝑌 ∩ ℜ𝑛+ .
To show that there is no other feasible production plan in ℜ𝑛+ , we assume the contrary.
That is, we assume there is some feasible production plan y ∈ ℜ𝑛+ ∖ {0}. This implies
the existence of a plan producing a positive output with no inputs. This is technologically infeasible, so that y ∉ 𝑌, contradicting the assumption that y is feasible.
1.12
1. Let x ∈ 𝑉 (𝑦). This implies that (𝑦, −x) ∈ 𝑌 . Let x′ ≥ x. Then (𝑦, −x′ ) ≤
(𝑦, −x) and free disposability implies that (𝑦, −x′ ) ∈ 𝑌 . Therefore x′ ∈ 𝑉 (𝑦).
2. Again assume x ∈ 𝑉 (𝑦). This implies that (𝑦, −x) ∈ 𝑌 . By free disposal,
(𝑦′, −x) ∈ 𝑌 for every 𝑦′ ≤ 𝑦, which implies that x ∈ 𝑉(𝑦′). Therefore 𝑉(𝑦′) ⊇ 𝑉(𝑦).
1.13 The domain of “<” is {1, 2} = 𝑋 and the range is {2, 3} ⫋ 𝑌 .
1.14 Figure 1.1.
1.15 The relation “is strictly higher than” is transitive, antisymmetric and asymmetric.
It is not complete, reflexive or symmetric.
1.16 The following table lists their respective properties.
                  <    ≤    =
reflexive         ×    √    √
transitive        √    √    √
symmetric         ×    ×    √
asymmetric        √    ×    ×
anti-symmetric    √    √    √
complete          √    √    ×
Note that the properties of symmetry and anti-symmetry are not mutually exclusive.
1.17 Let ∼ be an equivalence relation of a set 𝑋 ∕= ∅. That is, the relation ∼ is reflexive,
symmetric and transitive. We first show that every 𝑥 ∈ 𝑋 belongs to some equivalence
class. Let 𝑎 be any element in 𝑋 and let ∼ (𝑎) be the class of elements equivalent to
𝑎, that is
∼(𝑎) ≡ { 𝑥 ∈ 𝑋 : 𝑥 ∼ 𝑎 }
Since ∼ is reflexive, 𝑎 ∼ 𝑎 and so 𝑎 ∈ ∼(𝑎). Every 𝑎 ∈ 𝑋 belongs to some equivalence
class and therefore
𝑋 = ∪_{𝑎∈𝑋} ∼(𝑎)
Next, we show that the equivalence classes are either disjoint or identical, that is
∼(𝑎) ∕= ∼(𝑏) if and only if ∼(𝑎) ∩ ∼(𝑏) = ∅.
First, assume ∼(𝑎) ∩ ∼(𝑏) = ∅. Then 𝑎 ∈ ∼(𝑎) but 𝑎 ∉ ∼(𝑏). Therefore ∼(𝑎) ∕= ∼(𝑏).
Conversely, assume ∼(𝑎) ∩ ∼(𝑏) ∕= ∅ and let 𝑥 ∈ ∼(𝑎) ∩ ∼(𝑏). Then 𝑥 ∼ 𝑎 and by
symmetry 𝑎 ∼ 𝑥. Also 𝑥 ∼ 𝑏 and so by transitivity 𝑎 ∼ 𝑏. Let 𝑦 be any element
in ∼(𝑎) so that 𝑦 ∼ 𝑎. Again by transitivity 𝑦 ∼ 𝑏 and therefore 𝑦 ∈ ∼(𝑏). Hence
∼(𝑎) ⊆ ∼(𝑏). Similar reasoning implies that ∼(𝑏) ⊆ ∼(𝑎). Therefore ∼(𝑎) = ∼(𝑏).
We conclude that the equivalence classes partition 𝑋.
1.18 The set of proper coalitions is not a partition of the set of players, since any player
can belong to more than one coalition. For example, player 1 belongs to the coalitions
{1}, {1, 2} and so on.
1.19
𝑥 ≻ 𝑦 =⇒ 𝑥 ≿ 𝑦 and 𝑦 ∕≿ 𝑥
𝑦 ∼ 𝑧 =⇒ 𝑦 ≿ 𝑧 and 𝑧 ≿ 𝑦
Transitivity of ≿ implies 𝑥 ≿ 𝑧. We need to show that 𝑧 ∕≿ 𝑥. Assume otherwise, that
is assume 𝑧 ≿ 𝑥 This implies 𝑧 ∼ 𝑥 and by transitivity 𝑦 ∼ 𝑥. But this implies that
𝑦 ≿ 𝑥 which contradicts the assumption that 𝑥 ≻ 𝑦. Therefore we conclude that 𝑧 ∕≿ 𝑥
and therefore 𝑥 ≻ 𝑧. The other result is proved in similar fashion.
1.20 asymmetric Assume 𝑥 ≻ 𝑦.
𝑥 ≻ 𝑦 =⇒ 𝑦 ∕≿ 𝑥
while
𝑦 ≻ 𝑥 =⇒ 𝑦 ≿ 𝑥
Therefore
𝑥 ≻ 𝑦 =⇒ 𝑦 ∕≻ 𝑥
transitive Assume 𝑥 ≻ 𝑦 and 𝑦 ≻ 𝑧.
𝑥 ≻ 𝑦 =⇒ 𝑥 ≿ 𝑦 and 𝑦 ∕≿ 𝑥
𝑦 ≻ 𝑧 =⇒ 𝑦 ≿ 𝑧 and 𝑧 ∕≿ 𝑦
Since ≿ is transitive, we conclude that 𝑥 ≿ 𝑧.
It remains to show that 𝑧 ∕≿ 𝑥. Assume otherwise, that is assume 𝑧 ≿ 𝑥. We
know that 𝑥 ≿ 𝑦 and transitivity implies that 𝑧 ≿ 𝑦, contrary to the assumption
that 𝑦 ≻ 𝑧. We conclude that 𝑧 ∕≿ 𝑥 and
𝑥 ≿ 𝑧 and 𝑧 ∕≿ 𝑥 =⇒ 𝑥 ≻ 𝑧
This shows that ≻ is transitive.
1.21 reflexive Since ≿ is reflexive, 𝑥 ≿ 𝑥 which implies 𝑥 ∼ 𝑥.
transitive Assume 𝑥 ∼ 𝑦 and 𝑦 ∼ 𝑧. Now
𝑥 ∼ 𝑦 ⇐⇒ 𝑥 ≿ 𝑦 and 𝑦 ≿ 𝑥
𝑦 ∼ 𝑧 ⇐⇒ 𝑦 ≿ 𝑧 and 𝑧 ≿ 𝑦
Transitivity of ≿ implies
𝑥 ≿ 𝑦 and 𝑦 ≿ 𝑧 =⇒ 𝑥 ≿ 𝑧
𝑧 ≿ 𝑦 and 𝑦 ≿ 𝑥 =⇒ 𝑧 ≿ 𝑥
Combining
𝑥 ≿ 𝑧 and 𝑧 ≿ 𝑥 =⇒ 𝑥 ∼ 𝑧
symmetric
𝑥 ∼ 𝑦 ⇐⇒ 𝑥 ≿ 𝑦 and 𝑦 ≿ 𝑥
⇐⇒ 𝑦 ≿ 𝑥 and 𝑥 ≿ 𝑦
⇐⇒ 𝑦 ∼ 𝑥
1.22 reflexive Every integer is a multiple of itself, that is 𝑚 = 1𝑚.
transitive Assume 𝑚 = 𝑘𝑛 and 𝑛 = 𝑙𝑝 where 𝑘, 𝑙 ∈ 𝑁. Then 𝑚 = 𝑘𝑙𝑝 so that 𝑚 is a multiple of 𝑝.
not symmetric If 𝑚 = 𝑘𝑛 with 𝑘 ∈ 𝑁 and 𝑘 > 1, then 𝑛 = (1/𝑘)𝑚 and 1/𝑘 ∉ 𝑁. For example, 4 is a multiple of 2 but 2 is not a multiple of 4.
1.23
[𝑎, 𝑏] = { 𝑎, 𝑦, 𝑏, 𝑧 }
(𝑎, 𝑏) = { 𝑦 }
1.24
≿ (𝑦) = {𝑏, 𝑦, 𝑧 }
≻ (𝑦) = {𝑏, 𝑧 }
≾ (𝑦) = {𝑎, 𝑥, 𝑦 }
≺ (𝑦) = {𝑎, 𝑥 }
1.25 Let 𝑋 be ordered by ≿. 𝑥 ∈ 𝑋 is a minimal element if there is no element which strictly precedes it, that is, there is no element 𝑦 ∈ 𝑋 such that 𝑦 ≺ 𝑥. 𝑥 ∈ 𝑋 is the first element if it precedes every other element, that is, 𝑥 ≾ 𝑦 for all 𝑦 ∈ 𝑋.
1.26 The maximal elements of 𝑋 are 𝑏 and 𝑧. The minimal element of 𝑋 is 𝑥. These
are also best and worst elements respectively.
1.27 Assume that 𝑥 is a best element in 𝑋 ordered by ≿. That is, 𝑥 ≿ 𝑦 for all 𝑦 ∈ 𝑋.
This implies that there is no 𝑦 ∈ 𝑋 which strictly dominates 𝑥. Therefore, 𝑥 is maximal
in 𝑋. In Example 1.23, the numbers 5, 6, 7, 8, 9 are all maximal elements, but none of
them is a best element.
1.28 Assume that the elements are denoted 𝑥1, 𝑥2, . . . , 𝑥𝑛. We can identify a maximal element by constructing another list using the following recursive algorithm
𝑎1 = 𝑥1
𝑎𝑖 = 𝑥𝑖 if 𝑥𝑖 ≻ 𝑎𝑖−1, and 𝑎𝑖 = 𝑎𝑖−1 otherwise
By construction, there is no 𝑥𝑖 which strictly succeeds 𝑎𝑛. Therefore 𝑎𝑛 is a maximal element.
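The recursion above translates directly into a short program. The following Python sketch is only an illustration (the function names are mine, not the book's); it applies the algorithm with an arbitrary strict-preference test ≻ supplied as a function, and with the usual order on numbers it simply returns the largest element.

```python
def maximal_element(elements, strictly_succeeds):
    """Return a maximal element of a nonempty finite list.

    strictly_succeeds(x, a) should implement "x strictly succeeds a".
    """
    a = elements[0]                      # a_1 = x_1
    for x in elements[1:]:
        if strictly_succeeds(x, a):      # a_i = x_i if x_i strictly succeeds a_{i-1}
            a = x                        # otherwise a_i = a_{i-1}
    return a                             # no x_i strictly succeeds a_n

# With the usual order on integers, 9 is the maximal (indeed best) element.
print(maximal_element([5, 9, 2, 7], lambda x, a: x > a))   # 9
```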
1.29
𝑥∗ is maximal ⇐⇒ there does not exist 𝑥 ≻ 𝑥∗
that is
≻(𝑥∗ ) = { 𝑥 : 𝑥 ≻ 𝑥∗ } = ∅
𝑥∗ is best ⇐⇒ 𝑥∗ ≿ 𝑥 for every 𝑥 ∈ 𝑋
⇐⇒ 𝑥 ≾ 𝑥∗ for every 𝑥 ∈ 𝑋
That is, every 𝑥 ∈ 𝑋 belongs to ≾(𝑥∗ ) or ≾(𝑥∗ ) = 𝑋.
1.30 Let 𝐴 be a nonempty subset of a set 𝑋 ordered by ≿. 𝑥 ∈ 𝑋 is a lower bound for
𝐴 if it precedes every element in 𝐴, that is 𝑥 ≾ 𝑎 for all 𝑎 ∈ 𝐴. It is a greatest lower
bound if it dominates every lower bound, that is 𝑥 ≿ 𝑦 for every lower bound 𝑦 of 𝐴.
1.31 Any multiple of 60 is an upper bound for 𝐴. Thus, the set of upper bounds of 𝐴
is {60, 120, 240, . . . }. The least upper bound of 𝐴 is 60. The only lower bound is 1,
hence it is the greatest lower bound.
1.32 The least upper bounds of interval [𝑎, 𝑏] are 𝑏 and 𝑧. The least upper bound of
(𝑎, 𝑏) is 𝑦.
1.33
𝑥 is an upper bound of 𝐴 ⇐⇒ 𝑥 ≿ 𝑎 for every 𝑎 ∈ 𝐴
⇐⇒ 𝑎 ≾ 𝑥 for every 𝑎 ∈ 𝐴
⇐⇒ 𝐴 ⊆ ≾(𝑥)
Similarly
𝑥 is a lower bound of 𝐴 ⇐⇒ 𝑥 ≾ 𝑎 for every 𝑎 ∈ 𝐴
⇐⇒ 𝑎 ≿ 𝑥 for every 𝑎 ∈ 𝐴
⇐⇒ 𝐴 ⊆ ≿(𝑥)
1.34 For every 𝑥, 𝑦 ∈ ℜ²,
𝑥 ≻ 𝑦 if 𝑥1 > 𝑦1 or (𝑥1 = 𝑦1 and 𝑥2 > 𝑦2)
Since any two distinct elements 𝑥, 𝑦 ∈ ℜ² are comparable, ≻ is complete; it is a total order.
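For concreteness, the lexicographic comparison can be coded as a two-line predicate. The Python sketch below is merely an illustration of the definition used in this exercise.

```python
def lex_succeeds(x, y):
    """Lexicographic order on R^2: x ≻ y iff x1 > y1, or x1 = y1 and x2 > y2."""
    return x[0] > y[0] or (x[0] == y[0] and x[1] > y[1])

print(lex_succeeds((1, 5), (1, 3)))    # True: first components tie, 5 > 3
print(lex_succeeds((0, 9), (1, 0)))    # False: 0 < 1, the second components are irrelevant
```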
1.35 Assume ≿𝑖 is complete for every 𝑖. Then for every 𝑥, 𝑦 ∈ 𝑋 and for all 𝑖 = 1, 2, . . . , 𝑛, either 𝑥𝑖 ≿𝑖 𝑦𝑖 or 𝑦𝑖 ≿𝑖 𝑥𝑖 or both. Either
𝑥𝑖 ∼𝑖 𝑦𝑖 for all 𝑖 Then define 𝑥 ∼ 𝑦.
𝑥𝑖 ∕∼𝑖 𝑦𝑖 for some 𝑖 Let 𝑘 be the first individual with a strict preference, that is 𝑘 = min{ 𝑖 : 𝑥𝑖 ∕∼𝑖 𝑦𝑖 }. (Completeness of ≿𝑖 ensures that 𝑘 is well defined.) Then define
𝑥 ≻ 𝑦 if 𝑥𝑘 ≻𝑘 𝑦𝑘, and 𝑦 ≻ 𝑥 otherwise
1.36 Let 𝑆, 𝑇 and 𝑈 be subsets of a finite set 𝑋. Set inclusion ⊆ is
reflexive since 𝑆 ⊆ 𝑆.
transitive since 𝑆 ⊆ 𝑇 and 𝑇 ⊆ 𝑈 implies 𝑆 ⊆ 𝑈 .
anti-symmetric since 𝑆 ⊆ 𝑇 and 𝑇 ⊆ 𝑆 implies 𝑆 = 𝑇
Therefore ⊆ is a partial order.
1.37 Assume 𝑥 and 𝑦 are both least upper bounds of 𝐴. That is 𝑥 ≿ 𝑎 for all 𝑎 ∈ 𝐴
and 𝑦 ≿ 𝑎 for all 𝑎 ∈ 𝐴. Further, if 𝑥 is a least upper bound, 𝑦 ≿ 𝑥. If 𝑦 is a least
upper bound, 𝑥 ≿ 𝑦. By anti-symmetry, 𝑥 = 𝑦.
1.38
𝑥 ∼ 𝑦 =⇒ 𝑥 ≿ 𝑦 and 𝑦 ≿ 𝑥
which implies that 𝑥 = 𝑦 by antisymmetry. Each equivalence class
∼ (𝑥) = { 𝑦 ∈ 𝑋 : 𝑦 ∼ 𝑥 }
comprises just a single element 𝑥.
1.39 max 𝒫(𝑋) = 𝑋 and min 𝒫(𝑋) = ∅.
1.40 The subset {2, 4, 8} forms a chain. More generally, the set of integer powers of a
given number { 𝑛, 𝑛2 , 𝑛3 , . . . } forms a chain.
1.41 Assume 𝑥 and 𝑦 are maximal elements of the chain 𝐴. Then 𝑥 ≿ 𝑎 for all 𝑎 ∈ 𝐴
and in particular 𝑥 ≿ 𝑦. Similarly, 𝑦 ≿ 𝑎 for all 𝑎 ∈ 𝐴 and in particular 𝑦 ≿ 𝑥. Since
≿ is anti-symmetric, 𝑥 = 𝑦.
1.42
1. By assumption, for every 𝑡 ∈ 𝑇 ∖ 𝑊 , ≺(𝑡) is a nonempty finite chain. Hence,
it has a unique maximal element, 𝑝(𝑡).
2. Let 𝑡 be any node. Either 𝑡 is an initial node or 𝑡 has a unique predecessor 𝑝(𝑡).
Either 𝑝(𝑡) is an initial node, or it has a unique predecessor 𝑝(𝑝(𝑡)). Continuing
in this way, we trace out a unique path from 𝑡 back to an initial node. We can
be sure of eventually reaching an initial node since 𝑇 is finite.
1.43
(1, 2) ∨ (3, 1) = (3, 2) and (1, 2) ∧ (3, 2) = (1, 2)
1.44
1. 𝑥 ∨ 𝑦 is an upper bound for { 𝑥, 𝑦 }, that is x ∨ y ≿ 𝑥 and x ∨ y ≿ 𝑦. Similarly, 𝑥 ∧ 𝑦 is a lower bound for { 𝑥, 𝑦 }.
2. Assume 𝑥 ≿ 𝑦. Then 𝑥 is an upper bound for { 𝑥, 𝑦 }. If 𝑏 is any upper bound for { 𝑥, 𝑦 }, then 𝑏 ≿ 𝑥. Therefore, 𝑥 is the least upper bound for { 𝑥, 𝑦 }, that is 𝑥 ∨ 𝑦 = 𝑥. Similarly, 𝑦 is a lower bound for { 𝑥, 𝑦 } which dominates every other lower bound, so that 𝑥 ∧ 𝑦 = 𝑦. Conversely, assume 𝑥 ∨ 𝑦 = 𝑥. Then 𝑥 is an upper bound for { 𝑥, 𝑦 }, that is 𝑥 ≿ 𝑦.
3. Using the preceding equivalence
𝑥 ≿ 𝑥 ∧ 𝑦 =⇒ 𝑥 ∨ (𝑥 ∧ 𝑦) = 𝑥
𝑥 ∨ 𝑦 ≿ 𝑥 =⇒ (𝑥 ∨ 𝑦) ∧ 𝑥 = 𝑥
1.45 A chain 𝑋 is a partially ordered set in which the order is complete: for every 𝑥, 𝑦 ∈ 𝑋 with 𝑥 ∕= 𝑦, either 𝑥 ≻ 𝑦 or 𝑦 ≻ 𝑥. Therefore, define the meet and join by
𝑥 ∧ 𝑦 = 𝑦 if 𝑥 ≻ 𝑦, and 𝑥 ∧ 𝑦 = 𝑥 if 𝑦 ≻ 𝑥
𝑥 ∨ 𝑦 = 𝑥 if 𝑥 ≻ 𝑦, and 𝑥 ∨ 𝑦 = 𝑦 if 𝑦 ≻ 𝑥
𝑋 is a lattice with these operations.
1.46 Assume 𝑋1 and 𝑋2 are lattices, and let 𝑋 = 𝑋1 × 𝑋2 . Consider any two elements
x = (𝑥1 , 𝑥2 ) and y = (𝑦1 , 𝑦2 ) in 𝑋. Since 𝑋1 and 𝑋2 are lattices, 𝑏1 = 𝑥1 ∨ 𝑦1 ∈ 𝑋1
and 𝑏2 = 𝑥2 ∨ 𝑦2 ∈ 𝑋2 , so that b = (𝑏1 , 𝑏2 ) = (𝑥1 ∨ 𝑦1 , 𝑥2 ∨ 𝑦2 ) ∈ 𝑋. Furthermore
b ≿ x and b ≿ y in the natural product order, so that b is an upper bound for the
{x, y}. Every upper bound b̂ = (ˆ𝑏1 , ˆ𝑏2 ) of {x, y} must have 𝑏𝑖 ≿𝑖 𝑥𝑖 and 𝑏𝑖 ≿𝑖 𝑦𝑖 ,
so that b̂ ≿ b. Therefore, b is the least upper bound of {x, y}, that is b = x ∨ y.
Similarly, x ∧ y = (𝑥1 ∧ 𝑦1 , 𝑥2 ∧ 𝑦2 ).
1.47 Let 𝑆 be a subset of 𝑋 and let
𝑆 ∗ = { 𝑥 ∈ 𝑋 : 𝑥 ≿ 𝑠 for every 𝑠 ∈ 𝑆 }
be the set of upper bounds of 𝑆. Then 𝑥∗ ∈ 𝑆 ∗ ∕= ∅. By assumption, 𝑆 ∗ has a greatest
lower bound 𝑏. Since every 𝑠 ∈ 𝑆 is a lower bound of 𝑆 ∗ , 𝑏 ≿ 𝑠 for every 𝑠 ∈ 𝑆.
Therefore 𝑏 is an upper bound of 𝑆. Furthermore, 𝑏 is the least upper bound of 𝑆,
since 𝑏 ≾ 𝑥 for every 𝑥 ∈ 𝑆 ∗ . This establishes that every subset of 𝑋 also has a least
upper bound. In particular, every pair of elements has a least upper and a greatest
lower bound. Consequently 𝑋 is a complete lattice.
1.48 Without loss of generality, we will prove the closed interval case. Let [𝑎, 𝑏] be an
interval in a lattice 𝐿. Recall that 𝑎 = inf[𝑎, 𝑏] and 𝑏 = sup[𝑎, 𝑏]. Choose any 𝑥, 𝑦 in
[𝑎, 𝑏] ⊆ 𝐿. Since 𝐿 is a lattice, 𝑥 ∨ 𝑦 ∈ 𝐿 and
𝑥 ∨ 𝑦 = sup{ 𝑥, 𝑦 } ≾ 𝑏
Therefore 𝑥 ∨ 𝑦 ∈ [𝑎, 𝑏]. Similarly, 𝑥 ∧ 𝑦 ∈ [𝑎, 𝑏]. [𝑎, 𝑏] is a lattice. Similarly, for any
subset 𝑆 ⊆ [𝑎, 𝑏] ⊆ 𝐿, sup 𝑆 ∈ 𝐿 if 𝐿 is complete. Also, sup 𝑆 ≾ 𝑏 = sup[𝑎, 𝑏]. Therefore
sup 𝑆 ∈ [𝑎, 𝑏]. Similarly inf 𝑆 ∈ [𝑎, 𝑏] so that [𝑎, 𝑏] is complete.
1.49
1. The strong set order ≿𝑆 is
antisymmetric Let 𝑆1 , 𝑆2 ⊆ 𝑋 with 𝑆1 ≿𝑆 𝑆2 and 𝑆2 ≿𝑆 𝑆1 . Choose 𝑥1 ∈ 𝑆1
and 𝑥2 ∈ 𝑆2. Since 𝑆1 ≿𝑆 𝑆2, 𝑥1 ∨ 𝑥2 ∈ 𝑆1 and 𝑥1 ∧ 𝑥2 ∈ 𝑆2. On the other hand, since 𝑆2 ≿𝑆 𝑆1, 𝑥1 = 𝑥1 ∨ (𝑥1 ∧ 𝑥2) ∈ 𝑆2 and 𝑥2 = 𝑥2 ∧ (𝑥1 ∨ 𝑥2) ∈ 𝑆1 (Exercise 1.44). Therefore 𝑆1 = 𝑆2 and ≿𝑆 is antisymmetric.
transitive Let 𝑆1, 𝑆2, 𝑆3 ⊆ 𝑋 with 𝑆1 ≿𝑆 𝑆2 and 𝑆2 ≿𝑆 𝑆3. Choose 𝑥1 ∈ 𝑆1, 𝑥2 ∈ 𝑆2 and 𝑥3 ∈ 𝑆3. Since 𝑆1 ≿𝑆 𝑆2 and 𝑆2 ≿𝑆 𝑆3, 𝑥1 ∧ 𝑥2 and 𝑥2 ∨ 𝑥3 are in 𝑆2. Therefore 𝑦2 = (𝑥1 ∧ 𝑥2) ∨ 𝑥3 ∈ 𝑆2 (since 𝑆2 ≿𝑆 𝑆3), which implies
𝑥1 ∨ 𝑥3 = (𝑥1 ∨ (𝑥1 ∧ 𝑥2)) ∨ 𝑥3 = 𝑥1 ∨ ((𝑥1 ∧ 𝑥2) ∨ 𝑥3) = 𝑥1 ∨ 𝑦2 ∈ 𝑆1
since 𝑆1 ≿𝑆 𝑆2. Similarly 𝑧2 = 𝑥1 ∧ (𝑥2 ∨ 𝑥3) ∈ 𝑆2 (since 𝑆1 ≿𝑆 𝑆2) and
𝑥1 ∧ 𝑥3 = 𝑥1 ∧ ((𝑥2 ∨ 𝑥3) ∧ 𝑥3) = (𝑥1 ∧ (𝑥2 ∨ 𝑥3)) ∧ 𝑥3 = 𝑧2 ∧ 𝑥3 ∈ 𝑆3
since 𝑆2 ≿𝑆 𝑆3. Therefore, 𝑆1 ≿𝑆 𝑆3.
2. 𝑆 ≿𝑆 𝑆 if and only if, for every 𝑥1 , 𝑥2 ∈ 𝑆, 𝑥1 ∨ 𝑥2 ∈ 𝑆 and 𝑥1 ∧ 𝑥2 ∈ 𝑆, which
is the case if and only if 𝑆 is a sublattice.
3. Let 𝐿(𝑋) denote the set of all sublattices of 𝑋. We have shown that ≿𝑆 is
reflexive, transitive and antisymmetric on 𝐿(𝑋). Hence, it is a partial order on
𝐿(𝑋).
1.50 Assume 𝑆1 ≿𝑆 𝑆2 . For any 𝑥1 ∈ 𝑆1 and 𝑥2 ∈ 𝑆2 , 𝑥1 ∨ 𝑥2 ∈ 𝑆1 and 𝑥1 ∧ 𝑥2 ∈ 𝑆2 .
Therefore
sup 𝑆1 ≿ 𝑥1 ∨ 𝑥2 ≿ 𝑥2 for every 𝑥2 ∈ 𝑆2
which implies that sup 𝑆1 ≿ sup 𝑆2. Similarly
inf 𝑆2 ≾ 𝑥1 ∧ 𝑥2 ≾ 𝑥1 for every 𝑥1 ∈ 𝑆1
which implies that inf 𝑆2 ≾ inf 𝑆1. Note that completeness ensures the existence of sup 𝑆𝑖 and inf 𝑆𝑖.
1.51 An argument analogous to the preceding exercise establishes =⇒ . (Completeness is not required, since for any interval 𝑎 = inf[𝑎, 𝑏] and 𝑏 = sup[𝑎, 𝑏]).
To establish the converse, assume that 𝑆1 = [𝑎1 , 𝑏1 ] and 𝑆2 = [𝑎2 , 𝑏2 ]. Consider any
𝑥1 ∈ 𝑆1 and 𝑥2 ∈ 𝑆2 . There are two cases.
Case 1. 𝑥1 ≿ 𝑥2 Since 𝑋 is a chain, 𝑥1 ∨ 𝑥2 = 𝑥1 ∈ 𝑆1 . 𝑥1 ∧ 𝑥2 = 𝑥2 ∈ 𝑆2 .
Case 2. 𝑥1 ≺ 𝑥2 Since 𝑋 is a chain, 𝑥1 ∨ 𝑥2 = 𝑥2. Now 𝑎1 ≾ 𝑥1 ≺ 𝑥2 ≾ 𝑏2 ≾ 𝑏1. Therefore, 𝑥2 = 𝑥1 ∨ 𝑥2 ∈ 𝑆1. Similarly 𝑎2 ≾ 𝑎1 ≾ 𝑥1 ≺ 𝑥2 ≾ 𝑏2. Therefore 𝑥1 ∧ 𝑥2 = 𝑥1 ∈ 𝑆2.
We have shown that 𝑆1 ≿𝑆 𝑆2 in both cases.
1.52 Assume that ≿ is a complete relation on 𝑋. This means that for every 𝑥, 𝑦 ∈ 𝑋,
either 𝑥 ≿ 𝑦 or 𝑦 ≿ 𝑥. In particular, letting 𝑥 = 𝑦, 𝑥 ≿ 𝑥 for 𝑥 ∈ 𝑋. ≿ is reflexive.
1.53 Anti-symmetry implies that each indifference class contains a single element. If the consumer's preference relation were anti-symmetric, there would be no distinct baskets of goods between which the consumer was indifferent: each indifference curve would consist of a single point.
1.54 We previously showed (Exercise 1.27) that every best element is maximal. To
prove the converse, assume that 𝑥 is maximal in the weakly ordered set 𝑋. We have to
show that 𝑥 ≿ 𝑦 for all 𝑦 ∈ 𝑋. Assume otherwise, that is assume there is some 𝑦 ∈ 𝑋
for which 𝑥 ∕≿ 𝑦. Since ≿ is complete, this implies that 𝑦 ≻ 𝑥 which contradicts the
assumption that 𝑥 is maximal. Hence we conclude that 𝑥 ≿ 𝑦 for 𝑦 ∈ 𝑋 and 𝑥 is a
best element.
1.55 False. A chain has at most one maximal element (Exercise 1.41). Here, uniqueness
is ensured by anti-symmetry. A weakly ordered set in which the order is not antisymmetric may have multiple maximal and best elements. For example, 𝑎 and 𝑏 are
both best elements in the weakly ordered set {𝑎 ∼ 𝑏 ≻ 𝑐}.
1.56
1. For every 𝑥 ∈ 𝑋, either 𝑥 ≿ 𝑦, which implies 𝑥 ∈ ≿(𝑦), or 𝑦 ≿ 𝑥, which implies 𝑥 ∈ ≾(𝑦), since ≿ is complete. Consequently, ≿(𝑦) ∪ ≾(𝑦) = 𝑋. If 𝑥 ∈ ≿(𝑦) ∩ ≾(𝑦), then 𝑥 ≿ 𝑦 and 𝑦 ≿ 𝑥 so that 𝑥 ∼ 𝑦 and 𝑥 ∈ 𝐼𝑦.
2. For every 𝑥 ∈ 𝑋, either 𝑥 ≿ 𝑦 =⇒ 𝑥 ∈ ≿(𝑦) or 𝑦 ≻ 𝑥 =⇒ 𝑥 ∈ ≺(𝑦) since ≿ is
complete. Consequently, ≿(𝑦) ∪ ≺(𝑦) = 𝑋 and ≿(𝑦) ∩ ≺(𝑦) = ∅.
3. For every 𝑦 ∈ 𝑋, ≻(𝑦) and 𝐼𝑦 partition ≿(𝑦) and therefore ≻(𝑦), 𝐼𝑦 and ≺(𝑦)
partition 𝑋.
1.57 Assume 𝑥 ≿ 𝑦 and 𝑧 ∈ ≿(𝑥). Then 𝑧 ≿ 𝑥 ≿ 𝑦 by transitivity. Therefore 𝑧 ∈ ≿(𝑦).
This shows that ≿(𝑥) ⊆ ≿(𝑦).
Similarly, assume 𝑥 ≻ 𝑦 and 𝑧 ∈ ≻(𝑥). Then 𝑧 ≻ 𝑥 ≻ 𝑦 by transitivity. Therefore 𝑧 ∈ ≻(𝑦). This shows that ≻(𝑥) ⊆ ≻(𝑦). To show that ≻(𝑥) ∕= ≻(𝑦), observe that 𝑥 ∈ ≻(𝑦) but 𝑥 ∉ ≻(𝑥).
1.58 Every finite ordered set has at least one maximal element (Exercise 1.28).
1.59 Kreps (1990, p.323), Luenberger (1995, p.170) and Mas-Colell et al. (1995, p.313)
adopt the weak Pareto order, whereas Varian (1992, p.323) distinguishes the two orders. Osborne and Rubinstein (1994, p.7) also distinguish the two orders, utilizing the
weak order in defining the core (Chapter 13) but the strong Pareto order in the Nash
bargaining solution (Chapter 15).
1.60 Assume that a group 𝑆 is decisive over 𝑥, 𝑦 ∈ 𝑋. Let 𝑎, 𝑏 ∈ 𝑋 be two other states.
We have to show that 𝑆 is decisive over 𝑎 and 𝑏. Without loss of generality, assume
for all individuals 𝑎 ≿𝑖 𝑥 and 𝑦 ≿𝑖 𝑏. Then, the Pareto order implies that 𝑎 ≻ 𝑥 and
𝑦 ≻ 𝑏.
Assume that for every 𝑖 ∈ 𝑆, 𝑥 ≿𝑖 𝑦. Since 𝑆 is decisive over 𝑥 and 𝑦, the social
order ranks 𝑥 ≿ 𝑦. By transitivity, 𝑎 ≿ 𝑏. By IIA, this holds irrespective of individual
preferences on other alternatives. Hence, 𝑆 is decisive over 𝑎 and 𝑏.
1.61 Assume that 𝑆 is decisive. Let 𝑥, 𝑦 and 𝑧 be any three alternatives and assume
𝑥 ≿ 𝑦 for every 𝑖 ∈ 𝑆. Partition 𝑆 into two subgroups 𝑆1 and 𝑆2 so that
𝑥 ≿𝑖 𝑧 for every 𝑖 ∈ 𝑆1 and 𝑧 ≿𝑖 𝑦 for every 𝑖 ∈ 𝑆2
Since 𝑆 is decisive, 𝑥 ≿ 𝑦. By completeness, either
𝑥 ≿ 𝑧 in which case 𝑆1 is decisive over 𝑥 and 𝑧. By the field expansion lemma (Exercise
1.60), 𝑆1 is decisive.
𝑧 ≻ 𝑥 which implies 𝑧 ≿ 𝑦. In this case, 𝑆2 is decisive over 𝑦 and 𝑧, and therefore
(Exercise 1.60) decisive.
1.62 Assume ≻ is a social order which is Pareto and satisfies Independence of Irrelevant
Alternatives. By the Pareto principle, the whole group is decisive over any pair of
alternatives. By the previous exercise, some proper subgroup is decisive. Continuing
in this way, we eventually arrive at a decisive subgroup of one individual. By the
Field Expansion Lemma (Exercise 1.60), that individual is decisive over every pair of
alternatives. That is, the individual is a dictator.
1.63 Assume 𝐴 is decisive over 𝑥 and 𝑦 and 𝐵 is decisive over 𝑤 and 𝑧. That is, assume
𝑥 ≻𝐴 𝑦 =⇒ 𝑥 ≻ 𝑦
𝑤 ≻𝐵 𝑧 =⇒ 𝑤 ≻ 𝑧
Also assume
𝑦 ≿𝑖 𝑤 for every 𝑖 and 𝑧 ≿𝑖 𝑥 for every 𝑖
This implies that 𝑦 ≿ 𝑤 and 𝑧 ≿ 𝑥 (Pareto principle). Combining these preferences,
transitivity implies that
𝑥≻𝑦≿𝑤≻𝑧
which contradicts the assumption that 𝑧 ≿ 𝑥. Therefore, the implied social ordering is
intransitive.
1.64 Assume 𝑥 ∈ core. In particular this implies that there does not exist any 𝑦 ∈ 𝑊 (𝑁 )
such that 𝑦 ≻ 𝑥. Therefore 𝑥 ∈ Pareto.
1.65 No state will accept a cost share which exceeds what it can achieve on its own, so
that if 𝑥 ∈ core then
𝑥𝐴𝑃 ≤ 1870
𝑥𝑇 𝑁 ≤ 5330
𝑥𝐾𝑀 ≤ 860
Similarly, the combined share of the two states AP and TN should not exceed 6990,
which they could achieve by proceeding without KM, that is
𝑥𝐴𝑃 + 𝑥𝑇 𝑁 ≤ 6990
Similarly
𝑥𝐴𝑃 + 𝑥𝐾𝑀 ≤ 1960
𝑥𝑇 𝑁 + 𝑥𝐾𝑀 ≤ 5020
Finally, the sum of the shares should equal the total cost
𝑥𝐴𝑃 + 𝑥𝑇 𝑁 + 𝑥𝐾𝑀 = 6530
The core is the set of all allocations of the total cost which satisfy the preceding
inequalities.
For example, the allocation (𝑥𝐴𝑃 = 1500, 𝑥𝑇 𝑁 = 5000, 𝑥𝐾𝑀 = 30) does not belong to the core, since TN and KM will object to their combined share of 5030, given that they can meet their needs jointly at a total cost of 5020. On the other hand, no group can object to the allocation (𝑥𝐴𝑃 = 1510, 𝑥𝑇 𝑁 = 5000, 𝑥𝐾𝑀 = 20), which therefore
belongs to the core.
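As a hedged illustration (not from the original text), the core constraints above can be checked mechanically. The sketch below encodes the cost ceilings quoted in this exercise and tests the two allocations discussed; the function and variable names are my own.

```python
# Cost ceilings for each coalition, as quoted in the solution above.
COST_CAPS = {
    ("AP",): 1870, ("TN",): 5330, ("KM",): 860,
    ("AP", "TN"): 6990, ("AP", "KM"): 1960, ("TN", "KM"): 5020,
}
TOTAL_COST = 6530

def in_core(x):
    """x is a dict of cost shares; check the inequalities of Exercise 1.65."""
    if sum(x.values()) != TOTAL_COST:           # shares must exhaust the total cost
        return False
    return all(sum(x[i] for i in S) <= cap      # no coalition pays more than it
               for S, cap in COST_CAPS.items()) # could secure on its own

print(in_core({"AP": 1500, "TN": 5000, "KM": 30}))   # False: TN and KM pay 5030 > 5020
print(in_core({"AP": 1510, "TN": 5000, "KM": 20}))   # True
```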
1.66 The usual way to model a cost allocation problem as a TP-coalitional game is
to regard the potential cost savings from cooperation as the sum to be allocated. In
this example, the total joint cost of 6530 represents a potential saving of 1530 over
the aggregate cost of 8060 if each region goes its own way. This potential saving of
1530 measures 𝑤(𝑁 ). Similarly, undertaking a joint development, AP and TN could
satisfy their combined requirements at a total cost of 6890. This compares with the
standalone costs of 7100 (= 1870 (AP) + 5330 (TN)). Hence, the potential cost savings
from their collaboration are 210 (= 7100 - 6890), which measures 𝑤(𝐴𝑃, 𝑇 𝑁 ). By
similar calculations, we can compute the worth of each coalition, namely
𝑤(𝐴𝑃 ) = 0
𝑤(𝑇 𝑁 ) = 0
𝑤(𝐴𝑃, 𝑇 𝑁 ) = 210
𝑤(𝐴𝑃, 𝐾𝑀 ) = 770
𝑤(𝐾𝑀 ) = 0
𝑤(𝐾𝑀, 𝑇 𝑁 ) = 1170
𝑤(𝑁 ) = 1530
An outcome in this game is an allocation of the total cost savings 𝑤(𝑁 ) = 1530 amongst
the three players. This can be translated into final cost shares by subtracting each
player's share of the cost savings from their standalone cost. For example, a specific
outcome in this game is (𝑥𝐴𝑃 = 370, 𝑥𝑇 𝑁 = 330, 𝑥𝐾𝑀 = 830), which corresponds to
final cost shares of 1500 for AP, 5000 for TN and 30 for KM.
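To make the bookkeeping explicit, here is a small Python sketch (my own illustration) that recomputes each coalition's worth as the cost saving 𝑤(𝑆) = ∑_{𝑖∈𝑆} 𝑐({𝑖}) − 𝑐(𝑆), using the stand-alone and joint costs quoted in this exercise.

```python
STANDALONE = {"AP": 1870, "TN": 5330, "KM": 860}      # costs of acting alone
JOINT_COST = {                                        # costs of joint development
    ("AP", "TN"): 6890, ("AP", "KM"): 1960, ("KM", "TN"): 5020,
    ("AP", "KM", "TN"): 6530,
}

def worth(coalition):
    """w(S): cost saving of coalition S relative to its members acting alone."""
    if len(coalition) == 1:
        return 0                                      # no partner, no saving
    return sum(STANDALONE[i] for i in coalition) - JOINT_COST[coalition]

for S in JOINT_COST:
    print(S, worth(S))
# ('AP', 'TN') 210   ('AP', 'KM') 770   ('KM', 'TN') 1170   ('AP', 'KM', 'TN') 1530
```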
1.67 Let
𝐶 = { x ∈ 𝑋 : ∑_{𝑖∈𝑆} 𝑥𝑖 ≥ 𝑤(𝑆) for every 𝑆 ⊆ 𝑁 }
1. 𝐶 ⊆ core Assume that x ∈ 𝐶. Suppose x ∉ core. This implies there exists some coalition 𝑆 and outcome y ∈ 𝑤(𝑆) such that y ≻𝑖 x for every 𝑖 ∈ 𝑆.
∙ y ∈ 𝑤(𝑆) implies ∑_{𝑖∈𝑆} 𝑦𝑖 ≤ 𝑤(𝑆), while
∙ y ≻𝑖 x for every 𝑖 ∈ 𝑆 implies 𝑦𝑖 > 𝑥𝑖 for every 𝑖 ∈ 𝑆. Summing, this implies
∑_{𝑖∈𝑆} 𝑦𝑖 > ∑_{𝑖∈𝑆} 𝑥𝑖 ≥ 𝑤(𝑆)
This contradiction establishes that x ∈ core.
2. core ⊆ 𝐶 Assume that x ∈ core. Suppose x ∉ 𝐶. This implies there exists some coalition 𝑆 such that ∑_{𝑖∈𝑆} 𝑥𝑖 < 𝑤(𝑆). Let 𝑑 = 𝑤(𝑆) − ∑_{𝑖∈𝑆} 𝑥𝑖 and consider the allocation y obtained by reallocating 𝑑 from 𝑆ᶜ to 𝑆, that is
𝑦𝑖 = 𝑥𝑖 + 𝑑/𝑠 if 𝑖 ∈ 𝑆, and 𝑦𝑖 = 𝑥𝑖 − 𝑑/(𝑛 − 𝑠) if 𝑖 ∉ 𝑆
where 𝑠 = ∣𝑆∣ is the number of players in 𝑆 and 𝑛 = ∣𝑁∣ is the number in 𝑁. Then 𝑦𝑖 > 𝑥𝑖 for every 𝑖 ∈ 𝑆 so that y ≻𝑖 x for every 𝑖 ∈ 𝑆. Further, y ∈ 𝑤(𝑆) since ∑_{𝑖∈𝑆} 𝑦𝑖 = ∑_{𝑖∈𝑆} 𝑥𝑖 + 𝑑 = 𝑤(𝑆), and y ∈ 𝑋 since
∑_{𝑖∈𝑁} 𝑦𝑖 = ∑_{𝑖∈𝑆} (𝑥𝑖 + 𝑑/𝑠) + ∑_{𝑖∉𝑆} (𝑥𝑖 − 𝑑/(𝑛 − 𝑠)) = ∑_{𝑖∈𝑁} 𝑥𝑖 = 𝑤(𝑁)
Thus 𝑆 can improve upon x, which contradicts our assumption that x ∈ core, establishing that x ∈ 𝐶.
1.68 The 7 unanimity games for the player set 𝑁 = {1, 2, 3} are
𝑢{1}(𝑆) = 1 if 𝑆 = {1}, {1,2}, {1,3}, 𝑁; 0 otherwise
𝑢{2}(𝑆) = 1 if 𝑆 = {2}, {1,2}, {2,3}, 𝑁; 0 otherwise
𝑢{3}(𝑆) = 1 if 𝑆 = {3}, {1,3}, {2,3}, 𝑁; 0 otherwise
𝑢{1,2}(𝑆) = 1 if 𝑆 = {1,2}, 𝑁; 0 otherwise
𝑢{1,3}(𝑆) = 1 if 𝑆 = {1,3}, 𝑁; 0 otherwise
𝑢{2,3}(𝑆) = 1 if 𝑆 = {2,3}, 𝑁; 0 otherwise
𝑢𝑁(𝑆) = 1 if 𝑆 = 𝑁; 0 otherwise
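Each unanimity game is simply the indicator of whether a coalition contains the essential coalition 𝑇, so the list above can also be generated programmatically. The Python sketch below is an illustration only.

```python
from itertools import combinations

N = frozenset({1, 2, 3})
coalitions = [frozenset(c) for r in range(1, len(N) + 1)
              for c in combinations(sorted(N), r)]        # nonempty coalitions

def unanimity_game(T):
    """u_T(S) = 1 if T is a subset of S, and 0 otherwise."""
    return {S: int(T <= S) for S in coalitions}

for T in coalitions:                                      # the 2**3 - 1 = 7 unanimity games
    winners = [sorted(S) for S, value in unanimity_game(T).items() if value == 1]
    print(sorted(T), "->", winners)
```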
1.69 Firstly, consider a simple game which is a unanimity game with essential coalition 𝑇, and let 𝑥 be an outcome in which
𝑥𝑖 ≥ 0 for every 𝑖 ∈ 𝑇, 𝑥𝑖 = 0 for every 𝑖 ∉ 𝑇, and ∑_{𝑖∈𝑁} 𝑥𝑖 = 1
We claim that 𝑥 ∈ core.
Winning coalitions If 𝑆 is a winning coalition, then 𝑤(𝑆) = 1. Furthermore, if it is a winning coalition, it must contain 𝑇, that is 𝑇 ⊆ 𝑆 and
∑_{𝑖∈𝑆} 𝑥𝑖 ≥ ∑_{𝑖∈𝑇} 𝑥𝑖 = 1 = 𝑤(𝑆)
Losing coalitions If 𝑆 is a losing coalition, 𝑤(𝑆) = 0 and
∑_{𝑖∈𝑆} 𝑥𝑖 ≥ 0 = 𝑤(𝑆)
Therefore 𝑥 ∈ core and so core ∕= ∅.
Conversely, consider a simple game which is not a unanimity game. Suppose there exists an outcome 𝑥 ∈ core. Then
∑_{𝑖∈𝑁} 𝑥𝑖 = 𝑤(𝑁) = 1    (1.15)
Since there are no veto players (𝑇 = ∅), 𝑤(𝑁 ∖ {𝑖}) = 1 for every player 𝑖 ∈ 𝑁 and
∑_{𝑗∕=𝑖} 𝑥𝑗 ≥ 𝑤(𝑁 ∖ {𝑖}) = 1
which implies that 𝑥𝑖 = 0 for every 𝑖 ∈ 𝑁 contradicting (1.15). Thus we conclude that
core = ∅.
1.70 The excesses of the proper coalitions at x1 and x2 are
              x1      x2
{AP}        −180    −200
{KM}        −955    −950
{TN}        −395    −380
{AP, KM}    −365    −380
{AP, TN}    −365    −370
{KM, TN}    −180    −160
Therefore
𝑑(x1 ) = (−180, −180, −365, −365, −395, −955)
and
𝑑(x2 ) = (−160, −200, −370, −380, −380, −950)
d(x1 ) ≺𝐿 d(x2 ) which implies x1 ≻𝑑 x2 .
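A brief illustration (mine, not the book's): once the excesses have been tabulated, the comparison x1 ≻𝑑 x2 amounts to sorting the two excess vectors in decreasing order and comparing them lexicographically, which Python's tuple comparison does directly.

```python
# Excesses of the proper coalitions at x1 and x2, copied from the table above.
excesses_x1 = {"AP": -180, "KM": -955, "TN": -395,
               "AP,KM": -365, "AP,TN": -365, "KM,TN": -180}
excesses_x2 = {"AP": -200, "KM": -950, "TN": -380,
               "AP,KM": -380, "AP,TN": -370, "KM,TN": -160}

def deficit_vector(excesses):
    """d(x): the excesses arranged in decreasing order."""
    return tuple(sorted(excesses.values(), reverse=True))

d1, d2 = deficit_vector(excesses_x1), deficit_vector(excesses_x2)
print(d1)        # (-180, -180, -365, -365, -395, -955)
print(d2)        # (-160, -200, -370, -380, -380, -950)
print(d1 < d2)   # True: d(x1) lexicographically precedes d(x2), so x1 is preferred
```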
1.71 It is a weak order on 𝑋, that is, ≿ is reflexive, transitive and complete. Reflexivity and transitivity flow from the corresponding properties of ≿𝐿 on ℜ^{2ⁿ}. Similarly, for any x, y ∈ 𝑋, either d(x) ≾𝐿 d(y) or d(y) ≾𝐿 d(x) since ≿𝐿 is complete on ℜ^{2ⁿ}. Consequently either x ≿ y or y ≿ x (or both).
≿ is not a partial order since it is not antisymmetric:
d(x) ≾𝐿 d(y) and d(y) ≾𝐿 d(x) does not imply x = y
1.72
𝑑(𝑆, x) = 𝑤(𝑆) − ∑_{𝑖∈𝑆} 𝑥𝑖
so that
𝑑(𝑆, x) ≤ 0 ⇐⇒ ∑_{𝑖∈𝑆} 𝑥𝑖 ≥ 𝑤(𝑆)
1.73 Assume to the contrary that x ∈ Nu but that x ∉ core. Then, there exists a coalition 𝑇 with a positive deficit 𝑑(𝑇, x) > 0. Since core ∕= ∅, there exists some y ∈ 𝑋 such that 𝑑(𝑆, y) ≤ 0 for every 𝑆 ⊆ 𝑁. Consequently, d(y) ≺ d(x) and y ≻ x, so that x ∉ Nu. This contradiction establishes that Nu ⊆ core.
1.74 For player 1, 𝐴1 = {𝐶, 𝑁 } and
(𝐶, 𝐶) ≿1 (𝐶, 𝐶)
(𝐶, 𝐶) ≿1 (𝑁, 𝐶)
Similarly for player 2
(𝐶, 𝐶) ≿2 (𝐶, 𝐶)
(𝐶, 𝐶) ≿2 (𝐶, 𝑁 )
Therefore, (𝐶, 𝐶) satisfies the requirements of the definition of a Nash equilibrium
(Example 1.51).
1.75 If 𝑎∗𝑖 is the best element in (𝐴𝑖, ≿′𝑖) for every player 𝑖, then
(𝑎∗𝑖, a−𝑖) ≻𝑖 (𝑎𝑖, a−𝑖) for every 𝑎𝑖 ∕= 𝑎∗𝑖 in 𝐴𝑖 and every a−𝑖 ∈ 𝐴−𝑖
for every 𝑖 ∈ 𝑁. Therefore, a∗ is a Nash equilibrium.
To show that it is unique, assume that ā is another Nash equilibrium. Then for every player 𝑖 ∈ 𝑁
(ā𝑖, ā−𝑖) ≿𝑖 (𝑎𝑖, ā−𝑖) for every 𝑎𝑖 ∈ 𝐴𝑖
which implies that ā𝑖 is a maximal element of ≿′𝑖. To see this, assume not. That is, assume that there exists some ã𝑖 ∈ 𝐴𝑖 such that ã𝑖 ≻′𝑖 ā𝑖, which implies
(ã𝑖, a−𝑖) ≻𝑖 (ā𝑖, a−𝑖) for every a−𝑖 ∈ 𝐴−𝑖
In particular
(ã𝑖, ā−𝑖) ≻𝑖 (ā𝑖, ā−𝑖)
which contradicts the assumption that ā is a Nash equilibrium. Therefore, if ā is another Nash equilibrium, then ā𝑖 is maximal in ≿′𝑖 and hence also a best element of ≿′𝑖 (Exercise 1.54), which contradicts the assumption that 𝑎∗𝑖 is the unique best element. Consequently, we conclude that a∗ is the unique Nash equilibrium of the game.
1.76 We show that 𝜌(𝑥, 𝑦) = ∣𝑥 − 𝑦∣ satisfies the requirements of a metric, namely
1. ∣𝑥 − 𝑦∣ ≥ 0.
2. ∣𝑥 − 𝑦∣ = 0 if and only if 𝑥 = 𝑦.
3. ∣𝑥 − 𝑦∣ = ∣𝑦 − 𝑥∣.
To establish the triangle inequality, we can consider various cases. For example, if
𝑥≤𝑦≤𝑧
∣𝑥 − 𝑧∣ + ∣𝑧 − 𝑦∣ ≥ ∣𝑥 − 𝑧∣ = 𝑧 − 𝑥 ≥ 𝑦 − 𝑥 = ∣𝑥 − 𝑦∣
If 𝑥 ≤ 𝑧 ≤ 𝑦
∣𝑥 − 𝑧∣ + ∣𝑧 − 𝑦∣ = 𝑧 − 𝑥 + 𝑦 − 𝑧 = 𝑦 − 𝑥 = ∣𝑥 − 𝑦∣
and so on.
1.77 We show that 𝜌∞(𝑥, 𝑦) = max𝑛𝑖=1 ∣𝑥𝑖 − 𝑦𝑖∣ satisfies the requirements of a metric, namely
1. max𝑛𝑖=1 ∣𝑥𝑖 − 𝑦𝑖 ∣ ≥ 0
2. max𝑛𝑖=1 ∣𝑥𝑖 − 𝑦𝑖 ∣ = 0 if and only if 𝑥𝑖 = 𝑦𝑖 for all 𝑖.
3. max𝑛𝑖=1 ∣𝑥𝑖 − 𝑦𝑖 ∣ = max𝑛𝑖=1 ∣𝑦𝑖 − 𝑥𝑖 ∣
4. For every 𝑖, ∣𝑥𝑖 − 𝑦𝑖 ∣ ≤ ∣𝑥𝑖 − 𝑧𝑖 ∣ + ∣𝑧𝑖 − 𝑦𝑖 ∣ from previous exercise. Therefore
max ∣𝑥𝑖 − 𝑦𝑖 ∣ ≤ max (∣𝑥𝑖 − 𝑧𝑖 ∣ + ∣𝑧𝑖 − 𝑦𝑖 ∣)
≤ max ∣𝑥𝑖 − 𝑧𝑖 ∣ + max ∣𝑧𝑖 − 𝑦𝑖 ∣
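As a quick numerical illustration (not part of the original solution), the sup metric and the triangle inequality in item 4 can be checked for sample points; the function name is mine.

```python
def rho_inf(x, y):
    """Sup (Chebyshev) metric on R^n: max_i |x_i - y_i|."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y, z = (1.0, 4.0, -2.0), (3.0, 1.0, 0.0), (0.0, 0.0, 5.0)
print(rho_inf(x, y))                                     # 3.0
print(rho_inf(x, y) <= rho_inf(x, z) + rho_inf(z, y))    # True: the triangle inequality holds
```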
1.78 For any 𝑛, any neighborhood of 1/𝑛 contains points of 𝑆 (namely 1/𝑛) and points
not in 𝑆 (1/𝑛 + 𝜖). Hence every point in 𝑆 is a boundary point. Also, 0 is a boundary
point. Therefore b(𝑆) = 𝑆 ∪ {0}. Note that 𝑆 ⊂ b(𝑆). Therefore, 𝑆 has no interior
points.
1.79
1. Let 𝑥 ∈ int 𝑆. Thus 𝑆 is a neighborhood of 𝑥. Therefore, 𝑇 ⊇ 𝑆 is a
neighborhood of 𝑥, so that 𝑥 is an interior point of 𝑇 .
2. Clearly, if 𝑥 ∈ 𝑆, then 𝑥 ∈ 𝑇 ⊆ 𝑇̄. Therefore, assume 𝑥 ∈ 𝑆̄ ∖ 𝑆, which implies that 𝑥 is a boundary point of 𝑆. Every neighborhood of 𝑥 contains other points of 𝑆 ⊆ 𝑇. Hence 𝑥 ∈ 𝑇̄.
1.80 Assume that 𝑆 is open. Every 𝑥 ∈ 𝑆 has a neighborhood which is disjoint from
𝑆 𝑐 . Hence no 𝑥 ∈ 𝑆 is a closure point of 𝑆 𝑐 . 𝑆 𝑐 contains all its closure points and is
therefore closed.
Conversely, assume that 𝑆 is closed. Let 𝑥 be a point in its complement 𝑆ᶜ. Since 𝑆 is closed and 𝑥 ∉ 𝑆, 𝑥 is not a boundary point of 𝑆. This implies that 𝑥 has a
neighborhood 𝑁 which is disjoint from 𝑆, that is 𝑁 ⊆ 𝑆 𝑐 . Hence, 𝑥 is an interior point
of 𝑆 𝑐 . This implies that 𝑆 𝑐 contains only interior points, and hence is open.
1.81 Clearly 𝑋 is a neighborhood of every point 𝑥 ∈ 𝑋, since 𝐵𝑟(𝑥) ⊆ 𝑋 for every 𝑟 > 0. Hence, every point 𝑥 ∈ 𝑋 is an interior point of 𝑋. Similarly, every point 𝑥 ∈ ∅ is an interior point (there are none). Since 𝑋 and ∅ are open, their complements ∅ and 𝑋 are closed.
Alternatively, ∅ has no boundary points, and is therefore open. Trivially, on the other hand, ∅ contains all its boundary points, and is therefore closed.
1.82 Let 𝑋 be a metric space. Assume 𝑋 is the union of two disjoint closed sets 𝐴 and
𝐵, that is
𝑋 =𝐴∪𝐵
𝐴∩𝐵 =∅
Then 𝐴 = 𝐵 𝑐 is open as is 𝐵 = 𝐴𝑐 . Therefore 𝑋 is not connected.
Conversely, assume that 𝑋 is not connected. Then there exist disjoint open sets 𝐴 and
𝐵 such that 𝑋 = 𝐴 ∪ 𝐵. But 𝐴 = 𝐵 𝑐 is also closed as is 𝐵 = 𝐴𝑐 . Therefore 𝑋 is the
union of two disjoint closed sets.
1.83 Assume 𝑆 is both open and closed, with ∅ ⊂ 𝑆 ⊂ 𝑋. We show that we can represent 𝑋 as the union of two disjoint open sets, 𝑆 and 𝑆ᶜ. For any 𝑆 ⊂ 𝑋, 𝑋 = 𝑆 ∪ 𝑆ᶜ and 𝑆 ∩ 𝑆ᶜ = ∅. 𝑆 is open by assumption. Its complement 𝑆ᶜ is open since 𝑆 is closed. Therefore, 𝑋 is not connected.
Conversely, assume that 𝑋 is not connected. That is, there exist two disjoint open sets 𝑆 and 𝑇 such that 𝑋 = 𝑆 ∪ 𝑇. Now 𝑆 = 𝑇ᶜ, which implies that 𝑆 is closed since 𝑇 is open. Therefore 𝑆 is both open and closed.
1.84 Assume that 𝑆 is both open and closed. Then so is 𝑆ᶜ, and 𝑋 is the disjoint union of two closed sets
𝑋 = 𝑆 ∪ 𝑆ᶜ
so that, writing 𝑆̄ for the closure of 𝑆 and 𝑆ᶜ̄ for the closure of 𝑆ᶜ,
b(𝑆) = 𝑆̄ ∩ 𝑆ᶜ̄ = 𝑆 ∩ 𝑆ᶜ = ∅
Conversely, assume that b(𝑆) = 𝑆̄ ∩ 𝑆ᶜ̄ = ∅. Consider any 𝑥 ∈ 𝑆̄. Since 𝑆̄ ∩ 𝑆ᶜ̄ = ∅, 𝑥 ∉ 𝑆ᶜ̄. A fortiori, 𝑥 ∉ 𝑆ᶜ, which implies that 𝑥 ∈ 𝑆 and therefore 𝑆̄ ⊆ 𝑆. 𝑆 is closed. Similarly we can show that 𝑆ᶜ̄ ⊆ 𝑆ᶜ, so that 𝑆ᶜ is closed and therefore 𝑆 is open. Thus 𝑆 is both open and closed.
1.85
1. Let {𝐺𝑖 } be a (possibly infinite) collection of open sets. Let 𝐺 = ∪𝑖 𝐺𝑖 . Let
𝑥 be a point in 𝐺. Then there exists some particular 𝐺𝑗 which contains 𝑥. Since
𝐺𝑗 is open, 𝐺𝑗 is a neighborhood of 𝑥. Since 𝐺𝑗 ⊆ 𝐺, 𝑥 is an interior point of 𝐺.
Since 𝑥 is an arbitrary point in 𝐺, we have shown that every 𝑥 ∈ 𝐺 is an interior
point. Hence, 𝐺 is open.
What happens if every 𝐺𝑖 is empty? In this case, 𝐺 = ∅ and is open (Exercise
1.81). The other possibility is that the collection {𝐺𝑖 } is empty. Again 𝐺 = ∅
which is open.
Suppose { 𝐺1 , 𝐺2 , . . . , 𝐺𝑛 } is a finite collection of open sets. Let 𝐺 = ∩𝑖 𝐺𝑖 . If
𝐺 = ∅, then it is trivially open. Otherwise, let 𝑥 be a point in 𝐺. Then 𝑥 ∈ 𝐺𝑖
for all 𝑖 = 1, 2, . . . , 𝑛. Since the sets 𝐺𝑖 are open, for every 𝑖, there exists an open
ball 𝐵(𝑥, 𝑟𝑖 ) ⊆ 𝐺𝑖 about 𝑥. Let 𝑟 be the smallest radius of these open balls, that
is 𝑟 = min{ 𝑟1 , 𝑟2 , . . . , 𝑟𝑛 }. Then 𝐵𝑟 (𝑥) ⊆ 𝐵(𝑥, 𝑟𝑖 ), so that 𝐵𝑟 (𝑥) ⊆ 𝐺𝑖 for all i.
Hence 𝐵𝑟 (𝑥) ⊆ 𝐺. 𝑥 is an interior point of 𝐺 and 𝐺 is open.
To complete the proof, we need to deal with the trivial case in which the collection
is empty. In that case, 𝐺 = ∩𝑖 𝐺𝑖 = 𝑋 and hence is open.
2. The corresponding properties of closed sets are established analogously.
1.86
1. Let 𝑥0 be an interior point of 𝑆. This implies there exists an open ball 𝐵 ⊆ 𝑆
about 𝑥0 . Every 𝑥 ∈ 𝐵 is an interior point of 𝑆. Hence 𝐵 ⊆ int 𝑆. 𝑥0 is an
interior point of int 𝑆 which is therefore open.
Let 𝐺 be any open subset of 𝑆 and 𝑥 be a point in 𝐺. 𝐺 is a neighborhood of 𝑥, which implies that 𝑆 ⊇ 𝐺 is also a neighborhood of 𝑥. Therefore 𝑥 is an interior
point of 𝑆. Therefore int 𝑆 contains every open subset 𝐺 ⊆ 𝑆, and hence is the
largest open set in 𝑆.
2. Let 𝑆̄ denote the closure of the set 𝑆. Clearly, 𝑆̄ is contained in its own closure. To show the converse, let 𝑥 be a closure point of 𝑆̄ and let 𝑁 be a neighborhood of 𝑥. Then 𝑁 contains some point 𝑥′ which is a closure point of 𝑆. 𝑁 is a neighborhood of 𝑥′ and therefore intersects 𝑆. Hence 𝑥 is a closure point of 𝑆, that is 𝑥 ∈ 𝑆̄. Consequently the closure of 𝑆̄ is 𝑆̄ itself, which implies that 𝑆̄ is closed.
Assume 𝐹 is a closed set containing 𝑆. Then
𝑆̄ ⊆ 𝐹̄ = 𝐹
since 𝐹 is closed. Hence, 𝑆̄ is a subset of every closed set containing 𝑆.
1.87 Every 𝑥 ∈ 𝑆 is either an interior point or a boundary point. Consequently, the
interior of 𝑆 is the set of all 𝑥 ∈ 𝑆 which are not boundary points
int 𝑆 = 𝑆 ∖ b(𝑆)
1.88 Assume that 𝑆 is closed, that is
𝑆̄ = 𝑆 ∪ b(𝑆) = 𝑆
This implies that b(𝑆) ⊆ 𝑆, that is, 𝑆 contains its boundary.
Conversely, assume that 𝑆 contains its boundary, that is 𝑆 ⊇ b(𝑆). Then
𝑆̄ = 𝑆 ∪ b(𝑆) = 𝑆
so 𝑆 is closed.
1.89 Assume 𝑆 is bounded, and let 𝑑 = 𝑑(𝑆). Choose any 𝑥 ∈ 𝑆. For all 𝑦 ∈ 𝑆,
𝜌(𝑥, 𝑦) ≤ 𝑑 < 𝑑 + 1. Therefore, 𝑦 ∈ 𝐵(𝑥, 𝑑 + 1). 𝑆 is contained in the open ball
𝐵(𝑥, 𝑑 + 1).
Conversely, assume 𝑆 is contained in the open ball 𝐵𝑟 (𝑥). Then for any 𝑦, 𝑧 ∈ 𝑆
𝜌(𝑦, 𝑧) ≤ 𝜌(𝑦, 𝑥) + 𝜌(𝑥, 𝑧) < 2𝑟
by the triangle inequality. Therefore 𝑑(𝑆) < 2𝑟 and the set is bounded.
1.90 Let 𝑦 ∈ 𝑆 ∩ 𝐵𝑟 (𝑥0 ). For every 𝑥 ∈ 𝑆, 𝜌(𝑥, 𝑦) < 𝑟 and therefore
𝜌(𝑥, 𝑥0 ) ≤ 𝜌(𝑥, 𝑦) + 𝜌(𝑦, 𝑥0 ) < 𝑟 + 𝑟 = 2𝑟
so that 𝑥 ∈ 𝐵2𝑟 (𝑥0 ).
1.91 Let y0 ∈ 𝑌. For any 𝑟 > 0, let y′ be the production plan which is 𝑟 units less than y0 in every commodity. Then, for any y ∈ 𝐵𝑟(y′),
𝑦𝑖 − 𝑦𝑖′ ≤ 𝜌∞(y, y′) < 𝑟 for every 𝑖
and therefore y < y0. By free disposal, y ∈ 𝑌. Thus 𝐵𝑟(y′) ⊂ 𝑌 and so y′ ∈ int 𝑌 ∕= ∅.
1.92 For any 𝑥 ∈ 𝑆1
𝜌𝑥 = 𝜌(𝑥, 𝑆2 ) > 0
Similarly, for every 𝑦 ∈ 𝑆2
𝜌𝑦 = 𝜌(𝑦, 𝑆1 ) > 0
Let
𝑇1 = ∪_{𝑥∈𝑆1} 𝐵_{𝜌𝑥/2}(𝑥) and 𝑇2 = ∪_{𝑦∈𝑆2} 𝐵_{𝜌𝑦/2}(𝑦)
Then 𝑇1 and 𝑇2 are open sets containing 𝑆1 and 𝑆2 respectively.
To show that 𝑇1 and 𝑇2 are disjoint, suppose to the contrary that 𝑧 ∈ 𝑇1 ∩ 𝑇2 . Then,
there exist points 𝑥 ∈ 𝑆1 and 𝑦 ∈ 𝑆2 such that
𝜌(𝑥, 𝑧) < 𝜌𝑥 /2,
𝜌(𝑦, 𝑧) < 𝜌𝑦 /2
Without loss of generality, suppose that 𝜌𝑥 ≤ 𝜌𝑦 and therefore
𝜌(𝑥, 𝑦) ≤ 𝜌(𝑥, 𝑧) + 𝜌(𝑦, 𝑧) < 𝜌𝑥 /2 + 𝜌𝑦 /2 ≤ 𝜌𝑦
which contradicts the definition of 𝜌𝑦 and shows that 𝑇1 ∩ 𝑇2 = ∅.
1.93 By Exercise 1.92, there exist disjoint open sets 𝑇1 and 𝑇2 such that 𝑆1 ⊆ 𝑇1 and
𝑆2 ⊆ 𝑇2 . Since 𝑆2 ⊆ 𝑇2 , 𝑆2 ∩ 𝑇2𝑐 = ∅. 𝑇2𝑐 is a closed set which contains 𝑇1 , and
therefore 𝑆2 ∩ 𝑇1 = ∅. 𝑇 = 𝑇1 is the desired set.
1.94 See Figure 1.2.
Figure 1.2: Open ball 𝐵1/2((2, 0)) about (2, 0) relative to 𝑋
1.95 Assume 𝑆 is connected. Suppose 𝑆 is not an interval. This implies that there exist numbers 𝑥, 𝑦, 𝑧 such that 𝑥 < 𝑦 < 𝑧 and 𝑥, 𝑧 ∈ 𝑆 while 𝑦 ∉ 𝑆. Then
𝑆 = (𝑆 ∩ (−∞, 𝑦)) ∪ (𝑆 ∩ (𝑦, ∞))
represents 𝑆 as the union of two disjoint open sets (relative to 𝑆), contradicting the
assumption that 𝑆 is connected.
Conversely, assume that 𝑆 is an interval. Suppose that 𝑆 is not connected. That is,
𝑆 = 𝐴 ∪ 𝐵 where 𝐴 and 𝐵 are nonempty disjoint closed sets. Choose 𝑥 ∈ 𝐴 and 𝑧 ∈ 𝐵.
Since 𝐴 and 𝐵 are disjoint, 𝑥 ∕= 𝑧. Without loss of generality, we may assume 𝑥 < 𝑧.
Since 𝑆 is an interval, [𝑥, 𝑧] ⊆ 𝑆 = 𝐴 ∪ 𝐵. Let
𝑦 = sup{ [𝑥, 𝑧] ∩ 𝐴 }
Clearly 𝑥 ≤ 𝑦 ≤ 𝑧 so that 𝑦 ∈ 𝑆. Now 𝑦 belongs to either 𝐴 or 𝐵. Since 𝐴 is closed in 𝑆, [𝑥, 𝑧] ∩ 𝐴 is closed and 𝑦 = sup{ [𝑥, 𝑧] ∩ 𝐴 } ∈ 𝐴. This implies that 𝑦 < 𝑧. Consequently, 𝑦 + 𝜖 ∈ 𝐵 for every 𝜖 > 0 such that 𝑦 + 𝜖 ≤ 𝑧. Since 𝐵 is closed, 𝑦 ∈ 𝐵. This implies that 𝑦 belongs to both 𝐴 and 𝐵, contradicting the assumption that 𝐴 ∩ 𝐵 = ∅. We conclude that 𝑆 must be connected.
1.96 Assume 𝑥𝑛 → 𝑥 and also 𝑥𝑛 → 𝑦. We have to show that 𝑥 = 𝑦. Suppose not,
that is suppose 𝑥 ∕= 𝑦 (see Figure 1.3). Then 𝜌(𝑥, 𝑦) = 𝑅 > 0. Let 𝑟 = 𝑅/3 > 0. Since
𝑥𝑛 → 𝑥, there exists some 𝑁𝑥 such that 𝑥𝑛 ∈ 𝐵𝑟 (𝑥) for all 𝑛 ≥ 𝑁𝑥 . Since 𝑥𝑛 → 𝑦,
there exists some 𝑁𝑦 such that 𝑥𝑛 ∈ 𝐵𝑟 (𝑦) for all 𝑛 ≥ 𝑁𝑦 . But these statements are
contradictory since 𝐵𝑟 (𝑥) ∩ 𝐵(𝑦, 𝑟) = ∅. We conclude that the successive terms of a
convergent sequence cannot get arbitrarily close to two distinct points, so that the limit of a convergent sequence is unique.
1.97 Let (𝑥𝑛) be a sequence which converges to 𝑥. There exists some 𝑁 such that
𝜌(𝑥𝑛, 𝑥) < 1 for all 𝑛 ≥ 𝑁. Let
𝑅 = max{ 𝜌(𝑥1, 𝑥), 𝜌(𝑥2, 𝑥), . . . , 𝜌(𝑥𝑁−1, 𝑥), 1 }
Then for all 𝑛, 𝜌(𝑥𝑛, 𝑥) ≤ 𝑅. That is, every element 𝑥𝑛 in the sequence (𝑥𝑛) belongs to 𝐵(𝑥, 𝑅 + 1), the open ball about 𝑥 of radius 𝑅 + 1. Therefore the sequence is bounded.
Figure 1.3: A convergent sequence cannot have two distinct limits
1.98 The share 𝑠𝑛 of the 𝑛th guest is
𝑠𝑛 = 1/2ⁿ
and lim 𝑠𝑛 = 0. However, 𝑠𝑛 > 0 for all 𝑛. There is no limit to the number of guests who will get a share of the cake, although the shares will get vanishingly small for large parties.
1.99 Suppose 𝑥𝑛 → 𝑥. That is, for every 𝜖 > 0 there exists some 𝑁 such that 𝜌(𝑥𝑛, 𝑥) < 𝜖/2 for all 𝑛 ≥ 𝑁. Then, for all 𝑚, 𝑛 ≥ 𝑁
𝜌(𝑥𝑚, 𝑥𝑛) ≤ 𝜌(𝑥𝑚, 𝑥) + 𝜌(𝑥, 𝑥𝑛) < 𝜖/2 + 𝜖/2 = 𝜖
so that (𝑥𝑛) is a Cauchy sequence.
1.100 Let (𝑥𝑛) be a Cauchy sequence. There exists some 𝑁 such that
𝜌(𝑥𝑛, 𝑥𝑁) < 1 for all 𝑛 ≥ 𝑁. Let
𝑅 = max{ 𝜌(𝑥1, 𝑥𝑁), 𝜌(𝑥2, 𝑥𝑁), . . . , 𝜌(𝑥𝑁−1, 𝑥𝑁), 1 }
Every 𝑥𝑛 belongs to 𝐵(𝑥𝑁, 𝑅 + 1), the ball of radius 𝑅 + 1 centered on 𝑥𝑁. Therefore the sequence is bounded.
1.101 Let (𝑥𝑛 ) be a bounded increasing sequence in ℜ and let 𝑆 = { 𝑥𝑛 } be the set of
elements of (𝑥𝑛 ). Let 𝑏 be the least upper bound of 𝑆. We show that 𝑥𝑛 → 𝑏.
First observe that 𝑥𝑛 ≤ 𝑏 for every 𝑛 (since 𝑏 is an upper bound). Since 𝑏 is the least
upper bound, for every 𝜖 > 0 there exists some element 𝑥𝑁 such that 𝑥𝑁 > 𝑏 − 𝜖. Since
(𝑥𝑛 ) is increasing, we must have
𝑏 − 𝜖 < 𝑥𝑛 ≤ 𝑏 for every 𝑛 ≥ 𝑁
That is, for every 𝜖 > 0 there exists an 𝑁 such that
𝜌(𝑥𝑛, 𝑏) < 𝜖 for every 𝑛 ≥ 𝑁
that is, 𝑥𝑛 → 𝑏.
1.102 If 𝛽 > 1, the sequence 𝛽, 𝛽 2 , 𝛽 3 , . . . is unbounded.
Otherwise, if 𝛽 ≤ 1, 𝛽ⁿ ≤ 𝛽ⁿ⁻¹ and the sequence is decreasing and bounded below by 0. Therefore the sequence converges (Exercise 1.101). Let 𝑥 = lim_{𝑛→∞} 𝛽ⁿ. Then 𝛽ⁿ⁺¹ = 𝛽𝛽ⁿ and therefore
𝑥 = lim_{𝑛→∞} 𝛽ⁿ⁺¹ = 𝛽 lim_{𝑛→∞} 𝛽ⁿ = 𝛽𝑥
which can be satisfied if and only if
∙ 𝛽 = 1, in which case 𝑥 = lim 1𝑛 = 1
∙ 𝑥 = 0 when 0 ≤ 𝛽 < 1
Therefore
𝛽 𝑛 → 0 ⇐⇒ 𝛽 < 1
1.103
1. For every 𝑥 ∈ ℜ,
(𝑥 − √2)² ≥ 0
Expanding,
𝑥² − 2√2𝑥 + 2 ≥ 0, that is 𝑥² + 2 ≥ 2√2𝑥
Dividing by 𝑥,
𝑥 + 2/𝑥 ≥ 2√2 for every 𝑥 > 0
Therefore
(1/2)(𝑥 + 2/𝑥) ≥ √2 for every 𝑥 > 0
2. Let (𝑥ⁿ) be the sequence defined in Example 1.64, that is
𝑥ⁿ = (1/2)(𝑥ⁿ⁻¹ + 2/𝑥ⁿ⁻¹)
Starting from 𝑥⁰ = 2, it is clear that 𝑥ⁿ ≥ 0 for all 𝑛. Substituting into the inequality (1/2)(𝑥 + 2/𝑥) ≥ √2 gives
𝑥ⁿ = (1/2)(𝑥ⁿ⁻¹ + 2/𝑥ⁿ⁻¹) ≥ √2
That is, 𝑥ⁿ ≥ √2 for every 𝑛. Therefore for every 𝑛
𝑥ⁿ − 𝑥ⁿ⁺¹ = 𝑥ⁿ − (1/2)(𝑥ⁿ + 2/𝑥ⁿ) = (1/2)(𝑥ⁿ − 2/𝑥ⁿ) ≥ (1/2)(𝑥ⁿ − 2/√2) = (1/2)(𝑥ⁿ − √2) ≥ 0
This implies that 𝑥ⁿ⁺¹ ≤ 𝑥ⁿ. Consequently √2 ≤ 𝑥ⁿ ≤ 2 for every 𝑛, so (𝑥ⁿ) is a bounded monotone sequence. By Exercise 1.101, 𝑥ⁿ → 𝑥. The limit 𝑥 satisfies the equation
𝑥 = (1/2)(𝑥 + 2/𝑥)
Solving, this implies 𝑥² = 2 or 𝑥 = √2 as required.
1.104 The following sequence approximates the square root of any positive number 𝑎:
𝑥¹ = 𝑎
𝑥ⁿ⁺¹ = (1/2)(𝑥ⁿ + 𝑎/𝑥ⁿ)
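The recursion is easy to run numerically. The Python sketch below (an illustration, not part of the original text) iterates 𝑥ⁿ⁺¹ = ½(𝑥ⁿ + 𝑎/𝑥ⁿ) until the square of the iterate is close to 𝑎; it converges rapidly to √𝑎.

```python
def approx_sqrt(a, tol=1e-12):
    """Approximate sqrt(a) for a > 0 via x_{n+1} = (x_n + a/x_n) / 2."""
    x = a                       # x_1 = a
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)   # the recursion of Exercise 1.104
    return x

print(approx_sqrt(2))   # 1.4142135623730951
print(approx_sqrt(9))   # 3.0
```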
1.105 Let 𝑥 ∈ 𝑆̄. If 𝑥 ∈ 𝑆, then 𝑥 is the limit of the constant sequence (𝑥, 𝑥, 𝑥, . . . ). If 𝑥 ∉ 𝑆, then 𝑥 is a boundary point of 𝑆. For every 𝑛, the ball 𝐵(𝑥, 1/𝑛) contains a point 𝑥𝑛 ∈ 𝑆. From the sequence of open balls 𝐵(𝑥, 1/𝑛) for 𝑛 = 1, 2, 3, . . . , we can generate a sequence of points 𝑥𝑛 which converges to 𝑥.
Conversely, assume that 𝑥 is the limit of a sequence (𝑥𝑛) of points in 𝑆. Either 𝑥 ∈ 𝑆, and therefore 𝑥 ∈ 𝑆̄; or 𝑥 ∉ 𝑆. In the latter case, since 𝑥𝑛 → 𝑥, every neighborhood of 𝑥 contains points 𝑥𝑛 of the sequence. Hence, 𝑥 is a boundary point of 𝑆 and 𝑥 ∈ 𝑆̄.
1.106 𝑆 is closed if and only if 𝑆 = 𝑆̄. The result follows from Exercise 1.105.
1.107 Let 𝑆 be a closed subset of a complete metric space 𝑋. Let (𝑥𝑛 ) be a Cauchy
sequence in 𝑆. Since 𝑋 is complete, 𝑥𝑛 → 𝑥 ∈ 𝑋. Since 𝑆 is closed, 𝑥 ∈ 𝑆 (Exercise
1.106).
1.108 Since 𝑑(𝑆 𝑛 ) → 0, 𝑆 cannot contain more than one point. Therefore, it suffices
to show that 𝑆 is nonempty. Choose some 𝑥𝑛 from each 𝑆 𝑛 . Since 𝑑(𝑆 𝑛 ) → 0, (𝑥𝑛 ) is
a Cauchy sequence. Since 𝑋 is complete, there exists some 𝑥 ∈ 𝑋 such that 𝑥𝑛 → 𝑥.
Choose some 𝑚. Since the sets are nested, the subsequence { 𝑥𝑛 : 𝑛 ≥ 𝑚 } ⊆ 𝑆𝑚. Since 𝑆𝑚 is closed, 𝑥 ∈ 𝑆𝑚 (Exercise 1.106). Since 𝑥 ∈ 𝑆𝑚 for every 𝑚,
𝑥 ∈ ∩_{𝑚=1}^{∞} 𝑆𝑚
1.109 If player 1 picks closed balls whose radius decreases by at least half after each
pair of moves, then { 𝑆 1 , 𝑆 3 , 𝑆 5 , . . . } is a nested sequence of closed sets which has a
nonempty intersection (Exercise 1.108).
1.110 Let (𝑥𝑛 ) be a sequence in 𝑆 ⊆ 𝑇 with 𝑆 closed and 𝑇 compact. Since 𝑇 is
compact, there exists a convergent subsequence 𝑥𝑚 → 𝑥 ∈ 𝑇 . Since 𝑆 is closed,
we must have 𝑥 ∈ 𝑆 (Exercise 1.106). Therefore (𝑥𝑛 ) contains a subsequence which
converges in 𝑆, so that 𝑆 is compact.
1.111 Let (𝑥𝑛 ) be a Cauchy sequence in a metric space. For every 𝜖 > 0, there exists
𝑁 such that
𝜌(𝑥𝑚 , 𝑥𝑛 ) < 𝜖/2 for all 𝑚, 𝑛 ≥ 𝑁
Trivially, if (𝑥𝑛 ) converges, it has a convergent subsequence (the whole sequence).
Conversely, assume that (𝑥𝑛 ) has a subsequence (𝑥𝑚 ) which converges to 𝑥. That is,
there exists some 𝑀 such that
𝜌(𝑥𝑚 , 𝑥) < 𝜖/2 for all 𝑚 ≥ 𝑀
Choose some term 𝑥𝑚 of the subsequence with 𝑚 ≥ max{ 𝑀, 𝑁 }. Then, by the triangle inequality,
𝜌(𝑥𝑛, 𝑥) ≤ 𝜌(𝑥𝑛, 𝑥𝑚) + 𝜌(𝑥𝑚, 𝑥) < 𝜖/2 + 𝜖/2 = 𝜖 for all 𝑛 ≥ 𝑁
so the whole sequence converges to 𝑥.
1.112 We proceed sequentially as follows. Choose any 𝑥1 in 𝑋. If the open ball 𝐵(𝑥1, 𝑟) contains 𝑋, we are done. Otherwise, choose some 𝑥2 ∉ 𝐵(𝑥1, 𝑟) and consider the set 𝐵(𝑥1, 𝑟) ∪ 𝐵(𝑥2, 𝑟). If this set contains 𝑋, we are done. Otherwise, choose some 𝑥3 ∉ 𝐵(𝑥1, 𝑟) ∪ 𝐵(𝑥2, 𝑟) and consider 𝐵(𝑥1, 𝑟) ∪ 𝐵(𝑥2, 𝑟) ∪ 𝐵(𝑥3, 𝑟), and so on.
The process must terminate with a finite number of open balls. Otherwise, if the process could be continued indefinitely, we could construct an infinite sequence (𝑥1, 𝑥2, 𝑥3, . . . ), any two points of which are at least 𝑟 apart, which therefore has no convergent subsequence. This would contradict the compactness of 𝑋.
1.113 Assume 𝑋 is compact. The previous exercise showed that 𝑋 is totally bounded.
Further, since every sequence has a convergent subsequence, every Cauchy sequence
converges (Exercise 1.111). Therefore 𝑋 is complete.
Conversely, assume that 𝑋 is complete and totally bounded and let 𝑆1 = { 𝑥11 , 𝑥21 , 𝑥31 , . . . }
be an infinite sequence of points in 𝑋. Since 𝑋 is totally bounded, it is covered by a
finite collection of open balls of radius 1/2. 𝑆1 has a subsequence 𝑆2 = { 𝑥12 , 𝑥22 , 𝑥32 , . . . }
all of whose points lie in one of the open balls. Similarly, 𝑆2 has a subsequence
𝑆3 = { 𝑥13 , 𝑥23 , 𝑥33 , . . . } all of whose points lie in an open ball of radius 1/3. Continuing in this fashion, we construct a sequence of subsequences, each of which lies in a
ball of smaller and smaller radius. Consequently, successive terms of the “diagonal”
subsequence { 𝑥11 , 𝑥22 , 𝑥33 , . . . } get closer and closer together. That is, 𝑆 is a Cauchy
sequence. Since 𝑋 is complete, 𝑆 converges in 𝑋 and 𝑆1 has a convergent subsequence
𝑆. Hence, 𝑋 is compact.
1.114
1. Every big set 𝑇 ∈ ℬ has at least two distinct points. Hence 𝑑(𝑇) > 0 for every 𝑇 ∈ ℬ.
2. Otherwise, there exists 𝑛 such that 𝑑(𝑇 ) ≥ 1/𝑛 for every 𝑇 ∈ ℬ and therefore
𝛿 = inf 𝑇 ∈ℬ 𝑑(𝑇 ) ≥ 1/𝑛 > 0.
3. Choose a point 𝑥𝑛 in each 𝑇𝑛 . Since 𝑋 is compact, the sequence (𝑥𝑛 ) has a
convergent subsequence (𝑥𝑚 ) which converges to some point 𝑥0 ∈ 𝑋.
4. The point 𝑥0 belongs to at least one 𝑆0 in the open cover 𝒞. Since 𝑆0 is open,
there exists some open ball 𝐵𝑟 (𝑥0 ) ⊆ 𝑆0 .
5. Consider the concentric ball 𝐵𝑟/2 (𝑥0 ). Since (𝑥𝑚 ) is a convergent subsequence,
there exists some 𝑀 such that 𝑥𝑚 ∈ 𝐵𝑟/2(𝑥0) for every 𝑚 ≥ 𝑀.
6. Choose some 𝑛0 ≥ max{ 𝑀, 2/𝑟 }. Then 1/𝑛0 ≤ 𝑟/2 and 𝑑(𝑇𝑛0) < 1/𝑛0 ≤ 𝑟/2. 𝑥𝑛0 ∈ 𝑇𝑛0 ∩ 𝐵𝑟/2(𝑥0) and therefore (Exercise 1.90) 𝑇𝑛0 ⊆ 𝐵𝑟(𝑥0) ⊆ 𝑆0.
This contradicts the assumption that 𝑇𝑛 is a big set. Therefore, we conclude that
𝛿 > 0.
1.115
1. 𝑋 is totally bounded (Exercise 1.112). Therefore, for every 𝑟 > 0, there exists a finite number of open balls 𝐵𝑟(𝑥𝑖) such that
𝑋 = 𝐵𝑟(𝑥1) ∪ 𝐵𝑟(𝑥2) ∪ ⋅ ⋅ ⋅ ∪ 𝐵𝑟(𝑥𝑛)
2. Choose 𝑟 < 𝛿/2, so that 𝑑(𝐵𝑟(𝑥𝑖)) ≤ 2𝑟 < 𝛿. By definition of the Lebesgue number, every 𝐵𝑟(𝑥𝑖) is
contained in some 𝑆𝑖 ∈ 𝒞.
3. The collection of open balls {𝐵𝑟(𝑥𝑖)} covers 𝑋. Therefore, for every 𝑥 ∈ 𝑋,
there exists 𝑖 such that
𝑥 ∈ 𝐵𝑟 (𝑥𝑖 ) ⊆ 𝑆𝑖
Therefore, the finite collection 𝑆1 , 𝑆2 , . . . , 𝑆𝑛 covers 𝑋.
1.116 For any family of subsets 𝒞,
∩_{𝑆∈𝒞} 𝑆 = ∅ ⇐⇒ ∪_{𝑆∈𝒞} 𝑆ᶜ = 𝑋
Suppose to the contrary that 𝒞 is a collection of closed sets with the finite intersection property, but that ∩_{𝑆∈𝒞} 𝑆 = ∅. Then { 𝑆ᶜ : 𝑆 ∈ 𝒞 } is an open cover of 𝑋 which does not have a finite subcover. Consequently 𝑋 cannot be compact.
Conversely, assume every collection of closed sets with the finite intersection property
has a nonempty intersection. Let ℬ be an open cover of 𝑋. Let
𝒞 = { 𝑆 ⊆ 𝑋 : 𝑆ᶜ ∈ ℬ }
That is, ∪_{𝑆∈𝒞} 𝑆ᶜ = 𝑋, which implies ∩_{𝑆∈𝒞} 𝑆 = ∅
Consequently, 𝒞 does not have the finite intersection property. There exists a finite subcollection { 𝑆1, 𝑆2, . . . , 𝑆𝑛 } such that
𝑆1 ∩ 𝑆2 ∩ ⋅ ⋅ ⋅ ∩ 𝑆𝑛 = ∅
which implies that
𝑆1ᶜ ∪ 𝑆2ᶜ ∪ ⋅ ⋅ ⋅ ∪ 𝑆𝑛ᶜ = 𝑋
Therefore { 𝑆1ᶜ, 𝑆2ᶜ, . . . , 𝑆𝑛ᶜ } is a finite subcover of 𝑋. Thus, 𝑋 is compact.
1.117 Every finite collection of nested (nonempty) sets has the finite intersection property. By Exercise 1.116, the sequence has a non-empty intersection. (Note: every set
𝑆𝑖 is a subset of the compact set 𝑆1 .)
1.118 (1) =⇒ (2) Exercises 1.114 and 1.115.
(2) =⇒ (3) Exercise 1.116
(3) =⇒ (1) Let 𝑋 be a metric space in which every collection of closed subsets with the finite intersection property has a nonempty intersection. Let (𝑥𝑛) be a sequence in 𝑋. For any 𝑛, let 𝑆𝑛 be the tail of the sequence minus the first 𝑛 terms, that is
𝑆𝑛 = { 𝑥𝑚 : 𝑚 = 𝑛 + 1, 𝑛 + 2, . . . }
The collection of closures (𝑆̄𝑛) has the finite intersection property since, for any finite set of integers { 𝑛1, 𝑛2, . . . , 𝑛𝑘 },
𝑆̄𝑛1 ∩ 𝑆̄𝑛2 ∩ ⋅ ⋅ ⋅ ∩ 𝑆̄𝑛𝑘 ⊇ 𝑆̄𝐾 ∕= ∅
where 𝐾 = max{ 𝑛1, 𝑛2, . . . , 𝑛𝑘 }. Therefore
∩_{𝑛=1}^{∞} 𝑆̄𝑛 ∕= ∅
Choose any 𝑥 ∈ ∩_{𝑛=1}^{∞} 𝑆̄𝑛. That is, 𝑥 ∈ 𝑆̄𝑛 for each 𝑛 = 1, 2, . . . . Thus, for every 𝑟 > 0 and 𝑛 = 1, 2, . . . , there exists some 𝑥𝑛 ∈ 𝐵𝑟(𝑥) ∩ 𝑆𝑛.
We construct a subsequence as follows. For 𝑘 = 1, 2, . . . , let 𝑥𝑘 be the first term in 𝑆𝑘 which belongs to 𝐵1/𝑘(𝑥). Then, (𝑥𝑘) is a subsequence of (𝑥𝑛) which converges to 𝑥. We conclude that every sequence has a convergent subsequence.
1.119 Assume (𝑥𝑛 ) is a bounded sequence in ℜ. Without loss of generality, we can
assume that { 𝑥𝑛 } ⊂ [0, 1]. Divide 𝐼 0 = [0, 1] into two sub-intervals [0, 1/2] and
[1/2, 1]. At least one of the sub-intervals must contain an infinite number of terms of
the sequence. Call this interval 𝐼 1 . Continuing this process of subdivision, we obtain
a nested sequence of intervals
𝐼0 ⊃ 𝐼1 ⊃ 𝐼2 ⊃ . . .
each of which contains an infinite number of terms of the sequence. Consequently,
we can construct a subsequence (𝑥𝑚 ) with 𝑥𝑚 ∈ 𝐼 𝑚 . Furthermore, the intervals get
smaller and smaller with 𝑑(𝐼 𝑛 ) → 0, so that (𝑥𝑚 ) is a Cauchy sequence. Since ℜ is
complete, the subsequence (𝑥𝑚 ) converges to 𝑥 ∈ ℜ.
Note how we implicitly called on the Axiom of Choice (Remark 1.5) in choosing a
subsequence from the nested sequence of intervals.
1.120 Let (𝑥𝑛 ) be a Cauchy sequence in ℜ. That is, for every 𝜖 > 0, there exists 𝑁 such
that ∣𝑥𝑛 − 𝑥𝑚 ∣ < 𝜖 for all 𝑚, 𝑛 ≥ 𝑁 . (𝑥𝑛 ) is bounded (Exercise 1.100) and hence by the
Bolzano-Weierstrass theorem, it has a convergent subsequence (𝑥𝑚 ) with 𝑥𝑚 → 𝑥 ∈ ℜ.
Choose 𝑥𝑟 from the convergent subsequence such that 𝑟 ≥ 𝑁 and ∣𝑥𝑟 − 𝑥∣ < 𝜖/2. By
the triangle inequality
∣𝑥𝑛 − 𝑥∣ ≤ ∣𝑥𝑛 − 𝑥𝑟 ∣ + ∣𝑥𝑟 − 𝑥∣ < 𝜖/2 + 𝜖/2 = 𝜖
Hence the sequence (𝑥𝑛 ) converges to 𝑥 ∈ ℜ.
1.121 Since 𝑋1 and 𝑋2 are linear spaces, x1 + y1 ∈ 𝑋1 and x2 + y2 ∈ 𝑋2 , so that
(x1 + y1 , x2 + y2 ) ∈ 𝑋1 × 𝑋2 . Similarly (𝛼x1 , 𝛼x2 ) ∈ 𝑋1 × 𝑋2 for every (x1 , x2 ) ∈
𝑋1 × 𝑋2 . Hence, 𝑋 = 𝑋1 × 𝑋2 is closed under addition and scalar multiplication.
With addition and scalar multiplication defined component-wise, 𝑋 inherits the arithmetic properties (like associativity) of its constituent spaces. Verifying this would
proceed identically as for ℜ𝑛 . It is straightforward though tedious. The zero element
in 𝑋 is 0 = (01 , 02 ) where 01 is the zero element in 𝑋1 and 02 is the zero element in
𝑋2 . Similarly, the inverse of x = (x1 , x2 ) is −x = (−x1 , −x2 ).
1.122
1.
x+y =x+z
−x + (x + y) = −x + (x + z)
(−x + x) + y = (−x + x) + z
0+y =0+z
y=z
2.
𝛼x = 𝛼y
(1/𝛼)(𝛼x) = (1/𝛼)(𝛼y)
((1/𝛼)𝛼)x = ((1/𝛼)𝛼)y
x = y
3. 𝛼x = 𝛽x implies
(𝛼 − 𝛽)x = 𝛼x − 𝛽x = 0
Provided x ∕= 0, this requires 𝛼 − 𝛽 = 0, that is 𝛼 = 𝛽.
4.
(𝛼 − 𝛽)x = (𝛼 + (−𝛽))x
= 𝛼x + (−𝛽)x
= 𝛼x − 𝛽x
5.
𝛼(x − y) = 𝛼(x + (−1)y)
= 𝛼x + 𝛼(−1)y
= 𝛼x − 𝛼y
6.
𝛼0 = 𝛼(x + (−x))
= 𝛼x + 𝛼(−x)
= 𝛼x − 𝛼x
=0
1.123 The linear hull of the vectors {(1, 0), (0, 2)} is
lin {(1, 0), (0, 2)} = { 𝛼1(1, 0) + 𝛼2(0, 2) : 𝛼1, 𝛼2 ∈ ℜ } = { (𝛼1, 2𝛼2) : 𝛼1, 𝛼2 ∈ ℜ } = ℜ²
The linear hull of the vectors {(1, 0), (0, 2)} is the whole plane ℜ2 . Figure 1.4 illustrates
how any vector in ℜ2 can be obtained as a linear combination of {(1, 0), (0, 2)}.
1.124
1. From the definition of 𝛼,
𝛼𝑆 = 𝑤(𝑆) − ∑_{𝑇⊊𝑆} 𝛼𝑇
Solutions for Foundations of Mathematical Economics
(−2, 3)
3
2
1
-2
-1
0
1
Figure 1.4: Illustrating the span of { (1, 0), (0, 2) }.
for every 𝑆 ⊆ 𝑁. Rearranging,
𝑤(𝑆) = 𝛼𝑆 + ∑_{𝑇⊊𝑆} 𝛼𝑇 = ∑_{𝑇⊆𝑆} 𝛼𝑇
2.
∑_{𝑇⊆𝑁} 𝛼𝑇 𝑤𝑇(𝑆) = ∑_{𝑇⊆𝑆} 𝛼𝑇 𝑤𝑇(𝑆) + ∑_{𝑇∕⊆𝑆} 𝛼𝑇 𝑤𝑇(𝑆)
= ∑_{𝑇⊆𝑆} 𝛼𝑇 ⋅ 1 + ∑_{𝑇∕⊆𝑆} 𝛼𝑇 ⋅ 0
= ∑_{𝑇⊆𝑆} 𝛼𝑇
= 𝑤(𝑆)
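The recursion 𝛼𝑆 = 𝑤(𝑆) − ∑_{𝑇⊊𝑆} 𝛼𝑇 can be evaluated coalition by coalition, smallest coalitions first. The Python sketch below is an illustration only; the three-player game 𝑤 it uses is a made-up example, and it finishes by checking the identity 𝑤(𝑆) = ∑_{𝑇⊆𝑆} 𝛼𝑇 from part 1.

```python
from itertools import combinations

def subsets(S):
    """All nonempty subsets of the frozenset S."""
    return [frozenset(c) for r in range(1, len(S) + 1)
            for c in combinations(sorted(S), r)]

# A hypothetical three-player game (any TP-coalitional game would do).
w = {frozenset(S): v for S, v in
     {(1,): 0, (2,): 0, (3,): 0, (1, 2): 2, (1, 3): 3, (2, 3): 4, (1, 2, 3): 7}.items()}

alpha = {}
for S in sorted(w, key=len):                  # proper subsets are processed first
    alpha[S] = w[S] - sum(alpha[T] for T in subsets(S) if T < S)

# Check that w(S) equals the sum of alpha_T over all T contained in S.
assert all(w[S] == sum(alpha[T] for T in subsets(S)) for S in w)
print({tuple(sorted(S)): a for S, a in alpha.items()})
```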
1.125
1. Choose any x ∈ 𝑆. By homogeneity 0x = 𝜃 ∈ 𝑆.
2. For every x ∈ 𝑆, −x = (−1)x ∈ 𝑆 by homogeneity.
1.126 Examples of subspaces in ℜ𝑛 include:
1. The set containing just the null vector {0} is a subspace.
2. Let x be any element in ℜ𝑛 and let 𝑇 be the set of all scalar multiples of x
𝑇 = { 𝛼x : 𝛼 ∈ ℜ }
𝑇 is a line through the origin in ℜ𝑛 and is a subspace.
3. Let 𝑆 be the set of all 𝑛-tuples with zero first coordinate, that is
𝑆 = { (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) : 𝑥1 = 0, 𝑥𝑗 ∈ ℜ, 𝑗 ∕= 1 }
For any x, y ∈ 𝑆
x + y = (0, 𝑥2 , 𝑥3 , . . . , 𝑥𝑛 ) + (0, 𝑦2 , 𝑦3 , . . . , 𝑦𝑛 )
= (0, 𝑥2 + 𝑦2 , 𝑥3 + 𝑦3 , . . . , 𝑥𝑛 + 𝑦𝑛 ) ∈ 𝑆
Similarly
𝛼x = 𝛼(0, 𝑥2 , 𝑥3 , . . . , 𝑥𝑛 )
= (0, 𝛼𝑥2 , 𝛼𝑥3 , . . . , 𝛼𝑥𝑛 ) ∈ 𝑆
Therefore 𝑆 is a subspace of ℜ𝑛 . Generalizing, any set of vectors with one or
more coordinates identically zero is a subspace of ℜ𝑛 .
4. We will meet some more complicated subspaces in Chapter 2.
1.127 No: −x ∉ ℜ𝑛+ if x ∈ ℜ𝑛+ unless x = 0. ℜ𝑛+ is an example of a cone (Section 1.4.5).
1.128 lin 𝑆 is a subspace Let x, y be two elements in lin 𝑆. x is a linear combination
of elements of 𝑆, that is
x = 𝛼1x1 + 𝛼2x2 + ⋅ ⋅ ⋅ + 𝛼𝑛x𝑛
Similarly
y = 𝛽1x1 + 𝛽2x2 + ⋅ ⋅ ⋅ + 𝛽𝑛x𝑛
and
x + y = (𝛼1 + 𝛽1 )𝑥1 + (𝛼2 + 𝛽2 )𝑥2 + ⋅ ⋅ ⋅ + (𝛼𝑛 + 𝛽𝑛 )𝑥𝑛 ∈ lin 𝑆
and
𝛼x = 𝛼𝛼1 𝑥1 + 𝛼𝛼2 𝑥2 + ⋅ ⋅ ⋅ + 𝛼𝛼𝑛 𝑥𝑛 ∈ lin 𝑆
This shows that lin 𝑆 is closed under addition and scalar multiplication and hence
is a subspace.
lin 𝑆 is the smallest subspace containing 𝑆 Let 𝑇 be any subspace containing 𝑆.
Then 𝑇 contains all linear combinations of elements in 𝑆, so that lin 𝑆 ⊂ 𝑇 .
Hence lin 𝑆 is the smallest subspace containing S.
1.129 The previous exercise showed that lin 𝑆 is a subspace. Therefore, if 𝑆 = lin 𝑆,
𝑆 is a subspace.
Conversely, assume that 𝑆 is a subspace. Then 𝑆 is the smallest subspace containing
𝑆, and therefore 𝑆 = lin 𝑆 (again by the previous exercise).
1.130 Let x, y ∈ 𝑆 = 𝑆1 ∩ 𝑆2 . Hence x, y ∈ 𝑆1 and for any 𝛼, 𝛽 ∈ ℜ, 𝛼x + 𝛽y ∈ 𝑆1 .
Similarly 𝛼x + 𝛽y ∈ 𝑆2 and therefore 𝛼x + 𝛽y ∈ 𝑆. 𝑆 is a subspace.
1.131 Let 𝑆 = 𝑆1 + 𝑆2 . First note that 0 = 0 + 0 ∈ 𝑆. Suppose x, y belong to 𝑆. Then
there exist s1 , t1 ∈ 𝑆1 and s2 , t2 ∈ 𝑆2 such that x = s1 + s2 and y = t1 + t2 . For any
𝛼, 𝛽 ∈ ℜ,
𝛼x + 𝛽y = 𝛼(s1 + s2 ) + 𝛽(t1 + t2 )
= (𝛼s1 + 𝛽t1 ) + (𝛼s2 + 𝛽t2 ) ∈ 𝑆
since 𝛼s1 + 𝛽t1 ∈ 𝑆1 and 𝛼s2 + 𝛽t2 ∈ 𝑆2 .
1.132 Let
𝑆1 = { 𝛼(1, 0) : 𝛼 ∈ ℜ }
𝑆2 = { 𝛼(0, 1) : 𝛼 ∈ ℜ }
𝑆1 and 𝑆2 are respectively the horizontal and vertical axes in ℜ². Their union is not a subspace, since for example
(1, 1) = (1, 0) + (0, 1) ∉ 𝑆1 ∪ 𝑆2
However, any vector in ℜ² can be written as the sum of an element of 𝑆1 and an element of 𝑆2. Therefore, their sum is the whole space ℜ², that is
𝑆1 + 𝑆2 = ℜ²
1.133 Assume that 𝑆 is linearly dependent, that is, there exist x1, . . . , x𝑛 ∈ 𝑆 and 𝛼2, . . . , 𝛼𝑛 ∈ ℜ such that
x1 = 𝛼2x2 + 𝛼3x3 + ⋅ ⋅ ⋅ + 𝛼𝑛x𝑛
Rearranging, this implies
1x1 − 𝛼2x2 − 𝛼3x3 − ⋅ ⋅ ⋅ − 𝛼𝑛x𝑛 = 0
Conversely, assume there exist x1, x2, . . . , x𝑛 ∈ 𝑆 and 𝛼1, 𝛼2, . . . , 𝛼𝑛 ∈ ℜ, not all zero, such that
𝛼1x1 + 𝛼2x2 + ⋅ ⋅ ⋅ + 𝛼𝑛x𝑛 = 0
Assume without loss of generality that 𝛼1 ∕= 0. Then
x1 = −(𝛼2/𝛼1)x2 − (𝛼3/𝛼1)x3 − ⋅ ⋅ ⋅ − (𝛼𝑛/𝛼1)x𝑛
which shows that
x1 ∈ lin (𝑆 ∖ {x1})
1.134 Assume {(1, 1, 1), (0, 1, 1), (0, 0, 1)} are linearly dependent. Then there exist 𝛼1, 𝛼2, 𝛼3, not all zero, such that
𝛼1(1, 1, 1) + 𝛼2(0, 1, 1) + 𝛼3(0, 0, 1) = (0, 0, 0)
or equivalently
𝛼1 = 0
𝛼1 + 𝛼2 = 0
𝛼1 + 𝛼2 + 𝛼3 = 0
which imply that
𝛼1 = 𝛼2 = 𝛼3 = 0
a contradiction. Therefore {(1, 1, 1), (0, 1, 1), (0, 0, 1)} are linearly independent.
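Because the system above is triangular, the conclusion can also be verified by simple forward-substitution. The Python sketch below is illustrative only: it confirms numerically that the only solution of 𝛼1(1, 1, 1) + 𝛼2(0, 1, 1) + 𝛼3(0, 0, 1) = 0 is 𝛼1 = 𝛼2 = 𝛼3 = 0.

```python
def solve_lower_triangular(rows, rhs):
    """Forward-substitution for a lower-triangular system with nonzero diagonal."""
    solution = []
    for i, row in enumerate(rows):
        known = sum(row[j] * solution[j] for j in range(i))
        solution.append((rhs[i] - known) / row[i])
    return solution

# The columns (1,1,1), (0,1,1), (0,0,1) give the system
#   a1 = 0,  a1 + a2 = 0,  a1 + a2 + a3 = 0.
A = [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]
print(solve_lower_triangular(A, [0, 0, 0]))   # [0.0, 0.0, 0.0] -> linearly independent
```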
1.135 Suppose on the contrary that 𝑈 is linearly dependent. That is, there exists a set of games { 𝑢𝑇1, 𝑢𝑇2, . . . , 𝑢𝑇𝑚 } and nonzero coefficients (𝛼1, 𝛼2, . . . , 𝛼𝑚) such that (Exercise 1.133)
𝛼1𝑢𝑇1 + 𝛼2𝑢𝑇2 + ⋅ ⋅ ⋅ + 𝛼𝑚𝑢𝑇𝑚 = 0    (1.16)
Assume that the coalitions are ordered so that 𝑇1 has the smallest number of players of any of the coalitions 𝑇1, 𝑇2, . . . , 𝑇𝑚. This implies that no coalition 𝑇2, 𝑇3, . . . , 𝑇𝑚 is a subset of 𝑇1 and
𝑢𝑇𝑗(𝑇1) = 0 for every 𝑗 = 2, 3, . . . , 𝑚    (1.17)
Using (1.16), 𝑢𝑇1 can be expressed as a linear combination of the other games,
𝑢𝑇1 = −(1/𝛼1) ∑_{𝑗=2}^{𝑚} 𝛼𝑗𝑢𝑇𝑗    (1.18)
Substituting (1.17), this implies that
𝑢𝑇1(𝑇1) = 0
whereas
𝑢𝑇(𝑇) = 1 for every 𝑇
by definition. This contradiction establishes that the set 𝑈 is linearly independent.
1.136 If 𝑆 is a subspace, then 0 ∈ 𝑆 and
𝛼x1 = 0
with 𝛼 ∕= 0 and x1 = 0 (Exercise 1.122). Therefore 𝑆 is linearly dependent (Exercise
1.133).
1.137 Suppose x has two representations, that is
x = 𝛼1 x1 + 𝛼2 x2 + . . . + 𝛼𝑛 x𝑛
x = 𝛽1 x1 + 𝛽2 x2 + . . . + 𝛽𝑛 x𝑛
Subtracting
0 = (𝛼1 − 𝛽1 )x1 + (𝛼2 − 𝛽2 )x2 + . . . + (𝛼𝑛 − 𝛽𝑛 )x𝑛
(1.19)
Since {x1, x2, . . . , x𝑛} is linearly independent, (1.19) implies that 𝛼𝑖 − 𝛽𝑖 = 0, that is 𝛼𝑖 = 𝛽𝑖, for all 𝑖 (Exercise 1.133). Therefore the representation is unique.
1.138 Let 𝑃 be the set of all linearly independent subsets of a linear space 𝑋. 𝑃 is partially ordered by inclusion. Every chain 𝐶 = {𝑆𝛼} ⊆ 𝑃 has an upper bound, namely ∪_{𝑆∈𝐶} 𝑆. By Zorn's lemma, 𝑃 has a maximal element 𝐵. We show that 𝐵 is a basis for 𝑋.
𝐵 is linearly independent since 𝐵 ∈ 𝑃. Suppose that 𝐵 does not span 𝑋, so that lin 𝐵 ⊂ 𝑋. Then there exists some x ∈ 𝑋 ∖ lin 𝐵. The set 𝐵 ∪ {x} is linearly independent and contains 𝐵, which contradicts the assumption that 𝐵 is a maximal element of 𝑃. Consequently, we conclude that 𝐵 spans 𝑋 and hence is a basis.
1.139 Exercise 1.134 established that the set 𝐵 = { (1, 1, 1), (0, 1, 1), (0, 0, 1) } is linearly independent. Since dim ℜ³ = 3, any other vectors must be linearly dependent on 𝐵. That is, lin 𝐵 = ℜ³ and 𝐵 is a basis.
By a similar argument to exercise 1.134, it is readily seen that {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
is linearly independent and hence constitutes a basis.
1.140 Let 𝐴 = {a1 , a2 , . . . , a𝑛 } and 𝐵 = {b1 , b2 , . . . , b𝑚 } be two bases for a linear
space 𝑋. Let
𝑆1 = {𝑏1 } ∪ 𝐴 = {b1 , a1 , a2 , . . . , a𝑛 }
𝑆1 is linearly dependent (since b1 ∈ lin 𝐴) and spans 𝑋. Therefore, there exist numbers 𝛼1, 𝛼2, . . . , 𝛼𝑛 and 𝛽1, not all zero, such that
𝛽1 b1 + 𝛼1 a1 + 𝛼2 a2 + . . . + 𝛼𝑛 a𝑛 = 0
At least one 𝛼𝑖 ∕= 0. Deleting the corresponding element a𝑖 , we obtain another set 𝑆1′
of 𝑛 elements
𝑆1′ = {b1 , a1 , a2 , . . . , a𝑖−1 , a𝑖+1 , . . . , a𝑛 }
which is also spans 𝑋. Adding the second element from 𝐵, we obtain the 𝑛 + 1 element
set
𝑆2 = {b1 , b2 , a1 , a2 , . . . , a𝑖−1 , a𝑖+1 , . . . , a𝑛 }
which again is linearly dependent and spans 𝑋.
Continuing in this way, we can replace 𝑚 vectors in 𝐴 with the 𝑚 vectors from 𝐵 while
maintaining a spanning set. This process cannot eliminate all the vectors in 𝐴, because
this would imply that 𝐵 was linearly dependent. (Otherwise, the remaining b𝑖 would be
linear combinations of preceding elements of 𝐵.) We conclude that necessarily 𝑚 ≤ 𝑛.
Reversing the process and replacing elements of 𝐵 with elements of 𝐴 establishes that
𝑛 ≤ 𝑚. Together these inequalities imply that 𝑛 = 𝑚 and 𝐴 and 𝐵 have the same
number of elements.
1.141 Suppose that the coalitions are ordered in some way, so that
𝒫(𝑁 ) = {𝑆0 , 𝑆1 , 𝑆2 , . . . , 𝑆2𝑛 −1 }
with 𝑆0 = ∅. There are 2𝑛 coalitions. Each game 𝐺 ∈ 𝒢 𝑁 corresponds to a unique list
of length 2𝑛 of coalitional worths
v = (𝑣0 , 𝑣1 , 𝑣2 , . . . , 𝑣2𝑛 −1 )
𝑛
with 𝑣0 = 0. That is, each game defines a vector 𝑣 = (0, 𝑣1 , . . . , 𝑣2𝑛 −1 ) ∈ ℜ2 and
𝑛
conversely each vector 𝑣 ∈ ℜ2 (with 𝑣0 = 0) defines a game. Therefore, the space of
𝑛
all games 𝒢 𝑁 is formally identical to the subspace of ℜ2 in which the first component
𝑛
is identically zero, which in turn is equivalent to the space ℜ2 −1 . Thus, 𝒢 𝑁 is a
2𝑛 − 1-dimensional linear space.
1.142 For illustrative purposes, we present two proofs, depending upon whether the
linear space is assumed to be finite dimensional or not. In the finite dimensional case,
a constructive proof is possible, which forms the basis for practical algorithms for
constructing a basis.
Let 𝑆 be a linearly independent set in a linear space 𝑋.
𝑋 is finite dimensional Let 𝑛 = dim 𝑋. Assume 𝑆 has 𝑚 elements and denote it
𝑆𝑚 .
If lin 𝑆𝑚 = 𝑋, then 𝑆𝑚 is a basis and we are done. Otherwise, there exists some
x𝑚+1 ∈ 𝑋 ∖ lin 𝑆𝑚 . Adding x𝑚+1 to 𝑆𝑚 gives a new set of 𝑚 + 1 elements
𝑆𝑚+1 = 𝑆𝑚 ∪ { x𝑚+1 }
30
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
which is also linearly independent ( since x𝑚+1 ∈
/ lin 𝑆𝑚 ).
If lin 𝑆𝑚+1 = 𝑋, then 𝑆𝑚+1 is a basis and we are done. Otherwise, there exists
some x𝑚+2 ∈ 𝑋 ∖ lin 𝑆𝑚+1 . Adding x𝑚+2 to 𝑆𝑚+1 gives a new set of 𝑚 + 2
elements
𝑆𝑚+2 = 𝑆𝑚+1 ∪ { x𝑚+2 }
which is also linearly independent ( since x𝑚+2 ∈
/ lin 𝑆𝑚+2 ).
Repeating this process, we can construct a sequence of linearly independent sets
𝑆𝑚 , 𝑆𝑚+1 , 𝑆𝑚+2 . . . such that lin 𝑆𝑚 ⫋ lin 𝑆𝑚+1 ⫋ lin 𝑆𝑚+2 ⋅ ⋅ ⋅ ⊆ 𝑋. Eventually, we will reach a set which spans 𝑋 and hence is a basis.
𝑋 is possibly infinite dimensional For the general case, we can adapt the proof
of the existence of a basis (Exercise 1.138), restricting 𝑃 to be the class of all
linearly independent subsets of 𝑋 containing 𝑆.
1.143 Otherwise (if a set of 𝑛 + 1 elements was linearly independent), it could be
extended to basis at least 𝑛 + 1 elements (exercise 1.142). This would contradict the
fundamental result that all bases have the same number of elements (Exercise 1.140).
1.144 Every basis is linearly independent. Conversely, let 𝐵 = {x1 , x2 , . . . , x𝑛 } be a
set of linearly independent elements in an 𝑛-dimensional linear space 𝑋. We have to
show that lin 𝐵 = 𝑋.
Take any x ∈ 𝑋. The set
𝐵 ∪ {x} = {x1 , x2 , . . . , x𝑛 , x }
must be linearly dependent (Exercise 1.143). That is there exists numbers 𝛼1 , 𝛼2 , . . . , 𝛼𝑛 , 𝛼,
not all zero, such that
𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛 + 𝛼x = 0
(1.20)
Furthermore, it must be the case that 𝛼 ∕= 0 since otherwise
𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛 = 0
which contradicts the linear independence of 𝐴. Solving (1.20) for x, we obtain
x=
1
𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
𝛼
Since x was an arbitrary element of 𝑋, we conclude that 𝐵 spans 𝑋 and hence 𝐵 is a
basis.
1.145 A basis spans 𝑋. To establish the converse, assume that 𝐵 = {x1 , x2 , . . . , x𝑛 }
is a set of 𝑛 elements which span 𝑋. If 𝑆 is linearly dependent, then one element
is linearly dependent on the other elements. Without loss of generality, assume that
x1 ∈ lin 𝐵 ∖ {x1 }. Deleting x1 the set
𝐵 ∖ {x1 } = {x2 , x3 , . . . , x𝑛 }
also spans 𝑋. Continuing in this fashion by eliminating dependent elements, we finish
with a linearly independent set of 𝑚 < 𝑛 elements which spans 𝑋. That is, we can
find a basis of 𝑚 < 𝑛 elements, which contradicts the assumption that the dimension
of 𝑋 is 𝑛 (Exercise 1.140). Thus any set of 𝑛 vectors which spans 𝑋 must be linearly
independent and hence a basis.
31
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.146 We have previously shown
∙ that the set 𝑈 is linearly independent (Exercise 1.135).
∙ the space 𝒢 𝑁 has dimension 2𝑛−1 (Exercise 1.141).
There are 2𝑛−1 distinct T-unanimity games 𝑢𝑇 in 𝑈 . Hence 𝑈 spans the 2𝑛−1 space
𝒢 𝑁 . Alternatively, note that any game 𝑤 ∈ 𝒢 𝑁 can be written as a linear combination
of T-unanimity games (Exercise 1.75).
1.147 Let 𝐵 = {x1 , x2 , . . . , x𝑚 } be a basis for 𝑆. Since 𝐵 is linearly independent,
𝑚 ≤ 𝑛 (Exercise 1.143). There are two possibilities.
Case 1: 𝑚 = 𝑛. 𝐵 is a set of 𝑛 linearly independent elements in an 𝑛-dimensional
space 𝑋. Hence 𝐵 is a basis for 𝑋 and 𝑆 = lin 𝐵 = 𝑋.
Case 2: 𝑚 < 𝑛. Since 𝐵 is linearly independent but cannot be a basis for the 𝑛dimensional space 𝑋, we must have 𝑆 = lin 𝐵 ⊂ 𝑋.
Therefore, we conclude that if 𝑆 ⊂ 𝑋 is a proper subspace, it has a lower dimension
than 𝑋.
1.148 Let 𝛼1 , 𝛼2 , 𝛼3 be the coordinates of (1, 1, 1) for the basis {(1, 1, 1), (0, 1, 1), (0, 0, 1)}.
That is
⎛ ⎞
⎛ ⎞
⎛ ⎞
⎛ ⎞
1
0
0
1
⎝1⎠ = 𝛼1 ⎝1⎠ + 𝛼2 ⎝1⎠ + 𝛼3 ⎝0⎠
1
1
1
1
which implies that 𝛼1 = 1, 𝛼2 = 𝛼3 = 0. Therefore (1, 0, 0) are the required coordinates
of the (1, 1, 1) with respect to the basis {(1, 1, 1), (0, 1, 1), (0, 0, 1)}.
(1, 1, 1) are the coordinates of the vector (1, 1, 1) with respect to the standard basis.
1.149 A subset 𝑆 of a linear space 𝑋 is a subspace of 𝑋 if
𝛼x + 𝛽y ∈ 𝑆 for every x, y ∈ 𝑆 and for every 𝛼, 𝛽 ∈ ℜ
Letting 𝛽 = 1 − 𝛼, this implies that
𝛼x + (1 − 𝛼)y ∈ 𝑆
for every x, y ∈ 𝑆 and 𝛼 ∈ ℜ
𝑆 is an affine set.
Conversely, suppose that 𝑆 is an affine set containing 0, that is
𝛼x + (1 − 𝛼)y ∈ 𝑆
for every x, y ∈ 𝑆 and 𝛼 ∈ ℜ
Letting y = 0, this implies that
𝛼x ∈ 𝑆
for every x ∈ 𝑆 and 𝛼 ∈ ℜ
so that 𝑆 is homogeneous. Now letting 𝛼 = 12 , for every x and y in 𝑆,
1
1
x+ y ∈𝑆
2
2
and homogeneity implies
(
x+y =2
1
1
x+ y
2
2
𝑆 is also additive. Hence 𝑆 is subspace.
32
)
∈𝑆
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.150 For any x ∈ 𝑆, let
𝑉 = 𝑆 −x = {v ∈ 𝑋 : v +x ∈ 𝑆 }
𝑉 is an affine set For any v1 , v2 ∈ 𝑉 , there exist corresponding s1 , s2 ∈ 𝑆 such that
v1 = s1 − x and v2 = s2 − x and therefore
𝛼v1 + (1 − 𝛼)v2 = 𝛼(s1 − x) + (1 − 𝛼)(s1 − x)
= 𝛼s1 + (1 − 𝛼)s2 − 𝛼x + (1 − 𝛼)x
=s−x
where s = 𝛼𝑠1 + (1 − 𝛼)𝑠2 ∈ 𝑆. There 𝑉 is an affine set.
𝑉 is a subspace Since x ∈ 𝑆, 0 = x − x ∈ 𝑉 . Therefore 𝑉 is a subspace (Exercise
1.149).
𝑉 is unique Suppose that there are two subspaces 𝑉 1 and 𝑉 2 such that 𝑆 = 𝑉 1 + x1
and 𝑆 = 𝑉 2 + x2 . Then
𝑉1 + x1 = 𝑉2 + x2
𝑉1 = 𝑉2 + (x2 − x1 )
= 𝑉2 + x
where x = x2 − x1 ∈ 𝑋. Therefore 𝑉1 is parallel to 𝑉2 .
Since 𝑉1 is a subspace, 0 ∈ 𝑉1 which implies that −x ∈ 𝑉2 . Since 𝑉2 is a subspace,
this implies that x ∈ 𝑉2 and 𝑉2 + x ⊆ 𝑉2 . Therefore 𝑉1 = 𝑉2 + x ⊆ 𝑉2 . Similarly,
𝑉2 ⊆ 𝑉1 and hence 𝑉1 = 𝑉2 . Therefore the subspace 𝑉 is unique.
1.151 Let 𝑆 ∥ 𝑇 denote the relation 𝑆 is parallel to 𝑇 , that is
𝑆 ∥ 𝑇 ⇐⇒ 𝑆 = 𝑇 + x for some x ∈ 𝑋
The relation ∥ is
reflexive 𝑆 ∥ 𝑆 since 𝑆 = 𝑆 + 0
transitive Assume 𝑆 = 𝑇 + x and 𝑇 = 𝑈 + y. Then 𝑆 = 𝑈 + (x + y)
symmetric 𝑆 = 𝑇 + x =⇒ 𝑇 = 𝑆 + (−x)
Therefore ∥ is an equivalence relation.
1.152 See exercises 1.130 and 1.162.
1.153
1. Exercise 1.150
2. Assume x0 ∈ 𝑉 . For every x ∈ 𝐻
x = x0 + v = w ∈ 𝑉
which implies that 𝐻 ⊆ 𝑉 . Conversely, assume 𝐻 = 𝑉 . Then x0 = 0 ∈ 𝑉 since
𝑉 is a subspace.
3. By definition, 𝐻 ⊂ 𝑋. Therefore 𝑉 = 𝐻 − x ⊂ 𝑋.
/ 𝑉 . Suppose to the contrary
4. Let x1 ∈
lin {x1 , 𝑉 } = 𝑉 ′ ⊂ 𝑋
33
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Then
𝐻 ′ = x0 + 𝑉 ′
is an affine set (Exercise 1.150) which strictly contains 𝐻. This contradicts the
definition of 𝐻 as a maximal proper affine set.
5. Let x1 ∈
/ 𝑉 . By the previous part, x ∈ lin {x1 , 𝑉 }. That is, there exists 𝛼 ∈ ℜ
such that
x = 𝛼x1 + v for some v ∈ 𝑉
To see that 𝛼 is unique, suppose that there exists 𝛽 ∈ ℜ such that
x = 𝛽x1 + v′ for some v′ ∈ 𝑉
Subtracting
0 = (𝛼 − 𝛽)x1 + (v − v′ )
/ 𝑉.
which implies that 𝛼 = 𝛽 since x1 ∈
1.154 Assume x, y ∈ 𝑋. That is, x, y ∈ ℜ𝑛 and
∑
∑
𝑥𝑖 =
𝑦𝑖 = 𝑤(𝑁 )
𝑖∈𝑁
𝑖∈𝑁
𝑛
For any 𝛼 ∈ ℜ, 𝛼x + (1 − 𝛼)y ∈ ℜ and
∑
∑
∑
𝛼𝑥𝑖 + (1 − 𝛼)𝑦𝑖 = 𝛼
𝑥𝑖 + (1 − 𝛼)
𝑦𝑖
𝑖∈𝑁
𝑖∈𝑁
𝑖∈𝑁
= 𝛼𝑤(𝑁 ) + (1 − 𝛼)𝑤(𝑁 )
= 𝑤(𝑁 )
Hence 𝑋 is an affine subset of ℜ𝑛 .
1.155 See Exercise 1.129.
1.156 No. A straight line through any two points in ℜ𝑛+ extends outside ℜ𝑛+ . Put
differently, the affine hull of ℜ𝑛+ is the whole space ℜ𝑛 .
1.157 Let
𝑉 = aff 𝑆 − x1
= aff {0, x2 − x1 , x3 − x1 , . . . , x𝑛 − x1 }
𝑉 is a subspace (0 ∈ 𝑉 ) and
aff 𝑆 = 𝑉 + x1
and
dim aff 𝑆 = dim 𝑉
Note that the choice of x1 is arbitrary.
𝑆 is affinely dependent if and only if there exists some x𝑘 ∈ 𝑆 such that x𝑘 ∈
∕ x1 .
aff (𝑆 ∖ {x𝑘 }). Since the choice of x1 is arbitrary, we assume that x𝑘 =
x𝑘 ∈ aff (𝑆 ∖ {x𝑘 }) ⇐⇒ x𝑘 ∈ (𝑉 + x1 ) ∖ {x𝑘 }
⇐⇒ x𝑘 − x1 ∈ 𝑉 ∖ {x𝑘 − x1 }
⇐⇒ x𝑘 − x1 ∈ lin {x2 − x1 , x3 − x1 , . . . , x𝑘−1 − x1 ,
. . . , x𝑘+1 − x1 , . . . , x𝑛 − x1 }
34
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Therefore, 𝑆 is affinely dependent if and only if {x2 − x1 , x3 − x1 , . . . , x𝑛 − x1 } is
linearly independent.
1.158 By the previous exercise, the set 𝑆 = {x1 , x2 , . . . , x𝑛 } is affinely dependent if
and only if the set {x2 − x1 , x3 − x1 , . . . , x𝑛 − x1 } is linearly dependent, so that there
exist numbers 𝛼2 , 𝛼3 , . . . , 𝛼𝑛 , not all zero, such that
𝛼2 (x2 − x1 ) + 𝛼3 (x3 − x1 ) + ⋅ ⋅ ⋅ + 𝛼𝑛 (x𝑛 − x1 ) = 0
or
𝛼2 x2 + 𝛼3 x3 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛 −
𝑛
∑
𝛼𝑖 x1 = 0
𝑖=2
Let 𝛼1 = −
∑𝑛
𝑖=2
𝛼𝑖 . Then
𝛼1 x1 + 𝛼2 x2 + . . . + 𝛼𝑛 x𝑛 = 0
and
𝛼1 + 𝛼2 + . . . + 𝛼𝑛 = 0
as required.
1.159 Let
𝑉 = aff 𝑆 − x1 = aff { 0, x2 − x1 , x3 − x1 , . . . , x𝑛 − x1 }
Then
aff 𝑆 = x1 + 𝑉
If 𝑆 is affinely independent, every x ∈ aff 𝑆 has a unique representation as
x = x1 + v,
v∈𝑉
with
v = 𝛼2 (x2 − x1 ) + 𝛼3 (x3 − x1 ) + ⋅ ⋅ ⋅ + 𝛼𝑛 (x𝑛 − x1 )
so that
x = x1 + 𝛼2 (x2 − x1 ) + 𝛼3 (x3 − x1 ) + ⋅ ⋅ ⋅ + 𝛼𝑛 (x𝑛 − x1 )
∑𝑛
Define 𝛼1 = 1 − 𝑖=2 𝛼𝑖 . Then
x = 𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
with
𝛼1 + 𝛼2 + ⋅ ⋅ ⋅ + 𝛼𝑛 = 1
x is a unique affine combination of the elements of 𝑆.
1.160 Assume that 𝑥, 𝑦 ∈ (𝑎, 𝑏) ⊆ ℜ. This means that 𝑎 < 𝑥 < 𝑏 and 𝑎 < 𝑦 < 𝑏. For
every 0 ≤ 𝛼 ≤ 1
𝛼𝑥 + (1 − 𝛼)𝑦 > 𝛼𝑎 + (1 − 𝛼)𝑎
35
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
and
𝛼𝑥 + (1 − 𝛼)𝑦 < 𝛼𝑏 + (1 − 𝛼)𝑏
Therefore 𝑎 < 𝛼𝑥+(1−𝛼)𝑦 < 𝑏 and 𝛼𝑥+(1−𝛼)𝑦 ∈ (𝑎, 𝑏). (𝑎, 𝑏) is convex. Substituting
≤ for < demonstrates that [𝑎, 𝑏] is convex.
Let 𝑆 be an arbitrary convex set in ℜ. Assume that 𝑆 is not an interval. This implies
that there exist numbers 𝑥, 𝑦, 𝑧 such that 𝑥 < 𝑦 < 𝑧 and 𝑥, 𝑧 ∈ 𝑆 while 𝑦 ∈
/ 𝑆. Define
𝛼=
𝑧−𝑦
𝑧−𝑥
so that
1−𝛼=
𝑦−𝑥
𝑧−𝑥
Note that 0 ≤ 𝛼 ≤ 1 and that
𝛼𝑥 + (1 − 𝛼)𝑧 =
𝑦−𝑥
𝑧−𝑦
𝑥+
𝑧=𝑦∈
/𝑆
𝑧−𝑥
𝑧−𝑥
which contradicts the assumption that 𝑆 is convex. We conclude that every convex set
in ℜ is an interval. Note that 𝑆 may be a hybrid interval such (𝑎, 𝑏] or [𝑎, 𝑏) as well as
an open (𝑎, 𝑏) or closed [𝑎, 𝑏] interval.
1.161 Let (𝑁, 𝑤) be a TP-coalitional game. If core(𝑁, 𝑤) = ∅ then it is trivially convex.
Otherwise, assume core(𝑁, 𝑤) is nonempty and let x1 and x2 belong to core(𝑁, 𝑤).
That is
∑
𝑥1𝑖 ≥ 𝑤(𝑆)
for every 𝑆 ⊆ 𝑁
𝑖∈𝑆
∑
𝑥1𝑖 = 𝑤(𝑁 )
𝑖∈𝑁
and therefore for any 0 ≤ 𝛼 ≤ 1
∑
𝛼𝑥1𝑖 ≥ 𝛼𝑤(𝑆)
for every 𝑆 ⊆ 𝑁
𝑖∈𝑆
∑
𝛼𝑥1𝑖 = 𝛼𝑤(𝑁 )
𝑖∈𝑁
Similarly
∑
(1 − 𝛼)𝑥2𝑖 ≥ (1 − 𝛼)𝑤(𝑆)
for every 𝑆 ⊆ 𝑁
𝑖∈𝑆
∑
(1 − 𝛼)𝑥2𝑖 = (1 − 𝛼)𝑤(𝑁 )
𝑖∈𝑁
Summing these two systems
∑
𝛼𝑥1𝑖 + (1 − 𝛼)𝑥2𝑖 ≥ 𝛼𝑤(𝑆) + (1 − 𝛼)𝑤(𝑆) = 𝑤(𝑆)
𝑖∈𝑆
∑
𝛼𝑥1𝑖 + (1 − 𝛼)𝑥2𝑖 = 𝛼𝑤(𝑁 ) + (1 − 𝛼)𝑤(𝑁 ) = 𝑤(𝑁 )
𝑖∈𝑁
That is, 𝛼𝑥1𝑖 + (1 − 𝛼)𝑥2𝑖 belongs to core(𝑁, 𝑤).
36
for every 𝑆 ⊆ 𝑁
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
∩
1.162 Let ℭ be a collection of convex sets and let x, y belong to 𝑆∈ℭ 𝑆. for every
𝑆 ∈ ℭ, x, y ∈ 𝑆 and therefore
∩ 𝛼x + (1 − 𝛼)y ∈ 𝑆 for all 0 ≤ 𝛼 ≤ 1 (since 𝑆 is convex).
Therefore 𝛼x + (1 − 𝛼)y ∈ 𝑆∈ℭ 𝑆.
1.163 Fix some output 𝑦. Assume that x1 , x2 ∈ 𝑉 (𝑦). This implies that both (𝑦, −x1 )
and (𝑦, −x2 ) belong to the production possibility set 𝑌 . If 𝑌 is convex
𝛼(𝑦, −x1 ) + (1 − 𝛼)(𝑦, −x2 ) = (𝛼𝑦 + (1 − 𝛼)𝑦, 𝛼x1 + (1 − 𝛼)x2 )
= (𝑦, 𝛼x1 + (1 − 𝛼)x2 ) ∈ 𝑌
for every 𝛼 ∈ [0, 1]. This implies that 𝛼x1 + (1 − 𝛼)x2 ∈ 𝑉 (𝑦). Since the choice of 𝑦
was arbitrary, this implies that 𝑉 (𝑦) is convex for every 𝑦.
1.164 Assume 𝑆1 and 𝑆2 are convex sets. Let 𝑆 = 𝑆1 + 𝑆2 . Suppose x, y belong to 𝑆.
Then there exist s1 , t1 ∈ 𝑆1 and s2 , t2 ∈ 𝑆2 such that x = s1 + s2 and y = t1 + t2 . For
any 𝛼 ∈ [0, 1]
𝛼x + (1 − 𝛼)y = 𝛼s1 + s2 + (1 − 𝛼)t1 + t2
= 𝛼s1 + (1 − 𝛼)t1 + 𝛼s2 + (1 − 𝛼)t2 ∈ 𝑆
since 𝛼s1 + (1 − 𝛼)t1 ∈ 𝑆1 and 𝛼s2 + (1 − 𝛼)t2 ∈ 𝑆2 . The argument readily extends to
any finite number of sets.
1.165 Without loss of generality, assume that 𝑛 = 2. Let 𝑆 = 𝑆1 × 𝑆2 ⊆ 𝑋 = 𝑋1 × 𝑋2 .
Suppose x = (𝑥1 , 𝑥2 ) and y = (𝑦1 , 𝑦2 ) belong to 𝑆. Then
𝛼x + (1 − 𝛼)y = 𝛼(𝑥1 , 𝑥2 ) + (1 − 𝛼)(𝑦1 , 𝑦2 )
= (𝛼𝑥1 , 𝛼𝑥2 ) + ((1 − 𝛼)𝑦1 , (1 − 𝛼)𝑦2 )
= (𝛼𝑥1 + (1 − 𝛼)𝑦1 , 𝛼𝑥2 + (1 − 𝛼)𝑦2 ) ∈ 𝑆
1.166 Let 𝛼x, 𝛼y be points in 𝛼𝑆 so that x, y ∈ 𝑆. Since 𝑆 is convex, 𝛽x+ (1 − 𝛽)y ∈ 𝑆
for every 0 ≤ 𝛽 ≤ 1. Multiplying by 𝛼
𝛼(𝛽x + (1 − 𝛽)y) = 𝛽(𝛼x) + (1 − 𝛽)(𝛼y) ∈ 𝛼𝑆
Therefore, 𝛼𝑆 is convex.
1.167 Combine Exercises 1.164 and 1.166.
1.168 The inclusion 𝑆 ⊆ 𝛼𝑆 + (1 − 𝛼)𝑆 is true for any set (whether convex or not),
since for every x ∈ 𝑆
x = 𝛼x + (1 − 𝛼)x ∈ 𝛼𝑆 + (1 − 𝛼)𝑆
The reverse inclusion 𝛼𝑆 +(1−𝛼)𝑆 ⊆ 𝑆 follows directly from the definition of convexity.
1.169 Given any two convex sets 𝑆 and 𝑇 in a linear space, the largest convex set
contained in both is 𝑆 ∩ 𝑇 ; the smallest convex set containing both is conv 𝑆 ∪ 𝑇 .
Therefore, the set of all convex sets is a lattice with
𝑆 ∧𝑇 =𝑆 ∩𝑇
𝑆 ∨ 𝑇 = conv 𝑆 ∪ 𝑇
The lattice is complete since every collection {𝑆𝑖 } has a least upper bound conv ∪ 𝑆𝑖
and a greatest lower bound ∩𝑆𝑖 .
37
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.170 If a set contains all convex combinations of its elements, it contains all convex
combinations of any two points, and hence is convex.
Conversely, assume that 𝑆 is convex. Let x be a convex combination of elements in 𝑆,
that is let
x = 𝛼1 x1 + 𝛼2 x2 + . . . + 𝛼𝑛 x𝑛
where x1 , x2 , . . . , x𝑛 ∈ 𝑆 and 𝛼1 , 𝛼2 , . . . , 𝛼𝑛 ∈ ℜ+ with 𝛼1 + 𝛼2 + . . . + 𝛼𝑛 = 1. We
need to show that x ∈ 𝑆.
We proceed by induction of the number of points 𝑛. Clearly, x ∈ 𝑆 if 𝑛 = 1 or 𝑛 = 2.
To show that it is true for 𝑛 = 3, let
x = 𝛼1 x1 + 𝛼2 x2 + 𝛼3 x3
where x1 , x2 , x3 ∈ 𝑆 and 𝛼1 , 𝛼2 , 𝛼3 ∈ ℜ+ with 𝛼1 + 𝛼2 + 𝛼3 = 1. Assume that 𝛼𝑖 > 0
for all 𝑖 (otherwise 𝑛 = 1 or 𝑛 = 2) so that 𝛼1 < 1. Rewriting
x = 𝛼1 x1 + 𝛼2 x2 + 𝛼3 x3
(
)
𝛼2
𝛼2
= 𝛼1 x1 + (1 − 𝛼1 )
x2 +
x3
1 − 𝛼1
1 − 𝛼1
= 𝛼1 x1 + (1 − 𝛼1 )y
where
(
y=
𝛼2
𝛼2
x2 +
x3
1 − 𝛼1
1 − 𝛼1
)
y is a convex combination of two elements x2 and x3 since
𝛼2
𝛼2
𝛼2 + 𝛼3
+
=
=1
1 − 𝛼1
1 − 𝛼1
1 − 𝛼1
and 𝛼2 + 𝛼3 = 1 − 𝛼1 . Hence y ∈ 𝑆. Therefore x is a convex combination of two
elements x1 and 𝑦 and is also in 𝑆. Proceeding in this fashion, we can show that every
convex combination belongs to 𝑆, that is conv 𝑆 ⊆ 𝑆.
1.171 This is precisely analogous to Exercise 1.128. We observe that
1. conv 𝑆 is a convex set.
2. if 𝐶 is any convex set containing 𝑆, then conv 𝑆 ⊆ 𝐶.
Therefore, conv 𝑆 is the smallest convex set containing S.
1.172 Note first that 𝑆 ⊆ conv 𝑆 for any set 𝑆. The converse for convex sets follows
from Exercise 1.170.
1.173 Assume x ∈ conv (𝑆1 + 𝑆2 ). Then, there exist numbers 𝛼1 , 𝛼2 , . . . , 𝛼𝑛 and vectors x1 , x2 , . . . , x𝑛 in 𝑆1 + 𝑆2 such that
x = 𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
For every x𝑖 , there exists x1𝑖 ∈ 𝑆1 and x2𝑖 ∈ 𝑆2 such that
x𝑖 = x1𝑖 + x2𝑖
38
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
and therefore
𝑛
∑
x=
𝛼𝑖 x1𝑖 +
𝑖=1
𝑛
∑
𝛼𝑖 x2𝑖
𝑖=1
= x1 + x2
∑𝑛
∑𝑛
where x1 = 𝑖=1 𝛼𝑖 x1𝑖 ∈ 𝑆1 and x2 = 𝑖=1 𝛼𝑖 x2𝑖 ∈ 𝑆2 . Therefore x ∈ conv 𝑆1 +
conv 𝑆2 .
Conversely, assume that x ∈ conv 𝑆1 + conv 𝑆2 . Then x = x1 + x2 , where
x1 =
𝑛
∑
𝛼𝑖 𝑥1𝑖 ,
x1𝑖 ∈ 𝑆1
𝛽𝑗 𝑥2𝑗 ,
x2𝑖 ∈ 𝑆2
𝑖=1
x2 =
𝑚
∑
𝑗=1
and
x = x1 + x2 =
𝑛
∑
𝛼𝑖 𝑥1𝑖 +
𝑖=1
𝑚
∑
𝛽𝑗 𝑥2𝑗 ∈ conv (𝑆1 + 𝑆2 )
𝑗=1
since x1𝑖 , x2𝑗 ∈ 𝑆1 + 𝑆2 for every 𝑖 and 𝑗.
1.174 The dimension of the input requirement set 𝑉 (𝑦) is 𝑛. Its affine hull is ℜ𝑛 .
1.175
1. Let
x = 𝛼1 x1 + 𝛼2 x2 + . . . + 𝛼𝑛 x𝑛
(1.21)
If 𝑛 > dim 𝑆 +1, the elements x1 , x2 , . . . , x𝑛 ∈ 𝑆 are affinely dependent (Exercise
1.157 and therefore there exist numbers 𝛽1 , 𝛽2 , . . . , 𝛽𝑛 , not all zero, such that
(Exercise 1.158)
𝛽1 x1 + 𝛽2 x2 + . . . + 𝛽𝑛 x𝑛 = 0
(1.22)
and
𝛽1 + 𝛽2 + . . . + 𝛽𝑛 = 0
2. Combining (1.21) and (1.22)
x = x − 𝑡0
𝑛
𝑛
∑
∑
𝛼𝑖 x𝑖 − 𝑡
𝛽𝑖 x𝑖
=
𝑖=1
=
𝑛
∑
𝑖=1
(𝛼𝑖 − 𝑡𝛽𝑖 )x𝑖
𝑖=1
for any 𝑡 ∈ ℜ.
}
{
3. Let 𝑡 = min𝑖 𝛼𝛽𝑖𝑖 : 𝛽𝑖 > 0 =
𝛼𝑗
𝛽𝑗
We note that
∙ 𝑡 > 0 since 𝛼𝑖 > 0 for every 𝑖.
39
(1.23)
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
∙ If 𝛽𝑖 > 0, then 𝛼𝑖 /𝛽𝑖 ≥ 𝛼𝑗 /𝛽𝑗 ≥ 𝑡 and therefore 𝛼𝑖 − 𝑡𝛽𝑖 ≥ 0
∙ If 𝛽𝑖 ≤ 0 then 𝛼𝑖 − 𝑡𝛽𝑖 > 0 for every 𝑡 > 0.
∙ Therefore 𝛼𝑖 − 𝑡𝛽𝑖 ≥ 0 for every 𝑡 and
∙ 𝛼𝑖 − 𝑡𝛽𝑖 = 0 for 𝑖 = 𝑗.
Therefore, (1.23) represents x as a convex combination of only 𝑛 − 1 points.
4. This process can be repeated until x is represented as a convex combination of
at most dim 𝑆 + 1 elements.
1.176 Assume x is not an extreme point of 𝑆. Then there exists distinct x1 and x2 in
S such that
x = 𝛼x1 + (1 − 𝛼)x2
Without loss of generality, assume 𝛼 ≤ 1/2 and let y = x2 − x. Then x + y = x2 ∈ 𝑆.
Furthermore
x − y = x − x2 + x
= 2x − x2
= 2(𝛼x1 + (1 − 𝛼)x2 ) − x2
= 2𝛼x1 + (1 − 2𝛼)x2 ∈ 𝑆
since 𝛼 ≤ 1/2.
1.177
1. For any x = (𝑥1 , 𝑥2 ) ∈ 𝐶2 , there exists some 𝛼1 ∈ [0, 1] such that
𝑥1 = 𝛼1 𝑐 + (1 − 𝛼1 )(−𝑐) = (2𝛼1 − 1)𝑐
In fact, 𝛼1 is defined by
𝛼1 =
Therefore (see Figure 1.5)
)
(
(
𝑥1
= 𝛼1
𝑐
(
)
(
𝑥1
= 𝛼1
−𝑐
𝑥1 + 𝑐
2𝑐
)
−𝑐
+ (1 − 𝛼1 )
𝑐
(
)
)
−𝑐
𝑐
+ (1 − 𝛼1 )
−𝑐
−𝑐
𝑐
𝑐
)
(
Similarly 𝑥2 = 𝛼2 𝑐 + (1 − 𝛼2 )(−𝑐) where
𝛼2 =
𝑥2 + 𝑐
2𝑐
Therefore, for any x ∈ 𝐶2 ,
(
(
)
)
)
(
𝑥1
𝑥1
𝑥1
x=
+ (1 − 𝛼2 )
= 𝛼2
𝑥2
𝑐
−𝑐
( )
(
)
𝑐
−𝑐
= 𝛼1 𝛼2
+ (1 − 𝛼1 )𝛼2
𝑐
𝑐
(
)
(
)
𝑐
−𝑐
+ 𝛼1 (1 − 𝛼2 )
+ (1 − 𝛼1 )(1 − 𝛼2 )
−𝑐
−𝑐
( )
(
)
(
)
(
)
𝑐
−𝑐
𝑐
−𝑐
= 𝛽1
+ 𝛽2
+ 𝛽3
+ 𝛽4
𝑐
𝑐
−𝑐
−𝑐
40
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
(𝑥1 , c)
x
𝑥1
(𝑥1 , -c)
Figure 1.5: A cube in ℜ2
where 0 ≤ 𝛽𝑖 ≤ 1 and
𝛽1 + 𝛽2 + 𝛽3 + 𝛽4 = 𝛼1 𝛼2 + (1 − 𝛼1 )𝛼2 + 𝛼1 (1 − 𝛼2 ) + (1 − 𝛼1 )(1 − 𝛼2 )
= 𝛼1 𝛼2 + 𝛼2 − 𝛼1 𝛼2 + 𝛼1 − 𝛼1 𝛼2 + 1 − 𝛼1 − 𝛼2 + 𝛼1 𝛼2
=1
That is
{(
𝑥 ∈ conv
𝑐
𝑐
) (
) (
) (
)}
−𝑐
𝑐
−𝑐
,
,
,
𝑐
−𝑐
−𝑐
2. (a) For any point (𝑥1 , 𝑥2 , . . . , 𝑥𝑛−1 , 𝑐) which lies on face of the cube 𝐶𝑛 , (𝑥1 , 𝑥2 , . . . , 𝑥𝑛−1 ) ∈
𝐶𝑛−1 and therefore
(𝑥1 , 𝑥2 , . . . , 𝑥𝑛−1 ) ∈ conv { ±𝑐, ±𝑐, . . . , ±𝑐) } ⊆ ℜ𝑛−1
so that
x ∈ conv { (±𝑐, ±𝑐, . . . , ±𝑐, 𝑐) } ⊆ ℜ𝑛
Similarly, any point (𝑥1 , 𝑥2 , . . . , 𝑥𝑛−1 , −𝑐) on the opposite face lies in the
convex hull of the points { (±𝑐, ±𝑐, . . . , ±𝑐, −𝑐) }.
(b) For any other point x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) ∈ 𝐶𝑛 , let
𝛼𝑛 =
𝑥𝑛 + 𝑐
2𝑐
so that
𝑥𝑛 = 𝛼𝑛 𝑐 + (1 − 𝛼𝑛 )(−𝑐)
Then
⎛
⎞
⎛
⎛
⎞
⎞
𝑥1
𝑥1
𝑥1
⎜ 𝑥2 ⎟
⎜ 𝑥2 ⎟
⎜ 𝑥2 ⎟
⎜
⎟
⎜
⎜
⎟
⎟
⎜
⎟
⎜
⎟
⎟
x = ⎜ . . . ⎟ = 𝛼𝑛 ⎜ . . . ⎟ + (1 − 𝛼𝑛 ) ⎜
⎜ ... ⎟
⎝𝑥𝑛−1 ⎠
⎝𝑥𝑛−1 ⎠
⎝𝑥𝑛−1 ⎠
𝑥𝑛
𝑐
−𝑐
41
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Hence
x ∈ conv { (±𝑐, ±𝑐, . . . , ±𝑐) } ⊂ ℜ𝑛
In other words
𝐶𝑛 ⊆ conv { (±𝑐, ±𝑐, . . . , ±𝑐) } ⊂ ℜ𝑛
3. Let 𝐸 denote the set of points of the form { (±𝑐, ±𝑐, . . . , ±𝑐) } ⊆ ℜ𝑛 . Clearly,
every point in 𝐸 is an extreme point of 𝐶𝑛 . Conversely, we have shown that
𝐶𝑛 ⊆ conv 𝐸. Therefore, no point x ∈ 𝐶 𝑛 ∖ 𝐸 can be an extreme point of 𝐶 𝑛 . 𝐸
is the set of extreme points of 𝐶 𝑛 .
4. Since 𝐶 𝑛 is convex, and 𝐸 ⊂ 𝐶𝑛 , conv 𝐸 ⊆ 𝐶 𝑛 . Consequently, 𝐶 𝑛 = conv 𝐸.
1.178 Let x, y belong to 𝑆 ∖ 𝐹 is convex. For any 𝛼 ∈ [0, 1]
∙ 𝛼x + (1 − 𝛼)y ∈ 𝑆 since 𝑆 convex
∙ 𝛼x + (1 − 𝛼)y ∈
/ 𝐹 since 𝐹 is a face
Thus 𝛼x + (1 − 𝛼)y ∈ 𝑆 ∖ 𝐹 which is convex.
1.179
1. Trivial.
∪
2. Let {𝐹𝑖 } be a collection of faces of 𝑆 and let 𝐹 = 𝐹𝑖 . Choose any x, y ∈ 𝑆.
If the line segment between x and y intersects 𝐹 , then
∪ it intersects some face 𝐹𝑖
which implies that x, y ∈ 𝐹𝑖 . Therefore, x, y ∈ 𝐹 = 𝐹𝑖 .
∩
3. Let {𝐹𝑖 } be a collection of faces of 𝑆 and let 𝐹 = 𝐹𝑖 . Choose any x, y ∈ 𝑆. if
the line segment between x and y intersects 𝐹 , then it intersects
∪ every face 𝐹𝑖
which implies that x, y ∈ 𝐹𝑖 for every 𝑖. Therefore, x, y ∈ 𝐹 = 𝐹𝑖 .
4. Let 𝔉 be the collection of all faces of 𝑆. This is partially ordered by inclusion.
By
∩ the previous result, every nonempty subcollection 𝔊 has a least upper bound
( 𝐹 ∈𝔊 𝐹 ). Hence 𝔉 is a complete lattice (Exercise 1.47).
1.180 Let 𝑆 be a polytope. Then 𝑆 = conv { x1 , x2 , . . . , x𝑛 }. Note that every extreme
point belongs to { x1 , x2 , . . . , x𝑛 }. Now choose the smallest subset whose convex hull
is still 𝑆, that is delete elements which can be written as convex combinations of other
elements. Suppose the minimal subset is { x1 , x2 , . . . , x𝑚 }. We claim that each of
these elements is an extreme point of 𝑆, that is { x1 , x2 , . . . , x𝑚 } = 𝐸.
Assume not, that is assume that x𝑚 is not an extreme point so that there exists x, y ∈ 𝑆
with
x𝑚 = 𝛼x + (1 − 𝛼)y
with 0 < 𝛼 < 1
(1.24)
Since x, y ∈ conv {x1 , x2 , . . . , x𝑚 }
x=
𝑚
∑
𝛼𝑖 x𝑖
y=
𝑖=1
𝑚
∑
𝛽x𝑖
𝑖=1
Substituting in (1.24), we can write x𝑚 as a convex combination of {x1 , x2 , . . . , x𝑚 }.
x𝑚 =
𝑚
𝑚
∑
∑
(
)
𝛼𝛼𝑖 + (1 − 𝛼)𝛽𝑖 x𝑖 =
𝛾𝑖 x𝑖
𝑖=1
𝑖=1
where
𝛾𝑖 = 𝛼𝛼𝑖 + (1 − 𝛼)𝛽𝑖
42
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Note that 0 ≤ 𝛾𝑖 ≤ 1, so that either 𝛾𝑚 < 1 or 𝛾𝑚 = 1. We show that both cases lead
to a contradiction.
∙ 𝛾𝑚 < 1. Then
𝑚−1
∑(
)
1
𝛼𝛼𝑖 + (1 − 𝛼)𝛽𝑖 x𝑖
1 − 𝛾𝑚 𝑖=1
x𝑚 =
which contradicts the minimality of the set {x1 , x2 , . . . , x𝑚 }.
∙ 𝛾𝑚 = 1. Then 𝛾𝑖 = 0 for every 𝑖 ∕= 𝑚. That is
𝛼𝛼𝑖 + (1 − 𝛼)𝛽𝑖 = 0
which implies that 𝛼𝑖 = 𝛽𝑖
for every 𝑖 ∕= 𝑚
for every 𝑖 ∕= 𝑚 and therefore x = y.
Therefore, if {x1 , x2 , . . . , x𝑚 } is a minimal spanning set, every point must be an extreme
point.
1.181 Assume to the contrary that one of the vertices is not an extreme point of the
simplex. Without loss of generality, assume this is x1 . Then, there exist distinct
y, z ∈ 𝑆 and 0 < 𝛼 < 1 such that
x1 = 𝛼y + (1 − 𝛼)z
(1.25)
Now, since y ∈ 𝑆, there exist 𝛽1 , 𝛽2 , . . . , 𝛽𝑛 such that
y=
𝑛
∑
𝛽𝑖 x𝑖 ,
𝑖=1
𝑛
∑
𝛽𝑖 = 1
𝑖=1
Similarly, there exist 𝛿1 , 𝛿2 , . . . , 𝛿𝑛 such that
z=
𝑛
∑
𝑛
∑
𝛿𝑖 x𝑖 ,
𝑖=1
𝛿𝑖 = 1
𝑖=1
Substituting in (1.25)
x1 = 𝛼
=
𝑛
∑
𝛽𝑖 x𝑖 + (1 − 𝛼)
𝑖=1
𝑛
∑
𝑛
∑
𝛿𝑖 x𝑖
𝑖=1
(
)
𝛼𝛽𝑖 + (1 − 𝛼)𝛿𝑖 x𝑖
𝑖=1
Since
∑𝑛
𝑖=1
(
)
∑
∑
𝛼𝛽𝑖 + (1 − 𝛼)𝛿𝑖 = 𝛼 𝑛𝑖=1 𝛽𝑖 + (1 − 𝛼) 𝑖=1 𝛿𝑖 = 1
x1 =
𝑛
∑
(
)
𝛼𝛽𝑖 + (1 − 𝛼)𝛿𝑖 x𝑖
𝑖=1
Subtracting, this implies
0=
𝑛
∑
(
)
𝛼𝛽𝑖 + (1 − 𝛼)𝛿𝑖 (x𝑖 − x1 )
𝑖=2
This establishes that the set {x2 − x1 , x3 − x1 , . . . , x𝑛 − x1 } is linearly dependent and
therefore 𝐸 = {x1 , x2 , . . . , x𝑛 } is affinely dependent (Exercise 1.157). This contradicts
the assumption that 𝑆 is a simplex.
43
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.182 Let 𝑛 be the dimension of a convex set 𝑆 in a linear space 𝑋. Then 𝑛 = dim aff 𝑆
and there exists a set { x1 , x2 , . . . , x𝑛+1 } of affinely independent points in 𝑆. Define
𝑆 ′ = conv { x1 , x2 , . . . , x𝑛+1 }
Then 𝑆 ′ is an 𝑛-dimensional simplex contained in 𝑆.
1.183 Let w = (𝑤({1}), 𝑤({2}), . . . , 𝑤({𝑛})) denote the vector of individual worths and
let 𝑠 denote the surplus to be distributed, that is
∑
𝑠 = 𝑤(𝑁 ) −
𝑤({𝑖})
𝑖∈𝑁
𝑠 > 0 if the game is essential. For each player 𝑖 = 1, 2, . . . , 𝑛, let
y𝑖 = w + 𝑠e𝑖
be the outcome in which player 𝑖 receives the entire surplus. (e𝑖 is the 𝑖th unit vector.)
Note that
{
𝑤({𝑖}) + 𝑠 𝑗 = 𝑖
𝑖
𝑦𝑗 =
𝑤({𝑖})
𝑗 ∕= 𝑖
Each y𝑖 is an imputation since 𝑦𝑗𝑖 ≥ 𝑤({𝑗}) and
∑
∑
𝑦𝑗𝑖 =
𝑤({𝑗}) + 𝑠 = 𝑤(𝑁 )
𝑗∈𝑁
𝑗∈𝑁
Therefore {y1 , y2 , . . . , y𝑛 } ⊆ 𝐼. Since 𝐼 is convex (why ?), 𝑆 = conv {y1 , y2 , . . . , y𝑛 } ⊆
𝐼. Further, for every 𝑖, 𝑗 ∈ 𝑁 the vectors
y𝑖 − y𝑗 = 𝑠(e𝑖 − e𝑗 )
are linearly independent. Therefore 𝑆 is an 𝑛 − 1-dimensional simplex in ℜ𝑛 .
For any x ∈ 𝐼 define
𝛼𝑖 =
𝑥𝑖 − 𝑤({𝑖})
𝑠
so that
𝑥𝑗 = 𝑤({𝑗}) + 𝛼𝑗 𝑠
Since x is an imputation
∙ 𝛼𝑖 ≥ 0
(∑
)
∑
∑
∙
𝑖∈𝑁 𝛼𝑖 =
𝑖∈𝑁 𝑥𝑖 −
𝑖∈𝑁 𝑤({𝑖}) /𝑠 = 1
∑
We claim that x = 𝑖∈𝑁 𝛼𝑖 y𝑖 since for each 𝑗 = 1, 2, . . . , 𝑛
∑
∑
𝛼𝑖 𝑦𝑗𝑖 =
𝛼𝑖 𝑤({𝑗}) + 𝛼𝑗 𝑠
𝑖∈𝑁
𝑖∈𝑁
= 𝑤({𝑗}) + 𝛼𝑗 𝑠
= 𝑥𝑗
Therefore x ∈ conv {y1 , y2 , . . . , y𝑛 } = 𝑆, that is 𝐼 ⊆ 𝑆. Since we previously showed
that 𝑆 ⊆ 𝐼, we have established that 𝐼 = 𝑆, which is an 𝑛 − 1 dimensional simplex in
ℜ𝑛 .
44
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
𝑥2
𝑥2
𝑥2
𝑥1
𝑥1
𝑥1
1. A non-convex cone
2. A convex set
3. A convex cone
Figure 1.6: A cone which is not convex, a convex set and a convex cone
1.184 See Figure 1.6.
1.185 Let x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) belong to ℜ𝑛+ , which means that 𝑥𝑖 ≥ 0 for every 𝑖. For
every 𝛼 > 0
𝛼x = (𝛼x1 , 𝛼x2 , . . . , 𝛼x𝑛 )
and 𝛼𝑥𝑖 ≥ 0 for every 𝑖. Therefore 𝛼x ∈ ℜ𝑛+ . ℜ𝑛+ is a cone in ℜ𝑛 .
1.186 Assume
𝛼x + 𝛽y ∈ 𝑆 for every x, y ∈ 𝑆 and 𝛼, 𝛽 ∈ ℜ+
(1.26)
Letting 𝛽 = 0, this implies that
𝛼x ∈ 𝑆 for every x ∈ 𝑆 and 𝛼 ∈ ℜ+
so that 𝑆 is a cone. To show that 𝑆 is convex, let x and y be any two elements in 𝑆.
For any 𝛼 ∈ [0, 1], (1.26) implies that
𝛼x + (1 − 𝛼)y ∈ 𝑆
Therefore 𝑆 is convex.
Conversely, assume that 𝑆 is a convex cone. For any 𝛼, 𝛽 ∈ ℜ+ and x, y ∈ 𝑆
𝛽
𝛼
x+
y∈𝑆
𝛼+𝛽
𝛼+𝛽
and therefore
𝛼x + 𝛽y ∈ 𝑆
1.187 Assume 𝑆 satisfies
1. 𝛼𝑆 ⊆ 𝑆 for every 𝛼 ≥ 0
2. 𝑆 + 𝑆 ⊆ 𝑆
45
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
By (1), 𝑆 is a cone. To show that it is convex, let x and y belong to 𝑆. By (1), 𝛼x and
(1 − 𝛼)y belong to 𝑆, and therefore 𝛼x + (1 − 𝛼)y belongs to 𝑆 by (2). 𝑆 is convex.
Conversely, assume that 𝑆 is a convex cone. Then
𝛼𝑆 ⊆ 𝑆
for every 𝛼 ≥ 0
Let x and y be any two elements in 𝑆. Since 𝑆 is convex, 𝑧 = 𝛼 12 x + (1 − 𝛼) 12 y ∈ 𝑆
and since it is a cone, 2𝑧 = x + y ∈ 𝑆. Therefore
𝑆 +𝑆 ⊆𝑆
1.188 We have to show that 𝑌 is convex cone. By assumption, 𝑌 is convex. To show
that 𝑌 is a cone, let y be any production plan in 𝑌 . By convexity
𝛼y = 𝛼y + (1 − 𝛼)0 ∈ 𝑌 for every 0 ≤ 𝛼 ≤ 1
Repeated use of additivity ensures that
𝛼y ∈ 𝑌 for every 𝛼 = 1, 2, . . .
Combining these two conclusions implies that
𝛼y ∈ 𝑌 for every 𝛼 ≥ 0
1.189 Let 𝒮 ⊂ 𝒢 𝑁 denote the set of all superadditive games. Let 𝑤1 , 𝑤2 ∈ 𝑆 be two
superadditive games. Then, for all distinct coalitions 𝑆, 𝑇 ⊂ 𝑁 with 𝑆 ∩ 𝑇 = ∅
𝑤1 (𝑆 ∪ 𝑇 ) ≥ 𝑤1 (𝑆) + 𝑤1 (𝑇 )
𝑤2 (𝑆 ∪ 𝑇 ) ≥ 𝑤2 (𝑆) + 𝑤2 (𝑇 )
Adding
(𝑤1 + 𝑤2 )(𝑆 ∪ 𝑇 ) = 𝑤1 (𝑆 ∪ 𝑇 ) + 𝑤2 (𝑆 ∪ 𝑇 )
≥ 𝑤1 (𝑆) + 𝑤2 (𝑆) + 𝑤1 (𝑇 ) + 𝑤2 (𝑇 )
= (𝑤1 + 𝑤2 )(𝑆) + (𝑤1 + 𝑤2 )(𝑇 )
so that 𝑤1 + 𝑤2 is superadditive. Similarly, we can show that 𝛼𝑤1 is superadditive for
all 𝛼 ∈ ℜ+ . Hence 𝒮 is a convex cone in 𝒢 𝑁 .
∩𝑛
1.190 Let x belong to 𝑖=1 𝑆𝑖 . Then x ∈ 𝑆
∩𝑖 for every 𝑖. Since each 𝑆𝑖 is a cone,
𝛼x ∈ 𝑆𝑖 for every 𝛼 ≥ 0 and therefore 𝛼x ∈ 𝑛𝑖=1 𝑆𝑖 .
Let 𝑆 = 𝑆1 + 𝑆2 + ⋅ ⋅ ⋅ + 𝑆𝑛 and assume x belongs to 𝑆. Then there exist x𝑖 ∈ 𝑆𝑖 ,
𝑖 = 1, 2, . . . , 𝑛 such that
x = x1 + x2 + ⋅ ⋅ ⋅ + x𝑛
For any 𝛼 ≥ 0
𝛼x = 𝛼(x1 + x2 + ⋅ ⋅ ⋅ + x𝑛 )
= 𝛼x1 + 𝛼x2 + ⋅ ⋅ ⋅ + 𝛼x𝑛 ∈ 𝑆
since 𝛼x𝑖 ∈ 𝑆𝑖 for every 𝑖.
46
Solutions for Foundations of Mathematical Economics
1.191
c 2001 Michael Carter
⃝
All rights reserved
1. Suppose that y ∈ 𝑌 . Then, there exist 𝛼, 𝛼2 , . . . , 𝛼8 ≥ 0 such that
y=
8
∑
𝛼𝑖 y𝑖
𝑖=1
and for the first commodity
8
∑
y1 =
𝛼𝑖 𝑦𝑖1
𝑖=1
If y ∕= 0, at least one of the 𝛼𝑖 > 0 and hence y1 < 0 since 𝑦𝑖1 < 0 for 𝑖 =
1, 2, . . . , 8.
2. Free disposal requires that y ∈ 𝑌, y′ ≤ y =⇒ y′ ∈ 𝑌 . Consider the production
plan y′ = (−2, −2, −2, −2). Note that y′ ≤ y3 and y′ ≤ y6 . Suppose that
y′ ∈ 𝑌 . Then there exist 𝛼1 , 𝛼2 , . . . , 𝛼8 ≥ 0 such that
y=
8
∑
𝛼𝑖 y𝑖
𝑖=1
For the third commodity (component), we have
4𝛼1 + 3𝛼2 + 3𝛼3 + 3𝛼4 + 12𝛼5 − 2𝛼6 + 5𝛼8 = −2
(1.27)
and for the fourth commodity
2𝛼2 − 1𝛼3 + 1𝛼4 + 5𝛼6 + 10𝛼7 − 2𝛼8 = −2
(1.28)
Adding to (1.28) to (1.27) gives
4𝛼1 + 5𝛼2 + 2𝛼3 + 4𝛼4 + 12𝛼5 + 3𝛼6 + 10𝛼7 + 3𝛼8 = −4
/ 𝑌.
which is impossible given that 𝛼𝑖 ≥ 0. Therefore, we conclude that y′ ∈
3.
y2 = (−7, −9, 3, 2) ≥ (−8, −13, 3, 1) = y4
3y1 = (−9, −18, 12, 0) ≥ (−11, −19, 12, 0) = y5
y7 = (−8, −5, 0, 10) ≥ (−8, −6, −4, 10) = 2y6
2y3 = (−2, −4, 6, −2) ≥ (−2, −4, 5, −2) = y8
4.
2y3 + y7 = (−2, −4, 6, −2) + (−8, −5, 0, 10)
= (−10, −9, 6, 8)
≥ (−14, −18, 6, 4) = 2y2
20y3 + 2y7 = 20(−1, −2, 3, −1) + 2(−8, −5, 0, 10)
= (−20, −40, 60, −20) + (−16, −10, 0, 20)
= (−36, −50, 60, 0)
≥ (−45, −90, 60, 0) = 15y1
47
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
5. Eff(𝑌 ) = cone { y3 , y7 }.
1.192 This is precisely analogous to Exercise 1.128. We observe that
1. cone 𝑆 is a convex cone.
2. if 𝐶 is any convex cone containing 𝑆, then conv 𝑆 ⊆ 𝐶.
Therefore, cone 𝑆 is the smallest convex cone containing S.
1.193 For any set 𝑆, 𝑆 ⊆ cone 𝑆. If 𝑆 is a convex cone, Exercise 1.186 implies that
cone 𝑆 ⊆ 𝑆.
1.194
1. If 𝑛 > 𝑚 = dim cone 𝑆 = dim lin 𝑆, the elements x1 , x2 , . . . , x𝑛 ∈ 𝑆 are
linearly dependent and therefore there exist numbers 𝛽1 , 𝛽2 , . . . , 𝛽𝑛 , not all zero,
such that (Exercise 1.134)
𝛽1 x1 + 𝛽2 x2 + . . . + 𝛽𝑛 x𝑛 = 0
(1.29)
2. Combining (1.14) and (1.29)
x = x − 𝑡0
𝑛
𝑛
∑
∑
=
𝛼𝑖 x𝑖 − 𝑡
𝛽𝑖 x𝑖
𝑖=1
=
𝑛
∑
𝑖=1
(𝛼𝑖 − 𝑡𝛽𝑖 )x𝑖
(1.30)
𝑖=1
for any 𝑡 ∈ ℜ.
{
}
3. Let 𝑡 = min𝑖 𝛼𝛽𝑖𝑖 : 𝛽𝑖 > 0 =
𝛼𝑗
𝛽𝑗
We note that
∙ 𝑡 > 0 since 𝛼𝑖 > 0 for every 𝑖.
∙ If 𝛽𝑖 > 0, then 𝛼𝑖 /𝛽𝑖 ≥ 𝛼𝑗 /𝛽𝑗 ≥ 𝑡 and therefore 𝛼𝑖 − 𝑡𝛽𝑖 ≥ 0.
∙ If 𝛽𝑖 ≤ 0 then 𝛼𝑖 − 𝑡𝛽𝑖 > 0 for every 𝑡 > 0.
∙ Therefore 𝛼𝑖 − 𝑡𝛽𝑖 ≥ 0 for every 𝑡 and
∙ 𝛼𝑖 − 𝑡𝛽𝑖 = 0 for 𝑖 = 𝑗.
Therefore, (1.30) represents x as a nonnegative combination of only 𝑛 − 1 points.
4. This process can be repeated until x is represented as a convex combination of
at most 𝑚 points.
1.195
1. The affine hull of 𝑆˜ is parallel to the affine hull of 𝑆. Therefore
(
Since
0
0
)
dim 𝑆 = dim aff 𝑆 = dim aff 𝑆˜
˜
∈
/ aff 𝑆,
dim cone 𝑆˜ = dim aff 𝑆˜ + 1 = dim 𝑆 + 1
48
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
(
)
x
2. For every x ∈ conv 𝑆,
∈ conv 𝑆˜ and there exist (Exercise 1.194) 𝑚 + 1
1
(
)
x𝑖
points
∈ 𝑆˜ such that
1
(
x
1
)
(
∈ conv {
x1
1
) (
)
(
)
x2
x𝑚+1
,
,...
}
1
1
This implies that
x ∈ conv { x1 , x2 , . . . , x𝑚+1 }
with x1 , x2 , . . . , x𝑚+1 ∈ 𝑆.
1.196 A subsimplex with precisely one distinguished face is completely labeled. Suppose
a subsimplex has more than one distinguished face. This means that it has vertices
labeled 1, 2, . . . , 𝑛. Since it has 𝑛 + 1 vertices, one of these labels must be repeated
(twice). The distinguished faces lie opposite the repeated vertices. There are precisely
two distinguished faces.
1.197
1. 𝜌(x, y) = ∥x − y∥ ≥ 0.
2. 𝜌(x, y) = ∥x − y∥ = 0 if and only if x − y = 0, that is x = y.
3. Property (3) ensures that ∥−x∥ = ∥x∥ and therefore ∥x − y∥ = ∥y − x∥ so that
𝜌(x, y) = ∥x − y∥ = ∥y − x∥ = 𝜌(y, x)
4. For any z ∈ 𝑋
𝜌(x, y) = ∥x − y∥
= ∥x − z + z − y∥
≤ ∥x − z∥ + ∥z − y∥
= 𝜌(x, z) + 𝜌(z, y)
Therefore 𝜌(x, y) = ∥x − y∥ satisfies the properties required of a metric.
1.198 Clearly ∥x∥∞ ≥ 0 and ∥x∥∞ = 0 if and only if x = 0. Thirdly
𝑛
𝑛
𝑖=1
𝑖=1
∥𝛼x∥ = max ∣𝛼𝑥𝑖 ∣ = ∣𝛼∣ max ∣𝑥𝑖 ∣ = ∣𝛼∣ ∥x∥
To prove the triangle inequality, we note that for any 𝑥𝑖 , 𝑦𝑖 ∈ ℜ
max(𝑥𝑖 + 𝑦𝑖 ) ≤ max 𝑥𝑖 + max 𝑦𝑖
Therefore
𝑛
𝑛
𝑛
𝑖=1
𝑖=1
𝑖=1
∥x∥ = max(𝑥𝑖 + 𝑦𝑖 ) ≤ max 𝑥𝑖 + max 𝑦𝑖 = ∥x∥ + ∥y∥
1.199 Suppose that producing one unit of good 1 requires two units of good 2 and three
units of good 3. The production plan is (1, −2, −3) and the average net output, −2,
is negative. A norm is required to be nonnegative. Moreover, the∑quantities of inputs
𝑛
and outputs may balance out yielding a zero average. That is, ( 𝑖=1 𝑦𝑖 )/𝑛 = 0 does
not imply that 𝑦𝑖 = 0 for all 𝑖.
49
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
1.200
∥x∥ − ∥y∥ = ∥x − y + y∥ − ∥y∥
≤ ∥x − y∥ + ∥y∥ − ∥y∥
= ∥x − y∥
1.201 Using the previous exercise
∥x𝑛 ∥ − ∥x∥ ≤ ∥x𝑛 − x∥ → 0
1.202 First note that each term x𝑛 + y𝑛 ∈ 𝑋 by linearity. Similarly, x + y ∈ 𝑋. Fix
some 𝜖 > 0. There exists some 𝑁x such that ∥x𝑛 − x∥ < 𝜖 for all 𝑛 ≥ 𝑁x . Similarly,
there exists some 𝑁y such that ∥y𝑛 − y∥ < 𝜖 for all 𝑛 ≥ 𝑁y . For all 𝑛 ≥ max{ 𝑁x , 𝑁y },
∥(x𝑛 + y𝑛 ) − (x + y)∥ = ∥(x𝑛 − x) + (y𝑛 − y)∥
≤ ∥x𝑛 − x∥ + ∥y𝑛 − y∥
<𝜖
Similarly, for every 𝑛 ≥ 𝑁x
∥𝛼x𝑛 − 𝛼x∥ = ∣𝛼∣ ∥x𝑛 − x∥
≤ ∣𝛼∣ 𝜖/2
→ 0 as 𝜖 → 0
1.203 Let x𝑛 be a sequence in 𝑆 + 𝑇 converging to x. For every 𝑛, there exists y𝑛 ∈ 𝑆
and z𝑛 ∈ 𝑇 such that x𝑛 = y𝑛 + z𝑛 . Since 𝑇 is compact, there exists a subsequence z𝑚
converging to z ∈ 𝑇 . Let y = lim 𝑦𝑚 . Then y ∈ 𝑆 since 𝑆 is closed. By the previous
exercise, y𝑚 + z𝑚 → y + z. By assumption, y𝑚 + z𝑚 → x so that x = y + z ∈ 𝑆 + 𝑇 .
𝑆 + 𝑇 is closed.
1.204 Yes. Apply Exercise 1.202.
1.205 The 𝑛th partial sum of the series is
s𝑛 = x + 𝛽x + 𝛽 2 x + ⋅ ⋅ ⋅ + 𝛽 𝑛−1 x
Multiplying this equation by 𝛽 gives
𝛽s𝑛 = 𝛽x + 𝛽 2 x + 𝛽 3 x + ⋅ ⋅ ⋅ + 𝛽 𝑛 x
Subtracting this equation from the previous one and canceling common terms gives
(1 − 𝛽)s𝑛 = x − 𝛽 𝑛 x = (1 − 𝛽 𝑛 )x
Provided that 𝛽 ∕= 1
s𝑛 =
x − 𝛽𝑛x
x
𝛽𝑛x
=
−
1−𝛽
1−𝛽
1−𝛽
(1.31)
If 𝛽 < 1, then 𝛽 𝑛 → 0 (Exercise 1.102) and therefore 𝑠𝑛 converges to x/(1 − 𝛽).
1.206
1+
1
1 1 1
+ + +
+ ...
2 4 8 16
is a geometric series 1 + 𝛽 + 𝛽 2 + 𝛽 3 + . . . with 𝛽 = 1/2. The series converges (Exercise
1.205) to
1+
1
1
1
1 1 1
+ + +
+ ⋅⋅⋅ =
=
2 4 8 16
1−𝛽
1−
50
1
2
=2
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.207 The present value of the 𝑛 payments is the 𝑛th partial sum of the geometric
series 𝑥 + 𝛽𝑥 + 𝛽 2 𝑥 + 𝛽 3 𝑥 + . . . which (using (1.31)) is given by
Present value = 𝑠𝑛 =
𝑥 − 𝛽𝑛𝑥
1−𝛽
1.208 By Exercise 1.93, there exists an open set 𝑇 ⊇ 𝑆1 such that 𝑇 ∩𝑆2 = ∅. For every
x ∈ 𝑆1 , there exists an open ball 𝐵(x) such that 𝐵(x) ⊆ 𝑇 and therefore 𝐵(x)∩𝑆2 = ∅.
The collection { 𝐵(x) } of open balls is an open cover for 𝑆1 . Since 𝑆1 is compact there
exists a finite subcover, that is there exists points x1 , x2 , . . . , x𝑛 in 𝑆1 such that
𝑆1 ⊆
𝑛
∪
𝐵(x𝑖 )
𝑖=1
Furthermore, for every x𝑖 , there exists 𝑟𝑛 such that
𝐵(x𝑖 ) = x𝑖 + 𝑟𝑛 𝐵
where 𝐵 is the unit ball. Let 𝑟 = min 𝑟𝑛 . 𝑈 = 𝑟𝐵 is the required neighborhood.
1.209 Clearly 𝑋 × 𝑌 is a normed linear space. To show that it is complete, let (z𝑛 ) be
a Cauchy sequence in 𝑋 × 𝑌 where z𝑛 = (x𝑛 , y𝑛 ). For every 𝜖 > 0, there exists some
𝑁 such that
∥z𝑛 − z𝑚 ∥ = max{ ∥x𝑛 − x𝑚 ∥ , ∥y𝑛 − y𝑚 ∥ } < 𝜖
for every 𝑛, 𝑚 ≥ 𝑁 . This implies that (x𝑛 ) and (y𝑛 ) are Cauchy sequences in 𝑋 and
𝑌 respectively. Since 𝑋 and 𝑌 are complete, both sequences converge. That is, there
exists x ∈ 𝑋 and y ∈ 𝑌 such that ∥x𝑛 − x∥ → 0 and ∥y𝑛 − y∥ → 0. In other words,
given 𝜖 > 0 there exists 𝑁 such that ∥x𝑛 − x∥ < 𝜖 and ∥y𝑛 − y∥ < 𝜖 for every 𝑛 ≥ 𝑁 .
Let z = (x, y). Then, for every 𝑛 ≥ 𝑁
∥z𝑛 − z∥ = max{ ∥x𝑛 − x∥ , ∥y𝑛 − y∥ } < 𝜖
z𝑛 → z.
1.210
1. By assumption, for every 𝑚 = 1, 2, . . . , there exists a point y𝑚 such that
( 𝑛
)
1 ∑
∥y∥ <
∣𝛼𝑖 ∣
𝑚 𝑖=1
where
y = 𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
Let 𝑠𝑚 =
∑𝑛
𝑖=1
∣𝛼𝑖 ∣. By assumption 𝑠𝑚 > 𝑚 ∥y𝑚 ∥ ≥ 0. Define
x𝑚 =
1 𝑚
y
𝑠𝑚
Then
x𝑚 = 𝛽1𝑚 x1 + 𝛽2𝑚 x2 + ⋅ ⋅ ⋅ + 𝛽𝑛𝑚 x𝑛
∑𝑛
1
𝑚
𝑚
𝑚
where 𝛽𝑖𝑚 = 𝛼𝑚
𝑖 /𝑠 ,
𝑖=1 ∣𝛽𝑖 ∣ = 1 and ∥x ∥ < 𝑚 for every 𝑛 = 1, 2, . . . .
𝑚
Consequently ∥x ∥ → 0.
51
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
∑𝑛
2. Since 𝑖=1 ∣𝛽𝑖𝑚 ∣ = 1, ∣𝛽𝑖𝑚 ∣ ≤ 1 for every 𝑖. Consequently, for every coordinate
𝑖, the sequence (𝛽𝑖𝑚 ) is bounded. By the Bolzano-Weierstrass theorem (Exercise
1.119), the sequence (𝛽1𝑚 ) has a convergent subsequence with 𝛽1𝑚 → 𝛽1 . Let x𝑚,1
denote the corresponding subsequence of x𝑚 .
Similarly, 𝛽2𝑚,1 has a subsequence converging to 𝛽2 . Let (x𝑚,2 ) denote the corresponding subsequence of (x𝑚 ). Proceeding coordinate by coordinate, we obtain
a subsequence (x𝑚,𝑛 ) where each term is
x𝑚,𝑛 = 𝛽 𝑚,𝑛 x1 + 𝛽 𝑚,𝑛 x2 + ⋅ ⋅ ⋅ + 𝛽 𝑚,𝑛 x𝑛
and each coefficient converges 𝛽𝑖𝑚,𝑛 → 𝛽𝑖 . Let
x = 𝛽1 x1 + 𝛽2 x2 + ⋅ ⋅ ⋅ + 𝛽2 x𝑛
Then x𝑚,𝑛 → x (Exercise 1.202).
∑𝑛
∑𝑛
𝑚
3. Since
𝑖=1 ∣𝛽𝑖 ∣ = 1 for every 𝑚,
𝑖=1 ∣𝛽𝑖 ∣ = 1. Consequently, at least one
of the coefficients 𝛽𝑖 ∕= 0. Since x1 , x2 , . . . , x𝑛 are linearly independent, x ∕= 0
(Exercise 1.133) and therefore ∥x∥ ∕= 0. But (x𝑚,𝑛 ) is a subsequence of (x𝑚 ).
This contradicts the earlier conclusion (part 1) that ∥x𝑚 ∥ → 0.
1.211
1. Let 𝑋 be a normed linear space 𝑋 of dimension 𝑛 and let { x1 , x2 , . . . , x𝑛 }
be a basis for 𝑋. Let (x𝑚 ) be a Cauchy sequence in 𝑋. Each term x𝑚 has a
unique representation
𝑚
𝑚
x𝑚 = 𝛼𝑚
1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
We show that each of the sequences 𝛼𝑚
𝑖 is a Cauchy sequence in ℜ.
Since x𝑚 is a Cauchy sequence, for every 𝜖 > 0 there exists an 𝑁 such that
∥x𝑚 − x𝑟 ∥ < 𝜖 for all 𝑚, 𝑟 ≥ 𝑁 . Using Lemma 1.1, there exists 𝑐 > 0 such that
𝑛
𝑛
∑
∑
𝑚
𝑟
𝑚
𝑟
𝑐
∣𝛼𝑖 − 𝛼𝑖 ∣ ≤ (𝛼𝑖 − 𝛼𝑖 )x𝑖 = ∥x𝑚 − x𝑟 ∥ < 𝜖
𝑖=1
𝑖=1
for all 𝑚, 𝑟 ≥ 𝑁 . Dividing by 𝑐 > 0 we get for every 𝑖
𝑟
∣𝛼𝑚
𝑖 − 𝛼𝑖 ∣ ≤
𝑛
∑
𝑟
∣𝛼𝑚
𝑖 − 𝛼𝑖 ∣ <
𝑖=1
𝜖
𝑐
𝛼𝑚
𝑖
is a Cauchy sequence in ℜ. Since ℜ is complete, each
Thus each sequence
sequence converges to some limit 𝛼𝑖 ∈ ℜ.
2. Let
x = 𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
Then x ∈ 𝑋 and
𝑛
𝑛
∑
∑
∥x𝑚 − x∥ = (𝛼𝑚
∣𝛼𝑚
𝑖 − 𝛼𝑖 )x𝑖 ≤
𝑖 − 𝛼𝑖 ∣ ∥x𝑖 ∥
𝑖=1
𝑖=1
𝑚
𝑚
Since 𝛼𝑚
𝑖 → 𝛼𝑖 for every 𝑖, ∥x − x∥ → 0 which implies that x → x.
3. Since (x𝑚 ) was an arbitrary Cauchy sequence, we have shown that every Cauchy
sequence in 𝑋 converges. Hence 𝑋 is complete.
52
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.212 Let 𝑆 be an open set according to the ∥⋅∥𝑎 and let x0 be a point in 𝑆. Since 𝑆 is
open, it contains an open ball in the ∥⋅∥𝑎 topology about x0 , namely 𝐵𝑎 (x0 , 𝑟) = { x ∈
𝑋 : ∥x − x0 ∥𝑎 < 𝑟 } ⊆ 𝑆 Let
𝐵𝑏 (x0 , 𝑟) = { x ∈ 𝑋 : ∥x − x0 ∥𝑏 < 𝑟 }
be the open ball about x0 in the ∥⋅∥𝑏 topology. The condition (1.15) implies that
𝐵𝑏 (x0 , 𝑟) ⊆ 𝐵𝑎 (x0 , 𝑟) ⊆ 𝑆 and therefore
x0 ∈ 𝐵𝑏 (x0 , 𝑟) ⊂ 𝑆
𝑆 is open in the ∥⋅∥𝑏 topology. Similarly, any 𝑆 open in the ∥⋅∥𝑏 topology is open in
the ∥⋅∥𝑎 topology.
1.213 Let 𝑋 be a normed linear space of dimension 𝑛. and let { x1 , x2 , . . . , x𝑛 } be
a basis for 𝑋. Let ∥⋅∥𝑎 and ∥⋅∥𝑏 be two norms on 𝑋. Every x ∈ 𝑋 has a unique
representation
x = 𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
Repeated application of the triangle inequality gives
∥x∥𝑎 = ∥𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛 ∥𝑎
𝑛
∑
≤
∥𝛼𝑖 x𝑖 ∥𝑎
𝑖=1
=
𝑛
∑
∣𝛼𝑖 ∣ ∥x𝑖 ∥𝑎
𝑖=1
𝑛
∑
≤𝑘
∣𝛼𝑖 ∣
𝑖=1
where 𝑘 = max𝑖 ∥x𝑖 ∥.
By Lemma 1.1, there is a positive constant 𝑐 such that
𝑛
∑
𝑖=1
∣𝛼𝑖 ∣ ≤ ∥x∥𝑏 /𝑐
Combining these two inequalities, we have
∥x∥𝑎 ≤ 𝑘 ∥x∥𝑏 /𝑐
Setting 𝐴 = 𝑐/𝑘 > 0, we have shown
𝐴 ∥x∥𝑎 ≤ ∥x∥𝑏
The other inequality in (1.15) is obtained by interchanging the roles of ∥⋅∥𝑎 and ∥⋅∥𝑏 .
1.214 Assume x𝑛 → x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ). Then, for every 𝜖 > 0, there exists some 𝑁
such that ∥x𝑛 − x∥∞ < 𝜖. Therefore, for 𝑖 = 1, 2, . . . , 𝑛
∣𝑥𝑛𝑖 − 𝑥𝑖 ∣ ≤ max ∣𝑥𝑛𝑖 − 𝑥𝑖 ∣ = ∥x𝑛 − x∥∞ < 𝜖
𝑖
Therefore 𝑥𝑛𝑖 → x𝑖 .
53
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Conversely, assume that (x𝑛 ) is a sequence in ℜ𝑛 with 𝑥𝑛𝑖 → 𝑥𝑖 for every coordinate 𝑖.
Choose some 𝜖 > 0. For every 𝑖, there exists some integer 𝑁𝑖 such that
∣𝑥𝑛𝑖 − 𝑥𝑖 ∣ < 𝜖 for every 𝑛 ≥ 𝑁𝑖
Let 𝑁 = max𝑖 { 𝑁1 , 𝑁2 , . . . , 𝑁𝑛 }. Then
∣𝑥𝑛𝑖 − 𝑥𝑖 ∣ < 𝜖 for every 𝑛 ≥ 𝑁
and
∥x𝑛 − x∥∞ = max ∣𝑥𝑛𝑖 − 𝑥𝑖 ∣ < 𝜖 for every 𝑛 ≥ 𝑁
𝑖
𝑛
That is, x → x.
A similar proof can be given using the Euclidean norm ∥⋅∥2 , but it is slightly more
complicated. This illustrates an instance where the sup norm is more tractable.
1.215
1. Let 𝑆 ⊆ 𝑋 be closed and bounded and let x𝑚 be a sequence in 𝑆. Every
term x𝑚 has a representation
𝑛
∑
x𝑚 =
𝛼𝑚
𝑖 x𝑖
𝑖=1
Since 𝑆 is bounded, so is x𝑚 . That is, there exists 𝑘 such that ∥x𝑚 ∥ ≤ 𝑘 for all
𝑚. Applying Lemma 1.1, there is a positive constant 𝑐 such that
𝑐
𝑛
∑
∣𝛼𝑖 ∣ ≤ ∥x𝑚 ∥ ≤ 𝑘
𝑖=1
Hence, for every 𝑖, the sequence of scalars 𝛼𝑛𝑖 is bounded.
2. By the Bolzano-Weierstrass theorem (Exercise 1.119), the sequence 𝛼𝑚
1 has a
convergent subsequence with limit 𝛼1 . Let 𝑥𝑚
(1) be the corresponding subsequence
of x𝑚 .
𝑚
3. Similarly, 𝑥𝑚
(1) has a subsequence for which the corresponding scalars 𝛼2 converge to 𝛼2 . Repeating this process 𝑛 times (this is were finiteness is important), we deduce the existence of a subsequence 𝑥𝑚
(𝑛) whose scalars converge to
(𝛼1 , 𝛼2 , . . . , 𝛼𝑛 ).
4. Let
x=
𝑛
∑
𝛼𝑖 x𝑖
𝑖=1
𝑚
𝑚
Since 𝛼𝑚
𝑖 → 𝛼𝑖 for every 𝑖, ∥x − x∥ → 0 which implies that x → x.
5. Since 𝑆 is closed, x ∈ 𝑆.
6. We have shown that every sequence in 𝑆 has a subsequence which converges in
𝑆. 𝑆 is compact.
1.216 Let x and y belong to 𝐵 = { x : ∥x∥ < 1 }, the unit ball in the normed linear
space 𝑋. Then ∥x∥ , ∥y∥ < 1. By the triangle inequality
∥𝛼x + (1 − 𝛼)y∥ ≤ 𝛼 ∥x∥ + (1 − 𝛼) ∥y∥ ≤ 𝛼 + (1 − 𝛼) = 1
Hence 𝛼x + (1 − 𝛼)y ∈ 𝐵.
54
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.217 If int 𝑆 is empty, it is trivially convex. Therefore, assume int 𝑆 ∕= ∅ and let
x, y ∈ int 𝑆. We must show that z = 𝛼x + (1 − 𝛼)y ∈ int 𝑆.
Since x, y ∈ int 𝑆, there exists some 𝑟 > 0 such that the open balls 𝐵(x, 𝑟) and 𝐵(y, 𝑟)
are both contained in int 𝑆. Let w be any vector with ∥w∥ < 𝑟. The point
z + w = 𝛼(x + w) + (1 − 𝛼)(y + w) ∈ 𝑆
since x + w ∈ 𝐵(x, 𝑟) ⊂ 𝑆 and y + w ∈ 𝐵(y, 𝑟) ⊂ 𝑆 and 𝑆 is convex. Hence z is an
interior point of 𝑆.
Similarly, if 𝑆 is empty, it is trivially convex. Therefore, assume 𝑆 ∕= ∅ and let x, y ∈ 𝑆.
Choose some 𝛼. We must show that 𝑧 = 𝛼x + (1 − 𝛼)y ∈ 𝑆.
There exist sequences (x𝑛 ) and (y𝑛 ) in 𝑆 which converge to x and y respectively
(Exercise 1.105). Since 𝑆 is convex, the sequence (𝛼x𝑛 + (1 − 𝛼)y𝑛 ) lies in 𝑆 and
moreover converges to 𝛼x + (1 − 𝛼)y = z (Exercise 1.202). Therefore 𝑧 is the limit of
a sequence in 𝑆 and hence 𝑧 ∈ 𝑆. Therefore, 𝑆 is convex.
1.218 Let x̄ = 𝛼x1 + (1 − 𝛼)x2 for some 𝛼 ∈ (0, 1). Since x1 ∈ 𝑆,
x1 ∈ 𝑆 + 𝑟𝐵
𝛼x1 ∈ 𝛼(𝑆 + 𝑟𝐵)
The open ball about x̄ of radius 𝑟 is
𝐵(x̄, 𝑟) = x̄ + 𝑟𝐵
= 𝛼x1 + (1 − 𝛼)x2 + 𝑟𝐵
⊆ 𝛼(𝑆 + 𝑟𝐵) + (1 − 𝛼)x2 + 𝑟𝐵
= 𝛼𝑆 + (1 − 𝛼)x2 + (1 + 𝛼)𝑟𝐵
(
)
1+𝛼
= 𝛼𝑆 + (1 − 𝛼) x2 +
𝑟𝐵
1−𝛼
Since x2 ∈ int 𝑆
x2 +
(
)
1+𝛼
1+𝛼
𝑟𝐵 = 𝐵 x2 ,
𝑟 ⊆𝑆
1−𝛼
1−𝛼
for sufficiently small 𝑟. For such 𝑟
𝐵(x̄, 𝑟) ⊆ 𝛼𝑆 + (1 − 𝛼)𝑆
=𝑆
by Exercise 1.168. Therefore x̄ ∈ int 𝑆.
1.219 It is easy to show that
𝑆⊆
∩
𝑆𝑖
𝑖∈𝐼
To show the converse, choose any x ∈ 𝑆 and let x0 ∈ 𝑆𝑖 for every 𝑖 ∈ 𝐼. By Exercise
1.218, 𝛼x + (1 − 𝛼)x0 ∈ 𝑆𝑖 for all 0 < 𝛼 < 1. This implies that 𝛼x + (1 − 𝛼)x0 ∈
∩𝑖∈𝐼 𝑆𝑖 = 𝑆 for all 0 < 𝛼 < 1, and therefore that x0 = lim𝛼→0 𝛼x + (1 − 𝛼)x0 ∈ 𝑆.
1.220 Assume that x ∈ int 𝑆. Then, there exists some 𝑟 such that
𝐵(x, 𝑟) = x + 𝑟𝐵 ⊆ 𝑆
55
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Let y be any element in the unit ball 𝐵. Then −y ∈ 𝐵 and
x1 = x + 𝑟y ∈ 𝑆
x2 = x − 𝑟y ∈ 𝑆
so that
x=
1
1
x1 + x2
2
2
x is not an extreme point. We have shown that no interior point is an extreme point;
hence every extreme point must be a boundary point.
1.221 We showed in Exercise 1.220 that ext(𝑆) ⊆ b(𝑆). To show the converse, assume
that x is a boundary point which is not an extreme point. That is, there exist x1 , x2 ∈ 𝑆
such that
x = 𝛼x1 + (1 − 𝛼)x2
0<𝛼<1
This contradicts the assumption that 𝑆 is strictly convex.
1.222 If 𝑆 is open, int 𝑆 = 𝑆. Since 𝑆 is convex
𝛼x + (1 − 𝛼)y ∈ 𝑆 = int 𝑆 for every 0 ≤ 𝛼 ≤ 1
A fortiori for every x ∕= y
𝛼x + (1 − 𝛼)y ∈ 𝑆 = int 𝑆 for every 0 < 𝛼 < 1
𝑆 is strictly convex.
1.223 Let 𝑆 be open and x ∈ conv 𝑆. That is
x=
with x𝑖 ∈ 𝑆, 𝛼𝑖 ∈ [0, 1] and
𝑛
∑
𝛼𝑖 x𝑖
𝑖=1
∑
𝑖
𝛼𝑖 = 1. The open ball about x
𝐵(x, 𝑟) = x + 𝑟𝐵
( 𝑛
)
∑
=
𝛼𝑖 x𝑖 + 𝑟𝐵
𝑖=1
=
=
( 𝑛
∑
)
𝛼𝑖 x𝑖
𝑖=1
𝑛
∑
(
+
𝑛
∑
)
𝛼𝑖 𝑟𝐵
𝑖=1
(𝛼𝑖 x𝑖 + 𝑟𝐵)
𝑖=1
Since 𝑆 is open, there exists some 𝑟 such that x𝑖 + 𝑟𝐵 ∈ 𝑆 for all 𝑖. For this 𝑟
𝐵(x, 𝑟) ⊆ conv 𝑆
Therefore conv 𝑆 is open.
1.224
conv 𝑆 = { (𝑥1 , 𝑥2 ) ∈ ℜ2 : 𝑥2 > 0 }
56
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.225 𝑆 is closed and bounded (Proposition 1.1).
1. 𝑆 is bounded, that is there exists some 𝐾 such that ∥x∥ < 𝐾 for every x ∈ 𝑆.
Let x ∈ conv 𝑆. x is a convex combination of a finite number of points in 𝑆, that
is
x=
with 𝑥𝑖 ∈ 𝑆, 𝛼𝑖 ≥ 0 and
𝑚
∑
𝛼𝑖 x𝑖
𝑖=1
∑𝑚
𝑖=1
𝛼𝑖 = 1. By the triangle inequality
∥x∥ ≤
𝑚
∑
𝛼𝑖 ∥x𝑖 ∥ < 𝐾
𝑖=1
Therefore conv 𝑆 is bounded.
2. Let x belong to conv 𝑆. Then, there exists a sequence (x𝑘 ) in conv 𝑆 which converges to x. By Carathéodory’s theorem, each term x𝑘 is a convex combination
of at most 𝑛 + 1 points, that is
𝑛+1
∑
x𝑘 =
𝛼𝑘𝑖 x𝑘𝑖
𝑖=1
where x𝑘𝑖 ∈ 𝑆.
For each 𝑖, the sequence (x𝑘𝑖 ) lies in a compact set 𝑆 and hence contains a convergent subsequence. Similarly, the sequence of coefficients (𝛼𝑘𝑖 ) ∈ [0, 1] is bounded
and contains a convergent subsequence (Bolzano-Weierstrass theorem, Exercise
1.119). Proceeding coordinate by coordinate as in Exercise 1.215, we can construct convergent subsequences 𝛼𝑘𝑖 → 𝛼𝑖 and x𝑘𝑖 − x𝑖 .
3. Let
x=
𝑛+1
∑
𝛼𝑖 x𝑖
𝑖=1
Since
𝑛+1
∑
𝑘
x − x = (𝛼𝑘𝑖 x𝑘𝑖 − 𝛼𝑖 x𝑖 )
≤
=
=
𝑖=1
𝑛+1
∑
𝑘 𝑘
𝛼𝑖 x𝑖 − 𝛼𝑖 x𝑖 𝑖=1
𝑛+1
∑
𝑖=1
𝑛+1
∑
𝑘 𝑘
𝛼𝑖 x𝑖 − 𝛼𝑖 x𝑘𝑖 + 𝛼𝑖 x𝑘𝑖 − 𝛼𝑖 x𝑖 ∑
𝑘
𝑛+1
𝛼𝑖 − 𝛼𝑖 x𝑘𝑖 +
∣𝛼𝑖 ∣ x𝑘𝑖 − x𝑖 𝑖=1
𝑖=1
→0
as 𝛼𝑘𝑖 → 𝛼𝑖 , x𝑘𝑖 − x𝑖 , 𝛼𝑖 and x𝑘 are bounded. Therefore x𝑘 → x.
∑𝑛+1
4. Since 𝛼𝑘𝑖 ≥ 0 and 𝑖=1 𝛼𝑘𝑖 = 1 for every 𝑘, we conclude that 𝛼𝑖 = lim 𝛼𝑘𝑖 ≥ 0 and
∑𝑛+1
𝛼𝑖 = 1. Furthermore, since 𝑆 is closed, x𝑖 ∈ 𝑆 for every 𝑖 and therefore
𝑖=1
∑𝑛+1
x = 𝑖=1 𝛼𝑖 x𝑖 ∈ conv 𝑆.
57
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
5. We have shown that conv 𝑆 ⊆ conv 𝑆, that is conv 𝑆 is closed.
6. conv 𝑆 is a closed and bounded subset of a finite dimensional space, and hence
conv 𝑆 is compact (Proposition 1.4 and Exercise 1.215).
1.226
1. 𝑆 is bounded. Therefore, there exists some 𝑐 such that ∥x∥∞ = max𝑖 ∣𝑥𝑖 ∣ <
𝑐 for every x ∈ 𝑆. That is −𝑐 ≤ 𝑥𝑖 ≤ 𝑐 so that
x ∈ 𝐶 = { x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) ∈ ℜ𝑛 : −𝑐 ≤ 𝑥𝑖 ≤ 𝑐 for every 𝑖 }
Therefore 𝑆 ⊂ 𝐾.
2. Exercise 1.177.
3. 𝐶 is the convex hull of a finite set and hence is compact (Exercise 1.225)
4. 𝑆 is a closed subset of a compact set and hence is compact (Exercise 1.110).
1.227 A polytope is the convex hull of a finite set. Any finite set is compact.
1.228 The unit simplex Δ𝑛−1 in ℜ𝑛 is the convex hull of the unit vectors e1 , e2 , . . . , e𝑛 ,
that is
Δ𝑛−1 = conv { e1 , e2 , . . . , e𝑛 }
{
}
∑
= (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) ∈ ℜ𝑛 : 𝑥𝑖 ≥ 0 and
𝑥𝑖 = 1
This simplex has a nonempty relative interior, namely
{
}
∑
ri 𝑆 = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) ∈ ℜ𝑛 : 𝑥𝑖 > 0 and
𝑥𝑖 < 1
1.229 Let 𝑛 = dim 𝑆. By Exercise 1.182, 𝑆 contains a simplex 𝑆 𝑛 of the same dimension. That is, there exist 𝑛 vertices v1 , v2 , . . . , v𝑛 such that
𝑆 𝑛 = conv { v1 , v2 , . . . , v𝑛 }
{
= 𝛼1 v1 + 𝛼2 v2 + ⋅ ⋅ ⋅ + 𝛼𝑛 v𝑛 :
𝛼1 , 𝛼2 , . . . , 𝛼𝑛 ≥ 0,
𝛼1 + 𝛼2 + . . . + 𝛼𝑛 = 1
}
Analogous to the previous part, the relative interior of 𝑆 𝑛 is
ri 𝑆 𝑛 = conv { v1 , v2 , . . . , v𝑛 }
{
= 𝛼1 v1 + 𝛼2 v2 + ⋅ ⋅ ⋅ + 𝛼𝑛 v𝑛 :
𝛼1 , 𝛼2 , . . . , 𝛼𝑛 > 0,
𝛼1 + 𝛼2 + . . . + 𝛼𝑛 = 1
}
which is nonempty.
Note, the proposition is trivially true for a set containing a single point (𝑛 = 0), since
this point is the whole affine space.
1.230 If int 𝑆 ∕= ∅, then aff 𝑆 = 𝑋 and ri 𝑆 = int 𝑆. The converse follows from Exercise
1.229.
1.231 Since
𝑚 > inf
x∈𝑋
𝑛
∑
𝑖=1
58
𝑝𝑖 𝑥𝑖
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
there exists some x ∈ 𝑋 such that
𝑛
∑
𝑝𝑖 𝑥𝑖 ≤ 𝑚
𝑖=1
Therefore x ∈ 𝑋(p, 𝑚) which is nonempty.
Let 𝑝ˇ = min𝑖 𝑝𝑖 be the lowest price of the 𝑛 goods. Then 𝑋(p, 𝑚) ⊆ 𝐵(0, 𝑚/ˇ
𝑝) and
so is bounded. (That is, no component of an affordable bundle can contain more than
𝑚/ˇ
𝑝 units.)
To show that 𝑋(p, 𝑚) is closed, let (x𝑛 ) be a sequence of consumption bundles in
𝑋(p, 𝑚). Since 𝑋(p, 𝑚) is bounded, x𝑛 → x ∈ 𝑋. Furthermore
𝑝1 𝑥𝑛1 + 𝑝2 𝑥𝑛2 + ⋅ ⋅ ⋅ + 𝑝𝑛 𝑥𝑛𝑛 ≤ 𝑚 for every 𝑛
This implies that
𝑝1 𝑥1 + 𝑝2 𝑥2 + ⋅ ⋅ ⋅ + 𝑝𝑛 𝑥𝑛 ≤ 𝑚
𝑛
so that x → x ∈ 𝑋(p, 𝑚). Therefore 𝑋(p, 𝑚) is closed.
We have shown that 𝑋(p, 𝑚) is a closed and bounded subset of ℜ𝑛 . Hence it is compact
(Proposition 1.4).
1.232 Let x, y ∈ 𝑋(p, 𝑚). That is
𝑛
∑
𝑝𝑖 𝑥𝑖 ≤ 𝑚
𝑖=1
𝑛
∑
𝑝𝑖 𝑦 𝑖 ≤ 𝑚
𝑖=1
For any 𝛼 ∈ [0, 1], the cost of the weighted average bundle z = 𝛼x + (1 − 𝛼)y (where
each component 𝑧𝑖 = 𝛼𝑥𝑖 + (1 − 𝛼)𝑦𝑖 ) is
𝑛
∑
𝑖=1
𝑝𝑖 𝑧 𝑖 =
𝑛
∑
𝑝𝑖 (𝛼𝑥𝑖
𝑖=1
𝑛
∑
=𝛼
+ (1 − 𝛼)𝑦𝑖
𝑝𝑖 𝑥𝑖 + (1 − 𝛼)
𝑖=1
𝑛
∑
𝑝𝑖 𝑦 𝑖
𝑖=1
≤ 𝛼𝑚 + (1 − 𝛼)𝑚
=𝑚
Therefore z ∈ 𝑋(p, 𝑚). The budget set 𝑋(p, 𝑚) is convex.
1.233
1. Assume that ≻ is strongly monotone. Let x, y ∈ 𝑋 with x ≥ y.
Either x ≩ y so that x ≻ y by strong monotonicity
or x = y so that x ≿ y by reflexivity.
In either case, x ≿ y so that ≿ is weakly monotonic.
2. Again, assume that ≿ is strongly monotonic and let y ∈ 𝑋. 𝑋 is open (relative
to itself). Therefore, there exists some 𝑟 such that
𝐵(y, 𝑟) = y + 𝑟𝐵 ⊆ 𝑋
Let x = y + 𝑟e1 be the consumption bundle containing 𝑟 more units of good 1.
Then e1 ∈ 𝐵, x ∈ 𝐵(y, 𝑟) and therefore ∥x − y∥ < 𝑟. Furthermore, x ≩ y and
therefore x ≻ y.
59
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3. Assume ≿ is locally nonsatiated. Then, for every x ∈ 𝑋, there exists some y ∈ 𝑋
such that y ≻ x. Therefore, there is no best element.
1.234
is assume that x∗ ≿ x for every x ∈ 𝐵(p, 𝑚) but that
∑𝑛 Assume otherwise, that∑
𝑛
𝑖=1 𝑝𝑖 𝑥𝑖 < 𝑚. Let 𝑟 = 𝑚 −
𝑖=1 𝑝𝑖 𝑥𝑖 be the unspent income. Spending the residual
on good 1, the commodity bundle x = x∗ + 𝑝𝑟1 e1 is affordable
𝑛
∑
𝑝𝑖 𝑥𝑖 =
𝑖=1
𝑛
∑
𝑝𝑖 𝑥∗𝑖 + 𝑝1
𝑖=1
𝑟
=𝑚
𝑝1
Moreover, since x ≩ x∗ , x ≻ x∗ , which contradicts the assumption that x∗ is the best
element in 𝑋(p, 𝑚).
1.235
otherwise, that is assume that x∗ ≿ x for every x ∈ 𝐵(p, 𝑚) but that
∑𝑛 Assume
∗
∗
𝑖=1 𝑝𝑖 𝑥𝑖 < 𝑚. This implies that x ∈ int 𝑋(p, 𝑚). Therefore, there exists a neigh∗
borhood 𝑁 of x with 𝑁 ⊆ 𝑋(p, 𝑚). Within this neighborhood, there exists some
x ∈ 𝑁 ⊆ 𝑋(p, 𝑚) with x ≻ x∗ , which contradicts the assumption that x∗ is the best
element in 𝑋(p, 𝑚).
1.236
1. Assume ≿ is continuous. Choose some y ∈ 𝑋. For any x0 in ≻(y), x0 ≻ y
and (since ≿ is continuous) there exists some neighborhood 𝑆(x0 ) such that x ≻ y
for every x ∈ 𝑆(x0 ). That is, 𝑆(x0 ) ⊆ ≻(y) and ≻(y) is open.
Similarly, for any x0 ∈ ≺(y), x0 ≺ y and there exists some neighborhood 𝑆(x0 )
such that x ≺ y for every x ∈ 𝑆(x0 ). Thus 𝑆(x0 ) ⊆ ≺(y) and ≺(y) is open.
2. Conversely, assume that the sets ≻(y) = { x : x ≻ y } and ≺(y) = { x : x ≺ y }
are open in x. Assume x0 ≻ y0 .
(a) Suppose there exists some y such that x0 ≻ y ≻ z0 . Then x0 ∈ ≻(y), which
is open by assumption. That is, ≻(y) is an open neighborhood of x0 and
x ≻ y for every x ∈ ≻(y). Similarly, ≺(y) is an open neighborhood of z0 for
which y ≻ z for every z ∈ ≺(y). Therefore 𝑆(x0 ) = ≻(y) and 𝑆(z0 ) = ≺(y)
are the required neighborhoods of x0 and z0 respectively such that
x≻y≻z
for every x ∈ 𝑆(x0 ) and y ∈ 𝑆(z0 )
(b) Suppose there is no y such that x0 ≻ y ≻ z0 .
i. By assumption
∙ ≻(z0 ) is open
∙ x0 ≻ z0 which implies x0 ∈ ≻(z0 ),
Therefore ≻(z0 ) is an open neighborhood of x0 .
ii. Since ≿ is complete, either y ≺ x0 or y ≿ x0 for every y ∈ 𝑋 (Exercise
1.56. Since there is no y such that x0 ≻ y ≻ z0
y ≻ z0 =⇒ y ∕≺ x0 =⇒ y ≿ x0
Therefore ≻(z0 ) = ≿(x0 ).
iii. Since x ≿ x0 ≻ z0 for every x ∈ ≿(x0 ) = ≻(z0 )
x ≻ z0 for every x ∈ ≻(z0 )
60
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
iv. Therefore 𝑆(x0 ) = ≻(z0 ) is an open neighborhood of x0 such that
x ≻ z0 for every x ∈ 𝑆(x0 )
Similarly, 𝑆(z0 ) = ≺(x0 ) is an open neighborhood of z0 such that z ≺ x0
for every z ∈ 𝑆(z0 ). Consequently
x≻z
for every x ∈ 𝑆(x0 ) and z ∈ 𝑆(z0 )
(
)𝑐
3. ≿(y) = ≺(y) (Exercise 1.56). Therefore, ≿(y) is closed if and only if ≺(y) is
open (Exercise 1.80). Similarly, ≾(y) is closed if and only if ≻(y) is open.
1.237
1. Let 𝐹 = { (x, y) ∈ 𝑋 ×𝑋 : x ≿ y }. Let ((x𝑛 , y𝑛 )) be a sequence in 𝐹 which
converges to (x, y). Since (x𝑛 , y𝑛 ) ∈ 𝐹 , x𝑛 ≿ y𝑛 for every 𝑛. By assumption,
x ≿ y. Therefore, (x, y) ∈ 𝐹 which establishes that 𝐹 is closed (Exercise 1.106)
Conversely, assume that 𝐹 is closed and let ((x𝑛 , y𝑛 )) be a sequence converging
to (x, y) with x𝑛 ≿ y𝑛 for every 𝑛. Then ((x𝑛 , y𝑛 )) ∈ 𝐹 which implies that
(x, y) ∈ 𝐹 . Therefore x ≿ y.
2. Yes. Setting y𝑛 = y for every 𝑛, their definition implies that for every sequence
(x𝑛 ) in 𝑋 with x𝑛 ≿ y, x = lim x𝑛 ≿ y. That is, the upper contour set
≿(y) = { x : x ≿ y } is closed. Similarly, the lower contour set ≾(y) is closed.
Conversely, assume that the preference relation is continuous (in our definition).
We show that the set 𝐺 = { (x, y) : x ≺ y } is open. Let (x0 , y0 ) ∈ 𝐺. Then
x0 ≺ y0 . By continuity, there exists neighborhoods 𝑆(x0 ) and 𝑆(y0 ) of x0 and
y0 such that x ≺ y for every x ∈ 𝑆(x0 ) and y ∈ 𝑆(y0 ). Hence, for every
(x, y) ∈ 𝑁 = 𝑆(x0 ) × 𝑆(y0 ), x ≺ y. Therefore 𝑁 ⊆ 𝐺 which implies that 𝐺 is
open. Consequently 𝐺𝑐 = { (x, y) : x ≿ y } is closed.
1.238 Assume the contrary. That is, assume there is no y with x ≻ y ≻ z. Since ≿ is
complete, either y ≺ x0 or y ≿ x0 for every y ∈ 𝑋 (Exercise 1.56). Since there is no
y such that x0 ≻ y ≻ z0
y ≻ z0 =⇒ y ∕≺ x0 =⇒ y ≿ x0
Therefore ≻(z0 ) = ≿(x0 ). By continuity, ≻(z0 ) is open and ≿(x0 ) is closed. Hence
≻(z0 ) = ≿(x0 ) is both open and closed (Exercise 1.83).
Alternatively, ≿(x0 ) and ≾(z0 ) are both open sets which partition 𝑋. This contradicts
the assumption that 𝑋 is connected.
1.239 Let 𝑋 ∗ denote the set of best elements. As demonstrated in the preceding proof
∩
𝑋∗ =
≿(y𝑖 )
y∈𝑋
Therefore 𝑋 ∗ is closed (Exercise 1.85) and hence compact (Exercise 1.110).
1.240 Assume for simplicity that 𝑝1 = 𝑝2 = 1 and that 𝑚 = 1. Then, the budget set is
𝐵(1, 1) = { x ∈ ℜ2++ : 𝑥1 + 𝑥2 ≤ 1 }
The consumer would like to spend as much as possible of her income on good 1.
However, the point (1, 0) is not feasible, since (1, 0) ∈
/ 𝑋.
61
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.241 Essentially, consumer theory (in economics) is concerned with predicting the way
in which consumer purchases vary with changes in observable parameters such as prices
and incomes. Predictions are deduced by assuming that the consumer will consistently
choose the best affordable alternative in her budget set. The theory would be empty
if there was no such optimal choice.
1.242
1. Let 𝑋 0 = 𝑋 ∩ ℜ𝑛+ . Then 𝑋 0 is compact and 𝑋 1 ⊆ 𝑋 0 . Define the order
x ≿1 y if and only if 𝑑1 (x) ≤ 𝑑1 (y). Then ≿1 is continuous on 𝑋 and
𝑋 1 = { x ∈ 𝑋 : 𝑑1 (x) ≤ 𝑑1 (y) for every y ∈ 𝑋 }
is the set of best elements in 𝑋 with respect to the order ≿1 . By Exercise 1.239,
𝑋 1 is nonempty and compact.
2. Assume 𝑋 𝑘−1 is compact. Define the order x ≿𝑘 y if and only if 𝑑𝑘 (x) ≤ 𝑑𝑘 (y).
Then ≿𝑘 is continuous on 𝑋 𝑘−1 and
𝑋 𝑘 = { x ∈ 𝑋 𝑘−1 : 𝑑𝑘 (x) ≤ 𝑑𝑘 (y) for every y ∈ 𝑋 𝑘−1 }
is the set of best elements in 𝑋 𝑘−1 with respect to the order ≿𝑘 . By Exercise
1.239, 𝑋 𝑘 is nonempty and compact.
3. Assume x ∈ Nu. Then
x ≿ y for every y ∈ 𝑋
d(x) ≾𝐿 d(y) for every y ∈ 𝑋
For every 𝑘 = 1, 2, . . . , 2𝑛
d𝑘 (x) ≤ d𝑘 (y) for every y ∈ 𝑋
𝑛
𝑛
which implies x ∈ 𝑋 𝑘 . In particular x ∈ 𝑋 2 . Therefore Nu ⊆ 𝑋 2 .
𝑛
𝑛
𝑛
Suppose Nu ⊂ 𝑋 2 . Then there exists some x, y ∈ 𝑋, x ∈
/ 𝑋 2 and y ∈ 𝑋 2 such
𝑑
/ 𝑋 𝑘 . Then 𝑑𝑘 (x) > 𝑑𝑘 (y).
that x ≿ y. Let 𝑘 be the smallest integer such that x ∈
𝑙
But x ∈ 𝑋 for every 𝑙 < 𝑘 and therefore 𝑑𝑙 (x) = 𝑑𝑙 (y) for 𝑙 = 1, 2, . . . , 𝑘 − 1.
This means that d(y) ≺𝐿 d(x) so that x ≺𝑑 y. This contradiction establishes
𝑛
that Nu = 𝑋 2 .
1.243 Assume ≿ is convex. Choose any y ∈ 𝑋 and let x ∈ ≿(y). Then x ≿ y. Since
≿ is convex, this implies that
𝛼x + (1 − 𝛼)y ≿ y
for every 0 ≤ 𝛼 ≤ 1
and therefore
𝛼x + (1 − 𝛼)y ∈ ≿(y)
for every 0 ≤ 𝛼 ≤ 1
Therefore ≿(y) is convex.
To show the converse, assume that ≿(y) is convex for every y ∈ 𝑋. Choose x, y ∈ 𝑋.
Interchanging x and y if necessary, we can assume that x ≿ y so that x ∈ ≿(y). Of
course, y ∈ ≿(y). Since ≿(y) is convex
𝛼x + (1 − 𝛼)y ∈ ≿(y)
for every 0 ≤ 𝛼 ≤ 1
which implies
𝛼x + (1 − 𝛼)y ≿ y
for every 0 ≤ 𝛼 ≤ 1
62
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
1.244 𝑋 ∗ may be empty, in which case it is trivially convex. Otherwise, let x∗ ∈ 𝑋 ∗ .
For every x ∈ 𝑋 ∗
x ≿ x∗ which implies x ∈ ≿(x∗ )
Therefore 𝑋 ∗ ⊆ ≿(x∗ ). Conversely, by transitivity
x ≿ x∗ ≿ y for every y ∈ 𝑋
for every x ∈ ≿(x∗ ) which implies ≿(x∗ ) ⊆ 𝑋 ∗ . Therefore, 𝑋 ∗ = ≿(x∗ ) which is
convex.
1.245 To show that ≿𝑑 is strictly convex, assume that x, y ∈ 𝑋 are such that d(x) = d(y) with x ≠ y. Suppose
    d(x) = (𝑑(𝑆1, x), 𝑑(𝑆2, x), . . . , 𝑑(𝑆_{2^𝑛}, x))
In the order 𝑆1, 𝑆2, . . . , 𝑆_{2^𝑛}, let 𝑆𝑘 be the first coalition for which 𝑑(𝑆𝑘, x) ≠ 𝑑(𝑆𝑘, y). That is
    𝑑(𝑆𝑗, x) = 𝑑(𝑆𝑗, y) for every 𝑗 < 𝑘    (1.32)
Since 𝑑(𝑆𝑘, x) ≠ 𝑑(𝑆𝑘, y) and d(x) is listed in descending order, we must have
    𝑑(𝑆𝑘, x) > 𝑑(𝑆𝑘, y)    (1.33)
and
    𝑑(𝑆𝑘, x) ≥ 𝑑(𝑆𝑗, y) for every 𝑗 > 𝑘    (1.34)
Choose 0 < 𝛼 < 1 and let z = 𝛼x + (1 − 𝛼)y. For any coalition 𝑆
    𝑑(𝑆, z) = 𝑤(𝑆) − Σ_{𝑖∈𝑆} 𝑧𝑖
            = 𝑤(𝑆) − Σ_{𝑖∈𝑆} (𝛼𝑥𝑖 + (1 − 𝛼)𝑦𝑖)
            = 𝑤(𝑆) − 𝛼 Σ_{𝑖∈𝑆} 𝑥𝑖 − (1 − 𝛼) Σ_{𝑖∈𝑆} 𝑦𝑖
            = 𝛼(𝑤(𝑆) − Σ_{𝑖∈𝑆} 𝑥𝑖) + (1 − 𝛼)(𝑤(𝑆) − Σ_{𝑖∈𝑆} 𝑦𝑖)
            = 𝛼𝑑(𝑆, x) + (1 − 𝛼)𝑑(𝑆, y)
Using (1.32) to (1.34), this implies that
    𝑑(𝑆𝑗, z) = 𝑑(𝑆𝑗, x),  𝑗 < 𝑘
    𝑑(𝑆𝑘, z) < 𝑑(𝑆𝑘, x)
    𝑑(𝑆𝑘, z) ≤ 𝑑(𝑆𝑗, x),  𝑗 > 𝑘
for every 0 < 𝛼 < 1. Therefore d(z) ≺𝐿 d(x). Thus z ≻𝑑 x, which establishes that ≿𝑑 is strictly convex.
The set of feasible outcomes is convex. Assume x, y ∈ Nu ⊆ 𝑋, x ∕= y. Then
d(x) = d(y) and
z = 𝛼x + (1 − 𝛼)y ≻𝑑 x
for every 0 < 𝛼 < 1 which contradicts the assumption that x ∈ Nu. We conclude that
the nucleolus contains only one element.
1.246
1. (a) Clearly ≺(x0 ) ⊆ ≾(x0 ) and ≻(y0 ) ⊆ ≿(y0 ). Consequently ≺(x0 ) ∪
≻(y0 ) ⊆ ≾(x0 ) ∪ ≿(y0 ). We claim that these sets are in fact equal.
Let z ∈ ≾(x0) ∪ ≿(y0). Suppose that z ∈ ≾(x0) but z ∉ ≺(x0). Then
z ≿ x0 . By transitivity, z ≿ x0 ≻ y0 which implies that z ∈ ≻(y0 ).
Similarly z ∈ ≿(y0 ) ∖ ≻(y0 ) implies z ∈ ≺(x0 ). Therefore
≺(x0 ) ∪ ≻(y0 ) = ≾(x0 ) ∪ ≿(y0 )
(b) By continuity, ≺(x0 ) ∪ ≻(y0 ) is open and ≾(x0 ) ∪ ≿(y0 ) = ≺(x0 ) ∪ ≻(y0 ) is
closed. Further x0 ≻ y0 implies that x0 ∈ ≻(y0 ) so that ≺(x0 ) ∪ ≻(y0 ) ∕= ∅.
We have established that ≺(x0 ) ∪ ≻(y0 ) is a nonempty subset of 𝑋 which
is both open and closed. Since 𝑋 is connected, this implies (Exercise 1.83)
that
≺(x0 ) ∪ ≻(y0 ) = 𝑋
2. (a) By definition, x ∉ ≺(x). So ≺(x) ∩ ≺(y) = 𝑋 would imply x ∈ ≺(y), that is y ≻ x, contradicting the noncomparability of x and y. Therefore
    ≺(x) ∩ ≺(y) ≠ 𝑋
(b) By assumption, there exists at least one pair x0 , y0 such that x0 ≻ y0 . By
the previous part
≺(x0 ) ∪ ≻(y0 ) = 𝑋
This implies either x ≺ x0 or x ≻ y0 . Without loss of generality, assume
x ≻ y0 . Again using the previous part, we have
≺(x) ∪ ≻(y0 ) = 𝑋
Since x and y are not comparable, y ∉ ≺(x), which implies that y ∈ ≻(y0). Therefore x ≻ y0 and y ≻ y0, or alternatively
    y0 ∈ ≺(x) ∩ ≺(y) ≠ ∅
(c) Clearly ≺(x) ⊆ ≾(x) and ≻(y) ⊆ ≿(y). Consequently
≺(x) ∩ ≺(y) ⊆ ≾(x) ∩ ≾(y)
Let z ∈ ≾(x) ∩ ≾(y). That is, z ≾ x and z ≾ y. If x ≾ z, then transitivity implies x ≾
z ≾ y, which contradicts the noncomparability of x and y. Consequently
x ∕≾ z which implies z ≺ x and z ∈ ≺(x). Similarly z ∈ ≺(y) and therefore
≺(x) ∩ ≺(y) = ≾(x) ∩ ≾(y)
3. If x and y are noncomparable, ≺(x)∩≺(y) is a nonempty proper subset of 𝑋. By
continuity ≺(x) ∩ ≺(y) = ≾(x) ∩ ≾(y) is both open and closed which contradicts
the assumption that 𝑋 is connected (Exercise 1.83). We conclude that ≿ must
be complete.
1.247 Assume x ≻ y. Then x ∈ ≻(y). Since ≻(y) is open, x ∈ int ≻(y). Also
y ∈ ≻(y). By Exercise 1.218, 𝛼x + (1 − 𝛼)y ∈ int ≻(y) for every 0 < 𝛼 < 1, which
implies
𝛼x + (1 − 𝛼)y ≻ y for every 0 < 𝛼 < 1
1.248 For every x ∈ 𝑋, there exists some z such that z ≻ x (Nonsatiation). For any 𝑟,
choose some 𝛼 ∈ (0, 𝑟/ ∥x − z∥) and let y = 𝛼z + (1 − 𝛼)x. Then
∥x − y∥ = 𝛼 ∥x − z∥ < 𝑟
Moreover, since ≿ is strictly convex,
y = 𝛼z + (1 − 𝛼)x ≻ x
Thus, ≿ is locally nonsatiated.
We have previously shown that local nonsatiation implies nonsatiation (Exercise 1.233).
Consequently, these two properties are equivalent for strictly convex preferences.
1.249 Assume that x is not strongly Pareto efficient. That is, there exists an allocation y such that y𝑖 ≿𝑖 x𝑖 for all 𝑖 and some individual 𝑗 for which y𝑗 ≻𝑗 x𝑗. Take a fraction 1 − 𝑡 of 𝑗's consumption and distribute it equally to the other participants. By continuity, there exists some 𝑡 < 1 such that 𝑡y𝑗 ≻𝑗 x𝑗. The other agents receive y𝑖 + ((1 − 𝑡)/(𝑛 − 1))y𝑗 which, by monotonicity, they strictly prefer to x𝑖.
1.250 Assume that (p∗, x∗) is a competitive equilibrium of an exchange economy, but that x∗ does not belong to the core of the corresponding market game. Then there exists some coalition 𝑆 and allocation y ∈ 𝑊(𝑆) such that y𝑖 ≻𝑖 x∗𝑖 for every 𝑖 ∈ 𝑆. Since y ∈ 𝑊(𝑆), we must have Σ_{𝑖∈𝑆} y𝑖 = Σ_{𝑖∈𝑆} w𝑖.
Since x∗ is a competitive equilibrium and y𝑖 ≻𝑖 x∗𝑖 for every 𝑖 ∈ 𝑆, y must be unaffordable, that is
    Σ_{𝑗=1}^{𝑙} 𝑝𝑗𝑦𝑖𝑗 > Σ_{𝑗=1}^{𝑙} 𝑝𝑗w𝑖𝑗 for every 𝑖 ∈ 𝑆
and therefore
    Σ_{𝑖∈𝑆} Σ_{𝑗=1}^{𝑙} 𝑝𝑗𝑦𝑖𝑗 > Σ_{𝑖∈𝑆} Σ_{𝑗=1}^{𝑙} 𝑝𝑗w𝑖𝑗
which contradicts the assumption that y ∈ 𝑊(𝑆).
1.251 Combining the previous exercise with Exercise 1.64
x∗ ∈ core ⊆ Pareto
Chapter 2: Functions
2.1 In general, the birthday mapping is not one-to-one since two individuals may have
the same birthday. It is not onto since some days may be no one’s birthday.
2.2 The origin 0 is a fixed point for every 𝜃. Furthermore, when 𝜃 = 0, 𝑓 is the identity function and every point is a fixed point.
2.3 For every 𝑥 ∈ 𝑋, there exists some 𝑦 ∈ 𝑌 such that 𝑓 (𝑥) = 𝑦, whence 𝑥 ∈ 𝑓 −1 (𝑦).
Therefore, every 𝑥 belongs to some contour. To show that distinct contours are disjoint,
assume 𝑥 ∈ 𝑓 −1 (𝑦1 ) ∩ 𝑓 −1 (𝑦2 ). Then 𝑓 (𝑥) = 𝑦1 and also 𝑓 (𝑥) = 𝑦2 . Since 𝑓 is a
function, this implies that 𝑦1 = 𝑦2 .
2.4 Assume 𝑓 is one-to-one and onto. Then, for every 𝑦 ∈ 𝑌 , there exists 𝑥 ∈ 𝑋 such
that 𝑓 (𝑥) = 𝑦. That is, 𝑓 −1 (𝑦) ∕= ∅ for every 𝑦 ∈ 𝑌 . If 𝑓 is one to one, 𝑓 (𝑥) = 𝑦 = 𝑓 (𝑥′ )
implies 𝑥 = 𝑥′ . Therefore, 𝑓 −1 (𝑦) consists of a single element. Therefore, the inverse
function 𝑓 −1 exists.
Conversely, assume that 𝑓 : 𝑋 → 𝑌 has an inverse 𝑓 −1 . As 𝑓 −1 is a function mapping
𝑌 to 𝑋, it must be defined for every 𝑦 ∈ 𝑌 . Therefore 𝑓 is onto. Assume there
exists 𝑥, 𝑥′ ∈ 𝑋 and 𝑦 ∈ 𝑌 such that 𝑓 (𝑥) = 𝑦 = 𝑓 (𝑥′ ). Then 𝑓 −1 (𝑦) = 𝑥 and
also 𝑓 −1 (𝑦) = 𝑥′ . Since 𝑓 −1 is a function, this implies that 𝑥 = 𝑥′ . Therefore 𝑓 is
one-to-one.
2.5 Choose any 𝑥 ∈ 𝑋 and let 𝑦 = 𝑓 (𝑥). Since 𝑓 is one-to-one, 𝑥 = 𝑓 −1 (𝑦) = 𝑓 −1 (𝑓 (𝑥)).
The second identity is proved similarly.
2.6 (2.2) implies for every 𝑥 ∈ ℜ
    𝑒^𝑥 𝑒^{−𝑥} = 𝑒^0 = 1
and therefore
    𝑒^{−𝑥} = 1/𝑒^𝑥    (2.28)
For every 𝑥 ≥ 0
    𝑒^𝑥 = 1 + 𝑥/1 + 𝑥^2/2 + 𝑥^3/6 + ⋅⋅⋅ > 0
and therefore by (2.28) 𝑒^𝑥 > 0 for every 𝑥 ∈ ℜ. For every 𝑥 ≥ 1
    𝑒^𝑥 = 1 + 𝑥/1 + 𝑥^2/2 + 𝑥^3/6 + ⋅⋅⋅ ≥ 1 + 𝑥 → ∞ as 𝑥 → ∞
and therefore 𝑒^𝑥 → ∞ as 𝑥 → ∞. By (2.28) 𝑒^𝑥 → 0 as 𝑥 → −∞.
2.7
    𝑒^𝑥/𝑥 = (𝑒^{𝑥/2} 𝑒^{𝑥/2})/(2 (𝑥/2)) = (1/2) (𝑒^{𝑥/2}/(𝑥/2)) 𝑒^{𝑥/2} → ∞ as 𝑥 → ∞
since the term in brackets is strictly greater than 1 for any 𝑥 > 0. Similarly
    𝑒^𝑥/𝑥^𝑛 = ((𝑒^{𝑥/(𝑛+1)})^𝑛 𝑒^{𝑥/(𝑛+1)})/((𝑛 + 1)^𝑛 (𝑥/(𝑛+1))^𝑛) = (1/(𝑛 + 1)^𝑛) (𝑒^{𝑥/(𝑛+1)}/(𝑥/(𝑛+1)))^𝑛 𝑒^{𝑥/(𝑛+1)} → ∞
2.8 Assume that 𝑆 ⊆ ℜ is compact. Then 𝑆 is bounded (Proposition 1.1), and there
exists 𝑀 such that ∣𝑥∣ ≤ 𝑀 for every 𝑥 ∈ 𝑆. For all 𝑛 ≥ 𝑚 ≥ 2𝑀
    ∣𝑓𝑛(𝑥) − 𝑓𝑚(𝑥)∣ = ∣Σ_{𝑘=𝑚+1}^{𝑛} 𝑥^𝑘/𝑘!∣ ≤ (𝑥^{𝑚+1}/(𝑚 + 1)!) Σ_{𝑘=0}^{𝑛−𝑚} (𝑥/𝑚)^𝑘
                    ≤ (𝑀^{𝑚+1}/(𝑚 + 1)!) Σ_{𝑘=0}^{𝑛−𝑚} (𝑀/𝑚)^𝑘
                    ≤ (𝑀^{𝑚+1}/(𝑚 + 1)!) (1 + 1/2 + 1/4 + ⋅⋅⋅ + (1/2)^{𝑛−𝑚})
                    ≤ 2𝑀^{𝑚+1}/(𝑚 + 1)! ≤ 2(𝑀/(𝑚 + 1))^{𝑚+1} ≤ (1/2)^𝑚
by Exercise 1.206. Therefore 𝑓𝑛 converges to 𝑓 for all 𝑥 ∈ 𝑆.
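The following short Python check illustrates the tail bound numerically on a compact interval; the choice 𝑀 = 3 and the grid are arbitrary illustrative assumptions, not part of the original solution.

```python
# Numerical illustration of the bound in 2.8 on S = [-M, M]: the partial sums
# f_n(x) = sum_{k<=n} x^k / k! differ by at most 2*M^(m+1)/(m+1)! once n >= m >= 2M.
import math

M = 3.0
S = [-M + k * 0.01 for k in range(int(2 * M / 0.01) + 1)]   # grid on [-M, M]

def f(n, x):
    """Partial sum of the exponential series up to order n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

for m in (6, 10, 14, 18):            # all satisfy m >= 2M = 6
    n = 2 * m
    sup_diff = max(abs(f(n, x) - f(m, x)) for x in S)
    bound = 2 * M ** (m + 1) / math.factorial(m + 1)
    print(m, sup_diff <= bound, sup_diff, bound)
```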
2.9 This is a special case of Example 2.8. For any 𝑓, 𝑔 ∈ 𝐹 (𝑋), define
(𝑓 + 𝑔)(𝑥) = 𝑓 (𝑥) + 𝑔(𝑥)
(𝛼𝑓 )(𝑥) = 𝛼𝑓 (𝑥)
With these definitions 𝑓 + 𝑔 and 𝛼𝑓 also map 𝑋 to ℜ. Hence 𝐹 (𝑋) is closed under
addition and scalar multiplication. It is straightforward but tedious to verify that 𝐹 (𝑋)
satisfies the other requirements of a linear space.
2.10 The zero element in 𝐹 (𝑋) is the constant function 𝑓 (𝑥) = 0 for every 𝑥 ∈ 𝑋.
2.11
1. From the definition of ∥𝑓 ∥ it is clear that
∙ ∥𝑓 ∥ ≥ 0.
∙ ∥𝑓 ∥ = 0 if and only if 𝑓 is the zero functional.
∙ ∥𝛼𝑓 ∥ = ∣𝛼∣ ∥𝑓 ∥ since sup𝑥∈𝑋 ∣𝛼𝑓 (𝑥)∣ = ∣𝛼∣ sup𝑥∈𝑋 ∣𝑓 (𝑥)∣
It remains to verify the triangle inequality, namely
    ∥𝑓 + 𝑔∥ = sup_{𝑥∈𝑋} ∣(𝑓 + 𝑔)(𝑥)∣
            = sup_{𝑥∈𝑋} ∣𝑓(𝑥) + 𝑔(𝑥)∣
            ≤ sup_{𝑥∈𝑋} { ∣𝑓(𝑥)∣ + ∣𝑔(𝑥)∣ }
            ≤ sup_{𝑥∈𝑋} ∣𝑓(𝑥)∣ + sup_{𝑥∈𝑋} ∣𝑔(𝑥)∣
            = ∥𝑓∥ + ∥𝑔∥
2. Consequently, for any 𝑓 ∈ 𝐵(𝑋), ∣𝛼𝑓(𝑥)∣ ≤ ∣𝛼∣ ∥𝑓∥ for every 𝑥 ∈ 𝑋 and therefore 𝛼𝑓 ∈ 𝐵(𝑋). Similarly, for any 𝑓, 𝑔 ∈ 𝐵(𝑋), ∣(𝑓 + 𝑔)(𝑥)∣ ≤ ∥𝑓∥ + ∥𝑔∥ for every
𝑥 ∈ 𝑋 and therefore 𝑓 + 𝑔 ∈ 𝐵(𝑋). Hence, 𝐵(𝑋) is closed under addition and
scalar multiplication; it is a subspace of the linear space 𝐹 (𝑋). We conclude that
𝐵(𝑋) is a normed linear space.
3. To show that 𝐵(𝑋) is complete, assume that (𝑓 𝑛 ) is a Cauchy sequence in 𝐵(𝑋).
For every 𝑥 ∈ 𝑋
∣𝑓 𝑛 (𝑥) − 𝑓 𝑚 (𝑥)∣ ≤ ∥𝑓 𝑛 − 𝑓 𝑚 ∥ → 0
Therefore, for 𝑥 ∈ 𝑋, 𝑓 𝑛 (𝑥) is a Cauchy sequence of real numbers. Since ℜ is
complete, this sequence converges. Define the function
    𝑓(𝑥) = lim_{𝑛→∞} 𝑓^𝑛(𝑥)
We need to show
∙ ∥𝑓 𝑛 − 𝑓 ∥ → 0 and
∙ 𝑓 ∈ 𝐵(𝑋)
(𝑓^𝑛) is a Cauchy sequence. For given 𝜖 > 0, choose 𝑁 such that ∥𝑓^𝑛 − 𝑓^𝑚∥ < 𝜖/2 for every 𝑚, 𝑛 ≥ 𝑁. For any 𝑥 ∈ 𝑋 and 𝑛 ≥ 𝑁,
∣𝑓 𝑛 (𝑥) − 𝑓 (𝑥)∣ ≤ ∣𝑓 𝑛 (𝑥) − 𝑓 𝑚 (𝑥)∣ + ∣𝑓 𝑚 (𝑥) − 𝑓 (𝑥)∣
≤ ∥𝑓 𝑛 − 𝑓 𝑚 ∥ + ∣𝑓 𝑚 (𝑥) − 𝑓 (𝑥)∣
By suitable choice of 𝑚 (which may depend upon 𝑥), each term on the right can
be made smaller than 𝜖/2 and therefore
∣𝑓 𝑛 (𝑥) − 𝑓 (𝑥)∣ < 𝜖
for every 𝑥 ∈ 𝑋 and 𝑛 ≥ 𝑁 .
    ∥𝑓^𝑛 − 𝑓∥ = sup_{𝑥∈𝑋} ∣𝑓^𝑛(𝑥) − 𝑓(𝑥)∣ ≤ 𝜖
Finally, this implies ∥𝑓 ∥ = lim𝑛→∞ ∥𝑓 𝑛 ∥. Therefore 𝑓 ∈ 𝐵(𝑋).
2.12 If the die is fair, the probability of the elementary outcomes is
𝑃 ({1}) = 𝑃 ({2}) = 𝑃 ({3}) = 𝑃 ({4}) = 𝑃 ({5}) = 𝑃 ({6}) = 1/6
By Condition 3
𝑃 ({2, 4, 6}) = 𝑃 ({2}) + 𝑃 ({4}) + 𝑃 ({6}) = 1/2
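A one-line computational check (illustrative addition only):

```python
# Direct check of 2.12 for a fair die: each elementary outcome has probability 1/6,
# so by additivity the even outcomes {2, 4, 6} have probability 1/2.
from fractions import Fraction

P = {outcome: Fraction(1, 6) for outcome in range(1, 7)}
evens = {2, 4, 6}
print(sum(P[o] for o in evens))   # 1/2
```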
2.13 The profit maximization problem of a competitive single-output firm is to choose the combination of inputs x ∈ ℜ𝑛+ and scale of production 𝑦 to maximize net profit. This is summarized in the constrained maximization problem
    max_{x,𝑦} 𝑝𝑦 − Σ_{𝑖=1}^{𝑛} 𝑤𝑖𝑥𝑖 subject to x ∈ 𝑉(𝑦)
where 𝑝𝑦 is total revenue and Σ_{𝑖=1}^{𝑛} 𝑤𝑖𝑥𝑖 total cost. The profit function, which depends upon both 𝑝 and w, is defined by
    Π(𝑝, w) = max_{𝑦, x∈𝑉(𝑦)} 𝑝𝑦 − Σ_{𝑖=1}^{𝑛} 𝑤𝑖𝑥𝑖
For analysis, it is convenient to represent the technology 𝑉(𝑦) by a production function (Example 2.24). The firm's optimization can then be expressed as
    max_{x∈ℜ𝑛+} 𝑝𝑓(x) − Σ_{𝑖=1}^{𝑛} 𝑤𝑖𝑥𝑖
and the profit function as
    Π(𝑝, w) = max_{x∈ℜ𝑛+} 𝑝𝑓(x) − Σ_{𝑖=1}^{𝑛} 𝑤𝑖𝑥𝑖
2.14
1. Assume that production is profitable at p. That is, there exists some y ∈ 𝑌
such that 𝑓 (y, p) > 0. Since the technology exhibits constant returns to scale, 𝑌
is a cone (Example 1.101). Therefore 𝛼y ∈ 𝑌 for every 𝛼 > 0 and
    𝑓(𝛼y, p) = Σ𝑖 𝑝𝑖(𝛼𝑦𝑖) = 𝛼 Σ𝑖 𝑝𝑖𝑦𝑖 = 𝛼𝑓(y, p)
Therefore { 𝑓(𝛼y, p) : 𝛼 > 0 } is unbounded and
    Π(p) = sup_{y∈𝑌} 𝑓(y, p) ≥ sup_{𝛼>0} 𝑓(𝛼y, p) = +∞
2. Assume to the contrary that there exists p ∈ ℜ𝑛+ with Π(p) = 𝜋 ∉ { 0, +∞, −∞ }. There are two possible cases.
(a) 0 < 𝜋 < +∞. Since 𝜋 = sup_{y∈𝑌} 𝑓(y, p) > 0, there exists y ∈ 𝑌 such that 𝑓(y, p) > 0. The previous part implies Π(p) = +∞.
(b) −∞ < 𝜋 < 0. Then there exists y ∈ 𝑌 such that 𝑓(y, p) < 0. By a similar argument to the previous part, this implies Π(p) = −∞.
2.15 Assume x∗ is a solution to (2.4). Then
    𝑓(x∗, 𝜽) ≥ 𝑓(x, 𝜽) for every x ∈ 𝐺(𝜽)
and therefore
    𝑓(x∗, 𝜽) ≥ sup_{x∈𝐺(𝜽)} 𝑓(x, 𝜽) = 𝑣(𝜽)
On the other hand x∗ ∈ 𝐺(𝜽) and therefore
    𝑣(𝜽) = sup_{x∈𝐺(𝜽)} 𝑓(x, 𝜽) ≥ 𝑓(x∗, 𝜽)
Therefore, x∗ satisfies (2.5). Conversely, assume x∗ ∈ 𝐺(𝜽) satisfies (2.5). Then
    𝑓(x∗, 𝜽) = 𝑣(𝜽) = sup_{x∈𝐺(𝜽)} 𝑓(x, 𝜽) ≥ 𝑓(x, 𝜽) for every x ∈ 𝐺(𝜽)
that is, x∗ solves (2.4).
2.16 The assumption that 𝐺(𝑥) ∕= ∅ for every 𝑥 ∈ 𝑋 implies Γ(𝑥0 ) ∕= ∅ for every
𝑥0 ∈ 𝑋. There always exist feasible plans from any starting point. Since 𝑢 is bounded,
there exists 𝑀 such that ∣𝑓 (𝑥𝑡 , 𝑥𝑡+1 )∣ ≤ 𝑀 for every x ∈ Γ(𝑥0 ). Consequently, for
every x ∈ Γ(𝑥0 ), 𝑈 (x) ∈ ℜ and
    ∣𝑈(x)∣ = ∣Σ_{𝑡=0}^{∞} 𝛽^𝑡 𝑓(𝑥𝑡, 𝑥𝑡+1)∣ ≤ Σ_{𝑡=0}^{∞} 𝛽^𝑡 ∣𝑓(𝑥𝑡, 𝑥𝑡+1)∣ ≤ Σ_{𝑡=0}^{∞} 𝛽^𝑡 𝑀 = 𝑀/(1 − 𝛽)
using the formula for a geometric series (Exercise 1.108). Therefore
    𝑣(𝑥0) = sup_{x∈Γ(𝑥0)} 𝑈(x) ≤ 𝑀/(1 − 𝛽)
and 𝑣 ∈ 𝐵(𝑋). Next, we note that for every feasible plan x ∈ Γ(𝑥0 )
    𝑈(x) = Σ_{𝑡=0}^{∞} 𝛽^𝑡 𝑓(𝑥𝑡, 𝑥𝑡+1)
          = 𝑓(𝑥0, 𝑥1) + 𝛽 Σ_{𝑡=0}^{∞} 𝛽^𝑡 𝑓(𝑥𝑡+1, 𝑥𝑡+2)
          = 𝑓(𝑥0, 𝑥1) + 𝛽𝑈(x′)    (2.29)
where x′ = (𝑥1 , 𝑥2 , . . . ) is the continuation of the plan x starting at 𝑥1 . For any 𝑥0 ∈ 𝑋
and 𝜖 > 0, there exists a feasible plan x ∈ Γ(𝑥0 ) such that
𝑈 (x) ≥ 𝑣(𝑥0 ) − 𝜖
Let x′ = (𝑥1 , 𝑥2 , . . . ) be the continuation of the plan x starting at 𝑥1 . Using (2.29)
and the fact that 𝑈 (x′ ) ≤ 𝑣(𝑥1 ), we conclude that
    𝑣(𝑥0) − 𝜖 ≤ 𝑈(x)
              = 𝑓(𝑥0, 𝑥1) + 𝛽𝑈(x′)
              ≤ 𝑓(𝑥0, 𝑥1) + 𝛽𝑣(𝑥1)
              ≤ sup_{𝑦∈𝐺(𝑥0)} { 𝑓(𝑥0, 𝑦) + 𝛽𝑣(𝑦) }
Since this is true for every 𝜖 > 0, we must have
    𝑣(𝑥0) ≤ sup_{𝑦∈𝐺(𝑥0)} { 𝑓(𝑥0, 𝑦) + 𝛽𝑣(𝑦) }    (2.30)
On the other hand, choose any 𝑥1 ∈ 𝐺(𝑥0) ⊆ 𝑋. Since
    𝑣(𝑥1) = sup_{x∈Γ(𝑥1)} 𝑈(x)
there exists a feasible plan x′ = (𝑥1 , 𝑥2 , . . . ) starting at 𝑥1 such that
𝑈 (x′ ) ≥ 𝑣(𝑥1 ) − 𝜖
Moreover, the plan x = (𝑥0 , 𝑥1 , 𝑥2 , . . . ) is feasible from 𝑥0 and
𝑣(𝑥0 ) ≥ 𝑈 (x) = 𝑓 (𝑥0 , 𝑥1 ) + 𝛽𝑈 (x′ ) ≥ 𝑓 (𝑥0 , 𝑥1 ) + 𝛽𝑣(𝑥1 ) − 𝛽𝜖
Since this is true for every 𝜖 > 0 and 𝑥1 ∈ 𝐺(𝑥0 ), we conclude that
    𝑣(𝑥0) ≥ sup_{𝑦∈𝐺(𝑥0)} { 𝑓(𝑥0, 𝑦) + 𝛽𝑣(𝑦) }
Together with (2.30) this establishes the required result, namely
    𝑣(𝑥0) = sup_{𝑦∈𝐺(𝑥0)} { 𝑓(𝑥0, 𝑦) + 𝛽𝑣(𝑦) }
2.17 Assume x∗ is optimal, so that
    𝑈(x∗) ≥ 𝑈(x) for every x ∈ Γ(𝑥0)
This implies (using (2.29))
𝑓 (𝑥0 , 𝑥∗1 ) + 𝛽𝑈 (x∗ ′ ) ≥ 𝑓 (𝑥0 , 𝑥1 ) + 𝛽𝑈 (x′ )
where x′ = (𝑥1 , 𝑥2 , . . . ) is the continuation of the plan x starting at 𝑥1 and x∗ ′ =
(𝑥∗1 , 𝑥∗2 , . . . ) is the continuation of the plan x∗ . In particular, this is true for every plan
x ∈ Γ(𝑥0 ) with 𝑥1 = 𝑥∗1 and therefore
𝑓 (𝑥0 , 𝑥∗1 ) + 𝛽𝑈 (x∗ ′ ) ≥ 𝑓 (𝑥0 , 𝑥∗1 ) + 𝛽𝑈 (x′ ) for every x′ ∈ Γ(𝑥∗1 )
which implies that
𝑈 (x∗ ′ ) ≥ 𝑈 (x′ ) for every x′ ∈ Γ(𝑥∗1 )
That is, x∗ ′ is optimal starting at 𝑥∗1 and therefore 𝑈 (x∗ ′ ) = 𝑣(𝑥∗1 ) (Exercise 2.15).
Consequently
𝑣(𝑥0 ) = 𝑈 (x∗ ) = 𝑓 (𝑥0 , 𝑥∗1 ) + 𝛽𝑈 (x∗ ′ ) = 𝑓 (𝑥0 , 𝑥∗1 ) + 𝛽𝑣(𝑥∗1 )
This verifies (2.13) for 𝑡 = 0. A similar argument verifies (2.13) for any period 𝑡.
To show the converse, assume that x∗ = (𝑥0 , 𝑥∗1 , 𝑥∗2 , . . . ) ∈ Γ(𝑥0 ) satisfies (2.13). Successively using (2.13)
    𝑣(𝑥0) = 𝑓(𝑥0, 𝑥∗1) + 𝛽𝑣(𝑥∗1)
          = 𝑓(𝑥0, 𝑥∗1) + 𝛽𝑓(𝑥∗1, 𝑥∗2) + 𝛽^2 𝑣(𝑥∗2)
          = Σ_{𝑡=0}^{1} 𝛽^𝑡 𝑓(𝑥∗𝑡, 𝑥∗𝑡+1) + 𝛽^2 𝑣(𝑥∗2)
          = Σ_{𝑡=0}^{2} 𝛽^𝑡 𝑓(𝑥∗𝑡, 𝑥∗𝑡+1) + 𝛽^3 𝑣(𝑥∗3)
          ⋮
          = Σ_{𝑡=0}^{𝑇−1} 𝛽^𝑡 𝑓(𝑥∗𝑡, 𝑥∗𝑡+1) + 𝛽^𝑇 𝑣(𝑥∗𝑇)
for any 𝑇 = 1, 2, . . . . Since 𝑣 is bounded (Exercise 2.16), 𝛽 𝑇 𝑣(𝑥∗𝑇 ) → 0 as 𝑇 → ∞ and
therefore
    𝑣(𝑥0) = Σ_{𝑡=0}^{∞} 𝛽^𝑡 𝑓(𝑥∗𝑡, 𝑥∗𝑡+1) = 𝑈(x∗)
Again using Exercise 2.15, x∗ is optimal.
2.18 We have to show that
∙ for any 𝑣 ∈ 𝐵(𝑋), 𝑇 𝑣 is a functional on 𝑋.
∙ 𝑇 𝑣 is bounded.
Since 𝐹 ∈ 𝐵(𝑋 × 𝑋), there exists 𝑀1 < ∞ such that ∣𝑓 (𝑥, 𝑦)∣ ≤ 𝑀1 for every (𝑥, 𝑦) ∈
𝑋 × 𝑋. Similarly, for any 𝑣 ∈ 𝐵(𝑋), there exists 𝑀2 < ∞ such that ∣𝑣(𝑥)∣ ≤ 𝑀2 for
every 𝑥 ∈ 𝑋. Consequently for every (𝑥, 𝑦) ∈ 𝑋 × 𝑋 and 𝑣 ∈ 𝐵(𝑋)
    ∣𝑓(𝑥, 𝑦) + 𝛽𝑣(𝑦)∣ ≤ ∣𝑓(𝑥, 𝑦)∣ + 𝛽∣𝑣(𝑦)∣ ≤ 𝑀1 + 𝛽𝑀2 < ∞    (2.31)
For each 𝑥 ∈ 𝑋, the set
    𝑆𝑥 = { 𝑓(𝑥, 𝑦) + 𝛽𝑣(𝑦) : 𝑦 ∈ 𝐺(𝑥) }
is a nonempty bounded subset of ℜ, which has a least upper bound. Therefore
    (𝑇𝑣)(𝑥) = sup 𝑆𝑥 = sup_{𝑦∈𝐺(𝑥)} { 𝑓(𝑥, 𝑦) + 𝛽𝑣(𝑦) }
defines a functional on 𝑋. Moreover by (2.31)
    ∣(𝑇𝑣)(𝑥)∣ ≤ 𝑀1 + 𝛽𝑀2 < ∞
Therefore 𝑇 𝑣 ∈ 𝐵(𝑋).
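The following Python sketch illustrates 2.18 on a hypothetical finite-state problem (the return function 𝑓, the correspondence 𝐺 and the discount factor 𝛽 are invented for illustration, not taken from the text): the operator 𝑇 defined by a pointwise supremum produces another bounded function, with the bound 𝑀1 + 𝛽𝑀2 of (2.31).

```python
# Finite-state sketch of the operator T in 2.18 with hypothetical primitives.
beta = 0.9
states = range(5)
G = {x: [y for y in states if abs(y - x) <= 1] for x in states}   # feasible transitions
f = {(x, y): 1.0 - 0.1 * abs(x - y) - 0.05 * x for x in states for y in G[x]}

def T(v):
    """Bellman operator: pointwise supremum (here a max over the finite set G(x))."""
    return {x: max(f[(x, y)] + beta * v[y] for y in G[x]) for x in states}

v0 = {x: 0.0 for x in states}          # any bounded v will do
Tv = T(v0)
M1 = max(abs(r) for r in f.values())   # bound on f
M2 = max(abs(v0[x]) for x in states)   # bound on v
print(max(abs(Tv[x]) for x in states) <= M1 + beta * M2)   # True, as in (2.31)
```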
2.19 Let 𝑁 = {1, 2, 3}. Any individual is powerless so that
𝑤({𝑖}) = 0
𝑖 = 1, 2, 3
Any two players can allocate the $1 between themselves, leaving the other player out. Therefore
𝑤({𝑖, 𝑗}) = 1
𝑖, 𝑗 ∈ 𝑁, 𝑖 ∕= 𝑗
The best that the three players can achieve is to allocate the $1 amongst themselves,
so that
𝑤(𝑁 ) = 1
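Written out in code, the characteristic function of this game looks as follows; the superadditivity check is an illustrative addition, not part of the original solution.

```python
# The three-player game of 2.19, with a check that w is superadditive
# (w(S ∪ T) >= w(S) + w(T) for disjoint coalitions S and T).
from itertools import combinations

N = frozenset({1, 2, 3})
coalitions = [frozenset(c) for r in range(0, 4) for c in combinations(N, r)]
w = {S: (0 if len(S) <= 1 else 1) for S in coalitions}   # w(∅)=w({i})=0, w({i,j})=w(N)=1

superadditive = all(w[S | T] >= w[S] + w[T]
                    for S in coalitions for T in coalitions if not (S & T))
print(superadditive)   # True
```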
2.20 If the consumer's preferences are continuous and strictly convex, she has a unique optimal choice x∗ for every set of prices p and income 𝑚 in 𝑃 (Example 1.116). Therefore, the demand correspondence is single-valued.
2.21 Assume 𝑠∗𝑖 ∈ 𝐵(s∗) for every 𝑖 ∈ 𝑁. Then for every player 𝑖 ∈ 𝑁
    (𝑠∗𝑖, s∗−𝑖) ≿𝑖 (𝑠′𝑖, s∗−𝑖) for every 𝑠′𝑖 ∈ 𝑆𝑖
that is, s∗ = (𝑠∗1, 𝑠∗2, . . . , 𝑠∗𝑛) is a Nash equilibrium. Conversely, assume s∗ = (𝑠∗1, 𝑠∗2, . . . , 𝑠∗𝑛) is a Nash equilibrium. Then for every player 𝑖 ∈ 𝑁
    (𝑠∗𝑖, s∗−𝑖) ≿𝑖 (𝑠′𝑖, s∗−𝑖) for every 𝑠′𝑖 ∈ 𝑆𝑖
which implies that
    𝑠∗𝑖 ∈ 𝐵(s∗) for every 𝑖 ∈ 𝑁
2.22 For any nonempty compact set 𝑇 ⊆ 𝑆, 𝐵(𝑇 ) is nonempty and compact provided
≿𝑖 is continuous (Proposition 1.5) and 𝐵(𝑇 ) ⊆ 𝑇 . Therefore
𝐵𝑖1 ⊇ 𝐵𝑖2 ⊇ 𝐵𝑖3 . . .
is a nested sequence of nonempty compact sets. By the nested intersection theorem (Exercise 1.117), 𝑅𝑖 = ⋂_{𝑛=0}^{∞} 𝐵𝑖^𝑛 ≠ ∅.
2.23 If s∗ is a Nash equilibrium, 𝑠∗𝑖 ∈ 𝐵𝑖^𝑛 for every 𝑛.
2.24 For any 𝜽, let x∗ ∈ 𝜑(𝜽). Then
    𝑓(x∗, 𝜽) ≥ 𝑓(x, 𝜽) for every x ∈ 𝐺(𝜽)
Therefore
    𝑓(x∗, 𝜽) ≥ 𝑣(𝜽) = sup_{x∈𝐺(𝜽)} 𝑓(x, 𝜽)
Conversely
    𝑣(𝜽) = sup_{x∈𝐺(𝜽)} 𝑓(x, 𝜽) ≥ sup_{x∈𝜑(𝜽)} 𝑓(x, 𝜽) ≥ 𝑓(x∗, 𝜽) for every x∗ ∈ 𝜑(𝜽)
Consequently
    𝑣(𝜽) = 𝑓(x∗, 𝜽) for any x∗ ∈ 𝜑(𝜽)
2.25 The graph of 𝑉 is
    graph(𝑉) = { (𝑦, x) ∈ ℜ+ × ℜ𝑛+ : x ∈ 𝑉(𝑦) }
while the production possibility set 𝑌 is
    𝑌 = { (𝑦, −x) : x ∈ 𝑉(𝑦) }
Assume that 𝑌 is convex and let (𝑦^𝑖, x^𝑖) ∈ graph(𝑉), 𝑖 = 1, 2. This means that
    (𝑦^1, −x^1) ∈ 𝑌 and (𝑦^2, −x^2) ∈ 𝑌
Let
    𝑦̄ = 𝛼𝑦^1 + (1 − 𝛼)𝑦^2 and x̄ = 𝛼x^1 + (1 − 𝛼)x^2
for some 0 ≤ 𝛼 ≤ 1. Since 𝑌 is convex
    (𝑦̄, −x̄) = 𝛼(𝑦^1, −x^1) + (1 − 𝛼)(𝑦^2, −x^2) ∈ 𝑌
and therefore x̄ ∈ 𝑉(𝑦̄), so that (𝑦̄, x̄) ∈ graph(𝑉). That is, graph(𝑉) is convex.
Conversely, assuming graph(𝑉) is convex, if (𝑦^𝑖, −x^𝑖) ∈ 𝑌, 𝑖 = 1, 2, then (𝑦^𝑖, x^𝑖) ∈ graph(𝑉) and therefore
    (𝑦̄, x̄) ∈ graph(𝑉) =⇒ x̄ ∈ 𝑉(𝑦̄) =⇒ (𝑦̄, −x̄) ∈ 𝑌
so that 𝑌 is convex.
2.26 The graph of 𝐺 is
    graph(𝐺) = { (𝜽, x) ∈ Θ × 𝑋 : x ∈ 𝐺(𝜽) }
Assume that (𝜽^𝑖, x^𝑖) ∈ graph(𝐺), 𝑖 = 1, 2. This means that x^𝑖 ∈ 𝐺(𝜽^𝑖) and therefore 𝑔^𝑗(x^𝑖, 𝜽^𝑖) ≤ 𝑐𝑗 for every 𝑗 and 𝑖 = 1, 2. Since each 𝑔^𝑗 is convex
    𝑔^𝑗(𝛼x^1 + (1 − 𝛼)x^2, 𝛼𝜽^1 + (1 − 𝛼)𝜽^2) ≤ 𝛼𝑔^𝑗(x^1, 𝜽^1) + (1 − 𝛼)𝑔^𝑗(x^2, 𝜽^2) ≤ 𝑐𝑗
Therefore 𝛼x^1 + (1 − 𝛼)x^2 ∈ 𝐺(𝛼𝜽^1 + (1 − 𝛼)𝜽^2) and (𝛼𝜽^1 + (1 − 𝛼)𝜽^2, 𝛼x^1 + (1 − 𝛼)x^2) ∈ graph(𝐺). That is, 𝐺 is convex.
2.27 The identity function 𝐼𝑋 : 𝑋 → 𝑋 is defined by 𝐼𝑋 (𝑥) = 𝑥 for every 𝑥 ∈ 𝑋.
Therefore
𝑥2 ≻𝑋 𝑥1 =⇒ 𝐼𝑋 (𝑥2 ) = 𝑥2 ≻𝑋 𝑥1 = 𝐼𝑋 (𝑥1 )
2.28 Assume that 𝑓 and 𝑔 are increasing. Then, for every 𝑥1 , 𝑥2 ∈ 𝑋 with 𝑥2 ≿𝑋 𝑥1
𝑓 (𝑥2 ) ≿𝑌 𝑓 (𝑥1 ) =⇒ 𝑔(𝑓 (𝑥2 )) ≿𝑍 𝑔(𝑓 (𝑥1 ))
𝑔 ∘ 𝑓 is also increasing. Similarly, if 𝑓 and 𝑔 are strictly increasing,
𝑥2 ≻𝑋 𝑥1 =⇒ 𝑓 (𝑥2 ) ≻𝑌 𝑓 (𝑥1 ) =⇒ 𝑔(𝑓 (𝑥2 )) ≻𝑍 𝑔(𝑓 (𝑥1 ))
2.29 For every 𝑦 ∈ 𝑓 (𝑋), there exists a unique 𝑥 ∈ 𝑋 such that 𝑓 (𝑥) = y. (For if 𝑥1 , 𝑥2
are such that 𝑓 (𝑥1 ) = 𝑓 (𝑥2 ), then 𝑥1 = 𝑥2 .) Therefore, 𝑓 is one-to-one and onto 𝑓 (𝑋),
and so has an inverse (Exercise 2.4). Further
𝑥2 > 𝑥1 ⇐⇒ 𝑓 (𝑥2 ) > 𝑓 (𝑥1 )
Therefore 𝑓 −1 is strictly increasing.
2.30 Assume 𝑓 : 𝑋 → ℜ is increasing. Then, for every 𝑥2 ≿ 𝑥1 , 𝑓 (𝑥2 ) ≥ 𝑓 (𝑥1 ) which
implies that −𝑓 (𝑥2 ) ≤ −𝑓 (𝑥1 ). −𝑓 is decreasing.
2.31 For every 𝑥2 ≿ 𝑥1 .
𝑓 (𝑥2 ) ≥ 𝑓 (𝑥1 )
𝑔(𝑥2 ) ≥ 𝑔(𝑥1 )
Adding
(𝑓 + 𝑔)(𝑥2 ) = 𝑓 (𝑥2 ) + 𝑔(𝑥2 ) ≥ 𝑓 (𝑥1 ) + 𝑔(𝑥1 ) = (𝑓 + 𝑔)(𝑥1 )
That is, 𝑓 + 𝑔 is increasing. Similarly for every 𝛼 ≥ 0
𝛼𝑓 (𝑥2 ) ≥ 𝛼𝑓 (𝑥1 )
and therefore 𝛼𝑓 is increasing. By Exercise 1.186, the set of all increasing functionals
is a convex cone in 𝐹 (𝑋).
If 𝑓 is strictly increasing, then for every 𝑥2 ≻ 𝑥1 ,
𝑓 (𝑥2 ) > 𝑓 (𝑥1 )
𝑔(𝑥2 ) ≥ 𝑔(𝑥1 )
Adding
(𝑓 + 𝑔)(𝑥2 ) = 𝑓 (𝑥2 ) + 𝑔(𝑥2 ) > 𝑓 (𝑥1 ) + 𝑔(𝑥1 ) = (𝑓 + 𝑔)(𝑥1 )
𝑓 + 𝑔 is strictly increasing. Similarly for every 𝛼 > 0
𝛼𝑓 (𝑥2 ) > 𝛼𝑓 (𝑥1 )
𝛼𝑓 is strictly increasing.
2.32 For every 𝑥2 ≻ 𝑥1 .
𝑓 (𝑥2 ) > 𝑓 (𝑥1 ) > 0
𝑔(𝑥2 ) > 𝑔(𝑥1 ) > 0
and therefore
(𝑓 𝑔)(𝑥2 ) = 𝑓 (𝑥2 )𝑔(𝑥2 ) > 𝑓 (𝑥2 )𝑔(𝑥1 ) > 𝑓 (𝑥1 )𝑔(𝑥1 ) = (𝑓 𝑔)(𝑥1 )
using Exercise 2.31.
2.33 By Exercise 2.31 and Example 2.53, each 𝑔𝑛 is strictly increasing on ℜ+ . That is
𝑥1 < 𝑥2 =⇒ 𝑔𝑛 (𝑥1 ) < 𝑔𝑛 (𝑥2 ) for every 𝑛
(2.32)
and therefore
lim 𝑔𝑛 (𝑥1 ) ≤ lim 𝑔𝑛 (𝑥2 )
𝑛→∞
𝑛→∞
This suffices to show that 𝑔(𝑥) = lim𝑛→∞ 𝑔𝑛 (𝑥) is increasing (not strictly increasing).
However, 1 + 𝑥 is strictly increasing, and therefore by Exercise 2.31
𝑒𝑥 = 1 + 𝑥 + 𝑔(𝑥)
is strictly increasing on ℜ+ . While it is the case that 𝑔 = lim 𝑔𝑛 is strictly increasing
on ℜ+ , (2.32) does not suffice to show this.
2.34 For every 𝑎 > 0, 𝑎 log 𝑥 is strictly increasing (Exercise 2.32) and therefore 𝑒𝑎 log 𝑥
is strictly increasing (Exercise 2.28). For every 𝑎 < 0, −𝑎 log 𝑥 is strictly increasing
and therefore (Exercise 2.30) 𝑎 log 𝑥 is strictly decreasing. Therefore 𝑒𝑎 log 𝑥 is strictly
decreasing (Exercise 2.28).
2.35 Apply Exercises 2.31 and 2.28 to Example 2.56.
2.36 𝑢 is (strictly) increasing so that
𝑥2 ≿ 𝑥1 =⇒ 𝑢(𝑥2 ) ≥ 𝑢(𝑥1 )
To show the converse, assume that 𝑥1 , 𝑥2 ∈ 𝑋 with 𝑢(𝑥2 ) ≥ 𝑢(𝑥1 ). Since ≿ is complete,
either 𝑥2 ≿ 𝑥1 or 𝑥1 ≻ 𝑥2 . However, the second possibility cannot occur since 𝑢 is
strictly increasing and therefore
𝑥1 ≻ 𝑥2 =⇒ 𝑢(𝑥1 ) > 𝑢(𝑥2 )
contradicting the hypothesis that 𝑢(𝑥2 ) ≥ 𝑢(𝑥1 ). We conclude that
𝑢(𝑥2 ) ≥ 𝑢(𝑥1 ) =⇒ 𝑥2 ≿ 𝑥1
2.37 Assume 𝑢 represents the preference ordering ≿ on 𝑋 and let 𝑔 : ℜ → ℜ be strictly
increasing. Then composition 𝑔 ∘ 𝑢 : 𝑋 → ℜ is strictly increasing (Exercise 2.28).
Therefore 𝑔 ∘ 𝑢 is a utility function (Example 2.58). Since 𝑔 is strictly increasing
𝑔(𝑢(𝑥2 )) ≥ 𝑔(𝑢(𝑥1 )) ⇐⇒ 𝑢(𝑥2 ) ≥ 𝑢(𝑥1 ) ⇐⇒ 𝑥2 ≿ 𝑥1
for every 𝑥1 , 𝑥2 ∈ 𝑋 Therefore, 𝑔 ∘ 𝑢 also represents ≿.
2.38
1. (a) Let 𝑧̄ = max_{𝑖=1,...,𝑛} 𝑥𝑖. Then z̄ = 𝑧̄1 ≿ x and therefore z̄ ∈ 𝑍x+. Similarly, let 𝑧 = min_{𝑖=1,...,𝑛} 𝑥𝑖. Then z = 𝑧1 ∈ 𝑍x−. Therefore, 𝑍x+ and 𝑍x− are both
nonempty. By continuity, the upper and lower contour sets ≿(x) and ≾(x)
are closed. 𝑍 is a closed cone. Since
𝑍x+ = ≿(x) ∩ 𝑍 and 𝑍x− = ≾(x) ∩ 𝑍
𝑍x+ and 𝑍x− are closed.
(b) By completeness, 𝑍x+ ∪ 𝑍x− = 𝑍. Since 𝑍 is connected, 𝑍x+ ∩ 𝑍x− ∕= ∅.
(Otherwise, 𝑍 is the union of two disjoint closed sets and hence the union
of two disjoint open sets.)
(c) Let zx ∈ 𝑍x+ ∩ 𝑍x−. Then zx ≿ x and also zx ≾ x. That is, zx ∼ x.
(d) Suppose x ∼ z1x and x ∼ z2x with z1x ∕= z2x . Then either z1x > z2x or
z1x < z2x . Without loss of generality, assume z2x > z1x . Then monotonicity
and transitivity imply
x ∼ z2x ≻ z1x ∼ x
which is a contradiction. Therefore zx is unique.
Let 𝑧x denote the scale of zx , that is zx = 𝑧x 1. For every x ∈ ℜ𝑛+ , there is a
unique zx ∼ x and the function 𝑢 : ℜ𝑛+ → ℜ given by 𝑢(x) = 𝑧x is well-defined.
Moreover
x2 ≿ x1 ⇐⇒ zx2 ≿ zx1
⇐⇒ 𝑧x2 ≥ 𝑧x1
⇐⇒ 𝑢(x2 ) ≥ 𝑢(x1 )
𝑢 represents the preference order ≿.
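A small Python sketch of this construction follows. It assumes, purely for illustration, that the preference ≿ is generated by a hypothetical monotone continuous function r (any strictly monotone continuous representation would serve); the bisection search then locates the scale 𝑧x with 𝑧x1 ∼ x, which is exactly the utility defined above.

```python
# Sketch of the utility construction in 2.38: find z_x with z_x*(1,...,1) ~ x.
import math

def r(x):
    # stand-in monotone, continuous representation of the preference (assumption)
    return sum(math.log(1.0 + xi) for xi in x)

def u(x, tol=1e-10):
    """Return z_x such that the bundle z_x * (1,...,1) is indifferent to x."""
    lo, hi = min(x), max(x)          # z_x lies between min_i x_i and max_i x_i
    target = r(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if r([mid] * len(x)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x1, x2 = [1.0, 4.0], [2.0, 5.0]      # x2 > x1 componentwise, so x2 is preferred
print(u(x1), u(x2))                  # the scales z_x1 and z_x2
print(u(x2) > u(x1))                 # True: u is increasing, as the proof requires
```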
2.39
1. For every 𝑥1 ∈ ℜ, (𝑥1 , 2) ≻𝐿 (𝑥1 , 1) in the lexicographic order. If 𝑢 represents
≿𝐿 , 𝑢 is strictly increasing and therefore 𝑢(𝑥1 , 2) > 𝑢(𝑥1 , 1). There exists a
rational number 𝑟(𝑥1 ) such that 𝑢(𝑥1 , 2) > 𝑟(𝑥1 ) > 𝑢(𝑥1 , 1).
2. The preceding construction associates a rational number with every real number
𝑥1 ∈ ℜ. Hence 𝑟 is a function from ℜ to the set of rational numbers 𝑄. For any
𝑥11 , 𝑥21 ∈ ℜ with 𝑥21 > 𝑥11
𝑟(𝑥21 ) > 𝑢(𝑥21 , 1) > 𝑢(𝑥11 , 2) > 𝑟(𝑥11 )
Therefore
𝑥21 > 𝑥11 =⇒ 𝑟(𝑥21 ) > 𝑟(𝑥11 )
𝑟 is strictly increasing.
3. By Exercise 2.29, 𝑟 has an inverse. This implies that 𝑟 is one-to-one and onto,
which is impossible since 𝑄 is countable and ℜ is uncountable (Example 2.16).
This contradiction establishes that ≿𝐿 has no such representation 𝑢.
2.40 Let a1 , a2 ∈ 𝐴 with a1 ≿2 a2 . Since the game is strictly competitive, a2 ≿1 a1 .
Since 𝑢1 represents ≿1 , 𝑢1 (a2 ) ≥ 𝑢1 (a1 ) which implies that −𝑢1 (a2 ) ≤ −𝑢1 (a1 ), that
is 𝑢2 (a1 ) ≥ 𝑢2 (a2 ) where 𝑢2 = −𝑢1 . Similarly
𝑢2 (a1 ) ≥ 𝑢2 (a2 ) =⇒ 𝑢1 (a1 ) ≤ 𝑢1 (a2 ) ⇐⇒ a1 ≾1 a2 =⇒ a1 ≿2 a2
Therefore 𝑢2 = −𝑢1 represents ≿2 and
𝑢1 (a) + 𝑢2 (a) = 0 for every a ∈ 𝐴
2.41 Assume 𝑆 ⫋ 𝑇 . By superadditivity
𝑤(𝑇 ) ≥ 𝑤(𝑆) + 𝑤(𝑇 ∖ 𝑆) ≥ 𝑤(𝑆)
2.42 Assume 𝑣, 𝑤 ∈ 𝐵(𝑋) with 𝑤(𝑦) ≥ 𝑣(𝑦) for every 𝑦 ∈ 𝑋. Then for any 𝑥 ∈ 𝑋
𝑓 (𝑥, 𝑦) + 𝛽𝑤(𝑦) ≥ 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦) for every 𝑦 ∈ 𝑋
and therefore
(𝑇 𝑤)(𝑥) = sup {𝑓 (𝑥, 𝑦) + 𝛽𝑤(𝑦)} ≥ sup {𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦)} = (𝑇 𝑣)(𝑥)
𝑦∈𝐺(𝑥)
𝑦∈𝐺(𝑥)
T is increasing.
2.43 For every 𝜃2 ≥ 𝜃1 ∈ Θ, if 𝑥1 ∈ 𝐺(𝜃1 ) and 𝑥2 ∈ 𝐺(𝜃2 ), then 𝑥1 ∧ 𝑥2 ≤ 𝑥1 and
therefore 𝑥1 ∧ 𝑥2 ∈ 𝐺(𝜃1 ). If 𝑥1 ≥ 𝑥2 , then 𝑥1 ∨ 𝑥2 = 𝑥1 ≤ 𝑔(𝜃1 ) ≤ 𝑔(𝜃2 ) and therefore
𝑥1 ∨ 𝑥2 ∈ 𝐺(𝜃2 ). On the other hand, if 𝑥1 ≤ 𝑥2 , then 𝑥1 ∨ 𝑥2 = 𝑥2 ∈ 𝐺(𝜃2 ).
2.44 Assume 𝜑 is increasing, and let 𝑥1 , 𝑥2 ∈ 𝑋 with 𝑥2 ≿ 𝑥1 . Let 𝑦1 ∈ 𝜑(𝑥1 ). Choose
any 𝑦 ′ ∈ 𝜑(𝑥2 ). Since 𝜑 is increasing, 𝜑(𝑥2 ) ≿𝑆 𝜑(𝑥1 ) and therefore 𝑦2 = 𝑦1 ∨ 𝑦 ′ ∈
𝜑(𝑥2 ). 𝑦2 ≿ 𝑦1 as required. Similarly, for every 𝑦2 ∈ 𝜑(𝑥2 ), there exists some 𝑦 ′ ∈ 𝜑(𝑥2 )
such that 𝑦1 = 𝑦 ′ ∧ 𝑦2 ∈ 𝜑(𝑥1 ) with 𝑦2 ≿ 𝑦1 .
2.45 Since 𝜑(𝑥) is a sublattice, sup 𝜑(𝑥) ∈ 𝜑(𝑥) for every 𝑥. Therefore, the function
𝑓 (𝑥) = sup 𝜑(𝑥)
is a selection. Similarly
𝑔(𝑥) = inf 𝜑(𝑥)
is a selection. Both 𝑓 and 𝑔 are increasing (Exercise 1.50).
2.46 Let 𝑥1, 𝑥2 belong to 𝑋 with 𝑥2 ≿ 𝑥1. Choose y^1 = (𝑦1^1, 𝑦2^1, . . . , 𝑦𝑛^1) ∈ ∏𝑖 𝜑𝑖(𝑥1) and y^2 = (𝑦1^2, 𝑦2^2, . . . , 𝑦𝑛^2) ∈ ∏𝑖 𝜑𝑖(𝑥2). Then, for each 𝑖 = 1, 2, . . . , 𝑛, 𝑦𝑖^1 ∈ 𝜑𝑖(𝑥1) and 𝑦𝑖^2 ∈ 𝜑𝑖(𝑥2). Since each 𝜑𝑖 is increasing, 𝑦𝑖^1 ∧ 𝑦𝑖^2 ∈ 𝜑𝑖(𝑥1) and 𝑦𝑖^1 ∨ 𝑦𝑖^2 ∈ 𝜑𝑖(𝑥2) for each 𝑖. Therefore y^1 ∧ y^2 ∈ ∏𝑖 𝜑𝑖(𝑥1) and y^1 ∨ y^2 ∈ ∏𝑖 𝜑𝑖(𝑥2). That is, 𝜑(𝑥) = ∏𝑖 𝜑𝑖(𝑥) is increasing.
2.47 Let 𝑥1, 𝑥2 belong to 𝑋 with 𝑥2 ≿ 𝑥1. Choose 𝑦^1 ∈ ⋂𝑖 𝜑𝑖(𝑥1) and 𝑦^2 ∈ ⋂𝑖 𝜑𝑖(𝑥2). Then 𝑦^1 ∈ 𝜑𝑖(𝑥1) and 𝑦^2 ∈ 𝜑𝑖(𝑥2) for each 𝑖 = 1, 2, . . . , 𝑛. Since each 𝜑𝑖 is increasing, 𝑦^1 ∧ 𝑦^2 ∈ 𝜑𝑖(𝑥1) and 𝑦^1 ∨ 𝑦^2 ∈ 𝜑𝑖(𝑥2) for each 𝑖. Therefore 𝑦^1 ∧ 𝑦^2 ∈ ⋂𝑖 𝜑𝑖(𝑥1) and 𝑦^1 ∨ 𝑦^2 ∈ ⋂𝑖 𝜑𝑖(𝑥2). That is, 𝜑 = ⋂𝑖 𝜑𝑖 is increasing.
2.48 Let 𝑓 be a selection from an always increasing correspondence 𝜑 : 𝑋 ⇉ 𝑌 . For
every 𝑥1 , 𝑥2 ∈ 𝑋, 𝑓 (𝑥1 ) ∈ 𝜑(𝑥1 ) and 𝑓 (𝑥2 ) ∈ 𝜑(𝑥2 ). Since 𝜑 is always increasing
𝑥1 ≿𝑋 𝑥2 =⇒ 𝑓 (𝑥1 ) ≿𝑌 𝑓 (𝑥2 )
𝑓 is increasing. Conversely, assume every selection 𝑓 ∈ 𝜑 is increasing. Choose any
𝑥1 , 𝑥2 ∈ 𝑋 with 𝑥1 ≿ 𝑥2 . For every 𝑦1 ∈ 𝜑(𝑥1 ) and 𝑦2 ∈ 𝜑(𝑥2 ), there exists a selection
𝑓 with 𝑦𝑖 = 𝑓 (𝑥𝑖 ), 𝑖 = 1, 2. Since 𝑓 is increasing,
𝑥1 ≿𝑋 𝑥2 =⇒ 𝑦1 ≿𝑌 𝑦2
𝜑 is increasing.
2.49 Let 𝑥1 , 𝑥2 ∈ 𝑋. If 𝑋 is a chain, either 𝑥1 ≿ 𝑥2 or 𝑥2 ≿ 𝑥1 . Without loss of
generality , assume 𝑥2 ≿ 𝑥1 . Then 𝑥1 ∨ 𝑥2 = 𝑥2 and 𝑥1 ∧ 𝑥2 = 𝑥1 and (2.17) is satisfied
as an identity.
2.50
(𝑓 + 𝑔)(𝑥1 ∨ 𝑥2 ) + (𝑓 + 𝑔)(𝑥1 ∧ 𝑥2 ) = 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑔(𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) + 𝑔(𝑥1 ∧ 𝑥2 )
= 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) + 𝑔(𝑥1 ∨ 𝑥2 ) + 𝑔(𝑥1 ∧ 𝑥2 )
≥ 𝑓 (𝑥1 ) + 𝑓 (𝑥2 ) + 𝑔(𝑥1 ) + 𝑔(𝑥2 )
= (𝑓 + 𝑔)(𝑥1 ) + (𝑓 + 𝑔)(𝑥2 )
Similarly
𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) ≥ 𝑓 (𝑥1 ) + 𝑓 (𝑥2 )
implies
𝛼𝑓 (𝑥1 ∨ 𝑥2 ) + 𝛼𝑓 (𝑥1 ∧ 𝑥2 ) ≥ 𝛼𝑓 (𝑥1 ) + 𝛼𝑓 (𝑥2 )
for all 𝛼 ≥ 0. By Exercise 1.186, the set of all supermodular functions is a convex cone
in 𝐹 (𝑋).
2.51 Since 𝑓 is supermodular and 𝑔 is nonnegative definite,
    𝑓(𝑥1 ∨ 𝑥2)𝑔(𝑥1 ∨ 𝑥2) ≥ (𝑓(𝑥1) + 𝑓(𝑥2) − 𝑓(𝑥1 ∧ 𝑥2)) 𝑔(𝑥1 ∨ 𝑥2)
                        = 𝑓(𝑥2)𝑔(𝑥1 ∨ 𝑥2) + (𝑓(𝑥1) − 𝑓(𝑥1 ∧ 𝑥2)) 𝑔(𝑥1 ∨ 𝑥2)
for any 𝑥1, 𝑥2 ∈ 𝑋. Since 𝑓 and 𝑔 are increasing, this implies
    𝑓(𝑥1 ∨ 𝑥2)𝑔(𝑥1 ∨ 𝑥2) ≥ 𝑓(𝑥2)𝑔(𝑥1 ∨ 𝑥2) + (𝑓(𝑥1) − 𝑓(𝑥1 ∧ 𝑥2)) 𝑔(𝑥1)    (2.33)
Similarly, since 𝑓 is nonnegative definite, 𝑔 supermodular, and 𝑓 and 𝑔 increasing
    𝑓(𝑥2)𝑔(𝑥1 ∨ 𝑥2) ≥ 𝑓(𝑥2)(𝑔(𝑥1) + 𝑔(𝑥2) − 𝑔(𝑥1 ∧ 𝑥2))
                   = 𝑓(𝑥2)𝑔(𝑥2) + 𝑓(𝑥2)(𝑔(𝑥1) − 𝑔(𝑥1 ∧ 𝑥2))
                   ≥ 𝑓(𝑥2)𝑔(𝑥2) + 𝑓(𝑥1 ∧ 𝑥2)(𝑔(𝑥1) − 𝑔(𝑥1 ∧ 𝑥2))
Combining this inequality with (2.33) gives
    𝑓(𝑥1 ∨ 𝑥2)𝑔(𝑥1 ∨ 𝑥2) ≥ 𝑓(𝑥2)𝑔(𝑥2) + 𝑓(𝑥1 ∧ 𝑥2)(𝑔(𝑥1) − 𝑔(𝑥1 ∧ 𝑥2)) + (𝑓(𝑥1) − 𝑓(𝑥1 ∧ 𝑥2)) 𝑔(𝑥1)
                        = 𝑓(𝑥2)𝑔(𝑥2) + 𝑓(𝑥1 ∧ 𝑥2)𝑔(𝑥1) − 𝑓(𝑥1 ∧ 𝑥2)𝑔(𝑥1 ∧ 𝑥2) + 𝑓(𝑥1)𝑔(𝑥1) − 𝑓(𝑥1 ∧ 𝑥2)𝑔(𝑥1)
                        = 𝑓(𝑥2)𝑔(𝑥2) − 𝑓(𝑥1 ∧ 𝑥2)𝑔(𝑥1 ∧ 𝑥2) + 𝑓(𝑥1)𝑔(𝑥1)
or
    𝑓𝑔(𝑥1 ∨ 𝑥2) + 𝑓𝑔(𝑥1 ∧ 𝑥2) ≥ 𝑓𝑔(𝑥1) + 𝑓𝑔(𝑥2)
𝑓𝑔 is supermodular. (I acknowledge the help of Don Topkis in formulating this proof.)
2.52 Exercises 2.49 and 2.50.
2.53 For simplicity, assume that the firm produces two products. For every production
plan y = (𝑦1 , 𝑦2 ),
y = (𝑦1 , 0) ∨ (0, 𝑦2 )
0 = (𝑦1 , 0) ∧ (0, 𝑦2 )
If 𝑐 is strictly submodular
𝑐(w, y) + 𝑐(w, 0) < 𝑐(w, (𝑦1 , 0)) + 𝑐(w, (0, 𝑦2 ))
Since 𝑐(w, 0) = 0
𝑐(w, y) < 𝑐(w, (𝑦1 , 0)) + 𝑐(w, (0, 𝑦2 ))
The technology displays economies of scope.
2.54 Assume (𝑁, 𝑤) is convex, that is
𝑤(𝑆 ∪ 𝑇 ) + 𝑤(𝑆 ∩ 𝑇 ) ≥ 𝑤(𝑆) + 𝑤(𝑇 ) for every 𝑆, 𝑇 ⊆ 𝑁
For all disjoint coalitions 𝑆 ∩ 𝑇 = ∅
𝑤(𝑆 ∪ 𝑇 ) ≥ 𝑤(𝑆) + 𝑤(𝑇 )
𝑤 is superadditive.
2.55 Rewriting (2.18), this implies
𝑤(𝑆 ∪ 𝑇 ) − 𝑤(𝑇 ) ≥ 𝑤(𝑆) − 𝑤(𝑆 ∩ 𝑇 ) for every 𝑆, 𝑇 ⊆ 𝑁
(2.34)
Let 𝑆 ⊂ 𝑇 ⊂ 𝑁 ∖ {𝑖} and let 𝑆 ′ = 𝑆 ∪ {𝑖}. Substituting in (2.34)
𝑤(𝑆 ′ ∪ 𝑇 ) − 𝑤(𝑇 ) ≥ 𝑤(𝑆 ′ ) − 𝑤(𝑆 ′ ∩ 𝑇 )
Since 𝑆 ⊂ 𝑇
𝑆 ′ ∪ 𝑇 = (𝑆 ∪ {𝑖}) ∪ 𝑇 = 𝑇 ∪ {𝑖}
𝑆 ′ ∩ 𝑇 = (𝑆 ∪ {𝑖}) ∩ 𝑇 = 𝑆
Substituting in the previous equation gives the required result, namely
𝑤(𝑇 ∪ {𝑖}) − 𝑤(𝑇 ) ≥ 𝑤(𝑆 ∪ {𝑖}) − 𝑤(𝑆)
Conversely, assume that
𝑤(𝑇 ∪ {𝑖}) − 𝑤(𝑇 ) ≥ 𝑤(𝑆 ∪ {𝑖}) − 𝑤(𝑆)
(2.35)
for every 𝑆 ⊂ 𝑇 ⊂ 𝑁 ∖ {𝑖}. Let 𝑆 and 𝑇 be arbitrary coalitions. Assume 𝑆 ∩ 𝑇 ⊂ 𝑆
and 𝑆 ∩ 𝑇 ⊂ 𝑇 (otherwise (2.18) is trivially satisfied). This implies that 𝑇 ∖ 𝑆 ∕= ∅.
Assume these players are labelled 1, 2, . . . , 𝑚, that is 𝑇 ∖ 𝑆 = {1, 2, . . . , 𝑚}. By (2.35)
𝑤(𝑆 ∪ {1}) − 𝑤(𝑆) ≥ 𝑤((𝑆 ∩ 𝑇 ) ∪ {1}) − 𝑤(𝑆 ∩ 𝑇 )
(2.36)
Successively adding the remaining players in 𝑇 ∖ 𝑆
    𝑤(𝑆 ∪ {1, 2}) − 𝑤(𝑆 ∪ {1}) ≥ 𝑤((𝑆 ∩ 𝑇) ∪ {1, 2}) − 𝑤((𝑆 ∩ 𝑇) ∪ {1})
    ⋮
    𝑤(𝑆 ∪ (𝑇 ∖ 𝑆)) − 𝑤(𝑆 ∪ {1, 2, . . . , 𝑚 − 1}) ≥ 𝑤((𝑆 ∩ 𝑇) ∪ (𝑇 ∖ 𝑆)) − 𝑤((𝑆 ∩ 𝑇) ∪ {1, 2, . . . , 𝑚 − 1})
Adding these inequalities to (2.36), we get
    𝑤(𝑆 ∪ (𝑇 ∖ 𝑆)) − 𝑤(𝑆) ≥ 𝑤((𝑆 ∩ 𝑇) ∪ (𝑇 ∖ 𝑆)) − 𝑤(𝑆 ∩ 𝑇)
This simplifies to
    𝑤(𝑆 ∪ 𝑇) − 𝑤(𝑆) ≥ 𝑤(𝑇) − 𝑤(𝑆 ∩ 𝑇)
which can be rearranged to give (2.18).
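The following Python check illustrates the equivalence on a hypothetical convex game 𝑤(𝑆) = ∣𝑆∣², which is an illustrative assumption, not taken from the text.

```python
# Check of 2.55 on the hypothetical convex game w(S) = |S|^2: supermodularity (2.18)
# holds for all S, T, and marginal contributions w(S ∪ {i}) - w(S) are nondecreasing in S.
from itertools import combinations

N = frozenset(range(4))
coalitions = [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]
w = {S: len(S) ** 2 for S in coalitions}

supermodular = all(w[S | T] + w[S & T] >= w[S] + w[T]
                   for S in coalitions for T in coalitions)

increasing_marginals = all(
    w[T | {i}] - w[T] >= w[S | {i}] - w[S]
    for i in N
    for S in coalitions if i not in S
    for T in coalitions if i not in T and S <= T)

print(supermodular, increasing_marginals)   # True True
```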
2.56 The cost allocation game is not convex. Let 𝑆 = {𝐴𝑃, 𝐾𝑀 }, 𝑇 = {𝐾𝑀, 𝑇 𝑁 }.
Then 𝑆 ∪ 𝑇 = {𝐴𝑃, 𝐾𝑀, 𝑇 𝑁 } = 𝑁 and 𝑆 ∩ 𝑇 = {𝐾𝑀 } and
𝑤(𝑆 ∪ 𝑇 ) + 𝑤(𝑆 ∩ 𝑇 ) = 1530 < 1940 = 770 + 1170 = 𝑤(𝑆) + 𝑤(𝑇 )
Alternatively, observe that TN’s marginal contribution to coalition {𝐾𝑀, 𝑇 𝑁 } is 1170,
which is greater than its marginal contribution to the grand coalition {𝐴𝑃, 𝐾𝑀, 𝑇 𝑁 }
(1530 − 770 = 760).
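In code, using only the figures quoted above (the value 𝑤({𝐾𝑀}) = 0 is implied by the marginal-contribution computation in the text; the script itself is an illustrative addition):

```python
# Convexity check of 2.56 with the figures quoted in the solution.
w = {
    frozenset({"KM"}): 0,
    frozenset({"AP", "KM"}): 770,
    frozenset({"KM", "TN"}): 1170,
    frozenset({"AP", "KM", "TN"}): 1530,
}
S, T = frozenset({"AP", "KM"}), frozenset({"KM", "TN"})
print(w[S | T] + w[S & T], "<", w[S] + w[T])   # 1530 < 1940: (2.18) fails, not convex
print(w[S | T] - w[S], "<", w[T] - w[S & T])   # 760 < 1170: TN's marginal contribution
                                               # falls in the larger coalition
```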
2.57 𝑓 is supermodular if
𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) ≥ 𝑓 (𝑥1 ) + 𝑓 (𝑥2 )
which can be rearranged to give
𝑓 (𝑥1 ∨ 𝑥2 ) − 𝑓 (𝑥2 ) ≥ 𝑓 (𝑥1 ) − 𝑓 (𝑥1 ∧ 𝑥2 )
If the right hand side of this inequality is nonnegative, then so a fortiori is the left
hand side, that is
𝑓 (𝑥1 ) ≥ 𝑓 (𝑥1 ∧ 𝑥2 ) =⇒ 𝑓 (𝑥1 ∨ 𝑥2 ) ≥ 𝑓 (𝑥2 )
If the right hand side is strictly positive, so must be the left hand side
𝑓 (𝑥1 ) > 𝑓 (𝑥1 ∧ 𝑥2 ) =⇒ 𝑓 (𝑥1 ∨ 𝑥2 ) > 𝑓 (𝑥2 )
2.58 Assume 𝑥2 ≿ 𝑥1 ∈ 𝑋 and 𝑦2 ≿𝑌 𝑦1 ∈ 𝑌 . Assume that 𝑓 displays increasing
differences in (𝑥, 𝑦), that is
𝑓 (𝑥2 , 𝑦2 ) − 𝑓 (𝑥1 , 𝑦2 ) ≥ 𝑓 (𝑥2 , 𝑦1 ) − 𝑓 (𝑥1 , 𝑦1 )
(2.37)
𝑓 (𝑥2 , 𝑦2 ) − 𝑓 (𝑥2 , 𝑦1 ) ≥ 𝑓 (𝑥1 , 𝑦2 ) − 𝑓 (𝑥1 , 𝑦1 )
(2.38)
Rearranging
Conversely, (2.38) implies (2.37) .
2.59 We showed in the text that supermodularity implies increasing differences. To
show that reverse, assume that 𝑓 : 𝑋 × 𝑌 → ℜ displays increasing differences in (𝑥, 𝑦).
Choose any (𝑥1 , 𝑦1 ), (𝑥2 , 𝑦2 ) ∈ 𝑋 × 𝑌 . If (𝑥1 , 𝑦1 ), (𝑥2 , 𝑦2 ) are comparable, so that either
(𝑥1 , 𝑦1 ) ≿ (𝑥2 , 𝑦2 ) or (𝑥1 , 𝑦1 ) ≾ (𝑥2 , 𝑦2 ), then (2.17) holds as an equality. Therefore
assume that (𝑥1 , 𝑦1 ), (𝑥2 , 𝑦2 ) are incomparable. Without loss of generality, assume that
𝑥1 ≾ 𝑥2 while 𝑦1 ≿ 𝑦2 . (This is where we require that 𝑋 and 𝑌 be chains). This implies
    (𝑥1 , 𝑦1 ) ∧ (𝑥2 , 𝑦2 ) = (𝑥1 , 𝑦2 ) and (𝑥1 , 𝑦1 ) ∨ (𝑥2 , 𝑦2 ) = (𝑥2 , 𝑦1 )    (2.39)
Increasing differences implies that
𝑓 (𝑥2 , 𝑦1 ) − 𝑓 (𝑥1 , 𝑦1 ) ≥ 𝑓 (𝑥2 , 𝑦2 ) − 𝑓 (𝑥1 , 𝑦2 )
which can be rewritten as
𝑓 (𝑥2 , 𝑦1 ) + 𝑓 (𝑥1 , 𝑦2 ) ≥ 𝑓 (𝑥1 , 𝑦1 ) + 𝑓 (𝑥2 , 𝑦2 )
Substituting (2.39)
    𝑓((𝑥1, 𝑦1) ∨ (𝑥2, 𝑦2)) + 𝑓((𝑥1, 𝑦1) ∧ (𝑥2, 𝑦2)) ≥ 𝑓(𝑥1, 𝑦1) + 𝑓(𝑥2, 𝑦2)
which establishes the supermodularity of 𝑓 on 𝑋 × 𝑌 (2.17).
2.60 In the standard Bertrand model of oligopoly
∙ the strategy space of each firm is ℜ+ , a lattice.
∙ 𝑢𝑖 (𝑝𝑖 , p−𝑖 ) is supermodular in 𝑝𝑖 (Exercise 2.51).
∙ If the other firms increase their prices from p1−𝑖 to p2−𝑖, the effect on the demand for firm 𝑖's product is
    𝑓(𝑝𝑖, p2−𝑖) − 𝑓(𝑝𝑖, p1−𝑖) = Σ_{𝑗≠𝑖} 𝑑𝑖𝑗(𝑝2𝑗 − 𝑝1𝑗)
If the goods are gross substitutes, demand for firm 𝑖 increases and the amount of the increase is independent of 𝑝𝑖. Consequently, the effect on profit will be increasing in 𝑝𝑖. That is, the payoff function (net revenue) has increasing differences in (𝑝𝑖, p−𝑖). Specifically,
    𝑢(𝑝𝑖, p2−𝑖) − 𝑢(𝑝𝑖, p1−𝑖) = Σ_{𝑗≠𝑖} 𝑑𝑖𝑗(𝑝𝑖 − 𝑐̄𝑖)(𝑝2𝑗 − 𝑝1𝑗)
For any price increase p2−𝑖 ≩ p1−𝑖, the change in profit 𝑢(𝑝𝑖, p2−𝑖) − 𝑢(𝑝𝑖, p1−𝑖) is increasing in 𝑝𝑖.
Hence, the Bertrand oligopoly model is a supermodular game.
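A numerical sketch of the increasing-differences property for a hypothetical linear duopoly follows; all parameter values are illustrative assumptions, not from the text.

```python
# Hypothetical linear duopoly: demand f_i(p) = a - b*p_i + d*p_j with d > 0
# (gross substitutes) and profit u_i = (p_i - c) * f_i(p).
a, b, d, c = 10.0, 1.0, 0.5, 2.0

def profit(p_i, p_j):
    return (p_i - c) * (a - b * p_i + d * p_j)

p_j_low, p_j_high = 4.0, 6.0          # a price increase by the rival
prices_i = [3.0, 4.0, 5.0, 6.0, 7.0]

# The profit gain from the rival's increase, as a function of own price p_i:
gains = [profit(p, p_j_high) - profit(p, p_j_low) for p in prices_i]
print(gains)                                              # d*(p_i - c)*(price increase)
print(all(g2 > g1 for g1, g2 in zip(gains, gains[1:])))   # True: increasing in p_i
```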
2.61 Suppose 𝑓 displays increasing differences so that for all 𝑥2 ≿ 𝑥1 and 𝑦2 ≿ 𝑦1
𝑓 (𝑥2 , 𝑦2 ) − 𝑓 (𝑥1 , 𝑦2 ) ≥ 𝑓 (𝑥2 , 𝑦1 ) − 𝑓 (𝑥1 , 𝑦1 )
Then
𝑓 (𝑥2 , 𝑦1 ) − 𝑓 (𝑥1 , 𝑦1 ) ≥ 0 =⇒ 𝑓 (𝑥2 , 𝑦2 ) − 𝑓 (𝑥1 , 𝑦2 ) ≥ 0
and
𝑓 (𝑥2 , 𝑦1 ) − 𝑓 (𝑥1 , 𝑦1 ) > 0 =⇒ 𝑓 (𝑥2 , 𝑦2 ) − 𝑓 (𝑥1 , 𝑦2 ) > 0
2.62 For any 𝜽 ∈ Θ∗ , let x1 , x2 ∈ 𝜑(𝜽). Supermodularity implies
𝑓 (x1 ∨ x2 , 𝜽) + 𝑓 (x1 ∧ x2 , 𝜽) ≥ 𝑓 (x1 , 𝜽) + 𝑓 (x2 , 𝜽)
which can be rearranged to give
𝑓 (x1 ∨ x2 , 𝜽) − 𝑓 (x2 , 𝜽) ≥ 𝑓 (x1 , 𝜽) − 𝑓 (x1 ∧ x2 , 𝜽)
(2.40)
However x1 and x2 are both maximal in 𝐺(𝜽).
𝑓 (x2 , 𝜽) ≥ 𝑓 (x1 ∨ x2 , 𝜽) =⇒ 𝑓 (x1 ∨ x2 , 𝜽) − 𝑓 (x2 , 𝜽) ≤ 0
𝑓 (x1 , 𝜽) ≥ 𝑓 (x1 ∧ x2 , 𝜽) =⇒ 𝑓 (x1 , 𝜽) − 𝑓 (x1 ∧ x2 , 𝜽) ≥ 0
Substituting in (2.40), we conclude
0 ≥ 𝑓 (x1 ∨ x2 , 𝜽) − 𝑓 (x2 , 𝜽) ≥ 𝑓 (x1 , 𝜽) − 𝑓 (x1 ∧ x2 , 𝜽) ≥ 0
This inequality must be satisfied as an equality with
𝑓 (x1 ∨ x2 , 𝜽) = 𝑓 (x2 , 𝜽)
𝑓 (x1 ∧ x2 , 𝜽) = 𝑓 (x1 , 𝜽)
That is x1 ∨ x2 ∈ 𝜑(𝜽) and x1 ∧ x2 ∈ 𝜑(𝜽). By Exercise 2.45, 𝜑 has an increasing
selection.
2.63 As in the proof of the theorem, let 𝜽1 , 𝜽 2 belong to Θ with 𝜽 2 ≿ 𝜽1 . Choose any
optimal solutions x1 ∈ 𝜑(𝜽 1 ) and x2 ∈ 𝜑(𝜽2 ). We claim that x2 ≿𝑋 x1 . Assume
otherwise, that is assume x2 ∕≿𝑋 x1 . This implies (Exercise 1.44) that x1 ∧ x2 ∕= x1 .
Since x1 ≿ x1 ∧ x2 , we must have x1 ≻ x1 ∧ x2 . Strictly increasing differences implies
𝑓 (x1 , 𝜽 2 ) − 𝑓 (x1 , 𝜽1 ) > 𝑓 (x1 ∧ x2 , 𝜽2 ) − 𝑓 (x1 ∧ x2 , 𝜽 1 )
which can be rearranged to give
𝑓 (x1 , 𝜽 2 ) − 𝑓 (x1 ∧ x2 , 𝜽2 ) > 𝑓 (x1 , 𝜽1 ) − 𝑓 (x1 ∧ x2 , 𝜽 1 )
(2.41)
Supermodularity implies
𝑓 (x1 ∨ x2 , 𝜽2 ) + 𝑓 (x1 ∧ x2 , 𝜽2 ) ≥ 𝑓 (x1 , 𝜽2 ) + 𝑓 (x2 , 𝜽 2 )
which can be rearranged to give
𝑓 (x1 ∨ x2 , 𝜽2 ) − 𝑓 (x2 , 𝜽2 ) ≥ 𝑓 (x1 , 𝜽2 ) − 𝑓 (x1 ∧ x2 , 𝜽 2 )
Combining this inequality with (2.41) gives
𝑓 (x1 ∨ x2 , 𝜽2 ) − 𝑓 (x2 , 𝜽2 ) > 𝑓 (x1 , 𝜽1 ) − 𝑓 (x1 ∧ x2 , 𝜽 1 )
(2.42)
However x1 and x2 are optimal for their respective parameter values, that is
𝑓 (x2 , 𝜽2 ) ≥ 𝑓 (x1 ∨ x2 , 𝜽 2 ) =⇒ 𝑓 (x1 ∨ x2 , 𝜽 2 ) − 𝑓 (x2 , 𝜽2 ) ≤ 0
𝑓 (x1 , 𝜽1 ) ≥ 𝑓 (x1 ∧ x2 , 𝜽 1 ) =⇒ 𝑓 (x1 , 𝜽 1 ) − 𝑓 (x1 ∧ x2 , 𝜽1 ) ≥ 0
Substituting in (2.42), we conclude
0 ≥ 𝑓 (x1 ∨ x2 , 𝜽2 ) − 𝑓 (x2 , 𝜽2 ) > 𝑓 (x1 , 𝜽1 ) − 𝑓 (x1 ∧ x2 , 𝜽 1 ) ≥ 0
This contradiction implies that our assumption that x2 ∕≿𝑋 x1 is false. x2 ≿𝑋 x1 as
required. 𝜑 is always increasing.
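The following Python fragment illustrates this monotonicity conclusion on a finite chain with a hypothetical objective displaying (strictly) increasing differences in (𝑥, 𝜃); the function and parameter values are invented for illustration only.

```python
# Monotone comparative statics sketch: the set of maximizers moves up with theta.
X = range(11)                      # a finite chain (hence a lattice)
thetas = [0.5, 1.0, 1.5, 2.0]

def f(x, theta):
    return -(x - 4) ** 2 + theta * x      # the cross term theta*x gives increasing differences

def argmax(theta):
    best = max(f(x, theta) for x in X)
    return [x for x in X if f(x, theta) == best]

maximizers = [argmax(t) for t in thetas]
print(maximizers)                                         # [[4], [4, 5], [5], [5]]
print(all(min(b) >= min(a) and max(b) >= max(a)
          for a, b in zip(maximizers, maximizers[1:])))   # True: maximizers increase with theta
```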
2.64 The budget correspondence is descending in p and therefore ascending in −p.
Consequently, the indirect utility function
    𝑣(p, 𝑚) = sup_{x∈𝑋(p,𝑚)} 𝑢(x)
is increasing in −p, that is decreasing in p.
2.65 ⇐= Let 𝜽2 ≿ 𝜽 1 and 𝐺2 ≿𝑆 𝐺1 . Select x1 ∈ 𝜑(𝜽 1 , 𝐺1 ) and x2 ∈ 𝜑(𝜽 2 , 𝐺2 ).
Since 𝐺2 ≿𝑆 𝐺1 , x1 ∧ x2 ∈ 𝐺1 . Since x1 is optimal (x1 ∈ 𝜑(𝜽1 , 𝐺1 )), 𝑓 (x1 , 𝜽1 ) ≥
𝑓 (x1 ∧ x2 , 𝜽1 ). Quasisupermodularity implies 𝑓 (x1 ∨ x2 , 𝜽 1 ) ≥ 𝑓 (x2 , 𝜽1 ). By the single
crossing condition 𝑓 (x1 ∨ x2 , 𝜽2 ) ≥ 𝑓 (x2 , 𝜽 2 ). Therefore x1 ∨ x2 ∈ 𝜑(𝜽2 , 𝐺2 ).
Similarly, since 𝐺2 ≿𝑆 𝐺1 , x1 ∨ x2 ∈ 𝐺2 . But x2 is optimal, which implies that
𝑓 (x2 , 𝜽2 ) ≥ 𝑓 (x1 ∨ x2 , 𝜽2 ) or 𝑓 (x1 ∨ x2 , 𝜽 2 ) ≤ 𝑓 (x2 , 𝜽2 ). The single crossing condition
implies that a similar inequality holds at 𝜽 1 , that is 𝑓 (x1 ∨ x2 , 𝜽1 ) ≤ 𝑓 (x2 , 𝜽1 ). Quasisupermodularity implies that 𝑓 (x1 , 𝜽 1 ) ≤ 𝑓 (x1 ∧x2 , 𝜽 1 ). Therefore x1 ∧x2 ∈ 𝜑(𝜽 1 , 𝐺1 ).
Since x1 ∨ x2 ∈ 𝜑(𝜽 2 , 𝐺2 ) and x1 ∧ x2 ∈ 𝜑(𝜽 1 , 𝐺1 ), 𝜑 is increasing in (𝜽, 𝐺).
=⇒ To show that 𝑓 is quasisupermodular, suppose that 𝜽 is fixed. Choose any
x1 , x2 ∈ 𝑋. Let 𝐺1 = {x1 , x1 ∧ x2 } and 𝐺2 = {x2 , x1 ∨ x2 }. Then 𝐺2 ≿𝑆 𝐺1 . Assume
that 𝑓 (x1 , 𝜽) ≥ 𝑓 (x1 ∧x2 , 𝜽). Then x1 ∈ 𝜑(𝜽, 𝐺1 ) which implies that x1 ∨x2 ∈ 𝜑(𝜽, 𝐺2 ).
(If x2 ∈ 𝜑(𝜽, 𝐺2 ), then also x1 ∨ x2 ∈ 𝜑(𝜽, 𝐺2 ) since 𝜑 is increasing in (𝜽, 𝐺)). But
this implies that 𝑓 (x1 ∨ x2 , 𝜽) ≥ 𝑓 (x2 , 𝜽). 𝑓 is quasisupermodular in 𝑋.
To show that 𝑓 satisfies the single crossing condition, choose any x2 ≿ x1 and let
𝐺 = {x1 , x2 }. Assume that 𝑓 (x2 , 𝜽1 ) ≥ 𝑓 (x1 , 𝜽 1 ). Then x2 ∈ 𝜑(𝜽1 , 𝐺) which implies
that x2 ∈ 𝜑(𝜽 2 , 𝐺) for any 𝜽 2 ≿ 𝜽1 . (If x1 ∈ 𝜑(𝜽 2 , 𝐺), then also x1 ∨x2 = x2 ∈ 𝜑(𝜽 2 , 𝐺)
since 𝜑 is increasing in (𝜽, 𝐺).) But this implies that 𝑓 (x2 , 𝜽2 ) ≥ 𝑓 (x1 , 𝜽2 ). 𝑓 satisfies
the single crossing condition.
2.66 First, assume that 𝑓 is continuous. Let 𝑇 be an open subset in 𝑌 and 𝑆 = 𝑓 −1 (𝑇 ).
If 𝑆 = ∅, it is open. Otherwise, choose 𝑥0 ∈ 𝑆 and let 𝑦0 = 𝑓 (𝑥0 ) ∈ 𝑇 . Since 𝑇 is
open, there exists a neighborhood 𝑁 (𝑦0 ) ⊆ 𝑇 . Since 𝑓 is continuous, there exists a
corresponding neighborhood 𝑁 (𝑥0 ) with 𝑓 (𝑁 (𝑥0 )) ⊆ 𝑁 (𝑓 (𝑥0 )). Since 𝑁 (𝑓 (𝑥0 )) ⊆ 𝑇 ,
𝑁 (𝑥0 ) ⊆ 𝑆. This establishes that for every 𝑥0 ∈ 𝑆 there exist a neighborhood 𝑁 (𝑥0 )
contained in 𝑆. That is, 𝑆 is open in 𝑋.
Conversely, assume that the inverse image of every open set in 𝑌 is open in 𝑋. Choose
some 𝑥0 ∈ 𝑋 and let 𝑦0 = 𝑓 (𝑥0 ). Let 𝑇 ⊂ 𝑌 be a neighborhood of 𝑦0 . 𝑇 contains an
open ball 𝐵𝑟 (𝑦0 ) about 𝑦0 . By hypothesis, the inverse image 𝑆 = 𝑓 −1 (𝐵𝑟 (𝑦0 )) is open
in 𝑋. Therefore, there exists a neighborhood 𝑁 (𝑥0 ) ⊆ 𝑓 −1 (𝐵𝑟 (𝑦0 )). Since 𝐵𝑟 (𝑦0 ) ⊆ 𝑇 ,
𝑓 (𝑁 (𝑥0 )) ⊆ 𝑇 . Since the choice of 𝑥0 was arbitrary, we conclude that 𝑓 is continuous.
2.67 Assume 𝑓 is continuous. Let 𝑇 be a closed set in 𝑌 and let 𝑆 = 𝑓 −1 (𝑇 ). Then,
𝑇 𝑐 is open. By the previous exercise, 𝑓 −1 (𝑇 𝑐 ) = 𝑆 𝑐 is open and therefore 𝑆 is closed.
Conversely, for every open set 𝑇 ⊆ 𝑌 , 𝑇 𝑐 is closed. By hypothesis, 𝑆 𝑐 = 𝑓 −1 (𝑇 𝑐 ) is
closed and therefore 𝑆 = 𝑓 −1 (𝑇 ) is open. 𝑓 is continuous by the previous exercise.
2.68 Assume 𝑓 is continuous. Let 𝑥𝑛 be a sequence converging to 𝑥. Let 𝑇 be a neighborhood of 𝑓 (𝑥). Since 𝑓 is continuous, there exists a neighborhood 𝑆 ∋ 𝑥 such that
𝑓 (𝑆) ⊆ 𝑇 . Since 𝑥𝑛 converges to 𝑥, there exists some 𝑁 such that 𝑥𝑛 ∈ 𝑆 for all
𝑛 ≥ 𝑁 . Consequently 𝑓 (𝑥𝑛 ) ∈ 𝑇 for every 𝑛 ≥ 𝑁 . This establishes that 𝑓 (𝑥𝑛 ) → 𝑓 (𝑥).
Conversely, assume that for every sequence 𝑥𝑛 → 𝑥, 𝑓 (𝑥𝑛 ) → 𝑓 (𝑥). We show that if 𝑓
were not continuous, it would be possible to construct a sequence which violates this
hypothesis. Suppose then that 𝑓 is not continuous. Then there exists a neighborhood 𝑇 of 𝑓(𝑥) such that for every neighborhood 𝑆 of 𝑥, there is 𝑥′ ∈ 𝑆 with 𝑓(𝑥′) ∉ 𝑇. In particular, consider the sequence of open balls 𝐵1/𝑛(𝑥). For every 𝑛, choose a point 𝑥𝑛 ∈ 𝐵1/𝑛(𝑥) with 𝑓(𝑥𝑛) ∉ 𝑇. Then 𝑥𝑛 → 𝑥 but 𝑓(𝑥𝑛) does not converge to 𝑓(𝑥). This contradicts the assumption. We conclude that 𝑓 must be continuous.
2.69 Since 𝑓 is one-to-one and onto, it has an inverse 𝑔 = 𝑓 −1 which maps 𝑌 onto
𝑋. Let 𝑆 be an open set in 𝑋. Since 𝑓 is open, 𝑇 = 𝑔 −1 (𝑆) = 𝑓 (𝑆) is open in 𝑌 .
Therefore 𝑔 = 𝑓 −1 is continuous.
2.70 Assume 𝑓 is continuous. Let (𝑥𝑛 , 𝑦 𝑛 ) be a sequence of points in graph(𝑓 ) converging to (𝑥, 𝑦). Then 𝑦 𝑛 = 𝑓 (𝑥𝑛 ) and 𝑥𝑛 → 𝑥. Since 𝑓 is continuous, 𝑦 = 𝑓 (𝑥) =
lim𝑛→∞ 𝑓 (𝑥𝑛 ) = lim𝑛→∞ 𝑦 𝑛 . Therefore (𝑥, 𝑦) ∈ graph(𝑓 ) which is therefore closed.
2.71 By the previous exercise, 𝑓 continuous implies graph(𝑓) closed. Conversely, suppose graph(𝑓) is closed and let 𝑥𝑛 be a sequence converging to 𝑥. Then (𝑥𝑛, 𝑓(𝑥𝑛)) is a sequence in graph(𝑓). Since 𝑌 is compact, 𝑓(𝑥𝑛) contains a subsequence which converges to some 𝑦. Since graph(𝑓) is closed, (𝑥, 𝑦) ∈ graph(𝑓) and therefore 𝑦 = 𝑓(𝑥) and
𝑓 (𝑥𝑛 ) → 𝑓 (𝑥).
2.72 Let 𝑇 be an open set in 𝑍. Since 𝑓 and 𝑔 are continuous, 𝑔 −1 (𝑇 ) is open in 𝑌
and 𝑓 −1 (𝑔 −1 (𝑇 )) is open in 𝑋. But 𝑓 −1 (𝑔 −1 (𝑇 )) = (𝑓 ∘ 𝑔)−1 (𝑇 ). Therefore 𝑓 ∘ 𝑔 is
continuous.
2.73 Exercises 1.201 and 2.68.
2.74 Let 𝑢 be defined as in Exercise 2.38. Let (x𝑛 ) be a sequence converging to x. Let
𝑧 𝑛 = 𝑢(x𝑛 ) and 𝑧 = 𝑢(x). We need to show that 𝑧 𝑛 → 𝑧.
(𝑧 𝑛 ) has a convergent subsequence. Let 𝑧¯ = max𝑖 𝑥𝑖 and 𝑧 = min𝑖 𝑥𝑖 . Then 𝑧 ∈
[𝑧, 𝑧¯]. Fix some 𝜖 > 0. Since x𝑛 → x, there exists some 𝑁 such that ∥x𝑛 − x∥∞ <
𝜖 for every 𝑛 ≥ 𝑁 . Consequently, for all 𝑛 ≥ 𝑁 , the terms of the sequence (𝑧 𝑛 )
lie in the compact set [𝑧 − 𝜖, 𝑧¯ + 𝜖]. Hence, (𝑧 𝑛 ) has a convergent subsequence
(𝑧 𝑚 ).
Every convergent subsequence (𝑧 𝑚 ) converges to 𝑧. Suppose not. That is, suppose there exists a convergent subsequence which converges to 𝑧 ′ . Without loss
of generality, assume 𝑧′ > 𝑧. Let 𝑧̂ = (𝑧 + 𝑧′)/2 and let z = 𝑧1, z′ = 𝑧′1, ẑ = 𝑧̂1
be the corresponding commodity bundles (see Exercise 2.38). Since 𝑧 𝑚 → 𝑧 ′ > 𝑧ˆ,
there exists some 𝑀 such that 𝑧 𝑚 > 𝑧ˆ for every 𝑚 ≥ 𝑀 . This implies that
x𝑚 ∼ z𝑚 ≻ ẑ for every 𝑚 ≥ 𝑀
by monotonicity. Now x𝑚 → x and continuity of preferences implies that x ≿ ẑ.
However x ∼ z which implies that z ≿ ẑ which contradicts monotonicity, since
ẑ > z. Consequently, every convergent subsequence (𝑧 𝑚 ) converges to 𝑧.
2.75 Assume 𝑋 is compact. Let 𝑦 𝑛 be a sequence in 𝑓 (𝑋). There exists a sequence
𝑥𝑛 in 𝑋 with 𝑦 𝑛 = 𝑓 (𝑥𝑛 ). Since 𝑋 is compact, it contains a convergent subsequence
𝑥𝑚 → 𝑥. If 𝑓 is continuous, the subsequence 𝑦 𝑚 = 𝑓 (𝑥𝑚 ) converges in 𝑓 (𝑋) (Exercise
2.68). Therefore 𝑓 (𝑋) is compact.
Assume 𝑋 is connected but 𝑓(𝑋) is not. This means there exist open subsets 𝐺 and 𝐻 in 𝑌 such that 𝑓(𝑋) ⊂ 𝐺 ∪ 𝐻 and (𝐺 ∩ 𝑓(𝑋)) ∩ (𝐻 ∩ 𝑓(𝑋)) = ∅. This implies that 𝑋 = 𝑓−1(𝐺) ∪ 𝑓−1(𝐻) is a disconnection of 𝑋, which contradicts the connectedness of 𝑋.
2.76 Let 𝑆 be any open set in 𝑋. Its complement 𝑆 𝑐 is closed and therefore compact.
Consequently, 𝑓 (𝑆 𝑐 ) is compact (Proposition 2.3) and hence closed. Since 𝑓 is one-to-one
and onto, 𝑓 (𝑆) is the complement of 𝑓 (𝑆 𝑐 ), and thus open in 𝑌 . Therefore, 𝑓 is an
open mapping. By Exercise 2.69, 𝑓 −1 is continuous and 𝑓 is a homeomorphism.
2.77 Assume 𝑓 continuous. The sets {𝑓 (𝑥) ≥ 𝑎} and {𝑓 (𝑥) ≤ 𝑎} are closed subsets of
ℜ and hence ≿(𝑎) = 𝑓 −1 {𝑓 (𝑥) ≥ 𝑎} and ≾(𝑎) = 𝑓 −1 {𝑓 (𝑥) ≤ 𝑎} are closed subsets
of 𝑋 (Exercise 2.67).
Conversely, assume that all upper ≿(𝑎) and lower ≾(𝑎) contour sets are closed. This
implies that the sets ≻(𝑎) and ≺(𝑎) are open.
Let 𝐴 be an open set in ℜ. Then for every 𝑎 ∈ 𝐴, there exists an open ball 𝐵𝑟𝑎(𝑎) ⊆ 𝐴 and
    𝐴 = ⋃_{𝑎∈𝐴} 𝐵𝑟𝑎(𝑎)
For every 𝑎 ∈ 𝐴, 𝐵𝑟𝑎(𝑎) = (𝑎 − 𝑟𝑎, 𝑎 + 𝑟𝑎) and
    𝑓−1(𝐵𝑟𝑎(𝑎)) = ≻(𝑎 − 𝑟𝑎) ∩ ≺(𝑎 + 𝑟𝑎)
which is open. Consequently
    𝑓−1(𝐴) = ⋃_{𝑎∈𝐴} 𝑓−1(𝐵𝑟𝑎(𝑎)) = ⋃_{𝑎∈𝐴} ( ≻(𝑎 − 𝑟𝑎) ∩ ≺(𝑎 + 𝑟𝑎) )
is open. 𝑓 is continuous by Exercise 2.66.
2.78 Choose any 𝑥0 ∈ 𝑋 and 𝜖 > 0. Since 𝑓 is continuous, there exists 𝛿1 such that
𝜌(𝑥, 𝑥0 ) < 𝛿1 =⇒ ∣𝑓 (𝑥) − 𝑓 (𝑥0 )∣ < 𝜖/2
Similarly, there exists 𝛿2 such that
𝜌(𝑥, 𝑥0 ) < 𝛿2 =⇒ ∣𝑔(𝑥) − 𝑔(𝑥0 )∣ < 𝜖/2
Let 𝛿 = min{𝛿1 , 𝛿2 }. Then, provided 𝜌(𝑥, 𝑥0 ) < 𝛿
∣(𝑓 + 𝑔)(𝑥) − (𝑓 + 𝑔)(𝑥0 )∣ = ∣𝑓 (𝑥) + 𝑔(𝑥) − 𝑓 (𝑥0 ) − 𝑔(𝑥0 )∣
≤ ∣𝑓 (𝑥) − 𝑓 (𝑥0 )∣ + ∣𝑔(𝑥) − 𝑔(𝑥0 )∣
<𝜖
This establishes 𝑓 + 𝑔 is continuous at 𝑥0 . Since 𝑥0 was arbitrary, 𝑓 + 𝑔 is continuous
for every 𝑥0 ∈ 𝑋. The continuity of 𝛼𝑓 is shown similarly.
2.79 Choose any 𝑥0 ∈ 𝑋. Given 0 < 𝜂 ≤ 1, there exists 𝛿 > 0 such that
∣𝑓 (𝑥) − 𝑓 (𝑥0 )∣ < 𝜂 and ∣𝑔(𝑥) − 𝑔(𝑥0 )∣ < 𝜂
whenever 𝜌(𝑥, 𝑥0 ) < 𝛿. Consequently, while 𝜌(𝑥, 𝑥0 ) < 𝛿
∣𝑓 (𝑥)∣ ≤ ∣𝑓 (𝑥) − 𝑓 (𝑥0 )∣ + ∣𝑓 (𝑥0 )∣
< 𝜂 + ∣𝑓 (𝑥0 )∣
≤ 1 + ∣𝑓 (𝑥0 )∣
and
∣(𝑓 𝑔)(𝑥) − (𝑓 𝑔)(𝑥0 )∣ = ∣𝑓 (𝑥)𝑔(𝑥) − 𝑓 (𝑥0 )𝑔(𝑥0 )∣
= ∣𝑓 (𝑥)(𝑔(𝑥) − 𝑔(𝑥0 )) + 𝑔(𝑥0 )(𝑓 (𝑥) − 𝑓 (𝑥0 ))∣
≤ ∣𝑓 (𝑥)∣ ∣𝑔(𝑥) − 𝑔(𝑥0 )∣ + ∣𝑔(𝑥0 )∣ ∣𝑓 (𝑥) − 𝑓 (𝑥0 )∣
< 𝜂(1 + ∣𝑓 (𝑥0 )∣ + ∣𝑔(𝑥0 )∣)
Given 𝜖 > 0, let 𝜂 = min{1, 𝜖/(1 + ∣𝑓 (𝑥0 )∣ + ∣𝑔(𝑥0 )∣)}. Then, we have shown that there
exists 𝛿 > 0 such that
𝜌(𝑥, 𝑥0 ) < 𝛿 =⇒ ∣(𝑓 𝑔)(𝑥) − (𝑓 𝑔)(𝑥0 )∣ < 𝜖
Therefore, 𝑓 𝑔 is continuous at 𝑥0 .
2.80 Apply Exercises 2.78 and 2.72.
2.81 For any 𝑎 ∈ ℜ, the upper and lower contour sets of 𝑓 ∨ 𝑔, namely
{ 𝑥 : max{𝑓 (𝑥), 𝑔(𝑥)} ≥ 𝑎} = {𝑥 : 𝑓 (𝑥) ≥ 𝑎 } ∪ { 𝑥 : 𝑔(𝑥) ≥ 𝑎 }
{ 𝑥 : max{𝑓 (𝑥), 𝑔(𝑥)} ≤ 𝑎} = {𝑥 : 𝑓 (𝑥) ≤ 𝑎 } ∩ { 𝑥 : 𝑔(𝑥) ≤ 𝑎 }
are closed. Therefore 𝑓 ∨ 𝑔 is continuous (Exercise 2.77). Similarly for 𝑓 ∧ 𝑔.
2.82 The set 𝑇 = 𝑓 (𝑋) is compact (Proposition 2.3). We want to show that 𝑇 has
both largest and smallest elements. Assume otherwise, that is assume that 𝑇 has
no largest element. Then, the set of intervals {(−∞, 𝑡) : 𝑡 ∈ 𝑇 } forms an open
covering of 𝑇 . Since 𝑇 is compact, there exists a finite subcollection of intervals
{(−∞, 𝑡1 ), (−∞, 𝑡2 ), . . . , (−∞, 𝑡𝑛 )} which covers 𝑇 . Let 𝑡∗ be the largest of these 𝑡𝑖 .
Then 𝑡∗ does not belong to any of the intervals {(−∞, 𝑡1 ), (−∞, 𝑡2 ), . . . , (−∞, 𝑡𝑛 )},
contrary to the fact that they cover 𝑇 . This contradiction shows that, contrary to our
assumption, there must exist a largest element 𝑡∗ ∈ 𝑇 , that is 𝑡∗ ≥ 𝑡 for all 𝑡 ∈ 𝑇 .
Let 𝑥∗ ∈ 𝑓 −1 (𝑡∗ ). Then 𝑡∗ = 𝑓 (𝑥∗ ) ≥ 𝑓 (𝑥) for all 𝑥 ∈ 𝑋. The existence of a smallest
element is proved analogously.
2.83 By Proposition 2.3, 𝑓 (𝑋) is connected and hence an interval (Exercise 1.95).
2.84 The range 𝑓 (𝑋) is a compact subset of ℜ (Proposition 2.3). Therefore 𝑓 is bounded
(Proposition 1.1).
2.85 Let 𝐶̃(𝑋) denote the set of all continuous (not necessarily bounded) functionals on 𝑋. Then
    𝐶(𝑋) = 𝐵(𝑋) ∩ 𝐶̃(𝑋)
𝐵(𝑋) and 𝐶̃(𝑋) are linear subspaces of the set of all functionals 𝐹(𝑋) (Exercises 2.11, 2.78 respectively). Therefore 𝐶(𝑋) = 𝐵(𝑋) ∩ 𝐶̃(𝑋) is a subspace of 𝐹(𝑋) (Exercise 1.130). Clearly 𝐶(𝑋) ⊆ 𝐵(𝑋). Therefore 𝐶(𝑋) is a linear subspace of 𝐵(𝑋).
Let 𝑓 be a bounded function in the closure of 𝐶(𝑋), that is 𝑓 ∈ cl 𝐶(𝑋). We show that
𝑓 is continuous. For any 𝜖 > 0, there exists 𝑓0 ∈ 𝐶(𝑋) such that ∥𝑓 − 𝑓0 ∥ < 𝜖/3.
Therefore ∣𝑓 (𝑥) − 𝑓0 (𝑥)∣ < 𝜖/3 for every 𝑥 ∈ 𝑋. Choose some 𝑥0 ∈ 𝑋. Since 𝑓0 is
continuous, there exists 𝛿 > 0 such that
𝜌(𝑥, 𝑥0 ) < 𝛿 =⇒ ∣𝑓0 (𝑥) − 𝑓0 (𝑥0 )∣ < 𝜖/3
Therefore, for every 𝑥 ∈ 𝑋 such that 𝜌(𝑥, 𝑥0 ) < 𝛿
∣𝑓 (𝑥) − 𝑓 (𝑥0 )∣ = ∣𝑓 (𝑥) − 𝑓0 (𝑥) + 𝑓0 (𝑥) − 𝑓0 (𝑥0 ) + 𝑓0 (𝑥0 ) − 𝑓 (𝑥0 )∣
≤ ∣𝑓 (𝑥) − 𝑓0 (𝑥)∣ + ∣𝑓0 (𝑥) − 𝑓0 (𝑥0 )∣ + ∣𝑓0 (𝑥0 ) − 𝑓 (𝑥0 )∣
< 𝜖/3 + 𝜖/3 + 𝜖/3 = 𝜖
Therefore 𝑓 is continuous at 𝑥0 . Since 𝑥0 was arbitrary, we conclude that is continuous
everywhere, that is 𝑓 ∈ 𝐶(𝑋). Therefore cl 𝐶(𝑋) = 𝐶(𝑋) and 𝐶(𝑋) is closed in 𝐵(𝑋).
Since 𝐵(𝑋) is complete (Exercise 2.11), we conclude that 𝐶(𝑋) is complete (Exercise
1.107). Therefore 𝐶(𝑋) is a Banach space.
2.86 For every 𝛼 ∈ ℜ,
{ 𝑥 : 𝑓 (𝑥) ≥ 𝛼 } = {𝑥 : −𝑓 (𝑥) ≤ −𝛼 }
and therefore
{ 𝑥 : 𝑓 (𝑥) ≥ 𝛼 } is closed ⇐⇒ {𝑥 : −𝑓 (𝑥) ≤ −𝛼 } is closed
2.87 Exercise 2.77.
2.88 1 implies 2 Suppose 𝑓 is upper semi-continuous. Let 𝑥𝑛 be a sequence converging to 𝑥0 . Assume 𝑓 (𝑥𝑛 ) → 𝜇. For every 𝛼 < 𝜇, there exists some 𝑁 such that
𝑓 (𝑥𝑛 ) > 𝛼 for every 𝑛 ≥ 𝑁 . Hence
𝑥0 ∈ cl{ 𝑥 : 𝑓 (𝑥) ≥ 𝛼 } = { 𝑥 : 𝑓 (𝑥) ≥ 𝛼 }
since 𝑓 is upper semi-continuous. That is, 𝑓 (𝑥0 ) ≥ 𝛼 for every 𝛼 < 𝜇. Hence
𝑓 (𝑥0 ) ≥ 𝜇 = lim𝑛→∞ 𝑓 (𝑥𝑛 ).
2 implies 3 Let (𝑥𝑛 , 𝑦 𝑛 ) be a sequence in hypo 𝑓 which converges to (𝑥, 𝑦). That is,
𝑥𝑛 → 𝑥, 𝑦 𝑛 → 𝑦 and 𝑦 𝑛 ≤ 𝑓 (𝑥𝑛 ). Condition 2 implies that 𝑓 (𝑥) ≥ 𝑦. Hence,
(𝑥, 𝑦) ∈ hypo 𝑓 . Therefore hypo 𝑓 is closed.
3 implies 1 For fixed 𝛼 ∈ ℜ, let 𝑥𝑛 be a sequence in { 𝑥 : 𝑓 (𝑥) ≥ 𝛼 }. Suppose
𝑥𝑛 → 𝑥0 . Then, the sequence (𝑥𝑛 , 𝛼) converges to (𝑥0 , 𝛼) ∈ hypo 𝑓 . Hence
𝑓 (𝑥0 ) ≥ 𝛼 and 𝑥0 ∈ { 𝑥 : 𝑓 (𝑥) ≥ 𝛼 }, which is therefore closed (Exercise 1.106).
2.89 Let 𝑀 = sup𝑥∈𝑋 𝑓 (𝑥), so that
𝑓 (𝑥) ≤ 𝑀 for every 𝑥 ∈ 𝑋
(2.43)
There exists a sequence 𝑥𝑛 in 𝑋 with 𝑓 (𝑥𝑛 ) → 𝑀 . Since 𝑋 is compact, there exists
a convergent subsequence 𝑥𝑚 → 𝑥∗ and 𝑓 (𝑥𝑚 ) → 𝑀 . However, since 𝑓 is upper
semi-continuous, 𝑓 (𝑥∗ ) ≥ lim 𝑓 (𝑥𝑚 ) = 𝑀 . Combined with (2.43), we conclude that
𝑓 (𝑥∗ ) = 𝑀 .
2.90 Choose some 𝜖 > 0. Since 𝑓 is uniformly continuous, there exists some 𝛿 > 0 such
that 𝜌(𝑓 (𝑥𝑚 ), 𝑓 (𝑥𝑛 )) < 𝜖 for every 𝑥𝑚 , 𝑥𝑛 ∈ 𝑋 such that 𝜌(𝑥𝑚 , 𝑥𝑛 ) < 𝛿. Let (𝑥𝑛 )
be a Cauchy sequence in 𝑋. There exists some 𝑁 such that 𝜌(𝑥𝑚 , 𝑥𝑛 ) < 𝛿 for every
𝑚, 𝑛 ≥ 𝑁 . Uniform continuity implies that 𝜌(𝑓 (𝑥𝑚 ), 𝑓 (𝑥𝑛 )) < 𝜖 for every 𝑚, 𝑛 ≥ 𝑁 .
(𝑓 (𝑥𝑛 )) is a Cauchy sequence.
2.91 Suppose not. That is, suppose 𝑓 is continuous but not uniformly continuous. Then
there exists some 𝜖 > 0 such that for 𝑛 = 1, 2, . . . , there exist points 𝑥1𝑛 , 𝑥2𝑛 such that
𝜌(𝑥1𝑛 , 𝑥2𝑛 ) < 1/𝑛 but 𝜌(𝑓 (𝑥1𝑛 ), 𝑓 (𝑥2𝑛 )) ≥ 𝜖
(2.44)
Since 𝑋 is compact, (𝑥1𝑛 ) has a subsequence (𝑥1𝑚 ) converging to some 𝑥 ∈ 𝑋. By
construction (𝜌(𝑥1𝑛 , 𝑥2𝑛 ) < 1/𝑛), the sequence (𝑥2𝑚 ) also converges to 𝑥 and by continuity
lim 𝑓 (𝑥1𝑚 ) = lim 𝑓 (𝑥2𝑚 )
𝑚→∞
𝑚→∞
which contradicts (2.44).
2.92 Assume 𝑓 is Lipschitz with constant 𝛽. For any 𝜖 > 0, let 𝛿 = 𝜖/(2𝛽). Then, provided 𝜌(𝑥, 𝑥0) ≤ 𝛿
    𝜌(𝑓(𝑥), 𝑓(𝑥0)) ≤ 𝛽𝜌(𝑥, 𝑥0) = 𝛽𝛿 = 𝛽 · 𝜖/(2𝛽) = 𝜖/2 < 𝜖
𝑓 is uniformly continuous.
2.93 Let 𝑓, 𝑔 ∈ 𝐵(𝑋). Since 𝐵(𝑋) is a normed linear space, for every 𝑥 ∈ 𝑋
𝑓 (𝑥) − 𝑔(𝑥) = (𝑓 − 𝑔)(𝑥) ≤ ∥𝑓 − 𝑔∥
which implies that
𝑓 (𝑥) ≤ 𝑔(𝑥) + ∥𝑓 − 𝑔∥
Since 𝑇 is increasing and satisfies (2.21)
𝑇 (𝑓 ) ≤ 𝑇 (𝑔 + ∥𝑓 − 𝑔∥) = 𝑇 (𝑔) + 𝛽 ∥𝑓 − 𝑔∥
or
𝑇 (𝑓 ) − 𝑇 (𝑔) ≤ 𝛽 ∥𝑓 − 𝑔∥
That is, for every 𝑥 ∈ 𝑋
(𝑇 𝑓 − 𝑇 𝑔)(𝑥) ≤ 𝛽 ∥𝑓 − 𝑔∥
and consequently
∥𝑇 𝑓 − 𝑇 𝑔∥ ≤ 𝛽 ∥𝑓 − 𝑔∥
𝑇 is a contraction with modulus 𝛽.
2.94 We have previously shown that 𝑇 is increasing (Exercise 2.42). By direct calculation, for any constant 𝑐 ∈ ℜ,
    𝑇(𝑣 + 𝑐)(𝑥) = sup_{𝑦∈𝐺(𝑥)} { 𝑓(𝑥, 𝑦) + 𝛽(𝑣(𝑦) + 𝑐) }
               = sup_{𝑦∈𝐺(𝑥)} { 𝑓(𝑥, 𝑦) + 𝛽𝑣(𝑦) } + 𝛽𝑐
               = 𝑇(𝑣)(𝑥) + 𝛽𝑐
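The following Python check verifies Blackwell's two conditions (monotonicity, Exercise 2.42, and the constant-shift property just derived) and the resulting contraction bound of Exercise 2.93, using the same hypothetical finite-state primitives sketched after 2.18; everything in the script is an illustrative assumption.

```python
# Numerical check of 2.42, 2.94 and the contraction property of 2.93 for a toy Bellman operator.
import random

beta = 0.9
states = list(range(5))
G = {x: [y for y in states if abs(y - x) <= 1] for x in states}
f = {(x, y): 1.0 - 0.1 * abs(x - y) - 0.05 * x for x in states for y in G[x]}

def T(v):
    return {x: max(f[(x, y)] + beta * v[y] for y in G[x]) for x in states}

def sup_norm_diff(v, w):
    return max(abs(v[x] - w[x]) for x in states)

random.seed(0)
v = {x: random.uniform(-5, 5) for x in states}
w = {x: v[x] + random.uniform(0, 3) for x in states}     # w >= v pointwise
c = 2.5

monotone = all(T(w)[x] >= T(v)[x] for x in states)       # Exercise 2.42
shift = all(abs(T({x: v[x] + c for x in states})[x] - (T(v)[x] + beta * c)) < 1e-12
            for x in states)                             # Exercise 2.94
contraction = sup_norm_diff(T(v), T(w)) <= beta * sup_norm_diff(v, w) + 1e-12
print(monotone, shift, contraction)                      # True True True
```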
2.95 Assume that 𝐹 is a compact subset of 𝐶(𝑋). Then 𝐹 is bounded (Proposition
1.1). To show that 𝐹 is equicontinuous, choose 𝜖 > 0. 𝐹 is totally bounded (Exercise
1.113), so that there exists a finite set of functions {𝑓1, 𝑓2, . . . , 𝑓𝑛} in 𝐹 such that
    min_{𝑘=1,...,𝑛} ∥𝑓 − 𝑓𝑘∥ ≤ 𝜖/3
Each 𝑓𝑘 is uniformly continuous (Exercise 2.91), so that there exists 𝛿𝑘 > 0 such that
    𝜌(𝑥, 𝑥0) ≤ 𝛿𝑘 =⇒ 𝜌(𝑓𝑘(𝑥), 𝑓𝑘(𝑥0)) < 𝜖/3
Let 𝛿 = min{𝛿1 , 𝛿2 , . . . , 𝛿𝑘 }. Given any 𝑓 ∈ 𝐹 , let 𝑘 be such that ∥𝑓 − 𝑓𝑘 ∥ < 𝜖/3. Then
for any 𝑥, 𝑥0 ∈ 𝑋, 𝜌(𝑥, 𝑥0 ) ≤ 𝛿 implies
    𝜌(𝑓(𝑥), 𝑓(𝑥0)) ≤ 𝜌(𝑓(𝑥), 𝑓𝑘(𝑥)) + 𝜌(𝑓𝑘(𝑥), 𝑓𝑘(𝑥0)) + 𝜌(𝑓𝑘(𝑥0), 𝑓(𝑥0)) < 𝜖/3 + 𝜖/3 + 𝜖/3 = 𝜖
for every 𝑓 ∈ 𝐹 . Therefore, 𝐹 is equicontinuous.
Conversely, assume that 𝐹 ⊆ 𝐶(𝑋) is closed, bounded and equicontinuous. Let (𝑓𝑛 )
be a bounded equicontinuous sequence of functions in 𝐹 . We show that (𝑓𝑛 ) has a
convergent subsequence.
1. First, we show that for any 𝜖 > 0, there is exists a subsequence (𝑓𝑚 ) such that
∥𝑓𝑚 − 𝑓𝑚′ ∥ < 𝜖 for every 𝑓𝑚 , 𝑓𝑚′ in the subsequence. Since the functions are
equicontinuous, there exists 𝛿 > 0 such that
    𝜌(𝑓𝑛(𝑥), 𝑓𝑛(𝑥0)) < 𝜖/3
for every 𝑥, 𝑥0 in 𝑋 with 𝜌(𝑥, 𝑥0 ) ≤ 𝛿. Since 𝑋 is compact, it is totally bounded
(Exercise 1.113). That is, there exist a finite number of open balls 𝐵𝛿 (𝑥𝑖 ),
𝑖 = 1, 2 . . . , 𝑘 which cover 𝑋. The sequence (𝑓𝑛 (𝑥1 ), 𝑓𝑛 (𝑥2 , . . . , 𝑓𝑛 (𝑥𝑘 )) is a
bounded sequence in ℜ𝑛 . By the Bolzano-Weierstrass theorem (Exercise 1.119),
this sequence has a convergent subsequence (𝑓𝑚 (𝑥1 ), 𝑓𝑚 (𝑥2 ), . . . , 𝑓𝑚 (𝑥𝑘 )) such
that 𝑓𝑚 (𝑥𝑖 ) − 𝑓𝑚′ (𝑥𝑖 ) < 𝜖/3 for 𝑖 and every 𝑓𝑚 , 𝑓𝑚′ in the subsequence. Consequently, for any 𝑥 ∈ 𝑋, there exists 𝑖 such that
    𝜌(𝑓𝑚(𝑥), 𝑓𝑚′(𝑥)) ≤ 𝜌(𝑓𝑚(𝑥), 𝑓𝑚(𝑥𝑖)) + 𝜌(𝑓𝑚(𝑥𝑖), 𝑓𝑚′(𝑥𝑖)) + 𝜌(𝑓𝑚′(𝑥𝑖), 𝑓𝑚′(𝑥)) < 𝜖/3 + 𝜖/3 + 𝜖/3 = 𝜖
That is, ∥𝑓𝑚 − 𝑓𝑚′ ∥ < 𝜖 for every 𝑓𝑚 , 𝑓𝑚′ in the subsequence.
2. Choose a ball 𝐵1 of radius 1 in 𝐶(𝑋) which contains infinitely many elements of
(𝑓𝑛 ). Applying step 1, there exists a ball 𝐵2 of radius 1/2 containing infinitely
many elements of (𝑓𝑛 ). Proceeding in this fashion, we obtain a nested sequence
𝐵1 ⊇ 𝐵2 ⊇ . . . of balls in 𝐶(𝑋) such that (a) 𝑑(𝐵𝑖 ) → 0 and (b) each 𝐵𝑖 contains
infinitely many terms of (𝑓𝑛 ). Choosing 𝑓𝑛𝑖 ∈ 𝐵𝑖 gives a convergent subsequence.
2.96 Let 𝑔 ∈ 𝐹 . Then for every 𝜖 > 0 there exists 𝛿 > 0 and 𝑓 ∈ 𝐹 such that
∥𝑓 − 𝑔∥ < 𝜖/3 and
    𝜌(𝑥, 𝑥0) ≤ 𝛿 =⇒ 𝜌(𝑓(𝑥), 𝑓(𝑥0)) < 𝜖/3
so that if 𝜌(𝑥, 𝑥0) ≤ 𝛿
    ∥𝑔(𝑥) − 𝑔(𝑥0)∥ ≤ ∥𝑓(𝑥) − 𝑔(𝑥)∥ + ∥𝑓(𝑥) − 𝑓(𝑥0)∥ + ∥𝑓(𝑥0) − 𝑔(𝑥0)∥ < 𝜖/3 + 𝜖/3 + 𝜖/3 = 𝜖
2.97 For every 𝑇 ⊆ 𝑌
𝜑− (𝑇 𝑐 ) = { 𝑥 ∈ 𝑋 : 𝜑(𝑥) ∩ 𝑇 𝑐 ∕= ∅ }
𝜑+ (𝑇 ) = { 𝑥 ∈ 𝑋 : 𝜑(𝑥) ⊆ 𝑇 }
For every x ∈ 𝑋 either 𝜑(𝑥) ⊆ 𝑇 or 𝜑(𝑥) ∩ 𝑇 𝑐 ∕= ∅ but not both. Therefore
𝜑+ (𝑇 ) ∪ 𝜑− (𝑇 𝑐 ) = 𝑋
𝜑+ (𝑇 ) ∩ 𝜑− (𝑇 𝑐 ) = ∅
That is
    𝜑+(𝑇) = [𝜑−(𝑇^𝑐)]^𝑐
2.98 Assume 𝑥 ∈ 𝜑−1(𝑇). Then 𝜑(𝑥) = 𝑇, 𝜑(𝑥) ⊆ 𝑇 and 𝑥 ∈ 𝜑+(𝑇). Now assume
𝑥 ∈ 𝜑+ (𝑇 ) so that 𝜑(𝑥) ⊆ 𝑇 . Consequently, 𝜑(𝑥) ∩ 𝑇 = 𝜑(𝑥) ∕= ∅ and 𝑥 ∈ 𝜑− (𝑇 ).
2.99 The respective inverses are:

              {𝑡1}      {𝑡2}        {𝑡1, 𝑡2}    {𝑡2, 𝑡3}    {𝑡1, 𝑡2, 𝑡3}
    𝜑2^{−1}   ∅         ∅           {𝑠1}        {𝑠2}        ∅
    𝜑2^{+}    ∅         ∅           {𝑠1}        {𝑠2}        {𝑠1, 𝑠2}
    𝜑2^{−}    {𝑠1}      {𝑠1, 𝑠2}    {𝑠1, 𝑠2}    {𝑠1, 𝑠2}    {𝑠1, 𝑠2}
2.100 Let 𝑇 be an open interval meeting 𝜑(1), that is 𝜑(1) ∩ 𝑇 ∕= ∅. Since 𝜑(1) = {1},
we must have 1 ∈ 𝑇 and therefore 𝜑(𝑥) ∩ 𝑇 ∕= ∅ for every 𝑥 ∈ 𝑋. Therefore 𝜑 is lhc at
𝑥 = 1. On the other hand, the open interval 𝑇 = (1/2, 3/2) contains 𝜑(1) but it does
not contain 𝜑(𝑥) for any 𝑥 > 1. Therefore, 𝜑 is not uhc at 𝑥 = 1.
2.101 Choose any open set 𝑇 ⊆ 𝑌 and 𝑥 ∈ 𝑋. Since 𝜑(𝑥) = 𝐾 = 𝜑(𝑥′ ) for every
𝑥, 𝑥′ ∈ 𝑋
∙ 𝜑(𝑥) ⊆ 𝑇 if and only if 𝜑(𝑥′ ) ⊆ 𝑇 for every 𝑥, 𝑥′ ∈ 𝑋
∙ 𝜑(𝑥) ∩ 𝑇 ∕= ∅ if and only if 𝜑(𝑥′ ) ∩ 𝑇 ∕= ∅ for every 𝑥, 𝑥′ ∈ 𝑋.
Consequently, 𝜑 is both uhc and lhc at all 𝑥 ∈ 𝑋.
2.102 First assume that the 𝜑 is uhc. Let 𝑇 be any open subset in 𝑌 and 𝑆 = 𝜑+ (𝑇 ).
If 𝑆 = ∅, it is open. Otherwise, choose 𝑥0 ∈ 𝑆 so that 𝜑(𝑥0 ) ⊆ 𝑇 . Since 𝜑 is uhc,
there exists a neighborhood 𝑆(𝑥0 ) such that 𝜑(𝑥) ⊆ 𝑇 for every 𝑥 ∈ 𝑆(𝑥0 ). That is,
𝑆(𝑥0 ) ⊆ 𝜑+ (𝑇 ) = 𝑆. This establishes that for every 𝑥0 ∈ 𝑆 there exist a neighborhood
𝑆(𝑥0 ) contained in 𝑆. That is, 𝑆 is open in 𝑋.
Conversely, assume that the upper inverse of every open set in 𝑌 is open in 𝑋. Choose
some 𝑥0 ∈ 𝑋 and let 𝑇 be an open set containing 𝜑(𝑥0 ). Let 𝑆 = 𝜑+ (𝑇 ). 𝑆 is an open
set containing 𝑥0 . That is, 𝑆 is a neighborhood of 𝑥0 with 𝜑(𝑥) ⊆ 𝑇 for every 𝑥 ∈ 𝑆.
Since the choice of 𝑥0 was arbitrary, we conclude that 𝜑 is uhc.
The lhc case is analogous.
2.103 Assume 𝜑 is uhc and let 𝑇 be any closed set in 𝑌. By Exercise 2.97
    𝜑−(𝑇) = [𝜑+(𝑇^𝑐)]^𝑐
𝑇 𝑐 is open. By the previous exercise, 𝜑+ (𝑇 𝑐 ) is open which implies that 𝜑− (𝑇 ) is
closed.
Conversely, assume 𝜑− (𝑇 ) is closed for every closed set 𝑇 . Let 𝑇 be an open subset of
𝑌 so that 𝑇 𝑐 is closed. Again by Exercise 2.97,
    𝜑+(𝑇) = [𝜑−(𝑇^𝑐)]^𝑐
By assumption 𝜑− (𝑇 𝑐 ) is closed and therefore 𝜑+ (𝑇 ) is open. By the previous exercise,
𝜑 is uhc.
The lhc case is analogous.
2.104 Assume that 𝜑 is uhc at 𝑥0 . We first show that (𝑦 𝑛 ) is bounded and hence has
a convergent subsequence. Since 𝜑(𝑥0 ) is compact, there exists a bounded open set 𝑇
containing 𝜑(𝑥0 ). Since 𝜑 is uhc, there exists a neighborhood 𝑆 of 𝑥0 such that 𝜑(𝑥) ⊆
𝑇 for 𝑥 ∈ 𝑆. Since 𝑥𝑛 → 𝑥0 , there exists some 𝑁 such that 𝑥𝑛 ∈ 𝑆 for every 𝑛 ≥ 𝑁 .
Consequently, 𝜑(𝑥𝑛 ) ⊆ 𝑇 for every 𝑛 ≥ 𝑁 and therefore 𝑦 𝑛 ∈ 𝑇 for every 𝑛 ≥ 𝑁 .
The sequence 𝑦 𝑛 is bounded and hence has a convergent subsequence 𝑦 𝑚 → 𝑦0 .
To complete the proof, we have to show that 𝑦0 ∈ 𝜑(𝑥0). Assume not, that is assume that 𝑦0 ∉ 𝜑(𝑥0). Then, there exists an open set 𝑇 containing 𝜑(𝑥0) such that 𝑦0 ∉ 𝑇
(Exercise 1.93). Since 𝜑 is uhc, there exists 𝑁 such that 𝜑(𝑥𝑛 ) ⊆ 𝑇 for every 𝑛 ≥ 𝑁 .
This implies that 𝑦 𝑚 ∈ 𝑇 for every 𝑚 ≥ 𝑁 . Since 𝑦 𝑚 → 𝑦0 , we conclude that 𝑦0 ∈ 𝑇 ,
contradicting the specification of 𝑇 .
Conversely, suppose that for every sequence 𝑥𝑛 → 𝑥0 , 𝑦 𝑛 ∈ 𝜑(𝑥𝑛 ), there is a subsequence of 𝑦 𝑚 → 𝑦0 ∈ 𝜑(𝑥0 ). Suppose that 𝜑 is not uhc at 𝑥0 . That is, there exists
an open set 𝑇 ⊇ 𝜑(𝑥0 ) such that every neighborhood contains some 𝑥 with 𝜑(𝑥) ∕⊆ 𝑇 .
From the sequence of neighborhoods 𝐵1/𝑛(𝑥0), we can construct a sequence 𝑥𝑛 → 𝑥0 and 𝑦𝑛 ∈ 𝜑(𝑥𝑛) but 𝑦𝑛 ∉ 𝑇. Such a sequence cannot have a subsequence which converges to 𝑦0 ∈ 𝜑(𝑥0), contradicting the hypothesis. We conclude that 𝜑 must be uhc at 𝑥0.
2.105 Assume that 𝜑 is lhc. Let 𝑥𝑛 be a sequence converging to 𝑥0 and 𝑦0 ∈ 𝜑(𝑥0 ).
Consider the sequence of open balls 𝐵1/𝑚 (𝑦0 ), 𝑚 = 1, 2, . . . . Note that every 𝐵1/𝑚 (𝑦0 )
meets 𝜑(𝑥0 ). Since 𝜑 is lhc, there exists a sequence (𝑆 𝑚 ) of neighborhoods of 𝑥0 such
that 𝜑(𝑥) ∩ 𝐵1/𝑚 ∕= ∅ for every 𝑥 ∈ 𝑆 𝑚 . Since 𝑥𝑛 → 𝑥, for every 𝑚, there exists some
𝑁𝑚 such that 𝑥𝑛 ∈ 𝑆𝑚 for every 𝑛 ≥ 𝑁𝑚 . Without loss of generality, we can assume
that 𝑁1 < 𝑁2 < 𝑁3 . . . . We can now construct the desired sequence 𝑦 𝑛 . For each
𝑛 = 1, 2, . . . , choose 𝑦 𝑛 in the set 𝜑(𝑥𝑛 ) ∩ 𝐵 1/m where 𝑁𝑚 ≤ 𝑛 ≤ 𝑁𝑚+1 since
𝑛 ≥ 𝑁𝑚 =⇒ 𝑥𝑛 ∈ 𝑆𝑚 =⇒ 𝜑(𝑥𝑛 ) ∩ 𝐵1/𝑚 ∕= ∅
Since 𝑦 𝑛 ∈ 𝐵 1/m (𝑦0 ), the sequence (𝑦 𝑛 ) converges to 𝑦0 and 𝑛 → ∞.
Conversely, assume that 𝜑 is not lhc at 𝑥0 , that is there exists an open set 𝑇 with
𝑇 ∩ 𝜑(𝑥0 ) ∕= ∅ such that every neighborhood 𝑆 ∋ 𝑥0 contains some 𝑥 with 𝜑(𝑥) ∩ 𝑇 = ∅.
Therefore, there exists a sequence 𝑥𝑛 → 𝑥 with 𝜑(𝑥)∩𝑇 = ∅. Choose any 𝑦0 ∈ 𝜑(𝑥0 )∩𝑇 .
By assumption, there exists a sequence 𝑦 𝑛 → 𝑦 with 𝑦 𝑛 ∈ 𝜑(𝑥𝑛 ). Since 𝑇 is open and
𝑦0 ∈ 𝑇 , there exists some 𝑁 such that 𝑦 𝑛 ∈ 𝑇 for all 𝑛 ≥ 𝑁 , for which 𝜑(𝑦 𝑛 ) ∩ 𝑇 ∕= ∅.
This contradiction establishes that 𝜑 is lhc at 𝑥0 .
90
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
2.106
1. Assume 𝜑 is closed. For any 𝑥 ∈ 𝑋, let (𝑦 𝑛 ) be a sequence in 𝜑(𝑥). Since
𝜑 is closed, 𝑦 𝑛 → 𝑦 ∈ 𝜑(𝑥). Therefore 𝜑(𝑥) is closed.
2. Assume 𝜑 is closed-valued and uhc. Choose any (𝑥, 𝑦) ∈
/ graph(𝜑). Since 𝜑(𝑥) is
closed, there exist disjoint open sets 𝑇1 and 𝑇2 in 𝑌 such that 𝑦 ∈ 𝑇1 and 𝜑(𝑥) ⊆
𝑇2 (Exercise 1.93). Since 𝜑 is uhc, 𝜑+ (𝑇2 ) is a neighborhood of 𝑥. Therefore
𝜑+ (𝑇2 ) × 𝑇1 is a neighborhood of (𝑥, 𝑦) disjoint from graph(𝜑). Therefore the
complement of graph(𝜑) is open, which implies that graph(𝜑) is closed.
3. Since 𝜑 is closed and 𝑌 compact, 𝜑 is compact-valued. Let (𝑥𝑛 ) → 𝑥 be a
sequence in 𝑋 and (𝑦 𝑛 ) a sequence in 𝑌 with 𝑦 𝑛 ∈ 𝜑(𝑥𝑛 ). Since 𝑌 is compact,
there exists a subsequence 𝑦 𝑚 → 𝑦. Since 𝜑 is closed, 𝑦 ∈ 𝜑(𝑥). Therefore, by
Exercise 2.104, 𝜑 is uhc.
2.107 Assume 𝜑 is closed-valued and uhc. Then 𝜑 is closed (Exercise 2.106). Conversely, if 𝜑 is closed, then 𝜑(𝑥) is closed for every 𝑥 (Exercise 2.106). If 𝑌 is compact,
then 𝜑 is compact-valued (Exercise 1.110). By Exercise 2.104, 𝜑 is uhc.
2.108 𝜑1 is closed-valued (Exercise 2.106). Similarly, 𝜑2 is closed-valued (Proposition
1.1). Therefore, for every 𝑥 ∈ 𝑋, 𝜑(𝑥) = 𝜑1 (𝑥) ∩ 𝜑2 (𝑥) is closed (Exercise 1.85) and
hence compact (Exercise 1.110). Hence 𝜑 is compact-valued.
Now, for any 𝑥0 ∈ 𝑋, let 𝑇 be an open neighborhood of 𝜑(𝑥0 ). We need to show that
there is a neighborhood 𝑆 of 𝑥0 such that 𝜑(𝑆) ⊆ 𝑇 .
Case 1 𝑇 ⊇ 𝜑2 (𝑥0 ): Since 𝜑2 is uhc, there exists a neighborhood such that 𝑆 ∋ 𝑥0
such that 𝜑2 (𝑆) ⊆ 𝑇 which implies that 𝜑(𝑆) ⊆ 𝜑2 (𝑆) ⊆ 𝑇
Case 2 𝑇 ∕⊇ 𝜑2 (𝑥0 ): Let 𝐾 = 𝜑2 (𝑥0 ) ∖ 𝑇 ∕= ∅. For every 𝑦 ∈ 𝐾, there exist neighborhoods 𝑆𝑦 (𝑥0 ) and 𝑇 (𝑦) such that 𝜑1 (𝑆𝑦 (𝑥0 )) ∩ 𝑇 (𝑦) = ∅ (Exercise 1.93). The sets
𝑇 (𝑦) constitute an open covering of 𝐾. Since 𝐾 is compact, there exists a finite
subcover, that is there exists a finite number of elements 𝑦1 , 𝑦2 , . . . 𝑦𝑛 such that
𝑛
∪
𝐾⊆
𝑇 (𝑦𝑖 )
𝑖=1
∪𝑛
Let 𝑇 (𝐾) denote 𝑖=1 𝑇 (𝑦𝑖 ). Note that 𝑇 ∪𝑇 (𝐾) is an open set containing 𝜑2 (𝑥0 ).
Since 𝜑2 is uhc, there exists a neighborhood 𝑆 ′ (𝑥0 ) such that 𝜑2 (𝑆 ′ (𝑥0 )) ⊆ 𝑇 ∪
𝑇 (𝐾). Let
𝑆(𝑥0 ) =
𝑛
∩
𝑆𝑦𝑖 (𝑥0 ) ∩ 𝑆 ′ (𝑥0 )
𝑖=1
𝑆(𝑥0 ) is an open neighborhood of 𝑥0 for which
𝜑1 (𝑆(𝑥0 )) ∩ 𝑇 (𝐾) = ∅ and 𝜑2 (𝑆(𝑥0 )) ⊆ 𝑇 ∪ 𝑇 (𝐾)
from which we conclude that
𝜑(𝑆(𝑥0 )) = 𝜑1 (𝑆(𝑥0 )) ∩ 𝜑2 (𝑆(𝑥0 )) ⊆ 𝑇
∑𝑛
2.109
1. Let x ∈ 𝑋(p, 𝑚) ∩ 𝑇 . Then x ∈ 𝑋(p, 𝑚) and 𝑖=1 𝑝𝑖 𝑥𝑖 ≤ 𝑚. Since 𝑇 is
open, there exists 𝛼 < 1 such that x̃ = 𝛼x ∈ 𝑇 and
𝑛
∑
𝑖=1
𝑝𝑖 𝑥˜𝑖 = 𝛼
𝑛
∑
𝑝𝑖 𝑥𝑖 <
𝑖=1
91
𝑛
∑
𝑖=1
𝑝𝑖 𝑥𝑖 ≤ 𝑚
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
2. (a) Suppose that 𝑋(p, 𝑚) is not lhc. Then for every neighborhood 𝑆 of (p, 𝑚),
there exists (p′ , 𝑚′ ) ∈ 𝑆 such that 𝑋(p′ , 𝑚′ ) ∩ 𝑇 = ∅. In particular, for
every open ball 𝐵𝑛 (p, 𝑚), there exists a point (p𝑛 , 𝑚𝑛 ) ∈ 𝐵𝑛 (p, 𝑚) such
that 𝑋(p𝑛 , 𝑚𝑛 ) ∩ 𝑇 = ∅. ((p𝑛 , 𝑚𝑛 )) is the required sequence.
(b) By construction, ∥p𝑛 − p∥ < 1/𝑛 → 0 which implies that 𝑝𝑛𝑖 → 𝑝𝑖 for every
𝑖. Therefore (Exercise 1.202)
∑
∑
˜𝑖 →
𝑝𝑖 𝑥˜𝑖 < 𝑚 and 𝑚𝑛 → 𝑚
𝑝𝑛𝑖 𝑥
and therefore there exists 𝑁 such that
∑
˜ 𝑖 < 𝑚𝑁
𝑝𝑁
𝑖 𝑥
which implies that
x̃ ∈ 𝑋(p𝑁 , 𝑚𝑁 )
(c) Also by construction 𝑋(p𝑁 , 𝑚𝑁 ) ∩ 𝑇 = ∅ which implies 𝑋(p𝑁 , 𝑚𝑁 ) ⊆ 𝑇 𝑐
and therefore
x̃ ∈ 𝑋(p𝑛 , 𝑚𝑛 ) =⇒ x̃ ∈
/𝑇
The assumption that 𝑋(p, 𝑚) is not lhc at (p, 𝑚) implies that x̃ ∈
/ 𝑇 , contradicting the conclusion in part 1 that x̃ ∈ 𝑇 .
3. This contradiction establishes that (p, 𝑚) is lhc at (p, 𝑚). Since the choice of
(p, 𝑚) was arbitrary, we conclude that the budget correspondence 𝑋(p, 𝑚) is lhc
for all (p, 𝑚) ∈ 𝑃 (assuming 𝑋 = ℜ𝑛+ ).
4. In the previous example (Example 2.89), we have shown that 𝑋(p, 𝑚) is uhc.
Hence, ∑
the budget correspondence is continuous for all (p, 𝑚) such that 𝑚 >
𝑚
inf x∈𝑋 𝑖=1 𝑝𝑖 𝑥𝑖 .
2.110 We give two alternative proofs.
Proof 1 Let 𝒞 = {𝑆} be an open cover of 𝜑(𝐾). For every 𝑥 ∈ 𝐾, 𝜑(𝑥) ⊆ 𝜑(𝐾) is
compact and hence can be covered by a finite number of the sets 𝑆 ∈ 𝒞. Let
𝑆𝑥 denote the union of the finite cover of 𝜑(𝑥). Since 𝜑 is uhc, every 𝜑+ (𝑆𝑥 )
is open in 𝑋. Therefore { 𝜑+ (𝑆𝑥 ) : 𝑥 ∈ 𝐾 } is an open covering of 𝐾. If 𝐾 is
compact, it contains an finite covering { 𝜑+ (𝑆𝑥1 ), 𝜑+ (𝑆𝑥2 ), . . . , 𝜑+ (𝑆𝑥𝑛 ) }. The
sets 𝑆𝑥1 , 𝑆𝑥2 , . . . , 𝑆𝑥𝑛 are a finite subcovering of 𝜑(𝐾).
Proof 2 Let (𝑦 𝑛 ) be a sequence in 𝜑(𝐾). We have to show that (𝑦 𝑛 ) has a convergent
subsequence with a limit in 𝜑(𝐾). For every 𝑦 𝑛 , there is an 𝑥𝑛 with 𝑦 𝑛 ∈
𝜑(𝑥𝑛 ). Since 𝐾 is compact, the sequence (𝑥𝑛 ) has a convergent subsequence
𝑥𝑚 → 𝑥 ∈ 𝐾. Since 𝜑 is uhc, the sequence (𝑦 𝑚 ) has a subsequence (𝑦 𝑝 ) which
converges to 𝑦 ∈ 𝜑(𝑥) ⊆ 𝜑(𝐾). Hence the original sequence (𝑦 𝑛 ) has a convergent
subsequence.
2.111 The sets 𝑋, 𝜑(𝑋), 𝜑2 (𝑋), . . . form a sequence of nonempty compact sets. Since
𝜑(𝑋) ⊆ 𝑋, 𝜑2 (𝑋) ⊆ 𝜑(𝑋) and so on, the sequence of sets 𝜑𝑛 𝑋 is decreasing. Let
𝐾=
∞
∩
𝜑𝑛 (𝑋)
𝑛=1
By the nested intersection theorem (Exercise 1.117), 𝐾 ∕= ∅. Since 𝐾 ⊆ 𝜑𝑛−1 (𝑋),
𝜑(𝐾) ⊆ 𝜑𝑛 (𝑋) for every 𝑛, which implies that 𝜑(𝐾) ⊆ 𝐾.
92
Solutions for Foundations of Mathematical Economics
To show that 𝐾
that 𝑦 ∈ 𝜑(𝑥𝑛 ).
𝑥𝑚 ∈ 𝜑𝑚 (𝑋) for
(Exercise 2.107),
c 2001 Michael Carter
⃝
All rights reserved
⊆ 𝜑(𝐾), let 𝑦 ∈ 𝐾. For every 𝑛 there exists an 𝑥𝑛 ∈ 𝜑𝑛 (𝑋) such
Since 𝑋 is compact, there exists a subsequence 𝑥𝑚 → 𝑥0 . Since
every 𝑚, 𝑥0 ∈ 𝐾. The sequence (𝑥𝑚 , 𝑦) → (𝑥0 , 𝑦). Since 𝜑 is closed
𝑦 ∈ 𝜑(𝑥0 ). Therefore 𝑦 ∈ 𝜑(𝐾) which implies that 𝐾 ⊆ 𝜑(𝐾).
2.112 𝜑(𝑥) is compact for every 𝑥 ∈ 𝑋 by Tychonoff’s theorem (Proposition 1.2).
Let 𝑥𝑘 → 𝑥 be a sequence in 𝑋 and let 𝑦 𝑘 = (𝑦1𝑘 , 𝑦2𝑘 , . . . , 𝑦𝑛𝑘 ) with 𝑦𝑖𝑘 ∈ 𝜑(𝑥𝑘 ) be
a corresponding sequence of points in 𝑌 . For each 𝑦𝑖𝑘 , 𝑖 = 1, 2, . . . , 𝑛, there exists a
′
subsequence 𝑦𝑖𝑘 → 𝑦𝑖 with 𝑦𝑖 ∈ 𝜑𝑖 (𝑥) (Exercise 2.104). Therefore 𝑦 = (𝑦1 , 𝑦2 , . . . , 𝑦𝑛 ) ∈
𝜑(𝑥) which implies that 𝜑 is uhc.
2.113 Let 𝑣 ∈ 𝐶(𝑋). For every x ∈ 𝑋, the maximand 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦) is a continuous
function on a compact set 𝐺(𝑥). Therefore the supremum is attained, and max can
replace sup in the definition of the operator 𝑇 (Theorem 2.2). 𝑇 𝑣 is the value function
for the constrained optimization problem
max { 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦) }
𝑦∈𝐺(𝑥)
satisfying the requirements of the continuous maximum theorem (Theorem 2.3), which
ensures that 𝑇 𝑣 is continuous on 𝑋. We have previously shown that 𝑇 𝑣 is bounded
(Exercise 2.18). Therefore 𝑇 𝑣 ∈ 𝐶(𝑋).
2.114
1. 𝑆 has a least upper bound since 𝑋 is a complete lattice. Let 𝑠∗ = sup 𝑆.
Then 𝑆 ∗ = ≿(𝑠∗ ) is a complete sublattice of 𝑋 (Exercise 1.48).
2. For every 𝑠 ∈ 𝑆, 𝑠 ≾ 𝑠∗ and since 𝑓 is increasing and 𝑠 is a fixed point
𝑠 = 𝑓 (𝑠) ≾ 𝑓 (𝑠∗ )
Therefore 𝑓 (𝑠∗ ) ∈ 𝑆 ∗ . (𝑓 (𝑠∗ ) is an upper bound of 𝑆). Again, since 𝑓 is increasing, this implies that 𝑓 (𝑥) ≿ 𝑓 (𝑠∗ ) for every 𝑥 ∈ 𝑆 ∗ . Therefore 𝑓 (𝑆 ∗ ) ⊆ 𝑆 ∗ .
3. Let 𝑔 be the restriction of 𝑓 to the sublattice 𝑆 ∗ . Since 𝑓 (𝑆 ∗ ) ⊆ 𝑆 ∗ , 𝑔 is an
increasing function on a complete lattice. Applying Theorem 2.4, 𝑔 has a smallest
fixed point 𝑥
˜.
4. 𝑥
˜ is a fixed point of 𝑓 , that is 𝑥
˜ ∈ 𝐸. Furthermore, 𝑥˜ ∈ 𝑆 ∗ . Therefore 𝑥˜ is
an upper bound for 𝑆 in 𝐸. Moreover, 𝑥
˜ is the smallest fixed point of 𝑓 in 𝑆 ∗ .
Therefore, 𝑥
˜ is the least upper bound of 𝑆 in 𝐸.
5. By Exercise 1.47, this implies that 𝐸 is a complete lattice.
In Example 2.91, if 𝑆 = {(2, 1), (1, 2)}, 𝑆 ∗ = {(2, 2), (3, 2), (2, 3), (3, 3)} and 𝑥˜ = (3, 3).
2.115
1. For every 𝑥 ∈ 𝑀 , there exists some 𝑦𝑥 ∈ 𝜑(𝑥) such that 𝑦𝑥 ≾ 𝑥. Moreover,
𝑥) such that
since 𝜑 is increasing and 𝑥
˜ ≾ 𝑥, there exists some 𝑧𝑥 ∈ 𝜑(˜
𝑧𝑥 ≾ 𝑦𝑥 ≾ 𝑥 for every 𝑥 ∈ 𝑀
2. Let 𝑧˜ = inf{𝑧𝑥 }
˜.
(a) Since 𝑧𝑥 ≾ 𝑥 for every 𝑥 ∈ 𝑀 , 𝑧˜ = inf{𝑧𝑥 } ≾ inf{𝑥} = 𝑥
(b) Since 𝜑(˜
𝑥) is a complete sublattice of 𝑋, 𝑧˜ = inf{𝑧𝑥 } ∈ 𝜑(˜
𝑥).
3. Therefore, 𝑥
˜ ∈ 𝑀.
4. Since 𝑧˜ ≾ 𝑥˜ and 𝜑 is increasing, there exists some 𝑦 ∈ 𝜑(˜
𝑧 ) such that
𝑦 ≾ 𝑧˜ ∈ 𝜑(˜
𝑥)
Hence 𝑧˜ ∈ 𝑀 .
93
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
5. This implies that 𝑥
˜ ≾ 𝑧˜. Therefore
𝑥
˜ = 𝑧˜ ∈ 𝜑(˜
𝑥)
𝑥
˜ is a fixed point of 𝜑.
6. Since 𝐸 ⊆ 𝑀 , 𝑥˜ = inf 𝑀 is the least fixed point of 𝜑.
2.116
1. Let 𝑆 ⊆ 𝐸 and 𝑠∗ = sup 𝑆. For every 𝑥 ∈ 𝑆, 𝑥 ∈ 𝜑(𝑥). Since 𝜑 is
increasing, there exists some 𝑧𝑥 ∈ 𝜑(𝑠∗ ) such that 𝑧𝑥 ≿ 𝑥.
2. Let 𝑧 ∗ = sup 𝑧𝑥 . Then
(a) Since 𝑧𝑥 ≿ 𝑥 for every 𝑥 ∈ 𝑆, 𝑧 ∗ = sup 𝑧𝑥 ≿ sup 𝑥 = 𝑠∗
(b) 𝑧 ∗ ∈ 𝜑(𝑠∗ ) since 𝜑(𝑠∗ ) is a complete sublattice.
3. Define
𝑆 ∗ = { 𝑥 ∈ 𝑋 : 𝑥 ≿ 𝑠 for every 𝑠 ∈ 𝑆 }
𝑆 ∗ is the set of all upper bounds of 𝑆 in 𝑋. Then 𝑆 ∗ is a complete lattice, since
𝑆 ∗ = ≿(𝑠∗ )
4. Let 𝜇 : 𝑆 ∗ ⇉ 𝑆 ∗ be the correspondence
𝜇(𝑥) = 𝜑(𝑥) ∩ 𝜓(𝑥)
where 𝜓 : 𝑆 ∗ ⇉ 𝑆 ∗ is the constant correspondence defined by 𝜓(𝑥) = 𝑆 ∗ for every 𝑥 ∈
𝑆 ∗ . Then
(a) Since 𝜑 is increasing, for every 𝑥 ≿ 𝑠∗ , there exists some 𝑦𝑥 ∈ 𝜑(𝑥) such
that 𝑦𝑥 ≿ 𝑠∗ . Therefore 𝜇(𝑥) ∕= ∅ for every 𝑥 ∈ 𝑆 ∗ .
(b) Both 𝜑(𝑥) and 𝜓(𝑥) are complete sublattices for every 𝑥 ∈ 𝑆 ∗ . Therefore
𝜇(𝑥) is a complete sublattice for every 𝑥 ∈ 𝑆 ∗ .
(c) Since both 𝜑 and 𝜓 are increasing on 𝑆 ∗ , 𝜇 is increasing on 𝑆 ∗ (Exercise
2.47).
5. By the previous exercise, 𝜇 has a least fixed point 𝑥˜.
6. 𝑥
˜ ∈ 𝑆 ∗ is an upper bound of 𝑆. Therefore 𝑥
˜ is the least upper bound of 𝑆 in 𝐸.
7. By the previous exercise, 𝐸 has a least element. Since we have shown every
subset 𝑆 ⊆ 𝐸 has a least upper bound, this establishes that 𝐸 is complete lattice
(Exercise 1.47).
2.117 For any 𝑖, let a1−𝑖 , a2−𝑖 ∈ 𝐴−𝑖 with a2−𝑖 ≿ a1−𝑖 . Let 𝑎
¯1𝑖 = 𝑓 (a1−𝑖 ) and 𝑎
¯2𝑖 = 𝑓 (a2−𝑖 ).
2
1
1
1
We want to show that 𝑎
¯𝑖 ≿ 𝑎
¯𝑖 . Since 𝑎
¯𝑖 ∈ 𝐵(a−𝑖 ) and 𝐵(a−𝑖 ) is increasing, there
¯1𝑖 . (Exercise 2.44). Therefore
exists some 𝑎𝑖 ∈ 𝐵(a2−𝑖 ) such that 𝑎𝑖 ≿ 𝑎
sup 𝐵(a−𝑖 ) = 𝑎
¯2𝑖 ≿ 𝑎𝑖 ≿ 𝑎
¯1𝑖
𝑓¯𝑖 is increasing.
2.118 For any player 𝑖, their best response correspondence 𝐵𝑖 (a−𝑖 ) is
1. increasing by the monotone maximum theorem (Theorem 2.1).
2. a complete sublattice of 𝐴𝑖 for every a−𝑖 ∈ 𝐴−𝑖 (Corollary 2.1.1).
94
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
The joint best response correspondence
𝐵(a) = 𝐵1 (a−1 ) × 𝐵2 (a−2 ) × ⋅ ⋅ ⋅ × 𝐵𝑛 (a−𝑛 )
is also
1. increasing (Exercise 2.46)
2. a complete sublattice of 𝐴 for every a ∈ 𝐴
Therefore, the best response correspondence 𝐵(a) satisfies the conditions of Zhou’s
theorem, which implies that the set 𝐸 of fixed points of 𝐵 is a nonempty complete
lattice. 𝐸 is precisely the set of Nash equilibria of the game.
2.119 In proving the theorem, we showed that
𝜌(𝑥𝑛 , 𝑥𝑛+𝑚 ) ≤
𝛽𝑛
𝜌(𝑥0 , 𝑥1 )
1−𝛽
for every 𝑚, 𝑛 ≥ 0. Letting 𝑚 → ∞, 𝑥𝑛+𝑚 → 𝑥 and therefore
𝜌(𝑥𝑛 , 𝑥) ≤
𝛽𝑛
𝜌(𝑥0 , 𝑥1 )
1−𝛽
Similarly, for every 𝑛, 𝑚 ≥ 0
𝜌(𝑥𝑛 , 𝑥𝑛+𝑚 ) ≤ 𝜌(𝑥𝑛 , 𝑥𝑛+1 ) + 𝜌(𝑥𝑛+1 , 𝑥𝑛+2 ) + ⋅ ⋅ ⋅ + 𝜌(𝑥𝑛+𝑚−1 , 𝑥𝑛+𝑚 )
≤ (𝛽 + 𝛽 2 + ⋅ ⋅ ⋅ + 𝛽 𝑚 )𝜌(𝑥𝑛−1 , 𝑥𝑛 )
≤
𝛽(1 − 𝛽 𝑚 )
𝜌(𝑥𝑛−1 , 𝑥𝑛 )
1−𝛽
Letting 𝑚 → ∞, 𝑥𝑛+𝑚 → 𝑥 and 𝛽 𝑚 → 0 so that
𝜌(𝑥𝑛 , 𝑥) ≤
𝛽
𝜌(𝑥𝑛−1 , 𝑥𝑛 )
1−𝛽
2.120 First observe that 𝑓 (𝑥) ≥ 1 for every 𝑥 ≥ 1. Therefore 𝑓 : 𝑋 → 𝑋. For any
𝑥, 𝑧 ∈ 𝑋
𝑥 − 𝑦 + 𝑥2 −
𝑓 (𝑥) − 𝑓 (𝑦)
=
𝑥−𝑦
2(𝑥 − 𝑦)
Since
1
𝑥𝑦
=
1
1
−
2 𝑥𝑦
≤ 1 for all 𝑥, 𝑦 ∈ 𝑋
−
so that
2
𝑦
𝑓 (𝑥) − 𝑓 (𝑦)
1
1
≤
≤
2
𝑥−𝑦
2
𝑓 (𝑥) − 𝑓 (𝑦) ∣𝑓 (𝑥) − 𝑓 (𝑦)∣
1
=
≤
𝑥−𝑦
∣𝑥 − 𝑦∣
2
or
∣𝑓 (𝑥) − 𝑓 (𝑦)∣ ≤
𝑓 is a contraction on 𝑋 with modulus 1/2.
95
1
∣𝑥 − 𝑦∣
2
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
𝑋 is closed and hence complete (Exercise 1.107). Therefore, 𝑓 has a fixed point. That
is, there exists 𝑥0 ∈ 𝑋 such that
𝑥0 = 𝑓 (𝑥0 ) =
1
2
(𝑥0 + )
2
𝑥0
Rearranging
so that 𝑥0 =
2𝑥20 = 𝑥20 + 2 =⇒ 𝑥20 = 2
√
2.
Letting 𝑥0 = 2
𝑥1 =
3
1
(2 + 1) =
2
2
Using the error bounds in Corollary 2.5.1,
√
𝛽𝑛
𝜌(𝑥𝑛 , 2) ≤
𝜌(𝑥0 , 𝑥1 )
1−𝛽
(1/2)𝑛
=
1/2
1/2
1
= 𝑛
2
1
< 0.001
=
1024
when 𝑛 = 10. Therefore, we conclude that 10 iterations are ample to reduce the
error below 0.001. Actually, with experience, we can refine this a priori estimate. In
Example 1.64, we calculated the first five terms of the sequence to be
(2, 1.5, 1.416666666666667, 1.41421568627451, 1.41421356237469)
We observe that
𝜌(𝑥3 , 𝑥4 ) = 1.41421568627451 − 1.41421356237469) = 0.0000212389982
so that using the second inequality of Corollary 2.5.1
𝜌(𝑥4 ,
√
2) ≤
1/2
0.0000212389982 < 0.001
1/2
𝑥4 = 1.41421356237469 is the desired approximation.
2.121 Choose any 𝑥0 ∈ 𝑆. Define the sequence 𝑥𝑛 = 𝑓 (𝑥𝑛 ) = 𝑓 𝑛 (𝑥0 ). Then (𝑥𝑛 ) is a
Cauchy sequence in 𝑆 converging to 𝑥. Since 𝑆 is closed, 𝑥 ∈ 𝑆.
2.122 By the Banach fixed point theorem, 𝑓 𝑁 has a unique fixed point 𝑥. Let 𝛽 be the
Lipschitz constant of 𝑓 𝑁 . We have to show
𝑥 is a fixed point of 𝑓
𝜌(𝑓 (𝑥), 𝑥) = 𝜌(𝑓 (𝑓 𝑁 (𝑥), 𝑓 𝑁 (𝑥)) = 𝜌(𝑓 𝑁 (𝑓 (𝑥), 𝑓 𝑁 (𝑥)) ≤ 𝛽𝜌(𝑓 (𝑥), 𝑥)
Since 𝛽 < 1, this implies that 𝜌(𝑓 (𝑥), 𝑥) = 0 or 𝑓 (𝑥) = 𝑥.
𝑥 is the only fixed point of 𝑓 Suppose 𝑧 = 𝑓 (𝑧) is another fixed point of 𝑓 . Then
𝑧 is a fixed point of 𝑓 𝑁 and
𝜌(𝑥, 𝑧) = 𝜌(𝑓 𝑁 (𝑥), 𝑓 𝑁 (𝑧)) ≤ 𝛽𝜌(𝑥, 𝑧)
which implies that 𝑥 = 𝑧.
96
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
2.123 By the Banach fixed point theorem, for every 𝜃 ∈ Θ, there exists 𝑥𝜃 ∈ 𝑋 such
that 𝑓𝜃 (𝑥𝜃 ) = 𝑥𝜃 . Choose any 𝜃0 ∈ Θ.
𝜌(𝑥𝜃 , 𝑥𝜃0 ) = 𝜌(𝑓𝜃 (𝑥𝜃 ), 𝑓𝜃0 (𝑥𝜃0 ))
≤ 𝜌(𝑓𝜃 (𝑥𝜃 ), 𝑓𝜃 (𝑥𝜃0 )) + 𝜌(𝑓𝜃 (𝑥𝜃0 ), 𝑓𝜃0 (𝑥𝜃0 ))
≤ 𝛽𝜌(𝑥𝜃 , 𝑥𝜃0 ) + 𝜌(𝑓𝜃 (𝑥𝜃0 ), 𝑓𝜃0 (𝑥𝜃0 ))
(1 − 𝛽)𝜌(𝑥𝜃 , 𝑥𝜃0 ) ≤ 𝜌(𝑓𝜃 (𝑥𝜃0 ), 𝑓𝜃0 (𝑥𝜃0 ))
𝜌(𝑥𝜃 , 𝑥𝜃0 ) ≤
𝜌(𝑓𝜃 (𝑥𝜃0 ), 𝑓𝜃0 (𝑥𝜃0 ))
→0
(1 − 𝛽)
as 𝜃 → 𝜃0 . Therefore 𝑥𝜃 → 𝑥𝜃0 .
2.124
1. Let x be a fixed point of 𝑓 . Then x satisfies
x = (𝐼 − 𝐴)x + c = x − 𝐴x + 𝑐
which implies that 𝐴x = 𝑐.
2. For any x1 , x2 ∈ 𝑋
𝑓 (x1 ) − 𝑓 (x2 ) = (𝐼 − 𝐴)(x1 − x2 )
≤ ∥𝐼 − 𝐴∥ x1 − x2 Since 𝑎𝑖𝑖 = 1, the norm of 𝐼 − 𝐴 is
∥𝐼 − 𝐴∥ = max
𝑖
∑
∣𝑎𝑖𝑗 ∣ = 𝑘
𝑗∕=𝑖
and
𝑓 (x1 ) − 𝑓 (x2 ) ≤ 𝑘 x1 − x2 By the assumption of strict diagonal dominance, 𝑘 < 1. Therefore 𝑓 is a contraction and has a unique fixed point x.
2.125
1.
𝜑(𝑥) = { 𝑦 ∗ ∈ 𝐺(𝑥) : 𝑓 (𝑥, 𝑦 ∗ ) + 𝛽𝑣(𝑦 ∗ ) = 𝑣(𝑥) }
= {𝑦 ∗ ∈ 𝐺(𝑥) : 𝑓 (𝑥, 𝑦 ∗ ) + 𝛽𝑣(𝑦 ∗ ) = sup {𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦)}}
𝑦∈𝐺(𝑥)
∗
∗
∗
= {𝑦 ∈ 𝐺(𝑥) : 𝑓 (𝑥, 𝑦 ) + 𝛽𝑣(𝑦 ) ≥ 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦) for every 𝑦 ∈ 𝐺(𝑥)}
= arg max {𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦)}
𝑦∈𝐺(𝑥)
2. 𝜑(𝑥) is the solution correspondence of a standard constrained maximization problem, with 𝑥 as parameter and 𝑦 the decision variable. By assumption the maximand 𝑓 (𝑥, 𝑦) = 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦) is continuous and the constraint correspondence
𝐺(𝑥) is continuous and compact-valued. Applying the continuous maximum theorem (Theorem 2.3), 𝜑 is nonempty, compact-valued and uhc.
3. We have just shown that 𝜑(𝑥) is nonempty for every 𝑥 ∈ 𝑋. Starting at 𝑥0 ,
choose some 𝑥∗1 ∈ 𝜑(𝑥0 ). Then choose 𝑥∗2 ∈ 𝜑(𝑥∗1 ). Proceeding in this way,
we can construct a plan x∗ = 𝑥0 , 𝑥∗1 , 𝑥∗2 , . . . such that 𝑥∗𝑡+1 ∈ 𝜑(𝑥∗𝑡 ) for every
𝑡 = 0, 1, 2, . . . .
97
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
4. Since 𝑥∗𝑡+1 ∈ 𝜑(𝑥∗𝑡 ) for every 𝑡, x satisfies Bellman’s equation, that is
𝑣(𝑥∗𝑡 ) = 𝑓 (𝑥∗𝑡 , 𝑥∗𝑡+1 ) + 𝛽𝑣(𝑥∗𝑡+1 ),
𝑡 = 0, 1, 2, . . .
Therefore x is optimal (Exercise 2.17).
2.126
1. In the previous exercise (Exercise 2.125) we showed that the set 𝜑 of solutions to Bellman’s equation (Exercise 2.17) is the solution correspondence of the
constrained maximization problem
𝜑(𝑥) = arg max { 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦) }
𝑦∈𝐺(𝑥)
This problem satisfies the requirements of the monotone maximum theorem (Theorem 2.1), since the objective function 𝑓 (𝑥, 𝑦) + 𝛽𝑣(𝑦)
∙ supermodular in 𝑦
∙ displays strictly increasing differences in (𝑥, 𝑦) since for every 𝑥2 ≥ 𝑥1
𝑓 (𝑥2 , 𝑦) + 𝛽𝑣(𝑦) − 𝑓 (𝑥1 , 𝑦) + 𝛽𝑣(𝑦) = 𝑓 (𝑥2 , 𝑦) − 𝑓 (𝑥1 , 𝑦)
∙ 𝐺(𝑥) is increasing.
By Corollary 2.1.2, 𝜑(𝑥) is always increasing.
2. Let x∗ = (𝑥0 , 𝑥∗1 , 𝑥∗2 , . . . ) be an optimal plan. Then (Exercise 2.17)
𝑥∗𝑡+1 ∈ 𝜑(𝑥∗𝑡 ),
𝑡 = 0, 1, 2, . . .
Since 𝜑 is always increasing
𝑥∗𝑡 ≥ 𝑥∗𝑡−1 =⇒ 𝑥∗𝑡+1 ≥ 𝑥∗𝑡
for every 𝑡 = 1, 2, . . . . Similarly
𝑥∗𝑡 ≤ 𝑥∗𝑡−1 =⇒ 𝑥∗𝑡+1 ≤ 𝑥∗𝑡
x∗ = (𝑥0 , 𝑥∗1 , 𝑥∗2 , . . . ) is a monotone sequence.
2.127 Let 𝑔(𝑥) = 𝑓 (𝑥) − 𝑥. 𝑔 is continuous (Exercise 2.78) with
𝑔(0) ≥ 0 and 𝑔(1) ≤ 0
By the intermediate value theorem (Exercise 2.83), there exists some point 𝑥 ∈ [0, 1]
with 𝑔(𝑥) = 0 which implies that 𝑓 (𝑥) = 𝑥.
2.128
1. To show that a label min{ 𝑖 : 𝛽𝑖 ≤ 𝛼𝑖 ∕= 0 } exists for every x ∈ 𝑆, assume
to the contrary that, for some x ∈ 𝑆, 𝛽𝑖 > 𝛼𝑖 for every 𝑖 = 0, 1, . . . , 𝑛. This
implies
𝑛
∑
𝛽𝑖 >
𝑖=0
𝑛
∑
𝛼𝑖 = 1
𝑖=0
contradicting the requirement that
𝑛
∑
𝛽𝑖 = 1 for every 𝑓 (x) ∈ 𝑆
𝑖=0
98
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
2. The barycentric coordinates of vertex x𝑖 are 𝛼𝑖 = 1 with 𝛼𝑗 = 0 for every 𝑗 ∕= 𝑖.
Therefore the rule assigns vertex x𝑖 the label 𝑖.
3. Similarly, if x belongs to a proper face of 𝑆, it coordinates relative to the vertices
not in that face are 0, and it cannot be assigned a label corresponding to a vertex
not in the face. To be concrete, suppose that x ∈ conv {x1 , x2 , x4 }. Then
x = 𝛼1 x1 + 𝛼2 x2 + 𝛼4 x4 ,
𝛼1 + 𝛼2 + 𝛼4 = 1
/ {1, 2, 4}. Therefore
and 𝛼𝑖 = 0 for 𝑖 ∈
x +−→ min{ 𝑖 : 𝛽𝑖 ≤ 𝛼𝑖 ∕= 0 } ∈ {1, 2, 4}
2.129
1. Since 𝑆 is compact, it is bounded (Proposition 1.1) and therefore it is
contained in a sufficiently large simplex 𝑇 .
2. By Exercise 3.74, there exists a continuous retraction 𝑟 : 𝑇 → 𝑆. The composition
𝑓 ∘ 𝑟 : 𝑇 → 𝑆 ⊆ 𝑇 . Furthermore as the composition of continuous functions, 𝑓 ∘ 𝑟
is continuous (Exercise 2.72). Therefore 𝑓 ∘ 𝑟 has a fixed point x∗ ∈ 𝑇 , that is
𝑓 ∘ 𝑟(x∗ ) = x∗ .
3. Since 𝑓 ∘ 𝑟(x) ∈ 𝑆 for every x ∈ 𝑇 , we must have 𝑓 ∘ 𝑟(x∗ ) = x∗ ∈ 𝑆. Therefore,
𝑟(x∗ ) = x∗ which implies that 𝑓 (x∗ ) = x∗ . That is, x∗ is a fixed point of 𝑓 .
2.130 Convexity of 𝑆 is required to ensure that there is a continuous retraction of the
simplex onto 𝑆.
2.131
1. 𝑓 (𝑥) = 𝑥2 on 𝑆 = (0, 1) or 𝑓 (𝑥) = 𝑥 + 1 on 𝑆 = ℜ+ .
2. 𝑓 (𝑥) = 1 − 𝑥 on 𝑆 = [0, 1/3] ∪ [2/3, 1].
3. Let 𝑆 = [0, 1] and define
{
𝑓 (𝑥) =
1
0
0 ≤ 𝑥 < 1/2
otherwise
2.132 Suppose such a function exists. Define 𝑓 (x) = −𝑟(x). Then 𝑓 : 𝐵 → 𝐵 continously, and has no fixed point since for
∙ x ∈ 𝑆, 𝑓 (x) = −𝑟(x) = −x ∕= x
∙ x ∈ 𝐵 ∖ 𝑆, 𝑓 (x) ∈
/ 𝐵 ∖ 𝑆 and therefore𝑓 (x) ∕= x
Therefore 𝑓 has no fixed point contradicting Brouwer’s theorem.
2.133 Suppose to the contrary that 𝑓 has no fixed point. For every x ∈ 𝐵, let 𝑟(z)
denote the point where the line segment from 𝑓 (x) through x intersects the boundary
𝑆 of 𝐵. Since 𝑓 is continuous and 𝑓 (x) ∕= x, 𝑟 is a continuous function from 𝐵 to its
boundary, that is a retraction, contradicting Exercise 2.132. We conclude that 𝑓 must
have a fixed point.
2.134 No-retraction =⇒ Brouwer Note first that the no-retraction theorem (Exercise 2.132) generalizes immediately to a closed ball about 0 of arbitrary radius.
Assume that 𝑓 is a continuous operator on a compact, convex set 𝑆 in a finite dimensional normed linear space. There exists a closed ball 𝐵 containing 𝑆
(Proposition 1.1). Define 𝑔 : 𝐵 → 𝑆 by
𝑔(y) = { x ∈ 𝑆 : x is closest to y }
As in Exercise 2.129, 𝑔 is well-defined, continuous and 𝑔(x) = x for every x ∈ 𝑆.
𝑓 ∘ 𝑔 : 𝐵 → 𝑆 ⊆ 𝐵 and has a fixed point x∗ = 𝑓 (𝑔(x∗ )) by Exercise 2.133. Since
99
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
𝑓 ∘ 𝑔(x) ∈ 𝑆 for every x ∈ 𝐵, we must have 𝑓 ∘ 𝑔(x∗ ) = x∗ ∈ 𝑆. Therefore,
𝑔(x∗ ) = x∗ which implies that 𝑓 (x∗ ) = x∗ . That is, x∗ is a fixed point of 𝑓 .
Brouwer =⇒ no-retraction Exercise 2.132.
2.135 Let Λ𝑘 , 𝑘 = 1, 2, . . . be a sequence of simplicial partitions of 𝑆 in which the
maximum diameter of the subsimplices tend to zero as 𝑘 → ∞. By Sperner’s lemma
(Proposition 1.3), every partition Λ𝑘 has a completely labeled subsimplex with vertices
x𝑘0 , x𝑘1 , . . . , x𝑘𝑛 . By construction of an admissible labeling, each x𝑘𝑖 belongs to a face
containing x𝑖 , that is
x𝑘𝑖 ∈ conv {x𝑖 , . . . }
and therefore
x𝑘𝑖 ∈ 𝐴𝑖 ,
𝑖 = 0, 1, . . . , 𝑛
′
Since 𝑆 is compact, each sequence x𝑘𝑖 has a convergent subsequence x𝑘𝑖 . Moreover,
since the diameters of the subsimplices converge to zero, these subsequences must
converge to the same point, say x∗ . That is,
′
lim x𝑘𝑖 = x∗ ,
𝑖 = 0, 1, . . . , 𝑛
𝑘′ →∞
Since the sets 𝐴𝑖 are closed, x∗ ∈ 𝐴𝑖 for every 𝑖 and therefore
𝑛
∩
x∗ ∈
𝐴𝑖 ∕= ∅
𝑖=0
2.136
=⇒ Let 𝑓 : 𝑆 → 𝑆 be a continuous operator on an 𝑛-dimensional simplex 𝑆
with vertices x0 , x1 , . . . , x𝑛 . For 𝑖 = 0, 1, . . . , 𝑛, let
𝐴𝑖 = { x ∈ 𝑆 : 𝛽𝑖 ≤ 𝛼𝑖 }
where 𝛼0 , 𝛼1 , . . . , 𝛼𝑛 and 𝛽0 , 𝛽1 , . . . , 𝛽𝑛 are the barycentric coordinates of x and
𝑓 (x) respectively. Then
∙ 𝑓 continuous =⇒ 𝐴𝑖 closed for every 𝑖 = 0, 1, . . . , 𝑛 (Exercise 1.106)
∙ Let x ∈ conv { x𝑖 : 𝑖 ∈ 𝐼 } for some 𝐼 ⊆ { 0, 1, . . . , 𝑛 }. Then
∑
𝛼𝑖 = 1 =
𝑛
∑
𝛽𝑖
𝑖=0
𝑖∈𝐼
which implies that 𝛽𝑖 ≤ 𝛼𝑖 for some 𝑖 ∈ 𝐼, so that x ∈ 𝐴𝑖 . Therefore
∪
𝐴𝑖
conv { x𝑖 : 𝑖 ∈ 𝐼 } ⊆
𝑖∈𝐼
Therefore the collection 𝐴0 , 𝐴1 , . . . , 𝐴𝑛 satisfies the hypotheses of the K-K-M
theorem and their intersection is nonempty. That is, there exists
x∗ ∈
𝑛
∩
𝐴𝑖 ∕= ∅ with 𝛽𝑖∗ ≤ 𝛼∗𝑖 ,
𝑖 = 0, 1, . . . , 𝑛
𝑖=0
where ∑
𝛼∗ and ∑
𝛽 ∗ are the barycentric coordinates of x∗ and 𝑓 (x∗ ) respectively.
∗
Since
𝛽𝑖 = 𝛼∗𝑖 = 1, this implies that
𝛽𝑖∗ = 𝛼∗𝑖
𝑖 = 0, 1, . . . , 𝑛
In other words, 𝑓 (x∗ ) = x∗ .
100
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
⇐=
Let 𝐴0 , 𝐴1 , . . . , 𝐴𝑛 be closed subsets of an 𝑛 dimensional simplex 𝑆 with vertices
x0 , x1 , . . . , x𝑛 such that
∪
conv { x𝑖 : 𝑖 ∈ 𝐼 } ⊆
𝐴𝑖
𝑖∈𝐼
for every 𝐼 ⊆ { 0, 1, . . . , 𝑛 }. For 𝑖 = 0, 1, . . . , 𝑛, let
𝑔𝑖 (x) = 𝜌(x, 𝐴𝑖 )
For any x ∈ 𝑆 with barycentric coordinates 𝛼0 , 𝛼1 , . . . , 𝛼𝑛 , define
𝑓 (x) = 𝛽0 x0 + 𝛽1 x1 + ⋅ ⋅ ⋅ + 𝛽𝑛 x𝑛
where
𝛽𝑖 =
𝛼𝑖 + 𝑔𝑖 (x)
∑
1 + 𝑛𝑗=0 𝑔𝑗 (x)
𝑖 = 0, 1, . . . , 𝑛
(2.45)
∑
By construction 𝛽𝑖 ≥ 0 and 𝑛𝑖=0 𝛽𝑖 = 1. Therefore 𝑓 (x) ∈ 𝑆. That is, 𝑓 : 𝑆 → 𝑆.
Furthermore 𝑓 is continuous. By Brouwer’s theorem, there exists a fixed point
𝑥∗ with 𝑓 (x∗ ) = x∗ . That is 𝛽𝑖∗ = 𝛼∗𝑖 for 𝑖 = 0, 1, . . . , 𝑛.
Now, since the collection 𝐴0 , 𝐴1 , . . . , 𝐴𝑛 covers 𝑆, there exists some 𝑖 for which
𝜌(x∗ , 𝐴𝑖 ) = 0. Substituting 𝛽𝑖∗ = 𝛼∗𝑖 in (2.45) we have
𝛼∗𝑖 =
1+
𝛼∗
∑𝑛 𝑖
𝑗=0
𝑔𝑗 (x∗ )
which implies that 𝑔𝑗 (x∗ ) = 0 for every 𝑗. Since the 𝐴𝑖 are closed, x∗ ∈ 𝐴𝑖 for
every 𝑖 and therefore
x∗ ∈
𝑛
∩
𝐴𝑖 ∕= ∅
𝑖=0
(
)
2.137 To simplify the notation, let 𝑧𝑘+ (p) = max 0, z𝑖 (p) . Assume p∗ is a fixed point
of 𝑔. Then for every 𝑘 = 1, 2, . . . , 𝑛
𝑝∗𝑘 =
𝑝𝑘 + 𝑧𝑘+ (p∗ )
∑𝑛
1 + 𝑗=1 𝑧𝑗+ (p∗ )
Cross-multiplying
𝑝∗𝑘 + 𝑝∗𝑘
𝑛
∑
𝑗=1
𝑧𝑗+ (p) = 𝑝∗𝑘 + 𝑧𝑘+ (p∗ )
or
𝑝∗𝑘
𝑛
∑
𝑗=1
𝑧𝑗+ (p) = 𝑧𝑘+ (p∗ )
𝑘 = 1, 2, . . . 𝑛
Multiplying each equation by 𝑧𝑘 (p) we get
𝑝∗𝑘 𝑧𝑘 (p∗ )
𝑛
∑
𝑗=1
𝑧𝑖+ (p) = 𝑧𝑘 (p∗ )𝑧𝑘+ (p∗ )
101
𝑘 = 1, 2, . . . 𝑛
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Summing over 𝑘
𝑛
∑
Since
∑𝑛
𝑘=1
𝑝∗𝑘 𝑧𝑘 (p∗ )
𝑛
∑
𝑗=1
𝑘=1
𝑧𝑖+ (p) =
𝑛
∑
𝑘=1
𝑧𝑘 (p∗ )𝑧𝑘+ (p∗ )
𝑝∗𝑘 𝑧𝑘 (p∗ ) = 0 this implies that
𝑛
∑
𝑘=1
𝑧𝑘 (p∗ )𝑧𝑘+ (p∗ ) = 0
(
)2
Each term of this sum is nonnegative, since it is either 0 or 𝑧𝑘 (p∗ ) . Consequently,
every term must be zero which implies that 𝑧𝑘 (p∗) ≤ 0 for every 𝑘 = 1, 2, . . . , 𝑙. In
other words, z(p∗ ) ≤ 0.
2.138 Every individual demand function x𝑖 (p, 𝑚) is continuous (Example 2.90) in p
and 𝑚. For given endowment 𝝎 𝑖
𝑚𝑖 =
𝑙
∑
𝑝𝑗 𝝎 𝑖𝑗
𝑗=1
is continuous in p (Exercise 2.78). Therefore the excess demand function
z𝑖 (p) = x𝑖 (p, 𝑚) − 𝝎 𝑖
is continuous for every consumer 𝑖 and hence the aggregate excess demand function is
continuous.
Similarly, the consumer’s demand function x𝑖 (p, 𝑚) is homogeneous of degree 0 in p
and 𝑚. For given endowment 𝝎 𝑖 , the consumer’s wealth is homogeneous of degree 1 in
p and therefore the consumer’s excess demand function z𝑖 (p) is homogeneous of degree
0. So therefore is the aggregate excess demand function z(p).
2.139
z(p) =
=
𝑛
∑
𝑖=1
𝑛
∑
z𝑖 (p)
(
)
x𝑖 (p, 𝑚) − 𝝎 𝑖
𝑖=1
and therefore
p𝑇 z(p) =
𝑛
∑
p𝑇 x𝑖 (p, 𝑚) −
𝑖=1
𝑛
∑
p𝑇 𝝎 𝑖
𝑖=1
Since preferences are nonsatiated and strictly convex, they are locally nonsatiated
(Exercise 1.248) which implies (Exercise 1.235) that every consumer must satisfy his
budget constraint
p𝑇 x𝑖 (p, 𝑚) = p𝑇 𝝎𝑖 for every 𝑖 = 1, 2, . . . , 𝑛
Therefore in aggregate
p𝑇 z(p) =
𝑛
∑
p𝑇 x𝑖 (p, 𝑚) −
𝑖=1
𝑛
∑
𝑖=1
for every p.
102
p𝑇 𝝎 𝑖 = 0
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
2.140 Assume there exists p∗ such that z(p∗ ) ≤ 0. That is
z(p∗ ) =
𝑛
∑
z𝑖 (p) =
𝑖=1
𝑛
𝑛
𝑛
∑
∑
(
) ∑
x𝑖 (p, 𝑚) − 𝝎𝑖 =
x𝑖 (p, 𝑚) −
𝝎𝑖 ≤ 0
𝑖=1
or
𝑖=1
∑
𝑖∈𝑁
x𝑖 ≤
∑
𝑖=1
𝝎𝑖
𝑖∈𝑁
Aggregate demand is less or equal to available supply.
∑𝑙
Let 𝑚∗𝑖 = 𝑗=1 𝑝∗𝑗 𝝎 𝑖𝑗 denote the wealth of consumer 𝑖 when the price system is p∗
and let x∗𝑖 = x(p∗ , 𝑚∗ ) be his chosen consumption bundle. Then
x∗𝑖 ≿ x𝑖 for every x𝑖 ∈ 𝑋(p∗ , 𝑚𝑖 )
Let x∗ = (x∗1 , x∗2 , . . . , x∗𝑛 ) be the allocation comprising these optimal bundles. The
pair (p∗ , x∗ ) is a competitive equilibrium.
2.141 For each x𝑘 , let 𝑆 𝑘 denote the subsimplex of Λ𝑘 which contains x𝑘 and let
x𝑘0 , x𝑘1 , . . . , x𝑘𝑛 denote the vertices of 𝑆 𝑘 . Let 𝛼𝑘0 , 𝛼𝑘1 , . . . , 𝛼𝑘𝑛 denote the barycentric
coordinates (Exercise 1.159) of x with respect to the vertices of 𝑆 𝑘 and let y𝑖𝑘 = 𝑓 𝑘 (x𝑘𝑖 ),
𝑖 = 0, 1, . . . , 𝑛, denote the images of the vertices. Since 𝑆 is compact, there exists
′
′
′
subsequences x𝑘𝑖 , y𝑖𝑘 and 𝛼𝑘 such that
x𝑘𝑖 → x∗𝑖
y𝑖𝑘 → y𝑖∗ and 𝛼𝑘𝑖 → 𝛼∗𝑖
𝑖 = 0, 1, . . . , 𝑛
Furthermore, 𝛼∗𝑖 ≥ 0 and 𝛼∗0 +𝛼∗1 +⋅ ⋅ ⋅+𝛼∗𝑛 = 1. Since the diameters of the subsimplices
converge to zero, their vertices must converge to the same point. That is, we must have
x∗0 = x∗1 = ⋅ ⋅ ⋅ = x∗𝑛 = x∗
By definition of 𝑓 𝑘
𝑓 𝑘 (x𝑘 ) = 𝛼𝑘0 𝑓 (x𝑘0 ) + 𝛼𝑘1 𝑓 (x𝑘1 ) + ⋅ ⋅ ⋅ + 𝛼𝑘𝑛 𝑓 (x𝑘𝑛 )
Substituting y𝑖𝑘 = 𝑓 𝑘 (x𝑘𝑖 ), 𝑖 = 0, 1, . . . , 𝑛 and recognizing that x𝑘 is a fixed point of
𝑓 𝑘 , we have
𝑥𝑘 = 𝑓 𝑘 (x𝑘 ) = 𝛼𝑘0 y0𝑘 + 𝛼𝑘1 y1𝑘 + ⋅ ⋅ ⋅ + 𝛼𝑘𝑛 y𝑛𝑘
Taking limits
x∗ = 𝛼∗0 y0∗ + 𝛼∗1 y1∗ + ⋅ ⋅ ⋅ + 𝛼∗𝑛 y𝑛∗
(2.46)
For each coordinate 𝑖, (x𝑘𝑖 , y𝑖𝑘 ) ∈ graph(𝜑) for every 𝑘 = 0, 1, . . . . Since 𝜑 is closed,
(x∗𝑖 , y𝑖∗ ) ∈ graph(𝜑). That is, y𝑖∗ ∈ 𝜑(x∗𝑖 ) = 𝜑(x∗ ) for every 𝑖 = 0, 1, . . . , 𝑛. Therefore,
(2.46) implies
x∗ ∈ conv 𝜑(x∗ )
Since 𝜑 is convex valued,
x∗ ∈ 𝜑(x∗ )
103
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
2.142 Analogous to Exercise 2.129, there exists a simplex 𝑇 containing 𝑆 and a retraction of 𝑇 onto 𝑆, that is a continuous function 𝑔 : 𝑇 → 𝑆 with 𝑔(x) = x for every x ∈ 𝑆.
Then 𝜑 ∘ 𝑔 : 𝑇 ⇉ 𝑆 ⊂ 𝑇 is closed-valued (Exercise 2.106) and uhc (Exercise 2.103).
By the argument in the proof, there exists a point x∗ ∈ 𝑇 such that x∗ ∈ 𝜑 ∘ 𝑔(x∗ ).
However, since 𝜑 ∘ 𝑔(x∗ ) ⊆ 𝑆, we must have x∗ ∈ 𝑆 and therefore 𝑔(x∗ ) = x∗ . This
implies x∗ ∈ 𝜑(x∗ ). That is, x∗ is a fixed point of 𝜑.
2.143 𝐵 = 𝐵1 × 𝐵2 × . . . × 𝐵𝑛 is the Cartesian product of uhc, compact- and convexvalued correspondences. Therefore 𝐵 is also compact-valued and uhc (Exercise 2.112
and also convex-valued (Exercise 1.165). By Exercise 2.106, 𝐵 is closed.
2.144 Strict quasiconcavity ensures that the best response correspondence is in fact a
function 𝐵 : 𝑆 → 𝑆. Since the hypotheses of Example 2.96 apply, there exists at least
one equilibrium. Suppose that there are two Nash equilibria s and s′ . Since 𝐵 is a
contraction,
𝜌(𝐵(s), 𝐵(s′ ) ≤ 𝛽𝜌(s, s′ )
for some 𝛽 < 1. However
𝐵(s) = s and 𝐵(s′ ) = s′
and (2.46) implies that
𝜌(s, s′ ) ≤ 𝛽𝜌(s, s′ )
which is possible if and only if s = s′ . This implies that the equilibrium must be unique.
2.145 Since 𝐾 is compact, it is totally bounded (Exercise 1.112). There exists a finite
set of points x1 , x2 , . . . , x𝑛 such that
𝑛
∩
𝐾⊆
𝐵𝜖 (x𝑖 )
𝑖=1
Let 𝑆 = conv {x1 , x2 , . . . , x𝑛 }. For 𝑖 = 1, 2, . . . , 𝑛 and x ∈ 𝑋, define
𝛼𝑖 (x) = max{0, 𝜖 − ∥x − x𝑖 ∥}
Then for every x ∈ 𝐾,
0 ≤ 𝛼𝑖 (x) ≤ 𝜖,
𝑖 = 1, 2, . . . , 𝑛
and
𝛼𝑖 (x) > 0 ⇐⇒ x ∈ 𝐵𝜖 (x𝑖 )
Note that 𝛼𝑖 (x) > 0 for some 𝑖. Define
∑
𝛼𝑖 (x)x𝑖
ℎ(x) = ∑
𝛼𝑖 (x)
Then ℎ(x) ∈ 𝑆 and therefore ℎ : 𝐾 → 𝑆. Furthermore, ℎ is continuous and
∑
𝛼𝑖 (x)x𝑖
∑
−
x
∥ℎ(x) − x∥ = 𝛼𝑖 (x)
∑
𝛼𝑖 (x)(x𝑖 − x) ∑
=
𝛼𝑖 (x)
∑
𝛼𝑖 (x) ∥x𝑖 − x∥
∑
=
𝛼𝑖 (x)
∑
𝛼𝑖 (x)𝜖
≤ ∑
=𝜖
𝛼𝑖 (x)
since 𝛼𝑖 (x) > 0 ⇐⇒ ∥x𝑖 − x∥ ≤ 𝜖.
104
Solutions for Foundations of Mathematical Economics
2.146
c 2001 Michael Carter
⃝
All rights reserved
(
)
1. For every x ∈ 𝑆 𝑘 , 𝑓 (x) ∈ 𝑆 and therefore 𝑔 𝑘 (x) = ℎ𝑘 𝑓 (x) ∈ 𝑆 𝑘 .
2. For any x ∈ 𝑆 𝑘 , let y = 𝑓 (x) ∈ 𝑓 (𝑆) and therefore
𝑘
ℎ (y) − y < 1
𝑘
which implies
𝑘
𝑔 (x) − 𝑓 (x) ≤ 1 for every x ∈ 𝑆 𝑘
𝑘
2.147 By the Triangle inequality
𝑘
x − 𝑓 (x) ≤ 𝑔 𝑘 (x𝑘 ) − 𝑓 (x𝑘 ) + 𝑓 (x𝑘 ) − 𝑓 (x)
As shown in the previous exercise
𝑘 𝑘
𝑔 (x ) − 𝑓 (x𝑘 ) ≤ 1 → 0
𝑘
as 𝑘 → ∞. Also since 𝑓 is continuous
𝑓 (x𝑘 ) − 𝑓 (x) → 0
Therefore
𝑘
x − 𝑓 (x) → 0 =⇒ x = 𝑓 (x)
x is a fixed point of 𝑓 .
2.148 𝑇 (𝐹 ) is bounded and equicontinuous and so therefore is 𝑇 (𝐹 ) (Exercise 2.96). By
Ascoli’s theorem (Exercise 2.95), 𝑇 (𝐹 ) is compact. Therefore 𝑇 is a compact operator.
Applying Corollary 2.8.1, 𝑇 has a fixed point.
105
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Chapter 3: Linear Functions
3.1 Let x1 , x2 ∈ 𝑋 and 𝛼1 , 𝛼2 ∈ ℜ. Homogeneity implies that
𝑓 (𝛼1 x1 ) = 𝛼1 𝑓 (𝑥1 )
𝑓 (𝛼2 x2 ) = 𝛼2 𝑓 (𝑥2 )
and additivity implies
𝑓 (𝛼1 x1 + 𝛼2 x2 ) = 𝛼1 𝑓 (x1 ) + 𝛼2 𝑓 (x2 )
Conversely, assume
𝑓 (𝛼1 x1 + 𝛼2 x2 ) = 𝛼1 𝑓 (x1 ) + 𝛼2 𝑓 (x2 )
for all x1 , x2 ∈ 𝑋 and 𝛼1 , 𝛼2 ∈ ℜ. Letting 𝛼1 = 𝛼2 = 1 implies
𝑓 (x1 + x2 ) = 𝑓 (x1 ) + 𝑓 (x2 )
while setting x2 = 0 implies
𝑓 (𝛼1 x1 ) = 𝛼1 𝑓 (x1 )
3.2 Assume 𝑓1 , 𝑓2 ∈ 𝐿(𝑋, 𝑌 ). Define the mapping 𝑓1 + 𝑓2 : 𝑋 → 𝑌 by
(𝑓1 + 𝑓2 )(x) = 𝑓1 (x) + 𝑓2 (x)
We have to confirm that 𝑓1 + 𝑓2 is linear, that is
(𝑓1 + 𝑓2 )(x1 + x2 ) = 𝑓1 (x1 + x2 ) + 𝑓2 (x1 + x2 )
= 𝑓1 (x1 ) + 𝑓1 (x2 ) + 𝑓2 (x1 ) + 𝑓2 (x2 )
= 𝑓1 (x1 ) + 𝑓2 (x1 ) + 𝑓1 (x1 ) + 𝑓2 (x2 )
= (𝑓1 + 𝑓2 )(x1 ) + (𝑓1 + 𝑓2 )(x2 )
and
(𝑓1 + 𝑓2 )(𝛼x) = 𝑓1 (𝛼x) + 𝑓2 (𝛼x)
= 𝛼(𝑓1 (x) + 𝑓2 (x))
= 𝛼(𝑓1 + 𝑓2 )(x)
Similarly let 𝑓 ∈ 𝐿(𝑋, 𝑌 ) and define 𝛼𝑓 : 𝑋 → 𝑌 by
(𝛼𝑓 )(x) = 𝛼𝑓 (x)
𝛼𝑓 is also linear, since
(𝛼𝑓 )(𝛽x) = 𝛼𝑓 (𝛽x)
= 𝛼𝛽𝑓 (x)
= 𝛽𝛼𝑓 (x)
= 𝛽(𝛼𝑓 )(x)
106
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3.3 Let x, x1 , x2 ∈ ℜ2 . Then
𝑓 (x1 + x2 ) = 𝑓 (𝑥11 + 𝑥21 , 𝑥12 + 𝑥22 )
)
(
= (𝑥11 + 𝑥21 ) cos 𝜃 − (𝑥12 + 𝑥22 ) sin 𝜃, (𝑥11 + 𝑥21 ) sin 𝜃 − (𝑥12 + 𝑥22 ) cos 𝜃
)
(
= (𝑥11 cos 𝜃 − 𝑥12 sin 𝜃) + (𝑥21 cos 𝜃 − 𝑥22 sin 𝜃), (𝑥11 sin 𝜃 + 𝑥12 cos 𝜃) + (𝑥21 sin 𝜃 − 𝑥22 cos 𝜃)
(
)
= (𝑥11 cos 𝜃 − 𝑥12 sin 𝜃, 𝑥11 sin 𝜃 + 𝑥12 cos 𝜃) + (𝑥21 cos 𝜃 − 𝑥22 sin 𝜃, 𝑥21 sin 𝜃 − 𝑥22 cos 𝜃
= 𝑓 (𝑥11 , 𝑥12 ) + 𝑓 (𝑥21 , 𝑥22 )
= 𝑓 (x1 ) + 𝑓 (x2 )
and
𝑓 (𝛼x) = 𝑓 (𝛼𝑥1 , 𝛼𝑥2 )
= (𝛼𝑥1 cos 𝜃 − 𝛼𝑥2 sin 𝜃, 𝛼𝑥1 sin 𝜃 + 𝛼𝑥2 cos 𝜃)
= 𝛼 (𝑥1 cos 𝜃 − 𝑥1 sin 𝜃, 𝑥1 sin 𝜃 + 𝑥2 cos 𝜃)
= 𝛼𝑓 (𝑥1 , 𝑥2 )
= 𝛼𝑓 (x)
3.4 Let x, x1 , x2 ∈ ℜ3 .
𝑓 (x1 + x2 ) = 𝑓 (𝑥11 + 𝑥2 , 𝑥12 + 𝑥22 , 𝑥13 + 𝑥23 )
= (𝑥11 + 𝑥21 , 𝑥12 + 𝑥22 , 0)
= (𝑥11 , 𝑥12 , 0) + (𝑥21 , 𝑥22 , 0)
= 𝑓 (𝑥11 , 𝑥12 , 𝑥13 ) + 𝑓 (𝑥21 , 𝑥22 , 𝑥23 )
= 𝑓 (x1 ) + 𝑓 (x2 )
and
𝑓 (𝛼x) = 𝑓 (𝛼𝑥1 , 𝛼𝑥2 , 𝛼𝑥3 )
= (𝛼𝑥1 , 𝛼𝑥2 , 0)
= 𝛼(𝑥1 , 𝑥2 , 0)
= 𝛼𝑓 (𝑥1 , 𝑥2 , 𝑥3 )
= 𝛼𝑓 (x)
This mapping is the projection of 3-dimensional space onto the (2-dimensional) plane.
3.5 Applying the definition
(
)(
)
0 1
𝑥1
𝑓 (𝑥1 , 𝑥2 ) =
1 0
𝑥2
= (𝑥2 , 𝑥1 )
This function interchanges the two coordinates of any point in the plane ℜ2 . Its action
corresponds to reflection about the line 𝑥1 = 𝑥2 ( 45 degree diagonal).
3.6 Assume (𝑁, 𝑤) and (𝑁, 𝑤′ ) are two games in 𝒢 𝑁 . For any coalition 𝑆 ⊆ 𝑁
(𝑤 + 𝑤′ )(𝑆) − (𝑤 + 𝑤′ )(𝑆 ∖ {𝑖}) = 𝑤(𝑆) + 𝑤(𝑆 ′ ) − 𝑤(𝑆 ∖ {𝑖}) − 𝑤′ (𝑆 ∖ {𝑖})
= (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖})) + (𝑤′ (𝑆) − 𝑤′ (𝑆 ∖ {𝑖}))
= 𝜑𝑖 (𝑤) + 𝜑𝑖 (𝑤′ )
107
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
3.7 The characteristic function of cost allocation game is
𝑤(𝐴𝑃 ) = 0
𝑤(𝑇 𝑁 ) = 0
𝑤(𝐴𝑃, 𝑇 𝑁 ) = 210
𝑤(𝐴𝑃, 𝐾𝑀 ) = 770
𝑤(𝐾𝑀 ) = 0
𝑤(𝐾𝑀, 𝑇 𝑁 ) = 1170
𝑤(𝑁 ) = 1530
The following table details the computation of the Shapley value for player 𝐴𝑃 .
𝑆
𝐴𝑃
𝐴𝑃, 𝑇 𝑁
𝐴𝑃, 𝐾𝑀
𝐴𝑃, 𝑇 𝑁, 𝐾𝑀
𝜑𝑓 (𝑤)
𝛾𝑆
1/3
1/6
1/6
1/3
𝑤(𝑆)
0
210
770
1530
𝑤(𝑆 ∖ {𝑖})
0
0
0
1170
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖}))
0
35
128 1/3
120
283 1/3
Thus 𝜑𝐴𝑃 𝑤 = 283 1/3. Similarly, we can calculate that 𝜑𝑇 𝑁 𝑤 = 483 1/3 and 𝜑𝐾𝑀 𝑤 =
763 1/3.
3.8
∑
𝜑𝑖 𝑤 =
𝑖∈𝑁
=
∑
(
∑
𝑖∈𝑁
𝑆∋𝑖
∑
(
𝑆∋𝑖
=
∑
)
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖}))
)
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖}))
𝑖∈𝑁
∑∑
𝛾𝑆 𝑤(𝑆) −
𝑆⊆𝑁 𝑖∈𝑆
=
∑
∑
𝛾𝑆 𝑤(𝑆 ∖ {𝑖})
𝑆⊆𝑁 𝑖∈𝑆
𝑠 × 𝛾𝑆 𝑤(𝑆) −
𝑆⊆𝑁
=
∑∑
(
∑
𝛾𝑆
𝑆⊆𝑁
𝑠 × 𝛾𝑆 𝑤(𝑆) −
∑
∑
)
𝑤(𝑆 ∖ {𝑖})
𝑖∈𝑆
𝑠 × 𝛾𝑆 𝑤(𝑆)
𝑆⊂𝑁
𝑆⊆𝑁
= 𝑛 × 𝛾𝑁 𝑤(𝑁 )
= 𝑤(𝑁 )
3.9 If 𝑖, 𝑗 ∈ 𝑆
𝑤(𝑆 ∖ {𝑖}) = 𝑤(𝑆 ∖ {𝑖, 𝑗} ∪ {𝑗}) = 𝑤(𝑆 ∖ {𝑖, 𝑗} ∪ {𝑖}) = 𝑤(𝑆 ∖ {𝑖})
𝜑𝑖 (𝑤) =
∑
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖}))
𝑆∋𝑖
=
∑
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖})) +
𝑆∋𝑖,𝑗
=
∑
∑
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑗})) +
=
∑
𝛾𝑆 (𝑤(𝑆 ∪ {𝑖}) − 𝑤(𝑆))
𝑆∕∋𝑖,𝑗
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑗})) +
∑
𝛾𝑆 ′ (𝑤(𝑆 ′ ∪ {𝑗}) − 𝑤(𝑆 ′ ))
𝑆 ′ ∕∋𝑖,𝑗
𝑆∋𝑖,𝑗
∑
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖}))
𝑆∋𝑖,𝑆∕∋𝑗
𝑆∋𝑖,𝑗
=
∑
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑗})) +
𝑆∋𝑖,𝑗
∑
𝑆∕∋𝑖,𝑆∋𝑗
= 𝜑𝑗 (𝑤)
108
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑗}))
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3.10 For any null player
𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖}) = 0
for every 𝑆 ⊆ 𝑁 . Consequently
∑
𝜑𝑖 (𝑤) =
𝛾𝑆 (𝑤(𝑆) − 𝑤(𝑆 ∖ {𝑖})) = 0
𝑆⊆𝑁
3.11 Every 𝑖 ∈
/ 𝑇 is a null player, so that
𝜑𝑖 (𝑢𝑇 ) = 0
Feasibility requires that
∑
for every 𝑖 ∈
/𝑇
∑
𝜑𝑖 (𝑢𝑇 ) =
𝑖∈𝑇
𝜑𝑖 (𝑢𝑇 ) = 1
𝑖∈𝑁
Further, any two players in 𝑇 are substitutes, so that symmetry requires that
𝜑𝑖 (𝑢𝑇 ) = 𝜑𝑗 (𝑢𝑇 )
for every 𝑖, 𝑗 ∈ 𝑇
Together, these conditions require that
𝜑𝑖 (𝑢𝑇 ) =
1
𝑡
for every 𝑖 ∈ 𝑇
The Shapley value of the a T-unanimity game is
{
1
𝑖∈𝑇
𝜑𝑖 (𝑢𝑇 ) = 𝑡
0 𝑖∈
/𝑇
where 𝑡 = ∣𝑇 ∣.
3.12 Any coalitional game can be represented as a linear combination of unanimity
games 𝑢𝑇 (Example 1.75)
∑
𝑤=
𝛼𝑇 𝑢𝑇
𝑇
By linearity, the Shapley value is
⎛
∑
𝜑𝑤 = 𝜑 ⎝
=
∑
⎞
𝛼𝑇 𝑢𝑇 ⎠
𝑇 ⊆𝑁
𝛼𝑇 𝜑𝑢𝑇
𝑇 ⊆𝑁
and therefore for player 𝑖
𝜑𝑖 𝑤 =
∑
𝛼𝑇 𝜑𝑖 𝑢𝑇
𝑇 ⊆𝑁
=
∑ 1
𝛼𝑇
𝑡
𝑇 ⊆𝑁
𝑇 ∋𝑖
∑ 1
∑ 1
𝛼𝑇 −
𝛼𝑇
=
𝑡
𝑡
𝑇 ⊆𝑁
𝑇 ⊆𝑁
𝑖∈𝑇
/
= 𝑃 (𝑁, 𝑤) − 𝑃 (𝑁 ∖ {𝑖}, 𝑤)
109
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Using Exercise 3.8
𝑤(𝑁 ) =
∑
𝜑𝑖 𝑤
𝑖∈𝑁
=
∑(
)
𝑃 (𝑁, 𝑤) − 𝑃 (𝑁 ∖ {𝑖}, 𝑣)
𝑖∈𝑁
= 𝑛𝑃 (𝑁, 𝑤) −
∑
𝑃 (𝑁 ∖ {𝑖}, 𝑣)
𝑖∈𝑁
which implies that
1
𝑃 (𝑁, 𝑤) =
𝑛
(
𝑤(𝑁 ) −
∑
)
𝑃 (𝑁 ∖ {𝑖}, 𝑣)
𝑖∈𝑁
3.13 Choose any x ∕= 0 ∈ 𝑋.
0𝑋 = x − x
and by additivity
𝑓 (0𝑋 ) = 𝑓 (x − x)
= 𝑓 (x) − 𝑓 (x)
= 0𝑌
3.14 Let x1 , x2 belong to 𝑋. Then
𝑔 ∘ 𝑓 (x1 + x2 ) = 𝑔 ∘ 𝑓 (x1 + x2 )
)
(
= 𝑔 𝑓 (x1 ) + 𝑓 (x2 )
)
(
)
(
= 𝑔 𝑓 (x1 ) + 𝑔 𝑓 (x2 )
= 𝑔 ∘ 𝑓 (x1 ) + 𝑔 ∘ 𝑓 (x2 )
and
𝑔 ∘ 𝑓 (𝛼x) = 𝑔 (𝑓 (𝛼x))
= 𝑔 (𝛼𝑓 (x))
= 𝛼𝑔 (𝑓 (x))
= 𝛼𝑔 ∘ 𝑓 (x)
Therefore 𝑔 ∘ 𝑓 is linear.
3.15 Let 𝑆 be a subspace of 𝑋 and let y1 , y2 belong to 𝑓 (𝑆). Choose any x1 ∈ 𝑓 −1 (y1 )
and x2 ∈ 𝑓 −1 (y2 ). Then for 𝛼1 , 𝛼2 ∈ ℜ
𝛼1 x1 + 𝛼2 x2 ∈ 𝑆
Since 𝑓 is linear (Exercise 3.1)
𝛼1 y1 + 𝛼2 y2 = 𝛼1 𝑓 (x1 ) + 𝛼2 𝑓 (x2 ) = 𝑓 (𝛼1 x1 + 𝛼2 x2 ) ∈ 𝑓 (𝑆)
𝑓 (𝑆) is a subspace.
Let 𝑇 be a subspace of 𝑌 and let x1 , x2 belong to 𝑓 −1 (𝑇 ). Let y1 = 𝑓 (x1 ) and
y2 = 𝑓 (x2 ). Then y1 , y2 ∈ 𝑇 . For every 𝛼1 , 𝛼2 ∈ ℜ
𝛼1 y1 + 𝛼2 y2 = 𝛼1 𝑓 (x1 ) + 𝛼2 𝑓 (x2 ) ∈ 𝑇
110
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Since 𝑓 is linear, this implies that
𝑓 (𝛼1 x1 + 𝛼2 x2 ) = 𝛼1 𝑓 (x1 ) + 𝛼2 𝑓 (x2 ) ∈ 𝑇
Therefore
𝛼1 x1 + 𝛼2 x2 ∈ 𝑓 −1 (𝑇 )
We conclude that 𝑓 −1 (𝑇 ) is a subspace.
3.16 𝑓 (𝑋) is a subspace of 𝑌 . rank 𝑓 (𝑋) = rank 𝑌 implies that 𝑓 (𝑋) = 𝑌 . 𝑓 is onto.
3.17 This is a special case of the previous exercise, since {0𝑌 } is a subspace of 𝑌 .
3.18 Assume not. That is, assume that there exist two distinct elements x1 and x2
with 𝑓 (x1 ) = 𝑓 (x2 ). Then x1 − x2 ∕= 0𝑋 but
𝑓 (x1 − x2 ) = 𝑓 (x1 ) − 𝑓 (x2 ) = 0𝑌
so that x1 − x2 ∈ kernel 𝑓 which contradicts the assumption that kernel 𝑓 = {0}.
3.19 If 𝑓 has an inverse, then it is one-to-one and onto (Exercise 2.4), that is 𝑓 −1 (0) = 0
and 𝑓 (𝑋) = 𝑌 . Conversely, if kernel 𝑓 = {0} then 𝑓 is one-to-one by the previous
exercise. If furthermore 𝑓 (𝑋) = 𝑌 , then 𝑓 is one-to-one and onto, and therefore has
an inverse (Exercise 2.4).
3.20 Let 𝑓 be a nonsingular linear function from 𝑋 to 𝑌 with inverse 𝑓 −1 . Choose
y1 , y2 ∈ 𝑌 and let
x1 = 𝑓 −1 (y1 )
x2 = 𝑓 −1 (y2 )
so that
y1 = 𝑓 (x1 )
y2 = 𝑓 (x2 )
Since 𝑓 is linear
𝑓 (x1 + x2 ) = 𝑓 (x1 ) + 𝑓 (x2 ) = y1 + y2
which implies that
𝑓 −1 (y1 + y2 ) = x1 + x2 = 𝑓 −1 (y1 ) + 𝑓 −1 (y2 )
The homogeneity of 𝑓 −1 can be demonstrated similarly.
3.21 Assume that 𝑓 : 𝑋 → 𝑌 and 𝑔 : 𝑌 → 𝑍 are nonsingular. Then (Exercise 3.19)
∙ 𝑓 (𝑋) = 𝑌 and 𝑔(𝑌 ) = 𝑍
∙ kernel 𝑓 = {0𝑋 } and kernel 𝑔 = {0𝑌 }
We have previously shown (Exercise 3.14) that ℎ = 𝑔 ∘ 𝑓 : 𝑋 → 𝑍 is linear. To show
that ℎ is nonsingular, we note that
∙ ℎ(𝑋) = 𝑔 ∘ 𝑓 (𝑋) = 𝑔(𝑌 ) = 𝑍
111
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
∙ If x ∈ kernel (ℎ) then
ℎ(x) = 𝑔 (𝑓 (x)) = 0
and 𝑓 (x) ∈ kernel 𝑔 = {0𝑌 }. Therefore 𝑓 (x) = 0𝑌 which implies that x = 0𝑋 .
Thus kernel ℎ = {0𝑋 }.
We conclude that ℎ is nonsingular.
Finally, let z be any point in 𝑍 and let
x1 = ℎ−1 (z) = (𝑔 ∘ 𝑓 )−1 (z)
y = 𝑔 −1 (z)
x2 = 𝑓 −1 (y)
Then
z = ℎ(x1 ) = 𝑔 ∘ 𝑓 (x1 )
z = 𝑔(y) = 𝑔 ∘ 𝑓 (x2 )
which implies that x1 = x2 .
3.22 Suppose 𝑓 were one-to-one. Then kernel 𝑓 = {0} ⊆ kernel ℎ and 𝑔 = ℎ ∘ 𝑓 −1 is a
well-defined linear function mapping 𝑓 (𝑋) to 𝑌 with
(
)
𝑔 ∘ 𝑓 = ℎ ∘ 𝑓 −1 ∘ 𝑓 = ℎ
We need to show that this still holds if 𝑓 is not one-to-one. In this case, for arbitrary
y ∈ 𝑓 (𝑋), 𝑓 −1 (y) may contain more than one element. Suppose x1 and x2 are distinct
elements in 𝑓 −1 (y). Then
𝑓 (x1 − x2 ) = 𝑓 (x1 ) − 𝑓 (x2 ) = y − y = 0
so that x1 − x2 ∈ kernel 𝑓 ⊆ kernel ℎ (by assumption). Therefore
ℎ(x1 ) − ℎ(x2 ) = ℎ(x1 − x2 ) = 0
which implies that ℎ(x1 ) = ℎ(x2 ) for all x1 , x2 ∈ 𝑓 −1 (y). Thus 𝑔 = ℎ∘ 𝑓 −1 : 𝑓 (𝑋) → 𝑍
is well defined even if 𝑓 is many-to-one.
To show that 𝑔 is linear, choose y1 , y2 in 𝑓 (𝑋) and let
x1 ∈ 𝑓 −1 (y1 )
x2 ∈ 𝑓 −1 (y2 )
Since 𝑓 (x1 + x2 ) = 𝑓 (x1 ) + 𝑓 (x2 ) = y1 + y2
x1 + x2 ∈ 𝑓 −1 (y1 + y2 )
and
𝑔(y1 + y2 ) = ℎ(x1 + x2 )
Therefore
𝑔(y1 ) + 𝑔(y2 ) = ℎ(x1 ) + ℎ(x2 )
= ℎ(x1 + x2 )
= 𝑔(y1 + y2 )
112
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Similarly 𝛼x1 ∈ 𝑓 −1 (𝛼y1 ) and
𝑔(𝛼y1 ) = ℎ(𝛼x1 )
= 𝛼ℎ(x1 )
= 𝛼𝑔(y1 )
We conclude that 𝑔 = ℎ ∘ 𝑓 −1 is a linear function mapping 𝑓 (𝑋) to 𝑍 with ℎ = 𝑔 ∘ 𝑓 .
3.23 Let y be an arbitrary element of 𝑓 (𝑋) with x ∈ 𝑓 −1 (y). Since B is a basis for
𝑋, x can be represented as a linear combination of elements of 𝐵, that is there exists
x1 , x2 , .., x𝑚 ∈ 𝐵 and 𝛼1 , ..., 𝛼𝑚 ∈ 𝑅 such that
x=
𝑚
∑
𝛼𝑖 x𝑖
𝑖=1
y = 𝑓 (x)
)
(
∑
𝛼𝑖 x𝑖
=𝑓
=
∑
𝑖
𝛼𝑖 𝑓 (x𝑖 )
𝑖
Since 𝑓 (x𝑖 ) ∈ 𝑓 (𝐵), we have shown that y can be written as a linear combination of
elements of 𝑓 (𝐵), that is
y ∈ lin 𝐵
Since the choice of y was arbitrary, 𝑓 (𝐵) spans 𝑓 (𝑋), that is
lin 𝐵 = 𝑓 (𝑋)
3.24 Let 𝑛 = dim 𝑋 and 𝑘 = dim kernel 𝑓 . Let x1 , . . . , x𝑘 be a basis for the kernel of
𝑓 . This can be extended (Exercise 1.142) to a basis 𝐵 for 𝑋. Exercise 3.23 showed
lin 𝐵 = 𝑓 (𝑋)
Since x1 , x2 , . . . , x𝑘 ∈ kernel 𝑓 , 𝑓 (x𝑖 ) = 0 for 𝑖 = 1, 2, . . . , 𝑘. This implies that
{𝑓 (x𝑘+1 ), . . . , 𝑓 (x𝑛 )} spans 𝑓 (𝑋), that is
lin {(x𝑘+1 ), ..., 𝑓 (x𝑛 )} = 𝑓 (𝑋)
To show that dim 𝑓 (𝑋) = 𝑛 − 𝑘, we have to show that {𝑓 (𝑥𝑘+1 ), 𝑓 (𝑥𝑘+2 ), . . . , 𝑓 (𝑥𝑛 )} is
linearly independent. Assume not. That is, assume there exist 𝛼𝑘+1 , 𝛼𝑘+2 , ..., 𝛼𝑛 ∈ 𝑅
such that
𝑛
∑
𝛼𝑖 𝑓 (x𝑖 ) = 0
𝑖=𝑘+1
This implies that
(
𝑓
)
𝑛
∑
𝛼𝑖 x𝑖
=0
𝑖=𝑘+1
or
x=
𝑛
∑
𝛼𝑖 𝑥𝑖 ∈ kernel 𝑓
𝑖=𝑘+1
113
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
This implies that x can also be expressed as a linear combination of elements in
{x1 , 𝑥2 , ..., x𝑘 }, that is there exist scalars 𝛼1 , 𝛼2 , . . . , 𝛼𝑘 such that
x=
𝑘
∑
𝛼𝑖 x𝑖
𝑖=1
or
x=
𝑘
∑
𝑛
∑
𝛼𝑖 x𝑖 =
𝑖=1
𝛼𝑖 x𝑖
𝑖=𝑘+1
which contradicts the assumption that 𝐵 is a basis for 𝑋. Therefore {𝑓 (x𝑘+1 ), . . . , 𝑓 (x𝑛 )}
is a basis for 𝑓 (𝑋) and therefore dim 𝑓 (𝑥) = 𝑛 − 𝑘. We conclude that
dim kernel 𝑓 + dim 𝑓 (𝑋) = 𝑛 = dim 𝑋
3.25 Equation (3.2) implies that nullity 𝑓 = 0, and therefore 𝑓 is one-to-one (Exercise
3.18).
3.26 Choose some x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) ∈ 𝑋. x has a unique representation in terms of
the standard basis (Example 1.79)
x=
𝑛
∑
𝑥𝑗 e𝑗
𝑗=1
Let y = 𝑓 (x). Since 𝑓 is linear
⎛
y = 𝑓 (x) = 𝑓 ⎝
⎞
𝑛
∑
𝑥𝑗 e𝑗 ⎠ =
𝑗=1
𝑛
∑
x𝑗 𝑓 (e𝑗 )
𝑗=1
Each 𝑓 (e𝑗 ) has a unique representation of the form
𝑓 (e𝑗 ) =
𝑚
∑
𝑎𝑖𝑗 e𝑖
𝑖=1
so that
y = 𝑓 (x)
=
𝑛
∑
𝑗=1
=
𝑚
∑
𝑖=1
(
𝑥𝑗
𝑚
∑
)
𝑎𝑖𝑗 e𝑖
𝑖=1
⎛
⎞
𝑛
∑
⎝
𝑎𝑖𝑗 𝑥𝑗 ⎠ e𝑖
𝑗=1
⎞
⎛ ∑𝑛
𝑎1𝑗 𝑥𝑗
∑𝑗=1
𝑛
⎜ 𝑗=1 𝑎2𝑗 𝑥𝑗 ⎟
⎟
⎜
=⎜
⎟
..
⎠
⎝
∑𝑛 .
𝑎
𝑥
𝑗=1 𝑚𝑗 𝑗
= 𝐴x
where
⎛
⎞
𝑎11 𝑎12 . . . 𝑎1𝑛
⎜ 𝑎21 𝑎22 . . . 𝑎2𝑛 ⎟
⎟
𝐴=⎜
⎝ . . . . . . . . . . . . . . . . . . . . .⎠
𝑎𝑚1 𝑎𝑚2 . . . 𝑎𝑚𝑛
114
Solutions for Foundations of Mathematical Economics
3.27
(
1 0
0 1
c 2001 Michael Carter
⃝
All rights reserved
)
0
0
3.28 We must specify bases for each space. The most convenient basis for 𝐺𝑁 is the
T-unanimity games. We adopt the standard basis for ℜ𝑛 . With respect to these bases,
the Shapley value 𝜑 is represented by the 2𝑛−1 ×𝑛 matrix where each row is the Shapley
value of the corresponding T-unanimity game.
For three player games (𝑛 = 3), the matrix is
⎛
1 0
⎜0 1
⎜
⎜0 0
⎜1 1
⎜
⎜ 21 2
⎜
⎜ 2 01
⎝0
2
1
3
1
3
⎞
0
0⎟
⎟
1⎟
⎟
0⎟
⎟
1⎟
2⎟
1⎠
2
1
3
3.29 Clearly, if 𝑓 is continuous, 𝑓 is continuous at 0.
To show the converse, assume that 𝑓 : 𝑋 → 𝑌 is continuous at 0. Let (x𝑛 ) be a sequence
which converges to x ∈ 𝑋. Then the sequence (x𝑛 − x) converges to 0𝑋 and therefore
𝑓 (x𝑛 −x) → 0𝑌 by continuity (Exercise 2.68). By linearity, 𝑓 (x𝑛 )−𝑓 (x) = 𝑓 (x𝑛 −x) →
0𝑌 and therefore 𝑓 (x𝑛 ) converges to 𝑓 (x). We conclude that 𝑓 is continuous at x.
3.30 Assume that 𝑓 is bounded, that is
∥𝑓 (x)∥ ≤ 𝑀 ∥x∥ for every x ∈ 𝑋
Then 𝑓 is Lipschitz at 0 (with Lipschitz constant 𝑀 ) and hence continuous (by the
previous exercise).
Conversely, assume 𝑓 is continuous but not bounded. Then, for every positive integer
𝑛, there exists some x𝑛 ∈ 𝑋 such that ∥𝑓 (x𝑛 )∥ > 𝑛 ∥x𝑛 ∥ which implies that
(
)
x𝑛
𝑓
>1
𝑛 ∥x𝑛 ∥ Define
y𝑛 =
x𝑛
𝑛 ∥x𝑛 ∥
Then y𝑛 → 0 but 𝑓 (y𝑛 ) ∕→ 0. This implies that 𝑓 is not continuous at the origin,
contradicting our hypothesis.
3.31 Let {x1 , x2 , . . . , x𝑛 } be a basis for 𝑋. For every x ∈ 𝑋, there exists numbers
𝛼1 , 𝛼2 , . . . , 𝛼𝑛 such that
x=
𝑛
∑
𝛼𝑖 x𝑖
𝑖=1
115
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
and
𝑓 (x) =
𝑛
∑
𝛼𝑖 𝑓 (x𝑖 )
𝑖=1
𝑛
∑
∥𝑓 (x)∥ = ≤
𝑖=1
𝑛
∑
𝛼𝑖 𝑓 (x𝑖 )
∣𝛼𝑖 ∣ ∥𝑓 (x𝑖 )∥
𝑖=1
𝑛
)∑
( 𝑛
∣𝛼𝑖 ∣
≤ max ∥𝑓 (x𝑖 )∥
𝑖=1
𝑖=1
By Lemma 1.1, there exists a constant 𝑐 such that
𝑛
𝑛
1
∑
1
∑
∣𝛼𝑖 ∣ ≤ 𝛼𝑖 x𝑖 = ∥x∥
𝑐
𝑐
𝑖=1
𝑖=1
Combining these two inequalities
∥𝑓 (x)∥ ≤ 𝑀 ∥x∥
where 𝑀 = max𝑛𝑖=1 ∥𝑓 (x𝑖 )∥ /𝑐.
3.32 For any x ∈ 𝑋, let 𝑎 = ∥x∥ and define y = x/𝑎. Linearity implies that
∥𝑓 (x)∥
= sup ∥𝑓 (x/𝑎)∥ = sup ∥𝑓 (y)∥
𝑎
x∕=0
x∕=0
∥y∥=1
∥𝑓 ∥ = sup
3.33 ∥𝑓 ∥ is a norm Let 𝑓 ∈ 𝐵𝐿(𝑋, 𝑌 ). Clearly
∥𝑓 ∥ = sup ∥𝑓 (x)∥ ≥ 0
∥x∥=1
Further, for every 𝛼 ∈ ℜ,
∥𝛼𝑓 ∥ = sup ∥𝛼𝑓 (x)∥ = ∣𝛼∣ ∥𝑓 ∥
∥x∥=1
Finally, for every 𝑔 ∈ 𝐵𝐿(𝑋, 𝑌 ),
∥𝑓 + 𝑔∥ = sup ∥𝑓 (x) + 𝑔(x)∥ ≤ sup ∥𝑓 (x)∥ + sup ∥𝑔(x)∥ ≤ ∥𝑓 ∥ + ∥𝑔∥
∥x∥=1
∥x∥=1
∥x∥=1
verifying the triangle inequality. There ∥𝑓 ∥ is a norm.
𝐵𝐿(𝑋, 𝑌 ) is a linear space Let 𝑓, 𝑔 ∈ 𝐵𝐿(𝑋, 𝑌 ). Since 𝐵𝐿(𝑋, 𝑌 ) ⊆ 𝐿(𝑋, 𝑌 ), 𝑓 + 𝑔
is linear, that is 𝑓 +𝑔 ∈ 𝐿(𝑋, 𝑌 ) (Exercise 3.2). Similarly, 𝛼𝑓 ∈ 𝐿(𝑋, 𝑌 ) for every
𝛼 ∈ ℜ. Further, by the triangle inequality ∥𝑓 + 𝑔∥ ≤ ∥𝑓 ∥ + ∥𝑔∥ and therefore for
every x ∈ 𝑋
∥(𝑓 + 𝑔)(x)∥ ≤ ∥𝑓 + 𝑔∥ ∥x∥ ≤ (∥𝑓 ∥ + ∥𝑔∥) ∥x∥
Therefore 𝑓 + 𝑔 ∈ 𝐵𝐿(𝑋, 𝑌 ). Similarly
∥(𝛼𝑓 )(x)∥ ≤ (∣𝛼∣ ∥𝑓 ∥) ∥x∥
so that 𝛼𝑓 ∈ 𝐵𝐿(𝑋, 𝑌 ).
116
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
𝐵𝐿(𝑋, 𝑌 ) is complete with this norm Let (𝑓 𝑛 ) be a Cauchy sequence in 𝐵𝐿(𝑋, 𝑌 ).
For every x ∈ 𝑋
∥𝑓 𝑛 (x) − 𝑓 𝑚 (x)∥ ≤ ∥𝑓 𝑛 − 𝑓 𝑚 ∥ ∥x∥
Therefore (𝑓 𝑛 (x)) is a Cauchy sequence in 𝑌 , which converges since 𝑌 is complete.
Define the function 𝑓 : 𝑋 → 𝑌 by 𝑓 (x) = lim𝑛→∞ 𝑓 𝑛 (x).
𝑓 is linear since
𝑓 (x1 + x2 ) = lim 𝑓 𝑛 (x1 + x2 ) = lim 𝑓 𝑛 (x1 ) + lim 𝑓 𝑛 (x2 ) = 𝑓 (x1 ) + 𝑓 (x2 )
and
𝑓 (𝛼x) = lim 𝑓 𝑛 (𝛼x) = 𝛼 lim 𝑓 𝑛 (x) = 𝛼𝑓 (x)
To show that 𝑓 is bounded, we observe that
∥𝑓 (x)∥ = lim 𝑓 𝑛 (x) = lim ∥𝑓 𝑛 (x)∥ ≤ sup ∥𝑓 𝑛 (x)∥ ≤ sup ∥𝑓 𝑛 ∥ ∥x∥
𝑛
𝑛
𝑛
𝑛
Since (𝑓 𝑛 ) is a Cauchy sequence, (𝑓 𝑛 ) is bounded (Exercise 1.100), that is there
exists 𝑀 such that ∥𝑓 𝑛 ∥ ≤ 𝑀 . This implies
∥𝑓 (x)∥ ≤ sup ∥𝑓 𝑛 ∥ ∥x∥ ≤ 𝑀 ∥x∥
𝑛
Thus, 𝑓 is bounded.
To complete the proof, we must show 𝑓 𝑛 → 𝑓 , that is ∥𝑓 𝑛 − 𝑓 ∥ → 0. Since (𝑓 𝑛 )
is a Cauchy sequence, for every 𝜖 > 0, there exists 𝑁 such that ∥𝑓 𝑛 − 𝑓 𝑚 ∥ ≤ 𝜖
for every 𝑛, 𝑚 ≥ 𝑁 and consequently
∥𝑓 𝑛 (x) − 𝑓 𝑚 (x)∥ = ∥(𝑓 𝑛 − 𝑓 𝑚 )(x)∥ ≤ 𝜖 ∥x∥
Letting 𝑚 go to infinity,
∥𝑓 𝑛 (x) − 𝑓 (x)∥ = ∥(𝑓 𝑛 − 𝑓 )(x)∥ ≤ 𝜖 ∥x∥
for every x ∈ 𝑋 and 𝑛 ≥ 𝑁 and therefore
∥𝑓 𝑛 − 𝑓 ∥ = sup {𝑓 𝑛 − 𝑓 )(x)} ≤ 𝜖
∥x∥=1
for every 𝑛 ≥ 𝑁 .
3.34
1. Since 𝑋 is finite-dimensional, 𝑆 is compact (Proposition 1.4). Since 𝑓 is
continuous, 𝑓 (𝑆) is a compact set in 𝑌 (Exercise 2.3). Since 0𝑋 ∈
/ 𝑆, 0𝑌 =
𝑓 (0𝑋 ) ∈
/ 𝑓 (𝑆).
(
)𝑐
2. Consequently, 𝑓 (𝑆) is an open set containing 0𝑌 . It contains an open ball
(
)𝑐
𝑇 ⊆ 𝑓 (𝑆) around 0𝑌 .
3. Let y ∈ 𝑇 and choose any x ∈ 𝑓 −1 (y) and consider y/ ∥x∥. Since 𝑓 is linear,
(
)
x
𝑓 (x)
y
=
=𝑓
∈ 𝑓 (𝑆)
∥x∥
∥x∥
∥x∥
and therefore y/ ∥x∥ ∈
/ 𝑇 since 𝑇 ∩ 𝑓 (𝑆) = ∅.
117
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Suppose that y ∈
/ 𝑓 (𝐵). Then ∥x∥ ≥ 1 and therefore
y ∈ 𝑇 =⇒
y
∈𝑇
∥x∥
since 𝑇 is convex. This contradiction establishes that y ∈ 𝑓 (𝐵) and therefore
𝑇 ⊆ 𝑓 (𝐵). We conclude that 𝑓 (𝐵) contains an open ball around 0𝑌 .
4. Let 𝑆 be any open set in 𝑋. We need to show that 𝑓 (𝑆) is open in 𝑌 . Choose
any y ∈ 𝑓 (𝑆) and x ∈ 𝑓 −1 (y). Then x ∈ 𝑆 and, since 𝑆 is open, there exists
some 𝑟 > 0 such that 𝐵𝑟 (x) ⊆ 𝑆. Now 𝐵𝑟 (x) = x + 𝑟𝐵 and
𝑓 (𝐵𝑟 (x)) = y + 𝑟𝑓 (𝐵) ⊆ 𝑓 (𝑆)
by linearity. As we have just shown, there exists an open ball T about 0𝑌 such
that 𝑇 ⊆ 𝑓 (𝐵). Let 𝑇 (x) = y + 𝑟𝑇 . 𝑇 (x) is an open ball about y. Since
𝑇 ⊆ 𝑓 (𝐵), 𝑇 (x) = y + 𝑟𝑇 ⊆ 𝑓 (𝐵𝑟 (x)) ⊆ 𝑓 (𝑆). This implies that 𝑓 (𝑆) is open.
Since 𝑆 was an arbitrary open set, 𝑓 is an open map.
5. Exercise 2.69.
3.35 𝑓 is linear
𝑓 (𝛼 + 𝛽) =
𝑛
∑
(𝛼𝑖 + 𝛽𝑖 )x𝑖 =
𝑖=1
𝑛
∑
𝛼𝑖 x𝑖 +
𝑖=1
𝑛
∑
𝛽𝑖 x𝑖 = 𝑓 (𝛼) + 𝑓 (𝛽)
𝑖=1
Similarly for every 𝑡 ∈ ℜ
𝑓 (𝑡𝛼) = 𝑡
𝑛
∑
𝛼𝑖 x𝑖 = 𝑡𝑓 (𝑡𝛼)
𝑖=1
𝑓 is one-to-one Exercise 1.137.
𝑓 is onto By definition of a basis lin {x1 , x2 , . . . , x𝑛 } = 𝑋
𝑓 is continuous Exercise 3.31
𝑓 is an open map Proposition 3.2
3.36 𝑓 is bounded and therefore there exists 𝑀 such that ∥𝑓 (x)∥ ≤ 𝑀 ∥x∥. Similarly,
𝑓 −1 is bounded and therefore there exists 𝑚 such that for every x
𝑓 −1 (y) ≤
1
∥y∥
𝑚
where y = 𝑓 (x). This implies
𝑚 ∥x∥ ≤ ∥𝑓 (x)∥
and therefore for every x ∈ 𝑋.
𝑚 ∥x∥ ≤ ∥𝑓 (x)∥ ≤ 𝑀 ∥x∥
By the linearity of 𝑓 ,
𝑚 ∥x1 − x2 ∥ ≤ ∥𝑓 (x1 − x2 )∥ = ∥𝑓 (x1 ) − 𝑓 (x2 )∥ ≤ 𝑀 ∥x1 − x2 ∥
118
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3.37 For any function, continuity implies closed graph (Exercise 2.70). To show the converse, assume that 𝐺 = graph(𝑓 ) is closed. 𝑋 ×𝑌 with norm ∥(x, y)∥ = max{∥x∥ , ∥y∥}
is a Banach space (Exercise 1.209). Since 𝐺 is closed, 𝐺 is complete. Also, 𝐺 is a subspace of 𝑋 × 𝑌 . Consequently, 𝐺 is a Banach space in its own right.
Consider the projection ℎ : 𝐺 → 𝑋 defined by ℎ(x, 𝑓 (x)) = x. Clearly ℎ is linear,
one-to-one and onto with
ℎ−1 (x) = (x, 𝑓 (x))
It is also bounded since
∥ℎ(x, 𝑓 (x))∥ = ∥x∥ ≤ ∥(x, 𝑓 (x)∥
By the open mapping theorem, ℎ−1 is bounded. For every x ∈ 𝑋
∥𝑓 (x)∥ ≤ ∥(x, 𝑓 (x))∥ = ℎ−1 (x) ≤ ℎ−1 ∥x∥
We conclude that 𝑓 is bounded and hence continuous.
3.38 𝑓 (1) = 5, 𝑓 (2) = 7 but
𝑓 (1 + 2) = 𝑓 (3) = 9 ∕= 𝑓 (1) + 𝑓 (2)
Similarly
𝑓 (3 × 2) = 𝑓 (6) = 15 ∕= 3 × 𝑓 (2)
3.39 Assume 𝑓 is affine. Let y = 𝑓 (0) and define
𝑔(x) = 𝑓 (x) − y
𝑔 is homogeneous since for every 𝛼 ∈ ℜ
𝑔(𝛼x) = 𝑔(𝛼x + (1 − 𝛼)0)
= 𝑓 (𝛼x + (1 − 𝛼)0) − y
= 𝛼𝑓 (x) + (1 − 𝛼)𝑓 (0) − y
= 𝛼𝑓 (x) + (1 − 𝛼)y − y
= 𝛼𝑓 (x) − 𝛼y
= 𝛼(𝑓 (𝑥) − y)
= 𝛼𝑔(x)
Similarly for any x1 , x2 ∈ 𝑋
𝑔(𝛼x1 + (1 − 𝛼)x2 ) = 𝑓 (𝛼x1 + (1 − 𝛼)x2 ) − 𝑦
= 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) − 𝑦
Therefore, for 𝛼 = 1/2
1
1
1
1
𝑔( x1 + x2 ) = 𝑓 (x1 ) + 𝑓 (x2 ) − 𝑦
2
2
2
2
1
1
= (𝑓 (x1 ) − 𝑦) + (𝑓 (x2 ) − 𝑦)
2
2
1
1
= 𝑔(x1 ) + 𝑔(x2 )
2
2
119
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Since 𝑔 is homogeneous
𝑔(x1 + x2 ) = 𝑔(x1 ) + 𝑔(x2 )
which shows that 𝑔 is additive and hence linear.
Conversely if
𝑓 (x) = 𝑔(x) + y
with 𝑔 linear
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) = 𝛼𝑔(x1 ) + (1 − 𝛼)𝑔(x2 ) + 𝑦
= 𝛼𝑔(x1 ) + 𝑦 + (1 − 𝛼)𝑔(x2 ) + 𝑦
= 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
3.40 Let 𝑆 be an affine subset of 𝑋 and let y1 , y2 belong to 𝑓 (𝑆). Choose any x1 ∈
𝑓 −1 (y1 ) and x2 ∈ 𝑓 −1 (y2 ). Then for any 𝛼 ∈ ℜ
𝛼x1 + (1 − 𝛼)x2 ∈ 𝑆
Since 𝑓 is affine
𝛼y1 + (1 − 𝛼)y2 = 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) = 𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ∈ 𝑓 (𝑆)
𝑓 (𝑆) is an affine set.
Let 𝑇 be an affine subset of 𝑌 and let x1 , x2 belong to 𝑓 −1 (𝑇 ). Let y1 = 𝑓 (x1 ) and
y2 = 𝑓 (x2 ). Then y1 , y2 ∈ 𝑇 . For every 𝛼 ∈ ℜ
𝛼y1 + (1 − 𝛼)y2 = 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) ∈ 𝑇
Since 𝑓 is affine, this implies that
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) = 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) ∈ 𝑇
Therefore
𝛼x1 + (1 − 𝛼)x2 ∈ 𝑓 −1 (𝑇 )
We conclude that 𝑓 −1 (𝑇 ) is an affine set.
3.41 For any y1 , y2 ∈ 𝑓 (𝑆), choose x1 , x2 ∈ 𝑆 such that y𝑖 = 𝑓 (x𝑖 ). Since 𝑆 is convex,
𝛼x1 + (1 − 𝛼)x2 ∈ 𝑆 and therefore
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) = 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
= 𝛼y1 + (1 − 𝛼)y2 ∈ 𝑓 (𝑆)
Therefore 𝑓 (𝑆) is convex.
3.42 Suppose otherwise that y is not efficient. Then there exists another production
plan y′ ∈ 𝑌 such that y′ ≥ y. Since p > 0, this implies that py′ > py, contradicting
the assumption that y maximizes profit.
3.43 The random variable 𝑋 can be represented as the sum
∑
𝑋(𝑠)𝜒{𝑠}
𝑋=
𝑠∈𝑆
120
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
where 𝜒{𝑠} is the indicator function of the set {𝑠}. Since 𝐸 is linear
𝐸(𝑋) =
∑
𝑋(𝑠)𝐸(𝜒{𝑠} )
𝑠∈𝑆
=
∑
𝑝𝑆 𝑋(𝑠)
𝑠∈𝑆
since 𝐸(𝜒{𝑠} = 𝑃 ({𝑠}) = 𝑝𝑠 ≥ 0. For the random variable 𝑋 = 1, 𝑋(𝑠) = 1 for every 𝑠 ∈
𝑆 and
∑
𝑝𝑆 = 1
𝐸(1) =
𝑠∈𝑆
3.44 Let 𝑥1 , 𝑥2 ∈ 𝐶[0, 1]. Recall that addition in C[0,1] is defined by
(𝑥1 + 𝑥2 )(𝑡) = 𝑥1 (𝑡) + 𝑥2 (𝑡)
Therefore
𝑓 (𝑥1 + 𝑥2 ) = (𝑥1 + 𝑥2 )(1/2) = 𝑥1 (1/2) + 𝑥2 (1/2) = 𝑓 (𝑥1 ) + 𝑓 (𝑥2 )
Similarly
𝑓 (𝛼𝑥1 ) = (𝛼𝑥1 )(1/2) = 𝛼𝑥1 (1/2) = 𝛼𝑓 (𝑥1 )
3.45 Assume that x∗ = x∗1 + x∗2 + ⋅ ⋅ ⋅ + x∗𝑛 maximizes 𝑓 over 𝑆. Suppose to the contrary
that there exists y𝑗 ∈ 𝑆𝑗 such that 𝑓 (y𝑗 ) > 𝑓 (x∗𝑗 ). Then y = x∗1 + x∗2 + ⋅ ⋅ ⋅ + y𝑗 + ⋅ ⋅ ⋅ +
x∗𝑛 ∈ 𝑆 and
∑
∑
𝑓 (y) =
𝑓 (x∗𝑖 ) + 𝑓 (y𝑖 ) >
𝑓 (x∗𝑖 ) = 𝑓 (x∗ )
𝑖
𝑖∕=𝑗
contradicting the assumption at 𝑓 is maximized at x∗ .
Conversely, assume
𝑓 (x∗𝑖 ) ≥ 𝑓 (x𝑖 ) for every x𝑖 ∈ 𝑆𝑖
for every 𝑖 = 1, 2, . . . , 𝑛. Summing
∑
∑
∑
∑
𝑓 (x∗ ) = 𝑓 (
x∗𝑖 ) =
𝑓 (𝑥∗𝑖 ) ≥
𝑓 (x𝑖 ) = 𝑓 (
x𝑖 ) = 𝑓 (x) for every x ∈ 𝑆
x∗ = x∗1 + x∗2 + ⋅ ⋅ ⋅ + x∗𝑛 maximizes 𝑓 over 𝑆.
3.46
1. Assume (𝑥𝑡 ) is a sequence in 𝑙1 with 𝑠 =
the sequence of partial sums
𝑠𝑡 =
𝑡
∑
∑∞
𝑡=1
∣𝑥𝑗 ∣ < ∞. Let (𝑠𝑡 ) denote
∣𝑥𝑗 ∣
𝑗=1
Then (𝑠𝑡 ) is a bounded monotone sequence in ℜ𝑛 which converges to 𝑠. Consequently, (𝑠𝑡 ) is a Cauchy sequence. For every 𝜖 > 0 there exists an 𝑁 such
that
𝑚+𝑘
∑
∣𝑥𝑡 ∣ < 𝜖
𝑛=𝑚
121
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
for every 𝑚 ≥ 𝑁 and 𝑘 ≥ 0. Letting 𝑘 = 0
∣𝑥𝑡 ∣ < 𝜖 for every 𝑛 ≥ 𝑁
We conclude that 𝑥𝑡 → 0 so that (𝑥𝑡 ) ∈ 𝑐0 . This establishes 𝑙1 ⊆ 𝑐0 .
To see that the inclusion is strict, that is 𝑙1 ⊂ 𝑐0 , we observe that the sequence
(1/𝑛) = (1, 1/2, 1/3, . . . ) converges to zero but that since
∞ ∑
1
= 1 + 1 + 1+ = ∞
𝑛
2 3
𝑛=1
(1/𝑛) ∈
/ 𝑙1 .
Every convergent sequence is bounded (Exercise 1.97). Therefore 𝑐0 ⊂ 𝑙∞ .
2. Clearly, every sequence (𝑝𝑡 ) ∈ 𝑙1 defines a linear functional 𝑓 ∈ 𝑐′0 given by
𝑓 (x) =
∞
∑
𝑝𝑡 𝑥𝑡
𝑛=1
for every x = (𝑥𝑡 ) ∈ 𝑐0 . To show that 𝑓 is bounded we observe that every
(𝑥𝑡 ) ∈ 𝑐0 is bounded and consequently
∣𝑓 (x)∣ ≤
∞
∑
𝑛=1
∣𝑝𝑡 ∣ ∣𝑥𝑡 ∣ ≤ ∥(𝑥𝑡 )∥∞
∞
∑
𝑛=1
∣𝑝𝑡 ∣ = ∥(𝑝𝑡 )∥1 ∥(𝑥𝑡 )∥∞
Therefore 𝑓 ∈ 𝑐∗0 .
To show the converse, let e𝑡 denote the unit sequences
e1 = (1, 0, 0, . . . )
e2 = (0, 1, 0, . . . )
e3 = (0, 0, 1, . . . )
{e1 , e2 , e3 , . . . , } form a basis for 𝑐0 . Then every sequence (𝑥𝑡 ) ∈ 𝑐0 has a unique
representation
(𝑥𝑡 ) =
∞
∑
𝑥𝑡 e𝑡
𝑛=1
Let 𝑓 ∈ 𝑐∗0 be a continuous linear functional on 𝑐0 . By continuity and linearity
𝑓 (x) =
∞
∑
𝑥𝑡 𝑓 (e𝑡 )
𝑛=1
Let
𝑝𝑡 = 𝑓 (e𝑡 )
so that
𝑓 (x) =
∞
∑
𝑝𝑡 𝑥𝑡
𝑛=1
Every linear function is determined by its action on a basis (Exercise 3.23).
122
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
We need to show that the sequence (𝑝𝑡 ) ∈ 𝑙1 . For any 𝑁 , consider the sequence
x𝑡 = (𝑥1 , 𝑥2 , . . . , 𝑥𝑡 , 0, 0, . . . ) where
⎧
⎨0
𝑝𝑡 = 0 or 𝑛 ≥ 𝑁
𝑥𝑡 = ∣𝑝𝑡 ∣
⎩
otherwise
𝑝𝑡
Then (x𝑡 ) ∈ 𝑐0 , ∥x𝑡 ∥∞ = 1 and
𝑓 (x𝑡 ) =
𝑡
∑
𝑝𝑡 𝑥𝑡 =
𝑛=1
𝑡
∑
∣𝑝𝑡 ∣
𝑛=1
Since 𝑓 ∈ 𝑐∗0 , 𝑓 is bounded and therefore
𝑓 (x𝑡 ) ≤ ∥𝑓 ∥ ∥x𝑡 ∥ = ∥𝑓 ∥ < ∞
and therefore
𝑡
∑
∣𝑝𝑡 ∣ < ∞ for every 𝑁 = 1, 2, . . .
𝑛=1
Consequently
∞
∑
∣𝑝𝑡 ∣ = sup
𝑡
∑
𝑁 𝑛=1
𝑛=1
∣𝑝𝑡 ∣ ≤ ∥𝑓 ∥ < ∞
We conclude that (𝑝𝑡 ) ∈ 𝑙1 and therefore 𝑐∗0 = 𝑙1
3. Similarly, every sequence (𝑝𝑡 ) ∈ 𝑙∞ defines a linear functional 𝑓 on 𝑙1 given by
𝑓 (x) =
∞
∑
𝑝𝑡 𝑥𝑡
𝑛=1
for every x = (𝑥𝑡 ) ∈ 𝑙1 . Moreover 𝑓 is bounded since
∣𝑓 (x)∣ ≤
∞
∑
∣𝑝𝑡 ∣ ∣𝑥𝑡 ∣ ≤ ∥(𝑝𝑡 )∥
𝑛=1
∞
∑
∣𝑥𝑡 ∣ < ∞
𝑛=1
for every x = (𝑥𝑡 ) ∈ 𝑙1 Again, given any linear functional 𝑓 ∈ 𝑙1∗ , let 𝑝𝑡 = 𝑓 (e𝑡 )
where e𝑡 is the 𝑛 unit sequence. Then 𝑓 has the representation
𝑓 (x) =
∞
∑
𝑝𝑡 𝑥𝑡
𝑛=1
To show that (𝑝𝑡 ) ∈ 𝑙∞ , for 𝑁 = 1, 2, . . . , consider the sequence x𝑡 = (0, 0, . . . , 𝑥𝑡 , 0, 0, . . . )
where
⎧
⎨ ∣𝑝𝑡 ∣ 𝑛 = 𝑁 and 𝑝 ∕= 0
𝑡
𝑝𝑡
𝑥𝑡 =
⎩
0
otherwise
Then x𝑡 ∈ 𝑙1 , ∥x𝑡 ∥1 = 1 and
𝑓 (x𝑡 ) = ∣𝑝𝑡 ∣
Since 𝑓 ∈
𝑙1∗ ,
𝑓 is bounded and therefore
𝑁
𝑝 = 𝑓 (x𝑁 ) ≤ ∥𝑓 ∥ ∥x𝑛 ∥ = ∥𝑓 ∥
for every 𝑁 . Consequently (𝑝𝑁 ) ∈ 𝑙∞ . We conclude that 𝑙1∗ = 𝑙∞
123
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
3.47 By linearity
𝜑(𝑥, 𝑡) = 𝜑(𝑥, 0) + 𝜑(0, 𝑡)
= 𝜑(𝑥, 0) + 𝜑(0, 1)𝑡
Considered as a function of 𝑥, 𝜑(𝑥, 0) is a linear functional on 𝑋. Define
𝑔(𝑥) = 𝜑(𝑥, 0)
𝛼 = 𝜑(0, 1)
Then
𝜑(𝑥, 𝑡) = 𝑔(𝑥) + 𝛼𝑡
3.48 Suppose
𝑚
∩
kernel 𝑔𝑗 ⊆ kernel 𝑓
𝑗=1
Define the function 𝐺 : 𝑋 → ℜ𝑛 by
𝐺(x) = (𝑔1 (x), 𝑔2 (x), . . . , 𝑔𝑚 (x))
Then
kernel 𝐺 = { x ∈ 𝑋 : 𝑔𝑗 (x) = 0, 𝑗 = 1, 2, . . . 𝑚 }
𝑚
∩
=
kernel 𝑔𝑗
𝑗=1
⊆ kernel 𝑓
𝑓 : 𝑋 → ℜ and 𝐺 : 𝑋 → ℜ𝑛 . By Exercise 3.22, there exists a linear function 𝐻 : ℜ𝑛 → ℜ
such that 𝑓 = 𝐻 ∘ 𝐺. That is, for every 𝑥 ∈ 𝑋
𝑓 (x) = 𝐻 ∘ 𝐺(x) = 𝐻(𝑔1 (x), 𝑔2 (x), . . . , 𝑔𝑚 (x))
Let 𝛼𝑗 = 𝐻(e𝑗 ) where e𝑗 is the 𝑗-th unit vector in ℜ𝑚 . Since every linear mapping is
determined by its action on a basis, we must have
𝑓 (x) = 𝛼1 𝑔1 (x) + 𝛼2 𝑔2 (x) + ⋅ ⋅ ⋅ + 𝛼𝑚 𝑔𝑚 (x)
for every 𝑥 ∈ 𝑋
That is
𝑓 ∈ lin 𝑔1 , 𝑔2 , . . . , 𝑔𝑚
Conversely, suppose
𝑓 ∈ lin 𝑔1 , 𝑔2 , . . . , 𝑔𝑚
That is
𝑓 (x) = 𝛼1 𝑔1 (x) + 𝛼2 𝑔2 (x) + ⋅ ⋅ ⋅ + 𝛼𝑚 𝑔𝑚 (x)
for every 𝑥 ∈ 𝑋
∩𝑚
For every x ∈ 𝑗=1 kernel 𝑔𝑗 , 𝑔𝑗 (x) = 0, 𝑗 = 1, 2, . . . , 𝑚 and therefore 𝑓 (x) = 0.
Therefore 𝑥 ∈ kernel 𝑓 . That is
𝑚
∩
kernel 𝑔𝑗 ⊆ kernel 𝑓
𝑗=1
124
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3.49 Let 𝐻 be a hyperplane in 𝑋. Then there exists a unique subspace 𝑉 such that
𝐻 = x0 + 𝑉 for some x0 ∈ 𝐻 (Exercise 1.153). There are two cases to consider.
Case 1: x0 ∈
/ 𝑉 . For every x ∈ 𝑋, there exists unique 𝛼x ∈ ℜ such
x = 𝛼x x0 + 𝑣 for some 𝑣 ∈ 𝑉
Define 𝑓 (x) = 𝛼x . Then 𝑓 : 𝑋 → ℜ. It is straightforward to show that 𝑓 is linear.
Since 𝐻 = x0 + 𝑉 , 𝛼x = 1 if and only if x ∈ 𝐻. Therefore
𝐻 = { x ∈ 𝑋 : 𝑓 (x) = 1 }
Case 2: x0 ∈ 𝑉 . In this case, choose some x1 ∈
/ 𝑉 . Again, for every x ∈ 𝑋, there
exists a unique 𝛼x ∈ ℜ such
x = 𝛼x x1 + 𝑣 for some 𝑣 ∈ 𝑉
and 𝑓 (x) = 𝛼x is a linear functional on 𝑋. Furthermore x0 ∈ 𝑉 implies 𝐻 = 𝑉
(Exercise 1.153) and therefore 𝑓 (x) = 0 if and only if x ∈ 𝐻. Therefore
𝐻 = { x ∈ 𝑋 : 𝑓 (x) = 0 }
Conversely, let 𝑓 be a nonzero linear functional in 𝑋 ′ . Let 𝑉 = kernel 𝑓 and choose
x0 ∈ 𝑓 −1 (1). (This is why we require 𝑓 ∕= 0). For any x ∈ 𝑋
𝑓 (x − 𝑓 (x)x0 ) = 𝑓 (x) − 𝑓 (x) × 1 = 0
so that x − 𝑓 (x)x0 ∈ 𝑉 . That is, x = 𝑓 (x)x0 + 𝑣 for some 𝑣 ∈ 𝑉 . Therefore,
𝑋 = lin (x0 , 𝑉 ) so that 𝑉 is a maximal proper subspace.
For any 𝑐 ∈ ℜ, let x1 ∈ 𝑓 −1 (𝑐). Then, for every x ∈ 𝑓 −1 (𝑐), 𝑓 (x − x1 ) = 0 and
{ x : 𝑓 (x) = 𝑐} = {x : 𝑓 (x − x1 ) = 0 } = x1 + 𝑉
which is a hyperplane.
3.50 By the previous exercise, there exists a linear functional 𝑔 such that
𝐻 = { 𝑥 ∈ 𝑋 : 𝑓 (𝑥) = 𝑐 }
for some 𝑐 ∈ ℜ. Since 0 ∈
/ 𝐻, 𝑐 ∕= 0. Without loss of generality, we can assume that
𝑐 = 1. (Otherwise, take the linear functional 1𝑐 𝑓 ).
To show that 𝑓 is unique, assume that 𝑔 is another linear functional with
𝐻 = { x : 𝑓 (𝑥) = 1} = {x : 𝑔(𝑥) = 1 }
Then
𝐻 ⊆ { x : 𝑓 (𝑥) − 𝑔(𝑥) = 0 }
Since 𝐻 is a maximal subset, 𝑋 is the smallest subspace containing 𝐻. Therefore
𝑓 (𝑥) = 𝑔(𝑥) for every 𝑥 ∈ 𝑋.
3.51 By Exercise 3.49, there exists a linear functional 𝑓 such that
𝐻 = { 𝑥 ∈ 𝑋 : 𝑓 (𝑥) = 0 }
Since x0 ∈
/ 𝐻, 𝑓 (x0 ) ∕= 0. Without loss of generality, we can normalize so that 𝑓 (x0 ) =
1. (If 𝑓 (x0 ) = 𝑐 ∕= 1, then the linear functional 𝑓 ′ = 1/c𝑓 has 𝑓 ′ (x0 ) = 1 and kernel 𝑓 ′ =
𝐻.)
To show that 𝑓 is unique, suppose that 𝑔 is another linear functional with kernel 𝑔 = 𝐻
and 𝑔(x0 ) = 1. For any x ∈ 𝑋, there exists 𝛼 ∈ ℜ such that
x = 𝛼x0 + v
with 𝑣 ∈ 𝐻 (Exercise 1.153). Since 𝑓 (v) = 𝑔(v) = 0 and 𝑓 (x0 ) = 𝑔(x0 ) = 1
𝑔(x) = 𝑔(𝛼x0 + v) = 𝛼𝑔(𝑥0 ) = 𝛼𝑓 (x0 ) = 𝑓 (𝛼x0 + v) = 𝑓 (x)
3.52 Assume 𝑓 = 𝜆𝑔, 𝜆 ∕= 0. Then
𝑓 (𝑥) = 0 ⇐⇒ 𝑔(𝑥) = 0
Conversely, let 𝐻 = 𝑓 −1 (0) = 𝑔 −1 (0). If 𝐻 = 𝑋, then 𝑓 = 𝑔 = 0. Otherwise, 𝐻
is a hyperplane containing 0. Choose some x0 ∈
/ 𝐻. Every x ∈ 𝑋 has a unique
representation x = 𝛼x0 + v with v ∈ 𝐻 (Exercise 1.153) and
𝑓 (x) = 𝛼𝑓 (x0 )
𝑔(x) = 𝛼𝑔(x0 )
Let 𝜆 = 𝑓 (x0 )/𝑔(x0 ) so that 𝑓 (x0 ) = 𝜆𝑔(x0 ). Substituting
𝑓 (x) = 𝛼𝑓 (x0 ) = 𝛼𝜆𝑔(x0 ) = 𝜆𝑔(x)
3.53 𝑓 continuous implies that the set { 𝑥 ∈ 𝑋 : 𝑓 (𝑥) = 𝑐 } = 𝑓 −1 (𝑐) is closed for every
𝑐 ∈ ℜ (Exercise 2.67). Conversely, let 𝑐 = 0 and assume that 𝐻 = { 𝑥 ∈ 𝑋 : 𝑓 (𝑥) = 0 }
is closed. There exists x0 ∕= 0 such that 𝑋 = lin {𝑥0 , 𝐻} (Exercise 1.153). Let x𝑛 → x
be a convergent sequence in 𝑋. Then there exist 𝛼𝑛 , 𝛼 ∈ ℜ and v𝑛 , 𝑣 ∈ 𝐻 such that
x_n = α_n x0 + v_n and x = αx0 + v. Since H is closed and x0 ∉ H, δ = inf{ ∥x0 − w∥ : w ∈ H } > 0, and

∥x_n − x∥ = ∥(α_n − α)x0 + (v_n − v)∥ ≥ |α_n − α| δ → 0

which implies that α_n → α. By linearity
𝑓 (x𝑛 ) = 𝛼𝑛 𝑓 (x0 ) + 𝑓 (v𝑛 ) = 𝛼𝑛 𝑓 (x0 )
since v𝑛 ∈ 𝐻 and therefore
𝑓 (x𝑛 ) = 𝛼𝑛 𝑓 (x0 ) → 𝛼𝑓 (x0 ) = 𝑓 (x)
𝑓 is continuous.
3.54

f(x + x′, y) = ∑_{i=1}^m ∑_{j=1}^n a_ij (x_i + x′_i) y_j
            = ∑_{i=1}^m ∑_{j=1}^n a_ij x_i y_j + ∑_{i=1}^m ∑_{j=1}^n a_ij x′_i y_j
            = f(x, y) + f(x′, y)

Similarly, we can show that

f(x, y + y′) = f(x, y) + f(x, y′)

and

f(αx, y) = α f(x, y) = f(x, αy) for every α ∈ ℜ
3.55 Let x_1, x_2, . . . , x_m be a basis for X and y_1, y_2, . . . , y_n be a basis for Y. Let the numbers a_ij represent the action of f on these bases, that is

a_ij = f(x_i, y_j)   i = 1, 2, . . . , m,  j = 1, 2, . . . , n

and let A be the m × n matrix of numbers a_ij.

Choose any x ∈ X and y ∈ Y and let their representations in terms of the bases be

x = ∑_{i=1}^m α_i x_i  and  y = ∑_{j=1}^n β_j y_j

respectively. By the bilinearity of f

f(x, y) = f( ∑_i α_i x_i, ∑_j β_j y_j )
        = ∑_i α_i f( x_i, ∑_j β_j y_j )
        = ∑_i α_i ∑_j β_j f(x_i, y_j)
        = ∑_i α_i ∑_j a_ij β_j
        = x′Ay

where x and y are identified with their coordinate vectors (α_1, . . . , α_m) and (β_1, . . . , β_n).
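A quick numerical sketch of this representation: for an arbitrary matrix A (the values below are my own illustrative assumptions), the bilinear form f(x, y) = x′Ay can be evaluated either directly or by expanding over the coordinates, and the two agree.

```python
import numpy as np

# Sketch of the matrix representation of a bilinear functional f(x, y) = x'Ay.
# The matrix and vectors are arbitrary illustrative choices.
A = np.array([[1.0, 2.0, 0.5],
              [0.0, -1.0, 3.0]])      # m = 2, n = 3
x = np.array([0.7, -0.3])             # coordinates of x in the basis for X
y = np.array([1.0, 0.5, -2.0])        # coordinates of y in the basis for Y

direct = x @ A @ y                    # x'Ay

# expansion sum_i sum_j a_ij * x_i * y_j from the bilinearity argument
expanded = sum(A[i, j] * x[i] * y[j]
               for i in range(A.shape[0])
               for j in range(A.shape[1]))

print(direct, expanded)
assert np.isclose(direct, expanded)
```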
3.56 Every y ∈ 𝑋 ′ is a linear functional on 𝑋. Hence
y(x + x′ ) = y(x) + y(x′ )
y(𝛼x) = 𝛼y(x)
and therefore
𝑓 (x + x′ , y) = y(x + x′ ) = y(x) + y(x′ ) = 𝑓 (x, y) + 𝑓 (x′ , y)
𝑓 (𝛼x, y) = y(𝛼x) = 𝛼y(x) = 𝛼𝑓 (x, y)
In the dual space 𝑋 ′
(y + y′ )(x) ≡ y(x) + y′ (x)
(𝛼y)(x) ≡ 𝛼y(x)
and therefore
𝑓 (x, y + y′ ) = (y + y′ )(x) = y(x) + y′ (x) = 𝑓 (x, y) + 𝑓 (x, y′ )
𝑓 (x, 𝛼y) = (𝛼y)(x) = 𝛼y(x) = 𝛼𝑓 (x, y)
3.57 Assume 𝑓1 , 𝑓2 ∈ 𝐵𝑖𝐿(𝑋 × 𝑌, 𝑍). Define the mapping 𝑓1 + 𝑓2 : 𝑋 × 𝑌 → 𝑍 by
(𝑓1 + 𝑓2 )(x, y) = 𝑓1 (x, y) + 𝑓2 (x, y)
We have to confirm that 𝑓1 + 𝑓2 is bilinear, that is
(𝑓1 + 𝑓2 )(x1 + x2 , y) = 𝑓1 (x1 + x2 , y) + 𝑓2 (x1 + x2 , y)
= 𝑓1 (x1 , y) + 𝑓1 (x2 , y) + 𝑓2 (x1 , y) + 𝑓2 (x2 , y)
= f_1(x1, y) + f_2(x1, y) + f_1(x2, y) + f_2(x2, y)
= (𝑓1 + 𝑓2 )(x1 , y) + (𝑓1 + 𝑓2 )(x2 , y)
Similarly, we can show that
(𝑓1 + 𝑓2 )(x, y1 + y2 ) = (𝑓1 + 𝑓2 )(x, y1 ) + (𝑓1 + 𝑓2 )(x, y2 )
and
(𝑓1 + 𝑓2 )(𝛼x, y) = 𝛼(𝑓1 + 𝑓2 )(x, y) = (𝑓1 + 𝑓2 )(x, 𝛼y)
For every 𝑓 ∈ 𝐵𝑖𝐿(𝑋 × 𝑌, 𝑍) define the function 𝛼𝑓 : 𝑋 × 𝑌 → 𝑍 by
(𝛼𝑓 )(x, y) = 𝛼𝑓 (x, y)
𝛼𝑓 is also bilinear, since
(𝛼𝑓 )(x1 + x2 , y) = 𝛼𝑓 (x1 + x2 , y)
= 𝛼𝑓 (x1 , y) + 𝛼𝑓 (x2 , y)
= (αf)(x1, y) + (αf)(x2, y)
Similarly
(𝛼𝑓 )(x, y1 + y2 ) = (𝛼𝑓 )(x, y1 ) + (𝛼𝑓 )(x, y2 )
(𝛼𝑓 )(𝛽x, y) = 𝛽(𝛼𝑓 )(x, y) = (𝛼𝑓 )(x, 𝛽y)
Analogous to (Exercise 2.78), 𝑓1 + 𝑓2 and 𝛼𝑓 are also continuous
3.58
1. 𝐵𝐿(𝑌, 𝑍) is a linear space and therefore so is 𝐵𝐿(𝑋, 𝐵𝐿(𝑌, 𝑍)) (Exercise
3.33).
2. 𝜑x is linear and therefore
𝑓 (x, y1 + y2 ) = 𝜑(x)(y1 + y2 ) = 𝜑(x)(y1 ) + 𝜑(x)(y2 ) = 𝑓 (x, y1 ) + 𝑓 (x, y2 )
and
𝑓 (x, 𝛼y) = 𝜑(x)(𝛼y) = 𝛼𝜑(x)(y) = 𝛼𝑓 (x, y)
Similarly, 𝜑 is linear and therefore
𝑓 (x1 + x2 , y) = 𝜑x1 +x2 (y) = 𝜑x1 (y) + 𝜑x2 (y) = 𝑓 (x1 , y) + 𝑓 (x2 , y)
and
𝑓 (𝛼x, y) = 𝜑𝛼x (y) = 𝛼𝜑x (y) = 𝛼𝑓 (x, y)
𝑓 is bilinear
3. Let 𝑓 ∈ 𝐵𝑖𝐿(𝑋 × 𝑌, 𝑍). For every x ∈ 𝑋, the partial function 𝑓x : 𝑌 → 𝑍 is
linear. Therefore 𝑓x ∈ 𝐵𝐿(𝑌, 𝑍) and 𝜑 ∈ 𝐵𝐿(𝑋, 𝐵𝐿(𝑌, 𝑍)).
3.59 Bilinearity and symmetry imply

f(x − αy, x − αy) = f(x, x − αy) − α f(y, x − αy)
                 = f(x, x) − α f(x, y) − α f(y, x) + α² f(y, y)
                 = f(x, x) − 2α f(x, y) + α² f(y, y)

Nonnegativity implies

f(x − αy, x − αy) = f(x, x) − 2α f(x, y) + α² f(y, y) ≥ 0     (3.38)

for every x, y ∈ X and α ∈ ℜ.

Case 1: f(x, x) = f(y, y) = 0. Then (3.38) becomes

−2α f(x, y) ≥ 0

Setting α = f(x, y) generates

−2 (f(x, y))² ≥ 0

which implies that f(x, y) = 0.

Case 2: Either f(x, x) > 0 or f(y, y) > 0. Without loss of generality, assume f(y, y) > 0 and set α = f(x, y)/f(y, y) in (3.38). That is

f(x, x) − 2 (f(x, y)/f(y, y)) f(x, y) + (f(x, y)/f(y, y))² f(y, y) ≥ 0

or

f(x, x) − f(x, y)²/f(y, y) ≥ 0

which implies

(f(x, y))² ≤ f(x, x) f(y, y) for every x, y ∈ X
3.60 A Euclidean space is a finite-dimensional normed space, which is complete (Proposition 1.4).
3.61 𝑓 (x, y) = x𝑇 y satisfies the requirements of Exercise 3.59 and therefore
(x𝑇 y)2 ≤ (x𝑇 x)(y𝑇 y)
Taking square roots

|x^T y| ≤ ∥x∥ ∥y∥
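As a quick numerical sanity check of the Cauchy-Schwartz inequality (the random vectors below are arbitrary illustrative choices):

```python
import numpy as np

# Check |x'y| <= ||x|| ||y|| for a few arbitrary vectors.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=4)
    y = rng.normal(size=4)
    lhs = abs(x @ y)
    rhs = np.linalg.norm(x) * np.linalg.norm(y)
    assert lhs <= rhs + 1e-12
    print(f"|x'y| = {lhs:.4f} <= {rhs:.4f} = ||x|| ||y||")
```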
3.62 By definition, the inner product is a bilinear functional. To show that it is continuous, let 𝑋 be an inner product space with inner product denote by x𝑇 y. Let x𝑛 → x
and y𝑛 → y be sequences in 𝑋.
|(x^n)^T y^n − x^T y| = |(x^n)^T y^n − (x^n)^T y + (x^n)^T y − x^T y|
                     ≤ |(x^n)^T (y^n − y)| + |(x^n − x)^T y|

Applying the Cauchy-Schwartz inequality

|(x^n)^T y^n − x^T y| ≤ ∥x^n∥ ∥y^n − y∥ + ∥x^n − x∥ ∥y∥

Since the sequence x^n converges, it is bounded, that is there exists M such that ∥x^n∥ ≤ M for every n. Therefore

|(x^n)^T y^n − x^T y| ≤ M ∥y^n − y∥ + ∥x^n − x∥ ∥y∥ → 0
3.63 Applying the properties of the inner product

∙ ∥x∥ = √(x^T x) ≥ 0

∙ ∥x∥ = √(x^T x) = 0 if and only if x = 0

∙ ∥αx∥ = √((αx)^T (αx)) = √(α² x^T x) = |α| ∥x∥

To prove the triangle inequality, observe that bilinearity and symmetry imply

∥x + y∥² = (x + y)^T (x + y)
         = x^T x + x^T y + y^T x + y^T y
         = x^T x + 2 x^T y + y^T y
         = ∥x∥² + 2 x^T y + ∥y∥²
         ≤ ∥x∥² + 2 |x^T y| + ∥y∥²

Applying the Cauchy-Schwartz inequality

∥x + y∥² ≤ ∥x∥² + 2 ∥x∥ ∥y∥ + ∥y∥² = (∥x∥ + ∥y∥)²
3.64 For every y ∈ 𝑋, the partial function 𝑓y (x) = x𝑇 y is a linear functional on 𝑋
(since x𝑇 y is bilinear). Continuity follows from the Cauchy-Schwartz inequality, since
for every x ∈ 𝑋
|f_y(x)| = |x^T y| ≤ ∥y∥ ∥x∥

which shows that ∥f_y∥ ≤ ∥y∥. In fact, ∥f_y∥ = ∥y∥ since

∥f_y∥ = sup_{∥x∥=1} |f_y(x)| ≥ f_y(y/∥y∥) = (y/∥y∥)^T y = (1/∥y∥) y^T y = ∥y∥
3.65 By the Weierstrass Theorem (Theorem 2.2), the continuous function 𝑔(x) = ∥x∥
attains a maximum on the compact set 𝑆 at some point x0 .
We claim that x0 is an extreme point. Suppose not. Then, there exist x1 , x2 ∈ 𝑆 such
that
x0 = αx1 + (1 − α)x2 = x2 + α(x1 − x2)

with x1 ≠ x2 and 0 < α < 1. Since x0 maximizes ∥x∥ on S

∥x2∥² ≤ ∥x0∥² = (x2 + α(x1 − x2))^T (x2 + α(x1 − x2))
             = ∥x2∥² + 2α x2^T (x1 − x2) + α² ∥x1 − x2∥²

or

2 x2^T (x1 − x2) + α ∥x1 − x2∥² ≥ 0     (3.39)

Similarly, interchanging the roles of x1 and x2

2 x1^T (x2 − x1) + α ∥x2 − x1∥² ≥ 0

or

−2 x1^T (x1 − x2) + α ∥x1 − x2∥² ≥ 0     (3.40)

Adding the inequalities (3.39) and (3.40) yields

2 (x2 − x1)^T (x1 − x2) + 2α ∥x1 − x2∥² ≥ 0

or

2 ∥x2 − x1∥² = −2 (x2 − x1)^T (x1 − x2) ≤ 2α ∥x1 − x2∥²

and therefore

∥x2 − x1∥² ≤ α ∥x2 − x1∥²

Since 0 < α < 1, this implies that ∥x1 − x2∥ = 0 or x1 = x2, which contradicts our premise that x0 is not an extreme point.
3.66 Using bilinearity and symmetry of the inner product

∥x + y∥² + ∥x − y∥² = (x + y)^T (x + y) + (x − y)^T (x − y)
                   = x^T x + x^T y + y^T x + y^T y + x^T x − x^T y − y^T x + y^T y
                   = 2 x^T x + 2 y^T y
                   = 2 ∥x∥² + 2 ∥y∥²
3.67 Note that ∥x∥ = ∥y∥ = 1 and

∥x + y∥ = sup_{0≤t≤1} |x(t) + y(t)| = sup_{0≤t≤1} (1 + t) = 2

∥x − y∥ = sup_{0≤t≤1} |x(t) − y(t)| = sup_{0≤t≤1} (1 − t) = 1

so that

∥x + y∥² + ∥x − y∥² = 5 ≠ 4 = 2 ∥x∥² + 2 ∥y∥²

Since x and y do not satisfy the parallelogram law (Exercise 3.66), C(X) cannot be an inner product space.
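A short numerical sketch of this counterexample, taking x(t) = 1 and y(t) = t on [0, 1] (an assumption on my part, consistent with the suprema computed above) and approximating the sup norm on a fine grid:

```python
import numpy as np

# Approximate the sup norm on [0,1] with a grid and check that the
# parallelogram law fails for x(t) = 1 and y(t) = t (assumed functions,
# consistent with the suprema used in the solution).
t = np.linspace(0.0, 1.0, 10001)
x = np.ones_like(t)
y = t

sup = lambda f: np.max(np.abs(f))
lhs = sup(x + y) ** 2 + sup(x - y) ** 2        # 2^2 + 1^2 = 5
rhs = 2 * sup(x) ** 2 + 2 * sup(y) ** 2        # 2 + 2 = 4
print(lhs, rhs)                                # 5.0 4.0, so the law fails
```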
3.68 Let {x1, x2, . . . , xn} be a set of pairwise orthogonal vectors. Assume

0 = α_1 x1 + α_2 x2 + ⋅ ⋅ ⋅ + α_n xn

Using bilinearity, this implies

0 = 0^T x_j = ∑_{i=1}^n α_i x_i^T x_j = α_j ∥x_j∥²

for every j = 1, 2, . . . , n. Since x_j ≠ 0, this implies α_j = 0 for every j = 1, 2, . . . , n. We conclude that the set {x1, x2, . . . , xn} is linearly independent (Exercise 1.133).
3.69 Let x1, x2, . . . , xn be an orthonormal basis for X. Since A represents f

f(x_j) = ∑_{i=1}^n a_ij x_i

for j = 1, 2, . . . , n. Taking the inner product with x_k,

x_k^T f(x_j) = x_k^T ( ∑_{i=1}^n a_ij x_i ) = ∑_{i=1}^n a_ij x_k^T x_i

Since {x1, x2, . . . , xn} is orthonormal

x_k^T x_i = 1 if i = k and 0 otherwise

so that the sum simplifies to

x_i^T f(x_j) = a_ij for every i, j
3.70

1. By the Cauchy-Schwartz inequality

|x^T y| ≤ ∥x∥ ∥y∥

for every x and y, so that

|cos θ| = |x^T y| / (∥x∥ ∥y∥) ≤ 1

which implies

−1 ≤ cos θ ≤ 1

2. Since cos 90° = 0, θ = 90° implies that x^T y = 0 or x ⊥ y. Conversely, if x ⊥ y, x^T y = 0 and cos θ = 0 which implies θ = 90 degrees.
3.71 By bilinearity

∥x + y∥² = (x + y)^T (x + y) = ∥x∥² + x^T y + y^T x + ∥y∥²

If x ⊥ y, x^T y = y^T x = 0 and

∥x + y∥² = ∥x∥² + ∥y∥²

3.72
1. Choose some x̂ ∈ S and let Ŝ be the set of all x ∈ S which are closer to y than x̂, that is

Ŝ = { x ∈ S : ∥x − y∥ ≤ ∥x̂ − y∥ }

Ŝ is compact (Proposition 1.4). By the Weierstrass theorem (Theorem 2.2), the continuous function g(x) = ∥x − y∥ attains a minimum on Ŝ at some point x0 ∈ Ŝ. That is

∥x0 − y∥ ≤ ∥x − y∥ for every x ∈ Ŝ

A fortiori

∥x0 − y∥ ≤ ∥x − y∥ for every x ∈ S

2. Suppose there exists some x1 ∈ S such that

∥x1 − y∥ = ∥x0 − y∥ = δ

By the parallelogram law (Exercise 3.66)

∥x0 − x1∥² = ∥(x0 − y) + (y − x1)∥²
           = 2 ∥x0 − y∥² + 2 ∥x1 − y∥² − ∥(x0 − y) − (y − x1)∥²
           = 2 ∥x0 − y∥² + 2 ∥x1 − y∥² − 4 ∥½(x0 + x1) − y∥²
           = 2δ² + 2δ² − 4 ∥½(x0 + x1) − y∥²

Since ½(x0 + x1) ∈ S and therefore ∥½(x0 + x1) − y∥ ≥ δ, we have

∥x0 − x1∥² ≤ 2δ² + 2δ² − 4δ² = 0

which implies that x1 = x0.

3. Let x ∈ S. Since S is convex, the line segment αx + (1 − α)x0 = x0 + α(x − x0) ∈ S and therefore (since x0 is the closest point)

∥x0 − y∥² ≤ ∥x0 + α(x − x0) − y∥²
          = ∥(x0 − y) + α(x − x0)∥²
          = ((x0 − y) + α(x − x0))^T ((x0 − y) + α(x − x0))
          = ∥x0 − y∥² + 2α (x0 − y)^T (x − x0) + α² ∥x − x0∥²

which implies that

2α (x0 − y)^T (x − x0) + α² ∥x − x0∥² ≥ 0

Dividing through by α

2 (x0 − y)^T (x − x0) + α ∥x − x0∥² ≥ 0

which inequality must hold for every 0 < α < 1. Letting α → 0, we must have

(x0 − y)^T (x − x0) ≥ 0

as required.
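The characterization (x0 − y)^T(x − x0) ≥ 0 is easy to check numerically for a simple convex set. The sketch below (my own illustrative choice of set and point, not from the text) projects y onto a box in ℜ², where the closest point is obtained by clipping each coordinate, and verifies the inequality at random points of the box.

```python
import numpy as np

# Projection of y onto the box S = [0,1] x [0,1] (an illustrative convex set).
# For a box, the closest point is obtained by clipping coordinatewise.
rng = np.random.default_rng(1)
y = np.array([1.7, -0.4])
x0 = np.clip(y, 0.0, 1.0)            # closest point in S to y

# Verify the variational inequality (x0 - y)'(x - x0) >= 0 for points x in S.
for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=2)
    assert (x0 - y) @ (x - x0) >= -1e-12

print("closest point:", x0)
```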
3.73

1. Using the parallelogram law (Exercise 3.66),

∥x_m − x_n∥² = ∥(x_m − y) + (y − x_n)∥²
            = 2 ∥x_m − y∥² + 2 ∥y − x_n∥² − 4 ∥½(x_m + x_n) − y∥²

for every m, n. Since S is convex, (x_m + x_n)/2 ∈ S and therefore ∥½(x_m + x_n) − y∥ ≥ d. Therefore

∥x_m − x_n∥² ≤ 2 ∥x_m − y∥² + 2 ∥y − x_n∥² − 4d²

Since ∥x_m − y∥ → d and ∥x_n − y∥ → d as m, n → ∞, we conclude that ∥x_m − x_n∥² → 0. That is, (x_n) is a Cauchy sequence.

2. Since S is a closed subspace of a complete space, there exists x0 ∈ S such that x_n → x0. By continuity of the norm

∥x0 − y∥ = lim_{n→∞} ∥x_n − y∥ = d

Therefore

∥x0 − y∥ ≤ ∥x − y∥ for every x ∈ S

Uniqueness follows in the same manner as the finite-dimensional case.
3.74 Define 𝑔 : 𝑇 → 𝑆 by
𝑔(y) = { x ∈ 𝑆 : x is closest to y }
The function 𝑔 is well-defined since for every y ∈ 𝑇 there exists a unique point x ∈ 𝑆
which is closest to y (Exercise 3.72). Clearly, for every x ∈ 𝑆, x is the closest point to
x. Therefore 𝑔(x) = x for every x ∈ 𝑆.
To show that 𝑔 is continuous, choose any y1 and y2 in 𝑇
x1 = 𝑔(y1 ) and x2 = 𝑔(y2 )
be the corresponding closest points in S. Then, writing a = y1 − y2 and b = x1 − x2,

∥a − b∥² = a^T a + b^T b − 2 a^T b = ∥y1 − y2∥² + ∥x1 − x2∥² − 2 (y1 − y2)^T (x1 − x2)

so that

∥y1 − y2∥² − ∥x1 − x2∥² = ∥a − b∥² − 2 ∥x1 − x2∥² + 2 (y1 − y2)^T (x1 − x2)
                       = ∥a − b∥² + 2 ((y1 − y2) − (x1 − x2))^T (x1 − x2)
                       = ∥a − b∥² + 2 (x1 − y1)^T (x2 − x1) + 2 (x2 − y2)^T (x1 − x2)

Using Exercise 3.72

(x1 − y1)^T (x2 − x1) ≥ 0 and (x2 − y2)^T (x1 − x2) ≥ 0

which implies that the left-hand side

∥y1 − y2∥² − ∥x1 − x2∥² ≥ 0

or

∥x1 − x2∥ = ∥g(y1) − g(y2)∥ ≤ ∥y1 − y2∥
𝑔 is Lipschitz continuous.
3.75 Let 𝑆 = kernel 𝑓 . Then 𝑆 is a closed subspace of 𝑋. If 𝑆 = 𝑋, then 𝑓 is the zero
functional and y = 0 is the required element. Otherwise chose any y ∈
/ 𝑆 and let x0
be the closest point in 𝑆 (Exercise 3.72). Define z = x0 − y. Then z ∕= 0 and
z𝑇 x ≥ 0 for every x ∈ 𝑆
Since 𝑆 is subspace, this implies that
z𝑇 x = 0 for every x ∈ 𝑆
that is z is orthogonal to 𝑆.
Let Ŝ be the subset of X defined by

Ŝ = { f(x)z − f(z)x : x ∈ X }

For every element f(x)z − f(z)x of Ŝ

f(f(x)z − f(z)x) = f(x)f(z) − f(z)f(x) = 0

Therefore Ŝ ⊆ S = kernel f. For every x ∈ X, the vector f(x)z − f(z)x belongs to Ŝ ⊆ S, and z ∈ S^⊥, so that

(f(x)z − f(z)x)^T z = f(x) z^T z − f(z) x^T z = 0

Therefore

f(x) = (f(z)/∥z∥²) x^T z = x^T ( z f(z)/∥z∥² ) = x^T y

where

y = z f(z)/∥z∥²
3.76 𝑋 ∗ is always complete (Proposition 3.3). To show that it is a Hilbert space, we
have to that it has an inner product. For this purpose, it will be clearer if we use an
alternative notation < x, y > to denote the inner product of x and y. Assume 𝑋 is a
Hilbert space. By the Riesz representation theorem (Exercise 3.75), for every 𝑓 ∈ 𝑋 ∗
there exists y𝑓 ∈ 𝑋 such that
𝑓 (x) =< x, y𝑓 > for every x ∈ 𝑋
Furthermore, if y𝑓 represents 𝑓 and y𝑔 represents 𝑔 ∈ 𝑋 ∗ , then y𝑓 + y𝑔 represents
𝑓 + 𝑔 and 𝛼y𝑓 represents 𝛼𝑓 since
(𝑓 + 𝑔)(x) = 𝑓 (x) + 𝑔(x) =< x, y𝑓 > + < x, y𝑔 >=< x, y𝑓 + y𝑔 >
(𝛼𝑓 )(x) = 𝛼𝑓 (x) = 𝛼 < x, y𝑓 >=< x, 𝛼y𝑓 >
Define an inner product on 𝑋 ∗ by
< 𝑓, 𝑔 >=< y𝑔 , y𝑓 >
We show that it satisfies the properties of an inner product, namely
symmetry < 𝑓, 𝑔 >=< y𝑔 , y𝑓 >=< y𝑓 , y𝑔 >=< 𝑔, 𝑓 >
additivity < 𝑓1 + 𝑓2 , 𝑔 >=< y𝑔 , y𝑓1 +𝑓2 >=< y𝑔 , y𝑓1 + y𝑓2 >=< 𝑓1 , 𝑔 > + < 𝑓2 , 𝑔 >
homogeneity < 𝛼𝑓, 𝑔 >=< y𝑔 , 𝛼y𝑓 >= 𝛼 < y𝑔 , y𝑓 >= 𝛼 < 𝑓, 𝑔 >
positive definiteness < f, f > = < y_f, y_f > ≥ 0 and < f, f > = < y_f, y_f > = 0 if and only if f = 0.
Therefore, 𝑋 ∗ is a complete inner product space, that is a Hilbert space.
3.77 Let 𝑋 be a Hilbert space. Applying the previous exercise a second time, 𝑋 ∗∗ is also
a Hilbert space. Let 𝐹 be an arbitrary functional in 𝑋 ∗∗ . By the Riesz representation
theorem, there exists 𝑔 ∈ 𝑋 ∗ such that
𝐹 (𝑓 ) =< 𝑓, 𝑔 > for every 𝑓 ∈ 𝑋 ∗
Again by the Riesz representation theorem, there exists x𝑓 (representing 𝑓 ) and x𝐹
(representing 𝑔) in 𝑋 such that
𝐹 (𝑓 ) =< 𝑓, 𝑔 >=< x𝐹 , x𝑓 >
and
𝑓 (x) =< x, x𝑓 >
In particular,
𝑓 (x𝐹 ) =< x𝐹 , x𝑓 >= 𝐹 (𝑓 )
That is, for every 𝐹 ∈ 𝑋 ∗∗ , there exists an element x𝐹 ∈ 𝑋 such that
𝐹 (𝑓 ) = 𝑓 (x𝐹 )
𝑋 is reflexive.
3.78
1. Adapt Exercise 3.64.
2. By Exercise 3.75, there exists unique x∗ ∈ 𝑋 such that
𝑓y (x) = x𝑇 x∗
3. Substituting
𝑓 (x)𝑇 y = 𝑓y (x) = x𝑇 x∗ = 𝑥𝑇 𝑓 ∗ (y)
4. For every y1, y2 ∈ Y

x^T f*(y1 + y2) = f(x)^T (y1 + y2) = f(x)^T y1 + f(x)^T y2 = x^T f*(y1) + x^T f*(y2)

and for every y ∈ Y

x^T f*(αy) = f(x)^T αy = α f(x)^T y = α x^T f*(y) = x^T α f*(y)
3.79 The zero element 0𝑋 is a fixed point of every linear operator (Exercise 3.13).
3.80 𝐴𝐴−1 = 𝐼 so that
det(𝐴) det(𝐴−1 ) = det(𝐼) = 1
3.81 Expanding along the ith row using (3.8)

det(C) = ∑_{j=1}^n (−1)^{i+j} (α a_ij + β b_ij) det(C_ij)
       = α ∑_{j=1}^n (−1)^{i+j} a_ij det(C_ij) + β ∑_{j=1}^n (−1)^{i+j} b_ij det(C_ij)

But the matrices differ only in the ith row and therefore

A_ij = B_ij = C_ij,  j = 1, 2, . . . , n

so that

det(C) = α ∑_{j=1}^n (−1)^{i+j} a_ij det(A_ij) + β ∑_{j=1}^n (−1)^{i+j} b_ij det(B_ij)
       = α det(A) + β det(B)
3.82 Suppose that x1 and x2 are eigenvectors corresponding to the eigenvalue 𝜆. By
linearity
𝑓 (x1 + x2 ) = 𝑓 (x1 ) + 𝑓 (x2 ) = 𝜆x1 + 𝜆x2 = 𝜆(x1 + x2 )
and
f(αx1) = αf(x1) = αλx1
Therefore x1 + x2 and 𝛼x1 are also eigenvectors.
3.83 Suppose 𝑓 is singular. Then there exists x ∕= 0 such that 𝑓 (x) = 0. Therefore x
is an eigenvector with eigenvalue 0. Conversely, if 0 is an eigenvalue
𝑓 (x) = 0x = 0
for any x ∕= 0. Therefore 𝑓 is singular.
3.84 Since 𝑓 (x) = 𝜆x
𝑓 (x)𝑇 x = 𝜆x𝑇 x = 𝜆x𝑇 x
3.85 By Exercise 3.69
𝑎𝑖𝑗 = x𝑇𝑖 𝑓 (x𝑗 )
𝑎𝑗𝑖 = x𝑇𝑗 𝑓 (x𝑖 ) = 𝑓 (x𝑖 )𝑇 x𝑗
and therefore
𝑎𝑖𝑗 = 𝑎𝑗𝑖 ⇐⇒ x𝑇𝑖 𝑓 (x𝑗 ) = 𝑓 (x𝑖 )𝑇 x𝑗
3.86 By bilinearity
x𝑇1 𝑓 (x2 ) = x𝑇1 𝜆2 x2 = 𝜆2 x𝑇1 x2
𝑓 (x1 )𝑇 x2 = 𝜆1 x𝑇1 x2 = 𝜆1 x𝑇1 x2
Since 𝑓 is symmetric, this implies
(𝜆1 − 𝜆2 )x𝑇1 x2 = 0
and 𝜆1 ∕= 𝜆2 implies x𝑇1 x2 = 0.
3.87
1. Since 𝑆 compact and 𝑓 is continuous (Exercises 3.31, 3.62), the maximum is
attained at some x0 ∈ 𝑆 (Theorem 2.2), that is
𝜆 = 𝑓 (x0 )𝑇 x0 ≥ 𝑓 (x)𝑇 x for every x ∈ 𝑆
Hence

g(x, y) = (λx − f(x))^T y

is well-defined.

2. For any x ∈ X, with z = x/∥x∥ ∈ S,

g(x, x) = (λx − f(x))^T x
        = λ x^T x − f(x)^T x
        = λ ∥x∥² − f(x)^T x
        = λ ∥x∥² − ∥x∥² f(z)^T z
        = ∥x∥² (λ − f(z)^T z) ≥ 0

3. Since f is symmetric

g(y, x) = (λy − f(y))^T x
        = λ y^T x − f(y)^T x
        = λ x^T y − f(x)^T y
        = (λx − f(x))^T y = g(x, y)

4. g satisfies the conditions of Exercise 3.59 and therefore

(g(x, y))² ≤ g(x, x) g(y, y) for every x, y ∈ X     (3.41)

By definition g(x0, x0) = 0 and (3.41) implies that

g(x0, y) = 0 for every y ∈ X

That is

g(x0, y) = (λ x0 − f(x0))^T y = 0 for every y ∈ X

and therefore

λ x0 − f(x0) = 0

or

f(x0) = λ x0

In other words, x0 is an eigenvector. By construction, ∥x0∥ = 1.
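The construction says that a maximizer of f(x)^T x over the unit sphere is an eigenvector, with the maximum value λ as its eigenvalue. A small numerical sketch with a symmetric 2 × 2 matrix (an arbitrary illustrative choice) confirms this:

```python
import numpy as np

# For a symmetric matrix A, max_{||x||=1} x'Ax equals the largest eigenvalue,
# attained at the corresponding unit eigenvector. A is an arbitrary example.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# crude search over the unit circle
thetas = np.linspace(0.0, 2 * np.pi, 100001)
points = np.column_stack([np.cos(thetas), np.sin(thetas)])
values = np.einsum('ij,jk,ik->i', points, A, points)    # x'Ax for each x
x0 = points[np.argmax(values)]
lam = values.max()

eigvals, eigvecs = np.linalg.eigh(A)
print(lam, eigvals[-1])                 # approximately equal
print(A @ x0 - lam * x0)                # approximately the zero vector
```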
3.88
1. Suppose x2 , x3 ∈ 𝑆. Then
(
)𝑇
𝛼x2 + 𝛽x3 x1 = 𝛼x𝑇2 x1 + 𝛽x𝑇3 x1 = 0
so that 𝛼x2 + 𝛽x3 ∈ 𝑆. 𝑆 is a subspace.
Let {x1 , x2 , . . . , x𝑛 } be a basis for 𝑋 (Exercise 1.142). For x ∈ 𝑋, there exists
(Exercise 1.137) unique 𝛼1 , 𝛼2 , . . . , 𝛼𝑛 such that
x = 𝛼1 x1 + 𝛼2 x2 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛
If x ∈ 𝑆
x𝑇 x1 = 𝛼1 x𝑇1 x1 = 0
which implies that α1 = 0. Therefore, x2, x3, . . . , xn span S and therefore dim S = n − 1.
2. For every x ∈ 𝑆,
𝑓 (x)𝑇 x0 = x𝑇 𝑓 (x0 ) = x𝑇 𝜆x0 = 𝜆x𝑇 x0 = 0
since 𝑓 is symmetric. Therefore 𝑓 (x) ∈ {x0 }⊥ = 𝑆.
3.89 Let 𝑓 be a symmetric operator. By the spectral theorem (Proposition 3.6), there
exists a diagonal matrix 𝐴 which represents 𝑓 . The elements of 𝐴 are the eigenvalues of
𝑓 . By Proposition 3.5, the determinant of 𝐴 is the product of these diagonal elements.
3.90 By linearity

f(x) = ∑_j x_j f(x_j)

Q defines a quadratic form since

Q(x) = x^T f(x) = ( ∑_i x_i x_i )^T ( ∑_j x_j f(x_j) ) = ∑_i ∑_j x_i x_j x_i^T f(x_j) = ∑_i ∑_j a_ij x_i x_j

by Exercise 3.69.
3.91 Let 𝑓 be the symmetric linear operator defining 𝑄
𝑄(x) = x𝑇 𝑓 (x)
By the spectral theorem (Proposition 3.6), there exists an orthonormal basis x1 , x2 , . . . , x𝑛
comprising the eigenvectors of 𝑓 . Let 𝜆1 , 𝜆2 , . . . , 𝜆𝑛 be the corresponding eigenvalues,
that is
𝑓 (x𝑖 ) = 𝜆𝑖 x𝑖
𝑖 = 1, 2 . . . , 𝑛
Then for x = 𝑥1 x1 + 𝑥2 x2 + ⋅ ⋅ ⋅ + 𝑥𝑛 x𝑛
𝑄(x) = x𝑇 𝑓 (x)
= (𝑥1 x1 + 𝑥2 x2 + ⋅ ⋅ ⋅ + 𝑥𝑛 x𝑛 )𝑇 𝑓 (𝑥1 x1 + 𝑥2 x2 + ⋅ ⋅ ⋅ + 𝑥𝑛 x𝑛 )
= (𝑥1 x1 + 𝑥2 x2 + ⋅ ⋅ ⋅ + 𝑥𝑛 x𝑛 )𝑇 (𝑥1 𝑓 (x1 ) + 𝑥2 𝑓 (x2 ) + ⋅ ⋅ ⋅ + 𝑥𝑛 𝑓 (x𝑛 ))
= (𝑥1 x1 + 𝑥2 x2 + ⋅ ⋅ ⋅ + 𝑥𝑛 x𝑛 )𝑇 (𝑥1 𝜆1 x1 + 𝑥2 𝜆2 x2 + ⋅ ⋅ ⋅ + 𝑥𝑛 𝜆𝑛 x𝑛 )
= 𝑥1 𝜆1 𝑥1 + 𝑥2 𝜆2 𝑥2 + ⋅ ⋅ ⋅ + 𝑥𝑛 𝜆𝑛 𝑥𝑛
= 𝜆1 𝑥21 + 𝜆2 𝑥22 + ⋅ ⋅ ⋅ + 𝜆𝑛 𝑥2𝑛
3.92

1. Assuming that a11 ≠ 0, the quadratic form can be rewritten as follows

Q(x1, x2) = a11 x1² + 2 a12 x1 x2 + a22 x2²
         = a11 x1² + 2 a12 x1 x2 + (a12²/a11) x2² − (a12²/a11) x2² + a22 x2²
         = a11 ( x1² + 2 (a12/a11) x1 x2 + (a12/a11)² x2² ) + ( a22 − a12²/a11 ) x2²
         = a11 ( x1 + (a12/a11) x2 )² + ( (a11 a22 − a12²)/a11 ) x2²

2. We observe that Q must be positive for every x1 and x2 (not both zero) provided a11 > 0 and a11 a22 − a12² > 0. Similarly Q must be negative for every x1 and x2 if a11 < 0 and a11 a22 − a12² > 0. Otherwise, we can choose values for x1 and x2 which make Q both positive and negative.

Note that the condition a11 a22 > a12² ≥ 0 implies that a11 and a22 must have the same sign.

3. If a11 = a22 = 0, then Q is indefinite (unless a12 = 0). Otherwise, if a11 = 0 but a22 ≠ 0, we can "complete the square" using a22 and deduce that

Q is nonnegative definite if and only if a11, a22 ≥ 0 and a11 a22 ≥ a12²
Q is nonpositive definite if and only if a11, a22 ≤ 0 and a11 a22 ≥ a12²
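These sign conditions are equivalent to the eigenvalue test of Exercise 3.96. A brief numerical sketch (with arbitrary example matrices of my own choosing) compares the two classifications for 2 × 2 symmetric matrices:

```python
import numpy as np

def definite_by_minors(A):
    """Classify a symmetric 2x2 matrix using a11 and det(A), as in the text."""
    a11, det = A[0, 0], np.linalg.det(A)
    if a11 > 0 and det > 0:
        return "positive definite"
    if a11 < 0 and det > 0:
        return "negative definite"
    return "indefinite or semidefinite"

def definite_by_eigenvalues(A):
    lam = np.linalg.eigvalsh(A)
    if np.all(lam > 0):
        return "positive definite"
    if np.all(lam < 0):
        return "negative definite"
    return "indefinite or semidefinite"

# arbitrary illustrative examples
for A in [np.array([[2.0, 1.0], [1.0, 3.0]]),
          np.array([[-2.0, 1.0], [1.0, -3.0]]),
          np.array([[1.0, 2.0], [2.0, 1.0]])]:
    assert definite_by_minors(A) == definite_by_eigenvalues(A)
    print(definite_by_minors(A))
```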
3.93 Let 𝑄 : 𝑋 → ℜ be a quadratic form on 𝑋. Then there exists a linear operator 𝑓
such that
𝑄(x) = x𝑇 𝑓 (x)
and (Exercise 3.13)
𝑄(0) = 0𝑇 𝑓 (0) = 0
3.94 Suppose to the contrary that the positive (negative) definite matrix 𝐴 is singular.
Then there exists x ∕= 0 such that 𝐴x = 0 and therefore x′ 𝐴x = 0 contradicting the
definiteness of 𝐴.
3.95 Let e1 , e2 , . . . , e𝑛 be the standard basis for ℜ𝑛 (Example 1.79). Then for every 𝑖
e′𝑖 𝐴e𝑖 = 𝑎𝑖𝑖 > 0
3.96 Let Q be the quadratic form defined by A. By Exercise 3.91, there exists an orthonormal basis such that

Q(x) = λ1 x1² + λ2 x2² + ⋅ ⋅ ⋅ + λn xn²

where λ1, λ2, . . . , λn are the eigenvalues of A. This implies

Q(x) > 0 for every x ≠ 0 ⟺ λi > 0, i = 1, 2, . . . , n
Q(x) ≥ 0 for every x ⟺ λi ≥ 0, i = 1, 2, . . . , n
Q(x) < 0 for every x ≠ 0 ⟺ λi < 0, i = 1, 2, . . . , n
Q(x) ≤ 0 for every x ⟺ λi ≤ 0, i = 1, 2, . . . , n
3.97 Let 𝜆1 , 𝜆2 , . . . , 𝜆𝑛 be the eigenvalues of 𝐴. By Exercise 3.89
det(𝐴) = 𝜆1 𝜆2 . . . 𝜆𝑛
By Exercise 3.96, 𝜆𝑖 ≥ 0 for every 𝑖 and therefore det(𝐴) ≥ 0. We conclude that
det(𝐴) > 0 ⇐⇒ 𝜆𝑖 > 0 for every 𝑖 ⇐⇒ 𝐴 is positive definite
by Exercise 3.96.
3.98
1. 𝐴0 = 0. Therefore, 0 is always a solution.
2. Assume x1 and x2 are solutions, that is
𝐴x1 = 0 and 𝐴x2 = 0
Then
𝐴(x1 + x2 ) = 𝐴x1 + 𝐴x2 = 0
x1 + x2 is also a solution.
3. Let 𝑓 be the linear function defined by
𝑓 (x) = 𝐴x
The system of equations 𝐴x = 0 has a nontrivial solution if and only if
kernel 𝑓 ∕= {0} ⇐⇒ nullity 𝑓 > 0
By the rank theorem (Exercise 3.24)
rank𝑓 + nullity𝑓 = dim 𝑋
so that
nullity 𝑓 > 0 ⇐⇒ rank𝑓 < dim 𝑋 = 𝑛
3.99
1. Assume x1 and x2 are solutions of (3.16). That is
𝐴x1 = c and 𝐴x2 = c
Subtracting
𝐴x1 − 𝐴x2 = 𝐴(x1 − x2 ) = 0
2. Assume x𝑝 solves (3.16) while x is any solution to (3.17). That is
𝐴x𝑝 = c and 𝐴x = 0
Adding
𝐴x𝑝 + 𝐴x = 𝐴(x𝑝 + x) = c
We conclude that x𝑝 + x solves (3.16) for every x ∈ 𝐾.
3. If 0 is the only solution of (3.17), 𝐾 = {0}. Assume x1 and x2 are solutions of
(3.16). Then x1 − x2 ∈ 𝐾 = {0} which implies x1 = x2 .
3.100 Let 𝑆 = { x : 𝐴x = 𝑐 }. For every x, y ∈ 𝑆 and 𝛼 ∈ ℜ
𝐴𝛼x + (1 − 𝛼)y = 𝛼𝐴x + (1 − 𝛼)𝐴y = 𝛼c + (1 − 𝛼)c = 𝑐
Therefore, z = 𝛼x + (1 − 𝛼)y ∈ 𝑆. 𝑆 is affine.
3.101 Let 𝑆 ∕= ∅ be an affine set ℜ𝑛 . Then there exists a unique subspace 𝑉 such that
𝑆 = x0 + 𝑉
for some x0 ∈ 𝑆 (Exercise 1.150). The orthogonal complement of 𝑉 is
𝑉 ⊥ = { a ∈ 𝑋 : ax = 0 for every x ∈ 𝑉 }
Let (a1 , a2 , . . . , a𝑚 ) be a basis for 𝑉 ⊥ . Then
𝑉 = (𝑉 ⊥ )⊥ = {x : a𝑖 x = 0,
𝑖 = 1, 2, . . . 𝑚}
Let 𝐴 be the 𝑚×𝑛 matrix whose rows are a1 , a2 , . . . , a𝑚 . Then 𝑉 is the set of solutions
to the homogeneous linear system 𝐴x = 0, that is
𝑉 = { 𝑥 : 𝐴x = 0 }
Therefore
𝑆 = x0 + 𝑉
= x0 + { x : 𝐴x = 0 }
= { x : 𝐴(x − x0 ) = 0 }
= { x : 𝐴x = c }
where c = 𝐴x0 .
3.102 Consider corresponding homogeneous system
𝑥1 + 3𝑥2 = 0
𝑥1 − 𝑥2 = 0
Multiplying the second equation by 3
𝑥1 + 6𝑥2 = 0
3𝑥1 − 3𝑥2 = 0
and adding yields
4𝑥1 = 0
for which the only solution is 𝑥1 = 0. Substituting in the first equation implies 𝑥2 = 0.
The kernel of 𝑓 = 𝐴x is {0}. Therefore dim 𝑓 (ℜ2 ) = 2, and the system 𝐴x = 𝑐 has a
unique solution for every 𝑐1 , 𝑐2 .
3.103 We can write the system Ax = c in the form

x_1 A_1 + ⋅ ⋅ ⋅ + x_j A_j + ⋅ ⋅ ⋅ + x_n A_n = c

where A_j = (a_1j, a_2j, . . . , a_nj) denotes the jth column of A. Subtracting c from the jth term gives

x_1 A_1 + ⋅ ⋅ ⋅ + (x_j A_j − c) + ⋅ ⋅ ⋅ + x_n A_n = 0

so that the columns of the matrix

C = ( A_1, . . . , x_j A_j − c, . . . , A_n )

are linearly dependent (Exercise 1.133). Therefore det(C) = 0. Let B_j denote the matrix obtained from A by replacing the jth column with c. Then A, B_j and C differ only in the jth column, with the jth column of C being a linear combination of the jth columns of A and B_j, namely x_j A_j − c.

By Exercise 3.81

det(C) = x_j det(A) − det(B_j) = 0

and therefore

x_j = det(B_j) / det(A)

as required.
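A quick numerical sketch of Cramer's rule (the system below is an arbitrary illustrative choice): each component x_j is the ratio det(B_j)/det(A), where B_j replaces the jth column of A by c.

```python
import numpy as np

# Solve Ax = c by Cramer's rule and compare with a direct solve.
# A and c are arbitrary illustrative values.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 3.0])

detA = np.linalg.det(A)
x_cramer = np.empty(3)
for j in range(3):
    Bj = A.copy()
    Bj[:, j] = c                      # replace the jth column with c
    x_cramer[j] = np.linalg.det(Bj) / detA

print(x_cramer)
print(np.linalg.solve(A, c))          # should agree
assert np.allclose(x_cramer, np.linalg.solve(A, c))
```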
3.104 Let

[ a  b ]⁻¹   [ A  B ]
[ c  d ]   = [ C  D ]

The inverse satisfies the equation

[ a  b ] [ A  B ]   [ 1  0 ]
[ c  d ] [ C  D ] = [ 0  1 ]

In particular, this means that A and C satisfy the equation

[ a  b ] [ A ]   [ 1 ]
[ c  d ] [ C ] = [ 0 ]

By Cramer's rule (Exercise 3.103)

A = det[ 1  b ; 0  d ] / det[ a  b ; c  d ] = d/Δ

C = det[ a  1 ; c  0 ] / det[ a  b ; c  d ] = −c/Δ

where Δ = ad − bc. B and D are determined analogously.
3.105 A portfolio is duplicable if and only if there is a different portfolio y ∕= x such
that
𝑅x = 𝑅y
or
𝑅(x − y) = 0
There exists a duplicable portfolio if and only if this homogeneous system has a nontrivial solution, that is if rank 𝑅 < 𝐴.
3.106 State 𝑠¯ is insurable if there is a solution to the linear system
𝑅x = e𝑠¯
(3.42)
where e𝑠¯ is the 𝑠¯-th unit vector (the 𝑠¯ Arrow-Debreu security). (3.42) has a solution
for every state 𝑠 if and only if 𝑓 (ℜ𝐴 ) = ℜ𝑆 , that is rank 𝑅 = 𝑆.
3.107 Figure 3.1.
3.108 Let S be an affine subset of ℜⁿ. Then there exists (Exercise 3.101) a system of linear equations Ax = c such that

S = { x : Ax = c }

Let a_i denote the ith row of A. Then

S = { x : a_i x = c_i, i = 1, 2, . . . , m } = ∩_{i=1}^m { x : a_i x = c_i }

where each { x : a_i x = c_i } is a hyperplane in ℜⁿ (Example 3.21).
Figure 3.1: The solutions of three equations in two unknowns
3.109 Let 𝑆 = { x : 𝐴x ≤ c }. For every x, y ∈ 𝑆 and 0 ≤ 𝛼 ≤ 1
𝐴x ≤ c
𝐴y ≤ c
and therefore
𝐴𝛼x + (1 − 𝛼)y = 𝛼𝐴x + (1 − 𝛼)𝐴y ≤ 𝛼c + (1 − 𝛼)c = 𝑐
Therefore, z = 𝛼x + (1 − 𝛼)y ∈ 𝑆. 𝑆 is a convex set.
3.110 We have already seen that 𝑆 = { x : 𝐴x ≤ 0 } is convex. To show that it is a
cone, let x ∈ 𝑆. Then
𝐴x ≤ 0
𝐴𝛼x ≤ 0
so that 𝛼x ∈ 𝑆. 𝑆 is a convex cone.
3.111
1. Each column 𝐴𝑗 is a vector in ℜ𝑚 . If the set {𝐴1 , 𝐴2 , . . . , 𝐴𝑘 } is linearly
independent, it has at most 𝑚 elements, that is 𝑘 ≤ 𝑚 and x is a basic feasible
solution.
2. (a) Assume {𝐴1 , 𝐴2 , . . . , 𝐴𝑘 } are linearly dependent. Then (Exercise 1.133)
there exist numbers 𝑦1 , 𝑦2 , . . . , 𝑦𝑘 , not all zero, such that
𝑦1 𝐴1 + 𝑦2 𝐴2 + ⋅ ⋅ ⋅ + 𝑦𝑘 𝐴𝑘 = 0
y = (𝑦1 , 𝑦2 , . . . , 𝑦𝑘 ) is a nontrivial solution to the homogeneous system.
(b) For every 𝑡 ∈ ℜ, −𝑡y ∈ kernel 𝑓 = 𝐴x and x′ = x − 𝑡y is a solution of
the corresponding nonhomogeneous system 𝐴x = c. To see this directly,
subtract
𝐴𝑡y = 0
from
𝐴x = c
to give
𝐴x′ = 𝐴(x − 𝑡y) = c
(c) Note that x > 0 and therefore 𝑡ˆ > 0 which implies that 𝑥
ˆ𝑗 > 0 for every
𝑦𝑗 ≤ 0. For every 𝑦𝑗 > 0, 𝑥𝑗 /𝑦𝑗 ≥ 𝑡ˆ, which implies that 𝑥𝑗 ≥ 𝑡ˆ𝑦𝑗 , so that
𝑥ˆ𝑗 ≥ 𝑥𝑗 − 𝑡ˆ𝑦𝑗 ≥ 0
Therefore, x̂ is a feasible solution.
(d) There exists some coordinate ℎ such that 𝑡ˆ = 𝑥ℎ /𝑦ℎ so that
𝑥
ˆℎ = 𝑥ℎ − 𝑡ˆ𝑦ℎ = 0
so that
𝑘
∑
c=
𝑥ˆ𝑗 𝐴𝐽
𝑗 =1
𝑗∕=ℎ
𝑥
ˆ is a feasible solution with one less positive component.
3. Starting with any nonbasic feasible solution, this elimination technique can be
repeated until the remaining vectors are linearly independent and a basic feasible
solution is obtained.
3.112
1. Exercise 1.173.
2. For each 𝑖, there exists 𝑙𝑖 elements x𝑖𝑗 and coefficients 𝑎𝑖𝑗 > 0 such that
x𝑖 =
𝑙𝑖
∑
𝑎𝑖𝑗 x𝑖𝑗
𝑖=1
and
∑ 𝑙𝑖
𝑗=1
𝑎𝑖𝑗 = 1. Hence
x=
𝑛
∑
x𝑖 =
𝑖=1
𝑛 ∑
𝑙𝑖
∑
𝑎𝑖𝑗 x𝑖𝑗
𝑖=1 𝑗=1
3. Direct computation.
4. Regarding the 𝑎𝑖𝑗 as “variables” and the points 𝑧𝑖𝑗 as coefficents,
z=
𝑙𝑖
𝑛 ∑
∑
𝑎𝑖𝑗 z𝑖𝑗
𝑖=1 𝑗=1
is a linear equation system in which variables are restricted to be nonnegative.
By the fundamental theorem of linear programming (Exercise 3.111), there exists
a basic feasible solution. That is, there exists coefficients 𝑏𝑖𝑗 ≥ 0 and 𝑏𝑖𝑗 > 0 for
at most (𝑚 + 𝑛) components such that
z=
𝑙𝑖
𝑛 ∑
∑
𝑏𝑖𝑗 z𝑖𝑗
(3.43)
𝑖=1 𝑗=1
Decomposing, (3.43) implies
x=
𝑙𝑖
𝑛 ∑
∑
𝑏𝑖𝑗 x𝑖𝑗
𝑖=1 𝑗=1
and
𝑙𝑖
∑
𝑏𝑖𝑗 = 1
for every 𝑖
𝑗=1
5. (3.43) implies that at least one 𝑏𝑖𝑗 > 0 for every 𝑖. This accounts for at least
𝑛 of the positive 𝑏𝑖𝑗 . Since there are at most (𝑚 + 𝑛) coefficients 𝑏𝑖𝑗 which are
strictly positive, there are at most 𝑚 indices 𝑖 which have more than one positive
coefficient 𝑏𝑖𝑗 . For the remaining 𝑚 − 𝑛 indices, x𝑖 = x𝑖𝑗 for some 𝑗; that is
x𝑖 ∈ 𝑆𝑖 .
3.113
1. Since 𝐴 is productive, there exists x ≥ 0 such that 𝐴x > 0. Consider any
z for which 𝐴z ≥ 0. For every 𝛼 > 0
𝐴(x + 𝛼z) = 𝐴x + 𝛼𝐴z > 0
(3.44)
Suppose to the contrary that z ≱ 0. That is, there exists some component z_i < 0. Let

α = min{ −x_i/z_i : z_i < 0 }

Without loss of generality, assume the minimum is attained at i = 1, that is α = −x_1/z_1 ≥ 0. Then

x_1 + α z_1 = 0

and

x_i + α z_i ≥ 0 for every i

Now consider the matrix B = I − A. By the assumptions of the Leontief model (Example 3.35), the matrix A has 1 along the diagonal and nonpositive off-diagonal elements. That is

a_ii = 1,  i = 1, 2, . . . , n
a_ij ≤ 0,  i, j = 1, 2, . . . , n,  j ≠ i

Therefore

B = I − A ≥ 0

That is, every element of B is nonnegative. Consequently, since x + αz ≥ 0,

B(x + αz) ≥ 0     (3.45)

On the other hand, substituting A = I − B in (3.44)

(I − B)(x + αz) > 0
x + αz > B(x + αz)

which implies that the first component of B(x + αz) is negative, contradicting (3.45).
This contradiction establishes that z ≥ 0.
Suppose 𝐴x = 0. A fortiori 𝐴x ≥ 0. By the previous part this implies x ≥ 0. On the
other hand, it also implies that −𝐴x = 𝐴(−x) = 0 so that −x ≥ 0. We conclude that
x = 0 is the only solution to 𝐴x = 0. 𝐴 is nonsingular.
Since 𝐴 is nonsingular, the system 𝐴x = y has a unique solution x for any y ≥ 0. By
the first part, x ≥ 0.
3.114 Suppose 𝐴 is productive. By the previous exercise, 𝐴 is nonsingular with inverse
𝐴−1 . Let e𝑖 be the 𝑖th unit vector. Since e𝑖 ≥ 0, there exists x𝑖 ≥ 0 such that
𝐴x𝑖 = e𝑖
Multiplying by A⁻¹

x_i = A⁻¹ A x_i = A⁻¹ e_i = (A⁻¹)_i

where (A⁻¹)_i is the ith column of A⁻¹. Since x_i ≥ 0 for every i, we conclude that A⁻¹ ≥ 0.
Conversely, assume that 𝐴−1 ≥ 0. Let 1 = (1, 1, . . . , 1) denote a net output of 1 for
each commodity. Then
x = 𝐴−1 1 ≥ 0
and
𝐴x = 1 > 0
𝐴 is productive.
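A small numerical illustration of Exercises 3.113 and 3.114 (the technology matrix below is my own example, not from the text): for a productive Leontief matrix A = I − B with B ≥ 0 and sufficiently small entries, the inverse A⁻¹ is nonnegative and Ax = y has a nonnegative solution for every y ≥ 0.

```python
import numpy as np

# Illustrative Leontief technology: A = I - B with B >= 0 and "small".
B = np.array([[0.2, 0.3],
              [0.1, 0.4]])
A = np.eye(2) - B

A_inv = np.linalg.inv(A)
print(A_inv)                         # every entry is nonnegative
assert np.all(A_inv >= 0)

y = np.array([1.0, 2.0])             # any nonnegative net output target
x = A_inv @ y                        # gross outputs needed
assert np.all(x >= 0)
print(x, A @ x)                      # A x recovers y
```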
3.115 Takayama 1985, p.383, Theorem 4.C.4.
3.116 Let a_0 = (a_01, a_02, . . . , a_0n) be the vector of labour requirements and w the wage rate. The unit profit of industry i is

π_i = p_i + ∑_{j≠i} a_ij p_j − w a_0i

Recall that a_ij ≤ 0 for j ≠ i. The vector of unit profits for all industries is

Π = Ap − w a_0

Profits will be zero in all industries if there exists a price system p such that

Π = Ap − w a_0 = 0

or

Ap = w a_0     (3.46)

By the previous results, (3.46) has a unique nonnegative solution p = A⁻¹ w a_0 if the technology A is productive. Furthermore, A⁻¹ is nonnegative. Since a_0 > 0, so is p > 0.
3.117 Let 𝑢𝐵 denote the steady state unemployment rate for blacks. Then 𝑢𝐵 satisfies
the equation
𝑢𝐵 = 0.0038(1 − 𝑢𝐵 ) + 0.8975𝑢𝐵
which implies that 𝑢𝐵 = 0.036. That is, the data implies an unemployment rate of 3.6
percent for blacks. Similarly, the unemployment rate for white males 𝑢𝑊 satisfies the
equation
𝑢𝑊 = 0.0022(1 − 𝑢𝑊 ) + 0.8614𝑢𝑊
which implies that 𝑢𝑊 = 0.016 or 1.6 percent.
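The steady-state condition u = a(1 − u) + bu solves to u = a/(1 − b + a). A one-line check of the two figures quoted above:

```python
# Steady-state unemployment from u = a*(1 - u) + b*u  =>  u = a/(1 - b + a),
# using the transition rates quoted in the solution.
def steady_state(a, b):
    return a / (1 - b + a)

print(round(steady_state(0.0038, 0.8975), 3))   # about 0.036 for blacks
print(round(steady_state(0.0022, 0.8614), 3))   # about 0.016 for white males
```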
3.118 The transition matrix is

T = [ .6   .25 ]
    [ .4   .75 ]

If the current state vector is x0 = (.4, .6), the state vector after a single mailing will be

x1 = T x0 = [ .6   .25 ] [ .4 ]   [ .39 ]
            [ .4   .75 ] [ .6 ] = [ .61 ]

Following a single mailing, the number of subscribers will drop to 39 percent of the mailing list, comprising 24 percent from renewals and 15 percent new subscriptions.
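A short sketch reproducing this calculation and, for interest, iterating the mailing until the chain settles down (the iteration count is an arbitrary choice):

```python
import numpy as np

# Subscriber / non-subscriber transition matrix from the solution.
T = np.array([[0.60, 0.25],
              [0.40, 0.75]])
x = np.array([0.40, 0.60])           # current state: 40% subscribers

print(T @ x)                         # [0.39, 0.61] after one mailing

# Repeated mailings converge to the stationary distribution of T.
for _ in range(100):
    x = T @ x
print(x)                             # approximately [0.385, 0.615]
```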
3.119 Let 𝑓 (𝑥) = 𝑥2 . For every 𝑥1 , 𝑥2 ∈ ℜ and 0 ≤ 𝛼 ≤ 1
𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 ) = (𝛼𝑥1 + (1 − 𝛼)𝑥2 )2
= (𝛼𝑥1 + (1 − 𝛼)𝑥2 )(𝛼𝑥1 + (1 − 𝛼)𝑥2 )
= 𝛼2 𝑥21 + 2𝛼(1 − 𝛼)𝑥1 𝑥2 + (1 − 𝛼)2 𝑥22
= 𝛼𝑥21 + (1 − 𝛼)𝑥22 − 𝛼𝑥21 − (1 − 𝛼)𝑥22 + 𝛼2 𝑥21 + 2𝛼(1 − 𝛼)𝑥1 𝑥2 + (1 − 𝛼)2 𝑥22
)
(
= 𝛼𝑥21 + (1 − 𝛼)𝑥22 − 𝛼(1 − 𝛼)𝑥21 − 2𝛼(1 − 𝛼)𝑥1 𝑥2 + 𝛼(1 − 𝛼)𝑥22
= 𝛼𝑥21 + (1 − 𝛼)𝑥22 − 𝛼(1 − 𝛼)(𝑥1 − 𝑥2 )2
≤ 𝛼𝑥21 + (1 − 𝛼)𝑥22
= αf(x1) + (1 − α)f(x2)
3.120 𝑓 (𝑥) = 𝑥 is linear and therefore convex. In the previous exercise we showed that
𝑥2 is convex. Therefore 𝑓 (𝑥) = 𝑥𝑛 is convex for 𝑛 = 1, 2. Assume that 𝑓 is convex for
𝑛 − 1. Then
𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 ) = (𝛼𝑥1 + (1 − 𝛼)𝑥2 )𝑛
= (𝛼𝑥1 + (1 − 𝛼)𝑥2 )(𝛼𝑥1 + (1 − 𝛼)𝑥2 )𝑛−1
≤ (𝛼𝑥1 + (1 − 𝛼)𝑥2 )(𝛼𝑥𝑛−1
+ (1 − 𝛼)𝑥𝑛−1
)
1
2
(since 𝑥𝑛−1 is convex)
= 𝛼2 𝑥𝑛1 + 𝛼(1 − 𝛼)𝑥𝑛−1
𝑥2 + 𝛼(1 − 𝛼)𝑥1 𝑥𝑛−1
+ (1 − 𝛼)2 𝑥𝑛2
1
2
= 𝛼𝑥𝑛1 + (1 − 𝛼)𝑥𝑛2 − 𝛼𝑥𝑛1 − (1 − 𝛼)𝑥𝑛2
+ 𝛼2 𝑥𝑛1 + 𝛼(1 − 𝛼)𝑥𝑛−1
𝑥2 + 𝛼(1 − 𝛼)𝑥1 𝑥𝑛−1
+ (1 − 𝛼)2 𝑥𝑛2
1
2
)
(
𝑛−1
𝑛
= 𝛼𝑥𝑛1 + (1 − 𝛼)𝑥𝑛2 − 𝛼(1 − 𝛼) 𝑥𝑛1 − 𝑥1 𝑥𝑛−1
−
𝑥
𝑥
+
𝑥
2
2
2
1
(
)
𝑛−1
𝑛−1
𝑛
𝑛
= 𝛼𝑥1 + (1 − 𝛼)𝑥2 − 𝛼(1 − 𝛼) 𝑥1 (𝑥1 − 𝑥2 ) − 𝑥2 (𝑥1 − 𝑥2 )
(
)
𝑛−1
−
𝑥
)
= 𝛼𝑥𝑛1 + (1 − 𝛼)𝑥𝑛2 − 𝛼(1 − 𝛼) (𝑥1 − 𝑥2 )(𝑥𝑛−1
1
2
Since 𝑥𝑚 is monotonic (Example 2.53)
𝑥𝑛−1
− 𝑥𝑛−1
≥ 0 ⇐⇒ 𝑥1 − 𝑥2 ≥ 0
1
2
and therefore
(𝑥1 − 𝑥2 )(𝑥𝑛−1
− 𝑥𝑛−1
)≥0
1
2
We conclude that
f(αx1 + (1 − α)x2) ≤ αx1^n + (1 − α)x2^n = αf(x1) + (1 − α)f(x2)
𝑓 is convex for all 𝑛 = 1, 2, . . . .
3.121 For given x1 , x2 ∈ 𝑆, define 𝑔 : [0, 1] → 𝑆 by
𝑔(𝑡) = (1 − 𝑡)x1 + 𝑡x2
Then 𝑔(0) = x1 , 𝑔(1) = x2 and ℎ = 𝑔 ∘ 𝑓 .
Assume 𝑓 is convex. For any 𝑡1 , 𝑡2 ∈ [0, 1], let
𝑔(𝑡1 ) = x̄1 and 𝑔(𝑡2 ) = x̄2
For any 𝛼 ∈ [0, 1]
)
(
𝑔 𝛼𝑡1 + (1 − 𝛼)𝑡2 = 𝛼x̄1 + (1 − 𝛼)x̄2
)
(
)
(
ℎ 𝛼𝑡1 + (1 − 𝛼)𝑡2 = 𝑓 𝛼x̄1 + (1 − 𝛼)x̄2
≤ 𝛼𝑓 (x̄1 ) + (1 − 𝛼)𝑓 (x̄2 )
≤ 𝛼ℎ(𝑡1 ) + (1 − 𝛼)𝑡2 )
ℎ is convex.
Conversely, assume ℎ is convex for any x1 , x2 ∈ 𝑆. For any 𝛼 ∈ [0, 1]
𝑔(𝛼) = 𝛼x1 + (1 − 𝛼)x2
and
)
(
𝑓 𝛼x1 + (1 − 𝛼)x2 = ℎ(𝛼)
≤ 𝛼ℎ(0) + (1 − 𝛼)ℎ(1)
= 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
Since this is true for any x1 , x2 ∈ 𝑆, we conclude that 𝑓 is convex.
3.122 Assume f is convex, which implies epi f is convex. The points (x_i, f(x_i)) ∈ epi f. Since epi f is convex

α1 (x1, f(x1)) + α2 (x2, f(x2)) + ⋅ ⋅ ⋅ + αn (xn, f(xn)) ∈ epi f

that is

f(α1 x1 + α2 x2 + ⋅ ⋅ ⋅ + αn xn) ≤ α1 f(x1) + α2 f(x2) + ⋅ ⋅ ⋅ + αn f(xn)

Conversely, letting n = 2 and α = α1, (3.25) implies that

f(αx1 + (1 − α)x2) ≤ αf(x1) + (1 − α)f(x2)

Jensen's inequality can also be proved by induction from the definition of a convex function (see for example Sydsaeter and Hammond 1995, p. 624).
3.123 For each i, let y_i = log x_i so that

x_i = e^{y_i} and x_i^{α_i} = e^{α_i y_i}

Since e^x is convex (Example 3.41)

x_1^{α_1} x_2^{α_2} . . . x_n^{α_n} = ∏_i exp(α_i y_i) = exp( ∑_i α_i y_i ) ≤ ∑_i α_i e^{y_i} = ∑_i α_i x_i

by Jensen's inequality. Setting α_i = 1/n, we have

(x_1 x_2 . . . x_n)^{1/n} ≤ (1/n) ∑_{i=1}^n x_i

as required.
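A quick numerical check of the arithmetic-geometric mean inequality derived here (random positive numbers as an illustrative test):

```python
import numpy as np

# Check (x1*...*xn)**(1/n) <= (x1+...+xn)/n for random positive vectors.
rng = np.random.default_rng(2)
for _ in range(5):
    x = rng.uniform(0.1, 10.0, size=6)
    geometric = x.prod() ** (1 / len(x))
    arithmetic = x.mean()
    assert geometric <= arithmetic + 1e-12
    print(f"{geometric:.4f} <= {arithmetic:.4f}")
```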
3.124 Assume 𝑓 is concave. That is for every x1 , x2 ∈ 𝑆 and 0 ≤ 𝛼 ≤ 1
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
Multiplying through by −1 reverses the inequality so that
−𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≤ −𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) = 𝛼 − 𝑓 (x1 ) + (1 − 𝛼) − 𝑓 (x2 )
which shows that −𝑓 is concave. The converse follows analogously.
3.125 Assume that 𝑓 is concave. Then −𝑓 is convex and by Theorem 3.7
epi − 𝑓 = { (𝑥, 𝑦) ∈ 𝑋 × ℜ : 𝑦 ≥ −𝑓 (𝑥), 𝑥 ∈ 𝑋 }
is convex. But
epi − 𝑓 = { (𝑥, 𝑦) ∈ 𝑋 × ℜ : 𝑦 ≥ −𝑓 (𝑥), 𝑥 ∈ 𝑋 } = { (𝑥, 𝑦) ∈ 𝑋 × ℜ : 𝑦 ≤ 𝑓 (𝑥), 𝑥 ∈ 𝑋 } = hypo 𝑓
Therefore hypo 𝑓 is convex.
Conversely, if hypo 𝑓 is convex, epi − 𝑓 is convex which implies that −𝑓 is convex and
hence 𝑓 is concave.
3.126 Suppose that x1 minimizes the cost of producing 𝑦 at input prices w1 while x2
minimizes cost at w2 . For some 𝛼 ∈ [0, 1], let w̄ be the weighted average price, that is
w̄ = 𝛼w1 + (1 − 𝛼)w2
and suppose that x̄ minimizes cost at w̄. Then
𝑐(w̄, 𝑦) = w̄x̄
= (𝛼w1 + (1 − 𝛼)w2 )x̄
= 𝛼w1 x̄ + (1 − 𝛼)w2 x̄
But since x1 and x2 minimize cost at w1 and w2 respectively
𝛼w1 x̄ ≥ 𝛼w1 x1 = 𝛼𝑐(w1 , 𝑦)
(1 − 𝛼)w2 x̄ ≥ (1 − 𝛼)w2 x2 = (1 − 𝛼)𝑐(w2 , 𝑦)
so that
𝑐(w̄, 𝑦) = 𝑐(𝛼w1 + (1 − 𝛼)w2 , 𝑦) = 𝛼w1 x̄ + (1 − 𝛼)w2 x̄ ≥ 𝛼𝑐(w1 , 𝑦) + (1 − 𝛼)𝑐w2 , 𝑦)
This establishes that the cost function 𝑐 is concave in w.
3.127 Since u is concave, Jensen's inequality implies

u( (1/T) ∑_{t=1}^T c_t ) ≥ (1/T) ∑_{t=1}^T u(c_t)

for any consumption stream c_1, c_2, . . . , c_T so that

U = ∑_{t=1}^T u(c_t) ≤ T u( (1/T) ∑_{t=1}^T c_t ) = T u(c̄)

It is impossible to do better than consume a constant fraction c̄ = w/T of wealth in each period.
3.128 If x1 = x3, the inequality is trivially satisfied. Now assume x1 ≠ x3. Since x2 ∈ [x1, x3], there exists α ∈ [0, 1] such that

x2 = αx1 + (1 − α)x3

Let x̄ = x1 − x2 + x3. Then x̄ ∈ [x1, x3] and there exists β ∈ [0, 1] such that

x̄ = βx1 + (1 − β)x3

Adding

x̄ + x2 = (α + β)x1 + ((1 − α) + (1 − β))x3

Since x̄ + x2 = x1 + x3, this implies that α + β = 1 and therefore β = 1 − α. Since f is convex

f(x2) ≤ αf(x1) + (1 − α)f(x3)
f(x̄) ≤ βf(x1) + (1 − β)f(x3) = (1 − α)f(x1) + αf(x3)

Adding

f(x̄) + f(x2) ≤ f(x1) + f(x3)
3.129 Let x1, x2, y1, y2 ∈ ℜ with x1 < x2 and y1 < y2. Note that x1 − y2 ≤ x2 − y2 ≤ x2 − y1 and therefore (Exercise 3.128)

f((x1 − y2) − (x2 − y2) + (x2 − y1)) > f(x1 − y2) − f(x2 − y2) + f(x2 − y1)

That is

f(x1 − y1) > f(x1 − y2) − f(x2 − y2) + f(x2 − y1)

Rearranging

f(x2 − y2) − f(x1 − y2) > f(x2 − y1) − f(x1 − y1)

as required.
3.130 A functional is affine if and only if inequalities (3.24) and (3.26) are satisfied as
equalities.
3.131 Since 𝑓 and 𝑔 are convex on 𝑆
𝑓 (𝛽x1 + (1 − 𝛽)x2 ) ≤ 𝛽𝑓 (x1 ) + (1 − 𝛽)𝑓 (x2 )
1
2
1
2
𝑔(𝛽x + (1 − 𝛽)x ) ≤ 𝛽𝑔(x ) + (1 − 𝛽)𝑔(x )
(3.47)
(3.48)
for every x1 , x2 ∈ 𝑆 and 𝛽 ∈ [0, 1]. Adding
(𝑓 + 𝑔)(𝛽x1 + (1 − 𝛽)x2 ) ≤ 𝛽(𝑓 + 𝑔)(x1 ) + (1 − 𝛽)𝑓 (x2 )
𝑓 + 𝑔 is convex. Multiplying (3.47) by 𝛼 ≥ 0
𝛼𝑓 (𝛽x1 + (1 − 𝛽)x2 ) ≤ 𝛼(𝛽𝑓 (x1 ) + (1 − 𝛽)𝑓 (x2 ))
= (𝛽𝛼𝑓 (x1 ) + (1 − 𝛽)𝛼𝑓 (x2 ))
𝛼𝑓 is convex.
Moreover, if 𝑓 is strictly convex,
𝑓 (𝛽x1 + (1 − 𝛽)x2 ) < 𝛽𝑓 (x1 ) + (1 − 𝛽)𝑓 (x2 )
(3.49)
for every x1 , x2 ∈ 𝑆, x1 ∕= x2 and 𝛽 ∈ (0, 1). Adding this to (3.48)
(𝑓 + 𝑔)(𝛽x1 + (1 − 𝛽)x2 ) < 𝛽(𝑓 + 𝑔)(x1 ) + (1 − 𝛽)𝑓 (x2 )
so that 𝑓 + 𝑔 is strictly convex. Multiplying (3.49) by 𝛼 > 0
𝛼𝑓 (𝛽x1 + (1 − 𝛽)x2 ) < 𝛼(𝛽𝑓 (x1 ) + (1 − 𝛽)𝑓 (x2 ))
= (𝛽𝛼𝑓 (x1 ) + (1 − 𝛽)𝛼𝑓 (x2 ))
𝛼𝑓 is strictly convex.
3.132
x ∈ epi (𝑓 ∨ 𝑔) ⇐⇒ x ∈ epi 𝑓 and x ∈ epi 𝑔
That is
epi (𝑓 ∨ 𝑔) = epi 𝑓 ∩ epi 𝑔
Therefore epi 𝑓 ∨ 𝑔 is convex (Exercise 1.162) and therefore 𝑓 is convex (Proposition
3.7).
3.133 If 𝑓 is convex
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≤ 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
Since 𝑔 is increasing
(
)
(
)
𝑔 𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≤ 𝑔 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
)
(
)
(
≤ 𝛼𝑔 𝑓 (x1 ) + (1 − 𝛼)𝑔 𝑓 (x2 )
since 𝑔 is also convex. The concave case is proved similarly.
3.134 Let 𝐹 = log 𝑓 . If 𝐹 is convex, 𝑓 (x) = 𝑒𝐹 (x) is an increasing convex function of a
convex function and is therefore convex (Exercise 3.133).
3.135 If 𝑓 is positive and concave, then log 𝑓 is concave (Exercise 3.51). Therefore
log
1
= log 1 − log 𝑓 = − log 𝑓
𝑓
is convex. By the previous exercise (Exercise 3.134), this implies that 1/𝑓 is convex.
If 𝑓 is negative and convex, then −𝑓 is positive and concave, 1/ − 𝑓 is convex, and
therefore 1/𝑓 is concave.
3.136 Consider the identity
)
(
)
(
)
(
)
(
𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑔 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑔 𝑓 (𝑥1 ) − 𝑔 𝑓 (𝑥2 )
)
(
)
(
)
(
)
)
)
(
= 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑔 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑔 𝑓 (𝑥1 ) − 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑓 (𝑥1 )
(
)
(
)
+ 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑓 (𝑥1 ) − 𝑔 𝑓 (𝑥2 )
(3.50)
Define
(
)
(
)
(
)
(
)
𝜑(𝑥1 , 𝑥2 ) = 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑔 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑔 𝑓 (𝑥1 ) − 𝑔 𝑓 (𝑥2 )
Then 𝑔 ∘ 𝑓 is supermodular if 𝜑 is nonnegative definite and submodular if 𝜑 is nonpositive definite. Using the identity (3.50), 𝜑 can be decomposed into two components
𝜑(𝑥1 , 𝑥2 ) = 𝜑1 (𝑥1 , 𝑥2 ) + 𝜑2 (𝑥1 , 𝑥2 )
(
)
(
)
(
)
𝜑1 (𝑥1 , 𝑥2 ) = 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑔 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑔 𝑓 (𝑥1 )
(
)
− 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑓 (𝑥1 )
(
)
(
)
𝜑2 (𝑥1 , 𝑥2 ) = 𝑔 𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑓 (𝑥1 ) − 𝑔 𝑓 (𝑥2 )
(3.51)
𝜑 will definite if both components are definite.
For any 𝑥1 , 𝑥2 ∈ 𝑥1 , let 𝑎 = 𝑓 (𝑥1 ∧ 𝑥2 ), 𝑏 = 𝑓 (𝑥1 ) and 𝑐 = 𝑓 (𝑥1 ∨ 𝑥2 ). Provided 𝑓 is
monotone, 𝑏 lies between 𝑎 and 𝑐. Substituting in (3.51)
𝜑1 (𝑥1 , 𝑥2 ) = 𝑔(𝑐) + 𝑔(𝑎) − 𝑔(𝑏) − 𝑔(𝑐 + 𝑎 − 𝑏)
and Exercise 3.128 implies
{
𝜑1 (𝑥1 , 𝑥2 ) = 𝑔(𝑐) + 𝑔(𝑎) − 𝑔(𝑏) − 𝑔(𝑐 + 𝑎 − 𝑏)
Now consider 𝜑2 .
}
{
}
≥𝑂
convex
if 𝑔 is
≤0
concave
{ }
{
}
≥
supermodular
𝑓 (𝑥1 ∨ 𝑥2 ) + 𝑓 (𝑥1 ∧ 𝑥2 ) − 𝑓 (𝑥1 ) is
𝑓 (𝑥2 ) if 𝑓 is
≤
submodular
(3.52)
and therefore since 𝑔 is increasing
{
𝜑2 (𝑥1 , 𝑥2 ) =
≥ 0 if 𝑓 is supermodular
≤ 0 if 𝑓 is submodular
(3.53)
Together (3.52) and (3.53) gives the desired result.
3.137
1. Assume that 𝑓 is bounded above in a neighborhood of x0 . Then there
exists a ball 𝐵(𝑥0 ) and constant 𝑀 such that
𝑓 (x) ≤ 𝑀 for every x ∈ 𝐵(𝑥0 )
Since 𝑓 is convex
𝑓 (𝛼x + (1 − 𝛼)x0 ) ≤ 𝛼𝑓 (x) + (1 − 𝛼)𝑓 (x0 ) ≤ 𝛼𝑀 + (1 − 𝛼)𝑓 (x0 )
(3.54)
2. Given x ∈ 𝐵(𝑥0 ) and 𝛼 ∈ [0, 1] let
z = 𝛼x + (1 − 𝛼)x0
(3.55)
Subtracting 𝑓 (x0 ) from (3.54) gives
𝑓 (z) − 𝑓 (x0 ) ≤ 𝛼(𝑀 − 𝑓 (x0 ))
Rewriting (3.55)
(1 − 𝛼)x0 = z − 𝛼x
(1 + 𝛼)x0 = z + 𝛼(2x0 − x)
𝛼
1
z+
(2x0 − x)
x0 =
1+𝛼
1+𝛼
3. Note that
(2x0 − x) = x0 − (x − x0 ) ∈ 𝐵(x0 )
so that
𝑓 (2x0 − x) ≤ 𝑀
and therefore
𝑓 (x0 ) ≤
𝛼
𝛼
1
1
𝑓 (z) +
𝑓 (2x0 − x) ≤
𝑓 (z) +
𝑀
1+𝛼
1+𝛼
1+𝛼
1+𝛼
which implies
(1 + 𝛼)𝑓 (x0 ) ≤ 𝑓 (z) + 𝛼𝑀
𝛼(𝑓 (x0 ) − 𝑀 ) ≤ 𝑓 (z) − 𝑓 (x0 )
4. Combined with (3.56) we have
𝛼(𝑓 (x0 ) − 𝑀 ) ≤ 𝑓 (z) − 𝑓 (x0 ) ≤ 𝛼(𝑀 − 𝑓 (x0 ))
or
∣𝑓 (z) − 𝑓 (x0 )∣ ≤ 𝛼(𝑀 − 𝑓 (x0 ))
and therefore 𝑓 (z) → 𝑓 (x0 ) as z → x0 . 𝑓 is continuous.
(3.56)

3.138
1. Since 𝑆 is open, there exists a ball 𝐵𝑟 (x1 ) ⊆ 𝑆. Let 𝑡 = 1 + 𝑟2 . Then
x0 + 𝑡(x1 − x0 ) ∈ 𝐵𝑟 (𝑥1 ) ⊆ 𝑆.
2. Let 𝑠 = 𝑡−1
𝑡 𝑟. The open ball 𝐵𝑠 (x1 ) of radius 𝑠 centered on x1 is contained in 𝑇 .
Therefore 𝑇 is a neighborhood of x1 .
3. Since 𝑓 is convex, for every y ∈ 𝑇
𝑓 (y) ≤ (1 − 𝛼)𝑓 (x) + 𝛼𝑓 (z) ≤ (1 − 𝛼)𝑀 + 𝛼𝑓 (z) ≤ 𝑀 + 𝑓 (z)
Therefore 𝑓 is bounded on 𝑇 .
3.139 The previous exercise showed that 𝑓 is locally bounded from above for every
x ∈ 𝑆. To show that it is also locally bounded from below, choose some x0 ∈ 𝑆. There
exists some 𝐵(x0 and 𝑀 such that
𝑓 (x) ≤ 𝑀 for every x ∈ 𝐵(x0 )
Choose some 𝑥1 ∈ 𝐵(x0 ) and let x2 = 2x0 − x1 . Then
x2 = 2x0 − x1 = x0 − (x1 − x0 ) ∈ 𝐵(x0 )
and 𝑓 (x2 ) ≤ 𝑀 . Since 𝐹 is convex
𝑓 (x) ≤
1
1
𝑓 (x1 ) + 𝑓 (x2 )
2
2
and therefore
𝑓 (x1 ) ≥ 2𝑓 (x) − 𝑓 (x2 )
Since 𝑓 (x2 ) ≤ 𝑀 , −𝑓 (x2 ) ≥ −𝑀 and therefore
𝑓 (x1 ) ≥ 2𝑓 (x) − 𝑀
so that 𝑓 is bounded from below.
3.140 Let 𝑓 be a convex function defined on an open convex set 𝑆 in a normed linear
space, which is bounded from above in a neighborhood of a single point x0 ∈ 𝑆. By
Exercise 3.138, 𝑓 is bounded above at every x ∈ 𝑆. This implies (Exercise 3.137) that
𝑓 is continuous at every x ∈ 𝑆.
3.141 Without loss of generality, assume 0 ∈ 𝑆. Assume 𝑆 has dimension 𝑛 and let
x1 , x2 , . . . , x𝑛 be a basis for the subspace containing 𝑆. Choose some 𝜆 > 0 small
enough so that
𝑈 = conv {0, 𝜆x1 , 𝜆x2 , . . . , 𝜆𝑥𝑛 } ⊆ 𝑆
Any x ∈ 𝑈 is a convex∑
combination of the points 0, x1 , x2 , . . . , x𝑛 and so there exists
𝛼0 , 𝛼1 , 𝛼2 , . . . , 𝛼𝑛 ≥ 0,
𝛼𝑖 = 1 such that x = 𝛼0 0 + 𝛼1 x1 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛 . By Jensen’s
inequality
𝑓 (x) = 𝑓 (𝛼0 0 + 𝛼1 x1 + ⋅ ⋅ ⋅ + 𝛼𝑛 x𝑛 ) ≤ 𝛼0 𝑓 (0) + 𝛼1 𝑓 (x1 ) + ⋅ ⋅ ⋅ + 𝛼𝑛 𝑓 (x𝑛 )
≤ max{ 𝑓 (0), 𝑓 (x1 ), . . . , 𝑓 (x𝑛 ) }
Therefore, 𝑓 is bounded above on a neighbourhood of some x0 ∈ int 𝑈 (which is
nonempty by Exercise 1.229). By Proposition 3.8, 𝑓 is continuous on 𝑆.
3.142 Clearly, if 𝑓 is convex, it is locally convex at every x ∈ 𝑆, where 𝑆 is the required
neighborhood. To prove the converse, assume to the contrary that 𝑓 is locally convex
at every x ∈ 𝑆 but it is not globally convex. That is, there exists x1 , x2 ∈ 𝑆 such that
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) > 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
Let
)
(
ℎ(𝑡) = 𝑓 𝑡x1 + (1 − 𝑡)x2
Local convexity implies that 𝑓 is continuous at every x ∈ 𝑆 (Corollary 3.8.1), and
therefore continuous on 𝑆. Therefore, ℎ is continuous on [0, 1]. By the continuous
maximum theorem (Theorem 2.3),
𝑇 = arg max ℎ(𝑡)
x∈[x1 ,x2 ]
is nonempty and compact. Let 𝑡0 = max 𝑇 . For every 𝜖 > 0,
ℎ(𝑡0 − 𝜖) ≤ ℎ(𝑡0 ) and ℎ(𝑡0 + 𝜖) < ℎ(𝑡0 )
Let
x0 = 𝑡0 x1 + (1 − 𝑡0 )x2 and x𝜖 = (𝑡0 + 𝜖)x1 + (1 − 𝑡0 − 𝜖)x2
Every neighborhood 𝑉 of x0 contains x−𝜖 , x𝜖 ∈ [x1 , x2 ] with
1
1
1
1
𝑓 (x−𝜖 ) + 𝑓 (x𝜖 ) = ℎ(𝑡0 − 𝜖) + ℎ(𝑡0 + 𝜖) < ℎ(𝑡0 ) = 𝑓 (x0 ) = 𝑓
2
2
2
2
(
1
1
x−𝜖 + x𝜖
2
2
)
contradicting the local convexity of 𝑓 at x0 .
3.143 Assume 𝑓 is quasiconcave. That is for every x1 , x2 ∈ 𝑆 and 0 ≤ 𝛼 ≤ 1
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ min{𝑓 (x1 ), (x2 )}
Multiplying through by −1 reverses the inequality so that
−𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≤ − min{𝑓 (x1 ), 𝑓 (x2 )} = max{−𝑓 (x1 ), −𝑓 (x2 )}
which shows that −𝑓 is quasiconvex. The converse follows analogously.
3.144 Assume 𝑓 is concave, that is
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) for every x1 , x2 ∈ 𝑆 and 0 ≤ 𝛼 ≤ 1
Without loss of generality assume that 𝑓 (x1 ) ≤ 𝑓 (x2 ). Then
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) ≥ 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x1 ) = 𝑓 (x1 ) = min{𝑓 (x1 ), 𝑓 (x2 )}
𝑓 is quasiconcave.
3.145 Let 𝑓 : ℜ → ℜ. Choose any 𝑥1 , 𝑥2 in ℜ with 𝑥1 < 𝑥2 . If 𝑓 is increasing, then
𝑓 (𝑥1 ) ≤ 𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 ) ≤ 𝑓 (𝑥2 )
for every 0 ≤ 𝛼 ≤ 1. The first inequality implies that
𝑓 (𝑥1 ) = min{𝑓 (𝑥1 ), 𝑓 (𝑥2 )} ≤ 𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 )
so that 𝑓 is quasiconcave. The second inequality implies that
𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 ) ≤ max{𝑓 (𝑥1 ), 𝑓 (𝑥2 )} = 𝑓 (𝑥2 )
so that 𝑓 is also quasiconvex.
Conversely, if 𝑓 is decreasing
𝑓 (𝑥1 ) ≥ 𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 ) ≥ 𝑓 (𝑥2 )
for every 0 ≤ 𝛼 ≤ 1. The first inequality implies that
𝑓 (𝑥1 ) = max{𝑓 (𝑥1 ), 𝑓 (𝑥2 )} ≥ 𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 )
so that 𝑓 is quasiconvex. The second inequality implies that
𝑓 (𝛼𝑥1 + (1 − 𝛼)𝑥2 ) ≤ max{𝑓 (𝑥1 ), 𝑓 (𝑥2 )} = 𝑓 (𝑥2 )
so that 𝑓 is also quasiconcave.
3.146
≾𝑓 (𝑐) = { x ∈ 𝑋 : 𝑓 (x) ≤ 𝑎 } = {x ∈ 𝑋 : −𝑓 (x) ≥ −𝑐} = ≿−𝑓 (−𝑐)
3.147 For given 𝑐 and 𝑚, choose any p1 and p2 in ≾𝑣 (𝑐). For any 0 ≤ 𝛼 ≤ 1, let
p̄ = 𝛼p1 + (1 − 𝛼)p2 . The key step is to show that any commodity bundle x which is
affordable at p̄ is also affordable at either p1 or p2 . Assume that x is affordable at p̄,
that is x is in the budget set
x ∈ 𝑋(p̄, 𝑚) = { x : p̄x ≤ 𝑚 }
To show that x is affordable at either p1 or p2 , that is
x ∈ 𝑋(p1 , 𝑚) or x ∈ 𝑋(p2 , 𝑚)
assume to the contrary that
x∈
/ 𝑋(p1 , 𝑚) and x ∈
/ 𝑋(p2 , 𝑚)
This implies that
p1 x > 𝑚 and p2 x > 𝑚
so that
𝛼p1 x > 𝛼𝑚 and (1 − 𝛼)p2 > (1 − 𝛼)𝑚
Summing these two inequalities
p̄x = (𝛼p1 + (1 − 𝛼)p2 )x > 𝑚
contradicting the assumption that x ∈ 𝑋(p̄, 𝑚). We conclude that
𝑋(p̄, 𝑚) ⊆ 𝑋(p1 , 𝑚) ∪ 𝑋(p2 , 𝑚)
Now
𝑣(¯
𝑝, 𝑚) = sup{ 𝑢(x) : x ∈ 𝑋(p̄, 𝑚) }
≤ sup{ 𝑢(x) : x ∈ 𝑋(p1 , 𝑚) ∪ 𝑋(p2 , 𝑚) }
≤𝑐
Therefore p̄ ∈ ≾𝑣 (𝑐) for every 0 ≤ 𝛼 ≤ 1. Thus, ≾𝑣 (𝑐) is convex and so 𝑣 is quasiconvex
(Exercise 3.146).
3.148 Since 𝑓 is quasiconcave
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ min{𝑓 (x1 ), 𝑓 (x2 )} for every x1 , x2 ∈ 𝑆 and 0 ≤ 𝛼 ≤ 1
Since 𝑔 is increasing
)
(
)
(
) (
)
(
𝑔 𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ 𝑔( min{𝑓 (x1 ), 𝑓 (x2 )}) ≥ min{𝑔 𝑓 (x1 ) , 𝑔 𝑓 (x2 ) }
𝑔 ∘ 𝑓 is quasiconcave.
3.149 When 𝜌 ≥ 1, the function
ℎ(x) = 𝛼1 𝑥𝜌1 + 𝛼2 𝑥𝜌2 + . . . 𝛼𝑛 𝑥𝜌𝑛
is convex (Example 3.58) as is 𝑦 1/𝜌 . Therefore
𝑓 (x) = (ℎ(x))
1/𝜌
is an increasing convex function of a convex function and is therefore convex (Exercise
3.133).
3.150 𝑓 is a monotonic transformation of the concave function ℎ(x) = x.
3.151 By Exercise 3.39, there exist linear functionals 𝑓ˆ and 𝑔ˆ and scalars 𝑏 and 𝑐 such
that
𝑓 (x) = 𝑓ˆ(x) + 𝑏 and 𝑔(x) = 𝑔ˆ(x) + 𝑐
The upper contour set
≿ℎ (𝑎) = { 𝑥 ∈ 𝑆 : ℎ(x) ≥ 𝑎 }
𝑓ˆ(𝑥) + 𝑏
≥ 𝑎}
= {𝑥 ∈ 𝑆 :
𝑔ˆ(𝑥) + 𝑐
= { 𝑥 ∈ ℜ𝑛 : 𝑓ˆ(x) + 𝑏 ≥ 𝑎ˆ
𝑔(x) + 𝑎𝑐 }
+
𝑔 (x) ≥ 𝑏 − 𝑎𝑐 }
= { 𝑥 ∈ ℜ𝑛+ : 𝑓ˆ(x) − 𝑎ˆ
which is a halfspace in 𝑋 and therefore convex. Similarly, the lower contour set
≾ℎ (𝑎) = { 𝑥 ∈ 𝑆 : ℎ(x) ≥ 𝑎 }
is also a halfspace and hence convex. Therefore ℎ is both quasiconcave and quasiconvex.
3.152 For 𝑎 ≤ 0
≿(𝑎) = { 𝑥 ∈ 𝑆 : ℎ(x) ≥ 0 } = 𝑆
which is convex. For 𝑎 > 0
≿ℎ (𝑎) = { 𝑥 ∈ 𝑆 : ℎ(x) ≥ 𝑎 }
𝑓 (x)
≥ 𝑎}
𝑔(x)
= { 𝑥 ∈ 𝑆 : 𝑓 (x) ≥ 𝑎𝑔(x) }
= {𝑥 ∈ 𝑆 :
= { 𝑥 ∈ 𝑆 : 𝑓 (x) − 𝑎𝑔(x) ≥ 0 }
is convex since 𝑓 − 𝑎𝑔 = 𝑓 + 𝑎(−𝑔) is concave (Exercises 3.124 and 3.131). Since ≿ℎ (𝑎)
is convex for every 𝑎, ℎ is quasiconcave.
3.153
𝑓 (x)
𝑔ˆ(x)
ℎ(x) =
where 𝑔ˆ = 1/𝑔 is positive and convex by Exercise 3.135. By the previous exercise, ℎ is
quasiconcave.
3.154 Let 𝐹 = log 𝑓 . If 𝐹 is concave, 𝑓 (x) = 𝑒𝐹 (x) is an increasing function of
(quasi)concave function, and hence is quasiconcave (Exercise 3.148).
3.155 Let
𝐹 (x) = log 𝑓 (x) =
𝑛
∑
𝛼𝑖 log 𝑓𝑖 (x)
𝑖=1
As the sum of concave functions, 𝐹 is concave (Exercise 3.131). By the previous
exercise, 𝑓 is quasiconcave.
¯ = 𝛼𝜽1 + (1 − 𝛼)𝜽 2
3.156 Assume x1 , x2 and x̄ are optimal solutions for 𝜽1 , 𝜽2 and 𝜽
respectively. That is
𝑓 (x1 , 𝜽 1 ) = 𝑣(𝜽 1 )
𝑓 (x2 , 𝜽 2 ) = 𝑣(𝜽 2 )
¯ = 𝑣(𝜽)
¯
𝑓 (x̄, 𝜽)
Since 𝑓 is convex in 𝜽
¯ = 𝑓 (x̄, 𝜽)
¯
𝑣(𝜽)
= 𝑓 (x̄, 𝛼𝜽 1 + (1 − 𝛼)𝜽 2 )
≤ 𝛼𝑓 (x̄, 𝜽 1 ) + (1 − 𝛼)𝑓 (x∗ , 𝜽2 )
≤ 𝛼𝑓 (x1 , 𝜽1 ) + (1 − 𝛼)𝑓 (x2 , 𝜽 2 )
= 𝛼𝑣(𝜽 1 ) + (1 − 𝛼)𝑣(𝜽 2 )
𝑣 is convex.
3.157 Assume to the contrary that x1 and x2 are distinct optimal solutions, that is
∕ x2 , for some 𝜽 ∈ Θ∗ , so that
x1 , x2 ∈ 𝜑(𝜽), x1 =
𝑓 (x1 , 𝜽) = 𝑓 (x2 , 𝜽) = 𝑣(𝜽) ≥ 𝑓 (x, 𝜽) for every x ∈ 𝐺(𝜽)
Let x̄ = 𝛼x1 + (1 − 𝛼)x2 for 𝛼 ∈ (0, 1). Since 𝐺(𝜽) is convex, x̄ is feasible. Since 𝑓 is
strictly quasiconcave
𝑓 (x̄, 𝜽) > min{ 𝑓 (x1 , 𝜽), 𝑓 (x2 , 𝜽) } = 𝑣(𝜽)
contradicting the optimality of x1 and x2 . We conclude that 𝜑(𝜽) is single-valued for
every 𝜽 ∈ Θ∗ . In other words, 𝜑 is a function.
3.158
1. The value function is
v(x0) = sup_{x ∈ Γ(x0)} U(x)

where

U(x) = ∑_{t=0}^∞ β^t f(x_t, x_{t+1})
and
Γ(𝑥0 ) = {x ∈ 𝑋 ∞ : 𝑥𝑡+1 ∈ 𝐺(𝑥𝑡 ), 𝑡 = 0, 1, 2, . . . }
Since an optimal policy exists (Exercise 2.125), the maximum is attained and
𝑣(𝑥0 ) = max 𝑈 (x)
x∈Γ(𝑥0 )
(3.57)
It is straightforward to show that
∙ 𝑈 (x) is strictly concave and
∙ Γ(𝑥0 ) is convex
Applying the Concave Maximum Theorem (Theorem 3.1) to (3.57), we conclude
that the value function 𝑣 is strictly concave.
2. Assume to the contrary that x′ and x′′ are distinct optimal plans, so that
𝑣(𝑥0 ) = 𝑈 (x′ ) = 𝑈 (x′′ )
Let x̄ = 𝛼x′ + (1 − 𝛼)x′′ . Since Γ(𝑥0 ) is convex, x̄ is feasible and
𝑈 (x̄) > 𝛼𝑈 (x′ ) + (1 − 𝛼)𝑈 (x′′ ) = 𝑈 (x′ )
which contradicts the optimality of x′ . We conclude that the optimal plan is
unique.
3.159 We observe that
∙ 𝑢(𝐹 (𝑘) − 𝑦) is supermodular in 𝑦 (Exercise 2.51)
∙ 𝑢(𝐹 (𝑘) − 𝑦) displays strictly increasing differences in (𝑘, 𝑦) (Exercise 3.129)
∙ 𝐺(𝑘) = [0, 𝐹 (𝑘)] is increasing.
Applying Exercise 2.126, we can conclude that the optimal policy (𝑘0 , 𝑘1∗ , 𝑘2∗ , . . . ) is a
monotone sequence. Since 𝑋 is compact, k∗ is a bounded monotone sequence, which
converges monotonically to some steady state 𝑘 ∗ (Exercise 1.101).
3.160 Suppose there exists (x∗ , y∗ ) ∈ 𝑋 × 𝑌 such that
𝑓 (x, y∗ ) ≤ 𝑓 (x∗ , y∗ ) ≤ 𝑓 (x∗ , y) for every x ∈ 𝑋 and y ∈ 𝑌
Let 𝑣 = 𝑓 (x∗ , y∗ ). Since
𝑓 (x, y∗ ) ≤ 𝑣 for every x ∈ 𝑋
max 𝑓 (𝑥, y∗ ) ≤ 𝑣
x∈𝑋
and therefore
min max 𝑓 (x, y) ≤ max 𝑓 (x, y∗ ) ≤ 𝑣
y∈𝑌 x∈𝑋
x∈𝑋
Similarly
max min 𝑓 (x, y) ≥ 𝑣
x∈𝑋 y∈𝑦
Combining the last two inequalities, we have
max min 𝑓 (x, y) ≥ 𝑣 ≥ min max 𝑓 (𝑥, y)
x∈𝑋 y∈𝑦
y∈𝑌 x∈𝑋
Together with (3.28), this implies equality
max min 𝑓 (x, y) = min max 𝑓 (x, y)
x∈𝑋 y∈𝑌
y∈𝑌 x∈𝑋
Conversely, suppose that
max min 𝑓 (x, y) = 𝑣 = min max 𝑓 (x, y)
x∈𝑋 y∈𝑌
y∈𝑌 x∈𝑋
The function
𝑔(x) = min 𝑓 (x, y)
y∈𝑌
is a continuous function (Theorem 2.3) on a compact set 𝑋. By the Weierstrass theorem
(Theorem 2.2), there exists x∗ which maximizes 𝑔 on 𝑋, that is
𝑔(x∗ ) = min 𝑓 (x∗ , y) = max 𝑔(x) = max min 𝑓 (x, y) = 𝑣
y∈𝑌
x∈𝑋
x∈𝑋 y∈𝑌
which implies that
𝑓 (x∗ , y) ≥ 𝑣 for every y ∈ 𝑌
Similarly, there exists y ∈ 𝑌 such that
𝑓 (x, y∗ ) ≤ 𝑣 for every x ∈ 𝑋
Combining these inequalities, we have
𝑓 (x, y∗ ) ≤ 𝑣 ≤ 𝑓 (x∗ , y) for every x ∈ 𝑋 and y ∈ 𝑌
In particular, we have
𝑓 (x∗ , y∗ ) ≤ 𝑣 ≤ 𝑓 (x∗ , y∗ )
so that 𝑣 = 𝑓 (x∗ , y∗ ) as required.
3.161 For any 𝑥 ∈ 𝑋 and 𝑦 ∈ 𝑌 , let
𝑔(𝑥) = min 𝑓 (𝑥, 𝑦) and ℎ(𝑦) = max 𝑓 (𝑥, 𝑦)
𝑦∈𝑌
𝑥∈𝑋
Then
𝑔(𝑥) = min 𝑓 (𝑥, 𝑦) ≤ max 𝑓 (𝑥, 𝑦) = ℎ(𝑦)
𝑦∈𝑌
𝑥∈𝑋
and therefore
max 𝑔(𝑥) ≤ max ℎ(𝑦)
𝑥∈𝑋
𝑦∈𝑌
That is

max_x min_y f(x, y) ≤ min_y max_x f(x, y)
3.162 Clearly 𝑓 (𝑥) = 𝑥𝑎 his homogeneous of degree 𝑎. Conversely assume 𝑓 is homogeneous of degree 𝑎, that is
𝑓 (𝑡𝑥) = 𝑡𝑎 𝑓 (𝑥)
Letting 𝑥 = 1
𝑓 (𝑡) = 𝑡𝑎 𝑓 (1)
Setting 𝑓 (1) = 𝐴 ∈ ℜ and interchanging 𝑥 and 𝑡 yields the result.
3.163

f(tx) = ( a_1 (t x_1)^ρ + a_2 (t x_2)^ρ + . . . + a_n (t x_n)^ρ )^{1/ρ}
      = t ( a_1 x_1^ρ + a_2 x_2^ρ + . . . + a_n x_n^ρ )^{1/ρ}
      = t f(x)
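A quick numerical confirmation that the CES functional is homogeneous of degree one (the parameter values below are arbitrary illustrative choices):

```python
import numpy as np

def ces(x, a, rho):
    """CES functional f(x) = (sum_i a_i * x_i**rho)**(1/rho)."""
    return (a * x**rho).sum() ** (1.0 / rho)

a = np.array([0.3, 0.5, 0.2])
rho = 0.5
x = np.array([1.0, 4.0, 9.0])

for t in [0.5, 2.0, 10.0]:
    assert np.isclose(ces(t * x, a, rho), t * ces(x, a, rho))
print("f(tx) = t f(x) verified for sample t")
```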
3.164 For 𝛽 ∈ ℜ++
ℎ(𝛽𝑡) = 𝑓 (𝛽𝑡x0 ) = 𝛽 𝑘 𝑓 (𝑡x0 ) = 𝛽 𝑘 ℎ(𝑡)
3.165 Suppose that x∗ minimizes the cost of producing output 𝑦 at prices w. That is
w𝑇 x∗ ≤ w𝑇 x
for every x ∈ 𝑉 (𝑦)
It follows that
𝑡w𝑇 x∗ ≤ 𝑡w𝑇 x
for every x ∈ 𝑉 (𝑦)
for every 𝑡 > 0, verifying that x∗ minimizes the cost of producing 𝑦 at prices 𝑡w.
Therefore
𝑐(𝑡w, 𝑦) = (𝑡w)x∗ = 𝑡(w𝑇 x∗ ) = 𝑡𝑐(w, 𝑦)
𝑐(w, 𝑦) homogeneous of degree one in input prices w.
3.166 For given prices w, let x∗ minimize the cost of producing one unit of output, so
that 𝑐(w, 1) = w𝑇 x∗ . Clearly 𝑓 (x∗ ) = 1 where 𝑓 is the production function.
Now consider any output 𝑦. Since 𝑓 is homogeneous
𝑓 (𝑦x∗ ) = 𝑦𝑓 (x∗ ) = 𝑦
Therefore 𝑦x∗ is sufficient to produce 𝑦, so that
𝑐(w, 𝑦) ≤ w𝑇 (𝑦x∗ ) = 𝑦w𝑇 x∗ = 𝑦𝑐(w, 1)
Suppose that
𝑐(w, 𝑦) < w𝑇 (𝑦x∗ ) = 𝑦𝑐(w, 1)
Then there exists x′ such that 𝑓 (x′ ) = 𝑦 and
w𝑇 x′ < w𝑇 (𝑦x∗ )
which implies that
w
𝑇
(
x′
𝑦
)
< w𝑇 x∗ = 𝑐(w, 1)
Since f is homogeneous

f(x′/y) = (1/y) f(x′) = 1
Therefore, x′ is a lower cost method of producing one unit of output, contradicting the
definition of x∗ . We conclude that
𝑐(w, 𝑦) = 𝑦𝑐(w, 1)
𝑐(w, 𝑦) is homogeneous of degree one in 𝑦.
3.167 If the consumer’s demand is invariant to proportionate changes in all prices and
income, so also will the derived utility. More formally, suppose that x∗ maximizes
utility at prices p and income 𝑚, that is
x∗ ≿ x
for every x ∈ 𝑋(p, 𝑚)
Then
𝑣(p, 𝑚) = 𝑢(x∗ )
Since 𝑋(𝑡p, 𝑡𝑚) = 𝑋(p, 𝑚)
x∗ ≿ x
for every x ∈ 𝑋(𝑡p, 𝑡𝑚)
and
𝑣(𝑡p, 𝑡𝑚) = 𝑢(x∗ ) = 𝑣(p, 𝑚)
3.168 Assume 𝑓 is homogeneous of degree one, so that
𝑓 (𝑡x) = 𝑡𝑓 (x)
for every 𝑡 > 0
Let (x, 𝑦) ∈ epi 𝑓 , so that
𝑓 (x) ≤ 𝑦
For any 𝑡 > 0
𝑓 (𝑡x) = 𝑡𝑓 (x) ≤ 𝑡𝑦
which implies that (𝑡x, 𝑡𝑦) ∈ epi 𝑓 . Therefore epi 𝑓 is a cone.
Conversely assume epi 𝑓 is a cone. Let x ∈ 𝑆 and define 𝑦 = 𝑓 (x). Then (x, 𝑦) ∈ epi 𝑓
and therefore (𝑡x, 𝑡𝑦) ∈ epi 𝑓 so
𝑓 (𝑡x) ≤ 𝑡𝑦
Now suppose to the contrary that
𝑓 (𝑡x) = 𝑧 < 𝑡𝑦 = 𝑡𝑓 (x)    (3.58)
Then (𝑡x, 𝑧) ∈ epi 𝑓 . Since epi 𝑓 is a cone, we must have (x, 𝑧/𝑡) ∈ epi 𝑓 so that
𝑓 (x) ≤ 𝑧/𝑡
and
𝑡𝑓 (x) ≤ 𝑧 = 𝑓 (𝑡x)
contradicting (3.58). We conclude that
𝑓 (𝑡x) = 𝑡𝑓 (x) for every 𝑡 > 0
3.169 Take any x1 and x2 in 𝑆 and let
𝑦1 = 𝑓 (x1 ) > 0 and 𝑦2 = 𝑓 (x2 ) > 0
Since 𝑓 is homogeneous of degree one,
𝑓 (x1 /𝑦1 ) = 𝑓 (x2 /𝑦2 ) = 1
Since 𝑓 is quasiconcave
𝑓 (𝛼 x1 /𝑦1 + (1 − 𝛼) x2 /𝑦2 ) ≥ 1
for every 0 ≤ 𝛼 ≤ 1. Choose 𝛼 = 𝑦1 /(𝑦1 + 𝑦2 ) so that (1 − 𝛼) = 𝑦2 /(𝑦1 + 𝑦2 ). Then
𝑓 (x1 /(𝑦1 + 𝑦2 ) + x2 /(𝑦1 + 𝑦2 )) ≥ 1
Again using the homogeneity of 𝑓 , this implies
𝑓 (x1 + x2 ) ≥ 𝑦1 + 𝑦2 = 𝑓 (x1 ) + 𝑓 (x2 )
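A quick numerical spot-check of this superadditivity property, not part of the original solution: the Cobb-Douglas function below is an arbitrary strictly positive, quasiconcave, linearly homogeneous example.

import numpy as np

def f(x):
    return x[0] ** 0.4 * x[1] ** 0.6     # homogeneous of degree one, quasiconcave

rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2 = rng.uniform(0.1, 10.0, size=(2, 2))
    assert f(x1 + x2) >= f(x1) + f(x2) - 1e-9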
3.170 Let 𝑓 ∈ 𝐹 (𝑆) be a strictly positive definite, quasiconcave functional which is
homogeneous of degree one. For any x1 , x2 in 𝑆 and 0 ≤ 𝛼 ≤ 1, 𝛼x1 and (1 − 𝛼)x2 are in 𝑆
and therefore
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ 𝑓 (𝛼x1 ) + 𝑓 ((1 − 𝛼)x2 )
since 𝑓 is superadditive (Exercise 3.169). But
𝑓 (𝛼x1 ) = 𝛼𝑓 (x1 )
𝑓 ((1 − 𝛼)x2 ) = (1 − 𝛼)𝑓 (x2 )
by homogeneity. Substituting in (3.58), we conclude that
𝑓 (𝛼x1 + (1 − 𝛼)x2 ) ≥ 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 )
𝑓 is concave.
3.171 Assume that 𝑓 is strictly positive definite, quasiconcave and homogeneous of
degree 𝑘, 0 < 𝑘 < 1. Define
ℎ(x) = (𝑓 (x))^{1/𝑘}
Then ℎ is quasiconcave (Exercise 3.148). Further, for every 𝑡 > 0
ℎ(𝑡x) = (𝑓 (𝑡x))^{1/𝑘} = (𝑡^𝑘 𝑓 (x))^{1/𝑘} = 𝑡 (𝑓 (x))^{1/𝑘} = 𝑡ℎ(x)
so that ℎ is homogeneous of degree 1. By Exercise 3.170, ℎ is concave.
𝑓 (x) = (ℎ(x))^𝑘
That is 𝑓 = 𝑔 ∘ ℎ where
𝑔(𝑦) = 𝑦^𝑘
is monotone and concave provided 𝑘 ≤ 1. By Exercise 3.133, 𝑓 = 𝑔 ∘ ℎ is concave.
3.172 Continuity is a necessary and sufficient condition for the existence of a utility
function representing ≿ (Remark 2.9).
Suppose 𝑢 represents the homothetic preference relation ≿. For any x1 , x2 ∈ 𝑆
𝑢(x1 ) = 𝑢(x2 ) =⇒ x1 ∼ x2 =⇒ 𝑡x1 ∼ 𝑡x2 =⇒ 𝑢(𝑡x1 ) = 𝑢(𝑡x2 ) for every 𝑡 > 0
Conversely, if 𝑢 is a homothetic functional,
x1 ∼ x2 =⇒ 𝑢(x1 ) = 𝑢(x2 ) =⇒ 𝑢(𝑡x1 ) = 𝑢(𝑡x2 ) =⇒ 𝑡x1 ∼ 𝑡x2 for every 𝑡 > 0
3.173 Suppose that 𝑓 = 𝑔 ∘ ℎ where 𝑔 is strictly increasing and ℎ is homogeneous of
degree 𝑘. Then
ℎ̂(x) = (ℎ(x))^{1/𝑘}
is homogeneous of degree one and 𝑓 = 𝑔̂ ∘ ℎ̂ where
𝑔̂(𝑦) = 𝑔(𝑦^𝑘 )
is increasing.
3.174 Assume x1 , x2 ∈ 𝑆 with
𝑓 (x1 ) = 𝑔(ℎ(x1 )) = 𝑔(ℎ(x2 )) = 𝑓 (x2 )
Since 𝑔 is strictly increasing, this implies that
ℎ(x1 ) = ℎ(x2 )
Since ℎ is homogeneous
ℎ(𝑡x1 ) = 𝑡𝑘 ℎ(x1 ) = 𝑡𝑘 ℎ(x2 ) = ℎ(𝑡x2 )
for some 𝑘. Therefore
𝑓 (𝑡x1 ) = 𝑔(ℎ(𝑡x1 )) = 𝑔(ℎ(𝑡x2 )) = 𝑓 (𝑡x2 )
3.175 Let x0 ∕= 0 be any point in 𝑆, and define 𝑔 : ℜ → ℜ by
𝑔(𝛼) = 𝑓 (𝛼x0 )
Since 𝑓 is strictly increasing, so is 𝑔 and therefore 𝑔 has a strictly increasing inverse
𝑔 −1 . Let ℎ = 𝑔 −1 ∘ 𝑓 so that 𝑓 = 𝑔 ∘ ℎ.
We need to show that ℎ is homogeneous. For any x ∈ 𝑆, there exists 𝛼 such that
𝑔(𝛼) = 𝑓 (𝛼x0 ) = 𝑓 (x)
that is 𝛼 = ℎ(x) = 𝑔 −1 (𝑓 (x)). Since 𝑓 is homothetic
𝑔(𝑡𝛼) = 𝑓 (𝑡𝛼x0 ) = 𝑓 (𝑡x) for every 𝑡 > 0
and therefore
ℎ(𝑡x) = 𝑔 −1 (𝑓 (𝑡x)) = 𝑔 −1 (𝑓 (𝑡𝛼x0 )) = 𝑔 −1 𝑔(𝑡𝛼) = 𝑡𝛼 = 𝑡ℎ(x)
ℎ is homogeneous of degree one.
3.176 Let 𝑓 be the production function. If 𝑓 is homothetic, there exists (Exercise 3.175)
a linearly homogeneous function ℎ and strictly increasing function 𝑔 such that 𝑓 = 𝑔 ∘ℎ.
𝑐(w, 𝑦) = min_x { w^𝑇 x : 𝑓 (x) ≥ 𝑦 }
= min_x { w^𝑇 x : 𝑔(ℎ(x)) ≥ 𝑦 }
= min_x { w^𝑇 x : ℎ(x) ≥ 𝑔^{−1} (𝑦) }
= 𝑔^{−1} (𝑦) 𝑐(w, 1)
by Exercise 3.166.
3.177 Let 𝑓 : 𝑆 → ℜ be positive, strictly increasing, homothetic and quasiconcave. By
Exercise 3.175, there exists a linearly homogeneous function ℎ : 𝑆 → ℜ and strictly
increasing function 𝑔 ∈ 𝐹 (𝑅) such that 𝑓 = 𝑔 ∘ ℎ. ℎ = 𝑔 −1 ∘ 𝑓 is positive, quasiconcave
(Exercise 3.148) and homogeneous of degree one. By Proposition 3.12, ℎ is concave
and therefore 𝑓 = 𝑔 ∘ ℎ is concavifiable.
3.178 Since 𝐻𝑓 (𝑐) is a supporting hyperplane to 𝑆 at x0 , then
𝑓 (x0 ) = 𝑐
and either
𝑓 (x) ≥ 𝑐 = 𝑓 (x0 ) for every x ∈ 𝑆
or
𝑓 (x) ≤ 𝑐 = 𝑓 (x0 ) for every x ∈ 𝑆
3.179 Suppose to the contrary that y = (ℎ, 𝑞) ∈ int 𝐴 ∩ 𝐵. Then y ≿ y∗ . By strict
convexity
y𝛼 = 𝛼y + (1 − 𝛼)y∗ ≻ y∗ for every 𝛼 ∈ (0, 1)
Since y ∈ int 𝐴, y𝛼 ∈ 𝐴 for 𝛼 sufficiently small. That is, there exists some 𝛼 such that
y𝛼 is feasible and y𝛼 ≻ y∗ , contradicting the optimality of y∗ .
3.180 For notational simplicity, let 𝑓 be the linear functional which separates 𝐴 and 𝐵
in Example 3.77. 𝑓 (y) measures the cost of the plan y = (ℎ, 𝑞), that is 𝑓 (y) = 𝑤ℎ + 𝑝𝑞.
Assume to the contrary there exists a preferred lifestyle in 𝑋, that is there exists some
y = (ℎ, 𝑞) ∈ 𝑋 such that y ≻ y∗ = (ℎ∗ , 𝑞 ∗ ). Since y ∈ 𝐵, 𝑓 (y) ≥ 𝑓 (y∗ ) by (3.29). On
the other hand, y ∈ 𝑋 which implies that 𝑓 (y) ≤ 𝑓 (y∗ ). Consequently, 𝑓 (y) = 𝑓 (y∗ ).
By continuity, there exists some 𝛼 < 1 such that 𝛼y ≻ y∗ which implies that 𝛼y ∈ 𝐵.
By linearity
𝑓 (𝛼y) = 𝛼𝑓 (y) < 𝑓 (y) = 𝑓 (y∗ )
contrary to (3.29). This contradiction establishes that y∗ is the best choice in budget
set 𝑋.
3.181 By Proposition 3.7, epi 𝑓 is a convex set in 𝑋 × ℜ with (x0 , 𝑓 (x0 )) a point on
its boundary. By Corollary 3.2.2 of the Separating Hyperplane Theorem, there exists
a linear functional 𝜑 ∈ (𝑋 × ℜ)′ such that
𝜑(x, 𝑦) ≥ 𝜑(x0 , 𝑓 (x0 )) for every (x, 𝑦) ∈ epi 𝑓    (3.59)
𝜑 can be decomposed into two components (Exercise 3.47)
𝜑(x, 𝑦) = −𝑔(x) + 𝛼𝑦
The assumption that x0 ∈ int 𝑆 ensures that 𝛼 > 0 and we can normalize so that
𝛼 = 1. Substituting in (3.59)
−𝑔(x) + 𝑓 (x) ≥ −𝑔(x0 ) + 𝑓 (x0 )
𝑓 (x) ≥ 𝑓 (x0 ) + 𝑔(x − x0 )
for every x ∈ 𝑆.
3.182 By Exercise 3.72, there exists a unique point x0 ∈ 𝑆 such that
(x0 − y)𝑇 (x − x0 ) ≥ 0 for every x ∈ 𝑆
Define the linear functional (Exercise 3.64)
𝑓 (x) = (x0 − y)𝑇 x
and let 𝑐 = 𝑓 (x0 ). For all x ∈ 𝑆
𝑓 (x) − 𝑓 (x0 ) = 𝑓 (x − x0 ) = (x0 − y)𝑇 (x − x0 ) ≥ 0
and therefore
𝑓 (x) ≥ 𝑓 (x0 ) = 𝑐 for every x ∈ 𝑆
Furthermore
𝑓 (x0 ) − 𝑓 (y) = 𝑓 (x0 − y) = (x0 − y)^𝑇 (x0 − y) = ∥x0 − y∥^2 > 0
since y ∕= x0 . Therefore 𝑓 (x0 ) > 𝑓 (y) and
𝑓 (y) < 𝑐 ≤ 𝑓 (x) for every x ∈ 𝑆
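The construction above can be illustrated numerically; the sketch below is not part of the original solution and assumes, purely for concreteness, that 𝑆 is the box [0, 1]^2, so the closest point x0 to y is obtained by coordinatewise clipping.

import numpy as np

S_low, S_high = 0.0, 1.0
y = np.array([2.0, -0.5])                 # a point outside S
x0 = np.clip(y, S_low, S_high)            # closest point of S to y

normal = x0 - y                           # f(x) = (x0 - y).x separates y from S
c = normal @ x0

assert normal @ y < c                     # f(y) < c
for x in np.random.default_rng(1).uniform(S_low, S_high, size=(1000, 2)):
    assert normal @ x >= c - 1e-9         # f(x) >= c on S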
3.183 If y ∈ b(𝑆), y ∈ 𝑆^𝑐 and there exists a sequence of points {y𝑛 } ∈ 𝑆^𝑐 converging to y (Exercise 1.105). That is, there exists a sequence of nonboundary points {y𝑛 } ∉ 𝑆 converging to y.
converging to y. For every point y𝑛 , there is a linear functional 𝑔 𝑛 ∈ 𝑋 ∗ and 𝑐𝑛 such
that
𝑔 𝑛 (y𝑛 ) < 𝑐𝑛 ≤ 𝑔 𝑛 (x)
for every x ∈ 𝑆
Define 𝑓 𝑛 = 𝑔 𝑛 / ∥𝑔 𝑛 ∥. By construction, the sequence of linear functionals 𝑓 𝑛 belong
to the unit ball in 𝑋 ∗ (since ∥𝑓 ∥ = 1). Since 𝑋 ∗ is finite dimensional, the unit ball is
compact and so 𝑓^𝑛 has a convergent subsequence with limit 𝑓 such that
𝑓 (y) ≤ 𝑓 (x) for every x ∈ 𝑆̄
A fortiori
𝑓 (y) ≤ 𝑓 (x) for every x ∈ 𝑆
3.184 There are two possible cases.
y ∉ 𝑆̄ : By Exercise 3.182, there exists a hyperplane which separates y and 𝑆̄, which a fortiori separates y and 𝑆, that is
𝑓 (y) ≤ 𝑓 (x) for every x ∈ 𝑆
y ∈ 𝑆̄ : Since y ∉ 𝑆, y must be a boundary point of 𝑆̄. By the previous exercise, there exists a supporting hyperplane at y, that is there exists a continuous linear functional 𝑓 ∈ 𝑋 ∗ such that
𝑓 (y) ≤ 𝑓 (x) for every x ∈ 𝑆
3.185
1. 𝑓 (𝑆) ⊆ ℜ.
2. 𝑓 (𝑆) is convex and hence an interval (Exercise 1.160).
3. 𝑓 (𝑆) is open in ℜ (Proposition 3.2).
3.186 𝑆 is nonempty and convex and 0 ∉ 𝑆. (Otherwise, there exists x ∈ 𝐴 and y ∈ 𝐵
such that 0 = y + (−x) which implies that x = y contradicting the assumption that
𝐴 ∩ 𝐵 = ∅.) Thus there exists a continuous linear functional 𝑓 ∈ 𝑋 ∗ such that
𝑓 (y − x) ≥ 𝑓 (0) = 0
for every x ∈ 𝐴, y ∈ 𝐵
so that
𝑓 (x) ≤ 𝑓 (y) for every x ∈ 𝐴, y ∈ 𝐵
Let 𝑐 = supx∈𝐴 𝑓 (x). Then
𝑓 (x) ≤ 𝑐 ≤ 𝑓 (y) for every x ∈ 𝐴, y ∈ 𝐵
By Exercise 3.185, 𝑓 (int 𝐴) is an open interval in (−∞, 𝑐], hence 𝑓 (int 𝐴) ⊆ (−∞, 𝑐),
so that 𝑓 (x) < 𝑐 for every x ∈ int 𝐴. Similarly, 𝑓 (int 𝐵) > 𝑐 and
𝑓 (x) < 𝑐 < 𝑓 (y) for every x ∈ int 𝐴, y ∈ int 𝐵
3.187 Since int 𝐴 ∩ 𝐵 = ∅, int 𝐴 and 𝐵 can be separated. That is, there exists a
continuous linear functional 𝑓 ∈ 𝑋 ∗ and a number 𝑐 such that
𝑓 (x) ≤ 𝑐 ≤ 𝑓 (y)
for every x ∈ 𝐴, y ∈ int 𝐵
which implies that
𝑓 (x) ≤ 𝑐 ≤ 𝑓 (y)
for every x ∈ 𝐴, y ∈ 𝐵
since
𝑐 ≤ inf_{y∈int 𝐵} 𝑓 (y) = inf_{y∈𝐵} 𝑓 (y)
Conversely, suppose that 𝐴 and 𝐵 can be separated. That is, there exists 𝑓 ∈ 𝑋 ∗ such
that
𝑓 (x) ≤ 𝑐 ≤ 𝑓 (y)
for every x ∈ 𝐴, y ∈ 𝐵
Then 𝑓 (int 𝐴) is an open interval in [𝑐, ∞), which is disjoint from the interval 𝑓 (𝐵) ⊆
(−∞, 𝑐]. This implies that int 𝐴 ∩ 𝐵 = ∅.
3.188 Since x0 ∈ b(𝑆), {x0 } ∩ int 𝑆 = ∅ and int 𝑆 ∕= ∅. By Corollary 3.2.1, {x0 } and
𝑆 can be separated, that is there exist 𝑓 ∈ 𝑋 ∗ such that
𝑓 (x0 ) ≤ 𝑓 (x) for every x ∈ 𝑆
3.189 Let x ∈ 𝐶. Since 𝐶 is a cone, 𝜆x ∈ 𝐶 for every 𝜆 ≥ 0 and therefore
𝑓 (𝜆x) ≥ 𝑐
or
𝑓 (x) ≥ 𝑐/𝜆
for every 𝜆 ≥ 0
Taking the limit as 𝜆 → ∞ implies that
𝑓 (x) ≥ 0
for every x ∈ 𝐶
3.190 First note that 0 ∈ 𝑍 and therefore 𝑓 (0) = 0 ≤ 𝑐 so that 𝑐 ≥ 0. Suppose that
there exists some z ∈ 𝑍 for which 𝑓 (z) = 𝜖 ∕= 0. By linearity, this implies
𝑓 ((2𝑐/𝜖) z) = (2𝑐/𝜖) 𝑓 (z) = 2𝑐 > 𝑐
which contradicts the requirement
𝑓 (z) ≤ 𝑐 for every z ∈ 𝑍
3.191 By Corollary 3.2.1, there exists 𝑓 ∈ 𝑋 ∗ such that
𝑓 (z) ≤ 𝑐 ≤ 𝑓 (x)
for every x ∈ 𝑆, z ∈ 𝑍
By Exercise 3.190
𝑓 (z) = 0 for every z ∈ 𝑍
and therefore
𝑓 (x) ≥ 0 for every x ∈ 𝑆
Therefore 𝑍 is contained in the hyperplane 𝐻𝑓 (0) which separates 𝑆 from 𝑍.
3.192 Combining Theorem 3.2 and Corollary 3.2.1, there exists a hyperplane 𝐻𝑓 (𝑐)
such that
𝑓 (x) ≤ 𝑐 ≤ 𝑓 (y)
for every x ∈ 𝐴, y ∈ 𝐵
and such that
𝑓 (x) < 𝑐 ≤ 𝑓 (y)
for every x ∈ int 𝐴, y ∈ 𝐵
Since int 𝐴 ∕= ∅, there exists some x ∈ int 𝐴 with 𝑓 (x) < 𝑐. Hence 𝐴 ⊈ 𝑓 −1 (𝑐) = 𝐻𝑓 (𝑐).
3.193 Follows directly from the basic separation theorem, since 𝐴 = int 𝐴 and 𝐵 =
int 𝐵.
3.194 Let 𝑆 = 𝐵 − 𝐴. Then
1. 𝑆 is a nonempty, closed, convex set (Exercise 1.203).
2. 0 ∉ 𝑆.
There exists a continuous linear functional 𝑓 ∈ 𝑋 ∗ such that
𝑓 (x) ≥ 𝑐 > 𝑓 (0) = 0
Figure 3.2: 𝐴 and 𝐵 cannot be strongly separated.
for every z ∈ 𝑆 (Exercise 3.182). For every x ∈ 𝐴, y ∈ 𝐵, z = y − x ∈ 𝑆 and
𝑓 (z) = 𝑓 (y) − 𝑓 (x) ≥ 𝑐 > 0
or
𝑓 (x) + 𝑐 ≤ 𝑓 (y)
which implies that
sup_{x∈𝐴} 𝑓 (x) + 𝑐 ≤ inf_{y∈𝐵} 𝑓 (y)
and
sup_{x∈𝐴} 𝑓 (x) < inf_{y∈𝐵} 𝑓 (y)
3.195 No. See Figure 3.2.
3.196
1. Assume that there exists a convex neighborhood 𝑈 ∋ 0 such that
(𝐴 + 𝑈 ) ∩ 𝐵 = ∅
Then (𝐴 + 𝑈 ) is convex and 𝐴 ⊂ int (𝐴 + 𝑈 ) ∕= ∅ and int (𝐴 + 𝑈 ) ∩ 𝐵 = ∅. By
Corollary 3.2.1, there exists continuous linear functional such that
𝑓 (x + u) ≤ 𝑓 (y)
for every x ∈ 𝐴, u ∈ 𝑈, y ∈ 𝐵
Since 𝑓 (𝑈 ) is an open interval containing 0, there exists some u0 with 𝑓 (u0 ) =
𝜖 > 0.
𝑓 (x) + 𝜖 ≤ 𝑓 (y)
for every x ∈ 𝐴, y ∈ 𝐵
which implies that
sup_{x∈𝐴} 𝑓 (x) < inf_{y∈𝐵} 𝑓 (y)
Conversely, assume that 𝐴 and 𝐵 can be strongly separated. That is, there exists
a continuous linear functional 𝑓 ∈ 𝑋 ∗ and number 𝜖 > 0 such that
𝑓 (x) ≤ 𝑐 − 𝜖 < 𝑐 + 𝜖 ≤ 𝑓 (y) for every x ∈ 𝐴, y ∈ 𝐵
Let 𝑈 = { 𝑥 ∈ 𝑋 : ∣𝑓 (𝑥)∣ < 𝜖 }. 𝑈 is a convex neighborhood of 0 such that
(𝐴 + 𝑈 ) ∩ 𝐵 = ∅.
2. Let 𝐴 and 𝐵 be nonempty, disjoint, convex subsets in a normed linear space 𝑋
with 𝐴 compact and 𝐵 closed. By Exercise 1.208, there exists a convex neighborhood 𝑈 ∋ 0 such that (𝐴 + 𝑈 ) ∩ 𝐵 = ∅. By the previous part, 𝐴 and 𝐵 can
be strongly separated.
3.197 Assume 𝜌(𝐴, 𝐵) = inf{ ∥x − y∥ : x ∈ 𝐴, y ∈ 𝐵 } = 2𝜖 > 0. Let 𝑈 = 𝐵𝜖 (0) be
the open ball around 0 of radius 𝜖. For every x ∈ 𝐴, u ∈ 𝑈, y ∈ 𝐵
∥x + (−u) − y∥ = ∥x − y − u∥ ≥ ∥x − y∥ − ∥u∥
so that
𝜌(𝐴 + 𝑈, 𝐵) = inf_{x,u,y} ∥x + (−u) − y∥ ≥ inf_{x,u,y} (∥x − y∥ − ∥u∥)
≥ inf_{x,y} ∥x − y∥ − sup_u ∥u∥
= 2𝜖 − 𝜖 = 𝜖 > 0
Therefore (𝐴 + 𝑈 ) ∩ 𝐵 = ∅ and so 𝐴 and 𝐵 can be strongly separated.
Conversely, assume that 𝐴 and 𝐵 can be strongly separated, so that there exists a
convex neighborhood 𝑈 of 0 such that (𝐴 + 𝑈 ) ∩ 𝐵 = ∅. Therefore, there exists 𝜖 > 0
such that 𝐵𝜖 (0) ⊆ 𝑈 and
(𝐴 + 𝐵𝜖 (0)) ∩ 𝐵 = ∅
This implies that
𝜌(𝐴, 𝐵) = inf{ ∥x − y∥ : x ∈ 𝐴, y ∈ 𝐵 } > 𝜖 > 0
3.198 Take 𝐴 = {y} and 𝐵 = 𝑀 in Proposition 3.14. There exists 𝑓 ∈ 𝑋 ∗ such that
𝑓 (y) < 𝑐 ≤ 𝑓 (x)
for every x ∈ 𝑀
By Corollary 3.2.3, 𝑐 = 0.
3.199
1. Consider the set
𝑍 = { 𝑓 (𝑥), −𝑔1 (𝑥), −𝑔2 (𝑥), . . . , −𝑔𝑚 (𝑥) : 𝑥 ∈ 𝑋 }
𝑍 is the image of a linear mapping from 𝑋 to 𝑌 = ℜ𝑚+1 and hence is a subspace
of ℜ𝑚+1 .
2. By hypothesis, the point e0 = (1, 0, 0, . . . , 0) ∈ ℜ𝑚+1 does not belong to 𝑍.
Otherwise, we have an 𝑥 ∈ 𝑋 such that 𝑔𝑖 (𝑥) = 0 for every 𝑖 but 𝑓 (𝑥) = 1.
3. By the previous exercise, there exists a linear functional 𝜑 ∈ 𝑌 ∗ such that
𝜑(e0 ) > 0
𝜑(z) = 0
for every z ∈ 𝑍
4. In other words, there exists a vector 𝜆 = (𝜆0 , 𝜆1 , . . . , 𝜆𝑚 ) ∈ 𝑌 ∗ = (ℜ^{𝑚+1} )∗ such that
𝜆e0 > 0    (3.60)
𝜆z = 0 for every z ∈ 𝑍    (3.61)
Equation (3.61) states that
𝜆z = 𝜆0 z0 + 𝜆1 z1 + ⋅ ⋅ ⋅ + 𝜆𝑚 z𝑚 = 0
for every z ∈ 𝑍
That is, for every 𝑥 ∈ 𝑋,
𝜆0 𝑓 (x) − 𝜆1 𝑔1 (x) − 𝜆2 𝑔2 (x) − . . . − 𝜆𝑚 𝑔𝑚 (x) = 0
5. Inequality (3.60) establishes that 𝜆0 > 0. Without loss of generality we can
normalize so that 𝜆0 = 1.
6. Therefore
𝑓 (𝑥) = ∑_{𝑖=1}^{𝑚} 𝜆𝑖 𝑔𝑖 (𝑥)
3.200 For every x ∈ 𝑆, 𝑔𝑗 (x) = 0, 𝑗 = 1, 2 . . . 𝑚 and therefore
𝑓 (x) = ∑_{𝑖=1}^{𝑚} 𝜆𝑖 𝑔𝑖 (x) = 0
3.201 The set
𝑍 = { 𝑔1 (𝑥), 𝑔2 (𝑥), . . . , 𝑔𝑚 (𝑥) : 𝑥 ∈ 𝑋 }
is a closed subspace in ℜ^𝑚 . If the system is inconsistent, c = (𝑐1 , 𝑐2 , . . . , 𝑐𝑚 ) ∉ 𝑍. By Exercise 3.198, there exists a linear functional 𝜑 on ℜ^𝑚 such that
𝜑(z) = 0 for every z ∈ 𝑍
𝜑(c) > 0
That is, there exist numbers 𝜆1 , 𝜆2 , . . . , 𝜆𝑚 such that
∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 (x) = 0
and
∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑐𝑗 > 0
which contradicts the hypothesis
∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 = 0 =⇒ ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑐𝑗 = 0
Conversely, if for some x ∈ 𝑋
𝑔𝑗 (x) = 𝑐𝑗
𝑗 = 1, 2, . . . , 𝑚
then
∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 (x) = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑐𝑗
and
∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 = 0 =⇒ ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑐𝑗 = 0
3.202 The set 𝐾̂ = { x ∈ 𝐾 : ∥x∥_1 = 1 } is
∙ compact (the unit ball is compact if and only if 𝑋 is finite-dimensional)
∙ convex (which is why we need the 1 norm)
By Proposition 3.14, there exists a linear functional 𝑓 ∈ 𝑋 ∗ such that
𝑓 (x̂) > 0 for every x̂ ∈ 𝐾̂
𝑓 (x) = 0 for every x ∈ 𝑀
For any x ∈ 𝐾, x ∕= 0, define x̂ = x/∥x∥_1 ∈ 𝐾̂. Then
𝑓 (x) = 𝑓 (∥x∥_1 x̂) = ∥x∥_1 𝑓 (x̂) > 0
3.203
1. Let
𝐴 = { (x, 𝑦) : 𝑦 ≥ 𝑔(x), x ∈ 𝑋 }
𝐵 = { (x, 𝑦) : 𝑦 = 𝑓0 (x), x ∈ 𝑍 }
𝐴 is the epigraph of a convex functional and hence convex. 𝐵 is a subspace of
𝑌 = 𝑋 × ℜ and also convex.
2. Since 𝑔 is convex, int 𝐴 ∕= ∅. Furthermore
𝑓0 (x) ≤ 𝑔(x) =⇒ int 𝐴 ∩ 𝐵 = ∅
3. By Exercise 3.2.3, there exists linear functional 𝜑 ∈ 𝑌 ∗ such that
𝜑(x, 𝑦) ≥ 0
𝜑(x, 𝑦) = 0
for every (x, 𝑦) ∈ 𝐴
for every (x, 𝑦) ∈ 𝐵
There exists 𝑦 such that 𝑦 > 𝑔(0) and therefore (0, 𝑦) ∈ int 𝐴 and 𝜑(0, 𝑦) > 0.
Therefore
𝜑(0, 1) = (1/𝑦) 𝜑(0, 𝑦) > 0
4. Let 𝑓 ∈ 𝑋 ∗ be defined by
𝑓 (x) = −(1/𝑐) 𝜑(x, 0)
where 𝑐 = 𝜑(0, 1). Since
𝜑(x, 0) = 𝜑(x, 𝑦) − 𝜑(0, 𝑦) = 𝜑(x, 𝑦) − 𝑐𝑦
we have
𝑓 (x) = −(1/𝑐)(𝜑(x, 𝑦) − 𝑐𝑦) = −(1/𝑐) 𝜑(x, 𝑦) + 𝑦 for every 𝑦 ∈ ℜ
5. For every x ∈ 𝑍
𝑓 (x) = −(1/𝑐) 𝜑(x, 𝑓0 (x)) + 𝑓0 (x) = 𝑓0 (x)
since 𝜑(x, 𝑓0 (x)) = 0 for every x ∈ 𝑍. Thus 𝑓 is an extension of 𝑓0 .
6. For any x ∈ 𝑋, let 𝑦 = 𝑔(x). Then (x, 𝑦) ∈ 𝐴 and 𝜑(x, 𝑦) ≥ 0. Therefore
𝑓 (x) = −(1/𝑐) 𝜑(x, 𝑦) + 𝑦 = −(1/𝑐) 𝜑(x, 𝑦) + 𝑔(x) ≤ 𝑔(x)
Therefore 𝑓 is bounded by 𝑔 as required.
3.204 Let 𝑔 ∈ 𝑋 ∗ be defined by
𝑔(x) = ∥𝑓0 ∥𝑍 ∥x∥
Then 𝑓0 (x) ≤ 𝑔(x) for all x ∈ 𝑍. By the Hahn-Banach theorem (Exercise 3.15), there
exists an extension 𝑓 ∈ 𝑋 ∗ such that
𝑓 (x) ≤ 𝑔(x) = ∥𝑓0 ∥𝑍 ∥x∥
Therefore
∥𝑓 ∥_𝑋 = sup_{∥x∥=1} ∥𝑓 (x)∥ = ∥𝑓0 ∥_𝑍
3.205 If x0 = 0, any bounded linear functional will do. Therefore, assume x0 ∕= 0. On
the subspace lin {x0 } = {𝛼x0 : 𝛼 ∈ ℜ}, define the function
𝑓0 (𝛼x0 ) = 𝛼 ∥x0 ∥
𝑓0 is a bounded linear functional on lin {x0 } with norm 1. By the previous part, 𝑓0
can be extended to a bounded linear functional 𝑓 ∈ 𝑋 ∗ with the same norm, that is
∥𝑓 ∥ = 1 and 𝑓 (x0 ) = ∥x0 ∥.
3.206 Since x1 ∕= x2 , x1 − x2 ∕= 0. There exists a bounded linear functional such that
𝑓 (x1 − x2 ) = ∥x1 − x2 ∥ ∕= 0
so that
𝑓 (x1 ) ∕= 𝑓 (x2 )
3.207
1.
∙ 𝔉 is a complete lattice (Exercise 1.179).
∙ The intersection of any chain is
– nonempty (since 𝑆 is compact)
– a face (Exercise 1.179)
Hence every chain has a minimal element.
∙ By Zorn’s lemma (Remark 1.5), 𝔉 has a minimal element 𝐹0 .
2. Assume to the contrary that 𝐹0 contains two distinct elements x1 , x2 . Then
(Exercise 3.206) there exists a continuous linear functional 𝑓 ∈ 𝑋 ∗ such that
𝑓 (x1 ) ∕= 𝑓 (x2 )
Let 𝑐 be the minimum value of 𝑓 (x) on 𝐹0 and let 𝐹1 be the set on which it attains this minimum. (Since 𝐹0 is compact, 𝑐 is well-defined and 𝐹1 is nonempty.) That is
𝑐 = min{ 𝑓 (x) : x ∈ 𝐹0 }
𝐹1 = { x ∈ 𝐹0 : 𝑓 (x) = 𝑐 }
Now 𝐹1 ⊂ 𝐹0 since 𝑓 (x1 ) ∕= 𝑓 (x2 ).
To show that 𝐹1 is a face of 𝐹0 , assume that 𝛼x+(1−𝛼)y ∈ 𝐹1 for some x, y ∈ 𝐹0 .
Then 𝑐 = 𝑓 (𝛼x + (1 − 𝛼)y) = 𝛼𝑓 (x) + (1 − 𝛼)𝑓 (y) = 𝑐. Since x, y ∈ 𝐹0 , this
implies that 𝑓 (x) = 𝑓 (y) = 𝑐 so that x, y ∈ 𝐹1 . Therefore 𝐹1 is a face.
We have shown that, if 𝐹0 contains two distinct elements, there exists a smaller
face 𝐹1 ⊂ 𝐹0 , contradicting the minimality of 𝐹0 . We conclude that 𝐹0 comprises
a single element x0 .
3. 𝐹0 = {x0 } which is an extreme point of 𝑆.
3.208 Let 𝐻 = 𝐻𝑓 (𝑐) be a supporting hyperplane to 𝑆. Without loss of generality
assume
𝑓 (x) ≤ 𝑐 for every x ∈ 𝑆
(3.62)
and there exists some x∗ ∈ 𝑆 such that
𝑓 (x∗ ) = 𝑐
That is 𝑓 is maximized at x∗ .
Version 1 By the previous exercise, 𝑓 achieves its maximum at an extreme point.
That is, there exists an extreme point x0 ∈ 𝑆 such that
𝑓 (x0 ) ≥ 𝑓 (x) for every x ∈ 𝑆
In particular, 𝑓 (x0 ) ≥ 𝑓 (x∗ ) = 𝑐. But (3.62) implies 𝑓 (x0 ) ≤ 𝑐. Therefore, we
conclude that 𝑓 (x0 ) = 𝑐 and therefore x0 ∈ 𝐻.
Version 2 The set 𝐻 ∩ 𝑆 is a nonempty, compact, convex subset of a linear space.
Hence, by Exercise 3.207, 𝐻 ∩ 𝑆 contains an extreme point, say x0 . We show
that x0 is an extreme point of 𝑆.
Assume not, that is assume that there exists x1 , x2 ∈ 𝑆 such that x0 = 𝛼x1 +
(1 − 𝛼)x2 for some 𝛼 ∈ (0, 1). Since x0 is an extreme point of 𝐻 ∩ 𝑆, at least
one of the points x1 , x2 must lie outside 𝐻. Assume x1 ∉ 𝐻, which implies that 𝑓 (x1 ) < 𝑐. Since 𝑓 (x2 ) ≤ 𝑐
𝑓 (x0 ) = 𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) < 𝑐
(3.63)
However, since x0 ∈ 𝐻 ∩ 𝑆, we must have
𝑓 (x0 ) = 𝑐
which contradicts (3.63).
Therefore x0 is an extreme point of 𝑆. In fact, we have shown that every extreme
point of 𝐻 ∩ 𝑆 must be an extreme point of 𝑆.
3.209 Let 𝑆ˆ denote the closed, convex hull of the extreme points of 𝑆. (The closed,
convex hull of a set is simply the closure of the convex hull.) Clearly 𝑆ˆ ⊂ 𝑆 and it
remains to show that 𝑆ˆ contains all of 𝑆.
Assume not. That is, assume 𝑆̂ ⊊ 𝑆 and let x0 ∈ 𝑆 ∖ 𝑆̂. By the Strong Separation Theorem, there exists a linear functional 𝑓 ∈ 𝑋 ∗ such that
𝑓 (x0 ) > 𝑓 (x) for every x ∈ 𝑆̂    (3.64)
On the other hand, by Exercise 3.16, 𝑓 attains its maximum at an extreme point of 𝑆.
That is, there exists x1 ∈ 𝑆ˆ such that
𝑓 (x1 ) ≥ 𝑓 (x) for every x ∈ 𝑆
In particular
𝑓 (x1 ) ≥ 𝑓 (x0 )
since x0 ∈ 𝑆 ∖ 𝑆̂ ⊂ 𝑆. This contradicts (3.64) since x1 ∈ 𝑆̂.
Thus our assumption that 𝑆̂ ⊊ 𝑆 yields a contradiction. We conclude that
𝑆 = 𝑆̂
3.210
1. (a) 𝑃 is compact and convex, since it is the product of compact, convex
sets (Proposition 1.2, Exercise 1.165).
(b) Since x ∈ ∑_{𝑖=1}^{𝑛} conv 𝑆𝑖 , there exist x𝑖 ∈ conv 𝑆𝑖 such that x = ∑_{𝑖=1}^{𝑛} x𝑖 . Then (x1 , x2 , . . . , x𝑛 ) ∈ 𝑃 (x) so that 𝑃 (x) ∕= ∅.
(c) By the Krein-Milman theorem (or Exercise 3.207), 𝑃 (x) has an extreme point z = (z1 , z2 , . . . , z𝑛 ) such that
∙ z𝑖 ∈ conv 𝑆𝑖 for every 𝑖
∙ ∑_{𝑖=1}^{𝑛} z𝑖 = x
since z ∈ 𝑃 (x).
2. (a) Exercise 1.176
(b) Since 𝑙 > 𝑚 = dim 𝑋, the vectors y1 , y2 , . . . , y𝑙 are linearly dependent
(Exercise 1.143). Consequently, there exist numbers 𝛼′1 , 𝛼′2 , . . . , 𝛼′𝑙 , not all zero, such that
𝛼′1 y1 + 𝛼′2 y2 + ⋅ ⋅ ⋅ + 𝛼′𝑙 y𝑙 = 0
(Exercise 1.133). Let
𝛼𝑖 = 𝛼′𝑖 / max_𝑖 ∣𝛼′𝑖 ∣
Then ∣𝛼𝑖 ∣ ≤ 1 for every 𝑖 and
𝛼1 y1 + 𝛼2 y2 + ⋅ ⋅ ⋅ + 𝛼𝑙 y𝑙 = 0
(c) Since ∣𝛼𝑖 ∣ ≤ 1, z𝑖 + 𝛼𝑖 y𝑖 ∈ conv 𝑆𝑖 for every 𝑖 = 1, 2, . . . , 𝑙. Furthermore
∑_{𝑖=1}^{𝑛} z𝑖^+ = ∑_{𝑖=1}^{𝑛} z𝑖 + ∑_{𝑖=1}^{𝑙} 𝛼𝑖 y𝑖 = ∑_{𝑖=1}^{𝑛} z𝑖 = x
Therefore, z+ ∈ 𝑃 (x). Similarly, z− ∈ 𝑃 (x).
(d) By direct computation
z = (1/2) z^+ + (1/2) z^−
which implies that z is not an extreme point of 𝑃 (x), contrary to our assumption. This establishes that at least 𝑛 − 𝑚 of the z𝑖 are extreme points of the corresponding conv 𝑆𝑖 .
Figure 3.3: Illustrating the proof of the Shapley-Folkman theorem.
3. Every extreme point of conv 𝑆𝑖 is an element of 𝑆𝑖 .
3.211 See Figure 3.3.
3.212 Let {𝑆1 , 𝑆2 , . . . , 𝑆𝑛 } be a collection of nonempty subsets of an 𝑚-dimensional linear space and let x ∈ conv ∑_{𝑖=1}^{𝑛} 𝑆𝑖 = ∑_{𝑖=1}^{𝑛} conv 𝑆𝑖 . That is, there exists x𝑖 ∈ conv 𝑆𝑖 such that x = ∑_{𝑖=1}^{𝑛} x𝑖 . By Carathéodory's theorem, there exists for every x𝑖 a finite number of points x𝑖1 , x𝑖2 , . . . , x𝑖𝑙𝑖 such that x𝑖 ∈ conv {x𝑖1 , x𝑖2 , . . . , x𝑖𝑙𝑖 }.
For every 𝑖 = 1, 2, . . . , 𝑛, let
𝑆̃𝑖 = { x𝑖𝑗 : 𝑗 = 1, 2, . . . , 𝑙𝑖 }
Then
x = ∑_{𝑖=1}^{𝑛} x𝑖 , x𝑖 ∈ conv 𝑆̃𝑖
That is, x ∈ ∑ conv 𝑆̃𝑖 = conv ∑ 𝑆̃𝑖 . Moreover, the sets 𝑆̃𝑖 are compact (in fact finite). By the previous exercise, there exist 𝑛 points z𝑖 such that
x = ∑_{𝑖=1}^{𝑛} z𝑖 , z𝑖 ∈ conv 𝑆̃𝑖
and moreover z𝑖 ∈ 𝑆̃𝑖 ⊆ 𝑆𝑖 for at least 𝑛 − 𝑚 indices 𝑖.
3.213 Let 𝑆 be a closed convex set in a normed linear space. Clearly, 𝑆 is contained in
the intersection of all the closed halfspaces which contain 𝑆.
For any y ∈
/ 𝑆, there exists a hyperplane which strongly separates {y} and 𝑆. One
of its closed halfspaces contains 𝑆 but not y. Consequently, y does not belong to the
intersection of all the closed halfspaces containing 𝑆.
3.214
1. Since 𝑉 ∗ (𝑦) is the intersection of closed, convex sets, it is closed and convex.
Assume x is feasible, that is x ∈ 𝑉 (𝑦). Then w^𝑇 x ≥ 𝑐(w, 𝑦) for every w ≥ 0 and therefore x ∈ 𝑉 ∗ (𝑦).
That is, 𝑉 (𝑦) ⊆ 𝑉 ∗ (𝑦).
2. Assume 𝑉 (𝑦) is convex. For any x0 ∉ 𝑉 (𝑦) there exists w such that
w^𝑇 x0 < inf_{x∈𝑉 (𝑦)} w^𝑇 x = 𝑐(w, 𝑦)
by the Strong Separation Theorem. Monotonicity ensures that w ≥ 0 and hence x0 ∉ 𝑉 ∗ (𝑦).
3.215 Assume x ∈ 𝑉 (𝑦) = 𝑉 ∗ (𝑦). That is
w^𝑇 x ≥ 𝑦 𝑐̂(w) for every w
Therefore, for any 𝑡 ∈ ℜ+
𝑡w^𝑇 x ≥ 𝑡𝑦 𝑐̂(w) for every w
which implies that 𝑡x ∈ 𝑉 ∗ (𝑡𝑦) = 𝑉 (𝑡𝑦).
3.216 A polyhedron
𝑆 = { x ∈ 𝑋 : 𝑔𝑖 (x) ≤ 𝑐𝑖 , 𝑖 = 1, 2, . . . , 𝑚 } = ∩_{𝑖=1}^{𝑚} { x ∈ 𝑋 : 𝑔𝑖 (x) ≤ 𝑐𝑖 }
is the intersection of a finite number of closed convex sets.
3.217 Each row a𝑖 = (𝑎𝑖1 , 𝑎𝑖2 , . . . 𝑎𝑖𝑛 ) of 𝐴 defines a linear functional 𝑔𝑖 (x) = 𝑎𝑖1 𝑥1 +
𝑎𝑖2 𝑥2 + ⋅ ⋅ ⋅ + 𝑎𝑖𝑛 𝑥𝑛 on ℜ^𝑛 . The set of solutions to 𝐴x ≤ c,
𝑆 = { x ∈ 𝑋 : 𝑔𝑖 (x) ≤ 𝑐𝑖 , 𝑖 = 1, 2, . . . , 𝑚 }
is a polyhedron.
3.218 For simplicity, we assume that the game is superadditive, so that 𝑤(𝑖) ≥ 0 for
every 𝑖. Consequently, in every core allocation x, 0 ≤ 𝑥𝑖 ≤ 𝑤(𝑁 ) and
core ⊆ [0, 𝑤(𝑁 )] × [0, 𝑤(𝑁 )] × ⋅ ⋅ ⋅ × [0, 𝑤(𝑁 )] ⊂ ℜ𝑛
Thus, the core is bounded. Since it is the intersection of closed halfspaces, the core is
also closed. By Proposition 1.1, the core is compact.
3.219 polytope =⇒ polyhedron Assume that 𝑃 is a polytope generated by the
points { x1 , x2 , . . . , x𝑚 } and let 𝐹1 , 𝐹2 , . . . , 𝐹𝑘 denote the proper faces of 𝑃 . For
each 𝑖 = 1, 2, . . . , 𝑘, let 𝐻𝑖 denote the hyperplane containing 𝐹𝑖 so that 𝐹𝑖 =
𝑃 ∩ 𝐻𝑖 . For every such hyperplane, there exists a nonzero linear functional 𝑔𝑖
and constant 𝑐𝑖 such that 𝑔𝑖 (x) = 𝑐𝑖 for every x ∈ 𝐻𝑖 . Furthermore, every such
hyperplane is a bounding hyperplane of 𝑃 . Without loss of generality, we can
assume that 𝑔𝑖 (x) ≤ 𝑐𝑖 for every x ∈ 𝑃 . Let
𝑆 = { x ∈ 𝑋 : 𝑔𝑖 (x) ≤ 𝑐𝑖 , 𝑖 = 1, 2, . . . , 𝑚 }
Clearly 𝑃 ⊆ 𝑆. To show that 𝑆 ⊆ 𝑃 , assume not. That is, assume that there exists y ∈ 𝑆 ∖ 𝑃 and let x ∈ ri 𝑃 (ri 𝑃 is nonempty by Exercise 1.229). Since 𝑃 is closed (Exercise 1.227), there exists some 𝛼 such that x̄ = 𝛼x + (1 − 𝛼)y belongs
to the relative boundary of 𝑃 , and there exists some 𝑖 such that x̄ ∈ 𝐹𝑖 ⊆ 𝐻𝑖 .
Let 𝐻𝑖+ = { x ∈ 𝑋 : 𝑔𝑖 (x) ≤ 𝑐𝑖 } denote the closed half-space bounded by 𝐻𝑖 and
containing 𝑃 . 𝐻𝑖 is a face of 𝐻𝑖+ containing x̄ = 𝛼x+(1−𝛼)y, which implies that
x, y ∈ 𝐻𝑖 . This in turn implies that x ∈ 𝐹𝑖 , which contradicts the assumption
that x ∈ ri 𝑃 . We conclude that 𝑆 = 𝑃 .
polyhedron =⇒ polytope Conversely, assume 𝑆 is a nonempty compact polyhedral
set in a normed linear space. Then, there exist linear functionals 𝑔1 , 𝑔2 , . . . , 𝑔𝑚
in 𝑋 ∗ and numbers 𝑐1 , 𝑐2 , . . . , 𝑐𝑚 such that 𝑆 = { x ∈ 𝑋 : 𝑔𝑖 (x) ≤ 𝑐𝑖 , 𝑖 =
1, 2, . . . , 𝑚 }. We show that 𝑆 has a finite number of extreme points. Let 𝑛 denote
the dimension of 𝑆. If 𝑛 = 1, 𝑆 is either a single point or closed line segment
(since 𝑆 is compact), and therefore has a finite number of extreme points (that
is, 1 or 2).
Now assume that every compact polyhedral set of dimension 𝑛 − 1 has a finite
number of extreme points. Let 𝐻𝑖 , 𝑖 = 1, 2, . . . , 𝑚 denote the hyperplanes
associated with the linear functionals 𝑔𝑖 defining 𝑆 (Exercise 3.49). Let x be
an extreme point of 𝑆. Then x is a boundary point of 𝑆 (Exercise 1.220) and
therefore belongs to some 𝐻𝑗 . We claim that x is also an extreme point of the set
𝑆 ∩ 𝐻𝑗 . To see this, assume otherwise. That is, assume that x is not an extreme
point of 𝑆 ∩𝐻𝑗 . Then, there exists x1 , x2 ∈ 𝑆 ∩𝐻𝑗 such that x = 𝛼x1 + (1 − 𝛼)x2 .
But then x1 , x2 ∈ 𝑆 and x is not an extreme point of 𝑆. Therefore, every extreme
point of 𝑆 is an extreme point of some 𝑆 ∩ 𝐻𝑖 , which is a compact polyhedral set
of dimension 𝑛 − 1. By hypothesis, each 𝑆 ∩ 𝐻𝑖 has a finite number of extreme
points. Since there are only 𝑚 such hyperplanes 𝐻𝑖 , 𝑆 has a finite number of
extreme points.
By the Krein-Milman theorem (Exercise 3.209), 𝑆 is the closed convex hull of its
extreme points. Since there are only finite extreme points, 𝑆 is a polytope.
3.220
1. Let 𝑓, 𝑔 ∈ 𝑆 ∗ so that 𝑓 (x) ≤ 0 and 𝑔(x) ≤ 0 for every x ∈ 𝑆. For every
𝛼, 𝛽 ≥ 0
𝛼𝑓 (x) + 𝛽𝑔(x) ≤ 0
for every 𝑥 ∈ 𝑆. This shows that 𝛼𝑓 + 𝛽𝑔 ∈ 𝑆 ∗ . 𝑆 ∗ is a convex cone.
To show that 𝑆 ∗ is closed, let 𝑓 be the limit of a sequence (𝑓𝑛 ) of functionals in
𝑆 ∗ . Then, for every x ∈ 𝑆,
𝑓𝑛 (x) ≤ 0
so that
𝑓 (x) = lim 𝑓𝑛 (x) ≤ 0
2. Let x, y ∈ 𝑆 ∗∗ . Then, for every 𝑓 ∈ 𝑆 ∗
𝑓 (x) ≤ 0 and 𝑓 (y) ≤ 0
and therefore
𝑓 (𝛼x + 𝛽y) = 𝛼𝑓 (x) + 𝛽𝑓 (y) ≤ 0
for every 𝛼, 𝛽 ≥ 0. Therefore 𝛼x + 𝛽y ∈ 𝑆 ∗∗ . 𝑆 ∗∗ is a convex cone.
To show that 𝑆 ∗∗ is closed, let x𝑛 be a sequence of points in 𝑆 ∗∗ converging to
𝑥. For every 𝑛 = 1, 2, . . .
𝑓 (x𝑛 ) ≤ 0 for every 𝑓 ∈ 𝑆 ∗
By continuity
𝑓 (x) = lim 𝑓 (x𝑛 ) ≤ 0 for every 𝑓 ∈ 𝑆 ∗
Consequently x ∈ 𝑆 ∗∗ which is therefore closed.
3. Let x ∈ 𝑆. Then 𝑓 (x) ≤ 0 for every 𝑓 ∈ 𝑆 ∗ so that x ∈ 𝑆 ∗∗ .
4. Exercise 1.79.
3.221 Let 𝑓 ∈ 𝑆2∗ . Then 𝑓 (x) ≤ 0 for every x ∈ 𝑆2 . A fortiori, since 𝑆1 ⊆ 𝑆2 ,
𝑓 (x) ≤ 0 for every x ∈ 𝑆1 . Therefore 𝑓 ∈ 𝑆1∗ .
3.222 Exercise 3.220 showed that 𝑆 ⊆ 𝑆 ∗∗ . To show the converse, let y ∉ 𝑆. By Proposition 3.14, there exists some 𝑓 ∈ 𝑋 ∗ and 𝑐 such that
𝑓 (y) > 𝑐
𝑓 (x) < 𝑐 for every x ∈ 𝑆
Since 𝑆 is a cone, 0 ∈ 𝑆 and 𝑓 (0) = 0 < 𝑐. Since 𝛼𝑆 = 𝑆 for every 𝛼 > 0,
𝑓 (x) < 0 for every x ∈ 𝑆
so that 𝑓 ∈ 𝑆 ∗ . Since 𝑓 (y) > 0, y ∉ 𝑆 ∗∗ . That is
y ∉ 𝑆 =⇒ y ∉ 𝑆 ∗∗
from which we conclude that 𝑆 ∗∗ ⊆ 𝑆.
3.223 Let
𝐾 = cone {𝑔1 , 𝑔2 , . . . , 𝑔𝑚 } = { 𝑔 ∈ 𝑋 ∗ : 𝑔 = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 , 𝜆𝑗 ≥ 0 }
be the set of all nonnegative linear combinations of the linear functionals 𝑔𝑗 . 𝐾 is a
closed convex cone.
Suppose that 𝑓 ∉ cone {𝑔1 , 𝑔2 , . . . , 𝑔𝑚 }, that is assume that 𝑓 ∉ 𝐾. Then {𝑓 } is a
compact convex set disjoint from 𝐾. By Proposition 3.14, there exists a continuous
linear functional 𝜑 and number 𝑐 such that
sup_{𝑔∈𝐾} 𝜑(𝑔) < 𝑐 < 𝜑(𝑓 )
Since 0 ∈ 𝐾, 𝑐 ≥ 0 and so 𝜑(𝑓 ) > 0. Further, for every 𝑔 ∈ 𝐾
𝜑(𝑔) = 𝜑(∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 ) = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝜑(𝑔𝑗 ) < 𝑐 for every 𝜆𝑗 ≥ 0
Since 𝜆𝑗 can be made arbitrarily large, this last inequality implies that
𝜑(𝑔𝑗 ) ≤ 0
𝑗 = 1, 2, . . . , 𝑚
By the Riesz representation theorem (Exercise 3.75), there exists x ∈ 𝑋 such that
𝜑(𝑔𝑗 ) = 𝑔𝑗 (x) and 𝜑(𝑓 ) = 𝑓 (x)
Since
𝜑(𝑔𝑗 ) = 𝑔𝑗 (x) ≤ 0
x ∈ 𝑆. By hypothesis
𝑓 (x) = 𝜑(𝑓 ) ≤ 0
contradicting the conclusion that 𝜑(𝑓 ) > 0. This contradiction establishes that 𝑓 ∈ 𝐾,
that is
𝑓 (𝑥) = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 (𝑥), 𝜆𝑗 ≥ 0
3.224 Let a1 , a2 , . . . , a𝑚 denote the rows of 𝐴 and define the linear functionals 𝑓, 𝑔1 , 𝑔2 , . . . , 𝑔𝑚 by
𝑓 (x) = cx
𝑔𝑗 (x) = a𝑗 x 𝑗 = 1, 2, . . . , 𝑚
Assume cx ≤ 0 for every x satisfying 𝐴x ≤ 0, that is 𝑓 (x) ≤ 0 for every x ∈ 𝑆 where
𝑆 = { x ∈ 𝑋 : 𝑔𝑗 (x) ≤ 0, 𝑗 = 1, 2, . . . , 𝑚 }
By Proposition 3.18, there exists y ∈ ℜ^𝑚_+ such that
𝑓 (x) = ∑_{𝑗=1}^{𝑚} 𝑦𝑗 𝑔𝑗 (x)
or
c = ∑_{𝑗=1}^{𝑚} 𝑦𝑗 a𝑗 = 𝐴^𝑇 y
Conversely, assume that
c = 𝐴^𝑇 y = ∑_{𝑗=1}^{𝑚} 𝑦𝑗 a𝑗
Then
𝐴x ≤ 0 =⇒ a𝑗 x ≤ 0 for every 𝑗 =⇒ cx ≤ 0
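A numerical companion to this exercise, offered only as an illustration: for a given 𝐴 and c, exactly one of the two Farkas alternatives holds. The matrix and vector below are arbitrary examples, and the linear programmes are one of several ways to search for each alternative.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  1.0]])
c = np.array([1.0, 3.0, 0.0])

# Alternative II: A'y = c, y >= 0 (feasibility problem)
res2 = linprog(np.zeros(A.shape[0]), A_eq=A.T, b_eq=c, bounds=(0, None))

# Alternative I: Ax <= 0 and c'x > 0 (maximize c'x subject to Ax <= 0, |x| <= 1)
res1 = linprog(-c, A_ub=A, b_ub=np.zeros(A.shape[0]), bounds=(-1, 1))

print("II solvable:", res2.success, " max of c'x in I:", -res1.fun)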
3.225 Let 𝑁 = ℜ𝑛+ denote the positive orthant of ℜ𝑛 . 𝑁 is a convex set (indeed cone)
with a nonempty interior. By Corollary 3.2.1, there exists a hyperplane 𝐻p (𝑐) such
that
p𝑇 x ≤ 𝑐 ≤ py
for every x ∈ 𝑆, y ∈ 𝑁
Since 0 ∈ 𝑁
p0 = 0 ≥ 𝑐
which implies that 𝑐 ≤ 0 and
p𝑇 x ≤ 𝑐 ≤ 0
for every 𝑥 ∈ 𝑆
To show that p is nonnegative, let e1 , e2 , . . . , e𝑛 denote the standard basis for ℜ𝑛 .
Each e𝑖 belongs to 𝑁 so that
pe𝑖 = 𝑝𝑖 ≥ 0 for every 𝑖
3.226 Assume y∗ is an efficient production plan in 𝑌 and let 𝑆 = 𝑌 − 𝑦 ∗ . 𝑆 is convex.
We claim that 𝑆 ∩ℜ𝑛++ = ∅. Otherwise, if there exists some z ∈ 𝑆 ∩ℜ𝑛++ , let y′ = y∗ +z
∙ z ∈ 𝑆 implies y′ ∈ 𝑌 while
∙ z ∈ ℜ𝑛++ implies y′ > y∗
contradicting the efficiency of y∗ . Therefore, 𝑆 is a convex set which contains no
interior points of the nonnegative orthant ℜ𝑛+ . By Exercise 3.225, there exists a price
system p such that
p𝑇 x ≤ 0 for every x ∈ 𝑆
Since 𝑆 = 𝑌 − 𝑦 ∗ , this implies
p(y − y∗ ) ≤ 0 for every y ∈ 𝑌
or
py∗ ≥ py for every y ∈ 𝑌
𝑦 ∗ maximizes the producer’s profit at prices p.
3.227 Consider the set 𝑆 − = { x ∈ ℜ𝑛 : −x ∈ 𝑆 }.
𝑆 ∩ int ℜ𝑛− = ∅ =⇒ 𝑆 − ∩ int ℜ𝑛+ = ∅
From the previous exercise, there exists a hyperplane with nonnegative normal p ≩ 0
such that
p^𝑇 x ≤ 0 for every x ∈ 𝑆^−
Since p ≩ 0, this implies
p^𝑇 x ≥ 0 for every x ∈ 𝑆
3.228
1. Suppose x ∈ ≿(x∗ ). Then, there exists an allocation (x1 , x2 , . . . , x𝑛 ) such
that
x = ∑_{𝑖=1}^{𝑛} x𝑖
where x𝑖 ∈ ≿(x∗𝑖 ) for every 𝑖 = 1, 2, . . . , 𝑛. Conversely, if (x1 , x2 , . . . , x𝑛 ) is an allocation with x𝑖 ∈ ≿(x∗𝑖 ) for every 𝑖 = 1, 2, . . . , 𝑛, then x = ∑_{𝑖=1}^{𝑛} x𝑖 ∈ ≿(x∗ ).
2. For every agent 𝑖, x∗𝑖 ∈ ≿(x∗𝑖 ), which implies that
x∗ = ∑_{𝑖=1}^{𝑛} x∗𝑖 ∈ ≿(x∗ )
and therefore
0 ∈ 𝑆 = ≿(x∗ ) − x∗ ∕= ∅
Since individual preferences are convex, ≿(x∗𝑖 ) is convex for each 𝑖 and therefore 𝑆 = ≿(x∗ ) − x∗ = ∑_𝑖 ≿(x∗𝑖 ) − x∗ is convex (Exercise 1.164).
Assume to the contrary that 𝑆 ∩ int ℜ^𝑙_− ∕= ∅. That is, there exists some z ∈ 𝑆 with z < 0. This implies that there exists some allocation (x1 , x2 , . . . , x𝑛 ) such that
z = ∑_𝑖 x𝑖 − x∗ < 0
and x𝑖 ≿ x∗𝑖 for every 𝑖 ∈ 𝑁 . Distribute the surplus −z equally to all the consumers. That is, consider the allocation
y𝑖 = x𝑖 − z/𝑛
By strict monotonicity, y𝑖 ≻ x𝑖 ≿ x∗𝑖 for every 𝑖 ∈ 𝑁 . Since
∑_𝑖 y𝑖 = ∑_𝑖 x𝑖 − z = x∗ = ∑_𝑖 x∗𝑖
(y1 , y2 , . . . , y𝑛 ) is a reallocation of the original allocation x∗ which is strictly preferred by all consumers. This contradicts the assumed Pareto efficiency of x∗ . We conclude that
𝑆 ∩ int ℜ^𝑙_− = ∅
3. Applying Exercise 3.227, there exists a hyperplane with nonnegative normal p∗ ≩
0 such that
p∗ z ≥ 0 for every z ∈ 𝑆
That is
p∗ (x − x∗ ) ≥ 0 or p∗ x ≥ p∗ x∗ for every x ∈ ≿(x∗ )    (3.65)
4. Consider any allocation which is strictly preferred to x∗ by consumer 𝑗, that is
x𝑗 ∈ ≻𝑗 (x∗𝑗 ). Construct another allocation y by taking 𝜖 > 0 of each commodity
away from agent 𝑗 and distributing amongst the other agents to give
y𝑗 = (1 − 𝜖)x𝑗
y𝑖 = x∗𝑖 + (𝜖/(𝑛 − 1)) x𝑗 , 𝑖 ∕= 𝑗
By continuity, there exists some 𝜖 > 0 such that y𝑗 = (1 − 𝜖)x𝑗 ≻𝑗 x∗𝑗 . By monotonicity, y𝑖 ≻𝑖 x∗𝑖 for every 𝑖 ∕= 𝑗. We have constructed an allocation y which is strictly preferred to x∗ by all the agents, so that y = ∑_𝑖 y𝑖 ∈ ≿(x∗ ).
which is strictly preferred to x∗ by all the agents, so that y = 𝑖 y𝑖 ∈ ≿(x∗ ).
(3.65) implies that
py ≥ px∗
That is
⎛
∑(
x∗𝑖 +
p ⎝(1 − 𝜖)x𝑗 +
𝑖∕=𝑗
⎞
⎞
⎞
⎛
⎛
)
∑
∑
𝜖
x𝑗 ⎠ = p ⎝x𝑗 +
x∗𝑖 ⎠ ≥ p ⎝x∗𝑗 +
x∗𝑖 ⎠
𝑛−1
𝑖∕=𝑗
𝑖∕=𝑗
which implies that
px𝑗 ≥ px∗𝑗 for every x𝑗 ∈ ≻(x∗𝑗 )    (3.66)
5. Trivially, x∗ is a feasible allocation with endowments w𝑖 = x∗𝑖 and 𝑚𝑖 = p∗ w𝑖 =
p∗ x∗𝑖 . To show that (p∗ , x∗ ) is a competitive equilibrium, we have to show that
x∗𝑖 is the best allocation in the budget set 𝑋𝑖 (p, 𝑚𝑖 ) for each consumer 𝑖. Suppose
to the contrary there exists some consumer 𝑗 and allocation y𝑗 such that y𝑗 ≻𝑗 x∗𝑗
and py𝑗 ≤ 𝑚𝑗 = px∗𝑗 . By continuity, there exists some 𝛼 ∈ (0, 1) such that
𝛼y𝑗 ≻𝑗 x∗𝑗 and
p(𝛼y𝑗 ) = 𝛼py𝑗 < py𝑗 ≤ px∗𝑗
contradicting (3.66). We conclude that
x∗𝑖 ≿𝑖 x𝑖 for every x ∈ 𝑋(p∗ , 𝑚𝑖 )
for every consumer 𝑖. (p∗ , x∗ ) is a competitive equilibrium.
3.229 By the previous exercise, there exists a price system p∗ such that x∗𝑖 is optimal
for each consumer 𝑖 in the budget set 𝑋(p∗ , p∗ x∗𝑖 ), that is
x∗𝑖 ≿𝑖 x𝑖 for every x𝑖 ∈ 𝑋(p∗ , p∗ x∗𝑖 )
(3.67)
For each consumer, let 𝑡𝑖 be the difference between her endowed wealth p∗ w𝑖 and her
required wealth p∗ x∗𝑖 . That is, define
𝑡𝑖 = p∗ x∗𝑖 − p∗ w𝑖 = p∗ (x∗𝑖 − w𝑖 )
Then
p∗ x∗𝑖 = p∗ w𝑖 + 𝑡𝑖    (3.68)
By assumption x∗ is feasible, so that
∑_𝑖 x∗𝑖 − ∑_𝑖 w𝑖 = ∑_𝑖 (x∗𝑖 − w𝑖 ) = 0
so that
∑_𝑖 𝑡𝑖 = p∗ ∑_𝑖 (x∗𝑖 − w𝑖 ) = 0
Furthermore, for 𝑚𝑖 = p∗ w𝑖 + 𝑡𝑖 , (3.68) implies
𝑋(p∗ , 𝑚𝑖 ) = { x𝑖 : p∗ x𝑖 ≤ p∗ w𝑖 + 𝑡𝑖 } = { x𝑖 : p∗ x𝑖 ≤ p∗ x∗𝑖 } = 𝑋(p∗ , p∗ x∗𝑖 )
for each consumer 𝑖. Using (3.67) we conclude that
x∗𝑖 ≿𝑖 x𝑖 for every x𝑖 ∈ 𝑋(p∗ , 𝑚𝑖 )
for every agent 𝑖. (p∗ , x∗ ) is a competitive equilibrium where each consumer’s after-tax
wealth is
𝑚𝑖 = pw𝑖 + 𝑡𝑖
3.230 Apply Exercise 3.202 with 𝐾 = ℜ𝑛+ .
3.231
𝐾 ∗ = { p : p𝑇 x ≤ 0 for every x ∈ 𝐾 }
No such hyperplane exists if and only if 𝐾 ∗ ∩ ℜ𝑛++ = ∅. Assume this is the case. By
Exercise 3.225, there exists x ≩ 0 such that
xp = p𝑇 x ≤ 0 for every p ∈ 𝐾 ∗
In other words, x ∈ 𝐾 ∗∗ . By the duality theorem 𝐾 ∗∗ = 𝐾 which implies that x ∈ 𝐾
as well as ℜ𝑛+ , contrary to the hypothesis that 𝐾 ∩ ℜ𝑛+ = {0}. This contradiction
establishes that 𝐾 ∗ ∩ ℜ𝑛++ ∕= ∅.
3.232 Given a set of financial assets with prices p and payoff matrix 𝑅, let
𝑍 = { (−px, 𝑅𝑥) : x ∈ ℜ𝑛 }
𝑍 is the set of all possible (cost, payoff) pairs. It is a subspace of ℜ𝑆+1 . Let 𝑁 be the
nonnegative orthant in ℜ𝑆+1 . The no arbitrage condition
𝑅x ≥ 0 =⇒ p𝑇 x ≥ 0
implies that 𝑍 ∩ 𝑁 = {0}. By Exercise 3.230, there exists a hyperplane with positive
normal 𝜆 = (𝜆0 , 𝜆1 , . . . , 𝜆𝑆 ) such that
𝜆z = 0 for every z ∈ 𝑍
𝜆z > 0 for every z ∈ ℜ^{𝑆+1}_+ ∖ {0}
That is
−𝜆0 px + 𝜆𝑅x = 0 for every x ∈ ℜ^𝑛
or
p^𝑇 x = (𝜆/𝜆0 ) 𝑅x for every x ∈ ℜ^𝑛
𝜆/𝜆0 is the required state price vector.
Conversely, if a state price vector exists
𝑝𝑎 = ∑_{𝑠=1}^{𝑆} 𝑅𝑎𝑠 𝜋𝑠
then clearly
𝑅x ≥ 0 =⇒ p𝑇 x ≥ 0
No arbitrage portfolios exist.
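The following sketch (not part of the original solution) shows one way to recover a strictly positive state price vector numerically when the prices are consistent with no arbitrage. The payoff matrix 𝑅 and the state prices used to generate p are arbitrary illustrative choices.

import numpy as np
from scipy.optimize import linprog

R = np.array([[1.0, 1.0],      # row a = payoff of asset a in each state s
              [2.0, 0.0]])
p = R @ np.array([0.4, 0.5])   # prices generated by some positive state prices

S = R.shape[1]
# Find pi >= 1e-6 (strictly positive) with R pi = p
res = linprog(np.zeros(S), A_eq=R, b_eq=p, bounds=(1e-6, None))
print("state prices:", res.x if res.success else None)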
3.233 Apply the Farkas lemma to the system
−𝐴x ≤ 0
−c𝑇 x > 0
3.234 The inequality system 𝐴𝑇 y ≥ c has a nonnegative solution if and only if the
corresponding system of equations
𝐴𝑇 y − z = c
𝑛
has a nonnegative solution y ∈ ℜ𝑚
+ , z ∈ ℜ+ . This is equivalent to the system
(
)
y
′
𝐵
=c
z
(3.69)
where 𝐵 ′ = (𝐴𝑇 , −𝐼𝑛 ) and 𝐼𝑛 is the 𝑛 × 𝑛 identity matrix. By the Farkas lemma,
system (3.69) has no solution if and only if the system
𝐵x ≤ 0 and c𝑇 x > 0
(
)
𝐴
has a solution x ∈ ℜ𝑛 . Since 𝐵 =
, 𝐵x ≤ 0 implies
−𝐼
𝐴x ≤ 0 and − 𝐼x ≤ 0
and the latter inequality implies x ∈ ℜ𝑛+ . Thus we have established that the system
𝐴𝑇 y ≥ c has no nonnegative solution if and only if
𝐴x ≤ 0 and c𝑇 x > 0 for some x ∈ ℜ𝑛+
3.235 Assume system I has a solution, that is there exists x̂ ∈ ℜ𝑛+ such that
𝐴x̂ = 0, cx̂ > 0, x̂ ≥ 0
Then x = x̂/cx̂ satisfies the system
𝐴x = 0, cx = 1, x ≥ 0    (3.70)
which is equivalent to
x′ 𝐴^𝑇 = 0, xc = 1, x ≥ 0    (3.71)
Suppose y ∈ ℜ𝑚 satisfies
𝐴^𝑇 y ≥ c
Multiplying by x ≥ 0 gives
x′ 𝐴𝑇 y ≥ xc
Substituting (3.71), this implies the contradiction
0≥1
We conclude that system II cannot have a solution if I has a solution.
Now, assume system I has no solution. System I is equivalent to (3.70) which in turn
is equivalent to the system
[−𝐴; c] x = [0; 1]
or
𝐵x = b    (3.72)
where 𝐵 = [−𝐴; c] is the (𝑚 + 1) × 𝑛 matrix obtained by stacking −𝐴 on top of the row c, and b = (0, 1) ∈ ℜ^{𝑚+1} . If (3.72) has no solution, there exists (by the Farkas alternative) some z ∈ ℜ^{𝑚+1} such that
𝐵 ′ z ≤ 0 and bz > 0
Decompose z into z = (y, 𝑧) with y ∈ ℜ𝑚 and 𝑧 ∈ ℜ. The second inequality implies
that
(0, 1)′ (y, 𝑧) = 0y + 𝑧 = 𝑧 > 0
Without loss of generality, we can normalize so that 𝑧 = 1 and z = (y, 1).
Now 𝐵 ′ = (−𝐴𝑇 , c) and so the first inequality implies that
(−𝐴^𝑇 , c)(y, 1) = −𝐴^𝑇 y + c ≤ 0
or
𝐴^𝑇 y ≥ c
We conclude that II has a solution.
3.236 For every linear functional 𝑔𝑗 , there exists a vector a^𝑗 ∈ ℜ^𝑛 such that
𝑔𝑗 (x) = a^𝑗 x
(Proposition 3.11). Let 𝐴^𝑇 be the matrix whose rows are a^𝑗 , that is
𝐴^𝑇 = [a^1 ; a^2 ; . . . ; a^𝑚 ]
Then, the system of inequalities (3.31) is
𝐴𝑇 x ≥ c
where c = (𝑐1 , 𝑐2 , . . . , 𝑐𝑚 ). By the preceding exercise, this system is consistent if and
only there is no solution to the system
𝐴𝜆 = 0
c𝜆 > 0
𝜆≥0
Now
𝐴𝜆 = 0 ⇐⇒ ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 = 0
Therefore, the system of inequalities (3.31) is consistent if and only if
∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑔𝑗 = 0 =⇒ ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝑐𝑗 ≤ 0
for every set of nonnegative numbers 𝜆1 , 𝜆2 , . . . , 𝜆𝑚 .
3.237 Let 𝐵 be the 2𝑚 × 𝑛 matrix comprising 𝐴 and −𝐴 as follows
𝐵 = [𝐴; −𝐴]
Then the Fredholm alternative I
𝐴x = 0, c^𝑇 x = 1
is equivalent to the system
𝐵x ≤ 0, cx > 0    (3.73)
By the Farkas alternative theorem, either (3.73) has a solution or there exists 𝜆 ∈ ℜ^{2𝑚}_+ such that
𝐵 ′ 𝜆 = c    (3.74)
Decompose 𝜆 into two 𝑚-vectors
𝜆 = (𝜇, 𝛿), 𝜇, 𝛿 ∈ ℜ^𝑚_+
so that (3.74) can be rewritten as
𝐵 ′ 𝜆 = 𝐴^𝑇 𝜇 − 𝐴^𝑇 𝛿 = 𝐴^𝑇 (𝜇 − 𝛿) = c
Define y = 𝜇 − 𝛿 ∈ ℜ^𝑚 . We have established that either (3.73) has a solution or there
exists a vector y ∈ ℜ𝑚 such that
𝐴𝑇 y = c
3.238 Let a𝑗 , 𝑗 = 1, 2, . . . , 𝑚 denote the rows of 𝐴. Each a𝑗 defines a linear functional
𝑔𝑗 (𝑥) = a𝑗 𝑥 on ℜ𝑛 , and c defines another linear functional 𝑓 (𝑥) = c𝑇 x. Assume that
𝑓 (𝑥) = c𝑇 x = 0 for every x ∈ 𝑆 where
𝑆 = { x : 𝑔𝑗 (x) = a𝑖 x = 0, 𝑗 = 1, 2, . . . , 𝑚 }
Then the system
𝐴𝑥 = 0
has no solution satisfying the constraint c𝑇 x > 0. By Exercise 3.20, there exists scalars
𝑦1 , 𝑦2 , . . . , 𝑦𝑚 such that
𝑓 (x) = ∑_{𝑗=1}^{𝑚} 𝑦𝑗 𝑔𝑗 (x)
or
c = ∑_{𝑗=1}^{𝑚} 𝑦𝑗 a𝑗 = 𝐴^𝑇 y
That is y = (𝑦1 , 𝑦2 , . . . , 𝑦𝑚 ) solves the related nonhomogeneous system
𝐴𝑇 y = c
Conversely, assume that 𝐴𝑇 y = c for some 𝑦 ∈ ℜ𝑚 . Then
c𝑇 x = 𝑦𝐴𝑥 = 0
for all 𝑥 such that 𝐴𝑥 = 0 and therefore there is no solution satisfying the constraint
c𝑇 x = 1.
3.239 Let
𝑆 = { z : z = 𝐴x, x ∈ ℜ^𝑛 }
be the image of ℜ^𝑛 under 𝐴. 𝑆 is a subspace. Assume that system I has no solution, that is
𝑆 ∩ ℜ^𝑚_{++} = ∅
By Exercise 3.225, there exists y ∈ ℜ^𝑚_+ ∖ {0} such that
yz = 0 for every z ∈ 𝑆
That is
y𝐴x = 0 for every x ∈ ℜ𝑛
Letting x = 𝐴𝑇 y, we have y𝐴𝐴𝑇 y = 0 which implies that
𝐴𝑇 y = 0
System II has a solution y.
Conversely, assume that x̂ is a solution to I. Suppose to the contrary there also exists
a solution ŷ to II. Then, since 𝐴x̂ > 0 and ŷ ≩ 0, we must have ŷ𝐴x̂ = x̂𝐴𝑇 ŷ > 0.
On the other hand, 𝐴𝑇 ŷ = 0 which implies x̂𝐴𝑇 ŷ = 0, a contradiction. Hence, we
conclude that II cannot have a solution if I has a solution.
3.240 We have already shown (Exercise 3.239) that the alternatives I and II are mutually incompatible. If Gordan’s system II
𝐴𝑇 y = 0
has a semipositive solution y ≩ 0, then we can normalize y such that 1y = 1 and the
system
𝐴𝑇 y = 0
1y = 1
has a nonnegative solution.
Conversely, if Gordan’s system II has no solution, the system
𝐵 ′ y = c
where 𝐵 ′ = [𝐴^𝑇 ; 1′ ] and c = (0, 1) = (0, 0, . . . , 0, 1) ∈ ℜ^{𝑛+1} is the (𝑛 + 1)st unit vector, has no solution y ≥ 0. By the Farkas lemma, there exists z ∈ ℜ^{𝑛+1} such that
𝐵z ≥ 0 and cz < 0
Decompose z into z = (x, 𝑥) with x ∈ ℜ𝑛 . The second inequality implies that 𝑥 < 0
since
cz = (0, 1)′ (x, 𝑥) = 𝑥 < 0
Since 𝐵 = (𝐴, 1), the first inequality implies that
𝐵z = (𝐴, 1)(x, 𝑥) = 𝐴x + 1𝑥 ≥ 0
or
𝐴x ≥ −1𝑥 > 0
x solves Gordan’s system I.
3.241 Let a1 , a2 , . . . , a𝑚 be a basis for 𝑆. Let
𝐴 = (a1 , a2 , . . . , a𝑚 )
be the matrix whose columns are a𝑗 . To say that 𝑆 contains no positive vector means
that the system
𝐴x > 0
has no solution. By Gordan’s theorem, there exists some y ≩ 0 such that
𝐴𝑇 y = 0
that is
a𝑗 y = ya𝑗 = 0, 𝑗 = 1, 2, . . . , 𝑚
so that y ∈ 𝑆 ⊥ .
3.242 Let 𝑍 be the subspace 𝑍 = { z = 𝐴x : x ∈ ℜ^𝑛 }. System I has no solution 𝐴x ≩ 0 if and only if 𝑍 contains no semipositive vector z ≩ 0. By the previous exercise, 𝑍 ⊥ contains a positive vector y > 0 such that
yz = 0 for every z ∈ 𝑍
Letting x = 𝐴𝑇 y, we have y𝐴𝐴𝑇 y = 0 which implies that
𝐴𝑇 y = 0
System II has a solution y.
3.243 Let
𝑆 = { z : z = 𝐴x, x ∈ ℜ^𝑛 }
be the image of ℜ^𝑛 under 𝐴. 𝑆 is a subspace. Assume that system I has no solution, that is
𝑆 ∩ ℜ^𝑚_+ = {0}
By Exercise 3.230, there exists y ∈ ℜ^𝑚_{++} such that
yz = 0 for every z ∈ 𝑆
That is
y𝐴x = 0 for every x ∈ ℜ𝑛
Letting x = 𝐴𝑇 y, we have y𝐴𝐴𝑇 y = 0 which implies that
𝐴𝑇 y = 0
System II has a solution y.
Conversely, assume that x̂ is a solution to I. Suppose to the contrary there also exists
a solution ŷ to II. Then, since 𝐴x̂ ≩ 0 and ŷ > 0, we must have ŷ𝐴x̂ = x̂𝐴𝑇 ŷ > 0.
On the other hand, 𝐴𝑇 ŷ = 0 which implies x̂𝐴𝑇 ŷ = 0, a contradiction. Hence, we
conclude that II cannot have a solution if I has a solution.
3.244 The inequality system 𝐴𝑇 y ≤ 0 has a nonnegative solution if and only if the
corresponding system of equations
𝐴^𝑇 y + z = 0
has a nonnegative solution y ∈ ℜ^𝑚_+ , z ∈ ℜ^𝑛_+ . This is equivalent to the system
𝐵 ′ (y, z) = 0    (3.75)
where 𝐵 ′ = (𝐴𝑇 , 𝐼𝑛 ) and 𝐼𝑛 is the 𝑛 × 𝑛 identity matrix. By Gordan’s theorem, system
(3.75) has no solution if and only if the system
𝐵x > 0
has a solution x ∈ ℜ^𝑛 . Since 𝐵 = [𝐴; 𝐼], 𝐵x > 0 implies
𝐴x > 0 and 𝐼x > 0
and the latter inequality implies x ∈ ℜ𝑛++ . Thus we have established that the system
𝐴𝑇 y ≤ 0 has no nonnegative solution if and only if
𝐴x > 0 for some x ∈ ℜ𝑛++
3.245 Assume system II has no solution, that is there is no y ∈ ℜ𝑛 such that
𝐴y ≤ 0, y ≩ 0
This implies that the system
−𝐴y ≥ 0
1y ≥ 1
has no solution y ∈ ℜ^𝑚_+ . Defining 𝐵 ′ = [−𝐴; 1′ ], the latter can be written as
𝐵 ′ y ≥ −e𝑚+1    (3.76)
where −e𝑚+1 = (0, 1), 0 ∈ ℜ^𝑚 .
By the Gale alternative (Exercise 3.234), if system (3.76) has no solution, the alternative system
𝐵z ≤ 0, −e𝑚+1 z > 0
𝑛
has a nonnegative solution z ∈ ℜ𝑛+1
+ . Decompose z into z = (x, 𝑧) where x ∈ ℜ+ and
𝑧 ∈ ℜ+ . The second inequality implies 𝑧 > 0 since e𝑚+1 z = 𝑧.
𝐵 = (−𝐴𝑇 , 1) and the first inequality implies
(
)
x
𝐵z = (−𝐴𝑇 , 1)
= −𝐴𝑇 x + 1𝑧 ≤ 0
𝑧
or
𝐴𝑇 x ≥ 1𝑧 > 0
Thus system I has a solution x ∈ ℜ𝑛+ . Since x = 0 implies 𝐴x = 0, we conclude that
x ≩ 0.
Conversely, assume that II has a solution y ≩ 0 such that 𝐴y ≤ 0. Then, for every
x ∈ ℜ𝑛+
x𝐴𝑇 y = y′ 𝐴𝑇 x ≤ 0
Since y ≩ 0, this implies
𝐴𝑇 x ≤ 0
for every x ∈ ℜ𝑛+ which contradicts I.
3.246 We give a constructive proof, by proposing an algorithm which will generate
the desired decomposition. Assume that x satisfies 𝐴x ≩ 0. Arrange the rows of 𝐴
such that the positive elements of 𝐴x are listed first. That is, decompose 𝐴 into two
submatrices such that
𝐵1x > 0
𝐶1x = 0
Either
Case 1 𝐶 1 x ≩ 0 has no solution and the result is proved or
Case 2 𝐶 1 x ≩ 0 has a solution x′ .
Let x̄ be a linear combination of x and x′ . Specifically, define
x̄ = 𝛼x + x′
where
𝛼 > max_𝑗 (−b𝑗 x′ / b𝑗 x)
where b𝑗 is the 𝑗th row of 𝐵^1 . 𝛼 is chosen so that
𝛼𝐵^1 x > −𝐵^1 x′
By direct computation
𝐵 1 x̄ = 𝛼𝐵 1 x + 𝐵 1 x′ > 0
𝐶 1 x̄ = 𝛼𝐶 1 x + 𝐶 1 x′ ≩ 0
since 𝐶 1 x = 0 and 𝐶 1 x′ ≩ 0. By construction, x̄ is another solution to 𝐴x ≩ 0
such that 𝐴x̄ has more positive components than 𝐴x. Again, collect all the positive
components together, decomposing 𝐴 into two submatrices such that
𝐵 2 x̄ > 0
𝐶 2 x̄ = 0
Either
Case 1 𝐶 2 x ≩ 0 has no solution and the result is proved or
Case 2 𝐶 2 x ≩ 0 has a solution x′′ .
In the second case, we can repeat the previous procedure, generating another decomposition 𝐵 3 , 𝐶 3 and so on. At each stage 𝑘, the matrix 𝐵 𝑘 get larger and 𝐶 𝑘 smaller.
The algorithm must terminate before 𝐵 𝑘 equals 𝐴, since we began with the assumption
that 𝐴x > 0 has no solution.
3.247 There are three possible cases to consider.
Case 1: y = 0 is the only solution of 𝐴𝑇 y = 0. Then 𝐴x > 0 has a solution x′ by
Gordan’s theorem and
𝐴x′ + 0 > 0
Case 2: 𝐴𝑇 y = 0 has a positive solution y > 0 Then 0 is the only solution 𝐴x ≥ 0
by Stiemke’s theorem and
𝐴0 + y > 0
Case 3 𝐴𝑇 y = 0 has a solution y ≩ 0 but y ∕> 0. By Gordan’s theorem 𝐴x > 0 has
no solution. By the previous exercise, 𝐴 can be decomposed into two consistent
subsystems
𝐵x > 0
𝐶x = 0
such that 𝐶x ≩ 0 has no solution. Assume that 𝐵 is 𝑘 × 𝑛 and 𝐶 is 𝑙 × 𝑛 where
𝑙 = 𝑚 − 𝑘. Applying Stiemke’s theorem to 𝐶, there exists z > 0, z ∈ ℜ𝑙 . Define
y ∈ ℜ𝑚
+ by
{
0
𝑗 = 1, 2, . . . , 𝑘
𝑦𝑗 =
𝑦𝑗 = 𝑧𝑗−𝑘 𝑗 = 𝑘 + 1, 𝑘 + 2, . . . , 𝑚
Then x, y is the desired solution since for every 𝑗, 𝑗 = 1, 2, . . . , 𝑚 either 𝑦𝑗 > 0
or (𝐴x)𝑗 = (𝐵x)𝑗 > 0.
3.248 Consider the dual pair
[𝐴; 𝐼] x ≥ 0 and (𝐴^𝑇 , 𝐼)(y, z) = 0, y ≥ 0, z ≥ 0
By Tucker's theorem, this has a solution x∗ , y∗ , z∗ such that
𝐴x∗ ≥ 0, x∗ ≥ 0, 𝐴^𝑇 y∗ + z∗ = 0, y∗ ≥ 0, z∗ ≥ 0
𝐴x∗ + y∗ > 0
𝐼x∗ + 𝐼z∗ > 0
Substituting z∗ = −𝐴^𝑇 y∗ implies
𝐴^𝑇 y∗ ≤ 0
and
x∗ − 𝐴^𝑇 y∗ > 0
3.249 Consider the dual pair
𝐴x ≥ 0 and 𝐴𝑇 y = 0, y ≥ 0
where 𝐴 is an 𝑚 × 𝑛 matrix. By Tucker’s theorem, there exists a pair of solutions
x∗ ∈ ℜ𝑛 and y∗ ∈ ℜ𝑚 such that
𝐴x∗ + y∗ > 0    (3.77)
Assume that 𝐴x > 0 has no solution (Gordan I). Then there exists some 𝑗 such that
(𝐴x∗ )𝑗 = 0 and (3.77) implies that 𝑦𝑗∗ > 0. Therefore y∗ ≩ 0 and solves Gordan II.
Conversely, assume that 𝐴𝑇 y = 0 has no solution y > 0 (Stiemke II). Then, there
exists some 𝑗 such that 𝑦𝑗∗ = 0 and (3.77) implies that (𝐴x∗ )𝑗 > 0. Therefore x∗
solves 𝐴x ≩ 0 (Stiemke I).
3.250 We have already shown that Farkas I and II are mutually inconsistent. Assume
that Farkas system I
𝐴x ≥ 0, c^𝑇 x < 0
has no solution. Define the (𝑚 + 1) × 𝑛 matrix 𝐵 = [𝐴; −c′ ] (𝐴 stacked on top of the row −c′ ). Our assumption is that the system
𝐵x ≥ 0
has no solution with (𝐵x)𝑚+1 = −cx > 0. By Tucker’s theorem, the dual system
𝐵′z = 0
has a solution z ∈ ℜ^{𝑚+1}_+ with 𝑧_{𝑚+1} > 0. Without loss of generality, we can normalize so that 𝑧_{𝑚+1} = 1. Decompose z into z = (y, 1) with y ∈ ℜ^𝑚_+ . Since 𝐵 ′ = (𝐴^𝑇 , −c), 𝐵 ′ z = 0 implies
𝐵 ′ z = (𝐴^𝑇 , −c)(y, 1) = 𝐴^𝑇 y − c = 0
or
𝐴^𝑇 y = c
y ∈ ℜ^𝑚_+ solves Farkas II.
3.251 If x ≥ 0 solves I, then
x′ (𝐴^𝑇 y1 + 𝐵 ′ y2 + 𝐶 ′ y3 ) = x′ 𝐴^𝑇 y1 + x′ 𝐵 ′ y2 + x′ 𝐶 ′ y3 > 0
since x′ 𝐴𝑇 y1 = y1 𝐴x > 0, x′ 𝐵 ′ y2 = y2 𝐵x ≥ 0 and x′ 𝐶 ′ y3 = y3 𝐶x = 0 which
contradicts II.
The equation 𝐶x = 0 is equivalent to the pair of inequalities 𝐶x ≥ 0, −𝐶x ≥ 0. By
Tucker’s theorem the dual pair
𝐴x ≥ 0, 𝐵x ≥ 0, 𝐶x ≥ 0, −𝐶x ≥ 0
and
𝐴^𝑇 y1 + 𝐵 ′ y2 + 𝐶 ′ u3 − 𝐶 ′ v3 = 0, y1 ≥ 0, y2 ≥ 0, u3 ≥ 0, v3 ≥ 0
has solutions x ∈ ℜ^𝑛 , y1 ∈ ℜ^{𝑚1 } , y2 ∈ ℜ^{𝑚2 } , u3 , v3 ∈ ℜ^{𝑚3 } such that
𝐴x + y1 > 0, 𝐵x + y2 > 0, 𝐶x + u3 > 0, −𝐶x + v3 > 0
Assume Motzkin I has no solution. That is, there is y1 ≩ 0. Define y3 = u3 − v3 .
Then y1 , y2 , y3 satisfies Motzkin II.
3.252
1. For every a ∈ 𝑆, let 𝑆a∗ be the polar set
𝑆a∗ = { x ∈ ℜ𝑛 : ∥x∥ = 1, xa ≥ 0 }
𝑆a∗ is nonempty since 0 ∈ 𝑆a∗ . Let x be the limit of a sequence x𝑛 of points in 𝑆a∗ .
Since x𝑛 a ≥ 0 for every 𝑛, xa ≥ 0 so that x ∈ 𝑆a∗ . Hence 𝑆a∗ is a closed subset of
𝐵 = { x ∈ ℜ𝑛 : ∥x∥ = 1 }.
2. Let {a1 , a2 , . . . , a𝑚 } be any finite set of points in 𝑆. Since 0 ∉ 𝑆, the system
∑_{𝑖=1}^{𝑚} 𝑦𝑖 a𝑖 = 0, ∑_{𝑖=1}^{𝑚} 𝑦𝑖 = 1, 𝑦𝑖 ≥ 0
has no solution. A fortiori, the system
∑_{𝑖=1}^{𝑚} 𝑦𝑖 a𝑖 = 0
has no solution y ≩ 0. If 𝐴 is the 𝑚 × 𝑛 matrix whose rows are a𝑖 , the latter system can be written as
𝐴^𝑇 y = 0
3. By Gordan’s theorem, the system
𝐴x > 0    (3.78)
has a solution x̄ ∕= 0.
4. Without loss of generality, we can take ∥x̄∥ = 1. (3.78) implies that
a𝑖 x̄ = x̄a𝑖 > 0
for every 𝑖 = 1, 2, . . . , 𝑚 so that x̄ ∈ 𝑆^∗_{a𝑖 } . Hence
x̄ ∈ ∩_{𝑖=1}^{𝑚} 𝑆^∗_{a𝑖 }
5. We have shown that for every finite set {a1 , a2 , . . . , a𝑚 } ⊆ 𝑆, ∩_{𝑖=1}^{𝑚} 𝑆^∗_{a𝑖 } is a nonempty closed subset of the compact set 𝐵 = { x ∈ ℜ^𝑛 : ∥x∥ = 1 }. By the finite intersection property (Exercise 1.116)
∩_{a∈𝑆} 𝑆^∗_a ∕= ∅
6. For every p ∈ ∩_{a∈𝑆} 𝑆^∗_a
pa ≥ 0 for every a ∈ 𝑆
p defines a hyperplane 𝑓 (a) = pa which separates 𝑆 from 0.
3.253 The expected outcome if player 1 adopts the mixed strategy p = (𝑝1 , 𝑝2 , . . . , 𝑝𝑚 )
and player 2 plays her 𝑗 pure strategy is
𝑢(p, 𝑗) =
𝑚
∑
𝑝𝑖 𝑎𝑖𝑗 = pa𝑗
𝑖=1
where a𝑗 is the 𝑗th column of 𝐴. The expected payoff to 1 for all possible responses
of player 2 is the vector (p𝐴)′ = 𝐴𝑇 p. The mixed strategy p ensures player 1 a
nonnegative security level provided 𝐴𝑇 p ≥ 0.
Similarly, if 2 adopts the mixed strategy q = (𝑞1 , 𝑞2 , . . . , 𝑞𝑛 ), the expected payoff to 2
if 1 plays his 𝑖 strategy is a𝑖 q where a𝑖 is the 𝑖th row of 𝐴. The expected outcome for
all the possible responses of player 1 is the vector 𝐴q. The mixed strategy q ensures
player 2 a nonpositive security level provided 𝐴q ≤ 0.
By the von Neumann alternative theorem (Exercise 3.245), at least one of these alternatives must be true. That is,
either I: 𝐴^𝑇 p > 0, p ≩ 0 for some p ∈ ℜ^𝑚
or II: 𝐴q ≤ 0, q ≩ 0 for some q ∈ ℜ^𝑛
Since p ≩ 0 and q ≩ 0, we can normalize so that p ∈ Δ𝑚−1 and q ∈ Δ𝑛−1 . At least
one of the players has a strategy which guarantees she cannot lose.
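As a computational aside (not part of the original solution), the value of a finite zero-sum game and an optimal mixed strategy for player 1 can be computed by linear programming; the payoff matrix below is an arbitrary rock-scissors-paper-like example.

import numpy as np
from scipy.optimize import linprog

A = np.array([[ 0.0,  1.0, -1.0],     # payoff to player 1
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])
m, n = A.shape

# Variables (p, v): maximize v subject to A'p >= v*1, 1'p = 1, p >= 0
c = np.append(np.zeros(m), -1.0)
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)])
p, v = res.x[:m], res.x[-1]
print("optimal strategy:", p, " value:", v)    # expect (1/3, 1/3, 1/3), value 0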
3.254
1. For any 𝑐 ∈ ℜ, define the game
𝑢ˆ(a1 , a2 ) = 𝑢(a1 , a2 ) − 𝑐
with
𝑣̂1 = max_p min_𝑗 𝑢̂(p, 𝑗) = max_p min_𝑗 𝑢(p, 𝑗) − 𝑐 = 𝑣1 − 𝑐
𝑣̂2 = min_q max_𝑖 𝑢̂(𝑖, q) = min_q max_𝑖 𝑢(𝑖, q) − 𝑐 = 𝑣2 − 𝑐
By the previous exercise,
Either 𝑣̂1 ≥ 0 or 𝑣̂2 ≤ 0
That is
Either 𝑣1 ≥ 𝑐 or 𝑣2 ≤ 𝑐
2. Since this applies for arbitrary 𝑐 ∈ ℜ, it implies that, while 𝑣1 ≤ 𝑣2 , there is no 𝑐 such that
𝑣1 < 𝑐 < 𝑣2
Therefore, we conclude that 𝑣1 = 𝑣2 as required.
3.255
1. The mixed strategies p of player 1 are elements of the simplex Δ𝑚−1 ,
which is compact (Example 1.110). Since 𝑣1 (p) = min𝑛𝑗=1 𝑢(p, 𝑗) is continuous
(Maximum theorem 2.3), 𝑣1 (p) achieves its maximum on Δ𝑚−1 (Weierstrass
theorem 2.2). That is, there exists p∗ ∈ Δ𝑚−1 such that
𝑣1 = 𝑣1 (p∗ ) = max_p 𝑣1 (p)
Similarly, there exists q∗ ∈ Δ^{𝑛−1} such that
𝑣2 = 𝑣2 (q∗ ) = min_q 𝑣2 (q)
2. Let 𝑢(p, q) denote the expected outcome when player 1 adopts mixed strategy p
and player 2 plays q. That is
𝑢(p, q) = ∑_{𝑖=1}^{𝑚} ∑_{𝑗=1}^{𝑛} 𝑝𝑖 𝑞𝑗 𝑎𝑖𝑗
Then
𝑣 = 𝑢(p∗ , q∗ ) = max_𝑖 𝑢(𝑖, q∗ ) ≥ ∑_𝑖 𝑝𝑖 𝑢(𝑖, q∗ ) = 𝑢(p, q∗ ) for every p ∈ Δ^{𝑚−1}
Similarly
𝑣 = 𝑢(p∗ , q∗ ) = min_𝑗 𝑢(p∗ , 𝑗) ≤ ∑_𝑗 𝑞𝑗 𝑢(p∗ , 𝑗) = 𝑢(p∗ , q) for every q ∈ Δ^{𝑛−1}
(p∗ , q∗ ) is a Nash equilibrium.
3.256 By the Minimax theorem, every finite two person zero-sum game has a value.
The previous result shows that this is attained at a Nash equilibrium.
3.257 If player 2 adopts the strategy 𝑡1
𝑓p (𝑡1 ) = −𝑝1 + 2𝑝2 < 0 if 𝑝1 > 2𝑝2
If player 2 adopts the strategy 𝑡5
𝑓p (𝑡5 ) = 𝑝1 − 2𝑝2 < 0 if 𝑝1 < 2𝑝2
Therefore
𝑣1 (p) = min_{z∈𝑍} 𝑓p (z) ≤ min{𝑓p (𝑡1 ), 𝑓p (𝑡5 )} < 0
for every p such that 𝑝1 ∕= 2𝑝2 . Since 𝑝1 + 𝑝2 = 1, we conclude that
𝑣1 (p) = 0 if p = p∗ = (2/3, 1/3), and 𝑣1 (p) < 0 otherwise
We conclude that
𝑣1 = max_p 𝑣1 (p) = 0
which is attained at p∗ = (2/3, 1/3).
3.258
1.
𝑣2 = min_{z∈𝑍} max_{𝑖=1,…,𝑚} 𝑧𝑖
Since 𝑍 is compact, 𝑣2 = 0 implies there exists z̄ ∈ 𝑍 such that
max_{𝑖=1,…,𝑚} 𝑧̄𝑖 = 0
which implies that z̄ ≤ 0. Consequently 𝑍 ∩ ℜ^𝑛_− ∕= ∅.
2. Assume to the contrary that there exists
z ∈ 𝑍 ∩ int ℜ𝑛−
That is, there exists some strategy q ∈ Δ𝑛−1 such that 𝐴q < 0 and therefore
𝑣2 < 0, contrary to the hypothesis.
3. There exists a hyperplane with nonnegative normal separating 𝑍 from ℜ𝑛− (Exercise 3.227). That is, there exists p∗ ∈ ℜ𝑛+ , p∗ ∕= 0 such that
𝑓p∗ (z) ≥ 0 for every z ∈ 𝑍
and therefore
𝑣1 (p∗ ) = min_{z∈𝑍} 𝑓p∗ (z) ≥ 0
Without loss of generality, we can normalize so that ∑_{𝑖=1}^{𝑛} 𝑝∗𝑖 = 1 and therefore p∗ ∈ Δ^{𝑚−1} .
4. Consequently
𝑣1 = max_p 𝑣1 (p) ≥ 𝑣1 (p∗ ) ≥ 0
On the other hand, we know that 𝑍 contains a point z̄ ≤ 0. For every p ≥ 0
𝑓p (z̄) ≤ 0
and therefore
𝑣1 (p) = min_{z∈𝑍} 𝑓p (z) ≤ 𝑓p (z̄) ≤ 0
so that
𝑣1 = max_p 𝑣1 (p) ≤ 0
We conclude that
𝑣1 = 0 = 𝑣2
3.259 Consider the game with the same strategies and the payoff function
𝑢̂(a1 , a2 ) = 𝑢(a1 , a2 ) − 𝑐
The expected value to player 2 is
𝑣̂2 = min_q max_𝑖 𝑢̂(𝑖, q) = min_q max_𝑖 𝑢(𝑖, q) − 𝑐 = 𝑣2 − 𝑐 = 0
By the previous exercise 𝑣̂1 = 𝑣̂2 = 0 and
𝑣1 = max_p min_𝑗 𝑢(p, 𝑗) = max_p min_𝑗 𝑢̂(p, 𝑗) + 𝑐 = 𝑣̂1 + 𝑐 = 𝑐 = 𝑣2
3.260 Assume that p1 and p2 are both optimal strategies for player 1. Then
𝑢(p1 , q) ≥ 𝑣 for every q ∈ Δ𝑛−1
𝑢(p2 , q) ≥ 𝑣 for every q ∈ Δ𝑛−1
Let p̄ = 𝛼p1 + (1 − 𝛼)p2 . Since 𝑢 is bilinear
𝑢(p̄, q) = 𝛼𝑢(p1 , q) + (1 − 𝛼)𝑢(p2 , q) ≥ 𝑣 for every q ∈ Δ𝑛−1
Consequently, p̄ is also an optimal strategy for player 1.
3.261 𝑓 is the payoff function of some 2 person zero-sum game in which the players
have 𝑚 + 1 and 𝑛 + 1 strategies respectively. The result follows from the Minimax
Theorem.
3.262
1. The possible partitions of 𝑁 = {1, 2, 3} are:
{1}, {2}, {3}
{𝑖, 𝑗}, {𝑘}, 𝑖, 𝑗, 𝑘 ∈ 𝑁, 𝑖 ∕= 𝑗 ∕= 𝑘
{1, 2, 3}
In any partition, at most one coalition can have two or more players, and therefore
∑_{𝑘=1}^{𝐾} 𝑤(𝑆𝑘 ) ≤ 1
2. Assume x = (𝑥1 , 𝑥2 , 𝑥3 ) ∈ core. Then x must satisfy the following system of
inequalities
𝑥1 + 𝑥2 ≥ 1 = 𝑤({1, 2})
𝑥1 + 𝑥3 ≥ 1 = 𝑤({1, 3})
𝑥2 + 𝑥3 ≥ 1 = 𝑤({2, 3})
which can be summed to yield
2(𝑥1 + 𝑥2 + 𝑥3 ) ≥ 3
or
𝑥1 + 𝑥2 + 𝑥3 ≥ 3/2
which implies that x exceeds the sum available. This contradiction establishes
that the core is empty.
Alternatively, observe that the three person majority game is a simple game with
no veto players. By Exercise 1.69, its core is empty.
3.263 Assume that the game (𝑁, 𝑤) is not cohesive. Then there exists a partition
{𝑆1 , 𝑆2 , . . . , 𝑆𝐾 } of 𝑁 such that
𝑤(𝑁 ) < ∑_{𝑘=1}^{𝐾} 𝑤(𝑆𝑘 )
Assume x ∈ core. Then
∑_{𝑖∈𝑆𝑘 } 𝑥𝑖 ≥ 𝑤(𝑆𝑘 )    𝑘 = 1, 2, . . . , 𝐾
Since {𝑆1 , 𝑆2 , . . . , 𝑆𝐾 } is a partition
∑_{𝑖∈𝑁} 𝑥𝑖 = ∑_{𝑘=1}^{𝐾} ∑_{𝑖∈𝑆𝑘 } 𝑥𝑖 ≥ ∑_{𝑘=1}^{𝐾} 𝑤(𝑆𝑘 ) > 𝑤(𝑁 )
which contradicts the assumption that x ∈ core. This establishes that cohesivity is
necessary for the existence of the core.
To show that cohesivity is not sufficient, we observe that the three person majority
game is cohesive, but its core is empty.
3.264 The other balanced families of coalitions in a three player game are
1. ℬ = {𝑁 } with weights
{
𝑤(𝑆) =
1
0
𝑆=𝑁
otherwise
2. ℬ = {{1}, {2}, {3}} with weights 𝑤(𝑆) = 1 for every 𝑆 ∈ ℬ
3. ℬ = {{𝑖}, {𝑗, 𝑘}}, 𝑖, 𝑗, 𝑘 ∈ ℬ, 𝑖 ∕= 𝑗 ∕= 𝑘 with weights 𝑤(𝑆) = 1 for every 𝑆 ∈ ℬ
3.265 The following table lists some nontrivial balanced families of coalitions for a four
player game. Other balanced families can be obtained by permutation of the players.
200
Solutions for Foundations of Mathematical Economics
{123}, {124}, {34}
{12}, {13}, {23}, {4}
{123}, {14}, {24}, {3}
{123}, {14}, {24}, {34}
{123}, {124}, {134}, {234}
c 2001 Michael Carter
⃝
All rights reserved
Weights
1/2, 1/2, 1/2
1/2, 1/2, 1/2, 1
1/2, 1/2, 1/2, 1/2
2/3, 1/3, 1/3, 1/3
1/3, 1/3, 1/3, 1/3
3.266 Both sides of the expression
e𝑁 =
∑
𝜆𝑆 e𝑆
𝑆∈ℬ
are vectors, with each component corresponding to a particular player. For player 𝑖,
the 𝑖𝑡 ℎ component of e𝑁 is 1 and the 𝑖𝑡 ℎ component of e𝑆 is 1 if 𝑖 ∈ 𝑆 and 0 otherwise.
Therefore, for each player 𝑖, the preceding expression can be written
∑
𝜆𝑆 = 1
𝑆∈ℬ∣𝑆∋𝑖
For each coalition 𝑆, the share of the coalition 𝑆 at the allocation x is
∑
𝑔𝑆 (x) =
𝑖 ∈ 𝑆𝑥𝑖 = e𝑆 ẋ
The condition
𝑔𝑁 =
∑
(3.79)
𝜆𝑆 𝑔𝑆
𝑆∈ℬ
means that for every x ∈ 𝑋
𝑔𝑁 (x) =
∑
𝜆𝑆 𝑔𝑆 (x)
𝑆∈ℬ
Substituting (3.79)
e𝑁 ẋ =
∑
𝜆𝑆 𝑒𝑆 ẋ
𝑆∈ℬ
which is equivalent to the condition
∑
𝜆𝑆 e𝑆 = e𝑁
𝑆∈ℬ
3.267 By construction, 𝜇 ≥ 0. If 𝜇 = 0,
∑
𝜆𝑆 𝑔𝑆 − 𝜇𝑔𝑁 = 0
𝑆⊆𝑁
implies that 𝜆𝑆 = 0 for all 𝑆 and consequently
∑
𝜆𝑆 𝑤(𝑆) − 𝜇𝑤(𝑁 ) ≤ 0
𝑆⊆𝑁
is trivially satisfied. On the other hand, if 𝜇 > 0, we can divide both conditions by 𝜇.)
201
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3.268 Let (𝑁, 𝑤1 ) and (𝑁, 𝑤2 ) be balanced games. By the Bondareva-Shapley theorem,
they have nonempty cores. Let x1 ∈ core(𝑁, 𝑤1 ) and x2 ∈ core(𝑁, 𝑤2 ). That is,
𝑔𝑆 (x1 ) ≥ 𝑤1 (𝑆) for every 𝑆 ⊆ 𝑁
𝑔𝑆 (x2 ) ≥ 𝑤2 (𝑆) for every 𝑆 ⊆ 𝑁
Adding, we have
𝑔𝑆 (x1 ) + 𝑔𝑆 (x2 ) = 𝑔𝑆 (x1 + x2 ) ≥ 𝑤1 (𝑆) + 𝑤2 (𝑆) for every 𝑆 ⊆ 𝑁
which implies that x1 + x2 belongs to core(𝑁, 𝑤1 + 𝑤2 ). Therefore (𝑁, 𝑤1 + 𝑤2 ) is
balanced. Similarly, if x ∈ core(𝑁, 𝑤), then 𝛼x belongs to core(𝑁, 𝛼𝑤) for every
𝛼 ∈ ℜ+ . That is (𝑁, 𝛼𝑤) is balanced for every 𝛼 ∈ ℜ+ .
3.269
1. Assume otherwise. That is assume there exists some y ∈ 𝐴 ∩ 𝐵. Taking
the first 𝑛 components, this implies that
∑
e𝑁 =
𝜆𝑠 e𝑆
𝑆⊆𝑁
for some (𝜆𝑆 ≥ 0 : 𝑆 ⊆ 𝑁 ). Let ℬ = {𝑆 ⊂ 𝑁 ∣ 𝜆𝑆 > 0} be the set of coalitions
with strictly positive weights. Then ℬ is a balanced family of coalitions with
weights 𝜆𝑆 (Exercise 3.266).
However, looking at the last coordinate, y ∈ 𝐴 ∩ 𝐵 implies
∑
𝜆𝑠 𝑤(𝑆) = 𝑤(𝑁 ) + 𝜖 > 𝑤(𝑁 )
𝑆∈ℬ
which contradicts the assumption that the game is balanced. We conclude that
𝐴 and 𝐵 are disjoint if the game is balanced.
2. (a) Substituting y = (e∅ , 0) in (3.36) gives
(z, 𝑧0 )′ (0, 0) = 0 ≥ 𝑐
which implies that 𝑐 ≤ 0.
NOTE We still have to show that 𝑐 ≥ 0.
(b) Substituting (e𝑁 , 𝑤𝑦(𝑁 )) in (3.36) gives
𝑧e𝑁 + 𝑧0 𝑤(𝑁 ) > 𝑧e𝑁 + 𝑧0 𝑤(𝑁 ) + 𝑧0 𝜖
for all 𝜖 > 0, which implies that 𝑧0 < 0.
3. Without loss of generality, we can normalize so that 𝑧0 = −1. Then the separating
hyperplane conditions become
(z, −1)′ y ≥ 0
′
(z, −1) (e𝑁 , 𝑤(𝑁 ) + 𝜖) < 0
for every y ∈ 𝐴
(3.80)
for every 𝜖 > 0
(3.81)
For any 𝑆 ⊆ 𝑁 , (e𝑆 , 𝑤(𝑆)) ∈ 𝐴. Substituting y = (e𝑆 , 𝑤(𝑆)) in (3.80) gives
e′𝑆 z − 𝑤(𝑆) ≥ 0
that is
𝑔𝑆 (z) = e′𝑆 z =≥ 𝑤(𝑆)
202
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
while (3.81) implies
𝑔𝑁 (z) = e′𝑁 z > 𝑤(𝑁 ) + 𝜖
for every 𝜖 > 0
This establishes that z belongs to the core. Hence the core is nonempty.
∑
3.270
1. Let 𝛼 = 𝑤(𝑁 ) − 𝑖∈𝑁 𝑤𝑖 > 0 since (𝑁, 𝑤) is essential. For every 𝑆 ⊆ 𝑁 ,
define
(
)
∑
1
0
𝑤(𝑆) −
𝑤𝑖
𝑤 (𝑆) =
𝛼
𝑖∈𝑆
Then
𝑤0 ({𝑖}) = 0 for every 𝑖 ∈ 𝑁
𝑤0 (𝑁 ) = 1
𝑤0 is 0–1 normalized.
2. Let y ∈ core(𝑁, 𝑤0 ). Then for every 𝑆 ⊆ 𝑁
∑
𝑦𝑖 ≥ 𝑤0 (𝑆)
(3.82)
𝑖∈𝑆
∑
𝑦𝑖 = 1
(3.83)
𝑖∈𝑁
Let w = (𝑤1 , 𝑤2 , . . . , 𝑤𝑛 ) where 𝑤𝑖 = 𝑤({𝑖}). Let x = 𝛼y + w. Using (3.82) and
(3.83)
∑
∑
𝑥𝑖 =
(𝛼𝑦𝑖 + 𝑤𝑖 )
𝑖∈𝑆
𝑖∈𝑆
=𝛼
∑
𝑦𝑖 +
𝑖∈𝑆
∑
𝑖∈𝑁
𝑤𝑖
𝑖∈𝑆
≥ 𝛼𝑤0 (𝑆) +
1
=𝛼
𝛼
∑
∑
𝑤𝑖
𝑖∈𝑆
(
𝑤(𝑆) −
∑
𝑖∈𝑆
)
𝑤𝑖
+
∑
𝑤𝑖
𝑖∈𝑆
= 𝑤(𝑆)
∑
𝑥𝑖 =
(𝛼𝑦𝑖 + 𝑤𝑖 )
𝑖∈𝑁
=𝛼+
∑
𝑤𝑖
𝑖∈𝑁
= 𝑤(𝑁 )
Therefore, x = 𝛼y + w ∈ core(𝑁, 𝑤). Similarly, we can show that
x ∈ core(𝑁, 𝑤) =⇒ y =
1
(x − w) ∈ core(𝑁, 𝑤0 )
𝛼
and therefore
core(𝑁, 𝑤) = 𝛼core(𝑁, 𝑤0 ) + w
203
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
3. This immediately implies
core(𝑁, 𝑤) = ∅ ⇐⇒ core(𝑁, 𝑤0 ) = ∅
3.271 (𝑁, 𝑤) is 0–1 normalized, that is
𝑤({𝑖} = 0 for every 𝑖 ∈ 𝑁
𝑤(𝑁 ) = 1
Consequently, x belongs to the core of (𝑁, 𝑤) if and only if
∑
𝑥𝑖 ≥ 𝑤𝑖 = 0
(3.84)
𝑥𝑖 = 𝑤(𝑁 ) = 1
(3.85)
𝑥𝑖 ≥ 𝑤(𝑆) for every 𝑆 ∈ 𝒜
(3.86)
𝑖∈𝑁
∑
𝑖∈𝑆
(3.84) and (3.85) ensure that x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 ) is a mixed strategy for player 1 in the
two-person zero-sum game. Using this mixed strategy, the expected payoff to player I
for any strategy 𝑆 of player II is
𝑢(x, 𝑆) =
∑
𝑥𝑖 𝑢(𝑖, 𝑆) =
𝑖∈𝑁
∑
𝑖∈𝑆
𝑥𝑖
1
𝑤(𝑆)
(3.86) implies
𝑢(x, 𝑆) =
∑
𝑖∈𝑆
𝑥𝑖
1
≥ 1 for every 𝑆 ∈ 𝒜
𝑤(𝑆)
That is any x ∈ core(𝑁, 𝑤) provides a mixed strategy for player I which ensures a
payoff at least 1. That is
core(𝑁, 𝑤) ∕= ∅ =⇒ 𝛿 ≥ 1
Conversely, if the 𝛿 < 1, there is no mixed strategy for player I which satisfies (3.86) and
consequently no x which satisfies (3.84), (3.85) and (3.86). In other words, core(𝑁, 𝑤) =
∅.
3.272 If 𝛿 is the value of 𝐺, there exists a mixed strategy which will guarantee that II
pays no more than 𝛿. That is, there exists numbers 𝑦𝑆 ≥ 0 for every coalition 𝑆 ∈ 𝒜
such that
∑
𝑦𝑆 = 1
𝑆∈𝒜
and
∑
𝑦𝑆 𝑢(𝑖, 𝑆) ≤ 𝛿
for every 𝑖 ∈ 𝑁
𝑆∈𝒜
that is
∑
𝑆∈𝒜
𝑦𝑆
1
≤𝛿
𝑤(𝑆)
for every 𝑖 ∈ 𝑁
𝑆∋𝑖
204
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
or
∑
𝑆∈𝒜
𝑦𝑆
≤1
𝛿𝑤(𝑆)
for every 𝑖 ∈ 𝑁
(3.87)
𝑆∋𝑖
For each coalition 𝑆 ∈ 𝒜 let
𝜆𝑆 =
𝑦𝑆
𝛿𝑤(𝑆)
in (3.87)
∑
𝜆𝑆 ≤ 1
𝑆∈𝒜
𝑆∋𝑖
Augment the collection 𝒜 with the single-player coalitions to form the collection
ℬ = 𝒜 ∪ { {𝑖} : 𝑖 ∈ 𝑁 }
and with weights { 𝜆𝑆 : 𝑆 ∈ 𝒜 } and
∑
𝜆{𝑖} = 1 −
𝜆𝑆
𝑆∈𝒜
Then ℬ is a balanced collection.
Since the game (𝑁, 𝑤) is balanced
1 = 𝑤(𝑁 ) ≥
∑
𝜆𝑆 𝑤(𝑆)
𝑆∈ℬ
=
∑
𝜆𝑆 𝑤(𝑆)
𝑆∈𝒜
∑
𝑦𝑆
𝑤(𝑆)
𝛿𝑤(𝑆)
𝑆∈ℬ
1∑
=
𝑦𝑆
𝛿
=
𝑆∈ℬ
1
=
𝛿
that is
1≥
1
𝛿
If I plays the mixed strategy x̄ = (1/𝑛, 1/𝑛, . . . , 1/𝑛), the payoff is
𝑢(x̄, 𝑆) =
∑
𝑖∈𝑁
1
1
=
> 0 for every 𝑆 ⊆ 𝒜
𝑛𝑤(𝑆)
𝑤(𝑆)
Therefore 𝛿 > 0 and (3.88) implies that
𝛿≥1
205
(3.88)
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
3.273 Assume core(𝑁, 𝑤) ∕= ∅ and let 𝑥 ∈ core(𝑁, 𝑤). Then
𝑔𝑆 (x) ≥ 𝑤(𝑆) for every 𝑆 ⊆ 𝑁
where 𝑔𝑆 =
∑
𝑖∈𝑆
(3.89)
𝑥𝑖 measures the share coalition 𝑆 at the allocation x.
Let ℬ be a balanced family of coalitions with weights 𝜆𝑆 . For every 𝑆 ∈ ℬ, (3.89)
implies
𝜆𝑆 𝑔𝑆 (x) ≥ 𝜆𝑆 𝑤(𝑆)
Summing over all 𝑆 ∈ ℬ
∑
∑
𝜆𝑆 𝑔𝑆 (x) ≥
𝑆∈ℬ
𝜆𝑆 𝑤(𝑆)
𝑆∈ℬ
Evaluating the left hand side of this inequality
∑
∑ ∑
𝜆𝑆 𝑔𝑆 (x) =
𝜆
𝑥𝑖
𝑆∈ℬ
𝑆∈ℬ
=
𝑖∈𝑆
∑∑
𝜆𝑥𝑖
𝑖∈𝑁 𝑆∈ℬ
=
∑
𝑆∋𝑖
𝑥𝑖
𝑖∈𝑁
=
∑
∑
𝑆∈ℬ
𝑆∋𝑖
𝑥𝑖
𝑖∈𝑁
= 𝑤(𝑁 )
Substituting this in (3.90) gives
𝑤(𝑁 ) ≥
∑
𝑆∈ℬ
The game is balanced.
206
𝜆𝑆 𝑤(𝑆)
𝜆
(3.90)
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Chapter 4: Smooth Functions
4.1 Along the demand curve, price and quantity are related according to the equation
𝑝 = 10 − 𝑥
This is called the inverse demand function. Total revenue 𝑅(𝑥) (price times quantity)
is given by
𝑅(𝑥) = 𝑝𝑥
= (10 − 𝑥)𝑥
= 10𝑥 − 𝑥2
= 𝑓 (𝑥)
𝑔(𝑥) can be rewritten as
𝑔(𝑥) = 21 + 4(𝑥 − 3)
At 𝑥 = 3, the price is 7 but the marginal revenue of an additional unit is only 4. The
function 𝑔 decomposes (approximately) the total revenue into two components — the
revenue from the sale of 3 units (21 = 3 × 7) plus the marginal revenue from the sale
of additional units (4(𝑥 − 3)).
4.2 If your answer is 5 per cent, obtained by subtracting the inflation rate from the
growth rate of nominal GDP, you are implicitly using a linear approximation. To see
this, let
𝑝
𝑞
𝑑𝑝
𝑑𝑞
= price level at the beginning of the year
= real GDP at the beginning of the year
= change in prices during year
= change in output during year
We are told that nominal GDP at the end of the year, (𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞), equals 1.10
times nominal GDP at the beginning of the year, 𝑝𝑞. That is
(𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞) = 1.10𝑝𝑞
(4.42)
Furthermore, the price level at the end of the year, 𝑝 + 𝑑𝑝 equals 1.05 times the price
level of the start of year, 𝑝:
𝑝 + 𝑑𝑝 = 1.05𝑝
Substituting this in equation (4.38) yields
1.05𝑝(𝑞 + 𝑑𝑞) = 1.10𝑝𝑞
which can be solved to give
𝑑𝑞 = (
1.10
− 1)𝑞 = 0.0476
1.05
The growth rate of real GDP (𝑑𝑞/𝑞) is equal to 4.76 per cent.
207
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
To show how the estimate of 5 per cent involves a linear approximation, we expand the
expression for real GDP at the end of the year.
(𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞) = 𝑝𝑞 + 𝑝𝑑𝑞 + 𝑞𝑑𝑝 + 𝑑𝑝𝑑𝑞
Dividing by 𝑝𝑞
(𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞)
𝑑𝑞
𝑑𝑝 𝑑𝑝𝑑𝑞
=1+
+
+
𝑝𝑞
𝑞
𝑝
𝑝𝑞
The growth rate of nominal GDP is
(𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞) − 𝑝𝑞
(𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞)
=
−1
𝑝𝑞
𝑝𝑞
𝑑𝑞 𝑑𝑝 𝑑𝑝𝑑𝑞
=
+
+
𝑞
𝑑𝑝
𝑝𝑞
= Growth rate of output
+ Inflation rate
+ Error term
For small changes, the error term 𝑑𝑝𝑑𝑞/𝑝𝑞 is insignificant, and we can approximate the
growth rate of output according to the sum
Growth rate of nominal GDP = Growth rate of output + Inflation rate
This is a linear approximation since it approximates the function (𝑝 + 𝑑𝑝)(𝑞 + 𝑑𝑞) by
the linear function 𝑝𝑞 + 𝑝𝑑𝑞 + 𝑞𝑑𝑝. In effect, we are evaluating the change output at
the old prices, and the change in prices at the old output, and ignoring in interaction
between changes in prices and changes in quantities. The use of linear approximation
in growth rates is extremely common in practice.
4.3 From (4.2)
∥x∥ 𝜂(x) = 𝑓 (x0 + x) − 𝑓 (x0 ) − 𝑔(x)
and therefore
𝜂(x) =
𝑓 (x0 + x) − 𝑓 (x0 ) − 𝑔(x)
∥x∥
𝜂(x) → 0𝑌 as x → 0𝑋
can be expressed as
lim 𝜂(x) = 0𝑌
x→0𝑋
4.4 Suppose not. That is, there exist two linear maps such that
𝑓 (x0 + x) = 𝑓 (x0 ) + 𝑔1 (x) + ∥x∥ 𝜂1 (x)
𝑓 (x0 + x) = 𝑓 (x0 ) + 𝑔2 (x) + ∥x∥ 𝜂2 (x)
with
lim 𝜂𝑖 (x) = 0,
x→0
208
𝑖 = 1, 2
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
Subtracting we have
𝐿1 (x) − 𝐿2 (x) = ∥x∥ (𝜂1 (x) − 𝜂2 (x))
and
lim
x→0
𝑔1 (x) − 𝑔2 (x)
=0
∥x∥
Since 𝑔1 − 𝑔2 is linear, (4) implies that 𝑔1 (x) = 𝑔2 (x) for all x ∈ 𝑋.
To see this, we proceed by contradiction. Again, suppose not. That is, suppose there
exists some x ∈ 𝑋 such that
𝑔1 (x) ∕= 𝑔2 (x)
For this x, let
𝜂=
𝑔1 (x) − 𝑔2 (x)
∥x∥
By linearity,
𝑔1 (𝑡x) − 𝑔2 (𝑡x)
= 𝜂 for every ∀𝑡 > 0
∥𝑡x∥
and therefore
lim
𝑡→0
𝑔1 (𝑡x) − 𝑔2 (𝑡x)
= 𝜂 ∕= 0
∥𝑡x∥
which contradicts (4). Therefore 𝑔1 (x) = 𝑔2 (x) for all x ∈ 𝑋.
4.5 If 𝑓 : 𝑋 → 𝑌 is differentiable at x0 , then
𝑓 (x0 + x) = 𝑓 (x0 ) + 𝑔(x) + 𝜂(x) ∥x∥
where 𝜂(x) → 0𝑌 as x → 0𝑋 . Since 𝑔 is a continuous linear function, 𝑔(x) → 0𝑌 as
x → 0𝑋 . Therefore
lim 𝑓 (x0 + x) = lim 𝑓 (x0 ) + lim 𝑔(x) + lim 𝜂(x) ∥x∥
x→0
x→0
x→0
x→0
= 𝑓 (x0 )
𝑓 is continuous.
4.6
4.7
4.8 The approximation error at the point (2, 16) is
𝑓 (2, 16)
𝑔(2, 16)
Absolute error
Percentage error
Relative error
=8.0000
=11.3333
=-3.3333
=-41.6667
=-4.1667
By contrast, ℎ(2, 16) = 8 = 𝑓 (2, 16). Table 4.1 shows that ℎ gives a good approximation
to 𝑓 in the neighborhood of (2, 16).
209
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Table 4.1: Approximating the Cobb-Douglas function at (2, 16)
x
x0 + x
At their intersection:
(0.0, 0.0) (2.0, 16.0)
Approximation Error
Percentage Relative
𝑓 (x0 + x)
ℎ(x0 + x)
8.0000
8.0000
0.0000
NIL
Around the unit circle:
(1.0, 0.0) (3.0, 16.0)
(0.7, 0.7) (2.7, 16.7)
(0.0, 1.0) (2.0, 17.0)
(-0.7, 0.7) (1.3, 16.7)
(-1.0, 0.0) (1.0, 16.0)
(-0.7, -0.7) (1.3, 15.3)
(0.0, -1.0) (2.0, 15.0)
(0.7, -0.7) (2.7, 15.3)
9.1577
9.1083
8.3300
7.1196
6.3496
6.7119
7.6631
8.5867
9.3333
9.1785
8.3333
7.2929
6.6667
6.8215
7.6667
8.7071
-1.9177
-0.7712
-0.0406
-2.4342
-4.9934
-1.6323
-0.0466
-1.4018
-0.1756
-0.0702
-0.0034
-0.1733
-0.3171
-0.1096
-0.0036
-0.1204
Around a smaller circle:
(0.10, 0.00) (2.1, 16.0)
(0.07, 0.07) (2.1, 16.1)
(0.00, 0.10) (2.0, 16.1)
(-0.07, 0.07) (1.9, 16.1)
(-0.10, 0.00) (1.9, 16.0)
(-0.07, -0.07) (1.9, 15.9)
(0.00, -0.10) (2.0, 15.9)
(0.07, -0.07) (2.1, 15.9)
8.1312
8.1170
8.0333
7.9279
7.8644
7.8813
7.9666
8.0693
8.1333
8.1179
8.0333
7.9293
7.8667
7.8821
7.9667
8.0707
-0.0266
-0.0103
-0.0004
-0.0181
-0.0291
-0.0110
-0.0004
-0.0171
-0.0216
-0.0083
-0.0003
-0.0143
-0.0229
-0.0087
-0.0003
-0.0138
Parallel to the
(-2.0, 0.0)
(-1.0, 0.0)
(-0.5, 0.0)
(-0.1, 0.0)
(0.0, 0.0)
(0.1, 0.0)
(0.5, 0.0)
(1.0, 0.0)
(2.0, 0.0)
(4.0, 0.0)
x1 axis:
(0.0, 16.0)
(1.0, 16.0)
(1.5, 16.0)
(1.9, 16.0)
(2.0, 16.0)
(2.1, 16.0)
(2.5, 16.0)
(3.0, 16.0)
(4.0, 16.0)
(6.0, 16.0)
0.0000
6.3496
7.2685
7.8644
8.0000
8.1312
8.6177
9.1577
10.0794
11.5380
5.3333
6.6667
7.3333
7.8667
8.0000
8.1333
8.6667
9.3333
10.6667
13.3333
NIL
-4.9934
-0.8922
-0.0291
0.0000
-0.0266
-0.5678
-1.9177
-5.8267
-15.5602
-2.6667
-0.3171
-0.1297
-0.0229
NIL
-0.0216
-0.0979
-0.1756
-0.2936
-0.4488
Parallel to the
(0.0, -4.0)
(0.0, -2.0)
(0.0, -1.0)
(0.0, -0.5)
(0.0, -0.1)
(0.0, 0.0)
(0.0, 0.1)
(0.0, 0.5)
(0.0, 1.0)
(0.0, 2.0)
(0.0, 4.0)
x2 axis:
(2.0, 12.0)
(2.0, 14.0)
(2.0, 15.0)
(2.0, 15.5)
(2.0, 15.9)
(2.0, 16.0)
(2.0, 16.1)
(2.0, 16.5)
(2.0, 17.0)
(2.0, 18.0)
(2.0, 20.0)
6.6039
7.3186
7.6631
7.8325
7.9666
8.0000
8.0333
8.1658
8.3300
8.6535
9.2832
6.6667
7.3333
7.6667
7.8333
7.9667
8.0000
8.0333
8.1667
8.3333
8.6667
9.3333
-0.9511
-0.2012
-0.0466
-0.0112
-0.0004
0.0000
-0.0004
-0.0105
-0.0406
-0.1522
-0.5403
-0.0157
-0.0074
-0.0036
-0.0018
-0.0003
NIL
-0.0003
-0.0017
-0.0034
-0.0066
-0.0125
210
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
4.9 To show that 𝑟 is nonlinear, consider
𝑟((1, 2, 3, 4, 5) + (66, 55, 75, 81, 63)) = 𝑟(67, 57, 78, 85, 68)
= (85, 78, 68, 67, 58)
∕= (5, 4, 3, 2, 1) + (81, 75, 67, 63, 55)
To show that 𝑟 is differentiable, consider a particular point, say (66, 55, 75, 81, 63).
Consider the permutation 𝑔 : ℜ𝑛 → ℜ𝑛 defined by
𝑔(𝑥1 , 𝑥2 , . . . , 𝑥5 ) = (𝑥4 , 𝑥3 , 𝑥1 , 𝑥5 , 𝑥2 )
𝑔 is linear and
𝑔(66, 55, 75, 81, 63) = (81, 75, 67, 63, 55) = 𝑟(66, 55, 75, 81, 63)
Furthermore, 𝑔(x) = 𝑟(x) for all x close to (66, 55, 75, 81, 63). Hence, 𝑔(x) approximates 𝑟(x) in a neighborhood of (66, 55, 75, 81, 63) and so 𝑟 is differentiable at (66, 55, 75, 81, 63).
The choice of (66, 55, 75, 81, 63) was arbitrary, and the argument applies at every x such
that x𝑖 ∕= x𝑗 .
In summary, each application of 𝑟 involves a permutation, although the particular
permutation depends upon the argument, x. However, for any given x0 with x0𝑖 ∕=
x0𝑗 , the same permutation applies to all x in the neighborhood of x0 , so that the
permutation (which is a linear function) is the derivative of 𝑟 at x0 .
4.10 Using (4.3), we have for any x
𝑓 (x0 + 𝑡x) − 𝑓 (x0 ) − 𝐷𝑓 [x0 ](𝑡x)
=0
𝑡x→0
∥𝑡x∥
lim
or
𝑓 (x0 + 𝑡x) − 𝑓 (x0 ) − 𝑡𝐷𝑓 [x0 ](x)
=0
𝑡→0
𝑡 ∥x∥
lim
For ∥x∥ = 1, this implies
𝑡𝐷𝑓 [x0 ](x)
𝑓 (x0 + 𝑡x) − 𝑓 (x0 )
=
𝑡→0
𝑡
𝑡
lim
that is
0
0
⃗ x 𝑓 [x0 ] = lim 𝑓 (x + 𝑡x) − 𝑓 (x ) = 𝐷𝑓 [x0 ](x)
𝐷
𝑡→0
𝑡
4.11 By direct calculation
ℎ(x0𝑖 + 𝑡) − ℎ(x0𝑖 )
𝑡→0
𝑡
𝑓 (x01 , x02 , . . . , x0𝑖 + 𝑡, . . . , x0𝑛 ) − 𝑓 (x01 , x02 , . . . , x0𝑖 , . . . , x0𝑛 )
= lim
𝑡→0
𝑡
𝑓 (x0 + 𝑡e𝑖 ) − 𝑓 (x0 )
= lim
𝑡→0
𝑡
0
⃗
= 𝐷e𝑖 𝑓 [x ]
𝐷𝑥𝑖 𝑓 [x0 ] = lim
211
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
4.12 Define the function
(
)
ℎ(𝑡) = 𝑓 (8, 8) + 𝑡(1, 1)
= (8 + 𝑡)1/3 (8 + 𝑡)2/3
=8+𝑡
The directional derivative of 𝑓 in the direction (1, 1) is
⃗ (1,1) 𝑓 (8, 8) = lim ℎ(𝑡) − ℎ(0)
𝐷
𝑡→0
𝑡
=1
Generalization of this example reveals that the directional derivative of 𝑓 along any
⃗ x0 𝑓 [x0 ] = 1 for every x0 . Economically,
ray through the origin equals 1, that is 𝐷
this means that increasing inputs in the same proportions leads to a proportionate
increase in output, which is the property of constant returns to scale. We will study
this property of homogeneity is some depth in Section 4.6.
4.13 Let p = ∇𝑓 (x0 ). Each component of p represents the action of the derivative on
an element of the standard basis {e1 , e2 , . . . , e𝑛 }(see proof of Theorem 3.4)
𝑝𝑖 = 𝐷𝑓 [x0 ](e𝑖 )
Since ∥e𝑖 ∥ = 1, 𝐷𝑓 [x0 ](e𝑖 ) is the directional derivative at x0 in the direction e𝑖 (Exercise 4.10)
⃗ e𝑖 (x0 )
𝑝𝑖 = 𝐷𝑓 [x0 ](e𝑖 ) = 𝐷
But this is simply the 𝑖 partial derivative of 𝑓 (Exercise 4.11)
⃗ e𝑖 (x0 ) = 𝐷𝑥𝑖 𝑓 (x0 )
𝑝𝑖 = 𝐷𝑓 [x0 ](e𝑖 ) = 𝐷
4.14 Using the standard inner product on ℜ𝑛 (Example 3.26) and Exercise 4.13
< ∇𝑓 (x0 ), x >=
𝑛
∑
𝐷𝑥𝑖 𝑓 [x0 ]x𝑖 = 𝐷𝑓 [x0 ](x)
𝑖=1
4.15 Since 𝑓 is differentiable
𝑓 (x1 + 𝑡x) = 𝑓 (x1 ) + ∇𝑓 (x0 )𝑇 𝑡x + 𝜂(𝑡x) ∥𝑡x∥
with 𝜂(𝑡x) → 0 as 𝑡x → 0. If 𝑓 is increasing, 𝑓 (x1 + 𝑡x) ≥ 𝑓 (x1 ) for every x ≥ 0 and
𝑡 > 0. Therefore
∇𝑓 (x0 )𝑇 𝑡x + 𝜂(𝑡x) ∥𝑡x∥ = 𝑡∇𝑓 (x0 )𝑇 x + 𝑡𝜂(𝑡x) ∥x∥ ≥ 0
Dividing by 𝑡 and letting 𝑡 → 0
∇𝑓 (x0 )𝑇 x ≥ 0 for every x ≥ 0
In particular, this applies for unit vectors e𝑖 . Therefore
𝐷𝑥𝑖 𝑓 (x1 ) ≥ 0,
𝑖 = 1, 2, . . . , 𝑛
212
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
⃗ x 𝑓 (x0 ) measures the rate of increase of 𝑓 in the di4.16 The directional derivative 𝐷
rection x. Using Exercises 4.10, 4.14 and 3.61, assuming x has unit norm,
⃗ x 𝑓 (x0 ) = 𝐷𝑓 [x0 ](x) =< ∇𝑓 (x0 ), x >≤ ∇𝑓 (x0 )
𝐷
This bound is attained when x = ∇𝑓 (x0 )/ ∇𝑓 (x0 ) since
∇𝑓 (x0 )2
∇𝑓 (x0 )
0
0
⃗
𝐷x 𝑓 (x ) =< ∇𝑓 (x ),
>=
= ∇𝑓 (x0 )
0
0
∥∇𝑓 (x )∥
∥∇𝑓 (x )∥
The directional derivative is maximized when ∇𝑓 (x0 ) and x are aligned.
4.17 Using Exercise 4.14
𝐻 = { x ∈ 𝑋 :< ∇𝑓 [x0 ], x >= 0 }
4.18 Assume each 𝑓𝑗 is differentiable at x0 and let
𝐷𝑓 [x0 ] = (𝐷𝑓1 [x0 ], 𝐷𝑓2 [x0 ], . . . , 𝐷𝑓𝑚 [x0 ])
Then
⎛
⎜
⎜
f (x0 + x) − f [x0 ] − 𝐷f [x0 ]x = ⎜
⎝
𝑓1 (x0 + x) − 𝑓1 [x0 ] − 𝐷𝑓1 [x0 ]x
𝑓2 (x0 + x) − 𝑓2 [x0 ] − 𝐷𝑓2 [x0 ]x
..
.
⎞
⎟
⎟
⎟
⎠
𝑓𝑚 (x0 + x) − 𝑓𝑚 (x0 ) − 𝐷𝑓𝑚 [x0 ]x
and
𝑓𝑗 (x0 + x) − 𝑓𝑗 (x0 ) − 𝐷𝑓𝑗 [x0 ]x
→ 0 as ∥x∥ → 0
∥x∥
for every 𝑗 implies
f (x0 + x) − f (x0 ) − 𝐷f [x0 ](x)
→ 0 as ∥x∥ → 0
∥x∥
(4.43)
Therefore f is differentiable with derivative
𝐷f [x0 ] = 𝐿 = (𝐷𝑓1 (x0 ), 𝐷𝑓2 [x0 ], . . . , 𝐷𝑓𝑚 [x0 ])
Each 𝐷𝑓𝑗 [x0 ] is represented by the gradient ∇𝑓𝑗 [x0 ] (Exercise 4.13) and therefore
𝐷𝑓 [x0 ] is represented by the matrix
⎛
⎞ ⎛
⎞
∇𝑓1 [x0 ]
𝐷𝑥1 𝑓1 [x0 ] 𝐷𝑥2 𝑓1 [x0 ] . . . 𝐷𝑥𝑛 𝑓1 [x0 ]
⎜ ∇𝑓2 [x0 ] ⎟ ⎜ 𝐷𝑥1 𝑓2 [x0 ] 𝐷𝑥2 𝑓2 [x0 ] . . . 𝐷𝑥𝑛 𝑓2 [x0 ] ⎟
⎜
⎟ ⎜
⎟
𝐽 =⎜
⎟=⎜
⎟
..
..
..
..
..
⎝
⎠ ⎝
⎠
.
.
.
.
.
∇𝑓𝑚 [x0 ]
𝐷𝑥1 𝑓𝑚 [x0 ] 𝐷𝑥2 𝑓𝑚 [x0 ] . . .
𝐷𝑥𝑛 𝑓𝑚 [x0 ]
Conversely, if f is differentiable, its derivative 𝐷f [x0 ] : ℜ𝑛 → ℜ𝑚 be decomposed into
𝑚 component 𝐷𝑓1 [x0 ], 𝐷𝑓2 [x0 ], . . . , 𝐷𝑓𝑚 [x0 ] functionals such that
⎞
⎛
𝑓1 (x0 + x) − 𝑓1 (x0 ) − 𝐷𝑓1 [x0 ]x
⎜ 𝑓2 (x0 + x) − 𝑓2 (x0 ) − 𝐷𝑓2 [x0 ]x ⎟
⎟
⎜
f (x0 + x) − f (x0 ) − 𝐷f [x0 ]x = ⎜
⎟
..
⎠
⎝
.
𝑓𝑚 (x0 + x) − 𝑓𝑚 (x0 ) − 𝐷𝑓𝑚 [x0 ]x
213
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
(4.43) implies that
𝑓𝑗 (x0 + x) − 𝑓𝑗 (x0 ) − 𝐷𝑓𝑗 [x0 ]x
→ 0 as ∥x∥ → 0
∥x∥
for every 𝑗.
4.19 If 𝐷𝑓 [x0 ] has full rank, then it is one-to-one (Exercise 3.25) and onto (Exercise
3.16). Therefore 𝐷𝑓 [x0 ] is nonsingular. The Jacobian 𝐽𝑓 (x0 ) represents 𝐷𝑓 [x0 ], which
is therefore nonsingular if and only if det 𝐽𝑓 (x0 ) ∕= 0.
4.20 When 𝑓 is a functional, rank 𝑋 ≥ 𝑟𝑎𝑛𝑘𝑌 = 1. If 𝐷𝑓 [x0 ] has full rank (1), then
𝐷𝑓 [x0 ] maps 𝑋 onto ℜ (Exercise 3.16), which requires that ∇𝑓 (x0 ) ∕= 0.
4.21
4.23 If 𝑓 : 𝑋 × 𝑌 → 𝑍 is bilinear
𝑓 (x0 + x, y0 + y) = 𝑓 (x0 , y0 ) + 𝑓 (x0 , y) + 𝑓 (x, y0 ) + 𝑓 (x, y)
Defining
𝐷𝑓 [x0 , y0 ](x, y) = 𝑓 (x0 , y) + 𝑓 (x, y0 )
𝑓 (x0 + x, y0 + y) = 𝑓 (x0 , y0 ) + 𝐷𝑓 [x0 , y0 ](x, y) + 𝑓 (x, y)
Since 𝑓 is continuous, there exists 𝑀 such that
𝑓 (x, y) ≤ 𝑀 ∥x∥ ∥y∥
for every x ∈ 𝑋 and y ∈ 𝑌
and therefore
NOTE This is not quite right. See Spivak p. 23. Avez (Tilburg) has
(
)2
∥𝑓 (x, y)∥ ≤ 𝑀 ∥x∥ ∥y∥ ≤ 𝑀 ∥x∥ + ∥y∥ ≤ 𝑀 ∥(x, y)∥2
which implies that
∥𝑓 (x, y)∥
→ 0 as (x, y) → 0
∥(x, y)∥
lim
x1 ,x2 →0
𝑓 (x1 , x2 )
=0
∥x1 ∥ ∥x2 ∥
Therefore 𝑓 is differentiable with derivative
𝐷𝑓 [x0 , y0 ] = 𝑓 (x0 , y) + 𝑓 (x, y0 )
4.24 Define 𝑚 : ℜ2 → ℜ by
𝑚(𝑧1 , 𝑧2 ) = 𝑧1 𝑧2
Then 𝑚 is bilinear (Example 3.23) and continuous (Exercise 2.79) and therefore differentiable (Exercise 4.23) with derivative
𝐷𝑚[𝑧1 , z2 ] = 𝑚(z1 , ⋅) + 𝑚(⋅, z2 )
214
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
The function 𝑓 𝑔 is the composition of 𝑚 with 𝑓 and 𝑔,
𝑓 𝑔(x, y) = 𝑚(𝑓 (x), 𝑔(y))
By the chain rule, the derivative of 𝑓 𝑔 is
(
)
𝐷𝑓 𝑔[x, y] = 𝐷𝑚[𝑧1 , z2 ] 𝐷𝑓 [x], 𝐷𝑔[x]
= 𝑚(z1 , 𝐷𝑔[y]) + 𝑚(𝐷𝑓 [x], z2 )
= 𝑓 [x]𝐷𝑔[y]) + 𝑔(y)𝐷𝑓 [x]
where z1 = 𝑓 (x) and z2 = 𝑔(y).
4.25 For 𝑛 = 1, 𝑓 (𝑥) = 𝑥 is linear and therefore (Exercise 4.6) 𝐷𝑓 [𝑥] = 1 (𝐷𝑓 [𝑥](𝑥) =
𝑥). For 𝑛 = 2, let 𝑔(𝑥) = 𝑥 so that 𝑓 (𝑥) = 𝑥2 = 𝑔(𝑥)𝑔(𝑥). Using the product rule
𝐷𝑓 [x] = 𝑔(𝑥)𝐷𝑔(𝑥) + 𝑔(𝑥)𝐷𝑔(𝑥) = 2𝑥
Now assume it is true for 𝑛 − 1 and let 𝑔(𝑥) = 𝑥𝑛−1 , so that 𝑓 (x) = 𝑥𝑔(𝑥). By the
product rule
𝐷𝑓 [x] = 𝑥𝐷𝑔[𝑥] + 𝑔(𝑥)1
By assumption 𝐷𝑔[𝑥] = (𝑛 − 1)𝑥𝑛−2 and therefore
𝐷𝑓 [x] = 𝑥𝐷𝑔[𝑥] + 𝑔(𝑥)1 = 𝑥(𝑛 − 1)𝑥𝑛−2 + 𝑥𝑛−1 = 𝑛𝑥𝑛−1
4.26 Using the product rule (Exercise 4.24)
𝐷𝑥 𝑅(𝑥0 ) = 𝑓 (𝑥0 )𝐷𝑥 𝑥 + 𝑥0 𝐷𝑥 𝑓 (𝑥0 )
= 𝑝0 + 𝑥0 𝐷𝑥 𝑓 (𝑥0 )
where 𝑝0 = 𝑓 (𝑥0 ). Marginal revenue equals one unit at the current price minus the
reduction in revenue caused by reducing the price on existing sales.
(
)−1
4.27 Fix some x0 and let 𝑔 = 𝐷𝑓 [x0 ]
. Let y0 = 𝑓 (x0 ). For any y, let x =
−1 0
−1 0
0
𝑓 (y + y) − 𝑓 (y ) so that 𝑔(y) = 𝑓 (x + x) − 𝑓 (x) and
−1 0
(
)
𝑓 (y + y) − 𝑓 −1 (y0 ) − 𝑔(y) = (x − 𝑔 𝑓 (x0 + x) − 𝑓 (x0 )) Since 𝑓 is differentiable at x0 with 𝐷𝑓 [x0 ] = 𝑔 −1
𝑓 (x0 + x) − 𝑓 (x0 ) = 𝑔 −1 (x) + 𝜂(x) ∥x∥
Substituting
(
)
−1 0
𝑓 (y + y) − 𝑓 −1 (y0 ) − 𝑔(y) = x − 𝑔 𝑔 −1 (x) + 𝜂(x) ∥x∥ (
)
= 𝑔 𝜂(x) ∥x∥ (
)
= ∥x∥ 𝑔 𝜂(x) (
)
with 𝜂(x) → 0𝑌 as x → 0𝑋 . Since 𝑓 −1 and 𝑔 are continuous, 𝑔 𝜂(x) → 0𝑋 as y → 0.
)−1
(
.
We conclude that 𝑓 −1 is differentiable with derivative 𝑔 = 𝐷𝑓 [x0 ]
215
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
4.28
log 𝑓 (𝑥) = 𝑥 log 𝑎
and therefore
(
)
𝑓 (𝑥) = exp log 𝑓 (𝑥) = 𝑒𝑥 log 𝑎
By the Chain Rule, 𝑓 is differentiable with derivative
𝐷𝑥 𝑓 (𝑥) = 𝑒𝑥 log 𝑎 log 𝑎 = 𝑎𝑥 log 𝑎
4.29 By Exercise 4.15, the function 𝑔 : ℜ → ℜ defined by 𝑔(𝑦) =
tiable with derivative
1
𝐷𝑦 𝑔[𝑦] = −𝑦 −2 = − 2
𝑦
1
𝑦
= 𝑦 −1 is differen-
Applying the Chain Rule, 1/𝑓 = 𝑔 ∘ 𝑓 is differentiable with derivative
1
𝐷𝑓 [x]
𝐷 [x] = 𝐷𝑔[𝑓 (x)]𝐷𝑓 [x] = − (
)2
𝑓
𝑓 (x)
4.30 Applying the Product Rule to 𝑓 × (1/𝑔)
1
1
𝑓
𝐷𝑓 [x]
𝐷 [x, y] = 𝑓 (x)𝐷 [y] +
𝑔
𝑔
𝑔(y)
𝐷𝑔[y]
1
= −𝑓 (x) (
𝐷𝑓 [x]
)2 +
𝑔(y)
𝑔(y)
𝑔(y)𝐷𝑓 [x] − 𝑓 (x)𝐷𝑔[y]
=
(
)2
𝑔(y)
4.31 In the particular case where
1/3 2/3
𝑓 (x1 , x2 ) = x1 x2
the partial derivatives at the point (8, 8) are
𝐷𝑥1 𝑓 [(8, 8)] =
2
1
and 𝐷𝑥2 𝐹 [(8, 8)] =
3
3
4.32 The partial derivatives of 𝑓 (x) are from Table 4.4
𝐷𝑥𝑖 𝑓 [x] = 𝑥𝑎1 1 𝑥𝑎2 2 . . . 𝑎𝑖 𝑥𝑖𝑎𝑖 −1 . . . 𝑥𝑎𝑛𝑛
= 𝑎𝑖
so that the gradient is
(
∇𝑓 (x) =
𝑓 (x)
𝑥𝑖
𝑎1 𝑎2
𝑎𝑛
, ,...,
𝑥1 𝑥2
𝑥𝑛
)
𝑓 (x)
4.33 Applying the chain rule (Exercise 4.22) to general power function (Example 4.15),
the partial derivatives of the CES function are
𝐷𝑥𝑖 𝑓 [x] =
1
1
−1
(𝑎1 𝑥𝜌1 + 𝑎2 𝑥𝜌2 + ⋅ ⋅ ⋅ + 𝑎𝑛 𝑥𝜌𝑛 ) 𝜌 𝑎𝑖 𝜌𝑥𝜌−1
𝑖
𝜌
= 𝑎𝑖 𝑥𝜌−1
(𝑎1 𝑥𝜌1 + 𝑎2 𝑥𝜌2 + ⋅ ⋅ ⋅ + 𝑎𝑛 𝑥𝜌𝑛 )
𝑖
(
)1−𝜌
𝑓 (x)
= 𝑎𝑖
𝑥𝑖
216
1−𝜌
𝜌
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
4.34 Define
ℎ(𝑥) = 𝑓 (𝑥) −
𝑓 (𝑏) − 𝑓 (𝑎)
(𝑥 − 𝑎)
𝑏−𝑎
Then ℎ is continuous on [𝑎, 𝑏] and differentiable on (𝑎, 𝑏) with
ℎ(𝑏) = 𝑓 (𝑏) −
𝑓 (𝑏) − 𝑓 (𝑎)
(𝑏 − 𝑎)𝑓 (𝑎) = ℎ(𝑎)
𝑏−𝑎
By Rolle’s theorem (Exercise 5.8), there exists 𝑥 ∈ (𝑎, 𝑏) such that
ℎ′ (𝑥) = 𝑓 ′ (𝑥) −
𝑓 (𝑏) − 𝑓 (𝑎)
=0
𝑏−𝑎
4.35 Assume ∇𝑓 (x) ≥ 0 for every x ∈ 𝑋. By the mean value theorem, for any x2 ≥ x1
in 𝑋, there exists x̄ ∈ (x1 , x2 ) such that
𝑓 (x2 ) = 𝑓 (x1 ) + 𝐷𝑓 [x̄](x2 − x1 )
Using (4.6)
𝑓 (x2 ) = 𝑓 (x1 ) +
𝑛
∑
𝐷𝑥𝑖 𝑓 (x̄)(𝑥2𝑖 − 𝑥1𝑖 )
(4.44)
𝑖=1
∇𝑓 (x̄) ≥ 0 and x2 ≥ x1 implies that
𝑛
∑
𝐷𝑥𝑖 𝑓 (x̄)(𝑥2𝑖 − 𝑥1𝑖 ) ≥ 0
𝑖=1
and therefore 𝑓 (x2 ) ≥ 𝑓 (x1 ). 𝑓 is increasing. The converse was established in Exercise
4.15
4.36 ∇𝑓 (x̄) > 0 and x2 ≥ x1 implies that
𝑛
∑
𝐷𝑥𝑖 𝑓 (x̄)(𝑥2𝑖 − 𝑥1𝑖 ) > 0
𝑖=1
Substituting in (4.44)
𝑓 (x2 ) = 𝑓 (x1 ) +
𝑛
∑
𝐷𝑥𝑖 𝑓 (x̄)(𝑥2𝑖 − 𝑥1𝑖 ) > 𝑓 (x1 )
𝑖=1
𝑓 is strictly increasing.
4.37 Differentiability implies the existence of the gradient and hence the partial derivatives of 𝑓 (Exercise 4.13). Continuity of 𝐷𝑓 [x] implies the continuity of the partial
derivatives.
To prove the converse, choose some x0 ∈ 𝑆 and define for the partial functions
ℎ𝑖 (𝑡) = 𝑓 (𝑥01 , 𝑥02 , . . . , 𝑥0𝑖−1 , 𝑡, 𝑥0𝑖+1 + 𝑥𝑖+1 , . . . , 𝑥0𝑛 + 𝑥𝑛 )
𝑖 = 1, 2, . . . , 𝑛
so that ℎ′𝑖 (𝑡) = 𝐷𝑥𝑖 𝑓 (x𝑖 ) where x𝑖 = (𝑥01 , 𝑥02 , . . . , 𝑥0𝑖 , 𝑡, 𝑥0𝑖+1 + 𝑥𝑖+1 , . . . , 𝑥0𝑛 + 𝑥𝑛 ). Further, ℎ1 (𝑥01 + 𝑥1 ) = 𝑓 (x0 + x), ℎ𝑛 (𝑥0𝑛 ) = 𝑓 (x0 ), and ℎ𝑖 (𝑥0𝑖 + 𝑥𝑖 ) = ℎ𝑖−1 (𝑥0𝑖 ) so that
𝑓 (x0 + x) − 𝑓 (x0 ) =
𝑛
∑
(
)
ℎ𝑖 (𝑥0𝑖 + 𝑥𝑖 ) − ℎ𝑖 (𝑥0𝑖 )
𝑖=1
217
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
By the mean value theorem, there exists, for each 𝑖, 𝑡¯𝑖 between 𝑥0𝑖 + 𝑥𝑖 and 𝑥𝑖 such
that
ℎ𝑖 (𝑥0𝑖 + 𝑥𝑖 ) − ℎ𝑖 (𝑥𝑖 ) = 𝐷𝑥𝑖 𝑓 (x̄𝑖 )𝑥𝑖
where x̄𝑖 = (𝑥01 , 𝑥02 , . . . , 𝑥0𝑖 , 𝑡¯, 𝑥0𝑖+1 + 𝑥𝑖+1 , . . . , 𝑥0𝑛 + 𝑥𝑛 ). Therefore
𝑓 (x0 + x) − 𝑓 (x0 ) =
𝑛
∑
𝐷𝑥𝑖 𝑓 (x̄𝑖 )𝑥𝑖
𝑖=1
Define the linear functional
𝑔(x) =
𝑛
∑
𝐷𝑥𝑖 𝑓 (x0 )𝑥𝑖
𝑖=1
Then
𝑓 (x0 + x) − 𝑓 (x0 ) − 𝑔(x) =
𝑛 (
)
∑
𝐷𝑥𝑖 𝑓 (x̄𝑖 ) − 𝐷𝑥𝑖 𝑓 (x0 ) 𝑥𝑖
𝑖=1
and
𝑛
∑
𝑓 (x0 + x) − 𝑓 (x0 ) − 𝑔(x) ≤
(𝐷𝑥𝑖 𝑓 (x̄𝑖 ) − 𝐷𝑥𝑖 𝑓 (x0 ) ∣𝑥𝑖 ∣
𝑖=1
so that
𝑛
𝑓 (x0 + x) − 𝑓 (x0 ) − 𝑔(x) ∑
(𝐷𝑥𝑖 𝑓 (x̄𝑖 ) − 𝐷𝑥𝑖 𝑓 (x0 ) ∣𝑥𝑖 ∣
≤
lim
x→0
∥x∥
∥x∥
𝑖=1
≤
𝑛
∑
(𝐷𝑥𝑖 𝑓 (x̄𝑖 ) − 𝐷𝑥𝑖 𝑓 (x0 )
𝑖=1
=0
since the partial derivatives 𝐷𝑥𝑖 𝑓 (x) are continuous. Therefore 𝑓 is differentiable with
derivative
𝑔(x) =
𝑛
∑
𝐷𝑥𝑖 𝑓 [x0 ]𝑥𝑖
𝑖=1
4.38 For every x1 , x2 ∈ 𝑆
∥𝑓 (x1 ) − 𝑓 (x2 )∥ ≤
sup
x∈[x1 ,x2 ]
∥𝐷𝑓 (x)∥ ∥x1 − x2 ∥
by Corollary 4.1.1. If 𝐷𝑓 [x] = 0 for every x ∈ 𝑋, then
∥𝑓 (x1 ) − 𝑓 (x2 )∥ = 0
which implies that 𝑓 (x1 ) = 𝑓 (x2 ). We conclude that 𝑓 is constant on 𝑆. The converse
was established in Exercise 4.7.
4.39 For any x0 ∈ 𝑆, let 𝐵 ⊆ 𝑆 be an open ball of radius of radius 𝑟 centered on x0 .
Applying the mean value inequality (Corollary 4.1.1) to 𝑓𝑛 − 𝑓𝑚 we have
(
)
𝑓𝑛 (x) − 𝑓𝑚 (x) − 𝑓𝑛 (x0 ) − 𝑓𝑚 (x0 ) ≤ sup ∥𝐷𝑓𝑛 [x̄] − 𝐷𝑓𝑚 [x̄]∥ ∥x − x0 ∥
x̄∈𝐵
≤ 𝑟 sup ∥𝐷𝑓𝑛 [x̄] − 𝐷𝑓𝑚 [x̄]∥
x̄∈𝐵
218
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
for every x ∈ 𝐵. Given 𝜖 > 0, there exists 𝑁 such that for every 𝑚, 𝑛 > 𝑁
∥𝐷𝑓𝑛 − 𝐷𝑓𝑚 ∥ < 𝜖/𝑟 and ∥𝐷𝑓𝑛 − 𝑔∥ < 𝜖
Letting 𝑚 → ∞
(
)
𝑓𝑛 (x) − 𝑓 (x) − 𝑓𝑛 (x0 ) − 𝑓 (x0 ) ≤ 𝜖 ∥x − x0 ∥
(4.45)
for 𝑛 ≥ 𝑁 and x ∈ 𝐵. Applying the mean value inequality to 𝑓𝑛 , there exists 𝛿 such
that
∥𝑓𝑛 (x) − 𝑓𝑛 (x0 )∥ ≤ 𝜖 ∥x − x0 ∥
(4.46)
Using (4.45) and (4.46) and the fact that ∥𝐷𝑓𝑛 − 𝑔∥ < 𝜖 we deduce that
∥𝑓 (x) − 𝑓 (x0 ) − 𝑔(x0 )∥ ≤ 3𝜖 ∥x − x0 ∥
𝑓 is differentiable with derivative 𝑔.
4.40 Define
𝑓 (𝑥) =
𝑒𝑥+𝑦
𝑒𝑦
By the chain rule (Exercise 4.22)
𝑓 ′ (𝑥) =
𝑒𝑥+𝑦
= 𝑓 (𝑥)
𝑒𝑦
which implies (Example 4.21) that
𝑓 (𝑥) =
𝑒𝑥+𝑦
= 𝐴𝑒𝑥 for some 𝐴 ∈ ℜ
𝑒𝑦
Evaluating at 𝑥 = 0 using 𝑒0 = 1 gives
𝑓 (0) =
𝑒𝑦
= 𝐴 for some 𝐴 ∈ ℜ
𝑒𝑦
so that
𝑓 (𝑥) =
𝑒𝑦 𝑥
𝑒𝑥+𝑦
=
𝑒
𝑒𝑦
𝑒𝑦
which implies that
𝑒𝑥+𝑦 = 𝑒𝑥 𝑒𝑦
4.41 If 𝑓 = 𝐴𝑥𝑎 , 𝑓 ′ (𝑥) = 𝑎𝐴𝑥𝑎−1 and
𝐸(𝑥) = 𝑥
𝑎𝐴𝑥𝑎−1
=𝑎
𝐴𝑥𝑎
To show that this is the only function with constant elasticity, define
𝑔(𝑥) =
𝑓 (𝑥)
𝑥𝑎
𝑔 is differentiable (Exercise 4.30) with derivative
𝑔 ′ (𝑥) =
𝑥𝑎 𝑓 ′ (𝑥) − 𝑓 (𝑥)𝑎𝑥𝑎−1
𝑥𝑓 ′ (𝑥) − 𝑎𝑓 (𝑥)
=
𝑥2𝑎
𝑥𝑎+1
219
(4.47)
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
If
𝐸(𝑥) = 𝑥
𝑓 ′ (𝑥)
=𝑎
𝑓 (𝑥)
then
𝑥𝑓 ′ (𝑥) = 𝑎𝑓 (𝑥)
Substituting in (4.47)
𝑔 ′ (𝑥) =
𝑥𝑓 ′ (𝑥) − 𝑎𝑓 (𝑥)
= 0 for every 𝑥 ∈ ℜ
𝑥𝑎+1
Therefore, 𝑔 is a constant function (Exercise 4.38). That is, there exists 𝐴 ∈ ℜ such
that
𝑔(𝑥) =
𝑓 (𝑥)
= 𝐴 or 𝑓 (𝑥) = 𝐴𝑥𝑎
𝑥𝑎
4.42 Define 𝑔 : 𝑆 → 𝑌 by
𝑔(x) = 𝑓 (x) − 𝐷𝑓 [x0 ](x)
𝑔 is differentiable with
𝐷𝑔[x] = 𝐷𝑓 [x] − 𝐷𝑓 [x0 ]
Applying Corollary 4.1.1 to 𝑔,
∥𝑔(x1 ) − 𝑔(x2 )∥ ≤
sup
x∈[x1 ,x2 ]
∥𝐷𝑔[x]∥ ∥x1 − x2 ∥
for every x1 , x2 ∈ 𝑆. Substituting for 𝑔 and 𝐷𝑔
∥𝑓 (x1 ) − 𝐷𝑓 [x0 ](x1 ) − 𝑓 (x2 ) + 𝐷𝑓 [x0 ](x2 )∥ = ∥𝑓 (x1 ) − 𝑓 (x2 ) − 𝐷𝑓 [x0 ](x1 − x2 )∥
≤
sup
x∈[x1 ,x2 ]
∥𝐷𝑓 [x] − 𝐷𝑓 [x0 ]∥ ∥x1 − x2 ∥
4.43 Since 𝐷𝑓 is continuous, there exists a neighborhood 𝑆 of x0 such that
∥𝐷𝑓 [x] − 𝐷𝑓 [x0 ]∥ < 𝜖 for every x ∈ 𝑆
and therefore for every x1 , x2 ∈ 𝑆
sup
x∈[x1 ,x2 ]
∥𝐷𝑓 [x] − 𝐷𝑓 [x0 ]∥ < 𝜖
By the previous exercise (Exercise 4.42)
∥𝑓 (x1 ) − 𝑓 (x2 ) − 𝐷𝑓 [x0 ](x1 − x2 )∥ ≤ 𝜖 ∥x1 − x2 ∥
4.44 By the previous exercise (Exercise 4.43), there exists a neighborhood such that
∥𝑓 (x1 ) − 𝑓 (x2 ) − 𝐷𝑓 [x0 ](x1 − x2 )∥ ≤ 𝜖 ∥x1 − x2 ∥
The Triangle Inequality (Exercise 1.200) implies
∥𝑓 (x1 ) − 𝑓 (x2 )∥ − ∥𝐷𝑓 [x0 ](x1 − x2 )∥ ≤ ∥𝑓 (x1 ) − 𝑓 (x2 ) − 𝐷𝑓 [x0 ](x1 − x2 )∥ ≤ 𝜖 ∥x1 − x2 ∥
and therefore
∥𝑓 (x1 ) − 𝑓 (x2 )∥ ≤ ∥𝐷𝑓 [x0 ](x1 − x2 )∥ + 𝜖 ∥x1 − x2 ∥ ≤ ∥𝐷𝑓 [x0 ] + 𝜖∥ ∥x1 − x2 ∥
220
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
4.45 Assume not. That is, assume that
y = 𝑓 (x1 ) − 𝑓 (x2 ) ∕∈ conv 𝐴
Then by the (strong) separating hyperplane theorem (Proposition 3.14) there exists a
linear functional 𝜑 on 𝑌 such that
𝜑(y) > 𝜑(a)
for every a ∈ 𝐴
(4.48)
where
𝜑(𝑦) = 𝜑(𝑓 (x1 ) − 𝑓 (x2 )) = 𝜑(𝑓 (x1 )) − 𝜑(𝑓 (x2 ))
𝜑𝑓 is a functional on 𝑆. By the mean value theorem (Theorem 4.1), there exists some
x̄ ∈ [x1 , x2 ] such that
𝜑 ∘ 𝑓 (x1 ) − 𝜑 ∘ 𝑓 (x2 )) = 𝐷(𝜑 ∘ 𝑓 )[x̄](x1 − x2 ) = 𝜑 ∘ 𝐷𝑓 [x̄](x − x2 ) = 𝜑(𝑎)
for some 𝑎 ∈ 𝐴 contradicting (4.44).
4.46 Define ℎ : [𝑎, 𝑏] → ℜ by
(
)
(
)
ℎ(𝑥) = 𝑓 (𝑏) − 𝑓 (𝑎) 𝑔(𝑥) − 𝑔(𝑏) − 𝑔(𝑎) 𝑓 (𝑥)
ℎ ∈ 𝐶[𝑎, 𝑏] and is differentiable on 𝑎, 𝑏) with
(
)
(
)
ℎ(𝑎) = 𝑓 (𝑏) − 𝑓 (𝑎) 𝑔(𝑎) − 𝑔(𝑏) − 𝑔(𝑎) 𝑓 (𝑎) = 𝑓 (𝑏)𝑔(𝑎) − 𝑓 (𝑎)𝑔(𝑏) = ℎ(𝑏)
By Rolle’s theorem (Exercise 5.8), there exists 𝑥 ∈ (𝑎, 𝑏) such that
(
)
(
)
ℎ′ (𝑥) = 𝑓 (𝑏) − 𝑓 (𝑎) 𝑔 ′ (𝑥) − 𝑔(𝑏) − 𝑔(𝑎) 𝑓 ′ (𝑥) = 0
4.47 The hypothesis that lim𝑥→𝑎 𝐷𝑓 (𝑥)/𝐷𝑔(𝑥) exists contains two implicit assumptions, namely
∙ 𝑓 and 𝑔 are differentiable on a neighborhood 𝑆 of 𝑎 (except perhaps at 𝑎)
∙ 𝑔 ′ (𝑥) ∕= 0 in this neighborhood (except perhaps at 𝑎).
Applying the Cauchy mean value theorem, for every 𝑥 ∈ 𝑆, there exists some 𝑦𝑥 ∈ (𝑎, 𝑥)
such that
𝑓 ′ (𝑦𝑥 )
𝑓 (𝑥) − 𝑓 (𝑎)
𝑓 (𝑥)
=
=
𝑔 ′ (𝑦𝑥 )
𝑔(𝑥) − 𝑔(𝑎)
𝑔(𝑥)
and therefore
𝑓 (𝑥)
𝑓 ′ (𝑦𝑥 )
𝑓 ′ (𝑥)
= lim ′
= lim ′
𝑥→𝑎 𝑔(𝑥)
𝑥→𝑎 𝑔 (𝑦𝑥 )
𝑥→𝑎 𝑔 (𝑥)
lim
4.48 Let 𝐴 = 𝑎1 + 𝑎2 + ⋅ ⋅ ⋅ + 𝑎𝑛 ∕= 1. Then from (4.12)
𝑎1 log 𝑥1 + 𝑎2 log 𝑥2 + . . . 𝑎𝑛 log 𝑥𝑛
𝐴
𝑎2
𝑎𝑛
𝑎1
log 𝑥1 +
log 𝑥2 + . . .
log 𝑥𝑛
=
𝐴
𝐴
𝐴
lim 𝑔(𝜌) =
𝜌→0
and therefore
lim log 𝑓 (𝜌, x) =
𝜌→0
𝑎1
𝑎2
𝑎𝑛
log 𝑥1 +
log 𝑥2 + . . .
log 𝑥𝑛
𝐴
𝐴
𝐴
so that
𝑎1
𝑎1
𝑎1
lim 𝑓 (𝜌, x) = 𝑥1𝐴 𝑥2𝐴 . . . 𝑥𝑛𝐴
𝜌→0
which is homogeneous of degree one.
221
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
4.49 Average cost is given by 𝑐(𝑦)/𝑦 which is undefined at 𝑦 = 0. We seek lim𝑦→0 𝑐(𝑦)/𝑦.
By L’Hôpital’s rule
𝑐(𝑦)
𝑐′ (𝑦)
= lim
𝑦→0 𝑦
𝑦→0 1
= 𝑐′ (0)
lim
which is marginal cost at zero output.
4.50
1. Since lim𝑥→∞ 𝑓 ′ (𝑥)/𝑔 ′ (𝑥) = k, for every 𝜖 > 0 there exists 𝑎 such that
′
𝑓 (¯
𝑥) − 𝑘 < 𝜖/2 for every 𝑥
¯>𝑎
(4.49)
𝑔 ′ (¯
𝑥)
For every 𝑥 > 𝑎, there exists (Exercise 4.46) 𝑥
¯ ∈ (𝑎, 𝑥) such that
𝑓 (𝑥) − 𝑓 (𝑎)
𝑓 ′ (¯
𝑥)
= ′
𝑔(𝑥) − 𝑔(𝑎)
𝑔 (¯
𝑥)
and therefore by (4.49)
𝑓 (𝑥) − 𝑓 (𝑎)
< 𝜖/2 for every 𝑥 > 𝑎
−
𝑘
𝑔(𝑥) − 𝑔(𝑎)
2.
𝑓 (𝑥)
𝑓 (𝑥) − 𝑓 (𝑎)
𝑓 (𝑥)
𝑔(𝑥) − 𝑔(𝑎)
=
×
×
𝑔(𝑥)
𝑔(𝑥) − 𝑔(𝑎)
𝑓 (𝑥) − 𝑓 (𝑎)
𝑔(𝑥)
𝑓 (𝑥) − 𝑓 (𝑎) 1 −
×
=
𝑔(𝑥) − 𝑔(𝑎)
1−
𝑔(𝑎)
𝑔(𝑥)
𝑓 (𝑎)
𝑓 (𝑥)
For fixed 𝑎
lim
1−
𝑥→∞
1−
𝑔(𝑎)
𝑔(𝑥)
𝑓 (𝑎)
𝑓 (𝑥)
=1
and therefore there exists 𝑎2 such that
1−
1−
𝑔(𝑎)
𝑔(𝑥)
𝑓 (𝑎)
𝑓 (𝑥)
< 2 for every 𝑥 > 𝑎2
which implies that
𝜖
𝑓 (𝑥)
𝑔(𝑥) − 𝑘 < 2 × 2 for every 𝑥 > 𝑎 = max{𝑎1 , 𝑎2 }
4.51 We know that the result holds for 𝑛 = 1 (Exercise 4.22). Assume that the result
holds for 𝑛 − 1. By the chain rule
𝐷(𝑔 ∘ 𝑓 )[x] = 𝐷𝑔[𝑓 (x)] ∘ 𝐷𝑓 [x]
If 𝑓, 𝑔 ∈ 𝐶 𝑛 , the 𝐷𝑓, 𝐷𝑔 ∈ 𝐶 𝑛−1 and therefore (by assumption) 𝐷(𝑔 ∘ 𝑓 ) ∈ 𝐶 𝑛−1 ,
which implies that 𝑔 ∘ 𝑓 ∈ 𝐶 𝑛 .
222
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
4.52 The partial derivatives of the quadratic function are
𝐷1 𝑓 = 2𝑎𝑥1 + 2𝑏𝑥2
𝐷2 𝑓 = 2𝑏𝑥1 + 2𝑐𝑥2
The second-order partial derivatives are
𝐷11 𝑓 = 2𝑎
𝐷21 𝑓 = 2𝑏
𝐷12 𝑓 = 2𝑏
𝐷22 𝑓 = 2𝑐
4.53 Apply Exercise 4.37 to each partial derivative 𝐷𝑖 𝑓 [x].
4.54
𝐻(x0 ) =
(
𝐷11 𝑓 𝑓 [x0 ] 𝐷12 𝑓 𝑓 [x0 ]
𝐷21 𝑓 𝑓 [x0 ] 𝐷22 𝑓 𝑓 [x0 ]
)
(
=2
𝑎
𝑐
𝑏
𝑑
)
4.55
4.56 For any 𝑥1 ∈ 𝑆, define 𝑔 : 𝑆 → ℜ by
𝑔(𝑡) = 𝑓 (𝑡) + 𝑓 ′ [𝑡](𝑥1 − 𝑡) + 𝑎2 (𝑥1 − 𝑡)2
𝑔 is differentiable on 𝑆 with
𝑝′ (𝑡) = 𝑓 ′ [𝑡] − 𝑓 ′ [𝑡] + 𝑓 ′′ [𝑡](𝑥1 − 𝑡) − 2𝑎2 (𝑥1 − 𝑡) = 𝑓 ′′ [𝑡](𝑥1 − 𝑡) − 2𝑎2 (𝑥1 − 𝑡)
Note that 𝑔(𝑥1 ) = 𝑓 (𝑥1 ) and
𝑔(𝑥0 ) = 𝑓 (𝑥0 ) + 𝑓 ′ (𝑥0 )(𝑥1 − 𝑥0 ) + 𝑎2 (𝑥1 − 𝑥0 )2
(4.50)
is a quadratic approximation for 𝑓 near 𝑥0 . If we require that this be exact at 𝑥1 ∕= 𝑥0 ,
then 𝑔(𝑥0 ) = 𝑓 (𝑥1 ) = 𝑔(𝑥1 ). By the mean value theorem (Theorem 4.1), there exists
some 𝑥
¯ between 𝑥0 and 𝑥1 such that
𝑔(𝑥1 ) − 𝑔(𝑥0 ) = 𝑝′ (¯
𝑥)(𝑥1 − 𝑥0 ) = 𝑓 ′′ (¯
𝑥)(𝑥1 − 𝑥0 ) − 2𝑎2 (𝑥1 − 𝑡) = 0
which implies that
𝑎2 =
1 ′′
𝑓 (¯
𝑥)
2
Setting 𝑥 = 𝑥1 − 𝑥0 in (4.50) gives the required result.
4.57 For any 𝑥1 ∈ 𝑆, define 𝑔 : 𝑆 → ℜ by
1
1
𝑔(𝑡) = 𝑓 (𝑡) + 𝑓 ′ [𝑡](𝑥1 − 𝑡) + 𝑓 ′′ [𝑡](𝑥1 − 𝑡)2 + 𝑓 (3) [𝑡](𝑥1 − 𝑡)3 + . . .
2
3!
1 (𝑛)
𝑛
𝑛+1
+ 𝑓 [𝑡](𝑥1 − 𝑡) + 𝑎𝑛+1 (𝑥1 − 𝑡)
𝑛!
𝑔 is differentiable on 𝑆 with
1
1
𝑔 ′ (𝑡) = 𝑓 ′ [𝑡] − 𝑓 ′ [𝑡] + 𝑓 ′′ [𝑡](𝑥1 − 𝑡) − 𝑓 ′′ [𝑡](𝑥1 − 𝑡) + 𝑓 (3) [𝑡](𝑥1 − 𝑡)2 − 𝑓 (3) [𝑡](𝑥1 − 𝑥0 )2 + . . .
2
2
1
1 (𝑛+1)
(𝑛)
𝑛−1
𝑛
𝑓 [𝑡](𝑥1 − 𝑡)
+
+ 𝑓
[𝑡](𝑥1 − 𝑡) − (𝑛 + 1)𝑎𝑛+1 (𝑥1 − 𝑡)𝑛
(𝑛 − 1)!
𝑛!
All but the last two terms cancel, so that
1
𝑔 (𝑡) = 𝑓 (𝑛+1) [𝑡](𝑥1 − 𝑡)𝑛 − (𝑛 + 1)𝑎𝑛+1 (𝑥1 − 𝑡)𝑛 =
𝑛!
′
223
(
)
1 (𝑛+1)
𝑓
[𝑡] − (𝑛 + 1)𝑎𝑛+1 (𝑥1 − 𝑡)𝑛
𝑛!
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Note that 𝑔(𝑥1 ) = 𝑓 (𝑥1 ) and
1
1
𝑔(𝑥0 ) = 𝑓 (𝑥0 ) + 𝑓 ′ [𝑥0 ](𝑥1 − 𝑥0 ) + 𝑓 ′′ [𝑥0 ](𝑥1 − 𝑥0 )2 + 𝑓 (3) [𝑥0 ](𝑥1 − 𝑥0 )3 + . . .
2
3!
1
+ 𝑓 (𝑛+1) [𝑥0 ](𝑥1 − 𝑥0 )𝑛 + 𝑎𝑛+1 (𝑥1 − 𝑥0 )𝑛+1
(4.51)
𝑛!
is a polynomial approximation for 𝑓 near 𝑥0 . If we require that 𝑎𝑛+1 be such that
𝑔(𝑥0 ) = 𝑓 (𝑥1 ) = 𝑔(𝑥1 ), there exists (Theorem 4.1) some 𝑥
¯ between 𝑥0 and 𝑥1 such
that
𝑥)(𝑥1 − 𝑥0 ) = 0
𝑔(𝑥1 ) − 𝑔(𝑥0 ) = 𝑔 ′ (¯
which for 𝑥1 ∕= 𝑥0 implies that
𝑔 ′ (¯
𝑥) =
1 𝑛+1
[¯
𝑥] − (𝑛 + 1)𝑎𝑛+1 = 0
𝑓
𝑛!
or
𝑎𝑛+1 =
1
𝑓 𝑛+1 [¯
𝑥]
(𝑛 + 1)!
Setting 𝑥 = 𝑥1 − 𝑥0 in (4.51) gives the required result.
4.58 By Taylor’s theorem (Exercise 4.57), for every 𝑥 ∈ 𝑆 − 𝑥0 , there exists 𝑥
¯ between
0 and 𝑥 such that
1
𝑓 (𝑥0 + 𝑥) = 𝑓 (𝑥0 ) + 𝑓 ′ [𝑥0 ]𝑥 + 𝑓 ′′ [𝑥0 ]𝑥2 + 𝜖(𝑥)
2
where
𝜖(𝑥) =
1 (3)
𝑓 [¯
𝑥]𝑥3
3!
and
1
𝜖(𝑥)
= 𝑓 (3) [¯
𝑥](𝑥)
𝑥2
3!
𝑥] is bounded on [0, 𝑥] and therefore
Since 𝑓 ∈ 𝐶 3 , 𝑓 (3) [¯
lim ∣
𝑥→0
𝑒(𝑥)
1
∣ = lim ∣𝑓 (3) [¯
𝑥](𝑥)∣ = 0
𝑥→0 3!
𝑥2
4.59 The function 𝑔 : ℜ → 𝑆 defined by
𝑔(𝑡) = 𝑡x0 + (1 − 𝑡)x
𝑔 is 𝐶 ∞ with 𝐷𝑔[𝑡] = x and 𝐷𝑘 𝑔(𝑡) = 0 for 𝑘 = 2, 3, . . . . By Exercise 4.51, the
composite function ℎ = 𝑓 ∘ 𝑔 is 𝐶 𝑛+1 . By the Chain rule
ℎ′ (𝑡) = 𝐷𝑓 [𝑔(𝑡)] ∘ 𝐷𝑔[𝑡] = 𝐷𝑓 [𝑔(𝑡)](x)
Similarly
(
)
ℎ′′ (𝑡) = 𝐷 𝐷𝑓 [𝑔(𝑡)](x)
= 𝐷2 𝑓 [𝑔(𝑡)] ∘ 𝐷𝑔[𝑡](x − x0 )
= 𝐷2 𝑓 [𝑔(𝑡)](x)(2)
224
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
and for all 1 ≤ 𝑘 ≤ 𝑛 + 1
)
(
ℎ(𝑘) (𝑡) = 𝐷 𝐷(𝑘−1) 𝑓 [𝑔(𝑡)](x)(𝑘−1)
= 𝐷𝑘 𝑓 [𝑔(𝑡)] ∘ 𝐷𝑔[𝑡](x − x0 )(𝑘−1)
= 𝐷𝑘 𝑓 [𝑔(𝑡)](x)(𝑘)
4.60 From Exercise 4.54, the Hessian of 𝑓 is
(
𝑎
𝐻(x) = 2
𝑐
𝑏
𝑑
)
and the gradient of 𝑓 is
(
)
∇𝑓 (x) = (2𝑎𝑥1 , 2𝑐𝑥2 ) with ∇𝑓 (0, 0) = 0
so that the second order Taylor series at (0, 0) is
1
𝑓 (x) = 𝑓 (0, 0) + ∇𝑓 (0, 0)x + 2x𝑇
2
(
𝑎 𝑏
𝑐 𝑑
)
x
= 𝑎𝑥21 + 2𝑏𝑥1 𝑥2 + 𝑐𝑥22
Not surprisingly, we conclude that the best quadratic approximation of a quadratic
function is the function itself.
4.61
1. Since 𝐷𝑓 [x0 ] is continuous and one-to-one (Exercise 3.36), there exists a
constant 𝑚 such that
𝑚 ∥x1 − x2 ∥ ≤ ∥𝐷𝑓 [x0 ](x1 − x2 )∥
(4.52)
Let 𝜖 = 𝑚/2. By Exercise 4.43, there exists a neighborhood 𝑆 such that
∥𝐷𝑓 [x0 ](x1 − x2 ) − (𝑓 (x1 ) − 𝑓 (x2 ))∥ = ∥𝑓 (x1 ) − 𝑓 (x2 ) − 𝐷𝑓 [x0 ](x1 − x2 )∥ ≤ 𝜖 ∥x1 − x2 ∥
for every x1 , x2 ∈ 𝑆. The Triangle Inequality (Exercise 1.200) implies
∥𝐷𝑓 [x0 ](x1 − x2 )∥ − ∥(𝑓 (x1 ) − 𝑓 (x2 ))∥ ≤ 𝜖 ∥x1 − x2 ∥
Substituting (4.52)
2𝜖 ∥x1 − x2 ∥ − ∥(𝑓 (x1 ) − 𝑓 (x2 ))∥ ≤ 𝜖 ∥x1 − x2 ∥
That is
𝜖 ∥x1 − x2 ∥ ≤ ∥(𝑓 (x1 ) − 𝑓 (x2 ))∥
(4.53)
and therefore
𝑓 (x1 ) = 𝑓 (x2 ) =⇒ x1 = x2
2. Let 𝑇 = 𝑓 (𝑆). Since the restriction of 𝑓 to 𝑆 is one-to-one and onto, and therefore
there exists an inverse 𝑓 −1 : 𝑇 → 𝑆. For any y1 , y2 ∈ 𝑇 , let x1 = 𝑓 −1 (y1 ) and
x2 = 𝑓 −1 (y2 ). Substituting in (4.53)
𝜖 𝑓 −1 (y1 ) − 𝑓 −1 (y2 ) ≤ ∥y1 − y2 ∥
so that
−1
𝑓 (y1 ) − 𝑓 −1 (y2 ) ≤ 1 ∥y1 − y2 ∥
𝜖
𝑓 −1 is continuous.
225
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
3. Since 𝑆 is open, 𝑇 = 𝑓 −1 (𝑆) is open. Therefore, 𝑇 = 𝑓 (𝑆) is a neighborhood of
𝑓 (x0 ). Therefore, 𝑓 is locally onto.
4.62 Assume to the contrary that there exists x0 ∕= x1 ∈ 𝑆 with 𝑓 (x0 ) = 𝑓 (x1 ). Let
x = x1 − x0 . Define 𝑔 : [0, 1] → 𝑆 by 𝑔(𝑡) = (1 − 𝑡)x0 + 𝑡x1 = x0 + 𝑡x. Then
𝑔(0) = x0
Define
𝑔(1) = x1
𝑔 ′ (𝑡) = x
( (
)
)
ℎ(𝑡) = x𝑇 𝑓 𝑔(𝑡) − 𝑓 (x0 )
Then
ℎ(0) = 0 = ℎ(1)
By the mean value theorem (Mean value theorem), there exists 0 < 𝛼 < 1 such that
𝑔(𝛼) ∈ 𝑆 and
ℎ′ (𝛼) = x𝑇 𝐷𝑓 [𝑔(𝛼)]x = x𝑇 𝐽𝑓 (𝑔(𝛼))x = 0
which contradicts the definiteness of 𝐽𝑓 .
4.63 Substituting the linear functions in (4.35) and (4.35), the IS-LM model can be
expressed as
(1 − 𝐶𝑦 )𝑦 − 𝐼𝑟 𝑟 = 𝐶0 + 𝐼0 + 𝐺 − 𝐶𝑦 𝑇
𝐿𝑦 𝑦 + 𝐿𝑟 𝑟 = 𝑀/𝑃
which can be rewritten in matrix form as
)(
) (
)
(
𝑦
𝑍 − 𝐶𝑦 𝑇
1 − 𝐶𝑦 𝐼𝑟
=
𝐿𝑦
𝐿𝑟
𝑟
𝑀/𝑃
where 𝑍 = 𝐶0 + 𝐼0 + 𝐺. Provided the system is nonsingular, that is
1 − 𝐶𝑦 𝐼𝑟 ∕= 0
𝐷=
𝐿𝑦
𝐿𝑟 the system can be solved using Cramer’s rule (Exercise 3.103) to yield
(1 − 𝐶𝑦 )𝑀/𝑃 − 𝐿𝑦 (𝑍 − 𝐶𝑦 𝑇 )
𝐷
𝐿𝑟 (𝑍 − 𝐶𝑦 )𝑇 − 𝐼𝑟 𝑀/𝑃
𝑦=
𝐷
𝑟=
4.64 The kernel
kernel 𝐷𝐹 [(x0 , 𝜽0 )] = { (x, 𝜽) : 𝐷𝐹 [(x0 , 𝜽0 )](x, 𝜽) = 0 }
is the set of solutions to the equation
(
) (
) (
)
x
𝐷x 𝑓 (x0 , 𝜽0 )x + 𝐷𝜽 𝑓 (x0 , 𝜽0 )𝜽
0
=
𝐷𝐹 [x0 , 𝜽0 ]
=
𝜽
𝜽
0
Only 𝜽 = 0 satisfies this equation. Substituting 𝜽 = 0, the equation reduces to
𝐷x 𝑓 (x0 , 𝜽0 )x = 0
which has a unique solution x = 0 since 𝐷x 𝑓 [x0 , 𝜽0 ] is nonsingular. Therefore the
kernel of 𝐷𝐹 [x0 , 𝜽0 ] consists of the single point (0, 0) which implies that 𝐷𝐹 [x0 , 𝜽 0 ] is
nonsingular (Exercise 3.19).
226
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
4.65 The IS curve is horizontal if its slope is zero, that is
𝐷𝑦 𝑔 = −
1 − 𝐷𝑦 𝐶
−𝐷𝑟 𝐼
This requires either
1. unit marginal propensity to consume (𝐷𝑦 𝐶 = 1)
2. infinite interest elasticity of investment (𝐷𝑟 𝐼 = ∞)
4.66 The LM curve 𝑟 = ℎ(𝑦) is implicitly defined by the equation
𝑓 (𝑟, 𝑦; 𝐺, 𝑇, 𝑀 ) = 𝐿(𝑦, 𝑟) − 𝑀/𝑃 = 0
the slope of which is given by
𝐷𝑦 𝑓
𝐷𝑟 𝑓
𝐷𝑦 𝐿
=−
𝐷𝑟 𝐿
𝐷𝑦 ℎ = −
Economic considerations dictate that the numerator (𝐷𝑦 𝑓 ) is positive while the denominator (𝐷𝑟 𝐿) is negative. Preceded by a negative sign, the slope of the LM curve
is positive. The LM curve would be vertical (infinite slope) if the interest elasticity of
the demand for money was zero (𝐷𝑟 𝐿 = 0).
4.67 Suppose 𝑓 is convex. For any x, x0 ∈ 𝑆 let
)
(
ℎ(𝑡) = 𝑓 𝑡x + (1 − 𝑡)x0 ≤ 𝑡𝑓 (x) + (1 − 𝑡)𝑓 (x0 )
for 0 < 𝑡 < 1. Subtracting ℎ(0) = 𝑓 (x0 )
ℎ(𝑡) − ℎ(0) ≤ 𝑡𝑓 (x) − 𝑡𝑓 (x0 )
and therefore
𝑓 (x) − 𝑓 (x0 ) ≥
ℎ(𝑡) − ℎ(0)
𝑡
Using Exercise 4.10
𝑓 (x) − 𝑓 (x0 ) ≥ lim
𝑡→0
ℎ(𝑡) − ℎ(0)
⃗ x 𝑓 [x0 ] = 𝐷𝑓 [x0 ](x − x0 )
=𝐷
𝑡
Conversely, let x0 = 𝛼x1 + (1 − 𝛼)x2 for any x1 , x2 ∈ 𝑆. If 𝑓 satisfies (4.29) on 𝑆, then
𝑓 (x1 ) ≥ 𝑓 (x0 ) + 𝐷𝑓 [x0 ](x1 − x0 )
𝑓 (x2 ) ≥ 𝑓 (x0 ) + 𝐷𝑓 [x0 ](x2 − x0 )
and therefore for any 0 ≤ 𝛼 ≤ 1
𝛼𝑓 (x1 ) ≥ 𝛼𝑓 (x0 + 𝛼𝐷𝑓 [x0 ](x1 − x0 )
(1 − 𝛼)𝑓 (x2 ) ≥ (1 − 𝛼)𝑓 (x0 + (1 − 𝛼)𝐷𝑓 [x0 ](x2 − x0 )
Adding and using the linearity of 𝐷𝑓 (Exercise 4.21)
𝛼𝑓 (x1 ) + (1 − 𝛼)𝑓 (x2 ) ≥ 𝑓 (x0 ) + 𝐷𝑓 [x0 ](𝛼x1 + (1 − 𝛼)x2 − x0 )
= 𝑓 (x0 ) = 𝑓 (𝛼x1 + (1 − 𝛼)x2 )
That is, 𝑓 is convex. If (4.29) is strict, so is (4.54).
227
(4.54)
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
4.68 Since ℎ is convex, it has a subgradient 𝑔 ∈ 𝑋 ∗ (Exercise 3.181) such that
ℎ(x) ≥ ℎ(x0 ) + 𝑔(x − x0 ) for every x ∈ 𝑋
(4.31) implies that 𝑔 is also a subgradient of 𝑓 on 𝑆
𝑓 (x) ≥ 𝑓 (𝑥0 ) + 𝑔(x − x0 ) for every x ∈ 𝑆
Since 𝑓 is differentiable, this implies that 𝑔 is unique (Remark 4.14) and equal to the
derivative of 𝑓 . Hence ℎ is differentiable at x0 with 𝐷ℎ[x0 ] = 𝐷𝑓 [x0 ].
4.69 Assume 𝑓 is convex. For every x, x0 ∈ 𝑆, Exercise 4.67 implies
(
)
𝑓 (x) ≥ 𝑓 (x0 ) + ∇𝑓 (x0 )𝑇 x − x0
)
(
𝑓 (x0 ) ≥ 𝑓 (x) + ∇𝑓 (x)𝑇 x0 − x
Adding
)
(
(
)
𝑓 (x) + 𝑓 (x0 ) ≥ 𝑓 (x) + 𝑓 (x0 ) + ∇𝑓 (x)𝑇 x0 − x + ∇𝑓 (x0 )𝑇 x − x0
or
(
)
(
)
∇𝑓 (x)𝑇 x − x0 ≥ ∇𝑓 (x0 )𝑇 x − x0
and therefore
∇𝑓 (x) − ∇𝑓 (x0 )𝑇 x − x0 ≥ 0
When 𝑓 is strictly convex, the inequalities are strict.
Conversely, assume (4.32). By the mean value theorem (Theorem 4.1), there exists
x̄ ∈ (x, x0 ) such that
𝑓 (x) − 𝑓 (x0 ) = ∇𝑓 (x̄)𝑇 x − x0
By assumption
∇𝑓 (x̄) − ∇𝑓 (x0 )𝑇 x̄ − x0 ≥ 0
But
x̄ − x0 = 𝛼x0 + (1 − 𝛼)x − x0 = (1 − 𝛼)(x − x0 )
and therefore
(1 − 𝛼)∇𝑓 (x̄) − ∇𝑓 (x0 )𝑇 x − x0 ≥ 0
so that
∇𝑓 (x̄)𝑇 x − x0 ≥ ∇𝑓 (x0 )𝑇 x − x0 ≥ 0
and therefore
𝑓 (x) − 𝑓 (x0 ) = ∇𝑓 (x̄)𝑇 x − x0 ≥ ∇𝑓 (x0 )𝑇 x − x0
Therefore 𝑓 is convex by Exercise 4.67.
228
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
4.70 For 𝑆 ⊆ ℜ, ∇𝑓 (𝑥) = 𝑓 ′ (𝑥) and (4.32) becomes
(𝑓 ′ (𝑥2 ) − 𝑓 ′ (𝑥1 )(𝑥2 − 𝑥1 ) ≥ 0
for every 𝑥1 , 𝑥2 ∈ 𝑆. This is equivalent to
𝑓 ′ (𝑥2 )(𝑥2 − 𝑥1 ) ≥ 𝑓 ′ (𝑥1 )(𝑥2 − 𝑥1 )0
or
𝑥2 > 𝑥1 =⇒ 𝑓 ′ (𝑥2 ) ≥ 𝑓 ′ (𝑥1 )
𝑓 is strictly convex if and only if the inequalities are strict.
4.71 𝑓 ′ is increasing if and only if 𝑓 ′′ = 𝐷𝑓 ′ ≥ 0 (Exercise 4.35). 𝑓 ′ is strictly increasing
if 𝑓 ′′ = 𝐷𝑓 ′ > 0 (Exercise 4.36).
4.72 Adapting the previous example
⎧

⎨= 0
𝑓 ′′ (𝑥) = 𝑛(𝑛 − 1)𝑥𝑛 − 2 = ≥ 0

⎩
indeterminate
if 𝑛 = 1
if 𝑛 = 2, 4, 6, 𝑑𝑜𝑡𝑠
otherwise
Therefore, the power function is convex if 𝑛 is even, and neither convex if 𝑛 ≥ 3 is odd.
It is both convex and concave when 𝑛 = 1.
4.73 Assume 𝑓 is quasiconcave, and 𝑓 (x) ≥ 𝑓 (x0 ). Differentiability at x0 implies for
all 0 < 𝑡 < 1
𝑓 (x0 + 𝑡(x − x0 ) = 𝑓 (x0 ) + ∇𝑓 (x0 )𝑡(x − x0 ) + 𝜂(𝑡) ∥𝑡(x − x0 )∥
where 𝜂(𝑡) → 0 and 𝑡 → 0. Quasiconcavity implies
𝑓 (x0 + 𝑡(x − x0 ) ≥ 𝑓 (x0 )
and therefore
∇𝑓 (x0 )𝑡(x − x0 ) + 𝜂(𝑡) ∥𝑡(x − x0 )∥ ≥ 0
Dividing by 𝑡 and letting 𝑡 → 0, we get
∇𝑓 (x0 )(x − x0 ) ≥ 0
Conversely, assume 𝑓 is a differentiable functional satisfying (4.36). For any x1 , x2 ∈ 𝑆
with 𝑓 (x1 ) ≥ 𝑓 (x2 for every x, x0 ∈ 𝑆), define ℎ : [0, 1] → ℜ by
(
)
)
(
ℎ(𝑡) = 𝑓 (1 − 𝑡)x1 + 𝑡x2 = 𝑓 x1 + 𝑡(x2 − x1 )
We need to show that ℎ(𝑡) ≥ ℎ(1) for every 𝑡 ∈ (0, 1). Suppose to the contrary that
ℎ(𝑡1 ) < ℎ(1). Then (see below) there exists 𝑡0 with ℎ(𝑡0 ) < ℎ(1) and ℎ′ (𝑡0 ) < 0. By
the Chain Rule, this implies
ℎ′ (𝑡0 ) = ∇𝑓 (x0 )(x2 − x1 ) < 0
critical where x0 = x1 + 𝑡(x2 − x1 ). Since x2 − x0 = (1 − 𝑡)(x2 − x1 ) this implies that
ℎ′ (𝑡0 ) =
1
∇𝑓 (x0 )(x2 − x0 )
1−𝑡
229
(4.55)
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
On the other hand, since 𝑓 (x0 ) ≥ 𝑓 (x2 ), (4.36) implies
∇𝑓 (x0 )(x2 − x0 ) ≥ 0
contradicting (4.55).
To show that there exists 𝑡0 with ℎ(𝑡0 ) < ℎ(1) and ℎ′ (𝑡0 ) < 0: Since 𝑓 is continuous,
there exists an open interval (𝑎, 𝑏) with 𝑎 < 𝑡1 < 𝑏 with ℎ(𝑎) = ℎ(𝑏) = ℎ(1) and
ℎ(𝑡) < ℎ(1) for every 𝑡 ∈ (𝑎, 𝑏). By the Mean Value Theorem, there exist 𝑡0 ∈ (𝑎, 𝑡1 )
such that
0 < ℎ(𝑡1 ) − ℎ(𝑎) = ℎ′ (𝑡0 )(𝑡1 − 𝑎)
which implies that ℎ′ (𝑡0 ) > 0.
4.74 Suppose to the contrary that
𝑓 (x) > 𝑓 (x0 ) and ∇𝑓 (x0 )(x − x0 ) ≤ 0
critical Let x1 = −∇𝑓 (x0 ) ∕= 0. For every 𝑡 ∈ ℜ+
∇𝑓 (x0 )(x + 𝑡x1 − x0 ) = ∇𝑓 (x0 )𝑡x1 + ∇𝑓 (x0 )(x − x0 )
≤ 𝑡∇𝑓 (x0 )x1
2
= −𝑡 ∥∇𝑓 (x0 )∥ < 0
Since 𝑓 is continuous, there exists 𝑡 > 0 such that
𝑓 (x + 𝑡x1 ) > 𝑓 (x0 ) and ∇𝑓 (x0 )(x + 𝑡x1 − x0 ) < 0
contradicting the quasiconcavity of 𝑓 (4.36).
4.75 Suppose
𝑓 (x) < 𝑓 (x0 ) =⇒ ∇𝑓 (x0 )(x − x0 ) < 0
This implies that
−𝑓 (x) > −𝑓 (x0 ) =⇒ ∇ − 𝑓 (x0 )(x − x0 ) > 0
and −𝑓 is pseudoconcave.
4.76
1. If 𝑓 ∈ 𝐹 [𝑆] is concave (and differentiable)
𝑓 (x) ≤ 𝑓 (x0 ) + ∇𝑓 (x0 )𝑇 (x − x0 )
for every x, x0 ∈ 𝑆(equation 4.30). Therefore
𝑓 (x) > 𝑓 (x0 ) =⇒ ∇𝑓 (x0 )𝑇 (x − x0 ) > 0
𝑓 is pseudoconcave.
2. Assume to the contrary that 𝑓 is pseudoconcave but not quasiconcave. Then,
there exists x̄ = 𝛼x1 + (1 − 𝛼)x2 , x1 , x2 ∈ 𝑆 such that
𝑓 (x̄) < min{𝑓 (x1 ), 𝑓 (x2 )}
Assume without loss of generality that
𝑓 (x̄) < 𝑓 (x1 ) ≤ 𝑓 (x2 )
230
(4.56)
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Pseudoconcavity (4.38) implies
∇𝑓 (x̄)(x2 − x̄) > 0
(4.57)
Since x1 = (x̄ − (1 − 𝛼)x2 )/𝛼
x1 − x̄ =
)
1(
1−𝛼
x̄ − (1 − 𝛼)x2 − 𝛼x̄ = −
(x2 − x̄)
𝛼
𝛼
Substituting in (4.57) gives
∇𝑓 (x̄)(x1 − x̄) < 0
which by pseudoconcavity implies 𝑓 (x1 ) ≤ 𝑓 (x̄) contradicting our assumption
(4.56) .
3. Exercise 4.74.
4.77 The CES function is quasiconcave provided 𝜌 ≤ 1 (Exercise 3.58). Since 𝐷𝑥𝑖 𝑓 (x) >
0 for all x ∈ ℜ𝑛+ +, the CES function with 𝜌 ≤ 1 is pseudoconcave on ℜ𝑛++ .
4.78 Assume that 𝑓 : 𝑆 → ℜ is homogeneous of degree 𝑘, so that for every x ∈ 𝑆
𝑓 (𝑡x) = 𝑡𝑛 𝑓 (x) for every 𝑡 > 0
Differentiating both sides of this identity with respect to 𝑥𝑖
𝐷𝑥𝑖 𝑓 (𝑡x)𝑡 = 𝑡𝑛 𝐷𝑥𝑖 𝑓 (x)
and dividing by 𝑡 > 0
𝐷𝑥𝑖 𝑓 (𝑡x) = 𝑡𝑘−1 𝐷𝑥𝑖 𝑓 (x)
4.79 If 𝑓 is homogeneous of degree 𝑘
⃗ x 𝑓 (x) = lim 𝑓 (x + 𝑡x) − 𝑓 (x)
𝐷
𝑡→0
𝑡
𝑓 ((1 + 𝑡)x) − 𝑓 (x)
= lim
𝑡→0
𝑡
(1 + 𝑡)𝑛 𝑓 (x) − 𝑓 (x)
= lim
𝑡→0
𝑡
(1 + 𝑡)𝑛 − 1
= lim
𝑓 (x)
𝑡→0
𝑡
Applying L’Hôpital’s Rule (Exercise 4.47)
(1 + 𝑡)𝑘−1
𝑘(1 + 𝑡)𝑘−1
𝑓 (x) = lim
=𝑘
𝑡→0
𝑡→0
𝑡
1
lim
and therefore
⃗ x 𝑓 (x) = 𝑘𝑓 (x)
𝐷
(4.58)
4.80 For fixed x, define
ℎ(𝑡) = 𝑓 (𝑡x)
By the Chain Rule
ℎ′ (𝑡) = 𝑡𝐷𝑓 [𝑡x](x) = 𝑡𝑘𝑓 (𝑡x) = 𝑡𝑘ℎ(𝑡)
231
(4.59)
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
using(4.40). Differentiating the product ℎ(𝑡)𝑡−𝑘
(
)
(
)
𝐷𝑡 ℎ(𝑡)𝑡−𝑘 = −𝑘ℎ(𝑡)𝑡−𝑘−1 + 𝑡−𝑘 ℎ′ (𝑡) = 𝑡−𝑘 ℎ′ (𝑡) − 𝑘𝑡ℎ(𝑡) = 0
from (4.59). Since this holds for every 𝑡, ℎ(𝑡)𝑡−𝑘 must be constant (Exercise 4.38), that
is there exists 𝑐 ∈ ℜ such that
ℎ(𝑡)𝑡−𝑘 = 𝑐 =⇒ ℎ(𝑡) = 𝑐𝑡𝑘
Evaluating at 𝑡 = 1, ℎ(1) = 𝑐 and therefore
ℎ(𝑡) = 𝑡𝑘 ℎ(1)
Since ℎ(𝑡) = 𝑓 (𝑡x) and ℎ(1) = 𝑓 (x), this implies
𝑓 (𝑡x) = 𝑡𝑘 𝑓 (x) for every x and 𝑡 > 0
𝑓 is homogeneous of degree 𝑘.
4.81 If 𝑓 is linearly homogeneous and quasiconcave, then 𝑓 is concave (Proposition
3.12). Therefore, its Hessian is nonpositive definite (Proposition 4.1). and its diagonal
elements 𝐷𝑥2 𝑖 𝑥𝑖 𝑓 (x) are nonpositive (Exercise 3.95). By Wicksell’s law, 𝐷𝑥2 𝑖 𝑥𝑗 𝑓 (x) is
nonnegative.
4.82 Assume 𝑓 is homogeneous of degree 𝑘, that is
𝑓 (𝑡x) = 𝑡𝑘 𝑓 (x) for every x ∈ 𝑆 and 𝑡 > 0
By Euler’s theorem
𝐷𝑡 𝑓 [𝑡x](𝑡x) = 𝑘𝑓 (𝑡x)
and therefore the elasticity of scale is
𝑡
𝑡
𝐷𝑡 𝑓 (𝑡x)
𝑘𝑓 (𝑡x) = 𝑘
=
𝐸(x) =
𝑓 (𝑡x)
𝑓
(𝑡x)
𝑡=1
Conversely, assume that
𝑡
𝐸(x) =
𝐷𝑡 𝑓 (𝑡x)
=𝑘
𝑓 (𝑡x)
𝑡=1
that is
𝐷𝑡 𝑓 (𝑡x) = 𝑘𝑓 (𝑡x)
By Euler’s theorem, 𝑓 is homogeneous of degree 𝑘.
4.83 Assume 𝑓 ∈ 𝐹 (𝑆) is differentiable and homogeneous of degree 𝑘 ∕= 0. By Euler’s
theorem
𝐷𝑓 [x](x) = 𝑘𝑓 (x) ∕= 0
for every x ∈ 𝑆 such that 𝑓 (x) ∕= 0.
4.84 𝑓 satisfies Euler’s theorem
𝑘𝑓 (x) =
𝑛
∑
𝐷𝑖 𝑓 (x)𝑥𝑖
𝑖=1
232
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Differentiating with respect to 𝑥𝑗
𝑘𝐷𝑗 𝑓 (x) =
𝑛
∑
𝐷𝑖𝑗 𝑓 (x)𝑥𝑖 + 𝐷𝑗 𝑓 (x)
𝑖=1
or
(𝑘 − 1)𝐷𝑗 𝑓 (x) =
𝑛
∑
𝐷𝑖𝑗 𝑓 (x)𝑥𝑖
𝑗 = 1, 2, . . . , 𝑛
𝑖=1
Multiplying each equation by 𝑥𝑗 and summing
(𝑘 − 1)
𝑛
∑
𝐷𝑗 𝑓 (x)𝑥𝑗 =
𝑗=1
𝑛 ∑
𝑛
∑
𝐷𝑖𝑗 𝑓 (x)𝑥𝑖 𝑥𝑗 = x′ 𝐻x
𝑗=1 𝑖=1
By Euler’s theorem, the left hand side is
(𝑘 − 1)𝑘𝑓 (x) = x′ 𝐻x
4.85 If 𝑓 is homothetic, there exists strictly increasing 𝑔 and linearly homogeneous ℎ
such that 𝑓 = 𝑔 ∘ ℎ (Exercise 3.175). Using the Chain Rule and Exercise 4.78
𝐷𝑥𝑖 𝑓 (𝑡x) = 𝑔 ′ (𝑓 (𝑡x))𝐷𝑥𝑖 ℎ(𝑡x) = 𝑡𝑔 ′ (𝑓 (𝑡x))𝐷𝑥𝑖 ℎ(x)
and therefore
𝐷𝑥𝑖 𝑓 (𝑡x)
𝑡𝑔 ′ (𝑓 (𝑡x))𝐷𝑥𝑖 ℎ(x)
=
𝐷𝑥𝑗 𝑓 (𝑡x)
𝐷𝑥𝑗 𝑡𝑔 ′ (𝑓 (𝑡x))𝐷𝑥𝑗 ℎ(x)
𝐷𝑥𝑖 ℎ(x)
=
𝐷𝑥𝑗 ℎ(x)
𝐷𝑥𝑗
𝑔 ′ (𝑓 (x)𝐷𝑥𝑖 ℎ(x)
=
𝐷𝑥𝑗 𝑔 ′ (𝑓 (x))𝐷𝑥𝑗 ℎ(x)
𝐷𝑥𝑖 𝑓 (x)
=
𝐷𝑥𝑗 𝑓 (x)
𝐷𝑥𝑗
233
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
Chapter 5: Optimization
5.1 As stated, this problem has no optimal solution. Revenue 𝑓 (𝑥) increases without
bound as the rate of exploitation 𝑥 gets smaller and smaller. Given any positive exploitation rate 𝑥0 , a smaller rate will increase total revenue. Nonexistence arises from
inadequacy in modeling the island leaders’ problem. For example, the model ignores
any costs of extraction and sale. Realistically, we would expect per-unit costs to decrease with volume (increasing returns to scale) at least over lower outputs. Extraction
and transaction costs should make vanishingly small rates of output prohibitively expensive and encourage faster utilization. Secondly, even if the government weights
future generations equally with the current generation, it would be rational to value
current revenue more highly than future revenue and discount future returns. Discounting is appropriate for two reasons
∙ Current revenues can be invested to provide a future return. There is an opportunity cost (the interest foregone) to delaying extraction and sale.
∙ Innovation may create substitutes which reduce the future demand for the fertilizer. If the government is risk averse, it has an incentive to accelerate exploitation,
trading-off of lower total return against reduced risk.
5.2 Suppose that x∗ is a local optimum which is not a global optimum. That is, there
exists a neighborhood 𝑆 of x∗ such that
𝑓 (x∗ , 𝜽) ≥ 𝑓 (x, 𝜽) for every x ∈ 𝑆 ∩ 𝐺(𝜽)
and also another point x∗∗ ∈ 𝐺(𝜽) such that
𝑓 (x∗∗ , 𝜽) > 𝑓 (x∗ , 𝜽)
Since 𝐺(𝜽) is convex, there exists 𝛼 ∈ (0, 1) such that
𝛼x∗ + (1 − 𝛼)x∗∗ ∈ 𝑆 ∩ 𝐺(𝜽)
By concavity of 𝑓
𝑓 (𝛼x∗ + (1 − 𝛼)x∗∗ , 𝜽) ≥ 𝛼𝑓 (x∗ , 𝜽) + (1 − 𝛼)𝑓 (x∗∗ , 𝜽) > 𝑓 (x∗ , 𝜽)
contradicting the assumption that x∗ is a local optimum.
5.3 Suppose that x∗ is a local optimum which is not a global optimum. That is, there
exists a neighborhood 𝑆 of x∗ such that
𝑓 (x∗ , 𝜽) ≥ 𝑓 (x, 𝜽) for every x ∈ 𝑆 ∩ 𝐺(𝜽)
and also another point x∗∗ ∈ 𝐺(𝜽) such that
𝑓 (x∗∗ , 𝜽) > 𝑓 (x∗ , 𝜽)
Since 𝐺(𝜽) is convex, there exists 𝛼 ∈ (0, 1) such that
𝛼x∗ + (1 − 𝛼)x∗∗ ∈ 𝑆 ∩ 𝐺(𝜽)
234
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
By strict quasiconcavity of 𝑓
𝑓 (𝛼x∗ + (1 − 𝛼)x∗∗ , 𝜽) > min{ 𝑓 (x∗ , 𝜽), 𝑓 (x∗∗ , 𝜽) } > 𝑓 (x∗ , 𝜽)
contradicting the assumption that x∗ is a local optimum. Therefore, if x∗ is local
optimum, it must be a global optimum.
Now suppose that x∗ is a weak global optimum, that is
𝑓 (x∗ , 𝜽) ≥ 𝑓 (x, 𝜽) for every x ∈ 𝑆
but there another point x∗∗ ∈ 𝑆 such that
𝑓 (x∗∗ , 𝜽) = 𝑓 (x∗ , 𝜽)
Since 𝐺(𝜽) is convex, there exists 𝛼 ∈ (0, 1) such that
𝛼x∗ + (1 − 𝛼)x∗∗ ∈ 𝑆 ∩ 𝐺(𝜽)
By strict quasiconcavity of 𝑓
𝑓 (𝛼x∗ + (1 − 𝛼)x∗∗ , 𝜽) > min{ 𝑓 (x∗ , 𝜽), 𝑓 (x∗∗ , 𝜽) } = 𝑓 (x∗ , 𝜽)
contradicting the assumption that x∗ is a global optimum. We conclude that every
optimum is a strict global optimum and hence unique.
5.4 Suppose that x∗ is a local optimum of (5.3) in 𝑋, so that
𝑓 (x∗ ) ≥ 𝑓 (x)
(5.80)
for every x in a neighborhood 𝑆 of x∗ . If 𝑓 is differentiable,
𝑓 (x) = 𝑓 (x∗ ) + 𝐷𝑓 [x∗ ](x − x∗ ) + 𝜂(x) ∥x − x∗ ∥
where 𝜂(x) → 0 as x → x∗ . (5.80) implies that there exists a ball 𝐵𝑟 (x∗ ) such that
𝐷𝑓 [x∗ ](x − x∗ ) + 𝜂(x) ∥x − x∗ ∥ ≤ 0
for every x ∈ 𝐵𝑟 (x∗ ). Letting x → x∗ , we conclude that
𝐷𝑓 [x∗ ](x − x∗ ) ≤ 0
for every x ∈ 𝐵𝑟 (x∗ ).
Suppose there exists x ∈ 𝐵𝑟 (x∗ ) such that
𝐷𝑓 [x∗ ](x − x∗ ) = 𝑦 < 0
Let dx = x − x∗ so that x = x∗ + dx. Then x∗ − dx ∈ 𝐵𝑟 (x∗ ). Since 𝐷𝑓 [x∗ ] is linear,
𝐷𝑓 [x∗ ](−dx) = −𝐷𝑓 [x∗ ](dx) = −𝑦 > 0
contradicting (5.80). Therefore
𝐷𝑓 [x∗ ](x − x∗ ) = 0
for every x ∈ 𝐵𝑟 (x∗ ).
235
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
5.5 We apply the reasoning of Example 5.5 to each component. Formally, for each 𝑖,
let 𝑓ˆ𝑖 be the projection of 𝑓 along the 𝑖𝑡ℎ axis
𝑓ˆ𝑖 (𝑥𝑖 ) = 𝑓 (𝑥∗1 , 𝑥∗2 , . . . , 𝑥∗𝑖−1 , 𝑥𝑖 , 𝑥∗𝑖+1 , . . . , 𝑥∗𝑛 )
𝑥∗𝑖 maximizes 𝑓ˆ𝑖 (𝑥𝑖 ) over ℜ+ , for which it is necessary that
𝐷𝑥𝑖 𝑓ˆ𝑖 (𝑥∗𝑖 ) ≤ 0
𝑥∗𝑖 ≥ 0
𝑥∗𝑖 𝐷𝑥𝑖 𝑓ˆ𝑖 (𝑥∗𝑖 ) = 0
Substituting
𝐷𝑥𝑖 𝑓ˆ𝑖 (𝑥∗𝑖 ) = 𝐷𝑥𝑖 𝑓 [x∗ ]
yields
𝐷𝑥𝑖 𝑓 [x∗ ] ≤ 0
𝑥∗𝑖 ≥ 0
𝑥∗𝑖 𝐷𝑥𝑖 𝑓 [x∗ ] = 0
5.6 By Taylor’s Theorem (Example 4.33)
1
2
𝑓 (x∗ + dx) = 𝑓 (x∗ ) + ∇𝑓 (x∗ )dx + dx𝑇 𝐻𝑓 (x∗ )dx + 𝜂(dx) ∥dx∥
2
with 𝜂(dx) → 0 as dx → 0. Given
1. ∇𝑓 (x∗ ) = 0 and
2. 𝐻𝑓 (x∗ ) is negative definite
and letting dx → 0, we conclude that
𝑓 (x∗ + dx) < 𝑓 (x∗ )
for small dx. x∗ is a strict local maximum.
5.7 If x∗ is a local minimum of 𝑓 (x), it is necessary that
𝑓 (x∗ ) ≤ 𝑓 (x)
for every x in a neighborhood 𝑆 of x∗ . Assuming that 𝑓 is 𝐶 2 , 𝑓 (x) can be approximated by
1
𝑓 (x) ≈ 𝑓 (x∗ ) + ∇𝑓 (x∗ )dx + dx𝑇 𝐻𝑓 (x∗ )dx
2
where dx = x − x∗ . If x∗ is a local minimum, then there exists a ball 𝐵𝑟 (x∗ ) such that
1
𝑓 (x∗ ) ≤ 𝑓 (x∗ ) + ∇𝑓 (x∗ )dx + dx𝑇 𝐻𝑓 (x∗ )dx
2
or
1
∇𝑓 (x∗ )dx + dx𝑇 𝐻𝑓 (x∗ )dx ≥ 0
2
for every dx ∈ 𝐵𝑟 (x∗ ). To satisfy this inequality for all small dx requires that the
first term be zero and the second term nonnegative. In other words, for a point x∗ to
be a local minimum of a function 𝑓 , it is necessary that the gradient be zero and the
Hessian be nonnegative definite at x∗ . Furthermore, by Taylor’s Theorem
1
2
𝑓 (x∗ + dx) = 𝑓 (x∗ ) + ∇𝑓 (x∗ )dx + dx𝑇 𝐻𝑓 (x∗ )dx + 𝜂(dx) ∥dx∥
2
with 𝜂(dx) → 0 as dx → 0. Given
236
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
(1,2,3)
3
2 𝑥
2
0
1
1
𝑥1
2
Figure 5.1: The strictly concave function 𝑓 (𝑥1 , 𝑥2 ) = 𝑥1 𝑥2 + 3𝑥2 − 𝑥21 − 𝑥22 has a unique
global maximum.
1. ∇𝑓 (x∗ ) = 0 and
2. 𝐻𝑓 (x∗ ) is positive definite
and letting dx → 0, we conclude that
𝑓 (x∗ + dx) > 𝑓 (x∗ )
for small dx. x∗ is a strict local minimum.
5.8 By the Weierstrass theorem (Theorem 2.2), 𝑓 has a maximum 𝑥∗ and a minimum
𝑥∗ on [𝑎, 𝑏]. Either
∙ 𝑥∗ ∈ (𝑎, 𝑏) and 𝑓 ′ (𝑥∗ ) = 0 (Theorem 5.1) or
∙ 𝑥∗ ∈ (𝑎, 𝑏) and 𝑓 ′ (𝑥∗ ) = 0 (Exercise 5.7) or
∙ Both maxima and minima are boundary points, that is 𝑥∗ , 𝑥∗ ∈ {𝑎, 𝑏} which
implies that 𝑓 is constant on [𝑎, 𝑏] and therefore 𝑓 ′ (𝑥) = 0 for every 𝑥 ∈ (𝑎, 𝑏)
(Exercise 4.7).
5.9 The first-order conditions for a maximum are
𝐷𝑥1 𝑓 (𝑥1 , 𝑥2 ) = 𝑥2 − 2𝑥1 = 0
𝐷𝑥2 𝑓 (𝑥1 , 𝑥2 ) = 𝑥1 + 3 − 2𝑥2 = 0
which have the unique solution 𝑥∗1 = 1, 𝑥∗2 = 2. (1, 2) is the only stationary point of
𝑓 and hence the only possible candidate for a maximum. To verify that (1, 2) satisfies
the second-order condition for a maximum, we compute the Hessian of 𝑓
)
(
−2
1
𝐻(x) =
1 −2
which is negative definite everywhere. Therefore (1, 2) is a strict local maximum of 𝑓 .
Further, since 𝑓 is strictly concave (Proposition 4.1), we conclude that (1, 2) is a strict
global maximum of 𝑓 (Exercise 5.2), where it attains its maximum value 𝑓 (1, 2) = 3
(Figure 5.1).
237
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
5.10 The first-order conditions for a maximum (or minimum) are
𝐷1 𝑓 (𝑥) = 2𝑥1 = 0
𝐷2 𝑓 (𝑥) = 2𝑥2 = 0
which have a unique solution 𝑥1 = 𝑥2 = 0. This is the only stationary point of 𝑓 . Since
the Hessian of 𝑓
(
)
2 0
𝐻=
0 2
is positive definite, we deduce (0, 0) is a strict global minimum of 𝑓 (Proposition 4.1,
Exercise 5.2).
5.11 The average firm’s profit function is
1
1
Π(𝑘, 𝑙) = 𝑦 − 𝑘 − 𝑙 −
2
6
and the firm’s profit maximization problem is
1
1
max Π(𝑘, 𝑙) = 𝑘 1/6 𝑙1/3 − 𝑘 − 𝑙 −
𝑘,𝑙
2
6
A necessary condition for a profit maximum is that the profit function be stationary,
that is
1 −5/6 1/3 1
𝑘
𝑙 − =0
6
2
1 1/6 −2/3
−1=0
𝐷𝑙 Π(𝑘, 𝑙) = 𝑘 𝑙
3
𝐷𝑘 Π(𝑘, 𝑙) =
which can be solved to yield
𝑘=𝑙=
1
9
The firm’s output is
𝑦=
1 1/6 1 1/3
1
=
9 9
3
and its profit is
1 11 1 1
1 1
− − =0
Π( , ) = −
3 3
3 29 9 6
5.12 By the Chain Rule
𝐷x (ℎ ∘ 𝑓 )[x∗ ] = 𝐷ℎ ∘ 𝐷x 𝑓 [x∗ ] = 0
Since 𝐷ℎ > 0
𝐷x (ℎ ∘ 𝑓 )[𝑥∗ ] = 0 ⇐⇒ 𝐷x 𝑓 [x∗ ] = 0
ℎ ∘ 𝑓 has the same stationary points as 𝑓 .
238
Solutions for Foundations of Mathematical Economics
c 2001 Michael Carter
⃝
All rights reserved
5.13 Since the log function is monotonic, finding the maximum likelihood estimators is
equivalent to solving the maximization problem ( Exercise 5.12)
max log 𝐿(𝜇, 𝜎) = −
𝜇,𝜎
𝑇
1 ∑
𝑇
log 2𝜋 − 𝑇 log 𝜎 − 2
(𝑥𝑡 − 𝜇)2
2
2𝜎 𝑡=1
For (ˆ
𝜇, 𝜎
ˆ 2 ) to solve this problem, it is necessary that log 𝐿 be stationary at (ˆ
𝜇, 𝜎
ˆ 2 ),
that is
𝜇, 𝜎
ˆ2) =
𝐷𝜇 log 𝐿(ˆ
𝑇
1 ∑
(𝑥𝑡 − 𝜇
ˆ) = 0
𝜎
ˆ 2 𝑡=1
𝐷𝜎 log 𝐿(ˆ
𝜇, 𝜎
ˆ2) = −
𝑇
1 ∑
𝑇
+ 3
(𝑥𝑡 − 𝜇
ˆ)2 = 0
𝜎
ˆ
𝜎
ˆ 𝑡=1
which can be solved to yield
𝜇
ˆ=𝑥
¯=
𝜎
ˆ2 =
𝑇
1 ∑
𝑥𝑡
𝑇 𝑡=1
𝑇
1∑
(𝑥𝑡 − 𝑥
¯)2
𝑇 𝑡=1
5.14 The gradient of the objective function is
)
(
−2(𝑥1 − 1)
∇𝑓 (x) =
−2(𝑥2 − 1)
while that of the constraint is
(
∇𝑔(x) =
2𝑥1
2𝑥2
)
A necessary condition for the optimal solution is that these be proportional that is
)
(
(
)
2𝑥1
−2(𝑥1 − 1)
=𝜆
∇𝑓 (𝑥) =
= ∇𝑔(x)
−2(𝑥2 − 1)
2𝑥2
which can be solved to yield
𝑥1 = 𝑥2 =
1
1+𝜆
which includes an unknown constant of proportionality 𝜆. However, any solution must
also satisfy the constraint
(
)2
1
𝑔(𝑥1 , 𝑥2 ) = 2
=1
1+𝜆
This can be solved for 𝜆
𝜆=
√
2−1
and substituted into (5.80)
1
𝑥1 = 𝑥2 = √
2
239
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
5.15 The consumer’s problem is
max_{x≥0} 𝑢(x) = 𝑥1 + 𝑎 log 𝑥2
subject to 𝑔(x) = 𝑥1 + 𝑝2 𝑥2 − 𝑚 = 0
The first-order conditions for a (local) optimum are
𝐷𝑥1 𝑢(x∗) = 1 ≤ 𝜆 = 𝜆𝐷𝑥1 𝑔(x∗)    𝑥1 ≥ 0    𝑥1(1 − 𝜆) = 0    (5.81)
𝐷𝑥2 𝑢(x∗) = 𝑎/𝑥2 ≤ 𝜆𝑝2 = 𝜆𝐷𝑥2 𝑔(x∗)    𝑥2 ≥ 0    𝑥2(𝑎/𝑥2 − 𝜆𝑝2) = 0    (5.82)
We can distinguish two cases:
Case 1 𝑥1 = 0 in which case the budget constraint implies that 𝑥2 = 𝑚/𝑝2 .
Case 2 𝑥1 > 0 In this case, (5.81) implies that 𝜆 = 1. Consequently, the first inequality of (5.82) implies that 𝑥2 > 0 and therefore the last equation implies 𝑥2 = 𝑎/𝑝2
with 𝑥1 = 𝑚 − 𝑎.
We deduce that the consumer first spends portion 𝑎 of her income on good 2 and the
remainder on good 1.
5.16 Suppose without loss of generality that the first 𝑘 components of y∗ are strictly
positive while the remaining components are zero. That is
𝑦∗𝑖 > 0    𝑖 = 1, 2, . . . , 𝑘
𝑦∗𝑖 = 0    𝑖 = 𝑘 + 1, 𝑘 + 2, . . . , 𝑛
(x∗, y∗) solves the problem
max 𝑓(x, y)  subject to  g(x, y) = 0,  𝑦𝑖 = 0, 𝑖 = 𝑘 + 1, 𝑘 + 2, . . . , 𝑛
By Theorem 5.2, there exist multipliers 𝜆1, 𝜆2, . . . , 𝜆𝑚 and 𝜇𝑘+1, 𝜇𝑘+2, . . . , 𝜇𝑛 such that
𝐷x 𝑓[x∗, y∗] = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝐷x 𝑔𝑗[x∗, y∗]
𝐷y 𝑓[x∗, y∗] = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝐷y 𝑔𝑗[x∗, y∗] + ∑_{𝑖=𝑘+1}^{𝑛} 𝜇𝑖 e𝑖
Furthermore, 𝜇𝑖 ≥ 0 for every 𝑖 so that
𝐷y 𝑓[x∗, y∗] ≤ ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝐷y 𝑔𝑗[x∗, y∗]
with
𝐷𝑦𝑖 𝑓[x∗, y∗] = ∑_{𝑗=1}^{𝑚} 𝜆𝑗 𝐷𝑦𝑖 𝑔𝑗[x∗, y∗]  if 𝑦𝑖 > 0
5.17 Assume that x∗ = (𝑥∗1 , 𝑥∗2 ) solves
max_{𝑥1,𝑥2} 𝑓(𝑥1, 𝑥2)
subject to
𝑔(𝑥1, 𝑥2) = 0
By the implicit function theorem, there exists a function ℎ : ℜ → ℜ such that
𝑥1 = ℎ(𝑥2)    (5.83)
and
𝑔(ℎ(𝑥2), 𝑥2) = 0
for 𝑥2 in a neighborhood of 𝑥∗2. Furthermore
𝐷ℎ[𝑥∗2] = − 𝐷𝑥2 𝑔[x∗] / 𝐷𝑥1 𝑔[x∗]    (5.84)
Using (5.83), we can convert the original problem into the unconstrained maximization of a function of a single variable
max_{𝑥2} 𝑓(ℎ(𝑥2), 𝑥2)
If 𝑥∗2 maximizes this function, it must satisfy the first-order condition (applying the Chain Rule)
𝐷𝑥1 𝑓[x∗] ∘ 𝐷ℎ[𝑥∗2] + 𝐷𝑥2 𝑓[x∗] = 0
Substituting (5.84) yields
𝐷𝑥1 𝑓[x∗] ( − 𝐷𝑥2 𝑔[x∗] / 𝐷𝑥1 𝑔[x∗] ) + 𝐷𝑥2 𝑓[x∗] = 0
or
𝐷𝑥1 𝑓[x∗] / 𝐷𝑥2 𝑓[x∗] = 𝐷𝑥1 𝑔[x∗] / 𝐷𝑥2 𝑔[x∗]
5.18 The consumer’s problem is
max_{x∈𝑋} 𝑢(x)
subject to p𝑇 x = 𝑚
Solving for 𝑥1 from the budget constraint yields
𝑥1 = (𝑚 − ∑_{𝑖=2}^{𝑛} 𝑝𝑖 𝑥𝑖) / 𝑝1
Substituting this in the utility function, the affordable utility levels are
𝑢̂(𝑥2, 𝑥3, . . . , 𝑥𝑛) = 𝑢( (𝑚 − ∑_{𝑖=2}^{𝑛} 𝑝𝑖 𝑥𝑖)/𝑝1 , 𝑥2, 𝑥3, . . . , 𝑥𝑛 )    (5.85)
and the consumer's problem is to choose (𝑥2, 𝑥3, . . . , 𝑥𝑛) to maximize (5.85). The first-order conditions are that 𝑢̂(𝑥2, 𝑥3, . . . , 𝑥𝑛) be stationary, that is, for every good 𝑗 = 2, 3, . . . , 𝑛
𝐷𝑥𝑗 𝑢̂(𝑥2, 𝑥3, . . . , 𝑥𝑛) = 𝐷𝑥1 𝑢(x∗) 𝐷𝑥𝑗 ( (𝑚 − ∑_{𝑖=2}^{𝑛} 𝑝𝑖 𝑥𝑖)/𝑝1 ) + 𝐷𝑥𝑗 𝑢(x∗) = 0
which reduces to
𝐷𝑥1 𝑢(x∗) (−𝑝𝑗/𝑝1) + 𝐷𝑥𝑗 𝑢(x∗) = 0
or
𝐷𝑥1 𝑢(x∗) / 𝐷𝑥𝑗 𝑢(x∗) = 𝑝1 / 𝑝𝑗    𝑗 = 2, 3, . . . , 𝑛
This is the familiar equality between the marginal rate of substitution and the price
ratio (Example 5.15). Since our selection of 𝑥1 was arbitrary, this applies between any
two goods.
5.19 Adapt Exercise 5.6.
5.20 Corollary 5.1.2 implies that x∗ is a global maximum of 𝐿(x, 𝝀), that is
𝐿(x∗ , 𝝀) ≥ 𝐿(x, 𝝀) for every x ∈ 𝑋
which implies
𝑓(x∗) − ∑_𝑗 𝜆𝑗 𝑔𝑗(x∗) ≥ 𝑓(x) − ∑_𝑗 𝜆𝑗 𝑔𝑗(x) for every x ∈ 𝑋
Since g(x∗) = 0 this implies
𝑓(x∗) ≥ 𝑓(x) − ∑_𝑗 𝜆𝑗 𝑔𝑗(x) for every x ∈ 𝑋
A fortiori
𝑓 (x∗ ) ≥ 𝑓 (x) for every x ∈ 𝐺 = { x ∈ 𝑋 : g(x) = 0 }
5.21 Suppose that x∗ is a local maximum of 𝑓 on 𝐺. That is, there exists a neighborhood
𝑆 such that
𝑓 (x∗ ) ≥ 𝑓 (x) for every x ∈ 𝑆 ∩ 𝐺
But for every x ∈ 𝐺, 𝑔𝑗 (x) = 0 for every 𝑗 and
𝐿(x) = 𝑓(x) + ∑_𝑗 𝜆𝑗 𝑔𝑗(x) = 𝑓(x)
and therefore
𝐿(x∗ ) ≥ 𝐿(x) for every x ∈ 𝑆 ∩ 𝐺
5.22 The area of the base is
Base = 𝑤² = 𝐴/3
and the four sides
Sides = 4𝑤ℎ = 4 √(𝐴/3) √(𝐴/12) = 4𝐴/6 = 2𝐴/3
5.23 Let the dimensions of the vat be 𝑤 × 𝑙 × ℎ. We wish to
min_{𝑤,𝑙,ℎ} Surface area = 𝐴 = 𝑤𝑙 + 2𝑤ℎ + 2𝑙ℎ
subject to 𝑤𝑙ℎ = 32
The Lagrangean is
𝐿(𝑤, 𝑙, ℎ, 𝜆) = 𝑤𝑙 + 2𝑤ℎ + 2𝑙ℎ − 𝜆𝑤𝑙ℎ
The first-order conditions for stationarity are
𝐷𝑤 𝐿 = 𝑙 + 2ℎ − 𝜆𝑙ℎ = 0    (5.86)
𝐷𝑙 𝐿 = 𝑤 + 2ℎ − 𝜆𝑤ℎ = 0    (5.87)
𝐷ℎ 𝐿 = 2𝑤 + 2𝑙 − 𝜆𝑤𝑙 = 0    (5.88)
𝑤𝑙ℎ = 32
Subtracting (5.87) from (5.86)
𝑙 − 𝑤 = 𝜆(𝑙 − 𝑤)ℎ
This equation has two possible solutions. Either
𝜆 = 1/ℎ or 𝑙 = 𝑤
But if 𝜆 = 1/ℎ, (5.86) implies that ℎ = 0 and the volume is zero. Therefore, we conclude that 𝑤 = 𝑙. Substituting 𝑤 = 𝑙 in (5.87) and (5.88) gives
𝑤 + 2ℎ = 𝜆𝑤ℎ
4𝑤 = 𝜆𝑤²
from which we deduce that
𝜆 = 4/𝑤
Substituting in (5.87)
𝑤 + 2ℎ = (4/𝑤)𝑤ℎ = 4ℎ
which implies that
𝑤 = 2ℎ or ℎ = (1/2)𝑤
To achieve the required volume of 32 cubic metres requires that
𝑤 × 𝑙 × ℎ = 𝑤 × 𝑤 × (1/2)𝑤 = 32
so that the dimensions of the vat are
𝑤 = 4, 𝑙 = 4, ℎ = 2
The area of sheet metal required is
𝐴 = 𝑤𝑙 + 2𝑤ℎ + 2𝑙ℎ = 48
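This constrained minimum can be confirmed numerically (an illustrative sketch only; the solver, starting point and bounds are my own choices, not part of the solution):

```python
from scipy.optimize import minimize

# Sketch: minimize the sheet-metal area w*l + 2*w*h + 2*l*h of an open vat
# subject to the volume constraint w*l*h = 32.
def area(x):
    w, l, h = x
    return w*l + 2*w*h + 2*l*h

volume = {'type': 'eq', 'fun': lambda x: x[0]*x[1]*x[2] - 32}
res = minimize(area, x0=[1.0, 1.0, 1.0], constraints=[volume],
               bounds=[(0.1, None)]*3)
print(res.x, res.fun)   # approximately [4, 4, 2] and 48
```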
5.24 The Lagrangean for this problem is
𝐿(x, 𝜆) = 𝑥1² + 𝑥2² + 𝑥3² − 𝜆(2𝑥1 − 3𝑥2 + 5𝑥3 − 19)
A necessary condition for x∗ to solve the problem is that the Lagrangean be stationary
at x∗ , that is
𝐷𝑥1 𝐿 = 2𝑥∗1 − 2𝜆 = 0
𝐷𝑥2 𝐿 = 2𝑥∗2 + 3𝜆 = 0
𝐷𝑥3 𝐿 = 2𝑥∗3 − 5𝜆 = 0
which implies
𝑥∗1 = 𝜆, 𝑥∗2 = −(3/2)𝜆, 𝑥∗3 = (5/2)𝜆    (5.89)
It is also necessary that the solution satisfy the constraint, that is
2𝑥∗1 − 3𝑥∗2 + 5𝑥∗3 = 19
Substituting (5.89) into the constraint we get
2𝜆 + (9/2)𝜆 + (25/2)𝜆 = 19𝜆 = 19
which implies 𝜆 = 1. Substituting in (5.89), the solution is x∗ = (1, −3/2, 5/2). Since the constraint is affine and the objective (−𝑓) is concave, stationarity of the Lagrangean is also sufficient for a global optimum (Corollary 5.2.4).
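The problem amounts to finding the point of the plane 2𝑥1 − 3𝑥2 + 5𝑥3 = 19 nearest the origin, so the answer can also be confirmed with the standard projection formula (an illustrative check, not part of the original solution):

```python
import numpy as np

# Sketch: the point of the plane a'x = b nearest the origin is (b/||a||^2)*a.
a = np.array([2.0, -3.0, 5.0])
b = 19.0
x_star = (b / a.dot(a)) * a

print(x_star)          # [ 1.  -1.5  2.5], i.e. x* = (1, -3/2, 5/2)
print(a.dot(x_star))   # 19.0, so the constraint is satisfied
```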
5.25 The Lagrangean is
𝐿(𝑥1, 𝑥2, 𝜆) = 𝑥1^{𝛼} 𝑥2^{1−𝛼} − 𝜆(𝑝1 𝑥1 + 𝑝2 𝑥2 − 𝑚)
The Lagrangean is stationary where
𝐷𝑥1 𝐿 = 𝛼 𝑥1^{𝛼−1} 𝑥2^{1−𝛼} − 𝜆𝑝1 = 0
𝐷𝑥2 𝐿 = (1 − 𝛼) 𝑥1^{𝛼} 𝑥2^{−𝛼} − 𝜆𝑝2 = 0
Therefore the first-order conditions for a maximum are
𝛼 𝑥1^{𝛼−1} 𝑥2^{1−𝛼} = 𝜆𝑝1    (5.90)
(1 − 𝛼) 𝑥1^{𝛼} 𝑥2^{−𝛼} = 𝜆𝑝2    (5.91)
𝑝1 𝑥1 + 𝑝2 𝑥2 = 𝑚    (5.92)
Dividing (5.90) by (5.91) gives
𝛼 𝑥1^{𝛼−1} 𝑥2^{1−𝛼} / ((1 − 𝛼) 𝑥1^{𝛼} 𝑥2^{−𝛼}) = 𝑝1/𝑝2
which simplifies to
𝛼 𝑥2 / ((1 − 𝛼) 𝑥1) = 𝑝1/𝑝2
or
𝑝2 𝑥2 = ((1 − 𝛼)/𝛼) 𝑝1 𝑥1
Substituting in the budget constraint (5.92)
𝑝1 𝑥1 + ((1 − 𝛼)/𝛼) 𝑝1 𝑥1 = 𝑚
((𝛼 + (1 − 𝛼))/𝛼) 𝑝1 𝑥1 = 𝑚
so that
𝑥∗1 = (𝛼 / (𝛼 + (1 − 𝛼))) (𝑚/𝑝1)
From the budget constraint (5.92)
𝑥∗2 = ((1 − 𝛼) / (𝛼 + (1 − 𝛼))) (𝑚/𝑝2)
5.26 The Lagrangean is
𝐿(x, 𝜆) = 𝑥1^{𝛼1} 𝑥2^{𝛼2} . . . 𝑥𝑛^{𝛼𝑛} − 𝜆(𝑝1 𝑥1 + 𝑝2 𝑥2 + · · · + 𝑝𝑛 𝑥𝑛 − 𝑚)
The first-order conditions for a maximum are
𝐷𝑥𝑖 𝐿 = 𝛼𝑖 𝑥1^{𝛼1} 𝑥2^{𝛼2} . . . 𝑥𝑖^{𝛼𝑖−1} . . . 𝑥𝑛^{𝛼𝑛} − 𝜆𝑝𝑖 = 𝛼𝑖 𝑢(x)/𝑥𝑖 − 𝜆𝑝𝑖 = 0
or
𝛼𝑖 𝑢(x)/𝜆 = 𝑝𝑖 𝑥𝑖    𝑖 = 1, 2, . . . , 𝑛    (5.93)
Summing over all goods and using the budget constraint
∑_{𝑖=1}^{𝑛} 𝛼𝑖 𝑢(x)/𝜆 = (𝑢(x)/𝜆) ∑_{𝑖=1}^{𝑛} 𝛼𝑖 = ∑_{𝑖=1}^{𝑛} 𝑝𝑖 𝑥𝑖 = 𝑚
Letting ∑_{𝑖=1}^{𝑛} 𝛼𝑖 = 𝛼, this implies
𝑢(x)/𝜆 = 𝑚/𝛼
Substituting in (5.93)
𝑝𝑖 𝑥𝑖 = (𝛼𝑖/𝛼) 𝑚
or
𝑥∗𝑖 = (𝛼𝑖/𝛼) (𝑚/𝑝𝑖)    𝑖 = 1, 2, . . . , 𝑛
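So each budget share is 𝛼𝑖/𝛼, independent of prices. A numerical illustration with three goods (the parameter values are arbitrary assumptions, and the SciPy check is only a sketch, not part of the original solution):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: maximize the Cobb-Douglas utility x1**a1 * x2**a2 * x3**a3 subject
# to p'x = m, and compare with the closed form x_i* = (a_i/sum(a)) * m / p_i.
alpha = np.array([1.0, 2.0, 3.0])
p = np.array([2.0, 3.0, 5.0])
m = 60.0

def neg_u(x):
    return -np.prod(x ** alpha)

budget = {'type': 'eq', 'fun': lambda x: p @ x - m}
res = minimize(neg_u, x0=np.ones(3), constraints=[budget],
               bounds=[(1e-6, None)]*3)

print(res.x)                          # numerical optimum
print(alpha / alpha.sum() * m / p)    # closed form: [5.0, 6.67, 6.0]
```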
5.27 The Lagrangean is
𝐿(x, 𝜆) = 𝑤1 𝑥1 + 𝑤2 𝑥2 − 𝜆(𝑥1^{𝜌} + 𝑥2^{𝜌} − 𝑦^{𝜌})
The necessary conditions for stationarity are
𝐷𝑥1 𝐿(x, 𝜆) = 𝑤1 − 𝜆𝜌 𝑥1^{𝜌−1} = 0
𝐷𝑥2 𝐿(x, 𝜆) = 𝑤2 − 𝜆𝜌 𝑥2^{𝜌−1} = 0
or
𝑤1 = 𝜆𝜌 𝑥1^{𝜌−1}
𝑤2 = 𝜆𝜌 𝑥2^{𝜌−1}
which reduce to
𝑤1/𝑤2 = 𝑥1^{𝜌−1} / 𝑥2^{𝜌−1}
𝑥2^{𝜌−1} = (𝑤2/𝑤1) 𝑥1^{𝜌−1}
𝑥2^{𝜌} = (𝑤2/𝑤1)^{𝜌/(𝜌−1)} 𝑥1^{𝜌}
Substituting in the production constraint
𝑥1^{𝜌} + (𝑤2/𝑤1)^{𝜌/(𝜌−1)} 𝑥1^{𝜌} = 𝑦^{𝜌}
(1 + (𝑤2/𝑤1)^{𝜌/(𝜌−1)}) 𝑥1^{𝜌} = 𝑦^{𝜌}
we can solve for 𝑥1
𝑥1^{𝜌} = (1 + (𝑤2/𝑤1)^{𝜌/(𝜌−1)})^{−1} 𝑦^{𝜌}
𝑥1 = (1 + (𝑤2/𝑤1)^{𝜌/(𝜌−1)})^{−1/𝜌} 𝑦
Similarly
𝑥2 = (1 + (𝑤1/𝑤2)^{𝜌/(𝜌−1)})^{−1/𝜌} 𝑦
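These are the conditional factor demands for the CES technology. A quick numerical cross-check (the parameter values are illustrative assumptions, and the optimizer call is only a sketch):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: minimize w1*x1 + w2*x2 subject to x1**rho + x2**rho = y**rho and
# compare with the closed-form demands derived above.
w1, w2, rho, y = 1.0, 2.0, 0.5, 10.0

cost = lambda x: w1*x[0] + w2*x[1]
ces = {'type': 'eq', 'fun': lambda x: x[0]**rho + x[1]**rho - y**rho}
res = minimize(cost, x0=[5.0, 5.0], constraints=[ces], bounds=[(1e-6, None)]*2)

r = rho / (rho - 1)
x1_star = (1 + (w2/w1)**r)**(-1/rho) * y
x2_star = (1 + (w1/w2)**r)**(-1/rho) * y
print(res.x, (x1_star, x2_star))   # both approximately (4.44, 1.11)
```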
5.28 Example 5.27 is flawed. The optimum of the constrained maximization problem
(ℎ = 𝑤/2) is in fact a saddle point of the Lagrangean. It maximizes the Lagrangean in
the feasible set, but not globally.
The net benefit approach to the Lagrange multiplier method is really only applicable
when the Lagrangean (net benefit function) is concave, so that every stationary point
is a global maximum. This requirement is satisfied in many standard examples, such
as the consumer’s problem (Example 5.21) and cost minimization (Example 5.28). It
is also met in Example 5.29. The requirement of concavity is not recognized in the
text, and Section 5.3.6 should be amended accordingly.
5.29 The Lagrangean
𝐿(x, 𝜆) = ∑_{𝑖=1}^{𝑛} 𝑐𝑖(𝑥𝑖) + 𝜆 (𝐷 − ∑_{𝑖=1}^{𝑛} 𝑥𝑖)    (5.94)
can be rewritten as
𝐿(x, 𝜆) = − ∑_{𝑖=1}^{𝑛} (𝜆𝑥𝑖 − 𝑐𝑖(𝑥𝑖)) + 𝜆𝐷    (5.95)
The 𝑖th term in the sum is the net profit of plant 𝑖 if its output is valued at 𝜆. Therefore,
if the company undertakes to buy electricity from its plants at the price 𝜆 and instructs
each plant manager to produce so as to maximize the plant’s net profit, each manager
will be induced to choose an output level which maximizes the profit of the company
as a whole. This is the case whether the price 𝜆 is the market price at which the
company can buy electricity from external suppliers or the shadow price determined
by the need to satisfy the total demand 𝐷. In this way, the shadow price 𝜆 can be used
to decentralize the production decision.
5.30 The Lagrangean for this problem is
𝐿(𝑥1, 𝑥2, 𝜆1, 𝜆2) = 𝑥1 𝑥2 − 𝜆1(𝑥1² + 2𝑥2² − 3) − 𝜆2(2𝑥1² + 𝑥2² − 3)
The first-order conditions for stationarity
𝐷𝑥1 𝐿 = 𝑥2 − 2𝜆1 𝑥1 − 4𝜆2 𝑥1 = 0
𝐷𝑥2 𝐿 = 𝑥1 − 4𝜆1 𝑥2 − 2𝜆2 𝑥2 = 0
can be written as
𝑥2 = 2(𝜆1 + 2𝜆2)𝑥1    (5.96)
𝑥1 = 2(2𝜆1 + 𝜆2)𝑥2    (5.97)
which must be satisfied along with the complementary slackness conditions
𝑥1² + 2𝑥2² − 3 ≤ 0    𝜆1 ≥ 0    𝜆1(𝑥1² + 2𝑥2² − 3) = 0
2𝑥1² + 𝑥2² − 3 ≤ 0    𝜆2 ≥ 0    𝜆2(2𝑥1² + 𝑥2² − 3) = 0
First suppose that both constraints are slack, so that 𝜆1 = 𝜆2 = 0. Then the first-order conditions (5.96) and (5.97) imply that 𝑥1 = 𝑥2 = 0. (0, 0) satisfies the Kuhn-Tucker conditions. Next suppose that the first constraint is binding while the second constraint is slack (𝜆2 = 0). The first-order conditions (5.96) and (5.97) have two solutions, 𝑥1 = √(3/2), 𝑥2 = √3/2, 𝜆1 = 1/(2√2) and 𝑥1 = −√(3/2), 𝑥2 = −√3/2, 𝜆1 = 1/(2√2), but these violate the second constraint. Similarly, there is no solution in which the first constraint is slack and the second constraint binding. Finally, assume that both constraints are binding. This implies that 𝑥1 = 𝑥2 = 1 or 𝑥1 = 𝑥2 = −1, which points satisfy the first-order conditions (5.96) and (5.97) with 𝜆1 = 𝜆2 = 1/6.
We conclude that three points satisfy the Kuhn-Tucker conditions, namely (0, 0), (1, 1)
and (−1, −1). Noting the objective function, we observe that (0, 0) in fact minimizes
the objective. We conclude that there are two local maxima, (1, 1) and (−1, −1), both
of which achieve the same level of the objective function.
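A numerical confirmation of the two maxima (an illustrative sketch; the starting points below are arbitrary choices, not part of the solution):

```python
from scipy.optimize import minimize

# Sketch: maximize x1*x2 subject to x1**2 + 2*x2**2 <= 3 and
# 2*x1**2 + x2**2 <= 3 by minimizing -x1*x2 from two starting points.
cons = [{'type': 'ineq', 'fun': lambda x: 3 - x[0]**2 - 2*x[1]**2},
        {'type': 'ineq', 'fun': lambda x: 3 - 2*x[0]**2 - x[1]**2}]

for x0 in ([0.5, 0.5], [-0.5, -0.5]):
    res = minimize(lambda x: -x[0]*x[1], x0=x0, constraints=cons)
    print(res.x)   # approximately [1, 1] and [-1, -1], each with value 1
```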
5.31 Dividing the first-order conditions, we obtain
𝐷𝑘 𝑅(𝑘, 𝑙) / 𝐷𝑙 𝑅(𝑘, 𝑙) = 𝑟/𝑤 − 𝜆(𝑠 − 𝑟) / ((1 − 𝜆)𝑤)
Using the revenue function
𝑅(𝑘, 𝑙) = 𝑝(𝑓(𝑘, 𝑙))𝑓(𝑘, 𝑙)
the marginal revenue products of capital and labor are
𝐷𝑘 𝑅(𝑘, 𝑙) = 𝐷𝑦 𝑝(𝑦) 𝐷𝑘 𝑓(𝑘, 𝑙)
𝐷𝑙 𝑅(𝑘, 𝑙) = 𝐷𝑦 𝑝(𝑦) 𝐷𝑙 𝑓(𝑘, 𝑙)
so that their ratio is equal to the ratio of the marginal products
𝐷𝑘 𝑅(𝑘, 𝑙) / 𝐷𝑙 𝑅(𝑘, 𝑙) = 𝐷𝑘 𝑓(𝑘, 𝑙) / 𝐷𝑙 𝑓(𝑘, 𝑙)
The necessary condition for optimality can therefore be expressed as
𝐷𝑘 𝑓(𝑘, 𝑙) / 𝐷𝑙 𝑓(𝑘, 𝑙) = 𝑟/𝑤 − (𝜆/(1 − 𝜆)) (𝑠 − 𝑟)/𝑤
whereas the necessary condition for cost minimization is (Example 5.16)
𝐷𝑘 𝑓(𝑘, 𝑙) / 𝐷𝑙 𝑓(𝑘, 𝑙) = 𝑟/𝑤
The regulated firm does not use the cost-minimizing combination of inputs.
5.32 The general constrained optimization problem
max 𝑓 (x)
x
subject to g(x) ≤ 0
can be transformed into an equivalent equality constrained problem
max 𝑓 (x)
x,s
subject to g(x) + s = 0 and s ≥ 0
through the addition of nonnegative slack variables s. Letting ĝ(x, s) = g(x) + s, the
first-order conditions for a local optimum are (Exercise 5.16)
𝐷x 𝑓(x∗) = ∑_𝑗 𝜆𝑗 𝐷x 𝑔̂𝑗(x∗, s∗) = ∑_𝑗 𝜆𝑗 𝐷x 𝑔𝑗(x∗)
0 = 𝐷s 𝑓(x∗) ≤ ∑_𝑗 𝜆𝑗 𝐷s 𝑔̂𝑗(x∗, s∗) = 𝝀    (5.98)
𝝀𝑇 s = 0,  s ≥ 0    (5.99)
Condition (5.98) implies that 𝜆𝑗 ≥ 0 for every 𝑗. Furthermore, rewriting the constraint
as
s = −g(x)
the complementary slackness condition (5.99) becomes
𝝀𝑇 g(x) = 0
g(x) ≤ 0
This establishes the necessary conditions of Theorem 5.3.
5.33 The equality constrained maximization problem
max 𝑓 (x)
x
subject to g(x) = 0
is equivalent to the problem
max 𝑓 (x)
x
subject to g(x) ≤ 0
−g(x) ≤ −0
By the Kuhn-Tucker theorem (Theorem 5.3), there exists nonnegative multipliers
+
−
−
+
−
𝜆+
1 , 𝜆2 , . . . , 𝜆𝑚 and 𝜆1 , 𝜆2 , . . . , 𝜆𝑚 such that
∑
∑
∗
∗
𝐷𝑓 (x∗ ) =
𝜆+
𝜆−
(5.100)
𝑗 𝐷𝑔𝑗 [x ] −
𝑗 𝐷𝑔𝑗 [x ] = 0
248
c 2001 Michael Carter
⃝
All rights reserved
Solutions for Foundations of Mathematical Economics
with
−
𝜆+
𝑗 𝑔𝑗 (x) = 0 and 𝜆𝑗 𝑔𝑗 (x) = 0
𝑗 = 1, 2, . . . , 𝑚
−
Defining 𝜆𝑗 = 𝜆+
𝑗 − 𝜆𝑗 , (5.100) can be written as
𝐷𝑓 (x∗ ) =
∑
𝜆𝑗 𝐷𝑔𝑗 [x∗ ]
which is the first-order condition for an equality constrained problem. Furthermore, if
x∗ satisfies the inequality constraints
𝑔(x∗ ) ≤ 0 and 𝑔(x∗ ) ≥ 0
it satisfies the equality
𝑔(x∗ ) = 0
5.34 Suppose that x∗ solves the problem
max c𝑇 x subject to 𝐴x ≤ 0
x
with Lagrangean
𝐿 = c𝑇 x − 𝝀𝑇 𝐴x
Then there exists 𝝀 ≥ 0 such that
𝐷x 𝐿 = c𝑇 − 𝝀𝑇 𝐴 = 0
that is, 𝐴𝑇 𝝀 = c. Conversely, if there is no solution, there exists x such that 𝐴x ≤ 0
and
c𝑇 x > c𝑇 0 = 0
5.35 There are two binding constraints at (4, 0), namely
𝑔(𝑥1 , 𝑥2 ) = 𝑥1 + 𝑥2 ≤ 4
ℎ(𝑥1 , 𝑥2 ) = −𝑥2 ≤ 0
with gradients
∇𝑔(4, 0) = (1, 1)
∇ℎ(4, 0) = (0, 1)
which are linearly independent. Therefore the binding constraints are regular at (4, 0).
5.36 The Lagrangean for this problem is
𝐿(x, 𝜆) = 𝑢(x) − 𝜆(p𝑇 x − 𝑚)
and the first-order (Kuhn-Tucker) conditions are (Corollary 5.3.2)
𝐷𝑥𝑖 𝐿[x∗, 𝜆] = 𝐷𝑥𝑖 𝑢[x∗] − 𝜆𝑝𝑖 ≤ 0    𝑥∗𝑖 ≥ 0    𝑥∗𝑖 (𝐷𝑥𝑖 𝑢[x∗] − 𝜆𝑝𝑖) = 0    (5.101)
p𝑇 x∗ ≤ 𝑚    𝜆 ≥ 0    𝜆(p𝑇 x∗ − 𝑚) = 0    (5.102)
for every good 𝑖 = 1, 2, . . . , 𝑛. Two cases must be distinguished.
Case 1 𝜆 > 0 This implies that p𝑇 x = 𝑚, the consumer spends all her income.
Condition (5.101) implies
𝐷𝑥𝑖 𝑢[x∗ ] ≤ 𝜆𝑝𝑖 for every 𝑖 with 𝐷𝑥𝑖 𝑢[x∗ ] = 𝜆𝑝𝑖 for every 𝑖 for which 𝑥𝑖 > 0
This case was analyzed in Example 5.17.
Case 2 𝜆 = 0 This allows the possibility that the consumer does not spend all her
income. Substituting 𝜆 = 0 in (5.101) we have 𝐷𝑥𝑖 𝑢[x∗ ] = 0 for every 𝑖. At the
optimal consumption bundle x∗ , the marginal utility of every good is zero. The
consumer is satiated, that is no additional consumption can increase satisfaction.
This case was analyzed in Example 5.31.
In summary, at the optimal consumption bundle x∗ , either
∙ the consumer is satiated (𝐷𝑥𝑖 𝑢[x∗ ] = 0 for every 𝑖) or
∙ the consumer consumes only those goods whose marginal utility exceeds the
threshold 𝐷𝑥𝑖 𝑢[x∗ ] ≥ 𝜆𝑝𝑖 and adjusts consumption so that the marginal
utility is proportional to price for all consumed goods.
5.37 Assume x ∈ 𝐷(x∗). Then there exists 𝛼̄ ∈ ℜ such that x∗ + 𝛼x ∈ 𝑆 for every 0 ≤ 𝛼 ≤ 𝛼̄. Define 𝑔 ∈ 𝐹([0, 𝛼̄]) by 𝑔(𝛼) = 𝑓(x∗ + 𝛼x). If x∗ is a local maximum, 𝑔 has a local maximum at 0, and therefore 𝑔′(0) ≤ 0 (Theorem 5.1). By the chain rule (Exercise 4.22), this implies
𝑔′(0) = 𝐷𝑓[x∗](x) ≤ 0
and therefore x ∉ 𝐻⁺(x∗).
5.38 If x is a tangent vector, so is 𝛽x for any nonnegative 𝛽 (replace 1/𝛼𝑘 by 𝛽/𝛼𝑘 in the preceding definition). Also, trivially, x = 0 is a tangent vector (with x𝑘 = x∗ and 𝛼𝑘 = 1 for all 𝑘). The set 𝑇 of all vectors tangent to 𝑆 at x∗ is therefore a nonempty cone, which is called the cone of tangents to 𝑆 at x∗.
To show that 𝑇 is closed, let x𝑛 be a sequence in 𝑇 converging to some x ∈ ℜ𝑛. We need to show that x ∈ 𝑇. Since x𝑛 ∈ 𝑇, there exist feasible points x𝑚𝑛 ∈ 𝑆 and scalars 𝛼𝑚𝑛 such that
(x𝑚𝑛 − x∗)/𝛼𝑚𝑛 → x𝑛 as 𝑚 → ∞
For any 𝑁 choose 𝑛 such that
∥x𝑛 − x∥ ≤ 1/(2𝑁)
and then choose 𝑚 such that
∥x𝑚𝑛 − x∗∥ ≤ 1/𝑁 and ∥(x𝑚𝑛 − x∗)/𝛼𝑚𝑛 − x𝑛∥ ≤ 1/(2𝑁)
Relabeling x𝑚𝑛 as x𝑁 and 𝛼𝑚𝑛 as 𝛼𝑁, we have constructed a sequence x𝑁 in 𝑆 such that
∥x𝑁 − x∗∥ ≤ 1/𝑁
and
∥(x𝑁 − x∗)/𝛼𝑁 − x∥ ≤ ∥(x𝑁 − x∗)/𝛼𝑁 − x𝑛∥ + ∥x𝑛 − x∥ ≤ 1/𝑁
Letting 𝑁 → ∞, x𝑁 converges to x∗ and (x𝑁 − x∗)/𝛼𝑁 converges to x, which proves that x ∈ 𝑇 as required.
5.39 Assume x ∈ 𝐷(x∗). That is, there exists 𝛼̄ such that x∗ + 𝛼x ∈ 𝑆 for every 𝛼 ∈ [0, 𝛼̄]. For 𝑘 = 1, 2, . . . , let 𝛼𝑘 = 𝛼̄/𝑘. Then x𝑘 = x∗ + 𝛼𝑘 x ∈ 𝑆, x𝑘 → x∗ and
(x𝑘 − x∗)/𝛼𝑘 = (x∗ + 𝛼𝑘 x − x∗)/𝛼𝑘 = x
Therefore, x ∈ 𝑇(x∗).
5.40 Let dx ∈ 𝑇(x∗). Then there exists a feasible sequence {x𝑘} converging to x∗ and a sequence {𝛼𝑘} of nonnegative scalars such that the sequence {(x𝑘 − x∗)/𝛼𝑘} converges to dx. For any 𝑗 ∈ 𝐵(x∗), 𝑔𝑗(x∗) = 0 and
𝑔𝑗(x𝑘) = 𝐷𝑔𝑗[x∗](x𝑘 − x∗) + 𝜂𝑗 ∥x𝑘 − x∗∥ where 𝜂𝑗 → 0 as 𝑘 → ∞. This implies
(1/𝛼𝑘) 𝑔𝑗(x𝑘) = 𝐷𝑔𝑗[x∗]((x𝑘 − x∗)/𝛼𝑘) + 𝜂𝑗 ∥(x𝑘 − x∗)/𝛼𝑘∥
Since x𝑘 is feasible
(1/𝛼𝑘) 𝑔𝑗(x𝑘) ≤ 0
and therefore
𝐷𝑔𝑗[x∗]((x𝑘 − x∗)/𝛼𝑘) + 𝜂𝑗 ∥(x𝑘 − x∗)/𝛼𝑘∥ ≤ 0
Letting 𝑘 → ∞ we conclude that
𝐷𝑔𝑗[x∗](dx) ≤ 0
That is, dx ∈ 𝐿.
5.41 𝐿0 ⊆ 𝐿1 by definition. Assume dx ∈ 𝐿1 . That is
𝐷𝑔𝑗[x∗](dx) < 0  for every 𝑗 ∈ 𝐵𝑁(x∗)    (5.103)
𝐷𝑔𝑗[x∗](dx) ≤ 0  for every 𝑗 ∈ 𝐵𝐶(x∗)    (5.104)
where 𝐵 𝐶 (x∗ ) = 𝐵(x∗ ) − 𝐵 𝑁 (x∗ ) is the set of concave binding constraints at x∗ . By
concavity (Exercise 4.67), (5.104) implies that
𝑔𝑗 (x∗ + 𝛼dx) ≤ 𝑔𝑗 (x∗ ) = 0 for every 𝛼 ≥ 0 and 𝑗 ∈ 𝐵 𝐶 (x∗ )
From (5.103) there exists some 𝛼𝑁 such that
𝑔𝑗 (x∗ + 𝛼dx) < 0 for every 𝛼 ∈ [0, 𝛼𝑁 ] and 𝑗 ∈ 𝐵 𝑁 (x∗ )
Furthermore, since 𝑔𝑗 (x∗ ) < 0 for all 𝑗 ∈ 𝑆(x∗ ), there exists some 𝛼𝑆 > 0 such that
𝑔𝑗 (x∗ + 𝛼dx) < 0 for every 𝛼 ∈ [0, 𝛼𝑆 ] and 𝑗 ∈ 𝑆(x∗ )
Setting 𝛼̄ = min{𝛼𝑁, 𝛼𝑆} we have
𝑔𝑗(x∗ + 𝛼dx) ≤ 0 for every 𝛼 ∈ [0, 𝛼̄] and 𝑗 = 1, 2, . . . , 𝑚
or
x∗ + 𝛼dx ∈ 𝐺 = { x : 𝑔𝑗(x) ≤ 0, 𝑗 = 1, 2, . . . , 𝑚 } for every 𝛼 ∈ [0, 𝛼̄]
Therefore dx ∈ 𝐷. We have previously shown (Exercises 5.39 and 5.40) that 𝐷 ⊂ 𝑇 ⊂
𝐿.
5.42 Assume that g satisfies the Quasiconvex CQ condition at x∗ . That is, for every
𝑗 ∈ 𝐵(x∗ ), 𝑔𝑗 is quasiconvex, ∇𝑔𝑗 (x∗ ) ∕= 0 and there exists x̂ such that 𝑔𝑗 (x̂) < 0.
Consider the perturbation dx = x̂ − x∗ . Quasiconvexity and regularity implies that
for every binding constraint 𝑗 ∈ 𝐵(x∗ ) (Exercises 4.74 and 4.75)
𝑔𝑗 (x̂) < 𝑔𝑗 (x∗ ) =⇒ ∇𝑔𝑗 (x∗ )𝑇 (x̂ − x∗ ) = ∇𝑔𝑗 (x∗ )𝑇 dx < 0
That is
𝐷𝑔𝑗 [x∗ ](dx) < 0
Therefore, dx ∈ 𝐿0 (x∗ ) ∕= ∅ and g satisfies the Cottle constraint qualification condition.
5.43 If the binding constraints 𝐵(x∗ ) are regular at x∗ , their gradients are linearly
independent. That is, there exist no scalars 𝜆𝑗, 𝑗 ∈ 𝐵(x∗), not all zero, such that
∑_{𝑗∈𝐵(x∗)} 𝜆𝑗 ∇𝑔𝑗[x∗] = 0
By Gordan’s theorem (Exercise 3.239), there exists dx ∈ ℜ𝑛 such that
∇𝑔𝑗 [x∗ ]𝑇 dx < 0 for every 𝑗 ∈ 𝐵(x∗ )
Therefore dx ∈ 𝐿0 (x∗ ) ∕= ∅.
5.44 If every 𝑔𝑗 is concave, 𝐵𝑁(x∗) = ∅ and AHUCQ is trivially satisfied (with dx = 0 ∈ 𝐿1). For every 𝑗, let
𝑆𝑗 = { dx : 𝐷𝑔𝑗[x∗](dx) < 0 }
and let 𝑆̄𝑗 denote its closure. Then
𝐿1(x∗) = ( ∩_{𝑖∈𝐵𝑁(x∗)} 𝑆𝑖 ) ∩ ( ∩_{𝑖∈𝐵𝐶(x∗)} 𝑆̄𝑖 )
where 𝐵𝐶(x∗) and 𝐵𝑁(x∗) are respectively the concave and nonconcave constraints binding at x∗. If g satisfies the AHUCQ condition, 𝐿1(x∗) ∕= ∅ and Exercise 1.219 implies that
𝐿̄1 = ( ∩_{𝑖∈𝐵𝑁(x∗)} 𝑆̄𝑖 ) ∩ ( ∩_{𝑖∈𝐵𝐶(x∗)} 𝑆̄𝑖 )
Now
𝑆̄𝑖 = { dx : 𝐷𝑔𝑖[x∗](dx) ≤ 0 }
and therefore
𝐿̄1 = ∩_{𝑖∈𝐵(x∗)} 𝑆̄𝑖 = 𝐿
Since (Exercise 5.41)
𝐿1 ⊆ 𝑇 ⊆ 𝐿
and 𝑇 is closed (Exercise 5.38), we have
𝐿 = 𝐿̄1 ⊆ 𝑇 ⊆ 𝐿
which implies that 𝑇 = 𝐿.
5.45 For each 𝑗 = 1, 2, . . . , 𝑚, either
∙ 𝑔𝑗(x∗) < 0, which implies that 𝜆𝑗 = 0 and therefore 𝜆𝑗 𝐷𝑔𝑗[x∗](x − x∗) = 0, or
∙ 𝑔𝑗(x∗) = 0. Since 𝑔𝑗 is quasiconvex and 𝑔𝑗(x) ≤ 0 = 𝑔𝑗(x∗), Exercise 4.73 implies that 𝐷𝑔𝑗[x∗](x − x∗) ≤ 0. Since 𝜆𝑗 ≥ 0, this implies that 𝜆𝑗 𝐷𝑔𝑗[x∗](x − x∗) ≤ 0.
We have shown that for every 𝑗, 𝜆𝑗 𝐷𝑔𝑗[x∗](x − x∗) ≤ 0. The first-order condition implies that
𝐷𝑓[x∗](x − x∗) = ∑_𝑗 𝜆𝑗 𝐷𝑔𝑗[x∗](x − x∗) ≤ 0
If
∇𝑓(x∗) ≤ ∑_𝑗 𝜆𝑗 ∇𝑔𝑗(x∗)    x∗ ≥ 0    (∇𝑓(x∗) − ∑_𝑗 𝜆𝑗 ∇𝑔𝑗(x∗))𝑇 x∗ = 0
the first-order conditions imply that for every x ∈ 𝐺, x ≥ 0 and
(∇𝑓(x∗) − ∑_𝑗 𝜆𝑗 ∇𝑔𝑗(x∗))𝑇 x ≤ 0
and therefore
(∇𝑓(x∗) − ∑_𝑗 𝜆𝑗 ∇𝑔𝑗(x∗))𝑇 (x − x∗) ≤ 0
or
∇𝑓(x∗)𝑇 (x − x∗) ≤ ∑_𝑗 𝜆𝑗 ∇𝑔𝑗(x∗)𝑇 (x − x∗) ≤ 0
5.46 Assuming 𝑥𝑏 = 𝑥𝑑 = 0, the constraints become
2𝑥𝑐 ≤ 30
2𝑥𝑐 ≤ 25
𝑥𝑐 ≤ 20
The first and third constraints are redundant, which implies that 𝜆𝑓 = 𝜆𝑚 = 0. Complementary slackness requires that, if 𝑥𝑐 > 0,
𝐷𝑥𝑐 𝐿 = 1 − 2𝜆𝑓 − 2𝜆𝑙 − 𝜆𝑚 = 0
or 𝜆𝑙 = 1/2. Evaluating the Lagrangean at 𝝀 = (0, 1/2, 0) yields
𝐿(x, (0, 1/2, 0)) = 3𝑥𝑏 + 𝑥𝑐 + 3𝑥𝑑 − (1/2)(𝑥𝑏 + 2𝑥𝑐 + 3𝑥𝑑 − 25)
                  = 25/2 + (5/2)𝑥𝑏 + (3/2)𝑥𝑑
This basic feasible solution is clearly not optimal, since profit would be increased by
increasing either 𝑥𝑏 or 𝑥𝑑 .
Following the hint, we allow 𝑥𝑑 > 0, retaining the assumption that 𝑥𝑏 = 0. We must
be alert to the possibility that 𝑥𝑐 = 0. With 𝑥𝑏 = 0, the constraints become
2𝑥𝑐 + 𝑥𝑑 ≤ 30
2𝑥𝑐 + 3𝑥𝑑 ≤ 25
𝑥𝑐 + 𝑥𝑑 ≤ 20
The first constraint is redundant, which implies that 𝜆𝑓 = 0. If 𝑥𝑑 > 0, complementary
slackness requires that
𝐷𝑥𝑑 𝐿 = 3 − 3𝜆𝑙 − 𝜆𝑚 = 0
or
𝜆𝑚 = 3(1 − 𝜆𝑙 )
(5.105)
The requirement that 𝜆𝑚 ≥ 0 implies that 𝜆𝑙 ≤ 1. Substituting (5.105) in the second
first-order condition
𝐷𝑥𝑐 𝐿 = 1 − 2𝜆𝑙 − 𝜆𝑚 = 1 − 2𝜆𝑙 − 3(1 − 𝜆𝑙 ) = −2 + 𝜆𝑙
implies that
𝐷𝑥𝑐 𝐿 = −2 + 𝜆𝑙 < 0
for every 𝜆𝑙 ≤ 1
Complementary slackness then implies that 𝑥𝑐 = 0.
The constraints now become
𝑥𝑑 ≤ 30
3𝑥𝑑 ≤ 25
𝑥𝑑 ≤ 20
The first and third are redundant, so that 𝜆𝑓 = 𝜆𝑚 = 0. Equation (5.105) then implies that 𝜆𝑙 = 1.
Evaluating the Lagrangean at this point (𝜆 = 0, 1, 0), we have
𝐿(𝑥, (0, 1, 0)) = 3𝑥𝑏 + 𝑥𝑐 + 3𝑥𝑑
− (𝑥𝑏 + 2𝑥𝑐 + 3𝑥𝑑 − 25)
= 25 + 2𝑥𝑏 − 𝑥𝑐
Clearly this is not an optimal solution; an increase in 𝑥𝑏 is indicated. This leads us to the hypothesis 𝑥𝑏 > 0, 𝑥𝑑 > 0, 𝑥𝑐 = 0, which was evaluated in the text and in fact leads to the optimal solution.
5.47 If we ignore the hint and consider solutions with 𝑥𝑏 > 0, 𝑥𝑐 ≥ 0, 𝑥𝑑 = 0, the
constraints become
2𝑥𝑏 + 2𝑥𝑐 ≤ 30
𝑥𝑏 + 2𝑥𝑐 ≤ 25
2𝑥𝑏 + 𝑥𝑐 ≤ 20
These three constraints are linearly dependent, so that any one of them is redundant
and can be eliminated. For example, 3/2 times the first constraint is equal to the sum of
the second and third constraints. The feasible solution 𝑥𝑏 = 5, 𝑥𝑐 = 10, 𝑥𝑑 = 0, where the constraints are linearly dependent, is known as a degenerate solution. Degeneracy is a significant feature of linear programming, allowing the theoretical possibility of a breakdown in the simplex algorithm. Fortunately, such breakdown seems very rare in practice. Degeneracy at the optimal solution indicates multiple optima.
One way to proceed in this example is to arbitrarily designate one constraint as redundant, assuming the corresponding multiplier is zero. Arbitrarily choosing 𝜆𝑚 = 0 and proceeding as before, complementary slackness (𝑥𝑏 > 0) requires that
𝐷𝑥𝑏 𝐿 = 3 − 2𝜆𝑓 − 𝜆𝑙 = 0
or
𝜆𝑙 = 3 − 2𝜆𝑓    (5.106)
Nonnegativity of 𝜆𝑙 implies that 𝜆𝑓 ≤ 3/2.
Substituting (5.106) in the second first-order condition yields
𝐷𝑥𝑐 𝐿 = 1 − 2𝜆𝑓 − 2𝜆𝑙
= 1 − 2𝜆𝑓 − 2(3 − 2𝜆𝑓 )
= −5 + 2𝜆𝑓 < 0 for every 𝜆𝑓 ≤
3
2
Complementary slackness therefore implies that 𝑥𝑐 = 0, which takes us back to the
starting point of the presentation in the text, where 𝑥𝑏 > 0, 𝑥𝑐 = 𝑥𝑑 = 0.
5.48 Assume that (c1, 𝑧1) and (c2, 𝑧2) belong to 𝐵. That is
𝑧1 ≥ 𝑧∗    c1 ≤ 0
𝑧2 ≥ 𝑧∗    c2 ≤ 0
For any 𝛼 ∈ (0, 1),
𝑧̄ = 𝛼𝑧1 + (1 − 𝛼)𝑧2 ≥ 𝑧∗
c̄ = 𝛼c1 + (1 − 𝛼)c2 ≤ 0
and therefore (c̄, 𝑧̄) ∈ 𝐵. This shows that 𝐵 is convex. Let 1 = (1, 1, . . . , 1) ∈ ℜ𝑚. Then (c − 1, 𝑧 + 1) ∈ int 𝐵 ∕= ∅. Therefore 𝐵 has a nonempty interior.
5.49 Let (c, 𝑧) ∈ int 𝐵. This implies that c < 0 and 𝑧 > 𝑧 ∗ . Since 𝑣 is monotone
𝑣(c) ≤ 𝑣(0) = z∗ < 𝑧
which implies that (c, 𝑧) ∈
/ 𝐴.
5.50 The linear functional 𝐿 can be decomposed into separate components, so that
there exists (Exercise 3.47) 𝜑 ∈ 𝑌 ∗ and 𝛼 ∈ ℜ such that
𝐿(c, 𝑧) = 𝛼𝑧 − 𝜑(c)
Assuming 𝑌 ⊆ ℜ𝑚 , there exists (Proposition 3.4) 𝝀 ∈ ℜ𝑚 such that 𝜑(c) = 𝝀𝑇 c and
therefore
𝐿(c, 𝑧) = 𝛼𝑧 − 𝝀𝑇 c
The point (0, 𝑧 ∗ + 1) belongs to 𝐵. Therefore, by (5.75),
𝐿(0, 𝑧 ∗ ) ≤ 𝐿(0, 𝑧 ∗ + 1)
which implies that
𝛼𝑧 ∗ − 𝝀𝑇 0 ≤ 𝛼(𝑧 ∗ + 1) − 𝝀𝑇 0
or 𝛼 ≥ 0. Similarly, let { e1 , e2 , . . . , e𝑚 } denote the standard basis for ℜ𝑚 (Example
1.79). For any 𝑗 = 1, 2, . . . , 𝑚, the point (0 − e𝑗 , 𝑧 ∗ ) (which corresponds to decreasing
resource 𝑗 by one unit) belongs to 𝐵 and therefore (from (5.75))
𝛼𝑧∗ − 𝝀𝑇 (0 − e𝑗) = 𝛼𝑧∗ + 𝜆𝑗 ≥ 𝛼𝑧∗ − 𝝀𝑇 0 = 𝛼𝑧∗
which implies that 𝜆𝑗 ≥ 0.
5.51 Let
ĉ = g(x̂) < 0 and 𝑧̂ = 𝑓(x̂)
Suppose 𝛼 = 0. Then, since 𝐿 is nonzero, at least one component of 𝝀 must be nonzero. That is, 𝝀 ≩ 0 and therefore
𝝀𝑇 ĉ < 0    (5.107)
But (ĉ, 𝑧̂) ∈ 𝐴 and (5.74) implies
𝛼𝑧̂ − 𝝀𝑇 ĉ ≤ 𝛼𝑧∗ − 𝝀𝑇 0
and therefore 𝛼 = 0 implies
𝝀𝑇 ĉ ≥ 0
contradicting (5.107). Therefore, we conclude that 𝛼 > 0.
5.52 The utility's optimization problem is
max_{y,𝑌≥0} 𝑆(y, 𝑌) = ∑_{𝑖=1}^{𝑛} ∫_0^{𝑦𝑖} (𝑝𝑖(𝜏) − 𝑐𝑖) 𝑑𝜏 − 𝑐0 𝑌
subject to 𝑔𝑖(y, 𝑌) = 𝑦𝑖 − 𝑌 ≤ 0,  𝑖 = 1, 2, . . . , 𝑛
The demand independence assumption ensures that the objective function 𝑆 is concave, since its Hessian is the diagonal matrix
𝐻𝑆 = diag(𝐷𝑝1, 𝐷𝑝2, . . . , 𝐷𝑝𝑛, 0)
which is nonpositive definite (Exercise 3.96). The constraints are linear and hence convex.
Moreover, there exists a point (0, 1) such that for every 𝑖 = 1, 2, . . . , 𝑛
𝑔𝑖 (0, 1) = 0 − 1 < 0
Therefore the problem satisfies the conditions of Theorem 5.6. The optimal solution
(y∗, 𝑌∗) satisfies the Kuhn-Tucker conditions, that is, there exist multipliers 𝜆1, 𝜆2, . . . , 𝜆𝑛 such that for every period 𝑖 = 1, 2, . . . , 𝑛
𝐷𝑦𝑖 𝐿 = 𝑝𝑖(𝑦𝑖) − 𝑐𝑖 − 𝜆𝑖 ≤ 0    𝑦𝑖 ≥ 0    𝑦𝑖(𝑝𝑖(𝑦𝑖) − 𝑐𝑖 − 𝜆𝑖) = 0    (5.108)
𝑦𝑖 ≤ 𝑌    𝜆𝑖 ≥ 0    𝜆𝑖(𝑌 − 𝑦𝑖) = 0
and that capacity be chosen such that
𝐷𝑌 𝐿 = ∑_{𝑖=1}^{𝑛} 𝜆𝑖 − 𝑐0 ≤ 0    𝑌 ≥ 0    𝑌 (∑_{𝑖=1}^{𝑛} 𝜆𝑖 − 𝑐0) = 0    (5.109)
where 𝐿 is the Lagrangean
𝐿(y, 𝑌, 𝝀) = ∑_{𝑖=1}^{𝑛} ∫_0^{𝑦𝑖} (𝑝𝑖(𝜏) − 𝑐𝑖) 𝑑𝜏 − 𝑐0 𝑌 − ∑_{𝑖=1}^{𝑛} 𝜆𝑖 (𝑦𝑖 − 𝑌)
In off-peak periods (𝑦𝑖 < 𝑌 ), complementary slackness requires that 𝜆𝑖 = 0 and therefore from (5.108)
𝑝𝑖 (𝑦𝑖 ) = 𝑐𝑖
assuming 𝑦𝑖 > 0. In peak periods (𝑦𝑖 = 𝑌 )
𝑝𝑖 (𝑦𝑖 ) = 𝑐𝑖 + 𝜆𝑖
We conclude that it is optimal to price at marginal cost in off-peak periods and charge
a premium during peak periods. Furthermore, (5.109) implies that the total premium
is equal to the marginal capacity cost
𝑛
∑
𝜆𝑖 = 𝑐0
𝑖=1
Furthermore, note that
∑_{𝑖=1}^{𝑛} 𝜆𝑖 𝑦𝑖 = ∑_{Peak} 𝜆𝑖 𝑦𝑖 + ∑_{Off-peak} 𝜆𝑖 𝑦𝑖
              = ∑_{𝑦𝑖=𝑌} 𝜆𝑖 𝑦𝑖 + ∑_{𝜆𝑖=0} 𝜆𝑖 𝑦𝑖
              = ∑_{𝑦𝑖=𝑌} 𝜆𝑖 𝑌
              = ∑_{𝑖=1}^{𝑛} 𝜆𝑖 𝑌 = 𝑐0 𝑌
Therefore, the utility's total revenue is
𝑅(y, 𝑌) = ∑_{𝑖=1}^{𝑛} 𝑝𝑖(𝑦𝑖) 𝑦𝑖
        = ∑_{𝑖=1}^{𝑛} (𝑐𝑖 + 𝜆𝑖) 𝑦𝑖
        = ∑_{𝑖=1}^{𝑛} 𝑐𝑖 𝑦𝑖 + ∑_{𝑖=1}^{𝑛} 𝜆𝑖 𝑦𝑖
        = ∑_{𝑖=1}^{𝑛} 𝑐𝑖 𝑦𝑖 + 𝑐0 𝑌 = 𝑐(y, 𝑌)
Under the optimal pricing policy, revenue equals cost and the utility breaks even.
Chapter 6: Comparative Statics
6.1 The Jacobian is
𝐽 = ( 𝐻𝐿    𝐽g𝑇 )
    ( 𝐽g     0  )
where 𝐻𝐿 is the Hessian of the Lagrangean. We note that
∙ 𝐻𝐿 (x0 ) is negative definite in the subspace 𝑇 = { x : 𝐽g x = 0 } (since x0 satisfies
the conditions for a strict local maximum)
∙ 𝐽g has rank 𝑚 (since the constraints are regular).
Consider the system of equations
( 𝐻𝐿   𝐽g𝑇 ) ( x )     ( 0 )
( 𝐽g    0  ) ( y )  =  ( 0 )    (6.28)
where x ∈ ℜ𝑛 and y ∈ ℜ𝑚 . It can be decomposed into
𝐻𝐿 x + 𝐽g𝑇 y = 0
(6.29)
𝐽g x = 0
(6.30)
Suppose (x, y) solves (6.28). Multiplying (6.29) by x𝑇 gives
x𝑇 𝐻𝐿 x + x𝑇 𝐽g𝑇 y = x𝑇 𝐻𝐿 x + (𝐽g x)𝑇 y = 0
But (6.30) implies that the second term is 0 and therefore x𝑇 𝐻𝐿 x = 0. Since 𝐻𝐿 is negative definite on 𝑇 = { x : 𝐽g x = 0 }, we must have x = 0. Then (6.29) reduces to
𝐽g𝑇 y = 0
Since 𝐽g has rank 𝑚, this has only the trivial solution y = 0 (Section 3.6.1). We have shown that the system (6.28) has only the trivial solution (0, 0). This implies that the matrix 𝐽 is nonsingular.
6.2 The Lagrangean for this problem is
𝐿 = 𝑓(x) − 𝝀𝑇 (g(x) − c)
By Corollary 6.1.1
∇𝑣(c) = 𝐷c 𝐿 = 𝝀
6.3 Optimality implies
𝑓 (x1 , 𝜽1 ) ≥ 𝑓 (x, 𝜽 1 ) and 𝑓 (x2 , 𝜽 2 ) ≥ 𝑓 (x, 𝜽2 ) for every x ∈ 𝑋
In particular
𝑓 (x1 , 𝜽1 ) ≥ 𝑓 (x2 , 𝜽1 ) and 𝑓 (x2 , 𝜽2 ) ≥ 𝑓 (x1 , 𝜽2 )
Adding these inequalities
𝑓 (x1 , 𝜽1 ) + 𝑓 (x2 , 𝜽2 ) ≥ 𝑓 (x2 , 𝜽1 ) + 𝑓 (x1 , 𝜽2 )
Rearranging and using the bilinearity of 𝑓 gives
𝑓 (x1 − x2 , 𝜽1 ) ≥ 𝑓 (x1 − x2 , 𝜽2 )
and
𝑓 (x1 − x2 , 𝜽 1 − 𝜽 2 ) ≥ 0
6.4 Let 𝑝1 denote the profit maximizing price with the cost function 𝑐1 (𝑦) and let 𝑦1 be
the corresponding output. Similarly let 𝑝2 and 𝑦2 be the profit maximizing price and
output when the costs are given by 𝑐2 (𝑦).
With cost function 𝑐1, the firm's profit is
Π = 𝑝𝑦 − 𝑐1 (𝑦)
Since this is maximised at 𝑝1 and 𝑦1 (although the monopolist could have sold 𝑦2 at
price 𝑝2 )
𝑝1 𝑦1 − 𝑐1 (𝑦1 ) ≥ 𝑝2 𝑦2 − 𝑐1 (𝑦2 )
Rearranging
𝑝1 𝑦1 − 𝑝2 𝑦2 ≥ 𝑐1 (𝑦1 ) − 𝑐1 (𝑦2 )
(6.31)
The increase in revenue in moving from 𝑦2 to 𝑦1 is greater than the increase in cost.
Similarly
𝑝2 𝑦2 − 𝑐2 (𝑦2 ) ≥ 𝑝1 𝑦1 − 𝑐2 (𝑦1 )
which can be rearranged to yield
𝑐2 (𝑦1 ) − 𝑐2 (𝑦2 ) ≥ 𝑝1 𝑦1 − 𝑝2 𝑦2
Combining the previous inequality with (6.31) yields
𝑐2 (𝑦1 ) − 𝑐2 (𝑦2 ) ≥ 𝑐1 (𝑦1 ) − 𝑐1 (𝑦2 )
(6.32)
6.5 By Theorem 6.2
𝐷w Π[w, 𝑝] = −x∗ and 𝐷𝑝 Π[w, 𝑝] = 𝑦 ∗
and therefore
𝐷𝑝 𝑦(𝑝, w) = 𝐷²𝑝𝑝 Π(𝑝, w) ≥ 0
𝐷𝑤𝑖 𝑥𝑖(𝑝, w) = −𝐷²𝑤𝑖𝑤𝑖 Π(𝑝, w) ≤ 0
𝐷𝑤𝑗 𝑥𝑖(𝑝, w) = −𝐷²𝑤𝑖𝑤𝑗 Π(𝑝, w) = 𝐷𝑤𝑖 𝑥𝑗(𝑝, w)
𝐷𝑝 𝑥𝑖(𝑝, w) = −𝐷²𝑤𝑖𝑝 Π(𝑝, w) = −𝐷𝑤𝑖 𝑦(𝑝, w)
since Π is convex and therefore 𝐻Π (w, 𝑝) is symmetric (Theorem 4.2) and nonnegative
definite (Proposition 4.1).
6.6 By Shephard’s lemma (6.17)
𝑥𝑖 (𝑤, 𝑦) = 𝐷𝑤𝑖 𝑐(𝑤, 𝑦)
Using Young’s theorem (Theorem 4.2),
𝐷𝑦 𝑥𝑖[w, 𝑦] = 𝐷²𝑤𝑖𝑦 𝑐[w, 𝑦] = 𝐷²𝑦𝑤𝑖 𝑐[w, 𝑦] = 𝐷𝑤𝑖 𝐷𝑦 𝑐[w, 𝑦]
Therefore
𝐷𝑦 𝑥𝑖 [w, 𝑦] ≥ 0 ⇐⇒ 𝐷𝑤𝑖 𝐷𝑦 𝑐[w, 𝑦] ≥ 0
6.7 The demand functions must satisfy the budget constraint identically, that is
∑_{𝑖=1}^{𝑛} 𝑝𝑖 𝑥𝑖(p, 𝑚) = 𝑚  for every p and 𝑚
Differentiating with respect to 𝑚
∑_{𝑖=1}^{𝑛} 𝑝𝑖 𝐷𝑚 𝑥𝑖[p, 𝑚] = 1
This is the Engel aggregation condition, which simply states that any additional income must be spent on some goods. Multiplying each term by 𝑥𝑖 𝑚/(𝑥𝑖 𝑚),
∑_{𝑖=1}^{𝑛} (𝑝𝑖 𝑥𝑖 / 𝑚) (𝑚 / 𝑥𝑖(p, 𝑚)) 𝐷𝑚 𝑥𝑖[p, 𝑚] = 1
the Engel aggregation condition can be written in elasticity form
∑_{𝑖=1}^{𝑛} 𝛼𝑖 𝜂𝑖 = 1
where 𝛼𝑖 = 𝑝𝑖 𝑥𝑖/𝑚 is the budget share of good 𝑖. On average, goods must have unit income elasticities.
Differentiating the budget constraint with respect to 𝑝𝑗
∑_{𝑖=1}^{𝑛} 𝑝𝑖 𝐷𝑝𝑗 𝑥𝑖[p, 𝑚] + 𝑥𝑗(p, 𝑚) = 0
This is the Cournot aggregation condition, which implies that an increase in the price 𝑝𝑗 is equivalent to a decrease in real income of 𝑥𝑗 𝑑𝑝𝑗. Multiplying each term in the sum by 𝑥𝑖/𝑥𝑖 gives
∑_{𝑖=1}^{𝑛} (𝑝𝑖 𝑥𝑖 / 𝑥𝑖) 𝐷𝑝𝑗 𝑥𝑖[p, 𝑚] = −𝑥𝑗
Multiplying through by 𝑝𝑗/𝑚
∑_{𝑖=1}^{𝑛} (𝑝𝑖 𝑥𝑖 / 𝑚) (𝑝𝑗 / 𝑥𝑖) 𝐷𝑝𝑗 𝑥𝑖[p, 𝑚] = − 𝑝𝑗 𝑥𝑗 / 𝑚
or
∑_{𝑖=1}^{𝑛} 𝛼𝑖 𝜖𝑖𝑗 = −𝛼𝑗
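Both aggregation conditions can be verified symbolically for a specific demand system; the Cobb-Douglas demands 𝑥𝑖 = 𝛼𝑖 𝑚/𝑝𝑖 used below are an assumed example, not part of the text:

```python
import sympy as sp

# Sketch: check Engel aggregation (sum of alpha_i * eta_i = 1) and Cournot
# aggregation (sum of alpha_i * eps_i1 = -alpha_1) for Cobb-Douglas demands.
m, p1, p2 = sp.symbols('m p1 p2', positive=True)
a = [sp.Rational(1, 4), sp.Rational(3, 4)]
p = [p1, p2]
x = [a[0]*m/p1, a[1]*m/p2]

alpha = [sp.simplify(p[i]*x[i]/m) for i in range(2)]               # budget shares
eta = [sp.simplify(sp.diff(x[i], m)*m/x[i]) for i in range(2)]     # income elasticities
eps1 = [sp.simplify(sp.diff(x[i], p1)*p1/x[i]) for i in range(2)]  # elasticities w.r.t. p1

print(sp.simplify(sum(alpha[i]*eta[i] for i in range(2))))    # 1
print(sp.simplify(sum(alpha[i]*eps1[i] for i in range(2))))   # -1/4 = -alpha_1
```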
6.8 Supermodularity of Π(x, 𝑝, −w) follows from Exercises 2.50 and 2.51. To show
strictly increasing differences, consider two price vectors w2 ≥ w1
Π(x, 𝑝, −w1) − Π(x, 𝑝, −w2) = ∑_{𝑖=1}^{𝑛} (−𝑤𝑖1)𝑥𝑖 − ∑_{𝑖=1}^{𝑛} (−𝑤𝑖2)𝑥𝑖 = ∑_{𝑖=1}^{𝑛} (𝑤𝑖2 − 𝑤𝑖1)𝑥𝑖
Since w2 ≥ w1, w2 − w1 ≥ 0 and ∑_{𝑖=1}^{𝑛} (𝑤𝑖2 − 𝑤𝑖1)𝑥𝑖 is strictly increasing in x.
6.9 For any 𝑝2 ≥ 𝑝1, 𝑦2 = 𝑓(𝑝2) ≤ 𝑓(𝑝1) = 𝑦1 and 𝑐(𝑦1, 𝜃) − 𝑐(𝑦2, 𝜃) is increasing in 𝜃, and therefore −(𝑐(𝑓(𝑝2), 𝜃) − 𝑐(𝑓(𝑝1), 𝜃)) is increasing in 𝜃.
6.10 The firm’s optimization problem is
max 𝜃𝑝𝑦 − 𝑐(𝑦)
𝑦∈ℜ+
The objective function
𝑓 (𝑦, 𝑝, 𝜃) = 𝜃𝑝𝑦 − 𝑐(𝑦)
is
∙ supermodular in 𝑦 (Exercise 2.49)
∙ displays strictly increasing differences in (𝑦, 𝜃), since
𝑓(𝑦2, 𝑝, 𝜃) − 𝑓(𝑦1, 𝑝, 𝜃) = 𝜃𝑝(𝑦2 − 𝑦1) − (𝑐(𝑦2) − 𝑐(𝑦1))
is strictly increasing in 𝜃 for 𝑦2 > 𝑦1.
Therefore (Corollary 2.1.2), the firm’s output correspondence is strongly increasing and
every selection is increasing (Exercise 2.45). Therefore, the firm’s output increases as
the yield increases. It is analogous to an increase in the exogenous price.
6.11 With two factors, the Hessian is
𝐻𝑓 = ( 𝑓11   𝑓12 )
     ( 𝑓21   𝑓22 )
Therefore, its inverse is (Exercise 3.104)
𝐻𝑓⁻¹ = (1/Δ) (  𝑓22   −𝑓12 )
             ( −𝑓21    𝑓11 )
where Δ = 𝑓11 𝑓22 − 𝑓12 𝑓21 ≥ 0 by the second-order condition. Therefore, the Jacobian of the demand functions is
( 𝐷𝑤1 𝑥1   𝐷𝑤2 𝑥1 ) = (1/𝑝) 𝐻𝑓⁻¹ = (1/(𝑝Δ)) (  𝑓22   −𝑓12 )
( 𝐷𝑤1 𝑥2   𝐷𝑤2 𝑥2 )                          ( −𝑓21    𝑓11 )
Therefore
𝐷𝑤1 𝑥2 = −𝑓21/(𝑝Δ)
which is negative if 𝑓21 > 0 and nonnegative otherwise.
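The sign of the cross-price effect can also be read off symbolically (a minimal sketch; the symbolic setup below is mine, not from the text):

```python
import sympy as sp

# Sketch: the factor-demand Jacobian is (1/p) * Hf^{-1}, so the cross-price
# term D_{w1} x2 equals -f21 / (p * Delta) with Delta = f11*f22 - f12*f21.
f11, f12, f21, f22, p = sp.symbols('f11 f12 f21 f22 p')
Hf = sp.Matrix([[f11, f12], [f21, f22]])

J = Hf.inv() / p               # Jacobian of the input demands
print(sp.simplify(J[1, 0]))    # -f21/(p*(f11*f22 - f12*f21))
```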