(a) Let l : V×W → U be a bilinear map. By the universal property of the free module, the extension l̄ : F(V×W) → U is well defined, and by bilinearity the generators in 2.1(1) lie in the kernel of l̄. Therefore the induced map l̃ : V⊗W → U is well defined, and it satisfies l̃∘φ = l by construction. Since l̃(v⊗w) is determined by the values of l and the elements of the form v⊗w generate V⊗W, such an l̃ is unique. The remaining assertions follow directly from the usual properties of universal objects.
(b) The map V×W → W⊗V, (v,w) ↦ w⊗v is bilinear, so by (a) it induces a linear map V⊗W → W⊗V with v⊗w ↦ w⊗v; the symmetric construction gives its inverse. Hence V⊗W ≅ W⊗V by the universal property.
(c) For v∈V, let f_v : W×U → (V×W)×U → (V⊗W)×U → (V⊗W)⊗U be the map (w,u) ↦ ((v,w),u) ↦ (v⊗w,u) ↦ (v⊗w)⊗u. Since it is bilinear, it induces f̃_v : W⊗U → (V⊗W)⊗U. If we now set g : V×(W⊗U) → (V⊗W)⊗U, (v,x) ↦ f̃_v(x) for all x∈W⊗U, then g is also bilinear, hence it induces g̃ : V⊗(W⊗U) → (V⊗W)⊗U with g̃(v⊗(w⊗u)) = (v⊗w)⊗u. By the same method one defines h̃ : (V⊗W)⊗U → V⊗(W⊗U), and it is easy to see that g̃ and h̃ are inverses of each other. Therefore V⊗(W⊗U) ≅ (V⊗W)⊗U.
(d) Let f:V∗×W→Hom(V,W) be a function such that f(φ,w)(v)=φ(v)⋅w. Since it is bilinear, one can define a map α:V∗⊗W→Hom(V,W).
(1) Let {vi}, {wj} be bases of V and W respectively, and let {vi∗} be the dual basis corresponding to {vi}, i.e., va∗(vb) = δab. Every element of V∗⊗W can then be written as ∑i,j aij vi∗⊗wj. If α(∑i,j aij vi∗⊗wj) = 0, then ∑i,j aij vi∗(v)⋅wj = 0 for all v∈V. In particular, substituting vi for v gives ∑j aij wj = 0 for every i. By the linear independence of {wj}, aij = 0 for all i, j, so α is injective.
(2) Let φ∈Hom(V,W). If φ(vi) = ∑j aij wj, set ψ = α(∑i,j aij vi∗⊗wj). Then ψ and φ agree on each vi, so they are the same linear map, and α is surjective. Hence α is an isomorphism, dim(V∗⊗W) = dim Hom(V,W) = (dimV)(dimW), and, using V ≅ V∗∗, dim(V⊗W) = dim(V∗∗⊗W) = (dimV∗)(dimW) = (dimV)(dimW).
(e) By arguments similar to the above, the elements ei⊗fj span V⊗W. Since their number equals dim(V⊗W) by (d), they form a basis.
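As a quick numerical illustration of (d) and (e) (not part of the solution; the dimensions, random bases, and variable names below are our own choices), one can realize α concretely: vi∗⊗wj corresponds to the rank-one linear map v ↦ vi∗(v)⋅wj, i.e., to an outer-product matrix, and the (dimV)(dimW) such matrices are linearly independent.

```python
import numpy as np

# Identify alpha(v_i* (x) w_j) in Hom(V, W) with the outer-product matrix w_j v_i*^T.
dimV, dimW = 3, 4                       # arbitrary small dimensions
rng = np.random.default_rng(0)

V_basis = rng.random((dimV, dimV))      # columns: a (generic) basis v_1, ..., v_dimV of V
W_basis = rng.random((dimW, dimW))      # columns: a basis w_1, ..., w_dimW of W
dual = np.linalg.inv(V_basis)           # rows: the dual basis, dual[a] @ V_basis[:, b] = delta_ab

tensors = [np.outer(W_basis[:, j], dual[i]) for i in range(dimV) for j in range(dimW)]

# Linear independence of the dimV*dimW matrices <=> full rank after flattening.
M = np.stack([t.ravel() for t in tensors])
print(np.linalg.matrix_rank(M), "==", dimV * dimW)   # both 12 = dim Hom(V, W)
```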
2.
(a) Let V = R², and consider e1⊗e2 − e2⊗e1 ∈ V⊗V. If it were decomposable, say equal to (a e1 + b e2)⊗(c e1 + d e2), then ac = bd = 0, ad = 1, and bc = −1, which is impossible since (ad)(bc) = (ac)(bd) = 0 ≠ −1.
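Equivalently (an illustrative check with our own conventions), an element of V⊗V corresponds to its 2×2 coefficient matrix in the basis {ei⊗ej}, and it is decomposable exactly when that matrix has rank at most 1:

```python
import numpy as np

# Coefficient matrix of e1 (x) e2 - e2 (x) e1 in the basis {e_i (x) e_j}:
T = np.array([[0, 1],
              [-1, 0]])

# A decomposable tensor (a e1 + b e2) (x) (c e1 + d e2) has coefficient matrix
# [[a*c, a*d], [b*c, b*d]] = outer([a, b], [c, d]), which has rank <= 1.
print(np.linalg.matrix_rank(T))   # 2, so e1 (x) e2 - e2 (x) e1 is not decomposable
```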
(b) If the dimension is 1 or 2, the claim is trivially true. Let dimV = 3, V = ⟨v1,v2,v3⟩. We only need to check that every element of Λ²(V) is decomposable. For any element x = a v1∧v2 + b v2∧v3 + c v3∧v1 ∈ Λ²(V): if a = 0, then x = (b v2 − c v1)∧v3, and if a ≠ 0, then x = (v1 − (b/a) v3)∧(a v2 − c v3). Therefore x is always decomposable.
(c) If V = R⁴, then e1∧e2 + e3∧e4 is indecomposable. If it were decomposable, say equal to (a1e1+a2e2+a3e3+a4e4)∧(b1e1+b2e2+b3e3+b4e4), then a1b2−b1a2 = a3b4−b3a4 = 1 and a1b3−b1a3 = a1b4−b1a4 = a2b3−b2a3 = a2b4−b2a4 = 0. If a1 = 0, then b1a2 = −1 and b1a3 = b1a4 = 0, so a3 = a4 = 0, which contradicts a3b4−b3a4 = 1.
If a1 ≠ 0, then b3 = (b1/a1)a3 and b4 = (b1/a1)a4, so a3b4−b3a4 = 0, which also contradicts a3b4−b3a4 = 1.
(d) No. Let α = e1∧e2 + e3∧e4 ∈ Λ²(R⁴). Then α∧α = 2 e1∧e2∧e3∧e4 ∈ Λ⁴(R⁴), which is not zero.
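Both (c) and (d) can be checked in coordinates. In the sketch below (our own representation, not part of the solution), a 2-form is encoded by its antisymmetric coefficient matrix A; a decomposable form v∧w corresponds to vwᵀ − wvᵀ, which has rank ≤ 2, and for a 4×4 antisymmetric A the square α∧α is governed by the Pfaffian.

```python
import numpy as np

# alpha = e1^e2 + e3^e4, encoded by the antisymmetric matrix of its coefficients.
A = np.zeros((4, 4))
A[0, 1] = A[2, 3] = 1
A = A - A.T

# Decomposable 2-forms v^w correspond to v w^T - w v^T, so they have rank <= 2.
print(np.linalg.matrix_rank(A))                          # 4: alpha is not decomposable

# alpha^alpha = 2 * Pf(A) * e1^e2^e3^e4 for a 4x4 antisymmetric A.
pfaffian = A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]
print(2 * pfaffian)                                      # 2: alpha^alpha = 2 e1^e2^e3^e4 != 0
```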
3.
(a) That u∧v ∈ Λ^{k+l}(V) is a direct consequence of the associativity of the tensor product. Since the last equation is bilinear, we only need to check it when u and v are decomposable. If u = u1∧…∧uk and v = v1∧…∧vl, then
v∧u = v1∧…∧vl∧u1∧…∧uk = (−1)^l u1∧v1∧…∧vl∧u2∧…∧uk = … = (−1)^{kl} u1∧…∧uk∧v1∧…∧vl = (−1)^{kl} u∧v.
(b) Let us follow the argument in the text. Since the determinant function vanishes on every element of I(V) but not on e1⊗…⊗en, e1⊗…⊗en cannot lie in I(V), i.e., e1∧…∧en ≠ 0. That {eΦ} spans Λ(V) is obvious. Suppose ∑Φ aΦ eΦ = 0. Since the homogeneous parts must vanish separately, ∑_{|Φ|=k} aΦ eΦ = 0 for every 0 ≤ k ≤ n. For any Φ0 with |Φ0| = k, wedging this equation with e_{Φ0ᶜ} kills every term except the Φ0 term and gives ±aΦ0 e1∧…∧en = 0, hence aΦ0 = 0. Since Φ0 was arbitrary, {eΦ} is linearly independent and is therefore a basis. The rest of the argument follows trivially.
(c) The induced map h̃ is well defined by the universal property of tensor products, and since h is alternating, I(V) lies in the kernel of the induced map. Its uniqueness also follows from the universal property of tensor products. If W = R, then h̃ ∈ Λ^k(V)∗ while h ∈ A^k(V), and the correspondence φ∗ : Λ^k(V)∗ → A^k(V) gives an isomorphism.
4.
Since the expressions in (2), (3), and (4) are bilinear in f and g, it suffices to consider the case where f and g come from decomposable elements. So let φ : Λ(V)∗ → A(V) be the isomorphism and suppose first that f = φ∘α(w1∗∧…∧wp∗) and g = φ∘α(w_{p+1}∗∧…∧w_{p+q}∗). Then
$$
\begin{aligned}
f\wedge_\alpha g\,(v_1,\dots,v_{p+q})
&= w_1^*\wedge\dots\wedge w_{p+q}^*\,(v_1\wedge\dots\wedge v_{p+q})
 = \det\!\big(w_i^*(v_j)\big)_{1\le i,j\le p+q}
 = \sum_{\sigma\in S_{p+q}}\operatorname{sgn}(\sigma)\prod_{i=1}^{p+q} w_i^*(v_{\sigma(i)})\\
&= \sum_{\pi:\,p,q\text{ shuffles}}\ \sum_{\sigma_1\in S_p}\ \sum_{\sigma_2\in S_q}
   \operatorname{sgn}(\pi)\operatorname{sgn}(\sigma_1)\operatorname{sgn}(\sigma_2)
   \prod_{i_1=1}^{p} w_{i_1}^*(v_{\pi(\sigma_1(i_1))})
   \prod_{i_2=1}^{q} w_{p+i_2}^*(v_{\pi(p+\sigma_2(i_2))})\\
&= \sum_{\pi:\,p,q\text{ shuffles}}\operatorname{sgn}(\pi)
   \det\!\big(w_i^*(v_{\pi(i)})\big)_{1\le i\le p}\,
   \det\!\big(w_j^*(v_{\pi(j)})\big)_{p+1\le j\le p+q}\\
&= \sum_{\pi:\,p,q\text{ shuffles}}\operatorname{sgn}(\pi)\,
   f(v_{\pi(1)},\dots,v_{\pi(p)})\, g(v_{\pi(p+1)},\dots,v_{\pi(p+q)}).
\end{aligned}
$$
On the other hand, suppose f = φ∘β(w1∗∧…∧wp∗) and g = φ∘β(w_{p+1}∗∧…∧w_{p+q}∗). Then
$$
\begin{aligned}
f\wedge_\beta g\,(v_1,\dots,v_{p+q})
&= w_1^*\wedge\dots\wedge w_{p+q}^*\,(v_1\wedge\dots\wedge v_{p+q})
 = \frac{1}{(p+q)!}\det\!\big(w_i^*(v_j)\big)_{1\le i,j\le p+q}
 = \frac{1}{(p+q)!}\sum_{\sigma\in S_{p+q}}\operatorname{sgn}(\sigma)\prod_{i=1}^{p+q} w_i^*(v_{\sigma(i)})\\
&= \frac{1}{(p+q)!}\sum_{\pi:\,p,q\text{ shuffles}}\ \sum_{\sigma_1\in S_p}\ \sum_{\sigma_2\in S_q}
   \operatorname{sgn}(\pi)\operatorname{sgn}(\sigma_1)\operatorname{sgn}(\sigma_2)
   \prod_{i_1=1}^{p} w_{i_1}^*(v_{\pi(\sigma_1(i_1))})
   \prod_{i_2=1}^{q} w_{p+i_2}^*(v_{\pi(p+\sigma_2(i_2))})\\
&= \frac{1}{(p+q)!}\sum_{\pi:\,p,q\text{ shuffles}}\operatorname{sgn}(\pi)
   \det\!\big(w_i^*(v_{\pi(i)})\big)_{1\le i\le p}\,
   \det\!\big(w_j^*(v_{\pi(j)})\big)_{p+1\le j\le p+q}\\
&= \frac{1}{(p+q)!\,p!\,q!}\sum_{\pi\in S_{p+q}}\operatorname{sgn}(\pi)
   \det\!\big(w_i^*(v_{\pi(i)})\big)_{1\le i\le p}\,
   \det\!\big(w_j^*(v_{\pi(j)})\big)_{p+1\le j\le p+q}\\
&= \frac{1}{(p+q)!}\sum_{\pi\in S_{p+q}}\operatorname{sgn}(\pi)\,
   f(v_{\pi(1)},\dots,v_{\pi(p)})\, g(v_{\pi(p+1)},\dots,v_{\pi(p+q)}).
\end{aligned}
$$
Therefore,
$$
f\wedge_\alpha g\,(v_1,\dots,v_{p+q})
= \sum_{\pi:\,p,q\text{ shuffles}}\operatorname{sgn}(\pi)\, f(v_{\pi(1)},\dots,v_{\pi(p)})\, g(v_{\pi(p+1)},\dots,v_{\pi(p+q)})
= \frac{1}{p!\,q!}\sum_{\pi\in S_{p+q}}\operatorname{sgn}(\pi)\, f(v_{\pi(1)},\dots,v_{\pi(p)})\, g(v_{\pi(p+1)},\dots,v_{\pi(p+q)})
= \frac{(p+q)!}{p!\,q!}\, f\wedge_\beta g.
$$
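The only combinatorial input above is that summing over all of S_{p+q} overcounts each (p,q)-shuffle exactly p!q! times, which yields f∧_α g = ((p+q)!/(p!q!)) f∧_β g. Here is a small numerical check of that step (the dimension, degrees, and random covectors are our own illustrative choices):

```python
import itertools, math
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 5, 2, 3                       # ambient dimension and degrees (arbitrary choices)
W = rng.random((p + q, n))              # rows: covectors w_1*, ..., w_{p+q}*

def sgn(perm):
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

# f and g in the determinant (alpha) normalization.
f = lambda *vecs: np.linalg.det(np.array([[W[i] @ v for v in vecs] for i in range(p)]))
g = lambda *vecs: np.linalg.det(np.array([[W[p + i] @ v for v in vecs] for i in range(q)]))

vs = [rng.random(n) for _ in range(p + q)]

def term(pi):
    return sgn(pi) * f(*(vs[i] for i in pi[:p])) * g(*(vs[i] for i in pi[p:]))

shuffles = [pi for pi in itertools.permutations(range(p + q))
            if list(pi[:p]) == sorted(pi[:p]) and list(pi[p:]) == sorted(pi[p:])]

shuffle_sum = sum(term(pi) for pi in shuffles)            # = f ^_alpha g (v_1, ..., v_{p+q})
full_sum = sum(term(pi) for pi in itertools.permutations(range(p + q)))

# Each shuffle is counted p!q! times in the full sum, hence f^_alpha g = ((p+q)!/(p!q!)) f^_beta g.
print(np.isclose(full_sum, math.factorial(p) * math.factorial(q) * shuffle_sum))   # True
```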
5.
$$
L_X(f)\big|_m
= \lim_{t\to 0}\frac{\delta X_t\!\left(f_{X_t(m)}\right)-f_m}{t}
= \lim_{t\to 0}\frac{f(X_t(m))-f(m)}{t}
= \lim_{t\to 0}\frac{f(\gamma_m(t))-f(m)}{t}
= d\gamma_m\!\left(\frac{d}{dt}\Big|_0\right)(f)
= X_m(f),
$$
where γ_m is the integral curve of X starting at m.
6.
(Since β(t) is only defined for t ∈ [0,ε), the limit is taken as t → 0⁺.) We will use L'Hôpital's rule.
Let g(x,y,z,w) = f(Y_{−x}∘X_{−y}∘Y_z∘X_w(m)), so that g(t,t,t,t) = f(β(t²)). Then
$$
\begin{aligned}
\frac{\partial g}{\partial x} &= -Y_{Y_{-x}X_{-y}Y_zX_w(m)}(f), &
\frac{\partial g}{\partial y} &= -X_{X_{-y}Y_zX_w(m)}(f\circ Y_{-x}),\\
\frac{\partial g}{\partial z} &= Y_{Y_zX_w(m)}(f\circ Y_{-x}\circ X_{-y}), &
\frac{\partial g}{\partial w} &= X_{X_w(m)}(f\circ Y_{-x}\circ X_{-y}\circ Y_z),\\
\frac{\partial^2 g}{\partial x^2} &= Y_{Y_{-x}X_{-y}Y_zX_w(m)}(Y(f)), &
\frac{\partial^2 g}{\partial y^2} &= X_{X_{-y}Y_zX_w(m)}(X(f\circ Y_{-x})),\\
\frac{\partial^2 g}{\partial z^2} &= Y_{Y_zX_w(m)}(Y(f\circ Y_{-x}\circ X_{-y})), &
\frac{\partial^2 g}{\partial w^2} &= X_{X_w(m)}(X(f\circ Y_{-x}\circ X_{-y}\circ Y_z)),\\
\frac{\partial^2 g}{\partial x\,\partial y} &= X_{X_{-y}Y_zX_w(m)}(Y(f)\circ Y_{-x}), &
\frac{\partial^2 g}{\partial x\,\partial z} &= -Y_{Y_zX_w(m)}(Y(f)\circ Y_{-x}\circ X_{-y}),\\
\frac{\partial^2 g}{\partial x\,\partial w} &= -X_{X_w(m)}(Y(f)\circ Y_{-x}\circ X_{-y}\circ Y_z), &
\frac{\partial^2 g}{\partial y\,\partial z} &= -Y_{Y_zX_w(m)}(X(f\circ Y_{-x})\circ X_{-y}),\\
\frac{\partial^2 g}{\partial y\,\partial w} &= -X_{X_w(m)}(X(f\circ Y_{-x})\circ X_{-y}\circ Y_z), &
\frac{\partial^2 g}{\partial z\,\partial w} &= X_{X_w(m)}(Y(f\circ Y_{-x}\circ X_{-y})\circ Y_z).
\end{aligned}
$$
Therefore,
$$
\lim_{t\to 0^+}\frac{f(\beta(t))-f(\beta(0))}{t}
= \lim_{t\to 0^+}\frac{f(\beta(t^2))-f(\beta(0))}{t^2}
= \lim_{t\to 0^+}\frac{g(t,t,t,t)-g(0,0,0,0)}{t^2}
= \lim_{t\to 0^+}\frac{dg(t,t,t,t)/dt}{2t}
= \lim_{t\to 0^+}\frac{d^2g(t,t,t,t)/dt^2}{2}.
$$
Since
$$
\frac{d}{dt}g(t,t,t,t)\Big|_{t=0} = -Y_m(f) - X_m(f) + Y_m(f) + X_m(f) = 0,
$$
we have
$$
\begin{aligned}
\lim_{t\to 0^+}\frac{d^2g(t,t,t,t)/dt^2}{2}
&= \tfrac{1}{2}\big(Y_m(Y(f)) + X_m(X(f)) + Y_m(Y(f)) + X_m(X(f))\big)\\
&\quad + X_m(Y(f)) - Y_m(Y(f)) - X_m(Y(f)) - Y_m(X(f)) - X_m(X(f)) + X_m(Y(f))\\
&= X_m(Y(f)) - Y_m(X(f)) = [X,Y]\big|_m(f).
\end{aligned}
$$
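For a concrete sanity check of the limit (all data below, including the choice of vector fields, flows, and test function, is our own and not part of the exercise), take X = ∂/∂x and Y = x ∂/∂y on R², whose flows are explicit and whose bracket is [X,Y] = ∂/∂y:

```python
import numpy as np

# X = d/dx and Y = x d/dy on R^2, with explicit flows:
X_flow = lambda t, p: np.array([p[0] + t, p[1]])
Y_flow = lambda t, p: np.array([p[0], p[1] + t * p[0]])

f = lambda p: p[0] * p[1] + p[1] ** 2            # test function; [X, Y] = d/dy, so [X, Y]f = x + 2y
m = np.array([0.7, -0.3])
bracket_f_at_m = m[0] + 2 * m[1]                 # = 0.1

t = 1e-3
beta_t2 = Y_flow(-t, X_flow(-t, Y_flow(t, X_flow(t, m))))   # beta(t^2) = Y_{-t} X_{-t} Y_t X_t (m)
print((f(beta_t2) - f(m)) / t**2, "~", bracket_f_at_m)      # both approximately 0.1
```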
7.
For m∈M, let x1,…,xn be local coordinates around m. Since the identity is linear in ω, it is enough to consider the case where ω is decomposable, i.e., ω = g dx_{m_1}∧…∧dx_{m_p}. Then
$$
\begin{aligned}
L_{Y_0}\big(\omega(Y_1,\dots,Y_p)\big)
&= Y_0\Big(g\det\big(Y_1(x_{m_i}),\dots,Y_p(x_{m_i})\big)_{1\le i\le p}\Big)\\
&= Y_0(g)\det\big(Y_1(x_{m_i}),\dots,Y_p(x_{m_i})\big)_{1\le i\le p}
 + \sum_{j=1}^{p} g\det\big(Y_1(x_{m_i}),\dots,Y_0(Y_j(x_{m_i})),\dots,Y_p(x_{m_i})\big)_{1\le i\le p},\\
L_{Y_0}(\omega)(Y_1,\dots,Y_p)
&= Y_0(g)\det\big(Y_1(x_{m_i}),\dots,Y_p(x_{m_i})\big)_{1\le i\le p}
 + \sum_{j=1}^{p} g\det\big(Y_1(x_{m_i}),\dots,Y_j(Y_0(x_{m_i})),\dots,Y_p(x_{m_i})\big)_{1\le i\le p},\\
\omega(Y_1,\dots,L_{Y_0}Y_j,\dots,Y_p)
&= g\det\big(Y_1(x_{m_i}),\dots,Y_0(Y_j(x_{m_i})),\dots,Y_p(x_{m_i})\big)_{1\le i\le p}
 - g\det\big(Y_1(x_{m_i}),\dots,Y_j(Y_0(x_{m_i})),\dots,Y_p(x_{m_i})\big)_{1\le i\le p}.
\end{aligned}
$$
(e) follows by combining these equations.
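For p = 1, identity (e) reads Y0(ω(Y1)) = (L_{Y0}ω)(Y1) + ω([Y0,Y1]). The sketch below checks this symbolically on R² for explicitly chosen fields and a 1-form (all of the specific data is our own illustrative choice), computing L_{Y0}ω componentwise from L_{Y0}(dx) = d(Y0(x)):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Our own test data on R^2: vector fields given by components, omega = g dx + h dy.
Y0 = [y, x**2]                       # Y0 = y d/dx + x^2 d/dy
Y1 = [x*y, sp.Integer(1)]            # Y1 = x*y d/dx + d/dy
g, h = x + y**2, sp.sin(x)

def apply_vf(V, func):               # V(func) for V = V[0] d/dx + V[1] d/dy
    return V[0]*sp.diff(func, x) + V[1]*sp.diff(func, y)

def bracket(V, W):                   # components of [V, W]
    return [apply_vf(V, W[k]) - apply_vf(W, V[k]) for k in range(2)]

def ev(a, b, V):                     # (a dx + b dy)(V)
    return a*V[0] + b*V[1]

# Componentwise Lie derivative: L_{Y0}(g dx + h dy) uses L_{Y0}(dx) = d(Y0[0]), L_{Y0}(dy) = d(Y0[1]).
Lg = apply_vf(Y0, g) + g*sp.diff(Y0[0], x) + h*sp.diff(Y0[1], x)
Lh = apply_vf(Y0, h) + g*sp.diff(Y0[0], y) + h*sp.diff(Y0[1], y)

lhs = apply_vf(Y0, ev(g, h, Y1))                      # Y0( omega(Y1) )
rhs = ev(Lg, Lh, Y1) + ev(g, h, bracket(Y0, Y1))      # (L_{Y0} omega)(Y1) + omega([Y0, Y1])
print(sp.simplify(lhs - rhs))                         # 0
```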
8.
(⇐) Let (m,n)∈M×N, and choose local coordinates {x1,…,xs} around m∈M and {y1,…,yt} around n∈N, so that {x1,…,xs,y1,…,yt} are local coordinates around (m,n)∈M×N. In these coordinates ω can be written as
$$ \omega = \sum_{|\Phi|+|\Psi|=p} g_{\Phi,\Psi}\, dx_\Phi\wedge dy_\Psi. $$
Take a vector field X on M×N of the form ∂/∂xi times a C∞ function that vanishes outside the chart and is identically 1 on a neighborhood of (m,n). Then X_{(m,n)} = ∂/∂xi|_{(m,n)} and dπ(X) = 0, so by assumption L_Xω = 0. But near (m,n),
$$ L_X\omega = \sum_{|\Phi|+|\Psi|=p}\frac{\partial g_{\Phi,\Psi}}{\partial x_i}\, dx_\Phi\wedge dy_\Psi. $$
Therefore ∂g_{Φ,Ψ}/∂xi = 0 near (m,n) for every i. If we let ι_{m′} : N → M×N, n′ ↦ (m′,n′), then
$$ (\delta\iota_m\,\omega)_n = \sum_{|\Psi|=p} g_{\emptyset,\Psi}(m,n)\, dy_\Psi. $$
By the computation above, (δι_{m′}ω)_n agrees with (δι_mω)_n for all m′ in a small neighborhood of m. Since (m,n) was arbitrary, δι_mω is locally independent of the choice of m, and since M is connected, δι_mω is independent of m∈M altogether. Let α be this p-form on N. Then ω = δπ(α). Indeed, for all (m,n)∈M×N and X1,…,Xp ∈ (M×N)_{(m,n)},
$$
(\omega-\delta\pi(\alpha))_{(m,n)}(X_1,\dots,X_p)
= \omega_{(m,n)}(X_1,\dots,X_p)-\delta\pi(\delta\iota_m\omega)_{(m,n)}(X_1,\dots,X_p)
= \omega_{(m,n)}(X_1,\dots,X_p)-\omega_{(m,n)}\big(d(\iota_m\circ\pi)(X_1),\dots,d(\iota_m\circ\pi)(X_p)\big).
$$
Each difference X_j − d(ι_m∘π)(X_j) satisfies dπ(X_j − d(ι_m∘π)(X_j)) = 0, so it extends to a vector field X on M×N with dπ(X) = 0, and by assumption i(X)ω = 0. Expanding the right-hand side multilinearly, every term of the difference contains at least one such vertical argument and therefore vanishes. Hence ω = δπ(α).
9.
(⇐) If a1v1+…+arvr = 0 and some of the ai are nonzero, then without loss of generality one can assume a1 ≠ 0, and after dividing the equation by a1, that a1 = 1. Then
v1∧…∧vr = (−a2v2−…−arvr)∧v2∧…∧vr = 0.
(⇒) Suppose v1,…,vr are linearly independent. Then there is a linear map φ : V → k^r with vi ↦ ei. By the universal property it induces a map Λ^r(φ) : Λ^r(V) → Λ^r(k^r). Since Λ^r(φ)(v1∧…∧vr) = e1∧…∧er ≠ 0 by 2.6, we conclude v1∧…∧vr ≠ 0.
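Numerically (an illustrative aside), v1∧…∧vr is encoded by the vector of all r×r minors of the matrix whose rows are v1,…,vr, and this vector vanishes exactly when the rows are linearly dependent:

```python
import itertools
import numpy as np

def wedge_coords(vectors):
    """Plücker coordinates of v1 ^ ... ^ vr: all r x r minors of the matrix with rows v1, ..., vr."""
    M = np.array(vectors, dtype=float)
    r, n = M.shape
    return np.array([np.linalg.det(M[:, cols]) for cols in itertools.combinations(range(n), r)])

independent = [[1, 0, 2, 0], [0, 1, 1, 0], [3, 0, 0, 1]]
dependent   = [[1, 0, 2, 0], [0, 1, 1, 0], [2, 3, 7, 0]]   # third row = 2*first + 3*second

print(np.allclose(wedge_coords(independent), 0))   # False: the wedge is nonzero
print(np.allclose(wedge_coords(dependent), 0))     # True:  the wedge vanishes
```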
10.
(⇒) Let vi = ∑j aij wj, and note that the matrix A = (aij) is invertible since both sets are bases of the same subspace. Then by direct calculation v1∧…∧vr = (detA) w1∧…∧wr, with detA ≠ 0.
(⇐) Suppose there were w ∈ ⟨w1,…,wr⟩∖⟨v1,…,vr⟩. Then v1,…,vr,w are linearly independent, so v1∧…∧vr∧w ≠ 0 by Exercise 9; but v1∧…∧vr is a nonzero multiple of w1∧…∧wr, so v1∧…∧vr∧w is a multiple of w1∧…∧wr∧w = 0, a contradiction. Therefore ⟨w1,…,wr⟩ ⊂ ⟨v1,…,vr⟩, and by symmetry the reverse inclusion holds as well.
11.
(Condition ⇒ (a)) If I is a differential ideal, then dωi ∈ I by definition, which is exactly (a).
((a) ⇒ (b)) dω = dω1∧ω2∧…∧ωr + … + (−1)^{r+1} ω1∧…∧ω_{r−1}∧dωr = α∧ω for some form α, using (a).
((b) ⇒ (a)) dω = dω1∧ω2∧…∧ωr + … + (−1)^{r+1} ω1∧…∧ω_{r−1}∧dωr = α∧ω. Since ω∧ωi = 0, wedging this expansion with ωi gives dωi∧ω = 0. Therefore dωi ∈ ⟨ω1,…,ωr⟩.
((a) ⇒ Condition) d⟨ω1,…,ωr⟩ ⊂ ⟨dω1,…,dωr,ω1,…,ωr⟩ by the derivation property of d, and the result follows directly.
12.
For another basis w1,…,wn, let wi = ∑j aij vj. Then
$$
Aw_1\wedge\dots\wedge Aw_n
= \Big(\sum_j a_{1j}Av_j\Big)\wedge\dots\wedge\Big(\sum_j a_{nj}Av_j\Big)
= \det(a_{ij})\, Av_1\wedge\dots\wedge Av_n,
$$
while w1∧…∧wn = det(aij) v1∧…∧vn by the calculation of Exercise 10, so detA does not depend on the choice of basis. Choosing the standard basis e1,…,en, the usual formula for detA follows directly. Also, for two matrices A, B,
$$ Bv_1\wedge\dots\wedge Bv_n = (\det B)\, v_1\wedge\dots\wedge v_n, $$
so
$$ ABv_1\wedge\dots\wedge ABv_n = (\det B)\, Av_1\wedge\dots\wedge Av_n = (\det B)(\det A)\, v_1\wedge\dots\wedge v_n, $$
which proves (detA)(detB) = det(AB).
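A short computational illustration (with our own helper names): extracting the coefficient of e1∧…∧en from Ae1∧…∧Aen reproduces the permutation-sum formula for detA, and the same computation confirms det(AB) = (detA)(detB):

```python
import itertools
import numpy as np

def det_via_wedge(A):
    """Coefficient of e1 ^ ... ^ en in Ae1 ^ ... ^ Aen, expanded as a sum over permutations."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        total += (-1)**inv * np.prod([A[perm[j], j] for j in range(n)])
    return total

rng = np.random.default_rng(2)
A, B = rng.random((4, 4)), rng.random((4, 4))

print(np.isclose(det_via_wedge(A), np.linalg.det(A)))                            # True
print(np.isclose(det_via_wedge(A @ B), det_via_wedge(A) * det_via_wedge(B)))     # True
```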
13.
The orthonormality of the basis {eΦ} follows by direct calculation.
(5) Since ∗∗ is linear, we only need to check the property on basis elements. Let Φ = {i1,…,ip} ⊂ {1,…,n} and let f be the element satisfying eΦ∧f = e1∧…∧en, so that ∗eΦ = f. Then
f∧eΦ = (−1)^{p(n−p)} eΦ∧f = (−1)^{p(n−p)} e1∧…∧en,
so ∗f = (−1)^{p(n−p)} eΦ. Therefore ∗∗eΦ = ∗f = (−1)^{p(n−p)} eΦ.
(6) Since the equation is bilinear in v and w, we only need to check it on basis elements, so we may assume v = eΦ1 and w = eΦ2. If Φ1 ≠ Φ2, then ⟨v,w⟩ = ∗(w∧∗v) = ∗(v∧∗w) = 0 by the definition of the star and the inner product. If Φ1 = Φ2, then ⟨v,w⟩ = ∗(w∧∗v) = ∗(v∧∗w) = 1, also by definition. Hence (6) is proved.
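Both (5) and (6) can be verified mechanically for the orthonormal basis forms of R^n. The sketch below (our own encoding of basis forms as increasing index tuples with signs, using 0-based indices) checks ∗∗ = (−1)^{p(n−p)} on every basis element and ⟨eΦ1,eΦ2⟩ = ∗(eΦ2∧∗eΦ1) = δ_{Φ1Φ2}:

```python
import itertools

n = 5  # ambient dimension (arbitrary); basis indices run over 0, ..., n-1

def sgn(seq):
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def wedge(a, b):
    """e_a ^ e_b for increasing index tuples a, b: returns (sign, sorted tuple) or (0, None)."""
    if set(a) & set(b):
        return 0, None
    merged = list(a) + list(b)
    return sgn(merged), tuple(sorted(merged))

def star(phi):
    """*e_phi for an increasing index tuple phi, so that e_phi ^ *e_phi = volume form."""
    comp = tuple(i for i in range(n) if i not in phi)
    return sgn(list(phi) + list(comp)), comp

# (5): ** = (-1)^{p(n-p)} on Lambda^p(V)
for p in range(n + 1):
    for phi in itertools.combinations(range(n), p):
        s1, c = star(phi)
        s2, phi_back = star(c)
        assert (s1 * s2, phi_back) == ((-1) ** (p * (n - p)), phi)

# (6): <e_phi1, e_phi2> = *(e_phi2 ^ *e_phi1), which should be 1 if phi1 == phi2 and 0 otherwise
p = 2
for phi1 in itertools.combinations(range(n), p):
    for phi2 in itertools.combinations(range(n), p):
        s, c = star(phi1)                    # *e_phi1 = s * e_c
        w_sign, w_idx = wedge(phi2, c)       # e_phi2 ^ e_c
        inner = 0 if w_idx is None else s * w_sign * star(w_idx)[0]
        assert inner == (1 if phi1 == phi2 else 0)

print("(5) and (6) verified for n =", n)
```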
14.
It suffices to show that for all v ∈ Λ^{p+1}(V) and w ∈ Λ^p(V), ⟨γ(v),w⟩ = ⟨v,ξ∧w⟩. Using Exercise 13, this is equivalent to each of the following:
$$
\begin{aligned}
(-1)^{np}\langle w, *(\xi\wedge *v)\rangle &= \langle \xi\wedge w, v\rangle\\
\Leftrightarrow\quad (-1)^{np}\,*\big(w\wedge(-1)^{p(n-p)}\xi\wedge *v\big) &= \langle \xi\wedge w, v\rangle\\
\Leftrightarrow\quad (-1)^{p}\,*\big(w\wedge\xi\wedge *v\big) &= \langle \xi\wedge w, v\rangle\\
\Leftrightarrow\quad *\big(\xi\wedge w\wedge *v\big) &= \langle \xi\wedge w, v\rangle,
\end{aligned}
$$
and the last equality holds by Exercise 13.
15.
(This is true only if ξ ≠ 0.) Obviously (ξ∧)² = 0. Conversely, if ξ∧w = 0, then
$$ w = \xi\wedge\gamma\!\left(\frac{w}{\langle\xi,\xi\rangle}\right). $$
To prove this, it is enough to show that for all v ∈ Λ^{p+1}(V), ⟨ξ∧γ(w),v⟩ = ⟨ξ,ξ⟩⟨w,v⟩. But
$$
\begin{aligned}
\langle \xi\wedge\gamma(w), v\rangle
&= \langle \gamma(w), \gamma(v)\rangle
= \big\langle (-1)^{np}*(\xi\wedge *w),\ (-1)^{np}*(\xi\wedge *v)\big\rangle
= *\big(*(\xi\wedge *w)\wedge(-1)^{p(n-p)}\,\xi\wedge *v\big)\\
&= *\big(\xi\wedge *v\wedge *(\xi\wedge *w)\big)
= \langle \xi\wedge *v,\ \xi\wedge *w\rangle.
\end{aligned}
$$
Now consider an orthogonal basis {vi} of V with v1 = ξ, and write w in the derived basis of Λ^{p+1}(V), say w = ∑_{|I|=p+1} a_I v_I, where I runs over the (p+1)-element subsets of the basis and v_I is the wedge of the elements of I in a fixed order. Since ξ∧w = 0, every I with a_I ≠ 0 must contain ξ. Consequently, the corresponding expression for ∗w involves no ξ. Hence, when ⟨ξ∧∗v, ξ∧∗w⟩ is computed by expanding into determinants of Gram matrices, the first column of each matrix is (⟨ξ,ξ⟩,0,…,0), and therefore ⟨ξ∧∗v, ξ∧∗w⟩ = ⟨ξ,ξ⟩⟨∗v,∗w⟩ = ⟨ξ,ξ⟩⟨v,w⟩. This proves ⟨ξ∧γ(w),v⟩ = ⟨ξ,ξ⟩⟨w,v⟩ for all v, hence w = ξ∧γ(w/⟨ξ,ξ⟩).
16.
Suppose ∑i θi∧ωi = 0. If we wedge the equation with ω1∧…∧ωp with the factor ωi omitted, all terms except the i-th vanish and we get θi∧ω1∧…∧ωp = 0 (up to sign). Therefore θi ∈ ⟨ω1,…,ωp⟩, so it is possible to write θi = ∑j Aij ωj, where the Aij are C∞ functions. Substituting these expressions into the given equation and noting that the ωi∧ωj for i < j are linearly independent, one obtains Aij = Aji.
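A quick numerical illustration of the symmetry statement (the encoding below is our own): representing 1-forms by vectors in R^n and θ∧ω by the antisymmetric matrix θωᵀ − ωθᵀ, the sum ∑i θi∧ωi vanishes when θi = ∑j Aij ωj with A symmetric, and generically not otherwise:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 3
omega = rng.random((p, n))                 # rows: p linearly independent 1-forms on R^n

def two_form_sum(A):
    """Sum_i theta_i ^ omega_i with theta_i = sum_j A[i, j] omega_j,
    each 2-form encoded as the antisymmetric matrix theta omega^T - omega theta^T."""
    theta = A @ omega
    return sum(np.outer(theta[i], omega[i]) - np.outer(omega[i], theta[i]) for i in range(p))

S = rng.random((p, p)); S = S + S.T        # symmetric coefficient matrix
N = rng.random((p, p)); N[0, 1] += 1.0     # deliberately non-symmetric

print(np.allclose(two_form_sum(S), 0))     # True:  symmetric A gives sum_i theta_i ^ omega_i = 0
print(np.allclose(two_form_sum(N), 0))     # False: non-symmetric A gives a nonzero 2-form
```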