# How to Solve for Properties and Proofs of Cross Product of Vectors for Calculus

Just like the dot product, the cross product is a definition that arises from an identity in vector calculus. In our case, to find the cross product we look at a parallelogram with sides given by the vectors **a** and **b**. If I want to find the area of this parallelogram, I need to know the base and the height. The base is ||**b**|| and the height is ||**a**|| SinΘ, where Θ is the angle between the two vectors. Therefore, the area is

||**a**|| ||**b**|| SinΘ = ||**a**|| ||**b**|| [(SinΘ)^{2}]^{0.5}

(This step is valid because 0 ≤ Θ ≤ 180°, so SinΘ is never negative.)

Now if we use the Pythagorean identity Sin^{2}Θ + Cos^{2}Θ = 1 and solve for Sin^{2}Θ, we can substitute that into our equation and get

||**a**|| ||**b**|| [(SinΘ)^{2}]^{0.5} = ||**a**|| ||**b**|| [1 - Cos^{2}Θ]^{0.5}

Now, from the definition of the dot product, we know that CosΘ = (**a** ∙ **b**) / (||**a**|| ||**b**||). Plugging this in for CosΘ gives us

||**a**|| ||**b**|| [1 - Cos^{2}Θ]^{0.5} = ||**a**|| ||**b**|| [1 – ((**a** ∙ **b**) / (||**a**|| ||**b**||))^{2}]^{0.5}

= ||**a**|| ||**b**|| [1 – (**a** ∙ **b**)^{2} / (||**a**||^{2} ||**b**||^{2})]^{0.5}

= [(||**a**|| ||**b**||)^{2}]^{0.5} [1 – (**a** ∙ **b**)^{2} / (||**a**||^{2} ||**b**||^{2})]^{0.5}

Writing ||**a**|| ||**b**|| as [(||**a**|| ||**b**||)^{2}]^{0.5} lets us treat it as a common term that was pulled out of a square root, so pushing it back inside gives us

[(||**a**|| ||**b**||)^{2}]^{0.5} [1 – (**a** ∙ **b**)^{2} / (||**a**||^{2} ||**b**||^{2})]^{0.5} = [(||**a**|| ||**b**||)^{2} – (||**a**|| ||**b**||)^{2} (**a** ∙ **b**)^{2} / (||**a**||^{2} ||**b**||^{2})]^{0.5}

And in the second term, (||**a**|| ||**b**||)^{2} = ||**a**||^{2} ||**b**||^{2} appears in both the numerator and the denominator, so they cancel and we have

[(||**a**|| ||**b**||)^{2} – (||**a**|| ||**b**||)^{2} (**a** ∙ **b**)^{2} / (||**a**||^{2} ||**b**||^{2})]^{0.5} = [(||**a**|| ||**b**||)^{2} – (**a** ∙ **b**)^{2}]^{0.5}
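If you like to double-check algebra with a computer, here is a quick numerical sketch (my own illustration with arbitrarily chosen vectors, not from Larson) confirming that ||**a**|| ||**b**|| SinΘ matches [(||**a**|| ||**b**||)^{2} – (**a** ∙ **b**)^{2}]^{0.5}:

```python
import math

# Arbitrary example vectors, chosen only for illustration
a = (2.0, -1.0, 3.0)
b = (1.0, 4.0, 0.5)

dot = sum(x * y for x, y in zip(a, b))   # a . b
na = math.sqrt(sum(x * x for x in a))    # ||a||
nb = math.sqrt(sum(x * x for x in b))    # ||b||

# Theta comes from the dot product definition: cos(theta) = a.b / (||a|| ||b||)
theta = math.acos(dot / (na * nb))

area_from_angle = na * nb * math.sin(theta)
area_from_dot = math.sqrt((na * nb) ** 2 - dot ** 2)
```

Both quantities agree to within floating-point rounding, exactly as the derivation predicts.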

Now all we have to do is some algebra. Let’s do this by expanding in steps. First,

(||**a**|| ||**b**||)^{2} = ||**a**||^{2} ||**b**||^{2}

= [(a_{1}^{2} + a_{2}^{2} + a_{3}^{2})^{0.5}]^{2} [(b_{1}^{2} + b_{2}^{2} + b_{3}^{2})^{0.5}]^{2}

= (a_{1}^{2} + a_{2}^{2} + a_{3}^{2})(b_{1}^{2} + b_{2}^{2} + b_{3}^{2})

= a_{1}^{2}b_{1}^{2} + a_{1}^{2}b_{2}^{2} + a_{1}^{2}b_{3}^{2} + a_{2}^{2}b_{1}^{2} + a_{2}^{2}b_{2}^{2} + a_{2}^{2}b_{3}^{2} + a_{3}^{2}b_{1}^{2} + a_{3}^{2}b_{2}^{2} + a_{3}^{2}b_{3}^{2}

Secondly,

(**a** ∙ **b**)^{2} = [a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3}]^{2}

= [a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3}] [a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3}]

= (a_{1}b_{1})(a_{1}b_{1}) + (a_{1}b_{1})(a_{2}b_{2}) + (a_{1}b_{1})(a_{3}b_{3}) + (a_{2}b_{2})(a_{1}b_{1}) + (a_{2}b_{2})(a_{2}b_{2}) + (a_{2}b_{2})(a_{3}b_{3}) + (a_{3}b_{3})(a_{1}b_{1}) + (a_{3}b_{3})(a_{2}b_{2}) + (a_{3}b_{3})(a_{3}b_{3})

= a_{1}^{2}b_{1}^{2}+ 2(a_{1}b_{1})(a_{2}b_{2}) + 2(a_{1}b_{1})(a_{3}b_{3}) + a_{2}^{2}b_{2}^{2} + 2(a_{2}b_{2})(a_{3}b_{3}) + a_{3}^{2}b_{3}^{2}

Here comes the ugly part. Plug all of this back into our equation and we have

[(||**a**|| ||**b**||)^{2} – (**a** ∙ **b**)^{2}]^{0.5} = [a_{1}^{2}b_{1}^{2} + a_{1}^{2}b_{2}^{2} + a_{1}^{2}b_{3}^{2} + a_{2}^{2}b_{1}^{2} + a_{2}^{2}b_{2}^{2} + a_{2}^{2}b_{3}^{2} + a_{3}^{2}b_{1}^{2} + a_{3}^{2}b_{2}^{2} + a_{3}^{2}b_{3}^{2} – (a_{1}^{2}b_{1}^{2} + 2(a_{1}b_{1})(a_{2}b_{2}) + 2(a_{1}b_{1})(a_{3}b_{3}) + a_{2}^{2}b_{2}^{2} + 2(a_{2}b_{2})(a_{3}b_{3}) + a_{3}^{2}b_{3}^{2})]^{0.5}

=[a_{1}^{2}b_{1}^{2} + a_{1}^{2}b_{2}^{2} + a_{1}^{2}b_{3}^{2} + a_{2}^{2}b_{1}^{2} + a_{2}^{2}b_{2}^{2} + a_{2}^{2}b_{3}^{2} + a_{3}^{2}b_{1}^{2} + a_{3}^{2}b_{2}^{2} + a_{3}^{2}b_{3}^{2} – a_{1}^{2}b_{1}^{2} -2(a_{1}b_{1})(a_{2}b_{2}) - 2(a_{1}b_{1})(a_{3}b_{3}) - a_{2}^{2}b_{2}^{2} - 2(a_{2}b_{2})(a_{3}b_{3}) - a_{3}^{2}b_{3}^{2}]^{0.5}

Fortunately, some terms cancel out and we have

[a_{1}^{2}b_{2}^{2} + a_{1}^{2}b_{3}^{2} + a_{2}^{2}b_{1}^{2} + a_{2}^{2}b_{3}^{2} + a_{3}^{2}b_{1}^{2} + a_{3}^{2}b_{2}^{2} - 2(a_{1}b_{1})(a_{2}b_{2}) - 2(a_{1}b_{1})(a_{3}b_{3}) - 2(a_{2}b_{2})(a_{3}b_{3})]^{0.5}

= [a_{1}^{2}b_{2}^{2} - 2(a_{1}b_{1})(a_{2}b_{2}) + a_{2}^{2}b_{1}^{2} + a_{1}^{2}b_{3}^{2} - 2(a_{1}b_{1})(a_{3}b_{3}) + a_{3}^{2}b_{1}^{2} + a_{2}^{2}b_{3}^{2} - 2(a_{2}b_{2})(a_{3}b_{3}) + a_{3}^{2}b_{2}^{2}]^{0.5}

=[(a_{1}b_{2} – a_{2}b_{1})^{2} + (a_{1}b_{3} – a_{3}b_{1})^{2} + (a_{2}b_{3} – a_{3}b_{2})^{2}]^{0.5}

Now, someone noticed that what we have inside the square root looks a lot like an attempt to find the length of a vector. In fact, what we have here is the magnitude of a vector that can be written as the *determinant of a matrix*! That is, someone plugged

| **i** **j** **k** |

| a_{1} a_{2} a_{3} | = **i**(a_{2}b_{3} – a_{3}b_{2}) - **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1})

| b_{1} b_{2} b_{3} |

into the formula for the magnitude of a vector. So, the above determinant and its expanded form are represented by **a x b** and

[(a_{1}b_{2} – a_{2}b_{1})^{2} + (a_{1}b_{3} – a_{3}b_{1})^{2} + (a_{2}b_{3} – a_{3}b_{2})^{2}]^{0.5} = |**a x b**|

Which, going all the way back to where we started this proof, equals ||**a**|| ||**b**|| SinΘ. So

|**a x b**| = ||**a**|| ||**b**|| SinΘ

Or: the length of the cross product of **a** and **b** equals the length of **a** times the length of **b** times the sine of the angle between the vectors. We now also have a way to find SinΘ:

SinΘ = |**a x b**| / (||**a**|| ||**b**||)

And this was all in an attempt to find the area of a parallelogram; we now know that the area equals the magnitude of the cross product. The cross product itself is a vector, unlike the dot product, which is a scalar. As we will see, it has many useful properties (Larson 792).
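Before moving on, a short Python sketch (again my own illustration, not from the text) ties the whole derivation together: it computes the cross product from the component formula and checks that its magnitude equals ||**a**|| ||**b**|| SinΘ.

```python
import math

def cross(a, b):
    # Component formula from expanding the i, j, k determinant
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Arbitrary example vectors
a = (1.0, 2.0, 3.0)
b = (4.0, -1.0, 2.0)

dot = sum(x * y for x, y in zip(a, b))
theta = math.acos(dot / (norm(a) * norm(b)))

cross_magnitude = norm(cross(a, b))              # |a x b|
parallelogram_area = norm(a) * norm(b) * math.sin(theta)
```

The two values match, which is exactly the statement |**a x b**| = ||**a**|| ||**b**|| SinΘ.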

## The Purpose of the Cross Product

Now that we know what this tool is, what does it do for us? The cross product is mainly used in vector calculus to find a vector that is orthogonal, or perpendicular, to two given vectors (792). How do I know that the cross product actually results in this? Remember that two vectors are orthogonal to one another if the dot product between them equals zero. So if I have vectors **a**, **b**, and cross product **a x b**, then

**a** ∙ (**a x b**) = **a** ∙ [**i**(a_{2}b_{3} – a_{3}b_{2}) - **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1})]

= a_{1}(a_{2}b_{3} – a_{3}b_{2}) – a_{2}(a_{1}b_{3} – a_{3}b_{1}) + a_{3}(a_{1}b_{2} – a_{2}b_{1})

= a_{1}a_{2}b_{3} – a_{1}a_{3}b_{2} – a_{2}a_{1}b_{3} + a_{2}a_{3}b_{1} + a_{3}a_{1}b_{2} – a_{3}a_{2}b_{1}

Now, if I rearrange a few of the terms using the commutative property, we will have

a_{1}a_{2}b_{3 }– a_{2}a_{1}b_{3} – a_{1}a_{3}b_{2 }+ a_{3}a_{1}b_{2} + a_{2}a_{3}b_{1}– a_{3}a_{2}b_{1}

Which we can see is just pairs of the same number being added and subtracted together, so

a_{1}a_{2}b_{3 }– a_{2}a_{1}b_{3} – a_{1}a_{3}b_{2 }+ a_{3}a_{1}b_{2} + a_{2}a_{3}b_{1}– a_{3}a_{2}b_{1} = 0

The proof is the same idea for the **b** vector. So when I find the cross product of two vectors, it can be handy to use this tool to check that I have applied the product correctly. Also note that if **a** and **b** are themselves orthogonal to each other, then Θ = 90° and

|**a x b**| = ||**a**|| ||**b**|| Sin90° = ||**a**|| ||**b**||

So in that case the length of the cross product is just the lengths of the **a** and **b** vectors multiplied together (797).
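This orthogonality is easy to verify numerically as well. The sketch below (with illustrative vectors of my own choosing) checks that both **a** ∙ (**a x b**) and **b** ∙ (**a x b**) come out to zero:

```python
def cross(a, b):
    # Component formula for a x b
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Arbitrary example vectors
a = (3.0, -2.0, 5.0)
b = (1.0, 6.0, -4.0)
n = cross(a, b)

# Both dot products should vanish: n is orthogonal to a and to b
a_dot_n = dot(a, n)
b_dot_n = dot(b, n)
```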

## Commutative Property of the Cross Product?

Wow, after all that work, we finally have what the cross product is, but is it commutative? That is, does

**a x b = b x a**?

First of all,

**a x b** = **i**(a_{2}b_{3} – a_{3}b_{2}) - **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1})

= **i**(–a_{3}b_{2} + a_{2}b_{3}) - **j**(–a_{3}b_{1} + a_{1}b_{3}) + **k**(–a_{2}b_{1} + a_{1}b_{2})

Because of the commutative property of real numbers. But notice now that

**i**(–a_{3}b_{2} + a_{2}b_{3}) - **j**(–a_{3}b_{1} + a_{1}b_{3}) + **k**(–a_{2}b_{1} + a_{1}b_{2}) = (-1)(-1) [**i**(–a_{3}b_{2} + a_{2}b_{3}) - **j**(–a_{3}b_{1} + a_{1}b_{3}) + **k**(–a_{2}b_{1} + a_{1}b_{2})]

= (-1)[**-i**(–a_{3}b_{2} + a_{2}b_{3}) + **j**(–a_{3}b_{1} + a_{1}b_{3}) - **k**(–a_{2}b_{1} + a_{1}b_{2})]

= (-1)**[i**(a_{3}b_{2} - a_{2}b_{3}) - **j**(a_{3}b_{1} - a_{1}b_{3}) + **k**(a_{2}b_{1} - a_{1}b_{2})]

= (-1)[**i**(b_{2}a_{3} - b_{3}a_{2}) - **j**(b_{1}a_{3} - b_{3}a_{1}) + **k**(b_{1}a_{2} - b_{2}a_{1})]

Which, by noticing the order of the terms, has the a and b components switched from my original cross product, so

(-1)[**i**(b_{2}a_{3} - b_{3}a_{2}) - **j**(b_{1}a_{3} - b_{3}a_{1}) + **k**(b_{1}a_{2} - b_{2}a_{1})] = (-1)(**b x a**)

The cross product is NOT commutative! We have just shown that

(**a x b**) = (-1)(**b x a**)

So be careful when changing the order of the terms, because you will not arrive at the same answer unless you incorporate that negative sign (791).
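A quick numerical check (with example values of my own choosing) makes the anticommutativity concrete: negating every component of **b x a** recovers **a x b** exactly.

```python
def cross(a, b):
    # Component formula for the cross product
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

# Arbitrary example vectors
a = (2.0, 0.0, -1.0)
b = (3.0, 5.0, 4.0)

ab = cross(a, b)
ba = cross(b, a)

# Each component of b x a is the negative of the matching a x b component
negated_ba = tuple(-x for x in ba)
```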

## The Distributive Property and the Cross Product

Let us now investigate and see what we will get when we expand

**a x (b + c)**

Well,

| **i** **j** **k** |

| a_{1} a_{2} a_{3} |

| b_{1} + c_{1} b_{2} + c_{2} b_{3} + c_{3} |

= **i**[a_{2}(b_{3} + c_{3}) – a_{3}(b_{2} + c_{2})] – **j**[a_{1}(b_{3} + c_{3}) – a_{3}(b_{1} + c_{1})] + **k**[a_{1}(b_{2} + c_{2}) – a_{2}(b_{1} + c_{1})]

After distributing the a component terms, we have

**i**[a_{2}b_{3} + a_{2}c_{3} – a_{3}b_{2} – a_{3}c_{2}] – **j**[a_{1}b_{3} + a_{1}c_{3} – a_{3}b_{1} – a_{3}c_{1}] + **k**[a_{1}b_{2} + a_{1}c_{2} – a_{2}b_{1} – a_{2}c_{1}]

And then if I rearrange everything inside the parentheses according to the commutative property of real numbers, I will have

**i**[a_{2}b_{3} – a_{3}b_{2} + a_{2}c_{3} – a_{3}c_{2}] – **j**[a_{1}b_{3} – a_{3}b_{1} + a_{1}c_{3} – a_{3}c_{1}] + **k**[a_{1}b_{2} – a_{2}b_{1} + a_{1}c_{2} – a_{2}c_{1}]

= **i**(a_{2}b_{3} – a_{3}b_{2}) + **i**(a_{2}c_{3} – a_{3}c_{2}) – **j**(a_{1}b_{3} – a_{3}b_{1}) – **j**(a_{1}c_{3} – a_{3}c_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1}) + **k**(a_{1}c_{2} – a_{2}c_{1})

And if I group together the common b and c terms, we will arrive at

**i**(a_{2}b_{3} – a_{3}b_{2}) – **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1}) + **i**(a_{2}c_{3} – a_{3}c_{2}) – **j**(a_{1}c_{3} – a_{3}c_{1}) + **k**(a_{1}c_{2} – a_{2}c_{1})

Which is just the sum of two of our determinants, one with a and b terms and another with a and c terms, so we have shown that

**a x (b + c)** = **a x b** + **a x c** (791).
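Here, too, a small Python sketch (with made-up example vectors) can confirm the distributive property componentwise:

```python
def cross(a, b):
    # Component formula for the cross product
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

# Arbitrary example vectors
a = (1.0, -2.0, 4.0)
b = (3.0, 0.0, -1.0)
c = (-2.0, 5.0, 2.0)

left = cross(a, add(b, c))             # a x (b + c)
right = add(cross(a, b), cross(a, c))  # a x b + a x c
```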

## The Cross Product and Scalars

What happens when I multiply a cross product with a scalar? Let’s see.

p(**a x b**) = p[**i**(a_{2}b_{3} – a_{3}b_{2}) - **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1})]

=p**i**(a_{2}b_{3} – a_{3}b_{2}) - p**j**(a_{1}b_{3} – a_{3}b_{1}) + p**k**(a_{1}b_{2} – a_{2}b_{1})

= **i**(pa_{2}b_{3} – pa_{3}b_{2}) - **j**(pa_{1}b_{3} – pa_{3}b_{1}) + **k**(pa_{1}b_{2} – pa_{2}b_{1})

And if I associate the p term with the a components, then

**i**(pa_{2}b_{3} – pa_{3}b_{2}) - **j**(pa_{1}b_{3} – pa_{3}b_{1}) + **k**(pa_{1}b_{2} – pa_{2}b_{1}) = (p**a** x **b**)

But if I associate the p term with the b components, then

**i**(a_{2}pb_{3} – a_{3}pb_{2}) - **j**(a_{1}pb_{3} – a_{3}pb_{1}) + **k**(a_{1}pb_{2} – a_{2}pb_{1}) = (**a** x p**b**)

So, in general,

p(**a x b**) = (p**a** x **b**) = (**a** x p**b**) (791).
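Again, a short numerical check (with an example scalar and vectors of my own choosing) confirms that all three expressions agree:

```python
def cross(a, b):
    # Component formula for the cross product
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

def scale(p, v):
    return tuple(p * x for x in v)

# Arbitrary example scalar and vectors
p = 3.0
a = (1.0, 2.0, -1.0)
b = (0.0, 4.0, 5.0)

scaled_product = scale(p, cross(a, b))  # p(a x b)
scaled_a = cross(scale(p, a), b)        # (pa) x b
scaled_b = cross(a, scale(p, b))        # a x (pb)
```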

## The Cross Product and the Zero Vector

Here we will look at three properties that make use of that zero vector. First, what if we take the cross product of a vector and the zero vector?

(**a x 0**) = [**i**(a_{2}·0 – a_{3}·0) - **j**(a_{1}·0 – a_{3}·0) + **k**(a_{1}·0 – a_{2}·0)]

= [**i**(0 – 0) - **j**(0 – 0) + **k**(0 – 0)]

= [**i**0- **j**0 + **k**0]

= **0**

Notice that it does not matter which vector is the zero vector; the result is still the zero vector! So

(**a x 0**) = (**0 x a**) = **0 (**791).

Second, what happens when I take the cross product of a vector and itself?

(**a x a**) = [**i**(a_{2}a_{3} – a_{3}a_{2}) - **j**(a_{1}a_{3} – a_{3}a_{1}) + **k**(a_{1}a_{2} – a_{2}a_{1})]

But notice how it is the same term minus itself, which equals zero. So

[**i**(a_{2}a_{3} – a_{3}a_{2}) - **j**(a_{1}a_{3} – a_{3}a_{1}) + **k**(a_{1}a_{2} – a_{2}a_{1})] = [**i**0- **j**0 + **k**0]

= **0**

Therefore, if I attempt to take the cross product of a vector with itself, I end up with **0** (791).
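Both zero-vector properties are easy to confirm numerically; the sketch below (my own illustration) checks **a x 0**, **0 x a**, and **a x a** in one go:

```python
def cross(a, b):
    # Component formula for the cross product
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

zero = (0.0, 0.0, 0.0)
a = (7.0, -3.0, 2.0)  # arbitrary example vector

a_cross_zero = cross(a, zero)  # a x 0
zero_cross_a = cross(zero, a)  # 0 x a
a_cross_a = cross(a, a)        # a x a
```

All three results are the zero vector, matching the two properties just proved.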

Finally, we will show that **a x b** = **0** if and only if **a** and **b** are scalar multiples of each other (792). Because this is an “if and only if” or IFF proof, we have to show that both “if **a x b** = **0**, then **a** and **b** are scalar multiples of each other” and “if **a** and **b** are scalar multiples of each other, then **a x b** = **0**” are true.

To start,

**a x b** = **i**(a_{2}b_{3} – a_{3}b_{2}) - **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1})

and if that equals **0**, then

**i**0 = **i**(a_{2}b_{3} – a_{3}b_{2})

**j**0 = **j**(a_{1}b_{3} – a_{3}b_{1})

**k**0 = **k**(a_{1}b_{2} – a_{2}b_{1})

Because all the components will have to equal one another if they are the same vector. Using the first equality as an example, we can factor out the common directional unit vector so that

0 = (a_{2}b_{3} – a_{3}b_{2})

Or that after adding a_{3}b_{2} to both sides that

a_{2}b_{3} = a_{3}b_{2}

And if I divide both sides by a_{2}b_{2} (assuming these components are nonzero), we will arrive at

b_{3}/ b_{2} = a_{3} / a_{2}

Now, this is a set of ratios, or a proportion. Repeating the same steps with the **j** and **k** components gives b_{3} / b_{1} = a_{3} / a_{1} and b_{2} / b_{1} = a_{2} / a_{1}, so the components of **a** and **b** are all in the same ratio. But that means one vector is just a scalar multiple of the other.

Great, now we need to show that the other statement is also true. If **a** and **b** are scalar multiples of one another, then **b** = p**a**, which means

**a x b** = **i**(a_{2}b_{3} – a_{3}b_{2}) - **j**(a_{1}b_{3} – a_{3}b_{1}) + **k**(a_{1}b_{2} – a_{2}b_{1})

= **i**(a_{2}pa_{3} – a_{3}pa_{2}) - **j**(a_{1}pa_{3} – a_{3}pa_{1}) + **k**(a_{1}pa_{2} – a_{2}pa_{1})

= **i**(0) - **j**(0) + **k**(0)

= **0**
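To round things off, a quick numerical sketch (my own example, with p = -4 as an arbitrary scalar) illustrates both directions of this property: a scalar multiple of **a** gives the zero vector, while a vector that is not parallel to **a** does not.

```python
def cross(a, b):
    # Component formula for the cross product
    return (a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0])

a = (2.0, -1.0, 3.0)          # arbitrary example vector
p = -4.0
b = tuple(p * x for x in a)   # b is a scalar multiple of a

parallel_result = cross(a, b)     # should be the zero vector

c = (1.0, 0.0, 0.0)               # not parallel to a
nonparallel_result = cross(a, c)  # should NOT be the zero vector
```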

## Works Cited

Larson, Ron, Robert Hostetler, and Bruce H. Edwards. *Calculus: Early Transcendental Functions*. Maidenhead: McGraw-Hill Education, 2007. Print. 791-792, 797.


**© 2014 Leonard Kelley**
