# Useful Vector Properties and Proofs for Calculus

In physics we frequently need vectors to give meaning to the phenomena we describe. But there is more than one way to talk about a quantity. Often we use a *scalar*, a number that has units but no direction. That means I may know how much of something I have, but not the direction in which it acts on an object. To describe that, we attach a direction, and a quantity with both a size and a direction is called a *vector*. A vector is always symbolized with a letter that is bolded or has an arrow above it; in this hub I will use bold letters. Vector **a** has the components <a_{1}, a_{2}, a_{3}> and a length called the *magnitude*. We define that as ||**a**||, and it equals [a_{1}² + a_{2}² + a_{3}²]^{0.5}. This result stems from the Pythagorean Theorem: for a right triangle, c² = a² + b², where c is the distance between the two points. The components tell me the direction in which each part is applied, with the first in the x-direction, the second in the y-direction, and the third in the z-direction (Larson 762-3).
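As a quick spot check of the magnitude formula, here is a short Python sketch of my own (not from the text), treating a vector as a tuple of components:

```python
import math

def magnitude(a):
    # ||a|| = sqrt(a1^2 + a2^2 + a3^2), from the Pythagorean Theorem
    return math.sqrt(sum(x ** 2 for x in a))

print(magnitude((3, 4, 12)))  # 13.0, since 9 + 16 + 144 = 169
```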

## Vector Sum Proofs

To add vectors together, we need to add their components together accordingly. That is, the *vector sum* is **a + b** = <a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3}>. Similarly, the *vector difference* is **a – b** = <a_{1} – b_{1}, a_{2} – b_{2}, a_{3} – b_{3}> (764).
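The componentwise definitions translate directly into code; this is my own minimal sketch, not the book's notation:

```python
def vec_add(a, b):
    # Componentwise sum: <a1 + b1, a2 + b2, a3 + b3>
    return tuple(x + y for x, y in zip(a, b))

def vec_sub(a, b):
    # Componentwise difference: <a1 - b1, a2 - b2, a3 - b3>
    return tuple(x - y for x, y in zip(a, b))

print(vec_add((1, 2, 3), (4, 5, 6)))  # (5, 7, 9)
print(vec_sub((1, 2, 3), (4, 5, 6)))  # (-3, -3, -3)
```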

The commutative property, the fact that the order in which we add numbers does not matter, is used all the time with real numbers. But does it hold for vectors also? Well,

**a + b** = <a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3}>.

But since the components are real numbers, the order does not matter, so

<a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3}> = <b_{1} + a_{1}, b_{2} + a_{2}, b_{3} + a_{3}>

= **b + a**.

Yes, the commutative property works for vectors, so **a + b = b + a**.
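A numerical check is not a proof (the algebra above covers all vectors), but it is a reassuring spot test. A sketch of my own in Python:

```python
def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

a, b = (1, -2, 3), (4, 5, -6)
commutes = vec_add(a, b) == vec_add(b, a)  # a + b = b + a?
print(commutes)  # True
```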

What about the associative property, where I can group the numbers being added in different sets of parentheses? Let's find out. If we have (**a + b**) + **c**, then the vector in the parentheses equals <a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3}>, so

(**a + b**) + **c** = <a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3}> + <c_{1}, c_{2}, c_{3}>

= <a_{1} + b_{1} + c_{1}, a_{2} + b_{2} + c_{2}, a_{3} + b_{3} + c_{3}>.

But since the components are real numbers and the way I add them does not matter,

<a_{1} + b_{1} + c_{1}, a_{2} + b_{2} + c_{2}, a_{3} + b_{3} + c_{3}> = <a_{1}, a_{2}, a_{3}> + <b_{1} + c_{1}, b_{2} + c_{2}, b_{3} + c_{3}>

= **a + **(**b + c**).

Yes, the associative property works for vectors, so (**a + b**) + **c** = **a** + (**b + c**) (765).
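Again as a spot check rather than a proof, my own quick sketch of associativity on sample components:

```python
def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

a, b, c = (1, -2, 3), (4, 5, -6), (0, 7, 2)
associates = vec_add(vec_add(a, b), c) == vec_add(a, vec_add(b, c))
print(associates)  # True
```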

Now we shall see if the additive identity property works. This is simply the fact that adding zero to any number leaves the number unchanged. We do have the *zero vector*, where all the components are simply zero. So

**a + 0** = <a_{1}, a_{2}, a_{3}> + <0, 0, 0>

= <a_{1} + 0, a_{2} + 0, a_{3} + 0>.

A theme is beginning to develop. Because the components are real numbers and adding zero does not change them,

<a_{1} + 0, a_{2} + 0, a_{3} + 0> = <a_{1}, a_{2}, a_{3}>.

The additive identity property works for vectors. So **a + 0 = a**.
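The identity property is equally easy to spot-check numerically; a sketch of my own:

```python
a = (2, -1, 7)
zero = (0, 0, 0)
identity_holds = tuple(x + z for x, z in zip(a, zero)) == a  # a + 0 = a?
print(identity_holds)  # True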

Another additive property exists for us to check: the additive inverse property. What does **a + (-a)** equal? Well,

**a + (-a)** = <a_{1}, a_{2}, a_{3}> + <-a_{1}, -a_{2}, -a_{3}>

= <a_{1} + (-a_{1}), a_{2} + (-a_{2}), a_{3} + (-a_{3})>.

Again, these are real numbers, and adding a number to its opposite results in zero. So

<a_{1} + (-a_{1}), a_{2} + (-a_{2}), a_{3} + (-a_{3})> = <0, 0, 0>

= **0**.

The additive inverse property works. So **a + (-a) = 0**.
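One more spot check of my own, negating each component and summing:

```python
a = (2, -1, 7)
neg_a = tuple(-x for x in a)  # the additive inverse -a
total = tuple(x + y for x, y in zip(a, neg_a))  # a + (-a)
print(total)  # (0, 0, 0)
```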

## Vector Distribution Proofs

Sometimes we will wish to change the length of the vector we are using, especially in physics. To do this, we can multiply the vector by a scalar. The result, whose length is some multiple of the original vector's, is a *scalar multiple* and is defined as c**a** = <ca_{1}, ca_{2}, ca_{3}> (764).
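In code, scaling is just multiplying each component; my own minimal sketch:

```python
def scalar_mul(c, a):
    # c * a = <c*a1, c*a2, c*a3>
    return tuple(c * x for x in a)

print(scalar_mul(3, (1, -2, 4)))  # (3, -6, 12)
```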

If we are using more than one scalar on a vector, we can multiply the scalars together first and then apply the product to the vector. To show this is true, we start with

c(d**a**) = c<da_{1}, da_{2}, da_{3}>

= <cda_{1}, cda_{2}, cda_{3}>

= (cd)<a_{1}, a_{2}, a_{3}>

= (cd)**a**.
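A quick numerical spot check of c(d**a**) = (cd)**a**, in my own sketch:

```python
def scalar_mul(c, a):
    return tuple(c * x for x in a)

c, d, a = 2, 5, (1, -3, 4)
same = scalar_mul(c, scalar_mul(d, a)) == scalar_mul(c * d, a)
print(same)  # True
```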

Knowing this, does the distributive property work? We need to look at two examples of it to confirm that it does. First, let’s look at

(c + d)**a** = (c + d)<a_{1}, a_{2}, a_{3}>

= <(c + d)a_{1}, (c + d)a_{2}, (c + d)a_{3}>

= <ca_{1} + da_{1}, ca_{2} + da_{2}, ca_{3} + da_{3}>,

which can be broken down into the sum of two vectors based on vector addition. So

<ca_{1} + da_{1}, ca_{2} + da_{2}, ca_{3} + da_{3}> = <ca_{1}, ca_{2}, ca_{3}> + <da_{1}, da_{2}, da_{3}>

= c<a_{1}, a_{2}, a_{3}> + d<a_{1}, a_{2}, a_{3}>

= c**a** + d**a**.

Indeed, distributing a sum of two scalars over a vector is the same as multiplying each scalar by the vector and then adding the results (765).
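This first distributive case can also be spot-checked on sample numbers; a sketch of my own:

```python
def scalar_mul(c, a):
    return tuple(c * x for x in a)

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

c, d, a = 2, 3, (1, -4, 5)
lhs = scalar_mul(c + d, a)                         # (c + d)a
rhs = vec_add(scalar_mul(c, a), scalar_mul(d, a))  # ca + da
print(lhs == rhs)  # True
```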

But what about c(**a + b**)? Well,

c(**a + b**) = c<a_{1} + b_{1}, a_{2} + b_{2}, a_{3} + b_{3}>

= <c(a_{1} + b_{1}), c(a_{2} + b_{2}), c(a_{3} + b_{3})>

= <ca_{1} + cb_{1}, ca_{2} + cb_{2}, ca_{3} + cb_{3}>.

Again, because of vector addition, we can break this down into the sum of two vectors. So,

<ca_{1} + cb_{1}, ca_{2} + cb_{2}, ca_{3} + cb_{3}> = <ca_{1}, ca_{2}, ca_{3}> + <cb_{1}, cb_{2}, cb_{3}>

= c<a_{1}, a_{2}, a_{3}> + c<b_{1}, b_{2}, b_{3}>

= c**a** + c**b**.

We can now see that the distribution of a scalar across a vector sum also works.
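And the second distributive case, spot-checked the same way in a sketch of my own:

```python
def scalar_mul(c, a):
    return tuple(c * x for x in a)

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

c, a, b = 3, (1, -4, 5), (2, 0, -1)
lhs = scalar_mul(c, vec_add(a, b))                 # c(a + b)
rhs = vec_add(scalar_mul(c, a), scalar_mul(c, b))  # ca + cb
print(lhs == rhs)  # True
```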

Finally, one last proof before we are ready to continue on our quest through vectorland. Remember that one times any real number equals itself. With vectors,

1(**a**) = 1<a_{1}, a_{2}, a_{3}>

= <1a_{1}, 1a_{2}, 1a_{3}>.

And since all the components are real numbers,

<1a_{1}, 1a_{2}, 1a_{3}> = <a_{1}, a_{2}, a_{3}>

= **a**.

Therefore, 1(**a**) = **a**. Another case with real numbers that applies to vectors is multiplying by zero, which always equals zero. For vectors, the product is simply the zero vector **0**, or 0(**a**) = **0**.
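Both identities check out numerically as well; a final sketch of my own:

```python
a = (3, -7, 2)
one_times = tuple(1 * x for x in a)
zero_times = tuple(0 * x for x in a)
print(one_times == a)           # True: 1a = a
print(zero_times == (0, 0, 0))  # True: 0a = 0
```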

## Works Cited

Larson, Ron, Robert Hostetler, and Bruce H. Edwards. *Calculus: Early Transcendental Functions*. Maidenhead: McGraw-Hill Education, 2007. Print. 762-5.


**© 2014 Leonard Kelley**
