## Vladimir Dobrushkin

http://math.uri.edu/~dobrush/

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the appendix entitled GNU Free Documentation License.

# Vector Spaces

In mathematics, physics, and engineering, a Euclidean **vector** (or simply a vector) is a geometric object
that has magnitude (or length) and direction. Many familiar physical notions, such as forces, velocities, and
accelerations, involve both a magnitude (the amount of the force, velocity, or acceleration) and a direction. In most
physical situations involving vectors, only the magnitude and direction of the vector are significant; consequently,
we regard vectors with the same length and direction as equal, irrespective of their positions.

It is customary to identify vectors with arrows (geometric objects). The tail of the arrow is called the
**initial point** of the vector and the tip
the **terminal point**. To emphasize this approach, an arrow is placed above the initial and terminal
points; for example, the notation \( {\bf v} = \vec{AB} \) tells us that *A* is the
starting point of the vector **v** and *B* is its terminal point. In this tutorial
(as in most science papers and textbooks), we will denote vectors by boldface lowercase
letters such as **v**, **u**, or **x**.

Any two vectors **x** and **y** can be added in "tail-to-head" manner; that is, either
**x** or **y** may be applied to any point and then the other vector is applied to the
endpoint of the first. If this is done, the endpoint of the second vector is the endpoint of their sum, which is denoted
by **x** + **y**. Besides the operation of vector addition there is another natural
operation that can be performed on vectors---multiplication by a scalar, where scalars are often taken to be real numbers.
When a vector is multiplied by a real number *k*, its magnitude is multiplied by |*k*| and its direction
remains the same when *k* is positive and is reversed when *k* is negative. Such a vector is
denoted by *k***x**.
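The two operations just described can be sketched numerically by representing vectors as tuples of components; this is a minimal illustration, and the function names `add` and `scale` are our own:

```python
# Vectors as tuples of components; the two operations described above
# (names `add` and `scale` are illustrative, not from the text).

def add(x, y):
    """Tail-to-head addition, computed component-wise."""
    return tuple(a + b for a, b in zip(x, y))

def scale(k, x):
    """Multiply a vector by a scalar k: each component is scaled by k."""
    return tuple(k * a for a in x)

v = (1.0, 2.0)
u = (3.0, -1.0)
print(add(v, u))     # (4.0, 1.0)
print(scale(-2, v))  # (-2.0, -4.0): magnitude doubled, direction reversed
```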

The concept of vector, as we know it today, evolved gradually over a period of more than 200 years. The Italian mathematician, senator, and municipal councilor Giusto Bellavitis (1803--1880) abstracted the basic idea in 1835. The term vector was introduced by the Irish mathematician, astronomer, and mathematical physicist William Rowan Hamilton (1805--1865) as part of a quaternion.

Vectors can also be described algebraically. Historically, the first vectors were Euclidean vectors that can be
expanded through standard basis vectors that are used as coordinates. Then any vector can be uniquely represented
by a sequence of scalars called coordinates or components. The set of such ordered *n*-tuples is denoted by
\( \mathbb{R}^n . \) When scalars are complex numbers, the set of ordered *n*-tuples
of complex numbers is denoted by \( \mathbb{C}^n . \) Motivated by these two approaches, we
present the general definition of vectors.

A **vector space** *V* over the set of either real numbers or complex numbers is a set of elements, called vectors, together with two operations that satisfy the eight axioms listed below.

1. The first operation is an inner operation that assigns to any two vectors **x** and **y** a third vector, commonly written as **x** + **y** and called the sum of these two vectors.
2. The second operation is an outer operation that assigns to any scalar *k* and vector **x** another vector, denoted by *k***x**.

- Associativity of addition: \( ({\bf v} + {\bf u}) + {\bf w} = {\bf v} + ({\bf u} + {\bf w}) \) for all \( {\bf v} , {\bf u} , {\bf w} \in V . \)
- Commutativity of addition: \( {\bf v} + {\bf u} = {\bf u} + {\bf v} \) for all \( {\bf v} , {\bf u} \in V . \)
- Identity element of addition: there exists an element \( {\bf 0} \in V , \) called the zero vector, such that \( {\bf v} + {\bf 0} = {\bf v} \) for every vector **v** in *V*.
- Inverse elements of addition: for every vector **v**, there exists an element \( -{\bf v} \in V , \) called the additive inverse of **v**, such that \( {\bf v} + (-{\bf v}) = {\bf 0} . \)
- Compatibility of scalar multiplication with field multiplication: \( a(b{\bf v}) = (ab){\bf v} \) for any scalars *a* and *b* and arbitrary vector **v**.
- Identity element of scalar multiplication: \( 1{\bf v} = {\bf v} , \) where 1 denotes the multiplicative identity.
- Distributivity of scalar multiplication with respect to vector addition: \( k\left( {\bf v} + {\bf u}\right) = k{\bf v} + k{\bf u} \) for any scalar *k* and arbitrary vectors **v** and **u**.
- Distributivity of scalar multiplication with respect to field addition: \( \left( a+b \right) {\bf v} = a\,{\bf v} + b\,{\bf v} \) for any two scalars *a* and *b* and arbitrary vector **v**. ■
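The eight axioms can be spot-checked numerically for \( \mathbb{R}^n \) with component-wise operations. The sketch below is a sanity check on sample vectors, not a proof; all names in it are our own:

```python
import random

# Numerical spot-check (not a proof) that R^n with component-wise
# addition and scalar multiplication satisfies the eight axioms.

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(k, x):
    return tuple(k * a for a in x)

def close(x, y, tol=1e-9):
    return all(abs(a - b) < tol for a, b in zip(x, y))

random.seed(1)
n = 3
v, u, w = (tuple(random.uniform(-5, 5) for _ in range(n)) for _ in range(3))
a, b = 2.5, -1.5
zero = (0.0,) * n

assert close(add(add(v, u), w), add(v, add(u, w)))    # associativity
assert close(add(v, u), add(u, v))                    # commutativity
assert close(add(v, zero), v)                         # additive identity
assert close(add(v, scale(-1, v)), zero)              # additive inverse
assert close(scale(a, scale(b, v)), scale(a * b, v))  # compatibility
assert close(scale(1, v), v)                          # identity scalar
assert close(scale(a, add(v, u)), add(scale(a, v), scale(a, u)))
assert close(scale(a + b, v), add(scale(a, v), scale(b, v)))
print("all eight axioms hold for these sample vectors")
```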

Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century; however, the idea crystallized with the work of the Prussian/German mathematician Hermann Günther Grassmann (1809--1877), who published a paper in 1862. He was also a linguist, physicist, neohumanist, general scholar, and publisher. His mathematical work was little noted until he was in his sixties. It is interesting that while he was a student at the University of Berlin, Hermann studied theology, also taking classes in classical languages, philosophy, and literature. He does not appear to have taken courses in mathematics or physics. Although he lacked university training in mathematics, it was the field that most interested him when he returned to Stettin (Province of Pomerania, Kingdom of Prussia; present-day Szczecin, Poland) in 1830 after completing his studies in Berlin.

The set \( \mathbb{R}^n \) of ordered *n*-tuples is our first familiar example of a vector space. This space has a standard basis: \( {\bf e}_1 = (1,0,0,\ldots ,0 ) ,\quad {\bf e}_2 = (0,1,0,\ldots , 0 ), \ldots , {\bf e}_n = (0,0,\ldots , 0,1) .\) In \( \mathbb{R}^3 \) these unit vectors are denoted by **i**, **j**, and **k**.

Let |*a*, *b*| denote an open, closed, or semiclosed interval on the real axis. The set \( C(|a,b|) \) of all continuous functions on the interval |*a*, *b*| is a vector space.

A polynomial of degree *n* is an expression of the form \( p_n (x) = a_0 + a_1 x + \cdots + a_n x^n , \) where *n* is a nonnegative integer and each coefficient \( a_i \) is a scalar. The zero polynomial is the polynomial having all coefficients equal to zero. The polynomials \( p_n (x) = a_0 + a_1 x + \cdots + a_n x^n \) and \( q_m (x) = b_0 + b_1 x + \cdots + b_m x^m , \) where for simplicity \( n\ge m , \) can be added:
\[ p_n (x) + q_m (x) = (a_0 + b_0 ) + (a_1 + b_1 )x + \cdots + (a_m + b_m )x^m + a_{m+1} x^{m+1} + \cdots + a_n x^n . \]
The set of all polynomials of degree at most *n* is a vector space.
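Polynomial addition is easy to sketch in code by storing a polynomial as its list of coefficients \( [a_0 , a_1 , \ldots , a_n ] \); the shorter list is padded with zeros, mirroring the formula above (the function name `poly_add` is our own):

```python
# Polynomials as coefficient lists [a0, a1, ..., an]; addition pads
# the shorter list with zeros (illustrative helper, name is our own).

def poly_add(p, q):
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p))
    q = q + [0] * (m - len(q))
    return [a + b for a, b in zip(p, q)]

# (1 + 2x + 3x^2) + (4 + 5x) = 5 + 7x + 3x^2
print(poly_add([1, 2, 3], [4, 5]))  # [5, 7, 3]
```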

The **dot product** of two vectors of the same size \( {\bf x} = \left[ x_1 , x_2 , \ldots , x_n \right] \) and \( {\bf y} = \left[ y_1 , y_2 , \ldots , y_n \right] \) (regardless of whether they are columns or rows) is the number, denoted either by \( {\bf x} \cdot {\bf y} \) or \( \left\langle {\bf x} , {\bf y} \right\rangle ,\)
\[ {\bf x} \cdot {\bf y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n . \]
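As a quick illustration, the dot product takes a couple of lines of Python (the function name `dot` is our own):

```python
# Dot product of two real vectors of the same size.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

print(dot([1, 2, 3], [4, -5, 6]))  # 1*4 + 2*(-5) + 3*6 = 12
```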

The **outer product**, the tensor product of two coordinate vectors \( {\bf u} = \left[ u_1 , u_2 , \ldots , u_m \right] \) and \( {\bf v} = \left[ v_1 , v_2 , \ldots , v_n \right] , \) denoted \( {\bf u} \otimes {\bf v} , \) is an *m*-by-*n* matrix **W** whose entries satisfy \( w_{i,j} = u_i v_j . \) The outer product \( {\bf u} \otimes {\bf v} \) is equivalent to the matrix multiplication \( {\bf u} \, {\bf v}^{\ast} \) (or \( {\bf u} \, {\bf v}^{\mathrm T} \) if the vectors are real), provided that **u** is represented as an \( m \times 1 \) column vector and **v** as an \( n \times 1 \) column vector. Here \( {\bf v}^{\ast} = \overline{{\bf v}^{\mathrm T}} . \) For three-dimensional vectors \( {\bf a} = a_1 \,{\bf i} + a_2 \,{\bf j} + a_3 \,{\bf k} = \left[ a_1 , a_2 , a_3 \right] \) and \( {\bf b} = b_1 \,{\bf i} + b_2 \,{\bf j} + b_3 \,{\bf k} = \left[ b_1 , b_2 , b_3 \right] , \) it is possible to define a special multiplication, called the **cross-product**:
\[ {\bf a} \times {\bf b} = \left( a_2 b_3 - a_3 b_2 \right) {\bf i} + \left( a_3 b_1 - a_1 b_3 \right) {\bf j} + \left( a_1 b_2 - a_2 b_1 \right) {\bf k} . \]

For example, if *m* = 4 and *n* = 3, then \( {\bf u} \otimes {\bf v} \) is a 4-by-3 matrix. In *Mathematica*, the outer product has a special command:

`Outer[Times, {1, 2, 3, 4}, {a, b, c}]`
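For readers working outside *Mathematica*, both products can be sketched in Python (the function names `outer` and `cross` are our own):

```python
# Outer product: m-by-n matrix W with W[i][j] = u[i] * v[j].
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

# Cross product of two 3-dimensional real vectors.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

print(outer([1, 2, 3, 4], [10, 20, 30]))  # a 4-by-3 matrix
print(cross((1, 0, 0), (0, 1, 0)))        # (0, 0, 1), i.e. i x j = k
```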

An **inner product** of two vectors of the same size, usually denoted by \( \left\langle {\bf x} , {\bf y} \right\rangle ,\) is a generalization of the dot product; it must satisfy the following properties:

- \( \left\langle {\bf v}+{\bf u} , {\bf w} \right\rangle = \left\langle {\bf v} , {\bf w} \right\rangle + \left\langle {\bf u} , {\bf w} \right\rangle . \)
- \( \left\langle {\bf v} , \alpha {\bf u} \right\rangle = \alpha \left\langle {\bf v} , {\bf u} \right\rangle \) for any scalar α.
- \( \left\langle {\bf v} , {\bf u} \right\rangle = \overline{\left\langle {\bf u} , {\bf v} \right\rangle} , \) where overline means complex conjugate.
- \( \left\langle {\bf v} , {\bf v} \right\rangle \ge 0 , \) with equality if and only if \( {\bf v} = {\bf 0} . \)

The fourth condition in the list above is known as the positive-definite condition.
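These four properties can be verified numerically for the standard inner product on \( \mathbb{C}^n \) that is linear in its second argument, \( \left\langle {\bf v} , {\bf u} \right\rangle = \sum_i \overline{v_i}\, u_i \). The sketch below checks them on sample vectors (all names are our own):

```python
# Standard complex inner product, linear in the second argument,
# checked against the four properties above on sample vectors.

def inner(v, u):
    return sum(a.conjugate() * b for a, b in zip(v, u))

v = [1 + 2j, -3j]
u = [2 - 1j, 4 + 1j]
w = [0.5j, 1 + 1j]
alpha = 2 - 3j

# additivity in the first argument
assert abs(inner([a + b for a, b in zip(v, u)], w)
           - (inner(v, w) + inner(u, w))) < 1e-12
# homogeneity in the second argument
assert abs(inner(v, [alpha * b for b in u]) - alpha * inner(v, u)) < 1e-12
# conjugate symmetry
assert abs(inner(v, u) - inner(u, v).conjugate()) < 1e-12
# positive-definiteness: <v, v> is real and positive for nonzero v
assert inner(v, v).real > 0 and abs(inner(v, v).imag) < 1e-12
print("all four inner product properties hold for these samples")
```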

Two vectors **u** and **v** of the same size are **orthogonal** (or **perpendicular**) when their inner product is zero: \( \left\langle {\bf u} , {\bf v} \right\rangle = 0 . \) We abbreviate this as \( {\bf u} \perp {\bf v} . \)

If **A** is an \( n \times n \) positive definite matrix and **u** and **v** are *n*-vectors, then we can define the weighted Euclidean inner product \( \left\langle {\bf u} , {\bf v} \right\rangle = {\bf u} \, {\bf A} \, {\bf v}^{\mathrm T} . \) In particular, if \( w_1 , w_2 , \ldots , w_n \) are positive real numbers, which are called weights, and if \( {\bf u} = ( u_1 , u_2 , \ldots , u_n ) \) and \( {\bf v} = ( v_1 , v_2 , \ldots , v_n ) \) are vectors in \( \mathbb{R}^n , \) then the formula
\[ \left\langle {\bf u} , {\bf v} \right\rangle = w_1 u_1 v_1 + w_2 u_2 v_2 + \cdots + w_n u_n v_n \]
defines the **weighted Euclidean inner product** with weights \( w_1 , w_2 , \ldots , w_n . \)
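A brief sketch of the weighted Euclidean inner product (the function name `weighted_inner` is our own):

```python
# Weighted Euclidean inner product with positive weights w_1, ..., w_n.

def weighted_inner(u, v, w):
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

u = (1.0, 2.0, 3.0)
v = (4.0, -1.0, 0.5)
w = (2.0, 0.5, 1.0)  # positive weights
print(weighted_inner(u, v, w))  # 2*4 + 0.5*(-2) + 1*1.5 = 8.5
```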

The weighted Euclidean inner product is a special case of the **matrix inner product**. Let **A** be an invertible *n*-by-*n* matrix. Then the formula \( \left\langle {\bf u} , {\bf v} \right\rangle = {\bf A}\,{\bf u} \cdot {\bf A}\,{\bf v} \) defines an inner product generated by **A**.

Inner products can also be defined on function spaces. On the space \( C(|a,b|) \) of continuous functions, we can define the inner product of two functions *f* and *g* as
\[ \left\langle f , g \right\rangle = \int_a^b f(x)\, g(x)\, {\text d}x . \]
The same formula defines an inner product on the space of polynomials of degree at most *n*.
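The inner product generated by an invertible matrix can be spot-checked numerically; the matrix **A** and the sample vectors below are our own illustration:

```python
# Inner product generated by an invertible matrix A:
# <u, v>_A = (A u) . (A v). All values here are sample data.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def inner_A(A, u, v):
    return dot(matvec(A, u), matvec(A, v))

A = [[2, 1], [0, 1]]  # invertible: det = 2
u, v = [1, 2], [3, -1]

print(inner_A(A, u, v))                    # 18
assert inner_A(A, u, v) == inner_A(A, v, u)  # symmetry
assert inner_A(A, u, u) > 0                  # positivity for nonzero u
```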

With the dot product, we can assign a length to a vector, which is also called the **Euclidean norm** or 2-norm:
\[ \| {\bf x} \|_2 = \sqrt{{\bf x} \cdot {\bf x}} = \left( |x_1|^2 + |x_2|^2 + \cdots + |x_n|^2 \right)^{1/2} . \]
More generally, a norm can be defined for any real number \( p \ge 1 , \) called the *p*-norm: \[ \| {\bf x} \|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_n|^p \right)^{1/p} . \]
The case *p* = 1 has a special name: the Taxicab norm or Manhattan norm, which is also called the 1-norm:
\[ \| {\bf x} \|_1 = |x_1| + |x_2| + \cdots + |x_n| . \]
In *Mathematica*, the `Norm` command computes the 2-norm by default; an optional second argument specifies *p*:

`Norm[{2, \[ImaginaryJ], -2}]`

`Norm[{2, \[ImaginaryJ], -2}, 3/2]`
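The *p*-norm is also straightforward to sketch in Python (the function name `p_norm` is our own); for the vector \( (2, {\bf j}, -2) \) the Euclidean norm is \( \sqrt{4+1+4} = 3 \):

```python
# p-norm of a real or complex vector; p = 2 gives the Euclidean norm
# and p = 1 the Taxicab (Manhattan) norm.

def p_norm(x, p=2):
    return sum(abs(a) ** p for a in x) ** (1 / p)

v = [2, 1j, -2]
print(p_norm(v))     # Euclidean norm: sqrt(4 + 1 + 4) = 3.0
print(p_norm(v, 1))  # Manhattan norm: 2 + 1 + 2 = 5.0
```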

A **unit vector** **u** is a vector whose length equals one: \( {\bf u} \cdot {\bf u} =1 . \) We say that two vectors **x** and **y** are perpendicular if their dot product is zero. Many other norms are known.

For any inner product, the **Cauchy--Bunyakovsky--Schwarz** (or simply CBS) **inequality** holds:
\[ \left\vert \left\langle {\bf u} , {\bf v} \right\rangle \right\vert^2 \le \left\langle {\bf u} , {\bf u} \right\rangle \cdot \left\langle {\bf v} , {\bf v} \right\rangle . \]
It is named after Augustin-Louis Cauchy, Viktor Yakovlevich Bunyakovsky, and Hermann Amandus Schwarz.
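The inequality can be spot-checked numerically for the ordinary dot product on random real vectors (a sanity check on samples, not a proof):

```python
import random

# Numerical spot-check of the CBS inequality
# |<u, v>|^2 <= <u, u> * <v, v> for the ordinary dot product.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
for _ in range(100):
    u = [random.uniform(-10, 10) for _ in range(5)]
    v = [random.uniform(-10, 10) for _ in range(5)]
    # small slack for floating-point rounding
    assert dot(u, v) ** 2 <= dot(u, u) * dot(v, v) + 1e-9
print("CBS inequality holds on all 100 random samples")
```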