Vladimir Dobrushkin
http://math.uri.edu/~dobrush/

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the appendix entitled GNU Free Documentation License.

Kernels or Null Spaces

Throughout this section, we consider an m-by-n matrix as a transformation from the n-dimensional Euclidean vector space \( \mathbb{R}^n \) into another space \( \mathbb{R}^m . \)

Let A be an \( m \times n \) matrix. The set of all (column) vectors x of length n that satisfy the linear equation \( {\bf A}\,{\bf x} = {\bf 0} , \) where 0 is the m-dimensional column vector of zeroes, forms a subset of \( \mathbb{R}^n . \) This subset is nonempty because it clearly contains the zero vector: x = 0 always satisfies \( {\bf A}\,{\bf x} = {\bf 0} . \) This subset actually forms a subspace of \( \mathbb{R}^n , \) called the kernel (or nullspace) of the matrix A and denoted ker(A).
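
The computations in this section are illustrated with Mathematica. As a quick illustration of the definition, membership of a vector in the kernel is verified by checking that the matrix sends it to the zero vector (a minimal sketch with a sample matrix, not one taken from the examples below):

A = {{1, 1, 1}, {1, -1, 0}};   (* a sample 2-by-3 matrix *)
x = {1, 1, -2};
A . x                          (* {0, 0}, so x belongs to ker(A) *)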

  Let's suppose that the matrix A represents a physical system. As an example, assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. Suppose we have three thrusters equally spaced around our rocket. If they're all perfectly functional, then we can move in any direction. But what happens when a thruster breaks? Now we've only got two thrusters. The null space is the set of thruster instructions that completely waste fuel: the thrusters fire, but the direction of the rocket does not change at all.

Another example: perhaps A represents a rate of return on investments. The range consists of all the rates of return that are achievable. The null space consists of all the investments that would not change the rate of return at all.

Another example: room illumination. The range of A represents the part of the room that can be illuminated. The null space of A represents the power settings for the lamps that do not change the illumination in the room at all.

Theorem: Elementary row operations do not change the null space of a matrix.
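
This theorem can be seen in action in Mathematica (a minimal sketch with a sample matrix; the two kernels coincide as subspaces, even though a basis can always be rescaled or reordered):

A = {{1, 2, 3}, {4, 5, 6}};
NullSpace[A]              (* a basis of ker(A) *)
NullSpace[RowReduce[A]]   (* the row reduced matrix has exactly the same kernel *)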

The dimension of the kernel (null space) of a matrix A is called the nullity of A and is denoted by nullity(A).

Theorem: The nullity of a matrix A equals the number of free variables in its reduced row echelon (Gauss--Jordan) form.
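
In Mathematica, the nullity can be obtained either by counting the basis vectors returned by NullSpace or as the number of columns minus the rank; a minimal sketch with the same sample matrix as above:

A = {{1, 2, 3}, {4, 5, 6}};
Length[NullSpace[A]]                 (* nullity of A *)
Dimensions[A][[2]] - MatrixRank[A]   (* number of free variables: n - rank *)

Both commands return 1, in agreement with the single free variable found in the example below.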

Example: Define \( T:\,\mathbb{R}^3 \to \mathbb{R}^2 \) by
\[ T(a_1 , a_2 , a_3 ) = (2\,a_1 -a_2 , 3\, a_3) . \]
To this linear transformation corresponds the 2-by-3 matrix \( {\bf A} = \begin{bmatrix} 2&-1&0 \\ 0&0&3 \end{bmatrix} . \) Its kernel consists of vectors of the form \( [ a, 2a, 0 ]^{\mathrm T} , \quad a \in \mathbb{R} . \)
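
A quick check with Mathematica (a minimal sketch; the basis vector returned by NullSpace may be scaled differently):

A = {{2, -1, 0}, {0, 0, 3}};
NullSpace[A]      (* one basis vector, proportional to {1, 2, 0} *)
A . {1, 2, 0}     (* {0, 0}, confirming that [1, 2, 0] lies in the kernel *)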

 

Example: The set of solutions of the homogeneous system
\[ {\bf A} \, {\bf x} = {\bf 0} \qquad \mbox{or} \qquad \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \]
forms a subspace of \( \mathbb{R}^3 . \) To determine this subspace, we first use row reduction (the elimination part of the Gaussian procedure):
\[ \begin{bmatrix} 1&2&3 \\ 4&5&6 \end{bmatrix} \,\sim \, \begin{bmatrix} 1&2&3 \\ 0&-3&-6 \end{bmatrix} . \]
Therefore, the system is equivalent to
\[ \begin{bmatrix} 1&2&3 \\ 0&-3&-6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad \Longleftrightarrow \qquad \begin{split} x_1 + 2\, x_2 + 3\,x_3 &=0 , \\ -3\,x_2 -6\,x_3 &=0 . \end{split} \]
If we let x3 be the free variable, the second equation directly implies
\[ x_2 = -2\,x_3 . \]
Substituting this result into the other equation determines x1:
\[ x_1 = -2\,x_2 -3\,x_3 = 4\,x_3 -3\, x_3 = x_3 . \]
So the set of solutions of the given homogeneous system can be written as
\[ \begin{bmatrix} x_3 \\ -2\,x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} , \qquad x_3 \in \mathbb{R} , \]
which is a subspace of \( \mathbb{R}^3 , \) spanned by the vector \( [ 1, -2, 1 ]^{\mathrm T} . \) We check with Mathematica:
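
(A minimal check; the matrix is entered as a list of rows, and the basis vector returned may differ by a scalar factor.)

A = {{1, 2, 3}, {4, 5, 6}};
NullSpace[A]      (* a single basis vector proportional to {1, -2, 1} *)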

 

Example: Consider two square matrices
\[ {\bf A} = \begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix} \qquad\mbox{and} \qquad {\bf B} = \begin{bmatrix} 1&2 \\ -2&-4 \end{bmatrix} . \]
By definition, the nullspace of A consists of all vectors x such that \( {\bf A} \, {\bf x} = {\bf 0} . \) We perform the following elementary row operations on A and B:
\[ {\bf A} \,\sim \, {\bf R}_A = \begin{bmatrix} 1&2 \\ 0&-2 \end{bmatrix} \qquad\mbox{and} \qquad {\bf B} \,\sim \, {\bf R}_B = \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} \]
to conclude that \( {\bf A} \, {\bf x} = {\bf 0} \) and \( {\bf B} \, {\bf x} = {\bf 0} \) are equivalent to the simpler systems
\[ \begin{bmatrix} 1&2 \\ 0&-2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad\mbox{and} \qquad \begin{bmatrix} 1&2 \\ 0&0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} . \]
For matrix A, the second row implies that \( x_2 =0 , \) and back substituting this into the first row implies that \( x_1 =0 . \) Since the only solution of A x = 0 is x = 0, the kernel of A consists of the zero vector alone. This subspace, { 0 }, is called the trivial subspace (of \( \mathbb{R}^2 \) ).

For matrix B, we have only one equation

\[ x_1 + 2\,x_2 =0 \qquad \Longrightarrow \qquad x_1 = -2\, x_2 . \]
Substituting back yields a one-dimensional null space spanned by the vector
\[ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2 \begin{bmatrix} -2 \\ 1 \end{bmatrix} , \qquad x_2 \in \mathbb{R} . \]
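
Both conclusions can be confirmed with Mathematica (a minimal sketch):

A = {{1, 2}, {3, 4}};
B = {{1, 2}, {-2, -4}};
NullSpace[A]      (* {} : the kernel of A is the trivial subspace *)
NullSpace[B]      (* one basis vector, proportional to {-2, 1} *)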

 

Example: Let us consider the \( 4 \times 3 \) matrix
\[ {\bf A} = \begin{bmatrix} 1&2&5 \\ 3&-1&2 \\ -1&4&1 \\ 2&3&-2 \end{bmatrix} \]
of rank 3. Since every column contains a pivot, its Gauss--Jordan form is the \( 3 \times 3 \) identity matrix sitting on top of a row of zeroes, and the system of equations for the kernel reads, in vector form,
\[ \begin{pmatrix} 1& 0& 0 \\ 0& 1& 0 \\ 0&0&1 \\ 0&0&0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} , \]
from which it follows that the kernel consists of the zero vector alone (the trivial subspace of \( \mathbb{R}^3 \) ).
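
A quick confirmation with Mathematica (a minimal sketch):

A = {{1, 2, 5}, {3, -1, 2}, {-1, 4, 1}, {2, 3, -2}};
MatrixRank[A]     (* 3, so every column contains a pivot and there are no free variables *)
NullSpace[A]      (* {} : the kernel is trivial *)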

 

Example: Consider the matrix of rank 3:
\[ {\bf A} = \begin{bmatrix} 1&2&3&6&0 \\ 2&1&2&7&2 \\ 4&-1&5&19&11 \\ 5&-2&-3&6&6 \end{bmatrix} . \]
We find its Gauss--Jordan form with Mathematica:
A = {{1, 2, 3, 6, 0}, {2, 1, 2, 7, 2}, {4, -1, 5, 19, 11}, {5, -2, -3, 6, 6}};
R = RowReduce[A]
Out[3]= {{1, 0, 0, 2, 1}, {0, 1, 0, -1, -2}, {0, 0, 1, 2, 1}, {0, 0, 0, 0, 0}}
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} 1&0&0&2&1 \\ 0&1&0&-1&-2 \\ 0&0&1&2&1 \\ 0&0&0&0&0 \end{bmatrix} . \]
So we see that the first three variables are leading variables and the last two are free variables. To find the kernel, we need to solve the following system of algebraic equations:
\[ \begin{split} x_1 + 2\,x_4 + x_5 &=0 , \\ x_2 -x_4 - 2\, x_5 &=0, \\ x_3 + 2\,x_4 + x_5 &= 0 . \end{split} \]
We rewrite this system in vector form:
\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 2&1 \\ -1&-2 \\ 2&1 \end{bmatrix} \begin{bmatrix} x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad\mbox{or} \qquad {\bf x} =- {\bf F} \,{\bf u} , \]
where \( {\bf x} = [ x_1 , x_2 , x_3 ]^{\mathrm T} , \) \( {\bf u} = [ x_4 , x_5 ]^{\mathrm T} , \) and F is the 3-by-2 matrix defined above.

F = Take[R, {1, 3}, {4, 5}]
Out[4]= {{2, 1}, {-1, -2}, {2, 1}}
Setting \( {\bf u} = [ 1 , 0 ]^{\mathrm T} \) first and then \( {\bf u} = [ 0 , 1 ]^{\mathrm T} , \) we obtain two linearly independent vectors
\[ {\bf v}_1 = \begin{bmatrix} -2 \\ 1 \\ -2 \\ 1 \\ 0 \end{bmatrix} \qquad \mbox{and} \qquad {\bf v}_2 = \begin{bmatrix} -1 \\ 2 \\ -1 \\ 0 \\ 1 \end{bmatrix} \]
that form a basis for the kernel. We check with Mathematica:

A.{{-2}, {1}, {-2}, {1}, {0}}
Out[5]= {{0}, {0}, {0}, {0}}
A.{{-1}, {2}, {-1}, {0}, {1}}
Out[6]= {{0}, {0}, {0}, {0}}
Since these two vectors v1 and v2 are linearly independent (compare their last two components), they form a basis for the null space of matrix A. ■
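
Alternatively, the built-in command returns a basis directly (a minimal sketch; the vectors it returns may differ from v1 and v2 in order or scaling, but they span the same kernel):

NullSpace[A]      (* two basis vectors for ker(A) *)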

 

Theorem: Suppose that an m-by-n matrix A of rank r, when reduced to its reduced row echelon form, has its pivots in the first r columns, so that it is reduced to the block form

\[ {\bf A} \,\sim\, {\bf R} = \begin{bmatrix} {\bf I}_r & {\bf F}_{r \times (n-r)} \\ {\bf 0}_{(m-r)\times r} &{\bf 0}_{(m-r)\times (n-r)} \end{bmatrix} . \]
Here Ir is the \( r \times r \) identity matrix, \( {\bf F}_{r \times (n-r)} \) is an \( r \times (n-r) \) matrix, and the 0's are zero matrices of the indicated sizes. Then the kernel of the matrix A is spanned by the column vectors of the matrix
\[ \mbox{ker}({\bf A}) = \mbox{span} \begin{bmatrix} -{\bf F}_{r \times (n-r)} \\ {\bf I}_{(n-r)\times (n-r)} \end{bmatrix} . \]
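
The theorem translates directly into a short Mathematica routine. The following sketch (the helper name kernelBasisFromRREF is ours, not a built-in function) assumes that the pivots of the reduced matrix indeed occupy the first r columns and that r < n:

kernelBasisFromRREF[A_?MatrixQ] := Module[{R, r, n, F},
  R = RowReduce[A];
  r = MatrixRank[A];
  n = Dimensions[A][[2]];
  F = R[[1 ;; r, r + 1 ;; n]];                   (* the block F of the theorem *)
  ArrayFlatten[{{-F}, {IdentityMatrix[n - r]}}]  (* its columns span ker(A) *)
]

Applied to the 4-by-5 matrix A of the previous example, kernelBasisFromRREF[A] should reproduce the vectors v1 and v2 found there.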

Example: Let us find the kernel of the 4-by-6 matrix
\[ {\bf A} = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 2& -1& 2& 3& 2& -3 \\ 3& 1& 2& -1& 3& -5 \\ 5& 5& 5& -7& 3& -5 \end{bmatrix} \]
The first step in finding the kernel of the given matrix is to determine its pivots by performing elementary row operations. So we multiply the first row by -2 and add to the second row; then we multiply the first row by -3 and add to the third row; finally, we multiply the first row by -5 and add to the last row. It results in the following matrix
\[ {\bf A}_2 = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& -5& -7& 5& 6& -11 \\ 0& -5& -10& 3& 8& -15 \end{bmatrix} \]
We carry out these operations with Mathematica (the matrix A is entered as a list of rows):

A = {{1, 2, 3, -2, -1, 2}, {2, -1, 2, 3, 2, -3}, {3, 1, 2, -1, 3, -5}, {5, 5, 5, -7, 3, -5}};
A2 = A;
A2[[2]] += (-2)*A2[[1]];
A2[[3]] += (-3)*A2[[1]];
A2[[4]] += (-5)*A2[[1]];
A2 // MatrixForm
Out[5]= \( \displaystyle \quad \begin{pmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& -5& -7& 5& 6& -11 \\ 0& -5& -10& 3& 8& -15 \end{pmatrix} \)
Next we multiply the second row by -1 and add to the third and fourth rows, which yields
\[ {\bf A}_3 = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& 0& -3& -2& 2& -4 \\ 0& 0& -6& -4& 4& -8 \end{bmatrix} \]
Again, Mathematica helps:

A3 = A2;
A3[[3]] += (-1)*A3[[2]]
A3[[4]] += (-1)*A3[[2]]
Finally, we add the third row, multiplied by -2, to the last row:

A4 = A3;
A4[[4]] += (-2)*A4[[3]]
A4 // MatrixForm
Out[8]= \( \displaystyle \quad \begin{pmatrix} 1& 2& 3& -2& -1& 2 \\ 0& -5& -4& 7& 4& -7 \\ 0& 0& -3& -2& 2& -4 \\ 0& 0& 0& 0& 0&0 \end{pmatrix} \)
This tells us that matrix A has three pivots, located in the first three rows, and its rank is 3. To use the above theorem, we need its Gauss--Jordan form, which we obtain with just one Mathematica command:

RowReduce[A]
Out[9]= {{1, 0, 0, -(2/15), 23/15, -(8/3)}, {0, 1, 0, -(29/15), -(4/15), 1/3}, {0, 0, 1, 2/3, -(2/3), 4/3}, {0, 0, 0, 0, 0, 0}}
This allows us to represent the reduced row echelon form R of the given matrix in block form:
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} {\bf I} & {\bf F} \\ {\bf 0} & {\bf 0} \end{bmatrix} , \]
where I is the 3-by-3 identity matrix, the 0's stand for zero blocks of one row, and F is the following square matrix:
\[ {\bf F} = \frac{1}{15} \begin{bmatrix} -2 & 23 & -40 \\ -29 & -4 & 5 \\ 10 & -10 & 20 \end{bmatrix} . \]
Using Mathematica, we extract matrix F:

(F = R[[1 ;; 3, 4 ;; 6]]) // MatrixForm
Out[10]= \( \displaystyle \quad \begin{pmatrix} -\frac{2}{15}&\frac{23}{15}&-\frac{8}{3} \\ -\frac{29}{15}&-\frac{4}{15}&\frac{1}{3} \\ \frac{2}{3}&-\frac{2}{3}& \frac{4}{3} \end{pmatrix} \)
To avoid fractions, we multiply matrix F by 15 to obtain

(F15 = 15*R[[1 ;; 3, 4 ;; 6]]) // MatrixForm
Out[11]= \( \displaystyle \quad \begin{pmatrix} -2&23&-40 \\ -29&-4&5 \\ 10&-10&20 \end{pmatrix} \)
Upon appending the identity matrix to -F, we obtain three linearly independent vectors that generate the kernel:
\[ \begin{bmatrix} -{\bf F} \\ {\bf I}_{3 \times 3} \end{bmatrix} . \]
Each column vector of the above 6-by-3 matrix belongs to the null space of matrix A. Since the columns are linearly independent, they form a basis of the kernel.

nul = ArrayFlatten[{{-F}, {IdentityMatrix[3]}}] 
Out[12]= {{2/15, -(23/15), 8/3}, {29/15, 4/15, -(1/3)}, {-(2/3), 2/3, -(4/3)}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
To avoid fractions, we multiply each entry by 15 to obtain three vectors that span the null space:
\[ {\bf v}_1 = \begin{bmatrix} 2 \\ 29 \\ -10 \\ 15 \\ 0 \\ 0 \end{bmatrix} , \quad {\bf v}_2 = \begin{bmatrix} -23 \\ 4 \\ 10 \\ 0 \\ 15 \\ 0 \end{bmatrix} , \quad {\bf v}_3 = \begin{bmatrix} 40 \\ -5 \\ -20 \\ 0 \\ 0 \\ 15 \end{bmatrix} = 5 \begin{bmatrix} 8 \\ -1 \\ -4 \\ 0 \\ 0 \\ 3 \end{bmatrix}. \]
We check our answer with the standard Mathematica command

NullSpace[A]
Out[12]= {{8, -1, -4, 0, 0, 3}, {-23, 4, 10, 0, 15, 0}, {2, 29, -10, 15, 0, 0}}
It is possible to determine the null space of the given matrix directly, without using the above theorem. We know that x1, x2, and x3 are leading variables and x4, x5, x6 are free variables. Since rank(A) is 3, the last row of A is a linear combination of the other rows and plays no role in determining the solutions of the homogeneous equation A x = 0. So we extract two matrices from the first three rows of A:
\[ {\bf B} = \begin{bmatrix} 1&2&3 \\ 2&-1& 2 \\ 3&1&2 \end{bmatrix} , \quad {\bf C} = \begin{bmatrix} -2&-1&2 \\ 3&2&-3 \\ -1&3&-5 \end{bmatrix} . \]

B = A[[1 ;; 3, 1 ;; 3]] // MatrixForm
Out[13]= \( \displaystyle \quad \begin{pmatrix} 1&2&3 \\ 2&-1&2 \\ 3&1&2 \end{pmatrix} \)

CC = A[[1 ;; 3, 4 ;; 6]] // MatrixForm
Out[14]= \( \displaystyle \quad \begin{pmatrix} -2&-1&2 \\ 3&2&-3 \\ -1&3&-5 \end{pmatrix} \)
We used the wrapper "MatrixForm" just to display the matrices in their usual form. For actual calculations, this wrapper should be dropped because it converts a list of vectors into a single display object. Multiplying the inverse of matrix B by matrix C, we obtain our old friend, the matrix F:
\[ {\bf F} = {\bf B}^{-1} {\bf C} = \frac{1}{15} \begin{bmatrix} -2&23&-40 \\ -29&-4&5 \\ 10&-10&20 \end{bmatrix} \]
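
A minimal check in Mathematica (assuming A and R are still defined as above; MatrixForm is omitted here so that the results remain ordinary lists and can be used in further computations):

B = A[[1 ;; 3, 1 ;; 3]];
CC = A[[1 ;; 3, 4 ;; 6]];
Inverse[B].CC == R[[1 ;; 3, 4 ;; 6]]   (* True: the product reproduces the block F of the reduced row echelon form *)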

 

Example: We consider a slightly different matrix
\[ {\bf A} = \begin{bmatrix} 1& 2& 3& -2& -1& 2 \\ 2& -1& 2& 3& 2& -3 \\ 3& 1& 2& -1& 3& -5 \\ 5& 5& 5& -7& 7& -4 \end{bmatrix} \]
The first step in finding the kernel of the given matrix is to determine its pivots by performing elementary row operations. Gauss--Jordan elimination leads to the reduced row echelon form
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} 1&0&0& -\frac{2}{15}& 0& -\frac{61}{20} \\ 0&1&0& -\frac{29}{15}& 0& \frac{2}{5} \\ 0&0&1& \frac{2}{3}& 0& \frac{3}{2} \\ 0&0&0&0&1& \frac{1}{4} \end{bmatrix} . \]
Therefore, the given matrix has four pivots and its rank is

A = {{1, 2, 3, -2, -1, 2}, {2, -1, 2, 3, 2, -3}, {3, 1, 2, -1, 3, -5}, {5, 5, 5, -7, 7, -4}};
MatrixRank[A]
Out[2]= 4
The 1's in the pivot positions of matrix R indicate that variables 1, 2, 3, and 5 are leading variables, while variables 4 and 6 are free variables. To find actual vectors that span the null space, we form two auxiliary matrices: the 4-by-4 matrix B made of the columns of A that correspond to the leading variables, and the 4-by-2 matrix C made of the columns that correspond to the free variables. Naturally, we ask Mathematica for help. We build matrix B in two steps: first we extract a 4-by-5 matrix from A by dropping the last column, and then we eliminate the fourth column:

B1 = A[[1 ;; 4, 1 ;; 5]] 
B = Transpose[Drop[Transpose[B1], {4}]]
Out[4]= \( \displaystyle \quad \begin{pmatrix} 1&2&3&-1 \\ 2&-1&2&2 \\ 3&1&2&3 \\ 5&5&5&7 \end{pmatrix} \)
Note that matrix B1 could be obtained with the following command:

B1 = Transpose[Delete[Transpose[A], {6}]]

C1 = A[[1 ;; 4, 4 ;; 6]]
CC = Transpose[Delete[Transpose[C1], {2}]] 
Out[6]= \( \displaystyle \quad \begin{pmatrix} -2&2 \\ 3&-3 \\ -1&-5 \\ -7&-4 \end{pmatrix} \)
With these matrices, we can rewrite the equations that determine the kernel as
\[ {\bf B}_{4\times 4} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_5 \end{bmatrix} + {\bf C}_{4\times 2} \begin{bmatrix} x_4 \\ x_6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} . \]
Then we can express the leading variables via free variables as
\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_5 \end{bmatrix} = - {\bf B}^{-1} {\bf C} \begin{bmatrix} x_4 \\ x_6 \end{bmatrix} = - {\bf F} \begin{bmatrix} x_4 \\ x_6 \end{bmatrix} , \]
where \( {\bf F} = {\bf B}^{-1} {\bf C} \) is a \( 4 \times 2 \) matrix. It can be obtained either by multiplication of matrices:
\[ {\bf F} = {\bf B}^{-1} {\bf C} = \begin{bmatrix} -\frac{2}{15} & -\frac{61}{20} \\ -\frac{29}{15} & \frac{2}{5} \\ \frac{2}{3} & \frac{3}{2} \\ 0&\frac{1}{4} \end{bmatrix} \]
or by extracting the fourth and sixth columns from matrix R. To avoid fractions, we multiply this matrix by 60:

F60 = Inverse[B].CC*60 
Out[8]= \( \displaystyle \quad \begin{pmatrix} -8&-183 \\ -116&24 \\ 40&90 \\ 0&15 \end{pmatrix} \)
Finally, we append the 2-by-2 identity matrix below -F, which forms a 6-by-2 matrix from which we are going to extract two basis vectors:
\[ \begin{bmatrix} -{\bf F} \\ {\bf I} \end{bmatrix} \qquad \mbox{or} \qquad \begin{bmatrix} -60\,{\bf F} \\ 60\,{\bf I} \end{bmatrix} = \begin{bmatrix} 8&183 \\ 116&-24 \\ -40&-90 \\ 0&-15 \\ 60&0 \\ 0&60 \end{bmatrix} . \]
This is not yet the correct matrix because one more operation is needed: we swap the fourth and fifth rows (the fourth row corresponds to the leading variable x5, while the free variable x4 must occupy the fourth position):
\[ {\bf nul} = \begin{bmatrix} 8&183 \\ 116&-24 \\ -40&-90 \\ 60&0 \\ 0&-15 \\ 0&60 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf v}_1 = \begin{bmatrix} 8 \\ 116 \\ -40 \\ 60 \\ 0 \\ 0 \end{bmatrix} = 4 \begin{bmatrix} 2 \\ 29 \\ -10 \\ 15 \\ 0 \\ 0 \end{bmatrix} , \quad {\bf v}_2 = \begin{bmatrix} 183 \\ -24 \\ -90 \\ 0 \\ -15 \\ 60 \end{bmatrix} = 3 \begin{bmatrix} 61 \\ -8 \\ -30 \\ 0 \\ -5 \\ 20 \end{bmatrix} . \]
We check with Mathematica that each column vector of the above 6-by-2 matrix is annihilated by A:

A.{{8}, {116}, {-40}, {60}, {0}, {0}}
A.{{183}, {-24}, {-90}, {0}, {-15}, {60}}
Since both answers are zero vectors, we are confident that the basis for the null space has been found properly. Now we compare with the answer provided by the standard Mathematica command

NullSpace[A]
Out[16]= {{61, -8, -30, 0, -5, 20}, {2, 29, -10, 15, 0, 0}}
As we see, the corresponding vectors differ only by constant multiples. ■

 

Example: We consider the matrix
\[ {\bf A} = \begin{bmatrix} 1& -1& 3& 1& -1 \\ 2& 4& 0& -1& 7 \\ 3& 1& 5& 1& 3 \\ 4& 6& 2& -1& 11 \end{bmatrix} , \]
which has rank 2. Indeed, its reduced row echelon form is
\[ {\bf A} \, \sim \, {\bf R} = \begin{bmatrix} 1& 0& 2& \frac{1}{2}&\frac{1}{2} \\ 0& 1& -1& -\frac{1}{2}& \frac{3}{2} \\ 0& 0& 0& 0& 0 \\ 0& 0& 0& 0&0 \end{bmatrix} , \]
because

A = {{1, -1, 3, 1, -1}, {2, 4, 0, -1, 7}, {3, 1, 5, 1, 3}, {4, 6, 2, -1, 11}};
R = RowReduce[A]
Out[2]= {{1, 0, 2, 1/2, 1/2}, {0, 1, -1, -(1/2), 3/2}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}}
We extract from R the 2-by-3 block occupying the first two rows and the last three columns, which we denote by F:
\[ {\bf F} = \begin{bmatrix} 2& \frac{1}{2}&\frac{1}{2} \\ -1& -\frac{1}{2}& \frac{3}{2} \end{bmatrix} . \]

F = Take[R, {1, 2}, {3, 5}]
Out[3]= {{2, 1/2, 1/2}, {-1, -(1/2), 3/2}}
Upon appending the 3-by-3 identity matrix, we obtain the required matrix, each column of which is a basis vector for the null space:
\[ {\bf null} = \begin{bmatrix} - {\bf F} \\ {\bf I}_{3\times 3} \end{bmatrix} = \begin{bmatrix} -2& - \frac{1}{2}&- \frac{1}{2} \\ 1& \frac{1}{2}& -\frac{3}{2} \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} . \]

nul = ArrayFlatten[{{-F}, {IdentityMatrix[3]}}] 
Out[4]= \( \displaystyle \quad \begin{pmatrix} -2&-\frac{1}{2} & -\frac{1}{2} \\ 1&\frac{1}{2} & - \frac{3}{2} \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{pmatrix} \)
Now we compare the columns of the above matrix with the standard Mathematica output:

NullSpace[A]
Out[5]= {{-1, -3, 0, 0, 2}, {-1, 1, 0, 2, 0}, {-2, 1, 1, 0, 0}}
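The first column of nul coincides with the third vector returned by NullSpace, while the second and third columns are one half of the second and first vectors, respectively. A quick consistency check (a minimal sketch, assuming nul and A are still defined as above) confirms that both sets span the same three-dimensional null space:

MatrixRank[Join[Transpose[nul], NullSpace[A]]]   (* 3: the two spanning sets generate the same subspace *)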