### 10. Matrices

Motivation

Matrix groups are standard objects in Galois representation theory. Matrix multiplication is the group operation.

An $R$-matrix consists only of elements from some number system $R$. Examples are a $\mathbb{Z}$-matrix, a $\mathbb{Q}$-matrix, an $\mathbb{R}$-matrix. Matrices are rectangular, consisting of columns and rows, normally written enclosed in brackets. To multiply two matrices, $AB$, the number of columns of $A$ must equal the number of rows of $B$. An $m \times n$ matrix times an $n \times p$ matrix results in an $m \times p$ matrix. Multiplication proceeds by forming the dot product of rows of $A$ with columns of $B$. The dot product of two $n$-tuples is the sum of the pairwise products of their elements. The $(i, j)$ entry of the product matrix is the dot product of the $i$th row of $A$ and the $j$th column of $B$. For example, for two $2 \times 2$ $\mathbb{Z}$-matrices:

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$$
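As a concrete sketch, row-by-column multiplication can be written in a few lines of Python; the helper names `dot` and `matmul` are illustrative, not from the text.

```python
def dot(u, v):
    """Dot product of two n-tuples: the sum of the pairwise products."""
    return sum(a * b for a, b in zip(u, v))

def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B, giving an m x p matrix."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    cols_B = list(zip(*B))  # columns of B, each as a tuple
    return [[dot(row, col) for col in cols_B] for row in A]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```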

Linear algebra is the study of equations of lines, that is, polynomial equations of degree 1. Matrices are used to represent systems of linear equations. For example, consider two equations in two unknowns $x$ and $y$.

Expressed as matrices, the system becomes

$$A\mathbf{x} = \mathbf{b},$$

where $A$ is the $2 \times 2$ matrix of coefficients, $\mathbf{x}$ is the column vector of unknowns, and $\mathbf{b}$ is the column vector of right-hand sides. The system has a solution if $A$ is invertible, meaning $A^{-1}$ exists and

$$A^{-1}A = I,$$

the identity matrix. Without showing the computation of $A^{-1}$, the solution to the two equations is found by:

$$\mathbf{x} = A^{-1}\mathbf{b}$$

Thus $\mathbf{x} = A^{-1}\mathbf{b}$ is a solution to the equations. The denominator 19 in the entries for the inverse matrix is a special value assigned to $A$, called its determinant, a number in $R$. If the determinant of a matrix has an inverse in $R$, then the matrix is invertible. The determinant of a matrix is used to calculate the entries in the inverse matrix when it exists. Invertibility is discussed in the next section.
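Since the original pair of equations is not reproduced here, the following Python sketch uses an assumed $2 \times 2$ system chosen so that its determinant is 19, illustrating how the determinant appears as the denominator in the inverse.

```python
from fractions import Fraction

# Assumed illustrative system (not the one from the text):
#   5x + 2y = 1
#   3x + 5y = 2
a, b, c, d = 5, 2, 3, 5        # entries of A = [[a, b], [c, d]]
p, q = 1, 2                    # right-hand side vector

det = a * d - b * c            # determinant of A; here 5*5 - 2*3 = 19
assert det != 0                # A is invertible exactly when det is invertible

# Inverse of a 2x2 matrix: (1/det) * [[d, -b], [-c, a]]
inv = [[Fraction(d, det), Fraction(-b, det)],
       [Fraction(-c, det), Fraction(a, det)]]

# x = A^{-1} b
x = inv[0][0] * p + inv[0][1] * q
y = inv[1][0] * p + inv[1][1] * q
print(det, x, y)   # 19 1/19 7/19
```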

### 10.1. Supplement: Basic Notions of Vector Spaces

The rows and columns of a matrix are vectors, in the sense of $n$-tuples with a defined arithmetic. Going beyond numbers and individual matrix computations, the theory of solutions of linear equations is based on collections of vectors, called vector spaces. The machinery associated with vector spaces allows the underlying concepts to be unified and visualized. Most of the machinery needed is also found in a generalization of a vector space called a module, a topic linked to general ring theory. But for the purpose here, the discussion is in terms of vector spaces, which require only the machinery of fields already introduced.

Staying with the notion of vectors as $n$-tuples, given a field $F$, the set of all $n$-tuples of elements of $F$ forms a vector space, written $F^n$, of dimension* $n$ over $F$, and each $n$-tuple element of $F^n$ is called a vector. (*Note this is not a definition of the dimension of a vector space, but rather an equivalent usage for coordinate spaces such as $F^n$.)

The vectors in $F^n$ can be added, where addition is associative and commutative, additive inverse vectors exist for each vector, and there is an additive identity vector $\mathbf{0}$. Vectors can be multiplied by numbers from $F$, also called scalars, where multiplication is associative, scalar multiplication distributes over vector addition and over scalar addition, and the number 1 in $F$ leaves a vector unchanged under multiplication.
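These axioms can be spot-checked on sample tuples; a minimal Python sketch over $\mathbb{Q}$, using `fractions.Fraction` for exact field arithmetic (the helper names `vadd` and `smul` are illustrative):

```python
from fractions import Fraction as F

def vadd(u, v):
    """Componentwise addition of two n-tuples."""
    return tuple(a + b for a, b in zip(u, v))

def smul(c, v):
    """Multiplication of an n-tuple by a scalar c from the field."""
    return tuple(c * a for a in v)

u = (F(1), F(2), F(3))
v = (F(4), F(5), F(6))
zero = (F(0),) * 3

assert vadd(u, v) == vadd(v, u)                 # addition is commutative
assert vadd(u, zero) == u                       # additive identity
assert vadd(u, smul(F(-1), u)) == zero          # additive inverse
assert smul(F(2), vadd(u, v)) == vadd(smul(F(2), u), smul(F(2), v))  # distributivity
assert smul(F(1), u) == u                       # 1 leaves a vector unchanged
print("vector space axioms verified on sample vectors")
```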

Given a vector space $V$, a subset $W$ of vectors in $V$ can also form a vector space, called a subspace of $V$. Typically, a subspace arises as the image or kernel of a linear mapping on a vector space, or as a direct sum or quotient of two subspaces (not further defined here). The orthogonal complement of a vector space $V$ is the space of all vectors perpendicular to all the vectors in $V$, written $V^\perp$.

Examples of vector spaces follow. Any sort of objects for which the rules of vector arithmetic apply can become a vector space.

The trivial vector space is simply the zero vector, $\{\mathbf{0}\}$.

$F$ is itself a vector space of dimension 1 over $F$.

$\mathbb{C}$ is a vector space of dimension 2 over $\mathbb{R}$, and more generally, a field extension $E/F$ forms a vector space over the base field $F$, using the arithmetic of the extension field. E.g. $\mathbb{R}$ is a vector space of uncountable dimension over $\mathbb{Q}$.

$\mathbb{R}^n$ is a vector space of dimension $n$ over $\mathbb{R}$. In particular, when $n = 3$, the vectors correspond to the set of all points in Euclidean 3-space. In $\mathbb{R}^3$, the subspaces are $\{\mathbf{0}\}$, the lines through the origin, the planes through the origin, and $\mathbb{R}^3$ itself.

The set of all $m \times n$ matrices over $F$ forms a vector space of dimension $mn$ over $F$.

The set of all functions, continuous and single-valued on the closed unit interval, is an infinite-dimensional vector space of uncountable dimension.

Polynomials over $F$ of degree $n$ or less form an $(n+1)$-dimensional vector space over $F$. If polynomials of all degrees are considered, the dimension is countably infinite.

Pertinent to the current discussion, the solutions of a set of homogeneous linear equations form a vector space over $F$, because adding such solutions together or multiplying them by elements of $F$ yields further solutions.
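A quick Python check of this closure property on an assumed one-equation homogeneous system (the equation $x_1 + 2x_2 - x_3 = 0$ and the helper `satisfies` are illustrative):

```python
def satisfies(v):
    """True if v = (x1, x2, x3) solves the homogeneous equation x1 + 2*x2 - x3 = 0."""
    x1, x2, x3 = v
    return x1 + 2 * x2 - x3 == 0

u = (1, 0, 1)       # one solution
w = (0, 1, 2)       # another solution
s = tuple(a + b for a, b in zip(u, w))   # their sum
t = tuple(5 * a for a in u)              # a scalar multiple

# Sums and scalar multiples of solutions are again solutions.
print(satisfies(u), satisfies(w), satisfies(s), satisfies(t))  # True True True True
```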

Given vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$, their linear combination is expressed by

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k,$$

where the $a_i$ are scalars from $F$. All linear combinations of vectors in some subset of a vector space generate a subspace, and the vectors in the set are said to span the subspace.

Consider a set of vectors which have some linear combination, with coefficients not all zero, equal to $\mathbf{0}$, the zero vector in $V$; then the vectors in the set are said to be linearly dependent. If no such linear combination exists, the set of vectors is said to be linearly independent. Equivalently, for the matrix form of a system of equations above, the columns of $A$ are linearly independent exactly when $\mathbf{x}$ is the zero vector whenever $A\mathbf{x}$ is the zero vector.
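For three vectors in $\mathbb{R}^3$, independence can be tested with a determinant: the vectors are linearly independent exactly when the $3 \times 3$ determinant of the matrix with those vectors as rows is nonzero. A small Python sketch (the `det3` helper is illustrative):

```python
def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w (cofactor expansion)."""
    (a, b, c), (d, e, f), (g, h, i) = u, v, w
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

independent = det3((1, 0, 0), (0, 1, 0), (0, 0, 1)) != 0   # standard basis
dependent = det3((1, 2, 3), (2, 4, 6), (0, 0, 1)) != 0     # row 2 = 2 * row 1
print(independent, dependent)  # True False
```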

The transpose of a matrix $A$, written $A^T$, is the matrix with its rows and columns exchanged; if the elements of $A$ are $a_{ij}$, the elements of $A^T$ are $a_{ji}$. The principal diagonal of an $n \times n$ matrix $A$ goes from $a_{11}$ to $a_{nn}$. The transpose of a square matrix flips the elements about the principal diagonal.
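A one-line transpose in Python, using `zip` to exchange rows and columns:

```python
def transpose(A):
    """Exchange rows and columns: entry (i, j) moves to (j, i)."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
print(transpose(A))       # [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix
```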

All the remaining examples come from $\mathbb{R}^3$, providing the most intuitive feel for the concepts. An $\mathbb{R}^3$ discussion generalizes to any finite-dimensional vector space, and in particular $\mathbb{R}^n$. $\mathbb{R}^3$ is a 3-dimensional vector space over $\mathbb{R}$, the set of all triples of real numbers $(x, y, z)$. Consider the vectors:

$$\mathbf{e}_1 = (1, 0, 0), \quad \mathbf{e}_2 = (0, 1, 0), \quad \mathbf{e}_3 = (0, 0, 1)$$

The set of all linear combinations of $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ generates the entire vector space $\mathbb{R}^3$. Furthermore, the $\mathbf{e}_i$ are linearly independent. A linearly independent set of vectors that spans a vector space is called a basis. Thus, $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ are basis vectors for $\mathbb{R}^3$, or simply a basis for $\mathbb{R}^3$. Geometrically, the $\mathbf{e}_i$ are just the coordinate unit vectors of $\mathbb{R}^3$. The condition for orthogonality of two vectors is that their dot product is 0. The $\mathbf{e}_i$ are thus mutually orthogonal. And because they are each of unit length, they are called an orthonormal basis for $\mathbb{R}^3$, also known as the standard basis for $\mathbb{R}^3$. Any set of 3 linearly independent vectors in $\mathbb{R}^3$ forms a basis for $\mathbb{R}^3$, and all such bases contain the same number of vectors, called the dimension of the vector space $V$ over $F$, written $\dim_F(V)$ or just $\dim V$. (For vector space $F^n$, the number of elements in each vector and the number of vectors in each basis are the same, namely $n$.) Thus the 3-dimensional vector space $\mathbb{R}^3$ has three linearly independent vectors in its standard basis. Given a vector $\mathbf{v}$ in $\mathbb{R}^3$, there is a unique way of writing $\mathbf{v}$ as a linear combination of basis vectors. For example,

$$\mathbf{v} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3$$

for some coefficients $a_i$, where the $a_i$ are called the coordinates of $\mathbf{v}$ relative to the selected basis vectors $\mathbf{e}_i$. For the orthonormal basis $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$, the coordinates are the usual $(x, y, z)$ coordinates of Euclidean space.
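A small Python check that, for the standard basis, the coefficients of the linear combination are just the usual coordinates (the sample coefficients are illustrative):

```python
# Standard basis of R^3.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
a1, a2, a3 = 2, -1, 5     # sample coordinates

# Form v = a1*e1 + a2*e2 + a3*e3 componentwise.
v = tuple(a1 * x + a2 * y + a3 * z for x, y, z in zip(e1, e2, e3))
print(v)  # (2, -1, 5): the coefficients are the usual Euclidean coordinates

# The basis vectors are mutually orthogonal: pairwise dot products are 0.
print(sum(a * b for a, b in zip(e1, e2)))  # 0
```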

The vector space concepts above can be applied to systems of linear equations.

Consider vector space $F^n$ over a field $F$, together with a general system of $m$ linear equations in $n$ unknowns written as:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m \end{aligned}$$

where the values of the coefficients $a_{ij}$ and unknowns $x_j$ are in $F$. As shown in the prior section, there is a matrix form of these equations, written $A\mathbf{x} = \mathbf{b}$, but now with $A$ of size $m \times n$.

The equivalent vector form of these equations is:

$$x_1\begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix} + x_2\begin{pmatrix} a_{12} \\ \vdots \\ a_{m2} \end{pmatrix} + \cdots + x_n\begin{pmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}$$

Initially, consider just the column space generated by the left-hand vectors. The collection of all possible linear combinations of the vectors on the left-hand side spans a subspace called the column space of $A$, $C(A)$. If the column vectors are linearly independent, $\dim C(A) = n$. The equations have a solution just when $\mathbf{b}$ is in $C(A)$. If every vector in $C(A)$ has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, $C(A)$ has a basis of linearly independent vectors that does guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than $m$ or $n$, but it can be smaller. The existence of $m$ linearly independent column vectors guarantees a solution regardless of the right-hand side, which otherwise is not guaranteed.

The null space of matrix $A$, $N(A)$, contains all solutions $\mathbf{x}$ of $A\mathbf{x} = \mathbf{0}$. Referring to the $m \times n$ matrix $A$ above, the subspaces of $\mathbb{R}^n$ are:

the row space of $A$, $C(A^T)$ (aka $R(A)$)

the null space of $A$, $N(A)$

and the subspaces of $\mathbb{R}^m$ are:

the column space of $A$, $C(A)$, above

the null space of $A^T$, $N(A^T)$.

Any linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ can be represented by an $m \times n$ matrix. The above shows that $A$ is associated with such a linear transformation, specifically a mapping from the row space to the column space, and from the null space to $\mathbf{0}$, where solutions in the null space are in the kernel of the map.

The following additional relations hold between the subspaces:

$$\dim C(A) = \dim C(A^T) = r$$

$$\dim C(A^T) + \dim N(A) = n$$

$$\dim C(A) + \dim N(A^T) = m$$

where $r = \dim C(A)$ is called the rank of $A$.
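These dimension relations can be verified numerically. The Python sketch below computes rank by Gaussian elimination on an assumed sample matrix (the `rank` helper and the matrix are illustrative):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],      # twice the first row, so the rank drops
     [1, 0, 1]]      # m = 3, n = 3
At = [list(col) for col in zip(*A)]

m, n, r = len(A), len(A[0]), rank(A)
print(r, rank(At) == r)   # rank of A equals rank of A^T
print(n - r, m - r)       # dim N(A) and dim N(A^T)
```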

To understand the connections between the four subspaces associated with solutions of $A\mathbf{x} = \mathbf{b}$, there are different conditions to consider, corresponding to constraints on the rank $r$ of matrix $A$. A close connection will be seen between solutions of the non-homogeneous equation $A\mathbf{x} = \mathbf{b}$ above and the homogeneous equation $A\mathbf{x} = \mathbf{0}$. In each case where a particular solution $\mathbf{x}_p$ exists to $A\mathbf{x} = \mathbf{b}$, the total solution set can be written as:

$$\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n, \qquad \text{where } A\mathbf{x}_n = \mathbf{0}.$$

The solution set of the non-homogeneous equation is a translation of the solution set of the homogeneous equation by the vector $\mathbf{x}_p$.

The `unique solution’ case corresponds to $r = m = n$, where the columns of $A$ are linearly independent (in other words, $A$ is invertible and the column vectors form a basis for $\mathbb{R}^n$). Then $C(A) = \mathbb{R}^n$ and $N(A) = \{\mathbf{0}\}$. There is a unique solution to the system of equations, $\mathbf{x} = A^{-1}\mathbf{b}$, where the only solution in $N(A)$ is $\mathbf{0}$. In Euclidean space, the solution is a point.

In the case $r = m < n$, $A$ has full row rank, there are more unknowns than equations, $A$ has a right inverse, and there are infinitely many solutions of the form $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n$. For example, in Euclidean space with $n = 3$: if $m = 2$, the two equations describe planes, and the solution is the set of points on the line at the intersection of the two planes (because the rank is 2, the planes cannot be parallel). If $m = 1$, the solution is the plane described by the single equation.

In the case $r = n < m$, $A$ has full column rank, there are fewer unknowns than equations, $A$ has a left inverse, and there may be one solution, or none. There will be a solution if $\mathbf{b} \in C(A)$. Otherwise, no solution exists (the equations are not consistent).

The final `everything else’ case has $r < m$ and $r < n$. There can be either infinitely many solutions, or none. Infinitely many solutions obtain if $\mathbf{b} \in C(A)$, since then $\dim N(A) = n - r > 0$. No solutions result if $\mathbf{b} \notin C(A)$.
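The cases above can be distinguished programmatically by comparing $r = \operatorname{rank}(A)$, $n$, and the rank of the augmented matrix $[A \mid \mathbf{b}]$ (since $\mathbf{b} \in C(A)$ exactly when the two ranks agree). A sketch in Python; the `classify` and `rank` helpers are illustrative:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over Q; M is a list of rows."""
    rows = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def classify(A, b):
    """Classify the solution set of Ax = b by rank comparisons."""
    n = len(A[0])
    r = rank(A)
    aug = [row + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    if rank(aug) > r:
        return "no solution"                      # b is not in C(A)
    return "unique solution" if r == n else "infinitely many solutions"

print(classify([[1, 0], [0, 1]], [1, 2]))   # unique solution
print(classify([[1, 2], [2, 4]], [1, 2]))   # infinitely many solutions
print(classify([[1, 2], [2, 4]], [1, 3]))   # no solution
```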