# Relationship between row and column space example

### Row Space and Column Space of a Matrix

Look at a simple concrete example, say the matrix A = [[2, 5, −1], [3, −1, 2]]. The column space of A is by definition the set of all linear combinations of the columns; in general, the column space of a matrix A is the span of its columns. These spaces are tightly related: for example, if the row space is a plane through the origin in three dimensions, then the null space will be the line through the origin perpendicular to that plane. We now look at specific examples, see how to find the null space of a matrix, and ask: is there a relationship between the row space, column space, and null space?
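As a quick numerical sanity check (using NumPy, which the text itself doesn't assume), the dimension of the column space of this particular A is just the rank of A:

```python
import numpy as np

# The 2x3 example matrix from the text.
A = np.array([[2, 5, -1],
              [3, -1, 2]])

# The column space is the span of the three columns;
# its dimension equals the rank of A.
print(np.linalg.matrix_rank(A))  # 2, so the columns span all of R^2
```

Since the rank is 2 and the columns live in R2, the column space here is all of R2.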

We spent a good deal of time on the idea of a null space. What I'm going to do in this video is introduce you to a new type of space that can be defined for a matrix: the column space. And you could probably guess what it means just based on what it's called.

But let's say I have some matrix A. Let's say it's an m by n matrix. So I can write my matrix A, and we've seen this multiple times, as a collection of column vectors. So this first one, second one, and I'll have n of them. How do I know that I have n of them? Because I have n columns. And each of these column vectors is going to have how many components? So v1, v2, all the way to vn. This matrix has m rows, so each of these guys is going to have m components. So they're all members of Rm.

So the column space is defined as all of the possible linear combinations of these column vectors. So the column space of A, this is my matrix A, the column space of that is all the linear combinations of these column vectors. What's the set of all linear combinations of a set of vectors? It's the span of those vectors. So it's the span of vector 1, vector 2, all the way to vector n. And we've seen this before, when we first talked about span and subspaces.

And it's pretty easy to show that the span of any set of vectors is a legitimate subspace. It definitely contains the 0 vector: if you multiply all of these guys by 0, which is a valid linear combination, and add them up, you'll see that it contains the 0 vector. Now let's say that I have some vector a that is a member of the column space of A. That means it can be represented as some linear combination of the columns.

So a is equal to c1 times vector 1, plus c2 times vector 2, all the way to cn times vector n. Now, the question is, is this closed under scalar multiplication? If I multiply a by some scalar s, I'm just picking a random letter, is s times a in my span? Well, s times a would be equal to sc1 v1 plus sc2 v2, all the way to scn vn, which is once again just a linear combination of these column vectors. So this sa would clearly be a member of the column space of A.
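This closure argument can be checked numerically; the coefficients below are made up for illustration, they're not from the video:

```python
import numpy as np

A = np.array([[2, 5, -1],
              [3, -1, 2]])

# a = c1*v1 + c2*v2 + c3*v3 is a member of the column space of A.
c = np.array([1.0, 2.0, -1.0])
a = A @ c

# Scaling a just scales the coefficients: s*a = A @ (s*c),
# which is again a linear combination of the columns.
s = 7.0
print(np.allclose(s * a, A @ (s * c)))  # True
```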

And then finally, to make sure it's a valid subspace, we would also check that it's closed under addition, which works the same way. And this actually doesn't apply just to the column space; it applies to any span. So the column space of our transpose was the span of this R3 vector right there, so it was this one right here.

So let me copy and paste it. Copy and scroll down, and we can paste it just like that. OK, let's see if we can visualize this now that we have them all in one place. So first of all, if we imagine a transformation T(x) that is equal to A times x, our transformation is going to be a mapping from what? It would be a mapping from R3, and it would be a mapping to R2, because we have two rows here, right?

You multiply a 2-by-3 matrix times a 3-by-1 vector, and you're going to get a 2-by-1 vector, so it's going to be a mapping to R2. So that's our codomain.
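The shape bookkeeping is easy to confirm; here is a small sketch with an arbitrary 2-by-3 matrix and 3-by-1 vector (the numbers are made up):

```python
import numpy as np

A = np.array([[2, 5, -1],
              [3, -1, 2]])      # 2x3: a mapping from R^3 to R^2
x = np.array([[1], [0], [2]])   # 3x1: a vector in the domain R^3

y = A @ x
print(y.shape)  # (2, 1): the output lives in the codomain R^2
```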


So let's draw our domains and our codomains. I'll just write them very generally right here. So you could imagine R3 is our domain. And then our codomain is going to be R2 just like that. And our T is a mapping, or you could even imagine A is a mapping between any vector there and any vector there when you multiply them.

Now, what is our column space of A? Our column space of A is the span of the vector (2, −4). It's an R2 vector, so this is a subspace of R2. We could write this. So let me write this. So our column space of A, these are just all of the vectors that are spanned by this. We figured out that these guys are just multiples of this first guy, or we could have done it the other way: we could have said this guy and that guy are multiples of that guy, either way. But the basis is just one of these vectors.

We just have to have one of these vectors, and so it was equal to this right here. So the column space is a subset of R2. And what else is a subset of R2? Well, our left null space.

Our left null space is also a subset of R2. So let's graph them, actually. I won't be too exact, but you can imagine. Let me draw some axes here. Let me scroll down a little bit, and do this as neatly as possible. That's my vertical axis. That is my horizontal axis. And then, what does the span of our column space look like? You draw the vector (2, −4), so you're going to go out one, two, and then you're going to go down one, two, three, four.

So that's what that vector looks like. And the span of this vector is essentially all of the multiples of this vector, where you could say linear combinations of it, but you're taking a combination of just one vector, so it's just going to be all of the multiples of this vector.

So if I were to graph it, it would just be a line specified by all of the linear combinations of that vector right there. This right here is a graphical representation of the column space of A. Now, let's look at the left null space of A, or, you could imagine, the null space of the transpose; they are the same thing. You saw why in the last video. What does this look like? The left null space is the span of (2, 1). So if you go out 2, and then you go up 1, that's the vector (2, 1), and it looks like this.
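To make this concrete in code, here is a hypothetical matrix (not necessarily the one from the video) whose columns are all multiples of (2, −4), so its column space is the line just described; the vector (2, 1) is then in its left null space:

```python
import numpy as np

# A hypothetical rank-1 matrix: both columns are multiples of (2, -4).
A = np.array([[2.0, 4.0],
              [-4.0, -8.0]])

# (2, 1) is in the left null space of A, i.e. the null space of A^T:
v = np.array([2.0, 1.0])
print(A.T @ v)  # [0. 0.]
```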

Let me do it in a different color.

So that's what the vector looks like. The vector looks like that, but of course, we want the span of that vector, so it's going to be all of the combinations. All you can do when you combine one vector is just multiply it by a bunch of scalars, so it's going to be all of the scalar multiples of that vector. So let me draw it like that. It's going to be like that.

And the first thing you might notice, let me write this: this is our left null space of A, the null space of A transpose. And since we wrote that in terms of A transpose, let's write the column space of A in terms of A transpose too. The column space of A is equal to the row space of A transpose, right? The columns of A, and everything they span, are the same things as the rows of A transpose. But the first thing that you see, at least when I visually drew it like this, is that these two spaces look to be orthogonal to each other.

It looks like I drew it in R2, and it looks like there's a 90-degree angle there. And if we wanted to verify it, all we have to do is take the dot product.
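The dot product check is one line: the spanning vectors of the two lines are orthogonal, so every vector from one space is orthogonal to every vector from the other.

```python
import numpy as np

col = np.array([2.0, -4.0])        # spans the column space of A
left_null = np.array([2.0, 1.0])   # spans the left null space of A

# 2*2 + (-4)*1 = 0, so the two lines meet at a right angle.
print(np.dot(col, left_null))  # 0.0
```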

Well, any vector that is in our column space is going to be equal to c times (2, −4) for some scalar c, and any vector in our left null space is going to be equal to d times (2, 1). Their dot product is cd times (2 times 2, plus minus 4 times 1), which is cd times 0. So it's 0, and the two spaces really are orthogonal.

Now, back to the row reduction. Let me replace row two with 2 times row one, minus row two. So 2 times 1, minus 2, is 0, which is exactly what I wanted there. That's nice to have right there. All right, now let me see if I can zero out this guy here.

So what can I do? I could do any combination, anything that essentially zeroes this guy out, but I want to minimize my number of negative numbers. So let me take this third row, minus 3 times this first row. So I'm going to take minus 3 times that first row and add it to this third row.

So 3 minus 3 times 1 is 0. These are just going to be a bunch of 3's. And 2 minus 3 times 1 is minus 1. Now, if we want to get this into reduced row echelon form, we need to target that one there and that one there. And what can we do? Let's keep my middle row the same; my middle row is not going to change. And to get rid of this one up here, I can just replace my first row with my first row minus my second row, because then this won't change.


I'll have 1 minus 0 is 1. That's what we wanted. That's 1 plus 2. That's 1 plus 1. Now let me do my third row. Let me replace my third row with my third row minus my second row; they are obviously the same thing. So if I subtract the second row from the third row, I'm just going to get a bunch of 0's.

Minus 2 minus minus 2 is 0. And minus 1 minus minus 1. That's minus 1 plus 1. That's equal to 0. And just like that we have it now in reduced row echelon form.

So this right here is the reduced row echelon form of A. Now, the whole reason why we even went through this exercise is that we wanted to figure out the null space of A. And we already know that the null space of A is equal to the null space of the reduced row echelon form of A.

So if this is the reduced row echelon form of A, let's figure out its null space.
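For a concrete machine check, here is a hypothetical 3-by-4 matrix (not necessarily the one from the video) whose reduced row echelon form has exactly the pivot rows used here, computed with SymPy's rref:

```python
from sympy import Matrix

# A hypothetical 3x4 matrix whose RREF matches the pivot equations
# x1 + 3*x3 + 2*x4 = 0 and x2 - 2*x3 - x4 = 0 from the text.
A = Matrix([[1, 0, 3, 2],
            [1, 1, 1, 1],
            [2, 1, 4, 3]])

R, pivot_cols = A.rref()
print(R)           # Matrix([[1, 0, 3, 2], [0, 1, -2, -1], [0, 0, 0, 0]])
print(pivot_cols)  # (0, 1): the first two columns hold the pivot entries
```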

## Null space and column space basis

So the null space is the set of all vectors in R4, because we have 4 columns here. The null space is the set of all vectors that satisfy this equation, where we're going to have three 0's right here. That's the 0 vector in R3, because we have three rows right there, and you can figure it out. This times this has to equal that 0. That dotted with that essentially is going to equal that 0. That dotted with that is equal to that 0.

I say essentially because I didn't define a row vector dot a column vector. I've only defined column vectors dotted with other column vectors. But we've been over that in a previous video, where you can say this is a transpose of a column vector. So let's just take this, and write a system of equations with this.

So we get 1 times x1. So this times this is going to be equal to that 0. So one times x1, that is x1. Plus 0 times x2. Let me just write that out. Plus 3 times x3.

Plus 2 times x4 is equal to that 0. And then -- I'll do it in yellow right here -- I have 0 times x1. Plus 1 times x2. Minus 2 times x3. Minus x4 is equal to 0. And then this gives me no information. So it just turns into 0 equals 0. So let's see if we can solve for our pivot entries, or our pivot variables. What are our pivot entries? This is a pivot entry. That's a pivot entry. That's what reduced row echelon form is all about, getting these entries that are 1 and they're the only non-zero term in their respective columns.

And that every pivot entry is to the right of a pivot entry above it. And then the columns that don't have pivot entries?

These columns represent the free variables. So this column has no pivot entry. And so when you take the dot product, this column turned into this column in our system of equations. So we know that x3 is a free variable. We can set it equal to anything.

Likewise, x4 is a free variable. x1 and x2 are pivot variables, because their corresponding columns in our reduced row echelon form have pivot entries in them. So let's see if we can simplify this into a form we know, and we've seen this before. If I solve for x1 -- this 0 I can ignore, that 0 I can ignore -- I can say that x1 is equal to minus 3x3 minus 2x4. I just subtracted those two terms from both sides of the equation. And I can say that x2 is equal to 2x3 plus x4.

And if we want to write our solution set now, so if I wanted to find the null space of A, which is the same thing as the null space of the reduced row echelon form of A, is equal to all of the vectors -- let me do a new color. Maybe I'll do blue -- is equal to all of the vectors x1, x2, x3, x4 that are equal to -- So what are they going to be equal to?

x1 has to be equal to minus 3x3 minus 2x4. Just to be clear, x3 and x4 are free variables, because I can set them to be anything.


And these are pivot variables, because I can't just set them to anything. When I determine what my x3's and my x4's are, they determine what my x1's and my x2's have to be. So these are pivot variables, and these are free variables.

I can make this guy pi. And I can make this guy minus 2. We can set them to anything.

So -- let's see, let me write it this way, and let me do x3 in a different color -- the solution vector is equal to x3 times some vector plus x4 times some other vector. So any solution in my null space is going to be a linear combination of these two vectors. We can figure out what those two vectors are just from these two constraints right here.

So -- let me do it in a neutral color -- x1 is equal to minus 3 times x3 minus 2 times x4. What's x3 equal to? Well x3 is equal to itself. Whatever we set x3 equal to, that's going to be x3. So x3 is going to be 1 times x3 plus 0 times x4. It is not going to have any x4 in it.
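The two basis vectors this construction produces can also be read off by machine; SymPy's nullspace sets each free variable to 1 in turn, which is exactly the by-hand procedure above:

```python
from sympy import Matrix

# The reduced row echelon form reached in the text.
R = Matrix([[1, 0, 3, 2],
            [0, 1, -2, -1],
            [0, 0, 0, 0]])

# One basis vector per free variable (x3, then x4).
for v in R.nullspace():
    print(v.T)
# Matrix([[-3, 2, 1, 0]])
# Matrix([[-2, 1, 0, 1]])
```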