# Re: Multiplication of matrices

*From*: Ray Vickson <RGVickson@xxxxxxx>
*Date*: Tue, 16 Sep 2008 15:18:07 -0700 (PDT)

On Sep 16, 10:58 am, mstem...@xxxxxxxxxxxxxxxxxxxx (Michael Stemper) wrote:

I've spent several unproductive (if enjoyable) days trying to figure out whether a non-singular matrix can be written as the product of a singular matrix and another matrix.

First, I came up with an algebraic proof that the product of a singular and a non-singular matrix is always singular:

Let A, B be nxn matrices, with A singular and B non-singular, and let AB = C. Then ABB^-1 = CB^-1 = A. If C were non-singular, then CB^-1 would also be non-singular, since non-singular nxn matrices are closed under multiplication; but CB^-1 = A is singular. Hence C must be singular.
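[Editorial note: a quick numerical sanity check of this fact, sketched with NumPy; the matrices below are illustrative choices, not from the original post.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A is singular by construction: its second row is twice its first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(A) < 2  # singular

# B is a random matrix; with probability 1 it is non-singular.
B = rng.standard_normal((2, 2))
assert np.linalg.matrix_rank(B) == 2  # non-singular

# The product of a singular and a non-singular matrix is singular:
C = A @ B
print(np.linalg.matrix_rank(C))  # 1, i.e. rank < 2
```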

I then tried to show that it was possible to have two singular matrices whose product was non-singular. This was to be analogous to the demonstration that (r - s) + s, with r rational and s irrational, is rational even though both summands are irrational. Lacking an inverse for singular matrices made this unfruitful.

Digging around, I found that the determinant of the product is the product of the determinants (I think). This would obviously settle the issue, but it's less than satisfying. I'd like an algebraic proof that it's not possible to multiply two singular matrices and get a non-singular matrix. Does such a proof exist?
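[Editorial note: the determinant fact the poster recalls is indeed a theorem, det(AB) = det(A) det(B); a quick numerical check, sketched with NumPy.]

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) = det(A) * det(B); in particular, if either factor has
# determinant 0 (is singular), the product does too.
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(lhs, rhs))  # True
```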

Victor Meldrew has already suggested one approach. Here is another one. The following are a couple of simply-proved facts about n x n matrices.

(1) If B is a nonsingular matrix, it transforms a set of n linearly independent vectors v1, v2, ..., vn into linearly independent vectors w1 = Bv1, w2 = Bv2, ..., wn = Bvn. Proof: if the wi were linearly dependent, there would exist scalars c1, c2, ..., cn, not all zero, such that 0 = c1*w1 + ... + cn*wn = B*(c1*v1 + ... + cn*vn). Since B is nonsingular, this forces c1*v1 + ... + cn*vn = 0, contradicting the linear independence of the vi.
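[Editorial note: fact (1) can be spot-checked numerically; a nonsingular B maps a basis to a set of full rank. A sketch with NumPy, using the standard basis for the vi.]

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
V = np.eye(n)                    # columns v1..vn: the standard basis
B = rng.standard_normal((n, n))  # almost surely nonsingular
assert np.linalg.matrix_rank(B) == n

W = B @ V                        # columns wi = B vi
# Since B is nonsingular, the wi remain linearly independent:
print(np.linalg.matrix_rank(W))  # 4, i.e. full rank
```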

(2) If B is a singular matrix, it transforms any set of n vectors into linearly dependent vectors. Obviously (and easily proved): if the vi are dependent, so are the wi. However, the result asserts more: even if the vi are linearly independent, the wi will be linearly dependent. To prove this, assume the contrary. Then the only solution to sum ci*wi = 0 is c1 = c2 = ... = cn = 0, and hence the only solution to B*(sum ci*vi) = 0 is c = 0. However, since B is singular there is a nonzero vector u with B*u = 0, and since the vi form a basis there are ci, not all zero, such that u = sum ci*vi. Then sum ci*wi = B*u = 0 with the ci not all zero, a contradiction.
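[Editorial note: fact (2), checked numerically; a singular B collapses even an independent set into a dependent one. The rank-3 projection below is an illustrative choice.]

```python
import numpy as np

n = 4
V = np.eye(n)                      # an independent set: the standard basis

# A singular B: a rank-3 projection that kills the last coordinate.
B = np.diag([1.0, 1.0, 1.0, 0.0])
assert np.linalg.matrix_rank(B) < n

W = B @ V                          # columns wi = B vi
# Even though the vi are independent, the wi are dependent:
print(np.linalg.matrix_rank(W))    # 3 < 4
```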

OK, so if C = A*B and B is singular, a set of n linearly independent vectors is transformed by B into a linearly dependent set, and these are in turn transformed into a linearly dependent set by any A, singular or not. Thus C transforms a basis into a linearly dependent set, so it is singular. Next, if B is nonsingular but A is singular, we get a linearly independent set wi = B*vi, but A then transforms these into a linearly dependent set. Again, C is singular.
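[Editorial note: the conclusion, singular times singular is singular, checked on a concrete pair; the two rank-1 matrices below are illustrative choices.]

```python
import numpy as np

# Two singular 2x2 matrices (each has rank 1).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# rank(AB) <= min(rank A, rank B), so the product cannot be non-singular.
C = A @ B
print(np.linalg.matrix_rank(C))  # 0 here: A @ B is the zero matrix
```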

R.G. Vickson

To make it a little clearer, I'd like a proof that two elements of a monoid with a sub-group cannot be multiplied to give an element of the sub-group unless both of the elements are in the sub-group.

Any hints?
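[Editorial note: for the matrix instance of this monoid question, the facts above already settle it; in the monoid of n x n matrices under multiplication, the non-singular matrices form the subgroup, and a product lands in it only when both factors do. A randomized spot-check of that instance, not a proof of any general monoid statement.]

```python
import numpy as np

rng = np.random.default_rng(3)

def nonsingular(M):
    """Membership test for the subgroup: full rank = invertible."""
    return np.linalg.matrix_rank(M) == M.shape[0]

# Spot-check: whenever A @ B is in the subgroup, so are A and B.
for _ in range(1000):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    if rng.random() < 0.5:
        A[2] = A[0] + A[1]   # force A to be singular about half the time
    if nonsingular(A @ B):
        assert nonsingular(A) and nonsingular(B)
print("no counterexample found")
```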

--

Michael F. Stemper

#include <Standard_Disclaimer>

Build a man a fire, and you warm him for a day. Set him on fire, and you warm him for a lifetime.


**References**:
- **Multiplication of matrices**, *From*: Michael Stemper
