Hacker News

> As vectors, obviously they have inverses, additive inverses. Since vectors don't have multiplication, there is no multiplicative inverse

A vector is pretty much by definition also a matrix, and there is a standard way to multiply matrices. You can define several inverses of a vector that way, though you can't define a unique inverse.
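A quick sketch of the non-uniqueness, assuming we write the vector as an Nx1 matrix and look for 1xN left inverses (NumPy, with arbitrary example numbers):

```python
import numpy as np

v = np.array([[2.0], [0.0]])   # a vector written as a 2x1 matrix

# Two different 1x2 matrices, both of which are left inverses of v:
a1 = np.array([[0.5, 0.0]])
a2 = np.array([[0.5, 7.0]])    # second entry is arbitrary

print(a1 @ v)  # [[1.]]
print(a2 @ v)  # [[1.]]  -- same product, so no unique inverse
```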

The standard inner product is of course also an exceptionally typical way to multiply vectors, but the concept of an inverse there doesn't make much sense.



No, a vector is defined as an object that has certain properties, like addition and scalar multiplication. It's a very general and abstract concept.

There are vector spaces of functions, with infinite dimension, but there are also vector spaces with a finite number of elements.

So only some vectors can even be written as 1xN matrices, if that is what you're referring to. But even if you write a vector that way, it doesn't mean it IS a matrix or that it automatically "has" multiplication.

In mathematics, an object only has an operation if it's part of the definition, and as such, vectors don't "have" multiplication.


What you say is mostly correct, but only for a certain meaning of the word "vector", which has been used with two distinct meanings since its introduction in the first half of the 19th century.

The set of elements defined by certain properties of their addition and of their multiplication with the elements belonging to a set of scalars is named "vector space" by some and "linear space" by others.

According to the etymology of the word vector, "linear space" would be more appropriate. You have used "vector" with the meaning "element of a linear space", and what you have said is correct, except that for any "vector" as an element of a linear space, considered as a column vector, there exists a corresponding row vector, even in the infinite-dimensional case.

"Vector" means translation of the space, and this is what "vector" meant when the word was introduced by Hamilton. While the set of translations is a linear space a.k.a. a vector space in the generalized sense, the set of translations, i.e. vectors in the strict sense, has additional properties due to the multiplication operations that must be defined for "vectors" in their strict sense (which are needed e.g. to determine angles between translations and distances).

"Vectors" as elements of linear spaces are a very general notion, which appears in many domains. For all linear spaces, including the infinite-dimensional ones, you can define matrices, i.e. linear functions, and matrix multiplication, i.e. composition of linear functions, and also the correspondence between a 1xN vector and an Nx1 vector, or more correctly between a vector and an associated linear form. The latter also exists in the infinite-dimensional case, even if the names row vectors and column vectors are less common there (though the names bra vectors and ket vectors are still in use for the infinite-dimensional case).

For the infinite-dimensional case the vectors and the matrices become functions of 1 or of 2 parameters and the sums from the formulas of matrix multiplication become integrals.
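As a sketch of the analogy: the finite-dimensional matrix-vector product, with a kernel K playing the role of the matrix, becomes an integral operator:

```latex
(Av)_i = \sum_{j} A_{ij}\, v_j
\qquad\longrightarrow\qquad
(Kf)(x) = \int K(x, y)\, f(y)\, dy
```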

While for most computer applications, "vectors" refer just to elements of linear spaces, most "vectors" used in models of physical systems are vectors in the original sense of the word, where not only the vector addition and the product with scalars matter, but the products of vectors also have an essential role and their meaning can be best understood in the context of the complete geometric algebra theory.


> A vector is pretty much by definition also a matrix, and there is a standard way to multiply matrices.

There is a standard way to multiply an MxN matrix with an NxK matrix, but none for a 1xN with a 1xN, or an Nx1 with an Nx1 - the two possible ways to write a vector as a matrix. You have to transpose exactly one of the two vectors, and then you have two possible results: 1xN multiplied with Nx1 yields a scalar (that's actually the usual dot product/scalar product/whatever you call it), and Nx1 multiplied with 1xN yields an NxN matrix.
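The two shapes can be checked directly in NumPy (example numbers are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])

row = u.reshape(1, 3)   # 1xN
col = u.reshape(3, 1)   # Nx1

inner = row @ col       # 1x1: the usual dot product, here [[14.]]
outer = col @ row       # 3x3: the outer-product matrix

print(inner.shape, outer.shape)  # (1, 1) (3, 3)
```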


Doesn't matter. We already didn't have a unique inverse, but it's perfectly possible to find a left pseudoinverse and a right pseudoinverse, bearing in mind that they're not unique.

Though thinking about it more, it seems like the outer-product-inverse of a vector (a) must be unique if it exists; and (b) is highly unlikely to exist.
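For the pseudoinverse case, here is a minimal sketch using NumPy's Moore-Penrose pseudoinverse (one standard, but not the only, choice of generalized inverse):

```python
import numpy as np

# Treat a vector as an Nx1 matrix.
v = np.array([[1.0], [2.0], [3.0]])

v_pinv = np.linalg.pinv(v)   # 1x3 row vector, here v.T / ||v||^2

print(v_pinv @ v)  # [[1.]] -- a left inverse of v
# v @ v_pinv is only a rank-1 projection, not the 3x3 identity.
```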

> 1xN multiplied with Nx1 yields a scalar (that's actually the 'usual' dot-product/scalar product/whatever you call it)

I'm aware of this, but there are two ways we might conceive of an "inverse":

- Since a vector is a matrix, the inverse of a vector might be defined by matrix multiplication, where A is the inverse of B if AB is "the" identity matrix. This is only strictly defined for square matrices, but the pseudoinverse concept extends it to nonsquare matrices.

- Or, we could go for a more basic sense of "multiplicative inverse", where the concept is that if AB = C, then B = A⁻¹C. This is what I was thinking of when saying that the concept of an inverse doesn't make sense when multiplication is the inner product - if I give you a vector v, and its inner product with some other vector u, there is no way of recovering what u was.
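The non-recoverability is easy to demonstrate: any two vectors that differ in a direction orthogonal to v have the same inner product with v (NumPy, arbitrary example numbers):

```python
import numpy as np

v = np.array([1.0, 0.0])
u1 = np.array([3.0, 5.0])
u2 = np.array([3.0, -2.0])   # differs from u1 orthogonally to v

# Same inner product, so v and v.u cannot determine u.
print(v @ u1, v @ u2)  # 3.0 3.0
```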


> The standard inner product is of course also an exceptionally typical way to multiply vectors, but the concept of an inverse there doesn't make much sense.

Not all vector spaces are equipped with an inner product. The point is that you can start with some simple axioms and build these more complicated things (inner product spaces, algebras over a field, geometric algebras, etc.).


> Not all vector spaces are equipped with an inner product.

Any vector space over a field (usually part of the definition of a vector space) is equipped with the standard inner product, because multiplication and addition are part of the definition of a field.


That’s not how it works. Unless you explicitly give the vector space an inner product, it doesn’t have one.

What you probably mean is that “you can always define an inner product” but that’s a very different statement.

That’s not true either, though: the different dimensions in a vector space do not have to belong to the same field, so you can’t assume you can add them together.


> Unless you explicitly give the vector space an inner product, it doesn’t have one.

Well, no, not at all.

The inner product is still there. It's still an inner product. The space in which your vectors exist is still an inner product space. You may not care about the inner product, but it doesn't cease to exist when you stop looking at it.


I really don't understand why you would say this, it's obviously false.

Setting aside the subtler point of what it means to "have" something in mathematics:

clearly only some vector spaces even have the potential to introduce an inner product. Consider F for some random finite field. You can make a vector space from it, but what would the inner product be? Or R x F for that matter, you could never give that an inner product.

That's why the concepts of "vector space" and "inner product space" are separate concepts. Some vector spaces aren't, and could never be, inner product spaces.
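A small illustration of why the usual dot-product formula breaks down over a finite field: over F_2 (arithmetic mod 2), a nonzero vector can have "length" zero, violating the positive-definiteness axiom of an inner product.

```python
# The naive dot product of the nonzero vector (1, 1) with itself,
# computed mod 2 as required over the finite field F_2:
x = (1, 1)
dot = sum(a * b for a, b in zip(x, x)) % 2
print(dot)  # 0 -- positive-definiteness fails
```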




