ALGORITHM:
- .........
+ First we handle the special cases where the algebra is
+ trivial, this element is zero, or the dimension of the algebra
+ is one and this element is not zero. With those out of the
+ way, we may assume that ``self`` is nonzero, the algebra is
+ nontrivial, and that the dimension of the algebra is at least
+ two.
+
+ Beginning with the algebra's unit element (power zero), we add
+ successive (basis representations of) powers of this element
+ to a matrix, row-reducing at each step. After row-reducing, we
+ check the rank of the matrix. If, at some step, adding a row
+ and row-reducing fails to increase the rank, then the row
+ we've just added lives in the span of the previous ones; thus
+ the corresponding power of ``self`` lives in the span of its
+ lesser powers. When that happens, the degree of the minimal
+ polynomial is the rank of the matrix; if it never happens, the
+ degree must be the dimension of the entire space.
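Outside of Sage, the rank-tracking loop described above can be sketched over the rationals, using the algebra of ``n``-by-``n`` matrices as a stand-in for an EJA. This is a minimal illustration, not the implementation; the helper names (``_rank``, ``min_poly_degree``) are hypothetical:

```python
from fractions import Fraction

def _rank(rows):
    """Row-reduce a list of row vectors over Q and return the rank."""
    m = [list(r) for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def _matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def min_poly_degree(x):
    """Degree of the minimal polynomial of the square matrix ``x``:
    append vectorized powers of ``x`` (starting with the identity,
    power zero) and stop as soon as the rank fails to grow."""
    n = len(x)
    x = [[Fraction(e) for e in row] for row in x]
    power = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    rows, rank = [], 0
    while True:
        rows.append([e for row in power for e in row])
        new_rank = _rank(rows)
        if new_rank == rank:
            # the newest power is a combination of the lesser ones,
            # so the degree is the number of independent powers
            return rank
        rank = new_rank
        power = _matmul(power, x)
```

The loop always terminates, since the rank is bounded by the dimension of the ambient space.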
TESTS::

    sage: x = random_eja().random_element()
    sage: x.degree() == x.minimal_polynomial().degree()
    True
-
"""
n = self.parent().dimension()
if self.is_nilpotent():
raise ValueError("this only works with non-nilpotent elements!")
- J = self.subalgebra_generated_by()
+ # The subalgebra is transient (we return an element of the
+ # superalgebra, i.e. this algebra) so why bother
+ # orthonormalizing?
+ J = self.subalgebra_generated_by(orthonormalize=False)
u = J(self)
# We don't need to compute the image of the matrix of
# left-u^m-multiplication as a subspace: we can just solve,
# knowing that A(c) = u^(s+1) should have a solution in the
# big space, too.
- #
- # Beware, solve_right() means that we're using COLUMN vectors.
- # Our FiniteDimensionalAlgebraElement superclass uses rows.
u_next = u**(s+1)
A = u_next.operator().matrix()
c = J.from_vector(A.solve_right(u_next.to_vector()))
- # Now c is the idempotent we want, but it still lives in the subalgebra.
+ # Now c is the idempotent we want, but it still lives in
+ # the subalgebra.
return c.superalgebra_element()
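As a toy illustration of the ``solve_right`` step (not the EJA implementation itself): in the commutative algebra Q^n under componentwise multiplication, left multiplication by ``u^(s+1)`` is a diagonal operator, and solving ``A(c) = u^(s+1)`` recovers the idempotent supported where ``u`` is nonzero, i.e. the unit of the subalgebra generated by ``u``. The names ``hadamard`` and ``subalgebra_unit`` are hypothetical:

```python
from fractions import Fraction

def hadamard(a, b):
    # componentwise product: the multiplication of our toy algebra Q^n
    return [x * y for x, y in zip(a, b)]

def subalgebra_unit(u):
    """Solve A(c) = u^2, where A is left multiplication by u^2 (a
    diagonal operator here), taking the zero component wherever the
    system is underdetermined.  This mirrors the solve_right() call;
    s = 1 suffices because this algebra has no nilpotents."""
    u = [Fraction(x) for x in u]
    u2 = hadamard(u, u)
    A_diag = u2          # left multiplication by u^2, as a diagonal
    b = u2               # right-hand side u^(s+1)
    return [bi / ai if ai != 0 else Fraction(0)
            for ai, bi in zip(A_diag, b)]
```

For ``u = (2, -3, 0)`` this yields ``c = (1, 1, 0)``, which squares to itself and acts as the identity on ``u``.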