diff --git a/mjo/eja/TODO b/mjo/eja/TODO
index 76b9eaf..2a13855 100644
--- a/mjo/eja/TODO
+++ b/mjo/eja/TODO
@@ -1,20 +1,43 @@
-1. Add CartesianProductEJA.
+1. Add references and start citing them.
 
-2. Add references and start citing them.
+2. Implement the octonion simple EJA. We don't actually need octonions
+   for this to work, only their real embedding (some 8x8 monstrosity).
 
-3. Implement the octonion simple EJA.
+3. Pre-cache charpoly for some small algebras?
 
-4. Factor out the unit-norm basis (and operator symmetry) tests once
-   all of the algebras pass.
+RealSymmetricEJA(4):
 
-5. Override inner_product(), _max_test_case_size(), et cetera in
-   DirectSumEJA.
+sage: F = J.base_ring()
+sage: a0 = (1/4)*X[4]**2*X[6]**2 - (1/2)*X[2]*X[5]*X[6]**2 - (1/2)*X[3]*X[4]*X[6]*X[7] + (F(2).sqrt()/2)*X[1]*X[5]*X[6]*X[7] + (1/4)*X[3]**2*X[7]**2 - (1/2)*X[0]*X[5]*X[7]**2 + (F(2).sqrt()/2)*X[2]*X[3]*X[6]*X[8] - (1/2)*X[1]*X[4]*X[6]*X[8] - (1/2)*X[1]*X[3]*X[7]*X[8] + (F(2).sqrt()/2)*X[0]*X[4]*X[7]*X[8] + (1/4)*X[1]**2*X[8]**2 - (1/2)*X[0]*X[2]*X[8]**2 - (1/2)*X[2]*X[3]**2*X[9] + (F(2).sqrt()/2)*X[1]*X[3]*X[4]*X[9] - (1/2)*X[0]*X[4]**2*X[9] - (1/2)*X[1]**2*X[5]*X[9] + X[0]*X[2]*X[5]*X[9]
 
-6. Switch to QQ in *all* algebras for _charpoly_coefficients().
+4. Profile the construction of "large" matrix algebras (like the
+   15-dimensional QuaternionHermitianAlgebra(3)) to find out why
+   they're so slow.
 
-7. Pass already_echelonized (default: False) and echelon_basis
-   (default: None) into the subalgebra constructor. The value of
-   already_echelonized can be passed to V.span_of_basis() to save
-   some time, and usinf e.g. FreeModule_submodule_with_basis_field
-   we may somehow be able to pass the echelon basis straight in to
-   save time.
+5. Instead of storing a basis multiplication matrix, just make
+   product_on_basis() a cached method and manually cache its
+   entries. The Cython cached-method lookup should be faster than a
+   Python-based matrix lookup anyway. NOTE: we should still be able
+   to recompute the table somehow. Is this worth it?
+
+6. What the ever-loving fuck is this shit?
+
+   sage: O = Octonions(QQ)
+   sage: e0 = O.monomial(0)
+   sage: e0*[[[[]]]]
+   [[[[]]]]*e0
+
+7. In fact, could my octonion matrix algebra be generalized to any
+   algebra of matrices over the reals whose entries are not real? Then
+   we wouldn't need real embeddings at all. They might even be fricking
+   vector spaces if I did that...
+
+8. Every once in a long while, the test
+
+   sage: set_random_seed()
+   sage: x = random_eja().random_element()
+   sage: x.is_invertible() == (x.det() != 0)
+
+   in eja_element.py returns False.
+
+9. Add an alias for AlbertAlgebra.
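
A rough, hypothetical sketch of the caching idea in item (5) above. None of
the names below (ToyAlgebra, _matrix_basis, _jordan_product,
multiplication_table) are the current implementation; the point is only that
Sage's @cached_method can stand in for a stored basis multiplication matrix
while still letting the full table be recomputed on demand:

    # Sketch only -- a toy stand-in, not the real EJA class hierarchy.
    from sage.misc.cachefunc import cached_method

    class ToyAlgebra(object):
        def __init__(self, matrix_basis, jordan_product):
            # Keep just enough data to compute any product on demand;
            # no precomputed multiplication matrix is stored.
            self._matrix_basis = matrix_basis
            self._jordan_product = jordan_product

        @cached_method
        def product_on_basis(self, i, j):
            # Computed on first use, then served from the Cython cache.
            return self._jordan_product(self._matrix_basis[i],
                                        self._matrix_basis[j])

        def multiplication_table(self):
            # Addresses the NOTE in item (5): the whole table can still
            # be recomputed (and re-cached) at any time.
            n = len(self._matrix_basis)
            return [[self.product_on_basis(i, j) for j in range(n)]
                    for i in range(n)]

For example, ToyAlgebra(matrix_basis, lambda a, b: (a*b + b*a)/2) would cache
the symmetrized matrix product of each pair of basis elements the first time
it is requested.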