--- /dev/null
+This directory contains all of the tests for the claims in the paper,
+as well as the supporting code. It has a few requirements:
+
+ * Python built with support for sqlite
+ * SymPy <https://pypi.org/project/sympy/>
+ * MessagePack for python <https://pypi.org/project/msgpack/>
+
+Here is a quick summary of the files:
+
+ * compute.py - utilities for computing all cones, ranks of Lorentz
+ cones, etc. and storing them in a database. These are long-
+ running computations, not otherwise used by the test suite.
+
+ * cones.py - the SymmetricCone class and its subclasses L, HR, HC,
+ HH, and HO.
+
+ * live.db - an empty (until you fill it) SQLite database for storing
+ the cones/ranks computed by the functions in compute.py.
+
+ HINT: unless you want "git diff" to take forever and tempt you to
+ commit gigabytes of binary junk to the repo, you should tell git
+ (insofar as is possible) to ignore changes to the live database:
+
+ $ git update-index --assume-unchanged tests/live.db
+ $ git update-index --skip-worktree tests/live.db
+
+ * partitions.py - functions relating to integer partitions.
+
+ * sql.py - functions for working with the SQL database.
+
+ * tests.py - the main tests for the results in the paper.
+
+ * verify-live-db.py - a runnable script that will inspect live.db to
+ make sure there is enough (consistent) data there.
+
+Every function (not only those that verify the results in the paper)
+is itself tested. To run the entire test suite, you can run
+
+ $ python -m doctest *.py
+
+from within this directory. To populate the live database, there are
+two things to do. First, run compute.py:
+
+ $ python compute.py
+ computing cones of dimension 0...
+ computing cones of dimension 1...
+
+This will start computing all symmetric cones, up to isomorphism, in
+dimension n=0,1,2,... successively. It will take a very long time and
+eventually consume all of your RAM. Hit Ctrl-C when you get scared,
+and it will stop. The cones are saved to the database at each step, so
+killing the program will only abandon the computation for the current
+dimension (not all previous dimensions).
+
+In addition to the cones, you will also want to compute some Lyapunov
+ranks corresponding to Lorentz cones. For that, the same compute.py
+is used, but you have to pass it "lorentz" as the first argument on
+the command-line:
+
+ $ python compute.py lorentz
+ computing lorentz ranks in dimension 0...
+ computing lorentz ranks in dimension 1...
+
+Again, hit Ctrl-C when you get bored. Afterwards your live.db will
+hopefully contain enough data to check the results in the paper.
+Running
+
+ $ python verify-live-db.py
+
+will confirm it.
--- /dev/null
+r"""
+Compute results used in the paper. There are two main functions:
+
+ * :func:`admissible_lorentz_ranks` computes all possible Lyapunov
+ ranks arising from sums of Lorentz cones in a given dimension.
+ This is rather fast (so we can compute a lot of them), but it is
+ only useful for proving that certain cones DO NOT have simulacra.
+ It is however useless for determining whether or not, say,
+ ``L(m)+L(n)`` has simulacra, because a priori we expect the
+ Lyapunov rank of that cone to be in the list (put there by
+ itself). To ask questions about simulacra, we would need to know
+ _how many_ sums of Lorentz cones (up to isomorphism) had that same
+ Lyapunov rank... but the function does not provide this
+ information.
+
+ * :func:`dim_ranks_cones` is an attempt to address the shortcomings
+ of :func:`admissible_lorentz_ranks`. It computes all cones in a
+ given dimension, as well as their Lyapunov ranks, and returns them
+ in a rank => tuple-of-cones map for that dimension. Isomorphic cones
+ are deduplicated along the way. For this to work efficiently, a custom
+ serialized representation of a symmetric cone is used; essentially
+ we represent them as lists of integers.
+
+Both of these functions are fundamentally recursive: the way you get
+all cones of dimension four is to start with the irreducible ones, and
+add them to the reducible ones you get from partitioning n=4 and
+running the subproblem in e.g. n=3 and n=1. Both of these functions
+therefore operate with a cache to avoid recomputing earlier cases, and
+can optionally cache to a SQL database as well. For our test suite to
+run efficiently, we have no real choice but to precompute as many
+values of these functions as we can.
+
+There is one more function of interest in this module:
+
+ * :func:`compute_Ln_Ln_simulacra` looks for simulacra of
+ ``L(n)+L(n)`` over a range of ``n`` and in parallel. From the
+ paper we know that it suffices to consider only sums of Lorentz
+ cones for this, and thus for performance this function operates
+ on integer partitions rather than with cones. To catalogue the
+ simulacra in the paper, you might run
+
+ >>> compute_Ln_Ln_simulacra(0, 100, 4) # doctest: +SKIP
+
+ which would check ``n=0`` up to ``n=100`` using four processes.
+ Afterwards, it prints the results to the console; they are not
+ saved to the database.
+
+The two functions that use a database each take a ``db`` argument and
+a ``sql`` toggle. If you really want to affect the live database, you
+have to set ``sql`` to ``True``, and ``db`` to ``sql.LIVE_DATABASE``.
+These are not the defaults because we don't want to modify the live
+database by surprise while testing.
+
+If you _execute_ this module, on the other hand, it will begin to
+update the live database immediately, as it tries to compute more
+and more cones with :func:`dim_ranks_cones`::
+
+ $ python compute.py
+ computing cones of dimension 99...
+
+This however takes "forever" and will probably run your system out of
+RAM. The bottleneck for both speed and space is putting all of the new
+cones for a given dimension into a list and sorting them to eliminate
+duplicates. You can also use it to compute admissible Lorentz ranks:
+
+ $ python compute.py lorentz
+ computing lorentz ranks in dimension 200...
+
+"""
+import sqlite3
+
+from cones import L, HR, HC, HH, HO, SerialCone, SerialSum, SymmetricCone
+import partitions
+import sql
+
+def _admissible_lorentz_ranks(n : int, d : dict[int, tuple[int,...]] | None, db : str) -> tuple[int, ...]:
+ r"""
+ Recursive implementation underlying :func:`admissible_lorentz_ranks`.
+
+ This is necessary for that public function to have a nice user
+ interface because the recursive bit will always pass a cache dict
+ down to the next level, but we don't want users to have to pass in
+ an empty dict to get started.
+
+ If ``d`` is ``None`` instead of a dict, the SQL database ``db``
+ will be consulted/updated instead.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension for which you'd like to know which Lyapunov ranks
+ are possible using only Lorentz cone factors.
+
+ d : dict|None
+ Either a dict to cache the results in, or ``None`` if you want
+ to use the SQL database ``db`` as a cache instead.
+
+ db : str
+ The name of the SQLite database to use as a cache (if ``d`` is
+ ``None``).
+
+ Returns
+ -------
+
+ tuple[int]
+ A tuple of Lyapunov ranks that can be achieved in dimension
+ ``n`` using only Lorentz cone factors, in no particular order.
+
+ Examples
+ --------
+
+ The examples for :func:`admissible_lorentz_ranks` demonstrate this
+ indirectly, but we can check a few trivial cases by hand::
+
+ >>> _admissible_lorentz_ranks(0, {}, "unused")
+ (0,)
+ >>> _admissible_lorentz_ranks(1, {}, "no database")
+ (1,)
+
+ This one will use a new, temporary database::
+
+ >>> sql.new_database(sql.TEST_DATABASE)
+ >>> sorted(_admissible_lorentz_ranks(3, None, sql.TEST_DATABASE))
+ [3, 4]
+
+ And the second time, it will be cached::
+
+ >>> sorted(_admissible_lorentz_ranks(3, None, sql.TEST_DATABASE))
+ [3, 4]
+
+ To avoid surprises, ensure that the ``n = 0`` case gets added to the
+ dictionary or database::
+
+ >>> d = {}
+ >>> _admissible_lorentz_ranks(2, d, None)
+ (2,)
+ >>> d
+ {0: (0,), 1: (1,), 2: (2,)}
+
+ >>> sql.new_database(sql.TEST_DATABASE)
+ >>> _admissible_lorentz_ranks(2, None, sql.TEST_DATABASE)
+ (2,)
+ >>> sql.admissible_lorentz_ranks(0, sql.TEST_DATABASE)
+ (0,)
+
+ """
+ if n == 1:
+ # This function has side effects (updating the dictionary or
+ # database) that are missed for n=0 because we never recurse
+ # that far down. To avoid surprises, we call the theoretical
+ # base case from the de facto base case to trigger its side
+ # effects.
+ _ = _admissible_lorentz_ranks(0, d, db)
+
+ if d is None:
+ if sql.have_lorentz_rank_dim(n, db=db):
+ return sql.admissible_lorentz_ranks(n, db=db)
+ else:
+ if n in d:
+ return d[n]
+
+ s: set[int]
+ s = set()
+ for i in range(1,(n//2)+1):
+ # We can stop at i = n//2 because afterwards, i passes n-i
+ # and the situation is symmetric.
+ #
+ # Compute s1 first (i.e. outside of the comprehension) because
+ # we want to be sure that the lower values get computed before
+ # we try to compute the higher ones.
+ s1 = _admissible_lorentz_ranks(i, d, db)
+ s2 = _admissible_lorentz_ranks(n-i, d, db)
+ s.update( b1 + b2 for b1 in s1 for b2 in s2 )
+
+ this_fn: tuple[int,...]
+ this_fn = ()
+ if n != 2:
+ # n=2 is the one case where f(n) = 1 + 1 + ... + 1 (n times)
+ # and the cone is reducible, so f(n) would wind up in the list
+ # twice, once for L(n) and once for L(1) + L(1).
+ this_fn = (partitions.f(n),)
+ result = tuple(s) + this_fn
+
+ if d is None:
+ sql.insert_lorentz_ranks(n, result, db=db)
+ else:
+ d[n] = result
+ return result
+
+
+def admissible_lorentz_ranks(n : int, sql : bool = False, db : str = sql.TEST_DATABASE) -> tuple[int, ...]:
+ r"""
+ Compute admissible Lyapunov ranks for sums of Lorentz cones
+ of total dimension ``n``.
+
+ This operates recursively and caches the results at each step (to
+ make future steps faster). It can also use a SQL database (instead
+ of the default python dict) as a cache.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension for which you'd like to know which Lyapunov ranks
+ are possible using only Lorentz cone factors.
+
+ sql : bool, default=False
+ Whether to use the SQL database ``db`` as the cache (otherwise
+ a fresh python dict is used).
+
+ db : str, default=TEST_DATABASE
+ The name of the SQLite database to use (if ``sql`` is ``True``).
+
+ Returns
+ -------
+
+ tuple[int]
+ The "set" of Lyapunov ranks that can be achieved in dimension
+ ``n`` using only Lorentz cone factors. A tuple is used instead
+ of a set because msgpack knows what to do with a tuple, unlike
+ a set.
+
+ Examples
+ --------
+
+ Low-dimensional examples::
+
+ >>> admissible_lorentz_ranks(0)
+ (0,)
+ >>> admissible_lorentz_ranks(1)
+ (1,)
+ >>> admissible_lorentz_ranks(2)
+ (2,)
+ >>> sorted(admissible_lorentz_ranks(3))
+ [3, 4]
+
+ The SQL values should agree with the ones we compute. We can
+ check this in a reasonable amount of time up to ``n = 75``. We
+ sort the results before comparing them because, despite our use of
+ tuples, the order that they wind up in is not meaningful::
+
+ >>> import sql
+ >>> def check(n):
+ ... if not sql.have_lorentz_rank_dim(n):
+ ... # don't fail if we're just missing the data
+ ... return True
+ ... actual = sorted(sql.admissible_lorentz_ranks(n))
+ ... expected = sorted(admissible_lorentz_ranks(n))
+ ... return (actual == expected)
+ >>>
+ >>> # these magic numbers have no particular meaning
+ >>> random_ns = [0,1,7,12,15,19,23]
+ >>> all( check(n) for n in random_ns )
+ True
+ >>> from random import randint
+ >>> n = randint(0,75)
+ >>> check(n)
+ True
+
+ Since we don't recurse all the way down to ``n = 0``, some care is
+ needed to make sure that we don't assume the existence of that row
+ based on the presence of rows for larger dimensions::
+
+ >>> sql.new_database(sql.TEST_DATABASE)
+ >>> sorted(admissible_lorentz_ranks(4, True))
+ [4, 5, 7]
+ >>> admissible_lorentz_ranks(0, True)
+ (0,)
+
+ """
+ # The implementation of this function _always_ uses a cache,
+ # the only question is, whether or not the cache will be
+ # a python dict that gets passed around, or an implicit
+ # SQL database.
+ d: dict[ int, tuple[int,...] ] | None
+ d = {}
+ if sql:
+ d = None
+
+ # Now just run the real, recursive implementation.
+ return _admissible_lorentz_ranks(n, d, db)
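The recursion that this wrapper drives can be sketched in a standalone form. The sketch below is an assumption-laden model, not the real code: it assumes the Lyapunov rank of ``L(n)`` is ``f(n) = n*(n-1)/2 + 1`` for ``n >= 1`` (and 0 for the trivial cone), which is consistent with the doctests above; the real ``f`` lives in partitions.py, and the ``lru_cache`` plays the role of the ``d`` dict.

```python
from functools import lru_cache

def f(n: int) -> int:
    # assumed Lyapunov rank of the Lorentz cone L(n); the real
    # implementation is partitions.f
    return 0 if n == 0 else n * (n - 1) // 2 + 1

@lru_cache(maxsize=None)
def admissible(n: int) -> frozenset[int]:
    # ranks achievable by sums of Lorentz cones of total dimension n
    s = set()
    for i in range(1, n // 2 + 1):
        # split n into (i, n-i), exactly as in the real implementation
        s.update(b1 + b2 for b1 in admissible(i) for b2 in admissible(n - i))
    if n != 2:
        # n=2 is the one case where L(n) is reducible, so f(2) is
        # already in the set as f(1) + f(1)
        s.add(f(n))
    return frozenset(s)

print(sorted(admissible(4)))  # [4, 5, 7], matching the doctests above
```

Swapping the memoization decorator for table lookups gives the SQL-backed variant of the same recursion.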
+
+
+def _irreducible_cones_of_dim(n : int) -> tuple[SymmetricCone, ...]:
+ r"""
+ Return a tuple of irreducible cones in dimension ``n``.
+
+ Outside of dimension two, there is always a Lorentz cone in
+ dimension ``n``, but there may not be any others. This is "up to
+ isomorphism," so, for example, you won't get ``HR(2)`` in
+ dimension three.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension in which you'd like the irreducible cones.
+
+ Returns
+ -------
+
+ A tuple consisting of all irreducible cones in dimension ``n``.
+
+ Examples
+ --------
+
+ There are no irreducible cones of dimension two::
+
+ >>> _irreducible_cones_of_dim(0)
+ (L(0),)
+ >>> _irreducible_cones_of_dim(1)
+ (L(1),)
+ >>> _irreducible_cones_of_dim(2)
+ ()
+
+ Isomorphic results are not returned::
+
+ >>> L(3).dim == HR(2).dim == 3
+ True
+ >>> _irreducible_cones_of_dim(3)
+ (L(3),)
+
+ Larger factors appear where we think they will::
+
+ >>> HR(3) in _irreducible_cones_of_dim(6)
+ True
+ >>> HO(3) in _irreducible_cones_of_dim(27)
+ True
+
+ """
+ s : list[SymmetricCone]
+ s = []
+ if n != 2:
+ s.append(L(n))
+
+ # Check each of the Hermitian families independently: a single
+ # dimension can contain more than one irreducible cone, e.g. n=10
+ # contains both L(10) and HR(4), and n=36 contains HR(8) and HC(6).
+ if n >= 27: # HO(3).dim
+ if (a := HO.in_dim(n)):
+ s.append(a)
+ if n >= 15: # HH(3).dim
+ if (b := HH.in_dim(n)):
+ s.append(b)
+ if n >= 9: # HC(3).dim
+ if (c := HC.in_dim(n)):
+ s.append(c)
+ if n >= 6: # HR(3).dim
+ if (d := HR.in_dim(n)):
+ s.append(d)
+
+ return tuple(s)
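The dimension thresholds in the branches above come from the dimensions of the Hermitian matrix cones. As a sketch, the formulas below are assumptions, but they reproduce the commented values (HR(3).dim = 6, HC(3).dim = 9, HH(3).dim = 15, HO(3).dim = 27) and the low-dimensional isomorphisms in this module's doctests:

```python
def hr_dim(n: int) -> int:
    # real symmetric n-by-n matrices
    return n * (n + 1) // 2

def hc_dim(n: int) -> int:
    # complex hermitian n-by-n matrices (real dimension)
    return n * n

def hh_dim(n: int) -> int:
    # quaternion hermitian n-by-n matrices (real dimension)
    return n * (2 * n - 1)

def ho_dim(n: int) -> int:
    # octonion hermitian n-by-n matrices; only meaningful for n <= 3
    return n * (4 * n - 3)

# the thresholds used in the branches above:
assert (hr_dim(3), hc_dim(3), hh_dim(3), ho_dim(3)) == (6, 9, 15, 27)
# and the n=2 isomorphisms with L(3), L(4), L(6), L(10):
assert (hr_dim(2), hc_dim(2), hh_dim(2), ho_dim(2)) == (3, 4, 6, 10)
```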
+
+
+def _merge_factors(a, b) -> SerialSum:
+ r"""
+ Merge two serialized cones into a third.
+
+ We can do this efficiently because the sort order for cone
+ factors is already based on their serializations. As a result,
+ we don't need to deserialize and reserialize; we can just
+ combine and sort the serialized representations directly.
+
+ Parameters
+ ----------
+
+ a : int|tuple[int]
+ The first cone, serialized (as either an int or a tuple of
+ ints).
+ b : int|tuple[int]
+ The second cone, serialized (as either an int or a tuple of
+ ints).
+
+ Returns
+ -------
+
+ A tuple of ints representing a direct sum. If the two inputs ``a``
+ and ``b`` were deserialized and then combined into a
+ :class:`DirectSum`, the serialization of that direct sum is what
+ we return.
+
+ Examples
+ --------
+
+ A simple example with two irreducible factors::
+
+ >>> from cones import SymmetricCone
+ >>> a = L(4).serialize()
+ >>> b = HR(3).serialize()
+ >>> c = _merge_factors(a, b); c
+ (32, 41)
+ >>> SymmetricCone.deserialize(c)
+ HR(3) + L(4)
+
+ A random example showing that this does what we think it does,
+ i.e. circumvents deserialization/reserialization accurately::
+
+ >>> from cones import DirectSum, SymmetricCone, random_cone
+ >>> K = random_cone()
+ >>> a = K.serialize()
+ >>> J = random_cone()
+ >>> b = J.serialize()
+ >>> expected = DirectSum([K,J])
+ >>> actual = SymmetricCone.deserialize(_merge_factors(a,b))
+ >>> actual == expected
+ True
+ """
+ # Computing/comparing the type to int is actually a bit faster on
+ # average than isinstance.
+ if type(a) == int:
+ a = (a,)
+ if type(b) == int:
+ b = (b,)
+
+ # We have to re-sort the factors, because that's what the direct
+ # sum constructor would do to ensure that no duplicates sneak
+ # in. The serializations (s1,s2) and (s2,s1) represent the same
+ # cone! The caller is responsible for deduplicating them.
+ return tuple(sorted(a+b))
+
+
+def _dim_ranks_cones(n : int, d : dict[int, dict[int,tuple[SerialCone,...]]] | None, db : str, progress : bool) -> dict[int,tuple[SerialCone,...]]:
+ r"""
+ Recursive implementation underlying :func:`dim_ranks_cones`.
+
+ This is necessary for that public function to have a nice user
+ interface because the recursive bit will always pass a cache dict
+ down to the next level, but we don't want users to have to pass in
+ an empty dict to get started.
+
+ If ``d`` is ``None`` instead of a dict, the SQL database ``db``
+ will be consulted/updated instead.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension for which you want the rank => cones map.
+
+ d : dict[ int, dict[ int, tuple[tuple[int,...],...] ] ] | None
+ Either a dict to cache the results in, or ``None`` if you want
+ to use the SQL database ``db`` as a cache instead.
+
+ db : str
+ The name of the SQLite database to use as a cache (if ``d`` is
+ ``None``).
+
+ progress : bool
+ Whether or not to print a dot "." as the function progresses.
+ This isn't tested or anything, but it's real nice to see things
+ moving when computing the (n+1)st dimension takes two days.
+
+ Returns
+ -------
+
+ dict[ int, tuple[tuple[int,...],...] ]
+ A dictionary whose keys are the possible Lyapunov ranks in
+ dimension ``n`` and whose values are tuples of serialized cones
+ in dimension ``n`` having a Lyapunov rank equal to the key.
+
+ Examples
+ --------
+
+ The examples for :func:`dim_ranks_cones` demonstrate this
+ indirectly, but we can check a few trivial cases by hand::
+
+ >>> _dim_ranks_cones(0, {}, "unused", False)
+ {0: (1,)}
+ >>> _dim_ranks_cones(1, {}, "no database", False)
+ {1: (11,)}
+
+ This one will use a new, temporary database::
+
+ >>> sql.new_database(sql.TEST_DATABASE)
+ >>> _dim_ranks_cones(3, None, sql.TEST_DATABASE, False)
+ {3: ((11, 11, 11),), 4: (31,)}
+
+ And the second time, it will be cached::
+
+ >>> _dim_ranks_cones(3, None, sql.TEST_DATABASE, False)
+ {3: ((11, 11, 11),), 4: (31,)}
+
+ To avoid surprises, ensure that the ``n = 0`` case gets added to the
+ dictionary or database::
+
+ >>> d = {}
+ >>> _ = _dim_ranks_cones(2, d, None, False)
+ >>> d[0]
+ {0: (1,)}
+
+ >>> sql.new_database(sql.TEST_DATABASE)
+ >>> _ = _dim_ranks_cones(2, None, sql.TEST_DATABASE, False)
+ >>> sql.dim_ranks_cones(0, sql.TEST_DATABASE)
+ {0: (1,)}
+
+ """
+ if n == 1:
+ # This function has side effects (updating the dictionary or
+ # database) that are missed for n=0 because we never recurse
+ # that far down. To avoid surprises, we call the theoretical
+ # base case from the de facto base case to trigger its side
+ # effects.
+ _ = _dim_ranks_cones(0, d, db, progress)
+
+ if d is None:
+ if sql.have_cone_dim(n, db=db):
+ return sql.dim_ranks_cones(n, db=db)
+ else:
+ if n in d:
+ return d[n]
+
+ # The dict for this n. It will either be inserted as d[n],
+ # or put into the SQL database instead. (We'll convert the
+ # set to a tuple before doing so.)
+ d_n : dict[ int, set[SerialCone] ]
+ d_n = {}
+
+ # We partition "n" ourselves here. Basically, we split n into (i,
+ # n-i), and then recurse into each of them. Every partition of "n"
+ # arises from a partition of "i" plus a partition of "n-i". We can
+ # stop at n//2 here because the situation is symmetric: once i
+ # passes n-i, the resulting set is going to be the same; (5,2)
+ # gives the same result as (2,5).
+ for i in range(1,(n//2)+1):
+ s1 = _dim_ranks_cones(i, d, db, progress)
+ if progress:
+ print(".", end="", flush=True)
+ s2 = _dim_ranks_cones(n-i, d, db, progress)
+ if progress:
+ print(".", end="", flush=True)
+
+ # the ranks possible in dim=n are the sums of ranks possible
+ # in dim=i and dim=(n-i)
+ for r1 in s1:
+ for r2 in s2:
+ s : set[SerialCone]
+ s = set( _merge_factors(b1,b2)
+ for b1 in s1[r1]
+ for b2 in s2[r2] )
+ r = r1 + r2
+ if r in d_n:
+ # there's more than one way to add up to r!
+ d_n[r].update(s)
+ else:
+ d_n[r] = s
+
+ for c in _irreducible_cones_of_dim(n):
+ if c.rank in d_n:
+ d_n[c.rank].add(c.serialize())
+ else:
+ d_n[c.rank] = {c.serialize()}
+
+ # for serialization we want tuples, not sets
+ ret = { r: tuple(d_n[r]) for r in d_n }
+ del d_n
+
+ if d is None:
+ sql.insert_cones(n, ret, db=db)
+ else:
+ d[n] = ret
+ return ret
+
+
+def dim_ranks_cones(n : int, sql : bool = False, db : str = sql.TEST_DATABASE, progress : bool = False) -> dict[int,tuple[SerialCone,...]]:
+ r"""
+ Compute a rank => cones map for all cones of dimension ``n``.
+
+ Parameters
+ ----------
+ n : int
+ The dimension for which you want the rank => cones dict.
+
+ sql : bool, default=False
+ Whether to use the SQL database ``db`` as the cache (otherwise
+ a fresh python dict is used).
+
+ db : str, default=TEST_DATABASE
+ The name of the SQLite database to use (if ``sql`` is ``True``).
+
+ progress : bool, default=False
+ Whether or not to print a dot "." as the function progresses.
+ This isn't tested or anything, but it's real nice to see things
+ moving when computing the (n+1)st dimension takes two days.
+
+ Examples
+ --------
+
+ Easy low-dimensional cases::
+
+ >>> dim_ranks_cones(3)
+ {3: ((11, 11, 11),), 4: (31,)}
+
+ >>> dim_ranks_cones(4)
+ {4: ((11, 11, 11, 11),), 5: ((11, 31),), 7: (41,)}
+
+ >>> dim_ranks_cones(5)
+ {5: ((11, 11, 11, 11, 11),), 6: ((11, 11, 31),), 8: ((11, 41),), 11: (51,)}
+
+ The precomputed values should agree with the ones we compute::
+
+ >>> import sql
+ >>> def check(n):
+ ... if not sql.have_cone_dim(n):
+ ... # don't fail if we're just missing the data
+ ... return True
+ ... d1 = sql.dim_ranks_cones(n)
+ ... d2 = dim_ranks_cones(n)
+ ... return ( all( set(d1[r]) == set(d2[r]) for r in d1 )
+ ... and sorted(d1.keys()) == sorted(d2.keys()) )
+ >>>
+ >>> random_ns = [8,6,7,5,3,0,9]
+ >>> all( check(n) for n in random_ns )
+ True
+ >>> from random import randint
+ >>> n = randint(0,40)
+ >>> check(n)
+ True
+
+ Since we don't recurse all the way down to ``n = 0``, some care is
+ needed to make sure that we don't assume the existence of that row
+ based on the presence of rows for larger dimensions::
+
+ >>> sql.new_database(sql.TEST_DATABASE)
+ >>> _ = dim_ranks_cones(4, True)
+ >>> dim_ranks_cones(0, sql=True)
+ {0: (1,)}
+
+ And we might as well check the other rows we just computed::
+
+ >>> all(
+ ... dim_ranks_cones(k)
+ ... ==
+ ... dim_ranks_cones(k, sql=True)
+ ... for k in range(5)
+ ... )
+ True
+
+ """
+ # The implementation of this function _always_ uses a cache, the
+ # only question is, whether or not the cache will be a python dict
+ # that gets passed around, or an implicit SQL database.
+ d : dict[int, dict[int,tuple[SerialCone,...]]] | None
+ d = {}
+ if sql:
+ d = None
+ # Now just run the real, recursive implementation.
+ return _dim_ranks_cones(n, d, db, progress)
+
+
+def _one_partition_simulacra(p : list[int]) -> list[int] | None:
+ r"""
+ Get the first simulacrum we can find for ``p``.
+
+ This is similar to :func:`partitions.partition_simulacra`, but it
+ can make an optimization that destroys the uniqueness of the
+ partitions because we are only returning one of them anyway.
+
+ Parameters
+ ----------
+
+ p : list[int]
+ The partition you want to find a simulacrum for.
+
+ Returns
+ -------
+
+ Either a partition of the same sum/rank as ``p``, or ``None`` if
+ there are none.
+
+ Examples
+ --------
+
+ The partition found by this function may not be the first
+ partition found by :func:`partitions.partition_simulacra`, but it
+ should _eventually_ be found by that function::
+
+ >>> from random import choice, randint
+ >>> from partitions import partitions, partition_simulacra
+ >>> n = randint(0,30)
+ >>> p = choice(tuple(partitions(n, include_two=False)))
+ >>> s = _one_partition_simulacra(p)
+ >>> s is None or s in partition_simulacra(p)
+ True
+
+ """
+ from partitions import partitions, partition_rank
+ target_rank = partition_rank(p)
+
+ # The factors of a simulacrum can't all be less than or equal to
+ # the smallest factor in the target. So if p[0] is the smallest
+ # factor in the target, we might as well start partitioning
+ # assuming that there's a p[0]+1 factor, then a p[0]+2 factor,
+ # then...
+ #
+ # This destroys the uniqueness of the partitions, but we're only
+ # going to return one of them from this function!
+ psize = sum(p)
+
+ k_start = p[0]+1
+ if k_start == 2:
+ # Oh, we should exclude 2...
+ k_start = 3
+
+ # Not a typo: psize+1 would have us checking for a simulacrum
+ # of [psize], which is not possible.
+ k_end = psize
+ for k in range(k_start, k_end):
+ k_rank = partition_rank([k])
+ for q in partitions(psize-k, include_two=False):
+ if (partition_rank(q) == (target_rank - k_rank)):
+ # Sorting slows us down, but returning a value
+ # isn't the slow part, and these lists are tiny
+ # anyway.
+ res = sorted(q + [k])
+ if not res == p:
+ return res
+ return None
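To make the partition representation concrete: a partition ``[n1, ..., nk]`` stands for ``L(n1) + ... + L(nk)``, its rank is the sum of the factors' Lyapunov ranks, and a simulacrum is a *different* partition of the same total dimension with the same rank. A brute-force sketch, again assuming ``f(n) = n*(n-1)/2 + 1`` (consistent with the doctests in this module; the real helpers live in partitions.py):

```python
def f(n):
    # assumed Lyapunov rank of L(n)
    return 0 if n == 0 else n * (n - 1) // 2 + 1

def rank(p):
    # Lyapunov rank of the sum of Lorentz cones encoded by p
    return sum(f(k) for k in p)

def parts(n, smallest=1):
    # all partitions of n as nondecreasing lists, with no part equal
    # to 2 (L(2) is reducible, so 2 never appears as a factor)
    if n == 0:
        yield []
        return
    for k in range(smallest, n + 1):
        if k == 2:
            continue
        for rest in parts(n - k, k):
            yield [k] + rest

def simulacra(p):
    # every partition of sum(p) with the same rank, other than p itself
    p = sorted(p)
    return [q for q in parts(sum(p)) if rank(q) == rank(p) and q != p]

print(simulacra([4, 4]))  # [[1, 1, 1, 5]]: L(1)+L(1)+L(1)+L(5) works
```

This matches the ``4: [1, 1, 1, 5]`` line in the doctest for :func:`compute_Ln_Ln_simulacra`.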
+
+
+def compute_Ln_Ln_simulacra(start : int, end : int, nprocs : int = 1):
+ r"""
+ Compute simulacra of ``L(n) + L(n)`` for all ``n`` between
+ ``start`` and ``end``, possibly in parallel, and then print the
+ result.
+
+ This is a fairly trivial wrapper around
+ :func:`_one_partition_simulacra` and the
+ :class:`multiprocessing.Pool` class.
+
+ Parameters
+ ----------
+
+ start : int
+ The first value of ``n``.
+ end : int
+ The last value of ``n``.
+ nprocs : int, default=1
+ The number of simultaneous processes to launch
+
+ Examples
+ --------
+
+ >>> compute_Ln_Ln_simulacra(0,10,4)
+ 0: None
+ 1: None
+ 2: None
+ 3: None
+ 4: [1, 1, 1, 5]
+ 5: None
+ 6: None
+ 7: None
+ 8: [1, 5, 10]
+ 9: [1, 1, 1, 3, 12]
+ 10: [1, 1, 5, 13]
+
+ """
+ ns = list(range(start, end+1)) # we consume this twice!
+ args = ( 2*[n] for n in ns )
+
+ from multiprocessing import Pool
+ results = Pool(processes=nprocs).map(_one_partition_simulacra, args)
+ for (x,y) in zip(ns, results):
+ print(f"{x}: {y}")
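The wrapper above boils down to the standard Pool.map pattern, sketched here with a stand-in worker (``worker`` is hypothetical; the real one is :func:`_one_partition_simulacra`). One deliberate difference: the sketch uses ``with Pool(...)`` so the worker processes are reaped deterministically.

```python
from multiprocessing import Pool

def worker(p):
    # hypothetical stand-in for _one_partition_simulacra: just sums
    # the partition it is given
    return sum(p)

def run(start, end, nprocs=1):
    ns = list(range(start, end + 1))       # consumed twice, so a list
    args = [2 * [n] for n in ns]           # [n, n] represents L(n) + L(n)
    with Pool(processes=nprocs) as pool:   # context manager reaps workers
        results = pool.map(worker, args)
    for n, r in zip(ns, results):
        print(f"{n}: {r}")

if __name__ == "__main__":
    run(0, 3, 2)
```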
+
+
+
+def _compute_cones():
+ r"""
+ This gets run when you execute ``python compute.py``.
+ """
+ # if executed, we start computing more cones
+ mcd = sql.max_cone_dim()
+ if mcd < 0:
+ # the max dim will be negative if the db is empty
+ n = 0
+ else:
+ n = mcd + 1
+
+ while True:
+ print(f"computing cones of dimension {n}", end="", flush=True)
+ _ = dim_ranks_cones(n, True, sql.LIVE_DATABASE, True)
+ print(" done.")
+ n += 1
+
+def _compute_lorentz_ranks():
+ r"""
+ This gets run when you execute ``python compute.py lorentz``.
+ """
+ # if executed, we start computing more cones
+ mlrd = sql.max_lorentz_rank_dim()
+ if mlrd < 0:
+ # the max dim will be negative if the db is empty
+ n = 0
+ else:
+ n = mlrd + 1
+
+ while True:
+ print(f"computing lorentz ranks in dimension {n}", end="", flush=True)
+ _ = admissible_lorentz_ranks(n, True, sql.LIVE_DATABASE)
+ print(" done.")
+ n += 1
+
+
+if __name__ == "__main__":
+ from sys import argv
+ if len(argv) > 1 and argv[1] == "lorentz":
+ _compute_lorentz_ranks()
+ else:
+ _compute_cones()
--- /dev/null
+r"""
+Symmetric cone classes and functions.
+
+This module provides symmetric cones via the class hierarchy,
+
+ * :class:`SymmetricCone` (the superclass)
+ * :class:`L` (Lorentz)
+ * :class:`HR` (Real symmetric PSD)
+ * :class:`HC` (Complex hermitian PSD)
+ * :class:`HH` (Quaternion hermitian PSD)
+ * :class:`HO` (Octonion hermitian "PSD")
+ * :class:`DirectSum` (direct sums of the others)
+
+Each cone has methods and/or attributes for its dimension, Lyapunov
+rank, signature, et cetera. Much of the real work, however, is
+dedicated to efficient isomorphism testing and storage. How do we
+check if two symmetric cones are isomorphic, and how can we store
+them?
+
+To store the cones, rather than store all of the python class info, we
+prefer to record only what is necessary to reconstruct each cone. For
+the five irreducible families, this basically boils down to two
+numbers. One identifies the type (``L``, ``HR``, ``HC``, ``HH``, or
+``HO``), and then one identifies the size (usually denoted by ``n``).
+Direct sums of these factors are then identified by tuples of
+integers, where each integer in the tuple represents a factor. A
+priori, direct sums of direct sums are possible by putting tuples
+within the tuples. Storing (and otherwise working with) integers is
+very fast, and we can store a lot of cones represented in this manner
+at the expense of having to convert them back to python classes at
+some point. For more details, see the two methods
+:meth:`SymmetricCone.serialize` and :meth:`DirectSum.serialize` for
+irreducible cones and direct sums, respectively.
+
+Now on to isomorphism checking. The direct sum decomposition is unique
+up to order (and ignoring trivial factors), so what we do for
+isomorphism is to put the factors in a list, flatten sums of sums,
+eliminate trivial factors, replace each remaining factor with a
+canonical representative, and then sort them all into a unique
+order. Choosing canonical representatives is done in the class
+constructors. For example, if you try to construct ``L(2)``, you will
+get ``L(1)+L(1)`` instead. That is because we have declared
+``L(1)+L(1)`` to be the canonical two-dimensional symmetric cone: it
+should not be possible to construct another symmetric cone that is
+isomorphic but not equal to it. Likewise, flattening happens when you
+construct a cone such as ``(L(3)+L(4)) + (L(5)+L(6))``. The factors of
+these summands are collected and combined into ``L(3)+L(4)+L(5)+L(6)``.
+
+Finally, we sort the factors to ensure that ``L(1)+L(3)`` is the same
+cone as ``L(3)+L(1)``. How we should sort cones is not obvious, but
+thankfully, we can fall back on the serialization format that we are
+using to store them: to sort a collection of cones, we serialize them
+to tuples of ints, and then sort the tuples lexicographically.
+(Irreducible cones can be thought of as singleton tuples for this
+purpose.)
+
+Most of this happens in the :meth:`DirectSum.__new__` method. But to
+understand how we replace ``L(2)`` with ``L(1)+L(1)``, we have to
+mention one last performance optimization. Namely that we cache all
+instances of the symmetric cone classes to avoid, say, creating a
+million copies of ``L(1)`` in memory. This adds to the general level
+of confusion, but saves quite a bit of memory during large
+computations, and has the nice side effect that "isomorphism" becomes
+just equality of objects. To accomplish it, each subclass of
+:class:`SymmetricCone` keeps an ``_instance_cache`` dictionary that is
+keyed on the size of the cone, ``n``. If you ask for ``L(1)`` a
+hundred times, then for the last 99 of them you'll get the copy
+that is stored in ``L._instance_cache[1]``. This makes declaring
+canonical factors
+relatively easy, because we can just pre-populate the instance caches
+with the canonical choices. There are only a few ``n`` that lead to
+isomorphism, but you will see for example in :class:`HR` that we set::
+
+ _instance_cache = { 0: L(0), 1: L(1), 2: L(3) }
+
+This means that when you ask for ``HR(2)``, you get ``L(3)``. When all
+is said and done, with a great deal of care, we are able to work "up
+to isomorphism," without which we would not directly be able to test
+for simulacra.
+
+Finally, this module provides two "random cone" functions,
+
+ * :func:`random_irreducible_cone`
+ * :func:`random_cone`
+
+that are useful for testing.
+"""
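The instance-caching scheme described above can be modeled in isolation. ``Interned`` below is a hypothetical toy class, not part of this module; it shows only the mechanism: one ``_instance_cache`` per class, consulted in ``__new__``, and pre-populated to declare canonical representatives.

```python
class Interned:
    # one cache per class; each SymmetricCone subclass keeps its own
    _instance_cache: dict = {}

    def __new__(cls, n: int):
        if n not in cls._instance_cache:
            obj = super().__new__(cls)
            obj.n = n
            cls._instance_cache[n] = obj
        return cls._instance_cache[n]

# repeated construction returns the cached object, so "isomorphism"
# reduces to object identity:
assert Interned(1) is Interned(1)

# pre-populating the cache declares a canonical representative, the
# way HR pre-populates its cache with L(0), L(1), and L(3):
canonical = Interned(3)
Interned._instance_cache[2] = canonical  # asking for 2 now yields 3
assert Interned(2) is canonical
```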
+
+# allow forward-references in type signatures
+from __future__ import annotations
+
+from itertools import chain
+from math import sqrt
+
+type SerialSum = tuple[int,...]
+type SerialIrreducible = int
+type SerialCone = SerialIrreducible | SerialSum
+
+# Lyapunov rank and dimension calculations
+class SymmetricCone:
+ r"""
+ A symmetric cone of "size" ``n``.
+
+ Equality means isomorphism in our case, and each cone should be
+ represented by a unique object::
+
+ >>> K = random_cone()
+ >>> J = random_cone()
+ >>> (K == J) == (J == K)
+ True
+ >>> (K is J) == (K == J)
+ True
+ >>> K == DirectSum([K]) and J == DirectSum([J])
+ True
+
+ More isomorphic examples::
+
+ >>> RN(2) == L(2)
+ True
+ >>> RN(2) == DirectSum([L(1)]*2)
+ True
+ >>> RN(1) == DirectSum([L(1)])
+ True
+ >>> RN(1) == L(1)
+ True
+ >>> RN(1) == HR(1)
+ True
+ >>> RN(1) == HC(1)
+ True
+ >>> RN(1) == HH(1)
+ True
+ >>> RN(1) == HO(1)
+ True
+ >>> L(3) == HR(2)
+ True
+ >>> L(4) == HC(2)
+ True
+ >>> L(6) == HH(2)
+ True
+ >>> L(10) == HO(2)
+ True
+ >>> RN(4) == DirectSum([L(2)]*2)
+ True
+
+ And again. This time the isomorphism requires reordering the
+ factors::
+
+ >>> K1 = RN(3)
+ >>> K2 = HR(2)
+ >>> K3 = L(0)
+ >>> K4 = HH(3)
+ >>> K5 = L(1)
+ >>> J1 = L(2)
+ >>> J2 = HH(3)
+ >>> J3 = L(2)
+ >>> J4 = L(3)
+ >>> DirectSum([K1,K2,K3,K4,K5]) == DirectSum([J1,J2,J3,J4])
+ True
+
+ """
+ # Declare attributes for the "size" n, dimension, and Lyapunov
+ # rank in the superclass.
+ n: int
+ dim: int
+ rank: int
+
+ # All subclasses have an instance cache and an id as well.
+ _instance_cache: dict
+ id: int
+
+ @staticmethod
+ def irreducible_classes() -> tuple:
+ return (L,HR,HC,HH,HO)
+
+ @staticmethod
+ def _deserialize_one(s : SerialIrreducible) -> SymmetricCone:
+ r"""
+ Deserialize one integer into an irreducible cone.
+
+ Parameters
+ ----------
+
+ s : int
+ The integer to deserialize.
+
+ Returns
+ -------
+
+ An instance of an irreducible cone (:class:`L`, :class:`HR`,
+ :class:`HC`, :class:`HH`, or :class:`HO`).
+
+ Examples
+ --------
+
+ >>> SymmetricCone._deserialize_one(32)
+ HR(3)
+ >>> SymmetricCone._deserialize_one(29)
+ Traceback (most recent call last):
+ ...
+ ValueError: unable to deserialize 29
+
+ We can deserialize the trivial cone::
+
+ >>> SymmetricCone._deserialize_one(1)
+ L(0)
+
+ The deserialized cone already has its ``_serial`` attribute
+ cached::
+
+ >>> K = L(22091)
+ >>> J = SymmetricCone._deserialize_one(K.serialize())
+ >>> J._serial
+ 220911
+
+ """
+ # Heads up: serializing L(0) produces 01 = 1.
+ # The math for i and n below should produce
+ # i=1 and n=0 for "1".
+ i = s % 10
+ n = (s - i) // 10
+ for c in SymmetricCone.irreducible_classes():
+ if c.id == i:
+ res = c(n)
+ res._serial = s
+ return res
+ raise ValueError(f"unable to deserialize {s}")
+
+ @staticmethod
+ def deserialize(s : SerialCone) -> SymmetricCone:
+ r"""
+ Deserialize either an integer or a tuple of integers into
+ a symmetric cone.
+
+ Parameters
+ ----------
+
+ s : SerialCone
+ Either an int or a tuple of ints to deserialize.
+
+ Returns
+ -------
+
+ The symmetric cone that serializes to ``s``.
+
+ Examples
+ --------
+
+ Typical examples::
+
+ >>> SymmetricCone.deserialize(11)
+ L(1)
+ >>> SymmetricCone.deserialize((31, 33, 34))
+ L(3) + HC(3) + HH(3)
+
+ Invalid input (not from a serialized cone)::
+
+ >>> SymmetricCone.deserialize((11, 100))
+ Traceback (most recent call last):
+ ...
+ ValueError: unable to deserialize 100
+
+ The trivial cone works as expected::
+
+ >>> SymmetricCone.deserialize(L(0).serialize()) == L(0)
+ True
+
+ A random cone works as expected::
+
+ >>> K = random_cone()
+ >>> SymmetricCone.deserialize(K.serialize()) == K
+ True
+
+ """
+ if isinstance(s, tuple):
+ # The DirectSum instance cache key is actually just
+ # the hash of its serialized factor tuple, i.e. what
+ # we were given.
+ h = hash(s)
+ if h not in DirectSum._instance_cache:
+ # Convert to tuple explicitly if we're going to skip
+ # canonicalization.
+ c = DirectSum(
+ tuple ( map(SymmetricCone._deserialize_one, s) ),
+ False
+ )
+ DirectSum._instance_cache[h] = c
+ return DirectSum._instance_cache[h]
+ else:
+ return SymmetricCone._deserialize_one(s)
+
+
+ def __init__(self, n):
+ # cached values
+ self._serial = None
+ self._hash = None
+
+ # no error checking: it would make deserialization too slow
+ self.n = n
+
+ @staticmethod
+ def name() -> str:
+ r"""
+ The name of this cone. This method should be overridden
+ in each subclass, and exists only to make the implementation
+ of :meth:`__repr__` easier.
+ """
+ raise NotImplementedError
+
+ def __repr__(self) -> str:
+ r"""
+ The string representation of this cone.
+ """
+ return f"{self.name()}({self.n})"
+
+ def __hash__(self) -> int:
+ r"""
+ Return an integer hash for this cone. The
+ :meth:`serialize` method already returns a unique integer or
+ tuple of integers corresponding to this cone, so we can simply
+ hash that.
+
+ WARNING: hashes in python can collide! The only guarantee
+ is that equal objects must have equal hashes. If you don't
+ believe this::
+
+ >>> hash(-1) == hash(-2)
+ True
+
+ We're using hashes as the lookup keys in our instance
+ caches. This should generally be safe for nonnegative integers
+ (what we get when we serialize our cones), but I can't promise
+ that two different tuples will never have the same hash, even
+ if their components do not. So there is a small chance that we
+ load the wrong :class:`DirectSum` from the instance cache.
+
+ NB: The ``verify-live-db.py`` script will check a large random
+ sample of cones in the database to ensure that none of their
+ hashes collide.
+ """
+ if self._hash is None:
+ self._hash = hash(self.serialize())
+ return self._hash
+
+ def __eq__(self, other) -> bool:
+ return self.serialize() == other.serialize()
+
+ def __lt__(self, other) -> bool:
+ r"""
+ Implement an ordering for symmetric cones.
+
+ We already have equality via :meth:`serialize`, and we can
+ derive "less than" from that too. Basically we treat
+ irreducible cones as singleton direct sums, serialize
+ everything to tuples of integers, and then use the
+ lexicographic "less than" for tuples.
+
+ Examples
+ --------
+
+ A simple example where "less than" is consistent::
+
+ >>> L(3) < L(3)
+ False
+ >>> L(3) < L(4)
+ True
+ >>> L(4) < L(3)
+ False
+
+ Larger cones of the same type should be greater than smaller
+ cones. One nice aspect of the integer sort is that it handles
+ this correctly, whereas an alphabetical (string) sort of the
+ dimensions would not::
+
+ >>> L(1) < L(10)
+ True
+ >>> L(1) < L(2)
+ True
+ >>> L(10) < L(2)
+ False
+
+ Another pleasing side effect of our :meth:`serialize` method
+ is that it sorts the real, complex, quaternion, and octonion
+ PSD cones of the same size all in that order::
+
+ >>> from random import randint
+ >>> n = randint(2,10)
+ >>> HR(n) < HC(n) < HH(n)
+ True
+ >>> n > 3 or HH(n) < HO(n)
+ True
+
+ This implementation of "less than" leads to a total order::
+
+ >>> K = random_irreducible_cone()
+ >>> J = random_irreducible_cone()
+ >>> (K < J) or (J < K) or (K == J)
+ True
+ >>> not ((K < J) and (J < K))
+ True
+ >>> (K != J) or not (K < J or J < K)
+ True
+ >>> H = random_irreducible_cone()
+ >>> (not (J < K and H < J)) or (H < K)
+ True
+ >>> (not (K < J and J < H)) or (K < H)
+ True
+
+ """
+ s = self.serialize()
+ o = other.serialize()
+ if not isinstance(s, tuple):
+ s = (s,)
+ if not isinstance(o, tuple):
+ o = (o,)
+
+ return s < o
+
+
+ @staticmethod
+ def _flatten_factors(factors : tuple[SymmetricCone, ...]) -> tuple[SymmetricCone, ...]:
+ r"""
+ Flatten the given factors, removing all :class:`DirectSum`
+ wrappers.
+
+ This is analogous to flattening a list like
+ ``[a,[b,[c,d,e]]]`` down to ``[a,b,c,d,e]``.
+
+ The implementation of this method is suitable only for
+ irreducible cones, and must be overridden in the
+ :class:`DirectSum` class. In general, this method should be
+ suitable for use on the result of the class's :meth:`factors`
+ method, which, for irreducible cones, returns a singleton tuple.
+
+ Parameters
+ ----------
+
+ factors : tuple[SymmetricCone, ...]
+ A tuple of symmetric cones, some of which may be direct sums.
+
+ Returns
+ -------
+
+ A tuple of irreducible cones.
+
+ Examples
+ --------
+
+ >>> HC(3)._flatten_factors(HC(3).factors())
+ (HC(3),)
+
+ """
+ return factors
+
+
+ def factors(self) -> tuple[SymmetricCone, ...]:
+ r"""
+ Return the factors of this symmetric cone.
+
+ The result should be unique up to the ordering of the factors.
+ This method is suitable for irreducible cones, which have only
+ a single factor. It should be overridden in :class:`DirectSum`.
+
+ To ensure that hashes are unique, we agree once and for all
+ that the trivial cone has no factors.
+
+ Returns
+ -------
+
+ A tuple of symmetric cones.
+
+ TODO: we should be able to restrict the return type to
+ irreducible cones.
+
+ Examples
+ --------
+
+ >>> HC(3).factors()
+ (HC(3),)
+
+ We give the canonical answer (no factors) for the trivial
+ cone::
+
+ >>> L(0).factors()
+ ()
+ """
+ if self.dim == 0:
+ return ()
+ else:
+ return (self,)
+
+
+ def signature(self) -> tuple[int,int]:
+ r"""
+ The signature of this cone.
+
+ The signature of a cone is simply a pair consisting of its
+ dimension and its Lyapunov rank.
+
+ Returns
+ -------
+
+ A pair (2-tuple) of ints. Its first entry is the dimension of
+ this cone, and its second entry is its Lyapunov rank.
+
+ Examples
+ --------
+
+ >>> L(3).signature()
+ (3, 4)
+
+ """
+ return (self.dim, self.rank)
+
+
+ def serialize(self) -> SerialCone:
+ r"""
+ Serialize this cone to an integer (irreducible cones) or
+ tuple of integers (direct sums).
+
+ No two nonequal cones should serialize to the same value. The
+ default is appropriate only for irreducible cones.
+
+ Examples
+ --------
+
+ The one unusual case is the trivial cone, where ``01`` becomes
+ ``1``, but this should not cause any problems so long as we are
+ expecting it::
+
+ >>> L(0).serialize()
+ 1
+
+ """
+ if self._serial is None:
+ self._serial = 10*self.n + self.id
+ return self._serial
+
+
+ def simulacra(self) -> tuple[SymmetricCone, ...]:
+ r"""
+ Return all simulacra of this cone.
+
+ This relies implicitly on the live database of cones
+ containing the necessary (pre-computed) data -- otherwise the
+ computation would be much too slow. If you ask for simulacra
+ in a dimension not present in the database, a ``ValueError``
+ will be raised.
+
+ Returns
+ -------
+
+ A tuple of symmetric cones, not isomorphic to ``self``, but
+ sharing their dimensions and Lyapunov ranks with it.
+
+ Examples
+ --------
+
+ Examples from the paper::
+
+ >>> try:
+ ... HR(3).simulacra()
+ ... except ValueError:
+ ... # missing data, just print the right answer
+ ... (DirectSum([L(1),L(1),L(4)]),)
+ (L(1) + L(1) + L(4),)
+
+ Examples from an earlier version of the paper where we computed
+ simulacra for multiple copies of ``HC(3)`` explicitly::
+
+ >>> K2 = DirectSum([L(7),L(3), RN(8)])
+ >>> try:
+ ... K2 in DirectSum([HC(3)]*2).simulacra()
+ ... except ValueError:
+ ... # missing data, just print the right answer
+ ... True
+ True
+ >>> K3 = DirectSum([L(8),L(4), RN(15)])
+ >>> try:
+ ... K3 in DirectSum([HC(3)]*3).simulacra()
+ ... except ValueError:
+ ... # missing data, just print the right answer
+ ... True
+ True
+
+ A cone is never its own simulacrum::
+
+ >>> import sql
+ >>> K = random_cone()
+ >>> try:
+ ... K in K.simulacra()
+ ... except ValueError:
+ ... # missing data, just print the right answer
+ ... False
+ False
+
+ """
+ from sql import have_cone_dim, simulacra
+ if not have_cone_dim(self.dim):
+ raise ValueError(f"no cone data for dimension {self.dim}")
+ return simulacra(self)
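The serialization scheme used by :meth:`serialize` and :meth:`_deserialize_one` boils down to a last-decimal-digit tag. A minimal standalone sketch of that encoding (the ``encode``/``decode`` helpers here are illustrative toys, not part of this module):

```python
# Sketch of the integer encoding behind serialize() and
# _deserialize_one(): the last decimal digit is the class id
# (L=1, HR=2, HC=3, HH=4, HO=5); the remaining digits are the size n.
IRREDUCIBLE_NAMES = {1: "L", 2: "HR", 3: "HC", 4: "HH", 5: "HO"}

def encode(n: int, class_id: int) -> int:
    # serialize an irreducible cone of size n with the given class id
    return 10 * n + class_id

def decode(s: int) -> tuple[str, int]:
    # recover (class name, size); raises on an unknown id digit
    n, i = divmod(s, 10)
    if i not in IRREDUCIBLE_NAMES:
        raise ValueError(f"unable to deserialize {s}")
    return IRREDUCIBLE_NAMES[i], n

print(decode(32))            # ('HR', 3)
print(decode(encode(0, 1)))  # ('L', 0): the trivial cone serializes to 1
```

Note how the trivial cone's serialization ``01`` collapses to ``1``, exactly the edge case the module's comments warn about.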
+
+
+def RN(n : int) -> SymmetricCone:
+ r"""
+ A convenient wrapper around an ``n``-fold direct sum of
+ one-dimensional Lorentz cones.
+
+ Parameters
+ ----------
+
+ n : int
+ How many copies of ``L(1)`` you want.
+
+ Returns
+ -------
+
+ The direct sum of ``n`` copies of ``L(1)``.
+
+ Examples
+ --------
+
+ >>> RN(3)
+ L(1) + L(1) + L(1)
+ >>> RN(0)
+ L(0)
+
+ """
+ return DirectSum((L(1),)*n)
+
+
+class L(SymmetricCone):
+ r"""
+ The Lorentz cone in dimension ``n``.
+
+ Examples::
+
+ >>> L(3)
+ L(3)
+
+ """
+ id = 1
+
+ @staticmethod
+ def name() -> str:
+ return "L"
+
+ @staticmethod
+ def in_dim(d : int) -> L:
+ r"""
+ Return the Lorentz cone of dimension ``d``.
+
+ This always works, because there's a Lorentz cone in every dimension.
+
+ Examples
+ --------
+
+ >>> L.in_dim(21)
+ L(21)
+
+ """
+ return L(d)
+
+ # Instance cache. After the class definition, we instantiate
+ # L(0) (caching it in the process) and overwrite its Lyapunov
+ # rank, so we can avoid special cases in the constructor.
+ _instance_cache = {}
+ def __new__(cls, n):
+ r"""
+ Examples::
+
+ >>> L(3).dim
+ 3
+ >>> L(3).rank
+ 4
+
+ >>> L(3) == L(3)
+ True
+ >>> L(3) is L(3)
+ True
+
+ And the edge case that the usual formula fails to account for::
+
+ >>> L(0).rank
+ 0
+
+ """
+ # there's a special case for n=2 inserted after the DirectSum
+ # class is defined
+ if not n in cls._instance_cache:
+ cls._instance_cache[n] = super().__new__(cls)
+ cls._instance_cache[n].dim = n
+ cls._instance_cache[n].rank = (n**2 - n + 2)//2
+ return cls._instance_cache[n]
+
+# We have to fix this case because the general formula we use in
+# __new__ fails at n=0.
+L(0).rank = 0
+
+
+class HR(SymmetricCone):
+ r"""
+ The real symmetric PSD cone of order ``n``.
+
+ Examples
+ --------
+
+ >>> HR(4)
+ HR(4)
+ >>> HR(4).signature()
+ (10, 16)
+
+ When ``n`` is small, you will get the Lorentz cone isomorphic to
+ ``HR(n)`` instead::
+
+ >>> HR(0)
+ L(0)
+ >>> HR(1)
+ L(1)
+ >>> HR(2)
+ L(3)
+
+ """
+ id = 2
+
+ @staticmethod
+ def name() -> str:
+ return "HR"
+
+ @staticmethod
+ def in_dim(d : int) -> L | HR | None:
+ r"""
+ Return the real symmetric PSD cone of dimension ``d``, or
+ ``None`` if there isn't one.
+
+ Examples
+ --------
+
+ >>> HR.in_dim(0)
+ L(0)
+ >>> HR.in_dim(1)
+ L(1)
+ >>> HR.in_dim(2)
+ >>> HR.in_dim(3)
+ L(3)
+ >>> HR.in_dim(4)
+ >>> HR.in_dim(5)
+ >>> HR.in_dim(6)
+ HR(3)
+ >>> HR.in_dim(7)
+ >>> HR.in_dim(8)
+ >>> HR.in_dim(9)
+ >>> HR.in_dim(10)
+ HR(4)
+ """
+ n = (sqrt(8*d + 1) - 1)/2
+ if n.is_integer():
+ return HR(int(n))
+ else:
+ return None
+
+ # Instance cache. All zero- or one-dimensional cones are L(0) or
+ # L(1). The 2x2 cone is canonically isomorphic to the 3d Lorentz
+ # cone.
+ _instance_cache = { 0: L(0), 1: L(1), 2: L(3) }
+ def __new__(cls, n):
+ if not n in cls._instance_cache:
+ cls._instance_cache[n] = super().__new__(cls)
+ cls._instance_cache[n].dim = (n**2 + n) // 2
+ cls._instance_cache[n].rank = n**2
+ return cls._instance_cache[n]
+
+
+class HC(SymmetricCone):
+ r"""
+ The complex hermitian PSD cone of order ``n``.
+
+ >>> HC(4)
+ HC(4)
+ >>> HC(4).signature()
+ (16, 31)
+
+ When ``n`` is small, you will get the Lorentz cone isomorphic to
+ ``HC(n)`` instead::
+
+ >>> HC(0)
+ L(0)
+ >>> HC(1)
+ L(1)
+ >>> HC(2)
+ L(4)
+
+ Examples
+ --------
+
+ The direct sum of ``m >= 2`` copies of the 3-by-3 cone has
+ simulacra. This used to be a Lemma in the paper, but it has
+ been superseded.
+
+ >>> K2 = DirectSum([
+ ... L(7),
+ ... L(3),
+ ... RN(8)
+ ... ])
+ >>> K2.signature()
+ (18, 34)
+ >>> DirectSum(2*[HC(3)]).signature()
+ (18, 34)
+
+ >>> K3 = DirectSum([
+ ... L(8),
+ ... L(4),
+ ... RN(15)
+ ... ])
+ >>> K3.signature()
+ (27, 51)
+ >>> DirectSum(3*[HC(3)]).signature()
+ (27, 51)
+
+ Do a random check so make sure this works for many copies::
+
+ >>> from random import randint
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> m = randint(2, 12)
+ >>> K = DirectSum(m*[HC(3)])
+ >>> skip = K.dim > max_lorentz_rank_dim()
+ >>> skip or K.rank in admissible_lorentz_ranks(K.dim)
+ True
+
+ """
+ id = 3
+
+ @staticmethod
+ def name() -> str:
+ return "HC"
+
+ @staticmethod
+ def in_dim(d : int) -> L | HC | None:
+ r"""
+ Return the complex hermitian PSD cone of dimension ``d``,
+ or ``None`` if there isn't one.
+
+ Examples
+ --------
+
+ >>> HC.in_dim(0)
+ L(0)
+ >>> HC.in_dim(1)
+ L(1)
+ >>> HC.in_dim(2)
+ >>> HC.in_dim(3)
+ >>> HC.in_dim(4)
+ L(4)
+ >>> HC.in_dim(5)
+ >>> HC.in_dim(6)
+ >>> HC.in_dim(7)
+ >>> HC.in_dim(8)
+ >>> HC.in_dim(9)
+ HC(3)
+ >>> HC.in_dim(10)
+ >>> HC.in_dim(11)
+ >>> HC.in_dim(12)
+ >>> HC.in_dim(13)
+ >>> HC.in_dim(14)
+ >>> HC.in_dim(15)
+ >>> HC.in_dim(16)
+ HC(4)
+
+ """
+ n = sqrt(d)
+ if n.is_integer():
+ return HC(int(n))
+ else:
+ return None
+
+ # Instance cache. All zero- or one-dimensional cones are L(0) or
+ # L(1). The 2x2 cone is canonically isomorphic to the 4d Lorentz
+ # cone.
+ _instance_cache = { 0: L(0), 1: L(1), 2: L(4) }
+ def __new__(cls, n):
+ if not n in cls._instance_cache:
+ cls._instance_cache[n] = super().__new__(cls)
+ cls._instance_cache[n].dim = n**2
+ cls._instance_cache[n].rank = 2*n**2 - 1
+ return cls._instance_cache[n]
+
+
+class HH(SymmetricCone):
+ r"""
+ The quaternion hermitian PSD cone of order ``n``.
+
+ Examples
+ --------
+
+ >>> HH(4)
+ HH(4)
+ >>> HH(4).signature()
+ (28, 64)
+
+ When ``n`` is small, you will get the Lorentz cone isomorphic to
+ ``HH(n)`` instead::
+
+ >>> HH(0)
+ L(0)
+ >>> HH(1)
+ L(1)
+ >>> HH(2)
+ L(6)
+
+ """
+ id = 4
+
+ @staticmethod
+ def name() -> str:
+ return "HH"
+
+ @staticmethod
+ def in_dim(d : int) -> L | HH | None:
+ r"""
+ Return the quaternion hermitian PSD cone of dimension ``d``, or
+ ``None`` if there isn't one.
+
+ Examples
+ --------
+
+ >>> HH.in_dim(0)
+ L(0)
+ >>> HH.in_dim(1)
+ L(1)
+ >>> HH.in_dim(2)
+ >>> HH.in_dim(3)
+ >>> HH.in_dim(4)
+ >>> HH.in_dim(5)
+ >>> HH.in_dim(6)
+ L(6)
+ >>> HH.in_dim(7)
+ >>> HH.in_dim(8)
+ >>> HH.in_dim(9)
+ >>> HH.in_dim(10)
+ >>> HH.in_dim(11)
+ >>> HH.in_dim(12)
+ >>> HH.in_dim(13)
+ >>> HH.in_dim(14)
+ >>> HH.in_dim(15)
+ HH(3)
+
+ """
+ if d == 0:
+ # This is the one special case where the other solution to
+ # the quadratic formula is valid.
+ return HH(0)
+
+ n = (sqrt(8*d + 1) + 1)/4
+ if n.is_integer():
+ return HH(int(n))
+ else:
+ return None
+
+ # Instance cache. All zero- or one-dimensional cones are L(0) or
+ # L(1). The 2x2 cone is canonically isomorphic to the 6d Lorentz
+ # cone.
+ _instance_cache = { 0: L(0), 1: L(1), 2: L(6) }
+ def __new__(cls, n):
+ if not n in cls._instance_cache:
+ cls._instance_cache[n] = super().__new__(cls)
+ cls._instance_cache[n].dim = 2*n**2 - n
+ cls._instance_cache[n].rank = 4*n**2
+ return cls._instance_cache[n]
+
+
+class HO(SymmetricCone):
+ r"""
+ The cone of squares in the Albert algebra of order ``n``.
+
+ Examples
+ --------
+
+ When ``n`` is small, you will get the Lorentz cone isomorphic to
+ ``HO(n)`` instead::
+
+ >>> HO(0)
+ L(0)
+ >>> HO(1)
+ L(1)
+ >>> HO(2)
+ L(10)
+
+ These cones are only symmetric for ``n <= 3``::
+
+ >>> HO(7)
+ Traceback (most recent call last):
+ ...
+ ValueError: invalid size n=7
+
+ So there's only one interesting case remaining::
+
+ >>> HO(3)
+ HO(3)
+ >>> HO(3).signature()
+ (27, 79)
+
+ """
+ id = 5
+
+ @staticmethod
+ def name() -> str:
+ return "HO"
+
+ @staticmethod
+ def in_dim(d : int) -> L | HO | None:
+ r"""
+ Return the octonion hermitian "PSD" cone of dimension ``d``,
+ or ``None`` if there isn't one.
+
+ Examples
+ --------
+
+ >>> HO.in_dim(0)
+ L(0)
+ >>> HO.in_dim(1)
+ L(1)
+ >>> HO.in_dim(2)
+ >>> HO.in_dim(4)
+ >>> HO.in_dim(3)
+ >>> HO.in_dim(5)
+ >>> HO.in_dim(6)
+ >>> HO.in_dim(7)
+ >>> HO.in_dim(8)
+ >>> HO.in_dim(9)
+ >>> HO.in_dim(10)
+ L(10)
+ >>> HO.in_dim(26)
+ >>> HO.in_dim(27)
+ HO(3)
+
+ The would-be Octonion cone in dimension 52 is not symmetric
+ (it corresponds to n=4)::
+
+ >>> HO.in_dim(52) is None
+ True
+
+ """
+ # why bother with the quadratic formula when we know all of them?
+ if d == 0:
+ return L(0)
+ elif d == 1:
+ return L(1)
+ elif d == 10:
+ return L(10)
+ elif d == 27:
+ return HO(3)
+ else:
+ return None
+
+ # Instance cache. All zero- or one-dimensional cones are L(0) or
+ # L(1). The 2x2 cone is canonically isomorphic to the 10d Lorentz
+ # cone.
+ _instance_cache = { 0: L(0), 1: L(1), 2: L(10) }
+ def __new__(cls, n):
+ r"""
+ Examples::
+
+ >>> HO(0).dim
+ 0
+ >>> HO(1).dim
+ 1
+ >>> HO(2).dim
+ 10
+ >>> HO(3).dim
+ 27
+ """
+ if not n in cls._instance_cache:
+ if n > 3:
+ raise ValueError(f"invalid size n={n}")
+ # Only n == 3 reaches the assignments below: n <= 2 is
+ # pre-cached as a Lorentz cone, and n > 3 was just rejected,
+ # so hard-coding dim=27 and rank=79 is safe.
+ cls._instance_cache[n] = super().__new__(cls)
+ cls._instance_cache[n].dim = 27
+ cls._instance_cache[n].rank = 79
+ return cls._instance_cache[n]
+
+
+class DirectSum(SymmetricCone):
+ r"""
+ A direct sum of factors, provided as an iterable (list,
+ tuple, set, generator, and so on).
+
+ Examples
+ --------
+
+ >>> K = DirectSum([])
+ >>> K.dim
+ 0
+ >>> K.rank
+ 0
+
+ """
+ _factors: tuple[SymmetricCone, ...]
+
+ # instance cache
+ _instance_cache = {}
+ def __new__(cls, factors, canonicalize : bool = True):
+ r"""
+ Ensure that factors are flattened, and that trivial cones
+ are removed::
+
+ >>> DirectSum([
+ ... L(2),
+ ... DirectSum([ L(0), HR(3) ]),
+ ... DirectSum([ HC(3),
+ ... DirectSum([HO(3), L(1)])
+ ... ])
+ ... ])
+ L(1) + L(1) + L(1) + HR(3) + HC(3) + HO(3)
+
+ >>> DirectSum([HC(3),L(0)]) == HC(3)
+ True
+
+ """
+ # We have to remove trivial factors before we do anything
+ # else, otherwise the length of the factors list might be
+ # wrong. For example we need [HC(3),L(0)] to return HC(3),
+ # which only happens if there is one factor.
+ factors = [f for f in factors if not f.dim == 0]
+ lf = len(factors)
+ if lf == 0:
+ return L(0)
+ elif lf == 1:
+ return factors[0]
+
+ # len(factors) >= 2 guaranteed
+ #
+ # there's no "n" for a direct sum, but the hash of our
+ # serialization is eventually what the hash of this cone
+ # will be, and a hash is exactly what we want for the key
+ if canonicalize:
+ # we'll get the wrong hash if we don't flatten first!
+ factors = tuple(sorted(
+ f for f in cls._flatten_factors(factors)
+ ))
+
+ _serial = tuple( f.serialize() for f in factors )
+ _hash = hash(_serial)
+ if not _hash in cls._instance_cache:
+ K = super().__new__(cls)
+ # just computed these, might as well save them
+ K._factors = factors
+ K._serial = _serial
+ K._hash = _hash
+ K.dim = 0
+ K.rank = 0
+ for f in factors:
+ K.dim += f.dim
+ K.rank += f.rank
+ cls._instance_cache[_hash] = K
+ return cls._instance_cache[_hash]
+
+ def __init__(self, factors, canonicalize : bool = True):
+ pass
+
+ def __repr__(self):
+ return " + ".join(repr(f) for f in self._factors)
+
+ @staticmethod
+ def _flatten_factors(factors : tuple[SymmetricCone, ...]) -> tuple[SymmetricCone, ...]:
+ r"""
+ Recursively flatten the factors of the given factors.
+
+ The base case where we do nothing for an irreducible factor
+ is implemented as :meth:`SymmetricCone._flatten_factors`.
+
+ Parameters
+ ----------
+
+ factors : tuple[SymmetricCone, ...]
+ A tuple of (not necessarily irreducible) symmetric cones.
+
+ Returns
+ -------
+
+ A new tuple of factors, but this time with each factor
+ irreducible.
+
+ TODO: we should be able to tighten the return type!
+
+ Examples
+ --------
+
+ >>> K = DirectSum([DirectSum([L(3)]*2), HC(3)])
+ >>> K._flatten_factors(K.factors())
+ (L(3), L(3), HC(3))
+
+ """
+ return sum( (f._flatten_factors(f.factors()) for f in factors),
+ () )
+
+
+ def factors(self) -> tuple[SymmetricCone, ...]:
+ return self._factors
+
+
+ def serialize(self) -> SerialSum:
+ r"""
+ Serialize this cone to a tuple of integers.
+
+ We override the superclass default to return the tuple we get
+ from serializing our factors.
+ """
+ if self._serial is None:
+ self._serial = tuple( f.serialize() for f in self._factors )
+ return self._serial
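The nested-list analogy from the ``_flatten_factors`` docstring can be sketched directly on plain lists and tuples (a toy ``flatten``, independent of the cone classes):

```python
def flatten(factors) -> tuple:
    # collapse arbitrary nesting, analogous to how
    # DirectSum._flatten_factors reduces [a, [b, [c, d, e]]]
    # down to (a, b, c, d, e)
    out = []
    for f in factors:
        if isinstance(f, (list, tuple)):
            out.extend(flatten(f))
        else:
            out.append(f)
    return tuple(out)

print(flatten(["a", ["b", ["c", "d", "e"]]]))
```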
+
+# The 2d Lorentz cone is canonically isomorphic
+# to a pair of 1d Lorentz cones.
+L._instance_cache[2] = DirectSum( (L(1),L(1)) )
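The per-class instance cache used throughout (including the seeding trick just above, where ``L._instance_cache[2]`` is assigned a direct sum) is the classic flyweight pattern. A toy sketch with a hypothetical ``Interned`` class:

```python
class Interned:
    # one shared instance per key: __new__ consults the cache first,
    # so equality of keys collapses to identity of objects
    _instance_cache: dict = {}

    def __new__(cls, n: int):
        if n not in cls._instance_cache:
            obj = super().__new__(cls)
            obj.n = n
            cls._instance_cache[n] = obj
        return cls._instance_cache[n]

assert Interned(7) is Interned(7)

# seeding the cache is how canonical factors are declared: this is
# the same move that makes HR(2) *be* L(3) in the real module
Interned._instance_cache[2] = Interned(3)
assert Interned(2) is Interned(3)
```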
+
+def random_irreducible_cone(n_min : int = 1, n_max : int = 10) -> SymmetricCone:
+ r"""
+ Generate a random irreducible symmetric cone; that is, a
+ :class:`SymmetricCone` that is not a :class:`DirectSum`.
+
+ Parameters
+ ----------
+
+ n_min : int, default=1
+ The smallest possible "size" for the cone.
+
+ n_max : int, default=10
+ The largest possible "size" for the cone.
+
+ Returns
+ -------
+
+ An irreducible symmetric cone.
+
+ TODO: we should be able to tighten the return type!
+
+ Examples
+ --------
+
+ >>> isinstance(random_irreducible_cone(), DirectSum)
+ False
+
+ >>> K = random_irreducible_cone(10,10)
+ >>> K.dim >= 10
+ True
+
+ """
+ from random import choice, randint
+
+ c = choice(SymmetricCone.irreducible_classes())
+ n = randint(n_min, n_max)
+ if c == HO:
+ # otherwise not symmetric
+ n = min(3,n)
+ elif c == L and n == 2:
+ # otherwise not irreducible... return our friend?
+ c = HC
+ n = 3
+ return c(n)
+
+
+def random_cone() -> SymmetricCone:
+ r"""
+ Produce a somewhat-random cone with ten or fewer factors.
+
+ Returns
+ -------
+
+ A symmetric cone.
+
+ Examples
+ --------
+
+ >>> K = random_cone()
+ >>> K.dim >= 0
+ True
+ >>> K.rank >= K.dim
+ True
+ >>> isinstance(K, SymmetricCone)
+ True
+
+ """
+ from random import choice, randint
+
+ # Produce at most 10 factors...
+ num_factors = randint(1, 10)
+
+ factors: list[SymmetricCone]
+ factors = []
+
+ # All of the others share signatures with Lorentz cones, so we
+ # bump up the probability that a 3x3 complex PSD factor will get
+ # selected here.
+ if randint(1,4) == 4:
+ factors.append(HC(3))
+ num_factors -= 1
+
+ classes = [RN, L, HR, HC, HH, HO]
+
+ while (len(factors) < num_factors):
+ # ... with each having "size" at most 10.
+ n = randint(1, 10)
+ c = choice(classes)
+
+ if c == HO:
+ # This isn't symmetric for n > 3.
+ n = min(3,n)
+
+ factors.append(c(n))
+
+ return DirectSum(factors)
+
--- /dev/null
+r"""
+Functions for working with partitions.
+
+In a few places in the paper, it suffices to consider only sums of
+Lorentz cones, or maybe sums of Lorentz cones with exactly one
+non-Lorentz factor. Sums of Lorentz cones are particularly nice
+because they can be represented by partitions of the dimension you're
+working in. For example, in dimension four, you can have ``L(4)``, or
+``L(1)+L(3)``, or ``L(2)+L(2)``, or... you get the idea. Each possibility
+is associated with a partition of ``4``, and vice-versa.
+
+Eliminating isomorphic cones is also a bit easier when they are
+represented by partitions. To eliminate any ambiguity arising from the
+order of the factors, all one must do is sort the elements of the
+corresponding partition; in fact, the :func:`partitions` function of
+Kelleher and O'Sullivan does this already. There is only one more way
+that ambiguity can arise, and that is from the equivalence of
+``L(1)+L(1) == L(2)``. However this can be avoided quite easily by
+ignoring any partitions that contain a ``2`` (which is fast, because
+they are sorted).
+
+For these reasons, we use partitions to test some of our
+results. Here's a quick summary of this module's functions:
+
+ * :func:`f` is the function from the paper that takes ``n`` and
+ returns the Lyapunov rank of ``L(n)``.
+
+ * :func:`partitions` is a slightly-modified version of the fast
+ integer partitioning function of Kelleher and O'Sullivan.
+
+ * :func:`partition_rank` computes the Lyapunov rank of the cone
+ associated with a given partition.
+
+ * :func:`partition_simulacra` computes all simulacra (represented as
+ partitions) of the given cone (represented as a partition).
+
+"""
+
+from typing import Generator
+
+
+def f(n : int) -> int:
+ r"""
+ Compute the Lyapunov rank of the Lorentz cone in ``n``
+ dimensions. This function was imaginatively called ``f`` in
+ Proposition 9 and Theorem 4.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension of the Lorentz cone whose Lyapunov rank you
+ want. Should be nonnegative unless you want a nonsense
+ answer.
+
+ Returns
+ -------
+
+ int
+ The Lyapunov rank of the Lorentz cone in ``n`` dimensions
+
+ Examples
+ --------
+
+ Small ``n`` are easy to check by hand::
+
+ >>> f(0)
+ 0
+ >>> f(1)
+ 1
+ >>> f(2)
+ 2
+ >>> f(3)
+ 4
+
+ """
+ if n == 0:
+ return 0
+ return (n**2 - n + 2) // 2
+
+
+def _partition_contains_two(p : list[int]) -> bool:
+ r"""
+ Return ``True`` if a *sorted* partition (arising from
+ the :func:`partitions` function) contains a ``2``.
+
+ Parameters
+ ----------
+
+ p : list[int]
+ The partition to check. Must be sorted least-to-greatest.
+
+ Returns
+ -------
+
+ True if the given partition contains ``2``, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ >>> _partition_contains_two([1,1,1,1,3])
+ False
+
+ >>> _partition_contains_two([1,1,1,1,2,3])
+ True
+
+ If the partition isn't sorted, all bets are off::
+
+ >>> _partition_contains_two([1,1,5,1,2])
+ False
+
+ """
+ for z in p:
+ if z == 2:
+ return True
+ if z >= 3:
+ # Early return assuming the list is sorted
+ return False
+ return False
+
+
+def partitions(n : int, entry_max : int | None = None, include_two : bool = True) -> Generator[list[int], None, None]:
+ r"""
+ Return all partitions of the integer ``n`` in ascending order.
+
+ This is Jerome Kelleher's ``accel_asc`` function from
+ https://jeromekelleher.net/category/combinatorics.html, modified
+ in two ways:
+
+ 1. It takes an ``entry_max`` argument which limits the maximum size of
+ any entry in a partition. Since the partitions are sorted, this is
+ very easy to check before returning one of them.
+
+ 2. We can exclude partitions containing twos. One easy way to
+ decrease the number of partitions under consideration is to
+ normalize ``[2]`` to ``[1,1]``. Since a partition containing
+ ``[1,1]`` in place of that ``[2]`` will be returned *anyway*,
+ the easy way to do this is to drop all partitions containing
+ a ``2``. We'll waste a tiny bit of space this way, but checking
+ for ``2`` is a lot simpler than checking for ``1,1``.
+
+ Parameters
+ ----------
+
+ n : int
+ The integer to partition.
+
+ entry_max : int | None, default=None
+ An inclusive upper limit on the size of a partition's
+ entries. If any entry in a partition exceeds this limit, the
+ whole partition is omitted. Defaults to ``None`` (no limit).
+
+ include_two : bool
+ Whether or not to return partitions containing twos. (These are
+ all equivalent to partitions containing ``1,1`` for the purposes
+ of Lyapunov rank).
+
+ Returns
+ -------
+
+ Generator[list[int], None, None]
+ A generator of partitions of ``n``. Each "partition" is
+ itself a list of integers that sums to ``n``.
+
+ Examples
+ --------
+
+ Each "partition" should actually partition ``n``::
+
+ >>> from random import randint
+ >>> n = randint(1,20)
+ >>> m = randint(1,20)
+ >>> all( sum(p) == n for p in partitions(n,m) )
+ True
+
+ If ``entry_max`` is larger than the integer we're partitioning, it
+ should have no effect::
+
+ >>> from random import randint
+ >>> n = randint(1,15)
+ >>> tuple(partitions(n)) == tuple(partitions(n,25))
+ True
+
+ Check that the ``entry_max`` is respected::
+
+ >>> from random import randint
+ >>> n = randint(1,15)
+ >>> entry_max = randint(0,n)
+ >>> all( z <= entry_max
+ ... for p in partitions(n, entry_max)
+ ... for z in p )
+ True
+
+ If ``entry_max`` is one less than ``n``, that should only
+ eliminate one partition (namely, ``[n]``)::
+
+ >>> from random import randint
+ >>> n = randint(1,20)
+ >>> actual = len(tuple(partitions(n, n-1)))
+ >>> expected = len(tuple(partitions(n))) - 1
+ >>> actual == expected
+ True
+
+ Similarly, there's only one partition with all entries less than
+ or equal to (i.e. equal to) one::
+
+ >>> from random import randint
+ >>> n = randint(1,20)
+ >>> ps = tuple(partitions(n, 1))
+ >>> len(ps)
+ 1
+ >>> len(ps[0]) == n
+ True
+
+ We can exclude ``2`` from the results entirely as a cheap form of
+ normalization::
+
+ >>> list(partitions(2, include_two=False))
+ [[1, 1]]
+
+ This is a significant reduction in the number of partitions::
+
+ >>> len(list(partitions(20)))
+ 627
+ >>> len(list(partitions(20, include_two=False)))
+ 242
+
+ """
+ a = [0 for i in range(n + 1)]
+ k = 1
+ y = n - 1
+ while k != 0:
+ x = a[k - 1] + 1
+ k -= 1
+ while 2 * x <= y:
+ a[k] = x
+ y -= x
+ k += 1
+ l = k + 1
+ while x <= y:
+ a[k] = x
+ a[l] = y
+ if (entry_max is None) or a[k+1] <= entry_max:
+ # "a" is sorted with the largest entries at the end
+ if include_two or not _partition_contains_two(a[:k + 2]):
+ yield a[:k + 2]
+ x += 1
+ y -= 1
+ a[k] = x + y
+ y = x + y - 1
+ if (entry_max is None) or a[k] <= entry_max:
+ # "a" is sorted with the largest entries at the end
+ if include_two or not _partition_contains_two(a[:k + 1]):
+ yield a[:k + 1]
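The filters above can be sanity-checked without touching ``accel_asc``: a naive recursive generator (``brute_partitions`` is a hypothetical helper, far slower than Kelleher's algorithm) enumerates the same ascending partitions:

```python
def brute_partitions(n, entry_max=None, smallest=1):
    # recursively build ascending partitions: each part is at least as
    # large as the previous one and no larger than entry_max
    if n == 0:
        yield []
        return
    top = n if entry_max is None else min(n, entry_max)
    for k in range(smallest, top + 1):
        for rest in brute_partitions(n - k, entry_max, k):
            yield [k] + rest

# p(20) = 627, matching the doctest count for partitions(20)
print(len(list(brute_partitions(20))))  # 627
```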
+
+
+def _direct_lorentz_ranks(n : int) -> tuple:
+ r"""
+ Generate admissible Lyapunov ranks for all sums of Lorentz
+ cones whose dimensions add up to ``n``.
+
+ This is a direct (and much slower) computation using a different
+ strategy than :func:`admissible_lorentz_ranks`.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension to compute admissible Lyapunov ranks for.
+
+ Returns
+ -------
+
+ tuple[int, ...]
+ A tuple of Lyapunov ranks (deduplicated, in no particular order).
+
+ Examples
+ --------
+
+ In dimensions two and fewer, the Lorentz cone is the nonnegative
+ orthant::
+
+ >>> _direct_lorentz_ranks(0)
+ (0,)
+ >>> _direct_lorentz_ranks(1)
+ (1,)
+ >>> _direct_lorentz_ranks(2)
+ (2,)
+
+ Things only get non-trivial in dimension three::
+
+ >>> _direct_lorentz_ranks(3)
+ (3, 4)
+
+ Some comparisons with the recursive algorithm. The
+ magic numbers below are not important, they were chosen "randomly"
+ but small enough that this doesn't take forever::
+
+ >>> import compute
+ >>> all(
+ ... _direct_lorentz_ranks(k)
+ ... ==
+ ... compute.admissible_lorentz_ranks(k)
+ ... for k in [0,1,7,12,15,19,23]
+ ... )
+ True
+
+ """
+ # go from generator -> set -> tuple to deduplicate them
+ return tuple(set( sum(f(p_k) for p_k in p)
+ for p in partitions(n, include_two=False) ))
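As a sanity check on the strategy above, here is a hedged, self-contained sketch for ``n = 3``. The partitions of 3 with no part equal to 2 are hardcoded, and the helper ``f`` is assumed to be the Lyapunov rank of the Lorentz cone, ``n*(n-1)//2 + 1`` for ``n >= 1`` (consistent with ``L(27).rank == 352`` in the tests elsewhere in this suite):

```python
# Hedged, self-contained sketch of the brute-force strategy above.
# ASSUMPTION: f(n) is the Lyapunov rank of the n-dimensional Lorentz
# cone, n*(n-1)//2 + 1 for n >= 1 (and 0 for n = 0).
def f(n):
    return 0 if n == 0 else n * (n - 1) // 2 + 1

# The partitions of 3 with no part equal to 2, hardcoded for brevity.
parts_of_3 = [[1, 1, 1], [3]]

# Sum f over each partition's parts, then deduplicate with a set.
ranks = tuple(sorted(set(sum(f(p_k) for p_k in p) for p in parts_of_3)))
# ranks == (3, 4), matching the _direct_lorentz_ranks(3) doctest.
```

The sketch sorts the result for readability; the real function returns the deduplicated tuple in whatever order the set iterates.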
+
+
+def partition_rank(p):
+ r"""
+ Return the Lyapunov rank of a sum of Lorentz factors whose
+ dimensions are given by an integer partition.
+
+ A direct sum of Lorentz cones is determined (almost) uniquely by
+ the dimensions of its factors. The "almost" is because ``L(2)``
+ and ``L(1) + L(1)`` are equal, but ``[1,1]`` and ``[2]`` are not.
+ In any case, if we are given an integer partition that is intended
+ to identify a direct sum of Lorentz cones (for example, computed by
+ the :func:`partitions` function), then this function
+ computes the Lyapunov rank of that direct sum.
+
+ Parameters
+ ----------
+
+ p : [int]
+ A partition of some integer, represented as a list
+ of integers (whose sum is the one being partitioned).
+
+ Returns
+ -------
+
+ An integer: the Lyapunov rank of the direct sum of Lorentz cones
+ where the superscripts (i.e. the dimensions of the factors) are
+ given by this partition.
+
+ Examples
+ --------
+
+ >>> partition_rank([])
+ 0
+ >>> partition_rank([0])
+ 0
+ >>> partition_rank([1,2,3])
+ 7
+
+ """
+ return sum( map(f,p) )
+
+
+def partition_simulacra(p : list[int]) -> Generator[list[int], None, None]:
+ r"""
+ Find all partitions of the same integer as the given
+ partition that have the same :func:`partitions.partition_rank` but
+ are not equivalent.
+
+ WARNING: the input partition should be sorted least to greatest,
+ and should not contain ``2``. In other words, it should match the
+ format returned by :func:`partitions`.
+
+ We infer the integer from the given partition, and then compute
+ the other partitions of it one-at-a-time. We skip partitions all
+ of whose entries are less than the smallest entry in the given
+ partition, since by Proposition 3, the Lyapunov rank associated
+ with any such partition will be too small. We also skip partitions
+ whose largest entry is equal to the smallest entry in the target
+ partition, since the best we could hope for in that case is that
+ all entries of both partitions are equal, and in that case the
+ partitions are equal -- we want to toss those out regardless,
+ because they are equivalent.
+
+ After eliminating as many partitions as possible, we yield each
+ remaining partition whose rank matches that of the given one.
+
+ Parameters
+ ----------
+
+ p : list[int]
+ The partition whose simulacra you want. It should be SORTED,
+ and should NOT CONTAIN ``2``.
+
+ Returns
+ -------
+
+ Yields partitions of ``sum(p)`` that have the same
+ :func:`partitions.partition_rank` as ``p``, but are not equivalent
+ to it.
+
+ Examples
+ --------
+
+ The nonnegative orthant will never have simulacra::
+
+ >>> list(partition_simulacra([0]))
+ []
+ >>> list(partition_simulacra([1]))
+ []
+ >>> list(partition_simulacra([1,1,1,1,1]))
+ []
+
+ All other partitions of ``4`` have ranks that are too small (you
+ can just try them all in your head)::
+
+ >>> list(partition_simulacra([4]))
+ []
+
+ The two simulacra from Example 1::
+
+ >>> next(partition_simulacra([3,3,3,4]))
+ [1, 1, 1, 1, 1, 1, 1, 1, 5]
+ >>> sims = partition_simulacra([1, 1, 1, 1, 1, 1, 1, 1, 5])
+ >>> _ = next(sims); next(sims)
+ [3, 3, 3, 4]
+
+ If you ignore the warning and provide an unsorted partition or a
+ partition containing ``2``, you may get wrong answers. Lack of
+ sorting can eliminate valid simulacra::
+
+ >>> len(list(partition_simulacra([3,3,3,4])))
+ 2
+ >>> len(list(partition_simulacra([4,3,3,3])))
+ 1
+
+ And including ``2`` can produce equivalent partitions::
+
+ >>> list(partition_simulacra([2,5,13]))
+ [[1, 1, 5, 13], [10, 10]]
+
+ """
+ target_rank = partition_rank(p)
+ for q in partitions(sum(p), include_two=False):
+ # The largest entry of q is its last, and the smallest entry
+ # of p is its first.
+ if q[-1] <= p[0]:
+ continue
+ if (partition_rank(q) == target_rank) and (not q == p):
+ yield q
--- /dev/null
+r"""
+Query/update the SQL database where we store cones and Lyapunov
+ranks.
+
+For the test suite to complete in a reasonable amount of time, some
+things must be pre-computed. The two main long-running computations
+whose results we need access to are,
+
+ 1. :func:`compute.admissible_lorentz_ranks`
+ 2. :func:`compute.dim_ranks_cones`
+
+Each of those functions can save the data it computes in a SQL
+database. This module provides the functions to make that possible,
+and also provides helper functions that make it easier to query the
+data from within the test suite. There are in fact two databases, one
+"live" database, and one "test" database. The test database is
+temporary and may be erased, replaced, or overwritten during
+testing. The live database is where important data are stored, and
+should not be modified by the test suite, only intentionally by
+calling the functions in :mod:`compute`.
+
+Within the database, there are only two tables: ``cones``, and
+``lorentz_ranks``. These correspond in an obvious way to the two
+functions in :mod:`compute` mentioned above. Their structure can be
+inferred from :func:`new_database`, which is used to create them.
+
+Typically, within a test, you will use these SQL functions rather than
+the ones in :mod:`compute`. For example, if you want to know what
+Lyapunov ranks are achievable using sums of Lorentz cones in a given
+dimension, you would use :func:`admissible_lorentz_ranks` from this
+module rather than :func:`compute.admissible_lorentz_ranks`. Using the
+SQL module ensures that the live database is never modified by the
+test suite (no computation is done). The only caveat is that you must
+first check (using :func:`max_lorentz_rank_dim`, for example) that
+there are enough rows in the database to answer your query.
+
+The live database is guaranteed to exist, but it is not guaranteed to
+contain any data. The tests should pass with an empty live
+database. If you accidentally delete it, the live database can easily
+be recreated by running ``new_database(LIVE_DATABASE)`` in a python
+interpreter. The test database, on the other hand, is created
+on-the-fly and you shouldn't have to worry about it.
+
+If you want to be sure that the cached data are sufficient to verify
+the results in the paper, run the ``verify-live-db.py`` script.
+"""
+
+import sqlite3
+import msgpack
+from cones import SymmetricCone, SerialCone
+
+# The default "live" database.
+LIVE_DATABASE = "live.db"
+
+# The database used for testing.
+TEST_DATABASE = "test.db"
+
+def new_database(db : str = TEST_DATABASE):
+ r"""
+ Create a new database (destroying any existing databases of
+ the given name) containing our two tables.
+
+ This is destructive, so we use the test database as the default.
+
+ Parameters
+ ----------
+
+ db : str, default=TEST_DATABASE
+ The name of the SQLite database to use.
+
+ Examples
+ --------
+
+ Create a fresh test database::
+
+ >>> new_database()
+ >>> conn = sqlite3.connect(TEST_DATABASE)
+ >>> stmt = "SELECT name FROM sqlite_master WHERE type='table';"
+ >>> with conn:
+ ... print(conn.execute(stmt).fetchall())
+ [('cones',), ('lorentz_ranks',)]
+ >>> conn.close()
+
+ """
+ from os import remove
+ try:
+ remove(db)
+ except FileNotFoundError:
+ pass
+
+ conn = sqlite3.connect(db)
+
+ with conn:
+ conn.execute("""CREATE TABLE cones (
+ dim INTEGER NOT NULL,
+ rank INTEGER NOT NULL,
+ data BLOB NOT NULL
+ );""")
+ conn.execute("CREATE INDEX dim_rank_idx ON cones (dim,rank);")
+
+ conn.execute("""CREATE TABLE lorentz_ranks (
+ dim INTEGER NOT NULL,
+ rank INTEGER NOT NULL
+ );""")
+ conn.execute("CREATE INDEX dim_idx ON lorentz_ranks (dim);")
+
+ conn.close()
+
+
+def all_cones_of_dim(n : int, db : str = LIVE_DATABASE) -> tuple[SymmetricCone, ...]:
+ r"""
+ Return all symmetric cones having dimension ``n``.
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension of the cones you want.
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Setup::
+
+ >>> from cones import *
+
+ Low-dimensional examples, computed from scratch::
+
+ >>> import compute
+ >>> new_database(db=TEST_DATABASE)
+ >>> drc = compute.dim_ranks_cones(6, sql=True)
+
+ >>> all_cones_of_dim(0, db=TEST_DATABASE)
+ (L(0),)
+ >>> all_cones_of_dim(1, db=TEST_DATABASE)
+ (L(1),)
+ >>> all_cones_of_dim(2, db=TEST_DATABASE)
+ (L(1) + L(1),)
+
+ >>> sorted(all_cones_of_dim(3, db=TEST_DATABASE))
+ [L(1) + L(1) + L(1), L(3)]
+
+ >>> sorted(all_cones_of_dim(4, db=TEST_DATABASE))
+ [L(1) + L(1) + L(1) + L(1), L(1) + L(3), L(4)]
+
+ >>> sorted(all_cones_of_dim(5, db=TEST_DATABASE))
+ [L(1) + L(1) + L(1) + L(1) + L(1), L(1) + L(1) + L(3), L(1) + L(4), L(5)]
+
+ Finally at dim 6, the 3x3 real PSD cone makes an entrance::
+
+ >>> sorted(all_cones_of_dim(6, db=TEST_DATABASE))
+ [L(1) + L(1) + L(1) + L(1) + L(1) + L(1), L(1) + L(1) + L(1) + L(3), L(1) + L(1) + L(4), L(1) + L(5), L(3) + L(3), HR(3), L(6)]
+
+ Sets of cones in different dimensions should never intersect::
+
+ >>> from random import randint
+ >>> d1 = randint(0,20)
+ >>> d2 = randint(0,20)
+ >>> c1 = all_cones_of_dim(d1)
+ >>> c2 = all_cones_of_dim(d2)
+ >>> expected = ()
+ >>> if d2 == d1:
+ ... expected = c1
+ >>> actual = tuple( c for c in c1 if c in c2 )
+ >>> actual == expected
+ True
+
+ Check for the existence of some expected cones::
+
+ >>> not have_cone_dim(6) or HR(3) in all_cones_of_dim(6)
+ True
+ >>> not have_cone_dim(9) or HC(3) in all_cones_of_dim(9)
+ True
+ >>> not have_cone_dim(15) or HH(3) in all_cones_of_dim(15)
+ True
+ >>> not have_cone_dim(27) or HO(3) in all_cones_of_dim(27)
+ True
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "SELECT data FROM cones WHERE dim=?"
+ with conn:
+ result = tuple(
+ SymmetricCone.deserialize(
+ msgpack.unpackb(t[0], use_list=False)
+ )
+ for t in conn.execute(stmt, (n,)).fetchall()
+ )
+ conn.close()
+ return result
+
+
+def simulacra(K : SymmetricCone, db : str = LIVE_DATABASE) -> tuple[SymmetricCone, ...]:
+ r"""
+ Return all simulacra of the given cone.
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ K : SymmetricCone
+ The cone whose simulacra you want.
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "SELECT data FROM cones WHERE dim=? AND rank=? AND data<>?"
+ args = ( K.dim, K.rank, msgpack.packb(K.serialize()) )
+ with conn:
+ result = tuple(
+ SymmetricCone.deserialize(
+ msgpack.unpackb(t[0], use_list=False)
+ )
+ for t in conn.execute(stmt, args).fetchall()
+ )
+ conn.close()
+ return result
+
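Unlike its neighbors, :func:`simulacra` has no doctest. Here is a hedged, stdlib-only sketch of the underlying query (same ``dim`` and ``rank``, different serialized ``data``), with plain strings standing in for the msgpack blobs and purely illustrative dimension/rank values:

```python
import sqlite3

# Stdlib-only sketch of the simulacra query: rows sharing dim and rank
# with the given cone, but with different serialized data. Plain strings
# stand in for msgpack blobs; the dim/rank values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cones (dim INTEGER, rank INTEGER, data BLOB)")
rows = [
    (39, 453, "HC(3) + L(30)"),
    (39, 453, "L(29) + L(10)"),
    (39, 500, "something else"),
]
conn.executemany("INSERT INTO cones VALUES (?,?,?)", rows)

# The query excludes the cone itself via data<>?, exactly like the
# statement in the function above.
stmt = "SELECT data FROM cones WHERE dim=? AND rank=? AND data<>?"
sims = tuple(t[0] for t in conn.execute(stmt, (39, 453, "HC(3) + L(30)")))
conn.close()
# sims == ("L(29) + L(10)",)
```

Note that ``data<>?`` only filters out byte-identical serializations; the real function relies on serialization being canonical for each cone.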
+
+
+def have_cone_dim(d : int, db : str = LIVE_DATABASE) -> bool:
+ r"""
+ Return ``True`` if we have cached the cones of the given
+ dimension, and ``False`` otherwise.
+
+ Since this does not modify the database, we use the live database
+ by default. This is _almost_ redundant, since we should have all
+ cones of dimension less than :func:`max_cone_dim`, but it's nicer
+ to check for exactly what we need.
+
+ Parameters
+ ----------
+
+ d : int
+ The dimension to check for in the database.
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Examples
+ --------
+
+ An outrageously large example::
+
+ >>> have_cone_dim(8675309)
+ False
+
+ In a new database, we won't have anything...::
+
+ >>> new_database(db=TEST_DATABASE)
+ >>> all( not have_cone_dim(n, db=TEST_DATABASE)
+ ... for n in range(25) )
+ True
+
+ ...until we add it::
+
+ >>> import compute
+ >>> _ = compute.dim_ranks_cones(5, sql=True)
+ >>> have_cone_dim(5, db=TEST_DATABASE)
+ True
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "SELECT dim FROM cones where dim=?"
+ result = False
+ with conn:
+ if conn.execute(stmt, (d,)).fetchone():
+ result = True
+ conn.close()
+ return result
+
+
+def max_cone_dim(db : str = LIVE_DATABASE) -> int:
+ r"""
+ Return the maximum dimension of any cone in the database, or
+ ``~sys.maxsize`` if the database is empty.
+
+ Returning "negative infinity" for the maximum of an empty set is
+ standard in optimization because it makes comparisons less clunky.
+ Our ``~sys.maxsize`` is not quite negative infinity, but it's
+ close?
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ The largest dimension in the ``cones`` table, or ``~sys.maxsize``
+ if the table is empty.
+
+ Examples
+ --------
+
+ The right answer depends on how long you're willing to wait (and
+ whether or not the data exist)::
+
+ >>> from sys import maxsize
+ >>> mcd = max_cone_dim()
+ >>> mcd >= 0 or mcd == ~maxsize
+ True
+
+ In a new database, there won't be a maximum::
+
+ >>> from sys import maxsize
+ >>> new_database(db=TEST_DATABASE)
+ >>> max_cone_dim(db=TEST_DATABASE) == ~maxsize
+ True
+ """
+ from sys import maxsize
+ conn = sqlite3.connect(db)
+ stmt = "SELECT MAX(dim) FROM cones"
+ with conn:
+ result = conn.execute(stmt).fetchone()[0]
+ conn.close()
+ if result is None:
+ return ~maxsize
+ else:
+ return result
+
+
+
+def admissible_lorentz_ranks(n : int, db : str = LIVE_DATABASE) -> tuple[int, ...]:
+ r"""
+ Return the admissible Lyapunov ranks for sums of Lorentz cones
+ of total dimension ``n``.
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension for which you'd like to know: what Lyapunov ranks
+ are possible if we consider only Lorentz cone factors?
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ tuple
+ The "set" of Lyapunov ranks that can be achieved in dimension
+ ``n`` using only Lorentz cone factors.
+
+ Examples
+ --------
+
+ Low-dimensional examples that we compute from scratch::
+
+ >>> import compute
+ >>> new_database(db=TEST_DATABASE)
+ >>> alrs = compute.admissible_lorentz_ranks(3, sql=True)
+ >>> sorted(alrs)
+ [3, 4]
+ >>> admissible_lorentz_ranks(0, db=TEST_DATABASE)
+ (0,)
+ >>> admissible_lorentz_ranks(1, db=TEST_DATABASE)
+ (1,)
+ >>> admissible_lorentz_ranks(2, db=TEST_DATABASE)
+ (2,)
+ >>> sorted(admissible_lorentz_ranks(3, db=TEST_DATABASE))
+ [3, 4]
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "SELECT rank FROM lorentz_ranks WHERE dim=?"
+ result = ()
+ with conn:
+ result = tuple( r[0]
+ for r in conn.execute(stmt, (n,)).fetchall() )
+ conn.close()
+ return result
+
+
+def admissible_ranks(n: int, db : str = LIVE_DATABASE) -> tuple[int, ...]:
+ r"""
+ Compute all admissible Lyapunov ranks for cones of total
+ dimension ``n``.
+
+ This is basically :func:`admissible_lorentz_ranks`, but with an
+ optional 3x3 complex PSD factor thrown in. All symmetric cones
+ share a signature with a cone of this form.
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension for which you'd like to know: what Lyapunov ranks
+ are possible?
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ tuple[int]
+ The set of Lyapunov ranks that can be achieved in dimension ``n``.
+
+ Examples
+ --------
+
+ Low-dimensional examples agree with
+ :func:`admissible_lorentz_ranks`, at least until there is room for
+ that 3x3 complex PSD factor to fit::
+
+ >>> all( admissible_ranks(k)
+ ... ==
+ ... admissible_lorentz_ranks(k)
+ ... for k in range(9)
+ ... if have_lorentz_rank_dim(k) )
+ True
+ >>> (
+ ... not have_lorentz_rank_dim(9)
+ ... or
+ ... not (
+ ... admissible_ranks(9)
+ ... ==
+ ... admissible_lorentz_ranks(9)
+ ... )
+ ... )
+ True
+
+ All cones share a signature with a cone of this form::
+
+ >>> from cones import random_cone
+ >>> from sql import max_lorentz_rank_dim
+ >>> mlrd = max_lorentz_rank_dim()
+ >>>
+ >>> if mlrd < 3:
+ ... # not enough data, just return the right answer
+ ... True
+ ... else:
+ ... K = random_cone()
+ ... while K.dim > mlrd:
+ ... K = random_cone()
+ ... K.rank in admissible_ranks(K.dim)
+ True
+
+ """
+ if n < 9:
+ return admissible_lorentz_ranks(n, db)
+
+ a = admissible_lorentz_ranks(n, db)
+ b = tuple( 17 + r for r in admissible_lorentz_ranks(n - 9, db) )
+ return tuple(set(a+b)) # dedupe
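The two-line combination in the return statement can be illustrated in isolation. This is a hedged sketch with hardcoded inputs standing in for database query results; per the code above, the optional 9-dimensional 3x3 complex PSD factor contributes 17 to the rank:

```python
# Hedged sketch of the combination step above. The tuples stand in for
# database query results; 17 is the rank contribution of the optional
# 9-dimensional 3x3 complex PSD factor.
def combine(lorentz_n, lorentz_n_minus_9):
    a = tuple(lorentz_n)
    b = tuple(17 + r for r in lorentz_n_minus_9)
    return tuple(sorted(set(a + b)))  # dedupe (sorted here for readability)

# If dimension n admits pure-Lorentz ranks (20, 21) and dimension n - 9
# admits (3, 4), the shifted copies coincide with the originals, and
# deduplication collapses the four candidates down to two.
example = combine((20, 21), (3, 4))
# example == (20, 21)
```

The real function skips the sort and returns the deduplicated tuple in set-iteration order.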
+
+
+
+def have_lorentz_rank_dim(d : int, db : str = LIVE_DATABASE) -> bool:
+ r"""
+ Return ``True`` if we know the admissible Lorentz ranks for
+ the given dimension, and ``False`` otherwise.
+
+ Since this does not modify the database, we use the live database
+ by default. This is _almost_ redundant, since we should have all
+ ranks for dimensions less than :func:`max_lorentz_rank_dim`, but
+ it's nicer to check for exactly what we need.
+
+ Parameters
+ ----------
+
+ d : int
+ The dimension to check for in the database.
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Examples
+ --------
+
+ An outrageously large example::
+
+ >>> have_lorentz_rank_dim(8675309)
+ False
+
+ In a new database, we won't have anything...::
+
+ >>> new_database(db=TEST_DATABASE)
+ >>> all( not have_lorentz_rank_dim(n, db=TEST_DATABASE)
+ ... for n in range(25) )
+ True
+
+ ...until we add it::
+
+ >>> import compute
+ >>> _ = compute.admissible_lorentz_ranks(10, sql=True)
+ >>> have_lorentz_rank_dim(10, db=TEST_DATABASE)
+ True
+ """
+ conn = sqlite3.connect(db)
+ stmt = "SELECT dim FROM lorentz_ranks where dim=?"
+ result = False
+ with conn:
+ if conn.execute(stmt, (d,)).fetchone():
+ result = True
+ conn.close()
+ return result
+
+
+def max_lorentz_rank_dim(db : str = LIVE_DATABASE) -> int:
+ r"""
+ Return the maximum dimension of any Lorentz rank in the
+ database, or ``~sys.maxsize`` if there are none.
+
+ Returning "negative infinity" for the maximum of an empty set is
+ standard in optimization because it makes comparisons less clunky.
+ Our ``~sys.maxsize`` is not quite negative infinity, but it's
+ close?
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ The largest dimension in the ``lorentz_ranks`` table, or
+ ``~sys.maxsize`` if the table is empty.
+
+ Examples
+ --------
+
+ The right answer depends on how long you're willing to wait (and
+ whether or not the data exist)::
+
+ >>> from sys import maxsize
+ >>> mlrd = max_lorentz_rank_dim()
+ >>> mlrd >= 0 or mlrd == ~maxsize
+ True
+
+ In a new database, there won't be a maximum::
+
+ >>> from sys import maxsize
+ >>> new_database(db=TEST_DATABASE)
+ >>> max_lorentz_rank_dim(db=TEST_DATABASE) == ~maxsize
+ True
+ """
+ from sys import maxsize
+ conn = sqlite3.connect(db)
+ stmt = "SELECT MAX(dim) FROM lorentz_ranks"
+ with conn:
+ result = conn.execute(stmt).fetchone()[0]
+ conn.close()
+ if result is None:
+ return ~maxsize
+ else:
+ return result
+
+
+def insert_lorentz_ranks(n : int, ranks : tuple[int,...], db : str = TEST_DATABASE):
+ r"""
+ Insert one dimension's worth of admissible Lorentz ranks into
+ the database.
+
+ This is destructive, so we use the test database as the default.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension to which your list of Lyapunov ranks corresponds.
+
+ ranks : list[int]
+ A list of Lyapunov ranks that are achievable using only Lorentz
+ cones in dimension ``n``.
+
+ db : str, default=TEST_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ Nothing.
+
+ Examples
+ --------
+
+ A simple example::
+
+ >>> new_database(db=TEST_DATABASE)
+ >>> insert_lorentz_ranks(8, [6,7,5,3,0,9])
+ >>> stmt = "SELECT dim,rank FROM lorentz_ranks"
+ >>> conn = sqlite3.connect(TEST_DATABASE)
+ >>> with conn:
+ ... print(conn.execute(stmt).fetchall())
+ [(8, 6), (8, 7), (8, 5), (8, 3), (8, 0), (8, 9)]
+ >>> conn.close()
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "INSERT INTO lorentz_ranks (dim,rank) VALUES (?,?)"
+ with conn:
+ conn.executemany(stmt, ((n, r) for r in ranks) )
+ conn.close()
+
+
+def insert_cones(n : int, d : dict, db : str = TEST_DATABASE):
+ r"""
+ Insert cones into the database from a rank => cones dictionary.
+
+ This is used in the implementation of :func:`compute.dim_ranks_cones`
+ to save newly-computed cones.
+
+ This is destructive, so we use the test database as the default.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension of your cones.
+
+ d : dict
+ A dictionary whose keys are Lyapunov ranks and whose values
+ are tuples consisting of all serialized cones of dimension
+ ``n`` having those Lyapunov ranks.
+
+ db : str, default=TEST_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ Nothing.
+
+ Examples
+ --------
+
+ Insert the two cones of dimension three, and then check that we
+ can pull them back out with :func:`dim_ranks_cones`::
+
+ >>> from cones import L, RN
+ >>> new_database(db=TEST_DATABASE)
+ >>> n = 3
+ >>> d = { 3: [RN(3).serialize()], 4: [L(3).serialize()] }
+ >>> insert_cones(n, d, db=TEST_DATABASE)
+ >>> dim_ranks_cones(3, db=TEST_DATABASE)
+ {3: ((11, 11, 11),), 4: (31,)}
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "INSERT INTO cones (dim,rank,data) VALUES (?,?,?)"
+ with conn:
+ conn.executemany(stmt, ((n, r, msgpack.packb(s))
+ for r in d
+ for s in d[r]) )
+ conn.close()
+
+
+def dim_ranks_cones(n : int, db : str = LIVE_DATABASE) -> dict[ int, tuple[SerialCone,...] ]:
+ r"""
+ Return all cones of dimension ``n`` as a rank => cones map.
+
+ Since this does not modify the database, we use the live database
+ by default.
+
+ Parameters
+ ----------
+
+ n : int
+ The dimension of the cones you want.
+
+ db : str, default=LIVE_DATABASE
+ The name of the SQLite database to use.
+
+ Returns
+ -------
+
+ A dictionary whose keys are Lyapunov ranks and whose values
+ are tuples consisting of all serialized cones of dimension
+ ``n`` having those Lyapunov ranks.
+
+ Examples
+ --------
+
+ Low-dimensional examples that we compute from scratch::
+
+ >>> import compute
+ >>> new_database(db=TEST_DATABASE)
+ >>> _ = compute.dim_ranks_cones(3, sql=True)
+ >>> dim_ranks_cones(0, db=TEST_DATABASE)
+ {0: (1,)}
+ >>> dim_ranks_cones(1, db=TEST_DATABASE)
+ {1: (11,)}
+ >>> dim_ranks_cones(2, db=TEST_DATABASE)
+ {2: ((11, 11),)}
+ >>> dim_ranks_cones(3, db=TEST_DATABASE)
+ {3: ((11, 11, 11),), 4: (31,)}
+
+ """
+ conn = sqlite3.connect(db)
+ stmt = "SELECT rank,data FROM cones WHERE dim=?"
+ result_pairs = []
+ with conn:
+ result_pairs = conn.execute(stmt, (n,) ).fetchall()
+ conn.close()
+
+ # Convert the paired results to a dict
+ d_n: dict[ int, list[SerialCone] ]
+ d_n = {}
+
+ for r,s in result_pairs:
+ d_n.setdefault(r, []).append(msgpack.unpackb(s, use_list=False))
+
+ # Now convert the lists in the dict to tuples before returning.
+ return { k: tuple(d_n[k]) for k in d_n }
--- /dev/null
+from partitions import *
+from cones import *
+
+
+def max_dimK(n):
+ r"""
+ The largest possible dimension for ``K`` when ``n`` is fixed
+ and must satisfy :func:`lowerbound1` and :func:`lowerbound2`.
+
+ This is computed the dumb way rather than by solving the
+ quadratic.
+
+ Parameters
+ ----------
+
+ n : int
+ The fixed dimension of the Lorentz cone in Theorem 5.
+
+ Returns
+ -------
+
+ The largest integer dimension that ``K`` can have if ``n`` will
+ satisfy both ``lowerbound1(K)`` and ``lowerbound2(K)``.
+
+ Examples
+ --------
+
+ Check the table in Theorem 5::
+
+ >>> max_dimK(3)
+ 2
+ >>> max_dimK(5)
+ 3
+ >>> max_dimK(6)
+ 4
+ >>> max_dimK(7)
+ 4
+ >>> max_dimK(8)
+ 5
+ >>> max_dimK(9)
+ 5
+ >>> max_dimK(10)
+ 6
+ >>> max_dimK(11)
+ 6
+ >>> max_dimK(12)
+ 6
+ >>> max_dimK(13)
+ 7
+ >>> max_dimK(14)
+ 7
+
+ Compare the answer against the smart way, by solving the quadratic
+ inequality ``d**2 - d - 4*n + 10 <= 0`` to find where the
+ upwards-facing parabola last crosses the x-axis. (Note that
+ we need ``n >= 3`` to get real solutions to this.)
+
+ >>> from math import floor, sqrt
+ >>> def qf(n):
+ ... a = 1
+ ... b = -1
+ ... c = 10 - 4*n
+ ... if (b**2 - 4*a*c) < 0:
+ ... return None
+ ... else:
+ ... return floor( (-b + sqrt(b**2 - 4*a*c)) / (2*a) )
+ >>> all( qf(n) == max_dimK(n) for n in range(3,15) )
+ True
+
+ """
+ d = 1
+ while d*(d-1) <= (4*n - 10):
+ d += 1
+ return d-1
+
+
+def fix_floor(s):
+ r"""
+ Strip symbolic "floor" calls from SymPy expressions.
+
+ Sympy doesn't know (for example) that ``n**2 + n`` is even, so it
+ will insert a :func:`sympy.functions.elementary.integers.floor`
+ around the result of ``(n**2 + n)//2``. We however know that the
+ "floor" is superfluous, so we can remove it using this function.
+
+ Parameters
+ ----------
+
+ s : sympy.core.expr.Expr
+ A sympy expression possibly containing a call to
+ :func:`sympy.functions.elementary.integers.floor`.
+
+ Returns
+ -------
+
+ Another :class:`sympy.core.expr.Expr`, devoid of floors.
+
+ Examples
+ --------
+
+ ``n**2 + n`` is even::
+
+ >>> from sympy import symbols
+ >>> n = symbols("n", integer=True, positive=True)
+ >>> expr = (n**2 + n) // 2
+ >>> expr
+ floor(n**2/2 + n/2)
+ >>> fix_floor(expr)
+ n**2/2 + n/2
+
+ """
+ from sympy.functions.elementary.integers import floor
+ return s.replace(floor, lambda x: x)
+
+
+def lowerbound1(K):
+ r"""
+ The first of the three lower bounds on "n", used in the
+ proof of Lemma 4 and elsewhere.
+
+ Parameters
+ ----------
+
+ K : SymmetricCone
+ The cone for which you want the lower bound.
+
+ Returns
+ -------
+
+ int
+ The lower bound on "n" corresponding to ``K``.
+
+ Examples
+ --------
+
+ This lower bound is tight, regardless of the other two::
+
+ >>> K = random_cone()
+ >>> while K.dim == 0:
+ ... K = random_cone()
+ >>> n = lowerbound1(K) - 1
+ >>> J1 = DirectSum([K, L(n)])
+ >>> J2 = DirectSum([RN(K.dim - 1), L(n+1)])
+ >>> J1.signature() == J2.signature()
+ True
+
+ We know some examples where this is the bound that's tight, and
+ where ``K`` is big enough to ensure non-isomorphism (the existence
+ of a real simulacrum, not just a matching signature)::
+
+ >>> m = 5
+ >>> K = L(m)
+ >>> n = lowerbound1(K) - 1
+ >>> n >= lowerbound2(K)
+ True
+ >>> n != 4
+ True
+ >>> J1 = DirectSum([K, L(n)])
+ >>> J2 = DirectSum([RN(K.dim - 1), L(n+1)])
+ >>> J1.signature() == J2.signature()
+ True
+ >>> J1 == J2
+ False
+
+ """
+ return 2 + K.rank - K.dim
+
+
+def lowerbound2(K):
+ r"""
+ The second lower bound on "n" used in the proof of Lemma 5
+ and elsewhere.
+
+ Parameters
+ ----------
+
+ K : SymmetricCone
+ The cone for which you want the lower bound.
+
+ Returns
+ -------
+
+ int
+ The lower bound on "n" corresponding to ``K``.
+
+ Examples
+ --------
+
+ It works with symbolic values, though we have to manually
+ eliminate the `floor` that arises from Python's integer division
+ (``m**2 + m + 2`` is guaranteed to be even)::
+
+ >>> from sympy import expand, symbols
+ >>> m = symbols("m", integer=True, positive=True)
+ >>> expand(fix_floor(lowerbound2(L(m))))
+ m + 2
+
+ This bound is tight, because we have already computed
+ a counterexample wherein the other two lower bounds are
+ satisfied::
+
+ >>> K = HC(3)
+ >>> n = lowerbound2(K) - 1
+ >>> n >= lowerbound1(K)
+ True
+ >>> n != 4
+ True
+ >>> J1 = DirectSum([K,L(n)])
+ >>> J2 = DirectSum([L(29),L(10)])
+ >>> J1.signature() == J2.signature()
+ True
+ >>> J1 == J2
+ False
+
+ This lower bound is exactly what is needed in Lemma 5::
+
+ >>> from sympy import expand, symbols
+ >>> n,d,r = symbols("n,d,r", integer=True, positive=True)
+ >>> K = SymmetricCone(0)
+ >>> K.dim = d
+ >>> K.rank = r
+ >>> ineq1 = f(n) - f(n-1) + 1 >= 2 + f(K.dim + 1) - K.rank
+ >>> ineq1 = expand(fix_floor(ineq1))
+ >>> ineq1 == expand(fix_floor(n >= lowerbound2(K)))
+ True
+
+ """
+ return 2 + f(1 + K.dim) - K.rank
+
+
+def lowerbound3(K : SymmetricCone) -> int:
+ r"""
+ The third lower bound on "n", used in the proof of Lemma 3
+ and elsewhere (and later loosened to ``n != 4``).
+
+ Parameters
+ ----------
+
+ K : SymmetricCone
+ The cone for which you want the lower bound (this parameter is
+ essentially ignored).
+
+ Returns
+ -------
+
+ int
+ This lower bound is always 10.
+
+ Examples
+ --------
+
+ Yup::
+
+ >>> lowerbound3(random_cone())
+ 10
+
+ """
+ return 10
+
+
+def test_lemma1():
+ r"""
+ Test the statement of Lemma 1.
+
+ We construct many random irreducible cones and check that their
+ signatures match if and only if they are equal.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_lemma1()
+ True
+
+ The only cone of the same dimension as ``HO(3)`` is ``L(27)``, and
+ its Lyapunov rank is too large::
+
+ >>> HR(6).dim, HR(7).dim
+ (21, 28)
+ >>> HC(5).dim, HC(6).dim
+ (25, 36)
+ >>> HH(3).dim, HH(4).dim
+ (15, 28)
+ >>> L(27).rank
+ 352
+
+ There are no members (``3``-by-``3`` or larger) of the matrix
+ families with matching signatures::
+
+ >>> from sympy import symbols, solve
+ >>> m,n = symbols("m,n", integer=True, positive=True)
+ >>> solve([fix_floor(HR(m).dim) - HC(n).dim, HR(m).rank - HC(n).rank], n)
+ []
+ >>> solve([fix_floor(HR(m).dim) - HC(n).dim, HR(m).rank - HC(n).rank], m)
+ []
+ >>> solve([fix_floor(HR(m).dim) - HH(n).dim, HR(m).rank - HH(n).rank], n)
+ []
+ >>> solve([fix_floor(HR(m).dim) - HH(n).dim, HR(m).rank - HH(n).rank], m)
+ []
+ >>> solve([HC(m).dim - HH(n).dim, HC(m).rank - HH(n).rank], m)
+ []
+ >>> solve([HC(m).dim - HH(n).dim, HC(m).rank - HH(n).rank], n)
+ []
+
+ We should actually be obtaining the solution ``m == n == 1`` for
+ all of these, but the formula (per Appendix A) for the Lyapunov
+ rank of ``HH(1)`` is wrong. I don't know why the solution set is
+ empty for ``HR(1)`` and ``HC(1)``. We can easily verify::
+
+ >>> [HR(1).dim - HC(1).dim, HR(1).rank - HC(1).rank]
+ [0, 0]
+
+ If we allow ``m`` and ``n`` to be zero, the ``m == n == 1``
+ solution shows up indirectly::
+
+ >>> m,n = symbols("m,n", integer=True, nonnegative=True)
+ >>> solve([HR(m).dim - HC(n).dim, HR(m).rank - HC(n).rank], m)
+ [(-sqrt(2*n**2 - 1),), (sqrt(2*n**2 - 1),)]
+
+ However, only the latter of these is real and positive, and only
+ for ``n >= 1``. If we substitute ``m == sqrt(2*n**2 - 1)`` into
+ the equation relating the dimensions, we derive ``n == 1``, albeit
+ with no help from SymPy, who thinks that the equation
+ ``sqrt(2*n**2 - 1) == 1`` has no solutions::
+
+ >>> from sympy import sqrt
+ >>> eq = 2*(fix_floor(HR(sqrt(2*n**2 - 1)).dim) - HC(n).dim)
+ >>> eq
+ sqrt(2*n**2 - 1) - 1
+ >>> solve(eq, n)
+ []
+
+ Anyway, back to the argument: the signature of a Lorentz cone
+ matches that of an ``n``-by-``n`` matrix cone only for ``n <= 2``,
+ where they are isomorphic to Lorentz cones. Though note that the
+ formulas for Lyapunov rank (per Appendix A) are not even valid for
+ the matrix cones when ``n <= 1``::
+
+ >>> solve(fix_floor(L(HR(n).dim).rank - HR(n).rank), n)
+ [1, 2]
+ >>> solve(fix_floor(L(HC(n).dim).rank - HC(n).rank), n)
+ [1, 2]
+ >>> solve(fix_floor(L(HH(n).dim).rank - HH(n).rank), n)
+ [2]
+
+ """
+ # Use lists here to avoid quietly exhausting the generator in the
+ # "all" comprehension.
+ Ks = [ random_irreducible_cone(0,20) for _ in range(250) ]
+ Js = [ random_irreducible_cone(0,20) for _ in range(250) ]
+
+ return all(
+ (K.signature() == J.signature())
+ ==
+ (K == J)
+ for K in Ks
+ for J in Js
+ )
+
+
+def test_proposition3():
+ r"""
+ Test the statement of Proposition 3.
+
+ We construct a random cone and check that its Lyapunov rank is
+ exceeded by that of the Lorentz cone of the same dimension.
+
+ Setup
+ -----
+
+ Create some symbols that we'll use in the following examples::
+
+ >>> from sympy import expand, simplify, symbols
+ >>> m,n,k = symbols("m,n,k", integer=True, positive=True)
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_proposition3()
+ True
+
+ Confirm that using two Lorentz cones gives you a smaller Lyapunov
+ rank than one Lorentz cone would, assuming the dimension is
+ fixed::
+
+ >>> K = L(n)
+ >>> J1 = L(k)
+ >>> J2 = L(n-k)
+ >>> J = DirectSum([J1,J2], False) # can't sort symbolics
+ >>> actual = fix_floor(K.rank - J.rank)
+ >>> expected = (n-k)*k - 1
+ >>> simplify(actual - expected)
+ 0
+
+ The rank of ``L(27)`` exceeds that of ``HO(3)``::
+
+ >>> K = L(27)
+ >>> J = HO(3)
+ >>> K.dim == J.dim
+ True
+ >>> K.rank > J.rank
+ True
+
+ The difference between the rank of the Lorentz cone and the rank
+ of the real symmetric PSD cone of equal dimension is the
+ polynomial we expect::
+
+ >>> K = L((m**2 + m)/2)
+ >>> J = HR(m)
+ >>> fix_floor(K.dim - J.dim)
+ 0
+ >>> expand(fix_floor(K.rank - J.rank))
+ m**4/8 + m**3/4 - 9*m**2/8 - m/4 + 1
+
+ The difference between the rank of the Lorentz cone and the rank
+ of the complex Hermitian PSD cone of equal dimension is the
+ polynomial we expect::
+
+ >>> K = L(m**2)
+ >>> J = HC(m)
+ >>> fix_floor(K.dim - J.dim)
+ 0
+ >>> fix_floor(K.rank - J.rank)
+ m**4/2 - 5*m**2/2 + 2
+
+ The difference between the rank of the Lorentz cone and the rank
+ of the quaternion Hermitian PSD cone of equal dimension is the
+ polynomial we expect::
+
+ >>> K = L(2*m**2 - m)
+ >>> J = HH(m)
+ >>> fix_floor(K.dim - J.dim)
+ 0
+ >>> expand(fix_floor(K.rank - J.rank))
+ 2*m**4 - 2*m**3 - 9*m**2/2 + m/2 + 1
+
+ """
+ K = random_cone()
+ J = L(K.dim)
+
+ # They could be isomorphic; but if not, it's a strict inequality.
+ return ( J == K or J.rank > K.rank )
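The three rank-difference polynomials quoted in the docstring above can be cross-checked with plain integer arithmetic. A minimal sketch, assuming the closed forms used throughout these doctests (``L(d).rank == (d**2 - d + 2)/2``, ``HR(m).rank == m**2``, ``HC(m).rank == 2*m**2 - 1``, ``HH(m).rank == 4*m**2``):

```python
from fractions import Fraction

# Lyapunov rank of the d-dimensional Lorentz cone, per the assumed
# closed form (d**2 - d is always even, so // is exact).
def lorentz_rank(d):
    return (d * d - d + 2) // 2

for m in range(1, 50):
    # HR(m): dim (m**2 + m)/2, rank m**2 (assumed)
    d = (m * m + m) // 2
    diff = lorentz_rank(d) - m * m
    assert diff == Fraction(m**4, 8) + Fraction(m**3, 4) \
        - Fraction(9 * m**2, 8) - Fraction(m, 4) + 1

    # HC(m): dim m**2, rank 2*m**2 - 1 (assumed)
    diff = lorentz_rank(m * m) - (2 * m * m - 1)
    assert diff == Fraction(m**4, 2) - Fraction(5 * m**2, 2) + 2

    # HH(m): dim 2*m**2 - m, rank 4*m**2 (assumed)
    d = 2 * m * m - m
    diff = lorentz_rank(d) - 4 * m * m
    assert diff == 2 * m**4 - 2 * m**3 - Fraction(9 * m**2, 2) \
        + Fraction(m, 2) + 1

print("ok")
```

Each assertion mirrors one of the ``fix_floor`` computations in the doctests.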
+
+
+def test_proposition4() -> bool:
+ r"""
+ Test the statement of Proposition 4.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_proposition4()
+ True
+
+ From the proof of Proposition 4, we know that for ``n >= 3``,
+ ``HR(n)`` has symmetric simulacra with only Lorentz
+ factors. Moreover when ``n < 3``, ``HR(n)`` *is* a Lorentz
+ cone. In either case, ``HR(n)`` should share its signature with a
+ sum of Lorentz cones. We begin by computing the largest ``n`` for
+ which we have the corresponding sum-of-Lorentz-cone data cached::
+
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> n_max = -1
+ >>> mlrd = max_lorentz_rank_dim()
+ >>> while HR(n_max).dim <= mlrd:
+ ... n_max += 1
+ >>> n_max -= 1
+ >>> all(
+ ... HR(n).rank
+ ... in admissible_lorentz_ranks(HR(n).dim)
+ ... for n in range(n_max+1)
+ ... )
+ True
+
+ In fact, we know the formula for at least one such simulacrum::
+
+ >>> def check(n):
+ ... K1 = L(n+1)
+ ... K2 = RN((n**2 - n - 2) // 2)
+ ... K = DirectSum([K1,K2])
+ ... return (HR(n).signature() == K.signature())
+ >>>
+ >>> all( check(n) for n in range(2,100) )
+ True
+
+ """
+ from sql import max_cone_dim
+
+ # Figure out how big "n" can be if we want to use the database of
+ # cached cones (the ``simulacra`` method uses it implicitly).
+ n_max = 0
+ mcd = max_cone_dim()
+ while HR(n_max).dim <= mcd:
+ n_max += 1
+ n_max -= 1
+
+ n_min = 3
+ if n_max <= n_min:
+ # No cached cones?
+ return True
+
+ return all( HR(n).simulacra() for n in range(n_min, n_max+1) )
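The ``check`` helper in the docstring boils down to arithmetic on dimensions and ranks. A standalone sketch of that arithmetic, assuming ``RN(k)`` (the nonnegative orthant) has dimension and Lyapunov rank both equal to ``k``, and ``L(d).rank == (d**2 - d + 2)/2``:

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

# For n >= 2, the simulacrum L(n+1) + RN((n**2 - n - 2)/2) of HR(n)
# should agree with HR(n) in both dimension and Lyapunov rank.
for n in range(2, 100):
    k = (n * n - n - 2) // 2
    hr_dim, hr_rank = (n * n + n) // 2, n * n
    assert (n + 1) + k == hr_dim      # dimensions match
    assert f(n + 1) + k == hr_rank    # ranks match
print("ok")
```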
+
+
+
+def test_proposition5() -> bool:
+ r"""
+ Test the statement of Proposition 5.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_proposition5()
+ True
+
+ From the proof of Proposition 5, we know that for ``n >= 4``,
+ ``HC(n)`` has symmetric simulacra with only Lorentz
+ factors. Moreover when ``n < 3``, ``HC(n)`` *is* a Lorentz
+ cone. In either case (``n == 3`` is the lone exception), ``HC(n)``
+ should share its signature with a sum of Lorentz cones. We begin by
+ computing the largest ``n`` for
+ which we have the corresponding sum-of-Lorentz-cone data cached
+ (it's easy in this case)::
+
+ >>> from math import floor, sqrt
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> n_max = -1
+ >>> mlrd = max_lorentz_rank_dim()
+ >>> if mlrd >= 0:
+ ... n_max = floor(sqrt(mlrd))
+ >>> all(
+ ... HC(n).rank
+ ... in admissible_lorentz_ranks(HC(n).dim)
+ ... for n in range(n_max+1)
+ ... if n != 3
+ ... )
+ True
+
+ We know the simulacra for ``n >= 4`` explicitly; they are given in
+ the proof of the proposition::
+
+ >>> all(
+ ... HC(n).signature() == K.signature()
+ ... for n in range(4,100)
+ ... if (K2 := RN(n**2 - 5*n + 6))
+ ... and (K := DirectSum(2*[L(n+1)] + [K2, L(4)] + (n-4)*[L(3)]))
+ ... )
+ True
+
+ An example using the :meth:`SymmetricCone.simulacra` method for
+ ``n = 4``::
+
+ >>> from sql import max_cone_dim
+ >>> K = DirectSum([L(5), L(5), L(4), RN(2)])
+ >>> mcd = max_cone_dim()
+ >>> skip = ( HC(4).dim > mcd )
+ >>> skip or K in HC(4).simulacra()
+ True
+
+ """
+ from math import floor, sqrt
+ from sql import max_cone_dim
+
+ # Figure out how big "n" can be if we want to use the database of
+ # cached cones (the ``simulacra`` method uses it implicitly).
+ n_max = 0
+ mcd = max_cone_dim()
+ if mcd >= 0:
+ n_max = floor(sqrt(mcd))
+
+ n_min = 4
+ if n_max <= n_min:
+ # No cached cones?
+ return True
+
+ return ( not HC(3).simulacra()
+ and
+ all(HC(n).simulacra() for n in range(n_min, n_max+1)) )
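The explicit simulacrum ``2*[L(n+1)] + [RN(n**2 - 5*n + 6), L(4)] + (n - 4)*[L(3)]`` from the proof can be sanity-checked, dimension- and rank-wise, without the database. A sketch under the same closed-form assumptions as the doctests (``HC(n)`` having dimension ``n**2`` and rank ``2*n**2 - 1``):

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

for n in range(4, 100):
    k = n * n - 5 * n + 6          # dimension of the RN factor
    dim = 2 * (n + 1) + k + 4 + 3 * (n - 4)
    rank = 2 * f(n + 1) + k + f(4) + (n - 4) * f(3)
    assert dim == n * n            # HC(n).dim
    assert rank == 2 * n * n - 1   # HC(n).rank
print("ok")
```

For ``n == 4`` this reduces to the ``DirectSum([L(5), L(5), L(4), RN(2)])`` example in the docstring.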
+
+
+
+def test_proposition6() -> bool:
+ r"""
+ Test the statement of Proposition 6.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_proposition6()
+ True
+
+ From the proof of Proposition 6, we know that for ``n >= 3``,
+ ``HH(n)`` has symmetric simulacra with only Lorentz
+ factors. Moreover when ``n < 3``, ``HH(n)`` *is* a Lorentz
+ cone. In either case, ``HH(n)`` should share its signature with a
+ sum of Lorentz cones. We begin by computing the largest ``n`` for
+ which we have the corresponding sum-of-Lorentz-cone data cached::
+
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> n_max = -1
+ >>> mlrd = max_lorentz_rank_dim()
+ >>> while HH(n_max).dim <= mlrd:
+ ... n_max += 1
+ >>> n_max -= 1
+ >>>
+ >>> all(
+ ... HH(n).rank
+ ... in admissible_lorentz_ranks(HH(n).dim)
+ ... for n in range(n_max+1)
+ ... )
+ True
+
+ We know the formula for simulacra explicitly; they are given in
+ the proof of the proposition::
+
+ >>> all(
+ ... HH(n).signature() == K.signature()
+ ... for n in range(3,100)
+ ... if (K3 := RN(2*n**2 - 3*n - 2))
+ ... and (K := DirectSum([L(2*n+2), K3]))
+ ... )
+ True
+
+ Further checks of the low-dimensional formulas using the
+ :meth:`SymmetricCone.simulacra` method::
+
+ >>> from sql import max_cone_dim
+ >>> mcd = max_cone_dim()
+
+ >>> K = DirectSum([L(8), RN(7)])
+ >>> skip = ( HH(3).dim > mcd )
+ >>> skip or K in HH(3).simulacra()
+ True
+
+ >>> K = DirectSum([L(10), RN(18)])
+ >>> skip = ( HH(4).dim > mcd )
+ >>> skip or K in HH(4).simulacra()
+ True
+
+ >>> K = DirectSum([L(12), RN(33)])
+ >>> skip = ( HH(5).dim > mcd )
+ >>> skip or K in HH(5).simulacra()
+ True
+
+ """
+ from sql import max_cone_dim
+
+ # Figure out how big "n" can be if we want to use the database of
+ # cached cones (the ``simulacra`` method uses it implicitly).
+ n_max = 0
+ mcd = max_cone_dim()
+ while HH(n_max).dim <= mcd:
+ n_max += 1
+ n_max -= 1
+
+ n_min = 3
+ if n_max <= n_min:
+ # No cached cones?
+ return True
+
+ return all( HH(n).simulacra() for n in range(n_min, n_max+1) )
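The simulacrum ``DirectSum([L(2*n+2), RN(2*n**2 - 3*n - 2)])`` from the proof can likewise be checked arithmetically. A sketch, assuming ``HH(n)`` has dimension ``2*n**2 - n`` and rank ``4*n**2`` (consistent with the polynomials quoted earlier in this file):

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

for n in range(3, 100):
    k = 2 * n * n - 3 * n - 2                  # RN factor dimension
    assert (2 * n + 2) + k == 2 * n * n - n    # HH(n).dim
    assert f(2 * n + 2) + k == 4 * n * n       # HH(n).rank
print("ok")
```

At ``n == 3`` this is exactly the ``DirectSum([L(8), RN(7)])`` case from the docstring.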
+
+
+def test_proposition7() -> bool:
+ r"""
+ Test the statement of Proposition 7.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_proposition7()
+ True
+
+ From the proof of Proposition 7, we know that ``HO(3)`` has a
+ symmetric simulacrum with only Lorentz factors. Moreover when ``n
+ < 3``, ``HO(n)`` *is* a Lorentz cone. In either case, ``HO(n)``
+ should share its signature with a sum of Lorentz cones. We begin
+ by computing the largest ``n`` for which we have the corresponding
+ sum-of-Lorentz-cone data cached::
+
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> n_max = -1
+ >>> mlrd = max_lorentz_rank_dim()
+ >>> while n_max <= 3 and HO(n_max).dim <= mlrd:
+ ... n_max += 1
+ >>> n_max -= 1
+ >>>
+ >>> all(
+ ... HO(n).rank
+ ... in admissible_lorentz_ranks(HO(n).dim)
+ ... for n in range(n_max+1)
+ ... )
+ True
+
+ We know the formula for one simulacrum explicitly; it is given
+ in the proof of the proposition::
+
+ >>> K = DirectSum([L(11),L(5),L(3),RN(8)])
+ >>> K.signature() == HO(3).signature()
+ True
+
+ Repeat with the cached simulacra data::
+
+ >>> from sql import have_cone_dim
+ >>> (not have_cone_dim(HO(3).dim)) or K in HO(3).simulacra()
+ True
+
+ """
+ from sql import max_cone_dim
+
+ if HO(3).dim > max_cone_dim():
+ # no data
+ return True
+
+ return bool(HO(3).simulacra())
+
+
+def test_theorem2() -> bool:
+ r"""
+ Test the statement of Theorem 2.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_theorem2()
+ True
+
+ Implicit in the proof of this theorem is the fact that every
+ irreducible symmetric cone other than ``HC(3)`` shares its
+ signature with a sum of Lorentz cones::
+
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> Ks = ( random_irreducible_cone() for _ in range(100) )
+ >>> all ( K.rank in admissible_lorentz_ranks(K.dim)
+ ... or K == HC(3)
+ ... for K in Ks
+ ... if K.dim <= max_lorentz_rank_dim() )
+ True
+
+ """
+ from sql import max_cone_dim
+ Ks = ( random_irreducible_cone() for _ in range(100) )
+ return all ( bool(K.simulacra())
+ or K == HC(3)
+ or isinstance(K,L)
+ for K in Ks
+ if K.dim <= max_cone_dim() )
+
+
+def test_lemma2() -> bool:
+ r"""
+ Test the statement of Lemma 2.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_lemma2()
+ True
+
+ """
+ from sql import max_cone_dim
+ Ks = ( random_cone() for _ in range(100) )
+
+ return all(
+ K.simulacra()
+ or
+ all(not K_i.simulacra() for K_i in K.factors())
+ for K in Ks
+ if K.dim <= max_cone_dim()
+ )
+
+
+def test_corollary2() -> bool:
+ r"""
+ Test the statement of Corollary 2.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_corollary2()
+ True
+
+ """
+ from sql import max_cone_dim
+
+ Ks = ( random_cone() for _ in range(100) )
+ return all(
+ bool(K.simulacra())
+ for K in Ks
+ if K.dim <= max_cone_dim()
+ and K.factors().count(HC(3)) > 1
+ )
+
+
+def test_theorem3() -> bool:
+ r"""
+ Test the statement of Theorem 3.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The result should hold::
+
+ >>> test_theorem3()
+ True
+
+ """
+ # The fact that every cone shares a signature with a sum of
+ # Lorentz cones and/or HC(3) is the basis for the function
+ # sql.admissible_ranks(), but here we test it directly using
+ # partitions.
+ K = random_cone()
+ while K.dim > 60:
+ # Make sure we don't have to partition anything too big (it
+ # takes a looong time). Partitioning 60 can be done in a few
+ # seconds, but e.g. 80 may crash the machine.
+ K = random_cone()
+
+ r = False
+ r |= any( partition_rank(p) == K.rank
+ for p in partitions(K.dim, include_two=False) )
+
+ dim_rest = K.dim - 9
+ if dim_rest >= 0:
+ # Try with an HC(3) factor
+ r |= any( partition_rank(p) == (K.rank - 17)
+ for p in partitions(dim_rest, include_two=False) )
+ return r
+
+
+def test_theorem3_example() -> bool:
+ r"""
+ Test the example given directly after Theorem 3.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The relationships in the example should hold::
+
+ >>> test_theorem3_example()
+ True
+
+ """
+ result = True
+
+ # The two simulacra of HO(3) mentioned in the example
+ K1 = DirectSum([L(11),L(3)] + [L(5),RN(8)])
+ K2 = DirectSum([L(11),L(3)] + [L(4)] + 3*[L(3)])
+
+ # Signature comparison, suffices because they're obviously
+ # not isomorphic
+ result &= ( K1.signature() == K2.signature() )
+ result &= ( K1.signature() == HO(3).signature() )
+
+ # And repeat using cached simulacra if possible
+ from sql import have_cone_dim
+ if have_cone_dim(HO(3).dim):
+ result &= K1 in K2.simulacra()
+ result &= K2 in K1.simulacra()
+
+ result &= K1 in HO(3).simulacra()
+ result &= HO(3) in K1.simulacra()
+
+ result &= K2 in HO(3).simulacra()
+ result &= HO(3) in K2.simulacra()
+
+ return result
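The two simulacra in this example agree, at minimum, in dimension and Lyapunov rank, and that much is checkable by hand. A sketch using only the assumed Lorentz-rank closed form ``(d**2 - d + 2)/2`` and ``RN(k).rank == k``:

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

# K1 = L(11) + L(3) + L(5) + RN(8);  K2 = L(11) + L(3) + L(4) + 3*L(3)
dim1, rank1 = 11 + 3 + 5 + 8, f(11) + f(3) + f(5) + 8
dim2, rank2 = 11 + 3 + 4 + 3 * 3, f(11) + f(3) + f(4) + 3 * f(3)
assert dim1 == dim2 == 27          # both match HO(3).dim == 27
assert rank1 == rank2              # and they share a Lyapunov rank
print("ok")
```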
+
+
+def test_lemma3() -> bool:
+ r"""
+ Test Lemma 3.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_lemma3()
+ True
+
+ Derive the result by minimizing ``K.rank`` (for a fixed ``K.dim``)
+ in :func:`lowerbound2`::
+
+ >>> from sympy import symbols
+ >>> n,d = symbols("n,d", integer=True, positive=True)
+ >>> fix_floor(n >= 2 + L(d+1).rank - L(d).rank).expand()
+ n >= d + 2
+
+ """
+ from random import randint
+ K_n_pairs = ( (random_cone(),randint(0,1000))
+ for _ in range(100) )
+ return all(
+ (n >= K.dim + 2) or any([n < lowerbound1(K),
+ n < lowerbound2(K),
+ n < lowerbound3(K)])
+ for (K,n) in K_n_pairs
+ )
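The simplification to ``n >= d + 2`` in the docstring works because consecutive Lorentz-cone ranks telescope: ``L(d+1).rank - L(d).rank == d``. A quick numeric confirmation, assuming the closed form ``L(d).rank == (d**2 - d + 2)/2``:

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

# Consecutive Lorentz ranks differ by exactly d, so the bound
# n >= 2 + L(d+1).rank - L(d).rank collapses to n >= d + 2.
assert all(f(d + 1) - f(d) == d for d in range(1, 10000))
print("ok")
```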
+
+
+def test_lemma4() -> bool:
+ r"""
+ Test Lemma 4.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_lemma4()
+ True
+
+ """
+ from random import randint
+ from sql import admissible_ranks, max_lorentz_rank_dim
+
+ # Repeat the check 100 times with random values. If any of them
+ # fail, we set the result to False before returning it.
+ result = True
+ Ks = ( random_cone() for _ in range(100) )
+
+ for K in Ks:
+ # The check for a single cone. We start with a random cone
+ # ``K``, then add a Lorentz cone factor to it whose dimension
+ # is bounded below according to the Lemma.
+ n_min = lowerbound1(K)
+
+ # We're going to use sql.admissible_ranks() to find signature
+ # matches for K + L(n), so we have to make sure that we have
+ # rank data cached up to dimension ``K.dim + n`` at least.
+ n_max = max_lorentz_rank_dim() - K.dim
+ if n_min > n_max:
+ continue
+
+ # There's room for an ``L(n)``, so let's add it.
+ n = randint(n_min, n_max)
+ Ln_plus_K = DirectSum([K, L(n)])
+
+ # The largest possible value of k is K.dim: if k exceeds
+ # K.dim, then we wind up searching for a J whose dimension is
+ # negative. The Lemma requires k >= 1, though, so we may skip
+ # trivial K.
+ if K.dim == 0:
+ continue
+
+ # Otherwise, pick a k at random.
+ k = randint(1, K.dim)
+
+ # The dimension and rank that J must have in the Lemma.
+ J_dim = Ln_plus_K.dim - L(n+k).dim
+ J_rank = Ln_plus_K.rank - L(n+k).rank
+
+ # Does such a J exist? It should not.
+ result &= J_rank not in admissible_ranks(J_dim)
+
+ return result
+
+
+
+def test_lemma5() -> bool:
+ r"""
+ Test Lemma 5.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_lemma5()
+ True
+
+ Test the strengthened version of Proposition 3. When ``a >= b >= c
+ >= 0``, this should be nonnegative::
+
+ >>> from sympy import symbols, simplify
+ >>> a,b,c = symbols("a,b,c", integer=True)
+ >>> simplify(fix_floor(f(a+c) + f(b-c) - f(a) - f(b)))
+ c*(a - b + c)
+
+ """
+ from random import randint
+
+ Ks = ( random_cone() for _ in range(10) )
+
+ result = True
+ for K in Ks:
+ # Partitioning 60 can be done in a few seconds, but e.g. 80
+ # may crash the machine. Keep in mind that we're partitioning
+ # n + dim(K), and that this is multiplied (at worst) by the
+ # length of Ks!
+ min_n = max(lowerbound1(K), lowerbound2(K), lowerbound3(K))
+ max_n = 60 - K.dim
+
+ if min_n > max_n:
+ # This is a pretty tight window. For example we know that
+ # min_n = 31 for K == HC(3), but then K.dim == 9 already.
+ continue
+ n = randint(min_n, max_n)
+
+ J = DirectSum([K,L(n)])
+ for p in partitions(n + K.dim, n-1, include_two=False):
+ result &= (partition_rank(p) < J.rank)
+
+ return result
+
+
+
+def test_theorem4() -> bool:
+ r"""
+ Do nothing and return ``True``.
+
+ Theorem 5 is a stronger version of Theorem 4, so there is no point
+ in testing Theorem 4 on its own. There are, however, some
+ doctests for the proof strategy below.
+
+ Examples
+ --------
+
+ >>> test_theorem4()
+ True
+
+ In the proof of this theorem, we "replace" the non-Lorentz
+ irreducible factors with sums of Lorentz cones. The replacements
+ should agree in dimension, and have a dominant Lyapunov rank. The
+ first example we give is for the 3x3 complex PSD cone::
+
+ >>> L(9).dim == HC(3).dim
+ True
+ >>> L(9).rank > HC(3).rank
+ True
+
+ For all other non-Lorentz factors, we verify explicitly that any
+ of their all-Lorentz simulacra will have no factor of dimension
+ ``n`` or greater::
+
+ >>> from sql import max_cone_dim
+ >>> K = random_cone()
+ >>> min_n = max(lowerbound1(K), lowerbound2(K), lowerbound3(K))
+ >>> max_n = max_cone_dim() - K.dim
+ >>> all(
+ ... (not isinstance(f,L)) or f.dim < n
+ ... for n in range(min_n, max_n)
+ ... for J in DirectSum([K,L(n)]).simulacra()
+ ... for I in J.factors()
+ ... if (not isinstance(I,L)) and I.dim >= n
+ ... for I_prime in I.simulacra()
+ ... for f in I_prime.factors()
+ ... )
+ True
+
+ As part of the argument, we claim that all real, complex, and
+ quaternion PSD factors ``I`` satisfy the bound ``12*I.dim >=
+ 5*I.rank``. We confirm this with a random sample of irreducible
+ cones, and set ``n`` large enough to ensure that the factors we
+ generate are not isomorphic to Lorentz cones::
+
+ >>> all(
+ ... 12*K.dim >= 5*K.rank
+ ... for _ in range(100)
+ ... if (K := random_irreducible_cone(3,20))
+ ... and not isinstance(K, (L,HO))
+ ... )
+ True
+
+ ...Although this is easier to just check::
+
+ >>> from sympy import symbols
+ >>> n = symbols("n", integer=True, positive=True)
+ >>> K = SymmetricCone(0)
+ >>> # HR(n)
+ >>> K.dim = (n**2 + n)/2
+ >>> K.rank = n**2
+ >>> K.rank = n**2
+ >>> 12*K.dim - 5*K.rank
+ n**2 + 6*n
+ >>> # HC(n)
+ >>> K.dim = n**2
+ >>> K.rank = 2*n**2 - 1
+ >>> 12*K.dim - 5*K.rank
+ 2*n**2 + 5
+ >>> # HH(n)
+ >>> K.dim = 2*n**2 - n
+ >>> K.rank = 4*n**2
+ >>> 12*K.dim - 5*K.rank
+ 4*n**2 - 12*n
+
+ We also claim that ``5*L(n).rank >= 12*(2*n - 1)`` as part of the
+ argument. This holds under our assumption that ``n >= 10``, and
+ no smaller bound will work::
+
+ >>> all(
+ ... 5*L(n).rank >= 12*(2*n - 1)
+ ... for n in range(10, 100)
+ ... )
+ True
+ >>> 5*L(9).rank >= 12*(2*9 - 1)
+ False
+
+ """
+ return True
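The sharpness of the ``n >= 10`` cutoff follows from the quadratic ``5*L(n).rank - 12*(2*n - 1) == (5*n**2 - 53*n + 34)/2``, whose larger root lies just below 10. A sketch, assuming the Lorentz-rank closed form:

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

g = lambda n: 5 * f(n) - 12 * (2 * n - 1)

# g(n) is the quadratic (5n**2 - 53n + 34)/2; it is negative for
# 1 <= n <= 9 and nonnegative from n = 10 on, so no smaller bound works.
assert all(2 * g(n) == 5 * n * n - 53 * n + 34 for n in range(200))
assert all(g(n) < 0 for n in range(1, 10))
assert all(g(n) >= 0 for n in range(10, 200))
print("ok")
```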
+
+
+def test_theorem5() -> bool:
+ r"""
+ Test Theorem 5.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_theorem5()
+ True
+
+ Test the claim that if we drop the ``n >= 10`` condition on
+ Theorem 4, then the only new counterexample we get is ``HR(3) ~
+ L(4) + L(2)`` at ``n == 4``. Thus the real condition is ``n !=
+ 4``, for which we get no counterexamples::
+
+ >>> from sql import all_cones_of_dim, max_cone_dim
+ >>> actual = []
+ >>> # skip check if not enough data; the largest dimension
+ >>> # we need is 14 when n=9 and d=5.
+ >>> skip = ( max_cone_dim() < 14 )
+ >>> if not skip:
+ ... for n in range(10):
+ ... for d in range(max_dimK(n)+1):
+ ... for K in all_cones_of_dim(d):
+ ... if n < lowerbound1(K) or n < lowerbound2(K):
+ ... continue
+ ... C = DirectSum([K, L(n)])
+ ... for s in C.simulacra():
+ ... if L(n) not in s.factors():
+ ... actual.append( (n, (C,s)) )
+ >>>
+ >>> expected = [ (4, (DirectSum([L(2),L(4)]), HR(3))) ]
+ >>> skip or (actual == expected)
+ True
+
+ For a given ``K.dim``, :func:`lowerbound1` is minimized by the
+ nonnegative orthant (with value ``n >= 2``), and
+ :func:`lowerbound2` is minimized by the Lorentz cone with value
+ ``K.dim + 2`` (proof: we are either maximizing beta(K) or
+ minimizing the Lyapunov rank in fixed dimensions). The second
+ bound is therefore increasing with ``K.dim`` and obviously
+ dominates the first. As a result, we can use :func:`lowerbound2`
+ to determine the first potentially valid ``n`` corresponding to
+ any ``K.dim``. Moreover we can use this to compute the first
+ ``K.dim`` such that ``n + K.dim`` will exceed the largest
+ dimension we have cached: basically we just add ``K.dim`` to the
+ bound and set it greater than the largest dimension we have
+ cached, i.e. we solve ``K.dim + 2 + K.dim > max_cone_dim()``::
+
+ >>> lowerbound1(RN(3))
+ 2
+ >>> lowerbound1(RN(8))
+ 2
+ >>> lowerbound1(RN(22))
+ 2
+ >>> lowerbound1(RN(57))
+ 2
+ >>> from sympy import symbols, expand
+ >>> d = symbols("d", integer=True, positive=True)
+ >>> expand(fix_floor(lowerbound2(L(d))))
+ d + 2
+
+ Now we begin to verify the techniques used in the proof. First we
+ define the function that we'll use to confirm that most
+ non-Lorentz factors have a Lyapunov rank too small to be
+ considered in the theorem. We need to be careful that ``n +
+ K.dim`` is at least large enough to hold a particular factor,
+ otherwise we might get false positives::
+
+ >>> from itertools import chain
+ >>> from sympy import symbols
+ >>> n,d = symbols("n,d", integer=True, positive=True)
+ >>> f = lambda x: (x**2 - x + 2)/2 # no integer division... sympy
+ >>> def rank_too_small(X):
+ ... g = f(n) + d - X.rank - f(n+d-X.dim)
+ ... return all( g.subs({n:i,d:j}) > 0
+ ... for i in chain(range(0,4), range(5,10))
+ ... for j in range(max_dimK(i)+1)
+ ... if i+j-X.dim >= 0 )
+
+ The argument we use to rule out ``HR(3)``, ``HR(4)``, and
+ ``HC(3)`` factors::
+
+ >>> X = HR(3)
+ >>> rank_too_small(X)
+ True
+
+ >>> X = HR(4)
+ >>> rank_too_small(X)
+ True
+
+ >>> X = HC(3)
+ >>> rank_too_small(X)
+ True
+
+ The list above is comprehensive::
+
+ >>> [ c(n)
+ ... for c in [HR, HC, HH]
+ ... for n in [3,4,5]
+ ... if c(n).dim <= 14 ]
+ [HR(3), HR(4), HC(3)]
+
+ Check the relationships between the lower bounds on ``n``. Each
+ can be violated while the others are satisfied, and in each case,
+ Theorem 5 fails. This is discussed subsequent to Theorem 5 in the
+ paper::
+
+ >>> from random import randint
+ >>> m = randint(5,20)
+ >>> K = L(m)
+ >>> lowerbound2(K) < lowerbound1(K)
+ True
+ >>> n = lowerbound1(K) - 1
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (False, True, True)
+ >>> ( DirectSum([L(m), L(n)]).signature()
+ ... ==
+ ... DirectSum([RN(m-1), L(n+1)]).signature() )
+ True
+
+ >>> K = RN(2)
+ >>> n = 4
+ >>> lowerbound1(K) < lowerbound2(K)
+ True
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, True, False)
+ >>> DirectSum([K,L(n)]).signature() == HR(3).signature()
+ True
+
+ >>> K = HC(3)
+ >>> lowerbound1(K) < lowerbound2(K)
+ True
+ >>> n = 30
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, False, True)
+ >>> ( DirectSum([K,L(n)]).signature()
+ ... ==
+ ... DirectSum([L(29),L(10)]).signature() )
+ True
+
+ """
+ # Use cached data so that we can go beyond the n=15 case
+ # and check the result for Theorem 4, too.
+ from sql import all_cones_of_dim, max_cone_dim
+ result = True
+
+ # Explained in the docstring. We don't _really_ have to stop where
+ # L(n)+K will have the max cached dim, because anything larger
+ # will appear to have no simulacra, and (for the sake of the
+ # theorem) that's fine. But we do have to stop _somewhere_, so we
+ # might as well stop here?
+ max_d = (max_cone_dim() - 2) // 2
+
+ for d in range(1, max_d+1):
+ min_n = max(lowerbound1(RN(d)), lowerbound2(L(d)))
+ max_n = max_cone_dim() - d
+ for n in range(min_n, max_n+1):
+ if n == 4: continue
+ for K in all_cones_of_dim(d):
+ if n < lowerbound1(K): continue
+ if n < lowerbound2(K): continue
+ lhs = DirectSum([K,L(n)])
+ for J in lhs.simulacra():
+ fs = list(J.factors())
+ # remove() raises an error if L(n) isn't a factor,
+ # so this guarantees that L(n) is one.
+ fs.remove(L(n))
+ J_prime = DirectSum(fs)
+ result &= J_prime in K.simulacra()
+
+ return result
+
+
+def test_corollary3() -> bool:
+ r"""
+ Test Corollary 3.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_corollary3()
+ True
+
+ Check the claim for ``HC(3)`` directly, using whatever cached
+ cones are available::
+
+ >>> from sql import max_cone_dim
+ >>> n_min = 31
+ >>> n_max = 0
+ >>> n_max = max_cone_dim() - 9
+ >>> all( not DirectSum([HC(3),L(n)]).simulacra()
+ ... for n in range(n_min, n_max+1) )
+ True
+
+ The paper notes that only sums of Lorentz cones need to be checked
+ for simulacra of ``HC(3) + L(n)``. Here we repeat the check above
+ using our cached Lorentz ranks (which are easier to compute)::
+
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>> n_min = 31
+ >>> n_max = max_lorentz_rank_dim() - 9
+ >>>
+ >>> all(
+ ... (17+L(n).rank)
+ ... not in admissible_lorentz_ranks(9+n)
+ ... for n in range(n_min, n_max+1)
+ ... )
+ True
+
+ """
+ from random import randint
+ from sql import all_cones_of_dim, max_cone_dim
+
+ # We'll logical-and this with the result from each test case.
+ result = True
+
+ # Use the same trick we used in test_theorem5() to decide where to
+ # stop.
+ max_d = (max_cone_dim() - 2) // 2
+
+ for d in range(1, max_d+1):
+ max_n = max_cone_dim() - d # leave room for K
+ for K in all_cones_of_dim(d):
+ min_n = max(lowerbound1(K),lowerbound2(K))
+ if min_n > max_n:
+ # The dimension of L(n)+K is guaranteed to exceed the
+ # dimensions we have cached, so skip this cone.
+ continue
+ if min_n == 4 and max_n == 4:
+ # We're going to get stuck generating random 4s forever
+ # (because 4 is not a valid value for n)
+ continue
+ if not K.simulacra():
+ n = 4
+ while n == 4:
+ n = randint(min_n, max_n)
+ # the corollary says that L(n)+K should have no
+ # simulacra
+ result &= not DirectSum([K,L(n)]).simulacra()
+
+ return result
+
+
+def test_proposition8() -> bool:
+ r"""
+ Test Proposition 8.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_proposition8()
+ True
+
+ Confirm the table for ``n <= 30`` using sums of Lorentz cones
+ (this suffices)::
+
+ >>> from sql import admissible_lorentz_ranks, max_lorentz_rank_dim
+ >>>
+ >>> d = {
+ ... 2: [3,3,5],
+ ... 3: [4,4,4],
+ ... 4: [2,2,3,6],
+ ... 5: [1,3,4,6],
+ ... 6: [5,5,5],
+ ... 7: [4,6,6],
+ ... 8: [1,2,2,3,9],
+ ... 9: [1,2,7,8],
+ ... 10: [3,7,9],
+ ... 15: [2,8,14],
+ ... 18: [13,14],
+ ... 21: [11,19],
+ ... 22: [1,9,21],
+ ... 30: [10,29]
+ ... }
+ >>>
+ >>> def check(n):
+ ... J = DirectSum([ HC(3), L(n) ])
+ ... if n in d:
+ ... K = DirectSum([ L(i) for i in d[n] ])
+ ... return(
+ ... J.rank in admissible_lorentz_ranks(J.dim)
+ ... and
+ ... K.signature() == J.signature()
+ ... )
+ ... else:
+ ... return J.rank not in admissible_lorentz_ranks(J.dim)
+ >>>
+ >>> n_max = max_lorentz_rank_dim() - 9
+ >>> all( check(n) for n in range(n_max+1) )
+ True
+
+ """
+ from sql import max_cone_dim
+
+ # the list of "n" where we expect to find simulacra
+ expected_n = [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 18, 21, 22, 30]
+
+ # We're going to be calling simulacra() on L(n)+HC(3), so make
+ # sure we don't make "n" so large that we exceed what is cached.
+ n_max = max_cone_dim() - 9
+
+ return all(
+ (not DirectSum([HC(3),L(n)]).simulacra())
+ or
+ (n in expected_n)
+ for n in range(n_max+1)
+ )
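Each row of the table ``d`` in the docstring can be verified arithmetically: the parts must sum to ``9 + n`` in dimension and to ``17 + L(n).rank`` in Lyapunov rank (``9`` and ``17`` being the dimension and rank of ``HC(3)`` used elsewhere in this file). A sketch assuming the Lorentz-rank closed form:

```python
# Lorentz cone Lyapunov rank, per the assumed closed form.
def f(d):
    return (d * d - d + 2) // 2

table = {
    2: [3, 3, 5],       3: [4, 4, 4],      4: [2, 2, 3, 6],
    5: [1, 3, 4, 6],    6: [5, 5, 5],      7: [4, 6, 6],
    8: [1, 2, 2, 3, 9], 9: [1, 2, 7, 8],   10: [3, 7, 9],
    15: [2, 8, 14],     18: [13, 14],      21: [11, 19],
    22: [1, 9, 21],     30: [10, 29],
}
for n, parts in table.items():
    assert sum(parts) == 9 + n                    # dim of HC(3) + L(n)
    assert sum(f(i) for i in parts) == 17 + f(n)  # rank of HC(3) + L(n)
print("ok")
```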
+
+
+def test_proposition9() -> bool:
+ r"""
+ Test Proposition 9.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_proposition9()
+ True
+
+ Find the pairs of ``m`` and ``n`` that aren't a corollary of
+ Theorem 5. The cases where ``m == 0`` are trivial::
+
+ >>> [
+ ... (m,n)
+ ... for m in range(1,100)
+ ... for n in range(100)
+ ... if (
+ ... n < lowerbound1(L(m))
+ ... or n < lowerbound2(L(m))
+ ... or n == 4
+ ... )
+ ... and n >= (m**2 - 3*m + 6)/2
+ ... ]
+ [(1, 2), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 5)]
+
+ Verify the cases mentioned explicitly in the proof. First, the
+ ``m != 2`` cases where there are no simulacra::
+
+ >>> from sql import max_cone_dim
+ >>> mcd = max_cone_dim()
+
+ >>> m = 4
+ >>> K = L(m)
+ >>> n = 5
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, False, True)
+ >>> if K.dim + n <= mcd:
+ ... DirectSum([K,L(n)]).simulacra()
+ ... else:
+ ... ()
+ ()
+
+ >>> m = 3
+ >>> K = L(m)
+ >>> n = 4
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, False, False)
+ >>> if K.dim + n <= mcd:
+ ... DirectSum([K,L(n)]).simulacra()
+ ... else:
+ ... ()
+ ()
+ >>> n = 3
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, False, True)
+ >>> if K.dim + n <= mcd:
+ ... DirectSum([K,L(n)]).simulacra()
+ ... else:
+ ... ()
+ ()
+
+ >>> m = 1
+ >>> K = L(m)
+ >>> n = 2
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, False, True)
+ >>> if K.dim + n <= mcd:
+ ... DirectSum([K,L(n)]).simulacra()
+ ... else:
+ ... ()
+ ()
+ >>> n = 4
+ >>> (n >= lowerbound1(K), n >= lowerbound2(K), n != 4)
+ (True, True, False)
+ >>> if K.dim + n <= mcd:
+ ... DirectSum([K,L(n)]).simulacra()
+ ... else:
+ ... ()
+ ()
+
+ And now the ``m == 2`` case where there is exactly one
+ counterexample::
+
+ >>> from sql import max_cone_dim
+ >>> mcd = max_cone_dim()
+ >>> m = 2
+ >>> K = L(m)
+ >>> ns = [2, 3]
+ >>> any( n < lowerbound1(K) for n in ns ) # not violated
+ False
+ >>> all( n < lowerbound2(K) for n in ns ) # violated
+ True
+ >>> sims = []
+ >>> for n in ns:
+ ... if K.dim + n <= mcd:
+ ... sims.append(DirectSum([K,L(n)]).simulacra())
+ ... else:
+ ... sims.append(())
+ >>> sims
+ [(), ()]
+
+ >>> from sql import max_cone_dim
+ >>> expected = (HR(3),)
+ >>> J = DirectSum([K,L(4)])
+ >>> skip = ( J.dim > max_cone_dim() )
+ >>> skip or (J.simulacra() == expected)
+ True
+
+ Confirm that the stated bound actually comes from
+ :func:`lowerbound1`::
+
+ >>> from sympy import symbols
+ >>> m = symbols("m", integer=True, positive=True)
+ >>> fix_floor(lowerbound1(L(m))) == (m**2 - 3*m + 6)/2
+ True
+
+ The ``L(m) + L(n)`` cases with ``m,n <= 2`` are singled out in
+ a bullet point::
+
+ >>> all(
+ ... not list(partition_simulacra(p))
+ ... for k in range(5)
+ ... for p in partitions(k)
+ ... )
+ True
+
+ The ``L(3) + L(2)`` case is also singled out in a bullet point::
+
+ >>> list(partition_simulacra([3,2]))
+ []
+
+ """
+ from sql import max_cone_dim
+ mcd = max_cone_dim()
+
+ return all(
+ not DirectSum([L(m),L(n)]).simulacra()
+ for m in range(1, mcd + 1)
+ for n in range(lowerbound1(L(m)), mcd - m)
+ if m != 2
+ )
+
+
+def test_lemma6() -> bool:
+ r"""
+ Test Lemma 6.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The implication in the result should hold::
+
+ >>> test_lemma6()
+ True
+
+ For ``n <= 2``, there shouldn't be any simulacra in the first
+ place::
+
+ >>> from sql import max_cone_dim
+ >>> mcd = max_cone_dim()
+ >>> [ K.simulacra() if 2*n <= mcd else ()
+ ... for n in range(3)
+ ... if (K := DirectSum(2*[L(n)])) ]
+ [(), (), ()]
+
+ """
+ from sql import all_cones_of_dim, max_cone_dim
+
+ # For ALL cones with simulacra, there EXISTS a simulacrum such
+ # that ALL of its factors are HC(3) or Lorentz cones.
+ n_max = max_cone_dim() // 2
+ return all(
+ not K.simulacra()
+ or
+ any(
+ all( f == HC(3) or isinstance(f,L) for f in J.factors() )
+ for J in K.simulacra()
+ )
+ for n in range(n_max+1)
+ if (K := DirectSum(2*[L(n)]))
+ )
+
+
+def test_proposition10() -> bool:
+ r"""
+ Test Proposition 10.
+
+ Returns
+ -------
+
+ ``True`` if the test passed, and ``False`` otherwise.
+
+ Examples
+ --------
+
+ The characterization in the result should hold::
+
+ >>> test_proposition10()
+ True
+
+ In addition to the ``simulacra`` check, Lemma 6 lets us use our
+ cached partitions, which allows us to test all the way up to
+ ``n == 100``.
+
+ Define the "constants" we use in the proof::
+
+ >>> m = lambda n: (n // 5)
+ >>> k = lambda n: n - 5*m(n)
+ >>> r = lambda n: m(n) - k(n)**2 + 1 - 3*((m(n) - k(n)**2 + 1) // 3)
+ >>> alpha = lambda n: (m(n) - 4*k(n)**2 + 15*k(n) - 14 - r(n)) / 3
+ >>> gamma = lambda n: 2*m(n) - 22*k(n) - (4*m(n) - 16*k(n)**2 - 68 + 5*r(n))/3
+
+ And check some of their properties::
+
+ >>> n_max = 200
+ >>> all( n == 5*m(n) + k(n) for n in range(n_max+1) )
+ True
+ >>> all( alpha(n).is_integer() for n in range(n_max+1) )
+ True
+ >>> all( gamma(n).is_integer() for n in range(n_max+1) )
+ True
+ >>> all( alpha(n) >= 0 for n in range(100, n_max+1) )
+ True
+ >>> all( gamma(n) >= 0 for n in range(100, n_max+1) )
+ True
+
+ Finally, we check the dimension and Lyapunov rank of our
+ simulacrum symbolically::
+
+ >>> from sympy import expand, symbols
+ >>> from partitions import f
+ >>> n,m = symbols("n,m", integer=True, positive=True)
+ >>> k,r = symbols("k,r", integer=True, nonnegative=True)
+ >>> alpha = (m - 4*k**2 + 15*k - 14 - r) / 3
+ >>> gamma = 2*m - 22*k - (4*m - 16*k**2 - 68 + 5*r)/3
+ >>> J_dim = (7*m + k) + (m + 3*k - 4) + 4*alpha + 3*r + gamma
+ >>> J_rank = fix_floor(f(7*m + k)) + fix_floor(f(m + 3*k - 4)) + alpha*f(4) + r*f(3) + gamma
+ >>> K_dim = 2*n
+ >>> K_rank = 2*fix_floor(f(n))
+ >>> (K_dim - J_dim).subs({n: 5*m + k})
+ 0
+ >>> expand((K_rank - J_rank).subs({n: 5*m + k}))
+ 0
+
+ Since the target cone has exactly two factors, it suffices (per
+ the proof) to check partitions with/without an ``HC(3)`` offset.
+ We do this only up to ``n == 18`` because every greater ``n`` leads
+ to simulacra, and the existence of a simulacrum is much easier to
+ verify by just writing down its factors::
+
+ >>> from partitions import partition_rank, partition_simulacra
+ >>> n_max = 18
+ >>> n_without_simulacra = []
+ >>> for n in range(n_max+1):
+ ... p = [n,n]
+ ... simcount = len(list(partition_simulacra(p)))
+ ... pred = lambda q: partition_rank(q) == (partition_rank(p) - 17)
+ ... if n >= 5:
+ ... simcount += len(list(
+ ... filter(pred, partitions(2*n - 9, include_two=False))
+ ... ))
+ ... if simcount == 0:
+ ... n_without_simulacra.append(n)
+ >>> n_without_simulacra
+ [0, 1, 2, 3, 5, 6, 7, 11, 12, 13, 18]
+
+ Finally, we demonstrate simulacra for all ``n`` between ``0`` and
+ ``100`` that are not in the no-simulacra list::
+
+ >>> d = {
+ ... 4: [1, 2, 5],
+ ... 8: [1, 5, 10],
+ ... 9: [1, 2, 3, 12],
+ ... 10: [2, 5, 13],
+ ... 14: [1, 10, 17],
+ ... 15: [1, 3, 6, 20],
+ ... 16: [2, 2, 2, 2, 2, 22],
+ ... 17: [2, 4, 5, 23],
+ ... 19: [1, 2, 2, 2, 5, 26],
+ ... 20: [2, 13, 25],
+ ... 21: [1, 2, 12, 27],
+ ... 22: [1, 17, 26],
+ ... 23: [1, 3, 12, 30],
+ ... 24: [1, 2, 2, 2, 2, 6, 33],
+ ... 25: [2, 3, 12, 33],
+ ... 26: [5, 13, 34],
+ ... 27: [2, 2, 2, 12, 36],
+ ... 28: [1, 5, 13, 37],
+ ... 29: [1, 2, 2, 2, 2, 2, 7, 40],
+ ... 30: [1, 2, 3, 4, 9, 41],
+ ... 31: [3, 8, 9, 42],
+ ... 32: [1, 26, 37],
+ ... 33: [1, 3, 20, 42],
+ ... 34: [2, 25, 41],
+ ... 35: [2, 8, 13, 47],
+ ... 36: [3, 3, 19, 47],
+ ... 37: [1, 2, 2, 5, 14, 50],
+ ... 38: [1, 2, 4, 19, 50],
+ ... 39: [1, 2, 27, 48],
+ ... 40: [2, 2, 4, 19, 53],
+ ... 41: [2, 4, 23, 53],
+ ... 42: [1, 2, 3, 3, 19, 56],
+ ... 43: [2, 5, 23, 56],
+ ... 44: [1, 37, 50],
+ ... 45: [1, 3, 30, 56],
+ ... 46: [5, 29, 58],
+ ... 47: [1, 2, 2, 2, 26, 61],
+ ... 48: [1, 2, 2, 2, 6, 18, 65],
+ ... 49: [2, 2, 4, 26, 64],
+ ... 50: [1, 2, 10, 20, 67],
+ ... 51: [2, 3, 33, 64],
+ ... 52: [2, 41, 61],
+ ... 53: [2, 5, 31, 68],
+ ... 54: [1, 3, 4, 30, 70],
+ ... 55: [2, 2, 2, 5, 26, 73],
+ ... 56: [10, 29, 73],
+ ... 57: [2, 2, 2, 36, 72],
+ ... 58: [1, 50, 65],
+ ... 59: [1, 3, 42, 72],
+ ... 60: [1, 2, 2, 2, 2, 33, 78],
+ ... 61: [4, 5, 34, 79],
+ ... 62: [1, 2, 3, 4, 33, 81],
+ ... 63: [1, 2, 48, 75],
+ ... 64: [1, 2, 2, 44, 79],
+ ... 65: [4, 7, 34, 85],
+ ... 66: [1, 2, 4, 5, 33, 87],
+ ... 67: [3, 3, 4, 37, 87],
+ ... 68: [1, 2, 7, 38, 88],
+ ... 69: [1, 2, 2, 6, 37, 90],
+ ... 70: [3, 3, 47, 87],
+ ... 71: [2, 4, 8, 34, 94],
+ ... 72: [1, 2, 3, 4, 41, 93],
+ ... 73: [1, 2, 2, 2, 2, 2, 40, 95],
+ ... 74: [1, 65, 82],
+ ... 75: [1, 3, 56, 90],
+ ... 76: [1, 2, 4, 50, 95],
+ ... 77: [2, 4, 53, 95],
+ ... 78: [1, 3, 6, 46, 100],
+ ... 79: [2, 3, 57, 96],
+ ... 80: [5, 58, 97],
+ ... 81: [1, 4, 7, 45, 105],
+ ... 82: [2, 2, 4, 53, 103],
+ ... 83: [2, 5, 56, 103],
+ ... 84: [3, 3, 59, 103],
+ ... 85: [4, 7, 50, 109],
+ ... 86: [10, 53, 109],
+ ... 87: [2, 3, 64, 105],
+ ... 88: [5, 65, 106],
+ ... 89: [1, 2, 2, 2, 61, 110],
+ ... 90: [1, 4, 6, 54, 115],
+ ... 91: [1, 7, 9, 45, 120],
+ ... 92: [1, 82, 101],
+ ... 93: [1, 2, 75, 108],
+ ... 94: [1, 2, 2, 2, 2, 61, 118],
+ ... 95: [2, 2, 4, 64, 118],
+ ... 96: [1, 2, 4, 5, 57, 123],
+ ... 97: [2, 5, 68, 119],
+ ... 98: [2, 3, 8, 57, 126],
+ ... 99: [2, 2, 2, 72, 120],
+ ... 100: [2, 85, 113]
+ ... }
+ >>> all(
+ ... K.rank == partition_rank(d[n])
+ ... for n in range(101)
+ ... if n not in n_without_simulacra
+ ... if (K := DirectSum(2*[L(n)]))
+ ... )
+ True
+
+ """
+ from sql import max_cone_dim
+ n_max = max_cone_dim() // 2
+ n_without_simulacra = [
+ n for n in range(n_max+1)
+ if (K := DirectSum([L(n)]*2))
+ and not K.simulacra()
+ ]
+
+ # We have to filter the expected result based on the
+ # max available cone dimension, too.
+ expected = [ e for e in [0, 1, 2, 3, 5, 6, 7, 11, 12, 13, 18]
+ if e <= n_max ]
+
+ return (n_without_simulacra == expected)
--- /dev/null
+r"""
+This script verifies that the live SQL database contains enough
+(consistent) information to support the results in the paper. This
+check is necessary because the test suite itself is designed to pass
+regardless of how much information the database contains (the tests
+should pass even if the database is completely empty).
+
+This script can be executed directly,
+
+ $ python verify-live-db.py
+
+and it will exit with status 0 (success) or 1 (failure) after telling
+you what it is doing.
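+
+For example, since ``&&`` only runs its second command when the first
+exits with status 0, you can use this script to gate the doctests on a
+healthy database (both commands run from within this directory):
+
+ $ python verify-live-db.py && python -m doctest *.py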
+"""
+
+import msgpack
+from random import randint
+import sqlite3
+
+from cones import SymmetricCone
+from sql import *
+
+if __name__ == "__main__":
+ result = True
+
+ mcd = max_cone_dim()
+ mlrd = max_lorentz_rank_dim()
+
+ def report():
+ if result:
+ print("ok", flush=True)
+ else:
+ print("fail", flush=True)
+
+
+ print("Checking for the trivial cone in both tables... ", flush=True, end="")
+ if not (have_cone_dim(0) and have_lorentz_rank_dim(0)):
+ result = False
+ report()
+
+ print("All cone dimensions up to the max are present... ", flush=True, end="")
+ for n in range(mcd + 1):
+ if not have_cone_dim(n):
+ result = False
+ report()
+
+ print("All lorentz rank dimensions up to the max are present... ", flush=True, end="")
+ for n in range(mlrd + 1):
+ if not have_lorentz_rank_dim(n):
+ result = False
+ report()
+
+ print("Verifying the rank/dimension of some cones in a random dimension... ", flush=True, end="")
+ if mcd < 0:
+ result = False
+ else:
+ # decide on a dimension first; otherwise random ordering is
+ # too slow.
+ n = randint(0, mcd)
+ conn = sqlite3.connect(LIVE_DATABASE)
+ stmt = "SELECT rank,data FROM cones WHERE dim=? ORDER BY random() LIMIT 10000"
+ with conn:
+ rows = conn.execute(stmt, (n,)).fetchall()
+ conn.close()
+
+ for row in rows:
+ K = SymmetricCone.deserialize(msgpack.unpackb(row[1], use_list=False))
+ if not (K.dim == n and K.rank == row[0]):
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 4 for Lemma 6... ", flush=True, end="")
+ if mcd < 4:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 6 for non-Lorentz cones to exist... ", flush=True, end="")
+ if mcd < 6:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 9 for Propositions 5 and 9... ", flush=True, end="")
+ if mcd < 9:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 18 for Corollary 2... ", flush=True, end="")
+ if mcd < 18:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 21 for Theorem 5... ", flush=True, end="")
+ if mcd < 21:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 27 for Example 1... ", flush=True, end="")
+ if mcd < 27:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 36 for Proposition 10... ", flush=True, end="")
+ if mcd < 36:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 39 for Proposition 8... ", flush=True, end="")
+ if mcd < 39:
+ result = False
+ report()
+
+ print("Need max_cone_dim() >= 40 for Corollary 3... ", flush=True, end="")
+ if mcd < 40:
+ result = False
+ report()
+
+ print("Need max_lorentz_rank_dim() >= 39 for Proposition 8... ", flush=True, end="")
+ if mlrd < 39:
+ result = False
+ report()
+
+ print("Need max_lorentz_rank_dim() >= 40 for Corollary 3... ", flush=True, end="")
+ if mlrd < 40:
+ result = False
+ report()
+
+ print("Confirming admissible_lorentz_ranks(60) directly... ", flush=True, end="")
+ if mlrd < 60:
+ result = False
+ else:
+ from partitions import _direct_lorentz_ranks
+ actual = admissible_lorentz_ranks(60)
+ expected = _direct_lorentz_ranks(60)
+ if (set(actual) != set(expected)):
+ result = False
+ report()
+
+ # This takes forever to get a random sample but there's no easy
+ # way around it.
+ print("There should be no hash collisions in the database... ", flush=True, end="")
+ conn = sqlite3.connect(LIVE_DATABASE)
+ stmt = "SELECT data FROM cones ORDER BY random() LIMIT 1000000"
+ with conn:
+ rows = conn.execute(stmt).fetchall()
+ conn.close()
+
+ # Dedupe via set(); if any were removed, there'll be fewer elements
+ # than we started with.
+ expected = len(rows)
+ actual = len(set( hash(msgpack.unpackb(r[0], use_list=False))
+ for r in rows ))
+ if not (actual == expected):
+ result = False
+ report()
+
+ exit(int(not result))