ELIFE-ASU / Inform

A cross-platform C library for information analysis of dynamical systems

Home Page: https://elife-asu.github.io/Inform

Alternative Entropies

dglmoore opened this issue

This is a discussion post. Please feel free to comment and contribute to the discussion even if you are not directly involved in the development of inform or its wrapper libraries.

Premise

Claude Shannon introduced his measure of entropy in his 1948 paper A Mathematical Theory of Communication. Since then, several new measures of entropy have been developed (see Renyi, 1961 and Tsallis, 1988 for notable examples). Each of these measures is actually a family of entropy measures parameterized by at least one continuous parameter, and tends toward Shannon's measure in some limit of that parameter. They also admit divergences which tend toward the Kullback–Leibler divergence in the same limit.
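For concreteness, here is a small Python sketch of the two families (not part of inform; the distribution `p` is just an illustrative example). All three functions return values in nats, so the Renyi and Tsallis values converge on the Shannon value as the order parameter `q` approaches 1:

```python
import math

def shannon(p):
    """Shannon entropy of a distribution p, in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, q):
    """Renyi entropy of order q (q != 1), in nats; tends to Shannon as q -> 1."""
    return math.log(sum(x ** q for x in p)) / (1 - q)

def tsallis(p, q):
    """Tsallis entropy of index q (q != 1); tends to Shannon as q -> 1."""
    return (1 - sum(x ** q for x in p)) / (q - 1)

# As q approaches 1, both families converge on the Shannon value.
p = [0.5, 0.25, 0.25]
for q in (2.0, 1.1, 1.01, 1.001):
    print(q, renyi(p, q), tsallis(p, q))
```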

Question

These alternative entropy measures do have a place in our toolbox. The question is whether it is worthwhile to put some effort into implementing them. To my knowledge, Renyi and Tsallis entropies are not implemented in any of the standard information toolkits. This could be because they are not useful, in which case implementing them would waste a lot of thought and energy, and drive us a little closer to carpal tunnel. Or it could be that no one has considered using them, and treasures are waiting to be uncovered.

Let us know what you think!

Thanks @dglmoore for starting this conversation. At the moment I'm inclined to request Tsallis entropies, because of their apparent use in non-equilibrium (non-ergodic) stat mech. However, at the moment I don't have any specific measurements or systems I'd like to test with these entropies yet.

I would love it if we could come up with an API that lets the user decide which class of measure they would like to use. The C API would probably be hideous; maybe something like

```c
typedef enum inform_entropy_class
{
    INFORM_SHANNON = 0,
    INFORM_RENYI   = 1,
    INFORM_TSALLIS = 2,
} inform_entropy_class;

double inform_active_info(int const *series, size_t n, size_t m, int b, size_t k, inform_entropy_class h, inform_error *err);
```

The Python wrapper, on the other hand, might look something like

```python
class EntropyClass(Enum):
    shannon = 0
    renyi   = 1
    tsallis = 2

def active_info(series, k, b=2, local=True, eclass=EntropyClass.shannon):
```
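One wrinkle the enum alone doesn't capture: the Renyi and Tsallis families also need their order parameter, so any such API would presumably have to thread an extra argument through. A hypothetical pure-Python sketch of the dispatch (the function name `entropy` and the parameter `q` are my inventions for illustration, not part of inform):

```python
import math
from enum import Enum

class EntropyClass(Enum):
    shannon = 0
    renyi   = 1
    tsallis = 2

def entropy(p, eclass=EntropyClass.shannon, q=2.0):
    """Entropy of a distribution p; q is ignored for the Shannon case."""
    if eclass is EntropyClass.shannon:
        return -sum(x * math.log2(x) for x in p if x > 0)  # bits
    s = sum(x ** q for x in p)
    if eclass is EntropyClass.renyi:
        return math.log2(s) / (1 - q)  # bits
    return (1 - s) / (q - 1)  # Tsallis, in its own natural units
```

For a fair coin, this gives 1 bit for Shannon and for Renyi with q = 2 (the collision entropy), but 0.5 for Tsallis with q = 2, which is one reason a shared `eclass` parameter needs careful documentation about units.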

There are a couple of other ways we could go about this, but I think they may be less efficient.

Thoughts?