Inform is a cross-platform C library designed for performing information analysis of complex systems.

  1. The inform_dist struct provides discrete, empirical probability distributions. These form the basis for all of the information-theoretic measures.

  2. A collection of information measures built upon the distribution struct provide the core algorithms for the library, and are provided through the shannon.h header.

  3. A host of measures of the information dynamics on time series are built upon the core information measures. Each measure is housed in its own header, e.g. active_info.h.

In addition to the core components, a small collection of utilities are also provided.

Getting Started

Installation from Source

To build Inform, you will need CMake. Most users can install it with their package manager, e.g. apt-get, pacman or yum on Linux, or homebrew, macports or fink on OS X.

Linux, OS X, and Windows (Bash, MinGW and Cygwin)

Once CMake is installed, building, testing and installing the library is a snap

λ cmake . -DCMAKE_BUILD_TYPE=Release -DEXAMPLES=Yes
λ make all tests
λ sudo make install

Windows with MSVC

Building with MSVC is a bit more involved. Open the Visual C++ MSBuild command prompt (it should be in your Start menu). You can then run cmake, build and test from the prompt:

λ cmake . -DCMAKE_BUILD_TYPE=Release -DEXAMPLES=Yes -G"Visual Studio 14 2015"
λ msbuild /p:Configuration=Release ALL_BUILD.vcxproj
λ test\Release\inform_unittest.exe

Installation requires manually copying the headers and libraries to wherever you would like them; this project-by-project approach is standard for Windows development.

Binary Installation

Precompiled binaries can be found at https://github.com/elife-asu/inform/releases.

Getting Help

Inform is developed to help anyone interested in applying information-theoretic techniques get things done quickly and painlessly. We cannot do that without your feedback. We host the project’s source code and issue tracker on GitHub. Please create an issue if you find a bug, an error in this documentation, or have a feature you’d like to request. Your contribution will make Inform a better tool for everyone.

If you are interested in contributing to Inform, please contact the developers, and we’ll get you up and running!

Inform Community

While Inform is a great tool to use directly in C or C++, significant effort has gone into making it easy to wrap higher-level languages. Here are a few of the wrappers that are under active development:

Table 1. Inform Wrappers

  Package Name   Language                      Repository               Website
  PyInform       Python                        elife-asu/pyinform       https://elife-asu.github.io/PyInform
  rinform        R                             elife-asu/rinform        https://elife-asu.github.io/rinform
  Inform.jl      Julia                         elife-asu/inform.jl      forthcoming
  InformWolfram  Mathematica/Wolfram Language  elife-asu/informwolfram  forthcoming

Intellectual Relatives

Inform was not created in an intellectual vacuum. Many similar projects have preceded it and are under active development. All of those projects have advantages and disadvantages compared to Inform. If Inform doesn’t meet your needs, we hope you will let us know and try one of these:

  • JIDT: Java Information Dynamics Toolkit (Java)

  • TRENTOOL: TRansfer ENtropy TOOLbox (Matlab)

  • dit: discrete information theory (Python)

  • MuTE: MuTE Toolbox - The Dynamic Directed Links Detector (Matlab)

  • MVGC: Multivariate Granger Causality Toolbox (Matlab)

  • ACSS: Algorithmic Complexity for Short Strings (R)

  • OACC: Online Algorithmic Complexity Calculator (web-based, R)

Copyright © 2016-2018 ELIFE, Arizona State University. Free use of this software is granted under the terms of the MIT License.

See the LICENSE file for details.

Relevant Publications

  • Moore, D.G., Valentini, G., Walker, S.I. and Levin, M. (2018) "Inform: Efficient Information-Theoretic Analysis of Collective Behaviors" Frontiers in Robotics & AI. (under review).

  • Moore, D.G., Valentini, G., Walker, S.I. and Levin, M. (2018) "Inform: A Toolkit for Information-Theoretic Analysis of Complex Systems". In: Proceedings of the 2017 IEEE Symposium on Computational Intelligence, Symposium on Artificial Life, IEEE Press. (in press).

Support

This project was supported in part by a grant provided by the Templeton World Charity Foundation as part of the Power of Information Initiative.

Error Handling

Introduction

Much of the time invested in Inform has been spent ensuring that errors are properly handled. Since Inform is designed to be used both natively and wrapped for use in another language, we needed a simple way to provide error information while still being complete. The result was the inform_error enumeration.

Most of Inform's functions take a pointer to an inform_error as a last argument. This allows the function to pass error information up the call stack. Below is a typical example of how to handle errors that arise during standard usage:

size_t const n = 1, m = 7, k = 2;

// expect only binary states
int const base = 2;
// state 2 is invalid
int const series[] = {0,0,1,1,2,0,1};

// Don't forget to initialize the error
inform_error err = INFORM_SUCCESS;
double ai = inform_active_info(series, n, m, base, k, &err);
if (inform_failed(&err))
{
    // ERROR: time series has states inconsistent with expected base
    fprintf(stderr, "ERROR: %s\n", inform_strerror(&err));
    return -1;
}

Most functions also return a value which signifies erroneous results. If the user is uninterested in the specific error message, then they can simply ignore it by passing a NULL pointer.

double ai = inform_active_info(series, n, m, base, k, NULL);
if (isnan(ai))
{
    fprintf(stderr, "ERROR: an error occurred in computing AI\n");
    return -1;
}

We strongly recommend that you handle errors. This is both good style and helps immensely when there’s actually a problem.

This covers the most common use cases for non-developers. If you are a developer, we strongly suggest that you review the rest of the error handling API. Otherwise, you’d probably be better served by moving on to the more interesting stuff such as Time Series Measures.

API Documentation

All of the error handling API is provided by the inform/error.h header. You rarely have to include this header explicitly since the headers for most of the workhorse functions bring it along.

typedef enum { ... } inform_error;
Table 2. Descriptions of Inform's Error Tags

  Error Tag            Description
  INFORM_SUCCESS       no error occurred
  INFORM_FAILURE       an unspecified error occurred
  INFORM_EFAULT        invalid pointer
  INFORM_EARG          invalid argument
  INFORM_ENOMEM        malloc/calloc/realloc failed
  INFORM_ETIMESERIES   time series is NULL
  INFORM_ENOSOURCES    time series has no sources
  INFORM_ENOINITS      time series has no initial conditions
  INFORM_ESHORTSERIES  time series has less than two time steps
  INFORM_EKZERO        history length is zero
  INFORM_EKLONG        history is too long for the time series
  INFORM_EBASE         the provided base is invalid
  INFORM_ENEGSTATE     time series has negative state
  INFORM_EBADSTATE     time series has states inconsistent with expected base
  INFORM_EDIST         invalid distribution
  INFORM_EBIN          invalid binning
  INFORM_EENCODE       cannot encode state
  INFORM_ETPM          invalid TPM
  INFORM_ETPMROW       all zero row in transition probability matrix
  INFORM_ESIZE         invalid size
  INFORM_EPARTS        invalid partitioning

Header

inform/error.h

#define INFORM_ERROR(ERR, TAG)

Set an error pointer ERR to a particular error TAG. If ERR is NULL, this is essentially a no-op.

Example:

int *allocate_array(size_t n, inform_error *err)
{
    int *data = malloc(n * sizeof(int));
    if (!data)
    {
        INFORM_ERROR(err, INFORM_ENOMEM);
    }
    return data;
}
Header

inform/error.h

See also

INFORM_ERROR_RETURN, INFORM_ERROR_RETURN_VOID

#define INFORM_ERROR_RETURN(ERR, TAG, RET)

Set an error pointer ERR to a particular error TAG, and return a value RET. If ERR is NULL, this is essentially just a return statement.

Example:

int *double_values(int *array, size_t n, inform_error *err)
{
    if (!array)
    {
        INFORM_ERROR_RETURN(err, INFORM_EFAULT, array);
    }
    for (size_t i = 0; i < n; ++i)
    {
        array[i] *= 2;
    }
    return array;
}
Header

inform/error.h

See also

INFORM_ERROR, INFORM_ERROR_RETURN_VOID

#define INFORM_ERROR_RETURN_VOID(ERR, TAG)

Set an error pointer ERR to a particular error TAG, and return void. If ERR is NULL, this is essentially just a return statement.

Example:

void double_values(int *array, size_t n, inform_error *err)
{
    if (!array)
    {
        INFORM_ERROR_RETURN_VOID(err, INFORM_EFAULT);
    }
    for (size_t i = 0; i < n; ++i)
    {
        array[i] *= 2;
    }
}
Header

inform/error.h

See also

INFORM_ERROR, INFORM_ERROR_RETURN

bool inform_succeeded(inform_error const *err);

Return true if the value of the error pointer signifies success.

This function will only return true if either err == NULL or *err == INFORM_SUCCESS.

Examples:

inform_error err = INFORM_SUCCESS;

// Successes
assert(inform_succeeded(NULL));
assert(inform_succeeded(&err));

// Failures
err = INFORM_FAILURE;
assert(!inform_succeeded(&err));
err = INFORM_ENOMEM;
assert(!inform_succeeded(&err));
Header

inform/error.h

See also

inform_error, inform_failed

bool inform_failed(inform_error const *err);

Return true if the value of the error pointer signifies failure.

This function will only return true if both err != NULL and *err != INFORM_SUCCESS.

Examples:

inform_error err = INFORM_SUCCESS;

// Successes
assert(!inform_failed(NULL));
assert(!inform_failed(&err));

// Failures
err = INFORM_FAILURE;
assert(inform_failed(&err));
err = INFORM_ENOMEM;
assert(inform_failed(&err));
Header

inform/error.h

See also

inform_error, inform_succeeded

char const *inform_strerror(inform_error const *err);

Return a string describing an error.

Examples:

inform_error err = INFORM_SUCCESS;

// Successes
assert(strcmp(inform_strerror(NULL), "no error occurred") == 0);
assert(strcmp(inform_strerror(&err), "no error occurred") == 0);

// Failures
err = INFORM_FAILURE;
assert(strcmp(inform_strerror(&err),
              "an unspecified error occurred") == 0);
err = INFORM_ENOMEM;
assert(strcmp(inform_strerror(&err),
              "malloc/calloc/realloc failed") == 0);
Header

inform/error.h

See also

inform_error

Time Series Measures

The original purpose of Inform was to analyze time series data. This explains why most of Inform's functionality resides in functions specifically optimized for analyzing time series. The API was designed to be easy to use in C or C++, or to be wrapped in a higher-level language, e.g. Python. This means that we avoided some of the "niceties" of C, such as extensive use of macros and other generic atrocities, in favor of wrappability. Keep this in mind as you learn the API.

Many information measures have "local" variants which compute a time series of point-wise values. These local variants have names similar to their averaged or global counterparts, e.g. inform_active_info and inform_local_active_info. We have been meticulous in ensuring that function and parameter names are consistent across measures. If you notice some inconsistency, please report it as an issue.

Notation

Throughout the discussion of time series measures, we will try to use a consistent notation. We will denote random variables as \(X,Y,\ldots\), and let \(x_i,y_i,\ldots\) represent the \(i\)-th time step of a time series drawn from the associated random variable. Many of the measures consider \(k\)-histories (a.k.a \(k\)-blocks) of the time series, e.g. \(x_i^{(k)} = \left\{x_{i-k+1}, x_{i-k+2},\ldots,x_i\right\}\).

When denoting probability distributions, we will only make the random variable explicit in situations where the notation is ambiguous. We will typically write \(p(x_i)\), \(p(x_i^{(k)})\), and \(p(x_i^{(k)}, x_{i+1})\) to denote the empirical probability of observing the \(x_i\) state, the \(x_i^{(k)}\) \(k\)-history, and the joint probability of observing \(\left(x_i^{(k)}, x_{i+1}\right)\).

Please report any notational ambiguities as an issue.

Implementation Details

The Base: States and Logarithms

The word "base" has two different meanings in the context of information measures on time series. It could refer to the base of the time series itself, that is, the number of unique states in the time series. For example, the time series \(\{0,2,1,0,0\}\) is a base-3 time series. On the other hand, it could refer to the base of the logarithm used in computing the information content of the inferred probability distributions. The problem is that these two meanings clash. The base of the time series affects the range of values the measure can produce, while the base of the logarithm represents a rescaling of those values.

In this library we deal with this by always using base-2 logarithms, and having the user specify the base of the time series — don’t worry, we set an error if the provided base doesn’t make sense. All of this ensures that the library is as simple as reasonably possible.
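
For example (a minimal sketch using inform_active_info, which is documented later in this section), providing a base that cannot account for the observed states sets an error:

inform_error err = INFORM_SUCCESS;
int const series[5] = {0,2,1,0,0}; // a base-3 time series

// claiming base 2 is inconsistent with the observed state 2
inform_active_info(series, 1, 5, 2, 2, &err);
assert(err == INFORM_EBADSTATE);

// base 3 correctly describes the states; the logarithms are still base-2
err = INFORM_SUCCESS;
double ai = inform_active_info(series, 1, 5, 3, 2, &err);
assert(inform_succeeded(&err));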

Multiple Initial Conditions

You generally need a lot of data to infer a probability distribution. An experimentalist or simulator might then collect data over multiple trials or initial conditions. Most of Inform's time series measures allow the user to pass in a two-dimensional, rectangular array, with each row representing a time series from a different initial condition. From this the probability distributions are inferred, and the desired value is calculated. This has the downside of requiring that the user store all of the data in memory, but it has the advantage of being fast and simple. Trade-offs, man…
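
Concretely, the data layout expected by these measures looks like this (the same layout used by the examples later in this section):

// two initial conditions (rows) of nine time steps each (columns),
// stored contiguously in row-major order as a flat array
int const series[2 * 9] = {
    0,0,1,1,1,1,0,0,0, // initial condition #1
    1,0,0,1,0,0,1,0,0, // initial condition #2
};
// passed with n = 2 initial conditions and m = 9 time steps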

A subsequent release, likely v1.1.0, will allow the initial conditions to have time series of different lengths and will provide accumulator implementations of all of these measures that will let the user incrementally construct the distributions. This will lift some of the memory burden at the expense of runtime performance.

Calling Conventions

All of the time series functions described in this section use the same basic calling conventions and the same (or similar) argument names where possible.

  Argument Name  Description
  series         A 2-D or 3-D, finite-state time series in contiguous, row-major form
  l              The number of "sources" or variables in a 3-D time series
  n              The number of initial conditions per "source"
  m              The number of time steps per "source"
  b              The base of the time series
  k              The history length
  err            An error argument

Average measures generally return a double-precision, floating-point value while local variants return a pointer to an appropriately shaped, contiguous array. Local measures accept an argument, often named after the function (e.g. local active information takes an ai argument), which is used to store the computed local values. If that argument is NULL, then the function allocates an array.

We will try to note any deviations from these conventions.
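
For example, the local variants may be called in either of two ways; here is a short sketch using inform_local_active_info (documented below):

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};

// let the library allocate the output array; the caller must free it
double *ai = inform_local_active_info(series, 1, 9, 2, 2, NULL, &err);
free(ai);

// or provide a caller-allocated buffer of the appropriate size
double buffer[7]; // n * (m - k) == 1 * (9 - 2) == 7 values
inform_local_active_info(series, 1, 9, 2, 2, buffer, &err);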

Active Information

Active information (AI) was introduced in [Lizier2012] to quantify information storage in distributed computations. Active information is defined in terms of a temporally local variant

\[a_{X,i}(k) = \log_2{\frac{p(x_i^{(k)}, x_{i+1})}{p(x_i^{(k)})p(x_{i+1})}},\]

where the probabilities are constructed empirically from the entire time series. From the local variant, the temporally global active information is defined as

\[A_X(k) = \langle a_{X,i}(k) \rangle_i = \sum_{x_i^{(k)},x_{i+1}} p(x_i^{(k)},x_{i+1}) \log_2{\frac{p(x_i^{(k)}, x_{i+1})}{p(x_i^{(k)})p(x_{i+1})}}.\]

Strictly speaking, the local and average active information are defined as

\[a_{X,i} = \lim_{k\rightarrow \infty}a_{X,i}(k) \qquad \textrm{and} \qquad A_X = \lim_{k\rightarrow \infty}A_X(k),\]

but we do not provide limiting functionality in this library (yet!).

double inform_active_info(int const *series, size_t n, size_t m,
        int b, size_t k, inform_error *err);

Compute the average active information with a history length k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};
double ai = inform_active_info(series, 1, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// ai ~ 0.305958

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {0,0,1,1,1,1,0,0,0,
                        1,0,0,1,0,0,1,0,0};
double ai = inform_active_info(series, 2, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// ai ~ 0.359879
Header

inform/active_info.h

double *inform_local_active_info(int const *series, size_t n, size_t m,
        int b, size_t k, double *ai, inform_error *err);

Compute the local active information with a history length k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};
double *ai = inform_local_active_info(series, 1, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// ai ~ {-0.193, 0.807, 0.222, 0.222, -0.363, 1.222, 0.222}
free(ai);

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {0,0,1,1,1,1,0,0,0,
                        1,0,0,1,0,0,1,0,0};
double ai[14];
inform_local_active_info(series, 2, 9, 2, 2, ai, &err);
assert(inform_succeeded(&err));
// ai ~ { 0.807, -0.363, 0.637, 0.637, -0.778, 0.807, -1.193,
//        0.807,  0.807, 0.222, 0.807,  0.807, 0.222,  0.807 }

// no need to free since `ai` was statically allocated in this scope
// free(ai);
Header

inform/active_info.h

Block Entropy

Block entropy, also known as \(N\)-gram entropy [Shannon1948], is the standard Shannon entropy of the \(k\)-histories of a time series:

\[H(X^{(k)}) = -\sum_{x_i^{(k)}} p(x_i^{(k)}) \log_2{p(x_i^{(k)})}\]

which reduces to the traditional Shannon entropy for \(k=1\).

double inform_block_entropy(int const *series, size_t n, size_t m,
        int b, size_t k, inform_error *err);

Compute the average block entropy of a time series with block size k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};

// k = 1
double h = inform_block_entropy(series, 1, 9, 2, 1, &err);
assert(inform_succeeded(&err));
// h ~ 0.991076

// k = 2
h = inform_block_entropy(series, 1, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// h ~ 1.811278

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {0,0,1,1,1,1,0,0,0,
                        1,0,0,1,0,0,1,0,0};
double h = inform_block_entropy(series, 2, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// h ~ 1.936278
Header

inform/block_entropy.h

double *inform_local_block_entropy(int const *series, size_t n,
        size_t m, int b, size_t k, double *ent, inform_error *err);

Compute the local block entropy of a time series with block size k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};

// k == 1
double *h = inform_local_block_entropy(series, 1, 9, 2, 1, NULL, &err);
assert(inform_succeeded(&err));
// h ~ { 0.848, 0.848, 1.170, 1.170, 1.170, 1.170, 0.848, 0.848, 0.848 }
free(h);

// k == 2
h = inform_local_block_entropy(series, 1, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// h ~ { 1.415, 3.000, 1.415, 1.415, 1.415, 3.000, 1.415, 1.415 }

free(h);

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {0,0,1,1,1,1,0,0,0,
                        1,0,0,1,0,0,1,0,0};
double h[16];
inform_local_block_entropy(series, 2, 9, 2, 2, h, &err);
assert(inform_succeeded(&err));
// h ~ { 1.415, 2.415, 2.415, 2.415, 2.415, 2.000, 1.415, 1.415,
//       2.000, 1.415, 2.415, 2.000, 1.415, 2.415, 2.000, 1.415 }

// no need to free since `h` was statically allocated in this scope
// free(h);
Header

inform/block_entropy.h

Conditional Entropy

Conditional entropy is a measure of the amount of information required to describe a random variable \(Y\) given knowledge of another random variable \(X\). When applied to time series, two time series are used to construct the empirical distributions, and inform_shannon_ce can be applied to yield

\[H(Y|X) = - \sum_{x_i,y_i} p(x_i,y_i) \log_2{p(y_i|x_i)}.\]

This can be viewed as the time-average of the local conditional entropy

\[h_i(Y|X) = -\log_2{p(y_i|x_i)}.\]

See [Cover1991] for more information.

double inform_conditional_entropy(int const *xs, int const *ys,
        size_t n, int bx, int by, inform_error *err);

Compute the conditional entropy between two time series.

This function expects the condition to be the first argument, xs. Each time series must have the same length n, but they may have different bases bx and by.

Examples:

inform_error err = INFORM_SUCCESS;
int const xs[20] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1};
int const ys[20] = {0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1};

double ce = inform_conditional_entropy(xs, ys, 20, 2, 2, &err);
assert(inform_succeeded(&err));
// ce == 0.597107

ce = inform_conditional_entropy(ys, xs, 20, 2, 2, &err);
assert(inform_succeeded(&err));
// ce == 0.507757
Header

inform/conditional_entropy.h

double *inform_local_conditional_entropy(int const *xs, int const *ys,
        size_t n, int bx, int by, double *mi, inform_error *err);

Compute the local conditional entropy between two time series.

This function expects the condition to be the first argument, xs. Each time series must have the same length n, but they may have different bases bx and by.

Examples:

inform_error err = INFORM_SUCCESS;
int const xs[20] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1};
int const ys[20] = {0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1};

double *ce = inform_local_conditional_entropy(xs, ys, 20, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// ce == { 3.00, 3.00, 0.19, 0.19, 0.19, 0.19, 0.19, 0.19, 0.19, 0.19,
//         0.19, 0.19, 0.19, 0.19, 0.19, 0.19, 0.42, 0.42, 0.42, 2.00 }

inform_local_conditional_entropy(ys, xs, 20, 2, 2, ce, &err);
assert(inform_succeeded(&err));
// ce == { 1.32, 1.32, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10,
//         0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.74, 0.74, 0.74, 3.91 }

free(ce);
Header

inform/conditional_entropy.h

Cross Entropy

Cross entropy between two distributions \(p_X\) and \(q_X\) measures the amount of information needed to identify events using a coding scheme optimized for \(q_X\) when \(p_X\) is the "real" distribution over \(X\).

\[H(p,q) = -\sum_{x} p(x) \log_2{q(x)}\]

Cross entropy’s local variant is equivalent to the self-information of \(q_X\), and as such is implemented by inform_local_block_entropy.

See [Cover1991] for more details.

double inform_cross_entropy(int const *ps, int const *qs, size_t n,
        int b, inform_error *err);

Compute the cross entropy between the "true" and "unnatural" distributions \(p_X\) and \(q_X\) from associated time series ps and qs, respectively.

Examples:

inform_error err = INFORM_SUCCESS;
int const ps[10] = {0,1,1,0,1,0,0,1,0,0};
int const qs[10] = {0,0,0,0,0,1,1,0,0,1};

double ce = inform_cross_entropy(ps, qs, 10, 2, &err);
assert(inform_succeeded(&err));
// ce == 1.003530

ce = inform_cross_entropy(qs, ps, 10, 2, &err);
assert(inform_succeeded(&err));
// ce == 0.912454
Header

inform/cross_entropy.h

Effective Information

Effective information is a causal measure aimed at teasing out the causal structure of a dynamical system. In essence, it is the mutual information between an "intervention" distribution — a probability distribution over initial states — and the distribution after one time step:

\[EI(A,p) = I(p,p^TA)\]

where \(A\) is the transition probability matrix. Functionality to construct a transition probability matrix from time series is provided by the [inform_tpm] function.

See [Tononi2003], [Hoel2013] or [Hoel2017] for more information.

double inform_effective_info(double const *tpm, double const *inter,
        size_t n, inform_error *err);

Compute the effective information from an n\(\times\)n transition probability matrix tpm given an intervention distribution inter.

If inter is NULL, then the uniform distribution over the n states is used.

Examples:

Uniform intervention distribution:

inform_error err = INFORM_SUCCESS;
double const tpm[4] = {0.50,0.50,
                       0.25,0.75};
double ei = inform_effective_info(tpm, NULL, 2, &err);
assert(inform_succeeded(&err));
// ei == 0.048795

Non-uniform intervention distribution:

inform_error err = INFORM_SUCCESS;
double const tpm[4] = {0.50,0.50,
                       0.25,0.75};
double const inter[2] = {0.488372, 0.511628};
double ei = inform_effective_info(tpm, inter, 2, &err);
assert(inform_succeeded(&err));
// ei == 0.048821
Header

inform/effective_info.h

Entropy Rate

Entropy rate quantifies the amount of information needed to describe the next state of \(X\) given observations of \(X^{(k)}\). In other words, it is the entropy of the time series conditioned on the \(k\)-histories. The local entropy rate

\[h_{X,i}(k) = \log_2{\frac{p(x_i^{(k)}, x_{i+1})}{p(x_i^{(k)})}}\]

can be averaged to obtain the global entropy rate

\[H_X(k) = \langle h_{X,i}(k) \rangle_i = \sum_{x_i^{(k)},x_{i+1}} p(x_i^{(k)},x_{i+1}) \log_2{\frac{p(x_i^{(k)}, x_{i+1})}{p(x_i^{(k)})}}.\]

Much as with active information, the local and average entropy rates are formally obtained in the limit

\[h_{X,i} = \lim_{k\rightarrow \infty}h_{X,i}(k) \qquad \textrm{and} \qquad H_X = \lim_{k\rightarrow \infty}H_X(k),\]

but we do not provide limiting functionality in this library (yet!).

See [Cover1991] for more details.

double inform_entropy_rate(int const *series, size_t n, size_t m,
        int b, size_t k, inform_error *err);

Compute the average entropy rate with a history length k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};
double er = inform_entropy_rate(series, 1, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// er ~ 0.679270

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {0,0,1,1,1,1,0,0,0,
                        1,0,0,1,0,0,1,0,0};
double er = inform_entropy_rate(series, 2, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// er ~ 0.625349
Header

inform/entropy_rate.h

double *inform_local_entropy_rate(int const *series, size_t n,
        size_t m, int b, size_t k, double *er, inform_error *err);

Compute the local entropy rate with a history length k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,1,1,0,0,0};
double *er = inform_local_entropy_rate(series, 1, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// er ~ { 1.000, 0.000, 0.585, 0.585, 1.585, 0.000, 1.000 }
free(er);

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {0,0,1,1,1,1,0,0,0,
                        1,0,0,1,0,0,1,0,0};
double er[14];
inform_local_entropy_rate(series, 2, 9, 2, 2, er, &err);
assert(inform_succeeded(&err));
// er ~ { 0.415, 1.585, 0.585, 0.585, 1.585, 0.000, 2.000,
//        0.000, 0.415, 0.585, 0.000, 0.415, 0.585, 0.000 }

// no need to free since `er` was statically allocated in this scope
// free(er);
Header

inform/entropy_rate.h

Excess Entropy

Formally, the excess entropy is the mutual information between two adjacent, semi-infinite blocks of variables:

\[E_X = \lim_{k \rightarrow \infty}I[(x_{-k},\ldots,x_{-1}),(x_0,\ldots,x_{k-1})].\]

Of course, we cannot take the limit in practice, so we implement the finite form:

\[E_X(k) = I[(x_{-k},\ldots,x_{-1}),(x_0,\ldots,x_{k-1})].\]

We can think of excess entropy as a slight generalization of active information or a special case of predictive information.

See [Crutchfield2003] and [Feldman2003] for more details.

double inform_excess_entropy(int const *series, size_t n, size_t m,
        int b, size_t k, inform_error *err);

Compute the excess entropy from a time series with block size k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,0,0,1,1,0};
double ee = inform_excess_entropy(series, 1, 9, 2, 2, &err);
assert(!err);
// ee ~ 1.918296

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {
    0,0,1,1,0,0,1,1,0,
    0,1,0,1,0,1,0,1,0
};
double ee = inform_excess_entropy(series, 2, 9, 2, 2, &err);
assert(!err);
// ee ~ 1.109170
Header

inform/excess_entropy.h

double *inform_local_excess_entropy(int const *series, size_t n,
        size_t m, int b, size_t k, double *ee, inform_error *err);

Compute the local excess entropy from a time series with block size k.

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,0,0,1,1,0};
double *ee = inform_local_excess_entropy(series, 1, 9, 2, 2, NULL, &err);
assert(!err);
// ee ~ { 1.585 1.585 2.585 2.585 1.585 1.585 }
free(ee);

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {
    0,0,1,1,0,0,1,1,0,
    0,1,0,1,0,1,0,1,0
};
double *ee = inform_local_excess_entropy(series, 2, 9, 2, 2, NULL, &err);
assert(!err);
// ee ~ { 2.585 -0.059 3.585 -0.415 2.585 -0.059
//        0.848  0.848 0.848  0.848 0.848  0.848 }
free(ee);
Header

inform/excess_entropy.h

Information Flow

Information flow (IF) was introduced by Ay and Polani as a measure of the "strength of a causal effect" ([Ay2008]). Unlike many of the other measures in Inform, information flow was designed around interventions: rather than allowing the dynamics to progress naturally, some subset of the random variables is forced into a particular state. If no interventions are performed, then IF reduces to conditional mutual information.

Practically, there is no way for Inform to know from a time series whether or not any interventions were actually performed upon the system. As such, the onus is on the user to determine whether this function is in fact computing Ay and Polani’s information flow.

In accordance with [Ay2008], information flow is defined as

\[I_p(X \rightarrow Y | Z) = \sum_{x,y,z} p(z)p(x|\hat{z})p(y|\hat{x},\hat{z}) \log{ \frac{p(y|\hat{x},\hat{z})} {\sum_{x^\prime} p(x^\prime|\hat{z})p(y|\hat{x}^\prime,\hat{z})} }\]

All probabilities are estimated from sequences of observations — the order of those sequences is irrelevant. A subsequent version will allow the user to specify intervention distributions.

double inform_information_flow(int const *src, int const *dst,
        int const *back, size_t l_src, size_t l_dst, size_t l_back,
        size_t n, size_t m, int b, inform_error *err);

Compute the information flow from the l_src source time series src to the l_dst destination time series dst, optionally conditioning on the l_back background time series back.

Examples:

This example demonstrates how to use inform_information_flow to analyze Ay and Polani’s diamond structure example ([Ay2008], pp. 28-29). In this example, we have four interacting Boolean variables \(W\), \(X\), \(Y\) and \(Z\). The variables \(X\) and \(Y\) are each dependent on \(W\): they simply copy its state. The \(Z\) variable is the XOR of the states of \(X\) and \(Y\). The variable \(W\) is uniformly distributed.

Example 1: Natural dynamics

Left to its own devices, this system might produce a sequence of observations like:

int const ws[8] = {0,0,1,0,1,1,0,1};
int const xs[8] = {0,0,1,0,1,1,0,1};
int const ys[8] = {0,0,1,0,1,1,0,1};
int const zs[8] = {0,0,0,0,0,0,0,0};

From these we can compute the following information flows:

\[I_p(X \rightarrow Y) = 1 \\ I_p(X \rightarrow Y | W) = 0 \\ I_p(W \rightarrow Z | Y) = 0.\]

Our notation departs somewhat from that of [Ay2008] as we denote these as information flows despite the fact that we are not strictly intervening upon the system. If the user prefers, they can think of this as "intervening on the system in a way that is indistinguishable from the natural dynamics". In any case, we can use inform_information_flow to compute the above values

inform_error err = INFORM_SUCCESS;
double flow = 0.0;
flow = inform_information_flow(xs, ys, NULL, 1, 1, 0, 1, 8, 2, &err);
assert(!err);
// flow ~ 1.0
flow = inform_information_flow(xs, ys, ws, 1, 1, 1, 1, 8, 2, &err);
assert(!err);
// flow ~ 0.0
flow = inform_information_flow(ws, zs, ys, 1, 1, 1, 1, 8, 2, &err);
assert(!err);
// flow ~ 0.0

Example 2: Interventions

Now, suppose we intervene on \(Y\) and force it into whichever state we choose. Then we might end up with a sequence of observations such as

int const ws[8] = {0,0,1,0,1,1,0,1};
int const xs[8] = {0,0,1,0,1,1,0,1};
int const ys[8] = {1,0,1,0,0,1,1,0};
int const zs[8] = {1,0,0,0,1,0,1,1};

From these we can compute the following information flows:

\[I_p(X \rightarrow Y) = 0 \\ I_p(X \rightarrow Y | W) = 0 \\ I_p(W \rightarrow Z | Y) = 1.\]

In Inform this looks like:

inform_error err = INFORM_SUCCESS;
double flow = 0.0;
flow = inform_information_flow(xs, ys, NULL, 1, 1, 0, 1, 8, 2, &err);
assert(!err);
// flow ~ 0.0
flow = inform_information_flow(xs, ys, ws, 1, 1, 1, 1, 8, 2, &err);
assert(!err);
// flow ~ 0.0
flow = inform_information_flow(ws, zs, ys, 1, 1, 1, 1, 8, 2, &err);
assert(!err);
// flow ~ 1.0
Header

inform/information_flow.h

Evidence Of Integration

Evidence of Integration (EoI) was introduced in [Biehl2016] as a means of identifying representations of agents in dynamical systems. Given a collection of random variables, \(\mathcal{X}=\{X_1,X_2,\ldots,X_l\}\), we can form (non-trivial) partitionings of \(\mathcal{X}\):

\[\{\mathcal{Z} \subset \mathcal{P}(\mathcal{X})\setminus\{\mathcal{X},\varnothing\}\ |\ \mathcal{Z} \not= \varnothing, \bigcup_{Z \in \mathcal{Z}} Z = \mathcal{X} ~\textrm{and}~ Z_i \cap Z_j = \varnothing, i \not = j \}\]

Each partitioning \(\mathcal{Z} = \{Z_1, \ldots, Z_m\}\) consists of a collection of joint random variables built from \(\mathcal{X}\). The configurations of \(\mathcal{Z}\) are in one-to-one correspondence with the configurations of \(\mathcal{X}\). Given such a partitioning, we can ask what the pointwise mutual information is:

\[mi_\mathcal{Z}(z_1, \ldots, z_m) = \log{\frac{p(z_1, \ldots, z_m)}{p(z_1) \ldots p(z_m)}}.\]

If \(z=\{z_1,\ldots,z_m\}\) corresponds to the configuration \(x=\{x_1,\ldots,x_l\}\), we refer to the above mutual information as the "evidence of integration of \(x\) with respect to the partitioning \(\mathcal{Z}\)".

We say that a configuration \(x\) is integrated if and only if the evidence of integration of \(x\) is positive for all partitionings of the system.

double *inform_integration_evidence(int const *series, size_t l,
        size_t n, int const *b, double *evidence, inform_error *err);

Given a sequence of n observed states of l random variables (series), compute the evidence of integration for each partitioning of the l variables, and return the minimum and maximum evidence for each observation.

Examples:

inform_error err = INFORM_SUCCESS;
int const series[30] = {
    0, 1, 0, 1, 1, 1, 0, 0, 1, 0,
    0, 1, 0, 1, 1, 1, 0, 0, 1, 0,
    1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
};
double *evidence = inform_integration_evidence(series, 3, 10, (int[3]){2,2,2}, NULL, &err);
assert(!err);
// evidence ~ { -0.322 0.263 -0.322 0.263 0.263 -0.322 0.263 0.263 -0.322 0.263   // MIN
//               1.000 1.263  1.000 1.263 1.263  1.000 1.263 1.263  1.000 1.263 } // MAX
// integrated?   NO    YES    NO    YES   YES    NO    YES   YES    NO    YES
free(evidence);
Header

inform/integration.h

double *inform_integration_evidence_part(int const *series, size_t l,
        size_t n, int const *b, size_t const *parts, size_t nparts,
        double *evidence, inform_error *err);

Given a sequence of n observed states of l random variables (series), compute the evidence of integration of each observation with respect to the partitioning parts.

This function is useful when the number of variables l is large; the user can test a small subset of the partitionings to increase confidence that the system is integrated. Biehl et al. suggest considering only the finest partitioning, \(\mathcal{Z} = \{\{X_1\}, \ldots, \{X_l\}\}\).

Examples:

inform_error err = INFORM_SUCCESS;
int const series[30] = {
    0, 1, 0, 1, 1, 1, 0, 0, 1, 0, // X_1
    0, 1, 0, 1, 1, 1, 0, 0, 1, 0, // X_2
    1, 1, 1, 1, 1, 0, 0, 0, 0, 0, // X_3
};
size_t partitions[3] = {0,0,1}; // Partition as {{X_1, X_2}, {X_3}} (2 partitions)
double *evidence = inform_integration_evidence_part(series, 3, 10,
    (int[3]){2,2,2}, partitions, 2, NULL, &err);
assert(!err);
// evidence ~ { -0.322 0.263 -0.322 0.263 0.263 -0.322 0.263 0.263 -0.322 0.263 }
free(evidence);
Header

inform/integration.h

Mutual Information

Mutual information (MI) is a measure of the amount of mutual dependence between at least two random variables. Locally, MI is defined as

\[i_i(X_1,\ldots,X_l) = \log_2{\frac{p(x_{1,i},\ldots,x_{l,i})}{p(x_{1,i})\ldots p(x_{l,i})}}.\]

The mutual information is then just the time average of \(i_i(X_1,\ldots,X_l)\):

\[I(X_1,\ldots,X_l) = \sum_{x_{1,i},\ldots,x_{l,i}} p(x_{1,i},\ldots,x_{l,i}) \log_2{\frac{p(x_{1,i},\ldots,x_{l,i})}{p(x_{1,i})\ldots p(x_{l,i})}}.\]

See [Cover1991] for more details.

double inform_mutual_info(int const *series, size_t l, size_t n,
        int const *b, inform_error *err);

Compute the mutual information between two or more time series.

For this function, l is the number of random variables, and n is the length of each variable’s time series. Each variable can have a different base, so b is an array of length l.

Examples:

Two variables:

inform_error err = INFORM_SUCCESS;
int const xs[40] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,  // var 1
                    0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1}; // var 2

double mi = inform_mutual_info(xs, 2, 20, (int[2]){2,2}, &err);
assert(inform_succeeded(&err));
// mi == 0.214171

Three variables:

inform_error err = INFORM_SUCCESS;
int const xs[60] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,  // var 1
                    0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,  // var 2
                    1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1}; // var 3

double mi = inform_mutual_info(xs, 3, 20, (int[3]){2,2,2}, &err);
assert(inform_succeeded(&err));
// mi == 1.095462
Header

inform/mutual_info.h

double *inform_local_mutual_info(int const *series, size_t l, size_t n,
        int const *b, double *mi, inform_error *err);

Compute the local mutual information between two or more time series.

For this function, l is the number of random variables, and n is the length of each variable’s time series. Each variable can have a different base, so b is an array of length l.

Examples:

Two variables:

inform_error err = INFORM_SUCCESS;
int const xs[40] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,  // var 1
                    0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1}; // var 2

double *mi = inform_local_mutual_info(xs, 2, 20, (int[2]){2,2}, NULL, &err);
assert(inform_succeeded(&err));
// mi ~ { -1.000, -1.000, 0.222,  0.222, 0.222, 0.222, 0.222, 0.222,
//         0.222,  0.222, 0.222,  0.222, 0.222, 0.222, 0.222, 0.222,
//         1.585,  1.585, 1.585, -1.585 }
free(mi);

Three variables:

inform_error err = INFORM_SUCCESS;
int const xs[60] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,  // var 1
                    0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,1,  // var 2
                    1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1}; // var 3

double *mi = inform_local_mutual_info(xs, 3, 20, (int[3]){2,2,2}, NULL, &err);
assert(inform_succeeded(&err));
// mi ~ { 0.737, 0.737, 0.737, 0.737, 0.737, 0.737, 0.737, 0.737,
//        0.737, 0.737, 0.737, 0.737, 0.737, 0.737, 0.737, 0.737,
//        3.322, 3.322, 0.322, 0.152 }
free(mi);
Header

inform/mutual_info.h

Partial Information Decomposition

typedef struct inform_pid_source
{
    size_t *name;                     // the variables comprising the source
    struct inform_pid_source **above; // sources immediately above in the lattice
    struct inform_pid_source **below; // sources immediately below in the lattice
    size_t size, n_above, n_below;    // lengths of the name, above and below arrays
    double imin;                      // the minimal information of the source
    double pi;                        // the partial information of the source
} inform_pid_source;
Header

inform/pid.h

typedef struct inform_pid_lattice
{
    inform_pid_source **sources; // the sources in the lattice
    inform_pid_source *top;      // the top of the lattice
    inform_pid_source *bottom;   // the bottom of the lattice
    size_t size;                 // the number of sources
} inform_pid_lattice;
Header

inform/pid.h

void inform_pid_lattice_free(inform_pid_lattice *l);

Free a PID lattice allocated by inform_pid.
Header

inform/pid.h

inform_pid_lattice *inform_pid(int const *stimulus,
        int const *responses, size_t l, size_t n, int bs,
        int const *br, inform_error *err);
Header

inform/pid.h
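
The documentation above provides no prose for inform_pid, so here is a minimal usage sketch. It assumes that inform_pid computes a partial information decomposition of the information that the l response time series (responses, with bases br) carry about the stimulus time series (stimulus, with base bs) over n time steps, and it uses only the lattice fields declared above:

inform_error err = INFORM_SUCCESS;

// a stimulus and two responses observed over 4 time steps;
// here the stimulus is the XOR of the two responses
int const stimulus[4]  = {0,1,1,0};
int const responses[8] = {0,0,1,1,  // first response variable
                          0,1,0,1}; // second response variable
int const br[2] = {2,2};

inform_pid_lattice *lattice = inform_pid(stimulus, responses, 2, 4, 2, br, &err);
if (inform_failed(&err))
{
    fprintf(stderr, "ERROR: %s\n", inform_strerror(&err));
    return -1;
}

// walk the lattice and print the partial information of each source
for (size_t i = 0; i < lattice->size; ++i)
{
    inform_pid_source *source = lattice->sources[i];
    printf("source {");
    for (size_t j = 0; j < source->size; ++j)
    {
        printf(" %zu", source->name[j]);
    }
    printf(" }: pi = %lf\n", source->pi);
}

inform_pid_lattice_free(lattice);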

Predictive Information

Formally, the predictive information is the mutual information between a finite-history and a semi-infinite future:

\[P_X(k) = \lim_{l \rightarrow \infty}I[(x_{-k},\ldots,x_{-1}),(x_0,\ldots,x_{l-1})].\]

Of course, we cannot take the limit in practice, so we implement the finite form:

\[P_X(k,l) = I[(x_{-k},\ldots,x_{-1}),(x_0,\ldots,x_{l-1})].\]

We can think of active information and excess entropy as special cases of predictive information.

See [Bialek2001a] and [Bialek2001b] for more details.

double inform_predictive_info(int const *series, size_t n, size_t m,
        int b, size_t kpast, size_t kfuture, inform_error *err);

Compute the predictive information from a time series with history length kpast and "future length" kfuture.

Examples:

Our examples will compute the predictive information between the current time step and the next two time steps, \(P_X(1,2)\).

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,0,0,1,1,0};
double pi = inform_predictive_info(series, 1, 9, 2, 1, 2, &err);
assert(!err);
// pi ~ 0.985228

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {
    0,0,1,1,0,0,1,1,0,
    0,1,0,1,0,1,0,1,0
};
double pi = inform_predictive_info(series, 2, 9, 2, 1, 2, &err);
assert(!err);
// pi ~ 0.244905
Header

inform/predictive_info.h

double *inform_local_predictive_info(int const *series, size_t n,
        size_t m, int b, size_t kpast, size_t kfuture, double *pi,
        inform_error *err);

Compute the local predictive information from a time series with history length kpast and "future length" kfuture.

Examples:

Our examples will compute the local predictive information between the current time step and the next two time steps.

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[9] = {0,0,1,1,0,0,1,1,0};
double *pi = inform_local_predictive_info(series, 1, 9, 2, 1, 2, NULL, &err);
assert(!err);
// pi ~ { 0.807 0.807 1.222 1.222 0.807 0.807 1.222 }
free(pi);

Two initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[18] = {
    0,0,1,1,0,0,1,1,0,
    0,1,0,1,0,1,0,1,0
};
double *pi = inform_local_predictive_info(series, 2, 9, 2, 1, 2, NULL, &err);
assert(!err);
// pi ~ { -0.515 0.807 -0.363 1.222 -0.515 0.807 -0.363
//         0.222 0.485  0.222 0.485  0.222 0.485  0.222 }
free(pi);
Header

inform/predictive_info.h

Relative Entropy

Relative entropy, also known as the Kullback-Leibler divergence, measures the amount of information gained in switching from a prior distribution \(q_X\) to a posterior distribution \(p_X\) over the same support:

\[D_{KL}(p||q) = \sum_{x_i} p(x_i) \log_2{\frac{p(x_i)}{q(x_i)}}.\]

The local counterpart is

\[d_{KL,i}(p||q) = \log_2{\frac{p(x_i)}{q(x_i)}}.\]

Note that the average in moving from the local to the non-local relative entropy is taken over the posterior distribution.

See [Kullback1951] and [Cover1991] for more information.

double inform_relative_entropy(int const *xs, int const *ys, size_t n,
        int b, inform_error *err);

Compute the relative entropy between time series drawn from posterior and prior distributions, here xs and ys respectively.

Examples:

inform_error err = INFORM_SUCCESS;
int const xs[10] = {0,1,0,0,0,0,0,0,0,1};
int const ys[10] = {0,1,1,1,1,0,0,1,0,0};

double re = inform_relative_entropy(xs, ys, 10, 2, &err);
assert(inform_succeeded(&err));
// re == 0.278072

re = inform_relative_entropy(ys, xs, 10, 2, &err);
assert(inform_succeeded(&err));
// re == 0.321928
Header

inform/relative_entropy.h

double *inform_local_relative_entropy(int const *xs, int const *ys,
        size_t n, int b, double *re, inform_error *err);

Compute the local relative entropy between time series drawn from posterior and prior distributions, here xs and ys respectively.

Examples:

inform_error err = INFORM_SUCCESS;
int const xs[10] = {0,1,0,0,0,0,0,0,0,1};
int const ys[10] = {0,1,1,1,1,0,0,1,0,0};

double *re = inform_local_relative_entropy(xs, ys, 10, 2, NULL, &err);
assert(inform_succeeded(&err));
// re ~ { 0.678, -1.322, 0.678, 0.678, 0.678, 0.678, 0.678,
//        0.678, 0.678, -1.322 };

inform_local_relative_entropy(ys, xs, 10, 2, re, &err);
assert(inform_succeeded(&err));
// re ~ { -0.678, 1.322, 1.322, 1.322, 1.322, -0.678, -0.678, 1.322,
//        -0.678, -0.678 }

free(re);
Header

inform/relative_entropy.h

Separable Information

Separable information (SI) was introduced in [Lizier2010] as a method for detecting information modification in distributed computations. Given a random variable \(Y\) and a collection of potentially informative sources \(V_Y\) (wherein \(Y \not\in V_Y\)), separable information quantifies the total information gained about \(Y\) from separate observations of histories of \(Y\) and "transfer contributions" from each of the sources \(X \in V_Y\). In other words, it is the sum of the Active Information of \(Y\) and the (apparent) Transfer Entropy from each source to \(Y\):

\[s_{Y,i}(k) = a_{Y,i}(k) + \sum_{X \in V_Y} t_{X \rightarrow Y, i}(k),\\ S_Y(k) = \langle s_{Y,i}(k) \rangle_i = A_Y(k) + \sum_{X \in V_Y} T_{X \rightarrow Y}(k).\]

double inform_separable_info(int const *srcs, int const *dest,
        size_t l, size_t n, size_t m, int b, size_t k,
        inform_error *err);

Compute the average separable information with a history length k.

Examples:

One initial condition, one source:

inform_error err = INFORM_SUCCESS;
int const dest[9] = {0,1,1,1,1,0,0,0,0};
int const srcs[9] = {0,0,1,1,1,1,0,0,0};

double si = inform_separable_info(srcs, dest, 1, 1, 9, 2, 2, &err);
assert(!err);
// si ~ 0.591673

One initial condition, multiple sources:

inform_error err = INFORM_SUCCESS;
int const dest[9] = {0,1,1,1,1,0,0,0,0};
int const srcs[18] = {
    0,0,1,1,1,1,0,0,0,
    1,1,1,1,0,0,0,0,0,
};

double si = inform_separable_info(srcs, dest, 2, 1, 9, 2, 2, &err);
assert(!err);
// si ~ 0.985228

Multiple initial conditions, multiple sources:

inform_error err = INFORM_SUCCESS;
int const dest[18] = {
    0,1,1,1,1,0,0,0,0,
    1,1,0,1,1,0,1,1,0
};
int const srcs[36] = {
    0,0,1,1,1,1,0,0,0, 1,1,1,1,1,0,1,1,0,
    1,1,1,1,0,0,0,0,0, 0,0,0,0,1,1,1,1,0,
};

double si = inform_separable_info(srcs, dest, 2, 2, 9, 2, 2, &err);
assert(!err);
// si ~ 0.625349
Header

inform/separable_info.h

double *inform_local_separable_info(int const *srcs, int const *dest,
        size_t l, size_t n, size_t m, int b, size_t k, double *si,
        inform_error *err);

Compute the local separable information with a history length k.

Examples:

One initial condition, one source:

inform_error err = INFORM_SUCCESS;
int const dest[9] = {0,1,1,1,1,0,0,0,0};
int const srcs[9] = {0,0,1,1,1,1,0,0,0};

double *si = inform_local_separable_info(srcs, dest, 1, 1, 9, 2, 2, NULL, &err);
assert(!err);
// si ~ { 1.222 0.637 0.637 -0.778 0.807 0.807 0.807 }
free(si);

One initial condition, multiple sources:

inform_error err = INFORM_SUCCESS;
int const dest[9] = {0,1,1,1,1,0,0,0,0};
int const srcs[18] = {
    0,0,1,1,1,1,0,0,0,
    1,1,1,1,0,0,0,0,0,
};

double *si = inform_local_separable_info(srcs, dest, 2, 1, 9, 2, 2, NULL, &err);
assert(!err);
// si ~ { 1.222 1.222 1.222 0.807 0.807 0.807 0.807 }
free(si);

Multiple initial conditions, multiple sources:

inform_error err = INFORM_SUCCESS;
int const dest[18] = {
    0,1,1,1,1,0,0,0,0,
    1,1,0,1,1,0,1,1,0
};
int const srcs[36] = {
    0,0,1,1,1,1,0,0,0, 1,1,1,1,1,0,1,1,0,
    1,1,1,1,0,0,0,0,0, 0,0,0,0,1,1,1,1,0,
};

double *si = inform_local_separable_info(srcs, dest, 2, 2, 9, 2, 2, NULL, &err);
assert(!err);
// si ~ { 1.000 -0.000 -0.000  1.000 0.585 1.000  1.000
//        1.000 -0.415  1.000 -0.000 1.585 1.000 -0.000 }
free(si);
Header

inform/separable_info.h

Transfer Entropy

Transfer entropy (TE) was introduced in [Schreiber2000] to quantify information transfer between an information source and destination, conditioning out shared history effects. TE was originally formulated considering only the source and destination; however, many systems of interest have more than just those two components. As such, it may be further necessary to condition the probabilities on the states of all "background" components in the system. These two forms are sometimes referred to as apparent and complete transfer entropy, respectively ([Lizier2008]).

Our implementation of TE allows the user to condition the probabilities on any number of background processes, within hardware limits. For the subsequent description, take \(X\) to be the source, \(Y\) the target, and \(\mathcal{W} = \left\{W_1, \ldots, W_l\right\}\) be the background processes against which we’d like to condition. For example, we might take the state of two nodes in a dynamical network as the source and target, while all other nodes in the network are treated as the background. Transfer entropy is then defined in terms of a time-local variant

\[t_{X \rightarrow Y, \mathcal{W}, i}(k) = \log_2{\frac{p(y_{i+1},x_i|y^{(k)}_i,W_i)} {p(y_{i+1}|y^{(k)}_i,W_i)p(x_i|y^{(k)}_i,W_i)}}\]

where \(W_i = w_{1,i},\ldots,w_{l,i}\) are the states of each of the background nodes at time step \(i\), and the probabilities are constructed empirically from the entire time series. From the local variant, the temporally global transfer entropy is defined as

\[T_{X \rightarrow Y, \mathcal{W}}(k) = \langle t_{X \rightarrow Y, \mathcal{W}, i}(k) \rangle_i = \sum_{y_{i+1},y^{(k)}_i,x_i,W_i} p(y_{i+1},y^{(k)}_i,x_i,W_i) \log_2{\frac{p(y_{i+1},x_i|y^{(k)}_i,W_i)} {p(y_{i+1}|y^{(k)}_i,W_i)p(x_i|y^{(k)}_i,W_i)}}.\]

Strictly speaking, the local and average transfer entropy are defined as

\[t_{X \rightarrow Y, \mathcal{W}, i} = \lim_{k\rightarrow \infty} t_{X \rightarrow Y, \mathcal{W}, i}(k) \qquad \textrm{and} \qquad T_{X \rightarrow Y, \mathcal{W}} = \lim_{k\rightarrow \infty} T_{X \rightarrow Y, \mathcal{W}}(k),\]

but we do not provide limiting functionality in this library (yet!).

double inform_transfer_entropy(int const *src, int const *dst,
        int const *back, size_t l, size_t n, size_t m, int b, size_t k,
        inform_error *err);

Compute the average transfer entropy with a history length k.

Examples:

One initial condition, no background:

inform_error err = INFORM_SUCCESS;
int const xs[9] = {0,1,1,1,1,0,0,0,0};
int const ys[9] = {0,0,1,1,1,1,0,0,0};
double te = inform_transfer_entropy(xs, ys, NULL, 0, 1, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// te ~ 0.679270

Two initial conditions, no background:

inform_error err = INFORM_SUCCESS;
int const xs[18] = {1,0,0,0,0,1,1,1,1,
                    1,1,1,1,0,0,0,1,1};
int const ys[18] = {0,0,1,1,1,1,0,0,0,
                    1,0,0,0,0,1,1,1,0};
double te = inform_transfer_entropy(xs, ys, NULL, 0, 2, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// te ~ 0.693536

One initial condition, one background process:

inform_error err = INFORM_SUCCESS;
int const xs[9] = {0,1,1,1,1,0,0,0,0};
int const ys[9] = {0,0,1,1,1,1,0,0,0};
int const ws[9] = {0,1,1,1,1,0,1,1,1};
double te = inform_transfer_entropy(xs, ys, ws, 1, 1, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// te ~ 0.285714

One initial condition, two background processes:

inform_error err = INFORM_SUCCESS;
int const xs[9]  = {0,1,1,1,1,0,0,0,0};
int const ys[9]  = {0,0,1,1,1,1,0,0,0};
int const ws[18] = {1,0,1,0,1,1,1,1,1,
                    1,1,0,1,0,1,1,1,1};
double te = inform_transfer_entropy(xs, ys, ws, 2, 1, 9, 2, 2, &err);
assert(inform_succeeded(&err));
// te ~ 0.0

This example is interesting in that the two background processes together provide the same information about the target as the source process does, but neither does so on its own.

Header

inform/transfer_entropy.h

double *inform_local_transfer_entropy(int const *src, int const *dst,
        int const *back, size_t l, size_t n, size_t m, int b, size_t k,
        double *te, inform_error *err);

Compute the local transfer entropy with a history length k.

Examples:

One initial condition, no background:

inform_error err = INFORM_SUCCESS;
int const xs[9] = {0,1,1,1,1,0,0,0,0};
int const ys[9] = {0,0,1,1,1,1,0,0,0};
double *lte = inform_local_transfer_entropy(xs, ys, NULL, 0, 1, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// lte ~ {  1.000  0.000  0.585  0.585  1.585  0.000  1.000 }
free(lte);

Two initial conditions, no background:

inform_error err = INFORM_SUCCESS;
int const xs[18] = {1,0,0,0,0,1,1,1,1,
                    1,1,1,1,0,0,0,1,1};
int const ys[18] = {0,0,1,1,1,1,0,0,0,
                    1,0,0,0,0,1,1,1,0};
double *lte = inform_local_transfer_entropy(xs, ys, NULL, 0, 2, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// lte ~ {  1.322  0.000  0.737  0.737  1.322  0.000  0.737
//          0.000  0.737  0.737  1.322  0.000  0.737  1.322 }
free(lte);

One initial condition, one background process:

inform_error err = INFORM_SUCCESS;
int const xs[9] = {0,1,1,1,1,0,0,0,0};
int const ys[9] = {0,0,1,1,1,1,0,0,0};
int const ws[9] = {0,1,1,1,1,0,1,1,1};
double *lte = inform_local_transfer_entropy(xs, ys, ws, 1, 1, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// lte ~ {  1.000  0.000  0.000  0.000  0.000  0.000  1.000 }
free(lte);

One initial condition, two background processes:

inform_error err = INFORM_SUCCESS;
int const xs[9]  = {0,1,1,1,1,0,0,0,0};
int const ys[9]  = {0,0,1,1,1,1,0,0,0};
int const ws[18] = {1,0,1,0,1,1,1,1,1,
                    1,1,0,1,0,1,1,1,1};
double *lte = inform_local_transfer_entropy(xs, ys, ws, 2, 1, 9, 2, 2, NULL, &err);
assert(inform_succeeded(&err));
// lte ~ {  0.000  0.000  0.000  0.000  0.000  0.000  0.000 }
free(lte);
Header

inform/transfer_entropy.h

Utilities

Binning Time Series

double inform_range(double const *series, size_t n, double *min,
        double *max, inform_error *err);

Compute the range, minimum and maximum values in a floating-point time series.

Examples:

inform_error err = INFORM_SUCCESS;
double min, max, rng;
rng = inform_range(NULL, 6, &min, &max, &err);
assert(inform_failed(&err));
assert(rng == 0.0);

inform_error err = INFORM_SUCCESS;
double arr[6] = { 0.2, 0.5, -3.2, 1.8, 0.6, 2.3 };
double min, max, rng;
rng = inform_range(arr, 0, &min, &max, &err);
assert(inform_failed(&err));
assert(rng == 0.0);

inform_error err = INFORM_SUCCESS;
double arr[6] = { 0.2, 0.5, -3.2, 1.8, 0.6, 2.3 };
double min, max, rng;
rng = inform_range(arr, 6, &min, &max, &err);
// { min, max, rng } ~ { -3.2, 2.3, 5.5 }

inform_error err = INFORM_SUCCESS;
double arr[6] = { 0.2, 0.5, -3.2, 1.8, 0.6, 2.3 };
double rng = inform_range(arr, 6, NULL, NULL, &err);
// rng ~ 5.5
Headers

inform/utilities.h, inform/utilities/binning.h

double inform_bin(double const *series, size_t n, int b, int *binned,
        inform_error *err);

Bin a floating-point time series into a finite-state time series with b uniformly sized bins, and return the size of the bins.

If the size of each bin is too small, less than 10*ε, then all entries are placed in the same bin and an error is set. (ε is the double-precision machine epsilon.)

Examples:

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};
double bin_size = inform_bin(series, 6, 2, binned, &err);
assert(bin_size == 2.5);
// binned ~ {0,0,0,1,1,1}

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};
double bin_size = inform_bin(series, 6, 3, binned, &err);
assert(bin_size == 5.0/3.0);
// binned ~ {0,0,1,1,2,2}

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};
double bin_size = inform_bin(series, 6, 6, binned, &err);
assert(bin_size == 5.0/6.0);
// binned ~ {0,1,2,3,4,5}
Headers

inform/utilities.h, inform/utilities/binning.h

int inform_bin_step(double const *series, size_t n, double step,
        int *binned, inform_error *err);

Bin a floating-point time series into bins of a specified size step, and return the number of bins.

If the bin size is too small, less than 10*ε, then an error is set. (ε is the double-precision machine epsilon.)

Examples:

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};
int n = inform_bin_step(series, 6, 2.0, binned, &err);
assert(n == 3);
// binned ~ {0,0,1,1,2,2}

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};
int n = inform_bin_step(series, 6, 2.5, binned, &err);
assert(n == 3);
// binned ~ {0,0,0,1,1,2}

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};
int n = inform_bin_step(series, 6, 1.0, binned, &err);
assert(n == 6);
// binned ~ {0,1,2,3,4,5}
Headers

inform/utilities.h, inform/utilities/binning.h

int inform_bin_bounds(double const *series, size_t n,
        double const *bounds, size_t m, int *binned,
        inform_error *err);

Bin a floating-point time series according to a given set of m bin boundaries, bounds, and return the number of resulting bins.
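
Examples:

The original text provides no example here, so the following is a sketch. It assumes that each value is mapped to the index of the interval determined by the m boundaries, so that values below bounds[0] land in bin 0 and there are m + 1 bins in total:

inform_error err = INFORM_SUCCESS;
int binned[6];
double series[6] = {1,2,3,4,5,6};

// boundaries at 2.5 and 5.5 split the values into three bins (assumed semantics)
double bounds[2] = {2.5, 5.5};
int n = inform_bin_bounds(series, 6, bounds, 2, binned, &err);
// n ~ 3
// binned ~ {0,0,1,1,1,2}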
Headers

inform/utilities.h, inform/utilities/binning.h

Black-Boxing Time Series

It is often useful when analyzing complex systems to black-box components of the system. This process amounts to grouping components of the system and treating them as a single entity, without regard for their small-scale structure. As an example, suppose you have two Boolean random variables \(X\) and \(Y\), and you observe a time series of each simultaneously:

\[X:\ \{0,1,1,0,1,0,0,1\}\\ Y:\ \{1,0,0,1,1,0,1,0\}\]

It may be worthwhile to consider these observations as observations of the joint variable \((X,Y)\):

\[(X,Y):\ \{(0,1),(1,0),(1,0),(0,1),(1,1),(0,0),(0,1),(1,0)\}.\]

The joint observations can then be naturally encoded, for example, as base-4 states

\[(X,Y):\ \{1,2,2,1,3,0,1,2\}.\]

We refer to this process of mapping the observations of \(X\) and \(Y\) to the encoded observations of \((X,Y)\) as black boxing. In this case, the black boxing procedure is performed in "space" (you might think of \(X\) and \(Y\) having locations in space). We may also black box in time. The canonical example of this is considering the \(k\)-history of a random variable:

\[X:\ \{0,1,1,0,1,0,0,1\}\\ X^{(2)}:\ \{(0,1),(1,1),(1,0),(0,1),(1,0),(0,0),(0,1)\},\]

and the observations of \(X^{(2)}\) can be encoded as base-4 states:

\[X^{(2)}:\ \{1,3,2,1,2,0,1\}.\]

We provide a basic black-boxing function, [inform_black_box], that allows the user to black-box a collection of random variables in both space and time (into the future and the past). The [inform_black_box_parts] function allows the user to black-box time series based on a partitioning scheme (useful in the implementation of integration measures such as Evidence Of Integration).

int *inform_black_box(int const *series, size_t l, size_t n, size_t m,
        int const *b, size_t const *r, size_t const *s, int *box,
        inform_error *err);

Black-box a collection of l time series (each with shape (n,m)) with bases b into a single time series with base b[0]*...*b[l-1]. History lengths for each time series may be provided through r and future lengths through s.

Examples:

Example 1: Black-box two time series with no history or futures:

inform_error err = INFORM_SUCCESS;
int const series[16] = {
    0,1,1,0,1,0,0,1,
    1,0,0,1,1,0,1,0,
};
int b[2] = {2,2};

int box[8];
inform_black_box(series, 2, 1, 8, b, NULL, NULL, box, &err);
assert(!err);
// box ~ { 1 2 2 1 3 0 1 2 }

This is the first example described in Black-Boxing Time Series.

Example 2: Black-box a single time series in time with history length 2:

inform_error err = INFORM_SUCCESS;
int const series[8] = {
    0,1,1,0,1,0,0,1,
};
int b = 2;

size_t r = 2;
int box[7];
inform_black_box(series, 1, 1, 8, &b, &r, NULL, box, &err);
assert(!err);
// box ~ { 1 3 2 1 2 0 1 }

This is the second example described in Black-Boxing Time Series.

Example 3: Black-box two time series with histories and futures: In this example we consider two time series:

\[X:\ \{0,1,1,0,1,0,0,1\}\\ Y:\ \{1,0,0,1,1,0,1,0\}\]

and produce observations of \((X^{(2,0)},Y^{(1,1)})\)

\[(X^{(2,0)},Y^{(1,1)}):\ \{(0,1,0,0),(1,1,0,1),(1,0,1,1),(0,1,1,0),(1,0,0,1),(0,0,1,0)\}\]

encoded as

\[(X^{(2,0)},Y^{(1,1)}):\ \{4,13,11,6,9,2\}.\]
inform_error err = INFORM_SUCCESS;
int const series[16] = {
    0,1,1,0,1,0,0,1,
    1,0,0,1,1,0,1,0,
};
int b[2] = {2,2};

size_t r[2] = {2,1};
size_t s[2] = {0,1};
int box[6];
inform_black_box(series, 2, 1, 8, b, r, s, box, &err);
assert(!err);
// box ~ { 4 13 11 6 9 2 }
Headers

inform/utilities.h, inform/utilities/black_boxing.h

int *inform_black_box_parts(int const *series, size_t l, size_t n,
        int const *b, size_t const *parts, size_t nparts, int *box,
        inform_error *err);

Black-box l time series (each of length n) with bases b into nparts time series according to the partitioning scheme parts. The resulting time series and their bases are stored in box and returned. The box must have enough space to store the black-boxed time series AND the base of each of the resulting time series. That is, length(box) >= nparts * n + nparts. If box == NULL, then exactly enough space is allocated for the result.

See Partitioning Time Series for more information about partitioning schemes.

Examples:

Example 1: Black-box 4 time series (each of length 8) into a single time series

inform_error err = INFORM_SUCCESS;
int const series[32] = {
    0,1,1,0,1,0,0,1,
    1,0,0,1,1,0,1,0,
    0,0,0,1,1,1,0,0,
    1,0,1,0,1,1,1,0,
};
int b[4] = {2,2,2,2};

size_t parts[4] = {0,0,0,0};
size_t nparts = 1; // max(parts) + 1
int *box = inform_black_box_parts(series, 4, 8, b, parts, nparts, NULL, &err);
assert(!err);
// box ~ {
//   5 8 9 6 15 3 5 8   # the time series for the 0th (and only) partition
//   16                 # the base of the time series
// }
free(box);

This could have been done more simply using [inform_black_box], but it is illustrative.

Example 2: Black-box 4 time series (of length 8) into two time series using the partitioning scheme (0,1,1,0). That is, combine the 0th and 3rd time series, and the 1st and 2nd.

inform_error err = INFORM_SUCCESS;
int const series[32] = {
    0,1,1,0,1,0,0,1,
    1,0,0,1,1,0,1,0,
    0,0,0,1,1,1,0,0,
    1,0,1,0,1,1,1,0,
};
int b[4] = {2,2,2,2};

size_t parts[4] = {0,1,1,0};
size_t nparts = 2; // max(parts) + 1
int *box = inform_black_box_parts(series, 4, 8, b, parts, nparts, NULL, &err);
assert(!err);
// box ~ {
//   1 2 3 0 3 1 1 2    # the time series for the 0th partition
//   2 0 0 3 3 1 2 0    # the time series for the 1st partition
//   4 4                # the bases for the time series (in order)
// }

Note that the two time series each have a base of 4, and the bases are returned as the last two elements of box.

Example 3: Multivariate Mutual Information of Partitions

In this example, we compute the multivariate mutual information between 4 time series (each of length 8) for all partitionings of the system (except the very first, since it yields only a single time series). That is, each partition is treated as a separate variable, and the mutual information is computed between the partitioned variables.

#include <assert.h>
#include <inform/utilities.h>
#include <inform/mutual_info.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
        #define L 4
        #define N 8

        inform_error err = INFORM_SUCCESS;
        int const series[L*N] = {
            0,1,1,0,1,0,0,1,
            1,0,0,1,1,0,1,0,
            0,0,0,1,1,1,0,0,
            1,0,1,0,1,1,1,0,
        };
        int b[L] = {2,2,2,2};

        int box[L + L*N]; // make sure there's enough space for the finest partitioning

        size_t *parts = inform_first_partitioning(L);
        size_t nparts = 1;
        while((nparts = inform_next_partitioning(parts, L)))
        {
            inform_black_box_parts(series, L, N, b, parts, nparts, box, &err);
            assert(!err);

            int *bases = box + N*nparts;
            double mi = inform_mutual_info(box, nparts, N, bases, &err);
            assert(!err);
            printf("%0.3lf ", mi);
        }
        printf("\n");
}

prints

0.610 0.954 1.217 1.220 1.000 1.311 1.360 1.311 1.000 1.360 1.360 1.360 1.406 1.409

This example is tantalizingly close to an implementation of Evidence Of Integration.

Headers

inform/utilities.h, inform/utilities/black_boxing.h

Coalescing Time Series

Information measures over time series are invariant under any change of encoding that does not change the relative occurrence of each symbol. For example, the following time series have the same entropy:

\[A = \{5,2,2,5,2,5,5,2\} \\ B = \{1,0,0,1,0,1,1,0\}.\]

Each of the time series has an effective base of 2, but their apparent bases are different: 6 and 2, respectively. The result is that computations using \(A\) require \(\log_2 6 \approx 2.58\times\) more memory due to the larger potential state space. They also tend to run a bit slower, since many information-theoretic algorithms have runtimes that trend with the volume of the state space.

The inform_coalesce function maps a time series to a new time series whose effective base is the same as its apparent base: e.g. maps \(A \mapsto B\).

int inform_coalesce(int const *series, size_t n, int *coal,
        inform_error *err);

Reduce the apparent base of a time series to its effective base by removing "gaps" between observed states.

Examples:

inform_error err = INFORM_SUCCESS;
int const series[8] = {5,2,2,5,2,5,5,2}; // A
int coal[8];
int b = inform_coalesce(series, 8, coal, &err);
assert(!err);
assert(b == 2);
// coal ~ { 1 0 0 1 0 1 1 0 } // B

Note that we ensure that if \(a_i \leq a_j\), then \(b_i \leq b_j\) for all \(i,j\). This ensures that if the apparent base of the time series is the effective base, then the time series is unchanged.

inform_error err = INFORM_SUCCESS;
int const series[8] = {2,1,0,0,1,2,1,3};
int coal[8];
int b = inform_coalesce(series, 8, coal, &err);
assert(!err);
assert(b == 4);
// coal ~ { 2 1 0 0 1 2 1 3 } // the time series is unchanged
Headers

inform/utilities.h, inform/utilities/coalesce.h

Encoding/Decoding States

Many of Inform's implementations require that states be encoded as integers. Two functions, [inform_encode] and [inform_decode], handle encoding and decoding states to and from integers, respectively.

int32_t inform_encode(int const *state, size_t n, int b,
        inform_error *err);

Encode a base-b state with n digits as a 32-bit integer.

Examples:

Binary States:

inform_error err = INFORM_SUCCESS;
int32_t code;

code = inform_encode((int[]){1,0,0}, 3, 2, &err);
assert(!err && code == 4);

code = inform_encode((int[]){0,1,0}, 3, 2, &err);
assert(!err && code == 2);

code = inform_encode((int[]){1,0,1}, 3, 2, &err);
assert(!err && code == 5);

Base-4 States:

inform_error err = INFORM_SUCCESS;
int32_t code;

code = inform_encode((int[]){3,0,0}, 3, 4, &err);
assert(!err && code == 48);

code = inform_encode((int[]){0,3,0}, 3, 4, &err);
assert(!err && code == 12);

code = inform_encode((int[]){2,2,1}, 3, 4, &err);
assert(!err && code == 41);
Headers

inform/utilities.h, inform/utilities/encode.h

void inform_decode(int32_t encoding, int b, int *state, size_t n,
        inform_error *err);

Decode a 32-bit integer as a base-b state with n digits.

Examples:

Binary States:

inform_error err = INFORM_SUCCESS;
int state[3];

inform_decode(4, 2, state, 3, &err);
// state ~ { 1 0 0 }

inform_decode(2, 2, state, 3, &err);
// state ~ { 0 1 0 }

inform_decode(5, 2, state, 3, &err);
// state ~ { 1 0 1 }

Base-4 States:

inform_error err = INFORM_SUCCESS;
int state[3];

inform_decode(48, 4, state, 3, &err);
// state ~ { 3 0 0 }

inform_decode(12, 4, state, 3, &err);
// state ~ { 0 3 0 }

inform_decode(41, 4, state, 3, &err);
// state ~ { 2 2 1 }
Headers

inform/utilities.h, inform/utilities/encode.h

Partitioning Time Series

Many analyses of complex systems consider partitioning of the system into components or modules. One example of this is Evidence Of Integration.

For generality, we represent a partitioning of \(N\) items into \(1 \leq M \leq N\) partitions as a sequence of integers \(p_1, \ldots, p_N\) where \(0 \leq p_i < M\) is the partition to which the \(i\)-th item belongs.

As an example, suppose we partitioned \(\{X_1, X_2, X_3\}\) as \(\{\{X_1\}, \{X_2,X_3\}\}\). This would be represented as \((0,1,1)\) because \(X_1\) belongs to the zeroth partition and \(X_2,X_3\) belong to the first partition.

We provide two functions to facilitate the generation of all unique partitionings of a system of N elements: [inform_first_partitioning] and [inform_next_partitioning].

size_t *inform_first_partitioning(size_t n);

Return the first partitioning of n items. This is always the coarsest partitioning — the partitioning with a single partition. The returned array must be freed by the user.

Examples:

size_t *part = inform_first_partitioning(2);
assert(part);
// part ~ { 0 0 }
free(part);
size_t *part = inform_first_partitioning(5);
assert(part);
// part ~ { 0 0 0 0 0 }
free(part);
Headers

inform/utilities.h, inform/utilities/partitions.h

size_t inform_next_partitioning(size_t *xs, size_t n);

Provided a partitioning xs of n items, find the next partitioning. The old partitioning is overwritten by the new one, and the number of partitions is returned. Zero is returned if no more partitionings remain.

Examples:

Say we wish to partition 3 items:

#include <inform/utilities.h>
#include <stdio.h>
#include <stdlib.h>

void print_partitioning(size_t *part, size_t n)
{
    for (size_t i = 0; i < n; ++i) printf("%zu ", part[i]);
    printf("\n");
}

int main()
{
    size_t const size = 3;

    size_t *part = inform_first_partitioning(size);
    size_t npart = 1;

    print_partitioning(part, size);
    while ((npart = inform_next_partitioning(part, size)))
    {
        print_partitioning(part, size);
    }
    free(part);
}

We’ll get the following output

0 0 0 // {{X_1, X_2, X_3}}
0 0 1 // {{X_1, X_2}, {X_3}}
0 1 0 // {{X_1, X_3}, {X_2}}
0 1 1 // {{X_1}, {X_2, X_3}}
0 1 2 // {{X_1}, {X_2}, {X_3}}

You’ll notice that the number of partitionings is equal to the third Bell number \(B_3 = 5\).

We can compute the 9-th Bell number as follows:

size_t const size = 9;
size_t *part = inform_first_partitioning(size);
size_t npart = 1, bell_number = 1;
while ((npart = inform_next_partitioning(part, size)))
{
    bell_number += 1;
}
assert(bell_number == 21147);
free(part);
Headers

inform/utilities.h, inform/utilities/partitions.h

Random Time Series

It is sometimes useful to generate random time series, particularly when testing functions. Since we’ve implemented a few functions which take care of the memory allocation, etc., we decided to expose them.

void inform_random_seed();

Seed the random number generator based on the current clock time.
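
Example:

A minimal sketch; because the seed comes from the clock, the output varies from run to run:

inform_random_seed();
for (size_t i = 0; i < 10; ++i)
{
    printf("%d ", inform_random_int(0, 2));
}
printf("\n");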

Headers

inform/utilities.h, inform/utilities/random.h

int inform_random_int(int a, int b);

Generate a random integer in the range \([a,b)\).

Example:

srand(2018);
for (size_t i = 0; i < 10; ++i)
{
    printf("%d ", inform_random_int(0, 2));
}
printf("\n");

prints

1 1 1 0 1 0 0 0 1 1
Headers

inform/utilities.h, inform/utilities/random.h

int *inform_random_ints(int a, int b, size_t n);

Generate an array of n random integers in the range \([a,b)\).

Example:

srand(2018);
int *arr = inform_random_ints(0, 2, 10);
assert(arr);
for (size_t i = 0; i < 10; ++i)
{
    printf("%d ", arr[i]);
}
printf("\n");
free(arr);

prints

1 1 1 0 1 0 0 0 1 1
Headers

inform/utilities.h, inform/utilities/random.h

int *inform_random_series(size_t n, int b);

Generate a base-b "time series" of n time steps.

Example:

srand(2018);
int *series = inform_random_series(10, 2);
assert(series);
for (size_t i = 0; i < 10; ++i)
{
    printf("%d ", series[i]);
}
printf("\n");
free(series);

prints

1 1 1 0 1 0 0 0 1 1
Headers

inform/utilities.h, inform/utilities/random.h

Time Series to TPM

Some information measures are defined on transition probability matrices (TPM), e.g. Effective Information. For this reason, we provide a function [inform_tpm] which estimates the one-time-step TPM from time series data.

double *inform_tpm(int const *series, size_t n, size_t m, int b,
        double *tpm, inform_error *err);

Estimate the one-time-step transition probability matrix \(A\) from a time series. The element \(A_{ij}\) is the probability of transitioning to state \(j\) in the next time step given the system is in state \(i\).

Examples:

One initial condition:

inform_error err = INFORM_SUCCESS;
int const series[15] = {0,2,1,0,1,2,0,1,2,1,0,0,2,1,1};
double *tpm = inform_tpm(series, 1, 15, 3, NULL, &err);
assert(!err);
// tpm ~ { 0.20 0.40 0.40
//         0.40 0.20 0.40
//         0.25 0.75 0.00 }
free(tpm);

Multiple initial conditions:

inform_error err = INFORM_SUCCESS;
int const series[14] = {
    0,0,
    0,1,
    0,1,
    0,1,
    1,0,
    1,1,
    2,2,
};
double *tpm = inform_tpm(series, 7, 2, 3, NULL, &err);
assert(!err);
// tpm ~ { 0.25 0.75 0.00
//         0.50 0.50 0.00
//         0.00 0.00 1.00 }
free(tpm);
Headers

inform/utilities.h, inform/utilities/tpm.h

Empirical Distributions

The inform_dist struct provides an empirical distribution, i.e. a histogram, representing the observed frequencies of some fixed-size set of events. This structure is the basis for all of the fundamental information measures provided on discrete probability distributions.

The purpose of the inform_dist is to make writing custom implementations of information-theoretic measures possible. Most of the measures implemented in Inform use this structure at their core.

Distribution Type

typedef struct inform_distribution
{
    /// the histogram or array of observation frequencies
    uint32_t *histogram;
    /// the size of the support
    size_t size;
    /// the number of observations made so far
    uint64_t counts;
} inform_dist;

A distribution of observed event frequencies

This structure is the basis for almost all of the calculations performed via the library. It provides a means of managing observations and converting the observed event frequencies to probabilities.

A distribution can be allocated with a set number of observable events (inform_dist_alloc), resized (inform_dist_realloc), copied (inform_dist_copy) or duplicated (inform_dist_dup). Once the distribution has served its purpose, it can be free’d (inform_dist_free).

The distribution is, roughly, a histogram with finite support. The events are assumed to be mapped to the dense set of integers \(\{0,1,\ldots,N-1\}\) where \(N\) is the number of observable events. The number of observable events can be extracted with inform_dist_size.

Events can be observed one at a time (inform_dist_tick) or in batches (inform_dist_set). The occurrence frequency of a given event can be obtained (inform_dist_get) as can the total number of observations (inform_dist_counts).

Once the distribution is populated, the probabilities can be extracted either element-by-element (inform_dist_prob) or all at once into a provided array (inform_dist_dump).

Whenever the size of the support or the number of observed events is zero, the distribution is considered invalid, meaning that you can’t trust any probabilities extracted from it. One can use inform_dist_is_valid to assess the validity of the distribution.
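
As a minimal sketch of this life cycle, using only the functions documented in the sections below:

inform_dist *dist = inform_dist_alloc(2); // two observable events
assert(!inform_dist_is_valid(dist));      // no observations yet

inform_dist_tick(dist, 0);                // observe event 0 twice
inform_dist_tick(dist, 0);
inform_dist_tick(dist, 1);                // ... and event 1 once
assert(inform_dist_is_valid(dist));

double probs[2];
inform_dist_dump(dist, probs, 2);
// probs ~ { 0.667, 0.333 }

inform_dist_free(dist);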

Header

inform/dist.h

Allocation

inform_dist* inform_dist_alloc(size_t n);

Allocate a distribution with a specified support size.

The allocation will fail and return NULL if either n == 0 or the memory allocation fails for whatever reason.

Immediately following allocation, the distribution is invalid.

Examples:

inform_dist *dist = inform_dist_alloc(0);
assert(dist == NULL);
inform_dist *dist = inform_dist_alloc(2);
assert(dist != NULL);
assert(dist->size == 2);
assert(dist->counts == 0);
// dist->histogram ~ {0,0}
inform_dist_free(dist);
Header

inform/dist.h

inform_dist* inform_dist_realloc(inform_dist *dist, size_t n);

Resize the distribution to have new support.

If the distribution is NULL, a new distribution of the requested size is allocated.

If the distribution already has the requested size, then the original distribution is returned. If the requested size is greater than the current size, then the newly observable events are assigned zero frequency. If the requested size is less than the current size, then the discarded events are lost and the observation count is adjusted accordingly.

Note that the distribution cannot be reduced to size 0; if a size of 0 is requested, the distribution is left unmodified.

In the event that the reallocation fails, the original distribution is left untouched.

Examples:

inform_dist *dist = inform_dist_alloc(2);
assert(dist);
// dist->histogram ~ {0,0}

inform_dist *bigger = inform_dist_realloc(dist, 5);
assert(bigger);
// bigger->histogram ~ {0,0,0,0,0}

inform_dist *smaller = inform_dist_realloc(bigger, 3);
assert(smaller != NULL);
// smaller->histogram ~ {0,0,0}

inform_dist_free(smaller);
Header

inform/dist.h

inform_dist* inform_dist_copy(inform_dist const *src,
        inform_dist *dest);

Copy a distribution to a destination.

If the source distribution is NULL, then the destination is left untouched and NULL is returned.

In the event that the destination is NULL or has a different size from that of the source, the destination is reallocated, and the result is returned.

Examples:

inform_dist *source = inform_dist_alloc(5);
assert(source);
for (int i = 0; i < (int) source->size; ++i)
    source->histogram[i] = i;
// source->histogram ~ {0,1,2,3,4};

inform_dist *target = inform_dist_alloc(5);
assert(target);

inform_dist_copy(source, target);
// target->histogram ~ {0,1,2,3,4};

inform_dist_free(target);
inform_dist_free(source);
Header

inform/dist.h

inform_dist* inform_dist_dup(inform_dist const *dist);

Duplicate a distribution.

This function simply duplicates a distribution, essentially allocating a new distribution and copying the contents of the source to the new distribution.

If the allocation fails or the source distribution is NULL, then the return value is NULL.

Examples:

inform_dist *source = inform_dist_alloc(5);
assert(source);
for (int i = 0; i < (int) source->size; ++i)
    source->histogram[i] = i;
// source->histogram ~ {0,1,2,3,4};

inform_dist *target = inform_dist_dup(source);
assert(target);
// target->histogram ~ {0,1,2,3,4};

inform_dist_free(target);
inform_dist_free(source);
Header

inform/dist.h

inform_dist* inform_dist_create(uint32_t const *data, size_t n);

Create a distribution from an underlying histogram.

Examples:

inform_dist *dist = inform_dist_create((uint32_t[5]){0,1,2,3,4}, 5);
assert(dist);
// dist->histogram ~ {0,1,2,3,4}
inform_dist_free(dist);
Header

inform/dist.h

inform_dist* inform_dist_infer(int const *events, size_t n);

Infer a distribution from a collection of observed events.

Examples:

inform_dist *dist = inform_dist_infer((int[]){0,0,1,0,1}, 5);
assert(dist);
// dist->histogram ~ {3,2}
inform_dist_free(dist);
inform_dist *dist = inform_dist_infer((int[]){0,0,1,0,1,2,2,1}, 8);
// dist->histogram ~ {3,3,2}
inform_dist_free(dist);
Header

inform/dist.h

inform_dist* inform_dist_approximate(double const *probs, size_t n,
        double tol);

Approximate a given probability distribution to a given tolerance.

Examples:

double probs[3] = {0.5, 0.2, 0.3};
inform_dist *dist = inform_dist_approximate(probs, 3, 1e-3);
assert(dist);
// dist->histogram ~ {5, 2, 3}
inform_dist_free(dist);
double probs[2] = {1./3, 2./3};
inform_dist *dist = inform_dist_approximate(probs, 2, 1e-3);
assert(dist);
// dist->histogram ~ {1, 2}
inform_dist_free(dist);
double probs[4] = {1./3, 1./3, 1./6, 1./6};
inform_dist *dist = inform_dist_approximate(probs, 4, 1e-3);
assert(dist);
// dist->histogram ~ {333,333,166,166}
inform_dist_free(dist);
double probs[4] = {1./7, 2./7, 1./3, 10./42};
inform_dist *dist = inform_dist_approximate(probs, 4, 1e-3);
assert(dist);
// dist->histogram ~ {142,285,333,238}
inform_dist_free(dist);
Header

inform/dist.h

inform_dist* inform_dist_uniform(size_t n);

Create a uniform distribution of a given size.

Examples:

inform_dist *dist = inform_dist_uniform(0);
assert(dist == NULL);
inform_dist *dist = inform_dist_uniform(3);
assert(dist);
// dist->histogram ~ {1,1,1}
inform_dist_free(dist);
Header

inform/dist.h

Deallocation

void inform_dist_free(inform_dist *dist);

Free all dynamically allocated memory associated with a distribution.

Examples:

inform_dist *dist = NULL;
inform_dist_free(dist);
inform_dist *dist = inform_dist_alloc(3);
assert(dist);
inform_dist_free(dist);
Header

inform/dist.h

Accessors/Mutators

size_t inform_dist_size(inform_dist const *dist);

Get the size of the distribution’s support.

If the distribution is NULL, then a support of 0 is returned.

Examples:

inform_dist *dist = NULL;
assert(inform_dist_size(dist) == 0);
inform_dist *dist = inform_dist_uniform(5);
assert(dist);
assert(inform_dist_size(dist) == 5);
inform_dist_free(dist);
Header

inform/dist.h

uint64_t inform_dist_counts(inform_dist const *dist);

Get the total number of observations made so far.

If the distribution is NULL, then return 0.

Examples:

inform_dist *dist = NULL;
assert(inform_dist_counts(dist) == 0);
inform_dist *dist = inform_dist_uniform(5);
assert(dist);
assert(inform_dist_counts(dist) == 5);
inform_dist_set(dist, 2, 4);
inform_dist_set(dist, 3, 3);
assert(inform_dist_counts(dist) == 10);
inform_dist_free(dist);
Header

inform/dist.h

bool inform_dist_is_valid(inform_dist const *dist);

Determine whether or not the distribution is valid.

In order to safely extract probabilities, the distribution must be non-NULL, the size of the support must be non-zero and the number of observations must be non-zero. In any other case, the distribution is invalid.

Examples:

inform_dist *dist = NULL;
assert(!inform_dist_is_valid(dist));
inform_dist *dist = inform_dist_alloc(3);
assert(dist);
assert(!inform_dist_is_valid(dist));
inform_dist_free(dist);
inform_dist *dist = inform_dist_uniform(5);
assert(dist);
assert(inform_dist_is_valid(dist));
inform_dist_free(dist);
Header

inform/dist.h

uint32_t inform_dist_get(inform_dist const *dist, size_t event);

Get the number of occurrences of a given event.

If the distribution is NULL or the event is not in the support, 0 is returned.

Examples:

inform_dist *dist = NULL;
assert(inform_dist_get(dist, 0) == 0);
assert(inform_dist_get(dist, 1) == 0);
assert(inform_dist_get(dist, 2) == 0);
inform_dist *dist = inform_dist_create((uint32_t[]){3,2,1,0}, 4);
assert(dist);
assert(inform_dist_get(dist, 0) == 3);
assert(inform_dist_get(dist, 1) == 2);
assert(inform_dist_get(dist, 2) == 1);
assert(inform_dist_get(dist, 3) == 0);
inform_dist_free(dist);
Header

inform/dist.h

uint32_t inform_dist_set(inform_dist *dist, size_t event, uint32_t x);

Set the number of occurrences of a given event.

This function manually sets the number of occurrences of a given event. The value must simply be a non-negative integer; in particular, this function can be used to invalidate the distribution by setting all of the event frequencies to zero.

If the event is not in the support or the distribution is NULL, then nothing happens and zero is returned.

Examples:

inform_dist *dist = NULL;
assert(inform_dist_set(dist, 0, 5) == 0);
assert(inform_dist_get(dist, 0) == 0);
inform_dist *dist = inform_dist_alloc(2);

assert(inform_dist_set(dist, 0, 3) == 3);
assert(inform_dist_set(dist, 1, 7) == 7);

assert(inform_dist_get(dist, 0) == 3);
assert(inform_dist_get(dist, 1) == 7);

inform_dist_free(dist);
Header

inform/dist.h

uint32_t inform_dist_tick(inform_dist *dist, size_t event);

Increment the number of observations of a given event.

As an alternative to inform_dist_set, this function simply increments the number of occurrences of a given event. This is useful when iteratively observing events.

If the event is not in the support or the distribution is NULL, then nothing happens and zero is returned.

Examples:

inform_dist *dist = NULL;
assert(inform_dist_tick(dist, 0) == 0);
assert(inform_dist_tick(dist, 1) == 0);
inform_dist *dist = inform_dist_create((uint32_t[]){2,4}, 2);

assert(inform_dist_tick(dist, 0) == 3);
assert(inform_dist_tick(dist, 1) == 5);

assert(inform_dist_get(dist, 0) == 3);
assert(inform_dist_get(dist, 1) == 5);

inform_dist_free(dist);
Header

inform/dist.h

size_t inform_dist_accumulate(inform_dist *dist, int const *events,
        size_t n);

Accumulate observations from a series.

If an invalid distribution is provided, no events will be observed (0 will be returned). If an invalid event is encountered, then the number of valid events observed up to that point will be returned.

Examples:

int const events[5] = {0,0,1,0,1};
assert(inform_dist_accumulate(NULL, events, 5) == 0);
inform_dist *dist = inform_dist_create((uint32_t[]){1,2,3}, 3);
assert(dist);
assert(inform_dist_accumulate(dist, NULL, 5) == 0);
inform_dist_free(dist);
int const events[5] = {0,0,1,0,1};
inform_dist *dist = inform_dist_create((uint32_t[]){1,2}, 2);
assert(dist);
assert(inform_dist_accumulate(dist, events, 5) == 5);
// dist->histogram ~ { 4, 4 }
inform_dist_free(dist);
int const events[5] = {0,1,1,3,1};
inform_dist *dist = inform_dist_create((uint32_t[]){1,2}, 2);
assert(dist);
assert(inform_dist_accumulate(dist, events, 5) == 3);
// dist->histogram ~ { 2, 4 }
inform_dist_free(dist);
Header

inform/dist.h

Probabilities

double inform_dist_prob(inform_dist const *dist, size_t event);

Extract the probability of an event.

This function simply computes the probability of a given event and returns that value.

If the event is not in the support, the distribution is NULL, or no observations have yet been made, then a zero probability is returned.

Examples:

inform_dist *dist = NULL;
assert(inform_dist_prob(dist, 0) == 0.0);
inform_dist *dist = inform_dist_create((uint32_t[]){2,2,4}, 3);
assert(dist);
assert(inform_dist_prob(dist, 0) == 0.25);
assert(inform_dist_prob(dist, 1) == 0.25);
assert(inform_dist_prob(dist, 2) == 0.50);
inform_dist_free(dist);
Header

inform/dist.h

size_t inform_dist_dump(inform_dist const *dist, double *probs,
        size_t n);

Dump the probabilities of all events to an array.

This function computes the probabilities of all of the events in the support, stores them in the provided array, and returns the number of values written.

If the distribution is NULL, -1 is returned. If the destination is NULL, -2 is returned. If n is not equal to the size of the distribution’s support, -3 is returned.

Examples:

double probs[3];
assert(inform_dist_dump(NULL, probs, 3) == -1);
inform_dist *dist = inform_dist_create((uint32_t[]){2,2,4}, 3);
assert(dist);
assert(inform_dist_dump(dist, NULL, 3) == -2);
inform_dist_free(dist);
double probs[3];
inform_dist *dist = inform_dist_create((uint32_t[]){2,2,4}, 3);
assert(dist);
assert(inform_dist_dump(dist, probs, 2) == -3);
inform_dist_free(dist);
double probs[3];
inform_dist *dist = inform_dist_create((uint32_t[]){2,2,4}, 3);
assert(dist);
assert(inform_dist_dump(dist, probs, 3) == 3);
// probs ~ { 0.25, 0.25, 0.50 };
inform_dist_free(dist);
Header

inform/dist.h

Shannon Information Measures

The inform/shannon.h header provides a collection of entropy and information measures on discrete probability distributions (inform_dist). These functions provide the core of Inform, as all of the time series analysis functions are built upon them.

double inform_shannon_si(inform_dist const *dist, size_t event,
        double base);

Compute the Shannon self-information of an event given some distribution.

See [Shannon1948] for more details.
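
Example:

A minimal sketch: the self-information of an event \(x\) is \(-\log_b p(x)\), so an event with probability 0.25 carries 2 bits.

inform_dist *dist = inform_dist_create((uint32_t[]){1,1,2}, 3);
assert(dist);
// p ~ { 0.25, 0.25, 0.50 }
double si = inform_shannon_si(dist, 0, 2.0);
// si ~ 2.0, i.e. -log2(0.25)
inform_dist_free(dist);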

Header

inform/shannon.h

double inform_shannon_entropy(inform_dist const *dist, double base);

Compute the Shannon information of a distribution.

This function will return NaN if the distribution is not valid, i.e. !inform_dist_is_valid(dist).

See [Shannon1948] for more details.
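
Example:

A minimal sketch: a uniform distribution over four events has entropy \(\log_2 4 = 2\) bits.

inform_dist *dist = inform_dist_uniform(4);
assert(dist);
double h = inform_shannon_entropy(dist, 2.0);
// h ~ 2.0
inform_dist_free(dist);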

Header

inform/shannon.h

double inform_shannon_pmi(inform_dist const *joint,
        inform_dist const *marginal_x, inform_dist const *marginal_y,
        size_t event_joint, size_t event_x, size_t event_y,
        double base);

Compute the point-wise mutual information of a combination of events.
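
Example:

A minimal sketch for two perfectly correlated binary variables, assuming the joint events are encoded as \(2x + y\) and the value computed is \(\log_b p(x,y)/(p(x)p(y))\):

inform_dist *joint = inform_dist_create((uint32_t[]){1,0,0,1}, 4);
inform_dist *mx = inform_dist_create((uint32_t[]){1,1}, 2);
inform_dist *my = inform_dist_create((uint32_t[]){1,1}, 2);
double pmi = inform_shannon_pmi(joint, mx, my, 0, 0, 0, 2.0);
// pmi ~ 1.0, i.e. log2(0.5 / (0.5 * 0.5))
inform_dist_free(my);
inform_dist_free(mx);
inform_dist_free(joint);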

Header

inform/shannon.h

double inform_shannon_mi(inform_dist const *joint,
        inform_dist const *marginal_x, inform_dist const *marginal_y,
        double base);

Compute the Shannon-based mutual information of a distribution and two marginals.

This function will return NaN if inform_shannon_entropy returns NaN when applied to any of the distribution arguments.
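
Example:

A minimal sketch: for two perfectly correlated binary variables (joint events encoded as \(2x + y\)), the mutual information is \(H(X) + H(Y) - H(X,Y) = 1 + 1 - 1 = 1\) bit.

inform_dist *joint = inform_dist_create((uint32_t[]){1,0,0,1}, 4);
inform_dist *mx = inform_dist_create((uint32_t[]){1,1}, 2);
inform_dist *my = inform_dist_create((uint32_t[]){1,1}, 2);
double mi = inform_shannon_mi(joint, mx, my, 2.0);
// mi ~ 1.0
inform_dist_free(my);
inform_dist_free(mx);
inform_dist_free(joint);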

Header

inform/shannon.h

double inform_shannon_pce(inform_dist const *joint,
        inform_dist const *marginal, size_t event_joint,
        size_t event_marginal, double base);

Compute the point-wise conditional entropy of a combination of events.
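
Example:

A minimal sketch, assuming the value computed is \(-\log_b p(x,y)/p(y)\); for two independent, uniform binary variables, every point-wise conditional entropy is 1 bit.

inform_dist *joint = inform_dist_create((uint32_t[]){1,1,1,1}, 4);
inform_dist *marginal = inform_dist_create((uint32_t[]){2,2}, 2);
double pce = inform_shannon_pce(joint, marginal, 0, 0, 2.0);
// pce ~ 1.0, i.e. -log2(0.25 / 0.5)
inform_dist_free(marginal);
inform_dist_free(joint);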

Header

inform/shannon.h

double inform_shannon_ce(inform_dist const* joint,
        inform_dist const *marginal, double base);

Compute the Shannon-based conditional entropy of a joint distribution and a marginal.

This function will return NaN if inform_shannon_entropy returns NaN when applied to any of the distribution arguments.
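
Example:

A minimal sketch: for two perfectly correlated binary variables (joint events encoded as \(2x + y\)), \(H(X|Y) = H(X,Y) - H(Y) = 1 - 1 = 0\) bits.

inform_dist *joint = inform_dist_create((uint32_t[]){1,0,0,1}, 4);
inform_dist *marginal = inform_dist_create((uint32_t[]){1,1}, 2);
double ce = inform_shannon_ce(joint, marginal, 2.0);
// ce ~ 0.0
inform_dist_free(marginal);
inform_dist_free(joint);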

Header

inform/shannon.h

double inform_shannon_pcmi(inform_dist const *joint,
        inform_dist const *marginal_xz, inform_dist const *marginal_yz,
        inform_dist const *marginal_z, size_t event_joint,
        size_t event_marginal_xz, size_t event_marginal_yz,
        size_t event_marginal_z, double base);

Compute the point-wise conditional mutual information of a combination of events.

Header

inform/shannon.h

double inform_shannon_cmi(inform_dist const *joint,
        inform_dist const *marginal_xz, inform_dist const *marginal_yz,
        inform_dist const *marginal_z, double base);

Compute the conditional mutual information of a joint distribution, and the xz-, yz-, and z-marginals.
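
Example:

A minimal sketch: if \(X = Y = Z\) (binary and uniform), then \(I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z) = 1 + 1 - 1 - 1 = 0\) bits; once \(Z\) is known, \(X\) tells us nothing more about \(Y\).

inform_dist *joint = inform_dist_create((uint32_t[]){1,0,0,0,0,0,0,1}, 8);
inform_dist *xz = inform_dist_create((uint32_t[]){1,0,0,1}, 4);
inform_dist *yz = inform_dist_create((uint32_t[]){1,0,0,1}, 4);
inform_dist *z = inform_dist_create((uint32_t[]){1,1}, 2);
double cmi = inform_shannon_cmi(joint, xz, yz, z, 2.0);
// cmi ~ 0.0
inform_dist_free(z);
inform_dist_free(yz);
inform_dist_free(xz);
inform_dist_free(joint);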

Header

inform/shannon.h

double inform_shannon_pre(inform_dist const *p, inform_dist const *q,
        size_t event, double base);

Compute the point-wise relative entropy between two distributions with equal support size at some event.
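
Example:

A minimal sketch, assuming the point-wise value is \(\log_b p(x)/q(x)\):

inform_dist *p = inform_dist_create((uint32_t[]){3,1}, 2);
inform_dist *q = inform_dist_create((uint32_t[]){2,2}, 2);
double pre = inform_shannon_pre(p, q, 0, 2.0);
// pre ~ 0.585, i.e. log2(0.75 / 0.5)
inform_dist_free(q);
inform_dist_free(p);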

Header

inform/shannon.h

double inform_shannon_re(inform_dist const *p, inform_dist const *q,
        double base);

Compute the relative entropy between two distributions with equal support size.
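
Example:

A minimal sketch: with \(p \sim \{0.75, 0.25\}\) and \(q\) uniform, \(D(p\|q) = 0.75\log_2 1.5 + 0.25\log_2 0.5 \approx 0.189\) bits.

inform_dist *p = inform_dist_create((uint32_t[]){3,1}, 2);
inform_dist *q = inform_dist_create((uint32_t[]){2,2}, 2);
double re = inform_shannon_re(p, q, 2.0);
// re ~ 0.189
inform_dist_free(q);
inform_dist_free(p);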

Header

inform/shannon.h

double inform_shannon_cross(inform_dist const *p, inform_dist const *q,
        double base);

Compute the cross entropy between two distributions with equal support size.
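
Example:

A minimal sketch: the cross entropy \(-\sum_x p(x)\log_b q(x)\) against a uniform \(q\) is always \(\log_b\) of the support size, here 1 bit.

inform_dist *p = inform_dist_create((uint32_t[]){3,1}, 2);
inform_dist *q = inform_dist_create((uint32_t[]){2,2}, 2);
double cross = inform_shannon_cross(p, q, 2.0);
// cross ~ 1.0
inform_dist_free(q);
inform_dist_free(p);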

Header

inform/shannon.h

double inform_shannon_multi_pmi(inform_dist const *joint,
        inform_dist const **marginals, size_t n, size_t joint_event,
        size_t const *marginal_events, double base);

Compute the multivariate point-wise mutual information between a collection of distributions.

Header

inform/shannon.h

double inform_shannon_multi_mi(inform_dist const *joint,
        inform_dist const **marginals, size_t n, double base);

Compute the multivariate mutual information between a collection of distributions.
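
Example:

A minimal sketch, assuming the multivariate mutual information is the total correlation \(\sum_i H(X_i) - H(X_1,\ldots,X_n)\); three perfectly correlated binary variables then yield \(3 - 1 = 2\) bits.

inform_dist *joint = inform_dist_create((uint32_t[]){1,0,0,0,0,0,0,1}, 8);
inform_dist *marginals[3];
for (size_t i = 0; i < 3; ++i)
    marginals[i] = inform_dist_create((uint32_t[]){1,1}, 2);
double mi = inform_shannon_multi_mi(joint, (inform_dist const **) marginals, 3, 2.0);
// mi ~ 2.0
for (size_t i = 0; i < 3; ++i)
    inform_dist_free(marginals[i]);
inform_dist_free(joint);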

Header

inform/shannon.h

References