Statistical pairwise interaction model of stock market
Financial markets are a classical example of complex systems as they comprise
many interacting stocks. As such, we can obtain a surprisingly good description
of their structure by making the rough simplification of binary daily returns.
Spin glass models have been applied and have yielded valuable results, but at
the price of restrictive assumptions on the market dynamics; other approaches
are agent-based models whose rules are designed to recover certain empirical
behaviours. Here we show that the pairwise model is a statistically consistent
model of the observed first and second moments of the stock orientations,
without making such restrictive assumptions. This is done with an
approach based only on empirical data of price returns. Our data analysis of
six major indices suggests that the actual interaction structure may be thought of
as an Ising model on a complex network with interaction strengths scaling as
the inverse of the system size. This has potentially important implications
since many properties of such a model are already known and some techniques of
the spin glass theory can be straightforwardly applied. Typical behaviours,
such as multiple equilibria, metastable states, different characteristic time
scales, spatial patterns, and order-disorder transitions, could find an
explanation in this picture.
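As a concrete illustration of the fitting problem described above (a generic sketch, not the paper's own pipeline), the code below matches the first and second moments of binarized returns with a pairwise maximum-entropy (Ising) model by exact Boltzmann learning. The `returns` array is a random stand-in for real price data, and exact enumeration over states is feasible only for a handful of stocks.

    # Minimal sketch: fit a pairwise maximum-entropy (Ising) model to
    # binarized daily returns by exact gradient ascent (small N only).
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.standard_normal((2000, 5))    # stand-in for real returns
    s_data = np.sign(returns)                   # binary orientations, +1/-1
    m_data = s_data.mean(axis=0)                # first moments <s_i>
    C_data = (s_data.T @ s_data) / len(s_data)  # second moments <s_i s_j>

    N = s_data.shape[1]
    states = np.array(list(itertools.product([-1, 1], repeat=N)))
    h = np.zeros(N)
    J = np.zeros((N, N))

    for _ in range(500):                        # Boltzmann learning loop
        logw = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(logw - logw.max()); p /= p.sum()   # model distribution
        m_model = p @ states
        C_model = states.T @ (p[:, None] * states)
        h += 0.1 * (m_data - m_model)           # match first moments
        J += 0.1 * (C_data - C_model)           # match second moments
        np.fill_diagonal(J, 0.0)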
Identification of functional information subgraphs in complex networks
We present a general information theoretic approach for identifying
functional subgraphs in complex networks where the dynamics of each node are
observable. We show that the uncertainty in the state of each node can be
expressed as a sum of information quantities involving a growing number of
correlated variables at other nodes. We demonstrate that each term in this sum
is generated by successively conditioning mutual informations on new measured
variables, in a way analogous to a discrete differential calculus. The analogy
to a Taylor series suggests efficient search algorithms for determining the
state of a target variable in terms of functional groups of other degrees of
freedom. We apply this methodology to electrophysiological recordings of
networks of cortical neurons grown in vitro. Despite strong stochasticity,
we show that each cell's patterns of firing are generally explained by the
activity of a small number of other neurons. We identify these neuronal
subgraphs in terms of their mutually redundant or synergistic character and
reconstruct neuronal circuits that account for the state of each target cell.
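The search idea can be made concrete with a small sketch (a generic greedy conditional-mutual-information search, not necessarily the authors' exact algorithm): neurons are added to the explanatory set one at a time, each maximizing the information it adds about the target given the neurons already chosen. The spike raster here is synthetic.

    # Greedy growth of an explanatory neuron set via plug-in estimates
    # of conditional mutual information on binary spike data.
    import numpy as np

    def entropy(X):
        """Plug-in entropy of the rows of X (samples x variables), in bits."""
        _, counts = np.unique(X, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def cond_mi(y, x, z):
        """I(y; x | z) from joint plug-in entropies."""
        cols = lambda *a: np.column_stack(a)
        if z.shape[1] == 0:
            return entropy(cols(y)) + entropy(cols(x)) - entropy(cols(y, x))
        return (entropy(cols(y, z)) + entropy(cols(x, z))
                - entropy(cols(y, x, z)) - entropy(cols(z)))

    rng = np.random.default_rng(1)
    spikes = (rng.random((5000, 8)) < 0.2).astype(int)  # stand-in raster
    spikes[:, 0] = spikes[:, 1] ^ spikes[:, 2]          # target driven by 1, 2

    target, chosen = 0, []
    candidates = set(range(1, spikes.shape[1]))
    for _ in range(3):                                  # grow subgraph greedily
        Z = spikes[:, chosen]
        gains = {c: cond_mi(spikes[:, [target]], spikes[:, [c]], Z)
                 for c in candidates}
        best = max(gains, key=gains.get)
        if gains[best] < 0.01:                          # stop at negligible gain
            break
        chosen.append(best); candidates.remove(best)
    print("explanatory neurons:", chosen)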
Redundant variables and Granger causality
We discuss the use of multivariate Granger causality in the presence of redundant
variables: the application of the standard analysis, in this case, leads to
under-estimation of causalities. Using the un-normalized version of the
causality index, we quantitatively develop the notions of redundancy and
synergy in the framework of causality and propose two approaches to grouping redundant
variables: (i) for a given target, the remaining variables are grouped so as to
maximize the total causality and (ii) the whole set of variables is partitioned
to maximize the sum of the causalities between subsets. We show an application
to a real neurological experiment, aiming at a deeper understanding of the
physiological basis of abnormal neuronal oscillations in the migraine brain.
The outcome of our approach reveals the change in the informational pattern due
to repetitive transcranial magnetic stimulation.
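A minimal sketch of the masking effect described above, using the un-normalized linear Granger causality index (a residual-variance difference) on toy data: two nearly identical drivers mask each other when conditioned on one another, but grouping them recovers the full causality. The variables and coefficients are illustrative, not the experimental data.

    # Un-normalized Granger index: delta(X -> y) =
    # var(residual without X) - var(residual with X).
    import numpy as np

    rng = np.random.default_rng(2)
    T = 5000
    drive = rng.standard_normal(T)
    x1 = drive + 0.05 * rng.standard_normal(T)   # two nearly identical
    x2 = drive + 0.05 * rng.standard_normal(T)   # (redundant) drivers
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.5 * y[t-1] + 0.8 * drive[t-1] + 0.1 * rng.standard_normal()

    def resid_var(target, predictors):
        """Variance of the one-step linear prediction residual of target."""
        X = np.column_stack([p[:-1] for p in predictors])
        beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
        return np.var(target[1:] - X @ beta)

    base = resid_var(y, [y])                     # past of y only
    print("x1 given x2 :", resid_var(y, [y, x2]) - resid_var(y, [y, x1, x2]))
    print("x1,x2 grouped:", base - resid_var(y, [y, x1, x2]))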
Shared Information -- New Insights and Problems in Decomposing Information in Complex Systems
How can the information that a set of random variables
contains about another random variable be decomposed? To what extent do
different subgroups provide the same (i.e., shared or redundant) information,
carry unique information, or interact to give rise to synergistic information?
Recently, Williams and Beer proposed such a decomposition based on natural
properties for shared information. While these properties fix the structure of
the decomposition, they do not uniquely specify the values of the different
terms. Therefore, we investigate additional properties such as strong symmetry
and left monotonicity. We find that strong symmetry is incompatible with the
properties proposed by Williams and Beer. Although left monotonicity is a very
natural property for an information measure, it is not fulfilled by any of the
proposed measures.
We also study a geometric framework for information decompositions and ask
whether it is possible to represent shared information by a family of posterior
distributions.
Finally, we draw connections to the notions of shared knowledge and common
knowledge in game theory. While many people believe that independent variables
cannot share information, we show that in game theory independent agents can
have shared knowledge, but not common knowledge. We conclude that intuition and
heuristic arguments do not suffice when arguing about information.
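For concreteness, a small sketch of the Williams-Beer redundancy measure I_min for two sources about a target, computed from an explicit joint distribution; the distribution `p` is a hypothetical example, not one from the paper.

    # Williams-Beer I_min(Y; {X1}, {X2}) from a joint table, in bits.
    import numpy as np
    from collections import defaultdict

    def i_min(p):
        """p: dict mapping (x1, x2, y) -> probability."""
        py = defaultdict(float)
        pxy = [defaultdict(float), defaultdict(float)]   # p(x_i, y)
        for (x1, x2, y), q in p.items():
            py[y] += q
            pxy[0][(x1, y)] += q
            pxy[1][(x2, y)] += q
        total = 0.0
        for y, qy in py.items():
            spec = []
            for i in range(2):
                px = defaultdict(float)                  # marginal p(x_i)
                for (x, yy), q in pxy[i].items():
                    px[x] += q
                s = sum((q / qy) * np.log2(q / (px[x] * qy))
                        for (x, yy), q in pxy[i].items() if yy == y and q > 0)
                spec.append(s)               # specific information I(Y=y; X_i)
            total += qy * min(spec)          # pointwise minimum over sources
        return total

    # Y copies X1, X2 is independent noise: nothing is redundant, I_min = 0.
    p = {(x1, x2, x1): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
    print(i_min(p))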
Prediction of spatiotemporal patterns of neural activity using a higher-order Markov representation of instantaneous pairwise maximum entropy model
Stimulus-dependent maximum entropy models of neural population codes
Neural populations encode information about their stimulus in a collective
fashion, by joint activity patterns of spiking and silence. A full account of
this mapping from stimulus to neural activity is given by the conditional
probability distribution over neural codewords given the sensory input. To be
able to infer a model for this distribution from large-scale neural recordings,
we introduce a stimulus-dependent maximum entropy (SDME) model: a minimal
extension of the canonical linear-nonlinear model of a single neuron to a
pairwise-coupled neural population. The model is able to capture the
single-cell response properties as well as the correlations in neural spiking
due to shared stimulus and due to effective neuron-to-neuron connections. Here
we show that in a population of 100 retinal ganglion cells in the salamander
retina responding to temporal white-noise stimuli, dependencies between cells
play an important encoding role. As a result, the SDME model gives a more
accurate account of single-cell responses and, in particular, outperforms
uncoupled models in reproducing the distributions of codewords emitted in
response to a stimulus. We show how the SDME model, in conjunction with static
maximum entropy models of population vocabulary, can be used to estimate
information-theoretic quantities like surprise and information transmission in
a neural population.
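A sketch of the model class described above, under stated assumptions: each cell receives a stimulus-dependent field h_i(s) from a linear filter, plus stimulus-independent pairwise couplings, and the conditional distribution over codewords is evaluated by enumeration (feasible only for small populations). Filters and couplings here are random stand-ins, not fitted parameters.

    # SDME-style conditional distribution P(words | stimulus).
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    N, D = 6, 20                       # neurons, stimulus dimensions
    k = rng.standard_normal((N, D))    # linear filters (one per cell)
    J = 0.1 * rng.standard_normal((N, N)); J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)           # no self-couplings

    def p_codewords(stimulus):
        """Distribution over all 2^N spike/silence words given the stimulus."""
        h = k @ stimulus                               # h_i(s), LN-style field
        words = np.array(list(itertools.product([0, 1], repeat=N)))
        logp = words @ h + 0.5 * np.einsum('ki,ij,kj->k', words, J, words)
        logp -= logp.max()
        p = np.exp(logp)
        return words, p / p.sum()

    words, p = p_codewords(rng.standard_normal(D))     # white-noise frame
    print("most likely codeword:", words[p.argmax()], p.max())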
Neuronal assembly dynamics in supervised and unsupervised learning scenarios
The dynamic formation of groups of neurons, known as neuronal assemblies, is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike-pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
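A minimal sketch of the kind of Kuramoto-style network the abstract alludes to (illustrative parameters, not the paper's model): a block-structured coupling matrix makes two groups of oscillators phase-lock internally, which is one simple operationalization of an assembly.

    # Two assemblies of Kuramoto phase oscillators, Euler-integrated:
    # dtheta_i/dt = omega_i + (1/N) sum_j K_ij sin(theta_j - theta_i).
    import numpy as np

    rng = np.random.default_rng(4)
    N, dt, steps = 10, 0.01, 5000
    omega = rng.normal(10.0, 1.0, N)          # natural frequencies (rad/s)
    K = np.zeros((N, N))                      # two built-in assemblies
    K[:5, :5] = 8.0
    K[5:, 5:] = 8.0
    theta = rng.uniform(0, 2 * np.pi, N)

    for _ in range(steps):
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / N
        theta += dt * (omega + coupling)

    # Order parameter per assembly: r near 1 means the group is phase-locked.
    for group in (slice(0, 5), slice(5, 10)):
        r = np.abs(np.exp(1j * theta[group]).mean())
        print("assembly synchrony:", round(r, 3))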
Expanding the Transfer Entropy to Identify Information Subgraphs in Complex Systems
We propose a formal expansion of the transfer entropy that highlights
irreducible sets of variables which provide information about the future state
of each assigned target. Multiplets characterized by a large contribution to
the expansion are associated with informational circuits present in the system,
with an informational character that can be read off from the sign of the
contribution. To keep the computational complexity manageable, we adopt the
assumption of Gaussianity and use the corresponding exact formula for the
conditional mutual information. We report the application of the proposed
methodology to two EEG data sets.
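Under the Gaussianity assumption, the conditional mutual information used in the expansion has a closed form in covariance determinants, I(X;Y|Z) = (1/2) log(|C_XZ| |C_YZ| / (|C_XYZ| |C_Z|)). A small sketch on synthetic data (not the EEG sets):

    # Exact Gaussian conditional mutual information from covariances.
    import numpy as np

    def gaussian_cmi(data, ix, iy, iz):
        """I(X; Y | Z) in nats for columns ix, iy, iz of data."""
        C = np.cov(data, rowvar=False)
        det = lambda idx: np.linalg.det(C[np.ix_(idx, idx)]) if idx else 1.0
        return 0.5 * np.log(det(ix + iz) * det(iy + iz)
                            / (det(ix + iy + iz) * det(iz)))

    rng = np.random.default_rng(5)
    z = rng.standard_normal(10000)
    x = z + 0.5 * rng.standard_normal(10000)
    y = z + 0.5 * rng.standard_normal(10000)
    data = np.column_stack([x, y, z])
    print(gaussian_cmi(data, [0], [1], []))   # large: x and y share z
    print(gaussian_cmi(data, [0], [1], [2]))  # ~0 once z is conditioned on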
Inference of kinetic Ising model on sparse graphs
Based on the dynamical cavity method, we propose an approach to the inference
of the kinetic Ising model, which asks one to reconstruct couplings and external
fields from the time-dependent output of the original system. Our approach
gives an exact result on tree graphs and a good approximation on sparse graphs;
it can be seen as an extension of Belief Propagation inference of the static
Ising model to the kinetic Ising model. While existing mean-field methods for
kinetic Ising inference, e.g., naïve mean-field, the TAP equation, and simple
mean-field, use approximations that calculate magnetizations and correlations
at time t from statistics of the data at time t-1, the dynamical cavity method
can use statistics of the data at times earlier than t-1 to capture more
correlations across different time steps. Extensive numerical experiments show
that our inference method is superior to existing mean-field approaches on
diluted networks.
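As a point of comparison, the naïve mean-field baseline mentioned above has a simple closed form for the couplings, J_nMF = A^{-1} D C^{-1}, where C and D are the equal-time and one-step-delayed correlation matrices and A_ii = 1 - m_i^2. A sketch on spins simulated from known couplings (toy network, zero external fields assumed):

    # Naive mean-field reconstruction of kinetic Ising couplings.
    import numpy as np

    rng = np.random.default_rng(6)
    N, T = 20, 50000
    J_true = rng.normal(0, 1 / np.sqrt(N), (N, N))

    s = np.ones((T, N))
    for t in range(1, T):                     # parallel Glauber dynamics:
        H = s[t-1] @ J_true.T                 # P(s_i = +1) = 1/(1 + e^{-2H_i})
        s[t] = np.where(rng.random(N) < 1 / (1 + np.exp(-2 * H)), 1.0, -1.0)

    m = s.mean(axis=0)
    ds = s - m
    C = ds.T @ ds / T                         # equal-time correlations
    D = ds[1:].T @ ds[:-1] / (T - 1)          # one-step delayed correlations
    A = np.diag(1 - m**2)
    J_nmf = np.linalg.inv(A) @ D @ np.linalg.inv(C)
    print("reconstruction corr.:",
          np.corrcoef(J_true.ravel(), J_nmf.ravel())[0, 1])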
The Effect of Nonstationarity on Models Inferred from Neural Data
Neurons subject to a common non-stationary input may exhibit a correlated
firing behavior. Correlations in the statistics of neural spike trains also
arise as the effect of interaction between neurons. Here we show that these two
situations can be distinguished, with machine learning techniques, provided the
data are rich enough. In order to do this, we study the problem of inferring a
kinetic Ising model, stationary or nonstationary, from the available data. We
apply the inference procedure to two data sets: one from salamander retinal
ganglion cells and the other from a realistic computational cortical network
model. We show that many aspects of the concerted activity of the salamander
retinal neurons can be traced simply to the external input. A model of
non-interacting neurons subject to a non-stationary external field outperforms
a model with stationary input with couplings between neurons, even accounting
for the differences in the number of model parameters. When couplings are added
to the non-stationary model, for the retinal data, little is gained: the
inferred couplings are generally not significant. Likewise, the distribution of
the sizes of sets of neurons that spike simultaneously and the frequency of
spike patterns as a function of their rank (Zipf plots) are well explained by
an independent-neuron model with time-dependent external input, and adding
connections to such a model does not offer significant improvement. For the
cortical model data, robust couplings, well correlated with the real
connections, can be inferred using the non-stationary model. Adding connections
to this model slightly improves the agreement with the data for the probability
of synchronous spikes but hardly affects the Zipf plot.
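The comparison described above can be sketched on synthetic data: an independent-neuron model with a time-dependent field (a PSTH-style rate) is scored against a stationary coupled logistic model on held-out trials. All names and parameters here are illustrative assumptions; with a shared time-varying drive and no true couplings, the nonstationary independent model should score higher.

    # Held-out log-likelihood: nonstationary independent vs. stationary coupled.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    trials, T, N = 200, 100, 10
    field = 0.4 * np.sin(np.linspace(0, 4 * np.pi, T))    # common drive
    rate = 1 / (1 + np.exp(-(field - 1.5)))               # shared rate r(t)
    spikes = (rng.random((trials, T, N)) < rate[None, :, None]).astype(float)
    train, test = spikes[:100], spikes[100:]

    # (a) independent neurons with a time-dependent field: PSTH rate r_i(t)
    r = train.mean(axis=0).clip(1e-3, 1 - 1e-3)           # shape (T, N)
    ll_ns = np.sum(test[:, 1:] * np.log(r[1:]) +
                   (1 - test[:, 1:]) * np.log(1 - r[1:]))

    # (b) stationary coupled model: logistic regression of s_i(t) on s(t-1)
    X_tr = train[:, :-1].reshape(-1, N)
    X_te = test[:, :-1].reshape(-1, N)
    ll_cp = 0.0
    for i in range(N):
        clf = LogisticRegression(max_iter=1000).fit(
            X_tr, train[:, 1:, i].reshape(-1))
        p = clf.predict_proba(X_te)[:, 1].clip(1e-9, 1 - 1e-9)
        y = test[:, 1:, i].reshape(-1)
        ll_cp += np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    print("nonstationary independent:", ll_ns)
    print("stationary coupled       :", ll_cp)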