Inference of kinetic Ising model on sparse graphs
Based on the dynamical cavity method, we propose an approach to the inference of
the kinetic Ising model, which asks one to reconstruct couplings and external
fields from the given time-dependent output of the original system. Our approach
gives an exact result on tree graphs and a good approximation on sparse graphs;
it can be seen as an extension of Belief Propagation inference of the static
Ising model to the kinetic Ising model. While existing mean-field methods for
kinetic Ising inference, e.g., the naïve mean-field, TAP, and simple mean-field
equations, use approximations which compute magnetizations and correlations at a
given time step from statistics of the data at the previous time step, the
dynamical cavity method can use statistics of the data at earlier times to
capture correlations across different time steps. Extensive numerical
experiments show that our inference method is superior to existing mean-field
approaches on diluted networks.
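The kinetic Ising model discussed above evolves by parallel (synchronous) Glauber updates: each spin takes value +1 at the next step with probability 1/(1+exp(-2θ)), where θ is its local field. A minimal NumPy sketch (all sizes and variable names are mine, purely illustrative) of generating the time-dependent output that such inference methods take as input:

```python
import numpy as np

def kinetic_ising_step(s, J, h, rng):
    """One parallel Glauber update: every spin is redrawn simultaneously,
    flipping to +1 with probability sigmoid(2 * local field)."""
    theta = h + J @ s                             # local fields
    p_up = 1.0 / (1.0 + np.exp(-2.0 * theta))
    return np.where(rng.random(s.size) < p_up, 1, -1)

rng = np.random.default_rng(0)
n = 20
J = rng.normal(0, 1 / np.sqrt(n), (n, n))        # random, generally asymmetric couplings
h = rng.normal(0, 0.1, n)                        # external fields
s = rng.choice([-1, 1], n)
trajectory = [s]
for _ in range(1000):
    s = kinetic_ising_step(s, J, h, rng)
    trajectory.append(s)
data = np.array(trajectory)                      # (T+1, n) spin history for inference
```

The inference task is then to recover `J` and `h` from `data` alone.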
The Effect of Nonstationarity on Models Inferred from Neural Data
Neurons subject to a common non-stationary input may exhibit a correlated
firing behavior. Correlations in the statistics of neural spike trains also
arise as the effect of interaction between neurons. Here we show that these two
situations can be distinguished, with machine learning techniques, provided the
data are rich enough. In order to do this, we study the problem of inferring a
kinetic Ising model, stationary or nonstationary, from the available data. We
apply the inference procedure to two data sets: one from salamander retinal
ganglion cells and the other from a realistic computational cortical network
model. We show that many aspects of the concerted activity of the salamander
retinal neurons can be traced simply to the external input. A model of
non-interacting neurons subject to a non-stationary external field outperforms
a model with stationary input with couplings between neurons, even accounting
for the differences in the number of model parameters. When couplings are added
to the non-stationary model, for the retinal data, little is gained: the
inferred couplings are generally not significant. Likewise, the distribution of
the sizes of sets of neurons that spike simultaneously and the frequency of
spike patterns as a function of their rank (Zipf plots) are well explained by an
independent-neuron model with time-dependent external input, and adding
connections to such a model does not offer significant improvement. For the
cortical model data, robust couplings, well correlated with the real
connections, can be inferred using the non-stationary model. Adding connections
to this model slightly improves the agreement with the data for the probability
of synchronous spikes but hardly affects the Zipf plot. (In press in J. Stat.
Mech.)
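For the independent-neuron model with a time-dependent field that the abstract finds adequate for the retinal data, the maximum-likelihood field has a closed form: with ±1 units and P(s_i(t)=1) = 1/(1+exp(-2h_i(t))), the field in each time bin is the arctanh of the trial-averaged activity. A hedged sketch (the data layout and names are assumptions, not the authors' code):

```python
import numpy as np

def fit_nonstationary_fields(spikes):
    """spikes: (trials, T, N) array of +/-1 responses to a repeated stimulus.
    For non-interacting +/-1 units with P(s = 1) = 1 / (1 + exp(-2h)),
    the maximum-likelihood time-dependent field is h_i(t) = atanh(m_i(t)),
    where m_i(t) is the trial-averaged activity."""
    m = spikes.mean(axis=0)                      # (T, N) trial-averaged activity
    m = np.clip(m, -0.999, 0.999)                # regularize saturated bins
    return np.arctanh(m)

rng = np.random.default_rng(1)
# synthetic ground truth: a slow sinusoidal common drive to 4 units
h_true = np.sin(np.linspace(0, 2 * np.pi, 50))[:, None] * np.ones((1, 4))
p = 1.0 / (1.0 + np.exp(-2.0 * h_true))
spikes = np.where(rng.random((2000, 50, 4)) < p, 1, -1)
h_est = fit_nonstationary_fields(spikes)         # close to h_true for many trials
```

Comparing the likelihood of this model against a coupled stationary one (penalizing parameter counts) is the kind of model comparison the abstract describes.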
Effect of coupling asymmetry on mean-field solutions of direct and inverse Sherrington-Kirkpatrick model
We study how the degree of symmetry in the couplings influences the
performance of three mean field methods used for solving the direct and inverse
problems for generalized Sherrington-Kirkpatrick models. In this context, the
direct problem is predicting the potentially time-varying magnetizations. The
three theories include the first and second order Plefka expansions, referred
to as naive mean field (nMF) and TAP, respectively, and a mean field theory
which is exact for fully asymmetric couplings. We call the last of these simply
MF theory. We show that for the direct problem, nMF performs worse than the
other two approximations, TAP outperforms MF when the coupling matrix is nearly
symmetric, while MF works better when it is strongly asymmetric. For the
inverse problem, MF performs better than both TAP and nMF, although an ad hoc
adjustment of TAP can make it comparable to MF. At high temperatures, the
performances of TAP and MF approach each other.
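The simplest of these inversions, the nMF one, has a closed form for parallel dynamics: linearizing m_i(t+1) = tanh(Σ_j J_ij s_j(t)) gives D ≈ A J C, with C the equal-time covariance, D the one-step-delayed covariance, and A = diag(1 − m_i²), so J ≈ A⁻¹ D C⁻¹. A sketch of this reconstruction on synthetic asymmetric data (toy sizes and all names are mine):

```python
import numpy as np

def nmf_inverse_kinetic(data):
    """Naive mean-field reconstruction J ≈ A^{-1} D C^{-1} for parallel
    dynamics. data: (T, N) array of +/-1 spins."""
    m = data.mean(axis=0)
    ds = data - m
    C = ds.T @ ds / len(data)                    # equal-time covariance
    D = ds[1:].T @ ds[:-1] / (len(data) - 1)     # one-step delayed covariance
    A_inv = np.diag(1.0 / (1.0 - m ** 2))
    return A_inv @ D @ np.linalg.inv(C)

rng = np.random.default_rng(2)
n, T = 10, 20000
J_true = rng.normal(0, 0.3 / np.sqrt(n), (n, n))  # fully asymmetric couplings
s = rng.choice([-1, 1], n)
traj = np.empty((T, n))
for t in range(T):
    p_up = 1.0 / (1.0 + np.exp(-2.0 * (J_true @ s)))  # parallel Glauber step
    s = np.where(rng.random(n) < p_up, 1, -1)
    traj[t] = s
J_est = nmf_inverse_kinetic(traj)
corr = np.corrcoef(J_est.ravel(), J_true.ravel())[0, 1]  # high for long runs
```

TAP and the asymmetric MF theory of the abstract refine the diagonal factor A beyond this linearized form.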
Bump formation in a binary attractor neural network
This paper investigates the conditions for the formation of local bumps in
the activity of binary attractor neural networks with spatially dependent
connectivity. We show that these formations are observed when an asymmetry
between the activity during retrieval and during learning is imposed. An
analytical approximation for the order parameters is derived. The corresponding phase
diagram shows a relatively large and stable region, where this effect is
observed, although the critical storage and the information capacities
drastically decrease inside that region. We demonstrate that the stability of
the network, when starting from the bump formation, is even larger than the
stability when starting from the whole pattern. Finally, we show a very
good agreement between the analytical results and the simulations performed for
different topologies of the network.
Stimulus-dependent maximum entropy models of neural population codes
Neural populations encode information about their stimulus in a collective
fashion, by joint activity patterns of spiking and silence. A full account of
this mapping from stimulus to neural activity is given by the conditional
probability distribution over neural codewords given the sensory input. To be
able to infer a model for this distribution from large-scale neural recordings,
we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal
extension of the canonical linear-nonlinear model of a single neuron to a
pairwise-coupled neural population. The model is able to capture the
single-cell response properties as well as the correlations in neural spiking
due to shared stimulus and due to effective neuron-to-neuron connections. Here
we show that in a population of 100 retinal ganglion cells in the salamander
retina responding to temporal white-noise stimuli, dependencies between cells
play an important encoding role. As a result, the SDME model gives a more
accurate account of single cell responses and in particular outperforms
uncoupled models in reproducing the distributions of codewords emitted in
response to a stimulus. We show how the SDME model, in conjunction with static
maximum entropy models of population vocabulary, can be used to estimate
information-theoretic quantities like surprise and information transmission in
a neural population.
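In a pairwise maximum-entropy model, the probability of a binary codeword s ∈ {−1,+1}^N is P(s) ∝ exp(Σ_i h_i s_i + ½ Σ_ij J_ij s_i s_j); in the SDME extension the fields h_i become functions of the stimulus (via linear filters) while the couplings stay static. For small N the distribution can be enumerated directly; a sketch (function and variable names are mine, not the authors' code):

```python
import numpy as np
from itertools import product

def pairwise_maxent_probs(h, J):
    """Enumerate P(s) for a small pairwise maximum-entropy model over
    codewords s in {-1, +1}^N; h may be stimulus-dependent (SDME-style)."""
    states = np.array(list(product([-1, 1], repeat=len(h))))   # (2**N, N)
    energy = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    w = np.exp(energy)
    return states, w / w.sum()

h = np.array([0.2, -0.1, 0.0])       # e.g., filtered stimulus in one time bin
J = np.array([[0.0, 0.3, 0.0],
              [0.3, 0.0, -0.2],
              [0.0, -0.2, 0.0]])     # static symmetric couplings
states, p = pairwise_maxent_probs(h, J)
```

Quantities like surprise (−log P of the observed codeword) follow directly from such conditional distributions.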
U.S. stock market interaction network as learned by the Boltzmann Machine
We study the historical dynamics of the joint equilibrium distribution of stock
returns in the U.S. stock market using a Boltzmann distribution model
parametrized by external fields and pairwise couplings. Within the Boltzmann
learning framework for statistical inference, we analyze the historical behavior
of the parameters inferred using exact and approximate learning algorithms.
Since the model and inference methods require binary variables, the effect of
this mapping of continuous returns to the discrete domain is studied. The
presented analysis shows that binarization preserves the market correlation structure.
Properties of distributions of external fields and couplings as well as
industry sector clustering structure are studied for different historical dates
and moving window sizes. We find that a heavy positive tail in the
distribution of couplings is responsible for the sparse market clustering
structure. We also show that discrepancies between the model parameters might
be used as a precursor of financial instabilities.
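The binarization step mentioned above maps each continuous return to a ±1 spin by its sign; for Gaussian-like returns this shrinks pairwise correlations (to (2/π) arcsin ρ for a bivariate Gaussian) but preserves their sign and ordering, which is the sense in which the correlation structure survives. A toy check on synthetic data (simulated, not actual market returns):

```python
import numpy as np

def binarize_returns(returns):
    """Map continuous returns to +/-1 spins by their sign
    (zero mapped to +1), as in Ising-type market models."""
    return np.where(returns >= 0, 1, -1)

rng = np.random.default_rng(3)
# toy correlated "returns" for two stocks
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
r = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
s = binarize_returns(r)
rho_cont = np.corrcoef(r.T)[0, 1]
rho_bin = np.corrcoef(s.T)[0, 1]   # reduced in magnitude, same sign and ordering
```

The Boltzmann machine is then fit to the binarized matrix `s`, one spin per stock per day.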
Beyond inverse Ising model: structure of the analytical solution for a class of inverse problems
I consider the problem of deriving couplings of a statistical model from
measured correlations, a task which generalizes the well-known inverse Ising
problem. After recalling that such a problem can be mapped onto that of
expressing the entropy of a system as a function of its corresponding
observables, I show the conditions under which this can be done without
resorting to iterative algorithms. I find that inverse problems are local (the
inverse Fisher information is sparse) whenever the corresponding models have a
factorized form, and the entropy can be split into a sum of small cluster
contributions. I illustrate these ideas through two examples (the Ising model
on a tree and the one-dimensional periodic chain with arbitrary order
interaction) and support the results with numerical simulations. The extension
of these methods to more general scenarios is finally discussed.
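The tree example admits exactly the local, non-iterative solution alluded to above: on a tree, the entropy splits into edge and vertex cluster contributions, and each coupling depends only on the joint marginal of the two spins it connects, J_ij = ¼ log[(p₊₊ p₋₋)/(p₊₋ p₋₊)]. A sketch (the marginal layout is my convention):

```python
import numpy as np

def tree_coupling(p):
    """Coupling on a tree edge from the joint pairwise marginal,
    p[a, b] = P(s_i = a, s_j = b) with index 0 -> spin -1, 1 -> spin +1:
    J_ij = (1/4) * log[(p(+,+) p(-,-)) / (p(+,-) p(-,+))]."""
    return 0.25 * np.log((p[1, 1] * p[0, 0]) / (p[1, 0] * p[0, 1]))

# sanity check on a single edge with known coupling J = 0.5 and no fields
J = 0.5
Z = 2 * np.exp(J) + 2 * np.exp(-J)
p = np.array([[np.exp(J), np.exp(-J)],
              [np.exp(-J), np.exp(J)]]) / Z
recovered = tree_coupling(p)
```

The log-ratio is insensitive to single-site fields, which is why the formula stays exact edge by edge on a tree.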
Generalized mean field approximation for parallel dynamics of the Ising model
The dynamics of the non-equilibrium Ising model with parallel updates is investigated using a generalized mean field approximation that incorporates multiple two-site correlations at any two time steps, which can be obtained recursively. The proposed method shows significant improvement in predicting local system properties compared to other mean field approximation techniques, particularly in systems with symmetric interactions. Results are also evaluated against those obtained from Monte Carlo simulations. The method is also employed to obtain parameter values for the kinetic inverse Ising modeling problem, where couplings and local field values of a fully connected spin system are inferred from data. © 2014 IOP Publishing Ltd and SISSA Medialab srl
Statistical pairwise interaction model of stock market
Financial markets are a classical example of complex systems as they comprise
many interacting stocks. As such, we can obtain a surprisingly good description
of their structure by making the rough simplification of binary daily returns.
Spin glass models have been applied and have given some valuable results, but at
the price of restrictive assumptions on the market dynamics; other approaches
are agent-based models with rules designed to recover some empirical
behaviours. Here we show that the pairwise model is actually a statistically
consistent model of the observed first and second moments of the stocks'
orientations, without making such restrictive assumptions. This is done with an
approach based only on empirical data of price returns. Our data analysis of
six major indices suggests that the actual interaction structure may be thought
of as an Ising model on a complex network with interaction strengths scaling as
the inverse of the system size. This has potentially important implications
since many properties of such a model are already known and some techniques of
the spin glass theory can be straightforwardly applied. Typical behaviours, such
as multiple equilibria or metastable states, different characteristic time
scales, spatial patterns, or order-disorder phenomena, could find an explanation
in this picture.