Abstract
Zipf's law is a hallmark of several complex systems with a modular structure,
such as books composed of words or genomes composed of genes. In these
component systems, Zipf's law describes the empirical power-law distribution of
component frequencies. Stochastic processes based on a sample-space-reducing
(SSR) mechanism, in which the number of accessible states reduces as the system
evolves, have recently been proposed as a simple explanation for the ubiquitous
emergence of this law. However, many complex component systems are
characterized by other statistical patterns beyond Zipf's law, such as a
sublinear growth of the component vocabulary with the system size, known as
Heaps' law, and specific statistics of shared components. This work shows,
with analytical calculations and simulations, that these statistical properties
can emerge jointly from an SSR mechanism, thus making it an appropriate
parameter-poor representation for component systems. Several alternative (and
equally simple) models, for example those based on the preferential-attachment
mechanism, can also reproduce Heaps' and Zipf's laws, suggesting that
additional statistical properties should be taken into account to select the
most likely generative process for a specific system. Along this line, we will
show that the temporal component distribution predicted by the SSR model is
markedly different from the one emerging from the popular rich-gets-richer
mechanism. A comparison with empirical data from natural language indicates
that the SSR process can be chosen as a better candidate model for text
generation based on this statistical property. Finally, a limitation of the SSR
model in reproducing the empirical "burstiness" of word appearances in texts
will be pointed out, thus indicating a possible direction for extensions of the
basic SSR process.
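The basic SSR mechanism described in the abstract can be illustrated with a short simulation: a walker starts at the top state and repeatedly jumps to a uniformly chosen lower state until it reaches state 1, after which the process restarts. The visit frequency of state i is then expected to follow Zipf's law, p(i) ∝ 1/i. The sketch below is an illustrative minimal implementation, not the authors' code; the state-space size `N` and the number of cascades are arbitrary choices for the demonstration.

```python
import random
from collections import Counter

def ssr_cascade(n):
    """One SSR cascade: starting from state n, jump to a uniformly
    chosen lower state until state 1 is reached; return the visited states."""
    visits = []
    state = n
    while state > 1:
        state = random.randint(1, state - 1)  # sample space shrinks at each step
        visits.append(state)
    return visits

# Accumulate visit counts over many restarted cascades
N = 1000
counts = Counter()
for _ in range(20000):
    counts.update(ssr_cascade(N))

# Visit frequencies should approximately follow Zipf's law p(i) ~ 1/i,
# so the ratio counts[i]/counts[1] should be close to 1/i
for i in (1, 2, 4, 8):
    print(i, counts[i] / counts[1])  # roughly 1, 0.5, 0.25, 0.125
```

Here the known result that a cascade started at N visits state i < N with probability 1/i makes the Zipf exponent -1 visible directly in the ratios printed above.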
Description
Heaps' law, statistics of shared components and temporal patterns from a
sample-space-reducing process