
Graph Compression -- Save Information by Exploiting Redundancy

Added by Jie Sun
Publication date: 2007
Field: Physics
Language: English





In this paper we raise the question of how to compress sparse graphs. By introducing the notion of redundancy, we obtain a way to measure the overlap between the neighborhoods of nodes in a network. Exploiting this overlap, together with a purpose-built data structure, we analyze how much information is saved by shrinking the network, and we cast compression as an optimization problem over the possible choices of orbits. To find a reasonably good solution to this problem, we use a greedy algorithm to determine the orbit of symmetry identifications and thereby achieve compression. Example runs of our algorithm are illustrated and analyzed.
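As a rough illustration of the idea (not the paper's actual data structure or orbit machinery), the sketch below greedily merges the pair of nodes with the largest neighbor overlap, measured here by a Jaccard score, until no remaining pair overlaps enough:

```python
from itertools import combinations

def neighbor_overlap(adj, u, v):
    """Jaccard overlap of the neighbor sets of u and v (excluding each other)."""
    a, b = adj[u] - {v}, adj[v] - {u}
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def greedy_compress(adj, threshold=0.5):
    """Greedily identify the node pair with the largest neighbor overlap
    until no pair exceeds the threshold. Returns the shrunken adjacency
    and a record of identifications (merged node -> representative)."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    merges = {}
    while True:
        score, u, v = max(
            ((neighbor_overlap(adj, u, v), u, v)
             for u, v in combinations(sorted(adj), 2)),
            default=(0.0, None, None),
        )
        if u is None or score < threshold:
            break
        # identify v with u: u inherits v's neighbors
        adj[u] |= adj.pop(v) - {u, v}
        for nbrs in adj.values():
            if v in nbrs:
                nbrs.discard(v)
                nbrs.add(u)
        adj[u].discard(u)  # drop any self-loop created by the merge
        merges[v] = u
    return adj, merges
```

On a 4-cycle, where opposite nodes share both neighbors, two identifications shrink the graph to a single edge while the merge record retains what was identified.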



Related Research

Collective behavior, both in real biological systems and in theoretical models, often displays a rich combination of different kinds of order. A clear-cut and unique definition of phase based on the standard concept of an order parameter may therefore be complicated, and is made even trickier by the lack of thermodynamic equilibrium. Compression-based entropies have proved useful in recent years in describing the different phases of out-of-equilibrium systems. Here, we investigate the performance of a compression-based entropy, namely the Computable Information Density (CID), within the Vicsek model of collective motion. Our entropy is defined through a crude coarse-graining of the particle positions, in which the key role of velocities in the model enters only indirectly through the velocity-density coupling. We discover that this entropy is a valid tool for distinguishing the various noise regimes, including the crossover between the aligned and misaligned phases of the velocities, despite the fact that velocities are not used by the entropy. Furthermore, we unveil the subtle role of the time coordinate, unexplored in previous studies of the CID: a new encoding recipe, in which space and time locality are preserved on the same footing, is demonstrated to reduce the CID. This improvement is particularly significant when working with partial and/or corrupted data, as is often the case in real biological experiments.
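A minimal sketch of a CID-style measurement, assuming a crude occupancy-grid coarse-graining of particle positions and using zlib as a stand-in for the paper's compressor (the actual encoding recipes differ):

```python
import random
import zlib

def cid(bits):
    """Computable Information Density: compressed size over raw size.
    zlib here is an illustrative stand-in for the paper's compressor."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

def occupancy_grid(positions, box, cells):
    """Crude coarse-graining: a cell is 1 if it holds any particle."""
    grid = [0] * (cells * cells)
    for x, y in positions:
        i = min(int(x / box * cells), cells - 1)
        j = min(int(y / box * cells), cells - 1)
        grid[i * cells + j] = 1
    return grid

random.seed(0)
box, n, cells = 1.0, 200, 32
# disordered: uniform positions; ordered: particles in one tight cluster
disordered = [(random.random(), random.random()) for _ in range(n)]
clustered = [(0.5 + 0.05 * random.random(), 0.5 + 0.05 * random.random())
             for _ in range(n)]
print(cid(occupancy_grid(disordered, box, cells)))  # higher: hard to compress
print(cid(occupancy_grid(clustered, box, cells)))   # lower: highly ordered
```

The ordered configuration compresses far better than the disordered one, so its CID is lower, which is the qualitative signal used to separate noise regimes.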
We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, unique, and synergistic contributions. We focus on the relationship between redundant information and the more familiar information-theoretic notions of common information. Our main contribution is an impossibility result. We show that for independent predictor random variables, no common-information-based measure of redundancy can induce a nonnegative decomposition of the total mutual information. Interestingly, this entails that no reasonable measure of redundant information can be derived by optimization over a single random variable.
In this work, an iterative solution to building a network-lifetime-preserving sampling strategy for WSNs is presented. The paper describes the steps necessary to reconstruct a graph from application data. Once the graph structure is obtained, a sampling strategy is presented that aims to find the smallest number of concurrent sensors needed to reconstruct the data at the unsampled nodes within a specified error bound. An iterative method then divides the sensor nodes into sets to be sampled sequentially to increase lifetime. Results on a real-life dataset show that the reconstruction RMSE can easily be traded off for a larger number of disjoint sampling sets, which improves the network lifetime linearly.
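The greedy flavor of such a strategy can be sketched as follows; the mean-of-sampled-neighbors reconstruction and the function names are simplifications for illustration, not the paper's method:

```python
import math

def reconstruct(adj, values, sampled):
    """Estimate each unsampled node as the mean of its sampled neighbors
    (a crude stand-in for a proper graph-signal reconstruction)."""
    est = {}
    for u in adj:
        if u in sampled:
            est[u] = values[u]
        else:
            nbrs = [values[v] for v in adj[u] if v in sampled]
            est[u] = sum(nbrs) / len(nbrs) if nbrs else 0.0
    return est

def rmse(values, est):
    return math.sqrt(sum((values[u] - est[u]) ** 2 for u in values) / len(values))

def greedy_sample(adj, values, err_bound):
    """Grow the sampled set greedily: at each step add the node whose
    inclusion most reduces the reconstruction RMSE, until the bound holds."""
    sampled = set()
    while rmse(values, reconstruct(adj, values, sampled)) > err_bound:
        best = min(
            (u for u in adj if u not in sampled),
            key=lambda u: rmse(values, reconstruct(adj, values, sampled | {u})),
        )
        sampled.add(best)
    return sampled
```

On a small path graph with a linear signal, the procedure stops after selecting only a subset of nodes, trading reconstruction error for fewer concurrent sensors.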
Chulan Kwon (2016)
We investigate the thermodynamics of feedback processes driven by measurement. Regarding the system and memory device as a composite system, the mutual information, as a measure of correlation between the two constituents, contributes to the entropy of the composite system, which makes the generalized total entropy of the joint system and reservoir satisfy the second law of thermodynamics. We investigate the thermodynamics of the Szilard engine for an intermediate period before the completion of the cycle. We show the second law to hold, resolving the paradox of Maxwell's demon, independent of the period taken into account. We also investigate a feedback process that confines a particle tightly within a trap, operated by repeated feedback over a finite time interval. We derive the stability condition for multi-step feedback and find the condition for confinement below the thermal fluctuation present in the absence of feedback. The results depend on the interval between feedback steps and the intensity of the feedback protocol, which are expected to be important parameters in real experiments.
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks over the years. However, this comes at the cost of heavy computation and memory intensive network designs, suggesting potential improvements in efficiency. Convolutional layers of CNNs partly account for such an inefficiency, as they are known to learn redundant features. In this work, we exploit this redundancy, observing it as the correlation between convolutional filters of a layer, and propose an alternative approach to reproduce it efficiently. The proposed LinearConv layer learns a set of orthogonal filters, and a set of coefficients that linearly combines them to introduce a controlled redundancy. We introduce a correlation-based regularization loss to achieve such flexibility over redundancy, and control the number of parameters in turn. This is designed as a plug-and-play layer to conveniently replace a conventional convolutional layer, without any additional changes required in the network architecture or the hyperparameter settings. Our experiments verify that LinearConv models achieve a performance on-par with their counterparts, with almost a 50% reduction in parameters on average, and the same computational requirement and speed at inference.
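The parameter saving comes from learning a small bank of base filters plus a coefficient matrix that linearly combines them into the effective filters. The sketch below (function names are our own, and it omits the convolution itself and the paper's orthogonality regularizer) illustrates the parameter arithmetic and the linear combination:

```python
def linearconv_params(c_in, c_out, k, rank):
    """Parameter counts: a standard conv layer learns c_out filters of
    shape c_in x k x k directly; a LinearConv-style layer learns `rank`
    base filters plus a c_out x rank coefficient matrix that mixes them."""
    standard = c_out * c_in * k * k
    linear = rank * c_in * k * k + c_out * rank
    return standard, linear

def combine(bases, coeffs):
    """Effective filters as linear combinations of base filters.
    bases: rank x (flattened filter), coeffs: c_out x rank."""
    return [
        [sum(c * b[i] for c, b in zip(row, bases)) for i in range(len(bases[0]))]
        for row in coeffs
    ]

# with 64 input channels, 128 output channels, 3x3 kernels, rank 64,
# the mixed parameterization needs well under the standard count
std, lin = linearconv_params(64, 128, 3, 64)
print(std, lin)
```

Choosing the rank (number of base filters) below the output channel count is what controls the redundancy and, in turn, the parameter budget.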
