
Are networks with more edges easier to synchronize?

Added by Li Rong
Publication date: 2007
Language: English





In this paper, the relationship between the synchronizability of a network and the edge distribution of its associated graph is investigated. First, it is shown that adding one edge to a cycle definitely decreases the network synchronizability. Then, since synchronizability can sometimes be enhanced by changing the network structure, the question of whether networks with more edges are easier to synchronize is addressed. It is shown by examples that the answer is negative. This reveals that networks generally contain redundant edges, which not only make no contribution to synchronization but may actually reduce the synchronizability. Moreover, an example shows that node betweenness centrality is not always a good indicator of network synchronizability. Finally, further examples illustrate how the network synchronizability varies as edges are added: in all of them, the synchronizability increases globally but fluctuates locally as the number of added edges grows.
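The summary above does not state which synchronizability criterion is used; a common index is the eigenratio of the graph Laplacian, the ratio of its largest eigenvalue to its smallest non-zero eigenvalue, where a larger ratio generally means the network is harder to synchronize in the master-stability-function framework. The sketch below, assuming this eigenratio criterion and using networkx and numpy, compares a plain cycle with the same cycle plus one added edge so the stated effect can be checked numerically; the cycle length and the placement of the extra edge are arbitrary illustrative choices.

```python
import networkx as nx
import numpy as np

def eigenratio(G):
    """Synchronizability index: largest Laplacian eigenvalue divided by the
    smallest non-zero one (a smaller ratio is usually read as 'easier to sync')."""
    lam = np.sort(nx.laplacian_spectrum(G))
    return lam[-1] / lam[1]

N = 10                                   # illustrative cycle length
cycle = nx.cycle_graph(N)
cycle_plus_edge = cycle.copy()
cycle_plus_edge.add_edge(0, N // 2)      # one added "shortcut" edge

print(f"cycle:          eigenratio = {eigenratio(cycle):.3f}")
print(f"cycle + 1 edge: eigenratio = {eigenratio(cycle_plus_edge):.3f}")
```

Repeating this for a few cycle lengths and chord positions also shows how sensitive the index is to where the extra edge is placed.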



Related research

In the last decade, motivated by the success of Deep Learning, the scientific community proposed several approaches to make the learning procedure of Neural Networks more effective. When focussing on the way in which the training data are provided to the learning machine, we can distinguish between the classic random selection of stochastic gradient-based optimization and more involved techniques that devise curricula to organize data, and progressively increase the complexity of the training set. In this paper, we propose a novel training procedure named Friendly Training that, differently from the aforementioned approaches, involves altering the training examples in order to help the model to better fulfil its learning criterion. The model is allowed to simplify those examples that are too hard to be classified at a certain stage of the training procedure. The data transformation is controlled by a developmental plan that progressively reduces its impact during training, until it completely vanishes. In a sense, this is the opposite of what is commonly done in order to increase robustness against adversarial examples, i.e., Adversarial Training. Experiments on multiple datasets are provided, showing that Friendly Training yields improvements with respect to informed data sub-selection routines and random selection, especially in deep convolutional architectures. Results suggest that adapting the input data is a feasible way to stabilize learning and improve the generalization skills of the network.
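As a rough illustration only (not the authors' code), the sketch below shows one way the described mechanism could look in PyTorch: before each parameter update, the inputs are nudged by a few gradient steps that decrease the loss, i.e. the opposite of an adversarial perturbation, and the allowed perturbation budget decays to zero over the epochs so that the transformation eventually vanishes. The helper name `simplify` and the linear decay schedule are illustrative assumptions.

```python
import torch

def simplify(model, x, y, loss_fn, budget, steps=3, lr=0.1):
    """Nudge x toward lower loss within an L-infinity budget (hypothetical helper)."""
    x_easy = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x_easy), y)
        grad, = torch.autograd.grad(loss, x_easy)
        with torch.no_grad():
            x_easy -= lr * grad.sign()                        # descend on the input
            x_easy.copy_(x + (x_easy - x).clamp(-budget, budget))
    return x_easy.detach()

def friendly_training(model, loader, loss_fn, opt, epochs, max_budget=0.3):
    for epoch in range(epochs):
        # developmental plan: the allowed simplification fades linearly to zero
        budget = max_budget * (1 - epoch / max(1, epochs - 1))
        for x, y in loader:
            x_in = simplify(model, x, y, loss_fn, budget) if budget > 0 else x
            opt.zero_grad()
            loss_fn(model(x_in), y).backward()
            opt.step()
```

Note that the perturbation is applied only to the copy of the inputs used for the update, so evaluation always sees unmodified data.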
Modern machine learning models for computer vision exceed humans in accuracy on specific visual recognition tasks, notably on datasets like ImageNet. However, high accuracy can be achieved in many ways. The particular decision function found by a machine learning system is determined not only by the data to which the system is exposed, but also the inductive biases of the model, which are typically harder to characterize. In this work, we follow a recent trend of in-depth behavioral analyses of neural network models that go beyond accuracy as an evaluation metric by looking at patterns of errors. Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently-proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases. Attention-based networks have previously been shown to achieve higher accuracy than CNNs on vision tasks, and we demonstrate, using new metrics for examining error consistency with more granularity, that their errors are also more consistent with those of humans. These results have implications both for building more human-like vision models, as well as for understanding visual object recognition in humans.
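The paper's own finer-grained metrics are not described in this summary; as a baseline point of reference, the sketch below computes the standard trial-by-trial error-consistency score in the style of Cohen's kappa: observed agreement on correct/incorrect decisions versus the agreement expected from two independent observers with the same accuracies. The toy correctness vectors are made up for illustration.

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style error consistency between two observers' per-trial correctness."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                          # observed agreement
    p_a, p_b = a.mean(), b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)        # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)             # in [-1, 1]; 0 = chance level

# toy usage with hypothetical per-trial correctness vectors
human = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)
model = np.array([1, 1, 0, 0, 0, 1, 1, 1], dtype=bool)
print(f"error consistency = {error_consistency(human, model):.2f}")
```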
Lu Wang, Yu Song, Hong Huang (2021)
In the real world, networks often contain multiple relationships among nodes, manifested as heterogeneity of the edges in the networks. We convert the heterogeneous networks into multiple views by using each view to describe a specific type of relationship between nodes, so that we can leverage the collaboration of multiple views to learn the representation of networks with heterogeneous edges. Given this, we propose a regularized graph auto-encoders (RGAE) model, committed to utilizing the abundant information in multiple views to learn robust network representations. More specifically, RGAE designs shared and private graph auto-encoders as its main components to capture high-order nonlinear structural information of the networks. Besides, two loss functions serve as regularization to extract consistent and unique information, respectively. Concrete experimental results on realistic datasets indicate that our model outperforms state-of-the-art baselines in practical applications.
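Purely as a hypothetical sketch of the shared/private multi-view idea (not the RGAE implementation), the snippet below gives each view a one-layer GCN-style encoder, reconstructs each view's adjacency with an inner-product decoder, and adds two simple regularizers: one pulling the shared embeddings of different views together (consistency) and one decorrelating shared from private embeddings (uniqueness). All layer shapes, loss weights, and names are illustrative.

```python
import torch
import torch.nn.functional as F

class ViewEncoder(torch.nn.Module):
    """One propagation step: relu(A_norm X W), a minimal GCN-style encoder."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = torch.nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_norm, x):
        return torch.relu(a_norm @ self.w(x))

def recon_loss(z, adj):
    """Inner-product decoder: reconstruct a 0/1 adjacency (float tensor) from embeddings."""
    return F.binary_cross_entropy_with_logits(z @ z.t(), adj)

def rgae_style_loss(a_norms, adjs, x, shared, privates, lam=1e-2, mu=1e-2):
    zs_shared = [shared(a, x) for a in a_norms]
    zs_priv = [enc(a, x) for enc, a in zip(privates, a_norms)]
    # per-view reconstruction from the concatenated shared + private embedding
    loss = sum(recon_loss(torch.cat([zs, zp], dim=1), adj)
               for zs, zp, adj in zip(zs_shared, zs_priv, adjs))
    # consistency: shared embeddings of different views should agree
    loss = loss + lam * sum(F.mse_loss(zs_shared[i], zs_shared[j])
                            for i in range(len(zs_shared))
                            for j in range(i + 1, len(zs_shared)))
    # uniqueness: private parts should not duplicate the shared part
    loss = loss + mu * sum((zs.t() @ zp).pow(2).mean()
                           for zs, zp in zip(zs_shared, zs_priv))
    return loss
```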
Being fundamentally a non-equilibrium process, synchronization comes with unavoidable energy costs and has to be maintained under the constraint of limited resources. Such resource constraints are often reflected as a finite coupling budget available in a network to facilitate interaction and communication. Here, we show that introducing temporal variation in the network structure can lead to efficient synchronization even when stable synchrony is impossible in any static network under the given budget, thereby demonstrating a fundamental advantage of temporal networks. The temporal networks generated by our open-loop design are versatile in the sense of promoting synchronization for systems with vastly different dynamics, including periodic and chaotic dynamics in both discrete-time and continuous-time models. Furthermore, we link the dynamic stabilization effect of the changing topology to the curvature of the master stability function, which provides analytical insights into synchronization on temporal networks in general. In particular, our results shed light on the effect of network switching rate and explain why certain temporal networks synchronize only for intermediate switching rate.
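The following is a minimal toy experiment in the same spirit (not the paper's open-loop design): chaotic logistic maps coupled through a sparse graph that is redrawn every few steps, so the coupling seen over time is richer than any single snapshot. The 2-regular random topology, the switching period, and all parameters are illustrative assumptions; the printed spread across nodes indicates how close the ensemble is to synchrony.

```python
import numpy as np
import networkx as nx

def row_norm_adj(G):
    """Row-normalized adjacency: each node averages over its current neighbors."""
    A = nx.to_numpy_array(G)
    return A / A.sum(axis=1, keepdims=True)

n, steps, switch_every, eps, r = 20, 500, 5, 0.4, 4.0
rng = np.random.default_rng(0)
x = rng.random(n)
for t in range(steps):
    if t % switch_every == 0:                      # redraw the sparse coupling graph
        W = row_norm_adj(nx.random_regular_graph(2, n, seed=int(rng.integers(1_000_000))))
    fx = r * x * (1 - x)                           # chaotic logistic map f(x) = r x (1 - x)
    x = (1 - eps) * fx + eps * (W @ fx)            # diffusive coupling, stays in [0, 1]
print("synchronization error (std across nodes):", np.std(x))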
Quality of Service (QoS) in the IP world mainly manages forwarding resources, i.e., link capacities and buffer spaces. In addition, Information Centric Networking (ICN) offers resource dimensions such as in-network caches and forwarding state. In constrained wireless networks, these resources are scarce with a potentially high impact due to lossy radio transmission. In this paper, we explore the two basic service qualities (i) prompt and (ii) reliable traffic forwarding for the case of NDN. The resources we take into account are forwarding and queuing priorities, as well as the utilization of caches and of forwarding state space. We treat QoS resources not only in isolation, but correlate their use on local nodes and between network members. Network-wide coordination is based on simple, predefined QoS code points. Our findings indicate that coordinated QoS management in ICN is more than the sum of its parts and exceeds the impact QoS can have in the IP world.
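As a toy illustration of code-point-based coordination only (not the scheme evaluated in the paper), the sketch below tags Interests with a predefined QoS code point and has a forwarder serve "prompt" traffic before "reliable" and best-effort traffic; the code points, names, and priority mapping are all invented for the example.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

PRIORITY = {"prompt": 0, "reliable": 1, "default": 2}   # lower value = served first

@dataclass(order=True)
class QueuedInterest:
    priority: int
    seq: int                                  # tie-breaker: FIFO within a class
    name: str = field(compare=False)
    codepoint: str = field(compare=False)

class QoSForwarder:
    def __init__(self):
        self.queue, self._seq = [], count()

    def enqueue(self, name, codepoint="default"):
        prio = PRIORITY.get(codepoint, PRIORITY["default"])
        heapq.heappush(self.queue, QueuedInterest(prio, next(self._seq), name, codepoint))

    def forward_next(self):
        return heapq.heappop(self.queue) if self.queue else None

fwd = QoSForwarder()
fwd.enqueue("/sensor/temp/42", "reliable")
fwd.enqueue("/alarm/fire/7", "prompt")
fwd.enqueue("/logs/boot")
while (pkt := fwd.forward_next()) is not None:
    print(pkt.codepoint, pkt.name)
```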