Self-similarity in network traffic has been studied from several aspects: at both the user side and the network side there are many sources of long-range dependence. Recently, some dynamical origins have also been identified: the TCP adaptive congestion-avoidance algorithm itself can produce chaotic and long-range-dependent throughput behavior if the loss rate is very high. In this paper we show that there is a close connection between the static and dynamic origins of self-similarity: parallel TCPs can generate self-similarity themselves; they can introduce heavy fluctuations into the background traffic and produce a high effective loss rate that causes a long-range-dependent TCP flow, even though the dropped-packet ratio is low.
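As a hedged illustration of what a "long-range-dependent throughput" trace means in practice, the sketch below estimates the Hurst parameter with the standard aggregated-variance method; the synthetic trace and the block sizes are illustrative placeholders, not measurements from the paper.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    """Estimate the Hurst parameter of a series x via the aggregated-variance method.

    For a long-range-dependent trace, Var(X^(m)) ~ m^(2H - 2), so the slope of
    log Var(X^(m)) versus log m gives 2H - 2.
    """
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

# Toy usage with a synthetic trace (white noise should give H close to 0.5;
# a long-range-dependent throughput trace would give H well above 0.5).
rng = np.random.default_rng(0)
throughput = rng.normal(size=100_000)
print(hurst_aggregated_variance(throughput, block_sizes=[10, 20, 50, 100, 200, 500]))
```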
Textures in images can often be well modeled using self-similar processes, while at the same time they may display anisotropy. The present contribution thus aims at studying self-similarity and anisotropy jointly, by focusing on a specific classical class of Gaussian anisotropic self-similar processes. It is first shown that accurate joint estimates of the anisotropy and self-similarity parameters can be obtained by replacing the standard 2D discrete wavelet transform with the hyperbolic wavelet transform, which permits the use of different dilation factors along the horizontal and vertical axes. Defining anisotropy requires a reference direction that need not a priori match the horizontal and vertical axes along which the image is digitized; this discrepancy defines a rotation angle. Second, we show that this rotation angle can be estimated jointly. Third, a nonparametric bootstrap-based procedure is described that provides confidence intervals in addition to the estimates themselves and enables the construction of an isotropy test that can be applied to a single texture image. Fourth, the robustness and versatility of the proposed analysis are illustrated by applying it to a large variety of isotropic and anisotropic self-similar fields. As an illustration, we show that a truly anisotropic, built-in self-similarity can be disentangled from an isotropic self-similarity on which an anisotropic trend has been superimposed.
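To make the role of the hyperbolic wavelet transform concrete, the following minimal sketch computes tensor-product Haar wavelet coefficients with independent dilation factors along the two image axes; the Haar filter, the scale grid, and the white-noise test image are illustrative assumptions, not the estimation pipeline used in the paper.

```python
import numpy as np

def haar_detail(x, level, axis):
    """Haar detail coefficients at a given dyadic level along one axis."""
    x = np.moveaxis(np.asarray(x, dtype=float), axis, 0)
    for _ in range(level - 1):
        n = (x.shape[0] // 2) * 2
        x = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # approximation (low-pass) part
    n = (x.shape[0] // 2) * 2
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)       # detail (high-pass) part
    return np.moveaxis(d, 0, axis)

def hyperbolic_coefficients(image, j1, j2):
    """Tensor-product (hyperbolic) wavelet coefficients with dilation 2**j1
    along the first image axis and 2**j2 along the second, applied independently."""
    return haar_detail(haar_detail(image, j1, axis=0), j2, axis=1)

# Toy usage: the scaling of log2 of the mean squared coefficient over a grid of
# scales (j1, j2) is the kind of quantity from which self-similarity and
# anisotropy parameters are read off.
rng = np.random.default_rng(1)
img = rng.normal(size=(256, 256))
for j1 in range(1, 4):
    for j2 in range(1, 4):
        c = hyperbolic_coefficients(img, j1, j2)
        print(j1, j2, np.log2(np.mean(c ** 2)))
```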
The $k$-power domination problem is a problem in graph theory with applications in many areas. However, it is hard to compute the exact $k$-power domination number, since determining the $k$-power domination number of a generic graph is an NP-complete problem. We determine the exact $k$-power domination number in two graphs that have the same number of vertices and edges: the pseudofractal scale-free web and the Sierpinski gasket. The $k$-power domination number becomes 1 for $k \ge 2$ in the Sierpinski gasket, while in the pseudofractal scale-free web it increases exponentially with the number of vertices. The scale-free property may account for the difference in the behavior of the two graphs.
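The propagation rule behind $k$-power domination can be checked directly. The sketch below tests whether a given set is a $k$-power dominating set, using the standard domination and propagation steps, and applies it to a small graph built like the level-1 Sierpinski gasket; the vertex labelling and the choice of test set are illustrative and not taken from the paper.

```python
import networkx as nx

def is_k_power_dominating(G, S, k):
    """Check whether S is a k-power dominating set of G.

    Domination step: every vertex in the closed neighborhood of S is observed.
    Propagation step: an observed vertex with at most k unobserved neighbors
    observes all of its neighbors; repeat until nothing changes.
    """
    observed = set(S)
    for v in S:
        observed.update(G.neighbors(v))
    changed = True
    while changed:
        changed = False
        for v in list(observed):
            unobserved = [u for u in G.neighbors(v) if u not in observed]
            if unobserved and len(unobserved) <= k:
                observed.update(unobserved)
                changed = True
    return len(observed) == G.number_of_nodes()

# Toy usage: level-1 Sierpinski-gasket-like graph with outer vertices 0, 1, 2
# and midpoint vertices 3 (between 0 and 1), 4 (between 1 and 2), 5 (between 0 and 2).
G = nx.Graph([(0, 3), (0, 5), (3, 5),
              (1, 3), (1, 4), (3, 4),
              (2, 4), (2, 5), (4, 5)])
print(is_k_power_dominating(G, S={0}, k=2))   # a single vertex suffices for k >= 2
```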
Recommender systems are significant tools for helping people cope with information explosion and overload. In this Letter, we develop a general framework named self-consistent refinement and implement it by embedding two representative recommendation algorithms: similarity-based and spectrum-based methods. Numerical simulations on a benchmark data set demonstrate that the present method converges fast and provides considerably better performance than the standard methods.
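A minimal sketch of the self-consistency idea for the similarity-based variant is given below: predicted scores are fed back to recompute item-item similarities until the predictions stop changing. The cosine similarity, the weighting, and the toy rating matrix are assumptions made for illustration only, not the exact update rule of the Letter.

```python
import numpy as np

def self_consistent_similarity_recommendation(R, n_iter=20, tol=1e-6):
    """Hedged sketch of a self-consistent similarity-based recommender.

    R is a user-item matrix with observed ratings > 0 and zeros for unknowns.
    Predictions are fed back to recompute item-item similarities until they
    stop changing (self-consistency); this is an illustrative loop only.
    """
    R = np.asarray(R, dtype=float)
    known = R > 0
    P = R.copy()
    for _ in range(n_iter):
        # Cosine similarity between item columns of the current estimate.
        norms = np.linalg.norm(P, axis=0, keepdims=True) + 1e-12
        S = (P.T @ P) / (norms.T @ norms)
        np.fill_diagonal(S, 0.0)
        # Similarity-weighted prediction for every user-item pair.
        P_new = (P @ S) / (np.abs(S).sum(axis=0, keepdims=True) + 1e-12)
        P_new[known] = R[known]            # keep the observed entries fixed
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    return P

# Toy usage with a 4-user x 3-item rating matrix (0 marks unknown ratings).
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 2, 5],
              [1, 0, 4]])
print(np.round(self_consistent_similarity_recommendation(R), 2))
```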
Despite the tremendous success of Stochastic Gradient Descent (SGD) algorithm in deep learning, little is known about how SGD finds generalizable solutions in the high-dimensional weight space. By analyzing the learning dynamics and loss function landscape, we discover a robust inverse relation between the weight variance and the landscape flatness (inverse of curvature) for all SGD-based learning algorithms. To explain the inverse variance-flatness relation, we develop a random landscape theory, which shows that the SGD noise strength (effective temperature) depends inversely on the landscape flatness. Our study indicates that SGD attains a self-tuned landscape-dependent annealing strategy to find generalizable solutions at the flat minima of the landscape. Finally, we demonstrate how these new theoretical insights lead to more efficient algorithms, e.g., for avoiding catastrophic forgetting.
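The two quantities behind the inverse relation can be probed on a toy problem. The sketch below runs SGD on a small logistic-regression task (an assumption made purely for illustration, not the deep networks studied in the paper), measures the weight variance along the principal directions of the SGD fluctuations, and estimates flatness as the inverse curvature of the loss along the same directions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary-classification data; purely a schematic probe.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

def loss(w, xb, yb):
    p = 1.0 / (1.0 + np.exp(-(xb @ w)))
    return -np.mean(yb * np.log(p + 1e-12) + (1 - yb) * np.log(1 - p + 1e-12))

def grad(w, xb, yb):
    p = 1.0 / (1.0 + np.exp(-(xb @ w)))
    return xb.T @ (p - yb) / len(yb)

# Run SGD and record the weight trajectory after a burn-in period.
w = np.zeros(10)
lr, batch, trajectory = 0.1, 16, []
for step in range(20_000):
    idx = rng.integers(0, len(X), size=batch)
    w -= lr * grad(w, X[idx], y[idx])
    if step > 10_000:
        trajectory.append(w.copy())
W = np.array(trajectory)

# PCA of the SGD fluctuations: weight variance along each principal direction.
C = np.cov((W - W.mean(axis=0)).T)
variances, directions = np.linalg.eigh(C)

# Flatness along the same directions, estimated as the inverse of the loss
# curvature obtained from a symmetric finite difference around the mean weight.
w0, eps = W.mean(axis=0), 1e-2
for var, d in zip(variances[::-1], directions.T[::-1]):
    curvature = (loss(w0 + eps * d, X, y) - 2 * loss(w0, X, y)
                 + loss(w0 - eps * d, X, y)) / eps**2
    flatness = 1.0 / max(curvature, 1e-12)
    print(f"variance={var:.3e}  flatness={flatness:.3e}")
```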
In recent years, much effort has been devoted to collision avoidance among collectively moving agents. In this paper, we propose a modified version of the Vicsek model with adaptive speed, which can guarantee the absence of collisions. However, this strategy alone leads to an aggregated state of slowly moving agents. We therefore further introduce a certain repulsion, which results in both faster consensus and a longer safe distance among agents, and thus provides a powerful mechanism for collective motion in biological and technological multi-agent systems.
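A minimal sketch of a Vicsek-type update with adaptive speed and short-range repulsion is given below; the specific speed law, repulsion strength, and noise form are illustrative assumptions rather than the exact rules of the model proposed here.

```python
import numpy as np

def step(pos, theta, L=10.0, r=1.0, r_rep=0.3, v_max=0.05, eta=0.1, rng=None):
    """One update of a Vicsek-style model with adaptive speed and repulsion.

    Headings align with neighbors within radius r (plus angular noise eta);
    each agent's speed shrinks with the distance to its nearest neighbor so
    that agents slow down before they collide; a short-range repulsion term
    pushes very close neighbors apart. All choices are illustrative.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= L * np.round(diff / L)                 # periodic boundary conditions
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)

    # Heading alignment with neighbors within radius r, plus angular noise.
    neigh = dist < r
    np.fill_diagonal(neigh, True)
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(axis=1)
    theta_new = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, n)

    # Adaptive speed: slow down when the nearest neighbor is close.
    nearest = dist.min(axis=1)
    speed = v_max * np.clip(nearest / r, 0.0, 1.0)

    # Short-range repulsion away from neighbors closer than r_rep.
    too_close = dist < r_rep
    repulsion = (diff * too_close[..., None]).sum(axis=1)

    vel = speed[:, None] * np.stack([np.cos(theta_new), np.sin(theta_new)], axis=1)
    pos_new = (pos + vel + 0.05 * repulsion) % L
    return pos_new, theta_new

# Toy usage: 100 agents on a periodic square of side L = 10.
rng = np.random.default_rng(3)
pos = rng.uniform(0, 10.0, size=(100, 2))
theta = rng.uniform(-np.pi, np.pi, size=100)
for _ in range(500):
    pos, theta = step(pos, theta, rng=rng)
print("order parameter:", np.abs(np.mean(np.exp(1j * theta))))
```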