
Clustering is difficult only when it does not matter

Added by Amit Daniely
Publication date: 2012
Language: English





Numerous papers ask how difficult it is to cluster data. We suggest that the more relevant and interesting question is how difficult it is to cluster data sets that can be clustered well. More generally, despite the ubiquity and the great importance of clustering, we still do not have a satisfactory mathematical theory of clustering. In order to properly understand clustering, it is clearly necessary to develop a solid theoretical basis for the area. For example, from the perspective of computational complexity theory the clustering problem seems very hard. Numerous papers introduce various criteria and numerical measures to quantify the quality of a given clustering. The resulting conclusions are pessimistic, since it is computationally difficult to find an optimal clustering of a given data set under any of these popular criteria. In contrast, the practitioner's perspective is much more optimistic. Our explanation for this disparity of opinions is that complexity theory concentrates on the worst case, whereas in reality we only care about data sets that can be clustered well. We introduce a theoretical framework of clustering in metric spaces that revolves around a notion of good clustering. We show that if a good clustering exists, then in many cases it can be efficiently found. Our conclusion is that, contrary to popular belief, clustering should not be considered a hard task.
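
To make the notion of a well-clusterable data set concrete, here is a minimal Python sketch that scores a candidate clustering by a common separation criterion: the smallest between-cluster distance divided by the largest within-cluster distance. This particular ratio is an illustrative assumption, not the paper's exact definition of a good clustering; it captures the same intuition that when the separation is large, simple and efficient algorithms recover the clustering.

    import numpy as np

    def separation_ratio(X, labels):
        # Score a clustering of points X (n x d): min between-cluster
        # distance over max within-cluster distance. Large values mean
        # a well-separated ("good") clustering.
        X = np.asarray(X, dtype=float)
        labels = np.asarray(labels)
        diff = X[:, None, :] - X[None, :, :]    # pairwise differences
        dist = np.sqrt((diff ** 2).sum(-1))     # Euclidean distances
        same = labels[:, None] == labels[None, :]
        off_diag = ~np.eye(len(X), dtype=bool)
        max_within = dist[same & off_diag].max()
        min_between = dist[~same].min()
        return min_between / max_within

    # Two well-separated blobs: the ratio comfortably exceeds 1.
    X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 10])
    labels = np.array([0] * 20 + [1] * 20)
    print(separation_ratio(X, labels))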



Related research

While Bernoulli's equation is one of the most frequently mentioned topics in the physics literature and other means of dissemination, it is also one of the least understood. Oddly enough, in the wonderful book Turning the world inside out [1], Robert Ehrlich proposes a demonstration that consists of blowing a quarter-dollar coin into a cup, which he incorrectly explains using Bernoulli's equation. In the present work, we have adapted the demonstration to show situations in which the coin jumps into the cup and others in which it does not, proving that the explanation based on Bernoulli's equation is flawed. Our demonstration is useful for tackling the common misconception, stemming from the incorrect use of Bernoulli's equation, that higher velocity invariably means lower pressure.
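For reference, Bernoulli's equation for steady, incompressible, inviscid flow is, in LaTeX notation,

    p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{const} \quad \text{(along a single streamline)},

relating pressure p, flow speed v, and height h. Crucially, the constant is fixed only along one streamline; the misconception the authors target comes from comparing points on different streamlines (for example, the fast-moving air above the coin versus the still air elsewhere), where the equation by itself licenses no pressure comparison.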
In deep model compression, the recent Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) pointed out that there could exist a winning ticket (i.e., a properly pruned sub-network together with the original weight initialization) that can achieve performance competitive with the original dense network. However, it is not easy to observe such a winning property in many scenarios, for example when a relatively large learning rate is used, even though that benefits training the original dense model. In this work, we investigate the underlying condition and rationale behind the winning property, and find that it is largely attributable to the correlation between initialized weights and final trained weights when the learning rate is not sufficiently large. Thus, the existence of the winning property is correlated with insufficient DNN pretraining, and is unlikely to occur for a well-trained DNN. To overcome this limitation, we propose a pruning & fine-tuning method that consistently outperforms lottery ticket sparse training under the same pruning algorithm and the same total training epochs. Extensive experiments over multiple deep models (VGG, ResNet, MobileNet-v2) on different datasets have been conducted to justify our proposals.
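The pruning & fine-tuning baseline described above can be sketched in a few lines of PyTorch. This is a generic magnitude-pruning sketch, not the authors' exact procedure; the model, sparsity level, and omitted training loops are placeholder assumptions.

    import torch
    import torch.nn.utils.prune as prune
    import torchvision.models as models

    # Hypothetical setup: a dense model and a target sparsity level.
    model = models.resnet18(weights=None)
    sparsity = 0.8

    # Step 1: train the dense model to convergence (loop omitted).
    # Step 2: prune the smallest-magnitude weights per layer.
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=sparsity)

    # Step 3: fine-tune from the *trained* weights, rather than
    # rewinding to the original initialization as lottery-ticket
    # training does.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    # ... fine-tuning loop for the remaining epochs goes here ...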
General wisdom tells us that if two quantum states are "macroscopically distinguishable" then their superposition should be hard to observe. We make this intuition precise and general by quantifying the difficulty of observing the quantum nature of a superposition of two states that can be distinguished without microscopic accuracy. First, we quantify the distinguishability of any given pair of quantum states with measurement devices lacking microscopic accuracy, i.e. measurements suffering from limited resolution or limited sensitivity. Next, we quantify the stability that has to be met by any measurement setup able to distinguish their superposition from a mere mixture. Finally, by establishing a relationship between the stability requirement and the "macroscopic distinguishability" of the two superposed states, we demonstrate that indeed, the more distinguishable the states are, the more demanding the stability requirements become.
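As a baseline for such distinguishability measures: with ideal, microscopically accurate measurements, the minimum error probability for discriminating two equally likely states \rho and \sigma is the Helstrom bound,

    P_{\text{err}} = \frac{1}{2}\left(1 - \frac{1}{2}\lVert \rho - \sigma \rVert_{1}\right).

This standard bound is quoted only as the ideal-measurement reference point; the paper's quantities presumably modify it by restricting to measurements with limited resolution or sensitivity, and are not reproduced here.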
Feiyang Pan, Jia He, Dandan Tu (2020)
It is a popular belief that model-based Reinforcement Learning (RL) is more sample-efficient than model-free RL, but in practice this is not always true, owing to overweighted model errors. In complex and noisy settings, model-based RL tends to have trouble using the model if it does not know when to trust it. In this work, we find that better model usage can make a huge difference. We show theoretically that if the use of model-generated data is restricted to state-action pairs where the model error is small, the performance gap between model rollouts and real rollouts can be reduced. This motivates us to use model rollouts only when the model is confident about its predictions. We propose Masked Model-based Actor-Critic (M2AC), a novel policy optimization algorithm that maximizes a model-based lower bound of the true value function. M2AC implements a masking mechanism based on the model's uncertainty to decide whether its prediction should be used or not. Consequently, the new algorithm tends to give robust policy improvements. Experiments on continuous control benchmarks demonstrate that M2AC has strong performance even when using long model rollouts in very noisy environments, and that it significantly outperforms previous state-of-the-art methods.
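The masking idea, using a model-generated transition only when the model is confident, can be sketched as follows. The ensemble-disagreement uncertainty and the fixed threshold are illustrative assumptions; M2AC's actual mask and value lower bound are more involved.

    import numpy as np

    def masked_rollout(models, policy, state, horizon, threshold):
        # Roll out an ensemble of learned dynamics models, keeping
        # only transitions where ensemble disagreement (a proxy for
        # model error) stays below the threshold.
        transitions = []
        for _ in range(horizon):
            action = policy(state)
            preds = np.stack([m(state, action) for m in models])
            next_state = preds.mean(axis=0)         # ensemble mean
            uncertainty = preds.std(axis=0).mean()  # disagreement
            if uncertainty > threshold:
                break  # mask: stop trusting the model from here on
            transitions.append((state, action, next_state))
            state = next_state
        return transitions

    # Toy usage: three slightly disagreeing linear models.
    models = [lambda s, a, w=w: s + a + w for w in (0.0, 0.01, -0.01)]
    policy = lambda s: -0.1 * s
    print(len(masked_rollout(models, policy, np.zeros(3), 10, 0.05)))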
In recent years, deep learning has become the most popular direction in machine learning and artificial intelligence. However, preparation of training data is often a bottleneck in the lifecycle of deploying a deep learning model for production or research. Reusing models for inference on a dataset can greatly save the human costs required for training data creation. Although there exist a number of model-sharing platforms such as TensorFlow Hub, PyTorch Hub, and DLHub, most of these systems require model uploaders to manually specify the details of each model and model downloaders to screen keyword search results when selecting a model. They lack an automatic model-search tool. This paper proposes an end-to-end process of searching for related models to serve, based on the similarity between the target dataset and the training datasets of the available models. While there exist many similarity measurements, we study how to efficiently apply these metrics without pairwise comparison and compare their effectiveness. We find that our proposed adaptivity measurement, which is based on Jensen-Shannon (JS) divergence, is an effective measurement, and that its computation can be significantly accelerated by using locality-sensitive hashing.
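For concreteness, the Jensen-Shannon divergence between two discrete distributions P and Q (here standing in, hypothetically, for feature histograms of two datasets) can be computed as below; the locality-sensitive-hashing acceleration the paper describes is not shown.

    import numpy as np

    def js_divergence(p, q, eps=1e-12):
        # Jensen-Shannon divergence: symmetric, bounded by log(2),
        # and zero exactly when p == q.
        p = np.asarray(p, dtype=float); p = p / p.sum()
        q = np.asarray(q, dtype=float); q = q / q.sum()
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Toy example: compare two feature histograms.
    print(js_divergence([0.5, 0.5], [0.9, 0.1]))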
