
Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning

 Added by Mario Lucic
Publication date: 2016
Language: English





Faced with massive data, is it possible to trade off (statistical) risk, and (computational) space and time? This challenge lies at the heart of large-scale machine learning. Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model. Our summarization is based on coreset constructions from computational geometry. We also develop an algorithm, TRAM, to navigate the space/time/data/risk tradeoff in practice. In particular, we show that for a fixed risk (or data size), as the data size increases (resp. risk increases) the running time of TRAM decreases. Our extensive experiments on real data sets demonstrate the existence and practical utility of such tradeoffs, not only for k-means but also for Gaussian Mixture Models.
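
To make the summarization step concrete, here is a minimal sketch of a generic importance-sampling ("lightweight") coreset for k-means: points are sampled with probability depending on their distance to the data mean and re-weighted so that the weighted coreset cost is an unbiased estimate of the full cost, and the small weighted set is then clustered instead of the raw data. This only illustrates the space knob (the coreset size $m$); it is not the paper's exact coreset construction or the TRAM algorithm, and the function name and the use of scikit-learn's KMeans are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def lightweight_coreset(X, m, seed=None):
    """Generic importance-sampling coreset sketch for k-means (illustrative,
    not the paper's exact construction). Points far from the data mean are
    sampled more often and re-weighted so the coreset cost stays unbiased."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    dist_sq = ((X - X.mean(axis=0)) ** 2).sum(axis=1)
    q = 0.5 / n + 0.5 * dist_sq / dist_sq.sum()   # sampling distribution
    idx = rng.choice(n, size=m, replace=True, p=q)
    return X[idx], 1.0 / (m * q[idx])             # coreset points and weights

# Cluster the small weighted summary instead of the full data set.
X = np.random.randn(100_000, 10)
C, w = lightweight_coreset(X, m=2_000, seed=0)
km = KMeans(n_clusters=5, n_init=10).fit(C, sample_weight=w)
```

Shrinking $m$ reduces space and clustering time at the cost of additional risk, which is exactly the tradeoff the abstract navigates.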

Related research

Learning from unlabeled and noisy data is one of the grand challenges of machine learning. As such, it has seen a flurry of research with new ideas proposed continuously. In this work, we revisit a classical idea: Stein's Unbiased Risk Estimator (SURE). We show that, in the context of image recovery, SURE and its generalizations can be used to train convolutional neural networks (CNNs) for a range of image denoising and recovery problems without any ground truth data. Specifically, our goal is to reconstruct an image $x$ from a noisy linear transformation (measurement) of the image. We consider two scenarios: one where no additional data is available and one where we have measurements of other images that are drawn from the same noisy distribution as $x$, but have no access to the clean images. Such is the case, for instance, in medical imaging, microscopy, and astronomy, where noise-free ground truth data is rarely available. We show that in this situation, SURE can be used to estimate the mean-squared-error loss associated with an estimate of $x$. Using this estimate of the loss, we train networks to perform denoising and compressed sensing recovery. We also use the SURE framework to partially explain and improve upon an intriguing result presented by Ulyanov et al. in Deep Image Prior: that a network initialized with random weights and fit to a single noisy image can effectively denoise that image. Public implementations of the networks and methods described in this paper can be found at https://github.com/ricedsp/D-AMP_Toolbox.
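
For intuition on how SURE removes the need for clean targets, here is a minimal Monte-Carlo SURE sketch for Gaussian denoising in PyTorch: it estimates the mean-squared error of a denoiser at a noisy input $y$ without seeing the clean image, approximating the divergence term with a single random probe. The function name and the single-probe choice are illustrative assumptions, not the paper's exact training pipeline.

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte-Carlo SURE estimate of the per-pixel MSE of `denoiser` at the
    noisy input y, assuming i.i.d. Gaussian noise with known std `sigma`.
    No clean image is required; minimizing this trains the denoiser."""
    n = y.numel()
    out = denoiser(y)
    b = torch.randn_like(y)                       # random probe for the divergence
    div = (b * (denoiser(y + eps * b) - out)).sum() / eps
    return ((out - y) ** 2).sum() / n - sigma ** 2 + (2 * sigma ** 2 / n) * div
```

Averaging this loss over batches of noisy images and backpropagating through it is enough to train the network without ground truth.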
We present a faster symbolic algorithm for the following central problem in probabilistic verification: Compute the maximal end-component (MEC) decomposition of Markov decision processes (MDPs). This problem generalizes the SCC decomposition problem of graphs and closed recurrent sets of Markov chains. The model of symbolic algorithms is widely used in formal verification and model-checking, where access to the input model is restricted to only symbolic operations (e.g., basic set operations and computation of one-step neighborhood). For an input MDP with $n$ vertices and $m$ edges, the classical symbolic algorithm from the 1990s for the MEC decomposition requires $O(n^2)$ symbolic operations and $O(1)$ symbolic space. The only other symbolic algorithm for the MEC decomposition requires $O(n\sqrt{m})$ symbolic operations and $O(\sqrt{m})$ symbolic space. A main open question is whether the worst-case $O(n^2)$ bound for symbolic operations can be beaten. We present a symbolic algorithm that requires $\widetilde{O}(n^{1.5})$ symbolic operations and $\widetilde{O}(\sqrt{n})$ symbolic space. Moreover, the parametrization of our algorithm provides a trade-off between symbolic operations and symbolic space: for all $0 < \epsilon \leq 1/2$ the symbolic algorithm requires $\widetilde{O}(n^{2-\epsilon})$ symbolic operations and $\widetilde{O}(n^{\epsilon})$ symbolic space ($\widetilde{O}$ hides poly-logarithmic factors). Using our techniques we present faster algorithms for computing the almost-sure winning regions of $\omega$-regular objectives for MDPs. We consider the canonical parity objectives for $\omega$-regular objectives, and for parity objectives with $d$ priorities we present an algorithm that computes the almost-sure winning region with $\widetilde{O}(n^{2-\epsilon})$ symbolic operations and $\widetilde{O}(n^{\epsilon})$ symbolic space, for all $0 < \epsilon \leq 1/2$.
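
As a point of reference for what is being computed, the sketch below implements the classical explicit (non-symbolic) MEC decomposition: repeatedly drop actions whose support leaves their current SCC and states left with no action, then read the MECs off the surviving SCCs. Probabilities are irrelevant here, so only the supports of the action distributions are kept; the use of networkx for SCCs and the input encoding are assumptions, and this is not the symbolic $\widetilde{O}(n^{1.5})$ algorithm of the paper.

```python
import networkx as nx

def mec_decomposition(actions):
    """Classical explicit MEC decomposition (illustrative sketch).
    `actions[s]` is a list of successor sets, one per action of state s."""
    act = {s: [set(a) for a in acts] for s, acts in actions.items()}
    while True:
        alive = set(act)
        g = nx.DiGraph()
        g.add_nodes_from(alive)
        g.add_edges_from((s, t) for s in alive for a in act[s] for t in a if t in alive)
        comps = list(nx.strongly_connected_components(g))
        comp_of = {s: i for i, c in enumerate(comps) for s in c}
        changed = False
        for s in alive:
            # Keep only actions whose whole support stays inside s's current SCC.
            kept = [a for a in act[s]
                    if a <= alive and all(comp_of[t] == comp_of[s] for t in a)]
            if len(kept) != len(act[s]):
                act[s], changed = kept, True
        for s in [s for s in alive if not act[s]]:
            del act[s]                 # a state with no remaining action is in no MEC
            changed = True
        if not changed:
            return comps

# States 0..2: state 2 is absorbing, and states {0, 1} form a MEC of their own.
print(mec_decomposition({0: [[1], [2]], 1: [[0]], 2: [[2]]}))  # e.g. [{0, 1}, {2}]
```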
In recent years much effort has been concentrated on achieving polynomial time lower bounds for algorithms solving various well-known problems. A useful technique for showing such lower bounds is to prove them conditionally, based on well-studied hardness assumptions such as 3SUM, APSP, SETH, etc. This line of research helps to obtain a better understanding of the complexity inside P. A related question asks to prove conditional space lower bounds on data structures that are constructed to solve certain algorithmic tasks after an initial preprocessing stage. This question received little attention in previous research even though it has potentially strong impact. In this paper we address this question and show that, surprisingly, many of the well-studied hard problems that are known to have conditional polynomial time lower bounds are also hard with respect to space. This hardness is shown as a tradeoff between the space consumed by the data structure and the time needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points. We reveal interesting connections between different space hardness conjectures and present matching upper bounds. We also apply these hardness conjectures to both static and dynamic problems and prove their conditional space hardness. We believe that this novel framework of polynomial space conjectures can play an important role in expressing polynomial space lower bounds for many important algorithmic problems. Moreover, it seems that it can also help in achieving a better understanding of the hardness of the corresponding problems in terms of time.
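
A toy example makes this kind of tradeoff concrete: preprocess a family of sets so that set-disjointness queries can be answered, precomputing answers only for pairs of sets larger than a threshold. Raising the threshold saves space but slows queries, and vice versa. The class, the threshold parameter, and the heavy/light split are illustrative assumptions, not a construction or conjecture taken from the paper.

```python
from itertools import combinations

class DisjointnessOracle:
    """Toy space/query-time tradeoff: pairs of 'big' sets get their
    disjointness answer precomputed (more space, O(1) query); the rest
    are intersected at query time (less space, slower query)."""

    def __init__(self, sets, threshold):
        self.sets = [set(s) for s in sets]
        big = [i for i, s in enumerate(self.sets) if len(s) > threshold]
        self.table = {(i, j): self.sets[i].isdisjoint(self.sets[j])
                      for i, j in combinations(big, 2)}

    def disjoint(self, i, j):
        i, j = min(i, j), max(i, j)
        if (i, j) in self.table:                      # O(1) table lookup
            return self.table[(i, j)]
        return self.sets[i].isdisjoint(self.sets[j])  # intersect at query time

oracle = DisjointnessOracle([{1, 2, 3}, {3, 4}, set(range(10, 100))], threshold=1)
print(oracle.disjoint(0, 1), oracle.disjoint(0, 2))   # False True
```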
In function inversion, we are given a function $f: [N] \mapsto [N]$, and want to prepare some advice of size $S$, such that we can efficiently invert any image in time $T$. This is a well-studied problem with profound connections to cryptography, data structures, communication complexity, and circuit lower bounds. Investigation of this problem in the quantum setting was initiated by Nayebi, Aaronson, Belovs, and Trevisan (2015), who proved a lower bound of $ST^2 = \widetilde{\Omega}(N)$ for random permutations against classical advice, leaving open an intriguing possibility that Grover's search can be sped up to time $\widetilde{O}(\sqrt{N/S})$. Recent works by Hhan, Xagawa, and Yamakawa (2019), and Chung, Liao, and Qian (2019) extended the argument to random functions and quantum advice, but the lower bound remains $ST^2 = \widetilde{\Omega}(N)$. In this work, we prove that even with quantum advice, $ST + T^2 = \widetilde{\Omega}(N)$ is required for an algorithm to invert random functions. This demonstrates that Grover's search is optimal for $S = \widetilde{O}(\sqrt{N})$, ruling out any substantial speed-up for Grover's search even with quantum advice. Further improvements to our bounds would imply new classical circuit lower bounds, as shown by Corrigan-Gibbs and Kogan (2019). To prove this result, we develop a general framework for establishing quantum time-space lower bounds. We further demonstrate the power of our framework by proving quantum time-space lower bounds for Yao's box problem and salted cryptography.
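
For context, the classical upper-bound side of this tradeoff is easy to sketch when $f$ is a permutation: store a checkpoint every $T$ steps along each cycle (advice of size roughly $N/T$), then invert any image with about $2T$ forward evaluations of $f$, so $S \cdot T \approx N$. The sketch below is only that toy illustration for a permutation oracle, not the quantum lower-bound framework of the paper; function names and parameters are illustrative.

```python
def build_advice(f, N, T):
    """Store a checkpoint every T steps along each cycle of the permutation
    f: [N] -> [N], mapping it to the previous checkpoint (advice size ~ N/T)."""
    advice, seen = {}, [False] * N
    for start in range(N):
        if seen[start]:
            continue
        checkpoints, x, pos = [], start, 0
        while not seen[x]:
            seen[x] = True
            if pos % T == 0:
                checkpoints.append(x)
            x, pos = f(x), pos + 1
        for i, c in enumerate(checkpoints):
            advice[c] = checkpoints[i - 1]   # index -1 wraps to the last checkpoint
    return advice

def invert(f, advice, y, T):
    """Find x with f(x) = y using at most ~2T forward evaluations of f."""
    c = y
    for _ in range(T):                       # walk forward until a checkpoint
        if c in advice:
            break
        c = f(c)
    z = advice[c]                            # previous checkpoint on y's cycle
    while f(z) != y:                         # walk forward to y's predecessor
        z = f(z)
    return z

# Example: a random permutation of size 1000, checkpoints every T = 10 steps.
import random
N, T = 1000, 10
perm = list(range(N)); random.shuffle(perm)
f = perm.__getitem__
adv = build_advice(f, N, T)
assert all(f(invert(f, adv, y, T)) == y for y in range(N))
```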
We explore bounds on time-space tradeoffs in language recognition on two-way finite automata for some special languages. We prove: (1) a time-space tradeoff upper bound for recognition of the languages $L_{EQ}(n)$ on two-way probabilistic finite automata (2PFA): $TS = \mathbf{O}(n \log n)$, whereas the time-space tradeoff lower bound on two-way deterministic finite automata is $\mathbf{\Omega}(n^2)$; (2) a time-space tradeoff upper bound for recognition of the languages $L_{INT}(n)$ on two-way finite automata with quantum and classical states (2QCFA): $TS = \mathbf{O}(n^{3/2} \log n)$, whereas the lower bound on 2PFA is $TS = \mathbf{\Omega}(n^2)$; (3) a time-space tradeoff upper bound for recognition of the languages $L_{NE}(n)$ on exact 2QCFA: $TS = \mathbf{O}(n^{1.87} \log n)$, whereas the lower bound on 2PFA is $TS = \mathbf{\Omega}(n^2)$. It has been proved (Klauck, STOC'00) that exact one-way quantum finite automata have no advantage over classical finite automata in recognizing languages. However, result (3) shows that exact 2QCFA do have an advantage over their classical counterparts, which is the first example showing that exact quantum computing has an advantage in time-space tradeoffs over classical computing. Usually, the two communicating parties, Alice and Bob, are assumed to have access to arbitrary computational power in the communication complexity model that is used. Instead, we consider communication complexity in a setting where the two parties use only finite automata, and we prove in this setting that quantum automata are better than classical automata, and probabilistic automata are better than deterministic automata, for some well-known tasks.
