How good is good enough? Re-evaluating the bar for energy disaggregation

 Added by Nipun Batra
Publication date: 2015
Language: English





Since the early 1980s, the research community has developed ever more sophisticated algorithms for the problem of energy disaggregation, but despite decades of research, there is still a dearth of applications with demonstrated value. In this work, we explore a question that is highly pertinent to this research community: how good does energy disaggregation need to be in order to infer characteristics of a household? We present novel techniques that use unsupervised energy disaggregation to predict both household occupancy and static properties of the household, such as the size of the home and the number of occupants. Results show that basic disaggregation approaches perform up to 30% better at occupancy estimation than using aggregate power data alone, and are up to 10% better at estimating static household characteristics. These results show that even rudimentary energy disaggregation techniques are sufficient for improved inference of household characteristics. To conclude, we re-evaluate the bar set by the community for energy disaggregation accuracy and try to answer the question: how good is good enough?
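For intuition, here is a minimal sketch on synthetic data of the kind of comparison the abstract describes: an occupancy classifier fed disaggregated appliance traces versus one fed only the aggregate signal. The appliance set, the classifier, and the scikit-learn workflow are illustrative assumptions, not the paper's actual pipeline or datasets.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_hours = 2000

# Hypothetical per-appliance traces a NILM algorithm might output (kW).
fridge = 0.1 + 0.05 * rng.random(n_hours)
hvac = rng.choice([0.0, 1.5], size=n_hours, p=[0.6, 0.4])
kettle = rng.choice([0.0, 2.0], size=n_hours, p=[0.95, 0.05])
aggregate = fridge + hvac + kettle + 0.05 * rng.random(n_hours)

# Toy ground truth: occupants tend to be home when the kettle or HVAC runs.
occupied = ((kettle > 0) | (hvac > 0) | (rng.random(n_hours) < 0.2)).astype(int)

X_aggregate = aggregate.reshape(-1, 1)              # aggregate-only baseline
X_disagg = np.column_stack([fridge, hvac, kettle])  # disaggregated features

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_agg = cross_val_score(clf, X_aggregate, occupied, cv=5).mean()
acc_dis = cross_val_score(clf, X_disagg, occupied, cv=5).mean()
print(f"aggregate-only accuracy: {acc_agg:.3f}")
print(f"disaggregated accuracy:  {acc_dis:.3f}")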



Related research

In the classical synthesis problem, we are given an LTL formula ψ over sets of input and output signals, and we synthesize a system T that realizes ψ: with every input sequence x, the system associates an output sequence T(x) such that the generated computation x ⊗ T(x) satisfies ψ. In practice, the requirement to satisfy the specification in all environments is often too strong, and it is common to add assumptions on the environment. We introduce a new type of relaxation on this requirement. In good-enough synthesis (GE-synthesis), the system is required to generate a satisfying computation only if one exists. Formally, an input sequence x is hopeful if there exists some output sequence y such that the computation x ⊗ y satisfies ψ, and a system GE-realizes ψ if it generates a computation that satisfies ψ on all hopeful input sequences. GE-synthesis is particularly relevant when the notion of correctness is multi-valued (rather than Boolean), and thus we seek systems of the highest possible quality, and when synthesizing autonomous systems, which interact with unexpected environments and are often only expected to do their best. We study GE-synthesis in Boolean and multi-valued settings. In both, we suggest and solve various definitions of GE-synthesis, corresponding to different ways a designer may want to take hopefulness into account. We show that in all variants, GE-synthesis is not computationally harder than traditional synthesis, and can be implemented on top of existing tools. Our algorithms are based on careful combinations of nondeterministic and universal automata. We augment systems that GE-realize their specifications with monitors that provide satisfaction information. In the multi-valued setting, we provide both a worst-case analysis and an expectation-based one, the latter corresponding to an interaction with a stochastic environment.
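Restated in symbols directly from the definitions above, with I and O denoting the sets of input and output signals (the notation is mine, following standard LTL synthesis conventions):

\[
\text{hopeful}_\psi(x) \;\iff\; \exists y \in (2^{O})^{\omega}.\; x \otimes y \models \psi,
\]
\[
T \text{ GE-realizes } \psi \;\iff\; \forall x \in (2^{I})^{\omega}.\; \big(\text{hopeful}_\psi(x) \implies x \otimes T(x) \models \psi\big).
\]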
Kohn-Sham density functional theory (DFT) has become established as an indispensable tool for investigating aqueous systems of all kinds, including those important in chemistry, surface science, biology and the earth sciences. Nevertheless, many widely used approximations for the exchange-correlation (XC) functional describe the properties of pure water systems with an accuracy that is not fully satisfactory. The explicit inclusion of dispersion interactions generally improves the description, but there remain large disagreements between the predictions of different dispersion-inclusive methods. We present here a review of DFT work on water clusters, ice structures and liquid water, with the aim of elucidating how the strengths and weaknesses of different XC approximations manifest themselves across this variety of water systems. Our review highlights the crucial role of dispersion in describing the delicate balance between compact and extended structures of many different water systems, including the liquid. By referring to a wide range of published work, we argue that the correct description of exchange-overlap interactions is also extremely important, so that the choice of semi-local or hybrid functional employed in dispersion-inclusive methods is crucial. The origins and consequences of beyond-2-body errors of approximate XC functionals are noted, and we also discuss the substantial differences between different representations of dispersion. We propose a simple numerical scoring system that rates the performance of different XC functionals in describing water systems, and we suggest possible future developments.
Jason Wei, 2021
This paper asks whether extrapolating the hidden space distribution of text examples from one class onto another is a valid inductive bias for data augmentation. To operationalize this question, I propose a simple data augmentation protocol called good-enough example extrapolation (GE3). GE3 is lightweight and has no hyperparameters. Applied to three text classification datasets for various data imbalance scenarios, GE3 improves performance more than upsampling and other hidden-space data augmentation methods.
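One plausible reading of "extrapolating the hidden space distribution from one class onto another" is a class-mean shift applied to sentence embeddings, which is likewise hyperparameter-free. The sketch below implements that reading on random vectors; it is an assumption for illustration, not necessarily the exact GE3 procedure.

import numpy as np

def extrapolate_examples(H_source, H_target):
    # Shift hidden vectors of a well-populated source class by the
    # difference of class means so they land around the low-resource
    # target class. Illustrative assumption, not the published recipe.
    shift = H_target.mean(axis=0) - H_source.mean(axis=0)
    return H_source + shift

rng = np.random.default_rng(0)
H_majority = rng.normal(0.0, 1.0, size=(500, 768))  # many source-class embeddings
H_minority = rng.normal(2.0, 1.0, size=(20, 768))   # few target-class embeddings

H_synthetic = extrapolate_examples(H_majority, H_minority)
# A classifier head would then train on the real minority vectors plus
# H_synthetic, all labeled with the minority class.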
Yuanwei Zhao, Lan Huang, Bo Wang, 2021
Ontology-based data integration has been one of the practical methodologies for building integrated services over heterogeneous legacy databases. However, building a cross-domain ontology on top of the schemas of each legacy database for a specific integration application is neither efficient nor economical compared with reusing existing ontologies. The question then becomes whether an existing ontology is compatible with the cross-domain queries and with all the legacy systems. Effective criteria for evaluating this compatibility are highly needed, since compatibility bounds the achievable quality of the integrated services. This paper studies the semantic similarity of schemas from the perspective of properties. It provides a set of in-depth criteria, namely coverage and flexibility, to evaluate the compatibility among the queries, the schemas, and the existing ontology. Class weights are incorporated to make the compatibility computation more precise. The use of these criteria in a practical project verifies the applicability of our method.
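As a toy illustration of what a property-level, class-weighted coverage score could look like (the function, data layout, and weighting below are my assumptions; the paper's coverage and flexibility criteria may be defined differently):

def weighted_coverage(query_properties, ontology_properties, class_weights):
    # Share of query properties the ontology can express, weighted by the
    # importance of the class each property belongs to. Illustrative only.
    total = sum(class_weights[cls] for cls, _ in query_properties)
    covered = sum(class_weights[cls]
                  for cls, prop in query_properties
                  if prop in ontology_properties.get(cls, set()))
    return covered / total if total else 0.0

# Hypothetical cross-domain query and candidate ontology.
query = [("Patient", "hasName"), ("Patient", "hasAge"), ("Visit", "hasDate")]
ontology = {"Patient": {"hasName", "hasAge"}, "Visit": {"hasPhysician"}}
weights = {"Patient": 2.0, "Visit": 1.0}
print(weighted_coverage(query, ontology, weights))  # 0.8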
The discriminator from generative adversarial nets (GANs) has been used by researchers as a feature extractor in transfer learning and has appeared to work well. However, some studies consider this the wrong research direction, because intuitively the task of the discriminator focuses on separating real samples from generated ones, making features extracted in this way useless for most downstream tasks. To resolve this dilemma, we first conducted a thorough theoretical analysis of the relationship between the discriminator's task and the features it extracts. We found that the connection between the task and the features is not as strong as was thought: the main factor restricting the features learned by the discriminator is not the task, but the need to prevent the entire GAN model from mode collapse during training. From this perspective, and combined with further analyses, we found that to avoid mode collapse, the features extracted by the discriminator are not forced to be distinct across real samples, yet divergence without noise is still allowed and occupies a large proportion of the feature space. This makes the features more robust and helps answer the question of why the discriminator can succeed as a feature extractor in related research. Consequently, to expose how the discriminator extractor differs from other extractors, we analyze its counterpart, the classifier extractor that assigns the target samples to different categories. We found that the performance of the discriminator extractor may be inferior to the classifier-based extractor when the source classification task is similar to the target task, which is the common case, but the ability to avoid noise prevents the discriminator from being replaced by the classifier.
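For readers unfamiliar with the setup being analyzed, the sketch below shows the usual way a trained GAN discriminator is reused as a feature extractor: keep its trunk and read out the activations before the real/fake head. The architecture and the trunk/head split are illustrative assumptions, not the paper's models.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, feat_dim), nn.LeakyReLU(0.2),
        )
        self.head = nn.Linear(feat_dim, 1)  # real/fake score used during GAN training

    def forward(self, x):
        return self.head(self.trunk(x))

    def features(self, x):
        # Penultimate activations, reused as transfer features downstream.
        return self.trunk(x)

D = Discriminator()           # in practice, load weights from GAN training
x = torch.randn(32, 784)      # a batch of flattened images
feats = D.features(x)         # (32, 128) features for a downstream classifier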
