
Trade-off Between Antenna Efficiency and Q-Factor

Added by: Miloslav Capek
Publication date: 2017
Field: Physics
Language: English





The trade-off between radiation efficiency and antenna bandwidth, expressed in terms of Q-factor, for small antennas is formulated as a multi-objective optimization problem in current distributions of predefined support. Variants on the problem are constructed to demonstrate the consequences of requiring a self-resonant current as opposed to one tuned by an external reactance. The resulting Pareto-optimal sets reveal the relative cost of valuing low radiation Q-factor over high efficiency, the cost in efficiency to require a self-resonant current, the effects of lossy parasitic loading, and other insights.
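A minimal numerical sketch of how such a Pareto-optimal set can be traced is given below: current vectors are sampled and the non-dominated points in the (dissipation factor, Q) plane are kept. The operator matrices are random positive semi-definite stand-ins, not the method-of-moments operators on a predefined support used in the paper.

```python
# Sketch: sample current vectors and extract the non-dominated set in the
# (dissipation factor, Q) plane. R_rad, R_loss, W_stored are random PSD
# stand-ins for the radiation, loss, and stored-energy operators; they are
# NOT the method-of-moments operators used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # size of the current basis (assumed)

def random_psd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T                       # symmetric positive semi-definite

R_rad, R_loss, W_stored = random_psd(n), 0.1 * random_psd(n), random_psd(n)
omega = 1.0                              # normalized angular frequency

def objectives(current):
    """Dissipation factor (1/efficiency - 1) and Q-factor for one current."""
    p_rad = current @ R_rad @ current
    p_loss = current @ R_loss @ current
    delta = p_loss / p_rad               # low delta <=> high efficiency
    q = 2.0 * omega * (current @ W_stored @ current) / p_rad
    return delta, q

points = np.array([objectives(rng.standard_normal(n)) for _ in range(2000)])
# Keep points not strictly dominated in both objectives (weak Pareto optimality).
mask = np.array([not np.any(np.all(points < p, axis=1)) for p in points])
print(f"{mask.sum()} Pareto-optimal samples out of {len(points)}")
```

Sweeping a weight between the two objectives, or adding a self-resonance constraint on the current, would correspond to the problem variants the abstract mentions.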



Related research

Quantum thermodynamics and quantum information are two frameworks for employing quantum mechanical systems for practical tasks, exploiting genuine quantum features to obtain advantages with respect to classical implementations. While appearing disconnected at first, the main resources of these frameworks, work and correlations, have a complicated yet interesting relationship that we examine here. We review the role of correlations in quantum thermodynamics, with a particular focus on the conversion of work into correlations. We provide new insights into the fundamental work cost of correlations and the existence of optimally correlating unitaries, and discuss relevant open problems.
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and the boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally on real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge, in which we won 1st place out of ~2,000 submissions, surpassing the runner-up approach by $11.41\%$ in terms of mean $\ell_2$ perturbation distance.
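To make the "natural error plus boundary error" decomposition concrete, here is a hedged PyTorch-style sketch of a TRADES-like training loss: natural cross-entropy plus a KL-divergence robustness penalty evaluated at an adversarial point found by PGD on the KL term. Hyper-parameter values, the [0, 1] input clamp, and helper names are illustrative assumptions, not taken from the authors' reference implementation.

```python
# Sketch of a TRADES-style loss (assumed hyper-parameters, not the paper's code).
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps=8/255, step=2/255, iters=10, beta=6.0):
    model.eval()
    p_nat = F.softmax(model(x), dim=1).detach()           # clean predictions
    x_lo, x_hi = (x - eps).detach(), (x + eps).detach()   # l_inf ball around x
    x_adv = x.detach() + 0.001 * torch.randn_like(x)      # small random start
    for _ in range(iters):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_nat, reduction="sum")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()        # ascend the KL term
        x_adv = torch.min(torch.max(x_adv, x_lo), x_hi).clamp(0.0, 1.0)
    model.train()
    loss_nat = F.cross_entropy(model(x), y)                # natural error term
    loss_rob = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_nat,
                        reduction="batchmean")             # boundary error proxy
    return loss_nat + beta * loss_rob
```

The weight `beta` plays the role of the trade-off parameter between accuracy on clean data and robustness.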
A massive multiple-input multiple-output (MIMO) system is important for optimizing the trade-off between energy efficiency (EE) and spectral efficiency (SE) in fifth-generation cellular networks. The challenge for the next generation is to support rapidly growing data traffic in the wireless communication system with respect to both EE and SE. In this paper, the EE-SE trade-off in a downlink massive MIMO system is investigated using the first derivative with respect to the number of transmit antennas and the transmit power. The trade-off is analyzed as a multi-objective optimization problem that reduces transmit power, and EE and SE are improved under constraints on the maximum transmit power allocation and the number of antennas by computing the first derivative with respect to transmit power. Simulation results show that the optimum EE-SE trade-off can be obtained from the first derivative by selecting the optimal number of antennas at a low transmit-power cost, so the resulting optimization problem is flexible enough to trade EE against SE for distinct preferences.
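The derivative-based idea can be illustrated on a generic single-link stand-in (this is not the paper's massive-MIMO model; channel gain, noise power, and circuit power below are assumed values): energy efficiency EE(P) = B·SE(P)/(P + P_circuit) is maximized where its first derivative in the transmit power P vanishes.

```python
# Generic single-link EE-SE stand-in (assumed parameters, NOT the paper's model).
import numpy as np

bandwidth, gain, noise, p_circuit = 10e6, 1e-7, 1e-13, 1.0   # assumed values

def se(p):   # spectral efficiency, bit/s/Hz
    return np.log2(1.0 + p * gain / noise)

def ee(p):   # energy efficiency, bit/Joule (transmit + circuit power in W)
    return bandwidth * se(p) / (p + p_circuit)

p_grid = np.linspace(0.01, 20.0, 2000)
d_ee = np.gradient(ee(p_grid), p_grid)          # numerical first derivative
p_opt = p_grid[np.argmin(np.abs(d_ee))]         # where dEE/dP ~ 0
print(f"EE-optimal power ~ {p_opt:.2f} W, SE there = {se(p_opt):.2f} bit/s/Hz")
```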
A robot can invoke heterogeneous computation resources such as CPUs, cloud GPU servers, or even human computation for achieving a high-level goal. The problem of invoking an appropriate computation model so that it will successfully complete a task while keeping its compute and energy costs within a budget is called a model selection problem. In this paper, we present an optimal solution to the model selection problem with two compute models, the first being fast but less accurate, and the second being slow but more accurate. The main insight behind our solution is that a robot should invoke the slower compute model only when the benefits from the gain in accuracy outweigh the computational costs. We show that such cost-benefit analysis can be performed by leveraging the statistical correlation between the accuracy of fast and slow compute models. We demonstrate the broad applicability of our approach to diverse problems such as perception using neural networks and safe navigation of a simulated Mars rover.
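A toy version of the cost-benefit rule described above is sketched here; the probabilities, utility value, and cost are illustrative, and the per-input estimate of the slow model's accuracy would in practice come from the statistical correlation between the two models rather than being given directly.

```python
# Toy two-model selection rule (illustrative numbers, not the paper's policy):
# invoke the slow model only when the expected accuracy gain outweighs its cost.
def choose_model(p_fast_correct, p_slow_correct, value_of_success, extra_cost):
    # In practice p_slow_correct is estimated from the correlation between
    # fast-model confidence and slow-model accuracy, not observed directly.
    gain = (p_slow_correct - p_fast_correct) * value_of_success
    return "slow" if gain > extra_cost else "fast"

# Example: fast model 80% likely correct on this input, slow model 95%;
# success worth 10 units, slow model costs 1 extra unit -> choose "slow".
print(choose_model(0.80, 0.95, value_of_success=10.0, extra_cost=1.0))
```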
We provide a general framework for characterizing the trade-off between accuracy and robustness in supervised learning. We propose a method and define quantities to characterize the trade-off between accuracy and robustness for a given architecture, and provide theoretical insight into the trade-off. Specifically, we introduce a simple trade-off curve, and define and study an influence function that captures the sensitivity, under adversarial attack, of the optima of a given loss function. We further show how adversarial training regularizes the parameters in an over-parameterized linear model, recovering LASSO and ridge regression as special cases, which also allows us to theoretically analyze the behavior of the trade-off curve. In experiments, we demonstrate the corresponding trade-off curves of neural networks and how they vary with factors such as the number of layers and neurons, and across different network structures. Such information provides a useful guideline for architecture selection.
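The connection between adversarial training of a linear model and sparse regularization can be checked numerically with a standard identity (a sketch under assumed data; the exact setting in the paper may differ): for an l_inf-bounded adversary, the worst-case squared error of a linear predictor equals the clean residual inflated by an l1 penalty on the weights, i.e. a LASSO-style term.

```python
# Toy check (assumed data) that for a linear model with an l_inf adversary
# max_{||d||_inf <= eps} (w.(x+d) - y)^2 = (|w.x - y| + eps*||w||_1)^2,
# i.e. adversarial training acts like an l1 (LASSO-style) regularizer.
import numpy as np

rng = np.random.default_rng(1)
w, x, y, eps = rng.standard_normal(5), rng.standard_normal(5), 0.3, 0.1

residual = w @ x - y
# Worst-case perturbation: push each coordinate by eps in the direction that
# increases the residual magnitude.
delta = eps * np.sign(w) * np.sign(residual)
worst_case = (w @ (x + delta) - y) ** 2
closed_form = (abs(residual) + eps * np.linalg.norm(w, 1)) ** 2
print(np.isclose(worst_case, closed_form))   # True
```

An l2-bounded adversary yields an l2 penalty on the weights in the same way, which is the ridge-regression special case mentioned in the abstract.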