
Resource-Performance Trade-off Analysis for Mobile Robot Design

Published by Morteza Lahijanian
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





The design of mobile autonomous robots is challenging due to the limited on-board resources such as processing power and energy. A promising approach is to generate intelligent schedules that reduce the resource consumption while maintaining best performance, or more interestingly, to trade off reduced resource consumption for a slightly lower but still acceptable level of performance. In this paper, we provide a framework to aid designers in exploring such resource-performance trade-offs and finding schedules for mobile robots, guided by questions such as "what is the minimum resource budget required to achieve a given level of performance?" The framework is based on a quantitative multi-objective verification technique which, for a collection of possibly conflicting objectives, produces the Pareto front that contains all the optimal trade-offs that are achievable. The designer then selects a specific Pareto point based on the resource constraints and desired performance level, and a correct-by-construction schedule that meets those constraints is automatically generated. We demonstrate the efficacy of this framework on several robotic scenarios in both simulations and experiments, with encouraging results.
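
Below is a minimal Python sketch of the trade-off selection step this abstract describes: given candidate schedules scored on two objectives, it extracts the Pareto front of non-dominated (resource, performance) points and picks the cheapest point that meets a required performance level. The candidate values and the threshold are hypothetical, and the sketch does not perform the quantitative multi-objective verification that the paper uses to produce these points.

```python
# Minimal sketch: extracting a Pareto front over (resource, performance) pairs
# and picking the cheapest point that meets a performance requirement.
# The candidates and thresholds are illustrative only; the paper derives
# these points via quantitative multi-objective verification.

def pareto_front(points):
    """Return the non-dominated (resource, performance) pairs.

    A point dominates another if it uses no more resource and achieves
    at least as much performance, with at least one strict improvement.
    """
    front = []
    for r, p in points:
        dominated = any(
            (r2 <= r and p2 >= p) and (r2 < r or p2 > p)
            for r2, p2 in points
        )
        if not dominated:
            front.append((r, p))
    return sorted(front)

def cheapest_point_meeting(front, min_performance):
    """Smallest resource budget on the front with performance >= min_performance."""
    feasible = [(r, p) for r, p in front if p >= min_performance]
    return min(feasible, default=None)

if __name__ == "__main__":
    # Hypothetical (energy budget, task-completion probability) candidates.
    candidates = [(10, 0.60), (14, 0.72), (14, 0.70),
                  (20, 0.85), (28, 0.86), (35, 0.86)]
    front = pareto_front(candidates)
    print("Pareto front:", front)
    print("Cheapest schedule with performance >= 0.8:",
          cheapest_point_meeting(front, 0.80))
```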


Read also

In this paper, we present the Role Playing Learning (RPL) scheme for a mobile robot to navigate socially with its human companion in populated environments. Neural networks (NN) are constructed to parameterize a stochastic policy that directly maps sensory data collected by the robot to its velocity outputs, while respecting a set of social norms. An efficient simulative learning environment is built with maps and pedestrian trajectories collected from a number of real-world crowd data sets. In each learning iteration, a robot equipped with the NN policy is created virtually in the learning environment to play the role of a companion pedestrian and navigate towards a goal in a socially concomitant manner. Thus, we call this process Role Playing Learning, which is formulated under a reinforcement learning (RL) framework. The NN policy is optimized end-to-end using Trust Region Policy Optimization (TRPO), with consideration of the imperfection of the robot's sensor measurements. Simulative and experimental results are provided to demonstrate the efficacy and superiority of our method.
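
As a rough illustration of the policy described above, the following PyTorch sketch defines a stochastic policy network that maps a flattened sensory observation to a Gaussian distribution over (linear, angular) velocity commands. The input size, hidden sizes, and two-layer architecture are assumptions rather than the paper's exact network, and the TRPO training loop is omitted.

```python
# Minimal sketch (PyTorch) of a stochastic navigation policy: a feed-forward
# network mapping a flattened sensory observation to a Gaussian distribution
# over (linear, angular) velocity commands. Sizes are assumptions.
import torch
import torch.nn as nn

class VelocityPolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, act_dim)              # mean velocity command
        self.log_std = nn.Parameter(torch.zeros(act_dim))   # state-independent std

    def forward(self, obs):
        h = self.body(obs)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

if __name__ == "__main__":
    policy = VelocityPolicy()
    obs = torch.randn(1, 64)           # stand-in for lidar/pedestrian features
    dist = policy(obs)
    action = dist.sample()             # sampled (v, omega) command
    print(action, dist.log_prob(action).sum(-1))
```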
Computation task service delivery in a computing-enabled and caching-aided multi-user mobile edge computing (MEC) system is studied in this paper, where a MEC server can deliver the input or output data of tasks to mobile devices over a wireless multicast channel. The computing-enabled and caching-aided mobile devices are able to store the input or output data of some tasks, and also to compute some tasks locally, reducing the wireless bandwidth consumption. The corresponding framework of this system is established, and under the latency constraint, we jointly optimize the caching and computing policy at the mobile devices to minimize the required transmission bandwidth. The joint policy optimization problem is shown to be NP-hard, and based on an equivalent transformation and exact penalization of the problem, a stationary point is obtained via the concave-convex procedure (CCCP). Moreover, in a symmetric scenario, the gains offered by this approach are derived to analytically understand the influence of the caching and computing resources at the mobile devices, multicast transmission, the number of mobile devices, and the number of tasks on the transmission bandwidth. Our results indicate that exploiting the computing and caching resources at mobile devices can provide significant bandwidth savings.
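
The following sketch illustrates only the bandwidth bookkeeping that such a joint policy trades on: a task's output has to be multicast only if some requesting device has neither cached it nor can compute it locally within the latency budget. All task sizes, requests, and decisions are hypothetical; the paper obtains the actual caching and computing policy by solving an NP-hard program via CCCP.

```python
# Minimal sketch of multicast bandwidth accounting under caching/local computing.
# A task must be multicast only if at least one requesting device neither
# cached its output nor computes it locally. All values are illustrative.

def required_bandwidth(requests, cached, local, output_bits, latency_s):
    """Multicast bandwidth (bits/s) needed to serve all requests in time.

    requests:    dict device -> set of requested task ids
    cached:      dict device -> set of task ids whose output is cached
    local:       dict device -> set of task ids the device computes itself
    output_bits: dict task id -> output size in bits
    """
    must_send = set()
    for dev, tasks in requests.items():
        for t in tasks:
            if t not in cached[dev] and t not in local[dev]:
                must_send.add(t)            # one multicast serves every device
    return sum(output_bits[t] for t in must_send) / latency_s

if __name__ == "__main__":
    requests = {"d1": {1, 2}, "d2": {2, 3}}
    cached = {"d1": {1}, "d2": set()}
    local = {"d1": set(), "d2": {3}}
    output_bits = {1: 8e6, 2: 8e6, 3: 8e6}
    print(required_bandwidth(requests, cached, local, output_bits, latency_s=0.1))
```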
Jiawei Shao, Jun Zhang (2020)
The recent breakthrough in artificial intelligence (AI), especially deep neural networks (DNNs), has affected every branch of science and technology. In particular, edge AI has been envisioned as a major application scenario that provides DNN-based services at edge devices. This article presents effective methods for edge inference at resource-constrained devices. It focuses on device-edge co-inference, assisted by an edge computing server, and investigates a critical trade-off between the computation cost of the on-device model and the communication cost of forwarding the intermediate feature to the edge server. A three-step framework is proposed for effective inference: (1) model split point selection to determine the on-device model, (2) communication-aware model compression to reduce the on-device computation and the resulting communication overhead simultaneously, and (3) task-oriented encoding of the intermediate feature to further reduce the communication overhead. Experiments demonstrate that our proposed framework achieves a better trade-off and significantly reduces the inference latency compared with baseline methods.
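
A minimal PyTorch sketch of the split-computing idea in step (1) is given below: the device runs the layers up to a chosen split point, the intermediate feature is (notionally) transmitted, and an edge server finishes the forward pass. The toy model, split index, and byte count are assumptions; the communication-aware compression and feature encoding of steps (2) and (3) are not shown.

```python
# Minimal sketch (PyTorch) of device-edge co-inference by splitting a model.
# The toy CNN, split index, and byte accounting are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(                     # toy CNN standing in for a DNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 10),
)

split = 4                                  # layers [0, split) run on-device
device_part, edge_part = model[:split], model[split:]

x = torch.randn(1, 3, 32, 32)              # input captured on the device
feature = device_part(x)                   # on-device computation
payload_bytes = feature.numel() * feature.element_size()  # communication cost
logits = edge_part(feature)                # edge-server computation

print(f"intermediate feature: {payload_bytes} bytes, "
      f"output shape {tuple(logits.shape)}")
```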
Sha Hu, Zeshi Yang, Greg Mori (2020)
We consider the problem of optimizing a robot morphology to achieve the best performance for a target task, under computational resource limitations. The evaluation process for each morphological design involves learning a controller for the design, which can consume substantial time and computational resources. To address the challenge of expensive robot morphology evaluation, we present a continuous multi-fidelity Bayesian Optimization framework that efficiently utilizes computational resources via low-fidelity evaluations. We identify the problem of non-stationarity over the fidelity space. Our proposed fidelity-warping mechanism can learn representations of learning epochs and tasks to model non-stationary covariances between continuous fidelity evaluations, which prove challenging for off-the-shelf stationary kernels. Various experiments demonstrate that our method can utilize the low-fidelity evaluations to efficiently search for the optimal robot morphology, outperforming state-of-the-art methods.
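
To make the surrogate-modelling idea concrete, the sketch below treats fidelity (e.g. the fraction of controller-training epochs) as an extra input to a Gaussian-process regressor, fits it on mostly cheap low-fidelity evaluations, and ranks candidate designs by an upper confidence bound at full fidelity. It deliberately uses a plain stationary RBF kernel, which is exactly the assumption the paper's fidelity-warping mechanism relaxes; the objective and all parameters are synthetic.

```python
# Minimal sketch of a multi-fidelity surrogate: fidelity is an extra GP input,
# most training data is cheap low-fidelity, and candidates are ranked by UCB
# at full fidelity. A plain stationary RBF kernel is used here on purpose.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def evaluate(design, fidelity):
    """Toy stand-in for 'train a controller and measure task reward'."""
    true_perf = -(design - 0.6) ** 2
    return true_perf + (1.0 - fidelity) * 0.1 * rng.standard_normal()

# Mostly cheap evaluations, with an occasional expensive full-fidelity one.
X, y = [], []
for d in rng.uniform(0, 1, 12):
    f = 0.2 if len(X) % 4 else 1.0
    X.append([d, f]); y.append(evaluate(d, f))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.5]), alpha=1e-3)
gp.fit(np.array(X), np.array(y))

# Rank candidate designs by upper confidence bound at full fidelity (1.0).
cand = np.linspace(0, 1, 50)
mu, sd = gp.predict(np.c_[cand, np.ones_like(cand)], return_std=True)
best = cand[np.argmax(mu + sd)]
print("next morphology to evaluate at high fidelity:", round(float(best), 3))
```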
Mobile virtual reality (VR) delivery is gaining increasing attention from both industry and academia due to its ability to provide an immersive experience. However, mobile VR delivery requires an ultra-high transmission rate, which makes it widely regarded as a first killer application for 5G wireless networks. In this paper, in order to alleviate the traffic burden on wireless networks, we develop an implementation framework for mobile VR delivery that utilizes the caching and computing capabilities of the mobile VR device. We then jointly optimize the caching and computation offloading policy to minimize the required average transmission rate under latency and local average energy consumption constraints. In a symmetric scenario, we obtain the optimal joint policy and a closed-form expression for the minimum average transmission rate. Accordingly, we analyze the trade-off among communication, computing, and caching, reveal analytically that the communication overhead can be traded off against the computing and caching capabilities of the mobile VR device, and establish the conditions under which this holds. Finally, we discuss the optimization problem in a heterogeneous scenario and propose an efficient suboptimal algorithm with low computational complexity, which is shown to achieve good performance in the numerical results.
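
The sketch below captures only the per-request accounting behind this communication-computing-caching trade-off: a viewpoint costs no transmission if its rendered output is cached, can be served by fetching the smaller input (or nothing) when the device can render locally, and otherwise must be delivered in full within the latency budget. The sizes, latency, and decision rule are illustrative and are not the paper's optimal policy, which also accounts for local energy consumption.

```python
# Minimal sketch of per-request transmission-rate accounting for VR delivery.
# All sizes, the latency budget, and the decision rule are illustrative.

def transmission_rate(request, cache_out, cache_in, can_render,
                      in_bits, out_bits, latency_s):
    """Required downlink rate (bits/s) to serve one viewpoint request."""
    if request in cache_out:                 # rendered result already on device
        return 0.0
    if request in cache_in and can_render:   # fetch nothing, render locally
        return 0.0
    if can_render:                           # fetch compact input, render locally
        return in_bits / latency_s
    return out_bits / latency_s              # fetch the full rendered output

if __name__ == "__main__":
    # Hypothetical numbers: 25 Mbit rendered output vs 5 Mbit input per frame.
    print(transmission_rate("v7", cache_out=set(), cache_in=set(),
                            can_render=True, in_bits=5e6, out_bits=25e6,
                            latency_s=0.02))
```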