
Opportunities and Challenges for Next Generation Computing

Added by Gregory D. Hager
Publication date: 2020
Language: English





Computing has dramatically changed nearly every aspect of our lives, from business and agriculture to communication and entertainment. As a nation, we rely on computing in the design of systems for energy, transportation, and defense; and computing fuels scientific discoveries that will improve our fundamental understanding of the world and help develop solutions to major challenges in health and the environment. Computing has changed our world, in part, because our innovations can run on computers whose performance and cost-performance have improved a million-fold over the last few decades. A driving force behind this has been a repeated doubling of the transistors per chip, dubbed Moore's Law. A concomitant enabler has been Dennard Scaling, which has permitted these performance doublings at roughly constant power, but, as we will see, both trends face challenges. Consider for a moment the impact of these two trends over the past 30 years. A 1980s supercomputer (e.g. a Cray 2) was rated at nearly 2 Gflops and consumed nearly 200 kW of power. At the time, it was used for high-performance, national-scale applications ranging from weather forecasting to nuclear weapons research. A computer of similar performance now fits in our pocket and consumes less than 10 watts. What would be the implications of a similar computing/power reduction over the next 30 years - that is, taking a petaflop-scale machine (e.g. the Cray XK7, which requires about 500 kW for 1 Pflop (10^15 operations/sec) of performance) and repeating that process? What is possible with such a computer in your pocket? How would it change the landscape of high-capacity computing? In the remainder of this paper, we articulate some opportunities and challenges for dramatic performance improvements of both personal- and national-scale computing, and discuss some out-of-the-box possibilities for achieving computing at this scale.
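
To make the scale of the comparison concrete, here is a back-of-envelope sketch in Python that redoes the arithmetic implied above; the performance and power figures are the rough values quoted in the abstract, not measurements, and the extrapolation simply assumes the same efficiency gain repeats.

# Back-of-envelope arithmetic for the efficiency trends described above.
# All figures are rough values taken from the abstract, not measurements.
cray2_flops = 2e9        # ~2 Gflops, 1980s Cray 2
cray2_power_w = 200e3    # ~200 kW
pocket_flops = 2e9       # a pocket device of similar performance today
pocket_power_w = 10.0    # < 10 W

# Energy efficiency in operations per second per watt (flops/W).
cray2_eff = cray2_flops / cray2_power_w      # ~1e4 flops/W
pocket_eff = pocket_flops / pocket_power_w   # ~2e8 flops/W
gain = pocket_eff / cray2_eff                # ~20,000x improvement

# Repeating a comparable gain, starting from a petaflop-scale machine.
xk7_flops = 1e15         # ~1 Pflop, Cray XK7
xk7_power_w = 500e3      # ~500 kW
future_pocket_watts = xk7_power_w / gain     # ~25 W for a petaflop in your pocket

print(f"Efficiency gain since the Cray 2: ~{gain:,.0f}x")
print(f"A petaflop at that efficiency would draw ~{future_pocket_watts:.0f} W")

Even this crude estimate shows why the question posed above is interesting: the same efficiency jump again would put today's national-scale machines within a pocket-sized power budget.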



Related research

By all measures, wireless networking has seen explosive growth over the past decade. Fourth Generation Long Term Evolution (4G LTE) cellular technology has increased the bandwidth available for smartphones, in essence delivering broadband speeds to mobile devices. The most recent 5G technology is further enhancing transmission speeds and cell capacity, as well as reducing latency through the use of different radio technologies, and is expected to provide Internet connections that are an order of magnitude faster than 4G LTE. Technology continues to advance rapidly, however, and the next generation, 6G, is already being envisioned. 6G will make possible a wide range of powerful new applications including holographic telepresence, telehealth, remote education, ubiquitous robotics and autonomous vehicles, smart cities and communities (IoT), and advanced manufacturing (Industry 4.0, sometimes referred to as the Fourth Industrial Revolution), to name but a few. The advances we will see begin at the hardware level and extend all the way to the top of the software stack. Artificial Intelligence (AI) will also start playing a greater role in the development and management of wireless networking infrastructure by becoming embedded in applications throughout all levels of the network. The resulting benefits to society will be enormous. At the same time that these exciting new wireless capabilities are rapidly appearing on the horizon, a broad range of research challenges loom ahead. These stem from the ever-increasing complexity of the hardware and software systems, along with the need to provide infrastructure that is robust and secure while simultaneously protecting the privacy of users. Here we outline some of those challenges and provide recommendations for the research that needs to be done to address them.
Advancements in digital technologies have a bootstrapping effect. The past fifty years of technological innovations from the computer architecture community have brought orders-of-magnitude efficiency improvements that engender use cases that were not previously possible -- stimulating novel application domains and increasing uses and deployments at an ever-faster pace. Consequently, computing technologies have fueled significant economic growth, creating education opportunities, enabling access to a wider and more diverse spectrum of information, and, at the same time, connecting people with differing needs around the world. Technology must be offered that is inclusive of the world's physical, cultural, and economic diversity, and which is manufactured, used, and recycled with environmental sustainability at the forefront. For the decades to come, we envision significant cross-disciplinary efforts to build a circular development cycle by placing pervasive connectivity, sustainability, and demographic inclusion at the design forefront in order to sustain and expand the benefits of a technologically rich society. We hope this work will inspire our computing community to take broader and more holistic approaches when developing technological solutions to serve people from different parts of the world.
The purpose of this paper is to explore the applications of quantum computing to energy systems optimization problems and to discuss some of the challenges faced by quantum computers, along with techniques to overcome them. The basic concepts underlying quantum computation and their distinctive characteristics in comparison to their classical counterparts are also discussed. Along with descriptions of the hardware architectures of two commercially available quantum systems, an example using open-source software tools is provided as a first step for diving into the new realm of programming quantum computers for solving systems optimization problems. The trade-offs between these two quantum architectures are also discussed. The complex nature of energy systems, owing to their structure and large number of design and operational constraints, makes energy systems optimization a hard problem for most available algorithms. Problems such as facility location-allocation for energy systems infrastructure development, unit commitment for electric power systems operations, and heat exchanger network synthesis, all of which fall under the category of energy systems optimization, are solved using both classical algorithms implemented on a conventional CPU-based computer and quantum algorithms realized on quantum computing hardware. Their designs, implementations, and results are presented. Additionally, this paper describes the limitations of state-of-the-art quantum computers and their great potential to impact the field of energy systems optimization.
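
As a rough illustration of the kind of formulation such work relies on, the sketch below casts a toy facility-location-style problem as a QUBO (quadratic unconstrained binary optimization) objective, the form consumed by quantum annealers and by many gate-model optimization algorithms, and solves the tiny instance by exhaustive search in plain Python. The site costs, interaction terms, and penalty weight are invented for illustration and are not taken from the paper.

# Minimal sketch: a toy facility-location-style problem cast as a QUBO
# objective and brute-forced classically. All numbers below are illustrative
# assumptions, not data from the paper.
import itertools

build_cost = [3.0, 2.0, 4.0]           # cost of opening each candidate site
pair_penalty = {(0, 1): 1.5,           # interaction terms, e.g. shared-line costs
                (1, 2): 0.5}
must_open = 2                          # require exactly two sites to be opened
penalty = 10.0                         # weight of the squared constraint term

def qubo_energy(x):
    """Energy of one binary assignment x under the QUBO objective."""
    e = sum(c * xi for c, xi in zip(build_cost, x))
    e += sum(w * x[i] * x[j] for (i, j), w in pair_penalty.items())
    e += penalty * (sum(x) - must_open) ** 2
    return e

# Exhaustive search stands in for the quantum sampler on this toy instance.
best = min(itertools.product([0, 1], repeat=len(build_cost)), key=qubo_energy)
print("best assignment:", best, "energy:", qubo_energy(best))

On real hardware the exhaustive search is replaced by sampling from an annealer or a variational circuit, but the modelling step, mapping constraints into penalty terms of a binary quadratic objective, is the same.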
Services computing can offer a high-level abstraction to support diverse applications by encapsulating various computing infrastructures. Though services computing has greatly boosted the productivity of developers, it faces three main challenges: privacy and security risks, information silos, and pricing mechanisms and incentives. The recent advances of blockchain bring opportunities to address the challenges of services computing due to its built-in encryption and digital signature schemes, decentralization, and intrinsic incentive mechanisms. In this paper, we present a survey investigating the integration of blockchain with services computing. The integration of blockchain with services computing exhibits merits in two main aspects: i) blockchain can potentially address key challenges of services computing, and ii) services computing can also promote blockchain development. In particular, we categorize the current literature on blockchain-based services computing into five types: services creation, services discovery, services recommendation, services composition, and services arbitration. Moreover, we generalize the Blockchain as a Service (BaaS) architecture and summarize representative BaaS platforms. In addition, we outline open issues of blockchain-based services computing and BaaS.
Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices, fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing.
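
A minimal sketch of that idea, under assumed toy settings (a noisy sinusoid, three time delays, quadratic feature products, and a small ridge penalty chosen for illustration): the feature vector is built deterministically from delayed observations rather than from a random reservoir, and the only training step is a linear least-squares solve.

# Minimal sketch of nonlinear vector autoregression (next-generation reservoir
# computing): delay-embedded linear features plus their quadratic products,
# fit by ridge regression. The series, delay depth, and regularization are
# illustrative choices, not the paper's benchmark setup.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
series = np.sin(0.07 * t) + 0.05 * rng.standard_normal(t.size)  # toy signal

k = 3  # number of time delays in the feature vector

def features(s, i):
    """Constant term, linear delay terms, and their pairwise products."""
    lin = s[i - k + 1:i + 1]
    quad = np.outer(lin, lin)[np.triu_indices(k)]
    return np.concatenate(([1.0], lin, quad))

X = np.array([features(series, i) for i in range(k - 1, len(series) - 1)])
y = series[k:]                                   # one-step-ahead targets

# Ridge regression: the only "training" NVAR needs is a linear solve.
ridge = 1e-6
W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

pred = X @ W
print("one-step RMSE:", np.sqrt(np.mean((pred - y) ** 2)))

Because there are no random matrices, the fitted weights can be read directly as coefficients on named delay and product terms, which is what makes the approach interpretable.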
