Lets HPC (www.letshpc.org) is an open-access online platform that supplements conventional classroom-oriented High Performance Computing (HPC) and Parallel & Distributed Computing (PDC) education. The web-based platform provides online plotting and analysis tools that allow users to learn, teach, evaluate, and visualise the performance of parallel algorithms from a systems viewpoint. Users can quantitatively compare and understand the importance of numerous deterministic and non-deterministic factors, in both software and hardware, that impact the performance of parallel programs. At the heart of the platform is a database archiving performance and execution-environment data for standard parallel algorithms executed on different computing architectures using different programming environments; this data is contributed by various stakeholders in the HPC community. The plotting and analysis tools of our platform can be combined seamlessly with the database to aid self-learning, teaching, evaluation, and discussion of HPC-related topics. Instructors of HPC/PDC courses can use the platform's tools to illustrate the importance of proper analysis in understanding the factors that impact performance, to encourage peer learning among students, and to let students prepare a standard lab/project report that aids the instructor in uniform evaluation. The platform's modular design enables easy inclusion of performance data from contributors as well as the addition of new features in the future.
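The quantitative comparison described above typically rests on the standard speedup and efficiency metrics. A minimal sketch of how such metrics are computed from timing data follows; the timing values and function names are illustrative, not taken from the platform itself.

```python
# Sketch of the speedup/efficiency metrics a performance-analysis
# platform would plot. All timing values below are hypothetical.

def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E(p) = S(p) / p."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical wall-clock times (seconds) for 1, 2, 4, and 8 workers.
timings = {1: 40.0, 2: 21.0, 4: 11.5, 8: 6.8}
t1 = timings[1]
for p, tp in sorted(timings.items()):
    print(f"p={p}: speedup={speedup(t1, tp):.2f}, "
          f"efficiency={efficiency(t1, tp, p):.2f}")
```

Deviations of the efficiency curve from 1.0 are exactly the kind of software- and hardware-dependent effects such a platform lets students investigate.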
In the last few years the Web has progressively acquired the status of an infrastructure for social computation, allowing researchers to coordinate the cognitive abilities of human agents in online communities so as to steer collective user activity towards predefined goals. This general trend is also triggering the adoption of web games as an interesting laboratory for running experiments in the social sciences and, more generally, wherever the contribution of human beings is crucially required for research purposes. Although the number of online users has been growing steadily, there is still a need for a systematic approach to the Web as a laboratory. In this paper we present Experimental Tribe (XTribe for short), a novel general-purpose web-based platform for web gaming and social computation. Ready to use and already operational, XTribe aims at drastically reducing the effort required to develop and run web experiments. XTribe is designed to speed up the implementation of those general aspects of web experiments that are independent of the specific experiment content. For example, XTribe takes care of user management by handling registration and profiles, and, in the case of multi-player games, it provides the necessary user-grouping functionality. XTribe also provides communication facilities to easily achieve both bidirectional and asynchronous communication. From a practical point of view, researchers are left with only the task of designing and implementing the game interface and logic of their experiment, over which they maintain full control. Moreover, XTribe acts as a repository of different scientific experiments, realizing a sort of showcase that stimulates users' curiosity, enhances their participation, and helps researchers recruit volunteers.
The utilisation of blockchain has moved beyond digital currency to other fields such as health, the Internet of Things, and education. In this paper, we present a systematic mapping study to collect and analyse relevant research on blockchain technology related to the higher education field. The paper concentrates on two main themes. First, it examines the state of the art in blockchain-based applications that have been developed for educational purposes. Second, it summarises the challenges and research gaps that need to be addressed in future studies.
Power-spectrum analysis is an important tool that provides critical information about a signal; its applications range from communication systems to DNA sequencing. Interference present on a transmitted signal may be due to natural causes or may be superimposed deliberately. In the latter case, early detection and analysis of the interference becomes important. In such situations, where the observation window is small, a quick look at the power spectrum can reveal a great deal of information, including the frequency and source of the interference. In this paper, we present our design of an FPGA-based reconfigurable platform for high-performance power-spectrum analysis. The platform allows real-time data acquisition and processing of samples of the incoming signal within a small time frame. The processing consists of computing the power, together with its average and peak, over a set of input values. The platform sustains simultaneous data streams on each of its four input channels.
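The processing steps named above (power per frequency bin, its average, and its peak) can be sketched in software. The following is a minimal NumPy reference model, not the FPGA design itself; the signal, sample rate, and function name are assumptions for illustration.

```python
import numpy as np

def power_spectrum_stats(samples, fs):
    """Compute the power spectrum of one block of samples and return
    (frequencies, per-bin power, average power, peak power, peak frequency)."""
    spec = np.fft.rfft(samples)
    power = np.abs(spec) ** 2 / len(samples)       # power per frequency bin
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    peak_idx = np.argmax(power[1:]) + 1            # skip the DC bin
    return freqs, power, power.mean(), power[peak_idx], freqs[peak_idx]

# Hypothetical test signal: a 50 Hz tone plus a weaker 120 Hz interferer.
fs = 1024                                          # samples per second
t = np.arange(fs) / fs                             # one second of data
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
freqs, power, avg, peak, f_peak = power_spectrum_stats(x, fs)
print(f"peak at {f_peak:.0f} Hz, average power {avg:.3f}")
```

On hardware, the same pipeline would be realised with an FFT core followed by accumulate-and-compare logic, so that the average and peak are available as soon as a block of samples has been processed.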
To harness the potential of advanced computing technologies, efficient (real-time) analysis of large amounts of data is as essential as front-line simulations. To optimise this process, experts need to be supported by appropriate tools that allow them to interactively guide both the computation and the data exploration of the underlying simulation code. The main challenge is to seamlessly feed the user's requirements back into the simulation. State-of-the-art attempts to achieve this have resulted in the insertion of so-called check- and break-points at fixed places in the code. Depending on the size of the problem, this can still compromise the benefits of such an approach, thus preventing the experience of truly interactive computing. To extend the concept to a broader scope of applications, it is essential that users receive an immediate response from the simulation to their changes. Our generic integration framework, targeted at the needs of the computational engineering domain, supports distributed computations as well as on-the-fly visualisation in order to reduce latency and enable a high degree of interactivity with only minor code modifications. Specifically, the regular course of a simulation coupled to our framework is interrupted at small, cyclic intervals, followed by a check for updates. When new data is received, the simulation restarts automatically with the updated settings (boundary conditions, simulation parameters, etc.). To obtain rapid, albeit approximate, feedback from the simulation in the case of perpetual user interaction, a multi-hierarchical approach is advantageous. We demonstrate the flexibility and effectiveness of our approach on several different engineering test cases.
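The interrupt-check-restart cycle described above can be sketched in a few lines. This is a schematic illustration under assumed names (the queue, the parameter dictionary, and the solver are all hypothetical), not the framework's actual API.

```python
import queue

# Updates pushed by a (hypothetical) interactive visualisation front end.
updates = queue.Queue()

def simulate(params):
    """Run a toy solver in small, cyclic intervals; after each interval,
    poll for new settings and restart with them if any arrived."""
    state, step = 0.0, 0
    while step < params["max_steps"]:
        state += params["dt"]      # advance one small interval of the solver
        step += 1
        try:
            new_params = updates.get_nowait()
        except queue.Empty:
            continue               # no user input: keep computing
        # New data received: restart automatically with the updated settings.
        return simulate(new_params)
    return state

result = simulate({"max_steps": 100, "dt": 0.01})  # runs to completion if no updates arrive
```

In a real coupling, the check would happen between solver iterations rather than per arithmetic step, and the "restart" could reuse the coarse solution of a multi-hierarchical grid to give the approximate rapid feedback mentioned above.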
With the upcoming generation of telescopes, cluster-scale strong gravitational lenses will act as an increasingly relevant probe of cosmology and dark matter. The better-resolved data produced by current and future facilities require faster and more efficient lens-modeling software. Consequently, we present Lenstool-HPC, a strong-gravitational-lens modeling and map-generation tool based on High Performance Computing (HPC) techniques and the renowned Lenstool software. We also showcase the HPC concepts astronomers need in order to increase computation speed through massively parallel execution on supercomputers. Lenstool-HPC was developed using lens-modeling algorithms with high degrees of parallelism. Each algorithm was implemented as a highly optimised CPU, GPU, and hybrid CPU-GPU version. The software was deployed and tested on the Piz Daint cluster of the Swiss National Supercomputing Centre (CSCS). Lenstool-HPC's perfectly parallel lens-map generation and derivative computation achieve a factor-30 speed-up using only one GPU compared to Lenstool. Lenstool-HPC's hybrid lens-model fitting, tested at Hubble Space Telescope precision, is scalable up to 200 CPU-GPU nodes and is faster than Lenstool when using only 4 CPU-GPU nodes.