Students sometimes produce code that works but that its author does not comprehend. For example, a student may apply a poorly understood code template, stumble upon a working solution through trial and error, or plagiarize. Similarly, passing an automated functional assessment does not guarantee that the student understands their code. One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs. We propose an approach to automatically generate questions about student-written program code. We moreover propose a use case for such questions in the context of automatic assessment systems: after a student's program passes unit tests, the system poses questions to the student about the code. We suggest that these questions can enhance assessment systems, deepen student learning by acting as self-explanation prompts, and provide a window into students' program comprehension. This discussion paper sets an agenda for future technical development and empirical research on the topic.
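As a minimal sketch of the kind of question generation this abstract proposes (the function name, question templates, and AST-walking approach here are hypothetical illustrations, not the authors' implementation), one could traverse a student program's syntax tree and ask about the role of each construct:

```python
import ast

def generate_questions(source: str) -> list[str]:
    """Generate simple comprehension questions about student code.

    A hypothetical sketch: a real system would likely use richer
    program analysis than this plain AST walk.
    """
    questions = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Ask about the purpose of each assigned variable.
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    questions.append(
                        f"What is the purpose of the variable "
                        f"'{target.id}' assigned on line {node.lineno}?"
                    )
        # Ask about the condition guarding each while-loop.
        elif isinstance(node, ast.While):
            questions.append(
                f"Under what condition does the loop on line "
                f"{node.lineno} stop executing?"
            )
    return questions

if __name__ == "__main__":
    student_code = "total = 0\nn = 5\nwhile n > 0:\n    total = total + n\n    n = n - 1\n"
    for q in generate_questions(student_code):
        print(q)
```

In the proposed use case, such questions would be posed only after the program passes its unit tests, so they probe comprehension of code already known to be functionally correct.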
The robot rights debate, and its related question of robot responsibility, invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots rights, but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the robot rights debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited human labour exploitation, and erosion of privacy, all impacting society's least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of taking responsibility by people designing, selling, and deploying such machines, remain the most pressing ethical discussion in AI.
Multiweek projects in physics labs can engage students in authentic experimentation practices, and it is important to understand student experiences during projects along multiple dimensions. To this end, we conducted an exploratory quantitative investigation to look for connections between students' pre-project views about experimental physics and their post-project sense of project ownership. We administered the Colorado Learning Attitudes About Science Survey for Experimental Physics (E-CLASS) and the Project Ownership Survey (POS) to 96 students enrolled in 6 lab courses at 5 universities. E-CLASS and POS scores were positively correlated, suggesting that students' views about experimentation may be linked to their ownership of projects. This finding motivates future studies that could explore whether these constructs are causally related.
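As a hedged illustration of the kind of correlational analysis reported here (the scores below are synthetic placeholders, not the study's actual E-CLASS or POS data), a Pearson correlation between paired survey scores can be computed as follows:

```python
import numpy as np
from scipy import stats

# Synthetic paired survey scores for illustration only; the study's
# actual E-CLASS and POS responses are not reproduced here.
rng = np.random.default_rng(0)
eclass_scores = rng.normal(loc=20, scale=5, size=96)      # pre-project views
pos_scores = 0.5 * eclass_scores + rng.normal(0, 3, 96)   # post-project ownership

# Pearson r quantifies the linear association between the two measures.
r, p_value = stats.pearsonr(eclass_scores, pos_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```

A positive r, as the study found for the real data, indicates that higher pre-project E-CLASS scores tend to accompany higher post-project ownership scores, though correlation alone does not establish the causal link the abstract flags for future work.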
We present a general approach to batching arbitrary computations for accelerators such as GPUs. We show orders-of-magnitude speedups using our method on the No U-Turn Sampler (NUTS), a workhorse algorithm in Bayesian statistics. The central challenge of batching NUTS and other Markov chain Monte Carlo algorithms is data-dependent control flow and recursion. We overcome this by mechanically transforming a single-example implementation into a form that explicitly tracks the current program point for each batch member, and only steps forward those in the same place. We present two different batching algorithms: a simpler, previously published one that inherits recursion from the host Python, and a more complex, novel one that implements recursion directly and can batch across it. We implement these batching methods as a general program transformation on Python source. Both the batching system and the NUTS implementation presented here are available as part of the popular TensorFlow Probability software package.
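A minimal sketch of the per-member program-point idea described above (the toy "program", block numbering, and stepping loop are illustrative assumptions, not the actual TensorFlow Probability transformation): each batch member carries its own program counter, and the driver advances only the members currently sitting at the same point.

```python
import numpy as np

def run_batched(x):
    """Run a toy loop `while x < 10: x += step` for a whole batch at once.

    Block 0: test the loop condition; block 1: loop body; block 2: exit.
    Each batch member tracks its own program counter (pc), so members
    with data-dependent control flow can be at different points.
    """
    x = np.asarray(x, dtype=np.float64)
    pc = np.zeros(x.shape, dtype=np.int64)  # per-member program counter
    while np.any(pc != 2):
        # Pick a program point some unfinished member is at, and step
        # forward exactly the members at that point.
        point = pc[pc != 2].min()
        active = pc == point
        if point == 0:
            # Condition test: members with x < 10 enter the body;
            # the rest jump straight to the exit block.
            pc[active] = np.where(x[active] < 10, 1, 2)
        elif point == 1:
            # Loop body: each member advances by a data-dependent step.
            x[active] = x[active] + np.where(x[active] < 5, 1.0, 2.0)
            pc[active] = 0  # jump back to the condition test
    return x

print(run_batched([0.0, 4.0, 9.5, 12.0]))
```

The real system generalizes this shape: the transformation mechanically rewrites single-example Python into such program-point form, including (in the novel variant) recursion, so divergent NUTS trajectories can share accelerator-friendly batched steps.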
Most modern (classical) programming languages support recursion. Recursion has also been successfully applied to the design of several quantum algorithms and introduced in a couple of quantum programming languages, so it can be expected that recursion will become one of the fundamental paradigms of quantum programming. Several program logics have been developed for the verification of quantum while-programs. However, there are as yet no general methods for reasoning about (mutually) recursive procedures and ancilla quantum data structures in quantum computing (with measurement). We fill this gap by proposing a parameterized quantum assertion logic and, based on it, designing a quantum Hoare logic for verifying parameterized recursive quantum programs with ancilla data and probabilistic control. The quantum Hoare logic can be used to prove partial, total, and even probabilistic correctness (by reduction to total correctness) of those quantum programs. In particular, we construct two counterexamples illustrating the incompleteness of non-parameterized assertions in verifying recursive procedures, and one counterexample showing the failure of reasoning with exact probabilities based on partial correctness. The effectiveness of our logic is shown by three main examples -- recursive quantum Markov chains (with probabilistic control), fixed-point Grover's search, and recursive quantum Fourier sampling.
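For orientation, the following is the standard classical Hoare-logic rule for a recursive procedure P with body S, given here only as the familiar shape that the paper's parameterized quantum rule generalizes (the notation below is the classical rule, not the paper's quantum formulation):

```latex
% Classical partial-correctness rule for a recursive procedure P
% with body S: assuming the specification for recursive calls,
% prove it for the body, then conclude it for P outright. The
% paper's rule generalizes this shape to parameterized quantum
% assertions with ancilla data and probabilistic control.
\[
  \frac{\{A\}\, P \,\{B\} \;\vdash\; \{A\}\, S \,\{B\}}
       {\vdash\; \{A\}\, P \,\{B\}}
  \quad \text{(Recursion)}
\]
```

The paper's counterexamples indicate that, in the quantum setting, assertions must be parameterized for such a rule to remain complete for recursive procedures.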
Let's HPC (www.letshpc.org) is an open-access online platform to supplement conventional classroom-oriented High Performance Computing (HPC) and Parallel & Distributed Computing (PDC) education. The web-based platform provides online plotting and analysis tools that allow users to learn, evaluate, teach, and see the performance of parallel algorithms from a systems viewpoint. Users can quantitatively compare and understand the importance of the numerous deterministic and non-deterministic factors of both software and hardware that impact the performance of parallel programs. At the heart of the platform is a database archiving performance and execution-environment data for standard parallel algorithms executed on different computing architectures using different programming environments; this data is contributed by various stakeholders in the HPC community. The plotting and analysis tools can be combined seamlessly with the database to aid self-learning, teaching, evaluation, and discussion of HPC-related topics. Instructors of HPC/PDC courses can use the platform's tools to illustrate the importance of proper analysis in understanding the factors impacting performance, to encourage peer learning among students, and to allow students to prepare a standard lab/project report, aiding the instructor in uniform evaluation. The platform's modular design enables easy inclusion of performance data from contributors as well as the addition of new features in the future.
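As a hedged example of the kind of performance analysis such a platform supports (the core counts and timings below are invented for illustration and are not records from the Let's HPC database):

```python
# Illustrative speedup/efficiency analysis for a parallel program.
# Timings are hypothetical, not taken from the Let's HPC database.
cores = [1, 2, 4, 8, 16]
runtimes = [100.0, 52.0, 28.0, 16.0, 11.0]  # wall-clock seconds

t_serial = runtimes[0]
for p, t in zip(cores, runtimes):
    speedup = t_serial / t      # S(p) = T(1) / T(p)
    efficiency = speedup / p    # E(p) = S(p) / p
    print(f"p={p:2d}  T={t:6.1f}s  S={speedup:5.2f}  E={efficiency:4.2f}")
```

Plotting such speedup and efficiency curves across architectures and programming environments is the sort of systems-viewpoint comparison the platform's tools are designed to make routine for students and instructors.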