
First things first: If software engineering is the solution, then what is the problem?

Publication date: 2019
Language: English





Software engineering (SE) is undergoing an ontological crisis, and it lacks a theory. Why? Among other reasons, because it has always succumbed to the pragmatism demanded by commercial and political interests and abandoned any intention of becoming a science rather than a professional discipline. To begin a discussion toward defining a theory of software, it is first necessary to know what software is.



Related research

The field of in-vivo neurophysiology currently uses statistical standards that are based on tradition rather than formal analysis. Typically, data from two (or a few) animals are pooled for one statistical test, or a significant test in a first animal is replicated in one (or a few) further animals. The use of more than one animal is widely believed to allow an inference on the population. Here, we explain that a useful inference on the population would require larger numbers and a different statistical approach. The field should consider performing studies at that standard, potentially through coordinated multi-center efforts, for selected questions of exceptional importance. Yet, for many questions, this is ethically and/or economically not justifiable. We explain why, in those studies with two (or a few) animals, any useful inference is limited to the sample of investigated animals, irrespective of whether it is based on a single animal, two animals, or a few animals.
The unquenched spectral density of the Dirac operator at $\mu \neq 0$ is complex and has oscillations with a period inversely proportional to the volume and an amplitude that grows exponentially with the volume. Here we show how the oscillations lead to the discontinuity of the chiral condensate.
Assuming that the Permanent polynomial requires algebraic circuits of exponential size, we show that the class VNP does not have efficiently computable equations. In other words, any nonzero polynomial that vanishes on the coefficient vectors of all polynomials in the class VNP requires algebraic circuits of super-polynomial size. In a recent work of Chatterjee and the authors (FOCS 2020), it was shown that the subclasses of VP and VNP consisting of polynomials with bounded integer coefficients do have equations with small algebraic circuits. Their work left open the possibility that these results could perhaps be extended to all of VP or VNP. The results in this paper show that assuming the hardness of Permanent, at least for VNP, allowing polynomials with large coefficients does indeed incur a significant blow up in the circuit complexity of equations.
We highlight that the anomalous orbits of Trans-Neptunian Objects (TNOs) and an excess in microlensing events in the 5-year OGLE dataset can be simultaneously explained by a new population of astrophysical bodies with mass several times that of Earth ($M_\oplus$). We take these objects to be primordial black holes (PBHs) and point out that the orbits of TNOs would be altered if one of these PBHs was captured by the Solar System, in line with the Planet 9 hypothesis. Capture of a free-floating planet is a leading explanation for the origin of Planet 9, and we show that the probability of capturing a PBH instead is comparable. The observational constraints on a PBH in the outer Solar System differ significantly from the case of a new ninth planet. This scenario could be confirmed through annihilation signals from the dark matter microhalo around the PBH.
Software systems are continuously evolved and delivered with high quality thanks to the widespread adoption of automated tests. A recurring issue hurting this scenario is the presence of flaky tests: test cases that may pass or fail non-deterministically. A promising approach, though one still lacking empirical evidence, is to collect static data from automated tests and use it to predict their flakiness. In this paper, we conducted an empirical study to assess the use of code identifiers to predict test flakiness. To do so, we first replicated most parts of the previous study by Pinto et al. (MSR 2020). This replication was extended by using a different ML Python platform (Scikit-learn) and adding different learning algorithms to the analyses. Then, we validated the performance of the trained models using datasets with other flaky tests and from different projects. We successfully replicated the results of Pinto et al. (2020), with minor differences, using Scikit-learn; different algorithms performed similarly to the ones used previously. Concerning the validation, we noticed that the recall of the trained models was smaller, and the classifiers presented a varying range of decreases. This was observed in both intra-project and inter-project test flakiness prediction.
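The vocabulary-based approach described above can be illustrated with a minimal Scikit-learn sketch. This is not the authors' pipeline; the identifier strings and labels below are invented for illustration, and the vectorizer/classifier choices are assumptions in the spirit of treating test-code identifiers as a bag of words.

```python
# Minimal sketch of identifier-based flaky-test prediction.
# Hypothetical data: each string is the bag of identifiers extracted
# from one test's source code; labels mark flaky (1) vs stable (0).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

tests = [
    "sleep timeout retry network wait",   # flaky-looking vocabulary
    "assert equals sum add list",
    "thread async await random now",
    "parse json string field value",
]
labels = [1, 0, 1, 0]

# Vectorize identifiers into token counts, then train a classifier.
model = make_pipeline(CountVectorizer(), RandomForestClassifier(random_state=0))
model.fit(tests, labels)

# Predict flakiness for an unseen test's identifier bag.
prediction = model.predict(["retry sleep network poll"])[0]
print(prediction)
```

In practice such a model is trained on thousands of labeled tests, and validation across projects (as in the study) is what reveals how much the learned vocabulary generalizes.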
