The transition to a future electricity system based primarily on wind and solar PV is examined for all regions in the contiguous US. We present optimized pathways for the build-up of wind and solar power, both for least backup energy needs and for least cost, obtained with a simplified, lightweight model based on long-term, high-resolution, weather-determined generation data. In the absence of storage, the pathway that best matches generation and load, and thus requires the least backup energy, generally favors a combination of both technologies, with a wind/solar PV energy mix of about 80/20 in a fully renewable scenario. The least-cost pathway initially deploys only the technology with the lowest average generation cost, but with increasing renewable installations, economically unfavorable excess generation pushes it toward the minimal-backup pathway. Surplus generation and the costs it entails can be reduced significantly by combining wind and solar power, by absorbing excess generation, for example with storage or transmission, or by coupling the electricity system to other energy sectors.
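As a rough illustration of the mix optimization described above, the following Python sketch scans wind shares to find the one minimizing backup energy against synthetic hourly profiles; the toy time series, the normalization, and the `backup_energy` function are our own assumptions, not the paper's model or weather data.

```python
# Minimal sketch (not the paper's model): scan the wind/solar mix that
# minimizes backup energy for given normalized generation and load series.
import numpy as np

hours = 8760
rng = np.random.default_rng(0)
load = 1.0 + 0.2 * np.sin(2 * np.pi * np.arange(hours) / 24)          # toy load profile
wind = np.clip(rng.normal(1.0, 0.5, hours), 0, None)                  # toy wind capacity factors
solar = np.clip(np.sin(2 * np.pi * np.arange(hours) / 24), 0, None)   # toy diurnal solar profile

def backup_energy(alpha, gamma=1.0):
    """Backup energy for wind share alpha at renewable penetration gamma."""
    gen = gamma * load.mean() * (alpha * wind / wind.mean()
                                 + (1 - alpha) * solar / solar.mean())
    return np.maximum(load - gen, 0).sum()   # load not covered by wind+solar

alphas = np.linspace(0, 1, 101)
best = min(alphas, key=backup_energy)
print(f"backup-minimizing wind share: {best:.2f}")
```

With gamma = 1 this corresponds to the fully renewable scenario, where average renewable generation equals average load.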
We study the costs of coal-fired electricity in the United States between 1882 and 2006 by decomposing them into the price of coal, transportation costs, energy density, thermal efficiency, plant construction cost, interest rate, capacity factor, and operations and maintenance cost. The dominant determinants of cost have been the price of coal and plant construction cost. The price of coal appears to fluctuate more or less randomly, while the construction cost follows long-term trends: decreasing from 1902 to 1970, increasing from 1970 to 1990, and leveling off since then. Our analysis emphasizes the importance of using long time series and of comparing electricity generation technologies on decomposed total costs rather than on single components such as capital. Taking this approach, we find that the history of coal-fired electricity costs suggests a fluctuating floor to its future costs, determined by coal prices. Even if construction costs resumed a decreasing trend, the cost of coal-based electricity would drop for a while but would eventually be determined by the price of coal, which fluctuates while showing no long-term trend.
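The decomposition lends itself to a simple worked form. The sketch below computes a per-kWh cost from the listed components; the functional form, the capital-recovery annualization, and all parameter names are illustrative assumptions, not the paper's exact formula.

```python
# Illustrative levelized-cost decomposition along the lines the abstract
# describes: cost per kWh = fuel component + capital component + O&M.
def coal_electricity_cost(coal_price, transport_cost, energy_density,
                          thermal_efficiency, construction_cost, interest_rate,
                          lifetime_years, capacity_factor, om_cost):
    """coal_price, transport_cost : $ per tonne of coal
    energy_density             : kWh of heat per tonne of coal
    thermal_efficiency         : heat-to-electricity conversion (0..1)
    construction_cost          : $ per kW of capacity
    interest_rate              : annual discount rate (0..1)
    capacity_factor            : fraction of the year at full output (0..1)
    om_cost                    : $ per kWh generated
    """
    fuel = (coal_price + transport_cost) / (energy_density * thermal_efficiency)
    # capital recovery factor annualizes the up-front construction cost
    crf = interest_rate / (1 - (1 + interest_rate) ** -lifetime_years)
    capital = construction_cost * crf / (capacity_factor * 8760)
    return fuel + capital + om_cost
```

In this form the fuel term sets the floor the abstract refers to: it cannot fall below what the coal price and transport cost dictate, no matter how cheap construction becomes.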
Systematized subject classification is essential for funding and assessing scientific projects. Conventionally, classification schemes are founded on the empirical knowledge of a group of experts, so the experts' perspectives have shaped the current systems of scientific classification. Those systems record the current state of the art in practice, yet the accelerating pace of scientific change has made updating the classification systems on a timely basis virtually impossible. To overcome these limitations, we propose an unbiased classification scheme that takes advantage of collective knowledge: Wikipedia, an Internet encyclopedia edited by millions of users, provides a promptly updated classification built in a collective fashion. We construct a Wikipedia network of scientific disciplines and extract the backbone of the network. This structure displays a landscape of science and technology that is based on collective intelligence and that is more unbiased and adaptable than conventional classifications.
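As a hedged illustration of backbone extraction, the sketch below applies the disparity filter of Serrano et al. to a toy weighted link network; whether the paper uses this particular filter is an assumption on our part, and the edge list is a stand-in for real Wikipedia link data.

```python
# Keep only statistically significant edges of a weighted network
# (disparity filter); toy discipline-to-discipline link weights.
import networkx as nx

edges = [("Physics", "Mathematics", 30), ("Physics", "Chemistry", 12),
         ("Chemistry", "Biology", 25), ("Mathematics", "Statistics", 18),
         ("Physics", "Statistics", 2), ("Biology", "Statistics", 3)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

def disparity_backbone(G, alpha=0.3):
    """Keep edge (u, v) if its weight is significant for u or v."""
    B = nx.Graph()
    for u in G:
        k = G.degree(u)
        s = sum(d["weight"] for _, _, d in G.edges(u, data=True))
        for _, v, d in G.edges(u, data=True):
            p = (1 - d["weight"] / s) ** (k - 1)   # null-model p-value
            if k > 1 and p < alpha:
                B.add_edge(u, v, weight=d["weight"])
    return B

print(disparity_backbone(G).edges())
```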
While powerful tools have been developed to analyze quantum query complexity, there are still many natural problems that do not fit neatly into the black-box oracle model. We introduce a new model that allows multiple oracles with differing costs, capturing more of the difficulty of certain natural problems. We test this model on a simple problem, Search with Two Oracles, for which we construct a quantum algorithm that we prove is asymptotically optimal. We further give some evidence, using a geometric picture of Grover's algorithm, that our algorithm is exactly optimal.
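For readers unfamiliar with the geometric picture invoked here, the sketch below computes the standard Grover rotation angle and success probability for ordinary single-oracle search; it is background illustration only and does not reproduce the paper's two-oracle algorithm.

```python
# Geometric picture of standard Grover search: each iteration rotates the
# state by 2*theta toward the marked subspace, theta = asin(sqrt(M/N)).
import math

def grover_success_probability(N, M, t):
    """Probability of measuring a marked item after t iterations,
    with M marked items among N."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * t + 1) * theta) ** 2

N, M = 1 << 20, 1
t_opt = round(math.pi / (4 * math.asin(math.sqrt(M / N))) - 0.5)
print(t_opt, grover_success_probability(N, M, t_opt))
```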
We consider the problem of choosing the best of $n$ samples, drawn from a large random pool, when sampling each member carries a certain cost. The quality (worth) of the best sample clearly increases with $n$, but so do the sampling costs, and one important question is how many to sample for optimal gain (worth minus costs). If, in addition, the assessment of each sample's worth carries some measurement error, the perceived best of $n$ might not be the actual best, complicating the issue. Situations like this are typical in mate selection, job hiring, and food foraging, to name just a few. We tackle the problem with standard order statistics, yielding suggestions for optimal strategies as well as some unexpected insights.
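A minimal worked instance of the trade-off, assuming i.i.d. Uniform(0,1) worths and a linear sampling cost (both our illustrative assumptions, not the paper's general setting):

```python
# Worth of the best of n samples minus a linear sampling cost.
def expected_gain(n, cost_per_sample):
    # E[max of n i.i.d. Uniform(0,1)] = n / (n + 1), a standard
    # order-statistics result
    return n / (n + 1) - cost_per_sample * n

c = 0.01
best_n = max(range(1, 200), key=lambda n: expected_gain(n, c))
print(best_n, expected_gain(best_n, c))  # optimum near sqrt(1/c) - 1 = 9
```

Maximizing $n/(n+1) - cn$ over continuous $n$ gives $n^* = \sqrt{1/c} - 1$, so the optimal sample size grows only as the inverse square root of the per-sample cost.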
We study the citation dynamics of Physics, Economics, and Mathematics papers published in 1984, focusing on the fraction of uncited papers in these three collections. Our model of citation dynamics, which treats citation accrual as an inhomogeneous Poisson process, captures this uncitedness ratio fairly well. Notably, all parameters and variables in the model relate to citations and their dynamics; uncited papers emerge as a byproduct of the citation process, and it is the Poisson statistics that makes cited and uncited papers inseparable. This indicates that most uncited papers are an inherent part of the scientific enterprise; in particular, uncited papers are not unread.
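A minimal sketch of the mechanism, assuming each paper's citation count is Poisson with a paper-specific mean drawn from a lognormal fitness distribution; that distribution is an illustrative assumption, not the paper's calibration. If a paper's cumulative citation rate is $\Lambda$, the uncited fraction is simply $E[e^{-\Lambda}]$ over the population.

```python
# Poisson-mixture view of uncitedness: uncited papers arise as the
# zero-count tail of the same process that generates citations.
import numpy as np

rng = np.random.default_rng(1)
lam = rng.lognormal(mean=1.0, sigma=1.2, size=100_000)  # per-paper mean citations
citations = rng.poisson(lam)
print("simulated uncited fraction:", (citations == 0).mean())
print("analytic  E[exp(-lambda)] :", np.exp(-lam).mean())
```

The two printed numbers agree, illustrating the abstract's point that cited and uncited papers are statistically inseparable outcomes of one process.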