We investigated the depth dependence of the coherence times of nitrogen-vacancy (NV) centers by controlling their depth precisely through moderate oxidative etching at 580 °C in air. Successive nanoscale etching brought the NV centers closer to the diamond surface step by step, which enabled us to trace the number of NV centers remaining in the chip and to study how their coherence times depend on depth as the diamond is etched. Our results show that the coherence times of the NV centers declined rapidly over roughly the last 22 nm of depth reduction before the centers finally disappeared, revealing a critical depth for the influence of the rapidly fluctuating surface spin bath. By monitoring the variation of coherence time with depth, we can produce a shallow NV center with a long coherence time for detecting external spins with high sensitivity.
A phase-space analysis of the cosmological parameters $\Omega_{\phi}$ and $\gamma_{\phi}$ is given. Based on this, the well-known quintessence cosmology is studied with an exponential potential $V(\phi)=V_{0}\exp(-\lambda\phi)$. Given observational data, the current state of the universe can be pinpointed in the phase diagrams, making the diagrams more informative. The scaling solution of quintessence is usually not expected to produce accelerating cosmic expansion, but we prove that it can yield transient acceleration. We also find that the differential equations of the system widely used in the study of scalar fields are incomplete, and we use a numerical method to determine their range of applicability.
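For orientation, the following is a minimal sketch of the standard autonomous system for quintessence with an exponential potential (the usual variables $x$ and $y$, with $\Omega_{\phi}=x^2+y^2$ and $\gamma_{\phi}=2x^2/(x^2+y^2)$), assuming a pressureless matter background and an illustrative value of $\lambda$; it is not the authors' completed system, which they argue differs from this widely used form.

```python
# Sketch: phase-space evolution of quintessence with V = V0*exp(-lambda*phi),
# using x = kappa*phidot/(sqrt(6)*H), y = kappa*sqrt(V)/(sqrt(3)*H).
# Omega_phi = x^2 + y^2, gamma_phi = 1 + w_phi = 2*x^2/(x^2 + y^2).
# Assumptions: pressureless matter background, lambda = 1 (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

lam = 1.0  # potential slope (illustrative)

def rhs(N, s):
    x, y = s
    common = 1.5 * (1.0 + x**2 - y**2)              # matter background (w_m = 0)
    dx = -3.0*x + (np.sqrt(6.0)/2.0)*lam*y**2 + x*common
    dy = -(np.sqrt(6.0)/2.0)*lam*x*y + y*common
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 15.0), [1e-4, 1e-4], dense_output=True, rtol=1e-8)
N = np.linspace(0.0, 15.0, 300)
x, y = sol.sol(N)
omega_phi = x**2 + y**2
gamma_phi = 2.0*x**2 / np.maximum(x**2 + y**2, 1e-30)
print(omega_phi[-1], gamma_phi[-1])                  # late-time attractor values
```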
We presented a high-sensitivity temperature detection using an implanted single Nitrogen-Vacancy center array in diamond. The high-order Thermal Carr-Purcell-Meiboom-Gill (TCPMG) method was performed on the implanted single nitrogen vacancy (NV) center in diamond in a static magnetic field. We demonstrated that under small detunings for the two driving microwave frequencies, the oscillation frequency of the induced fluorescence of the NV center equals approximately to the average of the detunings of the two driving fields. On basis of the conclusion, the zero-field splitting D for the NV center and the corresponding temperature could be determined. The experiment showed that the coherence time for the high-order TCPMG was effectively extended, particularly up to 108 {mu}s for TCPMG-8, about 14 times of the value 7.7 {mu}s for thermal Ramsey method. This coherence time corresponded to a thermal sensitivity of 10.1 mK/Hz1/2. We also detected the temperature distribution on the surface of a diamond chip in three different circumstances by using the implanted NV center array with the TCPMG-3 method. The experiment implies the feasibility for using implanted NV centers in high-quality diamonds to detect temperatures in biology, chemistry, material science and microelectronic system with high-sensitivity and nanoscale resolution.
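As a rough illustration of the read-out step described above, the sketch below takes the measured fluorescence oscillation frequency as the average detuning of the two driving fields, infers D, and converts a shift of D into a temperature shift using an assumed literature value of dD/dT ≈ -74 kHz/K. All numbers, and the sign convention for the detunings, are illustrative assumptions and do not come from this work.

```python
# Sketch: estimate D and a temperature shift from a TCPMG-type measurement.
# Assumption: the fluorescence oscillation frequency equals the average detuning
# of the two driving fields, so D ~ (nu_plus + nu_minus)/2 - f_osc (sign convention assumed).
# dD/dT ~ -74 kHz/K is an assumed literature value, not a result of this paper.
nu_plus = 2.8920e9     # Hz, drive frequency near the m_s = +1 transition (illustrative)
nu_minus = 2.8480e9    # Hz, drive frequency near the m_s = -1 transition (illustrative)
f_osc = 0.5e6          # Hz, measured fluorescence oscillation frequency (illustrative)

D_ref = 2.8700e9       # Hz, reference zero-field splitting at a known temperature
dD_dT = -74.0e3        # Hz/K, assumed temperature coefficient of D

D = 0.5 * (nu_plus + nu_minus) - f_osc
delta_T = (D - D_ref) / dD_dT
print(f"D = {D/1e9:.6f} GHz, temperature shift ~ {delta_T:.2f} K")
```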
The varying speed of light (VSL) theory is controversial. It succeeds in explaining some cosmological problems, but it is excluded by mainstream physics because it would shake the foundations of physics. In this paper, we test whether the speed of light varies, using observational data from Type Ia supernovae, baryon acoustic oscillations, observational $H(z)$ data and the cosmic microwave background (CMB). We adopt the common form $c(t)=c_0a^n(t)$ with contributions from dark energy and matter, where $c_0$ is the current value of the speed of light and $n$ is a constant, and thereby construct a varying-speed-of-light dark energy model (VSLDE). The combined observational data give a rather weak constraint, $n=-0.0033 \pm 0.0045$ at the 68.3% confidence level, which indicates that the speed of light is consistent with being constant to high significance. By reconstructing the time-variable $c(t)$, we find that the speed of light shows almost no variation for redshift $z < 10^{-1}$. Higher-redshift observations are more sensitive to the VSLDE model, but even there the variation of the speed of light is only of order $10^{-2}$. We also introduce the geometrical diagnostic $Om(z)$ to show the difference between the VSLDE and $\Lambda$CDM models. The result shows that the current data can hardly differentiate between them. All the results indicate that the observational data favor a constant speed of light.
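As a minimal illustration, the sketch below reconstructs the fractional variation of the speed of light for the quoted best fit, $c(z)/c_0=(1+z)^{-n}$ with $n=-0.0033$, and evaluates the $Om(z)$ diagnostic for a reference flat $\Lambda$CDM expansion history. The $\Omega_m$ value is an assumption for illustration, not the paper's fitted value, and the $E(z)$ used here is not the VSLDE expansion history.

```python
# Sketch: fractional variation of c(z) for c(t) = c0 * a^n, i.e. c(z)/c0 = (1+z)^(-n),
# and the Om(z) diagnostic Om(z) = (E^2(z) - 1) / ((1+z)^3 - 1),
# evaluated for a reference flat LambdaCDM E(z) (Omega_m = 0.3 is an assumption).
import numpy as np

n_best = -0.0033          # best-fit exponent quoted in the abstract
omega_m = 0.3             # illustrative matter density for the reference model

z = np.linspace(0.01, 3.0, 50)
dc_over_c = (1.0 + z)**(-n_best) - 1.0          # fractional change of c

E2 = omega_m * (1.0 + z)**3 + (1.0 - omega_m)   # flat LambdaCDM
om_z = (E2 - 1.0) / ((1.0 + z)**3 - 1.0)        # constant = Omega_m for LambdaCDM

print(f"max |dc/c| up to z=3: {np.abs(dc_over_c).max():.4f}")
print(f"Om(z) range: {om_z.min():.3f} .. {om_z.max():.3f}")
```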
Recently, an $f(T)$ gravity based on a modification of teleparallel gravity was proposed to explain the accelerated expansion of the universe without the need for dark energy. We use observational data from Type Ia supernovae, baryon acoustic oscillations, and the cosmic microwave background to constrain this $f(T)$ theory and reconstruct the effective equation of state and the deceleration parameter. We obtain the best-fit values of the parameters and find the interesting result that the $f(T)$ theory considered here allows the accelerated Hubble expansion to be a transient effect.
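The reconstruction step can be illustrated with the standard kinematic relations $q(z) = -1 + (1+z)E'(z)/E(z)$ and $w_{\rm eff}(z) = -1 + \tfrac{2}{3}(1+z)E'(z)/E(z)$. The sketch below applies them to a reference flat $\Lambda$CDM $E(z)$ with an assumed $\Omega_m$, since the abstract does not specify the $f(T)$ form; it only shows the numerical reconstruction pattern, not the paper's model.

```python
# Sketch: reconstruct the deceleration parameter q(z) and effective EoS w_eff(z)
# from a given dimensionless expansion rate E(z) = H(z)/H0 via numerical differentiation.
# E(z) here is a reference flat LambdaCDM with Omega_m = 0.3 (an assumption).
import numpy as np

omega_m = 0.3

def E(z):
    return np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))

z = np.linspace(0.0, 2.0, 201)
dE_dz = np.gradient(E(z), z)                 # numerical derivative of E(z)

q = -1.0 + (1.0 + z) * dE_dz / E(z)          # deceleration parameter
w_eff = -1.0 + (2.0/3.0) * (1.0 + z) * dE_dz / E(z)

# acceleration (q < 0) today, deceleration at high z for this reference model
print(f"q(0) = {q[0]:.3f}, q(2) = {q[-1]:.3f}, w_eff(0) = {w_eff[0]:.3f}")
```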
Two types of interacting dark energy models are investigated using Type Ia supernovae (SNIa), observational $H(z)$ data (OHD), the cosmic microwave background (CMB) shift parameter and the secular Sandage-Loeb (SL) test. We find that including the SL test provides notably more stringent constraints on the parameters of both models. For the constant coupling model, the interaction term including the SL test is estimated to be $\delta=-0.01 \pm 0.01\,(1\sigma) \pm 0.02\,(2\sigma)$, whose errors are only about half of the original scale. Compared with the combination of SNIa and OHD, including the SL test reduces the best-fit interaction from 0.39 to 0.10, which indicates that higher-redshift observations such as the SL test are necessary to track the evolution of the interaction. For the varying coupling model, we reconstruct the interaction $\delta(z)$ and find that it is also negative, similar to the constant coupling model; at high redshift, however, the interaction generally vanishes. The constraints also show that the $\Lambda$CDM model still provides a good fit to the observational data, and the coincidence problem remains quite severe. However, phantom-like dark energy with $w_X<-1$ is slightly favored over the $\Lambda$CDM model.
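The SL test rests on the redshift drift $\dot z = (1+z)H_0 - H(z)$, usually quoted as a spectroscopic velocity shift $\Delta v = c\,\Delta z/(1+z)$ accumulated over an observation time $\Delta t$. The sketch below evaluates that signal for a reference flat $\Lambda$CDM model over a 10-year baseline; the cosmological parameters and the baseline are assumptions for illustration, not the interacting models constrained in the paper.

```python
# Sketch: Sandage-Loeb signal for a reference flat LambdaCDM model.
# Redshift drift: dz/dt0 = (1+z)*H0 - H(z); velocity shift: dv = c*dz/(1+z).
# H0, Omega_m and the 10-year baseline are illustrative assumptions.
import numpy as np

c_kms = 2.99792458e5            # speed of light [km/s]
H0 = 70.0                       # [km/s/Mpc]
omega_m = 0.3
years = 10.0
sec_per_year = 3.1557e7
mpc_km = 3.0857e19              # km per Mpc

def H(z):
    return H0 * np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))

z = np.array([2.0, 3.0, 4.0, 5.0])
zdot = (1.0 + z) * H0 - H(z)                     # [km/s/Mpc] = inverse time in these units
dz = zdot / mpc_km * years * sec_per_year        # dimensionless drift over the baseline
dv = c_kms * dz / (1.0 + z) * 1e5                # velocity shift [cm/s]

for zi, dvi in zip(z, dv):
    print(f"z = {zi:.1f}: Delta v ~ {dvi:+.2f} cm/s")
```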
In this paper, we propose a novel image interpolation algorithm formulated by combining a local autoregressive (AR) model and a nonlocal adaptive 3-D sparse model as regularization constraints within a regularization framework. Estimating the high-resolution image via the local AR regularization differs from conventional AR models, which compute the interpolation coefficients without considering the rough structural similarity between the low-resolution (LR) and high-resolution (HR) images; here the coefficients are computed in a weighted fashion that accounts for this similarity. The nonlocal adaptive 3-D sparse model is then formulated to regularize the interpolated HR image, providing a way to correct pixels affected by the numerical instability of the AR model. In addition, a new Split-Bregman-based iterative algorithm is developed to solve the resulting optimization problem. Experimental results demonstrate that the proposed algorithm achieves significant performance improvements over traditional algorithms in terms of both objective quality and visual perception.
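For readers unfamiliar with the Split-Bregman machinery, the following is a minimal sketch of a Split-Bregman iteration on a toy 1-D problem, $\min_x \tfrac12\|x-y\|_2^2 + \lambda\|Dx\|_1$ with a finite-difference operator $D$. It only illustrates the splitting, shrinkage and Bregman-update pattern; it is not the paper's combined AR plus nonlocal 3-D sparse model, and all parameters are arbitrary.

```python
# Sketch: Split-Bregman iteration for the toy problem
#   min_x 0.5*||x - y||^2 + lam*||D x||_1,  D = 1-D forward difference.
# Split d = D x and iterate:
#   x <- argmin 0.5||x-y||^2 + mu/2*||d - Dx - b||^2   (linear solve)
#   d <- shrink(Dx + b, lam/mu)                        (soft threshold)
#   b <- b + Dx - d                                    (Bregman update)
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.concatenate([np.zeros(n//2), np.ones(n//2)])   # piecewise-constant signal
y = x_true + 0.1 * rng.standard_normal(n)                  # noisy observation

D = (np.eye(n, k=1) - np.eye(n))[:-1]                      # (n-1) x n forward difference
lam, mu = 0.5, 1.0

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = y.copy()
d = np.zeros(n - 1)
b = np.zeros(n - 1)
A = np.eye(n) + mu * D.T @ D                               # fixed system matrix
for _ in range(50):
    x = np.linalg.solve(A, y + mu * D.T @ (d - b))
    d = shrink(D @ x + b, lam / mu)
    b = b + D @ x - d

print(f"RMSE noisy: {np.sqrt(np.mean((y - x_true)**2)):.3f}, "
      f"RMSE denoised: {np.sqrt(np.mean((x - x_true)**2)):.3f}")
```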
We study the problem of high-dimensional variable selection via two-step procedures. First, we show that given a good initial estimator which is $\ell_{\infty}$-consistent but not necessarily variable selection consistent, we can apply the nonnegative Garrote, adaptive Lasso or hard-thresholding procedure to obtain a final estimator that is both estimation and variable selection consistent. Unlike the Lasso, our results do not require the irrepresentable condition, which can easily fail even for moderate $p_n$ (Zhao and Yu, 2007), and they allow $p_n$ to grow almost as fast as $\exp(n)$ (for hard-thresholding there is no restriction on $p_n$). We also study the conditions under which Ridge regression can be used as the initial estimator. We show that under a relaxed identifiability condition, the Ridge estimator is $\ell_{\infty}$-consistent. Such a condition is usually satisfied when $p_n \le n$ and does not require the partial orthogonality between relevant and irrelevant covariates that is needed for the univariate regression in Huang et al. (2008). Our numerical studies show that when the Lasso or Ridge is used as the initial estimator, the two-step procedures have a higher sparsity recovery rate than the Lasso or the adaptive Lasso with univariate regression used in Huang et al. (2008).
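A minimal sketch of the two-step idea, using Ridge as the initial estimator followed by hard thresholding on simulated data, is given below. The dimensions, signal strength and threshold level are arbitrary illustrative choices, not the tuning analyzed in the paper.

```python
# Sketch: two-step variable selection -- Ridge initial estimator, then hard thresholding.
# Dimensions, signal strength and the threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, p, s = 100, 200, 5                       # samples, variables, relevant variables
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                              # true nonzero coefficients
y = X @ beta + rng.standard_normal(n)

# Step 1: initial estimator (Ridge)
ridge = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)
beta_init = ridge.coef_

# Step 2: hard thresholding for variable selection
tau = 0.5                                   # threshold (illustrative)
selected = np.abs(beta_init) > tau

print("true support:     ", np.flatnonzero(beta))
print("selected support: ", np.flatnonzero(selected))
```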
Han Liu, Jian Zhang (2008)
In this paper we consider the problem of grouped variable selection in high-dimensional regression using $\ell_1$-$\ell_q$ regularization ($1\leq q \leq \infty$), which can be viewed as a natural generalization of the $\ell_1$-$\ell_2$ regularization (the group Lasso). The key condition is that the dimensionality $p_n$ can increase much faster than the sample size $n$, i.e. $p_n \gg n$ (in our case $p_n$ is the number of groups), but the number of relevant groups is small. The main conclusion is that many good properties of $\ell_1$-regularization (the Lasso) carry over naturally to the $\ell_1$-$\ell_q$ cases ($1 \leq q \leq \infty$), even if the number of variables within each group also increases with the sample size. For fixed designs, we show that the whole family of estimators is both estimation consistent and variable selection consistent under different conditions. We also establish a persistency result for random designs under a much weaker condition. These results provide a unified treatment for the whole family of estimators ranging from $q=1$ (the Lasso) to $q=\infty$ (iCAP), with $q=2$ (the group Lasso) as a special case. When no group structure is available, the analysis reduces to the existing results for the Lasso estimator ($q=1$).
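A minimal sketch of the $q=2$ member of this family (the group Lasso), fitted by proximal gradient descent with the block soft-thresholding operator, is given below. The data, group structure and regularization level are illustrative assumptions, and the solver is a generic one rather than anything specific to the paper.

```python
# Sketch: group Lasso (ell_1-ell_2 regularization) via proximal gradient descent.
#   min_beta 0.5/n * ||y - X beta||^2 + lam * sum_g ||beta_g||_2
# Groups, dimensions and lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, n_groups, group_size = 100, 20, 5
p = n_groups * group_size
groups = [np.arange(g*group_size, (g+1)*group_size) for g in range(n_groups)]

X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[groups[0]] = 1.0                      # only the first group is relevant
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.1
step = 1.0 / (np.linalg.norm(X, 2)**2 / n)      # 1/L for the smooth part

beta = np.zeros(p)
for _ in range(500):
    grad = -X.T @ (y - X @ beta) / n            # gradient of the smooth loss
    z = beta - step * grad
    for g in groups:                            # block soft-thresholding (prox of the penalty)
        norm_g = np.linalg.norm(z[g])
        z[g] = max(0.0, 1.0 - step * lam / max(norm_g, 1e-12)) * z[g]
    beta = z

active = [i for i, g in enumerate(groups) if np.linalg.norm(beta[g]) > 1e-6]
print("active groups:", active)
```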