We study scalar bubble collisions in first-order phase transitions, focusing on the relativistic limit. We propose a trapping equation that describes the wall behavior after collision, and test it with numerical simulations in several setups. We also examine the energy dynamics after collision and discuss the implications for gravitational wave production.
A statistically significant excess of gamma rays has been reported and robustly confirmed in the Galactic Center over the past decade. Large local dark matter densities suggest that this Galactic Center Excess (GCE) may be attributable to new physics, and indeed it has been shown that this signal is well modelled by annihilations dominantly into $b\bar{b}$ with a WIMP-scale cross section. In this paper, we consider Majorana dark matter annihilating through a Higgs portal as a candidate source for this signal, where large CP violation in the Higgs coupling may serve to severely suppress scattering rates. In particular, we explore the phenomenology of two minimal UV completions, a singlet-doublet model and a doublet-triplet model, and map out the available parameter space which can give a viable signal while respecting current experimental constraints.
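The mechanism invoked here is standard enough to sketch. For Majorana dark matter $\chi$ coupled to the Higgs $h$ with a CP phase $\theta$ (a schematic parametrization of ours, not necessarily the paper's notation),
$$\mathcal{L}_{\rm int} \;\supset\; -\,y\,h\,\bar{\chi}\left(\cos\theta + i\gamma_5\sin\theta\right)\chi\,,$$
the pseudoscalar ($\gamma_5$) piece mediates $s$-wave annihilation, as the GCE requires, while its contribution to spin-independent nuclear scattering is suppressed by powers of the momentum transfer; the purely scalar piece scatters unsuppressed but annihilates only in the $p$-wave. Dialing $\theta$ toward $\pi/2$ therefore preserves the annihilation signal while severely suppressing direct-detection rates.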
Dark Matter (DM) models providing possible alternative solutions to the small-scale crisis of standard cosmology are nowadays of growing interest. We consider DM interacting with light hidden fermions via well-motivated fundamental operators, showing that the resultant matter power spectrum is suppressed on subgalactic scales within a plausible parameter region. Our description of the evolution of cosmological perturbations relies on a fully consistent first-principles derivation of a perturbed Fokker-Planck-type equation, generalizing the existing literature. The cosmological perturbation of the Fokker-Planck equation is presented for the first time in two different gauges, where the results transform into each other according to the rules of gauge transformation. Furthermore, our focus lies on the derivation of a broadly applicable and easily computable collision term, which shows important phenomenological differences from other existing approximations. As one of the main results, and concerning the small-scale crisis, we show the equal importance of vector and scalar boson mediated interactions between DM and light fermions.
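For orientation, the collision term in question is of the Fokker-Planck type familiar from kinetic-decoupling studies; in the homogeneous limit it is commonly written as (schematic notation of ours, not the paper's perturbed, gauge-dependent result)
$$C[f] \;\simeq\; \gamma(T)\,\frac{\partial}{\partial p^i}\!\left[\,p^i f + m_\chi T\,\frac{\partial f}{\partial p^i}\,\right],$$
where $f$ is the DM phase-space distribution, $m_\chi$ the DM mass, $T$ the temperature of the light-fermion bath, and $\gamma(T)$ the momentum-transfer rate set by the DM-fermion scattering amplitude. The derivation summarized above generalizes this structure to perturbed spacetimes in two gauges.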
We study large-scale structure in the cosmology of Coleman-de Luccia bubble collisions. Within a set of controlled approximations we calculate the effects on galaxy motion seen from inside a bubble which has undergone such a collision. We find that generically bubble collisions lead to a coherent bulk flow of galaxies on some part of our sky, the details of which depend on the initial conditions of the collision and the redshift of the galaxy in question. With other parameters held fixed, the effects weaken as the amount of inflation inside our bubble grows, but they can produce measurable flows even past the number of e-folds required to solve the flatness and horizon problems.
We revisit the sgoldstino interpretation of the diphoton excess in the context of gauge mediation. While the bound on the gluino mass might seem to make the sgoldstino contribution to the diphoton excess unobservable, we show that the interpretation is viable in a thin, near-critical region of the parameter space. This regime gives rise to drastic departures from the standard gauge mediation picture. While the fermion messengers lie in the (10-100) TeV range, some scalar messengers are significantly lighter and are responsible for the sgoldstino production and decay. Their effective coupling to the sgoldstino is correspondingly enhanced, and a non-perturbative regime is triggered when the light and heavy messenger masses differ by a factor $\sim 4\pi$. We also comment on the possible role of an R-axion and on the possibility of decoupling the sfermions in this context.
Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning. We take a closer look at this phenomenon and first show that real image datasets are actually separated. With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence there should be no inherent tradeoff between robustness and accuracy. Through extensive experiments with robustness methods, we argue that the gap between theory and practice arises from two limitations of current methods: either they fail to impose local Lipschitzness or they are insufficiently generalized. We explore combining dropout with robust training methods and obtain better generalization. We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness and augmenting them with deep learning generalization techniques. Code available at https://github.com/yangarbiter/robust-local-lipschitz
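As a concrete illustration of the local Lipschitzness diagnostic discussed above, the sketch below estimates an empirical local Lipschitz constant of a trained classifier by PGD-style ascent on the ratio $\|f(x')-f(x)\|_1/\|x'-x\|_\infty$ over an $\ell_\infty$ ball. This is a minimal sketch under our own assumptions: the function name, hyperparameters, and choice of norms are illustrative, not the repository's exact protocol.

```python
import torch

def local_lipschitz_estimate(model, x, eps=8/255, steps=10, alpha=2/255):
    """Estimate the empirical local Lipschitz constant of `model` around a
    batch `x` by maximizing ||f(x') - f(x)||_1 / ||x' - x||_inf over the
    eps-ball with PGD-style gradient ascent (illustrative sketch)."""
    model.eval()
    fx = model(x).detach()                               # reference outputs f(x)
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start in the ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        diff = (model(x_adv) - fx).abs().sum(dim=1)            # ||f(x') - f(x)||_1
        dist = (x_adv - x).abs().flatten(1).max(dim=1).values  # ||x' - x||_inf
        ratio = (diff / (dist + 1e-12)).sum()
        grad, = torch.autograd.grad(ratio, x_adv)
        x_adv = x_adv + alpha * grad.sign()              # ascend the ratio
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back into the ball
    with torch.no_grad():
        diff = (model(x_adv) - fx).abs().sum(dim=1)
        dist = (x_adv - x).abs().flatten(1).max(dim=1).values
        return (diff / (dist + 1e-12)).mean().item()
```

A smaller value on held-out data indicates a locally smoother classifier; comparing this quantity across differently trained models (standard vs. robust training, with and without dropout) is one way to probe the gap between theory and practice described above.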