The detection of binary neutron star mergers represents one of the most important and complex astrophysical discoveries of recent years. One of the open issues is the turbulent amplification of the magnetic field, initially triggered by the Kelvin-Helmholtz instability at scales much smaller than any numerical resolution currently achievable. Here we present numerical simulations of the first ten milliseconds of a binary neutron star merger. First, we confirm in detail how the simulated amplification depends on the numerical resolution and is distributed over a broad range of scales, as expected from turbulent MHD theory. We find that an initial large-scale magnetic field of $10^{11}\,$G inside each star is amplified in the remnant to root-mean-square values above $10^{16}\,$G within the first $5$ milliseconds for our highest-resolution run. Then, we perform large-eddy simulations, exploring the performance of the subgrid-scale gradient model, already tested successfully in previous turbulent-box simulations. We show that including this model is especially important in the induction equation, since it leads to a magnetic-field amplification comparable to that of a higher-resolution run, but at a greatly reduced computational cost. During the first ten milliseconds we find no clear evidence of an ordered, large-scale magnetic field, which is expected to develop on longer timescales through magnetic winding and the magneto-rotational instability.
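For concreteness, the gradient subgrid-scale model mentioned above closes the filtered induction equation by reconstructing the unresolved electromotive force from gradients of the resolved velocity and magnetic fields. A minimal sketch of the leading-order closure, assuming a filter width $\Delta$ tied to the grid spacing and the standard Taylor-expansion prefactor $\Delta^2/12$ (the abstract does not fix the coefficient actually calibrated in the simulations), is
$$
\partial_t \bar{B}_i = \epsilon_{ijk}\,\partial_j\!\left[\left(\bar{\mathbf{v}}\times\bar{\mathbf{B}}\right)_k + \bar{\mathcal{E}}_k\right],
\qquad
\bar{\mathcal{E}}_k \simeq \frac{\Delta^2}{12}\,\epsilon_{klm}\,\partial_n \bar{v}_l\,\partial_n \bar{B}_m ,
$$
where overbars denote filtered quantities and $\bar{\mathcal{E}}_k = \overline{(\mathbf{v}\times\mathbf{B})}_k - (\bar{\mathbf{v}}\times\bar{\mathbf{B}})_k$ is the subgrid electromotive force. Because the closure is built only from gradients of fields already evolved on the grid, it injects the net effect of the unresolved turbulent motions at a cost far below that of doubling the resolution, consistent with the comparable amplification reported above.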