The star formation rate (SFR) is a fundamental property of galaxies, crucial for understanding the build-up of their stellar content, their chemical evolution, and their energetic feedback. The SFR of a galaxy is typically obtained by observing the direct ultraviolet emission from young stellar populations, the optical nebular line emission from gas ionized by newly formed massive stars, or the emission reprocessed by dust in the infrared, or by combining observations at different wavelengths and fitting the galaxy's full spectral energy distribution. In this brief review we describe the assumptions, advantages and limitations of different SFR indicators, and we discuss the most promising indicators for high-redshift studies.
What else can be said about star formation rate indicators that has not been said already many times over? The coming of age of large ground-based surveys and the unprecedented sensitivity, angular resolution and/or field of view of infrared and ultraviolet space missions have provided extensive, homogeneous data on both nearby and distant galaxies, which have been used to further our understanding of the strengths and pitfalls of many common star formation rate indicators. The synergy between these surveys has also enabled the calibration of indicators for use on scales comparable to those of star-forming regions, thus much smaller than an entire galaxy; these are being used to investigate star formation processes at sub-galactic scales. I review progress in the field over the past decade or so.
We compare common star formation rate (SFR) indicators in the local Universe in the GAMA equatorial fields (around 160 sq. deg.), using ultraviolet (UV) photometry from GALEX, far-infrared (FIR) and sub-millimetre (sub-mm) photometry from H-ATLAS, and Halpha spectroscopy from the GAMA survey. With a high-quality sample of 745 galaxies (median redshift 0.08), we consider three SFR tracers: UV luminosity corrected for dust attenuation using the UV spectral slope beta (SFRUV,corr), Halpha line luminosity corrected for dust using the Balmer decrement (BD) (SFRHalpha,corr), and the combination of UV and IR emission (SFRUV+IR). We demonstrate that SFRUV,corr can be reconciled with the other two tracers after applying attenuation corrections obtained by calibrating IRX (the IR-to-UV luminosity ratio) and the Halpha attenuation (derived from the BD) against beta. However, beta on its own is unlikely to be a reliable attenuation indicator. We find that the attenuation correction factors depend on parameters such as stellar mass, redshift and dust temperature (Tdust), but not on Halpha equivalent width (EW) or Sersic index. Because of the large scatter in the IRX versus beta correlation, the beta-corrected SFRUV,corr exhibits systematic deviations from SFRUV+IR as a function of IRX, BD and Tdust.
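The Balmer-decrement correction behind SFRHalpha,corr follows a standard recipe: the observed Halpha/Hbeta ratio is compared with the Case B intrinsic value of 2.86 to infer the nebular colour excess, and the Halpha luminosity is boosted accordingly. A minimal sketch in Python, assuming Calzetti-like attenuation-curve values at Halpha and Hbeta (the k coefficients are standard literature numbers, not values taken from this paper):

```python
import math

# Attenuation-curve values at Halpha and Hbeta (Calzetti-like,
# assumed here; the paper may adopt a different curve).
K_HA, K_HB = 3.33, 4.60
BD_INTRINSIC = 2.86  # Case B recombination ratio Halpha/Hbeta

def halpha_attenuation(balmer_decrement):
    """A(Halpha) in magnitudes from the observed Halpha/Hbeta ratio."""
    ebv = 2.5 / (K_HB - K_HA) * math.log10(balmer_decrement / BD_INTRINSIC)
    return max(K_HA * ebv, 0.0)  # clip unphysical negative attenuation

def corrected_luminosity(l_obs, balmer_decrement):
    """Dust-corrected Halpha luminosity (same units as l_obs)."""
    return l_obs * 10 ** (0.4 * halpha_attenuation(balmer_decrement))
```

For example, an observed decrement of 4.0 implies roughly one magnitude of attenuation at Halpha, i.e. a correction factor of about 2.4 on the line luminosity.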
As images and spectra from ISO and Spitzer have provided increasingly high-fidelity representations of the mid-infrared (MIR) and Polycyclic Aromatic Hydrocarbon (PAH) emission from galaxies and from galactic and extragalactic regions, more systematic efforts have been devoted to establishing whether the emission in this wavelength region can be used as a reliable star formation rate indicator. This has also been in response to the extensive surveys of distant galaxies accumulated during the cold phase of the Spitzer Space Telescope. Results so far have been somewhat contradictory, reflecting the complex nature of the PAHs and of the mid-infrared-emitting dust in general. The two main problems faced when attempting to define a star formation rate indicator based on the mid-infrared emission from galaxies and star-forming regions are: (1) the strong dependence of the PAH emission on metallicity; and (2) the heating of the PAH dust by evolved stellar populations unrelated to the current star formation. I review the status of the field, with a specific focus on these two problems, and quantify the impact of each on calibrations of the mid-infrared emission as a star formation rate indicator.
We derive a physical model for the observed relations between star formation rate (SFR) and molecular line (CO and HCN) emission in galaxies, and show how these observed relations reflect the underlying star formation law. We do this by combining 3D non-LTE radiative transfer calculations with hydrodynamic simulations of isolated disk galaxies and galaxy mergers. We demonstrate that the observed SFR-molecular line relations are driven by the relationship between molecular line emission and gas density, and anchored by the index of the underlying Schmidt law controlling the SFR in the galaxy. Lines with low critical densities (e.g. CO J=1-0) are typically thermalized and trace the gas density faithfully. In these cases, the SFR is related to line luminosity with an index similar to the Schmidt law index. Lines with critical densities greater than the mean density of most of the emitting clouds in a galaxy (e.g. CO J=3-2, HCN J=1-0) have only a small amount of thermalized gas, and consequently a superlinear relationship between molecular line luminosity and mean gas density. This results in an SFR-line luminosity index less than the Schmidt index for high-critical-density tracers. One observational consequence is a significant redistribution of light from the small pockets of dense, thermalized gas to diffuse gas along the line of sight, and prodigious emission from subthermally excited gas. At the highest star formation rates, the SFR-Lmol slope tends to the Schmidt index, regardless of the molecular transition. The fundamental relation is the Kennicutt-Schmidt law, rather than the relation between SFR and molecular line luminosity. We use these results to make readily testable predictions for the SFR-molecular line relations of unobserved transitions.
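The scaling argument above can be written compactly. Assume the Schmidt law and a power-law dependence of line luminosity on mean gas density; the exponent $\beta$ below is not a quantity fit in the abstract, it is introduced here only to make the index arithmetic explicit:

```latex
\mathrm{SFR} \propto \rho^{N}, \qquad
L_{\mathrm{mol}} \propto \rho^{\beta}
\;\;\Longrightarrow\;\;
\mathrm{SFR} \propto L_{\mathrm{mol}}^{\,N/\beta}.
```

Thermalized, low-critical-density lines have $\beta \approx 1$, so the SFR-line luminosity index approaches the Schmidt index $N$; subthermally excited, high-critical-density lines have $\beta > 1$ (luminosity grows superlinearly with density), giving an index smaller than $N$, as described above.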
We have calibrated the 6.5 m James Webb Space Telescope (JWST) mid-infrared filters as star formation rate indicators, using JWST photometry synthesized from $Spitzer$ spectra of 49 low-redshift galaxies, which cover a wider luminosity range than most previous studies. We use Balmer-decrement-corrected $\rm{H\alpha}$ luminosity and synthesized mid-infrared photometry to empirically calibrate the $Spitzer$, WISE and JWST filters as star formation rate indicators. Our $Spitzer$ and WISE calibrations are in good agreement with recent calibrations from the literature. While mid-infrared luminosity may be directly proportional to star formation rate for high-luminosity galaxies, we find a power-law relationship between mid-infrared luminosity and star formation rate for low-luminosity galaxies ($L_{\rm H\alpha} \leq 10^{43}~{\rm erg~s^{-1}}$). We find that for galaxies with an $\rm{H\alpha}$ luminosity of $10^{40}~{\rm erg~s^{-1}}$ (corresponding to a star formation rate of $\sim 0.055~{\rm M_\odot~yr^{-1}}$), the corresponding JWST mid-infrared $\nu L_{\nu}$ luminosity is between $10^{40.50}$ and $10^{41.00}~{\rm erg~s^{-1}}$. Power-law fits of JWST luminosity as a function of $\rm{H\alpha}$ luminosity have indices between 1.17 and 1.32. We find that the scatter in the JWST filter calibrations decreases with increasing wavelength from 0.39 to 0.20 dex, although F1000W is an exception, where the scatter is just 0.24 dex.
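The quoted numbers imply a simple recipe for converting a JWST mid-infrared luminosity into an SFR: invert the power-law fit to recover an Halpha luminosity, then scale by the 0.055 Msun/yr pivot. A hedged sketch in Python; the normalization and slope below are mid-range placeholder values chosen from the ranges quoted above (10^40.50-10^41.00 erg/s and 1.17-1.32), not the paper's per-filter fits:

```python
# Pivot from the abstract: SFR ~ 0.055 Msun/yr at L_Halpha = 1e40 erg/s.
SFR_PIVOT = 0.055
# Filter-dependent quantities; mid-range placeholders, not fitted values.
LOG_L40 = 40.75   # log10 of nu*L_nu at the pivot (quoted range 40.50-41.00)
ALPHA = 1.25      # power-law index (quoted range 1.17-1.32)

def sfr_from_mir(nu_l_nu):
    """SFR in Msun/yr from a JWST mid-infrared nu*L_nu in erg/s,
    inverting L_MIR = 10**LOG_L40 * (L_Halpha / 1e40)**ALPHA."""
    l_halpha = 1e40 * (nu_l_nu / 10 ** LOG_L40) ** (1.0 / ALPHA)
    return SFR_PIVOT * (l_halpha / 1e40)
```

By construction, a luminosity of 10^40.75 erg/s returns the pivot SFR of 0.055 Msun/yr; because the relation is superlinear in L_Halpha, a tenfold brighter MIR luminosity corresponds to less than a tenfold higher SFR.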