To date, galaxy image simulations for weak lensing surveys have usually approximated the light profiles of all galaxies as single or double Sérsic profiles, neglecting the influence of galaxy substructures and of morphologies that deviate from such a simplified parametric characterization. While this approximation may have been sufficient for previous data sets, the stringent cosmic shear calibration requirements and the high quality of the data in the upcoming Euclid survey demand that the effects of realistic galaxy substructures on shear measurement biases be taken into account. Here we present a novel deep learning-based method to create such simulated galaxies directly from HST data. We first build and validate a convolutional neural network based on the wavelet scattering transform to learn noise-free, point-spread function (PSF)-independent representations of HST galaxy images that can be injected into simulations of images from Euclid's optical instrument VIS without introducing noise correlations during PSF convolution or shearing. Then, we demonstrate the generation of new galaxy images by sampling from the model both randomly and conditionally. Next, we quantify the cosmic shear bias arising from complex galaxy shapes in Euclid-like simulations by comparing the shear measurement biases of a sample of model objects and of their best-fit double-Sérsic counterparts. Using the KSB shape measurement algorithm, we find a multiplicative bias difference on the order of $6.9\times 10^{-3}$ between the branches with realistic morphologies and parametric profiles for a realistic magnitude-Sérsic index distribution. Moreover, we find clear detection bias differences between full image scenes simulated with parametric and realistic galaxies, leading to a bias difference of $4.0\times 10^{-3}$ independent of the shape measurement method. These differences make realistic galaxy morphologies relevant for Stage IV weak lensing surveys such as Euclid.
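The multiplicative and additive biases referred to above follow the standard linear parameterization $g^{\mathrm{obs}}=(1+m)\,g^{\mathrm{true}}+c$ used in weak lensing calibration. As a minimal illustration (not the paper's pipeline; the variable names and toy bias values below are hypothetical), the bias difference $\Delta m$ between a realistic-morphology branch and a double-Sérsic branch can be estimated from matched simulations by fitting this linear relation in each branch:

```python
import numpy as np

def estimate_bias(g_true, g_obs):
    """Fit g_obs = (1 + m) * g_true + c and return (m, c)."""
    slope, intercept = np.polyfit(g_true, g_obs, 1)
    return slope - 1.0, intercept

rng = np.random.default_rng(42)

# Hypothetical matched branches: same input shears, different galaxy models.
# The injected toy biases (+1e-3, -2e-3) are arbitrary, not the paper's results.
g_true = rng.uniform(-0.05, 0.05, size=10_000)
g_obs_realistic = (1.0 + 1.0e-3) * g_true + rng.normal(0.0, 1e-3, g_true.size)
g_obs_sersic = (1.0 - 2.0e-3) * g_true + rng.normal(0.0, 1e-3, g_true.size)

m_real, _ = estimate_bias(g_true, g_obs_realistic)
m_sersic, _ = estimate_bias(g_true, g_obs_sersic)
print(f"multiplicative bias difference dm = {m_real - m_sersic:.2e}")
```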
Determining the physical processes that control galactic-scale star formation rates is essential for an improved understanding of galaxy evolution. The role of orbital shear is currently unclear, with some models expecting reduced star formation rates (SFRs) and efficiencies (SFEs) with increasing shear, e.g., if shear stabilizes gas against gravitational collapse, and others predicting enhanced rates, e.g., if shear-driven collisions between giant molecular clouds (GMCs) trigger star formation. Expanding on the analysis of 16 galaxies by Suwannajak, Tan, & Leroy (2014), we assess the shear dependence of the SFE per orbital time ($\epsilon_\mathrm{orb}$) in 49 galaxies selected from the PHANGS-ALMA survey. In particular, we test a prediction of the shear-driven GMC collision model that $\epsilon_\mathrm{orb}\propto(1-0.7\beta)$, where $\beta\equiv \mathrm{d}\ln v_\mathrm{circ}/\mathrm{d}\ln r$, i.e., that the SFE per orbital time declines with decreasing shear. We fit the function $\epsilon_\mathrm{orb}=\epsilon_\mathrm{orb,\,0}(1-\alpha_\mathrm{CC}\beta)$, finding $\alpha_\mathrm{CC}\simeq0.76\pm0.16$; an alternative fit with $\epsilon_\mathrm{orb}$ normalized by the median value in each galaxy yields $\alpha_\mathrm{CC}^*=0.80\pm0.15$. These results are in good agreement with the prediction of the shear-driven GMC collision theory. We also examine the impact of a galactic bar on $\epsilon_\mathrm{orb}$, finding a modest decrease in the SFE in the presence of a bar, which can be attributed to the lower rates of shear in barred regions. We discuss the implications of our results for the GMC life cycle and for the environmental dependence of star formation activity.
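As an illustration of this fitting step, the sketch below (Python with synthetic data; the use of `scipy.optimize.curve_fit` and all variable names are illustrative assumptions, not the paper's actual procedure) fits $\epsilon_\mathrm{orb}=\epsilon_\mathrm{orb,\,0}(1-\alpha_\mathrm{CC}\beta)$ to mock $(\beta,\epsilon_\mathrm{orb})$ measurements and recovers $\alpha_\mathrm{CC}$:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(beta, eps0, alpha_cc):
    """Shear-driven GMC collision prediction: eps_orb = eps0 * (1 - alpha_cc * beta)."""
    return eps0 * (1.0 - alpha_cc * beta)

rng = np.random.default_rng(0)

# Synthetic (beta, eps_orb) points standing in for radial bins of many galaxies.
beta = rng.uniform(0.0, 1.0, 500)                      # beta = d ln v_circ / d ln r
eps_orb = model(beta, 0.05, 0.7) * rng.lognormal(0.0, 0.3, beta.size)

popt, pcov = curve_fit(model, beta, eps_orb, p0=[0.05, 0.5])
eps0_fit, alpha_fit = popt
alpha_err = np.sqrt(pcov[1, 1])
print(f"alpha_CC = {alpha_fit:.2f} +/- {alpha_err:.2f}")
```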
We present an integral-based technique (IBT) algorithm to accelerate supernova (SN) radiative transfer calculations. The algorithm utilizes ``integral packets'', which are calculated by path integrals over the Monte Carlo energy packets, to synthesize the observed spectropolarimetric signal at a given viewing direction in a 3-D time-dependent radiative transfer program. Compared to the event-based technique (EBT) proposed by Bulla et al. (2015), our algorithm significantly reduces the computation time and increases the Monte Carlo signal-to-noise (S/N) ratio. Using the 1-D spherically symmetric Type Ia supernova (SN Ia) ejecta model DDC10 and a 3-D model derived from it, the IBT algorithm has successfully passed three verification tests: (1) spherical symmetry; (2) mirror symmetry; and (3) a cross-comparison with the direct-counting technique (DCT) and EBT on a 3-D SN model. Notably, with our algorithm implemented in the 3-D Monte Carlo radiative transfer code SEDONA, the computation is faster than EBT by a factor of $10$-$30$ and the S/N ratio is better by a factor of $5$-$10$ for the same number of Monte Carlo quanta.
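The S/N advantage over direct counting comes from the fact that every Monte Carlo packet contributes a weighted signal toward the chosen viewing direction, rather than only the few packets that happen to escape near it. The toy Python sketch below (a schematic illustration of that general principle, not the IBT algorithm or SEDONA code) compares a direct-counting estimator with a peel-off-style estimator of the escaping luminosity per unit solid angle from a uniform absorbing sphere:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
R, kappa = 1.0, 2.0                      # sphere radius, absorption coefficient
view = np.array([0.0, 0.0, 1.0])         # chosen viewing direction
half_angle = np.radians(10.0)            # DCT acceptance cone
d_omega = 2.0 * np.pi * (1.0 - np.cos(half_angle))

def path_to_surface(x, d):
    """Distance from points x to the sphere surface along unit directions d."""
    b = np.einsum("ij,ij->i", x, d)
    return -b + np.sqrt(b**2 + R**2 - np.einsum("ij,ij->i", x, x))

def sample_isotropic(n):
    mu = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - mu**2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), mu])

# Emission positions uniform inside the sphere, emission directions isotropic.
pos = sample_isotropic(N) * (R * rng.uniform(0.0, 1.0, N) ** (1.0 / 3.0))[:, None]
dirs = sample_isotropic(N)

# Direct counting (DCT): only packets emitted within the cone contribute.
w_escape = np.exp(-kappa * path_to_surface(pos, dirs))
in_cone = dirs @ view > np.cos(half_angle)
dct = w_escape * in_cone / d_omega

# Peel-off-style estimator: every packet contributes toward the viewing direction.
peel = np.exp(-kappa * path_to_surface(pos, np.tile(view, (N, 1)))) / (4.0 * np.pi)

for name, est in [("DCT", dct), ("peel-off", peel)]:
    mean, err = est.mean(), est.std() / np.sqrt(N)
    print(f"{name:9s}: dL/dOmega = {mean:.4e} +/- {err:.1e}  (S/N = {mean/err:.0f})")
```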
As a promising dark matter candidate, primordial black holes (PBHs) lighter than $\sim10^{-18}M_{\odot}$ are expected to have evaporated by today through Hawking radiation. This scenario is challenged by the memory burden effect, which suggests that the evaporation of black holes may slow down significantly after they have emitted about half of their initial mass. We explore the astrophysical implications of the memory burden effect for the present-day PBH abundance and the possibility that PBHs lighter than $\sim10^{-18}M_{\odot}$ persist as dark matter. Our analysis utilizes current LIGO-Virgo-KAGRA data to constrain the primordial power spectrum and thereby infer the PBH abundance. We find no evidence of the scalar-induced gravitational waves that would have accompanied the formation of the PBHs. We therefore place an upper limit on the primordial power spectrum and on the PBH abundance, $f_{\mathrm{pbh}}\simeq0.3$ for PBHs with masses $\sim10^{-24}M_{\odot}$. Furthermore, we expect that next-generation gravitational wave detectors, such as the Einstein Telescope and the Cosmic Explorer, will provide even more stringent constraints. Our results indicate that future detectors could reach sensitivities that would rule out PBHs as dark matter within $\sim[10^{-29}M_{\odot},10^{-19}M_{\odot}]$ in the event of a null detection of scalar-induced gravitational waves.
Polarized synchrotron emission is a fundamental process in high-energy astrophysics, particularly in the environments around black holes and pulsars. Accurate modeling of this emission requires precise computation of the emission, absorption, rotation, and conversion coefficients, which are critical for radiative transfer simulations. Traditionally, these coefficients are derived using fit functions based on precomputed ground truth values. However, these fit functions often lack accuracy, particularly in specific plasma conditions not well represented in the datasets used to generate them. In this work, we introduce ${\tt MLody}$, a deep neural network designed to compute polarized synchrotron coefficients with high accuracy across a wide range of plasma parameters. We demonstrate ${\tt MLody}$'s capabilities by integrating it with a radiative transfer code to generate synthetic polarized synchrotron images for an accreting black hole simulation. Our results reveal significant differences, up to a factor of two, in both linear and circular polarization compared to traditional methods. These differences could have important implications for parameter estimation in Event Horizon Telescope observations, suggesting that ${\tt MLody}$ could enhance the accuracy of future astrophysical analyses.
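As a sketch of what such an emulator might look like (a hypothetical architecture and input set; the actual ${\tt MLody}$ design and training data are described in the paper, not reproduced here), a small fully connected network could map a handful of plasma parameters to the transfer coefficients that are non-zero in the Stokes frame aligned with the magnetic field. The example below assumes PyTorch:

```python
import torch
from torch import nn

# Hypothetical inputs: log electron temperature, log magnetic field strength,
# pitch angle, log frequency. Outputs: the eight coefficients that are non-zero
# in the field-aligned Stokes frame (j_I, j_Q, j_V, alpha_I, alpha_Q, alpha_V,
# rho_Q, rho_V). These choices are illustrative assumptions.
class SynchrotronNet(nn.Module):
    def __init__(self, n_in: int = 4, n_out: int = 8, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, n_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Raw network outputs; since the coefficients span many orders of
        # magnitude, one might instead predict log-magnitudes and exponentiate
        # downstream (an assumption, not necessarily MLody's actual choice).
        return self.net(x)

model = SynchrotronNet()
params = torch.tensor([[10.0, 1.5, 0.8, 11.0]])  # toy, unnormalized example input
coeffs = model(params)
print(coeffs.shape)  # torch.Size([1, 8])
```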