We address the issue of which broad set of initial conditions for the planet Jupiter best matches the current presence of a ``fuzzy core'' of heavy elements, while at the same time comporting with measured parameters such as its effective temperature, atmospheric helium abundance, radius, and atmospheric metallicity. Our focus is on the class of fuzzy cores that can survive convective mixing to the present day and on the unique challenges of an inhomogeneous Jupiter with stably stratified regions now demanded by the \textit{Juno} gravity data. Hence, using the new code \texttt{APPLE}, we attempt to put a non-adiabatic Jupiter into an evolutionary context. This requires not only a mass density model, the major relevant byproduct of the \textit{Juno} data, but also a thermal model that is subject to interior heat transport, a realistic atmospheric flux boundary, a helium rain algorithm, and the latest equation of state. The result is a good fit to most major thermal, compositional, and structural constraints that still preserves a fuzzy core and should inform future, more detailed models of the current Jupiter in the context of its evolution from birth.
A transiting planet invites us to measure its size, mass, orbital parameters, atmospheric composition, and other characteristics. But the invitation can only be accepted if the host star is bright enough for precise measurements of its flux and spectrum. NASA's Transiting Exoplanet Survey Satellite (TESS) is dedicated to finding such favorable systems. Operating from a 13.7-day elliptical orbit around the Earth, TESS uses four 10.5 cm telescopes to capture optical images of a combined 24 × 96 degree field of view. By shifting the field of view every 27 days, TESS can survey most of the sky every few years. In its first six years, TESS has identified approximately 7,000 planet candidates, with several hundred confirmed as planets. Mass measurements of these planets allow astronomers to differentiate between rocky "super-Earths" and gas-rich or volatile-rich "mini-Neptunes," while observations with the James Webb Space Telescope are revealing the secrets of their atmospheres. TESS has discovered planets orbiting a wide range of stars, including young stars, low-mass stars, binary stars, and even a white dwarf star. Beyond planet detection, TESS probes the optical variability of stars and a diverse array of other astronomical objects, including asteroids, comets, supernovae, and active galactic nuclei.
We deploy the new Arkenstone galactic wind model in cosmological simulations for the first time, allowing us to robustly resolve the evolution and impact of high specific energy winds. In a (25$\,h^{-1}\,$Mpc)$^3$ box we perform a set of numerical experiments that systematically vary the mass and energy loadings of such winds, finding that their energy content is the key parameter controlling the stellar-to-dark-matter mass ratio. Increasing the mass loading, at fixed energy, actually results in mildly enhanced star formation, counter to prevailing wisdom but in agreement with recent analytic models. Of the simple parametrisations that we test, we find that an energy loading that scales inversely with halo mass best matches a wide range of observations, and can do so with mass loadings drastically lower than those in most previous cosmological simulations. In this scenario, much less material is ejected from the interstellar medium. Instead, winds both heat gas in the circumgalactic medium, slowing infall onto the galaxy, and also drive shocks beyond the virial radius, preventing accretion onto the halo in the first place. We have not yet tied the mass and energy loadings to high-resolution simulations (a key goal of the Learning the Universe collaboration); however, we can already report that a much lower fraction of the available supernova energy is needed in preventative galaxy regulation than required by ejective wind feedback models such as IllustrisTNG.
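To make the favored scaling concrete, the following is a minimal sketch, in Python, of an energy loading that falls inversely with halo mass. It is not the Arkenstone implementation: the pivot mass, normalization, cap, and supernova energy budget below are hypothetical placeholders chosen only for illustration.
\begin{verbatim}
# Minimal sketch (not the Arkenstone model itself): an energy loading that
# scales inversely with halo mass, eta_E(M_h) ~ M_h^-1.  The pivot mass,
# normalization, cap, and SN energy budget are assumed placeholder values.

E_SN_PER_MSUN = 1e49  # erg of available SN energy per Msun of stars formed (assumed)

def energy_loading(m_halo, eta_pivot=1.0, m_pivot=1e12, eta_max=10.0):
    """Energy loading factor eta_E as a function of halo mass [Msun]."""
    return min(eta_max, eta_pivot * (m_halo / m_pivot) ** -1)

def wind_energy_rate(sfr, m_halo):
    """Energy injected into the wind [erg/yr] for a star formation rate [Msun/yr]."""
    return energy_loading(m_halo) * E_SN_PER_MSUN * sfr

if __name__ == "__main__":
    for m_halo in (1e10, 1e11, 1e12):
        print(f"M_h = {m_halo:.0e} Msun -> eta_E = {energy_loading(m_halo):.2f}")
\end{verbatim}
In this toy form, low-mass haloes receive much higher specific energy winds than massive ones, which is the sense of the scaling tested in the simulations; the actual functional form and normalization would be calibrated, not fixed by hand as here.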
At frequencies below 1\,Hz, fluctuations in atmospheric emission in the Chajnantor region in northern Chile are the primary source of interference for bolometric millimeter-wave observations. This paper focuses on the statistics of these fluctuations using measurements from the Atacama Cosmology Telescope (ACT) and the Atacama Pathfinder Experiment (APEX) water vapor radiometer. After introducing a method for separating atmospheric effects from other systematic effects, we present a direct measurement of the temporal outer scale of turbulence of $\tau_0\approx50$\,s, corresponding to a spatial scale of $L_0\approx500$\,m. At smaller scales, the fluctuations are well described by the Kolmogorov 2/3 power law until, at yet smaller scales, the effects of beam smearing become important. As a part of this study, we present measurements of the atmosphere by the APEX radiometer over 20 years, focused on fluctuations in precipitable water vapor (PWV). We find that the 30-minute mean of the total PWV is not in general a robust estimator of the level of fluctuations. We show that the microwave frequency spectrum of these fluctuations is in good agreement with predictions by the \texttt{am} code for bands above 90~GHz. We then show that the variance of fluctuations in ACT's mm-wave bands correlates with the variance of fluctuations in PWV measured by APEX, even though the observatories are 6\,km apart and observe different lines of sight. We find that ACT's atmosphere-determined optical efficiencies are consistent with previous planet-based results.
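As an illustration of the quantities involved, the Python sketch below (not the ACT/APEX analysis pipeline) estimates a temporal structure function from a regularly sampled time stream and fits a power law in the inertial range below an outer scale of order $\tau_0\approx50$\,s, where Kolmogorov turbulence predicts a slope of 2/3. The synthetic time stream, sampling rate, and lag grid are placeholders.
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the ACT/APEX pipeline): estimate the temporal
# structure function D(tau) of an atmospheric time stream and fit the
# power-law slope in the inertial range below an assumed outer scale.

def structure_function(x, dt, lags):
    """D(tau) = <[x(t+tau) - x(t)]^2> for a regularly sampled series x."""
    return np.array([np.mean((x[k:] - x[:-k]) ** 2) for k in lags]), lags * dt

rng = np.random.default_rng(0)
dt = 0.1                                  # sample spacing [s] (assumed)
x = np.cumsum(rng.normal(size=20000))     # stand-in for a detector time stream

lags = np.unique(np.logspace(0, 3, 30).astype(int))
D, tau = structure_function(x, dt, lags)

# Fit D ~ tau^beta for tau well below the outer scale (taken here as 50 s);
# Kolmogorov turbulence predicts beta = 2/3 in this inertial range.
inertial = tau < 50.0
beta = np.polyfit(np.log(tau[inertial]), np.log(D[inertial]), 1)[0]
print(f"fitted slope beta = {beta:.2f} (Kolmogorov prediction: 0.67)")
\end{verbatim}
With real data the outer scale would appear as a flattening of $D(\tau)$ at large lags, and beam smearing as a departure from the power law at the smallest lags; the synthetic random walk above will not reproduce either feature.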
Based on high-resolution and high signal-to-noise spectra, we derived the chemical abundances of 20 elements for 20 barium (Ba-) stars. For the first time, the detailed abundances of four sample stars, namely HD 92482, HD 150430, HD 151101, and HD 177304, have been analyzed. Additionally, the Ba abundance has been measured from high-resolution spectra for the first time for six of the other 16 sample stars. Based on the [s/Fe] ratios, the Ba-unknown star HD 115927 can be classified as a strong Ba-star, while the Ba-likely star HD 160538 can be categorized as a mild Ba-star. Consequently, our sample comprises three strong and 17 mild Ba-stars. The light odd-Z metal elements and Fe-peak elements exhibit near-solar abundances. The [$\alpha$/Fe] ratios show a decreasing trend with increasing metallicity. Moreover, the abundances of n-capture elements show significant enhancements to varying degrees. Using a threshold of $d_{s} = 0.6$ for the signed distance to the solar r-process abundance pattern, we find that all of our sample stars are normal Ba-stars, indicating that the enhancements of s-process elements should be attributed to material transfer from their companions. We compare the observed n-capture patterns of the sample stars with FRUITY models and estimate the masses of the thermally pulsing asymptotic giant branch stars that previously contaminated the Ba-stars. Models with low masses can successfully explain the observations. From a kinematic point of view, we note that most of our sample stars belong to the thin disk, while HD 130255 may be associated with the thick disk.
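For illustration only, one possible way to compute a signed distance $d_s$ between a star's heavy-element abundances and the scaled solar r-process pattern is sketched below in Python. The precise definition and element set behind the $d_s = 0.6$ threshold used in the text may differ from this formulation, and all abundance values in the example are hypothetical.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: one possible formulation of a signed distance d_s
# between a star's heavy-element abundances and the solar r-process pattern,
# scaled to a reference element (Eu).  Element list and numbers are invented.

def signed_distance(star_logeps, solar_r_logeps, scale_to="Eu"):
    """Mean signed residual of log eps(X) after scaling the solar r-process
    pattern to the star's reference element (here Eu)."""
    offset = star_logeps[scale_to] - solar_r_logeps[scale_to]
    residuals = [star_logeps[el] - (solar_r_logeps[el] + offset)
                 for el in star_logeps if el != scale_to]
    return np.mean(residuals)

# Hypothetical example: an s-process-enriched star lies well above the
# scaled r-process pattern, giving a large positive d_s.
star    = {"Ba": 2.8, "La": 1.9, "Ce": 2.3, "Nd": 2.1, "Eu": 0.6}
solar_r = {"Ba": 1.2, "La": 0.4, "Ce": 0.8, "Nd": 1.0, "Eu": 0.5}
ds = signed_distance(star, solar_r)
print(f"d_s = {ds:.2f}")   # d_s above the adopted threshold -> normal Ba-star
\end{verbatim}
Under such a convention, a large positive $d_s$ reflects an excess of s-process material relative to the r-process pattern, consistent with pollution by a former AGB companion rather than an intrinsic r-process enhancement.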
Modeling of microlensing events poses computational challenges, both in the resolution of the lens equation and in the high dimensionality of the parameter space. In particular, numerical noise represents a severe limitation to fast and efficient calculations of microlensing by multiple systems, which are of particular interest in exoplanetary searches. We present a new public code, built on our previous experience with binary lenses, that introduces three new algorithms for the computation of magnification and astrometry in multiple microlensing. Besides the classical polynomial resolution, we introduce a multi-polynomial approach in which each root is calculated in a frame centered on the closest lens. In addition, we propose a new algorithm based on a modified Newton-Raphson method applied to the original lens equation without any numerical manipulation. These new algorithms are more accurate and robust than the traditional single-polynomial approach, at a modest computational cost, opening the way to massive studies of multiple lenses. The new algorithms can be used in a complementary way to optimize efficiency and robustness.
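The following is a minimal Python sketch, not the new code or its modified scheme, of how a basic Newton-Raphson iteration can be applied directly to the complex lens equation for $N$ point masses without reduction to a polynomial. Because the equation involves $\bar{z}$, the update uses both Wirtinger derivatives. The binary-lens configuration and starting guess are hypothetical, and convergence depends on the initial guess.
\begin{verbatim}
import numpy as np

# Minimal sketch (not the published code): Newton-Raphson applied directly to
# the complex lens equation for N point masses,
#     zeta = z - sum_i m_i / (conj(z) - conj(z_i)),
# solved without reduction to a polynomial.  Since the equation depends on
# conj(z), the Newton step is built from both Wirtinger derivatives.

def newton_image(zeta, masses, positions, z0, tol=1e-12, max_iter=100):
    """Refine one image position z for source position zeta (all complex)."""
    z = z0
    for _ in range(max_iter):
        f = z - np.sum(masses / (np.conj(z) - np.conj(positions))) - zeta
        # df/dz = 1,  df/dconj(z) = sum_i m_i / (conj(z) - conj(z_i))^2
        f_zbar = np.sum(masses / (np.conj(z) - np.conj(positions)) ** 2)
        # Solve dz + f_zbar * conj(dz) = -f for the Newton step dz.
        dz = (f_zbar * np.conj(f) - f) / (1.0 - abs(f_zbar) ** 2)
        z += dz
        if abs(dz) < tol:
            break
    return z

# Hypothetical equal-mass binary lens, starting from a guess near one lens.
masses = np.array([0.5, 0.5])
positions = np.array([-0.6 + 0j, 0.6 + 0j])
print(newton_image(zeta=0.1 + 0.05j, masses=masses, positions=positions, z0=0.9 + 0.1j))
\end{verbatim}
In practice, starting guesses would be seeded systematically (for example from the images of a nearby source position), and safeguards would be needed near critical curves where the step denominator approaches zero; those refinements are beyond this illustration.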