Various galaxy merger detection methods have been applied to diverse datasets. However, it is difficult to understand how they compare. We aim to benchmark the relative performance of machine learning (ML) merger detection methods. We explore six leading ML methods using three main datasets. The first one (the training data) consists of mock observations from the IllustrisTNG simulations and allows us to quantify the performance metrics of the detection methods. The second one consists of mock observations from the Horizon-AGN simulations, introduced to evaluate the performance of classifiers trained on different but comparable data. The third one consists of real observations from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) survey. For the binary classification task (mergers vs. non-mergers), all methods perform reasonably well in the domain of the training data. At $0.1<z<0.3$, precision and recall range between $\sim$70\% and 80\%, both of which decrease with increasing $z$ as expected (by $\sim$5\% for precision and $\sim$10\% for recall at $0.76<z<1.0$). When transferred to a different domain, the precision of all classifiers is only slightly reduced, but the recall is significantly worse (by $\sim$20-40\% depending on the method). Zoobot offers the best overall performance in terms of precision and F1 score. When applied to real HSC observations, all methods agree well with visual labels of clear mergers but can differ by more than an order of magnitude in predicting the overall fraction of major mergers. For the multi-class classification task to distinguish pre-, post- and non-mergers, none of the methods performs well, which could be partly due to limitations in the resolution and depth of the data. With the advent of better quality data (e.g. JWST and Euclid), it is important to improve our ability to detect mergers and distinguish between merger stages.
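The precision, recall, and F1 values quoted above are the standard binary-classification metrics with mergers treated as the positive class. The snippet below is a minimal illustrative sketch of how such scores are computed, not the benchmark pipeline itself; the labels and predictions are hypothetical placeholders.

```python
# Minimal sketch: precision, recall, and F1 for a binary
# merger vs. non-merger classification (merger = positive class).
# The labels and predictions below are hypothetical placeholders.
from sklearn.metrics import precision_score, recall_score, f1_score

# 1 = merger, 0 = non-merger (e.g. labels from the simulation merger trees)
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
# Hypothetical classifier output, e.g. thresholded network scores
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall

print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")
```

Domain transfer (training on IllustrisTNG mocks, evaluating on Horizon-AGN mocks) would correspond to computing the same scores with `y_true` and `y_pred` drawn from the second dataset.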