Interpretable Machine Learning for Finding Intermediate-mass Black Holes

The Astrophysical Journal, 2024-04, Vol. 965 (1), p. 89 [Peer Reviewed Journal]

© 2024. The Author(s). Published by the American Astronomical Society under the Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/).

  • Title:
    Interpretable Machine Learning for Finding Intermediate-mass Black Holes
  • Author: Pasquato, Mario ; Trevisan, Piero ; Askar, Abbas ; Lemos, Pablo ; Carenini, Gaia ; Mapelli, Michela ; Hezaveh, Yashar
  • Subjects: Astrophysical black holes ; Black holes ; Classifiers ; Globular clusters ; Initial conditions ; Intermediate-mass black holes ; Machine learning ; Physics ; Predictions ; Risk reduction ; Simulation ; Training
  • Is Part Of: The Astrophysical Journal, 2024-04, Vol. 965 (1), p. 89
  • Description: Definitive evidence that globular clusters (GCs) host intermediate-mass black holes (IMBHs) is elusive. Machine-learning (ML) models trained on GC simulations can in principle predict IMBH host candidates based on observable features. This approach has two limitations: first, an accurate ML model is expected to be a black box due to its complexity; second, despite our efforts to simulate GCs realistically, the simulation physics or initial conditions may fail to reflect reality fully. Our training data may therefore be biased, leading to a failure to generalize to observational data. Both the first issue (explainability and interpretability) and the second (out-of-distribution generalization and fairness) are active areas of research in ML. Here we employ techniques from these fields to address them: we use the anchors method to explain an Extreme Gradient Boosting (XGBoost) classifier, and we independently train a natively interpretable model using Certifiably Optimal RulE ListS (CORELS). The resulting model has a clear physical meaning but loses some performance with respect to XGBoost. We evaluate potential candidates in real data based not only on classifier predictions but also on their similarity to the training data, measured by the likelihood under a kernel density estimation (KDE) model. This quantifies the realism of our simulated data and mitigates the risk that our models produce biased predictions by working in extrapolation. We apply our classifiers to real GCs, obtaining a predicted classification, a measure of the confidence of the prediction, an out-of-distribution flag, a local rule explaining the prediction of XGBoost, and a global rule from CORELS. (Illustrative code sketches of these three components follow this record.)
  • Publisher: Philadelphia: IOP Publishing
  • Language: English
  • Identifier: ISSN: 0004-637X
    EISSN: 1538-4357
    DOI: 10.3847/1538-4357/ad2261
  • Source: Directory of Open Access Journals
    Alma/SFX Local Collection
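
The abstract pairs an XGBoost classifier with the anchors method for local, per-cluster explanations. Below is a minimal sketch of that pairing using the xgboost and anchor-exp packages; the feature names, synthetic training data, and 0.95 precision threshold are illustrative assumptions, not details taken from the paper.

```python
# Sketch: explain an XGBoost IMBH-host classifier with the anchors method.
# Feature names and synthetic data are illustrative stand-ins only.
import numpy as np
import xgboost as xgb
from anchor import anchor_tabular  # pip install anchor-exp

rng = np.random.default_rng(0)
feature_names = ["core_radius", "central_velocity_dispersion", "total_mass"]
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 1] > 0.5).astype(int)  # stand-in labels: 1 = IMBH host

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=["no IMBH", "IMBH host"],
    feature_names=feature_names,
    train_data=X_train,
)
# An "anchor" is a local IF-rule that, when satisfied, pins down the
# classifier's prediction for this one cluster with high precision.
exp = explainer.explain_instance(X_train[0], model.predict, threshold=0.95)
print("IF", " AND ".join(exp.names()))
print("precision: %.2f, coverage: %.2f" % (exp.precision(), exp.coverage()))
```

The printed rule is the kind of local explanation the record mentions: it holds only near the explained cluster, unlike a global model.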
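The paper's second, natively interpretable model is a CORELS rule list. Here is a sketch assuming the pycorels package (import name corels); CORELS expects binary features, so the continuous stand-in observables are thresholded first, and the feature names and cut values are hypothetical.

```python
# Sketch: a natively interpretable rule list with CORELS.
# CORELS operates on binary features; thresholds below are placeholders.
import numpy as np
from corels import CorelsClassifier  # pip install corels

rng = np.random.default_rng(0)
raw = rng.normal(size=(1000, 2))
y = (raw[:, 0] > 0.3).astype(int)  # stand-in labels: 1 = IMBH host

# Binarized features (hypothetical names and cut values).
X_bin = np.column_stack([
    raw[:, 0] > 0.0,   # "large core radius"
    raw[:, 1] > 0.0,   # "high velocity dispersion"
]).astype(np.uint8)
features = ["large core_radius", "high velocity_dispersion"]

clf = CorelsClassifier(max_card=1, n_iter=10000)
clf.fit(X_bin, y, features=features, prediction_name="IMBH host")
print(clf.rl())  # the certifiably optimal if/else rule list
print("train accuracy:", clf.score(X_bin, y))
```

The learned if/else list is globally readable, which is the trade-off the abstract notes: clear physical meaning at some cost in accuracy relative to XGBoost.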
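Finally, the out-of-distribution flag comes from the likelihood under a KDE model fit to the simulated training data. A minimal sketch with scikit-learn's KernelDensity follows; the abstract does not specify the implementation, bandwidth, or cutoff, so the Gaussian kernel, bandwidth of 0.5, and 1st-percentile threshold are placeholder choices.

```python
# Sketch: flag observed clusters that fall outside the training
# distribution via kernel density estimation.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_sim = rng.normal(size=(1000, 3))         # simulated GC features (stand-in)
X_obs = rng.normal(loc=0.5, size=(10, 3))  # observed GC features (stand-in)

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_sim)

# Calibrate a cutoff on the training set itself: anything less likely
# than the bulk of the simulations is treated as out of distribution.
train_loglik = kde.score_samples(X_sim)
cutoff = np.percentile(train_loglik, 1.0)

ood_flag = kde.score_samples(X_obs) < cutoff
print("out-of-distribution:", ood_flag)
```

Real clusters flagged this way would be classified in extrapolation, so their predictions carry the bias risk the abstract warns about.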
