Annie B. Friedrich
May 5, 2024
Rethinking explainability: toward a postphenomenology of black-box artificial intelligence in medicine
In recent years, increasingly advanced artificial intelligence (AI), and in particular machine learning, has shown great promise as a tool in various healthcare contexts. Yet as machine learning in medicine has become more useful and more widely adopted, concerns have arisen about the “black-box” nature of some of these AI models, or the inability to understand—and explain—the inner workings of the technology. Some critics argue that AI algorithms must be explainable to be responsibly used in the clinical encounter, while supporters of AI dismiss the importance of explainability and instead highlight the many benefits the application of this technology could have for medicine. However, this dichotomy fails to consider the particular ways in which machine learning technologies mediate relations in the clinical encounter, and in doing so, makes explainability more of a problem than it actually is. We argue that postphenomenology is a highly useful theoretical lens through which to examine black-box AI, because it helps us better understand the particular mediating effects this type of technology brings to clinical encounters and moves beyond the explainability stalemate. Using a postphenomenological approach, we argue that explainability is more of a concern for physicians than it is for patients, and that a lack of explainability does not introduce a novel concern to the physician–patient encounter. Explainability is just one feature of technological mediation and need not be the central concern on which the use of black-box AI hinges.
Notes
In this paper, we focus our analysis mainly on physician use of black-box AI, as physicians most often deploy AI models and use particular models for diagnosis and/or treatment. Yet we recognize that other clinicians may also grapple with the use of AI in their work, and thus our analysis can be applied to healthcare workers in other roles as well.
While we agree with much of Kiran’s analysis, it is worth questioning whether this aspect of his conclusion is justified. If technology completely frames and constrains what we see, as Kiran says, are we free to choose the extent to which we are constrained? It would distract from our thesis to address this question in full here, but we worry that Kiran’s suggestion does not take seriously the extent to which technology determines our gaze.
This can lead to widespread concerns about bias and discrimination, discussed further below.
Even though patients do not often request a robust explanation of the medical technologies routinely used in their care, there remains a normative question of whether they should, especially given certain features, such as potential algorithmic bias. While this is an interesting question worthy of further study, inquiring into the individual responsibilities of patients is beyond the scope of this paper.
This also raises legal and ethical concerns of accountability and liability at the level of the healthcare system. Who is to blame when an algorithm turns out to be flawed or systematically prescribes harmful treatments: the individual physician? The hospital? The firm that designed the AI model? (Grote & Berens, 2020).
While Ihde also introduces the epistemological magnification-reduction effects of technological mediation, we will focus here on Kiran’s three dimensions of technological mediation (enabling-constraining, revealing-concealing, involving-alienating), as these build sufficiently on Ihde and include elements of magnification-reduction throughout. In general, black-box AI magnifies some parameters and necessarily reduces others, but the specific data points being magnified and reduced are context-specific and thus warrant unique analysis for individual AI algorithms.
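To make the magnification-reduction point concrete, the sketch below (our illustration, not drawn from the paper) trains an opaque model on synthetic data and uses scikit-learn’s permutation importance to surface which inputs the model leans on heavily and which it effectively ignores. The dataset and model choice are arbitrary assumptions for illustration only; the point is that even for a black box, post hoc tools can reveal that some parameters are “magnified” while others are reduced, and that which ones is an empirical, algorithm-specific question.

```python
# Illustrative sketch (not from the paper): a "black-box" model that
# magnifies some input parameters and reduces others. Permutation
# importance makes this asymmetry visible after the fact, even when
# the model's inner workings remain opaque.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical data: 10 features, only 3 informative.
X, y = make_classification(
    n_samples=1000, n_features=10, n_informative=3,
    n_redundant=0, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Which parameters does the model "magnify"? Shuffle each feature and
# measure how much held-out accuracy drops when it is destroyed.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0,
)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Run on data like this, a few features dominate the ranking while most sit near zero, a small-scale analogue of the magnification-reduction effect described above.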
This also highlights the multistability of black-box AI, or the various ways in which different users can engage with the technology. What feels very important to physicians may not actually be important to patients, and vice versa.