May 20, 2024
AI in Health Care

New Framework Proposes Ethical and Practical Guidance for AI in Health Care

Artificial intelligence (AI) tools have the potential to greatly improve patient care, but their implementation and evaluation in health care have been inconsistent due to the lack of a comprehensive framework. In a new article published in the journal Patterns, researchers from Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto propose a framework that considers not only the properties of AI tools but also the values and systems surrounding their use.

According to Alex John London, K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon and co-author of the article, current regulatory guidelines and institutional approaches focus too narrowly on the performance of AI tools, neglecting the necessary knowledge, practices, and procedures required to integrate these tools into the larger social systems of medical practice. London emphasizes that AI tools are not neutral; they reflect our values and are influenced by the people, processes, and environments in which they are utilized.

London and his co-authors advocate for a conceptual shift in which AI tools are viewed as part of a larger intervention ensemble – a collection of knowledge, practices, and procedures necessary for delivering patient care. By treating AI tools as sociotechnical systems, the authors’ proposed framework aims to promote the responsible integration of AI systems into health care.

Unlike previous work in this area, which primarily described the interaction between AI systems and human systems, the framework proposed by London and his colleagues offers proactive guidance for designers, funders, and users. It provides insight into how to integrate AI systems into clinical workflows so as to maximize their potential in patient care. The framework can also inform regulation and institutional oversight, as well as the ethical and responsible evaluation, appraisal, and use of AI tools.

To illustrate the practical application of their framework, the authors provide an example using AI systems developed to diagnose more-than-mild diabetic retinopathy. They highlight that only a small number of models evaluated through clinical trials have demonstrated a net benefit. Melissa McCradden, a bioethicist at The Hospital for Sick Children and co-author of the article, emphasizes that the proposed framework brings precision to evaluation and can support regulatory bodies in determining the evidence needed to oversee AI systems.

Overall, the proposed framework offers much-needed guidance for the integration and evaluation of AI tools in health care. By considering not only the technical aspects of AI but also the ethical values and social systems surrounding its use, the framework can help health care organizations ensure the responsible use of AI tools in improving patient care. As reliance on AI in health care grows, this framework sets a crucial precedent for the development and implementation of AI technology in the medical field.
