ASCO Sets Six Guiding Principles for AI in Oncology

May 31, 2024

The American Society of Clinical Oncology (ASCO) has released “Principles for the Responsible Use of Artificial Intelligence in Oncology” to guide the Society’s consideration of all aspects of artificial intelligence (AI). With this manuscript, ASCO joins colleagues across medicine in offering principles to be applied in the development and implementation of AI. The principles are offered as a framework to help the oncology community use AI safely, to the benefit of patients and the clinicians who care for them.

“As we enter a new era of discovery in cancer care and research fueled and supported by AI, ASCO understands the potential for this new technology to provide global benefits but is also aware of the need for thoughtful deployment and monitoring,” according to the principles document. “ASCO will continue to investigate the impact of AI in oncology with ongoing research and deeper analysis of its role in cancer care. In the coming years, we expect to learn a great deal about how AI will change our health care system in both negative and positive ways. ASCO will continue to follow these developments closely and analyze how new lessons learned can be applied to future policy development.”

The following six principles will guide ASCO’s consideration of all aspects of AI:

  1. Transparency – AI tools and applications should be transparent throughout their lifecycle.
  2. Informed Stakeholders – Patients and clinicians should be aware when AI is used in clinical decision-making and patient care.
  3. Equity and Fairness – Developers and users of AI should protect against bias in AI model design and use, and should ensure equitable access to AI tools in practice.
  4. Accountability – AI systems must comply with legal, regulatory, and ethical requirements that govern the use of data. AI developers should assume responsibility for their AI systems, those systems’ decisions, and their adherence to legal, regulatory, and ethical standards.
  5. Oversight and Privacy – Decision-makers should establish institutional compliance policies that govern the use of AI, including protections that guard clinician and patient autonomy in clinical decision-making and privacy of personal health information.
  6. Human-Centered Application – Human interaction is a fundamental element of health care delivery; AI does not eliminate the need for human interaction and should not be used as a substitute for sensitive interactions that require it.

Read the full principles document.