How Sentara is using eight principles to deploy artificial intelligence safely
By Joe Evans, M.D., Vice President and Chief Health Information Officer at Sentara Health
This post is part of our Improving Health Leadership Blog, which explores Sentara’s leadership on issues affecting the health and well-being of our consumers and how we’re pioneering new ways to make health care simple, seamless, personal, and more affordable.
Since the public rollout of ChatGPT in 2022, artificial intelligence has played an increasingly important role in our daily lives and in our society. It has answered our online shopping questions, fueled facial recognition technology, automated investments, and more.
As AI has grown more powerful, it has become a foundation of many healthcare applications. Many expect this trend to accelerate. According to a recent survey, three-quarters of providers and pharmaceutical professionals expect AI-related technologies to be widespread across the industry within the next three years.
That could be good for the sector. AI has the potential to positively impact healthcare delivery and patient care. According to one study, AI tools could lead to better healthcare quality, safety, and access while cutting costs by hundreds of billions of dollars annually.
However, AI also poses clear risks in the medical field. Without human oversight, an AI-enabled clinical tool could make a dangerous and costly error, such as misdiagnosing a condition or recommending the wrong treatment.
At Sentara Health, we’re optimistic about AI’s potential to enhance health outcomes. However, we recognize that we need proper oversight of this rapidly evolving technology.
To reap the benefits of AI while avoiding the dangers, we’ve created an AI Oversight Program staffed by senior leaders from across our organization. The committee oversees AI use cases and tool development across Sentara’s integrated delivery network, which includes 12 hospitals, five free-standing emergency departments, more than 1 million health plan members, and a medical group that completes more than 2.8 million patient visits annually.
The committee’s members include our chief nursing officer, chief quality officer, chief data officer, ethics and legal representatives, and other highly qualified experts. David Torgerson, our chief analytics officer, and I co-chair the committee.
As part of our charter, the committee has devised eight AI Principles to ensure we develop and use AI solutions safely, responsibly, and within a trustworthy framework. These principles reflect our commitment to innovation, patient safety, and high-quality care:
1. Human oversight: There will always be a “human in the loop” to ensure proper human oversight of AI tools.
2. Technical robustness and safety: AI tools will not negatively impact Sentara’s information technology infrastructure or cybersecurity.
3. Privacy and data governance: AI tools will adhere to Sentara’s rigorous standards for maintaining privacy and protecting health information.
4. Transparency: AI solutions will be as transparent as possible about their inputs and the algorithms used to derive their outputs.
5. Non-discrimination and fairness: AI tools will not create or reinforce bias and will not produce different outputs based on race, ethnicity, or similar characteristics.
6. Environmental and societal well-being: AI tools will promote environmental and societal well-being. The committee will consider various factors to ensure that we develop broadly beneficial solutions.
7. Accountability: Humans will regularly monitor AI tools to ensure they produce the desired results and do not change or “drift” over time.
8. Benefit: The benefits of AI tools will outweigh the risks.
By applying these principles, our AI Committee seeks to ensure that AI tools comply with legal, regulatory, and ethical requirements while aligning with Sentara’s focus on promoting our consumers’ overall health and well-being.
One example of how Sentara is safely using AI to improve healthcare is our recent rollout of a solution that helps physicians and other medical providers create draft clinical notes.
These clinical notes, which document patient visits, add to provider workloads and can detract from patient interactions. According to the American Medical Association, physicians spend nearly two hours on documentation and administrative duties for every hour of patient care. They spend an additional hour or two nightly doing computer or clerical work.
Now, our physicians and providers can use an AI tool called Microsoft DAX Copilot to reduce this workload and improve the quality of patient visits.
After informing patients, our providers can now record a visit with the DAX Copilot smartphone app. The app, which is secure and protects patient information, uses generative AI to separate relevant from irrelevant information and can convert a conversation into a structured clinical note in seconds. The provider then reviews, edits, and approves the note.
Joshua Greenhoe, M.D., an internal medicine physician at Sentara Martha Jefferson Hospital, has been using DAX Copilot since the April rollout.
He said the tool makes him feel like a better physician and that his interactions with patients have become much more productive and satisfying.
“I would rather be an editor than the writer,” Dr. Greenhoe said. “My mental energy is reserved for the more important aspects of my work.”
Our AI Committee vetted DAX Copilot to ensure its rollout was consistent with our AI Principles. The rollout has received positive media coverage in the communities we serve, from The Virginian-Pilot to WVTF Radio IQ.
As AI evolves, we will continue to apply our AI Principles. Our goal is to ensure that every application aligns with our mission: to improve health every day.
About the author
Dr. Evans, co-chair of the AI Committee, has more than two decades of experience in internal medicine, population health, and clinical informatics. He recently discussed how Sentara is leveraging artificial intelligence to enhance patient care and operational efficiency on the MedCity Pivot Podcast. You can listen to the podcast here.