
15 Smart Questions to Ask Healthcare Artificial Intelligence Vendors

By John McCormack


The buzz surrounding artificial intelligence (AI) is deafening. Health information (HI) professionals need to cut through the noise by carefully assessing benefits and risks and then posing specific questions to vendors.

Gathering insights through such assessments can help HI professionals determine if their organizations should move forward with specific product investments, according to Nancy Robert, PhD, MBA/DSS, BSN, managing partner for Polaris Solutions, a healthcare technology consulting company based in Madison, CT. The answers can shed light on “how committed the AI systems/app provider is in terms of supporting evolving global AI standards of use, and what capabilities the vendor has to actually execute on those commitments,” she notes.

Given massive investments in global AI, “one cannot assume that all AI vendors provide the same quality of application development, implementation and support,” Robert emphasizes. 

Exploring the Benefits of AI

To ensure that an AI software assessment is on the right track, HI professionals need to remember to follow the Hippocratic Oath – and do everything possible to guarantee that AI does no harm, according to Crystal Clack, MS, RHIA, CCS, CDIP, application consultant for Microsoft.

To keep this altruism front and center, the National Academy of Medicine (NAM) has initiated the AI Code of Conduct to create a framework for developers, health systems, payors, and researchers to ensure appropriate and ethical use of AI algorithms across the AI lifecycle. When complete, this framework will be a critical tool to shape the conversations that buyers of AI technology may have with developers and sellers.

After considering the greater good, professionals can then zero in on specific benefits, notes Clack, who is based in Springfield, OR. HI professionals should seek AI tools that can “automate routine tasks, including data entry and ICD-10 coding,” says Clack.

The ability to provide such efficiency could be AI’s strongest selling point, says David Marc, PhD, CHDA, who serves as associate professor, department chair, and program director in the Health Informatics Graduate Program at The College of St. Scholastica in Duluth, MN.

“The greatest benefits are related to the work that's required for a lot of administrative repetitive tasks. There could be streamlined processes in place where AI can alleviate some of the workload and pressure regarding completing those tasks,” Marc says.
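
To make the idea concrete, here is a deliberately simplistic sketch of the kind of coding automation Clack and Marc describe. Real products rely on trained clinical language models rather than keyword lookups; the keyword map, function name, and sample note below are illustrative assumptions, not any vendor’s method.

    # Toy illustration of rule-based ICD-10 code suggestion. Production
    # systems use trained clinical NLP models, not keyword lookups; this
    # only shows the shape of the task being automated.
    ICD10_KEYWORDS = {
        "type 2 diabetes": "E11.9",       # Type 2 diabetes mellitus without complications
        "essential hypertension": "I10",
        "asthma": "J45.909",              # Unspecified asthma, uncomplicated
    }

    def suggest_codes(note_text: str) -> list[str]:
        """Return candidate ICD-10 codes whose trigger phrases appear in a note."""
        text = note_text.lower()
        return [code for phrase, code in ICD10_KEYWORDS.items() if phrase in text]

    note = "Patient with essential hypertension and type 2 diabetes, stable."
    print(suggest_codes(note))  # ['E11.9', 'I10']

Even in this toy form, the example shows why human review matters: a phrase match is not a clinical judgment, and every suggested code still needs a coder’s sign-off.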

Before making an investment in AI technology, however, HI professionals should look beyond the lure of these obvious benefits and ask questions about additional benefits as well as risks such as:

  1. Will the AI tool result in improved data analysis and insights? Some AI software systems “can analyze large and/or complex datasets quickly, providing valuable insights into patient outcomes, disease patterns, and treatment effectiveness. This can aid in evidence-based decision-making,” Clack says.

  2. Can the AI software help with diagnosis? “Some AI technologies, particularly machine learning algorithms, can assist healthcare professionals in diagnosing diseases more accurately by analyzing medical images, lab results, and patient histories,” Clack says.

  3. Will the system support personalized medicine? “AI can tailor treatment plans based on individual patient characteristics, genetics, and health history, potentially leading to more personalized and effective healthcare interventions,” Clack says.

  4. Can the solution enhance patient engagement? “AI-powered tools improve patient engagement by providing personalized health recommendations, reminders, and virtual health assistants,” Clack says.

  5. Will use of the product raise privacy and cybersecurity issues? “The use of AI in [health information] involves handling tremendous amounts of health data. If adequate security measures are not in place, there are concerns over privacy breaches and unauthorized access. As such, it’s important to determine what encryption and authentication measures are in place to protect sensitive health information and to determine if solutions comply with HIPAA [Health Insurance Portability and Accountability Act of 1996],” Clack says. (A minimal encryption example appears after this list.)

  6. Will humans provide oversight? “It is advisable to ask whether a human is involved in assessing AI-generated communication. Human oversight is crucial to help prevent inappropriate or harmful responses. Humans can identify biases, inaccuracies, and potential risks that automated systems might miss,” Clack says.

    Robert adds that “while [human intervention] may not avoid hacker attempts and harmful chat streams, it will enhance capabilities to readily identify suspect content and act as needed. Automation can play a role, but we already have examples of how reductions in human interventions impact technology platform results.”

  7. Who will be responsible for data privacy? AI can put patient information at risk by sharing data across various devices, obscuring the use of AI from users (making it difficult to understand what types of data are being generated), and leveraging machine learning to infer sensitive information from non-sensitive data, according to an article published by the Information Systems Audit and Control Association (ISACA).

    With all this risk, it is important to determine if the vendor or the healthcare organization is responsible for data protection. “Who's ultimately responsible? Is it the vendor that you purchased the product from ultimately responsible for the oversight of their product, or is it the purchaser ultimately responsible for how that product is being used?” Marc says.

  8. Are algorithms biased? To determine if algorithms are biased, HI professionals need to ensure that tools are using appropriate data. As such, it is important to determine if the AI tool is allowed appropriate privileges to access needed data and is restricted from accessing other unneeded data, according to Marc.

    However, it’s also important to remember that if AI algorithms are trained on biased datasets, they may perpetuate healthcare disparities, leading to unfair treatment and outcomes for specific demographic groups. The data is only as unbiased as the human who collects and feeds the data into the algorithm, Clack says.

    HI professionals should strive to understand the origins and types of data that AI tools use. “Understanding the data extrapolated from the algorithm will mitigate biases and ensure AI systems provide equitable outcomes, which is critical with diverse patient populations. The ability to challenge and interpret AI decisions enhances accountability and user trust,” Clack says. (A simple bias check is sketched after this list.)

  9. Is there a potential for misdiagnosis and errors? “Overreliance on AI for diagnostic decisions may result in errors, misdiagnoses, or inappropriate treatments, especially if the algorithms are not thoroughly validated and continuously monitored,” Clack notes.

    HI professionals need to understand how to validate and monitor the effectiveness and accuracy of AI algorithms through clearly defined clinical evidence and validation studies; a minimal example of such a check appears after this list.

  10. Has the lack of regulations and standards affected product development? The rapid development of AI in healthcare has outpaced regulatory frameworks and standards. Therefore, healthcare professionals need to determine if vendors have properly addressed the safety, reliability, and interoperability of AI applications, according to Marc.

    Echoing the earlier point about clinical evidence and validation studies, Clack adds: “Policies and procedures are essential to keep accountability in check.”

  11. Are there potential human-AI collaboration challenges? Integrating AI into healthcare workflows requires effective collaboration between AI systems and healthcare professionals. Issues may arise if there is a lack of trust, understanding, or clear communication between humans and AI. “AI will not fix a broken system – only adequate and detailed policies and procedures will,” Marc says.

    Healthcare organization leaders need to ensure that users know that they are talking to a chatbot, not a human, when they are using an AI system. As such, HI professionals need to ask vendors about “the measures of transparency that are being implemented to ensure that all users are aware they are interacting with an AI solution,” Marc advises.

  12. Exactly when does the product leverage AI – and when does it use other forms of automation? “Understanding when AI is actually being used is somewhat of a challenge. There's not always a lot of transparency about whether a product is leveraging AI or whether this is some other form of automation that is happening behind the scenes. And there's a lot of discussion right now regarding ethical and responsible use of AI. This transparency is one of those ethical concerns,” Marc says.

    If users do not realize that AI is making decisions, “that could create further distrust in the system as a whole … you don't want to seed mistrust. You don't want to put a barrier in place” that will ultimately interfere with patients’ willingness to work with a healthcare entity, Marc says.

  13. Will users realize they are interacting with AI? “So you want to, as an organization, be able to trust that the solution you’re implementing doesn’t deceive the person who is interfacing with the tool. The user should not believe that they are interacting with a human, when it is a chatbot,” Marc says. (A trivial disclosure sketch appears after this list.)

    In addition, Clack notes that “a provider’s office may also wish to have a written release from the patient allowing AI to be used in the exam room.”

  14. What maintenance steps are being put in place? “It’s important to understand not just how this tool is going to be implemented initially to access certain data, but also what the long-term picture looks like,” Marc says. (see sidebar)

  15. How will data be governed and maintained during and after the implementation process? “First, data sharing streams need to be considered with and without a business associate agreement (BAA) in place. HI professionals need to consider how a BAA will impact governance processes versus relationships with vendors without a BAA in place. Governance processes should be bilateral agreements between the vendor and the organization purchasing the application,” Robert says.

Key questions that can drive governance boundaries and processes include:

  • With whom will the data be shared?
  • What data is included?
  • How will the data be audited and cleaned, and who is responsible for doing so?
  • How will data be stored and secured?
  • Do data practices meet all regulatory requirements in place today, and how quickly can a vendor adopt emerging regulations?
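
Returning to question 5: “encryption and authentication measures” can be made concrete with a small example. The sketch below uses the Fernet recipe (authenticated symmetric encryption) from the widely used Python cryptography package to protect a record at rest. It illustrates the concept a vendor should be able to explain; it is not a HIPAA compliance recipe, and key management is deliberately omitted.

    # Sketch: authenticated symmetric encryption of a health record at rest,
    # using the Fernet recipe from the `cryptography` package
    # (pip install cryptography). Real deployments also need key management,
    # access controls, and audit logging.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # in practice, held in a key-management system
    cipher = Fernet(key)

    record = b'{"patient_id": "12345", "dx": "I10"}'
    token = cipher.encrypt(record)  # ciphertext is also integrity-protected
    assert cipher.decrypt(token) == record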
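
On question 8, one concrete probe for bias is to compare a model’s positive-prediction rates across demographic groups. The sketch below, in plain Python with synthetic records and invented field names, computes that gap; real bias audits use richer fairness metrics and real validation data.

    # Minimal bias probe: compare rates of positive model predictions across
    # demographic groups. Records and field names are synthetic.
    from collections import defaultdict

    def positive_rate_by_group(records):
        """records: dicts with a 'group' label and a 0/1 'prediction'."""
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            positives[r["group"]] += r["prediction"]
        return {g: positives[g] / totals[g] for g in totals}

    records = [
        {"group": "A", "prediction": 1}, {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 0}, {"group": "B", "prediction": 1},
        {"group": "B", "prediction": 0}, {"group": "B", "prediction": 0},
    ]
    rates = positive_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"gap: {gap:.2f}")  # here: A=0.67, B=0.33, gap=0.33

A large gap does not prove bias, but it tells reviewers exactly where to look.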
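
On question 9, “clearly defined clinical evidence and validation studies” boil down, at minimum, to checking a model’s output against labeled cases. A minimal sketch, with invented labels and predictions:

    # Minimal validation check: sensitivity and specificity of a diagnostic
    # model against a labeled holdout set. Labels and predictions are invented.
    def sensitivity_specificity(y_true, y_pred):
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        return tp / (tp + fn), tn / (tn + fp)

    y_true = [1, 1, 1, 0, 0, 0, 0, 1]  # ground-truth diagnoses
    y_pred = [1, 1, 0, 0, 0, 1, 0, 1]  # model output on the same cases
    sens, spec = sensitivity_specificity(y_true, y_pred)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # 0.75 each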
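
And on questions 11 through 13, the transparency measure Marc describes can be as simple as having the system identify itself at the start of every session. A trivial sketch, in which generate_reply is a stand-in for whatever model or service actually produces responses:

    # Trivial transparency wrapper: an AI disclosure precedes any model output.
    DISCLOSURE = ("You are chatting with an automated AI assistant, not a "
                  "human. You may request a staff member at any time.")

    def generate_reply(message: str) -> str:
        return f"(model response to: {message!r})"  # placeholder model call

    def chat_session(messages):
        yield DISCLOSURE                # disclosed before any model output
        for m in messages:
            yield generate_reply(m)

    for line in chat_session(["When is the clinic open?"]):
        print(line)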

Slow and Steady

While the allure of AI is strong, Robert advises healthcare organizations to take a measured approach to AI investments and implementations.

“There are an abundance of AI systems/apps being developed. Organizations must prioritize AI systems/apps and not try to eat the whole elephant all at once,” she concludes.

 

John McCormack is a Riverside, IL-based freelance writer covering healthcare information technology, policy, and clinical care issues.

 

Additional Questions about AI Implementation

Before buying an AI solution, Clack suggests that HI professionals also find out about implementation by asking questions such as:
 

  1. How does the implementation process work, and what is the expected timeline?
  2. What resources, including personnel and technology infrastructure, are required for a smooth implementation?
  3. Are there best practices or guidelines for ensuring successful implementation?
  4. How easily can the AI solution integrate with existing health information systems, electronic health records, and other relevant healthcare technologies?
  5. Are there potential challenges or considerations for integration, and how are they addressed?
  6. What support is provided for data migration and compatibility with current workflows?
  7. What training programs or resources are available for HI professionals and end-users?
  8. How is user support provided during the initial implementation phase, and what ongoing support will be available?
  9. How is the organization planning and managing the cultural and workflow changes associated with AI implementation?
  10. Are there strategies to address potential resistance to change among staff?
  11. What measures are in place to ensure the quality and accuracy of data used by the AI solution?
  12. Will there be a committee to oversee the management of data?
  13. Are there protocols for handling data discrepancies or errors identified through the AI system?
  14. What mechanisms are in place for monitoring the performance of the AI solution post-implementation? (One simple mechanism is sketched after this list.)
  15. How often will performance be assessed, and what steps will be taken to optimize the system based on feedback and outcomes?
  16. What security measures are in place to protect patient data during and after the implementation phase?
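
For items 14 and 15, ongoing monitoring can start with something as simple as comparing recent accuracy against the baseline measured at go-live and alerting when it slips. A minimal sketch; the numbers and the five-point threshold are illustrative assumptions, not recommended values:

    # Minimal post-implementation monitor: flag the model when rolling
    # accuracy drops meaningfully below its go-live baseline.
    BASELINE_ACCURACY = 0.91   # measured during the validation study at go-live
    ALERT_THRESHOLD = 0.05     # how far accuracy may slip before review

    def check_performance(recent_correct: int, recent_total: int) -> None:
        accuracy = recent_correct / recent_total
        print(f"rolling accuracy: {accuracy:.2%}")
        if BASELINE_ACCURACY - accuracy > ALERT_THRESHOLD:
            print("ALERT: performance drift -- route to the oversight committee")

    check_performance(recent_correct=412, recent_total=500)  # 82.4% -> alert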