Commentary: AI can revolutionise healthcare but several issues need to be tackled first

Singapore has been at the forefront of deploying artificial-intelligence solutions in healthcare, but key risks and uncertainties remain. The authors provide three recommendations for how healthcare organisations can effectively integrate AI tools to enhance outcomes.
Picture this: doctors teaming up with intelligent machines to revolutionise healthcare. This is not science fiction; it is the exciting reality of artificial intelligence (AI) making its mark on every element of medicine, from patient education to crafting precision medicine with genetic information.
However, from AI's enigmatic decision-making to its struggle to adapt to the complex realities of healthcare settings, this union already has its fair share of obstacles to navigate.
ADDRESSING THE HURDLES IN AI IMPLEMENTATION
In recent years, we have witnessed the growing integration of AI tools into various facets of healthcare. Local healthcare systems, like Singapore’s National University Health System, have been at the forefront of deploying AI solutions for healthcare screening, hospital operations, and clinical workflows.
While AI has a lot of potential, most healthcare AI tools are still in the early stages of development and testing.
Some models that predict well during development may struggle when faced with new data in a new context. Others, even if accurate, can be difficult to put into practice.
As such, there have been few successful deployments of AI tools in healthcare so far.
A 2020 Australian study on the use of machine learning in healthcare found that moving from testing AI to deploying it in real-world healthcare settings is a formidable challenge. Several issues remain to be ironed out, including disparities in data, how AI interacts with local conditions, and the complications of fitting it into the day-to-day work of healthcare teams.
Singapore’s Ministry of Health, in collaboration with Synapxe (formerly Integrated Health Information Systems), has introduced measures to improve the reliability of AI technology in healthcare. However, while these practices are commendable, concerns beyond model accuracy remain.
There are three key obstacles healthcare organisations should consider when implementing AI:
1. Ensuring AI Tool Transparency
Reviews of and feedback on AI systems tend to focus on testing the data and monitoring model performance. But a deeper problem lies in how the outputs of AI tools are checked for accuracy against real-world data, or what we call “ground truth”.
In a United States study, teams in a tertiary hospital tasked with checking the accuracy of medical diagnostic AI tools had trouble establishing whether what the tools deemed correct was in fact true. At times, the AI tools’ answers did not match what local experts thought.
This can happen because the AI tool has not learned from enough examples, or because not all experts agree on the criteria.

2. Contextualising AI Tools
Another big challenge is making sure AI tools fit well into how healthcare organisations work.
Bringing any new technology into the daily routine of healthcare work requires coordination between technology experts, health IT infrastructure teams, and healthcare professionals to ensure the new tools work well in that environment.
Furthermore, it takes considerable effort to ensure AI technology can work with existing systems, access the right data, and go through the necessary processes, which may involve data stored in different parts of the system.
This is potentially further complicated by the fact that different parts of the organisation have different goals.
On top of dealing with data, the AI tool needs to match the tasks and workflows of healthcare professionals. This might mean changing how things are done and even how the AI results are shown on computers or in the workplace.
Differences in how individual professionals work can also cause problems during the integration phase.
3. Enhancing AI Tool Explainability
AI models are often characterised by their inscrutability — they make decisions, but nobody really knows how. This creates a “black box” of decision-making.
The lack of transparency can cause issues when making critical medical decisions.
In a recent AI project at Ng Teng Fong General Hospital, the team, which includes one of the authors, developed a highly reliable Natural Language Processing (NLP) model for predicting sepsis.
However, they hit a roadblock in validating the key features the model relied on. Some variables were derived from doctors’ patient notes, which made it hard to explain how the model could predict whether a patient might develop sepsis.
Moreover, the model’s reliability depended on how well doctors wrote down information. This raised uncertainties about how well it would perform when applied to patient notes from other hospitals.
The problem is that explanations of how such models arrive at their conclusions are often so complex that even doctors struggle to understand them.
This puts doctors in a tough spot — they are in charge of decisions made by AI but cannot fully grasp how AI makes those decisions.
This creates an undesirable scenario where the authority of doctors eventually becomes rooted not in their knowledge, but in their role as operators of AI.
Patients then face a dilemma: How do they trust decisions made by doctors using an AI tool if those doctors are unable to fully understand how said tool works?
RECOMMENDATIONS FOR IMPLEMENTING HEALTHCARE AI
Here are three key recommendations for healthcare organisations introducing AI, each centred on one of three main relationships.
1. AI Developers with AI Evaluation Team
To start, healthcare organisations looking to integrate AI use into day-to-day operations need to form a dedicated, cross-functional AI evaluation team to assess the suitability of new AI tools for such use.
The team should include clinical innovators, data scientists, and medical informatics representatives. The role of team members is to understand and validate the chosen AI model's performance within the organisation's specific conditions.
The team's first task should be to review the AI model's reported measures, including accuracy metrics and data sources. This review helps explore the model's core assumptions and relationships.
The next step involves verifying the AI model's performance using local data and collaborating with clinical experts to cross-check ground truth labels. This process ensures the AI model operates accurately within the organisation's context.
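As a concrete illustration of this verification step, the sketch below shows how an evaluation team might compare a model’s outputs with labels adjudicated by local clinicians. It is a minimal, hypothetical example in Python; the file name, column names, and decision threshold are assumptions made for illustration, not details of any system mentioned in this article.

# A minimal, hypothetical sketch of local validation against clinician-adjudicated labels.
# The file name, column names, and 0.5 threshold are illustrative assumptions.
import pandas as pd
from sklearn.metrics import confusion_matrix, roc_auc_score

# Model outputs on local patients, alongside "ground truth" labels cross-checked by local clinicians.
df = pd.read_csv("local_validation_set.csv")  # columns: patient_id, model_score, clinician_label

y_true = df["clinician_label"]           # locally adjudicated ground truth
y_score = df["model_score"]              # the AI model's reported risk score
y_pred = (y_score >= 0.5).astype(int)    # threshold taken from the model's documentation

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Sensitivity: {tp / (tp + fn):.2f}")
print(f"Specificity: {tn / (tn + fp):.2f}")
print(f"AUROC:       {roc_auc_score(y_true, y_score):.2f}")

# Cases where the model and local experts disagree are flagged for clinical review,
# since mismatched ground truth is exactly the issue raised earlier in this article.
df[y_pred != y_true].to_csv("cases_for_clinical_review.csv", index=False)

If local sensitivity, specificity, or AUROC fall well below the figures reported by the developers, that gap is itself a finding worth escalating before the tool is deployed.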
2. AI Implementation Team with Stakeholders
The second recommendation involves integrating any AI tool into the workflows of target departments. Healthcare organisations should establish an AI integration team, structured like typical enterprise system project teams, including a steering committee, working committee, and AI implementation project teams.
The steering committee, led by senior clinicians and executives, provides leadership and direction for AI implementation. The working committee, led by AI leads, focuses on technical, clinical, and operational integration, addressing privacy, ethics, and safety concerns. AI implementation project teams are responsible for deploying the AI tool and monitoring process metrics, closely coordinating with the working committee to address issues.
3. AI Users and Patients
The final recommendation concentrates on AI users — mainly clinicians and patients directly impacted by AI-enabled healthcare processes.
One strategy is to create interpretable explanations for AI predictions using related but more easily explainable models. Additionally, allowing clinicians and users to query the conditions under which the AI model makes predictions will enhance trust.
Creating user-friendly interfaces that enable easy interpretation can boost confidence in AI-based medical decisions.
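For readers curious about what the first strategy might look like in practice, below is a minimal sketch of one common approach: training a simple, inspectable “surrogate” model to mimic a black-box model’s predictions so that clinicians can read the rules it learned. The data, feature names, and models here are synthetic and purely illustrative assumptions, not part of any system described above.

# A minimal, hypothetical sketch of a surrogate explanation model.
# The synthetic data, feature names, and choice of models are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "temperature", "lactate", "wbc_count"]  # illustrative only
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Stand-in for an opaque clinical risk model (the "black box").
black_box = GradientBoostingClassifier().fit(X, y)

# Surrogate: a shallow decision tree trained to reproduce the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Fidelity measures how faithfully the readable surrogate mirrors the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=feature_names))

The printed tree gives clinicians a set of human-readable thresholds that approximate the black box’s behaviour, while the fidelity score indicates how much of that behaviour the simpler explanation actually captures.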
CONCLUSION
As the use of AI tools in healthcare continues to advance, these recommendations can help organisations tackle implementation challenges. By following these guidelines, healthcare organisations can effectively integrate AI tools, unlocking their potential to enhance healthcare outcomes.
ABOUT THE AUTHORS:
Adrian Yeow is an associate professor at the School of Business of the Singapore University of Social Sciences. He is also an associate editor of the Journal of the Association for Information Systems and area editor for Clinical Systems and Informatics of the Health Systems Journal.
Foong Pin Sym is a senior research fellow and head of design (telehealth core) at the Saw Swee Hock School of Public Health, National University of Singapore.
The content in this article was adapted and updated with permission from Asian Management Insights, Centre for Management Practice, Singapore Management University.