Thinking on its own: AI in the NHS

Healthcare in the UK needs reform if it is to remain a high-quality national health service free at the point of care. Recently, attention has turned to the use of artificial intelligence (AI) in healthcare to help deliver an NHS fit for the future. Both the Life Sciences Industrial Strategy and the Government's review, Growing the Artificial Intelligence Industry in the UK, highlight the great potential of AI in healthcare. 

The NHS has, however, had a history of difficulties in realising the benefits of technology, as it has often "been layered on top of existing structures."  For this reason, it is crucial to understand what AI can do to help reform the NHS and the challenges that will have to be tackled to fully reap the benefits of this technology. 

What is artificial intelligence?

AI describes a set of advanced technologies that enable machines to perform highly complex tasks effectively - tasks that would require intelligence if a person were to perform them. 

Although there is "no standard definition of intelligence", if intelligence is defined as an "ability to achieve goals in a wide range of environments", AI is any man-made agent (i.e. software or robot) that exhibits intelligence. 

Potential of artificial intelligence in the NHS

With funding pressures increasing, the NHS needs reform if it is to continue delivering good-quality care. AI could be an enabler of these reforms. The Five Year Forward View provides a vision for service transformation, aiming to narrow three gaps in health provision: the health and wellbeing gap, the care and quality gap, and the efficiency and funding gap. AI has the potential to help deliver the Five Year Forward View and narrow these gaps.

AI could predict individuals or groups of individuals at risk of illness and allow the NHS to target treatment and close the health and wellbeing gap.  For example, AI could interpret information collected by wearables, such as fitness trackers, giving people greater access to knowledge about their physical condition. AI could also enable clinicians to identify individuals with health conditions who are more likely to develop certain complications. 

AI could give all health professionals and patients access to cutting-edge diagnostics and treatment tailored to individual need, reducing the care and quality gap. AI can be deployed to help clinicians keep abreast of advances, demonstrated by IBM's Watson, which could process new healthcare literature alongside patient data to aid diagnosis and treatment recommendations. AI could also improve diagnosis, for example by interpreting mammograms 30 times quicker than humans and with greater accuracy. In terms of treatment, AI is making inroads in surgery, with experimental studies illustrating how autonomous robots can perform better stitching than surgeons, and it is also being used in the treatment of common mental health conditions such as anxiety or depression. 

AI could close the efficiency and funding gap by automating tasks, connecting patients to relevant services and enabling self-care. One AI application identifies where trauma patients should be treated on arrival at hospital, depending on the severity of their injuries, ensuring trauma patients are treated in the right location. AI also promises to reduce the administrative burden, with virtual assistants supporting medical staff in completing administrative work. 

Moving forward, the NHS should consider how to embed AI to deliver a more efficient system focused on achieving better outcomes for patients in its future service transformation plans. 

Improving buy-in

For AI to support the delivery of a more efficient healthcare system that delivers better outcomes, it must overcome concerns of both the public and healthcare professionals. Professor Dame Wendy Hall recognises that "building public confidence and trust will be vital to successful development" of AI. 

Winning the hearts of healthcare professionals is also important. AI must show that it improves patient outcomes, that it is safe and that it is easy to use. The interfaces used to interact with these systems should be intuitive for staff and simplify current processes rather than complicate them. Moreover, clinicians need some degree of transparency and interpretability over the results produced by AI systems to understand how a diagnosis, prognosis or treatment plan was reached. 

Applications of AI in healthcare depend on access to individual or population datasets. These datasets, however, can be difficult to access due to a "lack of public and patient engagement" when it comes to sharing data. The third Caldicott review recognises the huge potential that could come from sharing this type of information, recommending a clear consent and opt-out model to give people a choice and increase trust. Reticence towards data sharing is also fostered by a lack of public trust in how data is handled and stored. Only 41 per cent of people trust their GP surgery to use their data appropriately, and 35 per cent trust the wider NHS in this regard. The creation of a secure and transparent environment, with clarity and visibility over who accesses data and for what purpose, will be key to overcoming these barriers.

Overcoming system challenges

Enthusiasm for AI in healthcare should be tempered at this early stage, as the NHS considers barriers to implementation. The availability of appropriate data and the certification of these systems are two of the main challenges facing the adoption of AI in healthcare. 

System challenge 1:

Getting the data right
Getting data right is crucial to increasing the adoption of AI within the NHS. This means collecting the right type of data in the right format, improving its quality and granting access to it securely. 

Data is the fuel of AI: many algorithms learn from examples found in the data used to train them. Machine learning, a subset of AI, allows computer systems to learn by analysing huge amounts of data and drawing insights from it. This requires a specific type of data environment to function. A feedback loop is necessary to learn, reinforcing positive actions and avoiding the repetition of negative ones, providing a 'virtuous circle' of data use, application and learning (see Figure 1).
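The learn-from-feedback idea described above can be sketched in a few lines of Python. The example below is a toy perceptron-style classifier, not any system in use in the NHS; the feature values and labels are entirely invented for illustration.

```python
# Toy illustration of the learn-feedback loop: a perceptron-style
# classifier that adjusts its weights whenever feedback shows a
# prediction was wrong. All numbers are made up; no real patient data.

# Each record: (features such as [age_scaled, bmi_scaled], label 1 = at risk)
training_data = [
    ([0.9, 0.8], 1),
    ([0.8, 0.9], 1),
    ([0.2, 0.1], 0),
    ([0.1, 0.3], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Score the input and return a binary at-risk prediction."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# The 'virtuous circle': apply the model, compare its output with the
# known outcome, and feed the error back into the weights.
for _ in range(20):                        # repeated passes over the data
    for features, label in training_data:
        error = label - predict(features)  # the feedback signal
        if error != 0:
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

print(predict([0.85, 0.90]))  # a high-risk-style input
print(predict([0.15, 0.20]))  # a low-risk-style input
```

The point of the sketch is the loop itself: without a steady flow of labelled examples and outcome feedback, the weights never improve, which is why the report stresses the data environment rather than the algorithm.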

Currently, the NHS does not provide the most amenable environment for this virtuous circle. In many cases, data would need to be collected in new ways. 

As highlighted by the Life Sciences Industrial Strategy, the most "important changes in healthcare will emerge with the increasing digitisation of a wide range of information." This means moving away from paper systems and developing "a 'machine-friendly' data environment". Although there have been some successes, such as Electronic Healthcare Records (EHRs) in primary care being deployed universally, the NHS still has some way to go to achieve the healthcare digitisation agenda. 

High-quality data is essential for the accuracy of AI algorithms, as it dictates the quality of the output. Improving data-collection processes, such as the timeliness of data entry and the completeness of information, is of crucial importance for AI algorithms to produce accurate results. The design of data-collection systems and their user-friendliness can have an impact on the quality of data and, with a greater focus on data visualisation, "can reveal data quality problems."

Access to NHS data and the linking of data sources can be difficult as a result of both technical barriers and legal requirements. One of the main technical barriers to linking data is the lack of interoperability of IT systems in healthcare. For example, secondary-care trusts use a range of different IT systems to collect and store information, making the process of linking data more cumbersome. The NHS recognises this and highlights that the focus of interoperability strategies should be on creating an "open environment for information sharing." 

System challenge 2:

The ethics of building AI

There are many ethical questions surrounding the application of AI in healthcare. Some concern the building of AI systems and who should bear the costs and reap the benefits; others focus on safety and the certification procedures for AI.

NHS data is a hugely valuable asset, fostering debate over who should reap the economic benefits of products that are developed as a result of patient data. The Life Sciences Industrial Strategy promotes a framework to realise the value of NHS data. If industry is to use NHS data to design AI, as it does now, the NHS should make sure that it can reap the benefits in the long term. Government should explore mutually beneficial arrangements such as profit and risk-sharing agreements. 

The regulation of AI is a thorny issue, with some believing it would stifle innovation and others arguing that attention must be paid to the fallibility and biases of these systems. Healthcare is a high-risk area, where the impact of a mistake could have profound consequences for a person's life. Public-safety and ethical concerns relating to the use of AI in the NHS should be a central concern for healthcare regulators such as NICE, the MHRA and Government. 

It is critical that people developing AI algorithms are able to prove, test and validate the accuracy and performance of their algorithms. It is not sufficient to prove that AI algorithms are "technically sound"; it is also vital to understand how they deal with "hazards that might arise unexpectedly." This highlights the importance of truly stress-testing these systems before applying them in healthcare. Interestingly, the FDA has recently assembled a team to "oversee and anticipate future developments in AI-driven medical software." The MHRA should follow in the FDA's footsteps and create such a team to provide clarity on the verification and validation process for AI systems in healthcare. 

The transparency and interpretability of AI algorithms are important for their verification and validation, as they allow for better scrutiny. This can relate to the 'technical transparency' of AI algorithms - in other words, understanding how the AI system is making sense of input data. It can also relate to the disclosure of the 'code' underpinning the algorithm. This type of transparency might be problematic in terms of commercial sensitivity or intellectual property law. However, it is important that sufficient information about the AI algorithm be given during the certification procedure so that it can be appropriately stress-tested. 

One of the ethical concerns that can arise from the use of algorithms is that the evidence they produce is inscrutable: there is a lack of knowledge about the data being used, how it has been pre-processed and how the algorithm has used it to reach its conclusion. The process of cleaning and transforming data before use involves many subjective decisions which will have an impact on the output of AI algorithms. A careless approach to AI in healthcare could further entrench healthcare inequalities by reinforcing biases found in healthcare data. Implementing methods to detect and prevent biases in machine-learning systems could help tackle challenges such as variations in healthcare outcomes. 
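A very simple first-pass bias check of the kind described above can be sketched as follows: compare how often a positive label appears for different patient groups in a training set. The group names, labels and threshold below are invented purely for illustration; real bias auditing would require far more careful statistical and clinical analysis.

```python
# Minimal sketch of a bias check on training data: compare positive-label
# rates across groups. All records and the 0.2 threshold are illustrative.
from collections import defaultdict

# Hypothetical training records: (patient_group, received_referral)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def positive_rates(records):
    """Rate of positive labels per group - a crude first look at skew."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(records)
# A large gap between groups flags the dataset for closer scrutiny
# before it is used to train a model.
flagged = abs(rates["group_a"] - rates["group_b"]) > 0.2
print(rates, flagged)
```

A check like this does not prove bias - the gap may reflect genuine clinical differences - but it identifies where a training dataset needs to be investigated before an algorithm learns from it.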

Given the current state of the technology, AI applications in medical decision-making are described as decision-support tools, not agents making decisions for people. This means that, currently, accountability and legal liability remain on the doctor's shoulders. Nevertheless, it is important to be aware that clinical staff can be influenced by a machine's recommendation. Clear guidelines should be established as to how medical staff are to interact with AI tools. 

Looking ahead

AI presents a great opportunity to help the NHS deliver its service transformation plans. Nevertheless, the NHS "has a long way to go before AI can be effectively leveraged." Buy-in from patients and healthcare professionals needs to improve, and barriers to implementation need to be overcome. AI has the potential to make processes within the healthcare system more efficient and reduce costs. The NHS must consider gradually embedding this technology in future service transformation plans. 


Recommendation 1: NHS Digital and the 44 Sustainability and Transformation Partnerships should consider producing reviews outlining how AI could be appropriately and gradually integrated to deliver service transformation and better outcomes for patients at a local level. Caution should be taken when embedding AI within service transformation plans: it should not be regarded as a tool that will decide what objectives or outcomes should be reached. AI is an enabler, not the vision.

Recommendation 2: NHS England and the National Institute for Health and Care Excellence should set out a clear framework for the procurement of AI systems to ensure that complex and unintuitive products are not purchased, as they could hamper service transformation and become burdensome for healthcare professionals.

Recommendation 3: The NHS should pursue its efforts to fully digitise its data and ensure that moving forward all data is generated in machine-readable format.

Recommendation 4: The National Institute for Health and Care Excellence should consider including user-interface design and the general user-friendliness of healthcare IT systems in the procurement process for medical data-collection IT systems moving forward. Staff should not be required to go through intensive training to be able to use medical software. IT providers should be mandated to create user-friendly and intuitive systems.

Recommendation 5: NHS Digital should make submissions to the data quality maturity index mandatory, to enable better monitoring of data quality across the healthcare system.

Recommendation 6: In line with the recommendation of the Wachter review, all healthcare IT suppliers should be required to build interoperability into their systems from the start, allowing healthcare professionals to migrate data from one system to another. This would allow for compliance with the EU's General Data Protection Regulation principle of data portability.

Recommendation 7: NHS Digital should commission a review to evaluate how data from technologies and devices outside of the health and care system, such as wearables and sensors, could be integrated and used within the NHS.

Recommendation 8: NHS Digital, the National Data Guardian and the Information Commissioner's Office, in partnership with industry, should work on developing a digital and interactive solution, such as a chatbot, to help stakeholders navigate the NHS's data flow and information governance framework.

Recommendation 9: NHS Digital should create a list of training datasets, such as clinical imaging datasets, which it should make more easily available to companies that want to train their AI algorithms to deliver better care and improved outcomes. It should also develop a specific framework specifying the conditions for securely accessing this data.

Recommendation 10: The Department of Health and the Centre for Data Ethics and Innovation should build a national framework of conditions upon which commercial value is to be generated from patient data in a way that is beneficial to the NHS. The Department of Health should then encourage NHS Digital to work with STPs and trusts to use this framework and ensure industry acts locally as a useful partner to the NHS.

Recommendation 11: The Medicines and Healthcare Products Regulatory Agency and NHS Digital should assemble a team dedicated to developing a framework for the ethical and safe application of AI in the NHS. The framework should include what type of pre-release trials should be carried out and how AI algorithms should be continuously monitored.

Recommendation 12: NHS Digital, the Medicines and Healthcare Products Regulatory Agency and the Caldicott Guardians should work together to create a framework of 'AI explainability'. This would require every organisation deploying an AI application within the NHS to explain clearly on its website the purpose of the AI application (including the health benefits compared to the current situation), what type of data is being used, how it is being used and how anonymity is being protected.

Recommendation 13: The Medicines and Healthcare Products Regulatory Agency should require, as part of its certification procedure, access to data pre-processing procedures and training data.

Recommendation 14: The Medicines and Healthcare Products Regulatory Agency, in partnership with NHS Digital, should design a framework for testing for biases in AI systems and healthcare datasets. It should apply this framework to testing for biases in training data.

Recommendation 15: Tech companies operating AI algorithms in the NHS should be held accountable for system failures in the same way that other medical device or drug companies are held accountable under the Medicines and Healthcare Products Regulatory Agency framework.

Recommendation 16: The Department of Health, in conjunction with the Care Quality Commission and the Medicines and Healthcare Products Regulatory Agency, should develop clear guidelines as to how medical staff are to interact with AI as decision-support tools.