Patient Safety Spotlight interview with Clive Flashman, Chief Digital Officer at Patient Safety Learning

Clive, the founder of Flashfuture Consulting, talks to Patient Safety Learning about the important role of digital technologies in tackling the big issues facing healthcare, the need for digital tools and records to be joined-up and interoperable, and how his experiences as a carer have shaped how he sees patient safety.

Read the full article here

Ten digital priorities for NHS patient safety

Clive collaborated with the CEO of his client, Patient Safety Learning, to contribute a number of key points to this article on patient safety. Link here

The points raised were as follows:

4. Better access to centrally sourced and patient-generated data.

Care provision is often based on when a patient was last seen by a clinician. Patient-generated data from wearables or apps could make the available data more timely and more contextual to the patient. This could make a big difference to:

  1. Creating dynamic patient risk profiles in near real time. These could suggest optimum timings and approaches for interventions, the probability of re-admission, or the need for additional support. As devices continue to get smarter and AI becomes more accurate, they could, for example, read someone’s heart rate variability to estimate the future risk of a mental health crisis (a rough sketch of such a score follows this list).

  2. Highlighting trends in unsafe care so that they can be targeted quickly to avert more significant harm.

  3. Understanding the impact of actions taken so that lessons learned can be continually refined and shared. Closed-loop learning is not yet widely used in healthcare.
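As an illustration of the first point, here is a minimal, hypothetical sketch of how patient-generated wearable data might feed a dynamic risk flag. The RMSSD calculation is a standard time-domain HRV measure, but the baseline, the one-week window and the 25% drop threshold are invented for illustration; a real model would need clinical validation.

```python
from statistics import mean

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats -
    a common time-domain measure of heart rate variability (HRV)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def risk_flag(daily_rmssd, baseline, drop_threshold=0.25):
    """Flag a patient for review if their recent HRV has fallen well
    below their personal baseline. The 25% threshold is illustrative
    only; a real threshold would be clinically validated."""
    recent = mean(daily_rmssd[-7:])  # last week's average
    return recent < baseline * (1 - drop_threshold)

# Hypothetical wearable readings: RR intervals in milliseconds
week_of_rmssd = [rmssd([812, 798, 845, 830, 790, 805]) for _ in range(7)]
if risk_flag(week_of_rmssd, baseline=42.0):
    print("HRV well below baseline - consider proactive outreach")
```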

5. Adopt patient safety standards and embed these into new technologies, especially AI.

Solutions designed with patient safety standards at their core could be intrinsically safer. This requires including patient safety in the design stage of digital solutions, considering how the product will actually be used and ensuring that use is as safe as possible.

We are currently working with several healthcare organisations to finalise new patient safety standards.

AI is only as good as the algorithms used to create it, and it is essential that those algorithms are designed with end-user safety as a priority. Parameters used to ‘educate’ AI, or the rules made available to machine learning platforms, should always include patient safety considerations.
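To make that concrete, here is a toy example of one way a safety consideration could be baked into a machine learning objective: weighting a missed deterioration event (a false negative) far more heavily than a false alarm. The weights and data are invented for illustration, not a recommendation.

```python
import numpy as np

def safety_weighted_loss(y_true, y_pred_prob, fn_weight=10.0, fp_weight=1.0):
    """Binary cross-entropy in which missing a deteriorating patient
    is penalised fn_weight times more heavily than a false alarm."""
    eps = 1e-9
    y_pred_prob = np.clip(y_pred_prob, eps, 1 - eps)
    loss = -(fn_weight * y_true * np.log(y_pred_prob)
             + fp_weight * (1 - y_true) * np.log(1 - y_pred_prob))
    return loss.mean()

y_true = np.array([1, 0, 1, 0])          # 1 = patient deteriorated
y_pred = np.array([0.3, 0.2, 0.9, 0.1])  # model's predicted probabilities
print(safety_weighted_loss(y_true, y_pred))
```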

6. Build safety more strongly into the user experience.

We need to look at technology design, intended uses, and how the technology is actually used. If we put new technology into an existing environment where individuals are resistant to change, people might create workarounds or ignore it completely. Redesigning the environment is key to successful adoption and safe implementation.

Surveys have shown that poorly managed technology implementations can themselves become a safety risk. If a digital solution is not properly installed, configured and tested with users, then problems related to human factors may inhibit safety from the beginning.

Sometimes technological solutions such as electronic patient records can be highly complicated and designed for organisations rather than for end users – eg, focusing on reimbursement coding rather than capturing clinical observations intuitively. Delivering a safe and effective user experience requires co-design and co-production by developers, clinicians and patients.

7. Patient safety maturity index

In the same way that providers measure digital maturity, they ought to be able to use a patient safety maturity index. This could be linked to an accreditation system based on patient safety standards. Digital products could have a stated minimum patient safety threshold that must be achieved before they are procured by healthcare organisations. Users should be encouraged to provide feedback on any safety issues they experience, and ideas to improve the safety of the products they use.
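A minimal sketch of how such an index might work, assuming a weighted checklist of standards and a procurement threshold; the criteria, weights and 80% bar below are all invented for illustration.

```python
# Illustrative criteria and weights - real ones would come from an
# agreed accreditation scheme based on patient safety standards.
SAFETY_CRITERIA = {
    "clinical_risk_assessment_done": 3,
    "incident_reporting_built_in":   2,
    "user_safety_testing_evidence":  3,
    "post_market_surveillance_plan": 2,
}

PROCUREMENT_THRESHOLD = 80  # minimum score before purchase (assumed)

def maturity_score(product_evidence):
    """Weighted percentage of safety criteria a product meets."""
    achieved = sum(w for c, w in SAFETY_CRITERIA.items()
                   if product_evidence.get(c, False))
    return 100 * achieved / sum(SAFETY_CRITERIA.values())

evidence = {"clinical_risk_assessment_done": True,
            "incident_reporting_built_in": True,
            "user_safety_testing_evidence": True,
            "post_market_surveillance_plan": False}
score = maturity_score(evidence)
verdict = "eligible" if score >= PROCUREMENT_THRESHOLD else "not eligible"
print(f"Score {score:.0f}% - {verdict} for procurement")
```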

The top 10 dangers of digital health – a patient’s perspective

In his newsletter today (The Top 10 Dangers of Digital Health), the medical futurist Bertalan Meskó raises some very topical questions about the dangers of digital health. As a huge advocate of the benefits of digital health, I am aware of most of these but tend to downplay the negative aspects, as I generally believe that in this domain the good outweighs the bad. However, as I was reading his article, I realised that it was written very much from the perspective of a clinician and, to some extent, a healthcare organisation too. The patient perspective was included, but not from a patient safety angle. Many of the issues that he raises have significant patient safety implications, which I’d like to share in this blog.

1. Regulating adaptive AI algorithms

Where an AI tool quickly adapts to reflect its environment and the context in which it operates, the AI may “reinforce those harmful biases such as discriminating based on one’s ethnicity and/or gender”. These biases would further exacerbate existing health inequalities and place certain patients at a disadvantage. It is important that the ground rules for these AI tools include firm parameters that seek to prioritise patient safety – a bit like Asimov’s Zeroth Law: “a robot may not harm humanity, or, by inaction, allow humanity to come to harm”.
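One hedged sketch of what such a firm parameter could look like in practice: a periodic demographic-parity check that raises an alert if an adaptive model’s positive-prediction rates drift too far apart between subgroups. The 0.2 bound and group labels are placeholders; real fairness monitoring is considerably more nuanced.

```python
from collections import defaultdict

def subgroup_rates(predictions):
    """Positive-prediction rate per demographic subgroup.
    `predictions` is a list of (group, predicted_positive) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in predictions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def bias_drift_alert(predictions, max_disparity=0.2):
    """A crude demographic-parity check, run each time the adaptive
    model retrains: alert if the gap between the best- and worst-
    treated subgroups exceeds the bound."""
    rates = subgroup_rates(predictions)
    return max(rates.values()) - min(rates.values()) > max_disparity

preds = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", False), ("group_b", False), ("group_b", True)]
if bias_drift_alert(preds):
    print("Subgroup disparity exceeds bound - pause and review the model")
```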

2. Hacking medical devices remotely

The idea that hackers might target people’s implantable cardiac devices was popularised in a 2012 episode of the US television drama ‘Homeland’, in which terrorists hacked a fictional vice president’s pacemaker and killed him. It is not just VIPs (or VPs) who need to worry about this: potentially, anyone with an implanted device could have it hacked and be held to ransom. Medical device manufacturers should take far more care over the security they build into their devices to protect patients from such attacks. Frankly, when large healthcare organisations procure these types of devices, this is one of the key areas on which they should be interrogating their potential suppliers.

3. Privacy breaches by and on direct-to-consumer devices and services

This is a difficult one, because if we want digital systems to really understand us and provide advice or treatment personalised to us, then those digital tools must have access to our confidential medical data. However, privacy is still a high priority for most patients, and they (rightly) want to know what is happening to their data: who is using it, how long it is being held, and whether it is being passed on to third parties without their explicit consent. People often forget whom they have given access to their data and for what purpose, and sometimes stop using a digital tool without realising that all of their data is still being held (and possibly collected via an active API) by the tool’s supplier. It would be helpful if our mobile phones and PCs could highlight the following (a rough sketch of such a sharing ledger appears after this list):

a. When we shared sensitive data, who with, and what data was shared.

b. A list of active APIs that are still sharing our data, etc.
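As a rough sketch of what such a sharing ledger might look like (the names and fields here are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SharingGrant:
    recipient: str        # the app or API we shared with
    data_shared: list     # which categories of data
    granted_at: datetime  # when the grant was made
    revoked: bool = False

ledger = []

def record_share(recipient, data_shared):
    """(a) log when we shared sensitive data, with whom, and what."""
    ledger.append(SharingGrant(recipient, data_shared,
                               datetime.now(timezone.utc)))

def active_shares():
    """(b) the recipients/APIs that still hold an open grant."""
    return [g for g in ledger if not g.revoked]

record_share("fitness-app.example", ["heart_rate", "sleep"])
record_share("symptom-tracker.example", ["medications"])
for g in active_shares():
    print(f"{g.granted_at:%Y-%m-%d}: {g.recipient} <- {', '.join(g.data_shared)}")
```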

Data used for purposes other than those intended by the patient is potentially a safety risk to that patient and should be treated as such.

4. Ransomware attacks on hospitals

Yes, this is awful for the hospital, and yes, it may cost them money; however, let’s not forget whose data has been stolen: the patients’! Are they sufficiently alerted to this, told what is happening and given ways to mitigate any issues to them personally? In an ideal world they are, but in reality the hospital is probably in panic mode, and communicating transparently with patients is low down on its priority list. As the Medical Futurist says: “The average patient should demand more security over their data” – but how do they do this? What can a single patient do to ensure that the hospitals that have stewardship over their data (not ownership, in my opinion) make it as secure as possible?

This brings me back to an idea that my sadly departed friend, Michael Seres, had many years ago. Each hospital exec team (not Board) should include a Chief Patient Officer, whose job is to push for patient interests in operational matters (which is why they shouldn’t be a non-exec member of the Board). That is the person who should hold the organisation to account over the security of its patients’ data.

5. Technologies supporting self-diagnosis

Dr Google has been an issue for some years, and people’s off-the-shelf devices that monitor their vital signs are not necessarily medical grade, nor do their users generally have the skills to interpret the outputs. However, doctors should embrace patients who are keen to manage their own chronic conditions and support them in doing so. This ‘shared accountability’ has to be the model for improved population health, and doctors not willing to work with their patients shouldn’t have any.

6. Bioterrorism through digital health technologies

A bit exotic, this one, and certainly not a near-term risk compared with the other things described in the newsletter. However, in a world that is still dealing with a pandemic and reliant on vaccines to bring some normality back into our everyday lives, the security of (for example) the vaccine supply chain is critical.

What if a batch were intentionally sabotaged or its efficacy reduced in some way? In exactly the same way that medical products (especially implants) should be made as safe and secure as possible, the same is true for the medicines we rely on.

7. AI not tested in a real-life clinical setting

The newsletter makes the case about issues related to how staff use the AI, but PLEASE… test this with patients first! Safety in use is critical, and only feedback involving patients will help developers to optimise these digital tools to be as safe as possible.

8. Electronic medical records not being able to accommodate patient-obtained digital health data

This is a very personal issue for me. Why should my doctor have to send me for tests when I can give him/her perfectly reasonable data that I have gathered myself from a device that is CE marked and approved by the FDA/MHRA etc.? Electronic medical record vendors are incredibly reluctant to allow anyone other than an authorised doctor to enter anything into a patient’s record, and there are some good reasons for this. However, I’ve long thought that there could be an annexe to the record that is patient-controlled, where patients can enter a new address, add data from their own blood pressure device, and record over-the-counter drugs or remedies they are taking. That way, doctors would have an up-to-date and (hopefully) reliable set of data for a more informed discussion with their patient, and it could shorten the time between consultation and referral/treatment.
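To illustrate the annexe idea, here is a minimal sketch assuming a two-section record with role-based write permissions; the sections, fields and roles are hypothetical rather than any vendor’s actual data model.

```python
class PatientRecord:
    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.core = {}    # clinician-entered: diagnoses, prescriptions
        self.annexe = {}  # patient-entered: address, home readings, OTC meds

    def write(self, author_role, section, key, value):
        """Enforce who may write where: the core record stays
        clinician-only, the annexe is patient-controlled."""
        if section == "core" and author_role != "clinician":
            raise PermissionError("Only clinicians may edit the core record")
        if section == "annexe" and author_role != "patient":
            raise PermissionError("The annexe is patient-controlled")
        getattr(self, section).setdefault(key, []).append(value)

record = PatientRecord("example-patient-id")
record.write("patient", "annexe", "home_bp", {"systolic": 128, "diastolic": 82})
record.write("clinician", "core", "diagnosis", "hypertension, stage 1")
```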

9. Face recognition cameras in hospitals

I’m less worried by this in principle; however, I am interested to know how the data generated will be used and the security around it. If it is only used by the hospital to optimise patient flow, or remotely detect symptoms that are then used to help patients either directly or indirectly, then fine. If it is shared with others for more sinister purposes, then I would be concerned.

10. Health insurance: Dr Big Brother

This is less relevant to the UK – only 11% of us have private health insurance. Again, this boils down to who collects data on patients, for what purposes, whether explicit consent is gained from the patient to share their data, and how those third parties may use it.

There are both negative and positive connotations to the gathering of a person’s health data by their health insurance company, but given that they already ask for access to all GP and secondary care records, having access to health wearable data (as Vitality Health already does) is not a big step.

Conclusion

I still believe that the benefits of digital health outweigh the risks, but the risks outlined above are not inconsequential. Many of the negative aspects are predicated on poor management and control of patient data. One of the ways that this should be mitigated is to have one or more patient representatives at an exec (not non-exec) level who hold healthcare organisations to account over this important aspect of care provision.

Are we ready for the new future of Digital Health?

Several years ago, I was involved in a workshop at a global IT company. When we were talking about the future of patients and their data, I posited that patients might look to monetise their own health data, especially where they had an unusual condition. Everyone was appalled and thought I was going too far. Yesterday I read an article that described a platform for patients to do just that.

It is another example of the consumerisation of healthcare embracing the future while constraining factors such as ethics, privacy and security are largely left behind.

We really do need to get better at gaming what the future might hold and working through how we would manage the issues associated with technological advancement.

In the U.K. TV series Years & Years, the younger daughter decides to go fully into biohacking and have a silicon wafer implanted in her brain. The company that does it has rights to what she sees and captures her brainwaves with the intention of creating algorithms to predict her thoughts. How do we feel about this? Excited, scared, angry, violated?

The meshing of organic and hardware-based intelligence is, in my view, inevitable. It is the logical extension of augmented intelligence. How do we manage this future? It’ll be with us soon...

Open Banking has just happened – is it now time for Open Health?

This week, open banking hit the UK financial markets. Given its potential to disrupt both personal and corporate financial markets, it has had an astonishingly low-key launch. You can read more about it here, but essentially it is a way of opening up people’s financial information, with their consent of course, so that others may use it to provide more meaningful analysis, products and services.

For example, in the credit check market, people who have lived abroad until recently, or who have changed from being employed to self-employed, might find it very difficult to obtain a mortgage or personal loan. If it were possible (as it now is) to instead check the way someone manages their bank account as a guide to their creditworthiness, then the way decisions are made around credit could change significantly. A start-up called Credit Kudos is betting that this is exactly what will happen and is looking to provide exactly that sort of credit check service.

When I was listening to a summary of how the open banking changes are being implemented, one of the technological aspects that really interested me was the ability for APIs to be made available for a finite time period. APIs (application programming interfaces) are what enable applications to interoperate, particularly mobile applications these days. In the past, you may well have shared large files with other people using free online services like Senduit, which have the key feature that you can limit the length of time your file is available to the intended recipient – anything from 30 minutes to a week or more. The same is now possible in terms of giving applications access to your financial history. So I could download an app and give it access to my financial data, but only for, say, one hour, and not on an ongoing basis.
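As a rough sketch of the mechanism, assuming a simplified, OAuth-style model (the function names and in-memory token store are invented for illustration): each access grant carries both an expiry time and a scope, so it answers ‘for how long’ and ‘to what’ in one object.

```python
import secrets
from datetime import datetime, timedelta, timezone

_tokens = {}  # toy in-memory store of issued grants

def grant_access(scope, ttl_minutes):
    """Issue a token valid only for `ttl_minutes` and only for `scope`."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = {"scope": scope,
                      "expires": datetime.now(timezone.utc)
                                 + timedelta(minutes=ttl_minutes)}
    return token

def check_access(token, requested_scope):
    """Reject requests with an expired token or an out-of-scope ask."""
    grant = _tokens.get(token)
    if grant is None or datetime.now(timezone.utc) > grant["expires"]:
        return False
    return requested_scope in grant["scope"]

t = grant_access(scope={"transactions:read"}, ttl_minutes=60)
print(check_access(t, "transactions:read"))   # True, for the next hour
print(check_access(t, "transactions:write"))  # False - out of scope
```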

People have well-founded and significant concerns about sharing their medical history, and it occurred to me that these time-limited APIs might be one way to give people more confidence to share their data with those who need access to it to make better healthcare decisions. You could even give instructions to infomediaries who could control which apps or services have access to your medical data, for how long, and at what level of granularity. To some extent, this is the sort of service that Helix is already providing around the control of individuals’ genetic data.

Ultimately, the end destination for Open Health might be the potential for people to sell their own medical data to the highest bidder, based on their conditions, the medications they take, and other factors such as the regularity of collection and the quality of the data captured. How far away is this scenario? Last year I thought it was perhaps 10-15 years away; now, maybe a little sooner.