A day in the life of a GP

Sometimes you can’t see the wood for the trees. A day in General Practice can leave your head spinning. Imagine talking to more than 30 different patients about their problems and dealing with over a hundred pieces of health-related admin, including urgent referrals, incoming hospital letters and blood test results. Little wonder that recent calculations have shown that working 3 days as a GP, i.e. 3 morning clinics and 3 afternoon clinics, generates enough work to fill 37.5 hours. Essentially, 3 days as a GP is ‘full-time’.

It doesn’t need to be this way. There are so many efficiency gains to be made, without sacrificing the patient-doctor interaction. Let me take you through the day and show you where med tech could step in…

The Day Begins

08.30: Get into clinic early to deal with urgent admin, including bloods that have accumulated over the days I’m not in. Many of these are normal blood tests (usually two to three different sets per patient), with lots of clunky double-clicking and selecting from drop-down menus to add a comment or action to each.

Solution: A simpler method of releasing normal bloods. No drop-down lists! Could a swipe feature (like that used in Tinder) help? Or maybe a red/green button feature that can easily ‘finish’ normal bloods?
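For the curious, the triage logic behind a ‘green button’ could be very simple. Here’s a minimal Python sketch, with entirely made-up test names and reference ranges, of how an inbox of results might be split into sets that are safe to ‘finish’ in one tap and sets that still need line-by-line review:

```python
# Hypothetical sketch: sort incoming blood results so that sets entirely
# within their reference ranges can be "finished" with a single tap,
# leaving only abnormal sets for review. All names and ranges below are
# illustrative, not taken from any real clinical system.

REFERENCE_RANGES = {
    "haemoglobin": (120, 160),   # g/L, illustrative adult range
    "potassium": (3.5, 5.3),     # mmol/L
    "creatinine": (60, 110),     # umol/L
}

def is_normal(result_set):
    """True if every test in the set falls inside its reference range."""
    return all(
        REFERENCE_RANGES[test][0] <= value <= REFERENCE_RANGES[test][1]
        for test, value in result_set.items()
    )

def triage(inbox):
    """Split an inbox of result sets into one-tap-normal and needs-review piles."""
    normal = [r for r in inbox if is_normal(r["results"])]
    review = [r for r in inbox if not is_normal(r["results"])]
    return normal, review

inbox = [
    {"patient": "A", "results": {"haemoglobin": 135, "potassium": 4.1}},
    {"patient": "B", "results": {"potassium": 5.9, "creatinine": 95}},
]
normal, review = triage(inbox)
```

The clinician still presses the button, so the safety check stays human; the software just stops making them click through a drop-down for every unremarkable result.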

09.00: Clinic begins. Up to 18 patients in 10 minute intervals. Phone calls as well as face to face. Numerous tasks stack up including:

  • Emergency referrals (including cancer 2 week wait forms)

  • Routine referrals to hospital clinics

  • Queries that need to be run past a speciality doctor e.g. a consultant

  • Prescription requests with tailored delivery e.g. sending to a pharmacy different to the one listed by the patient

  • Completing documentation from the clinic

  • Sending patients information leaflets/other documentation via text

This is standard clinic work and, needless to say, I can’t get it all done in the 10 minutes allocated to each patient. Some referrals take careful consideration, e.g. the cancer referral process, and require cross-checking with other admin team members to make sure they’re sent to the right hospital administrator.

Other administration can be very time-consuming, e.g. booking hospital appointments means interfacing with the NHS e-RS system, which is clunky in itself and then requires downloading the appointment information and texting it to the patient.

The above systems can be very frustrating, especially where they involve interacting with external software, e.g. e-RS, or filling in specified, partly pre-populated referral forms.

Solution: There has to be a better way here. One thing that really gets me is that specific referral forms don’t accept (my fairly comprehensive) medical notes, and I often have to copy and paste information and results into the referral document. Surely a better mail-merge-type process is possible? I hate having to scour blood results and enter them manually; could this not be automated, with the last relevant blood result dated and entered for reference?
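As a sketch of that mail-merge idea, the software would only need to scan the record for the most recent result of each relevant test and print it, dated, into the referral text. A minimal Python illustration (all data, test names and formats here are hypothetical):

```python
# Hypothetical "mail merge" sketch: pull the most recent result for each
# relevant test out of a patient's record and drop it, dated, into the
# referral text, instead of scouring and retyping results by hand.
from datetime import date

def latest_results(record, wanted):
    """Return {test: (date, value)} for the newest result of each wanted test."""
    latest = {}
    for d, test, value in record:  # each entry is (date, test, value)
        if test in wanted and (test not in latest or d > latest[test][0]):
            latest[test] = (d, value)
    return latest

def render_referral(latest):
    """Format the latest results as dated lines for pasting into a referral."""
    return "\n".join(
        f"{test}: {value} ({d.isoformat()})"
        for test, (d, value) in sorted(latest.items())
    )

record = [
    (date(2024, 1, 5), "haemoglobin", 118),
    (date(2024, 5, 2), "haemoglobin", 124),
    (date(2024, 5, 2), "ferritin", 18),
]
text = render_referral(latest_results(record, {"haemoglobin", "ferritin"}))
```

Note how the stale January haemoglobin is silently superseded by the May one, which is exactly the filtering I currently do by eye.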

Ideally, I just want to be able to say ‘computer, refer this patient to …. clinic and send confirmation of their appointment to the patient’ and for this to happen. I’m hoping that in the days of AI, this becomes a reality, making clunky referrals and letters a thing of the past.

12:00: Admin +/- home visit. Admin includes more of the above, along with hospital letters, which I have to read and action. I am also sent upwards of 50 repeat prescriptions to analyse and authorise. Given that these are patients I may not know, I have to look up any complex requests and make sure they’ve had the relevant monitoring bloods or health checks. If they haven’t, I have to send them reminders to have this done and maybe even reduce the supply of medication issued.

Solution: Prescriptions can be an incredibly complex and time-consuming affair, especially as the safety stakes are so high. I understand the medical need for a clinician to check this, but the work could be improved by having an easier way to message patients about their reduced supply. Messaging templates help, but maybe there’s a coding fix that would allow me, at the press of a button, to send a message +/- reduce the issued supply of medication, rather than having to change everything manually.
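Under the hood, that one-button fix might look something like this: a single action that both reduces the issued supply and generates the templated patient message. A hedged Python sketch, where the 28-day reduced supply and the template wording are my own illustrative assumptions:

```python
# Hypothetical one-button sketch: given a repeat prescription request that is
# overdue for monitoring, produce in one step (a) a reduced issue and (b) a
# templated message to the patient. The 28-day supply and template wording
# are illustrative assumptions, not real clinical policy.

MESSAGE_TEMPLATE = (
    "Your {drug} is due a monitoring check ({check}). We have issued a "
    "{days}-day supply; please book a blood test before your next request."
)

def authorise(request, monitoring_up_to_date, reduced_days=28):
    """Return (days_issued, message_or_None) for a repeat prescription request."""
    if monitoring_up_to_date:
        return request["usual_days"], None
    msg = MESSAGE_TEMPLATE.format(
        drug=request["drug"], check=request["check"], days=reduced_days
    )
    return reduced_days, msg

days, msg = authorise(
    {"drug": "methotrexate", "usual_days": 84, "check": "full blood count"},
    monitoring_up_to_date=False,
)
```

The clinical decision (is monitoring up to date?) stays with the clinician; the software merely bundles the two follow-on actions so nothing has to be changed manually in two places.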

14:00: Do it all over again for the second session of the day.

***

This is, of course, a gross simplification of what I do and of where I could see health tech improving the life of a GP. Even more revolutionary ideas are in the works, e.g. Tortus, an AI that records, transcribes and summarises a GP-patient consultation, but simple steps like streamlined documentation and mail merge would help until these measures fall into place.

At the very least, I recommend GP practices complete a time-tracking audit of their staff, e.g. with Harvest or Clockify, so that they can see where the time-sucks lie and how to improve the working day. Ultimately, time is the most precious resource we have, and to spend it on needless admin is a sin that med tech could absolve.

Spark(s Fly): The Crick Science Entrepreneur Network

A CTO, research scientist and strategy manager walk into the Crick…

The Friday evening session of London Health Tech Week was always going to be a popular event. Throw in the impressive surroundings of the Crick and subject matter of ‘AI, ML and the Future of Healthcare’ and it’s easy to see why there was a waiting list for this meetup.

Bringing together industry and investor voices in two panels, with an interval-scheduled Pitch Fest, moderators Tanmay Gupta and Andrea Balukova chaired some interesting discussions. Here are some of the takeaways:

AI that glitters is not gold

The first panel were not convinced AI has enhanced the fields of drug discovery and formulation. Their answer to this opening question was surprisingly negative considering how much hype outsiders (like me) hear about the potential for AI in discovering new drugs. Experienced figures like Lindsay Edwards of Relation Therapeutics decried the gains, calling for AI to be used as a tool in the drug discovery process rather than expecting it to be the process in and of itself.

Supporting him in this was research scientist Kristina Ulicna, who warned of the ‘black box’ workings of AI. Coming from an expert researcher, Kristina’s concerns about not knowing how the software comes to its conclusions should prompt the rest of us to perform our own critical analysis of AI outcomes.

However, AI is not without some practical applications. Meri Beckwith, Co-Founder of Lindus Health, gave examples of how AI is helping cut down the time spent drafting research proposals. It seems AI is still most useful in its scribing and LLM capacities rather than in innovation itself.

AI as a therapeutic intervention needs more work

Despite the public expectation, several panelists were wary of AI as a therapeutic intervention. Citing the example of Pear Therapeutics, Andrew Welchman of Ieso stressed the importance of scientifically validated technologies rather than generic LLM-based bots.

Investors remain positive

With an investor voice on each panel, it was good to hear their hopes for future developments. Predictions included precision brain health and AI helping overhaul business model patterns. And with Eupnoos (software that can diagnose respiratory disease from a phone voice recording) winning the Pitch Fest, the future certainly looks bright.

It was refreshing to hear a contrasting view to the usual AI hype and, considering the expertise of the panelists, it’s likely a fair reflection of the use of AI in drug development at the moment. However, I think these are still very early days in the history of AI, and things (hopefully) will only get better.

The Crick Science Entrepreneur Network’s Spark Meetup (AI, ML and Beyond: Charting the Future of Healthcare Technology and Innovation in Life Sciences) was held at the Francis Crick Institute on 14th June 2024.

What's in a voice?

A doctor works a lot on intuition. A patient walks in, seemingly fine, observations all normal but there’s something not quite right. They’re just a little bit vague, a little too slow, a bit… off. For some reason you pursue the investigations, struggling to justify why exactly you want it done. Finally, the patient opens up about the odd things they’ve been experiencing: voices, images, thoughts that confuse and frighten them. You suspect a serious mental illness, you refer to psychiatry.

It’s a struggle to teach the spidey sense in medical school. Given that doctors sometimes miss even a glaringly obvious diagnosis, trying to teach intuition seems a bit redundant. But what if there were technologies that could codify these subtle signs and help make a diagnosis?

This is what the emerging field of vocal biomarker technology is trying to do. Particularly useful in mental health disorders, these software programs analyse speech, looking for subtle signs of disease in the things you say and the way you say it.

“These software programs analyse speech, looking for subtle signs of disease in the things you say and the way you say it.”

Forging ahead is Psyrin, a startup using AI to analyse as little as 5 minutes of speech to help diagnose serious mental illnesses (SMIs) like schizophrenia. Considering that a lot of the bottleneck in initiating treatment is the diagnosis of a condition (bipolar disorder, for example, takes 9.5 years to diagnose according to Bipolar UK), improving diagnostic technologies could be the key to shaving down this time. But will it actually work? How does this fit into the system as it stands?

What can vocal biomarkers add to the landscape?

Mental health has always been a bit nebulous in its diagnosis. Unlike the obvious signs of physical disease, e.g. a blocked artery or a TB bacillus under the microscope, diagnosing mental health conditions is less clear cut. Checklists and questionnaires, e.g. the PHQ-9 for depression, help, but these are not prerequisites. A lot of the time, a diagnosis lies in a doctor’s opinion.

Enter the vocal biomarker. If the evidence backs it up and the technology can reliably support a diagnosis of a mental health condition, vocal biomarkers could change the SMI pathway. It could finally mean a quantifiable marker for mental health.

The technology is already being developed for neurological disorders like Parkinson’s and myasthenia gravis, with companies like BioSensics BioDigit and Canary Speech developing AI-assisted tools that can monitor and predict flares in a wide range of conditions.

If vocal biomarker technology works, it could be a marvel in terms of ease of measurement. Imagine this: a voice-recorded questionnaire, done by the patient in their own home, that gives an accurate reading of their mental health. But is it too good to be true?

Does it work?

I’m always a bit wary of hearing “AI magic” as a term to describe how a piece of medical software comes to important conclusions. I want to know what processes went into turning 5 minutes of speech into a possible diagnosis of schizophrenia. From what I understand, by training an AI on a previously validated dataset (in Psyrin’s case, 20,000 minutes of voice recordings from patients with various mental health conditions), the trained AI can then predict outcomes from new data (e.g. the 5 minutes of speech) it encounters.
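To make that train-then-predict loop concrete, here is a deliberately toy sketch: a nearest-centroid classifier over a couple of made-up numeric ‘speech features’. Real vocal-biomarker models are vastly more sophisticated, but the workflow (learn from labelled recordings, then label new ones) has this shape:

```python
# Toy, stdlib-only illustration of the train-then-predict idea: learn the
# per-class averages ("centroids") of some numeric speech features (the
# feature values and labels below are entirely made up), then label new
# recordings by whichever centroid is nearest.
import math

def train(samples):
    """samples: list of (feature_vector, label) -> {label: centroid}."""
    sums, counts = {}, {}
    for vec, label in samples:
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + v for s, v in zip(sums.get(label, [0] * len(vec)), vec)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, vec):
    """Return the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(vec, centroids[label]))

# Imaginary training data: e.g. (pause length, speech-rate deviation).
training = [
    ([0.9, 1.1], "control"), ([1.0, 0.9], "control"),
    ([2.1, 3.0], "flagged"), ([1.9, 3.2], "flagged"),
]
model = train(training)
```

Even in this toy form, the ‘black box’ worry is visible: the model outputs a label, not a reason, which is exactly why clinicians want to see the workings.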

The proof of the pudding will be in clinical trial outcomes. Sonde Health, a US-based company using vocal biomarkers, has shown promising results in a recent study published in Frontiers in Psychiatry. The 104-participant study showed a strong correlation between their software’s mental health scoring system and the standard mental health assessment questionnaire used to judge symptoms. Importantly, there was a suggestion that heightened awareness of their mood scores allowed patients to change their behaviour to improve their mental health.

Obviously more work needs to be done, but there is promise.

Pitfalls

It’s easy for me to poke holes, but I think it’s important to discuss the limitations of any new med tech device. I’m always sensitive to variations in patient populations and wonder how well the software can account for differences in speech patterns, dialect and language. Obviously, if that introduces inaccuracies into the diagnosis, then that’s a major flaw in the product. Furthermore, if the software can only be used for people who fit its training dataset, this could further heighten health inequality and restrict access to services, something that in these stretched times we need to be even more mindful of.

In summary, the potential for vocal biomarker technology in the diagnosis and even the management of mental health conditions could prove revolutionary. However, as with all med tech products, it needs to be validated, safe and inclusive to really break new ground.

Diabolus ex machina

This week the NHS services in my area were devastated by a cyber attack that stopped services in their tracks. I’m not exaggerating when I use the word ‘devastated.’ Blood tests were compromised as a result of the cyberattack on Synnovis, the company that provides pathology services to two major south London NHS Trusts.

As a result, non-essential bloods in the community have been postponed, while things are worse still in hospitals. Operations have been cancelled and every ward has been left scrambling as they go back to paper requests. And it is not just the amount of extra work in an already stretched healthcare system that astonishes me. The risk of the situation gives me the heebie-jeebies. Imagine not being able to accurately measure the blood levels in a woman who has suffered a major haemorrhage after childbirth.

Needless to say, this was the last thing NHS workers needed.

As someone who was there for the last ‘WannaCry’ cyber attack on the NHS in 2017, I remember that after the initial shock had passed, I became aware of just how vulnerable health service IT systems were. Like many things in underfunded hospitals, IT systems are the worse for wear. From excessive log-on times to a patchwork of clunky software programs, healthcare IT lags behind the cutting-edge tech of private companies.

Here are some of the issues with NHS IT systems as I see it:

Vulnerability of single data systems

The aspiration of NHS healthcare is to provide joined-up, accessible care from anywhere in the country. Indeed, this should be easier in a system like the NHS, where there is one umbrella organisation and fairly standardised working practices and legislation. IT systems, however, have been commissioned differently across Trusts, much to the consternation of those working in (and using) the healthcare system. Changing Trusts, I often have to get used to a new IT system, and as a GP I cannot easily see what has happened to a patient, even in a local hospital. This has implications for continuity of care and safeguarding.

In the Synnovis cyber attack, the devolution of IT systems has helped contain the disruption to a few London NHS Trusts. I can only imagine the disruption if Synnovis were the pathology provider for the whole country. However, if the NHS is serious about implementing a single data system (which I think is a good thing), it needs cyber security worthy of such an important single target.

Vulnerability in upkeep and maintenance

NHS IT staff have a hell of a job to do. Given how big Trusts are and the disrepair a lot have fallen into, there are many vulnerable points in IT systems. Maintaining good digital hygiene is difficult given the many stations and users accessing mail, software and cat videos (on their breaks!). To keep IT systems robust and safe, Trusts need the best IT staff, working within their capacity and with adequate resources. Unfortunately, this does not sound like the NHS.

Although the government have promised more investment in its NHS Digital Plan, without a fully equipped workforce, this may be a case of putting the cart before the horse.

Vulnerability in external IT providers

In order to deliver a single data system, healthcare services need to prioritise cyber-security. This can be complicated as the many NHS IT services are provided by private companies that may then have vulnerable systems themselves. For example, Synnovis’s parent company, Synlab, suffered an attack in its Italian healthcare systems in April, a foreshadowing of this week’s UK attack.

How can this be prevented from the UK end? It’s difficult to say but I suggest extreme scrutiny and vetting of external providers before commissioning services. NHS Digital must be working on this.

Emotional vulnerability

When the 2017 ‘WannaCry’ attack happened, I was in disbelief that the NHS could be a target. As I sat on a broken chair in a windowless store cupboard-turned doctors’ office, I couldn’t believe hackers would think a broke and broken NHS a worthy blackmail target. However, my opinion has changed since then.

Of course, healthcare is a prime target for cyber attacks. What’s more emotive or high-stakes than healthcare? Compared to the British Library hack of last year (no offence British Library - I feel you!) ransoming sensitive and timely healthcare information is way more likely to pay. We don’t negotiate with terrorists? Well, what if they have your toddler’s chemo-related white cell count?

Cyber attacks on healthcare systems are not off limits for calculated criminals, who have no limits. In fact, like defence, government and the other major holders of sensitive material, they are set to be the number one targets for the future.

How to build a Fortune 500 med tech company

Recently I’ve found myself answering questions rather than asking them. Usually, I’m sat behind a desk going through my internal flow chart of questions, ruling things out and coming to the most likely decision given the information supplied by the patient in front of me.

‘Does it hurt? What makes it worse? Have you lost weight unexpectedly?’

But this time, med tech companies have been interrogating me.

‘How does this system work? What would happen if we did this? What do you think of this idea?’

I’ve found myself being asked for my professional opinion, not as a medical diagnostician, but for my insight into how the primary care system works. And I take that as a good sign. Too often tech initiatives are rolled out without involving the end user, which ultimately affects the tech’s success. Poorly designed, ignorant applications are doomed to fail and risk losing the faith of the user.

After a few different conversations with companies ranging from @MedeskinAI’s dermatology-diagnosing AI to remote monitoring services like Isla Health, I see some themes in the questions asked. Here’s my summary of what makes for successful med tech design and implementation.

Start with reality

Many people think they know the medical arena and what the problems are without ever having worked in it. Those who actually work in the field are best placed to tell you what the sticking points are and why a process is failing. Before sinking millions of pounds and hours into designing something, run it by the users first to see if they think it’s viable. The focus group is your friend.

Draw on the user’s expertise

Med tech regulation is in another league compared to other industries. With its direct impact on human health, it demands the most stringent safety profiles. Therefore, I think it imperative that med tech companies involve specialists in the field to advise on the potential risks of any applied technology. This applies not only to medical devices but also to software. Undoubtedly there are legal standards to be met, but by using medical specialists from inception, the compass will always point in the right direction.

Retrofit a vision

The alternate approach to working with the reality of healthcare (which admittedly is in a pretty difficult position at the moment) is to start with an ideal vision of the future - imagine a perfect service and then try to deliver that. Like Steve Jobs picturing a handheld touch-screen mini-computer, utopian ideals can first be envisaged and then created. This is hard given the constraints of working in a flawed system but like the 1942 vision of the NHS itself - it seems impossible until it’s done.

Back it up with results

The final word of advice to any med tech company trying to make its mark is to prove its worth with formal results. Numbers and stats help, so with any implementation project I would take a sample of users, see them through the process and monitor the outcomes. Tell me how many referrals would be saved, how many cancers would be picked up or what the impact of your device/software is. And if you can’t quantifiably show how my life is going to improve with your product, then you should probably go back to the drawing board.

Man vs Machine: A Doctor's Defence of Medical AI

And the results are in… Med-Gemini, Google’s medical AI model, has shown an astonishing ability across a variety of medical tasks. Its recently published open paper shows the AI can achieve 91% accuracy on the MedQA (USMLE) dataset.

And perhaps more impressive is Med-Gemini’s utilisation of medical knowledge. Google’s medical AI has pushed the boundaries of chest x-ray interpretation, correctly identifying normal x-rays and, in certain datasets, 65% of abnormal cases.

Given time, medical AI has the potential to become more accurate and better at diagnosis than the most accomplished medical specialist.

What does this mean for us mere mortals? The human doctors on the shop floor trying to bully our brains into absorbing the ever-changing guidelines and technologies? Are we now inferior to a machine?

Possibly.

Is this a good thing?

Possibly.

Here goes: I’m nailing my colours to the mast and voicing my support for medical AI. Here are the reasons why I’m optimistic about the rise of the medical supercomputer:

Not all doctors are great

One of the most quoted arguments against medical AI is that it can’t have the same personal, empathetic touch as a human doctor. But having been part of countless medical teams over the last 20 years, I have seen my fair share of fallible clinicians. Clear thinking and deportment suffer under stress, and the extreme working conditions of today’s healthcare systems can lead to the erosion of good medical care.

If a medical AI is going to make the safest decisions, I think I would take that over a well-meaning but inaccurate human doctor, let alone over an obviously incompetent one.

AI as an adjunct

I see AI as an evolution in the tools doctors already have. I don’t pretend to know the whole of Kumar and Clark’s ‘Clinical Medicine.’ I am, after all, not a robot. But I do rely on reference texts and guidelines when I need them.

In the same way I look up the dosing of medications in the BNF, I feel AI could serve as a cross-check, adding an extra safety layer to my practice. Using Med-Gemini’s example, if I think a chest x-ray looks normal but the AI says differently, I will probably take a closer look at the image, especially if the AI’s accuracy is as good as a radiologist’s more often than not.

We haven’t cursed the release of many transformative medical technologies, even when they usurped previous gold standards of treatment or diagnosis. I am incredibly glad an MRI can better determine the nature of a patient’s lump, even though that makes a machine a better diagnostician than me. So too, I believe, AI will become another tool in my medical arsenal, something to be embraced rather than resented.

Redirect attention

It will take some time for AI to become the true deus ex machina that replaces the human in the medical interaction. However, the hope is that by automating the administrative burden, AI will give doctors more time to concentrate on the actual patient encounter. With no more endless clicking to order a test, hours of doctors’ time can be spent doing more important things and regaining a sense of balance and health in the working day.

‘Worst’ Case Scenario

Say the feared thing happens and doctors become obsolete, what then? Whilst I don’t think that is likely to happen, these highly talented minds can be directed to other things - maybe focusing on preventative healthcare or maintaining mental wellbeing and flourishing, rather than treating disease. Maybe we’ll all become bona fide creatives, after all, medicine is both an art and a science. I have no doubt us doctors will find something good to do with our time, if we’re no longer needed.

The Digital Healthcare Show: Hot and Not

I'm becoming very familiar with the ExCeL Centre. Last week’s Digital Healthcare Show was my second visit to the East London conference centre, following February’s Festival of Genomics. Armed with the basics of how to change the channel on the wireless headsets, I had more time to appreciate the latest digital health innovations causing a splash on the scene.

So, as with Rewired, here are the latest trends from one of the biggest digital health conferences in the UK this year.

Hot:

  1. The End-User

    Digital health is finally taking notice of a vital figure in its pathways: the end-user. James Freed’s niftily titled talk ‘Why your digital transformation will fail’ showed how vital it is to keep the end-user in mind and onboard them appropriately; otherwise, as his examples showed, your initiative will fail.

  2. Digital Literacy

    As we’ve already seen, digital health is transforming nearly every aspect of healthcare. Just as Lloyd George envelopes are a thing of the past, the healthcare landscape will transform over the next few decades, and thankfully, leaders recognise the need to bring patients and staff with them into the digital era. The panel discussion chaired by Caroline Stanger highlighted how the digital transformation is conducted at both a local and national level, and Clare Thomson of Imperial College Health Partners gave real world examples of this.

  3. Virtual Reality

    Before you bin your Meta Quest headsets (not that you would at that price), VR may be the next big thing on the medtech scene. Already successfully employed by training simulators, e.g. Bodyswaps, VR may also have therapeutic use in areas like mental health. An interesting panel discussion from Aileen Jackson showed the potential for VR as both a teaching tool and a therapy in and of itself. Incredibly exciting is that creative studios like Anagram are now making ADHD simulation experiences, helping people understand what the experience of a mental health condition is like.

Not:

  1. Burnout

    Medtech is targeting healthcare worker burnout in both systems and technology. Ambient voice technology like that of O.S.L.E.R and Nabla could be a major factor in reducing the administrative burden of a medical consultation. Another potential lifesaver is robotic process automation (RPA); the automated emailing of public health information, such as that by the Berkshire NHS Health Visiting Team, has been beneficial for both patients and the administrative team.

  2. Ignoring the experts

    Clinician input is vital, as evidenced by the best digital products. Pathways developed in collaboration with clinicians, e.g. Ramai Santhiripala’s streamlined surgical care, stand the best chance of success.

  3. Sitting still

    Digital innovation continues apace and AI still features heavily in the discourse. Certainly, at a Digital Healthcare Show I expected momentum, but the pace, and the belief that tech solutions can solve problems, was greater than I expected. We need more of this conviction, alongside proven success and greater clinician and patient involvement, to really have an impact through digital health.

Women’s Health Reimagined - Talking Tech's Panel Discussion

With a potential market of 50% of the world’s population, the #femtech industry is poised for huge growth. Women’s health, for so long a Cinderella subject in the broader medical field, has in recent years come into the spotlight. It makes sense that tech would follow suit.

Tonight’s panel discussion hosted by #SODA Socials drew interesting insights from figures in leading women’s health companies like Flo and Elvie. Tackling big questions like data security and challenges in securing funding, the panelists shared their experience in this important field.

Panel discussion:

Femtech is taking off and developments in the area are long overdue. Pablo Solano, Product Designer Manager from Flo, explained how the period tracking app has branched out from a simple cycle calendar into an emotional communication tool, allowing women to communicate how they feel to their partner via the app.

The latter development was an example of user-driven feedback, which the team took to heart in the app’s development and is now a successful and profitable feature. Whilst I’m not sure whether digital notifications should be the default method of communication in a relationship, if it leads to an open discussion, I suppose that’s a good thing.

Smriti Garga, Senior Product Manager at Elvie, led questions about the particular challenges femtech products face. Using the examples from Elvie’s early days, where founder Tania Boler faced VC pushback about a ‘niche’ product (that pertains to almost 4 billion people), Garga explained how even today the word ‘vagina’ can put investors off. But their products, including their lauded breast pump and digitised pelvic floor trainer, help women manage important features of their health and as the taboo lifts, the industry is likely to go from strength to strength.

The last panelist, Rachel O’Donnell, raised an optimistic vision of the future, championing coding and open discussion of women’s health. Giving useful suggestions like how to make a successful product (look at competitors and work out a niche) and survive a hackathon, O’Donnell spoke to the more tech-literate members of the audience.

The future:

As ever, AI seems to generate a lot of the buzz for the future, with belief that symptom-checking, cycle-analysing chatbots are likely to be key in the field. Yet again tech experts feel the future lies in having an AI intermediary between patient and physician. They may well be right.

Personally, I am looking forward to smart tech that can actually help in diagnosis or better yet, symptom control for many women’s health issues. I think products that have potentially huge impact include self-sampling for cervical cancer (for example Teal) and menopausal symptom control e.g. Grace’s hot flush-combatting bracelet.

For too long, women’s health has been a neglected and shamed field. I’m glad I live in an era that sees it given the attention, technological and otherwise, that it deserves.

The Cutting Edge… Imperial’s Diagnostics Showcase

A free lunch, South Kensington and the newest medical diagnostic technology - what’s not to like? Yesterday’s Medtech Links Event from Imperial College London was an inspiring showcase of the medtech being developed by scientists from the prestigious institute.

The phrase ‘it’s impossible until it’s done’ comes to mind when considering some of the devices, but this is exactly what medicine needs - experts who think big and then make it happen.

The event, held in the impressive Royal School of Mines, started with sandwiches and posters of the work that would be discussed in the subsequent sessions. As a lone GP (signified as an ‘external’ audience member by a red dot on my badge) I could see the majority of attendees were from Imperial (with a blue dot on badge). But with many of the posters featuring point of care diagnostics targeted for rollout in primary care (yay!), I wished there were more coalface clinicians at the event to take inspiration and get involved in the innovation.

Unfortunately, I could only attend the first hour of the showcase triptych, but the four talks from that session were eye-opening. Sylvain Ladame opened with work from his lab developing direct-to-skin patches that detect cancerous skin lesions using microRNAs. It’s incredible to think that one day a GP may be able to apply a squidgy plaster next to a mole and, a day later, have results back from the lab indicating whether the lesion is likely to be cancerous. In combination with the AI tools from Skin Analytics discussed in my last post, skin cancer diagnosis will be revolutionised over the next few years.

Delving into smaller and smaller entities, Leah Frenette’s talk about nanozyme technology was mind-boggling. Tiny particles that act like enzymes (ahem, nanozymes) are being used to amplify weak signals, for example on the strip of a lateral flow test. Although LFTs for covid, and pregnancy tests, seem to work well at the moment, the potential for use with new biomarkers, e.g. cancer proteins, could transform how we diagnose and detect diseases.

A company that seems to be well on the way in this process is ProtonDx, whose CEO, Prof Pantelis Georgiou, gave a run-through of their diagnostic technologies, which are already on the market. Using X-Men-like features (magnetic beads that amplify nucleic acids from a sample!), ProtonDx’s shoebox-sized instrument can perform pathogen screens in less than 30 minutes. For a former hospital doctor who would often wait days for a respiratory virus screen to come back from the lab, the thought of obtaining a diagnosis in such a short time frame, and in any context, is incredible. And adding further awe to the mix, Prof Georgiou showed ProtonDx’s technology being used in Africa to detect malaria. The potential of such technology is huge.

Rounding off the four talks was Session 1’s Chair, Professor Hanna, explaining the use of breath testing to diagnose upper GI cancers. The surgeon explained how different gastrointestinal tract cancers gave different profiles of volatile organic compounds (VOC), which could be detected from something as simple as a patient breathing into a tube. Although the concept exists already for detecting H. pylori in the gut, the thought of cancer being detected this way is concept-breaking. The technology is now set for validation testing and is being trialled in multiple NHS Trusts across the country.

The underlying thread linking all the innovations discussed was the use of a simple action (collecting a breath, placing a patch or taking a drop of bodily fluid) in an incredibly complex scientific process. For the user, however, the diagnostic result seems to be obtained simply. If I hadn’t understood the science behind it, I would think it worked through magic.

But this isn’t magic, this is science and the best kind: it is diligent, sustained application of knowledge to make medicine appear simple. Ground-breaking and exciting, I cannot wait to hear more.

Dermatology: The Next Frontier

I’m not a fan of dermatology. Not so much because I find skin things weird, but rather I often struggle to diagnose skin conditions. Unfortunately, given that 14% of presentations to a GP involve a dermatological element (1), a significant portion of my clinic is spent puzzling over a mole or rash.

Here to save me are AI-driven medical diagnostics, which have transformed dermatology in recent months. With some platforms boasting 97% accuracy (2) in diagnosing skin cancers, an AI interface between primary care and specialist services could soon be the default in skin pathways.

Major players include Skin Analytics, who lead the field with their AI-driven software, DERM, which is the only UKCA-approved Class IIa AI as a Medical Device for skin cancer. A skill that used to be the sole remit of consultants with many years’ experience and learning can now be performed with astonishing accuracy by a computer.

And the software is democratising in its reach. Platforms like Metaoptima’s ‘Molescope’ offer patients the chance to take high quality photos of their own concerning lesions, to be sent on for further assessment. This is miles ahead of the grainy phone images sent to me at the moment.

With AI taking off, advances are being made at an astonishing rate, and these developments are present in almost every medical speciality. From RetinAI’s data-processing systems, which give AI-driven analysis of retinal OCT images to help diagnose and monitor retinal disease, to the 9 NICE-approved AI-driven programs (3) that analyse how best to contour radiotherapy, AI diagnostics are here to stay.

Such advances present philosophical questions about the place of the human physician in all of this. Personally, I have no qualms coming second place to AI when it comes to diagnosing a skin lesion. If it means that diagnostics are safer, quicker and more accurate, I’d go with that system any day. After all, the guiding principle of healthcare is to do good by a patient and if that good is done better, more accurately and safely by a computer, so be it.

In an oversubscribed healthcare system, safe and verified software that takes the burden of diagnosis off a clinician can only be a good thing.

  1. https://bjgp.org/content/70/699/e723

  2. https://skin-analytics.com/performance/

  3. https://www.nice.org.uk/guidance/hte11/chapter/1-Recommendations

Medtech goes to the Movies

Is the future written? Einstein himself wrote:

‘For those of us who believe in physics, the distinction between past, present and future is only a stubbornly persistent illusion.’

If Einstein says so, it’s probable that the future is not as remote and unknowable as we currently think it is.

Even if you don’t go as far as to believe the future has already happened, there is an innate human desire to predict the future. Be it a sneaky glance at the horoscope or a ‘feeling’ about the lottery numbers, many of us would like to know what’s coming up.

But whilst science gets on with the number crunching to make that a reality, as it already has with weather forecasting and climate change simulations, I turn to less solidly researched fare. I believe the future lies in film.

Just as Douglas Adams’ ‘Hitchhiker’s Guide to the Galaxy’ nailed the concept of the smart phone, here’s my list of films that may hold the answer to what’s coming up in the medtech scene:

Gattaca

Story: Genome testing at birth predicts a person’s risk of disease and thus assigns them to a particular stratum in society. Based on his genetic profile, the main character, Vincent Freeman, is predicted to die aged 30, which in this eugenics-focused society relegates him to the ‘in-valid’ class. Limited to menial jobs in this caste-based society, he tries to fool the authorities by using someone else’s DNA in genetic verification tests to achieve his mission of becoming an astronaut.

Reality: We’re not too far off the premise of Gattaca. With initiatives like the 100,000 Genomes Project and the UK Biobank forming repositories of sequenced genomes and associated health profiles, it’s not far-fetched to think that every human may have their genome profiled at birth to predict their risk of various diseases. It raises interesting ethical questions as to the line between advantageous personalised medicine and Gattaca-like bias and anxiety about health risks.

How close to the story is reality? 70% of the way there (based on already widespread genome sequencing. The societal implications are not in play… yet)

Never Let Me Go

Story: Children grow up in an isolated boarding school, where they find out they are genetic clones of other humans and will eventually be used as organ donors. There’s also a complex love triangle in it.

Reality: Again, the power and potential peril of genetic science is explored. Although we may balk at the idea that a person would ever be used in such a way, the existence of saviour siblings and transplant tourism shows parallels.

How close to the story is reality? 40% (based on the fact that we don’t allow human cloning. The science may be prohibited but unfortunately the ethical issues are already in play).

See also: My Sister’s Keeper

The Matrix

Story: Reality is not all it seems. The iconic Matrix movie draws back the veil to reveal that Neo’s perceived daily life (very similar to yours and mine) is a computer simulation called the Matrix, and that he actually lives in a bleak robot-ruled world, where his bioelectric energy is siphoned off to power the computers.

Reality: With big hitters like Meta bringing out their VR goggles, living a different life and having a different experience becomes more of a possibility. What does this mean in healthtech terms? Anything and everything. The average human being may soon be able to experience what it is to run like Usain Bolt, amongst other things. The possibilities are huge, empowering and possibly dangerous.

How close to the story is reality? 60% (or perhaps 100% if we are actually in the Matrix)?! With Meta’s goggles, VR is already out there. The question is, how far will it go and will it be used to bring new physical and health-related experiences?

See also: Ready Player One

Chuck

Story: A computer store employee watches a video of encoded snapshot images, which uploads a database of US Government intelligence into his brain. Subsequently, when seeing certain images, he remembers certain intelligence secrets and, with later versions of the program, automatically learns languages and martial arts. Japes ensue.

Reality: I don’t know if we’re there yet but the aspiration is high. The thought of uploading information easily into the human brain is a huge area of interest especially to big players like Neuralink, who are trying to do this with hardware. I’ve always thought there has to be an easier way to remember knowledge rather than rote memorising and revision. Maybe this is the future?

How close to the story is reality? 65% (based on the fact that HRL Labs have already improved human cognition speeds using a ‘scalp cap’, and on Neuralink’s recent brain-computer interface successes. Subliminal messaging and brainwashing have already been tried and tested, but the success rates are uncertain as the CIA just doesn’t release that kind of information.)

See also: The Manchurian Candidate, Zoolander

Ex Machina

Story: A software programmer is invited to a billionaire’s private estate to see whether the billionaire’s pet project, a humanoid robot powered by AI, passes the Turing test. Lots of complex relationship dynamics and trickery.

Reality: The thought of building helper robots has long existed and has been posited as a solution for human labour. Imagine having a convincingly intelligent robot-carer to keep everyone happy, healthy and motivated. However, AI poses a huge ethical unknown, especially if it becomes sentient a la Ex Machina. Thought-provoking and highly relevant in today’s brave new world.

How close to the story is reality? 70% (with the recent advances in AI, machine sentience seems ever closer. And with incredible robotics being designed by the likes of Boston Dynamics, the plot of Ex Machina seems ever more like reality.)

See also: Blade Runner, Humans, I, Robot, Elysium

Red Dwarf

Story: A spaceship worker is put into stasis and wakes up 3 million years later to find everyone on his ship has died, leaving him alone with only a hologram of his annoying roommate, a super-intelligent computer and a bipedal cat for company.

Reality: This is a fun one for the future. With concepts like bodily stasis and the technology to preserve consciousness, Red Dwarf takes incredible ideas and presents them with the most humorous of touches. After so much dystopia, here is health technology presented in a fun, unthreatening form.

How close to the story is reality? 35% (I don’t know where we are on cryostasis and interplanetary travel but I’d like to see it when it comes).

See also: The Hitchhiker’s Guide to the Galaxy

The (Virtual) F1 in the Corner

Occasionally a product turns up that changes the way you think about work. So much of my time as a doctor has been spent writing patient notes, that I took it for granted. From my days as a newly qualified Foundation Year 1 (FY1) doctor, I would scribe for my consultants as they did hospital ward rounds, and now as a GP, many of my extra hours of overtime focus on documenting the numerous patients I’ve seen in the day. Honestly, I didn’t think there could be any other way.

But I’m pretty sure that’s all going to change in the next few years.

Yesterday’s launch of TORTUS’ latest medical AI co-pilot software, O.S.L.E.R, showed me that pretty soon each clinician could have their own invisible scribing FY1 right there on their computer.

So how does it work?

Exactly like those consultant ward rounds, the software ‘listens’ to the consultation (i.e. records it) and then transcribes it word for word. Using AI wizardry, the program then analyses the transcription and writes a potted summary in a template you can adjust to suit you.

The clinician reads through the summary, makes any adjustments and then hey presto! Your consultation is interpreted and documented for you with very little writing effort on your part.
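The record-transcribe-summarise loop described above can be sketched in a few lines. This is a hypothetical illustration only: the function, the template and the speaker tags are all invented, and a real co-pilot like O.S.L.E.R would use speech-to-text and a language model rather than simple string matching.

```python
# Hypothetical sketch of an AI-scribe pipeline: record -> transcribe -> draft
# note for clinician review. All names here (summarise, SOAP_TEMPLATE, the
# speaker tags) are invented for illustration.

SOAP_TEMPLATE = ("Subjective", "Objective", "Assessment", "Plan")

def summarise(transcript: str) -> dict:
    """Route each tagged utterance into a section of a templated draft note."""
    note = {section: [] for section in SOAP_TEMPLATE}
    for line in transcript.splitlines():
        if line.startswith("Patient:"):    # what the patient reports
            note["Subjective"].append(line.removeprefix("Patient:").strip())
        elif line.startswith("Exam:"):     # examination findings
            note["Objective"].append(line.removeprefix("Exam:").strip())
        elif line.startswith("Doctor:"):   # the doctor's stated plan
            note["Plan"].append(line.removeprefix("Doctor:").strip())
    return note

transcript = (
    "Patient: I've had a cough for three weeks.\n"
    "Exam: Chest clear, no wheeze.\n"
    "Doctor: Let's arrange a chest X-ray."
)
draft = summarise(transcript)
# The clinician then reviews and edits `draft` before it enters the record.
```

The key design point is the final step: the AI only produces a draft, and the human check remains in the loop.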

Impressions

I like it. I really really do. For me, it’s like having a medical student doing the writing for you, taking one more task off your hands. Sometimes in the crush of a GP clinic, I have to complete my notes at the end of the session, and patient details blur or are overlooked. This software would take some of that cerebral load off my hands by:

a) having a complete record of the consultation for my reference

b) taking away the effort of writing up notes and minimising typos

c) to some degree, organising my thought processes into the medical notes format (rather like the best kind of FY1).

Limitations

  • There’s something off-putting about having a full audio record and transcription of a patient-doctor consultation. It seems a bit Big Brother to me, even though I know all GP telephone consultations are already recorded.

  • Clinicians may have to adapt their consultation style, speaking out results and thought processes, in order for it to be recorded and then transcribed by the AI. This could throw off the natural flow and interaction between patient and doctor.

  • For the atypical patient consultation e.g. patients who may not ‘stick to a script’ or those using an interpreter, this may pose a level of complexity the AI may not currently be able to deal with. Luckily, I still know how to type and can write this up old-school.

So all in all, it’s an intriguing prospect. For those of us who are still waiting for TalkType to come to our practice, the medical AI co-pilot seems like a distant vision; however, judging by the competitive market of Tortus, Nabla and Abridge, they are undeniably out there. Whether a legion of despondent NHS doctors has enough in the tank to adopt the software remains to be seen, but we have to hope that all things, be they working practices, medical software or healthcare systems themselves, will improve over time.

The TORTUS panel discussion involving CEO Dom Pimenta, Annabelle Painter, Karan Koshal and Rozell Kane

Testing 1, 2, GP

As if on cue, the area in which I work has just started to roll out new software for GPs to request imaging from our local hospitals. No more longhand forms with blank spaces for typo-filled reasons why this patient with a chronic headache may warrant a CT scan. Instead, doctors now run through a number of questions to submit their request.

So how was a real-time introduction of an order communication system in primary care? Let me take you through…

Training

An 8-minute YouTube video taking you through a shared-screen presentation of how to order an investigation from the electronic record system (ERS) that we use. Not bad; I don’t think I need someone holding my hand in person, or anything longer.

Process

It took at least 10 clicks to submit the request, with some previously overlooked fields now mandatory, e.g. prompts to justify the patient’s exposure to ionising radiation. Quite a lot of clicks and drop-downs compared to the basic text input of the old emailed document.
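The mandatory-field gating behind all those clicks can be imagined roughly like this. A sketch only: the field names are invented for illustration and don’t reflect the actual ERS product.

```python
# A rough sketch of the mandatory-field check an order-comms system might
# run before allowing submission. Field names are invented examples.

REQUIRED_FIELDS = (
    "patient_id",
    "investigation",
    "clinical_reason",
    "ionising_radiation_justification",  # the newly mandatory prompt
)

def missing_fields(request: dict) -> list:
    """Return the mandatory fields still blank; an empty list means submittable."""
    return [f for f in REQUIRED_FIELDS if not request.get(f, "").strip()]

request = {
    "patient_id": "1234567890",
    "investigation": "CT head",
    "clinical_reason": "Chronic headache with red-flag features",
    "ionising_radiation_justification": "",  # left blank -> blocks submission
}
missing_fields(request)  # -> ["ionising_radiation_justification"]
```

This is exactly the trade-off described above: the computer’s pedantry guarantees completeness, at the cost of freestyle form-filling.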

Advantages

  • The software has more integrated features - I can see recent requested imaging in the same place I’m ordering the test and the results feed back into our ERS directly. I can also see the status of the request e.g. is it pending or completed, which is reassuring compared to emailing a document into the ether.

  • Autopopulating a lot of the answers from the patient’s health record is a great feature and could be revolutionary if it eventually automatically imports blood tests and relevant information without the doctor having to look this up themselves. Real potential here…

Disadvantages

  • Transition to a new system is always a strain and I know there will be some patients who will be caught out. Undoubtedly some old referrals will be rejected or missed during the switch to the new electronic system.

  • The UX could be improved. There are still quite a lot of little clicks and decisions, which at the end of a full clinic, I have little patience for. Sometimes freestyle form-filling is preferable to the ‘all fields must be completed’ computer pedantry.

  • I haven’t pushed the system yet, but sometimes a doctor’s request doesn’t tick all the boxes. My reasons may not fit all the guidelines but I still want a test done. I wonder whether the current software will allow for blurred lines or if it is too rigid in its specifications? Watch this space…

Rewired Retro-ed: What’s hot and what’s not

And that’s a wrap. The posters have been put in their cardboard tubes and the games and gimmicks packaged away until the next conference. Two days of the latest in Digital Health #Rewired2024 are over and Birmingham NEC can devote itself to the next trade fair. And I have a lot to think about.

You couldn’t attend? Not to worry, I’ve got the scoop. Here’s the latest in the Digital Health world:

Hot

  1. Digital Decision Support Tools (DDST)

    The flavour of the month seems to be DDSTs - online programs that ask algorithmic questions whilst incorporating and adhering to guidelines. For clinicians, it is the multi-click equivalent of going through the WordArt flowcharts on the consulting room walls. Big hitters like ICE and VAR Healthcare are implementing their own systems, with purported success. But I have some queries about decision fatigue when a computer makes you jump through hoops.

  2. Remote Patient Monitoring

    With a push to save on NHS appointments and in-person contacts, remote patient monitoring is increasingly on trend. Nifty image-transferring software like Isla Health and Accurx is trying to bridge the virtual gap between patient and clinician.

  3. Innovation

    Buzzword of the moment (second only to AI). Heartening to see that all improvements are welcomed, with NHS Innovation’s platform open to small innovators as well as large. However, I wonder whether innovation is the new ‘resilience’ - a replacement term for wholesale improvement in working conditions in the NHS.

  4. AI

    No tech conference today would be complete without AI making it into the three most mentioned topics. AI our saviour, miracle and possible paperclip-apocalypse frenemy. Lots of talks, lots of promises, AI is (quite literally with its chatbots) the talk of the town.

What’s not:

  1. Primary Care

    Pilot projects galore are being implemented in Secondary Care but fewer in Primary Care. C’mon guys! With the largest volume of NHS activity occurring in primary care (1), companies are missing out on a huge potential market. But it’s reassuring that companies like Isla Health and Better are hoping that 2024 is the year of the GP. Hope that’s not just lip service…

  2. Informed consent

    Data, data everywhere, but unanswered questions as to who gets to use it and whether patients fully realise what their information can be used for. Data may be anonymised, but can then be incorporated into many weird and wonderful algorithms looking at health predictions and risk. I wonder if patients truly know what is happening to their data. But then again, it’s no worse than what Google or Meta are doing…

  3. The Working Reality

    I love the positivity of med tech, I truly do. EPRS, DDST and AI all promise a revolution in terms of efficiency and if it’s true, kudos. However, we haven’t yet done away with doctors in the NHS (getting there though!) and, for all the miracle solutions offered by the many companies at Rewired, I wonder whether the actual fundamentals of the crisis in the NHS (lack of a nurtured, respected workforce, complex patients with health and social issues, etc.) are being ignored in favour of a neat, digitised UI.

So that’s my take on it. Fascinating talks, great achievements and food for thought from Digital Health Rewired 2024.

(1) https://www.kingsfund.org.uk/insight-and-analysis/data-and-charts/NHS-activity-nutshell

Selfie successfully obtained, thanks Rewired for making it clear

How to Win at Conferencing

For those of us who follow the med tech world, it’s conference season. With Rewired kicking off in Birmingham, I’m going to give you the beginner’s guide to making the most of any conference you go to:

  1. Grab a program, pen and paper

    It may be sacrilege at a digital conference to suggest analogue, but sometimes you can’t beat pen and paper. At the start of any conference I grab a program and look at the different streams of talks and events. I circle the must-sees in one colour and the maybes in another and then use that to structure my day.

  2. Do a recce

    When it comes to exhibitors, I always start each conference by doing a walk round of the different stands, seeing what’s out there and what looks interesting (this also works for scouting out good freebies or competitions). Using my floor-plan I circle stands to come back to and tick them off between must-see talks.

  3. Make use of the cloakroom

    Wet umbrellas, winter coats and too much stash? Remember most conference venues will have a cloakroom, where you can dump your stuff. Don’t risk unnecessary shoulder ache from a heavy bag. Check it in.

  4. Network

    Conferences are a great way to meet new people and make connections with exciting companies and organisations. Have the LinkedIn app downloaded and add new contacts there and then using the QR system.

  5. Take breaks and know when to call time

    You don’t have to see everything, everywhere, all at once (although do see the movie - it’s excellent). Know that you won’t be able to listen to every speech or visit every interesting stand. There may be an element of FOMO but that’s ok - there are always more conferences and other opportunities to connect e.g. reaching out online.

So those are my 5 top tips for conferences. Enjoy the chocolate miniatures and I’ll see you there!

Learn from my mistakes earlier at the Best Practice conference at Olympia - check the bag in!

The EPRS of a Doctor's Dreams

When I started as a doctor, over a decade ago, documentation methods were pretty much as they had been for the previous 20 years. At the end of each ward was a huge clunky trolley, which held patient notes. The sacrificial lamb of the ward round (usually an F1) would be tasked with wheeling the trolley around so the consultant’s review could be recorded in real time. Some sets of patient notes would be so big the manila folders would break, leaving an unwieldy sheaf of papers held together with a treasury tag.

paper patient notes… A fossil?

Archaic is not the word.

Over my career I’ve seen the digitisation of the health system. NHS Digital states that 90% of trusts now use electronic patient record systems (EPRS) (1), in line with the Government’s 2022 Plan for Digital Health and Social Care (2). Yet although these systems fall under the umbrella of med tech, a field that boasts electronic medical marvels like cochlear implants and automatic defibrillators, many medical software systems remain impractical and poorly designed.

In my day to day life as a GP, EPRS are the digital health technologies I interact with most. Instead of the giant paper notes of yore, they are the systems recording the patient’s medical history on computers and clouds. When they are good, they can be very good, shearing off minutes from my precious 10 minute consultation, which I can then give back to my patient.


But when EPRS are bad, they can be horrid. In the worst-case scenario, EPRS with non-intuitive design can lead to prescribing errors, one of the worst mistakes to make due to the direct impact on the patient. Among the lesser evils are slow systems that freeze when you’re trying to work at GP light speed, or tech rollouts that don’t take time to onboard their users. Some of this can be mitigated by intuitive, smart design that minimises the need for multiple mandatory training sessions (med tech companies could take a leaf out of Apple’s child-friendly UX). We may be doctors, but that doesn’t mean we want using our EPRS to feel like sitting the BMAT on a daily basis.

So if I could shout out to any EPRS company out there (hey EMIS! I see you!), it would be this:

The EPRS of a GP’s Dreams:

Is:

  1. Safe

(Simple steps to prescribe medications and record information. Warnings in place if an error looks like it’s being made)

  2. User-friendly

(Not too many steps to log-on, do a task, streamlined, intuitive)

  3. Fast

(No freezing/crashes please)

That’s not too much to ask, is it?
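On the ‘Safe’ point, the kind of pre-prescription warning I have in mind could be sketched like this - a toy example with invented drug data, not clinical logic from any real EPRS.

```python
# A toy illustration of 'warnings in place if an error looks like it's being
# made': check a new prescription against recorded allergies and known drug
# interactions. The allergy classes and interaction table are invented
# examples, not clinical data.

ALLERGY_CLASSES = {"penicillin": {"amoxicillin", "flucloxacillin"}}
INTERACTIONS = {frozenset({"warfarin", "aspirin"})}

def prescribing_warnings(new_drug, current_drugs, allergies):
    """Return warning strings the EPRS should surface before issuing the script."""
    warnings = []
    for allergy in allergies:
        if new_drug in ALLERGY_CLASSES.get(allergy, set()):
            warnings.append(f"ALLERGY: {new_drug} contraindicated ({allergy})")
    for drug in current_drugs:
        if frozenset({new_drug, drug}) in INTERACTIONS:
            warnings.append(f"INTERACTION: {new_drug} with {drug}")
    return warnings

prescribing_warnings("amoxicillin", ["warfarin"], ["penicillin"])
# -> ["ALLERGY: amoxicillin contraindicated (penicillin)"]
```

The point is that the safety net sits in the prescribing step itself, rather than relying on the clinician to spot the clash at the end of a long clinic.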

  1. https://digital.nhs.uk/news/2023/90-of-nhs-trusts-now-have-electronic-patient-records

  2. https://www.gov.uk/government/publications/a-plan-for-digital-health-and-social-care/a-plan-for-digital-health-and-social-care