I've written several articles about tech in healthcare, whether it's electronic medical records (EMRs), data analytics, point-of-care decision-making tools, or artificial intelligence (AI) and machine learning (ML). Each tech innovation, maybe every tech innovation, can play a role in healthcare. Why? There are so many areas ripe for improvement that could aid clinicians in cost-conscious, yet quality-driven, healthcare delivery.
I first wrote about tech in healthcare for Forbes.com with my March 2019 article, "Blockchain Technology May (Eventually) Fix Healthcare: Just Don't Hold Your Breath." In my 35 years in healthcare, one thing I've noticed: the next best tech thingy is "gonna fix everything" (and yet….). This is my third article about AI, and it feels a bit timely.
Starting point: AI is not an amorphous thing, a sentient being, or a master decision-maker somewhere just beneath God; it's a computer algorithm (programming) built by humans and deployed to empower computers to "learn" and craft advanced output based on that "learning." AI can, will, and should have an expanded role in healthcare and healthcare delivery, whether that's optimized patient visits; quicker, better data utilization and analytics; revenue cycle management; or reduced prior authorizations. But all of these decision-based tools require initial human input and intervention as their baseline and core code. While this may produce wondrous outcomes in the future, we must be cognizant of the building blocks in the coding of AI models.
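For the curious, here is what that "learning" actually looks like at its most basic. This is a minimal, hypothetical sketch in Python (using the widely available scikit-learn library; the feature names and data are invented for illustration), and the point is simple: humans choose the data, the features, and the algorithm before the computer "learns" anything.

    # A minimal, hypothetical illustration: "AI" is human-built programming
    # fit to human-chosen data. Nothing here is sentient.
    from sklearn.linear_model import LogisticRegression

    # Humans choose the inputs (features) and the labels (outcomes).
    # Invented data: [age_over_65, prior_admissions] -> readmitted (1) or not (0)
    X = [[1, 3], [0, 0], [1, 2], [0, 1], [1, 4], [0, 0]]
    y = [1, 0, 1, 0, 1, 0]

    # Humans choose the algorithm and its settings.
    model = LogisticRegression()
    model.fit(X, y)  # the "learning": fitting coefficients to the chosen data

    # The output is entirely a product of the choices made above.
    print(model.predict([[1, 3]]))  # a prediction, not a judgment from on high

The model's output can only be as good as the data and the design decisions that went into it, which is exactly where the risk lies.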
A recent and delicious lesson in hubris was the epic failure around the much-touted launch of an AI product (not healthcare-specific) by a large international tech platform. After its inauspicious launch and a rigorous (call it "brutal") social media "vetting," the product was found to have baked-in racial biases coupled with historical inaccuracies, which inevitably drove provably false outputs. The program was immediately shut down with some sort of flowery language about further required product development, yada, yada, yada. While this episode was both comedic and predictable, it's also frightening as we look to a future where we rely more heavily on AI and ML to help us compute and understand voluminous data. One hopes that the underlying programming is vanilla, free from any sort of bias, so as not to taint the end product. Yet this recent big-tech "product" is a prime example of how AI can go horribly awry.

The racial biases did not just "happen." As noted, AI is not a vat into which inputs are loaded and out of which a supremely smart computer offers quality outputs. No, AI is, as I've said before, kind of like a pyramid. The base must be soundly built and provide a solid foundation for the remainder of the building blocks to withstand the test of time. A poorly built base leads to deleterious outcomes (e.g., the collapse of the structure). The same holds true for AI: the value of the output is predicated on the underlying structure, the base of the pyramid. A poor base leads to a bad product; bad inputs (coding/construct) drive inaccurate, poor-quality, and dubious outputs that cannot be trusted.
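To make the pyramid point concrete, consider a deliberately simple, hypothetical sketch (again in Python with scikit-learn; the scenario and data are invented). If the historical data under-treats one group, the fitted model faithfully reproduces that skew. The bias does not appear at output time; it is baked in at the base.

    # Hypothetical sketch: a biased training set yields a biased model.
    # Features: [symptom_severity, group], where "group" is an arbitrary cohort.
    from sklearn.tree import DecisionTreeClassifier

    # A flawed base: patients in group 1 with identical severity were
    # historically under-treated, so the labels encode that bias.
    X = [[5, 0], [5, 1], [7, 0], [7, 1], [9, 0], [9, 1]]
    y = [1, 0, 1, 0, 1, 0]  # treated (1) or not (0); group 1 never treated

    model = DecisionTreeClassifier().fit(X, y)

    # Two patients with identical severity get different recommendations,
    # purely because the historical record was biased.
    print(model.predict([[8, 0], [8, 1]]))  # -> [1 0]

The toy model is beside the point; the skew lives in the data the humans supplied. That is the base of the pyramid failing.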
AI is growing and that’s ok. As evidenced by Nvidia’s meteoric and some might say insane earnings report and growth (ask some of your congressmen how profitable Nvidia has been for their portfolios; let’s just say many of these folks are better stock-pickers than Wall Street analysts. But I digress.) Healthcare delivery can certainly benefit from deployment of AI models. There are cost savings and potential care enhancements that may come to fruition based on proper deployment, and that’s a good thing.
Someone asked me the other day: "….do you do AI?" The answer is, surely, "…how do you define 'AI'?" I have a working grasp of the basics, the large language model (LLM) and ML pieces, and have coded a little bit. I know that I play a role in bridging the care-delivery gap between the data scientists, their outputs, and the clinicians and operators. I know that AI can and will have value in healthcare at some point. But I also know that there are very real landmines, defined and built by coders and programmers, that can bake in biases, and that is dangerous. Like I've said before: garbage in, garbage out.
As folks run to, or from, AI and its applications in healthcare, my caution might be this: don't hate Frankenstein's monster; hate Dr. Frankenstein. Put differently: the output is not the monster. The monster is the creator.
This article was previously published on June 4, 2024, at https://www.forbes.com/sites/jeffgorke/2024/06/04/ai-in-healthcare-good-or-bad-well-both/?sh=199049aa529e.
The information provided in this communication is of a general nature and should not be considered professional advice. You should not act upon the information provided without obtaining specific professional advice. The information above is subject to change.