
Ethical uses of artificial intelligence in healthcare [PODCAST]


In this episode, we are joined by Wade Wright, Chief Technology Officer at BESLER, to discuss the challenges in leveraging artificial intelligence in healthcare.  

Learn how to listen to The Hospital Finance Podcast® on your mobile device. 


Highlights of this episode include:

  • Examples of ethical and potentially unethical uses of artificial intelligence in healthcare.
  • How the use of private data by providers affects patient privacy as it relates to artificial intelligence.
  • Ways in which artificial intelligence can enhance the revenue cycle at hospitals.
  • How clinicians and hospitals can rely on and leverage artificial intelligence for clinical decision making.
  • And more…

Mike Passanante: Hi, this is Mike Passanante and welcome back to the award-winning Hospital Finance Podcast. While artificial intelligence in health care holds the promise of tremendous advances, it also comes with perils and ethical questions that need to be considered. Today I’m joined by Wade Wright, Chief Technology Officer at BESLER, to discuss some of the challenges in leveraging artificial intelligence appropriately in the health care space. Wade, welcome back to the show.

Wade Wright: Thank you.

Mike: So, Wade, you work with artificial intelligence applications regularly. What are some of the examples of ethical and potentially unethical uses of artificial intelligence in health care?

Wade: As with most every tool in existence, the tool itself isn't really ethical or unethical. It's really about how it's used and applied, and interestingly, sometimes it's even a matter of your perspective. For example, both the Obama and Trump campaigns used large amounts of Facebook data with machine learning and AI to try and influence voters. And depending on which end of the political spectrum you find yourself on, you may believe one of those was ethical and the other was not. In health care, we're always wrestling with questions of whether things are ethical or not, and the use of AI and machine learning, and really any other data science technique, is going to be no exception. I mean, imagine if we could build an AI system that could use genetic information from any of the genealogy sources that are out there, and we could predict with 100% certainty whether or not a child would be born with a significant but treatable condition such as diabetes. Ethically, we could immediately lay out a course of treatment for the entire life of that person. Unethically, as a payer, we could deny coverage for that child because we know the care is going to be very expensive forever.

Mike: Yeah. Certainly some issues there. Wade, let’s discuss data bias. Artificial intelligence obviously is only as good as the data its algorithms have to work with. How can this create bias in the outputs that could lead to unwanted results?

Wade: Sure. Data bias is something we constantly have to fight against when working in these types of data science fields, especially with machine learning and artificial intelligence. In short, it means that your data is incomplete in some aspect, which causes your model and your algorithms to be incorrectly skewed in some manner. So for example, if we were going to build an AI application that predicted the likelihood of someone developing heart disease by the age of 50, and we only trained it on a data set of, say, Hispanic females between the ages of 15 and 29, it's pretty easy to understand how that application probably doesn't fare very well for everybody. However, a much more difficult data bias problem involves data points and variables that aren't so cut and dried in how they apply to the problem. So for example, imagine we were going to build an application to predict how well our favorite sports team is going to perform this season, and one of the data points is the type of field they play on, whether it's grass or AstroTurf. Our application might predict that we win every time we play on AstroTurf because historically we do win 100% of those times. The truth may be that we always win when we aren't affected by the wind, and therefore it's an enclosed stadium that's the crucial data point, not necessarily the turf.
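To make the first kind of bias concrete, here is a minimal, self-contained Python sketch. Everything in it is synthetic and invented for illustration: the "risk factor," the age-dependent disease relationship, and the age cutoffs are assumptions, not a real clinical model. It trains a classifier only on a young slice of the population and then evaluates it on everyone else.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 20_000

age = rng.uniform(15, 70, n)
risk = rng.normal(0.0, 1.0, n)  # a generic, invented risk factor

# Assumed ground truth: the risk factor's effect flips sign with age,
# so a model trained only on the young cannot extrapolate to the old.
logit = 0.08 * (age - 40) * risk
develops_disease = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, risk])
young = age < 30  # the biased training slice from the example above
model = LogisticRegression().fit(X[young], develops_disease[young])

old = age >= 50
print("accuracy, ages 15-29:",
      round(accuracy_score(develops_disease[young], model.predict(X[young])), 3))
print("accuracy, ages 50-70:",
      round(accuracy_score(develops_disease[old], model.predict(X[old])), 3))
```

The model scores reasonably on the ages it saw and worse than a coin flip on ages 50 to 70, which is exactly the kind of silent failure Wade is describing.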

Mike: Yeah. Great analogy, Wade. Let me move over to privacy, because privacy's a big issue in health care, of course. HIPAA really doesn't cover tech companies like Facebook or genetic testing companies that have been known to collect health care data and sell it to health care organizations. How does the use of that data by providers affect patient privacy as it relates to artificial intelligence?

Wade: HIPAA, for the most part, is about protecting an individual's data. Here at BESLER, as you're aware, we take the security of all HIPAA-related data very seriously and we go to great lengths to protect it. Every client we have has the same attitude and treats it the same way because we're all in health care. But for the non-health care world, this concept is kind of foreign, with the exception of things like social security numbers, bank accounts, some of those things. So imagine for a moment that we were a new web startup business and that HIPAA didn't actually exist. Our new little company's idea is to aggregate all of a person's health-related data across an entire medical episode to improve life expectancy.

So for example, Wade goes to his family doctor, who through a series of lab tests determines he has a serious condition, and the family doctor refers Wade to a specialist, who in turn recommends a surgery, which is performed at a facility. After being released from the facility, Wade goes back into the care of his specialist, who eventually releases him back to his family doctor. During all of this, his primary care physician, specialist, and attending physicians at the hospital all prescribe various medications, some of which work, some of which fail. And sadly, let's assume Wade dies, and he died due to a particular combination of the medications that these doctors prescribed. Our little web startup could theoretically analyze the data from all of these encounters and predict future outcomes like this to help prevent such tragedies. While this is a relatively simple data science problem, due to the way HIPAA works, this type of data sharing is very difficult, virtually impossible, to do.
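As an illustration of the linkage problem this hypothetical startup would face, here is a small Python sketch. Every patient, provider, drug name, and the "dangerous pairs" list is invented for the example. It merges prescriptions from the family doctor, the specialist, and the facility for one patient, then flags medication combinations associated with bad outcomes:

```python
from itertools import combinations

# Records as they might arrive from three separate care settings.
prescriptions = [
    {"patient": "P001", "source": "family_doctor", "drug": "drug_a"},
    {"patient": "P001", "source": "specialist",    "drug": "drug_b"},
    {"patient": "P001", "source": "facility",      "drug": "drug_c"},
]

# Pairs flagged as high-risk in (hypothetical) historical outcome data.
dangerous_pairs = {frozenset({"drug_b", "drug_c"})}

# Link the episode: collect every drug prescribed to each patient,
# regardless of which provider wrote it.
meds_by_patient = {}
for rx in prescriptions:
    meds_by_patient.setdefault(rx["patient"], set()).add(rx["drug"])

for patient, meds in meds_by_patient.items():
    for pair in combinations(sorted(meds), 2):
        if frozenset(pair) in dangerous_pairs:
            print(f"ALERT {patient}: risky combination {pair}")
```

The code itself is trivial; as Wade says, it's assembling those three record sets in one place, lawfully and safely, that's the hard part.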

Conceptually, this is very hard for the non-health care world to get their heads around. So to directly answer your question, that ignorance of how and why we protect patients' data makes it easy to see how our personal data could be extracted from social media and used in relation to our health care, in both ethical and unethical manners.

Mike: So let’s look at the insurance and financial side of the equation. AI could potentially be used to determine whether or not policies might be issued, could be looking at propensity to pay, potential claim denial, some of the things we’ve spoken about already. Wade, what are some of the benefits of using AI in these situations and where might there be some concerns?

Wade: One of the biggest problems in the revenue cycle is how much it costs for a facility to get paid properly. Denials, self-pay, charity care, and several other areas all affect this. Hospital CFOs often roll this up into a days-in-AR metric that they're constantly trying to lower. Anything that helps drive that effort is typically well received, whether that's a revenue integrity initiative, a denials management tool, anything like that. AI can enhance nearly all aspects of the revenue cycle. Imagine a tool that can reliably tell you a claim will be denied before you ever drop the bill, or AI around your CDI practices that could constantly evolve and show you where you need to improve now and next. These capabilities and real-world uses of AI and machine learning are actively being developed now and will soon help drive down the cost of health care for all of us.
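As a sketch of what such a pre-bill denial predictor could look like: the claim features, the synthetic labels, and the risk threshold below are all assumptions for illustration, and a real tool would train on a hospital's own historical claims and remittance data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical pre-bill claim features, one row per claim.
X = np.column_stack([
    rng.integers(0, 2, n),    # missing_prior_auth (0/1)
    rng.integers(0, 90, n),   # days_from_service_to_bill
    rng.integers(1, 15, n),   # num_diagnosis_codes
    rng.integers(0, 4, n),    # payer_category (encoded)
])

# Synthetic ground truth: denials driven by missing auth and slow billing.
logit = 1.5 * X[:, 0] + 0.03 * X[:, 1] - 2.0
denied = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, denied, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score claims before the bill drops; flag high-risk ones for review.
risk = model.predict_proba(X_test)[:, 1]
print("claims flagged for review:", int((risk > 0.5).sum()), "of", len(risk))
```

Claims scored above the threshold could be routed to a review work queue before the bill ever drops, which is where the days-in-AR improvement would come from.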

Mike: Wade, on the clinical side, there’s this idea that gets batted around sometimes about AI replacing clinicians. That’s obviously futuristic at best, but what happens if physicians become over-dependent on AI, or organizations rely too heavily on AI for clinical decision making?

Wade: Well, for me personally, it's hard to imagine physicians becoming over-dependent on such tools. I don't really worry about carpenters becoming over-dependent on hammers. I know a hammer is critical to what they do, and it would seem silly if they didn't use one, but I also don't worry about a carpenter being so dependent on hammers that he'd use one to cut a board in half. I feel the same way about AI and machine learning. Sure, they're shiny and new now, and there's even a fair share of snake oil promises out there today. But physicians, clinicians, and all of the hospital office personnel are a very skilled group of people, and they'll use these tools appropriately, to all of our benefit.

Mike: Well, great insights, Wade. Lots more to come on artificial intelligence and some of the new technologies that are affecting hospitals, and in particular the revenue cycle, as we move forward. So we look forward to having you back on the program to talk about those as time goes on.

Wade: Awesome. Thanks for having me.
