S2 E3: AI in Action – Insights from ThinkSono’s Chief Medical Officer (Dr. Michael Blaivas)

Emmanuel Soto: What specific applications do you think have, so far, shown the most benefit from AI when it comes to ultrasound imaging?

Michael Blaivas: So, it's somewhat application- or field-specific. In traditional imaging, such as radiology or cardiology, it's really the workflow enhancements and improvements.

Some of the automation of calculations and estimations, like tracing out the borders of tissue, saves the sonographer a lot of time when done by AI.

So back to the business case, it gets the examination done quicker.

There's evidence to show that there's less inter-observer variation in the performance of these tasks and some of the calculations. I think those are the big developments for them. The really exciting part, and that's not to take anything away from traditional imaging, is in the Wild West of ultrasound imaging, as I mentioned, point-of-care ultrasound. There, probably the most exciting developments are actually the guidance aspects.

Part of the quandary for people looking at point-of-care ultrasound is this: if you create an application that calculates ejection fraction for you automatically, that means the operator can already get the views the machine-learning software needs to make those calculations. And if they can do that, then they're really just a little bit away from being able to estimate it themselves.

The real key to point-of-care ultrasound is enabling all the people who hardly know what an ultrasound probe is to somehow scan.

And that's the guidance aspect of the machine telling somebody, "Put the probe here, now adjust it this way. This is what you're seeing on the image." That's the part that I think surprised a lot of early developers in point-of-care ultrasound, and even the biggest names did some backpedaling to try to catch up.

Emmanuel Soto: How have you seen that adoption of AI from the facility side?

Michael Blaivas: We're not quite there yet. If you use the car industry as an analogy, we were promised a lot, but it was delivered a little bit at a time. Now, almost everybody who buys a new car in a moderate price range will have features like adaptive cruise control, so you don't crash into the car in front of you, and alerts if you shouldn't change lanes. Those little features are very helpful, and similar ones have been and are being introduced in traditional ultrasound imaging.

I think they're enjoyed more and more by sonographers; they're not threatening but beneficial. They speed things up, so the business case is there. I think hospital systems are on board with it, but the push is not necessarily from the top down. The CEO is not going out and saying, "Gosh, we need to buy machines from vendor X because they have this great AI."

It's really going to be driven both by the radiologists or cardiologists and the sonographers they employ because everybody wants to get through the patient quicker and be more accurate. Obviously, radiology and cardiology groups are under pressure, like everyone else, to try to generate more revenue and get more patients through per unit of time.

Emmanuel Soto: Can we trust AI?

Michael Blaivas: It's a trip in progress. I think it's very important that there are people who say, "Wait a minute, we need to make sure we can trust this," because the early adopters and champions of AI are going to get so excited about it that they're going to forget all reasonable caution and tear forward. So we need those people, but also people who have a little bit more caution and say, "I don't know about this, prove it to me."

What you see initially is the FDA and other regulators around the world allowing and clearing AI products, but with human oversight. It then becomes no different than reviewing an ultrasound examination recorded by somebody else when you're an expert; it's just that we enable anybody to do that recording.

Essentially, in time (and this is not something that's cleared yet), you can see that the progress will move towards more AI and less human intervention. As we have more data, people become more comfortable with it, and regulatory agencies become more comfortable with it. This will turn into a true, full manifestation of the AI capability, which is, as I like to say, like having that professor or expert standing over your shoulder.

Because if I'm standing over the shoulder of a nurse, a tech, or a medical student, I can guide them through the entire examination and be quite satisfied that it's done well and adequately, and I can rely on the results without ever having to touch the probe. The AI should be able to do that, as far as we can tell. It's just a matter of getting to that point safely and satisfying the regulators around the world, in this case in the United States and Europe, who are tasked with protecting patients.

And that’ll happen reasonably quickly, but not too quickly. I think when it happens too quickly, that’s when AI or other things can fall flat on their face and actually cause some harm.