A lot has changed since November 2022, when we published an interview with Dr Daniel Spratt about the use of artificial intelligence to help predict which patients with prostate cancer could benefit from the addition of hormonal therapy to radiation therapy. At that time, AI tools in medicine and in everyday life were still new to most of us. Today, AI is everywhere: routinely recapping virtual meetings, answering everyday questions, generating clinic notes from live recordings, and writing first drafts of letters and routine communications. (For the record, I have not used AI to write this or any other editorial.) None of these uses feels threatening to me; rather, I appreciate the efficiency they afford. On the other hand, I can’t help asking myself about the consequences of all this automated help. What are we at risk of losing when our computers supply answers to questions as fast as we can ask them?
One of the benefits of practicing in a comprehensive health system is interacting with our colleagues. Whether it happens in tumor boards, weekly grand rounds, or the clinic itself, real-time discourse encourages us to think. Questions raise further questions and challenge our dogmas and practices. We share experiences and knowledge that can change how we manage patients and even spur new research and quality improvement initiatives. If AI replaces these interactions with split-second data and answers, we may be more efficient in treating one patient but less informed for the next. If we lose the process by which we collectively arrive at patient care decisions, we might just find that we are no longer the critical thinkers that medicine requires. Learning becomes less important for doctors when all the answers are just a few seconds away.
As if all these AI-driven changes were not enough to create unease, imagine logging onto a computer and seeing yourself on the screen talking, only it’s not you. A company recently approached me about creating an avatar. Of course, I immediately envisioned myself completely blue with a tail. The avatar they were referring to, however, would be a virtual replica of me based on video footage. This avatar not only would look exactly like me but also would learn my mannerisms and my voice, from pitch to cadence. Although I was skeptical, I agreed because I was intrigued by the opportunity to have my double take on some work that I would never have time for.
In a studio, I read from a teleprompter a series of paragraphs, some of them medical, some not, while gesturing and speaking as I would to a professional audience. After about 30 minutes, we were done. About 2 months later, the company sent me a link to a demo of me speaking about a product indication. It was not what I had read, but it was accurate and on label. It was also very eerie. It sounded like me and it looked like me, but clearly it was not me. It was even more disconcerting when it began speaking in Mandarin. I had no idea what my avatar was saying at that point. It went on to speak in Japanese, Korean, and Spanish (the last of which I understood a little). Although each of these videos was clearly labeled as an avatar, I could still see the writing on the wall.
AI technologies are rapidly developing applications that may not only aid us in our roles but also extend, and to some extent replace, us. That is not all bad, as I will almost certainly never have the time to learn multiple new languages and travel to dozens of countries to speak in person. But if AI can already replicate my appearance and voice convincingly, what happens when it can replicate clinical reasoning and bedside manner? Could routine patient care be relegated to AI rather than remain the province of a living, breathing health care professional?
I am reassured that the uses for this avatar will be restricted to the content we have agreed to, and that I will review any future changes. At the same time, I am well aware that technology can be corrupted. AI tools can expand access to health care, particularly specialty care such as oncology. As physicians, however, we need to stay engaged with these tools, examine the quality of what they produce, and ensure that they do not misrepresent our field. Without those steps, we could spiral into a dystopian future.
Sincerely,
Daniel J. George, MD