Are they humans or AI? Credit: Bing Image Creator

Who Needs Humans, Anyway?

Kim Bellard
5 min read · Jun 3, 2024


Imagine my excitement when I saw the headline: “Robot doctors at world’s first AI hospital can treat 3,000 a day.” Finally, I thought — now we’re getting somewhere. I must admit that my enthusiasm was somewhat tempered to find that the patients were virtual. But, still.

The article was in Interesting Engineering, and it largely covered the source story in Global Times, which interviewed the research team’s leader, Yang Liu, a professor at China’s Tsinghua University, where he is executive dean of the Institute for AI Industry Research (AIR) and associate dean of the Department of Computer Science and Technology. The professor and his team just published a paper detailing their efforts.

The paper describes what they did: “we introduce a simulacrum of hospital called Agent Hospital that simulates the entire process of treating illness. All patients, nurses, and doctors are autonomous agents powered by large language models (LLMs).” They modestly note: “To the best of our knowledge, this is the first simulacrum of hospital, which comprehensively reflects the entire medical process with excellent scalability, making it a valuable platform for the study of medical LLMs/agents.”

In essence, “Resident Agents” randomly contract a disease and seek care at the Agent Hospital, where they are triaged and treated by Medical Professional Agents, who include 14 doctors and 4 nurses (that’s how you can tell this is only a simulacrum; in the real world, you’d be lucky to have 4 doctors and 14 nurses). The goal “is to enable a doctor agent to learn how to treat illness within the simulacrum.”
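For the programmers curious how such a simulacrum might be wired up, here is a rough, hypothetical sketch in plain Python. It is not the authors’ code; the class names, the toy disease list, and the call_llm() stub are placeholders of my own, just to illustrate the basic loop of patients contracting diseases, being triaged, and being treated by doctor agents.

```python
# Hypothetical sketch of an Agent-Hospital-style loop. All names and logic
# here are illustrative placeholders, not the authors' implementation.
import random

DISEASES = ["acute bronchitis", "influenza", "common cold"]  # toy disease pool


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; a real system would query a model here."""
    return f"[treatment plan generated for: {prompt[:40]}...]"


class ResidentAgent:
    """A virtual patient who randomly contracts a disease and seeks care."""
    def __init__(self, name: str):
        self.name = name
        self.disease = random.choice(DISEASES)

    def describe_symptoms(self) -> str:
        return f"{self.name} reports symptoms consistent with {self.disease}."


class DoctorAgent:
    """A virtual doctor who treats patients and keeps a record of each case."""
    def __init__(self, specialty: str):
        self.specialty = specialty
        self.experience = []  # accumulated cases, successful and unsuccessful

    def treat(self, patient: ResidentAgent) -> bool:
        plan = call_llm(f"As a {self.specialty} doctor, treat this case: "
                        f"{patient.describe_symptoms()}")
        outcome = random.random() < 0.8  # toy stand-in for whether the case resolves
        self.experience.append((patient.disease, plan, outcome))
        return outcome


# One pass through the simulacrum: patients arrive, get (trivially) triaged,
# and are treated by one of the 14 doctor agents.
doctors = [DoctorAgent("respiratory") for _ in range(14)]
for i in range(10):
    patient = ResidentAgent(f"resident_{i}")
    random.choice(doctors).treat(patient)
```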

Overview of the AI hospital. Credit: Li et al.

The Agent Hospital has been compared to the AI town developed at Stanford last year, which had 25 virtual residents living and socializing with each other. “We’ve demonstrated the ability to create general computational agents that can behave like humans in an open setting,” said Joon Sung Park, one of the creators. The Tsinghua researchers have created a “hospital town.”

Gosh, a healthcare system with no humans involved. It can’t be any worse than the human one. Then again, let me know when the researchers include AI insurance company agents in the simulacrum; I want to see what bickering ensues.

As you might guess, the idea is that the AI doctors — I’m not sure where the “robot” is supposed to come in — learn by treating the virtual patients. As the paper describes: “As the simulacrum can simulate disease onset and progression based on knowledge bases and LLMs, doctor agents can keep accumulating experience from both successful and unsuccessful cases.”

Credit: Li et al.

The researchers did confirm that the AI doctors’ performance consistently improved over time. “More interestingly,” the researchers declare, “the knowledge the doctor agents have acquired in Agent Hospital is applicable to real-world medicare benchmarks. After treating around ten thousand patients (real-world doctors may take over two years), the evolved doctor agent achieves a state-of-the-art accuracy of 93.06% on a subset of the MedQA dataset that covers major respiratory diseases.”

The researchers note the “self-evolution” of the agents, which they believe “demonstrates a new way for agent evolution in simulation environments, where agents can improve their skills without human intervention.” Unlike many LLM training approaches, it requires no manually labeled data. As a result, they say, the design of Agent Hospital “allows for extensive customization and adjustment, enabling researchers to test a variety of scenarios and interactions within the healthcare domain.”
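To picture what that “self-evolution” might amount to in practice, here is another rough, hypothetical sketch (again my own placeholder code, not the paper’s): the doctor agent folds its own simulated successes and failures back into the prompt for the next case, so improvement comes from the simulation itself rather than from manually labeled training data.

```python
# Hypothetical sketch of "self-evolution": the doctor agent reuses its own
# simulated case history instead of manually labeled training data.
from collections import defaultdict


class EvolvingDoctorAgent:
    def __init__(self):
        # Successful plans become reusable guidelines; failed plans become cautions.
        self.guidelines = defaultdict(list)
        self.cautions = defaultdict(list)

    def record_case(self, disease: str, plan: str, success: bool) -> None:
        (self.guidelines if success else self.cautions)[disease].append(plan)

    def build_prompt(self, symptoms: str, disease_guess: str) -> str:
        dos = "; ".join(self.guidelines[disease_guess][-3:]) or "none yet"
        donts = "; ".join(self.cautions[disease_guess][-3:]) or "none yet"
        return (f"Patient: {symptoms}\n"
                f"Approaches that worked before: {dos}\n"
                f"Approaches that failed before: {donts}\n"
                f"Propose a treatment plan.")


doc = EvolvingDoctorAgent()
doc.record_case("influenza", "antivirals within 48 hours", success=True)
doc.record_case("influenza", "antibiotics only", success=False)
print(doc.build_prompt("fever, cough, body aches", "influenza"))
```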

The researchers’ plans for the future include expanding the range of diseases, adding more departments to the Agent Hospital, and “society simulation aspects of agents” (I just hope they don’t use Grey’s Anatomy for that part of the model). Dr. Liu told Global Times that the Agent Hospital should be ready for practical application in the second half of 2024.

One potential use, Dr. Liu told Global Times, is training human doctors:

…this innovative concept allows for virtual patients to be treated by real doctors, providing medical students with enhanced training opportunities. By simulating a variety of AI patients, medical students can confidently propose treatment plans without the fear of causing harm to real patients due to decision-making error.

No more interns fumbling with actual patients, risking patients’ lives to help train those young doctors. So one hopes.

AI hospital research team. Credit: Liu via Global Times

I’m all in favor of using such AI models to help train medical professionals, but I’m a lot more interested in using them to help with real-world health care. I’d like those AI doctors evaluating our AI twins, trying hundreds or thousands of options on them in order to produce the best recommendations for the actual us. I’d like those AI doctors looking at real-life patient information and making recommendations to our real-life doctors, who need to get over their skepticism and treat AI input as not only credible but also valuable, even essential.

There is already evidence that AI-provided diagnoses compare very well to those from human clinicians, and AI is only going to get better. The harder question may not be getting AI ready but — you guessed it! — getting physicians ready for it. Recent studies by both Medscape and the AMA indicate that the majority of physicians see the potential value of AI in patient care but are not yet ready to use it themselves.

Perhaps we need a simulacrum of human doctors learning to use AI doctors.

In the Global Times interview, the Tsinghua researchers were careful to stress that they don’t see a future without human involvement, but, rather, one with AI-human collaboration. One of them went so far as to praise medicine as “a science of love and an art of warmth,” unlike “cold” AI healthcare.

Yeah, I’ve been hearing those concerns for years. We say we want our clinicians to be comforting, displaying warmth and empathy. But, in the first place, while AI may not yet actually be empathetic, it may be able to fake it; studies suggest that patients overwhelmingly found AI chatbot responses more empathetic than those from actual doctors.

In the second place, what we want most from our clinicians is to help us stay healthy, or to get better when we’re not. If AI can do that better than humans, well, physicians’ jobs are no more guaranteed than any other jobs in an AI era.

But I’m getting ahead of myself; for now, let’s just appreciate the Agent Hospital simulacrum.

--


Kim Bellard

Curious about many things, some of which I write about — usually health care, innovation, technology, or public policy. Never stop asking “why” or “why not”!