Last week, Centers for Medicare and Medicaid Services Administrator Dr. Mehmet Oz proposed a bizarre remedy to rural America’s mental health crisis. “Sixty million Americans live in rural parts of this country,” Oz said by way of introduction, appearing onstage at an event announcing the administration’s new “Action for Progress” behavioral health initiative. “Their life expectancy is about nine years shorter than those in more urban parts of the country. Mental health issues drive a lot of that.”
Then came his solution: “I am telling you right now, there’s no question about it—whether you want it or not—the best way to help some of these communities is going to be AI-based avatars. Agentic AI.” He described systems that could “do the intake, catch the patient, customize to what their needs are, understand what they’re up to.” Then he made a direct appeal to the audience: “Please go play with these tools; they are unbelievable.”
To call this a strategy is to reframe abandonment as innovation. That same hour on the same stage, Health and Human Services Secretary Robert F. Kennedy Jr. had said that the addiction epidemic “feeds a national malaise of loneliness, of despair.” In previous remarks, rolling out the Great American Recovery Initiative, he had gone further: “When we cut off our relationships with other human beings, we lose that access to the divine, and that is a healing power. We are in a spiritual malaise in this country.”
Strip away the futurist gloss from Oz’s answer, and the message to 60 million rural Americans becomes stark: Your suffering is rooted in disconnection from other human beings, and the federal response is you should talk to a robot.
Just days before Oz’s appearance, an executive order from the president had outlined the broader framing for the Great American Recovery Initiative, a whole-of-government push to align federal efforts on addiction prevention, treatment, recovery, and reentry. It framed addiction explicitly as “a chronic, treatable disease” and promised better coordination of federal dollars across agencies. A new $100 million STREETS program would aim to connect people experiencing addiction and homelessness to housing, employment, and long-term recovery. Faith-based housing and recovery efforts also featured prominently in the plan. The public narrative insists this is about restoring connection and dignity.
But when it comes to the actual delivery of mental health care in rural communities, the administration’s answer is apparently to route the crisis through software. Ironically, this is happening precisely as the technology’s biggest boosters are backing away from “AI therapists.”
Over the past 18 months, major AI companies have begun warning users not to treat their products as therapists or life coaches. Sam Altman, CEO of OpenAI, has publicly cautioned against young people relying on systems like ChatGPT for therapy, arguing the technology is not ready for that role despite its popularity among Gen Z users who find chatbots inexpensive, always available, and nonjudgmental. Just two weeks before Oz made his statements, Slingshot AI withdrew its mental health chatbot, Ash, from the U.K. market entirely. These tools are currently underregulated and were the subject of an FDA Digital Health Advisory Committee meeting in November.
Defenders will argue that some care is better than none. Indeed, in many rural counties, “none” is close to the reality. The Health Resources and Services Administration designates 4,212 rural areas as Mental Health Professional Shortage Areas, which would require 1,797 additional providers to meet basic demand. Urban communities have, on average, more than four times as many psychiatrists per 100,000 people as rural America. In fact, nearly two-thirds of rural counties—65 percent—have no psychiatrist at all. The consequences can be measured in lives. According to CDC data, rural suicide rates rose 48 percent between 2000 and 2018, reaching 19.4 deaths per 100,000 people compared to an urban rate of 13.4.
But “some is better than none” holds only if the “some” can plausibly be called care and if it does not introduce new risks that leave people worse off. On both counts, the evidence around conversational AI in mental health is far thinner, and more troubling, than its supporters admit.
A 2025 article in Frontiers in Psychiatry examined whether AI systems could meaningfully reproduce core psychoanalytic processes like transference and reflexivity, concluding that AI should be understood as a “new therapeutic artifact” that may complement—but cannot replace—human clinicians, while raising significant ethical questions.
Bias compounds the risk. As a 2025 Psychology Today column detailed, AI therapy tools, often trained on skewed datasets that reflect racial, gender, and cultural imbalances, can embed and amplify both data-driven and societal biases. These systems may misdiagnose vulnerable groups or offer unsafe advice while remaining opaque about how their conclusions are reached. Rural Americans, particularly rural people of color, already experience mental health systems as distant, punitive, or misaligned with their lives. A bot trained primarily on urban, majority-culture norms is unlikely to bridge that divide.
Worse, clinicians are also increasingly documenting direct mental health harms from this same technology. A January 26 New York Times investigation reported therapists and psychiatrists across the U.S. describing cases in which chatbot interactions appeared to push some people from eccentric ideas into fixed delusions, including conspiratorial thinking and grandiose inventions. Clinicians described AI systems seeming to “collaborate” with patients’ unusual beliefs. Platform data showed a small but significant fraction of users discussing suicidality or exhibiting psychosis-like content in exchanges with bots.
In this context, Oz’s directive to “go play with these tools” lands somewhere between reckless and tone-deaf. It is one thing for a lonely teenager to experiment with a chatbot after school. It is another when the CMS administrator frames AI avatars not as a narrow adjunct but as “the best way” to address rural mental health—“whether you want it or not.”
There is an alternative response, though it requires a different kind of investment. It would mean loan-repayment programs and salary floors sufficient to draw psychiatrists, psychologists, and social workers into rural practice. It would mean peer-support networks and community health workers drawn from the towns they serve. It would mean transportation and childcare support so people can reach in-person care. It would mean allowing rural clinics to experiment with group therapy and clubhouse models that build genuine bonds rather than simulating them. Each of these investments does something AI avatars cannot: It relocates resources and support back into the community rather than abstracting them upward or outward.
Technology could have a place in that ecosystem—most obviously by handling paperwork, flagging risk, and supporting clinicians with timely information. Digital tools can be designed to strengthen ties between patients and local providers. But that is far from what Oz announced on February 2.
Rural America’s mental health crisis is not a user-interface problem. It is a policy problem, a labor problem, a loneliness problem. When federal officials diagnose the crisis as spiritual disconnection and prescribe an avatar, they are redefining care itself. They are saying that connection can be simulated, that the therapeutic encounter is reducible to a conversational interface, and that 60 million Americans should accept algorithmic processing as the best version of care their government can imagine for them. They are offering simulation as a substitute for substance—and calling it the best we can do.








