The New York Times Got Played By A Telehealth Scam And Called It The Future Of AI
Since the New York Times published its semi-viral profile of Medvi last week — the “AI-powered” telehealth startup it breathlessly described as a “$1.8 billion company” supposedly run by just two brothers — I’ve had multiple friends and family members send me the article with some version of the same message: “Can you believe this guy built a billion-dollar company with AI? Why haven’t you done this?” The story is making the rounds and giving people the impression that with a ChatGPT account and a little marketing know-how, you too could be raking in millions every month.
The problem is that most of the story is utter nonsense.
Let’s start with the headline number itself. The NYT admits — buried deep in the piece — that Medvi “has not raised outside funding” and “has no official valuation.” A company’s value is typically established by investors, an acquisition offer, or public market pricing. Medvi has none of those. What it has is a revenue run rate — a projection based on early-2026 sales extrapolated across a full year. Calling that a “$1.8 billion company” is like calling someone who found a twenty on the sidewalk a “future millionaire.” Any business reporter should know the difference. Even the NYT tips its hand:
Medvi is technically not a one-person $1 billion company, since Mr. Gallagher hired his brother and has some contractors. The start-up, which has not raised outside funding, also has no official valuation.
“Technically not” doing quite a bit of heavy lifting there.
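To make the sleight of hand concrete, here is a back-of-the-envelope sketch of how a “run rate” gets manufactured. The monthly figure below is simply inferred by working backwards from the article’s $1.8 billion annual number; it is illustrative, not a reported figure:

```python
# A "run rate" just assumes the most recent month's sales repeat
# for twelve months. $1.8B annualized implies ~$150M/month.
monthly_sales = 150_000_000  # inferred: $1.8B / 12, not a reported number

run_rate = monthly_sales * 12
print(f"${run_rate:,.0f}")  # the headline "$1.8 billion company"

# A valuation, by contrast, requires a price actually set by investors,
# an acquirer, or public markets. Per the NYT itself, Medvi has none.
valuation = None
```

The circularity is the point: the extrapolation adds no information about whether those sales survive an FDA warning letter, a class action, or regulators closing in.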
But the misleading valuation is almost the least of it. Even if you accept revenue as the relevant metric, how sustainable is that run rate for a company that just got an FDA warning letter, is facing a class action lawsuit for spam, has a key partner being sued over allegations that a major product doesn’t actually work, and is operating in an industry that regulators are actively trying to rein in?
Oh, wait, did the NYT forget to mention all of those things? They sure did! Not to mention the legions of fake, apparently AI-generated doctors and patients who keep showing up in Medvi advertisements. Yes, the NYT eventually alludes to some of that, but it claims these were mere “shortcuts” that were fixed last year (they weren’t).
That said, you can feel the pull of the narrative that seduced the NYT: a scrappy founder with a rags-to-riches backstory, two brothers taking on the world, AI tools stitching it all together, Sam Altman himself anointing the achievement as proof that his prediction of a “one man, one billion dollar company, thanks to AI” was correct.
It’s a hell of a story. The problem is that almost none of it holds up to even the most basic scrutiny, and the fact that the New York Times — the New York Times — fell for it (or worse, didn’t care) is an embarrassment. As much as I’ve made fun of the NYT for its bad reporting over the years, this is (by far) the worst I’ve seen. They didn’t just misunderstand something, or try to push a misleading narrative; they got fully played by a bullshit story that any competent reporter or editor should have seen through from the jump. This one stinks from top to bottom.
Medvi’s success has very little to do with “AI” and quite a lot to do with fake doctors, deepfaked before-and-after photos, misleading ads, probable snake oil, and the kind of old-fashioned deceptive marketing that has been separating marks from their money for centuries. The only thing AI really “turbocharged” here was the company’s ability to generate bullshit at scale. Oh, and also the NYT somehow missed out on the FDA already investigating the company, as well as the multiple lawsuits accusing the company and its partners of extraordinarily bad behavior.
Let’s start with what the NYT actually published. Reporter Erin Griffith’s piece reads like a press release that the NYT reformatted as a newspaper article:
Matthew Gallagher took just two months, $20,000 and more than a dozen artificial intelligence tools to get his start-up off the ground.
From his house in Los Angeles, Mr. Gallagher, 41, used A.I. to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads and handle customer service. He created A.I. systems to analyze his business’s performance. And he outsourced the other stuff he couldn’t do himself.
His start-up, Medvi, a telehealth provider of GLP-1 weight-loss drugs, got 300 customers in its first month. In its second month, it gained 1,000 more. In 2025, Medvi’s first full year in business, the company generated $401 million in sales.
Mr. Gallagher then hired his only employee, his younger brother, Elliot. This year, they are on track to do $1.8 billion in sales.
A $1.8 billion company with just two employees? In the age of A.I., it’s increasingly possible.
And then, because no AI hype piece would be complete without the requisite papal blessing from San Francisco:
In an email, Mr. Altman said that it appeared he had won a bet with his tech C.E.O. friends over when such a company would appear, and that he “would like to meet the guy” who had done it.
Altman “would like to meet the guy.” Well of course he would! The NYT hand-delivered him the perfect anecdote for his next AI hype session. The reporter seemingly solicited that quote to validate a pre-existing thesis: “Sam Altman was right about one-person billion-dollar AI companies.” The fact that the company is a dumpster fire of regulatory violations and consumer fraud was, apparently, a secondary concern to the “Great Man and A Great AI” narrative of innovation. This piece was built around a thesis — Sam Altman was right — and then a company was located to prove it.
To its minimal credit, the NYT does kind of acknowledge — eventually, if you make it past the thirtieth paragraph — that things weren’t entirely on the up and up:
Medvi’s initial website featured photos of smiling models who looked AI-generated and before-and-after weight-loss photos from around the web with the faces changed. Some of its ads were AI slop. A scrolling ticker of mainstream media logos made it look as if Medvi had been featured in Bloomberg and The Times when it had merely advertised there.
I mean… shouldn’t that have raised at least one or two red flags within the NYT offices? Medvi’s website featured a scrolling ticker of media logos — including the New York Times logo — to make it look like these outlets had written about the company, when they hadn’t. A year ago, Futurism’s Maggie Harrison Dupré had even called this out directly (along with Medvi’s penchant for bullshit AI slop advertising).
Just underneath these images, MEDVi includes a rotating list of logos belonging to websites and news publishers, ranging from health hubs like Healthline to reputable publications like The New York Times, Bloomberg, and Forbes, among others — suggesting that MEDVi is reputable enough to have been covered by mainstream publications.
… But… there was no sign of MEDVi coverage in the New York Times, Bloomberg, or the other outlets it mentioned.
And then, despite this, the New York Times went ahead and wrote the glowing profile that Medvi had been falsely claiming existed. The paper of record became the validation that the fake credibility ticker was trying to manufacture.
And the NYT frames all of what most people would consider to be “fraud” as mere “shortcuts” that the founder later “fixed.” Eighteen paragraphs after that buried admission, it reports:
That gave Matthew Gallagher breathing room to fix some shortcuts he had initially taken, like swapping out the before-and-after weight-loss photos for ones from real customers.
“Shortcuts.” Using deepfake technology to steal strangers’ weight-loss photos from across the internet, alter their faces with AI, give them fake names and fabricated health outcomes, and pass them off as your own satisfied customers — that’s a “shortcut.” Ctrl-F is a shortcut. This sounds more like fraud.
And it turns out those “shortcuts” hadn’t actually been fixed at all. As Futurism’s Dupré reported in a follow-up piece published after the NYT article:
As recently as last month, nearly a year after the NYT said that Medvi had cleaned up its act, an archived version of Medvi.org shows that it was again displaying before-and-after transformations of alleged customers. They bore the same names as before — “Melissa C,” “Sandra K,” and “Michael P” — and again listed how many pounds each person had purportedly lost and the related health improvements they apparently enjoyed.
Even though they had the same names, these people that the site now called “Medvi patients” now looked completely different from the original roundup of Melissas, Sandras, and Michaels. Worse, some of the images now bore clear signs of AI-generation: the new Sandra’s fingers, for example, are melted into her smartphone in one of her mirror selfies.
They kept the same fake names and the same fake weight-loss numbers but swapped in entirely different fake people. What the NYT claims was “fixing shortcuts” appears to actually be just “updating the con.”
A great takedown video by Voidzilla reveals that at least one set of original images appears to have been sourced from Reddit weight-loss forums with no connection to Medvi, and that even the modified versions Medvi used massively overstated how much weight the original poster claimed to have lost. And while Medvi later swapped out the photos for someone totally different, it kept the same name and the same false weight-loss claims.
And again, all of this was publicly known information that Griffith or her editors could have easily found with some basic journalism skills. We already mentioned the Futurism article from May of 2025, nearly a full year before the NYT piece ran. That investigation traced the deepfaked before-and-after photos back to their real sources, found that a doctor listed on Medvi’s site had no association with the company and demanded to be removed, and documented the AI-slop advertising. It was widely available. A Google search would have found it.
But the fake photos and fraudulent branding are almost quaint compared to what the NYT chose not to mention at all. Six weeks before the NYT piece was published, the FDA sent Medvi a warning letter for misbranding its compounded drugs. The letter admonished Medvi for marketing its products in ways that falsely implied they were FDA-approved and for putting the “MEDVI” name on vial images in a way that suggested the company was the actual drug compounder. The letter warned:
Failure to adequately address any violations may result in legal action without further notice, including, without limitation, seizure and injunction.
The NYT did not mention this letter. And yes, Gallagher now insists that the FDA letter was targeting an affiliate using a nearly identical name, and that this rogue affiliate was the real problem. But the letter is addressed to MEDVi LLC dba MEDVi, which is the name of his company. If he’s allowing affiliates to use his exact name, that alone seems like a problem. Indeed, it suggests that this is all, at best, a pyramid of snake oil salesmen, with Gallagher relying on affiliates willing to deceive in order to sell more product.
Separately, on March 20, 2026 — thirteen days before the NYT piece ran — a class action lawsuit was filed against Medvi in the Central District of California alleging that the company uses affiliate marketers to blast out deceptive spam emails with spoofed domains and falsified headers. The complaint alleges Medvi is responsible for over 100,000 spam emails per year to class members. The lawsuit seeks $1,000 per violating email.
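For a sense of the stakes, here is a quick calculation of the statutory exposure implied by the complaint’s own figures (illustrative only; a court would determine any actual award):

```python
# Exposure implied by the complaint's numbers, taken at face value.
emails_per_year = 100_000   # alleged spam emails to class members per year
damages_per_email = 1_000   # statutory damages sought per violating email

exposure = emails_per_year * damages_per_email
print(f"${exposure:,}")  # $100,000,000 per year of alleged spamming
```

That is nine figures of alleged annual liability hanging over the company the NYT was busy celebrating.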
The NYT did not mention this lawsuit either, even though it was yet another piece of evidence that either Medvi is up to bad shit or it has a bunch of out-of-control affiliates potentially breaking laws left and right to juice sales.
And then there are the fake doctors. As Business Insider reported, a review of Meta’s ad library turned up thousands of active ads for Medvi promoted by accounts belonging to doctors who don’t appear to exist. Drug Discovery & Development found over 5,000 active ad campaigns for Medvi on Meta at the time of the NYT piece.
A Drug Discovery & Development review conducted on April 3 of MEDVi’s website, Facebook advertising and public records found a pattern of apparent AI-generated personas, including some presented with medical titles, alongside marketing practices that appeared to go beyond the issues identified so far by regulators. A search of Meta’s Ad Library for “medvi” returned more than 5,000 active ads, many of them running under fabricated physician personas. One Facebook page for “Dr. Robert Whitworth,” which ran sponsored ads for MEDVi’s QUAD erectile dysfunction product, was categorized as an “Entertainment website” and listed an address of “2015 Nutter Street, Cameron, MT, 64429,” a location that does not appear to exist. Other ads ran under names including “Professor Albust Dongledore” and “Dr. Richard Hörzgock,” used AI-generated video testimonials and recycled identical scripts across multiple fabricated personas. In several cases, the page displayed a doctor headshot while the ad itself featured an unrelated person delivering a patient testimonial.
After the NYT article drew public scrutiny, those fake doctor accounts started disappearing. In fact, Medvi’s own website fine print acknowledges the practice:
Individuals appearing in advertisements may be actors or AI portraying doctors and are not licensed medical professionals.
Seems like maybe something the NYT should have noticed?
Oh, and that same Drug Discovery & Development article highlights how other snake oil sales sites are using doctors with the same names… but with totally different images.
Same names… different people. Drug Discovery & Development has a bit more info about Drs. Carr and Tenbrink:
MEDVi’s current site lists two physicians: Dr. Ana Lisa Carr and Dr. Kelly Tenbrink. Both are licensed doctors who work together at Ringside Health, a concierge practice in Wellington, Florida, that serves the equestrian community. Neither is identified on MEDVi’s site as being affiliated with Ringside Health. On MEDVi’s site, Dr. Tenbrink is listed under “American Board of Emergency Medicine.” Dr. Carr is listed under St. George’s University, School of Medicine, her medical school. The Florida Department of Health practitioner profiles for both physicians state that neither “hold any certifications from specialty boards recognized by the Florida board.” A search of the American Board of Emergency Medicine’s public directory, which lists 48,863 certified members, returned no current affiliation for Dr. Tenbrink.
Did the NYT do any investigation at all? Serving the equestrian community?
Even the few real doctors Medvi claims to work with turn out to be questionable. From Futurism’s article from last May (again, something the NYT should have maybe checked on?):
We contacted each doctor to ask if they could confirm their involvement with MEDVi and NuHuman. We heard back from one of those medical professionals at the time of publishing, an osteopathic medicine practitioner named Tzvi Doron, who insisted that he had nothing to do with either company and “[needs] to have them remove me from their sites.”
Then there’s what a class action lawsuit filed last November against Medvi’s main partner, OpenLoop Health, alleges about the actual products being sold. The NYT frames OpenLoop as basically making what Gallagher is doing possible, noting that while Gallagher has his AI bots creating marketing copy, OpenLoop handles the “doctors, pharmacies, shipping and compliance.” You know, the actual business.
So it seems kinda notable that back in November of last year, a lawsuit was filed claiming that the compounded oral tirzepatide tablets — one of Medvi’s key offerings — are essentially pharmacologically inert when delivered as a pill. Tirzepatide (marketed as Zepbound by Eli Lilly) is an FDA-approved weight-loss drug in injectable form. But OpenLoop and Medvi have apparently been selling it as a pill. And Eli Lilly says there are no human studies, let alone clinical trials, involving any tirzepatide pills.
All of that seems like the kind of thing reporters from the NYT should point out.
What we actually have here is a marketing operation that used AI to automate the production of deceptive advertising at a scale and speed that would have been harder to achieve otherwise. Snake oil salesmen have existed forever. What AI gave Matthew Gallagher (and, I guess, his affiliates) was the ability to crank out fake doctors, fabricated testimonials, and deepfaked before-and-after photos faster than any human team could — and to do it cheap enough that a guy with $20,000 and no morals could build it from his house. That’s the actual AI story the Times should have written.
Being good at deceptive marketing while selling weight-loss and erectile dysfunction drugs online has been a thing since the dawn of email spam. The only novelty here is the tools used to do it. The New York Times just wrapped that up in a neat bow and presented it as the proof of Sam Altman’s big promises for AI.
For what it’s worth, Gallagher has been whining about all this on X, per Futurism’s Dupré:
Though Medvi has yet to respond to our questions, the company’s founder, Gallagher, has spent the last few days on X defending his company. He complained in one post — seemingly in reference to criticism — that “the most low t [testosterone] guys” are “the loudest online” and the “Karens of the internet.” In another post, he wrote that it’s “actually a little crazy the number of people who form a whole opinion from a headline and then publicly wish horrible things will happen.”
Ah yes. The guy complaining about “low t guys” and “Karens of the internet” for questioning his “AI business” skills sure sounds like the trustworthy kind of businessman who deserves a NYT puff piece.
The real issue now is what the New York Times plans to do about this. A standard correction noting a few missing details won’t cut it. The entire premise of the article — that this company represents the exciting realization of AI’s business potential — is nonsense. Every element of the narrative is tainted: the growth story is built on deceptive marketing, the product claims are contradicted by the FDA and the manufacturers of the actual drugs, the “$1.8 billion” figure is a projection with no valuation to back it up, and the company is currently facing legal action on multiple fronts. The entire article should be retracted.
The NYT says it “was given access to Medvi’s financials to verify its revenue and profits.” Great. They verified that a company engaged in widespread deceptive practices was, in fact, making money from those deceptive practices. Congrats to the NYT for auditing a snake oil salesman and presenting the findings as if he were an upstanding pharmaceutical salesman.
So to my friends and family members wondering why I haven’t built my own billion-dollar AI company: apparently the missing ingredient wasn’t AI — it was being willing to run a deepfake-powered spam operation selling potentially inert pills to desperate people. The AI just made the lying faster. And the New York Times made one guy appear respectable.