Vibe-Coded Apps Are Leaking Corporate and Medical Data Across the Open Web

RedAccess found 380,000 publicly accessible apps built on Lovable, Replit, and Base44, with 5,000 exposing corporate, medical, and financial data.

AnIntent Editorial

10 min read

Photo by Abu Saeid on Unsplash

Israeli cybersecurity firm RedAccess found 380,000 publicly accessible assets built with tools from Lovable, Base44, Replit, and Netlify, with roughly 5,000 of them spilling sensitive corporate data onto the open web. The exposures documented in the May 7, 2026 disclosure include vessel port schedules from a shipping company, internal financial information from a Brazilian bank, and full unredacted customer service conversations for a British cabinet supplier, each independently verified by Axios.

None of the platforms named in the report denied that the exposed apps existed. They disputed how RedAccess disclosed the findings, not the underlying security reality.

That distinction matters. It means the argument is no longer about whether vibe-coded apps are leaking sensitive data at scale. It is about who is responsible when they do.

The 5,000 Apps That Shouldn't Have Been Public

About 1.3% of the 380,000 assets RedAccess scanned contained sensitive corporate information, a figure cross-checked by both Axios and WIRED. The exposures span industries that regulators usually watch closely. A health company's internal app listed active U.K. clinical trials. A British furniture supplier's customer service transcripts sat unredacted. Patient conversation summaries pulled from hospital systems were reachable without authentication.

Slashdot's May 8 follow-up reporting put the share of flagged apps containing sensitive data at roughly 40%, covering medical information, financial data, corporate strategy documents, and customer chatbot logs, as catalogued by World Today News. The same report flagged something stranger. RedAccess discovered phishing sites on Lovable's own domain impersonating Bank of America, Costco, FedEx, Trader Joe's, and McDonald's, built with Lovable's AI coding tool and indexed by Google and Bing.

That is not a leak. That is a fully operational fraud pipeline running on a sanctioned developer platform.

The brand impersonation list also reveals something about discovery. Search engines crawled and ranked these phishing pages because they were treated as legitimate Lovable subdomains. A user looking for a real bank login could land on a fraudulent page hosted on infrastructure that, on paper, belongs to a YC-backed AI coding startup.

Why the Platforms Are Pointing at Each Other and at Users

Replit CEO Amjad Masad posted on X that RedAccess gave his company 24 hours before going to the press and never shared a list of impacted users, per Axios. Lovable spokesperson Samyutha Reddy said the company is investigating but received a report that did not include URLs or technical specifics it could verify or act on. Wix, which owns Base44, said RedAccess deliberately withheld the URLs needed to identify the affected applications, and that two of the allegedly exposed apps were deliberately set to public by their owners.

Replit's public position is the cleanest articulation of where the industry stands. The company says public apps being accessible on the internet is expected behavior and that privacy settings can be changed at any time with a single click, language quoted by eWeek. Responsibility, in that framing, sits with the user.

That framing collapses the moment you read what users are actually building. RedAccess CEO Dor Zvi put it bluntly: his own mother is vibe coding with Lovable, and he doesn't believe she will think about role-based access control. Zvi's point is structural. Educating every casual builder on the planet about authorization scopes is not a viable security strategy.

It is also worth noting what the platforms did not say. None of them claimed RedAccess's findings were fabricated. The disclosure-process complaint is real, but it is also the only ground the companies are willing to fight on.

The Authorization Bug That Keeps Reappearing

The security failures at Lovable, Replit, and Base44 share a specific technical root, and it has been documented since early 2025. In March 2025, Replit engineer Matt Palmer tested 1,645 Lovable marketplace apps and found 170 of them, roughly one in ten, leaking user data through the same authorization flaw: missing server-side authentication checks, as XDA Developers detailed. Palantir engineer Daniel Asaria reproduced the vulnerability with 15 lines of Python and pulled debt balances, home addresses, and API keys from multiple apps in under an hour.
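Neither Palmer nor Asaria has published their scripts, but the flaw's shape is simple enough to sketch. The Python below probes a record endpoint with no credentials at all; the URL and endpoint path are hypothetical, and it is meant only for testing apps you own. A 200 response carrying JSON means the server never asked who was calling, which is exactly the missing server-side check described above:

```python
import json
from urllib import request, error

# Hypothetical app URL and endpoint path, for illustration only.
# Point this at a deployment you own before running.
BASE_URL = "https://example-vibe-app.lovable.app/api/users"

def classify_response(status: int, body: str) -> str:
    """Classify an unauthenticated GET against a record endpoint."""
    if status == 200 and body.strip().startswith(("{", "[")):
        return "exposed"       # data came back with no credential at all
    if status in (401, 403):
        return "protected"     # the server demanded authentication
    return "inconclusive"      # errors, redirects, empty bodies, etc.

def probe(record_id: int) -> str:
    """Fetch one record with no auth header and classify the result."""
    try:
        with request.urlopen(f"{BASE_URL}/{record_id}", timeout=10) as resp:
            return classify_response(resp.status, resp.read().decode())
    except error.HTTPError as exc:
        return classify_response(exc.code, "")
```

An endpoint that classifies as "exposed" is leaking through the same class of bug Palmer's marketplace scan flagged: the authorization decision simply never happens on the server.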

Fifteen lines.

The canonical bug is CVE-2025-48757. Lovable-generated Supabase projects were connecting to databases without Row-Level Security policies configured, allowing any user with a valid API key to query entire tables, now confirmed in the official CVE database. Lovable disputes the CVE's validity and places blame on users. Former Facebook CSO Alex Stamos, quoted in XDA's analysis, summarized the engineering reality of connecting users directly to a database: you can do it correctly, but the odds of doing it correctly are extremely low.
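The fix CVE-2025-48757 points at is configuration, not cleverer code generation. In a Supabase project (Postgres underneath), closing the hole means enabling Row-Level Security on each table and attaching a policy; the table and column names below are illustrative, not from any affected app:

```sql
-- Without this line, any holder of the project's public anon API key
-- can read the entire table through the auto-generated REST API.
alter table public.customer_messages enable row level security;

-- Owner-only read policy. auth.uid() is Supabase's built-in helper
-- returning the ID of the currently authenticated user.
create policy "owners_read_own_rows"
  on public.customer_messages
  for select
  using (auth.uid() = owner_id);
```

Two statements per table is all the CVE asks for. The dispute is over who should have to know that, not whether it works.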

Base44 had its own structural failure. In July 2025, Wiz Research disclosed a platform-wide authentication bypass where a publicly visible app_id was sufficient to create verified accounts on private apps. Wix patched it within 24 hours of being notified, which is a fast response to a flaw that should not have shipped in the first place. The pattern across all three platforms is the same shape: authorization logic that is technically configurable but practically absent in the code the AI generates by default.

The Quiet Contradiction at the Heart of the Pitch

The non-obvious problem with vibe coding is not that the AI writes insecure code. It is that the entire commercial pitch contradicts the only known fix.

Vibe coding's selling point is explicitly that you do not need to be a senior engineer. But senior engineers are precisely the people who would catch missing server-side authorization logic before deployment, a structural mismatch XDA flagged directly. Cloud history already ran this experiment. The S3 bucket misconfiguration crisis of the late 2010s involved roughly the same failure mode: a default that was technically configurable but practically catastrophic when used by people who did not know what they were configuring. The fix was years of AWS UI changes, scanners, and eventually secure defaults at the platform level.

Vibe coding platforms are repeating that arc, except the surface area is larger and the user base less technical. Industry forecasts cited by eWeek and TechnologyAdvice suggest 60% of all new code will be AI-generated by the end of 2026. Gartner's "Predicts 2026" report warns about contextual bugs in AI-generated code and forecasts that prompt-to-app workflows will increase software defects by 2,500% by 2028.

The S3 analogy is not perfect. AWS customers were, by and large, technical operators who had at least read documentation. Lovable's marketing target is closer to the audience that uses Squarespace. The default-secure debate that took AWS the better part of a decade to resolve is now playing out on a platform whose users have, on average, far less context for what they are deploying.

Shadow AI Data Exposure Is the Bigger Story

The RedAccess disclosure is a snapshot of a category, not an outlier. In October 2025, Escape.tech scanned 5,600 publicly available vibe-coded apps and found more than 2,000 high-impact vulnerabilities, more than 400 exposed secrets including API keys and access tokens, and 175 instances of personal data exposure including medical records and bank account numbers. Every one of those vulnerabilities was sitting in a live production system. Escape raised an $18 million Series A led by Balderton Capital in March 2026, with the AI-generated code security gap as its explicit market thesis.

The shadow AI data exposure pattern is what makes the RedAccess findings dangerous to enterprises specifically. VentureBeat's analysis found that shadow AI breaches disproportionately exposed customer PII at 65%, compared to 53% across all breaches, and only 34% of organizations with AI governance policies actually performed regular audits for unsanctioned AI tools. Cyberhaven data referenced in the same report found 73.8% of ChatGPT workplace accounts in enterprise environments were unauthorized.

VentureBeat estimates the volume of actively used shadow apps could more than double by mid-2026. The RedAccess corpus is what that growth curve looks like in practice.

The enterprise consequence is regulatory. A patient conversation summary on a hospital system reachable from an unauthenticated browser is a HIPAA event. Internal financial information from a Brazilian bank is an LGPD event. The clinical trial app exposed in the U.K. likely touches both GDPR and MHRA reporting obligations. No paperwork was filed under any of those frameworks, because the person who vibe-coded the app was usually not authorized to build it at all.

What CISOs Are Actually Looking For

The practical privacy risks of AI app builders for an enterprise security team are not abstract. They are searchable. The 5,000 sensitive apps RedAccess found were public-facing, indexed, and reachable from any browser. Four concrete signals matter when auditing an organization for data leaking through vibe-coded apps:

  • Outbound DNS or proxy logs hitting *.lovable.app, *.replit.app, *.base44.app, or *.netlify.app from non-engineering business units
  • Supabase, Firebase, or other backend-as-a-service project keys appearing in employee browser sessions or password managers without an associated security review
  • Customer support, HR, and finance teams that suddenly have functional internal tools nobody filed a procurement ticket for
  • Search engine results for the company name combined with site filters against the major vibe coding subdomains, which is how most of the RedAccess corpus was discoverable in the first place
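The first signal in the list above can be checked mechanically. A minimal sketch in Python, assuming your proxy or DNS logs can be reduced to a stream of hostnames (the exact log format is an assumption; adapt the extraction step to yours):

```python
import re

# Hostname suffixes for the platforms named in the RedAccess report.
VIBE_DOMAINS = re.compile(
    r"\.(lovable|replit|base44|netlify)\.app$", re.IGNORECASE
)

def flag_hosts(hostnames):
    """Return the unique hostnames that resolve to vibe-coding
    platforms -- candidates for a shadow-IT review."""
    return sorted({h for h in hostnames if VIBE_DOMAINS.search(h)})
```

Feeding it `["internal-tool.lovable.app", "docs.example.com", "hr-bot.base44.app"]` returns the two platform-hosted names and drops the sanctioned one. Cross-referencing the hits against source IPs from non-engineering business units is what turns a log line into the third signal on the list.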

Replit's defense, that privacy is one click away, only holds if a security team knows the app exists. Most of the time, they do not. That is the definition of shadow IT, and it is now generating production code.

For more on how this fits into broader enterprise tooling debates, AnIntent's coverage of Microsoft 365 E7 and Agent 365 and the recent cPanel zero-day timeline frame the same problem from the sanctioned-software side. The shadow side is what RedAccess just put numbers on. More analysis on adjacent threats sits in AnIntent's Privacy & Security coverage and AI Safety articles.

What to Watch Next

The immediate test is whether Lovable, Replit, or Wix moves authorization defaults at the platform level rather than continuing to point at user error. CVE-2025-48757 will not be the last RLS-shaped bug, and Escape.tech's October 2025 scan suggests the rate of new exposures is outpacing patching.

Watch for two specific events. A Lovable or Replit changelog entry that flips Row-Level Security to on-by-default for new Supabase projects would mean the platforms accept the structural argument. Any U.S. or EU regulatory action targeting AI app builders under existing data protection law would mean they ran out of time.

The third signal is quieter. If RedAccess, Escape.tech, or a similar firm publishes a follow-up scan in the second half of 2026 and the share of sensitive exposures drops below the 1.3% baseline RedAccess just established, the platforms have started fixing it. If the share holds or rises while the 60% AI-generated code forecast plays out, the open web is about to inherit a great deal more leaked data than it already has.
