The Proton Blog
News from the front lines of privacy and security
Proton’s new Mail mobile apps: there’s more than meets the eye
February 2, by Matteo Manni

In September last year, we launched a major update of Proton Mail for iOS and Android.
On the surface, the new apps deliver a modern design, better performance, and offline capabilities—but there is a lot more than meets the eye. Behind the scenes, the apps are a complete rewrite of Proton Mail on a novel technology stack, a project that goes by the internal name of Engineering Transformation. The term novel is deliberate, because — to the best of our knowledge — this is the first time the chosen technology has been used in the context of an established production application.
This article aims to shed light on the fascinating journey our team went through in the making of this revolution, and to answer some of the questions our community has asked us along the way. First and foremost: the rationale behind it, which begins with the need to change the status quo.
How it all started
The realization that things needed to change hit on a Friday evening in October 2023. It materialized with surprising clarity, but not out of the blue: it was the culmination of months spent trying to find a common denominator for the seemingly unrelated problems affecting our users’ experience with Mail and Calendar mobile products.
At the risk of oversimplifying, we can summarize the pain points in three areas:
- Quality: Mail iOS and Mail Android, taken in isolation, fell short of expectations in terms of quality and performance.
- Feature gap between iOS and Android: Some features were only available on one platform, with no clarity on when the other would catch up.
- Engineering velocity: Key updates and long-awaited features were not delivered in a timely manner across both platforms.
Some of the issues extended beyond mobile, and addressing those would require a digression from the technology domain into the fascinating problem space of organizational scaling, in particular at fast-growing tech startups. But the fragility of the mobile ecosystem was very much rooted in technology and architecture.
Scaling Mobile Engineering
Scaling mobile engineering comes with a unique set of challenges that are meaningfully different from scaling backend and web teams. These differences stem from platform fragmentation and the operational realities of the mobile ecosystem. Mobile teams typically need to support multiple platforms across a variety of operating systems and devices (phones, tablets, sometimes wearables). iOS and Android come with their own programming languages, frameworks, and tooling, which leads to large amounts of duplicated effort: multiple teams, duplicated codebases, and constant trade-offs between platform-specific and product-related work. Keeping the product offering in sync requires an enormous amount of coordination.
This industry-wide challenge was particularly acute for Proton. Functionality apps such as Mail and Calendar are inherently more complex than most mobile applications on the market. When you add the additional layer of client logic required to handle end-to-end encryption on top, you end up with particularly “thick” clients. At the time, the Android team was busy rewriting Mail to meet higher quality standards—an investment that took the better part of 18 months. iOS was also in dire need of re-architecting, not to mention Calendar. The cost of duplication was eating into all of our engineering resources, and it became clear we were not going to succeed by doing more of the same.
The best thing about recognizing you are stuck is that it acts as a forcing factor to think outside the constraints of your current status quo. What would we do if we could start anew, freed from the burden of the choices and commitments that led us here? When you take a closer look at how successful companies dealt with this issue in the previous decade, you realize they followed one of only two possible strategies:
- They threw money at the problem, building ever bigger teams as high operational costs were offset by a combination of bottomless investments and/or lavish returns. This was not an option for Proton’s VC-free business model: we can’t compete with the spending of ad-based, investor-backed competitors.
- They re-engineered their apps to get rid of the waste, meaning building apps using (as much as possible) a shared codebase.
With option 1 being a non-starter, the path ahead was set.
A means to an end: choosing the right tech stack
The next step was to choose a tech stack that could actually do the job.
Over the past 15 years, cross-platform mobile development has been flooded with “one-size-fits-all” solutions: HTML5, Xamarin, React Native, Flutter, Kotlin Multiplatform, and many others. Each arrived with the same promise—to replace native development outright. In practice, most either failed outright or succeeded only within tightly constrained problem spaces. There is no universal abstraction that makes platform differences disappear: anyone who has shipped and maintained large mobile applications knows this. The only reliable way forward is to work backwards from concrete requirements rather than forward from tooling trends.
We translated that end goal into a set of non-negotiable requirements (1) that any chosen solution had to satisfy, and used them as our guiding framework throughout the evaluation process:
- Costs and timescales: The stack had to materially reduce the cost and time required to ship, maintain, and evolve Proton Mail across iOS and Android.
- User experience: It had to preserve near-native performance and interaction quality—anything less was a non-starter.
- Strategic future-proofing: The solution had to be long-lived. We were intentional about avoiding third-party frameworks that would make our roadmap dependent on another vendor’s continued support.
The tension between the first two constraints is the industry’s version of the holy grail: “A cross-platform solution that delivers the performance and user experience of native applications.”
We were skeptical from the start that React Native or Flutter—the two dominant cross-platform frameworks at the time—could meet this bar. Still, we validated that skepticism by building proof-of-concept implementations of Mail’s message list view.
React Native quickly revealed its limitations. Scrolling through a large dataset made the cost of its interpreted execution model painfully obvious. Flutter performed better, but the UI remained visibly non-native, especially on iOS. More importantly, Flutter is a proprietary framework controlled by Google, which has a history of abandoning in-house technologies and had recently laid off a large portion of the Flutter team. For a product with long-term security and reliability guarantees, that level of external dependency was unacceptable.
Kotlin Multiplatform was the next candidate. It is a compelling option—particularly for organizations with deep Android expertise—but it ultimately fell short for our use case. The absence of a shared UI layer, questions around maturity, and the additional overhead introduced by its execution model outweighed its benefits.
At this point, the conclusion was clear and aligned with our initial intuition: the only architecture that consistently gets close to the desired outcome is a deliberately mixed stack. Native UI on each platform – Jetpack Compose on Android, SwiftUI on iOS – backed by a shared business-logic layer written in a high-performance, low-level language. This approach has a track record: Dropbox famously used C++ to share business logic across mobile platforms before abandoning it in 2019 due to the operational and cognitive cost of the language.
By the end of 2023, Rust had clearly emerged as the successor in the lineage of systems programming languages.
Rust occupies the same performance envelope as C++, but without many of its historical liabilities. It provides strong memory safety guarantees without garbage collection, enforces thread-safe concurrency at compile time, and is supported by a large, highly competent open-source ecosystem. Just as importantly, Rust integrates cleanly with native mobile languages—Swift and SwiftUI on iOS, Kotlin and Jetpack Compose on Android—making it a pragmatic choice for sharing core logic without compromising the UI layer.
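To give a flavor of what such integration can look like, here is a minimal, hypothetical sketch of exposing a piece of shared Rust logic over a C-compatible ABI, which both Swift and Kotlin (via JNI) can call. The function names and the badge-formatting logic are illustrative assumptions, not Proton’s actual API; real projects often generate these bindings with a tool such as Mozilla’s UniFFI rather than writing them by hand.

```rust
// Illustrative sketch only: names and logic are assumptions, not Proton's API.
// The pure business logic lives in safe Rust and is shared by both platforms.
fn unread_badge_text(unread: u32) -> String {
    match unread {
        0 => String::new(),          // no badge for zero unread messages
        1..=99 => unread.to_string(),
        _ => "99+".to_string(),      // cap the badge at "99+"
    }
}

/// Thin C ABI wrapper callable from Swift (directly) and Kotlin (via JNI).
/// The caller must release the returned string with `mail_string_free`.
#[no_mangle]
pub extern "C" fn mail_unread_badge_text(unread: u32) -> *mut std::os::raw::c_char {
    // Badge text never contains an interior NUL, so `unwrap` is safe here.
    std::ffi::CString::new(unread_badge_text(unread)).unwrap().into_raw()
}

/// Frees a string previously returned by `mail_unread_badge_text`.
#[no_mangle]
pub unsafe extern "C" fn mail_string_free(ptr: *mut std::os::raw::c_char) {
    if !ptr.is_null() {
        drop(std::ffi::CString::from_raw(ptr));
    }
}
```

The point of the pattern is that the Swift and Kotlin sides stay thin: they render whatever the core returns, and the formatting rules exist in exactly one place.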
This was not a risk-free decision. At the time, there were few examples of large-scale, consumer-facing mobile applications built on a Rust-centric architecture, and Rust experience within the team was limited.
But meaningful innovation rarely happens in low-risk territory. The real challenge was not Rust itself, but organizational inertia—shifting from proven, conservative approaches toward deliberate experimentation, guided by clear constraints and engineering judgment.
The new Proton Mail: outcome and learnings
Let’s fast-forward to today and see how the gamble played out.
The diagram below represents Mail mobile’s architecture. The Rust core is responsible for the entirety of the application’s business logic. We pushed the use of Rust beyond its usual applications (networking, storage, algorithmic computation) all the way into the handling of complex navigation logic. A case in point is the logic governing the infinite scrolling of the message list. While unorthodox, this proved key to achieving our objective of maximizing code reuse. As a result, almost 80% of the codebase is now shared across iOS and Android.

[Architectural diagram courtesy of Leander Beernaert, 2026]

Did this translate into faster, higher-quality time-to-market? While it’s still too early for a final verdict, the early signs have been very encouraging:
- In the two months following the release, the team managed to maintain a weekly cadence of feature updates across both platforms (a total of 12 feature releases).
- We closed the feature gaps between platforms, bringing long-awaited features to Android such as snooze, calendar RSVP, and swipe-to-next-message.
- Even at this early stage, the new codebase has proven more stable than prior generations on both platforms: the iOS crash rate is 0.05% (down from 0.12%), while Android’s is back to a historical baseline (0.19%). This is a strong endorsement of Rust’s runtime stability.
Support also scales more effectively under this approach. It is often faster to identify and resolve a single, shared root cause than to chase down superficially similar issues arising from slightly different logic flaws spread across two independent codebases. We found empirical confirmation of what had previously been a working hypothesis while fixing a class of category synchronization issues affecting the logic that underpins the app’s offline capabilities: one root cause, one solution—represented in the diagram above by the Rebasing module shipped with version 7.6.2.
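As a concrete illustration of what it means to push logic like infinite scrolling into the shared core, the decision of when to fetch the next page of messages might be sketched as follows. This is a simplified, hypothetical example with assumed types and thresholds, not Proton’s actual implementation:

```rust
// Simplified illustration of shared infinite-scroll paging logic.
// All names and numbers are assumptions, not Proton's real code.
struct MessageListPager {
    loaded: usize,          // messages currently held locally
    total: usize,           // total messages known to exist server-side
    page_size: usize,       // how many messages to request per fetch
    prefetch_margin: usize, // start fetching this many rows before the end
}

enum PagerAction {
    None,
    FetchRange { offset: usize, limit: usize },
}

impl MessageListPager {
    /// Called by the native UI whenever the topmost visible row changes.
    /// The decision of *when* and *what* to fetch lives in Rust,
    /// so iOS and Android behave identically by construction.
    fn on_row_visible(&self, row: usize) -> PagerAction {
        let near_end = row + self.prefetch_margin >= self.loaded;
        if near_end && self.loaded < self.total {
            PagerAction::FetchRange {
                offset: self.loaded,
                limit: self.page_size.min(self.total - self.loaded),
            }
        } else {
            PagerAction::None
        }
    }
}
```

Because the fetch-triggering decision lives in one place, the two platforms cannot drift apart: the native views only report visible rows and execute whatever action the core returns.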
The other side of the coin?
- Bugs and regressions are likely to have a wider impact and affect users on both platforms. You can’t really have it all—but you can definitely mitigate the risk by over-indexing on end-to-end (E2E) testing.
- As with any slicing of a user-facing solution along a horizontal technology divide, there is a risk of creating knowledge silos and losing some engineering focus on end-to-end user experiences. You need to be aware of this and intentionally mitigate the risk. Among the most effective measures:
- Align sub-teams to deliver features rather than technology layers.
- Train mobile engineers to become “full stack”, i.e. able to debug, support, and engineer across both the Rust codebase and the native platforms.
What’s next for Engineering Transformation
From the very outset of this project, it was clear that the stakes extended well beyond Proton Mail alone. Successfully applying this technology stack to Proton’s flagship application was always intended as the first step in a longer journey—one that would ultimately see this approach rolled out across the rest of our mobile ecosystem.
That scenario is now unfolding. As I write this article, our Account and Payment SDKs, as well as the next generation of Proton Calendar mobile apps, are being rewritten in line with this new technical direction.
This marks the beginning of a second wave of engineering transformation—an evolution that expands the technology blueprint with an architectural framework designed to make component reuse easier, not only across platforms but also across products. While this transition will not happen overnight, it is fundamental to building the seamlessly integrated, privacy-first ecosystem our customers expect Proton to be.
(1): Simon Lewis, “A strategy for application implementation on multiple platforms”, 2023.
Our predictions for the internet in 2026
January 28, by Ben Wolford

The pace of change on the internet seems to be accelerating. AI has supercharged the sense of whiplash, with technological breakthroughs hitting the market as quickly as companies can produce them.
This volatility makes predicting trends a tricky business. But as a privacy tech company, anticipating trends is our job. For each of the past few years, we’ve published our best guesses about where the digital ship might be headed. It helps us develop new products that keep you in control of your data, and it helps you prepare for what might come next.
How our 2025 predictions turned out
At the beginning of last year, we predicted the rise of DIY surveillance, a flood of low-quality information, weaponized AI, reduced regulatory oversight, and a growing adoption of privacy tech.
Read our 2025 predictions here.
We scored pretty well:
- Mass surveillance made anyone a spy: This prediction came true, such as when a white hat hacker discovered unencrypted satellite communications in October. Or take the Waze app, which is a massive citizen surveillance tool. But the big story of the year was Flock Safety cameras, which made a splash in the US, where thousands of cities started using them to monitor the streets. A YouTuber showed they have security vulnerabilities anybody could exploit, and dozens of cameras were found to be broadcasting a livestream anybody could watch and download. When you mass-produce surveillance tech, you make everyone vulnerable to data breaches.
- Bad information flooded the internet: Once again, we got this one right. In fact, AI slop was chosen as the 2025 word of the year. Research has found AI “workslop” is hurting business productivity. Vibe coding is creating a proliferation of apps that don’t work. AI-assisted scholarly articles are wordy, low quality — and exploding. Restoring the information ecosystem is going to be a key challenge for years to come.
- Hacks went AI-powered: We predicted AI would be put to nefarious use in malware. This has accelerated faster than we expected. Anthropic announced that it had detected the first ever AI-planned and executed cyberattack, likely run by a Chinese-sponsored group. Phishing-as-a-service, which leverages AI, reached a peak in June 2025. No surprise that governments are also directly investing in AI tools for cyberwarfare. The US military is investing millions in companies developing such weapons.
- Regulations were put on hold: The governments of the world were distracted last year by wars, trade disputes, and economic instability. But they were also keen to manage their domestic industries with a light touch, cognizant of a deregulatory trend in the US. With AI in particular, the US took a dramatic step toward blocking legal guardrails on Big Tech by banning state AI regulation without offering a federal alternative. The one exception is the EU’s Chat Control proposal, which has gained, lost, and regained momentum over the years. However, this law would regulate tech in the wrong direction, making apps less secure and private.
- Millions more people adopted privacy tech: This prediction we can measure directly through our user growth, and indeed last year we gained users at an increasing rate over 2024. The pace of people switching to Proton’s ecosystem from those of Google, Apple, and Microsoft indicates greater awareness of the risks of sharing your personal data with ad-powered platforms with a poor track record for privacy. Proton VPN signups surged throughout the year whenever an app was blocked or an ISP censored a website.
Our predictions for 2026
The year ahead will be critical for the future of the internet. AI acceleration and political unrest are converging, with potentially explosive results.
The EU will keep pushing to break encryption
While EU governments seem to have backed away from an outright ban on encryption, the controversial Chat Control legislation is now in the final stages of negotiations. After years of political deadlock, the EU is now pushing toward a final deal by June 2026. Dangerous attempts to break encryption using a technology called client-side scanning seem to be off the table for now, but we need to remain vigilant and make sure they don’t come back.
The current debate centers on so-called voluntary scanning, a temporary rule set to expire in April 2026 that gives tech platforms the right to scan private messages for illegal material. We predict the EU will move to make this voluntary system permanent, while creating legal pressure that makes scanning private messages effectively unavoidable for companies.
While the situation seems to be moving in a better direction than expected on the Chat Control front, the EU is not giving up trying to find ways to break encryption. The ProtectEU strategy released last year includes a few concerning proposals such as creating a “Technology Roadmap on encryption” to build a means to enable police to break encryption. The EU is also planning to publish a proposal on new data retention rules this year.
More age verification laws
While framed as safety measures, age verification laws fundamentally change how everyone accesses the internet, expanding digital surveillance and creating data security hazards.
In the UK, the Online Safety Act set a precedent on July 25, 2025. Since then, websites hosting adult content have been legally required to implement age verification, forcing users to share sensitive financial or biometric data to access large parts of the web. Some US states have also passed age verification laws, and there’s a federal bill that could do the same for app stores. Australia subsequently implemented a national ban on social media for children under 16, bringing identity checks to more types of content. And now France is considering doing the same.
While addressing real social problems, age verification laws create data security risks. The byproduct of identity checks is massive, state-mandated databases of personal identity data held by third-party companies, creating new targets for hackers and the potential for misuse. In October 2025, Discord leaked just such a database of government IDs. We expect more age verification laws to be passed in 2026 — and probably some more accompanying data breaches.
More efforts to block VPNs in democratic countries
VPNs have long been the enemy of those looking to control narratives, and while democracies rarely ban them outright, they are using legal pressure to make them harder to use.
The UK is again at the forefront of this trend. A new bill under discussion could very soon force VPN providers to implement age verification and prohibit access to minors — a first for a democratic country.
Italy launched its Piracy Shield system last year, which is supposedly designed to block illegal sports streams. Part of the new law requires VPN and DNS providers to comply with blocking orders within 30 minutes. There is no judicial review before a block occurs, and the system has already caused significant collateral damage, once accidentally taking down legitimate services like Google Drive for millions of users.
Brazil is on the bandwagon too, issuing massive daily fines for individuals using a VPN to access blocked social media platforms. These soft blocks attempt to turn privacy providers into enforcement arms of the state. We predict that in 2026, more democratic nations will move toward these invisible firewalls, forcing users to choose between local regulations and their right to basic digital privacy.
An AI agent will go terribly wrong
AI is here, there, and everywhere, and people are increasingly giving robots permission to make decisions without any human involvement. For example, Google’s Vertex AI Agent Builder lets companies create AI bots that can connect to multiple systems, automate workflows, and complete tasks all on their own.
But, unlike traditional software, AI does not follow predictable logic paths. Programmers have dubbed this the Black Box Problem: We can see what goes in and what comes out, but we don’t always know exactly how or why AI makes the decisions it does. And when AI makes a mistake, it’s often difficult to see why, how, or what data influenced the decision. Agents have already gone rogue on a small scale, such as when one of them confessed to making “a catastrophic error in judgment” and deleting an entire database without asking.
As we delegate more operational tasks to automatic systems, small errors will cascade into larger failures. A huge public example is surely imminent. But whether it’s a financial flash crash or a huge data deletion, there’s a good chance we won’t even understand why it happened.
The real risk, though, is the gradual loss of human control. As more decisions are delegated to systems that cannot be meaningfully audited, organizations slowly lose the ability to govern their own digital environments.
Prediction markets in everything
Prediction markets are essentially a form of online gambling in which people can bet on pretty much anything. Companies like Polymarket and Kalshi let you take a stake in everything from snowfall totals to Rotten Tomatoes scores to whether countries will go to war.
In 2026, we predict that prediction markets will become a problem. Insiders will use their secret knowledge of government or corporate activities to cheat the markets (this has already happened). Users will take on debt to cover their losses, potentially leading to a consumer debt crisis.
And a less often discussed risk is to users’ privacy: To participate in markets, people have to link their financial accounts, crypto wallets, or government IDs, creating a highly specific data trail. Anyone watching will know exactly what you believe will happen and how much you are willing to bet on it.
People and businesses will ditch US platforms
Since the beginning of the internet, US tech platforms have essentially been the internet, no matter where you live in the world. We expect that to start changing this year, as significantly more people and especially businesses move away from the household-name platforms. The security and sovereignty risks of storing data on US servers have sharply increased in a short time.
Why so suddenly? Though it’s been around since 2018, the US CLOUD Act is one big reason. It allows American authorities to demand data from any US-based company, regardless of where in the world that data is physically stored. That’s a direct violation of local privacy laws like the GDPR, but it’s also a problem if your country comes into conflict with the US. Your data could become a bargaining chip.
Businesses are realizing that if their data is stored with a US provider, it is never truly under their control. Our research has found that people are also worried their data will be used as raw material for model training.
We believe all this will accelerate a shift toward digital sovereignty. At Proton, we’re already seeing it begin, as organizations look for encrypted alternatives that protect their data with end-to-end encryption in a politically neutral jurisdiction.
A lawsuit is challenging WhatsApp’s encryption claims. Here’s what we know.
January 27, by Edward Komenda

A newly filed class-action lawsuit in U.S. federal court alleges that WhatsApp’s promise of end-to-end encryption (E2EE) is misleading. The complaint claims that Meta employees are able to access the contents of WhatsApp messages through internal systems, despite repeated assurances that “not even WhatsApp” can read user messages.
The lawsuit, filed on Jan. 23 in the Northern District of California, makes sweeping allegations.
According to the complaint, unnamed whistleblowers allege that Meta staff can request access to WhatsApp messages through an internal tasking system. Once approved, the complaint claims, messages can be viewed in near real time and historically, without an additional decryption step.
The lawsuit argues that this alleged access contradicts WhatsApp’s public statements, marketing materials, and testimony to lawmakers asserting that message contents are accessible only to senders and recipients.
Meta denies the claims. In a statement to Bloomberg, Meta spokesperson Andy Stone said: “Any claim that people’s WhatsApp messages are not encrypted is categorically false and absurd. WhatsApp has been end-to-end encrypted using the Signal protocol for a decade. This lawsuit is a frivolous work of fiction.”
It’s important to distinguish between allegations and established facts. The complaint does not include technical evidence demonstrating a cryptographic backdoor or otherwise proving that WhatsApp’s encryption has been compromised. At this stage, the claims remain unproven.
Past reporting has shown that WhatsApp can access messages users manually report for abuse, and that it collects extensive metadata. That reporting, however, does not support claims of routine or universal access to message contents.
Still, the case raises a familiar and uncomfortable question: when a platform is closed-source and controlled by a single company, can users ultimately trust assurances they cannot independently verify?
End-to-end encryption is a technical guarantee that message contents are readable only by the sender and the intended recipient, because the keys required to decrypt messages exist solely on users’ devices and are never accessible to anyone else.
As this case unfolds, it reinforces a core principle of privacy: encryption should be verifiable, not a matter of trust.
ChatGPT ads are rolling out. Here’s why they’re worse than search ads — and what you can do
January 21, by Elena Constantinescu

OpenAI announced its plans to show ads to ChatGPT users, with tests starting for US users on the Free tier and the newly introduced Go plan in the coming weeks. The company said that:
- Ads won’t influence ChatGPT’s responses.
- Ads will be clearly labeled and visually separate.
- Conversations and personal data will not be shared with advertisers.
- Ads won’t be shown to users under 18, based on user disclosure or OpenAI’s own predictions.
- Ads won’t appear near sensitive or regulated topics such as health, mental health, or politics.
- The Pro, Business, and Enterprise tiers will remain ad-free.
An early example on mobile shows ads inserted beneath responses.

[Image credit: OpenAI]

X users were quick to point out how the ad overlay takes up meaningful space on small screens, which makes the experience worse overall.
While OpenAI claims ads will be “different” in ChatGPT, it has yet to explain what that actually means or how advertising can generate revenue without driving users away. To understand what’s at stake, let’s look at how ChatGPT arrived at this point, why ads are more intrusive in an AI assistant than in search, and what users can do about it. In this article, we will unpack:
- How ChatGPT’s relationship with ads evolved
- How ChatGPT is using Big Tech’s monetization model
- Why ChatGPT advertising is worse than search engine ads
- How to protect your privacy when using ChatGPT
- How to delete your ChatGPT account
- How to switch to a private AI assistant that never shows ads
How ChatGPT’s relationship with ads evolved
Despite relying on user data for AI training, OpenAI often says that privacy and ChatGPT go hand in hand thanks to controls that let you tighten privacy settings. Its position on advertising, however, has changed in a relatively short time.
CEO Sam Altman described the idea of combining AI and advertising as “uniquely unsettling” in May 2024, calling ads a “last resort” business model for ChatGPT. However, he did not rule out ads for ChatGPT, saying that the traditional advertising approach was problematic and would need to be reworked for an AI product.
In April 2025, the company introduced personalized product recommendations inside ChatGPT search, its embedded web search tool, for all users.
Later in November, an engineer uncovered advertising-related code in a ChatGPT Android beta app, suggesting that ad infrastructure was being tested behind the scenes. This was publicly dismissed by Nick Turley, head of ChatGPT, who claimed that there were “no live tests for ads” and that any images circulating about this were “either not real or not ads.”
About a month after Turley’s statement, OpenAI officially announced that ChatGPT would start testing advertising while simultaneously introducing a new low-cost tier: ChatGPT Go is currently the only paid plan with ads. Skeptics could argue that by placing ads in a newly created paid tier, OpenAI is able to reassure existing subscribers that their experience remains unchanged while setting a precedent that advertising can coexist with a paid ChatGPT offering if the company later chooses to expand it.
ChatGPT is using Big Tech’s monetization model
ChatGPT’s parent company has committed to spending around $1.4 trillion on data center infrastructure through the early 2030s, while OpenAI’s revenue currently stands at about $20 billion annually. Although the company expects growth from enterprise products, devices, and other future businesses, subscriptions alone haven’t scaled fast enough, as only about 5% of its roughly 800 million users pay.
That helps explain the company’s move toward ads as a way to monetize ChatGPT free users and those on the cheapest paid plan (so far), even if it comes at the cost of public trust.
It seems to follow a familiar Big Tech playbook, often described as enshittification, a term coined by Cory Doctorow:
- Launch something genuinely useful that people quickly learn to trust.
- Scale fast by building a massive base of engaged users and collect the behavioral data that comes with it.
- Let the product become part of daily habits at work, school, and home.
- Introduce ads gradually to make the product more attractive to advertisers and partners while degrading the user experience. Controls, opt-outs, and clear explanations become harder to find, as defaults increasingly favor the platform’s revenue goals.
- Present ads as a natural evolution of the product, while reassuring users that their experience and privacy remain unchanged.
OpenAI is hardly the first company to go down this path:
- When Google began in the late 1990s, it offered its search engine without advertising before switching to an ad-based business model.
- Now, Google shows ads in AI Overview and AI Mode, while adding Gemini across Gmail, Android, and everywhere else, including Apple’s ecosystem.
- Meta uses AI conversations and interactions to power personalized ads across Facebook, Instagram, WhatsApp, and the rest of its ecosystem.
- Microsoft Copilot surfaces ads in chats for shopping and other commercial queries.
- Perplexity displays sponsored follow-up questions alongside answers.
Why ChatGPT advertising is worse than search engine ads
A search engine usually requires clear buying intent before ads appear. For example, searching “why do my feet hurt” returns informational content, while “best shoes for flat feet” brings up ads. With an AI assistant designed to keep the conversation going, an initial question can gradually move from explanations to suggestions and solutions. Ads can slip into the process of defining what the “right” solution looks like, turning guidance that feels helpful into something that is actually transactional in nature.
OpenAI has said that ads won’t affect answers, but it’s still unclear how ads will be selected and measured for success. That ambiguity should be addressed, as it matters especially for people who turn to AI assistants when they’re unsure, stressed, or emotionally vulnerable — moments when trust is high, defenses are low, and selling through ads is easiest.
Plus, the private and personalized nature of these interactions makes it harder for users, researchers, and regulators to see patterns or hold systems accountable.
How to protect your privacy when using ChatGPT
OpenAI’s current plans exclude introducing advertising in ChatGPT Plus, Business, or Enterprise, so upgrading is one way to avoid ads for now. But there’s no guarantee this won’t change in the future.
Here are other steps you can take immediately to minimize how ads affect your experience:
- Use ChatGPT without logging in: Ads are not shown in logged-out sessions during the testing phase. This means you can use ChatGPT for free without ads by not signing in, which also limits how much activity can be linked back to your personal account.
- Be mindful of what you share: Avoid entering sensitive information such as personal identifiers, financial details, health data, or anything you wouldn’t want stored or analyzed. It’s safer to treat ChatGPT like a public-facing tool.
- Check for ad opt-out controls: Details about how ad personalization opt-outs will work have not yet been disclosed. But if you do start seeing ads in ChatGPT, check for any opt-out controls that allow you to turn off or limit how your data is used for targeting, and clear ad-related data.
How to delete your ChatGPT account
If you don’t feel comfortable using ChatGPT anymore, here’s how to delete your account using the mobile app:
1. Go to Settings → Data controls.
2. Select Delete OpenAI account.
3. Tap Delete OpenAI account to confirm.
Here’s how to delete your ChatGPT account through the browser app:
1. Go to Settings → Account and click Delete.
2. Enter your account email, type DELETE, and click Permanently delete my account.
Switch to a private AI assistant that never shows ads
If you’re worried about how your data could be used by AI assistants like ChatGPT for ad targeting, consider using Lumo, our private AI assistant. Lumo is open source and exclusively supported by our community of paying subscribers. It never logs your activity, shares it with anyone, or uses it for ads.
Our ad-free business model fully aligns with our privacy-first philosophy, and breaking that model would undermine the core of our entire business. Unlike ad-driven platforms, Proton is not owned by investors or venture capital firms, which means we don’t have any external pressure to monetize user data or engagement. We’re primarily owned by a nonprofit foundation whose role is to ensure Proton always upholds its mission: building an open and free internet where privacy is the default, not something users have to opt into.