Blog

nilGPT: Towards Verifiable Trust

August 28, 2025

TL;DR: nilGPT is now open source and comes with attestations. This means you can now verify that the code running nilGPT matches the published repository.

Two weeks ago we launched the first version of nilGPT with a clear mission: to give people a truly private option when using AI chatbots. This is important as most AI services today are not private by design—users are handing over their sensitive data and thoughts with no guarantees about how that information will be used or who may see it in the future.

Still, claiming privacy on its own isn't enough. Anybody can claim that their service is private.

We are taking the necessary steps to prove that our service is private by introducing a layer of verifiable trust to nilGPT, making it possible for anyone to confirm that nilGPT is behaving exactly as promised.

Alongside smaller upgrades like bug fixes, the two key features of this release are:

  1. open-sourcing the nilGPT repository.
  2. exposing attestations inside nilGPT.

Why should this matter to you? Because open source alone isn't a guarantee.

nilGPT could, in theory, run different code in the background. Attestations close that gap, giving you a way to check that the code running nilGPT is the same code published in the repository. That means you don't just have to trust us—you can verify it yourself.

In the rest of this post, we dive into the details.


1. Open-Sourcing the Code

The nilGPT codebase is now open source. You can find the repository here: https://github.com/NillionNetwork/nilgpt

This means anyone—not just our team—can look at the code, review it, and understand exactly what's happening under the hood.

Transparency is the foundation of verifiable trust, and open-sourcing the repository is our way of letting the world see, first-hand, how the app works.

For developers, it also opens the door to external contributions, or even running nilGPT entirely locally, if desired.

Open-sourcing the code, however, is not foolproof. Let's dive into why attestations are a vital component below.


2. Attestations Exposed in the App

This release also gives users the ability to see and verify attestations inside nilGPT. These attestations are generated in nilCC (where nilGPT runs) within a TEE.

A TEE, or trusted execution environment, is specialized hardware that executes code in a secure enclave.

In short, you can now verify, inside the app itself, that the code the app is running matches the open-source repository and is not a modified version.

nilGPT Attestation Interface

What are Attestations?

Attestations are cryptographic proofs generated by TEEs. They act like a seal of authenticity, proving that the program inside the TEE, and the TEE itself, have not been tampered with. The key part of the attestation that we are exposing in this release is the measurement hash.

What is a Measurement Hash and what do I need to check?

You can think of a measurement hash like a fingerprint for software. When the TEE starts up, it takes the code and configuration that will run inside it, and creates a unique cryptographic fingerprint (the hash).

  • If the code changes, the hash changes.
  • If the hash matches the hash produced from the open source repository, you know the code hasn't been altered.
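The fingerprint property can be illustrated with a short sketch. This uses generic SHA-256 hashing for illustration, not the exact measurement scheme the TEE uses:

```python
import hashlib

# Two almost-identical "code" payloads: a one-byte change
# produces a completely different fingerprint.
code_v1 = b"print('hello')"
code_v2 = b"print('hello!')"

hash_v1 = hashlib.sha256(code_v1).hexdigest()
hash_v2 = hashlib.sha256(code_v2).hexdigest()

print(hash_v1 == hash_v2)  # False: the code changed, so the hash changed
print(hash_v1 == hashlib.sha256(code_v1).hexdigest())  # True: same code, same hash
```

Because the hash is deterministic, anyone hashing the same code gets the same fingerprint, which is what makes independent verification possible.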

As of this release, the measurement hash generated in the TEE can be fetched directly in nilGPT, meaning users can:

  1. See the measurement hash generated in nilCC (the TEE).
  2. Compare it with a pre-computed hash (generated from the open-source repository).
  3. Confirm that the two match, giving confidence that the TEE is running exactly the same code as the open-source nilGPT.
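In code, the final comparison step amounts to a constant-time equality check between the two hex strings. The hash values below are placeholders for illustration, not real nilGPT measurements:

```python
import hmac

def verify_measurement(measured_hash: str, expected_hash: str) -> bool:
    """Return True iff the TEE-reported measurement matches the
    pre-computed hash of the open-source repository."""
    # hmac.compare_digest performs a timing-safe comparison
    return hmac.compare_digest(measured_hash.lower(), expected_hash.lower())

# Placeholder values for illustration only
expected = "ab" * 32  # hash pre-computed from the repository
reported = "ab" * 32  # hash fetched from the in-app attestation

print(verify_measurement(reported, expected))   # True: code matches
print(verify_measurement("cd" * 32, expected))  # False: modified code
```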

This is a big step toward verifiable trust, as it removes the need to simply trust that the code running is the code from the open-source repository.

Note that currently, the pre-computed hash of the open source code base is exactly that: pre-computed by us. In future versions, we will give anyone the ability to easily generate this hash independently, further strengthening trust and transparency.


Looking Ahead

This release marks another milestone in nilGPT's journey toward verifiable trust.

We plan to release new versions and features continuously over the next few months.

Some forthcoming features to get excited about include: voice notes, image and PDF uploads, dark mode, and more.

Stay tuned!

Why We Built nilGPT: Towards True Incognito Mode for ChatGPT

August 9, 2025

The Uncomfortable Truth About AI's Data Vacuum

The AI revolution has fundamentally altered how we work, think, and solve problems. LLMs have become the new universal interface for everything from creative writing to system architecture. The productivity gains are undeniable - but there's a brewing storm most people aren't paying attention to.

Every day, millions of users pour their most sensitive information into these systems - proprietary code, personal medical concerns, financial data - and even use them as their therapist. We're witnessing the construction of perhaps the largest unregulated repository of private information in history, controlled by a handful of entities with opaque governance structures.

The parallels to other data catastrophes are striking. Remember when 23andMe seemed like a reasonable trade-off? Upload your DNA, get ancestry insights; it's a no-brainer. Fast-forward to today: the company faces bankruptcy, and that genetic data becomes a commodity in the liquidation process. The question is not if, but when, the datasets amassed behind today's user-friendly AI tools will be leaked.

The asymmetry is stark. With modern AI tools, users experience immediate, tangible benefits while privacy costs are deferred, irreversible, and invisible until it's too late.

This isn't sustainable. Privacy-conscious users and advocates, in particular, should recognise that we need architecturally sound alternatives - not because everyone requires maximum privacy all the time, but because having that option should be a fundamental right and not a luxury.

The path forward isn't to abandon AI; instead, we must build choice into the system. We envision a world where cutting-edge AI handles everyday tasks, but privacy-preserving alternatives exist for sensitive moments. This isn't about paranoia, it is about having options when they matter most. This is why we decided to build our own private AI chat - nilGPT.

Defining Real Privacy in AI Systems

When we started architecting nilGPT, we deliberately avoided the incrementalist approach. Instead of asking "how can we make existing systems slightly more private," we asked "what would a genuinely private AI system look like from first principles?"

This led us to establish four core principles that guided our approach:

1. Breach-Resistant Architecture

Data at rest must be cryptographically protected in a way that makes centralised compromise meaningless. If data is leaked, there should be nothing useful to extract. This requires moving beyond access controls to cryptographic guarantees and decentralised storage architectures.

2. Confidential Computation

Privacy must extend through the entire computational pipeline. No entity - including the service operator - should be able to see user queries, model responses during inference, or chat history. This should not simply be promised in the privacy policy or terms of service; it is an architectural requirement enforced by hardware and cryptographic primitives.

3. Verifiable Guarantees

A truly private system eliminates the need for trust. Users should not have to accept vague legalese assurances about data handling. Instead, privacy guarantees should be verifiable through open-source components, hardware attestations, and provable cryptography.

4. Zero UX Compromises

Privacy cannot come at the cost of usability. Any viable private AI system must match the ergonomics users expect: familiar and responsive interfaces, persistent memory, and access to models capable of handling real-world complexity. Privacy through inconvenience is not privacy - it's a curiosity for enthusiasts.

We believe the principles above set a high bar, but any system that meets these criteria can legitimately claim to be privacy-preserving while providing real-world utility.

The Current Landscape: A Critical Assessment

It is clear that the demand for private AI is growing, and many are attempting to meet it. Below, we consider some of the trends and techniques we see being used, and explain why we do not believe they meet our criteria. The list is not meant to be exhaustive; instead, it is designed to give an indication of the types of solutions we see being employed.

Local LLMs: The Gold Standard with Practical Limitations

Running models locally is theoretically ideal: users have complete control, there are zero external dependencies, and maximum privacy is achieved because data never leaves the user's device. However, the practical constraints are often severe. High-quality models require substantial computational resources that most users simply don't have, and the technical barrier to getting set up means most users will never get past that stage. A purely local approach may also forego persistent chat history across devices - a significant UX drawback. In conclusion, while the local approach is almost certainly the best with respect to privacy for the most technically able, it is not a universal, scalable solution for everyone.

Centralised "Privacy-Washing" Solutions

Many services tout privacy while running architectures that are both centralised and effectively omniscient. Common patterns include third-party cloud integration with unclear encryption guarantees, or simply policy-based privacy ("we promise not to look") around data storage or inference. Many of these solutions offer state-of-the-art models and functionality - largely because they add no real privacy protections beyond ChatGPT and comparable services. These approaches fail the verifiability requirement: they rely on trust instead of technical guarantees. In fact, the situation is worse - they mislead users into believing they're choosing stronger privacy, when in fact they're just accepting new terms and conditions. This false sense of security encourages more sensitive use, increasing the damage of possible future data leaks.

Pseudo-Decentralisation

Some services market "decentralised AI" as offering greater user privacy. In reality, this often just means multiple untrusted servers (still controlled by a single entity) run the models and can view all inputs and outputs in plaintext. If inference still occurs on centralised servers with plaintext access to user data, the number of nodes in the network is irrelevant. The infrastructure, while distributed, is subject to the same trust assumptions that apply to centralised architectures.

"No Personal Data Storage" Fallacy

A subset of solutions attempt to thread the needle by claiming they don't store personal identifiers - no emails, IP addresses, or account data. This approach, unfortunately, fundamentally misunderstands the privacy problem. When a user inputs sensitive information (medical records, financial data, proprietary code, and so on), that content is transmitted, processed, and stored in plaintext on the provider's servers. The fact that it isn't associated with your email address is irrelevant: the sensitive data itself has already been exposed during inference. This is privacy theatre that conflates identity correlation with data exposure - and it is dangerous because it can, again, lead users to trust the service more than they should.

nilGPT's Technical Architecture & Trust Assumptions

Here we explain the current architecture of nilGPT and why we believe it is a strong Private AI offering.

Architecture Overview

nilGPT relies on Nillion's Blind Modules:

  • nilCC hosts the backend of nilGPT inside a secure enclave (TEE) on a bare-metal server, meaning all the standard protections of TEEs apply and no third-party cloud provider can access execution logs or user data.
  • nilAI is used for inference when the user queries nilGPT. nilAI runs inside nilCC and provides private AI inference for developers via an OpenAI-compatible RESTful API. Models served in nilAI are loaded into a hardware-protected enclave (TEE), and neither the input queries nor the model responses are ever exposed to the host system or external observers.
  • nilDB is used to store chat history, so the user can access it at a future date. The history is encrypted with a user supplied passphrase and then stored in secret-shared form across a decentralised cluster of nilDB nodes (each operated by a distinct entity).
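Since nilAI exposes an OpenAI-compatible REST API, a developer query is just a standard chat-completions request body sent to a nilAI endpoint. A minimal sketch of building such a payload follows; the base URL and model name are placeholder assumptions, not real nilAI values:

```python
import json

# Placeholder endpoint and model name - consult the nilAI docs for real values
NILAI_BASE_URL = "https://nilai.example.com/v1"
MODEL = "example-model"

def build_chat_request(user_message: str) -> dict:
    """Build a standard OpenAI-style chat-completions request body."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("What data does nilGPT store?")
# This body would be POSTed to f"{NILAI_BASE_URL}/chat/completions"
# with an authorization header; inference itself runs inside the TEE.
print(json.dumps(payload, indent=2))
```

Because the interface is OpenAI-compatible, existing client libraries can be pointed at a nilAI endpoint without code changes beyond the base URL and credentials.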

The diagram below outlines the flow of data between these components and nilGPT:

nilGPT Architecture Diagram showing the data flow between nilGPT frontend, nilCC (containing nilAI and nilGPT backend), and the nilDB cluster with three decentralized nodes

When a user sends a question to nilGPT, the input is transmitted from the frontend (running in the browser) through the nilGPT backend to nilAI, which generates a response. Once the response is returned to the frontend, it is encrypted locally using a passphrase provided by the user. The encrypted data is then secret-shared and stored across three nilDB nodes.

Local encryption ensures that even if all nodes were compromised and an attacker obtained all secret shares, no information about the user's chat could be revealed - the reconstructed data remains encrypted and meaningless without the passphrase. At the same time, secret sharing ensures that no individual node can reconstruct the data even if the passphrase is leaked to the nodes.

When a user loads chat history in nilGPT (for example, when logging in from another device), the secret shares are retrieved from the nilDB nodes and decrypted using the user's passphrase. If the passphrase is incorrect, the frontend cannot reconstruct the original conversation from the encrypted shares.
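The encrypt-then-share flow above can be sketched as a toy roundtrip: a passphrase-derived keystream (a stand-in for real authenticated encryption) combined with XOR-based 3-of-3 secret sharing. This illustrates the shape of the scheme under those simplifying assumptions, not nilGPT's actual implementation:

```python
import hashlib
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(passphrase: str, salt: bytes, length: int) -> bytes:
    # PBKDF2-derived keystream; a real system would use authenticated
    # encryption (e.g. AES-GCM) rather than a raw XOR stream.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000, dklen=length)

def encrypt(passphrase: str, data: bytes, salt: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key decrypts.
    return xor_bytes(data, keystream(passphrase, salt, len(data)))

def split_shares(ciphertext: bytes, n: int = 3) -> list[bytes]:
    # XOR n-of-n secret sharing: n-1 random shares, last = XOR of the rest
    shares = [os.urandom(len(ciphertext)) for _ in range(n - 1)]
    last = ciphertext
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine_shares(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

chat = b"my private chat history"
salt = os.urandom(16)

ciphertext = encrypt("correct horse", chat, salt)  # local encryption
shares = split_shares(ciphertext)                  # one share per nilDB node

recombined = combine_shares(shares)                # all 3 shares needed
recovered = encrypt("correct horse", recombined, salt)
print(recovered == chat)  # True
```

Each individual share is indistinguishable from random bytes, so no single node learns anything; and even recombining all three only yields the ciphertext, which is useless without the passphrase.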

Trust Assumptions

While nilGPT provides a strong baseline of security (via encryption, MPC, and TEEs) and some verifiability (as any standard browser can be used to inspect the client-side code to verify it encrypts the chat history), several trust assumptions remain. These are known limitations - the result of prioritising a fast release to deliver a private AI service as quickly as possible. Our roadmap includes steps to address and mitigate each of these.

1. Attestations Not Yet Published

The nilGPT backend and nilAI both run inside nilCC, which operates within a Trusted Execution Environment (TEE) on bare-metal servers. This setup provides standard TEE protections, ensuring that no third-party cloud provider can access logs or outputs. However, attestation reports from these TEEs - covering nilCC, nilAI, and CPU/GPU components - are not currently available to end users. Without these reports, it is not possible to independently verify that all components to which user data is exposed run inside TEEs. We are actively addressing this and plan to expose attestation flows to the end user as soon as they are implemented.

2. nilGPT Code Not Yet Open Source

Currently, nilGPT's code is not yet open source. Open-sourcing the code is closely related to publishing attestations: for anyone to fully verify the attestations, they need a hash (an immutable identifier) of the source code claimed to be running under the hood, as well as a published attestation report.

3. nilCC Code Not Yet Open Source

Similarly, nilCC is not yet open source due to rapid ongoing development. Other core components of the blind modules, such as nilDB and nilAI, are however already open source and available for review.

4. No Independent Audit

Neither the blind modules nor nilGPT have undergone an external audit at this stage. Our plan is to first open source the relevant code, then engage independent third parties for peer review and formal security audits.

Conclusion

In this article, we introduced nilGPT, outlined our design criteria and goals, shared our perspective on alternative approaches to private AI, and examined the architecture along with the remaining trust assumptions.

While there is still work to be done, we are actively addressing each of these assumptions. We believe it is important to make the product available now: even in its current form, nilGPT sets a new benchmark for private AI.

Looking ahead, our roadmap goes beyond mitigating the trust assumptions. We are focused on expanding capabilities with larger and more powerful multimodal models, integrated web search, voice note functionality, and token-gated features that position the NIL token as a first-class part of the experience.

We are excited about the journey ahead for nilGPT - and we invite you to join us as we continue to push the boundaries of what private AI can be.