Cryptography

Privacy, integrity, and proof systems that let autonomous agents act without surrendering their data, prompts, or trust assumptions.

Enshrining autonomous intelligence in decentralized networks presents a distinct set of cryptographic challenges. At Ritual, the goal is not merely to verify off-chain computation on a user-by-user basis, which is predominantly how the rest of the ZKML space operates; the goal is to enshrine AI execution natively in consensus, composable with the chain’s existing guarantees. This is how computation begins to inherit the properties of blockchain systems: programmability, persistence, coordination, and trust minimization 1.

That specific objective sits within a broader thesis. We think autonomous intelligence is one of the most important problems of the century, and that increasingly capable agents will become both more common and more inevitable. The long-run trajectory, rapidly approaching, is autonomous intelligence that is indistinguishable from humans. With that premise, cryptography cannot be treated as the narrow verification layer it canonically is. It is the machinery that determines whether an agent can think privately, act verifiably, coordinate economically, and remain dependent on neither blind trust nor full public disclosure.

Ritual’s thesis on autonomous intelligence is pointedly less about the models in isolation and more about the representation, orchestration, and execution substrate around them. Once that becomes the focus, cryptographic primitives that may look expensive from the perspective of model benchmarking become highly relevant from the perspective of autonomous systems design. By jointly developing zero-knowledge proofs, trusted execution environments, and privacy-preserving inference methods, Ritual is trying to combine computational integrity and privacy in ways that are directly useful for autonomous systems. Integrity matters because an autonomous agent cannot be trusted to act unless its computation is trust-minimized. Privacy matters because an autonomous agent cannot be sovereign if every prompt, strategy, key, or action policy is externally visible.

Cryptography as the integrity and privacy layer of autonomous systems

The easiest way to misunderstand this section is to treat it as a collection of unrelated primitives. It is better understood as a stack. Some schemes help prove that a computation was carried out correctly. Others help preserve confidentiality while computation is taking place. Others occupy the space in between, where privacy itself can become a source of verifiability or where probabilistic privacy is the only way to make large-scale private inference practical. Taken together, these techniques define what kinds of cognition an on-chain system can trust, what kinds of state it can keep private, and what kinds of economic activity it can support without collapsing back into a centralized operator.

That is why cryptography sits so naturally inside Ritual’s broader autonomy thesis. Privacy is one of the clearest desiderata. Computational sovereignty is another: agents need access to trustworthy execution that is actually deployable at network scale. The through-line is that autonomous systems need more than intelligence in the abstract.

Rethinking proof systems for Ritual consensus

Historically, general-purpose zkVMs and rollups have had to balance prover time, verifier time, and proof size because their outputs are ultimately verified or settled in environments where verification cost is the central bottleneck. Ritual’s requirements are different. In the Symphony model, proof systems sit inside an execution-aware consensus architecture rather than a simple on-chain replay model, and that changes the shape of the tradeoff 1.

The most important constraint for Ritual is often prover speed. If a proof path is part of the route by which a workload eventually settles, then extremely slow proving degrades the practical liveness of the async lane. That does not mean every proof sits directly on the critical path, but it does mean the network strongly values proof systems that keep the settlement path efficient and scalable. By contrast, Ritual can often relax the requirement that every node verify every proof in the same way, because distributed verification and selective assurance are already part of the architecture 1.

This is why, in Symphony’s distributed verification model, rather than forcing every validator to fully verify every proof, the protocol can exploit structure and symmetry in the workload, decompose it, prove each part of the decomposition in parallel, and distribute verification across committees 1. This gives Ritual a significant advantage over other practical deployments, where this type of parallelization would require an extra aggregation step that cancels out its gains. Instead of expensive aggregation, Ritual can simply verify each proof in parallel with its distributed verification infrastructure, which buys materially faster proving and better scalability for heterogeneous compute. This has profound implications for prover speed and enables trustworthy AI execution compatible with a chain meant to host non-deterministic, high-latency workloads.

Engineering fast provers for AI workloads

Handling AI inference inside proof systems remains difficult. The inputs are large, the arithmetic is expensive, and the non-linearities used in modern models are awkward to arithmetize. But the relevant design question for Ritual is broader than how to prove zkLLMs in the abstract. The question is how to engineer proof systems for AI-native heterogeneous compute under the constraints imposed by execution-aware consensus.

One promising direction is parallelized computation sharding. Rather than proving a large inference or other AI workload as a single monolithic block, the computation can be segmented and proven in parallel. This aligns closely with Symphony’s formal treatment of structure-exploiting decomposition and distributed verification described above 1. In Ritual’s setting, that can be preferable to aggressive aggregation, because the network can verify proof segments in parallel rather than paying a heavy aggregation overhead.
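The shape of this pipeline can be sketched as follows. The "prover" and "verifier" here are hash-based stand-ins for a real proof system; only the segment-and-verify-in-parallel structure is the point, and all function names are illustrative.

```python
# Illustrative mock of computation sharding with distributed verification.
# prove_segment/verify_segment are hash-based stand-ins for a real proof
# system; the parallel segment structure, not the crypto, is what matters.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def prove_segment(segment: bytes) -> bytes:
    # Stand-in for a real prover over one shard of the computation.
    return hashlib.sha256(b"proof:" + segment).digest()

def verify_segment(segment: bytes, proof: bytes) -> bool:
    # Each committee re-checks only its own shard's proof.
    return proof == hashlib.sha256(b"proof:" + segment).digest()

def prove_sharded(workload: bytes, shards: int) -> list[tuple[bytes, bytes]]:
    # Decompose the workload and prove every segment in parallel.
    step = max(1, len(workload) // shards)
    segments = [workload[i:i + step] for i in range(0, len(workload), step)]
    with ThreadPoolExecutor() as pool:
        proofs = list(pool.map(prove_segment, segments))
    return list(zip(segments, proofs))

def verify_distributed(pairs) -> bool:
    # No aggregation step: each (segment, proof) pair is checked independently.
    with ThreadPoolExecutor() as pool:
        return all(pool.map(lambda p: verify_segment(*p), pairs))

pairs = prove_sharded(b"large-ai-workload-trace" * 100, shards=8)
assert verify_distributed(pairs)
```

The absence of an aggregation step is the key structural difference: each (segment, proof) pair is independently checkable, so verification parallelizes as cleanly as proving.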

Another class of techniques involves strategic quantization, where floating-point weights and activations are mapped into lower-precision integer representations so they can be handled more efficiently inside finite fields, the native language of proof systems. Closely related is the mapping of complicated non-linear activation functions to piecewise-linear approximations, turning them into a problem proof systems can handle efficiently. When this is not possible, lookup tables are a generic way to simplify the proof of complex operations. These techniques have been widely explored in zkML and remain useful wherever the proving path for AI workloads must be made materially faster.
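Two of these tricks can be shown in a few lines: symmetric integer quantization, and a precomputed lookup table standing in for a non-linearity (GELU here). The bit width, scale choice, and table granularity are illustrative assumptions, not Ritual's parameters.

```python
# Sketch of two zkML-style tricks: symmetric int8 quantization of weights,
# and a lookup table replacing a non-linear activation (GELU) so the prover
# shows table membership instead of arithmetizing erf inside a circuit.
# Bit width and scale choice here are illustrative, not Ritual's parameters.
import math

def quantize(values: list[float], bits: int = 8) -> tuple[list[int], float]:
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [x * scale for x in q]

def gelu(x: float) -> float:
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def build_gelu_table(scale: float, bits: int = 8) -> dict[int, float]:
    # One entry per quantized input value: 256 rows for int8.
    qmax = 2 ** (bits - 1) - 1
    return {q: gelu(q * scale) for q in range(-qmax - 1, qmax + 1)}

weights = [0.31, -1.27, 0.05, 0.99]
q, s = quantize(weights)
table = build_gelu_table(s)
approx = [table[v] for v in q]              # activation via table lookup
exact = [gelu(w) for w in weights]
assert all(abs(a - e) < s for a, e in zip(approx, exact))
```

The quantization error is bounded by half the scale, and because the lookup table is exact over the quantized domain, the activation adds no further approximation error.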

Lastly, we have spent significant time looking at alternative polynomial commitment schemes (PCS), which typically dictate the prover’s core bottleneck. While code-based SNARKs and STARKs offer excellent verifier speeds and are therefore the choice of many zk-rollups such as Starknet and zkSync, Ritual could instead lean towards lattice-based SNARKs operating over rings. These structures are highly compatible with AVX (Advanced Vector Extensions) vectorization, allowing for SIMD (Single Instruction, Multiple Data) processing, which can accelerate prover time by up to 50x. However, this comes at the cost of a slower verifier. Another option is to deploy code-based SNARKs over rings, so that we benefit from AVX-512 instructions while still maintaining reasonable verifier time.

Verifiable inference from privacy-preserving methods

Ritual has also explored approaches to verifiable inference that do not rely exclusively on either SNARKs or TEEs. In Priveri, the central observation is that privacy can beget verifiability: if inference is already performed with a privacy-preserving mechanism, that machinery can be reused to obtain statistical verification guarantees at much lower cost than full proof-of-inference systems. The basic protocol inserts hidden sentinel tokens whose expected logits are precomputed as a model fingerprint. Privacy, for example via SIGMA-style two-party SMPC, hides the locations of those sentinels from the provider, making model-substitution attacks detectable. However, Protocol 1 alone does not force correct computation over all token positions under SMPC, because it is vulnerable to subset attacks; the stronger all-token check comes from Protocol 2, which adds hidden noise and verifies it from returned hidden states. On Llama-2-7B, the paper reports that Protocol 2 with SIGMA is roughly 16x faster than zkLLM for a 1-token response, but with weaker, statistical guarantees rather than ZK-equivalent guarantees.
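The sentinel idea behind Protocol 1 can be illustrated in the clear. All names, the dictionary-based "fingerprint", and the tolerance are stand-ins for illustration; the actual protocol hides the sentinel positions cryptographically (e.g. via SMPC) rather than relying on a trusted verifier holding them.

```python
# Toy version of the sentinel-token check: the client plants tokens at
# secret positions and compares the provider's returned logits there
# against a precomputed fingerprint. The dict-based "model", names, and
# tolerance are illustrative; the real protocol hides sentinel positions
# via SMPC rather than trusting the verifier to hold them privately.
import random

FINGERPRINT = {101: 4.2, 102: -1.7, 103: 0.9}   # expected logit per sentinel

def insert_sentinels(prompt, sentinel_ids, rng):
    positions = sorted(rng.sample(range(len(prompt) + 1), len(sentinel_ids)))
    out = list(prompt)
    for offset, (pos, tok) in enumerate(zip(positions, sentinel_ids)):
        out.insert(pos + offset, tok)
    return out, [pos + i for i, pos in enumerate(positions)]

def honest_provider(tokens):
    # Returns one "logit" per position; sentinels match the fingerprint.
    return [FINGERPRINT.get(t, 0.0) for t in tokens]

def verify(logits, positions, sentinel_ids, tol=1e-3):
    return all(abs(logits[p] - FINGERPRINT[s]) <= tol
               for p, s in zip(positions, sentinel_ids))

rng = random.Random(7)
tokens, positions = insert_sentinels([1, 2, 3, 4, 5], [101, 102, 103], rng)
assert verify(honest_provider(tokens), positions, [101, 102, 103])
# A substituted model that does not know the fingerprint fails the check.
assert not verify([0.0] * len(tokens), positions, [101, 102, 103])
```

Note what this toy does and does not show: a substituted model is caught at sentinel positions, but a provider that computes only the sentinel positions correctly would pass, which is exactly the subset attack that motivates Protocol 2.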

This matters for the broader autonomy thesis because it reveals something deeper about the relationship between verifiability and privacy. Privacy is not only a defensive property. Under the right mechanism, it can also act as a verifiability layer. That is especially important for autonomous systems, because privacy and integrity often have to be achieved together, not separately. An agent that must expose its prompts, hidden state, or private inputs in order to receive trustworthy inference is not actually independent.

Privacy-preserving smart contracts via FHE

While SNARKs focus on correctness, fully homomorphic encryption (FHE) focuses on privacy. FHE enables smart contracts to compute over encrypted state without revealing the underlying data. That opens the door to use cases such as private financial logic, secure identity systems, and AI-native applications where prompts, embeddings, or intermediate state should remain hidden.
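What "computing over encrypted state" means can be shown with Paillier encryption. Paillier is only additively homomorphic, a far weaker cousin of FHE, but it captures the core idea: an evaluator combines ciphertexts without ever seeing the plaintexts. The primes below are toy-sized demo values.

```python
# Computing on encrypted data, illustrated with Paillier encryption.
# Paillier is only additively homomorphic (a much weaker cousin of FHE),
# but it shows the core idea: multiplying two ciphertexts yields an
# encryption of the sum of their plaintexts, without decrypting either.
# Toy-sized primes for demonstration only; never use in production.
import math, random

p, q = 999_983, 1_000_003                     # demo primes, far too small
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                          # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)                # assume gcd(r, n) == 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n            # L(x) = (x - 1) / n

a, b = 1234, 5678
c = (encrypt(a) * encrypt(b)) % n2            # homomorphic addition
assert decrypt(c) == a + b
```

Fully homomorphic schemes extend this to arbitrary circuits (both addition and multiplication on ciphertexts), which is what makes general private smart-contract logic possible, at a far higher computational cost.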

This is one of the clearest use cases of cryptography at Ritual. There is no meaningful autonomy if an agent must reveal its internal state every time it thinks. For autonomous systems, privacy is not only about user confidentiality. It is also about preserving the integrity of strategy, memory, prompts, and private decision boundaries.

However, privacy alone is not enough. In an execution-aware chain, the speed of encrypted computation still matters because extremely slow execution cannot live on the consensus critical path. This is another place where verifiability and privacy intersect: the encrypted computation is too slow to run on-chain, but it can be computed off-chain and then proven correct in time for settlement. In other words, the chain can commit to the workload immediately and then settle once the relevant attestation or proof arrives 1. Deferred settlement is what allows confidentiality-heavy workloads to coexist with liveness.

Besides efficiency, correctness is the other open question, which is why verifiable FHE remains such an important direction. If the network is to support large-scale confidential heterogeneous compute, then users need not only privacy of inputs and state but also trustworthy evidence that the encrypted computation was carried out correctly. This is exactly the kind of frontier that matters if autonomous systems are to think privately and act with credible guarantees.

Scaling privacy with statistical MPC

For scenarios where the full cryptographic guarantees of FHE are too slow or too expensive, Ritual has explored a different point on the security-performance frontier through Cascade. Cascade is best understood not simply as an alternative but as a system operating in a statistical, probabilistic-security regime. Instead of relying on heavy cryptography at every step, it uses token-level sharding in the sequence dimension so that no single server sees enough of the prompt to reconstruct it 3. The result is a system designed to trade stronger formal privacy guarantees for much better performance and scalability, while remaining resistant both to a recent generalized attack against other statistical privacy schemes and to learning-based attacks 3.
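The core sharding move can be sketched simply: scatter the prompt's tokens across servers so that each one holds only a random subset. The shuffle-based assignment below is an illustrative stand-in, not Cascade's actual scheme.

```python
# Illustrative sequence-dimension sharding in the spirit of Cascade:
# the prompt's tokens are scattered across servers so no single server
# holds enough of the text to reconstruct it. The seeded-shuffle
# assignment is a stand-in for Cascade's actual sharding scheme.
import random

def shard_tokens(tokens: list[str], n_servers: int, seed: int):
    rng = random.Random(seed)                 # secret held by the client
    order = list(range(len(tokens)))
    rng.shuffle(order)
    shards = [[] for _ in range(n_servers)]
    for rank, pos in enumerate(order):
        shards[rank % n_servers].append((pos, tokens[pos]))
    return shards

prompt = "transfer 5 eth to alice if price drops below 2k".split()
shards = shard_tokens(prompt, n_servers=3, seed=42)
# Each server sees only a random ~1/3 subset of the tokens...
assert all(len(s) <= -(-len(prompt) // 3) for s in shards)
# ...while together the shards still cover every position exactly once.
assert sorted(pos for s in shards for pos, _ in s) == list(range(len(prompt)))
```

The security here is statistical, not cryptographic: a single server's view leaks some information, and the design question is whether reconstruction from that view is hard enough for the application at hand.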

This matters for autonomous systems because not every workload needs the same level of assurance. Sometimes the right solution is not the heaviest cryptographic confidentiality available but rather a much faster probabilistic defense that still raises the attacker’s cost sharply enough for the application at hand. Ritual’s research philosophy is to widen the set of viable points on that frontier rather than pretend there is one universal answer. The practical lesson is that the privacy layer for autonomous intelligence will almost certainly be plural.

ZK-secured auction mechanisms for heterogeneous compute

As Ritual expands to support heterogeneous, high-latency workloads such as inference, video generation, and other compute-heavy tasks, pricing becomes far more difficult than ordinary gas estimation. The network must allocate different kinds of compute across a changing landscape of supply and demand. This is where mechanism design, zero-knowledge proofs, and consensus meet.

Historically, sophisticated auction mechanisms have been difficult to deploy in blockchain environments because they often require interactivity, feedback loops, or multiple bidding rounds that sit uncomfortably within low-latency consensus. Ritual’s approach instead focuses on non-interactive auction computation that can be run off-chain and then proven correct 4. This line of work is especially important because autonomous systems do not merely need compute; they need markets for compute that remain efficient even when workloads, hardware, and latency requirements differ drastically.
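A minimal version of the idea can be sketched with a sealed-bid second-price (Vickrey) auction settled from hash commitments. This is only a stand-in: here anyone can re-check the outcome by re-running `settle` on the reveals, whereas the line of work above replaces that public re-check with a zero-knowledge proof, which this sketch does not implement.

```python
# Minimal sealed-bid second-price (Vickrey) auction via hash commitments:
# bidders commit, then reveal; settlement runs off-chain and anyone can
# re-check it. A ZK deployment would replace the public re-check with a
# proof of correct settlement; this sketch does not implement that part.
import hashlib, secrets

def commit(bidder: str, amount: int, salt: bytes) -> bytes:
    return hashlib.sha256(f"{bidder}:{amount}:".encode() + salt).digest()

def settle(reveals):
    # reveals: (bidder, amount, salt, commitment); bad reveals are dropped.
    valid = [(b, a) for b, a, s, c in reveals if commit(b, a, s) == c]
    ranked = sorted(valid, key=lambda x: x[1], reverse=True)
    winner, _ = ranked[0]
    # Winner pays the second-highest bid (or their own, if unopposed).
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

bids = [("alice", 90), ("bob", 70), ("carol", 85)]
reveals = []
for bidder, amount in bids:
    salt = secrets.token_bytes(16)
    reveals.append((bidder, amount, salt, commit(bidder, amount, salt)))

winner, price = settle(reveals)
assert (winner, price) == ("alice", 85)   # winner pays second-highest bid
```

Even this toy shows why non-interactivity matters: the commit and reveal phases are the only on-chain touchpoints, while the ranking logic, however sophisticated, runs entirely off-chain.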

This is also where cryptography begins to touch financial sovereignty in a more direct way. If agents are going to compete for scarce compute, they need outcomes that are not simply opaque decisions made by a centralized intermediary. Cryptographic proofs make it possible to run richer market logic off-chain while still giving the network a principled reason to trust the result.

Conclusion

Taken together, Ritual’s cryptography work is not about adding privacy or verification as optional features around an otherwise conventional chain. It is about building the integrity and confidentiality stack required for autonomous computation to become a consensus-aware, programmable, and economically meaningful part of the network. Proof systems matter because autonomous systems need trustworthy execution. Privacy-preserving methods matter because autonomous systems need room to think, remember, and act without exposing every internal state variable. Market mechanisms matter because autonomous systems need access to scarce compute under rules that can be trusted and optimized for the economic incentives of various actors in the system.

Seen this way, cryptography is not adjacent to Ritual’s thesis. It is one of the conditions under which the thesis becomes real. If autonomous intelligence is to become first-class on-chain, it must be able to execute under credible guarantees, preserve private state across heterogeneous workloads, and participate in markets without surrendering trust to a centralized operator. That is why Ritual is researching proof systems, privacy-preserving inference, FHE, statistical MPC, and ZK-secured compute markets together rather than in isolation. Each scheme occupies a different point in the integrity-privacy-performance design space. Together, they define the kinds of autonomous systems the network can actually support.

In that sense, Ritual’s cryptography program is essential to the broader ambition. It is part of the infrastructure that could allow autonomous intelligence to move from a model demo into a durable, sovereign, and economically active participant on the internet.