Mechanism Design
Pricing, allocation, and incentive design for autonomous systems operating in scarce, heterogeneous compute markets.
A new kind of L1, one for autonomous agents
Modern blockchains provide an implementation of a “computer in the sky”; at a high level, these distributed systems enable access to a computer satisfying three properties:
- Anyone can view and modify the computer’s state
- It is difficult for the computer to make an illegal state transition
- It is difficult to shut down the computer
There are several important terms that are up for interpretation. In particular, what does “difficult” mean? What does “illegal” mean? Who is “anyone”? Every Layer 1 blockchain protocol must make specific interpretations. Ritual is no different. Different interpretations lead to different trade-offs.
Our goal at Ritual is to build a computer in the sky that can satisfy heterogeneous demands for computation (if that sounds familiar, it’s because we’ve written about this before). Specifically, we would like to enable users to run sophisticated computing workloads, such as LLM inference, with the ultimate goal of enabling autonomous intelligence as a first-class citizen on the chain. While modern protocols are Turing-complete, even high-performance L1s have severe computational limitations that prevent sophisticated workloads like inference from running on-chain within a single state transition.
That objective sits inside a broader thesis. We think autonomous intelligence is one of the most important problems of the century, and that it will increasingly become a first-class participant in digital systems. The long-run horizon is autonomous intelligence indistinguishable from humans. But getting there is not only a model problem. It is also a market-design problem. A computer in the sky is not enough on its own if the rules governing compute, sequencing, and inclusion make autonomous systems fragile, dependent, or economically non-viable.
In order to successfully build this computer in the sky, we’ve put together ideas from mechanism design, cryptography, distributed systems, and AI. In what follows, we’ll outline the work we’ve been doing on the mechanism design side since inception.
Fees and Rewards
Blockchain networks have finite computational resources. It’s therefore impossible to allow truly anyone to modify the computer’s state at any fixed point in time. In light of that, every blockchain thus far instead aims for a relaxed guarantee: “Anyone who is willing to pay enough can modify the computer’s state”. The subject of how to set these fees for users has led to a very rich line of work, both within the industry and in academia.
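One concrete, widely deployed point in this design space (illustrative only, not Ritual's mechanism) is EIP-1559-style congestion pricing, which adjusts a protocol-set base fee toward a target block utilization. A minimal sketch, with invented constants:

```python
# Illustrative EIP-1559-style base-fee update rule.
# The constants and function name here are ours, chosen for clarity.

TARGET_GAS = 15_000_000      # desired gas used per block
MAX_CHANGE = 1 / 8           # maximum fractional fee change per block

def next_base_fee(base_fee: float, gas_used: int) -> float:
    """Raise the fee when blocks run fuller than target, lower it when emptier."""
    utilization_delta = (gas_used - TARGET_GAS) / TARGET_GAS
    return base_fee * (1 + MAX_CHANGE * utilization_delta)

print(next_base_fee(100.0, 30_000_000))   # full block → 112.5
print(next_base_fee(100.0, 0))            # empty block → 87.5
```

The feedback loop prices congestion automatically: sustained demand above target compounds the fee upward until demand falls back to target.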
Separately, in order to make it difficult for the computer to be shut down or make illegal state transitions, blockchain protocols utilize the services of independent validators. These validators must be compensated for their work, or else they will not continue to provide their services. In practice, validators are compensated in a myriad of ways, including explicit rewards for participating in consensus, additional yield that the protocol provides for holding staked network tokens, tips from users who would like to include their transactions, and even by utilizing their ability to include and sequence transactions to extract additional value from other users in application-dependent ways.
It’s clear that some optimizations can be made to better align the incentives of the network’s service providers and users. Furthermore, Ritual’s core goal of enabling autonomous intelligence vastly complicates the problem of setting fees and rewards by inducing potentially huge asymmetries between the workloads and responsibilities of different service providers. In that sense, fee design is not only a question of congestion pricing. It is also part of the machinery through which autonomous systems acquire computation and exercise a meaningful degree of financial sovereignty under scarcity.
Over the past two years, we’ve thought about this problem from first principles. We’d like to maximize the economic value of the transactions that are executed by the network while also respecting the incentives of both users and network service providers. We’d further like both users and service providers to have a simple user experience. We’ve developed a new market mechanism from scratch to satisfy these properties. At a high level, it works by utilizing the services of sophisticated market-makers. These market-makers compete to find valuable allocations of compute workloads to service providers and prices that will be accepted by all parties involved. For us, this is part of what it means to treat computation itself as first-class blockspace: not all workloads look alike, and a network meant to support autonomous agents cannot price cognition as though it were a homogeneous commodity.
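As a toy illustration of the competition described above, here is a minimal sketch in which market-makers submit candidate allocations with prices, any proposal that some user or provider would reject is discarded, and the highest-surplus survivor wins. All data structures and numbers are invented for illustration; the real mechanism is described in the linked posts.

```python
# Toy sketch of market-makers competing over allocations and prices.
# Data structures and values are invented, not the actual mechanism.

users = {"u1": {"job": "inference", "value": 10.0},
         "u2": {"job": "training",  "value": 30.0}}
providers = {"p1": {"cost": {"inference": 4.0, "training": 25.0}},
             "p2": {"cost": {"inference": 6.0, "training": 18.0}}}

def surplus(proposal):
    """Total value created minus cost incurred, or None if any party refuses."""
    total = 0.0
    for user, (provider, price) in proposal.items():
        value = users[user]["value"]
        cost = providers[provider]["cost"][users[user]["job"]]
        if price > value or price < cost:   # user or provider rejects
            return None
        total += value - cost
    return total

# Two competing market-maker proposals: {user: (provider, price)}
proposals = [
    {"u1": ("p1", 5.0), "u2": ("p2", 20.0)},   # surplus 18.0
    {"u1": ("p2", 7.0), "u2": ("p1", 28.0)},   # surplus 9.0
]
best = max((p for p in proposals if surplus(p) is not None), key=surplus)
print(best)   # the higher-surplus proposal wins
```

The point of the competition is that finding good allocations is hard; the protocol only needs to verify acceptability and compare surplus.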
We’ve written about this mechanism in multiple iterations. In our most recent post about it (see here), we give a thorough and formal explanation of the general setting that the market mechanism works in, as well as a step-by-step explanation of why the mechanism works the way that it does. That post builds on our previous work on the Resonance mechanism (see here for the two-part blog post, and here for the academic work on it).
Related Posts:
- https://ritual.net/blog/decentralized-computation
- https://ritual.net/blog/resonance-pt1 and https://ritual.net/blog/resonance-pt2
- https://arxiv.org/abs/2411.11789
Block-Building
Beyond fees and rewards, we’re very interested in making substantive modifications to the block-building process that yield better user experiences and enable more powerful applications. One thing we’re especially excited about is the ability to enshrine a flexible collection of additional constraints on block validity. This post focuses on our mechanism design work, so we won’t go into how these constraints are implemented on the systems side; you can read more about that in the consensus research. Instead, we’ll give an overview of how the power to add validity constraints can be used in a rich way to improve both sequencing and inclusion guarantees.
Sequencing
Many within the industry have pointed out that there are negative consequences to giving validators free rein over transaction sequencing. Beyond the implications for MEV, which have been well-studied in both academia and industry, free sequencing also prevents applications from implementing more sophisticated functionality. The canonical example is an on-chain exchange guaranteeing its users that cancellation requests will take priority over fill orders. This lets the application favor market-makers over arbitrageurs, which can deepen liquidity and improve the experience of the typical user (see e.g. 1, 2, 3, 4).
We address this problem generally by introducing the Monotone Priority System (see here for the full paper). Concretely, we show that a single simple block validity constraint can be added to allow contracts to expressively specify sequencing constraints. In the Monotone Priority System, each contract specifies a priority for each of its calls such that no call is assigned a higher priority than any of the calls that it directly references. For a block to be valid, it must sequence transactions from high to low priority, where the priority of a transaction is equal to the priority of the root call that it makes. This system allows each contract to independently select a relative ordering over each of its constituent calls that must be enforced in every block. Furthermore, nobody can get around these sequencing constraints by making a dummy contract: a contract maintains sequencing rights over calls within external contracts that reference it. For agents, this is not merely a question of extractable value. It is a question of whether important actions such as cancellation, revision, or dependency-sensitive execution remain under application control or are ceded to validator discretion. If autonomous systems are to operate with something closer to emancipation than mere hosted automation, those guarantees matter.
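The two checks can be sketched as follows; the data layout and names are ours, not the paper's. The monotonicity constraint means a wrapper contract's call can never outrank the exchange call it references, and block validity just requires non-increasing root-call priorities:

```python
# Hedged sketch of the Monotone Priority System's two validity checks.
# The dict layout is invented; see the paper for the real formulation.

# priority[contract][call] -> int (higher = must be sequenced earlier)
priority = {
    "exchange": {"cancel": 2, "fill": 1},
    "wrapper":  {"route": 1},      # a dummy contract routing into exchange.fill
}
# direct references between calls: (contract, call) -> list of referenced calls
references = {("wrapper", "route"): [("exchange", "fill")]}

def monotone(priority, references) -> bool:
    """No call may be assigned a higher priority than a call it references."""
    return all(priority[c][f] <= priority[rc][rf]
               for (c, f), refs in references.items()
               for (rc, rf) in refs)

def block_valid(txs) -> bool:
    """Transactions must run from high to low root-call priority."""
    ps = [priority[c][f] for (c, f) in txs]
    return all(a >= b for a, b in zip(ps, ps[1:]))

assert monotone(priority, references)     # wrapper.route cannot outrank fill
assert block_valid([("exchange", "cancel"), ("wrapper", "route")])
assert not block_valid([("exchange", "fill"), ("exchange", "cancel")])
```

Because `wrapper.route` references `exchange.fill`, monotonicity caps its priority at fill's, so cancellations still sequence ahead of fills routed through the dummy contract.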
Inclusion
Flexible block validity constraints can also improve inclusion. Specifically, our network allows users to schedule transactions for guaranteed inclusion. This yields exciting implications for block-building: a contract can now automatically execute code without a user explicitly submitting a transaction. At the cost of waiting one block time, this can allow an application to implement nearly arbitrary stateful execution flows. Rather than user transactions directly modifying state, the application can instead have them append to a buffer within the network’s state. All of the transactions pertaining to that application can then run within the application’s scheduled transaction at the end of the block. That transaction can re-order and even delay user transactions however it would like, yielding very flexible and powerful application functionality. Scheduled transactions matter here because they begin to give contracts temporal continuity: the ability to carry a workflow forward without requiring a human to resubmit intent at every step.
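A toy sketch of this buffer pattern, with invented names: user transactions only append intents, and a single scheduled transaction drains the buffer at the end of the block in whatever order the application chooses (here, cancels first):

```python
# Toy sketch of the buffer-plus-scheduled-transaction pattern.
# Class and method names are invented for illustration.

class BufferedApp:
    def __init__(self):
        self.buffer = []   # user intents appended during the block
        self.log = []      # processed actions, written only by the scheduled tx

    def submit(self, user, intent):
        """User transactions never touch application state directly;
        they only append to the buffer."""
        self.buffer.append((user, intent))

    def scheduled_tx(self):
        """Runs once per block via guaranteed inclusion; the application
        is free to re-order (or defer) buffered intents however it likes."""
        cancels = [x for x in self.buffer if x[1] == "cancel"]
        others = [x for x in self.buffer if x[1] != "cancel"]
        self.log.extend(cancels + others)   # e.g., cancels take priority
        self.buffer.clear()

app = BufferedApp()
app.submit("alice", "fill")
app.submit("bob", "cancel")
app.scheduled_tx()
print(app.log)   # [('bob', 'cancel'), ('alice', 'fill')]
```

The one-block latency buys full control: the application, not the block builder, decides the effective order of its users' actions.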
When paired with the ability to natively run inference, scheduled transactions enable contracts to become full-fledged agents that autonomously interact with users, other agents, and even the broader web. This is one of the places where the connection to the broader Ritual thesis becomes most concrete. A system that can remember, wait, trigger, and act across time is much closer to satisfying the operational preconditions for autonomy: persistence, interoperability, and durable access to computation. While we have no published formal work in this direction yet, a question that we have thought about internally on the mechanism design side pertains to fee-design for scheduled transactions.
Theoretical Work
In addition to mechanism design problems that pertain directly to the protocol, we’ve also spent time on more general theoretical problems, with tie-ins to future work on agents, that may be of broader interest to those in the web3 community and beyond who are thinking about similar questions.
These problems may look more abstract in isolation, but they become much more concrete once the relevant economic actors are no longer only humans. In markets populated by agents, collusion, procurement, and capacity revelation are not peripheral concerns. They shape whether autonomous systems can participate safely, legibly, and at scale. If autonomous intelligence is to become a genuine economic participant rather than a thin wrapper around a human operator, these questions move much closer to the center of protocol design.
Collusion-Resistant Auctions
How do you design a revenue-maximizing auction in the presence of collusion between auction participants? This is a problem faced by anyone who runs internet auctions, and the nature of collusion online differs from collusion in in-person auctions. The participants may be anonymous to one another. They may not trust each other. However, it is easy for them to communicate and share information. We study how to design revenue-maximizing auctions when participants can communicate and form sophisticated collusion strategies (e.g., creating sybil accounts and misreporting), but cartels must ensure that it is in each participant’s individual interest to follow the strategy. We show that for auctions that sell multiple copies of an identical good, the revenue-maximizing auction must take a very restricted form.
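As a toy illustration of why collusion constrains auction design (our own example, not one from the paper): in a standard second-price auction, a cartel of the top two bidders can suppress one bid and cut the winner's price, settling the difference off-auction. The incentive-compatibility requirement above asks whether each cartel member actually prefers to follow such an agreement.

```python
# Toy demonstration (ours, not the paper's) of bid suppression by a cartel
# in a second-price auction.

def second_price(bids):
    """Return (winner, price): highest bidder wins, pays second-highest bid."""
    order = sorted(bids, key=bids.get, reverse=True)
    return order[0], bids[order[1]]

honest = {"a": 10, "b": 9, "c": 4}
print(second_price(honest))        # ('a', 9)

# Cartel {a, b}: b bids 0 and is compensated off-auction by a.
collusive = {"a": 10, "b": 0, "c": 4}
print(second_price(collusive))     # ('a', 4)
```

The cartel moves the price from the second-highest to the third-highest bid; sybil accounts make the same manipulations cheap to execute and hard to attribute.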
This matters especially in settings where the bidders are software systems rather than people clicking through a UI. Agents can communicate cheaply, instantiate identities fluidly, and execute cartel strategies mechanically. That makes collusion-resistance relevant not only for revenue, but for any future market in compute, inference, or scarce sequencing rights where autonomous participants may compete directly.
Procurement under an adversary
When a developer wants to construct a building on a plot of land, many construction companies submit competing proposals with budgets and timelines, and the developer chooses the most attractive offer. These “procurement auctions” have been studied by economists for decades. In the same way, blockchain protocols may seek to contract counterparties to perform computationally expensive work (e.g., running inference via an AI agent). We study how to run these auctions in a permissionless setting, where a fraction of the nodes can be corrupted and may behave arbitrarily (similar to a Byzantine attacker in consensus models). We formulate the design question as a constrained optimization problem, showing that we can achieve the optimal mechanism in some regimes and a nearly optimal mechanism in all of them. Our mechanisms are highly interpretable; for example, a leader is elected each round and other nodes are randomly added to a committee with some probability. This interpretability, and its similarity to primitives seen in consensus designs, is compelling given that it emerges naturally from our economic model with rational participants.
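A hedged sketch of the leader-plus-committee shape described above, with invented parameters rather than the paper's optimal values:

```python
# Sketch of leader election plus random committee formation.
# The probability p and seeding are illustrative, not the paper's values.
import random

def elect(nodes, p, seed=None):
    """Pick a uniformly random leader, then add every other node to the
    committee independently with probability p."""
    rng = random.Random(seed)
    leader = rng.choice(nodes)
    committee = [n for n in nodes if n != leader and rng.random() < p]
    return leader, committee

leader, committee = elect(["n1", "n2", "n3", "n4", "n5"], p=0.5, seed=7)
print(leader, committee)
```

Random sampling is what limits a Byzantine fraction's influence: corrupted nodes cannot predict or force their way into the committee for a given round.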
From the perspective of autonomous intelligence, procurement is not just about buying work cheaply. It is about deciding who gets to think or act on behalf of an application under adversarial conditions, and under what incentives. If computationally expensive cognition is going to be sourced from a permissionless market, then procurement design becomes part of computational sovereignty itself.
Availability Oracles
Imagine you want to run LLM inference, or an agent, on a decentralized platform. Your job requires a significant amount of hardware, but all you can see on the block explorer is the amount of work that was done in the previous blocks. This doesn’t tell you anything about how much extra work could’ve been handled and will be available to fulfill your request in the future. This might give you pause when considering where to deploy, because you don’t want to cause congestion and delay your processing.
This is a fundamental problem in decentralized protocols that support verifiable computation. Users value transparency into the real capacity of the platform, but past throughput does not reveal whether the system can absorb new demand without congestion. In this work, we consider how a protocol can be designed to get a better view on the total capacity of the system. In our model, suppliers report how much work they can handle, the protocol occasionally audits these claims by assigning synthetic tasks, and payments depend on delivery. In a permissionless setting, however, suppliers can split into many identities and overreport, so naive audit schemes fail. We show that combining audits with mechanisms such as staking, slashing, or convex rewards can induce truthful reporting, yielding a reliable capacity oracle at relatively low cost.
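A toy payoff model (ours, not the paper's) shows the basic force: if the expected slash for failing an audit exceeds the marginal payment from any feasible overreport, truthful reporting maximizes a supplier's expected payoff.

```python
# Toy model of audits plus slashing inducing truthful capacity reports.
# All parameters (w, q, s) are invented for illustration.

def expected_payoff(report, capacity, w=1.0, q=0.2, s=50.0):
    """w: payment per reported unit; q: audit probability; s: slashed stake.
    A synthetic audit of size `report` fails iff report > capacity."""
    slash = q * s if report > capacity else 0.0
    return w * report - slash

c = 30
print(expected_payoff(c, c))        # truthful report: 30.0
print(expected_payoff(c + 5, c))    # overreport by 5: 35 - 10 = 25.0
```

This only works if the expected slash `q * s` exceeds the gain `w * d` from any feasible overreport `d`; sybil splitting lets suppliers dodge that bound, which is why the mechanisms above pair audits with staking or convex rewards.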
For autonomous systems, this kind of capacity revelation is more than a convenience. A human can often tolerate opacity, manually retry, or absorb delays ad hoc. A long-running agent cannot plan that way. If agents are to allocate work, route tasks, and commit to action over time, they need machine-legible information about future capacity rather than a crude summary of the recent past. In that sense, availability oracles are part of the informational substrate required for autonomous intelligence to operate with any real independence.