 Just got off a meeting with a couple of really sharp minds working on new mining tech.

Y'all...I think this bitcoin thing might just work out after all.

BULLISH

LET'S FUCKING GO 🚀 
 What do you think about an expanded mempool in the longer term, as hardware improves, while balancing decentralization needs? 
 Are you talking about raising the default?

My uninformed take is that it should be a low priority. The question to answer is: how often do suboptimal blocks get mined because the default mempool fills up and evicts transactions during a high fee event? I don't watch the blockchain as closely as @mononaut, but I can only recall seeing one event like this mentioned on social media in recent memory.

There are easier ways to mitigate this occurrence that don't increase node hardware requirements. For example, it would be relatively easy to stand up a few nodes customized to rebroadcast old transactions at key times and avoid the need to change defaults. I wouldn't be surprised if this service already exists.
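
Roughly what I have in mind, as a sketch only (nothing below is an existing service; the script name, cache, and credentials are made up for illustration, and it only uses standard Bitcoin Core RPCs: getrawmempool, getrawtransaction, sendrawtransaction):

```python
# rebroadcaster.py -- illustrative sketch, not production code.
# Assumes a local Bitcoin Core node with the JSON-RPC server enabled
# (server=1 plus rpcuser/rpcpassword in bitcoin.conf).
import time
import requests

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

def rpc(method, *params):
    """Minimal JSON-RPC helper for Bitcoin Core."""
    payload = {"jsonrpc": "1.0", "id": "rebroadcaster",
               "method": method, "params": list(params)}
    resp = requests.post(RPC_URL, json=payload, auth=RPC_AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

cache = {}  # txid -> raw hex of every tx we've seen in our own mempool

while True:
    current = set(rpc("getrawmempool"))

    # Keep a copy of everything currently in the mempool so we still
    # have the raw tx if it gets evicted during a fee spike.
    for txid in current:
        if txid not in cache:
            try:
                cache[txid] = rpc("getrawtransaction", txid)
            except requests.HTTPError:
                pass  # tx left the mempool between the two calls

    # Anything cached but no longer in the mempool was mined, expired,
    # or evicted; try rebroadcasting it. Already-mined or invalid txs
    # are simply rejected by the node, which is fine for this sketch.
    for txid, rawhex in list(cache.items()):
        if txid not in current:
            try:
                rpc("sendrawtransaction", rawhex)
            except requests.HTTPError:
                del cache[txid]  # confirmed, conflicted, etc.

    time.sleep(600)  # re-check every ten minutes
```

In practice you'd want persistence across restarts and some rate limiting, but the point is this needs nothing beyond the RPC interface every node already has.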

I strongly suspect investing our efforts in Sv2 adoption will do more to increase transaction throughput by reducing the number of empty blocks. In the medium and long term, the decreasing block subsidy will help motivate this upgrade. 
 Yes, I'm a big fan of #stratumv2.  My take is that the mempool can be divided into 3 tiers.

1. High tier -- high value tx; can pay higher fees, creates the security budget
2. Mid tier -- gamblers, vandals, spammers, smart contracts
3. Low tier -- devs, the poorest in the world, on-chain tx, L1/L2 tx

So I think tier 1 will always be high, and tier 2 can crowd out tier 3.  Tier 2 is also where an attack could be launched by raising fees to disrupt the chain.  This attack is mitigated by stretching out the mid tier.

That would take more hard drive space, which is a threat to decentralization.  However, decentralization has already played a big role in a fair emission and in fighting off governance attacks (UASF).

Bitcoin can try to be "sufficiently decentralized" while increasing its utility and mitigating disruption.

Although the biggest disruption vector right now is social attacks. 
 If you're thinking about code changes to enforce these groupings the question to definitively answer is always "Does this change produce benefits to the whole network that outweigh the additional code complexity and the security risks it entails?"

In general, I strongly believe that changing the protocol in response to high fee events is a non-starter. We already have a mechanism to solve congestion: the free market. People who get upset about blockchain spam need to lower their time preference. 
 I get this, but I think there are a few nuances.

Markets are almost never really "free".  They are just markets.  As you might say, "all markets are wrong, only some are harmful".

I think there might be a way to look at the problem by taking out some of the loaded terms, and just treat things as a mechanism with trade-offs.

A well funded actor could severely disrupt bitcoin.  Let's say they didn't allow tx into the chain for a year.  That would be disruptive.  You could not say at that point that it was a "free" market.

What is wanted is a well functioning bitcoin for the next several decades, and then to ossify for the centuries after that.

We've yet to agree on the final mempool size, but it should be something that can run on a decentralized network and benefit the greatest number of people.  I think we'll only get one shot at it.  And more eloquent people than me can articulate the case.

The problem with any change is that you risk a chain split.  So you have to have a really sound argument, and you probably only get one shot at it.  I think the right moment might be in 4-8 years' time, after we get AGI.  Generally I think there could be benefit from a slow increase in block size over a few decades, as sipa proposed back in about 2015.
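
For a rough sense of scale (from memory that proposal was BIP 103, with something like 17.7% growth per year; treat the exact rate as my recollection, not a checked reference), even a modest rate compounds a lot over decades:

```python
# Illustrative compounding only; the 17.7%/year figure is my recollection
# of sipa's 2015 proposal, not a verified number.
rate = 0.177
for years in (10, 20, 40):
    print(years, "years ->", round((1 + rate) ** years, 1), "x capacity")
# 10 years -> ~5.1x, 20 years -> ~26x, 40 years -> ~678x
```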

I agree we should not respond to high fee events or make any knee-jerk reactions.  But the last change was segwit in 2016, and it left the question of a medium-term block size increase open-ended.  Eight years have passed since then, so at some point there will be a chance to reopen this loaded discussion.

We have to do a hard fork anyway due to the great consensus cleanup.  After that I'm generally against forks of any kind, as they can lead to a chain split.  However, an optimal block size before ossification, in line with the hardware of the day, is likely a good thing.  It's hard to get people to agree, but there should be a discussion around the trade-offs.  Then it never needs to be touched again, but there's some work to do before that.
 
 A lot of misconceptions:

> A well funded actor could severely disrupt bitcoin. Let's say they didn't allow tx into the chain for a year.

They would go bankrupt. Even the US government, the most well funded actor in the world, is going bankrupt all on its own without attacking bitcoin. I don't believe any entity on Earth has the resources to waste on a frivolous attack like this.
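
To put rough numbers on it (back-of-envelope, with both figures picked purely for illustration: ~1,000,000 vbytes of transactions per block and an attacker paying 50 sat/vB to outbid organic demand):

```python
# Back-of-envelope cost of keeping everyone else out of blocks for a year.
VBYTES_PER_BLOCK = 1_000_000   # assumption: roughly one full block of tx data
BLOCKS_PER_DAY = 144
FEERATE_SAT_PER_VB = 50        # assumption: enough to outbid organic demand

sats_per_day = VBYTES_PER_BLOCK * BLOCKS_PER_DAY * FEERATE_SAT_PER_VB
btc_per_year = sats_per_day * 365 / 100_000_000
print(f"~{btc_per_year:,.0f} BTC per year in fees")  # ~26,280 BTC/year
```

And that's a floor: the moment anyone else is willing to pay more than 50 sat/vB, the price of exclusion rises with them.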

> We've yet to agree on the final mempool size... I think we'll only get one shot at it... The problem with any change is that you risk a chain split.

Mempool size defaults are not bound by consensus rules. You can already adjust your own mempool as you see fit by updating bitcoin.conf. Changing the default size is not to be taken lightly, but it's an entirely different category of proposal from consensus changes. There is no practical limit to the number of times we can change this default.
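
For example, the two relevant knobs in Bitcoin Core are maxmempool (in MB, default 300) and mempoolexpiry (in hours, default 336). Something like this in bitcoin.conf, with values picked arbitrarily for illustration:

```
# Allow the mempool to grow to ~1000 MB instead of the default 300
maxmempool=1000
# Keep unconfirmed transactions for up to 4 weeks instead of the default 2
mempoolexpiry=672
```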

> We have to do a hard fork anyway due to the great consensus cleanup.

We have to hard fork due to the year 2106 bug. GCC is a soft fork change that we don't *need* to merge, although I think it's a very good idea and I would support that proposal if/when it picks up steam. (With one caveat, see below.)
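
(For anyone unfamiliar: the 2106 date comes from the block header's timestamp being an unsigned 32-bit count of seconds since the Unix epoch, which tops out in February 2106. Quick sanity check:)

```python
from datetime import datetime, timezone

# The header's timestamp field is an unsigned 32-bit integer of Unix seconds,
# so the largest representable time is 2**32 - 1.
print(datetime.fromtimestamp(2**32 - 1, tz=timezone.utc))
# 2106-02-07 06:28:15+00:00
```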

When it comes to block size changes I agree that it may be a good idea far in the future, but we have barely scratched the surface in terms of efficiency improvements with the existing ~4MB block size limit. I think we can safely ignore these proposals for a long time to come. Segwit gave us plenty of space to play with. Let's maximize the resources we have before making any change that might compromise decentralization.

I'm personally in favor of a covenant soft fork. I think we can improve blockspace efficiency by orders of magnitude with e.g. GSR, LNHANCE, or similar. But I recently came to the realization that our mining pool centralization problems are too great to risk any consensus change. I oppose any soft fork proposal until we make material progress on mining pool decentralization. This is why I decided to do something about it.

Stay tuned for Hashpools, a new kind of mining pool powered by ecash. I'm obviously biased but I think it's gonna be a big deal. Hoping to publish something in the November timeframe. Catch me at btc++ Berlin or TABconf (assuming my talk is accepted 🤞) to get an early look. If you can't make it, no problem. I will publish on nostr. 
 Firstly, thanks for an informative and polite exchange.  We need more of this, and bitcoin will do well.  I also think the tools for conversation will improve too.

Agree stratumv2 is a big next thing, and we need to think about prioritization.

I'm against a soft fork (covenants) primarily because we have yet to explore the tools we already have.  The other issue is that it can lead to a chain split, which is a real risk if the community is divided.  We also don't know the unintended consequences; history has shown that over-confident developers overlook these routinely.  Could I live with a soft fork? Maybe.

Sounds like you're doing great work.  Hope your talk is accepted!