
DYNAMIC BLOCK

All right! The Block Size (OH, abomination... mortal sin!).

This post is not really about the block size; it is about the usability of #bitcoin on layer one, even from 2140 onward.

#bitcoin is many things at once, and each of those things matters more or less to each bitcoiner depending on their interests and concerns.

For me, its greatest virtue is that it limits growth. But even though it can fulfill that function without fulfilling all of its other attributes, I would also like a #bitcoin that most people could use.

How many is "the VAST MAJORITY"?

Sketching it in broad strokes...

If half of the planet comes to #bitcoin...

If we remove the very elderly and children from that half, and add other actors such as companies and institutions, we might arrive at a figure of 3 or 4 billion (the exact figure does not matter, only the concept).

If each of them needs to make, how many? three a year... six a year... a MONTHLY transaction, to settle their layer-two solutions, what block size would be needed?
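A quick back-of-the-envelope sketch of that question. Every figure here is an assumption made up for illustration (4 billion users, one transaction a month each, ~140 vbytes per simple transaction, one block every ten minutes), not a claim:

```python
# Back-of-the-envelope sketch of the layer-one space such usage would need.
# All figures are assumptions for illustration.

users = 4_000_000_000          # ~4 billion users/institutions (the upper figure above)
tx_per_user_per_year = 12      # one transaction per month each
avg_tx_size_vbytes = 140       # rough size of a simple transaction
blocks_per_year = 6 * 24 * 365 # ~52,560 blocks at one every ten minutes

tx_per_block = users * tx_per_user_per_year / blocks_per_year
block_vbytes = tx_per_block * avg_tx_size_vbytes

print(f"{tx_per_block:,.0f} transactions per block")
print(f"~{block_vbytes / 1e6:,.0f} MB (virtual) per block")
```

Under those assumptions it comes out to roughly 900,000 transactions and ~128 MB per block, which gives an idea of the gap this post is circling around.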


In addition to a block large enough to hold these transactions, they would have to be made with a fee that is not prohibitive for at least 50%, 60%, 70%? of these users around the modal income. We would have to see what income level corresponds to that percentage.


Michael Saylor says that if we double the block size, doubling the transactions that fit in a block, fees fall and that "steals" the miners' reward. But that is not so.

Twice the transactions at half the fee = the same reward.

As long as there are enough transactions that an emptier block does not push fees down simply for lack of pressure. And in an environment of widespread adoption, that would surely not be the case.
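The arithmetic behind that claim, with made-up numbers just to show its shape:

```python
# Fee revenue per block = number of transactions x average fee.
# Purely illustrative numbers.

tx_per_block = 3_000
avg_fee_sats = 10_000

revenue_before = tx_per_block * avg_fee_sats              # 30,000,000 sats

# Double the capacity; if demand fills it, competition roughly halves the fee:
revenue_after = (2 * tx_per_block) * (avg_fee_sats // 2)  # 30,000,000 sats

print(revenue_before == revenue_after)  # True: same fee income for miners
```

The equality only breaks if demand does not fill the larger block, which is exactly the caveat above.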

The most widespread narrative says that if we increase the block size, the requirements for running a node increase: hard disk space, connection bandwidth, RAM, etc. That undermines decentralization by causing fewer people to run a node.

But if fees increase, fewer people will be able to use #bitcoin, and people who do not use #bitcoin, even if they use layer-two solutions, are unlikely to run a #bitcoin node. So we can say that both situations are detrimental in terms of decentralization.

I think it is better to have low fees than cheap nodes.

People who could run a node in a high-fee environment can surely run one in a low-fee but expensive-node environment. And some of those who, now, with low fees, actually use #bitcoin will still make the effort to run a node.

One could argue that in this scenario, low fees and "expensive" nodes, #bitcoin gains both decentralization and adoption compared with a high-fee scenario.

It may also be that increasing the block size would not help reduce fees at all. If the last large increase in fees was caused by the inscription of images onto the blockchain, a larger block would only allow more images, or other things, to be put in, without fully fixing the cost of transactions.

Now, what would the right block size be?

If we increase it, who is to say that, following the same criterion, we would not have to increase it again several more times? And could we reduce it if usage shows that so much space is no longer necessary?

This process could extend to infinity.


How about a DYNAMIC BLOCK?

Could we find a mechanism, along the lines of the difficulty adjustment, that could tell the code the right block size for the next period?

Without using oracles, of course, through something intrinsic to the chain.

Block-size adjustments could be calculated over longer periods than the difficulty adjustment: quarters, half-years, years, or halvings. They could be applied with latency, a lag: if ecosystem actors need some time to adapt to a new size, the adjustment to implement would be the one calculated two, three, etc. cycles earlier. And even if the needs of the network called for a larger increase or reduction, changes could only be made in small steps, 100 kB for example, even if that means the network is slow to react to the needs of its users.
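A minimal sketch of what such a retarget rule could look like, under invented assumptions: a long adjustment period, a two-cycle activation lag, and steps clamped to ±100 kB. The names, the fullness signal and the thresholds are all made up for illustration; nothing like this exists in Bitcoin today.

```python
# Hypothetical dynamic-block retarget: illustration only, not a real Bitcoin rule.

STEP_LIMIT_BYTES = 100_000      # adjust in small steps of at most 100 kB
ACTIVATION_LAG_CYCLES = 2       # apply the value calculated two cycles earlier

def proposed_adjustment(avg_block_fullness: float) -> int:
    """Suggest a size change from how full blocks were over a whole period.

    avg_block_fullness is the average used fraction of the size limit (0..1),
    something every node can compute from the chain itself, without oracles.
    """
    if avg_block_fullness > 0.90:       # sustained pressure -> grow
        return STEP_LIMIT_BYTES
    if avg_block_fullness < 0.50:       # sustained slack -> shrink
        return -STEP_LIMIT_BYTES
    return 0                            # otherwise leave the limit alone

def next_block_size_limit(current_limit: int, fullness_history: list[float]) -> int:
    """Apply the adjustment computed ACTIVATION_LAG_CYCLES periods ago."""
    if len(fullness_history) < ACTIVATION_LAG_CYCLES:
        return current_limit            # not enough history yet
    lagged_fullness = fullness_history[-ACTIVATION_LAG_CYCLES]
    return current_limit + proposed_adjustment(lagged_fullness)

# Example: 1 MB limit, blocks more than 90% full over the last two cycles.
print(next_block_size_limit(1_000_000, [0.95, 0.97]))  # 1_100_000
```

The clamp and the lag are the two properties described above; which signal should actually drive the direction is the open question the next paragraphs raise.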

Could parameters such as the depth of the mempool (blocks of backlog), the distribution of fees and their progression (the acceleration in what people are willing to pay, etc.), the number of nodes, the evolution of that number, mining nodes, difficulty, etc., tell us whether the block size should be increased or reduced?
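A sketch of how such intrinsic signals might be combined into a single grow/hold/shrink vote. The signal names, weights and thresholds are invented for this sketch, not a proposal of concrete values:

```python
# Combine intrinsic chain signals into a grow / hold / shrink vote.
# Signal names, weights and thresholds are invented for this sketch.

def size_vote(mempool_depth_blocks: float,
              feerate_trend: float,        # e.g. +0.3 = median feerate up 30% over the period
              node_count_trend: float) -> str:
    """Return 'grow', 'shrink' or 'hold' for the next adjustment period."""
    score = 0.0
    score += 1.0 if mempool_depth_blocks > 10 else 0.0   # persistent backlog
    score += 1.0 if feerate_trend > 0.25 else 0.0        # fees accelerating
    score -= 1.0 if node_count_trend < -0.05 else 0.0    # node count shrinking: be cautious
    if score >= 2:
        return "grow"
    if score <= -1:
        return "shrink"
    return "hold"

# Example: deep backlog, rising fees, stable node count -> grow.
print(size_vote(mempool_depth_blocks=40, feerate_trend=0.5, node_count_trend=0.01))
```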

Something that tells us what the modal user can pay, covering for example 50% or 70% of #bitcoin users. That will be very cheap for the richest 20% of the planet and very expensive for the poorest 20%. But shouldn't everyone be able to make three, four, six... transactions a year?
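Just to pin down what "the fee the modal 70% can pay" could mean, a toy calculation with a completely hypothetical income distribution and an assumed yearly on-chain budget of 0.5% of income spread over six transactions:

```python
# Toy affordability calculation. The income deciles and the 0.5% budget
# are hypothetical, only to make the "modal 70%" idea concrete.

hypothetical_yearly_incomes_usd = [600, 1_200, 2_500, 5_000, 9_000,
                                   15_000, 25_000, 40_000, 70_000, 150_000]  # deciles, low to high
onchain_budget_fraction = 0.005   # 0.5% of yearly income spent on-chain
tx_per_year = 6

def max_affordable_fee(income: float) -> float:
    return income * onchain_budget_fraction / tx_per_year

# The fee level that at least 70% of users (the top 7 deciles) could afford
# is set by the poorest of those deciles, i.e. the 4th decile from the bottom.
fees = sorted(max_affordable_fee(i) for i in hypothetical_yearly_incomes_usd)
fee_for_70_percent = fees[3]      # index 3: 70% of the deciles sit at or above it

print(f"~${fee_for_70_percent:.2f} per transaction")
```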


It may be that the block size is not what can fix fees so that they are accessible, and that another mechanism can be found.

Even if this is never achieved, #bitcoin will remain useful and relevant in many ways, but may brighter minds than mine devise a mechanism that makes it possible, in the future, for #bitcoin not to be used only by elites.

Much is said about the "incentives" of the parties (miners, users, nodes, etc.) to balance #bitcoin's needs and look after its health. But wouldn't it be better if the emergent properties of the code made it something closer to perfection?