This weekend’s official XT release was made under pretense that a bigger block size will supposedly increase Bitcoin’s capacity and allow more transactions per second, cheaper fees and wider adoption. So why had the decision caused so much controversy and why didn’t the Core developers just implement it already? We all want a large scale Bitcoin with maximum participation, don’t we? As it turns out, the XT developers are not telling the whole truth and are knowingly putting Bitcoin’s future on the line.
Disclaimer: The information presented in this article is intended to be informational and for the purpose of encouraging informed rational discussion about Bitcoin capacity scaling. The scaling solutions presented are not solicited or paid for, nor is the writer or CCN.LA promoting any third party scaling over any other. The writer actively trades the bitcoin market and does market analysis for bitcoin trading consortium xbt.social.
Bitcoin's design has two defining properties:
1) it makes Bitcoin decentralized
2) it allows network nodes to build consensus
The consensus mechanism exists because Satoshi Nakamoto’s Bitcoin design solves the problem of digital double-spending. Each full node verifies every previous (and each new) transaction, and then communicates its agreement on the validity of transactions (and the blocks containing them) to other nodes. Hence, the network constantly maintains consensus about the truthfulness of the nodes’ shared version of history – about the validity of blocks and transactions.
The consensus mechanism is what makes it possible for two parties to transact “trustlessly” – they do not have to trust one another not to double-spend or wonder whether any other party has tampered with history and invalidated chains of interdependent transactions and the associated coins. The Bitcoin protocol’s consensus rules ensure that coins are not double-spent – and a consensus history of coins that had previously been spent is kept in the public block chain ledger.
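The double-spend check described above can be sketched in a few lines. This is a purely illustrative model (the class, field names and transaction format are invented for this example, not Bitcoin Core's actual code): a node tracks unspent outputs and rejects any transaction that tries to spend an output twice.

```python
# Minimal illustrative sketch of double-spend rejection via a UTXO set.
# The Node class and transaction format here are hypothetical, invented
# for this example - real nodes also verify signatures, amounts, etc.

class Node:
    def __init__(self, utxos):
        # utxos: set of (txid, output_index) pairs the node considers unspent
        self.utxos = set(utxos)

    def validate(self, tx):
        """Accept tx only if every output it spends is currently unspent."""
        inputs = set(tx["inputs"])
        if not inputs <= self.utxos:
            return False               # spends a missing or already-spent output
        self.utxos -= inputs           # mark the spent outputs as gone
        self.utxos.update(tx["outputs"])  # record the newly created outputs
        return True

node = Node({("coinbase0", 0)})
tx1 = {"inputs": [("coinbase0", 0)], "outputs": [("tx1", 0)]}
tx2 = {"inputs": [("coinbase0", 0)], "outputs": [("tx2", 0)]}  # double-spend

print(node.validate(tx1))  # True  - first spend of the output is valid
print(node.validate(tx2))  # False - the same output cannot be spent twice
```

Every full node performs the equivalent of this check independently, which is why no participant needs to trust any other not to double-spend.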
A benefit of the decentralized nature of Bitcoin’s network is that it is censorship resistant. As with any P2P network, all nodes are equal – there is no single node with greater importance or greater hierarchy, hence, there is no central point of attack. The network cannot, therefore, be shut down like Napster, such as when the FBI gained a warrant for the central Napster server room, walked in and pulled the plug.
As a consequence of the time-consuming validation and consensus process, as well as the mining that builds a cumbersome block chain as proof of validation, the benefits of Bitcoin come at the cost of certain trade-offs.
As specified by the Bitcoin whitepaper, the benefits of trustlessness via consensus validation and censorship resistance via decentralization justify the trade-offs described above. The payment protocol is inherently wasteful, slow and unreliable compared to centralized alternatives such as Visa and Paypal.
The speed and capacity trade-offs that Satoshi Nakamoto made in exchange for Bitcoin’s design goals mean that Bitcoin is ill-suited as a competitor to Visa or PayPal because, inherently, Bitcoin does not scale well.
Anyone can transact via Bitcoin and there is no means of censoring users or limiting their use of the network as long as they make valid transactions.
By increasing block size and making more transactions possible at low fees, what is to stop a large corporation, bank or government from using the newly created capacity as a convenient settlement network? They wouldn’t have to fund or develop their own (well, Ecuador has, for this exact use-case) and perhaps the utility and capacity of the Bitcoin network provides adequate reliability and cost for such a large entity’s use-case that also requires a public, transparent record.
They might pay miners a constant or above-average fee for the service, and in the process hog x megabytes of space in each block – leaving the rest of the user base with 1 MB of capacity. What would the benefit have been? None for the multitudes and none for the adoption of Bitcoin. Discussion about a capacity limit would continue amongst developers – deja vu, all over again.
You see, increased block size capacity will be utilized – by all kinds of entities – not just the girl next-door or the unbanked poor.
The current 1 MB blocks allow a theoretical maximum of 7 transactions per second at 250 bytes per transaction.
Let’s scale up to half of Visa’s capacity of around 22,000 transactions per second:
10,000 Bitcoin transactions per second would require 1.5 GB blocks and would result in a block chain that grows by roughly 79 TB (terabytes) per year – about 1.5 TB of storage per week.
(The above example also ignores the lag introduced by transaction and block validation: pushing such enormous blocks around the network for consensus seeking makes this theoretical scenario impossible in practice.)
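The arithmetic above can be checked with a quick back-of-the-envelope script. It assumes 250-byte transactions and 10-minute blocks, as in the figures quoted; the numbers are rough estimates, not precise capacity claims.

```python
# Back-of-the-envelope block size scaling arithmetic (illustrative only).

TX_SIZE = 250            # bytes per transaction (assumed average)
BLOCK_INTERVAL = 600     # seconds per block (10 minutes)
BLOCK_LIMIT = 1_000_000  # 1 MB block size limit

# Theoretical maximum throughput at 1 MB blocks
tps = BLOCK_LIMIT / TX_SIZE / BLOCK_INTERVAL
print(f"{tps:.1f} tx/s")  # ~6.7, commonly rounded up to 7

# Scaling to 10,000 tx/s (roughly half of Visa's quoted capacity)
target_tps = 10_000
block_size = target_tps * TX_SIZE * BLOCK_INTERVAL          # bytes per block
growth_per_year = target_tps * TX_SIZE * 86_400 * 365       # bytes per year

print(f"{block_size / 1e9:.1f} GB blocks")                  # 1.5 GB
print(f"{growth_per_year / 1e12:.1f} TB per year")          # ~78.8 TB
print(f"{growth_per_year / 1e12 / 52:.1f} TB per week")     # ~1.5 TB
```

Even granting these idealized assumptions, every full node would need to download, validate and store this volume of data to stay in consensus.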
The slowdown imposed by consensus seeking and the necessary condition of keeping the network as decentralized as possible, therefore, means that Bitcoin Core (or the stillborn XT) could never compete with Visa. That was never the design goal. Similarly, Visa cannot provide programmable money, trustless payments and censorship resistance.
These are the design considerations and technical facts that inform the position of the conservative faction in the block size debate. Bitcoin is not decentralized in its own right – the protocol does not have a ‘decentralizing algorithm’. The degree of decentralization is the responsibility of the network’s users: as many users as possible should run their own full nodes, and mining diversity needs to be encouraged – many small pools are better than a few large pools. Bear in mind, for example, the risk of a 51% attack by a single large pool or a cartel of mining operators.
In this context, maintaining a high degree of decentralization is a custodial responsibility shared by Bitcoin’s users and its developers.
If the matter is so cut-and-dried and it is true that Bitcoin does not scale, then why have the XT developers not addressed these points? They have not. Gavin Andresen’s repeated answer to the oft-repeated question “Why expose Bitcoin to the risk of centralization by increasing block size?” has been a variation on “because when networks reach capacity, Bad Things happen”. Nebulous fear and speculation – no technical reason has been forthcoming.
The question, then, is why would Gavin Andresen, Mike Hearn and Tom Harding willfully expose Bitcoin (and other peoples’ money) to the threat of centralization in trying to scale it to a purpose it was neither designed for nor technologically suitable for?
What the Core developers favor is a tiered network architecture whereby Bitcoin provides a core settlement layer (layer 1) with micro-transaction layers (layer 2) stacked on top. Development of the Lightning Network implies a future where a virtually unlimited number of daily bitcoin transactions will be possible with only a moderate block size increase needed for the protocol.
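The layer-2 idea can be sketched with a toy payment channel. The class and method names below are invented for illustration and this is not the actual Lightning protocol (which involves multi-signature funding transactions and revocable commitment states); the point is only the accounting: many off-chain balance updates settle on-chain with just two transactions.

```python
# Toy payment-channel sketch (hypothetical API, not the Lightning protocol):
# thousands of off-chain payments, two on-chain transactions total.

class PaymentChannel:
    def __init__(self, alice_deposit, bob_deposit):
        # One on-chain "funding" transaction locks both deposits.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.onchain_txs = 1
        self.offchain_updates = 0

    def pay(self, sender, receiver, amount):
        # Off-chain: the parties simply sign an updated balance state.
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.offchain_updates += 1

    def close(self):
        # One on-chain "settlement" transaction publishes the final state.
        self.onchain_txs += 1
        return self.balances, self.onchain_txs

channel = PaymentChannel(alice_deposit=100_000, bob_deposit=100_000)
for _ in range(1_000):                 # a thousand micro-payments...
    channel.pay("alice", "bob", 10)
final, onchain = channel.close()
print(final)    # {'alice': 90000, 'bob': 110000}
print(onchain)  # 2 - only two on-chain transactions for 1,000 payments
```

This is why layer-2 proponents argue that block space should be treated as a scarce settlement resource rather than expanded to carry every coffee purchase.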
Layer 2 solutions and the fee market will be explored in a future article.
In closing, a pointed remark by Eric Voskuil, libbitcoin maintainer and core Bitcoin developer:
What you seem to be missing is *why* bitcoin is better money. Have you considered why it is comparatively inexpensive to transact in a medium that is based on such a highly inefficient technology?
You might want to consider that these two considerations are not independent. The reduced cost of transacting (and carrying) Bitcoin is a direct consequence of its trustless nature. Any compromise in that nature will eliminate that advantage, and therefore Bitcoin.
Bitcoin is designed to solve only one problem that other systems do not. To accomplish this it makes significant compromises in other areas. The benefit of this solution is that it cannot be effectively controlled by the state. As a result, all of the associated overhead is eliminated. Hence the net cost benefit despite high technical costs.
– Eric Voskuil
Last modified: January 25, 2020 11:07 PM UTC