
Block size and scalability


Among the many parameters that determine the efficiency of a blockchain, one of the most important is its scalability: the ability to maintain effective throughput as the number of active users and transactions grows significantly. One of the ways to better adapt to increasing network load is to increase the block size, so let us consider this point in more detail.

Why is block size so important?

Let's start with the fact that the term "blockchain" itself literally means "chain of blocks". Each individual block holds a certain amount of data that stores information about transactions. Given this volume, as well as the speed at which new blocks are generated, you can calculate how many transactions per second (TPS) the blockchain is capable of supporting. Obviously, the higher this number, the better, so developers are actively looking for ways to improve it.
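The calculation described above can be sketched in a few lines. The figures used here (block size, average transaction size, block interval) are illustrative assumptions, not exact protocol constants:

```python
# Rough throughput estimate for a blockchain, from three parameters.

def estimated_tps(block_size_bytes: int, avg_tx_size_bytes: int,
                  block_interval_seconds: int) -> float:
    """TPS = transactions per block / seconds per block."""
    txs_per_block = block_size_bytes // avg_tx_size_bytes
    return txs_per_block / block_interval_seconds

# Bitcoin-like assumptions: 1 MB blocks, ~400-byte transactions,
# one block every 10 minutes.
tps = estimated_tps(1_000_000, 400, 600)
print(f"~{tps:.1f} TPS")  # roughly 4 TPS under these assumptions
```

Plugging in different block sizes or intervals shows immediately why both are levers for scaling.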

And in the context of cryptocurrencies, this is especially relevant. After all, Bitcoin's base layer handles a maximum of about 7 TPS, and Ethereum's up to about 15. For comparison, the Visa payment system processes up to 1,700 transactions per second. So, to offer real competition to traditional financial systems, cryptocurrency projects must significantly increase their throughput. However, simply increasing the block size is a temporary solution and may lead to problems down the road. Fortunately, there are other options, and we will now try to analyze some of them.

How exactly can blockchains scale?

There are two main categories of solutions to this problem: on-chain and off-chain. Each approach has its own pros and cons, so there is no consensus on which is more effective or more promising.

On-chain scaling. This means changing how the chain itself works in order to make it more efficient. The following options exist:

  • Reduce the size of transaction data. This allows more transactions to fit into each block. Bitcoin took this route when it implemented the SegWit protocol: shortening transactions and the information recorded in them led to a significant increase in throughput.
  • Increase the block generation speed. This works only up to a certain limit, since confirmed data takes time to propagate across the network. Otherwise, an unpleasant situation can arise: later blocks reach users before their predecessors, which directly conflicts with the consensus algorithm.
  • Seamless communication between different blockchains. If chains can interact with each other directly, each of them has to process less information. Of course, this requires guaranteeing perfectly accurate data transfer between the different blockchains. This is the principle behind the Polkadot project: through the joint work of several internal chains and smart contracts, it achieves fairly effective scaling of the system.
  • Sharding. Transaction processing is split into separate segments (shards), which are confirmed and verified independently. Several parallel processes are significantly faster than one sequential process. This approach works with both PoW and PoS, and it forms the basis of Ethereum 2.0, which aims to bring performance up to 100 thousand TPS. However, this method also has disadvantages, mainly related to security: it becomes easier to get rewarded for "double confirmation", and fewer resources are needed to mount a 51 percent attack on a single shard.
  • Reduce the number of validators. The fewer validating nodes, the higher the speed of the entire network. EOS chose this path, limiting the number of validators to 21 nodes selected by a general vote among all token holders. This raised throughput to about 4 thousand TPS. However, fewer verifying nodes means greater centralization of the network and a significantly higher risk that the granted authority will be abused.
  • Increase the block size. Perhaps the simplest way: the larger the block, the more data fits into it. But larger blocks are also harder to process, so users with large computing power gain an advantage. Bitcoin Cash followed this path, increasing the size first to 8 MB and then to 32 MB. However, the size cannot be increased indefinitely, so this solution is only temporary, and it also reduces the decentralization of the network. This is especially interesting given that the average block size in that blockchain has remained at around 1 MB. The idea has other drawbacks as well, but more on them later.
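The trade-off behind the last two bullets can be made concrete with a small sketch: bigger blocks carry more transactions, but they also take longer to propagate between nodes. The bandwidth figure and transaction size below are illustrative assumptions, and the propagation model is deliberately simplified to a single relay hop:

```python
# Illustrative trade-off: block size vs. throughput vs. propagation delay.

BLOCK_INTERVAL_S = 600            # 10-minute target, as in Bitcoin
AVG_TX_SIZE_BYTES = 400           # assumed average transaction size
NODE_BANDWIDTH_BPS = 10_000_000   # assumed 10 MB/s effective relay bandwidth

for block_mb in (1, 8, 32):
    size = block_mb * 1_000_000
    tps = size // AVG_TX_SIZE_BYTES / BLOCK_INTERVAL_S
    propagation_s = size / NODE_BANDWIDTH_BPS  # per relay hop, very simplified
    print(f"{block_mb:>2} MB block: ~{tps:6.1f} TPS, ~{propagation_s:.1f}s per hop")
```

Throughput grows linearly with block size, but so does the time a block spends in transit, which is exactly the propagation problem described above.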

Off-chain scaling. These are solutions that improve performance without changing how information is stored in the main chain. They are also called "second-layer solutions". The most effective options have been the following:

  • Classic "second-layer solutions". They were first used in Bitcoin in the form of the Lightning Network. Nodes of this subnet can open channels between themselves for making transactions directly. When the process is complete, the channel is closed and the aggregated result is recorded in the main blockchain. This also lowers transaction costs, since you do not have to compete with other users for block space. A similar solution has been implemented on Ethereum, the Raiden Network, followed by the more general-purpose Celer Network. Both projects not only conduct transactions outside the main network but also allow the use of smart contracts. The problem is that all of this is still under development, so technical errors remain possible.
  • Sidechains. These are offshoots from the main chain in which assets can move independently. In essence, they create "parallel paths" for transaction flow, which significantly reduces the load on the underlying chain. This approach, too, was first used in Bitcoin, in the form of the Liquid sidechain, and a similar solution, Plasma, was implemented for Ethereum. Its significant drawback is that each such side chain is controlled by particular nodes, which must be trusted but which, in theory, can abuse their powers.
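The core idea of a payment channel can be sketched in miniature: many transfers happen off-chain, and only the final balances are settled on the main chain. This is a toy model, not an actual Lightning Network API; all class and method names here are illustrative:

```python
# Minimal sketch of the payment-channel idea behind Lightning-style networks.

class PaymentChannel:
    def __init__(self, deposit_a: int, deposit_b: int):
        # Both parties lock up funds when the channel is opened (on-chain).
        self.balances = {"A": deposit_a, "B": deposit_b}
        self.off_chain_transfers = 0

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # Each transfer just updates local balances; nothing touches the chain.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.off_chain_transfers += 1

    def close(self) -> dict:
        # Closing the channel produces a single on-chain settlement.
        return dict(self.balances)

channel = PaymentChannel(deposit_a=100, deposit_b=100)
for _ in range(50):
    channel.transfer("A", "B", 1)
channel.transfer("B", "A", 20)
print(channel.close())  # {'A': 70, 'B': 130} — one on-chain record for 51 transfers
```

Fifty-one transfers collapse into a single settlement record, which is exactly why such channels relieve the main chain.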

Arguments for and against increasing the block size

Many believe that the key to wider adoption of the Bitcoin blockchain lies solely in increasing block sizes, since this would not only raise the total number of transactions but also reduce fees. That is, the network would become both faster and cheaper. Proponents of the increase also emphasize that both sidechains and second-layer solutions are still being "finished up" and are not ready for mass adoption.

But there are problems here as well. At the current level of technical development, node operators have no trouble downloading new blocks, and they won't even if the size grows to 32 MB. However, if this process continues and sizes reach gigabytes, problems will begin, both with Internet bandwidth and with storage space on ordinary computers. Blockchain would then cease to be a network for "anyone and everyone" and become a tool for "anyone with a really powerful computer, or better yet, a server or a cluster of servers". That means increased centralization.
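A back-of-the-envelope calculation shows how quickly storage requirements escalate. The 10-minute interval mirrors Bitcoin; the block sizes are the hypothetical values discussed above:

```python
# Annual chain growth as a function of block size (pure arithmetic).

def chain_growth_gb_per_year(block_size_mb: float,
                             block_interval_minutes: float = 10) -> float:
    blocks_per_year = 365 * 24 * 60 / block_interval_minutes
    return block_size_mb * blocks_per_year / 1024

for size_mb in (1, 32, 1024):  # 1 MB, 32 MB, and 1 GB blocks
    gb = chain_growth_gb_per_year(size_mb)
    print(f"{size_mb:>4} MB blocks -> ~{gb:,.0f} GB per year")
```

With 1 MB blocks the chain grows by roughly 50 GB a year, which a consumer disk handles easily; with gigabyte blocks the figure is on the order of 50 TB a year, which is exactly the centralizing pressure the paragraph above describes.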

Another part of the problem is that as the block size increases, mining becomes more demanding: more data must be validated and propagated for each new block. If ordinary users can now somehow still compete with mining pools, then with a further increase in the amount of processed data they will have no chance. This, too, means greater centralization and a greater opportunity to seize control of the network.

How are things going with solving the problem at the moment?

As of the publication of this article, the Bitcoin blockchain has not changed the nature of its blocks since the introduction of the SegWit protocol. Instead, second-layer solutions and sidechains have been actively developed, making it possible to use bitcoins for everyday purchases. In terms of block-size increases, Bitcoin SV went even further than Bitcoin Cash, raising the size to 2 GB. However, this only led to higher maintenance costs and regular data loss.

But this is not the limit. The ILCOIN project uses the RIFT protocol, which, according to its creators, allows blocks of up to 5 GB and delivers around 100 thousand TPS. This is possible because each large storage unit consists of smaller sub-blocks (only 25 MB each) that do not need to be mined individually, since they are generated automatically by the "parent block".
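The arithmetic behind that claim is easy to check. Assuming the creators' stated figures of a 5 GB parent block and 25 MB sub-blocks:

```python
# How many 25 MB sub-blocks fit into a 5 GB parent block (illustrative only).

parent_block_mb = 5 * 1024   # 5 GB expressed in MB
sub_block_mb = 25

sub_blocks = parent_block_mb // sub_block_mb
print(sub_blocks)  # 204 full sub-blocks (with a small remainder)
```

So each parent block would coordinate roughly two hundred sub-blocks, none of which needs to be mined on its own.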

The Ethereum project hopes that its new proof-of-stake consensus protocol, Casper, will help it cope with scaling.

As part of the Cardano project, the Hydra system was developed, in which each user generates 10 "heads", each working as an independent channel that increases network throughput.


Despite the significant work done by developers, there is still no universal, safe, and effective solution to the scaling problem. All of the solutions listed above contribute to more efficient use of cryptocurrencies, but none is perfect, and much depends on the individual characteristics of each project. So development continues.