Segregated Witness (SegWit) update

Soon after the Bitcoin blockchain gained popularity, it became clear that the more people used it, the slower transactions were confirmed. A sender could speed up confirmation by paying a higher fee, but that was only a stopgap. Something had to change at the protocol level.

The fact is that speeding up block production was not an option: the roughly 10-minute block interval is a deliberate design parameter. Simply raising the block size limit was also off the table, since it would require a backwards-incompatible change to the protocol itself. But it was possible to look for other ways.

In 2015, the new Segregated Witness (SegWit) protocol, developed by Pieter Wuille together with other Bitcoin Core contributors, was introduced to ease the scalability problem, at least in the near term. It succeeded, and in August 2017 the update was activated, becoming one of the most notable soft forks in the history of Bitcoin.


  • Bandwidth increase. The main idea of SegWit was to separate transaction data from the digital signatures (the "witness"), thereby increasing the amount of useful information stored in a block. The signatures could not simply be dropped — they prove that the sender is authorised to spend the funds — but their storage could be reorganised. Under the new rules, witness data is counted at a discount, so a block can carry up to almost four times as much useful information without its base size changing. In effect, the "practical volume" of a block grew from 1 MB to as much as 4 MB.
  • Transaction speed increase. Since each block now held more transactions, the network's throughput increased significantly, and operating costs fell: where fees could previously reach $30 per transaction, after the introduction of SegWit they rarely exceeded $1.
  • The problem of transaction malleability. A transaction's digital signature could be re-encoded by third parties without becoming invalid, and such a change altered the transaction's identifier (txid). Since confirmed transactions on the blockchain are irreversible, payments that depended on the original identifier could end up referencing a transaction that no longer existed, effectively stranding funds. Separating the digital signature from the data used to compute the identifier resolved this problem.
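The capacity gain described above comes from BIP 141's "weight" accounting: non-witness bytes count four times, witness bytes once, against a 4,000,000-weight-unit block limit. A minimal sketch (the byte counts below are illustrative, not real transactions):

```python
# Sketch of BIP 141 block "weight" accounting. The consensus rule is
# weight = 3 * base_size + total_size, which simplifies to
# 4 * base_size + witness_size. Byte counts here are made up.

MAX_BLOCK_WEIGHT = 4_000_000  # consensus limit introduced by SegWit

def tx_weight(base_size: int, witness_size: int) -> int:
    """Weight = 4 * non-witness bytes + 1 * witness bytes."""
    return 4 * base_size + witness_size

# A legacy-style transaction: all 250 bytes count at full weight.
legacy = tx_weight(base_size=250, witness_size=0)    # 1000 WU

# A SegWit transaction of the same total size, with the signatures
# moved into the discounted witness area.
segwit = tx_weight(base_size=140, witness_size=110)  # 670 WU

print(MAX_BLOCK_WEIGHT // legacy)  # roughly how many legacy txs fit
print(MAX_BLOCK_WEIGHT // segwit)  # more segwit txs fit per block
```

This is why the theoretical maximum approaches 4 MB only if a block were almost entirely witness data; typical blocks land well below that.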

SegWit and Lightning Network

Thanks to solving the problem of transaction malleability, it became possible to create Layer 2 protocols that work on top of the main blockchain protocol, optimising some operations. This is exactly what the Lightning Network is: it processes large numbers of payments off-chain through payment channels and settles only the net result on the main chain.
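The malleability fix that makes this possible can be illustrated with a toy model: after SegWit, the txid is computed over the transaction without witness data, so tampering with a signature's encoding no longer changes the identifier. (Real Bitcoin serialization is far more involved; this only demonstrates the hashing rule.)

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Toy transaction: body and two alternate (but equally valid)
# encodings of the same signature. All byte strings are illustrative.
tx_body = b"inputs|outputs|locktime"
witness_a = b"sig-encoding-A"
witness_b = b"sig-encoding-B"

txid_a = dsha256(tx_body)               # witness excluded from txid
txid_b = dsha256(tx_body)
wtxid_a = dsha256(tx_body + witness_a)  # witness included in wtxid
wtxid_b = dsha256(tx_body + witness_b)

print(txid_a == txid_b)    # True: txid is immune to signature tweaks
print(wtxid_a == wtxid_b)  # False: wtxid still reflects the witness
```

A stable txid is what lets Layer 2 protocols safely build chains of unconfirmed transactions that reference each other by identifier.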

This Layer 2 protocol has seriously increased network productivity: where the Bitcoin base layer handles roughly 7 transactions per second, Lightning channels can process payments far faster, since most of them never touch the main chain at all. Therefore, other blockchains soon began to adapt the Lightning Network to their needs. This was especially true for projects based on the Bitcoin blockchain program code.
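The channel mechanism can be sketched conceptually: many off-chain balance updates, but only two on-chain transactions (open and close). The class below is purely illustrative; real channels use commitment transactions, HTLCs, and penalty mechanisms not modelled here.

```python
# Conceptual sketch of a Lightning-style payment channel.
# Amounts are in satoshis; all names are hypothetical.

class PaymentChannel:
    def __init__(self, alice: int, bob: int):
        # Funding amounts locked on-chain when the channel opens.
        self.balances = {"alice": alice, "bob": bob}
        self.updates = 0

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        # Each payment is just a mutually signed balance update,
        # exchanged off-chain.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self) -> dict:
        # Only the final balances are broadcast to the blockchain.
        return dict(self.balances)

ch = PaymentChannel(alice=100_000, bob=50_000)
for _ in range(1000):
    ch.pay("alice", "bob", 10)  # 1000 payments, zero on-chain txs
print(ch.close())               # settled with just open + close
```

The point is the ratio: a thousand payments here would cost the main chain only two transactions, which is where the throughput gain comes from.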

SegWit and SegWit2x

The SegWit protocol was designed as a voluntary, backwards-compatible upgrade: those who did not update their software could continue running the old version and mine quite efficiently. However, there was an alternative proposal — SegWit2x.

This update implied not only the introduction of the new storage rules, but also an increase of the base block size to 2 MB. However, the majority of miners and users concluded that larger blocks would raise hardware and bandwidth requirements, creating unequal conditions for participants with slower machines, and consensus was never reached. And since SegWit had largely solved the immediate problem, the more drastic changes were put on the back burner.


The introduction of the new protocol seriously increased the efficiency of the Bitcoin blockchain. And the fact that the decision to adopt it was made by a decentralised community once again confirmed the viability of this kind of governance.

However, not all users adopted the new rules. Only about 53 percent of transactions switched to the SegWit format, while the rest work "the old-fashioned way". Nevertheless, the network did not split, since SegWit was deliberately deployed as a backwards-compatible soft fork rather than a hard fork. The situation changed later, though: even with the optimised data storage, block space again proved too scarce.