Basic theoretical work:
A Transaction Fee Market Exists Without a Block Size Limit (Peter Rizun): This paper argues that block propagation times limit block sizes, which in turn creates competition for transaction space.
An Examination of Single Transaction Blocks and Their Effect on Network Throughput and Block Size (Andrew Stone): This paper argues that headers-only mining places a limit on transaction throughput, and therefore on average block size, that is based on underlying physical capability. It then derives a maximum block size by arguing that a rational miner will orphan any block that takes so long to validate that the miner could likely mine and validate a smaller block in the same time.
No Fork Necessary
Blocks only (Greg Maxwell??): Transactions are not forwarded to the node, reducing its bandwidth use. Unfortunately, such nodes cannot propagate transactions to peers, so they are useful only as endpoints in the P2P network.
Thin Blocks (Mike Hearn, implemented by Peter Tschipper): Blocks are reconstructed from transactions the receiving node already holds in its mempool, so only the header and any missing transactions need to be transmitted, reducing block-propagation bandwidth.
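A minimal sketch of the thin-blocks idea (all names and the short-ID scheme here are illustrative, not the actual wire format): the sender transmits the header plus short transaction IDs, and the receiver rebuilds the block from its mempool, requesting only what it is missing.

```python
import hashlib

def short_id(tx: bytes) -> bytes:
    # Truncated hash as a compact identifier (real schemes guard against collisions).
    return hashlib.sha256(tx).digest()[:6]

def make_thin_block(header: bytes, txs: list) -> tuple:
    # Sender side: header plus short IDs instead of full transactions.
    return header, [short_id(tx) for tx in txs]

def reconstruct(thin_block, mempool: dict):
    """Receiver side. mempool maps short_id -> tx; returns (txs, missing_ids)."""
    header, ids = thin_block
    txs, missing = [], []
    for sid in ids:
        if sid in mempool:
            txs.append(mempool[sid])
        else:
            missing.append(sid)   # must be fetched from the sender
    return txs, missing

txs = [b"tx-a", b"tx-b", b"tx-c"]
thin = make_thin_block(b"header", txs)
mempool = {short_id(t): t for t in txs[:2]}  # receiver already has 2 of 3
got, missing = reconstruct(thin, mempool)
```

Only one transaction has to cross the wire in full here; the other two are recovered locally.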
Weak Blocks (Gavin Andresen):
Miners relay block solutions that meet only a fraction of the current difficulty. This tells other miners/nodes what is being worked on, so when a full solution is found, miners need only send the nonce, not the full block. This should spread out the bandwidth spikes caused by block discovery, but may cause greater total bandwidth use.
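A toy illustration of the weak-block threshold (the targets and 16x fraction are made up for the example): hashes below a "weak" target, but not the full target, are relayed as hints of the miner's block contents, and turn up far more often than real solutions.

```python
import hashlib

FULL_TARGET = 2**248          # toy difficulty for the example
WEAK_FRACTION = 16            # weak blocks are 16x easier to find
WEAK_TARGET = FULL_TARGET * WEAK_FRACTION

def block_hash(header: bytes, nonce: int) -> int:
    return int.from_bytes(
        hashlib.sha256(header + nonce.to_bytes(8, "little")).digest(), "big")

def classify(h: int) -> str:
    if h < FULL_TARGET:
        return "full"   # real block: peers that saw the weak blocks need little
                        # more than the header/nonce to assemble it
    if h < WEAK_TARGET:
        return "weak"   # relayed as a hint of what the miner is working on
    return "none"

counts = {"full": 0, "weak": 0, "none": 0}
for nonce in range(50_000):
    counts[classify(block_hash(b"hdr", nonce))] += 1
```

Because weak solutions arrive steadily between real blocks, the transaction data propagates continuously rather than in one burst at block discovery.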
Subchains (Peter Rizun):
A formal treatment of weak blocks which adds the concept of weak blocks building on prior weak blocks recursively. This solution should reduce weak-block bandwidth to nearly what it would be without weak blocks, and also adds confidence to accepting 0-conf transactions, since users can see which blocks miners are working on.
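A sketch of the recursive structure (class and field names are ours, not the paper's): each weak block extends the previous one and carries only the transactions added since, so relaying it costs just the delta.

```python
class WeakBlock:
    def __init__(self, parent, new_txs):
        self.parent = parent          # prior weak block in the subchain (or None)
        self.new_txs = list(new_txs)  # delta: only transactions added at this step

    def all_txs(self):
        # Full block contents, recovered by walking the subchain.
        txs = self.parent.all_txs() if self.parent else []
        return txs + self.new_txs

w1 = WeakBlock(None, ["tx1", "tx2"])
w2 = WeakBlock(w1, ["tx3"])           # relayed as just ["tx3"]
w3 = WeakBlock(w2, ["tx4", "tx5"])

# Total bytes relayed across the subchain is just the sum of the deltas.
relayed = sum(len(w.new_txs) for w in (w1, w2, w3))
```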
Headers-only mining (independently deployed by miners, formally addressed by Andrew Stone, implemented by Gavin Andresen):
Headers-only mining (mining empty blocks) allows greater on-chain scaling because it provides a feedback mechanism where miners can reduce the average block size if it is nearing their local physical capacity. This effect requires no active agency; it is a natural result of headers-only mining while miners wait to acquire and validate the full block.
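A back-of-the-envelope model of the feedback effect (our own simplification, not from the paper): while a miner downloads and validates the previous block, it mines empty blocks on its header, so the expected average block size falls as validation time approaches the block interval.

```python
def expected_avg_block_size(full_size_mb, validate_secs_per_mb, interval_secs=600):
    """Toy model: blocks mined during validation are empty; the rest are full."""
    validate_secs = full_size_mb * validate_secs_per_mb
    # Fraction of the block interval spent mining header-only (empty) blocks.
    empty_fraction = min(validate_secs / interval_secs, 1.0)
    return full_size_mb * (1.0 - empty_fraction)

slow = expected_avg_block_size(100, 30)  # 100 MB blocks, 30 s/MB: always validating
fast = expected_avg_block_size(100, 1)   # same blocks, fast validation
```

When validation can't keep up, the average collapses toward empty blocks, which is exactly the self-limiting behaviour the entry above describes.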
BlockTorrent (Jonathan Toomim): A proposal to optimize access to blocks and transactions using algorithms inspired by BitTorrent.
Requires Fork
Basic block size increase (Satoshi Nakamoto, implemented in Bitcoin XT, Bitcoin Unlimited, Bitcoin Classic): This technique recognises that the current network infrastructure easily handles 1MB blocks and so simply suggests that the block size be increased. Within this basic technique there are multiple proposals dealing with how to change the maximum size:
- One-time change to 2 MB
- Bitpay's K*median(N blocks), on Bitcoin Classic roadmap
- Follow the most-work chain regardless of block size (Bitcoin Unlimited)
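The median-based rule in the list above can be sketched as follows (K and N are placeholders; the real proposal's constants and rounding may differ):

```python
def adaptive_limit(recent_sizes, k=2.0):
    """New max block size = K * median(sizes of the last N blocks)."""
    sizes = sorted(recent_sizes)
    n = len(sizes)
    median = (sizes[n // 2] if n % 2 else
              (sizes[n // 2 - 1] + sizes[n // 2]) / 2)
    return k * median

# Blocks of ~1 MB with one stuffed outlier: the median resists manipulation
# by a single miner, unlike a mean-based rule.
limit = adaptive_limit([0.9, 1.0, 1.1, 1.0, 8.0], k=2.0)
```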
Auxiliary (Extension) blocks (jl2012, Tier?): This technique proposes that the hash of another block be placed in the header of the current 1MB block. That other block contains an additional 1MB (or more) of space. This is a way of "soft-forking" a basic block size increase, with the proviso that older clients would not be able to verify the full blockchain and could be tricked into accepting double-spends, etc.
Bitcoin-NG (Ittay Eyal, Adem Efe Gencer, Emin Gün Sirer, Robbert van Renesse): This proposal uses POW (proof of work) to elect a temporary "leader" who then serializes transactions into "micro-blocks" until a new POW block is found and a new leader elected. As in the "basic block size increase" (and many others), the proposal still requires that every node see every transaction, and so it scales up to network throughput limits. The key advantages are that the leader can confirm transactions very quickly (constrained only by network propagation latencies) and that bandwidth use avoids the spikes associated with full block propagation.
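A toy sketch of the Bitcoin-NG flow (signature checking is stubbed out and all names are illustrative): a PoW "key block" elects a leader, who then issues a stream of microblocks that are valid only while signed by the current leader.

```python
class KeyBlock:
    def __init__(self, leader_pubkey):
        self.leader = leader_pubkey   # won by proof of work (not modeled here)

class MicroBlock:
    def __init__(self, txs, signer):
        self.txs = txs
        self.signer = signer          # stand-in for a real signature

def valid_microblock(mb, current_key_block):
    # A microblock is accepted only if issued by the current leader.
    return mb.signer == current_key_block.leader

kb = KeyBlock("miner-A")               # miner-A wins the PoW election
good = MicroBlock(["tx1", "tx2"], "miner-A")
bad = MicroBlock(["tx3"], "miner-B")   # not the leader: rejected
```

Microblocks need no proof of work of their own, which is why the leader can confirm transactions as fast as the network can carry them.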
Distributed Merkle hash trie (Andrew Stone): This technique inverts the blockchain into blocks that store the current tx-out set in a distributed Merkle trie, which can be traversed bit by bit by Bitcoin address to validate the existence of an address. The blockchain forms a history of the changes to this trie. This allows clients to "sync up" quickly; they do not need the full block history. Blocks can be produced at any node in the trie, containing transactions and blocks describing changes to that sub-trie. This allows nodes to track only portions of the address space. Miners only include portions of the trie that they have verified, resulting in slower confirmation times the deeper in the trie a transaction sits. Since transactions can be included at any parent, high-fee transactions are confirmed closer to the root, resulting in a fee market.
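A minimal sketch of such a trie (the encoding, 8-bit addresses, and helper names are our own simplifications): the tx-out set lives at the leaves, each node hashes its children, and the root commits to the whole set, so membership can be proven along a single address path.

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def build(entries, depth=0, bits=8):
    """entries: dict mapping fixed-width bit-string address -> tx-out value."""
    if depth == bits:
        (value,) = entries.values()          # exactly one entry at a leaf
        return {"hash": h(value), "value": value}
    left = {a: v for a, v in entries.items() if a[depth] == "0"}
    right = {a: v for a, v in entries.items() if a[depth] == "1"}
    node = {"left": build(left, depth + 1, bits) if left else None,
            "right": build(right, depth + 1, bits) if right else None}
    # Each node commits to its children, so the root commits to the whole set.
    node["hash"] = h(node["left"]["hash"] if node["left"] else b"\x00",
                    node["right"]["hash"] if node["right"] else b"\x00")
    return node

utxos = {"00000001": b"5 BTC", "00000010": b"2 BTC", "10000000": b"1 BTC"}
root = build(utxos)
```

Any change to a leaf changes every hash up to the root, which is what lets a newly synced client trust the current set without replaying history.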
Address sharding (Andrew Stone): This is a simplification of the Distributed Merkle hash trie technique, formulated to fit into Bitcoin with minimal disruption via extension blocks. This technique proposes that there exist a tree of extension blocks. Each extension block can only contain transactions whose addresses share a particular prefix, recursively; transactions containing addresses with multiple prefixes are placed in the nearest common parent extension block, or ultimately the root (today's block). Clients that do not track all the extension blocks are "full" nodes for those they track and "SPV" nodes for those they do not.
Miners do not include extension blocks that they do not track. This would cause those extension blocks to be mined less often, creating an interesting fee market with more expensive transactions gaining transaction space nearer to the root.
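The placement rule above can be sketched directly (4-bit toy addresses and helper names are ours): a transaction lands in the deepest shard whose prefix covers every address it touches, falling back to the root when there is no common prefix.

```python
def common_prefix(addresses):
    """Longest prefix shared by all address bit-strings."""
    if not addresses:
        return ""
    prefix = addresses[0]
    for a in addresses[1:]:
        while not a.startswith(prefix):
            prefix = prefix[:-1]
    return prefix

def assign_shard(tx_addresses, max_depth=2):
    # Shards are named by bit prefixes; "" denotes the root (today's block).
    return common_prefix(tx_addresses)[:max_depth]

single = assign_shard(["0110", "0111"])  # addresses share prefix "01"
cross = assign_shard(["0110", "1010"])   # no shared prefix: root block
```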
Segregated Witness (Pieter Wuille): This technique separates the verification information (signatures) from the transaction data, and cleans up a lot of other issues. Fully validating clients save no bandwidth or space -- in fact they consume a small amount more -- but a new type of client becomes possible. This "non-verifying client" (different from SPV clients) essentially relies on others (miners) to do signature verification; it saves bandwidth because the "witness" -- the signatures -- is not downloaded, but its security is undermined. Alternatively, a client could verify once (with no bandwidth savings) but not store the verification information on disk. That is a much smaller security risk, the remaining concern being that an attacker with direct access to the disk may be able to replace a transaction, making it "look" like an address has or does not have a balance. However, since bandwidth and RAM are the current bottlenecks, this solution relies on the fact that today's network can handle larger amounts of data, similar to the basic block size increase technique.
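A toy illustration of witness separation (a deliberate simplification, not the actual BIP141 serialization): the transaction ID commits only to the non-witness data, so clients that skip downloading or storing signatures still agree on transaction identity.

```python
import hashlib

def txid(tx):
    # Commits to inputs/outputs only -- signatures are excluded.
    return hashlib.sha256(tx["inputs"] + tx["outputs"]).hexdigest()

def wtxid(tx):
    # Commits to everything, witness included.
    return hashlib.sha256(tx["inputs"] + tx["outputs"] + tx["witness"]).hexdigest()

tx = {"inputs": b"in", "outputs": b"out", "witness": b"sig-1"}
malleated = dict(tx, witness=b"sig-2")  # same spend, different signature
```

Because the txid ignores the witness, changing a signature no longer changes the transaction's identity -- the property that lets the witness be dropped without breaking references between transactions.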
Transaction space supply vs demand dynamics (Andrew Stone, Meni Rosenfeld): This is a family of techniques to allow high fee paying transactions to cause the maximum block size to be temporarily expanded. These techniques observe that the transaction space supply curve currently goes vertical at 1MB and are meant to address the possible shock ("economic change event") of hitting that limit by allowing transaction supply space to increase smoothly. As such, this technique does not address a long-term demand increase, but may be combined with average or median based techniques to do so.
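One possible shape for such a supply curve (our own illustration; no specific proposal's formula or constants are implied): below a fee threshold the normal limit applies, and beyond it, extra space grows with the fees offered, so the curve no longer goes vertical at the base size.

```python
def max_block_size(total_fees, base_size=1.0, base_fee=0.1, slope=0.5):
    """Toy elastic cap: high fees buy a temporarily larger block (sizes in MB)."""
    if total_fees <= base_fee:
        return base_size                      # normal limit applies
    # Extra space grows smoothly with the fees offered above the baseline,
    # avoiding a shock when demand first hits the base limit.
    extra = (total_fees - base_fee) * slope
    return base_size + extra

normal = max_block_size(0.05)   # low fees: the ordinary limit
surge = max_block_size(2.1)     # a fee surge buys extra space
```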
Bitcoin9000 (anonymous): This paper proposes a combination of 3 techniques:
1. Subchains (renamed diff blocks by this paper)
2. Parallel transaction validation (transaction validation is nearly "embarrassingly parallel" -- this is an "expected" optimization)
3. A linked list of size-doubling extension blocks, with a 95% miner vote (signaled in blocks) needed to "open" the next-deeper block
Off Chain Scaling (included for completeness)
Sidechains (Gregory Maxwell, implemented by Blockstream): This technique renders Bitcoins immobile on the Bitcoin blockchain, allowing a representation of them to be used on a different blockchain. The current implementation uses humans as stewards of the "locked" bitcoins ("federated sidechains"), so it suffers from counterparty risk just like any "paper" money scheme. There is a theoretical proposal to remove the human stewards.
Lightning Network (Joseph Poon, Thaddeus Dryja): A network of bidirectional payment channels anchored to the blockchain. Most payments happen off-chain inside channels, with only channel opens, closes, and disputes consuming on-chain transaction space.