Minisketch: Reducing Bitcoin Node Bandwidth Requirements

Bitcoin-SV: are terabyte blocks feasible?

Block propagation time and block processing time (to prepare and validate a block) are crucial factors. Every mining node has an economic incentive to propagate its block as quickly as possible, so that other nodes are more likely to build on its fork. At the same time, packing a very large number of transactions into the block increases its propagation time, so a node has to balance the number of transactions it includes (the block size) against the transaction fees it collects on top of the block reward.
But BSV's scaling approach expects logical blocks at gigabyte/terabyte sizes in the future, and the problem outlined above can be a huge obstacle to getting there. It will only be exacerbated as block sizes grow, until rational, economically motivated nodes begin to ration the number of transactions in a block.
I believe block propagation time currently scales roughly as O(n), where n is the number of transactions, as there is no strong block compression (like Graphene) in place. Block processing time is also roughly O(n), since most of the processing is serial.
Compact Blocks (BIP 152), as currently implemented in Bitcoin SV, already provide a basic level of block compression by referring to transactions the peer is expected to already have in its mempool rather than retransmitting them in full.
Typically a Compact Block is about 10-15% of the size of the full uncompressed legacy block, which reduces the effective propagation time. While this is probably good enough for Bitcoin Core, which is not seeking to increase the block size, it is certainly not enough for Bitcoin SV.
Graphene, which uses Bloom filters and Invertible Bloom Lookup Tables (IBLTs), seems to provide an efficient solution to the transaction set reconciliation problem, and it offers additional compression beyond Compact Blocks: a Graphene block is roughly 10% of the size of a typical Compact Block (per the authors' empirical tests).
With the above information and a few assumptions, we can quickly estimate the demands on a terabyte-block node and its feasibility within current hardware and bandwidth limitations.
Assumptions:
1 TB block ==> 100-150 GB Compact block ==> 10 - 15 GB Graphene block
Let's conservatively take the low end, a 10 GB Graphene-compressed block, and assume a 10 Gb/s link: 10 GB / 10 Gb/s = 8 seconds.
So we still need 8 full seconds to propagate this block one hop to the next immediate peer. Also note that we have conveniently ignored the massive parallelization that would be needed for transaction and block processing, which would likely involve techniques like mempool and UTXO-set sharding in the node architecture.
But the point to take home is that 8 seconds is exorbitant, and under the outlined assumptions we need a better, workable compression algorithm irrespective of other architectural improvements. A quick sanity check of the arithmetic is sketched below.
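As a quick sanity check on those numbers, here is a minimal back-of-the-envelope helper; the compression ratios and the 10 Gb/s link are simply the assumptions stated above, not measurements.

```python
def propagation_seconds(block_bytes, compression_ratio, link_gbps):
    """One-hop transfer time for a compressed block over a given link."""
    compressed_bytes = block_bytes * compression_ratio
    link_bytes_per_sec = link_gbps * 1e9 / 8      # convert Gb/s to bytes/s
    return compressed_bytes / link_bytes_per_sec

TERABYTE = 1e12
# Assumed ratios from above: Compact ~10%, Graphene ~1% of the raw block size.
for name, ratio in [("raw block", 1.0), ("compact", 0.10), ("graphene", 0.01)]:
    print(f"{name:10s}: {propagation_seconds(TERABYTE, ratio, 10):8.1f} s per hop")
```

This reproduces the 8 seconds per hop for the 10 GB Graphene case, against 80 s for a Compact Block and 800 s for the raw terabyte block.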
The above led me to begin work on an "ultra compression" algorithm: a stateful, highly parallelizable protocol (it places high memory and CPU demands) that fits the goal of a horizontally scalable architecture built on affordable consumer-grade hardware. The outline of the algorithm looks promising and seems to compress the block by a factor of thousands if not more, especially for the block publisher; although the encoded block grows as it travels farther from the publishing node, it remains reasonable IMO.
Now, before I go further down this rabbit hole, I wanted you guys to poke holes in my assumptions, requirements and calculation outlines. Subsequently I will publish a (semi-formal) paper detailing the ultra compression algorithm and how it fits with the overall node architecture per the ideas expressed above.
I would appreciate it if someone could point me to alternative practical solutions that have already been vetted and are in the dev pipeline.
submitted by stoichammer to bitcoincashSV [link] [comments]

Scaling Bitcoin: Gavin begins work on invertible bloom lookup tables on Github.

Scaling Bitcoin: Gavin begins work on invertible bloom lookup tables on Github. submitted by platonicgap to Bitcoin [link] [comments]

In my opinion the most important part of Scaling Bitcoin! (Peter R)

In my opinion the most important part of Scaling Bitcoin! (Peter R) submitted by saddit42 to Bitcoin [link] [comments]

gavinandresen [12:08 AM] I should bang out udp broadcast of block headers and validationless mining so we can stop talking about propagation time....

submitted by shludvigsen to btc [link] [comments]

A fast relay network for miners

submitted by gavinandresen to Bitcoin [link] [comments]

Two great talks at Scaling Bitcoin

Professor Brian Levine of UMass Amherst gave two really excellent talks yesterday on papers he authored.
1) Graphene https://people.cs.umass.edu/~gbiss/graphene.pdf
Graphene is a new block propagation algorithm. For those who are unaware, today we have two main propagation algorithms: Xthin by the Bitcoin Unlimited developers (which was used in their gigablock research) and Compact Blocks by the Core developers, both of which have similar performance.
Graphene (which was co-authored with Gavin Andresen and others) is able to compress the block down to 1/10th of the size of a Compact Block. It does this by using both a high false positive rate bloom filter and an IBLT (invertible bloom lookup table) to provide the diff. This is pretty close to the maximum compression possible.
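For readers who want to see the moving parts, here is a toy sketch of the IBLT half of that design: each cell holds a count, a key XOR and a checksum XOR, two tables built over nearly identical sets can be subtracted cell-wise, and the small symmetric difference is then "peeled" out. It is a simplified illustration (8-byte IDs, fixed toy parameters), not the paper's implementation.

```python
import hashlib

def _h(data: bytes, salt: int) -> int:
    """8-byte hash of `data` with a one-byte salt, as an integer."""
    return int.from_bytes(hashlib.sha256(bytes([salt]) + data).digest()[:8], "big")

class IBLT:
    """Toy invertible Bloom lookup table over 8-byte transaction IDs."""
    def __init__(self, m=200, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_xor = [0] * m
        self.chk_xor = [0] * m

    def _cells(self, txid: bytes):
        return [_h(txid, i) % self.m for i in range(self.k)]

    def insert(self, txid: bytes, sign: int = 1):
        key, chk = int.from_bytes(txid, "big"), _h(txid, 0xFF)
        for i in self._cells(txid):
            self.count[i] += sign
            self.key_xor[i] ^= key
            self.chk_xor[i] ^= chk

    def subtract(self, other: "IBLT") -> "IBLT":
        """Cell-wise difference of two tables built over similar sets."""
        out = IBLT(self.m, self.k)
        for i in range(self.m):
            out.count[i] = self.count[i] - other.count[i]
            out.key_xor[i] = self.key_xor[i] ^ other.key_xor[i]
            out.chk_xor[i] = self.chk_xor[i] ^ other.chk_xor[i]
        return out

    def peel(self):
        """Recover the symmetric difference as (only_in_self, only_in_other)."""
        mine, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] in (1, -1):
                    key = self.key_xor[i].to_bytes(8, "big")
                    if self.chk_xor[i] != _h(key, 0xFF):
                        continue                    # cell still holds >1 item, skip
                    (mine if self.count[i] == 1 else theirs).add(key)
                    self.insert(key, sign=-self.count[i])  # remove it everywhere
                    progress = True
        return mine, theirs

# Reconciliation demo: the sender encodes the block's txids, the receiver
# encodes its own candidate set, and peeling the difference reveals the gap.
block = [i.to_bytes(8, "big") for i in range(1, 50)]
pool = block[:45] + [(1000 + i).to_bytes(8, "big") for i in range(5)]
sender, receiver = IBLT(), IBLT()
for t in block: sender.insert(t)
for t in pool: receiver.insert(t)
missing, extra = sender.subtract(receiver).peel()
print(len(missing), "txids the receiver must fetch;", len(extra), "it should drop")
```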
2) Bobtail https://arxiv.org/pdf/1709.08750.pdf
Bobtail is a consensus protocol which removes most of the variance in block times. We all know how much it sucks when the miners hit an unlucky patch and it takes 20, 40, or 60 mins to find a new block. Bobtail removes the variance and results in nearly all blocks being found within 8 to 12 minutes.
Not only does this make confirmations more predictable, but it completely eliminates the selfish mining attack, dramatically strengthens one-confirmation payments (1 conf would be more like 4 conf, if I remember correctly), and removes the advantage that larger miners have which causes centralization pressure (a main scaling bottleneck).
It's pretty amazing that nobody has thought of this before. There's some crazy math behind it but if it all checks out it could be a pretty dramatic improvement.
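A rough Monte Carlo sketch of why averaging helps, assuming Bobtail's rule that a block is found once the mean of the k lowest proof values seen so far falls below a target; the arrival rate, target and k = 40 below are illustrative, not taken from the paper.

```python
import bisect, random, statistics

def block_time(k, arrival_rate=50.0, target=0.001):
    """Simulated time until the mean of the k lowest proof values seen so far
    drops below `target`. Candidate proofs arrive as a Poisson process and
    each proof value is uniform on (0, 1); k = 1 is ordinary PoW."""
    t, lowest = 0.0, []
    while True:
        t += random.expovariate(arrival_rate)       # wait for the next proof
        v = random.random()
        if len(lowest) < k:
            bisect.insort(lowest, v)
        elif v < lowest[-1]:
            bisect.insort(lowest, v)
            lowest.pop()                            # keep only the k smallest
        if len(lowest) == k and sum(lowest) / k <= target:
            return t

def summarize(k, trials=500, mean_minutes=10.0):
    times = [block_time(k) for _ in range(trials)]
    scale = mean_minutes / statistics.mean(times)   # rescale to a 10-minute mean
    times = [x * scale for x in times]
    return statistics.mean(times), statistics.stdev(times)

for k in (1, 40):
    mean, sd = summarize(k)
    print(f"k = {k:2d}: mean {mean:4.1f} min, stdev {sd:4.1f} min")
```

With k = 1 the rescaled block time is essentially exponential, with a standard deviation close to the 10-minute mean, while k = 40 tightens the spread several-fold, which is the behaviour described in the talk.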
submitted by Chris_Pacia to btc [link] [comments]

The Origins of the Blocksize Debate

On May 4, 2015, Gavin Andresen wrote on his blog:
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
On May 5 (1 day after Gavin submitted his first blog post), Mike Hearn published The capacity cliff on his Medium page. 2 days later, he posted Crash landing. In these posts, he argued:
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature[1][2]:
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions. [1][2][3][4][5][6][7]
Matt Corallo brought Gavin's proposal up on the bitcoin-dev mailing list after a few days. He wrote:
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasnt been any discussion on this mailing list in several years as far as I can tell...
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
Shortly thereafter, Corallo explained further:
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are no where near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate against the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal
Peter Todd mentioned that a limited blocksize provides the benefit of protecting against the "perverse incentives" behind potential block withholding attacks.
Slush didn't have a strong opinion one way or the other, and neither did Eric Lombrozo, though Eric was interested in developing hard-fork best practices and wanted to:
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100 million fold, without modifying max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security.. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
Gregory Maxwell echoed and extended that perspective:
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
There are non-trivial number of people who hold extremes on any of these general belief patterns; Even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is a at least a two fold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest its been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
Peter Todd also summarized some academic findings on the subject:
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, echoing Mike Hearn's "crash landing" arguments.
Pieter Wuille said:
I am - in general - in favor of increasing the size blocks...
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, and be honest about them, and present them as a trade-off made we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large bank's settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness needs in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
Choose wisely.
Mike Hearn responded:
this list is not a good place for making progress or reaching decisions.
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
Jorge Timón wrote an incredibly prescient reply to Mike:
We've successfully reached consensus for several softfork proposals already. I agree with others that hardfork need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment all I'm ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gaving and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look like then" it's "no bitcoin long term it's broken long term but that's far away in the future so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation to be able to predict that future better. What you want to avoid at all cost (the block size actually being used), I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed some times. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to avoid that from happening). Having a true fee market seems clearly an advantage. What are supposedly disastrous negative parts of this plan that make an alternative plan (ie: increasing the block size) so necessary and obvious. I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanation are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just an price estimator for a market that doesn't even really exist) would be a disaster.
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Mike Hearn's response was to demand a "leader" who could unilaterally steer the Bitcoin project and make decisions unchecked:
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
Jorge Timón responded:
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
Mike Hearn again asserted the need for a leader:
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense with git architecture.
Finally, Gavin announced his intent to merge the patch into Bitcoin XT to bypass the peer review he had received on the bitcoin-dev mailing list.
submitted by sound8bits to Bitcoin [link] [comments]

I am of the opinion that there should be no blocksize limit and that we should charge a fee for every transaction to prevent spamming. Maybe 0.00001 BTC now, reducing as the transaction count increases or with block time. Off-chain transactions can come in below this minimum fee.

submitted by phanpp to btc [link] [comments]

Bitcoin block propagation - Ultra compression

This solution paper is a sequel to my earlier post on reddit, Bitcoin-SV: are terabyte blocks feasible?; I recommend reading that article first, as it deals with requirements scoping and feasibility, before continuing with this paper.

https://gist.github.com/stoichammeb275228fa5583487955c8c2f91829b00

Would greatly appreciate feedback and criticism on the above from SV devs. I recently noticed a tweet from @shadders333 where he announced that a potential solution for block propagation will be presented at the CoinGeek scaling conference. It is purely coincidental that I have been interested in this problem for the past few days, and I have no clue about the solution they intend to share. Would love to hear first-hand thoughts from you guys.
----------------------------
Edit: Added a new section to the paper on how it compares with Graphene. Copying the same here as well. 

Comparison with Graphene:

Graphene uses techniques such as Bloom filters and invertible bloom lookup tables (IBLTs) to compress blocks more efficiently than Compact Blocks (BIP 152). Graphene works on a probabilistic model for IBLT decoding, and there is a small chance of failure; in that event the sender must resend the IBLT with double the number of cells. The authors did some empirical testing and found this doubling was sufficient for the very few blocks that actually failed. Graphene sizes appear to be linearly proportional to mempool sizes. Practically speaking, though, we need to take another factor, "mempool divergence", into account: as the network grows and mempools become larger, the divergence increases, and in practice decoding failures will rise. One proposal to counter this is to request blocks from multiple (2 or 3) peers and merge them together, which decreases the probability of IBLT decoding errors at the cost of additional resources. There is also an open attack vector called the poison block attack, where a malicious miner mines a block containing transactions that were kept private; this leads to an inevitable decode failure. Although this attack seems fatal to Graphene's adoption, there is hope that game-theoretic PoW underpinnings may come to the rescue.
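As a sketch of the decode-failure handling mentioned above (resend with double the cells, eventually fall back to a full block); `peer.get_graphene_block`, `try_decode` and `peer.get_full_block` are hypothetical placeholders, not real APIs:

```python
def fetch_block(peer, block_hash, base_cells=10_000, max_rounds=3):
    """Hedged sketch of a receiver's retry loop.  If peeling the IBLT fails,
    ask the sender for a version with twice as many cells; after a few
    rounds, fall back to requesting the full block."""
    cells = base_cells
    for _ in range(max_rounds):
        candidate = peer.get_graphene_block(block_hash, iblt_cells=cells)
        txids, ok = try_decode(candidate)      # peel against our own mempool
        if ok:
            return txids
        cells *= 2                             # double the cell count and retry
    return peer.get_full_block(block_hash)     # last resort: uncompressed block
```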
Graphene distills the block propagation problem into the classical set reconciliation problem (in set-theoretic terms, the order of elements is irrelevant), building on previous academic literature on set reconciliation that also used Bloom filters and IBLTs. It discards the concomitant time information of transactions and defaults to implicit ordering, typically canonical (ID sorting), though supplemental ordering information can be included. If topological ordering of transactions is needed, that additional ordering information has to be included at the cost of increasing the size of the block. Graphene complements implicit ordering schemes like CTOR (Canonical Transaction Ordering) well, although it deviates from Nakamoto-style chronological ordering of transactions within a block.
Ultra compression (this paper), by contrast, takes a novel approach that leverages the concomitant time information of transactions to its advantage and achieves a much better compression factor. It does not treat the problem as merely one of set reconciliation; instead, it improves efficiency by encoding the relative time sequence of transactions into the block.
The primary advantages are as below:
Notable disadvantages:
In my subsequent post, I will cover a more comprehensive distributed system architecture for a Node that covers the following:
submitted by stoichammer to bitcoinsv [link] [comments]

Why I'm not worried about big blocks: IBLT and Weak Blocks!

Why I'm not worried about big blocks: IBLT and Weak Blocks! submitted by BIP-101 to btc [link] [comments]

GMaxwell in 2006, during his Wikipedia vandalism episode: "I feel great because I can still do what I want, and I don't have to worry what rude jerks think about me ... I can continue to do whatever I think is right without the burden of explaining myself to a shreaking [sic] mass of people."

https://en.wikipedia.org/w/index.php?title=User_talk:Gmaxwell&diff=prev&oldid=36330829
Is anyone starting to notice a pattern here?
Now we're starting to see that it's all been part of a long-term pattern of behavior for the last 10 years with Gregory Maxwell, who has deep-seated tendencies towards:
After examining his long record of harmful behavior on open-source software projects, it seems fair to summarize his strengths and weaknesses as follows:
(1) He does have excellent programming skills.
(2) He likes, or rather needs, to be in control.
(3) He always believes that whatever he's doing is "right" - even if a consensus of other highly qualified people happens to disagree with him (people he rudely dismisses as "shrieking masses", etc.)
(4) Because of (1), (2), and (3), we are now seeing how dangerous it can be to let him assume power over an open-source software project.
This whole mess could have been avoided.
This whole mess only happened because people let Gregory Maxwell "be in charge" of Bitcoin development as CTO of Blockstream;
The whole reason the Bitcoin community is divided right now is simply because Gregory Maxwell is dead-set against any increase in "max blocksize" even to a measly 2 MB (he actually threatened to leave the project if it went over 1 MB).
This whole problem would go away if he could simply be man enough to step up and say to the Bitcoin community:
"I would like to offer my apologies for having been so stubborn and divisive and trying to always be in control. Although it is still my honest personal belief that that a 1 MB 'max blocksize' would be the best for Bitcoin, many others in the community evidently disagree with me strongly on this, as they have been vehement and unrelenting in their opposition to me for over a year now. I now see that any imagined damage to the network resulting from allowing big blocks would be nothing in comparison to the very real damage to the community resulting from forcing small blocks. Therefore I have decided that I will no longer attempt to force my views onto the community, and I shall no longer oppose a 'max blocksize' increase at this time."
Good luck waiting for that kind of an announcement from GMax! We have about as much a chance of GMax voluntarily stepping down as leader of Bitcoin, as Putin voluntarily stepping down as leader of Russia. It's just not in their nature.
As we now know - from his 10-year history of divisiveness and vandalism, and from his past year of stonewalling - he would never compromise like this, compromise is simply not part of his vocabulary.
So he continues to try to impose his wishes on the community, even in the face of ample evidence that the blocksize could easily be not only 2 MB but even 3-4 MB right now - ie, both the infrastructure and the community have been empirically surveyed and it was found that the people and the bandwidth would both easily support 3-4 MB already.
But instead, Greg would rather use his position as "Blockstream CTO" to overrule everyone who supports bigger blocks, telling us that it's impossible.
And remember, this is the same guy who a few years ago was also telling us that Bitcoin itself was "mathematically impossible".
So here's a great plan to get rich:
(1) Find a programmer who's divisive and a control freak, who overrides consensus, who didn't believe that Bitcoin was possible, and who doesn't believe that it can do simple "max blocksize"-based scaling (even in the face of massive evidence to the contrary).
(2) Invest $21+55 million in a private company and make him the CTO (and make Adam Back the CEO - another guy who also didn't believe that Bitcoin would work).
(3) ???
(4) Profit!
Greg and his supporters say bigblocks "might" harm Bitcoin someday - but they ignore the fact that smallblocks are already harming Bitcoin now.
Everyone from Core / Blockstream mindlessly repeats Greg's mantra that "allowing 2 MB blocks could harm the network" - somehow, someday (but actually, probably not: see Footnotes [1], [2], [3], and [4] below).
Meanwhile, the people who foolishly put their trust in Greg are ignoring the fact that "constraining to 1 MB blocks is harming the community" - right now (ie, people's investments and businesses are already starting to suffer).
This is the sad situation we're in.
And everybody could end up paying the price - which could reach millions or billions of dollars if people don't wake up soon and get rid of Greg Maxwell's toxic influence on this project.
At some point, no matter how great Gregory Maxwell's coding skills may be, the "money guys" behind Blockstream (Austin Hill et al.), and their newer partners such as the international accounting consultancy PwC - and also the people who currently hold $5-6 billion dollars in Bitcoin wealth - and the miners - might want to consider the fact that Gregory Maxwell is so divisive and out-of-touch with the community, that by letting him continue to play CTO of Bitcoin, they may be in danger of killing the whole project - and flushing their investments and businesses down the toilet.
Imagine how things could have been right now without GMax.
Just imagine how things would be right now if Gregory Maxwell hadn't wormed his way into getting control of Bitcoin:
There is a place for everyone.
Talented, principled programmers like Greg Maxwell do have their place on software development projects.
Things would have been fine if we had just let him work on some complicated mathematical stuff like Confidential Transactions (Adam Back's "homomorphic encryption") - because he's great for that sort of thing.
(I know Greg keeps taking this as a "back-handed (ie, insincere) compliment" from me, nullc - but I do mean it with all sincerity: I think he has great programming and cryptography skills, and I think his work on Confidential Transactions could be a milestone for Bitcoin's privacy and fungibility. But first Bitcoin has to actually survive as a going project, and it might not survive if he continues to insist on trying to impose his will in areas where he's obviously less qualified, such as this whole "max blocksize" thing where the infrastructure and the market should be in charge, not a coder.)
But Gregory Maxwell is too divisive and too much of a control freak (and too out-of-touch about what the technology and the market are actually ready for) to be "in charge" of this software development project as a CTO.
So this is your CTO, Bitcoin. Deal with it.
He dismissed everyone on Wikipedia back then as "shrieking masses" and he dismisses /btc as a "cesspool" now.
This guy is never gonna change. He was like this 10 years ago, and he's still like this now.
He's one of those arrogant C/C++ programmers, who thinks that because he understands C/C++, he's smarter than everyone else.
It doesn't matter if you also know how to code (in C/C++ or some other language).
It doesn't matter if you understand markets and economics.
It doesn't matter if you run a profitable company.
It doesn't even matter if you're Satoshi Nakamoto:
Satoshi Nakamoto, October 04, 2010, 07:48:40 PM "It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit / It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete."
https://np.reddit.com/btc/comments/3wo9pb/satoshi_nakamoto_october_04_2010_074840_pm_it_can/
Gregory Maxwell is in charge of Bitcoin now - and he doesn't give a flying fuck what anyone else thinks.
He has and always will simply "do whatever he thinks is right without the burden of explaining himself to you" - even if he has to destroy the community and the project in the process.
That's just the kind of person he is - 10 years ago on Wikipedia (when he was just one of many editors), and now (when he's managed to become CTO of a company which took over Satoshi's repository and paid off most of its devs).
We now have to make a choice:
Footnotes:
[1]
If Bitcoin usage and blocksize increase, then mining would simply migrate from 4 conglomerates in China (and Luke-Jr's slow internet =) to the top cities worldwide with Gigabit broadband - and price and volume would go way up. So how would this be "bad" for Bitcoin as a whole??
https://np.reddit.com/btc/comments/3tadml/if_bitcoin_usage_and_blocksize_increase_then/
[2]
"What if every bank and accounting firm needed to start running a Bitcoin node?" – bdarmstrong
https://np.reddit.com/btc/comments/3zaony/what_if_every_bank_and_accounting_firm_needed_to/
[3]
It may well be that small blocks are what is centralizing mining in China. Bigger blocks would have a strongly decentralizing effect by taming the relative influence China's power-cost edge has over other countries' connectivity edge. – ForkiusMaximus
https://np.reddit.com/btc/comments/3ybl8it_may_well_be_that_small_blocks_are_what_is/
[4]
Blockchain Neutrality: "No-one should give a shit if the NSA, big businesses or the Chinese govt is running a node where most backyard nodes can no longer keep up. As long as the NSA and China DON'T TRUST EACH OTHER, then their nodes are just as good as nodes run in a basement" - ferretinjapan
https://np.reddit.com/btc/comments/3uwebe/blockchain_neutrality_noone_should_give_a_shit_if/
submitted by ydtm to btc [link] [comments]

Technical discussion of Gavin's O(1) block propagation proposal

I think there isn't wide appreciation of how important Gavin's proposal is for the scalability of Bitcoin. It's the real deal, and will get us out of this sort of beta mode we've been in of a few transactions per second globally. I spent a few hours reviewing the papers referenced at the bottom of his excellent write-up and think I get it now.
If you already get it, then hang around and answer questions from me and others. If you don't get it yet, start by very carefully reading https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2.
The big idea is twofold: fix the miner's incentives to align better with users wanting transactions to clear, and eliminate the sending of redundant data in the newblock message when a block is solved to save bandwidth.
I'll use (arbitrarily) a goal of 1 million tx per block, which works out to roughly 1,700 TPS. This seems pretty achievable, without a lot of uncertainty. Really! Read on.
Today, a miner really wants to propagate a solved block as soon as possible so as not to jeopardize her 25 BTC reward. It's not the CPU cost of handling the transactions on the miner's side that's the problem; it's the sending of a larger newblock message around the network that just might cause her block to lose the race against another solution to the block.
So aside from transactions with fees of more than 0.0008 BTC that can make up for this penalty (https://gist.github.com/gavinandresen/5044482), or simply the goodwill of benevolent pools to process transactions, there is today an incentive for miners not to include transactions in a block. The problem is BTC price has grown so high so fast that 0.0008 BTC is about 50 cents, which is high for day-to-day transactions (and very high for third world transactions).
The whole idea centers around an old observation that since the network nodes (including miners) have already received transactions by the normal second-by-second operation of the p2p network, the newblock announcement message shouldn't have to repeat the transaction details. Instead, it can just tell people, hey, I approve these particular transactions called XYZ, and you can check me by taking your copy of those same transactions that you already have and running the hash to check that my header is correctly solved. Proof of work.
A basic way to do this would be to send around a Bloom filter in the newblock message. A receiving node would check all the messages they have, see which of them are in this solved block, and mark them out of their temporary memory pool. Using a BF calculator you can see that you need about 2MB in order to get an error rate of 10e-6 for 1 million entries. 2MB gives 16 million bits which is enough to almost always be able to tell if a tx that you know about is in the block or not.
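(For reference, the textbook sizing formula behind such a calculator is m = -n * ln(p) / (ln 2)^2 bits with k = (m / n) * ln 2 hash functions; the snippet below just evaluates it for a few target false-positive rates so the trade-off is easy to cross-check.)

```python
import math

def bloom_bits(n, p):
    """Optimal Bloom filter size in bits: m = -n * ln(p) / (ln 2)^2."""
    return -n * math.log(p) / math.log(2) ** 2

def optimal_hashes(m, n):
    """Optimal number of hash functions: k = (m / n) * ln 2."""
    return m / n * math.log(2)

n = 1_000_000                                   # transactions in the block
for p in (1e-4, 1e-5, 1e-6):
    m = bloom_bits(n, p)
    print(f"p = {p:.0e}: {m / 8 / 1e6:4.1f} MB, k ~ {optimal_hashes(m, n):.0f}")
```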
There are two problems with the Bloom filter approach: there may be transactions in the solved block that you don't have, for whatever p2p network or policy reason. The BF can't tell you what those are. It can just tell you there were e.g. 1,000,000 tx in this solved block and you were able to find only 999,999 of them. The other glitch is that of those 999,999 it told you were there, a couple could be false positives. I think there are ways you could try to deal with this--send more types of request messages around the network to fill in your holes--but I'll dismiss this and flip back to Gavin's IBLT instead.
The IBLT works super well to mash a huge number of transactions together into one fixed-size (O(1)) data structure, to compare against another set of transactions that is really close, with just a few differences. The "few differences" part compared to the size of the IBLT is critical to this whole thing working. With too many differences, the decode just fails and the receiver wouldn't be able to understand this solved block.
Gavin suggests key size of 8B and data of 8B chunks. I don't understand his data size--there's a big key checksum you need in order to do full add and subtract of IBLTs (let's say 8B, although this might have to be 16B?) that I would rather amortize over more granular data chunks. The average tx is 250B anyway. So I'm going to discuss an 8B key and 64B data chunks. With a count field, this then gives 8 key + 64 data + 16 checksum + 4 count = 92B. Let's round to 100B per IBLT cell.
Let's say we want to fix our newblock message size to around 1MB, in order to not be too alarming for the change to this scheme from our existing 1MB block limit (that miners don't often fill anyway). This means we can have an IBLT with m=10K, or 10,000 cells, which with the 1.5d rule (see the papers) means we can tolerate about 6000 differences in cells, which because we are slicing transactions into multiple cells (4 on average), means we can handle about 1500 differences in transactions at the receiver vs the solver and have faith that we can decode the newblock message fully almost all the time (has to be some way to handle the occasional node that fails this and has to catch up).
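The cell arithmetic from this paragraph in one place (same assumed sizes; slightly different rounding gives ~1,650 rather than ~1,500 transaction differences):

```python
def iblt_budget(message_bytes=1_000_000, cell_bytes=100, data_bytes=64,
                avg_tx_bytes=250, decode_overhead=1.5):
    """Decode budget for a fixed-size IBLT using the numbers above: ~100 B per
    cell (8 key + 64 data + 16 checksum + 4 count, rounded up) and the rule of
    thumb that decoding d cell differences needs about 1.5 * d cells."""
    cells = message_bytes // cell_bytes
    cell_diffs = int(cells / decode_overhead)
    chunks_per_tx = -(-avg_tx_bytes // data_bytes)     # ceil: ~4 chunks per tx
    return cells, cell_diffs, cell_diffs // chunks_per_tx

cells, cell_diffs, tx_diffs = iblt_budget()
print(f"{cells} cells -> ~{cell_diffs} cell diffs -> ~{tx_diffs} tx differences")
```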
So now the problem becomes, how can we define some conventions so that the different nodes can mostly agree on which of the transactions flying around the network for the past N (~10) minutes should be included in the solved block. If the solver gets it wrong, her block doesn't get accepted by the rest of the network. Strong incentive! If the receiver gets it wrong (although she can try multiple times with different sets), she can't track the rest of the network's progress.
This is the genius part around this proposal. If we define the convention so that the set of transactions to be included in a block is essentially all of them, then the miners are strongly incentivized, not just by tx fees, but by the block reward itself to include all those transactions that happened since the last block. It still allows them to make their own decisions, up to 1500 tx could be added where convention would say not to, or not put in where convention says to. This preserves the notion of tx-approval freedom in the network for miners, and some later miner will probably pick up those straggler tx.
I think it might be important to provide as many guidelines for the solver as possible to describe what is in her block, in specific terms as possible without actually having to give tx ids, so that the receivers in their attempt to decode this block can build up as similar an IBLT on their side using the same rules. Something like the tx fee range, some framing of what tx are in the early part and what tx are near the end (time range I mean). Side note: I guess if you allow a tx fee range in this set of parameters, then the solver could put it real high and send an empty block after all, which works against the incentive I mentioned above, so maybe that particular specification is not beneficial.
From http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf for example, the propagation delay is about 30-40 seconds before almost all nodes have received any particular transaction, so it may be useful for the solver to include tx only up to a certain point in time, like 30 seconds ago. Any tx that is younger than this just waits until the next block, so it's not a big penalty. But some policy like this (and some way to communicate it in the absence of centralized time management among the nodes) will be important to keep the number of differences in the two sets small, below 1500 in my example. The receiver of the newblock message would know when trying to decode it, that they should build up an IBLT on their side also with tx only from up to 30 seconds ago.
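A sketch of that inclusion convention, assuming each node records when it first saw each transaction; the 30-second cutoff is just the example value above, not an existing protocol rule:

```python
import time

def block_candidates(mempool, cutoff_seconds=30):
    """Only transactions first seen at least `cutoff_seconds` ago go into the
    solver's (or the receiver's) IBLT, so both sides build nearly the same
    set.  `mempool` maps txid -> the time this node first saw the tx."""
    now = time.time()
    return {txid for txid, first_seen in mempool.items()
            if now - first_seen >= cutoff_seconds}
```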
I don't understand Gavin's requirement for canonical ordering. I see that it doesn't hurt, but I don't see the requirement for it. Can somebody elaborate? It seems that's his way to achieve the same framing that I am talking about in the previous paragraph, to obtain a minimum number of differences in the two sets. There is no need to clip the total number of tx in a block that I see, since you can keep shoving into the IBLT as much as you want, as long as the number of differences is bounded. So I don't see a canonical ordering being required for clipping the tx set. The XOR (or add-subtract) behavior of the IBLT doesn't require any ordering in the sets that I see, it's totally commutative. Maybe it's his way of allowing miners some control over what tx they approve, how many tx into this canonical order they want to get. But that would also allow them to send around solved empty blocks.
What is pretty neat about this from a consumer perspective is the tx fees could be driven real low, like down to the network propagation minimum which I think as of this spring per Mike Hearn is now 0.00001 BTC or 10 "bits" (1000 satoshis), half a US cent. Maybe that's a problem--the miners get the shaft without being able to bid on which transactions they approve. If they try to not approve too many tx their block won't be decoded by the rest of the network like all the non-mining nodes running the bitpay/coinbases of the world.
Edit: 10 bits is 1000 satoshis, not 10k satoshis
submitted by sandball to Bitcoin [link] [comments]

I support BIP000

I support BIP000, the BIP that proposes not to change Bitcoin.
Why?
Because I believe Bitcoin isn't broken, I do not buy the false dichotomy that a choice "must be made" between different competing BIPs, or else the sky is going to fall on our heads without notice and "potential new users" are going to "leave ship".
Because I understand that the security of the network has to somehow be paid for, and the current money issuance schedule is still in a phase where mining is heavily subsidized. I know that at the same fee levels, even with 100mb blocks, the miners would make nowhere near the current block reward.
Because I conclude that blocks must therefore necessarily be full for the Bitcoin network to be able to pay for its own security.
Because I realize that the block space is a scarce resource that is sold, but not paid for, by the miners, especially with constant time block propagation proposed by IBLT and the like, which is why a "tragedy of the commons" situation can only be avoided by a block size cap.
Because I compare this situation with how nonsensical it would be to declare that "people should be allowed to chop down as much wood as they'd like from the rainforest, because the market will magically sort it out and protect the trees".
Because I am not scared by the constant fear-mongering of those who compulsively want to "fix" a system that isn't broken to ensure that they'll get rich while they sleep by allowing more people to "buy into Bitcoin".
Because I don't buy the false sense of urgency that is manipulated upon us, I want to see if less costly solutions can be found, and I want to revisit once fees become more important than the block reward.
This is why, my friends, I support BIP000.
submitted by davout-bc to Bitcoin [link] [comments]

The Origins of the (Modern) Blocksize Debate

On May 4, 2015, Gavin Andresen wrote on his blog:
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
On May 5 (1 day after Gavin submitted his first blog post), Mike Hearn published The capacity cliff on his Medium page. 2 days later, he posted Crash landing. In these posts, he argued:
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature[1][2]:
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions. [1][2][3][4][5][6][7]
Matt Corallo brought Gavin's proposal up on the bitcoin-dev mailing list after a few days. He wrote:
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasnt been any discussion on this mailing list in several years as far as I can tell...
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
Shortly thereafter, Corallo explained further:
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are no where near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate against the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal
Peter Todd mentioned that a limited blocksize provides the benefit of protecting against the "perverse incentives" behind potential block withholding attacks.
Slush didn't have a strong opinion one way or the other, and neither did Eric Lombrozo, though Eric was interested in developing hard-fork best practices and wanted to:
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100 million fold, without modifying max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security.. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
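As a rough illustration of the aggregation idea Bishop describes two paragraphs above (the party names, amounts, and netting rule below are made up for the example, not part of any concrete proposal), many back-and-forth payments collapse to their net effect:

```python
# Illustrative only: collapse many back-and-forth payments between two
# parties into their net balance changes, so only the start and end states
# would ever need to be settled on-chain.

def net_payments(payments):
    """payments: iterable of (sender, receiver, amount) tuples."""
    balances = {}
    for sender, receiver, amount in payments:
        balances[sender] = balances.get(sender, 0) - amount
        balances[receiver] = balances.get(receiver, 0) + amount
    # Only non-zero net changes would need to be recorded.
    return {party: delta for party, delta in balances.items() if delta != 0}

# Stand-in for "100 million" round trips: everything nets out to nothing.
round_trips = [("alice", "bob", 1), ("bob", "alice", 1)] * 1_000
print(net_payments(round_trips))                          # {}

# An uneven flow nets down to a single residual transfer.
print(net_payments(round_trips + [("alice", "bob", 5)]))  # {'alice': -5, 'bob': 5}
```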
Gregory Maxwell echoed and extended that perspective:
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
There are non-trivial number of people who hold extremes on any of these general belief patterns; Even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is at least a two-fold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest its been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
Peter Todd also summarized some academic findings on the subject:
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, echoing Mike Hearn's "crash landing" arguments.
Pieter Wuille said:
I am - in general - in favor of increasing the size blocks...
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, and be honest about them, and present them as a trade-off made we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large bank's settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness needs to be in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
Choose wisely.
Mike Hearn responded:
this list is not a good place for making progress or reaching decisions.
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
Jorge Timón wrote an incredibly prescient reply to Mike:
We've successfully reached consensus for several softfork proposals already. I agree with others that hardfork need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment I'm all ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gavin and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look like then" it's "no bitcoin long term it's broken long term but that's far away in the future so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation to be able to predict that future better. What you want to avoid at all cost (the block size actually being used), I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed some times. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to avoid that from happening). Having a true fee market seems clearly an advantage. What are supposedly disastrous negative parts of this plan that make an alternative plan (ie: increasing the block size) so necessary and obvious. I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanation are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just an price estimator for a market that doesn't even really exist) would be a disaster.
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Mike Hearn's response was to demand a "leader" who could unilaterally steer the Bitcoin project and make decisions unchecked:
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
Jorge Timón responded:
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
Mike Hearn again asserted the need for a leader:
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense with git architecture.
Finally, Gavin announced his intent to merge the patch into Bitcoin XT to bypass the peer review he had received on the bitcoin-dev mailing list.
submitted by sound8bits to sound8bits [link] [comments]

"Eppur, se muove." | It's not even about the specifics of the specs. It's about the fact that (for the first time since Blockstream hijacked the "One True Repo"), *we* can now actually once again *specify* those specs. It's about Bitcoin Classic.

Right now, there's been a lot of buzz about Bitcoin Classic.
For the first time since Blockstream hijacked the "one true repo" (which they basically inherited from Satoshi), we now also appear to have another real, serious repo - based almost 100% on Core, but already starting to deviate every-so-slightly from it - and with a long-term roadmap that also promises to be both responsive and robust.
The Bitcoin Classic project already has some major advantages, including:
"When in the course of Bitcoin development ... it becomes necessary (and possible) to set up a new (real, serious) repo with a dev and a miner and a payment processor who are able to really understand the code at the mathematical and economical level, and really interact with the users at the social and political level...
(unlike the triad of tone-deaf pinheads at Blockstream, fueled by fiat, coddled by censorship, and pathologically attached to their pet projects: Adam Back and Gregory Maxwell and Peter Todd - brilliant though these devs may be as C/C++ programmers)
...then this will be a major turning point in the history of Bitcoin."
Bitcoin Classic
What is it?
Right now, it's probably more like just an "MVP" (Minimal Viable Product) for:
  • governance or
  • decentralized development or
  • a-new-codebase-which-has-a-good-chance-of-being-adopted-due-to-being-a-kind-of-Schelling-point-of-development-due-to-having-a-top-miner/researcher-on-board-JToomim-plus-a-top-dev/researcher-on-board-GavinAndresen-plus-a-really-simple-and-robust-max-blocksize-algorithm-BitPay's-Adaptive-Block-Size-Limit-which-empowers-miners-and-not-developers
Call it what you will.
But that's what we need at this point: a new repo which is:
  • a minimal departure from the existing One True repo
  • safe and sane in the sense that it empowers miners over devs
Paraphrasing the words of Paul Sztorc on "Measuring Decentralization", "decentralization" means "a very low cost for anyone to add...":
  • one more block,
  • one more verifying node,
  • one more mining node,
  • one more developer,
  • one more (real, serious) repo.
And this last item is probably what Bitcoin Classic is really about.
It's about finally being able to add one more (real, serious) repo...
...knowing that to a certain degree, some of the specific specs are still-to-be-specified
...but that's ok, because we can see that the proper social-political-economic requirements for responsibly doing so finally appear to be in place: ie, we are starting to see the coalescence of a team...
...who experiment and observe - and communicate and listen - and respond and react accordingly
...so that they can faithfully (but conservatively) translate users' needs & requirements into code that can achieve consensus on the network.
As it's turned out, it has been surprisingly challenging to create this kind of bridge between users and devs (centered around a new, real, serious codebase with a good chance of adoption)...
...because (sorry for the stereotype) most users can't code, and many devs can't communicate (well enough)
...so, many devs can't (optimally) figure out what to code.
We've seen how out-of-touch the devs can be (particularly when shielded by censors and funded by venture capitalists), not only in the "blocksize wars", but also with decisions such as the insistence of Blockstream's devs to prioritize things like RBF and LN over the protests of many users.
But now it looks like, for the first time since Blockstream hijacked the one real, serious repo, we now have a new real, serious repo where...
(due to being a kind of "Schelling point of development" - ie a focal point many people can, well, "focus" on)
(due to having a responsive expert scientific miner like JToomim on-board - and a responsive expert scientific dev like Gavin on-board - with stated preference for a simple, robust, miner-empowering approach to block size - eg: BitPay's Adaptive Block Size)
... this repo actually has a very good chance of achieving:
  • rough consensus among the community (the "social" community of discussing and debating and developing), and
  • actual consensus on the network (eg 750 / 1000 of previous blocks, or whatever ends up being defined).
In the above, the words "responsive" and "scientific" have very concrete meanings:
  • responsive: they elicit-verify-implement actual users' needs & requirements
  • scientific: they use the scientific method of proposing-testing-and-accepting-or-rejecting a hypothesis
  • (in particular, they don't have hangups about shifting priorities among projects and proposals when new information becomes available - ie, they have the maturity and the self-awareness and the egolessness to not become pathologically over-attached to proving irrelevant points or pursuing pet projects)
So we could have the following definition of "centralization of development" (à la Paul Sztorc):
The "cost" of anyone adding a new (real, serious) repo must be kept as minimal as possible.
(But of course with the caveat or condition that: the repo still must be "real and serious" - which implies that it will have to overcome a high hurdle in order to be seriously entertained.)
And it bears repeating: As we've seen from the past year of raging debates, the costs and challenges of adding a new (real, serious) repo are largely social and political - and can be very high and exceedingly complex.
But that's probably the way it should be. Because adding a new repo is the first step on the road towards doing a hard fork.
So it is a journey which must not be embarked upon with levity, but with gravity - with all due deliberation and seriousness.
Which is one quite legitimate reason why the people against such a change have dug their heels in so determinedly. And we should actually be totally understanding and even thankful that they have done so.
As long as it's a fair fight, done in good faith.
Which I think many of us can feel generous enough to say it indeed has been - for the most part.
Note: I always add the parenthetical "(real, serious)" to the phrase "a new (real, serious) repo" here the same way we add the parenthetical "(valid)" to the phrase: "the longest (valid) chain".
  • In order to add a "valid" block to this chain, there are algorithmic rules - purely mathematical.
  • In order to add a "real, serious" repo to the ecosystem - or to the website bitcoin.org for example, as we recently saw in the strange spectacle of CoinBase diplomatically bowing down to theymos - the rules (and costs) for determining whether a repo is "real and serious" are not purely mathematical but are social-political and economical - and ultimately human, all-too human.
But eventually, a new real serious repo does get added.
Which is what we appear to be seeing now, with this rallying of major talent around Bitcoin Classic.
It is of course probably natural and inevitable that the upholders / usurpers of the First and Only Real Serious Repo might be displeased to see any other new real serious repo(s) arising - and might tend to "unfairly" leverage any advantages they enjoy as "incumbents", in order to maintain their power. This is only human.
But all's fair in love and consensus, so we probably shouldn't hold any of these tendencies against them. =)
"Eppur, si muove."
=>
"But eventually, inexorably, a new 'real, serious' repo does get added."
[Sorry I spelled a word wrong in the OP title: should be "si" not "se"!]
(For some strange delicious reason, I hope luke-jr in particular reads the above lines. =)
So a new real serious repo does finally get set up on Github, and eventually downloaded and compiled to a new real serious binary.
And this binary gets tested on testnet and rolled out on mainnet and - if enough users adopt it (as proven by some easy-to-observe "trigger" - eg 750 of the past 1000 blocks being mined with it) - then this real serious new Bitcoin client gains enough "consensus" to "activate" - and a (hard) chainfork then ensues (which we expect and indeed endeavor to guarantee should only take a few hours at most to resolve itself, as all hashpower should quickly move to the longest valid chain).
Yes, this process must involve intensive debate and caution and testing, because it is so very, very dangerous - because it is a "hard fork": initially a hard codefork which takes months of social-political debating to resolve, hopefully guided by the invisible hand of the market, and then a (hard) chainfork which takes only a few hours to resolve (we dearly hope & expect - actually we try to virtually guarantee this by establishing a high enough activation trigger, eg "such-and-such percentage of the previous number of blocks must have been mined using the new program").
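As a rough sketch of how such an activation trigger might be checked (the signalling version, threshold, and window below are illustrative placeholders, not the parameters of any actual deployment):

```python
# Illustrative sketch of a "750 of the last 1000 blocks" style trigger.
# The signalling version, threshold, and window are placeholder values.

def activation_reached(recent_block_versions, signal_version=5,
                       threshold=750, window=1000):
    window_versions = recent_block_versions[-window:]
    signalling = sum(1 for v in window_versions if v >= signal_version)
    return signalling >= threshold

# Example: 800 of the last 1000 blocks signal the new version -> activated.
versions = [4] * 200 + [5] * 800
print(activation_reached(versions))  # True
```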
For analogies to a hard codefork in football and chess, you may find the same Paul Sztorc article interesting, particularly the section on the dangers of hard forks.
So a "hard fork" is what we must do sometimes. Rarely, and with great deliberation and seriousness.
And the first step involves setting up a new (real, serious) repo.
This is why the actual details on the max-blocksize-increments themselves can be (and are being) left sort of vague for the moment.
There's a certain amount of hand-waving in the air.
Which is ok in this case.
Because this repo isn't about the specifics of any particular "max blocksize algorithm" - yet.
Although we do already have an encouraging statement from Gavin that his new favorite max blocksize proposal is BitPay's Adaptive Block Size Limit - which is very promising, since this proposal is simple, it gives miners autonomy over devs, and it is based on the median (not the average) of previous blocks; the median is known to be a "more robust" (hence less game-able) statistic.
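To illustrate why a median is harder to game than an average, here is a minimal sketch of a median-based limit in the spirit of BitPay's proposal; the window length and multiplier are placeholder assumptions, not the proposal's actual parameters:

```python
# Illustrative sketch of a median-based adaptive block size limit.
import statistics

def adaptive_limit(recent_block_sizes, multiplier=2, window=2016):
    recent = recent_block_sizes[-window:]
    return multiplier * statistics.median(recent)

# 200 deliberately stuffed 8 MB blocks barely move the median-based limit,
# while a mean-based limit would be dragged far upward by the same outliers.
sizes = [900_000] * 1816 + [8_000_000] * 200
print(adaptive_limit(sizes))               # 1800000.0
print(2 * statistics.mean(sizes[-2016:]))  # roughly 3.2 million
```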
So, in this sense, Bitcoin Classic is mainly about even being allowed to seriously propose some different "max blocksize" (and probably eventually a few other) algorithms(s) at all in the first place.
So far, in amongst all the hand-waving, here's what we do apparently know:
  • Definitely an initial bump to 2 MB.
  • Then... who knows?
Whatever.
At this point, it's not even the specificity of those specs that matters.
It's just that, for the first time, we have a repo whose devs will let us specify those specs.
  • evidently using some can-kick blocksize-bumps initially...
  • probably using some more "algorithmic" approach long-term - still probably very much TBD (to-be-determined - but that should be fine, because it will clearly be in consultation with the users and the empirical data of the network and the market!)...
  • and probably eventually also embracing many of the other "scaling" approaches which are not based on simply bumping up a parameter - eg: SegWit, IBLTs, weakblocks & subchains, thinblocks
So...
This is what Bitcoin Classic mainly seems to be about at this point.
It's one of the first real serious moves towards decentralized development.
It's a tiny step - but the fact that we can now even finally take a step - after so many months of paralysis - is probably what's really important here.
submitted by ydtm to btc [link] [comments]

Flux: Revisiting Near Blocks for Proof-of-Work Blockchains

Cryptology ePrint Archive: Report 2018/415
Date: 2018-05-29
Author(s): Alexei Zamyatin∗, Nicholas Stifter, Philipp Schindler, Edgar Weippl, William J. Knottenbelt∗

Link to Paper


Abstract
The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks.
In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages on near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed without conflicting modifications to the underlying base protocol as a velvet fork. We perform an initial analysis of selfish mining which suggests Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.
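As a rough illustration of the near/weak block idea from the abstract (the hashing scheme and both targets below are toy assumptions, not the paper's parameters): a header that misses the full-difficulty target can still meet a weaker one, and such headers occur far more often.

```python
# Toy illustration of near (weak) blocks: a header hash that misses the
# full-difficulty target may still meet a weaker target. Both targets are
# arbitrary placeholders chosen so the effect shows up quickly.
import hashlib
from collections import Counter

FULL_TARGET = 2 ** 248   # "real" block: ~1 in 256 of these toy headers
WEAK_TARGET = 2 ** 252   # near block: ~1 in 16, so far more frequent

def classify(header: bytes) -> str:
    h = int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")
    if h < FULL_TARGET:
        return "full block"
    if h < WEAK_TARGET:
        return "near (weak) block"
    return "no block"

counts = Counter(classify(f"header|nonce:{n}".encode()) for n in range(1000))
print(counts)  # near blocks appear far more often than full blocks
```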

References
[1] Bitcoin Cash. https://www.bitcoincash.org/. Accessed: 2017-01-24.
[2] P2pool. http://p2pool.org/. Accessed: 2017-05-10.
[3] G. Andersen. Comment in ”faster blocks vs bigger blocks”. https://bitcointalk.org/index.php?topic=673415.msg7658481#msg7658481, 2014. Accessed: 2017-05-10.
[4] G. Andersen. [bitcoin-dev] weak block thoughts... https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011157.html, 2015. Accessed: 2017-05-10.
[5] E. Androulaki, S. Capkun, and G. O. Karame. Two bitcoins at the price of one? double-spending attacks on fast payments in bitcoin. In CCS, 2012.
[6] J. Becker, D. Breuker, T. Heide, J. Holler, H. P. Rauer, and R. Böhme. Can we afford integrity by proof-of-work? scenarios inspired by the bitcoin currency. In WEIS. Springer, 2012.
[7] I. Bentov, R. Pass, and E. Shi. Snow white: Provably secure proofs of stake. https://eprint.iacr.org/2016/919.pdf, 2016. Accessed: 2016-11-08.
[8] Bitcoin community. OP RETURN. https://en.bitcoin.it/wiki/OP\RETURN. Accessed: 2017-05-10.
[9] Bitcoin Wiki. Merged mining specification. https://en.bitcoin.it/wiki/Merged_mining_specification. Accessed: 2017-05-10.
[10] Blockchain.info. Hashrate Distribution in Bitcoin. https://blockchain.info/de/pools. Accessed: 2017-05-10.
[11] Blockchain.info. Unconfirmed bitcoin transactions. https://blockchain.info/unconfirmed-transactions. Accessed: 2017-05-10.
[12] J. Bonneau, A. Miller, J. Clark, A. Narayanan, J. A. Kroll, and E. W. Felten. Sok: Research perspectives and challenges for bitcoin and cryptocurrencies. In IEEE Symposium on Security and Privacy, 2015.
[13] V. Buterin. Ethereum: A next-generation smart contract and decentralized application platform. https://github.com/ethereum/wiki/wiki/White-Paper, 2014. Accessed: 2016-08-22.
[14] C. Decker and R. Wattenhofer. Information propagation in the bitcoin network. In Peer-to-Peer Computing (P2P), 2013 IEEE Thirteenth International Conference on, pages 1–10. IEEE, 2013.
[15] J. R. Douceur. The sybil attack. In International Workshop on Peer-toPeer Systems, pages 251–260. Springer, 2002.
[16] I. Eyal, A. E. Gencer, E. G. Sirer, and R. Renesse. Bitcoin-ng: A scalable blockchain protocol. In 13th USENIX Security Symposium on Networked Systems Design and Implementation (NSDI’16). USENIX Association, Mar 2016.
[17] I. Eyal and E. G. Sirer. Majority is not enough: Bitcoin mining is vulnerable. In Financial Cryptography and Data Security, pages 436–454. Springer, 2014.
[18] J. Garay, A. Kiayias, and N. Leonardos. The bitcoin backbone protocol: Analysis and applications. In Advances in Cryptology-EUROCRYPT 2015, pages 281–310. Springer, 2015.
[19] A. E. Gencer, S. Basu, I. Eyal, R. Renesse, and E. G. Sirer. Decentralization in bitcoin and ethereum networks. In Proceedings of the 22nd International Conference on Financial Cryptography and Data Security (FC). Springer, 2018.
[20] A. Gervais, G. Karame, S. Capkun, and V. Capkun. Is bitcoin a decentralized currency? volume 12, pages 54–60, 2014.
[21] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun. On the security and performance of proof of work blockchains. https://eprint.iacr.org/2016/555.pdf, 2016. Accessed: 2016-08-10.
[22] M. Jakobsson and A. Juels. Proofs of work and bread pudding protocols. In Secure Information Networks, pages 258–272. Springer, 1999.
[23] A. Judmayer, A. Zamyatin, N. Stifter, A. G. Voyiatzis, and E. Weippl. Merged mining: Curse or cure? In CBT’17: Proceedings of the International Workshop on Cryptocurrencies and Blockchain Technology, Sep 2017.
[24] G. O. Karame, E. Androulaki, M. Roeschlin, A. Gervais, and S. Čapkun. Misbehavior in bitcoin: A study of double-spending and accountability. volume 18, page 2. ACM, 2015.
[25] A. Kiayias, A. Miller, and D. Zindros. Non-interactive proofs of proof-of-work. Cryptology ePrint Archive, Report 2017/963, 2017. Accessed:2017-10-03.
[26] A. Kiayias, A. Russell, B. David, and R. Oliynykov. Ouroboros: A provably secure proof-of-stake blockchain protocol. In Annual International Cryptology Conference, pages 357–388. Springer, 2017.
[27] Y. Lewenberg, Y. Sompolinsky, and A. Zohar. Inclusive block chain protocols. In Financial Cryptography and Data Security, pages 528–547. Springer, 2015.
[28] Litecoin community. Litecoin reference implementation. https://github.com/litecoin-project/litecoin. Accessed: 2018-05-03.
[29] G. Maxwell. Comment in "[bitcoin-dev] weak block thoughts...". https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011198.html, 2016. Accessed: 2017-05-10.
[30] S. Micali. Algorand: The efficient and democratic ledger. http://arxiv.org/abs/1607.01341, 2016. Accessed: 2017-02-09.
[31] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, Dec 2008. Accessed: 2015-07-01.
[32] Namecoin community. Namecoin reference implementation. https://github.com/namecoin/namecoin. Accessed: 2017-05-10.
[33] Narayanan, Arvind and Bonneau, Joseph and Felten, Edward and Miller, Andrew and Goldfeder, Steven. Bitcoin and cryptocurrency technologies. https://d28rh4a8wq0iu5.cloudfront.net/bitcointech/readings/princeton bitcoin book.pdf?a=1, 2016. Accessed: 2016-03-29.
[34] K. Nayak, S. Kumar, A. Miller, and E. Shi. Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In 1st IEEE European Symposium on Security and Privacy, 2016. IEEE, 2016.
[35] K. J. O’Dwyer and D. Malone. Bitcoin mining and its energy footprint. 2014.
[36] R. Pass and E. Shi. Fruitchains: A fair blockchain. http://eprint.iacr.org/2016/916.pdf, 2016. Accessed: 2016-11-08.
[37] C. Pérez-Solà, S. Delgado-Segura, G. Navarro-Arribas, and J. Herrera-Joancomartí. Double-spending prevention for bitcoin zero-confirmation transactions. http://eprint.iacr.org/2017/394, 2017. Accessed: 2017-06-
[38] Pseudonymous(”TierNolan”). Decoupling transactions and pow. https://bitcointalk.org/index.php?topic=179598.0, 2013. Accessed: 2017-05-10.
[39] P. R. Rizun. Subchains: A technique to scale bitcoin and improve the user experience. Ledger, 1:38–52, 2016.
[40] K. Rosenbaum. Weak blocks - the good and the bad. http://popeller.io/ index.php/2016/01/19/weak-blocks-the-good-and-the-bad/, 2016. Accessed: 2017-05-10.
[41] K. Rosenbaum and R. Russell. Iblt and weak block propagation performance. Scaling Bitcoin Hong Kong (6 December 2015), 2015.
[42] M. Rosenfeld. Analysis of hashrate-based double spending. http://arxiv.org/abs/1402.2009, 2014. Accessed: 2016-03-09.
[43] R. Russel. Weak block simulator for bitcoin. https://github.com/rustyrussell/weak-blocks, 2014. Accessed: 2017-05-10.
[44] A. Sapirshtein, Y. Sompolinsky, and A. Zohar. Optimal selfish mining strategies in bitcoin. http://arxiv.org/pdf/1507.06183.pdf, 2015. Accessed: 2016-08-22.
[45] E. B. Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and M. Virza. Zerocash: Decentralized anonymous payments from bitcoin. In Security and Privacy (SP), 2014 IEEE Symposium on, pages 459–474. IEEE, 2014.
[46] Satoshi Nakamoto. Comment in ”bitdns and generalizing bitcoin” bitcointalk thread. https://bitcointalk.org/index.php?topic=1790.msg28696#msg28696. Accessed: 2017-06-05.
[47] Y. Sompolinsky, Y. Lewenberg, and A. Zohar. Spectre: A fast and scalable cryptocurrency protocol. Cryptology ePrint Archive, Report 2016/1159, 2016. Accessed: 2017-02-20.
[48] Y. Sompolinsky and A. Zohar. Secure high-rate transaction processing in bitcoin. In Financial Cryptography and Data Security, pages 507–527. Springer, 2015.
[49] Suhas Daftuar. Bitcoin merge commit: ”mining: Select transactions using feerate-with-ancestors”. https://github.com/bitcoin/bitcoin/pull/7600. Accessed: 2017-05-10.
[50] M. B. Taylor. Bitcoin and the age of bespoke silicon. In Proceedings of the 2013 International Conference on Compilers, Architectures and Synthesis for Embedded Systems, page 16. IEEE Press, 2013.
[51] F. Tschorsch and B. Scheuermann. Bitcoin and beyond: A technical survey on decentralized digital currencies. In IEEE Communications Surveys Tutorials, volume PP, pages 1–1, 2016.
[52] P. J. Van Laarhoven and E. H. Aarts. Simulated annealing. In Simulated annealing: Theory and applications, pages 7–15. Springer, 1987.
[53] A. Zamyatin, N. Stifter, A. Judmayer, P. Schindler, E. Weippl, and W. J. Knottebelt. (Short Paper) A Wild Velvet Fork Appears! Inclusive Blockchain Protocol Changes in Practice. In 5th Workshop on Bitcoin and Blockchain Research, Financial Cryptography and Data Security 18 (FC). Springer, 2018.
[54] F. Zhang, I. Eyal, R. Escriva, A. Juels, and R. Renesse. Rem: Resourceefficient mining for blockchains. http://eprint.iacr.org/2017/179, 2017. Accessed: 2017-03-24.
submitted by dj-gutz to myrXiv [link] [comments]

Braiding the Blockchain - Bob McElrath, PhD: "If two blocks could be mined at the same time and placed into a tree or Directed Acyclic Graph ('braid') as parallel nodes at the same height without conflicting, both block size and block time can disappear as parameters altogether (ending the debate)."

UPDATE: There's also a YouTube video of his proposal available as well (32 minutes + 20 minutes Q&A):
https://www.youtube.com/watch?v=62Y_BW5NC1M
https://www.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
https://web.archive.org/web/20151212001135/http://blog.sldx.com/three-challenges-for-scaling-bitcoin/
Blockchain Insights: Three Challenges for Scaling Bitcoin
Move from a chain to a more sophisticated data structure
The linked-list like block “chain” is not the only data structure into which transactions can be placed.
The block-size debate really is a consequence of shoe-horning transactions into this linear structure.
If two blocks could be mined at the same time and placed into a tree or Directed Acyclic Graph as parallel nodes at the same height without conflicting, [*] both block size and block time can disappear as parameters altogether (ending the tiresome debate).
Directed Acyclic Graph is a mouthful, so we prefer the term “block-braid.”
[*] Perhaps a Bloom Filter or Invertible Bloom Lookup Table (IBLT) could be used to quickly and compactly verify that two blocks do not contain any transactions having the same "from" address.
https://duckduckgo.com/?q=IBLT+Inverted+Bloom+Lookup+Table&t=ha&ia=software
https://gnunet.org/sites/default/files/TheoryandPracticeBloomFilter2011Tarkoma.pdf
The Bloom filter is a space-efficient probabilistic data structure that supports set membership queries. The data structure was conceived by Burton H. Bloom in 1970. The structure offers a compact probabilistic way to represent a set that can result in false positives (claiming an element to be part of the set when it was not inserted), but never in false negatives (reporting an inserted element to be absent from the set). This makes Bloom filters useful for many different kinds of tasks that involve lists and sets. The basic operations involve adding elements to the set and querying for element membership in the probabilistic set representation.
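A minimal sketch of those add/query operations, and of how two blocks' filters might be compared for possible conflicts as suggested in the footnote above (the filter size, hash count, and "outpoint" strings are arbitrary illustrative choices):

```python
# Minimal Bloom filter sketch: add/query plus a cheap overlap check.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # May return a false positive, but never a false negative.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

block_a, block_b = BloomFilter(), BloomFilter()
block_a.add("outpoint:deadbeef:0")
block_b.add("outpoint:cafebabe:1")

print("outpoint:deadbeef:0" in block_a)   # True: inserted items are always found
# If the ANDed filters share no set bits, the blocks provably spend no common
# output; a non-zero overlap only signals a *possible* conflict.
print(block_a.bits & block_b.bits == 0)
```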
Braiding the Blockchain (PDF):
https://scalingbitcoin.org/hongkong2015/presentations/DAY2/2_breaking_the_chain_1_mcelrath.pdf
Experiments:
He's coded up a demo of this in about 600 lines of Python:
https://github.com/mcelrath/braidcoin
And he's also done some testing:
https://rawgit.com/mcelrath/braidcoin/master/Braid%2BExamples.html
submitted by ydtm to btc [link] [comments]

Bitcoin-NG could sidestep the entire blocksize debate! Core generates a RETROSPECTIVE block for txns from the last 10 minutes. NG is FORWARD-LOOKING: every 10 minutes, NG elects a LEADER, adding txns as soon as they happen. Core is limited by blocksize; NG can run as fast as the network allows.

Bitcoin-NG ("Next Generation"): A Secure, Faster, Better Blockchain
A group of Cornell researchers … have proposed Bitcoin-NG, a radical redesign of the Bitcoin architecture meant to solve the blocksize trade-off entirely.
https://bitcoinmagazine.com/articles/bitcoin-ng-or-how-cornell-researchers-think-a-radical-redesign-can-solve-bitcoin-s-scaling-issues-1447108649
TL;DR from the blog post:
Bitcoin-NG sidesteps the scaling dilemma by inverting the behavior of the blockchain. In Bitcoin, the system generates a retrospective block that encases in cryptographic stone the transactions that took place in the preceding 10 minutes. In Bitcoin-NG, the protocol is, instead, forward-looking: every 10 minutes, NG elects a leader, who then vets future transactions as soon as they happen. The former is necessarily limited by the blocksize and block interval, while the latter approach can run as fast as the network will allow.
http://hackingdistributed.com/2015/10/14/bitcoin-ng/
ELI5 from jefdaj:
Basically instead of mining transactions directly miners are competing to be "blockchain czar" for the next 10 minutes and are authorized to sign off on transactions during that time. But then if they abuse that power by double spending the network cancels it and takes away their block reward.
https://www.reddit.com/Bitcoin/comments/3s72d1/bitcoin_ng_or_how_cornell_researchers_think_a/cwusdf8
A longer ELI5 from ChrisThePigeon:
In a nutshell, bitcoin-ng is a solution to make block propagation happen in a more scalable way.
Bitcoin has the problem that blocks are bundled with transactions. This means that whenever a block is found, all nodes must madly rush to validate and propagate a bunch of transactions. Blocks are found approximately once every 10 minutes, yet they must be validated and propagated in seconds. This greatly inflates the computing and bandwidth requirements of a node, sort of like how the roads near a big sports stadium have to be built to handle a rush of tens of thousands of people at once, even though they are idle 99% of the time.
bitcoin-ng decouples blocks and transactions. There are now two kinds of block: key blocks and microblocks. Key blocks are like regular Bitcoin blocks, in that they're found once every approximately 10 minutes in a proof-of-work competition by miners. However, bitcoin-ng's key blocks contain no transactions - what they do is entitle the winning miner to be the exclusive producer of microblocks, at least until another miner finds the next key block. Microblocks can contain transactions, and they occur much faster (once every 10 seconds).
Since microblocks occur very regularly, they smooth out the flow of mined transactions, alleviating the pressure for nodes to quickly validate and propagate blocks. There are some details related to incentives; the whitepaper goes into those details.
TL;DR: bitcoin-ng is a way to scale Bitcoin by making blocks occur much faster (10 seconds instead of 10 minutes), but without the increased orphan risk.
https://www.reddit.com/bitcoinxt/comments/3rmnq3/what_do_you_guys_think_of_bitcoinng/cwpl33l
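A rough data-structure sketch of the key block / microblock split described above (field names are shorthand for illustration, not taken from the paper):

```python
# Sketch of Bitcoin-NG's two block types and how they interleave.
from dataclasses import dataclass
from typing import List

@dataclass
class KeyBlock:               # found by proof-of-work, ~every 10 minutes
    prev_hash: str
    nonce: int
    leader_pubkey: str        # the winner leads until the next key block

@dataclass
class MicroBlock:             # issued by the current leader, ~every 10 seconds
    prev_hash: str
    transactions: List[str]
    leader_signature: str     # no proof-of-work, only the leader's signature

# The chain alternates: a key block, then a run of microblocks signed by that
# key block's leader, then the next key block, and so on.
chain = [
    KeyBlock("genesis", 42, "miner-A"),
    MicroBlock("kb1", ["tx1", "tx2"], "sig-by-miner-A"),
    MicroBlock("mb1", ["tx3"], "sig-by-miner-A"),
    KeyBlock("mb2", 1337, "miner-B"),
]
```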
Blog post from the Cornell researchers proposing Bitcoin-NG:
http://hackingdistributed.com/2015/10/14/bitcoin-ng/
Follow-up blog post from the Cornell researchers, addressing many/most/all of the objections raised thus far:
http://hackingdistributed.com/2015/11/09/bitcoin-ng-followup/
Magazine article:
https://bitcoinmagazine.com/articles/bitcoin-ng-or-how-cornell-researchers-think-a-radical-redesign-can-solve-bitcoin-s-scaling-issues-1447108649
Whitepaper (including link to PDF):
http://arxiv.org/abs/1510.02037
My 02 satoshis:
This is a radically new architecture, obviously requiring a hard fork. And obviously it would be absolutely essential to thoroughly debate, analyze, and test the incentives and the game theory to make sure that this would actually work and be safe.
Also, many Redditors and Bitcoiners may have less-than-fond memories regarding some of the authors of this proposal: These are the same Cornell researchers who made some rather bold (some would say sensationalistic) claims about supposed vulnerabilities in Bitcoin a few years ago.
But I think that these researchers are being objective and fair and are doing their best to improve Bitcoin. And I think this is a very important scaling proposal which deserves serious attention, from users and from developers, along with other proposals currently being considered such as BIP 101 and other recent BIPs, IBLT (Invertible Bloom Lookup Tables), Thin Blocks, etc.
submitted by BeYourOwnBank to bitcoinxt [link] [comments]

Merged Mining: Analysis of Effects and Implications

Date: 2017-08-24
Author(s): Alexei Zamyatin, Edgar Weippl

Link to Paper


Abstract
Merged mining refers to the concept of mining more than one cryptocurrency without necessitating additional proof-of-work effort. Merged mining was introduced in 2011 as a bootstrapping mechanism for new cryptocurrencies and a countermeasure against the fragmentation of mining power across competing systems. Although merged mining has already been adopted by a number of cryptocurrencies, to this date little is known about the effects and implications.
In this thesis, we shed light on this topic area by performing a comprehensive analysis of merged mining in practice. As part of this analysis, we present a block attribution scheme for mining pools to assist in the evaluation of mining centralization. Our findings disclose that mining pools in merge-mined cryptocurrencies have operated at the edge of, and even beyond, the security guarantees offered by the underlying Nakamoto consensus for extended periods. We discuss the implications and security considerations for these cryptocurrencies and the mining ecosystem as a whole, and link our findings to the intended effects of merged mining.
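As a conceptual sketch of the merged-mining (AuxPoW) idea the thesis analyzes - heavily simplified, with placeholder targets and a made-up commitment format rather than the real AuxPoW structure - a single proof of work is checked against both chains' targets:

```python
# Simplified merged-mining sketch: the parent block's coinbase commits to the
# child chain's block, so one proof of work can be presented to both chains.
import hashlib

def sha256d(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

PARENT_TARGET = 2 ** 236   # harder (placeholder)
CHILD_TARGET = 2 ** 246    # easier (placeholder)

child_commitment = hashlib.sha256(b"child-chain-block-header").hexdigest()

for nonce in range(200_000):
    parent_header = f"parent-header|commit:{child_commitment}|nonce:{nonce}".encode()
    work = sha256d(parent_header)
    if work < CHILD_TARGET:
        # The same hash is simply checked against each chain's own target.
        print(f"nonce {nonce}: meets child target; "
              f"meets parent target too: {work < PARENT_TARGET}")
        break
```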

Bibliography
[1] Coinmarketcap. http://coinmarketcap.com/. Accessed 2017-09-28.
[2] P2pool. http://p2pool.org/. Accessed: 2017-05-10.
[3] M. Ali, J. Nelson, R. Shea, and M. J. Freedman. Blockstack: Design and implementation of a global naming system with blockchains. http://www.the-blockchain.com/docs/BlockstackDesignandImplementationofaGlobalNamingSystem.pdf, 2016. Accessed: 2016-03-29.
[4] G. Andersen. Comment in "faster blocks vs bigger blocks". https://bitcointalk.org/index.php?topic=673415.msg7658481#msg7658481, 2014. Accessed: 2017-05-10.
[5] G. Andersen. [bitcoin-dev] weak block thoughts... https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011157.html, 2015. Accessed: 2017-05-10.
[6] L. Anderson, R. Holz, A. Ponomarev, P. Rimba, and I. Weber. New kids on the block: an analysis of modern blockchains. http://arxiv.org/pdf/1606.06530.pdf, 2016. Accessed: 2016-07-04.
[7] E. Androulaki, S. Capkun, and G. O. Karame. Two bitcoins at the price of one? double-spending attacks on fast payments in bitcoin. In CCS, 2012.
[8] A. Back, M. Corallo, L. Dashjr, M. Friedenbach, G. Maxwell, A. Miller, A. Poelstra, J. Timón, and P. Wuille. Enabling blockchain innovations with pegged sidechains. http://newspaper23.com/ripped/2014/11/http-_____-___-_www___-blockstream___-com__-_sidechains.pdf, 2014. Accessed: 2017-09-28.
[9] A. Back et al. Hashcash - a denial of service counter-measure. http://www.hashcash.org/papers/hashcash.pdf, 2002. Accessed: 2017-09-28.
[10] S. Barber, X. Boyen, E. Shi, and E. Uzun. Bitter to better - how to make bitcoin a better currency. In Financial cryptography and data security, pages 399–414. Springer, 2012.
[11] J. Becker, D. Breuker, T. Heide, J. Holler, H. P. Rauer, and R. Böhme. Can we afford integrity by proof-of-work? scenarios inspired by the bitcoin currency. In WEIS. Springer, 2012.
[12] I. Bentov, R. Pass, and E. Shi. Snow white: Provably secure proofs of stake. https://eprint.iacr.org/2016/919.pdf, 2016. Accessed: 2017-09-28.
[13] Bitcoin Community. Bitcoin developer guide- transaction data. https://bitcoin.org/en/developer-guide#term-merkle-tree. Accessed: 2017-06-05.
[14] Bitcoin Community. Bitcoin protocol documentation - merkle trees. https://en.bitcoin.it/wiki/Protocol_documentation#Merkle_Trees. Accessed: 2017-06-05.
[15] Bitcoin community. Bitcoin protocol rules. https://en.bitcoin.it/wiki/Protocol_rules. Accessed: 2017-08-22.
[16] V. Buterin. Chain interoperability. Technical report, Tech. rep. 1. R3CEV, 2016.
[17] W. Dai. bmoney. http://www.weidai.com/bmoney.txt, 1998. Accessed: 2017-09-28.
[18] C. Decker and R. Wattenhofer. Information propagation in the bitcoin network. In Peer-to-Peer Computing (P2P), 2013 IEEE Thirteenth International Conference on, pages 1–10. IEEE, 2013.
[19] C. Decker and R. Wattenhofer. Bitcoin transaction malleability and mtgox. In Computer Security-ESORICS 2014, pages 313–326. Springer, 2014.
[20] Dogecoin community. Dogecoin reference implementation. https://github.com/dogecoin/
[27] A. Gervais, G. Karame, S. Capkun, and V. Capkun. Is bitcoin a decentralized currency? volume 12, pages 54–60, 2014.
[28] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun. On the security and performance of proof of work blockchains. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 3–16. ACM, 2016.
[29] I. Giechaskiel, C. Cremers, and K. B. Rasmussen. On bitcoin security in the presence of broken cryptographic primitives. In European Symposium on Research in Computer Security (ESORICS), September 2016.
[30] J. Göbel, H. P. Keeler, A. E. Krzesinski, and P. G. Taylor. Bitcoin blockchain dynamics: The selfish-mine strategy in the presence of propagation delay. Performance Evaluation, 104:23–41, 2016.
[31] E. Heilman, A. Kendler, A. Zohar, and S. Goldberg. Eclipse attacks on bitcoin’s peer-to-peer network. In 24th USENIX Security Symposium (USENIX Security 15), pages 129–144, 2015.
[32] Huntercoin developers. Huntercoin reference implementation. https://github.com/chronokings/huntercoin. Accessed: 2017-06-05.
[33] B. Jakobsson and A. Juels. Proofs of work and bread pudding protocols, Apr. 8 2008. US Patent 7,356,696; Accessed: 2017-06-05.
[34] M. Jakobsson and A. Juels. Proofs of work and bread pudding protocols. In Secure Information Networks, pages 258–272. Springer, 1999.
[35] A. Judmayer, N. Stifter, K. Krombholz, and E. Weippl. Blocks and chains: Introduction to bitcoin, cryptocurrencies, and their consensus mechanisms. Synthesis Lectures on Information Security, Privacy, & Trust, 9(1):1–123, 2017.
[36] A. Juels and J. G. Brainard. Client puzzles: A cryptographic countermeasure against connection depletion attacks. In NDSS, volume 99, pages 151–165, 1999.
[37] A. Juels and B. S. Kaliski Jr. Pors: Proofs of retrievability for large files. In Proceedings of the 14th ACM conference on Computer and communications security, pages 584–597. Acm, 2007.
[38] H. Kalodner, M. Carlsten, P. Ellenbogen, J. Bonneau, and A. Narayanan. An empirical study of namecoin and lessons for decentralized namespace design. In WEIS, 2015.
[39] G. O. Karame, E. Androulaki, and S. Capkun. Double-spending fast payments in bitcoin. In Proceedings of the 2012 ACM conference on Computer and communications security, pages 906–917. ACM, 2012.
[40] G. O. Karame, E. Androulaki, M. Roeschlin, A. Gervais, and S. Čapkun. Misbehavior in bitcoin: A study of double-spending and accountability. volume 18, page 2. ACM, 2015.
[41] A. Kiayias, A. Russell, B. David, and R. Oliynykov. Ouroboros: A provably secure proof-of-stake blockchain protocol. In Annual International Cryptology Conference, pages 357–388. Springer, 2017.
[42] S. King. Primecoin: Cryptocurrency with prime number proof-of-work. July 7th, 2013.
[43] T. Kluyver, B. Ragan-Kelley, F. Pérez, B. E. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. B. Hamrick, J. Grout, S. Corlay, et al. Jupyter notebooks-a publishing format for reproducible computational workflows. In ELPUB, pages 87–90, 2016.
[44] Lerner, Sergio D. Rootstock platform. http://www.the-blockchain.com/docs/Rootstock-WhitePaper-Overview.pdf. Accessed: 2017-06-05.
[45] Y. Lewenberg, Y. Bachrach, Y. Sompolinsky, A. Zohar, and J. S. Rosenschein. Bitcoin mining pools: A cooperative game theoretic analysis. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 919–927. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
[46] Litecoin community. Litecoin reference implementation. https://github.com/litecoin-project/litecoin. Accessed: 2017-09-28.
[47] I. Maven. Apache maven project, 2011.
[48] G. Maxwell. Comment in "[bitcoin-dev] weak block thoughts...". https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011198.html, 2016. Accessed: 2017-05-10.
[49] S. Meiklejohn, M. Pomarole, G. Jordan, K. Levchenko, D. McCoy, G. M. Voelker, and S. Savage. A fistful of bitcoins: characterizing payments among men with no names. In Proceedings of the 2013 conference on Internet measurement conference, pages 127–140. ACM, 2013.
[50] S. Micali. Algorand: The efficient and democratic ledger. http://arxiv.org/abs/1607.01341, 2016. Accessed: 2017-02-09.
[51] A. Miller, A. Juels, E. Shi, B. Parno, and J. Katz. Permacoin: Repurposing bitcoin work for data preservation. In Security and Privacy (SP), 2014 IEEE Symposium on, pages 475–490. IEEE, 2014.
[52] A. Miller, A. Kosba, J. Katz, and E. Shi. Nonoutsourceable scratch-off puzzles to discourage bitcoin mining coalitions. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 680–691. ACM, 2015.
[53] B. Momjian. PostgreSQL: introduction and concepts, volume 192. Addison-Wesley New York, 2001.
[54] Myriad core developers. Myriadcoin reference implementation. https://github.com/myriadcoin/myriadcoin. Accessed: 2017-06-05.
[55] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, Dec 2008. Accessed: 2017-09-28.
[56] S. Nakamoto. Merged mining specification. https://en.bitcoin.it/wiki/Merged_mining_specification, Apr 2011. Accessed: 2017-09-28.
[57] Namecoin Community. Merged mining. https://github.com/namecoin/wiki/blob/master/Merged-Mining.mediawiki#Goal_of_this_namecoin_change. Accessed: 2017-08-20.
[58] Namecoin community. Namecoin reference implementation. https://github.com/namecoin/namecoin. Accessed: 2017-09-28.
[59] A. Narayanan, J. Bonneau, E. Felten, A. Miller, and S. Goldfeder. Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press, 2016.
[60] K. Nayak, S. Kumar, A. Miller, and E. Shi. Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In 1st IEEE European Symposium on Security and Privacy, 2016. IEEE, 2016.
[61] K. J. O’Dwyer and D. Malone. Bitcoin mining and its energy footprint. 2014.
[62] R. Pass, L. Seeman, and A. Shelat. Analysis of the blockchain protocol in asynchronous networks. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 643–673. Springer, 2017.
[63] D. Pointcheval and J. Stern. Security arguments for digital signatures and blind signatures. Journal of cryptology, 13(3):361–396, 2000.
[64] Pseudonymous("TierNolan"). Decoupling transactions and pow. https://bitcointalk.org/index.php?topic=179598.0, 2013. Accessed: 2017-05-10.
[65] P. R. Rizun. Subchains: A technique to scale bitcoin and improve the user experience. Ledger, 1:38–52, 2016.
[66] K. Rosenbaum. Weak blocks - the good and the bad. http://popeller.io/index.php/2016/01/19/weak-blocks-the-good-and-the-bad/, 2016. Accessed: 2017-05-10.
[67] K. Rosenbaum and R. Russell. Iblt and weak block propagation performance. Scaling Bitcoin Hong Kong (6 December 2015), 2015.
[68] M. Rosenfeld. Analysis of bitcoin pooled mining reward systems. arXiv preprint arXiv:1112.4980, 2011.
[69] M. Rosenfeld. Analysis of hashrate-based double spending. http://arxiv.org/abs/1402.2009, 2014. Accessed: 2016-03-09.
[70] R. Russel. Weak block simulator for bitcoin. https://github.com/rustyrussell/weak-blocks, 2014. Accessed: 2017-05-10.
[71] A. Sapirshtein, Y. Sompolinsky, and A. Zohar. Optimal selfish mining strategies in bitcoin. In International Conference on Financial Cryptography and Data Security, pages 515–532. Springer, 2016.
[72] Sathoshi Nakamoto. Comment in "bitdns and generalizing bitcoin" bitcointalk thread. https://bitcointalk.org/index.php?topic=1790.msg28696#msg28696. Accessed: 2017-06-05.
[73] O. Schrijvers, J. Bonneau, D. Boneh, and T. Roughgarden. Incentive compatibility of bitcoin mining pool reward functions. In FC ’16: Proceedings of the the 20th International Conference on Financial Cryptography, February 2016.
[74] B. Sengupta, S. Bag, S. Ruj, and K. Sakurai. Retricoin: Bitcoin based on compact proofs of retrievability. In Proceedings of the 17th International Conference on Distributed Computing and Networking, page 14. ACM, 2016.
[75] N. Szabo. Bit gold. http://unenumerated.blogspot.co.at/2005/12/bit-gold.html, 2005. Accessed: 2017-09-28.
[76] M. B. Taylor. Bitcoin and the age of bespoke silicon. In Proceedings of the 2013 International Conference on Compilers, Architectures and Synthesis for Embedded Systems, page 16. IEEE Press, 2013.
[77] Unitus developers. Unitus reference implementation. https://github.com/unitusdev/unitus. Accessed: 2017-08-22.
[78] M. Vukolić. The quest for scalable blockchain fabric: Proof-of-work vs. bft replication. In International Workshop on Open Problems in Network Security, pages 112–125. Springer, 2015.
[79] P. Webb, D. Syer, J. Long, S. Nicoll, R. Winch, A. Wilkinson, M. Overdijk, C. Dupuis, and S. Deleuze. Spring boot reference guide. Technical report, 2013-2016.
[80] A. Zamyatin. Name-squatting in namecoin. (unpublished BSc thesis, Vienna University of Technology), 2015.
submitted by dj-gutz to myrXiv [link] [comments]

The Mike Hearn Show: Season Finale (and Bitcoin Classic: Series Premiere)

This post debunks Mike Hearn's conspiracy theories about Blockstream in his farewell post and points out issues with the behavior of the Bitcoin Classic hard fork and the sketchy tactics of its advocates.
I used to be torn on how to judge Mike Hearn. On the one hand, he has done some good work with BitcoinJ, Lighthouse, etc. Certainly his choice of bloom filters has had a net negative effect on the privacy of SPV users, but all in all his software works as advertised.* On the other hand, he has single-handedly advocated for some of the most alarming behavior changes in the Bitcoin network to date (e.g. redlists, coinbase reallocation, BIP101). Not to mention that his advocacy over the past year has degraded from any semblance of professionalism into an adversarial us-vs-them propaganda train. I do not believe his long history with the Bitcoin community justifies this adversarial attitude.
As a side note, this post should not be taken as unqualified support for Bitcoin Core. Certainly the dev team is made up of humans, and like all humans they make mistakes (e.g. the March 2013 fork). Some have even engaged in arguably unprofessional behavior, but I have not yet witnessed any explicitly malicious activity from their camp (q). If evidence to the contrary can be provided, please share it. Thankfully the development of Bitcoin Core happens more or less completely out in the open; anyone can audit and monitor the goings-on. I personally check the repo at least once a day to see what work is being done. I believe that the regular committers are genuinely interested in the overall well-being of the Bitcoin network, that they work towards the common goal of maintaining and improving Core, and that they do their best to juggle the competing interests of the community that depends on them. That is not to say that they are The Only Ones; for the time being they have stepped up to the plate to do the heavy lifting. Until that changes in some way, they have my support.
The hard line that some of the developers have drawn with regard to the block size has caused a serious rift, and this write-up is a direct response to the oft-repeated accusations made by Mike Hearn and his supporters about members of the core development team. I have no affiliation or connection with Blockstream; however, I have met a handful of the core developers, both affiliated and unaffiliated with Blockstream.
Mike opens his farewell address with his pedigree to prove his opinion's worth. He masterfully washes over the mountain of work put into improving Bitcoin Core over the years by the "small blockians" to paint the picture that Blockstream is stonewalling the development of Bitcoin. The folks who signed Greg's scalability roadmap have done some of the most important, unsung work in Bitcoin: performance improvements, privacy enhancements, increased reliability, better sync times, mempool management, bandwidth reductions and more. All of those things are thanks to the core devs and the research community (e.g. Christian Decker), and many of them will lead to a smoother transition to larger blocks (e.g. libsecp256k1).(1) While ignoring this previous work and harping exclusively on the block size, Mike accuses the very people who have spent countless hours working on the protocol of trying to turn Bitcoin into something useless, because they remain conservative on a highly contentious issue that has tangible effects on network topology.
The nature of this accusation is characteristic of Mike's attitude over the past year, which marked a shift in the block size debate from a technical argument to a personal one (in tandem with DDoS attacks, censorship in /r/Bitcoin, and general toxicity from both sides). For example, Mike claimed that sidechains constitute a conflict of interest, as Blockstream employees are "strongly incentivized to ensure [bitcoin] works poorly and never improves", despite thousands of commits to the contrary. Many of these commits are top-down rewrites of low-level Bitcoin functionality, not chump change by any means, and I am not just "counting commits" here. Anyway, Blockstream's current client base consists of Bitcoin exchanges whose future hinges on the widespread adoption of Bitcoin. The more people use Bitcoin, the more demand there will be for sidechains to service the Bitcoin economy. Additionally, one could argue that if some sidechain gained significant popularity (hundreds of thousands of users), larger blocks would be necessary to handle users depositing and withdrawing funds into and out of the sidechain. Perhaps if they were miners and core devs at the same time, a conflict of interest on small blocks would be a more substantive accusation (create artificial scarcity to increase tx fees). The rationale behind pricing out the Bitcoin "base" via capacity constraints to increase their business prospects as a sidechain consultancy is contrived and illogical. If you believe otherwise, I implore you to share a detailed scenario in your reply so I can see if I am missing something.
Okay, so back to it. Mike made the right move when Core would not change its position: he forked Core and gave the community XT. The choice was there, and most miners took a pass. Clearly there was no consensus on Mike's proposed scaling roadmap or on how big blocks should be rolled out. And even though XT was a failure (mainly because of massive untested capacity increases, which were opposed by some of the larger pools whose support was required to activate the 75% fork), it has inspired a wave of implementation competition. It should be noted that the censorship and attacks by members of /r/Bitcoin are completely unacceptable; there is no excuse for such behavior. While theymos is entitled to run his subreddit as he sees fit, if he continues to alienate users there may be a point of mass exodus following some significant event in the community that he tries to censor. As for the DDoS attackers, they should be ashamed of themselves; in the meantime it is recommended that nodes running alternative implementations mask their user agents.
Although Mike has left the building, his alarmist mindset on the block size debate lives on through Bitcoin Classic, an implementation which is using a more subtle approach to inspire adoption, with jtoomim cozying up to miners to get their support while appealing to the masses with a call for adherence to Satoshi's "original vision for Bitcoin." That said, it is not clear that he is competent enough to lead the charge on the maintenance and improvement of the Bitcoin protocol. That leaves most of the heavy lifting up to Gavin, as Jeff has historically done very little actual work for Core. We are thus in a potentially more precarious situation than we were with XT, as some Chinese miners are apparently "on board" for a hard-fork block size increase. Jtoomim has expressed a willingness to accept an exceptionally low (60 or 66%) consensus threshold to activate the hard fork if necessary. Why? Because of the lost "opportunity cost" of the threshold not being reached.(c) With variance, my guess is that a lucky 55% of the hashrate could activate that 60% threshold; that is basically two Chinese miners. I don't mean to attack him personally, but he is willing to go down a path that requires the support of only two major Chinese mining pools to activate his hard fork. As a side effect of the latency imposed by the GFW, a block size increase might increase the orphan rate outside of the GFW, profiting the Chinese pools. With a 60% threshold there is no way for miners outside of China to block that hard fork.
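As a quick sanity check on that variance point, here is a minimal back-of-envelope sketch in Python under a plain binomial model: each block is independently won by the coalition with probability equal to its hashrate share, and we ask how often a 55% coalition crosses a 60% count within a single voting window. The window sizes are assumptions for illustration only, not Classic's actual parameters, and real mining luck is not perfectly independent.

    from math import ceil, comb

    def prob_reaching_threshold(p: float, threshold: float, window: int) -> float:
        """P(X >= ceil(threshold * window)) for X ~ Binomial(window, p)."""
        need = ceil(threshold * window)
        return sum(comb(window, k) * p**k * (1 - p)**(window - k)
                   for k in range(need, window + 1))

    # A coalition finding 55% of blocks, trying to hit a 60% count by luck:
    for window in (100, 500, 1000):
        print(window, round(prob_reaching_threshold(0.55, 0.60, window), 4))

The shorter the voting window, the more often plain variance pushes 55% of the hashrate over a 60% bar, which is the gist of the concern above.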
Compounding the popularity of this implementation, the efforts of Mike, Gavin, and Jeff have further blinded many within the community to the mountain of effort that the core devs have put in. And it seems to be working, as they are beginning to successfully ostracize the core devs beyond the network of "true big-block believers." It appears that Chinese miners are getting tired of the debate (and, with it, of Core) and may shift to another implementation over the issue.(d) Some are going around to mining pools and trying to undermine Core's position in the soft- vs. hard-fork debate. These private appeals to the miner community are a concern because there is no way to know whether bad information is being passed on with the intent to disrupt Core's consensus-based approach to development in favor of an alternative implementation controlled by those appealing directly to miners (i.e. a benevolent-dictator model). If the core team is reading this: you need to get out there and start pushing your agenda so the community has a better understanding of what you all do every day and how important the work is. Get some fancy videos up that show the effects of a block size increase, and work on reading materials that are easy for non-technically-minded folks to identify with and get behind.
The soft fork debate really highlights the disingenuousness of some of these actors. Generally speaking, soft forks are easier on network participants who do not regularly keep up with the network's software updates, or who have forked the code for personal use and are unable to upgrade in time, while hard forks require timely software upgrades if the user hopes to maintain consensus after the fork. The merits of that argument are heavily debated. More concerning, however, is the fact that hard forks require central planning and arguably increase the power developers have over changes to the protocol.(2) In contrast, the "signal of readiness" behavior of soft forks allows the network to update without any hardcoded flag days or developer oversight. Issues with hard forks are further compounded by activation thresholds: soft forks generally require 95% consensus, while Bitcoin Classic only calls for 60-75% consensus, exposing network users to a greater risk of competing chains after the fork. Mike didn't want to give the Chinese any more power, but now the post-XT fallout has pushed the Chinese miners right into the Bitcoin Classic driver's seat.
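To make the "signal of readiness" point concrete, here is a stripped-down sketch of threshold-based activation: count signalling blocks over a rolling window and lock in once the threshold is met. This mirrors the versionbits-style approach in spirit only; the window and threshold values below are placeholders, not the parameters of any particular deployment.

    import random
    from collections import deque

    def lock_in_height(signals, window=2016, threshold=0.95):
        """Return the first height at which `threshold` of the last `window`
        blocks signalled readiness, or None if that never happens."""
        recent = deque(maxlen=window)
        for height, signalled in enumerate(signals):
            recent.append(signalled)
            if len(recent) == window and sum(recent) >= threshold * window:
                return height
        return None

    # With 90% of miners signalling, a 95% bar is effectively never met;
    # with 96% signalling it locks in shortly after the first full window.
    random.seed(1)
    for p_signal in (0.90, 0.96):
        sims = (random.random() < p_signal for _ in range(20000))
        print(p_signal, lock_in_height(sims))

No flag day or privileged actor is needed: the network simply observes what miners are already producing, which is the property the 95% soft-fork convention leans on.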
While a net split did happen briefly during the BIP66 soft fork, imagine that scenario amplified by miners who do not agree to hard fork changes while controlling 25-40% of the network's hashing power: two actively mined chains with competing interests, the Doomsday Scenario. With a 5% miner holdout on a soft fork, the holdout fork will constantly reorg and malicious transactions will rarely have more than one or two confirmations.(b) During a soft fork, nodes can protect themselves from double spends by waiting for extra confirmations when the node alerts the user that an ANYONECANSPEND transaction has been seen. Thus, soft forks give Bitcoin users more control over their software (they can choose to treat a soft fork as a soft fork or as a hard fork), which allows for greater flexibility in upgrade plans for those actively maintaining nodes and other network-critical software.(2) Advocating for low-threshold hard forks is a step in the wrong direction if we are trying to limit the "central planning" of any particular implementation. However, I do not believe that is the main concern of the Bitcoin Classic devs.
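For rough support of the "one or two confirmations" point: treating the holdout's lead as the usual biased random walk (as in the whitepaper and Rosenfeld's double-spend analysis), the chance that a q-fraction holdout chain ever gets z blocks ahead of the enforcing majority is about (q/p)^z with p = 1 - q. This ignores propagation effects entirely; the orders of magnitude are what matter here.

    def holdout_lead_probability(q: float, z: int) -> float:
        """P(a q-hashrate holdout chain ever leads the (1-q) majority by z blocks)."""
        p = 1.0 - q
        return (q / p) ** z

    for z in (1, 2, 3, 6):
        print(z, f"{holdout_lead_probability(0.05, z):.6%}")

With a 5% holdout, a two-block lead happens well under 1% of the time, which is why waiting a couple of extra confirmations around a soft fork is usually enough.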
To switch gears a bit, Mike is ironically concerned that China "controls" Bitcoin, yet he wanted to implement a block size increase that would only increase their relative control (via increased orphans elsewhere). Until the p2p wire protocol is significantly improved (IBLT, etc.), there is very little room (if any at all) to raise the block size without significantly increasing orphan risk. This can be seen in jtoomim's testnet data for blocks that passed through the normal p2p network rather than the relay network.(3) In the meantime this will only get worse if no one picks up the slack on the relay network that Matt Corallo is no longer maintaining.(4)
Centralization is bad regardless of the block size, but Mike tries to conflate broader centralization issues with the Blockstream block size sideshow for dramatic effect. In retrospect, it would appear that the initial lack of cooperation on a block size increase actually staved off increases in orphan risk. Unfortunately, this centralization metric will likely get worse with the cooperation of Chinese miners and Bitcoin Classic if major strides to reduce orphan rates are not made.
Mike also manages to link to a post from the ProHashing guy about forever-stuck transactions, which have been shown to generally be the result of poorly maintained or improperly implemented wallet software.(6) Ultimately Mike wants fees to be fixed, despite the fact that you can't enforce fixed fees in a system that is not centrally planned. Miners could decide to raise their minimum fees even when blocks are larger than 1 MB, especially when blocks become too big to reliably propagate across the network without being orphaned. What is the marginal cost of a transaction that increases orphan risk by some percentage? That is a question being explored with flexcaps. Even with larger blocks, if miners outside the GFW fear orphans they will not create bigger blocks without a decent incentive; in other words, even with a larger block size you might still end up with variable fees. Regardless, it is generally understood that variable fees are not preferred from a UX standpoint, but developers of Bitcoin software do not have the luxury of enforcing specific fees beyond the basic defaults hardcoded to prevent cheap DoS attacks. We must expose the user to just enough information that they can make an informed decision without being overwhelmed. Hard? Yes. Impossible? No.
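To make that marginal-cost question slightly more concrete, here is a minimal sketch of the usual back-of-envelope orphan-risk model: block arrivals are Poisson with a 600-second mean, so a block that takes t seconds to propagate is orphaned with probability roughly 1 - e^(-t/600). Every number below (the per-kB propagation delay, the 25 BTC subsidy, the base delays) is an assumption for illustration, not a measurement.

    import math

    BLOCK_INTERVAL = 600.0   # seconds, protocol target
    BLOCK_REWARD = 25.0      # BTC subsidy assumed for the example
    DELAY_PER_KB = 0.010     # assumed marginal propagation delay per extra kB (seconds)

    def orphan_probability(delay_s: float) -> float:
        """P(a competing block is found while ours is still propagating)."""
        return 1.0 - math.exp(-delay_s / BLOCK_INTERVAL)

    def breakeven_fee_per_kb(base_delay_s: float) -> float:
        """Fee (BTC/kB) that just covers the extra orphan risk of one more kB."""
        extra_risk = (orphan_probability(base_delay_s + DELAY_PER_KB)
                      - orphan_probability(base_delay_s))
        return extra_risk * BLOCK_REWARD

    for base in (2.0, 10.0, 30.0):   # how long the block already takes to reach the network
        print(f"{base:>4.0f}s base delay -> ~{breakeven_fee_per_kb(base):.8f} BTC/kB to break even")

The particular numbers matter less than the shape: the slower a miner's blocks already propagate (e.g. across the GFW), the higher the fee they need before adding more bytes, which is exactly why a bigger block size limit alone does not guarantee cheap transactions.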
Shifting gears, Mike states that current development progress via segwit is an empty ploy, despite the fact that segwit comes with not only a marginal capacity increase but also fixes for major malleability vectors, the ability to prune witness data from historical blocks, and a bunch of other fun stuff. It is a huge win for unconfirmed transactions (which Mike should love). Even if segwit requires non-negligible changes to wallet software and Bitcoin Core (~500 LoC), it buys us time to improve block relay (IBLT, weak blocks) so we can start raising the block size without fear of an increased orphan rate. Certainly we can rush to increase the block size now and further exacerbate the China problem, or we can focus on the "long play" and limit the negative externalities.
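For a sense of how "marginal" the segwit capacity increase is, here is a small sketch using the block-weight rule as it was later specified in BIP141 (weight = 3*base_size + total_size, capped at 4,000,000). The witness share of a typical transaction mix varies by script type, so the percentages below are assumptions rather than measurements.

    MAX_WEIGHT = 4_000_000

    def effective_block_size_mb(witness_fraction: float) -> float:
        """Max serialized block size (MB) if `witness_fraction` of every byte is witness data.

        base = (1 - w) * total, so weight = 3*(1 - w)*total + total = (4 - 3w)*total.
        """
        return MAX_WEIGHT / (4 - 3 * witness_fraction) / 1e6

    for w in (0.0, 0.3, 0.6):
        print(f"witness share {w:.0%}: ~{effective_block_size_mb(w):.2f} MB per block")

With no witness data you are back at the old 1 MB; with a fairly witness-heavy mix you land somewhere under 2 MB, which is why the capacity bump is real but modest.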
And does segwit help the Lightning Network? Yes. Is that something that indicates a Blockstream conspiracy? No. Comically, the big blockians used to criticize Blockstream for advocating for LN when no one was working on it, but now that it is actively being developed, the tune has changed and everything Blockstream does is a conspiracy to push Bitcoin's future as a dystopic LN-powered settlement network. Is LN "the answer"? Obviously not; most don't actually think that. How it actually works in practice remains to be seen, and there could be unforeseen emergent characteristics that make it less useful for the average user than originally thought. But it is a tool that should be developed in unison with other scaling measures, if only for its usefulness for instant transactions and micropayments.
Regardless, the fundamental divide rests on ideological differences that we all know well. Mike is fine with a miner-only validation model for nodes and is willing to accept some miner centralization so long as he gets the capacity increases necessary to satisfy his personal expectations for the immediate future of Bitcoin. Greg and co. believe that a distributed full-node landscape helps maintain a balance of decentralization in the face of the miner centralization threat. For example, if you have 10 miners who are the only sources of blockchain data, then you run the risk of undetectable censorship, prolific sybil attacks, and no mechanism for individuals to validate the network without trusting a third party. As an analogy, take the Tor network: you use it with an expectation of privacy while understanding that the multi-hop nature of the routing increases latency. Certainly you could improve latency by removing a hop or two, but with it you lose some privacy. Does Tor's high latency make it useless? Maybe for watching Netflix, but not for submitting leaked documents to some newspaper. I believe this is the philosophy held by most of the core development team.
Mike does not believe that the Bitcoin network should cater to this philosophy, and he treats any activity which stunts the growth of on-chain transactions as a direct attack on the protocol. Ultimately, however, I believe Greg and co. also want Bitcoin to scale on-chain transactions as much as possible; they simply believe that for Bitcoin to increase its capacity while adhering to acceptable levels of decentralization, much work needs to be done. It's not a matter of if the block size will be increased, but when. Mike has confused this adherence to strong principles of decentralization with disingenuousness and a cover-up for a dystopic future of Bitcoin where sidechains run wild with financial institutions paying $40 per transaction. Again, this does not make any sense to me. If banks are spending millions to co-opt this network, what advantage does a decentralized node landscape offer them?
There are a few roads the community can take now. One is to delay a block size increase while improvements to the protocol are made (with the understanding that some users may have to wait a few blocks to have their transactions included, that fees will depend on transaction volume, and that transactions under $1 may temporarily be cost-ineffective), so that when we do increase the block size, the orphan rate and node drop-off are insignificant. Another is an immediate large block size increase, which possibly leads to a future Bitcoin that looks nothing like it does today: low numbers of validating nodes, heavy trust in centralized network explorers, and thus a network more vulnerable to government coercion and general attack. Certainly there are smaller block size increases which might not be as immediately devastating, and perhaps that is the middle ground which needs to be trodden to appease those who are emotionally invested in a bigger block size; combined with segwit, however, max block sizes could reach unacceptable levels. There are other scenarios which might play out with competing chains and so on, but in those futures Bitcoin has effectively failed.
As with any technology that requires maintenance and human interaction, Bitcoin will require politicking for decision making. Up until now that has occurred via a "vote by download" for software which implements some change to the protocol, and I believe this will continue to be the most robust option available to us. Now that there is competition, the Bitcoin Core community can properly advocate for the changes to the protocol that it sees fit without being accused of co-opting the development of Bitcoin; an ironic outcome to the situation at hand. If users want their bitcoins to remain valuable, they must actively determine which developers are most competent and have their best interests at heart. So far the core dev community has years of substantial and successful contributions under its belt, while the alternative implementations have a smattering of developers who have not yet publicly proven (besides perhaps Gavin, although his early mistakes with block size estimates are concerning) that they have the skills and endurance necessary to maintain a full node implementation. Perhaps now it is time that we focus on the personalities to whom many want to entrust Bitcoin's future. Let us see if they can improve the speed at which signatures are validated by 7x, or devise privacy-preserving protocols like Confidential Transactions, or figure out ways to improve traversal times across a merkle tree. Can they implement HD wallet functionality without any coin-crushing bugs? Can they successfully modularize their implementation without breaking everything? If so, let's welcome them with open arms.
But Mike is at R3 now, which seems like a better fit for him ideologically. He can govern the rules with relative impunity, and there is no huge community of open source developers, researchers, and enthusiasts to disagree with him. I will admit his posts are very convincing at first blush, but ultimately they are nothing more than a one-sided appeal to those in the community who have unrealistic or incomplete understandings of the technical challenges faced by developers maintaining a consensus-critical, validation-heavy, distributed system that operates within an adversarial environment. Mike always enjoyed attacking Blockstream, but when you survey his past behavior it becomes clear that his motives were not always pure. Why else would you leave with such a nasty, public farewell?
To all the XT'ers, btc'ers and so on, I only ask that you show some compassion when you critique the work of the Bitcoin Core devs. We understand you have a competing vision for the scaling of Bitcoin over the next few years. They want Bitcoin to scale too; you just disagree on how and when it should be done. Vilifying and attacking the developers only further divides the community and scares away potential future talent who might want to further the Bitcoin cause. Unless you can replace the folks doing all this hard work on the protocol, or can pay someone equally competent, please think twice before you say something nasty.
As for Mike, I wish you the best at R3 and hope that you can one day return to the Bitcoin community with a more open mind. It must hurt having your software out there being used by so many while your voice is snuffed out. Hopefully you can return once many of the hard problems are solved (e.g. reduced propagation delays, better access to cheap bandwidth) and the road to safe block size increases has been paved.
(*) https://eprint.iacr.org/2014/763.pdf
(q) https://github.com/bitcoinclassic/bitcoinclassic/pull/6
(b) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012026.html
(c) https://github.com/bitcoinclassic/bitcoinclassic/pull/1#issuecomment-170299027
(d) http://toom.im/jameshilliard_classic_PR_1.html
(0) http://bitcoinstats.com/irc/bitcoin-dev/logs/2016/01/06
(1) https://github.com/bitcoin/bitcoin/graphs/contributors
(2) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012014.html
(3) https://toom.im/blocktime (beware of heavy website)
(4) https://bitcointalk.org/index.php?topic=766190.msg13510513#msg13510513
(5) https://news.ycombinator.com/item?id=10774773
(6) http://rusty.ozlabs.org/?p=573
edit, fixed some things.
edit 2, tried to clarify some more things and remove some personal bias thanks to astro
submitted by citboins to Bitcoin [link] [comments]

