BTC plus BCC showing strong combined gains as market attention shifts to next controversy

The combined price of bitcoin (BTC) and bitcoin cash (BCC) has risen about 75% since the formerly single Bitcoin split into two chains on August 1. Events thus far appear to support my suggestion that the split could well prove a net positive for those who held bitcoin before it.

In my August 5 article, “Descendants with modifications: Bitcoin’s new and possibly beneficial evolutionary test,” I argued that a marginal shift away from talk and toward innovative action could in itself prove a net positive. To at least some degree, an actual split would enable claims that one approach or another was superior to be replaced with practical reality checks across the board.

However, I also emphasized that the split was still a poor “test” from a scientific standpoint. Not only do the chains differ in headline qualities—one is activating SegWit and the other has revised its block size limit to 8MB (already two major variables in themselves)—but also in a whole list of other confounders. For example, the two chains differ in associated development teams and testing and review practices, which leads to contrasting levels of market confidence in the different code bases as a whole. This reflects far more than just the headline contrasts. In addition, the two started out with widely differing hashing power levels and coin prices. Nevertheless, the split should provide some way to proceed with implementing respective visions of how innovation should progress, and to that extent could well beat a continuation of “unmitigated talk.”

Since that article appeared, the BTC price first rallied dramatically while the BCC price languished. One-megabyte block size limit enthusiasts on social media sought to out-compete each other in boasting about how decisively they would “dump their bcash” (using a popular term of insult for bitcoin cash) as soon as they could. When trading in bitcoin cash finally came up to speed on exchanges, such commenters promised, BCC would crash as dumping of the latest new “altcoin/shitcoin” began in earnest. Since then, one by one, various cryptocurrency exchanges and wallets have announced support for BCC, with more to come.

In the event, the BTC chain still has its own dramas to live through. A stark reminder of this came with a blog post by BitPay instructing users in how to move to software that implements phase 2 of the “SegWit2x” or “New York” agreement, which calls for a revision of the BTC chain’s block size limit to 2MB in November, following phase 1, its recent activation of SegWit.

Full-fledged American-style “outrage politics” ensued against BitPay, as opponents of the block-size limit revision portion of the SegWit2x framework accused BitPay of fraud for not clarifying that moving to the BTC1 software it was recommending to its customers would split them off from the portion of the BTC network running Bitcoin Core software. Bitcoin Core software has merged code that disconnects Bitcoin Core nodes from BTC1 nodes. Few to no active Bitcoin Core contributors support the BTC1 project, while a significant number of miners and bitcoin companies have expressed support for the SegWit2x agreement.

It is unclear how this will be resolved. Intransigence and belligerence reign on social media between the vocal partisans of the two “sides.” Another chain split is possible, once again over exactly the same issue: the specific height of the block size limit. This next split, however, might be less clean than the last, which did have critical user protections in place, notably replay protection. A scenario in which BTC1 and Bitcoin Core navigate into an unclean chain split has the potential to leave Bitcoin Cash looking like the more stable option for the time being. With a limit revision already behind it for now, it could end up sitting on the hill overlooking the next BTC chain battle with a detached attitude of: “Bitcoin Cash users unaffected.”

After the BitPay post brought this next controversy facing the BTC chain back onto the front burner of the market's attention, the BCC price promptly more than tripled from $300 to briefly peak at over $1,000, while the BTC price began to soften slightly.

The BTC + BCC total price, however, has continued to rise steadily for the time being (see chart). As of this writing, the combination remains more than $2,000 higher than before the chain split.

[Figure: BTC/BCC speciation chart, showing the combined BTC + BCC price since the split]

Looking ahead, much will depend on the interplay between hashing power, mining difficulty, and price. The dynamics of differences between the mining difficulty adjustments on the two chains could have some dramatic effects as mining power shifts in pursuit of profitability, resulting in follow-on differences in block discovery times.
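To make that interplay concrete, here is a minimal Python sketch, illustrative only and not consensus code. The constants are Bitcoin's well-known 600-second target spacing and 2016-block retarget interval; the functions themselves are simplified stand-ins for the real rules.

```python
# Illustrative sketch (not consensus code): how a hash-power shift
# stretches block times until the ordinary difficulty retarget catches up.
TARGET_SPACING = 600        # seconds per block targeted by the protocol
RETARGET_INTERVAL = 2016    # blocks between ordinary difficulty adjustments

def expected_block_time(difficulty_share, hashrate_share):
    """Seconds per block when a chain retains only `hashrate_share` of the
    hash power its current difficulty (`difficulty_share`) assumed."""
    return TARGET_SPACING * difficulty_share / hashrate_share

def next_difficulty(old_difficulty, actual_period_seconds):
    """Ordinary retarget rule: scale difficulty by the ratio of the targeted
    period length to the actual one, clamped to a factor of four."""
    expected = TARGET_SPACING * RETARGET_INTERVAL
    ratio = max(0.25, min(4.0, expected / actual_period_seconds))
    return old_difficulty * ratio

# A chain keeping only 10% of the hash power its difficulty assumed
# produces blocks ten times more slowly until the retarget catches up.
slow = expected_block_time(1.0, 0.10)   # 6000 seconds, i.e. 100 minutes
```

The clamp is why follow-on effects can be dramatic: a large, sudden hash-power exodus can take several very slow retarget periods to work through, during which block discovery times on the losing chain stay painfully long.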

As I concluded my discussion on August 5, so I will conclude this one: “The complex sequence of outcomes to ensue must now be seen in practice and over time.”

Descendants with modifications: Bitcoin’s new and possibly beneficial evolutionary test

Source: Charles Darwin. 1845. "Journal of researches into the geology and natural history of the various countries visited by H.M.S. Beagle."

The BTC/BCC chain split of 1 August 2017 could add value for holders of the former bitcoin during any period in which the summed value of each coin exceeds the value that the former single coin would have had. Holders of BTC before the split came to hold equal amounts of BTC and BCC after the split, prior to any subsequent individual trading.

Zero “new bitcoins” have been created from a monetary-inflation standpoint. Control of any existing bitcoin unit before the split gave rise to corresponding control of one BTC and one BCC unit after the split. Since this reflected the precise and complete pre-existing constellation of unit control, with no alteration, for each and all former holders of the single-chain BTC, no redistributive Cantillon effects follow.

This split looks like a better-case scenario, at least “less bad,” than several of the other fork types proposed and discussed over the past months.

At this early phase, bitcoin cash (BCC) trading remains nascent, as exchanges and wallet services work to serve customers in a post-split environment. Potential traders remain limited because many exchanges do not yet offer BCC account crediting or have temporarily disabled relevant withdrawal and deposit options.

Various partisans have already claimed that as soon as normalized trading is achieved the BCC price will either collapse or rally, or some sequence of both. Pre-split futures and post-split exchange data (such as they are) have thus far shown an approximately $250–500 range for BCC. The bitcoin (BTC) price hardly reacted from its recent pre-split range of approximately $2,600–2,800. Either way, relatively wide swings in the BCC price are likely to be the rule until at least some time after normalized trading options come on line and hashrates and difficulty levels settle out to a greater degree.

The summed prices of BTC and BCC have mostly exceeded the former BTC all-time high, hinting at possible net value added from the split. This could be illusory due to the poor trading environment, but this sum could also have been lower instead, particularly if viewed as a network, mining, and trading disruption: the BCC price range could have started lower than it did, the BTC price could have fallen unmistakably, which it did not—or both.

Looking ahead, hash rates and difficulty adjustments are other key points to watch. Although the BCC chain protocol revisions did add certain more flexible mining difficulty adjustment methods, it remains to be seen whether these will be sufficient to prevent very long block times over the coming weeks, which, amid price declines, could further reduce mining profitability on the BCC chain for some time. The future allocation of hash power, the pace of difficulty adjustment, and price all remain to be seen.
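One of those added methods is an "emergency" downward adjustment. As a rough sketch only: the real rule operates on median-time-past and its exact parameters may differ from what is shown here, so treat the 6-block/12-hour trigger and 20% cut below as hedged assumptions about its general shape.

```python
# Hedged sketch of an emergency difficulty adjustment (EDA) of the kind
# the BCC chain added. Real consensus code uses median-time-past and
# precise activation conditions; this only illustrates the mechanism.
EDA_TRIGGER_SECONDS = 12 * 3600   # assumed trigger span for six blocks

def eda_adjusted_difficulty(difficulty, six_block_span_seconds):
    """Cut difficulty by 20% when the last six blocks took over 12 hours."""
    if six_block_span_seconds > EDA_TRIGGER_SECONDS:
        return difficulty * 0.8
    return difficulty
```

The point of such a rule is to let a minority-hash-power chain ratchet its difficulty down in hours rather than waiting out a full 2016-block retarget period at crippled block production rates.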

Separate from these temporary and news-oriented issues, in the balance of this article, I will interpret the chain split in more fundamental terms.

Potential net value added from innovation and experience effects

If a net value gain from the split is actually present and does persist, such an outcome would not be entirely mysterious. Innovation proceeds through action far more than talk. SegWit activation (BTC chain) and a substantial block size limit increase (BCC chain), respectively, both promise to partially replace months and years of talk with action and experience, which is, in general, bullish for innovation.

In contrast to action, speculation and modeling are far more subject to partiality, bias, and social and financial pressures in the selection, construction, and interpretation of models. Action can supplement or partly displace hot air. What will happen with SegWit? Watch and learn. What will happen on a live network with a higher protocol block size limit? Watch and learn. This opportunity for the addition of progressive sequences of reality checks on the respective chains might be positive in itself. The “test” this represents is highly imperfect, as discussed below, but is still probably better than unmitigated talk.

The misleading conventional understanding of innovation is that practice follows theory; that “basic science” comes first and then begets technological innovation. The historically far more common process of innovation has very often followed the opposite pattern. Some fundamental innovation attempts occasionally succeed (mostly they fail). After the rare successes, new theory and research come along to try to explain and formalize what entrepreneurs and tinkerers had already done (after the best pontifical efforts of old theory to prove that what had been done could not have been).

Descendants with modifications

The minimum requirement for a process to be called evolutionary is descent with modification. Thus far, Bitcoin has gradually evolved as a single chain with modifications to its software. This split, in contrast, is Bitcoin’s first speciation event. Both BTC and BCC build on and carry forward the Bitcoin chain in a valid unbroken lineage of blocks tracing back to the genesis block.

The best chain in Bitcoin is defined as a chain of valid blocks with the greatest accumulated proof-of-work difficulty. In this model, the validity test comes first, followed by the total difficulty assessment. The software variants behind each chain have recently implemented certain substantial rule changes that are not now recognized as valid on the other chain. The BTC chain, for example, does not recognize the BCC chain’s modified block size limit, and the BCC chain omits SegWit, which recently activated on the BTC chain. Bitcoin block history diverged after block #478558, which is the last “common ancestor” that the two chains share.
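That two-step rule, validity first and accumulated work second, can be sketched in a few lines of Python. The block and chain representations here are hypothetical stand-ins, not an actual node's data structures.

```python
# Illustrative sketch of the best-chain rule: invalid chains are excluded
# outright, and accumulated proof-of-work decides among the valid ones.
# Blocks are modeled as dicts with a "work" field (hypothetical model).

def best_chain(candidate_chains, is_valid_block):
    # Step 1: the validity test comes first; a chain containing any
    # invalid block is not a candidate at all, regardless of total work.
    valid = [chain for chain in candidate_chains
             if all(is_valid_block(block) for block in chain)]
    # Step 2: among valid chains, greatest accumulated difficulty wins.
    return max(valid, key=lambda chain: sum(b["work"] for b in chain),
               default=None)
```

This ordering is what makes the split stable: from each chain's perspective, the other chain's post-split blocks fail step 1, so the two never compete on accumulated work at all.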

The term “altcoins” has been used to denote cryptocurrencies that are not Bitcoin. Both of these chains, however, are valid Bitcoin chains as defined above. From this standpoint, the commonly expressed opinion that BCC is a new altcoin may be viewed as a use of language for rhetorical and emotional, rather than cognitive and elucidative, functions. Sharing almost all specifications and over eight years of transaction history, each is far more Bitcoin than either is altcoin. Some new term may be required. For example, in a public draft article, Daniel Krawisz, a long-time altcoin critic, has quite recently suggested the term "bitcoin child" to specify any chain that traces its history back all the way to the Bitcoin genesis block, a category that now includes BTC and BCC, but no others.

Proponents of each chain will naturally want to claim the banner of “true” succession, much as most religious sub-sects style themselves as the one truest representative of the ancient founder’s original teachings (rarely acknowledging the odd coincidence that all of the other sub-sects likewise tell just such a story about themselves). Regarding coin names, it is sufficient if the tradable units of the two chains are named in such a way that those using them now or in the future do not encounter any practical confusion. Bitcoin (BTC) and Bitcoin Cash (BCC) appear sufficient for this. For continuity, Bitcoin dominance indices might choose to sum the valuation estimates for the two post-split Bitcoin chains, perhaps after trading normalizes and if it appears that both will persist for some time.

Of most practical relevance now is the quality and prospects of the existing chains, as they have actually come to exist, moving from the present into the future. Practical measures of their prospects center on hash rate and unit price trends.

Rather than relying primarily on such ever-shifting market criteria, however, I prefer to begin by examining what defines the respective chains themselves. If we are talking about mining, mining what? If we are talking about price, the price of what? Identification properly precedes evaluation. In this case, a comparative identification is natural given the context of descent with modification, in which common features far outnumber differentiators.

Which chain is the “truer” successor is, in principle, not especially important in direct analytical terms. It might be useful as sociological research into the study of the development and spread of beliefs, or somewhat more useful than that as a source of hints for investors as to likely relative popularity based on belief frequencies in relevant user populations (meme frequency).

Nevertheless, BCC’s critics have taken to consistently labeling it an altcoin (which it is not), and moreover asserting that it is impossibly distant from being any true and proper successor of the one real bitcoin, which they believe the BTC chain unquestionably is. In this context, it should at least be noted in counterpoint that from a strictly content standpoint—rather than a popularity standpoint—BCC is arguably a nearer successor to the 2009–2016 BTC than is post-SegWit BTC.

First, the BCC chain block size limit functions for the time being as a high upper-end traffic-burst defense, which matches the originally stated role and years-long practical function of this limit. This is more consistent in economic terms with the former BTC throughout the majority of its historical development until relatively recent times. In contrast, it was a significant new development when the particular height of the block size limit began to function for extended periods as an economic output ceiling on the industrywide production of Bitcoin transaction-inclusion services. Regardless of one’s opinion on whether this new economic effect is desirable, it remains that it was a significant departure from most of Bitcoin’s past viewed in functional economic terms.

Second, BCC does not implement SegWit. Again, regardless of one’s particular opinion on the net desirability of SegWit, it will in fact arrive on the BTC chain—but not on the BCC chain—as a significant data-structural departure from the organization of the former Bitcoin’s blocks.

Both BTC (with the new SegWit and some other recent changes) and BCC (with its revised block size limit and some other recent changes) are direct successors of the Bitcoin that came before them, and each differs in some substantive way from that former Bitcoin. Against a backdrop of continuous Bitcoin software modification and innovation over the years, this stands out as the first time protocol choice options have elicited sufficient sustained disagreement among participants that a chain split has in fact resulted. For the lower block-size-limit camp, the key factor was that any change to the limit was unacceptable to them; for the higher block-size-limit camp, it was that the failure to revise the limit (and, for some, SegWit activation as well) was unacceptable to them.

Some observers have expressed concern that this first Bitcoin chain split could set a precedent for additional splits in the future. This seems possible, but somewhat doubtful to me. First, it is unclear the extent to which this first split will prosper, and if it does quite poorly, this might discourage future attempts rather than encourage them. Second, months and years of debate, effort, proposals, and campaigns, all primarily centering around the block size limit issue, preceded this first chain split. This suggests this step has by no means come about lightly. Most importantly, I view the block size limit as unique and distinctive among Bitcoin protocol issues and think it unlikely that other issues will rise to the level of sustained disagreement that would be required for another similar split. [That said, the 2MB hard fork already planned for November could lead to another split, but that plan predated the current split, and some believe this split might even reduce the probability of the other one rather than enhance it.]

A poorly designed experiment, but all we get

The emergence of these two daughter variants of the former Bitcoin, which diverged from a common ancestor block on 1 August 2017, enables a certain evolutionary test in that both represent descent with modification following a speciation event. However, it is by no means a “clean” experiment, able to test the effect of changing a single variable. Alas, real-life evolutionary tests are usually “dirty,” reflecting the net effects of a complex interplay of context and interdependence. Even a single genetic change in an organism that does have some practical effect seldom has a simple, singular effect, but instead results in a certain cascade of effects, interactions, and adjustments.

As an experiment in the scientific sense, then, this chain split is badly confounded due to the many major variables differentiating the two chains. This includes, at least: the block size limit height difference, the presence/absence of SegWit, the respective quality levels and reputations of software development teams and software testing processes, differences in user traffic, and the extent and stability of relative hashing power. Most of these variables can impact both general user confidence (subjective) and bug probabilities (more objective). A good experiment, in contrast, would seek to change one variable at a time. This development does not do this—not even close.

A reasonable case can be made that the BTC/BCC split, such as it is, may be a net positive for holders of the previous “single bitcoin.” Bitcoin’s evolution continues for the time being along paths that have diverged into two chains differing across a set of multiple variables. This may well bring a certain marginal shift toward more practical experience opportunities and away from talk and modeling, which could in itself represent net value added from the event. Relative hashing power, unit prices, development efforts, and software quality levels are all likely to shift over time to various extents and directions not easy to predict (though always easy to “predict” afterwards). The complex sequence of outcomes to ensue must now be seen in practice and over time.

 

Additional Issues with the Balance and Accuracy of the Anti-Musk Narrative

After posting my reassessment of Elon Musk yesterday, I saw that the initial responses were mostly positive and to the effect that my reading seemed to be balanced and fair. A couple of interesting issues and angles have come to my attention in the day that has followed.

One topic concerns whether the Tesla Model S is quite as great as some initial magazine reviews suggested. Another is the narrative that Tesla could not survive without subsidies, given the alleged phenomenon of "sales dropping to zero" as soon as subsidies end. The third is a rather surprising turn—evidence that Elon Musk has been speaking out against electric vehicle subsidies and that he has been promoting their abolition.

Not so great?

A reader pointed out that Consumer Reports, after initially reviewing the Model S with glowing superlatives, was later forced to remove the car from its recommended list in late 2015. The reason was that poor reliability reports were coming in: things were breaking and needing fixing at a relatively high rate.

I have seen superior assessments of the Model S on safety, performance, buyer experience, total buyer maintenance cost, and service relationship. That is quite a list of superlatives for an upstart company selling an entirely new type of car. It is the safest car on the road. It has the fastest acceleration. In doing all this, it is quiet. It can even swim in flood waters like a James Bond boat (though the manufacturer does not recommend making a practice of this). In the area of service, Tesla employees show up at people's homes to fix the car and then leave.

Reliability strikes me as just the sort of area that would depend most on improvement over time through iterations of experience. Here we have a completely new company with a completely new type of car up against century-old companies building incremental advancements of century-old types of cars (at least in comparing the Model S to conventional luxury sedans). My prediction would be that the reliability issues should steadily improve as specific issues are identified and fixed and the company learns from experience, redesigns parts, adjusts suppliers, and so on. A small-scale version of this phenomenon is why I never buy a new whole-number release of an iPhone, but always wait for the "S" version. Most major engineering issues are introduced with the whole-number redesigns and have been eliminated by the time the upgrade iteration arrives. One can prioritize being first and accept a higher risk of issues, or wait until the earliest adopters have already served as the guinea pigs. It is a matter of personal preference.

Evidence quality behind claim of zero sales without subsidies

As for the narratives that no one would buy Tesla cars without subsidies, here's one specific claim from a recent article: "After Hong Kong rescinded a tax break for EVs effective in April, Tesla sales in April dropped to zero."

I have not researched or considered any detailed multi-country or up-to-date data on this, but looking only at this one statement, it immediately occurred to me that it is quite common for buyers of higher-end items or capital equipment to be aware of the end or the beginning of major tax or subsidy changes and to time their purchases accordingly. More broadly, sales tend to rise before a sales-tax increase and then drop off for some time after the change takes effect. There is typically some degree of "sales rush" leading up to such a change, followed by a sales drop-off and eventually normalization. The particular time scales depend on the specifics of the product.

This makes me wonder just how much of this "sales dropped to zero" narrative might be an artifact of such normal smart-buyer timing. The only surprising outcome would be if buyers of $100,000 items delayed their purchases until after an available discount had evaporated. To assess this, the whole data series in each case, including well after the change in subsidy, would have to be analyzed to account for this common factor in sales trend analysis. Or one could omit such analysis, cite figures immediately after the end of the subsidy, and thereby appear to have solid evidence for a subsidy-dependency narrative.

Does the alleged subsidy queen actually want subsidies?

The final issue I found was actually quite surprising. With so many critics painting Musk as a subsidy-seeking corporate welfare queen, I just sort of accepted this as though it must be true. I know he is a deal-maker and seeks out the best opportunities to acquire factory land, for example, and this includes getting the best possible deals from governments. But does he actively seek subsidies? According to the anti-Musk narrative, he must, right?

Just before I found the information below, it had already occurred to me on logical grounds that if some subsidies are of fixed amounts per electric vehicle, which many are, they may well have been LESS important to Tesla than to its competitor electric-vehicle makers (of course, this doesn't address the advantage relative to fossil-fuel vehicles). A $7,500 subsidy on a $100,000 car (Models S and X) is 7.5% of the purchase price, the equivalent of a couple of options more or less. This same subsidy on a $30,000 car is a 25% discount. True, the latter case would now also apply more to the new Model 3, but this has not been the context for these criticisms in the past.

Just after having this thought, a simple search revealed something I did not expect to see at all:

At a May 2017 earnings call, Musk made the following statements:

In fact, the incentives give us a relative disadvantage. Tesla has succeeded in spite of the incentives not because of them...Tesla's competitive advantage improves as the incentives go away. This continues to be something that is not well understood...
I should perhaps touch again on this whole notion of—it's almost like over the years there's been all these sort of irritating articles like Tesla survives because of government subsidies and tax credits. It drives me crazy. Here's what those fools don't realize. Tesla is not alone in the car industry; all those things would be material if we were the only car company in existence. We are not. There are many car companies. What matters is whether we have a relative advantage in the market.

As Anton Wahlman explained (4 May 2017):

Musk's argument is that the tax subsidies are worth more to Tesla's competitors than to Tesla, and that therefore Tesla would be better off without them, relatively speaking. Musk has made this argument in previous forums before, including on a previous earnings call as I recall, in the context of California's ZEV (zero emissions vehicle) credits, which Tesla is able to sell to other automakers as a purely politically engineered 100% gross margin profit. He made the argument on the 1Q 2017 earnings call again. In that ZEV case, his argument is that Tesla sells these $5,000-a-pop credits to other automakers at a discount, whereas those automakers make and consume some of those $5,000-a-pop credits internally without applying such a discount.

If removing subsidies removes a competitive disadvantage for Tesla, this might easily be written off as simply strategic self-interested promotion once again. However, if we are engaging in a moral assessment of Musk the public figure and claiming that he shamelessly seeks to live off the public purse, his active opposition to electric vehicle subsidies still does not fit all that smoothly into the anti-Musk narrative that I sought to qualify in my recent post. In addition, it cannot have been lost on Musk that the end of EV subsidies would also remove a special price advantage over conventional vehicles, against which Tesla also competes.

Interestingly, Wahlman's article went on to explain why other automakers might also be happy to be free of EV subsidies—they come with expensive strings attached.

Elon Musk apparently looks forward to competing in a subsidy-free world (who knew?). But what about the other automakers? Wouldn't lower subsidies for electric cars mean fewer electric cars sold for them? ... It sure would. And the other automakers would love it too!
Why? Because under the current regime, they are manipulated by both the U.S. Federal tax code, as well as by California's ZEV mandate, to develop and produce more electric cars than for which there is true natural free-market demand. And that means billions of dollars in investment for products that they eventually have to dump at negative margins.

In writing my reassessment of Elon Musk, I suspected the anti-Musk narrative of being a bit overdrawn, tending too far toward the negative in an imbalanced way that does not do justice to the reality and tends to dismiss valid positives in the sweep of also-valid negatives. The observations and discoveries above now suggest that the evidence and thinking behind the anti-Musk narrative may be even weaker than I had suspected. I think the ultimate point is to strive for a realistic and nuanced assessment: to call the positives positive and the negatives negative, and to acknowledge that both streams are present in parallel.

A Mixed Hero: A Libertarian Reassessment of Elon Musk

Many libertarians seem to love to hate Elon Musk these days. His crime is to live off the public purse. His companies, the charge goes, would be bankrupt without green subsidies and cheap government loans and contracts. He seeks out favorable terms from governments and angles to capture subsidies and cheap loans without reservation, and with vast success. This situation, along with certain financing practices and relationships among his companies, has made it fashionable to disdain Musk as a public figure and to characterize him with sweeping put-downs.

I have a more complex assessment of Musk as a figure. I enjoyed listening to his 2015 biography by Ashlee Vance. I tend to look for the positive things in people. One positive quality here is the ability to re-envision products from the ground up in a completely different way. The Tesla is not just the evolution of the car, but a completely new way to think about what a car is. A car is a thing with an engine and a drive train, right? True for a century, but not any more. Musk has done for cars and rockets what Steve Jobs did for computers and phones: completely re-envisioned what they could be, how they could be built, and how they could be used.

A second quality is execution under very challenging circumstances. Anyone can have big ideas, but only a few are able to execute on them successfully in the "really existing" world. SpaceX's rocket designs and rocket reuse and the Tesla Model S were almost universally deemed impossible—until the job was actually done. Rocket reuse was just a science-fiction fantasy; SpaceX did it. An electric car "that didn't suck" was also an impossibility—until Tesla built the Model S, which multiple car review magazines have assessed as basically the best car in the world, bar none, on both safety and performance. It is not only as good as conventional vehicles; it leaves them all behind, not just on green measures, but on car measures as such.

So from a simple first look, at this level, one could argue that however these things were achieved, they were at least potentially positive achievements (though this assessment will be qualified further below). In addition, Musk cannot be accused of relying on subsidies to the exclusion of also having skin in the game. He has repeatedly staked recklessly large portions of his personal fortune on bridging impossible-looking financial stretches for his enterprises.

I fully support the view that actively advocating for the expenditure of public funds is immoral. The only moral way to advocate for the use of public funds is to argue in favor of their return to the people to whom they rightfully belong, namely those whose wealth was forcibly extracted, mainly the original taxpayers.

On the other hand, if taxpayers in their role as victims of the state accept state handouts that are already flowing—provided they do not actively advocate for the continuation of such handouts—it is perfectly moral for them to take receipt of such funds as a form of limited restitution for other damages they suffer at the state's hands on a constant basis. This includes not only direct taxation but all the myriad seen and unseen harms from the arbitrary "regulation" of many aspects of life and work, all unjust restrictions on the liberties of mutually consensual production, trade, and association.

In this context, Musk's actions in relation to subsidies and government contracts must be viewed as mixed. Green vehicle subsidies, for example, already existed before Tesla. Building a car that would qualify for them does not—in itself—constitute advocating for the subsidy program. Seeing only crappy electric cars receiving subsidies, an entrepreneur could quite reasonably set out to build a better competing car that would also receive these same pre-existing subsidies instead of the crappy golf-cart cars.

Of course, Musk certainly does promote such programs. However, only at the point where he benefits from programs whose adoption or maintenance was actually influenced by his advocacy does a moral case against his benefiting from them become unmistakable. The minimal conceptual dividing line is that simply benefiting from subsidies is not objectionable per se; advocating for them is objectionable; and advocating for them and then also receiving benefits as a result of such advocacy is the worst case.

In this view, I suspect that his guilt is far more mixed than a simplistic portrayal of "his company benefits from subsidies, and could not exist without them." His enterprises have surely benefited in all three types of ways, ranging from acceptable to less acceptable to not acceptable.

Context is also important. No car company would exist in its current form and at its current scale without unimaginably massive subsidies continuously provided to all automobiles over many decades, distorting not only the entire structure of transportation, but also the very formation and shapes of cities and communities. This vast structural distortion of the entire transportation industry, which systematically twists spatial relationships between residences and businesses, takes a simple form: the production and maintenance of roads provided free of charge to drivers, financed by taxation. A simple heuristic to consider while commuting is that every time one has to pay by waiting, such as in a long line or in thick traffic, the state is squarely to blame.

In prosecuting Musk for his moral position in relation to the receipt of government support, another "extenuating circumstance" of wider context must be considered. What his companies have done with the money and other advantages received from state entities is a far more valuable contribution than almost anything that follows from other uses of such money and advantages.

Most of the state's money goes to "the production of bads," to use Hoppe's terminology, as opposed to the free market's production of goods. We do not want the production of bads to be carried out more efficiently. Indeed, we do not want bads to be produced at all—the fewer of them, the better.

Not only is the money the state extracts from the productive population wasted once, at the point of extraction; the ways that this money is subsequently used are generally quite wasteful a second time, compounding the damage to society. In the US case, most government money goes to the following types of uses: financing global military interventionism and promoting armed conflict and death all around the world, financing vast bureaucracies that meddle in all aspects of society, undermining healthy natural incentives, promoting fragility, harming employment, limiting innovation, and spreading social and cultural degeneration, high time preference, frailty, and dependency across the population.

Against this backdrop, Tesla has extracted something from the stream of public money and used it as part of a project which has produced arguably the best car the world has ever seen.

Why libertarians should want to focus vitriol on this, one of the best existing uses of the state's handouts, is somewhat mysterious. Why not spend the same time complaining about the 99+% of uses of state subsidies and privileges that lead to worse outcomes than this?

It is far easier to criticize than to achieve. A sad and strong cultural tendency is to find flaws in hero figures and emphasize those flaws over their positive characteristics. But what does such cultural cynicism bring?

My approach is the opposite in two ways: focusing on the positive and focusing on qualities. I look for admirable aspects of a person. I look for actions and qualities to which positive adjectives such as heroic can be applied, rather than attempting to apply a blanket noun such as hero (or not a hero) to necessarily multifaceted persons. I always look for what I can admire and/or borrow, in both people and thought systems. If I were to look for the worst in others and focus on that, it would be simple, but would accomplish nothing, since I would always find and focus on negative aspects of persons, aspects which I did not want to emulate. If instead I look for the best in each person, I always have something available to learn from and emulate. Likewise, if I look for the best in each thought system, and dismiss the rest, I always have one new puzzle piece to add to my own global knowledge synthesis.

I agree that Musk is guilty of actively seeking to gain from state handouts. However, this is partly mitigated in that at least some of these handouts were already being handed out, and could therefore be legitimately captured as partial restitution for other damages that the state continuously inflicts. It is also partly mitigated in that the uses to which these funds are being put are arguably positive developments relative to the worse outcomes that result from almost all other uses of money derived from state coffers.

It should be made clear that extenuating circumstances do not make it morally acceptable to advocate for the receipt of subsidies from the state. Nevertheless, guilt on this count (albeit probably somewhat more mitigated guilt than some critics have implied) should not be interpreted so as to invalidate the man's positive attributes and accomplishments.

As I read Musk's biography a couple years back, I came to view him more as the type of mixed Randian semi-hero who blends a certain heroic genius in some areas with serious flaws elsewhere. His genius is a vision- and engineering-driven entrepreneurship that has proven able to repeatedly achieve "the impossible" in practice in productive sectors of technological achievement (mainly transportation). One of his flaws is being all too gleeful in his pursuit of capturing ill-gotten gains from the state as one of the means he uses in this process.

The purest of the Randian superheroes all went on vacation from their professions in an exclusive mountain resort. Engaging with the real world to achieve great things today is often messy and complex. This is not an excuse to soften one's moral principles in action. However, Musk's own moral worldview contains no compunctions about attempting to influence state and regulatory actions, including in favor of his own enterprises. He can therefore be accused of being morally mistaken on this topic. Yet this amounts to the relatively simple claim that he is not a libertarian, which I do not think is in dispute.

I do not buy into the bases of some of Musk's bigger-picture motivations, above all global warming death hype. In addition, I argue in "The Unbearable Lightness of Martian Gravity" that his Mars colonization vision could very well turn out to be a dead-end, not on technical grounds, but on biological ones. That said, I do not criticize in order to take down a hero figure. I acknowledge and appreciate the heroic aspects of the figure, while also acknowledging the flaws and pointing out what I believe to be the errors.

Ron Paul said that if we are reducing the size of the state, the place to start is not with old ladies' state pension checks, but with outlandish militarism and a state-orchestrated monetary system that enables virtually unlimited debt financing for the state and its cronies. Probably one of the last things to cut out in dismantling the interventionist state is old ladies' pension checks, and then only after other policies that have undermined responsible private retirement saving, real insurance, and natural multi-generational care practices have long since been eliminated.

Likewise, libertarians complaining about uses to which government money is being put might consider that Tesla subsidies could well be among the best uses to which such money is currently being put. They might therefore redirect their attention and vitriol to the widespread mass production of unmistakable "bads" financed by the state, those that are far worse than some of the more impressive American engineering innovations in recent memory.

Outlines of a Unified Evolutionary Theory of Human and Environmental Health

Introduction

This post describes a unified picture of healthy food and healthy food production. Its content evolved over the years with my own understandings and personal practices. It formerly took the form of fixed pages on my web site that I revised from time to time. The component ideas and practices expressed have been developing in the world for a long time, but in obscure corners and not always in full integration with each other.

Today, aided by the incomparable power of the internet to spread heretical information, these varied insights are rapidly spreading, and evidence in their favor is being collected and disseminated in unprecedented ways, the latest of which, the "nequalsmany" survey study, I have watched taking shape live on Twitter over just the past month.

The time has come to publish my own account of these views as a single post. This brings together not only the parts, but a big-picture account that integrates them into a single vision, in which each part appears to reinforce the others.

1. Evolutionary health perspectives

Technological progress has many benefits. Yet some aspects of older styles of life, including patterns of sleeping, eating, and moving, and also aspects of food production, may have been healthier for people and environment alike than typical modern versions. Rediscovering and re-engaging some of these could raise health and well-being today.

The principle of evolution by natural selection revolutionized biology. Evolutionary theory can also help sort out the deeply confused and corrupted modern field of nutrition, though not on its own. There are several lines of evidence: understanding biochemical pathways and interactions, controlled nutrition experiments (not epidemiological studies, which are both commonly performed and mostly useless), and archaeological and anthropological investigations of hunter-gatherer groups.

The illusion in hunter/gatherer mortality statistics

Inupiat Family from Noatak, Alaska, 1929, by Edward S. Curtis.

One line of evidence is research on hunter/gatherer populations conducted prior to their taking up modern practices such as eating sugar and grain and sitting around a lot and snacking, some of which was famously conducted by Weston Price. These groups were found to be either entirely free of or far less subject to the “diseases of civilization,” including cancer, diabetes, heart disease, strokes, cognitive degeneration, and chronic joint and tooth decay.

Yet a popular image of hunter/gatherer groups is that their lives were necessarily "nasty, brutish, and short." Quoting statistics on the low average life expectancy among such groups is a favorite maneuver of casual critics. But such numbers conceal more than they reveal because non-dietary factors in pre-modern life collapse the averages so dramatically. These factors include infant and childhood mortality, death of mothers in childbirth, predators, prey fighting back, fights and battles among rival individuals and bands, accidents and resulting infections, and infectious diseases. The overwhelming factor behind improved average life expectancy numbers is the massive alleviation of certain tragedies that were "normal" up until modern times, above all, large numbers of babies dying before the age of one. Changes in such data tell us above all about the effects of modern hygiene and medicine. However, they tell us nothing about what we are investigating: What are the effects of nutrition and lifestyle on health and the development of long-term degenerative conditions?

Evidence suggests that hunter-gatherers who managed to survive the diseases and battles of youth tended to live long, with high awareness, robustness, and capability and little to no sign of the many and varied degenerative diseases afflicting moderns. The simplistic idea that they didn’t develop these diseases only because they died too young to suffer from them does not hold—the ones who lived long didn't develop them either!

To The Primal Blueprint and beyond

Although I had been very interested in healthy eating since my teen years, and spent a number of years as a vegetarian, the book that marked a sharp shift on my path of research and personal experimentation was The Primal Blueprint by Mark Sisson, which I read in October 2010. This book presents a nice blend of open attitude, systematically presented information, and a balanced and principled approach that goes beyond nutrition to exercise and other lifestyle habits viewed using an evolutionary lens.

Other works soon informed my perspectives through many phases. I transformed my approach to nutrition and exercise step by step based on new information and experiences. I, like quite a few others, have passed through phases of trying primal and paleo approaches, LCHF, and fasting and intermittent fasting. I spent plenty of time at each phase. I have now moved on to a largely zerocarb approach.

Beyond these many food and training changes over these years, I also started using software to alter computer and phone screen color temperature according to the time of day, switched to a standing desk for some types of work, bought a sunrise-simulation alarm clock, and stopped smartphone reading in the sleeping area (audiobooks allowed). I think such measures helped improve sleep quality and reduce eye strain. Finally, I have discovered extremely important insights into agriculture and environmental issues that connect back to such food choices. This results in an integrated picture that is discussed in the balance of this article.

2. The Metabolic Power of Not Eating

One of the more exciting recent additions to my understanding of nutrition comes in a surprising form: the importance of not eating sometimes, or fasting. A recent "puzzle piece" fit for me and many others has been to greatly reduce “eating windows” and more consistently practice intermittent fasting (IF). It turns out that a powerfully positive health-promoting intervention is to just not eat for various periods, for example, 16 hours, 23 hours, or 35 hours, with occasional longer stretches (each person should consult with professionals before doing this, especially if already on a medication that might have to be adjusted).

IF can be done intentionally. However, many practitioners of very low carb and zerocarb diets report spontaneously not being hungry for long periods. In this case, IF becomes partly an outcome of the eating strategy, not just an intentional practice. That said, being consciously open to IF allows one to more easily capture natural fasting opportunities that arise when hunger is absent.

Fasting traditions have been around and recognized as health promoting on many levels for at least thousands of years worldwide. However, a contemporary challenge for fasting for health is that no one is positioned to profit from it—except the person actually doing it. There is no special food to order and no special drug to consume. There is no product to be hyped and promoted.

Already being fat-adapted and in ketosis makes fasting far easier, almost unnoticeable, except for the improved concentration and flexibility. There is a certain freedom from always being locked into having to have that next meal or snack. While adaptation is required—anywhere from days to weeks and beyond—once adapted, I and many others have reported consistent benefits from nutritional ketosis, fasting ketosis, and their interplay.

I think of fasting as intentionally replicating a "bad-hunting day" from the paleolithic past. Of course, no self-respecting paleo hunting group would have decided to have a bad-hunting day, but they would have had some nevertheless. Our metabolic systems would have adapted to these periodic fasts, would perhaps even have come to expect them. Yet today such phases are largely missing. Moderns in search of optimal health may have to take intentional steps to reintroduce them.

When a bad-hunting phase led to hunger, one should expect our bodies to send the following message: get out there and hunt, and hunt more effectively than in the past few days. That means: more energy and enhanced concentration and attention. It does not mean getting cold and depressed in the cave, which would be a path to non-survival.

The modern approach to dieting, reducing calories while still eating the same regular meals, just smaller ones, has a set of effects opposite to the positive effects of fasting, as Dr. Jason Fung argues in The Obesity Code (2016). With chronic low-calorie dieting, metabolism sinks, energy and concentration fall, hunger is constant, and one feels colder. This is the opposite experience from fasting (especially after adaptation). However, it is just this “eating less,” as opposed to true fasting, that is the one doomed constant in almost every failing modern “diet” plan out there. A central reason for this difference is now clearly understood from controlled trial and biochemical research, Fung argues: the two conditions have completely different impacts on the key phenomenon of insulin resistance. Fasting improves this.

This section has suggested the importance of not eating sometimes. Next, when we do eat, what should be on the menu? What should humans eat to thrive?

3. The Zookeeper's Dilemma

An inverted zoo. Which are in better health? (Photo CC BY Greg HewGill)

Imagine you are a zookeeper. A clear and pressing question about each animal is: What do they eat? To maintain healthy animals, the first priority is to try to replicate what they eat in the wild. Feeding carnivorous lions rice cakes and herbivorous zebras fish cakes will lead to sick and eventually dead animals on both sides of the fence.

One of the biggest-picture signs that something is not quite right today with human diets was expressed by Dr. Barry Groves. He pointed out that although we observe a great deal of chronic and degenerative illness among modern humans, this is largely unheard of among wild animals. However, it is seen among captive and domesticated animals, specifically, animals that are being fed the wrong food.

So what do humans eat? And are we too being fed the wrong food?

Well, we eat a great many things, but that doesn't really help our inquiry either. So what is the next question?

In caring for animals, one would ask: What do they eat in the wild?

But with very few exceptions, humans today no longer live "in the wild" in any helpful sense, so this information is also not easily forthcoming. Nevertheless, it is possible to investigate what ancestors of modern humans ate when they much more nearly did live "in the wild" during long, evolutionarily formative periods, say, 50,000–100,000 or more years ago.

Answers to another question would also help, and this one can be applied directly to moderns as well: What kinds of foods do we thrive on?

Humans are able to eat a wide range of food and survive doing it, but what would be ideal? This shifts the emphasis to what foods humans do best on indefinitely versus merely what they can manage to stay alive on for some years. This is a subject of extensive medical research. Sadly, much of it is deeply flawed due to over-reliance on study designs that are inherently incapable of demonstrating causation. Such often confounded and poorly designed "studies," however, are far cheaper to fund and then use as the basis for getting another paper published.

One "evolutionary" influence on this field is therefore "publish or perish," rather than "arrive at the truest answers." Another is "follow the money," most of which traces back to pharmaceutical and "food" companies with boxes and bottles of cheap food-like substances to sell.

Highly meat-leaning

Balancing a number of different lines of evidence, most of which are represented in the resources listed at the end of this article, I have arrived at the view that humans are basically carnivores that can also survive on plant foods as a fall-back. They can survive on plant foods even for long periods, but cannot do so without suffering degenerative harm. Feeding humans primarily—and especially only—plant foods causes them to become gradually malnourished, to sicken in a variety of ways, and to "fail to thrive."

This is commonly obscured for three reasons. First, the process of degeneration can take years and decades to progress. Second, moderns who move toward vegan diets often report feeling better, so those must be good too, right? Third, a few people seem to do quite well on vegan diets even over quite long periods and these are cited as counter examples (while most of the others are just suffering through).

On examination, however, new vegans are quite often feeling better after moving away from something rather specific—modern diets of processed foods. They are not moving away from an ancestral diet rich in fresh fatty meat and already free of processed foods. With some exceptions, many find their health and mood deteriorating noticeably after a few years of veganism and are forced to quit.

Just because something is better than something else by some measures, such as feeling better or losing weight, this does not necessarily mean it is also ideal or even good. It might just be less bad than something else that came before it. With some exceptions, a typical long-term vegan is both thin and sickly and will readily recite their many and varied health challenges, which they hope in vain that the next concentrated plant supplement might fix.

An excellent introduction to the argument in favor of the foregoing view is available in Dr. Barry Groves's classic lecture: "Homo Carnivorus: What We Are Designed to Eat." This and additional key papers and lectures are linked at the conclusion of this article.

An exciting new development is the "nequalsmany" human carnivore study, organized by Dr. Shawn Baker and a community of zerocarb eaters interested in replacing vague speculation and assertion about the health effects of a carnivorous diet with organized data collection about the health states of real-life practitioners. Tens of thousands of modern humans have eaten meat exclusively, some for many years, many swearing by the dramatic health benefits of the change after first having tried all manner of other protocols that did not work as well for them, or that made their conditions worse. This is the first time that the health conditions of a large number of volunteers on an exclusively carnivorous diet will be systematically assessed over a three-month survey trial, encompassing both old hands and those new to the approach. This is by no means the highest-powered form of study design, but it is an essential first step in the research process to generate hypotheses and proofs of concept for higher-powered designs later. This arrives as a first-of-its-kind study in an area where only extremely limited and unsystematic data exist so far.

4. Best for people and environment

If it is true that meat eating is the best human diet for health, another question follows. How could large-scale meat eating possibly work for a modern society? Tiny populations of paleo hunters could do it, but they were working with massive roaming herds, and many of those species went extinct! And isn't meat production already bad for animals and the environment, even without being expanded still further? That it is gets presented in the popular press as "settled science," so "settled," in fact, that no one even bothers to call it settled. Questioning it would be a pure heresy of the worst kind. So let us proceed to do so.

Although the belief that meat production is bad for the environment has become quite popular, the balance of evidence I have seen indicates that this view is severely misguided. To explain this, we must turn to some still different perspectives and sources not directly related to nutrition.

The view that fatty meat is the healthiest primary food for homo sapiens—that we are basically carnivores that also have a nifty fall-back ability to survive on plant foods in a pinch—raises a wider issue. If this were true, how could modern food production possibly shift from serving carbohydrate-centric to animal-fat- and protein-centric eating patterns on any large modern scale? Virtually unquestioned conventional wisdom insists that not only health, but also "the" environment dictate lower, not higher, reliance on animal products.

The truth, as is surprisingly often the case, may be the exact opposite of this. Indeed, even separate from human nutrition issues, properly managed large herd animals might be the only way to halt and reverse the large-scale environmental destruction caused by modern plant agriculture and poor land management. The environmental destruction caused by grain agriculture that helps feed ruminants cannot be blamed on the cattle, which naturally thrive on grass rather than grain. And they can eat grass all by themselves; that's just how they roll.

The key insight is that large herd animals and vast stretches of grassland coevolved over geologic time. They came into existence and thrived as part of a single ecological system. One of the last modern examples of this was the unending sea of bison encountered by the early European explorers of North America (before they systematically exterminated the animals, undermining the cultures that had long subsisted on them).

Decades ago, Allan Savory set out to answer some pressing ecological questions independently of issues of ideal human nutrition. He arrived at the view that the most important and underestimated contemporary global issue is the mass desertification of grasslands. And he argues that there is one and only one way to effectively alter this process. A fundamentally biological problem requires a biological solution, not a chemical or an industrial one. On this basis, we can already suggest that "lab-grown meat" would just further contribute to environmental problems that a vast resurgence of real animals, properly managed, could help solve.

Savory's breakthrough was to discover that desertification has not been caused by “overgrazing,” as is usually thought, but by mis-grazing. Earlier effects of mis-grazing were then reinforced by misguided herd reduction or removal, which made the problem still worse, not better. More animals, properly managed, not fewer, would have been the solution. Today, he and his institute teach methods of using proper management of herd animals to recover desertified land and transform it into far more biologically productive pastures using know-how assembled under the heading “holistic planned grazing.”

Holistic planned grazing, in my view, constitutes an evolutionary approach to land management. It recognizes and builds on the ancient co-evolutionary interplay between grassland flora and large fauna. Large herds kept themselves moving across grasslands—fertilizing and tilling along the way—while staying grouped tightly to defend against predators. When they moved on, the land and flora had plenty of time to recover and regrow. The right know-how on the part of herd managers can replicate these dynamics, without relying on predators to shape herd movements.

As Savory's methods have shown, such properly managed pastures naturally retain rainwater through the grass, soil, and other life that grows there, all in an evolutionary dance with the same types of animals those grasses themselves co-evolved with. Vast surfaces of the earth were once covered with thriving grasslands occupied by roving herds of untold millions of beasts. Holistic management provides a way to recreate habitats that mimic essential elements of this past in an efficient modern way.

Now this would also happen to produce a large potential population of animals thriving in environments quite natural to them. What to do with them? Well, they might also then contribute a major, nutrient-dense, modern food supply. Dr. Michael Eades has recently arrived at a similar view after a thoughtful review of Savory's ideas and critiques of them (2 Jul 2017). His article provides an exceptional description and review of these practices. Moreover, it is politically notable that herding can be more decentralized and distributed than mass grain agriculture, enhancing local self-reliance and independence.

White Oak Pastures in Georgia, USA provides one inspiring example of transformation of a formerly conventional ranch. Using multi-species holistic management, it has not only spectacularly recovered burned-out agricultural land, but has also breathed new life into a town that had been nearly deserted.

Healthy grasslands, herds, and nutrition

The foods most destructive of human health have one thing in common. They are mass agricultural crops. Sugar, wheat, and corn top the list. All of them are subsidized by governments. All of them are promoted by official dietary guidelines. All of them are profitable for “food” companies.

And all of them kill and maim. They just do so insidiously in the form of chronic systemic inflammation, excess weight, diabetes, heart disease, cancer, arthritis, depression and suicide, and the modern conditions of cognitive degeneration. They are central to creating the modern "healthcare" crises (that is, such crises are not due only to fundamentally corrupt medical systems). Broad affliction with these chronic conditions provides much of the business for the highly profitable pharmaceutical and "healthcare” industries year after year.

Both anecdotal and increasingly also formal evidence continues to build for beneficial roles of fasting and very low-carb and zerocarb eating in treating, and especially preventing, the entire spectrum of modern chronic ailments. One challenge, however, is that the interests that can gain from such practices—at the baseline, sellers of meat and water—are dispersed. Their influence pales in comparison to the concentrated financial, media, and political resources of big food plus big pharma. Billions go to contemporary "food" conglomerates selling cheap carbohydrates mixed with toxic plant-derived oils. Billions more then go to pharmaceutical companies selling all manner of follow-up drugs, which seek to patch and manage the plethora of chronic damages accumulating from the consumption of this alleged food.

Nevertheless, the truth may be found, as it often is, entirely outside of this existing system. An unexpected larger picture is emerging, one precisely opposite the popular hypothesis that mass agriculturally based vegetarianism is best for both human health and the environment. This is the counter hypothesis that distributed, holistically managed grazing and carnivory are best for both human health and the environment.

The low-carb/high-fat and paleo-oriented nutritionists on the one hand, and the ecological herders on the other, have independently arrived at different parts of a single puzzle solution. The synthesis of these streams of thought and practice has profound implications for both health and environment. The hypothesis that results is that what is best for both human health and the environment is a food system based around a modern planned pastoralism enhanced with holistic management practices that mimic the co-evolutionary conditions of grasslands and herd animals.

Claims of a paleo-carnivore/holistic management synthesis

  1. Humans tend to live best mainly on a blend of fatty acids and amino acids derived from animal products. Animal products are the best sources of energy, structural materials, and highly bio-available micronutrients for humans. In contrast, eating large amounts of carbohydrates, especially processed ones, and artificial industrial foods such as seed oils, produces gradual metabolic derangement, foremost chronic insulin resistance and its many associated degenerative conditions.

  2. The best single source for such nutrients is large herd animals. Seafood is also a good resource, though generally lower in fat (a con, not a pro). Early Homo sapiens and some of their cousins may have contributed to the extinctions of many of their preferred larger, higher-fat species long ago, such as paleo elephants and mammoths, but we still have cattle and buffaloes, which work reasonably well. We also now have property rights (to some degree), which defeat tragedy-of-the-commons overuse issues. Notice the word commons in the phrase "tragedy of the commons." It is there for a reason: the tragedy happens when legitimate property rights are too poorly defined and defended.

  3. The best way (maybe the only way, according to Savory) to halt and reverse mass desertification of grasslands and alleviate related water crises is to manage large herds in ways that sufficiently mimic the natural movement patterns of their original evolutionary contexts. This is also the case independently of food production and human health issues.

  4. Humane and holistic ranching practices provide ideal living environments for herd and other animals. Compared to their evolutionary contexts, animals on holistically managed multi-species farms are protected from random and violent death from predators. Their supplies of food and water are reliable and secured. Mass grain and other plant agriculture practices (also used to grow the feed for feed-lot meat production methods) lead to destruction of wildlife habitats and long-term soil deterioration, and also entail far less favorable living conditions for animals raised that way. Large-scale agricultural production itself entails vast devastation of natural ecosystems. It replaces natural multi-species environments with artificial monocultures vulnerable to disease and soil degeneration. Holistically managed multi-species farms can and do reverse these trends.

5. Implications

This has been a tour of an array of interconnected topics, most of which are controversial even when considered alone, and each of which often is considered alone instead of as an integrated picture. I have kept references to a minimum for readability, but have saved my collection of what I consider some of the best resources for the end. This includes papers, blogs, articles, and lectures. These are things I would have loved to know about when I started, which could have saved me years of wading through material and trying out methods and ideas that worked less well than what I only discovered later.

A concluding summary must be far, far shorter than the journey itself. For understanding of food production: biological/ecological problems require biological/ecological solutions. Understand where plants and animals have come from and how they co-evolved, then apply that understanding to modern practices.

For personal use, the principles are: eat meat, drink water, lift heavy, sleep, play, and sprint once in a while. These are quite reminiscent of Mark Sisson's Primal Blueprint laws, which I first encountered nearly 10 years ago, but are further specified in some cases, particularly the first one. I have found that these same principles are shared by some of the more thorough investigators and uncompromising students of their own health and thriving.

These practices also appear to have dimensions beyond mere physical pragmatism. Many practitioners of such principles have reported profound improvements in health and well-being, not only in a range of physical conditions, but also in certain former psychiatric and emotional difficulties. One practitioner in 2009 described the improvement in emotional state after starting an all-meat diet thus: "The noise has stopped and the music has begun."

Many practitioners report a profound sense of freedom from former obsessions with food. All of the decision fatigue associated with whether to eat this or that, when, and how much, vanishes. Former cravings decline and eventually fade. It is no longer possible to imagine how one once ate unhealthy foods that one had considered objects of intense craving in one's earlier days.

Hours formerly spent on food can now be spent on engaging productively with the world and pursuing one's missions. As Shawn Baker put it, “If you look at any other animal on the planet, they aren’t looking at a menu and scratching their head.” As human animals with oversized brains and imaginations, we all have better things to do than spending inordinate amounts of time managing and balancing a long list of plant addictions. Freedom from them is possible. The power of being human can be unleashed from the travails of plant-consumption/plant-addiction management.

Hunters act and act smartly. Human hunters have thrived to an apex level through our wits and ability to work together. The "apex diet" is the origin of this capability and continues to support it today. Whatever your contributions will be, make them quality, and get to it!

 

To follow up on the many topics and perspectives in the foregoing synthesis, see my periodically updated list of selected talks, articles, and discussions on the foregoing topics on my Evolutionary Health Resources page.

Yes and Nein: Borrowing the best from German and American cultures (and not borrowing the worst)


After nine years living in Germany (as an American), I have distilled a difference in cultural instincts into a simple heuristic that balances the pros and cons of opposite tendencies. This is, of course, a large generalization and there are many individual exceptions, but I think it has some merit as a statement of tendencies.

In America, a first instinctive response to new ideas or ways of doing things tends to be: “Yes, that sounds interesting. Let’s try that and see how it works!” [for the West Coast, interject “wow” or “cool”].

On the negative side, this enthusiasm for the new can sometimes be applied to terrible ideas, which then waste time and money or worse. On the positive side, this makes innovation at a fundamental level much easier than elsewhere. America is a global engine of innovations that transform or create entire industries. In modern times, think of Apple (which has made a recurring habit of this), Facebook, Airbnb, and Uber.

In a simple contrast, in Germany, a first instinctive response to new ideas or ways of doing things tends instead to be: “No, that is not how it is done, that is impossible, no one does it that way and therefore it can’t work.”[1]

On the positive side, this tends to weed out terrible ideas and concentrate time and attention on things that solidly do what they are supposed to, like Autobahns, BMWs, long-lasting buildings, and MRI machines. On the negative side, this makes big-concept innovation much more challenging, since “that is not how it is done” is the whole point of a big-concept innovation—just add “yet.” In contrast, incremental technical and quality improvement within a given track proceed well, particularly in mechanical domains. Things are built to work very well and to keep doing so for a very long time.

From German culture, I embrace a healthy respect for things that actually work and a healthy skepticism about things that are not yet known to work (and that might just fail spectacularly—like socialism and wind power, oops). From American culture, I choose to embrace a style of fundamental innovation and re-thinking that has the power to reshuffle the structure of entire industries and ways of life in a legitimately and lastingly positive direction (unlike the low-fat diet, oops).

So: enthusiasm for the new—when warranted.

 

[1] There are some stark exceptions. One is the historical enthusiasm for the horrific ideologies of socialism (national and otherwise). This might be partly understood as misapplying mechanistic thinking to the decidedly non-mechanistic domain of society and economy. Another is homeopathy, which is immensely popular in Germany, even though it appears to lack any scientific basis. Besides placebo effects, which could be significant, one of my theories is that it does work in an odd sense: it helps protect people from greater exposure to conventional medicine, drugs in particular. By doing this, it sometimes accidentally leaves people healthier than if they had been subjected to certain unnecessary net-negative conventional treatments instead. First, do no harm.

 

Ancient travel food meets modern travel: My ultimate paleo travel meal option

A larger batch than I made.


Modern travel can be especially unfriendly to ancestral eating strategies that emphasize fresh whole foods. Although airlines try hard, the logistics are tough, and airplane food in general has a poor reputation. Some low- and zero-carbers tell me they just take the opportunity to fast. Even when traveling by train or car, it can be tough to impossible to scavenge much if any "real" food from rest stops and kiosks.

Fortunately, it is possible to pack something to bring, but what? Weight and spoilage are concerns for many fresh foods, especially animal foods. So this trip, I am going to borrow from a very old approach to preservation and portability of high-powered food. Yes, this time, I'm going to fly with pemmican.

Pemmican is credited as an innovation of native North Americans. After reading and watching as much as I could about food preservation and pemmican's particulars online, I came to think of pemmican less as a specific recipe and more abstractly as a versatile food preservation approach. The strategy hinges on the fact that lean meat and animal fat have very different requirements for long-term preservation. Reflecting this, the lean and the fat are first divided up, after which quite different preservation methods are applied to each. Finally, the results are combined back into a single product.

Fully drying lean meat helps prevent bacterial activity, which depends on moisture. Rendering fat on low heat separates the pure fat out of the source tissues, which degrade quickly. The pure fat by itself, once rendered, can last a long time. It mainly needs to be protected from air and light, which promote rancidity. Traditional packaging methods do just this.

My second trial production run

I chose salmon this time for the lean and beef tallow for the fat, a combination I have not seen before, but that sounded good. I also added a few blueberries for flavoring, though this is optional. I started with 625g (22 oz.) of wild-caught salmon, fresh frozen. I thawed it, removed some of the excess water with paper towels, cut it up, and placed it in a food dryer for about 15 hours, turning once early on. The key point here is that the lean must be completely dry and brittle, far drier than jerky, which should still have some bend to it.

Next, I placed the result in a blender. This time, I blended for longer than I did in my first trial batch awhile back. Sure enough, I got the sought-after "powder" result this time. It took a solid minute or more of blending at different speeds to get there though. A mortar and pestle is traditional for this step.

For the berries, I followed the same process as with the salmon, using about 200g of wild blueberries, which were also fresh frozen. I dried them on another rack right along with the fish and then blended the dried result down to a powder.

I didn't have to render the fat myself since I was finally able to find a source of beef tallow from a butcher (tallow, once commonplace, has proven hard to find after a half century of relentless, but scientifically baseless, slandering of animal fats). I weighed the powders and then chipped out about an equal weight from my refrigerated tallow supply. I warmed this up on very low heat. The only goal here is to melt it so it can be mixed with the powders.

After stirring the dry and liquid ingredients together in a bowl, I spooned the result into two small plastic containers with sealing lids. The point here is to help prevent any oil from escaping and getting tough stains on clothing and luggage (I will also place the containers inside ziplock bags for this reason, just in case). I lined the containers with butcher's paper, filled them up, closed the paper inside, and sealed the lids.

I will wait until the airplane meals come, and just pull out one of these as a supplement. Very low key. Little can anyone imagine how much paleo nutritional power is going to be packed into those innocent looking containers. If they did know, though, it would look about like this:

Macronutrient analysis

The inputs were 625g of salmon, 150g of tallow, and about 200g of blueberries (respectively, 22 oz, 5.3 oz, 7 oz), which totaled about 975g of ingredients. These were reduced with drying to 290g, so about 30% of the original total weight. Great for travel!

I ended up with about 130g (4.6 oz) of content in each container, plus another 30g that I sampled right away. So what macronutrients are in those two containers?

The estimated macros on all the inputs together (based on the breakdowns on the frozen product boxes) were: 1,940 calories, 126g of protein, 159g of fat, and 12g of carbs (from the berries). So each 130g container includes 873 kcals, 57g of protein, 72g of fat, and about 5g of carbs (rounded). Calories are 228 from protein, 648 from fat, and 20 from carbs. That's 72% of calories from fat.
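The arithmetic above can be cross-checked with a short script. The input figures are taken from the text; small differences from the article's rounded per-container numbers (e.g. 870 vs. 873 kcal) are rounding artifacts from the nutrition-label values.

```python
# Cross-check of the pemmican macro arithmetic (figures from the text).
# Inputs: 625 g salmon + 150 g tallow + ~200 g blueberries -> 290 g finished.
input_weight = 625 + 150 + 200          # 975 g of raw ingredients
finished_weight = 290                   # g after drying and mixing
container = 130                         # g per container

print(round(finished_weight / input_weight * 100))  # ~30% of original weight

# Total macros for all inputs together, per the product labels
kcal, protein_g, fat_g, carb_g = 1940, 126, 159, 12

share = container / finished_weight     # fraction of the batch in one container
print(round(kcal * share))              # ~870 kcal per container
print(round(protein_g * share))         # ~56 g protein
print(round(fat_g * share))             # ~71 g fat
print(round(carb_g * share))            # ~5 g carbs

# Share of calories from fat (9 kcal/g fat; 4 kcal/g protein and carbs)
fat_pct = fat_g * 9 / (protein_g * 4 + fat_g * 9 + carb_g * 4) * 100
print(round(fat_pct))                   # ~72% of calories from fat
```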

And so: the perfect travel power food, inspired by the old, old school, along for my next flight.

 

UPDATE: Pro tip. Watch out for the 100ml limit on "liquids and gels" at the security line. I would never have thought of this as a "liquid or gel," but the x-ray machine guy was curious. I said it was my lunch and they let me through. The safest way would be to create sub-100ml (g) packages and put them in the clear plastic bag for the security line. Also, recall that if you are flying internationally, this cannot be brought into a country with a quarantine on meats. Eat it before landing (no trouble there!).

SpaceX can get there, but biology is a probable Mars residence limiter

SpaceX chief Elon Musk laid out a long-term vision for regular interplanetary transport and colonization in a 27 September presentation at the International Astronautical Congress. Details and vision alike were further steps along the path SpaceX has been pursuing for years, as it repeatedly counters naysayers by taking up the so-called impossible—and getting it done.

Yet while Musk concentrated on engineering, propulsion, efficiency, and finance, the toughest limiters on long-term Mars habitation may well turn out to be biological. Could life evolved on Earth, especially more complex organisms such as ourselves, thrive there indefinitely and across generations?

Musk’s aim is to make humanity a multiplanetary species. He envisions a city of a million people on Mars that could become “self-sustaining.” In other words, if Earth becomes uninhabitable, humanity would have a second home, and avoid extinction.

Most of the technical issues with Mars habitation can be addressed with technical means. Radiation can be shielded against. Water, air, and regulated temperatures can be produced, and chemical plants such as for ship propellant can be built. Psychological and other factors in long-term, small-scale hab confinement have already been under study both in space and in remote desert sims.

The gravity of the situation

However, the harshest sticking point for a colonization plan could be something that Musk mentioned, but characterized only as a source of fun—38% Earth gravity on Mars. He presented images of jumping high and lifting heavy things with ease.

The possible problems would only appear, as they so often do, over the longer term. Research on the health effects of low gravity has already begun to suggest a quite unfavorable pattern. Much of this research has been done in zero g, but long-term exposure to 38% Earth gravity—Mars g—could well produce many similar effects along the same spectrum, just more slowly.

Zero g has been found to produce not only the expected muscle atrophy in astronauts, but a host of other health issues, which isometrics and exercise bikes can only partially limit. Research on both astronauts and lab animals points to falling bone mineral density and circulatory issues, including impaired heart health.

Limited research to date thus already suggests negative effects on three major physical systems. Yet muscular, skeletal, and circulatory systems are hardly footnotes to transporting brains; they are most of what a complex organism consists of. Moreover, there is no reason to expect nervous and reproductive systems to get free passes either, especially over years and decades.

Studies of zero-g animal embryonic development raise even greater concerns for long-term Mars colonization. Reproduction among spacefaring rodents has gone quite badly. Experiments with mice on a Space Shuttle mission resulted in normal embryos for the earthside controls and no growing embryos in zero g. Rat groups sent into orbit produced some weightless pregnancies, but with no resulting births. The pregnancies spontaneously terminated—all of them.

Evolutionary and developmental processes could always assume 1g

Simple organisms such as bacteria are the least likely to be bothered by gravity changes. The more complex the developmental process, however, the more likely that aspects of this process will be fine-tuned to happen in 1g. That said, Mars g could well be better for development than zero g because it would at least supply developmental processes with some vertical orientation, an up and a down, albeit with a much weaker signal.

The plans encoded in DNA for growing an organism are completely unlike engineering plans. They are decentralized developmental instructions. Each cell responds to its immediate environment. It takes cues from the type of cell it has become, from the types of cells around it, and from the specific chemistry and hormones in its blood supply. The so-far unquestioned constant has been that all earthly life has evolved in 1g (with very tiny variations) and every embryonic developmental process has evolved to take place in this 1g.

What about adaptation? As powerful a force as evolution by natural selection is, it tends to require extremely long time scales, on the order of thousands and more generations, especially for larger-scale adaptations. Too great a change—or an entirely unprecedented type of change—and a species will simply not make it.

Adaptations to something so pervasive and otherwise constant as gravity would have to proceed in steps. If a hypothetical planet’s gravity were to (somehow) shift to 38% of its former level, but do so over several million years or more, then life there would have a decent chance of adapting, because any given generation would be subject only to minute changes. By the time gravity had fallen to 95% of its former level, organisms would already tend to be optimally adapted to that new 95% level. Checking in again a thousand generations later, organisms would tend to be well adapted to the newly current 90% gravity, and so on as gravity crept down. In contrast, evolution copes far less well with sudden large jumps, which tend to be associated with mass extinctions.

Temperature variation is a variable to which earthly life is widely adapted, both across species and to a lesser degree within each organism. Temperature has changed remarkably and continuously throughout Earth’s 4.5 billion year history and it also varies starkly with season and geography. Temperature adaptation therefore has a vast range of evolutionary precedent. Atmospheric composition, pressure, and radiation levels have also changed back and forth over geologic history.

What earthly life has never had to do, not even once, is what a Mars relocation would ask of it. Low g is something that evolution has had no opportunity to tackle. One of the few rough constants throughout the 3 billion or more years of earthly life has been 1g.

This still does not make some degree of individual gravity adaptation impossible now, but it does suggest that this could be a very serious issue for colonization and a potential deal-breaker for both indefinite stays on Mars and natural reproduction of future generations there.

The probable need for artificial gravity and how to produce it

For long-term extra-terrestrial colonization, artificial structures capable of producing artificial gravity approximating 1g seem more promising. One concept involves large cylindrical spacecraft rotating about their long axes. The interior surface of the cylinder can be built to a size and given a rotation rate that together approximate 1g over a large habitable area. That would be another huge engineering challenge. Yet SpaceX’s work in interplanetary transport, along with advances in asteroid mining, could help lead to a future in which this too becomes more feasible.
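As a back-of-envelope sketch of the scale involved: spin gravity follows from centripetal acceleration, a = ω²r. The radii below are illustrative, and the ~2 RPM comfort ceiling is a commonly cited rule of thumb from spin-gravity discussions, not a figure from this article.

```python
import math

G = 9.81  # m/s^2, 1g


def rpm_for_1g(radius_m: float) -> float:
    """Spin rate needed to produce 1g of centripetal acceleration at a radius."""
    omega = math.sqrt(G / radius_m)    # a = omega^2 * r  =>  omega = sqrt(a / r)
    return omega * 60 / (2 * math.pi)  # rad/s -> revolutions per minute


for r in (50, 100, 250, 500):
    print(f"radius {r:4d} m -> {rpm_for_1g(r):.2f} RPM")

# A commonly cited comfort ceiling is ~2 RPM; the radius needed for 1g at that rate:
omega_max = 2 * 2 * math.pi / 60       # 2 RPM in rad/s
print(f"minimum radius at 2 RPM: {G / omega_max ** 2:.0f} m")  # ~224 m
```

The takeaway: a habitat spinning slowly enough for comfort needs a radius of a few hundred meters, which is why this is easier to build as a free-flying structure in space than on a planetary surface.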

Given the grave potential health and reproductive risks of long-term exposure to zero g and/or Mars g for Earth-evolved organisms, those interested in space colonization ought to assign a high priority, alongside ongoing engineering work, to low- and zero-g health research. Critical for colonization are three research areas: effects of Mars g on the health of Earth-leavers, likely health of long-term Mars residents upon potential return to Earth, and effects of low and no g on embryonic and childhood development.

Getting people to Mars is an engineering challenge. Musk, SpaceX, and collaborators are up to the task and well on their way. But the length of time that hopeful new Martian arrivals can expect to live there, in what state of health, and with what likelihood of producing healthy offspring, are critical questions in need of serious research and consideration in relation to any developing colonization plans. Early animal and astronaut studies combined with an evolutionary perspective suggest that shorter-term Mars visits are likely to be far more feasible from a health perspective, that natural reproduction among colonists might well be out of the question, and that the development of spacecraft and stations with artificial gravity is likely to be a biological priority for any future long-term extra-terrestrial residents.

This provides a more realistic base scenario from which to refine the engineering details of an early Mars transport and habitation system. It may well be that 1g environments would have to be available at least part of the time to support health longer term. The most realistic approach to creating artificial gravity is a rotating habitat, but this could well prove easier to achieve in space than on a planet with gravitational and atmospheric resistance, albeit both much lower than Earth’s.

At minimum, it should be clear that lab mice and rats ought to be the first serious colonists on Mars—and this for quite some time. Their mission: to live where no earthly creature has lived before. Godspeed to those pioneering rodents; I suspect they’ll need it.

The curious case of the faster-healing knee and the larger steaks

Some loose ends needed addressing, but why was the recovery so fast?


My doctor took one look at my knee and his jaw dropped. He had hardly ever—or perhaps never—seen a knee that looked that good just four days after arthroscopic surgery.

This clinic specializes in these surgeries, so he sees patients, many of them young athletes, in post-surgical recovery checkups daily. He kept looking at my knee and then looking at me—a middle-aged guy. He checked the chart to make sure the surgery was actually just four days earlier. Still in disbelief, he asked me what I had done.

What came next was instructive. I told him I thought the surprisingly fast recovery might be due to my very low carb diet.

His response was surreal, because non-existent. He did not acknowledge what I had said. He just kept going on about how good the knee looked and how he had hardly ever seen such a fast recovery.

The rest of the conversation was clear and normal. We talked about how the stitches were coming out next time. We talked about how, given the fast recovery, I probably didn’t need that physical therapy after all.

I was still curious, so I mentioned just once more that maybe the notable recovery could be due to my low-carb diet, because that seems to reduce inflammation.

Once again, no reaction. It was as if I had spoken just that one line in Chinese. No, not even that. Switching to Mandarin would have elicited some noticeable reaction. Would the fact of my statement cease to exist if not acknowledged?

A tale of two otherwise identical surgeries

I have spent several decades in somewhat rough activities including martial arts earlier and amateur adult soccer later. With such activities, it can seem at times like rolling dice when an individual’s luck might run a bit low and an arthroscopic meniscus repair will be called for. Once this kind of tissue tears a little, it just does not heal by itself. Worse, the torn piece can obstruct the joint and lead to additional tearing and other problems. It’s a little like having a hand-knit sweater with a hanging thread that is just waiting to get caught and unravel some more (but with pain involved). The hanging thread just needs to be trimmed off. The invention of arthroscopic technology revolutionized the ease with which this could be done.

My first such surgery was in 2010. Of course, I thought I had learned my lesson and would not be back. But alas, in a single lapse of focus, the other knee over-extended on a bad landing on the futsal court in late 2015, entirely my own fault. So my second such surgery, on the other knee this time, was done recently, in 2016.

My recovery six years ago was good, but comparatively ordinary. It at least did not elicit any jaw-dropping from the specialist. I recovered nicely, above average, but I do not think I recovered nearly this well.

It was also striking to me after this 2016 surgery that I awakened from anesthesia crisply, with perfect clarity, as if from an unusually excellent night’s sleep. I do not remember a feeling at all like that from my corresponding 2010 recovery room awakening. I recall it as groggy and gradual, more as I would have expected. This may be coincidence, but I note that a major effect of a low-carb ketogenic diet is a gradual transformation of preferred cellular fuel sources, including for the brain, so an effect like this is plausible. I have also noticed clear improvements in sleep patterns following these dietary changes, and full anesthesia and sleep are related states.

There is certainly individual variation in recovery rates, but the interesting thing here is the rare opportunity to compare the same person recovering from two identical surgeries at two different times. Six years apart, these were the same surgeries, performed by the same surgeon, and conducted at the same clinic with the same anesthesiologist in the same room. They were for remarkably similar injuries. The surgeries were conducted a similar length of time after the initial incident (in both cases, after about eight months of “conservative” recovery and training efforts). I am the same person. Almost every factor was the same.

So what changed between the two events?

First, I am six years older. But this would predict a slower recovery, not a notably faster one.

Second, I have completely changed my diet, including adding fasting periods. Both low-carb eating and fasting are known to reduce systemic inflammation compared with more conventional modern diets, with their high eating frequency, high refined carbs, and brutally high omega-6 content. With lower systemic inflammation (call it immune-system noise), specific inflammation as a healing response at the surgery site (immune-system signal) might proceed with more appropriate focus on the local site and without undue exaggeration.

It was only several months after the 2010 surgery that I discovered The Primal Blueprint and first ditched grains, started even more thoroughly avoiding refined sugars, and replaced industrial processed seed (“vegetable”) oils with natural fats. Then, starting around 2013, I moved toward a still lower-carb, higher-fat whole food ketogenic approach. More recently, just within 2016, and mainly in the past few months, I have been trying out a largely carnivorous approach and have introduced more fasting and intermittent fasting as well.

Analysis and implications

Unfortunately, I have no comparable record of the state of the other knee after exactly four days in 2010, only necessarily unreliable memories and impressions. I could be making this up from memory and confirmation bias. Or the difference between the surgeries could be random or due to some other unnoticed factor.

Still, even as anecdote, these recollections strike me as notable. And on reflection, it occurs to me that one of the sad symptoms of diabetes is poorer wound healing. If a low-carb (and natural fats) diet tends to lead toward the very opposite of a diabetes crisis metabolism, might it not likewise lead to the opposite of compromised wound healing? That is, improved and above-average wound healing. This seems plausible.

The hypothesis here is that conventional high-frequency, high-carb diets might keep most people’s post-operative and other wounds from healing as quickly as they might otherwise. The effects of the resulting unnecessary systemic inflammation would come to appear “normal” only because it would be what clinics would see from day to day within the particular afflicted populations. Anyone doing something quite atypical of that population, such as a very low carb diet, might produce seeming anomalies—relative to this afflicted population. If lower carb and fasting are superior to higher carb and frequent eating, as I have come to think they are—through research, countless biographical and ethnographic reports, and accumulating personal experience—those anomalies would show up as positive surprises.

Greater clarity here could support dietary practices that improve health outcomes while reducing, or at least not adding to, reliance on the medication industry. As I put it in the title of my book review essay on Jason Fung's The Obesity Code, “Only the faster profits.”

Relatively little research money floods in to verify or falsify these types of potential effects, perhaps because no potentially profitable pills would be entailed in producing them. If such benefits are real, people changing their own habits would be the primary beneficiaries, and direct action to make personal changes would be the primary method.

Block Size Political Economy Follow-Up 3: Differentiation from the 21-million Coin Production Schedule

Continues from Part 2: Market Intervention through Voluntary Community Rules

One popular argument compares the Bitcoin block size limit to the coin production schedule that sets a terminal maximum of 21 million bitcoins that can ever be created. Raising the block size limit, this argument continues, could set a precedent for changing the coin production schedule, and then what? Changing the block size limit opens a slippery slope that could threaten to lead to the end of cryptocurrency standards and boundaries. Just as the coin limit is an essential value proposition of Bitcoin, so other types of limits must be conservatively protected as well.

How can this type of argument be considered?

First, note that this represents an approach opposite to the one I have taken. I have identified and discussed the block size limit as something uniquely and importantly different within Bitcoin from an economic standpoint. The above argument, in contrast, presents these different “limits” as quite similar to one another for this purpose and therefore ripe for analogizing.

Next, one might note how Bitcoin started with its production schedule already in place, whereas the block size limit was added about 20 months later, at just under 1,200 times the average block size of the time. The limit’s original proponents defended it from critics as a merely temporary measure and thus of no real concern.

A common retort to such observations is, in effect, “that was then, this is now.” The project is at a more advanced stage. The current developers have more experience and a more mature view than the early pioneers. The system now carries far more value and the stakes are higher. Today, we can no longer afford to be so cavalier as to just put a supposedly temporary limit right into the protocol code where it could prove difficult to change later…

That is…we can no longer be so cavalier as to just remove such a previously cavalierly added temporary limit…That is…it is time to move on from reciting old founder tales and look to present concerns.

And indeed, such matters of historical and technical interpretation are subject to many differing assessments. However, there is an altogether different and more enduring level on which to consider this matter. There are substantive economic distinctions between a block size limit and a coin production schedule that render the two remarkably different in kind and thus weaker objects for analogy than they might at first appear.

When “any number will do” and when it will not

This is because raising the total quantity of a monetary unit by changing its production schedule has completely different types of effects from changing the total quantity of a given service that can be provided. Producing an increased quantity of a given cryptocurrency is entirely unlike producing an increased quantity of transaction-inclusion services. This follows from a unique feature of monetary units as contrasted with all other economic goods and services. An arbitrary initial setting for the production of new coins (which operates to define an all-time maximum possible production quantity) works quite well for a cryptocurrency, but does so only for unique and distinctive reasons.

With money, barring certain divisibility issues of mainly historical interest, any given total quantity of money units across a society of users facilitates the same activities as any other such total quantity. This includes mediating indirect exchange (facilitating buying and selling), addressing uncertainty through keeping cash balances (saving; the yield from money held), and facilitating lending and legitimate commercial credit (not to be confused with “credit expansion”). The particular total number of money units across a society of money users is practically irrelevant to these functions. What is critical to a money unit’s value is users’ confidence that whatever this total number (or production schedule) is, money producers cannot arbitrarily alter it, especially upward, so as to rob money holders through devaluation.

Subject to constraints of mineral reality.

A hypothetical model of physical commodity money production on a free market differs in certain important respects from both cryptocurrency and fiat money and bank-credit models. We should therefore closely consider the meaning of arbitrary with regard to these distinct cases.

With precious metal coins produced by ordinary businesses on a free market, the number of units cannot be increased arbitrarily for reasons rooted directly in physical constraints. Each additional precious metal coin to be produced requires specific scarce materials and energy combined with various manufacturing and other business costs, from mining to minting. Each such coin is much like any other good produced and exchanged on the market in that it is a product to be used in the market as money as opposed to a product to be used in the kitchen as dinner. Material scarcity itself protects money users from rogue money producers by preventing arbitrary changes to the quantity of money units. Changes in quantity supplied reflect supply and demand for such coins, including marginal production costs, as with other products.

In sharp contrast to this, a state-run system of fiat money and bank credit supports “flexible” increases in the “money supply.” These are arbitrary in that, unlike hypothetical commercial precious metal coin makers, these legally privileged money producers can generate additional money units at little to no cost to themselves. Notes can be printed with differing numbers of zeroes designed into the printing plates as the denomination, at no difference in printing cost. Likewise, cartel-member bankers can issue “loans” of nothing, filling customer accounts with what has been aptly described as “fountain pen money,” limited to a degree by the current policies and practices of those managing the banking cartel (“regulators,” etc.). Legal frameworks provide some protection for users of such money, most of the time (except when they do not), but such protections are far weaker and less reliable than those from the harder constraints of mineral reality.

Against this backdrop, some cryptocurrencies, led by Bitcoin, feature a novel and innovative third way to protect money users from arbitrary increases in new add-on supply. A production schedule can be specified within the effective definition of what a given cryptocurrency is.

Now, in considering the exact number of possible units of a given cryptocurrency, it helps to imagine two almost identical parallel universes, A and B, which differ in only one respect. Assuming sufficient divisibility in both cases (plentiful unit sub-division is possible), 30 widgetcoins out of a 300-trillion widgetcoin supply across a given society in Universe A carry the same purchasing power as 60 halfwidgetcoins out of a 600-trillion halfwidgetcoin supply across a given society in Universe B.

In each universe, one can buy the same kilogram of roast beef, in one case with 30 units, in the other with 60. Since the 300-trillion versus 600-trillion total money supply is the only difference between these two universes, it makes no difference whether the roast beef is bought with 30 units in Universe A or with 60 units in Universe B. Since the people in the two universes are wholly accustomed to their own respective numerical pricing conditions, their psychological and felt interpretations of the value associated with “30” in the one case and “60” in the other, are likewise indistinguishable.
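The arithmetic behind the two universes can be made explicit. This is purely an illustration of the reasoning above, using the text's hypothetical widgetcoin and halfwidgetcoin figures; the point is that the share of the total money supply handed over, not the raw unit count, is what the purchasing power comparison tracks.

```python
# Illustrative arithmetic only: purchasing power tracks the fraction of the
# society's total money supply paid, not the absolute number of units.

def supply_share(units_paid: float, total_supply: float) -> float:
    """Fraction of the society's total money supply handed over."""
    return units_paid / total_supply

TRILLION = 10**12

# Universe A: 30 widgetcoins out of 300 trillion
share_a = supply_share(30, 300 * TRILLION)

# Universe B: 60 halfwidgetcoins out of 600 trillion
share_b = supply_share(60, 600 * TRILLION)

# Identical shares -> identical purchasing power for the same roast beef.
assert share_a == share_b
print(f"Share of total supply in each case: {share_a:.1e}")  # 1.0e-13
```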

Naturally, many individuals and organizations in any universe dream of having “more money.” For example, considering that 20 units of a good are worth more than 10, it is easy to equate having more units with having more wealth. Twenty good apples represent an amount of wealth (ordinally) greater than 10 such apples do. This is also the case with holding quantities of the same monetary unit. Twenty krone represents more wealth than 10.

But the crucial point now arrives: the foregoing “more is better” with regard to money applies to the number of units in a given party’s possession, but does not apply—as it does with ordinary non-money goods and services—to the wealth of the society of money users as a whole. Viewed across an entire society, intuitive associations from personal and business experience between larger numbers and greater wealth do not translate into a way to raise overall wealth. Political funny-money schemes with names such as “monetary policy” and “credit expansion” instead produce only sub-zero-sum transfers of wealth from some monetary system participants to others. Such transfers produce win/lose results in which some gain at the expense of others, not to mention the additional net losses from the transfer process itself (thus sub-zero-sum).

With Bitcoin, when the initial design was set—but not afterwards—42 million units, or other possible numbers, would have been as serviceable as 21 million. After the system launched, however, no general benefits could follow from increasing the quantity of possible bitcoins beyond their initially defined schedule. Such a later increase would instead tend to 1) reduce the purchasing power of each unit below what it would have otherwise been, 2) transfer wealth to recipients of new add-on units away from all other holders of existing units, 3) raise uncertainty about the coin’s reliability, likely depressing its market value with an uncertainty discount, 4) create demand for an analog of a “Fed watching industry” that speculates on what might happen next with the malleable production schedule, and 5) give rise to an industry of lobbyists, academics, and other experts dedicated to influencing such decisions.

While the block reward framework does indeed also “transfer wealth” in a sense to miners from existing bitcoin holders as in item (2) above, it crucially does so only in a predefined way, knowable to all participants in advance. The block reward schedule, defined before launch, provides a form of compensation for mining services in the system’s early days. This has enabled the system to evolve and succeed from its launch to the present. This follows not from any arbitrary change to the production schedule, but merely from the ongoing operation of the production schedule initially set.
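As an aside on the mechanics, the roughly 21-million maximum discussed above is not stored anywhere as a single constant; it emerges from the predefined halving schedule itself. The sketch below computes it from the protocol's actual parameters (a 50 BTC initial block subsidy, halved every 210,000 blocks, with amounts tracked as integer satoshis); it is a minimal illustration, not mining or consensus code.

```python
# A minimal sketch of how Bitcoin's ~21 million coin maximum emerges from
# its predefined halving schedule rather than from a stored constant.
# Protocol parameters: 50 BTC initial subsidy, halving every 210,000 blocks,
# amounts in satoshis (1 BTC = 100,000,000 satoshis).

HALVING_INTERVAL = 210_000          # blocks per halving era
INITIAL_SUBSIDY = 50 * 100_000_000  # 50 BTC expressed in satoshis

def total_supply_satoshis() -> int:
    """Sum each era's issuance until integer halving drives the subsidy to zero."""
    total = 0
    subsidy = INITIAL_SUBSIDY
    while subsidy > 0:
        total += subsidy * HALVING_INTERVAL
        subsidy //= 2               # integer halving, as in the protocol
    return total

print(total_supply_satoshis() / 100_000_000)  # slightly under 21,000,000 BTC
```

The schedule thus yields a hard ceiling just under 21 million coins, knowable to all participants in advance, which is precisely the "predefined way" referred to above.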

One free pass only

In sum, a peculiar characteristic of money units when viewed across an entire society of money users provided a one-time and unique economic free pass for setting an arbitrary number of possible bitcoins at 21 million. This free pass could only be valid before initial launch (prior to 2009, or at the very latest, prior to the evolution of any tradable unit value). Changing the schedule later, especially in such a way as to increase unit creation, would have completely different and wholly negative effects from a systemic perspective.

Now returning to non-money goods and services the case is quite different again. The foregoing unique monetary free pass is entirely absent, whether after launch or before it. When non-money goods and services are likewise viewed at the level of a given society as a whole, “almost any number will do” does not apply. An increased total quantity of a non-monetary good or service supplied can serve the general interest, not merely special interests. It can be win/win and not win/lose. If there are more apples or cattle to go around in a given society (as opposed to just more pesos), this does tend to lower the costs of acquiring those goods in a meaningful way. This does enhance wealth in society, not just transfer it around. It represents a real increase in production, not just a “flexible” money fraud as in the case of arbitrary inflation on the part of money producers.

Miners provide one such ordinary “non-money” service when including a given transaction in a candidate block. This is a scarce service provided (or not) to a specific end user by specific miners. It does not fall under the unique category of the total number of monetary units in a society of money users. The total possible number of bitcoins, however, does fall under this unique category. The two numbers differ in kind and for that reason make poor objects for analogy. Both may, indeed, be viewed as “limits,” but it is important to recognize the contrasting economic roles and natures of these two types of limits.

Block Size Political Economy Follow-Up 2: Market Intervention through Voluntary Community Rules

Continues from Part 1: Software Choice, Market Differentiation, and Term Selection

If a given block size limit is part of a given cryptocurrency at a given time, can economists legitimately say anything with regard to such a limit? Must this topic be left alone as a mere qualitative characteristic of a product that users have freely selected?

From one perspective, if user preferences are subjective matters of taste and opinion, nothing can be said other than that Ravi prefers this, Setsuko prefers that, and Heinrich prefers some other thing. If various users prefer a cryptocurrency with one block size limit or another, economists must remain silent and leave users to their purely subjective preferences, only taking note in abstract and neutral terms of the shape of these preferences. Personal preferences are “ultimate givens,” their specific content irreducible “black box” starting points for economists.

At first glance, this appears to be a sound critique. Block size limits are indeed characteristics of specific cryptocurrencies as products. Users may well differ in their subjective preferences on such matters for reasons not even fully understandable. Users differ in their values. Motivations can even include various grades of membership signaling. An economist speaking on such things, this criticism goes, merely “smuggles in” his own particular personal preferences or party affiliation “dressed up as” objective analysis.

Can any role for economic analysis here be rescued from this critique? It may help to take a step back and consider some other scenarios to gain perspective and then return to apply that perspective to the case under consideration.

First, consider two hypothetical cryptocurrencies, one with a block size limit that directly influences the ordinary structure of supply and demand in its transaction-inclusion market, and another that does not (this can equally be the same cryptocurrency, such as Bitcoin, at two different phases in its history). The first cryptocurrency’s code alters the operation of the market between transaction senders and miners, limiting the total quantity of services that can be supplied per time period. Certain economic and industry-structure effects follow. These effects apply to a coin with this characteristic, but not to one without it. What are those differences? Those differences were the central theme of the interview that this series follows up on.

Yet subjective individual preferences do not alter the distinctions analyzed. Thus, even though the content of the preferences themselves may be a black box for economists, the two differing transaction-inclusion markets still have objectively describable economic distinctions independent of any such preferences. Dropping a stone from the Tower of Pisa is a choice, one with all manner of possible motivations, but the resulting acceleration of gravity is not altered by any personal opinion as to the nature and effects of such gravity.

Three intentional communities and their altcoins

Next, consider several hypothetical intentional communities. It is possible to establish and run such communities under various rule sets. Although intentional communities have often been to some degree communistic (“commune”), it is possible to set up other idealistic havens, perhaps some real-life attempt at an Ayn-Rand-style Galt’s Gulch or a Neal-Stephenson-style Thousander retreat. Participation is governed by a kind of “social contract,” but in this context the contract is more likely to be one that actually exists, including specified conditions to which participants have assented by joining and staying, possibly even signing a written agreement with terms of residence.

Let us assume that in all cases, no matter what the other internal rules and cultures, participants are not forced to either join or stay. This freedom of entry and exit corresponds to cryptocurrency participation choices.

Now consider three such voluntary intentional communities. Bernieland features a $20 minimum wage. MagicCorner bans "wage relations" altogether. Finally, Murrayville has no numerical restrictions on wage agreements. Even though all three are voluntary communities, only Bernieland and MagicCorner include labor rules that restrict wage rates. The voluntarily agreed community rules specify certain wage-market restrictions. These types of restrictions are traditionally analyzed under the rubric of market intervention by state agencies, which are often subsumed under the term “government.” Whether one wants to also call a complex around intentional community rules and enforcement measures a type of “government” or not is beside the point. There may be valid reasons for either using or not using that word, provided suitable definitions and qualifications are set out.

In this case, it is analytically valuable to be able to note how Murrayville is free of rules that specify restrictions on the existence or range of wages in its labor market. Murrayville might therefore be described within this context as having a labor market free of intervention—unlike Bernieland and MagicCorner. Considering this difference alone, one would expect Murrayville to therefore have the best functioning labor market of the three, with more ample employment opportunities for those aiming to work on a wage basis.

The fact that all participants in all three communities voluntarily join and agree to the respective terms of each does not alter the economic distinctions between their differing labor market rules. Even though all three communities are voluntary, it remains that only one has a minimum wage, another bans wages, and a third does neither.

Arguing that the term “intervention” can only apply to state agency actions does not aid in the economic analysis of wage rate restrictions within these voluntary intentional communities. One might try to suggest a better term to use here instead of intervention. However, since the effects of wage restrictions have already been analyzed under the rubric of state-made laws described as “interventions,” using established terms—with suitable qualifications, as was done—easily accesses the appropriate implications.

Now in an effort to compete for residents, each community launches its own altcoin. Berniecoin does not allow any transaction with a fee above 1.5 Bernielashes/byte to be mined. This seeks to create a price ceiling for transaction inclusion. No one can pay more within the protocol. No one can use greater wealth to supersede other transaction senders. MCcoin’s protocol includes no way to attach a transaction fee at all; no one can bid for priority by including a fee. Finally, Murraycoin does neither. Transactions with any fee, or none, can be sent, and each miner is free to include or exclude any of these. Each node is likewise free to either relay any of them or not, or to try to figure out some ways to monetize such services.
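The three encoded fee policies can be sketched as protocol validity rules. All names here (Berniecoin, MCcoin, Murraycoin, the 1.5 units/byte ceiling) come from the thought experiment above; nothing in this sketch models real software.

```python
# Hypothetical validity rules for the three communities' altcoins,
# expressed as fee-per-byte checks a node might apply to a transaction.

def berniecoin_valid(fee_per_byte: float) -> bool:
    # Price ceiling: bids above 1.5 Bernielashes/byte are invalid by
    # protocol rule, so no one can outbid others beyond the cap.
    return 0.0 <= fee_per_byte <= 1.5

def mccoin_valid(fee_per_byte: float) -> bool:
    # Fees banned outright: only zero-fee transactions are valid,
    # leaving no in-protocol way to bid for inclusion priority.
    return fee_per_byte == 0.0

def murraycoin_valid(fee_per_byte: float) -> bool:
    # No protocol restriction: any fee (or none) is valid; inclusion
    # is left entirely to each miner's judgment.
    return fee_per_byte >= 0.0

for bid in (0.0, 1.0, 2.0):
    print(bid, berniecoin_valid(bid), mccoin_valid(bid), murraycoin_valid(bid))
```

Only the Murraycoin rule leaves the bidding process itself unrestricted, which is the distinction the analysis below turns on.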

Once again, based on this alone, Berniecoin and MCcoin demonstrate forms of what has heretofore been best characterized as “market intervention” within their respective communities. In this case, their protocols specify this directly. Murraycoin alone is free of any such effective intervention in its transaction-inclusion market. The others have policies that place a ceiling on the payment of transaction fees. The voluntary nature of participation in all three does not alter this distinction. One cryptocurrency has a maximum transaction fee, another bans fees, and the third does neither. These respective encoded policies are indeed part of what users implicitly choose when they use one rather than another. Nevertheless, distinct economic and social implications follow from those differences, and do so apart from any beliefs or wishes as to the nature of such implications.

This price-ceiling example demonstrates the general applicability of market intervention analysis within the context of voluntary arrangements. With the issue of a block size limit that restricts normal transaction volume, the relevant concept is not a price ceiling, but an output ceiling.

How to have a cartel without forming one

A subtler misconstrual of my interview assumes that I argued that since a particular situation or dynamic exists, someone must have acted to bring it about. However, I made no mention of any specific persons or groups, nor did I attribute any intentionality or motive. If there is thunder, it does not necessarily follow that Thor must have hammered it out.

Instead, I identified a market. I noted an effective limit to industrywide service provision as actual market volume begins to interact with a limit long in place, but formerly inert for this purpose. I described some of the general effects of any such limit to the extent it actually begins to limit ordinary volume. I argued that these effects are negative, but also easy for observers and participants of all kinds to miss or underestimate because they entail hidden costs and distort industry structure evolution from paths it could have taken instead, but did not, thus rendering those possibly better alternative paths “not seen” in Bastiat’s sense.

Certain economic effects follow from output ceilings and these have commonly been analyzed in terms of cartel situations. Yet this implies no necessary argument that anyone has set out to form a cartel or to create any of these situations or dynamics. That would be a completely different argument, more journalistic in nature and evidence requirements.

Being encoded in a protocol is a new way for an output ceiling to exist. Normally—but not in this case—any given industry actor, either current player or potential entrant, could just violate such a ceiling unless facing some overt or threatened form of legal or quasi-legal enforcement. Consider post-war Japanese steel production. An industrywide output ceiling was maintained for many years to limit competition. The Ministry of International Trade and Industry “recommended” this as a “voluntary” measure for domestic steelmakers. Of course, when some rebels sought to exceed the limit, MITI simply refused to approve their requests to purchase more iron ore and fuel, which it also oversaw. Only through MITI could such a limit be maintained.

This type of limit sets up an upside-down and sub-zero-sum dynamic in an industry. There are concentrated gains for the inefficient (who should otherwise probably quit and sell off assets), somewhat less concentrated losses for the more efficient (who are unable to expand as much), hidden losses for would-be entrants (who are never seen because they avoid entering a market with an arbitrary ceiling), and dispersed and nearly invisible losses for many anonymous end users (who mostly have little clue about any of this and how it is happening at their own expense). Once again, though, all this can be so regardless of anyone’s knowledge or intentions.

To say with regard to the block size limit that there exists an industry situation with effects like those of an enforced cartel does not necessarily also imply that 1) some people set out to create it, or that 2) all or even any such people actually benefit from it on balance, or that 3) any of them fully understands it. Each actor has his own intentionality and working models of causality, but all of this combines into social outcomes that result, but were not necessarily planned from the outset to take the forms taken. Describing such unplanned social effects, Adam Ferguson wrote in 1767 that, “nations stumble upon establishments, which are indeed the result of human action, but not the execution of any human design.”

That said, noting the social science concept of spontaneous emergence as one factor to consider does not also constitute a claim that certain effects have not been planned or that they do not actually produce special interest benefits for some at the expense of others. It only points out that any such intentions and plans as may or may not exist are not directly relevant to the comparative analysis of rule effects. The topics are distinct.

Block Size Political Economy Follow-Up 1: Software Choice, Market Differentiation, and Term Selection

An interview with me on the Bitcoin block size limit appeared on 4 May 2016 on Bitcoin.com. Below, I develop additional clarifications and examples partly inspired by a range of comments and reactions to it. This is meant to build on and develop ideas in the original interview. For ease of reference, here is a PDF version of that interview.

This is a three-part series. Part 1 below covers a range of issues including the need to differentiate the market that was discussed in the interview from other distinct markets and non-market choice phenomena such as free software selection. It also begins to discuss the use of the term market intervention in this context. Part 2 will then continue by arguing that neither the voluntary nature of cryptocurrency participation nor the subjective nature of user preferences nor any alleged motivations on the part of the various actors involved alters my analysis. Finally, Part 3 will focus on economic distinctions between the 21-million bitcoin production schedule and the block size limit, arguing that these are different in kind and thus poor objects for analogy.

Chicago Board of Trade: People buying and selling form a market. Prices are key artifacts that market processes leave behind.

Two markets and a non-market choice sphere

One idea that showed up in comments was that I had expressed some view as to which Bitcoin software one ought to run. However, I did not address this at all. I have only published one previous preliminary article on the block size limit, on 20 June 2015, and this also did not mention implementation choice. Various views on this topic do not alter my analysis of the topics that I did address.

A related idea is that the current dominant software implementation already reflects “the choice of the market.” Therefore, any discussion of differences between a cryptocurrency having or not having a given block size limit is moot: the “market” has already spoken and this is evident in implementation share statistics.

It should be cautioned, however, that software choice reflects many considerations. Interpreting it as a proxy for a single issue is imprecise. Such choices may well reflect a generalized confidence in perceived quality and reliability. A user could therefore make a particular software choice either: 1) because of one specific code issue, 2) despite that same particular issue, or 3) regardless of it.

Such imprecision and ambiguity are among the reasons I did not discuss this matter at all. A more fundamental reason, however, is that it has no bearing on my analysis. Whether some percentage of a given population prefers Pepsi or Earl Grey tea does not alter the composition of the respective beverages in the slightest way, nor their respective effects on metabolism. Such things can be studied and assessed independently of the current statistical shape of user preferences.

In addition, choice of which free software to run does not really constitute a market, except in a metaphorical sense. Developers offer software products and users select and run such products. In a free software context, nothing is bought or sold between these groups. No price signals exist directly between users and developers.

In contrast, the central topic I addressed—the market for the inclusion of transactions on the Bitcoin blockchain—is indeed a market, one that involves quite different roles and actions than producing or running one version or another of free software. This is a market in which bidders send transactions, which takers (miners) either include or not in each respective candidate block. This market involves specific senders of specific transactions (not senders in general of transactions in general). At the other end, specific miners build each of their respective candidate blocks. In deciding whether to include any, all, or some transactions, each transaction’s fee/byte (its bid) is the salient factor. Node operators act as key intermediaries, like referring brokers, currently uncompensated. On-chain and off-chain transacting options, both existing and potential, coexist in this context in a complex blend of competition and synergy.
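The open-bid dynamic just described can be sketched in miniature. This is a simplified illustration, not real mining software: the four-transaction mempool, sizes, and fees are invented for the example, and real miners use more sophisticated selection than a simple greedy sort by fee/byte.

```python
# Simplified sketch of the transaction-inclusion market: senders attach fee
# bids, and a miner fills a size-limited candidate block greedily by fee/byte.

from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    size_bytes: int
    fee: int  # total fee offered by the sender (the bid)

    @property
    def fee_per_byte(self) -> float:
        return self.fee / self.size_bytes

def build_candidate_block(mempool: list[Tx], max_block_bytes: int) -> list[Tx]:
    """Greedily include the highest fee/byte bids until the size limit binds."""
    chosen, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee_per_byte, reverse=True):
        if used + tx.size_bytes <= max_block_bytes:
            chosen.append(tx)
            used += tx.size_bytes
    return chosen

mempool = [
    Tx("a", 250, 1000),  # 4.0 per byte
    Tx("b", 500, 1000),  # 2.0 per byte
    Tx("c", 400, 2000),  # 5.0 per byte
    Tx("d", 400, 400),   # 1.0 per byte
]
block = build_candidate_block(mempool, max_block_bytes=1000)
print([tx.txid for tx in block])  # -> ['c', 'a']: lower bids are priced out
```

When the size limit binds, as here, the lowest fee/byte bidders are excluded in that round, which is precisely why the limit's interaction with actual transaction volume has market effects.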

There are therefore at least several phenomena to differentiate. First, the buying and selling of bitcoin forms textbook markets on the order of commodities and forex markets. Those effectively controlling given bitcoin units can sell such control in exchange for some other money unit, product, or service, or give them away as gifts. Second, bidding for on-chain transaction inclusion and miner decisions to include or not include transactions in candidate blocks forms a distinct open-bid market for on-chain inclusion priority. Third, developers offering free software and users making decisions on which implementations to run for their various purposes does not constitute a market in the sense of a complex of buying and selling behavior.

Whatever one may choose to call these three phenomena, each is meaningfully distinct from the other, describing different sets of actions and roles. To claim that “the market has spoken” in the context of software choice is therefore far less informative than it might at first appear to be. Making such a claim requires specifying what exactly has allegedly spoken (it isn’t a market) and the content of this purportedly speaking thing’s alleged message (ambiguously mixed with considerations such as general perception of code reliability).

The term “market intervention”

Several commenters took issue with my use of the term market intervention in this context. It is true that market intervention has a negative connotation for many readers, though not all. Indeed, a great many persons eagerly advocate some form of governmental intervention in economic affairs as part of their ordinary political opinions. Still, one interpretation would be that I had set out to create negative connotations and thus arrived at my word choice using rhetorical criteria.

A different interpretation would be that I set out to select the most accurate available technical term to describe the phenomenon under consideration. I then specified what I meant in using this term and excluded certain inapplicable historical and institutional associations. This is my own first-hand interpretation of what I did in selecting this language. That it still has negative connotations for some may be natural in that what it describes has negative effects. However, word choice one way or another does not alter such effects.

Another related but more substantive criticism that appeared in several variants argues that a block size limit is just a qualitative characteristic of a cryptocurrency as a good. A given limit is baked into what the good is. As such, it cannot be construed using the model of economic intervention. If a characteristic is already in the product, how could it possibly be construed as intervention (from outside)?

However, I had already stressed in the interview how novel and unprecedented this situation is. My argument was that even though the legal and practical contexts of traditional interventionism conducted by state agencies are completely different, the economic effects on this transaction-inclusion market are nevertheless the same as those a government-enforced industrywide output ceiling would produce. This will be addressed further in Part 2.

A commenter suggested that I was arguing from history that the current block size limit was not part of “consensus.” Consensus, in this debate, often seems to transcend a mere computer science fact to also encompass an allusion to a hard Bitcoin Realpolitik. Any other considerations, such as the documented history of the block size limit, are irrelevant to this current reality.

However, I did not reference or use any concept of consensus at all. Nor did I question the reality of any given state of consensus on the network at any given time. What I did was analyze differences between possible states of code and then describe economic and social implications of such differences.

A loosely related idea was that my analysis was tantamount to advocating that cryptocurrencies should not maintain any limits or standards. If one calls into question one sort of limit, such as the current Bitcoin block size limit, why not question all limits? Why not also advocate raising the maximum coin count? That, after all, is also a “limit,” so why not call keeping that in place an “intervention” too? This will be addressed in greater detail in Part 3.

The interview itself concerned one such limit and not any others. Why? I could have branched off to discuss the sociology of decision-making or described a software preference. But I did no such things. I could have discussed any other protocol characteristic or issue. Why did I discuss only this one? The answer is that I think this limit has unique economic features that are both important and poorly understood. Explaining this was therefore the focus.

Continues with Part 2: Market Intervention through Voluntary Community Rules

Only the faster profits: A powerful health measure and why it is unadvertised

My journey in nutrition science studies and personal nutrition practice over about the past six years has been characterized by “punctuated equilibrium”: long periods of stability with minor updates from my readings and small alterations to practice. But every couple of years, it seems, such an equilibrium is slammed into a rather different shape over just a few days.

What follows is about a book that just did this. It has not overturned anything I was doing before, but has lifted my understanding and led me to try some important practice modifications.

Dr. Jason Fung has produced a new book that is vitally important, well written, argued from the highest quality available evidence, and not lacking in careful doses of wit and humor. This is not just another weight to further depress already strained diet-section bookshelves; it is a brilliant yet concise scientific integration, delivered so that a general audience can also benefit directly.

The Obesity Code (March 2016; foreword by the legendary Professor Tim Noakes) states, and largely follows through on, a preference for rigorous controlled human trials over the kinds of associational, epidemiological, and often scientifically weak “studies” (sometimes of a few rats) that typically grab headlines with hyped and unwarranted inferences. The book's central theory does what a good scientific theory should: it explains all the relevant high-quality evidence in a systematic, logical, and accessible way. It also addresses the oversupply of low-quality evidence and non-evidence that leads so many astray. For hardcore readers, the endnotes run 32 pages, no small proportion of which are research journal citations.

Context: Before I read this book

In October 2010, my long-term general interest in healthful nutrition jumped to the next level when I read The Primal Blueprint by Mark Sisson. This kicked off some major personal changes and a side quest to read in nutrition and exercise science to examine controversies with practical implications for what I decide to do in my daily life.

The intellectual side of this journey included Good Calories, Bad Calories (2007) and Why We Get Fat (2011) by Gary Taubes; numerous books and articles by Robb Wolf, Loren Cordain, and others; biochemical metabolism research; and evolutionary health reasoning and related paleo-archeological controversies.

The next major step came in 2013, when I shifted to a ketogenic approach based on the work of Jeff Volek and Stephen Phinney, two career researchers and pioneering experts on nutritional ketosis and exercise performance. Compared to the Primal Blueprint framework recommendations, this entails reducing daily carbs further to under 20g and increasing natural fats to replace that sugar energy while maintaining moderate protein. This is often labeled “low-carb, high-fat, and moderate protein,” or LCHF. This is not your cringeworthy ketogenic lab-chow from classical research and medical use. It is all quite real food.

To assemble my own thoughts from such widely varied sources of research, inspiration, and practice, I created a webpage called Evolutionary Health. There I summarize the current state of my views and link to standout resources. I update this from time to time with information new to me, and refinements of my working synthesis. That page includes material on food production and environment, particularly desertification. It now includes multiple references to Fung’s work.

Until now, if asked what to read for ways to improve health through nutrition, my top starter book recommendations have been The Primal Blueprint, mentioned above, and The Art and Science of Low Carbohydrate Living (2011) and The Art and Science of Low Carbohydrate Performance (2012) by Volek and Phinney. I then recommend The Big Fat Surprise (2014) by Nina Teicholz, another great contributor in the tradition of Taubes—exposing the modern nutrition emperors to be shockingly underdressed. This adds a larger scientific and historical context, including how modern conventional wisdom on nutrition has been formed: far more by politics, loose intuition, and charisma than by legitimate scientific evidence.

Now, however, I might start people right off with The Obesity Code.

Pinpointing the root of metabolic syndrome

What causes obesity? What are the best weight control practices? Everybody thinks they know the answer. Fung demonstrates that this “everybody,” such as it is, remains quite confused.

The book presents a single central theory of overweight. While this extends to diabetes and metabolic syndrome more generally, the book focuses on overweight as the epicenter of the modern long-term degenerative symptom cluster. It argues that the central underlying phenomenon in obesity is insulin resistance. Successful treatments, especially if they are to have lasting healthy effects, must lower insulin resistance.

Insulin resistance is analogous to drug tolerance. The more of a drug one has taken over a longer period, the higher the dose needed for a similar effect. Likewise, the more time the body must swim in evolutionarily novel quantities of insulin, the more likely it is to up resistance. Such resistance is also stubborn; it rises much more easily than it falls. A self-reinforcing pattern of elevated insulin and elevated resistance begins. When insulin-producing beta cells can no longer keep up in this death race and begin to fail, we call that “type 2 diabetes.” The conventional treatment? Just inject more insulin; the race must go on. But the patient keeps deteriorating.

Genetic differences and age both impact individual insulin resistance response. This helps explain wide variations among people eating similarly and for the same person at different ages. This insight rescues a too-simple carbohydrate-obesity theory from the obvious rebuttal: just point to some carb-eating thin people. The book also emphasizes the better-known distinction between the effects of carbs in natural forms versus those in modern processed and refined forms.

But first, how did we get here and why are we still here?

It would be relatively simple to explain some measures to lower insulin resistance, such as some of those practiced at Fung’s Intensive Dietary Management program. However, the complication he faces, and faces up to squarely in this book, is that entire industries, bodies of officialdom and authority, and entrenched conventional wisdom all combine to promote and sell methods that either do not reduce insulin resistance, or raise it still further. Treating advanced type 2 diabetes with insulin injections is partly comparable to treating an advanced alcoholic with a steady rotgut supply. It patches some symptoms, even as it gradually worsens the condition and leads to further deterioration.

Official bodies and industry interest groups have pushed failing methods and theories relentlessly for decades (whether intentionally or unknowingly does not change the outcomes). Massive failures to promote health never dissuade; more of the same is always their answer. Some “success,” however, is still visible. It shows up in untold billions on the income statements of 1) ag and food companies selling profitable processed products that gradually sicken people and 2) pharma and healthcare organizations producing products and services to treat the resulting chronic degenerative symptoms, mostly without addressing causes. With causes untreated and the sick getting sicker, the massive sums involved not only keep flowing, but keep expanding.

The book must therefore also take the time to expose and refute common, widely accepted, well-funded, officially promoted, and dead wrong claims and practices. In each case, it demonstrates how the highest quality available evidence, common experience, and logic show that conventional weight management methods fail—and that they fail is probably the best that can be said of them.

Don’t just do something, stop

The book’s most important practice implication is less about food and more about the need for its periodic absence. In health, politics, and some other fields, people tend to respond to serious problems with a somewhat desperate “just do something” attitude. But the most helpful measure might instead be to stop doing something. Rather than “solving” a problem, what may be required is to stop creating its causes. In this case, if there is too much eating too often, stop doing it. And there’s a word for that—fasting.

A fasting period is nothing more than the time between eating sessions. Longer pauses begin to take on names such as intermittent fasting (IF), and still longer pauses simply fasting. So in this sense everyone fasts already, Fung reassures. The variation is in how long and how often. Fasting’s true opposite, it emerges, is frequent snacking.

Fung notes that fasting has been promoted and practiced through cultural traditions the world over for thousands of years. (That, I would add, might mostly just reflect the duration of available records.) Fasting has been promoted for health, clarity of mind, and spiritual refinement, often carried through religious practice traditions.

He also does not shy away from explaining that fasting and IF are unique in important ways from a politico-economic standpoint. The person who fasts benefits substantially, while his corresponding cost is actually less than zero: he saves both money and time. He gains freedom through reduced frequency of buying food, preparing it, eating it, and cleaning up, which can add up to large blocks of time and attention.

For example, I have moved mainly to a 23-hour daily fast framework for now (with occasionally longer stretches as well). This simply means eating one meal a day during an approximately one-hour period. Simple as can be. I may next try alternate-day fasting (eating normally one day and not at all the next day) to compare the effects. The latter pattern has been commonly employed in research trials.

The implication is that no one else besides the person fasting stands to profit from it. Only the faster profits. No pharma company sells more of its drug (some may sell less). No food company sells more of any boxed creation (some may sell less). No elaborate diets must be studied and followed, no calorie counting apps employed, no juicing machine bought and fed with plant carcasses, no special shopping list assembled, no exotic ingredients ordered online.

Of course, Fung, a practicing physician and kidney specialist, is careful to warn that, at minimum, those already on metabolic medications, foremost insulin, must work closely with a physician. This may entail careful adjustments, which should be made only under proper supervision. Severely low blood sugar is a particularly dangerous condition that can follow from mis-coordination of drug dosages with current health state and eating patterns.

Fasting versus calorie reduction

This book clarifies that just “eating less,” as a method, does not deliver the positive effects of fasting; it has opposite effects on the relevant all-important regulatory hormones. Under calorie reduction, metabolism drops to compensate for the stable lower-energy environment. Metabolic rate then stays lower long afterwards, which explains both stalling progress and later regain.

With true fasting, however, metabolism either stays level or increases. This seems congruent from an evolutionary standpoint. A few days of bad hunting (no food at all) means it is time to get out there and hunt, and do it more effectively than before. Sitting in the cave and getting cold, moody, and depressed is not going to help.

Likewise, the book recommends eating normally (though ideally also low carb) when one does eat. That means not being hungry after the meal, as can happen under conscious calorie-cutting methods. Readjusting the modern unnaturally feasting-heavy “feast and famine” balance away from too much feasting should not, in this view, entail skipping the feasting parts altogether, just extending the fasting phases.

The author emphasizes the distinction between lowering insulin and reducing insulin resistance. Just lowering insulin by changing food content might help, but might not always be enough to fully reverse an existing condition. Chronically high insulin is among the causes of elevated insulin resistance, but influencing insulin resistance itself must remain the real prize. A focus on insulin, per se, then is one way to get off track, a false summit.

The book discusses effects on lean mass. The trial research again shows that fasting has important effects that are opposite to those of calorie reduction within conventional meal timing patterns. It is calorie reduction that leads to lean wasting (“starvation mode”), while fasting does not. Fasting stimulates junk protein breakdown for recycling as well as human growth hormone release, a build-oriented combination. A steady calorie reduction program never gets around to these things. All the way down to actual severe starvation, it never generates the hormonal, metabolic, and cognitive benefits of fasting.

Some other nods to tradition

The book also mentions how certain traditional practices hold up well when judged against the insulin-resistance theory. Eating together at mealtimes, and not in between, automatically sets up longer fasting periods. This is just the opposite of the frequent eating and snacking practices that snack sellers push.

Likewise, widespread traditional uses of vinegar and fermented foods are given a nod based on experimental evidence that vinegar moderates insulin response. For example, Japanese cuisine’s penchant for combining rice with pickles and for making sushi with vinegar-soaked rice likely affords some protection from rice’s insulin-spiking characteristics.

Such factors may help further clarify the “Asian rice paradox.” A simple carbohydrate-obesity theory struggles to explain why East Asians eating large amounts of rice did not become obese in the 20th century. Traditional eating patterns, activity patterns, and food combinations may well all have contributed. Genetic influences on insulin resistance are also possible contributors.

More recently, however, these same populations have started gaining weight, and diabetes is on the rise. This coincides with increased consumption of sugar, flour, and other processed foods, greater fast food intake, more sedentary occupations, and a snacking culture that can spread with processed snack food marketing and distribution. Not only do snack foods (and with them snacking) tend to shorten traditional fasting periods, but most of these items are made almost exclusively from insulogenic processed derivatives of cheap (and often government subsidized) agricultural grain crops, foremost sugar, wheat, and corn.

Optimization, and the final defeat of the “thermodynamics” refrain

For established low-carbers still not entirely happy with their body compositions and looking for more optimization (like me), Fung argues that while LCHF is a powerful approach, it is not the most powerful one. Each food, except perhaps pure refined fat, generates some insulin response, though this varies depending on the food. Regardless, there is no way to beat fasting at getting insulin down to rock bottom and keeping it there for long stretches, providing an environment in which insulin resistance can also gradually sink.

It is insulin resistance, Fung argues, that directs the body’s fat storage “set point,” the fat composition level the body fights to keep and return toward. In any long-running war against a conscious, conventional “eat less; exercise more” strategy, the body’s homeostatic set point always wins. Cutting calories can appear to win a few battles, but this cannot last. Calorie cutting, depending on what is actually eaten in a given program, can also sooner or later lead to weakness and gradually advancing malnutrition. Worse, the stress of being regularly hungry, cold, and malnourished can backfire further by raising stress hormones—which also stimulate insulin.

The way forward is to address the set point itself, and that means modifying insulin resistance. With this, Fung establishes why and how attempts to reduce weight by merely lowering calories within existing meal patterns fail in the long run, ending in regain, often to a level above the starting weight.

And as for the ever-reliable “but, it’s all just thermodynamics” refrain, which insists that weight control is nothing more than regulating calories in and calories out as in a lab beaker, it is true that caloric balance does change with weight loss following from fasting. However, that change is an effect, not a method. Fung demonstrates how and why methods with long-term success must treat the chronic hormonal condition of insulin resistance. Doing so allows the body’s fat storage set point to fall back to a more natural level to which the body then happily self-regulates.

This means that sustainable changes to caloric balance follow from a set point change but do not necessarily cause it, contra standard advice. The body has far more tricks to fight back with than consciously calorie-cutting dieters can possibly overcome for long. The more they fight using the usual failing methods, the stronger the body’s countermeasures become. Thus, seemingly unassailable advice to “just eat less,” offered as a method for change, is worse than useless. And as Taubes had also argued in Why We Get Fat, naive misapplication of a simple physics concept to a complex homeostatic system serves only to support blaming obesity victims on the basis of scientifically untethered and even primitively moralizing causal theories.

Could be better combined with LCHF literature

Something emphasized in the LCHF literature, but less so in this book, is that being in nutritional ketosis is already a quasi-fasting state compared to the common contemporary glycolytic (“carb burning”) state. It is far easier for those already in nutritional ketosis to simply not bother eating at times. They can start and continue fasting while hardly noticing, especially when compared to typical carb burners in pursuit of their next glucose fix.

People in a dominant glycolytic state transitioning to either nutritional ketosis or to fasting (fasting ketosis) can each report some similar transitional symptoms and discomforts such as headaches and low energy. People already in a dominant lipolytic (“fat burning”) state, however, have only to go from nutritional ketosis to fasting ketosis, a far milder transition. Mainly advising fasting for people coming right from a conventional diet could run them into challenges. Starting with nutritional ketosis makes fasting easier.

But beginning either practice still tends to require an initial transition. In favor of a fasting-first approach, fasting is much simpler to execute and monitor. It just involves not doing something. Changing the content of one’s habitual diet entails more ongoing decisions, leaving more room for errors and subtle program regressions.

On balance, both LCHF and fasting are important and mutually reinforcing. Either could come first or they could be adopted together. There are various pros and cons in emphasizing one or the other to newcomers, a question mainly of strategy and practical experience.

An integrative milestone

This book has enabled me to take what information and practices I had already filed away as solid and useful, and revise that totality into a better-integrated picture. This helps me better harmonize contributions from several schools of thought within the broadly defined evolutionary nutrition movement. Fung suggests that some sub-groups that tend to engage in in-fighting are probably each just right about their own particular puzzle piece. Now we get a clearer look at the box-cover photo for that whole puzzle at a single glance.

Perhaps the most encouraging message from this book is that, unlike basically every “diet” strategy, there is good reason in existing high-grade research not to expect regain from a fasting approach. Fasting and LCHF to target insulin resistance are quite distinct from the many conscious caloric balance variants that have failed long-term so consistently and so epically for decades. In addition, evidence is also accumulating to indicate likely protective, and especially preventative, effects of fasting on other “diseases of civilization,” including neurodegenerative cognitive conditions, heart disease, and cancer.

We can try to fight the body’s fat composition set point without changing it—and many, many have—but only at great cost and effort and with a near guarantee of long-term failure. A few battles may be won, but the war’s outcome is already clear. The set point wins. Conventional calorie restriction does change the set point—it raises it! This makes apparent temporary successes from calorie-reduction programs Pyrrhic victories.

Armed with methods that can lower the set point instead, we can finally get our bodies and ourselves back on the same side. This is the central message of this brilliant, heroic, and accessible book in a field of crucial importance to human well-being.

Some misplaced explanations of bitcoins as tradable units

This is an excerpt from Chapter 8, “Some illusions of enlightened explanations,” in my book, Are Bitcoins Ownable? Property Rights, IP Wrongs, and Legal-Theory Implications.

As important as it is to gain at least a basic technical understanding of Bitcoin, attempts to describe what its tradable units “really” are, as elaborated from some allegedly more enlightened perch, can sometimes distract more than aid when applying economic and legal concepts. For example, pundits discussing whether bitcoin falls under what they each consider to be “money” or not sometimes explain that bitcoin is really just a “ledger entry” or a “protocol token,” a harmless technical artifact of a promising new “blockchain technology.”

Whatever the root of or strategy behind such discourse, however, a bitcoin buyer does not in fact seek a share in a distributed ledger or any other such tortured monstrosity. He wants to buy a bitcoin in the same sense that he might want to buy a grapefruit. He in no way sets out toward the market to buy a share of a global orchard cooperative that also happens to entitle him to one grapefruit that day.

Molecular diagram of grapefruit mercaptan. Tasty.

Nor is it relevant that a grapefruit is “really” organic molecules, water, and some other substances. For that matter, a physicist might go further and insist that a grapefruit is “really” nothing but some occasional quarks suspended in vast stretches of empty space. All such misused reductionism is irrelevant to understanding the buying and selling of grapefruit. It likewise has no bearing on whether grapefruits can be eaten without being paid for and how or if people ought to react if they are.

Really just quarks and empty space (Wikimedia Commons, Aleph)

Economic theory and legal theory are fields concerned with human acts, such as acquiring, holding, trading, and stealing. Action is marked by verbs. If one is interested in understanding the grapefruit market, one does not seek first to master grapefruit-tree cellular biology, let alone quantum mechanics. It is sufficient for economics to view those grapefruits actually being traded as the relevant goods, the production, pricing, and distribution of which are to be examined using economics methods.

This implies the importance of taking care in selecting which fields of knowledge, aspects of the phenomenon, and “layers” of reality are the most relevant to consider in understanding what bitcoin “really” is, including with regard to whether it is ownable.

One must also proceed with caution in applying analogies. For example, it is easy to view bitcoin as just like other digital blips buzzing around the internet. However, it should be emphasized that buying a bitcoin is not like buying other digital goods, such as a copy of a song file. One does not buy a copy of a bitcoin, but a bitcoin itself. A bitcoin seller no longer possesses the bitcoin in question after the sale (and contextually sufficient confirmations). When one buys (a copy of) a song file, in contrast, the possessor retains copies from which to make more copies.

Most digital goods, such as documents and song files, are nonrival. They can be copied. Multiple people can use multiple copies simultaneously. “Stealing a copy” leaves the original as it was. It is not gone after being “stolen.”

Likewise, not only can a whole blockchain be copied, but some key part of its value derives from its actually being so copied and distributed with redundancy to numerous independently operated locations. A signed bitcoin transaction is also a short digit string that can be copied, sent, and resent around the globe in fractions of seconds. These are nonrival goods, as are cryptographic signing keys. With nonrival goods, one person can have one copy and another person can have another copy and each person can control these respective copies independently and simultaneously.

However, this is not the case with bitcoins. A bitcoin cannot be copied in any such way. It is rival in the same sense as a physical object or spatial location. In addition, a bitcoin cannot be sufficiently described as “just a ledger entry” because a ledger entry records something. This formulation alone does not yet explain what it is that is recorded.
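The rivalry contrast described above can be sketched in a toy model. This is purely illustrative, not actual Bitcoin code; the `ToyLedger` class and all names in it are hypothetical. The point is that copying a nonrival good leaves the original intact, whereas transferring a rival ledger unit necessarily removes it from the sender.

```python
# Toy illustration (hypothetical, not Bitcoin code): a nonrival good
# can be copied so both parties hold it; a rival ledger unit cannot.

def copy_song(song: str) -> str:
    """Copying a nonrival good: the seller keeps the original."""
    return song  # both parties now hold a usable copy


class ToyLedger:
    """A minimal ledger in which units are rival: a transfer debits the sender."""

    def __init__(self, balances: dict):
        self.balances = dict(balances)

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


song = "la-la-la"
buyer_copy = copy_song(song)
assert song == buyer_copy            # the "seller" still has the song

ledger = ToyLedger({"alice": 5})
ledger.transfer("alice", "bob", 2)
assert ledger.balances["alice"] == 3  # Alice no longer holds those 2 units
assert ledger.balances["bob"] == 2    # they now exist only with Bob
```

The asymmetry is the whole point: after `transfer`, the units exist only on the receiver's side of the ledger, just as a bitcoin seller no longer possesses the bitcoin after the sale, while `copy_song` models the song-file case in which the possessor retains copies from which to make more copies.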

From a unit perspective, bitcoins function as a digital monetary commodity according to strict economic-theory definitions. From an integral perspective, the units are inseparable aspects of the Bitcoin blockchain. They cannot exist without it and it does not exist without them. There is a nondualistic relationship between bitcoin units and the Bitcoin blockchain; while they are distinguishable conceptually, they are not separable in reality.

Announcing new book on bitcoin and legal theory

The first of several concurrent research and writing projects has just hatched: Are Bitcoins Ownable? Property Rights, IP Wrongs, and Legal-Theory Implications.

This is a study in the foundations and implications of action-based jurisprudence, forged through applying it to bitcoin. This brings together for the first time the two major fields on which I have been writing over the past five years.

The context includes relationships among crypto-anarchist thought (such as contract assurance through software code), conventional legal administration (bureaucratic classificationism and rule through law), and ideal legal practice (actual promotion of justice), as well as related philosophical issues such as the combined use of multiple knowledge fields and the ethics of legal practice. Among the book’s central themes is whether and how the same principles that both support property rights in measurable objects and locations and argue against IP claims in copiable ideas and abstractions may apply to the unique new case of bitcoin.

Here is the back-cover description:

Bitcoin has fresh implications for economics and law at many levels. This book addresses whether bitcoins ought to be considered ownable under an action-based approach to property theory, which—like bitcoin itself—transcends the boundaries of existing positive law jurisdictions. Beyond instinctive answers is a rich opportunity to examine the many technical facts and legal-theory issues involved. Bitcoin has a unique new place among types of economic goods, between the physically and spatially defined goods of property theory and the copiable, abstract ideas, patterns, and methods associated with IP rights. It does not fall so easily into existing categories.

The author brings together here for the first time his work in an approach to legal philosophy grounded directly in the analysis of human action, which he has termed action-based jurisprudence, with his several years of writing about bitcoin from a monetary theory perspective and contributing through articles, presentations, and video productions to raising general public understanding of how Bitcoin works on a technical level.

This content (22,000 words) is licensed under Creative Commons and has been made available in commercial paperback and Kindle versions on Amazon as well as other ebook store versions, and a free PDF of the paper version to facilitate quick and full access to the text, previewing, sharing, text searching (beats an index), quoting, and citation by page number.

Ways to support this work and encourage future work like it include spreading the word and sharing, writing reviews on Amazon and elsewhere, posting quotations, and buying a commercial edition.

Most of all, enjoy. Hopefully, no reader’s views on the topics addressed will remain entirely unaffected. Mine were not.

Paperback edition at Amazon ($6.99)

Ebook stores ($2.99): Kindle edition (free under Kindle MatchBook program for buyers of paper version), iBooks, Kobo, Nook, Oyster, Page Foundry, and Scribd.

PDF of paperback edition (Free supplement to commercial editions or consider sending an optional bitcoin tip)

Watch the five-minute video introducing the book on my Amazon author page, which can also be followed for future releases.

The paperback version is available at least on the US, UK, and EU-area Amazon sites, though I am not sure about elsewhere. The Kindle version is available on most national Amazon sites worldwide.

Preview: “The market for bitcoin transaction inclusion and the temporal root of scarcity”

What do you see in those blocks? Source: Wikimedia Commons: “Crown Fountain” by Tony Webster.

I have been considering the Bitcoin block size debate for quite a few months (next to some other large projects), reading, learning, and applying principles. It is such an important and contentious issue that I have taken extra time before commenting at all to research and keep following the wide range of factors, opinions, and related issues.

In seeking to apply economic theory in new ways, and when addressing Bitcoin in particular with it, I try to take even more care than usual to first acquire a sufficient technical understanding so that I can usefully apply such theory to the case. The block size issue has set that bar still higher than it had been with other Bitcoin topics I have addressed.

I am convinced that the roots of much of the contention are primarily economic-theory differences and only secondarily technical or even social ones. Additional issues of governance and decision-making likewise come to the fore mainly when people are severely conflicted about the right thing to do and the issues then descend into “political” contests of influence and persuasion. There are also economic ways to understand the kinds of circumstances under which issues tend to become viewed as “political” in nature rather than not.

In short, if it were clear what ought to be done, that could be implemented with some work. Yet not only has widespread consensus on the right thing to do been slow to arrive, but the disagreements appear rooted more in differing opinions on economics, a specialized field entirely distinct from engineering, programming, and network design. Worse, too much of what passes for “economics” in the official mainstream today has been built upon a foundation of long-refuted nonsense, so relying on it is unlikely to help matters along either.

A 30-page written treatment is in the editing and review phase. For now—in response to numerous behind-the-scenes requests for comment—here is a summary preview of some of the essentials of my take on this as of now. The forthcoming paper contains citations, support, and step-by-step context building and also covers many more related topics than this summary can touch on.

Summary of some findings

The block size limit has for the most part not ever been, and should not now be, used to determine the actual size of average blocks under normal network operating conditions. Real average block size ought to emerge from factors of supply and demand for what I will term “transaction-inclusion services.”

Beginning to use the protocol block size limit to restrict the provision of transaction-inclusion services would be a radical change to Bitcoin. The burden of proof is therefore on persons advocating using the protocol limit in this novel way. This protocol block size limit was introduced in 2010 as an anti-spam measure. It was to be an expedient to be removed or raised at a later stage as normal (non-attack) transaction volumes climbed. It was not envisioned as having anything to do with manipulating transaction fees and transaction-inclusion decisions on a normal operating basis. The idea of using the limit in this new way—not the idea of raising it now by some degree to keep it from beginning to interfere with normal operations—is what constitutes an attempt to change something important about the Bitcoin protocol. And there rests the burden of proof.

If that burden is not met, the limit ought to be (have already been) raised—by some means and by some amount. Those latter details do veer more legitimately into technical-debate territory (2, 8, or 20MB? new fixed limit or adaptive algorithm? Phased in how and when? etc.), but all such discussions would be greatly facilitated by a shared context on the goal and purpose of any such limit having been placed into the code. A case for establishing some completely new reason to retain this same limit—other than as an anti-spam measure—would have to be made by its advocates if they were to overcome the default or “when in doubt” case. The context shows that this when-in-doubt default case is actually raising the limit, not keeping it unchanged.

Casual and/or rhetorical conflation of the block size limit with the actual average size of real blocks is rampant. This terminological laziness obscures the key questions: whether any natural operational economic constraints on block sizes exist (or could become even more relevant in the future), what those natural constraining factors might be, and what degree of influence they might have on practical mining business decisions. In strict terms, nothing can be done without some non-zero cost. For example, including a transaction in a candidate block carries some non-zero cost, and larger blocks propagate more slowly than smaller ones, other things being equal.

How can the real influences of such countervailing factors be discovered within a dynamic complex process? Markets and open competition excel at just this type of unending trial-and-error tinkering problem. However, setting a blanket restriction at an arbitrary numerical level on the output of transaction-inclusion services across the entire network distorts such processes, preventing accurate discovery and inviting both general economic waste and hidden zero-sum transfers.

Transaction-fee levels are not in any general need of being artificially pushed upward. A 130-year transition phase was planned into Bitcoin during which the full transition from block reward revenue to transaction-fee revenue was to take place. The point at which transaction-fee revenue overtakes block reward revenue should not have been expected to arrive any time soon—such as within only the first 5–10% of time that had been planned for a 100% transition. Transaction-fee revenue might naturally come to exceed block reward revenue in say, 20, or 30, or 50 years, or whatever it ends up being. Yet even that is still only a 50% milestone in the full transition process. Envisioning the long-term future of mining revenue should also factor in the clear reasons for anticipating steady secular growth in real bitcoin purchasing power.
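To make that long transition phase concrete, here is a minimal Python sketch of Bitcoin's subsidy-halving schedule (50 BTC at launch, halved every 210,000 blocks, roughly every four years). The function names are my own, but the figures follow the protocol's published emission rules, including the integer right-shift used by the reference implementation.

```python
# Sketch of Bitcoin's block-subsidy schedule (halving every 210,000 blocks),
# illustrating the long planned transition away from block-reward revenue.
# Figures are in satoshis (1 BTC = 100,000,000 satoshis).

HALVING_INTERVAL = 210_000          # blocks between halvings (~4 years each)
INITIAL_SUBSIDY = 50 * 100_000_000  # 50 BTC in satoshis

def block_subsidy(height: int) -> int:
    """Subsidy in satoshis for a block at the given height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:                   # right-shifting further is defined as zero
        return 0
    return INITIAL_SUBSIDY >> halvings   # integer halving, as in the reference client

# The subsidy rounds down to zero after 33 halvings (~130 years from 2009),
# leaving transaction fees as the only mining revenue.
last_halving = next(h for h in range(64) if (INITIAL_SUBSIDY >> h) == 0)
print(last_halving)            # 33
print(block_subsidy(0))        # 5_000_000_000 (50 BTC)
print(block_subsidy(210_000))  # 2_500_000_000 (25 BTC)
```

Note that even when transaction fees come to exceed the subsidy, the schedule above shows the subsidy continuing to contribute for many decades afterward, which is the sense in which fee dominance is only a 50% milestone.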

Most fundamentally, scarcity is being treated in this debate largely using an intuitive image of “space in blocks.” However, scarcity follows from the nature of action as inevitably occurring within the passage of time. Actors would like to accomplish their objectives sooner rather than later, other things being equal. Time is the ultimate root and template for scarcity, because goods are only definable in relation to action and any action taken precludes some possible alternative action (“cost”). Scarcity of transaction-inclusion should therefore be understood in terms of relative time to confirmation—which is already today statistically influenced by fee levels.

Finally, discussions of whether bitcoin should or should not be used for “buying coffee” sound embarrassingly like Politburo debates. Market discovery through real supply, demand, and pricing over time allows socially best-possible levels of [average fee multiplied by transaction volume relative to real bitcoin purchasing power] at any given point in (in-motion) time to be discovered dynamically. The same goes, at the same time, for the relative pros and cons for users of the entire possible existing and future spectrum of off-chain transaction options relative to on-chain ones. The protocol block size limit was added as a temporary anti-spam measure, not a technocratic market-manipulation measure. The balance of evidence still seems to indicate that it should remain restricted to its former role.

Bitcoin as a rival digital commodity good: A supplementary comment

Japanese commodity money before the eighth century. Source: Wikimedia Commons, PHGCOM.

One of the challenges of interpreting bitcoin has been whether it can be classified under certain existing conceptual rubrics such as “money” or “commodity” for purposes of economic analysis. Could it be some strange new kind of “commodity money”? Most people immediately and intuitively dismiss this as a possibility because it is not a physical “thing,” which they feel is a defining characteristic of commodity-ness.

Resort to a word such as “token” seems a convenient escape valve from this situation. However, this could also be misleading. A token in a “token money” context derives its value from having a fixed exchange rate against something else—100 pennies for a dollar, a plastic chip for a euro, etc. Bitcoin, in contrast, is traded directly as itself, with utterly no sign of any fixed exchange or substitution rates (see my “Bitcoin, price denomination and fixed-rate fiat conversions,” 22 July 2013).

My newest paper, “Commodity, scarcity, and monetary value theory in light of Bitcoin” in The Journal of Prices & Markets (Winter 2015) explores some of these issues in detail from a formal conceptual standpoint to check such immediate and intuitive responses. The paper takes the time to define and then apply core economic-theory concepts, including goods, scarcity, and rivalry, as well as classical lists of “commodity money” characteristics, to understanding bitcoin in terms of monetary theory.

True, commodities are usually tightly associated with materiality. However, an economic-theory sense of commodity ought to be differentiable from a physical-descriptive sense. Economics begins with the study of choice and action, as distinct from issues addressed in physical sciences. It may be that the presence of materialness in commodities has just been assumed due to the nature of the available historical examples.

For a supplemental “reality check” beyond the obscure economics library, I thought to simply go and read the Wikipedia article on “Commodity.” This should be reasonably unlikely to represent any arcane or partisan definitions from one school of economics rather than another, and should first of all represent a general-purpose range of typical current understandings of the term.

I extracted some economic-theory elements from the entry, omitting illustrative examples. The examples are mostly material items, but this is to be expected due to the overwhelmingly pre-bitcoin scope of economic history so far. Indeed, part of my argument is that bitcoin may be the first rival digital commodity good (defined in the paper), which would mean precisely that it is unprecedented, a new type of example. Between the few excerpts below, I relate these presumably mainstream characterizations of commodity-ness to bitcoin.

Extracts from Wikipedia entry on “Commodity”

The exact definition of the term commodity is specifically used to describe a class of goods for which there is demand, but which is supplied without qualitative differentiation across a market. A commodity has full or partial fungibility; that is, the market treats its instances as equivalent or nearly so with no regard to who produced them. As the saying goes, “From the taste of wheat it is not possible to tell who produced it, a Russian serf, a French peasant or an English capitalist.”

No one generally considers which mining pool mined the block that a bitcoin originated in when deciding whether to accept payment. 50 Cent, for example, is unlikely to refuse bitcoin payments for his albums from anyone using coins mined by pools other than 50 BTC.

In the original and simplified sense, commodities were things of value, of uniform quality, that were produced in large quantities by many different producers; the items from each different producer were considered equivalent.

Multiple producers: All the various Bitcoin miners produce interchangeable new coins.

One of the characteristics of a commodity good is that its price is determined as a function of its market as a whole. Well-established physical commodities have actively traded spot and derivative markets.

There are numerous bitcoin spot markets and even some derivatives markets.

Commoditization occurs as a goods or services market loses differentiation across its supply base. As such, goods that formerly carried premium margins for market participants have become commodities, such as generic pharmaceuticals and DRAM chips. There is a spectrum of commoditization, rather than a binary distinction of “commodity versus differentiable product”. Few products have complete undifferentiability.

Coin tracking is sometimes cited as a risk for weakening the completeness of bitcoin fungibility, so while fungibility largely holds, there is some risk of entering onto a “spectrum of commoditization” in which some differentiation could creep in under certain circumstances.

Overall, I thought the entry was surprisingly clear in defining commodity in terms of economic rather than material concepts. While most of the examples of commodity were material, the economic meaning was conceptually independent of materiality. As should be expected, the discussion was about economic issues such as quality differentiation, pricing, market organization, and trading patterns—not chemistry. If we are using a term in economic analysis, a strictly economic definition should be most suitable.

 

Jeff Volek presents research indicating low-carb also best way to improve lipid profiles

For about a century already, ultra-low-carb eating has appeared to be the most effective treatment for both overweight and diabetes (both linked to the more general metabolic syndrome), beating all drugs and other interventions by wide margins (on this, see Taubes’ groundbreaking Good Calories, Bad Calories). However, the establishment has resisted or ignored (if not memory-holed) this information for general application, primarily based on separate claims about lipid-profile risk and heart disease. The conventional view has been that these risks outweigh the benefits, so other treatments are likely to be better on balance.

Turning this view completely on its head, in this 1 July 2014 conference presentation, cholesterol researcher Jeff Volek explains how his and related carefully controlled research over the past two decades indicates that ultra-low-carb eating appears to also be the best known intervention for improving lipid profile markers, properly interpreted. He focuses particularly on issues with the measurement, context, and interpretation of LDL-C, and by the end appears to have left the strong conventional view concerning low-carb and lipid profiles in the dustbin of failed scientific claims.

[Gets going at about 1.05.]

See my Evolutionary Health page for more perspective and selected references.

Sidechained bitcoin substitutes: A monetary commentary

Abstract

A 22 October 2014 white paper on cryptocurrency sidechains formalizes and advances the innovative sidechain concept and examines pros and cons in terms of both technical and economic factors. The current reply focuses on likely general factors in market valuations of bitcoin-pegged units on sidechains. This is an important topic for clarification as people begin to imagine and work to develop practical uses for sidechains. Assuming that the two-way peg will necessarily assure a matching, or even consistently discounted, market price relative to bitcoin could prove unrealistic. A scenario of independent floating market prices among sidecoins could prevail, with implications for the scope and types of sidechain applications.

Download the seven-page PDF of “Sidechained bitcoin substitutes: A monetary commentary.”

 

Bitcoin: Magic, fraud, or 'sufficiently advanced technology'?

Arthur C. Clarke’s third law famously states: “Any sufficiently advanced technology is indistinguishable from magic.” What Bitcoin makes possible can at first seem almost magical, or just impossible (and therefore most likely fraudulent or otherwise doomed). The following describes the basic technical elements behind Bitcoin and how it brings them together in new ways to make seeming magic possible in the real world.

Clarke’s second law states: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” And this, we can see in retrospect, is basically what Bitcoin creator Satoshi Nakamoto did. Few at the time, even among top experts in relevant fields, thought it could really ever work.

It works.

One reason many people have a hard time understanding Bitcoin is that it uses several major streams of technology and method, each of which is quite recent in historical perspective. The main raw ingredients include: an open-source free software model, peer-to-peer networking, digital signatures, and hashing algorithms. The very first pioneering developments in each of these areas occurred almost entirely within the 1970s through the 1990s. Effectively no such things existed prior to about 40 years ago, a microsecond in historical time, but a geological age in digital-revolution time.

Some representative milestone beginnings in each area were: for open-source software, the GNU project (1983) and the Linux project (1991); for peer-to-peer networking, ARPANET (1969) and Napster (1999); for digital signatures, Diffie–Hellman theory (1976) and the first RSA test concept (1978); and for hashing algorithms, the earliest ideas (around 1953) and key advances from Merkle–Damgård (1979). Bitcoin combines some of the best later developments in each of these areas to make new things possible.

Since few people in the general population understand much about any of these essential components, understanding Bitcoin as an innovation that combines them in new and surprising ways, surprising even to experts within each of those specialized fields, is naturally a challenge without at least a little study. Not only do most people not understand how the Bitcoin puzzle fits together technically, they do not even understand any of the puzzle pieces! The intent here is not to enter into much detail on the content of any of these technical fields, but rather to provide just enough detail to achieve a quick increase in the general level of public understanding.

What Bitcoin is about in one word: Verification

It may help to focus at first not on the details of each field, but on how each part contributes strategically to Bitcoin’s central function. This is to create and maintain a single unforgeable record that shows the assignment of every bitcoin unit to addresses. This record is structured in the form of a linked chain of blocks of transactions. The Bitcoin protocol, network, and all of its parts maintain and update this blockchain in a way that anyone can verify. Bitcoin revises the Russian proverb, “doveryai, no proveryai,” “Trust, but verify,” to just “verify.”

If a single word could describe what the Bitcoin network does, it would be verification. For a borderless global currency, relying on trust would be the ultimate bad idea. Previous monetary systems have all let users down just where they had little alternative but to rely on some trusted third party.

First, the core Bitcoin software is open source and free. Anyone can use it, examine it, propose changes, or start a new branch under a different name. Indeed, a large number of Bitcoin variations with minor differences have already existed for some time. The open source approach can be especially good for security, because more sets of eyes are more likely to find weaknesses and see improvement paths.

Open source also tends to promote a natural-order meritocracy. Contributors who tend to display the best judgment also tend to have more of their contributions reflected over time. Unending forum discussions and controversies are a feature rather than a bug. They focus attention on problems—both real and imagined—which helps better assure that whatever is implemented has been looked at and tested from diverse angles.

Many computers worldwide run software that implements the Bitcoin protocol. A protocol is something roughly like a spoken language. Participants must speak that language and not some other, and they must speak it well enough to get their messages across and understand others. New protocols can be made up, but just as with making up new languages, it is usually rather unproductive. Such things only take off and become useful if enough others see a sufficient advantage to actually participate.

Second, as a peer-to-peer network, Bitcoin has no center. Anyone can download the core Bitcoin software and start a new node. This node will discover and start communicating with other nodes or “peers.” No node has any special authority or position. Each connects with at least eight peers, but sometimes many more. Some faster and always-on nodes relay more information and have more connections, but this conveys no special status. Any node can connect or drop out at any time and join again later. A user does not have to run a full node just to use bitcoin for ordinary purposes.

It is common to say that Bitcoin is “decentralized” or doesn’t have a center. But then, where is it? Thousands of active peering nodes are spread over most countries of the world, and each one carries an up-to-date full copy of the entire blockchain.

Some nodes not only relay valid transactions and blocks, but also join the process of discovering and adding new blocks to the chain. Such “mining” activities both secure the final verification of transactions and assign first possession of new bitcoin to participating nodes as a reward. Understanding basically how mining works requires a look at the distinct functions of several different types of cryptography.

Bitcoin cryptography dehomogenized

Bitcoin relies on two different types of cryptography that few people understand. Both are counter-intuitive in what they make possible. When most people hear “cryptography,” they think of keeping data private and secure through encryption. File encryption can be used to help secure individual bitcoin wallet files, just as it can be used for the password protection of any other files. This is called symmetric key cryptography, which means the same key is used to encrypt and decrypt (AES256 is common in this role). Encryption may also be used for secure communication among users about transactions, as with any other kind of secure traffic. This is called asymmetric key cryptography, which means a public key encrypts a message and its matching private key decrypts it at the other end.

However, all of this is peripheral. Nothing inside the core Bitcoin protocol and network is encrypted. Instead, two quite different types of cryptography are used. They are not for keeping secrets, but for making sure the truth is being told. Bitcoin is a robust global system of truth verification. It is in this sense the opposite of the “memory hole” from George Orwell’s 1984; it is a remembering chain.

The first type of cryptography within Bitcoin is used to create a message digest, or informally a “hash.” Bitcoin uses hashing at many different levels (the most central one is an SHA256 hash run twice). The second type is used to create and verify digital signatures. This uses pairs of signing keys and verification keys (ECDSA with the secp256k1 curve for signatures).
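As a concrete illustration of the first type, here is a minimal Python sketch of the double-SHA256 operation Bitcoin applies to block headers and transaction data, using only the standard hashlib module (the helper name hash256 is my own):

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Bitcoin's central hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Any input, of any length, produces a fixed 32-byte digest.
digest = hash256(b"hello")
print(digest.hex())
```

The output is always 32 bytes regardless of input size, which is what makes these digests practical as compact, tamper-evident fingerprints of arbitrarily large data.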

The keys to the kingdom

Despite intuitive appearances to users, bitcoin wallets do not contain any bitcoin! They only contain pairs of keys and addresses that enable digital signatures and verifications. Wallet software searches the blockchain for references to the addresses it contains and uses all the related transaction history there to arrive at a live balance to show the user. Some of the seemingly magical things that one can do with bitcoin, such as store access to the same units in different places, result from the fact that the user only deals with keys while the actual bitcoin “exists,” so to speak, only in the context of the blockchain record, not in wallets. It is only multiple copies of the keys that can be stored in different places at the same time. Still, the effective possession of the coins, that is, the ability to make use of them, stays with whoever has the corresponding signing keys.

While software designers are working hard to put complex strings of numbers in the background of user interfaces and replace or supplement them with more intuitive usernames and so forth, our purpose here is precisely to touch on some technical details of how the system works, so here is a real example of a set of bitcoin keys. This is a real signing key (do not use!):

5JWJASjTYCS9N2niU8X9W8DNVVSYdRvYywNsEzhHJozErBqMC3H

From this, a unique verification (public) key is cryptographically generated (compressed version):

03F33DECCF1FCDEE4007A0B8C71F18A8C916974D1BA2D81F1639D95B1314515BFC

This verification key is then hashed into a public address to which bitcoin can be sent. In this case:

12ctspmoULfwmeva9aZCmLFMkEssZ5CM3x

Because this particular signing key has been made public, it has been rendered permanently insecure—sacrificed for the cause of Bitcoin education.

Making a hash of it

Hashing plays a role quite different from digital signatures. It proves that a message has not been altered. Running a hash of the same message always produces the same result. If a hash does not match a previous one, it is a warning that the current version of the message does not match the original.

To illustrate, here is a message from Murray Rothbard. He wrote in Man, Economy, and State that:

“It must be reiterated here that value scales do not exist in a void apart from the concrete choices of action.” —Murray Rothbard, 1962

And here is the SHA256 digest of this message and attribution (the same algorithm that Bitcoin uses):

68ea16d5ddbbd5c9129710e4c816bebe83c8cf7d52647416302d590290ce2ba8

Any message of any size can go into a hash function. The algorithm breaks it down, mixes the parts, and otherwise “digests” it, until it produces a fixed-length result called “a digest,” which for SHA256 takes the above form, but is in each case different in content.

There are some critical properties of a good hash algorithm. First, the same message always produces the same digest. Second, it works in one direction only: nothing about the message that went in can be reconstructed from the digest that came out. Third, even the tiniest change to the message produces a completely different digest, with no discernible relationship between the change in input and the change in output. This is called the “avalanche effect.” Fourth, the chances of two different messages producing the same digest are minuscule. This is called “collision resistance”: it is computationally infeasible to craft an altered message that produces the same digest as the original unaltered message.

To demonstrate, here is the same quote without the two quotation marks.

It must be reiterated here that value scales do not exist in a void apart from the concrete choices of action. —Murray Rothbard, 1962

Which produces this digest:

0a7a163d989cf1987e1025d859ce797e060f939e2c9505b54b33fe25a9e860ff

Compare it with the previous digest:

68ea16d5ddbbd5c9129710e4c816bebe83c8cf7d52647416302d590290ce2ba8

The tiniest change in the message, removing the two quotation marks, produced a completely different digest that has no relationship whatsoever to the previous digest. In sum, a digest gives a quick yes or no answer to a single question: Is the message still exactly the same as it was before? If the message differs, the digest cannot indicate how or by how much, only that it either has changed at all or has not.
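The same kind of comparison can be reproduced with any SHA256 implementation. Here is a minimal sketch using Python's standard hashlib, with an illustrative message of my own choosing; removing a single character yields a digest with no visible relationship to the original:

```python
import hashlib

msg1 = b"It must be reiterated here that value scales do not exist in a void."
msg2 = msg1.rstrip(b".")  # the same message minus one trailing period

d1 = hashlib.sha256(msg1).hexdigest()
d2 = hashlib.sha256(msg2).hexdigest()
print(d1)
print(d2)

# Count how many of the 64 hex digits changed (avalanche effect):
diff = sum(a != b for a, b in zip(d1, d2))
print(f"{diff} of 64 hex digits differ")
```

Running this shows most of the digest's digits changing from a one-character edit, exactly the all-or-nothing behavior the quotation comparison above demonstrates.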

How could such a seemingly blunt instrument be useful? Bitcoin is one application in which hashing has proven very useful indeed. In Bitcoin, hashing plays the linchpin role of making it computationally infeasible to alter transactions and records once they have been recorded. Once the hashes are hashed together within the blockchain, record forgery anywhere becomes practically impossible.

Transactions and how miners compete to discover blocks

Wallet software is used to create transactions. These include the amount to be sent, sending and receiving addresses, and some other information, which is all hashed together. This hash is signed with any required signing keys to create a unique digital signature valid only for this transaction and no other. All of this is broadcast to the network as unencrypted, public information. What makes this possible is that the signature and the verification key do not reveal the signing key.

To keep someone from trying to spend the same unit twice and commit a kind of fraud called double-spending, nodes check new transactions against the blockchain and against other new transactions to make sure the same units are not being referenced more than once.
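As a rough illustration of that screening step, here is a deliberately simplified Python sketch; the data structures and names are hypothetical teaching devices, not Bitcoin's actual UTXO-set implementation. A node tracks which transaction outputs ("outpoints") have already been spent and rejects any new transaction referencing one of them again:

```python
# Hypothetical, simplified double-spend screening. Each outpoint is a
# (txid, output index) pair identifying a specific previous output.

spent_outpoints: set[tuple[str, int]] = set()

def accept_transaction(inputs: list[tuple[str, int]]) -> bool:
    """Reject the transaction if any input outpoint was already spent."""
    if any(outpoint in spent_outpoints for outpoint in inputs):
        return False                 # double-spend attempt: same unit reused
    spent_outpoints.update(inputs)   # mark these outputs as consumed
    return True

print(accept_transaction([("abc123", 0)]))  # first spend: accepted
print(accept_transaction([("abc123", 0)]))  # same outpoint again: rejected
```

The essential idea is simply that each output can be consumed exactly once; the real network additionally checks signatures, amounts, and pending unconfirmed transactions.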

Each miner collects valid new transactions and incorporates them into a candidate in the competition to publish the next recognized block on the chain. Each miner hashes all the new transactions together. This produces a single hash (“mrkl_root”) that makes the records of every other transaction in a block interdependent.
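The hashing-together of transactions can be sketched in a few lines of Python. This simplified version follows Bitcoin's approach of pairing hashes level by level with double SHA-256, duplicating the last hash when a level has an odd count, though real implementations handle byte ordering and serialization details omitted here:

```python
import hashlib

def hash256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list[bytes]) -> bytes:
    """Pair up hashes level by level until a single root remains."""
    level = tx_hashes
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]  # duplicate last hash on odd levels
        level = [hash256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [hash256(t) for t in (b"tx1", b"tx2", b"tx3")]
root = merkle_root(txs)
print(root.hex())  # changing any single transaction changes this root
```

Because every transaction hash feeds into the root, altering any one transaction anywhere in the block changes the root, which is what makes the block's records interdependent.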

Each hash for any candidate block differs from every other candidate block, not least because the miner includes his own unique mining address so he can collect the rewards if his candidate block does happen to become recognized as next in the chain.

Whose candidate block becomes the winner?

For the competing miners to recognize a block as the next valid one, the winning miner has to generate a certain hash of his candidate block’s header that meets a stringent condition. All of the other miners can immediately check this answer and recognize it as being correct or not.

However, even though it is a correct solution, it works only for the miner who found it for his own block. No one else can just take another’s correct answer and use it to promote his own candidate block as the real winner instead. This is why the correct answer can be freely published without being misappropriated by others. This unique qualifying hash is called a “proof of work.”

The nature and uses of message digests are counter-intuitive at first, but they are indispensable elements in what makes Bitcoin possible.

An example of a mined block

Here is an example of some key data from an actual block.

“hash”:”0000000000000000163440df04bc24eccb48a9d46c64dce3be979e2e6a35aa13”,

“prev_block”:”00000000000000001b84f85fca41040c558f26f5c225b430eaad05b7cc72668d”,

“mrkl_root”:”83d3359adae0a0e7d211d983ab3805dd05883353a1d84957823389f0cbbba1ad”,

“nonce”:3013750715,

The top line (“hash”) was the actual successful block header hash for this block. It starts with a large number of zeros because a winning hash has to be below the value set in the current difficulty level. The only way to find a winner is to keep trying over and over again.

This process is often described in the popular press as “solving a complex math problem,” but this is somewhat misleading. It is rather an extremely simple and brutally stupid task, one only computers could tolerate. The hash function must simply be run over and over, millions and billions of times, until a qualifying answer happens to finally be found somewhere on the network. The chances of a given miner finding such a hash for his own candidate block on any given try are minuscule, but somewhere in the network one is found at a target average of about every 10 minutes. The winner collects the block reward—currently 25 new bitcoins—and any fees for included transactions.
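A toy version of this search, at a vastly easier difficulty than Bitcoin's real target, can be sketched in Python. The header contents here are placeholders of my own (a real header has a precise serialized format), but the loop shows the essential brute-force character of mining:

```python
import hashlib

def hash256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target_prefix: bytes = b"\x00\x00") -> tuple[int, bytes]:
    """Try nonces 0, 1, 2, ... until the block-header hash meets a
    (toy) difficulty condition -- here, two leading zero bytes."""
    nonce = 0
    while True:
        digest = hash256(header + nonce.to_bytes(8, "little"))
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

prev_block = bytes(32)  # the previous block's hash chains the blocks together
header = prev_block + b"merkle-root-and-other-header-fields"
nonce, digest = mine(header)
print(nonce, digest.hex())
```

Finding the nonce takes many thousands of tries even at this toy difficulty, yet anyone can verify the answer with a single hash, which is the asymmetry proof of work relies on.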

How is the reward collected?

The candidate blocks are already set up in advance so that rewards are controlled by the winning miner’s own unique mining address. This is possible because the miner already included this address in his own unique candidate block before it became a winner. The reward address was already incorporated in the block data to begin with. Altering the reward address in any way would invalidate the winning hash and with it that entire candidate block.

In addition, a miner can only spend rewards from blocks that actually become part of the main chain, because only those blocks can be referenced in future transactions. This design fully specifies the initial control of all first appropriations of new bitcoins. Exactly who wins each next block is random. To raise the probability of winning, a miner can only try to contribute a greater share of the current total network hashing capacity in competition with all of the others trying to do the same.

As shown above with the Rothbard quote, a completely different hash comes out even after the slightest change to the message. This is why the protocol includes a place for a number that is started at zero and changed by one for each new hash try (“nonce”). Only this tiny alteration, even if the rest of the candidate block data is unchanged, generates a completely different hash each time in search of a winner. In the example above, it looks like this miner found a winning hash for this block at some point after the three billionth attempt (“nonce”:3013750715), and this was just for that one miner or mining pool, not including the similar parallel but unsuccessful attempts of all the other miners, and all this just for the competition for this one block.

The key point to understand is that finding a hash under the difficulty level is extremely competitive and difficult, but verifying afterwards that one has been found is trivial. The rest of the miners do so and move right along. They use the newly discovered hash of the previous block header (“prev_block”) as one of the inputs for their next crop of block candidates (which assures the vertical integrity of the single chain of blocks) and the race continues based on the remaining pool of unconfirmed transactions.

A powerful, self-financing, verification network

The Bitcoin mining network is, as of late September 2014, running at about 250 petahashes per second and rising at an exponential pace that will soon make this figure look small (rate tracked here). This means that about 250 quadrillion hashes are currently being tried across the network every second, around the clock. This is the world’s most powerful distributed computing network by far, and it has been steadily extending this lead for quite some time.

Block rewards and transaction fees help promote the production and maintenance of this entire network in a decentralized way. Since block generation is random, with wins distributed on average in proportion to hashing-power contribution, it incentivizes all contributors continuously. Many miners participate in cooperative mining pools so that at least some rewards arrive on a fairly regular basis.

The network is designed to be entirely self-financed by its participants, from the beginning and indefinitely into the future. Early on, new-coin rewards are larger and transaction-fee revenue smaller; eventually, only transaction-fee revenue will remain, with a long and gradual transition phase built in.
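That transition follows a fixed schedule: the block reward began at 50 BTC and halves every 210,000 blocks (roughly every four years) until it reaches zero. A sketch of the schedule, using simple floats rather than the integer satoshi arithmetic of the actual consensus code:

```python
# Sketch of Bitcoin's built-in subsidy schedule: the block reward started at
# 50 BTC and halves every 210,000 blocks until it rounds away to nothing,
# at which point transaction fees alone pay the miners.
def block_subsidy(height: int) -> float:
    halvings = height // 210_000
    return 50.0 / (2 ** halvings) if halvings < 64 else 0.0

# Summing one representative height per halving era, times the era length,
# approximates the eventual total supply: close to the famous 21 million cap.
total = sum(block_subsidy(h) for h in range(0, 64 * 210_000, 210_000)) * 210_000
print(f"eventual supply ~{total:,.0f} BTC")
```

The geometric halving is why the supply converges on a hard cap instead of growing without limit.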

If Bitcoin does remain successful over the longer term, by the time transaction-fee revenue predominates, there would likely be many orders of magnitude more transactions per block by which to multiply the average competitive fee per transaction.

This has been a summary look at a few of the key technical elements of Bitcoin. Hashing algorithms and digital signatures are especially counter-intuitive and relatively new inventions, but knowing what they make possible is essential for understanding how Bitcoin works. Each of Bitcoin’s major elements contributes to the central functions of verification, unforgeable record-keeping, and fraud prevention. These technical underpinnings and the functions they support sound about as far from the systematic deceptions of a fraud such as a Ponzi scheme as it would be possible to get.

Adapted and revised from Bitcoin Decrypted Part II: Technical Aspects and cross-posted to actiontheory.liberty.me.