Sunday, March 10, 2019

Resting your Mind in a Problem

Resting your mind in a problem is a technique I use to enhance my creativity -- to find solutions to problems and to invent new ideas.  The essence of it is to enter what most people would probably call a relaxed, meditative state, BUT with the problem or general subject you are trying to be creative around held lightly in your mind, rather than having your mind occupied by mantras, music, or somebody else's voice.  Another way to put it is that the goal is to enable David Gelernter's "low focus thought", but in a directed fashion.

Good times to do this:

Just before bed
Lounging around in bed in the AM
During an endurance workout
Resting after an endurance workout
During repetitive, safe physical tasks (raking leaves, shoveling, washing the car, brushing the pool)

Some exercises to help you do this:

Basic mental flexibility:

The endless hum.  Can you imagine humming a single note through the change of your breath from in to out, or vice versa?  You may at first think you are doing it, but if you really listen to your own imaginary voice you'll likely hear a short hitch.  Yet one's mental "hum" need not be connected to the physical...

Imagine greater resolution.  Imagine a landscape scene with trees.  Why aren't you seeing each leaf?  Why not each vein on each leaf, and the stomata?  Why is your imagination limited to your physical visual resolution?

Not thinking.  Stop the voice in your head.  Stop telling yourself to stop it.  Try thinking about breathing to stop thinking about other things: in, out, in, out; now stop saying "in, out".  Now stop saying "Yay, I did it!" :-).  See how long you can exist without linguistic thought.  Try to make a decision without voicing it, or acknowledging in words that you made the decision.

"Hear" in your mind a song in other people's voices.  Not yourself singing it.  Hear the actual instruments with proper tone color, not you humming the melody.  Hear multiple instruments.  (This is probably quite easy for musicians but hard for the rest of us.)

Workups to resting your mind in a problem:

The key here is "resting"  -- you are not trying to force something.  Let your mind wander for creativity...

Replay a novel.  See the plot in your mind like a movie.  If it didn't take hours, you probably skipped parts.  Go back and do it in greater detail.  Then in still greater detail.  Stop verbally telling yourself you missed something!  Practice narrated visualization first, then no narration (visualization only).

Imagine a flat, endless plane.  Put stuff in it.  Let the stuff interact.  Let other stuff come in that your "voice" didn't suggest.

Put thoughts about the problem on the plane, or replay them visually, not in words.  See multiple aspects coexist in your thought plane; let other stuff come in.  If you get far off your problem, let it go and see what comes up; maybe try to combine whatever came up with some part of your problem.


Later, when you assess: if you are never (say, out of 10 times) able to stick to the problem, but move to the same other thing repeatedly, maybe your problem really isn't interesting to you and you should be working on that something else instead?

If you come up with a few good ideas, write them down right after the session (keep a pad by the bed)... if you fall asleep first, you'll probably have forgotten some by the AM.  If this happens, you can sometimes recapture the idea(s) by resting your mind again (as soon as possible).

Saturday, August 27, 2016

My take on the Bitcoin Testnet Fork

Bitcoin Unlimited signals compatibility with BIP109 because it accepts a superset of what BIP109 allows.  It accepts larger blocks and more signature operations.  So essentially BIP109 is a "soft fork" of what Unlimited is capable of.  Unfortunately there is no way to signal that a client supports a superset of BIP109, so a choice between two imperfect alternatives had to be made.  In the context of the 1MB limit and BU having the ability to produce these superset blocks, it made sense to signal BIP109 support.  At the same time, there is a passed BUIP covering strict BIP109 support that can be quickly implemented if needed.

On testnet, Bitcoin Unlimited has the mining majority, and a ~600kB transaction was created that exceeded the BIP109 signature-checking restrictions.  Bitcoin Unlimited included it in a block, and so clients that strictly adhere to BIP109 (Bitcoin Classic) were forked off.  Bitcoin Unlimited could have avoided this problem by following our philosophy of being conservative in what is generated but liberal in what is accepted.

However, this event is very instructive with regard to the role of consensus in the network.  In short, there is none.  Bitcoin is founded on the principle of zero trust.  If we rely on developers to produce perfect and compatible software, we are re-introducing trust.  And then the difference between Bitcoin and traditional financial networks becomes merely a difference in flavor (whom do you trust), not a fundamentally new concept.  We now see this in Ethereum -- the Ethereum developers have chosen to be the arbiters of their network, and participants must now trust those developers to accept the participants' transactions.

It is instructive to note that Bitcoin Unlimited is unaffected by the situation.  It would also be unaffected if the roles were reversed (if Bitcoin Unlimited were the minority hash rate and a minor rule were being broken).

Ask yourself: why do you perceive it to be "bad" that Classic forked from the network?  I believe it is "bad" because the rule that was broken was not important enough to warrant a fork.  Classic users would have preferred to follow the most-work chain and ignore this rule.

But what if a client produced a 100-coin coinbase transaction?  Would you prefer that your client follow this chain, or fork?

From a zero-trust, game-theory perspective, a client should follow the chain that maximizes the value of the coins owned by the user.  Therefore a client should only choose to fork when a rule change occurs that reduces the value of the user's coins.  From this observation, one can distill a minimum set of rules -- rules that are absolutely essential to protect the "money function" of Bitcoin.

Bitcoin Unlimited's "excessive block" and "excessive accept depth" algorithm is not just an arbitrary choice -- it's the optimal choice rational software can make in an untrusted network.  In essence, it expresses the client's preferences to the extent that the client can do so, but then follows the majority when the client's preference is rejected.

So Bitcoin Unlimited follows a philosophy of following the most-work chain unless a block breaks the "money function" rules -- increasing inflation, spending other users' coins, etc.  All of these activities undermine the value of the user's own coins, and in that situation a fork may preserve that value, since the value of the rule may be greater than the value added by having the highest-difficulty chain.
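As a toy illustration of this chain-selection policy, here is a sketch.  All names and data structures are my own invention, not Bitcoin Unlimited's actual code, and the "money function" checks are reduced to two stand-ins:

```python
# Sketch of "follow the most-work chain unless a block breaks the
# money function rules".  Hypothetical data model: a chain is a dict
# holding total "work" and a list of blocks.

MAX_SUBSIDY = 25  # current block reward, ignoring fees for simplicity

def breaks_money_function(block):
    """Stand-in checks for the rules worth forking over."""
    if block["coinbase"] > MAX_SUBSIDY:
        return True   # inflation beyond the schedule
    if not block["signatures_valid"]:
        return True   # spending other users' coins
    return False

def choose_chain(chains):
    """Follow the most-work chain among those with no rule-breaking block."""
    acceptable = [c for c in chains
                  if not any(breaks_money_function(b) for b in c["blocks"])]
    return max(acceptable, key=lambda c: c["work"])
```

Under this policy, a chain containing a 100-coin coinbase is rejected even if it has the most work, while a chain that merely contains an over-size or otherwise "minor-rule-breaking" block is not.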

To date, Bitcoin has been sheltered by having a single-client (trust-the-developers) implementation, but over the last year the massive liability of this approach has become evident in that client's inability to deliver the growth that Bitcoin so desperately needs.  As we move into a trustless, multi-client environment, Bitcoin client developers will have to ask themselves: "How important are these rules, and what should my client do if the mining majority breaks them?"

Wednesday, March 9, 2016

The Bitcoin On-Chain Scaling Landscape

This post summarizes the proposed options for scaling Bitcoin on-chain.  While a simple block size increase to 2, 8 or even 20MB has been claimed by some engineers to be possible, the techniques below are proposed either to scale beyond those sizes or simply to make the system operate more efficiently at those scales.

Basic theoretical work:

A Transaction Fee Market Exists Without a Block Size Limit (Peter Rizun): This paper argues that block propagation times limit block sizes which in turn will create competition for transaction space.

An Examination of Single Transaction Blocks and Their Effect on Network Throughput and Block Size (Andrew Stone):  This paper argues that headers-only mining places a limit on transaction throughput and therefore average block size that is based on underlying physical capability.  It then puts a limit on maximum block size by arguing that a rational miner would orphan any block that takes so long to validate that the miner is likely to be able to mine and validate a smaller block within the same time.
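The paper's orphaning argument can be reduced to a one-line inequality (my paraphrase, with hypothetical timings in seconds):

```python
def should_orphan(t_validate_big, t_expected_mine_small, t_validate_small):
    """A rational miner orphans a just-received block if validating it
    would take longer than the expected time to mine AND validate a
    smaller replacement block."""
    return t_validate_big > t_expected_mine_small + t_validate_small

# e.g. a block taking 700s to validate, vs. an expected 600s to mine a
# small replacement plus 10s to validate it, is worth orphaning
```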

No Fork Necessary

Blocks only (Greg Maxwell??):  No transactions are forwarded to the node, which reduces node bandwidth.  Unfortunately these nodes cannot propagate transactions to peers, so they are only useful as endpoints in the P2P network.

Thin Blocks (Mike Hearn, implemented by Peter Tschipper):
  Blocks are relayed as a set of transaction hashes (plus any transactions the receiver is missing).  Since most transactions are already in peers' mempools, this greatly reduces the bandwidth needed to propagate a block.

Weak Blocks (Gavin Andresen):
  Miners relay invalid block solutions whose difficulty is some fraction of the current difficulty.  This tells other miners/nodes what is being worked on, so when a solution is found, miners need only send the nonce, not the full block.  This should spread out the bandwidth spikes caused by block discovery, but may cause greater total bandwidth use.
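A sketch of the difficulty test involved.  The targets here are toy values chosen so a demo finds solutions quickly; real Bitcoin targets are vastly harder, and the weak-to-full ratio is my own illustrative choice:

```python
import hashlib

FULL_TARGET = 2**248   # toy full-difficulty target (hypothetical)
WEAK_MULTIPLE = 16     # a weak block meets 1/16 of full difficulty

def block_hash(header: bytes) -> int:
    """Bitcoin-style double SHA-256 of a block header."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

def classify(header: bytes) -> str:
    h = block_hash(header)
    if h < FULL_TARGET:
        # a real solution: peers who saw the weak blocks need little
        # more than the nonce to reconstruct it
        return "full"
    if h < FULL_TARGET * WEAK_MULTIPLE:
        # not a valid block, but worth relaying so peers learn what
        # transactions are being mined
        return "weak"
    return "none"
```

Iterating over nonces, roughly one header in sixteen classifies as weak or full here, letting peers pre-stage the block contents well before a full solution appears.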

Subchains (Peter Rizun):
  A formal treatment of weak blocks which adds the concept of weak blocks building on prior weak blocks recursively.  This solution should reduce weak block bandwidth down to nearly the same as without weak blocks, and also adds confidence to accepting 0-conf transactions since users know what blocks miners are working on.

Headers-only mining (independently deployed by miners, formally addressed by Andrew Stone, implemented by Gavin Andresen):
  Headers-only mining (mining empty blocks) allows greater on-chain scaling because it provides a feedback mechanism by which miners can reduce the average block size if it is nearing their local physical capacity.  This effect requires no active agency; it is a natural result of mining on headers only while waiting to acquire and validate the full block.

BlockTorrent (Jonathan Toomim):  A proposal to optimize access to blocks and transactions using algorithms inspired by BitTorrent.


Requires Fork

Basic block size increase (Satoshi Nakamoto, implemented in Bitcoin XT, Bitcoin Unlimited, Bitcoin Classic):  This technique recognizes that the current network infrastructure easily handles 1MB blocks and so simply suggests that the block size be increased.  Within this basic technique there are multiple proposals dealing with how to change the maximum size:
  1.   One time change to 2 MB
  2.   Bitpay's K*median(N blocks), on Bitcoin Classic roadmap
  3.   Follow the most-work chain regardless of block size (Bitcoin Unlimited)
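Proposal 2 above can be sketched in a few lines.  The K value and window are illustrative; I have not checked Bitpay's exact parameters:

```python
from statistics import median

def adaptive_cap(recent_sizes_mb, k=2.0):
    """Max block size as K times the median size of the last N blocks.
    The median resists manipulation by a few outlier blocks."""
    return k * median(recent_sizes_mb)
```

With recent blocks of 0.5, 1.0, and 1.5 MB and K=2, the cap is 2.0 MB; a single enormous outlier block barely moves it, which is the point of using a median rather than a mean.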
Interleaving blocks / GHOST (Yonatan Sompolinsky, Aviv Zohar):  This technique allows a child block to have multiple parents, so long as those parents have no conflicting transactions (or it specifies a precedence so all conflicts are resolved).  This allows blocks to be produced faster than one every 10 minutes.

Auxiliary (Extension) blocks (jl2012, Tier?): This technique proposes that the hash of another block be placed in the header of the current 1MB block.  That other block contains an additional 1MB (or more) of space.  This is a way of "soft-forking" a basic block size increase, with the proviso that older clients would not be able to verify the full blockchain and could be tricked into accepting double-spends, etc.

Bitcoin-NG (Ittay Eyal, Adem Efe Gencer, Emin Gun Sirer, Robbert van Renesse):  This proposal uses POW (proof of work) to elect a temporary "leader" who then serializes transactions into "micro-blocks" until a new POW block is found and a new leader elected.  As in the "basic block size increase" (and many others) the proposal still requires that every node see every transaction, and so scales up to network throughput limits.  The key advantages are that the leader can confirm transactions very quickly (restrained only by network propagation latencies) and that bandwidth use does not have the spikes associated with full block propagation.
Distributed Merkle hash trie (Andrew Stone):  This technique inverts the blockchain into blocks that store the current tx-out set in a distributed Merkle trie that can be traversed bitwise by Bitcoin address to validate the existence of an address.  The blockchain forms a history of the changes to this trie.  This allows clients to "sync up" quickly; they do not need the full block history.  Blocks can be produced at any node in the trie, containing transactions and blocks describing changes to that sub-trie.  This allows nodes to track only portions of the address space.  Miners only include portions of the trie that they have verified, resulting in slower confirmation times the lower you go.  Since transactions can be included in any parent, high-fee transactions are confirmed closer to the root, resulting in a fee market.

Address sharding (Andrew Stone):  This is a simplification of the Distributed Merkle hash trie technique, formulated to fit into Bitcoin with minimal disruption via extension blocks.  This technique proposes that there exist a tree of extension blocks.  Each extension block can only contain transactions whose addresses have a particular prefix, recursively; transactions containing addresses with multiple prefixes are placed in the nearest parent extension block, or ultimately the root (today's block).  Clients that do not track all the extension blocks are "full" nodes for those they track and "SPV" nodes for those they do not.
Miners do not include the extension blocks that they do not track.  This would cause extension blocks to be mined less often, creating an interesting fee market in which more expensive transactions gain transaction space nearer to the root.
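The routing rule can be sketched as follows.  Addresses are shown as bit strings, and the depth and representation are my own illustration, not part of the proposal:

```python
def shard_for(addresses, depth=3):
    """Return the extension-block prefix that must hold a transaction
    touching these addresses: the longest common prefix of their first
    `depth` bits.  '' means the root, i.e. today's block."""
    prefixes = [a[:depth] for a in addresses]
    common = ""
    for bits in zip(*prefixes):
        if len(set(bits)) != 1:
            break
        common += bits[0]
    return common
```

A transaction touching addresses beginning 0100... and 0111... lands in extension block "01"; one touching addresses beginning 0... and 1... can only go in the root, which is why root space is the scarcest and most expensive.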

Segregated Witness (Pieter Wuille):  This technique separates the verification information (signatures) from the transaction data, and cleans up a lot of other stuff.  Fully validating versions do not save bandwidth or space -- in fact it consumes a small amount more -- but it would allow a new type of client.  This "non-verifying client" (different from an SPV client) essentially relies on others (miners) to do signature verification; in that case bandwidth is saved, since the "witness" -- the signatures -- is not downloaded, but security is undermined.  Alternatively, the client could verify once (with no bandwidth savings) but not store the verification information on disk.  This is a much smaller security risk, though not zero: an attacker with direct access to the disk may be able to replace a transaction, making it "look" like an address does or does not have a balance.  However, since bandwidth and RAM are the current bottlenecks, this solution relies on the fact that today's network can handle larger amounts of data, similar to the basic block size increase technique.
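The bandwidth trade-off for the non-verifying client can be put in simple arithmetic.  All byte counts below are hypothetical, chosen only to show the shape of the saving:

```python
def download_bytes(txs, verifying=True):
    """Bytes a client downloads: base transaction data always, the
    witness (signatures) only if the client verifies them itself."""
    base = sum(t["base"] for t in txs)
    witness = sum(t["witness"] for t in txs)
    return base + witness if verifying else base
```

For transactions averaging 150 base bytes and 100 witness bytes, the non-verifying client downloads 40% less, at the cost of trusting miners' signature checks.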

Transaction space supply vs. demand dynamics (Andrew Stone, Meni Rosenfeld):  This is a family of techniques that allow high-fee-paying transactions to temporarily expand the maximum block size.  These techniques observe that the transaction space supply curve currently goes vertical at 1MB, and they are meant to address the possible shock ("economic change event") of hitting that limit by allowing transaction supply space to increase smoothly.  As such, this technique does not address a long-term demand increase, but it may be combined with average- or median-based techniques to do so.
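One illustrative member of this family, with an entirely made-up parameterization (this is not Stone's or Rosenfeld's actual proposal, just the shape of the idea):

```python
def effective_cap_mb(base_cap_mb, total_fees_btc,
                     fee_threshold_btc, btc_per_extra_mb):
    """Fees beyond a threshold purchase extra block space, so the
    supply curve slopes upward instead of going vertical at the cap."""
    excess = max(0.0, total_fees_btc - fee_threshold_btc)
    return base_cap_mb + excess / btc_per_extra_mb
```

Below the fee threshold the cap stays at its base; above it, each additional unit of fees smoothly buys more space, avoiding the sudden "economic change event" of a hard ceiling.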

Bitcoin9000 (anonymous):  This paper proposes a combination of 3 techniques:
1. Subchains (renamed "diff blocks" by this paper)
2. Parallel transaction validation (transaction validation is nearly "embarrassingly parallel" -- this is an "expected" optimization)
3. A linked list of size-doubling extension blocks, with a 95% miner vote-in-blocks needed to "open" the next-deeper block

Off Chain Scaling (included for completeness)

Sidechains (Gregory Maxwell, implemented by Blockstream):  This technique renders bitcoins immobile on the Bitcoin blockchain, allowing a representation of them to be used on a different blockchain.  The current implementation uses humans as stewards of the "locked" bitcoins ("federated sidechains"), so it suffers from counterparty risk just like any "paper" money scheme.  There is a theoretical proposal to remove the human stewards.

Lightning Network (Joseph Poon, Thaddeus Dryja):  A network of bidirectional payment channels in which most transactions happen off-chain, with the blockchain used only to open, close, and settle channels.

Wednesday, October 28, 2015

The Open Source Appliance (a 2002 retrospective)

Today the Library of Congress adopted exemptions that recognize consumers' rights to modify (jailbreak) their owned devices regardless of that device's use of copyrighted software.  In recognition of this event, here is a document I wrote in 2002 on the subject.  It's pretty interesting to compare this to what actually happened.

The Open Source Appliance: A Manifesto

Rev 0.5 12/5/2002

The advent of inexpensive connectivity technologies1 has promised to drastically change the way home appliances operate. By communicating with a home computer, appliances will have the ability to provide a much richer user interface and a larger set of features than is available with the front panel buttons and the LCD display common in most appliances. By communicating with each other, appliances may be able to implement coordinated behaviors (inter-operate), creating a better, safer living environment. Interoperability will also allow appliances to use each other's features, resulting in simpler and cheaper individual appliances, and creating features greater than those provided by any individual appliance.
The ideal end result is a "unified" appliance that has no unnecessary or redundant parts and is aware of its environment, providing greater flexibility and more features at a lower cost than appliances that do not inter-operate.


But let's be realistic: what will REALLY happen is that you'll have appliances made by different companies that are almost entirely incompatible. For example, you may have an Acme (just a made-up company name) VCR and a Paragon phone system. Both of these systems will have a nifty Windows program (sorry, they only support Windows2) that allows you to control the appliance. However, when you try to call from work to record a show, you'll realize that the Paragon phone software can't talk to this version of the Acme VCR software. Or maybe the call won't even be picked up because Windows has crashed (again!).
Although some systems will interconnect as advertised, especially if all of your home appliances are made by the same manufacturer, the sheer number of different home appliances and different manufacturers makes it impossible to test all configurations, so many will not inter-operate. As an example of corporate ineffectiveness in ventures of this sort, please examine your coffee table. How many remotes are sitting there? Unless you bought a special "universal" remote that was specifically created to communicate with many manufacturers' devices, you probably have 3 or 4. And if you DID buy a "universal" remote, I think you're already convinced…


The purpose of having appliance connectivity is to allow your devices to act in concert to implement coordinated behaviors or to allow devices to share resources. Through these behaviors, devices can provide enhanced features. For example, your stereo could mute when the phone rings (coordinated behaviors), and your answering machine could store its messages in your computer (resource sharing). Your answering machine would no longer need a cassette tape to record incoming messages, making it cheaper and more reliable. You could then view your messages through the computer’s display, and listen to them through the computer’s speakers (enhanced feature). In this way, connectivity improves quality of life and reduces appliance cost.
But how are a bunch of engineers (probably under great schedule pressure) going to implement the behavioral or resource sharing features that fit your lifestyle and your appliances? Although I am certain that a minimum functionality will be implemented, such as the basic connect-VCR-to-cable-box functionality, many features exist that would be great to have but do not fuel a marketing campaign. For example, I have an oven and a microwave with alarms that can't be heard from everywhere in the house. So I would like them to beep my phones (a different sound than a ring!) when the alarm goes off. And my phone could also "ding-dong" when the doorbell is rung.
I want to use my cordless phones as an intercom system. I want to use them to control what music is playing through my CD player (since the remote won't reach far enough), either through a touch-tone voice menu interface or directly via the buttons on the phone. I want to put callers on "speaker phone", causing any music that happens to be playing to pause and the phone's output to be routed through my living room speakers. I want my answering machine to display the history of received calls on my PC and let me listen to the messages through my main speaker system. I want incoming calls to be answered by a machine before the phones actually ring, and callers to be told to hit "1" for me and "2" for my girlfriend, and then ring all phones with two different sounds… no, scratch that – I don't want the bedroom phone to ring at night.
I’m just warming up! And this is only what I think I want. I won’t really know until I use the system for a while. You almost certainly need other features. Perhaps you run a small business and want your PC to run an inexpensive touch tone help or ordering system attached to your incoming line, or individual cordless phones that can call each other for interoffice communications. Or maybe you want to automatically store your favorite TV shows on your computer’s hard drive and allow fast forwarding through the commercials, essentially turning your PC into a digital video recorder.


No company is going to fully inter-operate with all other companies.
No company is going to give you the features you need.
No company is going to act against their self-interest to solve your individual problem.

The software is developed and sold before it is USED. This is always the case, which is why the first versions of software are so notoriously poor. Companies will also develop the software for a fictitious “average” user, so it is often too simple for technically savvy users, and too complicated for “please just work when I plug it in” types. It frequently is not well tested against rival components. Arbitrary restrictions are imposed so that “professional” or “small business” versions can be sold at 10 times the home consumer price.

What can be done?

Why wait until the corporations have failed to bring us usefully inter-operating products? We must take the initiative and solve these problems now!

The solution is to create open source home appliances.

The idea that consumers can actually fix faulty products is not radical outside of the software industry. For example, consumers are responsible for the maintenance of their houses and cars, and a large "home improvement" and "automobile aftermarket" industry exists to help consumers in this task. In fact, under pressure from Congress, the [automobile association] recently released the diagnostic codes for cars' internal computers so that individuals and independent repair shops can continue to fix all automotive problems.
It is possible to fix traditional home appliances (such as blenders). They often come with a parts list, so replacements can be ordered.
As with other products, the owner of a home appliance should have the right to fix or modify it. As software becomes central to the operation of an appliance (as in information or media appliances such as DVD players and phone systems), this right will be lost unless the appliance is based upon open source.
Open source is not a new idea. Significant open source projects currently exist. For example, over half of all internet web sites are served by an open source program called Apache. Apache itself is often run on an open source operating system (Linux). Also, the Netscape web browser is based on an open source project called "Mozilla". Furthermore, many embedded systems (basically the computer industry's term for all non-personal-computer devices that contain software, such as DVD players, cell phones, or portable mp3 players) are developed using open source development tools (gcc, gnu make, gdb, emacs, etc.).

A user would not need to purchase only open source home appliances to derive benefit from purchasing one open source appliance. A single open source product could communicate with other products, and code could be written to compensate for bugs or problems with the other product. For example, an open source phone system with an infrared (IR) communicator accessory (essentially how the VCR's remote control works) could be used to control a proprietary VCR. A consumer could then write a program to control the VCR through the phone system so that, for example, the consumer could literally telephone the VCR from work and tell it to record a show.

But I cannot program. How will Open Source help me fix a faulty appliance, add connectivity, or create a new feature?

An intrinsic part of Open Source projects is the existence of associated online communities. By “community,” I mean that users of the product communicate with each other about issues and problems with the product. A normal corporation’s product support site does not qualify as a “community” because all communications take place between individual users and the corporation. This makes it very difficult for users with similar problems to swap notes, especially since it is in the corporation’s interest not to report the number or severity of bugs in a product (lest it scare purchasers away). But in an open source user community, it is likely that you will find other users with the same problem, one of whom may be a programmer that can post a fix.
However, with open source, it is also possible to imagine groups of users hiring an independent programmer to implement special features or fix certain bugs. With a large enough user community, one could envision a market of programming consultants serving the user base. This has not previously occurred, perhaps because historically most users of Open Source products are programmers. However, a step has been taken in this direction – companies currently exist that provide support, add features, and fix bugs in Open Source projects. But instead of dealing with individual user groups on a bug-by-bug basis, they generally sell complete packages of the software (that contain all fixed bugs), and large, multi-user service contracts.
Over the long run, programming languages are becoming easier to use. Furthermore, the number of programmers is continually increasing, with the burgeoning computer industry. Ten years from now, adding a software feature to an open appliance may be a fun weekend project for the “software hobbyist,” just like wiring a surround speaker system or installing an after-market muffler is for the electronics and automotive hobbyist today.
Finally, mature open source programs generally have fewer bugs than their counterparts because more programmers become involved in fixing the bugs and more configurations can be tested. So you are less likely to have a problem in the first place.

Is an Open Source Appliance Company Possible?

While the purpose of this document is not to present a business case, this section is included to show that an open source appliance product is not incompatible with a profitable company.

The survivability of companies whose revenue or product line is significantly based upon open source software has been demonstrated by companies such as Wind River Systems, Cygnus, Red Hat, and many other Linux-based startups. As first stated by the Free Software Foundation's "Free Software Definition", the "free" aspect of open software refers more to the concept of "freedom" and less to that of "price." These companies have traditionally made money by providing an essential adjunct to the open source software, by selling well-packaged, easy-to-install versions of the open source software, or by selling maintenance and support contracts.
The business case for open source appliances is even stronger because the open source appliance software is essentially useless without a hardware and firmware platform to run it on. The customer must purchase the company's hardware in order to run the software, thus ensuring revenue. Although competing companies could start producing compatible hardware to take advantage of the software (as happened to IBM Corporation and the IBM PC computer architecture), or could port the software to their hardware, this is not necessarily bad. First of all, companies who restrict free enterprise in their product lines often fail. As an example, note that the other early PC architectures (Apple, Amiga, Commodore, Apple Macintosh) are either gone or have little market share. Secondly, note that other companies only copy successful products, implying that the open source company would have to be successful before attracting copycats. Finally, the original company by definition has market leadership, a position that is easier to keep than to gain.

Research, Development, and Marketing

It would require a large company to produce a line of home appliances from scratch, and a huge company to market and support them. A small startup would need to use a different strategy. One strategy that would shorten research and development would be to license the hardware platforms from an existing manufacturer. In fact, many consumer electronic devices are currently OEMed, so the only nonstandard part of an agreement would be the negotiation to “open” the programming interface for the hardware. Of course, this approach makes it much easier for a competing company to sell compatible hardware (they can also license it), potentially eroding the advantage proprietary hardware confers (as described in the previous section).
In terms of marketing, it would probably be best to start small and to create high quality versions of the core A/V appliances: a cordless phone network, infrared controller, CD/DVD player, digital video recorder, and A/V receiver could make up the initial products. Until the open source community starts submitting code, the software will not deliver the features, interoperability, and stability promised by open source. Therefore, it does not make sense to "launch" the product line to the general department store consumer right away. In fact, a web interface selling to programmers and audiophiles (with perhaps some PR in audiophile and programming magazines) would give the products the necessary "incubation" period, and give a company the low overhead and reasonably high margins required for a low volume business. Many people are already having a lot of fun modifying their home appliances – a pastime that has become especially popular on DVD players due to the DVD region encoding fiasco. This is an untapped customer base, requiring exactly the sort of niche product envisioned as a first release. When the software stabilizes and the feature set becomes greater than that of competitors' appliances, a product "launch" could be undertaken.


In the near future, the computer will be an intrinsic part of all devices. For open source to remain a viable and powerful concept, it must make the transition from the desktop into the wider world. Home appliance interoperability and intercommunication will enable this transition, both by allowing new software to be easily "downloaded" to the appliance, and by creating additional software complexity most easily solved by the open source methodology. The alternative cannot be repaired, has features that you don't need, is missing those that you do, and is limited in interoperability by corporate feudalism. Let's build a revolution!

1 The 900MHz and 2.4GHz radio bands (like cordless phones), power line communications (like X10, IBM Home Director), Bluetooth, and the USB serial protocol (the next-generation computer-to-peripheral connection)
2 The Windows operating system runs on the vast majority of home computers because of its rich set of document processing applications, so it is unlikely that a company will support other operating systems. But there are reasons for consumers to use other operating systems, like greater reliability, higher performance, or lower cost.

Sunday, October 25, 2015

Orange PI Plus Ubuntu 14.04 FAQ

The OrangePI Plus is a RaspberryPI-like piece of hardware that has awesome features at a great price point.  I bought a few of them to create a small ARM cluster.  Unfortunately the software needs some help (as is expected for a $39 board), but the open source community is delivering what is needed.

I chose to use the Ubuntu 14.04 XFCE distribution on my board because I wanted something solid with long term support.  This is what I discovered in my efforts.  Perhaps this FAQ will save you some time.

Kernel and Distribution

Use kernels provided by loboris described here:

Source code is here:

The kernels and distros provided by Xunlong (the OrangePI manufacturer) are not well supported, have no cleanly documented build procedure, etc.

Changing the display resolution in Ubuntu

Testing your monitor's capability

Boot your OPI+.  Now run:
sudo fbset -xres [horizontal resolution] -yres [vertical resolution]

for example:
sudo fbset -xres 1920 -yres 1080

(default password is orangepi)

This won't really work: it will resize the screen without resizing the desktop, so your desktop will now appear in the upper-left area of the screen and a black or repeated desktop will appear on the bottom and the right.  But it proves that your hardware is capable of the resolution.

Setting the screen resolution in OrangePI Ubuntu

Your flash card is separated into two partitions, "/" and "BOOT".  Guess what: the BOOT partition is NOT mounted at /boot; a copy of the files in BOOT is there, but the partition itself is mounted at /media/boot.  You can verify this by running "df".

If you put your flash card in a DIFFERENT computer, you should see two volumes, one of which is called "BOOT".  Click on that and you will see a bunch of files like:

Rename the file for the resolution you want to "script.bin" and reboot.


Enabling the Ethernet 

If your wired ethernet is not working (it does not initialize and there are no blinky lights on the jack), you probably forgot to use the OPI+ kernel.  As above, put your flash card in a DIFFERENT computer and look at the BOOT partition.  Copy the uImage.OPI-PLUS file to "uImage" -- that is the name of the Linux kernel on machines that use u-boot (most ARM machines).

You also need the proper kernel to use many of the other OPI hardware features...


Adding GPIO, LED, I2C and SPI access

sudo modprobe gpio_sunxi

To control the LEDs:

RED OFF: /bin/echo 0 > /sys/class/gpio_sw/normal_led/data
RED ON: /bin/echo 1 > /sys/class/gpio_sw/normal_led/data
GREEN OFF: /bin/echo 0 > /sys/class/gpio_sw/standby_led/data
GREEN ON: /bin/echo 1 > /sys/class/gpio_sw/standby_led/data

Add "gpio_sunxi" to /etc/modules to get it to autoload on boot.

Adding IR Remote Controls

sudo modprobe sunxi_ir_rx

Add "sunxi_ir_rx" to /etc/modules to get it to autoload on boot.

Enabling the analog audio output

sudo alsamixer
hit F6 (select soundcard)
select 0 audiocodec
Move right to "Audio Lineout"
Hit "m" to turn it on (should show 00 in the above box)
Hit ESC to exit 

Switching between analog and HDMI audio output

In XFCE choose XFCE Menu -> Sound & Video -> PulseAudio Volume Controls.  Go to the configuration tab.  Disable the one you don't want and audio will pop to the other.

Adding a SATA Hard Drive

This describes how to add a hard drive for additional data, not how to boot from it (you can boot from the 8GB eMMC).  There's nothing special; this is standard Linux stuff:

Plug it in using SATA cable.  Power up board.

mkfs.ext4 -b 4096 /dev/sda
mkdir /data
mount /dev/sda /data

(verify by ls /data.  You should see lost+found.  Also run "df")

nano /etc/fstab
/dev/sda /data ext4 defaults 0 0

WIFI Command Line Configuration

sudo nmcli -a d wifi connect
(will ask which SSID, etc)

kswapd process using almost 100% of cpu

This is a bug in the kernel.  The easiest solution is to make some swap space:

sudo -i
dd if=/dev/zero of=/swap bs=1M count=1024
chmod 600 /swap
mkswap /swap
swapon /swap  
You can then tell the system not to use swap unless it absolutely must:

sysctl vm.swappiness=0
The number is a percentage from 0 to 100 indicating how aggressively Linux should preemptively move pages from RAM into swap.

Don't forget to add the swap to /etc/fstab so swap is enabled on boot:

/swap swap swap defaults 0 0


Saturday, May 23, 2015

Network Neutrality and Bitcoin

Allowing the internet to provide different services for different applications is a more efficient use of existing resources and will result in a higher quality of experience for end users.  Unfortunately, telephone/video/internet service to the home is often a monopoly or near-monopoly, and service providers have a proven history of taking advantage of this fact with inferior service and high prices.  So as a society we cannot trust a for-profit, monopoly-granted organization not to take advantage of service differentiation to confer unfair advantages on incumbent or internal services.  This is why network neutrality is important.

However, there is another solution.  It is now technologically possible to create an automated marketplace that allows applications running at the end user or in the web application to purchase an end-to-end pathway with specific quality guarantees.  It would look like this when connected to cable networks (mobile and other networks are very similar):

This marketplace needs to be available to any customer and be the only way to purchase service.  This creates a "level playing field" that fosters innovation.  Through this marketplace an internet startup has access to the same bandwidth as an incumbent or internal web service provider.  Services sold in the market can be tracked to ensure that they do not affect existing "baseline" contracts with customers.

The Bitcoin network is the only payment processor that can service this network due to its security model, pseudo-anonymous transactions, continuous micro-payment capabilities (payment channels), and irreversible transfers.  With Bitcoin "payment channels" customers can continually pay fractions of a penny (pay-as-you-go), which ensures that the payment matches the service provided.  To protect the service provider, irreversible transfers are needed to eliminate chargebacks, fraud, and the overhead of collecting and storing the identity and payment information required with traditional trust-based payment networks.  Pseudo-anonymous transactions ensure the "level playing field" -- the market cannot offer a particular company a better deal if it does not know who is purchasing the service.

In short, it is not feasible to use traditional payment processors for this marketplace because of high fraud rates for digital goods, communication of identifying information (which could be used to offer cheaper service to favored customers), and inability to cost-efficiently handle continuous micro-payments. 
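
To make the pay-as-you-go idea concrete, here is a toy Python sketch -- NOT a real payment-channel implementation, and all of the metering numbers and the price are invented -- showing how per-interval micro-payments keep the amount paid exactly in step with the service delivered:

```python
# Toy sketch of pay-as-you-go metering -- not a real Bitcoin payment channel.
# In a real channel, each updated balance would be a signed transaction that
# replaces the previous one; here we just accumulate the running total.
def run_channel(intervals_mb, price_per_mb):
    paid = 0.0
    for mb_delivered in intervals_mb:        # one entry per metering interval
        paid += mb_delivered * price_per_mb  # customer "signs" the new balance
    return round(paid, 8)

# Five metering intervals of QOS service at a made-up price of 0.0001 per MB
total = run_channel([10, 12, 8, 11, 9], 0.0001)
print(total)  # 0.005 -- payment exactly matches the 50 MB delivered
```

If the customer stops paying, the provider simply stops the flow; neither side is ever owed more than one interval's worth of service.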

Introduction:  If you could trust your ISP, you would not want Network Neutrality

To understand this, you need to understand that there are multiple metrics used to measure network performance.  And services really do have different requirements.

This is called QOS or Quality of Service, and the three most common metrics are bandwidth, latency, and jitter.  Bandwidth is the one you know -- it's how many bytes you'll receive per second, on average.  Latency is how long it takes to get a response after you send a message.  Jitter is how much the time between packet arrivals varies.

So if you are uploading or downloading all your photos from DropBox, all you care about is bandwidth.  If no bytes are transmitted for a few seconds, you don't care.  All you care about is when the interminable upload will be over!

If you are playing a twitch video game you care about latency -- you need to dodge that incoming RPG, so you need the game to react to your keystroke as quickly as possible!  It's good to minimize jitter too, but remember the game world is simulated on your system, so it will not freeze.  However, if you have ever seen other characters suddenly "pop" somewhere else, that was caused by a large packet gap (high jitter).

If you are watching a movie through a set-top box, you mostly care about jitter.  The set-top box does not have much memory; it can only hold a few seconds of the movie before playing it on the screen.  So you need a steady, unchanging stream of data or the movie will freeze and jerk.  Bandwidth is the second most important -- a higher bandwidth means clearer, HD video.  Latency is completely unimportant (within reason).  It does not matter if it takes the data packets .5ms or 1000ms to get to you -- the only difference is that the movie begins 1 second later.
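
These three metrics can be made concrete with a little arithmetic.  Here is an illustrative Python snippet (the packet trace is invented) that computes average bandwidth and jitter from a list of packet arrival times:

```python
# Invented packet trace: (arrival_time_seconds, size_bytes) for each packet.
packets = [(0.00, 1500), (0.02, 1500), (0.05, 1500), (0.06, 1500), (0.10, 1500)]

# Bandwidth: bytes received per second, on average.
duration = packets[-1][0] - packets[0][0]
bandwidth = sum(size for _, size in packets) / duration  # ~75000 bytes/s

# Jitter: how much the gap between packet arrivals varies -- here measured
# as the mean absolute deviation of the inter-arrival gaps.
gaps = [b[0] - a[0] for a, b in zip(packets, packets[1:])]
mean_gap = sum(gaps) / len(gaps)
jitter = sum(abs(g - mean_gap) for g in gaps) / len(gaps)  # ~0.01 s
```

A Dropbox upload only cares that `bandwidth` is high; the game only cares that round trips are short; the set-top box only cares that `jitter` is small.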

From a consumer perspective, it does not make sense to pay for a connection that can simultaneously handle HD movies, massive uploads, and "twitch" video games 24 hours a day 7 days a week when you only use these services a few hours a day.

There is no technical barrier

Today it is technically possible to create custom QOS data flows into your home.  This is why your ISP does not need to fiddle with your cable box when you upgrade service, and why, when you don't pay your bill, nobody needs to drive by to shut off your service.  In the mid 2000s, I helped specify the cable network protocol that enables this (it's called PCMM, or PacketCable MultiMedia) and worked at one of the first companies enabling PCMM services.  Today, similar protocols exist for mobile networks, and OpenFlow is an effort to create a unified protocol that will allow the creation of QOS flows across the entire network.  At the same time, NFV (Network Function Virtualization) is an effort to move the source of the data closer to the consumer -- this ability could be part of the same marketplace.

But here is the problem

Network Service Providers* (NSP) have a monopoly on the data into your home.  Given the opportunity, they will behave no differently than any other for-profit company and abuse that monopoly to provide inferior service at high prices.

For example, when Fiber-to-the-Home entered my neighborhood, my current cable data provider offered to double my bandwidth for free -- capacity it could have offered all along.

And I have personal experience with how painful it is to deploy the simplest services into NSP networks.  In the mid 2000s I worked at a small cable-industry startup company.  We were demoing a program that sat in your system tray (where all the little icons are on the right) and looked like a speedometer.  But rather than just telling you the network speed, you could grab the needle and drag it higher to get more bandwidth to your home.  Pretty awesome, right?  Surely there would be a market for this... but have you ever actually seen it?

The two key reasons for network neutrality are:

  1. Permission-less innovation:  The network service provider should not be placed in a position where it can offer or withhold bandwidth from a service, or negotiate differentiated pricing based on the service type or provider.  If it is in this position it can influence or outright control what services run over its network.  In fact, by taking an active role in "allowing" a particular type of data on its network, it may find itself legally required (or scared by litigation) into acting as a "policeman" of that data.  Additionally, it may offer better pricing to incumbent or in-house services, which would have a terrible effect on the technological innovation that has driven our economy for the last 15 years.  Netflix, for example, might never have existed if cable ISPs could have throttled it for stealing cable TV revenue...

The market described above solves this problem...

  2. Breaking currently negotiated contracts:  If I am paying for 10mb/s, I paid for 10mb/s TRAVERSING the entire ISP network.  The contract did not say "10mb/s only if nobody else is paying more at that moment", or "we'll send you 10mb/s if packets magically appear on our network, but we are limiting what Netflix can send to us so in reality you'll only get 1mb/s."

I believe that point 2 is not an issue long term.  Do "coach" airline seats cost more because first class reduces the total number of coach seats?  Does "bleacher" seating at the ball game cost more because of box seats?  In my experience the opposite is true; companies are able to offer reduced "basic" prices and expanded capacity due to their high-margin offerings.   As network capacity increases to fill high-margin QOS demand, ISPs will be able to meet their baseline promises and have extra bandwidth left over.

The real problem today is that the lack of a marketplace for QOS on-demand has caused ISPs to "oversubscribe" their networks -- that is, they have collectively promised their customers much more bandwidth than they can actually provide.  So this ISP contractual "promise" is actually more of a maximum, when what customers actually want is a promised minimum.  The existence of a QOS market aligns what the customer wants to buy (guaranteed minimum performance for a certain time) with what the ISP is selling.
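
A back-of-the-envelope calculation (all numbers invented) shows what oversubscription means in practice:

```python
# Hypothetical ISP: one 1 Gb/s uplink shared by 100 subscribers, each sold
# an "up to 100 Mb/s" plan.  These numbers are invented for illustration.
uplink_mbps = 1000
subscribers = 100
plan_mbps = 100

promised_mbps = subscribers * plan_mbps
oversubscription_ratio = promised_mbps / uplink_mbps
print(oversubscription_ratio)  # 10.0 -- ten times more promised than available

# The actual guaranteed minimum if everyone transmits at once:
worst_case_mbps = uplink_mbps / subscribers
print(worst_case_mbps)  # 10.0 Mb/s, far below the advertised 100
```

The "up to 100 Mb/s" plan is really a "guaranteed 10 Mb/s" plan in disguise; a QOS market would let customers buy the guarantee they actually want.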

* In this blog post I'm going to use the term "service" to mean any company that provides a web site or other internet accessible service (like video streaming, instant chat, etc).  And I'll use "network provider" instead of ISP (internet service provider) because my observations apply to every networking company in the route from the service provider to the customer, not just the ISP that the customer has signed up for.

Thursday, April 16, 2015

Advanced Software Language Design Concepts

Minimal Specification

Minimal specification is the idea of describing exactly what is needed to accomplish an algorithm and nothing else.  For example, statements that are not part of the essential algorithm are often added to software: debugging, logging, or statements that are simply inefficiencies (conceptual mistakes).  The language should let these be marked as extraneous.

As all of these statements are essentially commentary, let us propose "/:" to prefix an inessential line, "//" to prefix a traditional comment, and "/?" to prefix a documentation comment.  We'll use "/*" variants (for example "/*:") to open the multi-line forms.

// Let's log now...
/: log(INFO,"This is a log message");

/*: log(INFO,"Contents of list");
    for l in list.items()
        log(INFO, l);

But extra statements do not constitute the entirety of unnecessary information.  What about statement ordering?  Rather than specify unnecessary order, let's specify different syntax for lexical scoping rules that allow different ordering:
[] = any order
() = specific order
{} = only one of

So for example:

Point add(Point a, Point b) (
  [
    x = a.x+b.x;
    y = a.y+b.y;
  ]
  return Point(x,y);
)

This is a very succinct way to increase parallelism in software.  A clever compiler can use this information to reorder instructions for optimization, spawn threads, or even start "micro-threads" (a short simultaneous execution on two processors of a multi-core machine which share the same stack before the moment of separation).
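
As a present-day analogue (an assumption, not the proposed language), the any-order block above could be executed with explicit threads; a compiler for this language could generate something like the following automatically:

```python
# Python analogue of the "[ ]" any-order block: because x and y do not depend
# on each other, they can be computed on separate threads in either order.
from concurrent.futures import ThreadPoolExecutor

def add_points(a, b):
    with ThreadPoolExecutor(max_workers=2) as pool:
        fx = pool.submit(lambda: a[0] + b[0])  # x = a.x + b.x
        fy = pool.submit(lambda: a[1] + b[1])  # y = a.y + b.y
        return (fx.result(), fy.result())      # ordered: the return comes last

print(add_points((1, 2), (3, 4)))  # (4, 6)
```

The point of the language feature is that the programmer never writes the thread plumbing; declaring the ordering constraint is enough.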

If the concept of minimal specification is applied throughout the language, there are quite a few other interesting language ideas that emerge.

Syntactic Specifications


Interfaces exist in one form or another in many programming languages.  However, the related type systems suffer from a lack of flexibility that causes them to be less than fully utilized.

Type specifications should be parametric.  That is, be able to specify multiple types simultaneously:

type Point({int, float, double} ElemType ) =
  ElemType x,
  ElemType y

(we don't need template <> notation, types are parametric)

You could quickly define a grouping of types (remember that {} means "one of"):

type Number  = {int, float, double}
In cases where the constituent types do not implement the same interface (do not have the same member operators), the operators available to Number are the intersection of the operators available in its constituent types.

Aside: This is very different than the following 3-tuple:
type triple = (int, float, double)

Let's define a keyword: the "any" type means any type!

Let's specify the addition function, where the parameters can be heterogeneous Point types:
Point Add( Point a, Point b);

Let's specify the addition function where all objects must be the same fully-realized type:
ParamType Add( (ParamType = Point) a, ParamType b);
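
For comparison (this is an analogy, not the proposed syntax), Python's typing module can express a constrained "one of" type parameter today:

```python
# Rough present-day analogue of the parametric Point using Python's typing.
# A TypeVar with constraints plays the role of the {int, float} "one of" set.
from typing import Generic, TypeVar

ElemType = TypeVar("ElemType", int, float)

class Point(Generic[ElemType]):
    def __init__(self, x: ElemType, y: ElemType):
        self.x, self.y = x, y

def add(a: Point, b: Point) -> Point:
    return Point(a.x + b.x, a.y + b.y)

p = add(Point(1, 2), Point(3, 4))
print(p.x, p.y)  # 4 6
```

Note that Python still needs the `Generic[...]` boilerplate that the proposed language would make implicit.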

Interface Reductions

Languages today almost exclusively allow programmers to add to the existing symbol table.  The only notable exception is the use of public, private, and protected in C++ and other object-oriented languages.

However, these "canned" namespaces are based on program structure assumptions that miss the complexity of modern software development.  For example, an API may have multiple levels of interface, depending on the application programmer's chosen power/complexity trade-off.  The implementation of the API may have specific functions needed to interface with an optional component.  These functions and related member variables could be removed during compilation if the other component is not part of the build, resulting in space and time efficiencies.  The implementation may have a debugging interface...

Instead, let us define interface groups and allow classes to include specific prototypes and interfaces into the group:

interface group API;
interface group GUI;
interface group data;

A module can choose what interface groups to present to other software layers.  It can combine pieces of other interface groups into a new group and present that.  This has the effect of reducing the namespace. 

Given an extremely flexible syntax parser, you should be able to specify most modern languages in a single language.

Semantic Specifications

Interfaces constitute syntactic specifications.  What about semantics?  A semantic specification defines how an object should behave.  Today we get away with concepts like "assert" and "unit test", but there is no formal specification of semantics.  Without one, engineers cannot write adhering implementations or formal proofs, and compilers cannot apply logical reasoning for optimization.

  For example:

  semantic stack(any a) = assert(a == a.push(x).pop())

  semantic queue(any a) =
    any (x,y,first,second);
    a.push_back(x),
    a.push_back(y),
    first = a.pop(),
    second = a.pop(),
    assert(x == first),
    assert(y == second)
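
Until a language supports semantic blocks directly, the stack semantic above can be approximated as an executable property check.  Here is a Python sketch, where `append`/`pop` stand in for push/pop:

```python
# Executable sketch of the stack semantic: pop must return what push put in.
# Works for any object exposing append/pop (Python's stand-ins for push/pop),
# so the same check can be run against every claimed implementation.
def satisfies_stack_semantic(make_empty, sample):
    s = make_empty()
    s.append(sample)
    return s.pop() == sample

print(satisfies_stack_semantic(list, 42))  # True
```

The difference from an ordinary unit test is intent: the property is attached to the interface, so every implementation claiming that interface must pass it.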

An interface actually consists of both syntax (interface) and test (semantic) specifications:

type List(any T) =
  def add(T a) {...},
  def remove(T a) {...},
  def push(T a) {...},
  def T pop() {...},

  semantic(List a, assert(a == a.add(x).remove(x))),
  implements semantics stack;
  implements semantics queue;

Performance Specifications

Performance specification is an important part of the semantic specifications from a practical perspective, although it is (generally) not part of the minimal specification (so we'll use the /: prefix).

Why is performance specification important? A programmer is confronted with multiple implementations of an interface (say a List, or Map).  To pick the optimal implementation, he must match the usage patterns in his code with the implementation that implements those member functions most efficiently.  To do so correctly, he needs classes and member functions to be annotated with performance specifications.

type MyList(any T) =
   int length;
   def push(T a) {...}, /: O(p=1,m=1)
   def find(T a) {...}, /: O(p=length/2, m=1)
   def quickSort(T a) {...}, /: O(p=length*log(length), m=1)
   MyList(T) clone() {...}, /: O(p=length, m=length)

Note: given these performance specifications, it may be possible for the profiler to feed data back into the compiler to recommend the best implementation.
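
Here is a small Python sketch (all implementation names and cost numbers are invented) of how a compiler or profiler could pick an implementation by combining observed call counts with cost annotations like these:

```python
# Invented cost annotations: relative cost of each member function per call,
# standing in for the O(p=..., m=...) annotations in the class definitions.
costs = {
    "ArrayList":  {"push": 1,  "find": 50},  # cheap push, expensive find
    "SortedTree": {"push": 10, "find": 7},   # pricier push, cheap find
}
# Invented profile: how often the program actually called each function.
profile = {"push": 1000, "find": 5}

def total_cost(impl):
    return sum(costs[impl][op] * calls for op, calls in profile.items())

best = min(costs, key=total_cost)
print(best, total_cost(best))  # ArrayList 1250 (vs SortedTree's 10035)
```

With a find-heavy profile the same arithmetic would flip the choice, which is exactly the feedback loop the annotations enable.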

Computer-Assisted Development

Integrated IDE

The language should not be defined solely in ASCII format.  Today's computers are fully capable of displaying binary data (pictures, music, etc) in human-consumable format inside the context of a traditional program editor and so languages should allow this data to be included.

var Image SplashImage = [[actual image here]]

Computer Annotated Source

Continuing the philosophy of minimal specification, let us NOT specify the specific list required for this task.  Let us just specify that it must be an object with a List and GUI interface:

var (List, GUI) choices,

choices.push_back("last choice"),

The compiler can choose any object that provides both the List and GUI interfaces.  During profiling execution, the compiler keeps track of how often each API was called.  Although this is not the case in the above example, let us imagine that the push_back() function was called repeatedly in a performance sensitive area.

After execution, the system notices this and chooses an implementation of choices that optimizes the push and push_back functions, based on the performance annotations that are part of each class's definition (see above).  It annotates the source code to this effect, using the comment marker "//" with the computer-can-change annotation "|":

var (List, GUI) choices,  //| instantiate DoublyLinkedList(string)

choices.push_back("last choice"),

If the programmer wants to override this choice or stop it from ever changing, he can remove the computer-can-change annotation:

var (List, GUI) choices,  /: instantiate MyDoublyLinkedList(string)

Or of course, using the traditional method:
var MyDoublyLinkedList(string) choices;

Compiler Interpreter Equivalence

Thursday, July 17, 2014

My Network Neutrality Comment

I have been a telecommunications engineer for the past 20 years, building equipment for data, cable and wireless networks.

Network Neutrality is an absolutely critical component of the modern internet.  The great innovation that we have seen in the last 20 years is due entirely to the ability of start-up companies to offer interesting services with no ability for carriers to throttle, block, or in any way discourage/encourage one service over another. 

Carriers themselves have benefited dramatically from Network Neutrality.  I have personal experience in how long it takes to deploy the simplest services into these networks, and it is literally years.  The existence of a fast/slow lane will rapidly cause the slow lane to degrade to the point where every usable service must go across the fast lane, with the permission of the carriers.  As soon as a carrier authorizes content into the fast lane, some question of legal responsibility over that content will soon follow, squelching innovation and teaching carriers about the law of unintended consequences.

In today's and tomorrow's broadband and fiber-to-the-home world, arguments about prioritization of real time traffic (like movies or audio) are specious.  There is plenty of bandwidth.

Please save the carriers from themselves and the public from a monopoly-driven "cablized" internet by declaring that Carriers cannot discriminate in ANY WAY in regards to traffic flowing over their network!

And finally, I PAID for 10mb/s.  It is not right for my ISP to only give me 5mb/s because Netflix or Amazon didn't bribe them enough.

  1. Should there be an outright ban on fast lanes? YES
  2. Should broadband access be classified as a Title II common carrier? YES
  3. Should the new Open Internet provisions also cover wireless (mobile) broadband? YES

Thursday, May 15, 2014

The global trade and reserve currency problem, Bitcoin, and why you should care

I am no economist but this is pretty clear to all who read about world events: the fact that the US dollar is currently used as a global medium of exchange brings both tangible and intangible benefits and responsibilities to the United States.  You can find endless discussion about this via a quick Google search.  However, just by looking at the debt of various economies, it should be clear to all involved that there is a non-trivial chance that the dollar's reign as the global medium of exchange will end in the next 5-10 years. 

You may debate whether the US has been a good steward of this trust; however, that is not the purpose of this post.  Instead I would like you to consider what will replace it.  Most bets are on the Chinese Yuan, simply because they export so much stuff that they own a lot of gold and other nations' currencies.

As a member of a free nation, as a citizen (not a subject) protected by a bill of rights, I would be deeply worried if the currency of a nation without these concerns starts to be used as the global medium of exchange, conferring the power and influence that comes from being "first among equals" to that nation.  Since I have no direct experience with China I will let you make your own decision about whether China should be that nation.  Go ahead and start with the concept of "Leftover Women" here and then please search for "human rights violations".

Recently the central bank of China (PBOC) has significantly discouraged the use of Bitcoin within the country.  Anonymous reporters from inside China say that the main reaction to this has been mystification -- why would this organization that oversees the entire 10 trillion USD Chinese economy worry itself over an obscure technology with a total economic activity of a few billion bucks?

I think that the reason is that Bitcoin, and ONLY Bitcoin, is capable of becoming the next global reserve currency.  Why this is true is complex and cannot be fully discussed here; I'd prefer that you trust me or your technologist friend about this.  In short, Bitcoin is like gold that can be sent electronically.  Nobody controls it; nobody decides how much can be printed.  There is no "central bank of Bitcoin".  For these same reasons, gold is a great choice for an international currency (in fact it HAS BEEN the de facto international medium of exchange and is still used as a reserve) except for one problem: it cannot be sent electronically.  It must be physically moved, which is slow, expensive, and vulnerable to theft.  Bitcoin solves these problems.  It is the first engineered sound money; gold is natural sound money, and national currencies are engineered but unsound.

Therefore, if you can see ANY possibility, no matter how small, of the end of the petrodollar and the beginning of the petroyuan, if you are a woman or not a member of the Chinese ruling class, if you care about personal freedom, human rights, or due process of law, I strongly urge you to give Bitcoin a try!  Buy some at a Bitcoin ATM and then pay your friend for your half of lunch in Bitcoin.  Buy some Gyft cards, or attempt to use Bitcoin for eBay or Etsy purchases by contacting the seller.  Search the web for merchants that accept Bitcoin and buy your stuff there.  Think globally, act locally: sure it's a cliche, but in this case totally appropriate.  Your use of Bitcoin makes it stronger, and this nascent currency must get stronger if it is to be ready to challenge the Yuan as the de facto international trade and reserve currency.

Wednesday, March 12, 2014

Advanced Snowboard Turns: Carving, Cross-under, and Quad-point turns with a digression into the flats

I have briefly searched the web for snowboard turns and no-one really addresses the subject well so here is my take.  I am going to briefly review basic sliding, carving and cross-under turns to create a common basis but then move into concepts that I have never seen described.  For brevity, I am going to mostly discuss toe-side initiation but of course heel-side is similar but opposite.

Sliding (windshield wiper) Turns

Sliding turns are the first turn that you learn.  The easiest way to do it is to put your weight on your front foot and slide the back foot perpendicular to the axis of the board (push it "out", or pull it "in") and go on edge.  This makes the board "slide" or "skid" -- you are moving in one direction but the board is oriented in another.

99% of snowboarders do this even when they think they are carving. 

The problem with sliding turns is that:
  • you lose speed (though this is an advantage on steep slopes)
  • you can't ride rough terrain or "bad" snow because you hit the bumps or ice chunks broadside.
  • it isn't nearly as fun!
If you hear a scraping noise when you turn or a lot of snow is kicked up, you are doing a sliding turn.

Carving Turns

Carving turns are when you put the board on its edge and let the natural curvature of the board dictate the turn.  Your direction of motion and the board are aligned throughout the turn.

A good way to learn a carving turn is to go straight down a beginner slope (or runout) and put your board on its toeside edge, without trying to turn.  Don't lean much; just try to ride straight on a steep toe-side edge.  The board will turn on its own and you'll probably fall the first time :-).  You'll leave a sharp curved groove in the snow rather than a sliding mark.  YOU don't turn the board; you put it on edge and it turns itself.  It's a bit of a scary feeling at first, but it's ultimately awesome!  As you build up speed, you will need to lean so far over to counterbalance the turn that you can reach out and touch the slope toeside.  This makes toeside a lot easier -- you can use your arms if your balance is off -- but heelside is possible too (if you are losing your edge carving heelside, bend your knees more).

The advantages of carved turns are:
  • you do not lose speed.  Even if you aren't a speedy rider, this is important so you don't have to walk on run-outs and cat trails.
  • you hit rough, crusty snow head on.  This is very stable.  You can't slide turn crusty snow...
  • it looks and feels awesome!
  • You can control your speed by doing a partial slide, partial carve turn.

Cross-under Turns

Cross-under turns are quick, linked carved turns.  Rather than physically moving your body to lean into the carve, you achieve the lean by moving the board to one side and then the other.  This allows you to turn quite quickly, and there is a fun "pop" feeling coming out of each turn.  It is important to master cross-under turns in order to do moguls well...

Quad-point Turns

Quad-point turns allow you to turn maybe 3-4 times faster than cross-under turns (several turns per second), optimize your body position in other subtle but important ways, and execute extremely tight turns.  The board carves so fast and cleanly that there is a sensation of swimming down the slope.  It's almost like the turn is pushing you downslope.  Your feet are acting independently, which makes them feel separate -- no longer connected to each other by the board.  More importantly, a quad-point initiation into a carved turn provides much greater control.

To introduce quad-point turns, let's talk a little theory.  In all prior turns you either went toe-side or heel-side.  But with your two feet there are actually 4 edges (points) that can be used, front-toe, front-heel, back-toe and back-heel.  Also, your front or back foot could be in neutral position (not on edge), giving you 6 basic positions.  By going front-toe and back-heel, you are putting a twist on the board.  The core of quad-point turns and quad-point riding is the understanding that you can use this twist to independently put different edges in contact with the slope to great effect.

To start a Quad-point turn, begin on your heel-side edge.  Now, go neutral on your front foot while holding the back-heel (edge) hard.  No longer gripping the snow, the front of the board will slip downslope while the back continues to track.  This will drive the board to turn downslope.  After initiating the turn, move your front foot from neutral to toe-side and your back foot to neutral.  This will cause your front edge to carve, pulling you through the turn.  Your back foot now goes toe-side to finish into a toe-side carve.  You've done a Quad-point turn!

As you can see, your feet edge independently which is the hallmark of a quad-point.

To link turns quickly, minimize the neutral time into a smooth transition; you will never be fully toe- or heel-side, because your front foot needs to be initiating the next turn as your back foot completes the prior one.

Besides awesomely fast linked turns, quad-point turning proactively drives the board during turn initiation and completion.  This results in faster turn initiation and a sharper, more stable and consistent turn.

Flats and Moguls

Have you ever been riding fast and flat (neutral position) on a run out only to catch the front edge and do a neck-breaking faceplant?  If you have, I'm sure you've heard the advice to lean back when riding flat (or to never ride flat, if you got bad advice :-)).  By thinking about riding using the quad-point theory, you can understand why leaning back works. 

First, understand the problem: you catch your toe edge when the board starts to slip very slightly toward the toe side rather than going straight down-slope.  Eventually this motion digs a groove in the snow and you catch the edge.  But if you lean back, you will catch the back edge only; this will knock your back foot back under your body, rotating the board.  This rotation corrects the slipping motion that caused you to catch the edge in the first place.

You can actually use this to turn; it's essentially a rear-foot initiated turn and can be very useful, especially in deep light powder where you need your weight a bit back to ride on top.  A neutral front foot and back-toe will catch the toe edge at the rear of the board, causing the rear of the board to go underneath you, rotating the board and allowing you to go full-toe.  This happens quicker than a weight transition to toeside (it moves the lighter board rather than your heavier body), so it lets you initiate turns more quickly.


I am still learning to ride moguls quickly on a board the way a skier does -- not like the snowboarders you see on YouTube.  I think the basic problem is that a snowboarder has a harder time rotating the board than a skier does, and fast turns are essential for mogul riding.  The technique I use is quad-point and takes advantage of the rotational power generated by catching the leading edge of the rear of the snowboard, just as we used it to straighten out on the flats.

I'll describe it starting with the heel-to-toe transition.  As you approach the face of the upcoming mogul heel-side, position yourself on the slope so that the front of the board is going to miss the mogul face (it's in the groove between moguls), and be neutral on the front foot, heel-side on the back.  As you hit the face, release your back foot's heel-side edge, keeping your weight somewhat forward and toeside.  Due to its angle, you'll hit the mogul face either flat or on the toe edge, just like catching an edge on the flat.  However, you are expecting it, so be prepared: this will kick your rear foot HARD under your body, rotating the board into a toeside turn.  Make SURE the front foot is neutral and outside the edge of the mogul.  If you hit the mogul with both front and back toeside edges, or your weight is not far enough forward, you'll go flying :-).

Now you are toeside, so use that edge to line up your front foot so it will miss the hard face of the next mogul; your weight and upper body are already turning forward and twisting to look backside over your front shoulder, releasing the front foot toeside while holding hard on the back foot toeside.  When you hit the mogul face with the rear foot, it will catch the back foot's heel-side edge and kick the back of the board hard underneath you, rotating it heelside for your next turn.

If you are doing it properly, it will seem like the board is kicked left, then right, then left again with little active control on your part.  Depending on your flexibility, your upper body won't move much, while your lower body whips back and forth through the turns.  And a point about 6 inches to a foot in front of your front foot will be strangely stable; this is the point of rotation.  It feels awesome, like you are a marble in a groove.

Good Luck and Always Have Fun!!!