Saturday, March 19, 2022

Sometimes you just need a quick machine

This contraption winds wires.  I just took a scavenged 12V geared motor epoxied to a mounting plate and soldered up a 12V barrel jack.  Then I used an LM7805 regulator to knock that down to 5V, which I fed into an ESP32 dev board.  I used a TIP120 Darlington transistor (and a protection diode) as the motor driver, and mounted a piece of wood and 2 clips to the motor shaft.

The end result is a web-controlled wire winder!  I can clip 2 wires to this end, and start and stop the winding while holding the other ends of the wires.  Winding each wire individually in the reverse direction (with a drill) before clipping them to this winder creates a reverse twisting force that makes the final wire hang more or less straight.
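
The controller sketch is linked below ("ESP32 controller code"); a minimal sketch of the start/stop web control might look something like this (the GPIO number, SSID, and password are placeholders, not the values from my build):

#include <WiFi.h>
#include <WebServer.h>

const int MOTOR_PIN = 13;          // hypothetical GPIO driving the TIP120 base
WebServer server(80);

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  digitalWrite(MOTOR_PIN, LOW);    // motor off at boot
  WiFi.begin("yourSSID", "yourPassword");
  while (WiFi.status() != WL_CONNECTED) delay(500);
  server.on("/wind", []() { digitalWrite(MOTOR_PIN, HIGH); server.send(200, "text/plain", "winding"); });
  server.on("/stop", []() { digitalWrite(MOTOR_PIN, LOW);  server.send(200, "text/plain", "stopped"); });
  server.begin();
}

void loop() { server.handleClient(); }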


The web server (Arduino-based) also lets you blink an onboard LED for test purposes.

ESP32 controller code




Friday, November 19, 2021

ESP32-C3 RISC-V SBCs!

[photos: bare ESP32-C3 modules in front; prototype boards in back]
I am getting pretty excited about the new ESP32-C3 processor!

It is a module built around a CPU that implements the open source RISC-V instruction set.  The first thing that's interesting about these devices is how inexpensive they are.  For example, I picked a bunch of them up for less than $2 each (in front).  At this price, they are meant to be soldered onto custom PCBs, but for a bit more you can get a prototype board (in back).

It's amazing that we can have a WiFi and Bluetooth device with a healthy chunk of flash and RAM for that price!  I feel that not paying ARM royalties might have something to do with that: a royalty that is an unnoticeable fee on a $25 processor is a major portion of the cost of a $2 one.


What is really interesting to me are the long-term possibilities of an open instruction set CPU.  There are already many FOSS implementations of this instruction set in both hardware and software.  This instantly makes me ask: how awesome would the Arduino environment be if it contained a simulator and an integrated debugger?

But longer term I can see how an ecosystem of software, FPGA, and finally ASIC instantiations of custom "hardware" (with RISC-V's extensible instruction set) would allow simultaneous hardware and software development, and a price/volume upgrade path.

I've created a few simple Arduino sketches to help you start using the C3.  The first one just blinks the RGB and white/yellow LEDs.  The second one is more complicated; it connects to the strongest open WiFi signal and announces itself via mDNS as "myc3_b".  You can then access a web page (from your local network) at http://myc3_b.local (note NOT https).  Once you have connected, you can set the LED colors, or read/write any IO pin via web access.

Code is here: https://gitlab.com/gandrewstone/esp32-c3-projects
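
For a taste of how the mDNS announcement works in the ESP32 Arduino core, here is a stripped-down sketch (not the repo code; the real sketch scans for and joins the strongest open access point, so the SSID here is a placeholder):

#include <WiFi.h>
#include <ESPmDNS.h>

void setup() {
  WiFi.begin("someOpenSSID");          // placeholder; the real code picks the strongest open AP
  while (WiFi.status() != WL_CONNECTED) delay(500);
  MDNS.begin("myc3_b");                // makes the board reachable at http://myc3_b.local
  MDNS.addService("http", "tcp", 80);  // advertise the web server
}

void loop() {}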


Thursday, July 9, 2020

On platforms and masks

Here's a message to my fellow libertarians and other conservatives: never let your opposition define your platform or culture.  Instead, define your own platform from first principles.  It is statistically unlikely that every decision the opposition makes is wrong, so defining your position as the opposite of theirs forces your platform into stupid positions.

Rewind 2 years and we would have been absolutely ecstatic to imagine a society where not only is it perfectly commonplace and unremarkable to wear anonymity-protecting face coverings in public, but that such behavior would be aggressively protected for years to come in the name of public health.

Yes, mandatory face coverings may be seen as an erosion of personal freedom.  However, the existence of "decency" laws (that is, you can't be naked in public) makes such a mandate a matter of degree, not a fundamentally new restriction.  But we do need to be vigilant against further erosion so we don't end up with burqas.

How does the libertarian "platform" reconcile community transmission with the non-aggression principle (NAP)?  Based on your community's infection prevalence, the R value of the coronavirus, and the likelihood of an infection leading to death, there is a calculable statistical rate at which a person is implicitly being aggressive to another just by their presence.  Does that math justify a "forcible defense"?  What if someone coughs in your face?  Is that an act of aggression?  Note that a mandatory mask law is a directive by the state to its employees to enact such a "forcible defense" (as all laws are ultimately upheld by force).  Are mandatory mask laws therefore justified by the NAP?

Sunday, March 10, 2019

Resting your Mind in a Problem

Resting your mind in a problem is a technique I use to enhance my creativity -- to find solutions to problems and to invent new ideas.  The essence of it is to enter what most people would probably call a relaxed, meditative state, BUT with the problem or general subject you are trying to be creative around held lightly in your mind, rather than having it occupied by mantras, music, or somebody else's voice.  Another way to put it is that the goal is to enable David Gelernter's "low focus thought", but in a directed fashion.

Good times to do this:

Just before bed
Lounging around in bed in the AM
During an endurance workout
Resting after an endurance workout
During repetitive, safe physical tasks (raking leaves, shoveling, washing the car, brushing the pool)

Some exercises to help you do this:


Basic mental flexibility:


the endless hum.  Can you imagine humming a single note through the change of your breath from in to out, or vice versa?  You may at first think you are doing it, but if you really listen to your own imaginary voice you'll likely hear a short hitch.  But one's mental "hum" need not be connected to the physical...

imagine greater resolution.  Imagine a landscape scene with trees.  Why aren't you seeing each leaf?  Why not each vein on each leaf, and the stomata?  Why is your imagination limited to your physical visual resolution?

not thinking.  Stop the voice in your head.  Stop telling yourself to stop it.  Try thinking about breathing to stop thinking about other things, in, out, in, out, now stop saying in, out.  Now stop saying "Yay I did it!" :-).  See how long you can exist without linguistic thought.  Try to make a decision without voicing it or acknowledging in voice that you made the decision.

"hear" in your mind a song in other people's voices.  Not yourself singing it.  Hear the actual instruments with proper tone color not you humming the melody.  Hear multiple instruments.  (this is probably quite easy for musicians but hard for the rest of us)

Workups to resting your mind in a problem:

The key here is "resting"  -- you are not trying to force something.  Let your mind wander for creativity...


replay a novel.  See the plot in your mind like a movie.  If it didn't take hours, you probably skipped parts.  Go back and do it in greater detail.  Then in many times greater detail.  Stop verbally telling yourself you missed something!  Practice narrated visualization first, then no narration (visualization only).

imagine a flat endless plane.  Put stuff in it.  Let the stuff interact.  Let other stuff come in that your "voice" didn't suggest.

Put thoughts about the problem on the plane, or replay them visually not in words.  See multiple aspects coexist in your thought plane,  let other stuff come in.  If you get far off your problem let it go and see what comes up, maybe try to combine whatever came up with some part of your problem. 


Assess:


Later, when you assess: if you are never (say, out of 10 tries) able to stick to the problem, but instead move to the same other thing repeatedly, maybe your problem really isn't interesting to you and you should be working on that something else?

If you come up with a few good ideas, write them down right after the session (keep a pad by the bed)... if you fall asleep first, you'll probably have forgotten some by morning.  If this happens, you can sometimes recapture the idea(s) by resting your mind again (as soon as possible).


Saturday, August 27, 2016

My take on the Bitcoin Testnet Fork


Bitcoin Unlimited signals compatibility with BIP109 because it accepts a superset of what BIP109 allows.  It accepts larger blocks and more signature operations.  So essentially BIP109 is a "soft fork" of what Unlimited is capable of.  Unfortunately there is no way to signal that a client supports a superset of BIP109, so a choice between 2 imperfect alternatives had to be made.  In the context of the 1MB limit and BU's ability to produce these superset blocks, it made sense to signal BIP109 support.  At the same time, there is a passed BUIP covering strict BIP109 support that can be implemented quickly if needed.

On testnet, Bitcoin Unlimited has the mining majority, and a ~600K transaction was created that exceeded the BIP109 signature-checking restrictions.  Bitcoin Unlimited included it in a block, and so clients that strictly adhere to BIP109 (Bitcoin Classic) were forked off.  Bitcoin Unlimited could have avoided this problem by following our own philosophy: be conservative in what you generate but liberal in what you accept.

However, this event is very instructive regarding the role of consensus in the network.  In short, there is none.  Bitcoin is founded on the principle of zero trust.  If we rely on developers to produce perfect and compatible software, we are re-introducing trust.  And then the difference between Bitcoin and traditional financial networks becomes merely a difference in flavor (whom do you trust), not a fundamentally new concept.  We now see this in Ethereum -- the Ethereum developers have chosen to be the arbiters of their network, and participants must now trust these developers to accept the participants' transactions.

It should be instructive to note that Bitcoin Unlimited is unaffected by the situation.  And it also would be unaffected if the roles were reversed (if Bitcoin Unlimited was the minority hash-rate and a minor rule was being broken).

Ask yourself why you perceive it to be "bad" that Classic forked from the network.  I believe that it is "bad" because the rule that was broken was not important enough to warrant a fork.  Classic users would have preferred to follow the most-work chain and ignore this rule.

But what if a client produced a 100 coin coinbase transaction?  Would you prefer that your client follow this chain or fork?

From a zero trust, game theory perspective a client should follow the chain that maximizes the value of the coins owned by the user.  Therefore a client should only choose to fork when a rule change occurs that reduces the value of the user's coins.  From this observation, one can distill a minimum set of rules -- rules that are absolutely essential to protect the "money function" of Bitcoin.

Bitcoin Unlimited's "excessive block" and "excessive accept depth" algorithm is not just an arbitrary choice -- it's the optimal choice rational software can make in an untrusted network.  In essence, it asserts the client's preferences to the extent that the client can do so, but then follows the majority when the client's preference is rejected.
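
As a sketch (my paraphrase of the idea, not Bitcoin Unlimited's actual code), the chain-selection logic boils down to something like this:

#include <cstddef>
#include <vector>

// Illustrative types only; a real client tracks far more state.
struct Block { std::size_t sizeBytes; int blocksBuiltOnTop; };

// Follow a chain unless it contains an "excessive" block that the network has
// not yet buried acceptDepth blocks deep.  Once the majority has built that
// many blocks on top, the client concedes and follows the chain anyway.
bool shouldFollowChain(const std::vector<Block>& chain,
                       std::size_t excessiveBlockSize, int acceptDepth) {
  for (const Block& b : chain) {
    if (b.sizeBytes > excessiveBlockSize && b.blocksBuiltOnTop < acceptDepth)
      return false;  // hold out: our preference has not yet been overruled
  }
  return true;
}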

So Bitcoin Unlimited follows a philosophy of following the most-work chain unless a block breaks the "money function rules" -- increasing inflation, spending other users' coins, etc.  All of these activities undermine the value of the user's own coins, and in that situation a fork may preserve that value, since the value of the rule may be greater than the value added by having the highest-difficulty chain.

To date, Bitcoin has been sheltered by having a single-client (trust-the-developers) implementation, but over the last year the massive liability of this approach has become evident in the inability of that client to deliver the growth that Bitcoin so desperately needs.  As we move into a trustless, multi-client environment, Bitcoin client developers will have to ask themselves "how important are these rules, and what should my client do if the mining majority breaks them?"



Wednesday, March 9, 2016

The Bitcoin On-Chain Scaling Landscape

This post summarizes the proposed options for scaling Bitcoin on-chain.  While a simple block size increase to 2, 8 or even 20MB has been claimed by some engineers to be possible, the techniques below either propose to scale beyond those sizes or simply make the system operate more efficiently at those scales.


Basic theoretical work:


A Transaction Fee Market Exists Without a Block Size Limit (Peter Rizun): This paper argues that block propagation times limit block sizes which in turn will create competition for transaction space.

An Examination of Single Transaction Blocks and Their Effect on Network Throughput and Block Size (Andrew Stone):  This paper argues that headers-only mining places a limit on transaction throughput and therefore average block size that is based on underlying physical capability.  It then puts a limit on maximum block size by arguing that a rational miner would orphan any block that takes so long to validate that the miner is likely to be able to mine and validate a smaller block within the same time.



No Fork Necessary


Blocks only (Greg Maxwell??):  No transactions are forwarded to the node, which reduces node bandwidth.  Unfortunately these nodes cannot propagate transactions to peers, so they are only useful as endpoints in the P2P network.

Thin Blocks (Mike Hearn, implemented by Peter Tschipper):
  Blocks are relayed as a set of short transaction hashes plus any transactions the receiving peer does not already have.  Since peers have typically already seen almost all of a block's transactions, this greatly reduces the bandwidth spike when a block is found.

Weak Blocks (Gavin Andresen):
  Miners relay invalid block solutions whose difficulty is some fraction of the current difficulty.  This tells other miners/nodes what is being worked on, so when a solution is found, miners need only send the nonce, not the full block.  This should spread out the bandwidth spikes caused by block discovery, but may cause greater total bandwidth use.

Subchains (Peter Rizun):
  A formal treatment of weak blocks which adds the concept of weak blocks building on prior weak blocks recursively.  This solution should reduce weak block bandwidth down to nearly the same as without weak blocks, and also adds confidence to accepting 0-conf transactions since users know what blocks miners are working on.

Headers-only mining (independently deployed by miners, formally addressed by Andrew Stone, implemented by Gavin Andresen):
  Headers-only mining (mining empty blocks) allows greater on-chain scaling because it provides a feedback mechanism: miners can reduce the average block size if it is nearing their local physical capacity.  This effect requires no active agency; it is a natural result of headers-only mining while miners are waiting to acquire and validate the full block.


BlockTorrent (Jonathan Toomim):  A proposal to optimize access to blocks and transactions using algorithms inspired by BitTorrent.

 

Requires Fork


Basic block size increase (Satoshi Nakamoto, implemented in Bitcoin XT, Bitcoin Unlimited, Bitcoin Classic):  This technique recognizes that the current network infrastructure easily handles 1MB blocks and so simply suggests that the block size be increased.  Within this basic technique there are multiple proposals dealing with how to change the maximum size:
  1.   One time change to 2 MB
  2.   Bitpay's K*median(N blocks), on the Bitcoin Classic roadmap (see the sketch after this list)
  3.   Follow the most-work chain regardless of block size (Bitcoin Unlimited)
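
For example, a median-based limit in the spirit of proposal 2 might be computed along these lines (the multiplier K and the window of recent blocks are illustrative parameters here, not Bitpay's actual ones):

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: maximum block size as a multiple of the median recent block size.
std::size_t adaptiveMaxBlockSize(std::vector<std::size_t> recentSizes, double k) {
  if (recentSizes.empty()) return 1000000;  // no history yet: fall back to 1MB
  std::sort(recentSizes.begin(), recentSizes.end());
  std::size_t median = recentSizes[recentSizes.size() / 2];
  return static_cast<std::size_t>(k * median);
}
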
Interleaving blocks / GHOST (Yonatan Sompolinsky, Aviv Zohar):  This technique allows a child block to have multiple parents, so long as those parents have no conflicting transactions (or it specifies a precedence so all conflicts are resolved).  This allows blocks to be produced faster than 1 every 10 minutes.

Auxiliary (Extension) blocks (jl2012, Tier?): This technique proposes that the hash of another block be placed in the header of the current 1MB block.  That other block contains an additional 1MB (or more) of space.  This is a way of "soft-forking" a basic block size increase, with the proviso that older clients would not be able to verify the full blockchain and could be tricked into accepting double-spends, etc.


Bitcoin-NG (Ittay Eyal, Adem Efe Gencer, Emin Gun Sirer, Robbert van Renesse):  This proposal uses POW (proof of work) to elect a temporary "leader" who then serializes transactions into "micro-blocks" until a new POW block is found and a new leader elected.  As in the "basic block size increase" (and many others) the proposal still requires that every node see every transaction, and so scales up to network throughput limits.  The key advantages are that the leader can confirm transactions very quickly (restrained only by network propagation latencies) and that bandwidth use does not have the spikes associated with full block propagation.
   
Distributed Merkle hash trie (Andrew Stone):  This technique inverts the blockchain into blocks that store the current tx-out set in a distributed Merkle trie that can be traversed bitwise by Bitcoin address to validate the existence of an address.  The blockchain forms a history of the changes to this trie.  This allows clients to "sync up" quickly; they do not need the full block history.  Blocks can be produced at any node in the trie, containing transactions and blocks describing changes to the sub-trie.  This allows nodes to track only portions of the address space.  Miners only include portions of the trie that they have verified, resulting in slower confirmation times the lower you go.  Since txns can be included in any parent, high-fee transactions are confirmed closer to the root, resulting in a fee market.

 
Address sharding (Andrew Stone):  This is a simplification of the Distributed Merkle hash trie technique, formulated to fit into Bitcoin with minimal disruption via extension blocks.  This technique proposes that there exist a tree of extension blocks.  Each extension block can only contain transactions whose addresses have a particular prefix; transactions containing addresses with multiple prefixes are placed in the nearest parent extension block, or ultimately the root (today's block).  Clients that do not track all the extension blocks are "full" nodes for those they track and "SPV" nodes for those they do not.
Miners do not include the extension blocks that they do not track.  This causes those extension blocks to be mined less often, creating an interesting fee market with more expensive transactions gaining transaction space nearer to the root.

Segregated Witness (Pieter Wuille):  This technique separates the verification information (signatures) from the transaction data, and cleans up a lot of other stuff.  Fully validating versions do not save bandwidth or space -- in fact it consumes a small amount more -- but it would allow a new type of client.  This "non-verifying client" (different from SPV clients) essentially relies on others (miners) to do signature verification, in which case bandwidth is saved since the "witness" -- the signatures -- is not downloaded, but security is undermined.  Or the client could verify once (with no bandwidth savings) but not store the verification information on disk.  This is a much smaller security risk, although an attacker with direct access to the disk may be able to replace a transaction, making it "look" like an address has or does not have a balance.  However, since bandwidth and RAM are the current bottlenecks, this solution relies on the fact that today's network can handle larger amounts of data, similar to the basic block size increase technique.

Transaction space supply vs demand dynamics (Andrew Stone, Meni Rosenfeld):  This is a family of techniques to allow high fee paying transactions to cause the maximum block size to be temporarily expanded.  These techniques observe that the transaction space supply curve currently goes vertical at 1MB and are meant to address the possible shock ("economic change event") of hitting that limit by allowing transaction supply space to increase smoothly.  As such, this technique does not address a long-term demand increase, but may be combined with average or median based techniques to do so.



Bitcoin9000 (anonymous):  This paper proposes a combination of 3 techniques:
1. Subchains (renamed "diff blocks" by this paper)
2. Parallel transaction validation (transaction validation is nearly "embarrassingly parallel" -- this is an "expected" optimization)
3. A linked list of size-doubling extension blocks, with a 95% miner vote-in-blocks needed to "open" the next-deeper block
 

Off Chain Scaling (included for completeness)


Sidechains (Gregory Maxwell, implemented by Blockstream):  This technique renders bitcoins immobile on the Bitcoin blockchain, allowing a representation of them to be used on a different blockchain.  The current implementation uses humans as stewards of the "locked" bitcoins ("federated sidechains"), so it suffers from counterparty risk just like any "paper" money scheme.  There is a theoretical proposal to remove the human stewards.

Lightning Network (Joseph Poon, Thaddeus Dryja):  Payments are made through a network of bidirectional payment channels, with the blockchain used only to open and close channels and to settle disputes.




Wednesday, October 28, 2015

The Open Source Appliance (a 2002 retrospective)


Today the Library of Congress adopted exemptions (or here) that recognize consumers' rights to modify (jailbreak) the devices they own, regardless of those devices' use of copyrighted software.  In recognition of this event, here is a document I wrote in 2002 on the subject.  It's pretty interesting to compare this to what actually happened.

The Open Source Appliance: A Manifesto

Rev 0.5 12/5/2002

The advent of inexpensive connectivity technologies1 has promised to drastically change the way home appliances operate. By communicating with a home computer, appliances will have the ability to provide a much richer user interface and a larger set of features than is available with the front panel buttons and the LCD display common in most appliances. By communicating with each other, appliances may be able to implement coordinated behaviors (inter-operate), creating a better, safer living environment. Interoperability will also allow appliances to use each other’s features, resulting in simpler and cheaper individual appliances, and provide [], creating features which are greater than those provided by any individual appliance.
The ideal end result is a “unified” appliance that has no unnecessary or redundant parts and is aware of its environment, providing greater flexibility and more features at a lower cost than appliances that do not inter-operate.

Interoperability

But let’s be realistic: what will REALLY happen is that you’ll have appliances made by different companies, and so they will be almost entirely incompatible. For example, you may have an Acme (just a made-up company name) VCR and a Paragon phone system. Both of these systems will have a nifty Windows program (sorry, they only support Windows2) that allows you to control the appliance. However, when you try to call from work to record a show, you’ll realize that the Paragon phone software can’t talk to this version of the Acme VCR software. Or maybe the call won’t even be picked up because Windows has crashed (again!).
Although some systems will interconnect as advertised, especially if all of your home appliances are made by the same manufacturer, the sheer number of different home appliances and manufacturers makes it impossible to test all configurations, so many will not inter-operate. As an example of corporate ineffectiveness in ventures of this sort, please examine your coffee table. How many remotes are sitting there? Unless you bought a special “universal” remote that was specifically created to communicate with many manufacturers’ devices, you probably have 3 or 4. And if you DID buy a “universal” remote, I think you’re already convinced…

Functionality


The purpose of having appliance connectivity is to allow your devices to act in concert to implement coordinated behaviors or to allow devices to share resources. Through these behaviors, devices can provide enhanced features. For example, your stereo could mute when the phone rings (coordinated behaviors), and your answering machine could store its messages in your computer (resource sharing). Your answering machine would no longer need a cassette tape to record incoming messages, making it cheaper and more reliable. You could then view your messages through the computer’s display, and listen to them through the computer’s speakers (enhanced feature). In this way, connectivity improves quality of life and reduces appliance cost.
But how are a bunch of engineers (probably under great schedule pressure) going to implement the behavioral or resource sharing features that fit your lifestyle and your appliances? Although I am certain that a minimum functionality will be implemented, such as your basic connect-VCR-to-cable-box functionality, many features exist that would be great to have but do not fuel a marketing campaign. For example, I have an oven and a microwave with alarms that can’t be heard from everywhere in the house. So I would like them to beep (a different sound than ring!) my phones when the alarm goes off. And my phone could also “ding-dong” when the doorbell is rung.
I want to use my cordless phones as an intercom system. I want to use them to control what music is playing through my CD player (since the remote won’t reach far enough), either through a touchtone voice menu interface, or directly via the buttons on the phone. I want to put callers on “speaker phone”, causing music that happens to be playing to pause and the phone’s output be routed though my living room speakers. I want my answering machine to display the history of received calls on my PC and let me listen to the messages through my main speaker system. I want incoming calls to be answered by a machine before the phones actually ring, and callers be told to hit “1” for me, and “2” for my girlfriend, and then ring all phones with two different sounds… no scratch that – I don’t want the bedroom phone to ring at night.
I’m just warming up! And this is only what I think I want. I won’t really know until I use the system for a while. You almost certainly need other features. Perhaps you run a small business and want your PC to run an inexpensive touch tone help or ordering system attached to your incoming line, or individual cordless phones that can call each other for interoffice communications. Or maybe you want to automatically store your favorite TV shows on your computer’s hard drive and allow fast forwarding through the commercials, essentially turning your PC into a digital video recorder.

Reality:


No company is going to fully inter-operate with all other companies.
No company is going to give you the features you need.
No company is going to act against their self-interest to solve your individual problem.


The software is developed and sold before it is USED. This is always the case, which is why the first versions of software are so notoriously poor. Companies will also develop the software for a fictitious “average” user, so it is often too simple for technically savvy users, and too complicated for “please just work when I plug it in” types. It frequently is not well tested against rival components. Arbitrary restrictions are imposed so that “professional” or “small business” versions can be sold at 10 times the home consumer price.

What can be done?


Why wait until the corporations have failed to bring us usefully inter-operating products? We must take the initiative and solve these problems now!

The solution is to create open source home appliances.

The idea that consumers can actually fix faulty products is not radical outside of the software industry. For example, consumers are responsible for the maintenance of their houses and cars, and a large “home improvement” and “automobile aftermarket” industry exists to help consumers in this task. In fact, under pressure from Congress, the [automobile association] recently released the diagnostic codes for cars’ internal computers so that individuals and independent repair shops can continue to fix all automotive problems (http://www.cnn.com/2002/TECH/ptech/09/27/diagnosing.car.repairs.ap/index.html).
It is possible to fix traditional home appliances (such as blenders). They often come with a parts list, so replacements can be ordered.
As with other products, the owner of a home appliance should have the right to fix or modify it. As software becomes central to the operation of an appliance (as in information or media appliances such as DVD players and phone systems), this right will be lost unless the appliance is based upon open source.
Open source is not a new idea. Significant open source projects currently exist. For example over half of all internet web sites are served by an open source program called Apache (see http://www.netcraft.com/survey/). Apache itself is often run on an open source operating system (Linux). Also, the Netscape web browser is based on an open source project called “Mozilla” (see http://www.mozilla.org ). Furthermore, many embedded systems (basically the computer industry’s term for all non-personal computer devices that contain software, such as DVD players, cell phones, or portable mp3 players) are developed using open source development tools (gcc, gnu make, gdb, emacs, etc. see http://www.gnu.org/).

A user would not need to purchase only open source home appliances to derive benefit from purchasing one open source appliance. A single open source product could communicate with other products, and code could be written to compensate for bugs or problems with the other product. For example, an open source phone system with an infrared light (IR) communicator accessory (essentially how the VCR’s remote control works) could be used to control a proprietary VCR. A consumer can then write a program to control the VCR through the phone system so that, for example, the consumer could literally telephone the VCR from work and tell it to record a show.

But I cannot program. How will Open Source help me fix a faulty appliance, add connectivity, or create a new feature?

An intrinsic part of Open Source projects is the existence of associated online communities. By “community,” I mean that users of the product communicate with each other about issues and problems with the product. A normal corporation’s product support site does not qualify as a “community” because all communications take place between individual users and the corporation. This makes it very difficult for users with similar problems to swap notes, especially since it is in the corporation’s interest not to report the number or severity of bugs in a product (lest it scare purchasers away). But in an open source user community, it is likely that you will find other users with the same problem, one of whom may be a programmer that can post a fix.
However, with open source, it is also possible to imagine groups of users hiring an independent programmer to implement special features or fix certain bugs. With a large enough user community, one could envision a market of programming consultants serving the user base. This has not previously occurred, perhaps because historically most users of Open Source products are programmers. However, a step has been taken in this direction – companies currently exist that provide support, add features, and fix bugs in Open Source projects. But instead of dealing with individual user groups on a bug-by-bug basis, they generally sell complete packages of the software (that contain all fixed bugs), and large, multi-user service contracts.
Over the long run, programming languages are becoming easier to use. Furthermore, the number of programmers is continually increasing, with the burgeoning computer industry. Ten years from now, adding a software feature to an open appliance may be a fun weekend project for the “software hobbyist,” just like wiring a surround speaker system or installing an after-market muffler is for the electronics and automotive hobbyist today.
Finally, mature open source programs generally have fewer bugs than their counterparts because more programmers become involved in fixing the bugs and more configurations can be tested. So you are less likely to have a problem in the first place.

Is an Open Source Appliance Company Possible?


While the purpose of this document is not to present a business case, this section is included to show that an open source appliance product is not incompatible with a profitable company.

The survivability of companies whose revenue or product line is significantly based upon open source software has been demonstrated by companies such as Wind River Systems, Cygnus, Red Hat, and many other Linux-based startups. As first stated by the Free Software Foundation’s “Free Software Definition” (http://www.fsf.org/philosophy/free-sw.html), the “free” aspect of open software refers more to the concept of “freedom” and less to that of “price.” These companies have traditionally made money either by providing an essential adjunct to the open source software, selling well-packaged easy-install versions of the open source software, or by selling maintenance and support contracts.
The business case for open source appliances is even stronger because the open source appliance software is essentially useless without a hardware and firmware platform to run it on. The customer must purchase the company’s hardware in order to run the software, thus ensuring revenue. Although competing companies could start producing compatible hardware to take advantage of the software (as happened to IBM Corporation and the IBM PC architecture), or could port the software to their hardware, this is not necessarily bad. First of all, companies who restrict free enterprise in their product lines often fail. As an example, note that the other early PC architectures (Apple, Amiga, Commodore, Apple Macintosh) are either gone or have little market share. Secondly, note that other companies only copy successful products, implying that the open source company would have to be successful before attracting copycats. Finally, the original company by definition has market leadership, a position that is easier to keep than to gain.


Research, Development, and Marketing


It would require a large company to produce a line of home appliances from scratch, and a huge company to market and support them. A small startup would need to use a different strategy. One strategy that would shorten research and development would be to license the hardware platforms from an existing manufacturer. In fact, many consumer electronic devices are currently OEMed, so the only nonstandard part of an agreement would be the negotiation to “open” the programming interface for the hardware. Of course, this approach makes it much easier for a competing company to sell compatible hardware (they can also license it), potentially eroding the advantage proprietary hardware confers (as described in the previous section).
In terms of marketing, it would probably be best to start small and to create high quality versions of the core A/V appliances: a cordless phone network, infrared controller, CD/DVD player, digital video recorder, and A/V receiver could make up the initial products. Until the open source community starts submitting code, the software will not deliver the features, interoperability, and stability promised by open source. Therefore, it does not make sense to “launch” the product line to the general department store consumer right away. In fact, a web interface selling to programmers and audiophiles (with perhaps some PR in audiophile and programming magazines) would give the products the necessary “incubation” period, and give a company the low overhead and reasonably high margins required for low volume business. Many people are already having a lot of fun modifying their home appliances – a pastime that has become especially popular on DVD players due to the DVD region encoding fiasco (see these links for examples http://www.nerd-out.com/darrenk/, http://www.area450.com/). This is an untapped customer base, requiring exactly the sort of niche product envisioned as a first release. When the software stabilizes and the feature set becomes greater than that of competitors’ appliances, a product “launch” could be undertaken.

Conclusion


In the near future, the computer shall be an intrinsic part of all devices. For open source to remain a viable and powerful concept, it must make the transition from the desktop into the world. Home appliance interoperability and intercommunication will enable this transition, both by allowing new software to be easily “downloaded” to the appliance, and by creating additional software complexity most easily solved by the open source methodology. The alternative cannot be repaired, has features that you don’t need, is missing those that you do, and is limited in interoperability by corporate feudalism. Let’s build a revolution!


1 The 900MHz and 2.4GHz radio bands (like wireless phones), power-plug serial communications (like X10, IBM Home Director), Bluetooth, and the USB serial protocol (the next generation computer-to-peripheral connection)
2 The Windows operating system runs on the vast majority of home computers because of its rich set of document processing applications, so it is unlikely that a company will support other operating systems. But there are reasons for consumers to use other operating systems, like greater reliability, higher performance, or lower cost.

Sunday, October 25, 2015

Orange PI Plus Ubuntu 14.04 FAQ

The OrangePI Plus is a RaspberryPI-like piece of hardware that has awesome features at a great price point.  I bought a few of them to create a small ARM cluster.  Unfortunately the software needs some help (as is expected for a $39 board), but the open source community is delivering what is needed.

I chose to use the Ubuntu 14.04 XFCE distribution on my board because I wanted something solid with long term support.  This is what I discovered in my efforts.  Perhaps this FAQ will save you some time.

Kernel and Distribution

Use kernels provided by loboris described here:
http://www.orangepi.org/orangepibbsen/forum.php?mod=viewthread&tid=342

Source code is here:
https://github.com/loboris/OrangePI-Kernel
https://github.com/loboris/OrangePi-BuildLinux

The kernels and distros provided by Xulong (the OrangePI manufacturer) are not well supported, have no cleanly documented build procedure, etc.

Changing the display resolution in Lubuntu

Testing your monitor's capability

Boot your OPI+.  Now run:
sudo fbset -xres [horizontal resolution] -yres [vertical resolution]

for example:
sudo fbset -xres 1920 -yres 1080

(default password is orangepi)

This won't fully work: it will resize the screen without resizing the desktop, so your desktop will now appear in the upper left area of the screen and a black or repeated desktop will appear on the bottom and the right.  But it proves that your hardware is capable of the resolution.

Setting the screen resolution in OrangePI Lubuntu

Your flash card is separated into two partitions, "/" and "BOOT".  Guess what: the BOOT partition is NOT mounted at /boot; a copy of the files in BOOT is there.  It is actually mounted at /media/boot.  You can verify this by running "df".

If you put your flash card in a DIFFERENT computer, you should see 2 volumes, one is called "BOOT".  Click on that and you will see a bunch of files like:
script.bin.XXXXXXX

Rename the file for the resolution you want to "script.bin" and reboot.

References:
http://www.orangepi.org/orangepibbsen/forum.php?mod=viewthread&tid=342

Enabling the Ethernet 

If your wired ethernet is not working (it does not initialize and there are no blinky lights on the jack), you probably forgot to use the OPI+ kernel.  As above, put your flash card in a DIFFERENT computer and look at the BOOT partition.  Copy the uImage.OPI-PLUS file to "uImage".  This is the name of the Linux kernel on machines that use u-boot (ARM machines).

You also need the proper kernel to use many of the other OPI hardware features...

 

Adding GPIO, LED, I2C and SPI access

 run:
sudo modprobe gpio_sunxi

To control the LEDs:

RED OFF: /bin/echo 0 > /sys/class/gpio_sw/normal_led/data
RED ON: /bin/echo 1 > /sys/class/gpio_sw/normal_led/data
GREEN OFF: /bin/echo 0 > /sys/class/gpio_sw/standby_led/data
GREEN ON: /bin/echo 1 > /sys/class/gpio_sw/standby_led/data

Add "gpio_sunxi" to /etc/modules to get it to autoload on boot.

Adding IR Remote Controls

run:
sudo modprobe sunxi_ir_rx

Add "sunxi_ir_rx" to /etc/modules to get it to autoload on boot.

Enabling the analog audio output

sudo alsamixer
hit F6 (select soundcard)
select 0 audiocodec
Move right to "Audio Lineout"
Hit "m" to turn it on (should show 00 in the above box)
Hit ESC to exit 

Switching between analog and HDMI audio output

In XFCE choose XFCE Menu -> Sound & Video -> PulseAudio Volume Controls.  Go to the configuration tab.  Disable the one you don't want and audio will pop to the other.


Adding a SATA Hard Drive

This describes how to add a hard drive for additional data, not how to boot from it (you can boot from the 8GB eMMC).  There's nothing special; this is standard Linux stuff:

Plug it in using SATA cable.  Power up board.

mkfs.ext4 -b 4096 /dev/sda
mkdir /data
mount /dev/sda /data

(verify by ls /data.  You should see lost+found.  Also run "df")

nano /etc/fstab
add:
/dev/sda /data ext4 defaults 0 0


WIFI Command Line Configuration

sudo nmcli -a d wifi connect
(will ask which SSID, etc)


kswapd process using almost 100% of cpu


This is a bug in the kernel.  The easiest solution is to make some swap space:

sudo -i
dd if=/dev/zero of=/swap bs=1M count=1024
chmod 600 /swap
mkswap /swap
swapon /swap  
 
 
You can then tell the system not to use swap unless it absolutely must:

sysctl vm.swappiness=0
 
The number is a percentage from 0 to 100 indicating how aggressively Linux should preemptively move RAM into swap.

Don't forget to add the swap to /etc/fstab so swap is enabled on boot:

/swap swap swap defaults 0 0

 

Saturday, May 23, 2015

Network Neutrality and Bitcoin


Allowing the internet to provide different services for different applications is a more efficient use of existing resources and will result in a higher quality of experience for end users.  Unfortunately, telephone/video/internet service to the home is often a monopoly or near-monopoly, and service providers have a proven history of taking advantage of this fact with inferior service and high prices.  So as a society we cannot trust a for-profit organization granted a monopoly not to take advantage of service differentiation to confer unfair advantages on incumbent or internal services.  This is why network neutrality is important.

However, there is another solution.  It is now technologically possible to create an automated marketplace that allows applications running at the end user or in the web application to purchase an end-to-end pathway with specific quality guarantees.  It would look like this when connected to cable networks (mobile and other networks are very similar):



This marketplace needs to be available to any customer and be the only way to purchase service.  This creates a "level playing field" that fosters innovation.  Through this marketplace, an internet startup has access to the same bandwidth as an incumbent or internal web service provider.  Services sold in the market can be tracked to ensure that they do not affect existing "baseline" contracts with customers.

The Bitcoin network is the only payment processor that can service this marketplace due to its security model, pseudo-anonymous transactions, continuous micro-payment capabilities (payment channels), and irreversible transfers.  With Bitcoin "payment channels", customers can continually pay fractions of a penny (pay-as-you-go), which ensures that the payment matches the service provided.  To protect the service provider, irreversible transfers are needed to eliminate chargebacks, fraud, and the overhead of collecting and storing the identity and payment information required with traditional trust-based payment networks.  Pseudo-anonymous transactions ensure the "level playing field" -- the market cannot offer a particular company a better deal if it does not know who is purchasing the service.
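
To make this concrete, a purchase request in such a marketplace might carry fields like the following (purely hypothetical; no such protocol exists today, and every name here is invented for illustration):

#include <cstdint>
#include <string>

struct QosPurchase {
  std::string srcEndpoint;       // application buying the flow
  std::string dstEndpoint;       // service it is reaching
  uint32_t    minBandwidthKbps;  // guaranteed minimum throughput
  uint32_t    maxLatencyMs;      // round-trip bound (games, calls)
  uint32_t    maxJitterMs;       // arrival-variance bound (streaming video)
  uint32_t    durationSec;       // how long the flow is purchased for
  std::string paymentChannelId;  // Bitcoin payment channel funding it, pay-as-you-go
};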

In short, it is not feasible to use traditional payment processors for this marketplace because of high fraud rates for digital goods, communication of identifying information (which could be used to offer cheaper service to favored customers), and inability to cost-efficiently handle continuous micro-payments. 


Introduction:  If you could trust your ISP, you would not want Network Neutrality

To understand this, you need to understand that there are multiple metrics used to measure network performance.  And services really do have different requirements.

This is called QOS, or Quality of Service, and the 3 most common metrics are bandwidth, latency, and jitter.  Bandwidth is the one that you know -- it's how many bytes you'll receive per second, on average.  Latency is how long it takes to get a response after you send a message.  Jitter is how much the time between packet arrivals varies.

So if you are uploading or downloading all your photos from DropBox, all you care about is bandwidth.  If no bytes are transmitted for a few seconds, you don't care.  All you care about is when the interminable upload will be over!

If you are playing a twitch video game you care about latency -- you need to dodge that incoming RPG, so you need the game to react to your keystroke as quickly as possible!  It's good to minimize jitter too, but remember the game world is simulated on your system so it will not freeze.  However, if you have ever seen other characters suddenly "pop" somewhere else, that was caused by a large packet gap (high jitter).

If you are watching a movie through a set-top box, you mostly care about jitter.  The set-top box does not have much memory; it can only hold a few seconds of the movie before playing it on the screen.  So you need a steady, unchanging stream of data or the movie will freeze and jerk.  Bandwidth is the second most important -- a higher bandwidth means clearer, HD video.  Latency is completely unimportant (within reason).  It does not matter if it takes the data packets .5ms or 1000ms to get to you -- the only difference is that the movie begins 1 second later.

From a consumer perspective, it does not make sense to pay for a connection that can simultaneously handle HD movies, massive uploads, and "twitch" video games 24 hours a day 7 days a week when you only use these services a few hours a day.

There is no technical barrier

Today it is technically possible to create custom QOS data flows into your home.  This is why your ISP does not need to fiddle with your cable box when you upgrade service, and why, when you don't pay your bill, nobody needs to drive by to shut off your service.  In the mid 2000s, I helped specify the cable network protocol that enables this (it's called PCMM, or Packet Cable MultiMedia) and worked at one of the first companies enabling PCMM services.  Today, similar protocols exist for mobile networks, and OpenFlow is an effort to create a unified protocol that will allow the creation of QOS flows across the entire network.  At the same time, NFV (Network Function Virtualization) is an effort to move the source of the data closer to the consumer -- this ability could be part of the same marketplace.



But here is the problem

Network Service Providers* (NSP) have a monopoly on the data into your home.  Given the opportunity, they will behave no differently than any other for-profit company and abuse that monopoly to provide inferior service at high prices.

For example, when Fiber-to-the-Home entered my neighborhood, my current cable data provider offered to double my bandwidth for free.

And I have personal experience with how painful it is to deploy even the simplest services into NSP networks.  In the mid 2000s I worked at a small cable-industry startup.  We were demoing a program that sat in your system tray (where all the little icons are on the right) and looked like a speedometer.  But rather than just telling you the network speed, you could grab the needle and drag it higher to get more bandwidth to your home.  Pretty awesome, right?  Surely there would be a market for this... but have you ever actually seen it?

The two key reasons for network neutrality are:

  1. Permission-less innovation:  The network service provider should not be placed in a position where it can offer or withhold bandwidth from a service, or negotiate differentiated pricing based on the service type or provider.  If it is in this position, it can influence or outright control what services run over its network.  In fact, by taking an active role in "allowing" a particular type of data on its network, it may find itself legally required (or scared by litigation) into acting as a "policeman" of that data.  Additionally, it may offer better pricing to incumbent or in-house services, which would have a terrible effect on the technological innovation that has driven our economy for the last 15 years.  Netflix would not exist, because it is stealing cable TV revenue...


The market described above solves this problem...

  2. Breaking currently negotiated contracts:  If I am paying for 10mb/s, I paid for 10mb/s TRAVERSING the entire ISP network.  The contract did not say "10mb/s only if nobody else is paying more at that moment", or "we'll send you 10mb/s if packets magically appear on our network, but we are limiting what Netflix can send to us so in reality you'll only get 1mb/s."

I believe that point 2 is not an issue long term.  Do "coach" airline seats cost more because first class reduces the total number of coach seats?  Does "bleacher" seating at the ball game cost more because of box seats?  In my experience the opposite is true; companies are able to offer reduced "basic" prices and expanded capacity due to their high-margin offerings.  As network capacity increases to fill high-margin QOS demand, ISPs will be able to meet their baseline promises and have extra bandwidth left over.

The real problem today is that the lack of a marketplace for QOS on-demand has caused ISPs to "oversubscribe" their networks -- that is, they have collectively promised much more bandwidth to their customers than they can actually provide.  So this ISP contractual "promise" is actually more of a maximum, when what customers actually want is a promised minimum.  The existence of a QOS market aligns what the customer wants to buy (guaranteed minimum performance for a certain time) with what the ISP is selling.


* In this blog post I'm going to use the term "service" to mean any company that provides a web site or other internet accessible service (like video streaming, instant chat, etc).  And I'll use "network provider" instead of ISP (internet service provider) because my observations apply to every networking company in the route from the service provider to the customer, not just the ISP that the customer has signed up for.

Thursday, April 16, 2015

Advanced Software Language Design Concepts

Minimal Specification


A minimally specified program describes exactly what is needed to accomplish an algorithm and nothing else.  For example, extraneous statements are often added to software; these include inefficiencies (conceptual mistakes), debug statements, and logging.  Such statements should be marked as extraneous within the language.

As all of these statements are essentially commentary, let us propose "/:" to prefix an inessential line, "//" to prefix a traditional comment, and "/?" to prefix a documentation comment.  We'll add a "*" (for example "/*:") to specify multi-line versions.

// Let's log now...
/: log(INFO,"This is a log message");

/*: log(INFO,"Contents of list");
for l in list.items()
  [
  log(INFO,l);
  ]
*/

But extra statements do not constitute the entirety of unnecessary information.  What about statement ordering?  Rather than specify unnecessary order, let's define different lexical scoping syntax that allows different orderings:
[] = any order
() = specific order
{} = only one of

So for example:

Point add(Point a, Point b)
  (
    [
    x = a.x+b.x;
    y = a.y+ b.y;
    ]
  return Point(x,y);
  )

This is a very succinct way to increase parallelism in software.  A clever compiler can use this information to reorder instructions for optimization, spawn threads, or even start "micro-threads" (a short simultaneous execution on 2 processors of a multi-core machine which share the same stack up to the moment of separation).
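
To illustrate in a present-day language, the "any order" block above gives a compiler the freedom to do something like this hand-written C++ translation (whether the parallelism is worth the thread overhead would, of course, be the compiler's call):

#include <future>

struct Point { double x, y; };

Point add(const Point& a, const Point& b) {
  // The [] block imposes no ordering, so the two sums may run concurrently.
  auto fx = std::async(std::launch::async, [&] { return a.x + b.x; });
  auto fy = std::async(std::launch::async, [&] { return a.y + b.y; });
  return Point{fx.get(), fy.get()};  // the enclosing () block then sequences the return
}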

If the concept of minimal specification is applied throughout the language, there are quite a few other interesting language ideas that emerge.
 

Syntactic Specifications

Interfaces

Interfaces exist in one form or another in many programming languages.  However, the related type systems suffer from a lack of flexibility that causes them to be less than fully utilized.

Type specifications should be parametric.  That is, be able to specify multiple types simultaneously:

type Point({int, float, double} ElemType ) =
  [
  ElemType x,
  ElemType y
  ]

(we don't need template <> notation, types are parametric)

You could quickly define a grouping of types (remember that {} means "one of"):

type Number  = {int, float, double}
In cases where the constituent types do not implement the same interface (do not have the same member operators), the operators available to Number are the intersection of the operators available in its constituent types.

Aside: This is very different than the following 3-tuple:
type triple = (int, float, double)

Let's define a keyword: the "any" type means any type!

Let's specify the addition function, where the parameters can be heterogeneous Point types:
Point Add( Point a, Point b);

Let's specify the addition function where all objects must be the same fully-realized type:
ParamType Add( (ParamType = Point) a, ParamType b);

Interface Reductions

Languages today almost exclusively allow programmers to add to the existing symbol table.  The only notable exception is the use of public, private, and protected in C++ and other object-oriented languages.

However, these "canned" namespaces are based on program structure assumptions that miss the complexity of modern software development.  For example, an API may have multiple levels of interface, depending on the application programmer's chosen power/complexity trade-off.  The implementation of the API may have specific functions needed to interface with an optional component; these functions, and related member variables, could be removed during compilation if the other component is not part of the build, resulting in space and time efficiencies.  The implementation may also have a debugging interface...

Instead, let us define interface groups and allow classes to include specific prototypes and interfaces into the group:

interface group API;
interface group GUI;
interface group data;




A module can choose which interface groups to present to other software layers.  It can combine pieces of other interface groups into a new group and present that.  This has the effect of reducing the namespace.

Given an extremely flexible syntax parser, you should be able to specify most modern languages in a single language.

Semantic Specifications


Interfaces constitute syntactic specifications.  What about semantics?  A semantic specification defines how an object should behave.  Today we get by with concepts like "assert" and "unit test", but there is no formal specification of semantics.  Without a formal specification, engineers cannot write adhering implementations or formal proofs, and compilers cannot apply logical reasoning for optimization.

  For example:

  semantic stack(any a) = assert(a == a.push(x).pop())

  semantic queue(any a) =
    (
    any (x,y,first,second);
    a.push_back(x),
    a.push_back(y),
    first = a.pop(),
    second = a.pop(),
    assert(x == first),
    assert(y == second)
    )

An interface actually consists of both syntax (interface) and test (semantic) specifications:

type List(any T) =
  [
  def add(T a) {...},
  def remove(T a) {...},
  def push(T a) {...},
  def T pop() {...},

  semantic(List a, assert(a == a.add(x).remove(x))),
  implements semantics stack;
  implements semantics queue;
  ]


Performance Specifications

Performance specification is an important part of semantic specification from a practical perspective, although it is (generally) not part of the minimal specification (so we'll mark it with the inessential "//:" comment prefix).

Why is performance specification important?  A programmer is confronted with multiple implementations of an interface (say, a List or a Map).  To pick the optimal implementation, he must match the usage patterns in his code with the implementation that implements those member functions most efficiently.  To do so correctly, he needs classes and member functions to be annotated with performance specifications.

type MyList(any T) =
[
   int length;
   def push(T a) {...}, //: O(p=1,m=1)
   def find(T a) {...}, //: O(p=length/2, m=1)
   def quickSort(T a) {...}, //: O(p=length*log(length), m=1)
   MyList(T) clone() {...}, //: O(p=length, m=length)
]

Note, given these performance specifications it may be possible for the profiler to feed back data into the compiler to recommend the best implementation.

Computer-Assisted Development

Integrated IDE

The language should not be defined solely in ASCII format.  Today's computers are fully capable of displaying binary data (pictures, music, etc.) in human-consumable format inside the context of a traditional program editor, so languages should allow this data to be included.

var Image SplashImage = [[actual image here]]

Computer Annotated Source

Continuing the philosophy of minimal specification, let us NOT specify the specific list implementation required for this task.  Let us just specify that it must be an object with a List and a GUI interface:

var (List, GUI) choices,

choices.push("a").push("b").push("c"),
choices.push_back("last choice"),
choices.showDialog(),

The compiler can choose any object that provides both the List and GUI interfaces.  During profiling execution, the compiler keeps track of how often each API was called.  Although this is not the case in the above example, let us imagine that the push_back() function was called repeatedly in a performance sensitive area.

After execution, the system notices this and chooses an implementation of choices that optimizes the push and push_back functions, based on the performance annotations that are part of each class's definition (see above).  It annotates the source code to this effect, using the "inessential" marker "//" with the computer-can-change annotation "|":

var (List, GUI) choices,  //| instantiate DoublyLinkedList(string)

choices.push("a").push("b").push("c"),
choices.push_back("last choice"),
choices.showDialog(),

If the programmer wants to override this choice or stop it from ever changing he can remove the computer-can-change annotation:

var (List, GUI) choices,  /: instantiate MyDoublyLinkedList(string)

Or of course, using the traditional method:
var MyDoublyLinkedList(string) choices;


Compiler Interpreter Equivalence