Wednesday, October 28, 2015

The Open Source Appliance (a 2002 retrospective)

Today the Library of Congress adopted exemptions that recognize consumers' rights to modify (jailbreak) their owned devices regardless of that device's use of copyrighted software.  In recognition of this event, here is a document I wrote in 2002 on the subject.  It's pretty interesting to compare this to what actually happened.

The Open Source Appliance: A Manifesto

Rev 0.5 12/5/2002

The advent of inexpensive connectivity technologies1 has promised to drastically change the way home appliances operate. By communicating with a home computer, appliances will have the ability to provide a much richer user interface and a larger set of features than is available with the front panel buttons and the LCD display common in most appliances. By communicating with each other, appliances may be able to implement coordinated behaviors (inter-operate), creating a better, safer living environment. Interoperability will also allow appliances to use each other’s features, resulting in simpler and cheaper individual appliances, and will create features greater than those provided by any individual appliance.
The ideal end result is a “unified” appliance that has no unnecessary or redundant parts, and is aware of its environment, providing greater flexibility and more features at a lower cost than appliances that do not inter-operate.


But let’s be realistic: what will REALLY happen is that you’ll have appliances made by different companies, and so they will be almost entirely incompatible. For example, you may have an Acme (just a made-up company name) VCR and a Paragon phone system. Both of these systems will have a nifty Windows program (sorry, they only support Windows2) that allows you to control the appliance. However, when you try to call from work to record a show you’ll realize that the Paragon phone software can’t talk to this version of the Acme VCR software. Or maybe the call won’t even be picked up because Windows has crashed (again!).
Although some systems will interconnect as advertised, especially if all of your home appliances are made by the same manufacturer, the sheer number of different home appliances and different manufacturers makes it impossible to test all configurations, so many will not inter-operate. As an example of corporate ineffectiveness in ventures of this sort please examine your coffee table. How many remotes are sitting there? Unless you bought a special “universal” remote that was specifically created to communicate with many manufacturers’ devices, you probably have 3 or 4. And if you DID buy a “universal” remote, I think you’re already convinced…


The purpose of having appliance connectivity is to allow your devices to act in concert to implement coordinated behaviors or to allow devices to share resources. Through these behaviors, devices can provide enhanced features. For example, your stereo could mute when the phone rings (coordinated behaviors), and your answering machine could store its messages in your computer (resource sharing). Your answering machine would no longer need a cassette tape to record incoming messages, making it cheaper and more reliable. You could then view your messages through the computer’s display, and listen to them through the computer’s speakers (enhanced feature). In this way, connectivity improves quality of life and reduces appliance cost.
But how are a bunch of engineers (probably under great schedule pressure) going to implement the behavioral or resource sharing features that fit your lifestyle and your appliances? Although I am certain that a minimum functionality will be implemented, such as your basic connect-VCR-to-cable-box functionality, many features exist that would be great to have but do not fuel a marketing campaign. For example, I have an oven and a microwave with alarms that can’t be heard from everywhere in the house. So I would like them to beep (a different sound than ring!) my phones when the alarm goes off. And my phone could also “ding-dong” when the doorbell is rung.
I want to use my cordless phones as an intercom system. I want to use them to control what music is playing through my CD player (since the remote won’t reach far enough), either through a touchtone voice menu interface, or directly via the buttons on the phone. I want to put callers on “speaker phone”, causing music that happens to be playing to pause and the phone’s output to be routed through my living room speakers. I want my answering machine to display the history of received calls on my PC and let me listen to the messages through my main speaker system. I want incoming calls to be answered by a machine before the phones actually ring, and callers be told to hit “1” for me, and “2” for my girlfriend, and then ring all phones with two different sounds… no scratch that – I don’t want the bedroom phone to ring at night.
I’m just warming up! And this is only what I think I want. I won’t really know until I use the system for a while. You almost certainly need other features. Perhaps you run a small business and want your PC to run an inexpensive touch tone help or ordering system attached to your incoming line, or individual cordless phones that can call each other for interoffice communications. Or maybe you want to automatically store your favorite TV shows on your computer’s hard drive and allow fast forwarding through the commercials, essentially turning your PC into a digital video recorder.


No company is going to fully inter-operate with all other companies.
No company is going to give you the features you need.
No company is going to act against their self-interest to solve your individual problem.

The software is developed and sold before it is USED. This is always the case, which is why the first versions of software are so notoriously poor. Companies will also develop the software for a fictitious “average” user, so it is often too simple for technically savvy users, and too complicated for “please just work when I plug it in” types. It frequently is not well tested against rival components. Arbitrary restrictions are imposed so that “professional” or “small business” versions can be sold at 10 times the home consumer price.

What can be done?

Why wait until the corporations have failed to bring us usefully inter-operating products? We must take the initiative and solve these problems now!

The solution is to create open source home appliances.

The idea that consumers can actually fix faulty products is not radical outside of the software industry. For example, consumers are responsible for the maintenance of their houses and cars, and a large “home improvement” and “automobile after market” industry exists to help consumers in this task. In fact, under pressure from Congress, the [automobile association] recently released the diagnostic codes for cars’ internal computers so that individuals and independent repair shops can continue to fix all automotive problems.
It is possible to fix traditional home appliances (such as blenders). They often come with a parts list, so replacements can be ordered.
As with other products, the owner of a home appliance should have the right to fix or modify it. As software becomes central to the operation of an appliance (as in information or media appliances such as DVD players and phone systems), this right will be lost unless the appliance is based upon open source.
Open source is not a new idea. Significant open source projects currently exist. For example, over half of all internet web sites are served by an open source program called Apache. Apache itself is often run on an open source operating system (Linux). Also, the Netscape web browser is based on an open source project called “Mozilla”. Furthermore, many embedded systems (basically the computer industry’s term for all non-personal-computer devices that contain software, such as DVD players, cell phones, or portable mp3 players) are developed using open source development tools (gcc, gnu make, gdb, emacs, etc.).

A user would not need to purchase only open source home appliances to derive benefit from purchasing one open source appliance. A single open source product could communicate with other products, and code could be written to compensate for bugs or problems with the other product. For example, an open source phone system with an infrared light (IR) communicator accessory (essentially how the VCR’s remote control works) could be used to control a proprietary VCR. A consumer can then write a program to control the VCR through the phone system so that, for example, the consumer could literally telephone the VCR from work and tell it to record a show.

But I cannot program. How will Open Source help me fix a faulty appliance, add connectivity, or create a new feature?

An intrinsic part of Open Source projects is the existence of associated online communities. By “community,” I mean that users of the product communicate with each other about issues and problems with the product. A normal corporation’s product support site does not qualify as a “community” because all communications take place between individual users and the corporation. This makes it very difficult for users with similar problems to swap notes, especially since it is in the corporation’s interest not to report the number or severity of bugs in a product (lest it scare purchasers away). But in an open source user community, it is likely that you will find other users with the same problem, one of whom may be a programmer that can post a fix.
However, with open source, it is also possible to imagine groups of users hiring an independent programmer to implement special features or fix certain bugs. With a large enough user community, one could envision a market of programming consultants serving the user base. This has not previously occurred, perhaps because historically most users of Open Source products are programmers. However, a step has been taken in this direction – companies currently exist that provide support, add features, and fix bugs in Open Source projects. But instead of dealing with individual user groups on a bug-by-bug basis, they generally sell complete packages of the software (that contain all fixed bugs), and large, multi-user service contracts.
Over the long run, programming languages are becoming easier to use. Furthermore, the number of programmers is continually increasing, with the burgeoning computer industry. Ten years from now, adding a software feature to an open appliance may be a fun weekend project for the “software hobbyist,” just like wiring a surround speaker system or installing an after-market muffler is for the electronics and automotive hobbyist today.
Finally, mature open source programs generally have fewer bugs than their counterparts because more programmers become involved in fixing the bugs and more configurations can be tested. So you are less likely to have a problem in the first place.

Is an Open Source Appliance Company Possible?

While the purpose of this document is not to present a business case, this section is included to show that an open source appliance product is not incompatible with a profitable company.

The survivability of companies whose revenue or product line is significantly based upon open source software has been demonstrated by companies such as Wind River Systems, Cygnus, Red Hat, and many other Linux-based startups. As first stated by the Free Software Foundation’s “Free Software Definition,” the “free” aspect of open software refers more to the concept of “freedom” and less to that of “price.” These companies have traditionally made money either by providing an essential adjunct to the open source software, selling well-packaged easy-install versions of the open source software, or by selling maintenance and support contracts.
The business case for open source appliances is even stronger due to the fact that the open source appliance software is essentially useless without a hardware and firmware platform to run it on. The customer must purchase the company’s hardware in order to run the software, thus ensuring revenue. Although competing companies could start producing compatible hardware to take advantage of the software (as happened to IBM Corporation and the IBM PC computer architecture), or could port the software to their hardware, this is not necessarily bad. First of all, companies who restrict free enterprise in their product lines often fail. As an example, note that the other early PC architectures (Apple II, Amiga, Commodore, Apple Macintosh) are either gone or have little market share. Secondly, note that other companies only copy successful products, implying that the open source company would have to be successful before attracting copycats. Finally, the original company by definition has market leadership, a position that is easier to keep than to gain.

Research, Development, and Marketing

It would require a large company to produce a line of home appliances from scratch, and a huge company to market and support them. A small startup would need to use a different strategy. One strategy that would shorten research and development would be to license the hardware platforms from an existing manufacturer. In fact, many consumer electronic devices are currently OEMed, so the only nonstandard part of an agreement would be the negotiation to “open” the programming interface for the hardware. Of course, this approach makes it much easier for a competing company to sell compatible hardware (they can also license it), potentially eroding the advantage proprietary hardware confers (as described in the previous section).
In terms of marketing, it would probably be best to start small and to create high quality versions of the core A/V appliances: a cordless phone network, infrared controller, CD/DVD player, digital video recorder, and A/V receiver could make up the initial products. Until the open source community starts submitting code, the software will not deliver the features, interoperability, and stability promised by open source. Therefore, it does not make sense to “launch” the product line to the general department store consumer right away. In fact, a web interface selling to programmers and audiophiles (with perhaps some PR in audiophile and programming magazines) would give the products the necessary “incubation” period, and give a company the low overhead and reasonably high margins required for low volume business. Many people are already having a lot of fun modifying their home appliances – a pastime that has become especially popular on DVD players due to the DVD region encoding fiasco. This is an untapped customer base, requiring exactly the sort of niche product envisioned as a first release. When the software stabilizes and the feature set becomes greater than that of competitors’ appliances, a product “launch” could be undertaken.


In the near future, the computer shall be an intrinsic part of all devices. For open source to remain a viable and powerful concept, it must make the transition from the desktop into the world. Home appliance interoperability and intercommunication will enable this transition, both by allowing new software to be easily “downloaded” to the appliance, and by creating additional software complexity most easily solved by the open source methodology. The alternative cannot be repaired, has features that you don’t need, is missing those that you do, and is limited in interoperability by corporate feudalism. Let’s build a revolution!

1 The 900 MHz and 2.4 GHz radio bands (like wireless phones), power-line serial communications (like X10, IBM Home Director), Bluetooth, and the USB serial protocol (the next generation computer-to-peripheral connection)
2 The Windows operating system runs on the vast majority of home computers because of its rich set of document processing applications, so it is unlikely that a company will support other operating systems. But there are reasons for consumers to use other operating systems, like greater reliability, higher performance, or lower cost.

Sunday, October 25, 2015

Orange PI Plus Ubuntu 14.04 FAQ

The OrangePI Plus is a RaspberryPI-like piece of hardware that has awesome features at a great price point.  I bought a few of them to create a small ARM cluster.  Unfortunately the software needs some help (as is expected for a $39 board), but the open source community is delivering what is needed.

I chose to use the Ubuntu 14.04 XFCE distribution on my board because I wanted something solid with long term support.  This is what I discovered in my efforts.  Perhaps this FAQ will save you some time.

Kernel and Distribution

Use kernels provided by loboris described here:

Source code is here:

The kernels and distros provided by Xulong (OrangePI mfg) are not well supported, have no cleanly documented build procedure, etc.

Changing the display resolution in Lubuntu

Testing your monitor's capability

Boot your OPI+.  Now run:
sudo fbset -xres [horizontal resolution] -yres [vertical resolution]

for example:
sudo fbset -xres 1920 -yres 1080

(default password is orangepi)

This won't really work.  It will resize the screen without resizing the desktop so your desktop will now appear on the upper left area of the screen and a black or repeated desktop will appear on the bottom and the right.  But it proves that your hardware is capable of the resolution.

Setting the screen resolution in OrangePI Lubuntu

Your flash card is separated into two partitions, "/" and "BOOT".  Guess what: the BOOT partition is NOT mounted at /boot, but a copy of the files in BOOT is there.  It is actually mounted at /media/boot.  You can verify this by running "df".

If you put your flash card in a DIFFERENT computer, you should see 2 volumes, one of which is called "BOOT".  Click on that and you will see a bunch of script.bin variants, one for each supported resolution.

Rename the one you want to "script.bin" and reboot.


Enabling the Ethernet 

If your wired ethernet is not working (does not initialize and no blinky lights on the jack), you probably forgot to use the OPI+ kernel.  As above, put your flash card in a DIFFERENT computer and look at the BOOT partition.  Copy the uImage.OPI-PLUS file to "uImage".  ("uImage" is the name of the Linux kernel image on machines that use u-boot, as most ARM boards do.)
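The copy can be scripted from the other machine; here is a minimal sketch, assuming the BOOT volume auto-mounted under /media (that mount point is an assumption -- set BOOT to wherever the card actually appeared):

```shell
# Install the OPI+ kernel on the BOOT partition of the flash card.
# The mount point is an assumption; set BOOT to the real location.
BOOT=${BOOT:-/media/$USER/BOOT}
if [ -f "$BOOT/uImage.OPI-PLUS" ]; then
    cp "$BOOT/uImage" "$BOOT/uImage.orig"       # back up the current kernel
    cp "$BOOT/uImage.OPI-PLUS" "$BOOT/uImage"   # install the OPI+ kernel
    status=installed
else
    status=not-found                            # card is not mounted here
fi
echo "$status"
```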

You also need the proper kernel to use many of the other OPI hardware features...


Adding GPIO, LED, I2C and SPI access

sudo modprobe gpio_sunxi

To control the LEDs:

RED OFF: /bin/echo 0 > /sys/class/gpio_sw/normal_led/data
RED ON: /bin/echo 1 > /sys/class/gpio_sw/normal_led/data
GREEN OFF: /bin/echo 0 > /sys/class/gpio_sw/standby_led/data
GREEN ON: /bin/echo 1 > /sys/class/gpio_sw/standby_led/data

Add "gpio_sunxi" to /etc/modules to get it to autoload on boot.
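Putting the two LED commands together, here is a small blink loop using the sysfs paths above (the fallback to a scratch file is only so the sketch runs on machines that are not an OPI+):

```shell
# Blink the red LED three times via the gpio_sw sysfs interface.
LED=/sys/class/gpio_sw/normal_led/data
[ -w "$LED" ] || LED=$(mktemp)    # not an OPI+? practice on a scratch file
for i in 1 2 3; do
    echo 1 > "$LED"               # LED on
    sleep 0.2
    echo 0 > "$LED"               # LED off
    sleep 0.2
done
```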

Adding IR Remote Controls

sudo modprobe sunxi_ir_rx

Add "sunxi_ir_rx" to /etc/modules to get it to autoload on boot.

Enabling the analog audio output

sudo alsamixer
hit F6 (select soundcard)
select 0 audiocodec
Move right to "Audio Lineout"
Hit "m" to turn it on (should show 00 in the above box)
Hit ESC to exit 

Switching between analog and HDMI audio output

In XFCE choose XFCE Menu -> Sound & Video -> PulseAudio Volume Controls.  Go to the configuration tab.  Disable the one you don't want and audio will pop to the other.

Adding a SATA Hard Drive

This describes how to add a hard drive as additional data, not how to boot from it (you can boot from the 8GB eMMC).  There's nothing special; this is standard linux stuff:

Plug it in using a SATA cable.  Power up the board.  Then (WARNING: this erases the entire drive):

mkfs.ext4 -b 4096 /dev/sda
mkdir /data
mount /dev/sda /data

(verify by ls /data.  You should see lost+found.  Also run "df")

nano /etc/fstab
/dev/sda /data ext4 defaults 0 0

WIFI Command Line Configuration

sudo nmcli -a d wifi connect
(will ask which SSID, etc)

kswapd process using almost 100% of cpu

This is a bug in the kernel.  The easiest solution is to make some swap space:

sudo -i
dd if=/dev/zero of=/swap bs=1M count=1024
chmod 600 /swap
mkswap /swap
swapon /swap  
You can then tell the system not to use swap unless it absolutely must:

sysctl vm.swappiness=0
The number is a percentage from 0 to 100 indicating how much Linux should preemptively move RAM into swap.
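To make the swappiness setting survive a reboot, put it in the sysctl configuration (the file name below is just a common convention; appending the line to /etc/sysctl.conf also works):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 0
```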

Don't forget to add the swap to /etc/fstab so swap is enabled on boot:

/swap swap swap defaults 0 0
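After the next boot you can confirm the swap file is active by reading the kernel's accounting (standard Linux, nothing OPI-specific; the total will be 0 if no swap is on):

```shell
# Report total swap known to the kernel, in kB.
swap_total=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
echo "SwapTotal: ${swap_total} kB"
```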


Saturday, May 23, 2015

Network Neutrality and Bitcoin

Allowing the internet to provide different services for different applications is a more efficient use of existing resources and will result in higher quality of experience for end users.  Unfortunately telephone/video/internet service to the home is often a monopoly or near monopoly and service providers have a proven history of taking advantage of this fact with inferior service and high prices.  So as a society we cannot trust a for-profit monopoly-granted organization to not take advantage of service differentiation to confer unfair advantages to incumbent or internal services. This is why network neutrality is important.

However, there is another solution.  It is now technologically possible to create an automated marketplace that allows applications running at the end user or in the web-application to purchase an end-to-end pathway with specific quality guarantees.  The description below assumes a cable network, but mobile and other access networks are very similar.

This marketplace needs to be available to any customer and be the only way to purchase service.  This creates a "level playing field" that fosters innovation.  Through this marketplace an internet startup has access to the same bandwidth as an incumbent or internal web service provider.  Services sold in the market can be tracked to ensure that they do not affect existing "baseline" contracts with customers.

The Bitcoin network is the only payment processor that can service this network due to its security model, pseudo-anonymous transactions, continuous micro-payment capabilities (payment channels), and irreversible transfers.  With Bitcoin "payment channels" customers can continually pay fractions of a penny (pay-as-you-go), which ensures that the payment matches the service provided.  To protect the service provider, irreversible transfers are needed to eliminate chargebacks, fraud, and the overhead of collecting and storing the identity and payment information required with traditional trust-based payment networks.  Pseudo-anonymous transactions ensure the "level playing field" -- the market cannot offer a particular company a better deal if it does not know who is purchasing the service.

In short, it is not feasible to use traditional payment processors for this marketplace because of high fraud rates for digital goods, communication of identifying information (which could be used to offer cheaper service to favored customers), and inability to cost-efficiently handle continuous micro-payments. 

Introduction:  If you could trust your ISP, you would not want Network Neutrality

To understand this, you need to understand that there are multiple metrics used to measure network performance.  And services really do have different requirements.

This is called QOS or Quality of Service, and the 3 most common metrics are bandwidth, latency, and jitter.  Bandwidth is the one that you know -- it's how many bytes you'll receive per second, on average.  Latency is how long it takes to get a response after you send a message.  Jitter is how much the time between packet arrivals varies.

So if you are uploading or downloading all your photos from DropBox, all you care about is bandwidth.  If no bytes are transmitted for a few seconds, you don't care.  All you care about is when the interminable upload will be over!

If you are playing a twitch video game you care about latency -- you need to dodge that incoming RPG so you need the game to react to your keystroke as quickly as possible!  It's good to minimize jitter, but remember the game world is simulated on your system so it will not freeze.  However if you have ever seen other characters suddenly "pop" somewhere else, that is caused by a large packet gap (high jitter).

If you are watching a movie through a set-top box, you mostly care about jitter.  The set-top box does not have much memory; it can only hold a few seconds of the movie before playing it on the screen.  So you need a steady, unchanging stream of data or the movie will freeze and jerk.  Bandwidth is the second most important -- a higher bandwidth means clearer, HD video.  Latency is completely unimportant (within reason).  It does not matter if it takes the data packets .5ms or 1000ms to get to you -- the only difference is that the movie begins 1 second later.

From a consumer perspective, it does not make sense to pay for a connection that can simultaneously handle HD movies, massive uploads, and "twitch" video games 24 hours a day 7 days a week when you only use these services a few hours a day.

There is no technical barrier

Today it is technically possible to create custom QOS data flows into your home.  This is why your ISP does not need to fiddle with your cable box when you upgrade service and why, when you don't pay your bill, nobody needs to drive by to shut off your service.  In the mid 2000s, I helped specify the cable network protocol that enables this (it's called PCMM or Packet Cable MultiMedia) and worked at one of the first companies enabling PCMM services.  Today, similar protocols exist for mobile networks, and OpenFlow is an effort to create a unified protocol that will allow the creation of QOS flows across the entire network.  At the same time NFV (Network Function Virtualization) is an effort to move the source of the data closer to the consumer -- this ability could be part of the same marketplace.

But here is the problem

Network Service Providers* (NSP) have a monopoly on the data into your home.  Given the opportunity, they will behave no differently than any other for-profit company and abuse that monopoly to provide inferior service at high prices.

For example, when Fiber-to-the-Home entered my neighborhood, my current cable data provider offered to double my bandwidth for free.

And I have personal experience with how painful it is to deploy the simplest services into NSP networks.  In the mid 2000's I worked at a small cable-industry startup company.  We were demoing a program that sat in your system tray (where all the little icons are on the right) that looked like a speedometer.  But rather than just telling you the network speed, you could grab the needle and drag it higher to get more bandwidth to your home.  Pretty awesome right!  Surely there would be a market for this... but have you ever actually seen it?

The two key reasons for network neutrality are:

  1. Permission-less innovation:  The network service provider should not be placed in a position where it can offer or withhold bandwidth from a service, or negotiate differentiated pricing based on the service type or provider. If it is in this position it can influence or outright control what services run over its network.  In fact, by taking an active role in "allowing" a particular type of data on its network, it may find itself legally required (or scared by litigation) into acting as a "policeman" of this data.  Additionally, it may offer better pricing to incumbent or in-house services, which would have a terrible effect on the technological innovation that has driven our economy for the last 15 years.  Netflix, for example, would not exist, because it steals revenue from the ISPs' own cable TV offerings...

The market described above solves this problem...

  2. Breaking currently negotiated contracts:  If I am paying for 10mb/s, I paid for 10mb/s TRAVERSING the entire ISP network.  The contract did not say "10mb/s only if nobody else is paying more at that moment", or "we'll send you 10mb/s if packets magically appear on our network, but we are limiting what Netflix can send to us so in reality you'll only get 1mb/s."

I believe that point 2 is not an issue long term.  Do "coach" airline seats cost more because first class reduces the total number of coach seats?  Does "bleacher" seating at the ball game cost more because of box seats?  In my experience the opposite is true; companies are able to offer reduced "basic" prices and expanded capacity due to their high margin offerings.   As network capacity increases to fill high-margin QOS demand, ISPs will be able to meet their baseline promises and have extra bandwidth left over.

The real problem today is that the lack of a marketplace for QOS on-demand has caused ISPs to "oversubscribe" their networks -- that is they have collectively promised much more bandwidth to all their customers than they actually can provide.  So this ISP contractual "promise" is actually more of a maximum, when customers actually want a promised minimum.  The existence of a QOS market aligns what the customer wants to buy (guaranteed minimum performance for a certain time) with what the ISP is selling.

* In this blog post I'm going to use the term "service" to mean any company that provides a web site or other internet accessible service (like video streaming, instant chat, etc).  And I'll use "network provider" instead of ISP (internet service provider) because my observations apply to every networking company in the route from the service provider to the customer, not just the ISP that the customer has signed up for.

Thursday, April 16, 2015

Advanced Software Language Design Concepts

Minimal Specification

Minimal specification is the idea of describing exactly what is needed to accomplish an algorithm and nothing else.  Extraneous statements are often added to software: inefficiencies (conceptual mistakes), debugging, or logging.  These should be indicated as extraneous within the language.

As all of these statements are essentially commentary, let us propose "/:" to prefix an inessential line, "//" to prefix a traditional comment, and "/?" to prefix a documentation comment.  We'll add a "*" (as in "/*:") to specify the multi-line versions.

// Let's log now...
/: log(INFO,"This is a log message");

/*: log(INFO,"Contents of list");
    for l in list.items():
        log(INFO, l);
*/

But extra statements do not constitute the entirety of unnecessary information.  What about statement ordering?  Rather than specify unnecessary order, let's specify different syntax for lexical scoping rules that allow different ordering:
[] = any order
() = specific order
{} = only one of

So for example:

Point add(Point a, Point b)
  [
    x = a.x + b.x;
    y = a.y + b.y;
  ]
  return Point(x, y);

This is a very succinct way to increase parallelism in software.  A clever compiler can use this information to reorder instructions for optimization, spawn threads, or even start "micro-threads" (a short simultaneous execution on 2 processors of a multi-core machine which share the same stack before the moment of separation).

If the concept of minimal specification is applied throughout the language, there are quite a few other interesting language ideas that emerge.

Syntactic Specifications


Interfaces exist in one form or another in many programming languages.  However, the related type systems suffer from a lack of flexibility that causes them to be less than fully utilized.

Type specifications should be parametric.  That is, be able to specify multiple types simultaneously:

type Point({int, float, double} ElemType ) =
  ElemType x,
  ElemType y

(we don't need template <> notation, types are parametric)

You could quickly define a grouping of types (remember that {} means "one of"):

type Number  = {int, float, double}
In cases where the constituent types do not implement the same interface (do not have the same member operators), the operators available to a Number are the intersection of the operators available in its constituent types.
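To make the intersection rule concrete, here is a sketch in the proposed notation (the specific operators are only illustrative):

```
type Number = {int, float, double}

Number n = 5;
n = n + 1;    // ok: "+" is available for int, float, and double
n = n % 2;    // error: "%" is only available for int, so it is not in
              // the intersection of operators available to Number
```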

Aside: This is very different than the following 3-tuple:
type triple = (int, float, double)

Let's define a keyword: the "any" type means any type!

Let's specify the addition function, where the parameters can be heterogeneous Point types:
Point Add( Point a, Point b);

Let's specify the addition function where all objects must be the same fully-realized type:
ParamType Add( (ParamType = Point) a, ParamType b);
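The distinction between heterogeneous and same-type parameters maps onto type variables in existing languages. A hedged Python sketch using typing.TypeVar (the Point class and helper names are invented for illustration):

```python
from typing import TypeVar

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

P = TypeVar("P", bound=Point)

# Heterogeneous: a and b may be different Point realizations.
def add(a: Point, b: Point) -> Point:
    return Point(a.x + b.x, a.y + b.y)

# Homogeneous: a and b must be the same fully-realized type P.
def add_same(a: P, b: P) -> P:
    return type(a)(a.x + b.x, a.y + b.y)

p = add_same(Point(1, 2), Point(3, 4))
print(p.x, p.y)  # → 4 6
```
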

Interface Reductions

Languages today almost exclusively allow programmers to add to the existing symbol table.  The only notable exceptions are the public, private, and protected access specifiers in C++ and other object-oriented languages.

However, these "canned" namespaces are based on program structure assumptions that miss the complexity of modern software development.  For example, an API may have multiple levels of interface, depending on the application programmer's chosen power/complexity trade-off.  The implementation of the API may have specific functions needed to interface with an optional component.  These functions, and related member variables, could be removed during compilation if the other component is not part of the build, resulting in space and time efficiencies.  The implementation may have a debugging interface...

Instead, let us define interface groups and allow classes to include specific prototypes and interfaces into the group:

interface group API;
interface group GUI;
interface group data;

A module can choose what interface groups to present to other software layers.  It can combine pieces of other interface groups into a new group and present that.  This has the effect of reducing the namespace.
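Structural protocols in Python give a rough feel for interface groups (a sketch; the group and class names are invented for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class API(Protocol):          # "interface group API"
    def open(self) -> None: ...

@runtime_checkable
class Debug(Protocol):        # a debugging interface group
    def dump_state(self) -> str: ...

class Device:
    def open(self) -> None:
        pass
    def dump_state(self) -> str:
        return "ok"

d = Device()
# A layer handed only the API group sees a reduced namespace;
# the Debug group can be withheld (or compiled out) entirely.
print(isinstance(d, API), isinstance(d, Debug))  # → True True
```
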

Given an extremely flexible syntax parser, you should be able to specify most modern languages in a single language.

Semantic Specifications

Interfaces constitute syntactic specifications.  What about semantics?  A semantic specification defines how an object should behave.  Today we get by with concepts like "assert" and "unit test", but there is no formal specification of semantics.  Without a formal specification, engineers cannot write adhering implementations or formal proofs, and compilers cannot apply logical reasoning for optimization.

  For example:

  semantic stack(any a) = any x; assert(a == a.push(x).pop())

  semantic queue(any q) =
    any (x, y, first, second);
    q.push_back(x),
    q.push_back(y),
    first = q.pop(),
    second = q.pop(),
    assert(x == first),
    assert(y == second)

An interface actually consists of both syntax (interface) and test (semantic) specifications:

type List(any T) =
  def add(T a) {...},
  def remove(T a) {...},
  def push(T a) {...},
  def T pop() {...},

  semantic(List a, assert(a == a.add(x).remove(x))),
  implements semantics stack;
  implements semantics queue;
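These semantic specifications read like executable laws. In Python one could express them today as reusable checks applied to any conforming implementation; a sketch, using list and collections.deque as the implementations under test:

```python
from collections import deque

def stack_semantics(make, x=42):
    # The stack law "a == a.push(x).pop()": popping returns what was pushed.
    s = make()
    s.append(x)
    assert s.pop() == x

def queue_semantics(make, x=1, y=2):
    # The queue law: elements come out in insertion (FIFO) order.
    q = make()
    q.append(x)
    q.append(y)
    assert q.popleft() == x
    assert q.popleft() == y

stack_semantics(list)
queue_semantics(deque)
print("semantic checks passed")
```

The difference the manifesto proposes is that such laws would be part of the type itself, so the compiler could check them and reason from them, rather than the programmer maintaining them as external tests.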

Performance Specifications

Performance specification is, from a practical perspective, an important part of the semantic specification, although it is (generally) not part of the minimal specification (so we'll use the /: prefix).

Why is performance specification important?  A programmer is confronted with multiple implementations of an interface (say, a List or a Map).  To pick the optimal implementation he must match the usage patterns in his code with the implementation that implements those member functions most efficiently.  To do so correctly, he needs classes and member functions to be annotated with performance specifications.

type MyList(any T) =
   int length;
   def push(T a) {...}, //: O(p=1,m=1)
   def find(T a) {...}, //: O(p=length/2, m=1)
   def quickSort(T a) {...}, //: O(p=length*log(length), m=1)
   MyList(T) clone() {...}, //: O(p=length, m=length)

Note: given these performance specifications, it may be possible for the profiler to feed data back into the compiler to recommend the best implementation.
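That feedback loop could be sketched as cost arithmetic over the annotations. A minimal Python illustration (the implementation names and cost figures are invented, echoing the //: O(p=..., m=...) style above):

```python
import math

# Per-operation cost models (processor steps as a function of length n).
costs = {
    "ArrayList":  {"push": lambda n: 1, "find": lambda n: n / 2},
    "SortedList": {"push": lambda n: n, "find": lambda n: math.log2(n)},
}

def best_impl(profile, n=1000):
    """profile: {operation: call count}, as gathered by the profiler."""
    def total(name):
        return sum(calls * costs[name][op](n)
                   for op, calls in profile.items())
    return min(costs, key=total)

# Push-heavy workload: the O(1)-push implementation wins.
print(best_impl({"push": 1000, "find": 10}))  # → ArrayList
```

A find-heavy profile would instead select the implementation with the cheaper find, which is exactly the recommendation the compiler could annotate back into the source.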

Computer-Assisted Development

Integrated IDE

The language should not be defined solely in ASCII format.  Today's computers are fully capable of displaying binary data (pictures, music, etc.) in human-consumable form inside the context of a traditional program editor, so languages should allow this data to be included directly in source.

var Image SplashImage = [[actual image here]]

Computer Annotated Source

Continuing the philosophy of minimal specification, let us NOT specify the specific List implementation required for this task.  Let us specify only that it must be an object with a List and a GUI interface:

var (List, GUI) choices,

choices.push_back("last choice"),

The compiler can choose any object that provides both the List and GUI interfaces.  During a profiling run, the system keeps track of how often each API function was called.  Although this is not the case in the above example, let us imagine that the push_back() function was called repeatedly in a performance-sensitive area.

After execution, the system notices this and chooses an implementation of choices that optimizes the push and push_back functions, based on the performance annotations that are part of each class's definition (see above).  It annotates the source code to this effect, using the "inessential" marker "/" with the computer-can-change annotation "|":

var (List, GUI) choices,  //| instantiate DoublyLinkedList(string)

choices.push_back("last choice"),

If the programmer wants to override this choice, or stop it from ever changing, he can remove the computer-can-change annotation:

var (List, GUI) choices,  /: instantiate MyDoublyLinkedList(string)

Or of course, using the traditional method:
var MyDoublyLinkedList(string) choices;

Compiler Interpreter Equivalence