2012-02-25 05:27 UTC Living Standards
In a recent private discussion spawned from one on a W3C mailing list, I was defending the "living standard" model we use at the WHATWG for developing the HTML standard (where we just have one standard that we update more or less every day to make it better and better), as opposed to the "snapshot" model the W3C traditionally uses, where one has a "draft" that nobody is supposed to implement, and when it's "ready", that draft is carved in stone and placed on a pedestal. The usual argument against the living standard model is something like "engineering depends on static definitions, because otherwise communication becomes unreliable". I wrote a lengthy reply, which I include below in case anyone is interested.
With a living standard, one cannot change things arbitrarily. The only possible changes are new features, changes to features that aren't implemented yet or that have only had experimental implementations, and changes to bring the specification even more in line with what is needed for reliable communication, as the argument above puts it (or "interoperability", as I would put it), i.e. fixing bugs in the spec.
With a snapshot-based system, like the W3C TR/ page process, adding new features is done by creating a new version, so we'll ignore that; and experimental stuff is taken out before taking the snapshot, so we'll ignore that too. That leaves fixing bugs.
Now either one doesn't fix the bugs, or one does fix the bugs. If one fixes the bugs, then that means the spec is no more "static" than a living standard. And if one doesn't fix the bugs, then communication becomes unreliable.
So IMHO, it's the snapshot-based system that leads to unreliable communication. Engineering doesn't depend on static definitions, it depends on accurate definitions. With a platform as complicated as the Web, we only get accurate definitions by fixing bugs when they are found, which inherently means that the definitions are not static.
Note that we know this system works. HTML has been developed in a "living standard" model (called "working draft" and "editor's draft" by the W3C) for seven and a half years now. Interoperability on the Web has in that time become dramatically better than it ever was under the old "snapshot" system.
Also, note that the snapshot system without fixes, which is the system the W3C actually practices on the TR/ page (very few specs get errata applied, and even fewer get substantial in-place corrections in any sort of timely manner), has demonstrably resulted in incorrect specs. HTML4 is the canonical example: in practice that spec is woefully inaccurate and just poorly written, yet nothing was ever done about it. The result was that if you wanted to write a browser or other implementation that "communicated reliably" with other HTML UAs, you had to explicitly violate the spec in numerous ways, ways that were shared via the grapevine (the default value of the "media" attribute is the example I usually give of this).
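To make that media example concrete: HTML4 defined the default value of the media attribute on link and style to be "screen", but every deployed browser behaved as if the default were "all", and the current HTML standard defines it as "all" to match. A minimal illustration of the discrepancy:

    <!-- No media attribute. Per HTML4's letter, this stylesheet
         should apply only to "screen" (the spec's stated default).
         Every browser instead applies it to all media, including
         print; the current HTML spec says the default is "all". -->
    <style>
      body { font-family: serif; color: black; }
    </style>

An implementation that obeyed HTML4 here would skip that stylesheet when printing, and users would blame the browser, not the spec.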
And also, note that having a static specification doesn't ensure that nothing will ever change. HTML is a stark example of this: from HTML4 to the contemporary specification, many things have changed radically. But you might say that's my fault, so look instead to XML: that specification has changed such that implementations of the original standard no longer conform to the current standard. In fact, XML is arguably developed in a kind of "living standard" approach now, despite officially using the snapshot model. Things change, especially in software. And that's ok!
But change is incompatible with the snapshot model. Or at least, the snapshot model makes interoperability harder.
</soapbox>
Before commenting on this, please check to see if your argument is already refuted on the WHATWG FAQ: http://wiki.whatwg.org/wiki/FAQ