2008-01-14 09:01 UTC
The competition for you to come up with the best test for Acid3
As many of you will have heard by now,
I've been working on the next Acid Test. Acid Tests are a way to
encourage browser vendors to focus on interoperability. With the Box
Acid Test, Todd Fahrner highlighted the CSS box model, and the
resulting interoperability was one of the first big successes of the
movement towards having browsers properly implement Web standards. Acid2
tested a broader range of technologies, but primarily it was focused
on the static processing of HTML and the static rendering of CSS.
With Acid3, we are focusing on the dynamic side of the Web. I have
a work in
progress which consists of a few rendering tests and 84 subtests,
little functions that test specific things from script. But I'd like
to have a round 100. That's where you come in. I'm announcing a
competition to fill the last sixteen tests!
You have one week to submit one or more tests that fulfill all the
following criteria:
The test must consist of the body of a JavaScript function which
returns 5 when the test passes, and which throws an exception
otherwise. It doesn't matter what kind of exception.
The test must compile with no syntax errors in Firefox 2, IE 7,
Opera 9.25, and Safari 3. (You can use eval() to test
things that are related to syntax errors, though.)
The test must not crash any of Firefox 2, IE 7, Opera 9.25, and
Safari 3.
The test must fail (throw an exception) in either a Firefox
trunk build from January 2008 or a Webkit trunk build from
January 2008 (or, ideally, both). (Opera and IE are failing plenty
of tests already, I don't want to add more tests that only fail in
one of those. Of course if you find something that fails in Firefox
or Webkit and Opera or IE, so much the better.)
The behaviour expected by the test must be justifiable using
only standards that were in the Candidate Recommendation stage or
better in 2004. This includes JavaScript
(ECMAScript 3), many
W3C specs, RFCs, etc.
You must be willing to put your test into the public domain. (I
don't want us to end up with any copyright problems later!)
If you have a test that fulfills all the above conditions, e-mail it to me (ian@hixie.ch), along
with a brief justification citing chapter and verse from the relevant
specs you are using, and telling me which build of Webkit or Firefox
fails the test.
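To illustrate the expected shape, here's a hypothetical sketch of the kind of thing a submission might look like (it's not a real subtest, and it doesn't try to satisfy the trunk-build failure requirement; it just shows the form):

  // Hypothetical sketch only; the actual submission would be just the function body.
  function exampleSubtest() {
    var e = document.createElement("div");
    e.appendChild(document.createTextNode("test"));
    if (e.childNodes.length != 1)
      throw "expected exactly one child node";
    if (e.firstChild.data != "test")
      throw "text node data did not survive the round trip";
    return 5; // returning 5 signals a pass
  }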
I will then make the 16 final tests from the best submissions, and
will put the names of the people who submitted those tests into the JS
comments in the test as recognition.
Evolution is evident and well-known in biological species, and can be seen in
artificial environments
quite easily, but evolution actually applies to any environment with moderately well-defined entities that have moderately measurable properties, where these entities
appear and disappear over time.
For example, companies, as a species, are subject to evolution. They are born,
they live, they die; and they each have unique properties (different ways that they operate)
which determine their characteristics. Actually, the whole of the species "group of
people with a common agenda" is subject to evolution, but let's look specifically at
companies that try to convince people to use their products or services in the software
industry, since that's what I'm familiar with.
For a long time, the software industry consisted primarily of one kind of company.
First the company would be founded with an idea, and then the company would get venture
capital funding, which would be used to fund software development, the fruits of which
would be sold to customers, and the money raised from those sales would fund further
development of more software, and the cycle continued.
These companies die when they run out of money. Since venture capital isn't infinite
(investors will stop pouring money in when they realise that they are never going to
see any of it again), the key is raising money from software sales. There are several ways
of ensuring that a company will get money from software sales:
If the product being sold is the only product in the market that solves the needs
of the users it targets, then it is almost certain that the majority of the target audience
will buy the product. This is called a monopoly.
If the product being sold is by far the best product in its category, then,
all other things (especially the price) being equal, it is likely that much of the target
audience will buy the product. Making the best product often requires disproportionately
more effort than creating a mediocre product, though. It is also difficult to maintain
the honour of having the best product; it requires constant development to keep ahead
of the competition. It is much easier to improve a mediocre product to be almost as good
as the best product than it is to take a product that is already the best product and
make it significantly better.
If the product being sold is the cheapest product in its category, then
it is likely that much of the target audience will buy the product, so long as the product
is good enough. How good the product has to be to be "good enough" is a function
of how much cheaper the product is relative to the other products.
If the product is being sold to users of existing products in the same product category,
then the target audience is more likely to buy the product if the total cost of switching
to the new product is lower than the cost of switching to another product. When the users
are using an older version of the new product, then this is called vendor lock-in.
When the users are using an older version of another product, then this is called
migration.
Each of these strategies has a counter-strategy. For example, to counter an
incumbent with a monopoly position, a company "just" has to provide a product of equal value.
To counter a competitor with the best product, a company "just" has to provide a product that is
better. To counter a competitor with a cheaper product, a company "just" has to lower its prices.
To counter a competitor whose users would find it more expensive to switch to the company's
new product than they would to upgrade to the newer version of the competition's product,
the company "just" has to provide a transition path (if the cost of switching vendor would come
from having to convert all existing data, for instance, a company could "just" support the
competition's file formats).
Of course, that's often easier said than done. A company can't lower its prices to below
its operating costs, as doing so would cause the company to eventually run out of money and die.
Similarly, making the best product is significantly more difficult than making a mediocre product,
and a company can run out of funds while trying.
It turns out, though, that the rules change when the competing companies are unequal.
If a company has a lot of money, there are a number of tricks it can play to compete with
companies with less money:
The big company can sell its products for significantly less than its smaller competition
(and at a loss for the big company, using its large cash reserves to survive).
The competition will eventually die, since it won't be able to sell products and thus will run
out of money. When the competition goes away, the big company can raise prices again.
The big company can implement one-way transition paths from every competitor's product to its
own, and then introduce a feature that makes a reverse migration expensive. Users that switch
to the big company's product will be locked in, which will reduce the potential market for
the competition.
The big company can improve the product while it has competition, and then stop improving
while it has no competition.
The big company can use its credibility to reduce the likelihood that users will buy
products from the competition, even if the big company doesn't yet have a competing product.
This is known as spreading fear, uncertainty, and doubt (FUD).
This is where the evolution comes in. If a big company repeatedly kills all the companies that follow
the model I've described so far, then only the companies that use different models will
survive.
This isn't all theoretical. Microsoft is a big company, and they've played all the tricks above.
Many companies in the software industry have died because they failed to make money, and many of those
died because Microsoft starved them of that money by using the tricks described above. (Sometimes,
the use of those tricks has been deemed illegal; other times, not. That's beside the point here.)
What's interesting is the effects that Microsoft's strategies have had from an evolutionary standpoint.
When we look at the major companies competing with Microsoft today, we find that none of them are actually using
the operating model I describe above.
Beating Microsoft by not needing money:
Firefox, Apache, and Linux-based operating systems are examples of open source software
arising as ways to compete with Microsoft. Microsoft's usual strategies typically don't work with open
source software. Microsoft can't undersell open source, as it costs nothing. Furthermore, since the source
is independent of the company behind the source, if the company runs out of money another one can simply
come along and replace it, continuing from where it left off. Thus, starving the company of money doesn't
actually kill the competition; indeed, open source software actually turns this strategy against the big
company. It will take a long time, but in due course Microsoft will run out of money if it doesn't
make profits, whereas open source projects can continue indefinitely.
Open source is not perfectly safe against the other tactics described above, though. It is still vulnerable to vendor
lock-in and FUD, and a better product can still beat it. Projects like Wine help Linux with vendor lock-in
by allowing Microsoft Windows users to switch to Linux more cheaply than if they had to
replace all their software; similarly, Open Office implements file format converters to read Microsoft
Office documents; and Samba implements Microsoft's networking protocols to allow a migration to
a Linux-based infrastructure without requiring that users switch all their existing infrastructure
at the same time.
FUD is heavily used by Microsoft against open source projects (a recent example is
this
FUD article against Firefox); open source as a development model can mitigate this by leveraging its
community to counter such claims.
The biggest difficulty, though, is in creating and maintaining the
best product. Nothing especially changed in the browser market in the years just before Firefox 1.0 was released:
the market was stagnant after IE6's release, with all the alternatives (Netscape, Opera, etc) being
fundamentally not good enough in comparison. Firefox 1.0 was the best product of its time, and that reason,
and that reason alone, resulted in its success. All Microsoft have to do to beat Firefox is make a better
product, something for which they certainly have the resources. Similarly, all the Linux OS community has to do
to beat Windows is create a fundamentally better product from the end user's perspective, while ensuring
that the cost of migration is lower than the cost of upgrading Windows.
Microsoft has clearly realised that open source is a new type of competitor, and they just as clearly
haven't worked out how to compete with it (which is simply to make a better product). This is probably
because they have spent so long as the "big company" that they have forgotten the four ways of making
money, and can only remember the tricks for competing with smaller companies.
Microsoft making a superior product wouldn't kill open source, though. It would just make Microsoft
money while the open source community and companies themselves developed a better product again. (Just
look at open source today: Linux operating systems aren't, from the end user's perspective, fundamentally
better than Windows, but Linux OS companies continue on with a small but growing set of users.) Thus the technique
of improving only until the competition is dead doesn't work on open source competitors. Microsoft
would have to continuously work to improve its products to compete with open source.
Beating Microsoft by not allowing vendor lock-in:
Google's operating practices differ from the typical software vendor in that the main service that Google
provides is completely devoid of any vendor lock-in potential. Google beat the search engines before it
by being better than they were, and Google could easily lose all its users overnight if a much better
search engine were to be developed. The only reason Google has a majority market share is that it is the
best. Microsoft's usual strategies don't work against this kind of company: they can't lock the users into
their alternative, and so the users have a truly free choice as to which service to use, Microsoft's or
the competition's, and they usually pick the better alternative.
Google has also learnt the zero cost trick: by charging advertisers instead of charging users, Google
can get a large market share of users, which is needed to sell to advertisers. Google's search engine users
don't care how many advertisers publish ads through Google, so even if Microsoft undercut Google on
the advertiser side, it still wouldn't reduce the number of users on the search side. In addition, because
the advertisers want to advertise on the site with the users, and because advertisers could just use both
advertising systems, undercutting on the advertising side doesn't actually hurt Google. (Making advertising
free on Microsoft's network would just lead to advertisers advertising on both networks, which would
hurt Microsoft, since they would be footing the bill, but not Google, who would still be making money.)
The FUD strategy depends on the credibility of the company spreading it, but Google is widely trusted,
so FUD doesn't work very effectively against Google either.
As noted above, though, there is a simple way in which Microsoft (or any other company) could compete with Google: make
a better product. In fact, since Google's entire strategy — intentionally or not — is based
on not allowing vendor lock-in, any better product which is also free would almost immediately beat Google.
Naturally, Google invests heavily in making sure it continuously improves. (Note that unlike with open source,
which could probably continue indefinitely under a superior competitor, it isn't clear that Google actually
could survive for long once it lost its users to a competitor.)
Beating Microsoft just by being better:
This brings me to my third example, Apple. Apple has died and been reborn several times, as far as I can tell,
but its most recent strategy is based almost exclusively on one concept: making the best product for the
user, and doing so in several different markets at once. This technique is difficult, because it requires constant
high-quality development. However, it is very effective against a big company like Microsoft that has, by and
large, stopped relying on quality to compete.
As a corollary, Apple has found an interesting counter-strategy to the big-company strategy of undercutting
the competition until it is starved. It sells its products with very high profit margins, and sells these
products in a small number of very separate markets. Thus, it doesn't have to sell many units to survive at
all, and it doesn't need to sell any units in any one area so long as it sells enough in another
area. So for example, Microsoft couldn't compete with the iPod by giving away the Zune: even if most users
would stop buying iPods and would instead just get the free, or nearly free, Zune, the net effect would just
be that Microsoft would lose lots of money and Apple could simply wait it out with few ill effects.
Apple's switch to Intel enabled Boot Camp and products like VMware Fusion, which help mitigate the problems of
Microsoft's attempts at vendor lock-in. (Apple also plays its own mild games of vendor lock-in to prevent
users from leaving Apple products once they make the switch.)
Apple has, even more than Google, become somewhat immune to Microsoft FUD purely on the basis of its
own credibility. By almost never pre-announcing products, by repeatedly delivering products of high quality,
and by a management of the media so adept as to be awe-inspiring, Apple has managed to almost completely
neutralise any FUD attempt against them.
But again, there is one way that Apple is vulnerable. If Microsoft were to make a truly superior
product, Apple would lose users. However, unless this strategy was applied to all of Apple's products simultaneously,
it wouldn't kill Apple, it would only starve one particular part of the company. All Apple would have to do to
come back is make a significantly better product again.
Conclusion: The companies that couldn't beat Microsoft have all died, and evolution has
resulted in three very different types of companies that are each immune to Microsoft's strategies in their own way. Yet
all are still vulnerable to the same thing: a better product.
For the end users, this is a good position for the industry to be in.
2007-09-26 10:52 UTC
A low-bandwidth, high-latency, high-cost, and unreliable data channel
I like food, and I'm not really good at the creative art of cooking (though I'm a fine sous-chef), so I eat out at restaurants a lot.
I usually pay by credit card. In the US, waiters have a minimum wage below the normal minimum wage, and thus you always have to
tip, and so you never pay what the bill says (and I usually tip
well, unless the service or food was appalling, even outside the US).
The net effect of this is that you basically get to decide how much you pay. Indeed, credit card bills at restaurants have a space where
you fill in how much you want to pay.
I don't like doing arithmetic, especially not of the kind "$85.47 * 1.17", and so I just approximate. $85.47 is about $90, 15% of $90
is about $15, and rounding down to cancel out the earlier approximations gives about $100. So I pay $100.00 and go on my merry way.
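As a sketch in code (the rounding in my head is fuzzier than any formula suggests, and the function name is just for illustration), that mental arithmetic is roughly:

  function approximateTotal(bill) {
    var rounded = Math.ceil(bill / 10) * 10;       // $85.47 -> $90
    var tip = rounded * 0.15;                      // about $15 (well, $13.50)
    return Math.floor((rounded + tip) / 10) * 10;  // round down -> $100
  }
  // approximateTotal(85.47) == 100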
One day I looked at my bank statement and it was something like:
POS Trans ZIBIBBO PALO ALTO CAUS                    $76.00
POS Trans CHEESECAKE PALO ALTO CAUS                 $40.00
POS Trans OUTBACK #0514 CUPERTINO CAUS             $210.00
POS Trans BROOKFIELD'S REST #2 SACRAMENTO CAUS      $34.00
Look at all those zero cents... there are data bits there, lying unused! It struck me that with every single restaurant transaction I
could set the cents field to some number under my control, thus allowing me to communicate with myself at a later date!
This would be really useful as a way of sending ratings information back to myself, so that I could later review the restaurants online
or otherwise keep track of where I would want to go back to or where I would want to avoid (since I eat out a lot, restaurants somewhat
blend together in my memory).
We can set any number from 0 to 99. In binary, that's 0b0000000 to 0b1100011. In other words, we have seven bits to play with, except that if both of the high bits are set, then we lose bits 3, 4, and 5.
There are five things I wanted to be able to communicate. The first was the number of guests, so that I can divide the price by how many
people I was paying for, to determine the price-per-person. The second was the rating, whether I should go back or not. The last three were
whether the restaurant had wifi, whether they were suitable for vegetarians (Carey
is vegetarian), and whether they had drinks I liked (I don't drink addictive drinks, drinks containing mind-altering drugs, carbonated
drinks, or drinks containing high fructose corn syrup, which basically excludes almost anything you can buy in many cheap
restaurants in the US, and even some fancy ones).
If we consider the rating to be a four-state flag, with values "avoid", "ok", "good", "awesome", and if we limit ourselves to being
able to specify 0, 1, 2, or "more than 2" guests (in addition to me), and if we establish that if we want to avoid the place in future then it really doesn't matter whether the restaurant
had Internet, a vegetarian selection, or good drinks, then we can neatly fit this into our
constrained not-quite-7-bit bitfield like this:
bit value:   64   32   16    8    4    2    1
field:        r    r    d    v    i    g    g
...where:
r
    The rating, according to the following scale:
        0 = awesome
        32 = good
        64 = ok
        96 = avoid (and ensure d, v, and i are set to 0)
d
    Whether drinks are good or not (set means they are good)
v
    Whether a vegetarian selection is available (set means there are vegetarian options)
i
    Whether free wifi Internet access is available (set means wifi is available)
g
    How many guests were paid for in the transaction:
        0 = just me
        1 = me and one guest
        2 = me and two guests
        3 = me and three or more guests
I did this, and used it for a while. I quickly discovered that something was wrong. The numbers in my bank statement made no sense, for
example dinners with more than 2 guests at locations where I knew that I had been with just one person.
I changed to a new scheme. Instead of encoding data in the cents field, I instead just store the last two digits of the dollar amount
into the cents field. A checksum, if you will. So if it cost around $34 with tip, then I put $34.34, or if it cost $122, then I put $122.22.
I do this reliably now, on all restaurant transactions.
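As code, the new scheme is trivial (again a sketch, with an illustrative function name):

  function checksumTotal(dollars) {    // dollars is a whole number, e.g. 122
    // repeat the last two digits of the dollar amount in the cents field
    return (dollars + (dollars % 100) / 100).toFixed(2);  // "122.22", "34.34"
  }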
What I've found is that a shocking number of restaurants don't charge me what I write. For a while I thought I just had bad handwriting, but I
then went out of my way to write very clear numbers and that didn't help. For example, Zibibbo's in Palo Alto. I remember very clearly writing
$77.77, but was charged $76.95. That's not the original amount without tip (which was probably closer to $65), it's not what I wrote, what the
heck is it? On the other hand, the other time I went, they charged me $865.65, exactly what I wrote. A cafe in Monterey (Portola Cafe)
charged me $18.28. Either they charged me ¢10 too much, or they gave me a $10 discount. The Melting Pot, San Jose, charged me $70.54.
Where did the .54 come from? Why not take the remaining ¢16? Pasta Pomodoro did the same trick as Portola, charging me $37.47. But two
months before that, they charged me $34.31, which is probably ¢3 less than I wrote. Why am I
getting overcharged sometimes and undercharged others? In Hawaii, the majority of restaurants ignored the tip altogether, just
charging me the original amount regardless of what I wrote. Are people in Hawaii so relaxed that they don't need the extra money from tips?
My theory is that it is because I don't write an explicit tip, I just give the total. Maybe restaurants need the tip as well as the total,
and so to save time they just work out a round dollar tip that is close to what I wrote, and charge me that. My next step, to test this theory,
is to start always writing the actual tip amount in.
Somewhere over the rainbow,
Way up high,
There's a land that I heard of
Once in a lullaby.
The explosion during the man's burn was quite the surprise, but
I'm glad they delayed the derrick's burn until after the man because
otherwise the man would have been a non-event in comparison. Holy Hannah.
This year wasn't quite as enjoyable as last year, but mostly that was
just because of the weather: I didn't get to go around the deep playa and
jump on art cars as much as I'd have liked.
Somewhere over the rainbow
Skies are blue
And the dreams that you dare to dream
Really do come true.
While I was away late last week, with nobody watching the #whatwg channel,
and in stark contrast to the Burning Man atmosphere, someone visited and asked a
question which went unanswered. The log went like this:
--> Nicklas18 (i=Nicklas1@c-f54ce155.28-2-64736c13.cust.bredbandsbolaget.se) has joined #whatwg
<Nicklas18> Why leave the sense of logic at the door?
<Nicklas18> Does that mean that you are not logical?
<Nicklas18> Shouldn't people in charge of making a standard have common sense?
<Nicklas18> Hello?
<Nicklas18> Fucking retards.
<Nicklas18> Suck my huge cock.
<-- Nicklas18 has left #whatwg
(The /topic in the #whatwg channel says "Please leave your sense of logic at the
door, thanks!", for reasons that should be obvious to anyone who has tried working
with Web browsers for more than about 5 minutes.)
The sun is shining,
It's a lovely day,
A perfect morning
For a kid to play,
Carey and I have now seen Avenue Q twice, once on Broadway and once in San
Francisco. I love this show. Everyone should see it.
The internet is really really great.
I’ve got a fast connection so i don’t have to wait.
There's always some new site;
I browse all day and night;
It's like i’m surfing at the speed of light.
Work on HTML5 progresses at a measurable pace — literally, now, since I'm
measuring the number of e-mails I have outstanding on a regular basis to determine how
well I'm hitting my quarterly targets. (Answer: Not as well as I'd like, but not as
badly as I'd feared.) As a side-effect of writing the code to trawl my IMAP folders to
count the outstanding e-mails, I wrote a tool
that also exposes the folders on the Web (after having filtered out the
confidential feedback I get, of course). I was hoping this would address some of the
complaints about lack of transparency, and indeed a number of WHATWG contributors were
interested, gave helpful feedback, and even wrote their own frontends using the tool's
APIs. However, from the W3C corner of the HTML5 communities I only got complaints. The
most notable complaint was that the tool didn't work in IE, which was apparently a
problem not because it actually stopped anyone from accessing the page (I haven't
received any complaints from anyone who actually couldn't get to the page) but because
it might potentially stop users of old accessibility tools that only work
with IE from accessing the page.
When you help others,
You can’t help helping yourself!
Every time you
Do good deeds
You’re also serving
Your own needs.
I've actually been using the latest version of JAWS (the popular Windows screen
reader software for blind people) recently, as part of my work on HTML5. From a
usability point of view it is possibly the worst software I have ever used. I'm still
horrified at how bad the accessibility situation is. All this time I've been hearing
people worried about whether or not Web pages have longdesc attributes
specified or whatnot, when in fact the biggest problems facing blind users are so much
more fundamental as to make image-related issues seem almost trivial in
comparison.
For example, JAWS will happily take the last sentence of a paragraph, and the first
sentence of the next paragraph, and run them into each other as one sentence, if
there's no full stop at the end of the first paragraph. If you really want to make
your Web pages more readable to blind users, forget longdesc or even
alt, or even markup of any kind, just make sure you're using full
punctuation! And that's just one example. Browsing the Web with JAWS is a horrifying
experience not because of the poor state of the Web, which is admittedly very poor
indeed from the point of view of semantic and accessible markup, but because of the
terrifyingly poor state of the screen reader software.
What might make my experiences with JAWS even more worrying is that I'm told JAWS
is amongst the best of the available screen reader software. It certainly isn't worth
its ridiculous $895 price tag (let alone the $1095 price tag for the "professional"
version I got). There is a big market opportunity here for someone to make a usable
and affordable native speech Web browser or screen reader. Accessibility advocates
could do more for accessibility by writing test suites for screen readers to check
their basic HTML support (like supporting the p element) than they ever
will by trying to educate Web authors.
What do you do with a B.A. in English?
What is my life going to be?
Four years of college and plenty of knowledge
Have earned me this useless degree.
Going back to my earlier comment, though, I'm a little confused as to why several
people who have joined the HTML5 community from the W3C HTML WG side are so hostile,
while those who joined the community through the WHATWG side are so much more friendly
and constructive. A couple of weeks ago I had to temporarily ban someone from the
WHATWG list after they repeatedly cross-posted an off-topic flamewar to four mailing
lists including the WHATWG list. (They didn't get banned from the other three lists as
far as I know.) I was really sad about having to do this, but the health of the
community is important, and sometimes extreme measures have to be taken. This is the
first time I've had to do that to someone on the WHATWG list, and the list has existed
since 2004. (I've set up personal mail filters for people on the WHATWG list before,
to ensure that I read certain people's suggestions only once I've read all the other
mail, but I've never had to actually ban someone.)
The HTML working group is having a meeting in November. I don't really see the
point, but as I told Dan (the chair of the working group), if we do have a meeting,
I'd like us to use the unconference style, as that's probably the only useful thing we
could do.
Hopefully we'll have the agenda this week, so that I know whether I can
justify going or not. I can't really justify spending Google's money on travel, let
alone justify the carbon emissions and the time away from editing the spec, without
knowing what exactly the meeting will consist of. If the meeting just consists of us
going through open issues and discussing them, then I won't go, since in my experience
that's a huge waste of everyone's time unless the issue has already failed to be
addressed by more asynchronous methods like e-mail. For technical work like this you
really need to be able to sit down and think about issues, and ponder them with a
whiteboard; it is very hard to make sure all the applicable research has been done
when there is pressure to discuss and reach a decision on a topic in 60 minutes or
less. This is especially true since for almost all issues, the first time you look at
one you'll find many things you need to research, and it can take days to get all the
data you want to make an educated decision.
Using actual data to make decisions about technical issues isn't a very
common practice in W3C working groups, which is probably why a lot of the people in
the HTML working group who come from a W3C background think a face-to-face meeting is
a useful thing to have.
I want you to know
The time that we've spent
How great it's been
How much it's meant.
The Falcon Ridge Folk Festival has a sign language interpreter on every stage. Now
mind you, this is a music festival, a specifically sound-orientated event.
Yet it is highly accessible; all the songs have alternative fallback content for those
who cannot hear them. There are several things about this that I noticed.
First: The visual version of the song (lyrics in sign language, and the emotional
content of the voice, expressed in the facial expressions and style of the sign
language interpreter) doesn't convey the same thing as the original audio version (the
melody, the harmony, the percussion, the lyrics, and the emotion in all of those). It
is merely a translation of the parts of the audio stream that aren't intrinsically
audio-only. Alternative content doesn't necessarily convey everything in the
primary medium, nor does it have to in order to be useful and enjoyed by the target
audience.
Second: The sign language interpretation is actually quite fun to watch even if you
can hear the music, it's like a kind of interpretative dance. Alternative
content is useful to those who don't need it as well.
This isn't just my opinion. It was clear at the festival that the sizeable deaf
community present there was fully enjoying the music. The presence of sign language
interpreters made them feel part of the event, and conveyed everything that they
wanted conveyed. Meanwhile, the hearing patrons enjoyed the music and the
sign language — I heard comments from several people to the effect that the sign
language interpreters were effectively an intrinsic part of the act. (To the point
where even people who didn't necessarily understand sign language had opinions on
which interpreter was better.)
It's interesting how the prevailing opinion of Web accessibility experts is so far
removed from the existing and successful accessibility practices in the non-tech
world.