Hixie's Natural Log
http://ln.hixie.ch/
Ian Hickson's Web log. Commentary on events of interest, cats, and idiocy around the world. Well, Ian's world.The Future is Flutter
http://ln.hixie.ch/?start=1700627532&count=1
2023-11-22T04:32:12+00:00<p>Despite my departure from Google, I am not leaving Flutter — the great thing about open source and open standards is that the product and the employer are orthogonal. I've had three employers in my career, and in all three cases when I left my employer I continued my job. With Netscape I was a member of the team before my internship, during my internship, and after my internship. With Opera Software, I joined while working on standards, kept working on standards, and left while working on the same standard that I then continued to work on at Google. So this is not a new thing for me.</p>
<p>Flutter is amazingly successful. It's already the leading mobile app development framework, and I think we're close to having the table stakes required to make it the obvious default choice for desktop development as well (it's already there for some use cases). It's increasingly used in embedded scenarios. And Flutter is extremely well positioned to be the first truly usable Wasm framework as the web transitions to the more powerful, lower-level <a href="https://docs.google.com/document/u/0/d/1peUSMsvFGvqD5yKh3GprskLC3KVdAlLGOsK6gFoEOD0/edit?resourcekey=0-bPajpoo9IBZpG__-uCBE6w">Wasm-based model</a> over the next few years.</p>
<p>In the coming month I will prepare our roadmap for 2024 (in consultation with the rest of the team). For me personally, however, my focus will probably be on fixing fun bugs, and on making progress on <a href="https://docs.google.com/document/u/0/d/1rS_RO2DQ_d4_roc3taAB6vXFjv7-9hJP7pyZ9NhPOdA/edit?resourcekey=0-VBzTPoqLwsruo0j9dokuOg">blankcanvas</a>, my library for making it easy to build custom widget sets. I also expect I will be continuing to work on <a href="https://pub.dev/packages/rfw">package:rfw</a>, the UI-push library, as there has been increasing interest from teams using Flutter and wanting ways to present custom interfaces determined by the server at runtime without requiring the user to download an updated app.</p>Reflecting on 18 years at Google
http://ln.hixie.ch/?start=1700627373&count=1
2023-11-22T04:29:33+00:00<p>I joined Google in October 2005, and handed in my resignation 18 years later. Last week was my last week at Google.</p>
<p>I feel very lucky to have experienced the early post-IPO Google; unlike most companies, and contrary to the popular narrative, Googlers, from the junior engineer all the way to the C-suite, were genuinely good people who cared very much about doing the right thing. The oft-mocked "<a href="https://en.wikipedia.org/wiki/Don%27t_be_evil">don't be evil</a>" truly was the guiding principle of the company at the time (largely a reaction to contemporaries like Microsoft whose operating procedures put profits far above the best interests of customers and humanity as a whole).</p>
<p>Many times I saw Google criticised for actions that were sincerely intended to be good for society. <a href="https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.">Google Books</a>, for example. Much of the criticism Google received around Chrome and Search, especially around supposed conflicts of interest with Ads, was way off base (it's surprising how often coincidences and mistakes can appear malicious). I often saw privacy advocates argue against Google proposals in ways that were net harmful to users. Some of these fights have had lasting effects on the world at large; one of the most annoying is the prevalence of pointless cookie warnings we have to wade through today. I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritising short-term Google interests, only to be met with cynicism in the court of public opinion.</p>
<div class="photo album">
<p>
<img src="https://ln.hixie.ch/media/photos/california/2011/google_campus_2011-04-11_edited_to_remove_people-small.jpeg" alt="[Photograph]">
Charlie's patio at Google, 2011. Image has been manipulated to remove individuals.
</p>
</div>
<p>Early Google was also an excellent place to work. Executives gave frank answers on a weekly basis, or were candid about their inability to do so (e.g. for legal reasons or because some topic was too sensitive to discuss broadly). Eric Schmidt regularly walked the whole company through the discussions of the board. The successes and failures of various products were presented more or less objectively, with successes celebrated and failures examined critically with an eye to learning lessons rather than assigning blame. The company had a vision, and deviations from that vision were explained. Having experienced <a href="https://en.wikipedia.org/wiki/List_of_Dilbert_characters#Pointy-haired_Boss">Dilbert-level management</a> during my internship at Netscape five years earlier, the uniform competence of people at Google was very refreshing.</p>
<p>For my first nine years at Google I worked on <a href="https://whatwg.org/">HTML and related standards</a>. My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google's interests). This was a continuation of the work I started while at Opera Software. Google was an excellent host for this effort. My team was nominally the open source team at Google, but I was entirely autonomous (for which I owe thanks to Chris DiBona). Most of my work was done on a laptop from random buildings on Google's campus; entire years went by where I didn't use my assigned desk.</p>
<p>In time, exceptions to Google's cultural strengths developed. For example, as much as I enjoyed Vic Gundotra's enthusiasm (and his initial vision for Google+, which again was quite well defined and, if not necessarily uniformly appreciated, at least unambiguous), I felt less confident in his ability to give clear answers when things were not going as well as hoped. He also started introducing silos to Google (e.g. locking down certain buildings to just the Google+ team), a distinct departure from the complete internal transparency of early Google. Another example is the Android team (originally an acquisition), who never really fully acclimated to Google's culture. Android's work/life balance was unhealthy, the team was not as transparent as older parts of Google, and the team focused on chasing the competition more than solving real problems for users.</p>
<p>My last nine years were spent on Flutter. Some of my fondest memories of my time at Google are of the early days of this effort. Flutter was one of the last projects to come out of the old Google, part of a stable of ambitious experiments started by Larry Page shortly before the creation of Alphabet. We essentially operated like a startup, <em>discovering</em> what we were building more than designing it. The Flutter team was very much built out of the culture of young Google; for example we prioritised internal transparency, work/life balance, and data-driven decision making (greatly helped by Tao Dong and his UXR team). We were radically open from the beginning, which made it easy for us to build a healthy open source project around the effort as well. Flutter was also very lucky to have excellent leadership throughout the years, such as Adam Barth as founding tech lead, Tim Sneath as PM, and Todd Volkert as engineering manager.</p>
<div class="photo album">
<p>
<img src="https://ln.hixie.ch/media/photos/california/2015/flutter_design_document-2015-05-06-small.jpeg" alt="[Photograph]">
We also didn't follow engineering best practices for the first few years. For example we wrote no tests and had precious little documentation. This whiteboard is what passed for a design doc for the core Widget, RenderObject, and dart:ui layers. This allowed us to move fast at first, but we paid for it later.</p>
</div>
<p>Flutter grew in a bubble, largely insulated from the changes Google was experiencing at the same time. Google's culture eroded. Decisions went from being made for the benefit of users, to the benefit of Google, to the benefit of whoever was making the decision. Transparency evaporated. Where previously I would eagerly attend every company-wide meeting to learn what was happening, I found myself now able to predict the answers executives would give word for word. Today, I don't know anyone at Google who could explain what Google's vision is. Morale is at an all-time low. If you talk to therapists in the Bay Area, they will tell you all their Google clients are unhappy with Google.</p>
<p>Then Google had layoffs. The layoffs were an unforced error driven by a short-sighted drive to ensure the stock price would keep growing quarter-to-quarter, instead of following Google's erstwhile strategy of prioritising long-term success even if that led to short-term losses (the very essence of "don't be evil"). The effects of layoffs are insidious. Whereas before people might focus on the user, or at least their company, trusting that doing the right thing will eventually be rewarded even if it's not strictly part of their assigned duties, after a layoff people can no longer trust that their company has their back, and they dramatically dial back any risk-taking. Responsibilities are guarded jealously. Knowledge is hoarded, because making oneself irreplaceable is the only lever one has to protect oneself from future layoffs. I see all of this at Google now. The lack of trust in management is reflected by management no longer showing trust in the employees either, in the form of inane corporate policies. In 2004, Google's founders <a href="https://abc.xyz/investor/founders-letters/ipo-letter/">famously told Wall Street</a> "Google is not a conventional company. We do not intend to become one." but that Google is no more.</p>
<p>Much of these problems with Google today stem from a lack of visionary leadership from Sundar Pichai, and his clear lack of interest in maintaining the cultural norms of early Google. A symptom of this is the spreading contingent of inept middle management. Take Jeanine Banks, for example, who manages the department that somewhat arbitrarily contains (among other things) Flutter, Dart, Go, and Firebase. Her department nominally has a strategy, but I couldn't leak it if I wanted to; I literally could never figure out what any part of it meant, even after years of hearing her describe it. Her understanding of what her teams are doing is minimal at best; she frequently makes requests that are completely incoherent and inapplicable. She treats engineers as commodities in a way that is dehumanising, reassigning people against their will in ways that have no relationship to their skill set. She is completely unable to receive constructive feedback (as in, she literally doesn't even acknowledge it). I hear other teams (who have leaders more politically savvy than I) have learned how to "handle" her to keep her off their backs, feeding her just the right information at the right time. Having seen Google at its best, I find this new reality depressing.</p>
<p>There are still great people at Google. I've had the privilege to work with amazing people on the Flutter team such as JaYoung Lee, Kate Lovett, Kevin Chisholm, Zoey Fan, Dan Field, and dozens more (sorry folks, I know I should just name all of you but there's too many!). In recent years I started offering career advice to anyone at Google and through that met many great folks from around the company. It's definitely not too late to heal Google. It would require some shake-up at the top of the company, moving the centre of power from the CFO's office back to someone with a clear long-term vision for how to use Google's extensive resources to deliver value to users. I still believe there's lots of mileage to be had from Google's mission statement (<q lang="en-US">to organize the world’s information and make it universally accessible and useful</q>). Someone who wanted to lead Google into the next twenty years, maximising the good to humanity and disregarding the short-term fluctuations in stock price, could channel the skills and passion of Google into truly great achievements.</p>
<p>I do think the clock is ticking, though. The deterioration of Google's culture will eventually become irreversible, because the kinds of people whom you need to act as moral compass are the same kinds of people who don't join an organisation without a moral compass.</p>Meeting philosophy
http://ln.hixie.ch/?start=1696014234&count=1
2023-09-29T19:03:54+00:00<ul>
<li>Decline meetings aggressively. Always try to resolve issues by e-mail or chat first if possible.
<li>Decline any meeting without an explicit agenda (I make exceptions for my immediate manager).
<li>Decline any meeting where the agenda doesn't seem relevant to your work.
<li>Decline any recurring meeting with more than one other person.
<li>Keep track of how productive recurring meetings are being. If they're not productive, cancel them. If they're only occasionally productive, reduce the frequency.
<li>End meetings promptly once the agenda is resolved. Always leave a meeting when it reaches the end of its scheduled time.
<li>Never start a meeting late. If people are missing, start on time anyway. This is especially true for any meeting with large groups of people.
<li>Have a hard out every day; stop working at that time.
<li>Create fake buffer meetings so that you've got guaranteed breaks. Decline meetings that conflict with your breaks unless the person has explicitly reached out first.
<li>Aggressively defrag your calendar to make it look like what you want.
</ul>The Spectrum of Openness
http://ln.hixie.ch/?start=1691780719&count=1
2023-08-11T19:05:19+00:00<p>"Open Source" is a broad spectrum, with various axes. The following is an attempt to describe various ways to look at openness to aid project leaders in determining what they want their project to look like. I originally wrote this for my colleagues at Google, but the concepts apply widely and I figured they might be of use for others.</p>
<p>In practice, every project is a unique snowflake and there are exceptions to every rule. A project can be proprietary but use and contribute back to some open source library. An open source project can have undocumented proprietary protocols. A team can intend to fall in one category, but by their actions fall in another. The descriptions below should be seen merely as a high-level description of some possible ways projects can be configured, not as a comprehensive guide to the taxonomy of openness. Additionally, the examples I give below refer to the state of those products as of the time of writing. As projects evolve, these may become less accurate.</p>
<h4>Interoperability (0-6)</h4>
<p>One aspect of openness is how one's product interacts with others.
<p>For the purposes of this section, APIs (Application Programming Interfaces), ABIs (Application Binary Interfaces), formats, and protocols are considered equivalent. While they serve different roles in practice, the techniques used to limit or encourage their reuse are the same.
<h5>0. Proprietary with obfuscation</h5>
<p>The most closed one can make one's protocols is to not document them publicly and design them to be actively hard to understand by reverse engineering. Patents and DRM may also be used to further restrict potential interoperability by legal means in some jurisdictions.
<p>Examples: Kindle file format, most streaming music formats.
<h5>1. Proprietary</h5>
<p>Most protocols that are not intended for interoperability with other systems are undocumented (at least, not documented in a manner intended for public consumption), but are otherwise not obfuscated, and a sufficiently motivated user could reverse engineer the protocol and use it.
<p>Example: NTFS file system.
<h5>2. Licensed open standards</h5>
<p>One can have entirely open specifications, but require payment (or other agreements) before the standard can be read or used, e.g. by the use of patent licensing.
<p>Example: the H.264 video codec.
<h5>3. "The Code Is The Standard"</h5>
<p>Some projects do not document their protocols, but since their source code is available, they are effectively defined by their implementation, bugs and all.
<p>Examples abound but since people rarely intend to be in this state calling out any specific project as being in this category tends to be controversial.
<h5>4. Public</h5>
<p>When it is desired that users create new products to interact with one's own, one may publicly document one's protocols. There are varying levels of completeness to such documentation; for example, whether some aspects are kept proprietary, or whether the documentation includes details for error handling and future extensions.
<p>Examples: IntelliJ, Swift UI, SMB protocol.
<h5>5. Open standards</h5>
<p>The ultimate openness one can present is to submit one's protocols to a standards committee (or form a new one; the difference is largely symbolic). This is useful when the intent is to create an entire ecosystem around one's product and protocols.
<p>Examples: the Internet's core protocols, the web.
<h5>6. Regulated standards</h5>
<p>In the extreme, interoperability around some standards becomes so important that government agencies get involved and the protocol becomes a matter of law.
<p>Examples: power grid standards.
<h4>Source code license (0-7)</h4>
<p>If software is provided in binary form (e.g. client applications) then sufficiently motivated users will be able to reverse engineer it, even if the source code is not explicitly shared with the user. For the purposes of this section, we are ignoring this and focusing on the access that users have to the project's original source code.
<h5>0. Trade secret</h5>
<p>Some source code is so secret and so important to its owner that it gets legal protection beyond copyright.
<p>Example: The most sensitive internals of particularly special proprietary software products.
<h5>1. Proprietary</h5>
<p>The default is for source code to be copyrighted. If one does not redistribute it, then that source code is entirely closed.
<p>Example: The source code for the UI parts of macOS.
<h5>2. Commercially licensed</h5>
<p>One can license one's code for use by specific downstream users, without making it public. Typically this is done for money.
<p>Examples: Qt (in its closed-source form); Microsoft's sale of access to the Windows source code.
<h5>3. Source code that is incidentally visible</h5>
<p>One can publish one's source code without licensing it (or licensing it using a very restrictive license that essentially does not allow any use), typically as an incidental part of distributing one's application. This allows people to see the code, but does not allow them to use it in their own projects unless they negotiate a separate license with the distributor.
<p>Examples: JavaScript code in web sites that don't use a minifier or compiler; script code in game data files.
<h5>4. Usage-restricted source-sharing</h5>
<p>One can make one's source available under a license that allows some kinds of reuse by other parties but prevents others, such as commercial use, use by enterprises over a certain size, or use that competes with the original developer. This can be done either by prohibiting undesired uses outright, or by nominally allowing them but only under onerous terms.
<p>Example: MongoDB.
<h5>Open source licenses</h5>
<p>One can license one's code for public use, and these licenses can vary in their terms.
<p>It's important to notice that there are legally-sound open source licenses, and there are nonsense "licenses" that are the result of software engineers thinking that being a lawyer is easy. Talk to a lawyer before choosing a license. See the <a href="https://opensource.org/licenses/">OSI's license page</a> for an overview of the topic.
<h6>5. Restrictive</h6>
<p>The most strict open source licenses significantly limit what one can do with the source code. For example, they might require that downstream developers license their modifications and any linked code with the same license, or require that downstream developers license their software such that their users can obtain their app's source code.
<p>Examples: GPL-licensed software, such as Linux or Emacs.
<h6>6. Reciprocal</h6>
<p>These licenses apply the restrictive terms to the code in question (typically a library) but not to code that uses it (such as an application that embeds the library).
<p>Examples: MPL-licensed software, such as Firefox.
<h6>7. Permissive</h6>
<p>The most liberal licenses require very little of downstream developers other than the replication of the copyright notices in software that uses the covered code (and in some cases not even that).
<p>Examples: Apache-licensed software, such as Android or Rust; BSD-licensed software, such as Chromium or Flutter.
<h4>Copyright management</h4>
<p>Projects that accept source code from more than one legal entity may wish to navigate the issues of copyright assignment, liability, relicensing, and so forth. The usual tools for this are Contributor License Agreements (CLAs) and Developer Certificates of Origin (DCOs). Talk to a lawyer about these options.
<h4>Development processes (0-8)</h4>
<p>Separate from what one does with the protocols and the code, a separate choice is how to design and develop the code: where conversations happen, how people are added to the project, and so forth. This is sometimes called "governance".
<p>The sections below apply equally to big projects as to one-person projects, but are primarily focused on projects with multiple team members.
<h5>0. Proprietary development</h5>
<p>The most closed projects have no public-facing development at all. All design, implementation, and testing happens internally.
<p>Example: Google Search.
<h5>1. Proprietary development of open source software</h5>
<p>As with proprietary software, all the design, implementation, and testing happens internally. However, the source code is open source in some way, and is published periodically (e.g. in conjunction with a product release). This is often referred to as "throwing the code over the wall". No attempt is made to encourage public contributions. Patches may in some cases be taken (e.g. by e-mail).
<p>Examples: SQLite, Postfix.
<h5>2. Proprietary development, limited-access betas</h5>
<p>A team can invite a closed set of unaffiliated users to test their software before launches.
<p>This is a common model for commercial software.
<h5>3. Proprietary development, public betas</h5>
<p>A team with private development can solicit feedback from a public community by providing pre-release software for any user to test.
<p>This is a common model for commercial software.
<h5>4. Public presence, private development</h5>
<p>A team with public tooling (e.g. bug databases, code repositories, code reviews, continuous integration), but that makes no attempt to accept public contributions (code, suggestions, etc). Public bug reports may be accepted but the development team typically does not engage with the bug reporters.
<p>Such a team's communications channels are all or mostly internal. Commit access is typically automatic for the team, and unavailable for anyone else.
<p>Examples: Many of Google's small open source projects, and many personal projects on GitHub, fall in this category.
<h5>5. Public clique development</h5>
<p>A team with public tooling that nominally accepts public contributions, but where becoming an active and equal member of the team is in practice discouraged (new team members are explicitly recruited, and usually all work for the same company). Friction points exist that reduce the likelihood of contributions, for example, public tooling that is different from that typically used by open source projects, official public channels that are not typically frequented by the bulk of the team, lack of documentation or out of date documentation (especially about how to contribute), most communications are held in private channels.
<p>The team may engage with bug reporters on occasion, and may listen to suggestions for project direction, but all decisions ultimately rest with the core team.
<p>Many projects that try to make the jump from "public presence, private development" to "public development, private governance" end up in this state because they underestimate the effort required to successfully and productively develop software in public. That said, this is a valid development model in its own right, especially for projects that need a strong driving vision such as a programming language or an open source narrative video game.
<p>Examples abound but since people rarely intend to be in this state calling out any specific project as being in this category tends to be controversial.
<h5>6. Public development, private governance</h5>
<p>A team that works in public, with public design decisions, public meetings, and public chats, but whose core leadership is accountable to a single entity whose primary purpose is not this project (e.g. a company). Typically such a project is largely funded by that single entity as well, especially in terms of employing the most active contributors and marketing the project.
<p>Such a team typically hopes that one day contributors with other affiliations will be independently designing, implementing, reviewing, and landing code without oversight from the project leads beyond ensuring a broad alignment on strategy. As such, it actively tries to differentiate between being a member of the development team, and being a member of the primary sponsoring organization. Such a team typically has publicly-visible documentation of its processes, governance, values, contributor access policies, etc.
<p>Example: Flutter.
<h5>7. Public development with an unelected but independent core team</h5>
<p>A project can be entirely open in its development, with a self-appointed core team that does not answer to anyone but themselves. The term "Benevolent Dictator for Life" (BDFL) is sometimes used to describe this model when the core team is a single person (usually the project founder).
<p>It can have the advantage of a strong vision unaffected by fleeting trends, but can also have the disadvantage of the project not being responsive to important shifts in the environment.
<p>Examples: Linux.
<h5>8. Accountable independent public development</h5>
<p>Ultimately, the most open a project can be is for it to have entirely independent governance accountable to its community, e.g. a foundation with democratically elected core leaders.
<p>This has all the advantages and disadvantages of democracy.
<p>Examples: Python, Kubernetes.
Deciding which bugs to fix
http://ln.hixie.ch/?start=1674863881&count=1
2023-01-27T23:58:01+00:00<p>Software has an infinite number of bugs. How can we tell which ones to fix?</p>
<p>I propose that it makes the most sense to optimize for people-happiness per unit bug fixing time, maximizing how much our effort improves the product for our users.</p>
<p>To put it in mathematical terms, we want to fix bugs with the highest <var>N</var>·<var>ΔH</var>/<var>T</var>, where:</p>
<ul>
<li> <var>N</var> is the number of people the bug affects
<li> <var>ΔH</var> is the increase of happiness per user affected by the bug
<li> <var>T</var> is an estimate of the amount of time it will take us to fix the bug
</ul>
<p>(These metrics are very hard to estimate. Don't worry too much about precision here.)
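The scoring rule above is easy to turn into a tiny triage sketch. The example below is purely illustrative (the bug names and numbers are made up, and there is no real tracker integration); it just ranks a handful of hypothetical bugs by <var>N</var>·<var>ΔH</var>/<var>T</var>:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    name: str
    users_affected: int      # N: rough estimate of how many people hit the bug
    happiness_delta: float   # ΔH: how much happier each affected user gets (arbitrary units)
    days_to_fix: float       # T: estimated time to fix, in days

    @property
    def score(self) -> float:
        # People-happiness per unit bug-fixing time: N·ΔH / T
        return self.users_affected * self.happiness_delta / self.days_to_fix

bugs = [
    Bug("crash on save", users_affected=10_000, happiness_delta=5.0, days_to_fix=2.0),
    Bug("typo in dialog", users_affected=10_000, happiness_delta=0.1, days_to_fix=0.5),
    Bug("flaky test", users_affected=30, happiness_delta=8.0, days_to_fix=1.0),
]

# Highest score first: the crash dominates despite taking longer to fix.
for bug in sorted(bugs, key=lambda b: b.score, reverse=True):
    print(f"{bug.name}: {bug.score:.0f}")
```

As the post says, none of these inputs can be estimated precisely; the value of the formula is in forcing the comparison, not in the exact numbers.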
<h4>Bugs that improve <var>T</var> for future bugs</h4>
<p>The best bugs to fix are those that make us more productive in the future. Reducing test flakiness, reducing technical debt, increasing the number of team members who are able to review code confidently and well: this all makes future bugs easier to fix, which is a huge multiplier to our overall effectiveness and thus to developer happiness.
<h4>Bugs affecting more people are more valuable (maximize <var>N</var>)</h4>
<p>We will make more people happier if we fix a bug experienced by more people.
<p>One thing to be careful about is to think about the number of people we are ignoring in our metrics. For example, if we had a bug that prevented our product from working on Windows, we would have no Windows users, so the bug would affect nobody. However, fixing the bug would enable millions of developers to use our product, and that's the number that counts.
<h4>Bugs with greater impact on developers are more valuable (maximize <var>ΔH</var>)</h4>
<p>A slight improvement to the user experience is less valuable than a greater improvement. For example, if our application, under certain conditions, shows a message with a typo, and then crashes because of an off-by-one error in the code, fixing the crash is a higher priority than fixing the typo.
<h4>Bugs that are easier to fix are more valuable (minimize <var>T</var>)</h4>
<p>The less time we spend working on something, the more time we will have to work on other things. Naturally, therefore, all else being equal, easier bugs are more impactful than harder bugs because we can fix more of the easier bugs in the same time.
<p>This can feel counterintuitive. Surely fixing hard things is more valuable? Well, no. Having impact is better, and all other things being equal, it's more impactful to fix two easy bugs than one hard bug.
<h5>Steps to reproduce make a bug more valuable</h5>
<p>If a bug has steps to reproduce, we will have a much easier time fixing it. In general, we should focus on bugs like that rather than those where the first step will be determining what the problem even is, because in the time it would take us to figure out a problem, we could have fixed multiple issues where the problem was clear.
<p>Again, we will make more users happier if we fix more bugs each affecting <var>X</var> people than if we fix fewer (but gnarlier) bugs each affecting <var>X</var> people.
<h5>Exceptions</h5>
<p>A high-profile hard-to-reproduce bug may warrant the extra effort, because the number of people affected is high. We want to take into account the total impact of fixing the bug as well as the time it will take to fix it.
<h5>Deciding when to move on</h5>
<p>Sometimes, <var>T</var> can turn out to be bigger than estimated. Something looks easy, but turns out to be hard. The right choice may be to dump all one has learnt into the tracking issue and move on to something that one can solve more quickly.
<h4>Deciding between tasks of equal merit</h4>
<p>Sometimes, it's not easy to decide which of two or three or ten tasks should be prioritized. The icon button's splash radius is too large on a toolbar. Users can't tap on menu items that haven't appeared yet during a popup menu animation. The shadow on the toolbar doesn't quite extend to the far left of the screen. Which of these should we work on, if we only have the time to work on one? It can seem difficult to decide.
<p>The key realization to solving this conundrum is both freeing and mildly unsettling: it doesn't matter. We can do whichever one we feel like.
<p>It doesn't matter because they are (by definition) equally important, and (by definition) we can do only one. Whichever one we do, some people will be happier. Assuming that, across the project, we pick among these choices more or less randomly, we will avoid introducing any particular bias and the product as a whole will get better.
<p>To put it another way: in either case, we are improving the product by the same people-happiness per unit bug fixing time. So the product gets better by the same amount.
<p>This doesn't mean any one of these bugs or features is not important. It just means that they are equally important, and one won the lottery and got fixed.
Flutter: Static analysis of sample code snippets in API docs
http://ln.hixie.ch/?start=1660174115&count=1
2022-08-10T23:28:35+00:00<p>One of the things I am particularly proud of with Flutter is the
quality of our API documentation. With Flutter's web support, we're
even able to literally inline full sample applications into the API
docs and have them editable and executable inline. For
example, <a
href="https://api.flutter.dev/flutter/material/AppBar-class.html">the
docs for the <code>AppBar</code> widget</a> have a diagram followed by
some samples.</p>
<p>Here's a neat trick, I can even embed these samples into my blog:</p>
<figure>
<iframe class="snippet-dartpad" src="https://dartpad.dev/embed-flutter.html?split=60&run=true&null_safety=true&sample_id=material.AppBar.1&sample_channel=stable"></iframe>
</figure>
<p>These samples actually are <a
href="https://github.com/flutter/flutter/tree/master/examples/api">just
code in our repo</a>, which has the advantage of meaning we run static
analysis on them, and even have unit tests to make sure they actually
work. (Side note: this means contributing samples is really easy and
really impactful if you're looking for a way to get started with open
source. It requires no more skill than just writing simple Flutter
apps and tests, and people love sample code; it's hugely helpful. If
you're interested, see our <a
href="https://github.com/flutter/flutter/blob/master/CONTRIBUTING.md">CONTRIBUTING.md</a>.)</p>
<p>Anyway, sometimes a full application is overkill for sample code
and instead we inline the sample code using <code>```dart</code>
markdown. For example, the <a
href="https://api.flutter.dev/flutter/material/ListTile-class.html">documentation
for <code>ListTile</code></a> has a bunch of samples, but nonetheless
starts with a smaller-scale snippet to convey the relationship between
<code>Material</code> and <code>ListTile</code>.</p>
<p>This leads to a difficulty, though. How can we ensure that these
snippets are also valid code? This is not an academic question; it's
very hard to write code correctly without a compiler, and sample code
is no exception. What if a typo sneaks in? What if we later change an
API in some way that makes the sample code no longer valid?</p>
<p>We've had a variety of answers to this over the years, but as of
today the answer is that we actually run the Dart static analyzer
against <em>all</em> our sample code, even the little snippets in
<code>```dart</code> blocks!</p>
<p>Our continuous integration (and precommit tests) read every Dart
file, extracting any blocks of code in documentation. Each block is
then examined using some heuristics to determine if it looks like an
expression, a group of statements, a class or function, etc, and is
embedded into a file in the temporary directory with suitable framing
to make the code compile (if it's correct). We then call out to the
Dart analyzer, and report the results.</p>
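<p>(As a rough sketch of the extraction stage: the real tool is written in Dart, but the idea fits in a few lines of Python. The function name and the tuple shape here are my own invention. Each <code>```dart</code> block is pulled out of the <code>///</code> doc comments, keeping the original line number of every snippet line so that analyzer diagnostics can later be mapped back to the comment they came from.)</p>

```python
def extract_snippets(dart_source):
    """Pull each ```dart block out of /// doc comments, returning
    (snippet_lines, original_line_numbers) pairs so that diagnostics
    can be mapped back to the comment they came from."""
    snippets = []
    current = None  # (lines, line_numbers) while inside a ```dart block
    for number, line in enumerate(dart_source.splitlines(), start=1):
        stripped = line.strip()
        if not stripped.startswith('///'):
            current = None  # the doc comment ended; abandon any open block
            continue
        body = stripped[3:].strip()
        if current is None:
            if body == '```dart':
                current = ([], [])
        elif body == '```':
            snippets.append(current)
            current = None
        else:
            current[0].append(body)
            current[1].append(number)
    return snippets
```

<p>(Keeping those original line numbers around is what makes the error-remapping trick described below possible.)</p>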
<p>To make it easier for us to understand the results, the tool that
does this keeps track of the source of every line of code it puts in
these temporary files, and then tweaks the analyzer's output so that
instead of pointing to the temporary file, it points to the right line
and column in the original source location (i.e. the comment). (It's
kind of fun to see error messages point right at a comment and
correctly find an error.)</p>
<p>The <a
href="https://github.com/flutter/flutter/blob/master/dev/bots/analyze_snippet_code.dart">code
to do all this</a> is pretty hacky. To make sure the code doesn't get
compiled wrong (e.g. embedding a class declaration into a function
because we think it's a statement), there's a whole bunch of regular
expressions and other heuristics. If the sample code starts with
<code>class</code> then we assume it's a top-level declaration, and stick it in a
file after a bunch of imports. If the last line ends with a semicolon
or if any line starts with a keyword like <code>if</code>, then we stick it into
a function declaration.</p>
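<p>(In Python, the heuristics might look something like this; the keyword list and the framing templates are illustrative stand-ins, not the real tool's logic.)</p>

```python
import re

IMPORTS = "import 'package:flutter/material.dart';\n\n"

# An illustrative subset of keywords that suggest a run of statements.
STATEMENT_KEYWORDS = re.compile(r'\s*(if|for|while|return|final|var)\b')

def frame(snippet_lines):
    """Wrap a doc snippet in enough scaffolding that, if the snippet
    itself is valid Dart, the framed file will pass static analysis."""
    body = '\n'.join(snippet_lines)
    if body.lstrip().startswith('class '):
        # Looks like a top-level declaration: emit it after the imports.
        return IMPORTS + body + '\n'
    if body.rstrip().endswith(';') or any(
            STATEMENT_KEYWORDS.match(line) for line in snippet_lines):
        # Looks like statements: wrap them in a function body.
        indented = '\n'.join('  ' + line for line in snippet_lines)
        return IMPORTS + 'void main() {\n' + indented + '\n}\n'
    # Otherwise assume it is a bare expression.
    return IMPORTS + 'final Object expression =\n' + body + ';\n'
```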
<p>Some of the more elaborate code snippets chain together, so to make
that work we support a load-bearing magical comment (hopefully those
words strike fear in your heart) that indicates to the tool that it
should embed the earlier example into this one. We also treat
<code>// ...</code> as a magical comment: if the snippet contains
such a comment, we tell the analyzer to ignore the
<code>non_abstract_class_inherits_abstract_member</code> error, so
that you don't have to implement every last member of an abstract
class in a code snippet. We also have a special Flutter-specific
magical comment that tells the tool to embed the snippet into a
<code>State</code> subclass, so that you can write snippets with
<code>build()</code> methods that call <code>setState</code> et
al.</p>
<p>My favourite part of this is that to make it easier to just throw
ignores into the mix without worrying too much about whether they're
redundant, the tool injects
<code>// ignore_for_file: duplicate_ignore</code> into every
generated file.</p>
<p>As might be expected, <a
href="https://github.com/flutter/flutter/pull/108500">turning all this
on</a> found a bunch of errors. Some of these were trivial (e.g. an
extra stray <code>)</code> in an expression), some were amusing (e.g.
the sample code for <code>smallestButton()</code> called the function
<code>rightmostButton()</code> instead), and some were quite serious
(e.g. it turns out the sample code for some of the localizations logic
didn't compile at all, either because it was always wrong, or because
it was written long ago and the API changed in an incompatible way
without us updating the API docs).</p>Assertions
http://ln.hixie.ch/?start=1637381157&count=1
2021-11-20T04:05:57+00:00<p>We're pretty aggressive about assertions in the Flutter framework.
<p>There are several reasons for this.
<p>The original reason was that when I wrote a bunch of this code, I had nowhere for it to run. I literally wrote the first few thousand(?) lines of framework code before we had the Dart compiler and Skia hooked up. Because of this, I had no way to test it, and so the only way I had to sanity check what I was doing was to be explicit about all the invariants I was assuming. Once we eventually ran that code, it was invaluable because it meant that many mistakes I'd made were immediately caught, rather than lingering in the core only for us to discover years later that some basic assumption of the whole system was just wrong.
<p>Those asserts also ended up being useful when we extended the framework, because they helped catch mistakes we were making in new code. For example, the render object system has many assumptions about what order things are run in and what must be decided when. It's really helpful when creating a new RenderObject subclass for the system to flag any mistakes you might make in terms of violating those invariants.
<p>Similarly, when Adam did the widgets layer rewrite, he made liberal use of asserts to verify invariants, basically to prove to ourselves that the new model was internally consistent.
<p>We rely on these asserts in the tests, too. Many tests are just "if we put the lego bricks in this order, does anything throw an exception?". That only works because of the sheer number of asserts we have, verifying the internal logic at every step.
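<p>(To give the flavour of this, here is a toy Python class, since the real render objects are Dart: assertions documenting when each step of a lifecycle is allowed to run. Everything here is invented for illustration.)</p>

```python
class RenderNode:
    """A toy render-object analogue that uses assertions to document
    its lifecycle invariants: layout must happen before paint, and
    the size is only meaningful after layout has run."""

    def __init__(self):
        self._laid_out = False
        self._size = None

    def layout(self, max_width):
        assert max_width >= 0, 'constraints must be non-negative'
        self._size = min(100, max_width)
        self._laid_out = True

    @property
    def size(self):
        # Invariant: nobody may read the size before layout has run.
        assert self._laid_out, 'size read before layout()'
        return self._size

    def paint(self):
        assert self._laid_out, 'paint() called before layout()'
        return f'painting at width {self._size}'
```

<p>(Misuse of the class fails loudly at the assertion, pointing straight at the violated invariant rather than at some downstream symptom.)</p>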
<p>Anyway, because of how successful this was internally, we also started using them even more to catch mistakes in code that's <em>using</em> the framework. Hopefully these are helpful. Nowadays we try to make the asserts have useful error messages when we think they might report an error you'll run into.
<p>Should you be doing the same in your own code?
<p>It's really a personal preference. Personally I find that documenting every assumption and invariant using asserts is hugely helpful to my own productivity. It's like writing tests: having comprehensive test coverage gives you much more confidence that things are correct. It also really helps when you do code refactors: if you change some class somewhere to work a different way, the asserts and tests for the rest of the code will tell you quickly if anything you did breaks some requirement of the overall system.
<p>On the other hand, some people find that the benefits of tests and asserts are not offset by the amount of time spent writing them, debugging issues they bring up, and so on. They'll point to the fact that sometimes asserts and tests are wrong, and you can spend hours debugging a failure only to find that the only error is in the test/assert logic itself. People who prefer not to have tests and asserts typically rely more on black-box testing or QA testing (running the app and testing it directly). There's certainly a strong argument to be made that testing the actual app is a better use of your time since that's what your customers will actually be using.
<p><small>This was originally posted as <a href="https://www.reddit.com/r/flutterhelp/comments/qxwp2v/re_flutters_many_assertionsshould_i_be_doing_the/hlchnqb/">a comment on reddit</a>.</small></p>Extracts from a private Q&A retrospective about the WHATWG
http://ln.hixie.ch/?start=1623051423&count=1
2021-06-07T07:37:03+00:00<p>Several years ago, a group involved in standardisation in an industrial field reached out to me to learn more about our experience with the WHATWG. I thought some of my responses might have broader interest, and saved them for publication at some later date, and then promptly forgot all about it. I came across my notes today and figured that today is later, so why not publish them now?</p>
<p>Other than some very minor edits for grammar and spelling, the questions and answers from that interaction are reproduced here verbatim.</p>
<dl>
<dt>What were the original objectives/goals and success metrics that underpinned the design of the WHATWG organisation (processes, systems and governance)?
<dd>
<p>At the start we really had very little in the way of governance. We were a group of like-minded individuals from different vendors, all concerned with the direction of the organisation that at the time presented itself as the venue for Web standards work. We created a public mailing list and a private mailing list. All work happened on the public mailing list (and in a public IRC channel, and later in a bug database as well). The private mailing list barely had any communication (for most of the time I was involved it averaged about one e-mail a year).
<p>There was very little process: there was originally just one document, then three or four documents, that I edited, taking into account all input from the public and coming up with the best technical solution, disregarding political pressure. For much of the time I was active in the group, and certainly at the start, the reality was that there was one player in this space, Microsoft, with 99% of the user base, and the remaining players, mostly Apple, Mozilla, and Opera, later also Google, shared the remaining 1%. This grew over time, but slowly. (It's now Chromium that has the bulk of the market, and the dynamics are very different. I stopped participating a few years ago, while the numbers were much more mixed between multiple vendors. At the time, the dynamics were already changing, with everyone much more interested in competing and much less interested in cooperating, which is one of the reasons I eventually disengaged.)
<p>Microsoft was invited to participate many times. These invitations were sincere. Microsoft never took us up on this offer while I was editor. They have since taken a more active role, though their position in the market has declined significantly (to &lt;10% by some metrics) and this may be why. I have since then heard anecdotes that paint Microsoft's internal motivations at the time as being strongly anti-WHATWG (and anti-me specifically), which is consistent with their outward behaviour: a lot of what we saw could be interpreted (apparently accurately) as intentionally designed to waste time, sow confusion, or otherwise disrupt the work.
<p>Over the years we added other documents, edited by other people, but in each case our process was basically to have one benevolent dictator for each document, whose job it was to make all the technical decisions. The private mailing list was theoretically empowered to depose any of the editors, in case they went "rogue", but they never did (get deposed, or go rogue). The private mailing list's membership was the people who were active right at the start, with one or two additions over the years, but by the time the WHATWG was really having serious influence on the Web, most of the people on the private mailing list were really not that closely involved any more, which meant they were more absent parents than active supervisors.
<p>In practice, Microsoft's disruption efforts failed completely because there was nothing to really disrupt: the editors (including myself) were working with honest and genuinely objective intent, taking all feedback, examining it critically, and making technical decisions without any process. Sending lots of useless feedback would have been the most effective way to waste our time but that was not a technique they used. Instead they tried to play process games, to which we were largely immune: given the lack of process, there was nothing to game.
<p>Since my departure the governance model has changed; the organization now has a legal entity and some contractual agreements, but I'm not familiar with the details.
<p>At the start, our goals and success metrics were implicit; really, just to create specifications that advanced the development of the Web in a way that browser vendors were in agreement with (something the W3C was not doing).
<dt>Has the WHATWG approach and governance model implemented achieved all the outcomes/objectives desired?
<dd>
<p>While I was involved I would say it was remarkably successful. We developed an extremely detailed and precise specification that was orders of magnitude more useful to implementers than anything the W3C had done to date (I think, to be honest, that even to this day the W3C does not realize that this was the key difference between our approaches). We changed the way specifications are thought of in the Web space, going from these vague documents written in pay-to-play meetings to very precise technical documents that define all behaviour (including error handling, theretofore unheard of in this space, and quite controversial at the time), written in the open. We changed the default model of specifications from one where you would write the specification then set it in stone to one where specifications are living documents that are continually updated for decades. We didn't set out to do these things explicitly at the start (our earliest plans in fact set out clear milestones along the lines of the "set in stone" model), but they were natural outcomes of our intent to create technically precise documents as opposed to what I have previously characterised as "especially dry science fiction".
<dt>What is the volume of work for WHATWG resources that have official roles in processing requests, managing, executing work to make changes/updates? i.e. how many requests and changes are managed in a given regular period monthly/quarterly/yearly?
<dd>
<p>From 2006 (when I started using a version control system) to 2015 (when I stopped being an active editor) I made about 8,874 commits to the specification. Some were trivial typo fixes, some were giant new sections. That's an average of one commit every 10 hours or so. I don't know what the current rate is, but the team uses GitHub now so you can probably find out quite easily.
<dt>How many resources/people work full time on WHATWG?
<dd>
<p>At the time I was involved, it was 1 person, me, who worked full-time on it, with lots of people contributing their time. I've no idea what the current investment is. I imagine nobody is full-time on the WHATWG now but I could be wrong.
<dt>How is WHATWG funded?
<dd>
<p>At the time of my involvement, I was paid by Opera at first and then Google, as a member of their technical staff whose role was to work on the WHATWG mostly full-time (I had some other responsibilities at both companies, but they were a small fraction of my work).
<p>I personally paid for the expenses of the WHATWG itself out of pocket. These amounted to very little, just Web hosting and the domain name registration.
<p>I don't know how it's funded now. I don't pay for the hosting any more, but I'm not sure who does.
<dt>Are resources volunteers and/or paid by their parent companies to fulfil obligations/work for WHATWG?
<dd>
<p>Both. The WHATWG is set up to pay no attention to how someone is participating, because it has no impact on the technical value of their contributions.
<dt>I understand part of the decision-making process for making a change to the specification is sufficient “implementer” support. I understand this means two or more browser engines? I assume the implementers are the 4 companies in the steering group?
<dd>
<p>I don't really understand the current governance style.
<p>At the time I was involved, it was informal. The editor was responsible for making sure that what they specified would in due course match what all the browser vendors implemented. Whether they did this by writing amazingly compelling specifications that the browser vendors felt obligated to implement by sheer force of technical superiority, by specifying the lowest common denominator that they could get each vendor to individually commit to, or by the political means of convincing one vendor to publicly commit by privately telling them that another vendor had privately committed to doing so if the first vendor would publicly commit, was a matter for the editor.
<p>Personally I did all of the above. Sometimes things I specified turned out to be universally disliked and I (or my successors) ended up removing them. Sometimes things I specified were just describing what the browser vendors all already implemented as a matter of fact, and in those cases there was little for them to argue about.
<dt>In our industry we may have 100s of implementers of various sizes. How does the number of implementers scale the challenges we may be faced with? Our users are also industry/companies rather than individuals. I imagine we will have to assess the governance and decision rights with that in mind.
<dd>
<p>I have no idea to be honest. The WHATWG worked in part because of the esoteric and unique situation that was the Web space at the time. A small number of companies, creating products that were used by billions of people of which thousands were interested enough in the technical details to directly participate, but where weirdly very little money obviously changed hands.
<p>(As a side note: the economics of Web browsers and Web standards are not obvious -- for example, why did Google pay me to do this work? Why did Google tell me I should ignore Google's own interests and just focus on what is technically right <em>for the Web as a whole</em>? The reasoning is amusing in its simplicity: if the Web gets better, then more people will browse the Web; if more people browse the Web, Google can show more ads; if Google shows more ads, more ads might get clicked on; if more ads get clicked on, Google makes more money. Nothing in this reasoning requires that the Web change specifically to help Google's direct interests. This freed me to be actually vendor neutral to an extent that few of my contemporaries truly believed, I think.)
<dt>What are the big lessons learned/areas not to overlook?
<dd>
<p>The biggest lesson I would say I learnt is that it can work. You can create an organization that is truly open, truly technically-driven, does not have really any process at all, yet creates technically-relevant high-quality documents that move an industry. You don't need a big staff, you don't need annual events, you don't need in-person meetings or voice-conference calls. You don't need to make decisions based on "consensus" or majority vote. You don't need to be pay-to-play. You can make decisions that are entirely based on technical soundness.
<dt> What would you do differently if you established WHATWG again and why?
<dd>
<p>One of the things that we did around 2007 (we started in 2004) was to agree to work with the W3C to develop HTML, instead of continuing our independent path. We maintained enough independence that we were able to mostly disengage after that effort went predictably nowhere, but it was definitely a distraction. I wish that we had had more confidence back then in our ability to just ignore the W3C. In retrospect it's more obvious to me now that the W3C really had nothing to give us. (At the time, some of us still viewed the W3C as being the logical place for this work to happen and we viewed the WHATWG as a temporary workaround until the W3C adapted to the new world, but they never really adapted. Even today, where the W3C actually redirects their Web site to the WHATWG for the specifications the WHATWG is working on, the W3C's own processes have not changed in a meaningful way to really fix the problems we saw in 2003 that led to the WHATWG's creation. They just gave up competing on some fronts.)
<p>The other big thing that I wished we had done much earlier is establish a patent policy whereby each vendor would share their relevant patents. This is pretty common in various industries, but we did not pay it enough attention and it hurt our credibility for much longer than it should have. (In practice I believe this is mostly theatre, but in this case it's theatre that matters so we should have done it much earlier.)
<p>One more thing I would do differently is have a much stronger code of conduct from the start, which doesn't just disallow bad behaviour but actively requires positive interactions. There's no excuse for being cranky on a mailing list. Being obtuse or just unpleasant is not necessary. We had many people over the years who would push right to the limit of what was acceptable, and I wish I had been much, much stricter, stepping in and removing participants as soon as they were even slightly obnoxious. I think we would have made much more rapid progress and grown a much bigger community much quicker if we had done that.
</dl>
Ask for forgiveness, not permission
http://ln.hixie.ch/?start=1610654145&count=1
2021-01-14T19:55:45+00:00<p>A colleague of mine asked me to explicitly put an LGTM on their design doc so that they could go ahead and implement it. The design doc was one I had previously reviewed and commented on, and I had indicated that it seemed like a good idea, but I hadn't filled in the box saying that "my TL has said LGTM".
<p>My answer: no. You don't need my permission.
<p>Ask yourself: why do you want explicit permission? Is anyone asking you to get permission? What would happen if you just... did the thing?
<p>Some people want LGTMs because that way they feel like if they make a mistake, they'll be covered. But that's flawed thinking in two ways. First of all, mistakes are fine. People make mistakes, we all make mistakes, mistakes are how we learn. If you're not making mistakes, then you're not taking enough risks to be successful. Secondly, even if making a mistake was bad, getting some people to sign off on something doesn't mean they are taking any more responsibility than if they didn't. You'd still be responsible for your decisions even if you got permission, and your leadership would still be responsible for your decisions if you didn't get permission.
<p>I have a friend who used to work in Google Search on a tool called "Janitor". It was a tool that would garbage collect the results of processing our Web indexing — there's a lot of temporary files created in indexing the Web, and Janitor would go around deleting them when they weren't needed any more. He literally deleted petabytes of data regularly. One day, Larry Page was visiting his team and asked about this project. Larry asked, "how many files have you accidentally deleted?". My friend very proudly answered "I have never deleted a file that should not have been deleted! I have a 100% success record!".
<p>Larry apparently responded "I think you should take more risks".
<p>Some people want LGTMs because they feel that without them they aren't entitled to do their job. But... it's your job. That's why you were hired. You don't need additional permission to do your job. Your biweekly paycheck is all the permission you need.
<p>Some people want LGTMs because they are not confident enough in their idea to execute it. Having leaders on the team put a stamp on their design doc gives them the confidence that the idea was good enough to execute. The thing is though, we won't know if it was a good idea or not until we try it. These stamps aren't saying "it's a good idea", they're saying "it's not an idea so terrible that I can predict its failure already based on my past experience"... and your leaders and team mates will tell you that something is a bad idea if they see it. That's why you ask for review. If they didn't arch their eyebrows and grimace when you explained your idea, then it's probably fine, and you don't need any more permission.
<p>In conclusion: ask for forgiveness, not permission. Get reviews of your design docs, by all means. But don't wait for a stamp of approval to implement them.Indexing into a string
http://ln.hixie.ch/?start=1528498856&count=1
2018-06-08T23:00:56+00:00<p>I propose the following aphorism:
<p>Indexing into a string type makes as much sense as indexing into an integer type.
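<p>A concrete illustration, using Python, whose strings index by Unicode code point: the "characters" you get back are no more meaningful on their own than the decimal digits of an integer.</p>

```python
# Indexing by code point happily slices user-perceived characters apart.
word = 'cafe\u0301'            # 'café' spelt with a combining acute accent
assert len(word) == 5          # five code points, four visible characters
assert word[4] == '\u0301'     # the "last character" is a bare accent mark

flag = '\U0001F1E8\U0001F1E6'  # the Canadian flag emoji
assert len(flag) == 2          # two regional-indicator code points
assert flag[0] == '\U0001F1E8' # "half a flag" is not a meaningful unit
```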