I joined Google in October 2005, and handed in my resignation 18 years later. Last week was my last week at Google.
I feel very lucky to have experienced the early post-IPO Google; unlike most companies, and contrary to the popular narrative, Googlers, from the junior engineer all the way to the C-suite, were genuinely good people who cared very much about doing the right thing. The oft-mocked "don't be evil" truly was the guiding principle of the company at the time (largely a reaction to contemporaries like Microsoft whose operating procedures put profits far above the best interests of customers and humanity as a whole).
Many times I saw Google criticised for actions that were sincerely intended to be good for society: Google Books, for example. Much of the criticism Google received around Chrome and Search, especially around supposed conflicts of interest with Ads, was way off base (it's surprising how often coincidences and mistakes can appear malicious). I often saw privacy advocates argue against Google proposals in ways that were net harmful to users. Some of these fights have had lasting effects on the world at large; one of the most annoying is the prevalence of pointless cookie warnings we have to wade through today. I found it quite frustrating how teams would actively pursue ideas that were genuinely good for the world, without prioritising short-term Google interests, only to be met with cynicism in the court of public opinion.
[Photo: Charlie's patio at Google, 2011. Image has been manipulated to remove individuals.]
Early Google was also an excellent place to work. Executives gave frank answers on a weekly basis, or were candid about their inability to do so (e.g. for legal reasons or because some topic was too sensitive to discuss broadly). Eric Schmidt regularly walked the whole company through the discussions of the board. The successes and failures of various products were presented more or less objectively, with successes celebrated and failures examined critically with an eye to learning lessons rather than assigning blame. The company had a vision, and deviations from that vision were explained. Having experienced Dilbert-level management during my internship at Netscape five years earlier, the uniform competence of people at Google was very refreshing.
For my first nine years at Google I worked on HTML and related standards. My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google's interests). This was a continuation of the work I started while at Opera Software. Google was an excellent host for this effort. My team was nominally the open source team at Google, but I was entirely autonomous (for which I owe thanks to Chris DiBona). Most of my work was done on a laptop from random buildings on Google's campus; entire years went by where I didn't use my assigned desk.
In time, exceptions to Google's cultural strengths developed. For example, as much as I enjoyed Vic Gundotra's enthusiasm (and his initial vision for Google+, which again was quite well defined and, if not necessarily uniformly appreciated, at least unambiguous), I felt less confident in his ability to give clear answers when things were not going as well as hoped. He also started introducing silos to Google (e.g. locking down certain buildings to just the Google+ team), a distinct departure from the complete internal transparency of early Google. Another example is the Android team (originally an acquisition), who never really fully acclimated to Google's culture. Android's work/life balance was unhealthy, the team was not as transparent as older parts of Google, and the team focused on chasing the competition more than solving real problems for users.
My last nine years were spent on Flutter. Some of my fondest memories of my time at Google are of the early days of this effort. Flutter was one of the last projects to come out of the old Google, part of a stable of ambitious experiments started by Larry Page shortly before the creation of Alphabet. We essentially operated like a startup, discovering what we were building more than designing it. The Flutter team was very much built out of the culture of young Google; for example we prioritised internal transparency, work/life balance, and data-driven decision making (greatly helped by Tao Dong and his UXR team). We were radically open from the beginning, which made it easy for us to build a healthy open source project around the effort as well. Flutter was also very lucky to have excellent leadership throughout the years, such as Adam Barth as founding tech lead, Tim Sneath as PM, and Todd Volkert as engineering manager.
We also didn't follow engineering best practices for the first few years. For example, we wrote no tests and had precious little documentation; a single whiteboard sketch was what passed for a design doc for the core Widget, RenderObject, and dart:ui layers. This allowed us to move fast at first, but we paid for it later.
Flutter grew in a bubble, largely insulated from the changes Google was experiencing at the same time. Google's culture eroded. Decisions went from being made for the benefit of users, to the benefit of Google, to the benefit of whoever was making the decision. Transparency evaporated. Where previously I would eagerly attend every company-wide meeting to learn what was happening, I found myself now able to predict the answers executives would give word for word. Today, I don't know anyone at Google who could explain what Google's vision is. Morale is at an all-time low. If you talk to therapists in the Bay Area, they will tell you all their Google clients are unhappy with Google.
Then Google had layoffs. The layoffs were an unforced error, driven by a short-sighted push to ensure the stock price would keep growing quarter-to-quarter, instead of following Google's erstwhile strategy of prioritising long-term success even if that led to short-term losses (the very essence of "don't be evil"). The effects of layoffs are insidious. Whereas before people might focus on the user, or at least their company, trusting that doing the right thing will eventually be rewarded even if it's not strictly part of their assigned duties, after a layoff people can no longer trust that their company has their back, and they dramatically dial back any risk-taking. Responsibilities are guarded jealously. Knowledge is hoarded, because making oneself irreplaceable is the only lever one has to protect oneself from future layoffs. I see all of this at Google now. The lack of trust in management is reflected by management no longer showing trust in the employees either, in the form of inane corporate policies. In 2004, Google's founders famously told Wall Street: "Google is not a conventional company. We do not intend to become one." That Google is no more.
Many of these problems with Google today stem from a lack of visionary leadership from Sundar Pichai, and his clear lack of interest in maintaining the cultural norms of early Google. A symptom of this is the spreading contingent of inept middle management. Take Jeanine Banks, for example, who manages the department that somewhat arbitrarily contains (among other things) Flutter, Dart, Go, and Firebase. Her department nominally has a strategy, but I couldn't leak it if I wanted to; I literally could never figure out what any part of it meant, even after years of hearing her describe it. Her understanding of what her teams are doing is minimal at best; she frequently makes requests that are completely incoherent and inapplicable. She treats engineers as commodities in a way that is dehumanising, reassigning people against their will in ways that have no relationship to their skill set. She is completely unable to receive constructive feedback (as in, she literally doesn't even acknowledge it). I hear other teams (who have leaders more politically savvy than I) have learned how to "handle" her to keep her off their backs, feeding her just the right information at the right time. Having seen Google at its best, I find this new reality depressing.
There are still great people at Google. I've had the privilege to work with amazing people on the Flutter team such as JaYoung Lee, Kate Lovett, Kevin Chisholm, Zoey Fan, Dan Field, and dozens more (sorry folks, I know I should just name all of you but there are too many!). In recent years I started offering career advice to anyone at Google and through that met many great folks from around the company. It's definitely not too late to heal Google. It would require some shake-up at the top of the company, moving the centre of power from the CFO's office back to someone with a clear long-term vision for how to use Google's extensive resources to deliver value to users. I still believe there's lots of mileage to be had from Google's mission statement (to organize the world’s information and make it universally accessible and useful). Someone who wanted to lead Google into the next twenty years, maximising the good to humanity and disregarding the short-term fluctuations in stock price, could channel the skills and passion of Google into truly great achievements.
I do think the clock is ticking, though. The deterioration of Google's culture will eventually become irreversible, because the kinds of people whom you need to act as moral compass are the same kinds of people who don't join an organisation without a moral compass.
Decline meetings aggressively. Always try to resolve issues by e-mail or chat first if possible.
Decline any meeting without an explicit agenda (I make exceptions for my immediate manager).
Decline any meeting where the agenda doesn't seem relevant to your work.
Decline any recurring meeting with more than one other person.
Keep track of how productive recurring meetings are. If they're not productive, cancel them. If they're only occasionally productive, reduce the frequency.
End meetings promptly once the agenda is resolved.
Always leave a meeting when it reaches the end of its scheduled time.
Never start a meeting late. If people are missing, start on time anyway. This is especially true for any meeting with large groups of people.
Have a hard out every day; stop working at that time.
Create fake buffer meetings so that you've got guaranteed breaks.
Decline meetings that conflict with your breaks unless the person has explicitly reached out first.
Aggressively defrag your calendar to make it look like what you want.
"Open Source" is a broad spectrum, with various axes. The following is an attempt to describe various ways to look at openness to aid project leaders in determining what they want their project to look like. I originally wrote this for my colleagues at Google, but the concepts apply widely and I figured they might be of use for others.
In practice, every project is a unique snowflake and there are exceptions to every rule. A project can be proprietary but use and contribute back to some open source library. An open source project can have undocumented proprietary protocols. A team can intend to fall in one category, but by their actions fall in another. The descriptions below should be seen merely as a high-level description of some possible ways projects can be configured, not as a comprehensive guide to the taxonomy of openness. Additionally, the examples I give below refer to the state of those products as of the time of writing. As projects evolve, these may become less accurate.
Interoperability (0-6)
One aspect of openness is how one's product interacts with others.
For the purposes of this section, APIs (Application Programming Interfaces), ABIs (Application Binary Interfaces), formats, and protocols are considered equivalent. While they serve different roles in practice, the techniques used to limit or encourage their reuse are the same.
0. Proprietary with obfuscation
The most closed one can make one's protocols is to not document them publicly and to design them to be actively hard to reverse engineer. Patents and DRM may also be used to further restrict potential interoperability by legal means in some jurisdictions.
Examples: Kindle file format, most streaming music formats.
1. Proprietary
Most protocols that are not intended for interoperability with other systems are undocumented (at least, not documented in a manner intended for public consumption), but are otherwise not obfuscated, and a sufficiently motivated user could reverse engineer the protocol and use it.
Example: NTFS file system.
2. Licensed open standards
One can have entirely open specifications, but require payment (or other agreements) before the standard can be read or used, e.g. by the use of patent licensing.
Example: the H.264 video codec.
3. "The Code Is The Standard"
Some projects do not document their protocols, but since their source code is available, they are effectively defined by their implementation, bugs and all.
Examples abound, but since people rarely intend to be in this state, calling out any specific project as being in this category tends to be controversial.
4. Public
When it is desired that users create new products to interact with one's own, one may publicly document one's protocols. There are varying levels of completeness to such documentation; for example, whether some aspects are kept proprietary, or whether the documentation includes details for error handling and future extensions.
Examples: IntelliJ, SwiftUI, the SMB protocol.
5. Open standards
The ultimate openness one can present is to submit one's protocols to a standards committee (or form a new one; the difference is largely symbolic). This is useful when the intent is to create an entire ecosystem around one's product and protocols.
Examples: the Internet's core protocols, the web.
6. Regulated standards
In the extreme, interoperability around some standards becomes so important that government agencies get involved and the protocol becomes a matter of law.
Examples: power grid standards.
Source code license (0-7)
If software is provided in binary form (e.g. client applications) then sufficiently motivated users will be able to reverse engineer it, even if the source code is not explicitly shared with the user. For the purposes of this section, we are ignoring this and focusing on the access that users have to the project's original source code.
0. Trade secret
Some source code is so secret and so important to its owner that it gets legal protection beyond copyright.
Example: The most sensitive internals of particularly special proprietary software products.
1. Proprietary
The default is for source code to be copyrighted. If one does not redistribute it, then that source code is entirely closed.
Example: The source code for the UI parts of macOS.
2. Commercially licensed
One can license one's code for use by specific downstream users, without making it public. Typically this is done for money.
Examples: Qt (in its closed-source form); Microsoft's sale of access to the Windows source code.
3. Source code that is incidentally visible
One can publish one's source code without licensing it (or licensing it using a very restrictive license that essentially does not allow any use), typically as an incidental part of distributing one's application. This allows people to see the code, but does not allow them to use it in their own projects unless they negotiate a separate license with the distributor.
Examples: JavaScript code in web sites that don't use a minifier or compiler; script code in game data files.
4. Usage-restricted source-sharing
One can make one's source available under a license that allows some kinds of reuse by other parties but prevents others, such as commercial use, use by enterprises over a certain size, or use that competes with the original developer. This can be done either by prohibiting undesired uses outright, or by nominally allowing them but only under onerous terms.
Example: MongoDB.
Open source licenses
One can license one's code for public use, and these licenses can vary in their terms.
It's important to note that there are legally-sound open source licenses, and there are nonsense "licenses" that are the result of software engineers thinking that being a lawyer is easy. Talk to a lawyer before choosing a license. See the OSI's license page for an overview of the topic.
5. Restrictive
The most strict open source licenses significantly limit what one can do with the source code. For example, they might require that downstream developers license their modifications and any linked code with the same license, or require that downstream developers license their software such that their users can obtain their app's source code.
Examples: GPL-licensed software, such as Linux or Emacs.
6. Reciprocal
These licenses apply the restrictive terms to the code in question (typically a library) but not to code that uses it (such as an application that embeds the library).
Examples: MPL-licensed software, such as Firefox.
7. Permissive
The most liberal licenses require very little of downstream developers other than the replication of the copyright notices in software that uses the covered code (and in some cases not even that).
Examples: Apache-licensed software, such as Android or Rust; BSD-licensed software, such as Chromium or Flutter.
Copyright management
Projects that accept source code from more than one legal entity may wish to navigate the issues of copyright assignment, liability, relicensing, and so forth. The usual tools for this are Contributor License Agreements (CLAs) and Developer Certificates of Origin (DCOs). Talk to a lawyer about these options.
Development processes (0-8)
Separate from what one does with the protocols and the code, a separate choice is how to design and develop the code: where conversations happen, how people are added to the project, and so forth. This is sometimes called "governance".
The sections below apply as much to big projects as to one-person projects, but are primarily focused on projects with multiple team members.
0. Proprietary development
The most closed projects have no public-facing development at all. All design, implementation, and testing happens internally.
Example: Google Search.
1. Proprietary development of open source software
As with proprietary software, all the design, implementation, and testing happens internally. However, the source code is open source in some way, and is published periodically (e.g. in conjunction with a product release). This is often referred to as "throwing the code over the wall". No attempt is made to encourage public contributions. Patches may in some cases be taken (e.g. by e-mail).
Examples: SQLite, Postfix.
2. Proprietary development, limited-access betas
A team can invite a closed set of unaffiliated users to test their software before launches.
This is a common model for commercial software.
3. Proprietary development, public betas
A team with private development can solicit feedback from a public community by providing pre-release software for any user to test.
This is a common model for commercial software.
4. Public presence, private development
A team with public tooling (e.g. bug databases, code repositories, code reviews, continuous integration), but that makes no attempt to accept public contributions (code, suggestions, etc). Public bug reports may be accepted but the development team typically does not engage with the bug reporters.
Such a team's communications channels are all or mostly internal. Commit access is typically automatic for the team, and unavailable for anyone else.
Examples: Many of Google's small open source projects, and many personal projects on GitHub, fall in this category.
5. Public clique development
A team with public tooling that nominally accepts public contributions, but where becoming an active and equal member of the team is in practice discouraged (new team members are explicitly recruited, and usually all work for the same company). Friction points exist that reduce the likelihood of contributions: for example, public tooling that is different from that typically used by open source projects; official public channels that are not typically frequented by the bulk of the team; lack of documentation, or out-of-date documentation (especially about how to contribute); and communications held mostly in private channels.
The team may engage with bug reporters on occasion, and may listen to suggestions for project direction, but all decisions ultimately rest with the core team.
Many projects that try to make the jump from "public presence, private development" to "public development, private governance" end up in this state because they underestimate the effort required to successfully and productively develop software in public. That said, this is a valid development model in its own right, especially for projects that need a strong driving vision such as a programming language or an open source narrative video game.
Examples abound, but since people rarely intend to be in this state, calling out any specific project as being in this category tends to be controversial.
6. Public development, private governance
A team that works in public, with public design decisions, public meetings, and public chats, but whose core leadership is accountable to a single entity whose primary purpose is not this project (e.g. a company). Typically such a project is largely funded by that single entity as well, especially in terms of employing the most active contributors and marketing the project.
Such a team typically hopes that one day contributors with other affiliations will be independently designing, implementing, reviewing, and landing code without oversight from the project leads beyond ensuring a broad alignment on strategy. As such, it actively tries to differentiate between being a member of the development team, and being a member of the primary sponsoring organization. Such a team typically has publicly-visible documentation of its processes, governance, values, contributor access policies, etc.
Example: Flutter.
7. Public development with an unelected but independent core team
A project can be entirely open in its development, with a self-appointed core team that does not answer to anyone but themselves. The term "Benevolent Dictator for Life" (BDFL) is sometimes used to describe this model when the core team is a single person (usually the project founder).
It can have the advantage of a strong vision unaffected by fleeting trends, but can also have the disadvantage of the project not being responsive to important shifts in the environment.
Example: Linux.
8. Accountable independent public development
Ultimately, the most open a project can be is for it to have entirely independent governance accountable to its community, e.g. a foundation with democratically elected core leaders.
This has all the advantages and disadvantages of democracy.
Software has an infinite number of bugs. How can we tell which ones to fix?
I propose that it makes the most sense to optimize for people-happiness per unit bug fixing time, maximizing how much our effort improves the product for our users.
To put it in mathematical terms, we want to fix bugs with the highest N·ΔH / T, where:
N is the number of people the bug affects
ΔH is the increase of happiness per user affected by the bug
T is an estimate of the amount of time it will take us to fix the bug
(These metrics are very hard to estimate. Don't worry too much about precision here.)
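To make the arithmetic concrete, here's a minimal sketch in Dart of ranking bugs by this score. The Bug class and its fields are invented for illustration; they are not taken from any real triage tool.

```dart
// A minimal sketch; Bug and its fields are invented for illustration.
class Bug {
  Bug(this.title,
      {required this.usersAffected,
      required this.happinessDelta,
      required this.daysToFix});

  final String title;
  final int usersAffected;     // N: people the bug affects
  final double happinessDelta; // ΔH: happiness gained per affected user
  final double daysToFix;      // T: estimated time to fix

  double get score => usersAffected * happinessDelta / daysToFix; // N·ΔH/T
}

void main() {
  final bugs = <Bug>[
    Bug('Typo in an error message',
        usersAffected: 10000, happinessDelta: 0.01, daysToFix: 0.5),
    Bug('Crash on startup',
        usersAffected: 500, happinessDelta: 1.0, daysToFix: 2),
  ];
  bugs.sort((a, b) => b.score.compareTo(a.score)); // highest score first
  for (final bug in bugs) {
    print('${bug.score.toStringAsFixed(0)}  ${bug.title}');
  }
}
```

Note how the crash outranks the typo here even though it affects far fewer people, because each affected user gains so much more happiness from the fix.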
Bugs that improve T for future bugs
The best bugs to fix are those that make us more productive in the future. Reducing test flakiness, reducing technical debt, increasing the number of team members who are able to review code confidently and well: this all makes future bugs easier to fix, which is a huge multiplier to our overall effectiveness and thus to developer happiness.
Bugs affecting more people are more valuable (maximize N)
We will make more people happier if we fix a bug experienced by more people.
One thing to be careful about is to think about the number of people we are ignoring in our metrics. For example, if we had a bug that prevented our product from working on Windows, we would have no Windows users, so the bug would affect nobody. However, fixing the bug would enable millions of developers to use our product, and that's the number that counts.
Bugs with greater impact on developers are more valuable (maximize ΔH)
A slight improvement to the user experience is less valuable than a greater improvement. For example, if our application, under certain conditions, shows a message with a typo, and then crashes because of an off-by-one error in the code, fixing the crash is a higher priority than fixing the typo.
Bugs that are easier to fix are more valuable (minimize T)
The less time we spend working on something, the more time we will have to work on other things. Naturally, therefore, all else being equal, easier bugs are more impactful than harder bugs because we can fix more of the easier bugs in the same time.
This can feel counterintuitive. Surely fixing hard things is more valuable? Well, no. Having impact is better, and all other things being equal, it's more impactful to fix two easy bugs than one hard bug.
Steps to reproduce make a bug more valuable
If a bug has steps to reproduce, we will have a much easier time fixing it. In general, we should focus on bugs like that rather than those where the first step will be determining what the problem even is, because in the time it would take us to figure out a problem, we could have fixed multiple issues where the problem was clear.
Again, we will make more users happier if we fix more bugs each affecting X people than if we fix fewer (but gnarlier) bugs each affecting X people.
Exceptions
A high-profile hard-to-reproduce bug may warrant the extra effort, because the number of people affected is high. We want to take into account the total impact of fixing the bug as well as the time it will take to fix it.
Deciding when to move on
Sometimes, T can turn out to be bigger than estimated. Something looks easy, but turns out to be hard. The right choice may be to dump all one has learnt into the tracking issue and move on to something that one can solve more quickly.
Deciding between tasks of equal merit
Sometimes, it's not easy to decide which of two or three or ten tasks should be prioritized. The icon button's splash radius is too large on a toolbar. Users can't tap on menu items that haven't appeared yet during a popup menu animation. The shadow on the toolbar doesn't quite extend to the far left of the screen. Which of these should we work on, if we only have the time to work on one? It can seem difficult to decide.
The key realization to solving this conundrum is both freeing and mildly unsettling: it doesn't matter. We can do whichever one we feel like.
It doesn't matter because they are (by definition) equally important, and (by definition) we can do only one. Whichever one we do, some people will be happier. Assuming that, across the project, we pick among these choices more or less randomly, we will avoid introducing any particular bias and the product as a whole will get better.
To put it another way: in either case, we are improving the product by the same people-happiness per unit bug fixing time. So the product gets better by the same amount.
This doesn't mean any one of these bugs or features is not important. It just means that they are equally important, and one won the lottery and got fixed.
Flutter: Static analysis of sample code snippets in API docs
One of the things I am particularly proud of with Flutter is the quality of our API documentation. With Flutter's web support, we're even able to literally inline full sample applications into the API docs and have them editable and executable in place. For example, the docs for the AppBar widget have a diagram followed by some samples.
These samples are actually just code in our repo, which means we run static analysis on them, and even have unit tests to make sure they actually work. (Side note: this means contributing samples is really easy and really impactful if you're looking for a way to get started with open source. It requires no more skill than writing simple Flutter apps and tests, and people love sample code; it's hugely helpful. If you're interested, see our CONTRIBUTING.md.)
Anyway, sometimes a full application is overkill for sample code, and instead we inline the sample code using ```dart markdown. For example, the documentation for ListTile has a bunch of samples, but nonetheless starts with a smaller-scale snippet to convey the relationship between Material and ListTile.
This leads to a difficulty, though. How can we ensure that these snippets are also valid code? This is not an academic question; it's very hard to write code correctly without a compiler, and sample code is no exception. What if a typo sneaks in? What if we later change an API in some way that makes the sample code no longer valid?
We've had a variety of answers to this over the years, but as of today the answer is that we actually run the Dart static analyzer against _all_ our sample code, even the little snippets in ```dart blocks!
Our continuous integration (and precommit tests) read every Dart file, extracting any blocks of code in documentation. Each block is then examined using some heuristics to determine whether it looks like an expression, a group of statements, a class or function, and so on, and is embedded into a file in a temporary directory with suitable framing to make the code compile (if it's correct). We then call out to the Dart analyzer and report the results.
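For illustration, here's a minimal Dart sketch of the extraction step, assuming the snippets live in /// doc comments; the actual tool in the Flutter repo is considerably more thorough than this.

```dart
// Minimal sketch of extracting ```dart blocks from doc comments.
// This is not the actual Flutter tool; the details are simplified
// assumptions for illustration.
import 'dart:io';

Iterable<List<String>> extractDartBlocks(File file) sync* {
  List<String>? current;
  for (final line in file.readAsLinesSync()) {
    final text = line.trim();
    if (!text.startsWith('///')) {
      current = null; // snippets cannot span non-comment lines
      continue;
    }
    final comment = text.substring(3).trim();
    if (comment == '```dart') {
      current = <String>[]; // start of a snippet
    } else if (comment == '```') {
      if (current != null) yield current; // end of a snippet
      current = null;
    } else if (current != null) {
      current.add(comment); // a line of the snippet's code
    }
  }
}
```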
To make it easier for us to understand the results, the tool that does this keeps track of the source of every line of code it puts in these temporary files, and then tweaks the analyzer's output so that instead of pointing to the temporary file, it points to the right line and column in the original source location (i.e. the comment). (It's kind of fun to see error messages point right at a comment and correctly find an error.)
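A sketch of what that remapping might look like, with invented names and a simplified error format; the real tool's bookkeeping is more involved.

```dart
// Hypothetical sketch of mapping analyzer output back to the original
// doc comment; names and the error format are assumptions.
class LineOrigin {
  LineOrigin(this.sourceFile, this.sourceLine, this.columnOffset);
  final String sourceFile;
  final int sourceLine;   // line of the doc comment in the original file
  final int columnOffset; // width of the stripped '/// ' prefix
}

/// Rewrites a message of the (simplified) form `path:line:col: message`
/// so that it points at the original comment instead of the temp file.
String remapError(String error, Map<int, LineOrigin> origins) {
  final match = RegExp(r'^(.+?):(\d+):(\d+): (.*)$').firstMatch(error);
  if (match == null) return error;
  final origin = origins[int.parse(match.group(2)!)];
  if (origin == null) return error;
  final column = int.parse(match.group(3)!) + origin.columnOffset;
  return '${origin.sourceFile}:${origin.sourceLine}:$column: ${match.group(4)}';
}
```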
The code to do all this is pretty hacky. To make sure the code doesn't get compiled wrong (e.g. embedding a class declaration into a function because we think it's a statement), there's a whole bunch of regular expressions and other heuristics. If the sample code starts with `class` then we assume it's a top-level declaration, and stick it in a file after a bunch of imports. If the last line ends with a semicolon, or if any line starts with a keyword like `if`, then we stick it into a function declaration.
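In Dart, a toy version of those heuristics might look something like this (the real regular expressions are more elaborate, and this sketch assumes a non-empty snippet):

```dart
/// The kinds of framing a snippet might need.
enum SnippetKind { topLevelDeclaration, statements, expression }

/// A toy classifier in the spirit of the heuristics described above;
/// the real checks are far more thorough.
SnippetKind classify(List<String> lines) {
  if (lines.first.startsWith('class ')) {
    return SnippetKind.topLevelDeclaration; // paste after imports as-is
  }
  final startsWithKeyword = lines.any(
      (line) => RegExp(r'^(if|for|while|switch|try)\b').hasMatch(line.trim()));
  if (lines.last.trimRight().endsWith(';') || startsWithKeyword) {
    return SnippetKind.statements; // wrap in a function body
  }
  return SnippetKind.expression; // wrap in e.g. 'final x = <snippet>;'
}
```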
Some of the more elaborate code snippets chain together, so to make that work we support a load-bearing magical comment (hopefully those words strike fear in your heart) that indicates to the tool that it should embed the earlier example into this one. We also treat // ... as a magical comment: if the snippet contains such a comment, we tell the analyzer to ignore the non_abstract_class_inherits_abstract_member error, so that you don't have to implement every last member of an abstract class in a code snippet. We also have a special Flutter-specific magical comment that tells the tool to embed the snippet into a State subclass, so that you can write snippets with build() methods that call setState() et al.
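To make the State wrapping concrete, the generated framing might look roughly like the following; the exact code and the magical-comment syntax in the real tool differ, so treat this purely as an illustration.

```dart
import 'package:flutter/material.dart';

// Hypothetical framing; the real generated code is not shown verbatim.
class _SampleWidget extends StatefulWidget {
  const _SampleWidget();
  @override
  State<_SampleWidget> createState() => _SampleState();
}

class _SampleState extends State<_SampleWidget> {
  // ...the snippet from the doc comment is pasted here, so it can
  // declare a build() method and call setState() as though it were
  // written inside a real State subclass...
  @override
  Widget build(BuildContext context) => const Placeholder();
}
```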
My favourite part of this is that, to make it easier to just throw ignores into the mix without worrying too much about whether they're redundant, the tool injects // ignore_for_file: duplicate_ignore into every generated file.
As might be expected, turning all this on found a bunch of errors. Some of these were trivial (e.g. an extra stray ) in an expression), some were amusing (e.g. the sample code for smallestButton() called the function rightmostButton() instead), and some were quite serious (e.g. it turned out the sample code for some of the localizations logic didn't compile at all, either because it was always wrong, or because it was written long ago and the API changed in an incompatible way without us updating the API docs).