We're pretty aggressive about assertions in the Flutter framework.
There are several reasons for this.
The original reason was that when I wrote a bunch of this code, I had nowhere for it to run. I literally wrote the first few thousand(?) lines of framework code before we had the Dart compiler and Skia hooked up. Because of this, I had no way to test it, and so the only way I had to sanity check what I was doing was to be explicit about all the invariants I was assuming. Once we eventually ran that code, it was invaluable because it meant that many mistakes I'd made were immediately caught, rather than lingering in the core only for us to discover years later that some basic assumption of the whole system was just wrong.
Those asserts also ended up being useful when we extended the framework, because they helped catch mistakes we were making in new code. For example, the render object system has many assumptions about what order things are run in and what must be decided when. It's really helpful when creating a new RenderObject subclass for the system to flag any mistakes you might make in terms of violating those invariants.
Similarly, when Adam did the widgets layer rewrite, he made liberal use of asserts to verify invariants, basically to prove to ourselves that the new model was internally consistent.
We rely on these asserts in the tests, too. Many tests are just "if we put the lego bricks in this order, does anything throw an exception?". That only works because of the sheer number of asserts we have, verifying the internal logic at every step.
Anyway, because of how successful this was internally, we also started using asserts even more to catch mistakes in code that's _using_ the framework. Hopefully these are helpful. Nowadays we try to make the asserts have useful error messages when we think that they might report an error you'll run into.
Should you be doing the same in your own code?
It's really a personal preference. Personally I find that documenting every assumption and invariant using asserts is hugely helpful to my own productivity. It's like writing tests: having comprehensive test coverage gives you much more confidence that things are correct. It also really helps when you do code refactors: if you change some class somewhere to work a different way, the asserts and tests for the rest of the code will tell you quickly if anything you did breaks some requirement of the overall system.
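To make this concrete, here is a hypothetical sketch of what documenting invariants with asserts can look like in Dart. None of these class or member names come from the Flutter framework; they are invented for illustration:

```dart
// Hypothetical example of assert-documented invariants.
// These names are invented; this is not actual framework code.
class LayoutNode {
  LayoutNode? parent;
  bool _laidOut = false;
  double _width = 0.0;

  // Invariant: a node may only be laid out after its parent has been
  // laid out (or if it has no parent at all).
  void performLayout(double width) {
    assert(parent == null || parent!._laidOut,
        'performLayout called before the parent was laid out.');
    assert(width >= 0.0, 'Widths must be non-negative, got $width.');
    _width = width;
    _laidOut = true;
  }

  // Invariant: the size is only meaningful after layout has run.
  double get width {
    assert(_laidOut, 'Size requested before layout.');
    return _width;
  }
}
```

Each assert here is both documentation (a reader can see exactly what the code assumes) and a tripwire: if a later refactor calls `width` before `performLayout`, the mistake is caught at the call site rather than surfacing as a mysterious zero somewhere downstream.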
On the other hand, some people find that the benefits of tests and asserts are not offset by the amount of time spent writing them, debugging issues they bring up, and so on. They'll point to the fact that sometimes asserts and tests are wrong, and you can spend hours debugging a failure only to find that the only error is in the test/assert logic itself. People who prefer not to have tests and asserts typically rely more on black-box testing or QA testing (running the app and testing it directly). There's certainly a strong argument to be made that testing the actual app is a better use of your time since that's what your customers will actually be using.
Several years ago, a group involved in standardisation in an industrial field reached out to me to learn more about our experience with the WHATWG. I thought some of my responses might have broader interest, and saved them for publication at some later date, and then promptly forgot all about it. I came across my notes today and figured that today is later, so why not publish them now?
Other than some very minor edits for grammar and spelling, the questions and answers from that interaction are reproduced here verbatim.
What were the original objectives/goals and success metrics that underpinned the design of the WHATWG organisation (processes, systems and governance)?
At the start we really had very little in the way of governance. We were a group of like-minded individuals from different vendors, all concerned with the direction of the organisation that at the time presented itself as the venue for Web standards work. We created a public mailing list and a private mailing list. All work happened on the public mailing list (and in a public IRC channel, and later in a bug database as well). The private mailing list barely had any communication (for most of the time I was involved it averaged about one e-mail a year).
There was very little process: there was originally just one document, then three or four documents, that I edited, taking into account all input from the public and coming up with the best technical solution, disregarding political pressure. For much of the time I was active in the group, and certainly at the start, the reality was that there was one player in this space, Microsoft, with 99% of the user base, and the remaining players, mostly Apple, Mozilla, and Opera, later also Google, shared the remaining 1%. This grew over time, but slowly. (It's now Chromium that has the bulk of the market, and the dynamics are very different. I stopped participating a few years ago, while the numbers were much more mixed between multiple vendors. At the time, the dynamics were already changing, with everyone much more interested in competing and much less interested in cooperating, which is one of the reasons I eventually disengaged.)
Microsoft was invited to participate many times. These invitations were sincere. Microsoft never took us up on this offer while I was editor. They have since taken a more active role, though their position in the market has declined significantly (to <10% by some metrics) and this may be why. I have since then heard anecdotes that paint Microsoft's internal motivations at the time as being strongly anti-WHATWG (and anti-me specifically), which is consistent with their outward behaviour: a lot of what we saw could be interpreted (apparently accurately) as intentionally designed to waste time, sow confusion, or otherwise disrupt the work.
Over the years we added other documents, edited by other people, but in each case our process was basically to have one benevolent dictator for each document, whose job it was to make all the technical decisions. The private mailing list was theoretically empowered to depose any of the editors, in case they went "rogue", but they never did (get deposed, or go rogue). The private mailing list's membership was the people who were active right at the start, with one or two additions over the years, but by the time the WHATWG was really having serious influence on the Web, most of the people on the private mailing list were really not that closely involved any more, which meant they were more absent parents than active supervisors.
In practice, Microsoft's disruption efforts failed completely because there was nothing to really disrupt: the editors (including myself) were working with honest and genuinely objective intent, taking all feedback, examining it critically, and making technical decisions without any process. Sending lots of useless feedback would have been the most effective way to waste our time but that was not a technique they used. Instead they tried to play process games, to which we were largely immune: given the lack of process, there was nothing to game.
Since my departure the governance model has changed; the organization now has a legal entity and some contractual agreements, but I'm not familiar with the details.
At the start, our goals and success metrics were implicit; really, just to create specifications that advanced the development of the Web in a way that browser vendors were in agreement with (something the W3C was not doing).
Has the WHATWG approach and governance model, as implemented, achieved all the outcomes/objectives desired?
While I was involved I would say it was remarkably successful. We developed an extremely detailed and precise specification that was orders of magnitude more useful to implementers than anything the W3C had done to date (I think, to be honest, that even to this day the W3C does not realize that this was the key difference between our approaches). We changed the way specifications are thought of in the Web space, going from these vague documents written in pay-to-play meetings to very precise technical documents that define all behaviour (including error handling, theretofore unheard of in this space, and quite controversial at the time), written in the open. We changed the default model of specifications from one where you would write the specification then set it in stone to one where specifications are living documents that are continually updated for decades. We didn't set out to do these things explicitly at the start (our earliest plans in fact set out clear milestones along the lines of the "set in stone" model), but they were natural outcomes of our intent to create technically precise documents as opposed to what I have previously characterised as "especially dry science fiction".
What is the volume of work for WHATWG resources that have official roles in processing requests, managing, executing work to make changes/updates? i.e. how many requests and changes are managed in a given regular period monthly/quarterly/yearly?
From 2006 (when I started using a version control system) to 2015 (when I stopped being an active editor) I made about 8,874 commits to the specification. Some were trivial typo fixes, some were giant new sections. That's an average of one commit every 10 hours or so. I don't know what the current rate is, but the team uses GitHub now so you can probably find out quite easily.
How many resources/people work full time on WHATWG?
At the time I was involved, it was 1 person, me, who worked full-time on it, with lots of people contributing their time. I've no idea what the current investment is. I imagine nobody is full-time on the WHATWG now but I could be wrong.
How is WHATWG funded?
At the time of my involvement, I was paid by Opera at first and then Google, as a member of their technical staff whose role was to work on the WHATWG mostly full-time (I had some other responsibilities at both companies, but they were a small fraction of my work).
I personally paid for the expenses of the WHATWG itself out of pocket. These amounted to very little, just Web hosting and the domain name registration.
I don't know how it's funded now. I don't pay for the hosting any more, but I'm not sure who does.
Are resources volunteers and/or paid by their parent companies to fulfil obligations/work for WHATWG?
Both. The WHATWG is set up to pay no attention to how someone is participating, because it has no impact on the technical value of their contributions.
I understand part of the decision-making process for making a change to the specification is sufficient “implementer” support. I understand this means two or more browser engines? I assume the implementers are the 4 companies in the steering group?
I don't really understand the current governance style.
At the time I was involved, it was informal. The editor was responsible for making sure that what they specified would in due course match what all the browser vendors implemented. Whether they did this by writing amazingly compelling specifications that the browser vendors felt obligated to implement by sheer force of technical superiority, by specifying the lowest common denominator that they could get each vendor to individually commit to, or by the political means of convincing one vendor to publicly commit by privately telling them how another vendor had privately committed if the first vendor would publicly commit, etc., was a matter for the editor.
Personally I did all of the above. Sometimes things I specified turned out to be universally disliked and I (or my successors) ended up removing them. Sometimes things I specified were just describing what the browser vendors all already implemented as a matter of fact, and in those cases there was little for them to argue about.
In our industry we may have 100s of implementers of various sizes. How does the number of implementers scale the challenges we may be faced with? Our users are also industry/companies rather than individuals. I imagine we will have to assess the governance and decision rights with that in mind.
I have no idea to be honest. The WHATWG worked in part because of the esoteric and unique situation that was the Web space at the time: a small number of companies creating products that were used by billions of people, of whom thousands were interested enough in the technical details to directly participate, but where, weirdly, very little money obviously changed hands.
(As a side note: the economies of Web browsers and Web standards are not obvious -- for example, why did Google pay me to do this work? Why did Google tell me I should ignore Google's own interests and just focus on what is technically right for the Web as a whole? The reasoning is amusing in its simplicity: if the Web gets better, then more people will browse the Web; if more people browse the Web, Google can show more ads; if Google shows more ads, more ads might get clicked on; if more ads get clicked on, Google makes more money. Nothing in this reasoning requires that the Web change specifically to help Google's direct interests. This freed me to be actually vendor neutral to an extent that few of my contemporaries truly believed, I think.)
What are the big lessons learned/areas not to overlook?
The biggest lesson I would say I learnt is that it can work. You can create an organization that is truly open, truly technically-driven, does not have really any process at all, yet creates technically-relevant high-quality documents that move an industry. You don't need a big staff, you don't need annual events, you don't need in-person meetings or voice-conference calls. You don't need to make decisions based on "consensus" or majority vote. You don't need to be pay-to-play. You can make decisions that are entirely based on technical soundness.
What would you do differently if you established WHATWG again and why?
One of the things that we did around 2007 (we started in 2004) was to agree to work with the W3C to develop HTML, instead of continuing our independent path. We maintained enough independence that we were able to mostly disengage after that effort went predictably nowhere, but it was definitely a distraction. I wish that we had had more confidence back then in our ability to just ignore the W3C. In retrospect it's more obvious to me now that the W3C really had nothing to give us. (At the time, some of us still viewed the W3C as being the logical place for this work to happen and we viewed the WHATWG as a temporary workaround until the W3C adapted to the new world, but they never really adapted. Even today, where the W3C actually redirects their Web site to the WHATWG for the specifications the WHATWG is working on, the W3C's own processes have not changed in a meaningful way to really fix the problems we saw in 2003 that led to the WHATWG's creation. They just gave up competing on some fronts.)
The other big thing that I wished we had done much earlier is establish a patent policy whereby each vendor would share their relevant patents. This is pretty common in various industries, but we did not pay it enough attention and it hurt our credibility for much longer than it should have. (In practice I believe this is mostly theatre, but in this case it's theatre that matters so we should have done it much earlier.)
One more thing I would do differently is have a much stronger code of conduct from the start, which doesn't just disallow bad behaviour but actively requires positive interactions. There's no excuse for being cranky on a mailing list. Being obtuse or just unpleasant is not necessary. We had many people over the years who would push right to the limit of what was acceptable, and I wish I had been much, much stricter, stepping in and removing participants as soon as they were even slightly obnoxious. I think we would have made much more rapid progress and grown a much bigger community much quicker if we had done that.
A colleague of mine asked me to explicitly put an LGTM on their design doc so that they could go ahead and implement it. The design doc was one I had previously reviewed and commented on, and had indicated that it seemed like a good idea, but I hadn't filled in the box saying that "my TL has said LGTM".
My answer: no. You don't need my permission.
Ask yourself: why do you want explicit permission? Is anyone asking you to get permission? What would happen if you just... did the thing?
Some people want LGTMs because that way they feel like if they make a mistake, they'll be covered. But that's flawed thinking in two ways. First of all, mistakes are fine. People make mistakes, we all make mistakes, mistakes are how we learn. If you're not making mistakes, then you're not taking enough risks to be successful. Secondly, even if making a mistake was bad, getting some people to sign off on something doesn't mean they are taking any more responsibility than if they didn't. You'd still be responsible for your decisions even if you got permission, and your leadership would still be responsible for your decisions if you didn't get permission.
I have a friend who used to work in Google Search on a tool called "Janitor". It was a tool that would garbage collect the results of processing our Web indexing — there are a lot of temporary files created in indexing the Web, and Janitor would go around deleting them when they weren't needed any more. He literally deleted petabytes of data regularly. One day, Larry Page was visiting his team and asked about this project. Larry asked, "how many files have you accidentally deleted?". My friend very proudly answered "I have never deleted a file that should not have been deleted! I have a 100% success record!".
Larry apparently responded "I think you should take more risks".
Some people want LGTMs because they feel that without them they aren't entitled to do their job. But... it's your job. That's why you were hired. You don't need additional permission to do your job. Your biweekly paycheck is all the permission you need.
Some people want LGTMs because they are not confident enough in their idea to execute it. Having leaders on the team put a stamp on their design doc gives them the confidence that the idea was good enough to execute. The thing is though, we won't know if it was a good idea or not until we try it. These stamps aren't saying "it's a good idea", they're saying "it's not an idea so terrible that I can predict its failure already based on my past experience"... and your leaders and team mates will tell you that something is a bad idea if they see it. That's why you ask for review. If they didn't arch their eyebrows and grimace when you explained your idea, then it's probably fine, and you don't need any more permission.
In conclusion: ask for forgiveness, not permission. Get reviews of your design docs, by all means. But don't wait for a stamp of approval to implement them.
In Flutter, we occasionally get the question "how do you set a negative margin?". This usually comes after people find the Container widget's "margin" property, try to set it to a negative value, and find that it asserts with a message saying padding must be positive. It turns out that Container's idea of margin and padding is actually just to put a Padding widget on the outside of its DecoratedBox for the margin, and a Padding widget on the inside for the padding. Many complicated-looking widgets in Flutter tend to decompose in this way.
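Roughly speaking, the decomposition looks like the following sketch. This is not the real Container implementation (which handles many more properties: constraints, alignment, color, transform, and so on); it just shows the nesting order, and `containerLike` is an invented name:

```dart
import 'package:flutter/widgets.dart';

// A rough sketch of how Container composes margin and padding:
// padding goes inside the DecoratedBox, margin goes outside it,
// and both are just Padding widgets.
Widget containerLike({
  EdgeInsetsGeometry? margin,
  EdgeInsetsGeometry? padding,
  Decoration? decoration,
  Widget? child,
}) {
  Widget? current = child;
  if (padding != null) {
    current = Padding(padding: padding, child: current); // inner padding
  }
  if (decoration != null) {
    current = DecoratedBox(decoration: decoration, child: current);
  }
  if (margin != null) {
    current = Padding(padding: margin, child: current); // outer "margin"
  }
  return current ?? const SizedBox();
}
```

Since both "margin" and "padding" end up as EdgeInsets on a Padding widget, and EdgeInsets asserts non-negative values, there is simply no place in this decomposition for a negative number to go.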
So how would you get a negative-margin-like effect in Flutter?
To answer this question you first have to define what "negative margins", or really "margins" in general, really are. In CSS, margins have various meanings in the various layout models; most commonly, they are one of several values that contribute to computing the offset that the block layout model uses to place subsequent children. A negative total margin in this case merely means the next child is placed above the bottom of the previous child instead of after it.
In Flutter, as in CSS, there are several layout models; however, there is currently no widget that is equivalent to the CSS block layout model (which supports margin collapsing, negative margins, skipping floats, etc). Such a layout model could certainly be implemented, it just hasn't been implemented yet, at least not in the framework itself.
To implement such a layout model, you would create a RenderBox descendant similar to RenderFlex or RenderListBody, probably providing a way to set the margins of each child using a ParentDataWidget in the same way that Flex children can have their `flex` configured using the Expanded widget.
Probably the most complicated part of designing a new layout model like this would be deciding how to handle overflow or underflow, when the children are too big or too small to fit the constraints passed to this new layout render object. The RenderFlex render object has a way to distribute the space if the children underflow, and considers it an error if they overflow (in debug mode, this is shown by a yellow-and-black striped warning area and a message logged to the console); the RenderListBody render object on the other hand takes the view that the constraints must be unbounded in the main axis, which means you can basically only use this layout model inside a list (hence the name).
If writing a new layout model is not attractive, you could use one of the existing layout widgets that allow overlapping children. Stack is the obvious choice, where you set the explicit positions of each child and they can overlap arbitrarily (this is vaguely similar to the CSS absolute position layout model). Another option is the CustomMultiChildLayout widget, which lets you lay out and position each child in turn. With this, you could position each child one after the other, simulating negative margins by setting the position of the subsequent child to a value that's derived from the size and position of the previous child, but such that the subsequent child's top is above the previous child's bottom.
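As a sketch of the CustomMultiChildLayout approach, here is a delegate that stacks children vertically with a fixed overlap, so each child's top sits above the previous child's bottom. The names `OverlapColumnDelegate` and `overlap` are invented for this example; `layoutChild` and `positionChild` are the real MultiChildLayoutDelegate methods:

```dart
import 'package:flutter/widgets.dart';

// A delegate that lays children out in a column where each child
// overlaps the bottom of the previous one by `overlap` pixels --
// i.e. a fixed "margin" of -overlap between consecutive children.
class OverlapColumnDelegate extends MultiChildLayoutDelegate {
  OverlapColumnDelegate({required this.childCount, this.overlap = 16.0});

  final int childCount;
  final double overlap; // how far each child intrudes on the previous one

  @override
  void performLayout(Size size) {
    double y = 0.0;
    for (int i = 0; i < childCount; i += 1) {
      // Lay out child i, letting it be at most as wide as we are.
      final Size childSize =
          layoutChild(i, BoxConstraints(maxWidth: size.width));
      positionChild(i, Offset(0.0, y));
      // Advance by the child's height minus the overlap.
      y += childSize.height - overlap;
    }
  }

  @override
  bool shouldRelayout(OverlapColumnDelegate oldDelegate) =>
      oldDelegate.childCount != childCount || oldDelegate.overlap != overlap;
}
```

To use it, you would wrap each child in a LayoutId whose id matches the delegate's indices (0, 1, 2, ...) and pass them as the children of a CustomMultiChildLayout configured with this delegate.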
If there's interest in a block-like layout model, we could certainly implement it (please file a bug and describe the model you'd like implemented, or, implement it yourself and send a pull request for review). So far, though, we've not found that it has been that useful in practice, at least not useful enough to justify the complexity.
And that's why the answer we usually give to the question of "how do you do negative margins" ends up being a higher-level question in return: what effect are you really trying to achieve?