
Hixie's Natural Log

2016-05-20 21:23 UTC Flutter at Google I/O

This week was Google I/O, and as part of that we released a codelab! If you want to play with Flutter but have found our documentation a bit daunting so far, now's the time to try a tutorial with lots of guardrails.

https://codelabs.developers.google.com/codelabs/flutter/

Also, Adam recorded a tech talk following on from mine. He focuses on the rendering pipeline, talking about how we perform layout, how we do painting, and how we composite the final scene. In particular he goes into more detail than I did about why a lot of that can go really fast in Flutter while still having a really flexible layout system.

https://www.youtube.com/watch?v=UUfXWzp0-DU

Down in the trenches, today I finished implementing a big set of changes to our test framework. You can now use "flutter run" to actually watch your tests run. This is going to be hugely helpful in developing tests. Before, you had to write them essentially blind, because tests run in an alternative universe where the clock runs very fast and where we don't bother actually computing any graphics, which made it really hard to visualise exactly what was going on. Now, as you write a test, you can run it right on the device, in real time.
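To give a flavour of what that looks like, here's a minimal widget test of the kind you'd write with flutter_test (the little counter app and the names in it are made up for illustration). Previously you'd run something like this headless with "flutter test"; now you can point "flutter run" at the same file and watch it drive the UI on a device:

    import 'package:flutter/material.dart';
    import 'package:flutter_test/flutter_test.dart';

    void main() {
      testWidgets('tapping the button updates the label', (WidgetTester tester) async {
        // Build a trivial counter UI directly in the test.
        int counter = 0;
        await tester.pumpWidget(
          MaterialApp(
            home: StatefulBuilder(
              builder: (BuildContext context, StateSetter setState) {
                return Scaffold(
                  body: Center(child: Text('count: $counter')),
                  floatingActionButton: FloatingActionButton(
                    onPressed: () => setState(() => counter++),
                    child: const Icon(Icons.add),
                  ),
                );
              },
            ),
          ),
        );

        // Drive the UI the same way a user would, and check the result.
        expect(find.text('count: 0'), findsOneWidget);
        await tester.tap(find.byType(FloatingActionButton));
        await tester.pump();
        expect(find.text('count: 1'), findsOneWidget);
      });
    }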

The last major bug I had to fix with the test framework changes was in our handling of actual taps on the device during a test. One of the things I sometimes have trouble with is figuring out how to write a finder that hits a particular widget. So I figured I'd make it so that when you tap the screen while running one of these live tests, it dumps to the console a list of possible finders you could use to get to the widgets at the point you tapped.
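For the curious, these are the kinds of finders I mean; the names below are invented for the sake of the example, and the console dump format is whatever the framework decides to print:

    import 'package:flutter/material.dart';
    import 'package:flutter_test/flutter_test.dart';

    // Hypothetical helper, just to show the options: any of these finders
    // could identify the same button, depending on how the tree is built.
    Future<void> tapTheSaveButton(WidgetTester tester) async {
      await tester.tap(find.text('Save'));                       // by visible text
      // await tester.tap(find.byType(FloatingActionButton));    // by widget type
      // await tester.tap(find.byKey(const Key('save-button'))); // by Key
      // await tester.tap(find.byTooltip('Save'));               // by tooltip
    }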

Now, writing test harnesses is always a bit of a confusing matter, because the whole point of a test framework is that it spends half its time lying to the rest of the system about what's going on. In particular, when tests are being run live, the Flutter test framework has to lie to the Flutter rendering framework about the size of the screen, because we have the convention that tests run in an 800x600 viewport with a device pixel ratio of 1.0, but the device is unlikely to be exactly 800x600@1.0. So to make it render right, I have the test framework subclass the binding, and then insert a transform when painting.
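Conceptually the transform is just a scale-and-centre. Something along these lines, sketched here with made-up names rather than the actual framework code, maps the 800x600 test coordinate space onto whatever the device really is:

    import 'package:vector_math/vector_math_64.dart';

    // Build a Matrix4 that maps the conventional 800x600 test coordinate
    // space onto the actual device, scaling to fit and centring the result.
    Matrix4 testToDeviceTransform(double deviceWidth, double deviceHeight) {
      const double testWidth = 800.0;
      const double testHeight = 600.0;
      // Scale uniformly so that the 800x600 test surface fits inside the device.
      final double scale = (deviceWidth / testWidth) < (deviceHeight / testHeight)
          ? deviceWidth / testWidth
          : deviceHeight / testHeight;
      // Centre the scaled test surface on the device, then apply the scale.
      return Matrix4.translationValues(
            (deviceWidth - testWidth * scale) / 2.0,
            (deviceHeight - testHeight * scale) / 2.0,
            0.0,
          )..scale(scale, scale, 1.0);
    }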

The problem now becomes how to handle hit testing. The fake events from the tests are in this 800x600 coordinate space, but the real events from the device are in the device's coordinate space!

To make this work, I tried various approaches. At one point, I tried to duplicate the rendering binding's hit testing logic, so I copied and pasted its implementation into the test binding. Then later I realised that wouldn't work, and instead I changed the event dispatch logic so that it would convert the test events into the "real" coordinate space, dispatch those, and then have the binding convert them back to the test coordinate space. That way, all the events (the fake ones from the test and the real ones from the device) would go through a single codepath, and I just needed to sort them out after hit testing to figure out what to do with each. As part of this, I created a Matrix4 with the transformation needed for converting "test" coordinate space units into "real" coordinate space units, and a method that would take that matrix and invert it for the opposite conversion (which is what I usually needed for hit testing).
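In sketch form (with a made-up helper name, not the real API), the conversion itself is just pushing a position through the right matrix: fake test events go through the test-to-device matrix on the way in, and real device events go through its inverse before hit testing the 800x600 scene:

    import 'dart:ui' show Offset;
    import 'package:vector_math/vector_math_64.dart';

    // Transform an event position from one coordinate space to the other.
    Offset transformPoint(Matrix4 transform, Offset position) {
      final Vector3 transformed =
          transform.transform3(Vector3(position.dx, position.dy, 0.0));
      return Offset(transformed.x, transformed.y);
    }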

This all worked great! All the tests ran fine with their fake events, and if I tapped the screen I got the right coordinate and all was good.

Until I tapped the screen a second time. The second time, the coordinates were off. But the third time they were right! Then the fourth time they were wrong again.

There followed one of the most confusing couple of hours of debugging I've done in a while. I eventually figured out that my method for inverting the matrix was actually inverting the "canonical" copy of the matrix in place, toggling it back and forth, which meant that every other hit test was using the wrong coordinates. But when I fixed that, suddenly all the tests broke! And they broke quite badly, with assertions firing all over the place.
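In code, the trap looks something like this (a reconstruction with made-up names, not the actual patch): Matrix4's invert() mutates the matrix in place, so calling it on the one canonical test-to-device matrix flips its state on every call, and every other hit test ends up using the wrong transform.

    import 'package:vector_math/vector_math_64.dart';

    Matrix4 deviceToTest(Matrix4 canonicalTestToDevice) {
      // Buggy: canonicalTestToDevice.invert();  // inverts the shared matrix in place!
      // Fixed: invert a copy, leaving the canonical matrix untouched.
      return Matrix4.inverted(canonicalTestToDevice);
    }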

Finally I figured out why. The hit test function that I'd copied and pasted from the rendering binding into the test binding carefully did all the hit testing, then invoked the superclass's hit test method, which would normally have been the gesture binding's but in this case was the rendering library's binding's, and that did its own round of hit testing all over again. That meant that whenever a test sent any events, the framework first inverted the matrix, used that to hit-test the widgets, then inverted the matrix again, hit-tested all the widgets again (getting a completely different answer), and then sent the events to both places. Since the wrong place didn't do anything (it was likely "off-screen"), the effect was that the second round of hit testing did nothing except put the matrix back into its correct configuration. But once I fixed the inversion bug, every tap in every test resulted in two taps in the same place, confusing the tests mightily.

A weird case of two wrongs making a right!