The joy of integrated logging and log viewing with fancy logs

The deuxdrop messaging experiment’s current incarnation exists as an (under development) Jetpack that runs in Firefox.  I am still working on getting functionality driven by the UI rather than just by headless unit tests.  While Firebug has been a great help, another boon has been the logging framework and log viewing framework developed for the unit tests.  (Previous posts here and here).  Since the log is made up of structured JSON data, all the log processing logic is written in JS, and the log viewing UI is HTML/CSS/JS, it is trivial to embed the log viewer into the Jetpack itself.

If you type about:loggest in the URL bar (or better yet, create a bookmark on the bookmark bar and click that), the log viewer is displayed.  Deuxdrop’s client daemon logic (which runs in a hidden frame) uses a log reaper that runs at 1-second intervals.  If anything happens log-wise during that second, it is packaged and added to a circular-buffer-style list of the last 60 seconds during which anything happened.  When the log viewer starts up, it asks for and receives that data.  The result looks like the above small screenshot.  If no errors were logged during a time interval, its entry is automatically collapsed.
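
For concreteness, here is a minimal sketch of what that reaping loop could look like; the names (gatherNewEntries, LOG_WINDOW_SECS) are illustrative rather than the actual deuxdrop internals:

  // Illustrative sketch of a 1-second log reaper feeding a 60-bucket
  // circular-buffer-style list; gatherNewEntries() stands in for whatever
  // collects the structured JSON log entries produced since the last reap.
  var LOG_WINDOW_SECS = 60;
  var reapedBuckets = [];   // newest-last list of { timestamp, entries } buckets

  function reapLogs() {
    var entries = gatherNewEntries();
    if (!entries.length)
      return; // only seconds where something happened get a bucket
    reapedBuckets.push({ timestamp: Date.now(), entries: entries });
    if (reapedBuckets.length > LOG_WINDOW_SECS)
      reapedBuckets.shift(); // keep only the 60 most recent "interesting" seconds
  }

  setInterval(reapLogs, 1000);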

Let us experience the joy of integrated logging by looking at a real problem I recently encountered.  In the development UI (accessible via about:dddev), I brought up a list of my contacts after starting a conversation.  It looks like this right now:

The problem is that I, the user, am “Andrew Sutherland” and should not be in my own contact list.  Also, the display should not be claiming there are an undefined number of unread messages from me, but that is likely fallout from the system intentionally not maintaining such information about me, the user.

I want to quickly figure out why this is happening, so I bring up about:loggest and open the most recent time entry to see what happened when this query was issued and filled:

I can see that the query ended up issuing database requests for both Walternate (purple) and myself (green), strongly suggesting that the database index being queried on names me.

I wonder if the conversation processing logic was the code that did this… let’s check by going to the time slice where the conversation was processed, expanding it, and only screenshotting some of it:

Yes, the conversation logic did this.  It’s generating index values in the peepData table for the idxPeepAny and idxPeepRecip indices.  But I thought my unit tests covered this?  Nope.  It turns out that although we test that a peep query returns the right thing both cold from the database and inductively via notifications as contact relationships are established, we don’t issue a new query after creating a conversation.  Furthermore, we only issued queries against the alphabetical index, not against idxPeepAny.  So we rectify that by augmenting the unit test:

  // - make sure that the conversation addition did not screw up our peeps list
  T.group('check peeps list after conversation join');
  lqFinalAllPeeps = moda_a.do_queryPeeps("allPeepsFinal:any", {by: 'any'});

And the test indeed now fails:

The relevant bit is in the lower right, which I blow up here with the “unexpected event” obj popup displayed above it and the “failed expectation” obj popup below it.  The postAnno stuff indicates what is new in the query result.  Because it’s a freshly issued query and this is the first set of results, everything is new.  It’s probably worth noting that these errors would usually show up as a single “mismatched” error instead of an unexpected/failed pair in our tests, but the specific logger was operating in unordered set mode because we don’t care about the exact order in which the different query notifications occur; we just care that they do occur.

(The structure is intended to later support a nicer visualization where we show only the contents of the “state” attribute, use preAnno to annotate a representation of the object’s most recent prior state (from a previous log entry), and use postAnno to annotate the now-current “state” representation.  For postAnno, a value of 1 represents an addition and a value of 0 represents a change or an event firing on the object.)
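
For concreteness, one query-update log entry along those lines might look roughly like the following; the field values are invented for illustration:

  // Hypothetical shape of a single query-update entry; the field names follow
  // the prose above (state/preAnno/postAnno) but the values are made up.
  var exampleEntry = {
    state: ["Andrew Sutherland", "Walternate"], // the now-current query results
    preAnno: {},                                // annotations against the prior state
    postAnno: {
      "Andrew Sutherland": 1,                   // 1 = newly added item
      "Walternate": 1                           // everything is new on a fresh query
    }
  };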

A potentially even more exciting bit of integrated logging is that about:loggest-server opens a tab that retrieves its contents from the server.  When run with the --loggest-web-debug flag, the server loads a module that cranks up the logging, does the same 1-second interval log reaping magic, and exposes the result for HTTP retrieval.  While this specific configuration with the high level of detailed logging is only appropriate for a developer-machine test server, it is incredibly valuable to be able to just pop open another tab and see what the server just got up to.
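
Since the server side runs on node.js, exposing the reaped buckets over HTTP can be as simple as the following sketch; the function name and wiring are made up for illustration and are not the actual deuxdrop server code:

  // Illustrative only: serve the same circular buffer of reaped log buckets
  // as JSON when the web-debug mode is enabled.
  var http = require('http');

  function startLogDebugServer(reapedBuckets, port) {
    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(reapedBuckets));
    }).listen(port);
  }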

In any event, I leave it as an exercise to the reader to assume that I will take care of the bug now that it’s extremely clear what the problem is.  Well, extremely clear if I had taken a bigger screenshot of the conversation creation log.  Above the region captured is a header that indicates the actions are being triggered by the ‘convJoin’ task and the entry (which is visible) indicates the update_conv_data function likely kicked off the database activity.

PS: All the gibberish-looking characters in the screenshots are crypto keys or other binary data that lack aliases mapping them to known objects.  Examples of successfully mapped aliases are the colored blocks.  In the case of the conversation creation gibberish, we are seeing the conversation id.  Those aliases are generated as a separate pass by the log reaper: it walks the set of existing queries and uses helper functions to map the items currently exposed by the queries to human names, because that is quick and easy and is O(what’s being looked at), not O(size of the database).  In the case of the conversation, there was no query against the conversation and I also have not written the helper function yet, which is why it did not get aliased.  Unit tests don’t have this problem because we create aliases for everything.
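
The alias pass itself amounts to a walk over the live queries, along the lines of this sketch (the function and field names are invented for illustration):

  // Illustrative sketch of the alias-generation pass: walk the live queries
  // and let per-namespace helpers map the items they expose to human names.
  // Cost stays O(what's being looked at), not O(size of the database).
  function buildAliasMap(liveQueries, aliasHelpers) {
    var aliases = {};
    liveQueries.forEach(function (query) {
      var helper = aliasHelpers[query.namespace]; // e.g. 'peeps', 'conversations'
      if (!helper)
        return; // no helper yet => the raw crypto key shows up as gibberish
      query.items.forEach(function (item) {
        aliases[helper.keyOf(item)] = helper.humanNameOf(item);
      });
    });
    return aliases; // shipped to the viewer alongside the reaped log entries
  }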

new adventures in rich (execution) logs for debugging and program understanding

Understanding what is going on inside software can be very hard, even for the developers most familiar with the software.  During my time working on Thunderbird I used a variety of techniques to try and peer inside: printf/dump/console.log, debuggers, execution analysis (dtrace, chronicle recorder, with object diffs, on timelines), logging (log4j style, with timelines, with rich data, extra instrumentation and custom presentations, prettier and hooked up to dump on test failures), improving platform error reporting, gdb extensions, control-flow analysis of SQL queries (vanilla, augmented with systemtap perf probes), performance analysis (VProbes with custom UI, systemtap, crammed into the SpeedTracer UI, custom UI with the async work), chewing existing log facilities’ output (TB IMAP, gecko layout), and asynchronous operation causality reconstruction (systemtap, JS promises).

Logging with rich, structured data easily provided the most bang-for-the-buck because it:

  1. Provided the benefit of baked-in human meaning with some degree of semantic hierarchy.
  2. Was less likely to make wrong assumptions about what data was relevant.  How many times have you had to go back to change what a printf is logging?
  3. Once sufficiently automated, required no activation energy, no extra steps, and let everyone see the results.
However, it still has numerous downsides/gotchas:
  1. Potential performance impact, especially with rich, structured data.
  2. If people don’t write log statements, you don’t get log entries.
  3. If an error propagates and no one catches it or otherwise says anything about it, your log trace stops dead.
  4. Lack of context can hide causation and leave you filtering through tons of entries trying to reconstruct what happened from shadows on the cave wall.

 

As a result, when I recently started on a new project (implemented in JS), I tried to make sure to bake logging into the system from the ground up:
  • The core classes, abstractions, and APIs generate log entries automatically so developers don’t need to fill their code with boilerplate.
  • Loggers are built around object ownership hierarchies/life-cycles to provide context and the ability to filter.  This is in contrast to log4j style logging which is usually organized around code module hierarchies, noting that log4j does provide nested diagnostic contexts.
  • The test framework is designed around writing tests in terms of expectations around the loggers.  This helps ensure interesting things get logged.  It also improves the quality of the tests by making it easier to ensure the tests are really doing what you think they are doing.
  • Logger definitions explicitly name the log events they will generate and their semantic type, some of which have special handling.  The currently supported types are: state changes, latched states, events, asynchronous jobs (with separate begin and end entries), calls (which wrap a function call, catching exceptions), and errors.  This allows specialized processing and better automated analysis without having to try and reverse engineer the meaning using regular expressions.  (A sketch of what such a definition might look like follows this list.)
  • Performance is addressed by instantiating different logger classes based on needs.  For modules not under test (or without logging desired), everything turns into no-ops except for events and errors which are counted for reporting to a time-series database for system health/performance/etc analysis.  The decision making process happens at runtime and is able to see the parent logger, so heavy-weight logging can be used on a statistical sampling basis or only for specific users experiencing problems/etc.
  • Loggers can give themselves complex semantic names that can be used to reconstruct relationships between loggers when the ownership hierarchy is not available or not appropriate.  For example, we can link both sides of the connection between a client and a server by having the loggers make sure to name themselves and the other side.
  • Simple wrapper helpers exist that make it easy to wrap a function so that a custom log entry is generated and it “holds” the call in limbo from whence it can later be “released”.  This allows unit tests to break complicated behaviour into discrete steps that humans can understand.  Far better to look at one thing at a time than eight things all jumbled together (or filtered back down to one, potentially hiding important details).
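
As promised above, here is a sketch of the shape such a logger definition might take; it illustrates the idea rather than quoting the literal loggest API:

  // Illustrative only: a logger definition that names its entries and gives
  // each a semantic type so the processing side never has to guess.
  var clientConnLoggerDef = {
    stateVars:  { connState: true },                   // state changes
    latchState: { appVersion: true },                  // latched states
    events:     { handleMsg: { type: true } },         // plain events
    asyncJobs:  { reconnect: { attempt: true } },      // paired begin/end entries
    calls:      { processMessage: { msgType: true } }, // wraps the call, catches exceptions
    errors:     { connectionLost: { why: true } }
  };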

 

In any event, as one might surmise from the screenshots, this is more than a dream, it’s a pastel colored reality.

What are the screenshots showing?

  1. The logger hierarchy.  The colored bits are “named things”.  The test framework has the concept of things, actors, and loggers.  Each actor corresponds to exactly one logger and is the object on which tests specify their expectations.  Actors can be owned by other actors, resulting in a hierarchy we call a family.  Each family gets tagged with a distinct identifier that allows us to associate a color with them.  Things provide a human name to a (hopefully) unique string.  Things can be owned by actors and get tagged with the family name and so can be colorized.  In the logger hierarchy, the stuff to the right of the colon is the semantic name of the logger.  So “clientConn: A client to X longtermat endpoint blah” is really (under the hood) an array of strings where “A client” is actually the crypto key so named.  There are two colors because the connection is naming both its own identifying crypto key and the server’s crypto key it is trying to talk to.
  2. An example of the display of log entries.  Each logger gets its own column to display its entries in.  The header shows the name of the logger and is colored based on that logger’s family.  (The colors were not shown in the logger hierarchy because I think it would end up too busy.)  Each entry is timestamped with the number of milliseconds since the start of the test.  The event names are arbitrarily in blue to help delineate them from other semantic classes; for example, “handleMsg” is a call-type.  The “obj” bits with the dotted underline indicate something we can click on to see more of.  The events being shown are part of a CurveCP-style connection establishment.
  3. Similar to the previous screenshot, but here you can see named thing resolution in play with arguments that are strings.
  4. And if we click on one of those “obj” bits, we get a nice nested table display of the object.  As you can see from the pretty colors, named thing resolution is also in play.  You can also see crypto keys I did not name, which accordingly look like gibberish.  It is probably worth noting that some developer participation is required: toJSON() implementations need to be put on all the complex objects exposed to the logger so we don’t try to serialize huge swathes of the object graph.  While this is a “don’t break the system” requirement, it also makes it easy to expose the useful bits of information for debugging.  (A sketch follows below.)
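
Such a toJSON() implementation might look roughly like this; the MessageConnection class and its fields are hypothetical:

  // Illustrative toJSON(): expose only the fields useful for debugging instead
  // of letting the logger serialize the entire object graph this object links to.
  MessageConnection.prototype.toJSON = function () {
    return {
      type: 'MessageConnection',
      peerKey: this.peerPublicKey,  // gets aliased to a colored named thing
      state: this.connState,
      pendingMessages: this.pendingQueue.length
    };
  };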

If you would like to see the actual log display UI for yourself on the log from the screenshots (and can stomach it fetching 2+ MiB of compressed JSON), you can see it at https://clicky.visophyte.org/examples/arbpl-loggest/20110712/.  While the logs normally live on the ArbitraryPushlog (ArbPL) server, links to it are currently not stable because its all-in-one hbase datastore keeps self-destructing.  I baked ArbPL into a standalone form that lives at that URL and so should ideally not break so much.  Fingers-crossed.

unified JS/C++ backtraces, colorized backtraces, colorized source listing for gdb 7.3/archer-trunk updated

type "mbt", see this.

gdb has had integrated python support for some time and it is truly awesome.  The innards have changed here and there, so my previous efforts to show colorized/fancy backtraces, show colorized source listings, and provide unified JS/C++ mozilla backtraces (with colors still) have experienced some bit-rot over time.

type "cbt", see this

I have de-bitrotted the changes, hard.  Hard in this case means that the changes depend on gdb 7.3 which only exists in the future or in the archer repo you can find on this wiki page, or maybe in gdb upstream but I already had the archer repo checked out…

type "sl", see this.

In any event, if you like gdb and work with mozilla, you will not only want these things, but also Jim Blandy’s mozilla-archer repo which has magic SpiderMonkey JS helpers (one of which mozbt depends on; those JSStrings are crazy, yo!).  Coincidentally, he also depends on the future, and his README tells you how to check out the archer branch in greater detail.

You can get your own copy of these very exciting gdb python things from my github pythongdb-gaudy repo.

The important notes for the unified backtrace are:

  • It doesn’t work with the tracing JIT; the trace JIT doesn’t produce the required frames during its normal operation.
  • It might work with the method JIT if gdb could unwind method JIT frames, but it doesn’t seem to do that yet.  (Or I didn’t build with the right flag to tell it to use a frame pointer/etc., or I am using the wrong gdb branch/etc.)  Once the Method JIT starts telling gdb about its generated code with debug symbols and the performance issues in gdb’s JIT interface are dealt with, you can just use the colorizing backtrace because no special logic is required to interleave frames at that point and Jim Blandy’s magic JS pretty printers should ‘just work’.
  • Don’t forget that if you just want a pure JS backtrace (and with all kinds of fanciness) and have a debug build, you can just do “call DumpJSStack()” in gdb and it will call back into XPConnect and invoke the C++ code that authoritatively knows how to walk this stuff.
  • If you’re cool and using generators, then you’re stuck with the interpreter, so it will totally work.

teaser: Rich contextual information for Thunderbird mozmill failures

Sometimes Thunderbird mozmill unit tests fail.  When they do, it’s frequently a mystery.  My logsploder work helped reduce the mystery for my local mozmill test runs, but did nothing for tinderbox runs or developers without tool fever.  Thanks to recent renewed activity on the Thunderbird front-end, this has become more of a problem and so it was time for another round of amortized tool building towards a platonic ideal.

The exciting pipeline that leads to screenshots like you see in this post:

  • Thunderbird Mozmill tests run with the testing framework set to log semi-rich object representations to in-memory per-test buckets.
  • In the event of a test failure, the in-Thunderbird test framework:
    • Gathers information about the state of the system now that we know there is a failure and emits it.  This takes the form of canvas-based screenshots (using chrome privileges) of all open windows in the application, plus the currently focused element in the window.
    • Emits (up to) the last 10 seconds of log events from the previous test.
    • Emits all of the log events from the current test.
  • The python test driver receives the emitted data and dumps it to stdout in a series of JSON blobs that are surrounded by magical annotations.
  • A node.js daemon doing the database-based tinderboxpushlog thing (like my previous Jetpack/CouchDB work that found CouchDB to not be a great thing to directly expose to users and died, but now with node and hbase) processes the tinderbox logs for the delicious JSON blobs.
    • It also processes xpcshell runs and creates an MD5 hash that attempts to characterize the specific fingerprint of the run.  Namely, xpcshell emits lines prefixed with “TEST-” that have a regular form describing when the pending test count changes or when specific comparison operations or failures occur.  The assumption is that tests will not comparison-check values that are effectively random, that the comparisons will tend to be deterministic (even if there are random orderings in the test) or notable when not deterministic, and thus that the trace derived from this filtering and hashing will be sufficiently consistent that similar failures should result in identical hashes.  (A sketch of the filtering and hashing follows the list.)
    • Nothing is done with mochitests because: Thunderbird does not have them, they don’t appear to emit any context about their failures, and as far as I can tell, frequently the source of a mochitest failure is the fault of an earlier test that claimed it had completed but still had some kind of ripples happening that broke a later test.
  • A wmsy-based UI does the UI thing.
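
As mentioned above, the fingerprinting boils down to filtering and hashing; here is a sketch of the idea in node.js (the noise-stripping regex is illustrative, not the actual implementation):

  // Illustrative sketch of the failure fingerprint: keep only the "TEST-"
  // lines, strip run-specific noise, and MD5 what remains so that similar
  // failures hash identically.
  var crypto = require('crypto');

  function fingerprintXpcshellLog(rawLog) {
    var salient = rawLog.split('\n')
      .filter(function (line) { return /^TEST-/.test(line); })
      // drop values that vary per run but not per failure (illustrative)
      .map(function (line) { return line.replace(/\d+ms/g, ''); });
    return crypto.createHash('md5').update(salient.join('\n')).digest('hex');
  }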

    The particular failure shown here is an interesting one where the exception is telling us that a popup we expected to open never opened.  But when we look at the events from the log, we can see that what happened is the popup opened and then immediately closed itself.  Given that this happened (locally) on linux, this made me suspect that the window manager didn’t let the popup acquire focus and perform a capture.  It turns out that I forgot to install the xfwm4 window manager on my new machine which my xvnc session’s xstartup script was trying to run in order to get a window manager that plays nicely with mozmill and our focus needs.  (Many window managers have configurable focus protection that converts a window trying to grab focus into an attention-requested notification.)

    This is a teaser because it will probably be a few more days before the required patch lands on comm-central, before I use RequireJS’ fancy new optimizer to make the client load process more efficient, and before I am happy that the server daemons can keep going for long stretches of time without messing up.  The server and client code is available on github, and the comm-central patch from hg.

why so slow, pushlog?

    I am doing something where I need to talk to the Mozilla hg pushlog.  I noticed things were running disturbingly slow, so I figured I’d look into it.  I’m using node.js, so htracr (a node.js HTTP transaction visualizer built on libpcap via node_pcap) seemed like the easiest and most fun way to figure out what is going on.  (In a web browser I would just use the built-in timeline.)

    slooooooooooow...

    The short request is a comm-central pushlog request by date.  The long request is a mozilla-central pushlog request by date.  mozilla-central has more pushes, suggesting that the query is either doing something sketchy like a table scan or there is lock contention.  Quick investigation showed no successful pushes in the given timeframe, eliminating lock contention from the likely list of causes.  (The implementation uses SQLite, which, when not running with the Write-Ahead-Log enabled, will only experience contention when the reads are occurring from separate connections.)

    This suggests the query is doing a table scan.  This tends to be a reasonably straightforward problem.  But since I have a SQLite opcode visualizer in my toolkit (now on github!) that needs plugs periodically to keep it on the tip of everyone’s tongue, I used that. Plug! Plug! Plug!

    If you care about these kind of things, click on the small image on the left and look for the red text.  The giant warning flag is the “Rewind” on the changesets table and the control flow that performs a “Next”  that appears to be part of a loop case right off the bat.  (Note that this is using SQLite 3.6.23.1 since I assume that’s what hg.mozilla.org is using.)  Summary: we perform one table scan of the changesets table for every push in the date range.

    The “bad” schema is:

    CREATE TABLE changesets (pushid INTEGER, rev INTEGER, node text);
    CREATE TABLE pushlog (id INTEGER PRIMARY KEY AUTOINCREMENT, user TEXT, date INTEGER);
    CREATE UNIQUE INDEX changeset_node ON changesets (node);
    CREATE UNIQUE INDEX changeset_rev ON changesets (rev);
    CREATE INDEX pushlog_date ON pushlog (date);
    CREATE INDEX pushlog_user ON pushlog (user)

    The query is:

    SELECT id, user, date, node from pushlog LEFT JOIN changesets ON id = pushid WHERE date > 1298306930 AND date < 1298306955 ORDER BY id DESC, rev DESC;

    The query plan via EXPLAIN QUERY PLAN from SQLite 3.6.23.1 on the “bad” schema is:

    0|0|TABLE pushlog WITH INDEX pushlog_date
    1|1|TABLE changesets

    This is obvious if you know what you’re looking for; unfortunately the indication of the problem is the lack of “WITH INDEX” or the like rather than text that calls out the problem. The much nicer SQLite 3.7.4 EXPLAIN QUERY PLAN (which has great documentation!) would make the problem much more obvious by saying “SCAN TABLE” if not for the fact that it ends up creating an automatic index:

    0|0|0|SEARCH TABLE pushlog USING INDEX pushlog_date (date>? AND date<?) (~110000 rows)
    0|1|1|SEARCH TABLE changesets USING AUTOMATIC COVERING INDEX (pushid=?) (~7 rows)
    0|0|0|USE TEMP B-TREE FOR ORDER BY

    Although one table scan accompanied by a b-tree building is probably a better idea than N table scans, if you read this and think “SQLite 3.7.x just saved someone’s bacon”, you would be wrong because it still throws away the index at the end.  The server is still experiencing an initial vicious kick in the pants every time the statement is run; there are just no lighter follow-up kicks (lighter because of the OS disk cache, you see…).

    In any event, the problem is that there is no index on the pushid column in the changesets table.  (And pushid can’t be the primary key because it is not unique.  There’s no benefit to using a composite key since SQLite will still create a simple rowid key once we use a composite, so an index is the way to go.)

    Once we fix this, our graph looks like the second one at left (in SQLite 3.6.23.1).  Again, looking for red text and now also orange text, the key things are that we no longer have a “Rewind” or “Next”, and instead have a SeekGe on the index and a Seek on the table using the row id the index told us about.  (We are not using a covering index because we expect extremely high locality in the table based on rowid because the insertions happen consecutively in a single transaction.)

    The 3.6.23.1 query plan now looks like:

    0|0|TABLE pushlog WITH INDEX pushlog_date
    1|1|TABLE changesets WITH INDEX changeset_pushid

    Much nicer! What does the lovely 3.7.4 say?:

    0|0|0|SEARCH TABLE pushlog USING INDEX pushlog_date (date>? AND date<?) (~110000 rows)
    0|1|1|SEARCH TABLE changesets USING INDEX changeset_pushid (pushid=?) (~10 rows)
    0|0|0|USE TEMP B-TREE FOR ORDER BY

    Awww yeah. Anywho, here’s the schema one wants to use:

    CREATE TABLE changesets (pushid INTEGER, rev INTEGER, node text);
    CREATE TABLE pushlog (id INTEGER PRIMARY KEY AUTOINCREMENT, user TEXT, date INTEGER);
    CREATE INDEX changeset_pushid ON changesets (pushid);
    CREATE UNIQUE INDEX changeset_node ON changesets (node);
    CREATE UNIQUE INDEX changeset_rev ON changesets (rev);
    CREATE INDEX pushlog_date ON pushlog (date);
    CREATE INDEX pushlog_user ON pushlog (user);

    I am off to file this as a bug… filed as bug 635765.

    UPDATE: I initially had a brain glitch where I proposed using a primary key rather than an additional index.  Unfortunately, this is what got pushed out to feed readers during the 30 seconds before I realized my massive mistake.  If you have come to this page to note the bad-idea-ness of that, please enjoy the corrected blog-post which explains why the primary key is not the best idea before going on to point out any other serious brain glitches 🙂

    UPDATE 2 (Feb 22, 2011):  Good news: the change has landed and all the existing databases have been updated.  Woo!  Bad news: although things are probably faster now, things are still way too slow.  (Given that this specific deficiency was determined by inspection and not actual profiling, this is not so surprising.)  It would appear that the retrieval of the information about the changesets is the cause of the slowdown.  More inspection on my part suggests that populating the list of tags may be involved, but that’s just a quick guess.  Please follow the bug or follow-on bugs if you want to witness the entire exciting saga.

logsploder, circa a year+ ago

    Whoops.  I posted to mozilla.dev.apps.thunderbird about an updated version of logsploder at the end of 2009, but forgot to blog about it.  I do so now (briefly) for my own retrospective interest and those who like cropped screenshots.

    The gloda (global database) tests and various Thunderbird mozmill tests have been augmented for some time (1+ years) to support rich logging where handlers are given the chance to extract the salient attributes of objects and convert them to JSON for marshaling.  Additionally, when the fancy logging mode is enabled, additional loggers are registered (like Thunderbird’s nsIMsgFolderListener) to provide additional context for what is going on around testing operations.

    For a given test (file) run, logsploder provides an overview in the form of a hierarchical treemap that identifies the logger categories used in the test and a small multiples timeline display that characterizes the volume of messages logged to each category in a given time slice using the treemap’s layout.  The idea is that you can see at a glance the subsystems active for each time-slice.

    Logsploder can retain multiple test file runs in memory:

    And knows the tests and sub-tests run in each test file (xpcshell) run.  Tests/sub-tests are not green/red coded because xpcshell tests give up as soon as the first failure is encountered so there is no real point:

    Clicking on a test/subtest automatically selects the first time slice during which it was active.

    Selecting a time-slice presents us with a simple list of the events that occurred during that time slice.  Each event is colored (although I think darkened?) based on its logging category:

    Underlined things are rich object representations that can be clicked on to show additional details.  For example, if we click on the very first underlined thing, “MsgHdr: imap://user@localhost/gabba13#1” entry, we get:

    And if we click on the interestingProperties OBJECT:

    Logsploder was a pre-wmsy (the widget framework that I’ll be talking about more starting a few weeks from now) tool whose UI implementation informed wmsy.  Which is to say, the need to switch to a different tab or click on the “OBJECT” rather than supporting some more clever form of popups and/or inline nesting was a presentation limitation that would not happen today.  (More significantly, the log events would not be paged based on time slice with wmsy, although that limitation is not as obvious from the way I’ve presented the screenshots.)

    If anyone in the Thunderbird world is interested in giving logsploder a spin for themselves, the hg repo is here.  I also have various patches that I can cleanup/make available in my patch queue to support logging xpcshell output to disk (rather than just the network), hooking up the logHelper mechanism so it reports data to mozmill over jsbridge, and (likely heavily bit-rotted) mozmill patches to report those JSON blobs to a CouchDB server (along with screenshots taken when failures occur).  The latter stuff never hit the repo because of the previously mentioned lack of a couchdb instance to actually push the results to.  Much of the logHelper/CouchDB work will likely find new life as we move forward with a wmsy-fied Thunderbird Air experiment.

    Many thanks to sid0 who has continued to improve logHelper, if sometimes only to stop it from throwing exceptions on newly logged things that its heuristics fatally did not understand 🙂

Visualizing asynchronous JavaScript promises (Q-style Promises/B)

    Asynchronous JS can be unwieldy and confusing.  Specifically, callbacks can be unwieldy, especially when you introduce error handling and start chaining asynchronous operations.  So, people frequently turn to something like the deferreds of Python’s Twisted, which provide for explicit error handling and the ability for ‘callbacks’ to return yet another asynchronous operation.

    In CommonJS-land, there are proposals for deferred-ish promises.  In a dangerously concise nutshell, these are:

    • Promises/A: promises have a then(callback, errback) method.
    • Promises/B: the promises module has a when(value, callback, errback) helper function.

    I am in the Promises/B camp because the when construct lets you not care whether value is actually a promise or not, both now and in the future.  The bad news about Promises/B is that:

    • It is currently not duck typable (but there is a mailing list proposal to support unification that I am all for) and so really only works if you have exactly one promises module in your application.
    • The implementation will make your brain implode-then-explode because it is architected for safety and to support transparent remoting.

    To elaborate on the (elegant) complexity, it uses a message-passing idiom where you send your “when” request to the promise, which is then responsible for actually executing your callback or errback.  So if value is actually a value, it just invokes your callback on the value.  If value was a promise, it queues your callback until the promise is resolved.  If value was a rejection, it invokes your rejection handler.  When a callback returns a new promise, any “when”s that were targeted at the associated promise end up retargeted to the newly returned promise.  The bad debugging news is that almost every message-transmission step is forward()ed into a subsequent turn of the event loop, which results in debuggers losing a lot of context.  (Although anything that maintains linkages between the code that created a timer and the fired timer event or other causal chaining at least has a fighting chance.)
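
    A minimal sketch of the when() idiom, using a generic Q-style promises module; the fetchThingAsynchronously() helper is hypothetical:

      // Sketch of Promises/B usage: the caller does not care whether it was
      // handed a plain value or a promise for one.
      var Q = require('q'); // any Q-style module exposing when()

      function describeThing(thingOrPromise) {
        return Q.when(thingOrPromise,
          function resolved(thing) {
            // may itself return another promise; whens chained on our return
            // value get retargeted to it
            return 'got: ' + thing.name;
          },
          function rejected(err) {
            return 'failed: ' + err.message;
          });
      }

      describeThing({ name: 'immediate value' });  // works on plain values
      describeThing(fetchThingAsynchronously());   // and on promises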

    In short, promises make things more manageable, but they don’t really make things less confusing, at least not without a little help.  Some time ago I created a modified version of Kris Kowal’s Q library implementation that:

    • Allows you to describe what a promise actually represents using human words.
    • Tracks relationships between promises (or allows you to describe them) so that you can know all of the promises that a given promise depends/depended on.
    • Completely abandons the security/safety stuff that kept promises isolated.

    The end goal was to support debugging/understanding of code that uses promises by converting that data into something usable like a visualization.  I’ve done this now, applying it to jstut’s (soon-to-be-formerly narscribblus’) load process to help understand what work is actually being done.  If you are somehow using jstut trunk, you can invoke document.jstutVisualizeDocLoad(/* show boring? */ false) from your JS console and see such a graph in all its majesty for your currently loaded document.

    The first screenshot (show boring = true) is of a case where a parse failure of the root document occurred and we display a friendly parse error screen.  The second screenshot (show boring = false) is the top bit of the successful presentation of the same document where I have not arbitrarily deleted a syntactically important line.

    A basic description of the visualization:

    • It’s a hierarchical protovis indented tree.  The children of a node are the promises it depended on.  A promise that depended in parallel(-ish) on multiple promises will have multiple children.  The special case is that if we had a “when” W depending on promise X, and X was resolved with promise Y, then W gets retargeted to Y.  This is represented in the visualization as W having children X and Y, but with Y having a triangle icon instead of a circle in order to differentiate from W having depended on X and Y in parallel from the get-go.
    • The poor man’s timeline on the right-hand side shows the time-span between when the promise was created and when it was resolved.  It is not showing how long the callback function took to run, although it will fall strictly within the shown time-span.  Time-bar widths are lower bounded at 1 pixel, so the duration of something 1-pixel wide is not representative of anything other than position.
    • Nodes are green if they were resolved, yellow if they were never resolved, red if they were rejected.  Nodes are gray if the node and its dependencies were already shown elsewhere in the graph; dependencies are not shown in such a case.  This reduces redundancy in the visualization while still expressing actual dependencies.
    • Timelines are green if the promise was resolved, maroon if it was never resolved or rejected.  If the timeline is never resolved, it goes all the way to the right edge.
    • Boring nodes are elided when so configured; their interesting children are spliced in in their place.  A node is boring if its “what” description starts with “auto:” or “boring:”.  The when() logic automatically annotates an “auto:functionName” if the callback function has a name.

    You can find pwomise.js and pwomise-vis.js in the narscribblus/jstut repo.  It’s called pwomise not to be adorable but rather to make it clear that it’s not promise.js.  I have added various comments to pwomise.js that may aid in understanding.  Sometime soon I will update my demo setup on clicky.visophyte.org so that all can partake.

understanding where layout goes wrong with gecko reflow debug logs (Part 2)

    In part 1, I was fighting some kind of flex-box layout problem in gecko that I put on the metaphorical back burner.  The metaphorical pot is back, but rebuilt because the old pot did not support animation using CSS transitions.  (I was using overflow: none in combination with scrollLeft/scrollTop to scroll the directly contained children; now I’m using overflow: none with a position:absolute child that holds the children and animates around using CSS left/top.)  Thanks to the rebuild, the truly troublesome behavior is gone.  But helpfully for providing closure to the previous blog post, gecko still was enlarging something and so causing the page to want to scroll.

    With a few enhancements to part 1’s logic, we add a “growth” command so we can do “cfx run growth serial-13-0” and get the following results:

    *** block 19 display: block tag:div id: classes:default-bugzilla-ui-bug-page-title
    *** inline 20 display: inline tag:span id: classes:default-bugzilla-ui-bug-page-summary
    inline 20 variant 1  (parent 19) first line 368
      why: Reflow
      display: inline
      inputs: reflowAvailable: 960,UC
              reflowComputed: UC,UC
              reflowExtra: dirty v-resize
      output: reflowDims: 885,21
      parent concluded: prefWidth: 885
                        minWidth: 100
                        reflowDims: 960,21
    *** block 23 display: block tag:div id: classes: [elided]
    *** SVGOuterSVG(svg)(0) 27 display: inline tag:svg id: classes:
    SVGOuterSVG(svg)(0) 27 variant 1  (parent 23) first line 850
      why: GetPrefWidth,GetMinWidth,Reflow
      display: inline
      inputs: reflowAvailable: 187,UC
              reflowComputed: 187,1725
              reflowExtra: dirty v-resize
      output: prefWidth: 187
              minWidth: 0
              reflowDims: 187,1725
      parent concluded: prefWidth: 187
                        minWidth: 0
                        reflowDims: 187,1730

    The growth detection logic is simple and not too clever, but works.  For nodes that are the sole child of their parent, we check if there is a difference between how large the child wants to be and how large the parent decides to be that cannot be accounted for by the margin of the child and the border/padding of the parent.
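
    In code form, the check is roughly the following; the node/box field names are invented, while the real tool works over the parsed reflow log records:

      // Illustrative version of the growth check: for a sole child, flag cases
      // where the parent ends up larger than the child by more than the child's
      // margin plus the parent's border/padding can explain.
      function detectSuspiciousGrowth(parent, child) {
        if (parent.children.length !== 1)
          return null;
        var explainedW = child.margin.horizontal + parent.borderPadding.horizontal;
        var explainedH = child.margin.vertical + parent.borderPadding.vertical;
        var extraW = parent.reflowDims.width - child.reflowDims.width - explainedW;
        var extraH = parent.reflowDims.height - child.reflowDims.height - explainedH;
        if (extraW > 0 || extraH > 0)
          return { node: child, extraWidth: extraW, extraHeight: extraH };
        return null; // growth fully accounted for
      }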

    The first reported node is being forced wider than it wanted to be because of an explicit width assigned to a flex-box.  The second reported node is the SVG node that we are rendering our timeline into (see the screenshot).  We can see that its parent ends up being 5 pixels taller than the SVG node.  This is concerning and very likely our problem.

    The answer is the “display: inline” bit.  If you are foolish enough to fire up gdb, it turns out that nsBlockFrame::ReflowLine checks whether its child line is a block or not.  If it’s not, we end up in nsLineLayout::VerticalAlignLine which calls VerticalAlignFrames which decides to set mMinY=-1725px and mMaxY=5px.  I am only half-foolish so I just added a “display: block”, saw that it made the problem go away,  and stopped there.

    To summarize, “svg” elements are not “display: block” by default and gecko appears to decide to pad the sole child out with some kind of line height logic.  This does not happen in webkit/chromium in this specific situation; this is notable in this context because inconsistencies between layout engines are likely to be particularly frustrating.  The gecko reflow debug logs and my tooling do not currently make the causality blindingly obvious, although I have made the output explicitly call out the “display” type more clearly so the inference is easier to make.  It turns out that if I had built gecko with -DNOISY_VERTICAL_ALIGN I would have gotten additional output that would likely have provided the required information.

wmsy’s debug UI’s decision tree visualizer

    wmsy, the Widget Manifesting SYstem, figures out what widget to display by having widgets specify the constraints for situations in which they are applicable.  (Yes, this is an outgrowth of an earlier life involving multiple dispatch.)  Code that wants to bind an object into a widget provides the static, unchanging context/constraints at definition time and then presents the actual object to be bound at (surprise!) widget bind time.

    This is realized by building a decision tree of sorts out of our pool of constraints.  Logical inconsistencies are “avoided” by optionally specifying an explicit prioritization of attributes to split on, taking us much closer to (rather constrained) multiple dispatch.  For efficiency, all those static constraints are used to perform partial traversal of the decision tree ahead of time so that the determination of which widget factory to use at bind time ideally only needs to evaluate the things that can actually vary.
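
    The partial-traversal idea can be sketched like so; this is a simplification (check nodes and fallback handling are omitted) and not wmsy’s actual implementation:

      // Illustrative sketch: static constraints walk the decision tree as far
      // as they can at definition time, so bind time only looks at attributes
      // that actually vary per object.
      function partialTraverse(root, staticConstraints) {
        var node = root;
        while (node.branchAttr && (node.branchAttr in staticConstraints))
          node = node.kids[staticConstraints[node.branchAttr]];
        return node; // bind() resumes from here
      }

      function bindWidget(partialNode, obj) {
        var node = partialNode;
        while (node.branchAttr)
          node = node.kids[obj[node.branchAttr]]; // (no-match handling omitted)
        return node.widgetFactory(obj);
      }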

    When I conceived of this strategy, I asserted to myself that the complexity of understanding what goes wrong could be mitigated by providing a visualization / exploration tool for the built decision tree along with other Debug UI features.  In software development this usually jinxes things, but I’m happy to say not in this specific case.

    In any event, wmsy’s debug UI now has its first feature.  A visualization of the current widget factory decision tree.  Grey nodes are branch nodes, yellow nodes are check nodes (no branching, just verifying that all constraints that were not already checked by branch nodes are correct), and green nodes are result nodes.  Nodes stroked in purple have a partial traversal pointing at them and the stroke width is a function of the number of such partials.  The highly dashed labels for the (green) result nodes are the fully namespaced widget names.  Future work will be to cause clicking on them to bring up details on the widget and/or jump to the widget source in-document using skywriter or in an external text editor (via extension).

    The Debug UI can be accessed in any document using wmsy by bringing up your debugger console and typing “document.wmsyDebugUI()”.  This triggers a dynamic load of the UI code (hooray RequireJS!) which then does its thing.  Major props to the protovis visualization toolkit and its built-in partition layout that made creating the graph quite easy.

fighting oranges with systemtap probes, latency fighting; step 2

    Recap from step 1: Sometimes unit test failures on the mozilla tinderboxen are (allegedly, per me) due to insufficiently constrained asynchronous processes.  Sometimes the tests time out because of the asynchronous ordering thing, sometimes it’s just because they’re slow.  Systemtap is awesome.  Using systemtap we can get exciting debug output in JSON form which we can use to fight the aforementioned things.

    Advances in bullet point technology have given us the following new features:

    • Integration of latencytap by William Cohen.  Latencytap is a systemtap script that figures out why your thread/process started blocking.  It uses kernel probes to notice when the task gets activated/deactivated, which tells us how long it was asleep.  It performs a kernel backtrace and uses a reasonably extensive built-in knowledge base to figure out the best explanation for why it decided to block.  This gets us not only fsync() but memory allocation internals and other neat stuff too.
      • We ignore everything less than a microsecond because that’s what latencytap already did by virtue of dealing in microseconds and it seems like a good idea.  (We use nanoseconds, though, so we will filter out slightly more because it’s not just quantization-derived.)
      • We get JS backtraces where available for anything longer than 0.1 ms.
    • The visualization is now based off of wall time by default.
    • The latency and GC activities are now shown on the visualization in the UI.
    • Automated summarization of latency including aggregation of JS call stacks.
    • The new UI bits drove and benefit from various wmsy improvements and cleanup.  Many thanks to my co-worker James Burke for helping me with a number of design decisions there.
    • The systemtap probe compilation non-determinism bug mentioned last time is not gone yet, but thanks to the friendly and responsive systemtap developers, it will be gone soon!

    Using these new and improved bullet points we were able to look at one of the tests that seemed to be intermittently timing out (the bug) for legitimate reasons of slowness.  And recently, not just that one test, but many of its friends using the same test infrastructure (asyncTestUtils).

    So if you look at the top visualization, you will see lots of reds and pinks; it’s like a trip to Arizona but without all of the tour buses.  Those are all our fsyncs.  How many fsyncs?  This many fsyncs:

    Why so many fsyncs?

    Oh dear, someone must have snuck into messageInjection.js when I was not looking!  (Note: comment made for comedic purposes; all the blame is mine, although I have several high quality excuses up my sleeve if required.)

    What would happen if we change the underlying C++ class to support batching semantics and the injection logic to use it?

    Nice.

    NB: No, I don’t know exactly what the lock contention is.  The label might be misleading since it is based on sys_futex/do_futex being on the stack rather than the syscall.  Since they only show up on one thread but the latencytap kernel probes need to self-filter because they fire for everything and are using globals to filter, I would not be surprised if it turned out that the systemtap probes used futexes and that’s what’s going on.  It’s not trivial to find out because the latencytap probes can’t really get a good native userspace backtrace (the bug) and when I dabbled in that area I managed to hard lock my system and I really dislike rebooting.  So a mystery they will stay.  Unless someone tells me or I read more of the systemtap source or implement hover-brushing in the visualization or otherwise figure things out.

    There is probably more to come, including me running the probes against a bunch of mozilla-central/comm-central tests and putting the results up in interactive web-app form (with the distilled JSON available).  It sounds like I might get access to a MoCo VM to facilitate that, which would be nice.