Now that Thunderbird 3.1 is string/feature-frozen it’s time to focus on performance. This post is just to make sure that people who are also working on similar things know what I’m up to. For example, there’s some very exciting work going on involving adding a startup timeline to mozilla-central that everyone keeping up-to-date with mozilla performance work should be aware of. In summary, no results yet, but soon!
The big bullet points for now are:
- GWT SpeedTracer is a very impressive tool.
- I have seriously cleaned up / automated my previous SystemTap script work for mozilla. Invoking python chewchewwoowoo.py speedtracer/mozspeedtrace.stp `pgrep thunderbird-bin` does the following:
- generates a version of the source script with any line-based probe statements fixed up, based on one or more line seeks against the target source file.
- builds the command line and invokes the system tap script.
- invokes a post-processing script specified in the tap file, feeding it address-translation helpers and systemtap bulk logging per-cpu fusion.
- mozspeedtrace.stp and its post-processor produce an HTML file with embedded JSON that the SpeedTracer Chrome extension recognizes as one of its own. (SpeedTracer runs in Firefox through the GWT development mode, but being very new to GWT I am not yet clear on whether/how it can be wrapped up for use as a webapp operating on static data.)
- I have not implemented probes for all of the event types defined by SpeedTracer, I have not ported all of my previous probes yet, and there are still some new probes yet to add. Happily, most of the hard work is already in the can.
- The motivation behind this is very different from the startup timeline. Thunderbird uses mozilla-1.9.2, not mozilla-central, and my primary investigatory focus is memory usage in steady-state after start-up, not the time it takes to start up. (We do, of course, want to improve performance where possible.) It just so happens that in order to usefully explain who is allocating memory we also have to know who is active at the time / why they are active, and that provides us with the performance timeline data.
- This work is not likely to be useful for per-tab/webpage performance data gathering… just whole-program performance investigation like Thunderbird needs.
- This is a work-in-progress, but the focus is on improving Thunderbird’s performance issues, not building tooling. As such, I may have to bail on SpeedTracer in favor of command-line heuristic analysis if I can’t readily modify SpeedTracer to suit my needs. I picked SpeedTracer partially to (briefly) investigate GWT, but from my exploration of the code thus far, the activation energy required may be too great.
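The line fix-up step the wrapper performs can be sketched roughly like this. This is a hypothetical illustration, not the actual chewchewwoowoo.py code; the `%{marker}%` placeholder syntax and function names are invented for the example:

```python
import re

def fix_up_line_probes(stp_source, seek_file_path):
    """Replace placeholders of the (hypothetical) form %{marker}% in a
    systemtap script with the line number where the marker text is first
    found in the target source file.  This lets line-based probe points
    survive the target source file changing between releases."""
    with open(seek_file_path) as f:
        lines = f.readlines()

    def resolve(match):
        marker = match.group(1)
        for lineno, text in enumerate(lines, start=1):
            if marker in text:
                return str(lineno)
        raise ValueError("marker not found: %s" % marker)

    return re.sub(r"%\{([^}]+)\}%", resolve, stp_source)
```

The point of seeking by marker rather than hard-coding line numbers is that probe points keep working when the instrumented source drifts.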
----- js_GC data pages: 96731 total vm pages: 234849
11376 839 BF_format log4moz.j:487
15906 725 mime_emitter_writeBody jsmimeemitter.j:381
148 576 MimeMessage_coerceBodyToPlaintext mimemsg.j:341
4460 492 gloda_ds_queryFromQuery datastore.j:2988
9655 387 <MYSTERIOUS> <BEYOND THE VEIL>:0
911 237 gloda_content_quoted connotent.j:267
1723 144 gloda_indexMessage index_msg.j:2597
187 141 gloda_content_content connotent.j:290
170 108 gloda_content_getContent connotent.j:166
1917 73 gloda_identity_toString datamodel.j:785
1775 69 Log4Moz_enumerateProps log4moz.j:220
value |-------------------------------------------------- count
0 |@ 366
1 |@@@@ 1371
2 |@@@@@ 1584
4 |@@@@@@@@@@@@@@@@@@@@@@@@ 6863
8 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 11784
16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 12794
32 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 14016
64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 8662
128 |@@@@@@@@@@@@@@ 4093
256 |@@@@@ 1567
512 |@ 309
1024 | 105
2048 | 58
4096 | 28
8192 | 17
16384 | 12
32768 | 0
65536 | 0
The script. It tells you what JS code has been creating strings since the last GC and the overall distribution of the marginal costs of those strings, leaving out puny code that allocates less than 64k’s worth. Marginal costs, because spidermonkey has support for dependent strings: “a += b” will actually result in the storage of a being realloc’d, with the resulting string pointing into a’s (now larger) storage. I forgot to check for the js_ConcatStrings case at first and, let me tell you, the stats were not pretty. Poor mime_emitter_writeBody was accused of allocating 11 gigabytes of strings in one intra-GC cycle!
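The distribution above is a power-of-two histogram of the kind systemtap's `@hist_log` aggregate produces. As a minimal sketch (the bucketing shown here matches the shape of the output above, though systemtap's internals obviously differ):

```python
def log2_bucket(n):
    """Return the power-of-two bucket a value falls into: a value v lands
    in bucket b where b <= v < 2*b, with non-positive values in bucket 0."""
    if n <= 0:
        return 0
    bucket = 1
    while bucket * 2 <= n:
        bucket *= 2
    return bucket

def histogram(values):
    """Count how many values fall into each power-of-two bucket."""
    counts = {}
    for v in values:
        b = log2_bucket(v)
        counts[b] = counts.get(b, 0) + 1
    return counts
```

So a string with a marginal cost of 100 bytes lands in the 64 bucket, which is why the graph's hump between 8 and 64 describes mostly small allocations.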
What is interesting about this script is that it is actually pulling all of the function name / script name / line number information directly out of a JSContext object. What is exciting is that it was pretty easy to do thanks to systemtap’s understanding of DWARF debug symbols. This is in stark contrast to my efforts using VProbes, where there was no such support and I had to write a gdbpython script to begin to approach the same thing. Also, because systemtap’s guru mode allows you to bail into real C code when you need to, I was able to write a straightforward and efficient function that truncates the jschar (16-bit unicode) strings that make up the function names into ASCII. It also allowed for a quick/easy/fast basename() implementation.
It is worth noting that spidermonkey does have built-in (conditionally compiled) static probe points originally created for DTrace, and that they do provide more generic object lifetime hooks. Although they were created for DTrace, systemtap is also able to use them. The limitation with those probes is that once any probe is attached to a hook, a non-trivial cost is incurred, because filtering can only be performed inside the probe and you may receive orders of magnitude more calls than you actually care about. The cost is worst for probes that provide function names, since that requires a call to js_GetStringBytes every time (although there is some caching). Keep in mind that I’m not saying the systemtap userspace probes are free, but since they can be placed at arbitrary points, you can have probes that fire only in the cases you actually care about, only incur data-processing costs when desired, and are disturbingly easy to implement since you can just reach into the data structures you care about.
Since everyone loves an implementation fight, DTrace, great and magical though it is, is basically incapable of pulling off the JS stack-walking trick that systemtap makes possible. Unless DTrace has grown these features since I last checked, it lacks the DWARF magic to do such things easily and the language intentionally avoids the control-flow constructs that would allow it to do anything quite so complex. And you can entirely forget about arbitrary C code for the unicode string truncation…
This post is brought to you by efforts to decrease the memory footprint of Thunderbird when gloda’s indexer is active. Because Thunderbird loves you. But it loves your memory more. For now. That’s what we’re trying to fix.
For the Thunderbird 3.1 release cycle we are not just fixing UX problems but also resolving various performance issues. Building on my previous work on a visualization of SQLite opcode control flow graphs using graphviz, I give you… the same thing! But updated to more recent versions of SQLite and integrating performance information retrieved through the use of systemtap with utrace.
In this case we are using systemtap to extract the number of times each opcode is executed and the number of btree pages that are requested during the course of executing the opcode. (We do not differentiate between cache hits and misses because we are looking at big-O right now.) Because virtual tables (like those used by FTS3) result in nested SQLite queries and we do not care about analyzing the queries used by FTS3, we ignore nested calls and attribute all btree page accesses to the top-level opcode under execution.
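The attribution logic — ignore opcodes from nested virtual-table queries, but still charge their btree page accesses to the top-level opcode — can be modeled like this. The class and method names are illustrative; the real bookkeeping happens inside the systemtap script:

```python
class OpcodeProfiler:
    """Model of the attribution rule: only top-level VDBE opcodes are
    tracked, and page accesses made by nested (virtual-table) queries are
    charged to whichever top-level opcode is currently executing."""
    def __init__(self):
        self.depth = 0           # statement-execution nesting depth
        self.current_op = None   # top-level opcode currently running
        self.exec_counts = {}
        self.page_counts = {}

    def enter_statement(self):
        self.depth += 1

    def exit_statement(self):
        self.depth -= 1

    def on_opcode(self, op):
        if self.depth == 1:      # opcodes of nested queries are ignored
            self.current_op = op
            self.exec_counts[op] = self.exec_counts.get(op, 0) + 1

    def on_btree_page(self):
        # nested queries' page accesses still charge the top-level opcode
        if self.current_op is not None:
            self.page_counts[self.current_op] = \
                self.page_counts.get(self.current_op, 0) + 1
```

For example, when a VFilter opcode causes FTS3 to run its own nested query, every page that nested query touches is attributed to VFilter — which is exactly what you want when the nested queries themselves are not under investigation.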
Because the utility of a tool is just as dependent on ease of use as on its feature set, I’ve cleaned things up and made it much easier to get information out of Thunderbird/gloda with this patch, which should land soon and provides the following:
- The gloda datastore will write a JSON file with the EXPLAINed results of all its SQL queries to the path found in the preference mailnews.database.global.datastore.explainToPath. This preference is observed, so setting it during runtime will cause it to create the file and begin explaining all subsequently created queries. Clearing/changing the preference closes out the current file and potentially opens a new one.
- Gloda unit tests will automatically set the preference to the value of the environment variable GLODA_DATASTORE_EXPLAIN_TO_PATH if set.
- A schema dump is no longer required for meta-data because we just assume that you are using a SQLite DEBUG build that tells us everything we want to know about in the ‘comment’ column.
- grokexplain.py now uses optparse and has more internal documentation and such.
So what do the pretty pictures show?
- Before: A gloda fulltext query search retrieves all of the results data before applying the LIMIT. This results in a lot more data transplanted into temporary results tables than we will end up using; wasted bookkeeping. Additionally, we incur the row lookup costs for both our messages data storage table and our fulltext messagesText table for all hits, even the ones we will not return. (Note that there was no escaping hitting both tables, since we were using offsets(), which hits the fulltext table as part of its operation.)
- After: We perform an initial query phase where we minimize wasted bookkeeping by only retrieving and using the bare minimum required to compute the LIMITed list of document id’s. Additionally, by use of the FTS3 matchinfo() function instead of the offsets() function we are able to avoid performing row lookup on the messagesText table for results that will not be returned to the user. Use of the matchinfo() function requires a custom ranking function which allows us to be somewhat more clever about boosting results based on fulltext matches too.
- The poor man’s bar graphs in the pictures are expressing a hand-rolled logarithmic scale for the number of invocations (left bar) and number of btree pages accessed (right bar). On the right side of each line the actual numbers are also presented in the same order. The goal is not to convey good/bad so much as to draw the eye to hot spots.
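The "after" strategy boils down to a two-phase query: first compute only the LIMITed list of document ids, then do the expensive row lookups for the survivors. A miniature sketch with Python's sqlite3 module — plain tables and column names stand in for the actual gloda schema and the FTS3/matchinfo machinery, which need a suitably built SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, folder TEXT, big_blob TEXT);
    CREATE TABLE messagesText (docid INTEGER PRIMARY KEY, body TEXT);
""")
for i in range(1, 101):
    conn.execute("INSERT INTO messages VALUES (?, 'inbox', ?)", (i, "x" * 1000))
    conn.execute("INSERT INTO messagesText VALUES (?, ?)", (i, "body %d" % i))

# Phase 1: compute only the LIMITed list of document ids.  No wide-row
# lookups happen here, so hits that will never be returned cost almost
# nothing in bookkeeping.
doc_ids = [row[0] for row in conn.execute(
    "SELECT id FROM messages WHERE folder = 'inbox' "
    "ORDER BY id DESC LIMIT 5")]

# Phase 2: perform the expensive row lookups only for the survivors.
placeholders = ",".join("?" * len(doc_ids))
rows = conn.execute(
    "SELECT m.id, t.body FROM messages m "
    "JOIN messagesText t ON t.docid = m.id "
    "WHERE m.id IN (%s)" % placeholders, doc_ids).fetchall()
```

In the real fix, phase 1 additionally uses matchinfo() (via a custom ranking function) instead of offsets(), so even the fulltext messagesText row lookups are skipped for non-returned hits.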
Notes for people who want more to read:
- SQLite has built-in infrastructure to track the number of times an opcode is executed as well as the wall-clock time used; you do not have to use systemtap. It’s a conditional compilation kind of thing, just -DVDBE_PROFILE and every statement you execute gets its performance data appended to vdbe_profile.out when you reset the statement. It doesn’t do the btree pages consulted trick, but it’s obviously within its power with some code changes.
- Our use of systemtap is primarily a question of discretionary reporting control, the ability to integrate the analysis across other abstraction layers, and various build concerns. The JSON output is not a driving concern, just something nice we are able to do for ourselves since we are formatting the output.
- The tool has really crossed over into the super-win-payoff category with this fix investigation. (Having said that, I probably could have skipped some of the data-flow stuff last time around. But these projects are both for personal interest and amusement as well as practical benefit, so what are you gonna do? Also, that effort could pay off a bit more by propagating comments along register uses so that the LIMIT counter register r8 and the compute-once r7 in the after diagram would not require human thought.)
- The grokexplain repo. Used like so: python grokexplain.py --vdbe-stats=/tmp/glodaNewSearchPerf.json /tmp/glodaNewSearchExplained.json -o /tmp/glodasearch-newcheck
- The systemtap script in its repo. Used like so: sudo stap -DMAXSTRINGLEN=1024 sqlite-perf.stp /path/to/thunderbird-objdir/mozilla/dist/lib/libsqlite3.so > /tmp/glodaNewSearchPerf.json
- The bug with the gloda explain logic patch and the improved SQL query logic. I also used the tooling to fix another (now-fixed) bug, but that problem turned out to be less interesting.