support your neighborhood Thunderbird global database

Do you know JavaScript?  Would you like to help improve Thunderbird and its exciting global database, gloda?  Now is your chance!  Check out these exciting bugs that are reasonably sized and independent tasks:

Exciting? Exciting!

I don’t know much about psychology, but I have heard that people on the internet see a call-to-arms like this and say “I’m sure someone else better qualified will step up, maybe even hundreds of them… I’ll just let them take care of it.”  I have news for you: people on the internet are lazy!  Oh, so lazy!  (I am reasonably confident that won’t happen.  If it does happen, I will find enough work for everyone to do while I retire to a life of luxury funded by my ability to inexplicably motivate large swathes of the internet to do my bidding.)

Important steps!

  1. Get yourself a copy of the comm-central codebase.
  2. Build Thunderbird! (Actually, the link above covers it, but you might also want to check out the general building info page.  A from-memory sketch of the incantations appears after this list.)
  3. Dance a victory jig!
  4. Leave a note on one of those bugs saying that you are interested.  Or just e-mail me at asuth@mozillamessaging.com!
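
For the impatient, the whole dance looked roughly like the following at the time of writing.  This is from memory and the linked build documentation is authoritative, so treat the exact commands as assumptions:

    hg clone http://hg.mozilla.org/comm-central/ comm-central
    cd comm-central
    python client.py checkout          # pulls mozilla-central and friends
    echo 'ac_add_options --enable-application=mail' > .mozconfig
    make -f client.mk build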

using review board for bugzilla request queues / reviews

[Screenshot: the Review Board dashboard]

Review Board is a delightful Django-based review tool.  Whereas bugzilla’s diff display is limited to coloring the hunks of the provided diff without additional context, Review Board actually retrieves the underlying files from mercurial, applies the patch, and colorizes the source files (using pygments), so it can show you additional context.  It also lets you click on a line (or several) and attach a note at that location.
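
For flavor, here is a minimal sketch of just the pygments step (this is not Review Board’s actual code, merely an assumption about its shape); pretend we have already pulled the file out of mercurial and applied the patch:

    from pygments import highlight
    from pygments.lexers import guess_lexer_for_filename
    from pygments.formatters import HtmlFormatter

    def colorize_patched_file(filename, patched_text):
        # Pick a lexer based on the filename (falling back on content
        # analysis) and emit HTML with line numbers; having the whole
        # colorized file is what buys you context beyond the bare hunks.
        lexer = guess_lexer_for_filename(filename, patched_text)
        return highlight(patched_text, lexer, HtmlFormatter(linenos=True))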

Anywho, I made a new tool that ‘synchronizes’ bugzilla request queues with Review Board.  It finds the things in your request queue and then creates a corresponding review in Review Board, transferring the patch.  There is some limited hard-coded biasing towards comm-central, but it should be easy to adapt.  It uses a slightly hacked-up version of the post-review script found in Review Board’s contrib directory as its API to Review Board.
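
To make the shape of the tool concrete, here is a minimal sketch of the synchronization loop.  The helper, the server URL, and the queue format are all hypothetical stand-ins, and I am assuming post-review’s --diff-filename/--summary options; the real script drives the hacked-up post-review directly:

    import subprocess

    def fetch_review_requests(user):
        # Hypothetical stub: scrape the bugzilla request queue
        # (request.cgi) for attachments flagged "review?" against
        # `user`, saving each patch to disk.
        return []  # [(bug_id, summary, patch_filename), ...]

    def sync_queue(user):
        for bug_id, summary, patch_filename in fetch_review_requests(user):
            # post-review (from Review Board's contrib directory) does
            # the actual talking to the Review Board server.
            subprocess.check_call([
                'post-review',
                '--server=http://reviews.example.com',  # hypothetical URL
                '--diff-filename=' + patch_filename,
                '--summary=Bug %s: %s' % (bug_id, summary),
            ])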

[Screenshot: a Review Board diff with an inline comment]

The intent is to make it easier for people, or at least me, to do reviews.  I am definitely NOT proposing that the review “of record” live outside bugzilla.  But I think it might make it easier to prepare reviews in the first place.  For example, see the picture below?  It’s the result of my clicking on lines of code and making notes.  And those hyperlink-looking things that look like they might take you to the actual context, rather than making you reconstruct the context from people’s comments in the bugzilla notes?  They are actual hyperlinks that do exactly that!

[Screenshot: a prepared review in Review Board]

If you are a mailnews hacker and would like to see your review queue in my Review Board instance, ping me on IRC.  I may have already processed your queue and made you an account, or maybe not.  The Review Board API does not yet automate user creation, and I got tired of doing it manually and gave up.

Warnings:

  • Things go awry on old-school patches.  For example, a patch that thinks it is against (the CVS layout of) “mozilla/mailnews” appears not to work.  The right answer is a patch against “a/mailnews” (I’m not sure the git-style prefix is essential).  Things worked fine for my queue, but some people with old stuff in their queues experienced glitches.  The script now inserts “problem!” into the summary when this happens (see the sketch after this list), but in my Review Board instance there may be some reviews that pre-date this logic.  (There should be a note in the description, though.  Also, there will obviously be no diff attached.)
  • I initially forgot about the whole review=one person, super-review=another person, and maybe review=yet another person dance.  Although this is now fixed and reviews target all requested reviewers, there may be some reviews missing from your queue because they were created for one of the other requested reviewers and my manual nuking didn’t get to them.
  • Uh, it creates the reviews, but it doesn’t destroy them.
  • Also uh, there will be scaling issues if the reviews don’t eventually get closed out.
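
For the first warning, here is a minimal sketch of the path sanity check; the helper names are mine and the real script’s logic may well differ:

    import re

    def patch_uses_modern_paths(patch_text):
        # Old CVS-layout patches are diffed against paths like
        # "mozilla/mailnews/..."; what works is a git-style root like
        # "a/mailnews/...", so flag the former.
        paths = re.findall(r'^(?:---|\+\+\+) (\S+)', patch_text, re.MULTILINE)
        return not any(p.startswith('mozilla/') for p in paths)

    def review_summary(bug_summary, patch_text):
        # Mirror the script's behavior: prepend "problem!" so the
        # broken review stands out in the dashboard.
        if patch_uses_modern_paths(patch_text):
            return bug_summary
        return 'problem! ' + bug_summary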

understanding libmime using chroniquery and unit tests

[Screenshot: a chroniquery trace with a failed return value highlighted]

Mailnews’ libmime is one of the harder modules to wrap one’s head around.  8-letter filenames where the first four letters tend to be “mime”, home-grown glib-style OO rather than actual C++, and intermingling of display logic with parsing logic do not make for a fun development or even comprehension experience.

Running an xpcshell unit test under roc’s chronicle-recorder and then processing it with my chroniquery stuff (repo info) yields a colorful HTML trace of the execution, somewhat broken out (note: pretend the stream_write/stream_completes are interleaved; they are paired).  Specific examples of libmime processing are available for the inquisitive (there are more if you check the former link, though).

The thing that kickstarted this exciting foray is the error pictured in the screenshot above from an earlier run.  The return values are in yellow, and you can see where the error propagates from (the -1 cast to unsigned).  If you look at the HTML file, you will also note that the file stops after the error because the functions bail out as soon as they hit an error.

However, our actual problem and driving reason for the unit tests is the JS emitter choking on multipart/related in writeBody (which it receives by way of ‘output_fn’).  Why?  Why JS Emitter?!  (You can open the links and control-F along at home!)

  • We look at the stream_write trace for our multipart/related.  That’s odd, no ‘output_fn’ in there.

[Screenshot: the stream_complete trace showing output_fn]

  • We look at the stream_complete trace for the multipart/related.  There’s our ‘output_fn’!  And it’s got some weird HTML processing friends happening.  That must be to transform links to the related content into something our docshell can render.  This also explains why this processing is happening in stream_complete rather than stream_write… it must have buffered up the content so it could make sure it had seen all the ‘related’ documents and their Content-IDs so it could know how to transform the links.
  • Uh oh… that deferred processing might be doing something bad, since our consumer receives a stream of events.  We had to do something special for SMIME for just such a reason…
  • We check stream_complete for ‘mimeEmitterAddHeaderField’ calls, which the JS emitter keys off of to know what part is currently being processed and what type of body (if any) it should expect to receive.  Uh-oh, none in here.

[Screenshot: the stream_write trace showing mimeEmitterAddHeaderField]

  • We check stream_write for ‘mimeEmitterAddHeaderField’ calls, specifically ones with a “Content-Type” field.  And we find them.  The bad news is that they are apparently emitted as the initial streaming happens.  So we see the content-type for our “text/html”, then our “image/png”.  So when stream_complete happens, the last thing our JS emitter will have seen is “image/png” and it will not be expecting a body write.  (It will think that the text/html had no content whatsoever.  A toy model of this ordering follows the list.)
  • Khaaaaaaaaaaaaaaaaaaaaaaaaan!
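
To restate the failure mode as a toy model (this is not the real emitter or libmime code, just the event ordering we observed in the traces):

    # Events as the JS emitter sees them for our multipart/related:
    events = [
        ('stream_write',    'addHeaderField', 'Content-Type: text/html'),
        ('stream_write',    'addHeaderField', 'Content-Type: image/png'),
        # the buffered-up body only shows up at stream_complete time...
        ('stream_complete', 'writeBody',      '<html>...fixed-up links...</html>'),
    ]

    current_type = None
    for phase, name, payload in events:
        if name == 'addHeaderField':
            current_type = payload
        elif name == 'writeBody':
            # The body gets attributed to the *last* part announced,
            # image/png, rather than the text/html it belongs to.
            print('body arrives while current part is:', current_type)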

In summary, unit tests and execution tracing working together with pretty colors have helped us track down an annoying problem without going insane.  (libmime is a lot to hold in your head if you don’t hack on it every day.  Also, straight debugger breakpoint fun generally requires you to formulate and hold a complex mental model… and that’s assuming you don’t go insane from manually stepping aboot and/or are lucky with your choices of where you put your breakpoints.)  The more important thing is that next time I want to refresh my understanding of what libmime is up to, I have traces already available.  (And the mechanics to generate new ones easily.  But not particularly quickly.  chronicle and my trace-generating mechanism be mad slow, yo.  It may be much faster in the future to use the hopefully-arriving-soon archer-gdb python-driven inferior support, even if it can’t be as clever about function-call detection without cramming int 0x3’s all over the place.)

Useful files for anyone trying to duplicate my efforts: my ~/.chroniquery.cfg for this run, the unit test as it existed, and the command-line args, which were:

    trace -f mime_display_stream_write -f mime_display_stream_complete -c -H /tmp/trace3/trace.html --file-per-func-invoc

gaudily colorized gdb backtraces! woo!

[Screenshot: a gaudily colorized gdb backtrace of a crash]

Like many people who have overdosed on syntax highlighting and other forms of colorization, my brain is no longer able to process monochrome text displays.  Which is why I have been so excited about gdb with python crammed inside.  (The good sense of “crammed”, wherein one is cramming cookies in one’s mouth.)  I have perverted its spirit and used it to colorize gdb backtraces!  Woo!

While I was in there, I did two useful things:

  • There is magic of limited potency that normalizes source-file paths automatically.
  • It looks at all the values of the arguments (and the locals too, I guess) and, if a value is used more than once, it considers making it “interesting”.  Interesting values get named based on where we first saw them, and assigned a color.  Then, whenever they appear in the backtrace, they get their special name and color to help us see the flow of values through the backtrace (a toy sketch of this pass follows the list).  For example, in the screenshot above you can see “this5” in a nice* blue color.  Each time its value appears, the “this5” label appears (the 5 is for frame 5).  I find this easier than manually scanning with my brain.
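
A toy version of that interesting-value pass (the actual pythongdb-gaudy code surely differs; the data layout here is illustrative):

    from collections import Counter

    def find_interesting(frames):
        # frames: list of (frame_number, {name: value}) pairs in the
        # order we walk them.  Any value appearing more than once gets
        # a label built from the name and frame where we first saw it,
        # plus a slot in the color palette.
        counts = Counter(v for _, bindings in frames
                           for v in bindings.values())
        interesting = {}
        for frame_no, bindings in frames:
            for name, value in bindings.items():
                if counts[value] > 1 and value not in interesting:
                    label = '%s%d' % (name, frame_no)   # e.g. "this5"
                    interesting[value] = (label, len(interesting) % 6)
        return interesting

    # "this" from frame 5 recurs in frame 3, so it earns a label:
    frames = [(5, {'this': '0x12e4b80'}), (4, {'aMsgHdr': '0x13370'}),
              (3, {'this': '0x12e4b80'})]
    print(find_interesting(frames))  # {'0x12e4b80': ('this5', 0)}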

My hg repo is here: http://hg.mozilla.org/users/bugmail_asutherland.org/pythongdb-gaudy/

If you like this, or if you don’t like this but do like useful things, you are even more likely to like Jim Blandy’s archer-mozilla repo which is full of magic debugging help for spidermonkey in gdb.  (nb: The python-gdb trunk right now has changed how pretty printers register, and I don’t think that repo has yet caught up.)  Also, see Tom Tromey’s blog posts about the whole python-gdb thing.

* When I use “nice” in terms of colors, this is from my perspective as someone who demands many colors, has the time to add crazy colors to everything, but inexplicably does not have the time to actually pick colors that humans find appealing (or even non-nauseating).  I am going to pass the buck to the people who originally conceived of 256-color xterms and thought I could be trusted with a color cube, no matter how limited.