I’m Old, Part XXI: I Love Deadlines. I Love The Whooshing Sound They Make As They Fly By

Software estimation is hard.

No, that’s not quite right. Software estimation is tremendously easy. Accurate software estimation is hard. When I started on the Acrobat team, the first thing I was assigned was making the “Find Text” feature work. Many of the features in Acrobat were initially implemented by one hurried engineer and were half-assed. If you asked the engineer who did it why it didn’t do some particular thing, the inevitable response was something like, “Because I wrote it in an hour/day. Go ahead and make it better.”


Find Text, as it was, could find one and only one word, and even then it ran like molasses in January and often missed words. It also stopped after finding the word, with no way to continue from there. My job was to make it run faster and to operate more like a word processor. So it had to find phrases, it had to handle find/find next/find prev, and it had to have I/O cover-up when needed.

I/O cover-up is something that you put into an app to show the user that something is going on. It might be a spinning beach ball, a progress bar, or some other visual cue. For spinning beach ball cursors, you had to have a chunk of code that was hit by your working loop that updated the cursor – but not too quickly and not too slowly. The Mac didn’t have real multiprocessing at that point, so you either did your own time slicing or you did what Photoshop did (I think it was Photoshop – it might have been Illustrator), which was to install an interrupt-level task attached to the vertical blanking interrupt to spin the beach ball. That was great in that the cursor spun very smoothly. It was lousy because if the app hung, you still had a beautiful spinning cursor and no indication that something horrible had happened.
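To make the cooperative approach concrete, here’s a minimal sketch of the work-loop version in plain C. This is not the actual Acrobat code – `tickle_busy_cursor` and `spin_cursor` are names I made up, and `spin_cursor` stands in for whatever actually redraws the beach ball (SetCursor and friends on the classic Mac):

```c
#include <stdio.h>
#include <time.h>

#define SPIN_INTERVAL_MS 100  /* too fast looks frantic, too slow looks hung */

/* Stand-in for advancing the beach-ball cursor one frame. */
static void spin_cursor(void)
{
    static int frame = 0;
    const char frames[] = "|/-\\";
    fprintf(stderr, "\r%c", frames[frame++ % 4]);
}

/* Call this from inside your working loop; it only redraws when
 * enough time has elapsed, so the real work isn't slowed down. */
static void tickle_busy_cursor(void)
{
    static clock_t last = 0;
    clock_t now = clock();
    if ((now - last) * 1000 / CLOCKS_PER_SEC >= SPIN_INTERVAL_MS) {
        spin_cursor();
        last = now;
    }
}

int main(void)
{
    for (long i = 0; i < 50000000L; i++) {
        /* ... a unit of real work goes here ... */
        if ((i & 0xFFF) == 0)   /* don't even check the clock every pass */
            tickle_busy_cursor();
    }
    fprintf(stderr, "\rdone\n");
    return 0;
}
```

The trade-off is the mirror image of the VBL trick: if the loop wedges, the cursor stops dead – ugly, but at least honest.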

I was asked how long the tasks would take and I pulled a number right out of my ass: 3 weeks. I was off by a factor of 5 because I just didn’t know. The reason the original code was so bad was PDF itself. PDF is not a text file. Any given page is a program that gets executed in order to be rendered. The underlying code had a setup where you could render a page and get a callback whenever text was placed on the page. The problem was that the callbacks happened whenever any arbitrary string was placed, and that was at the whim of the program that placed the text. So, for example, the string “hi there” might have been put on the page in one shot or in as many as eight separate blocks. The original code couldn’t deal with this.
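As best I can reconstruct it, the shape of that setup was something like this – the names and fields here are my guesses, not the real internal API:

```c
/* One run of text as the page program happened to emit it. The
 * catch: a "run" can be any arbitrary fragment -- a whole phrase,
 * half a word, or a single character. */
typedef struct TextRun {
    const char *chars;   /* the raw string that was placed */
    double x, y;         /* where it landed in page space */
    double font_size;
    int font_id;
} TextRun;

typedef void (*TextRunProc)(const TextRun *run, void *clientData);

/* Hypothetical: execute the page program, firing proc once per
 * placed run, in whatever order the program places them. */
void RenderPageWithTextProc(int pageNum, TextRunProc proc, void *clientData);
```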

So one of the first things I did was write code that received text and, when it made sense, broke words apart at spaces or stitched disparate blocks back together when it looked like they formed a word. This code was tricky because text might be rendered in different fonts at slightly different sizes, and on top of that the text might not be placed on the page in a way that was rectilinear. It took me more than a week to tune the heuristics. Then I had to tame the memory and performance issues that resulted, because through all of this I needed the metrics of the fonts being used – and who knows how many fonts were on a page. There was a lot of “just slog through this” code. Not elegant, but it worked.
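Here’s a toy version of the word-assembly heuristic, with assumptions of my own choosing (horizontal text, baseline and gap thresholds picked out of the air) – the shipped code had to cope with rotated text and real font metrics on top of this:

```c
#include <math.h>
#include <stdio.h>
#include <string.h>

typedef struct Run {
    const char *text;
    double x, y;        /* baseline origin */
    double width;       /* advance width of the whole run */
    double font_size;
} Run;

/* Glue two runs together if their baselines agree and the gap
 * between them is small relative to the font size. The thresholds
 * are illustrative, not the tuned ones. */
static int runs_join(const Run *a, const Run *b)
{
    double gap  = b->x - (a->x + a->width);
    double size = fmax(a->font_size, b->font_size);
    return fabs(a->y - b->y) < 0.2 * size &&
           gap > -0.1 * size && gap < 0.25 * size;
}

int main(void)
{
    /* "hi there" as it might arrive: three separate blocks */
    Run runs[] = {
        { "hi ", 72.0, 700.0, 14.0, 12.0 },
        { "th",  86.0, 700.0, 11.2, 12.0 },
        { "ere", 97.4, 700.0, 16.0, 12.0 },
    };
    size_t n = sizeof runs / sizeof runs[0];
    char assembled[256] = "";

    /* Pass 1: stitch adjacent runs back together. */
    for (size_t i = 0; i < n; i++) {
        if (i > 0 && !runs_join(&runs[i - 1], &runs[i]))
            strcat(assembled, " ");     /* real break between fragments */
        strcat(assembled, runs[i].text);
    }

    /* Pass 2: split the stitched text at spaces to get words. */
    for (char *w = strtok(assembled, " "); w; w = strtok(NULL, " "))
        printf("word: %s\n", w);        /* "hi", then "there" */
    return 0;
}
```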

Then there was troff. God damn troff. When it generated PostScript (later converted into PDF), it placed all the “plain” text first, then the bold text, then the italic text. So in addition to putting words back together, words had to be sorted on the page into some semblance of reading order. Oh wait. What’s reading order? Left-to-right, top-to-bottom? Sure, if you use a typical European language. What about Arabic? Japanese? Sumerian? OK, I never saw that one, but still. I seem to recall punting on reading order.
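The punt looks roughly like this – a naive top-to-bottom, left-to-right sort that only makes sense for horizontal left-to-right scripts. This is my sketch, not the shipped code:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Frag {
    const char *text;
    double x, y;    /* page space: y increases up the page */
} Frag;

#define LINE_TOL 3.0   /* fragments within this much share a line */

/* Top of page first, then left to right within a line. Note that a
 * tolerance-based comparator isn't strictly transitive; real code
 * would bucket fragments into lines first. */
static int cmp_reading_order(const void *pa, const void *pb)
{
    const Frag *a = pa, *b = pb;
    double dy = b->y - a->y;
    if (dy >  LINE_TOL) return  1;   /* b is higher on the page: b first */
    if (dy < -LINE_TOL) return -1;
    return (a->x > b->x) - (a->x < b->x);
}

int main(void)
{
    /* troff ordering: all the plain text, then the bold, then italic */
    Frag frags[] = {
        { "plain1", 72, 700 }, { "plain2",  72, 686 },
        { "BOLD",  140, 700 }, { "italic", 200, 686 },
    };
    size_t n = sizeof frags / sizeof frags[0];

    qsort(frags, n, sizeof frags[0], cmp_reading_order);
    for (size_t i = 0; i < n; i++)
        printf("%s ", frags[i].text);   /* plain1 BOLD plain2 italic */
    printf("\n");
    return 0;
}
```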

PDF has many aspects that make it non-trivial for someone to just write their own implementation. Every engineer on the team contributed at least one. Mine was that when you found text, it should be displayed on the page with an accurate highlight. The previous code just used an axis-aligned rectangle set to the min and max extents of the word. This was accurate as long as the word itself was axis-aligned, but we had documents (maps, label designs, etc.) that were not so friendly to that kind of highlighting. What I did instead was make the bounding area a set of quadrilaterals that bounded the word fragments. I sweated the details a lot to make the highlights look good, including correctly joining up contiguous quadrilaterals. So when you look in the spec at the highlight annotation, the reason it’s all quadrilateral-based is because search worked that way. Search worked that way because I insisted that it should.
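To see why quads win, here’s a sketch of computing a tight quadrilateral for a word fragment from its baseline origin, angle, advance width, and ascent/descent. The names and corner order are mine, not the spec’s (though the spec’s highlight annotations do carry four corners per fragment in their QuadPoints):

```c
#include <math.h>
#include <stdio.h>

typedef struct Pt { double x, y; } Pt;

/* Quad corners for a word fragment whose baseline starts at origin
 * and runs at `angle` radians: lower-left, lower-right, upper-right,
 * upper-left. An axis-aligned box around a rotated word would be
 * much looser than this. */
static void word_quad(Pt origin, double angle, double width,
                      double ascent, double descent, Pt quad[4])
{
    Pt along = { cos(angle) * width, sin(angle) * width }; /* baseline dir */
    Pt up    = { -sin(angle), cos(angle) };                /* perpendicular */

    quad[0] = (Pt){ origin.x - up.x * descent, origin.y - up.y * descent };
    quad[1] = (Pt){ quad[0].x + along.x, quad[0].y + along.y };
    quad[2] = (Pt){ quad[1].x + up.x * (ascent + descent),
                    quad[1].y + up.y * (ascent + descent) };
    quad[3] = (Pt){ quad[0].x + up.x * (ascent + descent),
                    quad[0].y + up.y * (ascent + descent) };
}

int main(void)
{
    Pt quad[4];
    double thirty_deg = 30.0 * 3.14159265358979 / 180.0;

    /* a 40pt-wide word on a 30-degree baseline starting at (100, 100) */
    word_quad((Pt){ 100, 100 }, thirty_deg, 40.0, 10.0, 3.0, quad);
    for (int i = 0; i < 4; i++)
        printf("(%.1f, %.1f)\n", quad[i].x, quad[i].y);
    return 0;
}
```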

Then there were ligatures. In typography, there are letter sequences that are replaced with a single character representing those letters all tied together. For example, fi and fl are two of the most common ligatures, because in many fonts drawing an f next to an i or an f next to an l looks crappy because of the overlap. I put in support for ligatures, so if you had the word ‘waffle’ on the page rendered with the ffl ligature – ‘wa’, a single ffl glyph, then ‘e’ – Acrobat would still find it.
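A sketch of ligature-aware matching – illustrative only, and using Unicode presentation-form codepoints for concreteness, where the real thing would have gone through the fonts’ glyph encodings:

```c
#include <stdio.h>
#include <string.h>

/* Expand ligature glyphs to their letter sequences before comparing,
 * so a search for "waffle" matches a page that drew "wa" + an ffl
 * ligature + "e". Codepoints are Unicode presentation forms. */
typedef struct { unsigned codepoint; const char *letters; } Ligature;

static const Ligature kLigatures[] = {
    { 0xFB00, "ff" }, { 0xFB01, "fi" }, { 0xFB02, "fl" },
    { 0xFB03, "ffi" }, { 0xFB04, "ffl" },
};

/* Expand a run of codepoints into plain letters. */
static void expand(const unsigned *cps, size_t n, char *out, size_t outcap)
{
    size_t o = 0;
    for (size_t i = 0; i < n && o + 4 < outcap; i++) {
        const char *rep = NULL;
        for (size_t j = 0; j < sizeof kLigatures / sizeof kLigatures[0]; j++)
            if (cps[i] == kLigatures[j].codepoint)
                rep = kLigatures[j].letters;
        if (rep) { strcpy(out + o, rep); o += strlen(rep); }
        else     { out[o++] = (char)cps[i]; }  /* assume ASCII otherwise */
    }
    out[o] = '\0';
}

int main(void)
{
    /* 'w' 'a' <ffl> 'e' as it might come off the page */
    unsigned page_text[] = { 'w', 'a', 0xFB04, 'e' };
    char expanded[64];

    expand(page_text, 4, expanded, sizeof expanded);
    printf("%s -> %s\n", strstr(expanded, "waffle") ? "found" : "missed",
           expanded);
    return 0;
}
```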

I had no idea how much I was going to learn about typography in this process. When I was done, I was very pleased with the result.

But yeah, that took a lot of time. Oops. Little did I know that most of that work would end up on the trash heap after Acrobat 1.0 shipped.

Still, I had a good lesson in software estimation: don’t ever commit to a number until you understand the problem space. It was a good lesson to learn at age 27.
