I’m Old, Part XXXVIII: Why Your Developers Are Moody

Writing code is awesome. At least I’ve always thought so. I started programming on an Apple II+ that my dad bought, and my brother Pat and I spent countless hours learning Applesoft BASIC. One of our first lessons was in the importance of language standards and what happens when you don’t have them. My dad bought us a book from Creative Computing that was full of listings of computer games. Games! Did you hear me? Games! They had names like Schmoo and TV Plot Generator. We would type them in and run into two problems: first, the version of BASIC they were coded in was not quite like Applesoft, so we essentially had to port them. Second, we introduced bugs typing them in.

But the joy of getting them to work was what kept us going. I call this joy “making the monkey dance”. It is seeing the product of pure thought turn into action.

Unfortunately, the process is a roller coaster ride. You hold onto the possibility of joy, but there are inevitably problems, and it may take hours, days, or (worse) weeks to find the thing or things that caused the problem.

Working on Acrobat, we had bugs that we talked about metaphorically. For example, if Acrobat was a house, we would describe the process of reproducing a complicated bug as “You go in through the front door, go upstairs, go into the bathroom, close the door, climb out the window, shinny down the drainpipe, then open and close the back door 4 times, and then the toilet explodes.”

And while you’re running through these steps, you start setting traps in the code to catch the thing that is causing the toilet to explode. And then you spot the problem, isolate it, and (sadly) more often than not put in a hack to ensure that particular case can’t happen. Ta-da: no more exploding toilet. Unfortunately, without taking the time to look at the bigger picture of the code and how the problem fits in, you will inevitably cause something else in the house to explode later.

Worse still are the bugs that reveal that you built your code on a leaky abstraction. You hack it to make it fit and your code gets worse. And when the sink explodes, you hack it again. And again. And it gets worse and worse.

I remember working on one particular bug in Acrobat late in the release cycle, and I went into Alan Wootton’s office really just to complain about not wanting to fix the bug the right way. I had a fix that I could put in, but it was a patch on already patchy code, and I described it as putting a Band-Aid on a cancer. Alan tolerated my complaints and then asked the more important question: “Why don’t you go fix the whole thing?” The answer comes down to three things:

  1. I was tired
  2. I was feeling lazy
  3. I feared the onslaught of late-cycle bugs that would result from this

Still, I did the right thing: I went through a cycle of “refactoring with a hammer” and fixed up the overall code and, inevitably, the bug.

As I’ve become more experienced, I’ve become more pragmatic and less bitchy about broken code. Sometimes it’s a case of “well, better to take the time to fix this right.” Other times, there isn’t enough time to do it right (now), so take notes on how to make it better in the future.

I was working on a new feature in my current code, which is unfortunately much more complicated than I’d like. I wrote a unit test first and it failed, as expected. As I worked on it, though, it was failing in ways I wasn’t expecting. In looking at the fallout from implementing this feature, I read through some older code that the new feature affected. My internal dialog was, “Huh. That will never work. Past me must have really enjoyed being so blissfully naive.” So I rewrote the older code and the new feature worked. But in the process, I had found something in my internal run-time type system that I had missed, so I fixed that – easy. That in turn caused 24 unit tests to fail, each one ultimately related to the type system fix, and in each case the incorrect code had all the pieces necessary to avoid the problem; it just didn’t take them into account.

And rather than suffering from the downs of bugs and design flaws, I’ve learned to relish the process of finding the ultimate problem and its fallout, as well as the moment when everything works. I do recognize that when I’m deep in the process, it still affects my mood, but not nearly as much as when I was younger.

I’m Old, Part XXXVII: The Rise of SteveApp

When I was in college in the mid-’80s, graphics workstations were on the rise, but the CS department, with its limited budget, really only had a set of terminals running at 2400 baud. 2400 baud is barely fast enough to run a crappy, keyboard-only visual text editor.

In my junior year, I had a (borrowed) Mac Plus in my room and routinely wrote fun little graphics programs for it. Also that year, the CS department got a set of color Sun workstations that they set up in the lab for a graphics course. They were also used by students who were not in the graphics class, with the understanding that students in graphics got priority. I wanted to take the class, but it was the first offering and very popular. I had a talk with the professor and he was pretty sure that I would be bored to tears, so instead I signed up with him for a private study where I wrote a paint program for the Suns, which I called StevePaint. That was the start of the entirely ego-driven nomenclature that carried over into my professional life.

When I was working on Acrobat Search, I created an anti-class library (it was straight C) that let me build UIs in a way that made very rapid prototyping possible. It was based around the Macintosh Dialog Item List, which was referenced in a Macintosh application as a DITL, so I called it SteveDITL.

In the process of building that, I learned a lot about how applications were intended to be built on MacOS, but the existing application frameworks were fairly heavyweight. So I built my own application framework called SteveApp, which I used for my own code projects. For example, I wrote a GIF viewer that did nice dithering for 1-bit displays.

At the time at Adobe, there were a fair number of platform cliques, and I was always irritated that in the Search team there was much more support for Windows tooling. For example, the indexer was a Windows-only application written by Kevin Binkley and Eswar Priyadarshan. They routinely let the application loose on file servers on Adobe’s LAN to index whatever PDFs could be found. At that time, networked servers and network support were pretty flaky and could cause all kinds of issues in the indexer, some of which might not show up for hours and were hell to reproduce. Eswar used to kick off an index and come in many hours later to find out if it had crashed.

I decided that there should be parity in the Search product line, so I took it upon myself to port the Windows tool and I decided to use SteveApp to do it. I got about 80% through the port before I showed it to my boss and also to John Warnock, figuring that forgiveness was going to be easier to get than permission. It was, and now my work was on a road map.

In thinking about the work and the hours that Eswar kept, I realized that he was engaging in a polling model for his code, and this was something that I could do better. So I wrote a separate app that maintained a list of other Macintoshes on your LAN with apps running on them and would listen for pings from them. Pings came with brief messages to indicate what was going on, along with a couple of standard identifiers (Idle, Working, Starting, Quitting). It had a configurable set of actions to take if pings didn’t arrive, and because I rolled that way, the actions came from plug-ins, so the app could be extended later.
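A minimal sketch of that watchdog idea, written in modern C++ rather than the classic MacOS networking code of the original (all the names here are hypothetical, not the real app’s API):

```cpp
#include <chrono>
#include <functional>
#include <map>
#include <string>

// The standard identifiers described above.
enum class PingState { Idle, Working, Starting, Quitting };

struct PingRecord {
    PingState state = PingState::Starting;
    std::string message;                            // brief status from the remote app
    std::chrono::steady_clock::time_point lastSeen;
};

class Watchdog {
public:
    // Stand-in for a plug-in action; the original loaded these dynamically.
    using Action = std::function<void(const std::string& host)>;

    void onPing(const std::string& host, PingState state, std::string message) {
        auto& rec = records_[host];
        rec.state = state;
        rec.message = std::move(message);
        rec.lastSeen = std::chrono::steady_clock::now();
    }

    // Called periodically; fires the configured action for any host
    // that has gone quiet for longer than `timeout`.
    void check(std::chrono::seconds timeout, const Action& onSilent) {
        const auto now = std::chrono::steady_clock::now();
        for (const auto& [host, rec] : records_)
            if (now - rec.lastSeen > timeout)
                onSilent(host);
    }

private:
    std::map<std::string, PingRecord> records_;
};
```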

So essentially, I built a separate app to watch the indexer, and if the indexer crashed or hung, I could make the app notify me. My boss witnessed me working on this and, in spite of my reasoning that I wanted to avoid the suffering of late-night polling, he forbade me to work on it.

Guh.

Life got worse because if you did network indexing on the then-current release of MacOS (8.1, IIRC), the TCP/IP code in the OS had a nice, built-in bug that would shotgun memory. If your app was memory hungry, like the indexer, it was only a matter of time before it crashed, completely out of your control.

Eventually, I wrote an app called “bloat-o” which, when it started up, would allocate the largest block that it could, wipe it to zeros, then repeatedly loop over the memory looking for anything non-zero. I used this to prove definitively that the bug was in 8.1 and only happened with apps that used TCP/IP.
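The trick only made sense because classic MacOS had no memory protection, so the OS’s TCP/IP code really could scribble on another app’s heap. A rough sketch of the idea in C-style C++ (the real bloat-o was a classic MacOS app; this is just the algorithm):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Grab the biggest block we can, zero it, then scan forever
// for any byte that is no longer zero.
int main() {
    size_t size = size_t(1) << 30;  // start big and back off until malloc succeeds
    unsigned char* block = nullptr;
    while (size > 0 && !(block = static_cast<unsigned char*>(std::malloc(size))))
        size /= 2;
    if (!block) return 1;

    std::memset(block, 0, size);
    std::printf("watching %zu bytes\n", size);

    for (;;) {
        for (size_t i = 0; i < size; i++) {
            if (block[i] != 0) {
                std::printf("memory smashed at offset %zu: 0x%02x\n", i, block[i]);
                return 2;
            }
        }
    }
}
```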

The product was eventually released, but between the department politics and attitudes towards engineering, debugging the heap smasher, and being forbidden to write a tool to make my life easier, I decided that I’d had enough of this group.

And that was the last Macintosh application ever written with SteveApp.

I’m Old, Part XXXVI: Typity-Type-Type

Before I started at Axial (later, Newfire), Alan Wootton, one of the main founders, had a party at his apartment. I was quite happy to go. There were a number of people I knew from Adobe, and in the process I ended up meeting Alan’s long-time friend and partner, Marty Hess. What I didn’t know was that Marty was interviewing me for Axial. The problem was that I was close to the end of my stint in the Acrobat group and had just about had enough of the “crunch all you want; we’ll make more” attitude towards engineers.

Later, when I ended up working for Axial, Marty told me that when he met me, he couldn’t believe how bitter and burned out I was. It’s true, but I wasn’t really aware of it.

Axial was in the business of making a high-performance VRML engine. Our initial goal was to make an engine with the performance of Quake in your browser. VRML is an interesting spec. It’s a data-representation language that includes a great deal of built-in broadcaster and listener patterns; a lot of it came straight out of the GoF patterns. In general, a VRML file was a set of nodes that described a scene and the relationships among the objects in the scene. For example, you could have a geometry node that described a shape, with a texture node attached to it that described how it looked and a touch sensor that described what happened when you clicked on it. Then there were interpolator nodes that could be used to change aspects of other nodes, including position, rotation, texture, and so on.
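To make the broadcaster/listener idea concrete, here is a toy sketch in C++ of an interpolator node routed to a transform node. The names and structure are invented for illustration; this is the flavor of VRML’s ROUTE mechanism, not Axial’s actual engine:

```cpp
#include <functional>
#include <vector>

// An output field broadcasts new values to every listener ("route")
// attached to it.
template <typename T>
struct OutField {
    std::vector<std::function<void(const T&)>> routes;
    void send(const T& value) {
        for (auto& r : routes) r(value);  // broadcast to all listeners
    }
};

struct Transform {
    float rotation = 0;
    void setRotation(float r) { rotation = r; }
};

struct OrientationInterpolator {
    OutField<float> value_changed;
    void tick(float t) { value_changed.send(t * 360.0f); }  // fake keyframe math
};

int main() {
    Transform shape;
    OrientationInterpolator interp;
    // Roughly: ROUTE interp.value_changed TO shape.set_rotation
    interp.value_changed.routes.push_back(
        [&shape](const float& r) { shape.setRotation(r); });
    interp.tick(0.25f);  // shape.rotation is now 90 degrees
}
```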

There was one gaping hole in Axial’s implementation of VRML, which was the script node. This was a type of node that could be used to change all kinds of other nodes by executing code. Initially, the spec called out two scripting languages: Java and VRMLScript (which was really JavaScript). My job was to implement both of these.

So I started with VRMLScript. I wrote a parser and an interpreter and stubbed out everything necessary to turn it into a JIT compiler (by the way, this was when JIT compilation was a brand-new thing), but I had no time to do that immediately. The interpreter ran pretty fast as it was. Then I had to write the glue to attach it to all the high-level data structures and map them onto the low-level data structures. That was a ton of typing.

In fact, it was so much typing that I found myself losing track of what I was doing because there was just so damn much to do. At the start of a major section, I erased my white board (which was either 32 or 64 square feet of space) and filled it with every object that I had to deal with. As I worked, I looked at the wall to see what I had to do next, and crossed items off as I finished them. This was both an organizational strategy and a coping strategy. For the latter, it allowed me to see how much progress I had made over the week.

There was another purpose too. Marty, who was my boss, talked to each of his engineers or collected email to get progress reports so he could update his MS Project reports. In my case, he walked into my room and looked at my white board and used that for updating my section of his chart. I liked this a lot because it was an extremely low-friction way to keep each other up-to-date: no meeting required.

In the initial design, I modeled the interpreter engine after the metacircular Scheme interpreter in Structure and Interpretation of Computer Programs. Except, you know, I did it in C++. It had a number of distinct advantages in terms of keeping scoping clean and making a nice, solid interpreter. When I had it running along, I met with Alan and he asked me how well it ran, and I had some benchmarks in terms of the number of thousands of nodes in the parse tree it could traverse per second. Alan listened to all of this and then chewed me out for using the nested binding environment of Scheme. This is a technique where you have, essentially, a stack of assoc lists. An assoc list is a list of bindings of names to values, so really, my environment was a stack of hashtables. When you entered a new function, a new hashtable was pushed onto the environment and the parameters of the function got bound into it. When you wanted the value of a variable, the environment would look through the hashtable at the top of the stack, and if it didn’t find it there, it moved on to the next hashtable, and so on. It was elegant and modeled the semantics perfectly, but it ran like a dog.

Alan looked at me and said, “Just make it a proper stack frame,” gave me a budget, and left it to me to implement. I did, with a bunch of bitching and complaining, but ultimately it worked and ended up running much faster. In fact, it outperformed the reference implementation by quite a lot.
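For the curious, here is a sketch in C++ of the kind of nested binding environment Alan objected to (the names are mine, not the original code’s). Every variable reference pays for a string hash plus a walk up the scope stack, which is roughly why it ran like a dog next to real stack frames:

```cpp
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

// A stack of hashtables, searched from innermost scope outward.
struct Environment {
    std::vector<std::unordered_map<std::string, double>> scopes;

    void push() { scopes.emplace_back(); }  // entering a function
    void pop()  { scopes.pop_back(); }      // leaving it

    void bind(const std::string& name, double v) { scopes.back()[name] = v; }

    std::optional<double> lookup(const std::string& name) const {
        for (auto it = scopes.rbegin(); it != scopes.rend(); ++it) {
            auto found = it->find(name);
            if (found != it->end()) return found->second;
        }
        return std::nullopt;  // unbound variable
    }
};

// The stack-frame version replaces all of this with a flat array of slots
// and indices computed at parse time: lookup becomes frame[slot].
```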

This pattern repeated itself with Java, which is fodder for another day.

I’m Old, Part XXXV: Abstraction

When I was in college at Oberlin, one of the early classes required for the CS major was called something like “Programming Abstractions”. I did a quick check, and yup, it’s still offered. This was a class that taught the programming language Scheme, which for many people is an eye-opener. Scheme can be viewed from a number of different levels, and it is both an appalling language and an awesome language.

I had a lot of issues with the language, most of which stemmed from trying to figure out how it was implemented. How the hell did everything work under the hood? I interrupted the lecture routinely because I really wanted to understand what was going on. Many years later, the head of the CS department told me that I was one of the most stubborn sons-of-bitches he’d ever had because I was never satisfied with a short answer. I get it – he had to get through the syllabus and reach as many of the students as possible, not teach a seminar on the efficient implementation of Scheme.

Still, I recall that he did one class where he covered the implementation of a bank account management system in Scheme. Scheme does everything in lists, and if you wanted a data structure in Scheme, you would make a list of the elements and then write accessor functions to get at each element and factory functions to build them. It was painful, but the point was that you could aggregate the accessors and factories and make something akin to the object properties and constructors found in other languages. If I recall correctly, the lecture also showed how you could change the behaviors and have different functional views on the same data, so you could have people who could, for example, view the data but couldn’t withdraw funds.

And this is the essence of abstraction in programming: the ability to describe the valid operations on a data structure in such a way that you can change or have different representations of the data without needing to change the code that consumes the interface.
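As a loose illustration (in C++ rather than Scheme, and with invented names), here is what those different functional views on the same data might look like. The account’s representation hides behind functions, and different callers get handed different sets of valid operations:

```cpp
#include <functional>
#include <memory>
#include <utility>

struct AccountView {                     // can look, can't touch
    std::function<double()> balance;
};

struct AccountFull {                     // the teller's interface
    std::function<double()> balance;
    std::function<void(double)> deposit;
    std::function<bool(double)> withdraw;
};

inline std::pair<AccountFull, AccountView> makeAccount(double opening) {
    // The hidden representation: just a shared double here.
    auto state = std::make_shared<double>(opening);
    AccountFull full{
        [state] { return *state; },
        [state](double amt) { *state += amt; },
        [state](double amt) {
            if (amt > *state) return false;  // insufficient funds
            *state -= amt;
            return true;
        }};
    AccountView view{full.balance};      // same data, fewer valid operations
    return {full, view};
}
```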

Not too long afterward, I was working at Bellcore on a project called SuperBook. SuperBook was an experimental hypertext system built to make it easier to read and find information in a text. It was a cool system that was way ahead of its time. It initially ran on a Sun workstation running the MGR window system. I had done a port of MGR to the Macintosh, and SuperBook would run under it, but the people in my department wanted SuperBook to feel more like a Macintosh application while still keeping all the advantages of the original.

Joel Remde had made an API for the application and I could work with it over a serial connection. I made hooks for it in the Mac application that implemented the API through a serial protocol. If you were logged into a UNIX system through the Mac’s serial port, you could run SuperBook on the Mac. The best data rate available to me at the time was 19200 baud, which was pretty appalling compared to the native application.

Right around that time, Apple started getting on board with ethernet and I got my hands on an ethernet card that I could plug into the Macintosh II that I used for development of SuperBook. I was tasked with adding ethernet support to the app. As I worked with it, I saw that both the serial and ethernet protocols shared a great deal in common with each other in terms of what I needed to do, although the implementation details were very different.

I was able to distill it down into a few actions:

  • open
  • data available
  • read data
  • write data
  • close

And from this, I implemented an object in C that was very similar in many respects to the one from that Scheme lecture a few years earlier. I had created an interface with two implementations that met it, and now I had an app that could select its communications channel on the fly without caring which one it used.
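The shape of it was probably something like the following sketch, written here as a C-style struct of function pointers (the original was straight C; the names and signatures are guesses for illustration, not the actual SuperBook API):

```cpp
#include <cstddef>

// One interface, two implementations (serial and ethernet).
struct Channel {
    void* impl;  // serial or ethernet state, opaque to the app
    int  (*open)(void* impl);
    int  (*dataAvailable)(void* impl);  // non-zero if a read won't block
    long (*read)(void* impl, void* buf, std::size_t len);
    long (*write)(void* impl, const void* buf, std::size_t len);
    void (*close)(void* impl);
};

// The app only ever talks to a Channel*; whether it is backed by the
// 19200-baud serial port or the ethernet card is decided at startup.
inline long sendAll(Channel* ch, const char* buf, std::size_t len) {
    std::size_t sent = 0;
    while (sent < len) {
        long n = ch->write(ch->impl, buf + sent, len - sent);
        if (n <= 0) return -1;  // channel error
        sent += static_cast<std::size_t>(n);
    }
    return static_cast<long>(sent);
}
```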

This is much of practical engineering: looking at a problem and not only solving it, but deciding if there should be levels of abstraction in the solution and how many are appropriate.

Sometime later I did a private study course at Oberlin wherein I implemented FORTH for the Macintosh. It was a nifty little system that was JIT compiled. At the end of the semester I gave a talk about my project called “What Does DOES> Do and Other FORTH Do’s and Don’ts” which covered the process of creating closures in FORTH and how they were implemented under the hood with a comparison to Scheme. One of the conclusions I had was that Scheme was abstract and FORTH was concrete – the reason being that in Scheme, you didn’t really know how any particular thing was implemented (which is a good thing and a bad thing) whereas in FORTH, you could do very much the same things and you knew exactly how they were implemented (which is also a good thing and a bad thing).

The point being that abstraction is neither good nor bad, but it is a tool to be used in judicious measure.

I’m Old, Part XXXIV: The Importance of Play

I’ve mentioned in previous blogs that I’ve been very lucky with contacts and connections throughout my career. One summer in college, I managed to get a job at Bell Communications Research writing code for an experimental phone services system (more on that some day). A year later, I took a year off from college and picked up a job in the same building, but in a different department.

At the time, there was a shortage of space, but Mike Lesk (creator of lex, uucp, and many other foundations of UNIX and the internet) was on sabbatical, so I got a seat in his office along with Dave Ackley and Karen Lochbaum. My big project was porting the MGR window system from SunOS to the Macintosh. Since I knew MGR pretty well, I also served as a helper for people in the department who were trying to get specific tasks done. For example, I figured out a way to get Hinton diagrams displayed as optimally as possible, which got used by some people working on speech recognition.

I learned a great deal about C, portability, coding style, and so on. I think one of the things that I learned about research and research coding was the importance of play. There were a lot of very creative people in the department and, besides being very hard-working, many of them played at work. I believe there is an important connection between play and creative work, and while a lot of software engineering is drudgery, the important bits and the breakthroughs are the result of creativity. You can’t just tell people “BE MORE INNOVATIVE” and expect it to happen. By playing, you shove the spinning gears into the background to churn away on their own.

Dave Ackley was tremendously creative and tremendously playful (I’m sure he still is, but I haven’t seen him in years). At the time, he was doing research on neural networks and was exploring the capabilities of neural network based systems. Dave frequently got a sandwich and a soda from the cafeteria for lunch and brought them back to his desk. If you got a sandwich, it came cut in half, held with two toothpicks, each with a colored cellophane ribbon – or, as Dave liked to see them, darts with fletching. After eating his sandwich and drinking his soda, Dave took the straw out of his drink, loaded up a toothpick dart, leaned back in his chair, and blew the dart up into the ceiling above his desk. It wasn’t long before he had a tiny forest of darts in the acoustic tile above his desk.

During my time there, I had managed to save some money and decided to take a vacation to the west coast with a friend of mine (which is its own story). I brought back gifts for people at work. For Dave, I brought back a toy “laser” gun that made noise and shot sparks. In short order, Dave set to finding out everything you could do with the toy, including shooting it in his mouth and at the monitor of the Sun workstation on his desk. Disappointed that it had no effect on the monitor, he contented himself with shooting it at various things on his desk.

Play is important. If you want a successful engineering group, encourage play. It makes the environment more pleasant, makes your corporate culture engaging, and results in better creative work.

On the “Right” Side of the Digital Divide

There is a digital divide in this country. It is between people who are computer literate and have ready access to high-speed internet and those who are either computer illiterate or do not have internet access (or both). Since the day in 1979 that my dad walked into a computer store and wrote a check while saying, “beat you to the bank,” I’ve been on the favorable side of the digital divide.

One thing that is on my side of the digital divide is a 3D printer, which I believe is a game-changing technology. Let me give you an example.

A few days ago, my son was cooking something in the microwave and when it was done, he opened the door and tore the handle right off. Looking at the damage, this was not a surprise. The handle was really pretty poorly designed (thanks, GE). It was held on by two screws that went through the door into bosses in the handle. Both bosses sheared right off; they were simply too weak.

Before 3D printing, my options would have been to buy a new handle (I checked GE’s web site: a replacement handle costs $80, just shy of 1/3 the cost of a whole replacement unit) or to pay $45 on eBay. I’m sorry, but a chunk of plastic doesn’t cost $45, let alone $80. At that point, I was considering making a replacement out of wood. I have some nice walnut that’s just waiting for something like this, but that involves time I don’t have.

Instead, I did this:

I knocked together a quick design in 123-D and printed two mounting brackets. When I made the design, I had assumed that I had some scrap 5/8″ copper pipe in my shop. After I printed them, I found that I did not, but I did have 1/2″ aluminum pipe and some Sugru, which I used to seal up the joints. Problem solved, albeit a bit of a bodge.

What still remains is the digital divide. If you’re reading this, I can guess which side you’re on, and it’s probably not the side that needs help. One thing you can do is find a way to support maker spaces and access to technology, and that means your local library. Many libraries offer access to high-speed internet, help with technology, and in some cases maker spaces. Do what you can to help your library and you will be narrowing the digital divide. And that is a very good thing.

The Unbeatable Squirrel Girl

In 1984, I started taking Computer Science courses at Oberlin College. My sophomore year, they created an actual CS major and I was the fifth student to sign up for it. At the time, the program was far from diverse. It was a shame, and I think we were all the worse for the systemic sexism in our introductions to computers. It was even more of a shame because I think we tried to be supportive of the women who were taking CS courses. Most of us were in the program because of the joy we got from writing code and seeing it work. Further, we liked sharing that joy with others. This was the original hacker culture: figuring out how to get a computer to do something unique. One of the early Apple II manuals had a glossary with a recursive definition:

hacker – n. someone who writes a program for the purpose of getting another hacker to say, “how the hell did you get the computer to do that?”

Last year, I started reading Marvel’s The Unbeatable Squirrel Girl. I had seen cultural references to it and thought I would give it a shot. I like it for several reasons, but I think what I like most is that Doreen Green (aka Squirrel Girl) is an excellent role model. She is self-assured. She has an extremely normal build (except for, you know, the tail), as do her friends. She wears sensible shoes and a very functional outfit for fighting crime. She has good peer relationships that are valued in both directions. She kicks butt. She has her own Twitter account. Finally, she studies Computer Science.

I love this. I love that in the midst of a fight with Doctor Octopus, she is speaking in code. And then she goes on to explain it.

Look at the joy on her face! I know that feeling well. Later on, she goes on to explain how to count in binary with your fingers.
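If you want to try it yourself, the idea behind finger binary is simple: each finger is a binary digit, so one hand counts from 0 to 31 and two hands count to 1023. A quick sketch of the counting (in C++, since that’s the language of the other sketches here):

```cpp
#include <cstdio>

int main() {
    for (int n = 0; n < 32; n++) {            // everything one hand can count
        std::printf("%2d = ", n);
        for (int finger = 4; finger >= 0; finger--)       // thumb is bit 0
            std::putchar((n >> finger) & 1 ? '1' : '0');  // 1 = finger up
        std::putchar('\n');
    }
}
```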

Read Squirrel Girl. Read it with your daughter(s). Help contribute to a field that desperately needs more gender balance. Read it with your son(s) to normalize strong, capable women.

Congratulations to Ryan North, who writes the stories, and to Erica Henderson, who is the main artist (although in this particular issue, Jacob Chabot was a guest artist).