I’m Old, Part XXXVI: Typity-Type-Type

Before I started at Axial (later, Newfire), Alan Wootton, one of the main founders, had a party at his apartment. I was quite happy to go. There were a number of people I knew from Adobe, and in the process I ended up meeting Alan’s long-time friend and partner, Marty Hess. What I didn’t know was that Marty was interviewing me for Axial. The problem was that I was close to the end of my stint in the Acrobat group and had just about had enough of the “crunch all you want; we’ll make more” attitude towards engineers.

Later, when I ended up working for Axial, Marty told me that when he met me, he couldn’t believe how bitter and burned out I was. It’s true, but I wasn’t really aware of it.

Axial was in the business of making a high-performance VRML engine. Our initial goal was to make an engine with the performance of Quake in your browser. VRML is an interesting spec. It’s a data representation language which includes a great deal of built-in broadcaster and listener patterns. A lot of it came straight out of the GoF patterns. In general, a VRML file was a set of nodes that described a scene and the various relationships of objects in the scene. For example, you could have a geometry node that described a shape, which had a texture node attached to it that described how it looked, and a touch sensor that described what happened if you clicked on it. Then there were interpolator nodes that could be used to change aspects of other nodes including position, rotation, texture and so on.
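To make that concrete, here is a minimal sketch of the kind of file being described, in VRML97 syntax. The node names (Transform, TouchSensor, OrientationInterpolator) are from the spec, but the particular values and the texture URL are made up for illustration:

```vrml
#VRML V2.0 utf8
DEF BOX Transform {
  children [
    DEF TOUCH TouchSensor { }                     # fires events when clicked
    Shape {
      appearance Appearance {
        texture ImageTexture { url "crate.jpg" }  # how it looks
      }
      geometry Box { size 2 2 2 }                 # the shape itself
    }
  ]
}
DEF TIMER TimeSensor { cycleInterval 2 }
DEF SPIN OrientationInterpolator {
  key      [ 0, 1 ]
  keyValue [ 0 1 0 0, 0 1 0 6.28 ]               # one full turn about Y
}
# ROUTE statements are the broadcaster/listener wiring:
ROUTE TOUCH.touchTime        TO TIMER.set_startTime
ROUTE TIMER.fraction_changed TO SPIN.set_fraction
ROUTE SPIN.value_changed     TO BOX.set_rotation
```

The ROUTE statements at the bottom are where the broadcaster/listener pattern shows up: clicking the box broadcasts a time event, the timer listens and starts running, and the interpolator converts the timer’s fraction into a rotation that it feeds back to the transform.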

There was one gaping hole in Axial’s implementation of VRML, which was the script node. This was a type of node that could be used to change all kinds of other nodes by executing code. Initially, the spec called out two scripting languages: Java and VRMLScript (which was really JavaScript). My job was to implement both of these.

So I started with VRMLScript. I wrote a parser and an interpreter and stubbed out everything necessary to turn it into a JIT compiler (by the way, this was when JIT compilation was a brand new thing), but I had no time to do that immediately. The interpreter ran pretty fast as it was. Then I had to make the glue to attach it to all the high level data structures and map them into the low-level data structures. That was a ton of typing.

In fact, it was so much typing that I found myself losing track of what I was doing because there was just so damn much to do. At the start of a major section, I erased my white board (which was either 32 or 64 square feet of space) and filled it with every object I had to implement. As I worked, I looked at the wall to see what I had to do next, then crossed items off as I finished them. This was both an organizational strategy and a coping strategy. For the latter, it allowed me to see how much progress I had made over the week.

There was another purpose too. Marty, who was my boss, talked to each of his engineers or collected email to get progress reports so he could update his MS Project reports. In my case, he walked into my room and looked at my white board and used that for updating my section of his chart. I liked this a lot because it was an extremely low-friction way to keep each other up-to-date: no meeting required.

In the initial design, I modeled the interpreter engine after the metacircular Scheme interpreter in Structure and Interpretation of Computer Programs. Except, you know, I did it in C++. It had a number of distinct advantages in terms of keeping scoping clean and making a nice, solid interpreter. Once I had it running, I met with Alan and he asked me how well it ran, and I had some benchmarks in terms of the number of thousands of nodes in the parse tree it could traverse per second. Alan listened to all of this and then chewed me out for using the nested binding environments of Scheme. This is a technique where you have, essentially, a stack of assoc lists. An assoc list is a list of bindings of names to values. So really, mine was a stack of hashtables. When you entered a new function, a new hashtable was pushed onto the environment and the parameters of the function got bound into it. When you wanted the value of a variable, the environment would look through the hashtable at the top of the stack, and if it didn’t find it, it moved to the next hashtable and so on. It was elegant and modeled the semantics perfectly, but it ran like a dog. Alan looked at me and said to just make it a proper stack frame, gave me a budget, and left the implementation to me. I did it, with a bunch of bitching and complaining, but ultimately it worked, ran much faster, and in fact outperformed the reference implementation by quite a lot.

This pattern repeated itself with Java, which is fodder for another day.
