Swift Argument Passing, Part Deux: Tuples

In the last post I did about Swift argument passing, I neglected a couple of types. One in particular is the tuple. If you’re not familiar with tuples, you should be. They’re a nice bit of sugar in a programming language – kind of like swearing in a spoken language: good once in a while, but if you overdo it, the people around you will probably wonder about your upbringing. Tuples are syntactic sugar for an anonymous struct (or class) with public properties. As a type, many languages represent them as (T0, T1, … Tn) where each Ti is a type. As a variable declaration, they are either represented as the whole type as a single aggregate or as a set deconstructed into names. For example, in F#:

let (x, y) = (1, "frobozz")

(x, y) is a decomposition of a tuple of type (int, string). You can see that this is equivalent to this C#:

struct Anon {
    public int x;
    public string y;
}
Anon xy = new Anon() { x = 1, y = "frobozz" };

In Swift, tuples are represented exactly like structs in memory. The trick is to figure out how to pack them. I know this was supposed to be about argument passing, but trust me, we’ll get there.
Suppose we want to write some code to figure out the offset in memory of each item in a tuple. You could do it like this (this is C#, by the way – I’m working in C# because Swift’s reflection capabilities can’t do this yet). In this example, assume that we have code elsewhere that, given a Type, can return its memory alignment and its size.

public struct TupleMap {
    public Type[] Types;
    public int[] Offsets;
    public int Size;
    public int Alignment;
    public int Stride;

    public static int RoundUpToAlignment(int value, int align) {
        return (value + align - 1) / align * align;
    }

    public static TupleMap FromTypes(Type[] types) {
        TupleMap map = new TupleMap();
        map.Types = types;
        map.Offsets = new int[types.Length];
        int size = 0, overallAlignment = 1;
        for (int i = 0; i < map.Types.Length; i++) {
            Type t = types[i];
            int fieldAlignment = Alignmentof(t);
            // align the running offset to this field's requirement
            size = RoundUpToAlignment(size, fieldAlignment);
            map.Offsets[i] = size;
            size += Sizeof(t);
            overallAlignment = Math.Max(fieldAlignment, overallAlignment);
        }
        map.Size = size;
        map.Alignment = overallAlignment;
        map.Stride = RoundUpToAlignment(size, map.Alignment);
        return map;
    }
}

This is a fantastic gem – given the types of the elements in a Swift tuple, we can locate any particular item in memory. In fact, this code works on Swift structs as well because, like I said earlier, Swift tuples really are syntactic sugar on top of structs. Almost. The divergence has everything to do with Swift’s argument passing, and while I don’t agree with the choice of the language designers, I have respect for it.
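If you’d like to poke at the packing rules without a C# compiler handy, here’s the same arithmetic transliterated into C. The sizes and alignments in the usage note below are my assumptions for a hypothetical (Int, Bool, Int) tuple on a 64-bit machine, not anything queried from the Swift runtime:

```c
/* Round value up to the next multiple of align. */
static int round_up(int value, int align) {
    return (value + align - 1) / align * align;
}

/* Compute per-element offsets, total size, and stride, mirroring
   TupleMap.FromTypes above. Returns the stride. */
static int tuple_layout(const int *sizes, const int *aligns, int n,
                        int *offsets, int *size_out) {
    int size = 0, overall = 1;
    for (int i = 0; i < n; i++) {
        size = round_up(size, aligns[i]); /* align this field's offset */
        offsets[i] = size;
        size += sizes[i];
        if (aligns[i] > overall) overall = aligns[i];
    }
    *size_out = size;
    return round_up(size, overall); /* stride is size rounded to alignment */
}
```

For sizes {8, 1, 8} and alignments {8, 1, 8} – a plausible (Int, Bool, Int) – this produces offsets 0, 8, and 16, size 24, and stride 24: the Bool tucks in right after the first Int, and the second Int bumps up to the next 8-byte boundary.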
In Swift, every function takes precisely 1 argument and returns precisely 1 value. This is one of the places in Swift where you can look at it and say “yup – this is a functional programming language” (other places, not so much – oh, and the thing about returning precisely one value? That’s a lie, but it’s a lie of convenience). “But wait,” you cry, “didn’t you do a whole article about how arguments are passed and you’re now telling me there’s only one?” I did. It was not alt facts. It was an Obi-Wan Kenobi truth.
See, Swift has two totally different ways of representing tuples. The first is in memory, as I mentioned before. The second is as passed to a function. On x64, Swift more or less follows the standard ABI of putting arguments into registers and eventually overflowing into the stack. When I call this function:

public func performFabulousTrick(spot: Dog, location: City, doJump: Bool, repeat: Int) -> () { /* ... */ }

spot will get passed in rdi, location in rsi, doJump in rdx, and repeat in rcx (assuming that Dog and City are classes). So essentially, when a tuple is passed to a function, each element goes into a register (a value type that fits in up to 3 registers will consume that many registers) until the registers run out, at which point the rest overflows onto the stack. The problem here is what happens when you have arguments that themselves are tuples. The register packing rules don’t change.
So let’s say that I foolishly chose to represent an affine transform as a tuple of 6 floats and I want to get the (tx, ty) elements. I could write it like this:

public func getTxTy(mat:(_: Float, _: Float, _: Float, _: Float, tx: Float, ty: Float)) -> (Float, Float) {
    return (mat.tx, mat.ty)
}

But au contraire, mon frère, this is going to consume 6 registers. So you see that a tuple matches a struct in memory layout but not in argument passing. So this is a case of where do you put your language inconsistency? For argument passing, they put the inconsistency in the difference between struct and tuple passing but chose to consistently handle all sub-tuples and to make functions “pure”. Whereas I would argue that sub-tuples should be treated more like structs: exceed 3 machine pointers and a copy gets passed by reference. It just makes pulling out the elements of a 3-pointer-or-less sub-tuple somewhat harder – but no harder than accessing the elements of a struct. Like I said, I don’t agree with the choice, but I respect it – mostly because I like the cleanness of always taking 1 argument and nearly always returning 1 value. Plus Swift has an interesting way of handling functions that return nothing:

public func foo(x: Int, y: Int) { }

This function actually does return something – it’s an empty tuple. In fact you can also write it like this:

public func foo(x: Int, y: Int) -> () { }

So this is a function of type (Int, Int) -> (). Nifty trivia: Void is actually a typealias for the empty tuple type (), and a value of either spelling consumes 0 bytes of memory. So if I have this function:

public func foo(x: Int, a: (), b: (), c: (), y: Int) { }

x will go into rdi and y will go into rsi and a, b, and c are just empty placeholders. Similarly, if we lay out a tuple in memory the same way, the 3 middle elements will take up no space.
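You can check that claim with the same layout arithmetic from earlier. This is just a sketch of the math, assuming () has size 0 and alignment 1 (the sizes are my 64-bit assumptions, not output from the Swift compiler):

```c
/* Offsets for a tuple shaped like (Int, (), (), (), Int): the three
   zero-sized middle elements land at the same offset and add nothing. */
static void empty_tuple_layout(int offsets[5], int *total) {
    int sizes[]  = { 8, 0, 0, 0, 8 }; /* assumed: Int is 8 bytes, () is 0 */
    int aligns[] = { 8, 1, 1, 1, 8 };
    int size = 0;
    for (int i = 0; i < 5; i++) {
        size = (size + aligns[i] - 1) / aligns[i] * aligns[i];
        offsets[i] = size;
        size += sizes[i];
    }
    *total = size; /* 16 bytes: x at offset 0, y at offset 8, the ()s free */
}
```

x lands at offset 0, a, b, and c all "land" at offset 8 without consuming anything, and y also starts at offset 8, for a total of 16 bytes.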

Again, this is an interesting intellectual exercise in analyzing the practical implementation of functions and argument passing to see where the Swift engineers drew the line – for the most part, they opted for purity (except for inout parameters – I forgot to mention those) with an eye towards practicality. That they drew the practicality line in a different place than where I would have chosen is fine, and since the Swift ABI is not yet finalized, it may yet change.

“Hang on. You tried to slip that lie about returning one value mostly nearly whatever.”
Aren’t you exceptional! No, I didn’t try to slip it by. That’s a topic for another day.

Magic Smoke

My dad is a retired electrical engineer who worked for Bell Labs for quite a few years. At home, he had a nice little setup in our basement with a Weller soldering iron, components, perf board, wire, and a host of other implements of destruction. Over the years, he built a clock, a calculator, a breadboard to play with a TI digital sound chip, and other projects. He even made a carnival game for the cub scouts where you threw foil-covered objects at a hole in a painted plywood sandwich board. If the object made it into the hole, it would short a couple of exposed wires together and start a tape player sounding the win. And a Heathkit TV. Did I mention that? He built a flipping TV.

As kids, we hacked around with the things that were there, carefully sorted into colored tubs. The coolest things, of course, were the LEDs because, hey lights! There was also a nifty multi-cell battery lifted from the Bell Labs stock room. I tried to find a picture of it, but no luck. It was a heavy brick with the Western Electric logo on the side and a bunch of spring clips on the top. One was a common ground. The others were positive terminals that were rated at various voltages from .5 V up to 22.5 V. I liked taking out LEDs and hooking them up to the battery via clip leads and lighting them up. In particular, there was one kind of red LED with a black housing and red dome that if you hooked it up to 22.5V, would explode and shoot the red dome off into the distance like a rocket, leaving a contrail of acrid smoke behind it. We spent most of dad’s collection that way.

When I got older, I liked messing with example circuits from Radio Shack data books and when Craig Anderton wrote his book Electronics Projects For Musicians, I was all over it. Dad taught me how to use his wiring pen and how to solder properly and I built a distortion box from scratch, which 33 years later, still works quite well (although I looked for it and I’ve misplaced it to my chagrin).

A couple of friends, Mike Sadowski and Rick Veracco, and I decided to start a business making these. We learned how to lay out and etch circuit boards, prepare a BOM, and source parts, and we got the father of Caitlin Hadtke (another friend) to design a label for it using some of our own crappy artwork.

And at our insistence, the controls went to 11.

We made and sold a few dozen of them, barely squeaking out a profit, but wow did we learn a lot. For example, we learned that you shouldn’t dump spent etchant down the drain of the basement slop sink. We flushed it with a ton of water, but it turns out there was enough oomph left in the etchant to eat pinholes through the copper drain pipes, ensuring that the slop sink leaked and requiring my dad to pay to get it fixed. He had the patience of a saint.

Today, things are a little different. For one, Radio Shack is all but dead as a local source of electronic kits and components. On the other hand, there are a number of other things that make learning how to work with electronics easier and more fun. For example, Little Bits – snap-together electronic components. My 10-year-old son has had a lot of fun playing with them.

After messing with the built-in ideas, he made a refrigerator alarm – a device that you put in the fridge and when light hits it, it sets off the alarm.

There’s also SparkFun, one of my favorite sources for hack projects.

The point is that while the accessible brick-and-mortar shops are mostly gone, online retailers have filled that gap very well, and hacking around with electronics projects is easier than ever – and that’s a very good thing. Even better is having public figures like Adam Savage promoting making and removing geek stigma.

I’m Old, Part XLIV: OCR and Support

At my last job, one of the many things I did was create .NET bindings to OCR engines. Optical Character Recognition is a tricky technology that is rife with hacks and tricks to try to get the recognition rate up on crappy-quality scanned documents. In that technology space, there are around 10 or so decent-quality engines, and I’ve worked with most of them in one way or another.

Before I go on, I want to talk about OCR companies. All of the ones that I worked with are batshiat insane to some degree. I don’t say this lightly. All of them had unusual licensing terms and/or unique run-time licensing. Most of them offered a way to build their tool into a toolkit and also had an end-user application that they sold. One company in particular wrote a contract that said–and I swear, I’m not making this up–that you couldn’t use their toolkit to build software to recognize text and create documents. See what I mean? We hired an engineer, Justin, who early on was consistently appalled at this. Eventually, he got jaded enough that when one of these issues came up, if I started the sentence, “because all OCR manufacturers are…” he would quickly finish it “…batshiat insane.” while rolling his eyes.

When I was initially hired, as part of the interview I was given a task to design an OCR interface. I put on my architect hat and designed a nice little class hierarchy that created a decent, extensible, engine-neutral interface. When I was hired, my first task was to learn C#, my second was to implement the OCR interfacing for real. I was given an engine to work with that was implemented as a C library. I built a C# set of tools that hid the specifics and sharp edges of the C library and presented instead a friendly interface that was easy to get started with and had room to grow. For example, the initial toolkit had the ability to translate a scanned document into a few basic document types, including PDF. I exposed those tools as if they were separate objects. Eventually, we added our own PDF output tool that was far better than that engine’s and it stitched into the workflow without deep changes to our users. The toolkit was neutral enough that we were able to get 7 different OCR engines to work with the same front-facing interface.

The main problem with working with OCR engines is initializing their code, managing licensing and preparing them to run. Every single engine had unique problems. Every. Single. One. Explaining this to our poor users was an uphill battle that our support engineers dealt with. We wrote sample code, documentation, and tech notes all of which were routinely ignored.

One particular engine had truly inspired licensing and had odd requirements in terms of having certain directories available to it to find dlls and resource files. All of these things had to be done well before you even touched the engine class, or it would fail miserably. We documented this and set up examples that said “you must do this or you will see this error.” Many customers got this right.

Then there was this one customer. He called into support angry. Angry because the engine was expensive and it wasn’t working. Our support engineer worked with him and explained what he needed to do (i.e., read the tech note to him). He ignored the engineer, had no luck, and called back in to escalate. He was sent to my peer, Lou, who is very patient and told him pretty much the same thing the support engineer had told him, which he again ignored. He called back and wanted to escalate to me.

Now, we didn’t have a big office, so I knew exactly what was going on: angry customer who wouldn’t listen. Got it. Been there, done that, bought the t-shirt.

I got onto a remote desktop session with him and got on a speakerphone call. I had him show me his code to see what he was doing. I dictated breakpoints and asked him to show me the contents of particular variables. Great. I was pretty sure what was going on, but to be 100% sure I needed one more thi–he pulled up a web browser and started to search the web. I said very directly something along the lines of, “do you want me to help you or are you going to surf the web?” He minimized the web browser and I looked at his code – yup – he had done nothing that any of the other engineers had suggested. I told him to give me control of the keyboard and mouse and I put into his code the magic that the tech note suggested, ran it, and saw correct results.

He ended up sending Elaine, one of our support engineers, a bouquet of flowers and called me a prima donna. And to this day, I still believe he is without a clue.

I’m Old, Part XLIII: Steve, Steve, Steve

I started at Adobe in Mountain View, California in 1990, fresh out of college. It was a novel experience on many levels. I grew up in New Jersey in a nice little suburban community in a house that bordered on old woods that were slightly swampy. I had become accustomed to the flora and fauna of New Jersey and Silicon Valley was something completely different. The sky was the wrong color blue; the leaves on the trees were the wrong shade of green; there was relatively little humidity.

At that time, Adobe moved me and my then-wife across the country and put us up in temporary corporate housing while I got started at the company and my wife looked for permanent housing. On the first day, during orientation, we were told by HR that if you were married and your spouse was a woman, she would be sent a bouquet of roses, and if your spouse was a man, he would be sent a bottle of champagne. How nice!

The roses never showed up. Weird.

We got daily copies of the San Jose Mercury News, as a tool for looking for apartments and keeping up with current events. I saw a headline in the paper, “ASTRONAUT STEVE HAWLEY TO JOIN NASA AMES”. I thought it was funny, so I cut it out and put it up in my cube.

Then we started noticing that mail we should have gotten never arrived. We’d ask at the main office at the housing and nothing. We knew it was supposed to have arrived. After poking around and asking questions, we finally put two and two together. Our mail (and the roses) had gone to a Steve Hawley. Just not me.

Eventually, we settled on a place in Morgan Hill, which was a brutal commute, but it was about all we could afford that wasn’t in an iffy neighborhood. We set up forwarding for mail from the corporate housing and settled in. After a week, I got an ATM card and PIN addressed to Steven W. Hawley from an unfamiliar bank. I brought them in to work and called up NASA Ames and had the switchboard connect me to his office, where I spoke with his assistant. Here was our conversation:

“Steve Hawley’s office.”

“Hi, I’d like to speak to Steve Hawley please.”

“Who may I say is calling?”

“Steve Hawley”

“No, to whom am I speaking.”

“Really. Steve Hawley. And I would like to speak to your Steve Hawley.”

“May I ask why?”

“Sure, I have your Steve Hawley’s ATM card and PIN and while I wouldn’t mind drawing from an astronaut’s salary, I figure that he would want it back.”


“Yup. It says here ‘Steven W. Hawley’. I’m not W.”

“Oh. That’s not him.”


“My Steve Hawley is Steve A. Hawley not W.”

“Huh. OK. Thanks for your time.”

If I had gotten through to him, I would also have given him an earful about Sally Ride, because when they got married, I had gotten a fair amount of ribbing about it on account of our names.

Ultimately, I sent everything back to the bank, but I had never expected that there would be 3 Steve Hawleys (fortunately, each with a different middle initial) who all moved to Silicon Valley in the span of a month.

Go figure.

I’m Old, Part XLII: Messing With the Landlord

When I was working for Axial/Newfire, we had a decent little office suite in Saratoga, CA. It was a very California design in that the building was very much built around the outdoors. If you wanted to go to the bathroom (or, in the words of Cowboy Dave, “I have to go see a man about a dog”), you had to step outside, since the entrance to the bathroom was only accessible outdoors.

The building had a couple wings and in the center was a nice koi pond with attractive rocks and plants around it. The landlord was very proud of the grounds and the fish, but there was a problem. Since this was an open area that was not far from wilderness, the koi routinely got poached by something. Likely it was a bird of prey – maybe a red tailed hawk. Could also have been coyotes.

I was talking with the landlord and he was complaining about how much the koi cost. I suggested that he string a lattice of fine wire across the wings of the building, which would certainly have sorted out the problem if, when he installed it, he hadn’t done such a half-assed job. Oh well.

So, being gainfully employed, I decided to solve his koi problem: I visited the local pet store and bought $20 worth of feeder goldfish. Do you know how many fish that is? At that time, way more than 100. I had a couple of plastic bags of feeders and a jar of food, and I let them loose into the pond and fed them periodically. They were initially too small to be a temptation to birds. Over the months, a lot of them died, but the rest grew, and I loved that my bargain feeder fish were hidden among his koi, indistinguishable to the untrained eye.

Yeah, it was petty, but sometimes I’m petty.


I’m Old, Part XLI: Trolling Creative People

My mom had a number of skills. One was that she was a book hound. Over the years she found a vast number of truly interesting books on a wide variety of subjects. She often went to the Strand bookstore in New York and gathered all kinds of interesting books such as Triviata, Mrs. Byrne’s Dictionary, and The Charles Addams Mother Goose.

One book in particular that served me and my high school friends well was a book of Parlor Games (I can’t find a copy online). It was a compendium of very simple and entertaining games that you could play with very few extra accouterments. The games included things like “Pass the Orange”, “Honey I Love You”, “Spoons”, and so on.

When I was at Adobe, I remember being at a birthday party (I think it was Joe Holt’s) that was held at Adobe. We did the whole party thing and when things were kind of quiet, I pulled out a couple of these games to play. After a few rounds of other games, I suggested a game from the book called Decadence. This is an anti-game and it’s somewhat vindictive. It feels like a typical party game except that one person is a victim.

The rules are simple. You pick a person who is “it”. You tell them that everyone else is going to make up a story and they’re going to guess the story by asking us yes/no questions. You send them out of the room. Then you tell everyone else the real rules:

  1. If “it” asks a question that ends in a consonant, the answer is ‘no’
  2. If “it” asks a question that ends in a vowel, the answer is ‘yes’
  3. If “it” asks a question that ends in ‘y’, the answer is ‘maybe’

Simple. Then you tell the rest of the people that the responses will need some acting: don’t answer right away, and confer on the questions so it doesn’t look so automatic. Then the group waits a few minutes before calling “it” back in so it feels like the group actually made up a story.
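The rules are mechanical enough that a lazy co-conspirator could encode them in a few lines of C. This is just for illustration – decadence_answer is my own name for it – and it looks at nothing but the last letter before any trailing punctuation:

```c
#include <ctype.h>
#include <string.h>

/* Apply the three Decadence rules to a question and return the answer. */
static const char *decadence_answer(const char *question) {
    size_t i = strlen(question);
    /* skip trailing punctuation like the question mark */
    while (i > 0 && !isalpha((unsigned char)question[i - 1]))
        i--;
    if (i == 0)
        return "maybe"; /* no letters at all; punt */
    char last = (char)tolower((unsigned char)question[i - 1]);
    if (last == 'y')
        return "maybe";                /* rule 3 */
    if (strchr("aeiou", last) != NULL)
        return "yes";                  /* rule 2 */
    return "no";                       /* rule 1 */
}
```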

On this particular evening, the victim was Eric Zocher (you might know Eric from such games as Dark Castle) who was a manager at Adobe. He ran through some initial questions and things went screwy.

“Are the characters solid?”

“No.”

“Are they liquid?”

“No.”

“Are they gaseous?”

“No.”

Eric now looks totally puzzled. You could smell the heat coming off his brain as he’s thinking it through.

“Are they made of plasma?”

“Yes.”

Eric looked relieved.

“Would they fit in this room?”

“No.”

“Would they fit in this building?”

“No.”

“Would they fit…” Eric is struggling here.

“Would they fit in the Cow Palace?”

We look at each other and confer.

“Yes.”

We went on for another 10 minutes or so and, as happens in the game, the answers started to become contradictory. Eric was trying very hard to sort out the contradictions and eventually figured out that we had just trolled him big time.

Unfortunately, Decadence (like the Daffy Duck trick) is something that can only be played once with any one group of people, but man was it worth it.

I’m Old, Part XL: When Cycles Mattered

When I was 20, I took a year off college and worked at Bell Communications Research. I was a contractor and worked in a group that was doing, primarily, human-computer interfacing research. Within the department, most of the people worked on Sun 3 workstations, which ran on a 68020 at about 16MHz with a 1-bit display. They came with a windowing system, SunTools, which ran like a dog.

This was a different era for computing. Men were real men. Women were real women. And small furry creatures from Alpha Centu…sorry. And every cycle mattered if you wanted an app that didn’t run like a dog.

There was an engineer in the group named Steve Uhler, who had a wonderfully refreshing love/hate attitude towards these machines. He couldn’t stand SunTools, so he wrote his own windowing system, MGR. It was built to squeeze as many cycles out of the CPU as he could manage without reverting to assembly and it had a very interesting mode of operation. Most windowing systems at the time were written so that they had to run natively on the CPU of the computer. X had ways of running on a different machine, but early on it didn’t work particularly well. Instead, MGR ran on your local system and programs that wanted to use graphics would use a set of macros that generated escape codes to perform graphics operations on the windowing system. Essentially, if you wanted to draw a line, you did a printf(). To make your life easier, Steve made a bunch of macros which did that for you. In addition to basic graphics, there were higher level routines that could make new windows, make menus, resize windows, move windows, and so on.
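To make that concrete, here’s a sketch of the flavor of the thing in C. These are not MGR’s actual escape sequences or macro names (I don’t have them handy); it’s only meant to illustrate the principle that a graphics request is nothing more than formatted text on an output stream:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical escape-code scheme, NOT MGR's real one: ESC followed by
   comma-separated coordinates and a one-letter opcode ('l' for line). */
#define M_ESC "\033"

/* "Drawing a line" is just formatting text; here we format into a buffer,
   but writing the same bytes to the terminal is what a real client did. */
static int m_line(char *buf, size_t len, int x0, int y0, int x1, int y1) {
    return snprintf(buf, len, M_ESC "%d,%d,%d,%dl", x0, y0, x1, y1);
}
```

The windowing system sits on the other end of the stream, parses the codes, and does the drawing – which is why the client can be running on any machine that can produce text.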

Under the hood, Steve had written some really intense C to make sure that this all happened very quickly. In his bit-blitting code, he had macros that unrolled the loops, which in turn had macros that handled various bit-level operations (AND/OR/XOR/etc). The code ran very, very quickly and as a result the UI felt very snappy. In addition, Steve had an interesting solution to the “partially obscured window” problem. Most systems are built to not care about obscured windows and notify the app when they’ve been uncovered so the app can repaint the window. Not Steve. He kept a backing store for each window and all operations went into the backing store. When a window was exposed, he just blitted the backing store on top and ta-da, the window was there. His code also routinely broke C preprocessors. I mentioned earlier that Steve had a love/hate attitude, and he loved to hate compilers. If the department got a new machine or there was a new, better C compiler, Steve would break it and complain with a smile.

The cool thing was that if you had some serious number crunching to do, you could do it on a serious number crunching computer and #include the MGR header for spitting out the graphics. It worked astoundingly well.

I wrote a ton of code for MGR, including a UI front end to gnuchess, a higher level UI library for doing dialog boxes and control widgets, a tool to display the output of pic, Hinton diagram display code, and a raft of graphics hacks including a hack that had a bunch of walking eyeballs.

I also ported MGR to the Macintosh. That was interesting because, while Steve’s bit-blit code worked on the Mac, it didn’t run as well as native Mac apps. This shouldn’t have been a surprise. The Mac I used ran at 8MHz, not 16MHz, and had a 16-bit data bus, not a 32-bit data bus. Also, QuickDraw, the native graphics API, was written in carefully tuned assembly language. So I found the bottlenecks into Steve’s bit-blit code and replaced them with QuickDraw calls, and hooray, everything worked and ran close to the same speed as on the Sun.

And while I enjoyed shaving cycles in my code, I don’t miss it all that much. Yes, I know that software has bloated to fill memory and CPU capacity, but at the same time it’s very straightforward today to run a decent profiler, find the main hot spots and cool them off. And this can typically be done in a way that doesn’t compromise the readability or intent of your code.

The last time I talked to Steve, he was working on research projects related to the Java VM. He had the same love/hate attitude while he was talking about work that tracked local variables, and code that he put in to make them get stack-allocated instead of heap-allocated so you wouldn’t burden the garbage collector with things that were short-lived and guaranteed to be collectable at method exit.

Bresenham Line Algorithm in F#

I love the Bresenham line algorithm. It’s an old algorithm that’s been around for more than half a century, since it was created to drive plotters. One of the cool things about it is that it requires nothing more than addition, subtraction, and multiplication by 2 (usually done as a bit shift or an add). Another cool thing is that it lends itself quite nicely to being implemented in F# tail-recursively. I did this as a mental exercise in Rosetta coding to see what it would look like in F# (spoiler: it looks really nice). The reference code I started with was in C and can be found here (PDF), which has a pretty great description of the algorithm and how to optimize the living daylights out of it. The F# code is optimized for reading.

To go into this, I’m going to say that for the algorithm there are 8 separate cases for the 8 octants in the Cartesian plane, but with a little planning, you can get this down to 2 octants (one x-dominant, the other y-dominant), and honestly, the two octants are nearly identical.

Since F# is a functional language, we’re going to solve the inside-out problem by deferring the work that must be done at each pixel to an external function:

type PixelFunc = int -> int -> unit

which we will call with the (x, y) coordinates of a pixel that should be set. Easy.
Next, here’s the internal x-dominant octant case:

let Octant0 xdir =
    let dy2 = dy * 2
    let dy2dx2diff = dy2 - (dx * 2)
    let rec Octant0Rec x y count error =
        if count = 0 then ()
        else
            let (y0next, errornext) =
                if error >= 0 then (y + 1, error + dy2dx2diff)
                else (y, error + dy2)
            pf (x + xdir) y0next
            Octant0Rec (x + xdir) y0next (count - 1) errornext
    pf x0 y0
    Octant0Rec x0 y0 dx (dy2 - dx)

You should be aware that this internal function refers to a few variables from an outer scope: dx, which is the absolute value of Δx; dy, which is the absolute value of Δy; and x0 and y0, which are the starting pixel. Essentially, Octant0Rec will draw count pixels starting from an initial x and y, stepping along x (either positively or negatively) and conditionally staying on the same y or moving when the error gets too big. Octant0 sets up dy times 2 and the difference between dy times 2 and dx times 2. So simple.
So now that we have that out of context, let’s put it in context with the whole routine:

let DrawLine ix0 iy0 ix1 iy1 (pf:PixelFunc) =
    let (x0, y0, x1, y1) =
        if iy0 > iy1 then (ix1, iy1, ix0, iy0)
        else (ix0, iy0, ix1, iy1)
    let tdx = x1 - x0
    let (dx, xinc) = if tdx < 0 then (-tdx, -1) else (tdx, 1)
    let dy = y1 - y0
    let Octant0 xdir =
        let dy2 = dy * 2
        let dy2dx2diff = dy2 - (dx * 2)
        let rec Octant0Rec x y count error =
            if count = 0 then ()
            else
                let (y0next, errornext) =
                    if error >= 0 then (y + 1, error + dy2dx2diff)
                    else (y, error + dy2)
                pf (x + xdir) y0next
                Octant0Rec (x + xdir) y0next (count - 1) errornext
        pf x0 y0
        Octant0Rec x0 y0 dx (dy2 - dx)
    let Octant1 xdir =
        let dx2 = dx * 2
        let dx2dy2diff = dx2 - (dy * 2)
        let rec Octant1Rec x y count error =
            if count = 0 then ()
            else
                let (x0next, errornext) =
                    if error >= 0 then (x + xdir, error + dx2dy2diff)
                    else (x, error + dx2)
                pf x0next (y + 1)
                Octant1Rec x0next (y + 1) (count - 1) errornext
        pf x0 y0
        Octant1Rec x0 y0 dy (dx2 - dy)
    if dx > dy then Octant0 xinc
    else Octant1 xinc

The first thing the setup code does is order the starting and ending (x, y) coordinates, then it calculates the deltas and, based on whether x or y is dominant, selects the appropriate octant code.

So we can see that you can implement Bresenham’s line algorithm in a way that is readable, tail-recursive, and uses nothing but immutable values.
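For comparison, here is a transliteration of the same two-octant structure into C, with mutable loop state instead of recursion, plus a tiny pixel recorder so the output is visible. The names here are mine, not from the original reference code:

```c
typedef void (*pixel_func)(int x, int y, void *ctx);

/* Two-octant Bresenham, mirroring the F# structure above:
   order the endpoints by y, then pick the x- or y-dominant case. */
static void draw_line(int ix0, int iy0, int ix1, int iy1,
                      pixel_func pf, void *ctx) {
    int x0 = ix0, y0 = iy0, x1 = ix1, y1 = iy1;
    if (iy0 > iy1) { x0 = ix1; y0 = iy1; x1 = ix0; y1 = iy0; }
    int tdx = x1 - x0;
    int xinc = tdx < 0 ? -1 : 1;
    int dx = tdx < 0 ? -tdx : tdx;
    int dy = y1 - y0;                      /* >= 0 after the ordering */
    int x = x0, y = y0;
    pf(x, y, ctx);
    if (dx > dy) {                         /* octant 0: x dominant */
        int dy2 = dy * 2, diff = dy2 - dx * 2, error = dy2 - dx;
        for (int count = dx; count > 0; count--) {
            if (error >= 0) { y++; error += diff; } else error += dy2;
            x += xinc;
            pf(x, y, ctx);
        }
    } else {                               /* octant 1: y dominant */
        int dx2 = dx * 2, diff = dx2 - dy * 2, error = dx2 - dy;
        for (int count = dy; count > 0; count--) {
            if (error >= 0) { x += xinc; error += diff; } else error += dx2;
            y++;
            pf(x, y, ctx);
        }
    }
}

/* A recorder to stand in for a real frame buffer. */
static int rec_pts[64][2], rec_n;
static void record(int x, int y, void *ctx) {
    (void)ctx;
    rec_pts[rec_n][0] = x;
    rec_pts[rec_n][1] = y;
    rec_n++;
}
```

Calling draw_line(0, 0, 5, 2, record, 0) records the six pixels (0,0), (1,0), (2,1), (3,1), (4,2), (5,2) – the same pixels the F# version should hand to its PixelFunc.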

Swift Argument Passing Protocol

Something that is not particularly well-documented in the Swift project is how to call Swift functions. Swift has a relatively simple argument-passing protocol, but the simplicity is ruined by special cases that make it rather inconvenient to work with at times.

First, let’s start with the simplicity. Swift has two general ways to pass arguments: by reference or by value. Generally speaking, value types get passed by value and reference types get passed by reference. See, isn’t that easy?

So what is a reference type in Swift? Simple: any class is a reference type. What’s a value type? Simple: structs, enums, and scalar types (Bool, Int, Float, etc.).

Now the special cases – value types get passed by value if and only if the size of the value type is 3 machine pointers or fewer; if they exceed that, a copy gets passed by reference. Why is this a special case? Because pretty much every current ABI for current processors says that value types are passed by value only if the size of the value type is 2 machine pointers or fewer. Great. Thanks, Swift, you’re the best.

Now, when I say that values get passed, there’s more to it. Since Swift uses reference counting for its memory management, there is infrastructure that you need to care about quite a bit. Whenever a function gets called in Swift, the caller will bump the reference count of any reference types (or reference types contained within value types) and the callee will drop the reference count before returning. The problem with this, and it is an important one, is that if you are calling a Swift function, you must bump the reference count of everything you pass in. If you don’t, Swift will dutifully decrement the reference count on the way out and may cause the object to get disposed out from underneath you.

OK great – supposing that I have a Swift data type, how do I make this happen? If you have a reference type, this is easy: the library libSwiftCore exports a function called swift_retain to do just this, and its counterpart swift_release to drop the count. The problem (and trust me, there are problems every step of the way) comes when you get called by Swift: now you are the one who is supposed to drop the reference count on the way out. Fortunately, swift_release is there for you too.
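Here’s a toy model of that ownership convention in C (every name here is made up; toy_retain and toy_release just stand in for swift_retain and swift_release): the caller bumps the count, the callee drops it, and forgetting the bump leaves the count net negative.

```c
/* Toy stand-in for a Swift heap object: just a reference count. */
typedef struct { int refcount; } toy_object;

static void toy_retain(toy_object *o)  { o->refcount++; }  /* plays swift_retain  */
static void toy_release(toy_object *o) { o->refcount--; }  /* plays swift_release */

/* A Swift-style callee owns its argument at +1, so it releases it
   before returning. */
static void swift_style_callee(toy_object *o) {
    /* ... use o ... */
    toy_release(o);
}

/* A well-behaved caller retains before the call, balancing the
   callee's release. Skip the retain and the count drops by one,
   possibly freeing the object out from under you. */
static void call_swift_style(toy_object *o) {
    toy_retain(o);
    swift_style_callee(o);
}
```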

Great, we have swift_retain and swift_release in our tool belt. We’re all set, right? Nope. Like I mentioned, we care quite a bit about value types that contain reference types. You would like swift_retain and swift_release to work on value types too, but they don’t – and there’s a really good reason why. Value types are supposed to contain the minimum amount of information possible. For example, in Swift a Bool is really one bit (usually padded out to a byte or more). Swift doesn’t attach the overhead of object layout to value types. Instead, for every type in Swift, there are a few data structures available to the compiler/runtime that carry that information. The main three are the Metatype (or Metadata), the Nominal Type Descriptor, and the Value Witness Table. Between these three blocks of data, you can find the following information:

  • Fields
  • Field Types
  • Size of the type
  • How to manipulate the data type

To get the Metatype of any particular data type in Swift, you can access a symbol named something like __TMd<more mangling>, which is the type Metadata for that type. The Metatype has a pointer to the Nominal Type Descriptor, which will tell you the size of the type.

The Value Witness Table can also be accessed through a symbol associated with the type, named something like __TWV<more mangling>. Note that it’s not a function (no F after the T); it is a symbol for the table itself. Wait. Value Witness What? What the heck is it?

It is a set of 19 function pointers to methods that perform operations on Swift value types, such as “copy this Swift value type to this uninitialized location”, “copy this Swift value type to this previously initialized location”, or “take this Swift value type from this location (dropping reference counts) and copy it to this uninitialized location”. And in here are the pieces that you need to effect a swift_retain or swift_release on a value type. When Swift calls Swift with a value type, it uses these methods (usually inlined) to do a copy-to-uninitialized for passing the value type. The callee will make the corresponding take call, which drops the reference count.

So if you want to pass a Swift value type to a Swift method, you will need to make a scratch copy using one of the copy-to-uninitialized flavors in the Value Witness Table of the type.
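To make the shape of this concrete, here is a drastically simplified model in C (the real table has 19 entries and different names; the two shown here plus a size field are just enough to sketch the copy-then-take dance):

```c
#include <stdlib.h>

/* Toy value witness table: function pointers that know how to
   manipulate a value of one particular type. */
typedef struct value_witness_table {
    /* copy src into uninitialized dest, retaining any references */
    void (*initializeWithCopy)(void *dest, const void *src);
    /* destroy a value in place, releasing any references */
    void (*destroy)(void *value);
    size_t size;
} value_witness_table;

/* Pass a value type "Swift style": the caller copies it into scratch
   space via the witness table; the callee takes (uses and destroys)
   that copy. */
static void pass_value(const void *value, const value_witness_table *vwt,
                       void (*callee)(void *, const value_witness_table *)) {
    void *scratch = malloc(vwt->size);
    vwt->initializeWithCopy(scratch, value);  /* caller's +1 */
    callee(scratch, vwt);                     /* callee's take ends in destroy */
    free(scratch);
}
```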

Great! Fantastic – all set, ready to go!

Not so fast. Things get interesting if you’re dealing with generic types.

See, the argument passing rules change quite a bit with generic types.

If you’re calling a function with generic arguments in Swift, all generic types are passed by reference, even value types. This makes sense because if I have the following function in Swift:


public struct Foo<T> {
    public var x: T
    public init(a: T) {
        x = a
    }
}

public func printX<T>(value: Foo<T>) {
    print(value.x)
}

How big is value in printX? Does it fit in 3 machine pointers or fewer? We don’t know. Sorry, sucks to be you.

The question is, how does Swift know how to do this? You would think, “I know, I just need to get the Metatype and the Value Witness Table and then I can…” Not so fast. For generic types, these symbols don’t exist. To get the Metatype, you need to call a function named something like __TFMa<more mangling>, which is the Metatype accessor function. But you can’t simply call it as a function of no arguments. And here, once again, Swift gets, well, interesting. Usually, Swift functions are explicit in the name mangling as to how many arguments they take and their types, but any function that involves generic types may take extra implicit arguments tacked onto the end, and the Metatype accessor is one of those. If a Swift type takes 0 generic arguments, its Metatype accessor takes 0 extra arguments. If the type has generic parameters, the accessor takes 1 argument for every generic parameter, and that argument is…ta-da, the Metatype of the type used in the specialization. So if I’m calling the Metatype accessor for Foo<Int>, I need to pass in the Metatype for Int. This will return a (possibly) dynamically constructed Metatype for Foo<Int>. Great – so there must be a similar method in there to get the Value Witness Table too, right? No. Sorry. No soup for you.

If you want to get the Value Witness Table of a generic type, you first need to get the Metatype. Then you look one machine pointer behind that to get a pointer to the Value Witness Table. In fact, if you’re accessing a non-generic type’s Metadata through the __TMd… symbol, a pointer to the Value Witness Table is stored a machine pointer behind it as well. Nifty.
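In C terms, that lookup is just “read the word behind the metadata pointer”. A sketch with a mocked-up layout (the struct name is hypothetical; only the [-1] offset is the point):

```c
typedef struct value_witness_table value_witness_table;

/* Given a pointer to a type's Metadata, the pointer to its value
   witness table lives one machine word before it. */
static const value_witness_table *
vwt_from_metadata(const void *metadata) {
    const void *const *words = (const void *const *)metadata;
    return (const value_witness_table *)words[-1];
}
```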
Did I mention special cases earlier? I think I did. Let’s have some more.

Suppose I want to call printX() as outlined above. I’m sure you’ve figured out that I can’t just call it, because printX needs the Metatype of T in order to get the Value Witness Table in order to drop the reference count on value on the way out, if needed. So as it turns out, given any generic Swift function, you need to pass in the Metatypes of the generic parameters, one for each unique parameter, after all the regular parameters. But that’s OK, since we now know how to do that.

Did I mention special cases? Yeah. Suppose that I have this:

public func myFunc<T>(a: someClass<T>, b: T) {
}

How many extra arguments are there? 1? Yes? No. 0. See, when you have a reference type specialized on a generic parameter, the Metatype of that generic parameter is embedded within the instance itself and the Swift compiler knows how to get at it, so it’s not needed as a parameter to myFunc.
So the rule is this: for every generic parameter of a function f, you need to pass in the Metatype for the specialization of that parameter, appended after all the regular arguments, unless that generic parameter is already specialized in a reference-type parameter.
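From the C side, then, a call to printX specialized at Int would look roughly like this sketch (all types and names here are hypothetical; the point is the argument order):

```c
/* Hypothetical C-level view of printX<T>(value: Foo<T>): the value is
   passed indirectly (its size isn't known statically), and one
   Metatype rides along after the regular arguments for each unique
   generic parameter. */
typedef struct swift_metadata swift_metadata;

typedef void (*printX_fn)(const void *value, const swift_metadata *T);

static void call_printX(printX_fn printX, const void *foo_of_int,
                        const swift_metadata *int_metatype) {
    printX(foo_of_int, int_metatype);  /* regular args first, then metatypes */
}
```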

So if you’re thinking to yourself, hey – there’s this cool Swift library that I’d like to call from my code – it’s just a function, right? Think again. First, the Swift ABI is not compatible with standard platform ABIs. Second, some of the other calling conventions get interesting because of reference counting. Third and finally, generics will make your life very interesting. So why did you want to do this again?

I’m Old, Part XXXIX: The One True Brace Style

There are few things that can rouse the ire of a programmer more than inconsistent or unusual brace style. At the same time, there are very few things that matter less in your code.

I will admit to favoring something close to K&R. If you’re curious, Wikipedia has a complete list of styles including the correct style and a bunch of abhorrent variations. Ha-ha! I kid! All of the brace styles in there are terrible. To somebody.

Before Adobe, I mostly worked solo on projects and the code style was pretty uniform across the project as a result. Working on Acrobat – that was a whole different matter. The Mac-only code base was written by three or four engineers and included a fair amount of cross-platform code that was written by several other people. All of us had varied backgrounds. Of course I followed K&R; I learned C at Bell Labs.

Still, each of us carved out our own special chunks of functionality and morphed them into our own styles. Where things got funny was when we had to interact over code. Pair programming, really. This usually happened when someone was stuck on a problem. This happens all the time and usually the answer is right in front of you, but you need another set of eyes to see the missing semicolon or uninitialized local or whatever.

Alan Wootton, who was the anchor of the Mac team, was particularly good at spotting problems. However, there was a price to getting his help. Alan had a process of working through other people’s code, and part of that process was to manually transform the brace style in front of him to his own, which was closer to Allman, which is more like Pascal style. Not surprising really, Alan did a lot of Pascal programming.

But watching it? It was like watching someone go through an art gallery of your own work and nudge all the paintings out of square. Brrr.

But he’d inevitably find the issue, fix it for you and check it in, making all his brace changes permanent.

Here’s the thing though, and I know it’s heretical: it doesn’t matter. Honestly. Brace style doesn’t matter. Getting wound up over brace style is akin to getting worked up over pop/soda/coke or sub/grinder/hoagie. What matters is the bigger picture of the code. How is it structured? Does it make sense and strike the right balance of abstraction? If you’re having trouble reading the code, that’s a smell. Go ahead and attend to it, but be prepared to embrace any style.