A Nice Little C# Gem

One of the built-in operators that I like in C# is nameof(). Given an identifier, nameof returns a string containing the text of that identifier. Typically, it gets used for argument checking:

public void BeExcellent(int input)
{
    if (input < 0) {
        throw new ArgumentOutOfRangeException (nameof (input));
    }
}
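
To be clear about what that buys you: the exception records the parameter's name, and because the name comes from nameof rather than a string literal, it can't drift out of sync with the declaration. Here's a quick runnable sketch (the Demo wrapper is just scaffolding for the example):

using System;

class Demo
{
    // Same check as above, wrapped so the example runs on its own.
    static void BeExcellent (int input)
    {
        if (input < 0) {
            throw new ArgumentOutOfRangeException (nameof (input));
        }
    }

    static void Main ()
    {
        try {
            BeExcellent (-1);
        } catch (ArgumentOutOfRangeException e) {
            Console.WriteLine (e.ParamName); // prints "input"
        }
    }
}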

But there’s more that you can do with nameof(). I saw some code yesterday that looked like this:

public static const string Operator = "Operator";
public static const string Constant = "Constant";
public static const string Expression = "Expression";

Using nameof(), we change it to this:

public const string Operator = nameof (Operator);
public const string Constant = nameof (Constant);
public const string Expression = nameof (Expression);

The nice thing about this is that typos stay consistent: if you make a typo in the symbol name, the same typo ends up in the string, rather than there being a typo only in the string or only in the symbol. Also, if you need to refactor and rename a symbol, your refactoring tools can do everything in one shot, as it should be.

What surprised me about this comes from the way compilers are built. When you have a binding expression of the form type symbol = expr, the symbol is usually hidden from the expression. It's a convenience that prevents bad code from referencing an unbound symbol. But in this case it does work, and it makes sense for it to work, since we don't care about the value of symbol, just its name.
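
To make that concrete, here's a minimal sketch (the NodeKind class is just a stand-in for illustration) contrasting a name self-reference, which compiles, with a value self-reference, which doesn't:

public static class NodeKind // hypothetical container, just for illustration
{
    // Fine: nameof needs only the symbol's name, never its value,
    // so the self-reference compiles and the value is "Operator".
    public const string Operator = nameof (Operator);

    // Not fine: a value self-reference is a circular definition
    // and the compiler rejects it.
    // public const string Broken = Broken;
}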

I’m Old, Part LXXVIII: Code is Data

My second programming language was 6502 assembly. It's a truly inspired and awful processor. It has a tiny number of registers, and all of them (save the program counter) are 8 bit. There are a bunch of addressing modes that really aren't all that useful except in corner cases. There are only two math operations: addition and subtraction. If you want anything else, you're on your own. Branches are relative only, with a range of -128 to +127 bytes. The stack is 256 bytes and fixed, so screw you, recursion.

If you wanted to move data more than 256 bytes from one location to another, there was really only one prescribed method to do that, and that was using an addressing mode called indirect indexed. You had to give up 2 precious bytes in the coveted 0 page (addresses 0-255) to serve as a pointer, and then you could index off that pointer using only the Y register. After you moved 256 bytes, you could increment the high byte of the pointer to move on to the next block of 256 bytes.

The problem with this is that this particular addressing mode is slower than most: it takes 5 cycles. If you're moving memory, that's 10 cycles for a read and a write, plus the 4 bytes of 0 page used for the source and destination pointers. Regular indexed addressing takes 4 cycles, 8 for a read and write, but like I said, it's limited to 256 bytes at a time.

Or is it?

For example, if I want to read from location $4000 and write to location $2000, indexed on X, I can write the following code:

0300: BD 00 40 LDA $4000,X  ; BD is the opcode; 00 40 is $4000, little-endian
0303: 9D 00 20 STA $2000,X  ; 9D is the opcode; 00 20 is $2000, little-endian

The thing to realize here is that those three numbers after the 0300 are the values that represent the instruction. It's data. It's code. It turns out that if that code is in RAM, we can change it on the fly. That 40 in the first line, at address $0302? Increment it and now you're addressing memory at location $4100 and on. This is called self-modifying code, and if you were coding for the Apple ][, it was de rigueur. Why? Because, like I said, the addressing modes available were not so great, the Apple ][ only ran at a paltry 1.023 MHz, and if you were writing a game, it was important to shave cycles where you could. This instruction was 20% faster than the prescribed addressing mode.

It became so natural to me that most of the code that I wrote ended up being self-modifying in some way or another.

Flash forward a few years and I'm in college taking a computer architecture class. The professor was presenting the Manchester Mark I computer. As presented, the machine had 7 instructions – far fewer than the generous 56 in the 6502. The professor talked about how John von Neumann wrote self-modifying code with an air of awe and respect. Wait. What? It's supposed to be hard? No – it's a tool. A horrible, horrible tool, which is what you use when you don't have anything better.

One of the things that we've learned as the field has advanced is that we're really terrible at managing side effects in code, so consider the side effects of code that modifies code. More than once I had code run out of control, stomping all over memory, because I botched an address in self-modifying code.

Altering 6502 code on the fly is one thing. It's really a small case of code writing code, which is a time-honored tradition. When I was taking a class in automata theory, we had an assignment to write code to generate finite state machines from a description language. The assignment was written to push us toward using Scheme for the implementation, and the rule was that every state had to be a separate program. Since I'd had enough of Scheme by that point, I decided that I would do this in C. Each state in the state machine read its input from its args in main, did a switch on it, and, depending on the state transition, picked the next program to execute and set up the right fork/exec pair to make that happen.

So my solution to the assignment read the automata description, wrote a separate C program for every state, and compiled them under specific names that each of the other programs knew. Code writing code.
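
Just to give the flavor of it, here's a toy sketch of that generator, in C# rather than the C I actually used, with a made-up two-state transition table; where the real solution set up a fork/exec pair, the generated programs here just report which program should run next:

using System.IO;
using System.Linq;

class StateGen
{
    static void Main ()
    {
        // (state, input, next state) for a hypothetical two-state machine
        var transitions = new [] {
            ("S0", 'a', "S1"),
            ("S0", 'b', "S0"),
            ("S1", 'a', "S1"),
            ("S1", 'b', "S0"),
        };

        foreach (var state in transitions.Select (t => t.Item1).Distinct ()) {
            var cases = string.Join ("\n", transitions
                .Where (t => t.Item1 == state)
                .Select (t => $"            case '{t.Item2}': next = \"{t.Item3}\"; break;"));

            // One stand-alone program per state: it takes its input as an
            // argument, switches on it, and names the next program to run.
            File.WriteAllText ($"{state}.cs",
$@"class {state}
{{
    static void Main (string [] args)
    {{
        string next = null;
        switch (args [0] [0]) {{
{cases}
        }}
        System.Console.WriteLine (next ?? ""reject"");
    }}
}}
");
        }
    }
}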

And oddly enough, my current work project involves reading a program, tearing it apart into pieces, and then writing code to interact with it in two different programming languages. And my unit tests are typically code that writes more code to test the output of the code written by code.

It’s turtles all the way down.