Insights from Coders at Work

I recently read Coders at Work, a series of interviews by Peter Seibel with leading programmers. I enjoyed reading their insights, and Seibel is a great interviewer who asks the kinds of questions a working programmer would want to hear. Though the conversations range widely, some topics came up with nearly everyone, such as, "Do you consider yourself an engineer, writer, craftsman, or something else?" Seibel also asked most of the interviewees what they thought about Knuth's work, what advice they had for people learning to program, how they debug, and about ageism in programming.

One overarching theme is that pretty much everyone hates C++ (though he didn't interview Bjarne Stroustrup). Another is the importance of written communication to the craft of programming.

Here are some comments that I found particularly insightful.

Brad Fitzpatrick

Fitzpatrick believes programmers should push themselves:

Seibel: Do you have any advice for self-taught programmers?

Fitzpatrick: Always try to do something a little harder, that's outside your reach.

And learn statistics:

Seibel: How much math do you think is necessary to be a programmer? To read Knuth and really understand it, you've got to be pretty mathematically sophisticated, but do you actually need that to be a programmer?

Fitzpatrick: You don't need that much math. For most programmers, day to day, statistics is a lot more important. If you're doing graphics stuff, math is a lot more important but most people doing Java enterprise stuff or web stuff, it's not. Logic helps and statistics comes up a lot.

Douglas Crockford

Crockford believes code reading is one of the most useful ways of improving software quality:

One of the things I've been pushing is code reading. I think that is the most useful thing that a community of programmers can do for each other---spend time on a regular basis reading each other's code. There's a tendency in project management just to let the programmers go off independently and then we have the big merge and if it builds then we ship it and we're done and we forget about it.

One of the consequences of that is that if you have weak or confused programmers you're not aware of their actual situation until much too late. And so the risks to the project, that you're going to have to build with stuff that's bad and the delays that that causes, that's unacceptable. The other thing is that you may have brilliant programmers on the project who are not adequately mentoring the other people on the team. Code reading solves both of those problems.


I think an hour of code reading is worth two weeks of QA. It's just a really effective way of removing errors. If you have someone who is strong reading, then the novices around them are going to learn a lot that they wouldn't be learning otherwise, and if you have a novice reading, he's going to get a lot of really good advice.

He also proposes an innovative approach to periodic, scheduled code cleanups:

Seibel: In one of your talks you quoted Exodus 23:10 and 11: "And six years thou shalt sow thy land, and shalt gather in the fruits thereof: But the seventh year thou shalt let it rest and lie still" and suggested that every seventh sprint should be spent cleaning up code. What is the right time frame for that?

Crockford: Six cycles---whatever the cycle is between when you ship something. If you're on a monthly delivery cycle then I think every half year you should skip a cycle and just spend time cleaning the code up.

Seibel: So if you don't clean up every seventh cycle you may be faced with the choice of whether or not to do a big rewrite.

Joe Armstrong

Armstrong argues that software reuse has failed:

Seibel: So you started out saying software reuse is "appallingly bad," but opening up every black box and fiddling with it all hardly seems like movement toward reusing software.

Armstrong: I think the lack of reusability comes in object-oriented languages, not in functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

And connecting pieces of a system is too complicated:

[G]luing things together from these complicated components does not itself have to be complicated. The use of grep is not complicated in the slightest. And what I don't see in system architectures is this clear distinction between the gluing things together and the complexity of the things inside the boxes.

When we connect things together through programming language APIs we're not getting this black box abstraction. We're putting them in the same memory space. If grep is a module that exposes routines in its API and you give it a char* pointer to this and you've got to malloc that and did you deep copy this string---can I create a parallel process that's doing this? Then it becomes appallingly complicated to understand. I don't understand why people connect things together in such complicated ways. They should connect things together in simple ways.
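Armstrong's contrast can be made concrete. The sketch below glues to grep through a byte stream, using Python's subprocess module purely as an illustration; because the only interface is text in and text out, none of the ownership questions he lists (who mallocs, who deep-copies, what shares memory) can even arise:

```python
import subprocess

# Glue via a pipe: grep is a black box that reads text and writes text.
# We never see its memory, its pointers, or its internal data structures.
text = "apple\nbanana\ncherry\n"
result = subprocess.run(
    ["grep", "an"],        # the component's internals are irrelevant to us
    input=text,            # hand it bytes...
    capture_output=True,   # ...and take bytes back
    text=True,
)
print(result.stdout, end="")  # prints "banana"
```

Linking against a grep library through a C API would instead put both sides in one address space, which is exactly the coupling Armstrong objects to.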

Talking to your colleagues (or maybe even to yourself) is a great way to solve problems:

Armstrong: And another thing, very important for problem solving, is asking my colleagues, "How would you solve this?" It happens so many times that you go to them and you say, "I've been wondering about whether I should do it this way or that way. I've got to choose between A and B," and you describe A and B to them and then halfway through that you go, "Yeah, B. Thank you, thank you very much."

You need this intelligent white board---if you just did it yourself on a white board there's no feedback. But a human being, you're explaining to them on the white board the alternative solutions and they join in the conversation and suggest the odd thing. And then suddenly you see the answer. To me that doesn't extend to writing code. But the dialog with your colleagues who are in the same problem space is very valuable.

Seibel: Do you think it's those little bits of feedback or questions? Or is it just the fact of explaining it?

Armstrong: I think it is because you are forcing it to move it from the part of your brain that has solved it to the part of your brain that has verbalized it and they are different parts of the brain. I think it's because you're forcing that to happen. I've never done the experiment of just speaking out loud to an empty room.

Joe's Law of Debugging:

[T]here's---I don't know if I read it somewhere or if I invented it myself---Joe's Law of Debugging, which is that all errors will be plus/minus three statements of the place you last changed the program.

Peter Norvig

Peter Norvig recommended books that all programmers should read:

Norvig: I think there are a lot of choices. I don't think there's only one path. You've got to read some algorithm book. You can't just pick these things out and paste them together. It could be Knuth, or it could be the Cormen, Leiserson, and Rivest. And there are others. Sally Goldman's here now. She has a new book out that's a more practical take on algorithms. I think that's pretty interesting. So you need one of those. You need something on the ideas of abstraction. I like Abelson and Sussman. There are others.

You need to know your language well. Read the reference. Read the books that tell you both the mechanics of language and the whole enterprise of debugging and testing: Code Complete or some equivalent of that. But I think there are a lot of different paths. I don't want to say you have to read one set of books.

He also had some interesting comments on how he uses testing, and how testing frameworks are not very good at measuring statistical values:

Seibel: What about the idea of using tests to drive design?

Norvig: I see tests more as a way of correcting errors rather than as a way of design. This extreme approach of saying, "Well, the first thing you do is write a test that says I get the right answer at the end," and then you run it and see that it fails, and then you say, "What do I need next?"---that doesn't seem like the right way to design something to me.

It seems like only if it was so simple that the solution was preordained would that make sense. I think you have to think about it first. You have to say, "What are the pieces? How can I write tests for pieces until I know what some of them are?" And then, once you've done that, then it is good discipline to have tests for each of those pieces and to understand well how they interact with each other and the boundary cases and so on. Those should all have tests. But I don't think you drive the whole design by saying, "This test has failed."

The other thing I don't like is a lot of the things we run up against at Google don't fit this simple Boolean model of test. You look at these test suites and they have assertEqual and assertNotEqual and assertTrue and so on. And that's useful but we also want to have assertAsFastAsPossible and assert over this large database of possible queries we get results whose score is precision value of such and such and recall value of such and such and we'd like to optimize that. And they don't have these kinds of statistical or continuous values that you're trying to optimize, rather than just having a Boolean "Is this right or wrong?"
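A minimal sketch of the kind of statistical assertion Norvig is describing, in Python. The query data here is made up for illustration; the point is that the test asserts a continuous score over a batch of queries clears a threshold, rather than a Boolean equality on a single value:

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are actually relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)

# Hypothetical labeled query set: query -> (retrieved docs, relevant docs).
queries = {
    "q1": (["d1", "d2", "d3"], ["d1", "d3", "d9"]),
    "q2": (["d4", "d5"], ["d4"]),
}

# Instead of assertEqual, assert that an aggregate score clears a bar --
# a value you tune and ratchet upward, not a right/wrong answer.
avg_precision = sum(precision(ret, rel) for ret, rel in queries.values()) / len(queries)
assert avg_precision >= 0.5, f"precision regressed: {avg_precision:.2f}"
```

Standard xUnit-style frameworks can host such checks, but as Norvig notes, they offer no first-class support for thresholds, regressions on continuous metrics, or "as fast as possible" goals.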

L Peter Deutsch

Deutsch argues that language syntax matters:

My PhD thesis was a 600-page Lisp program. I'm a very heavy-duty Lisp hacker from PDP-1 Lisp, Alto Lisp, Byte Lisp, and Interlisp. The reason I don't program in Lisp anymore: I can't stand the syntax. It's just a fact of life that syntax matters.

Language systems stand on a tripod. There's the language, there's the libraries, and there are the tools. And how successful a language is depends on a complex interaction between those three things. Python has a great language, great libraries, and hardly any tools.


Lisp as a language has fabulous properties of flexibility but really poor user values in terms of its readability. I don't know what the status of Common Lisp libraries is these days, but I think syntax matters a lot.

He also gets a knock in on Perl:

Well, my description of Perl is something that looks like it came out of the wrong end of a dog. I think Larry Wall has a lot of nerve talking about language design---Perl is an abomination as a language.

Ken Thompson

Code may seem like a timeless asset, but when left alone, code rots:

And I've always been totally willing to hack things apart if I find a different way that fits better or a different partitioning. I've never been a lover of existing code. Code by itself almost rots and it's gotta be rewritten. Even when nothing has changed, for some reason it rots.

I think this squares with the experience of all working programmers.

Fran Allen

Fran Allen is the only woman interviewed for the book. She won the Turing Award in 2006 for her contributions to compiler theory and practice.

Allen argues that C killed compiler advances:

Allen: By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are . . . basically not taught much anymore in the colleges and universities.

Seibel: Surely there are still courses on building a compiler?

Allen: Not in lots of schools. It's shocking. There are still conferences going on, and people doing good algorithms, good work, but the payoff for that is, in my opinion, quite minimal. Because languages like C totally overspecify the solution of problems. Those kinds of languages are what is destroying computer science as a study.

She argues that women were pushed out of programming by the rise of computer science as a math/engineering degree.

Seibel: Do you think that glass ceiling had, in fact, been there before and you hadn't bumped up against it yet? Or had something changed?

Allen: It really hadn't been there previously. Recently I realized what was probably the root cause of this: computer science had emerged between 1960 and 1970. And it mostly came out of the engineering schools; some of it came from mathematics.

And the engineering schools were mostly all men in that period. And the people IBM was hiring had to meet certain requirements: have certain degrees and have taken certain courses in computer science. And so they were almost all men because they were the ones that satisfied the requirements---because it was a discipline now.

She also says the problem with recruiting women into programming is that computer science is not seen as socially relevant by women:

Seibel: So how are you feeling about [Anita Borg's] "50/50 by 2020" project? [50% women in computer science by 2020]

Allen: Pretty discouraged about it.


[T]here are a lot of women in engineering---taking all the tough sciences and mathematics in high schools....

What's happening with those women is that they're going into socially relevant fields. Computer science could be extremely socially relevant, but they're going into earth sciences, biological sciences, medicine. Medicine is going to be 50/50 very soon. A lot of fields have belied that theory, but we haven't.

Seibel: What is it, then, about computer science that is so unappealing?

Allen: A lot of people think it's the games and the nerdiness of sitting in front of a computer all day. It's going to be interesting how these new social networks online will have an effect. I don't know. But I feel it's our problem to solve. It's not telling the educators to change their training; we in the field have to make it more appealing.

We have to give the field an identity that expands it further than the identity it seems to have now---a much more human identity. We haven't articulated why we like this field and what's exciting about it and what's exciting about the future and why it's a great field to be in.

Bernie Cosell

One of Cosell's rules was that programs are to be read by people. Being able to write a working program is a given. Making it good was the standard at his company:

The other rule is to realize that programs are meant to be read.... I very quickly...came to the belief that computer-program source code is for people, not for computers. Computers don't care. I think it's a good thing that Perl has both "if" and "unless." Because it turns out that when you're getting an intuition for what something is supposed to be doing, saying "if not some condition" doesn't connote the same idea as saying "unless the condition."

The binary bits are what computers want and the text file is for me. I would get people---bright, really good people, right out of college, tops of their classes---on one of my projects. And they would know all about programming and I would give them some piece of the project to work on. And we would start crossing swords at our project-review meetings. They would say, "Why are you complaining about the fact that I have my global variables here, that I'm not doing this, that you don't like the way the subroutines are laid out? The program works."

They'd be stunned when I tell them, "I don't care that the program works. The fact that you're working here at all means that I expect you to be able to write programs that work. Writing programs that work is a skilled craft and you're good at it. Now, you have to learn how to program." ...

I know I would have said the same thing not long ago: "The program works. What is your problem?" When I say, "You don't get credit because the program works. We're going to the next level. Working programs are a given," they say, "Oh."

He also thinks today's programmers have it tough, because the standards are so high now:

So I don't envy modern programmers, and it's going to get worse. The simple things are getting packaged into libraries, leaving only the hard things. That stuff is getting so complicated, but the standards that people are expecting are stunning. One of the ones they showed me stunned me. He was showing me Google Maps that will do routes for you. One of the things you can do is you can grab a piece of the route with your mouse and drag that piece of the route somewhere else to tell Google that you want the route to go there. Then it remaps the route so that it goes through where you just dragged the point. Now I know what's going on in there: a pile of JavaScript code for the mouse tracking. When you let go of the mouse it has to do an Ajax XML request to tell momma system that he just put this point on the route. The route then has to do incremental updates. Calculating the route. I can't even imagine how they do that code so well. People complain that you get routed through people's backyards and stuff like that, but the optimal-route problems are one of the classic problems of computer science. How to take this arbitrary graph and find the shortest path through a graph. Just stunning.

At one level I'm thinking, "This is way cool that you can do that." The other level, the programmer in me is saying, "Jesus, I'm glad that this wasn't around when I was a programmer." I could never have written all this code to do this stuff. How do these guys do that? There must be a generation of programmers way better than what I was when I was a programmer. I'm glad I can have a little bit of repute as having once been a good programmer without having to actually demonstrate it anymore, because I don't think I could.