The Secret Lives of Errors

Several years ago, I was excited about pouring lots of features into a programming language design, so I asked various people what they would look for in a programming language, even if they had never programmed before. The most common answer was “good error messages.”

I’m rarely frustrated by errors, so I haven’t had much of a basis to think about how they could be handled or messaged better. But recently, thanks to two online discussions (a discussion between David Barbour and me about API usability, and an LtU thread about static-vs-dynamic language lifecycles), I’ve reached some very specific conclusions.

I’m going to make a distinction here between an error mechanism and a design hole. I consider design holes to be the real errors in a program, and the error mechanism is just something that typically happens when a program falls into a design hole.

Most programming languages provide one or more error mechanisms, such as exceptions, static type errors, and parse errors. An error mechanism typically displays a diagnostic message somehow. Culturally, developers are discouraged from using it as a normal ingredient of their programs; if they ever actually invoke it, it’s a sign that their program is buggy, and the diagnostic message is a clue to help fix the problem. Even if we try to defy this cultural meaning and we take advantage of an error mechanism to write programs, it tends to be noisy and disruptive in ways that make it difficult or cumbersome to use.
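
To get a concrete feel for that noise, here’s a small sketch in Python (the names are made up) that bends an exception into an ordinary control-flow channel. The dedicated exception class and the try/except scaffolding are the overhead the error mechanism imposes once we use it as a normal ingredient:

    # Delivering the result of a search by raising it. This works, but it
    # takes a custom exception class and try/except scaffolding just to
    # return a value.
    class Found(Exception):
        def __init__(self, value):
            self.value = value

    def find_first_even(numbers):
        for n in numbers:
            if n % 2 == 0:
                raise Found(n)  # "success" travels through the error mechanism
        raise Found(None)

    try:
        find_first_even([3, 5, 8, 9])
    except Found as result:
        print(result.value)  # prints 8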

A design hole is not a property of the implementation; it’s a property of the design. It’s some potential circumstance that a developer neglects to prepare for in their program. If they’re even aware of this design hole, they consider it unlikely to actually happen and/or too hard to accommodate in the program architecture they’ve built, so they’re willing to neglect it for now. If they must write explicit code for it, then they’ll already expect that code to be buggy, so it’s likely they’ll just invoke an error mechanism to make that expectation explicit.
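
A minimal sketch of that move, in Python with hypothetical names: the developer fills in the one case they have a design for and marks the rest of the hole by invoking the error mechanism, rather than writing code they already expect to be buggy.

    def export_report(column_names, fmt):
        # The CSV path is designed; there's no design in mind for anything
        # else yet, so the hole is marked explicitly rather than papered over.
        if fmt == "csv":
            return ",".join(column_names)
        raise NotImplementedError(f"no design yet for format {fmt!r}")

    export_report(["id", "name"], "csv")    # works: "id,name"
    # export_report(["id", "name"], "xml")  # falls into the design hole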

(Aside: It’s interesting to compare design holes with function parameters, since a function body is also a program with holes in it, in some sense. However, there’s a qualitative difference. A developer uses a function parameter if they acknowledge that the program could use many different values, depending on the context. A developer leaves a design hole (whether by accident or on purpose) if they have no design in mind for that part of the code, let alone a design that accommodates many different values.)

For a language designer, there are two motivations for implementing error mechanisms: First, the language itself may have these design holes in it, so an error mechanism can alert developers that they’re crossing a boundary into uncharted language behavior. Second, the language designer may provide an error mechanism specifically to support developers who leave design holes in their own programs. These motives partially overlap, because if a program strays into uncharted language behavior, there’s a good chance its developer didn’t intend for it to do that.

Now I can get to the observations I want to make.

Observation 1: The design of the best error mechanism is AI-complete and ethically sensitive.

Suppose you’re developing a program, and you expect to leave some design holes, so you want to choose a language that will respond to those holes as nicely as possible.

  • It should give you a detailed report of why the program encountered a design hole.
  • It should continue to be usable for other purposes in the meantime. (For instance, it should have at least one error mechanism that doesn’t veto your whole program at compile time.)
  • If it’s a live service, it should continue to cater to the clients as best it can. (For instance, it should have at least one error mechanism that doesn’t shut down the whole program.)
  • If it can’t service a client interaction properly, it should apologize in a graceful way. (For instance, it shouldn’t dump a stack trace to a non-technical user; there’s a sketch of this after the list.)
  • It should behave in a predictable way, so that you and the clients can understand the ramifications of this error in hindsight.
  • It should not lead to security vulnerabilities. (This can be particularly complicated. For instance, perhaps a door with a broken keycard reader should open if there’s an emergency evacuation, but perhaps it shouldn’t open for an attacker who goes around breaking keycard readers.)
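
A rough sketch, in Python with hypothetical names, of how a few of these properties can be approximated today: when a request falls into a design hole, the service records a detailed report for the developer, sends the client a graceful apology instead of a stack trace, and stays up to serve the next request.

    import logging
    import traceback

    log = logging.getLogger("service")

    def handle_request(request, route_table):
        try:
            handler = route_table[request["route"]]  # a missing route is a design hole
            return handler(request)
        except Exception:
            # Detailed report for the developer: what happened and where.
            log.error("design hole reached for %r:\n%s",
                      request, traceback.format_exc())
            # Graceful apology for the client: no stack trace, no shutdown.
            return {"status": 500,
                    "body": "Sorry, we can't complete this request right now."}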

Taking this to a logical extreme:

  • The program should continue fulfilling its intended purposes, despite the difficulty of doing so.
  • The program should reach out to developers and clients to help them understand what’s going on and what they can do about it.

The only way we really know the program’s intended purpose is by consulting you, the developer. Hence, a very effective error mechanism would consult you interactively, and while it waited for your responses, it would consult an immortalized clone of your mind.

Still with me? I’m talking about far-future outcomes. I expect us to approximate this kind of solution incrementally, as we go along designing better and better error mechanisms.

Even if the error mechanism manages to use a clone of your mind, it’s not quite clear what role the clone would play, because what if you and the clone go on to have different experiences for a long time, and you now have significantly different opinions? As the CAP theorem suggests, we can’t have both availability and partition tolerance without sacrificing consistency. The act of deploying a program with ideal error handling is the act of separating the programmer into two inconsistent minds. What are the ethics of reassimilating or deprecating these minds? What if the minds don’t actually want to be consistent with each other?

There are three parties who have some power to impose answers to these questions: you, your program, and finally the various other people and tools which provide you with the ability to build your program in the first place and interact with it afterward. These third parties include languages (and their language designers), computer systems (and their systems programmers), legal systems, public utilities, and so on. The third parties are relatively detached from the drama of the situation, so ideally, they’ll apply their dispassionate decision-making power as ethically as they can, perhaps even consulting other systems which make it their job to maintain a contemporary and culture-sensitive understanding of ethics.

The task of actively keeping up with ethical developments is not easy, and not every third party will be able to do it effectively. For instance, a third party that is merely a piece of data, such as a Git repo containing the source to a programming language, will have absolutely no way to do this. Nevertheless, even these third parties can advertise warnings, user agreements, and usage documentation, and they can incorporate DRM mechanisms. These features help other third parties assess the user’s intentions, which helps them decide the user’s responsibilities.

In conclusion, the best error mechanism for a programming language will interact with a representative of the developer, deployed along with the program. This representative will usually consist of a way to interact with the developer, but when the developer isn’t available, the best it can do is to act as a clone of the developer’s mind. The interaction between this mind clone and the developer should be managed by an unbiased third party. Since the language itself will probably exist as data, it should incorporate documentation describing how and why it should be hooked up to an unbiased third party in the first place. Despite best efforts, the third party’s decisions may be bitterly controversial. The language should guide developers toward program designs which have fewer design holes, so that they have fewer reasons to invoke the error mechanism and provoke controversy.

Observation 2: Hard-to-patch design holes may help prolong a system’s brand recognition.

There’s a spectrum of stability when it comes to program composition: Some developer relationships are flexible, where one developer makes a breaking change and the other developers vigilantly catch up. Other developer relationships are stable, where the developers try to avoid any unreliable dependencies, and they try to preserve the backwards compatibility of the interfaces they publish.

If a developer stops making breaking changes for a while, or if they think the basic features of their system are pretty much perfect the way they are, then they require fewer compromises from their clients. Because of this, they may attract clients who are less flexible. Once they’re popular among such clients, the system’s essential features will be pretty much set in stone, unable to be simplified or reworked to accommodate new abilities. So when the developer actually does have new ideas in mind, instead of innovating upon the existing system, they may focus on a newer, less entrenched system. The new system may owe a lot to the old, but if it has a new social identity (name), the old system’s community can continue to exist unperturbed.

The developer could always try to perturb the system’s community, regardless of whether this trend is occurring. They could say that the old system is simply old, and they’ll (eventually) abandon it, so all clients should upgrade as soon as they can. In this case, it helps if the developer has built a good reputation for maintaining the old system, so they can act as a convincing spokesperson for the new one. It also helps to have cultivated a community of client developers who are flexible.

This is where design holes come in.

If a system has design holes, it provokes the community to wonder what they should be filled with. (“Today I was adding a string to a non-string, and I got an error. Could we change this to coerce both arguments to a string?”) If the chosen fill isn’t a perfectly seamless design, the former hole will linger as a design scar, and some clients may react by requesting a breaking change. (“Today I was using the concatenation operator, and to my surprise it actually adds numbers too. Wouldn’t it be more consistent if it always stringified both arguments, rather than having this ugly special case?”) Hence, a popular system with design holes will already have some pressure to make breaking changes, and this may filter out developers who aren’t ready for that.
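
To make that example concrete, here’s a small Python sketch. The error shown is Python’s actual behavior today; the coercing alternative described in the comments is how a language such as JavaScript fills the same hole, leaving the scar the second client is complaining about.

    # Adding a string to a non-string falls into a design hole, and the error
    # mechanism fires.
    try:
        "total: " + 3
    except TypeError as e:
        print(e)  # can only concatenate str (not "int") to str

    # A language could fill the hole by coercing, so that "total: " + 3 yields
    # "total: 3". But then the same + operator both concatenates strings and
    # adds numbers, and that special case lingers as a design scar:
    print(1 + 2)      # 3
    print("1" + "2")  # "12"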

If a system is built with design holes on purpose, it may breed more controversy and a more flexible developer community, but that’s assuming it succeeds at all.

I’m generally against controversy, because I think of it as the basic way to provoke people to be destructive to each other. Above, I was arguing that the use of error mechanisms leads to developers cloning their own minds, provoking very deeply controversial questions. But if a system is built with design scars, with inelegant behaviors in place of errors, then no minds need to be cloned, and I think the resulting controversy among client developers is pretty low. Either we have that controversy, or we have the controversy of building an all-new system and asking someone to choose (or divide their attention) between the old one and the new one. It seems similar enough in scale.

Conclusion

With some of the things I’ve said here, it’s as though programming language errors are the playthings of evil overlords who want to amass an army and immortalize themselves as its leader. It’s silly to think about it that way, so I, uh, recommend you stop being so silly!

Well, I have some language design principles to take away from this. They apply to more than programming languages, so I’ll say “system” instead. (I also went out of my way to say “system” in the brand recognition section, if you didn’t notice.)

I recommend that every developer (myself included) strive for the following properties in each of their systems:

  • In preference to having usage errors, the system tends to support miscellaneous behaviors that seem useful enough for now.
  • If the system has the potential for internal errors, it’s deployed alongside a responsibly selected representative of its developer.
  • The system’s clients have access to their own error mechanism that explicitly communicates with a representative of the client developer and a representative of an unbiased third party.
  • The system’s design guides its clients to achieve all these properties in their own systems.

This is quite an elaborate bundle of recommendations, and yet it’s vague: If an unbiased third party is involved, what exactly do they do? What’s their API? This is a much more principled approach to errors than anything I’ve encountered before, and yet it has quite a ways to go before any one system will truly exemplify it.

In the meantime, please let me know what you think!
