An extensible type system for meaning-preserving modularity

I’m gradually figuring out a foundation for a general-purpose programming language, and I think I just laid a great cornerstone, which seems to solve the expression problem for dependent type theory. My pseudocode is in this long GitHub Gist, which comes with a long revision history showing my progress over the last 12 days.

I haven’t yet looked for a proof of strong normalization, consistency, and whatnot (and I don’t even intend for my theory to be expressive enough to support induction!), but the final insight has turned out to be very straightforward: If we can extend an extensible type and reimplement its interface (and re-prove its invariant) so that the new implementation/proof is observationally equal to the original as far as the original cases are concerned, then our extension may as well have been part of the type all along.
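Here’s a toy illustration of that insight in ordinary Rust (the names and shapes are mine, not the Gist’s): `ShapeExt` extends `Shape` with a new case, `area_ext` reimplements the interface, and the extension is legitimate because `area_ext` agrees with `area` on every embedded original case.

```rust
// ShapeExt extends Shape with a Rect case; area_ext reimplements area's
// interface. The extension "may as well have been part of the type all
// along" because area_ext agrees with area on the original cases.
#[derive(Clone, Copy)]
enum Shape {
    Circle(f64),
    Square(f64),
}

#[derive(Clone, Copy)]
enum ShapeExt {
    Circle(f64),
    Square(f64),
    Rect(f64, f64), // the new case
}

fn area(s: Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Square(x) => x * x,
    }
}

fn area_ext(s: ShapeExt) -> f64 {
    match s {
        ShapeExt::Circle(r) => std::f64::consts::PI * r * r,
        ShapeExt::Square(x) => x * x,
        ShapeExt::Rect(w, h) => w * h,
    }
}

// Embed the original cases into the extension.
fn embed(s: Shape) -> ShapeExt {
    match s {
        Shape::Circle(r) => ShapeExt::Circle(r),
        Shape::Square(x) => ShapeExt::Square(x),
    }
}

fn main() {
    // The observational-equality obligation, spot-checked by testing:
    // area_ext(embed(s)) equals area(s) for the original cases.
    for s in [Shape::Circle(2.0), Shape::Square(3.0)] {
        assert_eq!(area_ext(embed(s)), area(s));
    }
    println!("extension agrees on original cases");
}
```

In the real system this agreement would be a proof obligation discharged with observational equality, not a spot check.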

I’m using the observational equality infrastructure described by Altenkirch and McBride in “Observational Equality, Now!” and the way I think of the expression problem pretty much lines up with the requirements listed in Zenger and Odersky’s “Independently Extensible Solutions to the Expression Problem.”

The expression problem is usually posed as the issue of taking a system with n datatypes and m operations and extending it with one more operation (requiring n new case implementations) or one more datatype (requiring m). Most languages make it either easier to add new operations or easier to add new datatypes. But interestingly, both options come down to adding cases to the system, so it’s all a matter of designing a system for constrained extensible sum types. In a dependently typed language, a single extensible sum is all we need to model the grid of operations and datatypes, if only we constrain it to support an interface type something like this:

((fn : Val) * (IsAnOperation fn)) ->
((arg : Val) * (IsData arg)) ->
((result : Val) * (IsData result))

(Given a value that’s an operation and a value that’s data, return a value that’s data.)
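As a rough Rust sketch (my own approximation of the idea, not the Gist’s pseudocode): one `Val` sum holds both operations and data, and `apply` plays the role of the interface above, with the dependent-pair proofs `(IsAnOperation fn)` and `(IsData arg)` degraded to runtime checks that return `Option` as evidence.

```rust
// One sum type for everything: data cases and operation cases live in Val,
// and apply only succeeds when the "proof obligations" (here, runtime
// predicate checks) hold for the operation, the argument, and the result.
#[derive(Clone, Debug, PartialEq)]
enum Val {
    Int(i64),                    // a data case
    Op(fn(&Val) -> Option<Val>), // an operation case
}

fn is_data(v: &Val) -> bool {
    matches!(v, Val::Int(_))
}

// The interface: (operation, data) -> data, with None standing in for a
// failed proof obligation.
fn apply(f: &Val, arg: &Val) -> Option<Val> {
    match f {
        Val::Op(op) if is_data(arg) => {
            let result = op(arg)?;
            if is_data(&result) { Some(result) } else { None }
        }
        _ => None,
    }
}

fn double(v: &Val) -> Option<Val> {
    match v {
        Val::Int(n) => Some(Val::Int(2 * n)),
        _ => None,
    }
}

fn main() {
    let op = Val::Op(double);
    println!("{:?}", apply(&op, &Val::Int(21))); // Some(Int(42))
}
```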

This take on the expression problem generalizes easily to cubes, to say the least! Even if I haven’t yet justified the proof-theoretic soundness of my solution, I can’t wait to see how it fares for programming.
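To ground the n-by-m picture above, here is the classic grid in ordinary Rust (a standalone illustration, not the author’s system): the enum supplies the datatype cases, and each operation is an exhaustive `match` over all of them, so adding a variant forces an edit to every operation, while adding an operation is just one new function.

```rust
// The expression-problem grid: 2 datatype cases x 2 operations.
enum Expr {
    Lit(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Operation 1: evaluation. Adding a variant to Expr would make this
// match non-exhaustive, forcing an edit here.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Lit(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
    }
}

// Operation 2: pretty-printing. Same story.
fn render(e: &Expr) -> String {
    match e {
        Expr::Lit(n) => n.to_string(),
        Expr::Add(a, b) => format!("({} + {})", render(a), render(b)),
    }
}

fn main() {
    let e = Expr::Add(
        Box::new(Expr::Lit(1)),
        Box::new(Expr::Add(Box::new(Expr::Lit(2)), Box::new(Expr::Lit(3)))),
    );
    println!("{}", eval(&e));   // 6
    println!("{}", render(&e)); // (1 + (2 + 3))
}
```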

Meaning-preserving modularity in terms of type theory

This type system is the current result of my efforts toward meaning-preserving modularity, a notion of correctness in which modules can be deemed correct in themselves, even if they’re part of an altogether dysfunctional system. By process of elimination, the dysfunction can be isolated to particular components. To achieve this, our imports must uniquely determine their implementations (or be explicitly indeterministic), even if the construction of those implementations is only provided elsewhere in the system, such as another module.

In my last post, I described Reactive Knowledge Networking (RKN), an extension of the idea of meaning-preserving modularity to support live users continuously interacting over a network. I haven’t quite gotten to formalizing RKN the way I’ve been doing for meaning-preserving modularity, but it’s helped as a guidepost. My formalism takes RKN’s live, distributed metaphor and flattens it into a form where network users are module authors and user actions are modules. My approach to module encapsulation uses a metaphor of public and private keys, which may or may not correspond to actual secure channels someday.

Some changes in my thinking

In my RKN post, I said the network users would send “knowledge” to each other. But in meaning-preserving modularity (and therefore RKN), knowledge queries must have the same result regardless of whether the knowledge is available. If the network’s communication channels just transmit knowledge, transmitting it should have the same effect as transmitting nothing at all!

Instead, the network needs to transmit first-class values that somehow represent knowledge, and that’s why I’ve turned to type theory. If we receive a value, we gain the knowledge that we can (trivially) construct that value, which may in turn make it possible for us to construct a whole language of other values, some of whose types can indicate the satisfaction of rich mathematical queries.

The idea of a knowledge query is actually problematic if the language is going to support the seamless embedding of mathematical proofs. If the knowledge doesn’t exist (or isn’t available), the proof-program querying for it doesn’t terminate. Before, I expected to solve the expression problem using knowledge queries for dispatch resolution (somehow). This realization led me to pursue my current solution instead.

Knowledge queries may still have meaning in this system, but if so they will run at a stage before the proof stage. Intuitively, if our knowledge queries don’t terminate, we don’t get to perform user actions which depend on their results.

Lack of induction?

I don’t particularly intend for my system to support induction or recursion, at least at the proof level. It’s great to be able to manage infinite families of values given only finite tools, but I see that as more of a nice-to-have feature, less essential than the modularity and convenience I’m trying to establish.

I would like my module system’s deductive theory to be able to run anywhere, even within another deductive theory. If what I make is too strong, that’s a sign of non-portability. The weaker, the better.

On the other hand, when my module system runs somewhere that has the computational resources to afford more expressiveness, its proofs should be able to take advantage of that expressiveness. I would treat such platforms almost as though they provide extra black-box modules, except that I’d allow them to define extra grammar and typing rules beyond my current formulation of the system. If a module uses those extra rules, then it can only run on that platform, but that’s exactly what we should expect.

Certain platforms may provide so much extra functionality that they make the deductive system inconsistent (well, if it was even consistent before). That’s unfortunate, but the bug will reside in that extra functionality, not this core system.

What’s next

I’m interested in demonstrating that this system ties into the foundation of mathematics with multiple relative consistency results, and ironing out any paradoxes along the way. However, I’m even more motivated to expand my model into a full programming language. With any luck, that’ll help on the way to those proofs.
