Lately I’ve been trying to iron out the details of a generalization of quasiquotation which I call “higher quasiquotation.” The basic idea is that just as a quasiquotation is a region between one parenthesis-delimited region (marked by
quasiquote) and a set of other parenthesis-delimited regions (marked by
unquote), we can go on to talk about regions between quasiquoted regions, regions between those regions, and so on.
If you think of values with holes as being functions, then the notion that this is a “higher-order” quasiquotation is clear: Each quasiquotation determines a value of type
(c SExpr -> SExpr), the next higher degree of quasiquotation determines a value of type
(c (c SExpr -> SExpr) -> (c SExpr -> SExpr)), and so on, where
c is some collection like
c a = Map String a. But these functions aren’t the whole story; the quasiquotations should be able to be pulled apart like other data structures, not just filled in to create s-expressions.
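To make that typed view concrete, here is a minimal Haskell sketch; the names (SExpr, QQ1, QQ2, fooWithHole) are my own illustrations, not from any existing library:

```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- A sketch of the "values with holes are functions" view, where the
-- collection c maps hole names to the values that fill them.
data SExpr = Atom String | List [SExpr] deriving (Eq, Show)

type C a = Map String a

-- Degree 1: fill the named holes to get an s-expression.
type QQ1 = C SExpr -> SExpr

-- Degree 2: fill the holes with degree-1 quasiquotations to get a
-- degree-1 quasiquotation.
type QQ2 = C QQ1 -> QQ1

-- A degree-1 quasiquotation with one hole named "x", representing
-- something like `(foo ,x).
fooWithHole :: QQ1
fooWithHole env = List [Atom "foo", env Map.! "x"]
```

Applying fooWithHole to a map that binds "x" fills the hole, but as noted above, a function like this can only be filled in, not pulled apart, which is why a richer data structure is needed.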
I haven’t managed to write a full macroexpander for higher quasiquotation yet. I’ve written this post to share my status as it is.
Why do I care about higher degrees of quasiquotation? Mainly because I already want to move up by one degree: I want macros that manipulate quasiquotations the way traditional macros manipulate s-expressions.
This is because in Cene, I’ve already extended quasiquotation in several ways beyond the simple kind of quasiquotation in Lisp and Scheme. Here’s a description of a few of the extensions in Cene. These are actually for Cene’s interpolated string syntax, but I use practically the same primitives for s-expression quasiquotation as well:
- A programmer can escape multiple levels of string quasiquotation at once using a De Bruijn style method of repeated unquotes:
\;qq[first level \;qq[second level \;uq;uq;ls(...) second level] first level]. Here,
uq means “unquote,” and
ls means an interpolated s-expression (“lists and strings”).
- A programmer can escape multiple levels of string quasiquotation at once using a named label:
\;(wq outside);qq[first level \;qq[second level \;(rq outside);ls(...) second level] first level]. Here,
wq means “with current quasiquotation level” and
rq means “restore quasiquotation level.”
- A programmer has a choice between a string syntax that normalizes whitespace
\;qq[...] and a string syntax where that whitespace normalization is suppressed. Here,
yp means “yes preformatting” (marking the preformatted syntax), and there’s a corresponding
np (“no preformatting”) to disable this setting again when strings are nested.
These extensions are so numerous and arbitrary that I find it interesting to think about whether we could simplify the core language of Cene by moving these features into libraries.
An alternate simplification here would be to try to round out this set of extensions in a way that feels logically complete without resorting to a macro system; however, I believe this is a lost cause. Consider the arbitrary choice of whether
\;(rq label) should unwind to its corresponding
\;(wq label) or past it. I can hardly deny the usefulness of both options, since I get a lot of value out of Cene’s shorthand
(a b /c d) for
(a b (c d)); if we were to process the string
(a b /c d) by starting with
) and proceeding backwards, then
/ would unwind to its ) and the
( would unwind past it. So I want both options at minimum… but that doesn’t mean I could stop there. It’s like defining all the ordinal numbers: No matter how many weak opening and closing parens we’ve added, we can conceive of a new pair of weak opening and closing parens that’s weaker than all of those. We might be able to supply an expressive language for the predecessors of some large ordinal number so people can write extremely weak parens, but the form that language takes might as well be a macro system.
So, I want to write syntaxes like these as macros.
Let’s consider the much simpler and more familiar case of writing
( as an unhygienic reader macro in Common Lisp or Racket: We check whether the input begins with something we recognize as an unmatched
). If it doesn’t, we read an expression from the input and repeat the process. When we find the
), we return a list of the expressions we’ve read, and we update the state of the input stream to begin after the
). (In a pure language, we may “update” the stream by returning the updated version of the stream.)
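As a rough sketch of that loop, here’s a toy Haskell model; the Token and SExpr types and function names are assumptions for illustration, not any real reader’s interface:

```haskell
-- A toy model of the reader described above. Reading an open paren
-- repeatedly reads expressions until it finds an unmatched close
-- paren, then returns the list along with the remaining input stream.
data Token = TOpen | TClose | TAtom String deriving (Eq, Show)
data SExpr = Atom String | List [SExpr] deriving (Eq, Show)

readExpr :: [Token] -> Maybe (SExpr, [Token])
readExpr (TAtom s : rest) = Just (Atom s, rest)
readExpr (TOpen : rest) = readList [] rest
readExpr _ = Nothing

-- The "(" reader macro's loop: check for an unmatched ")"; if it
-- isn't there, read one expression and repeat.
readList :: [SExpr] -> [Token] -> Maybe (SExpr, [Token])
readList acc (TClose : rest) = Just (List (reverse acc), rest)
readList acc toks = do
  (e, rest) <- readExpr toks
  readList (e : acc) rest
```

Since this is pure, “updating” the input stream amounts to returning the tokens that remain after the close paren.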
When we process
quasiquote, we can take a similar approach, but with a complication: when we look at the input stream, we don’t want to check it for a literal occurrence of
uq, because that doesn’t account for alternative ways to close the quasiquotation such as
\;(rq label). Something tricky about
\;(rq label) is that it doesn’t always close the quasiquotation; whether it does depends on what the binding of
label is, so we have to keep track of every
label binding as we go along. Since we have so many macros to watch out for as we go along, and since we need to be careful with these
label bindings anyway, the easiest thing to do is to use proper hygiene.
Hygienically, when we process a quasiquotation, each step of reading gives us one of two kinds of result:
- We’ve reached the end of the quasiquotation, and there’s a tail of the input stream left behind.
- We’ve read one quasiquotation-shaped value, and some branches of the input stream are left behind.
We continue processing the branches until each one has reached the end of the quasiquotation by its own path. Then we build a quasiquotation-shaped value out of all the values we’ve received, and we return that accompanied by all the remaining branches of the input stream.
(Incidentally, we implement
uq by having it immediately return a “reached the end of the current quasiquotation” result, rather than returning a result with a quasiquotation-shaped value in it.)
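Mirroring these two outcomes as a data type (the constructor names here are my own, for illustration), uq becomes a one-liner:

```haskell
-- The two results described above: either we've reached the end of
-- the quasiquotation with a tail of input left behind, or we've read
-- one quasiquotation-shaped value with some input branches left.
data QQResult element rest
  = ReachedEnd rest
  | ReadElement element [rest]
  deriving (Eq, Show)

-- uq immediately reports the end of the current quasiquotation
-- rather than producing a quasiquotation-shaped value.
uq :: rest -> QQResult element rest
uq remainingInput = ReachedEnd remainingInput
```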
Great. So… how do we build a quasiquotation-shaped result out of quasiquotation-shaped pieces? And what is the analogue of this reading process one degree higher?
Cene doesn’t have reader macros yet. A positive answer to these questions would let me add them in a very principled way.
The remaining difficulty: Writing higher degrees of syntax in plain text
The biggest conundrum I’ve been dealing with is how to read a higher quasiquotation syntax by starting with plain text.
In Scheme, the
quasiquote syntax serves double duty: Outside a
quasiquote, quasiquote is an s-expression macro which upgrades to a (hardcoded) quasiquotation macroexpander for a moment and then returns to regular operation wherever it reaches
unquote. Inside a
quasiquote, quasiquote is a lot like a quasiquotation macro in our sense: it increments the current quasiquotation level, which gets decremented again on the next
unquote.
In terms of hygiene, that’s a little backwards: The very idea that a
quasiquote should have any given behavior inside a
quasiquote depends on the unhygienic assumption that the
quasiquote symbol has the same meaning in the quoted code as it does on the outside.
If only we processed
quasiquote before all other macros, then this question of hygiene would simplify. After all, we don’t think of a quoted s-expression as being made of the character
(; it’s already made of structured data. So if
quasiquote were processed in an earlier stage than other macros, we might think of our syntax as being made of quasiquote-shaped pieces already.
But what happens when we extrapolate this to an infinite number of degrees of higher quasiquotation? The result of the reader is already… what?
There’s something else fishy about processing
quasiquote early: Why would we process reader macros followed by quasiquote macros followed by s-expression macros? That’s a strange ordering. We go two degrees forward and one degree back…?
I think this can be reconciled, but I haven’t figured out quite how yet. Perhaps instead of actually processing
quasiquote before other macros, we allow
quasiquote to be defined as a macro that starts by transforming its input into the form it would be in if
quasiquote were processed before other macros. But this still means that
quasiquote somehow has to distinguish syntactically between uses of
quasiquote and uses of other macros, and I’m not sure how to make that hygienic.
Potential challenges in the abstract
Macros support a kind of sequential composition, but to write a call to the composition of two macros, the composition expression has to be allowed as a macro “name.” For instance, perhaps
(a b ~(left-paren) c ~(read-id) d ~(read-compose (right-paren) (right-paren)) can be equivalent to
(a b (c d)), where
~(...) is a more elaborate way to specify a reader macro name in text (rather than a single character),
(read-id) expands to the identity reader macro, and
(read-compose ...) expands to the composition of two given reader macros. If these operations could be given complete designs which enable equational reasoning techniques for macros, that would be a strong argument for the stability of this macro system design.
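As a deliberately oversimplified model (my assumption, not a worked-out design), suppose a reader macro is just a function on the input stream; then read-id and read-compose are the identity function and function composition, and the identity and associativity laws are exactly the kind of equational reasoning meant here:

```haskell
-- Oversimplified model: a reader macro transforms the input stream.
type Stream = String
type ReaderMacro = Stream -> Stream

-- The identity reader macro.
readId :: ReaderMacro
readId = id

-- The composition of two reader macros.
readCompose :: ReaderMacro -> ReaderMacro -> ReaderMacro
readCompose f g = f . g

-- Example: a macro that consumes one expected close paren.
rightParen :: ReaderMacro
rightParen (')' : rest) = rest
rightParen s = error ("expected ) at: " ++ s)
```

In this model, readCompose rightParen rightParen consumes two close parens at once, in the spirit of the ~(read-compose (right-paren) (right-paren)) example above.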
Going up to infinity raises the question of higher ordinals: If we have a single tool that works at an infinite number of levels, then do we need another level beyond that to implement that tool? But I’m not concerned about this. Not every macro is going to need to use an interpolated syntax to build its result. Some can get by with function calls.
Another issue with infinity is that we seem to be flirting with higher category theory. A quasiquotation is a region between one parenthesis-delimited region (designated with
quasiquote) and several parenthesis-delimited regions (designated with
unquote). In the case where the “several” is only one, higher quasiquotation becomes a system of regions between regions between regions, much like higher category theory deals with 3-morphisms between parallel 2-morphisms between parallel 1-morphisms. I understand that higher category theory is currently formulated in several ways, without any one way standing out as the most useful basis for studying all the others, so finding a single satisfying design for higher quasiquotation might turn out to be interrelated with hard problems. On the bright side, whatever hard problems exist in higher quasiquotation would still be there even if we didn’t notice their relationship to category theory.
Higher quasiquotation is something I’ve had in the back of my mind and in exploratory notes since late August, when I tried to take a type-theoretic approach to organize my design thoughts on weak parens. The thought that macros could expand to parens, and those parens could match up to make macro calls, led to the realization that those macro calls which resulted from matched-up parens were not the usual kind of s-expression macro call; they were processing quasiquotations of a higher degree than the syntax their parentheses came from.
In late February I wrote some more notes about how I could potentially implement quasiquotation syntaxes as macros. Then I realized that a successful implementation of this interface could also let me improve Cene’s error message source locations while bringing me closer to a design for Era’s hypertext syntax, so it started to be clear how valuable an implementation of this system would be.
Last month (early May), I started trying to implement a macroexpander to finally showcase these ideas. While I’d like to bring this macroexpander to Cene eventually, I figured I would get it working in Racket first. Perhaps I can release it as a standalone Racket library to help people learn what it’s capable of without the rest of Cene distracting them.
My first stab at an implementation in Racket was full of errors, and everything I tried to fix would break something else. They were clearly type errors, so I moved my attempts over to Haskell, and sure enough, I had a lot of trouble getting things to typecheck at first.
Several refactors later in Haskell, I still don’t have a complete macroexpander or complete examples running yet. Several times I thought I implemented a data structure which could represent quasiquotations of arbitrarily large degrees, and it turned out to be something less helpful than that. I’ve basically focused on determining viable type signatures and implementations of simple macros.
I got pretty stuck for a week at this commit thinking about how to implement a way to embed macros of a higher degree of quasiquotation into a lower one. Once I went back and corrected and clarified the comments for others to read, I found a certain way to make it work, albeit not one that’s really gotten me what I want here.
Further implementation thoughts
Now that I’ve gone over the basic ideas of higher quasiquotation again in this post, some of the things I’m missing in my implementation attempts are clearer to me.
It seems I should implement
quasiquote according to the outline of the hygienic approach above, with a result type along these lines:
data QQMacroResult elementAnd restOfInput
  = QQMacroResultEnd restOfInput
  | QQMacroResultElement (elementAnd restOfInput)
The elementAnd type constructor would usually be a monad. That way the
quasiquote macro can join together several layers of
(elementAnd (elementAnd ...)) representing quasiquotation-shaped “elements” into a single quasiquotation-shaped “element” that’s a quasiquotation-shaped “list.” A macro which simply performs this flattening would be like a version of
quasiquote that acts as a fancy identity operation, effectively becoming whitespace as the contained “elements” splice into the surroundings. So
quasiquote itself would need to wrap its result before returning it.
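The flattening step is just the monad’s join. Purely for illustration, take elementAnd to be the pairing monad ((,) w) for a monoid w, with the first component standing in for the “element” data and the second for the rest of the input:

```haskell
import Control.Monad (join)

-- Two nested elementAnd layers, with ((,) [String]) standing in for
-- the elementAnd type constructor.
nested :: ([String], ([String], Char))
nested = (["outer"], (["inner"], 'x'))

-- join merges the nested layers into one, the way quasiquote would
-- merge nested quasiquotation-shaped "elements" into a single one.
flattened :: ([String], Char)
flattened = join nested
```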
We may want
quasiquote to preprocess its body to pretend that
quasiquote is processed before other macros, as described at the end of “The remaining difficulty” above. This would mean
quasiquote not only wraps its elements but first crawls them to detect named occurrences of itself and replace them with unnamed, quasiquotation-shaped data.
To make read-compose work properly with
quasiquote when it crawls its elements, we’ll effectively need
quasiquote to crawl into occurrences of
read-compose to detect occurrences of
quasiquote inside. The most elegant way to do this is probably to have
quasiquote crawl its body using a full environment which maps macro names to assumed meanings, with bindings for both
quasiquote and read-compose being present in that environment. Any other macros which begin quasiquotations should be part of this environment as well, which means the environment used for the crawl should not be created based on a particular list of names but should be automatically derived from the lexical environment of the
quasiquote macro call.
So now I just need to worry about defining macros in such a way that they can upgrade themselves into macros that do nothing but preprocess quasiquotation macros within. That actually sounds pretty familiar. When I hardcoded various extensions into Cene’s quasiquotation implementations, I had to write each macro twice, with one version of the macro that reconstructed its own macro call just as it received it and one that implemented the intended behavior of the macro. So this direction seems like it could lead to a full implementation.
It seems like for a month now, there’s always been some new and incomplete idea like this that keeps me busy trying to make this higher quasiquotation macroexpander work. A few of the refactoring passes have even been as drastic as this one, which would basically have me starting with a blank page, writing all new code. So I might take a break from this project for a while.
Still, it would be great to discuss this with anyone who finds this interesting. All feedback is appreciated. :)