GraphQL API Rewrite
Solved problems and lessons learned
A few months ago, at the end of March this year, we decided to start rewriting a part of our backend. The big change was in the GraphQL department: going from the code-first approach to the schema-first approach.
This is a write-up of the things we changed and things we learned.
In 2017, when the initial version of our GraphQL API was written, the ecosystem and tooling were still finding their legs, and there wasn't as much tooling as there is now to support a schema-first approach unless you were ready to write a lot of boilerplate. By boilerplate I mean writing the GraphQL schema first and then writing matching TypeScript definitions. Basically doing the same thing twice. Not ideal.
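To make the duplication concrete, here is an illustrative sketch (not our actual 2017 code): the SDL string and its hand-written TypeScript mirror describe the exact same shape twice.

```typescript
// Illustrative only: the SDL string and the TypeScript interface
// each describe the Oem shape, so every change must be made twice.
const typeDefs = `
  type Oem {
    id: ID!
    name: String!
  }
`

// ...and the hand-written TypeScript mirror used by resolver code:
interface Oem {
  id: string
  name: string
}

const exampleOem: Oem = { id: '1', name: 'Rimac' }
```

Forget to update one side and the schema silently drifts away from the types.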
TypeGraphQL to the rescue. Kinda. TypeGraphQL heavily utilizes decorators to define the GraphQL schema and, from those, generates the actual schema.graphql file.
You still have to write the same thing twice, it just hurts a bit less, since defining a GraphQL field is done a line or two away from defining the TypeScript type.
Problems
Before diving into the new architecture, I will outline a few problems with what we are currently using and why we don't like it.
TypeGraphQL
Let's look at a brief example of how it works and what its problems are.
Inconsistent Types Problem
First, we will define some types, inputs and payloads:
import { Field, ID, ObjectType, InputType } from 'type-graphql'
@ObjectType() // Same as a graphql `type`
export class OemType {
@Field(() => ID)
public id: string
@Field(() => String)
public name: string
@Field(() => String)
public tag: string
}
@InputType() // Same as a graphql `input`
export class CreateOemMutationInput {
@Field(() => String)
public tag: string
@Field(() => String, { nullable: true })
public name: string
}
@ObjectType()
export class CreateOemMutationPayload {
@Field(() => OemType)
public oem: OemType
}
Each decorator you see here is responsible for generating a part of the schema. For example, the OemType class will output this GraphQL code:
type OemType {
id: ID!
name: String!
tag: String!
}
Let's look at the resolver with a mutation that creates the oem:
@Resolver(() => OemType)
export class OemResolver {
  // logger and oemService are injected via the constructor (omitted for brevity)
@Mutation(() => CreateOemMutationPayload)
public async createOem(
@Arg('input', () => CreateOemMutationInput) input: CreateOemMutationInput,
): Promise<CreateOemMutationPayload> {
this.logger.info(`Creating oem with input ${JSON.stringify(input)}`)
return this.oemService.createOne(input)
}
}
Straight away we have a problem. Did you catch it? In CreateOemMutationInput we defined the name field as nullable inside the decorator, but as a non-nullable string in the TypeScript code, meaning our generated GraphQL code will look like this:
input CreateOemMutationInput {
name: String
tag: String!
}
This allows the client to omit name entirely, or to send it as null or undefined, since GraphQL doesn't differentiate between the two. If they do, we might error on our side, since we are always expecting a string. It doesn't always cause a problem, which makes it even harder to catch.
Not good.
An annoying thing with TypeGraphQL is that, again, you have to write the same thing twice, allowing for mistakes and inconsistencies.
We did eventually introduce an ESLint rule to check for this, but it has its limitations and wasn't perfect. It also pointed us to almost a dozen places where this problem was happening.
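The failure mode is easy to reproduce in plain TypeScript. A minimal sketch with hypothetical types: the TypeScript side claims name is always a string, while the generated schema permits null.

```typescript
// Hypothetical sketch: the TS type says `name: string`, but the schema
// marked the field nullable, so a null value is valid on the wire.
type CreateOemMutationInput = { name: string; tag: string }

const formatName = (input: CreateOemMutationInput): string =>
  input.name.toUpperCase() // blows up at runtime if name is actually null

// A payload the generated schema happily accepts from a client:
const wireInput = JSON.parse('{"name": null, "tag": "nevera"}') as CreateOemMutationInput

let threw = false
try {
  formatName(wireInput)
} catch {
  threw = true // TypeError: cannot read 'toUpperCase' of null
}
```

The compiler is perfectly happy with this code; the mismatch only surfaces in production when a client actually sends null.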
Another Layer Problem
Since TypeGraphQL is a layer on top of GraphQL itself, you never get to use the latest GraphQL version unless TypeGraphQL is frequently updated.
As of the time of this writing, TypeGraphQL still requires GraphQL version 14 to be installed, which was released in August 2018. There is a preview version with GraphQL 16 support, but progress on it has been really slow.
Not good.
Verbose Code Problem
Using TypeGraphQL requires writing verbose code. When you have a lot of queries, mutations and field resolvers in a single resolver class, things start to get ugly, so to speak.
Take the resolver example above with just one mutation. Now add, let's say, 5 more mutations, 5 queries and 4 field resolvers, and you get a lot of ugly, hard-to-read code.
Now add an auth layer to all of those, which is also done using decorators, and you won't have a fun time.
Oh wait, you also have to write inputs, args, payloads and return types for all of those. That's a lot of classes. You can almost get a college diploma after writing all of those.
Me, Myself & I Problem
Since you define the GraphQL contract by writing decorators on the backend, you sort of lose the ability to agree on a contract with other teams in a separate place.
You could do it by writing a native GraphQL schema first and agreeing with everyone on that, but then you have to take that schema and rewrite it the way TypeGraphQL wants it.
Not ideal.
The problem we often encountered with TypeGraphQL is that you define a contract all by yourself, not knowing the full scope of your clients' requirements, and then you have to go back and adjust things.
Logging
Notice that in the createOem mutation above we have a line that logs the input and states it's creating an oem. This might not seem like a problem, but given we have around 250 mutations in the current system, it becomes tedious to keep all of those logs consistent and nicely formatted.
Especially because inputs and other relevant info were stringified into the message itself. When you look at the logs, you get a long snake of text that is not a fun time to read while trying to figure out what is happening.
The biggest downside of this approach is that you have no notion of a session. In other words, you log the input stating you are creating an oem, and then, if an error happens a few lines later, you have no easy way to tell which call that error relates to. You don't know the input that caused it.
If you have a backend that gets called a lot, your logs will be an out-of-order mess and debugging a problem won't be a fun time.
ORM
Another change we made to the existing backend is swapping TypeORM for Prisma.
There are a few reasons for this change:
-
If you are using a migration tool like we are, you have to first write the migrations, which means defining your tables, columns etc., and then do the same thing with TypeORM using classes and decorators.
Let's say you define an oems table which has columns id and name. You do this via your migration tool, in our case Liquibase, like so:
{
  "databaseChangeLog": [
    {
      "logicalFilePath": "1680163451-create-oems-table.migration.json",
      "objectQuotingStrategy": "QUOTE_ALL_OBJECTS"
    },
    {
      "changeSet": {
        "id": "1680163451",
        "author": "domagoj.vukovic2@rimac-technology.com",
        "comment": "create-oems-table",
        "changes": [
          {
            "createTable": {
              "tableName": "oems",
              "columns": [
                {
                  "column": {
                    "name": "id",
                    "type": "uuid",
                    "defaultValueComputed": "public.uuid_generate_v4()",
                    "constraints": {
                      "nullable": false,
                      "primaryKey": true
                    }
                  }
                },
                {
                  "column": {
                    "name": "name",
                    "type": "text",
                    "constraints": {
                      "nullable": false
                    }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  ]
}
After that you again need to define that same entity with TypeORM like so:
import { Column, Entity, PrimaryColumn } from 'typeorm'

@Entity('oems')
export class OemEntity {
  @PrimaryColumn()
  public id: string

  @Column()
  public name: string
}
This again becomes a breeding ground for type mismatches between your database and the entities in your code.
-
Poor maintenance is one of the big ones. TypeORM has been around for a long time and held the title of best TypeScript ORM for quite some time. But since writing an ORM is no small task, the maintainers eventually got burnt out and support slowed down.
The lack of support allowed a lot of bugs to crop up in the library, and this cost us days of hair-pulling over some obscure errors.
-
DX is also, in my opinion, quite bad. But I won't get into that right now. Maybe an opportunity for another post.
The Solution
Now that I've rambled enough about the problems, let's see what we are switching to and why it's better.
Logging
For logging we switched to Pino, since it is well maintained and supports transports, which let you offload sending logs to an outside system like Loki via a worker thread, without blocking the main thread.
Let's solve the session problem first, using the built-in Apollo context and plugins.
The first thing we have to do is generate the requestId as soon as we receive a request. This is done inside Apollo's context function like so:
import type { Context } from '@rimac-technology/oem-dashboard-core-schema/lib/resolvers/context'
import { randomUUID } from 'crypto'
import { logger } from '../../shared/logger'
export const context = async (): Promise<Context> => {
return {
logger: logger.child({ requestId: randomUUID() }),
}
}
Then this context is handed to the Apollo server like so:
import type { Context } from '@rimac-technology/oem-dashboard-core-schema/lib/resolvers/context'

// the plugin file path is illustrative
import { ApolloPluginLogger } from './apolloPluginLogger'
import { context } from './context'

const server = new ApolloServer<Context>({
  typeDefs,
  resolvers,
  // plugins are an ApolloServer option, not a startStandaloneServer one
  plugins: [ApolloPluginLogger],
})

await startStandaloneServer(server, {
  context,
})
Next, we have to define the ApolloPluginLogger:
import type { ApolloServerPlugin } from '@apollo/server'
import type { Context } from '@rimac-technology/oem-dashboard-core-schema/lib/resolvers/context'
export const ApolloPluginLogger: ApolloServerPlugin<Context> = {
async requestDidStart(requestContext) {
requestContext.contextValue.logger.info({
message: 'Request started',
operationName: requestContext.request.operationName,
query: requestContext.request.query,
variables: requestContext.request.variables,
})
return {
async didEncounterErrors(errorContext) {
for (const error of errorContext.errors) {
requestContext.contextValue.logger.error({
error,
message: 'Encountered error',
})
}
},
async willSendResponse(responseContext) {
requestContext.contextValue.logger.info({
message: 'Sending response',
response: responseContext.response,
})
},
}
},
}
This will log the whole input of each request, potential errors, and whatever is returned. Most importantly, they all have the same requestId attached to them.
If you decide to use the logger inside resolvers, those logs will also have that same requestId attached.
const OemResolver: OemModule.Resolvers = {
Mutation: {
createOem: async (_, variables, context) => {
context.logger.info('Hi')
// Do stuff
context.logger.info('Bye')
return {
oem,
}
},
},
}
The nicely formatted logs come from pino-pretty. You get them by simply piping the server output to it, like so: ts-node ./src/index.ts | pino-pretty.
Note that pino-pretty isn't used in production, since the colors would produce ugly log text. In prod, it's just plain JSON.
This is not a perfect solution, since we have no knowledge of what happens outside this server. If we communicate with other microservices, we are missing the session id there.
We plan to solve this by integrating OpenTelemetry alongside the whole Grafana stack for logging, metrics, traces and visualization. But that's for a later day once we finish the rewrite.
Pino also limits the logs by setting the log depth to 5 and the edge limit to 100 by default.
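The per-request child-logger idea itself is simple enough to sketch without Pino. This stand-in only mimics pino's child() bindings behavior: a child logger merges its bindings into every line it emits, so everything logged for one request shares the same requestId.

```typescript
// Minimal stand-in for the pino child-logger pattern (no pino dependency).
import { randomUUID } from 'node:crypto'

type Bindings = Record<string, unknown>

interface MiniLogger {
  lines: Bindings[]
  info(entry: Bindings): void
  child(extra: Bindings): MiniLogger
}

const makeLogger = (bindings: Bindings = {}): MiniLogger => {
  const lines: Bindings[] = []
  return {
    lines,
    // every line gets the logger's bindings merged in
    info(entry) {
      lines.push({ ...bindings, ...entry })
    },
    // a child inherits the parent's bindings plus its own
    child(extra) {
      return makeLogger({ ...bindings, ...extra })
    },
  }
}

const requestLogger = makeLogger().child({ requestId: randomUUID() })
requestLogger.info({ message: 'Request started' })
requestLogger.info({ message: 'Sending response' })
```

Grepping the logs for one requestId then yields the full story of a single request, which is exactly what the Apollo plugin above gives us.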
Testing
In order to test our GraphQL API, we use executeOperation from Apollo.
The benefit of using executeOperation is that you don't have to start your whole server to ping the API. Apollo does some magic potion stuff under the hood and directly calls the appropriate mutation/query when you use it.
This speeds up the tests by no insignificant amount, since there are no network requests, and it also allows you to collect coverage. Let's see how it works in practice:
Before we actually fire a test, we need a mutation first:
import { gql } from 'graphql-tag'
export const CREATE_OEM = gql`
mutation CreateOem($input: CreateOemInput!) {
createOem(input: $input) {
oem {
id
name
}
}
}
`
Then we can use it like so:
it('should create oem', async () => {
const input: CreateOemInput = {
name: faker.lorem.words(),
}
const response = await executeOperation<CreateOemMutation, CreateOemMutationVariables>({
query: CREATE_OEM,
variables: {
input,
},
})
expect(response.body?.singleResult.errors).toBeUndefined()
expect(response.body?.singleResult.data?.createOem.oem).toMatchObject({
id: expect.any(String),
name: input.name,
})
})
Notice the CreateOemInput, CreateOemMutation and CreateOemMutationVariables types. Those are all autogenerated from our mutation above, so we get a type-safe input and response. That is done using a few packages from @graphql-codegen, like so:
import type { CodegenConfig } from '@graphql-codegen/cli'
const config: CodegenConfig = {
documents: './src/resolvers/**/graphql/__test__/*.gql.ts',
generates: {
'./src/shared/test/types.generated.ts': {
config: {
scalars: {
DateTime: 'Date',
},
},
plugins: ['typescript', 'typescript-operations'],
},
},
overwrite: true,
schema: './node_modules/@rimac-technology/oem-dashboard-core-schema/lib/schema.graphql',
}
export default config
It picks up all of our mutations and queries for tests and generates one big file with all the types needed for type-safe tests.
One caveat with this approach: if you use executeOperation the way the Apollo docs state, calling it directly as server.executeOperation(...), the return type will be a union of the single and incremental result kinds, since Apollo now has experimental support for @defer and @stream.
const response = await executeOperation<CreateOemMutation, CreateOemMutationVariables>({
query: CREATE_OEM,
variables: {
input,
},
})
// Can be
response.body.singleResult
// Or
response.body.initialResult
response.body.subsequentResults
We created a wrapper function that asserts singleResult is always returned, since we haven't yet adopted deferring anywhere in our stack and it would be tedious to assert the singleResult in each and every test.
The types are a bit messy, but that is because Apollo doesn't export compact types for what executeOperation takes and returns, so we have to assemble some of those by hand.
import assert from 'node:assert'
import type { HTTPGraphQLHead } from '@apollo/server'
import type { ExecuteOperationOptions, GraphQLResponseBody, VariableValues } from '@apollo/server/dist/esm/externalTypes/graphql'
import type { Context } from '@rimac-technology/oem-dashboard-core-schema/lib/resolvers/context'
import { server } from '../../../../server'
type ResponseDataType = Record<string, unknown>
type RequestType<TData extends ResponseDataType, TVariables extends VariableValues> = Parameters<
typeof server.executeOperation<TData, TVariables>
>[0] & { context?: ExecuteOperationOptions<Context>['contextValue'] }
type SingleResponseReturnType<TData extends ResponseDataType> = {
body?: Extract<GraphQLResponseBody<TData>, { kind: 'single' }>
http: HTTPGraphQLHead
}
export const executeOperation = async <TData extends ResponseDataType, TVariables extends VariableValues>(
request: RequestType<TData, TVariables>,
): Promise<SingleResponseReturnType<TData>> => {
const response = await server.executeOperation<TData, TVariables>(request)
assert(response.body.kind === 'single')
return {
body: {
kind: response.body.kind,
singleResult: response.body.singleResult,
},
http: response.http,
}
}
Validation
Validation is done using Zod, which has worked extremely well for us. We have been using it on the frontend for quite some time for form input validation, and now we use it here as well for validating query, mutation and subscription inputs.
For the createOem mutation, we define the validation like so:
import { z } from 'zod'
export const createOemValidation = z.object({
input: z.object({
name: z.string().min(3).max(255),
}),
})
and use it like so in our resolver:
import type { OemModule } from '@rimac-technology/oem-dashboard-core-schema'
import { createOemValidation } from './oem.validation'
const OemResolver: OemModule.Resolvers = {
Mutation: {
createOem: async (_, variables) => {
      const { input } = validateInput(createOemValidation, variables)
      // ...create and return the oem with the validated input
    },
},
}
The purpose of validateInput is that we can intercept a Zod error and throw our own custom ArgumentValidationError.
import type { z } from 'zod'
import { ArgumentValidationError } from '../errors'
export const validateInput = <TSchema extends z.ZodTypeAny>(schema: TSchema, input: Record<string, unknown>) => {
const result = schema.safeParse(input)
if (!result.success) {
throw new ArgumentValidationError('Invalid input provided', {
extensions: {
errors: result.error.format(),
},
})
}
return result.data as z.infer<TSchema>
}
Errors
For each possible error we can throw, we define our own class that extends GraphQLError:
import type { GraphQLErrorOptions } from 'graphql'
import { GraphQLError } from 'graphql'
import { ErrorCode } from './Codes'
export class NotFoundError extends GraphQLError {
constructor(message: string, options?: GraphQLErrorOptions) {
super(message, {
...options,
extensions: {
code: ErrorCode.NOT_FOUND,
...options?.extensions,
},
})
}
}
The benefit of this instead of just throwing a regular error is that we can attach related data that is useful when looking at logs and debugging.
throw new NotFoundError('Created filter not found', {
extensions: {
givenFilters,
key,
possibleFilters,
},
})
This also allows clients to make decisions based on the error code. For example, if we throw an ArgumentValidationError, the client can check the error code, parse out the invalid fields and show those to the user.
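On the client side, that decision can be as simple as keying off the code extension. A hedged sketch: the GqlError shape, the fieldErrorsFrom helper and the ARGUMENT_VALIDATION_ERROR code value are all illustrative assumptions, not our actual client code.

```typescript
// Hypothetical client-side handling keyed on the error code extension.
type GqlError = {
  message: string
  extensions?: { code?: string; errors?: Record<string, unknown> }
}

// Pull per-field validation details out of a GraphQL error list.
const fieldErrorsFrom = (errors: GqlError[]): Record<string, unknown> => {
  const validation = errors.find(
    (e) => e.extensions?.code === 'ARGUMENT_VALIDATION_ERROR', // assumed code value
  )
  return validation?.extensions?.errors ?? {}
}

const shown = fieldErrorsFrom([
  {
    message: 'Invalid input provided',
    extensions: {
      code: 'ARGUMENT_VALIDATION_ERROR',
      errors: { name: 'Too short' },
    },
  },
])
```

Because the codes are an agreed-upon part of the contract, the client never has to parse error message strings.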
We never throw an error that is not in the predefined error list. To ensure this actually happens, we intercept each thrown error before it's sent to the client by utilizing Apollo's formatError and check whether the error is from the predefined list. If not, we throw an UnexpectedError.
This allows us to set up an email trigger in our logging system that will indicate to us that something unexpected happened and we should investigate.
import type { GraphQLFormattedError } from 'graphql'
import { ErrorCode, UnexpectedError } from '../shared/errors'
export const formatError = (formattedError: GraphQLFormattedError, error: unknown): GraphQLFormattedError => {
if (typeof formattedError.extensions?.code !== 'string') {
    return new UnexpectedError('Error code not a string', {
extensions: {
error: JSON.stringify(error),
},
})
}
if (Object.values<string>(ErrorCode).includes(formattedError.extensions.code)) {
return formattedError
}
return new UnexpectedError('Error not in expected status codes', {
extensions: {
error,
message: 'Encountered unexpected error when formatting error in apollo',
},
})
}
This is then passed to the Apollo server:
export const server = new ApolloServer<Context>({
formatError,
// ...
})
ORM
The biggest benefit we saw from switching from TypeORM to Prisma was the introspection feature Prisma provides.
This eliminates having to write migrations for your database and then writing the same representation in your code as entities. Introspection allows you to maintain a single source of truth in regard to your database.
The flow goes like this:
- Write your database migrations
- Migrate the database
- Introspect your database with Prisma
- Have an autogenerated database representation in the Prisma ORM
This way, whatever is in your database, is what you can work with in Prisma. The column types and constraints you deal with in Prisma are always one-to-one identical to what's in your database.
Let's see this in action by first writing a migration:
{
"databaseChangeLog": [
{
"logicalFilePath": "1680163451-create-oems-table.migration.json",
"objectQuotingStrategy": "QUOTE_ALL_OBJECTS"
},
{
"changeSet": {
"id": "1680163451",
"author": "domagoj.vukovic2@rimac-technology.com",
"comment": "create-oems-table",
"changes": [
{
"createTable": {
"tableName": "oems",
"columns": [
{
"column": {
"name": "id",
"type": "uuid",
"defaultValueComputed": "public.uuid_generate_v4()",
"constraints": {
"nullable": false,
"primaryKey": true
}
}
},
{
"column": {
"name": "name",
"type": "text",
"constraints": {
"nullable": true
}
}
                },
{
"column": {
"name": "multi_word_column",
"type": "text",
"constraints": {
"nullable": true
}
}
}
]
}
}
]
}
}
]
}
Then, after we migrate the database, we need to sync the changes with our Prisma state. We do this by running prisma db pull. This will auto-generate the changes in schema.prisma:
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DB_PRISMA_URL")
}
model Oem {
id String @id @default(dbgenerated("public.uuid_generate_v4()")) @db.Uuid
name String
multiWordColumn String @map("multi_word_column")
@@map("oems")
}
After that, we run prisma generate and we can use our Prisma client, which has a representation of entities identical to what's in the database:
prisma.oem.create({
data: {
name: 'Hi',
multiWordColumn: 'Hello',
},
})
By default, the model name in Prisma will be oems and the multi-word field will stay multi_word_column, since that's how they are defined in the actual database.
We change the model names to singular PascalCase and the columns to camelCase by utilizing @@map("name") for table names and @map("name") for column names, as can be seen in the Prisma schema above.
You don't have to do this, but we like to keep the casing consistent, and changing it isn't a big deal.
Schema & Resolvers
The GraphQL schema is defined in a separate repository.
This allows us to review the contract changes via a PR before any development begins. The people working on the web app can review it alongside people working on the API part. This helps catch any inconsistencies and potential mistakes.
This also allows us to version the schema and indicate any breaking changes. We created an automated flow of releasing versions and writing the changelog based on commit messages.
Let's look at the code aspect of the schema first approach:
The first thing we did, instead of writing one big schema.graphql file, was to split the schema into multiple smaller files for each resolver. I'll go into the benefits of this in just a moment.
Let's see what is in each file for oem:
inputs.graphql
input CreateOemInput {
name: String!
}
input EditOemInput {
id: ID!
name: String!
}
input DeleteOemInput {
id: ID!
}
mutations.graphql
type Mutation {
createOem(input: CreateOemInput!): CreateOemPayload!
deleteOem(input: DeleteOemInput!): DeleteOemPayload!
editOem(input: EditOemInput!): EditOemPayload!
}
payloads.graphql
type CreateOemPayload {
oem: Oem!
}
type EditOemPayload {
oem: Oem!
}
type DeleteOemPayload {
oem: Oem!
}
queries.graphql
type Query {
oem(id: ID!): Oem!
oems: [Oem!]!
}
types.graphql
type Oem {
id: ID!
name: String!
}
This allows us to utilize typescript-resolvers from GraphQL Code Generator. The benefit of using it is that it generates a namespace OemModule.Resolvers with all the types for just that resolver folder, which allows us to do this:
import type { OemModule } from '@rimac-technology/oem-dashboard-core-schema'
const OemResolver: OemModule.Resolvers = {
Mutation: {
createOem: async () => {
// ...
},
deleteOem: async () => {
// ...
},
editOem: async () => {
// ...
},
},
Query: {
oem: async () => {
// ...
},
oems: async () => {
// ...
},
},
}
export default OemResolver
By setting the type of OemResolver to OemModule.Resolvers, all the Mutation and Query functions are type safe. All 4 params of each call (parent, variables, context, info) are type safe, exactly as written in the schema.
This way it is very clear what you can write in each resolver, and resolvers are kept separate.
This is what the codegen config file looks like:
import type { CodegenConfig } from '@graphql-codegen/cli'
const config: CodegenConfig = {
generates: {
'./src/enums.ts': {
// By default, enums are not generated as TS enums, so this fixes it
plugins: ['typescript'],
config: {
onlyEnums: true,
namingConvention: {
enumValues: 'change-case-all#upperCase',
},
},
},
'./src/resolvers': {
config: {
resolverTypeWrapperSignature: 'T',
contextType: './context#Context',
defaultMapper: 'DeepPartial<{T}>',
scalars: {
DateTime: 'Date',
},
useIndexSignature: true,
},
plugins: [
'typescript',
'plugins/typescript-resolvers/index.js',
{
add: {
content: 'export type DeepPartial<T> = T extends object ? { [P in keyof T]?: DeepPartial<T[P]>; } : T;',
},
},
],
preset: 'graphql-modules',
presetConfig: {
baseTypesPath: 'index.ts',
filename: 'index.ts',
useGraphQLModules: false,
},
},
},
hooks: {
afterOneFileWrite: ['prettier --write'],
},
overwrite: true,
schema: './src/**/*.graphql',
}
export default config
You may notice an unusual line: export type DeepPartial<T> = T extends object ? { [P in keyof T]?: DeepPartial<T[P]>; } : T;
This line both solves a problem and introduces one.
By default, when you generate your types using graphql-codegen, it marks all fields in return types as required. This poses the problem of not being able to write field resolvers.
Take this type definition:
type Vehicle {
id: ID!
vin: String!
softwareUpdates: [SoftwareUpdate!]!
}
It mandates that every time you return a Vehicle, you must also resolve the softwareUpdates field. Now, the joy of GraphQL comes from returning to the client exactly what was requested. If we took the default auto-generated types, we would have to do this:
const VehicleResolver: VehicleModule.Resolvers = {
Query: {
vehicles: async () => {
return orm.vehicle.findMany({
include: {
softwareUpdates: true,
},
})
},
},
}
Now if the client requested just the id and vin fields, we would still go to the database and fetch all the software updates for each vehicle. You can quickly see how this falls apart. We are overfetching.
Not good.
To get around this problem, we make all fields optional by default using the DeepPartial type mentioned above. And now we can do this:
const VehicleResolver: VehicleModule.Resolvers = {
Vehicle: {
softwareUpdates: async (parent) => {
return orm.softwareUpdates.findMany({
where: {
vehicle: {
id: parent.id,
},
},
})
},
},
Query: {
vehicles: async () => {
return orm.vehicle.findMany()
},
},
}
Now if you request softwareUpdates you get them as usual, and if you don't, we don't query anything extra from the database. No more overfetching.
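The DeepPartial helper from the codegen config can be exercised on its own. A minimal sketch, where Vehicle is a hand-written stand-in for the generated type:

```typescript
// Recursive optionality: every field, at every depth, becomes optional.
type DeepPartial<T> = T extends object ? { [P in keyof T]?: DeepPartial<T[P]> } : T

// Stand-in for the codegen-generated Vehicle type:
type Vehicle = {
  id: string
  vin: string
  softwareUpdates: { id: string; version: string }[]
}

// A resolver may now return only the scalar fields and leave
// softwareUpdates to its dedicated field resolver:
const partial: DeepPartial<Vehicle> = { id: 'v1', vin: 'VIN123' }
```

Without DeepPartial, that last assignment would be a compile error because softwareUpdates is missing.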
Whether a field is resolved right away or in a field resolver should depend on the business case. If you are always going to need some relation on a type, it's better to resolve it right away than to separate it into a field resolver and pay for one extra database call.
A problem we noticed with this approach stems from default TypeScript behavior. Remember how I said all fields can be optional? Yup, but they can also have undefined values. Why? Because in TypeScript, putting a ? next to a key means two things.
So a question for you, does this snippet error?
type VehicleType = {
id: string
vin: string
name?: string
}
const a: VehicleType = {
id: 'a',
vin: 'b',
name: undefined,
}
const b: VehicleType = {
id: 'a',
vin: 'b',
}
You might think that it does, since we specified that name is an optional key, not that it can have an undefined value.
Unfortunately, it does not: TypeScript treats a missing key and a key with a value of undefined as the same if we mark the key as optional with key?: string.
This now means that all of our generated types allow returning undefined as a value, which we don't want; if it does happen, it will cause an error wherever the field is required.
The behaviour we want: if we specify a field in the return type, it has to be of its declared type; if we don't specify it, all good.
A solution for this is to enable a little-known TypeScript compiler flag in tsconfig.json called exactOptionalPropertyTypes.
This flag explicitly differentiates between an optional key and a key whose value type is type | undefined.
But after enabling it, problems... again. Since this weird behavior has been in TypeScript for so long, some of the dependencies we use rely on it, and without some serious hacking we can't just turn the flag on and live a happy life.
We are yet to solve this in a smart way.
That's it. Hope you enjoyed.
The rewrite is still in progress. If I find any more interesting things I might do another write-up.