package asai

Design Principles

Five Independent Parameters of a Diagnostic

In addition to the main message, the API should allow implementers to easily specify the following five factors of a diagnostic, and it should be possible to specify them independently.

  1. Whether the program terminates after sending the message. This is indicated by the choice between emit (for non-fatal messages) and fatal (for fatal ones).
  2. A message code with a succinct, Google-able representation, for example V0003. Such a representation makes it easy for an end user to report a bug or ask for help.
  3. How seriously end users should take the message. Is it a warning, an error, or just a hint? See the type severity for available classifications. In practice, messages with the same message code tend to have the same severity, and thus our API requires an implementer to specify a default severity for each message code. While this seems to violate the independence constraint, our API allows overriding the default severity at each call of emit or fatal.
  4. A stack backtrace. There should be a straightforward way to push new stack frames. Our implementation is trace.
  5. Additional messages. It should be possible to attach any number of additional related messages. Currently, emit and fatal accept these additional messages as part of each call. (A sketch of a typical call site follows this list.)
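
As a rough illustration, the following OCaml sketch shows how these factors can appear at a call site. It is only a sketch: the functor Asai.Reporter.Make, the type Asai.Diagnostic.severity, and the exact signatures of short_code, default_severity, emit, and trace are assumptions based on the description above and may differ from asai's actual interface.

    (* Hypothetical message type: each code has a succinct, Google-able
       representation (factor 2) and a default severity (factor 3). *)
    module Message = struct
      type t = UnboundVariable | EmptyCase

      let short_code : t -> string = function
        | UnboundVariable -> "V0001"
        | EmptyCase -> "V0002"

      let default_severity : t -> Asai.Diagnostic.severity = function
        | UnboundVariable -> Asai.Diagnostic.Error
        | EmptyCase -> Asai.Diagnostic.Warning
    end

    module Reporter = Asai.Reporter.Make (Message)

    let check_variable x bound =
      (* Factor 4: push a stack frame for the duration of this check. *)
      Reporter.trace "when checking a variable" @@ fun () ->
      if not (List.mem x bound) then
        (* Factor 1: emit is non-fatal; fatal would terminate instead.
           Factor 3, overridden: this particular call downgrades the
           severity to a warning. *)
        Reporter.emit ~severity:Asai.Diagnostic.Warning Message.UnboundVariable
          ("variable " ^ x ^ " is not bound")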

Compositionality: Using Libraries that Use asai

It should be easy for an application to use other libraries that themselves use asai. Our current implementation allows an application to adopt messages from a library.

Unicode Art

There is a long history of using ASCII printable characters and ANSI escape sequences, and recently also non-ASCII Unicode characters, to draw pictures on terminals. To display compiler diagnostics, this technique has been used to assemble line numbers, code from end users, code highlighting, and other pieces of information in a visually pleasing way. Non-ASCII Unicode characters (from implementers or from end users) greatly expand the vocabulary of ASCII art, and we will call the new art form Unicode art to signify the use of non-ASCII characters.

No Column Numbers, Ever

The arrival of non-ASCII Unicode characters imposes new challenges because their visual widths are unpredictable without knowing the exact terminal (or terminal emulator), the exact font, etc. Unicode emoji sequences might be one of the most challenging cases: a pirate flag (🏴‍☠️) may be shown as a single emoji flag on supported platforms but as a sequence of a black flag (🏴) and a skull (☠️) on other platforms. This means the visual width of the pirate flag is unpredictable. (See Unicode Emoji, Section 2.2.) The rainbow flag (🏳️‍🌈), skin tones, and many other emoji sequences have the same issue. Other less chaotic but still challenging cases include characters whose East Asian width is Ambiguous. These challenges bear some similarity to the unpredictability of the visual width of horizontal tabulations, but in a much wilder way.
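
To see why the width is unpredictable, note that the pirate flag is not a single character but an emoji ZWJ sequence. The snippet below, which uses only the OCaml (>= 4.14) standard library and is not part of asai, lists its scalar values.

    let () =
      let pirate_flag = "\u{1F3F4}\u{200D}\u{2620}\u{FE0F}" in
      (* 13 bytes in UTF-8, yet how wide it renders depends entirely on the
         terminal and the font. *)
      Printf.printf "bytes: %d\n" (String.length pirate_flag);
      let rec list_scalars i =
        if i < String.length pirate_flag then begin
          let d = String.get_utf_8_uchar pirate_flag i in
          Printf.printf "U+%04X\n" (Uchar.to_int (Uchar.utf_decode_uchar d));
          list_scalars (i + Uchar.utf_decode_length d)
        end
      in
      (* Prints U+1F3F4, U+200D, U+2620, U+FE0F: a black flag, a zero-width
         joiner, a skull, and a variation selector. *)
      list_scalars 0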

Note: "Unicode characters" are not really defined in the Unicode standard, and here they mean Unicode scalar values, that is, all Unicode code points except the surrogate code points for UTF-16 to represent all scalar values. Although the word "character" has many incompatible meanings and usages, we decided to call scalar values "Unicode characters" anyway because (1) most people are not familiar with the official term "scalar values" and (2) scalar values are the only stable primitive unit one can work with in a programming language.

It is thus wise to think twice before using emoji sequences and other tricky characters in Unicode art. To quantify the degree to which a piece of Unicode art can remain visually pleasing on different platforms, we specify the following four levels of stability. Note that if implementers decide to integrate content from end users into their Unicode art, the end users should have the freedom to include arbitrary emoji sequences and tricky characters in their content. The final Unicode art must remain visually pleasing, as defined by the stability levels, for any reasonable user content.

  • Level 0 (the least stable): Stability under the assumption that every Unicode character occupies exactly the same visual width. Thankfully, programs meeting only this level are mostly considered outdated.
  • Level 1: Stability under the assumption that each Unicode string visually occupies a multiple of some fixed width, where the multiplier is determined by heuristics (such as various implementations of wcwidth and wcswidth); see the sketch after this list. These heuristics were created to help programmers handle more characters, in particular CJK characters, without dramatically changing their code. However, they do not solve the core problem (visual width is fundamentally ill-defined) and often cannot handle tricky cases such as emoji sequences. Many compilers are at this level.
  • Level 2a: Stability under very limited assumptions about which characters should have the same width. For example, if a piece of Unicode art assumes only that Unicode box-drawing characters have the same visual width (which is the case in all conceivable situations), then its stability is at this level. However, the phrase "very limited" is somewhat subjective, and thus we present a more precise version below.
  • Level 2b: Stability under only these assumptions:

    Level 2b makes explicit what Level 2a means; we might update the details of Level 2b later to better match our understanding of Level 2a. Collectively, Levels 2a and 2b are called "Level 2".

  • Level 3 (the most stable): Stability under the single assumption that equivalent (extended) grapheme clusters have the same visual width (the last assumption of Level 2b). This means that the Unicode art will remain visually pleasing in almost all situations. It can even be rendered with a variable-width font.
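
To make Level 1 concrete, here is a sketch of a wcswidth-style heuristic: sum a per-character width hint over the scalar values of a UTF-8 string. It is only an illustration of the kind of heuristic discussed above, not code from asai, and it assumes the uucp library (for Uucp.Break.tty_width_hint) together with the OCaml (>= 4.14) standard library for UTF-8 decoding.

    (* A Level 1 heuristic: the "visual width" of a string is the sum of
       per-scalar-value width hints. *)
    let heuristic_width s =
      let rec go i acc =
        if i >= String.length s then acc
        else
          let d = String.get_utf_8_uchar s i in
          let w = Uucp.Break.tty_width_hint (Uchar.utf_decode_uchar d) in
          (* tty_width_hint returns -1 for some characters; clamp to 0. *)
          go (i + Uchar.utf_decode_length d) (acc + max 0 w)
      in
      go 0 0

    let () =
      (* Whatever number this prints for the pirate flag, at least some
         terminals will render the sequence with a different width, which is
         exactly why Level 1 is not enough. *)
      Printf.printf "heuristic width: %d\n"
        (heuristic_width "\u{1F3F4}\u{200D}\u{2620}\u{FE0F}")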

Unlike most implementations, which are at Level 1, our terminal backend strives to achieve Level 2. That means we must not make any assumptions about the visual width of end users' code and must abandon the idea of column numbers. Our terminal backend never uses column numbers, and we consider that a significant improvement. On the other hand, Level 3 seems too restrictive for compiler diagnostics because we cannot show line numbers along with the end users' code. (We cannot assume the numbers "10" and "99" will have the same visual width at Level 3.)

Note: a fixed-width font with enough glyphs to cover many Unicode characters is often technically duospaced, not monospaced, because many CJK characters occupy twice the visual width of a halfwidth character. Thus, we do not use the term "monospaced".

No Support of Bidirectional Text Yet

Proper support of bidirectional text will benefit many potential end users, but unfortunately, we currently do not have the capacity to implement it. The general support of bidirectional text in most system libraries and tools is poor, and without dedicated effort, it is hard to verify whether we manage to avoid common pitfalls.

On a related note, Unicode Source Code Handling suggests that source code should be segmented into atoms and that the atoms' display order should follow their logical order to maintain the lexical structure. (This deviation is allowed by the Unicode Bidirectional Algorithm.) Our current implementation cannot handle this because it has no access to such structural information about the content from end users.

Raw Bytes as Positions

All positions should be byte-oriented. We believe other popular alternative proposals are worse (a small worked comparison follows the list):

  1. Unicode characters (Unicode scalar values): This is a reasonable and technically well-defined choice. The problem is that it may take linear time to count the number of characters from raw bytes without a clever data structure (unless we are using UTF-32), and they often do not match what end users perceive as "characters". In other words, it takes more time to compute and may invite misconceptions about Unicode characters.
  2. Code units used in UTF-16: This is somewhat similar to Unicode characters, but with quirks from UTF-16: a Unicode scalar value above U+FFFF (such as 😎) requires two code units that form a surrogate pair. Therefore, it is arguably worse than just using Unicode characters. This scheme was unfortunately chosen by the Language Server Protocol (LSP) as the default unit, and until LSP version 3.17 it was the only choice. The developers of the protocol probably made this decision because Visual Studio Code was written in JavaScript (and TypeScript), whose strings use UTF-16 encoding.
  3. (Extended) grapheme clusters, or user-perceived characters: The notion of grapheme clusters can help segment a Unicode text so that end users can edit or select parts of it in an "intuitive" way. It is not trivial to implement the segmentation algorithm (see the OCaml library uuseg), and the default rules can (and maybe should) be overridden for each application. The complexity and external dependency of grapheme clusters make them an unreliable unit for specifying positions. It also takes at least linear time to count the number of grapheme clusters from raw bytes.
  4. Column numbers, that is, the visual width of a string when displayed: As analyzed in the section above, this is the most ill-defined unit of all, and a heuristic that can give passable results in most cases still takes linear time.
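
As a small worked comparison of these units, the following snippet measures a single emoji using only the OCaml (>= 4.14) standard library; it is an illustration, not part of asai. Counting grapheme clusters would additionally require a segmentation library such as uuseg.

    let () =
      (* U+1F60E lies above U+FFFF, so the proposed units disagree even on a
         string that end users would perceive as a single character. *)
      let s = "\u{1F60E}" in
      let u = Uchar.utf_decode_uchar (String.get_utf_8_uchar s 0) in
      (* Raw bytes: 4 in UTF-8. *)
      Printf.printf "bytes: %d\n" (String.length s);
      (* UTF-16 code units: 2, because scalar values above U+FFFF require a
         surrogate pair. *)
      Printf.printf "UTF-16 code units: %d\n" (Uchar.utf_16_byte_length u / 2);
      (* Unicode scalar values: 1; grapheme clusters: also 1 (via uuseg);
         column width: ill-defined, as argued in the sections above. *)
      Printf.printf "scalar values: 1\n"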

Known Bug: Our LSP prototype does not handle positionEncoding yet, and because the default unit in LSP is based on UTF-16 (see above), an LSP client may be confused by the byte-oriented ranges returned by this library. A proper LSP implementation should negotiate with the client to determine how to represent column positions (our current prototype does not). On the other hand, it can be tricky to negotiate with the client to use raw bytes because there is no official predefined encoding scheme for raw bytes yet.
