Surprisingly Turing-Complete - Gwern.net


A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability (computer science)
created: 9 Dec 2012; modified: 19 Oct 2018; status: finished; confidence: highly likely; importance: 6

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

    Greenspun's Tenth Rule

Turing-completeness (TC) is the property of a system being able to, under some simple representation of input & output, compute any program.

TC, besides being foundational to computer science and to understanding many key issues like why a perfect antivirus program is impossible, is also weirdly common: one might think that such universality (a system being powerful enough to run any program) would be difficult to achieve, but it turns out to be the opposite, and it is difficult to write a useful system which does not immediately tip over into TC. It turns out that given even a little control over the input into something which transforms input to output, one can typically leverage that control into full-blown TC. This can be amusing, useful (although usually not), harmful, or extremely insecure & a cracker's delight (see language-theoretic security, based on exploiting weird machines1). Surprising examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult.
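
To make "compute any program" concrete, here is a hedged sketch in Python of the baseline being discussed: a minimal Turing-machine interpreter for arbitrary rule tables (the rule-table format and the names run and FLIP are illustrative, not any standard API). Each surprising example in this catalogue amounts to showing that some system can be coaxed into implementing something of this shape:

    # A tiny Turing machine: a state, a tape, and a rule table is all
    # universality requires.
    def run(tape, rules, state="start", blank="_"):
        tape, pos = dict(enumerate(tape)), 0
        while state != "halt":
            symbol = tape.get(pos, blank)           # read the cell under the head
            write, move, state = rules[(state, symbol)]
            tape[pos] = write                       # write, then move the head
            pos += {"L": -1, "R": 1}[move]
        return "".join(tape[i] for i in sorted(tape))

    # Example rule table: flip every bit, halt at the first blank.
    FLIP = {("start", "0"): ("1", "R", "start"),
            ("start", "1"): ("0", "R", "start"),
            ("start", "_"): ("_", "R", "halt")}
    print(run("0110", FLIP))  # -> '1001_'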

Overly powerful languages can also manifest as nasty DoS attacks: the fuzz tester afl discovered that it could drive OpenBSD's roff into an infinite loop by abusing some of its string-substitution rules.

They are probably best considered as a subset of discovered or found esoteric programming languages (esolangs). So FRACTRAN, as extraordinarily minimalist as it is, does not count2; nor would a deliberately obfuscated language like Malbolge (where it took years to write a trivial program) count, because it was designed to be an esolang; nor would Conway's Game of Life count, because questions about whether it was TC appeared almost immediately upon publication, so its turning out to be TC is not surprising. And given the complexity of packet-switching networks & routers, it's not necessarily too surprising if one can build a cellular automaton into them or encode logical circuits, or if airplane ticket planning/validation is not just NP-hard or EXPSPACE-hard but undecidable (because of the complex rules airlines require).

Many configuration or special-purpose languages or tools or complicated games turn out to violate the Rule of least power & be accidentally Turing-complete: MediaWiki templates, sed or repeated regexp/find-replace commands in an editor (any form of string substitution or templating or compile-time computation is highly likely to be TC, on its own or when iterated, since such systems often turn out to support a lambda calculus or a term-rewriting language or a tag system, eg the esolangs /// or Thue; see the sketch below), XSLT, Infinite Minesweeper, Dwarf Fortress3, Starcraft, Minecraft, Ant, Transport Tycoon, C++ templates & Java generics, DNA computing, etc are all TC, but these are not surprising either. Many games support scripting (ie TC-ness) to make their development easier and to enable fan modifications, so a game's TC may be as simple as including syntax for calling out to a better-known language like Perl; or the TC may just be an obscure part of a standard format (most people these days are probably unaware that TrueType & many fonts are PostScript programs based on stack machines, similar to DWARF debugging and ELF metadata, or that some music formats go beyond MIDI in providing scripting capabilities and must be interpreted to be displayed; once one knows this, fonts being TC are no more surprising than TeX documents being TC, leading, of course, to many severe & fascinating font or media security vulnerabilities such as the BLEND vulnerability or SNES & NES code exploiting Linux systems; other formats, like PDF, are simply appalling4). Similarly, such feats as creating a small Turing machine out of Legos or dominos5 would not count, since we already know that mechanical computers work.
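
To make the string-substitution point concrete, here is a hedged Python sketch of a Thue-style term-rewriting system (the two rules and the symbol D are illustrative, not taken from roff or any system above). This tiny program doubles a unary number; note that a rule like ("1", "11") would instead recreate the roff-style infinite loop, which is the flip side of the same power:

    # Computing by repeated find-&-replace: apply rules until none match.
    RULES = [("D1", "11D"),   # sweep the doubler D rightward, emitting 11 per 1
             ("D", "")]       # D falls off the end: erase it and halt
    def rewrite(word):
        while True:
            for pat, rep in RULES:              # first matching rule wins
                if pat in word:
                    word = word.replace(pat, rep, 1)
                    break
            else:                               # no rule fired: normal form reached
                return word
    print(rewrite("D111"))  # -> '111111': unary 3 doubled to 6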

On the other hand, the vein of computer security research called weird machines is a fertile ground of "that's TC?" reactions. What is surprising may differ from person to person.

  • Peano arithmetic: addition & multiplication on the natural numbers are enough to be TC; in contrast, Presburger arithmetic removes multiplication and hence is not TC
  • Wang tiles: multi-colored squares, whose placement is governed by the rule that adjacent colors must be the same (historically, not surprising to Wang, but was surprising to me and I think to a lot of other people)
  • X86 shenanigans:

    • the mov instruction alone is TC (Stephen Dolan, mov is Turing-complete)
    • the MMU's page-fault handling can be abused to compute without executing any instructions at all (Bangert et al 2013, The Page-Fault Weird Machine: Lessons in Instruction-less Computation)

  • return-into-libc attacks: software libraries provide pre-packaged functions, each of which is intended to do one useful thing; a fully TC language can be cobbled out of just calls to these functions and nothing else, which enables evasion of security mechanisms since the attacker is not running any recognizable code of his own. See, among many others, The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86) & On the Expressiveness of Return-into-libc Attacks.
  • Pokemon Yellow: Pokemon Yellow Total Control Hack outlines an exploit of a memory corruption attack which allows one to write arbitrary Game Boy assembler programs by repeated in-game walking and item purchasing. (There are similar feats which have been developed by speedrun aficionados, but I tend to ignore most of them as they are impure: for example, one can turn the SNES Super Mario World into an arbitrary game like Snake or Pong but you need the new programs loaded up into extra hardware, so in my opinion, it’s not really showing SMW to be unexpectedly TC and is different from the other examples. Similarly, one can go from Super Game Boy to SNES to arbitrary code like IRC. This distinction is debatable.)

  • Braid: TC
  • musical notation: given instructions for transposing successive notes, musical notation becomes the esolang Choon
  • heart cells: interact in a way allowing logic gates and hence TC (perhaps not too surprising since cellular automata were biologically motivated)
  • one category of weird machines doesn't quite count, since these machines require an assumption along the lines of the user mechanically clicking, or making the only possible choice, in order to drive the system into its next step; while the user provides no logical or computational power in the process, these aren't as satisfying examples for that reason:

    • Magic: the Gathering: TC, with the assumption that players mechanically take any option they are given, but otherwise all actions/plays are forced by Magic rules
    • CSS: was designed to be a declarative markup language for tweaking the visual appearance of HTML pages, but CSS declarations interact just enough to allow an encoding of the cellular automaton Rule 110 (see the sketch after this list), under the assumption of mechanical mouse clicks on the web browser to advance state
    • Microsoft PowerPoint animations (excluding macros, VBScript etc) can implement a Turing machine when linked appropriately (Wildenhain 2017; video; PPT), under the assumption of a user clicking on the only active animation triggers
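
To show how low the bar for such encodings is, here is a hedged Python sketch of Rule 110, the one-dimensional cellular automaton which the CSS construction above encodes (and which Cook proved universal); each forced mouse click there plays the role of one step() call here (the fixed-0 boundary and grid width are illustrative simplifications):

    RULE = 110  # the 8-entry update table, packed into the bits of one byte
    def step(cells):
        """One generation; cells is a list of 0/1 with edges held at 0."""
        padded = [0] + cells + [0]
        return [(RULE >> (padded[i-1] << 2 | padded[i] << 1 | padded[i+1])) & 1
                for i in range(1, len(padded) - 1)]

    cells = [0] * 40 + [1]       # a single live cell at the right edge
    for _ in range(20):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)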

Possibly accidentally Turing-complete systems:

  • CSS without the assumption of a driving mouse click
  • SVG: PostScript is TC by design, but what about the more modern vector graphics image format, SVG, which is written as XML, a (usually) not-TC document language? It seems like in conjunction with XSLT it may be, but I haven’t found any proofs or demonstrations of this in a normal web browser context. The SVG standard is large and occasionally horrifying: the (failed) SVG 1.2 standard tried to add to SVG images the ability to open raw network sockets.
  • Unicode: Nicolas Seriot suggests that Unicode's bidirectional algorithm (intended for displaying scripts like Arabic or Hebrew, which run right-to-left rather than left-to-right like English) may be complex enough, in combination with case-folding rules (eg Turkish), to support a tag system (see the sketch below)
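
Since tag systems come up repeatedly here, a hedged Python sketch of one may help (the interpreter is generic; the sample rules are De Mol's Collatz-simulating 2-tag system, not anything extracted from Unicode). At each step the machine deletes the first 2 symbols and appends a word determined by the first of them; that queue discipline alone is, in general, enough for universality:

    RULES = {"a": "bc", "b": "a", "c": "aaa"}  # De Mol's Collatz tag system
    def run_tag(word, max_steps=100):
        for _ in range(max_steps):
            if len(word) < 2:                   # too short to delete 2: halt
                break
            word = word[2:] + RULES[word[0]]    # delete 2, append the production
        return word
    print(run_tag("aaa"))  # a^n encodes n=3; halts at 'a' after walking 3,5,8,4,2,1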

Some people seem to get caught up in discussions about weird machines or how big an AI agent must be and whether there will be one, two, 10, or millions; this is not an important issue as it is merely an internal organizational one. What is important are the inputs and outputs: how capable is the system as a whole and what resources does it require? No one cares if Google is implemented using 50 supercomputers, 50,000 mainframes, 5 million servers, or 50 million embedded/mobile processors, or a mix of any of the above exploiting a wide variety of chips from custom tensor processing units to custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers) to FPGAs to GPUs to CPUs to still more exotic hardware like prototype D-Wave quantum computers - as long as it is competitive with other tech corporations and can deliver its services at a reasonable cost. (Indeed, a supercomputer these days mostly looks like a large number of rack-mounted servers with unusual numbers of GPUs & connected by unusually high-speed InfiniBand connections and is not as different from a datacenter as one might think.) Any of these pieces of hardware could support multiple weird machines depending on their internal dynamics & connectivity. Similarly, any AI system might be implemented as a single giant neural network, or as a sharded NN running asynchronously, or as a heterogeneous set of micro-services, or as a society of mind etc - but it doesn’t especially matter, from a complexity or risk perspective, how exactly it’s organized internally as long as it works. The system can be seen on many levels, each equally invalid but useful for different purposes.

Here is an example of the ill-defined nature of the question: on your desk or in your pocket, how many computers do you currently have? How many computers are in your computer? Did you think just one? Let’s take a closer look.

It goes far beyond just the CPU, for a variety of reasons: transistors and processor cores are so cheap now that it often makes sense to use a separate core for realtime or higher-performance tasks, for security guarantees, to avoid having to burden the main OS with a task, for compatibility with an older architecture or an existing software package, because a DSP or core can be programmed faster than a more specialized ASIC can be created, or because it was just the simplest possible solution. Further, many of these components can be used as computational elements even if they were not intended to be, or generally hide that functionality.

Thus:

  • A common Intel CPU has billions of transistors, devoted to a large number of tasks:

    • Each of the 2-8 main CPU cores can run independently, shutting on or off as necessary, has its own private cache (bigger than most computers' entire RAM until quite recently), and must be regarded as an individual.
    • The CPU as a whole is reprogrammable through microcode, such as to work around errors in the chip design, and sports increasingly opaque features like the Intel Management Engine (with a JVM for programmability; Ruan 2014) & SGX, or AMD's Platform Security Processor (PSP), or Android phones' TEEs; these hardware modules are typically full computers in their own right, running independently of the host and able to tamper with it.
    • any floating point unit may be Turing-complete through encoding computation into floating-point operations, in the spirit of FRACTRAN (see the FRACTRAN sketch below)
  • the MMU can be programmed into a page-fault weird machine, as previously mentioned
  • DSP units, custom silicon: ASICs for video formats like h.264 probably are not Turing-complete (despite their support for complicated deltas and compression techniques which might allow something like Wang tiles), but, for example, Apple's A9 mobile system-on-a-chip goes far beyond simply a dual-core ARM CPU and GPU: like Intel/AMD desktop CPUs with their IME/PSP, it includes a separate security processor, the secure enclave (a physically separate dedicated CPU core), but it also includes an image co-processor, a motion/voice-recognition coprocessor (partially to support Siri), and apparently a few other cores. These ASICs are sometimes there to support AI tasks, presumably specializing in the matrix multiplications used by neural networks; and as recurrent neural networks are Turing-complete… Other companies have rushed to expand their system-on-chips as well, like Motorola or Qualcomm
  • motherboard BIOS and/or management chips with network access

    • Mark Ermolov notes that

      It’s amazing how many heterogeneous CPU cores were integrated in Intel Silvermont’s Moorefield SoC (ANN): x86, ARC, LMT, 8051, Audio DSP, each running own firmware and supporting JTAG interface

    These management or debugging chips may be accidentally left enabled on shipping devices, like the Via C3 CPUs' embedded ARM CPUs
  • GPUs have several hundred or thousand simple cores, each of which can run neural networks very well or do general-purpose computation (albeit slower than the CPU)
  • the controllers for tape drives, hard drives, flash drives, or SSDs typically all have ARM processors to run the on-disk firmware for tasks like hiding bad sectors from the operating system; these can be hacked. (Given that ARM CPUs are used in most of these embedded applications, it's no surprise ARM likes to boast that a modern smartphone will contain somewhere between 8 and 14 ARM processors, one of which will be the application processor (running Android or iOS or whatever), while another will be the processor for the baseband stack.)
  • network chips do independent processing for DMA. (This sort of independence is why features like Wake-on-LAN for netboot work.)
  • smartphones: in addition to all the other units mentioned, there is an independent baseband processor running a proprietary realtime OS for handling radio communications with the cellular towers/GPS/other things, or possibly more than one virtualized using something like L4. Baseband processors have been found with backdoors, in addition to all their vulnerabilities.
  • SIM cards for smartphones are much more than simple memory cards recording your subscription information: they are smart cards which can independently run Java Card applications (apparently NFC chips may be like this as well), somewhat like the JVM in the IME. Naturally, SIM cards can be hacked too and used for surveillance etc.
  • USB or motherboard-attached devices: each has an embedded processor on the device for protocol negotiation, and heavier-duty devices like WiFi adapters or keyboards or mice may contain additional processors themselves. In theory, most of these are separate and are at least prevented by in-between IOMMU units from directly subverting the host via DMA, but the devil is in the details…
  • monitors' embedded CPUs (part of a tradition going back to smart teletypes)
  • random weird chips like the Macbook Touch bar running WatchOS
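
Since FRACTRAN is the natural target for such arithmetic-only encodings, here is a hedged Python sketch of it (Conway's language in which the entire machine state is one integer and each instruction is a multiplication by a fraction); the one-fraction sample program adds the exponents of 2 and 3, ie computes 2^a * 3^b -> 3^(a+b):

    from fractions import Fraction

    def fractran(n, program):
        while True:
            for f in program:
                if (n * f).denominator == 1:   # first applicable fraction fires
                    n = int(n * f)
                    break
            else:
                return n                       # no fraction applies: halt

    ADD = [Fraction(3, 2)]
    print(fractran(2**3 * 3**4, ADD))  # -> 2187 = 3^7, ie 3 + 4 = 7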

So a desktop or smartphone can reasonably be expected to have anywhere from 15 to several thousand computers, in the sense of Turing-complete devices which can be programmed, which are computationally powerful enough to run many programs from throughout computing history, and which can be exploited by an adversary for surveillance, exfiltration, or attacks against the rest of the system.

None of this is unusual historically, as even the earliest mainframes tended to be multiple computers, with the main computer doing batch processing while additional smaller computers took care of high-speed I/O operations that would otherwise choke the main computer with interrupts.

In practice, aside from the computer security community (as all these computers are insecure and thus useful hidey-holes for the NSA & VXers), users don't care that our computers, under the hood, are insanely complex and more accurately seen as a motley menagerie of hundreds of computers awkwardly yoked together (was it "the network is the computer" or "the computer is the network"…?); they perceive and use the machine as a single computer.