- The Raspberry Pi is a small computer on a board; it can talk to
outside hardware through its pins and ports.
- Speculation: the Pi shows how we could disaggregate our big Realbox machine, making Dynamicland an ecosystem of many computers, sensors, and actuators, instead of a piece of software on a server.
- "... the Pi as Dynamicland's equivalent to the USB port on my laptop: it's how I plug things into Dynamicland."
- A minimal example of toggling an LED on the Raspberry Pi from Realtalk.
- Explanation of Realtalk's 'reactive database' programming model;
how it makes programming the Pi from a dynamic table impressively
easy. Programming the Pi feels no different from programming anything
else in Dynamicland.
- Remote control by sending full programs to the Pi, instead of talking to it in some underpowered control language.
- Speculation on how the Pi could play out in the future.
Today, Dynamicland has a bunch of dynamic tables:
- Each table is powered by a monolithic 'Realbox' machine – one engine that runs Linux and executes pages and connects to cameras and projectors.
- The Realboxes are all federated together by the Realtalk protocol, so tables can communicate across the room and share data and programs.
I want to argue that we shouldn't need those Realboxes (and the complex software stack they contain), that they're just a stopgap.
What do the Realboxes do? Most Realtalk programs (like my Geokit, previously) have used cameras for input and projectors for output.
Then the Realboxes serve two purposes:
- Talk to the cameras and projectors over USB and HDMI (hardware abstraction, I/O, 'control')
- Execute programs ('computation', 'simulation')
Consider how a program runs on your laptop: the program mostly uses your mouse and keyboard for input and your display for output, and it executes on the CPU on your laptop's logic board.
But Realtalk's programming model is much more general and powerful than this particular configuration. It's not really about cameras, projectors, dots, or pages at all. It's a protocol for all kinds of live, physical, reprogrammable computing.
And that protocol doesn't need a big Linux computer like a Realbox to run. Realtalk runs equally well on small computers, like the Raspberry Pi:
The Raspberry Pi, if you're not familiar, is a little computer on a circuit board with Wi-Fi and USB ports and general-purpose digital I/O pins. It's fully functional as what we think of as 'a computer': people often run desktop Linux on it and plug in a monitor, mouse, and keyboard.
If we have these little Realtalk hosts, why do we need to connect everything on the table to one big Realbox? Every sensor (e.g. camera) and every actuator (e.g. projector) could be connected to its own individual Pi (or something similar) – an 'interface computer' which translates between electrical signals and Realtalk statements. Then the Pis can be federated together by the Realtalk protocol.
The Pi connects well with the physical world, because it has all these ports and pins it can use to talk to sensors and actuators. You can plug motors in and make the Pi drive around 🚗 or attach a touch sensor 👆. It's a nice fit for Dynamicland's goals, and it breaks us out of our camera-projector mold.
In this world, where Dynamicland is an ecosystem instead of a piece of code running on a Linux machine, it becomes easy to attach new sensors and actuators, and it becomes easy to understand and modify any individual part.
To go even further, imagine a future where we don't need projectors or cameras at all. Maybe we print pages that are actually paper-thin computers, not wood pulp. Still cheap and physical and immutable, with the code visible, so they'd preserve the properties we want in Dynamicland. But these pages would have the computational capability baked into themselves somehow, rather than overlaid from above by a camera-projector system.
How do those pages talk to each other and to other hardware around them? How do you interact with that distributed computer? How would that computer empower you as a person?
You can think of Dynamicland as an attempt to prototype that future and Realtalk as a suggestion of how it might work: not quite an operating system, networking protocol, database, programming language, or user interface paradigm, but some of all of those.
Communicating with the Pi today
So we already have a concept of networking in Dynamicland; we already have separate dynamic tables powered by separate Realtalk hosts, all talking to each other using the Realtalk protocol. You can communicate pretty easily between tables.
For now, to talk to the Pi, we port our Realtalk host software and Lua interpreter to the Pi, so the Pi runs the same basic software that runs on each table.
The Pi becomes a 'virtual table.' Talking between table ↔ Pi is no different from talking between table ↔ another table across the room.
I'll show you a small example where I toggle an LED on the Pi, then explain how it works and what it might do in the future.
The Pi in Dynamicland
Let's see what a Raspberry Pi looks like in Dynamicland right now. I've attached the Pi to a page (18858), so the room always sees the page and knows the Pi's exact location. This page is like a 'handle' for the Raspberry Pi; I can refer to the Pi's page in other programs as a way to control it.
The page turns green once the Pi is powered on and connected. The Pi's page also then displays its identifier, pi-74a6: 74a6 is the last two bytes of the Pi's MAC address, which is unique to each Pi.
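The derivation of the identifier can be sketched in a few lines. This is my own illustration (the function name and MAC address are made up), assuming only what's stated above: the identifier is 'pi-' plus the last two bytes of the MAC address.

```python
def pi_identifier(mac: str) -> str:
    """Derive a short Pi identifier from a MAC address: 'pi-' followed
    by the last two bytes, in hex (per the naming described above)."""
    octets = mac.lower().split(":")
    return "pi-" + "".join(octets[-2:])

# Example MAC (made up, but with the b8:27:eb prefix assigned to the
# Raspberry Pi Foundation); the last two bytes give the identifier.
print(pi_identifier("b8:27:eb:12:74:a6"))  # → pi-74a6
```

Two hex bytes give 65,536 possible identifiers, which is plenty to distinguish the handful of Pis in a room while keeping the name short enough to print on a page.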
Toggling the activity LED
Now I want to toggle the "activity LED" on the Pi. Toggling the activity LED is the simplest observable thing I can make the Pi do, so it's a nice test for my Pi-Realtalk integration.
The activity LED is hard to spot – it's a green LED above the Pi's red power LED:
I'll grab my 'control the activity LED' program, which sends out a green whisker. When I point the program so that the whisker touches the Pi, its activity LED turns on; otherwise, its activity LED turns off. (I'll explain this program in more detail later.)
Actually, this program isn't limited to that one Pi: it works for any Pi on the table. The program turns the activity LED off for everyone except the Pi it's currently pointing at.
Let's try toggling the LED on three Pis I have lying around (they happen to be a Pagebot Pi on wheels, a Pi with a potentiometer on a little breadboard, and the ordinary Pi from earlier).
How would you make this LED toggler without the Realtalk system? It sounds painful. Maybe you'd ssh into each Pi and write and execute some Python script. (If you then needed to change the code, how would you deploy the change to all the Pis?) You'd need to implement tracking for some pointer object to hold in your hand. You'd need some communications protocol to ferry information from the tracking system to all the Pis. You'd need your laptop at hand to do all the programming. What if you wanted to add a fourth or fifth or tenth Pi?
Would you even have this kind of idea in the first place?
How the LED toggler works
To show how this program works on the table and the Pi, I need to explain a little more about Realtalk.
The Realtalk database
Realtalk is a programming system, but it's also a kind of reactive database.
Each machine running Realtalk knows a consistent set of Realtalk statements for each frame (as in '30 frames per second'). A statement is a claim or wish made by a Realtalk program.
Consider this minimal example from my earlier Geokit post:
```
-- Page 17813 -- Made-up claim demo
Claim (you) blahblahblah.

-- Page 17814 -- Made-up claim when demo
When /page/ blahblahblah:
    Wish (page) is highlighted "blue".
End
```
When both pages 17813 and 17814 are face-up on the table, the table database would contain these two statements every frame: the blahblahblah claim (from page 17813) and the 'highlight blue' wish (from page 17814).
If I removed the bottom page (17814), the database in subsequent frames would only contain the blahblahblah claim, because the bottom page isn't there to make the 'highlight blue' wish anymore.
If I removed the top page (17813) but kept the bottom page, the database would contain neither of those two statements. Because of Realtalk's reactive design, removing the blahblahblah claim automatically removes its dependent, the 'highlight blue' wish. Even if the bottom page is still on the table, its When block no longer matches a blahblahblah claim, so the bottom page doesn't produce a wish anymore.
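The frame-by-frame recomputation can be imitated in a few lines of Python. This is a toy model of my own, not Realtalk's implementation: rebuild the statement set from scratch each frame, so removing a page removes its statements, and everything that depended on them, automatically.

```python
def run_frame(pages_on_table):
    """Toy model: recompute the whole statement set for one frame."""
    statements = set()
    # Page 17813's code: Claim (you) blahblahblah.
    if "17813" in pages_on_table:
        statements.add(("17813", "blahblahblah"))
    # Page 17814's code: When /page/ blahblahblah: Wish (page) is
    # highlighted "blue". The When only fires if a matching claim exists.
    if "17814" in pages_on_table:
        for page, claim in list(statements):
            if claim == "blahblahblah":
                statements.add((page, "is highlighted blue"))
    return statements

print(run_frame({"17813", "17814"}))  # claim and wish
print(run_frame({"17813"}))           # only the claim
print(run_frame({"17814"}))           # empty: no claim to react to
```

Because nothing persists between frames, there is no separate 'cleanup' logic: the dependency behavior described above falls out of recomputation for free.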
Pages communicate by reading and writing statements in the Realtalk database, not by calling functions or defining variables.
I believe these mechanics are inspired by the tuple space or blackboard concept, and by Datalog, but I don't know much about that history. You can think of Realtalk statements (claims and wishes) as all living on a single 'blackboard' which is watched by all the programs. In Realtalk, however, statements are not just abstract data; they should be about the real world. Our 'tuple space' is modeled after the social space of the table, where everyone can see and hear each other – similarly, programs in Realtalk can all see each other's statements.
As long as information flows through the Realtalk database, it's easy to inspect, intercept, trace, debug, and extend later! And processes that query that database will automatically react to changes live. I often make little tools alongside the program I'm working on to understand and poke at its behavior.
In contrast, if you stick to traditional functions and variables, your program is closed and opaque to the rest of the world. And a traditionally-built program is static: it won't respond to changes to its inputs (or to changes in surrounding programs) unless you explicitly code it to update.
In our case, the table and Pi are two separate Realtalk hosts, so each (in the current implementation of Realtalk) has its own database (frame rate and set of statements per frame).
Note that the Pi isn't connected to a camera and can't see any programs on its own, so the Pi is basically inert without an external stimulus: programs running elsewhere must manually forward programs and statements to the Pi to get it to do or know anything.
The toggler code
Here is my LED toggler program from earlier.
Have you seen fork? This program feels oddly similar to a Unix program that uses fork: it splits itself into parallel universes of execution. One section runs here and the other section runs there.
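For readers who haven't met it: in Unix, fork duplicates the running process, and each copy then checks which side of the split it's on. A minimal Python sketch of that 'one program, two execution contexts' idea (the exit code 42 is arbitrary, chosen so the parent can observe the child):

```python
import os

# fork() splits the program in two: the same source keeps running in
# both copies, and each copy checks which side it's on -- just as the
# toggler page checks which processor it's executing on.
pid = os.fork()
if pid == 0:
    # Child: "runs there" (analogous to the block that runs on the Pi).
    print("child: pretend this is the Pi-side block", flush=True)
    os._exit(42)
else:
    # Parent: "runs here" (analogous to the block that runs on the table).
    _, status = os.waitpid(pid, 0)
    child_code = os.WEXITSTATUS(status)
    print(f"parent: table-side block; child exited with {child_code}")
```

The difference, of course, is that fork's two universes live on one machine, while the toggler's two blocks run on physically different computers.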
You should know that processor is a global variable which always identifies the underlying Realtalk machine:
- If the page is running on a table, processor will be that table's identifier.
- If the page is running on a Pi, processor will be something like pi-74a6.
This page consists of two blocks. The page is meant to execute on both the Pi and the table at the same time; the first line of each block checks what kind of processor the page is executing on right now, so each block decides whether to run or not.
The top block, if not is_pi(processor) then ..., runs on the table like an ordinary Realtalk page. The global function is_pi checks whether or not processor is a Pi.
```
if not is_pi(processor) then
    -- This block runs on the realbox (the table).
    When /pi/ is pi /something/: -- for each Pi on the table,
        Wish (you) runs remotely on (pi).
        When (you) points "up" at (pi):
            Wish (pi) sets led brightness to "1".
        Otherwise:
            Wish (pi) sets led brightness to "0".
        End
        Wish (processor) shares statements with relation "_ wishes _ sets led brightness to _.".
    End
end
```
Find all the Pi handles on the table. For each pi:
- dispatch us for execution on that pi with Wish (you) runs remotely on (pi)., and
- project a green whisker with When (you) points "up" at (pi). If that whisker is touching pi, then wish for its LED to turn on; otherwise, wish for its LED to turn off.

Finally, Wish (processor) shares statements with relation "_ wishes _ sets led brightness to _". syndicates all wishes of that form. Rather than just staying in the table database, the wish is forwarded to the databases of all the other Realtalk hosts in the room, including the Pi!
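The relation string with its `_` placeholders acts as a pattern over statements: each `_` matches any single token, and everything else must match literally. A rough sketch of that kind of wildcard matching (my own toy, not Realtalk's actual matcher, and the statement tokens below are made up):

```python
def matches(relation: str, statement: list[str]) -> bool:
    """Toy wildcard match: '_' in the relation matches any one token;
    all other tokens must match literally, and lengths must agree."""
    pattern = relation.split()
    if len(pattern) != len(statement):
        return False
    return all(p == "_" or p == s for p, s in zip(pattern, statement))

relation = "_ wishes _ sets led brightness to _"
stmt = ["table-1", "wishes", "pi-74a6", "sets", "led", "brightness", "to", "1"]
print(matches(relation, stmt))  # True: forward this statement to peers
print(matches(relation, ["pi-74a6", "is", "pi", "page-18858"]))  # False
```

A host would apply such a pattern to every statement in its database each frame and forward only the matches, so syndication stays selective rather than flooding the room with every statement.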
When I point my LED toggler at pi-74a6, the top block produces the corresponding statements in the table database: a wish for this page to run remotely on pi-74a6, and wishes setting each Pi's led brightness (the table itself appears in these statements as its own processor).
The bottom block, When /someone/ wishes (you) runs remotely on (processor): ..., runs on the Pis themselves.
```
When /someone/ wishes (you) runs remotely on (processor):
    -- This block runs on the Pi itself.
    os.execute('echo gpio | sudo tee /sys/class/leds/led0/trigger')
    When /pi/ is processor (processor), /someone/ wishes /pi/ sets led brightness to /value/:
        os.execute('echo '..value..' | sudo tee /sys/class/leds/led0/brightness')
    End
End
```
We were set up to run on each Pi by the top block, which also shared some statements across the room to each Pi's database. We can use When blocks to match these statements when running on the Pi, just as we would on the table.
The first check of the code block, When /someone/ wishes (you) runs remotely on (processor), is essentially a check of why we are executing. We could be executing because a person put us on a table, or we could be executing because the top block (having been put on a table) wished for us to 'run remotely' here on processor. If it's the latter, then we must be running on a Pi; therefore, if this When matches, then we are running on a Pi.
Now that we're executing on the Pi, we can use normal Unix commands to control the activity LED, just as if we were typing commands into a terminal on the Pi.
As soon as we start, we run an echo | sudo tee command to initialize the activity LED to GPIO mode so we can control it later. Then, whenever we get a wish (syndicated across from the table) to set our led brightness to some value, we run an echo | sudo tee command which actually sets the LED's brightness.
I do have a higher-level interface that allows pages to simply wish that a Pi runs a Unix command, so you don't usually need this explicit 'run remotely' two-block pattern. But I wanted to show the long way here; that higher-level interface is built out of this pattern anyway.
A complete programming system
Since we're running a full Realtalk system on the Pi, and we can run page code remotely there, we can do anything on the Pi from a page now: we can control any peripheral and do any computation. Robots, musical instruments, knobs and buttons, environmental sensors, touch – we can program all of it right on the table.
It's not like there's some fixed stub server on the Pi and we're limited to some control language of a few preset commands to tell it what to do. It reminds me of PostScript in printers, or NeWS, or eBPF, or even the modern Web itself: rather than making some underpowered command language so A can control B, let A send a program to B which B executes on its end. Then A (the page on the table) can use the full capability of B (the Pi).
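The PostScript/eBPF point can be made concrete with a toy contrast (entirely my own sketch, not Realtalk code): on the left, B understands only a fixed command vocabulary; on the right, A ships B a whole program to execute, so A can use any capability B has.

```python
# Style 1: an underpowered control language. B only understands a fixed
# vocabulary, so A can never do anything B's author didn't foresee.
COMMANDS = {
    "led_on": lambda state: state.update(led=1),
    "led_off": lambda state: state.update(led=0),
}

def handle_command(cmd: str, state: dict) -> None:
    COMMANDS[cmd](state)

# Style 2: A sends B a program, and B executes it on its end. (In a
# trusted room this is the point; on an open network you'd sandbox it.)
def handle_program(source: str, state: dict) -> None:
    exec(source, {"state": state})

state = {"led": 0}
handle_command("led_on", state)
print(state)  # {'led': 1}

# A program can express things no preset command covers, e.g. a toggle:
handle_program("state['led'] = 1 - state['led']", state)
print(state)  # {'led': 0}
```

The 'run remotely' pattern is exactly style 2: the Pi's fixed surface is just 'execute Realtalk pages,' and everything else is supplied by the page in my hand.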
At a deeper level, building all the resources above a single uniform interface makes interoperability easy. As the Plan 9 designers observed of their system: once a resource exports a 9P interface, it can combine transparently with any other part of the system to build unusual applications; the details are hidden.
Realtalk is network transparent. A statement looks the same whether it comes from this Realtalk instance or from a remote Realtalk instance.
Eventually, we want statements to be automatically shared between Realtalk instances, so you won't need to explicitly syndicate them at all. Imagine an entire federated Realtalk universe.
You have transparent access from the Pi back to the table for computation, interaction, and debugging; in contrast, on something you program normally like an Arduino, it's extra work to talk back to your laptop and report results.
Each Pi is just a peripheral. A Pi doesn't hold any code or persistent state. In fact, I flashed all my Pis with exactly the same SD card image of Linux + Realtalk!
All the actual behavior lives on the page in my hand, not inside the Pi, which means that anyone can understand it and edit it, the same as anything else in Dynamicland.
Anyone can come in and add more sensors and actuators if they have a Pi, without needing to plug into the ceiling or modify 'system software.'
I almost think of the Pi as Dynamicland's equivalent to the USB port on my laptop: it's how I plug things into Dynamicland.
This example wasn't much code, even though it was almost from scratch. The Arduino platform expanded access to physical computing, going beyond traditional 'embedded programmers.' Pis in Dynamicland could go even further; the code can be just as short and clear as Arduino 'sketches,' and the building process is even more lightweight and continuous, because you're not tethered to a laptop and IDE and USB cable to the Arduino.
It's easy to have multiple programs and Pis on the table and switch between them by pointing. The table is naturally multiplayer and concurrent. Many people can try many ideas at once. You can switch back to your old program or add a debugging view simply by pulling out those pages.
Really, I'm not 'programming Raspberry Pis' at all; I'm programming the room, and Raspberry Pis have become just a capability of the room. I haven't given up any of the power of the Dynamicland model. All the code is still out on the table.
Hints of the future
I believe that the Pis hint at the future of Dynamicland.
To be honest, I started this project because I got tired of the indirectness of moving pages around for the big Realtalk projector-camera system in the ceiling. I wanted to play with interactions that weren't possible with pages alone. Moving real objects around, precise targeting, directly touching the surfaces of pages... the Pis give us a path to incrementally add new interactions.
Given these Pis as interface computers, you could also plug in all kinds of sensors and compute based on them (as a scientist or teacher, for example). Your measurements of the real world on your workbench could be continuous with your analysis of those measurements, instead of requiring you to switch from lab instruments to a laptop. We want Dynamicland to be a medium which favors tracking rather than simulation of the real world, and the Pis are a platform for the sensors that support that tracking.
Finally, as I said in the introduction, the Pis hint at how we can transition from a monolithic 'Realtalk system' to a distributed ecosystem of Realtalk hosts. Dynamicland isn't meant to have one piece of server software that runs the whole room; it's meant to be a computer made of little peer computers, all talking to each other through a common protocol.
We're trying to figure out the right protocol and abstractions to make that ecosystem work, along with the right values and interface paradigm to make it augment rather than alienate people.
My thanks to John Backus, Sam Gwilym, Weiwei Hsu, Srini Kadamati, Max Kreminski, Andy Matuschak, Sebastian Morr, Toby Schachman, Roshan Vid, and Devon Zuegel for their thoughtful comments and feedback on this writeup.