GeistHaus

Interconnected

A blog by Matt Webb. My notebook and space for thinking out loud since February 2000.

Copyright © 2026 Matt Webb

Posts

The Wind in the Willows and reading out loud

The Wind in the Willows right? Kenneth Grahame, 1908.

We all know the children’s story inside-out. Mole and Ratty and gruff Badger and conceited Mr Toad with his motorcars and all their adventures.

I picked up the book as an adult - I can’t remember why - and it wasn’t what I expected.

This week I have been reading it again and it is again astonishing.

I mean… let me share some of the prose with you.

This is when Mole encounters the river for the first time.

Green turf sloped down to either edge, brown snaky tree-roots gleamed below the surface of the quiet water, while ahead of them the silvery shoulder and foamy tumble of a weir, arm-in-arm with a restless dripping mill-wheel, that held up in its turn a grey-gabled mill-house, filled the air with a soothing murmur of sound, dull and smothery, yet with little clear voices speaking up cheerfully out of it at intervals. It was so very beautiful that the Mole could only hold up both forepaws and gasp, “O my! O my! O my!”

There’s a chapter where they’re looking back on the summer just gone, and a description of the plant-life soars:

The pageant of the river bank had marched steadily along, unfolding itself in scene-pictures that succeeded each other in stately procession. Purple loosestrife arrived early, shaking luxuriant tangled locks along the edge of the mirror whence its own face laughed back at it. Willow-herb, tender and wistful, like a pink sunset cloud, was not slow to follow. Comfrey, the purple hand-in-hand with the white, crept forth to take its place in the line; and at last one morning the diffident and delaying dog-rose stepped delicately on the stage, and one knew, as if string-music had announced it in stately chords that strayed into a gavotte, that June at last was here. One member of the company was still awaited; the shepherd-boy for the nymphs to woo, the knight for whom the ladies waited at the window, the prince that was to kiss the sleeping summer back to life and love. But when meadow-sweet, debonair and odorous in amber jerkin, moved graciously to his place in the group, then the play was ready to begin.

They go out in the boat at night looking for a lost young otter. (A whole other story but they encounter the divine spirit Pan who intercedes with a miracle and then wipes their memories lest they suffer the rest of their lives in the shadow of that awe.)

A description of moonlight:

The line of the horizon was clear and hard against the sky, and in one particular quarter it showed black against a silvery climbing phosphorescence that grew and grew. At last, over the rim of the waiting earth the moon lifted with slow majesty till it swung clear of the horizon and rode off, free of moorings; and once more they began to see surfaces - meadows wide-spread, and quiet gardens, and the river itself from bank to bank, all softly disclosed, all washed clean of mystery and terror, all radiant again as by day, but with a difference that was tremendous. Their old haunts greeted them again in other raiment, as if they had slipped away and put on this pure new apparel and come quietly back, smiling as they shyly waited to see if they would be recognised again under it.

It’s just… it’s…

O my! O my! O my!


I asked ChatGPT to calculate some readability stats for me: the average sentence length is 18.5 words.
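A stat like that is easy to check yourself. A rough sketch in Python, with a naive sentence splitter (so the number is approximate):

```python
import re

def avg_sentence_length(text: str) -> float:
    # Naive split on sentence-ending punctuation; ignores abbreviations
    # and dialogue, which is good enough for a rough readability stat.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

print(avg_sentence_length("O my! O my! O my!"))  # → 2.0
```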

Sentence length in literature has been falling over the years (LanguageLog).

But it’s not the lengthy sentences that make this prose work for me. It’s the rhythm.

And I don’t really get that from reading it dead on the page. It’s because I’ve been reading The Wind in the Willows out loud.


Some years back I read Ursula Le Guin’s book about writing, Steering the Craft.

The first chapter is all about the sound of your words:

"The basic elements of language are physical: the noise words make and the rhythm of their relationships."

She recommends reading out loud.

So I started reading out loud.

I would take a page of prose from a novel that I really loved, and I would read it out loud, and out loud again, and again, and again, and again, until I could make it sound as wonderful as I felt it was when I was reading in my head.

It’s so hard to do. And you learn so much about words and meaning with this practice.


So I doubt you’re reading this post out loud.

But that passage about moonlight above…

For me, it doesn’t work in my head. It’s okay. But when I read it out loud - to my kid, which is my excuse right now - to make it make sense to her ears and for the words to carry her, I have to read it in a certain way, and when I do Kenneth Grahame’s words loft me into the sky, swinging clear of the horizon and right up there, free of moorings, just like his moon.

And when I read his words about the foliage on the riverbank, out loud, I’m right there too.

Do me a favour. Read that moonlight paragraph out loud. Even if under your breath, but pause right now, take a moment and do that, read it out loud.

Then read all of The Wind in the Willows because it’s free on Project Gutenberg in Kindle format and everything, and if you have an excuse to read it to someone else then do that, it is transporting and majestic and gentle all at once, and it is a joy to have his words in your mouth and in the air and in your ears.



https://interconnected.org/home/2026/04/24/willows
Courier: real-time messaging for ESP32 with batteries included (new library)

I hack on hardware a whole bunch, at Inanimate and at home.

AI is in the cloud. Interaction is in my room and in my hands. The job always begins by wiring together those two ends.

So the second thing I do, every single time, is bring up real-time messaging using JSON over websockets so I can connect my new device to a server, and have it emit events and listen for commands.

(The first thing I do is bring up the hardware and get basic blinkenlights.)

I want my on-device websockets client to have a super easy interface: give me an onMessage handler to deal with incoming messages.

I need built-in wi-fi config (so I can carry my prototype around to different places). And I don’t want to have to choose which libraries I’m going to use each time, I want good defaults.

That’s what Courier does, in just a handful of lines.

Honestly this isn’t rocket science. It’s no biggie. But it’s decisions I make and boilerplate I have to write for every new project, and I don’t want to vibe code and test this bit every time… I just want it to work. So I find Courier useful personally. And in the spirit of sharing, I hope it’s useful for you too.

Find the code and README here: Courier on GitHub

Quick start

Courier does a small and necessary job (messaging) in the most straightforward way possible.

Let’s say you’re using Arduino on ESP32. I’ll say more about ESP32 in a minute.

Add Courier to your project (we recommend managing your project with PlatformIO):

lib_deps = https://github.com/inanimate-tech/courier.git

Then here’s all the C++ code you need:

#include <Courier.h>

CourierConfig makeConfig() {
  CourierConfig cfg;
  cfg.host = "api.example.com";
  cfg.port = 443;
  cfg.path = "/ws";
  return cfg;
}

Courier courier(makeConfig());

void setup() {
  courier.onConnected([]() { courier.send(R"({"type":"hello"})"); });
  courier.onMessage([](const char* type, JsonDocument& doc) {
    Serial.printf("Got: %s\n", type);
  });
  courier.setup();
}

void loop() { courier.loop(); }

Now your server can send real-time messages in JSON to your device, and your device can handle them. (It pulls out the type automatically for easy routing.)

That’s it!
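That type field is the whole routing scheme: every message is plain JSON with a top-level type. A toy dispatcher in Python shows the same idea server-side (illustrative only; the names here are mine, not part of Courier’s API):

```python
import json

# Registry mapping a message "type" to a handler, mirroring
# the type-based routing Courier does on the device.
handlers = {}

def on(msg_type):
    def register(fn):
        handlers[msg_type] = fn
        return fn
    return register

@on("hello")
def handle_hello(doc):
    return f"device says hello: {doc}"

def dispatch(raw: str):
    doc = json.loads(raw)
    msg_type = doc.get("type")
    handler = handlers.get(msg_type)
    if handler is None:
        return f"unhandled type: {msg_type}"
    return handler(doc)

print(dispatch('{"type":"hello"}'))
```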

You don’t need to hardcode wi-fi details in your code. Courier bundles WiFiManager for portable connectivity: if a network can’t be found, the library pops up its own access point and opens a captive portal.

Also you get for free: sane auto-healing of both the socket and wi-fi, NTP time sync so the internal clock doesn’t drift and break your secure connection (which happens after about 72 hours), and an easy upgrade path to MQTT.

Try Courier with an M5Stick

ESP32 is a family of microcontrollers that has a special place in the hardware ecosystem: thanks to its low cost, built-in wi-fi and Bluetooth, and ease of development,

  • it is the platform of choice for new hardware startups
  • and it is great in mass-market production.

It even has an Arduino compatibility framework, so you can start with that and then iterate as you go. It’s pretty unique to go from bench to production like that.

So we love ESP32 at Inanimate.

A big player in the ESP32 ecosystem is M5: they’re China-based and have about 400 SKUs that wrap ESP32 microcontrollers in all kinds of enclosures with all kinds of peripherals and sensors. I had a blast getting a personal tour of M5 when I visited Shenzhen last year – they use their own sensors in an industrial IoT network to run and monitor their assembly line.

Now you may have seen an M5Stick or two on the socials. They’re super popular at least in my circle, people love hacking on them.

Pick up an M5Stick-C Plus2 on Amazon for less than 30 bucks! It’s an ESP32 with a screen, buttons, buzzer, gyro, battery and mic in a tiny bright yellow package.

Get yourself one or two, bring it up by following the docs and then install Courier…

…or use our example code. Our example also supports the newer M5StickS3 which is grey and has a more powerful chip.

This is what I made with real-time messaging plus my backend websocket server:

My lil traffic guy tells me the top pages on my blog in real-time ^_^ v

I have live cursors on every page of my blog (write-up and open source code) so that same system sends JSON messages via websockets to the M5Stick on my desk.

If a post gets big, the stick on my desk means I see it immediately, then I pop over to the page and say hi to everyone with the cursor chat.

More visitors = more flowers!

Plus: shake-for-QR code, so I can quickly follow the link back to the top post.

It’s amazing when hardware starts to feel alive.

Do show me if you make anything too.

Try Courier now

To use Courier right now, first bring up your M5Stick or other ESP32 hardware so you know it works ok, and then go to the Courier repo on GitHub where you can find installation instructions, API docs and examples.

It’s under active development: this is what we use for prototyping at Inanimate. Hey, subscribe to our Lab Notes newsletter!

Courier is distributed under an MIT license.


More posts tagged: inanimate (5).

https://interconnected.org/home/2026/04/21/courier
Headless everything for personal AI

It’s pretty clear that apps and services are all going to have to go headless: that is, they will have to provide access and tools for personal AI agents without any of the visual UI that we humans use today.

By services I mean things like: getting a new passport; finding and booking a hotel or a flight; managing your bank account; shopping for t-shirts with a minimum cotton weight from brands similar to ones that you’ve bought from before.

Why? Because using personal AIs is a better experience for users than using services directly (honestly); and headless services are quicker and more dependable for the personal AIs than having them click round a GUI with a bot-controlled mouse.

Where this leaves design for the services… well, I have some thoughts on that.


Headless services? They’re happening already.

Already there is MCP, as previously discussed (2025), a kind of web API specifically for AI. For instance, best-in-class call transcriber app Granola recently released their MCP and now you can ask your Claude to pull out the meeting actions and go trawling in your personal docs to find answers to all the questions. A good integration.

But command-line tools, which used to be just for developers, are also growing in popularity. So you can now create a spreadsheet by typing in your terminal:

gws sheets spreadsheets create --json '{"properties": {"title": "Q1 Budget"}}'

Here are some recently launched CLI tools:

  • gws: Google Workspace CLI, "Drive, Gmail, Calendar, and every Workspace API. Zero boilerplate. Structured JSON output. 40+ agent skills included."
  • Obsidian CLI to do anything you can do with the (very popular) note-taking app - like keep daily notes, track and cross off tasks, search - from the CLI
  • Salesforce CLI and, look, I don’t really understand what the Salesforce business operating system does, but the fact it has a CLI too is significant.

And here is CLI-Anything (31k stars on GitHub) which "auto-generates CLIs for ANY codebase."

Why CLIs?

It turns out that the best place for personal AIs to run is on a computer. Maybe a virtual computer in the cloud, but ideally your computer. That way they can see the docs that you can see, and use the tools that you can use, and so what they want is not APIs (which connect webservers) but little apps they can use directly. CLI tools are the perfect little apps.


CLIs are composable so they are a better fit for what users actually do.

By composable I mean you can: query your notes then jump to a spreadsheet then research the web then jump back to the spreadsheet then text the user a clarifying question then double-check your notes, all by bouncing between CLIs in the same session.

A while back app design got obsessed with “user journeys.” Like the journey of a user finding a hotel then booking a hotel then staying in it and leaving a review.

But users don’t live in “journeys.” They multitask; they talk to people and come back to things; they’re idiosyncratic. Try grabbing the search results from the Airbnb app and popping them into the family WhatsApp chat, then coming back to it two days later. It’s a pain and you have to use screenshots because apps and their user journeys are not composable.

CLIs are composable because they came originally from Unix and that is the Unix tools philosophy: "tools were designed so that they could operate together."

Personal AIs like Openclaw or Poke (poke.com) do what users want and don’t follow “designed” user journeys, and as a result the composed experience is more personal and way better. CLIs are a great underlying technology for that.
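The composability argument can be sketched in a few lines. Treat each CLI as a function from text to text; then any tool’s output can feed any other, with no designed “journey” in between. These are toy Python stand-ins, not real CLIs:

```python
import json

# Each "tool" takes text in and emits text out - the property
# that makes CLI tools composable in any order.
def notes_search(query: str) -> str:
    # Toy stand-in for something like a notes CLI.
    notes = {"hotel": ["Hotel Okura looked nice", "budget: $200/night"]}
    return json.dumps(notes.get(query, []))

def to_sheet_rows(raw: str) -> str:
    # Toy stand-in for a spreadsheet CLI: turn a JSON list into rows.
    rows = [[i, line] for i, line in enumerate(json.loads(raw))]
    return json.dumps(rows)

# Compose them like a shell pipeline: notes search hotel | sheet append
result = to_sheet_rows(notes_search("hotel"))
print(result)
```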


CLIs are smaller than regular apps and so they are easier to secure.

The alternative to providing special tools for AIs is that the AIs use the same browser-based apps that we do, and that’s a terrifying prospect.

For one, AIs are really good at finding security holes. The new Mythos model from Anthropic is so good at discovering security flaws that it has been held back from the public and governments are convening emergency meetings of the biggest banks.

And including a UI increases complexity and makes security holes more likely.

Here’s a recent shocking example. Companies House is the national register of all companies, directors, and accounts for England and Wales. Users could view and edit any other user’s account:

a logged-in company director could exploit the flaw by starting from their own dashboard and then trying to log into another company’s account.

Once they reach the 2FA block, which they would not be able to pass, all that was required was to click the browser’s back button a few times. Typically, the user would be taken back to their own dashboard, but the bug instead returned them to the company they had tried to log into but couldn’t.

– The Register, Flaw in UK’s corporate registry let directors rummage through rival records

This bug had been present since October 2025.

Imagine a future where personal AIs are filing company records, and one of them notices this security hole overnight and posts about it on moltbook or whatever agent-only social network is most popular. The other agents would exploit the system up, down and sideways before the engineering team woke up.

The only viable solution is that services need to be security-hardened, and to do that they need to be simplified and minimised. Again, CLI tools are a great fit.


What does this mean for front-end design?

Design won’t go anywhere.

Sure, the front-end should be driving the same CLI tools that agents use.

Arguably it’s more important than ever: human users will encounter services, figure out what they can do, and pick up their vibe from using the app, just as now.

Then they’ll tell their personal AI about the service and never see the front-end again, or re-mix it into bespoke personal software.

So from a usability perspective I see front-end as somewhat sacrificial. AI agents will drive straight through it; users will encounter it only once or twice; it will be customised or personalised; all that work on optimising user journeys doesn’t matter any more.

But from a vibe perspective, services are not fungible. e.g. if you’re finding a restaurant then Yelp, Google Maps, Resy, and The Infatuation are all more-or-less equivalent for answering that question but clearly they are completely different and you’ll use different services at different times.

Understanding that a service is for you is 50% an unconscious process - we call it brand - and I look forward to front-end design for apps and services optimising for brand rather than ease of use.


If I were a bank, I would be releasing a hardened CLI tool like yesterday.

There is so much to figure out:

  • How do permissions work? Should the user get a notification from their phone app when the agent strays outside normal behaviour? How do I give it credentials to act on my behalf, and how long do they last?
  • How does adjacency work? My bank gives me a current account in exchange for putting a “hey, get a loan!” button on the app home screen. How do you make offers to an agent?

Headless banks.


Headless government?

I’d love to show you a worked example here. I vibed up a suite of four CLI tools that wrap four different services from UK government departments.

If I were renting a house, I would set my agent to learning about the neighbourhoods using one of these tools. Another tool will be helpful next time I’m buying a used car. (There’s a Companies House command-line tool too.)

But I won’t show you the tools because I don’t want to be responsible for maintaining and supporting them.


I wish Monzo came with an official CLI. I wish Booking.com came with a CLI. I reckon, give it a year, they will.



https://interconnected.org/home/2026/04/18/headless
mist is now open source and looking for interop

A brief update on mist, my ephemeral Markdown editor with Google Docs-style comments and suggested edits:

mist is now open source with an MIT license, and the mist repo is here on GitHub.

(Try mist now and here’s my write-up from February.)

What I love about Markdown is that it’s document-first. The formatting travels with the doc. I can’t tell you how many note-taking apps I’ve jumped between with my exact same folder of Markdown notes.

The same should be true for collaboration features like suggested edits. If somebody makes an edit to your doc, you should be able to download it and upload to a wholly different app before you accept the edit; you shouldn’t be tied to a single service just because you want comments.

(And of course the doc should still be human-readable/writeable, and it’s cheating to just stuff a massive data-structure in a document header.)

So mist mixes Markdown and CriticMarkup – and I would love it if others picked up the same format. If apps are cheap and abundant in the era of vibing, then let’s focus on interop!
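For reference, CriticMarkup marks up suggested edits and comments as plain text that travels with the document, e.g.:

```
The moon {~~lifted~>rose~~} with slow majesty.{>>Keep "lifted"? It's the original.<<}
This sentence is {++newly added++} and this one is {--deleted--}.
```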

With mist itself:

Several people have asked for the ability to self-host it. The README says how (it’s all on Cloudflare naturally). You can add new features to your own fork, though please do share upstream if you think others could benefit.

And yes, contributions welcome! We’ve already received and merged our first pull request – thank you James Adam!


No, a document editor is not what we’re building at Inanimate. But it’s neat to release small useful projects that get made along the way. btw subscribe to our newsletter.


More posts tagged: inanimate (5).

https://interconnected.org/home/2026/04/10/open-mist
A sleep aid

Mostly I go to sleep very easily. Like 3 minutes from lights out, max.

I’m content, I exercise, I burn my tokens each day, I think all that helps.

Often I wake early and think. I’m protective over what goes into my 4am thinking time, I enjoy it. You don’t get to choose what you think about at 4am. It’s inevitably going to be work. So I optimise for having interesting work and I’m very lucky there. Mostly I go back to sleep after a bit.

Sometimes I don’t get to sleep easily, for example in 2021.

In that case I close my eyes and visualise a device:

The device has 6 buttons arranged in two rows. It changes in appearance but the most common form in my imagination is a Dieter Rams-style enclosure in beige about 4 or 5 inches across with its buttons on the top, and the buttons are flush against each other with a circular depression on the top to push down with your finger.

The buttons are really satisfying to push. Good resistance, good slip-clunk into place when engaged.

Sometimes it’s different. Sometimes the buttons click in like a pen top when they’re down; sometimes they rise up as soon as I’m not pressing. Sometimes they light up when activated, sometimes not.

The game, in my imagination, is this:

There is some combination of buttons that I can push which causes me to instantaneously fall asleep. But I don’t know the code.

So what I imagine (it’s a visual and tactile experience) is trying every single combination of buttons until I push the right ones together.

There aren’t many combinations to try, only 63. I usually try a few then run through methodically by counting up in binary.
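For the record, the arithmetic checks out: six buttons, each pressed or not, give 2^6 - 1 = 63 non-empty combinations, and counting up in binary walks through every one. In Python:

```python
# Each combination of 6 buttons is a 6-bit number; skip 0 (no buttons pressed).
combos = [format(n, "06b") for n in range(1, 2**6)]
print(len(combos))  # → 63
print(combos[0], combos[-1])  # → 000001 111111
```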

Perhaps the code is different each night, perhaps it is the same – I wouldn’t know because I discover it successfully every time and go to sleep and forget what happened.

I was gently advised against posting this because it makes me sound like a weirdo but you already know that about me. And now you know about the six buttons too.

https://interconnected.org/home/2026/04/03/sleep
An appreciation for (technical) architecture

Once upon a time I kept on meeting architects who had ended up working with the web.

I asked why. Some good answers:

  • Architects think about how people move between spaces (pages) and what that means for user experience - this was at a time when web designers often came from graphic design and drew more on single page layout
  • Architects think about negative space, and how what you put in a space shapes social behaviour – this was at a time before the social web
  • Architects have to work with a lot of different disciplines to make something, and all of those people believe they’re the most important person in the room, and that’s what product teams are like too - lol

I’m not an architect but some of my favourite books are about architecture.

Here are three:

Two things that architecture does have been on my mind recently: how it shapes understanding and how it shapes its own evolution.


Information architecture

It’s a rare designer who operates at both the macro of strategy and culture and organisations, and the micro of craft and taste and interactions.

Jeff Veen is one. I remember him saying to me once: "Design is about creating the right mental model for the user."

(Now clearly design is not only about that, but for the particular problem I took to Veen, he said precisely what I needed to hear to get un-stuck.)

So I love thinking about the primitives of functionality and content for the user and how they relate, such that the user can reason intuitively about what they can do with the system, and how.

And this is an interactive process: for a first time user, how do they first encounter a system and how do they way-find and learn over time?

And this is a cognitive process: mental models are abstract; what we perceive is real. So how does understanding happen?

(AI agents are using my software. Prioritise clarity over feels.)


Don Norman wrote The Design of Everyday Things (1988), much loved by web designers, and popularised “user-centred design.”

Norman also brought into design the term affordance from cognitive psychology. As coined by J J Gibson: "to perceive something is also to perceive how to approach it and what to do about it" (as previously discussed).

The best way to notice affordances is to notice where they go wrong! Norman doors:

Some doors require printed instructions to operate, while others are so poorly designed that they lead people to do the exact opposite of what they need to in order to open them. Their shapes or details may suggest that pushing should work, when in fact pulling is required (or the other way around).

Whenever you see a PUSH label stuck on as an extra, it’s papering over a Norman door.

I was delighted to encounter a Norman door irl this week.

So I’m stretching the definition of architecture here, to include this, but roll with it pls. Architecture is how things are understood.


Architecture is how things evolve – how they’re allowed to evolve.

There’s a beautiful housing estate on the top of a hill in south London.

Dawson’s Heights (1964) is shaped like an offset double wave, and looks different on the horizon from every angle and with every change of the light. Yet up-close it’s human-scale too, despite its 10 storeys.

Lead architect Kate Macintosh wanted residents to have balconies, but this was regarded as "wasting public money on unnecessary luxuries".

Knowing that they would be removed from her designs for cost-saving, she made them essential:

all the balconies on Dawson’s Heights are fire escape balconies, but they are also private balconies because the escape door is a “break glass to enter” type lock so you can securely use your balcony for whatever you like.


Technical architecture

So software architecture is also team structure - who needs to talk to who - but also how to make sure that doing something the quick-and-dirty way is also doing it the right way.

Half of software architecture is making sure that somebody can fix a bug in a hurry, add features without breaking it, and be lazy without doing the wrong thing.

I said that in 2004.

I think this goes for internal software architecture and for libraries that you import.


The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it’ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon.

Like, where’s the bottom? Why not take a plain English spec and grind it out in pure assembly every time? It would run quicker.

But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.

So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the “right” way the easy way for developers building apps with them. Architecture!
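A tiny illustration of that principle in Python, with made-up names: a resource helper whose laziest possible usage is also the leak-proof usage.

```python
from contextlib import contextmanager

# Hypothetical library object; these names are illustrative, not a real API.
class Connection:
    def __init__(self):
        self.open = True
    def close(self):
        self.open = False

@contextmanager
def connect():
    # The easiest way to get a Connection also guarantees cleanup, so the
    # quick-and-dirty call site is automatically the correct call site.
    conn = Connection()
    try:
        yield conn
    finally:
        conn.close()  # always runs, even if the caller's code raises

with connect() as conn:
    assert conn.open       # usable inside the block
assert not conn.open       # guaranteed closed afterwards
```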

While I’m vibing (I call it vibing now, not coding and not vibe coding), I am looking at lines of code less than ever before, and thinking about architecture more than ever before.

I am sweating developer experience even though human developers are unlikely to ever be my audience.

How do we make libraries that agents love?


More posts tagged: multiplayer (32).


https://interconnected.org/home/2026/03/28/architecture
Filtered for home security
1.

The Amazon Ring Always Home Cam is an indoor security drone for your home.

Introduced with this video in 2020: "Yeah, it’s a camera that flies."

Sadly not yet on the market.

Ok, Judge Dredd had Spy-in-the-Sky drone surveillance cameras in 1978, and Mega-City One is not an aspirational template for domestic life, but hear me out:

Because I would love to be able to text my house “oh did I leave the stove on?” from the bus. And “darn can you find my keys?” in the morning. And “uh there’s that book about 1970s social computing somewhere it has an orange spine I can’t remember exactly” at literally anytime.

And do that without having to blanket my home in cameras. A drone seems like a good solution?

2.

Surveillance: systematic observation. Often institutional. From “above.”

Sousveillance, coined by cyborg Steve Mann in 2002: "watchful vigilance from underneath."

I am suggesting that the cameras be mounted on people in low places, rather than upon buildings and establishments in high places.

e.g.

a taxicab passenger photographs the driver, or taxicab passengers keep tabs on driver’s behaviour

It is such a positively-framed paper.

We swim in this world now. What does it do to us?

(I wonder if there’s a word like auto-sousveillance? We do it to ourselves.)

3.

The Nor (2014) by artist James Bridle.

The sense of being watched is a classic symptom of paranoia, often a sign of deeper psychosis, or dismissed as illusory. In the mirror city, which exists at the juncture of the street and CCTV, of bodily space and the electromagnetic spectrum, one is always being watched. So who’s paranoid now?

(As previously discussed, briefly.)

Exactly midway between Mann coining sousveillance in 2002 and today, 2026, Bridle put his finger on this paranoia background radiation, slowly increasing like population levels, like CO2 ppm, like sea level, like the frog’s bath.

4.

Robot Exclusion Protocol (2002) by blogger Paul Ford: "A story about the Google of the future."

I took off my clothes and stepped into the shower to find another one sitting near the drain. It was about 2 feet tall and made of metal, with bright camera-lens eyes and a few dozen gripping arms. Worse than the Jehovah’s Witnesses.

“Hi! I’m from Google. I’m a Googlebot! I will not kill you.”

“I know what you are.”

“I’m indexing your apartment.”

I feel like we are 24 months off this point?

Only they’ll be indexer googlebot drones that we vibe code for ourselves.

5.

Back in 2024, engineer Simon Willison realised that the killer app of Gemini Pro 1.5 is video, and:

I took this seven second video of one of my bookshelves:

It understood the video and gave him back a machine-readable list of the titles and authors. That’s handy!

I am still waiting for this as an app so that I can index and search my overflowing bookshelves by not-even-that-carefully waving my phone at them.

Please I am too lazy to type the prompt to vibe this.

The meta-point is that auto-sousveillance is inevitable because I can’t find the book I’m looking for.

6.

Man accidentally vibe codes a robovac army (2026).

The DJI Romo is a $2000 behemoth that mops and vacuums using LIDAR and AI.

Sammy Azdoufal wanted to control his robovac with his PlayStation controller.

However, the scanner his [Claude Code agent] created not only gave him access to his device; it gave him access and control over almost 7000. He was able to see home layouts and IP addresses, and control the devices’ cameras and microphones.

Uh oh.

Whereas the point of institutional surveillance is that the CCTV cameras are conspicuous (and, originally, you didn’t know if anyone was watching, but now the AI processes all),

the characteristic of auto-sousveillance seems to be that you don’t know whether you are privately querying for a lost book or live streaming your bathroom to the internet.

Forget about control, how do you even relate to such a capricious system?

7.

The ancient Romans had two types of gods.

There are the gods on Olympus who look after nature, cities, the state.

And then there are Lares (Wikipedia), guardian deities of a place, "believed to observe, protect, and influence all that happened within the boundaries of their location or function."

In particular, household gods, Lares Familiares, that reside not on a distant mountain but instead in a household shrine:

The Lar Familiaris cared for the welfare and prosperity of a Roman household. A household’s lararium, a shrine to the Lar Familiaris and other domestic divinities, usually stood near the dining hearth or, in a larger dwelling, the semi-public atrium or reception area of the dwelling. A lararium could be a wall-cupboard with doors, an open niche with small-scale statuary, a projecting tile, a small freestanding shrine, or simply the painted image of a shrine …

The Lar’s statue could be moved from the lararium to wherever its presence was needed. It could be placed on a dining table during feasts or be a witness at weddings and other important family events.

RELATED:

Lares: our 2 minute pitch for an AI-powered slightly-smart home (2023) – you can see a demo video.

And here’s a paper about Lares showing emergent behaviour from AI agents, which in 2024 was novel and surprising.


More posts tagged: filtered-for (122).

https://interconnected.org/home/2026/03/20/filtered
New Wave Hardware

We briefly mentioned New Wave Hardware in last week’s Inanimate Lab Notes so this is me doing some unpacking. While you’re there, join 300+ other subscribers and sign up for our newsletter. You’ll get weekly links and updates on what we’re working on.


There are a bunch of things changing with new hardware products, design and technology.

Let’s say: the intersection of hardware and AI. But our hunch is that it’s broader than that.

There are new ways to get hardware into the hands of consumers, and new AI interactions that are now possible, and more, and these changes are happening independently but simultaneously. We’re tracking this as what we’re calling New Wave Hardware.

So we got a few founders together at Betaworks in NYC earlier this week for a roundtable to compare notes (thank you Betaworks!).

The meta question was: does our hunch hold? And, if so, what characterises New Wave Hardware and what specifically is changing – so that we can push at it?

I kept notes.

I’ll go off those and add my own thoughts.

(I’m using some direct quotes but I won’t attribute them or list attendees. I would love for others to share their own perspectives!)


AI interfaces

Voice is good now! (As I said.) So we’re seeing that a lot.

More than that:

  • You can express an intent and the computer will do what you mean
  • Natural interfaces are workable now, beyond voice. e.g. the new Starboy gadget by lilguy: "We trained multiple tiny image models that run locally on the device, letting it recognize human faces and hand gestures" (launch thread on X).

What do we do with consumer gadgets that perceive pointing and glances? What is unlocked when we shift away from buttons and apps to interact with hardware devices, and the new interface is direct and human and in the real world?

New interaction modalities

Beyond the user interface, the way we interact with hardware is changing. I kept a running list of the interaction modality changes that were mentioned:

  • Human interfaces – see above.
  • Situated – due to always-on sensors, AI devices know what’s going on around them and can respond when they see fit, not only on a user trigger. Yes, screens that dim when it gets dark, but in a wider sense this goes back to Clay Shirky’s essay Situated Software (2004), "software designed in and for a particular social situation or context." We’re seeing more of this.
  • Autonomous – agents are software with their own heartbeat; now we see that "the hardware becomes aware"… and then what? Maybe the user doesn’t need to be intentional about activating some function or another; the device can get ahead of intentions, and offer a radically different kind of value to the user. A new design possibility.
  • Networked – we’re frequently working with connected devices which today have attained a new level of reliability. What happens when the stuff around us channels planetary intelligence?
  • Embodied – the cleverness of the Plaud AI note taker device is that it’s a social actor: you can notice it, place it on the table, cover it; it inflects what people say and how they feel (for better or worse). Hardware is in the real world and you can move it from focal to peripheral attention just by moving your head.

Some of these are new colours in the palette to design with; some are intrinsic to hardware and have been there all along. Though amplified! The rise of wearables (described by one founder as "sitting between the utility and affinity group") means that hardware is more frequently in our faces.

There are challenges. When we have devices and "the ability to put software that can do anything at any time in them," the lack of affordances and constraints can be baffling. So how do we not do that?

And how do we understand what things do anyway, really, when behaviour steered by AI is so non-deterministic? Perhaps we have to lean into the mystical. That’s another trend.

Getting hardware in the hands of users

Every few years there’s a claim that it’s now quicker than 18 months to get a hardware product from concept through manufacture: that’s still not the case, but there are alternatives and short cuts – some of which are potentially much quicker.

Like: reference designs. There is now so much hardware coming out of Shenzhen that there are high-level reference designs for everything, ready to customise, and factories are keen to partner. One team at the roundtable developed their core electronics in the US, then got pretty sophisticated products built (batch size of 100), complete with beautiful metal enclosures, after spending just 3 weeks in China.

Also like: 3D printers. Short run fabrication is possible domestically in a way it wasn’t before. Let me highlight Cipherling which combines production-grade microcontrollers with a charming 3D printed enclosure to get to market quicker.

It does seem like the sophistication of the Western and Shenzhen hardware ecosystems has made these approaches - which are not new - newly accessible.

Form factors

New Wave Hardware skews consumer, perhaps?

Or at least there’s a renewed interest in consumer hardware from startups and investors.

This is partly because there’s a big unknown and therefore a big opportunity: AI is hungry for context, it’s useful in the real world outside our phones, and the new AI interaction modalities mean there’s a lot to figure out about how to make that good – it’s not obvious what to do. Like do we have lanyards or pucks on tables or what? We need to experiment, which demands quick cycle time, which is a driver for finding alternatives to the 18 month product development cycle.

Also the previous generation of hardware was oh-so-anonymous. One remark I wrote down from the roundtable, regarding the consumer hardware that currently surrounds us: "This is hardware that would want to be invisible if it could."

So there’s a desire to try new forms; products that don’t secretly want to hide themselves.

Just a note too that “new form factors” doesn’t just mean standalone devices: we continue to be inspired by the desk-scale or even room-scale work at Folk Computer.

New tools, of course

If you’re an artist wanting to put a few dozen instances of weird new consumer electronics in people’s hands, and your single blocker was writing firmware, then guess what: in the year of our Claude 2026 that is no longer a blocker.

AI tools provide what I’ve previously called Universal Basic Agency and it is wonderful. When individuals are unblocked, we get an abundance of creativity in the world.

(We were at a 6 minute demos event in the basement of an independent bookstore in Brooklyn on Friday - see this week’s Lab Notes - and one speaker was showing their vintage arcade display adaptor project. So cool. They make super complicated PCBs but don’t enjoy 3D modelling, so did the CAD in programmable modelling software with a few lines of code. Not AI, but advanced tools.)

And do we see a glimmer of end-user programming too?


I’m grateful for the thoughtful and open conversation of everyone at the roundtable.


As I write this, a set of colourful Oda speakers, hanging from the ceiling here at Betaworks, relay a live audio stream from a macaw sanctuary forest in central Costa Rica. We can hear the birds and the weather – it is transporting.

If there is such a trend as New Wave Hardware (and, after our small conversation, I do believe there is) then it is not confined to mass market novel AI interfaces, it is also these profound artistic interventions, and we all learn from one another.

Are you seeing something happen here too? Are hardware startups characterised by something different today versus, say, 5 years ago? Lmk if you end up sharing your perspective on your blog/newsletter – would love to read.

At Inanimate we are building products within New Wave Hardware, and working to do our bit to enable it.

We hope to convene another roundtable in the near future, either here in NYC or back home in London, to continue swapping notes and pointers and feeling this out together.


More posts tagged: inanimate (5).

https://interconnected.org/home/2026/03/12/nwh