A Review of Claude Cowork Virtualization
I was reading about Claude Cowork and was curious about how it works behind the scenes. This is how Anthropic breaks down how Cowork runs tasks:
- Analyzes your request and creates a plan.
- Breaks complex work into subtasks when needed.
- Executes work in a virtual machine (VM) environment.
- Coordinates multiple workstreams in parallel if appropriate.
- Delivers finished outputs directly to your file system.
Using virtual machines makes sense because they provide better isolation than a container would. This all happens behind the scenes in Claude Desktop, which in my opinion makes for a frictionless and positive user experience. For the purposes of this post, the host refers to the macOS device running the Claude Desktop application. The virtual machine managed by the Claude Desktop application will be referred to as the guest.
I am by no means a digital forensics expert, so there may be inaccuracies in the analysis. These observations are based on basic static binary analysis, disk exploration, and log analysis, and are not meant to be exhaustive. As a security practitioner, my goal was to understand what built-in controls are in place to address common threat models.
Overview of Findings
- The Multilingual Claude: A complex orchestration layer combining Node.js/Electron (UI/Orchestrator), Swift (Native macOS Virtualization hooks), and Go (Guest‑side daemon and networking logic).
- Virtualization: A fully virtualized ARM64 Linux guest (Ubuntu 22.04 LTS) powered by macOS Virtualization.framework, rather than a lightweight container.
- Defense in Depth: A “sandbox‑within‑a‑sandbox” model. It uses hardware‑level VM isolation combined with Bubblewrap (`bwrap`) to further constrain processes inside the guest. It also provides anti-tampering measures and session user isolation.
- Intelligent Persistence: Uses an Apple shadow disk format (`sessiondata.img`) to persist session state (such as NPM logs and home directories) while keeping changes to the base `rootfs.img` minimal.
- Egress Control: Network access is not wide open. It is governed by a hard‑coded 22‑domain allowlist (NPM, PyPI, Anthropic API, etc.) managed by the `srt-settings.json` policy.
- Filesystem Bridge: Host files are exposed via VirtioFS, but visibility is restricted through per‑session bind mounts that only map specific user‑selected folders into the VM.
Cowork Architecture
Overview
After performing basic static analysis with strings, the following components were observed:
- Electron / Node.js UI: Front‑end interface that drives the app flow.
- Swift Native Bridge: Exposes macOS Virtualization APIs (VZ* symbols) for VM control.
- Go Runtime: Handles performance‑critical compute and networking tasks, linked via native bindings.
| Layer | Technology | What it does |
|---|---|---|
| UI / Orchestrator | Electron (v28) + ASAR bundle | Provides the graphical interface and starts the workflow. |
| Host‑side Native Bridge | Swift + `@ant/claude-swift` (compiled addon) | Calls Apple’s `VZ*` APIs to create and manage the VM. |
| Guest‑side Runtime | Go 1.24.13 daemon (`coworkd`) + `bwrap` | Executes tasks inside the VM, enforces resource limits, and mediates file access. |
| Persistence | `sessiondata.img` (Apple shadow‑disk) + `.origin` hash files | Stores per‑session home directories, NPM logs, and other mutable state. |
| Network Control | `srt-settings.json` allow‑list (22 domains) | Restricts outbound network calls to a whitelisted set. |
| File Bridge | VirtioFS mount at `/mnt/.virtiofs-root/` | Exposes selected host folders to the guest without exposing the whole filesystem. |
Cowork Multilanguage Orchestration
Together they form a compact stack: UI → Swift‑based virtualization layer → Go back‑end, all packaged inside an Electron app using ASAR and unpacked native modules. Running strings /Applications/Claude.app/Contents/MacOS/Claude yields:
/dev/null
ELECTRON_RUN_AS_NODE
The Claude Desktop application does not contain much application code; instead, it initializes the Electron framework, which looks for an app.asar. The app is bundled with ASAR, which archives everything necessary into a .asar file. In some cases, binaries, libraries, or assets cannot be bundled and are excluded from the ASAR archive:
asar pack app app.asar --unpack-dir "swift_addon.node"
This explains why we see Claude.app/Contents/Resources/app.asar.unpacked, which contains /Applications/Claude.app/Contents/Resources/app.asar.unpacked/node_modules/@ant/claude-swift/build/Release/swift_addon.node.
Running strings on swift_addon.node provides useful insights:
- Symbols prefixed with `VZ`, such as `VZVirtioSocketConnection` and `VZVirtioSocketDevice`, belong to Apple’s Virtualization framework.
- Symbols like `VZGvisorNetworking` are not part of Apple’s framework but likely relate to gVisor.
- `VZLinuxBootLoader` appears with kernel arguments like `root=/dev/vda console=hvc0 quiet`.
- The Go runtime version is `go1.24.13`.
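The symbol triage above is just a strings-and-grep pass. Here is a self-contained sketch of the technique: it fabricates a throwaway file with NUL-separated strings so the snippet runs anywhere; on a real machine you would point `ADDON` at `swift_addon.node` instead.

```shell
# Fake binary standing in for swift_addon.node, so the sketch is runnable.
ADDON=$(mktemp)
printf 'junk\0VZVirtioSocketDevice\0VZLinuxBootLoader\0_other_symbol\0' > "$ADDON"

# Filter printable strings for Apple's VZ prefix, exactly as in the analysis.
strings "$ADDON" | grep '^VZ' | sort -u
# → VZLinuxBootLoader
# → VZVirtioSocketDevice
```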
The stack includes JavaScript, Swift, and Go. This suggests that Swift functionality is exposed to the Node.js application (e.g., via a proof‑of‑concept such as node‑mac‑swift‑addon), enabling the Electron UI to interact with Apple’s Virtualization framework. The Go runtime likely handles compute‑intensive networking tasks, while the Swift layer provides native macOS virtualization hooks.
Storage and Persistence
For macOS users, virtualization‑related files reside in ~/Library/Application Support/Claude/vm_bundles/claudevm.bundle. The following files were observed:
| Image | Size | Format | Purpose |
|---|---|---|---|
| `rootfs.img` | 10 GB | Raw disk image (GPT) with EFI + ext4 partitions | Base OS, Ubuntu 22.04 LTS (Jammy) ARM64 root filesystem. |
| `sessiondata.img` | ~54 MB | Apple shadow‑disk (`shdw` magic) | Per‑session writable overlay (home dirs, NPM logs). |
| `smol-bin.arm64.img` | 10 MB | ExFAT, presented as a USB mass‑storage device (`/dev/sda`) | Holds the smol‑bin updater binaries and the latest `srt-settings.json` policy. |
| `rootfs.img.zst` + `bundle.tar.gz` | 2.0 GB (compressed) + 4.6 GB | Zstandard image + gzip tarball | Compressed distribution artifacts used to populate the base image. |
| `efivars.fd` | 128 KB | UEFI firmware variables | Boot‑state tracking. |
| `vmIP` / `machineIdentifier` | — | Binary plist | Persistent UUIDs used for session affinity. |
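The “`shdw` magic” identification above comes from the first bytes of `sessiondata.img`. A minimal, self-contained sketch of that check follows — it uses a mock file so it runs anywhere; the real image lives inside `claudevm.bundle`:

```shell
# Mock sessiondata.img: the Apple shadow-disk (ASIF) format was identified
# here by its leading 'shdw' magic bytes; head -c 4 reads just the signature.
IMG=$(mktemp)
printf 'shdw\0rest-of-image' > "$IMG"
head -c 4 "$IMG"
# → shdw
```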
Cowork Base Operating System
The main rootfs.img is a disk image that can be mounted directly. The following steps mount it inside a Docker container:
docker run --rm -it --privileged \
-v "/Users/jimmy/Library/Application Support/Claude/vm_bundles/claudevm.bundle":/bundle:ro \
ubuntu bash
# Inside the container:
apt-get update && apt-get install -y kmod file
losetup -D
# Mount the ext4 partition (offset 105906176 bytes)
losetup -r -o 105906176 /dev/loop1 /bundle/rootfs.img
mkdir -p /mnt/rootfs
mount -o ro,noload /dev/loop1 /mnt/rootfs
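The 105906176-byte offset passed to `losetup -o` is not magic: assuming the standard 512-byte sector size for a raw GPT image, it corresponds to the ext4 partition’s start sector, which a tool like `sfdisk -d` or `fdisk -l` would report.

```shell
# Assuming 512-byte sectors, the ext4 partition's start sector converts to
# the byte offset used with losetup -o above (105906176 / 512 = 206848).
START_SECTOR=206848
SECTOR_SIZE=512
echo $((START_SECTOR * SECTOR_SIZE))
# → 105906176
```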
The mounted filesystem reveals:
- Ubuntu 22.04.5 LTS: a cloud‑image base with kernel `6.8.0-94-generic` (HWE).
- ARM aarch64: native Apple Silicon, no emulation layer.
- Standard Linux layout plus three non‑standard directories: `/sessions/` (per‑session home dirs named like `inspiring-quirky-volta`), `/smol/bin/` (populated at runtime), and `/workspace/` (project mount point).
- Users: `root`, `ubuntu` (uid 1000), plus dynamically created session users (uid ≥ 1001).
- Installed Dependencies: Python 3.10, Node.js, uv/uvx, pip, numpy, tesseract, LibreOffice converters (unoserver/unoconvert), magika, pdf tools (pdfplumber, camelot, pdf2txt), markitdown, img2pdf.
- Claude Code CLI: a 213 MB ARM64 ELF located at `/usr/local/bin/claude`, representing the full CLI agent running inside the VM.
Cowork’s dependencies make sense given its wide range of use cases, which seem to be centered on office and productivity work.
Smol Updater
The smol-bin.arm64.img is likely mounted at /smol/bin during runtime, serving as an update mechanism. Logs in ~/Library/Logs/Claude/coworkd.log show:
2026/02/13 13:00:37 [updater] mounted smol-bin device /dev/sda at /smol/bin
2026/02/13 13:00:37 [updater] checking for updates from /smol/bin
2026/02/13 13:00:37 [updater] unmounted smol-bin
The image contains the binaries sdk-daemon and sandbox-helper, along with srt-settings.json (the allowlist discussed later).
Session Lifecycle
Review of sessiondata.img hints that all writes related to Cowork sessions land here; log evidence of this appears in the next section. Returning to the strings analysis of swift_addon.node, we look for virtualization-related symbols. This GitHub issue seems to confirm that session data is indeed written and managed in a sessions mount backed by sessiondata.img. This makes sense: in the Cowork section of Claude Desktop, you can resume a previous session.
While reviewing logs on the host, related entries appear in ~/Library/Logs/Claude/claude_vm_swift.log, which adds clarity about how the host orchestrates sessions inside the guest VM.
[VM] 10:29:59 [info] startVM called for ...claudevm.bundle
[VM] 10:30:00 [info] Session data disk created successfully (ASIF format)
[VM] 10:30:00 [info] Configuration created:
[VM] 10:30:00 [info] - CPUs: 4 | Memory: 4GB | Boot: EFI (GRUB)
[VM] 10:30:00 [info] - Rootfs: .../claudevm.bundle/rootfs.img
[VM] 10:30:00 [info] Starting VM...
[VM] 10:30:00 [info] Linux VM started successfully
[VM] 10:30:07 [info] Guest connected | Network: CONNECTED | Guest ready
[VM] 10:30:08 [info] SDK installed: version=2.1.5
[VM] 10:33:20 [info] Process spawned: name=vigilant-youthful-carson command=/usr/local/bin/claude
[VM] 11:07:24 [info] Process killed with signal: SIGTERM
The logs above show the Claude Cowork Swift code spawning a session named vigilant-youthful-carson. Activity inside the guest can be observed in ~/Library/Logs/Claude/coworkd.log, which shows that coworkd creates a user on the guest with a home directory under /sessions/.
2026/01/28 16:57:53 [process] user vigilant-youthful-carson doesn't exist
2026/01/28 16:57:53 [process] recreated user vigilant-youthful-carson: uid=1001 gid=1001 home=/sessions/vigilant-youthful-carson
The logs also provide some insight into the persistence of these sessions.
2026/02/28 15:35:54 [coworkd] mounting session disk /dev/nvme0n1 at /sessions
2026/02/28 15:35:54 [process] attempting recovery from home directory
2026/02/28 15:35:54 [process] recovered uid=1001 gid=1001 from /sessions/vigilant-youthful-carson
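The recovery step in those logs suggests that coworkd re-derives a session’s uid/gid from the ownership of its home directory under `/sessions/`. A hypothetical, self-contained sketch of that idea (using a temp directory as a stand-in for the real session home):

```shell
# Hypothetical reconstruction of the recovery the logs describe: read the
# uid/gid back from the filesystem ownership of the session's home dir.
SESSION_HOME=$(mktemp -d)   # stands in for /sessions/<session-name>
UID_GID=$(stat -c 'uid=%u gid=%g' "$SESSION_HOME")
echo "recovered $UID_GID from $SESSION_HOME"
```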
In ~/Library/Logs/Claude/claude_vm_node.log we can see additional orchestration details:
2026-01-13 10:27:52 [info] [SwiftVM] Loading @ant/claude-swift module...
2026-01-13 10:27:52 [info] process=vigilant-youthful-carson, isResume=false, mounts=4
2026-01-13 10:27:52 [info] cmd=/usr/local/bin/claude args=--claude-opus-4-5-20251101 ...
2026-01-13 10:27:52 [info] process=vigilant-youthful-carson, isResume=true, mounts=4
This approach provides user isolation, barring an underlying local privilege escalation. Session users are not present in sudoers, and because each session user has a distinct UID, one session user cannot access another’s files. Simple and effective.
Host File Access
The allure of Claude Cowork is letting it interact with local files. Perhaps I have a theoretical notes folder full of partially organized, half-complete, incoherent markdown notes. When using Claude Cowork, you select a local folder to “work in.” The selected folder’s contents show up in /mnt/.virtiofs-root/, confirming that a VirtioFS bridge is used to expose the selected local folder to the Cowork VM. Reviewing /etc/fstab on disk shows only the ext4 and EFI entries; /mnt/.virtiofs-root/ exists but is empty. This implies that folders selected in Claude Cowork are mounted on demand.
Security Architecture
Sandboxing
Since the Claude Code CLI is available within the guest, it is the binary doing the heavy lifting based on input from the Claude Desktop application on the host. The claude CLI performs various tool calls, which explains the wide range of installed dependencies.
While reviewing the virtual machine’s filesystem, I confirmed that Bubblewrap is installed at /usr/bin/bwrap. Processes spawned inside the VM are likely constrained by bwrap, which provides another layer of isolation.
srt-settings.json was found inside smol-bin.arm64.img (ExFAT) on the host. It appears to define an allowlist of 22 domains accessible from the guest, including:
- NPM: `registry.npmjs.org`, `npmjs.com`, `yarnpkg.com`.
- Python: `pypi.org`, `files.pythonhosted.org`.
- Rust: `crates.io`, `index.crates.io`, `static.crates.io`.
- OS Updates: `archive.ubuntu.com`, `security.ubuntu.com`.
- Anthropic API: `api.anthropic.com`, `*.anthropic.com`.
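The exact schema of srt-settings.json is not published, so here is a hedged sketch of how one might audit such an allowlist: it mocks a plausible shape (the `allowedDomains` key is an assumption loosely suggested by the `allowedDomains=22` log entry) and extracts anything domain-shaped.

```shell
# Mock policy file with an assumed schema; the real srt-settings.json may
# differ. Domains below are taken from the observed allowlist.
cat > /tmp/srt-settings.json <<'EOF'
{"allowedDomains": ["registry.npmjs.org", "pypi.org", "api.anthropic.com"]}
EOF

# Pull out anything that looks like a quoted domain and sort it.
grep -o '"[a-z0-9.*-]*\.[a-z]*"' /tmp/srt-settings.json | tr -d '"' | sort
# → api.anthropic.com
# → pypi.org
# → registry.npmjs.org
```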
The allowlist is also referenced in ~/Library/Logs/Claude/claude_vm_node.log in this log entry:
2026-02-25 11:52:17 [info] [VM] Loading @ant/claude-swift module... Support/Claude/vm_bundles/claudevm.bundle
subpath=Library/Application Support/Claude/claude-code-vm, version=2.1.51
claude, .skills, uploads), allowedDomains=22
Integrity Checks
VMs on employee machines can be a blind spot. As an attacker, I would love to get a foothold on a guest VM, and combined with malicious skills, MCPs, or supply chain attacks on dependencies, it wouldn’t be too difficult to compromise one. Anthropic had an answer for this threat model, which I accidentally uncovered.
When initially trying to understand how the guest VM operates, I attempted to install an eBPF agent. This uncovered some type of integrity enforcement in the guest VM, where a modification to the rootfs triggers a recreation of the Cowork VM. After some more digging, it appears Claude Desktop tracks SHA-1 hashes via .rootfs.img.origin and .rootfs.img.zst.origin. I attempted to patch the integrity checks, but the reversion still occurred, so there must be hardcoded or remote integrity checks as well. If I haven’t misinterpreted this mechanism, it is good news for organizations from a security perspective.
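The `.origin` naming suggests a simple recorded-hash scheme. Below is a hypothetical, self-contained sketch of what such a tamper check could look like — a mock image file stands in for rootfs.img, and the real mechanism is likely more involved (and, per the observations above, backed by additional hardcoded or remote checks):

```shell
# Hypothetical .origin-style tamper check: record a SHA-1 at provisioning
# time, recompute it later, and recreate the VM on mismatch.
IMG=$(mktemp)
printf 'pretend-rootfs-bytes' > "$IMG"
sha1sum "$IMG" | awk '{print $1}' > "$IMG.origin"     # provisioning step

# Later: verify the image against the recorded hash.
if [ "$(sha1sum "$IMG" | awk '{print $1}')" = "$(cat "$IMG.origin")" ]; then
  echo "rootfs intact"
else
  echo "rootfs modified: recreate VM"
fi
# → rootfs intact
```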
Summary
There’s a lot more to dig into regarding how the VM’s networking works while staying sandboxed, but I’ll save that for a follow-up post. It’s obvious Anthropic put a lot of thought and effort into providing a secure yet smooth user experience for people using Cowork. Most people using Cowork will have no idea what’s happening under the hood, and they likely don’t need to.