If you’ve ever tried to launch Chromium directly from Puppeteer, you know the pain — high CPU, zombie processes, and broken sandboxes.
Instead of spawning Chrome from every Node process, you can run it once as a container and connect remotely.
That’s exactly what browserd does — it wraps headless Chromium with a small Go proxy that exposes a stable WebSocket at ws://0.0.0.0:9223.
Your Puppeteer client can attach directly to this endpoint — no extra HTTP fetch, no random /devtools/browser/<id>, and no lifecycle headaches.
🧱 What is browserd?
Headless Chromium packaged with a Go proxy that gives you a fixed WebSocket endpoint (`ws://0.0.0.0:9223`) so Puppeteer or any CDP client can connect immediately — even across load balancers.
The proxy inside browserd tracks Chromium’s internal DevTools socket and exposes it directly, meaning:
- You always connect to `ws://host:9223`
- You never need to query `/json/version` or guess the internal DevTools path
- You can safely scale multiple containers behind a load balancer
If you don’t want to run containers at all, Peedief’s managed renderer hosts the same stack for you — same automation, zero ops.
🚀 Quick Start
Download the seccomp profile (for a safer sandbox) and run the container:
```shell
# Download the Chromium seccomp profile
curl -o chromium.json https://raw.githubusercontent.com/peedief/browserd/main/chromium.json

# Run browserd container
docker run --rm \
  --security-opt seccomp=chromium.json \
  -p 9223:9223 \
  --name browserd \
  ghcr.io/peedief/browserd:v1.0.0
```
That’s it — the WebSocket is live at `ws://localhost:9223`.
The Go proxy automatically connects to the internal Chrome DevTools backend, so you don’t have to worry about /devtools/browser/<id>.
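For contrast, here is the discovery dance browserd eliminates. Without the proxy, a client must first fetch `/json/version` from Chrome’s debug port and parse out the per-launch `webSocketDebuggerUrl` before it can connect (the sample response and endpoint ID below are made up for illustration):

```javascript
// Illustrative only: the per-run discovery step that browserd removes.
// A sample /json/version response (endpoint ID is fabricated):
const sampleVersionResponse = JSON.stringify({
  Browser: 'HeadlessChrome/120.0.6099.0',
  'Protocol-Version': '1.3',
  webSocketDebuggerUrl:
    'ws://localhost:9222/devtools/browser/0b4cdef0-1111-2222-3333-444455556666',
});

// Every Chrome restart produces a new URL, so clients must re-discover it:
function extractDebuggerUrl(jsonBody) {
  const info = JSON.parse(jsonBody);
  if (!info.webSocketDebuggerUrl) {
    throw new Error('no webSocketDebuggerUrl in /json/version response');
  }
  return info.webSocketDebuggerUrl;
}

console.log(extractDebuggerUrl(sampleVersionResponse));
```

With browserd, this entire step disappears: the endpoint is always `ws://host:9223`.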
🧩 Connect Puppeteer to browserd
In your Node app, install Puppeteer Core (no bundled Chrome):
```shell
npm install puppeteer-core
```
Then connect directly to the proxy:
```js
// connect.js
import puppeteer from 'puppeteer-core';

async function main() {
  const browser = await puppeteer.connect({
    browserWSEndpoint: 'ws://localhost:9223',
  });
  try {
    const page = await browser.newPage();
    await page.goto('https://example.com');
    console.log(await page.title());
    await page.close();
  } finally {
    browser.disconnect(); // don't call browser.close()
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
That’s all it takes.
No /json/version calls, no internal WebSocket discovery — just a single stable endpoint.
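If your app boots before the container is fully up, the first connect can fail. A small retry wrapper keeps that logic out of your main code. This helper is a sketch, not part of puppeteer-core — you would pass it something like `() => puppeteer.connect({ browserWSEndpoint: 'ws://localhost:9223' })`:

```javascript
// Hypothetical helper: retry an async connect with exponential backoff,
// useful while the browserd container is still starting.
async function connectWithRetry(connectFn, { attempts = 5, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await connectFn();
    } catch (err) {
      lastError = err;
      // wait 200ms, 400ms, 800ms, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```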
🧰 Using Docker Compose
For a cleaner setup, define it in docker-compose.yml:
```yaml
services:
  browserd:
    image: ghcr.io/peedief/browserd:v1.0.0
    ports:
      - "9223:9223"
    security_opt:
      - seccomp=chromium.json
```
Then download the seccomp file and bring it up:
```shell
curl -o chromium.json https://raw.githubusercontent.com/peedief/browserd/main/chromium.json
docker compose up -d browserd
```
You now have a fully sandboxed headless Chromium instance that any remote Puppeteer client can connect to at ws://localhost:9223.
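Since browserd exposes a `/healthz` endpoint, you can also let Compose gate dependent services on it. A sketch, assuming `/healthz` is served on port 9223 and `curl` is available inside the image (swap in `wget` or a `CMD-SHELL` check if it isn’t):

```yaml
services:
  browserd:
    # ...image/ports/security_opt as above...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9223/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3
```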
🧠 Why This Is Better
| Old Way (Direct Launch) | With browserd |
|---|---|
| Every Node process spawns its own Chromium instance | One centralized container hosts Chromium for all clients |
| WebSocket URL changes every run (`/devtools/browser/<id>`) | Stable `ws://host:9223` endpoint — never changes |
| You must fetch `/json/version` before connecting | No discovery step — connect instantly |
| High CPU, memory leaks, zombie Chrome processes | One managed Chrome lifecycle handled by the proxy |
| Sandboxing disabled with `--no-sandbox` for simplicity | Runs under a real seccomp sandbox (`chromium.json`) |
| Hard to scale horizontally | Easily load-balance multiple browserd containers — all expose the same consistent WebSocket path |
| Each app handles crashes separately | Browser lifecycle isolated inside the container |
browserd isn’t just cleaner — it’s scalable by design.
Spin up 3–4 replicas behind NGINX, Traefik, or any load balancer, and your Puppeteer clients can connect to any of them using the exact same ws://host:9223 path.
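As a sketch of that setup, here is an NGINX config fronting three replicas. The upstream hostnames are illustrative; the key detail is that CDP runs over WebSockets, so the proxy must forward the `Upgrade`/`Connection` headers and allow long-lived connections:

```nginx
# Sketch: three browserd replicas behind NGINX (hostnames are assumptions).
upstream browserd_pool {
    server browserd-1:9223;
    server browserd-2:9223;
    server browserd-3:9223;
}

server {
    listen 9223;

    location / {
        proxy_pass http://browserd_pool;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;  # keep long-lived CDP sessions open
    }
}
```

Because each WebSocket connection is pinned to one upstream for its lifetime, plain round-robin works: every Puppeteer session lands on a single replica and stays there.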
🧱 Health & Scaling
- A `/healthz` endpoint is exposed for readiness/liveness checks.
- You can run multiple browserd instances and load-balance them — since every proxy exposes the same stable `ws://` path.
- Each proxy holds one persistent Chromium process internally, managing its DevTools lifecycle for you.
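In application code, you can gate your first connect on that `/healthz` endpoint. A minimal sketch — `waitForHealthy` is a hypothetical helper, and `fetchFn` is injectable so the logic is testable without a running container:

```javascript
// Sketch of a readiness gate: poll a /healthz URL until it responds OK
// before pointing Puppeteer at the WebSocket endpoint.
async function waitForHealthy(url, { fetchFn = fetch, attempts = 20, delayMs = 500 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetchFn(url);
      if (res.ok) return true;
    } catch {
      // container not accepting connections yet; keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}

// Usage: await waitForHealthy('http://localhost:9223/healthz');
```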
⚙️ Production Tips
- Use the seccomp profile (`chromium.json`) — don’t disable the sandbox unless you absolutely have to.
- Limit resources:

  ```yaml
  deploy:
    resources:
      limits:
        cpus: "2.0"
        memory: 2g
  ```

- Add auth or IP whitelisting if exposing beyond localhost.
- Monitor `/healthz` for restarts or resource exhaustion.
- Scale horizontally — each browserd instance can handle its own browser process.
🧾 Summary
✅ Run browserd once → exposes ws://localhost:9223
✅ Connect Puppeteer directly using that endpoint
✅ No /json/version fetches or dynamic IDs
✅ Sandbox stays intact under seccomp profile
✅ Load-balance multiple containers effortlessly
✅ Shared, stable, and production-ready Chrome layer
In short: browserd turns headless Chrome into a simple, stable, load-balanced microservice you can attach to instantly from Puppeteer or any CDP client.
If you ever got tired of fighting Chrome flags, zombie processes, or unstable endpoints — this container is your new friend.
And if you want to skip containers altogether, Peedief.com hosts the same renderer stack for you – no ops, just a clean API.