Every time I publish content I burn over ten hours putting it everywhere else. Articles to Medium, Substack, Dev.to, Hashnode. Posts to LinkedIn personal and company. Threads to Twitter. Comments across Reddit and trade communities. Each platform wants the content slightly different. Each one needs a canonical URL pointing back to the source. Each one is fifteen to forty minutes of copy-paste, format fixing, tag selecting.

Ten plus hours a week of distribution across articles, posts, comments, syndication. Pure copy-paste with no thinking attached. The kind of work that gets pushed to evenings and weekends because there is always something more important to do during business hours.

I just spent a weekend replacing it with a self-hosted open source AI agent called OpenClaw, running on a small Docker-sandboxed VM with ChatGPT Plus as the brain. Starting cost around thirty-two dollars a month. Scales up to forty-four if you push the box harder. Less than the coffee budget either way.

Here is the full build, including the security layer most self-hosted AI agent tutorials hand-wave their way through.

$32+/mo: total monthly cost (VM + LLM)
~4 hrs: setup time end to end
10+ hrs/wk: manual work being replaced

Why self-host instead of using a SaaS

You have three real options when you want an AI agent doing browser-driven work for you on a recurring basis.

For my use case, which is content distribution with no sensitive financial or client data on the path, self-hosting wins on three counts.

Cost. Thirty-two dollars a month versus ninety-nine to two hundred dollars a month for SaaS. Over a year that is roughly eight hundred to two thousand dollars saved on a workload neither side handles dramatically better.

Control. Every cookie, profile, screenshot, and audit log sits on infrastructure I own. If a platform changes its UI tomorrow and breaks the agent, I can patch it inside an hour. With SaaS I would wait for the vendor.

Reusability. The same VM that runs the syndication agent already runs my Reddit scout cron job and will host other automations as I build them. The marginal cost of the next workflow is zero.

Picking the agent

Five options I looked at seriously before settling on one.

OpenClaw won for three reasons. Discord-native control loop, which lined up with how I already work. Self-hostable in Docker out of the box. And it supports the OpenAI Codex provider, which means I can run it on a flat twenty dollar a month ChatGPT Plus subscription instead of paying per token.

The stack

Five pieces, nothing exotic: a DigitalOcean droplet, Docker, OpenClaw itself, a private Discord server as the control surface, and ChatGPT Plus via the Codex provider as the brain.

Total monthly cost lands between thirty-two and forty-four dollars depending on how hard you push the VM. Setup time around three to four hours of focused work, most of it spent on the security layer rather than OpenClaw itself.

Provisioning the droplet

Pick at least the twelve dollar plan. The six dollar plan with 1GB of RAM is dead on arrival once Chromium plus the agent plus plugins fire up.

2GB at $12 works but is tight. Chromium alone wants 500 to 800MB. OpenClaw plus its plugins eat another few hundred. Once a browser session is active per platform, you are running close to the ceiling. Fine for one workflow, fragile under multiple.

If you plan to run multiple browser sessions in parallel, layer in Reddit scout or scheduled cron jobs, or expand to other automations on the same box, the four GB plan at twenty-four dollars a month is the comfortable spot. I started at twelve and will bump as the workload grows.

Ubuntu 24.04 LTS gives the longest support window and the best Docker compatibility. Add your SSH public key during droplet creation, enable IPv6, choose the closest region.
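The same provisioning works from the CLI with doctl if you would rather script it. The size and image slugs below are my best guess, so confirm them with doctl compute size list and doctl compute image list before running:

# create a 2GB droplet with SSH key auth and IPv6 (slugs assumed, verify first)
doctl compute droplet create openclaw-vm \
  --region nyc1 \
  --size s-1vcpu-2gb \
  --image ubuntu-24-04-x64 \
  --ssh-keys "$SSH_KEY_ID" \
  --enable-ipv6 \
  --wait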

The droplet boots in about ninety seconds. The interesting part is what you do to it before installing anything.

Hardening the VM (the part most tutorials skip)

An autonomous agent that can drive a browser is a fat target for anyone who finds your IP. Default Ubuntu is not hardened for this. Three layers I add before installing anything else.

SSH lockdown

Edit /etc/ssh/sshd_config:
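The exact policy is yours to set, but the key-only baseline I use looks like this:

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
KbdInteractiveAuthentication no
MaxAuthTries 3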

Validate with sudo sshd -t before restarting the service. Test from a second SSH window before closing the first one. If you brick your config you still have a live session to revert from.

Firewall

UFW, default deny inbound, default allow outbound. Only port 22 open inbound. Nothing else needs to be reachable from the internet on this box. The agent reaches out, never the other way around.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw enable

Background defenses

fail2ban for SSH brute-force protection. unattended-upgrades for automatic security patches. Both start automatically after install and ship with sensible defaults.

sudo apt install -y fail2ban unattended-upgrades ufw
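Ubuntu's fail2ban package enables the SSH jail out of the box. If you want it stricter, a minimal /etc/fail2ban/jail.local override works; these values are my own preference, not the defaults:

[sshd]
enabled = true
maxretry = 5
findtime = 10m
bantime = 1h

Restart with sudo systemctl restart fail2ban to pick it up.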

That is the baseline. The combination of key-only SSH, denied-by-default firewall, fail2ban, and auto-patches takes maybe fifteen minutes and removes most of the drive-by attack surface.

Docker install

Use the official Docker repo, not the version Ubuntu ships in apt. The Ubuntu default is older and missing features OpenClaw uses.

Standard install steps from docs.docker.com. Then add yourself to the docker group so you do not need sudo for every command.

sudo usermod -aG docker $USER

Log out, log back in, and verify with docker run hello-world. If you see the welcome message, Docker is set.
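For reference, the repo setup I ran, lifted from docs.docker.com at the time of writing. These steps drift, so verify against the current docs before pasting:

# add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# install the engine and the compose plugin
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin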

Directory structure

Pre-create the directories OpenClaw will mount, with restrictive permissions. Owned by your non-root user, mode 700 so nothing else on the system can read them.

sudo mkdir -p /opt/openclaw/{config,profiles,workspace}
sudo mkdir -p /opt/discord-bot
sudo mkdir -p /var/log/openclaw

sudo chown -R $USER:$USER /opt/openclaw /opt/discord-bot /var/log/openclaw
chmod 700 /opt/openclaw /opt/openclaw/config /opt/openclaw/profiles /opt/openclaw/workspace /var/log/openclaw

Two notes on this layout.

Separate Chrome profiles per platform. Inside /opt/openclaw/profiles there is a sub-directory per platform: medium, linkedin, substack, devto. Each platform gets its own scoped Chrome profile with only that platform logged in. If Medium ever gets compromised, LinkedIn is unaffected because the cookies live in a different profile entirely.
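Creating those scoped profile directories is two more commands, locked down the same way as their parent:

# one Chrome profile directory per platform, readable only by the owner
mkdir -p /opt/openclaw/profiles/{medium,linkedin,substack,devto}
chmod 700 /opt/openclaw/profiles/{medium,linkedin,substack,devto}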

Logs outside the agent dir. /var/log/openclaw is the audit log destination. It is outside the OpenClaw config tree on purpose, so a misbehaving agent cannot rewrite its own history.
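Since the agent cannot write there, rotation is the host's job. A logrotate drop-in at /etc/logrotate.d/openclaw keeps the directory bounded; the *.log glob is my assumption about how the agent names its files:

/var/log/openclaw/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}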

OpenClaw setup

Clone the repo and pull the pre-built Docker image. Building locally works too but takes longer and burns CPU you do not have on the small droplet.

cd /opt/openclaw
git clone https://github.com/openclaw/openclaw.git repo

cat > /opt/openclaw/openclaw-env.sh <<'EOF'
export OPENCLAW_IMAGE="ghcr.io/openclaw/openclaw:latest"
export OPENCLAW_CONFIG_DIR="/opt/openclaw/config"
export OPENCLAW_WORKSPACE_DIR="/opt/openclaw/workspace"
export OPENCLAW_GATEWAY_BIND="lan"
export OPENCLAW_SANDBOX="1"
export OPENCLAW_TZ="UTC"
EOF
chmod 600 /opt/openclaw/openclaw-env.sh

source /opt/openclaw/openclaw-env.sh
cd /opt/openclaw/repo
./scripts/docker/setup.sh

The setup wizard walks through everything. Workspace path, gateway port, gateway bind, auth method, channel selection, DM policy, search provider, skills.

Two settings to get right at this stage.

Loopback Docker port mapping

OpenClaw exposes the Control UI on port 18789. By default Docker maps that port to 0.0.0.0:18789, meaning anyone on the internet who scans port 18789 on your VM gets the gateway. UFW does not always block Docker traffic correctly because Docker manipulates iptables directly.

Fix it at the Docker level by overriding the port mapping to bind only to 127.0.0.1 on the host.

cat > /opt/openclaw/repo/docker-compose.override.yml <<'EOF'
services:
  openclaw-gateway:
    ports: !override
      - "127.0.0.1:18789:18789"
      - "127.0.0.1:18790:18790"
EOF

The !override tag is critical. Without it Docker Compose merges the port arrays and you end up with both bindings active, which causes a port conflict and cryptic startup errors.
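Worth a sanity check after the containers restart: the gateway should be listening on loopback only.

# confirm the host-side binding for the Control UI port
sudo ss -tlnp | grep 18789

If the output shows 127.0.0.1:18789 rather than 0.0.0.0:18789, the binding is correct.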

SSH tunnel for the Control UI

Now the gateway is invisible from the public internet. To access the Control UI from your laptop, set up an SSH tunnel.

ssh -L 18789:127.0.0.1:18789 your-droplet-alias

Then visit http://localhost:18789/ in your laptop browser, paste the gateway token, and you are in. Close the SSH session, the dashboard is gone. No public exposure, no separate auth layer to manage, no port to forget about.
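The your-droplet-alias in that command lives in ~/.ssh/config. Add a LocalForward line to the alias and a plain ssh your-droplet-alias opens the tunnel automatically. Hostname and user below are placeholders:

Host your-droplet-alias
    HostName <droplet-ip>
    User <your-user>
    IdentityFile ~/.ssh/id_ed25519
    LocalForward 18789 127.0.0.1:18789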

Discord setup

OpenClaw is not just a bot, it is an agent that uses Discord as a control loop. The wiring matters as much as the install.

Server. Private, no invite links, two-factor required for moderation. I am the only user.

Channels. Ten channels, scoped per workflow. Channel allowlist enabled in OpenClaw so the bot only acts in these channels and ignores everywhere else.

DMs disabled. The agent does not respond to direct messages from anyone, including me. All commands must come through the private server. One less attack surface for prompt injection or social-engineering attempts.

User ID check. The bot listener verifies the author of every command against my specific Discord user ID before executing. If anyone else somehow ends up in a position to message the bot, their commands get silently dropped.

The LLM auth strategy

The piece most people get wrong on cost. There are three real options for the agent brain, and the math is not what you would expect.

I picked ChatGPT Plus with Codex for cost predictability. Twenty dollars a month is twenty dollars a month no matter how many syndications I run. The pay-per-token paths are cheaper at very low volume but get expensive fast on agentic workloads where every step is a model call. If volume scales 10x I will reconsider, but for now this is the cleaner shape.

What the agent can and cannot access

The hardest part of running an autonomous agent is being honest about the blast radius. Mine runs in a tight box. Here is exactly what is inside the box and what is not.

What the agent can reach

The scoped Chrome profiles under /opt/openclaw/profiles, each logged into exactly one platform. Its own config and workspace under /opt/openclaw. The allowlisted channels on the private Discord server.

What the agent cannot reach

My primary social accounts, which stay manual. Financial or client data, which never touches this box. Credentials beyond the scoped platform logins. The audit logs in /var/log/openclaw, which sit outside its writable tree.

Three rules I follow

  1. Never give it credentials I would not be okay with leaking. If I would not paste a credential into a Slack channel, it does not go in the agent profile.
  2. Two-factor on every account it touches. If a session cookie ever gets exfiltrated, the attacker still hits the 2FA wall.
  3. Use dedicated syndication accounts where it makes sense. My primary LinkedIn stays manual. Syndication LinkedIn is the agent's. Same for any platform where account safety matters more than convenience.

First test

Once the gateway is healthy, the bot is online in Discord, and the SSH tunnel reaches the Control UI, the test loop is short.

From the #commands channel:

/dryrun on
/syndicate self-hosted-ai-agent-vm

Dry-run mode runs the entire syndication workflow except the final publish click. Every step the agent would take gets logged to #dry-runs with screenshots. I read through the Discord channel and verify the agent navigated correctly, generated a sane intro, set the canonical URL, picked the right tags. If anything looks off, I tweak the system prompt and re-run.

Once dry-runs land cleanly, flip to live mode and run the same command again. The agent posts. Screenshots and post URLs hit Discord within a couple minutes.

I have run dry-runs of this exact workflow against the cold-caller article I wrote two weeks ago. Output looked clean. The full live runs across all platforms ship in the next couple of weeks once I have completed Medium and Substack login flows for the agent's profile.

What broke during setup

Honest list of the things that cost me time so you do not lose the same hours.

Sudo password. Forgot it on day one. The DigitalOcean recovery console refused to open with the stock auth flow, and it took an hour to figure out the right path. Lesson: make the password manager entry the moment you set the password.

Docker Compose port merging. Spent an hour on a "address already in use" error. The fix turned out to be the !override tag in the override file. Without it Docker Compose merges port arrays from base and override, both bindings activate, conflict.

Loopback bind too tight. First pass, I bound the gateway to container loopback (127.0.0.1 inside the container). That blocks the Docker port mapping entirely because Docker forwards to the container's 0.0.0.0, not its 127.0.0.1. Switched to LAN bind inside container plus loopback bind on the Docker port mapping. Both layers required for SSH tunnel access.

Anthropic session block. Was originally going to wire Claude, on the Pro plan, into OpenClaw via session auth. Anthropic blocked that path Q1 2026. Switched to OpenAI Codex via ChatGPT Plus, which actually turned out cheaper for this workload.

What is next

Right now: gateway is healthy, bot is online, Control UI is accessible only via SSH tunnel, agent is in dry-run mode by default. Real syndications start this week.

I will ship part two in two weeks with the actual numbers.

If you liked this, my last long-form was on the AI cold-caller I built for HVAC outbound: same business, NeverMiss, very different stack, with a full post-mortem on what worked and what did not.

If you run a home service business and want this kind of automation built into your operations rather than as a hobby project on a $12 droplet, that is what NeverMiss does day to day. Book a call below.