CLI + Go library for DigitalOcean

Manage your DigitalOcean infrastructure.

Full-featured CLI and importable Go library built on godo. Provision Droplets, manage DNS, VPCs, firewalls, snapshots — no doctl, no subprocess, no state files.

For Red Team ops: campaign orchestration, IP rotation, firewall presets, cloud-init templates for Havoc, Sliver, nginx redirectors, WireGuard.


Overview

do-manager is built directly on godo, the official DigitalOcean Go client. Two interfaces, one codebase: a CLI you run at the terminal and a library you import into any Go program.

It covers the full DO surface: Droplets, DNS, SSH keys, firewalls, VPCs, reserved IPs, snapshots. On top of that it adds higher-level ops: campaign orchestration (deploy an entire stack in one command, rotate IPs when they get burned, rebuild a node, destroy everything cleanly), opinionated firewall presets, a security audit scanner, and cloud-init templates for common tools — Havoc, Sliver, GoPhish, evilginx2, nginx redirectors, WireGuard.

Use it as a CLI for day-to-day infra work. Use it as a library when you want to provision Droplets or manage firewalls from Go code without shelling out to doctl. Use it for Red Team campaigns where infrastructure lifecycle matters.

Infrastructure

$ droplet create --name web-01 --wait
$ dns add --type A --name @ --data 1.2.3.4
$ vpc create --name lab-net --region fra1
$ firewall preset bastion --operator-ip x.x.x.x
$ snapshot create --droplet 123 --wait

Droplets, DNS, SSH keys, VPCs, firewalls, reserved IPs, snapshots. All pipeable with -o json.

Red Team

$ campaign deploy --role "c2:preset=c2:template=c2-havoc"
$ campaign deploy --role "redir:preset=redirector:reserve-ips"
$ campaign rotate-ips --name op-ghost
$ droplet rebuild c2-01 --image c2-snap --wait
$ audit

Campaign lifecycle, IP rotation, firewall presets, cloud-init templates, security audit.

Go Library

c, _ := client.New(os.Getenv("DO_TOKEN"))
svc := droplet.New(c)
svc.CreateBatch(ctx, opts, 5)
firewall.ParseRuleSpec("tcp:443:any")
tmpl.Render("redirector-nginx", vars)

Import pkg/droplet, pkg/firewall, pkg/vpc, pkg/tmpl. Typed API, no binary dependency.

Why do-manager?

Three approaches are common for managing DO infra from a Red Team perspective. Here is where each one breaks down and where do-manager fills the gap.

                         | doctl           | bash + curl API    | Terraform / Pulumi           | do-manager
-------------------------|-----------------|--------------------|------------------------------|------------------------------------
Type                     | Official DO CLI | Ad-hoc scripts     | Infrastructure-as-Code       | CLI + Go library
Campaign orchestration   | No              | Manual             | Partial                      | Yes (deploy/status/destroy/rotate)
Firewall presets         | No              | No                 | No                           | c2, phishing, redirector, bastion, lockdown
IP rotation              | No              | Script it yourself | No                           | campaign rotate-ips
Parallel batch ops       | No              | background & wait  | Partial                      | Yes (goroutines, per-slot errors)
Importable as Go lib     | No              | No                 | No (SDK is a separate package) | import "pkg/droplet"
Security audit           | No              | No                 | No                           | Yes (CRITICAL/HIGH exit 1)
Requires state file      | No              | No                 | Yes (.tfstate)               | No (stateless)
Requires external binary | Yes (doctl)     | curl, jq           | Yes (terraform)              | No
When each tool makes sense

doctl is fine for one-off manual ops at the terminal. It covers every DO resource but has no Red Team abstractions and no campaign concept.

bash + curl works and most Red Teamers start here. It breaks down fast: race conditions in parallel provisioning, fragile ID extraction with grep/jq, no error aggregation, scripts that differ between operators.

Terraform is the right choice for long-lived stable infrastructure. It is overkill for engagements: statefile management is a liability, the feedback loop is slow, and there are no Red Team primitives (no firewall presets, no IP rotation, no node rebuild).

do-manager is the choice when you need to provision a full engagement stack fast, rotate IPs when they get burned, tear everything down cleanly, and optionally embed all of that into a Go automation program.

Installation

Requires Go 1.23+.

terminal
$ go install github.com/franckferman/do-manager@latest
bash
$ git clone https://github.com/franckferman/do-manager.git
$ cd do-manager && go mod tidy
$ make build
$ ./do-manager version
do-manager v0.1.0 (commit=abc1234 built=2026-01-01T...)
bash
# Download a pre-built binary from GitHub Releases (linux/darwin/windows, amd64/arm64)
$ curl -sL https://github.com/franckferman/do-manager/releases/latest/download/\
do-manager_Linux_x86_64.tar.gz | tar -xz
$ sudo mv do-manager /usr/local/bin/
$ do-manager version

Releases are built automatically by GoReleaser on every v* tag push. Checksums are provided in checksums.txt.


Configuration

The API token is resolved in this priority order — first match wins:

Method             | Example
-------------------|--------------------------------------------
--token flag       | do-manager --token dop_v1_xxx droplet list
DO_TOKEN env var   | export DO_TOKEN=dop_v1_xxx
~/.do-manager.yaml | token: dop_v1_xxx
~/.do-manager.yaml
token: dop_v1_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Generate a Personal Access Token at cloud.digitalocean.com/account/api/tokens. Read+Write scope is required for all mutating operations.

CLI Reference

Global flags available on every command:

Flag                | Description
--------------------|---------------------------------------------
--token string      | DigitalOcean API token (overrides DO_TOKEN)
-o, --output string | Output format: table (default) or json

Droplets

List

do-manager droplet list
$ do-manager droplet list
>> 3 Droplet(s)

ID          | Name    | Status | Region | Size        | IPv4         | Tags
------------|---------|--------|--------|-------------|--------------|------
123456789   | web-01  | active | fra1   | s-1vcpu-1gb | 167.99.12.34 | web
123456790   | db-01   | active | fra1   | s-2vcpu-4gb | 167.99.12.35 | db
123456791   | bastion | off    | nyc1   | s-1vcpu-1gb | 138.68.0.1   |

Aliases: d ls, droplets list

Get

do-manager droplet get <id>
$ do-manager droplet get 123456789
>> Droplet web-01
   ID:      123456789
   Status:  active
   Region:  Frankfurt 1 (fra1)
   Size:    s-1vcpu-1gb
   Specs:   1 vCPU / 1024 MB RAM / 25 GB disk
   IPv4:    167.99.12.34
   Tags:    web
   Created: 2024-01-15T10:30:00Z

Create

single droplet + --wait
$ do-manager droplet create \
    --name web-02 --region fra1 --size s-2vcpu-4gb \
    --image ubuntu-22-04-x64 --ipv6 --wait
>> Creating web-02 [region=fra1 size=s-2vcpu-4gb]
Droplet created ID=987654321 status=new
~ Waiting for active state........
Active IPv4=167.99.99.1
batch: 5 Droplets in parallel
$ do-manager droplet create \
    --name lab --count 5 --region fra1 \
    --tags lab,pentest --wait
>> Creating 5 Droplets in parallel [base=lab region=fra1]
lab-01 ID=111111111 status=new
lab-02 ID=222222222 status=new
lab-03 ID=333333333 status=new
lab-04 ID=444444444 status=new
lab-05 ID=555555555 status=new
~ Waiting for 5 Droplet(s) to become active............
ID=111111111 IPv4=10.131.0.1
ID=222222222 IPv4=10.131.0.2
ID=333333333 IPv4=10.131.0.3
ID=444444444 IPv4=10.131.0.4
ID=555555555 IPv4=10.131.0.5

Create flags

Flag        | Default          | Description
------------|------------------|------------------------------------------------
-n, --name  | required         | Droplet name (base name for batch)
-c, --count | 1                | Number of Droplets to provision in parallel
-r, --region | nyc1            | Region slug
-s, --size  | s-1vcpu-1gb      | Size slug
-i, --image | ubuntu-22-04-x64 | Image slug
--ssh-keys  |                  | SSH key IDs (comma-separated)
--tags      |                  | Tags (comma-separated)
--user-data |                  | Cloud-init script content
--ipv6      | false            | Enable IPv6
--backups   | false            | Enable automatic backups
-w, --wait  | false            | Poll until active, then print IP(s)

Delete

do-manager droplet delete
# Single
$ do-manager droplet delete 123456789

# Multiple IDs in parallel
$ do-manager droplet delete 111 222 333 444 555

# All Droplets with a tag
$ do-manager droplet delete --tag lab

# Skip confirmation
$ do-manager droplet delete --tag lab --force

Power actions

do-manager droplet power
$ do-manager droplet power on 123456789
$ do-manager droplet power off 123456789
$ do-manager droplet power reboot 123456789

SSH Keys

do-manager ssh-key
# List all keys
$ do-manager ssh-key list

# Add from file
$ do-manager ssh-key add --name homelab --file ~/.ssh/id_ed25519.pub

# Add from string
$ do-manager ssh-key add --name ci-key --public-key "ssh-ed25519 AAAA..."

# Get by fingerprint
$ do-manager ssh-key get "xx:xx:xx:..."

# Delete
$ do-manager ssh-key delete "xx:xx:xx:..."

Regions

do-manager region list
$ do-manager region list
>> 14 region(s)

Slug  | Name               | Available
------|--------------------|-----------
nyc1  | New York 1         | yes
ams3  | Amsterdam 3        | yes
fra1  | Frankfurt 1        | yes
sgp1  | Singapore 1        | yes
...

Sizes

do-manager size list
$ do-manager size list
>> 72 size(s)

Slug            | vCPUs | Memory (MB) | Disk (GB) | Price/mo
----------------|-------|-------------|-----------|----------
s-1vcpu-1gb     | 1     | 1024        | 25        | $6.00
s-1vcpu-2gb     | 1     | 2048        | 50        | $12.00
s-2vcpu-4gb     | 2     | 4096        | 80        | $24.00
...

Images

do-manager image list
# Distribution images (default)
$ do-manager image list

# Application images
$ do-manager image list --type application

# All public images
$ do-manager image list --type ""

JSON Output

Every command supports -o json (or --output json). The flag is global — attach it anywhere in the command line.

-o json | jq
$ do-manager droplet list -o json | jq '.[].name'
"web-01"
"db-01"
"bastion"
droplet get -o json
$ do-manager droplet get 123456789 -o json
{
  "id": 123456789,
  "name": "web-01",
  "status": "active",
  "networks": { ... },
  ...
}
ssh-key list -o json
$ do-manager ssh-key list -o json | jq '.[].fingerprint'
"ab:cd:ef:..."
"12:34:56:..."
Progress dots (when using --wait) are redirected to stderr in JSON mode, keeping stdout clean and parseable.

Shell Completion

do-manager completion <shell> generates an autocompletion script. Supported shells: bash, zsh, fish, powershell.

~/.zshrc
# Add to ~/.zshrc
$ source <(do-manager completion zsh)
~/.bashrc
# Add to ~/.bashrc
$ source <(do-manager completion bash)
fish
$ do-manager completion fish | source
PowerShell
PS> do-manager completion powershell | Out-String | Invoke-Expression

Version

do-manager version
$ do-manager version
do-manager v0.2.0 (commit=a3f9c12 built=2026-01-01T12:00:00Z)

$ do-manager version -o json
{
  "version": "v0.2.0",
  "commit": "a3f9c12",
  "build_date": "2026-01-01T12:00:00Z"
}

Version, commit, and build date are injected at build time via -ldflags. Release binaries carry the correct Git tag automatically via GoReleaser.


DNS

Manage domains and DNS records directly from the CLI. Supports A, AAAA, CNAME, MX, TXT, NS, SRV, CAA.

Domain commands

terminal
# List / inspect domains
$ do-manager dns list
$ do-manager dns get example.com

# Register a domain, optionally create default A record
$ do-manager dns create --domain example.com --ip 1.2.3.4
$ do-manager dns delete example.com

Record commands

terminal
# List records for a domain
$ do-manager dns records example.com

# Add an A record (root)
$ do-manager dns add --domain example.com --type A --name @ --data 1.2.3.4

# Add a CNAME
$ do-manager dns add --domain example.com --type CNAME --name mail --data @

# Add SPF TXT record
$ do-manager dns add --domain example.com --type TXT --name @ --data "v=spf1 mx -all"

# Delete a record by ID
$ do-manager dns rm-record example.com 123456

Firewalls

Create and manage Cloud Firewalls. Rules use the format proto:ports:addresses.

terminal
$ do-manager firewall list

$ do-manager firewall create --name my-fw \
    --inbound "tcp:443:any" --inbound "tcp:22:203.0.113.1" \
    --outbound "tcp:0-65535:any" \
    --droplets 12345678

$ do-manager firewall attach <id> --droplets 12345678,87654321
$ do-manager firewall delete <id>
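The proto:ports:addresses format is compact enough to illustrate with a short Go sketch. This is not the parser from pkg/firewall (the CLI exposes it as ParseRuleSpec); `parseRuleSpec`, the `Rule` struct, and the expansion of "any" to both wildcard CIDRs are assumptions made for this example:

```go
package main

import (
	"fmt"
	"strings"
)

// Rule illustrates one parsed proto:ports:addresses spec,
// e.g. "tcp:443:any", "tcp:0-65535:any", "tcp:22:203.0.113.1".
type Rule struct {
	Proto     string   // tcp, udp, icmp
	Ports     string   // single port or range like "0-65535"
	Addresses []string // CIDRs / IPs the rule applies to
}

func parseRuleSpec(spec string) (Rule, error) {
	parts := strings.SplitN(spec, ":", 3)
	if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" {
		return Rule{}, fmt.Errorf("want proto:ports:addresses, got %q", spec)
	}
	r := Rule{Proto: parts[0], Ports: parts[1]}
	for _, a := range strings.Split(parts[2], ",") {
		if a == "any" {
			// Assumption: "any" stands for the IPv4 and IPv6 wildcards.
			r.Addresses = append(r.Addresses, "0.0.0.0/0", "::/0")
		} else {
			r.Addresses = append(r.Addresses, a)
		}
	}
	return r, nil
}

func main() {
	r, _ := parseRuleSpec("tcp:22:203.0.113.1")
	fmt.Printf("%+v\n", r)
}
```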

Firewall Presets

Apply a named opinionated ruleset for common Red Team roles. Use --operator-ip to restrict SSH to your IP only.

Profile    | Inbound                  | Outbound | Use case
-----------|--------------------------|----------|----------------------------------
c2         | 443, 80, 53, SSH(op-ip)  | all      | Command & Control server
phishing   | 443, 80, 8080, SSH(op-ip)| all      | GoPhish / evilginx2
redirector | 443, 80, SSH(op-ip)      | 443, 80  | Traffic redirector (socat/nginx)
bastion    | SSH(op-ip) only          | all      | Jump host
lockdown   | SSH(op-ip) only          | none     | Fully locked node
terminal
# Apply the c2 preset - SSH restricted to your IP only
$ do-manager firewall preset c2 \
    --name c2-fw \
    --droplets 12345678 \
    --operator-ip 203.0.113.1
✓ Firewall preset c2 applied
  ID: abc-123
  Rules: 5 inbound 3 outbound

Reserved IPs

Reserved IPs are static addresses that persist even when a Droplet is deleted or rebuilt. Use them for DNS stability - point your domain to the reserved IP, swap the underlying Droplet freely.

terminal
$ do-manager reservedip list

# Reserve an IP in fra1, assign to droplet immediately
$ do-manager reservedip reserve --region fra1 --droplet 12345678

# Move IP to a different droplet
$ do-manager reservedip unassign 1.2.3.4
$ do-manager reservedip assign 1.2.3.4 --droplet 87654321

# Release IP back to pool
$ do-manager reservedip delete 1.2.3.4

Snapshots

Create, list, and delete Droplet snapshots. Snapshots are the foundation for repeatable campaign deployments. The workflow: provision one Droplet, install and configure your tooling (GoPhish, Havoc C2, evilginx2, nginx redirector config, WireGuard), take a snapshot, delete the source Droplet. On engagement day, campaign deploy --snapshot <slug> restores that exact state across N nodes in parallel. No reinstalling, no drift between nodes.

terminal
$ do-manager snapshot list
$ do-manager snapshot get <id>

# Snapshot a running droplet, wait until done
$ do-manager snapshot create --droplet 12345678 --name gophish-v2 --wait
✓ Snapshot gophish-v2 created
  Deploy: do-manager campaign deploy --snapshot gophish-v2 ...

$ do-manager snapshot delete <id>

VPC

A VPC (Virtual Private Cloud) is an isolated Layer-2 private network scoped to a region. Every Droplet inside gets a private IP in addition to its public one. Traffic between VPC members stays on the DO fabric - it never leaves the datacenter.

What VPC is and is not. VPC gives you private network reachability between Droplets in the same region. It is not the channel you use to proxy C2 traffic - that is handled at the application layer (SSH tunnel, WireGuard, socat, nginx). VPC is the network isolation layer: your C2 node has no public firewall rule for the C2 port, so the only path to it is via the private IP, which only nodes inside the same VPC can reach.
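On the redirector side, the forwarding described above is ordinary application-layer configuration. A hypothetical nginx server block illustrating the pattern (all names, paths, the URI filter, and the 10.20.0.2 backend address are placeholders):

```nginx
# /etc/nginx/sites-available/redirector (illustrative)
server {
    listen 443 ssl;
    server_name cdn.example.com;
    # TLS certificate paths are placeholders (certbot or self-signed)
    ssl_certificate     /etc/ssl/redirector/fullchain.pem;
    ssl_certificate_key /etc/ssl/redirector/privkey.pem;

    # Only the C2 URI path is forwarded over the private VPC IP
    location /news/feed {
        proxy_pass https://10.20.0.2:443;
        proxy_ssl_verify off;
    }

    # Everything else gets a decoy redirect
    location / {
        return 301 https://www.example.com;
    }
}
```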

Redirector-to-C2 connection options

Three common approaches to route traffic from redirector to C2. All are compatible with VPC:

Method             | How it works | Tradeoffs
-------------------|--------------|----------
DO VPC private IP  | Both nodes in same region/VPC. nginx/socat on redirector forwards to 10.x.x.x:port. No public port on C2. | DO-only. Requires same region. Zero extra setup once nodes are up.
WireGuard tunnel   | Peer-to-peer VPN between redirector and C2. C2 listens only on the wg0 interface. | Works cross-provider, cross-region. Extra setup in user-data/cloud-init. Stronger opsec: no public C2 port exposed at all.
SSH reverse tunnel | C2 initiates: ssh -R 8080:localhost:8080 user@redirector. Redirector forwards inbound to the tunnel. | Simple. C2 needs no inbound port. Tunnel drops if SSH dies - use autossh/systemd.
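For the SSH reverse tunnel row, the "use autossh/systemd" note can be sketched as a unit on the C2 host. Unit name, user, host, and ports are hypothetical:

```ini
# /etc/systemd/system/c2-reverse-tunnel.service (illustrative)
[Unit]
Description=Persistent SSH reverse tunnel to redirector
After=network-online.target

[Service]
# -N: no remote command; -R: expose local 8080 on the redirector side
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 8080:localhost:8080 tunnel@redirector.example.com
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```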

VPC architecture

network diagram (VPC + nginx proxy)
                    Internet
                       |
                 Agent beacon
                 HTTPS :443
                       |
          [Redirector] 1.2.3.4
          public firewall: 80,443 open
          nginx proxy_pass http://10.20.0.2:443
                       |
          DO VPC 10.20.0.0/24
          (private - stays inside datacenter fabric)
                       |
          [C2 Server] 10.20.0.2
          no public port for C2 protocol
          public firewall: SSH only from operator IP
          Havoc / Cobalt Strike / Sliver listening on all interfaces

Commands

terminal
# Create a VPC (--ip-range optional, DO assigns /20 automatically if omitted)
$ do-manager vpc create --name c2-net --region fra1 --ip-range 10.20.0.0/24
✓ VPC created
  ID: abc-uuid-123
  IP Range: 10.20.0.0/24

# Deploy all campaign droplets inside the VPC
$ do-manager campaign deploy --name op-ghost \
    --vpc-uuid abc-uuid-123 \
    --count 3 --region fra1 --snapshot c2-snapshot

# Or let campaign create the VPC automatically
$ do-manager campaign deploy --name op-ghost --vpc-auto ...

# List all members (droplets, load balancers...)
$ do-manager vpc members abc-uuid-123

$ do-manager vpc list
$ do-manager vpc delete abc-uuid-123

Campaign

Deploy and destroy a complete infrastructure campaign with a single command. A campaign groups Droplets, a Firewall, and Reserved IPs under a tag (campaign:<name>). All resources are created in the correct order and torn down atomically.

The campaign abstraction exists because Red Team infra has a lifecycle: spin up fast before the engagement, rotate IPs when they get burned, rebuild a single compromised node without touching the rest, tear everything down cleanly at the end. Doing this with doctl one command at a time is possible but tedious and error-prone under pressure. campaign deploy and campaign destroy are atomic - either the whole stack is up, or you know exactly which slot failed.

What a campaign creates: N Droplets in parallel + firewall with your inbound/outbound rules (or a preset) + optional reserved IPs per Droplet. All tagged campaign:<name> so status and destroy can find every resource by tag. Multi-role campaigns additionally create per-role firewalls and per-role Droplet tags (role:c2, role:redirector).

Full workflow

Red Team campaign lifecycle
# 1. Create isolated network
$ do-manager vpc create --name phish-net --region fra1

# 2. Deploy the campaign
$ do-manager campaign deploy \
    --name phish-q2 \
    --count 3 \
    --region fra1 \
    --snapshot gophish-v2 \
    --vpc-uuid <vpc-id> \
    --user-data-file ./setup_gophish.sh \
    --inbound "tcp:443:any" --inbound "tcp:80:any" \
    --inbound "tcp:22:203.0.113.1" \
    --outbound "tcp:0-65535:any" \
    --reserve-ips
✓ Campaign phish-q2 deployed in 1m52s
  Droplets: 3
  IPs: 167.99.10.1 167.99.10.2 167.99.10.3
  Tag: campaign:phish-q2

# 3. Check status at any time
$ do-manager campaign status --name phish-q2

# 4. IP burned / blacklisted - rotate without downtime
$ do-manager campaign rotate-ips --name phish-q2
  phish-q2-01 167.99.10.1 -> 45.33.1.10
  phish-q2-02 167.99.10.2 -> 45.33.1.11
  phish-q2-03 167.99.10.3 -> 45.33.1.12

# 5. Teardown everything
$ do-manager campaign destroy --name phish-q2 --force
✓ Campaign phish-q2 destroyed
  Droplets: 3
  Reserved IPs: 3
  Firewall: phish-q2-fw

Key flags for campaign deploy

Flag              | Description
------------------|--------------------------------------------------------------
--name (required) | Campaign name - used as tag prefix and resource names
--count           | Number of Droplets to provision in parallel (default 1)
--snapshot        | Image slug or snapshot slug/ID to deploy from
--vpc-uuid        | Place all Droplets inside this VPC
--inbound         | Firewall inbound rule: proto:ports:addresses (repeatable)
--outbound        | Firewall outbound rule (repeatable)
--reserve-ips     | Reserve and assign a static IP to each Droplet
--user-data-file  | Bash / cloud-init script run at first boot
--ssh-keys        | SSH key IDs to embed (comma-separated)
--role            | Multi-role spec: name:count=N:snapshot=slug:preset=c2:reserve-ips (repeatable, replaces simple flags)
--operator-ip     | Restrict SSH in preset firewall rules to this IP only
--vpc-auto        | Auto-create a VPC and place all Droplets inside it

Multi-role campaign

Deploy multiple distinct node types in a single command using --role. Each role gets its own Droplets, its own firewall (preset or custom), and optional reserved IPs. All roles share the campaign tag for unified status / destroy.

Role spec format: name:count=N:snapshot=slug:size=s-1vcpu-1gb:preset=c2:reserve-ips
Every field except name is optional and falls back to campaign-level flags. reserve-ips is a boolean token (no value needed).
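The role spec grammar above can be sketched as a small Go parser. This is an illustration of the format, not do-manager's actual code; `parseRoleSpec` and the `RoleSpec` struct are invented for this example:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// RoleSpec illustrates one parsed --role value, e.g.
// "redirector:count=3:snapshot=nginx-redir-v1:preset=redirector:reserve-ips".
type RoleSpec struct {
	Name       string
	Count      int               // count=N, default 1
	Options    map[string]string // snapshot=, image=, size=, preset=, template=
	ReserveIPs bool              // boolean token, no value
}

func parseRoleSpec(spec string) (RoleSpec, error) {
	tokens := strings.Split(spec, ":")
	if tokens[0] == "" {
		return RoleSpec{}, fmt.Errorf("role name is required in %q", spec)
	}
	rs := RoleSpec{Name: tokens[0], Count: 1, Options: map[string]string{}}
	for _, tok := range tokens[1:] {
		switch {
		case tok == "reserve-ips":
			rs.ReserveIPs = true
		case strings.Contains(tok, "="):
			kv := strings.SplitN(tok, "=", 2)
			if kv[0] == "count" {
				n, err := strconv.Atoi(kv[1])
				if err != nil {
					return RoleSpec{}, fmt.Errorf("bad count %q", kv[1])
				}
				rs.Count = n
			} else {
				rs.Options[kv[0]] = kv[1]
			}
		default:
			return RoleSpec{}, fmt.Errorf("unknown token %q", tok)
		}
	}
	return rs, nil
}

func main() {
	rs, _ := parseRoleSpec("redirector:count=3:preset=redirector:reserve-ips")
	fmt.Printf("%+v\n", rs)
}
```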
Multi-role: C2 + redirectors inside a VPC
# One command: 1 C2 node (no public IP) + 3 redirectors (reserved IPs, public 443/80)
$ do-manager campaign deploy \
    --name op-phantom \
    --region fra1 \
    --operator-ip 203.0.113.5 \
    --vpc-auto \
    --role "c2:count=1:snapshot=havoc-v3:preset=c2" \
    --role "redirector:count=3:snapshot=nginx-redir-v1:preset=redirector:reserve-ips"
✓ VPC created: op-phantom-vpc 10.10.0.0/20
✓ Role c2: 1 Droplet(s) created firewall=op-phantom-c2-fw
✓ Role redirector: 3 Droplet(s) created firewall=op-phantom-redirector-fw ips=3

# The C2 node is only reachable via private VPC IP from the redirectors
# Redirectors forward HTTPS -> C2 via 10.10.x.x, no public C2 port exposed
$ do-manager campaign status --name op-phantom

# Tear down everything in one shot
$ do-manager campaign destroy --name op-phantom --force
Role token         | Description
-------------------|--------------------------------------------------------------
name (first token) | Role label, used in resource names and tags: role:c2, campaign-c2-fw
count=N            | Number of Droplets for this role (default 1)
snapshot=slug      | Image slug or snapshot name (falls back to --snapshot)
image=slug         | Alias for snapshot
size=slug          | Droplet size (falls back to --size)
preset=name        | Firewall preset: c2, phishing, redirector, bastion, lockdown
reserve-ips        | Reserve and assign a static IP per Droplet in this role

Rebuild a burned node

When a single node is compromised, rebuild it without touching the rest of the campaign. The reserved IP stays assigned.

terminal
# Node burned - wipe disk, reinstall from clean snapshot, keep same IP
$ do-manager droplet rebuild phish-q2-02 --image gophish-v2 --wait
~ Rebuilding phish-q2-02 (ID=124) ip=167.99.10.2 image=gophish-v2
~ Waiting for rebuild ..........
✓ Droplet phish-q2-02 rebuilt successfully ip=167.99.10.2

Templates

Built-in cloud-init bash scripts embedded directly in the binary. Each script installs and configures a specific tool (C2, redirector, VPN, phishing) and is ready to run at Droplet first boot via DigitalOcean's cloud-init mechanism.

Variable substitution uses Go's text/template ({{.VarName}} syntax). Every variable has a declared default in the script header — unset variables silently fall back to their defaults. Use --template-var KEY=VALUE to override.
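The substitution mechanism is the standard library's, so it is easy to see in isolation. A self-contained sketch using an invented script fragment (the real embedded templates and their default handling live in pkg/tmpl):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render applies {{.VarName}} substitution the same way Go's
// text/template does for the embedded cloud-init scripts.
func render(script string, vars map[string]string) (string, error) {
	t, err := template.New("demo").Parse(script)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := t.Execute(&out, vars); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Illustrative fragment, not one of the real embedded templates.
	fragment := "proxy_pass https://{{.C2BackendHost}};\nserver_name {{.Domain}};\n"
	out, err := render(fragment, map[string]string{
		"C2BackendHost": "10.20.0.2",
		"Domain":        "cdn.example.com",
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```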

Available templates

Name              | Description | Key variables
------------------|-------------|---------------
c2-havoc          | Havoc C2 framework - build from source, teamserver as systemd service | C2Port C2Host TeamserverPassword
c2-sliver         | Sliver C2 - download latest release, generate operator config, systemd | C2Port OperatorName
redirector-nginx  | nginx HTTPS reverse proxy with URI path filtering. certbot or self-signed TLS. | C2BackendHost Domain C2URIPath
redirector-apache | Apache2 mod_rewrite with User-Agent + URI filtering for C2 profile matching | C2BackendHost C2UserAgent C2URIPath
wireguard-server  | WireGuard VPN server - generates server keypair, configures wg0, enables IP forwarding | WGListenPort WGServerNet
wireguard-client  | WireGuard peer - connects to a WireGuard server, prints client pubkey for server registration | WGServerEndpoint WGServerPublicKey WGClientNet
gophish           | GoPhish phishing framework - latest release, admin panel bound to 127.0.0.1 only | AdminListenPort PhishListenPort
evilginx2         | evilginx2 reverse proxy phishing - latest release, auto-detects public IP if not set | Domain ExternalIP RedirectURL
hardening         | SSH hardening, fail2ban, non-root operator user creation with SSH key injection | SSHPort OperatorUser OperatorPubKey

Commands

template list / show / dump
# List all templates with variables and defaults
$ do-manager template list
>> 9 template(s)

NAME             | DESCRIPTION                   | VARIABLES (DEFAULT)
-----------------|-------------------------------|------------------------------------------
c2-havoc         | Havoc C2 framework ...        | C2Port=443 C2Host TeamserverPassword=...
redirector-nginx | nginx HTTPS reverse proxy ... | C2BackendHost=10.20.0.2 Domain ...
wireguard-server | WireGuard VPN server ...      | WGListenPort=51820 WGServerNet=...
...

# Inspect raw script before deploying
$ do-manager template show redirector-nginx

# Preview rendered output with your variable values
$ do-manager template dump c2-havoc \
    --template-var C2Host=1.2.3.4 \
    --template-var TeamserverPassword=s3cr3t

Deploy a single node with a template

droplet create --template
# Spin up a GoPhish node in one command
$ do-manager droplet create \
    --name phish-01 --region fra1 --image ubuntu-22-04-x64 \
    --template gophish \
    --template-var AdminListenPort=3333 \
    --wait
✓ phish-01 active 167.99.10.1

# Access the admin panel via SSH tunnel (admin is bound to 127.0.0.1 only)
$ ssh -L 3333:127.0.0.1:3333 root@167.99.10.1
# Then open https://127.0.0.1:3333 in your browser

Full campaign with templates per role

campaign deploy with template= in role spec
# 1 Havoc C2 + 3 nginx redirectors, VPC isolated, fresh ubuntu images
$ do-manager campaign deploy \
    --name op-phantom \
    --region fra1 \
    --operator-ip 203.0.113.5 \
    --vpc-auto \
    --role "c2:count=1:image=ubuntu-22-04-x64:preset=c2:template=c2-havoc" \
    --role "redirector:count=3:image=ubuntu-22-04-x64:preset=redirector:reserve-ips:template=redirector-nginx" \
    --template-var C2BackendHost=10.10.0.2 \
    --template-var Domain=cdn.example.com \
    --template-var TeamserverPassword=s3cr3t
✓ VPC created: op-phantom-vpc 10.10.0.0/20
✓ Role c2: 1 Droplet(s) created Havoc installing at first boot
✓ Role redirector: 3 Droplet(s) created nginx installing at first boot ips=3

# --template-var values are shared across all roles
# C2BackendHost is used by redirector-nginx, TeamserverPassword by c2-havoc
WireGuard workflow

WireGuard requires a two-step key exchange. Deploy the C2 with template=wireguard-server first. The cloud-init script prints the server public key in the boot logs (journalctl -u cloud-init). Then deploy the redirector with template=wireguard-client and pass the server public key via --template-var WGServerPublicKey=<key>.

Audit

Scan your account for security misconfigurations and hygiene issues. Exits with code 1 on any CRITICAL or HIGH finding - integrates cleanly into CI/CD pipelines and monitoring scripts.

What it checks

Severity | Check
---------|------------------------------------------------------------
CRITICAL | Droplets with no firewall attached (fully exposed)
HIGH     | Firewall rule exposes port 22/3389/5900 to 0.0.0.0/0
MEDIUM   | Reserved IP not assigned to any Droplet (idle billing)
INFO     | Snapshot older than --max-snapshot-age days (default 90)
terminal
$ do-manager audit
>> Audit results for 5 droplet(s) 3 firewall(s) 8 snap(s) 4 reserved IP(s)

Severity | Resource    | ID                   | Finding
---------|-------------|----------------------|------------------------------------------------
CRITICAL | droplet     | worker-04 (ID=98765) | No firewall attached public_ip=10.1.2.3
HIGH     | firewall    | dev-fw               | Inbound rule exposes port 22 to 0.0.0.0/0
MEDIUM   | reserved_ip | 167.99.10.5          | Not assigned to any Droplet (idle billing)

3 finding(s) total

# Exit code 1 - usable in CI / monitoring scripts
$ do-manager audit --max-snapshot-age 30 -o json | jq '.findings[]'

Library Usage

The pkg/ packages are designed to be imported in other Go projects. Each package is stateless and accepts a context.Context on every call.

your-project/main.go
package main

import (
	"context"
	"os"

	"github.com/franckferman/do-manager/pkg/client"
	"github.com/franckferman/do-manager/pkg/droplet"
	"github.com/franckferman/do-manager/pkg/region"
	"github.com/franckferman/do-manager/pkg/sshkey"
	"github.com/franckferman/do-manager/pkg/tmpl"
	// Also importable: pkg/vpc, pkg/dns, pkg/firewall, pkg/reservedip, pkg/snapshot
)

func main() {
	// Build an authenticated godo client
	c, _ := client.New(os.Getenv("DO_TOKEN"))
	ctx := context.Background()

	// Droplets
	svc := droplet.New(c)
	list, _ := svc.List(ctx)
	d, _ := svc.Create(ctx, droplet.CreateOptions{
		Name:   "worker-01",
		Region: "fra1",
		Size:   "s-1vcpu-1gb",
		Image:  "ubuntu-22-04-x64",
	})

	// Batch: 5 in parallel, names become worker-01..worker-05
	results := svc.CreateBatch(ctx, droplet.CreateOptions{Name: "worker"}, 5)

	// SSH keys
	keys, _ := sshkey.New(c).List(ctx)

	// Regions / sizes / images
	reg := region.New(c)
	regions, _ := reg.ListRegions(ctx)
	sizes, _ := reg.ListSizes(ctx)
	images, _ := reg.ListImages(ctx, "distribution")

	// Templates
	metas, _ := tmpl.List()
	script, _ := tmpl.Render("redirector-nginx", map[string]string{
		"C2BackendHost": "10.20.0.2",
		"Domain":        "cdn.example.com",
	})

	_ = list; _ = d; _ = results; _ = keys
	_ = regions; _ = sizes; _ = images; _ = metas; _ = script
}