Why S3 is the right place for backups
A backup stored next to the thing it’s supposed to save gives a false sense of security. If a server dies, a disk fills up, or someone makes a mistake, local-only backups often disappear along with the incident. S3-style storage solves this structurally: backups live off the server, can be automated, and access can be restricted to “write-only” workflows that reduce the chance of accidental deletion.
3-2-1 without overkill
3-2-1 is simple: three copies, two different media, one copy off-site. For most websites, a practical version looks like this:
- production data on the server,
- a short local retention (1–3 days) for quick rollback,
- a longer retention in S3 (for example: 7 daily + 4 weekly + 6 monthly).
The point isn’t to collect checkboxes. The point is knowing you can restore, quickly, with a clear procedure.
Backing up websites to S3: WordPress, Laravel, static sites
The universal model is files + database.
WordPress
Typically you back up wp-content (plugins/themes/uploads) plus a database dump.
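For example (paths are placeholders, assuming a typical install under /var/www/example), the file side can be a single archive; the database dump is covered in the MySQL section below:

```bash
# Archive plugins, themes and uploads; WordPress core can always be
# re-downloaded, so wp-content is the part that matters.
tar -czf /backup/wp-content-$(date +%F).tar.gz -C /var/www/example wp-content
```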
Laravel
Focus on what matters: user uploads and the relevant storage content (not caches that can be rebuilt), plus your database.
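A minimal sketch, assuming the app lives in /var/www/app and uploads sit under storage/app (both placeholder paths); whether to include .env depends on how you handle secrets:

```bash
# Back up user files and the env file; caches under storage/framework
# can be rebuilt, so they are intentionally left out.
tar -czf /backup/laravel-$(date +%F).tar.gz -C /var/www/app storage/app .env
```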
Static sites
Static sites are the easiest: archive the directory (or just back up the build artifacts if the source lives in Git) and upload to S3.
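Assuming the built site lives in /var/www/site (a placeholder path), one archive is enough; the upload itself is covered in the rclone section below:

```bash
tar -czf /backup/site-$(date +%F).tar.gz -C /var/www site
```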
Database backups: MySQL/PostgreSQL + encryption + retention
Databases deserve stricter rules: consistent dumps, encryption, and controlled retention.
MySQL/MariaDB:
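A sketch, assuming credentials in ~/.my.cnf and a database named appdb (both placeholders):

```bash
# --single-transaction gives a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump.
mysqldump --single-transaction --routines --triggers appdb \
  | gzip > /backup/appdb-$(date +%F).sql.gz
```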
PostgreSQL:
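The same idea for Postgres, again with appdb as a placeholder database name:

```bash
# Custom format (-Fc) is compressed and lets pg_restore cherry-pick
# individual tables on restore.
pg_dump -Fc appdb > /backup/appdb-$(date +%F).dump
```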
Encryption (GPG symmetric):
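A sketch using a passphrase stored in a root-only file (paths and filenames are placeholders):

```bash
# Encrypt the dump before it leaves the server. Keep a copy of the
# passphrase somewhere safe, or the backups become unreadable.
gpg --batch --yes --symmetric --cipher-algo AES256 \
    --pinentry-mode loopback \
    --passphrase-file /root/.backup-passphrase \
    -o /backup/appdb-$(date +%F).sql.gz.gpg \
       /backup/appdb-$(date +%F).sql.gz
```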
Retention works best in two layers: short local retention for fast restores, and longer S3 retention via lifecycle rules.
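The local half can be a one-line cron job (the 3-day window and the /backup path are examples); the S3 half is better handled by a lifecycle rule like the one sketched at the end of this article:

```bash
# Delete local archives older than 3 days; S3 keeps the long history.
find /backup -type f \( -name '*.gz' -o -name '*.gpg' -o -name '*.dump' \) -mtime +3 -delete
```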
rclone + S3: sync and “cloud as a drive”
rclone makes S3 feel operationally convenient. For backup history, prefer copy over sync: sync mirrors the source, so it deletes destination files that no longer exist locally, while copy only adds and updates.
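A sketch, assuming an rclone remote named s3 and a bucket called my-backups (both placeholders):

```bash
# copy adds and updates files but never deletes on the remote,
# which is exactly what a backup history needs.
rclone copy /backup s3:my-backups/$(hostname) --transfers 4 --progress
```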
If you want S3 to behave like a drive:
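rclone can mount a bucket via FUSE; a read-only mount is a convenient way to browse and restore old archives (remote and mount point are placeholders):

```bash
rclone mount s3:my-backups /mnt/s3-backups --read-only --vfs-cache-mode minimal --daemon
```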
Borg/Restic + S3: dedup and fast incrementals
Full archives every day get expensive quickly. Borg/Restic maintain a repository with deduplication and incremental snapshots, so you push only changes. This is faster, cheaper, and usually restores more predictably.
Restic example:
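A sketch with restic’s S3 backend; the endpoint, bucket, paths and retention counts are placeholders (the counts mirror the 7/4/6 scheme above):

```bash
export RESTIC_REPOSITORY="s3:https://s3.eu-central-1.amazonaws.com/my-backups/restic"
export RESTIC_PASSWORD_FILE=/root/.restic-password
export AWS_ACCESS_KEY_ID=...        # credentials for the bucket
export AWS_SECRET_ACCESS_KEY=...

restic init                          # run once to create the repository
restic backup /var/www /backup       # incremental after the first run
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```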
S3 versioning: how to avoid paying for junk
Versioning is helpful, but it can silently multiply your storage usage if files get overwritten frequently: every overwrite leaves another noncurrent version behind, and you pay for all of them.
To keep it cost-safe:
- enable versioning only where you truly benefit,
- set lifecycle policies for noncurrent versions (e.g., keep 14–30 days, then delete; a sketch follows this list),
- clean up incomplete multipart uploads and delete markers,
- separate prefixes/buckets so rules remain simple and predictable.
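As referenced above, here is a sketch of such a rule with the AWS CLI (bucket name and retention windows are placeholders; S3-compatible providers usually accept the same call via --endpoint-url):

```bash
aws s3api put-bucket-lifecycle-configuration --bucket my-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
      "Expiration": {"ExpiredObjectDeleteMarker": true}
    }]
  }'
```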