# S3-Compatible Storage

Store backups in any S3-compatible storage provider: MinIO, Wasabi, DigitalOcean Spaces, Backblaze B2, and more.
## Configuration
| Field | Description | Default | Required |
|---|---|---|---|
| Name | Friendly name for this destination | - | ✅ |
| Endpoint | S3-compatible API endpoint URL | - | ✅ |
| Region | Storage region | us-east-1 | ❌ |
| Bucket | Bucket name | - | ✅ |
| Access Key ID | S3 access key | - | ✅ |
| Secret Access Key | S3 secret key | - | ✅ |
| Force Path Style | Use path-style URLs (endpoint/bucket) instead of virtual-hosted | false | ❌ |
| Path Prefix | Folder path within the bucket | - | ❌ |
### Force Path Style

Enable this for providers that don't support virtual-hosted-style URLs (e.g. MinIO, Ceph). When enabled, requests go to `endpoint/bucket/key` instead of `bucket.endpoint/key`.
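The difference between the two addressing styles can be sketched as a small helper (illustrative only — the AWS SDK builds these URLs internally when `forcePathStyle` is set):

```typescript
// Sketch: how path-style vs. virtual-hosted-style object URLs differ.
function objectUrl(
  endpoint: string, // e.g. "http://minio.local:9000"
  bucket: string,
  key: string,
  forcePathStyle: boolean,
): string {
  const u = new URL(endpoint);
  if (forcePathStyle) {
    // Path-style: bucket goes in the path — endpoint/bucket/key
    return `${u.protocol}//${u.host}/${bucket}/${key}`;
  }
  // Virtual-hosted style: bucket becomes a subdomain — bucket.endpoint/key
  return `${u.protocol}//${bucket}.${u.host}/${key}`;
}

console.log(objectUrl("http://minio.local:9000", "backups", "db.dump", true));
// → http://minio.local:9000/backups/db.dump
console.log(objectUrl("https://s3.eu-central-1.wasabisys.com", "backups", "db.dump", false));
// → https://backups.s3.eu-central-1.wasabisys.com/db.dump
```

This is why MinIO (which usually has no wildcard DNS for `bucket.endpoint`) needs path style, while Wasabi, Spaces, and B2 do not.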
## Setup Guide
- Create a bucket in your S3-compatible provider
- Generate access credentials (access key + secret key)
- Go to Destinations → Add Destination → S3-Compatible
- Enter the Endpoint URL, Bucket, Access Key ID, and Secret Access Key
- Enable Force Path Style if required by your provider
- (Optional) Set a Path Prefix for organizing backups
- Click Test to verify the connection
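The form fields above map naturally onto a destination object. A minimal pre-flight validation sketch, mirroring the Required column of the table (the `S3Destination` shape is hypothetical, not DBackup's actual schema):

```typescript
// Hypothetical shape of the destination form above.
interface S3Destination {
  name: string;
  endpoint: string;
  region?: string; // defaults to us-east-1
  bucket: string;
  accessKeyId: string;
  secretAccessKey: string;
  forcePathStyle?: boolean;
  pathPrefix?: string;
}

// Returns a list of problems; an empty list means the config looks usable.
function validate(d: S3Destination): string[] {
  const errors: string[] = [];
  if (!d.name) errors.push("Name is required");
  if (!/^https?:\/\//.test(d.endpoint)) errors.push("Endpoint must include http:// or https://");
  if (!d.bucket) errors.push("Bucket is required");
  if (!d.accessKeyId || !d.secretAccessKey) errors.push("Credentials are required");
  return errors;
}
```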
### MinIO Setup
- Access the MinIO Console (default: `http://your-server:9001`)
- Create a bucket under Buckets → Create Bucket
- Create an access key under Access Keys → Create Access Key
- Use endpoint `http://your-minio-host:9000` with Force Path Style enabled
Common Docker setup:

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio-data:/data

volumes:
  minio-data:
```

### Wasabi Setup
- Create a bucket at console.wasabisys.com
- Create an API access key under Access Keys
- Use the regional endpoint, e.g. `https://s3.eu-central-1.wasabisys.com`
- Force Path Style: off (Wasabi supports virtual-hosted style)
### DigitalOcean Spaces Setup
- Create a Space in DigitalOcean Console
- Generate a Spaces access key under API → Spaces Keys
- Use endpoint `https://<region>.digitaloceanspaces.com` (e.g. `https://fra1.digitaloceanspaces.com`)
- Force Path Style: off
### Backblaze B2 Setup
- Create a bucket at Backblaze Console
- Create an Application Key with read/write access to your bucket
- Use endpoint `https://s3.<region>.backblazeb2.com` (e.g. `https://s3.us-west-002.backblazeb2.com`)
- Force Path Style: off
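The provider sections above can be collected into a single quick-reference lookup (hypothetical helper; the endpoints show the example regions from this page only, not an exhaustive list):

```typescript
// Quick reference for the providers documented above.
const providerDefaults: Record<string, { endpoint: string; forcePathStyle: boolean }> = {
  minio:       { endpoint: "http://your-minio-host:9000", forcePathStyle: true },
  wasabi:      { endpoint: "https://s3.eu-central-1.wasabisys.com", forcePathStyle: false },
  doSpaces:    { endpoint: "https://fra1.digitaloceanspaces.com", forcePathStyle: false },
  backblazeB2: { endpoint: "https://s3.us-west-002.backblazeb2.com", forcePathStyle: false },
};
```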
## How It Works
- Uses the S3-compatible API via the AWS SDK
- Multipart upload for large files
- All credentials are stored AES-256-GCM encrypted in the database
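An AES-256-GCM round trip with Node's built-in `crypto` module looks roughly like this (illustrative sketch only; DBackup's actual key derivation and storage layout are not shown here):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";
import { Buffer } from "node:buffer";

// Encrypts a secret, returning iv ‖ auth tag ‖ ciphertext in one buffer.
function encrypt(plaintext: string, key: Buffer): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function decrypt(blob: Buffer, key: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28); // GCM auth tag is 16 bytes
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates: tampered data throws here
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

The auth tag is what makes GCM authenticated encryption: a flipped bit anywhere in the stored blob causes decryption to fail rather than silently return garbage.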
## Troubleshooting
### Connection Refused

`connect ECONNREFUSED`

Solution: Verify the endpoint URL is correct and reachable from the DBackup server. Include the protocol (http:// or https://) and the port if non-standard.
### SignatureDoesNotMatch

`The request signature we calculated does not match`

Solution: Usually caused by an incorrect Secret Access Key. Re-enter the credentials. Some providers also require a specific Region value.
### NoSuchBucket

`The specified bucket does not exist`

Solution: Create the bucket first in your provider's console. Bucket names must match exactly (they are case-sensitive).
### SSL Certificate Error

`self-signed certificate / UNABLE_TO_VERIFY_LEAF_SIGNATURE`

Solution: For self-signed certificates (e.g. a local MinIO instance), set the NODE_TLS_REJECT_UNAUTHORIZED=0 environment variable. This disables TLS certificate verification for the whole process and is not recommended for production.
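In a Docker deployment, the variable can be set in the compose file (the `dbackup` service name is an assumption for illustration; development use only):

```yaml
services:
  dbackup:
    environment:
      # Dev only: trust a self-signed MinIO certificate by disabling
      # Node's TLS verification for this process.
      - NODE_TLS_REJECT_UNAUTHORIZED=0
```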