Backup Jobs
Backup jobs are the core of DBackup. They connect a database source to a storage destination and define when and how backups should run.
Overview
A job defines:
- What to back up (source database)
- Where to store it (one or more destinations)
- When to run (schedule)
- How to process (compression, encryption)
- How long to keep (retention per destination)
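The five parts of a job definition above could be sketched as a plain data structure. The field names here are illustrative only, not DBackup's actual schema:

```python
# Hypothetical sketch of a job definition; field names and values are
# illustrative, not DBackup's real configuration schema.
job = {
    "name": "Daily MySQL Backup",
    "source": "mysql-prod",                  # what to back up
    "destinations": [                        # where to store it
        {"adapter": "local", "retention": {"type": "simple", "keep": 30}},
        {"adapter": "s3", "retention": {"type": "smart", "monthly": 12}},
    ],
    "schedule": "0 2 * * *",                 # when to run (cron)
    "compression": "gzip",                   # how to process
    "encryption_profile": "default",
    "enabled": True,
}
```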
Creating a Job
- Navigate to Jobs in the sidebar
- Click Create Job
- Configure the job settings
- Save
Basic Settings
| Setting | Description |
|---|---|
| Name | Descriptive name (e.g., "Daily MySQL Backup") |
| Source | Database connection to back up |
| Destinations | One or more storage locations for backups (see Multi-Destination below) |
| Enabled | Toggle job on/off |
Compression
Reduce backup size significantly:
| Algorithm | Speed | Compression | Best For |
|---|---|---|---|
| None | Fastest | 0% | Quick backups, already compressed |
| Gzip | Fast | 60-70% | General use |
| Brotli | Slower | 70-80% | Maximum compression |
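A quick way to see why the table's ratios are realistic for database dumps, which are highly repetitive, is to compress a synthetic dump with Python's standard gzip module (actual savings depend on your data):

```python
import gzip

# Compare raw vs. gzip size on a repetitive payload resembling a SQL dump.
dump = b"INSERT INTO users VALUES (1, 'alice', 'alice@example.com');\n" * 1000
compressed = gzip.compress(dump, compresslevel=6)
ratio = 1 - len(compressed) / len(dump)
print(f"raw={len(dump)}B gzip={len(compressed)}B saved={ratio:.0%}")
```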
Encryption
Protect sensitive data:
- Create an Encryption Profile first
- Select the profile in job settings
- Backups are encrypted with AES-256-GCM
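What AES-256-GCM gives you, in rough terms, is confidentiality plus an authentication tag that detects tampering. A minimal sketch using the third-party cryptography package (DBackup manages keys through Encryption Profiles, so the key handling here just stands in for that machinery):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch only: DBackup's Encryption Profiles handle key storage; this
# generates a throwaway key to show the AES-256-GCM round trip.
key = AESGCM.generate_key(bit_length=256)   # 32-byte key
nonce = os.urandom(12)                      # must be unique per encryption
plaintext = b"-- SQL dump contents --"
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # tag appended
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext
```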
Schedule
Automate backups with cron expressions. See Scheduling.
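Assuming the scheduler accepts standard five-field cron syntax, a tiny parser shows how an expression like "0 2 * * *" (02:00 every day) is interpreted. This is a sketch of cron semantics, not DBackup's scheduler:

```python
# Minimal expansion of one field of a standard 5-field cron expression
# ('*', '*/n', 'a-b', or a comma list) into the set of matching values.
def expand(field: str, lo: int, hi: int) -> set[int]:
    values: set[int] = set()
    for part in field.split(","):
        step = 1
        if "/" in part:
            part, step_s = part.split("/")
            step = int(step_s)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return values

minute, hour, dom, month, dow = "0 2 * * *".split()
print(expand(minute, 0, 59), expand(hour, 0, 23))  # {0} {2}
```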
Retention
Automatically clean up old backups. Retention is configured per destination — each destination can have its own retention policy. See Retention Policies.
Notifications
Get alerts when backups complete:
- Create a Notification first
- Select notification in job settings
- Choose trigger: Success, Failure, or Both
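The trigger setting is essentially a filter on execution status. A sketch of that matching logic (names are illustrative, and treating Partial as a failure-worthy outcome is an assumption, not documented behavior):

```python
# Hypothetical trigger matching: decide whether a configured trigger
# fires for a given execution status. "Partial" counting as a failure
# for notification purposes is an assumption.
def should_notify(status: str, trigger: str) -> bool:
    if trigger == "Both":
        return status in ("Success", "Failure", "Partial")
    if trigger == "Failure":
        return status in ("Failure", "Partial")
    return status == trigger  # trigger == "Success"
```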
Multi-Destination
A job can upload to multiple storage destinations simultaneously — ideal for implementing the 3-2-1 backup rule.
Adding Destinations
- In the job form, click Add Destination
- Select a storage adapter from the dropdown
- Repeat to add more destinations
- Drag to reorder upload priority
Per-Destination Retention
Each destination has its own retention configuration:
- Expand a destination row to reveal the retention settings
- Choose None, Simple, or Smart (GFS) independently per destination
- Example: keep 30 daily backups locally, but only 12 monthly in S3
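The Simple policy boils down to "keep the N newest backups per destination"; Smart (GFS) layers daily/weekly/monthly tiers on the same idea. A minimal sketch of the Simple case (function name is illustrative):

```python
from datetime import datetime

# Sketch of a Simple retention policy applied per destination:
# everything outside the newest `keep` backups is expired.
def expired(backups: list[datetime], keep: int) -> list[datetime]:
    return sorted(backups, reverse=True)[keep:]

stamps = [datetime(2024, 1, d) for d in range(1, 11)]  # 10 daily backups
assert expired(stamps, keep=7) == [datetime(2024, 1, d) for d in (3, 2, 1)]
```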
Upload Behavior
- The database dump runs once — the resulting file is uploaded to each destination sequentially
- Destinations are processed in priority order (top to bottom)
- If one destination fails, the others still continue
- The same storage adapter cannot be selected twice in one job
Partial Success
If some destinations succeed and others fail, the execution is marked as Partial (see Job Status).
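The fan-out and status rules above can be sketched as a loop that isolates failures and then derives the overall result. Class and function names here are illustrative, not DBackup internals:

```python
# Sketch of sequential fan-out: one dump file, uploaded to each
# destination in priority order; one failure does not stop the rest.
def fan_out(dump_path, destinations):
    results = {}
    for dest in destinations:            # priority order, top to bottom
        try:
            dest.upload(dump_path)
            results[dest.name] = "ok"
        except Exception as exc:
            results[dest.name] = f"failed: {exc}"
    ok = [r == "ok" for r in results.values()]
    if all(ok):
        return "Success", results
    return ("Partial" if any(ok) else "Failed"), results

class Dest:  # stand-in for a storage adapter
    def __init__(self, name, fails=False):
        self.name, self.fails = name, fails
    def upload(self, path):
        if self.fails:
            raise IOError("network error")

status, _ = fan_out("/tmp/db.sql.gz", [Dest("local"), Dest("s3", fails=True)])
assert status == "Partial"
```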
Job Actions
Run Now
Execute the job immediately:
- Click the ▶ Run button on the job
- Monitor progress in real-time
- View results in History
Enable/Disable
Toggle the job without deleting:
- Disabled jobs don't run on schedule
- Can still be triggered manually
Duplicate
Create a copy with the same settings:
- Useful for similar backups
- Modify as needed after duplication
Delete
Remove the job permanently:
- Does not delete existing backups
- Schedule is removed
Job Status
| Status | Description |
|---|---|
| 🟢 Active | Enabled and scheduled |
| ⚪ Disabled | Not running on schedule |
| 🔵 Running | Currently executing |
| 🟡 Partial | Some destinations succeeded, others failed |
| 🔴 Failed | Last run failed |
Execution Monitoring
Live Progress
During execution, view:
- Current step (Initialize → Dump → Upload → Complete)
- File size progress
- Live log output
Execution History
After completion:
- Go to History
- View all past executions
- Check logs for details
- See success/failure status
Best Practices
Naming Convention
Use descriptive names:
- prod-mysql-daily - Production MySQL, daily
- staging-postgres-hourly - Staging PostgreSQL, hourly
- mongodb-weekly-archive - MongoDB weekly archive
One Source Per Job
For clarity, create separate jobs for:
- Different databases
- Different retention requirements
- Different schedules
Test Before Scheduling
- Create job with no schedule
- Run manually
- Verify backup in Storage Explorer
- Test restore
- Then enable schedule
Resource Considerations
- Schedule during low-traffic periods
- Avoid overlapping large backups
- Monitor system resources during backup
Concurrent Execution
By default, one backup runs at a time. Configure concurrency:
- Go to Settings → System
- Set Max Concurrent Jobs
- Higher values = more parallel backups
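Max Concurrent Jobs behaves like a worker-pool cap: with a limit of 2, no more than two backups run at once no matter how many are queued. A sketch of that behavior (the sleep stands in for dump and upload work):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Track peak concurrency while 5 fake backup jobs run through a pool
# capped at 2 workers (analogous to Max Concurrent Jobs = 2).
peak, running, lock = 0, 0, threading.Lock()

def backup(job_id):
    global peak, running
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.05)                 # stand-in for dump + upload work
    with lock:
        running -= 1
    return job_id

with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(backup, range(5)))
print("peak concurrency:", peak)
```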
Resource Usage
More concurrent jobs = higher CPU/memory/disk usage
Job Pipeline
When a job runs, it goes through these steps:
1. Initialize
└── Fetch job config
└── Decrypt credentials
└── Validate source connection
└── Resolve all destination adapters
2. Dump
└── Execute database dump
└── Apply compression (if enabled)
└── Apply encryption (if enabled)
3. Upload (Fan-Out)
└── For each destination (by priority):
└── Transfer backup file
└── Create metadata file
└── Verify checksum (local storage)
└── Evaluate results → Partial if mixed
4. Completion
└── Cleanup temp files
└── Record per-destination results
└── Update execution status
└── Send notifications
5. Retention (per destination)
└── For each destination (successful uploads only):
└── List existing backups
└── Apply that destination's retention policy
└── Delete expired backups
Troubleshooting
Job Stuck in "Running"
If a job shows running but isn't progressing:
- Check History for the execution
- View logs for errors
- The server may have restarted mid-backup
- Manually cancel if needed
Backup Too Slow
- Enable compression (smaller transfer)
- Schedule during off-peak hours
- Check network between DBackup and destination
- Consider faster storage
Out of Disk Space
Temp files are stored locally during processing:
- Increase available disk space
- Enable compression to reduce temp file size
- Clean up old temp files:
/tmp/dbackup-*
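A small cleanup sketch for that pattern: delete matching temp files older than a cutoff. The /tmp/dbackup-* pattern comes from above; the function name and the 24-hour threshold are arbitrary choices, not a DBackup feature:

```python
import glob
import os
import time

# Hypothetical helper: remove leftover DBackup temp files older than
# max_age_s. The pattern matches the path documented above.
def clean_temp(pattern="/tmp/dbackup-*", max_age_s=24 * 3600):
    now = time.time()
    removed = []
    for path in glob.glob(pattern):
        if now - os.path.getmtime(path) > max_age_s:
            os.remove(path)
            removed.append(path)
    return removed
```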
Next Steps
- Scheduling - Configure when jobs run
- Retention Policies - Automatic cleanup
- Encryption - Secure your backups