Database Export and Rsync
Rsync transfers files, not live database connections. To back up or migrate a database with rsync, you must first export the database to a file, then transfer that file. This export-then-transfer approach is the standard pattern for database backup and migration.
Do not rsync the raw database data directory (/var/lib/mysql/, /var/lib/postgresql/) while the database is running. Those files are constantly being written to and locked by the database engine. Rsyncing them produces a corrupt, unusable backup. Always export first with mysqldump or pg_dump.
The Export → Transfer → Import Pattern
Every database migration follows this flow:
flowchart LR
A["1. Export<br/>mysqldump / pg_dump"] --> B["2. Transfer<br/>rsync to destination"]
B --> C["3. Import<br/>mysql / psql"]
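Concretely, the three steps look like this. A minimal sketch, where my_database and user@backup-server are placeholder names:

```shell
# Sketch of the Export -> Transfer -> Import pattern.
# my_database and user@backup-server are placeholders.
STAMP=$(date +%F)
DUMP="my_database_${STAMP}.sql.gz"

mysqldump -u root -p my_database | gzip > "$DUMP"      # 1. Export
rsync -avzP "$DUMP" user@backup-server:/backups/db/    # 2. Transfer

# 3. Import, run on the destination server:
# gunzip < "$DUMP" | mysql -u root -p my_database
```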
MySQL / MariaDB
Basic Export
# Export a single database
mysqldump -u root -p my_database > my_database.sql
# Export all databases on the server
mysqldump -u root -p --all-databases > all_databases.sql
Compressed Export (Recommended for Production)
SQL files compress extremely well — often 5–10x smaller:
# Compressed single database
mysqldump -u root -p my_database | gzip > my_database_$(date +%F).sql.gz
# Compressed all databases
mysqldump -u root -p --all-databases | gzip > all_databases_$(date +%F).sql.gz
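Because a failed pipeline can still leave a file behind, it's worth verifying the compressed dump immediately after writing it. A sketch, with placeholder database name and credentials:

```shell
# Export, then verify the dump before trusting it (placeholder names).
DUMP="my_database_$(date +%F).sql.gz"
mysqldump -u root -p my_database | gzip > "$DUMP"

# gzip -t checks archive integrity; -s checks the file is non-empty
gzip -t "$DUMP" || { echo "corrupt dump: $DUMP" >&2; exit 1; }
[ -s "$DUMP" ]  || { echo "empty dump: $DUMP" >&2; exit 1; }
```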
Production-Safe Export
For production databases with InnoDB tables, use --single-transaction to avoid locking:
mysqldump -u root -p \
--single-transaction \
--routines \
--triggers \
--events \
my_database | gzip > my_database_$(date +%F).sql.gz
| Flag | Purpose |
|---|---|
| --single-transaction | Consistent snapshot without locking tables (InnoDB only) |
| --routines | Include stored procedures and functions |
| --triggers | Include triggers |
| --events | Include scheduled events |
--single-transaction is essential for production. Without it, mysqldump takes table locks during the export, blocking writes to the database for the duration of the dump.
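Since --single-transaction only protects transactional tables, it can help to list any non-InnoDB tables before relying on it. A sketch querying information_schema (the excluded schemas are MySQL's standard system schemas):

```shell
# Tables on MyISAM or other non-transactional engines are still dumped,
# but without the consistency guarantee of --single-transaction.
QUERY="SELECT table_schema, table_name, engine
  FROM information_schema.tables
  WHERE engine <> 'InnoDB'
    AND table_schema NOT IN
      ('mysql','information_schema','performance_schema','sys');"
mysql -u root -p -e "$QUERY"
```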
PostgreSQL
Basic Export
# Export a single database
pg_dump -U postgres my_database > my_database.sql
# Export all databases
pg_dumpall -U postgres > all_databases.sql
Compressed Export
# Using gzip
pg_dump -U postgres my_database | gzip > my_database_$(date +%F).sql.gz
# Using PostgreSQL's custom format (most flexible)
pg_dump -U postgres -Fc my_database > my_database_$(date +%F).dump
Production-Safe Export
pg_dump -U postgres \
--no-owner \
--no-privileges \
--clean \
my_database | gzip > my_database_$(date +%F).sql.gz
| Flag | Purpose |
|---|---|
| --no-owner | Don't include ownership commands (useful when importing as a different user) |
| --no-privileges | Skip GRANT/REVOKE statements |
| --clean | Add DROP commands before CREATE (clean reimport) |
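The custom format mentioned above also unlocks parallel restore, which matters for large databases. A sketch, where my_database and the four-job count are placeholders:

```shell
# Custom format (-Fc) is compressed and lets pg_restore select objects
FMT_DUMP="my_database_$(date +%F).dump"
pg_dump -U postgres -Fc --no-owner my_database > "$FMT_DUMP"

# -j 4 restores with four parallel jobs (custom/directory formats only)
pg_restore -U postgres -d my_database --clean --no-owner -j 4 "$FMT_DUMP"
```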
Rsync Transfer
Once you have the dump file, rsync handles the transfer:
Basic Transfer
# Transfer to remote backup server
rsync -avzP /var/backups/db/my_database.sql.gz \
user@backup-server:/backups/db/
# Transfer to local backup directory
rsync -av /var/backups/db/ /mnt/external/db/
With Retention (Timestamped Backups)
# Create dated backup directory
rsync -avz /var/backups/db/my_database_$(date +%F).sql.gz \
user@backup-server:/backups/db/daily/
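Retention on the backup server itself can be handled over ssh after the transfer. A sketch assuming a hypothetical 14-day policy and the same host and path as above:

```shell
# Push today's dump, then prune remote copies older than 14 days
rsync -avz /var/backups/db/my_database_$(date +%F).sql.gz \
  user@backup-server:/backups/db/daily/
ssh user@backup-server \
  "find /backups/db/daily -name '*.sql.gz' -mtime +14 -delete"
```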
Import at Destination
MySQL / MariaDB
# Import plain SQL
mysql -u root -p my_database < my_database.sql
# Import compressed SQL
gunzip < my_database.sql.gz | mysql -u root -p my_database
# Or with zcat
zcat my_database.sql.gz | mysql -u root -p my_database
PostgreSQL
# Import plain SQL
psql -U postgres my_database < my_database.sql
# Import compressed SQL
gunzip < my_database.sql.gz | psql -U postgres my_database
# Import custom format
pg_restore -U postgres -d my_database my_database.dump
Complete Backup Script
A production-ready script that exports databases and rsyncs them to a backup server:
#!/bin/bash
# db-backup.sh — Export and transfer database backups
set -e
TIMESTAMP=$(date +%F_%H%M)
BACKUP_DIR="/var/backups/db"
REMOTE="user@backup-server:/offsite/db"
mkdir -p "$BACKUP_DIR"
# MySQL: Export all databases (no -p: credentials are read from ~/.my.cnf;
# see Secure Credential Management below)
if command -v mysqldump &> /dev/null; then
mysqldump -u root \
--single-transaction \
--routines --triggers --events \
--all-databases | gzip > "$BACKUP_DIR/mysql_all_$TIMESTAMP.sql.gz"
echo " MySQL export complete"
fi
# PostgreSQL: Export all databases
if command -v pg_dumpall &> /dev/null; then
sudo -u postgres pg_dumpall | gzip > "$BACKUP_DIR/postgres_all_$TIMESTAMP.sql.gz"
echo " PostgreSQL export complete"
fi
# Transfer to remote backup
rsync -avzP "$BACKUP_DIR/" "$REMOTE/"
echo " Transfer complete"
# Clean up local dumps older than 7 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +7 -delete
find "$BACKUP_DIR" -name "*.dump" -mtime +7 -delete
echo " Old backups cleaned"
Schedule with cron:
# Run at 3:00 AM daily
0 3 * * * /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1
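If an export ever runs longer than the cron interval, two copies of the script can collide. Wrapping the job in flock (available on most Linux systems) prevents overlap; a sketch, where the lock path is a conventional choice:

```shell
# First invocation acquires the lock and runs the command;
# -n makes a second invocation give up immediately instead of queueing.
LOCK=/tmp/db-backup.lock
flock -n "$LOCK" echo "backup would run here"

# Crontab entry using the same pattern:
#   0 3 * * * flock -n /var/lock/db-backup.lock /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1
```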
Secure Credential Management
Never put database passwords directly in scripts:
Option 1: MySQL Config File
# ~/.my.cnf
[client]
user=root
password=your_secure_password
# Set restrictive permissions
chmod 600 ~/.my.cnf
# Now mysqldump works without -p flag
mysqldump --single-transaction my_database | gzip > backup.sql.gz
Option 2: PostgreSQL Password File
# ~/.pgpass (format: hostname:port:database:username:password)
localhost:5432:*:postgres:your_secure_password
chmod 600 ~/.pgpass
# pg_dump now works without password prompt
pg_dump -U postgres my_database > backup.sql
Option 3: Environment Variables
export MYSQL_PWD='your_secure_password'
mysqldump -u root --all-databases | gzip > backup.sql.gz
unset MYSQL_PWD
Environment variables can leak, for example via /proc/PID/environ or into shell history and logs. The config file approach (.my.cnf / .pgpass) is preferred for automated scripts.
Complete Migration Workflow
A full server migration combining files and database:
# On the source server:
# Step 1: Export database
mysqldump -u root --single-transaction \
--all-databases | gzip > /tmp/db_migration.sql.gz
# Step 2: Sync application files
rsync -avzP --exclude='cache/' /var/www/html/ \
user@newserver:/var/www/html/
# Step 3: Transfer database dump
rsync -avzP /tmp/db_migration.sql.gz \
user@newserver:/tmp/
# On the destination server:
# Step 4: Import database
gunzip < /tmp/db_migration.sql.gz | mysql -u root -p
# Step 5: Update environment configuration (if needed)
# If your application stores the site URL in the database or .env file:
sed -i 's/old-domain.com/new-domain.com/g' /var/www/html/.env
# Step 6: Fix permissions
sudo chown -R www-data:www-data /var/www/html/
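After the import, it's worth sanity-checking that both servers see the same set of tables. A sketch to run on each side (the counts should match; the excluded schemas are MySQL's system schemas):

```shell
# Count user tables, excluding MySQL's system schemas
COUNT_QUERY="SELECT COUNT(*) FROM information_schema.tables
  WHERE table_schema NOT IN
    ('mysql','information_schema','performance_schema','sys');"
mysql -u root -p -N -e "$COUNT_QUERY"
```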
Common Pitfalls
| Pitfall | Consequence | Prevention |
|---|---|---|
| Rsyncing /var/lib/mysql/ while DB is running | Corrupt backup that can't be restored | Always export with mysqldump or pg_dump first |
| Storing .sql files in web-accessible directories | Anyone can download your database | Store dumps in /var/backups/, not /var/www/ |
| Passwords in plaintext scripts | Credential exposure | Use .my.cnf, .pgpass, or environment variables |
| No compression on large exports | Wastes bandwidth and storage | Always pipe through gzip |
| Not using --single-transaction | Locks tables during export, blocking application | Add --single-transaction for InnoDB |
| Not testing the restore | Discover backup is corrupt only during disaster | Periodically test mysql < dump.sql on a staging server |
| Forgetting --routines --triggers | Stored procedures and triggers are lost | Include these flags for complete exports |
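The "test the restore" pitfall deserves its own snippet. A sketch restoring the newest dump into a scratch database, where restore_test is a placeholder name and credentials come from ~/.my.cnf:

```shell
# Pick the most recent single-database dump by modification time
LATEST=$(ls -t /var/backups/db/*.sql.gz | head -n 1)

mysql -u root -e "DROP DATABASE IF EXISTS restore_test;
                  CREATE DATABASE restore_test;"
gunzip < "$LATEST" | mysql -u root restore_test

# A restored database with zero tables is a red flag
mysql -u root -N -e "SELECT COUNT(*) FROM information_schema.tables
                     WHERE table_schema = 'restore_test';"
```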
Quick Reference
# MySQL export (production-safe)
mysqldump -u root --single-transaction --routines --triggers \
my_database | gzip > db_$(date +%F).sql.gz
# PostgreSQL export
pg_dump -U postgres my_database | gzip > db_$(date +%F).sql.gz
# Transfer dump to backup server
rsync -avzP /var/backups/db/ user@backup:/offsite/db/
# Import MySQL
gunzip < db.sql.gz | mysql -u root -p my_database
# Import PostgreSQL
gunzip < db.sql.gz | psql -U postgres my_database
What's Next
- Live Replication Edge Cases — Why rsync fails with live databases and safer alternatives
- Backup Strategies — Build a comprehensive backup system
- Cron Automation — Automate database backups