Performance Tuning
When rsync takes too long, the bottleneck is usually one of three things: CPU (compression/checksumming), disk I/O (reading/writing files), or network (bandwidth). Tuning means identifying which bottleneck applies and adjusting rsync's behavior accordingly.
Identifying Bottlenecks
Before tuning, identify what's slow:
# Monitor during rsync transfer
htop # CPU usage
iotop # Disk I/O
nload # Network bandwidth
| Symptom | Bottleneck | Solution |
|---|---|---|
| CPU at 100% during transfer | Compression too aggressive | Lower --compress-level or disable -z |
| Disk wait time high | I/O-bound file scanning | Use --whole-file, reduce parallelism on HDD |
| Network at capacity | Bandwidth-limited | Already optimal, or use --bwlimit to share |
| Transfer fast but takes ages to start | File list building is slow | Reduce file count with excludes |
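When it's unclear whether scanning or transferring dominates, timing a --dry-run against a real run gives a rough split: the dry run builds and compares the file list but sends no data. A self-contained sketch using throwaway directories (file count and paths are illustrative):

```shell
# Create a throwaway source tree with many small files
SRC=$(mktemp -d); DST=$(mktemp -d)
for i in $(seq 1 500); do echo "data $i" > "$SRC/file$i"; done

# Scan + comparison only: nothing is written to the destination
time rsync -a --dry-run "$SRC/" "$DST/"

# Full run: the extra time over the dry run is mostly data transfer
time rsync -a "$SRC/" "$DST/"

rm -rf "$SRC" "$DST"
```

On a tree with hundreds of thousands of files, a dry run that already takes minutes tells you the file list itself is the problem.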
Key Performance Flags
--whole-file (Skip Delta Checksumming)
# LAN or local: skip checksumming, copy whole files
rsync -av --whole-file /var/www/html/ /backup/www/
When the network is faster than computing checksums (fast LAN, local disk), whole-file mode wins because it skips the block-by-block delta comparison. Note that rsync already defaults to --whole-file when both source and destination are local, so the flag mainly matters for forcing whole-file transfers over a fast network (and --no-whole-file forces delta mode locally).
--inplace (Update Files In-Place)
# Large files: update without creating temporary copy
rsync -av --inplace /backup/database.sql.gz user@remote:/backups/
Normally rsync writes to a temporary file then renames it. --inplace writes directly to the existing file — faster for large files, but no atomic replacement (interrupted transfer = partial file).
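The difference is easy to observe locally: the default path renames a freshly written temp file into place (new inode), while --inplace reuses the existing one. A throwaway demonstration (GNU stat syntax; on BSD/macOS use stat -f %i):

```shell
SRC=$(mktemp -d); DST=$(mktemp -d)
dd if=/dev/urandom of="$SRC/big.bin" bs=1M count=4 2>/dev/null
rsync -a "$SRC/big.bin" "$DST/"

# Change the source, sync the default way: temp file + atomic rename
dd if=/dev/urandom of="$SRC/big.bin" bs=1M count=5 2>/dev/null
old=$(stat -c %i "$DST/big.bin")
rsync -a "$SRC/big.bin" "$DST/"
new=$(stat -c %i "$DST/big.bin")
[ "$old" -ne "$new" ] && echo "default: new inode (atomic replace)"

# Change the source again, sync with --inplace: existing file is updated
dd if=/dev/urandom of="$SRC/big.bin" bs=1M count=6 2>/dev/null
old=$(stat -c %i "$DST/big.bin")
rsync -a --inplace "$SRC/big.bin" "$DST/"
new=$(stat -c %i "$DST/big.bin")
[ "$old" -eq "$new" ] && echo "--inplace: same inode (no temp copy)"

rm -rf "$SRC" "$DST"
```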
--append (Resume/Append Only)
# Log files or growing datasets: only append new data
rsync -av --append /var/log/nginx/ user@backup:/logs/
--append sends only the data past the receiver's current file length, assuming the existing portion is unchanged. That makes it ideal for append-only data like logs, but it will silently corrupt any file whose earlier contents were modified; --append-verify checksums the existing data first to guard against this.
--no-compress (Explicitly Disable Compression)
# Already-compressed content: skip compression overhead
rsync -av --no-compress /backup/archives/ user@remote:/offsite/
Compressing already-compressed data (archives, images, video) wastes CPU for near-zero savings. --no-compress is mainly useful for overriding a -z baked into an alias or wrapper script; for mixed trees, --skip-compress=LIST disables compression per file suffix instead.
Tuning by Scenario
Fast LAN (1 Gbps+)
rsync -av --whole-file /var/www/html/ /nas/backup/
- No compression (compressing would make CPU the bottleneck)
- Whole-file mode (faster than checksumming)
Remote Server (100 Mbps)
rsync -avz --compress-level=1 /var/www/html/ user@remote:/backup/
- Light compression (fastest, still saves some bandwidth)
- Delta transfer (default — only changed blocks)
Slow Connection (< 10 Mbps)
rsync -avz --compress-level=6 --bwlimit=500 -P \
/var/www/html/ user@remote:/backup/
- Higher compression (network is the bottleneck)
- Bandwidth limit (don't saturate the connection)
- -P for progress and resume
Very Large Directory (100K+ Files)
# Exclude unnecessary files to shrink the file list
rsync -av \
--exclude='cache/' \
--exclude='*.log' \
--exclude='*.tmp' \
--exclude='node_modules/' \
/var/www/html/ /backup/www/
The more files rsync has to scan, the longer the initial file-list building takes. Aggressive excludes reduce this.
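To see how much a set of excludes shrinks the file list before committing to a transfer, compare the "Number of files" line from --dry-run --stats runs. A self-contained sketch with a throwaway tree (file counts and the cache/ pattern are illustrative):

```shell
# Throwaway tree: 200 "real" files plus 200 cache files
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir "$SRC/app" "$SRC/cache"
for i in $(seq 1 200); do echo a > "$SRC/app/a$i"; echo c > "$SRC/cache/c$i"; done

# How big is the file list? --dry-run scans without transferring anything
rsync -a --dry-run --stats "$SRC/" "$DST/" | grep 'Number of files'

# The same scan with an exclude — the difference is pure list-building work saved
rsync -a --dry-run --stats --exclude='cache/' "$SRC/" "$DST/" | grep 'Number of files'

rm -rf "$SRC" "$DST"
```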
Parallelization
For directories with many subdirectories, run multiple rsync processes:
# 4 parallel rsync jobs across subdirectories
find /var/www/html/ -mindepth 1 -maxdepth 1 -type d -print0 | \
    xargs -0 -n1 -P4 -I{} rsync -av {} user@backup:/backups/www/
Each source directory is passed without a trailing slash, so rsync creates it by name under /backups/www/ (putting {} in the destination would recreate the full absolute source path on the remote side).
When Parallelization Helps
| Storage Type | Max Safe Jobs | Notes |
|---|---|---|
| SSD (NVMe) | 8–16 | High IOPS, handles concurrency well |
| SSD (SATA) | 4–8 | Good IOPS |
| HDD | 2–4 | Too many jobs = disk thrashing |
| Network storage (NFS) | 2–4 | Limited by network mount |
For detailed parallel rsync patterns, see Parallel and Incremental Sync.
Reducing CPU Impact
Use nice and ionice
Lower rsync's priority so it doesn't compete with your web server:
# Low CPU priority + best-effort I/O
nice -n 19 ionice -c2 -n7 rsync -avz \
/var/www/html/ user@backup:/backups/
| Priority | nice Value | ionice Class | Effect |
|---|---|---|---|
| Normal | 0 | 2 (best-effort) | Default |
| Low | 10 | 2 (-n 4) | Slight reduction |
| Minimal | 19 | 3 (idle) | Only when nothing else needs resources |
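To confirm the priorities actually took effect, inspect the running process. A quick local check (sleep stands in for a long rsync; output formats are from GNU/Linux ps and util-linux ionice):

```shell
nice -n 19 ionice -c3 sleep 30 &   # minimal priority; sleep stands in for rsync
pid=$!
ps -o ni= -p "$pid"    # prints the CPU niceness: 19
ionice -p "$pid"       # prints the I/O class: idle
kill "$pid"
```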
SSH Tuning
Faster Cipher
SSH encryption overhead can be significant for large transfers:
# Use faster cipher (AES hardware acceleration)
rsync -avz -e "ssh -c aes128-gcm@openssh.com" \
/var/www/html/ user@remote:/backup/
Connection Multiplexing
Reuse SSH connections to avoid repeated handshakes:
Add to ~/.ssh/config:
Host backup-server
HostName 10.0.0.50
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600
mkdir -p ~/.ssh/sockets
# First rsync establishes connection; subsequent ones reuse it
rsync -avz /var/www/html/ backup-server:/backups/www/
Benchmarking
Measure Before and After
# Time a transfer
time rsync -avz /var/www/html/ user@remote:/backup/
# Compare with different settings
time rsync -avz --compress-level=1 /var/www/html/ user@remote:/backup/
time rsync -av --whole-file /var/www/html/ user@remote:/backup/
Use --stats to Understand Efficiency
rsync -avz --stats /var/www/html/ user@remote:/backup/
Watch for:
- Speedup ratio — higher = less data actually sent
- Total transferred vs total file size — delta transfer savings
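For quick A/B comparisons between flag sets, the speedup figure can be pulled straight out of the --stats output. A self-contained sketch with throwaway data:

```shell
# Extract just the speedup number from the final --stats line
SRC=$(mktemp -d); DST=$(mktemp -d)
seq 1 10000 > "$SRC/data.txt"
rsync -a --stats "$SRC/" "$DST/" | awk '/speedup is/ {print $NF}'
rm -rf "$SRC" "$DST"
```

Run the same extraction with different compression levels or --whole-file and compare the numbers directly.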
Common Pitfalls
| Pitfall | Consequence | Prevention |
|---|---|---|
| Compression on LAN | CPU bottleneck, slower transfer | Skip -z for fast networks |
| Too many parallel jobs on HDD | Disk thrashing, slower than serial | Start with 2–4 jobs |
| --inplace on interrupted large transfer | Partially overwritten file, no rollback | Use only when you can re-run |
| Not excluding junk files | File list scanning takes 10+ minutes | Exclude cache/, node_modules/, logs |
| Default SSH cipher over fast network | Encryption becomes bottleneck | Use aes128-gcm cipher |
Quick Reference
# Fast LAN transfer
rsync -av --whole-file /src/ /dest/
# Remote with light compression
rsync -avz --compress-level=1 /src/ user@remote:/dest/
# Low priority (won't affect web server)
nice -n 19 ionice -c2 rsync -avz /src/ user@remote:/dest/
# Parallel transfer (4 jobs)
find /src/ -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -P4 -I{} rsync -av {} user@remote:/dest/
# Fast SSH cipher
rsync -avz -e "ssh -c aes128-gcm@openssh.com" /src/ user@remote:/dest/
# Benchmark
time rsync -avz --stats /src/ user@remote:/dest/
What's Next
- Compression and Bandwidth — Detailed compression/bandwidth tuning
- Parallel and Incremental Sync — Scale with parallelism
- Backup Strategies — Design efficient backup architecture