Performance Tuning

When rsync takes too long, the bottleneck is usually one of three things: CPU (compression/checksumming), disk I/O (reading/writing files), or network (bandwidth). Tuning means identifying which bottleneck applies and adjusting rsync's behavior accordingly.

Identifying Bottlenecks

Before tuning, identify what's slow:

# Monitor during rsync transfer
htop # CPU usage
iotop # Disk I/O
nload # Network bandwidth
Symptom                               | Bottleneck                 | Solution
CPU at 100% during transfer           | Compression too aggressive | Lower --compress-level or disable -z
Disk wait time high                   | I/O-bound file scanning    | Use --whole-file, reduce parallelism on HDD
Network at capacity                   | Bandwidth-limited          | Already optimal, or use --bwlimit to share
Transfer fast but takes ages to start | File-list building is slow | Reduce file count with excludes

Key Performance Flags

--whole-file (Skip Delta Checksumming)

# LAN or local: skip checksumming, copy whole files
rsync -av --whole-file /var/www/html/ /backup/www/

When the network is faster than computing checksums (LAN, local disk), whole-file mode is faster because it skips the block-by-block comparison.

--inplace (Update Files In-Place)

# Large files: update without creating temporary copy
rsync -av --inplace /backup/database.sql.gz user@remote:/backups/

Normally rsync writes to a temporary file then renames it. --inplace writes directly to the existing file — faster for large files, but no atomic replacement (interrupted transfer = partial file).

--append (Resume/Append Only)

# Log files or growing datasets: only append new data
rsync -av --append /var/log/nginx/ user@backup:/logs/

--append transfers only the bytes added since the last sync and never re-checks the existing destination data. Use it only for strictly append-only files (logs, growing dumps); if earlier content in a file changed, the destination ends up corrupt.

--no-compress (Explicitly Disable Compression)

# Already-compressed content: skip compression overhead
rsync -av --no-compress /backup/archives/ user@remote:/offsite/

Tuning by Scenario

Fast LAN (1 Gbps+)

rsync -av --whole-file /var/www/html/ /nas/backup/
  • No compression (CPU is the bottleneck)
  • Whole-file mode (faster than checksumming)

Remote Server (100 Mbps)

rsync -avz --compress-level=1 /var/www/html/ user@remote:/backup/
  • Light compression (fastest, still saves some bandwidth)
  • Delta transfer (default — only changed blocks)

Slow Connection (< 10 Mbps)

rsync -avz --compress-level=6 --bwlimit=500 -P \
/var/www/html/ user@remote:/backup/
  • Higher compression (network is the bottleneck)
  • Bandwidth limit (don't saturate the connection)
  • -P for progress and resume

Very Large Directory (100K+ Files)

# Exclude unnecessary files to shrink the file list
rsync -av \
--exclude='cache/' \
--exclude='*.log' \
--exclude='*.tmp' \
--exclude='node_modules/' \
/var/www/html/ /backup/www/

The more files rsync has to scan, the longer the initial file-list build takes. Aggressive excludes shrink both the scan and the transfer.

Parallelization

For directories with many subdirectories, run multiple rsync processes:

# 4 parallel rsync jobs across top-level subdirectories
find /var/www/html/ -mindepth 1 -maxdepth 1 -type d -print0 | \
xargs -0 -n1 -P4 -I{} rsync -av {} user@backup:/backups/www/

Each {} expands to a full subdirectory path; with no trailing slash, rsync copies the directory itself into /backups/www/, so the destination needs no per-job path.

When Parallelization Helps

Storage Type          | Max Safe Jobs | Notes
SSD (NVMe)            | 8–16          | High IOPS, handles concurrency well
SSD (SATA)            | 4–8           | Good IOPS
HDD                   | 2–4           | Too many jobs = disk thrashing
Network storage (NFS) | 2–4           | Limited by the network mount

For detailed parallel rsync patterns, see Parallel and Incremental Sync.

Reducing CPU Impact

Use nice and ionice

Lower rsync's priority so it doesn't compete with your web server:

# Low CPU priority + best-effort I/O
nice -n 19 ionice -c2 -n7 rsync -avz \
/var/www/html/ user@backup:/backups/
Priority | nice Value | ionice Class         | Effect
Normal   | 0          | 2 (best-effort)      | Default
Low      | 10         | 2 (best-effort, -n4) | Slight reduction
Minimal  | 19         | 3 (idle)             | Only when nothing else needs resources
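
To confirm the priority actually took effect, nice with no arguments prints the niceness of the process it runs as — a quick sanity check, not specific to rsync:

```shell
# `nice` with no command reports the current niceness, so wrapping it
# shows the value a command launched the same way would inherit.
nice -n 19 nice    # prints 19
```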

SSH Tuning

Faster Cipher

SSH encryption overhead can be significant for large transfers:

# Use faster cipher (AES hardware acceleration)
rsync -avz -e "ssh -c aes128-gcm@openssh.com" \
/var/www/html/ user@remote:/backup/

Connection Multiplexing

Reuse SSH connections to avoid repeated handshakes:

~/.ssh/config:

Host backup-server
    HostName 10.0.0.50
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 600

# The socket directory must exist before the first connection
mkdir -p ~/.ssh/sockets

# First rsync establishes the connection; subsequent ones reuse it
rsync -avz /var/www/html/ backup-server:/backups/www/

Benchmarking

Measure Before and After

# Time a transfer
time rsync -avz /var/www/html/ user@remote:/backup/

# Compare with different settings
time rsync -avz --compress-level=1 /var/www/html/ user@remote:/backup/
time rsync -av --whole-file /var/www/html/ user@remote:/backup/

Use --stats to Understand Efficiency

rsync -avz --stats /var/www/html/ user@remote:/backup/

Watch for:

  • Speedup ratio — higher = less data actually sent
  • Total transferred vs total file size — delta transfer savings
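
Both numbers can be pulled straight out of the --stats output; a sketch against sample output (the figures below are made up for illustration — pipe a real run into the same awk):

```shell
# Sketch: extract the speedup ratio from rsync --stats output.
# The sample text is illustrative, shaped like rsync's summary lines.
stats='sent 1,269 bytes  received 35 bytes  869.33 bytes/sec
total size is 104,857,600  speedup is 80,411.04'
speedup=$(printf '%s\n' "$stats" | awk '/speedup is/ { gsub(",", "", $NF); print $NF }')
echo "speedup: $speedup"
```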

Common Pitfalls

Pitfall                                 | Consequence                             | Prevention
Compression on LAN                      | CPU bottleneck, slower transfer         | Skip -z for fast networks
Too many parallel jobs on HDD           | Disk thrashing, slower than serial      | Start with 2–4 jobs
--inplace on interrupted large transfer | Partially overwritten file, no rollback | Use only when you can re-run
Not excluding junk files                | File-list scanning takes 10+ minutes    | Exclude cache/, node_modules/, logs
Default SSH cipher over fast network    | Encryption becomes the bottleneck       | Use the aes128-gcm cipher

Quick Reference

# Fast LAN transfer
rsync -av --whole-file /src/ /dest/

# Remote with light compression
rsync -avz --compress-level=1 /src/ user@remote:/dest/

# Low priority (won't affect web server)
nice -n 19 ionice -c2 rsync -avz /src/ user@remote:/dest/

# Parallel transfer (4 jobs)
find /src/ -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -P4 -I{} rsync -av {} user@remote:/dest/

# Fast SSH cipher
rsync -avz -e "ssh -c aes128-gcm@openssh.com" /src/ user@remote:/dest/

# Benchmark
time rsync -avz --stats /src/ user@remote:/dest/

What's Next