WD EX2 Ultra to QNAP: Plex Migration

At some point every home-lab setup reaches a tipping point. Mine arrived quietly, disguised as a perfectly reasonable idea: “I’ll just move my media library from my old WD EX2 Ultra to a new QNAP NAS.”

What followed was a week-long lesson in how optimistic assumptions meet aging hardware, watchdog timers, and very large video files.

This post is about how I eventually got the migration done, why the WD EX2 Ultra kept rebooting itself under load, and how I ended up writing a self-healing rsync script that patiently waits, detects stalls, survives crashes, and resumes transfers without human babysitting.

If you’ve ever watched a copy job crawl along at 3 a.m. wondering whether it’s stuck or just thinking, this is for you.


The setup

The old world

  • WD My Cloud EX2 Ultra
  • Years of accumulated media
  • Thousands of files, many of them large video containers
  • ARM CPU, limited RAM, consumer firmware

The EX2 Ultra is fine for serving media. It is not fine for sustained, high-pressure I/O over SSH.

The new world

  • QNAP NAS
  • Clean filesystem
  • Faster disks
  • Proper headroom
  • In its own NAS VLAN, behind an OPNsense firewall
  • Mounted inside a custom-built 10-inch rack, because if I’m rebuilding, I’m rebuilding properly

The goal was simple: move the entire Plex library across, intact, with permissions preserved where they matter and ignored where they don’t.


The problem: rsync meets reality

The first attempt was textbook:

rsync -av sshd@wd:/media /qnap/media

It worked. Then it didn’t.

After a few tens of gigabytes, the WD would:

  • drop the SSH connection
  • reboot itself
  • announce a filesystem check
  • politely pretend nothing had happened

This wasn’t corruption. It was resource exhaustion.

The EX2 Ultra simply cannot handle all of the following at once:

  • large directory walks
  • sustained SSH encryption
  • aggressive writeback
  • big files

So the question stopped being “How do I copy faster?” and became:

How do I copy safely, unattended, and indefinitely, even if the source crashes?


The strategy: let it fail, but never lose progress

The key ideas behind the final solution were:

  1. Never assume the WD stays alive
  2. Never restart from zero
  3. Never guess whether the process is “stuck”
  4. Never require human intervention at 2 a.m.

That led to a few design decisions:

  • Use rsync with --partial so interrupted files can resume
  • Limit bandwidth to reduce pressure
  • Run rsync in the background
  • Monitor progress via a log file, not terminal output
  • Detect stalls and report them
  • Detect WD availability (ping + SSH)
  • Retry automatically when the WD comes back

The result is the script below.

#!/bin/sh

# ---- CONFIG ----
SRC="[email protected]:/mnt/HD/HD_a2/PLEX/"
DST="/share/PLEX/"
WD_HOST="10.0.60.20"

PING_INTERVAL=30
SSH_TIMEOUT=5
HEARTBEAT_INTERVAL=30        # how often we print status
STALL_WARNING=120            # seconds without log updates before warning

DEFAULT_BWLIMIT=25000        # KB/s
LOGFILE="/share/PLEX/rsync.log"
# -----------------

# Bandwidth argument
if [ -n "$1" ]; then
    BWLIMIT="$1"
else
    BWLIMIT="$DEFAULT_BWLIMIT"
fi

echo "[$(date)] Using rsync bandwidth limit: ${BWLIMIT} KB/s"

while true; do
    echo "[$(date)] Starting rsync..."

    touch "$LOGFILE"

    rsync -av \
      --progress \
      --partial \
      --partial-dir=.rsync-partial \
      --whole-file \
      --numeric-ids \
      --no-perms --no-owner --no-group \
      --size-only \
      --timeout=600 \
      --bwlimit="$BWLIMIT" \
      --exclude='.*' \
      --exclude='rsync.log' \
      --log-file="$LOGFILE" \
      -e "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=10" \
      "$SRC" "$DST" &

    RSYNC_PID=$!
    echo "[$(date)] rsync started with PID $RSYNC_PID"

    # Supervise: poll every HEARTBEAT_INTERVAL while rsync is still alive
    while kill -0 "$RSYNC_PID" 2>/dev/null; do
        sleep "$HEARTBEAT_INTERVAL"

        NOW=$(date +%s)
        LAST_LOG=$(stat -c %Y "$LOGFILE" 2>/dev/null || echo 0)
        DELTA=$((NOW - LAST_LOG))

        if [ "$DELTA" -lt "$STALL_WARNING" ]; then
            echo "[$(date)] rsync running, last progress ${DELTA}s ago"
        else
            echo "[$(date)] rsync quiet for ${DELTA}s – likely WD I/O stall"

            if ping -c 1 -W 1 "$WD_HOST" >/dev/null 2>&1; then
                if ssh -o BatchMode=yes -o ConnectTimeout=$SSH_TIMEOUT \
                       sshd@"$WD_HOST" "true" >/dev/null 2>&1; then
                    echo "[$(date)] WD reachable via SSH, continuing to wait"
                else
                    echo "[$(date)] WD pingable but SSH not ready yet"
                fi
            else
                echo "[$(date)] WD not responding to ping"
            fi
        fi
    done

    wait "$RSYNC_PID"
    RC=$?

    if [ "$RC" -eq 0 ]; then
        echo "[$(date)] rsync completed successfully."
        break
    fi

    echo "[$(date)] rsync exited with code $RC."
    echo "[$(date)] Waiting for WD to be fully ready before retry..."

    # Wait for the WD to come back: ping first, then SSH plus source path
    while true; do
        if ping -c 1 -W 1 "$WD_HOST" >/dev/null 2>&1; then
            if ssh -o BatchMode=yes -o ConnectTimeout=$SSH_TIMEOUT \
                   sshd@"$WD_HOST" "test -d /mnt/HD/HD_a2/PLEX" >/dev/null 2>&1; then
                echo "[$(date)] WD fully ready. Resuming rsync."
                break
            fi
        fi

        echo "[$(date)] WD not ready yet. Rechecking in ${PING_INTERVAL}s..."
        sleep "$PING_INTERVAL"
    done
done

What the script actually does (in human terms)

1. Bandwidth is adjustable at runtime

You can run:

./rsync_plex.sh 30000

and immediately trade speed for stability without editing the script. This matters because the WD’s tolerance changes with temperature, file mix, and sheer bad luck.
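The script above uses the argument as-is; if you want to harden that step, a small helper can fall back to the default whenever the argument is missing or non-numeric. This is a sketch, and `bwlimit_for` is a hypothetical helper, not part of the script above:

```shell
# Hypothetical sketch: validate the optional KB/s argument before handing
# it to rsync, falling back to the default for anything non-numeric.
DEFAULT_BWLIMIT=25000

bwlimit_for() {
    case "$1" in
        ''|*[!0-9]*) echo "$DEFAULT_BWLIMIT" ;;  # empty or non-numeric -> default
        *)           echo "$1" ;;                # positive integer -> use as-is
    esac
}
```

You would then call rsync with --bwlimit="$(bwlimit_for "$1")" instead of trusting $1 directly.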


2. rsync runs in the background

This avoids terminal-dependent progress behavior and lets the script supervise it like a long-running job.
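Stripped of the rsync specifics, the supervision pattern is just three shell primitives: start the job with &, poll it with kill -0, then reap it with wait. A minimal sketch, with a sleep standing in for rsync:

```shell
# Sketch of the supervise-a-background-job pattern:
# & starts the job, kill -0 polls liveness, wait reaps the exit code.
long_job() { sleep 1; exit 0; }   # stand-in for the real rsync invocation

long_job &
JOB_PID=$!

while kill -0 "$JOB_PID" 2>/dev/null; do
    sleep 1   # heartbeat interval; the real script inspects the log here
done

wait "$JOB_PID"
RC=$?
echo "job exited with $RC"
```

kill -0 sends no signal at all; it only asks the kernel whether the PID still exists, which is why it is safe to run in a tight loop.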


3. Progress is inferred, not guessed

Instead of trusting rsync’s live output, the script watches the log file modification time.
If the log hasn’t changed in 120 seconds, the script doesn’t panic. It says:
“rsync is quiet. The WD is probably flushing I/O.”
That distinction matters.
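The mtime check boils down to comparing "now" against the log file's last modification time. A standalone sketch of that idea, using a hypothetical `log_age` helper (note that stat -c %Y is the GNU/busybox form, which is what the QNAP side provides):

```shell
# Sketch: infer liveness from a log file's modification time instead of
# parsing rsync's live output. stat -c %Y prints seconds since the epoch.
STALL_WARNING=120
LOG=$(mktemp)

log_age() {
    NOW=$(date +%s)
    LAST=$(stat -c %Y "$LOG" 2>/dev/null || echo 0)
    echo $((NOW - LAST))
}

touch "$LOG"
AGE=$(log_age)
if [ "$AGE" -lt "$STALL_WARNING" ]; then
    echo "healthy: last write ${AGE}s ago"
else
    echo "stalled: quiet for ${AGE}s"
fi
```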


4. Liveness checks are layered

When things go quiet, the script checks:

  • Can I ping the WD?
  • Can I open an SSH session?
  • Is the source path present?

This avoids false positives and avoids killing healthy transfers.
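The layering can be expressed as a small state classifier. In this sketch the two probes are passed in as commands so the logic is testable without a real device; in the script they are the actual ping -c 1 and ssh ... true calls, and `wd_state` itself is a hypothetical helper:

```shell
# Sketch: classify the WD's state from two layered probes.
# ping_cmd and ssh_cmd are injected so the logic can be exercised
# with true/false stand-ins instead of a live NAS.
wd_state() {
    ping_cmd="$1"
    ssh_cmd="$2"
    if ! $ping_cmd; then
        echo "down"      # no ICMP reply: likely mid-reboot or fsck
    elif ! $ssh_cmd; then
        echo "booting"   # pingable, but sshd not accepting sessions yet
    else
        echo "ready"
    fi
}
```

The "booting" state is the important one: the WD answers ping long before sshd is back, and restarting rsync in that window just fails again.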


5. Crashes are expected

If rsync exits non-zero because the WD rebooted:

  • the script waits
  • the WD comes back
  • rsync resumes automatically
  • partial files continue where they left off

No manual cleanup. No lost sleep.
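The retry behavior is a plain wait-until-ready loop. A sketch with a hypothetical `wait_until_ready` helper; unlike the script, which waits forever, this variant gives up after a bounded number of attempts so it can be demonstrated safely:

```shell
# Sketch: poll a readiness check at a fixed interval, up to max_tries.
# `check` stands in for the combined ping + ssh + source-path probe.
wait_until_ready() {
    check="$1"
    interval="$2"
    max_tries="$3"
    n=0
    while [ "$n" -lt "$max_tries" ]; do
        if $check; then
            return 0    # source is back; caller restarts rsync
        fi
        n=$((n + 1))
        sleep "$interval"
    done
    return 1            # still down after max_tries polls
}
```

Combined with rsync's --partial and --partial-dir, a successful return here means the next pass picks up mid-file rather than from byte zero.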


The rack, because it matters

As part of this migration I rebuilt the physical setup into a custom 10-inch rack:

  • QNAP
  • network gear
  • airflow-controlled fan modules
  • short cable runs
  • sane thermals

The irony is not lost on me that the software solution was more complex than the hardware one.
But once everything is in place, the system is quieter, cooler, and easier to reason about.


Lessons learned

  • Consumer NAS devices are not designed for sustained, hostile workloads
  • rsync is incredibly powerful, but brutally honest
  • Observability beats speed
  • Automation should assume failure, not hope against it

Most importantly:
If a system keeps failing in the same way, stop fighting it and design around that failure.


If this helps you

Feel free to adapt the script, tune the intervals, or simplify it for your own environment. The core idea is not the exact flags, but the mindset: make the transfer resilient, not fast.

Because eventually, the copy will finish.
And when it does, it’s deeply satisfying to realize you didn’t have to babysit it at all.