# Digital Archive Drive — Inventory & Google Drive Upload Plan

Date: 2026-02-17 · Author: Claude (for Joe) · Status: Ready for upload
## Drive Details
| Detail | Value |
|---|---|
| Volume name | Digital Archive |
| Hardware | Seagate BUP Slim (ST2000LM007-1R8174) |
| Capacity | 1.82 TB |
| Used space | ~800 GB (286M free blocks of 488M total, 4K block size) |
| Filesystem | HFS+ (journaled, Mac-formatted) |
| Files | 197,608 |
| Folders | 33,324 |
| Created | Oct 17, 2024 |
| Last modified | Oct 19, 2024 |
| Condition | Minor volume header corruption detected by fsck.hfsplus — mounts fine on macOS but Linux userspace HFS+ tools couldn't fully parse the extents tree |
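The used-space figure can be sanity-checked from the block counts in the table. A quick sketch (decimal GB, treating the "286M of 488M" block counts as round millions):

```sh
# Used space = (total blocks - free blocks) × block size
total=488000000   # total 4 KiB blocks (approx., from fsck output)
free=286000000    # free 4 KiB blocks (approx.)
bs=4096           # bytes per block
echo "$(( (total - free) * bs / 1000000000 )) GB used"   # → 827 GB used
```

That lands at roughly 827 GB decimal, consistent with the ~800 GB figure above given the rounded block counts.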
## Discovery Notes
- Drive was plugged into Jetson Orin Nano (Prometheus) via USB
- USB host controller (`tegra-xusb`) had crashed — required a full reboot to detect the drive
- Showed up as `/dev/sda` (USB 3.0) after reboot
- The Tegra kernel (5.15.148-tegra) does not include the `hfsplus` kernel module
- Attempted to build the module from source, but the kernel headers were incomplete
- Built `hfsfuse` (userspace HFS+ FUSE driver) from source — it could read the volume header but not the catalog tree, due to extents overflow corruption
- `fsck.hfsplus` confirmed: "Volume header needs minor repair" (multiple issues); the volume is "corrupt and needs to be repaired"
- Recommendation: plug into a Mac (native HFS+ support), back up to Google Drive, then reformat as ext4 for Jetson use
## Google Drive Upload Plan

### Prerequisites
- Google One Premium ($9.99/mo) or higher — provides 2 TB of storage, sufficient for ~800 GB of data
- `rclone` installed on the Mac
### Step 1: Install rclone

```sh
brew install rclone
```
### Step 2: Configure the Google Drive remote

```sh
rclone config
```

- New remote → name it `gdrive`
- Storage type: Google Drive
- Scope: Full access
- Auto config: Yes (opens a browser for Google OAuth)
- Leave all other fields default/blank
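If the walkthrough succeeds, the resulting entry in `~/.config/rclone/rclone.conf` should look roughly like this (the token values here are placeholders — rclone fills them in during the OAuth step):

```
[gdrive]
type = drive
scope = drive
token = {"access_token":"...","refresh_token":"...","expiry":"..."}
```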
### Step 3: Run the upload

```sh
rclone copy "/Volumes/Digital Archive" gdrive:"Digital Archive" \
  --progress \
  --transfers 8 \
  --checkers 16 \
  --drive-chunk-size 64M \
  --log-file ~/rclone-upload.log \
  --log-level INFO \
  --retries 10 \
  --retries-sleep 10s \
  --low-level-retries 10 \
  --stats 30s
```
Flags explained:
- `copy` — upload only; never deletes anything at the destination
- `--transfers 8` — 8 parallel file uploads
- `--checkers 16` — 16 parallel threads checking for already-uploaded files
- `--drive-chunk-size 64M` — fewer API calls for large files (buffers up to ~512 MB RAM across the 8 transfers)
- `--retries 10` — retry the whole copy up to 10 times, waiting 10 s between attempts (`--retries-sleep 10s`); `--low-level-retries 10` handles transient per-request errors
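The ~512 MB RAM figure is a rough upper bound from each parallel transfer buffering one chunk at a time:

```sh
# Peak upload buffer ≈ parallel transfers × chunk size
transfers=8
chunk_mb=64
echo "$(( transfers * chunk_mb )) MB"   # → 512 MB
```

Actual usage varies with file sizes; small files never allocate a full chunk.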
### Step 4: Resume if interrupted

Just re-run the exact same command. `rclone copy` skips files that are already uploaded (matched by size and modification time).
### Step 5: Verify

```sh
rclone check "/Volumes/Digital Archive" gdrive:"Digital Archive" --log-file ~/rclone-verify.log
```
## Important Limits
| Limit | Value |
|---|---|
| Google Drive daily upload cap | ~750 GB/day |
| Minimum time for 800 GB | ~2 days |
| Max file size | 5 TB |
| API rate limit | 12,000 queries per 100 seconds |
rclone handles all rate limits automatically — it backs off when throttled and retries.
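The ~2-day minimum in the table is just ceiling division of the data size by the daily cap — the first day uploads 750 GB and the ~50 GB remainder spills into a second day:

```sh
# Days needed = ceil(total / daily cap), via integer ceiling division
total_gb=800
cap_gb=750
echo "$(( (total_gb + cap_gb - 1) / cap_gb )) days"   # → 2 days
```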
## Optional: Create your own Google API client ID
For large transfers, creating your own API client ID gives you a dedicated rate limit quota instead of sharing rclone's default. Instructions: https://rclone.org/drive/#making-your-own-client-id
## Next Steps (After Upload)
- Verify the upload completed with `rclone check`
- Reformat the drive as ext4: `sudo mkfs.ext4 /dev/sda1`
- Mount on the Jetson at a chosen path (e.g., `/ssd2` or `/mnt/seagate`)
- Add to `/etc/fstab` for auto-mount
- Use for additional Jetson storage (ML datasets, model weights, etc.)
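The `/etc/fstab` line could look something like this sketch — the UUID is a placeholder (read the real one with `blkid /dev/sda1` after formatting), and `/ssd2` is one of the mount points suggested above:

```
# <device>                                 <mount>  <fs>   <options>        <dump> <pass>
UUID=1234abcd-0000-0000-0000-000000000000  /ssd2    ext4   defaults,nofail  0      2
```

`nofail` keeps the Jetson booting normally when the USB drive happens to be unplugged.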