How Nextcloud handles large file uploads under the hood, and how to implement chunked uploads manually when the desktop client isn't an option.
Nextcloud's own clients split large files using a WebDAV-based chunked upload API, and for programmatic uploads the simplest approach is that same Nextcloud Chunking API — available on any Nextcloud instance, no extra app required.
Uploading files over ~1GB through a reverse proxy (Nginx, Caddy) will hit body-size limits — Nginx's client_max_body_size defaults to just 1MB. Chunked upload sidesteps this by splitting the transfer into pieces small enough to pass through.
The protocol is three WebDAV calls:

1. MKCOL an upload session directory
2. PUT each chunk as a numbered file inside it
3. MOVE the session's .file to the final destination — Nextcloud assembles the chunks server-side
```bash
BASE="https://cloud.example.com/remote.php/dav"
USER="karanveer"
PASS="your-app-password"
FILE="large-archive.tar.gz"
CHUNK_SIZE=$((100 * 1024 * 1024))  # 100MB chunks

# 1. Create the upload session directory
SESSION="$BASE/uploads/$USER/$(uuidgen)"
curl -u "$USER:$PASS" -X MKCOL "$SESSION"

# 2. Split the file and upload each chunk under a zero-padded index
split -b "$CHUNK_SIZE" "$FILE" /tmp/chunk_
i=0
for chunk in /tmp/chunk_*; do
  printf -v idx "%010d" "$i"
  curl -u "$USER:$PASS" -T "$chunk" "$SESSION/$idx"
  ((i++))
done

# 3. Assemble: MOVE the session's .file to the final destination
DEST="$BASE/files/$USER/backups/$FILE"
curl -u "$USER:$PASS" -X MOVE \
  -H "Destination: $DEST" \
  "$SESSION/.file"
```

On the proxy side, two Nginx directives keep large uploads flowing:

```nginx
client_max_body_size 0;       # unlimited, let Nextcloud handle it
proxy_request_buffering off;  # stream chunks straight to the backend
```

`proxy_request_buffering off` is critical for streaming chunks — without it, Nginx buffers each entire chunk to disk before forwarding it.
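Because chunks live in a persistent session directory, an interrupted upload can be resumed: list what already arrived, then continue from there. A sketch, assuming the `SESSION`, `USER`, and `PASS` variables from the script above — `list_uploaded_chunks` and `next_chunk_index` are hypothetical helpers, not part of any Nextcloud API:

```bash
# List chunk names already in the session directory (WebDAV PROPFIND, depth 1).
# The grep crudely pulls the zero-padded 10-digit names out of the XML hrefs.
list_uploaded_chunks() {
  curl -s -u "$USER:$PASS" -X PROPFIND -H "Depth: 1" "$SESSION" \
    | grep -oE '/[0-9]{10}<' | tr -d '/<'
}

# Pure helper: read chunk names on stdin, print the next index to upload.
next_chunk_index() {
  local max=-1 name
  while read -r name; do
    [[ "$name" =~ ^[0-9]+$ ]] || continue
    (( 10#$name > max )) && max=$((10#$name))
  done
  echo $((max + 1))
}
```

To resume, seed the upload loop with `i=$(list_uploaded_chunks | next_chunk_index)` instead of `i=0` (this assumes the highest-numbered chunk finished uploading completely — re-PUT it if in doubt; re-uploading a chunk is harmless).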
If you're syncing from another machine, rclone handles chunking automatically:

```bash
rclone copy /local/path nextcloud:backups/ \
  --transfers=4 \
  --webdav-nextcloud-chunk-size=128Mi
```

rclone's Nextcloud support (the webdav backend with `vendor = nextcloud`) uses WebDAV chunking internally. `--transfers` sets how many files upload in parallel — 4 is usually the sweet spot before you hit diminishing returns on a single connection.
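The remote itself is just rclone's webdav backend pointed at the DAV endpoint. A sketch of the config — the remote name, URL, and user are placeholders, and `pass` must be the obscured form produced by `rclone obscure`:

```ini
# ~/.config/rclone/rclone.conf
[nextcloud]
type = webdav
url = https://cloud.example.com/remote.php/dav/files/karanveer
vendor = nextcloud
user = karanveer
pass = ...
```

Setting `vendor = nextcloud` is what switches the webdav backend into Nextcloud mode, including chunked uploads.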