Upload objects

There are several ways to upload objects to R2. Which approach you choose depends on the size of your objects and your performance requirements.

Choose an upload method

Single upload (PUT)

  • Best for: Small to medium files (under ~100 MB)
  • Maximum object size: 5 GiB
  • Part size: N/A
  • Resumable: No — must restart the entire upload
  • Parallel upload: No
  • When to use: Quick, simple uploads of small objects

Multipart upload

  • Best for: Large files, or when you need parallelism and resumability
  • Maximum object size: 5 TiB (up to 10,000 parts)
  • Part size: 5 MiB – 5 GiB per part
  • Resumable: Yes — only failed parts need to be retried
  • Parallel upload: Yes — parts can be uploaded concurrently
  • When to use: Video, backups, datasets, or any file where reliability matters

Upload via dashboard

To upload objects to your bucket from the Cloudflare dashboard:

  1. In the Cloudflare dashboard, go to the R2 object storage page.

  2. Select your bucket.

  3. Select Upload.

  4. Drag and drop your file into the upload area, or select it from your computer.

You will receive a confirmation message after a successful upload.

Upload via Workers API

Use R2 bindings in Workers to upload objects server-side. Refer to Use R2 from Workers for instructions on setting up an R2 binding.
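
The examples below assume an R2 binding named MY_BUCKET pointing at a bucket called my-bucket. If you have not added a binding yet, a minimal sketch of the configuration in wrangler.toml looks like this:

wrangler.toml
# Expose the R2 bucket to the Worker as env.MY_BUCKET
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-bucket"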

Single upload

Use put() to upload an object in a single request. This is the simplest approach for small to medium objects.

JavaScript
export default {
  async fetch(request, env) {
    try {
      const object = await env.MY_BUCKET.put("image.png", request.body, {
        httpMetadata: {
          contentType: "image/png",
        },
      });

      if (object === null) {
        return new Response("Precondition failed or upload returned null", {
          status: 412,
        });
      }

      return Response.json({
        key: object.key,
        size: object.size,
        etag: object.etag,
      });
    } catch (err) {
      return new Response(`Upload failed: ${err}`, { status: 500 });
    }
  },
};

Multipart upload

Use createMultipartUpload() and resumeMultipartUpload() for large files or when you need to upload parts in parallel. Each part must be at least 5 MiB (except the last part).

JavaScript
export default {
  async fetch(request, env) {
    const key = "large-file.bin";

    // Create a new multipart upload
    const multipartUpload = await env.MY_BUCKET.createMultipartUpload(key);

    try {
      // In a real application, these would be actual data chunks.
      // Each part except the last must be at least 5 MiB.
      const firstChunk = new Uint8Array(5 * 1024 * 1024); // placeholder
      const secondChunk = new Uint8Array(1024); // placeholder

      const part1 = await multipartUpload.uploadPart(1, firstChunk);
      const part2 = await multipartUpload.uploadPart(2, secondChunk);

      // Complete the upload with all parts
      const object = await multipartUpload.complete([part1, part2]);

      return Response.json({
        key: object.key,
        etag: object.httpEtag,
      });
    } catch (err) {
      // Abort on failure so incomplete uploads do not count against storage
      await multipartUpload.abort();
      return new Response(`Multipart upload failed: ${err}`, { status: 500 });
    }
  },
};

In most cases, the multipart state (the uploadId and uploaded part ETags) is tracked by the client sending requests to your Worker. The following example exposes an HTTP API that a client application can call to create, upload parts for, and complete a multipart upload:

JavaScript
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);
    const action = url.searchParams.get("action");

    if (!key || !action) {
      return new Response("Missing key or action", { status: 400 });
    }

    switch (action) {
      // Step 1: Client calls POST /<key>?action=mpu-create
      case "mpu-create": {
        const upload = await env.MY_BUCKET.createMultipartUpload(key);
        return Response.json({ key: upload.key, uploadId: upload.uploadId });
      }

      // Step 2: Client calls PUT /<key>?action=mpu-uploadpart&uploadId=...&partNumber=...
      case "mpu-uploadpart": {
        const uploadId = url.searchParams.get("uploadId");
        const partNumber = Number(url.searchParams.get("partNumber"));
        if (!uploadId || !partNumber || !request.body) {
          return new Response("Missing uploadId, partNumber, or body", {
            status: 400,
          });
        }

        const upload = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);
        try {
          const part = await upload.uploadPart(partNumber, request.body);
          return Response.json(part);
        } catch (err) {
          return new Response(String(err), { status: 400 });
        }
      }

      // Step 3: Client calls POST /<key>?action=mpu-complete&uploadId=...
      case "mpu-complete": {
        const uploadId = url.searchParams.get("uploadId");
        if (!uploadId) {
          return new Response("Missing uploadId", { status: 400 });
        }

        const upload = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);
        const body = await request.json();
        try {
          const object = await upload.complete(body.parts);
          return new Response(null, {
            headers: { etag: object.httpEtag },
          });
        } catch (err) {
          return new Response(String(err), { status: 400 });
        }
      }

      // Abort an in-progress upload
      case "mpu-abort": {
        const uploadId = url.searchParams.get("uploadId");
        if (!uploadId) {
          return new Response("Missing uploadId", { status: 400 });
        }

        const upload = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);
        try {
          await upload.abort();
        } catch (err) {
          return new Response(String(err), { status: 400 });
        }
        return new Response(null, { status: 204 });
      }

      default:
        return new Response(`Unknown action: ${action}`, { status: 400 });
    }
  },
};
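
The Worker above leaves part tracking to the caller. As a rough sketch of that client side (not part of the Worker), the following script creates the upload, sends each slice of a file as a part, and completes the upload with the collected part descriptors. The WORKER_URL placeholder and the 10 MiB part size are illustrative assumptions:

JavaScript
// Hypothetical client for the Worker above. Assumes `file` is a Blob or File.
const WORKER_URL = "https://<YOUR_WORKER>.workers.dev";
const PART_SIZE = 10 * 1024 * 1024; // 10 MiB; every part except the last must be at least 5 MiB

async function multipartUpload(file, key) {
  // Step 1: Create the upload and record its uploadId
  const { uploadId } = await fetch(`${WORKER_URL}/${key}?action=mpu-create`, {
    method: "POST",
  }).then((res) => res.json());

  // Step 2: Upload each slice as a part and collect the returned part descriptors
  const parts = [];
  for (let start = 0, partNumber = 1; start < file.size; start += PART_SIZE, partNumber++) {
    const response = await fetch(
      `${WORKER_URL}/${key}?action=mpu-uploadpart&uploadId=${uploadId}&partNumber=${partNumber}`,
      { method: "PUT", body: file.slice(start, start + PART_SIZE) },
    );
    parts.push(await response.json());
  }

  // Step 3: Complete the upload with the parts collected above
  await fetch(`${WORKER_URL}/${key}?action=mpu-complete&uploadId=${uploadId}`, {
    method: "POST",
    body: JSON.stringify({ parts }),
  });
}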

For the complete Workers API reference, refer to Workers API reference.

Presigned URLs (Workers)

When you need clients (browsers, mobile apps) to upload directly to R2 without proxying through your Worker, generate a presigned URL server-side and hand it to the client:

JavaScript
import { AwsClient } from "aws4fetch";

export default {
  async fetch(request, env) {
    const r2 = new AwsClient({
      accessKeyId: env.R2_ACCESS_KEY_ID,
      secretAccessKey: env.R2_SECRET_ACCESS_KEY,
    });

    // Generate a presigned PUT URL valid for 1 hour
    const url = new URL(
      "https://<ACCOUNT_ID>.r2.cloudflarestorage.com/my-bucket/image.png",
    );
    url.searchParams.set("X-Amz-Expires", "3600");

    const signed = await r2.sign(new Request(url, { method: "PUT" }), {
      aws: { signQuery: true },
    });

    // Return the signed URL to the client — they can PUT directly to R2
    return Response.json({ url: signed.url });
  },
};
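
The client can then upload straight to R2 with that URL before it expires. For example, with curl (replace <SIGNED_URL> with the URL returned by the Worker):

Terminal window
# -T performs an HTTP PUT of the file to the signed URL
curl -T image.png "<SIGNED_URL>"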

For full presigned URL documentation including GET, PUT, and security best practices, refer to Presigned URLs.

Upload via S3 API

Use S3-compatible SDKs to upload objects. You will need your account ID and R2 API token.

Single upload

TypeScript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: "<ACCESS_KEY_ID>",
    secretAccessKey: "<SECRET_ACCESS_KEY>",
  },
});

const fileContent = await readFile("./image.png");

const response = await S3.send(
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "image.png",
    Body: fileContent,
    ContentType: "image/png",
  }),
);

console.log(`Uploaded successfully. ETag: ${response.ETag}`);

Multipart upload

Most S3 SDKs handle multipart uploads automatically when the file exceeds a configurable threshold. The examples below show both automatic (high-level) and manual (low-level) approaches.

Automatic multipart upload

The SDK splits the file and uploads parts in parallel.

TypeScript
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: "<ACCESS_KEY_ID>",
    secretAccessKey: "<SECRET_ACCESS_KEY>",
  },
});

const upload = new Upload({
  client: S3,
  params: {
    Bucket: "my-bucket",
    Key: "large-file.bin",
    Body: createReadStream("./large-file.bin"),
  },
  queueSize: 4, // Number of parts uploaded in parallel (default: 4)
  leavePartsOnError: false, // Abort the multipart upload if any part fails
});

upload.on("httpUploadProgress", (progress) => {
  console.log(`Uploaded ${progress.loaded ?? 0} bytes`);
});

const result = await upload.done();
console.log(`Upload complete. ETag: ${result.ETag}`);

Manual multipart upload

Use the low-level API when you need full control over part sizes or upload order.

TypeScript
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand,
  type CompletedPart,
} from "@aws-sdk/client-s3";
import { createReadStream, statSync } from "node:fs";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: "<ACCESS_KEY_ID>",
    secretAccessKey: "<SECRET_ACCESS_KEY>",
  },
});

const bucket = "my-bucket";
const key = "large-file.bin";
const partSize = 10 * 1024 * 1024; // 10 MiB per part

// Step 1: Create the multipart upload
const { UploadId } = await S3.send(
  new CreateMultipartUploadCommand({ Bucket: bucket, Key: key }),
);

try {
  const fileSize = statSync("./large-file.bin").size;
  const partCount = Math.ceil(fileSize / partSize);
  const parts: CompletedPart[] = [];

  // Step 2: Upload each part
  for (let i = 0; i < partCount; i++) {
    const start = i * partSize;
    const end = Math.min(start + partSize, fileSize);

    const { ETag } = await S3.send(
      new UploadPartCommand({
        Bucket: bucket,
        Key: key,
        UploadId,
        PartNumber: i + 1,
        Body: createReadStream("./large-file.bin", { start, end: end - 1 }),
        ContentLength: end - start,
      }),
    );
    parts.push({ PartNumber: i + 1, ETag });
  }

  // Step 3: Complete the upload
  await S3.send(
    new CompleteMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      UploadId,
      MultipartUpload: { Parts: parts },
    }),
  );
  console.log("Multipart upload complete.");
} catch (err) {
  // Abort on failure to clean up incomplete parts
  try {
    await S3.send(
      new AbortMultipartUploadCommand({ Bucket: bucket, Key: key, UploadId }),
    );
  } catch (_abortErr) {
    // Best-effort cleanup — the original error is more important
  }
  throw err;
}

Presigned URLs (S3 API)

For client-side uploads where users upload directly to R2 without going through your server, generate a presigned PUT URL. Your server creates the URL and the client uploads to it — no API credentials are exposed to the client.

TypeScript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: "<ACCESS_KEY_ID>",
    secretAccessKey: "<SECRET_ACCESS_KEY>",
  },
});

const presignedUrl = await getSignedUrl(
  S3,
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "user-upload.png",
    ContentType: "image/png",
  }),
  { expiresIn: 3600 }, // Valid for 1 hour
);

console.log(presignedUrl);
// Return presignedUrl to the client
For full presigned URL documentation, refer to Presigned URLs.

Refer to R2's S3 API documentation for all supported S3 API methods.

Upload via CLI

Rclone

Rclone is a command-line tool for managing files on cloud storage. Rclone works well for uploading multiple files from your local machine or copying data from other cloud storage providers.

To use rclone, install it on your machine by following the official documentation: Install rclone.

Upload files with the rclone copy command:

Terminal window
# Upload a single file
rclone copy /path/to/local/image.png r2:bucket_name
# Upload everything in a directory
rclone copy /path/to/local/folder r2:bucket_name

Verify the upload with rclone ls:

Terminal window
rclone ls r2:bucket_name

For more information, refer to our rclone example.

Wrangler

Use Wrangler to upload objects. Run the r2 object put command:

Terminal window
wrangler r2 object put test-bucket/image.png --file=image.png

You can set the Content-Type (MIME type), Content-Disposition, Cache-Control, and other HTTP header metadata through optional flags.
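
For example, to set a MIME type and cache policy at upload time (run wrangler r2 object put --help to confirm the flags available in your Wrangler version):

Terminal window
wrangler r2 object put test-bucket/image.png --file=image.png --content-type=image/png --cache-control="public, max-age=86400"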

Multipart upload details

Part size limits

  • Minimum part size: 5 MiB (except for the last part)
  • Maximum part size: 5 GiB
  • Maximum number of parts: 10,000
  • All parts except the last must be the same size

Incomplete upload lifecycles

Incomplete multipart uploads are automatically aborted after 7 days by default. You can change this by configuring a custom lifecycle policy.

ETags

ETags for objects uploaded via multipart differ from those uploaded with a single PUT. The ETag of each part is the MD5 hash of that part's contents. The ETag of the completed multipart object is the MD5 hash of the concatenated binary MD5 digests of all the parts, followed by a hyphen and the number of parts.

For example, if a two-part upload has part ETags bce6bf66aeb76c7040fdd5f4eccb78e6 and 8165449fc15bbf43d3b674595cbcc406, the completed object's ETag will be f77dc0eecdebcd774a2a22cb393ad2ff-2.
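
As an illustration, the following Node script derives the composite ETag from the part ETags using the rule above; the expected output is the value from the example:

JavaScript
import { createHash } from "node:crypto";

// Hex MD5 ETags of each uploaded part, in part-number order (from the example above)
const partEtags = [
  "bce6bf66aeb76c7040fdd5f4eccb78e6",
  "8165449fc15bbf43d3b674595cbcc406",
];

// Concatenate the parts' binary MD5 digests, MD5-hash the result,
// then append "-<number of parts>"
const concatenated = Buffer.concat(
  partEtags.map((etag) => Buffer.from(etag, "hex")),
);
const compositeEtag = `${createHash("md5").update(concatenated).digest("hex")}-${partEtags.length}`;

console.log(compositeEtag); // f77dc0eecdebcd774a2a22cb393ad2ff-2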