Changelog
- Server-side Encryption with Customer-Provided Keys is now available to all users via the Workers and S3-compatible APIs.
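
  A minimal sketch of SSE-C over the S3-compatible API using the AWS SDK for JavaScript v3. The endpoint, bucket name, and credentials are placeholders, and exact key-encoding expectations can vary by SDK version, so treat this as a sketch rather than a drop-in:

  ```ts
  import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
  import { createHash, randomBytes } from "node:crypto";

  const s3 = new S3Client({
    region: "auto",
    endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    credentials: { accessKeyId: "<ACCESS_KEY_ID>", secretAccessKey: "<SECRET_ACCESS_KEY>" },
  });

  // Customer-provided 256-bit key: R2 never stores it, so keep it somewhere safe.
  const key = randomBytes(32);
  const keyB64 = key.toString("base64");
  const keyMd5 = createHash("md5").update(key).digest("base64");

  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "secret.txt",
    Body: "hello",
    SSECustomerAlgorithm: "AES256",
    SSECustomerKey: keyB64,
    SSECustomerKeyMD5: keyMd5,
  }));

  // The same key material must be supplied on every read.
  await s3.send(new GetObjectCommand({
    Bucket: "my-bucket",
    Key: "secret.txt",
    SSECustomerAlgorithm: "AES256",
    SSECustomerKey: keyB64,
    SSECustomerKeyMD5: keyMd5,
  }));
  ```
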
- Sippy can now be enabled on buckets in jurisdictions (e.g., EU, FedRAMP).
- Fixed an issue with Sippy where GET/HEAD requests to objects with certain special characters would result in error responses.
- Oceania (OC) is now available as an R2 region.
- The default maximum number of buckets per account is now 1 million. If you need more than 1 million buckets, contact Cloudflare Support.
- Public buckets accessible via custom domain now support Smart Tiered Cache.
- R2 `bucket lifecycle` command added to Wrangler. Supports listing, adding, and removing object lifecycle rules.
- R2 `bucket info` command added to Wrangler. Displays the location of a bucket and common metrics.
- R2 `bucket dev-url` command added to Wrangler. Supports enabling, disabling, and getting the status of a bucket's r2.dev public access URL.
- R2 `bucket domain` command added to Wrangler. Supports listing, adding, removing, and updating R2 bucket custom domains.
- Add `minTLS` to the response of the list custom domains endpoint.
- Add get custom domain endpoint.
- Event notifications can now be configured for R2 buckets in jurisdictions (e.g., EU, FedRAMP).
- Event notifications for R2 are now generally available. Event notifications now support higher throughput (up to 5,000 messages per second per Queue), can be configured in the dashboard and Wrangler, and support lifecycle deletes.
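
  Event notifications are delivered to a Cloudflare Queue and consumed by a Worker. A minimal consumer sketch is below; the queue binding is configured in Wrangler, and the precise message body shape is not assumed here; the handler simply logs whatever the notification carries:

  ```ts
  // Consumer Worker for a queue that receives R2 event notifications.
  export default {
    async queue(batch: MessageBatch, _env: unknown): Promise<void> {
      for (const msg of batch.messages) {
        // Each message body describes one bucket event (for example an object
        // create, delete, or lifecycle delete); log it and acknowledge.
        console.log("R2 event notification:", JSON.stringify(msg.body));
        msg.ack();
      }
    },
  };
  ```
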
- Add the ability to set and update minimum TLS version for R2 bucket custom domains.
- Added support for configuring R2 bucket custom domains via API.
- Sippy is now generally available. Metrics for ongoing migrations can now be found in the dashboard or via the GraphQL analytics API.
- Added migration log for Super Slurper to the migration summary in the dashboard.
- Super Slurper now supports migrating objects up to 1TB in size.
- Added support for Infrequent Access storage class (beta).
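
  A sketch of writing an object into the new storage class from a Worker, assuming a bucket binding named `MY_BUCKET`; the `"InfrequentAccess"` option value reflects the beta and may change:

  ```ts
  interface Env {
    MY_BUCKET: R2Bucket; // R2 bucket binding configured in wrangler.toml
  }

  export default {
    async fetch(request: Request, env: Env): Promise<Response> {
      // Store the uploaded body directly in the Infrequent Access storage class (beta).
      await env.MY_BUCKET.put("archive/report.csv", request.body, {
        storageClass: "InfrequentAccess",
      });
      return new Response("Stored archive/report.csv in Infrequent Access");
    },
  };
  ```
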
- Added create temporary access tokens endpoint.
- Event notifications for R2 are now available as an open beta.
- Super Slurper now supports migration from Google Cloud Storage.
- Updated GetBucket endpoint: now fetches by `bucket_name` instead of `bucket_id`.
- Multipart ETags are now MD5 hashes.
- Fixed a bug where calling GetBucket on a non-existent bucket would return a 500 instead of a 404.
- Improved S3 compatibility for ListObjectsV1: `NextMarker` is now only set when `IsTruncated` is true.
- The R2 worker bindings now support parsing conditional headers with multiple etags. These etags can now be strong, weak or a wildcard. Previously the bindings only accepted headers containing a single strong etag.
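
  For example, a Worker can now forward the incoming request's conditional headers straight to the binding, including `If-Match`/`If-None-Match` values that list several ETags. A sketch, assuming a bucket binding named `MY_BUCKET`:

  ```ts
  interface Env {
    MY_BUCKET: R2Bucket; // R2 bucket binding configured in wrangler.toml
  }

  export default {
    async fetch(request: Request, env: Env): Promise<Response> {
      // onlyIf accepts the request's Headers directly; the conditional headers
      // may now contain several ETags (strong, weak, or "*").
      const object = await env.MY_BUCKET.get("index.html", {
        onlyIf: request.headers,
      });
      if (object === null) {
        return new Response("Not Found", { status: 404 });
      }
      if (!("body" in object)) {
        // Precondition not satisfied: the object exists but no body was returned.
        return new Response(null, { status: 304 });
      }
      return new Response(object.body);
    },
  };
  ```
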
- S3 `PutObject` now supports SHA-256 and SHA-1 checksums. These were already supported by the R2 Workers bindings.
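
  A sketch of supplying a precomputed SHA-256 checksum through the S3-compatible API with the AWS SDK for JavaScript v3 (the digest is passed base64-encoded; endpoint, bucket, and credentials are placeholders):

  ```ts
  import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
  import { createHash } from "node:crypto";

  const s3 = new S3Client({
    region: "auto",
    endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    credentials: { accessKeyId: "<ACCESS_KEY_ID>", secretAccessKey: "<SECRET_ACCESS_KEY>" },
  });

  const body = "hello world";
  // Base64-encoded SHA-256 of the payload; R2 verifies the upload against it.
  const sha256 = createHash("sha256").update(body).digest("base64");

  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "hello.txt",
    Body: body,
    ChecksumSHA256: sha256,
  }));
  ```
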
- `CopyObject` in the S3-compatible API now supports Cloudflare-specific headers which allow the copy operation to be conditional on the state of the destination object.
- GetBucket is now available for use through the Cloudflare API.
- Location hints can now be set when creating a bucket, both through the S3 API, and the dashboard.
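
  A sketch of passing a location hint when creating a bucket over the S3 API, assuming the hint is supplied through the `CreateBucket` `LocationConstraint` field and that `weur` (Western Europe) is a valid hint value; hints influence placement but are not a guarantee:

  ```ts
  import { S3Client, CreateBucketCommand, type BucketLocationConstraint } from "@aws-sdk/client-s3";

  const s3 = new S3Client({
    region: "auto",
    endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    credentials: { accessKeyId: "<ACCESS_KEY_ID>", secretAccessKey: "<SECRET_ACCESS_KEY>" },
  });

  // R2 location hints are not part of the stock AWS enum, hence the cast.
  const hint = "weur" as unknown as BucketLocationConstraint;

  await s3.send(new CreateBucketCommand({
    Bucket: "my-new-bucket",
    CreateBucketConfiguration: { LocationConstraint: hint },
  }));
  ```
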
- The ListParts API has been implemented and is available for use.
- HTTP/2 is now enabled by default for new custom domains linked to R2 buckets.
- Object Lifecycles are now available for use.
- Bug fix: Requests to public buckets will now return the `Content-Encoding` header for gzip files when `Accept-Encoding: gzip` is used.
- Requests with the header `x-amz-acl: public-read` are no longer rejected.
- Fixed issues with wildcard CORS rules and presigned URLs.
- Fixed an issue where `ListObjects` would time out during delimited listing of unicode-normalized keys.
- The S3 API's `PutBucketCors` now rejects requests with unknown keys in the XML body.
- Signing additional headers no longer breaks CORS preflight requests for presigned URLs.
- Fixed a bug where CORS configuration was not being applied to the S3 endpoint.
- No longer render the `Access-Control-Expose-Headers` response header if `ExposeHeader` is not defined.
- Public buckets will no longer return the `Content-Range` response header unless the response is partial.
- Fixed CORS rendering for the S3 `HeadObject` operation.
- Fixed a bug where having no matching CORS configuration could result in a `403` response.
- Temporarily disabled copying objects that were created with multipart uploads.
- Fixed a bug in the Workers bindings where an internal error was being returned for malformed ranged `.get()` requests.
- CORS preflight responses and adding CORS headers for other responses is now implemented for S3 and public buckets. Currently, the only way to configure CORS is via the S3 API.
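
  A sketch of setting a CORS rule over the S3 API with the AWS SDK for JavaScript v3; the origin, bucket name, endpoint, and credentials are placeholders:

  ```ts
  import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

  const s3 = new S3Client({
    region: "auto",
    endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    credentials: { accessKeyId: "<ACCESS_KEY_ID>", secretAccessKey: "<SECRET_ACCESS_KEY>" },
  });

  // Allow simple GETs from a single origin; adjust the rules to your needs.
  await s3.send(new PutBucketCorsCommand({
    Bucket: "my-bucket",
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ["https://example.com"],
          AllowedMethods: ["GET"],
          AllowedHeaders: ["*"],
          MaxAgeSeconds: 3600,
        },
      ],
    },
  }));
  ```
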
- Fixup for bindings list truncation to work more correctly when listing keys with custom metadata that have `"` or when some keys/values contain certain multi-byte UTF-8 values.
- The S3 `GetObject` operation now only returns `Content-Range` in response to a ranged request.
- The R2 `put()` binding options can now be given an `onlyIf` field, similar to `get()`, that performs a conditional upload.
- The R2 `delete()` binding now supports deleting multiple keys at once.
- The R2 `put()` binding now supports user-specified SHA-1, SHA-256, SHA-384, and SHA-512 checksums in options.
- User-specified object checksums will now be available in the R2 `get()` and `head()` bindings response. MD5 is included by default for non-multipart uploaded objects.
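
  A sketch combining the binding additions above in a single Worker, assuming a bucket binding named `MY_BUCKET`; the `x-expected-sha256` request header is a hypothetical way for a client to supply its checksum:

  ```ts
  interface Env {
    MY_BUCKET: R2Bucket; // R2 bucket binding configured in wrangler.toml
  }

  export default {
    async fetch(request: Request, env: Env): Promise<Response> {
      // Conditional upload: only overwrite if the stored ETag is unchanged,
      // and ask R2 to verify a user-specified SHA-256 checksum.
      const existing = await env.MY_BUCKET.head("config.json");
      await env.MY_BUCKET.put("config.json", request.body, {
        onlyIf: { etagMatches: existing?.etag },
        sha256: request.headers.get("x-expected-sha256") ?? undefined,
      });

      // Checksums recorded at upload time are exposed on get()/head().
      const object = await env.MY_BUCKET.get("config.json");
      console.log("sha256 present:", object?.checksums.sha256 !== undefined);

      // delete() now accepts multiple keys in one call.
      await env.MY_BUCKET.delete(["old/a.json", "old/b.json"]);

      return new Response("ok");
    },
  };
  ```
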
- The S3 `DeleteObjects` operation no longer trims the space from around the keys before deleting. This would result in files with leading/trailing spaces not being able to be deleted. Additionally, if an object with the trimmed key existed, it would be deleted instead. The S3 `DeleteObject` operation was not affected by this.
- Fixed presigned URL support for the S3 `ListBuckets` and `ListObjects` operations.
- Fixed S3 conditionals to work properly when provided the `LastModified` date of the last upload (bindings fixes will come in the next release). `If-Match`/`If-None-Match` headers now support arrays of ETags, weak ETags, and the wildcard (`*`) as per the HTTP standard and undocumented AWS S3 behavior.
- Fixed an S3 compatibility issue for error responses with the MinIO .NET SDK and any other tooling that expects no `xmlns` namespace attribute on the top-level `Error` tag.
- List continuation tokens issued prior to 2022-07-01 are no longer accepted and must be obtained again through a new `list` operation.
- The `list()` binding will now correctly return a smaller limit if too much data would otherwise be returned (previously this would return an `Internal Error`).
- Improvements to 500s: we now convert errors, so things that were previously concurrency problems for some operations should now be `TooMuchConcurrency` instead of `InternalError`. We've also reduced the rate of 500s through internal improvements.
- `ListMultipartUpload` correctly encodes the returned `Key` if the `encoding-type` is specified.
- S3 XML documents sent to R2 that have an XML declaration are no longer rejected with `400 Bad Request`/`MalformedXML`.
- Minor S3 XML compatibility fix impacting Arq Backup on Windows only (not the Mac version). The response now contains the XML declaration tag prefix, and the `xmlns` attribute is present on all top-level tags in the response.
- Beta `ListMultipartUploads` support.
- Support for the `r2_list_honor_include` compat flag coming in an upcoming runtime release (default behavior as of the 2022-07-14 compat date). Without that compat flag/date, `list` will continue to function implicitly as `include: ['httpMetadata', 'customMetadata']` regardless of what you specify.
- `cf-create-bucket-if-missing` can be set on a `PutObject`/`CreateMultipartUpload` request to implicitly create the bucket if it does not exist.
- Fixed S3 compatibility with the MinIO client's spec-non-compliant XML for publishing multipart uploads. Any leading and trailing quotes in `CompleteMultipartUpload` are now optional and ignored, as this seems to be the actual non-standard behavior AWS implements.
- Unsupported search parameters to `ListObjects`/`ListObjectsV2` are now rejected with `501 Not Implemented`.
- Fixes for listing:
  - Fix listing behavior when the number of files within a folder exceeds the limit (you'd end up seeing a `CommonPrefix` for that large folder N times, where N = number of children within the `CommonPrefix` / limit).
  - Fix a corner case where listing could cause objects sharing the base name of a "folder" to be skipped.
  - Fix listing over some files that shared a certain common prefix.
- `DeleteObjects` can now handle 1000 objects at a time.
- The S3 `CreateBucket` request can specify `x-amz-bucket-object-lock-enabled` with a value of `false` and not have the request rejected with a `NotImplemented` error. A value of `true` will continue to be rejected, as R2 does not yet support object locks.
- Fixed a bug with the S3 API `ListObjectsV2` operation not returning empty folder(s) as common prefixes when using delimiters.
- The S3 API `ListObjectsV2` `KeyCount` parameter now correctly returns the sum of keys and common prefixes rather than just the keys.
- Invalid cursors for list operations no longer fail with an `InternalError` and now return the appropriate error message.
- Fixed a bug where the S3 API's `PutObject` or the `.put()` binding could fail but still show the bucket upload as successful.
- If conditional headers are provided to S3 API `UploadObject` or `CreateMultipartUpload` operations, and the object exists, a `412 Precondition Failed` status code will be returned if these checks are not met.
- Added support for S3 virtual-hosted style paths, such as `<BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com`, instead of path-based routing (`<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET>`).
- Implemented `GetBucketLocation` for compatibility with external tools. This will always return a `LocationConstraint` of `auto`.
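
  A sketch covering both items above: an S3 client using the default virtual-hosted addressing against the R2 endpoint (set `forcePathStyle: true` to keep path-based routing), plus a `GetBucketLocation` call, which R2 always answers with `auto`. Endpoint, bucket, and credentials are placeholders:

  ```ts
  import { S3Client, GetBucketLocationCommand } from "@aws-sdk/client-s3";

  // With the default (virtual-hosted) addressing, requests resolve to
  // <BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com.
  const s3 = new S3Client({
    region: "auto",
    endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    credentials: { accessKeyId: "<ACCESS_KEY_ID>", secretAccessKey: "<SECRET_ACCESS_KEY>" },
  });

  const { LocationConstraint } = await s3.send(
    new GetBucketLocationCommand({ Bucket: "my-bucket" })
  );
  console.log(LocationConstraint); // "auto"
  ```
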
- S3 API `GetObject` ranges are now inclusive (`bytes=0-0` will correctly return the first byte).
- S3 API `GetObject` partial reads return the proper `206 Partial Content` response code.
- Copying from a non-existent key (or from a non-existent bucket) to another bucket now returns the proper `NoSuchKey`/`NoSuchBucket` response.
- The S3 API now returns the proper `Content-Type: application/xml` response header on relevant endpoints.
- Multipart uploads now have a `-N` suffix on the ETag representing the number of parts the file was published with.
- `UploadPart` and `UploadPartCopy` now return proper error messages, such as `TooMuchConcurrency` or `NoSuchUpload`, instead of 'internal error'.
- `UploadPart` can now be sent a 0-length part.
- When using the S3 API, an empty string and `us-east-1` will now alias to the `auto` region for compatibility with external tools.
- `GetBucketEncryption`, `PutBucketEncryption`, and `DeleteBucketEncryption` are now supported (the only supported value currently is `AES256`).
- Unsupported operations are explicitly rejected as unimplemented rather than implicitly converting them into `ListObjectsV2`/`PutBucket`/`DeleteBucket` respectively.
- S3 API `CompleteMultipartUpload` requests are now properly escaped.
- Pagination cursors are no longer returned when the number of keys in a bucket is the same as the `MaxKeys` argument.
- The S3 API `ListBuckets` operation now accepts `cf-max-keys`, `cf-start-after`, and `cf-continuation-token` headers, which behave the same as the respective URL parameters.
- The S3 API `ListBuckets` and `ListObjects` endpoints now allow `per_page` to be 0.
- The S3 API `CopyObject` source parameter now requires a leading slash.
- The S3 API `CopyObject` operation now returns a `NoSuchBucket` error when copying to a non-existent bucket instead of an internal error.
- Enforce the requirement for `auto` in SigV4 signing and the `CreateBucket` `LocationConstraint` parameter.
- The S3 API `CreateBucket` operation now returns the proper `location` response header.
- The S3 API now supports unchunked signed payloads.
- Fixed `.put()` for the Workers R2 bindings.
- Fixed a regression where key names were not properly decoded when using the S3 API.
- Fixed a bug where deleting an object and then another object which is a prefix of the first could result in errors.
- The S3 API `DeleteObjects` operation no longer returns an error in some cases where the object had in fact been deleted.
- Fixed a bug where `startAfter` and `continuationToken` were not working in list operations.
- The S3 API `ListObjects` operation now correctly renders `Prefix`, `Delimiter`, `StartAfter`, and `MaxKeys` in the response.
- The S3 API `ListObjectsV2` now correctly honors the `encoding-type` parameter.
- The S3 API `PutObject` operation now works with `POST` requests for `s3cmd` compatibility.