chore: make DefaultBufferedWritableByteChannel capable of being non-blocking by BenWhitehead · Pull Request #3248 · googleapis/java-storage · GitHub

Conversation

@BenWhitehead
Copy link
Collaborator

non-blocking mode will only be used by appendable uploads, all existing usage will continue to use the existing blocking semantics.

@BenWhitehead BenWhitehead requested a review from a team as a code owner August 19, 2025 18:15
@product-auto-label product-auto-label bot added size: l Pull request size is large. api: storage Issues related to the googleapis/java-storage API. labels Aug 19, 2025
@BenWhitehead BenWhitehead force-pushed the nonblocking-appendable/20/max-nonblocking branch from 4ba19d0 to 094040c Compare August 19, 2025 18:41
@BenWhitehead
Copy link
Collaborator Author

This applies the same logic to our default buffering, which only flushes when the buffer is full, as was done in #3225 for our min flush threshold.

@BenWhitehead BenWhitehead force-pushed the nonblocking-appendable/20/max-nonblocking branch from 094040c to 1f2708b Compare August 19, 2025 21:11
BenWhitehead added a commit that referenced this pull request Aug 20, 2025
## Description
feat: *breaking behavior* rewrite Storage.blobAppendableUpload to be non-blocking and have improved throughput (#3231)

Rewrite the internals of BlobAppendableUpload to provide non-blocking write calls and to take advantage of gRPC async message handling.

When `AppendableUploadWriteableByteChannel#write(ByteBuffer)` is called, an attempt will be made to enqueue the bytes in the outbound queue to GCS.
If there is only enough room to partially consume the bytes provided in the `ByteBuffer`, the write call will return early, reporting the number of bytes actually consumed.

As acknowledgements come in from GCS, enqueued messages are evicted, freeing space in the outbound queue and thereby allowing more bytes to be consumed and enqueued.
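The partial-consumption contract above can be illustrated with plain `java.nio` types. The `BoundedChannel` below is a hypothetical stand-in for the outbound queue, not the library's implementation: its write consumes at most the free capacity and returns the number of bytes actually taken, leaving the unconsumed tail in the caller's `ByteBuffer`; `ack` simulates a server acknowledgement freeing queue space.

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for the outbound queue: holds at most `capacity`
// unacknowledged bytes, consuming only what fits per write call.
final class BoundedChannel {
  private int free;

  BoundedChannel(int capacity) {
    this.free = capacity;
  }

  // Non-blocking write: may consume fewer bytes than src.remaining().
  int write(ByteBuffer src) {
    int n = Math.min(free, src.remaining());
    src.position(src.position() + n); // pretend we enqueued n bytes
    free -= n;
    return n;
  }

  // Simulate an acknowledgement from the server freeing queue space.
  void ack(int bytes) {
    free += bytes;
  }
}
```

A caller that sees `write` return less than `remaining()` must keep the unconsumed tail and retry once acknowledgements free space.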

Given that appendable objects are still in private preview I can't quote any metrics here, but preliminary benchmarking of several million objects across a range of sizes shows across-the-board throughput improvements.

Because the channel's write call is now non-blocking, new helper methods have been added in StorageChannelUtils for applications that want to block until the full buffer is consumed.
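The exact StorageChannelUtils signatures aren't quoted here; the loop below is a generic sketch of the blocking behavior such helpers provide over any possibly non-blocking `WritableByteChannel`: keep calling `write` until the buffer is fully drained, backing off when a call makes no progress.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

final class BlockingWrites {
  private BlockingWrites() {}

  // Generic sketch: block the caller until every byte in src has been
  // consumed by the channel. Returns the total number of bytes written.
  static int writeFully(WritableByteChannel channel, ByteBuffer src) throws IOException {
    int total = 0;
    while (src.hasRemaining()) {
      int n = channel.write(src);
      total += n;
      if (n == 0) {
        Thread.yield(); // no queue space yet; back off before retrying
      }
    }
    return total;
  }
}
```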

A new method `MinFlushSizeFlushPolicy#withMaxPendingBytes(long)` has been added to allow limiting the number of pending outbound bytes. The default value is 16 MiB, but it can be configured lower if necessary.
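How a max-pending-bytes bound interacts with acknowledgements can be sketched with a byte-counting semaphore. This is an illustrative mechanism only, not the library's internals: enqueueing acquires permits equal to the message size and an ack releases them, so a writer can get ahead of the server by at most the configured bound.

```java
import java.util.concurrent.Semaphore;

// Illustrative only: caps unacknowledged outbound bytes at maxPendingBytes.
final class PendingByteGate {
  private final Semaphore permits;

  PendingByteGate(int maxPendingBytes) {
    this.permits = new Semaphore(maxPendingBytes);
  }

  // Try to reserve queue space for `bytes`; false means the bound is hit
  // and the caller must wait for acknowledgements.
  boolean tryEnqueue(int bytes) {
    return permits.tryAcquire(bytes);
  }

  // An acknowledgement from the server frees the acked bytes for reuse.
  void onAck(int bytes) {
    permits.release(bytes);
  }

  int availableBytes() {
    return permits.availablePermits();
  }
}
```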

## Release Notes

BEGIN_COMMIT_OVERRIDE

BEGIN_NESTED_COMMIT
feat: *breaking behavior* rewrite Storage.blobAppendableUpload to be non-blocking and have improved throughput (#3231)
END_NESTED_COMMIT

BEGIN_NESTED_COMMIT
feat: add StorageChannelUtils to provide helper methods to perform blocking read/write to/from non-blocking channels (#3231)
END_NESTED_COMMIT

BEGIN_NESTED_COMMIT
feat: add MinFlushSizeFlushPolicy#withMaxPendingBytes(long) (#3231)
END_NESTED_COMMIT

BEGIN_NESTED_COMMIT
fix: update BlobAppendableUploadConfig and FlushPolicy.MinFlushSizeFlushPolicy to default to 4MiB minFlushSize and 16MiB maxPendingBytes (#3249)
END_NESTED_COMMIT

BEGIN_NESTED_COMMIT
fix: make FlushPolicy${Min,Max}FlushSizeFlushPolicy constructors private (#3217)
END_NESTED_COMMIT

END_COMMIT_OVERRIDE

## Sub PRs
This PR is made up of the following PRs, in sequence:
1. #3217
2. #3218 
3. #3219
4. #3220
5. #3221
6. #3222
7. #3223
8. #3224 
9. #3225 
10. #3226 
11. #3227 
12. #3228 
13. #3229 
14. #3230 
15. #3235 
16. #3236 
17. #3241
18. #3242
19. #3246
20. #3248
21. #3249
22. #3252
@BenWhitehead
Copy link
Collaborator Author

Merged in #3231

@BenWhitehead BenWhitehead deleted the nonblocking-appendable/20/max-nonblocking branch August 20, 2025 21:13