src: enable Writev to write beyond INT_MAX

Vectored writes containing large string data (accumulated from small strings over time due to congestion in the stream) fail with ENOBUFS when the cumulative chunk size exceeds INT_MAX.

Under backpressure, failure is justified in JS land (heap OOM) as well as in native land (libuv resource exhaustion, etc.), but not in the stream wrap that sits in the middle and merely facilitates transport between the two layers.

Detect the large-data situation and split the write at the right chunk boundaries. Carry out the intermediary writes through dummy write_wrap objects so the requestor receives only a single callback.
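The boundary-splitting step can be sketched as a greedy grouping of the vectored chunks: walk the chunk list and start a new sub-write whenever adding the next chunk would push the running total past INT_MAX. This is an illustrative sketch, not the PR's actual implementation; `SplitChunks` and its use of plain size vectors (rather than `uv_buf_t` arrays) are hypothetical, and it assumes each individual chunk is itself at most INT_MAX.

```cpp
#include <climits>
#include <cstddef>
#include <vector>

// Greedily group consecutive chunk sizes so that each group's cumulative
// size stays within `limit` (INT_MAX in the scenario this PR addresses).
// Each resulting group would become one intermediary write carried out
// through a dummy write_wrap; only the final group reports back to the
// requestor. Assumes every individual chunk is <= limit.
std::vector<std::vector<size_t>> SplitChunks(const std::vector<size_t>& sizes,
                                             size_t limit) {
  std::vector<std::vector<size_t>> groups;
  size_t cum = 0;
  for (size_t s : sizes) {
    // Open a new group on the first chunk, or when the running total
    // would exceed the limit.
    if (groups.empty() || cum + s > limit) {
      groups.push_back({});
      cum = 0;
    }
    groups.back().push_back(s);
    cum += s;
  }
  return groups;
}
```

For example, chunks of sizes {INT_MAX - 10, 20, 5} would split into two sub-writes: the near-limit chunk alone, then the two small chunks together.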

Fixes: https://github.com/nodejs/node/issues/24992

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • commit message follows commit guidelines
