Specifying bytes quantity in request.stream.read()


I started writing proxy to work with S3 through aioboto3 and wanted to implement stream uploading but faced a problem.

    import boto3.s3.transfer  # needed for TransferConfig below

    async def put(self, bucket, key, stream):
        conf = boto3.s3.transfer.TransferConfig(multipart_threshold=10000, max_concurrency=4)
        async with self.client as s3:
            await s3.upload_fileobj(stream, bucket, key, Config=conf)

upload_fileobj accepts any file-like object that has a read method (which may also be awaitable).

request.stream has a read method, but boto3 passes a byte count to it, which raises an exception because request.stream.read doesn’t accept any arguments.

Could someone please explain whether it is even possible for network streaming (not just Sanic, but the HTTP standard in general) to specify how many bytes to read? If so, are there any plans to implement it in Sanic?

P.S. I know that I could simply save the file and then pass it to boto3, but that adds extra steps.

In your use case, is this in conjunction with a chunked request?

Chunked request streams look like this: Transfer-Encoding - HTTP | MDN, so the size is determined by the sending party, not the reader.

Sanic will read up to a maximum payload size: either a chunk, or the entire body that was sent.

        result = ""
        while True:
            body = await request.stream.read()
            if body is None:
                break
            result += body.decode("utf-8")
        return text(result)

So, in this example, if the client does not signal that it has sent the entire body or entire chunk, it will sit there until it does.

You are trying to consume something less than that?

Thank you for the quick response and explanation! It’s much clearer now how Sanic streams work.

I suppose boto3 requires the read function to accept a byte count so it can compute a chunk hash before sending it to S3. But maybe not; I haven’t researched it yet. I’ll try to find a solution for my case.
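One possible workaround (a rough sketch, not part of Sanic’s or boto3’s API; the class name and the fake-stream assumption are mine) is an adapter that wraps a chunked reader whose `read()` takes no size argument, buffers leftover bytes, and exposes the `read(size)` signature boto3 expects:

```python
# Hypothetical adapter: turns "await read() -> chunk or None at EOF"
# into "await read(size) -> up to `size` bytes", buffering the remainder.
import asyncio


class SizedReader:
    def __init__(self, stream):
        self._stream = stream   # any object with `async def read(self)`
        self._buffer = b""
        self._eof = False

    async def read(self, size=-1):
        # size < 0 means "read everything remaining"
        while not self._eof and (size < 0 or len(self._buffer) < size):
            chunk = await self._stream.read()
            if chunk is None:            # sender signalled end of body
                self._eof = True
                break
            self._buffer += chunk
        if size < 0:
            data, self._buffer = self._buffer, b""
        else:
            data, self._buffer = self._buffer[:size], self._buffer[size:]
        return data
```

In the handler you would then pass `SizedReader(request.stream)` to `put()` instead of `request.stream` itself. I haven’t verified this against boto3’s actual requirements, so treat it as a starting point.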

But don’t you think that server-side control over stream reading would be a useful feature?

I need to put a little more thought into it. Exposing the flow control to the handler seems like it could be full of potential pitfalls. Generally the use case is to stream based upon the chunking from the client, or just to read everything.

If you add an issue on GitHub, I’m happy to play around with it and see what others think. Just add a ref back to here.

I guess there are two potential solutions:

  1. add an internal loop on each of the two conditions,
  2. add a third condition when `size != None`

I think number two is probably the way to go. We shouldn’t really mess with the chunking, I don’t think. But it does mean that in some circumstances we would ignore a non-None value. :thinking:
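To make option two concrete, here is a rough sketch (not Sanic’s actual implementation; the `StreamBuffer` class and queue model are illustrative assumptions) of what the third condition could look like: when a caller passes a size, return at most that many bytes from the buffered chunk and keep the remainder for the next call.

```python
# Illustrative model of a stream buffer with the proposed `size` parameter.
import asyncio


class StreamBuffer:
    def __init__(self):
        self._queue = asyncio.Queue()
        self._pending = b""

    async def put(self, chunk):
        # Called as data arrives; None signals end of stream.
        await self._queue.put(chunk)

    async def read(self, size=None):
        if not self._pending:
            chunk = await self._queue.get()
            if chunk is None:            # end of stream
                return None
            self._pending = chunk
        if size is None:                 # existing behaviour: whole chunk
            data, self._pending = self._pending, b""
        else:                            # proposed third condition: size != None
            data, self._pending = self._pending[:size], self._pending[size:]
        return data
```

This keeps the chunk boundaries from the client intact internally and only slices on the way out, which is why the `size` argument can be ignored (returning fewer bytes) when a chunk runs out.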