How to respond with the exact response from an aiohttp request


#1

I would like to return the same exact response that I get from aiohttp to the client. How might I do this?

I tried:

1

async with aiohttp.ClientSession() as session:
    async with session.get(url, headers=headers) as resp:
        return await resp.read()

2

async with aiohttp.ClientSession() as session:
    async with session.get(url, headers=headers) as resp:
        data = await resp.read()
        return response.raw(data)

and a lot of different variations, and none of them worked. Basically, I would like to return the response status, content, and everything else exactly as is, without modifying anything. What is the best way to do so?


#2

You are trying to use Sanic as a proxy, more or less? Will you know ahead of time whether the content is JSON or not? Or does it need to be able to handle whatever kind of content it gets?

This is an interesting thought experiment for sure. But basically, you would need to read the various parts from the aiohttp response and then return them as a Sanic HTTPResponse.

I will see if I can put together something.


#3

Here is a very basic “proxy”. It does not pass along any headers or cookies, but you could apply the same principle I laid out below to do so.

from sanic import Sanic
from sanic.response import text, html, json
import aiohttp


app = Sanic("proxy")


@app.get("/text")
async def response_text(request):
    return text("foo")


@app.get("/html")
async def response_html(request):
    return html("<html><body><h1>Hello, world.</h1></body></html>")


@app.get("/json")
async def response_json(request):
    return json({"foo": "bar"})


@app.get("/bad")
async def response_bad(request):
    return text("bad", status=400)


@app.get("/proxy/<location>")
async def response_proxy(request, location):
    url = f"http://localhost:8000/{location}"
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            status = resp.status

            if resp.content_type == "application/json":
                func = json
                body = await resp.json()
            elif resp.content_type == "text/html":
                func = html
                body = await resp.text()
            else:
                func = text
                body = await resp.text()
    return func(body, status=status)


if __name__ == "__main__":
    app.run(debug=True)

Here are some results of hitting the /proxy/<location> endpoint:

╭─adam@thebrewery ~  
╰─$ curl localhost:8000/proxy/text -i
HTTP/1.1 200 OK
Connection: keep-alive
Keep-Alive: 5
Content-Length: 3
Content-Type: text/plain; charset=utf-8

foo
╭─adam@thebrewery ~  
╰─$ curl localhost:8000/proxy/html -i
HTTP/1.1 200 OK
Connection: keep-alive
Keep-Alive: 5
Content-Length: 48
Content-Type: text/html; charset=utf-8

<html><body><h1>Hello, world.</h1></body></html>
╭─adam@thebrewery ~  
╰─$ curl localhost:8000/proxy/bad -i 
HTTP/1.1 400 Bad Request
Connection: keep-alive
Keep-Alive: 5
Content-Length: 3
Content-Type: text/plain; charset=utf-8

bad
╭─adam@thebrewery ~  
╰─$ curl localhost:8000/proxy/json -i
HTTP/1.1 200 OK
Connection: keep-alive
Keep-Alive: 5
Content-Length: 13
Content-Type: application/json

{"foo":"bar"}

@smlbiobot is this sort of what you had in mind?


#4

Yes. I am indeed using it as a form of proxy! I thought that perhaps there is a way to directly send the response over but I suppose not.

The normal response is always supposed to be JSON, so I originally had it written in a similar manner to yours, but of course it then ran into issues where the response was not JSON, and I ended up having to handle those cases, along with the associated issue that I was then processing responses I really should not have been.

I like your solution of checking the content type. I didn’t think of that, so I will definitely take that route.

Thanks very much!


#5

:muscle: Nice!

The aiohttp response provides a slightly different API than the Sanic HTTPResponse, so you cannot simply pass one along as the other. I think the more interesting challenge would be stripping out the headers that would not be applicable and passing along the ones that you might want. Same for cookies.
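For example, that header filtering could be sketched by dropping the hop-by-hop headers (the blocklist below is my assumption, based on RFC 7230, plus length/encoding headers the server recomputes) and forwarding the rest:

```python
# Hop-by-hop headers (plus length/encoding, which the proxying server
# recomputes itself) that should not be copied verbatim onto the
# proxied response.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate",
    "proxy-authorization", "te", "trailers",
    "transfer-encoding", "upgrade",
    "content-length", "content-encoding",
}


def forwardable_headers(upstream_headers):
    """Return only the upstream headers that are safe to pass along."""
    return {
        name: value
        for name, value in upstream_headers.items()
        if name.lower() not in HOP_BY_HOP
    }
```

You could then pass the result as the headers argument when building the Sanic response.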

Also, if you were to come up against a stream, you would need to alter the strategy.