Considering ASGI support

My preference would be to provide an alternative entry point, something like calling app.asgi() instead of app.run(), for the functionality that implements the ASGI requirements.
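
To make that concrete, here is what the raw ASGI interface looks like at the HTTP level; an app.asgi() entry point (a hypothetical name, just the suggestion above) would need to expose a callable of this shape:

```python
# A minimal raw-ASGI HTTP application, runnable under uvicorn,
# hypercorn, or daphne. This is only a sketch of the interface,
# not Sanic code.
async def application(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI!"})
```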

So I’m not sure how feasible it is to shim the websockets support in if you wanted to take that approach. You also lose a chunk of benefits, since you’re only dealing with ASGI at the outermost layer (so you can use ASGI servers and middleware, but can’t submount other ASGI applications).

Shimming would also cost a bit of performance in the ASGI case, since you’re crossing two different kinds of interface boundary and doing extra translation work as a result. It would also add complexity, since you’d be maintaining two parallel paths, whereas properly engineering Sanic around an ASGI interface throughout would let you cut a few bits out instead.

the same way people don’t run synchronous frameworks under WSGI.

I’m not sure there are any significant Python sync frameworks that aren’t WSGI based. A few have in-built devservers, but they still use the WSGI interface under the hood.

There’s not too much going on in Request.__init__. What exactly did you have in mind? Could you share a couple of quick lines of code to help me understand?

Sure. Given that the ASGI interface passes a scope dictionary in, a nice approach is to make Request(scope=scope) the standard way to instantiate a request instance. (Perhaps you also accept keyword arguments to support e.g. Request(url=..., method=..., headers=...), but that’s not what you want to use internally.)
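
For illustration, a rough sketch of that pattern (hypothetical code, not Sanic’s actual internals):

```python
# Hypothetical sketch: the request wraps the ASGI scope directly and
# derives its attributes from it.
class Request:
    def __init__(self, scope):
        self.scope = scope

    @property
    def method(self):
        return self.scope["method"]

    @property
    def path(self):
        return self.scope["path"]

    @property
    def headers(self):
        # ASGI delivers headers as a list of (bytes, bytes) pairs.
        return {
            key.decode("latin-1"): value.decode("latin-1")
            for key, value in self.scope["headers"]
        }
```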

keep the same app.run()

That’s absolutely fine. I’m not personally particularly interested in adapting the Sanic server part to deal with that, but if someone else took that part on, then great. (Or dealt with it by adding a dependency.) In either case it’d likely make sense to deal with the framework adaptations first, to demonstrate support with uvicorn & pals, to test performance, and to thrash out what API changes, if any, are required.

Just to give a bit more of a push to the motivations here, it’s not just about being able to run Sanic with uvicorn, hypercorn, or daphne. It’s also about:

  • Shared ASGI middleware. (E.g. you’d be able to use Starlette’s CORSMiddleware, SessionMiddleware, GZipMiddleware, TrustedHostMiddleware, HTTPSRedirectMiddleware, ExceptionMiddleware, DebugMiddleware, etc.; see the sketch after this list.)
  • Shared mountable ASGI applications. (E.g. use Starlette’s StaticFiles, GraphQLApp, class-based HTTPEndpoint/WebSocketEndpoint implementations, etc.)
  • Interchangeable response classes. (E.g. use Sanic responses with Starlette and vice versa. We’ve both got all the basics covered here, but there are other things, like content-negotiated response classes, that’d be useful in either case.)
  • Properly managed background tasks. (Eg. server restarts don’t finalize until background tasks have run to completion. Server is able to determine number of concurrent background tasks, etc.)
  • Support for WebSockets, Server Sent Events, HTTP/2 server push.
  • More robust HTTP and WebSocket implementations, due to proper interface separation.
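
To sketch the middleware point, assuming the refactored Sanic app were itself ASGI-callable (`app` below stands in for that), the wrapping would be framework-agnostic:

```python
# Hypothetical: `app` is assumed to be an ASGI-callable Sanic app.
from starlette.middleware.cors import CORSMiddleware
from starlette.middleware.gzip import GZipMiddleware

asgi_app = CORSMiddleware(app, allow_origins=["https://example.com"])
asgi_app = GZipMiddleware(asgi_app)
# Serve with any ASGI server, e.g.: uvicorn module:asgi_app
```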

Deal.


@tomchristie You raise a lot of good points, and I see no problem passing scope around. I think you are right that we can separate some of the logic and try to peel back some of the unnecessary layers.

Probably the biggest hurdle, and one that I am not sure will be avoidable, is how we handle websockets. Currently, Sanic achieves this by just providing a wrapper around the websockets package. There are a number of implementation issues just around websockets that I have been wanting to correct (API inconsistencies between the App layer and the Blueprint layer), and we have previously talked as a team about pulling that dependency out of Sanic core. While I really like it as a package, I am not a fan of how Sanic implements it.

So in addition to providing support for Sanic to run on ASGI, there are some API cleanup issues that need to be tackled anyway.


As I said before, I think it is still important to provide the simple implementation to keep the internal server running for now. Once a proper ASGI solution has been created, we can work to spread documentation, examples, and other materials to start pushing it as an alternative solution, with the plan to eventually phase out the internal server if it falls out of favor. For now, I think we need to accept the complexity of having two potential ways to run Sanic.

As you are no doubt aware, Sanic already has a method for running with gunicorn. I see this as another alternative.


Getting back to websockets, while I am in favor of keeping two separate run implementations (internal and external), it seems overly complex and confusing for the user to have multiple ways to achieve websockets. I would like to get a better understanding of how ASGI handles websockets to explore how we can move forward there.


No doubt you are right that the ability to have shared components is a HUGE boost for the Python community and for every framework developer and user. For this reason, I think it is a no-brainer.

Probably the biggest hurdle, and one that I am not sure will be avoidable, is how we handle websockets.

So the good news on this front is that it looks like for most end-developers it’s likely a case of making sure we’re preserving the send/recv interface, which is fine. I expect there’s plenty of lower-level details that’d need reworking, but keeping the documented bit of interface compatible looks okay to me.

ASGI websockets would be a nice win here too, since (1) there are multiple implementations, e.g. uvicorn can run on either a websockets-based or a wsproto-based implementation, plus whatever hypercorn and daphne use; and (2) there’s more structure to how to approach permissioning, middleware, or whatever else is shared between both HTTP and websockets. (I’ve got more outlining work to do on this in Starlette, but I can see it all fits together nicely.)
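
For reference, this is roughly what the websocket side of the ASGI interface looks like; a framework wrapper only needs to map its send/recv methods onto these messages. (A bare sketch following the ASGI spec, not Sanic code.)

```python
# A minimal raw-ASGI websocket echo endpoint.
async def websocket_app(scope, receive, send):
    assert scope["type"] == "websocket"

    message = await receive()  # first message is "websocket.connect"
    assert message["type"] == "websocket.connect"
    await send({"type": "websocket.accept"})

    while True:
        message = await receive()
        if message["type"] == "websocket.disconnect":
            break
        # Echo text frames back, preserving the familiar send/recv shape.
        await send({"type": "websocket.send", "text": message.get("text", "")})
```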

I think it is still important to provide the simple implementation to keep the internal server running for now.

That’s reasonable, yeah. I’d expect to tackle this by focusing just on HTTP as a first pass, not touching the Sanic built-in server implementation, and not tackling websockets to start with. If we can get that slice of tests passing, and be able to test the performance running on uvicorn, then we’d have made a great start.

No doubt you are right that the ability to have shared components is a HUGE boost for the Python community and for every framework developer and user. For this reason, I think it is a no-brainer.

Indeed. I think we’ve got a really good opportunity here to lay the foundation for an incredibly productive ecosystem. (I think we have the potential to go a long way further towards this than WSGI did.) Sanic’s in a good position here, since it’s got great adoption, and moving towards being an ASGI framework would take it relatively less effort than most other frameworks. (E.g. it’s probably more work for something like Falcon to tackle.)

It’s also worth mentioning that Andrew Godwin’s work, and some of the stuff in Starlette and Responder, are starting to put down patterns for how we can have ASGI frameworks that also support regular synchronous views. The benefit here is being able to support existing regular ORMs, while still getting WebSocket support, SSE support, managed background/timer/clock tasks, and HTTP/2 server push, and while still allowing the developer to upgrade all or some of the codebase to async later if needed.
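
The usual pattern there is to detect a non-coroutine handler and dispatch it to a threadpool so it can’t block the event loop. A hedged sketch (illustrative names, not any framework’s actual API):

```python
import asyncio
import functools
import inspect

async def call_handler(handler, request):
    # Coroutine handlers run directly on the event loop; plain sync
    # handlers are pushed to the default executor (a threadpool).
    if inspect.iscoroutinefunction(handler):
        return await handler(request)
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(
        None, functools.partial(handler, request)
    )
```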

Anyways, thanks so much for the feedback, that’s super helpful. :sunglasses: I really just wanted to start off by getting a little community buy-in of “yeah, we get why this’d be really valuable, and yeah, we might consider some modest trade-offs if needed.”

@tomchristie I will follow your lead on this one, but if you need some help on this (especially on smoothing out the implementation of the non-ASGI stuff), I would be happy to join.

Any idea if any of the ASGI servers support Windows? I don’t see it explicitly mentioned for any of them, and uvicorn seems to be based on gunicorn, which doesn’t. That would be a big selling point for us while we are stuck with Windows.

For now we’ve had to switch back to Flask, as there is a WSGI server that supports Windows (waitress).

I think Daphne will run under a Windows environment, but don’t hold me to that.

Uvicorn has Windows support, yup. (Tho if you were on Windows in production you’d need to use supervisor or circus for process management, rather than gunicorn with the uvicorn worker class.)

Hypercorn should work on Windows as well (just not with the uvloop worker class, since uvloop doesn’t support Windows).

Having taken a bit of a stab at shoehorning ASGI support in without changing the existing API, I can say it’s a pretty grim process.

Actually refactoring it to ASGI itself would be relatively simple, but attempting to maintain API compatibility with respect to the existing test suite is rather tough.

I don’t really know how feasible this is, or how much motivation I have to pursue it in its current state.

There are a few options here:

  • Keep on this track. Fix up the failing test cases one by one until we’ve got an API-compatible pull request, at which point we’d be in great shape to start refactoring that into a more graceful implementation.
  • Use an aiohttp-based test client instead of moving to the requests-based ASGI test client. Either adapt it so that it makes ASGI requests directly, or stick with the existing “make an actual network request over the local interface” approach. The problem with the latter is that you need to have resolved the “ASGI server in Sanic” issue.
  • Tear things up a little. Aim for a mostly API-compatible release, but treat Sanic ASGI as a sufficiently big step that you’re willing to redo some bits from scratch. (From my POV this is actually much more feasible than it might sound at first: Starlette has got the ASGI design separation absolutely down to a tee, and adapting some bits of that across to Sanic’s interface style wouldn’t necessarily be a ridiculous idea.)

Okay … so I did some work on the test client with requests-async.

Boiling it down, it does not look like we can switch our test client to only use it and replace aiohttp. Why? Because it does not support streaming requests.

Since we have been talking about doing some work on the testing suite (potentially moving testing.py out of the core module, and also adopting pytest-sanic), what about starting to move this direction?

What I am proposing is to leave the testing architecture as is for the next release, but add a new repo: huge-success/sanic-test-client. Inside that would be SanicASGITestClient.

Eventually, we could then work on breaking testing.py off and into this package and also look into what it would mean (if @yunstanford agrees) to adopt pytest-sanic under the community org.


Opinions? Thoughts? @core-devs?

I can have that resolved shortly - have been doing the legwork to deal with streaming requests and responses in any case, and it’s almost there.

(Tho I don’t know if that changes any of what you’re talking about here or not.)

@ahopkins - Streaming requests and responses are now resolved in requests-async (see the docs).

Awesome! I think there is still another issue I need to dig into more. It seemed like h11 was complaining about having two Host headers set. I’ll resolve that and get a new client ready.

On a different level I would still like to break it out.

Update:

I have the test client and a working version of Sanic on ASGI. I’m working through some tests (mainly test_routes.py right now) using the ASGI interface.

The question to @core-devs: how much of the testing coverage do you think we need to repeat for both the internal Sanic server and the ASGI interface?

I have the test client and a working version of Sanic on ASGI. I’m working through some tests (mainly test_routes.py right now) using the ASGI interface.

Nice work! Sorry I’ve not been more involved - been a bit over-subscribed on other stuff lately.
If there’s any specific ASGI sticking points that you start bumping into then please do give me a yell. :grinning:

I did want to talk with you about one item. I’m not at my computer so it’d be hard to explain fully without code.

In short, the Sanic test client is set up to retrieve both the request and response as a tuple. To achieve this, I am overriding the ASGIAdapter.send method in requests-async, which is rather lengthy. I was thinking of submitting a PR to refactor it a little so my override is smaller. I’ll post a snippet later, and push my commit to your PR.

Opinions wanted…

The new SanicTestClient uses requests-async instead of aiohttp. In the past, we got around having aiohttp be a hard dependency by doing the import right inside the method.
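
For anyone following along, the soft-dependency trick in question looks roughly like this (illustrative, not the exact testing.py source):

```python
class SanicTestClient:
    def __init__(self, app):
        self.app = app

    def get(self, url, *args, **kwargs):
        # Deferred import: aiohttp is only required when the test
        # client is actually used, so `pip install sanic` stays lean.
        import aiohttp
        ...
```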

I think we really have two options here: (1) we make requests-async a hard dependency of Sanic (and not just pip install sanic[test]); or (2) we break testing.py off into a new repo, which would become an extra dependency. What are your thoughts, @core-devs?

My argument for making it a project of its own would be to continue to keep Sanic core as lean and dependency-free as it can be in production.

A couple of thoughts:

On size: the actual size of the dependency isn’t that bad. I run most of my stuff on top of a Docker image (jfloff/alpine-python:3.7), and pip install requests-async only costs 9 MB (a 3% increase) in my image size.

On experience: I think being able to test your application is pretty core functionality. I lean towards keeping it in, as it makes for a better developer experience, especially if it also makes contributors’ lives easier.

I think I lean lightly towards making it a hard dependency, especially to start with, if it keeps complexity down. We can always factor it out later if we find it’s strongly desired. That said, if you think it’s a good point to start that testing repo and don’t think it will be that hard, then by all means charge ahead.

Worth keeping in mind too, that there’s still lots of work going on in the async HTTP client space that you’ll want to keep your eye on.

The requests-async package is the minimal amount of work that was needed to get an async equivalent of requests, but I’m still pushing hard towards maturing that all the way into a requests-compatible package that’s built from the ground up with either sync or async functionality, plus HTTP/2 support, a parallel requests API, a test client adapter, etc.

Best place to keep up to date with progress there is probably this issue: https://github.com/encode/httpcore/issues/78
