Considering ASGI support

I’m considering taking a proper crack at ASGI support, rather than just the proof-of-concept shim of #1265.

I think you can pretty much keep API compatibility, except at some lower-level interfaces. (Eg. I would probably end up redesigning the request instantiation slightly in order to keep the implementation as clean as possible. It’d make more sense to have it be instantiated with the ASGI “scope”, rather than the existing signature.)

The event listeners are another case for consideration - ASGI just provides a “shutdown” event, rather than a “before shutdown” and “after shutdown” pair.
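To make that concrete, here’s a sketch of the ASGI lifespan protocol that carries those events (message types are from the spec); both of Sanic’s shutdown listeners would have to hang off the single shutdown message:

```python
async def lifespan(scope, receive, send):
    # Sketch of the ASGI lifespan protocol (message types per the spec).
    # There is only a single "shutdown" message, so both of Sanic's
    # before/after shutdown listeners would have to hang off it.
    assert scope["type"] == "lifespan"
    while True:
        message = await receive()
        if message["type"] == "lifespan.startup":
            # run "server start" listeners here
            await send({"type": "lifespan.startup.complete"})
        elif message["type"] == "lifespan.shutdown":
            # run "before shutdown" and "after shutdown" listeners here
            await send({"type": "lifespan.shutdown.complete"})
            return
```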

The easiest way to get a compatible test client would likely be something based on Starlette’s ASGI Test Client (requests dependency). That plugs a requests session directly into the ASGI interface, rather than making requests against a running instance.
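As a rough illustration of the idea (this is a hand-rolled sketch, not Starlette’s actual client, and it assumes the single-callable ASGI application style): the “client” builds an ASGI scope and calls the app directly, so no socket or running server is involved.

```python
import asyncio


async def asgi_get(app, path):
    # Build a minimal HTTP scope and drive the app directly.
    scope = {
        "type": "http",
        "http_version": "1.1",
        "method": "GET",
        "path": path,
        "query_string": b"",
        "headers": [(b"host", b"testserver")],
    }
    result = {"status": None, "body": b""}

    async def receive():
        # No request body to stream for a simple GET.
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        if message["type"] == "http.response.start":
            result["status"] = message["status"]
        elif message["type"] == "http.response.body":
            result["body"] += message.get("body", b"")

    await app(scope, receive, send)
    return result


# Usage (illustrative): asyncio.get_event_loop().run_until_complete(asgi_get(app, "/"))
```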

You wouldn’t necessarily need to drop sanic’s built-in server, but I think it’d probably be worth doing, since uvicorn’s httptools+uvloop implementation is essentially the same stack of stuff but with an ASGI interface.

There’s a huge stack of benefits to be had here, both for Sanic, and for the ecosystem as a whole.

What’s the team’s opinion on an ASGI release that:

  • Dropped the built-in server in favor of strict decoupling.
  • Modified the Request.__init__ signature.
  • Modified the Sanic.handle_request signature.
  • Kept the before/after shutdown events, but with marginally different “when exactly do these run” behavior.
  • Modified the test client implementation to return request/response objects from requests rather than from aiohttp.

Are we able to break a few eggs like these in order to push things forward, or would a PR along those lines be a non-starter?


I will jump in a little later with a more in-depth response. But, after taking a look at this recently, my intention was to keep the internal server and also allow for ASGI integration, the biggest reason being backwards compatibility.

Of course, what will the system look like if there is both an internal and an external server? :scream:

If it is too much, then my second preference would be to bundle one of the ASGI packages as a dependency (not ideal!) so that we could keep the same app.run() API.

I agree with not dropping current functionality in favor of ASGI functionality as a whole - there are always going to be slow adopters and compatibility issues, and of course those who just will never run it as ASGI, the same way people don’t run synchronous frameworks under WSGI.

My preference would be to provide an alternative, something like calling app.asgi() instead of app.run() for functionality that implements the requirements. Since it’s new functionality, it does add to the complexity of the overall project, but it keeps us away from removing existing functionality or changing it in a way that could fundamentally break compatibility. I’m not scared of doing that - we did merge in the changes for websockets 6.0 after all - but I think there should be a larger discussion as to impact if that’s the path we want to take. Alternatively, simply creating a new interface for ASGI allows us to push that forward independently.
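To sketch what that alternative could look like (names and internals here are purely illustrative, not a settled design): app.asgi() would hand back an ASGI-callable adapter around the Sanic app, while app.run() keeps using the internal server.

```python
class ASGIAdapter:
    """Illustrative ASGI wrapper around a Sanic application."""

    def __init__(self, sanic_app):
        self.sanic_app = sanic_app

    async def __call__(self, scope, receive, send):
        # Translate the ASGI scope/receive/send into Sanic's request
        # handling; that translation is the work under discussion.
        ...


# Hypothetical Sanic method: app.run() keeps the internal server,
# app.asgi() exposes the app to uvicorn/hypercorn/daphne instead.
def asgi(self):
    return ASGIAdapter(self)
```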


As I said before, I am reluctant to do this. Having the built-in server allows Sanic to run with one less dependency, makes it extremely simple to get a quick server up and running, and will not break existing deployments. Unless we come across a major obstacle to having more than one interface, my vote is to pursue ASGI support in tandem with the existing embedded server.

There’s not too much going on in Request.__init__. What exactly did you have in mind? Have a couple quick lines of code to help me understand?

This is something that will probably be on the TODO list anyway. There are some ideas floating around for improvements that we can make to the Router, which would of course have an impact on how we handle requests. So, I do not think there is any opposition here. We would just need to make sure, of course, that whatever changes are made will also work with what is in the works.

Hmmm … This one gives me a little pause because it could impact existing functionality. On the one hand, we could keep the existing listeners for the embedded server and add a new set for the ASGI implementation. It does, however, create an inconsistency and does not easily allow developers to switch back and forth. Is this something built into the ASGI standard?

I have no problem with this.

My preference would be to provide an alternative, something like calling app.asgi() instead of app.run() for functionality that implements the requirements.

So I’m not sure how feasible it is to shim the websockets support in if you wanted to take that approach. You also lose a chunk of the benefits, since you’re only dealing with ASGI at the outermost layer (you can use ASGI servers and middleware, but can’t submount other ASGI applications).

Shimming would lose a bit of performance in the ASGI case, since you’re crossing two different kinds of interface boundaries and have a little bit of work to do as a result. It would also add complexity, since you’d have two paths through, whereas properly engineering Sanic around an ASGI interface throughout would cut a few bits out instead.

the same way people don’t run synchronous frameworks under WSGI.

I’m not sure there are any significant Python sync frameworks that aren’t WSGI-based. A few have built-in dev servers, but they still use the WSGI interface under the hood.

There’s not too much going on in Request.__init__. What exactly did you have in mind? Have a couple quick lines of code to help me understand?

Sure. Given that the ASGI interface passes a scope dictionary in, a nice approach is to make Request(scope=scope) the standard way to instantiate a request instance. (Perhaps you also accept keyword arguments to support eg. Request(url=..., method=..., headers=...), but that’s not what you want to use internally.)
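Something like the following - illustrative only; the field handling follows the ASGI HTTP scope, while the Request shape itself is just a sketch:

```python
class Request:
    # Illustrative only: a scope-driven constructor. Field names follow the
    # ASGI HTTP scope; the rest of the Request shape is just a sketch.
    def __init__(self, scope):
        self.scope = scope
        self.method = scope["method"]
        self.path = scope["path"]
        self.query_string = scope.get("query_string", b"")
        # ASGI headers arrive as a list of (name, value) byte pairs.
        self.headers = {
            name.decode("latin-1"): value.decode("latin-1")
            for name, value in scope.get("headers", [])
        }
```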

keep the same app.run()

That’s absolutely fine. I’m not personally particularly interested in adapting the Sanic server part to deal with that, but if someone else took that part on, then great. (Or deal with it by adding a dependency.) In either case it’d likely make sense to deal with the framework adaptations first, to demonstrate support with uvicorn & pals, to test performance, and to thrash out what, if any, API changes are required.

Just to give a bit more of a push to the motivations here, it’s not just about being able to run Sanic with uvicorn, hypercorn, or daphne. It’s also about:

  • Shared ASGI middleware. (Eg. you’d be able to use Starlette’s CORSMiddleware, SessionMiddleware, GZipMiddleware, TrustedHostMiddleware, HTTPSRedirectMiddleware, ExceptionMiddleware, DebugMiddleware, etc. - see the sketch after this list.)
  • Shared mountable ASGI applications. (Eg. use Starlette’s StaticFiles, GraphQLApp, class-based HTTPEndpoint/WebSocketEndpoint implementations, etc.)
  • Interchangeable response classes. (Eg. use Sanic responses with Starlette and vice-versa. We’ve both got all the basics covered here, but there are other things like content-negotiated response classes that’d be useful in either case.)
  • Properly managed background tasks. (Eg. server restarts don’t finalize until background tasks have run to completion. Server is able to determine number of concurrent background tasks, etc.)
  • Support for WebSockets, Server Sent Events, HTTP/2 server push.
  • More robust HTTP and WebSocket implementations, due to proper interface separation.
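As a concrete illustration of the middleware point: assuming Sanic exposes itself as an ASGI callable (called app below), any ASGI middleware can be layered around it before handing it to a server. The Starlette imports and arguments are real; the Sanic side is the assumption here.

```python
from starlette.middleware.cors import CORSMiddleware
from starlette.middleware.gzip import GZipMiddleware

# `app` is assumed to be a Sanic application exposed as an ASGI callable.
app = CORSMiddleware(app, allow_origins=["https://example.com"])
app = GZipMiddleware(app, minimum_size=500)

# The wrapped callable is then what gets handed to uvicorn/hypercorn/daphne.
```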

Deal.


@tomchristie You raise a lot of good points, and I see no problem passing scope around. I think you are right in that we can separate some of the logic and try to peel back some of the unnecessary layers.

Probably the biggest hurdle, and one that I am not sure will be avoidable, is how we handle websockets. Currently, Sanic achieves this by just providing a wrapper around the websockets package. There are a number of implementation issues just around websockets that I have been wanting to correct (API inconsistencies between the App layer and the Blueprint layer), and we have previously talked as a team about pulling that dependency out of Sanic core. While I really like it as a package, I am not a fan of how Sanic implements it.

So in addition to providing support for Sanic to run on ASGI, there are some API cleanup issues that need to be tackled anyway.


As I said before, I think it is still important to provide the simple implementation to keep the internal server running for now. Once a proper ASGI solution has been created, we can work to spread documentation, examples, and other materials to start pushing it as an alternative solution, with the plan to eventually fade out the internal server if it falls out of favor. For now, I think we need to deal with the complexity that there will potentially be two ways of running Sanic.

As you are no doubt already aware, Sanic already has a method for running with gunicorn. I see this as another alternative.


Getting back to websockets: while I am in favor of keeping two separate run implementations (internal and external), it seems overly complex and confusing for the user to have multiple ways to achieve websockets. I would like to get a better understanding of how ASGI handles websockets to explore how we can move forward there.


No doubt you are right that the ability to have shared components is a HUGE boost for the Python community and every framework developer and user. For this reason, I think it is a no-brainer.


Probably the biggest hurdle, and one that I am not sure will be avoidable, is how we handle websockets.

So the good news on this front is that it looks like, for most end-developers, it’s likely a case of making sure we’re preserving the send/recv interface, which is fine. I expect there are plenty of lower-level details that’d need reworking, but keeping the documented bit of interface compatible looks okay to me.

ASGI websockets would be a nice win here too, since (1) there are multiple implementations - eg. uvicorn includes either websockets or wsproto implementations, plus there’s whatever hypercorn and daphne use; and (2) there’s more structure to how to approach permissioning or middleware or whatever else that’s shared between both HTTP and websockets. (I’ve more outlining work to do on this in Starlette, but I can see it all fits together nicely.)
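For a sense of what the application side looks like, here’s a sketch of the ASGI websocket protocol (message types are from the spec); whatever send/recv wrapper Sanic keeps exposing to handlers would sit on top of messages like these:

```python
async def websocket_echo(scope, receive, send):
    # Sketch of the ASGI websocket protocol from the application side
    # (message types per the spec).
    assert scope["type"] == "websocket"
    await receive()                              # "websocket.connect"
    await send({"type": "websocket.accept"})
    while True:
        message = await receive()
        if message["type"] == "websocket.receive":
            # Echo text frames straight back to the client.
            await send({"type": "websocket.send", "text": message.get("text", "")})
        elif message["type"] == "websocket.disconnect":
            break
```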

I think it is still important to provide the simple implementation to keep the internal server running for now.

That’s reasonable yeah. I’d expect that I’d tackle this by focusing very simply just on HTTP as a first pass, not tackling a sanic built-in server implementation, and not tackling websockets to start with. If we can get that slice of tests passing, and be able to test the performance running on uvicorn, then we’d have made a great start.

No doubt you are right that the ability to have shared components is a HUGE boost for the Python community and every framework developer and user. For this reason, I think it is a no-brainer.

Indeed. I think we’ve got a really good opportunity here to lay the foundation for an incredibly productive ecosystem. (I think we have the potential to go a long way further towards this than WSGI did.) Sanic’s in a good position here, since it’s got great adoption, and moving towards being an ASGI framework is relatively lower effort than in most other cases. (Eg. it’s probably more work for something like Falcon to tackle.)

It’s also worth mentioning that Andrew Godwin’s work, and some of the stuff in Starlette and Responder, is starting to put down patterns for how we can have ASGI frameworks that also support regular synchronous views. The benefit here is being able to support existing regular ORMs, while still getting WebSocket support, SSE support, managed Background/Timer/Clock tasks, and HTTP/2 server push, and still allowing the developer to upgrade all or some of the codebase to async if needed later.
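The usual pattern for those synchronous views is to push the blocking handler onto a thread pool so it doesn’t stall the event loop; a minimal sketch of the idea (names are illustrative):

```python
import asyncio
import functools


async def run_sync_view(handler, request):
    # Run a blocking, synchronous handler in the default thread pool so it
    # doesn't stall the event loop while it does e.g. synchronous ORM work.
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, functools.partial(handler, request))


def legacy_view(request):
    # A regular blocking view - the kind an existing synchronous ORM needs.
    return {"status": "ok"}

# Inside an async request handler: response_data = await run_sync_view(legacy_view, request)
```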

Anyways, thanks so much for the feedback, that’s super helpful. :sunglasses: I really just wanted to start off by getting a little community buy-in of “yeah we get why this’d be really valuable, and yeah, we might consider some modest trade-offs if needed.”.


@tomchristie I will follow your lead on this one, but if you need some help on this (especially on smoothing out the implementation of the non-ASGI stuff), I would be happy to join.

Any idea if any of the ASGI servers support Windows? I don’t see it explicitly mentioned for them, and uvicorn seems to be based on gunicorn, which doesn’t. That would be a big selling point for us while we are stuck with Windows.

For now we’ve had to switch back to Flask, as there is a WSGI server that supports Windows (waitress).

I think Daphne will run under a Windows environment, but don’t hold me to that.

Uvicorn has Windows support, yup. (Though if you were on Windows in production, you’d need to use supervisor or circus for process management, rather than gunicorn with the uvicorn worker class.)


Hypercorn should work on Windows as well (just not with the uvloop worker class).


Having taken a bit of a stab at shoehorning ASGI support in without changing the existing API, it’s a pretty grim process.

Actually refactoring it out to ASGI itself would be relatively simpler, but attempting to maintain API compatibility with respect to the existing test suite is rather tough.

I don’t really know how feasible this is, or how much motivation I have to pursue it in its current state.

There are a few options here:

  • Keep on this track. Fix up the failing test cases one by one until we’ve got an API-compatible pull request, at which point we’d be in great shape to then start refactoring that into a more graceful implementation.
  • Use an aiohttp-based test client instead of moving to the requests-based ASGI test client. Either adapt it so that it makes ASGI requests directly, or stick with the existing “make an actual network request over the local interface”. The problem with that is that you need to have resolved the “ASGI server in Sanic” issue.
  • Tear things up a little. Aim for a mostly API-compatible release, but treat Sanic ASGI as a sufficiently big step that you’re willing to redo some bits from scratch. (From my POV this is actually much more feasible than it might sound at first - Starlette has got the ASGI design separation absolutely down to a tee, and adapting some bits of that across to Sanic’s interface style wouldn’t necessarily be a ridiculous idea.)

Okay … so I did some work on the test client with requests-async.

Boiling it down, it does not look like we can switch our test client to only use it and replace aiohttp. Why? Because it does not support streaming requests.

Since we have been talking about doing some work on the testing suite (potentially moving testing.py out of the core module, and also adopting pytest-sanic), what about starting to move in this direction?

What I am proposing is to leave the testing architecture as is for the next release, but to add a new repo: huge-success/sanic-test-client. Inside that would be SanicASGITestClient.

Eventually, we could then work on breaking testing.py off and into this package and also look into what it would mean (if @yunstanford agrees) to adopt pytest-sanic under the community org.


Opinions? Thoughts? @core-devs?


I can have that resolved shortly - have been doing the legwork to deal with streaming requests and responses in any case, and it’s almost there.

(Tho I don’t know if that changes any of what you’re talking about here or not.)

@ahopkins - Streaming requests and responses is now resolved in requests-async (See docs)

Awesome! I think there is still another issue I need to dig into more. It seemed like h11 was complaining about having two Host headers set. I’ll resolve that and get a new client ready.

On a different level I would still like to break it out.

Update:

I have the test client and a working version of Sanic on ASGI. I’m working through some tests (mainly test_routes.py right now) using the ASGI interface.

The question to @core-devs: how much of the testing coverage do you think we need to repeat for both the internal Sanic server and the ASGI interface?

I have the test client and a working version of Sanic on ASGI. I’m working through some tests (mainly test_routes.py right now) using the ASGI interface.

Nice work! Sorry I’ve not been more involved - been a bit over-subscribed on other stuff lately.
If there’s any specific ASGI sticking points that you start bumping into then please do give me a yell. :grinning:


I did want to talk with you about one item. I’m not at my computer so it’d be hard to explain fully without code.

In short, the Sanic test client is set up to retrieve both the request and response as a tuple. To achieve this, I am overwriting the ASGIAdapter.send method in requests-async, which is rather lengthy. I was thinking of submitting a PR to refactor it a little so my override is smaller. I’ll post a snippet later, and push my commit to your PR.
