Question: running a Sanic app within existing asyncio process and event loop

Hi there,

My use case requires me to integrate several servers and clients, for several different protocols, within a single Python process and asyncio event loop. One of the main reasons for this is efficient passing of data objects between the different clients/servers.

What’s the best way to include a Sanic app in this context, and to launch a Sanic HTTP/S server task, within the existing event loop? I notice that the “official” use pattern is to instantiate a Sanic[-derived] object, then invoke its blocking .run() method.

I need instead to be able to have an existing loop running with other tasks, and be able at will to launch and terminate one or more Sanic server tasks within that loop.

Cheers
David

I’ve found a pattern that works for this: after instantiating a Sanic app object, I call app.add_route() on each method of a handler class instance, then app.create_server(), then server.startup(), then server.serve_forever(). This works fine within the existing event loop, without disrupting other running tasks.

Yes, this is the “correct” way of doing it. But since you are breaking out of the normal pattern, you will need to be a little more hands-on, and you will not have the full suite of features available to you.

@ahopkins thanks. Understood and respected.

There’s another pattern I’m considering – reluctantly letting Sanic ‘own the space’, giving me full access to all features, and then integrating the various other protocol clients/servers in and around it.

Question on this approach – is it possible to spin up other asyncio tasks in different Sanic worker processes, and pass objects between them? If so, are there some examples of this?

Yes, absolutely. There are a couple of considerations.

  1. Do you want to share the same loop? If yes, then start your task in an app.before_server_start listener using app.add_task. That is a wrapper around loop.create_task that adds some monitoring. See background tasks.
  2. If you want something more robust, this is one of the major motivations for adding custom worker processes. See custom processes.

Of course, you can go your route also. This used to be the only way to do it. But we generally steer people away from that as it tends to be more hands-on and lower-level, and therefore more work for the developer.

Personally, I usually like to opt for the custom process approach on things like this. But, it depends what your needs are. I should mention that you can use something like shared_ctx to share state. You can see an example here: Pushing work to the background of your Sanic app

Great ideas, much appreciated.

My use case has involved dozens of nodes communicating with each other via JSON-RPC API calls, including some long-polling. (I’m in the process of replacing this with a mix of websockets and MQTT, but that’s another topic).

The busiest nodes fetch ‘incoming work’ from their PostgreSQL databases, run the business logic, and send the finished work out to various protocol clients for delivery.

But their efficiency has suffered from most of the business logic running within a single asyncio loop, only partly mitigated by handing off some of the heavy lifting to thread pools.
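(For readers unfamiliar with that hand-off: the usual shape is loop.run_in_executor, as in the stdlib sketch below. It keeps blocking work off the event loop, but CPU-bound Python code is still serialized by the GIL, which is why it is only a partial mitigation.)

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def heavy_work(n):
    # Blocking / CPU-heavy business logic kept off the event loop.
    return sum(i * i for i in range(n))


async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # The coroutine awaits the result without blocking other tasks.
        return await loop.run_in_executor(pool, heavy_work, 1_000)


print(asyncio.run(main()))  # prints 332833500
```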

I notice Sanic provides great support for utilising all the cores of a machine with a ‘one HTTP[S] worker per core’ pattern, and balancing incoming load among the workers.

I’m wondering what I’d need to do to run a similar pattern with other protocols, and whether there might be clean ways of doing this in Sanic.

Do you need to share any data between the JSON-RPC and MQTT and your HTTP routes?

If no, then definitely the custom process method.
If yes, then I still probably would go that way.

I think having a dedicated process that opens this port and handles the traffic without impacting the HTTP routes is a big benefit. You just need to be more careful about how you share state, if you need that.
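Under the hood, sharing state between a dedicated process and the rest of the app comes down to picklable multiprocessing primitives, which is what shared_ctx builds on. A minimal stdlib sketch (names are illustrative):

```python
import multiprocessing as mp


def protocol_worker(q):
    # Stand-in for a dedicated MQTT/JSON-RPC process: push one result
    # object back to the rest of the application.
    q.put({"topic": "health", "payload": "ok"})


def exchange():
    # Launch the dedicated process and receive one object from it.
    queue = mp.Queue()
    proc = mp.Process(target=protocol_worker, args=(queue,))
    proc.start()
    msg = queue.get(timeout=5)
    proc.join()
    return msg


if __name__ == "__main__":
    print(exchange()["payload"])  # prints "ok"
```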

Thanks for that.

JSON-RPC is used more for network management, e.g. node health, restarts, metrics, config commands, etc., and by design it doesn’t feature within the inner business-logic processing loops (which carry significant traffic between various other protocol servers/clients, including PostgreSQL clients).

On the other hand, MQTT and Websocket traffic will soon start to seep into the inner loops.

But from what you’ve said, I need to invest a lot more time getting my head around Sanic’s patterns.

BTW I’ve been running the web and RPC via aiohttp for the last couple of years, but have suffered efficiency issues by being stuck on a single core in machines with up to 52 cores.

Sounds painful. Not sure if it’s relevant, but some of that stuff is baked in.