Sanic in async mode without workers or Sanic with workers?

Hi,

I am building a web app using Sanic and am a little confused about how to deploy it in production.
I tried running the app in production using the following code:

app.run(host='0.0.0.0', port=8000, workers=2)

I also tried running my server in async mode using the code mentioned in the following examples:

There is a performance gain in terms of requests/second using the above two approaches, but it looks like there is no multiple-worker support if I go ahead in this direction.

My application would rely on background tasks as well, and I don't want to be limited to a single worker serving my requests.

Am I reading this correctly, or did I miss something basic here?

I am not sure I understand the question. Most of the time, you should avoid either of those deployment strategies. The only exception to that rule is if you have some other service you need to run that needs to be instantiated using the same loop, but not in (for example) a listener.

If you use the Sanic server (app.run or those examples), you will see the same performance, since it is the same server. Slightly less if you use uvicorn, and much less if you use the Gunicorn worker.

Are these tasks that will run with app.add_task, or something else?

Are you saying you want to turn the server on and off?

Hello @ahopkins, thank you for the quick response.

Running the Sanic app in async mode using the linked examples gives me a ~2x improvement in req/sec. Here is an example of the benchmarking I did on my system (MacBook Pro, Catalina, 6-core Intel i7, 2.2 GHz):

I am using Python 3.7.2.
The API uses Tortoise ORM for the database, with very minimal code.

# when running using app.run

wrk --duration 10s --threads 1 --connections 200 http://localhost:6561/v4/users?username=12345
Running 10s test @ http://localhost:6561/v4/users?username=12345
  1 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   420.68ms   30.65ms  846.84ms   96.08%
    Req/Sec    470.45     47.61   505.00     95.96%
  4647 requests in 10.02s, 2.70MB read
  Socket errors: connect 0, read 64, write 20, timeout 0
Requests/sec:    463.86
Transfer/sec:    276.32KB

# when running using this example: https://github.com/huge-success/sanic/blob/5928c5005786b690539d3cf2c2814f696a326104/examples/run_async_advanced.py

wrk --duration 10s --threads 1 --connections 200 http://localhost:6561/v4/users?username=12345
Running 10s test @ http://localhost:6561/v4/users?username=12345
  1 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   202.44ms   31.85ms  620.31ms   96.18%
    Req/Sec      0.98k   102.97     1.06k    97.00%
  9780 requests in 10.01s, 5.69MB read
Requests/sec:    976.78
Transfer/sec:    581.87KB

Both of the above results varied very little when tested over 10 runs.

My Sanic app would be running independently with a couple of middleware and listeners. What confuses me is the performance I might be losing when running with workers.
Another thing I wanted to ask: if I am using app.run(), would this be a non-async server? I hope this makes sense; I am very new to the framework.

app.run is also async and is the preferred method of running Sanic. The only time you lose that is if you use the Gunicorn worker, because then it effectively operates as a WSGI server.

Looking into whether I can replicate those results.

@ahopkins thanks for getting back. Please let me know if you need any information from me.

I was not able to reproduce your results.

With app.run()

wrk --duration 10s --threads 1 --connections 200 http://localhost:1234
Running 10s test @ http://localhost:1234
  1 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.52ms    3.14ms  35.95ms   65.16%
    Req/Sec    20.60k     1.60k   23.96k    69.00%
  205062 requests in 10.07s, 23.86MB read
Requests/sec:  20355.03
Transfer/sec:      2.37MB

With app.create_server()

wrk --duration 10s --threads 1 --connections 200 http://localhost:1234
Running 10s test @ http://localhost:1234
  1 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.32ms    3.65ms  55.03ms   76.51%
    Req/Sec    21.43k     2.04k   23.76k    95.00%
  213341 requests in 10.06s, 24.82MB read
Requests/sec:  21203.43
Transfer/sec:      2.47MB

When I ran it again with an intentionally slow network call added, the results were nearly identical.