Sanic App crashes on Load testing with Jmeter

What versions? What kind of infrastructure are you running on? What are the endpoints you are hitting doing?

I’m using Sanic 20.12.1, tortoise-orm 0.16.19, and Python 3.8, running on a Google Cloud VM with 16 GB RAM and 4 cores. From JMeter 5.4 I’m hitting a single Sanic POST endpoint that writes data into a Postgres database, with PgBouncer for connection pooling.

Awesome. Thanks for the details.

Have you tried running the same test in an endpoint without DB connectivity to try and isolate which layer is causing the issue?

Yeah, I did. There is no issue without the DB connection, but once it is connected, a lot of issues start appearing.

Okay. I would try an incremental approach: first connect directly with asyncpg, then make some queries, then slip Tortoise back in, to narrow down the culprit.

Yeah, cool. When I make a call to the Celery task I get this warning:

    RuntimeWarning: coroutine 'ingests_data' was never awaited
      result = (True, prepare_result(fun(*args, **kwargs)))

So I used await, and then it works normally, but the response time increases, which leads to errors during load testing like 503 Service Unavailable and connection resets.

Your endpoint is pushing work to Celery? That would not need an await, since the call to Celery is a synchronous call to the broker (unless you are using some layer on top of Celery; it does not currently support async send_task). Can you share some code for the endpoint causing trouble? It is hard to understand with all the moving parts.

From what I can gather, your endpoint does something like this? :confused:

@app.route("/something")
async def some_bad_route(request):
    foo = await Foo.filter(...)
    celery_app.send_task("some_task")
    return response.text(...)
#app.py
@blueprint.route('/ingest', methods=["POST"])
async def ingest(request):
    data = request.json.get("data")
    ingests_data.delay(data)
    return response.json({'message': 'success'})

#tasks.py
app = Celery("tasks", broker="amqp://...")

@app.task
def ingests_data(data):
    queryset = Model.get(data=data)
    # Perform operations
    Model.save()

Celery and Sanic are running in separate processes, correct? What messaging broker are you using with Celery: RabbitMQ or Redis? Before going any further, I would check to make sure that the endpoint is even properly sending messages to Celery.

#app.py 
@blueprint.route('/ingest', methods=["POST"])
async def ingest(request):
    data = request.json.get("data")
    ingests_data.delay(data)
    return response.json({'message':'success'})

#tasks.py
app = Celery("tasks", broker="amqp://...")
@app.task
def ingests_data(data):
    print("placeholder", flush=True)

Yes, I’m running Celery and Sanic on separate processes. I’m using Rabbitmq.

This is sounding to me much more like a problem with Celery than with Sanic. It seems that your connection to RabbitMQ is not stable or not able to handle the load.

Okay, thanks. Can you tell me the reason behind this warning?
RuntimeWarning: coroutine 'ingests_data' was never awaited result = (True, prepare_result(fun(*args, **kwargs)))

From the code I am seeing, ingests_data is not a coroutine. Which process is saying that?

Celery says that when I make a request to the endpoint in app.py

How are you integrating coroutines in Celery? This is not a native out of the box thing (yet).
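For reference, a minimal reproduction of what triggers that warning when a coroutine function is handed to a synchronous caller (no Celery needed here; this `ingests_data` is just a stand-in):

```python
# If ingests_data were declared `async def`, a Celery worker would call it
# like a plain function and get back a coroutine object, never the result.
import asyncio

async def ingests_data(data):
    return data

obj = ingests_data({"x": 1})     # synchronous call: returns a coroutine
print(asyncio.iscoroutine(obj))  # True -- nothing has actually run yet
# When the unawaited coroutine is garbage-collected, CPython emits
# "RuntimeWarning: coroutine 'ingests_data' was never awaited".
obj.close()                      # close explicitly to avoid that here
```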

I’m not sure how to do that properly; right now I just followed your answer here: celery-with-sanic

Within a single task, you can call asyncio.run(). You can also try this package: celery-pool-asyncio · PyPI, although personally I have not had luck with it.


Thanks a lot for the help. Will let you know if something works.


It finally worked, but I had to remove Tortoise ORM and use raw SQL, along with some performance tuning on the server. I didn’t have any luck with celery-pool-asyncio either.
