Performance benchmark?

Can someone share performance benchmarks for Sanic? Thanks in advance.

There used to be benchmarks in the README, but they became outdated and have not been added back.

There are some open issues in the tracker related to this.

Recently, when we switched to being a community-organized project, we discussed the idea of creating a new benchmarking standard that would run for all PRs, so that we could verify the codebase stays as fast with each code change. I expect this to be in place sometime after the 18.12 release, toward the beginning of 2019.

You can, of course, dig through the repo's history to find the benchmark code that existed before and try to replicate it yourself. But there are no current benchmarks that the Sanic team maintains, if that is the question.
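
If you want to take a rough measurement of your own, a minimal setup might look like the sketch below. This is an illustrative assumption on my part, not the old README benchmark: the endpoint, port, and settings are made up for the example.

```python
from sanic import Sanic
from sanic.response import json

app = Sanic(__name__)

@app.route("/")
async def handler(request):
    # The classic "hello world" benchmark endpoint: no real work at all.
    return json({"hello": "world"})

if __name__ == "__main__":
    # Disable the access log; logging every request adds measurable
    # per-request overhead and skews the numbers.
    app.run(host="0.0.0.0", port=8000, access_log=False)
```

Then point a load generator at it, e.g. `wrk -t4 -c100 -d30s http://127.0.0.1:8000/`, and only compare numbers taken on the same machine under the same conditions.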

I have carefully benchmarked Sanic and the outcome was… 6! 🙂

All jokes aside, I have never liked these synthetic benchmark tools. I understand their use, and why you would want to monitor performance during the development of a framework (or API). But as soon as you put some actually useful processing into an API, everything changes. If you just want to return “hello world” 10,000 times a second, Sanic will be great: it can probably handle more requests per second, with lower response times, than much of the competition. And if your APIs do a lot of IO (network, disk, etc.), the async model will benefit you even more and Sanic will be even more awesome! Things will speed up A LOT compared to synchronous frameworks.
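
To make that concrete, here is a minimal sketch (the 50 ms `asyncio.sleep` is a stand-in I made up for a database or HTTP call, not a real workload):

```python
from asyncio import sleep
from sanic import Sanic
from sanic.response import text

app = Sanic(__name__)

@app.route("/hello")
async def hello(request):
    # What "hello world" benchmarks measure: pure framework overhead.
    return text("hello world")

@app.route("/io")
async def io_bound(request):
    # Simulated 50 ms database/HTTP call. While this handler awaits,
    # the event loop is free to serve other requests; a synchronous
    # framework would tie up a whole worker for those 50 ms instead.
    await sleep(0.05)
    return text("done")

if __name__ == "__main__":
    app.run(port=8000)
```

Benchmark both endpoints and you get two very different stories: `/hello` tells you about framework overhead, while `/io` is dominated by how much concurrency the event loop can sustain.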

I just wanted to get this off my chest. Benchmarking is not trivial (or general): you cannot just run a single benchmark and say “well, it’s ten times faster, period!” It really depends on your use case, traffic, dependencies, and so on. By the way, I don’t mean to put you down or anything; I just want to explain why I think your question can’t really be answered…

Have a nice day!

@DirkGuijt I think that is a valid point. This is why I do not put too much stock in benchmarks like this that end up being used for “marketing” purposes.

However, I do think they will be useful for the Sanic community to measure itself. Since one of the outward goals is to “be fast”, we need to know that the shiny new bells and whistles are not harmful. And if there is going to be a speed hit, the community can decide whether the new features are worth the performance penalty.

I am less interested in how fast Sanic as a whole responds to certain queries. What I am more interested in doing (and hopefully I will find the time over the winter months, unless someone beats me to it) is having specific benchmarks that run against different parts of the code base.

For example, just like a test suite, we would have an app instance with dozens of endpoints of different kinds registered, and then benchmark how fast the router gets to the right view. I know for a fact that this can be improved, and hopefully it will be one of the objectives in 2019. Then, whenever anyone makes a change to the router, just as we require Travis to run the tests, we would also check the performance changes. A sketch of the idea follows.
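
To sketch what such a per-PR micro-benchmark could look like (this is not Sanic’s actual router API; `resolve` below is a toy stand-in, and pytest-benchmark is just one harness that could do the job):

```python
import re

# Toy route table standing in for the router under test: a batch of
# static routes plus one parameterized route, roughly the shapes a
# real app registers.
STATIC = {f"/static/{i}": (lambda request: "ok") for i in range(30)}
DYNAMIC = [(re.compile(r"^/users/(\d+)$"), lambda request, uid: uid)]

def resolve(path):
    # Static lookup first, then fall back to pattern matching,
    # mirroring the usual fast-path/slow-path split in routers.
    if path in STATIC:
        return STATIC[path]
    for pattern, handler in DYNAMIC:
        if pattern.match(path):
            return handler
    raise LookupError(path)

def test_resolve_dynamic(benchmark):
    # The pytest-benchmark fixture calls `resolve` many times and
    # records timing statistics that CI can compare to a baseline.
    assert benchmark(resolve, "/users/42") is DYNAMIC[0][1]
```

Saving a baseline with `pytest --benchmark-autosave` and comparing against it on each PR would give exactly the regression signal described above.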

@ahopkins I completely agree with you. I guess I was a bit too brief by dedicating only one sentence to benchmarking for development purposes, but yes, I understand why you’d want to monitor performance when adding new features. I still don’t feel like we can answer the original question, though…
