Dynamically set Sanic log level in Python

I have exposed a route in my Sanic app to set the log level based on the client's request. E.g.

from sanic.response import json
from sanic.log import logger, logging

async def sanic_main(request):
    logger.info("Info mesg")
    logger.debug("Debug mesg")

    return json("processed")

async def setlevel(request):
    level = request.json["level"]
    if level == "info":
        loglevel = logging.INFO
    elif level == "debug":
        loglevel = logging.DEBUG
    else:
        loglevel = logging.INFO  # fall back to INFO for unknown values

    logger.setLevel(loglevel)
    return json("done")

On switching log levels between DEBUG and INFO, however, I am observing flaky behavior: the DEBUG messages (from "/main") are printed only some of the time, and vice versa.

NOTE: I am running multiple Sanic workers

How should I go about dynamically setting the log level?

This is to carry on the conversation from StackOverflow.

As mentioned on SO, the problem is that each worker has its own logger since it is running in a separate process. You need some way to bridge the gap.

Btw, your comment made me wonder whether storing the log level outside the app (in Redis) would make sense. That way, when a client invokes a route, a decorator could check and set the log level before serving each request. What do you think? My only concern would be the added latency, but with Redis I don't expect that to be a big deal.

I think you are on the right track, but that is not the best solution. As you mentioned, the problem is that you would need to read Redis EVERY time you want to log. While the overhead is low, it is still overhead added to EVERY request.

I think a better solution would be to subscribe to a channel on Redis using pubsub. When the endpoint is hit, you publish a message to Redis, which then broadcasts it to the other workers so they can update their log level.

You can check out aioredis or aredis as two solutions I have used well with Sanic in the past.
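To make that concrete, here is a rough sketch of the subscriber side. The channel name `loglevel` is arbitrary, and the Redis wiring assumes an aioredis 1.x-style API (each worker would start `level_listener` as a background task, e.g. from a `before_server_start` listener); the message-handling part is plain stdlib:

```python
import logging

logger = logging.getLogger("sanic.root")

LEVELS = {b"debug": logging.DEBUG, b"info": logging.INFO}

def apply_level_message(raw: bytes) -> int:
    """Translate a pubsub payload like b'debug' into a logger level and apply it."""
    level = LEVELS.get(raw.strip().lower(), logging.INFO)
    logger.setLevel(level)
    return level

async def level_listener():
    # Assumed aioredis 1.x-style wiring; deferred import so the sketch
    # stays importable without a Redis server around.
    import aioredis
    redis = await aioredis.create_redis("redis://localhost")
    channel, = await redis.subscribe("loglevel")
    while await channel.wait_message():
        apply_level_message(await channel.get())
```

The publisher side is then trivial: the setlevel endpoint just does `await redis.publish("loglevel", level)` and every worker's listener picks it up.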

Take a look at those and let me know if you have any questions.


Well, when it comes to multiprocessing, nothing is simple. If you search here in the community forums, you'll find other related problems, such as adding or removing routes dynamically, which run into the same issue when you have multiple workers.

The simplest solution would be to check this value somewhere outside of your app, like reading from a file and setting the logger level, but of course that would be really cumbersome (not to mention slow). The best solution would be a simple pubsub system, where setlevel broadcasts a message with the new log level and a subscriber in each worker receives it and sets the level. Your "pubsub" could even be a simple file-change notifier (which would be incredibly simple to build), but that would not work in a cluster configuration.
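For what it's worth, the file-change-notifier variant really is short. A sketch, with the file path and poll interval made up for illustration — each worker runs `watch_level_file` as a background task, and the setlevel endpoint just rewrites the file:

```python
import asyncio
import logging
import os

logger = logging.getLogger("sanic.root")

def apply_level_from_file(path: str) -> None:
    """Read a level name ('debug'/'info') from the file and apply it."""
    with open(path) as f:
        name = f.read().strip().upper()
    logger.setLevel(getattr(logging, name, logging.INFO))

async def watch_level_file(path: str, poll: float = 1.0) -> None:
    # Poll the file's mtime; on change, re-read and apply the level.
    last = 0.0
    while True:
        mtime = os.path.getmtime(path)
        if mtime != last:
            last = mtime
            apply_level_from_file(path)
        await asyncio.sleep(poll)
```

As noted above, this only works while all workers share a filesystem, so it breaks down in a cluster.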

There's also the "lower level" multiprocessing stuff that you could use, such as SharedMemory, introduced in Python 3.8, but that has the same limitation as the file-change-notification idea: it is limited to a single machine and will not work in a cluster configuration.
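A rough sketch of what that could look like with `multiprocessing.shared_memory`: a one-byte segment holds logging's numeric level, and any worker on the same machine can attach to it by name (the helper names here are made up):

```python
import logging
from multiprocessing import shared_memory

# One byte is enough for logging's numeric levels (DEBUG=10, INFO=20, ...).

def create_level_segment(level: int = logging.INFO) -> shared_memory.SharedMemory:
    """Created once at startup; workers receive the segment's name."""
    shm = shared_memory.SharedMemory(create=True, size=1)
    shm.buf[0] = level
    return shm

def set_level(shm_name: str, level: int) -> None:
    # Called from the setlevel endpoint in whichever worker handles it.
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[0] = level
    shm.close()

def read_level(shm_name: str) -> int:
    # Called by each worker, e.g. before handling a request.
    shm = shared_memory.SharedMemory(name=shm_name)
    level = shm.buf[0]
    shm.close()
    return level
```

Note the workers would still need to call `logger.setLevel(read_level(...))` at some point, so in practice this ends up resembling the per-request check, just without the network hop.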

I hope this clarifies your alternatives. Please, let us know if you have any further questions :wink:


Thanks guys. Let me try the aioredis approach.


I agree. Shared memory is an awesome new feature that I look forward to utilizing in the future. But unfortunately, as you mentioned, it doesn't really help with the issues inherent in most web-request processing.