Dynamically set Sanic log level in Python

I have exposed a route in my Sanic app to set the log-level based on the client call. E.g.

import logging

from sanic import Sanic
from sanic.log import logger
from sanic.response import json

app = Sanic(__name__)

@app.route("/main")
async def sanic_main(request):
    logger.info("Info mesg")
    logger.debug("Debug mesg")

    return json("processed")

@app.route("/setlevel")
async def setlevel(request):
    level = request.json["level"]
    if level == "info":
        loglevel = logging.INFO
    elif level == "debug":
        loglevel = logging.DEBUG
    logger.setLevel(loglevel)

    return json("done")

On switching log levels between DEBUG and INFO, however, I am observing flaky behavior: the DEBUG messages (from "/main") get printed only some of the time, and vice versa.

NOTE: I am running multiple Sanic workers

How should I go about dynamically setting the log level?

This is to carry on the conversation from StackOverflow.

As mentioned on SO, the problem is that each worker has its own logger since it is running in a separate process. You need some way to bridge the gap.

Btw, your comment made me wonder whether storing the log level outside the app (in Redis) would make sense. That way, when a client invokes a route, a decorator can check and set the log level before serving each request. What do you think? My only concern is the added latency, but with Redis I don't expect that to be a big deal.
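For concreteness, the decorator idea could be sketched roughly like this. This is a minimal sketch, not Sanic-specific machinery: `get_level_from_store` is a hypothetical stand-in for the actual Redis `GET`, injected so the pattern is visible without a live server.

```python
import asyncio
import functools
import logging

logger = logging.getLogger("sanic.root")


def sync_log_level(get_level):
    """Before each request, fetch the desired level from an external
    store and apply it. `get_level` stands in for a Redis GET."""
    def decorator(handler):
        @functools.wraps(handler)
        async def wrapper(request, *args, **kwargs):
            name = (await get_level() or "INFO").upper()
            level = getattr(logging, name, logging.INFO)
            if logger.level != level:
                logger.setLevel(level)
            return await handler(request, *args, **kwargs)
        return wrapper
    return decorator


# Hypothetical stand-in; a real app would do `await redis.get("loglevel")`.
async def get_level_from_store():
    return "debug"


@sync_log_level(get_level_from_store)
async def main_handler(request):
    return "processed"
```

The cost of this pattern is exactly the concern raised: one round trip to the store per request, on every request.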

I think you are on the right track, but this is not the best solution. As you mentioned, the problem is that you would need to read Redis EVERY time you want to log. While the overhead is low, it is still overhead added to EVERY request.

I think a better solution would be to subscribe to a channel on Redis using pubsub, and then when the endpoint is hit, you publish a message to Redis, which is broadcast to the other workers so they can update.

You can check out aioredis or aredis, two libraries I have used successfully with Sanic in the past.

Take a look at those and let me know if you have any questions.


Well, when it comes to multiprocessing, nothing is simple. If you search here in the community forums, you'll find other related problems, such as adding or removing routes dynamically, which run into the same issue (in case you have multiple workers).

The simplest solution would be to check this value from somewhere outside your app, like reading from a file and setting the logger level, but of course this would be really cumbersome (not to mention slow). The best solution would be a simple pubsub system, where setlevel broadcasts a message with the new log level and a subscriber in each worker receives that value and sets the level. Your "pubsub" could be a simple file change notifier (which would be incredibly simple to build), but that would not work in a cluster configuration.
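The file-change-notifier variant could look something like the sketch below, using only the standard library: poll the file's mtime and re-read the level name when it changes. The path and logger name are illustrative assumptions.

```python
import logging
import os

logger = logging.getLogger("sanic.root")


class FileLevelWatcher:
    """Polls a file's mtime; when it changes, reads the level name
    from the file and applies it to the logger."""

    def __init__(self, path):
        self.path = path
        self.last_mtime = None

    def check(self):
        try:
            mtime = os.path.getmtime(self.path)
        except OSError:
            return  # no file yet; keep the current level
        if mtime != self.last_mtime:
            self.last_mtime = mtime
            with open(self.path) as f:
                name = f.read().strip().upper()
            logger.setLevel(getattr(logging, name, logging.INFO))
```

Each worker would run a small background task (for example, started from a startup listener) that calls `check()` once a second, which keeps the per-request path free of any I/O.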

There's also lower-level multiprocessing machinery you could use, like SharedMemory, introduced in Python 3.8, but it has the same limitation as the file change notification idea: it is restricted to a single machine and will not work in a cluster configuration.
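For what that could look like: a minimal sketch, assuming a single machine and an arbitrary block name, where one shared byte holds the numeric log level and every worker attaches to it by name.

```python
import logging
from multiprocessing import shared_memory

# A single shared byte holding the numeric log level. Every worker on
# the same machine attaches to the block by name (name is arbitrary).
SHM_NAME = "sanic_loglevel"


def create_level_block(initial=logging.INFO):
    """Called once, before the workers are forked."""
    shm = shared_memory.SharedMemory(name=SHM_NAME, create=True, size=1)
    shm.buf[0] = initial
    return shm


def attach_level_block():
    """Called in each worker to attach to the existing block."""
    return shared_memory.SharedMemory(name=SHM_NAME)


def set_level(shm, level):
    shm.buf[0] = level  # e.g. from the /setlevel handler


def read_level(shm):
    return shm.buf[0]  # each worker polls this and calls logger.setLevel
```

This works because `logging` levels are small integers (DEBUG is 10, INFO is 20), so one byte suffices; but, as noted, it cannot cross machine boundaries.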

I hope this clarifies your alternatives. Please, let us know if you have any further questions :wink:


Thanks guys. Let me try the aioredis approach.


I agree. Shared memory is an awesome new feature that I look forward to utilizing in the future. But, unfortunately (as you mentioned) it doesn’t really help the issues inherent in most web request processing.

Hey guys,

Sorry for getting back to this thread late, but I finally have time to work on the idea of using an async Redis client to update the log level that each Sanic worker uses when printing log messages.

You guys mentioned that I could use the pubsub mechanism to achieve this, and it makes sense. However, reading through Sanic+aioredis examples as well as aredis examples, I couldn't clearly figure out how or when each Sanic worker should subscribe to the channel. Is there a construct before app.run() that I can use to subscribe all workers to a channel?

Also, once a new log-level message is published to the channel, what does the handler within the route method look like? Is it blocking? Based on what I read about aredis, is this the way to go?

@app.route("/procreq")
async def procfn(request):

    while True:
        mesg = await r.get_message()  # r: an aredis pubsub object
        if mesg:
            # change log level
            pass
        await asyncio.sleep(1)

    print("request processed")
    return json("done")

If you can help with some skeleton code, I would really appreciate it. Thanks a ton!

Adding a piece of code where the message fails to get published to the channel, followed by an unexplained exception (after a timeout?) on the call to /setlevel.

import sys
import socket
import asyncio

from sanic import Sanic
from sanic.response import json as sjson
import aioredis

app = Sanic(__name__)

@app.route("/api")
async def route1(request):
    print('Rcvd req in /api')
    await tsk
    return sjson('reached')


@app.route("/setlevel")
async def route2(request):
    print('Rcvd req in /setlevel')
    res = await pub.publish_json('chan1', ["Hello", "world"])
    assert res == 1
    return sjson('reached')


async def reader(ch):
    while (await ch.wait_message()):
        msg = await ch.get_json()
        print("Got Message:", msg)


async def main(argv=sys.argv):

    global pub, sub, tsk

    pub = await aioredis.create_redis('redis://localhost')
    sub = await aioredis.create_redis('redis://localhost')
    res = await sub.subscribe('chan1')
    ch1 = res[0]
    tsk = asyncio.ensure_future(reader(ch1))

    server_socket = '/tmp/sanictest.sock'
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(server_socket)

    app.run(sock=sock, workers=4, debug=True)

    print('Closing pub-sub...')
    sub.close()
    pub.close()

    return


if __name__ == '__main__':
    asyncio.run(main())

What am I doing wrong here?

Poke to @ahopkins @vltr

Here is a working solution I came up with.

First, let's get a running instance of Redis:

$ docker run -p 6379:6379 redis:alpine

Next, here is my basic app structure (with the logic not needed for my explanation omitted):

import asyncio
import logging

import aioredis
from sanic import Sanic
from sanic.log import logger
from sanic.response import text

app = Sanic("dynamic-logging")
CHANNEL_NAME = "logging-level"
task = None
connections = []


async def get_connection():
    conn = await aioredis.create_redis(
        address="redis://localhost:6379/0", encoding="utf-8",
    )
    connections.append(conn)
    return conn


@app.get("/")
async def handler(request):
    logger.info("foobar")
    logger.debug("foobar")
    return text("Done.")


@app.post("/setlevel")
async def set_handler(request):
    ...


@app.listener("after_server_start")
async def run_logging_listener(app, loop):
    ...


@app.listener("before_server_stop")
async def cleanup(app, loop):
    global task
    global connections

    if task and not task.cancelled():
        task.cancel()

    for conn in connections:
        print(f"closing {conn}")
        conn.close()
        await conn.wait_closed()


if __name__ == "__main__":
    app.run(debug=True, workers=4)

We are going to run run_logging_listener on startup. It will launch a task that acts as our receiver and listens for messages on the pubsub channel.

Separately, we will have set_handler, which will be the endpoint that publishes messages to our workers.

First, let’s build the setter. All we need to do is to obtain a connection to our redis instance, and publish to the channel what we want the new level to be.

@app.post("/setlevel")
async def set_handler(request):
    level = request.json.get("level", "INFO")
    print(f"Requested to change logging to {level}")
    conn = await get_connection()
    await conn.publish(CHANNEL_NAME, level)

    return text(f"Set level to {level}")

But, how does each worker know what to do? At startup, each worker launches a task that acts as a receiver.

@app.listener("after_server_start")
async def run_logging_listener(app, loop):
    global task  # so cleanup can find and cancel the task later
    conn = await get_connection()

    async def receiver(channel):
        print(f"Listening on {channel}")
        ...

    (channel,) = await conn.subscribe(CHANNEL_NAME)
    task = loop.create_task(receiver(channel))

But, what does receiver do?

It will just hang around and wait until there is an incoming message. When there is, it will call logger.setLevel.

    async def receiver(channel):
        print(f"Listening on {channel}")

        while await channel.wait_message():
            try:
                message = await channel.get(encoding="utf-8")
                level = getattr(logging, message, logging.INFO)
                logger.setLevel(level)
                print(f"Changing logging to {message}")
            except aioredis.ChannelClosedError:
                print("Channel has closed")
                break

For the sake of ease, I have made rather judicious use of global variables. You might not want to follow that pattern. But this should give you an idea of the basic patterns and considerations you might want to employ.

Here is the full code:

import asyncio
import logging

import aioredis
from sanic import Sanic
from sanic.log import logger
from sanic.response import text

app = Sanic("dynamic-logging")
CHANNEL_NAME = "logging-level"
task = None
connections = []


async def get_connection():
    conn = await aioredis.create_redis(
        address="redis://localhost:6379/0", encoding="utf-8",
    )
    connections.append(conn)
    return conn


@app.get("/")
async def handler(request):
    logger.info("foobar")
    logger.debug("foobar")
    return text("Done.")


@app.post("/setlevel")
async def set_handler(request):
    level = request.json.get("level", "INFO")
    print(f"Requested to change logging to {level}")
    conn = await get_connection()
    await conn.publish(CHANNEL_NAME, level)

    return text(f"Set level to {level}")


@app.listener("after_server_start")
async def run_logging_listener(app, loop):
    global task  # so cleanup can find and cancel the task later
    conn = await get_connection()

    async def receiver(channel):
        print(f"Listening on {channel}")

        while await channel.wait_message():
            try:
                message = await channel.get(encoding="utf-8")
                level = getattr(logging, message, logging.INFO)
                logger.setLevel(level)
                print(f"Changing logging to {message}")
            except aioredis.ChannelClosedError:
                print("Channel has closed")
                break

    (channel,) = await conn.subscribe(CHANNEL_NAME)
    task = loop.create_task(receiver(channel))


@app.listener("before_server_stop")
async def cleanup(app, loop):
    global task
    global connections

    if task and not task.cancelled():
        task.cancel()

    for conn in connections:
        print(f"closing {conn}")
        conn.close()
        await conn.wait_closed()


if __name__ == "__main__":
    app.run(debug=True, workers=4)

This is fantastic @ahopkins!

Actually, I had found my own way before you responded, but it seems the basic ideas match. I even had a chance to capture it in a blog post:

Thanks a ton though!

I’ve built out something that does this as a service that may be of interest / I’d love your feedback.

It solves the same problem you've mentioned here. We have a web UI and then need to push the log levels out. We have clients connect and listen on an SSE stream instead of Redis, but it's conceptually pretty similar to the solution y'all have outlined here. Docs in Python | Prefab Documentation

I'm not a Python/Sanic expert (yet, anyway), but if this solution is appealing, I'd love to figure out how to make it fit really well into the Sanic framework so you can change log levels without restarting.