I have the following simplified code, which defines a lock named `locks` (a `multiprocessing.Lock`) in `shared_ctx`. When I access it via `request.app.shared_ctx.locks` in the `index` handler on my Linux system, the server hangs (merely having that line of code present is enough to trigger it).
import asyncio  # missing in my original snippet; asyncio.sleep is used below
import logging
from multiprocessing import Lock

from sanic import Sanic, request
from sanic.response import html

app = Sanic(__name__)


@app.main_process_start
async def init(app: Sanic):
    app.shared_ctx.locks = Lock()


@app.route("/index", methods=["get"])
async def index(request: request.Request):
    print("enter")
    # Poll the lock without blocking the event loop
    while not request.app.shared_ctx.locks.acquire(block=False, timeout=None):
        print("1")
        await asyncio.sleep(0.1)
    print("exit")
    request.app.shared_ctx.locks.release()
    return html("sss")


if __name__ == "__main__":
    app.run("0.0.0.0", 80, fast=True, debug=True)
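For reference, the non-blocking acquire/release pattern itself behaves as expected with a plain `multiprocessing.Lock` in a single process, which suggests the problem lies in how the lock is shared with the Sanic worker processes rather than in the polling loop itself (a minimal sketch, outside of Sanic):

```python
from multiprocessing import Lock

lock = Lock()

# A free lock can be acquired without blocking
first = lock.acquire(block=False)

# multiprocessing.Lock is not reentrant: a second non-blocking
# acquire from the same holder fails instead of hanging
second = lock.acquire(block=False)

lock.release()
print(first, second)  # True False
```
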
Here is the output in the terminal:
[2023-06-10 06:01:55 -0400]  [DEBUG] Process ack: Sanic-Server-2-0
[2023-06-10 06:01:55 -0400]  [INFO] Starting worker
enter
enter
enter
enter
It seems that this code works fine on Windows but not on Linux. Has anyone seen a similar issue before? Any advice on how to resolve it would be appreciated.