Random exceptions with WorkerManager

I’m using Sanic Server with Google App Engine to serve one of our production services. However, we often see some random exceptions while serving the app.

Sanic Version: 22.12.0
Runtime: Python 3.10

It would be great if someone could shed some light on this.

```
Traceback (most recent call last):
  File "/layers/google.python.pip/pip/bin/sanic", line 8, in <module>
    sys.exit(main())
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/sanic/__main__.py", line 12, in main
    cli.run(args)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/sanic/cli/app.py", line 119, in run
    serve(app)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/sanic/mixins/startup.py", line 862, in serve
    manager.run()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/sanic/worker/manager.py", line 95, in run
    self.monitor()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/sanic/worker/manager.py", line 197, in monitor
    self._sync_states()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/sanic/worker/manager.py", line 288, in _sync_states
    state = self.worker_state[process.name].get("state")
  File "<string>", line 2, in __getitem__
  File "/layers/google.python.runtime/python/lib/python3.10/multiprocessing/managers.py", line 833, in _callmethod
```
Is there any additional context around this? This traceback isn’t much to work with.

So, I guess the light that I can shed is that when running Sanic in its default worker-process mode, it will attempt to keep its worker state in sync across all processes. Without knowing more about what your setup looks like, it is hard to provide more detail. I am not sure what App Engine provides, or whether multiple processes are even an option there. You might need to use single-process mode.
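To illustrate the mechanism behind that traceback: Sanic's worker state lives in a dict served by a `multiprocessing.Manager` process, and the monitor reads each worker's entry through a proxy. The frame `worker_state[process.name].get("state")` failing inside `_callmethod` is what you see when that manager-backed lookup goes wrong (for example, if the manager process has been killed). Here is a minimal stdlib-only sketch of that pattern; the worker name and `"ACKED"` state value are just illustrative stand-ins, not Sanic's actual internals:

```python
from multiprocessing import Manager, Process


def worker(worker_state, name):
    # Each worker publishes its own state into the shared dict, loosely
    # mirroring what Sanic's WorkerManager tracks per worker process.
    worker_state[name] = {"state": "ACKED"}


def read_state(name="Sanic-Server-0-0"):
    # The monitor-side read: this is the same kind of proxied dict lookup
    # that appears in the traceback's _sync_states frame. If the manager
    # process backing the proxy dies, this lookup raises from _callmethod.
    with Manager() as sync_manager:
        worker_state = sync_manager.dict()  # proxy served by a separate manager process
        p = Process(target=worker, args=(worker_state, name))
        p.start()
        p.join()
        return worker_state[name].get("state")


if __name__ == "__main__":
    print(read_state())  # ACKED
```

If App Engine's sandbox interferes with that extra manager process, running with a single worker may sidestep the sync entirely; recent Sanic versions expose a `--single-process` CLI flag (and a corresponding `single_process=True` argument to `app.run()`) for that, though check the docs for your exact version.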