Restructuring create_server to leverage workers

I was doing this: Mythic/server.py at 9085d1d3edcb78573e55a91674057dd7733f1544 · its-a-feature/Mythic · GitHub and initializing here https://github.com/its-a-feature/Mythic/blob/9085d1d3edcb78573e55a91674057dd7733f1544/mythic-docker/app/init.py where I used the create_server method. My project can have 300+ things connecting in at a time, so I need to optimize it to better leverage the resources of wherever it's deployed while still allowing my async database connections to work in the workers. Similarly, I use sanic-jwt and sanic-wtf with some in-memory tracking that I now need to move into the database since there's no shared memory between workers.

I’ve updated the server file to:

from app import (
    mythic,
    dbloop,
    listen_ip,
    listen_port,
    keep_logs,
    debugging_enabled,
)
import asyncio
from app.api.rabbitmq_api import start_listening
import traceback


if __name__ == "__main__":
    try:
        asyncio.set_event_loop(dbloop)
        loop = asyncio.get_event_loop()
        try:
            loop.run_until_complete(start_listening())
            mythic.run(host=listen_ip, port=int(listen_port), debug=debugging_enabled, access_log=keep_logs, workers=4)
        except Exception as e:
            print(str(e))
            loop.stop()
    except Exception as e:
        traceback.print_exc()

to allow me to use multiple workers, and I added a before_server_start listener to handle the database connections so that each worker connects to the database on its own loop properly:

@mythic.listener("before_server_start")
async def setup_initial_info(sanic, loop):
    # runs once in each worker process, binding the async DB manager to that worker's loop
    app.db_objects = Manager(mythic_db, loop=loop)
    await mythic_db.connect_async(loop=loop)
    app.db_objects.database.allow_sync = logging.WARNING
    await initial_setup()
    await app.api.rabbitmq_api.start_listening()

However, I'm still running into issues with what I thought was shared data. For example, in that init.py file I have a mythic.config["variable"] = "value" which doesn't appear to be shared amongst all the workers (csrf failed when run with workers · Issue #16 · pyx/sanic-wtf · GitHub).

What kind of things do you mean? Requests?

Why do you say that? Every worker will have that value set. It is not “shared” as there is no memory sharing, but each would have an instance of that string.


What variables in your code are you having a problem with?

Oh, yeah, my bad. Requests, which result in some just fetching data and returning, some holding open websockets and streaming data, and some doing more complex tasking.

For the latter piece, I'm referencing that GitHub issue - specifically, they're calling out that the CSRF value I store in that config isn't shared amongst all the workers, and that's what I experience too. If I have 4 workers, only about 1 in 4 requests will properly validate the CSRF token. But if I have 1 worker, it's fine.

If it were me, I would scrap that pattern and just do this instead:

@mythic.before_server_start
async def start_rabbit(app, _):
    app.add_task(start_listening())

mythic.run(host=listen_ip, port=int(listen_port), debug=debugging_enabled, access_log=keep_logs, workers=4)

You don’t need to manage the loop yourself or have those exception blocks.

Are you talking about this?

mythic.config[
    "WTF_CSRF_SECRET_KEY"
] = str(uuid.uuid4()) + str(uuid.uuid4())

If that is the case, then every one of your workers has its own CSRF secret, which means you have one of two options:

  1. Implement some sort of “sticky” session with a load balancer
  2. Use the same CSRF secret with all of your workers

I cannot say which would be the correct approach without knowing more about your application.

But, again, if I were you, I think the easiest solution is to set an environment variable:

SANIC_WTF_CSRF_SECRET_KEY=somethingsecret

With the SANIC_ prefix, it will automatically be loaded onto your config at startup as WTF_CSRF_SECRET_KEY.

See docs
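
For illustration, here is a minimal sketch (the app name and the somethingsecret value are just placeholders, not Mythic's code) of how that prefixed variable shows up on the config - Sanic strips the SANIC_ prefix when loading the environment, so every worker process reads the same value:

import os
from sanic import Sanic

# pretend the process was started with:
#   SANIC_WTF_CSRF_SECRET_KEY=somethingsecret python server.py
# (set here only so the sketch is self-contained)
os.environ.setdefault("SANIC_WTF_CSRF_SECRET_KEY", "somethingsecret")

app = Sanic("demo")  # hypothetical app name

# the SANIC_ prefix is stripped when the environment is loaded into config,
# so the key sanic-wtf looks for exists in every worker process
print(app.config.WTF_CSRF_SECRET_KEY)  # -> somethingsecret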

That's what I thought at first too, but just for testing, I set that value to a static string, like

mythic.config["WTF_CSRF_SECRET_KEY"] = "5"

and had the same result.

If you want to see for yourself, add a debug statement at startup to output the config values. Setting it to a constant will populate it across the board:

from sanic.log import logger

@app.after_server_start
def display(app, _):
    logger.debug(app.config)

Ah yup, I can see it in that debug output. Now I'm even more confused as to why that's failing across multiple workers then, lol.

I noticed I was on Sanic 20.6, which didn't have that @app.after_server_start functionality. I updated to the latest, 21.3.2, and now I'm having a bunch of issues, lol. It seems like there were some pretty big shifts in the latest release. Is there a guide anywhere on updating to the latest? I'm getting this for everything now:

[2021-04-08 10:02:43 -0700] - (sanic.access)[INFO][127.0.0.1:39294]: GET http://192.168.53.128/callbacks  200 -1
[2021-04-08 10:02:43 -0700] - (sanic.access)[INFO][127.0.0.1:39296]: GET http://192.168.53.128/static/toastr.css  405 -1
[2021-04-08 10:02:43 -0700] - (sanic.access)[INFO][127.0.0.1:39298]: GET http://192.168.53.128/static/fontawesome/v5.6.3/css/all.css  405 -1

FYI

@app.after_server_start

is the same as:

@app.listener("after_server_start")

21.3 Release Notes
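
To make the equivalence concrete, a minimal sketch (the app name and handlers are hypothetical) with both forms registered for the same event - the shorthand decorator added in 21.3 and the string-based listener that also works on older releases like 20.6:

from sanic import Sanic
from sanic.log import logger

app = Sanic("demo")  # hypothetical app name

# 21.3+ shorthand decorator
@app.after_server_start
async def show_config_short(app, _):
    logger.debug(app.config)

# equivalent long form, also available on older releases
@app.listener("after_server_start")
async def show_config_long(app, _):
    logger.debug(app.config)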