How to call a method after all processes have finished executing?

I started four processes.

I am using Redis publish/subscribe.
After a request arrives, I publish a message so that a function in each process, 'func X', does some work.
How do I know when 'X' has finished executing in all four processes?
I want to insert some data into the database once everything is complete. If I do not know the end status, I end up with duplicate inserts.
Please help, this is very important to me. I look forward to your help! :pray: :pray: :pray:

Do you have some code we can look at? Your description is super vague; it is difficult to understand what your architecture looks like and what your goal is.

Thank you for your reply. :grinning:

I ran the program and it started 4 workers:

I have a route handler. It waits for a request to arrive and then publishes a message to notify the subscribers:
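Roughly like this (a simplified sketch; I am using redis.asyncio here, and the route and channel names are only illustrative):

```python
from sanic import Sanic, response
from redis.asyncio import Redis

app = Sanic("MyApp")


@app.listener("before_server_start")
async def setup_redis(app, _):
    # Each worker process gets its own Redis connection
    app.ctx.redis = Redis(host="localhost", port=6379)


@app.post("/notify")
async def notify(request):
    # Publish the request payload to a channel that every worker subscribes to
    await request.app.ctx.redis.publish("jobs", request.body)
    return response.json({"published": True})
```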

Of course, the subscription is set up when each worker starts:
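Something like this, once in every worker when the server starts (again simplified):

```python
@app.listener("after_server_start")
async def start_subscriber(app, _):
    # Each of the 4 workers subscribes to the same channel
    # and starts a background task that reads messages
    pubsub = app.ctx.redis.pubsub()
    await pubsub.subscribe("jobs")
    app.add_task(reader(pubsub))


async def reader(pubsub):
    async for message in pubsub.listen():
        if message["type"] == "message":
            await handle_job(message["data"])  # handle_job is the method below
```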


After receiving the published message, each worker processes some logic in the following method. (Note: because my program has 4 workers, this method is executed four times!):
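In simplified form it looks something like this (do_some_logic and insert_into_db stand in for my real code):

```python
async def handle_job(data):
    # Runs in every worker that received the published message,
    # so with 4 workers it runs 4 times per request
    result = do_some_logic(data)
    await insert_into_db(result)  # this is the insert that ends up duplicated
```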

The method that inserts the data (it is also executed 4 times):

Current requirement: I want to run the method that inserts data into the database only after the last worker has finished. (If I run this method in every worker, it executes 4 times, which is wrong, because duplicate data will be inserted!)

I look forward to your help. Thank you!

That is everything I have added. I look forward to your answer, thank you very much!

This sounds like an architectural problem. One solution is to create a pattern where your workers talk to each other so that each one knows when the message has been received by all the others: last message in wins.
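For example (just a sketch, assuming redis.asyncio, that every published message carries a request id, and that you know the worker count up front), an atomic counter lets the last worker to finish perform the insert:

```python
WORKER_COUNT = 4  # assumption: you know how many workers you run


async def handle_job(redis, request_id, data):
    do_work(data)                                    # your "func X" logic
    done = await redis.incr(f"done:{request_id}")    # atomic across all workers
    await redis.expire(f"done:{request_id}", 3600)   # let the counter expire eventually
    if done == WORKER_COUNT:
        await insert_into_db(data)                   # only the last worker gets here
```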

Do you need to wait until all workers receive the message? I'd personally push the work to a shared queue: a job is moved from the queue to "in progress", then acked when it is done. Only one worker should be able to move a given job out of the queue, so if another worker tries, it will fail.
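A rough sketch of that idea using a Redis list as the queue (needs Redis 6.2+ for LMOVE; insert_into_db is a placeholder for your real work):

```python
from redis.asyncio import Redis


async def enqueue(redis: Redis, payload: bytes):
    # The route handler pushes a job instead of publishing a message
    await redis.rpush("jobs:queue", payload)


async def worker_loop(redis: Redis):
    while True:
        # Atomically move one job from the queue to the in-progress list;
        # Redis hands each job to exactly one blocked worker
        job = await redis.blmove("jobs:queue", "jobs:in_progress", timeout=0)
        try:
            await insert_into_db(job)                     # do the work exactly once
        finally:
            await redis.lrem("jobs:in_progress", 1, job)  # ack: remove when done
```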

Another option is to build a load balancer: one dedicated worker that pulls from the queue and pushes each job to a specific worker to perform the operation. I guess the question is how complex the work needs to be.
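Very roughly (all names are illustrative), the dedicated worker could round-robin jobs onto per-worker lists:

```python
import itertools

from redis.asyncio import Redis


async def dispatcher(redis: Redis, worker_ids=("w1", "w2", "w3", "w4")):
    targets = itertools.cycle(worker_ids)
    while True:
        _, job = await redis.blpop("jobs:queue", timeout=0)  # blocks until a job arrives
        await redis.rpush(f"jobs:{next(targets)}", job)      # hand it to one specific worker
```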

If you intend to only scale with multiple workers from one instance (and not multiple containers), you could look at the solution here: Pushing work to the background of your Sanic app
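I won't reproduce the article here, but the general shape of "push the work to a background task" in Sanic is roughly this (my own minimal, single-process sketch; insert_into_db is a placeholder):

```python
import asyncio

from sanic import Sanic, response

app = Sanic("MyApp")


@app.listener("after_server_start")
async def start_consumer(app, _):
    app.ctx.queue = asyncio.Queue()
    app.add_task(consume(app.ctx.queue))


async def consume(queue: asyncio.Queue):
    # A single consumer drains the queue, so each job is handled once
    while True:
        job = await queue.get()
        await insert_into_db(job)  # placeholder for the real insert
        queue.task_done()


@app.post("/jobs")
async def submit(request):
    await request.app.ctx.queue.put(request.json)
    return response.json({"queued": True})
```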

Or, even if you need a larger scale, the pattern in that article could be adapted to use Redis for distribution with the same goal: a work queue and not just messages.

Thank you for your help! :grinning: :grinning: :grinning:
I read your reply carefully. Besides the Pushing work to the background of your Sanic app article you linked,
can you give some code samples for the other methods you mentioned? If so, that would be great!

You can check out the code and the presentation I did on this here: https://github.com/ahopkins/pyconil2021-liberate-your-api

You should be able to find the video on YouTube: https://youtu.be/hGAwyg8_W3M

Thank you very much!