Ability to bind to IPv6 addresses / interfaces


Hi all,

Hopefully this is the appropriate place to ask for help regarding this issue. I’ve built a Sanic-powered app that I’ve deployed across many VPSes in the wild. My test infrastructure utilizes NAT-based servers (shared IPv4 address w/ an allocated v6 range). Ideally, I’d like to have the Sanic app listening on the IPv6 interface so that I don’t have to figure out the forwarded v4 ports for each VM.

Anyways… does Sanic support binding to an IPv6 address/interface? It's not a show-stopper if not, but it would make my life a whole lot easier.



Indeed, this is an appropriate forum for this question.

While I have not tried this myself, app.run does provide a sock argument to pass in an existing socket. Under the hood, it uses loop.create_server (see source), which allows for this.

I found this Stack Overflow answer that provides an example of binding an asyncio server to IPv6.

Putting this together, I was able to get the following to work:

from sanic import Sanic
from sanic.response import json
import socket

# Create an IPv6 socket and bind it before handing it to Sanic
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.bind(('::', 7777))

app = Sanic()

@app.route("/")
async def test(request):
    return json({"hello": "world"})

if __name__ == "__main__":
    app.run(sock=sock)

And, to test it out:

$ curl -g -6 "http://[::1]:7777/"


Perfect! Cheers, @ahopkins! I really appreciate the help.

Got me all sorted -

$ netstat -tulnp | grep 12345
tcp6       0      0 :::12345                :::*                    LISTEN      21563/python3
$ curl http://[::]:12345/status

My prior Google and Stack Overflow searches failed me, so I appreciate the linked resources. Also, I just wanted to thank you and the dev team, as Sanic is an awesome tool to use.

Thanks again,


:smiley: Glad to help, and happy to hear it worked for you.

And … now that I have done this once :thinking:, I might adopt this myself on a project.


@ahopkins we should get that in the docs. IPv6 is a real thing now and we should show people how they can do it!


Agreed. I started pulling together some code samples here and there, with very poor categorization so far, to add as examples and in the documentation as well. It's far from finished (yet), but it will eventually land on the website and in the documentation, and even in the test cases, since I'm not sure the existing ones cover some of this code.


Added documentation for this, and for unix sockets, here: https://github.com/huge-success/sanic/pull/1375
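For anyone landing here for the unix socket side: the same sock trick works with an AF_UNIX socket. A minimal sketch, assuming the socket path and cleanup step are your own choices (neither is prescribed by the PR):

```python
import os
import socket

SOCK_PATH = "/tmp/sanic.sock"  # hypothetical path, pick your own

# Remove a stale socket file left over from a previous run, then bind.
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(SOCK_PATH)

# Hand the bound socket to Sanic exactly as in the IPv6 example above:
#     app.run(sock=sock)
# and test it with:  curl --unix-socket /tmp/sanic.sock http://localhost/
```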


This is fun: from my experimentation, when supplying your own IPv6 socket, the socket is not immediately released on Sanic exit. I'm going to open an issue for it, but I imagine it's in the app exit cleanup somewhere.

The socket eventually does get freed when the kernel identifies it as dead and garbage collects it, from what I can tell - but it usually takes 60+ seconds to do so.

Other than in development, I can't imagine where this would be an issue, but it's still one imho, just the same.
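For what it's worth, a 60-plus second hold on an exited server's address sounds like it could be the kernel's TIME_WAIT state rather than anything Sanic-specific. A common workaround, sketched here against the socket setup from earlier in the thread, is to set SO_REUSEADDR before bind so the address can be rebound immediately (this is an assumption about the cause, and port 0 is used below only so the sketch binds anywhere):

```python
import socket

# Same IPv6 socket setup as above, but with SO_REUSEADDR set before
# bind so a restarted server can rebind without waiting out TIME_WAIT.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('::', 0))  # port 0 only to keep the sketch runnable anywhere

# Verify the option took effect before passing the socket to app.run(sock=sock)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))
sock.close()
```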


nice catch @sjsadowski


We just have to see whether the kernel will always reclaim the socket eventually, so it might not be entirely our responsibility. Anyhow, have you tried increasing your max_user_watches kernel param to see whether this is just a delay caused by round-robin polling (in case inotify is unable to watch another file)?

You can test this by raising max_user_watches:

  1. Check the current value:
    $ cat /proc/sys/fs/inotify/max_user_watches
  2. Then increase it:
    $ echo 32768 | sudo tee /proc/sys/fs/inotify/max_user_watches

You can choose a higher number just for this test :wink: