Transport closed

I keep getting these bizarre exceptions that I can’t solve:

Transport closed @ ('127.0.0.1', 34446) and exception experienced during error handling

They started happening after I upgraded from 0.7.0 to 0.8.3. I am not sure the upgrade is related, because my site also saw an increase in traffic around the same time. The errors are mysterious to me, so I don’t know how to debug them. The exceptions are thrown from server.py.

I am going to assume they are related to the RuntimeError exceptions below, which I also don’t know how to debug. What do they mean exactly?

Nov 27 05:01:06 ubuntu gunicorn[2097]: 2018-11-27 05:01:06,408 ERROR asyncio:683 | Fatal error on SSL transport
Nov 27 05:01:06 ubuntu gunicorn[2097]: protocol: <asyncio.sslproto.SSLProtocol object at 0x7f33df3e24e0>
Nov 27 05:01:06 ubuntu gunicorn[2097]: transport: <TCPTransport closed=True reading=False 0x2cb8878>
Nov 27 05:01:06 ubuntu gunicorn[2097]: Traceback (most recent call last):
Nov 27 05:01:06 ubuntu gunicorn[2097]:   File "/usr/lib/python3.6/asyncio/sslproto.py", line 641, in _process_write_backlog
Nov 27 05:01:06 ubuntu gunicorn[2097]:     self._on_handshake_complete)
Nov 27 05:01:06 ubuntu gunicorn[2097]:   File "/usr/lib/python3.6/asyncio/sslproto.py", line 128, in do_handshake
Nov 27 05:01:06 ubuntu gunicorn[2097]:     raise RuntimeError('handshake in progress or completed')
Nov 27 05:01:06 ubuntu gunicorn[2097]: RuntimeError: handshake in progress or completed
Nov 27 05:01:14 ubuntu gunicorn[2097]: 2018-11-27 05:01:14,516 ERROR root:443 | Transport closed @ ('127.0.0.1', 34896) and exception experienced during error handling
Nov 27 05:01:16 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34488) and exception experienced during error handling"]
Nov 27 05:01:17 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34490) and exception experienced during error handling"]
Nov 27 05:01:18 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34492) and exception experienced during error handling"]
Nov 27 05:01:18 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34494) and exception experienced during error handling"]
Nov 27 05:01:19 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34500) and exception experienced during error handling"]
Nov 27 05:01:20 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34502) and exception experienced during error handling"]
Nov 27 05:01:20 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34506) and exception experienced during error handling"]
Nov 27 05:01:21 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34514) and exception experienced during error handling"]
Nov 27 05:01:22 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34522) and exception experienced during error handling"]
Nov 27 05:01:23 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34526) and exception experienced during error handling"]
Nov 27 05:01:24 ubuntu gunicorn[2097]: ["Transport closed @ ('127.0.0.1', 34528) and exception experienced during error handling"]

@smlbiobot

I did a quick check, and based on that this seems more like an issue with asyncio’s SSL context implementation. Please take a look at the Bug and SSL Context Fatal Error.

The way Sanic does SSL management is much simpler. If you provide an SSL context to app.run via the ssl parameter, we use it as is; if you instead pass an SSL cert and key file, we create a default context using create_default_context from Python’s standard ssl library. In either case, if the ssl param is passed, the protocol defaults to https instead of http.

The SSL context created or provided above is passed down to loop.create_server, which manages the rest as it sees fit.
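A rough sketch of the logic described above, in plain Python (this is an illustration of the described behavior, not Sanic’s actual code; the function name is made up):

```python
import ssl

def build_ssl_context(ssl_param=None, cert=None, key=None):
    """Sketch: use a provided SSLContext as-is, otherwise build a
    default context from a cert/key pair, otherwise serve plain HTTP."""
    if ssl_param is not None:
        # An explicit SSLContext is passed straight through to
        # loop.create_server() unchanged.
        return ssl_param
    if cert and key:
        # Otherwise a default context is created from the cert/key files.
        context = ssl.create_default_context(purpose=ssl.Purpose.CLIENT_AUTH)
        context.load_cert_chain(cert, keyfile=key)
        return context
    return None  # no TLS: serve plain http
```

The resulting context (or None) is then handed to loop.create_server, so all of the actual TLS handshaking happens inside asyncio rather than in Sanic itself.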

The error is raised by Sanic when it fails to write the response back to the transport.

Thanks for your reply. However, I am not entirely sure what I need to do here. As an FYI, HTTPS is handled through Cloudflare, and I don’t actually store a cert on the server. The HTTPS connection gets passed through nginx, and Sanic simply serves everything as HTTP. Could that be the issue? Everything works as is, so I just left it this way.

So, to make sure I understand: your TLS terminates at nginx, and you are just proxying HTTP to Sanic, correct? Can you share your nginx config?

The actual config contains other stuff but here are the relevant parts

server {
    listen              80;
    listen              [::]:80;
    listen              443 ssl http2;
    listen              [::]:443 ssl http2;

    server_name    cr-api.com
                   www.cr-api.com
                   www.royaleapi.com
    ;
    return 301 https://royaleapi.com$request_uri;
}

server {
    listen              80;
    listen              [::]:80;
    listen              443 ssl http2;
    listen              [::]:443 ssl http2;

    server_name         royaleapi.com;

    location / {
        proxy_pass                          http://127.0.0.1:8000;
        proxy_set_header Host               $host;
        proxy_set_header X-Real-IP          $remote_addr;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme           $scheme;
        proxy_set_header X-Forwarded-Proto  https;
        proxy_set_header X-Forwarded-Port   443;

    }
}

I am not an nginx guru by any stretch of the imagination, but I think there is a problem with your config having both listen 80 and listen 443 in each server block.

I think what you really want is the following: redirect all HTTP to HTTPS, terminate TLS inside nginx, and forward the traffic on to the internal Sanic instance unencrypted.

server {
    listen              80;
    listen              [::]:80;

    ...

    return 301 https://royaleapi.com$request_uri;
}

server {
    listen              443 ssl http2;
    listen              [::]:443 ssl http2;

    ssl_certificate /path/to/tls.crt;
    ssl_certificate_key /path/to/tls.key;

    location / {
        proxy_pass                          http://127.0.0.1:8000;
        ...
    }
}

When you make changes to your config, be sure to run nginx -t to verify that there are no mistakes.

Actually, if you take a closer look, the first server block listens for traffic to a roster of domains and redirects it to the right domain. That’s all it does. In fact, perhaps it would be clearer if I simply omitted it, but I included it to show that all HTTP traffic and all the domains are redirected to a single domain over HTTPS.

As for the SSL cert, as I said, we don’t put it on the server, which is why it’s omitted; it’s attached automatically by Cloudflare. But this issue is not really about the nginx config.

I only included the nginx config because someone asked to see it. I’ve never had an issue with my config and I don’t plan on changing it. I do want to know why I am getting the transport closed errors.

This started happening again. This time I can reproduce it in my local dev environment, so it has nothing to do with the SSL configuration.

My logs:

2019-05-08 19:04:09,399 DEBUG sanic.root:225 | KeepAlive Timeout. Closing connection.
2019-05-08 19:04:57,146 ERROR sanic.root:499 | Transport closed @ ('127.0.0.1', 53958) and exception experienced during error handling
2019-05-08 19:04:57,152 DEBUG sanic.root:501 | Exception:
Traceback (most recent call last):
  File "/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py", line 472, in write_error
    self.transport.write(response.output(version))
AttributeError: 'coroutine' object has no attribute 'output'
/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py:213: RuntimeWarning:

coroutine 'handle_service_unavailable' was never awaited
Coroutine created at (most recent call last)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/popen_fork.py", line 74, in _launch
    code = process_obj._bootstrap()
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py", line 721, in serve
    loop.run_forever()
  File "/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py", line 213, in response_timeout_callback
    self.write_error(ServiceUnavailable("Response Timeout"))
  File "/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py", line 470, in write_error
    response = self.error_handler.response(self.request, exception)
  File "/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/handlers.py", line 86, in response
    response = handler(request, exception)

Sentry responded with an API error: RateLimited(Creation of this event was denied due to rate limiting)
["Transport closed @ ('127.0.0.1', 53958) and exception experienced during error handling"]
2019-05-08 19:05:25,670 ERROR sanic.root:499 | Transport closed @ ('127.0.0.1', 53992) and exception experienced during error handling
2019-05-08 19:05:25,674 DEBUG sanic.root:501 | Exception:
Traceback (most recent call last):
  File "/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py", line 472, in write_error
    self.transport.write(response.output(version))
AttributeError: 'coroutine' object has no attribute 'output'
/Users/sml/.virtualenvs/cr-api-web/lib/python3.7/site-packages/sanic/server.py:213: RuntimeWarning:
AttributeError: 'coroutine' object has no attribute 'output'

This looks to me like you may have a missing await keyword somewhere. The output attribute is supposed to be on the return value of a view method (which should be an instance of HTTPResponse).
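The failure mode can be reproduced in plain Python: calling an async def function without awaiting it returns a coroutine object, which has none of the attributes of the value it would eventually produce. (The HTTPResponse class here is a minimal stand-in, not Sanic’s real one.)

```python
import asyncio

class HTTPResponse:
    """Minimal stand-in for a response object with an output() method."""
    def output(self, version="1.1"):
        return b"HTTP/" + version.encode() + b" 200 OK\r\n\r\n"

async def async_handler(request, exception):
    return HTTPResponse()

# Calling the coroutine function does NOT run it -- it just creates
# a coroutine object, which has no .output attribute.
response = async_handler(None, None)
try:
    response.output("1.1")
except AttributeError as e:
    print(e)  # 'coroutine' object has no attribute 'output'
finally:
    response.close()  # silence the "never awaited" RuntimeWarning

# Awaiting it yields the real response, and .output() works as expected.
async def main():
    real = await async_handler(None, None)
    return real.output("1.1")

print(asyncio.run(main()))
```

This is exactly the pair of symptoms in the log above: the AttributeError on output, plus the "coroutine 'handle_service_unavailable' was never awaited" RuntimeWarning.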

This is different from the issue you were having earlier.

I think to be able to debug this, we need to see more of the code that you have that is causing this.

Thank you. I found the culprit. Originally I wrote this as a coroutine, thinking it should be one, since in the docs the handlers are also coroutines: https://sanic.readthedocs.io/en/latest/sanic/exceptions.html. Changing it to a regular function (removing async) fixed the issue. What I would like to know, then, is how I can tell when I should use a coroutine to handle these exceptions?

@bp_error.exception(ServiceUnavailable)
def handle_service_unavailable(request, exception):
    """Handle service unavailable"""
    logger.exception(exception)
    return request.app.jinja.render(
        "exception/500.html",
        request,
        exception=exception,
    )

You are right in that, in general, view handlers can be written as either a function or a coroutine; both work fine. If an exception handler on a blueprint is not working that way, then maybe this is a bug that needs some investigation.
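As a simplified sketch of why this can break (not Sanic’s actual code, and the dispatch function is made up): if the error-handling call site invokes the handler synchronously and never awaits the result, a plain function works but a coroutine function hands back an unusable coroutine object.

```python
import inspect

def dispatch(handler, request, exception):
    """Simplified error-handler dispatch. A sync-only caller like this
    cannot await a coroutine, so an async handler's result is useless."""
    response = handler(request, exception)
    if inspect.iscoroutine(response):
        # Sanic's write_error path hit exactly this case, producing the
        # AttributeError on .output seen in the logs above.
        response.close()
        raise TypeError("handler returned a coroutine; it was never awaited")
    return response

def sync_handler(request, exception):
    return "500 page"

async def async_handler(request, exception):
    return "500 page"

print(dispatch(sync_handler, None, None))   # works: '500 page'
# dispatch(async_handler, None, None)       # would raise TypeError
```

So until the dispatch path awaits coroutine handlers, a plain function is the safe choice for the exception handler.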
