Include bandit in pipeline



Out of curiosity, I ran bandit (a SAST tool for Python) against sanic:master this morning and found a few gems.

First, the command I ran:

(sanic-T0ioAge5) [ssadowski@host |git:  (master)| sanic]$bandit -s B104 -x tests/ -r ./

This tells bandit to run recursively, skip check B104 (hardcoded bind to all interfaces), and exclude the tests directory.

The report, which looks pretty good overall IMHO:

Code scanned:
	Total lines of code: 5105
	Total lines skipped (#nosec): 0

Run metrics:
	Total issues (by severity):
		Undefined: 0.0
		Low: 7.0
		Medium: 2.0
		High: 0.0
	Total issues (by confidence):
		Undefined: 0.0
		Low: 1.0
		Medium: 0.0
		High: 8.0
Files skipped (0):

One of the mediums I believe is ignorable:

>> Issue: [B102:exec_used] Use of exec detected.
   Severity: Medium   Confidence: High
   Location: ./sanic/
   More Info:
83	            with open(filename) as config_file:
84	                exec(
85	                    compile(config_file.read(), filename, "exec"),
86	                    module.__dict__,
87	                )
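If we agree the `exec` is intentional, bandit can be told to suppress the B102 finding with a `# nosec` marker on the flagged line. A minimal sketch of that pattern (`load_config` and the temp-file usage here are illustrative, not Sanic's actual API):

```python
import os
import tempfile
import types


def load_config(filename):
    """Load a Python config file into a fresh module, Sanic-style."""
    module = types.ModuleType("config")
    with open(filename) as config_file:
        exec(  # nosec B102 -- path is operator-supplied, not user input
            compile(config_file.read(), filename, "exec"),
            module.__dict__,
        )
    return module


# Usage: write a throwaway config file and load it.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("DEBUG = True\n")
    path = f.name
cfg = load_config(path)
os.unlink(path)
print(cfg.DEBUG)  # True
```

With the marker in place, the line shows up under "Total lines skipped (#nosec)" in the report instead of as an issue.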

The other we may want to correct:

>> Issue: [B604:any_other_function_with_shell_equals_true] Function call with shell=True parameter identified, possible security issue.
   Severity: Medium   Confidence: Low
   Location: ./sanic/
   More Info:
53	        args=(cmd,),
54	        kwargs=dict(shell=True, env=new_environ),
55	    )
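For reference, the usual remedy bandit suggests for B604-type findings is to pass an explicit argument list and drop `shell=True`; whether that is feasible for the reloader's `cmd` string is a separate question. A hedged sketch (the command shown is illustrative):

```python
import subprocess
import sys

# Flagged pattern (illustrative): running a command string with shell=True.
# The safer equivalent is an explicit argv list, which never invokes a shell:
proc = subprocess.run(
    [sys.executable, "-c", "print('restarted')"],  # explicit argv, no shell
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
print(proc.stdout.strip())  # restarted
```

The trade-off is that shell features (pipes, globbing, env expansion in the command string) stop working, which is sometimes why `shell=True` was there in the first place.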

I’m open to thoughts, but I think it would be nice if we got a report on our basic code security with every PR/merge.


I think we can ignore the reloader finding for now, since we plan to separate the reloader from core Sanic, so …


I’ve never used it. How would we integrate it? Does it run from the CLI as part of Travis, or is it separate?


Yes, it’s CLI based, and it can dump output in different formats. We would run it as part of the Travis pipeline.
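One way the Travis integration could look — a hypothetical `.travis.yml` fragment where the job layout is an assumption and the flags mirror the command above:

```yaml
# Hypothetical .travis.yml fragment: run bandit as its own job and
# fail the build on any finding (bandit exits non-zero when issues are found).
matrix:
  include:
    - python: "3.6"
      env: TASK=security
install:
  - pip install bandit
script:
  - bandit -s B104 -x tests/ -r ./
```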


We used to run this as a qualification gate. We generated a report for the existing state of the code (the baseline), and from that point on the scan ran with each PR to ensure it didn’t introduce anything new that wasn’t already in the baseline.
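bandit has a built-in `-b`/`--baseline` flag that compares a run against a saved JSON report, which covers exactly this workflow. The gating logic itself boils down to something like the sketch below (the report dicts are synthetic, but they follow the shape of bandit's JSON `results` output):

```python
import json


def issue_key(issue):
    # Identify a finding by test id, file, and line. Real gates often key
    # on the code snippet instead, since line numbers drift between PRs.
    return (issue["test_id"], issue["filename"], issue["line_number"])


def new_issues(baseline, current):
    """Return findings in `current` that are absent from `baseline`."""
    seen = {issue_key(i) for i in baseline["results"]}
    return [i for i in current["results"] if issue_key(i) not in seen]


# Synthetic example: one pre-existing finding, one newly introduced one.
baseline = {"results": [
    {"test_id": "B102", "filename": "sanic/config.py", "line_number": 84},
]}
current = {"results": [
    {"test_id": "B102", "filename": "sanic/config.py", "line_number": 84},
    {"test_id": "B602", "filename": "sanic/reloader.py", "line_number": 10},
]}

introduced = new_issues(baseline, current)
print(len(introduced))  # 1 -- only the B602 finding is new
```

In CI, a non-empty `introduced` list would fail the build while the grandfathered baseline findings stay tracked but non-blocking.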

We also had a custom script to export the report in an easy-to-monitor format and mail it to the InfoSec and OpSec teams.


I like the idea. I also would love to see something like this for targeted benchmarks.


pyresttest has some really good benchmarking features.

locust can be useful for load simulation during benchmarking.


I still don’t have everything I need to do this right now (I barely got through one simple test, to be honest), but I’d really like to try fuzz testing Sanic using python-afl. You can see more information about how h11 uses it here.