Docstring conventions are one of my pet peeves. My view is that if you need to pick some arbitrary convention, you might as well pick an official one. In that spirit, I’m an advocate of PEP257, the official Python docstring convention guide. Much like with PEP8, I said to myself “surely there’s a script to enforce it and report on errors”. Lo and behold – GreenSteam/pep257.
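For instance, PEP257 prescribes (among other things) that docstrings use triple double quotes and that a one-liner reads like a command and ends with a period. A minimal illustration (the function itself is made up for this example):

```python
def square(n):
    """Return the square of n."""  # PEP257-style one-liner: triple double
    return n * n                   # quotes, command-like phrase, closing period.
```
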
Since finding the pep257 script, I’ve added a few features I needed. Yesterday I noticed that the build had been failing for some time. The CI build is configured to run against several different Python versions (2.6.x, 2.7.x, etc.), and the tests failed for some of them. I decided to try and solve the failures.
Get the Facts
Well, my first action was to run the tests and see the failures. I ran the py.test command and the tests passed. I looked over at the Travis CI build log and saw that the command run there was py.test --pep8. Indeed, the log also included a failed PEP8 check. Easy peasy – fixing PEP8 violations is pretty straightforward and doesn’t require intimate familiarity with the code base. I ran the tests with the PEP8 checks and fixed them; there were just a few comment formatting problems. I eagerly typed the test command again and…
The weird part about this is that it wasn’t PEP8 that failed the tests. It was an actual test that failed. I tried debugging it for some time, but I didn’t find the bug. So – what next?
My First Operation
Nurse, hand me the git bisect please!
So, since I’m only mildly familiar with the code and I failed to debug it, I took this opportunity to use a tool I’d never tried before – git bisect. For those who aren’t familiar with it, here’s the basic workflow:
- Start the bisect process (git bisect start).
- Find and mark at least one good and one bad commit manually (by doing git bisect good or git bisect bad).
- Let git drive the search. It will update the working directory to a commit in the middle of the range (in effect, it runs a binary search) and ask you whether that commit is “good” or “bad”. After you answer, it will move on to the next commit in the search, until it has found the commit that ruined the codebase. (With git bisect run you can even supply a test command whose exit code answers for you.)
Except I never got to the third step – I couldn’t find a single bad commit.
THIS IS IMPOSSIBLE.
I CAN SEE THE BUILD IS FAILING FOR THESE COMMITS IN TRAVIS.
After a long time of jumping between commits and running the tests, I had an idea. When I first ran the tests on the master branch, they passed. On the second run, they didn’t. Maybe that’s what happened in all these other commits as well? Maybe the commits weren’t good? Maybe they just passed because I only ran the tests once? I jumped to another early commit and ran the tests. They passed. Arrow up and ENTER. They failed. What the hell is going on?
Well, one of the test files used __file__ to refer to itself. The key thing to see here is the __file__ attribute. It’s pretty convenient to use it to refer to the file containing it, but here it caused the bug. The reason is that __file__ doesn’t always reference the same value. On the first run, it referred to the source file – i.e., <codebase>/test.py. On the second run, it referred to the bytecode file – i.e., <codebase>/test.pyc.
The bytecode doesn’t exist at the beginning of the first run, which is why __file__ can’t refer to it. However, since the file wasn’t changed between two consecutive runs, Python didn’t have to rewrite the bytecode file, and so __file__ could refer to it.
But why did this behavior cause the bug?
In most cases, the use of __file__ works just fine whether it refers to the source or the bytecode file. However, pep257 is a static analysis tool. This means that it only accepts source files as input. Docstrings have no meaning in bytecode form, so pep257 doesn’t report any errors when handed such a file. The solution was to simply replace __file__ with an explicit path to the source file, rather than the executing file (which is either the source or the bytecode).
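A minimal sketch of that kind of fix (the helper name is mine, not pep257’s actual code), assuming Python 2’s layout where the bytecode path is the source path plus a trailing ‘c’ or ‘o’:

```python
def source_path(path):
    """Map a module's __file__ value back to its source file.

    On Python 2, __file__ may point at the compiled file (.pyc/.pyo)
    on a second run; stripping the trailing letter recovers the .py
    source that a static analysis tool like pep257 expects.
    """
    if path.endswith(('.pyc', '.pyo')):
        return path[:-1]
    return path


# In the test, hand the checker source_path(__file__) instead of
# __file__ itself, so it always receives the source file.
```
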
I manage a small software team at work. One of my team members was assigned a debugging task, and he kept complaining about it. About two hours after he was assigned, he came to me and said: “I looked at the code and I have no idea what’s wrong. I suck at debugging. Please give this task to someone else!”
Here’s my answer to these kinds of claims:
If you know what the bug is, you’re not debugging. You’re fixing the code. There are such things as “good” bugs. “Good” bugs are bugs that change your understanding of something; they’re bugs you call other people to see – you tell them the problem, then the solution and wait for that realization to hit them, just as it hit you; they’re bugs that make you proud of solving and sometimes they’re bugs that cause you to face-palm. They all start with looking at the code and shouting “THIS DOESN’T MAKE SENSE!”.
This is what we do.