An Objective Function for Code Refactoring
Have you ever set an objective function for code refactoring, where, for every proposed total change (e.g. reviewable pull request), you seek to maximize the change in this function? An example:[^1]
\[ \frac{\log_2(\text{pct}_{\text{LOC tested}}) \cdot \text{pct}_{\text{importables documented}} \cdot \text{pct}_{\text{LOC no state}}}{n_{\text{LOC}}} \]
Good (numerator stuff):
- Percent Lines of Code (LOC) covered by tests. The log term gives sublinear growth here, i.e. diminishing returns on “getting to 100%”. An off-the-shelf tool like Coverage.py will be fine here.
- Percent “public” units, i.e. non-underscored module importables (functions, classes, constants, variables/objects), covered by tutorial/how-to-guide/explanation (so excluding reference) documentation. Maximizes code consumers’ ability to understand functionality without having to dive into the codebase. Prior art on measuring this here.
- Percent LOC unaffected by state (i.e. avoiding getting values from or calling methods on long-lived references). Pure-functional code is easier to reason about (e.g. via a simple substitution model of execution) and thus more maintainable. My strategy for measuring this would be to designate certain (sub)packages/modules as purely functional. A rough sketch of measuring all three of these metrics follows this list.
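
Here is a minimal sketch of how those three percentages could be gathered in Python. Everything below is illustrative: it assumes a Coverage.py data file already exists from a prior `coverage run`, that narrative (tutorial/how-to/explanation) docs live as Markdown under a `docs/` directory, and that the purely functional modules are hand-designated in `PURE_MODULES`; none of those names are prescribed by the formula itself.

```python
import ast
import io
from pathlib import Path

import coverage  # Coverage.py

# Hypothetical hand-designated set of purely functional (state-free) modules.
PURE_MODULES = {"mypkg/transforms.py", "mypkg/validation.py"}


def pct_loc_tested(data_file: str = ".coverage") -> float:
    """Total percent of statements covered, read from an existing Coverage.py data file."""
    cov = coverage.Coverage(data_file=data_file)
    cov.load()
    # Coverage.report() returns the total coverage percentage as a float.
    return cov.report(file=io.StringIO())


def public_importables(module: Path) -> set[str]:
    """Top-level, non-underscored names a module exposes."""
    names: set[str] = set()
    for node in ast.parse(module.read_text()).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)
        elif isinstance(node, ast.Assign):
            names |= {t.id for t in node.targets if isinstance(t, ast.Name)}
    return {n for n in names if not n.startswith("_")}


def pct_importables_documented(pkg: Path, docs: Path) -> float:
    """Percent of public importables mentioned anywhere in the narrative docs."""
    docs_text = "\n".join(p.read_text() for p in docs.rglob("*.md"))
    names = set().union(*(public_importables(m) for m in pkg.rglob("*.py")))
    return 100.0 * sum(n in docs_text for n in names) / len(names) if names else 100.0


def loc(path: Path) -> int:
    """Non-blank lines, as a crude LOC count."""
    return sum(1 for line in path.read_text().splitlines() if line.strip())


def pct_loc_no_state(pkg: Path) -> float:
    """Percent of LOC living in modules designated as purely functional."""
    total = sum(loc(p) for p in pkg.rglob("*.py"))
    pure = sum(loc(p) for p in pkg.rglob("*.py") if p.as_posix() in PURE_MODULES)
    return 100.0 * pure / total if total else 0.0
```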
Bad (denominator stuff):
- total LOC
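
And a sketch of the objective itself, reusing the helpers from the sketch above. Two assumptions of mine that the formula doesn’t pin down: the percentages are on a 0–100 scale (so the log term stays positive whenever any coverage exists), and n_LOC is the total non-blank line count of the package.

```python
import math
from pathlib import Path


def n_loc(pkg: Path) -> int:
    """Total non-blank LOC across the package (the denominator)."""
    return sum(loc(p) for p in pkg.rglob("*.py"))


def objective(pkg: Path, docs: Path) -> float:
    """log2(pct LOC tested) * pct importables documented * pct LOC no-state / n LOC."""
    # Assumes pct_loc_tested() > 0; with percentages on a 0-100 scale, log2(100) ~= 6.64.
    return (
        math.log2(pct_loc_tested())
        * pct_importables_documented(pkg, docs)
        * pct_loc_no_state(pkg)
        / n_loc(pkg)
    )
```

A proposed pull request would then be judged by its delta: compute `objective(...)` on the base branch and again with the change applied, and prefer changes that raise it.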
What metrics correlate with code-refactoring success in your experience? These? Others?
References
[^1]: If the equation doesn’t render for you: <img title="refactoring-objective-function" alt="refactoring-objective-function.png" width="80%" src="/img/refactoring-objective-function.png" />