Calculating velocity metrics differently
Danielskry opened this issue · comments
Hi, I am wondering if commit sizes should play a more significant role and be included in the calculations.
As I understand it, deployment frequency is calculated as "how often an organization successfully releases to production" [1], i.e., a count of successful deployments. However, as far as I know, this does not take into account the size of the changes (e.g., additions and deletions) that each successful deployment comprises. Wouldn't it be more appropriate to calculate a deployed value before the deployment frequency, in order to ensure better construct validity? That is, summarize each deployment as the total size of the code it includes. The deployed value can then be calculated as:
$$ f(\textrm{additions}, \textrm{deletions}) = \sum_{i=1}^{n}(\textrm{addition}_{i}+\textrm{deletion}_{i})$$
before being classified into the Four Keys tiers for deployment frequency (e.g., the Daily bucket), where $n$ is the number of commits included in the deployment, and $\textrm{addition}_{i}$ and $\textrm{deletion}_{i}$ are the lines added and deleted in commit $i$.
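To make the proposal concrete, here is a minimal sketch of how such a deployed value could be computed. The data shape (a list of commit dicts with `additions` and `deletions` counts) is an assumption for illustration, not the actual Four Keys schema:

```python
# Hypothetical sketch: weight each deployment by the total lines changed
# (additions + deletions) across the commits it contains, before any
# deployment-frequency bucketing. Field names are illustrative.

def deployed_value(commits):
    """Sum of additions + deletions over all commits in one deployment."""
    return sum(c["additions"] + c["deletions"] for c in commits)

deployment = [
    {"sha": "a1b2c3", "additions": 120, "deletions": 45},
    {"sha": "d4e5f6", "additions": 10, "deletions": 2},
]

print(deployed_value(deployment))  # 177
```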
This may also apply to lead time for changes, which tries to measure the time it takes to go from "code committed to code successfully running in production" [1]. This too seems not to factor in the size of the code being committed. One way to include it could be to calculate the lead time per change before calculating the overall lead time for changes, i.e., divide the lead time of each committed change by its own size (e.g., additions and deletions):
$$ t(\textrm{additions}, \textrm{deletions})=\sum_{i=1}^{n}\frac{d(x_{i}, y_{i})}{\textrm{addition}_{i}+\textrm{deletion}_{i}} $$
where $d(x_{i}, y_{i})$ is the lead time for commit $i$, i.e., the time elapsed between when the commit was made ($x_{i}$) and when it was deployed ($y_{i}$).
Lead time per change would then factor in that smaller commits may take less time to change, and vice versa, rather than just summing the raw time between when a commit happened and when the deployment happened. This could increase construct validity further, as it may lead to a more reliable lead time for changes measure.
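The size-normalized lead time above can be sketched as follows. The field names (`committed_at`, `deployed_at`, `additions`, `deletions`) and the hours-per-line unit are assumptions for illustration only:

```python
# Hypothetical sketch of lead time per change: each commit's lead time
# (deployment time minus commit time) is divided by its size in lines
# changed, then summed, per the formula t(additions, deletions) above.
from datetime import datetime

def lead_time_per_change(commits):
    """Sum over commits of lead time (hours) divided by lines changed."""
    total = 0.0
    for c in commits:
        lead_hours = (c["deployed_at"] - c["committed_at"]).total_seconds() / 3600.0
        size = c["additions"] + c["deletions"]
        total += lead_hours / size
    return total

commits = [
    # 8 h lead / 160 lines = 0.05 h per line
    {"committed_at": datetime(2023, 1, 1, 9), "deployed_at": datetime(2023, 1, 1, 17),
     "additions": 100, "deletions": 60},
    # 2 h lead / 20 lines = 0.1 h per line
    {"committed_at": datetime(2023, 1, 2, 10), "deployed_at": datetime(2023, 1, 2, 12),
     "additions": 15, "deletions": 5},
]

print(lead_time_per_change(commits))  # ≈ 0.15 hours per line changed
```

Note how the larger commit contributes less per line than the smaller one, which is exactly the weighting the proposal is after: a short lead time on a big change is rewarded relative to the same lead time on a tiny change.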