Thread: [extras-testing QA] giving thumbs up in testing without following QA
Jaffa
2010-08-23, 13:16
Posts: 2,535 | Thanked: 6,681 times | Joined on Mar 2008 @ UK
#32
Originally Posted by fms
To me, the number of downloads is a sufficient measure of repo popularity.
Not really, since everything has to go through Extras-Devel and Extras-Testing - but not everything from Extras-Devel and Extras-Testing makes it to Extras. If I push 3 in-development versions to -devel, and each is installed by the same 10 people but only the last one works, that's 20 downloads of broken software and 10 downloads of working software.
When that version gets through to Extras and is used by 10 people, it gets ten downloads of working software.
Since Maemo pushes updates from whichever repos you have enabled, this (not uncommon) scenario could make the number of "-devel users" look higher than it actually is.
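The arithmetic in the scenario above can be sketched in a few lines (the numbers are the hypothetical ones from the post, and Python is used only for illustration):

```python
# Hypothetical scenario from the post: 3 in-development versions are
# pushed to Extras-Devel, each installed by the same 10 people, but
# only the last version works and reaches Extras.
users = 10
devel_versions = 3    # every user downloads every -devel update
extras_versions = 1   # only the working version reaches Extras

devel_downloads = users * devel_versions    # 20 broken + 10 working
extras_downloads = users * extras_versions  # 10 working

print(devel_downloads, extras_downloads)
```

Same 10 users in both cases, yet -devel records three times the downloads - which is why raw download counts overstate -devel's popularity.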
I think there are, however, useful stats we can use to gauge the success of the QA criteria and the -testing process. We could also measure these to see the effect of various changes, and strive for the numbers to be as good as possible:
A comparison between the number of Section: user/* packages in Extras-Devel, Extras-Testing and Extras.
The average number of downloads per day for the current version of each user-facing package in each repo.
The average time it takes for a version that eventually reaches Extras to get through -testing.
The number of "user/" packages older than 10 days in Extras-Testing.
The number of separate testers.
The average number of packages rated by each tester each month.
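Most of the stats above could be computed mechanically from the repository metadata. A minimal sketch, assuming a hypothetical list of package records - the field names and values here are invented for illustration, not the real maemo.org repository schema:

```python
# Hypothetical package records: (name, repo, days_in_testing, downloads_per_day).
# days_in_testing is None for packages not currently in Extras-Testing.
packages = [
    ("foo", "extras-devel",   None, 4.0),
    ("foo", "extras-testing", 14,   2.0),
    ("bar", "extras-testing", 6,    1.0),
    ("baz", "extras",         None, 9.0),
]

# 1. Package counts per repository (for the -devel/-testing/Extras comparison).
counts = {}
for _, repo, _, _ in packages:
    counts[repo] = counts.get(repo, 0) + 1

# 2. Average downloads per day per package, per repository.
avg_downloads = {
    repo: sum(d for _, r, _, d in packages if r == repo) / n
    for repo, n in counts.items()
}

# 3. user/* packages stuck in Extras-Testing for more than 10 days.
stale = [name for name, repo, days, _ in packages
         if repo == "extras-testing" and days is not None and days > 10]

print(counts)         # {'extras-devel': 1, 'extras-testing': 2, 'extras': 1}
print(avg_downloads)  # {'extras-devel': 4.0, 'extras-testing': 1.5, 'extras': 9.0}
print(stale)          # ['foo']
```

The tester stats (number of testers, ratings per tester per month) would come from a separate ratings log, but would reduce to the same kind of grouping and averaging.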
__________________
Andrew Flegg -- mailto:andrew@bleb.org | http://www.bleb.org