Originally Posted by fms:
To me, the number of downloads is a sufficient measure of repo popularity.
Not really, since everything has to go through Extras-Devel and Extras-Testing, but not everything from Extras-Devel and Extras-Testing makes it to Extras. If I push 3 in-development versions to -devel and each is installed by 10 people, but only the last one works, that's 20 downloads of broken software and 10 downloads of working software.

When that version gets through to Extras and is used by 10 people, it gets 10 downloads of working software.

Since Maemo pushes updates for whatever repos you have enabled, you could see more apparent "users of -devel" in this (not uncommon) scenario than there actually are.
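
To make the arithmetic concrete, here's a minimal Python sketch of that scenario (the numbers are the hypothetical ones from the example above, not real stats):

Code:
USERS = 10           # people with the package installed from Extras-Devel
DEVEL_VERSIONS = 3   # in-development versions pushed; only the last works

devel_downloads = USERS * DEVEL_VERSIONS          # 30: every user pulls every update
broken_downloads = USERS * (DEVEL_VERSIONS - 1)   # 20 downloads of broken builds
extras_downloads = USERS                          # 10: only the final version reaches Extras

print("Extras-Devel downloads: %d (%d broken)" % (devel_downloads, broken_downloads))
print("Extras downloads: %d" % extras_downloads)
print("Apparent -devel users inflated by %dx" % (devel_downloads // extras_downloads))

So raw download counts make -devel look three times more popular than Extras, even though exactly the same 10 people are using the software.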

I think there are, however, useful stats we could use to gauge the success of the QA criteria and the -testing process. We could also track these to see the effect of various changes and strive to improve the numbers (a rough sketch of computing the first one follows the list):
  1. A comparison between the number of Section: user/* packages in Extras-Devel, Extras-Testing and Extras.
  2. The average number of downloads per day for the current version of each user-facing package in each repo.
  3. The average time it takes for a version which eventually reaches Extras to get there.
  4. The number of "user/" packages older than 10 days in Extras-Testing.
  5. The number of separate testers.
  6. The average number of packages rated by each tester each month.
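
As a rough illustration, here's a Python sketch of the first metric: counting Section: user/* stanzas in each repository's Debian Packages index. The repository URLs and the suite/architecture paths are my guesses at the repository.maemo.org layout, so treat them as placeholders:

Code:
import urllib.request

# Placeholder URLs -- adjust to the real repository layout.
REPOS = {
    "extras-devel":   "http://repository.maemo.org/extras-devel/dists/fremantle/free/binary-armel/Packages",
    "extras-testing": "http://repository.maemo.org/extras-testing/dists/fremantle/free/binary-armel/Packages",
    "extras":         "http://repository.maemo.org/extras/dists/fremantle/free/binary-armel/Packages",
}

def count_user_packages(packages_text):
    # Debian Packages files are blank-line-separated stanzas; count those
    # whose Section field starts with "user/".
    count = 0
    for stanza in packages_text.split("\n\n"):
        for line in stanza.splitlines():
            if line.startswith("Section:") and line.split(":", 1)[1].strip().startswith("user/"):
                count += 1
                break
    return count

for name, url in REPOS.items():
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8", errors="replace")
    print("%s: %d user/* packages" % (name, count_user_packages(text)))

Metrics 2-6 would need download stats, upload timestamps and the testers' rating history on top of the same package data.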
__________________
Andrew Flegg -- mailto:andrew@bleb.org | http://www.bleb.org