CPAN Testers 2.0 end-January update

The bad news is that we’re still about two weeks behind schedule. The good news is that we’re not falling further behind, and in some areas we’re already ahead.

As I wrote in the last update, a number of my early-January tasks for revising the Metabase libraries didn’t get done and were blocking progress on other fronts. That work is now pretty much complete and I’m ready to turn my attention to the actual migration of legacy reports into a Metabase repository. Once that’s done, we’ll be able to test using it to feed the cpantesters.org databases that Barbie has been preparing for the conversion.

CPAN Testers 2.0 activity in the last couple of weeks:

  • I revised the Metabase framework libraries to split user profile information and user authentication into separate facts. This also meant revising the user-profile generation program to match. (See the first sketch below.)
  • I implemented new Metabase::Resource classes to standardize the extraction of indexing data from Metabase::Fact resource strings. (E.g., cpan://distfile/DAGOLDEN/Capture-Tiny-0.07.tar.gz can be indexed under author “DAGOLDEN”, distribution name “Capture-Tiny”, and so on; see the second sketch below.) This wasn’t on the plan, but I discovered that Ricardo and I had never actually gotten around to implementing these classes, so they had to go from stubs to working code.
  • I confirmed that despite all the Metabase framework changes, I could still launch a local Metabase server and send CPAN::Reporter test reports to it via Test::Reporter::Transport::Metabase. (Thanks to Florian Ragwitz and Matt Trout for patches and guidance, respectively, on updating the Metabase web server for a more modern Catalyst runtime.) The relevant CPAN::Reporter configuration is sketched below as well.
  • Barbie converted the cpantesters.org backend databases to index on GUIDs instead of NNTP IDs. Existing legacy reports had their NNTP IDs mapped to GUIDs.
  • Barbie and I agreed to have cpantesters.org get updates directly from the Amazon back-end rather than going through a web server. This postpones the need to implement search capability through the web until after launch.
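
First, to make the profile/authentication split concrete, here’s roughly what creating the two separate facts looks like now. Consider this a sketch only: the user, email address, and secret are made up, and the constructor details may still shift as the framework settles.

    use Metabase::User::Profile;
    use Metabase::User::Secret;

    # Public identity information lives in the profile fact...
    my $profile = Metabase::User::Profile->create(
        full_name     => 'John Doe',            # hypothetical user
        email_address => 'jdoe@example.com',
    );

    # ...while the shared secret used to authenticate report submissions
    # is now a separate fact tied to the profile's resource URI.
    my $secret = Metabase::User::Secret->new(
        resource => $profile->resource,
        content  => 'aixuZuo8',                 # hypothetical secret
    );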

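Second, the Metabase::Resource classes work roughly like this: hand one a resource string and it decomposes the string into individually indexable metadata fields. Again, this is a sketch; the field names shown are illustrative.

    use Metabase::Resource;

    # Parse a CPAN distfile resource string into its indexable parts.
    my $resource = Metabase::Resource->new(
        'cpan://distfile/DAGOLDEN/Capture-Tiny-0.07.tar.gz'
    );

    # Each metadata entry can then be indexed on its own, e.g.:
    my $meta = $resource->metadata;
    #   $meta->{cpan_id}      # 'DAGOLDEN'
    #   $meta->{dist_name}    # 'Capture-Tiny'
    #   $meta->{dist_version} # '0.07'
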
The exact semantics of direct search against the back-end have yet to be worked out, but I’ve decided to hold off on that until we have a Metabase of historical records to experiment against.
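
And for anyone curious about the smoke-testing setup from the third bullet: pointing CPAN::Reporter at a Metabase server comes down to a transport line in its config file. The local server URI and email address below are hypothetical.

    # ~/.cpanreporter/config.ini (relevant lines only)
    email_from = jdoe@example.com
    transport = Metabase uri http://localhost:3000/api/v1/ id_file metabase_id.json

The id_file points at the metabase_id.json file produced by the user-profile generation program mentioned above.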

At this point, the critical path is the conversion of old articles to the Metabase and the deployment of a web server to inject new reports into it. I’m working on the first and hope to have both done by mid-February. That leaves us a tight two-week window for testing, so stay tuned for the next update.

•      •      •

If you enjoyed this or have feedback, please let me know.