MetaMask / module-lint

Analyzes one or more repos for divergence from a template repo.

Automate module-lint report generation

Gudahtt opened this issue

We want a module-lint report to be automatically generated and shared on a regular basis (e.g. maybe once every week or two?). This report should include a summary of how many rules are passing in each repository, and a link to further details about any errors that may exist. Ideally this report would be shared in Slack.

Acceptance criteria:

  • A report is generated on a regular interval
  • A report summary is shared to Slack, which includes the percentage of rules passing for each repository
    • Perhaps we could use a Slack thread, and leave a post on that thread per-repo or per group of repos, to reduce the vertical space this would take up in Slack.
  • A link to the full error output should be included in the Slack post
    • The error output should be subdivided per repository or group of repositories, to aid readability
  • The report should include all actively maintained TypeScript libraries in our organizations, excluding those that we know to be incompatible with our rules (i.e. those that produce false negatives or false positives; real failures are OK)
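The per-repo Slack summary above could be assembled in the workflow once pass/fail counts are known. A minimal sketch, assuming a hypothetical `results` array shape (`repository`, `passed`, `total`) and posting via a standard Slack incoming webhook:

```javascript
// Hypothetical sketch: build the Slack summary text from per-repo results.
// The `results` shape and the message wording are assumptions, not a spec.
function buildSummaryMessage(results) {
  const lines = results.map(({ repository, passed, total }) => {
    const percent = ((100 * passed) / total).toFixed(0);
    return `• ${repository}: ${passed}/${total} rules passing (${percent}%)`;
  });
  return ['module-lint weekly report', ...lines].join('\n');
}

// Posting would then be a single incoming-webhook call (Node 18+ global fetch):
// await fetch(process.env.SLACK_WEBHOOK_URL, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ text: buildSummaryMessage(results) }),
// });
```

Splitting the message per repo (or per group of repos) into thread replies, as suggested above, would only change how many webhook calls are made, not the formatting.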

We may want to put this workflow in some other repository. It doesn't really matter where it lives though, this will be replaced by the dashboard eventually.

A report summary is shared to Slack, which includes the percentage of rules passing for each repository

A question was raised in standup about how to show the number of passing and failing rules for each repo that was run.

The issue is that all we're doing in the GitHub workflow is running the tool; once it finishes, we're no longer inside the tool, and all we have to work with is its output.

There are a couple of ideas:

  • Cheat and parse the numbers from the output using a regex
    • Assume that the passing and failing numbers are at the end of the output; split the output by lines, group the lines into chunks separated by empty lines, select the last chunk, and then parse the "Results:" line
    • We'll have to run the tool in a no-color mode so that we don't have to deal with the ANSI escape sequences that produce the colors
  • Have the tool also generate a JSON file that we can read inside of the GitHub action
    • We could hack this in for now; a better approach would be to implement the concept of "reporters", as test runners do, and this is something we were planning on doing anyway
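The first idea (chunk the output by blank lines, take the last chunk, parse the "Results:" line) could look something like the following. The exact wording of the "Results:" line is an assumption here, not the tool's confirmed output format:

```javascript
// Hypothetical sketch: parse pass/fail counts from module-lint's plain-text
// output. Assumes the summary is the last blank-line-separated chunk and
// contains a line like "Results: 12 passed, 3 failed" (format is a guess).
function parseResults(output) {
  // Group lines into chunks separated by blank lines.
  const chunks = output
    .split('\n')
    .reduce(
      (acc, line) => {
        if (line.trim() === '') {
          acc.push([]);
        } else {
          acc[acc.length - 1].push(line);
        }
        return acc;
      },
      [[]],
    )
    .filter((chunk) => chunk.length > 0);

  // Select the last chunk and parse its "Results:" line.
  const lastChunk = chunks[chunks.length - 1];
  const resultsLine = lastChunk.find((line) => line.startsWith('Results:'));
  const match = resultsLine?.match(/Results:\s*(\d+)\s+passed,\s*(\d+)\s+failed/);
  if (!match) {
    return null;
  }
  const passed = Number(match[1]);
  const failed = Number(match[2]);
  return { passed, failed, percentPassing: (100 * passed) / (passed + failed) };
}
```

This only works reliably if the output is color-free, which is why a no-color mode is a prerequisite for this approach.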

@kanthesha What are your thoughts here? It seems like adding the concept of reporters would take a while to implement, so my vote is to hack this and parse the output manually as I described above. Note that we'd have to add a command-line option to disable colors — perhaps --no-color — but that should be fairly easy to add in comparison. Curious to know your thoughts. We may need an extra ticket to capture the work either way.
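If adding the `--no-color` option turns out to take longer than expected, a stopgap (purely hypothetical) would be to strip the color escape sequences in the workflow before parsing:

```javascript
// Hypothetical fallback: strip ANSI color escape sequences from the tool's
// output before parsing, in case a --no-color option isn't available yet.
function stripAnsi(text) {
  // Matches SGR sequences such as "\u001b[32m" (set color) and "\u001b[0m" (reset).
  return text.replace(/\u001b\[[0-9;]*m/g, '');
}
```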

Sounds good to me to hack it for now and create a separate ticket for reporters. If we're going to pick up the reporters ticket soon, we could just share the report link for now. I'm OK either way.

The report summary part has been moved to new issue (#82).