django / django-asv

Benchmarks for Django using asv

Azure pipelines setup

deepakdinesh1123 opened this issue · comments

Over the past few days, I set up an Azure pipeline to run the benchmarks in the benchmark repo when a pull request is made in the main repo (a comment trigger can also be added). Since both the Django and djangobench repositories belong to the organization, creating an access token would not be required. Can I use this method?

Note: It would require adding an azure-pipelines.yaml file to the Django repo.
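Roughly, the azure-pipelines.yaml I have in mind would look something like this; it is only a sketch, and the Python setup step and the exact asv invocation are placeholders rather than the final configuration:

```yaml
# Sketch only: run the django-asv benchmarks for pull requests against the
# main repo. The asv commands below are illustrative, not the final set.
trigger: none        # no builds on plain pushes

pr:
  branches:
    include:
      - main         # run for pull requests targeting main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.x'
  - script: |
      git clone https://github.com/django/django-asv.git django-asv
      cd django-asv
      pip install asv
      asv machine --yes
      asv run --quick
    displayName: Run benchmarks
```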

@deepakdinesh1123 Why Azure pipeline over GHA? (Could we add a GHA to trigger the benchmarks, since we already have GHA in place? 🤔)

@carltongibson I had previously proposed doing this with GHA itself, but since that method required a GITHUB_TOKEN with access to the repository, @smithdc1 mentioned in this comment that there might be security issues and asked me to look for other methods, so I suggested Azure Pipelines as a solution. Should I go with GHA after all?

We should be able to add a secret... 🤔 (Maybe).

Do we need to transfer to the Django org?

(Is it time for a where-are-we overview discussion? What's been done so far, how much time remains, which goals to hit? Perhaps an Issue here? @smithdc1?)

Hi All,

Apologies if I have caused confusion here. I'll try and summarise my thoughts:

  • I'm +1 for staying with GHA, if possible
  • I'm +1 for adding a secret for this repo for use in a django/django workflow.
  • I'm -1 on asking for a django/django secret to be shared anywhere else, even if we did move this repo under the django org. (Maybe @carltongibson has a different view here?)

You may say that the last point makes this unviable with GHA. We could then ask: can we use the buildbot framework, how does codecov make its comments, etc.

  • On django/django we can comment "buildbot test selenium" (or similar) and extra tests could run. Could we use that to kick off a workflow?
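(For what it's worth, the GHA equivalent of a comment trigger would be roughly the following; an untested sketch, with the trigger word and the job body as placeholders.)

```yaml
# Sketch: react to a "benchmark" comment on a pull request. issue_comment
# fires for both issues and PRs, so the job filters on issue.pull_request.
name: benchmark-on-comment
on:
  issue_comment:
    types: [created]

jobs:
  benchmark:
    if: github.event.issue.pull_request && contains(github.event.comment.body, 'benchmark')
    runs-on: ubuntu-latest
    steps:
      - run: echo "check out django-asv and run the benchmarks here"
```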

Ah, we crossed messages!

(Is it time for a where-are-we overview discussion? What's been done so far, how much time remains, which goals to hit? Perhaps an Issue here? @smithdc1?)

I think a discussion building on @deepakdinesh1123's comments on the forum would be useful.

Certainly a reminder of the aims and the time left would be useful to keep us all on track. 🚂

There are a few older PRs that need some time spent unpicking potential settings changes, but there are likely more material aims we can focus our time on 🤔. So there's a prioritisation discussion to have (e.g. is it more interesting/useful to set up a locust (?) test harness than to fix a tricky benchmark?).

There were also some folk who put their hand up on the forum who would be happy to mentor (a while back now, granted). Maybe there's someone in that group we could ask to help mentor on a specific issue? Likely anyone in that group will have more technical knowledge than me!

If we can spell out what we're trying to do, I'm pretty sure someone will know the best incantations!

a reminder of the aims and the time left would be useful to keep us all on track.

I have added a comment with the details here

  • On django/django we can comment "buildbot test selenium" (or similar) and extra tests could run

@smithdc1 Could you please point me to the source where this is implemented? It would make it easier for me to implement the benchmark run. Should I add the benchmark step to the existing master-worker configuration or create a new one?

@smithdc1 Could you please point me to the source where this is implemented?

The docs are here: https://code.djangoproject.com/wiki/CI

I've also been thinking about this problem more generally, and whether we can approach it using GitHub Actions without using secrets. I think what we're looking for is some way of flagging a PR on django/django to run the benchmarks for that PR?

Could we add a workflow to django/django which is triggered on a certain event (adding a "run benchmark" label, say)? That workflow would then check out the main branch of this repo and run. We'd then be able to see the comparison for that change alone. I'm not sure we need to save the results of that benchmark run; that's less important than the direct comparison between the main branch and the proposed change.
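Something along these lines, perhaps; this is an untested sketch, and the label name, Python version and asv commands are placeholders (pointing asv at the PR checkout rather than upstream Django would still need working out):

```yaml
# Sketch: on django/django, when a "benchmark" label is added to a pull
# request, check out django-asv and run the benchmarks.
name: benchmark-on-label
on:
  pull_request:
    types: [labeled]

jobs:
  benchmark:
    if: github.event.label.name == 'benchmark'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # the proposed django/django change
      - uses: actions/checkout@v4
        with:
          repository: django/django-asv    # main branch of the benchmark repo
          path: django-asv
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Run benchmarks
        run: |
          pip install asv
          cd django-asv
          asv machine --yes
          asv run --quick
```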

What do you think?

I have added a comment with the details here

Super -- I'll go have a read 👍

Could we add a workflow to django/django which is triggered on a certain event (adding a "run benchmark" label, say)?

@smithdc1 I have created a workflow here that checks out the benchmark repo and runs the benchmarks when a comment containing "benchmark" is made on a pull request. I have also added a step to display the results using GITHUB_STEP_SUMMARY; it can be seen here (similar performance) and here (performance decreased, v3.0 to v4.0 comparison).
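The summary step is essentially the following (simplified here; results.txt stands in for wherever the comparison output is actually written):

```yaml
# Simplified sketch of the GITHUB_STEP_SUMMARY step; "results.txt" is a
# placeholder for the file the earlier benchmark step writes.
- name: Publish benchmark results
  run: |
    {
      echo '## Benchmark results'
      cat results.txt
    } >> "$GITHUB_STEP_SUMMARY"
```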

  1. Should I run the benchmarks on a pull request comment or label?
  2. If it is to run on a comment, what should the comment contain?

If you approve of the workflow I will make the necessary changes and create a PR on django/django.

This looks great. 🤩

I think it is worth creating a ticket at https://code.djangoproject.com/ and opening a PR with your proposal. That way you'll get far more eyes on it and better feedback.

Personally I prefer label as I think it's simpler.

One final question: I see there's a workflow result, but it doesn't appear in the checks of the pull request itself. Is that possible? My Googling skills fell short, and this is something we can ask the wider community as part of a PR to django.

  • I prefer label as I think it's simpler.

@smithdc1
I tried to implement it and ran into a surprising error: for pull requests made before the workflow was added, labeling did not trigger the workflow, but pull requests made after it was added were triggered when the label was added.

  • I see there's a workflow result but that doesn't appear in the checks of the pull request itself. Is that possible?

I came across a third party action that is able to set the status of a pull request and add a message like this.
[screenshot of the pull request status check added by the action]
Should I add it?
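For comparison, the same status could probably also be set without a third-party action by calling the commit status API directly; a rough sketch, assuming a pull_request-triggered run and a token with statuses: write permission (the token is read-only for PRs from forks):

```yaml
# Rough sketch: set a commit status on the PR head via the REST API, using the
# gh CLI preinstalled on GitHub-hosted runners instead of a third-party action.
# The context and description values are placeholders.
- name: Set benchmark status
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    gh api "repos/${{ github.repository }}/statuses/${{ github.event.pull_request.head.sha }}" \
      -f state=success \
      -f context=benchmarks \
      -f description="Benchmark run finished"
```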

I tried to implement it and ran into a surprising error: for pull requests made before the workflow was added, labeling did not trigger the workflow, but pull requests made after it was added were triggered when the label was added.

I think that's OK. As long as it works once it's in. 👍

Should I add it?

Looks nice. Let's propose it and see what folk think. django/django has used third-party actions with them pinned to a specific commit. See:

https://github.com/django/django/blob/24effbceb871e71d3bc320b91252f743714722df/.github/workflows/new_contributor_pr.yml#L13
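i.e. something like this, where the full commit SHA is pinned instead of a tag (the action name and SHA below are only placeholders, not a recommendation):

```yaml
# Pin a third-party action to a full commit SHA rather than a mutable tag.
# The action name and SHA here are placeholders.
- uses: example-org/set-status-action@0123456789abcdef0123456789abcdef01234567  # v1.2.3
```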

@smithdc1 The workflow is almost ready. Before creating a ticket and PR, I just wanted to ask one question: right now the workflow clones django-asv; should I leave it that way, or should I wait until the benchmarks have been moved to djangobench?

Yes, have it clone this repo. I don't see a need to merge this into djangobench.

If we decide to do that in the future (or even move this repo under the django org) then we can update the workflow at django. It wouldn't be a big change.

@smithdc1 I have added a ticket and a pull request; please check them out.