feat: allow customizing the bench runner #14
base: main
Conversation
Codecov Report

Attention: merging this change decreases coverage by 23.62%.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main      #14       +/-  ##
===========================================
- Coverage   73.61%   50.00%   -23.62%
===========================================
  Files           2        2
  Lines         163      170        +7
===========================================
- Hits          120       85       -35
- Misses         34       81       +47
+ Partials        9        4        -5
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
```diff
-os: [macos-12, macos-11, ubuntu-20.04, windows-2019, windows-2022]
-go: ["1.16", "1.17", "1.18", "1.19"]
+os: [macos-12, macos-13, ubuntu-22.04, windows-2019, windows-2022]
+go: ["1.16", "1.17", "1.18", "1.19", "1.20", "1.21"]
```
Maybe we could remove some Go releases here to make CI faster, even though the tool still works on the older Go versions.
yep, makes sense
````diff
@@ -39,26 +39,29 @@ have information on memory allocations.
 Comparing performance between two versions of a Go module
 and just showing results on output (no check performed):

-```
+```sh
 benchcheck cool.go.module v0.0.1 v0.0.2
````
Docs were also wrong...sorry for that :-)
```go
for i, v := range os.Args {
	if v == "--" {
		customizedBenchCmd = os.Args[i+1:]
```
```suggestion
		customizedBenchCmd = os.Args[i+1:]
		break
```
Because if the user-supplied command also contains a `--`, it would use the last one, which seems wrong. Example: `benchcheck -mod A -old B -new C -- wrapper -flag -- go test ...`
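A standalone sketch of the difference (names are illustrative, mirroring the snippet above): splitting on the first `--` keeps a wrapped command's own `--` intact, while the version without `break` keeps overwriting the split point until the last one.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Split on the FIRST "--" so that a user-supplied command
	// containing its own "--" is passed through untouched.
	var customizedBenchCmd []string
	for i, v := range os.Args {
		if v == "--" {
			customizedBenchCmd = os.Args[i+1:]
			break // without this, the LAST "--" would win
		}
	}
	fmt.Println(customizedBenchCmd)
}
```

Run as `prog -- wrapper -flag -- go test ...` this prints `[wrapper -flag -- go test ...]`; without the `break` it would print `[go test ...]`.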
```go
runBench := benchcheck.DefaultRunBench
var customizedBenchCmd []string

for i, v := range os.Args {
```
You can use `flag.Args()` to iterate over only the remaining non-flag args.
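A hedged sketch of that suggestion (the `-mod` flag is a hypothetical stand-in for the tool's real flags). One caveat worth noting: the standard `flag` package consumes a `--` that appears directly after the flags as a terminator, so in that common case `flag.Args()` already holds just the custom command.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	mod := flag.String("mod", "", "module path (hypothetical stand-in flag)")
	flag.Parse()

	// Scan only the remaining non-flag args, so a flag VALUE that
	// happens to be "--" can never be mistaken for the separator.
	// If "--" directly follows the flags, flag.Parse consumes it
	// and flag.Args() is already the custom command.
	var customizedBenchCmd []string
	for i, v := range flag.Args() {
		if v == "--" {
			customizedBenchCmd = flag.Args()[i+1:]
			break
		}
	}
	fmt.Println(*mod, customizedBenchCmd)
}
```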
Sorry for the very late feedback here :(
I removed/skipped the tests involving the "latest" behavior; testing against the latest version ended up generating tests that will be broken forever (the latest version changed, and now the tests fail since the wanted version no longer matches... it doesn't seem like this was a good idea).

I'll add the tests back at some point, hopefully with better ideas/implementation (I was never super happy with them, and the "latest" usage was quite broken). And with a parametrized runner we can also test corner cases more easily (like the empty bench test).

With support for custom commands we also get a nicer end-to-end/integration test: use a custom command that generates deterministic results, so we can test the integration of module downloading + parsing with the check logic without depending on any benchmark actually being run, since real benchmarks exhibit a lot of variance, especially on CI/macOS. A rough sketch of the idea follows.
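One way to get such a deterministic runner without shipping an extra binary (all names here are hypothetical, this is just the well-known re-exec fake-process pattern): let the test binary double as the custom bench command, printing canned `go test -bench` output when re-invoked with a marker environment variable set.

```go
package benchcheck_test

import (
	"fmt"
	"os"
	"testing"
)

// Canned, stable output in the format produced by `go test -bench`,
// so the parsing/check logic always sees the same numbers.
const fakeBenchOutput = "BenchmarkFoo-8   1000000   1200 ns/op   128 B/op   2 allocs/op"

// TestMain lets the compiled test binary act as the custom bench
// command: when re-invoked with FAKE_BENCH=1 it prints the fixed
// results and exits instead of running the test suite.
func TestMain(m *testing.M) {
	if os.Getenv("FAKE_BENCH") == "1" {
		fmt.Println(fakeBenchOutput)
		os.Exit(0)
	}
	os.Exit(m.Run())
}
```

The integration test would then invoke the tool with `-- <path-to-test-binary>` (with `FAKE_BENCH=1` in the environment) as the custom command.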
Still wondering what to do about the odd Windows 2019 error... probably just remove Windows 2019 from CI 😆 (it works on newer Windows versions).