
proposal: testing: programmatic sub-test and sub-benchmark support #12166

Closed
mpvl opened this issue Aug 17, 2015 · 9 comments

@mpvl (Contributor) commented Aug 17, 2015

Support for table-driven tests and benchmarks could be improved by allowing subtests and sub-benchmarks to be spawned programmatically, for example via a Run method on T and B in the testing package. This would enable various use cases, such as:

  • an easy way to single out individual tests and benchmarks in a table from the command line (e.g. for debugging),
  • simpler ways to write collections of similar benchmarks,
  • the use of Fatal in a subtest without halting sibling subtests,
  • eliminating the need to prefix each error message in a test with its subtest name, resulting in simpler and more consistent error messages,
  • creating subtests from an external or dynamic table source,
  • more control over the scope of setup and teardown (more than TestMain allows now), and
  • more control over parallelism.

Semantically, subtests and subbenchmarks would behave exactly the same as the current top-level counterparts. In fact, the top-level counterparts could be implemented as subtests/benchmarks of a single root. Details would need to be worked out in a design doc.
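
A minimal sketch of what a table-driven test might look like with such a Run method on T (a hypothetical illustration; the exact signature and semantics would be settled in the design doc):

    package atoi_test

    import (
        "strconv"
        "testing"
    )

    func TestAtoi(t *testing.T) {
        tests := []struct {
            in   string
            want int
        }{
            {"0", 0},
            {"1234", 1234},
        }
        for _, tc := range tests {
            // Each table entry becomes its own subtest: Fatal in one
            // entry no longer halts its siblings, and the entry's name
            // appears in failure output without manual prefixing.
            t.Run(tc.in, func(t *testing.T) {
                got, err := strconv.Atoi(tc.in)
                if err != nil {
                    t.Fatal(err)
                }
                if got != tc.want {
                    t.Errorf("Atoi(%q) = %d, want %d", tc.in, got, tc.want)
                }
            })
        }
    }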

@josharian (Contributor):

Russ mentioned on golang-dev that this "would address a variety of seemingly unrelated feature requests". What are those?

@mdwhatcott:

Perhaps this one, in which I propose an alternate way to declare related tests with setups and teardowns per test case.

@mpvl (Contributor, Author) commented Sep 2, 2015

@josharian: browsing through the issues: #12145, #9268 (subtests). For many of the related requests Russ mentions, no GitHub issues were filed (that I'm aware of). These include the ability to call Fail for a single iteration of a table-driven test, various ways to run things in parallel, and better ways to structure benchmarks, among others.

@ChrisHines (Contributor):

I have read the design doc and find this idea very intriguing. I have test code that could be improved with the use of the proposed features.

I find the discussion of subbenchmarks in the proposal somewhat lacking in context. The example refers to variables allMethods and smallSet that are not shown, which makes it difficult to understand what the example code really does.

I would like to see some discussion of how subtests can be accomplished with the existing testing package. How far can we get without this proposal? I have some idea from my own experience, but I think the proposal should draw the comparison more directly.

In short, I like what I see, but we should be thorough.

@mpvl (Contributor, Author) commented Sep 22, 2015

@ChrisHines: a commonly suggested alternative using the existing testing package is to put the shared logic in helper functions and then write top-level test functions that simply pass parameters to them. This gets most of the functionality, but results in considerably larger and more awkward code; see the sketch below. It is possible to generate such code, but that is not necessarily less awkward. I'll update the doc.
Note that this change would mostly add syntax to the existing system; I merely define the semantics more precisely. In my trial implementation, top-level tests are just subtests of a "main" test, and the framework calls Run for them under the hood.
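
A minimal sketch of that status-quo pattern (names hypothetical): the shared logic lives in a helper, and each table entry needs its own hand-written top-level wrapper.

    package atoi_test

    import (
        "strconv"
        "testing"
    )

    // testAtoi holds the shared test logic; the top-level Test*
    // wrappers below only forward parameters, which is the
    // boilerplate the proposal would eliminate.
    func testAtoi(t *testing.T, in string, want int) {
        got, err := strconv.Atoi(in)
        if err != nil {
            t.Fatalf("Atoi(%q): %v", in, err)
        }
        if got != want {
            t.Errorf("Atoi(%q) = %d, want %d", in, got, want)
        }
    }

    func TestAtoiZero(t *testing.T) { testAtoi(t, "0", 0) }
    func TestAtoiBig(t *testing.T)  { testAtoi(t, "1234", 1234) }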

See https://go-review.googlesource.com/#/c/2322/4/unicode/norm/normalize_test.go for a rather elaborate example; the benchmark code there, in particular, is considerably shorter and more data-driven (sketched below).

The use of Run for tests in this CL is excessive and unnecessary but was done to explore its boundaries a bit.
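
For flavor, a minimal sketch of what such data-driven benchmark code might look like, assuming a corresponding Run(name string, f func(*testing.B)) method on B (names, table contents, and the strings.ToUpper workload are hypothetical stand-ins, loosely modeled on the CL above):

    package norm_test

    import (
        "strings"
        "testing"
    )

    var inputs = []struct {
        name string
        data string
    }{
        {"small", "abc"},
        {"large", strings.Repeat("abc", 1<<10)},
    }

    func BenchmarkMethod(b *testing.B) {
        for _, in := range inputs {
            // One sub-benchmark per input; results are reported under
            // composed names such as BenchmarkMethod/small.
            b.Run(in.name, func(b *testing.B) {
                for i := 0; i < b.N; i++ {
                    _ = strings.ToUpper(in.data)
                }
            })
        }
    }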

@mpvl (Contributor, Author) commented Sep 22, 2015

@ChrisHines: the promised changes to the doc are under review in CL https://go-review.googlesource.com/#/c/14808/

Thanks for the feedback!

@adg adg added Proposal and removed Proposal labels Sep 25, 2015
@rsc rsc added this to the Proposal milestone Oct 24, 2015
@rsc rsc changed the title proposal: enhanced table-driven test support proposal: testing: enhanced table-driven test support Oct 24, 2015
@mpvl mpvl self-assigned this Oct 28, 2015
@rsc (Contributor) commented Oct 30, 2015

Thanks for writing this up, and my apologies for not getting a chance to review it sooner.

This proposal makes multiple significant changes and additions to the testing package. The overall effect seems to me to move away from the current minimalist spirit of package testing. I'd like to keep that spirit but still get most of the benefits of this proposal. To that end, I have a few suggestions:

  • The title should be generalized, perhaps to "testing: programmatic sub-test and sub-benchmark support".

    • This really isn't about table-driven tests. It's about being able to define subtests programmatically. That can be used as an alternative to tables, but it's probably even more useful for long-running tests with multiple pieces, or large tests with variants.
    • The actual simple table-driven tests will, I hope, keep being used as they are today. The a+b example in the doc is a great example of code that should in my opinion not use this new machinery.
    • This matters precisely because I don't want to see everyone going around rewriting trivial table-driven tests.
    • The Benchmark example is a clear motivator. It would be good to have a less trivial Test example. Maybe bufio.TestReader.
  • I propose that the fully-qualified name of a subtest or sub-benchmark be the parent's name and the subtest's name joined by a slash, with slash as the separator, spaces turned into underscores, and no other changes (see the sketch after this list).

    • Slash is a well-understood denotation of hierarchy, comma not so much. Slash may be less common in chosen names, since it doesn't appear in printing of standard types like map or slice.
    • Don't panic if t.Run is invoked twice with the same name. There's nothing in package testing that needs names to be unique, and running the tests anyway is nicer to the user than crashing.
    • Let's replace spaces (unicode.IsSpace) with _ instead of ␣. The latter will not display on some screens and will be harder for people to type and match.
    • Otherwise, let's not mangle names nor try to make name sanitization injective (having dropped the uniqueness requirement above). The results will be more readable.
    • Let's not even replace slashes or underscores (no escaping). If someone puts their own slash in the name passed to Run, so be it.
    • Note that the fully-qualified name is the real name, not just how it is printed.
  • In the benchmark example, please use named functions, to move the bodies out of allMethods, and then omit the bodies. As written, the composite literal is too complex for an example, and I would argue is too complex for a real program too.

    var allMethods = []struct {
        name string
        f    func(Form, []byte) func()
    }{
        {"Transform", transform},
        {"Iter", iter},
    }
    
    func transform(f Form, b []byte) func()
    func iter(f Form, b []byte) func()
    
    
  • The doc is inconsistent about where the -N is printed in the benchmark name to show the GOMAXPROCS setting. The BenchmarkMethod example uses names like BenchmarkMethod-8,Transform,small_change while the Logging section immediately following uses BenchmarkForm,from,NFC-8. I believe the correct one is the latter, with the -N being a true suffix and not inserted in the middle of the fully-qualified name.

  • Let's postpone any changes to -test.run. There are subtleties here that I don't want to place in front of getting the basic subtest and subbenchmark support.

  • Let's drop the new LOG output prefix for benchmarks. Benchmark output is already rare, and it is already possible to tell whether the logging influences timing, by comparing the BENCH header name with the names used on printed benchmark results.
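
A minimal sketch of the naming rules proposed above (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "strings"
        "unicode"
    )

    // subtestName joins a parent name and a subtest name with a slash,
    // replacing spaces (unicode.IsSpace) with underscores and applying
    // no other mangling: slashes and underscores already in the name
    // pass through unescaped.
    func subtestName(parent, name string) string {
        mangled := strings.Map(func(r rune) rune {
            if unicode.IsSpace(r) {
                return '_'
            }
            return r
        }, name)
        return parent + "/" + mangled
    }

    func main() {
        name := subtestName(subtestName("BenchmarkMethod", "Transform"), "small change")
        // Prints "BenchmarkMethod/Transform/small_change"; a GOMAXPROCS
        // suffix such as "-8" would then be appended to the whole name.
        fmt.Println(name)
    }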

@mpvl mpvl changed the title proposal: testing: enhanced table-driven test support proposal: testing: programmatic sub-test and sub-benchmark support Nov 4, 2015
@mpvl mpvl modified the milestones: Go1.7Early, Proposal Jan 25, 2016
@gopherbot:

CL https://golang.org/cl/18898 mentions this issue.

@mpvl (Contributor, Author) commented Jan 25, 2016
