
proposal: cmd/compile: define register-based calling convention #18597

Closed
dr2chase opened this issue Jan 10, 2017 · 73 comments

@dr2chase
Contributor

dr2chase commented Jan 10, 2017

Performance would generally improve somewhat if arguments to, and results from, function and method calls were passed in registers instead of on the stack. Projected improvements, based on a limited prototype, are in the range of 5-10%.

The running CL for the prototype: https://go-review.googlesource.com/c/28832/
Because it is only a prototype, it uses a pragma that should be unnecessary in the final design, though it is helpful during development.

(This is a placeholder bug for a design doc.)

Problems/tradeoffs noted below, through 2017-01-19 (#18597 (comment)):

This will reduce the usefulness of panic tracebacks because it will confuse/corrupt the per-frame argument information there. This was already a problem with SSA register allocation and spilling; this change will make it worse. Perhaps only results should be returned in registers.

Does this provide enough performance boost to justify "breaking" (how much?) existing assembly language?

Given that this breaks existing assembly language, why aren't we also doing a more thorough revision of the calling conventions to include callee-saves registers?

This should use the per-platform ABIs, to make interaction with native code go more smoothly.

Because this preserves the memory layout of the existing calling convention, it will consume more stack space than it would if that layout were optimized away.

And the responses to above, roughly:

In compiler work, 5% (as a geometric mean across benchmarks) is a big deal.

Panic tracebacks are a problem; we will work on that. One possible mitigation is to use DWARF information to make tracebacks much better in general, including properly named and interpreted primitive values. A simpler mitigation for targeted debugging could be an annotation indicating that a function should be compiled to store arguments back to the stack (old style) to ensure that particular function's frame is adequately informative. This is also going to be an issue for improved inlining because, viewed at a low level, intermediate frames will disappear from backtraces.

The scope of required assembly-language changes is expected to be quite small; from Go-side function declarations the compiler ought to be able to create shims for the assembly to use around function entry, function exit, and surrounding assembly-language CALLs to named functions. The remaining case is assembly-language CALL via a pointer, and these are rare, especially outside the runtime. Thus, the need to bundle changes because they are disruptive is reduced, because the changes aren't that disruptive.

Incorporating callee-save registers introduces a garbage-collector interaction that is not necessarily insurmountable, but other garbage-collected languages (e.g., Haskell) have been sufficiently intimidated by it that they elected not to use callee-save registers. Because the assembler can also modify stack frames to include callee-save spill areas and introduce entry/exit shims to save/restore callee-save registers, this appears to have lower impact than initially estimated, and thus we have reduced need to bundle changes. In addition, if we stake out potential callee-save registers early, developers can look ahead and adapt their code before it is required.

If each platform's ABI were used instead of this adaptation of the existing calling conventions, the assembly-language impact would be larger, the garbage-collector interactions would be larger, and either best-treatment for Go's multivalue returns would suffer or the cgo story would be sprinkled with asterisks and gotchas. As an alternative (a different way of obtaining a better story for cgo), would we consider a special annotation for cgo-related functions to indicate that exactly the platform calling conventions were used, with no compromises for Go performance?

@dr2chase dr2chase added this to the Go1.9Maybe milestone Jan 10, 2017
@dr2chase dr2chase self-assigned this Jan 10, 2017
@dsnet dsnet modified the milestones: Proposal, Go1.9Maybe Jan 10, 2017
@gopherbot

CL https://golang.org/cl/35054 mentions this issue.

@davecheney
Contributor

davecheney commented Jan 11, 2017

5% for amd64 doesn't feel like it justifies the cost of breaking all the assembly written to date.

@minux
Member

minux commented Jan 11, 2017 via email

@davecheney
Contributor

I agree with Minux: optimising function calls for a 10% speedup, versus inlining and all the code-generation benefits that unlocks, doesn't feel like a good investment.

@josharian
Contributor

I'm still thinking about the proposal, but I will note that for asm functions with Go prototypes and normal stack usage, we can do automated rewrites. That is also a reminder that the scope of this proposal should include updating vet's asmdecl check.

@minux
Member

minux commented Jan 11, 2017 via email

@randall77
Contributor

@minux Yes, we will have all the information necessary to print args as we do now. I suspect it just isn't implemented yet.

@minux
Member

minux commented Jan 11, 2017 via email

@randall77
Contributor

The values enter in registers but they will be spilled to the arg slots if they are live at any call. So all live arguments will still be correct. Even modified live arguments should work (except for a few weird corner cases where multiple values derived from the same input are live).

Dead arguments are not correct even today. What you see in tracebacks is stale. They may get more wrong with this implementation, so it may be worth marking/not displaying dead args.

Things will get trickier if we allow callee-saved registers. We're thinking about it, but it isn't in the design doc yet.

@minux
Member

minux commented Jan 11, 2017 via email

@randall77
Contributor

When we pass an argument in a register, the corresponding stack slot is reserved but not initialized, so there's no memory traffic for it. Only if the callee needs to spill it will that slot actually get initialized.
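
To make that concrete, here is a minimal sketch (illustrative only; the actual register assignment and spill decisions belong to the compiler and are not specified in this thread):

```go
package spillsketch

// Assumes the register-based convention described above: each argument still
// has a reserved stack slot, but the slot is written only if the value must
// survive a call.

//go:noinline
func leaf(a, b int) int {
	// Leaf function: a and b can live entirely in registers, so their
	// reserved slots are never initialized and cause no memory traffic.
	return a + b
}

//go:noinline
func nonLeaf(a, b int) int {
	println(a) // a and b are live across this call, so they would be
	// spilled to their reserved ("old") slots before it.
	return a + b
}
```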

@dr2chase
Contributor Author

I'm not sure how "full" the design docs are.
I've got a CL up, and here's a more readable version of it that I'll try to keep up to date: https://gist.github.com/dr2chase/5a1107998024c76de22e122ed836562d

(Repeating some of that gist) I reviewed a bunch of ABIs, and the combination of

  • need someplace to spill if the stack must grow, and we have no frame yet
  • callee-saves makes tracing stacks for GC a good deal more painful; the impact on assembly language is also a good deal larger.
  • there's no memory traffic unless there's a spill, and then we must spill somewhere
  • most of the existing ABIs (all but Arm64) are unfriendly to multiple return values in registers
  • assurances from debugger/dump-inspection people (Delve and Backtrace.io) that they use DWARF, if we take the time to get it right

caused me to decide to try something other than the standard ABI(s). The new goal was to minimize change/cost while still getting plausible benefits, so I decided to keep as much of the existing design as possible.

The main cost of reserving stack space is increased stack size; we only spill if we must, but when we do, we spill to the "old" location.

As far as backtraces go, we need to improve our DWARF output for debuggers, and I think we will, and then use that to produce readable backtraces. That should get this as right as it can be (up to variables not being live) and would be more generally accessible than binary data.

So actually turning this on may be gated by DWARF improvements.

@minux
Member

minux commented Jan 11, 2017 via email

@bcmills
Contributor

bcmills commented Jan 12, 2017

if we can align our callee-saved registers with the platform ABI, cgo callbacks will be faster

Not to mention more reliable. A lot of the signal trampolines in the runtime could be a whole lot simpler if they didn't have to deal with the ABI mismatch between signal handlers (which must use the platform ABI) and the runtime's Go functions. (I'm still not entirely convinced that there aren't more calling-convention bugs lurking.)

@minux
Member

minux commented Jan 12, 2017 via email

@ianlancetaylor
Contributor

At each call, have the PCDATA for that call record, for each callee-saved register, whether it holds 1) a pointer; 2) a non-pointer; 3) whatever it held on function entry. Also at each call, record the list of callee-saved registers saved from the caller, and where they are.

Now, on stack unwind, you can see the list of callee-saved registers. Then you look to the caller to see whether the value is a pointer or not. Of course you may have to look at the caller's caller, etc., but since the number of callee-saved registers is small and fixed it's easy to keep track of what you need to do as you walk up the stack.
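
A rough sketch of that bookkeeping in Go (the types and names below are hypothetical, not the runtime's actual data structures; a real implementation would pack this into bitmaps in PCDATA):

```go
package unwindsketch

// Per-call-site state of one callee-saved register, as described above.
type regState uint8

const (
	regInherited regState = iota // still holds whatever the caller had on entry
	regPointer                   // holds a live pointer
	regScalar                    // holds a non-pointer
)

// frame records, for one stack frame at its current call site, the state of
// each callee-saved register and where this frame saved the caller's copies.
type frame struct {
	regs  map[string]regState
	saved map[string]uintptr
}

// isPointer walks from the innermost frame outward until some frame says
// definitively whether callee-saved register reg holds a pointer. Because the
// set of callee-saved registers is small and fixed, the unwinder only needs a
// small amount of state as it walks up the stack.
func isPointer(stack []frame, reg string) bool {
	for _, f := range stack { // stack[0] is the innermost frame
		switch f.regs[reg] {
		case regPointer:
			return true
		case regScalar:
			return false
		}
		// regInherited: the answer belongs to an outer frame; keep walking.
	}
	return false // outermost frame: the register holds no Go pointer
}
```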

@philhofer
Contributor

5% for amd64 doesn't feel like it justifies the cost of breaking all the assembly written to date.

Much like SSA, I expect this change will have a much more meaningful effect on minority architectures.

Those fancy Intel chips can resolve push/pop pairs halfway through the instruction pipeline (without even emitting a uop!), while a load from the stack on a Cortex-A57 has a 4-cycle result latency.

@minux
Member

minux commented Jan 12, 2017 via email

@philhofer
Contributor

Result latency doesn't matter much on an out-of-order core because L1D on Intel chips has similar latency.

It especially doesn't matter if you have shadow registers, but AFAIK none of the arm/arm64 designs have 'em.

I suspect the benefit could be higher, but for non-leaf functions we still need to spill the argument registers to stack, which negates most of the benefit for large and non-leaf functions.

You can't just move them to callee-saves if they're still live? Is that for GC, or a requirement for traceback?

@minux
Member

minux commented Jan 12, 2017 via email

@philhofer
Contributor

In the current ABI, every register is caller-save. My counterproposal is introducing callee-save registers but keeping argument passing on the stack, to preserve the current traceback behavior.

Ah. I guess I assumed, incorrectly, that this proposal included making some registers callee-save.

@cherrymui
Member

Therefore, it seems registered parameters will mostly help small leaf functions. And if that's true, I imagine inlining those functions will actually provide even more speedup because then the compiler can optimize across function call boundaries.

Inlining doesn't help on function pointer calls.

@bcmills
Contributor

bcmills commented Jan 12, 2017

@dr2chase

there's no memory traffic unless there's a spill, and then we must spill somewhere

That's not strictly true. Empty stack slots still decrease cache locality by forcing the actual in-use stack across a larger number of cache lines. There's no extra memory traffic between the CPU and cache unless there's a spill, but reserving stack slots may well increase memory traffic on the bus.

most of the existing ABIs (all but Arm64) are unfriendly to multiple return values in registers

Not true. It would be trivial to extend the SysV AMD64 ABI for multiple return values, for example: it already has two return registers (rax and rdx), and it's easy enough to define extensions of the ABI that pass additional Go return values in other caller-save registers.

For example, we could do something like:

rax: varargs count; 1st return
rbx: callee-saved
rcx: 4th argument; 3rd return
rdx: 3rd argument; 2nd return
rsp: stack pointer
rbp: frame pointer
rsi: 2nd argument
rdi: 1st argument
r8: 5th argument; 4th return
r9: 6th argument; 5th return
r11: temporary register
r12-r14: callee-saved
r15: GOT base pointer
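
As a worked illustration of that hypothetical assignment (the mapping below simply follows the list above; it is not part of any actual proposal):

```go
package abisketch

// divmod has two integer arguments and two integer results. Under the
// sketched extension of the SysV AMD64 ABI it would map roughly as:
//
//	a (1st argument)       -> rdi
//	b (2nd argument)       -> rsi
//	quotient  (1st result) -> rax
//	remainder (2nd result) -> rdx
//
// Both results come back in caller-save registers, so no stack slots are
// needed for them.
func divmod(a, b int) (int, int) {
	return a / b, a % b
}
```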

@mundaym
Member

mundaym commented Jan 13, 2017

Am I understanding correctly that this proposal is calling for structs to always be unpacked into registers before a call? Has any thought been given to passing structs as read-only references? I think this is how most (all?) of the ELF ABIs handle structs, particularly larger ones.

This way the callee gets to decide whether it needs to create a local copy of anything, avoiding copying in many cases. It is also presumably fairly common for only a subset of struct fields to be accessed, so unpacking all or part of the struct may be unnecessarily expensive (particularly if the struct has boolean fields). Obviously the reference would have to be to something on the stack (copied there, if necessary, by the caller) or to read-only memory.

For Go in particular this seems like a nice fit: the only way to get const-like behaviour is to pass structs by value, and internally passing them by reference instead would potentially make that idiom a lot cheaper.

Arrays and maybe slices could also be handled the same way.
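
A small sketch of the case being described (the pass-by-reference decision would be an ABI-level detail invisible at the language level; the types below are made up for illustration):

```go
package structsketch

// Options is passed by value at the language level. Under the suggested
// convention, the ABI could instead pass a pointer to a caller-owned (or
// read-only) copy, and the callee would copy only what it actually needs.
type Options struct {
	Name    string
	Retries int
	Verbose bool
	Timeout int64
	// ...imagine many more fields
}

// retries reads a single field; unpacking the whole struct into registers
// (or copying it onto the callee's stack) would be wasted work. Since the
// parameter is never written to or address-taken, a reference to the
// caller's copy would be semantically equivalent.
func retries(o Options) int {
	return o.Retries
}
```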

@minux
Member

minux commented Jan 13, 2017 via email

@randall77
Contributor

That the proposals conflict is ok. And we can discuss those conflicts here. But without a concrete proposal about how you'd like to make tracebacks better it's hard to see exactly what those conflicts would be.

@rsc rsc changed the title proposal: function and method arguments and results should be passed in registers proposal: cmd/compile: define register-based calling convention Jan 23, 2017
@aarzilli
Contributor

aarzilli commented Feb 3, 2017

I'm interested in knowing how this would interact with non-optimized compilation. Right now non-optimized compilations registerize sparingly, which is great for debuggers since the Go compiler isn't good at saving information about registerization. Putting a lot more things into registers without getting better at writing the appropriate debug symbols would be bad.

@bcmills
Contributor

bcmills commented Feb 3, 2017

I'm interested in knowing how this would interact with non-optimized compilation. Right now non-optimized compilations registerize sparingly, which is great for debuggers since the Go compiler isn't good at saving information about registerization.

Presumably part of the changes for the calling convention would be making the compiler emit more accurate DWARF info for register parameters.

@rsc
Contributor

rsc commented Apr 3, 2017

On hold until @dr2chase is ready to proceed.

@dr2chase
Contributor Author

dr2chase commented Apr 3, 2017

Current plan is 1.10; time got reallocated to loop unrolling and to whatever help was required to improve DWARF support in general, so that we have the ability to say what we're doing with parameters.

There's been a lot of churn in the compiler in the last couple of months -- new source position, pushing the AST through, making things more ready for running in parallel, and moving liveness into the SSA form -- so I am okay with waiting a release.

@smasher164
Member

Would the proposed calling convention omit frame pointers for leaf functions? I can see this varying based on whether the function stores the state of a callee-save register, takes the address of a local variable, etc. In that case, is there still a feasible way to obtain the call stack?

@wkhere

wkhere commented Jan 6, 2018

A naive proposal:

Maybe a good optimization would be, instead of changing the calling convention, to focus on eliminating local variables on the stack by using registers, and on sharing as much of the stack frame space as possible between variables and the arguments/return values of subcalls.

This way, (1) registers would speed things up, (2) stack frames would be smaller, and (3) the existing calling convention would be preserved.

@davecheney
Contributor

davecheney commented Jan 6, 2018 via email

@quasilyte
Contributor

quasilyte commented Feb 7, 2018

I may be wrong, but a register-based calling convention (CC) could give a bigger performance boost if additional SSA rules were added that actually take advantage of it.

Currently, a large number of small and medium-sized functions contain many mem->reg->mem patterns that make certain optimizations on amd64 not worthwhile.

It is not fair to say that 10% is not significant.
It's 10% with an optimizer that works on GOARCH=amd64 as if it were GOARCH=386.
The potential gain is higher.

Lately I have been comparing 6g with gccgo, and even without machine-dependent optimizations, aggressive inlining, and constant folding, there was about a 25-40% performance difference in some allocation-free benchmarks. I believe this difference includes the CC impact (because most other parts of the output machine code look nearly the same).
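
A toy example of that pattern (the assembly in the comment is only a rough sketch of what the current stack-based convention produces, not actual compiler output):

```go
package ccsketch

// With the stack-based convention, each call to square roughly does
//
//	MOVQ x, 0(SP)     // store the argument to the outgoing stack slot
//	CALL square
//	MOVQ 8(SP), AX    // load the result back from the stack
//
// i.e. a mem->reg->mem round trip per call. A register-based convention
// would keep both the argument and the result in registers.

//go:noinline
func square(x int) int { return x * x }

func sumOfSquares(a, b int) int {
	return square(a) + square(b)
}
```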

@laboger
Contributor

laboger commented Feb 8, 2018

As noted above, the performance benefit for a register-based calling convention is higher on RISC architectures like ppc64le & ppc64. If the above CL is not stale I would be willing to try it out. To me this is one of the biggest performance issues with golang on ppc64le.

Inlining does help but doesn't handle all cases if there are conditions that inhibit inlining. Also functions written in asm can't be inlined AFAIK.

@dr2chase
Contributor Author

dr2chase commented Feb 8, 2018

The CL is very stale. First we have to get to a good place with debugger information (seems very likely for 1.10 [oops, 1.11]), and then we have to redo some of the experiment (it will go more quickly this time) with a better set of benchmarks. Some optimizations that looked promising on the go1 benchmarks turn out to look considerably less profitable when run against a wider set of benchmarks.

@as
Contributor

as commented Feb 9, 2018

@dr2chase

Is the method used to determine the 5% performance improvement described somewhere in this discussion? I'm curious what the function sample space actually looks like.

@laboger
Contributor

laboger commented Feb 9, 2018

I think the measurement we want is how much this helps once inlining is fully enabled by default.

@dr2chase
Contributor Author

dr2chase commented Feb 9, 2018

The estimate was based on counting call frequencies, looking at performance improvement on a small number of popular calls "registerized", and extrapolating that to the full number of calls.

I agree that we want to try better inlining first (it's in the queue for 1.11, now that we have that debugging information fixed), since that will cut the call count (and thus reduce the benefit) of this somewhat more complicated optimization. Mid-stack inlining, however, was one of the optimizations that looked a lot less good when applied to the larger suite of benchmarks. One problem with the larger benchmark suite is selection effect -- anyone who wrote a benchmark for part of their application cares enough about performance to write that benchmark, and has probably already used it to hand-optimize around rough spots in the Go implementation, so we'll see reduced gains from optimizing those rough spots.

@CAFxX
Contributor

CAFxX commented Feb 10, 2018

and has probably used it already to hand-optimize around rough spots in the Go implementation

From my point of view, the goal of having a better inliner is to have more readable/maintainable (less "hand-optimized") code, not really faster code. So that we can build code addressing the problems we're trying to solve, not the shortcomings of the compiler.

@ianlancetaylor
Contributor

There is an updated proposal at #40724, which addresses many of the issues raised here. Closing this proposal in favor of that one.

@golang golang locked and limited conversation to collaborators Nov 13, 2021