proposal: sync: add RWMutex.DowngradeLock #23513

Closed
nixprime opened this issue Jan 22, 2018 · 8 comments

Comments

@nixprime
Contributor

The proposed sync.RWMutex.DowngradeLock atomically downgrades from a write
(or "exclusive") lock to a read (or "shared") lock. A typical pattern in which
it is useful is:

rw.Lock()
val := cache.Fill(key) // (1)
// val is protected by rw, so we must hold rw while using it. However, we
// don't need to hold it exclusively, and doing so here will hurt
// performance by blocking other readers unnecessarily. Downgrade to a
// shared lock.
rw.DowngradeLock()
doExpensiveThing(val)
rw.RUnlock()

Without DowngradeLock, this must instead be:

for {
    rw.Lock()
    cache.Fill(key)
    rw.Unlock()
    // (2)
    rw.RLock()
    if val, ok := cache.Get(key); ok {
        doExpensiveThing(val)
        rw.RUnlock()
        return
    }
    rw.RUnlock()
}

This latter form allows val to be concurrently invalidated between
rw.Unlock() and rw.RLock() at (2), with the following consequences:

  • If cache.Fill() can return val, as in (1), DowngradeLock's atomicity
    allows val to be passed directly to doExpensiveThing(). This is not
    possible without DowngradeLock, which necessitates a subsequent call to
    cache.Get(key) that may be expensive.

  • Needing to loop to handle invalidation makes the cost of the latter form
    unbounded, and may result in livelock if concurrent calls to
    cache.Fill(key) compete for a limited resource such as cache capacity.
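
For concreteness, here is how I read the proposed semantics, written as a doc
comment on a wrapper type. The type and names below are placeholders for
illustration; the real method would be on sync.RWMutex itself, and the
stand-in body is deliberately the non-atomic pair the proposal is meant to
avoid:

import "sync"

// DowngradableRWMutex exists only to carry the proposed documentation; a
// real implementation would live inside package sync, where it can register
// the caller as a reader atomically with releasing the writer.
type DowngradableRWMutex struct {
    sync.RWMutex
}

// DowngradeLock downgrades the mutex from a write lock to a read lock.
// The caller must hold the write lock; on return it holds a read lock and
// must eventually call RUnlock. In the proposed version no other writer can
// acquire the lock during the downgrade. This stand-in body cannot make
// that guarantee: it is exactly the Unlock/RLock pair discussed above.
func (rw *DowngradableRWMutex) DowngradeLock() {
    rw.Unlock()
    rw.RLock()
}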

@gopherbot gopherbot added this to the Proposal milestone Jan 22, 2018
@as
Contributor

as commented Jan 24, 2018

What are the practical uses of this feature, and are there benchmarks comparing it against the existing implementation and against atomic load/copy/swap?

@nixprime
Contributor Author

A practical usage example is given in the proposal.

The cost of DowngradeLock() is strictly less than that of an Unlock(); RLock()
pair, as the implementation of DowngradeLock() is essentially the same as that
of Unlock(), just with different constants.

Benchmark:

// PaddedRWMutex pads the lock out to its own cache line so the parallel
// benchmark goroutines don't contend through false sharing.
type PaddedRWMutex struct {
    RWMutex
    pad [32]uint32
}

func BenchmarkRWMutexNonAtomicDowngradeUncontended(b *testing.B) {
    b.RunParallel(func(pb *testing.PB) {
        var rwm PaddedRWMutex
        for pb.Next() {
            rwm.Lock()
            rwm.Unlock()
            rwm.RLock()
            rwm.RUnlock()
        }
    })
}

func BenchmarkRWMutexAtomicDowngradeUncontended(b *testing.B) {
    b.RunParallel(func(pb *testing.PB) {
        var rwm PaddedRWMutex
        for pb.Next() {
            rwm.Lock()
            rwm.DowngradeLock()
            rwm.RUnlock()
        }
    })
}

Result:

BenchmarkRWMutexNonAtomicDowngradeUncontended-12    	200000000	         6.65 ns/op
BenchmarkRWMutexAtomicDowngradeUncontended-12       	300000000	         5.61 ns/op

@LionNatsu
Contributor

We can protect consistency against panics with "Lock(); defer Unlock(); doPanicThing()". Is there a way to do the same in a sequence like this?

rw.Lock()
doPanicThing() // not panic-safe
rw.DowngradeLock()
defer rw.RUnlock()
doReadPanicThing() // this line is panic-safe

@rsc
Contributor

rsc commented Jan 25, 2018

I'm not convinced this is helpful in a real program, and there is a real cost to adding complexity to the API. Please show us benchmarks improving a real workload, not a microbenchmark.

@nixprime
Contributor Author

To use defer, one would have to introduce a variable to track the lock state:

rw.Lock()
var readLocked bool
defer func() {
    if readLocked {
        rw.RUnlock()
    } else {
        rw.Unlock()
    }
}()
doPanicThing()
rw.DowngradeLock()
readLocked = true
doReadPanicThing()

But I think using Unlock(); RLock() has the same requirement.
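
For comparison, a sketch of the same bookkeeping using the existing Unlock();
RLock() pair (with the same hypothetical doPanicThing/doReadPanicThing calls
as above); the flag is still needed, and the non-atomic window reappears:

rw.Lock()
var readLocked bool
defer func() {
    if readLocked {
        rw.RUnlock()
    } else {
        rw.Unlock()
    }
}()
doPanicThing()
rw.Unlock()
// Window: another writer can acquire rw here and invalidate what was written.
rw.RLock()
readLocked = true
doReadPanicThing()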

@rsc: I do have benchmarks that show a measurable improvement in a critical hot path of my application, but I can't share them publicly. I've contacted you privately.

@as
Contributor

as commented Jan 27, 2018

To use defer, one would have to introduce a variable to track the lock state

That sounds like a major disadvantage; it would be another shiny feature that's easy to misuse. Without an actual benchmark, it's not easy to see the advantage, and if a clear example isn't trivial to write, maybe that's a further sign that the feature adds unwarranted complexity.

@rsc rsc changed the title Proposal: sync: add RWMutex.DowngradeLock proposal: sync: add RWMutex.DowngradeLock Jan 29, 2018
@rsc
Contributor

rsc commented Jan 29, 2018

Note that you can substitute a custom RWMutex implementation into your own code (copy and modify). We don't want to add the complexity here - and support it forever - especially with such an awkward API.
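
For readers who go that route, a minimal sketch of such a copy-and-modify
downgradeable lock, built on sync.Mutex and sync.Cond rather than the runtime
semaphores the standard library uses (so it has none of sync.RWMutex's
fairness or anti-starvation guarantees, and its zero value is not ready to
use):

package dlock

import "sync"

// DRWMutex is a readers-writer lock with an atomic write-to-read downgrade.
type DRWMutex struct {
    mu      sync.Mutex
    cond    *sync.Cond // signalled whenever the lock state changes
    readers int        // number of goroutines holding a read lock
    writer  bool       // whether a goroutine holds the write lock
}

// NewDRWMutex returns a ready-to-use DRWMutex.
func NewDRWMutex() *DRWMutex {
    m := &DRWMutex{}
    m.cond = sync.NewCond(&m.mu)
    return m
}

// Lock acquires the write lock, waiting for any writer and all readers.
func (m *DRWMutex) Lock() {
    m.mu.Lock()
    for m.writer || m.readers > 0 {
        m.cond.Wait()
    }
    m.writer = true
    m.mu.Unlock()
}

// Unlock releases the write lock.
func (m *DRWMutex) Unlock() {
    m.mu.Lock()
    m.writer = false
    m.mu.Unlock()
    m.cond.Broadcast()
}

// RLock acquires a read lock, waiting only for a writer.
func (m *DRWMutex) RLock() {
    m.mu.Lock()
    for m.writer {
        m.cond.Wait()
    }
    m.readers++
    m.mu.Unlock()
}

// RUnlock releases a read lock.
func (m *DRWMutex) RUnlock() {
    m.mu.Lock()
    m.readers--
    m.mu.Unlock()
    m.cond.Broadcast()
}

// DowngradeLock converts a held write lock into a read lock in one step:
// the caller is registered as a reader in the same critical section that
// drops the writer flag, so no other writer can slip in between.
func (m *DRWMutex) DowngradeLock() {
    m.mu.Lock()
    m.writer = false
    m.readers++
    m.mu.Unlock()
    m.cond.Broadcast() // wake readers blocked in RLock
}

Constructed with NewDRWMutex, this drops into the cache example at the top of
the thread in place of sync.RWMutex, likely at some cost in uncontended
performance relative to the runtime-semaphore implementation.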

@rsc rsc closed this as completed Jan 29, 2018
@golang golang locked and limited conversation to collaborators Jan 29, 2019