An Akka user claimed not to be suffering from the bug because they were using JCTools, but as it turned out the issue was indeed present, so I had to fix it there as well. But what the hell is GC nepotism? You won't find it in your GC textbooks, but it's a catchy term for a very real problem.
Your Dead Old Relatives Can Promote You!
In the OpenJDK generational GCs there are distinct generations, Old and New. The two generations get collected separately, but they are not independent of each other. In particular, references from NewGen to OldGen will keep OldGen objects alive through an OldGen collection cycle, and similarly references from OldGen to NewGen will keep objects alive through a NewGen collection. This is all quite straightforward, but it has a surprising implication:
Dead OldGen objects will keep dead NewGen objects they reference alive until they are collected by an OldGen collection. The reverse is also true, but since the whole notion of generational GC is based on the assumption that the NewGen is collected far more often than the OldGen, the above case is the real issue. How does this affect you? Is this even a real problem?
Turns out it's real enough to have convinced Doug Lea to fix a particular manifestation of it, but also subtle enough to have slipped past him (and me, and the Akka guys, and Martin Thompson, and others no doubt) in the first place.
Linked Queues: Pathological Nepotism
Consider the following LinkedQueue implementation:
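The original post embeds the implementation as a gist which is not reproduced here; the following is a minimal single-threaded sketch that satisfies the invariants listed below (the concurrent variants discussed later use atomic field access; names are illustrative):

public class LinkedQueue<E> {
    static final class Node<E> {
        E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    // head is where new nodes get linked, tail is where we poll from;
    // both start out pointing at the same value-less stub node.
    private Node<E> head = new Node<>(null);
    private Node<E> tail = head;

    public boolean offer(E e) {
        Node<E> node = new Node<>(e);
        head.next = node; // link the new node after the current head
        head = node;      // head.next is null once more
        return true;
    }

    public E poll() {
        Node<E> next = tail.next;
        if (next == null) {
            return null; // head == tail, the queue is empty
        }
        E value = next.value;
        next.value = null; // MUST clear: 'next' is retained as the new tail/stub
        tail = next;       // note: the discarded node still points at the new tail
        return value;
    }
}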
The following invariants are maintained:
- head == tail <=> queue is empty
- head.next == null
- tail.value == null
- In summary: queue is empty <=> (head == tail && tail.next == null && tail.value == null)
The nodes are created and linked together on offer, and dereferenced as we poll. The poll method MUST clear the value because the queue retains the last node. If we were to let the last node keep its value, we would effectively leak that value reference.
Ignoring the values, imagine the queue is part of the live set and we have enqueued a few elements; the arrows denote the next references:
head -> N3 -> null
tail -> N0 -> N1 -> N2 -> N3
If we now dequeue the elements we get back to empty; brackets denote live objects referenced from dead ones:
head -> N3 -> null
tail -> N3 -> null
DEAD: N0 -> N1 -> N2 (-> N3)
Dead objects are traditionally considered the GC's problem, and cleaning up about-to-be-collected objects is considered an anti-pattern. After all, what is the point of cleaning up garbage? You are just duplicating work. Indeed, in a single-generation collector this is a non-issue.
But consider for a moment the possibility that one of the above nodes was promoted into the OldGen. This is something which may not happen in short-lived tests, or even 'lucky' long-lived tests, but is obviously quite possible. We will have:
head -> N3 -> null
tail -> N3 -> null
Zombie NEW: N1 -> N2 (-> N3)
DEAD OLD: N0 (-> N1 -> N2 (-> N3))
The young are alive, as far as the old are concerned. The old are dead, but their promotion gives life to their 'children' until they get collected. To demonstrate this issue I've written the following program:
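The original program is not reproduced here; below is a hedged sketch in the same spirit, reusing the LinkedQueue sketch from above (the burst size and the padding allocation are illustrative). It keeps bursts of nodes live long enough for some to be tenured, after which young collections stop reclaiming the dead chain:

public class NepotismDemo {
    public static void main(String[] args) {
        LinkedQueue<Object> q = new LinkedQueue<>();
        Object v = new Object();
        byte[] pad = null;
        while (true) {
            // enqueue a burst; these nodes stay live while we churn allocation
            for (int i = 0; i < 100_000; i++) {
                q.offer(v);
            }
            // allocate garbage to drive young GCs while the burst is live,
            // giving some of its nodes a chance to age and get promoted
            for (int i = 0; i < 1_000; i++) {
                pad = new byte[64 * 1024];
            }
            if (pad.length == 0) {
                System.out.println("unreachable"); // keep 'pad' observably used
            }
            // drain back to empty; once any node has been promoted, the dead
            // chain behind it is dragged towards the old generation as well
            while (q.poll() != null) {
                // discard
            }
        }
    }
}

Running it with GC logging enabled (e.g. -verbose:gc) and a small young generation should show the behaviour described next.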
If you look at the GC logs, or plug in VisualVM with the Visual GC plugin, you will notice the young GC is entirely ineffective. Everything gets promoted to old, until an old GC clears the lot. After the old GC the problem disappears (in the test program; it is likely to recur in real programs). This is 'pathological' because ANY promoted node will result in the promotion of ALL following nodes until a GC resolves the issue.
At this point you might be shaking your head sadly and saying: "Ye thick old fart, surely this is a silly issue which NEVER happens in the real world". I've had my share of heads shaken at me and am not offended, but consider that this issue is real enough for other people:
- AbstractNodeQueue potential GC leak - many objects get tenured if we have "bursts" of messages
- Young memory "leak" in LinkedBlockingQueue: see also this post on the same with further background, and the relevant comment on concurrency-interest from Doug Lea. I find the comment "it may be that the effects didn't show up on the tests, machines, VMs I was using. But these are subject to transient changes" in particular reflects the difficulty in exposing the issue in synthetic testing.
- This war story from Tony Printezis referring to this as a recurring pattern in Java services (I wholeheartedly recommend you watch Tony's presentation from beginning to end)
An Exercise To the Reader
My plan was to walk through the way this issue has been fixed in JCTools vs. Akka vs. Agrona, each solution reflecting different considerations, but sadly I'm quite short on time for a tidy summary, so I'll just give my observations, which you can choose to read up on and argue with.
All three implementations follow the algorithm outlined by D. Vyukov. Note that Agrona is ignoring the Queue interface requirement for poll (return null iff the queue is empty).
- In Akka the next reference is nulled on poll. This solves the issue, but at the cost of further weakening the accuracy of the size() method (see the sketch after this list).
- In JCTools the next reference is set to point to the discarded node itself. This helps in telling the difference between a removed node and the head. This piece of information is used in the size method to skip gaps in the counted chain when racing with the consumer thread. The result is a more accurate size estimate. The change does come at a cost to the size method, however, as we are no longer able to rely on the chain remaining whole throughout the size operation.
- In Agrona, the next reference is nulled as in Akka. A further effort is made to reuse a special empty node to avoid repeated promotion of transient nodes in empty queues. The effort is arguably worthwhile on systems with large numbers of active but mostly empty queues.
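To make the first two bullets concrete, here is a hedged sketch of both 'break the chain' flavours, written as a replacement poll() for the single-threaded LinkedQueue sketch earlier in the post (illustrative only, not the actual Akka/JCTools/Agrona code):

public E poll() {
    Node<E> currentTail = tail;
    Node<E> next = currentTail.next;
    if (next == null) {
        return null; // empty
    }
    E value = next.value;
    next.value = null; // clear the value as before
    tail = next;
    // Akka/Agrona flavour: drop the dead node's link so it cannot drag
    // younger nodes into the old generation.
    currentTail.next = null;
    // JCTools flavour (use instead of the line above): self-link the dead node,
    // which lets a racing size() tell a consumed node from the end of the chain
    // while still breaking the dead-to-live reference:
    // currentTail.next = currentTail;
    return value;
}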
Open questions/discussion:
- How common is this issue for 'classic' data structures? It seems reasonable that a host of queue/stack/tree data structures are similarly vulnerable. A scan of the JDK sources suggests this is a known pitfall to its maintainers, but I would bet many third-party framework writers didn't get the memo.
- How common is this issue for idiomatic 'user' Java code? Given the tendency towards composition rather than inheritance and the drive towards smaller objects, we may argue that user object graphs often approximate trees. If we have short-lived 'trees', a parent node can certainly drag a useless chunk of state to the old gen. If a reference bridges 2 trees we may have something approximating the pathological behaviour above. I'm not aware of any studies trying to determine the impact of nepotism on user code; part of the problem would be the definition of 'typical' user code.
- Where does the term 'nepotism' come from? I made an attempt at tracing the origins, but got nowhere. The problem seems to have been documented at least 8 years ago, but the term nepotism is not used in most writing on it. [UPDATE: it seems the term can be traced back to a GC paper published in 1991: An Adaptive Tenuring Policy for Generation Scavengers]
The following invariants are maintained:
head == tail <=> queue is empty
.
.
queue is empty <=> head == tail && tail.next == null && tail.value == null
Doesn't sound too clear to me:
"head == tail if and only if queue is empty"
"queue is empty if and only if head == tail (again) AND tail.next == null AND ..."
Both statements don't hold.
It's an equivalence; the last just collects the first few. Iff being a transitive relationship, I'm not sure how one can hold and the other fail.
(empty <=> head == tail) + (head.next == null) + (tail.value == null)
=> queue is empty <=> (head == tail && tail.next == null && tail.value == null)
It's perhaps not very clear, but I don't think it's wrong.
The statement: [head.next == null <=> queue is empty] is incorrect though, head.next is always null.
How does this affect array-based queue implementations?
In an array-based queue the buffer is usually as long-lived as the queue itself. It's reasonable to assume the buffer is promoted to OldGen at some point. The references to values coming out of the queue are nulled and that's pretty much the end of it...
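For illustration, a minimal sketch of a single-threaded circular-array queue making the same point (assumed names, not code from any of the libraries mentioned; capacity must be a power of two):

public class ArrayQueue<E> {
    private final Object[] buffer;
    private final int mask;
    private long producerIndex;
    private long consumerIndex;

    public ArrayQueue(int capacityPow2) {
        buffer = new Object[capacityPow2];
        mask = capacityPow2 - 1;
    }

    public boolean offer(E e) {
        if (producerIndex - consumerIndex == buffer.length) {
            return false; // full
        }
        buffer[(int) (producerIndex++ & mask)] = e;
        return true;
    }

    @SuppressWarnings("unchecked")
    public E poll() {
        if (consumerIndex == producerIndex) {
            return null; // empty
        }
        int offset = (int) (consumerIndex++ & mask);
        E e = (E) buffer[offset];
        buffer[offset] = null; // null the slot: the long-lived (likely tenured)
                               // buffer no longer keeps the dequeued value alive
        return e;
    }
}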
The OpenJDK sources include this same pattern for LinkedList.java -- http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/6d24852165ba might not have introduced the behaviour, but it at least added a comment (Help GC) indicating why it was done.
Correct, have a look at LinkedBlockingQueue and ConcurrentLinkedQueue for further perspectives.
Nitsan,
Great findings!
What do you think: would it be acceptable to reset not all but only some of the references, leaving most of them untouched, like here: https://github.com/plokhotnyuk/actors/commit/9d5c3c760a2b4c8d5945b45286ff2e622b7b351e#diff-be05b8a29ce1716e869bdec699dd5c74R50
I'm not sure I follow the Scala code, perhaps I need more coffee.
The nepotism damage can be limited by breaking the infinite node chain into smaller chains, definitely. But this is trading the cost of setting a field to null for some sort of conditional cost (controlling when to set it to null). Not sure it's a good trade; do you think it's worth it?
In this code I already have a counting loop which is limited by the `batch: Int` value, and I now reuse it by adding n.lazySet(null) before putting an actor instance to relax via the `async(b1, n1, true)` call that reschedules it in the thread pool.
Somewhere I picked up referring to this problem as being GC hygienic. I think it was a phrase Hans Boehm used that I took note of due to those recurring issues, but I can't find a reference. I'm usually pretty careful about that after seeing it occur in CLQ and friends.
My version of the SCQ has the same issue, but it is required due to walking the chain to notify completion on bulk inserts. That's when producers contend and combine their work across threads to make progress. If a producer is set to wait instead of optimistically completing early, then the combiner must notify that the element was added. This trade-off was acceptable because the queues are drained immediately, so nepotism was an unlikely problem in the cache's scenario.
"...because the queues are drained immediately..." - If your implementation has an effectively contiguous linked list from node to node that is never broken as nodes are removed you may want to reconsider the likelihood of an issue here.
DeleteUnless you can exclude GC from happening while nodes are in flight (which as a library writer you are not in a position to do IMO), all it will take is one node getting promoted. The probability of a live node during a young GC cycle will grow as the application up time grows. In a server application (that is not restarted daily...) this will become a reasonable likelihood.
To guess at an acceptable solution, you might find it possible to break the chain when the consumer observes the queue to be empty, or at some other convenient time. Breaking the chain limits the worst-case damage to the maximum number of linked nodes. I would guess that this (low probability of quite limited GC overhead every so often) is acceptable to most applications.
That's true, it's more that the low write rate tends to not exacerbate the problem. Unfortunately it is a fundamental flaw in the approach if the producers expect linearizability. In that case I don't think there is a convenient way to break the linkages when using a combining arena. For the optimistic version (where producers hand off and don't wait), which is what the cache uses, this isn't a problem and thanks to your article that's now fixed.
Challenge accepted!!! If you drop a link to the relevant code I'll have a go, or if you want to continue discussion over skype/mail I'm happy to try help :-)
Sure! It's not important since this class is scheduled for removal, but you're welcome to take a gander.
https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/SingleConsumerQueue.java
I know your post is more than a month old, but I've just read it today and it reminded me of a talk about Twitter: https://www.youtube.com/watch?v=M9o1LVfGp2A and why they have a dedicated JVM team.
ReplyDeleteHi Nitsan,
Is the head/tail in the sample linked queue reversed on purpose?