{This post is part of a long-running series on lock-free queues, check out the full index to get more context here}
Continued from here, we examine the comparative performance of 2 approaches to implementing queue counters.
If we trace back through the journey we've made with our humble SPSC queue, you'll note a structural back and forth has taken place. In the first post, the original version had the following layout (produced using the excellent java-object-layout tool):
uk.co.real_logic.queues.P1C1QueueOriginal1
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 Object[] P1C1QueueOriginal1.buffer
16 8 long P1C1QueueOriginal1.tail
24 8 long P1C1QueueOriginal1.head
32 (object boundary, size estimate)
We went on to explore various optimisations which were driven by our desire to:
- Replace volatile writes with lazySet (see the sketch after this list)
- Reduce false sharing of hot fields
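As a reminder of what the first of those changes looks like, here is a minimal sketch of swapping a full volatile store for a lazySet ordered store (the class and field names are illustrative, not the actual queue code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: lazySet is an ordered store. It prevents earlier
// writes from being reordered past it (StoreStore), but skips the costly
// StoreLoad fence that a full volatile write (set) pays for on x86.
class TailPublisher {
    private final AtomicLong tail = new AtomicLong();

    void publish(long next) {
        tail.lazySet(next); // cheaper than tail.set(next)
    }

    long current() {
        return tail.get(); // volatile read on the consumer side
    }
}
```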
uk.co.real_logic.queues.P1C1QueueOriginal3
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 int P1C1QueueOriginal3.capacity
16 4 int P1C1QueueOriginal3.mask
20 4 Object[] P1C1QueueOriginal3.buffer
24 4 AtomicLong P1C1QueueOriginal3.tail
28 4 AtomicLong P1C1QueueOriginal3.head
32 4 PaddedLong P1C1QueueOriginal3.tailCache
36 4 PaddedLong P1C1QueueOriginal3.headCache
40 (object boundary, size estimate)
But that is not the whole picture; we now had 4 new objects (2 of each class below) referred to from the above object:
uk.co.real_logic.queues.P1C1QueueOriginal3$PaddedLong
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 (alignment/padding gap)
16 8 long PaddedLong.value
24-64 8 long PaddedLong.p1-p6
72 (object boundary, size estimate)
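A PaddedLong along the following lines yields the layout above; note the padding is one-sided (a sketch, the original source may differ slightly):

```java
// One-sided padding: p1-p6 keep value away from whatever the JVM lays out
// after this object, but nothing protects the side before value.
public static class PaddedLong {
    public long value = 0;
    public long p1, p2, p3, p4, p5, p6;
}
```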
uk.co.real_logic.queues.PaddedAtomicLong
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 (alignment/padding gap)
16 8 long AtomicLong.value
24-64 8 long PaddedAtomicLong.p1-p6
72 (object boundary, size estimate)
These counters are different from the original because they are padded (on one side at least), but they also represent a different overall memory layout/access pattern. The counters are now floating: they can get relocated at the whim of the JVM. Given these are in all probability long-lived objects, I wouldn't think they move much after a few collections, but each collection presents a new opportunity to shuffle them about.
In my last post I explored 2 directions at once:
- I fixed all the remaining false-sharing potential by padding the counters, the class itself and the data in the array.
- I inlined the counters back into the queue class, in a way returning to the original layout (with all the trimmings we added later), but with more padding.
psy.lob.saw.queues.spsc4.SPSCQueue4
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 (alignment/padding gap)
16-72 8 long L0Pad.p00-07
80 4 int ColdFields.capacity
84 4 int ColdFields.mask
88 4 Object[] ColdFields.buffer
92 4 (alignment/padding gap)
96-144 8 long L1Pad.p10-16
152 8 long TailField.tail
160-208 8 long L2Pad.p20-26
216 8 long HeadCache.headCache
224-272 8 long L3Pad.p30-36
280 8 long HeadField.head
288-336 8 long L4Pad.p40-46
344 8 long TailCache.tailCache
352-400 8 long L5Pad.p50-56
408 (object boundary, size estimate)
This worked well, reducing run-to-run variance almost completely and delivering good performance. The problem was that it failed to hit the highs of the original implementation and its variants, particularly when running across cores.
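The layout above is the product of an inheritance chain of padding classes; since subclass fields are laid out after superclass fields, the chain forces each hot field onto its own cache line. A condensed sketch (class and field names are taken from the layout; volatile stands in here for the ordered writes the real code performs):

```java
// Each pad layer separates the field(s) below it from the field(s) above
// it by a full cache line (7-8 longs = 56-64 bytes).
class L0Pad { protected long p00, p01, p02, p03, p04, p05, p06, p07; }
class ColdFields extends L0Pad { protected int capacity, mask; protected Object[] buffer; }
class L1Pad extends ColdFields { protected long p10, p11, p12, p13, p14, p15, p16; }
class TailField extends L1Pad { protected volatile long tail; }
class L2Pad extends TailField { protected long p20, p21, p22, p23, p24, p25, p26; }
class HeadCache extends L2Pad { protected long headCache; }
class L3Pad extends HeadCache { protected long p30, p31, p32, p33, p34, p35, p36; }
class HeadField extends L3Pad { protected volatile long head; }
class L4Pad extends HeadField { protected long p40, p41, p42, p43, p44, p45, p46; }
class TailCache extends L4Pad { protected long tailCache; }
class L5Pad extends TailCache { protected long p50, p51, p52, p53, p54, p55, p56; }
public class SPSCQueue4 extends L5Pad { /* offer/poll elided */ }
```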
To further explore the difference between the inlined and floating versions I went back and applied the full padding treatment to the floating counters version. This meant replacing PaddedLong and PaddedAtomicLong with fully padded implementations, adding padding around the class fields and padding the data. The full code is here; it's very similar to what we've done to pad the other classes. The end result has the following layout:
psy.lob.saw.queues.spsc.fc.SPSPQueueFloatingCounters4
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 (alignment/padding gap)
16-72 8 long SPSPQueueFloatingCounters4P0.p00-p07
80 4 int SPSPQueueFloatingCounters4Fields.capacity
84 4 int SPSPQueueFloatingCounters4Fields.mask
88 4 Object[] SPSPQueueFloatingCounters4Fields.buffer
92 4 VolatileLongCell SPSPQueueFloatingCounters4Fields.tail
96 4 VolatileLongCell SPSPQueueFloatingCounters4Fields.head
100 4 LongCell SPSPQueueFloatingCounters4Fields.tailCache
104 4 LongCell SPSPQueueFloatingCounters4Fields.headCache
108 4 (alignment/padding gap)
112-168 8 long SPSPQueueFloatingCounters4.p10-p17
176 (object boundary, size estimate)
If we cared more about memory consumption we could count the object header as padding and pad with integers to avoid the alignment gaps. I'm not that worried, so I won't. Note that the floating counters have to consume double the padding required for the flattened counters, as they have no guarantees about their neighbours on either side. In the interest of comparing the impact of the data padding separately, I also implemented a variant without data padding.
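For completeness, padding the data means the backing array itself gets dead slots at both ends, so live elements never share a cache line with the array header or with whatever neighbours the array on the heap. A sketch of the idea (BUFFER_PAD and the index arithmetic are assumptions, not the actual code):

```java
// Assumed sketch of data padding: over-allocate the array and offset every
// index, leaving BUFFER_PAD dead references at each end as insulation.
public class PaddedDataBuffer {
    private static final int BUFFER_PAD = 16; // 16 refs = 64 bytes with compressed oops
    private final int mask;
    private final Object[] buffer;

    public PaddedDataBuffer(int capacity) { // capacity must be a power of 2
        this.mask = capacity - 1;
        this.buffer = new Object[capacity + 2 * BUFFER_PAD];
    }

    private int index(long sequence) {
        return BUFFER_PAD + (int) (sequence & mask);
    }
}
```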
psy.lob.saw.queues.spsc.fc.LongCell
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 (alignment/padding gap)
16-64 8 long LongCellP0.p0-p6
72 8 long LongCellValue.value
80-128 8 long LongCell.p10-p16
136 (object boundary, size estimate)
psy.lob.saw.queues.spsc.fc.VolatileLongCell
offset size type description
0 12 (assumed to be the object header + first field alignment)
12 4 (alignment/padding gap)
16-64 8 long VolatileLongCellP0.p0-p6
72 8 long VolatileLongCellValue.value
80-128 8 long VolatileLongCell.p10-p16
136 (object boundary, size estimate)
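Both cell classes follow the same recipe as the inlined version, just wrapped in their own object: pad longs on either side of the value via inheritance. Roughly (the real VolatileLongCell presumably writes with lazySet/putOrdered semantics; a plain volatile is shown for brevity):

```java
// A floating counter has no say over its heap neighbours, so it must pad
// both sides of the value - double the padding of the inlined layout.
class VolatileLongCellP0 { long p0, p1, p2, p3, p4, p5, p6; }
class VolatileLongCellValue extends VolatileLongCellP0 { volatile long value; }
public class VolatileLongCell extends VolatileLongCellValue {
    long p10, p11, p12, p13, p14, p15, p16;

    public long get() { return value; }
    public void set(long v) { value = v; }
}
```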
Which one is more better?
While the charts produced previously are instructive and good at highlighting the variance, they make the post very bulky, so this time we'll try tables. The data, with charts, is available here for those who prefer them. I've expanded the testing of JVM parameters a bit to cover the effect of combining the 3 options used before; otherwise the method and setup are the same. Each cell lists the min, mid and max results in millions of ops/sec. The abbreviations stand for the following:
- O1 - original lock-free SPSC queue with the original padding.
- FC3 - Floating Counters, fully padded, data is not padded.
- FC4 - Floating Counters, fully padded, data is padded.
- I3 - Inlined Counters, fully padded, data is not padded.
- I4 - Inlined Counters, fully padded, data is padded.
- CCM - using the -XX:+UseCondCardMark flag
- CT - using the -XX:CompileThreshold=100000 flag
- NUMA - using the -XX:+UseNUMA flag
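To illustrate, the CCM+CT+NUMA column corresponds to a run with all three flags set, along the lines of the following (the harness class name is the one discussed in the comments below; exact arguments depend on the benchmark code):

```
java -XX:+UseCondCardMark -XX:CompileThreshold=100000 -XX:+UseNUMA QueuePerfTest <queue-impl>
```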
Same Core:

| | No Opts | CCM | CT | NUMA | CCM+CT | CCM+NUMA | CT+NUMA | CCM+CT+NUMA |
|---|---|---|---|---|---|---|---|---|
| O1 | 107,108,113 | 97,102,103 | 111,112,112 | 88,108,113 | 97,103,103 | 100,102,103 | 111,112,112 | 102,103,103 |
| FC3 | 105,108,113 | 102,103,104 | 111,112,113 | 103,108,113 | 102,103,103 | 97,103,103 | 111,112,113 | 102,103,103 |
| FC4 | 93,96,101 | 87,89,90 | 99,100,101 | 82,95,100 | 89,90,90 | 86,90,90 | 81,101,101 | 89,90,90 |
| I3 | 103,123,130 | 97,98,100 | 129,129,130 | 122,129,130 | 105,106,107 | 97,98,99 | 128,129,130 | 104,105,107 |
| I4 | 108,113,119 | 103,105,106 | 111,113,113 | 99,118,120 | 105,113,114 | 98,104,114 | 112,112,113 | 105,114,114 |
Cross Core:

| | No Opts | CCM | CT | NUMA | CCM+CT | CCM+NUMA | CT+NUMA | CCM+CT+NUMA |
|---|---|---|---|---|---|---|---|---|
| O1 | 38,90,130 | 57,84,136 | 40,94,108 | 38,59,118 | 58,98,114 | 55,113,130 | 39,53,109 | 57,96,112 |
| FC3 | 94,120,135 | 109,141,154 | 94,105,117 | 98,115,124 | 103,115,128 | 116,129,140 | 95,108,118 | 102,120,127 |
| FC4 | 106,124,132 | 118,137,152 | 104,113,119 | 107,119,130 | 107,126,133 | 114,132,150 | 105,114,119 | 99,123,131 |
| I3 | 86,114,122 | 88,113,128 | 72,96,111 | 90,112,123 | 86,98,108 | 85,117,125 | 88,96,111 | 90,99,108 |
| I4 | 49,107,156 | 78,133,171 | 58,100,112 | 48,126,155 | 108,128,164 | 88,143,173 | 55,96,113 | 104,115,164 |
I leave it to you to read meaning into the results, but my takeaways are as follows:
- Increasing the CompileThreshold is not equally good for all code in all cases. In the data above it is not proving helpful in and of itself in the cross core case for any implementation. It does seem to help once CCM is thrown in as well.
- Using CondCardMark makes a mighty difference. It hurts performance in the single core case, but greatly improves the cross core case for all implementations. The difference made by CCM is covered by Dave Dice here, and it goes a long way towards explaining the variance experienced in the inlined versions when running without it.
- NUMA makes little difference that I can see in the above cases. This is as expected, since the code is run on the same NUMA node throughout. Running across NUMA nodes we might see a difference.
- As you can see there is still quite a bit of instability going on, though as an overall recommendation thus far I'd say I4 is the winner. FC4 is not that far behind if you consider the mid results to be the most representative, and it offers more stable overall results in terms of variance.
- 173M ops/sec! It's a high note worth musing over... But... didn't I promise you 250M? I did, you'll have to wait.
The above results also demonstrate that data inlining is a valid optimisation with measurable benefits. I expect the results in more real-life scenarios to favour the inlined version even more, as it offers better data locality and predictable access compared to the floating-fields variants.
One potential advantage for the floating counters may be available should we be able to allocate the counters on their writer threads. I have not explored this option, but based on Dave Dice's observations I expect some improvement. This will make the queue quite awkward to set up, but worth a go.
There's at least one more post coming on this topic, considering the mechanics of the experiment and their implications. And after that? buggered if I know. But hang in there, it might become interesting again ;-)
UPDATE: thanks Chris Vest for pointing out I left out the link to the data, fixed now.
looking forward to the next post =)
It's a coming :-), getting very good results
Hi Nitsan,
have you checked out Fast-Flow?
http://calvados.di.unipi.it/fastflow
I'm in the process of creating a Java port (for x86 / AMD64 only at the moment) of FastFlow.
Their core structure is a bounded FIFO SPSC queue dubbed a Buffer.
When correctly cache aligned (data and control variables) I manage over 400M queue/dequeue (synthetic) operations within a second on an i7 3720 2.6 GHz (Sun JDK 1.7, default settings).
This gets reduced to about 150M when using actual data due to memory pressure.
I'm aware of FastFlow, but not had a look at their code. The result you quote is great, can you please share:
1. The code
2. The benchmark harness
I'd be keen to run the same code within the same conditions and see what gives. The harness is particularly important as it is key that we measure the same thing under the same conditions. I'm in the process of writing a follow up to this post exploring the different results you get from tweaking the harness and the significance of such tweaks to real world behaviour.
Also not clear what you mean by synthetic vs actual data.
I'm very happy about your comment, one of the main principles I'm trying to promote via this blog is peer review and an open approach to implementation and results :-)
Find sample code at http://pastebin.com/46GeLzdH (GPL)
I haven't finished the FF/j lib yet, but multi producer performance seems pretty decent via FF.
The synth benchmark is simply pushing the same object between two threads via the FFBuffer.
When using actual data (e.g. creating a new item to be pushed to the queue via the new operator), performance drops considerably, but assuming you only want to pass longs around, this should give you an adequate estimate.
Having looked through the code, here are some issues I found:
1. The writes to head/tail (or pread/pwrite as they are named in your impl) are plain writes; as such they offer no happens-before relationship. This means the code is very likely (at the mercy of JIT compilation) incorrect and can lead to elements being dropped. Note in the original code (http://sourceforge.net/p/mc-fastflow/code/62/tree/ff/buffer.hpp#l26) the calls to WMB(), that is a write memory barrier; your code is missing that, and that is why you need either a volatile write or a lazySet/putOrdered.
2. The padding around the data is not actually padding the data. In Java the data is not inlined into the top struct layout; it's an array, and an array field is a reference to the array object allocated outside of the instance.
3. The index is an int. This means it will wrap pretty quickly. The original code handles this eventuality while yours doesn't. MAX_INT is not such a big number if you consider a system that needs to run for a few months without a restart.
4. The test code re-allocates the buffer on each iteration. This gives you a false impression of the memory layout. Queues are normally long-standing objects and get packed into the old generation early in the life of a system. This impacts runtime behaviour.
5. You are not actually doing anything with the value you read. This means that the JIT is open to do some pretty unrealistic (in the sense that people are never likely to use the code that way) optimizations on your code.
6. As you are not using the inheritance method to enforce field ordering, you are more open to fields being reordered. See previous posts on memory layout for more information on that.
Thanks for your interest, and for pointing me at the FastFlow paper, very interesting.
Regards,
Nitsan
Let me try to answer
Restriction: The queue is only designed for x86 / AMD64 systems.
1. WMB() is a noop on X86 / AMD64 systems as it is stated in the original code, hence the limitation.
2. I know, the padding is into the data array, which further evades false sharing
3. Wrapping is not a problem and is accounted for (two's complement); it's a bounded circular buffer
4. Again, not a problem, as the values in the queue are set to null on dequeue
5. That is true: It actually behaves in the same manner as your QueuePerfTest.java
That is also the reason I call this a synthetic test.
As said, when I insert i (the counter) and build the sum of the values, the performance drops to about 150M
6. I understand that. I've only considered up to OpenJDK (till 7-b127), not sure how it behaves afterwards; have to check that, but I doubt that the performance of the code will be affected.
I understand your concern, but the buffer does behave exactly like the original C++ version.
It does not wrap and stops accepting new items once it gets full.
That is also the reason why the designers of this queue call it wait-free, as it does not rely on shared CASed values.
See http://pastebin.com/YJkHZzZQ for revised version.
This one implements the Queue interface, so you should be able to drop it into your test harness.
I've changed the embedded test case, which now builds the sum and verifies its correctness.
Cheers,
Pressenna
1. WMB is a STORESTORE barrier, which is a noop in the sense that it is a MOV like any other move, but is not the same as not having a barrier. It's a compiler instruction to stop writes from getting re-ordered. Plain writes are not a memory barrier. WMB translates to __asm__ __volatile__ ("": : :"memory"), not an empty line. This is a compiler memory barrier instruction (http://en.wikipedia.org/wiki/Memory_ordering#Compiler_memory_barrier). If you are not using a memory barrier your implementation is potentially broken.
2. How is padding 'into the array'? You are padding around the reference to the array, not actually around the array data.
3. Fair point for an implementation which does not implement size. If size was implemented running out of ints would be a problem. My bad.
4. Your test was different because the queue was allocated on each test run, not reused across all the run iterations. In my testing this has made a difference. I see you have changed that. Now that you've implemented Queue I can slot it into the tests I have. Also note that the QueuePerfTest is measuring including the thread creation and until thread.join, and that running the publisher ahead of the client instead of the other way around is also significant to the results. QueuePerfTest was written by Martin Thompson for a presentation/demonstration, it's not perfect but to compare results you must have the same experiment going.
5. QueuePerfTest keeps the last value dequeued and prints it so that JIT can't just ignore the values or optimize away reading them. Your test never examined the result. Now it does.
6. If the memory layout changes you might be exposed to false sharing and then performance will change.
The queues presented in this series of posts are all lock-free and wait-free; there is not a CAS in sight :-) I took the original implementation from Martin's presentation and played with it. He has other tricks up his sleeve; I picked this algorithm as he made it open source. I'm very glad you are dragging me into another variation on this topic :-) I like it! Side-stepping the size and using the null check instead is brilliant. I think it will perform just as well with the write barrier corrected. The size function should also be straightforward enough to implement at that point. NEW TOYS!!! :-)
Thanks a lot, this is very interesting (for me anyway).
Hi, good morning.
1. The WMB is a compiler memory barrier, which is a NOOP on Intel/AMD CPUs.
As they are out-of-order CPUs it is irrelevant for the implementation at hand.
2. I am accessing the array in strides, which seems only important in single queue scenarios.
Doing this reduces false sharing and boost performance by reading aligned memory locations.
3. A size() method only works as a hint, as its value constantly changes.
So reducing it to empty() / not_empty() saves a lot of resources and avoids sharing "hot" state between threads.
4. I wasn't able to measure any difference in the micro benchmarks, so not sure.
5. Adjusted it - again - I was not able to measure any difference in local tests.
6. That might be true
"FastFlow (斋戒流) is a parallel programming framework for multi-core platforms based upon non-blocking lock-free/fence-free synchronization mechanisms.". FastFlow's buffer (queue) is fence free, which you pointed out in #1.
#1 and #2 seems to cater for the primary performance of this queue implementation.
Since FastFlow is built on top of this essential structure, I am looking for the best possible SPSC queue out there for the Java port. I might have to use another queue implementation on a different CPU platform, but that will be hidden behind an interface.
This is also of great interest to me and it helps me a lot discussing these matters with you.
Ok, mid way through first coffee... will try and make sense:
1. If you have no memory barrier then you are not publishing to your queue correctly and the consumer can see half-initialized state from the producer. WMB is a CPU noop but also a compiler instruction to not re-order code. I can't see how it would be irrelevant to any queue implementation.
2. Access in strides also means you are getting less throughput per stride from CPU cache line fetching. If you set the POW to 4 you'll get bad throughput.
3. Size is a feature. It's like reading any unlocked shared data. This may be a requirement for monitoring. Having a size method doesn't mean you have to use it in offer/poll.
4. Try forcing GC between iterations. This should help shake things into place.
5. OK, I'm doing a writeup on the differences I see in the next post, so won't go into this further.
Fence-free (as in free of full fence instructions) is not memory-barrier-free. See here for the FF WMB: http://sourceforge.net/p/mc-fastflow/code/62/tree/ff/sysdep.h
Thanks :-)
1. http://en.wikipedia.org/wiki/Memory_ordering
Delete"These barriers prevent a compiler from reordering instructions, they do not prevent reordering by CPU."
If I understand this correctly, the CPU might reorder the write instructions thus ignoring the order.
AFAIK, a mfence is required to prohibit the CPU from reordering, which is not used anywhere here.
2. I know, this is a very tweaky parameter as it drops by 2 in performance when changed from 2.
3. I guess you are right, but I think empty/not-empty might be an acceptable alternative.
4. I still don't see any difference locally
5. waiting for it and the harness :)
1. For Java a STORE/STORE barrier is required, which is putOrdered. This is also described in the JMM as a no-op, but the ordering inhibition on the JIT compiler is significant. I'm not sure an mfence is required; this is how WMBs are handled in Java... I've added your version and a version using putOrdered to the examples on Git and to the harness (if this is an issue let me know and I'll remove your code). Have a look.
4. Maybe I've been sniffing too much glue ;-). I will re-test at some point. For reference see QueuePerfTest/ReversedQueuePerfTest/FairQueuePerfTest on GitHub for variations on the theme. I'm getting the most consistent results with the Fair option. Your implementation is 9, the ordered variation is 91.
5. The code is all there if you want a spoiler (just test for yourself), it'll be another week at the very least. Be sure to compare the relative performance when running on the same core and across cores.
Once putOrdered is in place I found the new method produces very similar results to the existing inlined variants. Currently 45 is looking like the best option (SPSCQueue5).
Pressenna, this discussion urged me to write a post clarifying the "no-op" term used in the JMM for these memory barriers:
http://psy-lob-saw.blogspot.com/2013/08/memory-barriers-are-not-free.html
Hope it helps the explanation above.
Hi Nitsan,
Good article regarding the meaning of No-Op.
I checked out your code and test harness.
When I remove the Thread.yield() calls from the Producer and Consumer in QueuePerfTest (to simulate busy spinning), I end up with occasional JVM crashes in FFBuffer, FFBufferOrdered and SPSCQueue5.
After looking at the assemblies it appears that the C2 compiler is optimizing and rearranging instructions as it pleases, regardless of using Unsafe.putOrdered* or volatile keywords.
Not that the ASM it generates is wrong, it just appears to end up in an endless loop.
Would you know a means to disallow such optimizations?
Regards,
Pressenna
1. How do you mean crash? Core dump? Hang? I can understand hanging, but core dumping on FFBuffer would indicate a JVM bug as it is playing within the limits of the language (i.e. not using Unsafe/JNI).
2. I can't reproduce the issue you describe. I'm running on a recent i7 4500MQ/Ubuntu13.04/JDK1.7u40 and none of the implementations hang/crash.
3. The WMB doesn't stop re-ordering completely. It limits re-ordering such that other writes can't be moved from before the barrier to after the barrier. If this is the issue you see then this is a JVM bug. I'll have a dig around the assemblies but I'd be very surprised if that is the case. There's no means to disallow these optimizations beyond using memory barriers, which you report not to work for you.
If you have a look at the latest code you'll notice there are later variations on the implementations. I'm still thinking about some of the details in terms of correctness so don't assume they are all 100% :-)
Finally, I feel this discussion is probably better pursued by e-mail. I'm on nitsanw ._. at ._. yahoo ._. com let's talk.
Thanks,
Nitsan
Correction, after many repeat runs I can see it hang for some of the FFBuffer variations, but none of the SPSC variations. I suspect this is due to the fact I kept to the original assumption that only a write memory barrier is required (in some of the variations). Where there is a matching read memory barrier I don't see it hang.