Comments on Psychosomatic, Lobotomy, Saw: A Lock Free Multi Producer Single Consumer Queue - Round 1

Gopal Patel (2014-03-19 18:07)
This implementation definitely has a drawback in the form of latency spikes caused by producer starvation. IMHO, for latency-sensitive applications, doing a round-robin read across all producer queues would keep latency within SLAs.

Nitsan (2013-10-30 06:40)
The thing you are trying to measure here is the rate of contention/CAS misses. You can certainly keep a last back-off meter around and use it to make a better choice next time around, but you'd have to think long and hard about that heuristic in the context of your application. This is particularly hard because there is no fairness to the contention: if I have 4 producers, the CAS miss rate should apply to all of them, but in reality I'm not sure that would happen, so you'd need to guard against that.
I'll go into some alternatives in my next post :-)
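A minimal sketch of the kind of feedback-driven back-off discussed in the comment above, i.e. keeping a running measure of CAS misses and using it to pick the next back-off action. The AdaptiveBackoff name, the smoothing factor and the thresholds are illustrative assumptions, not part of the queue implementations from the post.

```java
import java.util.concurrent.locks.LockSupport;

// Illustrative sketch only: tracks a recent CAS miss rate and picks a
// back-off action accordingly. Intended as one instance per producer
// thread (no synchronisation on the field). Thresholds and the smoothing
// factor are made-up values that would need tuning.
final class AdaptiveBackoff {
    private static final double ALPHA = 0.1; // smoothing factor (assumption)
    private double missRate = 0.0;           // exponentially weighted CAS miss rate

    // Producer calls this after every offer attempt on its queue.
    void recordAttempt(boolean casFailed) {
        missRate = ALPHA * (casFailed ? 1.0 : 0.0) + (1.0 - ALPHA) * missRate;
    }

    // Pick a back-off based on how contended the tail currently looks.
    void backoff() {
        if (missRate < 0.1) {
            // low contention: spin hint, cheapest to recover from
            Thread.onSpinWait();  // Java 9+; an empty loop body on older JVMs
        } else if (missRate < 0.5) {
            Thread.yield();
        } else {
            // heavy contention: park briefly (the actual park time is far
            // coarser than the nanos argument suggests)
            LockSupport.parkNanos(1L);
        }
    }
}
```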
ymo (2013-10-29 19:57)
Nitsan, here is a *crazy* idea.
In the back-off strategy, would it make sense to cache the "rate of change" from the producer and make educated guesses as to what type of back-off strategy to use? Sort of a feedback-loop system?

Anonymous (2013-10-27 05:25)
Yeah, I was thinking one SPSC queue per producer and the consumer doing a round-robin read from each.

Nitsan (2013-10-26 09:53)
I've done that, and it is hitting a stable 40m ops/sec. The implementation is on GitHub next to the other ones. I randomised the consumer's choice of queue to read from to avoid starvation, but it offers no guarantees of fairness or degree of reordering or anything. I've also been looking at the queue-per-producer implementation but am not happy with it just yet.

Anonymous (2013-10-26 08:51)
Nitsan, what are your thoughts on using multiple SPSC queues instead (you sacrifice FIFO of course)?

phraktle (2013-10-21 11:09)
I had very good results with an MPSC queue built on top of per-thread SPSC queues. This of course relaxes the FIFO semantics, but there's no contention between producers (each still being a single writer).
The MPSC queue maintains a list of SPSC queues in a CopyOnWriteArrayList (updated when a thread joins/leaves), and the dequeue operation simply loops through them all, accumulating a larger batch (as the single consumer).
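A rough sketch of the structure phraktle describes: one SPSC queue per producer thread, registered in a CopyOnWriteArrayList, with the single consumer looping over them to drain a batch. The class name, the ThreadLocal registration scheme, and the use of ConcurrentLinkedQueue as a stand-in for a real SPSC queue are assumptions made purely to keep the example self-contained.

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Sketch of an MPSC queue composed of per-producer SPSC queues.
// A real version would plug in a proper single-producer/single-consumer
// ring buffer instead of ConcurrentLinkedQueue, and would handle
// removing queues when producer threads leave (omitted here).
final class SpscCompositeMpsc<E> {
    // One queue per producer thread; CopyOnWriteArrayList keeps the rare
    // add (a thread joining) cheap for the consumer's iteration.
    private final List<Queue<E>> producerQueues = new CopyOnWriteArrayList<>();

    private final ThreadLocal<Queue<E>> localQueue = ThreadLocal.withInitial(() -> {
        Queue<E> q = new ConcurrentLinkedQueue<>();
        producerQueues.add(q);
        return q;
    });

    // Producer side: each thread only ever writes to its own queue,
    // so there is no producer-to-producer contention.
    public void offer(E e) {
        localQueue.get().offer(e);
    }

    // Consumer side: loop over all producer queues, draining whatever is
    // available from each into one larger batch. FIFO ordering across
    // producers is not preserved.
    public int drain(Consumer<E> onElement) {
        int drained = 0;
        for (Queue<E> q : producerQueues) {
            E e;
            while ((e = q.poll()) != null) {
                onElement.accept(e);
                drained++;
            }
        }
        return drained;
    }
}
```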
Nitsan (2013-10-21 09:14)
I agree, the yield/park strategies improve throughput by introducing latency. I've not measured the impact on latency, and you are correct in pointing out that this will induce variations in latency that are not appropriate for certain applications. Still, the strategy is configurable per use case, and as such we can choose a combined spin/yield/park and fine-tune it to our use case. But low latency is not the only use case...
As stated in the summary, this is far from complete work, more an exploration of available alternatives.
Would I recommend people use the PARK strategy in a time-critical program (trading/gaming etc.)? No. In fact, I would not encourage anyone to use the above code in its current state. Copy and paste at your own risk, children.

Martin Thompson (2013-10-21 08:36)
The throughput "improvements" with the back-off strategies are probably not what you expect. For periods of time you effectively have an uncontended queue while the other threads have disengaged. This can result in massive variations in latency. Imagine 2 threads offering at the same time: one wins and the other is parked for over 50us (not the 1ns the API call suggests) in the best case, rather than doing a quick retry on the CAS. This type of microbenchmark is very misleading as to real-world performance.
You effectively end up exchanging in large batches while other threads get starved out while parked or yielding. A progressive back-off strategy of spinning, then yielding, then parking can be more useful in the real world.
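A minimal sketch of the progressive spin-then-yield-then-park back-off Martin describes, to contrast with parking on the first CAS miss. The iteration thresholds are arbitrary placeholders, not tuned values, and the class is an illustrative assumption rather than code from the post.

```java
import java.util.concurrent.locks.LockSupport;

// Sketch of a progressive back-off: spin first, then yield, then park.
// SPIN_LIMIT and YIELD_LIMIT are arbitrary placeholders that would need
// tuning for a real workload.
final class ProgressiveBackoff {
    private static final int SPIN_LIMIT = 100;
    private static final int YIELD_LIMIT = 200;

    private int attempts = 0;

    // Called after each failed CAS on the producer side.
    void backoff() {
        if (attempts < SPIN_LIMIT) {
            // stay on-core: cheapest way to retry under brief contention
        } else if (attempts < YIELD_LIMIT) {
            Thread.yield();
        } else {
            // give up the CPU for a while; the actual park duration is much
            // coarser than the requested nanos
            LockSupport.parkNanos(1L);
        }
        attempts++;
    }

    // Called after a successful CAS so the next contention episode
    // starts from spinning again.
    void reset() {
        attempts = 0;
    }
}
```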