Zookeeper barrier implementation - apache-zookeeper

I am trying to implement a barrier in ZooKeeper. My implementation works all of the time when only a small number of nodes need to join to pass the barrier. However, when I test it with 100+ nodes needing to join the barrier, around 1% of the time one of the nodes seems to miss the last watcher event and never re-checks whether the number of children of the barrier node has changed.
I even synchronized the process method on the watcher, but that did not change anything. Below is the code for my process method, and the logic that checks whether the node needs to move forward.
Watcher process:
public BarrierWatcher(FastBarrier fastBarrier) {
    this.ofb = fastBarrier;
}

@Override
public synchronized void process(WatchedEvent event) {
    synchronized (ofb) {
        ofb.notify();
    }
}
Logic to control the barrier mechanism:
BarrierWatcher bw = new BarrierWatcher(this);
List<String> memberList = zk.getChildren(barrierPath, bw);
synchronized (this) {
    while (memberList.size() < numOfMembers) {
        this.wait(1000);
        memberList = zk.getChildren(barrierPath, bw);
    }
}
Instead of just calling this.wait(), I added this.wait(1000) because of the rare failure. With the 1000 ms timeout in place it always passes the barrier once all nodes have joined. I was sure that synchronizing the process method would fix this, but it hasn't. Does anyone have any experience with this, or any idea what I might be doing wrong?

You can compare your implementation with Netflix's Curator (now Apache Curator), where a distributed barrier is already implemented.
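For reference, a barrier that waits for a fixed number of members maps onto Curator's DistributedDoubleBarrier recipe. A minimal sketch, assuming the Apache Curator artifacts (the connect string, path, and member count below are placeholders; the older Netflix-era releases use com.netflix.curator packages instead):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.barriers.DistributedDoubleBarrier;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorBarrierSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string and retry policy for this sketch.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        int numOfMembers = 100;
        DistributedDoubleBarrier barrier =
                new DistributedDoubleBarrier(client, "/barrier", numOfMembers);

        barrier.enter(); // blocks until numOfMembers members have entered
        // ... work that requires all members to be present ...
        barrier.leave(); // blocks until all members have left

        client.close();
    }
}

The recipe takes care of re-registering watches and re-checking the children after every event, which is exactly the area where hand-rolled barriers tend to be fragile.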

Related

Why is CAS (Compare and Swap) atomic?

I know CAS is a well-known atomic operation, but I struggle to see why it must be atomic. Take the sample code below as an example. After if (*accum == *dest), if another thread jumps in and succeeds in modifying *dest first, and execution then switches back to the previous thread, it proceeds to *dest = newval;. Wouldn't that lead to a failure?
Is there something I am missing? Is there some mechanism that would prevent the above scenario from happening?
Any discussions would be greatly appreciated!
bool compare_and_swap(int *accum, int *dest, int newval)
{
    if (*accum == *dest) {
        *dest = newval;
        return true;
    } else {
        *accum = *dest;
        return false;
    }
}
Often people use example code that is not atomic to describe what a CPU does atomically with a single instruction, because it's easier to see how it would work (and because a single cmpxchg instruction doesn't tell you much about how it works).
The code you've shown is like that (not atomic, to help understand how it works).
I had this question too. This kind of thing can't happen. The function that you wrote is an abstract description of what the CPU does, and the real implementation is atomic. You can google the keyword "cmpxchg" and you will find the answer you are looking for.
Yes, as plain source code this can lead to the pitfall you mention. However, if we look at how it is compiled, it turns into a cmpxchg instruction, which is executed atomically by the CPU.
As a computer-science concept, compare-and-swap HAS to be implemented atomically because of what it is designed to do as a consensus object: https://stackoverflow.com/a/56383038/526864
if another thread jumps in and succeeds in modifying *dest first
I think that this premise is flawed, because *dest cannot be allowed to change. The pseudocode should look more like:
bool compare_and_swap(int *p, int oldval, int newval)
{
    if (*p == oldval) {
        *p = newval;
        return true;
    } else {
        return false;
    }
}
The example that you provided was for a specific implementation that returns the winning process's PID to the losers and only allows the single modification to *dest:
an election protocol can be implemented such that every process checks the result of compare_and_swap against its own PID (= newval)
So compare-and-swap is either implemented with an atomic function/library or compiles down to cmpxchg, as you surmised.
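If it helps to see the "atomic function/library" form, here is a small Java sketch using java.util.concurrent.atomic (on common JVMs compareAndSet is JIT-compiled to the hardware CAS, e.g. lock cmpxchg on x86); the class and variable names are made up for the example:

import java.util.concurrent.atomic.AtomicInteger;

public class CasSketch {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // The comparison and the store happen as one indivisible operation,
        // so no other thread can slip in between them.
        boolean swapped = counter.compareAndSet(0, 42);
        System.out.println(swapped + " -> " + counter.get()); // true -> 42

        // Typical usage pattern: read, compute, retry until the CAS succeeds.
        int oldVal, newVal;
        do {
            oldVal = counter.get();
            newVal = oldVal + 1;
        } while (!counter.compareAndSet(oldVal, newVal));
    }
}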
Do you think that these methods are special methods that directly utilize the hardware to perform atomic operations?

variable sharing between isr and function call

Consider the following code
int Var;

Function1() {
    [CS_Start]
    Var++;
    [CS_End]
}

Function2() {
    [CS_Start]
    Var += 2;
    [CS_End]
}

ISR() {
    [CS_Start]
    Var--;
    [CS_End]
}
How do you protect Var in a multitasking environment? One design I understand is to declare Var as volatile so that it is safe under a multiprocessor scheduling scheme. Additionally, a spinlock (in place of a mutex) can be implemented to protect the critical section.
What happens if the spinlock is acquired by Function1 and the ISR occurs (with a higher priority than the scheduler timer)? The ISR will keep polling, and Function1 never gets a chance to release the lock. Is there any solution to this problem?

Querying a continously running operation for its current state/value in Scala

I have a procedure that continuously updates a value, and I want to be able to periodically query the operation for the current value. In my particular example, every update can be considered an improvement and the procedure will eventually converge on a final, best answer, but I want/need access to the intermediate results. The speed with which the loop executes and the time it takes to converge both matter.
As an example, consider this loop:
var current = 0
while (current < 100) {
  current = current + 1
}
I want to be able to get the value of current on any loop iteration.
A solution with an Actor would be:
class UpdatingActor extends Actor {
  var current: Int = 0

  def receive = {
    case Update =>
      current = current + 1
      if (current < 100) self ! Update
    case Query => sender ! current
  }
}
You could get rid of the var using become or FSM, but this example is clearer, IMO.
Alternatively, one actor could run the operation and send updated results on every loop iteration to another actor, whose sole responsibility is updating the value and responding to queries about it. I don't know much about "agents" in Akka, but this seems like a potential use case for one.
What are better/alternative ways of doing this using Scala? I don't need to use actors; that was just one solution that came to mind.
Your actor-based solution is ok.
Sending the intermediate result after each change to a "result provider" actor would be a good idea as well if the calculation blocks the actor for a long time and you want to make sure that you can always get the intermediate result. Another alternative would be to make the actual calculator actor a child of the actor that collects the best result. That way the thing acts as a single actor from the outside, and you have the actor that has state (the current best result) separated from the actor that does the computation, which might fail.
An agent would be a solution somewhere between the very low-level @volatile/AtomicInteger approach and an Actor. An agent is something that can only be modified by running a transform on it (and there is a queue of pending transforms), but it has a current state that can always be read. It is not location-transparent though, so stay with the actor approach if you need that.
Here is how you would solve this with an agent. You have one thread which does a long-running calculation (simulated by Thread.sleep) and another thread that just prints out the current best result at regular intervals (also simulated by Thread.sleep).
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent._
import akka.agent.Agent

object Main extends App {

  val agent = Agent(0)

  def computation(): Unit = {
    for (i <- 0 until 100) {
      agent.send { current =>
        Thread.sleep(1000) // to simulate a long-running computation
        current + 1
      }
    }
  }

  def watch(): Unit = {
    while (true) {
      println("Current value is " + agent.get)
      Thread.sleep(1000)
    }
  }

  global.execute(new Runnable {
    def run() = computation()
  })

  watch()
}
But all in all I think an actor-based solution would be superior. For example, you could run the calculation on a different machine from the one doing the result tracking.
The scope of the question is a little wide, but I'll try :)
First, your example is perfectly fine; I don't see the point of getting rid of the var. This is what actors are for: protecting mutable state.
Second, based on what you describe you don't need an actor at all.
class UpdatingActor {
  // @volatile so a reader thread calling soWhatsGoingOn sees the updates
  @volatile private var current = 0

  def startCrazyJob(): Unit = {
    while (current < 100) {
      current = current + 1
    }
  }

  def soWhatsGoingOn: Int = current
}
You just need one thread to call startCrazyJob and a second one that will periodically call soWhatsGoingOn.
IMHO, the actor approach is better, but it's up to you to decide if it's worth importing the akka library just for this use case.
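For comparison, the "very low-level @volatile/AtomicInteger approach" mentioned in the first answer would look roughly like the sketch below; it is plain Java with made-up names, just to show one thread updating while another thread polls the intermediate result:

import java.util.concurrent.atomic.AtomicInteger;

public class PollingSketch {
    // Shared, always-readable current value.
    private static final AtomicInteger current = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (current.get() < 100) {
                current.incrementAndGet(); // one "improvement" step
                try {
                    Thread.sleep(100); // simulate a slow computation
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        worker.start();

        // Periodically query the intermediate result while the worker runs.
        while (worker.isAlive()) {
            System.out.println("Current value is " + current.get());
            Thread.sleep(500);
        }
        System.out.println("Final value is " + current.get());
    }
}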

.NET Rx - ReplaySubject buffer size not working

I've been using .NET Reactive Extensions to observe log events as they come in. I'm currently using a class that implements IObservable and uses a ReplaySubject to store the logs; that way I can filter and replay the logs (for example: show me all the Error logs, or show me all the Verbose logs) without losing the logs I've buffered.
The problem is, even though I've set a buffer size on the subject:
this.subject = new ReplaySubject<LogEvent>(10);
The memory usage of my program goes through the roof when I use OnNext to add to the observable collection in an infinite loop:
internal void WatchForNewEvents()
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            dynamic parameters = new ExpandoObject();
            // TODO: Add parameters for getting specific log events
            if (this.logEventRepository.GetManyHasNewResults(parameters))
            {
                foreach (var recentEvent in this.logEventRepository.GetMany(parameters))
                {
                    this.subject.OnNext(recentEvent);
                }
            }
            // Commented this out for now to really see the memory go up
            // Thread.Sleep(1000);
        }
    });
}
Does the buffer size on ReplaySubject not work? It doesn't seem to be clearing the buffer when the buffer size is reached. Any help much appreciated!
UPDATE:
I add subscribers like this (Is this wrong?):
public IDisposable Subscribe(IObserver<LogEvent> observer)
{
    return this.subject.Subscribe(observer);
}
...which is called like:
// Inserts into UI ListView
this.logEventObservable.Subscribe(evt => this.InsertNewLogEvent(evt));
I'm not sure if this is the definitive answer, but I suspect that you're hitting an issue because of concurrency around the scheduler you're using. The constructor you're calling on ReplaySubject looks like this:
public ReplaySubject(int bufferSize)
    : this(bufferSize, TimeSpan.MaxValue, Scheduler.CurrentThread)
{ }
The Scheduler.CurrentThread worries me. Try changing it to Scheduler.ThreadPool and see if that helps.
Also, as a side note, you seem to be mixing Rx with TPL and old-fashioned thread sleeping. It's usually best to avoid doing that. You could change your WatchForNewEvents code to look like this:
dynamic parameters = new ExpandoObject();
var newEvents =
    from n in Observable.Interval(TimeSpan.FromSeconds(1.0))
    where this.logEventRepository.GetManyHasNewResults(parameters)
    from recentEvent in this.logEventRepository.GetMany(parameters).ToObservable()
    select recentEvent;
newEvents.Subscribe(this.subject);
That's a nice compact Rx-y way of doing things.

Storing and Using State in a GUI Application

I'm writing an iPhone App, and I'm finding that as I add features, predictably, the permutations of state increase dramatically.
I then find myself having to add code all over the place of the form:
If this and that and not the other then do x and y and set state z
Does anybody have suggestions for systematic approaches to deal with this?
Even though my app is iPhone, I think this applies to many GUI cases.
In general, a user interface application is always waiting for an event to happen. The event can be an action by the user (tap, shake iPhone, type letter on virtual keyboard), or by another process (network packet becomes available, battery runs out), or a time event (a timer expires). Whenever an event takes place ("if this"), you consult the current state of your application ("... and that and not the other") and then do something ("do x and y"), which most likely changes the application state ("set state z"). This is what you described in your question. And this is a general pattern.
There is no single systematic approach that gets this right for you, but since you ask for suggestions, here are some:
HINT 1: Use as few data structures and variables as possible to represent the internal state, and avoid duplicating state by all means (until you run into performance issues). This makes the "do x and y and set state z" part shorter, because the state gets set implicitly. Trivial example: instead of having (examples in C++)
if (namelen < 20) { name.append(c); namelen++; }
use
if (name.size() < 20) { name.append(c); }
The second example correctly avoids the replicated state variable 'namelen', making the action part shorter.
HINT 2: Whenever a compound condition (X and Y or Z) appears many times in your program, abstract it away into a procedure, so instead of
if ((x && y) || z) { ... }
write
bool my_condition() { return (x && y) || z; }
if (my_condition()) { ... }
HINT 3: If your user interface has a small number of clearly defined states, and the states affect how events are handled, you can represent the states as singleton instances of classes which inherit from an interface for handling those events. For example:
class UIState {
public:
    virtual void HandleShake() = 0;
};

class MainScreen : public UIState {
public:
    void HandleShake() { ... }
};

class HelpScreen : public UIState {
public:
    void HandleShake() { ... }
};
Instantiate one instance of every derived class, and then keep a pointer to the current state object:
UIState *current;
UIState *mainscreen = new MainScreen();
UIState *helpscreen = new HelpScreen();
current = mainscreen;
To handle shake then, call:
current->HandleShake();
To change UI state later:
current = helpscreen;
In this way, you can collect state-related procedures into classes, and encapsulate and abstract them away. Of course, you can add all kinds of interesting things into these state-specific (singleton) classes.
HINT 4: In general, if you have N boolean state variables and T different events that can be triggered, there are T * 2^N entries in the "matrix" of all possible events in all possible conditions. It takes an architectural view and domain expertise to identify which dimensions and areas of that matrix are most logical and natural to encapsulate into objects, and how; that is what software engineering is about. But if you try to do your project without proper encapsulation and abstraction, it won't scale far.
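To make hints 3 and 4 concrete in one more way, here is a rough sketch of the same state-object idea using a Java enum (the states and the single "shake" event are invented for the example, not taken from the question). Each overridden method is one cell of the (state x event) matrix, and events a state does not care about fall through to a default, so the matrix never has to be spelled out as nested ifs:

public class StateMachineSketch {

    // Each constant is a singleton state object.
    enum UIState {
        MAIN_SCREEN {
            @Override
            void handleShake(StateMachineSketch app) { app.current = HELP_SCREEN; }
        },
        HELP_SCREEN {
            @Override
            void handleShake(StateMachineSketch app) { app.current = MAIN_SCREEN; }
        };

        // Default behaviour for events a state ignores.
        void handleShake(StateMachineSketch app) { /* do nothing */ }
    }

    private UIState current = UIState.MAIN_SCREEN;

    void onShake() {
        current.handleShake(this); // dispatch on the current state
    }
}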