How to break out of a loop in Project Reactor? - reactive-programming

I want to write some code using Reactor, like this:
String callRemote(String server) {
    // remote call
    ...
}

List<String> servers = ...
String result = null;
for (String server : servers) {
    try {
        result = callRemote(server); // should be called in sequence
        if (result.equals("success")) {
            break;
        }
    } catch (TimeoutException e) { // timeout control
        //
    }
}
At first I thought Flux.takeWhile would be a good choice, but it is hard to control the timeout for every callRemote.
Then I tried Mono.zipWhen and Mono.then, but I can't break the execution chain.

You may use filterWhen():
// return async first success
return Flux.fromIterable(servers)
    .flatMap(this::connect)
    .filterWhen(result -> Mono.just(result.equalsIgnoreCase("success")));
Check the Javadoc of filterWhen().
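If you also need the calls to stay sequential and to time out individually, as in the question, a minimal sketch along these lines may help. Here callRemoteAsync is a hypothetical wrapper around the blocking callRemote, and the two-second timeout is an arbitrary example:
import java.time.Duration;
import java.util.List;
import java.util.concurrent.TimeoutException;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// hypothetical wrapper: runs the blocking callRemote off the caller's thread
Mono<String> callRemoteAsync(String server) {
    return Mono.fromCallable(() -> callRemote(server))
               .subscribeOn(Schedulers.boundedElastic());
}

Mono<String> firstSuccess(List<String> servers) {
    return Flux.fromIterable(servers)
        .concatMap(server -> callRemoteAsync(server)   // concatMap keeps the calls sequential
            .timeout(Duration.ofSeconds(2))            // per-call timeout control
            .onErrorResume(TimeoutException.class,
                           e -> Mono.empty()))         // a timed-out server is simply skipped
        .filter(result -> result.equals("success"))
        .next();                                       // cancels the rest once a success arrives
}
The .next() completes the chain at the first success, which is the reactive equivalent of the break in the question's loop.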

Related

Vert.x: How to wait for a future to complete

Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case: querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    }
    else {
        ...
        dbFut.fail(res.cause());
    }
});
// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
}
else {
    error();
}
I know the doSomethingWith(dbFut.result()); call can be moved into the handler, yet if it's long, the code gets unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that RxJava simplifies the code, but as I don't know it, learning Vert.x and RxJava at the same time is just too much.
I also wanted to give vertx-sync a try. I put the dependency in the pom.xml; everything downloaded fine, but when I started my app I got the following error:
maurice@mickey> java \
-javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
-jar target/app-dev-0.1-fat.jar \
-conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but I don't know what it means in this context... I tried to google it but didn't find any clear explanation about which manifest to put where. As before, unless it's mandatory, I prefer to learn one thing at a time.
So, back to the question: is there a way with "basic" Vert.x to wait for a future without disturbing the event loop?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    }
    else {
        ...
        dbFut.fail(res.cause());
    }
});
dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
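If the continuation itself is long, you can also flatten successive steps with compose instead of nesting handlers. A minimal sketch, where anotherAsyncStep is a hypothetical follow-up that returns a Future:
dbFut.compose(result -> {
    doSomethingWith(result);           // from the question
    return anotherAsyncStep(result);   // hypothetical next async stage
}).setHandler(ar -> {
    if (ar.failed()) {
        error();                       // one failure handler for the whole chain
    }
});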
I agree that you should not block in the Vert.x processing pipeline, but I make one exception to that rule: start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T>
T awaitComplete(Future<T> f)
throws Throwable
{
    final Object lock = new Object();
    final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
    synchronized (lock)
    {
        // We *must* hold the lock before registering the callback.
        // If the result is ready, the callback is called immediately!
        f.onComplete(
            (AsyncResult<T> result) ->
            {
                resultRef.set(result);
                synchronized (lock) {
                    lock.notify();
                }
            });
        // Check before waiting: if the callback has already run, waiting would hang forever.
        // A spurious wake-up before resultRef is set simply loops and waits again.
        // Ref: https://stackoverflow.com/a/249907/257299
        while (null == resultRef.get())
        {
            // @Blocking
            lock.wait();
        }
    }
    final AsyncResult<T> result = resultRef.get();
    @Nullable
    final Throwable t = result.cause();
    if (null != t) {
        throw t;
    }
    @Nullable
    final T x = result.result();
    return x;
}

How to process all events emitted by RX Java regardless of error?

I'm using the Vert.x web framework to send a list of items to a downstream HTTP server.
records.records() emits 4 records, and I have specifically set the web client to connect to the wrong IP/port.
Processing... prints 4 times.
Exception outer! prints 3 times.
If I put back the proper IP/port, then Subscribe outer! prints 4 times.
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        System.out.println("Processing...");
        // Do stuff here....
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(...));
        Single<HttpResponse<Buffer>> request = client
            .post(..., ..., ...)
            .rxSendStream(bodyBuffer);
        return request.toFlowable();
    })
    .subscribe(record -> {
        System.out.println("Subscribe outer!");
    }, ex -> {
        System.out.println("Exception outer! " + ex.getMessage());
    });
UPDATE:
I now understand that on error Rx stops right away. Is there a way to continue and process all records regardless, and get an error for each?
Given this article: https://medium.com/@jagsaund/5-not-so-obvious-things-about-rxjava-c388bd19efbc
I have come up with this... Unless you see something wrong with this?
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(inRecord.toString()));
        Single<HttpResponse<Buffer>> request = client
            .post("xxxxxx", "xxxxxx", "xxxxxx")
            .rxSendStream(bodyBuffer);
        // So we can capture how long each request took.
        final long startTime = System.currentTimeMillis();
        return request.toFlowable()
            .doOnNext(response -> {
                // Capture total time and print it with the logs. Removed below for brevity.
                long processTimeMs = System.currentTimeMillis() - startTime;
                int status = response.statusCode();
                if (status == 200)
                    logger.info("Success!");
                else
                    logger.error("Failed!");
            }).doOnError(ex -> {
                long processTimeMs = System.currentTimeMillis() - startTime;
                logger.error("Failed! Exception.", ex);
            }).doOnTerminate(() -> {
                // Do some extra stuff here...
            }).onErrorResumeNext(Flowable.empty()); // This will allow us to continue.
    })
    .subscribe(); // Don't handle here. We subscribe to the inner events.
Is there a way to continue and process all records regardless and get an error for each?
According to the docs, the observable is terminated as soon as it encounters an error, so you can't collect each individual error in onError.
You can use onErrorReturn or onErrorResumeNext() to tell the upstream what to do when it encounters an error (e.g. fall back to a default item or to Flowable.empty()).
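If you want one success-or-error outcome per record rather than silently dropping failures, you can fold each error into a value. A minimal sketch, assuming RxJava 2 and a small hypothetical Outcome holder (not part of RxJava):
// Hypothetical holder: pairs each record with either its response or its error.
final class Outcome {
    final Object record;
    final HttpResponse<Buffer> response; // null when the request failed
    final Throwable error;               // null when the request succeeded
    Outcome(Object record, HttpResponse<Buffer> response, Throwable error) {
        this.record = record;
        this.response = response;
        this.error = error;
    }
}

io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> client
        .post("xxxxxx", "xxxxxx", "xxxxxx")
        .rxSendStream(Observable.just(Buffer.buffer(inRecord.toString())))
        .map(response -> new Outcome(inRecord, response, null))
        .onErrorReturn(ex -> new Outcome(inRecord, null, ex)) // the error becomes a value
        .toFlowable())
    .subscribe(outcome -> {
        if (outcome.error == null) {
            logger.info("Success: " + outcome.record);
        } else {
            logger.error("Failed: " + outcome.record, outcome.error);
        }
    });
Because every inner Single now succeeds with an Outcome, the outer stream never terminates early and you still see exactly which records failed and why.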

Bulk inserts with EntityFramework 4.0 causes abort of transaction

We are receiving a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10,000 entries or more), I get the following error (MSSQLEXPRESS):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I have tried a lot (setting the TransactionOptions timeout and so on), but nothing works. The exception above is raised sometimes after 3,000 and sometimes after 6,000 processed objects, but I never succeed in processing all of them.
I append my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile(xxxRequest request)
{
    const int INTERMEDIATE_SAVE = 100;
    using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
    {
        // start a new transaction scope with a timeout of 0
        // (unlimited time, for development purposes)
        using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions
            {
                IsolationLevel = System.Transactions.IsolationLevel.Serializable,
                Timeout = TimeSpan.FromSeconds(0)
            }))
        {
            try
            {
                // open the connection manually to prevent an undesired close of the DB
                // (MSDTC)
                context.Connection.Open();
                int timeout = context.Connection.ConnectionTimeout;
                int Counter = 0;
                // read the file submitted by the client
                using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
                {
                    try
                    {
                        while (!reader.EndOfStream)
                        {
                            Counter++;
                            string line = reader.ReadLine();
                            if (String.IsNullOrEmpty(line)) continue;
                            // create a new object
                            DomainModel.LogEntry le = CreateLogEntryObject(line);
                            // attach it to the context and set its state to Added
                            context.AttachTo("LogEntry", le);
                            context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);
                            // while fewer than 100 objects are attached, go on
                            if (Counter != INTERMEDIATE_SAVE) continue;
                            // after 100 objects, make a call to SaveChanges
                            context.SaveChanges(SaveOptions.None);
                            Counter = 0;
                        }
                    }
                    catch (Exception exception)
                    {
                        // cleanup
                        reader.Close();
                        transactionScope.Dispose();
                        throw exception;
                    }
                }
                // do a final SaveChanges
                context.SaveChanges();
                transactionScope.Complete();
                context.Connection.Close();
            }
            catch (Exception e)
            {
                // cleanup
                transactionScope.Dispose();
                context.Connection.Close();
                throw e;
            }
        }
        var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
        return response;
    }
}
There is no bulk insert in Entity Framework. You call SaveChanges after 100 records, but it still executes 100 separate inserts, with a database round trip for each one.
Setting the transaction's timeout is also limited by the maximum transaction timeout, which is configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with a bulk insert.
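A minimal SqlBulkCopy sketch of that idea; the LogEntry table, its columns, the le.Message/le.CreatedAt properties, and the lines collection of parsed lines are all placeholders for your actual schema and code:
// requires System.Data and System.Data.SqlClient
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // stage the parsed lines in a DataTable matching the target schema
    var table = new DataTable();
    table.Columns.Add("Message", typeof(string));      // placeholder column
    table.Columns.Add("CreatedAt", typeof(DateTime));  // placeholder column
    foreach (var line in lines)
    {
        var le = CreateLogEntryObject(line);           // from the question's code
        table.Rows.Add(le.Message, le.CreatedAt);      // placeholder properties
    }

    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "LogEntry";
        bulkCopy.BatchSize = 1000;                     // send rows in batches of 1000
        bulkCopy.WriteToServer(table);                 // one bulk operation, no per-row round trips
    }
}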
Btw. throw exception; and throw e;? That is the incorrect way to rethrow exceptions.
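In C#, the bare rethrow is what preserves the original stack trace:
catch (Exception e)
{
    // cleanup ...
    throw;      // correct: rethrows and keeps the original stack trace
    // throw e; // wrong: resets the stack trace to this line
}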
Important edit:
SaveChanges(SaveOptions.None) !!! means do not accept changes after saving, so all records stay in the Added state. Because of that, the first call to SaveChanges inserts the first 100 records, the second call inserts the first 100 again plus the next 100, the third call inserts the first 200 plus the next 100, and so on.
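A minimal sketch of the fix: either accept the changes explicitly after each intermediate save, or just use the default overload, which accepts them for you.
// accept changes explicitly after each intermediate save...
context.SaveChanges(SaveOptions.None);
context.AcceptAllChanges();

// ...or simply use the default overload, which accepts changes automatically
context.SaveChanges();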
I had exactly the same issue. My EF code inserted records in bulk, 1,000 at a time.
It worked from the beginning, apart from a little problem with MSDTC, which I configured to allow remote clients and admin access; after that it was OK. I did a lot of work with this, but one day it JUST STOPPED WORKING.
I am getting
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
VERY WEIRD! Sometimes the error changes. My suspect is MSDTC somehow; strange behavior.
I am now changing the code to stop using TransactionScope!
I hate it when something works and then just stops. I also tried to run this in a VM, another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
    List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
    var str = LeTicks(ref fhTicks, ref Ticks);
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions()
        {
            IsolationLevel = System.Transactions.IsolationLevel.Serializable,
            Timeout = TimeSpan.FromSeconds(180)
        }))
    {
        ForexEF.EUR_TICKSContext contexto = null;
        try
        {
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
            int count = 0;
            foreach (var tick in Ticks)
            {
                count++;
                contexto = AddToContext(contexto, tick, count, 1000, true);
            }
            contexto.SaveChanges();
        }
        finally
        {
            if (contexto != null)
                contexto.Dispose();
        }
        scope.Complete();
    }
}

private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto,
    ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
    contexto.Set<ForexEF.Entities.Tick>().Add(tick);
    if (count % commitCount == 0)
    {
        contexto.SaveChanges();
        if (recreateContext)
        {
            contexto.Dispose();
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
        }
    }
    return contexto;
}
It times out due to the TransactionScope default maximum timeout; check machine.config for that.
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/
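If you genuinely need longer transactions, the machine-wide cap can be raised in machine.config; a sketch, where the 30-minute value is only an example:
<!-- machine.config, under <configuration> -->
<system.transactions>
    <machineSettings maxTimeout="00:30:00" />
</system.transactions>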

method returns error code or what is the best way to do that

I have a class like this:
class myclass {
    public function save($params) {
        // some operations
        // possible error
        return false;
        // some more code
        // possible error
        return false;
        // more code
        // if everything is ok
        return true;
    }
}
But what is the best way to report errors? One idea is to make the class return numbers, for example:
public function save($params) {
    // some operations
    // some error with the db
    return 1;
    // more code
    // some error with a table
    return 2;
    // more code
    // if everything is ok
    return 0;
}
and when I call this function, use a switch to display the errors:
$obj = new myclass();
$err = $obj->save($params);
switch ($err) {
    case 1: echo 'error with the db'; break;
    case 2: echo 'error with some table'; break;
    default: echo 'object saved!';
}
Is this the best way to write this, or is there another way?
Many programming languages give you the option to throw and catch exceptions. Where available, this is generally a better error handling model.
class MyClass {
    public function save($params) {
        // some operations
        if ($possibleError) {
            throw new SomeException("reason");
        }
    }
}

// client code
try {
    $obj->save($params);
} catch (SomeException $e) {
    // log, recover, abort, ...
}
Another advantage of exceptions is that they (at least in some languages) give you access to a stack trace as well as the message.
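In PHP, for instance, the caught exception exposes both; a minimal sketch reusing the SomeException placeholder from above:
try {
    $obj->save($params);
} catch (SomeException $e) {
    error_log($e->getMessage());        // the human-readable reason
    error_log($e->getTraceAsString());  // the full stack trace
}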

Monotouch data sync - why does my code sometimes cause sqlite errors?

I have the following calls (actually a few more than this - it's the overall method that's in question here):
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshEventData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshLocationData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshActData);
The first point is: is it OK to call methods that invoke WCF services like this? I tried daisy-chaining them and it was a mess.
An example of one of the refresh methods being called above is (they all follow the same pattern, just call different services and populate different tables):
public void RefreshEventData (object state)
{
    Console.WriteLine ("in RefreshEventData");
    var eservices = new AppServicesClient (new BasicHttpBinding (), new EndpointAddress (this.ServciceUrl));
    // default the delta to an old date so that if this is the first run we get everything
    var eventsLastUpdated = DateTime.Now.AddDays (-100);
    try {
        eventsLastUpdated = (from s in GuideStar.Data.Database.Main.Table<GuideStar.Data.Event> ()
                             orderby s.DateUpdated descending
                             select s).ToList ().FirstOrDefault ().DateUpdated;
    } catch (Exception ex1) {
        Console.WriteLine (ex1.Message);
    }
    try {
        eservices.GetAuthorisedEventsWithExtendedDataAsync (this.User.Id, this.User.Password, eventsLastUpdated);
    } catch (Exception ex) {
        Console.WriteLine ("error updating events: " + ex.Message);
    }
    eservices.GetAuthorisedEventsWithExtendedDataCompleted += delegate(object sender, GetAuthorisedEventsWithExtendedDataCompletedEventArgs e) {
        try {
            List<Event> newEvents = e.Result.ToList ();
            GuideStar.Data.Database.Main.EventsAdded = e.Result.Count ();
            lock (GuideStar.Data.Database.Main) {
                GuideStar.Data.Database.Main.Execute ("BEGIN");
                foreach (var s in newEvents) {
                    GuideStar.Data.Database.Main.InsertOrUpdateEvent (new GuideStar.Data.Event {
                        Name = s.Name,
                        DateAdded = s.DateAdded,
                        DateUpdated = s.DateUpdated,
                        Deleted = s.Deleted,
                        StartDate = s.StartDate,
                        Id = s.Id,
                        Lat = s.Lat,
                        Long = s.Long
                    });
                }
                GuideStar.Data.Database.Main.Execute ("COMMIT");
                LocationsCount = 0;
            }
        } catch (Exception ex) {
            Console.WriteLine ("error InsertOrUpdateEvent " + ex.Message);
        } finally {
            OnDatabaseUpdateStepCompleted (EventArgs.Empty);
        }
    };
}
OnDatabaseUpdateStepCompleted just increments an updateComplete counter each time it's called; when it knows that all of the services have come back OK, it removes the waiting spinner and the app carries on.
This works OK the first time around, but then sometimes it fails with one of these: http://monobin.com/__m6c83107d
I think the first question is: is all this OK? I'm not used to threading and locks, so this is new ground for me. Is using QueueUserWorkItem like this OK? Should I even be using lock before doing the bulk insert/update? An example of which:
public void InsertOrUpdateEvent(Event festival) {
    try {
        if (!festival.Deleted) {
            Main.Insert(festival, "OR REPLACE");
        } else {
            Main.Delete<Event>(festival);
        }
    } catch (Exception ex) {
        Console.WriteLine("InsertOrUpdateEvent failed: " + ex.Message);
    }
}
Then the next question is: what am I doing wrong that is causing these SQLite issues?
SQLite is not thread-safe.
If you want to access SQLite from more than one thread, you must take a lock before you access any SQLite-related structures.
Like this:
lock (db) {
    // Do your query or insert here
}
Sorry, no specific answers, but some thoughts:
Is SQLite even thread-safe? I'm not sure; it may be that it's not (or the wrapper isn't). Can you lock on a more global object, so no two threads are inserting at the same time?
It's possible that the MonoTouch GC is getting a little overenthusiastic and releasing your string before it's been used. Maybe keep a local reference to it around during the insert? I've had this happen with view controllers, where I had them in an array (tab controllers, specifically), but if I didn't keep a member variable around with the reference, they got GC'ed.
Could you get the data in a threaded manner, then queue everything up and insert it from a single thread? At least as a test, anyway.
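A minimal sketch of that single-writer idea (the method names here are hypothetical; the key point is that only the writer thread ever touches SQLite):
// requires System.Collections.Generic and System.Threading
readonly Queue<GuideStar.Data.Event> pending = new Queue<GuideStar.Data.Event>();
readonly object queueLock = new object();

void EnqueueEvent(GuideStar.Data.Event e) // called from the service callbacks
{
    lock (queueLock) {
        pending.Enqueue(e);
        Monitor.Pulse(queueLock);         // wake the writer
    }
}

void WriterLoop()                         // runs on a single background thread
{
    while (true) {
        GuideStar.Data.Event e;
        lock (queueLock) {
            while (pending.Count == 0)
                Monitor.Wait(queueLock);  // sleep until something is queued
            e = pending.Dequeue();
        }
        GuideStar.Data.Database.Main.InsertOrUpdateEvent(e); // SQLite access on one thread only
    }
}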