Finagle not executing asynchronously - Scala

I have a simple Finagle Thrift server:
import com.twitter.finagle.Thrift
import com.twitter.util.{Await, Future}

object Main {
  def main(args: Array[String]) {
    var count = 0
    val myserver = Thrift.serveIface("0.0.0.0:9090",
      new RealTimeDatabasePageImpressions[com.twitter.util.Future] {
        def saveOrUpdate(pageImpression: PageImpressions): com.twitter.util.Future[Boolean] = {
          count += 1
          println(count)
          com.twitter.util.Future.value(true)
        }
      })
    Await.ready(myserver)
  }
}
This server works, but I have one big problem: I wrote a Thrift Node.js client with a for loop that executes 10,000 Thrift requests. But they do not run asynchronously: about 500 requests go through, then the client stalls, and after 2 or 3 seconds another 300 or so requests are executed. Now the question: why does this happen? Is something wrong with my server or my client? I use only the Apache Thrift generated Node.js code, no wrapper. The function is executed 10,000 times, and I don't think Node.js is the problem:
function callFunc(i) {
  console.log("started executing: " + i);
  var connection = thrift.createConnection("IP", 9090, {
    transport: transport,
    protocol: protocol
  });
  connection.on('error', function (err) {
    console.log(err);
  });
  // Create a client for the Realtime_pageImpressions service on this connection
  var client = thrift.createClient(Realtime_pageImpressions, connection);
  var rand = Math.random() * (20000 - 1);
  var trackId = trackIds[Math.round(Math.random() * 10)];
  var values = new PageImpressions({
    trackId: trackId,
    day: 4,
    hour: 4,
    minute: 13,
    pageId: 'blabla',
    uniqueImpressions: Math.random() * (13000 - 1),
    sumImpressions: Math.random() * (1000450 - 1)
  });
  client.saveOrUpdate(values, function (error, message) {
    if (message) {
      console.log("Successful, got Message: " + message);
    } else {
      console.log("Error with Message: " + error);
    }
  });
  return true;
}

for (var i = 0; i < 10000; i++) {
  callFunc(i);
}

Your var count is unsynchronized. This is a very big problem, but probably not related to your performance issue.
You are also blocking a Finagle thread, which is also a big problem, but it does not matter in your mock case because there is no wait time.
Think about it this way: say you have one CPU (you probably have several, but there are other things going on on the machine as well), and you are asking it to execute 10,000 operations all at the same time.
How can this work? It will have to execute one of the requests, save the context and the stack, flush all the caches, switch to the next request, execute that one, and so on.
500 requests in 2 seconds is 4 milliseconds per request. That does not sound so bad, does it?
Also, have you tuned your GC (on both server and client)? If requests are processed in bursts followed by long pauses, that is probably a sign of a full GC kicking in.
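To make the two fixes above concrete, here is a minimal sketch (my own, assuming the same generated RealTimeDatabasePageImpressions and PageImpressions types from the question): an AtomicInteger replaces the unsynchronized var, and any real work runs on a FuturePool so the Finagle worker thread is never blocked.

import java.util.concurrent.atomic.AtomicInteger
import com.twitter.finagle.Thrift
import com.twitter.util.{Await, Future, FuturePool}

object Main {
  def main(args: Array[String]) {
    // Thread-safe counter instead of an unsynchronized var.
    val count = new AtomicInteger(0)

    val myserver = Thrift.serveIface("0.0.0.0:9090",
      new RealTimeDatabasePageImpressions[Future] {
        def saveOrUpdate(pageImpression: PageImpressions): Future[Boolean] =
          // Hand blocking or CPU-heavy work to a FuturePool so the
          // event loop stays free to accept new requests.
          FuturePool.unboundedPool {
            println(count.incrementAndGet())
            true
          }
      })

    Await.ready(myserver)
  }
}

In the mock case above this changes little, but once saveOrUpdate does real database work, keeping that work off the Finagle threads is what preserves throughput.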

Related

Running Mirth Channel with API Requests to external server very slow to process

In the question Mirth HTTP POST request with Parameters using Javascript I used something resembling the first answer; the code is below.
I'm running this code for a file that has nearly 46,000 rows, which equates to about 46,000 requests hitting our external server. I'm noticing that Mirth makes requests to our API endpoint only about 1.6 times per second. This is unusually slow, and I would like some help understanding whether it is related to Mirth or to the code above. Can repeated imports in a for loop cause slowdowns? Or is there a specific Mirth setting that limits the number of requests sent?
The Mirth version is 3.12.0.
I started the process at 2:27 PM and it's expected to finish around 8:41 PM tonight, which is ridiculously slow.
// Skip the first header row
for (i = 1; i < msg['row'].length(); i++) {
    col1 = msg['row'][i]['column1'].toString();
    col2...
    ...
    // Insert into results if the file and sample aren't already present
    InsertIntoDatabase();
}

function InsertIntoDatabase() {
    with (JavaImporter(
        org.apache.commons.io.IOUtils,
        org.apache.http.client.methods.HttpPost,
        org.apache.http.client.entity.UrlEncodedFormEntity,
        org.apache.http.impl.client.HttpClients,
        org.apache.http.message.BasicNameValuePair,
        com.google.common.io.Closer)) {

        var closer = Closer.create();
        try {
            var httpclient = closer.register(HttpClients.createDefault());
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            var postParameters = [
                new BasicNameValuePair("col1", col1),
                new BasicNameValuePair(...
                ...
            ];
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = closer.register(httpclient.execute(httpPost));
            var is = closer.register(response.entity.content);
            result = IOUtils.toString(is, 'UTF-8');
        } finally {
            closer.close();
        }
    }
    return result;
}

Netty starts channels but does not read from them in Kubernetes

netty-all:4.1.48.Final
I am having a cryptic issue with Netty that seems to show up only in Kubernetes. I have a clone of the project running on a cloud instance with fewer resources that does not have this issue. Both projects receive the same amount of traffic (I am resending the same traffic from a third provider to both Netty servers).
In Kubernetes, every time a channel is opened (I send a message) I increment my session counter. Every time the channel reads data, I increment a read counter. I am sending data every time, so I would expect to see at the very least one read for every session (more if the data were long enough) but not fewer. The counters drift apart rather smoothly until the number of reads settles at around half the number of opened sessions.
Is there any way for me to diagnose this issue? I have included the bare-bones Netty server I am using below (with the configuration, including an idle timer). Am I blocking Netty resources?
class Server {
    private val bossGroup = NioEventLoopGroup()
    private val workerGroup = NioEventLoopGroup()

    fun start() {
        ServerBootstrap()
            .group(bossGroup, workerGroup)
            .option(ChannelOption.SO_REUSEADDR, true)
            .option(ChannelOption.AUTO_CLOSE, false)
            .channel(NioServerSocketChannel::class.java)
            .option(ChannelOption.SO_KEEPALIVE, true)
            .option(ChannelOption.TCP_NODELAY, true)
            .childHandler(object : ChannelInitializer<SocketChannel>() {
                override fun initChannel(channel: SocketChannel) {
                    val idleTimeTrigger = 1
                    val idleStateHandler = IdleStateHandler(0, 0, idleTimeTrigger)
                    channel
                        .pipeline()
                        .addLast("idleStateHandler", idleStateHandler)
                        .addLast(Session(idleTimeTrigger))
                }
            })
            .bind(8888)
            .sync()
            .channel()
            .closeFuture()
            .sync()
    }
}

class Session(
    private val idleTimeTrigger: Int,
) : ChannelInboundHandlerAdapter() {

    // session counter
    val idleTimeout = 10
    var idleTickCounter = 0L

    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
        // read counter is less than session counter... HUH????
        this.idleTickCounter = 0
        try {
            val data = (msg as ByteBuf).toString(CharsetUtil.UTF_8)
            // ... do my stuff ..
            // output counter is less than session counter
        } finally {
            ReferenceCountUtil.release(msg)
        }
    }

    override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
        this.idleTickCounter++
        val idleTime = idleTimeTrigger * idleTickCounter
        if (idleTime > idleTimeout) {
            // idle timeout counter is always 0
            ctx.close()
        }
        super.userEventTriggered(ctx, evt)
    }

    override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
        // error counter is always 0
        ctx.close()
    }
}
The output is passed to a RabbitMQ AMQP client and sent to a queue. I don't know if this is relevant (with regard to resource usage), but the AMQP client uses Jetty.

How to process all events emitted by RxJava regardless of errors?

I'm using the Vert.x web framework to send a list of items to a downstream HTTP server.
records.records() emits 4 records, and I have deliberately pointed the web client at the wrong IP/port.
Processing... prints 4 times.
Exception outer! prints 3 times.
If I put back the proper IP/port, then Subscribe outer! prints 4 times.
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        System.out.println("Processing...");
        // Do stuff here....
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(...));
        Single<HttpResponse<Buffer>> request = client
            .post(..., ..., ...)
            .rxSendStream(bodyBuffer);
        return request.toFlowable();
    })
    .subscribe(record -> {
        System.out.println("Subscribe outer!");
    }, ex -> {
        System.out.println("Exception outer! " + ex.getMessage());
    });
UPDATE:
I now understand that RxJava stops right away on error. Is there a way to continue and process all records regardless, and get an error for each one?
Given this article: https://medium.com/#jagsaund/5-not-so-obvious-things-about-rxjava-c388bd19efbc
I have come up with the following. Do you see anything wrong with it?
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(inRecord.toString()));
        Single<HttpResponse<Buffer>> request = client
            .post("xxxxxx", "xxxxxx", "xxxxxx")
            .rxSendStream(bodyBuffer);
        // So we can capture how long each request took.
        final long startTime = System.currentTimeMillis();
        return request.toFlowable()
            .doOnNext(response -> {
                // Capture total time and print it with the logs. Removed below for brevity.
                long processTimeMs = System.currentTimeMillis() - startTime;
                int status = response.statusCode();
                if (status == 200)
                    logger.info("Success!");
                else
                    logger.error("Failed!");
            })
            .doOnError(ex -> {
                long processTimeMs = System.currentTimeMillis() - startTime;
                logger.error("Failed! Exception.", ex);
            })
            .doOnTerminate(() -> {
                // Do some extra stuff here...
            })
            .onErrorResumeNext(Flowable.empty()); // This will allow us to continue.
    })
    .subscribe(); // Don't handle here. We subscribe to the inner events.
Is there a way to continue and process all records regardless and get an error for each?
According to the docs, an Observable terminates as soon as it encounters an error, so you can't collect each individual error in onError.
You can use onErrorReturn or onErrorResumeNext() to tell the stream what to do when it encounters an error (e.g. emit a fallback item or switch to Flowable.empty()).

Concurrent request limit of Twitter Finagle

I create a Thrift server using Finagle like this:
val server = Thrift.serveIface(bindAddr(), new MyService[Future] {
  def myRPCFuction() {}
})
But I found that the maximum number of concurrent requests is five (why 5? when there are more than 5, the server simply ignores the excess ones). I looked through the Finagle docs really hard (http://twitter.github.io/finagle/guide/Protocols.html#thrift-and-scrooge) but found no hint about configuring a maximum request limit.
How do I configure the maximum number of concurrent requests in Finagle? Thanks.
I've solved this problem myself and I'm sharing it here to help others who may run into the same case. I was a plain Thrift user before, and in Thrift you return a value to the calling client simply by returning from the RPC function, whereas in Finagle the value only goes back to the client when you complete the Future (e.g. with Future.value()). When you use Finagle you should work fully asynchronously; that is, you had better not sleep or call another RPC synchronously inside the RPC function.
/* THIS is BAD */
val server = Thrift.serveIface(bindAddr(), new MyService[Future] {
  def myRPCFuction() {
    val rpcFuture = rpcClient.callOtherRpc() // call another RPC, which returns a Future
    // Await.result blocks the Finagle thread until the other RPC completes
    val result = Await.result(rpcFuture, TwitterDuration(rpcTimeoutSec() * 1000, MILLISECONDS))
    Future.value(result)
  }
})

/* This is GOOD */
val server = Thrift.serveIface(bindAddr(), new MyService[Future] {
  def myRPCFuction() {
    val rpcFuture = rpcClient.callOtherRpc() // call another RPC, which returns a Future
    rpcFuture onSuccess { result =>
      // do your job when it succeeds (you can return the value to the client using Future.value)
    }
    rpcFuture onFailure { e =>
      // do your job when it fails
    }
  }
})
With this change I get satisfactory concurrency. Hope it helps others who run into the same issue.
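For completeness, here is my own sketch (not from the original answer) of the same idea expressed by composing the downstream Future with map and rescue, so that its eventual value is exactly what the server hands back to the client, still without blocking:

/* Sketch: return a transformed Future directly to the client */
val server = Thrift.serveIface(bindAddr(), new MyService[Future] {
  def myRPCFuction() =
    rpcClient.callOtherRpc()             // Future from the other RPC
      .map { result =>
        // adapt the other RPC's result into this method's response
        result
      }
      .rescue { case e: Exception =>
        // translate failures into a failed (or fallback) response
        Future.exception(e)
      }
})

Either style keeps the RPC handler free of Await.result, which is what lifts the apparent five-request ceiling.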

Node.js node-apn implementation as a daemon

I have a node-apn Node.js script running as a daemon on AWS. The daemon runs fine, the script stays up, and it comes back when it goes down, but I believe I am having a synchronous execution and exit issue with Node.js. When I release the process with process.exit(), the messages are never received on the phone, even though all the console.log output says they have been sent. I decided to remove the exit and let the process "hang" after execution, and all messages were sent successfully. This led me to the following implementation using an async function, but the same result keeps happening. Can anyone provide insight into this? There are no errors being thrown from APN or anywhere else.
function closeDB()
{
    connection.end(function(err) {
        if (err) {
            console.log("ERROR: " + util.inspect(err, false, 5));
            process.exit(1);
        }
        console.log("APNS-PUSH: COMPLETED.");
    });

    setTimeout(function() { process.exit(); }, 50);
} // End of closeDB()

function apnsError(err, notification)
{
    console.log(err);
    console.log(notification);
    closeDB();
}

function async(arg, callback)
{
    apnsConnection.sendNotification(arg);
    console.log(arg);
    setTimeout(function() { callback(1); }, 100);
}

/**
 * Our MySQL query callback.
 */
function queryCB(err, results)
{
    // error in our call, report and exit
    if (err) {
        console.log("ERROR: " + util.inspect(err, false, 5));
        closeDB();
    }

    if (results.length == 0) {
        closeDB();
    }

    var notes = [];
    var count = 0;

    try {
        for (var i = 0; i < results.length; i++) {
            var myDevice = new apns.Device(results[i]['udid']);

            var note = new apns.Notification();
            note.expiry = Math.floor(Date.now() / 1000) + 3600; // Expires 1 hour from now.
            note.badge = results[i]["notification_count"];
            note.sound = "ping.aiff";
            note.alert = results[i]["message"];
            note.device = myDevice;

            connection.query('UPDATE `tbl_notifications` SET `sent`=1 WHERE `id`=' + results[i]["id"], function(err, results) {
                if (err) {
                    console.log("ERROR: " + util.inspect(err, false, 5));
                }
            });

            notes.push(note);
        }
    } catch (err) {
        console.log('error: ' + err);
    }

    console.log(notes.length);

    notes.forEach(function(nNode) {
        async(nNode, function(result) {
            count++;
            if (count == notes.length) {
                closeDB();
            }
        });
    });
} // End of queryCB()
I had the same problem where killing the process also killed the open socket connections and didn't allow the notifications to be sent. The solution I came up with isn't ideal, but it will work in your situation as well. I looked into the node-apn code and found that the Connection object inherits from EventEmitter, so you can monitor events on the object like so:
var apnsConnection = new apn.Connection(options);

apnsConnection.sendNotification(notification);

apnsConnection.on('transmitted', function() {
    console.log("Transmitted");
    callback();
});

apnsConnection.on('error', function() {
    console.log("Error");
    callback();
});
This monitors the socket that the notification is sent through, so I don't know how accurate it is at determining when a notification has successfully been handed off to Apple's APNS servers, but it has worked pretty well for me.
The reason you are seeing this problem is that when you use #pushNotification it buffers the notification inside the module and handles sending it asynchronously.
Listening for "transmitted" is valid and this is emitted when the notification has been written to the socket. However, if your objective is to close the socket after all notifications have been sent then the easiest way to accomplish this is using the connectionTimeout property when creating your connection.
Simply set connectionTimeout to something around 1000 (milliseconds) and, assuming you have no other connections open, the process will exit automatically. Or you can set an event listener on the timeout event and call process.exit() from there.