Has anyone experienced slow responses when using GraphQL?
This is my code in the resolver:
getActiveCaseWithActiveProcess() {
  console.log("getActiveCaseWithActiveProcess");
  var result = [];

  // Up to 200 active "signal" elements for the three signal refs we care about.
  var activeElements = ActiveElements.find({
    type: "signal",
    $or: [
      { signalRef: "start-process" },
      { signalRef: "start-task" },
      { signalRef: "close-case" }
    ]
  }, { limit: 200 }).fetch();

  for (var AE of activeElements) {
    // Note: this issues one extra count() query per element.
    var checkAECount = ActiveElements.find({ caseId: AE.caseId }).count();
    if (checkAECount <= 3) {
      console.log('caseId: ' + AE.caseId);
      // Only push each caseId once.
      var alreadyInResult = result.some(obj => obj.caseId === AE.caseId);
      if (!alreadyInResult) {
        result.push({
          caseId: AE.caseId,
          caseStart: AE.createdDate
        });
      }
    }
  }

  console.log("loaded successfully");
  return result;
}
My collection actually holds quite a lot of data, approximately 20,000 records. However, when I load this, the response is too slow, and the load can restart by itself, which makes the response take even longer.
I20160812-04:07:25.968(0)? caseId: CASE-0000000284,
I20160812-04:07:26.890(0)? caseId: CASE-0000000285
I20160812-04:07:28.200(0)? caseId: CASE-0000000285
I20160812-04:07:28.214(0)? getActiveCaseWithActiveProcess
I20160812-04:07:28.219(0)? caseId: CASE-0000000194
I20160812-04:07:29.261(0)? caseId: CASE-0000000197
As you can notice from the log above, at 20160812-04:07:28.214 the server starts loading from the beginning again, and that's why the response takes even longer.
This does not always happen. It only happens when the server is loading slowly; when the server loads fast, everything runs smoothly.
Not really enough information to answer that question here, but my guess would be that it has nothing to do with GraphQL. I think your client just cancels the request and makes another one because the first one timed out. You can find out if that happens by logging requests to your server before they're passed to GraphQL.
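For example, if the endpoint is served through Express, a small logging middleware placed in front of the GraphQL handler would show whether the client really is sending duplicate requests. This is just a sketch; the /graphql path and the Express setup are assumptions, not taken from the question.
// Hypothetical Express setup; adjust to however your GraphQL endpoint is mounted.
var express = require('express');
var app = express();

// Log every incoming request before it reaches GraphQL, so a client that
// times out and retries shows up as two separate entries.
app.use('/graphql', function (req, res, next) {
    console.log(new Date().toISOString(), req.method, req.originalUrl);
    next();
});

// ... mount the GraphQL handler on /graphql below this middleware ...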
In the question Mirth HTTP POST request with Parameters using Javascript I used a variation of the first answer. The code is shown below.
I'm running this code against a file that has nearly 46,000 rows, which equates to about 46,000 requests hitting our external server. I'm seeing Mirth make requests to our API endpoint about 1.6 times per second, which is unusually slow, and I would like help understanding whether this is related to Mirth or to the code above. Can repeated imports in a for loop cause slowdowns? Or is there a specific Mirth setting that limits the number of requests sent?
Version of Mirth is 3.12.0
I started the process at 2:27 PM and it's expected to finish at almost 8:41 PM tonight, which is ridiculously slow.
//Skip the first header row
for (i = 1; i < msg['row'].length(); i++) {
    col1 = msg['row'][i]['column1'].toString();
    col2...
    ...
    //Insert into results if the file and sample aren't already present
    InsertIntoDatabase();
}
function InsertIntoDatabase() {
    with (JavaImporter(
        org.apache.commons.io.IOUtils,
        org.apache.http.client.methods.HttpPost,
        org.apache.http.client.entity.UrlEncodedFormEntity,
        org.apache.http.impl.client.HttpClients,
        org.apache.http.message.BasicNameValuePair,
        com.google.common.io.Closer)) {
        var closer = Closer.create();
        try {
            var httpclient = closer.register(HttpClients.createDefault());
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            var postParameters = [
                new BasicNameValuePair("col1", col1),
                new BasicNameValuePair(...
                ...
            ];
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = closer.register(httpclient.execute(httpPost));
            var is = closer.register(response.entity.content);
            result = IOUtils.toString(is, 'UTF-8');
        } finally {
            closer.close();
        }
    }
    return result;
}
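One thing worth checking, as a rough sketch rather than a verified fix: create the JavaImporter block and the HttpClient once, outside the row loop, and reuse them for every request, so each of the ~46,000 iterations only pays for the POST itself. The endpoint and column names below are the same placeholders as above.
// Sketch only: reuse one HttpClient for the whole file instead of creating
// and closing a new one per row.
with (JavaImporter(
    org.apache.commons.io.IOUtils,
    org.apache.http.client.methods.HttpPost,
    org.apache.http.client.entity.UrlEncodedFormEntity,
    org.apache.http.impl.client.HttpClients,
    org.apache.http.message.BasicNameValuePair)) {
    var httpclient = HttpClients.createDefault();
    try {
        //Skip the first header row
        for (var i = 1; i < msg['row'].length(); i++) {
            var col1 = msg['row'][i]['column1'].toString();
            var postParameters = [new BasicNameValuePair("col1", col1)];
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = httpclient.execute(httpPost);
            try {
                var result = IOUtils.toString(response.entity.content, 'UTF-8');
                // ... use result ...
            } finally {
                response.close();
            }
        }
    } finally {
        httpclient.close();
    }
}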
I'm using the Vert.x web framework to send a list of items to a downstream HTTP server.
records.records() emits 4 records, and I have deliberately set the web client to connect to the wrong IP/port.
Processing... prints 4 times.
Exception outer! prints 3 times.
If I put back the correct IP/port, then Subscribe outer! prints 4 times.
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        System.out.println("Processing...");
        // Do stuff here....
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(...));
        Single<HttpResponse<Buffer>> request = client
            .post(..., ..., ...)
            .rxSendStream(bodyBuffer);
        return request.toFlowable();
    })
    .subscribe(record -> {
        System.out.println("Subscribe outer!");
    }, ex -> {
        System.out.println("Exception outer! " + ex.getMessage());
    });
UPDATE:
I now understand that Rx stops right away on error. Is there a way to continue and process all records regardless, and get an error for each one?
Based on this article: https://medium.com/@jagsaund/5-not-so-obvious-things-about-rxjava-c388bd19efbc
I have come up with this... Do you see anything wrong with it?
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(inRecord.toString()));
        Single<HttpResponse<Buffer>> request = client
            .post("xxxxxx", "xxxxxx", "xxxxxx")
            .rxSendStream(bodyBuffer);
        // So we can capture how long each request took.
        final long startTime = System.currentTimeMillis();
        return request.toFlowable()
            .doOnNext(response -> {
                // Capture total time and print it with the logs. Removed below for brevity.
                long processTimeMs = System.currentTimeMillis() - startTime;
                int status = response.statusCode();
                if (status == 200)
                    logger.info("Success!");
                else
                    logger.error("Failed!");
            }).doOnError(ex -> {
                long processTimeMs = System.currentTimeMillis() - startTime;
                logger.error("Failed! Exception.", ex);
            }).doOnTerminate(() -> {
                // Do some extra stuff here...
            }).onErrorResumeNext(Flowable.empty()); // This will allow us to continue.
    })
    .subscribe(); // Don't handle here. We subscribe to the inner events.
"Is there a way to continue and process all records regardless and get an error for each?"
According to the docs, the observable is terminated as soon as it encounters an error, so you can't receive each individual error in onError.
You can use onErrorReturn or onErrorResumeNext() to tell the stream what to do when it encounters an error (e.g. emit a fallback item, or continue with Flowable.empty()).
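For reference, the same idea sketched in RxJS (hypothetical names, not the asker's Vert.x client): catch inside the inner observable and emit a fallback value, so the outer stream keeps going and you still get one result, success or error, per record.
// Hypothetical RxJS 4 sketch: sendRequest(record) is assumed to return an observable.
var results = Rx.Observable.fromArray(records)
    .flatMap(function (record) {
        return sendRequest(record)
            .map(function (response) { return { record: record, response: response }; })
            // Turn the error into a value so the outer flatMap is not terminated.
            .catch(function (err) { return Rx.Observable.just({ record: record, error: err }); });
    });
results.subscribe(function (r) {
    if (r.error) { console.error('Failed: ' + r.error.message); }
    else { console.log('OK'); }
});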
I am using Akka HTTP to return a response in the form of a String [30-40 MB].
When I deploy my Akka HTTP server and make a request to fetch the data from some URI, it gets stuck every time after several MB and stops fetching the complete response.
Is there any way to return my whole large response without it getting stuck partway through?
HttpResponse(StatusCodes.OK, entity = myLargeResponseAsString)
Thanks
Maybe this could help:
path("yourpath") {
get {
complete {
val str2 = scala.io.Source.fromFile("/tmp/t.log", "UTF8").mkString
val str = Source.single(ByteString(str2))
HttpResponse(entity = HttpEntity.Chunked.fromData(ContentTypes.`text/plain(UTF-8)`, str))
}
}
I have a paged interface. Given a starting point, a request will produce a list of results and a continuation indicator.
I've created an observable that is built by constructing and flat-mapping an observable that reads the page. The result of this observable contains both the data for the page and a value to continue with. I pluck the data and flat-map it to the subscriber, producing a stream of values.
To handle the paging I've created a subject for the next-page values. It's seeded with an initial value, and each time I receive a response with a valid next page I push to the pages subject and trigger another read, until there is nothing more to read.
Is there a more idiomatic way of doing this?
function records(start = 'LATEST', limit = 1000) {
    let pages = new rx.Subject();

    this.connect(start)
        .subscribe(page => pages.onNext(page));

    let records = pages
        .flatMap(page => {
            return this.read(page, limit)
                .doOnNext(result => {
                    let next = result.next;
                    if (next === undefined) {
                        pages.onCompleted();
                    } else {
                        pages.onNext(next);
                    }
                });
        })
        .pluck('data')
        .flatMap(data => data);

    return records;
}
That's a reasonable way to do it. It has a couple of potential flaws in it (that may or may not impact you depending upon your use case):
You provide no way to observe any errors that occur in this.connect(start)
Your observable is effectively hot. If the caller does not immediately subscribe to the observable (perhaps they store it and subscribe later), then they'll miss the completion of this.connect(start) and the observable will appear to never produce anything.
You provide no way to unsubscribe from the initial connect call if the caller changes its mind and unsubscribes early. Not a real big deal, but usually when one constructs an observable, one should try to chain the disposables together so it all cleans up properly if the caller unsubscribes.
Here's a modified version:
It passes errors from this.connect to the observer.
It uses Observable.create to create a cold observable that only starts its business when the caller actually subscribes, so there is no chance of missing the initial page value and stalling the stream.
It combines the this.connect subscription disposable with the overall subscription disposable.
Code:
function records(start = 'LATEST', limit = 1000) {
    return Rx.Observable.create(observer => {
        let pages = new Rx.Subject();
        let connectSub = new Rx.SingleAssignmentDisposable();
        let resultsSub = new Rx.SingleAssignmentDisposable();
        let sub = new Rx.CompositeDisposable(connectSub, resultsSub);

        // Make sure we subscribe to pages before we issue this.connect()
        // just in case this.connect() finishes synchronously (possible if it caches values or something?)
        let results = pages
            .flatMap(page => this.read(page, limit))
            .doOnNext(r => r.next !== undefined ? pages.onNext(r.next) : pages.onCompleted())
            .flatMap(r => r.data);
        resultsSub.setDisposable(results.subscribe(observer));

        // now query the first page
        connectSub.setDisposable(this.connect(start)
            .subscribe(p => pages.onNext(p), e => observer.onError(e)));

        return sub;
    });
}
Note: I've not used the ES6 syntax before, so hopefully I didn't mess anything up here.
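A hypothetical usage sketch; the object that provides connect() and read() is assumed and called client here.
// Hypothetical consumer of the cold observable returned above.
var stream = records.call(client, 'LATEST', 500);
var subscription = stream.subscribe(
    function (record) { console.log('record', record); },
    function (err) { console.error('stream failed', err); },
    function () { console.log('all pages read'); }
);
// Because the observable is cold, disposing early also disposes the connect()
// subscription through the CompositeDisposable returned from Observable.create:
// subscription.dispose();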
I wrote a piece of code that performs a request to Facebook.
I've now wrapped this code in an infinite loop which sends those requests every 10 seconds using timeouts.
Code:
var poll = function (socket, userProvider) {
    var lastCallTime = new Date();
    var polling = true;

    // The stream itself, non-blocking
    function performPoll() {
        var results = feed(function (err, data) {
            lastCallTime = new Date();
            // PROCESS DATA
            // Check new posts
            if (polling) {
                setTimeout(performPoll, 1000 * 10);
            }
        });
    }

    // Start infinite loop
    performPoll();
};
feed(cb) just performs a request to Facebook for data. This works 100% and does what I want it to do; the only problem I'm having now is that this piece of code keeps increasing my memory usage. After a few minutes it has already increased by 50 MB (from 50 to 100).
Can anybody help me identify the cause of this?
V8 does not collect memory immediately. If it stabilizes at 100 MB, then this is to be expected. For more information, check out nodejs setTimeout memory leak?
If you really, really want to clear the memory, use global.gc(). Read this blog about how to call the garbage collector manually.
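Note that global.gc() only exists when Node is started with the --expose-gc flag, so guard the call:
// Start the process with: node --expose-gc app.js
if (typeof global.gc === 'function') {
    global.gc(); // force a collection
    console.log('heapUsed after manual GC:', process.memoryUsage().heapUsed);
} else {
    console.log('Run node with --expose-gc to enable manual GC');
}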