TTFB High for GWT-RPC Calls

We are using the GWT framework for our front end and are encountering a problem with one of the RPCs: its TTFB (time to first byte) is very high. The backend logs show that the data fetch takes about 600 ms, yet in Chrome the call takes more than 2 seconds (3.23 s to be precise), with a TTFB of 2.90 s.
PS: We are using GWT version 2.8.0.
Can someone please explain this discrepancy? Any tips and suggestions on how to reduce TTFB for GWT RPCs would also be appreciated; we are having a hard time here.
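One way to narrow down where the time goes is to measure the request at the server's edge and compare that number with the 600 ms from the backend logs. The following is only a minimal sketch, not a GWT facility: it assumes a standard servlet container, and the filter class name and the idea of mapping it to the RPC servlet's URL pattern are my own additions.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical timing filter; map it to the GWT-RPC servlet's URL pattern in web.xml.
public class RpcTimingFilter implements Filter {
    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        chain.doFilter(req, res); // runs the actual RPC servlet, including response serialization
        long ms = (System.nanoTime() - start) / 1_000_000;
        String uri = ((HttpServletRequest) req).getRequestURI();
        System.out.println("RPC " + uri + " handled in " + ms + " ms");
    }
}

If this filter also reports roughly 600 ms while Chrome still shows ~2.9 s TTFB, the gap lies outside the container: network latency, proxies or load balancers, or requests queuing for server threads. If the filter itself reports ~2.9 s, the time is going into something between the filter and the data fetch, such as GWT-RPC serialization of a large response.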

Related

The simulation model time slows down in virtual mode

I am currently building a model of a manufacturing process line, and the simulation was running fine without errors. Suddenly, when I switched to virtual mode to run the simulation quickly, the model started to slow down even though the step count is high. I am trying to identify where the issue is, but nothing is working. At a certain point the simulation just stops while the step counter is still running.
This is a picture of the palette; maybe the experiment is causing this.
You have created an infinite loop; this can be triggered by various things in your model.
Most likely you have a 'while' loop that never finishes, but it could also be a condition-based transition.
You need to find this yourself, though. Three options:
(easy): Check the model logic yourself and find the problem
(easy): Nudge yourself to where it stops with traceln commands (see where they stop showing, getting you closer to the culprit)
(harder): Use a profiler (Google "AnyLogic profiling" or similar if you are not familiar)
Benjamin is correct: you have created an infinite loop. Click on the "Events" tab in the developer panel and see which events are scheduled to occur at about the time that your model slows down to 0 days/sec. You can also pay attention to the "Step:" counter at the bottom of the developer panel and see where the step count spikes. For example, if your model has roughly 10k steps per day and suddenly starts climbing to 400k steps around 25.99 days, look at which things are happening in your logic at that time and narrow down where the infinite loop is created. traceln will also help immensely.
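To make the traceln tactic concrete, here is a minimal sketch. The loop condition and the helper function are hypothetical stand-ins for whatever your model logic does; only traceln() and time() are actual AnyLogic calls.

// Bracket the suspect section with trace output and add a loop guard:
traceln("entering allocation loop at t = " + time());
int guard = 0;
while (palletsWaiting() && guard < 1_000_000) { // palletsWaiting() is a placeholder condition
    allocateNextPallet();                       // placeholder for your model logic
    guard++;
    if (guard % 100_000 == 0)
        traceln("still looping: iteration " + guard + " at t = " + time());
}
if (guard >= 1_000_000)
    traceln("loop guard tripped: this is likely your infinite loop");
traceln("left allocation loop at t = " + time());

If the "entering" line prints but the "left" line never does, the culprit is inside that bracket; move the traces inward until the last one that still prints points at the offending statement.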

AnyLogic Source Block Creating Multiple Types of Agents with Different Interarrival Times

I am working in AnyLogic to create a model. I have a source that creates 17 different agent types, each with its own interarrival time. I would like all 17 agent types to arrive according to their own interarrival times, in parallel.
My database looks like this:
part_name   iat   processing_time
part_1      2.3   4.3
part_2      3.5   3.9
...
I have searched and tried everything I can find online.
The AnyLogic documentation suggests creating an agent population, but I do not think that feature is supported anymore.
Any help is appreciated.
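One pattern that may fit here is a Source driven by inject() calls, with one self-rescheduling arrival stream per part type. This is only a sketch under several assumptions: a Source block named source with "Arrivals defined by: calls of inject() function", a Dynamic Event named PartArrival with a String parameter (AnyLogic then generates the create_PartArrival function), and the iat column already loaded from the database table into a map. How each injected agent gets tagged with its part type depends on your model.

// Agent-level variable, filled once at startup from the part_name/iat columns:
Map<String, Double> iatByPart = new HashMap<>();

// Main agent's "On startup" action: launch one arrival stream per part type.
for (Map.Entry<String, Double> e : iatByPart.entrySet())
    create_PartArrival(e.getValue(), e.getKey()); // first arrival after one interarrival time

// Body of the PartArrival dynamic event (parameter: String partName):
//   source.inject(1);                                       // create one agent now
//   create_PartArrival(iatByPart.get(partName), partName);  // schedule the next arrival

For random rather than fixed interarrival times, the rescheduling delay could be exponential(1.0 / iatByPart.get(partName)) instead of the raw value, which gives exponentially distributed gaps whose mean is the value from the table.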

jQuery Auto Complete - Performance Issue

We are using the plugin https://goodies.pixabay.com/jquery/tag-editor/demo.html for our autocomplete feature. We load the source with 3,500 items, and performance becomes very poor: once the user starts typing, the autocomplete takes 6 to 8 seconds to load the filtered results.
What alternative approaches can we take for autocomplete with up to 4,000 items?
Appreciate your response!
Are you using the minLength attribute of autocomplete?
On their homepage, they have something like this:
$('#my_textarea').tagEditor({ autocomplete: { 'source': '/url/', minLength: 3 } });
This effectively means the user has to enter at least 3 characters before autocomplete is used. Doing so will usually reduce the number of results from the autocomplete to a saner count (maybe 20-30).
However, this might not necessarily be your problem. First you should figure out whether it is your server that has trouble responding quickly (you can use your browser's developer tools to see how long the request takes to complete).
If the request takes 6-8 seconds, then you will have to optimize your server's code. On the other hand, if the response is quick but tagEditor needs a long time to build the suggestion list, then the problem is that it may not be optimized for so many suggestions. In that case, the ultimate solution would be to rewrite the autocompletion module yourself, or patch the existing one to better scale to your needs.
Do you go back to the server every time the user types in something to get the matching results?
I am using Spring with Ehcache, which fetches all the items from the database and stores them in a server-side cache when the server starts. Whenever the user types, the cached data is used, which returns results within a few milliseconds. Someone else recommended this to me. Below is an example of it:
http://www.mkyong.com/ehcache/ehcache-hello-world-example/
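For reference, here is a minimal sketch of that startup-cache pattern in the Ehcache 2.x API used by the linked example. The cache name, key, and the way items arrive from the database are assumptions; the cache itself must be declared in ehcache.xml.

import java.util.List;
import java.util.stream.Collectors;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class AutocompleteCache {
    private final Cache cache;

    public AutocompleteCache() {
        // CacheManager.create() reads ehcache.xml from the classpath;
        // a cache named "autocompleteCache" must be defined there (name is an assumption).
        this.cache = CacheManager.create().getCache("autocompleteCache");
    }

    // Called once at server startup with the items loaded from the database.
    public void warmUp(List<String> allItemsFromDb) {
        cache.put(new Element("allItems", allItemsFromDb));
    }

    // Called on each autocomplete request; filters in memory, no database hit.
    @SuppressWarnings("unchecked")
    public List<String> suggest(String prefix) {
        List<String> all = (List<String>) cache.get("allItems").getObjectValue();
        return all.stream()
                  .filter(s -> s.toLowerCase().startsWith(prefix.toLowerCase()))
                  .limit(20) // cap the suggestion list so the client stays fast
                  .collect(Collectors.toList());
    }
}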
I am using the jQuery autocomplete feature with 2,500 items without any issue.
Here is a site where it is being used: http://www.all4sportsonline.com

GPS Specific questions for Service application

I am working on a simple application that needs to run as a service and report the GPS position every 3 minutes. I already have a working example based on the tutorial, but I still have the following doubts.
The service is started with GPS1.Start(5*60*1000, 0)
The documentation says the first parameter is the time interval and the second is the distance difference. How is that determined? Based on the prior position?
If I want to do what I stated above and I am scheduling/starting the service every 3 minutes, does this mean I will need to call GPS1.Start(0, 0) to get the latest fix? What would be the gain of using the parameters?
I am testing on a Nexus One and the Time object comes back in local time. I have to do the following to make it UTC, but this is a tweak to the code. Is this standard, or could it change based on the phone model? hora = DateTime.Date(Location1.Time + 6*DateTime.TicksPerHour)
Thanks
If you are only interested in a single fix each time then you should pass 0, 0. These values affect the frequency of subsequent events.
You can find the time zone with the code posted here: GetTimeZone

WF performance with 20,000 new persisted workflow instances each month

Windows Workflow Foundation has a reputation for being slow when persisting WF instances.
I'm planning a project whose business layer will be based on WF workflows exposed as WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish.
I was led to believe that, given WF's slowness when persisting, my problem would be unworkable for performance reasons.
I have the following questions:
Is this true? Will my performance be unacceptable under that load, given WF's persistence speed limitations?
How can I solve the problem?
We currently have two possible solutions:
1. Each new business process request (e.g. "Give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database.
2. Have only a small number of workflow instances alive at any given time, without any persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow will handle every case simultaneously).
I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at nstjelja#gmail.com
The number of hydrated, executing workflows will be determined by environmental factors: memory, server throughput, etc. Persistence issues really only come into play if you are loading and unloading workflows all the time, i.e. in (near) real time; in that case workflow may not be the best solution.
In my current project we also use WF with persistence. We don't have quite the same volume (perhaps ~2000 instances/month), and they usually don't take as long to complete (they are normally done within 5 minutes, in some cases a few days). We did decide to split the main workflow into two parts, where the normal waiting state would be. I can't say that I have noticed any performance difference in the system due to this, but it did simplify things, since our system sometimes had problems matching incoming signals to the correct workflow instance (that was an issue in our code, not in WF).
I think that if I were to start a new project based on WF I would rather go for smaller workflows that are invoked in sequence, than to have big workflows handling the full process.
To be honest, I am still investigating the performance characteristics of Workflow Foundation.
However, if it helps, I have heard the WF team has made many performance improvements in the new release, WF 4.
Here are a couple of links that might help (if you haven't seen them already):
A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 (discusses performance improvements)
Performance Characteristics of Windows Workflow Foundation (applies to WF 3.0)
WF 3.5 had a performance problem. WF4 does not: 20,000 WF instances per month is nothing. If you were talking per minute, I'd be worried.