I am looking at ways to improve the performance of serializing a certain object in my web service. This object contains at minimum 25-30 nested class objects, each with dozens of fields, and it can nest 7-8 levels deep.
I've done some research on using XML attribute overrides and using FromTypes on the XmlSerializer, but they are not practical options for me given the number of nested classes and the depth of this object. I also do not own this object, since it is generated from a web reference.
It takes about 5-7 seconds to serialize it the first time, after which it is very fast, down to milliseconds. I'm looking for ways to shift the cost of caching the objects during serialization from a service call to application startup.
Is it a bad idea to create an XmlSerializer instance with the object type and throw it away, just to get those caches populated for the XmlSerializer?
So say I do something like:
new XmlSerializer(typeof(thisObject))
during the application startup of the web service. I tried this approach, and it brought the first-time serialization cost during a service call down to nearly 2 seconds.
Are there better alternatives (practical ones in my case)?
We have a very large number of Autofac resolutions that occur in our application, and a very large share of processing time (about 50% of each web request) has always been spent in the Autofac resolution stage.
The "Register Frequently-Used Components with Lambdas" recommendation from the Autofac performance guide came to our attention, and our testing shows it could bring substantial gains for us.
Convert from this
builder.RegisterType<Component>();
to
builder.Register(c => new Component());
It seems the procedure is to use reflection to find the largest constructor, inspect its parameters, determine whether Autofac can resolve each of them, and if not, move on to the next smallest constructor, and so on. So specifying the constructor directly gives us massive improvements.
My question is: could there be, or should there be, some sort of caching of the constructor once it has been found? Since the containers are immutable you can't add or remove registrations, so nothing should require a different constructor later on.
I just wanted to check before we start switching a lot of registrations over to lambdas; we have 1,550 of them and will try to find the core ones.
Autofac does cache a lot of what it can, but even with container immutability, there are a few things that complicate matters:
Registration sources: dynamic suppliers of registrations (that's how open generics get handled, for example)
Child lifetime scopes: the root container might not have the current HttpRequestMessage for the active WebAPI request, but you can add and override registrations in child scopes so the request lifetime does have it.
Parameters on resolve: you can pass parameters to a resolve operation that will get factored into that operation.
Given stuff like that, caching is a little more complicated than "find the right constructor once, cache it, never look again."
Lambdas will help, as you saw, so look into that.
I'd also look at lifetime scopes for components. If constructor lookup is happening a lot, it means you're constructing a lot, which means you're allocating a lot, which is going to cost you other time. Can more components be singletons or per-lifetime-scope? Using an existing instance will be cheaper still.
You also mentioned you have 1550 things registered. Do all of those actually get used and resolved? Don't register stuff that doesn't need to be. I've seen a lot of times where folks do assembly scanning and try to just register every type in every assembly in the whole app. I'm not saying you're doing this, but if you are, don't. Just register the stuff you need.
Finally, think about how many constructors you have. If there's only one (recommended) it'll be faster than trying many different ones. You can also specify the constructor with UsingConstructor during registration to force a choice and bypass searching.
We are working on a very complex solution using Drools 6 (Fusion), and I would like your opinion about the best way to read the objects created as correlation results over time.
My first basic approach was to read working memory at regular intervals, looking for new objects and reporting them to an external service (REST).
AgendaEventListener does not seem to be the "best" approach because I don't care about most of the objects being inserted into working memory, so maybe the best approach would be to inject the particular "object" into some sort of service inside the DRL. Is this a good approach?
You have quite a lot of options. In decreasing order of my preference:
AgendaEventListener is probably the solution requiring the smallest amount of LOC. It might be useful for other tasks as well; all you have on the negative side is one additional method call and a class test per inserted fact. Peanuts. (See the sketch after this list.)
You can wrap the insert macro in a DRL function and collect inserted facts of class X in a global List. The problem you have here is that you'll have to pass the KieContext as a second parameter to the function call.
If the creation of a class X object is inevitably linked with its insertion into WM, you could register new objects in a static List inside class X, done in a factory method (or the constructor).
I'm putting your "basic approach" last because it requires many more cycles than the listener (#1) and tons of overhead for maintaining the set of X objects that have already been pushed to REST.
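For option 1, here is a minimal sketch of what such a listener could look like against the Drools 6 kie-api. Note that inserted-fact events arrive through a RuleRuntimeEventListener rather than an AgendaEventListener, so that is what the sketch extends; ResultFact and RestReporter are hypothetical placeholders for your correlation result class and your REST client, not part of any Drools API.

import org.kie.api.event.rule.DefaultRuleRuntimeEventListener;
import org.kie.api.event.rule.ObjectInsertedEvent;

public class ResultReportingListener extends DefaultRuleRuntimeEventListener {

    // Hypothetical fact type produced by the correlation rules.
    public static class ResultFact { }

    // Hypothetical callback that pushes one result to the external REST service.
    public interface RestReporter {
        void send(ResultFact fact);
    }

    private final RestReporter reporter;

    public ResultReportingListener(RestReporter reporter) {
        this.reporter = reporter;
    }

    @Override
    public void objectInserted(ObjectInsertedEvent event) {
        Object fact = event.getObject();
        // One extra method call and one class test per inserted fact, as described above.
        if (fact instanceof ResultFact) {
            reporter.send((ResultFact) fact);
        }
    }
}

Attach it once when the session is created, e.g. kieSession.addEventListener(new ResultReportingListener(reporter)).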
I'm a newbie in GWT, so sorry for this simple question.
I can call Registry.get("id") every time, or I can cache the returned value in a field. Which is better (how fast or slow is Registry.get("id"))?
The same question applies to the RpcProxy instance and the different loader instances.
That's actually a good question. You should always try to reuse your instances instead of creating new ones, especially with GWT client Java code, which becomes JavaScript at runtime. The overhead of instantiating objects in JS (even with all the optimisations you get from GWT) can quickly become unwieldy if you're not careful. Try it for yourself: have a list of 200 GWT Labels of which you only display 10 at a time, versus instantiating only 10 and reusing them each time the values change; you'll see the difference in the time your browser takes to render.
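A minimal sketch of the field-caching approach from the question, assuming Registry here is the GXT com.extjs.gxt.ui.client.Registry and that some hypothetical AppConfig object was registered under "id" (both assumptions; adjust to your own types and keys):

import com.extjs.gxt.ui.client.Registry; // assumption: the GXT Registry from the question

public class ContactView {

    // Hypothetical type stored in the registry under "id".
    public static class AppConfig { }

    // Look the value up once and keep it in a field instead of
    // calling Registry.get("id") on every use.
    private final AppConfig config = (AppConfig) Registry.get("id");

    public void render() {
        // use this.config here
    }
}

The lookup itself is likely cheap (a map access), so the bigger win is reusing widgets and RpcProxy/loader instances rather than recreating them.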
I'm trying to use Drools as the rule engine for a framework that maps grammar relations to semantics. The rule base is already in excess of 5,000 rules and will grow further. Currently, reading the DRL file containing the rules and creating the knowledge base takes a lot of time each time the program is run. Is there a way to create the knowledge base once and save it in some persistent format that can be quickly loaded, with the option to regenerate the knowledge base only when a change is made?
Yes, Drools can serialise a knowledge base out to external storage and then load the serialised knowledge base back in again.
So you need one cycle that loads the DRL, compiles it, and serialises it out, and a second cycle that loads and uses the serialised version.
I've used this with some success, reducing a 1 minute 30 loading time down to about 15-20 seconds. It also reduces your heap/perm gen requirements.
Check the API for the exact methods.
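A minimal sketch of that two-cycle split, assuming the Drools 5 knowledge-api (class and package names moved around in later versions) and placeholder file paths; plain Java serialisation is enough here because the knowledge base is Serializable:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

public class KnowledgeBaseCache {

    // Cycle 1: compile the DRL and serialise the resulting knowledge base out.
    public static void build(String drlPath, String cachePath) throws IOException {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newFileResource(drlPath), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(cachePath));
        try {
            out.writeObject(kbase); // KnowledgeBase is Serializable
        } finally {
            out.close();
        }
    }

    // Cycle 2: skip compilation entirely and load the serialised knowledge base.
    public static KnowledgeBase load(String cachePath)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new FileInputStream(cachePath));
        try {
            return (KnowledgeBase) in.readObject();
        } finally {
            in.close();
        }
    }
}

As the answer says, check the API for the exact methods (Drools also ships its own object stream wrappers); the important part is keeping compilation and loading as separate steps.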
My first thought is to keep the knowledge base around as long as possible. Unless you are creating multiple knowledge bases from different sets of rules, and there are too many possible combinations, hang onto those knowledge bases. In one application I work on, one knowledge base has all the rules so we treat it like a singleton.
However, if that's not possible or your application is not that long-running, I don't know that Drools itself provides any ways of speeding that up. Running a Drools 5.0 project through the debugger, I see that the KnowledgeBase Drools gives me is Serializable. I imagine it would be quicker to deserialize a KnowledgeBase than to re-parse the rules. But be careful designing your application around this! You use interfaces for a reason and the implementation could change without warning.
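If the singleton approach mentioned above fits your application, a minimal sketch of a lazily initialised holder might look like this (RuleEngine and buildKnowledgeBase() are hypothetical names; the build step could just as well deserialise a cached knowledge base instead of recompiling the DRL):

import org.drools.KnowledgeBase;

public final class RuleEngine {

    private static volatile KnowledgeBase kbase;

    private RuleEngine() { }

    public static KnowledgeBase getKnowledgeBase() {
        if (kbase == null) {
            synchronized (RuleEngine.class) {
                if (kbase == null) {
                    // Built exactly once; every caller afterwards reuses the same instance.
                    kbase = buildKnowledgeBase();
                }
            }
        }
        return kbase;
    }

    private static KnowledgeBase buildKnowledgeBase() {
        // Placeholder: compile the DRL here, or load a serialised knowledge base.
        throw new UnsupportedOperationException("plug in your own build/load logic");
    }
}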
I see that there are two ways of transferring objects from server to client:
Use the same domain object (Contact.java) as used in the service layer. (I do not use Hibernate.)
Use a HashMap to send the domain object field values in the form of a Map<String, String>, built with the help of the BeanUtilsBean class. For multiple objects, use a List<Map<String, String>>. Similarly, use the Map to submit form values from the client to the server.
Is there any performance advantage for option 1 over option 2?
Is there a way to hide the class name/package name that is sent to the browser if we use option 1?
Thanks!
You have to understand that whatever option you choose, it will need to get converted to JavaScript (+ some wrappers, etc.) - this takes more time and space/bandwidth than, say, JSON (note: I haven't done any benchmarks, this is just a [reasonable] conclusion I came up with ;)). But if you used JSON, you'd have to recreate the object on the server side, so it's not a silver bullet. In the end, it all depends on how much of an issue performance is for you - for more insight, see this question.
I'd go with option 1: just leave it to the GWT team to pack your domain objects and transfer them between client and server. In the future (GWT 2.1), we'll have some really nice things, including a more lightweight transfer protocol - see this year's presentation from Google I/O on architecting GWT apps - it's something worth keeping in mind.
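For option 1, a minimal sketch of what the shared domain object looks like when GWT-RPC does the packing for you (Contact's fields and the commented-out ContactService are hypothetical; the only real requirements are that the type is Serializable or IsSerializable and has a no-arg constructor):

import java.io.Serializable;

// Lives in shared/client-visible code so both client and server can use it.
public class Contact implements Serializable {

    private String name;
    private String email;

    // GWT-RPC needs a no-arg constructor on serializable types.
    public Contact() { }

    public Contact(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public String getName() { return name; }
    public String getEmail() { return email; }
}

// Hypothetical service interface; GWT generates the transport code:
// public interface ContactService extends com.google.gwt.user.client.rpc.RemoteService {
//     Contact getContact(String id);
// }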
PS: It's always good to do benchmarks yourself in this kind of situation - your configuration, the type of objects, etc. might yield different results than expected.