OSB: Analyzing memory of proxy service - weblogic12c

I have multiple proxies in a message flow. Is there a way in OSB to monitor the memory utilization of each proxy? I'm getting OOM errors and want to investigate which proxy is eating most or all of the memory.
Thanks!

If you're getting OOME then it's either because a proxy is not freeing up all the memory it uses (so it will eventually fail even with one request at a time), or because you use too much memory per invocation and it dies above a certain load threshold but is fine under low load. Do you know which it is?
Either way, you will want to generate a heap dump on OOME so you can investigate what's going on. It's annoying but sometimes necessary. A colleague had to do that recently to fix some issues (one problem was an SB-transport platform bug, one was a thread starvation issue due to a platform work manager bug, and the last was a Muxer bug when used on Exalogic).
If it just performs poorly under load, then you'll need to do the usual OSB optimisations: use fewer Assign steps (but assign more variables per step), and do a lot more in XQuery rather than in proxy steps, especially loops that don't need a Service Callout, since they can easily be rolled into a for loop in XQuery; you know, all the standard stuff.

Related

Unusual spikes in CPU utilization in CentOS 6.6 while starting pycharm

My system has been behaving strangely for the last couple of days. I am a regular user of PyCharm, and it used to work on my system very smoothly with no hiccups at all. But for the last couple of days, whenever I start PyCharm, my CPU utilization behaves strangely (see the screenshot of the unusual CPU utilization).
I am confused because when I look at the processes, or run ps/top in a terminal, there is no process using more than 1 or 2% CPU. So I am not sure where these resources are being consumed.
By unusual CPU utilization I mean that first CPU1 is used at 100% for a couple of minutes, then CPU2; that is, only one CPU's utilization goes to 100% for some time, followed by another's. This goes on for 10 to 20 minutes, then the system returns to normal.
P.S.: I don't think this problem is related to PyCharm, as I face similar issues while doing other work too; it's just that I always see it with PyCharm.
POSSIBLE CAUSE: I suspect you have a thrashing problem. The CPU usage of your applications is low because none of them are actually getting much useful work done; all the processing is being taken up by moving memory pages to and from disk. Your CPU usage probably settles down after a time because your application has entered a state where its working set has shrunk to the point where it can all be held in memory at once.
This has probably happened because one of the apps on your machine is handling a larger data set than before, and so requires more addressable memory. Another possibility is that, for some reason, a lot more apps are running on your machine.
POTENTIAL SOLUTION: There are several ways you can address this. The simplest is to put more RAM on your machine. If this doesn't work or isn't possible, you'll have to figure out which app is the memory hog. You may simply have to work with smaller problems/data-sets or offload some of the apps onto a different box.
MIGRATING CPU LOAD: Operating systems move tasks (user apps, kernel) around for many different reasons, ranging from plain randomness to certain apps having more of their addressable memory in one bank than another. Given that you are probably doing a lot of thrashing, I'm not surprised that the processor your app runs on changes over time.

Asynchronous Socket Communication & Heap fragmentation

I wrote a multithreaded socket server application which accepts over 1,000 concurrent connections. Recently the application crashed; after analyzing the dump files, we found it had crashed due to heap corruption. I found the same issue discussed in the following links.
.NET Does NOT Have Reliable Asynchronouos Socket Communication?
http://support.microsoft.com/kb/947862
The discussion also suggests three solutions:
The network application should have an upper bound on the number of outstanding asynchronous IO that it posts.
Use Microsoft CCR
Use TPL
Due to time constraints, I thought I would stick with #1, but I don't have a clear picture of how to implement it. Can someone give me a good starting point, please?
Also, has anyone used async with TPL to solve this issue?
You mean a better starting point than the blog posting that I linked to in the answer that you refer to?
The issue is this:
Memory and other per-operation resources that are used during an async write are often "in use" until the remote peer's TCP stack acks the data and the local stack can complete your async write operation to tell you that you can reuse your buffer.
The local peer has no control over this as it's all governed by the speed at which the remote peer reads data from its socket and the congestion on the link between the two peers.
Because of the above you need to have a hard limit on the number of async writes that you have outstanding at any one time. You can track this by incrementing a counter just before you issue an async write and decrementing it in the completion handler.
What you do once you hit that limit is up to you. In the original article I favour a queue into which data to be written is placed; this queue can then be used as a source of data as write completions occur, and once the queue is empty you can send normally again. Of course this simply moves the problem - you still have a memory resource that's controlled by the remote peer (the queued data), but you aren't also consuming other OS resources (non-paged pool, the I/O page-lock limit, etc.).
You could simply stop sending when you reach your limit - and now the API that you build over the async API needs a 'can't send at the moment, try again later' return from a send that previously always "worked".
If you're doing this I would also seriously look at avoiding the pinned memory issue by allocating a large block of buffers in one contiguous block and using them from the pool.
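The counter-and-limit idea itself isn't tied to .NET; the shape is the same in any language. Purely as an illustration (not .NET code), here is a minimal sketch in Objective-C that uses a GCD semaphore as the counter; BoundedWriter, tryWrite: and the I/O queue are hypothetical stand-ins for your real async socket write path:
#import <Foundation/Foundation.h>

// Hard limit on in-flight async writes; the semaphore holds one token per allowed write.
static const long kMaxOutstandingWrites = 64;

@interface BoundedWriter : NSObject
- (BOOL)tryWrite:(NSData *)data; // returns NO when the limit is reached ("can't send, try later")
@end

@implementation BoundedWriter {
    dispatch_semaphore_t _slots;
    dispatch_queue_t _ioQueue; // stands in for the real async socket write
}

- (instancetype)init {
    if ((self = [super init])) {
        _slots = dispatch_semaphore_create(kMaxOutstandingWrites);
        _ioQueue = dispatch_queue_create("example.io", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (BOOL)tryWrite:(NSData *)data {
    // "Increment the counter just before you issue an async write": take a slot, or refuse.
    if (dispatch_semaphore_wait(_slots, DISPATCH_TIME_NOW) != 0) {
        return NO; // limit hit - the caller can queue the data or back off
    }
    dispatch_async(_ioQueue, ^{
        // ... perform the real async write of `data` here ...
        // "Decrement it in the completion handler": return the slot when the write completes.
        dispatch_semaphore_signal(self->_slots);
    });
    return YES;
}
@end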
First, that's a very old KB article. How can you be sure you have that particular problem?
Then, as Hans Passant answers in the SO question, if you write bad async code, it will bite you. If you don't take care of your resources (and memory buffers are resources), a concurrent program will run into memory errors.
It's very hard to write good concurrent code using raw threads, and TPL does make it easier, but it won't fix the bugs you already have. In fact, unless you identify your current problems, you are likely to carry them over to the version that uses TPL.
Without knowing the specific problem that caused your application to crash, I can only make some suggestions:
Use BufferManager to reuse memory buffers instead of allocating new ones.
Use a queue to store requests and process them asynchronously instead of starting a new thread for each request.
There are other techniques you can use as well, depending on the type of application you are building. E.g., you could use TPL Dataflow to break processing into independent steps.
As for CCR, there is not much point in using it outside Robotics Studio. TPL contains most of the relevant functionality you need to write concurrent apps.

Correct approach for checking memory allocations in Objective C

I'm wondering what would be the correct approach after executing a command that allocates memory in Objective C (I'm mainly referring to iOS apps).
My dilemma comes from the fact that checking for the success or failure of a memory allocation adds lots of lines of code, and I wonder whether it is useful at all.
Moreover, sometimes memory allocations are obvious, such as when using 'alloc', but sometimes they take place behind the scenes. And even if we check each and every allocation - when we find one has failed - there isn't much we can actually do. So maybe the correct approach is to just let it fail and have the app crash?
Take a look at this code:
// Explicit memory allocation
NSArray *a1 = [[NSArray alloc] initWithObjects:someObj, nil];
if (!a1) {
    // Should we make this check at all? Is there really anything to do?
}

// Implicit memory allocation
NSArray *a2 = [NSArray arrayWithObjects:someObj, nil];
if (!a2) {
    // Should we make this check at all? Is there really anything to do?
}
What in your opinion would be the correct approach? Check or not check for allocation failures? iOS developers out there - how have you handled it in your apps?
Fantasy: Every memory allocation would be checked and any failure would be reported to the user in a friendly fashion, the app would shut down cleanly, a bug report would be sent, you could fix it and the next version would be perfect [in that one case].
Reality: By the time something as trivial as arrayWithObjects: fails, your app was dead long, long ago. There is no recovery in this case. It is quite likely that the frameworks have already failed an allocation and have already corrupted your app's state.
Furthermore, once something as basic as arrayWithObjects: has failed, you aren't going to be able to tell the user anyway. There is no way that you are going to be able to reliably put a dialog on screen without further allocations.
However, the failure happened well before your app failed an allocation. Namely, your app should have received a memory warning and should have responded by (a) persisting state so no customer data is lost and (b) freeing up as much memory as possible to avoid catastrophic failure.
Still, a memory warning is the last viable line of defense in the war on memory usage.
Your first assault on memory reduction is in the design and development process. You should consider memory use from the start of the application development process and you must optimize for memory use as you polish your application for publication. Use the Allocations Instrument (see this Heapshot analysis write-up I did a bit ago -- it is highly applicable) and justify the existence of every major consumer of memory.
iPhone apps should register for UIApplicationDidReceiveMemoryWarningNotification notifications. iOS will send these when available memory gets low. Google iphoneappprogrammingguide.pdf (dated 10/12/2011) for more information.
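Registering for that notification looks something like the sketch below; the handler name and the work done inside it are placeholders for your own persistence and cache code:
// e.g. in your app delegate, after the app finishes launching
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleMemoryWarning:)
                                             name:UIApplicationDidReceiveMemoryWarningNotification
                                           object:nil];

// Hypothetical handler: persist state, then drop anything that can be rebuilt later.
- (void)handleMemoryWarning:(NSNotification *)note {
    [self saveCustomerData];            // your own persistence routine
    [self.imageCache removeAllObjects]; // e.g. an NSCache you own
}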
That said, one general approach to the problem I've seen is to reserve a block of memory at app startup as a "cushion". In your code, put a test after each allocation. If an allocation fails, release the cushion so you have enough memory to display an error message and exit. The size of the cushion has to be large enough to allow your hobbled app to shut down nicely. You could determine the size of the cushion by using a memory stress tester.
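A rough sketch of that cushion idea in plain C/Objective-C (the 2 MB size, the retry, and the reporting step are assumptions you would tune for your own app):
#include <stdlib.h>

static void *gCushion = NULL;
static const size_t kCushionSize = 2 * 1024 * 1024; // size it with a memory stress test

void ReserveCushion(void) {        // call once at app startup
    gCushion = malloc(kCushionSize);
}

void *CheckedAlloc(size_t size) {  // use instead of raw malloc
    void *p = malloc(size);
    if (p == NULL && gCushion != NULL) {
        free(gCushion);            // release the cushion so there is room to
        gCushion = NULL;           // report the error and shut down cleanly
        p = malloc(size);          // one retry now that the cushion is freed
        // ... show an error message and begin an orderly shutdown ...
    }
    return p;
}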
This is really a tricky problem because it happens so rarely (for well-designed programs). In the PC/mini/mainframe world, virtual memory virtually eliminates the problem in all but the most pathological programs. In limited-memory systems (like smartphones), stress testing your app with a heap monitor tool should give you a good indication of its maximum memory usage. You could also code a high-water-mark wrapper routine for alloc that does the same thing, as sketched below.
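A similarly rough (not thread-safe, purely illustrative) sketch of such a high-water-mark wrapper around malloc, which just records the largest total the app ever had outstanding:
#include <stdlib.h>
#include <malloc/malloc.h> // malloc_size() on iOS/macOS

static size_t gCurrentBytes = 0;
static size_t gHighWaterBytes = 0; // inspect this when stress testing

void *TrackedAlloc(size_t size) {
    void *p = malloc(size);
    if (p != NULL) {
        gCurrentBytes += malloc_size(p); // actual block size, not just the request
        if (gCurrentBytes > gHighWaterBytes) {
            gHighWaterBytes = gCurrentBytes;
        }
    }
    return p;
}

void TrackedFree(void *p) {
    if (p != NULL) {
        gCurrentBytes -= malloc_size(p);
    }
    free(p);
}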
Check them, assert in debug (so you know where/why failures exist), and push the error to the client (in most cases). The client will typically have more context - do they retry with a smaller request? fail somehow? disable a feature? Display an alert to the user, etc, etc. The examples you have provided are not the end of the world, and you can gracefully recover from many - furthermore, you need to (or ought to) know when and where your programs fail.
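As a hypothetical illustration of "assert in debug, push the error to the caller" (the class, method name, and the caller's options are made up for the example):
#import <Foundation/Foundation.h>
#include <errno.h>

@interface BufferMaker : NSObject
- (NSData *)makeBufferOfLength:(NSUInteger)length error:(NSError **)error;
@end

@implementation BufferMaker

- (NSData *)makeBufferOfLength:(NSUInteger)length error:(NSError **)error {
    void *bytes = malloc(length);
    NSAssert(bytes != NULL, @"allocation of %lu bytes failed", (unsigned long)length); // debug builds stop here
    if (bytes == NULL) {
        if (error) {
            *error = [NSError errorWithDomain:NSPOSIXErrorDomain code:ENOMEM userInfo:nil];
        }
        return nil; // release builds: the caller decides - retry smaller, disable a feature, alert the user
    }
    return [NSData dataWithBytesNoCopy:bytes length:length freeWhenDone:YES];
}

@end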
With the power you have in an iPhone/smartphone, the time it takes to run a few tests is so small that it is not worth wondering "is it really worth checking"; it is always good to test for and catch any failures in your code/allocations. (If you don't, it sounds more like you're too lazy to add a few extra lines to your code.)
Also, "letting the app crash" gives a REALLY poor impression of your application; the user sees the app close for no reason and thinks it is poor-quality software.
You should always add your tests, and if you can't do anything about the error then at least you should display a message before the app closes (it makes the user less frustrated).
There are several options for detecting failed memory allocations, like catching exceptions, testing whether the returned pointer is nil, checking the size of the list, etc.
You should think of ways to let your application keep running when an allocation fails:
if it is just one view of your interface, display a message saying that the particular view failed to load ...
if it is the main and only view, close the application gracefully with a message
...
I don't know what application you are working on, but if you are short on memory, you should consider creating a system that allocates and deallocates memory as you progress through your app, so that you always have the maximum memory available. It might be slightly slower than keeping everything cached, but your app's quality will improve if you avoid any force closes.

How does Scala's Lift manage state?

I'm quite impressed by what Lift 2.0 brings to the table with Actors and StatefulSnippets, etc, but I'm a little worried about the memory overhead of these things. My question is twofold:
How does Lift determine when to garbage collect state objects?
What does the memory footprint of a page request look like?
If a web crawler dances across the footprint of the site, are they going to open up enough state objects to swamp a modest VPS (512M)? The question is very obviously application dependent, but I'm curious if anyone has any real-world figures they can throw at me.
Lift stores state information in a session, so once the session is destroyed the state associated with that session goes away.
Within the session, Lift tracks each page that state is allocated for (e.g., the mapping between an Ajax button in the browser and a function on the server) and has a heart-beat from the browser. Functions for pages that have not seen the heartbeat in 10 minutes have their references removed so the JVM can garbage collect them. All of this is tunable, so you can change heart-beat frequency, function lifespan, etc., but in practice the defaults work quite well.
In terms of session explosion, yeah... that's a minor issue. Popular sites (including http://demo.liftweb.net/ ) experience it. The example code (see http://github.com/lift/lift/tree/master/examples/example/ ) detects sessions that were created by a single request and then abandoned and expires those early. I'm running demo.liftweb.net with 256MB of heap size (that'd fit in a 512MB VPS) and occasionally, the session count rises over 1,000, but that's quickly tamped down for search engine traffic.
I think the question about memory footprint was once answered somewhere on the mailing list, but I can’t find it at the moment.
Garbage collection is done after some idle time. There is, however, an example on the wiki which uses some better heuristics to kill off sessions spawned by web crawlers.
Of course, for your own project it makes sense to check memory consumption with something like VisualVM while spawning a couple of sessions yourself.

Xcode iPhone Build fails With Out of Memory

Sometimes the project compiles, and sometimes it fails with
"Out of memory allocating 4072 bytes after a total of 0 bytes"
If the project does compile, when it starts it immediately throws a bad access exception when attempting to access the first (allocated and retained) object, or throws an error "unable to access memory address xxxxxxxx", where xxxxxxxx is a valid memory address.
Has anyone seen similar symptoms and knows of workarounds?
Thanks in advance.
If compilation or linking is failing with an out of memory error like that, it is likely one of two issues.
First, does your boot drive or the drive that you are building your source on have free space (they may be the same drive)? If not, then that error may arise when the VM subsystem tries to map in a file or, more likely if the boot drive is full, when the VM subsystem tries to allocate more disk for swap space.
Secondly, is your application just absolutely gigantic? I.e. is it the linker that is failing as it tries to assemble something really really large?
There is also the possibility that the system has some bad RAM in it. Unlikely, though, given that the symptoms are so consistent.
In any case, without more details, it is hard to give a more specific answer.
I've seen this; it is not usually an actual memory error in your code.
What is happening is that you have your Xcode target's build setting "Optimization Level" set to Fast, Faster, or Fastest.
There appears to be a bug in there somewhere; set it to None, or try -Os or -O3 (I don't think Fastest is affected).
This will very likely solve the problem for anyone who comes across this thread. Definitely try "None" first; that will confirm that this is what is happening in your case.
I can tell that McPragma is having this problem for sure, because he/she describes changing from Debug to Release, and that causes it (Debug is already set to None and Release is set to something else). When that is the case, it is definitely that particular build setting.