Any tips on how to figure out where the memory leak is in my Facebook app? It's an ASPX app using the Facebook toolkit DLL, and I'm afraid the bug may be in that C# library.
The problem is that after a week of uptime the server runs out of memory and needs to be rebooted. There are quite a lot of users, and I can't run a debugger on this production server, so I would need to simulate the usage somehow on my local PC.
Try taking a memory dump (sorry for my phrasing) of the application.
There are some tips on this blog on how to do it, and how to read the dump files.
Then you can see which objects are allocated, and which ones are preventing garbage collection.
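If the blog isn't to hand, the usual approach goes roughly like this (a hedged sketch; it assumes IIS 6 or later, where the ASP.NET worker process is w3wp.exe, that the Debugging Tools for Windows are installed, and the paths are examples):

    adplus -hang -pn w3wp.exe -o c:\dumps

Then open the dump in WinDbg, load SOS (.loadby sos mscorwks on .NET 2.x, .loadby sos clr on 4.x), and start with:

    !dumpheap -stat

That groups the live objects on the managed heap by type; once a suspect type stands out, run !gcroot on one of its instances' addresses to see exactly what is keeping it from being collected.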
I'm using version 19.2 of SQL Developer, having switched over from TOAD.
Of late, when I lock my system and come back after an hour or two, it resumes very slowly.
Most of the system's memory is consumed by Oracle SQL Developer. Please see the attached screenshot.
Is there any way to avoid these issues?
I'm trying to profile a Perl website that I have to work on, running under IIS. The website uses Catalyst, and I'm using Devel::NYTProf to profile it.
By default, the profile data is written to ./nytprof.out. I don't have access to the command line used to launch the Perl process, nor can I pass arguments to it (I enable profiling with "use Devel::NYTProf" in my Perl file).
But I can't find the file... Any idea where it might end up? And is there a better way to profile my website with NYTProf?
I assume you mean IIS.
Have you checked that the user the web server runs as has write permission to the likely folders? It used to run as IANONUSR (IIRC) or similar, which had very tightly controlled permissions for obvious reasons.
The IIS FastCGI module lets you set environment variables for the FastCGI processes, which should let you point NYTProf's output file somewhere convenient via the NYTPROF environment variable. If all else fails, you can hack Run.pm in NYTProf and change the location that way; crude, but at least you'll know where it's trying to write to.
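For example (a hypothetical fragment; the Perl path and output path are placeholders), the FastCGI application entry in applicationHost.config can carry environment variables, and NYTProf reads its colon-separated options from the NYTPROF variable:

    <fastCgi>
      <application fullPath="C:\Perl\bin\perl.exe">
        <environmentVariables>
          <environmentVariable name="NYTPROF" value="file=C:\temp\nytprof.out:addpid=1" />
        </environmentVariables>
      </application>
    </fastCgi>

The addpid=1 option appends the process id to the file name, which helps when several FastCGI workers are writing profiles at once.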
I salute your efforts, but I would probably just port the application to run under Linux. Getting NYTProf working under Linux the first time was hard enough, especially as the processes have to terminate normally for the profile data to be written. So the FastCGI processes got a method added to make them die when I fetched a specific URL, which I'd keep fetching until all the processes were dead.
That said, NYTProf was well worth the effort on Linux: I was able to track down a regular expression that was eating vast amounts of CPU and didn't even need to be called 99.9% of the time it ran. My experience on Windows was that "fork" was a performance killer, but I think Microsoft has fixed that somewhat since my IIS days.
Are library files downloaded from sunfreeware for the Solaris OS prone to viruses, or is it safe to download from these sites?
I ask because I had a memory issue where /proc consumed too much space (even though it is a virtual file system) and
/ showed 100% full.
Afterwards I was unable to log in at all.
Please comment.
Downloading any binary from any site, regardless of OS, requires you to trust the people who run that site not to build in malicious code, and to protect their site well enough that nobody else can break in and insert malicious code.
That's likely completely unrelated to the problems you hit, though; filling a disk can happen through many perfectly normal operations if you're not paying attention to disk usage.
I've been working on an app for quite a while and suddenly started to hit this error when the app tries to open a Core Data store. I hadn't made any changes to my data model or the data access code for over a month, so I don't think it can be anything that I'm doing wrong as far as interacting with Core Data. (Meaning, the URLs are ok, the call pattern is ok, etc...)
Interestingly, these are the log lines immediately before the error:
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:209 unable to open /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: (14) unable to open database file
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:155 file doesn't exist /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: (2)
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:209 unable to open /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: (14) unable to open database file
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:235 unable to open /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: tile data will not be cached
So it looks like there is "something" wrong with the sqlite layer in general. Has anybody seen this before? Is there a recovery option besides wiping my device? It's currently running 3.1.3 and I'd really hate to upgrade to 4 because it's currently my only way to test that the app will run for people who haven't upgraded.
One thing I did notice: shortly after I first hit this error, I wanted to see if any other apps were having problems. Sure enough, the iPod app had forgotten everything about me, but it was able to recover after syncing. So maybe there is some recovery mode? (Although, even if I can recover for my app, the Maps APIs might burn a lot of bandwidth if they can't cache the map tiles...)
Ryan
For what it's worth, I found the culprit and it has nothing to do with Core Data, sqlite, or the file system. The app uses a lot of small audio clips and I was pre-caching them all as AVAudioPlayers. I knew this was probably a bad idea, but it was quick and easy so I figured I'd keep doing it that way until I hit some kind of problem. (I'd put a wrapper around the players so that I could delay instantiation if required without affecting the rest of the system, which is what I'm doing now.) I just assumed the problem would show up as an audio player problem and not somewhere else seemingly totally unrelated.
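For anyone curious, the wrapper boils down to something like this (a minimal sketch of the lazy-instantiation idea; ClipPlayer and its members are made-up names, shown in MonoTouch/C# to match the rest of this page):

    using MonoTouch.AVFoundation;
    using MonoTouch.Foundation;

    // Made-up wrapper: the AVAudioPlayer is only created on first use,
    // rather than pre-caching every clip's player at startup.
    public class ClipPlayer
    {
        readonly NSUrl url;
        AVAudioPlayer player;

        public ClipPlayer(string path)
        {
            url = NSUrl.FromFilename(path);
        }

        public void Play()
        {
            if (player == null)
                player = AVAudioPlayer.FromUrl(url); // deferred instantiation
            player.Play();
        }
    }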
I realized there must be a code error when I found the simulator also misbehaving, but in a different totally inexplicable way (keyed archives weren't being written properly). When I backed out of the most recent change I'd made (adding a new batch of audio clips), the problems vanished.
Hopefully this helps someone in the future!
I'm using MonoTouch along with System.Data to create a DataSet (just XML, for those not familiar) for simple data binding. The data in my app is minimal, so there's no need to go all out with SQLite. Using a DataSet also makes it easy to pass data via web services for cloud sync.
I serialize the DataSet to the personal folder on save, and of course read this file back when the app starts up to load the user's data. I've had issues where this file becomes corrupt, and I'm not sure why. I assume file I/O may be slow on these devices and that could be the cause; I'm not certain, but it is happening.
I'm also concerned that iTunes may be passing this file back and forth between the PC/Mac when the user syncs their device, which could be the cause of the corruption.
I want to prevent this file from syncing with iTunes and also persist it reliably. I'm using the NSFile.Save option to save it to the device. Since it's a text file, I'm thinking maybe I could more safely store it in the standard user settings area instead? That would prevent it from being synced by iTunes, I presume?
What is the most reliable and safe way to handle this file i/o for the xml dataset storage?
Thank you.
You're using MonoTouch. Isn't it simply a matter of calling DataSet.WriteXml() with a FileStream object ready to write to a document in your Documents folder?
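In code that's roughly the following (a minimal sketch; ds stands in for your DataSet and the file name is arbitrary):

    string docs = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
    string path = Path.Combine(docs, "data.xml");

    // Save: include the schema so ReadXml can rebuild typed columns later.
    using (var fs = File.Create(path))
        ds.WriteXml(fs, XmlWriteMode.WriteSchema);

    // Load at startup:
    var restored = new DataSet();
    using (var fs = File.OpenRead(path))
        restored.ReadXml(fs);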
That Documents folder is backed up to iTunes. It's not synced, but it helps if your user is restoring their phone (because they bricked it, lost it, whatever). It doesn't explain why your file is getting corrupted, though.
The only reason I can think of for the corruption is that it takes too long for your app to write the file. There's a limited window between the user exiting the app and the app being shut down, to prevent apps from holding the system hostage and degrading the user experience.
If writing the whole DataSet takes too long, you'll want to minimize that. Perhaps you can store just the data and not the schema, or devise a way to store only the deltas on exit and reconcile them the next time the user loads your app.
You can also prevent complete loss of data by writing to a second file and, once that operation completes, deleting the old file and renaming the new one into its place. That way, if the write operation didn't complete, the old file will still be there on the next startup, and the user will only have lost their most recent changes.
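A minimal sketch of that write-then-swap idea (ds and the file names are illustrative):

    string temp = path + ".tmp";

    ds.WriteXml(temp);          // if this is interrupted, the old file is untouched

    if (File.Exists(path))
        File.Delete(path);      // only delete once the new copy is fully written
    File.Move(temp, path);      // a cheap rename swaps the new file in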
In any case, if your data gets too big for a simple write operation to complete in time, you should look at different options such as SQLite.
Your best bet is probably to just save the XML as text. It's as simple as File.WriteAllText(...) - there's no reason to drop down to NSFile for this. That's part of the advantage of MonoTouch :)
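A hedged sketch of that approach (ds is your DataSet; note that GetXml() doesn't emit the schema, so ReadXml will infer the table structure on load):

    string xml = ds.GetXml();            // the whole DataSet as one XML string
    File.WriteAllText(path, xml);

    // Later, to load it back:
    var ds2 = new DataSet();
    ds2.ReadXml(new StringReader(File.ReadAllText(path)));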
Regarding syncing, here's the rule:
If you keep the file in the user's documents folder (Environment.SpecialFolder.MyDocuments and Environment.SpecialFolder.Personal BOTH point to the user's doc folder), then it's going to get backed up whenever the user syncs with iTunes.
There's nothing wrong with this. It persists the data between sessions and makes it recoverable if something goes wrong with the user's phone and they need to restore from a backup. Since your question is about persisting an XML file on the phone, this is what you want.
As for the iTunes question, there's no problem with speed and syncing because your app isn't going to be running while the phone is syncing. The file will either have been saved or it won't. Any corruption that takes place is happening while your app is running.
Reasons for files getting corrupted can include:
Not saving before the user quits. You get a chance to do this (see the sketch below).
Not gracefully handling an incoming phone call. The system warns you about this as well.
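Both of those moments surface in MonoTouch through your UIApplicationDelegate overrides; a minimal sketch (SaveUserData() is a hypothetical method that writes your DataSet out):

    public partial class AppDelegate : UIApplicationDelegate
    {
        public override void WillTerminate(UIApplication application)
        {
            SaveUserData(); // last chance to persist before the app is shut down
        }

        public override void OnResignActivation(UIApplication application)
        {
            SaveUserData(); // e.g. an incoming call is about to interrupt the app
        }
    }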
iTunes definitely isn't corrupting your file. If that were the case, iOS apps would all be broken. It could be happening on your dev machine for whatever reason, but I've never seen this happen elsewhere, and I've done quite a bit of iOS development.
If you'd like a tutorial on reading and writing files, I posted an answer in another question.
It's lengthy, but the point was to answer as many questions as I could so nobody would be left hanging or confused.
A nice thing about iOS devices is that you're back (for most apps) in the one-person-at-a-time world. You're writing apps where you don't have to worry about 5,000 people trying to use your web-based app at the same time (that's not always true, but... you get the point). As a result, you can do things that you might normally consider bad for performance, but you're unlikely to see any performance problems (as long as the file you're saving is either small enough to be saved quickly or you're saving in the background on another thread - you never want to block the main (UI) thread with a heavy IO operation).
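If the save ever does get heavy enough to notice, here's a hedged sketch of pushing it off the UI thread (SaveUserData() is hypothetical again):

    // needs: using System.Threading;
    static readonly object saveLock = new object();

    void SaveInBackground()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            lock (saveLock)      // keep two saves from overlapping
                SaveUserData();  // hypothetical method that writes the file
        });
    }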
If you have any more questions, feel free to ask.
I hope this helps :)
Lots of XML frameworks are read-only, but I've found that GDataXMLNode from http://code.google.com/p/gdata-objectivec-client/ works very well for both reading and writing.
Having said that, on the iPhone you'd do yourself a big favour by using Core Data with a SQLite backend. :-) Apple has done all this for you and optimized it more than any of us ever will.
Cheers
Nik
Consider using SQLite. I'd go for something like:
http://www.taimila.com/entify/ (not tried yet)
Catnap ORM
or Fak's sqlite-net on Google Code (I'm using this in a few apps)
Entify - if it does what it says it can do - looks really good.
Persisting XML on the iPhone as a means to store and access data is a nause you don't want to get into. I wrote about it here: http://iwayneo.blogspot.com/2010/08/festival-star-history-serialization-as.html
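For what it's worth, the sqlite-net route is tiny to set up; a minimal sketch (it assumes the SQLite.cs source file from the sqlite-net project has been added to the MonoTouch project; the Note class and file name are made up for illustration):

    using System;
    using System.IO;
    using System.Linq;
    using SQLite;

    public class Note
    {
        [PrimaryKey, AutoIncrement]
        public int Id { get; set; }
        public string Text { get; set; }
    }

    // ...
    string docs = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
    var db = new SQLiteConnection(Path.Combine(docs, "app.db"));
    db.CreateTable<Note>();                  // no-op if the table already exists
    db.Insert(new Note { Text = "hello" });  // each row is a plain object
    var notes = db.Table<Note>().ToList();   // query it back with LINQ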