NSUserDefaults synchronize method slows down the application - iPhone

I am doing a calculation-intensive operation in loops (hundreds of iterations for iterative formulas). In each loop the values are fetched directly from NSUserDefaults, calculated, and saved back. My question is: should I use the -synchronize method each time I write to NSUserDefaults? I think my application runs much faster without this method. Does using synchronize slow down the calculations?

Does using synchronize slow down the calculations?
Yes, absolutely. synchronize writes the current user default values out to disk.
Should I use the -synchronize method each time I write to NSUserDefaults?
No, absolutely not. If you have a long loop where you are changing user defaults, the values are kept in memory, so skipping synchronize won't mess up your calculations. It is only necessary to save to disk after the loop is done.
synchronize is usually done:
manually, before the app is terminated or sent to background
automatically by the system every few minutes
manually by the program after some important changes are made that you don't want to risk losing in the event of a crash or sudden power off.
In your case, after the long loop, you want to do it for reason 3.
By doing it every time within the loop, you are just unnecessarily writing values to flash, which you likely immediately overwrite.
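To make that concrete, here is a minimal sketch in Swift (UserDefaults is the modern name for NSUserDefaults; the key name and the formula are made up for illustration). Every write inside the loop only updates the in-memory defaults; the values are flushed to disk once, after the loop:

import Foundation

let defaults = UserDefaults.standard
var value = defaults.double(forKey: "runningTotal")    // hypothetical key

for _ in 0..<1_000 {
    value = value * 1.0001 + 1.0                       // some iterative formula
    defaults.set(value, forKey: "runningTotal")        // in-memory update only
}

defaults.synchronize()                                 // one explicit flush after the heavy work is done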

No! You should not. Consider calling synchronize in applicationWillTerminate:.

No. In theory you never need to call it at all; it will be done for you (it “is automatically invoked at periodic intervals”). In practice, it's a good idea to do so in applicationWillResignActive:.
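As an illustration of that advice, a minimal Swift sketch of a bare UIKit app delegate (assume the rest of the app writes to UserDefaults freely, with no per-write flushes):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func applicationWillResignActive(_ application: UIApplication) {
        // One explicit flush as a safety net when the app leaves the foreground,
        // instead of synchronizing after every single write.
        UserDefaults.standard.synchronize()
    }
}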

Related

Can I somehow detect that a new statement has started?

I have a plv8 function (but I assume the same thing applies to any language). It could be called many times in one statement (a SELECT, say). It computes expensive stuff, so I don't want to recompute it every time. But the stuff it computes depends on the contents of the database. I can naively cache things in the function, but that cache will never get cleared, so after the first call I am always operating on old data. It would be nice to flush it at the start of each statement execution.
Note that triggering off changes to the tables the cache depends on doesn't work: the cache exists in connection A, while the DB can be changed by connection B (or C, ...).

"Out of memory" error for standalone matlab applications - memory fragmentation

I have to deliver an application as a standalone MATLAB executable to a client. The code includes a series of calls to a function that internally creates several cell arrays.
My problem is that an out-of-memory error happens when the number of calls to this function increases in response to increased user load. I guess this is low-level memory fragmentation, since the workspace variables are independent of the number of loops.
As mentioned here, quitting and restarting MATLAB is the only solution for this type of out-of-memory error at the moment.
My question is how I can implement such a mechanism in a standalone application to save its data, quit, and restart itself in the case of an out-of-memory error (or when a high likelihood of such an error is somehow predicted).
Is there any best practice available?
Thanks.
This is a bit of a tough one. Instead of looking to restart to clear things out, could you change the code to break the work into chunks to make it more efficient? Fragmentation is mostly proportional to the peak cell-related memory usage and to how much the size of data items varies, and less to the total usage over time. If you can break a large piece of work into smaller pieces done in sequence, this can lower the "high water mark" of your fragmented memory usage. You can also save on memory usage by using "flyweight" data structures that share their backing data values, or sometimes by converting cell-based structures to reference objects or numeric codes. Can you share an example of your code and data structure with us?
In theory, you could get a clean slate by saving your workspace and relevant state out to a mat file and having the executable launch another instance of itself with an option to reload that state and proceed, and then having the original executable exit. But that's going to be pretty ugly in terms of user experience and your ability to debug it.
Another option would be to offload the high-fragmentation code into another worker process which could be killed and restarted, while the main executable process survives. If you have the Parallel Computing Toolbox, which can now be compiled into standalone MATLAB executables, this would be pretty straightforward: open a worker pool of one or two workers, and run the fraggy code inside them using synchronous calls, periodically killing the workers and bringing up new ones. The workers are independent processes which start out with non-fragmented memory spaces. If you don't have PCT, you could roll your own by compiling your application as two separate apps - the driver app and the worker app - and have the main app spin up a worker and control it via IPC, passing your data back and forth as MAT files or byte streams. That's not going to be a lot of fun to code, though.
Perhaps you could also push some of the fraggy code down into the Java layer, which handles cell-like data structures more gracefully.
Changing the code to be less fraggy in the first place is probably the simpler and easier approach, and results in a less complicated application design. In my experience it's often possible. If you share some code and data structure details, maybe we can help.
Another option is to periodically check for memory fragmentation with a function like chkmem.
You could integrate this function so that it is called silently from your code every couple of iterations, or use a timer object to have it called every X minutes...
The idea is to use the undocumented functions feature memstats and feature dumpmem to get the largest free memory blocks available, in addition to the largest variables currently allocated. Using that, you could make a guess as to whether there are signs of memory fragmentation.
When fragmentation is detected, you would warn the user and instruct them how to save their current session (export to a MAT-file), restart the app, and restore the session upon restart.

How often should I save to Core Data?

I'm working on an application backed by Core Data.
Right now, I'm saving the object context whenever I add an entity to, or delete one from, the context.
I'm afraid it will affect the performance, so I was thinking of delaying the save.
In fact, I could delay it all the way until the application is gonna terminate.
Is it too risky to save the data only when the application is about to close? How often should I call the save on Object Context?
I was thinking of having a separate thread handle the save: it will wait on a semaphore. Every time any part of the application calls a helper/util method to save the Core Data store, it will decrement the semaphore. When it is down to zero, the "save thread" will do one save, increment the semaphore back to, say, 5, and then sleep again.
Any good recommendation?
Thanks!
You should save frequently. The actual performance of the save operation has a lot to do with which persistent store type you're using. Since binary and XML stores are atomic, they need to be completely rewritten to disk on every save. As your object graph grows, this can really slow down your application. The SQLite store, on the other hand, is much easier to write to incrementally. So, while there will be some stuff that gets written above and beyond the objects you're saving, the overhead is much lower than with the atomic store types. Saves affecting only a few objects will always be fast, regardless of overall object graph size.
That said, if you're importing data in a loop, say, I would wait until the end of the complete operation to save rather than saving on each iteration. Your primary goal should be to prevent data loss. (I have found that users don't care for that very much!) Performance should be a close second. You may have to do some work to balance the frequency of saving against performance, but the solution you outline above seems like overkill unless you've identified a specific and significant performance issue.
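For instance, a minimal sketch of that idea in Swift (the managed object context is passed in, and the "Item" entity with its "name" attribute is hypothetical): insert everything first, then save once at the end of the import.

import CoreData

func importNames(_ names: [String], into context: NSManagedObjectContext) throws {
    for name in names {
        // Inserting only touches the in-memory object graph; it is cheap.
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
        item.setValue(name, forKey: "name")
    }
    // One save at the end of the whole import, instead of one per iteration.
    if context.hasChanges {
        try context.save()
    }
}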
One issue not mentioned in the other answers is that your solution, which involves a background thread, should not operate on a managed object context that is used on another thread. Generally you make a new MOC for background threads, but that would defeat the purpose of saving if you saved a different, unmodified background MOC.
So a few answers to your question:
You would need to call back your original thread to save the MOC
As the currently accepted answer suggests, the whole counter scheme might be overkill for your needs unless a performance issue has actually been measured.
If a performance issue WAS measured, you could use a simple throttling technique where you set a limit of, say, one save per 10 seconds: store the Date of the last time you saved, and when your save function is called, make sure the current time is more than 10 seconds past your last save; otherwise, return early (see the sketch after this list).
You really want to be saving immediately as much as possible, so at the very least my recommendation is to throttle rather than arbitrarily set any timer or countdown.
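A minimal Swift sketch of that throttling idea (the type and property names are made up for illustration):

import Foundation

final class ThrottledSaver {
    private var lastSave = Date.distantPast
    private let minimumInterval: TimeInterval = 10    // at most one save per 10 seconds

    func saveIfNeeded(_ save: () throws -> Void) rethrows {
        let now = Date()
        guard now.timeIntervalSince(lastSave) > minimumInterval else {
            return    // saved too recently; skip this save
        }
        try save()
        lastSave = now
    }
}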
The best way, I think, is to save after every object. If something such as a sudden crash ever happens, nothing will be lost.
One performance enhancement, if you are adding a lot of objects, is to batch: add all the objects to the context, then save. This is good, for example, if you are adding a lot of objects in a loop. Your idea is similar, but there could be a long time between saves, during which the program could crash.
I don't think adding a single object would be that much of a performance problem. How big are your objects - do they contain a lot of data?

Is it a good idea to warm up a cache in the BEGIN block in Perl?

Is it a good idea to warm up a cache in the BEGIN block, when it gets used?
You didn't really provide any information on what kind of environment you're talking about, which I think is important. In most cases the answer is probably "no", but I can think of one case where it's a definite yes: preforking servers -- web applications and the like. In that case, any work that you can do "before the fork" not only saves the cost of having the children recompute the same values individually, it also saves memory, since the pages containing the results can be shared across all of the child processes by the OS's COW mechanism.
If you're talking about a module you're writing and not an application, then I'd say no, don't lift things to compilation time without the user's permission unless they're things that have to be done for the module to work. Instead, provide a preheat_cache class method, and if your caller has a reason to need a hot cache at compile time they can put the call into a BEGIN block themselves. You could also use a :preheat_cache import tag but that's unnecessarily fancy in my book.
If it's a choice between preloading your cache at compile time, or preloading your cache as the first thing you do at run time, there's virtually no difference.
If your cache is large enough that loading it will trigger a lot of page swaps, that's an argument for waiting until run time. That way, all your module loading and other compile time code can be done while your system is under a lighter load.
I'm going to go with "no", even though I could be wrong. Reasoning goes like this: keep the code, and data it uses, small, so that it takes up less space in any caches (I am presuming you mean CPU cache, not programmatic hashes with common query results or some such thing).
Unless you see some sort of bad access pattern, trying to second guess what needs to be prefetched is probably useless at best. In fact such code or initialization data is likely to displace something you (or another process on the system) were actually using. Think about what you can do in the actual work part of the code to maximize locality of reference, to try to stay within smaller memory regions at any one time.
I used to use "top" to detect when processes were swapping between memory and disk. I don't know of any good tools yet to tell how often a process is getting cache misses and going to plain old slow mo'board memory. There must be such tools, I just don't know what they are yet (software tools, rather than some custom In Circuit Emulator type hardware). Perhaps some thought on this earlier in the day...
By "warm up" I assume you mean using a BEGIN block to guarantee the cache is preloaded before anything else in your script executes?
If you need the cache for your program to run properly, then yes, I think it would be a good idea.

Is it possible to pause an SQL query?

I've got a really long-running SQL query (data import, etc). It's crap - it uses cursors and it's running slowly. But it does the job, so I'm not too worried about performance.
Anyways, can I pause it for a while (instead of canceling the query)?
It chews up a bit of CPU, so I was hoping to pause it, do some other stuff ... then resume it.
I'm assuming the answer is 'NO' because of how rows and data get locked, etc.
I'm using SQL Server 2008, btw.
The best approximation I know of for what you're looking for is:
BEGIN
    WAITFOR DELAY '00:02:00';  -- hh:mm:ss delay before the work starts (here, two minutes)
    EXECUTE XXXX;              -- XXXX: the stored procedure or batch to run after the delay
END;
GO
Not only can you not pause it, doing so would be bad. SQL queries hold locks (for transactional integrity), and if you paused the query, it would have to hold any locks while it was paused. This could really slow down other queries running on the server.
Rather than pause it, I would write the query so that it can be terminated, and pick up from where it left off when it is restarted. This requires work on your part as a query author, but it's the only feasible approach if you want to interrupt and resume the query. It's a good idea for other reasons as well: long running queries are often interrupted anyway.
Click the debug button instead of execute. SQL Server 2008 introduced the ability to debug queries on the fly. Put breakpoints at convenient locations.
In similar situations - where I was trying to work through an entire (possibly huge) list of data and could tell which rows I had already visited - I would run the processing in chunks. In SQL Server terms it looks roughly like this (table and column names are placeholders):
UPDATE TOP (1000) dbo.MyTable   -- SQL Server has no LIMIT; TOP caps the batch size
SET    Processed = 1            -- ...or whatever the real work is
WHERE  Processed = 0;           -- i.e. rows that are still not done
And then I would just keep running the query until no more rows are modified. This breaks the locks up into reasonable time chunks and can allow you to do things like move tens of millions of rows between tables while a system is in production.
Jacob
Instead of pausing the script, perhaps you could use Resource Governor. That way you could allow the script to run in the background without severely impacting the performance of other tasks.
MSDN-Resource Governor