I am receiving memory warnings in didReceiveMemoryWarning. I know memory warnings have different levels, like level-1 and level-2. Is there any way to determine the warning level? Example:
if(warning level == 1)
<blah>
There are 4 levels of warnings (0 to 3). These are set from the kernel memory watcher, and can be obtained by the not-so-public function OSMemoryNotificationCurrentLevel().
typedef enum {
    OSMemoryNotificationLevelAny      = -1,
    OSMemoryNotificationLevelNormal   =  0,
    OSMemoryNotificationLevelWarning  =  1,
    OSMemoryNotificationLevelUrgent   =  2,
    OSMemoryNotificationLevelCritical =  3
} OSMemoryNotificationLevel;
How the levels are triggered is not documented. SpringBoard is configured to do the following at each memory level:
Warning (not-normal) — Relaunch, or delay auto relaunch of nonessential background apps e.g. Mail.
Urgent — Quit all background apps, e.g. Safari and iPod.
Critical and beyond — The kernel will take over, probably killing SpringBoard or even rebooting the device.
There is no way (other than private/undocumented API) to know the memory warning level, so you should not rely on it.
Check out this question to see the undocumented API for getting the memory warning level.
My first advice would be to research the memory warning notification in the docs (e.g., what the contents of its userInfo dictionary are, if present). I don't know whether it provides any details or not.
But ultimately, you shouldn't speculate on the level of the memory warning; just assume the worst and release as much unused data as you can.
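For example, a minimal sketch of that "assume the worst" approach in Swift (the image cache here is a hypothetical stand-in for any data you can regenerate later):

import UIKit

class GalleryViewController: UIViewController {
    // Hypothetical cache of decoded images, i.e. anything that can be rebuilt on demand.
    private let imageCache = NSCache<NSURL, UIImage>()

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // We don't know how severe the warning is, so treat every warning as severe
        // and drop everything that can be regenerated.
        imageCache.removeAllObjects()
    }
}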
There is no (public, working) way to get the current memory pressure level from the system on a customer device. There is however a way to get notified of memory pressure changes using the Dispatch Source API.
Memory pressure dispatch sources can be used to notify an application of changes to memory pressure. This can be more fine-grained than the notifications provided by UIKit and includes the capability to be notified when memory pressure returns to normal.
For example:
Objective-C:
dispatch_source_t memorySource = NULL;
memorySource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE,
                                      0L,
                                      (DISPATCH_MEMORYPRESSURE_NORMAL | DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL),
                                      [self privateQueue]);
if (memorySource != NULL) {
    dispatch_block_t eventHandler = dispatch_block_create(DISPATCH_BLOCK_ASSIGN_CURRENT, ^{
        if (dispatch_source_testcancel(memorySource) == 0) {
            dispatch_source_memorypressure_flags_t memoryPressure = dispatch_source_get_data(memorySource);
            [self didReceiveMemoryPressure:memoryPressure];
        }
    });
    dispatch_source_set_event_handler(memorySource, eventHandler);
    dispatch_source_set_registration_handler(memorySource, eventHandler);
    [self setSource:memorySource];
    dispatch_activate([self source]);
}
Swift 4:
if let source: DispatchSourceMemoryPressure = DispatchSource.makeMemoryPressureSource(eventMask: .all, queue: self.privateQueue) as? DispatchSource {
    let eventHandler: DispatchSourceProtocol.DispatchSourceHandler = {
        let event: DispatchSource.MemoryPressureEvent = source.data
        if source.isCancelled == false {
            self.didReceive(memoryPressureEvent: event)
        }
    }
    source.setEventHandler(handler: eventHandler)
    source.setRegistrationHandler(handler: eventHandler)
    self.source = source
    self.source?.activate()
}
Note that the event handler is also being used as the "registration handler". This causes the event handler to fire when the dispatch source is activated, effectively telling the application what the "current" value is at the moment the source is activated.
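For completeness, here is a hedged sketch of what the receiving end might look like in Swift; didReceive(memoryPressureEvent:) is the method the example above assumes, and the branch bodies are placeholders:

import Dispatch

func didReceive(memoryPressureEvent event: DispatchSource.MemoryPressureEvent) {
    // The event is an option set, so test for the most severe condition first.
    if event.contains(.critical) {
        // Drop every cache and buffer that can be rebuilt later.
    } else if event.contains(.warning) {
        // Trim optional caches proactively.
    } else if event.contains(.normal) {
        // Pressure has returned to normal; lazy repopulation can resume.
    }
}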
While retrieving metadata from media files, I've run into a memory issue I cannot figure out.
I want to retrieve metadata for media files stored either in the local app storage or in the iTunes area. For this I use AVAsset. While looping through these files I can see the memory consumption rising constantly, and not just a little: it is significant and ends up stalling the app when I enumerate my iTunes library on the phone.
The problem seems to be accessing the metadata property on the AVAsset class. I've narrowed it down to one line of code: 'let meta = ass.metadata'. Merely having that line of code (even without any references to the result) makes the app consume memory. I've included an example of my code structure below.
func processFiles(_ files:Array)
{
    var lastalbum: String = ""
    var i: Int = 0
    for file in files
    {
        i += 1
        view.setProgressPosition(CGFloat(i) / CGFloat(files.count))
        lastalbum = updateFile(file.url, lastalbum,
            { (album, title, artist, composer) in
                view.setProgressNote(album, title, artist + " / " + composer)
            })
    }
}
func updateFile(_ url: URL, _ lastalbum: String, iPod: Bool = false,
                _ progress: (String, String, String, String) -> Void) -> String
{
    let ass = AVAsset(url: url)
    let meta = ass.metadata
    for item in meta
    {
        // Examine metadata
    }
    // Use metadata
    // Callback with status
}
It seems that memory allocated in the updateFile method is kept even after the function has ended. However, once the processFiles function completes and the app returns to a normal state, all memory is released again.
So in conclusion, this is not a real leak, but it is still a significant problem. Any good ideas as to what goes wrong? Is there any way I can force the memory management to run a cleanup?
As suggested in the comment on the post, the solution is to wrap the specific code in an autoreleasepool block. I've tested this both with a small set of local media files and with my rather large iTunes media library (70GB). After adding the autoreleasepool, the memory buildup is eliminated.
func updateFile(_ url: URL, _ lastalbum: String, iPod: Bool = false,
                _ progress: (String, String, String, String) -> Void) -> String
{
    autoreleasepool
    {
        let ass = AVAsset(url: url)
        let meta = ass.metadata
        for item in meta
        {
            // Examine metadata
        }
        // Use metadata
        // Callback with status
    }
}
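The same idea can also be applied one level up, draining a pool once per file in the loop itself. A self-contained sketch, where the title extraction is just a hypothetical stand-in for the real metadata work:

import AVFoundation

func collectTitles(for urls: [URL]) -> [String] {
    var titles: [String] = []
    for url in urls {
        // One autorelease pool per iteration: the temporaries created by
        // accessing AVAsset.metadata are released after each file.
        autoreleasepool {
            let asset = AVAsset(url: url)
            for item in asset.metadata where item.commonKey == .commonKeyTitle {
                if let title = item.stringValue {
                    titles.append(title)
                }
            }
        }
    }
    return titles
}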
I have a Swift DispatchQueue that receives data at 60fps.
However, depending on the phone or the amount of data received, the computation becomes too expensive to run at 60fps. In practice, it is okay to process only half of the data, or as much as the available computing resources allow.
let queue = DispatchQueue(label: "com.test.dataprocessing")

func processData(data: SomeData) {
    queue.async {
        // data processing
    }
}
Does DispatchQueue somehow allow me to drop some data if a resource is limited? Currently, it is affecting the main UI of SceneKit. Or, is there something better than DispatchQueue for this type of task?
There are a couple of possible approaches:
The simple solution is to keep track of your own Bool as to whether your task is in progress or not, and when you have more data, only process it if there's not one already running:
private var inProgress = false
private var syncQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".sync.progress") // for reasons beyond the scope of this question, reader-writer with concurrent queue is not appropriate here

func processData(data: SomeData) {
    let isAlreadyRunning = syncQueue.sync { () -> Bool in
        if self.inProgress { return true }
        self.inProgress = true
        return false
    }

    if isAlreadyRunning { return }

    processQueue.async {
        defer {
            self.syncQueue.async { self.inProgress = false }
        }

        // process `data`
    }
}
All of that syncQueue stuff is to make sure that I have thread-safe access to the inProgress property. But don't get lost in those details; use whatever synchronization mechanism you want (e.g. a lock or whatever). All we want to make sure is that we have thread-safe access to the Bool status flag.
Focus on the basic idea, that we'll keep track of a Bool flag to know whether the processing queue is still tied up processing the prior set of SomeData. If it is busy, return immediately and don't process this new data. Otherwise, go ahead and process it.
While the above approach is conceptually simple, it won't offer great performance. For example, if your processing of data always takes 0.02 seconds (50 times per second) and your input data is coming in at a rate of 60 times per second, you'll end up getting 30 of them processed per second.
A more sophisticated approach is to use a GCD user data source, something that says "run the following closure when the destination queue is free". And the beauty of these dispatch user data sources is that they coalesce events together. These data sources are useful for decoupling the speed of the inputs from the speed of processing them.
So, you first create a data source that simply indicates what should be done when data comes in:
private var dataToProcess: SomeData?
private lazy var source = DispatchSource.makeUserDataAddSource(queue: processQueue)

func configure() {
    source.setEventHandler() { [unowned self] in
        guard let data = self.syncQueue.sync(execute: { self.dataToProcess }) else { return }
        // process `data`
    }
    source.resume()
}
So, when there's data to process, we update our synchronized dataToProcess property and then tell the data source that there is something to process:
func processData(data: SomeData) {
    syncQueue.async { self.dataToProcess = data }
    source.add(data: 1)
}
Again, just like the previous example, we're using syncQueue to synchronize our access to a property across multiple threads. But this time we're synchronizing dataToProcess rather than the inProgress state variable used in the first example. The idea is the same, though: we must be careful to synchronize our interaction with a property across multiple threads.
Anyway, using this pattern with the above scenario (input coming in at 60 fps, whereas processing can only handle 50 per second), the resulting performance is much closer to the theoretical maximum of 50 fps (I got between 42 and 48 fps depending upon the queue priority), rather than 30 fps.
The latter approach can conceivably lead to more frames (or whatever you're processing) being processed per second, and it results in less idle time on the processing queue. Comparing the two alternatives: with the former approach you lose every other frame of data, whereas the latter approach only loses a frame of data when two separate sets of input data arrive before the processing queue becomes free, in which case they are coalesced into a single call to the dispatch source's event handler.
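Putting the pieces together, here is a hedged, self-contained sketch of the coalescing pattern; the class name, queue labels, and SomeData are placeholders invented for illustration, while the GCD calls themselves (makeUserDataAddSource and add(data:)) are the real API used above:

import Dispatch

struct SomeData { /* whatever arrives at 60 fps */ }

final class CoalescingProcessor {
    private let processQueue = DispatchQueue(label: "com.example.process")
    private let syncQueue = DispatchQueue(label: "com.example.sync")
    private var pending: SomeData?
    private lazy var source: DispatchSourceUserDataAdd = {
        let src = DispatchSource.makeUserDataAddSource(queue: self.processQueue)
        src.setEventHandler { [unowned self] in
            // Fires at most once per free slot on processQueue, no matter how
            // many submissions were coalesced while it was busy.
            guard let data = self.syncQueue.sync(execute: { self.pending }) else { return }
            self.process(data)
        }
        return src
    }()

    func start() {
        source.resume()
    }

    // Called at 60 fps; only the most recent data survives a busy period.
    func submit(_ data: SomeData) {
        syncQueue.async { self.pending = data }
        source.add(data: 1)
    }

    private func process(_ data: SomeData) {
        // expensive work here
    }
}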
I am using Windows thread pools in my application, and am experiencing a memory leak of 136 bytes for every call to CreateThreadpoolWork(), as seen via UMDH:
+ 1257728 ( 1286424 - 28696) 9459 allocs BackTraceB0035CC
+ 9248 ( 9459 - 211) BackTraceB0035CC allocations
ntdll!RtlUlonglongByteSwap+B52
ntdll!TpAllocWork+8D
KERNEL32!CreateThreadpoolWork+25
... My Code ...
I am using a cleanup group, so per the documentation I am not calling CloseThreadpoolWork().
My code for handling the ThreadPool is:
typedef PTP_WORK ThreadHandle_t;
typedef PTP_WORK_CALLBACK THREAD_ENTRY_POINT_T;

static PTP_POOL pool = NULL;
static TP_CALLBACK_ENVIRON CallBackEnviron;
static PTP_CLEANUP_GROUP cleanupgroup = NULL;

int mtInitialize()
{
    InitializeThreadpoolEnvironment(&CallBackEnviron);

    pool = CreateThreadpool(NULL);
    if (NULL == pool)
    {
        return -1;
    }

    cleanupgroup = CreateThreadpoolCleanupGroup();
    if (NULL == cleanupgroup)
    {
        return -1;
    }

    SetThreadpoolCallbackPool(&CallBackEnviron, pool);
    SetThreadpoolCallbackCleanupGroup(&CallBackEnviron, cleanupgroup, NULL);

    return 0; // Success
}

void mtDestroy()
{
    CloseThreadpoolCleanupGroupMembers(cleanupgroup, FALSE, NULL);
    CloseThreadpoolCleanupGroup(cleanupgroup);
    DestroyThreadpoolEnvironment(&CallBackEnviron);
    CloseThreadpool(pool);
}

// Create thread
ThreadHandle_t mtRunThread(THREAD_ENTRY_POINT_T entry_point, void *thread_args)
{
    PTP_WORK work = NULL;

    work = CreateThreadpoolWork(entry_point, thread_args, &CallBackEnviron);
    if (NULL == work) {
        // CreateThreadpoolWork() failed.
        return 0;
    }

    SubmitThreadpoolWork(work);

    return work;
}

// Wait for a thread to finish
void mtWaitForThread(ThreadHandle_t thread)
{
    WaitForThreadpoolWorkCallbacks(thread, FALSE);
}
Am I doing something wrong?
Any ideas why I'm leaking memory?
I'm guessing you figured it out, given your comment, but the problem is that you only call CloseThreadpoolCleanupGroupMembers() in mtDestroy().
If you have a persistent thread pool, the memory will not be freed unless you call CloseThreadpoolCleanupGroupMembers() periodically. Your code and comments suggest that you do, though I can't confirm this without seeing the code responsible for creating and destroying your thread pool.
My recommendation for persistent thread pools is to just call CloseThreadpoolWork() in the callback functions. Microsoft's recommendations work better if you're creating and destroying thread pools, but CloseThreadpoolWork() is simpler and easier than periodically calling CloseThreadpoolCleanupGroupMembers() if you're maintaining one thread pool for the life of your application.
By the way, it's safe to do both as long as you tell CloseThreadpoolCleanupGroupMembers() to cancel any pending callbacks (pass fCancelPendingCallbacks as TRUE) to ensure CloseThreadpoolWork() is called on any cleaned up work items:
You can revoke the work object’s membership only by closing it, which can be done on an individual basis with the CloseThreadpoolWork function. The thread pool knows that the work object is a member of the cleanup group and revokes its membership before closing it. This ensures that the application doesn’t crash when the cleanup group later attempts to close all of its members. The inverse isn’t true: If you first instruct the cleanup group to close all of its members and then call CloseThreadpoolWork on the now invalid work object, your application will crash.
From Windows with C++ - Thread Pool Cancellation and Cleanup
I have two methods that I need to run; let's call them metA and metB.
When I started coding this app, I called both methods without using threads, but the app started freezing, so I decided to go with threads.
metA and metB are called by touch events, so they can occur at any time and in any order. They don't depend on each other.
My problem is the time it takes for either thread to start running. There's a lag between the time the thread is created with
[NSThread detachNewThreadSelector:@selector(.... bla bla
and the time the thread starts running.
I suppose this time is related to the amount of time required by iOS to create the thread itself. How can I speed this up? If I pre-create both threads, how do I make them just do their stuff when needed and never terminate? I mean, a kind of sleeping thread that is always alive, works when asked, and sleeps after that?
Thanks.
If you want to avoid the expensive startup time of creating new threads, create both threads at startup as you suggested. To have them only run when needed, you can have them wait on a condition variable. Since you're using the NSThread class for threading, I'd recommend using the NSCondition class for condition variables (an alternative would be to use the POSIX threading (pthread) condition variables, pthread_cond_t).
One thing you'll have to be careful of is if you get another touch event while the thread is still running. In that case, I'd recommend using a queue to keep track of work items, and then the touch event handler can just add the work item to the queue, and the worker thread can process them as long as the queue is not empty.
Here's one way to do this:
typedef struct WorkItem
{
    // information about the work item
    ...
    struct WorkItem *next;  // linked list of work items
} WorkItem;

WorkItem *workQueue = NULL;         // head of linked list of work items
WorkItem *workQueueTail = NULL;     // tail of linked list of work items
NSCondition *workCondition = NULL;  // condition variable for the queue

...

-(id) init
{
    if((self = [super init]))
    {
        // Make sure this gets initialized before the worker thread starts
        // running
        workCondition = [[NSCondition alloc] init];

        // Start the worker thread
        [NSThread detachNewThreadSelector:@selector(threadProc:)
                                 toTarget:self withObject:nil];
    }
    return self;
}

// Suppose this function gets called whenever we receive an appropriate touch
// event
-(void) onTouch
{
    // Construct a new work item.  Note that this must be allocated on the
    // heap (*not* the stack) so that it doesn't get destroyed before the
    // worker thread has a chance to work on it.
    WorkItem *workItem = (WorkItem *)malloc(sizeof(WorkItem));

    // fill out the relevant info about the work that needs to get done here
    ...

    workItem->next = NULL;

    // Lock the mutex & add the work item to the tail of the queue (we
    // maintain that the following invariant is always true:
    // (workQueueTail == NULL || workQueueTail->next == NULL)
    [workCondition lock];
    if(workQueueTail != NULL)
        workQueueTail->next = workItem;
    else
        workQueue = workItem;
    workQueueTail = workItem;
    [workCondition unlock];

    // Finally, signal the condition variable to wake up the worker thread
    [workCondition signal];
}

-(void) threadProc:(id)arg
{
    // Loop & wait for work to arrive.  Note that the condition variable must
    // be locked before it can be waited on.  You may also want to add
    // another variable that gets checked every iteration so this thread can
    // exit gracefully if need be.
    while(1)
    {
        [workCondition lock];
        while(workQueue == NULL)
        {
            [workCondition wait];

            // The work queue should have something in it, but there are rare
            // edge cases that can cause spurious signals.  So double-check
            // that it's not empty.
        }

        // Dequeue the work item & unlock the mutex so we don't block the
        // main thread more than we have to
        WorkItem *workItem = workQueue;
        workQueue = workQueue->next;
        if(workQueue == NULL)
            workQueueTail = NULL;
        [workCondition unlock];

        // Process the work item here
        ...

        free(workItem);  // don't leak memory
    }
}
If you can target iOS 4 and higher, consider using blocks with a Grand Central Dispatch async queue, which runs your work on background threads that the queue manages; or, for backwards compatibility, as mentioned, use NSOperations inside an NSOperationQueue to have bits of work performed for you in the background. You can specify exactly how many background threads you want to support with an NSOperationQueue if both operations have to run at the same time.
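For illustration only, here is roughly what the GCD suggestion looks like in Swift syntax; the queue label and the two stand-in functions are hypothetical:

import Dispatch

// A serial queue owned by whatever object handles the touches.
let workQueue = DispatchQueue(label: "com.example.touchwork")

func expensiveComputationA() -> Int {
    // placeholder for metA's heavy work
    return 0
}

func showResult(_ value: Int) {
    // placeholder for the UI update
}

func handleTouchA() {
    workQueue.async {
        // The heavy work runs on a background thread managed by GCD...
        let result = expensiveComputationA()
        DispatchQueue.main.async {
            // ...and the UI update hops back to the main queue.
            showResult(result)
        }
    }
}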
I'm currently polling my CFReadStream for new data with CFReadStreamHasBytesAvailable.
(First, some background: I'm doing my own threading and I don't want/need to mess with runloop stuff, so the client callback stuff doesn't really apply here).
My question is: what are accepted practices for polling?
Apple's documentation on the subject doesn't seem too helpful.
They recommend "doing something else while you wait". I'm currently just doing something along the lines of:
while(!done)
{
    if(CFReadStreamHasBytesAvailable(readStream))
    {
        CFReadStreamRead(...) ... bla bla bla
    } else {
        usleep(3600);   // I made this up
        sched_yield();  // also made this up
        continue;
    }
}
Are the usleep and the sched_yield "good enough"? Is there a "good" number to sleep for in usleep?
(Also: yes, because this is running in my own thread, I could just block on CFReadStreamRead - which would be great but I'm also trying to snag upload progress as well as download progress, so blocking there wouldn't help...).
Any insight would be much appreciated - thanks!
I think this question is a bit of a paradox because you're asking what the best practices are for doing something that's intrinsically not a best practice ;)
When there's a perfectly good method for blocking on network I/O, any compromise that causes you to poll instead is by definition not the best practice.
That said, if you do poll I think it might be more appropriate to "run the runloop until date" on your thread, instead of using whatever posix sleep or yield method you're imagining. Remember that each thread gets its own runloop, so essentially by running the runloop you're allowing Apple to employ its concept of best practices for blocking until a future date.
As for the time delay, I don't know if you'll get a definitive answer for what a good time is. It's a tradeoff between peppering the CPU with polling cycles vs. being stuck in the runloop for a little while when I/O is ready to be read from the network.
Ideally I think I would refocus your efforts on making this work using I/O blocking calls, but if you stick with the poll & idle technique, don't fret too much about the specific delay time. Just pick something that works and doesn't seem to impact performance negatively in either direction.
(Also, I'd like to clarify that I'm not too religious about the polling vs. blocking thing, I'm only stressing its value because you're obviously in search of an elevated solution).
When doing manual CFStream based connections on a separate thread (for custom things like bandwidth monitoring and throttling), I use a combination of CFReadStreamScheduleWithRunLoop, CFRunLoopRunInMode and CFReadStreamSetClient. Basically I run for 0.25 seconds and then check stream status. The client callback also gets notified on its own as well. This allows me to periodically check read status and do some custom behavior but rely mostly on (stream) events.
static const CFOptionFlags kMyNetworkEvents =
    kCFStreamEventOpenCompleted
    | kCFStreamEventHasBytesAvailable
    | kCFStreamEventEndEncountered
    | kCFStreamEventErrorOccurred;

static void MyStreamCallBack(CFReadStreamRef readStream, CFStreamEventType type, void *clientCallBackInfo) {
    [(id)clientCallBackInfo _handleNetworkEvent:type];
}

- (void)connect {
    ...

    CFStreamClientContext streamContext = {0, self, NULL, NULL, NULL};
    BOOL success = CFReadStreamSetClient(readStream_, kMyNetworkEvents, MyStreamCallBack, &streamContext);
    CFReadStreamScheduleWithRunLoop(readStream_, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    if (!CFReadStreamOpen(readStream_)) {
        // Notify error
    }

    while(!cancelled_ && !finished_) {
        SInt32 result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, NO);

        if (result == kCFRunLoopRunStopped || result == kCFRunLoopRunFinished) {
            break;
        }

        if (([NSDate timeIntervalSinceReferenceDate] - lastRead_) > MyConnectionTimeout) {
            // Call timed out
            break;
        }

        // Also handle stream status
        CFStreamStatus status = CFReadStreamGetStatus(readStream_);
        if (![self _handleStreamStatus:status]) break;
    }

    CFRunLoopStop(CFRunLoopGetCurrent());
    CFReadStreamSetClient(readStream_, 0, NULL, NULL);
    CFReadStreamUnscheduleFromRunLoop(readStream_, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
    CFReadStreamClose(readStream_);
}
- (void)_handleNetworkEvent:(CFStreamEventType)type {
    switch(type) {
        case kCFStreamEventOpenCompleted:
            // Notify connected
            break;
        case kCFStreamEventHasBytesAvailable:
            [self _handleBytes];
            break;
        case kCFStreamEventErrorOccurred:
            [self _handleError];
            break;
        case kCFStreamEventEndEncountered:
            [self _handleBytes];
            [self _handleEnd];
            break;
        default:
            Debug(@"Received unexpected CFStream event (%d)", type);
            break;
    }
}