I have a CONNECT object that makes HTTP calls using:
NSURLConnection* connection = [NSURLConnection connectionWithRequest:urlRequest delegate:self];
In the - (void)connectionDidFinishLoading:(NSURLConnection*)connection method, I write things to a file.
I may have multiple HTTP calls sent from multiple instances of CONNECT objects, so I suppose there is a risk that those CONNECT objects write into the file at the same time when the connections end.
Is this correct? If yes, how can I prevent this?
If you have only one thread, then there won't be any problem, as the delegates will be executed on the same thread.
In a single-threaded environment, delegates are executed one after another. There won't be a case where your DelegateMethod1 is accessing the file, gets paused because of DelegateMethod2, and then DelegateMethod2 accesses the file. That happens only with multiple threads, not with multiple delegates on one thread.
If it is a multi-threaded environment, then you have to synchronize your file-access code.
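If your delegate callbacks can arrive on different threads, a minimal sketch of that synchronization is to funnel every write through one serial GCD queue. The queue label, output path, and receivedData property below are illustrative assumptions, not from the question:
static dispatch_queue_t FileWriteQueue(void) {
    // One shared serial queue; blocks on it run one at a time, so writes
    // from different CONNECT instances can never interleave.
    static dispatch_queue_t queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        queue = dispatch_queue_create("com.example.filewrite", DISPATCH_QUEUE_SERIAL);
    });
    return queue;
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    NSData *data = [self.receivedData copy];   // assumed property holding the downloaded bytes
    dispatch_async(FileWriteQueue(), ^{
        // Illustrative path; the file is assumed to exist already
        // (create it up front, or use NSOutputStream instead).
        NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:@"/tmp/output.dat"];
        [handle seekToEndOfFile];
        [handle writeData:data];
        [handle closeFile];
    });
}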
Two things:
1) If you are running all the NSURLConnections on a single thread (most likely the main thread), then you won't have any issue with writing to a file as long as you're doing the entire write (i.e. open/create, write, close) within the connectionDidFinishLoading method. This is because a single thread can only be doing one thing at a time.
2) On the design side, do you really want multiple connections to be writing to the same file? If you're creating lots of NSURLConnections, then generally you would want to be able to record them, give them unique filenames and process them separately. This then allows you to know how many connections you currently have in flight, cancel them, and so forth.
If you're just writing some short-lived data to a file while you're doing the processing, then perhaps use unique names for the filenames. Basically, I'm finding it hard to think of a good reason to have lots of concurrent downloads writing to the same file on completion - but perhaps you have one!
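If you go the separate-files route, here is a rough sketch, assuming each CONNECT instance stores its own output path; the outputPath and receivedData properties and the use of NSUUID are illustrative assumptions:
- (void)startRequest:(NSURLRequest *)request {
    // Give every connection its own file so completions never contend.
    NSString *name = [[NSUUID UUID] UUIDString];
    self.outputPath = [NSTemporaryDirectory() stringByAppendingPathComponent:name];
    [NSURLConnection connectionWithRequest:request delegate:self];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    // Each instance writes only to its own file.
    [self.receivedData writeToFile:self.outputPath atomically:YES];
}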
Most (or all?) POSIX named objects have unlink functions, e.g.:
shm_unlink
mq_unlink
They all have in common that they remove the name of the object from the system, causing subsequent opens to fail or to create a new object.
Why is it designed like this? I know this is connected to the "everything is a file" philosophy, but why not delete the file on close? Would you do the same if you were designing a new interface?
I think this has a big drawback. Say we have a server process and several client processes. If any process unlinks the object (by mistake), new clients will not find the server. (This can be prevented by permissions on the corresponding file, but still...)
Would it not be better if it had reference counting and the name were removed automatically when the last object is closed? Why would you want to keep it around?
Because they are low-level tools intended for use when performance matters. Deleting the object whenever it is unused, only to create it again on the next use, carries a (slight) performance penalty compared with keeping it alive.
I once used a named semaphore to synchronize access to a spool with various producers and consumers. An init module, run as part of the boot process, created the named semaphore, and all other processes knew that the well-known semaphore should already exist.
If you want a more programmer-friendly way that creates the object on demand and destroys it when it is no longer used, you can build a higher-level library and encapsulate the creation/unlink operations in it. But if the system call did it for you, it would not be possible to build a user-level library that avoids it.
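A minimal sketch of that create-at-boot / explicit-unlink pattern, using the POSIX semaphore calls directly; the semaphore name "/spool_sem" and the function names are illustrative:
#include <semaphore.h>
#include <fcntl.h>
#include <stdio.h>

/* Run once at boot: create the well-known semaphore. */
int init_spool_sem(void) {
    sem_t *sem = sem_open("/spool_sem", O_CREAT | O_EXCL, 0644, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); return -1; }
    sem_close(sem);            /* closing does NOT remove the name */
    return 0;
}

/* Explicit teardown: only now does the name disappear. */
void teardown_spool_sem(void) {
    sem_unlink("/spool_sem");
}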
Would it not be better if it had reference counting and the name were removed automatically when the last object is closed?
No.
Because unlink() can fail, and because automatically removing a resource that can be shared between processes the moment every process has merely closed it simply doesn't fit the paradigm of a shared resource.
You don't demolish a roller coaster just because there's no one waiting in line to ride it again at that moment in time.
From what I understand, the main q thread monitors its socket descriptors for requests and responds to them.
I want to run a while loop in my main thread that goes on for an indefinite period of time. This would mean that I would not be able to hopen the process's port and perform queries.
Is there any way to manually check for requests within the while loop?
Thanks.
Are you sure you need to use a while loop? Is there any chance you could, for instance, instead use the timer functionality of KDB+?
This would let you run a piece of code periodically instead of looping over it continually. Depending on your use case, this may be more appropriate: you can still poll something repeatedly, but without occupying the main thread constantly.
KDB+ is by default single-threaded, which makes it tricky to do what you want to do. There might be something you can do with slave threads.
If you're interested in using timer functionality but the built-in timer is too limited for your needs, there is a more advanced set of timer functionality available free from AquaQ Analytics (disclaimer: I work for AquaQ). It is distributed as part of the TorQ KDB framework; the specific script you'd be interested in is timer.q, which is documented here. You may be able to use this code without the full TorQ if you like, though you may need some of the other "common" code from TorQ to provide functions used within timer.q.
Since Redis is single-threaded, making a call like the one below will block until it returns:
redis.hgetall("some_key")
Now say I were to wrap all my calls in Futures, for example if I had to make 100K of these calls all at once:
Future.sequence(redis_calls)
Would doing something like this help in terms of performance or failure tracking, or would it potentially cause a problem if the calls get backed up?
You'll find that the slowest part is getting commands to Redis and reading the results back again, rather than waiting for Redis to carry out the requests.
To avoid this, you can use pipelines to send a bunch of commands at once and receive the results back together.
I am running a background thread in my application with dispatch_async. Sometimes my main thread and the background thread access the database at the same time, which gives me a database error. I tried to solve this by checking sqlite3_threadsafe(), which always returns 2, i.e. I cannot use the same database connection on two threads, and I want it to return 1. Can anyone help me with this?
I think you're pursuing the wrong approach. SQLite isn't reliably threadsafe no matter how you compile it — see the FAQ. Even if SQLite is compiled for thread safety, the same database may not be used from multiple threads if any transactions are pending or any statements remain unfinalised.
The recommendations elsewhere in this thread to use Grand Central Dispatch to funnel all SQLite accesses into a serial dispatch queue are the right way to proceed in my opinion, though I'd be more likely to recommend that you create your own queue rather than use the main queue, for the simple reason that you can reliably dispatch_sync when you want to wait on a result.
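A minimal sketch of that approach, with all database work funnelled through one private serial queue; the queue label and helper names are illustrative assumptions, not an existing API:
#import <Foundation/Foundation.h>

static dispatch_queue_t DatabaseQueue(void) {
    // Single serial queue that owns all SQLite access.
    static dispatch_queue_t queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        queue = dispatch_queue_create("com.example.database", DISPATCH_QUEUE_SERIAL);
    });
    return queue;
}

static void SaveRecordAsync(NSData *record) {
    // Writes can be fire-and-forget.
    dispatch_async(DatabaseQueue(), ^{
        // ... the INSERT for `record` (e.g. via sqlite3_exec) would run here ...
    });
}

static NSInteger CountRecords(void) {
    // Reads use dispatch_sync so the caller can wait on the result.
    __block NSInteger count = 0;
    dispatch_sync(DatabaseQueue(), ^{
        // ... run the SELECT and assign the result to count here ...
    });
    return count;
}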
You can dispatch a block to the main queue and put all of your access statements inside it, so that you are always on the main thread when you access the SQLite store:
dispatch_async(dispatch_get_main_queue(), ^ {
//all database access goes here
});
I have a class that contains an NSDictionary, and periodically a thread writes data into this NSDictionary. At other times, another view controller reads data out of the class's NSDictionary.
What's the best Objective-C way to make the data in this class thread-safe, so that when you read you get the correct version, i.e. the last fully written version, and not one that may currently be getting written to?
As Carl mentioned, @synchronized is one option.
If you are targeting iOS 4.0+, another one is using a Grand Central Dispatch queue to regulate access to a shared data structure from multiple threads/queues. The WWDC 2010 Session 211 video has a good explanation of this technique.
In a nutshell: you create a custom GCD queue (dispatch_queue_create()) whose single responsibility is to regulate access to the shared data structure. All code that accesses the shared structure must then do so from inside this queue. Because a serial queue executes only one block at a time, no two threads can access the data structure at the same time.
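A minimal sketch of that pattern, assuming the backing store is an NSMutableDictionary; the class, queue label, and method names are illustrative:
#import <Foundation/Foundation.h>

@interface SharedStore : NSObject
- (void)setObject:(id)object forKey:(id<NSCopying>)key;
- (id)objectForKey:(id)key;
@end

@implementation SharedStore {
    NSMutableDictionary *_storage;   // only ever touched on _accessQueue
    dispatch_queue_t _accessQueue;   // serial queue that owns _storage
}

- (instancetype)init {
    if ((self = [super init])) {
        _storage = [NSMutableDictionary dictionary];
        _accessQueue = dispatch_queue_create("com.example.sharedstore", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)setObject:(id)object forKey:(id<NSCopying>)key {
    // Writers just enqueue; the serial queue orders them.
    dispatch_async(_accessQueue, ^{
        [self->_storage setObject:object forKey:key];
    });
}

- (id)objectForKey:(id)key {
    // Readers wait for their block to run, so they always see the last
    // fully written value, never a half-finished write.
    __block id result = nil;
    dispatch_sync(_accessQueue, ^{
        result = [self->_storage objectForKey:key];
    });
    return result;
}
@end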
You are looking for @synchronized, I think.
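For illustration, a rough sketch of the same accessors guarded with @synchronized instead of a queue; the _sharedData ivar (an NSMutableDictionary) is an assumption:
- (void)setObject:(id)object forKey:(id<NSCopying>)key {
    @synchronized (self) {
        [_sharedData setObject:object forKey:key];
    }
}

- (id)objectForKey:(id)key {
    @synchronized (self) {
        return [_sharedData objectForKey:key];
    }
}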