In service fabric do you need to call CommitAsync when just reading a value? - azure-service-fabric

When using a reliable dictionary in Service Fabric, you have to access everything within a transaction. However, when reading you are not changing anything, so it doesn't seem to make any sense to call CommitAsync.
Are there any downsides of not calling CommitAsync?

No, there are no downsides. Just try to release your transaction as quickly as possible.
By the way, I have noticed that sometimes values returned from a reliable collection can somehow be modified out from under you. In my case it was a list, so I was unable to enumerate it properly, and I'm fairly sure none of my own code could have modified that value. So my suggestion is: if a value is a reference type, make a copy, release the transaction, and work with the copy, even if you are not going to modify the value.
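The copy-before-release advice can be sketched in a language-agnostic way (Python here for brevity; the reliable-collection API itself is C#, and the names below are purely illustrative):

```python
import copy

# Hypothetical stand-in for a value read inside a transaction: a shared,
# mutable list that other code may mutate after the transaction is released.
shared_value = [1, 2, 3]

# Take a deep copy while the transaction is still open...
snapshot = copy.deepcopy(shared_value)

# ...release the transaction, after which the original may change...
shared_value.append(4)

# ...and enumerate the snapshot safely.
assert snapshot == [1, 2, 3]
```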

Related

Mongo DB Collection Versioning

Are there any best practices or ways we can use to version a collection or objects in a collection in Mongo DB?
We need to version the collection because objects in it may gain new attributes going forward, while already-stored objects (i.e. old objects) will not have these new attributes or values for them. So on retrieval, we need to make sure the code doesn't break when de-serializing different versions of the same object in the collection.
I can think of adding a version attribute to the objects explicitly, but are there any better built-in alternatives in Mongo DB for handling this versioning of objects and/or collections?
Thanks,
Bathiya
I guess the best approach would be to update all objects in a batch process when you deploy the new software on the server, since otherwise you'll never know when a given object will be updated and you'll need to keep the old-version handling code around forever.
Another thing I've been doing, and it has worked so far, is to have a policy of only allowing new properties to be added to objects. This way, in the worst case, the DB won't have all the data, but that is fine with all the JSON serializers I know. It does mean you aren't allowed to delete or rename properties, or change their type (from scalar value to object or array; from object to scalar or array; and so on).
Since I usually want to store more information rather than less, this seems like a good solution without any real limitations for me.
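A minimal sketch of that additive-only policy in practice (Python; `load_user` and the `nickname` attribute are hypothetical, standing in for a field added in a later software version):

```python
# Hypothetical deserializer: `nickname` was added in a newer software
# version, so documents written by the old version won't carry it.
def load_user(doc):
    return {
        "name": doc["name"],                    # present in every version
        "nickname": doc.get("nickname"),        # new, optional attribute
    }

old_doc = {"name": "Bathiya"}                   # written by the old version
new_doc = {"name": "Bathiya", "nickname": "B"}  # written by the new version

assert load_user(old_doc) == {"name": "Bathiya", "nickname": None}
assert load_user(new_doc) == {"name": "Bathiya", "nickname": "B"}
```

Because old documents simply lack the new key, reading it with a default keeps deserialization from breaking.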
If you need to change a type because a scalar value isn't enough, you can still write some code around it that transforms every object holding the old value into the new shape. If a bulk update from your side can perform the change, I'd still do that, but sometimes it needs user input.
For instance, suppose you used to save passwords only as an MD5 hash, a scalar value. Then someone tells you they should be stored as SHA-512 together with a salt; now you need an object field for the password. You could call it password_sha512 and store the salt and the hashed password in it.
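That migration can be sketched like this (Python; `upgrade_password_field` and the field names are hypothetical, and since the change needs the plaintext, it runs at login, which is the "user input" case mentioned above):

```python
import hashlib, os

def upgrade_password_field(doc, plaintext):
    """Upgrade a legacy scalar MD5 field to a salted-SHA-512 object field.

    Runs on successful login, the one moment the plaintext is available.
    """
    if "password_sha512" in doc:
        return doc  # already migrated
    # Verify against the legacy scalar field first.
    assert hashlib.md5(plaintext.encode()).hexdigest() == doc["password_md5"]
    salt = os.urandom(16).hex()
    doc["password_sha512"] = {
        "salt": salt,
        "hash": hashlib.sha512((salt + plaintext).encode()).hexdigest(),
    }
    del doc["password_md5"]  # drop the old scalar representation
    return doc

doc = {"password_md5": hashlib.md5(b"secret").hexdigest()}
doc = upgrade_password_field(doc, "secret")
assert "password_md5" not in doc and "salt" in doc["password_sha512"]
```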

Why must ContextSwitch be atomic and how to achieve this in practice?

Why must ContextSwitch be atomic and how to achieve this in practice?
I think it must be atomic because if the switch doesn't save the state of the previous process completely, it can cause problems for future context switches: inconsistent and wrong data.
And to achieve this, can we use locks?
Does this make sense, or am I oversimplifying things?
Probably the same assignment as you.
Since the save operation requires several steps to store the values of the CPU registers, the process state, memory-management information, etc., it is in fact necessary to make the context switch atomic to ensure consistency.
To do this, you can make the save method (and probably the load method) "synchronized", so that their respective steps execute as one block.
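That idea can be sketched with an explicit lock standing in for "synchronized" (Python; the register names and the `context_switch` function are toy stand-ins, not a real scheduler):

```python
import threading

# Toy "CPU context": several fields that must be saved/loaded together.
state = {"pc": 0, "sp": 0}
state_lock = threading.Lock()

def context_switch(new_pc, new_sp):
    # The lock makes the multi-step save and load execute as one atomic
    # block, playing the role of a `synchronized` method: no other thread
    # can observe a half-saved context.
    with state_lock:
        old = (state["pc"], state["sp"])  # save the outgoing context
        state["pc"] = new_pc              # load the incoming context, step 1
        state["sp"] = new_sp              # load the incoming context, step 2
        return old

assert context_switch(10, 20) == (0, 0)
assert state == {"pc": 10, "sp": 20}
```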

SimpleDB: Guaranteed to see all item attributes if we see the item? (non-consistent read)

I've just discovered an assumption in my use of SimpleDB. I suspect it's safe but would like other opinions since the docs don't seem to cover it.
So say Process 1 stores an item with x attributes. When Process 2 tries to access said item (without consistent read) & finds it, is it guaranteed to have all the attributes stored by Process 1?
I'm excluding the possibility that another process could have changed the data.
I also know that Process 2 has no guarantee of seeing the item unless consistent read is used, I'm just talking about the point when it does eventually see it.
I guess the question is, once I can get an item & am not changing it anywhere else can I assume it has an ad-hoc fixed schema and access all my expected attributes without checking they actually exist?
I don't want to be in a situation where I need to keep requesting items until they have all the attributes I need to use them.
Thanks.
Although Amazon makes no such guarantee in the documentation, the current implementation of their eventual consistency ensures that you'll see either all the properties stored by Process 1 or none of them.
See this thread over at the AWS forums and more specifically this answer by an Amazon employee confirming the behavior (emphasis mine).
I don't think we make that guarantee in the documentation, but the current implementation treats each Put request as a bundle. It won't split the request up and apply the operations piecemeal. You'll get either step-1 responses or step-2 responses until eventual consistency shakes out and leaves you with step-2 responses.
While this is undocumented behavior, I suspect quite a few SimpleDB users are relying on it by now, and as such Amazon isn't likely to change it anytime soon, but that's just my guess.
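The quoted "bundle" behavior can be modeled with a toy store (Python; `ToySimpleDB` is purely illustrative, not an AWS API):

```python
import random

class ToySimpleDB:
    """Each Put is kept as a complete bundle; a non-consistent read may lag,
    but it always returns one whole bundle, never a mix of attributes."""

    def __init__(self):
        self.versions = []

    def put(self, attrs):
        self.versions.append(dict(attrs))  # applied atomically as a bundle

    def eventually_consistent_get(self):
        return random.choice(self.versions)  # possibly stale, never torn

db = ToySimpleDB()
db.put({"step": 1, "a": "old"})
db.put({"step": 2, "a": "new", "b": "added"})

item = db.eventually_consistent_get()
# Every read is internally consistent: step-1 or step-2, nothing in between.
assert item in ({"step": 1, "a": "old"}, {"step": 2, "a": "new", "b": "added"})
```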

What is the most practical Solution to Data Management using SQLite on the iPhone?

I'm developing an iPhone application and am new to Objective-C as well as SQLite. That being said, I have been struggling w/ designing a practical data management solution that is worthy of existing. Any help would be greatly appreciated.
Here's the deal:
The majority of the data my application interacts with is stored in five tables in the local SQLite database. Each table has a corresponding Class which handles initialization, hydration, dehydration, deletion, etc. for each object/row in the corresponding table. Whenever the application loads, it populates five NSMutableArrays (one for each type of object). In addition to a Primary Key, each object instance always has an ID attribute available, regardless of hydration state. In most cases it is a UUID which I can then easily reference.
Until a few days ago, I would simply access the objects via these arrays by tracking down their UUIDs. I would then proceed to hydrate/dehydrate them as needed. However, some of my objects also maintain their own arrays which reference other objects' UUIDs. In the event that I must track down one of these "child" objects via its UUID, it becomes a bit more difficult.
In order to avoid having to enumerate through one of the previously mentioned arrays to find a "parent" object's UUID, and then proceed to find the "child's" UUID, I added a DataController w/ a singleton instance to simplify the process.
I had hoped that the DataController could provide a single access point to the local database and make things easier, but I'm not so certain that is the case. Basically, what I did is create multiple NSMutableDictionaries. Whenever the DataController is initialized, it enumerates through each of the previously mentioned NSMutableArrays maintained in the Application Delegate and creates a key/value pair in the corresponding dictionary, using the given object as the value and its UUID as the key.
The DataController then exposes procedures that allow a client to call in w/ a desired object's UUID to retrieve a reference to the actual object. Whenever there is a request for an object, the DataController automatically hydrates the object in question and then returns it. I did this because I wanted to take control of hydration out of the client's hands, to prevent an object that is referenced in multiple places from being dehydrated.
I realize that in most cases I could just make a mutable copy of the object and then, if necessary, replace the original object down the road, but I wanted to avoid that scenario if at all possible. I therefore added an additional dictionary to monitor which objects are hydrated at any given time, using the object's UUID as the key and a fluctuating count representing the number of hydrations w/out an offsetting dehydration. My goal w/ this approach was to have the DataController automatically dehydrate any object once its "hydration retainment count" hit zero, but this could easily lead to significant memory leaks, as it currently relies on the caller to later call a procedure that decreases the object's hydration retainment count. There are obviously many cases where this is not obvious or not easily accomplished, and if even one calling object fails to do so properly, I end up in the exact opposite scenario I was trying to prevent in the first place. Ironic, huh?
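The bookkeeping described above can be sketched as follows (Python rather than Objective-C; this `DataController` is a toy model of the scheme, including its reliance on every caller remembering to release):

```python
class DataController:
    """Toy model of the per-UUID hydration retainment count."""

    def __init__(self):
        self.objects = {}          # uuid -> object
        self.hydration_count = {}  # uuid -> hydrations w/out a dehydration

    def retrieve(self, uuid):
        self.hydration_count[uuid] = self.hydration_count.get(uuid, 0) + 1
        obj = self.objects[uuid]
        obj["hydrated"] = True     # stand-in for real hydration
        return obj

    def release(self, uuid):
        # The pitfall: if any caller forgets this call, the count never
        # reaches zero and the object is never dehydrated.
        self.hydration_count[uuid] -= 1
        if self.hydration_count[uuid] == 0:
            self.objects[uuid]["hydrated"] = False  # dehydrate

ctrl = DataController()
ctrl.objects["abc"] = {"hydrated": False}
ctrl.retrieve("abc"); ctrl.retrieve("abc")
ctrl.release("abc")
assert ctrl.objects["abc"]["hydrated"] is True   # still referenced once
ctrl.release("abc")
assert ctrl.objects["abc"]["hydrated"] is False  # count hit zero
```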
Anyway, I'm thinking that if I proceed w/ this approach that it will just end badly. I'm tempted to go back to the original plan but doing so makes me want to cringe and I'm sure there is a more elegant solution floating around out there. As I said before, any advice would be greatly appreciated. Thanks in advance.
I'd also be aware (as I'm sure you are) that CoreData is just around the corner, and make sure you make the right choice for the future.
Have you considered implementing this via the NSCoder interface? Not sure that it wouldn't be more trouble than it's worth, but if what you want is to extract all the data out into an in-memory object graph, and save it back later, that might be appropriate. If you're actually using SQL queries to limit the amount of in-memory data, then obviously, this wouldn't be the way to do it.
I decided to go w/ Core Data after all.

How can I clear Class::DBI's internal cache?

I'm currently working on a large implementation of Class::DBI for an existing database structure, and am running into a problem with clearing the cache from Class::DBI. This is a mod_perl implementation, so an instance of a class can be quite old between times that it is accessed.
From the man pages I found two options:
Music::DBI->clear_object_index();
And:
Music::Artist->purge_object_index_every(2000);
Now, when I add clear_object_index() to the DESTROY method, it seems to run, but doesn't actually empty the cache. I am able to manually change the database, re-run the request, and it is still the old version.
purge_object_index_every says that it clears the index every n requests. Setting this to "1" or "0", seems to clear the index... sometimes. I'd expect one of those two to work, but for some reason it doesn't do it every time. More like 1 in 5 times.
Any suggestions for clearing this out?
The "common problems" page on the Class::DBI wiki has a section on this subject. The simplest solution is to disable the live object index entirely using:
$Class::DBI::Weaken_Is_Available = 0;
$obj->dbi_commit(); may be what you are looking for if you have uncompleted transactions. However, this is unlikely to be the case, as Class::DBI tends to complete any lingering transactions automatically on destruction.
When you do this:
Music::Artist->purge_object_index_every(2000);
You are telling it to examine the object cache every 2000 object loads and remove any dead references to conserve memory use. I don't think that is what you want at all.
Furthermore,
Music::DBI->clear_object_index();
Removes all objects from the live object index. I don't know how this would help at all; it's not flushing them to disk, really.
It sounds like what you are trying to do should work just fine the way you have it, but there may be a problem with your SQL or elsewhere that is preventing the INSERT or UPDATE from working. Are you doing error checking for each database query as the perldoc suggests? Perhaps you can begin there or in your database error logs, watching the queries to see why they aren't being completed or if they ever arrive.
Hope this helps!
I've used remove_from_object_index successfully in the past, so that when a page is called that modifies the database, it always explicitly resets that object in the cache as part of the confirmation page.
I should note that Class::DBI is deprecated and you should port your code to DBIx::Class instead.