memcached sometimes holding corrupt data

I have been using Memcached (AWS Elasticache) for a while now.
Just today I ran into a situation I hadn't experienced before. There is a regular call to the database for a list of countries, and I store the result in memcached. This time, however, the data wasn't stored correctly (I'm not sure why, as it has worked fine for months). After looking over the code and trying code-based fixes (assuming something was wrong with the site code), a bounce of the cache fixed the issue. Note: I had bounced memcached the day before, so maybe it didn't warm up correctly.
My question is: currently I check whether the memcached key exists, and if it does I use the data. Only if the key doesn't exist do I query the database and populate it. Do I also need to validate the data somehow so I can be sure it's not corrupt, or should this be treated as the infrequent issue it is and left at that?
Also, I believe the memcached key didn't have any data in it, so maybe just checking whether the key is empty is good enough...
Code below:
public $countryList = array();

// Countries, country code, zip enabled --- cached under 'generic::countryList::'.$_SESSION['language']
public function countryList() {
    $elasticache = new elasticache();
    $key = 'generic::countryList::'.$_SESSION['language'];
    if (!($this->countryList = $elasticache->memcached->get($key))) {
        // --- this is where the database query code is
        $elasticache->memcached->set($key, $this->countryList, 2592000);
    }
}
I guess confirming the data in the key is correct would require a database call and therefore would defeat the purpose of memcached....
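Perhaps a cheap sanity check on the value itself, combined with Memcached::getResultCode(), could tell a genuine miss apart from a corrupt or empty entry without any database call. A rough sketch of that idea (assuming the standard PHP Memcached extension behind the elasticache wrapper):

$key    = 'generic::countryList::'.$_SESSION['language'];
$cached = $elasticache->memcached->get($key);

// get() returns false on a miss, but a garbled entry can also come back
// false or empty, so check the result code and the shape of the data.
$looksValid = $elasticache->memcached->getResultCode() === Memcached::RES_SUCCESS
    && is_array($cached)
    && count($cached) > 0;

if ($looksValid) {
    $this->countryList = $cached;
} else {
    // Miss, empty, or wrong shape: rebuild from the database and re-cache.
    // (database query populates $this->countryList here)
    $elasticache->memcached->set($key, $this->countryList, 2592000);
}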
thoughts & ideas?

Related

How can I find this hashing algorithm?

So hi, I have a string that is saved as a hash in an Azure SQL DB, but I can't seem to find out which algorithm it was hashed with. We want to migrate our users to a Firestore DB, but we apparently need the algorithm for the first login.
Hashed String: VxfCosOIw7PDrsOqw78YwqtCwoxUK8KCwpVkw5LCn0hcf8OgZsKEwpTDqSvDmMOMwql+
Original String: Drag2311
Salt: +2zPSiLUCzdASr3dS1fRrH6vxEAU6/V4kr/73uVmRoo=
I've seen on other posts that people asked for the original string, so I just posted all the relevant information that I have, and hope that someone can help me.
EDIT: I have checked the code and couldn't find anything related to hashing, and am relatively sure that it is server-side encryption. It's a CMK and a CEK, but I still have a hard time finding a way to look up the configured algorithm.
I have found out that I need to know the keys of the column in MS SQL column encryption, so I either have to contact the people who set it up or Azure Support.
So rather than doing it like that, I'm just going to do it rather inefficiently, by migrating the data of those accounts but not the accounts themselves.
In the Firebase Functions I will define that, on new account creation, it should check whether the email exists in a JSON object of emails and user IDs, and if it does, it will hook the old user ID's data to that account rather than creating a new set of data.
Thanks for your time, and I realize I should have done it this way before.
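The plan above runs in Firebase Functions (JavaScript), but the lookup logic itself is tiny; purely to illustrate it, a sketch in PHP (the file name and map structure are assumptions):

// Hypothetical export: a JSON map of email => old user ID,
// e.g. {"alice@example.com": "user_123"}
$map = json_decode(file_get_contents('legacy_users.json'), true);

// On new-account creation: if the email is known, reuse the old user ID
// so the new account gets hooked up to the migrated data.
function legacyUserIdFor($email, array $map) {
    $email = strtolower(trim($email));
    return isset($map[$email]) ? $map[$email] : null;
}

$oldId = legacyUserIdFor('alice@example.com', $map);
if ($oldId !== null) {
    // Attach the migrated records for $oldId to the new account
    // instead of creating a fresh set of data.
}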

Firebase REST API: Delete sometimes fails

I'm currently building a web frontend for a Matlab program. I'm using webread/webwrite to interface with the Firebase realtime database (though I'll be shifting to urlread2 soon for compatibility reasons). The Matlab end has to delete nodes from the database on a regular basis. I do this by using webwrite to send a POST request with "X-HTTP-Method-Override: DELETE" in the header. This works, but after a few deletes it stops working until data is either added to or removed from the database. It seems completely random; my teammate and I have been trying to find a pattern for a few days and we've found nothing.
Here is the relevant Matlab code:
% Build the URL of the node to delete
modurl = strcat(url, modkey, '.json');
modurlstr = char(modurl);
% Tunnel the DELETE through a POST using the override header
webop = weboptions('KeyName', 'X-HTTP-Method-Override', 'KeyValue', 'DELETE');
webwrite(modurlstr, webop);
Where url is our database url and modkey is the key of the node we're trying to delete. There's no authentication because the database is set to public (Security is not an issue for us).
The database is organized pretty simply. The root node just has a bunch of children. We only delete a whole child (i.e. we don't ever try to delete the individual components of a child).
Are we doing something wrong?
Thanks in advance!
We found out some of the keys had hyphens in them, which were being converted to their ASCII escape sequences in the URL. The reason it seemed random was that the delete was only failing on the nodes whose keys contained a hyphen. When we switched them back, everything worked fine.
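For anyone hitting the same wall from another client: the override trick is plain HTTP, and encoding the key explicitly sidesteps the escaping surprise. A minimal PHP/cURL sketch (the database URL and key are placeholders, and this assumes a public database with no auth token):

// Hypothetical values: substitute your own database URL and node key.
$databaseUrl = 'https://example-project.firebaseio.com/';
$modkey      = 'node-key-with-hyphens';

// rawurlencode() leaves unreserved characters such as '-' untouched
// while escaping anything genuinely unsafe in a URL path segment.
$target = $databaseUrl . rawurlencode($modkey) . '.json';

$ch = curl_init($target);
curl_setopt($ch, CURLOPT_POST, true);   // send a POST...
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'X-HTTP-Method-Override: DELETE',   // ...that Firebase treats as a DELETE
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);             // a successful delete returns "null"
curl_close($ch);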

Simultaneous identical requests = double in database

My controller handles a POST request, inserting an object into PostgreSQL. It works like this:
Check if the object does not exist in DB
Save the object in DB
But sometimes two identical requests arrive almost simultaneously, and I guess the second one's existence check runs before the first insert commits, so both objects end up in the DB.
I am on Heroku and must take scalability into account: the two requests can land on different dynos (a static variable won't work).
I did not find anything about database locking in Play.
Any ideas?
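The usual fix is to let PostgreSQL enforce uniqueness atomically: add a unique constraint and insert with ON CONFLICT DO NOTHING (PostgreSQL 9.5+), which stays race-free no matter how many dynos handle requests. The question is about Play, but the idea is framework-agnostic; a minimal PHP/PDO sketch with made-up table and column names:

// Hypothetical schema: the unique index is what makes this race-free.
//   CREATE UNIQUE INDEX things_external_id_idx ON things (external_id);
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'user', 'secret');

// A duplicate insert becomes a silent no-op instead of an error.
$stmt = $pdo->prepare(
    'INSERT INTO things (external_id, payload)
     VALUES (:external_id, :payload)
     ON CONFLICT (external_id) DO NOTHING'
);
$stmt->execute(array(
    ':external_id' => 'abc-123',
    ':payload'     => '{"name": "example"}',
));

// rowCount() is 1 if this request won the race, 0 if a duplicate beat it.
if ($stmt->rowCount() === 0) {
    // Another request already inserted this object; treat as success or report it.
}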

How do I prevent duplicate values in my read database with CQRS

Say that I have a User table in my read database (SQL Server). In a regular read/write database I can put an index on the table to make sure that two users aren't added with the same email address.
So if I try to add a user with an email address that already exists for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me; I will return "OK" to the UI and the user will think he has been added, when in fact he will never be added to the read database.
I can do a search in the read database to check whether a user with that email address already exists, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, both checks will see that there isn't any user with that email address, I send back that it's okay, put both on my queue, and later one will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will hurt performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for CQRS Set Based Validation will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
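A pattern those links converge on is a small, strongly consistent reservation store on the write side: the command handler synchronously reserves the email address behind a unique constraint before accepting the command, so the asynchronous projection to the read database can never produce a duplicate. A rough PHP sketch of the idea (table and class names invented for illustration; shown with PostgreSQL-style ON CONFLICT, while on SQL Server the same effect needs a unique index plus catching the duplicate-key error):

// Hypothetical reservation table, enforced by the database:
//   CREATE TABLE email_reservations (email varchar(255) PRIMARY KEY);
class EmailReservations {
    private $pdo;

    public function __construct(PDO $pdo) {
        $this->pdo = $pdo;
    }

    // Returns true if the email was free and is now reserved.
    public function tryReserve($email) {
        $stmt = $this->pdo->prepare(
            'INSERT INTO email_reservations (email)
             VALUES (:email)
             ON CONFLICT (email) DO NOTHING'
        );
        $stmt->execute(array(':email' => strtolower($email)));
        return $stmt->rowCount() === 1;  // 0 means someone reserved it first
    }
}

// In the command handler, before CreateUser is accepted:
//   if (!$reservations->tryReserve($command->email)) { reject synchronously; }
// Only then dispatch the event and let the read model catch up asynchronously.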

Can primary keys be re-used once deleted?

0x80040237 Cannot insert duplicate key.
I'm trying to write an import routine for MSCRM4.0 through the CrmService.
This has been successful up until this point. Initially I was just letting CRM generate the primary keys of the records, but my client wanted the ability to set the key of our custom entity to predefined values. Potentially this enables us to know which data was created by our installer and which data was created post-install.
I tested to ensure that the Guids can be set when calling the CrmService.Update() method, and the results indicated that records were created with our desired values. I ran my import and everything seemed successful. While modifying the validation code for my import files, I deleted the data (through the CRM browser interface) and tried to re-import. Unfortunately it now throws a duplicate key error.
Why is this error being thrown? Does the CRM interface delete the record, or does it still exist but hidden from the user's eyes? Is there a way to ensure that a deleted record is permanently deleted and the Guid becomes free? In a live environment these Guids would never have existed, but during development I need these imports to be successful.
By the way, considering I'm having this issue, does this imply that statically setting Guids is not a recommended practice?
As far as I can tell, entities are soft-deleted, so it would not be possible to reuse that Guid unless you (or the deletion service) deleted the entity out of the database.
For example, in the LeadBase table you will find a field called DeletionStateCode; a value of 0 implies the record has not been deleted.
A value of 2 marks the record for deletion. There's a deletion service that runs every 2(?) hours to physically delete those records from the table.
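Purely to illustrate the soft delete described above, a diagnostic peek at the base table (direct SQL against the CRM database is unsupported, so treat this as read-only inspection; the server and database names are placeholders):

// Assumes the sqlsrv extension and read access to the CRM database.
$conn = sqlsrv_connect('crmsqlserver', array('Database' => 'Contoso_MSCRM'));

// DeletionStateCode: 0 = live record, 2 = marked for the deletion service.
$stmt = sqlsrv_query($conn,
    'SELECT DeletionStateCode, COUNT(*) AS records
     FROM LeadBase
     GROUP BY DeletionStateCode'
);

while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
    echo $row['DeletionStateCode'], ' => ', $row['records'], PHP_EOL;
}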
I think Zahir is right; run the deletion service and try again. There's some info here: http://blogs.msdn.com/crm/archive/2006/10/24/purging-old-instances-of-workflow-in-microsoft-crm.aspx
Zahir is correct.
After you import and delete the records, you can kick off the deletion service at a time you choose with this tool. That will make it easier to test imports and reimports.