memcached invalidation based on values

Is it possible to invalidate memcached entries based on values?
In my app, I am assigning users to different groups and I store this mapping in memcached.
key = userID
value = groupID
So multiple userIDs map to one groupID.
When I delete a particular group, I want to remove all entries in the memcached store whose value is the groupID of the deleted group.
So essentially I want to delete the entries having a particular value. How do I do it?

You cannot get keys by value in memcached. What you could do, though, is have a key called groupID whose value is a comma-separated list of userIDs. If you want to see who is part of a group, you get the groupID key and parse out the userIDs. Then, if you have a key for each userID, you can delete them using the parsed userIDs. You could also use memcached's append command to append a userID to the groupID key when a new user registers with your system.
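A rough sketch of that approach in Python with pymemcache (the client library, server address, and comma-separated layout here are just illustrative assumptions, not something memcached prescribes):

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def add_user_to_group(user_id: str, group_id: str) -> None:
        # Forward mapping: userID -> groupID
        client.set(user_id, group_id.encode())
        # Reverse mapping: groupID -> comma-separated userIDs.
        # Not safe against concurrent writers; a real version would rely
        # on add/append return codes or CAS instead of get-then-set.
        members = client.get(group_id)
        if members is None:
            client.set(group_id, user_id.encode())
        else:
            client.append(group_id, ("," + user_id).encode())

    def delete_group(group_id: str) -> None:
        members = client.get(group_id)
        if members is not None:
            # Drop every userID -> groupID entry that pointed at this group.
            for user_id in members.decode().split(","):
                client.delete(user_id)
        client.delete(group_id)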

I wrote a blog post on maintaining a set a while back that may do what you're looking for.
It's essentially what mikewied is suggesting, but with more words and code samples.

Related

Query items in a DynamoDB table within a numeric range

I need some guidance in implementing mail notifications for new chat messages. The mail notification would inform the user of all the chats that had new messages in the previous hour.
To get it done, I'll need to query all chats in a table within a time interval. The first thing that came to mind was adding a new global index whose hash key would be a boolean for whether the chat has unread messages, and whose range key would be the timestamp of the latest message within that chat.
But I have learned that boolean hash keys are quite the anti-pattern, as they would squeeze all the items into a single partition.
Is there a different model that would allow us to query all items in a table within a numeric range?
I’m assuming that you want to query unread messages for a given user, since (again) I’m assuming that the read/unread status of a given notification should not change for one user if another user reads a notification for the same thing.
Going on that assumption, you should use a sparse index with the userId (or equivalent) as the hash key and unreadNotificationTime as the sort key. When you insert a new notification into your table, set the value of unreadNotificationTime to the timestamp of the notification. When the user has read the notification, delete the unreadNotificationTime attribute from the item.
Why does this work?
DynamoDB only requires that an item has the key attributes of the base table; any other attributes are optional. The way indexes work in DynamoDB is that an item from the base table will only appear in an index if the item has all of the key attributes of that particular index.
By setting a value for unreadNotificationTime when you store a notification, all newly created notifications are automatically populated into the unread messages index. By deleting the unreadNotificationTime attribute when a message is read, you remove the notification from that index. With this schema, there is no need for any filtering or scan operations. Your index will only contain notifications that are unread, grouped by userId and sorted by date.
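As a rough sketch of querying such a sparse index with boto3 (the table name, index name, and exact attribute names below are assumptions, not something from your schema):

    import boto3
    from boto3.dynamodb.conditions import Key

    # Hypothetical table and index names; adjust to your setup.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Notifications")

    def unread_between(user_id, start_ts, end_ts):
        # The sparse GSI only contains items that still have an
        # unreadNotificationTime attribute, i.e. unread notifications.
        response = table.query(
            IndexName="userId-unreadNotificationTime-index",
            KeyConditionExpression=(
                Key("userId").eq(user_id)
                & Key("unreadNotificationTime").between(start_ts, end_ts)
            ),
        )
        return response["Items"]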

Merging Data in FMPro with modification of ID values

We are attempting to merge multiple datasets created in FileMaker Pro.
These datasets have multiple tables, and each entry within each table has a local ID that is used to relate entries between tables. The local ID values for all the entries were serially generated, but some of the ID values are repeated between the different datasets, even though the records they refer to are not equivalent.
How can the ID values be updated in the data that is being imported to remove these overlaps without destroying the relationships that depend on them?
If you have access to the original database, you can try to migrate the IDs over to UUIDs or something else unique before exporting. This has to be done manually, either by cut-and-paste or by a script.
Such a script will have to do the following:
Loop through the parent records
For each record go to the related records
Generate a UUID with the Get(UUID) function and put it in a variable
Replace the parent ID in the related record with this variable
Return to the parent record and replace the record ID with the variable.
Move to the next record.
Repeat until all records have been updated.

Using childByAutoId On Single Value?

I am pretty new to both Swift and Firebase, and I am attempting to make a simple app using Firebase as the backend. As far as I know, there is no memory-efficient way to use the numChildren() function without loading every single child into memory for counting, so I am implementing my own simple counter for the number of "Events" that have been created in my app.
The documentation for Firebase states that the childByAutoId() method should be used for updating lists in multi-user applications. I am assuming it adds a timestamp to the requested update and applies the updates in order.
My question is whether it is necessary to use childByAutoId() when only updating a SINGLE field in a multi-user application. That is, will there be conflicts on my numEvents field if I do:
dbRef = FIRDatabase.database().reference()
dbRef.child("numEvents").setValue(num)
Or must I do:
dbRef = FIRDatabase.database().reference()
dbRef.child("numEvents").childByAutoId().setValue(num)
In order to avoid write conflicts? My only real confusion is that the documentation for childByAutoId stresses that it is useful when the children are a list of items, but mine is only a single item.
If you are only updating a single field you should not be using childByAutoId. To update a child value for an object, you need to obtain a reference to that object somehow, perhaps by a query of some sort (in many cases you will naturally already have a reference to the object if it needs to be changed) and you can change the value like this:
dbRef.child("events").child(objectToUpdateId).child(fieldToUpdateKey).setValue(newValue)
childByAutoId in this context would be used to create a new field like:
dbRef.child("events").childByAutoId().setValue(newObject)
I'm not exactly sure how this applies to your situation, but those are some descriptions of how to update a field, and use childByAutoId.
What childByAutoId does is create a unique key for a node, to avoid using the same key multiple times and thereby creating data conflicts such as inconsistency (not write conflicts). To avoid write conflicts you use transaction blocks.
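As a rough sketch of the transaction idea (shown here with the Firebase Admin SDK for Python rather than the iOS client SDK; the credentials file and database URL are placeholders, and only the numEvents path comes from the question), an atomic counter increment looks roughly like this:

    import firebase_admin
    from firebase_admin import credentials, db

    # Hypothetical service-account file and database URL.
    cred = credentials.Certificate("serviceAccount.json")
    firebase_admin.initialize_app(cred, {"databaseURL": "https://your-app.firebaseio.com"})

    def increment(current):
        # current is None the first time the node is written
        return (current or 0) + 1

    # Runs atomically, retrying if another client wrote in between.
    db.reference("numEvents").transaction(increment)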
The best way to learn is to try it out.
If num == 1, in the first example the result will be:
dbRef:{
numEvents:1
}
While the second will be
dbRef:{
numEvents:{
//The auto-generated key
KLBHJBjhbjJBJHB:1
}
}
childByAutoId would be useful if you want to save multiple children of the same type under one node; that way each child will have its own unique identifier.
For example
pet:{
KJHBJJHB:{
name:fluffy,
owner:John Smith,
},
KhBHJBJjJ:{
name:fluffy,
owner:Jane Foster,
}
}
This way you have a unique identifier for cases where nothing in the item data is guaranteed to be unique (in this case the pet's name).
A few things here:
childByAutoId is not a timestamp, but it is used to create unique child nodes under any given node.
Use case of childByAutoId:
You have a messages node which stores messages from multiple users who are involved in a group chat. Each user can add messages to the group chat, so you would do something like this each time a user sends a message:
dbRef = FIRDatabase.database().reference()
dbRef.child("messages").childByAutoId().setValue(messageText)
So this will create a unique message ID for each message from different users. This will act somewhat like the primary key of a message in conventional databases.
The structure of database will be something like this:
messages: {
"randomIdGenerated-12asd12" : "hello",
"randomIdGenerated-12323D123" : "Hi, HOw are you",
}
So in your case your first approach is good enough, since you don't need a unique node for counting the number of events added.

Data type for storing long lists

What is the correct way of storing large lists in PostgreSQL?
I currently have a "user" table, where the "followers" column stores a list of the followers that that user has. This list is stored in JSON, and every time the server wants to add a new user to that list, it retrieves it from the database, appends the new user, and then replaces the old list with the new list.
The problem is that these lists tend to get quite lengthy, which might affect performance. Is it possible to simply append to the list directly via SQL without retrieving it and rewriting it later?
Use a separate table for followers. The table should have at least two columns: userid and followerid. And it's good practice to have a primary key for this table as well, so let's give it a "ufid".
You can do a select to get all the elements and compute the JSON string if your application needs it. But do not work with JSON or any other string representation of the list, as it defeats the purpose of a relational database.
To add a new follower, simply add a new record to the follower table with the userid; deleting and updating are also done at the record level, without touching the other records.
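A minimal sketch of that schema and the insert/read pattern, assuming Python with psycopg2 and the column names suggested above (connection details are placeholders):

    import psycopg2

    # Hypothetical connection parameters.
    conn = psycopg2.connect("dbname=app user=app")

    with conn, conn.cursor() as cur:
        # One row per (user, follower) pair instead of a JSON list.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS followers (
                ufid       serial PRIMARY KEY,
                userid     integer NOT NULL,
                followerid integer NOT NULL,
                UNIQUE (userid, followerid)
            )
        """)
        # Adding a follower is a single INSERT; no read-modify-write of a list.
        cur.execute(
            "INSERT INTO followers (userid, followerid) VALUES (%s, %s)",
            (42, 1001),
        )
        # Reconstructing the list for one user when the application needs it.
        cur.execute("SELECT followerid FROM followers WHERE userid = %s", (42,))
        follower_ids = [row[0] for row in cur.fetchall()]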
If followers is a list of integers which are primary keys to their accounts, make it an integer array int[]. If they are usernames or other words, go with a string array character varying[].
To append to an array column you can do this:
UPDATE the_table SET followers = followers || new_follower WHERE id = user;

Insert record in table if it does not exist in iPhone app

I am obtaining a JSON array from a URL and inserting the data into a table. Since the contents of the URL are subject to change, I want to make a second connection to the URL, check for updates, and insert new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key
2) The URL lists the changes on the same day. Hence, if I run my app multiple times and insert values into my database, I get duplicate entries. I want a check so that duplicated entries for the same day are removed. The problem could be solved by adding a constraint, but since the URL itself has duplicated values, I find that difficult.
The only way I can see to do it, if you have no primary key or anything else unique to each record, is to go through the new entries as they come in and, for each one, check whether the exact same data already exists in the database. If it doesn't, add it; if it does, skip it.
You could even create a unique key yourself for each entry, for example a concatenation of every column of the table. That way you can quickly check whether the entry already exists in the database.
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (this can be through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause:
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to check for duplicate data, and based on the result INSERT, UPDATE, or do nothing.
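A minimal sketch combining the two suggestions above (shown with Python's sqlite3 module; the table and column names are hypothetical): build a synthetic key from the row's contents, declare it UNIQUE, and let INSERT OR IGNORE drop the duplicates:

    import hashlib
    import sqlite3

    # Hypothetical schema; the real columns come from the JSON feed.
    conn = sqlite3.connect("app.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS entries (
            dedupe_key TEXT UNIQUE,   -- synthetic key built from the row's contents
            day        TEXT,
            title      TEXT,
            detail     TEXT
        )
    """)

    def insert_if_new(day, title, detail):
        # Concatenate every column into one string and hash it; identical rows
        # produce the same key, so the UNIQUE constraint silently rejects them.
        key = hashlib.sha1("|".join([day, title, detail]).encode()).hexdigest()
        conn.execute(
            "INSERT OR IGNORE INTO entries (dedupe_key, day, title, detail) "
            "VALUES (?, ?, ?, ?)",
            (key, day, title, detail),
        )
        conn.commit()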
A more common and robust way to handle this is to associate a timestamp with each data item on the server. When your app interrogates the server, it provides the timestamp corresponding to the last time it synced. The server then queries its database and returns all values that are timestamped later than the timestamp provided by the app. It also returns a new timestamp value for the app to store and use on the next sync.
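A minimal sketch of the server-side half of that scheme, assuming a SQLite table named items with an updated_at column in seconds since the epoch (both hypothetical):

    import sqlite3
    import time

    conn = sqlite3.connect("server.db")

    def changes_since(last_sync: float):
        # Everything modified after the client's last sync, plus a new
        # marker for the client to store and send back next time.
        now = time.time()
        rows = conn.execute(
            "SELECT * FROM items WHERE updated_at > ?", (last_sync,)
        ).fetchall()
        return rows, now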