Entity Framework Plus clear cache for individual queries - entity-framework

I am using the FromCache() method whenever I need to retrieve data from the SQL database. A lot of unique queries will be executed in a single method, since it retrieves data based on userID. The data associated with a userID will be updated through a separate process, which will also trigger an event in the method that controls retrieval. When the data for a specific user is updated, I want to expire the cache for that user so that the next query on that userID gets the most recent data.
I see that EF Plus has an ExpireTag option. Would it be feasible to create a single tag for each userID and then use that to expire the cache?

Would it be feasible to create a single tag for each userID and then use that to expire the cache?
Yes, a tag can be used in much the same way as a cache key.
The best approach is probably to use two tags:
Users
[UniqueUserId]
The Users tag will expire all cache entries related to "users".
The [UniqueUserId] tag will expire all cache entries related to that specific user.
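A minimal sketch of how this could look with EF Plus, assuming a hypothetical MyDbContext, a UserData entity, and a "User_" + userId tag format (none of these names come from the question):

using System.Collections.Generic;
using System.Linq;
using Z.EntityFramework.Plus; // FromCache() and QueryCacheManager live here

public class UserDataService
{
    // Cache the query under a general "Users" tag plus a per-user tag.
    public List<UserData> GetUserData(MyDbContext context, int userId)
    {
        return context.UserData
            .Where(d => d.UserId == userId)
            .FromCache("Users", "User_" + userId)
            .ToList();
    }

    // Call this from the process/event that updates a user's data.
    public void ExpireUserCache(int userId)
    {
        // Expires only the cached queries tagged for this user.
        QueryCacheManager.ExpireTag("User_" + userId);
    }

    // Expire every cached query tagged "Users", e.g. after a bulk change.
    public void ExpireAllUserCaches()
    {
        QueryCacheManager.ExpireTag("Users");
    }
}

Expiring the per-user tag leaves other users' cached results untouched, while the Users tag gives you a way to flush everything user-related at once.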

Related

Redis hash usage as table

I want to use Redis like a NoSQL database, and I have an idea like the one below.
Assume that I have 3 tables:
1 - user
2 - post
3 - comment
I create a hash for each table like below:
hset user _usr_100 {"id":"_usr_100","name":"john","username":"jhn","age":25}
hset user _usr_101 {"id":"_usr_101","name":"adam","username":"adm","age":26}
hset user _usr_102 {"id":"_usr_102","name":"eric","username":"erc","age":27}
hset post _post_100 {"id":"_post_100","title":"title","content":"testpost","userid":"_usr_100"}
hset post _post_101 {"id":"_post_101","title":"title","content":"testpost","userid":"_usr_101"}
hset post _post_102 {"id":"_post_102","title":"title","content":"testpost","userid":"_usr_102"}
hset comment _comment_100 {"id":"_comment_100","content":"testpost","userid":"_usr_100","postid":"_post_100"}
hset comment _comment_101 {"id":"_comment_101","content":"testpost","userid":"_usr_101","postid":"_post_101"}
hset comment _comment_102 {"id":"_comment_102","content":"testpost","userid":"_usr_102","postid":"_post_102"}
When I want to get a user (_usr_100) from Redis:
hget user _usr_100
{"id":"_usr_100","name":"john","username":"jhn","age":25}
When I want to get all users:
hgetall user
{"id":"_usr_100","name":"john","username":"jhn","age":25}
{"id":"_usr_101","name":"adam","username":"adm","age":26}
{"id":"_usr_102","name":"eric","username":"erc","age":27}
After deserializing the JSON strings one by one and filling them into a list, I have a List<User>, so I can do some operations on it (search, group by, order, pagination, ...), and I can do the same thing for the other hashes (post, comment).
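For illustration only, the read-and-deserialize step described above might look roughly like this in C# with StackExchange.Redis and System.Text.Json (the question does not name a language or client, so these are assumptions):

using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
using StackExchange.Redis;

public class User
{
    public string id { get; set; }
    public string name { get; set; }
    public string username { get; set; }
    public int age { get; set; }
}

public static class UserRepository
{
    // HGETALL user, then deserialize each field value into a User object.
    public static List<User> GetAllUsers(IDatabase db)
    {
        HashEntry[] entries = db.HashGetAll("user");
        return entries
            .Select(e => JsonSerializer.Deserialize<User>(e.Value.ToString()))
            .ToList();
        // Once the list is in memory, search/group/order/pagination are plain LINQ operations.
    }
}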
I can delete and update users with:
hdel user _usr_101 // deleted _usr_101
hset user _usr_100 {"id":"_usr_100","name":"john","username":"jhn","age":26} // updated age
hset user _usr_103 {"id":"_usr_103","name":"max","username":"max","age":15} // new user
hgetall user
{"id":"_usr_100","name":"john","username":"jhn","age":26}
{"id":"_usr_102","name":"eric","username":"erc","age":27}
{"id":"_usr_103","name":"max","username":"max","age":15}
What could be the disadvantages of this usage? Can you suggest another way to use Redis hashes like NoSQL tables?
Depending on your business rules/model, this option "may" work, but it is probably not the best (or near-best) solution for your domain. Using a key/value store for a mostly relational domain forces you into trade-offs that can work against you.
When your user class gets new fields and those fields need to be queried, you have to create more "space" to reduce the "time": you keep denormalizing your data just to serve a single query, effectively re-implementing your relational database inside a key/value store. In a relational database you can update user 101 with a simple statement:
UPDATE users SET username = 'mynewusername' where id = 101;
In Redis you would need to find all the related keys/fields across all your hashes/sets/lists and update them to keep the data consistent. Keeping age as a field may also be a bad idea; you would need to store a birthday instead, and if your business needs to fetch the list of users whose birthday is today, you have to create new keys, duplicate most of your data, and migrate all your existing users into them just to answer that one query. Keep in mind that you need to query by day and month to get birthdays, which means keeping users in separate sets such as users:birthday:01:01, users:birthday:02:05, users:birthday:11:08 to fetch them. If a user then wants to update their birthday (depending on the business), you have to manually move the user between those sets while updating the other structures as well.
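As a rough sketch of that bookkeeping (StackExchange.Redis, with hypothetical key names), a single "update birthday" now has to touch several structures at once, in contrast with the one-line SQL UPDATE above:

using StackExchange.Redis;

public static class UserBirthdayUpdater
{
    // Hypothetical sketch: move a user between denormalized birthday sets
    // while also rewriting the serialized user JSON in the "user" hash.
    public static void UpdateBirthday(IDatabase db, string userId,
                                      string oldMonthDay, string newMonthDay,
                                      string updatedUserJson)
    {
        ITransaction tx = db.CreateTransaction();
        tx.SetRemoveAsync($"users:birthday:{oldMonthDay}", userId); // leave the old set
        tx.SetAddAsync($"users:birthday:{newMonthDay}", userId);    // join the new set
        tx.HashSetAsync("user", userId, updatedUserJson);           // rewrite the denormalized JSON
        tx.Execute(); // all three structures must be kept in sync by hand
    }
}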
Adding an active/passive flag to users would be another pain. I am not sure whether you ever need to get all users, but you will probably need to paginate them, and with a hash that is hard; you would need yet another sorted set/list to support it.
The same goes for the comments on a user's posts, the last 25 comments of a user, the most recent comments of the users who have the most posts, searching through users' posts, and so on. Then your product manager comes up with the idea of adding tags to each post, and you have to work that into your data model with yet more data structures.
This is relational data, and it is better to keep it relational. When you start modeling it in a non-relational database, all the flexibility an RDBMS gives you is gone, replaced with complexity at both the data layer and the application layer.
A single PostgreSQL instance may serve you far better than Redis for this problem. Redis has excellent features for solving many problems, but user/post/comment is not one of them.
This post may provide some insights too

Handling simultaneous user registrations in mongoDB?

I have a MongoDB collection of registered users with an index on the userID field. Every time a user tries to register, a lookup is done on the existing user IDs to check whether the userID chosen by the registering user is available. I was just wondering what happens when two users enter the same userID for registration at the same time and the lookups happen simultaneously. Would both of them end up having the same userID? Does MongoDB handle such a scenario on its own? One of the purposes of the unique userID would be to give each user a URL based on the userID.
I'll be using the PyMongo module.
Preventing duplicate usernames is an example of Concurrency Control, a broad area which has many issues and many ways in which databases and apps can be designed to avoid problems.
In the case of a collection of users where you are concerned to avoid duplicate userIDs, I would suggest the following design pattern:
Create a unique index on the userID field
In your app, check the response when creating a user; if you get a Duplicate Key Error, you know the userID has already been taken and must ask the user to choose a different userID
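A minimal sketch of this pattern using the MongoDB .NET driver (the question mentions PyMongo, where the equivalent is catching pymongo.errors.DuplicateKeyError; the collection name and UserDoc class here are assumptions):

using MongoDB.Driver;

public class UserDoc
{
    public MongoDB.Bson.ObjectId Id { get; set; }
    public string UserId { get; set; }
}

public class RegistrationService
{
    private readonly IMongoCollection<UserDoc> _users;

    public RegistrationService(IMongoDatabase db)
    {
        _users = db.GetCollection<UserDoc>("users");

        // 1. Unique index on userID: the server, not the app, enforces uniqueness.
        var keys = Builders<UserDoc>.IndexKeys.Ascending(u => u.UserId);
        _users.Indexes.CreateOne(new CreateIndexModel<UserDoc>(
            keys, new CreateIndexOptions { Unique = true }));
    }

    // 2. Try to insert; a duplicate key error means the userID is already taken.
    public bool TryRegister(UserDoc user)
    {
        try
        {
            _users.InsertOne(user);
            return true;
        }
        catch (MongoWriteException ex)
            when (ex.WriteError.Category == ServerErrorCategory.DuplicateKey)
        {
            return false; // ask the user to pick a different userID
        }
    }
}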
Other approaches are also possible; for example you could have the database assign the userIDs, which would be a different way to guarantee uniqueness.
One transaction will occur first; the userID should be a primary key in this entity, and that will prevent the same userID from being re-used.

Storing custom temporary data in Sitecore xDB

I am using Sitecore 8.1 with xDB enabled (MongoDB). I would like to store the user-roles of the visiting users in the xDB, so I can aggregate on these data in my reports. These roles can change over time, so one user could have one set of roles at some point in time and another set of roles at a later time.
I could go and store these user roles as custom facets on the Contact entity, but as they may change for a user from visit to visit, I will lose historical data if I update the facet every time the user logs in (e.g. I will not be able to tell which roles a given user had at some given visit).
Instead, I could create a custom IElement for my facet data, and store the roles along with a timestamp saying when the given roles were registered, but this model may be hard to handle during the reporting phase, where I would need to connect the interaction data with the role-data based on timestamps every time I generate a report.
Is it possible to store these custom data in the xDB in something else than the Contact collection? Can I store custom data in the Interactions collection? There is a property called Tracker.Current.Session.Interaction.CustomValues which sounds like what I need, but if I store data here, will I be able to perform proper aggregation/reporting on the data? Any other approaches I haven't thought about?
CustomValues
Yes, the CustomValues dictionary is what I would use in your case. This dictionary will get serialized to MongoDB as a nested document of every interaction (unless the dictionary is empty).
Also note that, since CustomValues is a member of the base class Sitecore.Analytics.Model.Entity, this dictionary is available in many other data classes of xDB. For example, you can store custom values in PageData and PageEventData objects.
Since CustomValues takes an object of any class, your custom data class needs some extra things for it to be successfully saved to and subsequently loaded from MongoDB:
It has to be marked as [Serializable].
It needs to be registered in the MongoDB driver like this:
using Sitecore.Analytics.Data.DataAccess.MongoDb;
// [...]
MongoDbObjectMapper.Instance.RegisterModelExtension<YourCustomClassName>();
This needs to be done only once per application lifetime - for example, in an initialize pipeline processor.
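Putting those two requirements together, a sketch might look like this (the class name, dictionary key, pipeline processor, and the point at which roles are captured are all assumptions, not Sitecore-prescribed names):

using System;
using System.Collections.Generic;
using Sitecore.Analytics;
using Sitecore.Analytics.Data.DataAccess.MongoDb;
using Sitecore.Pipelines;

// The custom payload stored per interaction; marked [Serializable] as required above.
[Serializable]
public class VisitorRolesInfo
{
    public List<string> Roles { get; set; }
    public DateTime CapturedAt { get; set; }
}

// Initialize pipeline processor: registers the class with the MongoDB driver once per app lifetime.
public class RegisterVisitorRolesInfo
{
    public void Process(PipelineArgs args)
    {
        MongoDbObjectMapper.Instance.RegisterModelExtension<VisitorRolesInfo>();
    }
}

public static class VisitorRolesRecorder
{
    // Call this (e.g. on login) so the roles are saved with the current interaction.
    public static void Record(IEnumerable<string> roles)
    {
        Tracker.Current.Session.Interaction.CustomValues["VisitorRoles"] = new VisitorRolesInfo
        {
            Roles = new List<string>(roles),
            CapturedAt = DateTime.UtcNow
        };
    }
}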
Your own storage
Of course, you don't have to use Sitecore's API to store your custom data. So the alternative would be to manually save data to a custom MongoDB collection or an SQL table. You can then read that data in your aggregation processor, finding it by the ID of currently processed interaction.
The benefit of this approach is that you can decide where and how your data is stored. The downside is extra work of implementing and maintaining this data storage.

simple model when requesting collection and extended model when requesting resource - how

I have the following URI: /articles/:id, where article is a resource on a web service with an associated model/class. Now I need to return only partial data for each resource (to save bandwidth and speed things up) when the collection is requested, but when a single item is requested from the collection I need to send the full data. My question is: should I use two models/classes for the same resource on the server and instantiate a different one depending on whether the collection or a single resource is requested? Or should there be only one model/class, with not all fields filled with data when a collection is requested? Or is there another approach?
I suggest using the approach suggested here with a fields query parameter.
If the API is going to be open to everyone to use and client usage is going to be unpredictable, then by default you probably need to limit the fields that you return. Just make sure you document in some way all the possible fields that could be used, in case a client actually needs them.
If the API is going to be consumed only by an app or apps you made, then by default you could return all of the fields and then your app can pass that fields parameter to speed things up.
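As a rough illustration of the fields query parameter idea (plain C#; the Article class, field names, and default shape are made up, and this is not tied to a particular web framework):

using System;
using System.Collections.Generic;
using System.Linq;

public class Article
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Summary { get; set; }
    public string Body { get; set; }   // large field we want to omit from collection responses
}

public static class FieldSelector
{
    // Project an article onto only the requested fields, e.g. "?fields=id,title".
    // A null/empty fields value falls back to a default partial shape for collections.
    public static Dictionary<string, object> Select(Article article, string fields)
    {
        var requested = string.IsNullOrWhiteSpace(fields)
            ? new[] { "id", "title", "summary" }
            : fields.Split(',').Select(f => f.Trim().ToLowerInvariant()).ToArray();

        var all = new Dictionary<string, object>
        {
            ["id"] = article.Id,
            ["title"] = article.Title,
            ["summary"] = article.Summary,
            ["body"] = article.Body
        };

        return all.Where(kv => requested.Contains(kv.Key))
                  .ToDictionary(kv => kv.Key, kv => kv.Value);
    }
}

The /articles/:id handler can then serialize the full model, while the /articles handler applies a projection like this to each item in the collection.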

Is storing documents ID in data- attributes a good practice?

In most of my apps, I need to store IDs in data attributes to perform CRUD operations on specific elements of the DOM.
Indeed, my elements don't necessarily match specific criteria, or they share multiple criteria, so the only way I have to delete them (for example when a user clicks on one) is to store their ID in a data-id attribute and then send it to my server.
I use socket.io a lot.
Is that a good practice?
This is good practice. I don't think there is a better attribute to store this identifying data than data-id. You need some unique identifier for the document so the server knows which document the user wants to interact with when performing update/delete operations.
As long as the request is properly validated on the server side, i.e. before deleting/updating you check that the user in the session has the authority to perform the requested action, there is no security risk in exposing the document _ids.