I read an article about using memcache to cache passwords stored in a MySQL database with a certain lifetime. But when a user's password is updated in the database, memcache still serves the old data; only after the lifetime expires is the data fetched from the database again, at which point the newest value is returned.
Is there any way to get the newly updated data immediately?
Usually the framework you use in your app lets you set up rules, such as:
How long to keep data in memcache / when to flush a record from cache
How / in what circumstances to go to the db
One approach, of course, is to have the routine that updates the password in the db also expire the relevant record in memcache.
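For example (a minimal sketch with the pymemcache client; the key name and the update_password_in_db() helper are placeholders):

```python
# Invalidate-on-write sketch using pymemcache; assumes a local memcached,
# and update_password_in_db() is a hypothetical helper for the MySQL side.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def update_password(user_id, new_hash):
    update_password_in_db(user_id, new_hash)  # write the new hash to MySQL first
    cache.delete(f"user_pw:{user_id}")        # then drop the stale cache entry
```

The next read misses the cache and repopulates it from the database, so readers see the new value immediately instead of waiting out the TTL.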
I have read MongoDB's official guide on Expire Data from Collections by Setting TTL. I have set everything up and everything is running like clockwork.
One of the reasons I enabled the TTL is that one of the product's requirements is to auto-delete a specific collection. The TTL handles that quite well. However, I have no idea whether the data expiration will also carry over to the MongoDB backups. The data is supposed to be automatically deleted from the backups as well: in case the backups get leaked or restored, the expired data shouldn't be there.
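(For reference, the setup in question is the standard TTL index from that guide; a pymongo sketch, with collection and field names assumed:)

```python
# Standard TTL index sketch with pymongo; collection and field names are assumed.
from pymongo import MongoClient

db = MongoClient()["mydb"]
# Documents are removed roughly expireAfterSeconds after their createdAt value.
db.events.create_index("createdAt", expireAfterSeconds=7 * 24 * 3600)
```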
A backup contains the data that was present in the database at the time the backup was taken.
Once a backup is made, it's just a bunch of data that sits somewhere without being touched. The documents that have been deleted since the backup was taken are still in the backup (arguably this is the point of the backup to begin with).
If you want to expire data from backups, the normal solution is to delete backups older than a certain age.
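A minimal sketch of such a retention policy, assuming the backups are archive files in a single directory (the path and naming are assumptions):

```python
# Sketch of a backup retention policy: delete dumps older than 30 days.
# The backup directory and file naming scheme are assumptions.
import time
from pathlib import Path

MAX_AGE = 30 * 24 * 3600  # seconds

for dump in Path("/var/backups/mongo").glob("*.archive"):
    if time.time() - dump.stat().st_mtime > MAX_AGE:
        dump.unlink()
```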
As mentioned by @D.SM, data is not deleted from a backup. One solution could be to encrypt your data, e.g. with Client-Side Field Level Encryption.
Use a new encryption key for each day's data. When that data should expire, drop the corresponding encryption key from your key storage. With this, the data becomes unusable even if somebody restores it from an old backup.
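A conceptual sketch of the key-per-day idea, using the cryptography package's Fernet purely for illustration (MongoDB's real Client-Side Field Level Encryption API is more involved; this only shows the crypto-shredding principle):

```python
# Conceptual key-per-day crypto-shredding sketch using the cryptography
# package's Fernet. Not MongoDB's actual CSFLE API.
import datetime
from cryptography.fernet import Fernet

keys = {}  # stand-in for a real key store

def key_for_today():
    day = datetime.date.today().isoformat()
    if day not in keys:
        keys[day] = Fernet.generate_key()
    return day, Fernet(keys[day])

day, f = key_for_today()
ciphertext = f.encrypt(b"sensitive field value")  # store ciphertext + day in MongoDB

# Expiry: dropping the day's key makes every value encrypted with it
# unreadable, including copies sitting in old backups.
del keys[day]
```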
I have a database that syncs completely every 2 hours. All data is dropped and populated from the main data source.
I have some queries coming from the client app that have the same response for the current two-hour dataset. So if 100 clients run their apps, I have to run the same query 100 times, once for each of them, even though the results don't differ.
How do I avoid running this query against my database every time, and instead keep its response somewhere and return that?
I think I could run this query after each sync, save the result to its own table, and then serve responses from that table.
What are other options, probably provided by Postgres itself?
You should use something like Redis to store the result of your query in memory. It comes with many clients. You can invalidate the result of this query whenever you need to.
There are other in-memory caches, like memcache, that are easy to install and use.
Note that these are not specific to Postgres.
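A minimal cache-aside sketch with redis-py and psycopg2 (connection details, key name, and the query are placeholders); since the dataset only changes on the two-hour sync, the TTL can simply match the sync interval:

```python
# Cache-aside sketch: serve the shared response from Redis, fall back to
# Postgres on a miss. Connection details, key name and query are placeholders.
import json
import psycopg2
import redis

r = redis.Redis()
pg = psycopg2.connect("dbname=mydb")

def cached_report():
    hit = r.get("report:v1")
    if hit is not None:
        return json.loads(hit)
    with pg.cursor() as cur:
        cur.execute("SELECT id, total FROM report_data")  # the expensive shared query (hypothetical)
        rows = cur.fetchall()
    r.setex("report:v1", 2 * 3600, json.dumps(rows))  # TTL matches the 2-hour sync
    return rows
```

The sync job can also delete the key explicitly, so the first request after each sync rebuilds the cache from fresh data.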
I have an app in production and working. It is hosted on Heroku and uses Postgres 9.3 in the cloud. There are two databases: a master and a (read-only follower) slave. There are tables like Users, Likes, Followings, Subscriptions and so on. We need to store a complete log of events like userCreated, userDeleted, userLikedSomething, userUnlikedSomething, userFollowedSomeone, userUnfollowedSomeone and so on. Later on we have to prepare statistical reports/charts about current and historical data.

The main problem is that when a user is deleted, the row is simply removed from the db, so we can't retrieve deleted users because they are no longer stored anywhere. The same applies to likes/unlikes, follows/unfollows and so on. There are a few things I don't know how to handle properly:
1. If we store events in the same database with foreign keys to user profiles, the historical data will change, because each event is "linked" to the current user profile, which changes over time.
2. If we store events in a separate Postgres database (a db just for logs, to offload the main database), then joining the events with the actual user profiles would require cross-database joins (dblink), which I guess might be slow (I have never used this feature before). Anyway, this won't solve the problem from point 1.
3. I thought about using a different type of database for storing logs, maybe MongoDB; as I recall, MongoDB is more "write-heavy" than Postgres (which is more "read-heavy"?), so it might be more suitable for storing logs/events. However, then I would have to store user profiles in two databases (and even a user-profile snapshot per event to solve point 1).

I know this is a very general question, but maybe there is some kind of standard approach or a special database type for storing such data?
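(The snapshot idea from point 3 is in fact the usual fix for point 1: an append-only events table that copies the relevant profile fields into the event row, so the event no longer depends on the mutable Users row. A rough psycopg2 sketch, with all table and column names assumed; plain `json` is used rather than `jsonb` because jsonb only arrived in Postgres 9.4:)

```python
# Append-only event log with a denormalized user snapshot, so deleting the
# user later does not lose historical data. All names here are assumed.
import json
import psycopg2

pg = psycopg2.connect("dbname=mydb")

with pg, pg.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id         bigserial   PRIMARY KEY,
            type       text        NOT NULL,  -- e.g. 'userDeleted', 'userLikedSomething'
            user_id    bigint      NOT NULL,  -- plain id, deliberately no foreign key
            user_snap  json        NOT NULL,  -- copy of the profile at event time
            created_at timestamptz NOT NULL DEFAULT now()
        )
    """)
    cur.execute(
        "INSERT INTO events (type, user_id, user_snap) VALUES (%s, %s, %s)",
        ("userDeleted", 42, json.dumps({"name": "Alice", "email": "a@example.com"})),
    )
```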
I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external Mongo instance and pulling data out with no issues, but it's not reactive (it's not seeing any of the new data going into the Mongo database).
I've done some digging and so far can only find suggestions that I might need to set up a replica set and use the oplog. Is there a way to do this without going to replica sets (or is that the best way anyway)?
The code so far is really simple, a single collection, a single publication (pulling out the last 10 records from the database) and a single template just displaying that data.
No deps that I've written (not sure if that's what I'm missing).
Thanks.
Any reason not to use the oplog? From what I've read it is the recommended approach even if your DB isn't updated by an external process, and a must if it is.
Nevertheless, even without the oplog your app should see the changes made to the DB by the external process. It will take longer (up to 10 seconds, since Meteor falls back to polling), but it should update.
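(For the curious, Meteor's oplog tailing is essentially a tailable cursor on the replica set's local.oplog.rs collection. A rough pymongo illustration of what it watches, assuming the mongod runs as a replica set; the namespace is a placeholder:)

```python
# Rough illustration of oplog tailing (what Meteor does internally when
# MONGO_OPLOG_URL is set). Assumes mongod runs as a replica set; the
# database/collection name being watched is a placeholder.
import pymongo

client = pymongo.MongoClient()
oplog = client.local["oplog.rs"]

cursor = oplog.find(
    {"ns": "mydb.mycollection"},  # only operations on our collection
    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
)
for op in cursor:
    print(op["op"], op.get("o"))  # op: i=insert, u=update, d=delete
```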
I'm working on client-server software that uses memcached.
If I want to delete content from my database that is held within memcached, which of the following is usually required to achieve this objective?
A - delete from database AND delete from memcached
B - delete from memcached (which will automatically delete from the database)
Thanks
Option A is what you would want.
Memcache and your database are completely separate, and it is up to you to make them reflect one another.
For example, if you insert into your DB you must also insert into memcache. If you delete from your DB you must also delete from memcache.
In most of today's frameworks this is abstracted out. However, if you are doing it manually then you must do both for consistent data.
Edit: by delete I mean invalidate
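For instance (a sketch with pymemcache; run_sql() and the table/key names are hypothetical):

```python
# Sketch of option A: every write path touches both stores. Assumes a local
# memcached; run_sql() and the table/key names are hypothetical.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def delete_item(item_id):
    run_sql("DELETE FROM items WHERE id = %s", (item_id,))  # source of truth first
    cache.delete(f"item:{item_id}")                         # then invalidate the copy

def create_item(item_id, payload):
    run_sql("INSERT INTO items (id, data) VALUES (%s, %s)", (item_id, payload))
    cache.set(f"item:{item_id}", payload)  # payload assumed str/bytes
```

Writing to the database first means a crash between the two steps leaves the cache stale at worst, never the database wrong.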