Is Memcached for me?

I am a sysadmin for a couple of large online shops and I'm researching Memcached as a possible caching solution.
The most accessed queries are the ones that make up the dynamic product pages, so it would make sense to cache these. Staff regularly use an update program to update the tables with new prices. As I understand it, if I used Memcached the changes would only become visible after the cache expires, not immediately after my program has updated the tables.
In the docs I can see "Memcache::flush", which flushes ALL existing items, but is there a way to flush an individual object?

As you can see in the docs, there is a delete command that removes a single item. There is also a set command to add or replace a single item.

The most important part is to have a solid naming scheme for your keys. Presumably you have a CMS-type page to update/insert rows in your database (MySQL?). Just make sure you delete the corresponding Memcached record whenever you do an update in MySQL and you'll be fine.
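For illustration, that pattern might look like the following in Python with the pymemcache client -- a minimal sketch, assuming a product:{id} key scheme and load_from_db/update_db callables that stand in for your real queries:

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def product_cache_key(product_id):
    # A solid, predictable naming scheme is the important part.
    return f"product:{product_id}"

def get_product_page(product_id, load_from_db):
    key = product_cache_key(product_id)
    page = cache.get(key)
    if page is None:
        page = load_from_db(product_id)    # the expensive dynamic-page query
        cache.set(key, page, expire=3600)  # cache for an hour
    return page

def update_product_price(product_id, new_price, update_db):
    update_db(product_id, new_price)             # write to MySQL first
    cache.delete(product_cache_key(product_id))  # then invalidate the cached copy

The next request for that product then repopulates the cache with the fresh price instead of waiting for the expiry.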

Related

Delete "views" in Cloudant to make space

I am currently using the Lite version of Cloudant and I have reached the 1 GB limit that is offered.
I tried to delete some data, but the actual documents in my database do not take up much space.
Most of the space seems to be taken up by views. Does anyone know what these represent and how I can get rid of them to free up some space in the database?
Views are secondary indexes generated by map and reduce functions in your design documents. They may have been created by a developer directly, or behind your back if you are using an application such as Node-RED. If you delete a design document, the associated index should be removed, but this may of course affect the functionality of whatever is using your Cloudant database.
Removing views WILL break any application expecting to find them there. Think carefully about whether this is really what you want to do. You should consider backing up your data first (https://github.com/cloudant/couchbackup).
Views are stored in design documents. They are documents where the id starts with _design. You can list design docs using curl:
% curl 'https://USER:PASS@USER.cloudant.com/DATABASE/_all_docs?startkey="_design/"&endkey="_design0"'
{"total_rows":8747,"offset":5352,"rows":[
{"id":"_design/names","key":"_design/names","value":{"rev":"1-4b72567e275bec45a1e37562a707e363"}},
{"id":"_design/queries","key":"_design/queries","value":{"rev":"7-7e128fa652e9a1942fb8a01f07ec497c"}},
{"id":"_design/routeid","key":"_design/routeid","value":{"rev":"1-a04ab1fc814ac1eaa0b445aece032945"}},
{"id":"_design/setters","key":"_design/setters","value":{"rev":"1-7bf0fc0255244248de4f89a20ff730f4"}}
]}
You can then delete each of those with curl -XDELETE, passing the document's current rev (shown in the listing above) as the rev query parameter -- or you can do it via the Cloudant dashboard.
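If you have many design docs to remove, a short script is handy. A hedged sketch in Python with requests (the account, database name and credentials are placeholders):

import requests

BASE = "https://USER.cloudant.com/DATABASE"
AUTH = ("USER", "PASS")

# List all design documents (ids sorting between "_design/" and "_design0").
resp = requests.get(
    f"{BASE}/_all_docs",
    params={"startkey": '"_design/"', "endkey": '"_design0"'},
    auth=AUTH,
)
resp.raise_for_status()

for row in resp.json()["rows"]:
    doc_id, rev = row["id"], row["value"]["rev"]
    print(f"deleting {doc_id} at rev {rev}")
    # The DELETE must include the current revision or Cloudant rejects it.
    requests.delete(f"{BASE}/{doc_id}", params={"rev": rev}, auth=AUTH).raise_for_status()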

Is database deletion through Entity Framework permanent?

I have a web application using Entity Framework and an Azure SQL database. I would like to know if deleting a row in the database removes the information permanently, or simply marks it as deleted so that it can still be accessed if needed?
db.MyTable.Remove(objectInstance);
db.SaveChanges();
Is this something that can be configured, or do I need to implement this feature myself by adding a deleted attribute?
The reason I want this is to be able to perform analytics that include objects that might already have been deleted.
EF actually has nothing to do with this. EF is an ORM that sits on top of the RDBMS; Remove simply issues a DELETE statement, and whether deleted records can be recovered afterwards is up to the RDBMS.
Options IMO:
You manage the records marked as deleted using an extra column (a soft delete; see the sketch below this list)
You can move the deleted records to another table or file, whichever is convenient for you to run analytics on. That way your queries will have to touch fewer records and will be faster.
You can go through the transaction log files and replay the INSERTs to get the deleted records back.
Hope my suggestions point you in the right direction.
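To make the first option concrete, here is a minimal soft-delete sketch in Python with SQLite; the schema is invented for illustration, and in EF you would typically model this as a boolean property that you filter on in your queries:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, is_deleted INTEGER DEFAULT 0)")
db.execute("INSERT INTO items (name) VALUES ('keep me'), ('delete me')")

def soft_delete(item_id):
    # Flag the row instead of removing it.
    db.execute("UPDATE items SET is_deleted = 1 WHERE id = ?", (item_id,))

soft_delete(2)

# Normal queries exclude soft-deleted rows...
live = db.execute("SELECT name FROM items WHERE is_deleted = 0").fetchall()
# ...while analytics can still see everything, deleted rows included.
everything = db.execute("SELECT name, is_deleted FROM items").fetchall()
print(live, everything)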

Existing Postgres Database vs Solr

We have an app that uses a Postgres database with about 50 tables. Each table contains about 3 million records on average. The tables get updated with new data every now and then. Now we want to implement a search feature in our app. The search needs to be performed on one table at a time (no joins needed).
I've read about Postgres full-text search support and it looks promising. But it seems that Solr is super fast in comparison. Can I use my existing Postgres database with Solr? If tables get updated, would I need to re-index everything again?
It is definitely worth giving Solr a try. We moved many MySQL queries involving JOINs on multiple tables with sorting on different fields to Solr. We are very happy with Solr's search speed, sort speed, faceting capabilities and highly configurable text analysis/tokenization options.
If tables get updated would I need to re-index everything again?
No, you can run delta imports to only re-index your new and updated documents. See https://wiki.apache.org/solr/DataImportHandler.
Get started with https://lucene.apache.org/solr/4_1_0/tutorial.html and all the links in there.
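If you would rather push updates yourself instead of using the DataImportHandler, a rough sketch of a manual delta sync in Python (psycopg2 and requests assumed; the products table, its columns and the Solr core name are all hypothetical):

import psycopg2
import requests

SOLR_UPDATE = "http://localhost:8983/solr/products/update?commit=true"

def sync_changes(last_sync_time):
    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()
    # Only fetch rows changed since the last sync, like a DIH delta import.
    cur.execute(
        "SELECT id, name, description FROM products WHERE updated_at > %s",
        (last_sync_time,),
    )
    docs = [{"id": r[0], "name": r[1], "description": r[2]} for r in cur]
    conn.close()
    if docs:
        # Solr's update endpoint accepts a JSON array of documents.
        requests.post(SOLR_UPDATE, json=docs).raise_for_status()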
Since nobody has leapt in, I'll answer.
I'm afraid it all depends. It depends on (at least):
how big the text is in each "document"
how flexible you want your searching to be
how much integration you need between database and text-search
how fast is fast enough
how much experience you have with both
When I've had a database that needs some text searching, I've just used PG's built-in options. If I didn't have superuser access to the db, or was already running a big Java setup, then Solr might well have appealed.
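For a sense of what the built-in route looks like, a minimal sketch from Python with psycopg2 (the articles table and its body column are hypothetical; a GIN index on the tsvector expression is what makes this fast on millions of rows):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()
cur.execute(
    """
    SELECT id, ts_rank(to_tsvector('english', body),
                       plainto_tsquery('english', %s)) AS rank
    FROM articles
    WHERE to_tsvector('english', body) @@ plainto_tsquery('english', %s)
    ORDER BY rank DESC
    LIMIT 10
    """,
    ("search terms", "search terms"),
)
for row in cur.fetchall():
    print(row)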

Memcache Delete Also Deletes Database?

I'm working on client-server software that uses memcached.
If I want to delete content from my database that is held within memcached, which of the following is usually required to achieve this objective?
A - delete from database AND delete from memcached
B - delete from memcached (which will automatically delete from the database)
Thanks
Option A is what you would want.
Memcache and your database are completely separate, and it is up to you to keep them consistent with one another.
For example, if you insert into your DB you must also insert into memcache. If you delete from your DB you must also delete from memcache.
In most of today's frameworks this is abstracted out. However, if you are doing it manually then you must do both for consistent data.
Edit: by delete I mean invalidate
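In code, option A usually comes down to two independent calls -- a hedged Python sketch with pymemcache, where delete_row_from_db and the key scheme are placeholders:

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def delete_content(content_id, delete_row_from_db):
    delete_row_from_db(content_id)         # 1. remove the row from the database
    cache.delete(f"content:{content_id}")  # 2. invalidate the cached copy

Neither store knows about the other, so skipping step 2 would leave stale content being served until it expires.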

Database last updated?

I'm working with SQL 2000 and I need to determine which of the databases are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, post facto, the last access date. With regards to figuring out who is using which databases, this can be definitively monitored with the right filters in the SQL Profiler. Beware, however, that Profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. the addition of tables, columns, triggers and other such objects, typically leave "dated" tracks in the system tables/views (I can provide more detail about that if need be).
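For example, object creation dates can be read from sysobjects. A hedged sketch via Python and pyodbc (the connection string is a placeholder, and note that crdate only records creation -- SQL 2000 does not track a modification date):

import pyodbc

# List user tables and their creation dates, newest first.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
for name, created in conn.execute(
    "SELECT name, crdate FROM sysobjects WHERE type = 'U' ORDER BY crdate DESC"
):
    print(name, created)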
However, unless the data itself includes timestamps of sorts, there are typically very few sure-fire ways of knowing when data was changed, unless the recovery model involves keeping all such changes in the log. In that case you need some tools to "decompile" the log data...
With regards to detecting "read" activity... a tough one. There may be some computer-forensics-like tricks, but again, no easy solution I'm afraid (beyond the ability to see in the server activity the very last query for all still-active connections; obviously a very transient thing ;-) )
I typically run the Profiler if I suspect a database is actually being used. If there is no activity, then simply set it to read-only or take it offline.
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to know when the data was read.
What you can do is audit logins (SQL 2000 has no logon triggers, but a Profiler trace on the Audit Login event works) and track the associated variables to find out who / what application is using the DB.
If your database is fully logged, create a new transaction log backup and check its size. The log backup will have a fixed, small length when no changes were made to the database since the previous transaction log backup, and it will be larger when there were changes.
This is not a very exact method, but it is easy to check, and it might work for you.