MongoDB or Memcached on a busy site?

My system handles a huge number of connections and is constantly reading and writing data. I installed Memcached (only one memcached instance) but it failed under the large number of connections, and I tried MongoDB (only one instance as well) but it still failed.
My data is not large (around 10,000 records). I'm using MySQL now, but it's very slow.
I'm thinking about Memcached replication or a master/slave setup on MongoDB. Which one should I choose?
Thank you very much!
(Sorry for my bad English.)

MySQL is unlikely to be what is making your site slow, especially with the amount of data you mention. I would stick with MySQL and try to find out what is actually causing the slowness.

Try running EXPLAIN on your MySQL queries to see what is going wrong.
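
For example, here is a minimal sketch of running EXPLAIN from Python with mysql-connector-python; the connection details and the orders/customer_id names are placeholders, not from the question:

    # Hypothetical example: inspect the plan of a suspect query.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="app_db"
    )
    cur = conn.cursor()
    cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = %s", (42,))
    for row in cur.fetchall():
        print(row)  # check the 'type', 'key' and 'rows' columns for missing indexes
    cur.close()
    conn.close()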

Related

Does Cloudant on Bluemix have a performance issue?

I have a mobile app that uses Cloudant's CDTReplicator to sync the local datastore with the remote DB. The app was fast at first, but now a sync takes more than 10 seconds even though there are only 180 ordinary documents in the remote DB, each with a few string fields, and the delta between local and remote is minimal.
After I deleted all 180 documents, it sped up greatly, so it looks like it is related to the number of documents.
Can you please let me know if there is a performance issue with Cloudant replication, or is this a problem with my code? The code just starts the replicator; there is not much to it.
Thank you,
Jen
According to the status page https://developer.ibm.com/bluemix/support/#status, Cloudant is reported as having some issues on US-South, though the problem listed there is a generic "service broker timeout" message.

Is memcache related to a database?

I have been browsing through a lot of websites and I need expert advice on this one.
Can anyone please explain what exactly memcache is?
From what I understand, it is a distributed memory caching system used for dynamic web apps, but my main question is: do we need a database when we talk about 'memcache', or does 'memcache' not involve a database at all?
Please answer. Thank you.
No, you don't need a traditional database to use memcache. It is an in-memory hash table (dictionary) with key/value storage, so it lives in RAM as a lookup table. Because of that it is not persistent: whenever you restart the server, memcache is reset.
memcached is a specific program that runs a server other programs can use to keep things in memory. It's something like an in-memory database, depending on your definition of "database".
Caching something in memory can also be done generically, without memcached (pronounced "memcache-dee").
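
For illustration, a minimal sketch using the pymemcache client (one client library among several), assuming a memcached server listening on localhost:11211; the key and value are made up:

    # memcached as a plain key/value lookup table held in RAM.
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))
    cache.set("user:42:name", b"Alice", expire=300)  # kept only in memory
    print(cache.get("user:42:name"))                 # b'Alice' until eviction or restart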

PDO queries very slow over high latency, high bandwidth connection

Running PostgreSQL 9.x (9.1 to 9.3).
I have a custom web app built using PHP's PDO library. Every query in our app uses prepared statements (via our internal PDO wrapper library).
Our production system uses AWS EC2 small instances for the web server and RDS for the database.
For local development, my local machine serves as the web server, and an office machine running Mac OS X (Mavericks) serves up the DB.
When I'm at work, I can typically ping the office DB server and get 1-5 ms ping times. Everything performs well: page loads are very speedy, and my internal timer shows that PHP runs the page from start to finish in about 12 ms.
The issue comes in when I take my work laptop home. From home, I get about 50-60 ms ping times to the office DB server, and pages now take 5-10 seconds to load, every time. Granted, there are 4 DB queries per page load, but it's very, very little data. I've tried TCP_NOWAIT settings, and I've tried running pgbouncer on my local machine with persistent connections to the remote DB; nothing has helped so far.
When timing the queries, a simple query that returns 100 bytes of data runs in 0.0006 seconds locally but takes around 1 second remotely. It appears to be related to latency only: no matter how much data a query returns, it takes around 1 second longer than it would locally, give or take.
I was simply wondering if anyone could help me work out where this delay is coming from. Every single query, no matter how much data it returns, seems to add a delay of around a second. The odd thing is that when I run pgAdmin on my machine against the remote DB, simple queries take nowhere near that long.
Does anyone have any idea of other things to try? I'm not running an SSL DB connection or using any compression. I'm willing to try those if necessary, but SSL is one thing I haven't gotten to work before, and I doubt it will help with latency anyway.
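
As a side note, one rough way to separate per-round-trip network latency from actual query cost is to time a trivial query in a loop. This sketch uses Python with psycopg2 rather than PHP/PDO, and the connection details are placeholders:

    # If "SELECT 1" alone takes ~1 second remotely, the delay is in the
    # per-query round trips (or connection setup), not in query execution.
    import time
    import psycopg2

    conn = psycopg2.connect(host="office-db.example.com", dbname="app",
                            user="dev", password="secret")
    cur = conn.cursor()
    for _ in range(5):
        start = time.perf_counter()
        cur.execute("SELECT 1")
        cur.fetchall()
        print(f"{(time.perf_counter() - start) * 1000:.1f} ms")
    cur.close()
    conn.close()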

Keeping postgres entirely in memory

I am running various tests that spend a lot of time in the database.
I'd like to keep it all in memory and have it never touch the disk; hopefully that would speed things up, like using sqlite3's in-memory option. I don't need persistence or durability or anything like that; everything is discarded immediately after the test.
Is that possible? I tried tweaking my Postgres memory-related settings (as in the answer linked below), but that doesn't seem to affect the number of writes it performs, and I couldn't find anything that looks like an 'in-memory' option.
https://dba.stackexchange.com/questions/18484/tuning-postgresql-for-large-amounts-of-ram
I wrote a detailed post on this some time ago:
Optimise PostgreSQL for fast testing
You may find it informative; it covers options for making PostgreSQL run without durability, plus other tweaks that are useful for running tests.
You do not actually need in-memory operation. If PostgreSQL is set not to flush changes to disk, then in practice there will be little difference for databases that fit in RAM, and for databases that don't fit in RAM it won't crash.
You should test with the same database engine you're using in production. Testing with SQLite, Derby, H2, etc. and then deploying live on PostgreSQL doesn't make much sense... as any Heroku/Rails user can tell you from experience.
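
For illustration, here is a minimal sketch of relaxing durability on a throwaway test cluster, assuming psycopg2. synchronous_commit can be changed per session; fsync and full_page_writes are server-level settings that belong in postgresql.conf (test clusters only, never production):

    # Session-level: stop waiting for the WAL flush on every commit.
    import psycopg2

    conn = psycopg2.connect(dbname="test_db", user="test", host="localhost")
    cur = conn.cursor()
    cur.execute("SET synchronous_commit TO off")
    cur.execute("CREATE TABLE IF NOT EXISTS t (id int)")
    cur.execute("INSERT INTO t VALUES (1)")
    conn.commit()
    cur.close()
    conn.close()

    # Server-level, in postgresql.conf (requires a restart; test clusters only):
    #   fsync = off
    #   full_page_writes = off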

pgpool-II for Postgres - Is it what I need?

I just stumbled upon pgpool-II in my search for a way to cluster my Postgres DB (I'm getting ready to deploy a web app in a couple of months). I still have the shakes from excitement, but I'm nervous, as each time I find something this promising I am soon let down. Do you have any experience with pgpool-II? Will it help me run my database across multiple VMs, and later across multiple physical servers? Is it all I need for backups, load balancing, and higher availability for my DB server?
Also, is it easy to use the parallel query function (for instance, from Django or through Python's psycopg2)? That would be most excellent for reporting and aggregation!
One last thing: it seems to sit between Postgres and psycopg2. Is that a correct understanding, so that I can use psycopg2 exactly as normal, without regard for pgpool-II?
pgpool-II works fine for what it claims to do, and it fits between your application and the database the way you expect it to: just point psycopg2 at it instead of directly at the database and off you go.
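
For example, a minimal sketch of that wiring with psycopg2, assuming pgpool-II is listening on its default port 9999 on the same host; only the host/port differ from a direct PostgreSQL connection:

    # The application talks to pgpool-II exactly as it would to PostgreSQL.
    import psycopg2

    conn = psycopg2.connect(host="localhost", port=9999, dbname="app",
                            user="web", password="secret")
    cur = conn.cursor()
    cur.execute("SELECT version()")
    print(cur.fetchone())
    cur.close()
    conn.close()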
The main thing to note is that while it supports many different features (replication, load balancing, parallel query), you can't use them all at once. It sounds like you may be under the impression that you can, and it doesn't work that way. The documentation is not very clear on this point (the English version at least; I can't speak for the original Japanese).
For example, if you run pgpool-II in its "Master/Slave" mode, so that it provides load balancing for scaling reads, you have to use another program to actually do the replication between those nodes. Slony was the supported replication solution to put underneath it on earlier PostgreSQL versions; as of pgpool-II 3.0 and PostgreSQL 9.0, you can also use the Streaming Replication/Hot Standby features introduced in that release.
pgpool-II is a useful component and you can use it in a lot of interesting ways, but I doubt it will be "all you need" for every requirement you hope to achieve with it.