Is it possible to handle memcached memory overflow? - perl

I have a serious problem: my memcached memory is overflowing and the server is going down.
So how can I handle this in memcached? If memcached memory gets full, I want it to just return an error message and stop accepting new sets.

memcached is a distributed cache system and can run on different servers. What is your server? Is it Couchbase, or AWS ElastiCache? You can run memcached on many different servers and providers, and when you create those servers you need to configure them and set the amount of memory you want memcached to use. For example, at the company I work for, the test environment uses Couchbase but production uses Amazon ElastiCache.
Memcached uses an LRU (least recently used) algorithm to make room for new objects when memory is full, so a full cache by itself should not be a problem you have to handle. Problems can arise from full memory, but not because memcached cannot cope with it; exceptions and other failures are more likely to come from other parts of the system, which is quite normal when memory is exhausted. If you configure the server correctly, memcached usually does not throw exceptions.
Is the server that runs memcached the same as the server that runs your application? Memcached can be put on another server, which prevents the cache from competing with your application for memory.
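As a minimal sketch in Perl (the question's language), assuming the common Cache::Memcached client and a placeholder host and key: if you start the daemon with -M it returns an error when memory is exhausted instead of evicting items via LRU, and the client can check the result of set rather than letting the application go down.

    use strict;
    use warnings;
    use Cache::Memcached;

    # Placeholder address. Assume the daemon was started with e.g.:
    #   memcached -m 64 -M
    # -m caps the cache at 64MB; -M makes it return an error when
    # memory is exhausted instead of evicting old items via LRU.
    my $memd = Cache::Memcached->new({
        servers => ['127.0.0.1:11211'],
    });

    # set() returns false when the write fails (cache full under -M,
    # or daemon unreachable), so the application can degrade
    # gracefully instead of crashing.
    unless ($memd->set('some_key', 'some value', 3600)) {
        warn "memcached set failed -- cache full or unreachable\n";
    }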

Related

Wildfly 8.2 suddenly stops accepting incoming connections

We have a Wildfly 8.2 app server which is allocated 6GB of server RAM. Sometimes, due to a heavy transaction count, Wildfly stops receiving incoming connections. But when I check the server's memory (the VM, not the app server), it uses only 4GB of RAM. Then I checked the Wildfly app server's heap memory, and it had not used even 25% of the allocated heap size. Why is that? When I restart the Wildfly app server, everything works normally, and when that kind of load comes again, the above scenario happens again.
Try to increase the connection-limit as suggested in this SO question
You can dump HTTP requests as given here
Also, are you getting any errors in your console? Please post them as well.

Put memcached on db or web server instance?

For my Drupal-based site, I have an architecture with 3 instances running nginx, postgresql, & solr, respectively. I'd like to install Memcached. Should I put it on the nginx or postgresql server? What are the performance implications?
Memcached is very light on CPU usage, so it is a great candidate to gobble up spare web server RAM. Also, you will scale out your web tier much more than your other tiers, and Memcached clustering can pool that RAM together into one logical cache.
If you have any spare RAM on the DB, it is almost always best for performance to let the DB gobble it up.
TL;DR: Let the DB have all of the RAM; colocate memcached on the web tier.
Source: http://code.google.com/p/memcached/wiki/NewHardware
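As a rough sketch of that colocation, using Perl's Cache::Memcached (hostnames and key names are placeholders): a daemon runs on each web server with its spare RAM, and the client treats them as one pool.

    use strict;
    use warnings;
    use Cache::Memcached;

    # One logical cache pooled from the daemons on each web server;
    # the client hashes every key to exactly one server in the list.
    my $memd = Cache::Memcached->new({
        servers => [
            'web1.example.com:11211',
            'web2.example.com:11211',
            'web3.example.com:11211',
        ],
    });

    my $html = '<div>...rendered node markup...</div>';
    $memd->set('drupal:node:42', $html, 600);    # cache for 10 minutes
    my $hit = $memd->get('drupal:node:42');      # answered by whichever host owns the key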
The best is to have a separate server (if you can do that).
Otherwise, it depends on your servers' CPU & memory utilization and your availability requirements. In general I would avoid running anything extra on a DB server machine, since the DB is the foundation of the system and has to be available and performing well.
If your Solr server does not have high traffic and doesn't use much memory, I'd put it there. Memcached servers are known to be light on CPU. Also, you should estimate how much memory the memcached instance will need... to make sure there is enough on the server.
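One way to check that estimate against a running instance is the client's stats call; here's a sketch with Perl's Cache::Memcached, where bytes and limit_maxbytes are standard fields from the daemon's stats output:

    use strict;
    use warnings;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    # 'misc' returns the plain `stats` counters from each daemon,
    # including the bytes currently in use and the configured limit.
    my $stats = $memd->stats(['misc']);

    for my $host (keys %{ $stats->{hosts} }) {
        my $misc = $stats->{hosts}{$host}{misc};
        printf "%s: %s of %s bytes in use\n",
            $host, $misc->{bytes}, $misc->{limit_maxbytes};
    }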

mybatis memcached cache failover

We're considering using memcached as a distributed cache for mybatis (the MyBatis-Memcached integration module).
Does anyone know how to configure it so that the memcached servers are not single points of failure? Currently, if I configure multiple memcached servers, the cache requests are hashed out to the servers, but each server is a single point of failure (i.e. if one goes down, the app will fail).
We would like the mybatis client, if one of the memcached servers goes down, to treat that cache as lost, continue working, and build the cache back up on another available memcached server.
Does anyone have any experience with this?

What's the memcached server?

I'm a beginner learning memcached. The memcached server confuses me most. Can I see it as a single server computer, just like a web server? I'm also confused about the relationship between the memcached server and client: are they located on different computers?
I agree with most things @phihag has answered, but I must clarify some things.
Memcached stores data according to a key (phihag called it an id; not to be confused with database IDs). Data can be of various sizes, so you can store small bits (like one record pulled from the database) or huge chunks of data (like hundreds of records, or entire finished HTML pages).
Memcached is not typically used on the same machine as the application server. The reason is that it is designed to be used via TCP (it would be accessible via local sockets if it were designed to work only on the same server) and it was designed as a pooling server.
The pooling part is interesting: you can have 10 machines running Memcached, each allocating a maximum of 10GB of RAM for this purpose. 10*10 = 100GB of RAM space.
When you write a value into Memcached, only one of the servers (chosen deterministically by the client's hashing of the key) stores it. When you try to read a value back, only the server that stored it will return it to you.
So indeed you can put the database, Memcached, the application, and the file server all on the same machine, and typically you do that for your development sandbox. But you can also put each on a separate machine, or use any other combination of the two.
If you only need one Memcached server you will probably be OK with hosting it on the same machine as the application code.
If you start using a front-end cache server such as Varnish, or you configure Nginx as a front-end cache, you will have to configure some Memcached servers to store the data that these front-end cache servers are caching.
If you distribute your database across multiple servers and your files into a CDN, that means your application handles a lot of data in a short period of time, so you'll need more RAM for caching than a single application server could provide.
And since extending a Memcached memory pool is as easy as adding the IP of the new server to the list, you will be scaling horizontally across many servers (which is Memcached's typical intended use).
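To make the "just add the IP" point concrete, here is a sketch with Perl's Cache::Memcached (addresses are placeholders); set_servers swaps in the expanded list at runtime:

    use strict;
    use warnings;
    use Cache::Memcached;

    my @pool = ('10.0.0.1:11211', '10.0.0.2:11211');
    my $memd = Cache::Memcached->new({ servers => \@pool });

    # Scaling horizontally: add the new machine's address to the list.
    # Note that changing the pool re-hashes keys, so some existing
    # entries become effective cache misses until repopulated.
    push @pool, '10.0.0.3:11211';
    $memd->set_servers(\@pool);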
The memcached server is a program which manages the data that memcached stores (not to be confused with a machine, which may also be called a server). In theory, it can run on any computer. However, it is typically run on the same machine that the main application runs on.
The application then uses its memcached client to talk to the memcached server and ask for cached content. This is faster than querying data from a traditional database because
A memcached server just maps IDs to values, and never needs to scan an entire table
The memcached protocol is simpler. The server doesn't need to parse SQL, and the client doesn't need to construct it.
Since memcached does not require the reliability of a database (think of backups, fault isolation, clustering, security, etc.), it can be run on the same machine that the application runs on. While you could also run a database on the same machine as the application, doing so is frowned upon for the above reasons.
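The typical read path is the cache-aside pattern; here's a sketch in Perl, where fetch_user_from_db is a hypothetical stand-in for a real database query:

    use strict;
    use warnings;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    # Hypothetical stand-in for an actual database query.
    sub fetch_user_from_db {
        my ($id) = @_;
        return { id => $id, name => "user_$id" };
    }

    sub get_user {
        my ($id) = @_;
        my $key  = "user:$id";

        # Fast path: the ID-to-value lookup in memcached.
        my $user = $memd->get($key);
        return $user if defined $user;

        # Miss: fall back to the database, then populate the cache.
        $user = fetch_user_from_db($id);
        $memd->set($key, $user, 300);   # references are serialized via Storable
        return $user;
    }

    my $user = get_user(42);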

What are possible reasons for memcached to be significantly slower on a remote server?

I have a PHP/Apache server with 12GB of RAM. I have been running Memcached on the same machine with 6GB of allotted RAM.
I wanted to run Memcached on a separate server (same datacenter, VLAN, and subnet), just as I do for MySQL. I set up a separate, identical server with the same memcached configuration.
I am seeing roughly 10x the page load time when using Memcached on the remote server compared to running it locally. I have primed both caches, and I still see the 10x load time from the remote server.
I'm having trouble troubleshooting this.
You're loading 500KB of data per pageload, all in small keys? How many keys per pageload is this?
Latency to a remote server is very low, but running many roundtrips is still a bad idea. Memcached clients support multi-get operations, where you batch many keys into a single request/response with much lower latency.
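For example, with Perl's Cache::Memcached (key names are placeholders), get_multi replaces N sequential round trips with one batched request per server:

    use strict;
    use warnings;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['cache1.example.com:11211'] });

    my @keys = map { "fragment:$_" } (1 .. 200);

    # Instead of 200 sequential gets, each paying network latency:
    #   my @values = map { $memd->get($_) } @keys;
    # batch them into a single request/response per server:
    my $values = $memd->get_multi(@keys);   # hashref of key => value, hits only

    for my $key (@keys) {
        next unless exists $values->{$key};
        # ...render $values->{$key}...
    }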
Just for info, DDR3-1333 memory bandwidth is about 10667 MB/s.
If you have, let's say, Gigabit Ethernet (roughly 125 MB/s at best), I guess that can explain some of the problems you are experiencing...