What's the best way to handle memcache servers that fail? - memcached

If the server doesn't respond during a set or get operation, will memcache remove the server from the pool automatically?

Whether a failed server is removed from the pool is up to your memcache client, not the server.
If the client uses consistent hashing, a dead server can be dropped with minimal disruption: only the keys that mapped to it are redistributed among the remaining servers. Without consistent hashing, removing a server changes the mapping for most keys, so values effectively move to other memcache servers (and miss until repopulated).
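To make the difference concrete, here is a minimal consistent-hash ring sketch in Python (illustrative only; server names are made up, and real clients such as libmemcached or Ketama-aware clients implement this with tuned virtual-node counts). When one server dies, only the keys that lived on it change owner:

```python
# Minimal consistent-hash ring: each server occupies many hashed points
# ("virtual nodes") on a ring; a key belongs to the first server point
# found walking clockwise from the key's own hash.
import bisect
import hashlib

class HashRing:
    def __init__(self, servers, vnodes=100):
        self.vnodes = vnodes
        self.ring = {}            # ring point -> server
        self.sorted_points = []   # sorted ring points
        for server in servers:
            self.add(server)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.vnodes):
            point = self._hash(f"{server}#{i}")
            self.ring[point] = server
            bisect.insort(self.sorted_points, point)

    def remove(self, server):
        for i in range(self.vnodes):
            point = self._hash(f"{server}#{i}")
            del self.ring[point]
            self.sorted_points.remove(point)

    def get_server(self, key):
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect.bisect(self.sorted_points, self._hash(key)) % len(self.sorted_points)
        return self.ring[self.sorted_points[idx]]

ring = HashRing(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.get_server(k) for k in keys}

ring.remove("cache-b:11211")     # simulate a dead server
moved = sum(1 for k in keys if ring.get_server(k) != before[k])
# Only keys that lived on cache-b move; everything else keeps its server.
print(moved, len(keys))
```

Without consistent hashing (e.g. plain `hash(key) % len(servers)`), removing one server would remap roughly all keys instead of only the failed server's share.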

Related

ADO.NET background pool validation

In Java, application servers like JBoss EAP have an option to periodically verify the connections in a database pool (https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html/administration_and_configuration_guide/sect-database_connection_validation). This has been very useful for removing stale connections.
I'm now looking at an ADO.NET application, and I was wondering if there is any similar functionality that could be used with Microsoft SQL Server?
I ended up finding this post by Redgate that describes some of the validation that happens when connections are taken from the pool:
If the connection has died because a router has decided that it no longer wants to forward your packets, and no other routers like you either, then there is no way to know this unless you try to send some data and don't get a response.
If you create a connection, a connection pool is created, and connections are put into the pool and not used, then the longer they sit there, the bigger the chance of something bad happening to them.
There is nothing to warn you that a router has stopped forwarding your packets; until you actually use the connection, you do not know that there is a problem.
This was an issue with connection pooling that was fixed in the first .NET 4 reliability update (see issue 14, which vaguely describes this) with a feature called "Connection Pool Resiliency". The update meant that when a connection is about to be taken from the pool, it is checked for TCP validity and only returned if it is in a good state.
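The validate-on-checkout idea described above can be sketched generically (illustrative Python, not ADO.NET internals; `FakeConnection` stands in for a real driver connection, and `is_alive` stands in for the cheap TCP-level check the pool performs):

```python
# Sketch of "validate on checkout": before handing out a pooled connection,
# probe its health; silently discard dead ones and fall back to a fresh one.
from collections import deque

class FakeConnection:
    def __init__(self):
        self.alive = True
    def is_alive(self):
        # A real pool would do a cheap transport-level liveness check here.
        return self.alive

class ValidatingPool:
    def __init__(self, factory):
        self.factory = factory
        self.idle = deque()

    def acquire(self):
        while self.idle:
            conn = self.idle.popleft()
            if conn.is_alive():   # validate before returning to the caller
                return conn
            # dead connection: drop it and try the next idle one
        return self.factory()     # nothing healthy in the pool: open fresh

    def release(self, conn):
        self.idle.append(conn)

pool = ValidatingPool(FakeConnection)
c1 = pool.acquire()
pool.release(c1)
c1.alive = False                  # simulate a router dropping the route
c2 = pool.acquire()               # the dead connection is skipped
print(c2 is c1)                   # prints False: we got a fresh connection
```

The caller never sees the dead connection; the failure is absorbed inside `acquire`, which is exactly the behavior the Redgate post attributes to Connection Pool Resiliency.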

HornetQ local connection never timing out

My application, running in a standalone JBoss environment, relies on HornetQ (v2.2.5.Final) middleware to exchange messages between parts of my application in a local environment, not over the network.
The default TTL (time-to-live) value for the connection is 60000 ms. I am thinking of changing that to -1 because, operationally, I expect to keep sending messages through the connection from time to time (at intervals not known in advance). That would also prevent issues like JMS queue connection failures.
The question is: what are the issues with never timing out a connection on the server side, in such context? Is that a good choice? If not, is there a strategy that is suited for such situation?
The latest versions of HornetQ automatically disable connection TTL checking for in-vm connections, so there shouldn't be any issues if you configure this manually.
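If you do want to disable the check explicitly on the server side, HornetQ exposes a global TTL override in hornetq-configuration.xml. This is a sketch only; verify the element name against your 2.2.x schema before relying on it:

```xml
<!-- hornetq-configuration.xml (server side), hedged sketch:
     overrides the TTL of all connections; -1 means connections never time out -->
<connection-ttl-override>-1</connection-ttl-override>
```

The usual caveat with a TTL of -1 is that if a remote client dies without closing its connection, the server keeps its resources forever; for purely in-vm connections, as in this question, that risk is largely moot.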

HAProxy : Prevent stickiness to a backup server

I'm facing a configuration issue with HAProxy (1.8).
Context:
In an HAProxy config, I have several servers in a backend, and an additional backup server in case the other servers are down.
Once a client gets an answer from a server, it must stick to this server for its next queries.
For good reasons, I can't use a cookie for this, so I had to use a stick-table instead.
Problem:
When every "normal" server is down, clients are redirected to the backup server, as expected.
BUT the stick-table is then filled with an association between the client and the id of the backup server.
AND when every "normal" server is back, the clients which are present in the stick table and associated with the id of the backup server will continue to get redirected to the backup server instead of the normal ones!
This is really upsetting me...
So my question is: how do I prevent HAProxy from sticking clients to a backup server in a backend?
Please find below a configuration sample:
defaults
    option redispatch

frontend fe_test
    bind 127.0.0.1:8081
    stick-table type ip size 1m expire 1h
    acl acl_test hdr(host) -i whatever.domain.com
    ...
    use_backend be_test if acl_test
    ...

backend be_test
    mode http
    balance roundrobin
    stick on hdr(X-Real-IP) table fe_test
    option httpchk GET /check
    server test-01 server-01.lan:8080 check
    server test-02 server-02.lan:8080 check
    server maintenance 127.0.0.1:8085 backup
(I've already tried to add a lower weight to the backup server, but it didn't solve this issue.)
I read in the documentation that the "stick on" keyword has "if/unless" options; maybe I could use them to write a condition based on the backend server names, but I have no clue about the syntax to use, or even whether it is possible.
Any idea is welcome!
So silly of me! I was so obsessed by the stick table configuration that I didn't think to look in the server options...
There is a simple keyword that perfectly solves my problem: non-stick
Never add connections allocated to this server to a stick-table. This
may be used in conjunction with backup to ensure that stick-table
persistence is disabled for backup servers.
So the last line of my configuration sample simply becomes:
server maintenance 127.0.0.1:8085 backup non-stick
...and everything is now working as I expected.

How does pgbouncer handle session parameters in transaction pooling mode?

I connect to PostgreSQL via JDBC through pgbouncer with transaction pooling mode enabled. As far as I know, in this mode pgbouncer can share the same server connection between several clients without breaking the session, so several clients may work within a single session sequentially, one after another. The question is: does pgbouncer take care of resetting session parameters when it detaches one client from a connection and attaches another client to it?
In particular, my app gets a connection and then issues something like this:
executeQuery(connection,"select set_config('myapp.user','fonar101',false)");
..../*other actions*/
commit(connection);
After the commit, pgbouncer can detach my app from the connection and return it to its pool, right? So:
if I issue further statements after the commit, they may be executed within another session, with incorrect values for the session parameters;
pgbouncer can attach another client to that connection, and that client will proceed with incorrect session settings as well.
How does pgbouncer handle these things?
I'd say the opposite:
https://pgbouncer.github.io/config.html
transaction
Server is released back to pool after transaction finishes.
Which means that when you SET SESSION (the default for SET) without using SET LOCAL, you change settings for every later transaction that happens to share that server connection in the pool...
According to the pgbouncer docs, transaction pooling does not support SET/RESET, nor ON COMMIT DROP for temp tables.
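Given that, the safe pattern under transaction pooling is to scope the setting to the transaction rather than the session, either with SET LOCAL or by passing true as the third (is_local) argument of set_config, so the setting is reverted at COMMIT before pgbouncer hands the server connection to anyone else. A sketch mirroring the question's snippet:

```sql
BEGIN;
-- third argument true = is_local: the setting only lasts until COMMIT/ROLLBACK
SELECT set_config('myapp.user', 'fonar101', true);
-- ... other statements in the same transaction can read
--     current_setting('myapp.user') safely ...
COMMIT;  -- the setting is reverted here, before the connection is reused
```

Anything that must outlive the transaction (session-level SET, prepared statements, temp tables without ON COMMIT DROP) simply does not mix with transaction pooling and needs session pooling instead.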

How to clear memcached on CentOS?

I have a CentOS server running memcached. I use the spymemcached client to store key/value pairs in the cache.
I need to clear all cache values and keys.
Do I do this on the client or server side?
I have tried the telnet method of issuing flush_all.
I have tried, on the spymemcached side:
flush();
Neither of these appears to clear the cache.
Can someone please help?
flush(); invalidates all of the items in the memcached cache; it does not free the memory memcached has already allocated, which is why memory stats can still show usage and make it look as though only a restart of the memcached server truly clears it. Technically you are safe just calling flush();: the invalidated items can no longer be read, and memcached will reuse the allocated memory whenever it needs to.
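For reference, this is what a successful flush looks like over the text protocol (a sample telnet session against the default port 11211; get returning only END shows the item is gone even though memory remains allocated):

```
$ telnet localhost 11211
set greeting 0 0 5
hello
STORED
flush_all
OK
get greeting
END
quit
```

If a get still returns data after flush_all, the client is most likely talking to a different memcached instance than the one you flushed.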