Memcached keys distribution

I'm using the xmemcached client.
Is there a mechanism, or a best practice, for controlling the distribution of keys across servers from the client side? That is, I want to make sure that all keys following a certain custom pattern are stored on server 1, while the others go to server 2.
A simple example: all my keys starting with 'a' should go into server 1 while the others into server 2.
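For illustration, the simplest way to get this from the client side is to run one client per server and route each key yourself. Below is a minimal sketch using xmemcached's standard builder and get/set calls; the server addresses are placeholders, and the "starts with 'a'" rule mirrors the example above:

```java
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.utils.AddrUtil;

// Sketch: wrap two single-server clients and pick one per key.
public class PatternRoutingCache {
    private final MemcachedClient server1;
    private final MemcachedClient server2;

    public PatternRoutingCache(String addr1, String addr2) throws Exception {
        this.server1 = new XMemcachedClientBuilder(AddrUtil.getAddresses(addr1)).build();
        this.server2 = new XMemcachedClientBuilder(AddrUtil.getAddresses(addr2)).build();
    }

    private MemcachedClient route(String key) {
        return key.startsWith("a") ? server1 : server2; // custom pattern goes here
    }

    public void set(String key, int expireSeconds, Object value) throws Exception {
        route(key).set(key, expireSeconds, value);
    }

    public Object get(String key) throws Exception {
        return route(key).get(key);
    }
}
```

xmemcached also lets you replace its key-to-server hashing strategy (the session locator) to achieve the same routing inside a single client, but the wrapper above is the easiest to reason about.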

Related

Is it legitimate to insert UUIDs into Postgres that have been generated by a client application?

The normal MO for creating items in a database is to let the database control the generation of the primary key (id). That's usually true whether you're using auto-incremented integer ids or UUIDs.
I'm building a client-side app (Angular, but the tech is irrelevant) that I want to be able to build offline behaviour into. In order to allow offline object creation (and association), I need the client application to generate primary keys for new objects. This is both to allow for associations with other objects created offline and to allow for idempotence (making sure I don't accidentally save the same object to the server twice due to a network issue).
The challenge, though, is what happens when that object gets sent to the server. Do you use a temporary client-side ID which you then replace with the ID that the server subsequently generates, or do you use some sort of ID translation layer between the client and the server? The latter is what Trello did when building their offline functionality.
However, it occurred to me that there may be a third way. I'm using UUIDs for all tables on the back end. And so this made me realise that I could in theory insert a UUID into the back end that was generated on the front end. The whole point of UUIDs is that they're universally unique so the front end doesn't need to know the server state to generate one. In the unlikely event that they do collide then the uniqueness criteria on the server would prevent a duplicate.
Is this a legitimate approach? The risks seem to be 1) collisions and 2) any form of security issue that I haven't anticipated. Collisions seem to be taken care of by the way that UUIDs are generated, but I can't tell if there are risks in allowing a client to choose the ID of an inserted object.
Yes, this is fine. Postgres even has a UUID type.
Set the default ID to be a server-generated UUID if the client does not send one.
Collisions.
UUIDs are designed to not collide.
Any form of security that I haven't anticipated.
Avoid UUIDv1 because...
This involves the MAC address of the computer and a time stamp. Note that UUIDs of this kind reveal the identity of the computer that created the identifier and the time at which it did so, which might make it unsuitable for certain security-sensitive applications.
You can instead use uuid_generate_v1mc which obscures the MAC address.
Avoid UUIDv3 because it uses MD5. Use UUIDv5 instead.
UUIDv4 is the simplest: it's a 122-bit random number, and it's built into Postgres (the others are in the commonly available uuid-ossp extension). However, it depends on the strength of each client's random number generator. But even a bad UUIDv4 generator is better than an incrementing integer.
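To make the client-generated-ID approach concrete, here is a minimal sketch in Java; the connection string and the items table are hypothetical, java.util.UUID.randomUUID() produces a v4 UUID, and the Postgres JDBC driver maps java.util.UUID to a uuid column via setObject:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class ClientGeneratedId {
    public static void main(String[] args) throws Exception {
        // Generated on the client, possibly while offline.
        UUID id = UUID.randomUUID(); // version 4: random

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret");
             // ON CONFLICT (id) DO NOTHING makes the insert idempotent:
             // replaying it after a network failure cannot create a duplicate.
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO items (id, name) VALUES (?, ?) ON CONFLICT (id) DO NOTHING")) {
            ps.setObject(1, id); // pgjdbc sends this as the uuid type
            ps.setString(2, "example");
            ps.executeUpdate();
        }
    }
}
```

The ON CONFLICT clause is what buys the idempotence the question asks about: saving the same offline-created object twice is a harmless no-op.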

Are there any lightweight OPC servers which can have multiple instances running on the same machine?

I want to run performance testing of an OPC client against 100 servers, but I do not want to run 100 machines.
Is there any way to set this up for testing purposes?
If they are OPC-DA (or A&E, HDA) servers, and you are really interested in just the performance of the client part, and do not mind actually using a single OPC server that just "looks" like 100 of them, there is a chance that you can use the following trick:
Using REGEDIT, under HKEY_CLASSES_ROOT, find an entry with the ProgID of your server. Its purpose is to point to the server's CLSID.
Make 100 copies of this entry, each time giving it a different ProgID, but pointing still to the same CLSID.
Configure your client to connect to 100 different OPC servers, using the ProgIDs you created.
If your client uses just ProgIDs for recognizing the "identity" of the server, it will think it is connecting to 100 different servers, and it will treat them as such, i.e. create 100 separate connections. If, however, the client first converts the ProgID to a CLSID, it will see the same CLSID, and this won't work. Obviously, even if this works, the performance test will be skewed as far as the server side is concerned, because there will be just 1 server process, not 100 of them.
Note that CLSID cannot be changed in this way, because it is also hard-coded in the COM class factory inside the server.
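For illustration, the duplicated entries from the steps above would look something like this in .reg form; the ProgID names and the CLSID are made up, and the only thing that matters is that every alias points at the one real CLSID:

```
Windows Registry Editor Version 5.00

; Alias #2: a new ProgID that points at the original server's CLSID.
[HKEY_CLASSES_ROOT\MyOpc.Server.Alias2]
@="My OPC Server (alias 2)"

[HKEY_CLASSES_ROOT\MyOpc.Server.Alias2\CLSID]
@="{11111111-2222-3333-4444-555555555555}"
```

Repeat for Alias3 through Alias100, all carrying the same CLSID value.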
Another wild idea that comes to my mind is to use a product such as an OPC tunneler (e.g. MatrikonOPC). In such a product you can create any number of "servers", which in fact connect by tunneling to a true OPC server somewhere else. Therefore you might be able to configure 100 tunnels (looking like separate servers), all connecting to 1 target OPC server.

Couldn't retrieve all the memcache keys via telnet client

I want to list all the keys stored in the memcached server.
I googled and found some Python/PHP scripts that claim to list them. I tested them, but they all failed: none gave me the full set of keys. I can see thousands of keys using the telnet command
stats items
I also used a Perl script that lists keys via telnet, but that failed too; it lists keys, but not all of them.
Do I need to reconfigure telnet? Is there any other way?
memcached does not provide an API to exhaustively list all keys. "stats items" is as good as it gets, and it only lists roughly the first 1M of keys. More info here: http://www.darkcoding.net/software/memcached-list-all-keys/
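For what it's worth, here is a minimal Java sketch of that partial listing over the text protocol: "stats items" finds the slab classes that hold items, and the undocumented "stats cachedump" pulls a bounded number of keys from each one. The host, port, and per-slab limit are placeholders, and the output is inherently incomplete:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.LinkedHashSet;
import java.util.Set;

public class DumpKeys {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket("localhost", 11211);
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {

            // 1. Collect slab class ids from lines like "STAT items:<slab>:number <n>".
            out.print("stats items\r\n");
            out.flush();
            Set<String> slabs = new LinkedHashSet<>();
            String line;
            while ((line = in.readLine()) != null && !line.equals("END")) {
                slabs.add(line.split(":")[1]);
            }

            // 2. Dump at most 100 keys per slab. "stats cachedump" is undocumented
            //    and truncates its output, so this can never be a complete listing.
            for (String slab : slabs) {
                out.print("stats cachedump " + slab + " 100\r\n");
                out.flush();
                while ((line = in.readLine()) != null && !line.equals("END")) {
                    // Lines look like: ITEM <key> [<size> b; <expiry> s]
                    System.out.println(line.split(" ")[1]);
                }
            }
        }
    }
}
```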
Not sure if that helps you, but Redis (which could be considered a superset of memcached) provides a more comprehensive API for key listing and searching. You might want to give it a try.
If you use python-memcached and would like to export all the items in a memcached server, I summarized two methods for this problem in this question: Export all keys and values from memcached with python-memcache

Best practices for personal private keys

I'm just starting to use RSA keys in my daily work, and I have a few questions regarding the best ways to use them.
The biggest question revolves around the idea of multiple clients and multiple servers. Here's a scenario:
I have two client computers:
Desktop
Laptop
And there are two servers which I will be authenticating:
My own local server
Remote service (e.g. Github)
So, generally, how many key-pairs would you recommend in this situation?
One key-pair: This key is "Me" and I use it everywhere.
One per client: This key is "This client" and I put it on each server I mean to connect to from that client.
One key-pair per server: This is the key "for this service", and I bring the private key to each client I want to connect to it from.
One for every combination: Every unique client-server pairing has its own key-pair.
If none of these is significantly better or worse than the others, can you outline the pros and cons of each so that a person can choose for themselves?
Of your four options, the two I like are:
One per client: This key is "This client" and I put it on each server I mean to connect to from that client.
This gives you the easy ability to revoke all keys for a specific client in the event it is compromised -- delete the one key on every service. It also only scales linearly in the number of clients, which will probably make key management easier. It even fits neatly with the OpenSSH key model, which is to give every client one key that is used on multiple servers. (You can do other models with OpenSSH, which is nice. But this is the easiest thing to do as it happens without any effort on your part.)
One for every combination: Every unique client-server pairing has its own key-pair.
This has the downside of forcing you to revoke multiple keys when a single client is compromised, but it'll be one key per service anyway, so it isn't significantly worse. The better upside is that it'll be significantly harder for one service to serve as a middleman between you and another service. This is not a real concern most of the time, but if your (Laptop,Server,SMTP) key were suddenly being used for (Laptop,Server,SSH), you'd have some opportunity to notice the oddity. I'm not sure this ability is worth the quadratic increase in keys to manage.
The usual way to do this is your "One per client" option. That way, in case of a compromised client key, you can revoke just that key from the servers where it is allowed. If you want extra work, you can do "One for every combination".
The above options avoid copying private key data between hosts.
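To make the revocation story concrete: with the "one per client" model, each server's ~/.ssh/authorized_keys holds one line per client, and revoking a compromised laptop means deleting its single line on every server. The key material and comments below are placeholders:

```
# ~/.ssh/authorized_keys on every server: one line per client key
ssh-ed25519 AAAAC3...desktop-key... me@desktop
ssh-ed25519 AAAAC3...laptop-key... me@laptop
```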

Implementing Key-value server

I found a question, and it is: implement a key-value server.
User should be able to connect to server and be able to run command SET a = b.
On running command GET a, it should print b.
First of all, I didn't really understand what the question is all about.
In its simplest form, a Key-Value server is nothing more than a server that holds keys in a dictionary structure and associates a value with each key.
If it helps, you can think of a key as a variable name in a programming language or as an environment variable in the bash shell.
A client to the Key-Value server would either tell the server what value the key has, or request the current value of the key from the server.
As Ramon mentioned in his comment, memcached.org is one such example of a Key-Value server.
Of course, the server can be much more complex than what I described above. Values could be more than just simple scalars (for instance, objects), and the server/client could have a lot more functionality than the basic set/get.
Note that the term Key-Value server is very broad and doesn't mean anything concrete by itself. NoSQL systems make use of key-value stores, for example, so you could technically call any NoSQL database system a Key-Value server.
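To make the exercise concrete, here is a minimal sketch of such a server in Java. It assumes a line-based TCP protocol whose only commands are the two from the question (SET a = b and GET a); the port number and response strings are arbitrary choices:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class KeyValueServer {
    // The dictionary structure described above: key -> value.
    private static final Map<String, String> store = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(4000)) { // arbitrary port
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.trim().split("\\s+");
                if (parts.length == 4 && parts[0].equalsIgnoreCase("SET")
                        && parts[2].equals("=")) {
                    store.put(parts[1], parts[3]);          // SET a = b
                    out.println("OK");
                } else if (parts.length == 2 && parts[0].equalsIgnoreCase("GET")) {
                    String value = store.get(parts[1]);     // GET a -> prints b
                    out.println(value != null ? value : "NOT_FOUND");
                } else {
                    out.println("ERROR unknown command");
                }
            }
        } catch (Exception ignored) {
            // client disconnected; nothing to clean up in this sketch
        }
    }
}
```

You can exercise it with plain telnet: connect to port 4000, type SET a = b, then GET a, and the server prints b.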