Creating regions in Memcache using java - memcached

I just installed memcached on my machine, and I need to create regions in it. For example, there should be 3 regions, each storing its own set of data. I am not sure how to do that in memcached. Can anyone please help or give an example, as it is urgent?
Thanks in advance

Memcached itself is just a service that runs on your server. It can be connected to, and commands sent to it, over a text-based protocol. Once the service is running, you can use a tool such as telnet or netcat to interact with it.
As for accessing it from Java, you'll probably want a library to do most of the work for you. There were a few listed on this question: Java Memcached Client
Now as for your regions: memcached is basically a key/value table and has no built-in notion of regions, so the usual approach is to namespace your keys with a region prefix. To set a value you'd do something like memcached.set("key", yourData) and to get it back you'd do something like yourData = memcached.get("key")
Note that these functions will vary depending on which library you are using.
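For illustration, here is a rough sketch of how three "regions" could be simulated with key prefixes, assuming the spymemcached client; the host, port, key names and values below are placeholders:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class RegionExample {
    public static void main(String[] args) throws Exception {
        // Connect to a local memcached instance (host/port are assumptions).
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // memcached has no real "regions", so prefix the keys instead.
        client.set("region1:user:42", 3600, "data for region 1");
        client.set("region2:user:42", 3600, "data for region 2");
        client.set("region3:user:42", 3600, "data for region 3");

        // Read a value back from one "region".
        Object value = client.get("region2:user:42");
        System.out.println(value);

        client.shutdown();
    }
}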

Related

Query node-label topology from Yarn via REST API [MapR 6.1/Hadoop-2.7]

There are a Java and a CLI interface for querying the Yarn RM for node-to-node-label (and inverse) mappings. Is there a way to do this via the REST API as well?
An initial search of the RM API revealed only node-label-based job submission as an option.
Sadly, that is actually broken in MapR-Hadoop (6.1 as of 6/6/19), so my code has to work around it by implementing the correct scheduling itself. This works (barely; more broken APIs here as well) using the YarnClient Java API.
But since I want to schedule jobs against different resource managers at the same time, behind firewalls, the REST API is the most compelling option, and the YarnClient API's RPC backend can't easily be transported.
My current worst-case solution would be to parse the YARN-WebUI in some way.
The only solution I found so far:
Request /ws/v1/cluster/nodes - this gets you all nodes.
FlatMap/Distinct on each node's nodeLabels, if you need just the list of node labels. Filter by nodeLabel, if you need all nodes for a specified label.
This does mean that you always have to query all nodes and then sort/filter/arrange by node labels, which is a lot of client-side work. But apparently there's no GetNodesToLabel or even GetClusterNodeLabels to help us out.
I assume getLabelsToNodes is just a client-side implementation, as the protocol doesn't define the API, so that's right out the window for REST, unless implemented in the WebService.
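A minimal Java sketch of that client-side aggregation, using the JDK 11 HttpClient and Jackson; the RM address is a placeholder, and the nodeLabels field name follows the /ws/v1/cluster/nodes response described above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HashSet;
import java.util.Set;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NodeLabelQuery {
    public static void main(String[] args) throws Exception {
        // RM address is a placeholder; adjust to your cluster.
        String url = "http://resourcemanager:8088/ws/v1/cluster/nodes";

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).header("Accept", "application/json").build(),
                HttpResponse.BodyHandlers.ofString());

        JsonNode nodes = new ObjectMapper().readTree(resp.body()).path("nodes").path("node");

        // Distinct set of labels across all nodes; filter here instead if you
        // need all nodes carrying one specific label.
        Set<String> labels = new HashSet<>();
        for (JsonNode node : nodes) {
            for (JsonNode label : node.path("nodeLabels")) {
                labels.add(label.asText());
            }
        }
        System.out.println("Cluster node labels: " + labels);
    }
}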

LISTEN on all channels in PostgreSQL

I'd like to forward all notifications from PostgreSQL into task queues in RabbitMQ named the same as the channel given in NOTIFY channel. Does PostgreSQL have something that would act like LISTEN *?
Inspecting the source for Skeeter, it seems that PQnotifies might be of interest. PostgreSQL's documentation on libpq also mentions PQconsumeInput as a way to consume input from the server. From the documentation:
PQconsumeInput normally returns 1 indicating "no error", but returns 0 if there was some kind of trouble (in which case PQerrorMessage can be consulted). Note that the result does not say whether any input data was actually collected. After calling PQconsumeInput, the application can check PQisBusy and/or PQnotifies to see if their state has changed.
Am I on the right path? Since I'm using .NET, I'd prefer not to write any C, so any suggestions are welcome.
I've tried pgsql-listen-exchange but either I'm doing something wrong or the plugin doesn't work for RabbitMQ 3.6 (there's only a 3.5 release). I created an issue.
Specific to RabbitMQ: As an alternative to listening for everything from PostgreSQL, I guess I could create an exchange and have something poll that for queues and just create a listener for each queue. Will be looking into this as well.
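For reference, this is roughly what the notification-consuming loop looks like on the JVM with the PostgreSQL JDBC driver, where getNotifications plays the role of PQconsumeInput/PQnotifies (connection details and channel names are placeholders, and since there is no LISTEN *, every channel must be registered explicitly; Npgsql exposes an equivalent mechanism for .NET):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class NotificationRelay {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "user", "secret");
        PGConnection pgConn = conn.unwrap(PGConnection.class);

        // No LISTEN *, so each channel of interest has to be listed explicitly.
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN channel_a");
            st.execute("LISTEN channel_b");
        }

        while (true) {
            // Waits up to 10 seconds for notifications from the server.
            PGNotification[] notifications = pgConn.getNotifications(10_000);
            if (notifications == null) {
                continue;
            }
            for (PGNotification n : notifications) {
                // n.getName() is the channel, n.getParameter() the payload;
                // here you would publish to the RabbitMQ queue named after the channel.
                System.out.println(n.getName() + ": " + n.getParameter());
            }
        }
    }
}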

Are there any lightweight OPC servers which can have multiple instances running on the same machine?

I want to run performance testing of an OPC client against 100 servers, but I do not want to run 100 machines.
Is there any way to set this up for testing purposes?
If they are OPC-DA (or A&E, HDA) servers, and you are really interested in just the performance of the client part, and do not mind actually using a single OPC server that just "looks" like 100 of them, there is a chance that you can use the following trick:
Using REGEDIT, under HKEY_CLASSES_ROOT, find an entry with the ProgID of your server. Its purpose is to point to the server's CLSID.
Make 100 copies of this entry, each time giving it a different ProgID, but pointing still to the same CLSID.
Configure your client to connect to 100 different OPC servers, using the ProgIDs you created.
If your client uses just ProgIDs to identify the server, it will think it is connecting to 100 different servers and will treat them as such, i.e. create 100 separate connections. If, however, the client first converts the ProgID to a CLSID, it will see the same CLSID and the trick won't work. Obviously, even if this works, the performance test will be skewed as far as the server side is concerned, because there will be just 1 server process, not 100 of them.
Note that CLSID cannot be changed in this way, because it is also hard-coded in the COM class factory inside the server.
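To make the registry trick concrete, the duplicated entries would look roughly like this (the ProgIDs and the CLSID value below are made-up placeholders; only the ProgID name changes, while every copy points to the same CLSID):

HKEY_CLASSES_ROOT\MyVendor.OpcServer.1\CLSID    (Default) = {11111111-2222-3333-4444-555555555555}
HKEY_CLASSES_ROOT\MyVendor.OpcServer.2\CLSID    (Default) = {11111111-2222-3333-4444-555555555555}
...
HKEY_CLASSES_ROOT\MyVendor.OpcServer.100\CLSID  (Default) = {11111111-2222-3333-4444-555555555555}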
Another wild idea that comes to my mind is to use a product such as an OPC Tunneler (e.g. MatrikonOPC). In such a product you can create any number of "servers", which in fact connect by tunneling to a true OPC server somewhere else. Therefore you might be able to configure 100 tunnels (each looking like a separate server), all connecting to 1 target OPC server.

Couldn't retrieve all the memcache keys via telnet client

I want to list all the keys stored in the memcached server.
I googled and found some Python/PHP scripts that claim to list them. I tested them, but they all failed and none gave me the full set of keys. I can see thousands of keys using the telnet command
stats items
I also used a Perl script that lists keys over telnet, but that failed too; it lists keys, but not all of them.
Do I need to reconfigure telnet ? Is there any other way ?
memcached does not provide an API to exhaustively list all keys. "stats items" is as good as it gets, and it only covers roughly the first 1M of keys. More info here: http://www.darkcoding.net/software/memcached-list-all-keys/
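Purely for illustration, here is a rough Java sketch of what those listing scripts do under the hood, talking to the text protocol directly. "stats cachedump" is an undocumented, size-limited command, so the output is inherently incomplete; host, port and the per-slab limit are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class DumpKeys {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 11211);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {

            // 1. Find the slab ids that currently hold items.
            out.print("stats items\r\n");
            out.flush();
            List<String> slabs = new ArrayList<>();
            String line;
            while (!(line = in.readLine()).equals("END")) {
                // Lines look like: STAT items:<slab>:number <count>
                if (line.startsWith("STAT items:") && line.contains(":number ")) {
                    slabs.add(line.split(":")[1]);
                }
            }

            // 2. Dump a limited number of keys per slab.
            for (String slab : slabs) {
                out.print("stats cachedump " + slab + " 100\r\n");
                out.flush();
                while (!(line = in.readLine()).equals("END")) {
                    // Lines look like: ITEM <key> [<bytes> b; <expiry> s]
                    System.out.println(line.split(" ")[1]);
                }
            }
        }
    }
}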
Not sure if that helps you, but Redis (which could be considered a superset of memcached) provides a more comprehensive API for key listing and searching. You might want to give it a try.
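If you do go the Redis route, key enumeration is a first-class operation; a minimal sketch, assuming the Jedis 3.x client and local default host/port:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class ListRedisKeys {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // SCAN walks the keyspace incrementally instead of blocking the server like KEYS * would.
            String cursor = ScanParams.SCAN_POINTER_START; // "0"
            do {
                ScanResult<String> page = jedis.scan(cursor, new ScanParams().match("*").count(100));
                page.getResult().forEach(System.out::println);
                cursor = page.getCursor();
            } while (!"0".equals(cursor));
        }
    }
}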
If you use python-memcached and would like to export all the items in a memcached server, I summarized two methods for this in this question: Export all keys and values from memcached with python-memcache

MSMQ querying for a specific message

I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in a Windows Service) -> MSMQ
So basically the WCF service takes the requests, processes them, adds them to an INPUT queue and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it into another queue (OUTPUT).
The problem is how to retrieve the result from the OUTPUT queue when a client requests it, because MSMQ does not allow random access to its messages, and the only solution would be to iterate through all messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your OUTPUT queue for your message by using:
var mq = new MessageQueue(outputQueueName);
var msg = mq.PeekById(yourId);   // inspects the message without removing it
Or, to receive (and remove) it by Id:
var msg = mq.ReceiveById(yourId);
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is a "random access" data structure. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there was a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, ie. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to retrieve that same message, you can pick it up with ReceiveByCorrelationId as follows:
message = queue.ReceiveByCorrelationId(correlationId);
Note that CorrelationId is a string with the following format:
Guid\Number (a GUID, a backslash, and a sequence number)