How to use consistent hashing in the memcached C client? - memcached

I am using the libmemcached C client to set and get memcached values.
#include <stdio.h>
#include <string.h>
#include <libmemcached/memcached.h>

int main(void)
{
    memcached_server_st *servers = NULL;
    memcached_st *memc;
    memcached_return rc;
    const char *key = "keystring";
    const char *value = "keyvalue";
    // memcached_server_st *memcached_servers_parse (char *server_strings);
    memc = memcached_create(NULL);
    servers = memcached_server_list_append(servers, "localhost", 5555, &rc);
    servers = memcached_server_list_append(servers, "localhost", 5566, &rc);
    rc = memcached_server_push(memc, servers);
    if (rc == MEMCACHED_SUCCESS)
        fprintf(stderr, "Added server successfully\n");
    else
        fprintf(stderr, "Couldn't add server: %s\n", memcached_strerror(memc, rc));
    rc = memcached_set(memc, key, strlen(key), value, strlen(value), (time_t)0, (uint32_t)0);
    if (rc == MEMCACHED_SUCCESS)
        fprintf(stderr, "Key stored successfully\n");
    else
        fprintf(stderr, "Couldn't store key: %s\n", memcached_strerror(memc, rc));
    return 0;
}
I want to use a consistent hashing algorithm when setting and getting keys.
http://docs.libmemcached.org/memcached_behavior.html#memcached_behavior_setlink
But I don't know how to implement this. Code snippets or reference links are very much appreciated.
Thanks in advance.

Go to the link http://docs.libmemcached.org/memcached_behavior.html#memcached_behavior_setlink
There you can see two methods to do it.
First one:
MEMCACHED_BEHAVIOR_DISTRIBUTION
Using this you can enable different means of distributing values to servers.
The default method is MEMCACHED_DISTRIBUTION_MODULA. You can enable consistent hashing by setting MEMCACHED_DISTRIBUTION_CONSISTENT. Consistent hashing delivers better distribution and allows servers to be added to the cluster with minimal cache losses. Currently MEMCACHED_DISTRIBUTION_CONSISTENT is an alias for the value MEMCACHED_DISTRIBUTION_CONSISTENT_KETAMA.
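For example, a minimal sketch (reusing the memc handle from the question's snippet) that enables this behavior might look like:
memcached_return rc;
rc = memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_DISTRIBUTION,
                            MEMCACHED_DISTRIBUTION_CONSISTENT);
if (rc != MEMCACHED_SUCCESS)
    fprintf(stderr, "Couldn't set distribution: %s\n", memcached_strerror(memc, rc));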
Second one:
MEMCACHED_BEHAVIOR_KETAMA
Sets the default distribution to MEMCACHED_DISTRIBUTION_CONSISTENT_KETAMA and the hash to MEMCACHED_HASH_MD5.
Example:
memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_KETAMA, 1);
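To confirm it took effect, a small sketch (again assuming the memc handle from the question) can read the behaviors back:
uint64_t dist = memcached_behavior_get(memc, MEMCACHED_BEHAVIOR_DISTRIBUTION);
uint64_t hash = memcached_behavior_get(memc, MEMCACHED_BEHAVIOR_HASH);
if (dist == MEMCACHED_DISTRIBUTION_CONSISTENT_KETAMA && hash == MEMCACHED_HASH_MD5)
    fprintf(stderr, "Consistent (ketama) hashing with MD5 is enabled\n");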

Related

Pedersen circom/circomlibjs inconsistency?

As a unit test for a larger use case, I am checking that the Pedersen hash I compute in the frontend matches the hash computed by a circom circuit. I use a simple assert in the circuit, generate a witness, and feed both the hashed and unhashed values to the circuit, recreating the hash to make sure the assert goes through.
I am running the Pedersen hash in my frontend using circomlibjs. As a unit test, I have a circuit with a simple assert that checks whether the result from my frontend lines up with the Pedersen hash computed in the circom circuit.
The circuit I am using:
include "../node_modules/circomlib/circuits/bitify.circom";
include "../node_modules/circomlib/circuits/pedersen.circom";
template check() {
signal input unhashed;
signal input hashed;
signal output createdHash[2];
component hasher = Pedersen(256);
component unhashedBits = Num2Bits(256);
unhashedBits.in <== unhashed;
for (var i = 0; i < 256; i++){
hasher.in[i] <== unhashedBits.out[i];
}
createdHash[0] <== hasher.out[0];
createdHash[1] <== hasher.out[1];
hashed === createdHash[1];
}
component main = check();
In the frontend, I am running the following,
import { buildPedersenHash } from 'circomlibjs';

export function buff2hex(buff) {
    function i2hex(i) {
        return ('0' + i.toString(16)).slice(-2);
    }
    return '0x' + Array.from(buff).map(i2hex).join('');
}

const secret = (new TextEncoder(32)).encode("Hello");
var pedersen = await buildPedersenHash();
var h = pedersen.hash(secret);
console.log(buff2hex(secret));
console.log(buff2hex(h));
The values that are printed are:
0x48656c6c6f
0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b
Which are consistent with the test done here.
So I then create an input.json file which looks as follows,
{
"unhashed": "0x48656c6c6f",
"hashed": "0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b"
}
And lastly run the following script to create a witness, in the hopes that the assert will go through.
# Compile the circuit
circom ${CIRCUIT}.circom --r1cs --wasm --sym --c
# Generate the witness.wtns
node ${CIRCUIT}_js/generate_witness.js ${CIRCUIT}_js/${CIRCUIT}.wasm input.json ${CIRCUIT}_js/witness.wtns
However, I keep getting an assert error,
Error: Error: Assert Failed.
Error in template check_11 line: 26
This refers to the assert in the circuit, so I assume there is an inconsistency between the hashes.
I am new to circom so any insights would be greatly appreciated!
For anyone who stumbles across this: it turns out the cause of the issue is endianness. The issue was fixed by converting the unhashed input to little-endian. I am not sure exactly where the problem lies, but it seems the hasher on the frontend reads the value as big-endian while the circuit expects little-endian input (or vice versa).
As I have managed to patch together a fix for the moment, I will stop investigating, but I encourage anyone who understands this better to give a fuller explanation.

DPDK Hash failing to lookup data from secondary process

Failing to use existing rte Hash from secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    //rte_hash_free(h);
} else {
    h = rte_hash_create(&params);
}
// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The primary and secondary processes are different binaries but linked to the same DPDK library.
The key is added in the primary as follows:
struct cdev_key {
    uint64_t len;
};
struct cdev_key key = { 0 };
if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in secondary as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data (h,&key,&data);
With DPDK version 19.02, I am able to run 2 separate binaries without issues.
[EDIT-1] Based on the update in the ticket, I am able to look up the hash entry added from the primary in the secondary process.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
Note: if using rte_hash_lookup, please remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to lookup for hash test, else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix results in the assert logic being hit.
Note: DPDK 19.02 has an inherent bug where it does not clean up /var/run/dpdk/; hence it is recommended to use 19.11.2 LTS.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;
struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));
As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes. As a result, the hashing function is not available in the secondary process. The suggested workaround is to do the hashing in the application code itself and have both processes access the hash table with the precomputed hash value (via the *_with_hash variants) instead of the functions that hash internally.
From DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.
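A minimal sketch of that workaround, using the *_with_hash_data variants (to match the add_key_data/lookup_data calls above) and assuming rte_jhash with an init value of 0 as configured in Code-1; the precomputed signature is passed in, so neither process has to go through the table's internal hash_func pointer:
#include <stdio.h>
#include <rte_hash.h>
#include <rte_jhash.h>
#include <rte_errno.h>

struct cdev_key {
    uint64_t len;
};

/* Primary: hash the key by calling rte_jhash directly, then insert it
 * together with the precomputed signature. */
static int add_entry(struct rte_hash *h, void *data)
{
    struct cdev_key key = { 0 };
    hash_sig_t sig = rte_jhash(&key, sizeof(key), 0); /* 0 == hash_func_init_val */

    if (rte_hash_add_key_with_hash_data(h, &key, sig, data) < 0) {
        fprintf(stderr, "add failed: %s\n", rte_strerror(rte_errno));
        return -1;
    }
    return 0;
}

/* Secondary: recompute the signature locally instead of letting the
 * library dereference the (invalid) hash_func pointer. */
static int find_entry(const struct rte_hash *h, void **data)
{
    struct cdev_key key = { 0 };
    hash_sig_t sig = rte_jhash(&key, sizeof(key), 0);

    return rte_hash_lookup_with_hash_data(h, &key, sig, data);
}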

How to set the proper data type when writing to an OPC-UA node in Milo?

I am an OPC-UA newbie integrating a non-OPC-UA system with an OPC-UA server using the Milo stack. Part of this includes writing values to nodes on the OPC-UA server. One of my problems is that values from the other system arrive as Java Strings and thus need to be converted to each node's proper data type. My first brute-force proof of concept uses the code below to create a Variant that I can use to write to the node (as in WriteExample.java). The variable value is the Java String containing the data to write, e.g. "123" for an Integer or "32.3" for a Double. The solution currently hard-codes the "types" from the Identifiers class (org.eclipse.milo.opcua.stack.core, see the switch statement), which is not pretty, and I am sure there is a better way to do this?
Also, how do I proceed if I want to convert and write "123" to a node that is, for example, UInt64?
try {
    VariableNode node = client.getAddressSpace().createVariableNode(nodeId);
    Object val = new Object();
    Object identifier = node.getDataType().get().getIdentifier();
    UInteger id = UInteger.valueOf(0);
    if (identifier instanceof UInteger) {
        id = (UInteger) identifier;
    }
    System.out.println("getIdentifier: " + node.getDataType().get().getIdentifier());
    switch (id.intValue()) {
        // Based on the Identifiers class in org.eclipse.milo.opcua.stack.core
        case 11: // Double
            val = Double.valueOf(value);
            break;
        case 6: // Int32
            val = Integer.valueOf(value);
            break;
    }
    DataValue data = new DataValue(new Variant(val), StatusCode.GOOD, null);
    StatusCode status = client.writeValue(nodeId, data).get();
    System.out.println("Wrote DataValue: " + data + " status: " + status);
    returnString = status.toString();
} catch (Exception e) {
    System.out.println("ERROR: " + e.toString());
}
I've looked at Kevin's response to this thread: How do I reliably write to a OPC UA server? But I'm still a bit lost... Some small code example would really be helpful.
You're not that far off. Every sizable codebase eventually has one or more "TypeUtilities" classes, and you're going to need one here.
There's no getting around the fact that you need to be able to map types in your system to OPC UA types and vice versa.
For unsigned types you'll use the UShort, UInteger, and ULong classes from the org.eclipse.milo.opcua.stack.core.types.builtin.unsigned package. There are convenient static factory methods that make their use a little less verbose:
UShort us = ushort(foo);
UInteger ui = uint(foo);
ULong ul = ulong(foo);
I'll explore the idea of including some kind of type conversion utility for an upcoming release, but even with that, the way OPC UA works means you have to know the DataType of a Node in order to write to it, and in most cases you want to know the ValueRank and ArrayDimensions as well.
You either know these attribute values a priori, obtain them via some other out-of-band mechanism, or read them from the server.

How to work out the 'read/write' functions using libmodbus? (C code)

I want to read/write under the Modbus TCP specification.
So I'm trying to code the client and server in a Linux environment.
(I would communicate with a Windows program (as a client) using Modbus TCP.)
But it doesn't work as I want, so I'm asking here.
I'm testing the Linux client code as the client and easymodbus as the server.
I used the libmodbus code.
I'd like to read coils (0x01) and write a coil (0x05).
When the code is executed using libmodbus, 'ff' is printed in the Unit ID part. (According to the manual, 01 should be output for Modbus TCP.)
I don't know why 'ff' is printed (photo attached).
Wrong result:
Expected result:
'[00] [00] .... [00]' ; Do you know where to control this part?
Do you have, or do you know of, sample code that implements the 'read/write' functions using libmodbus?
Please let me know if you have that information.
ctx = modbus_new_tcp("192.168.0.99", 502);
modbus_set_debug(ctx, TRUE);
if (modbus_connect(ctx) == -1) {
    fprintf(stderr, "Connection failed: %s\n",
            modbus_strerror(errno));
    modbus_free(ctx);
    return -1;
}
tab_rq_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rq_bits, 0, nb * sizeof(uint8_t));
tab_rp_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rp_bits, 0, nb * sizeof(uint8_t));
nb_loop = nb_fail = 0;
/* WRITE BIT */
rc = modbus_write_bit(ctx, addr, tab_rq_bits[0]);
if (rc != 1) {
    printf("ERROR modbus_write_bit (%d)\n", rc);
    printf("Address = %d, value = %d\n", addr, tab_rq_bits[0]);
    nb_fail++;
} else {
    rc = modbus_read_bits(ctx, addr, 1, tab_rp_bits);
    if (rc != 1 || tab_rq_bits[0] != tab_rp_bits[0]) {
        printf("ERROR modbus_read_bits single (%d)\n", rc);
        printf("address = %d\n", addr);
        nb_fail++;
    }
}
printf("Test: ");
if (nb_fail)
    printf("%d FAILS\n", nb_fail);
else
    printf("SUCCESS\n");
free(tab_rq_bits);
free(tab_rp_bits);
/* Close the connection */
modbus_close(ctx);
modbus_free(ctx);
return 0;
That FF you see right before the Modbus function is actually correct. Quoting the Modbus Implementation Guide, page 23:
On TCP/IP, the MODBUS server is addressed using its IP address; therefore, the
MODBUS Unit Identifier is useless. The value 0xFF has to be used.
So libmodbus is just sticking to the Modbus specification. I'm assuming, then, that the problem is in easymodbus, which is apparently expecting you to use 0x01 as the unit id in your queries.
I imagine you don't want to mess with easymodbus, so you can fix this problem pretty easily from libmodbus: just change the default unit id:
modbus_set_slave(ctx, 1);
You could also go with:
rc = modbus_set_slave(ctx, MODBUS_BROADCAST_ADDRESS);
ASSERT_TRUE(rc != -1, "Invalid broadcast address");
to make your client address all slaves within the network, if you have more than one.
You can find more info and a short explanation of where this problem comes from in the libmodbus man page for the modbus_set_slave function.
For a very comprehensive example, you can check the libmodbus unit tests.
And regarding your question number 5, I'm not sure how to answer it; the zeros you mean are supposed to be the states (true or false) you want to write to (or read from) the coils. For writing, you can change them with the value argument of modbus_write_bit(ctx, address, value).
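Putting the pieces together, a minimal sketch (assuming the server at 192.168.0.99:502 from your snippet and coil address 0) that sets the unit id to 1 for easymodbus, writes one coil, and reads it back:
#include <stdio.h>
#include <errno.h>
#include <modbus.h>

int main(void)
{
    uint8_t bit;
    modbus_t *ctx = modbus_new_tcp("192.168.0.99", 502);

    modbus_set_slave(ctx, 1);   /* easymodbus expects unit id 1 instead of 0xFF */
    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }

    /* Function 0x05: write a single coil at address 0 */
    if (modbus_write_bit(ctx, 0, TRUE) == -1)
        fprintf(stderr, "write_bit failed: %s\n", modbus_strerror(errno));

    /* Function 0x01: read one coil back from address 0 */
    if (modbus_read_bits(ctx, 0, 1, &bit) == -1)
        fprintf(stderr, "read_bits failed: %s\n", modbus_strerror(errno));
    else
        printf("coil 0 = %d\n", bit);

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}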
I'm very grateful for your reply.
I tested the read/write functions using the 'unit-test-server/client' code you recommended.
I've reviewed the code, but there are still many things I don't know.
However, after testing the unit-test-server and unit-test-client against each other, some address values work and some do not (do you know why?).
- I checked and found that the UT_BITS_ADDRESS (address) values work from 0x130 to 0x150.
- An 'Illegal data address' error occurs for values below 0x130 and above 0x150.
- The addresses I want to read/write are 0x0001 to 0x0004 (do you know how to do that?).
I want to know how to process and transmit data like the TX part of the right picture.
I'm running both client and server in my Linux environment and I'm doing read/write testing.
In the wrong picture, ...[06][FF]... <-- I want to know how to modify the FF part (to change the value to 01 as shown in the expected picture).
and "modbus_set_slave" is the function for modbus rtu?
I'd like to communicate PC Program and Linux device in the end.
so Which part do I use that function?
I thanks for your concern again.

Node.js: What is the proper way to handle TCP socket streams? Which delimiter should I use?

From what I understood here, "V8 has a generational garbage collector. Moves objects around randomly. Node can't get a pointer to raw string data to write to socket." So I shouldn't store data that comes from a TCP stream in a string, especially if that string becomes bigger than Math.pow(2,16) bytes. (Hope I'm right so far...)
What, then, is the best way to handle all the data coming from a TCP socket? So far I've been trying to use _:_:_ as a delimiter because I think it's somehow unique and won't clash with the rest of the data.
A sample of the data that would come would be something_:_:_maybe a large text_:_:_ maybe tons of lines_:_:_more and more data
This is what I tried to do:
net = require('net');
var server = net.createServer(function (socket) {
    socket.on('connect', function () {
        console.log('someone connected');
        buf = new Buffer(Math.pow(2, 16)); // new buffer with size 2^16
        socket.on('data', function (data) {
            if (data.toString().search('_:_:_') === -1) { // If there's no separator in the data that just arrived...
                buf.write(data.toString()); // ... write it on the buffer. it's part of another message that will come.
            } else { // if there is a separator in the data that arrived
                parts = data.toString().split('_:_:_'); // the first part is the end of a previous message, the last part is the start of a message to be completed in the future. Parts between separators are independent messages
                if (parts.length == 2) {
                    msg = buf.toString('utf-8', 0, 4) + parts[0];
                    console.log('MSG: ' + msg);
                    buf = (new Buffer(Math.pow(2, 16))).write(parts[1]);
                } else {
                    msg = buf.toString() + parts[0];
                    for (var i = 1; i <= parts.length - 1; i++) {
                        if (i !== parts.length - 1) {
                            msg = parts[i];
                            console.log('MSG: ' + msg);
                        } else {
                            buf.write(parts[i]);
                        }
                    }
                }
            }
        });
    });
});
server.listen(9999);
Whenever I try to console.log('MSG' + msg), it prints out the whole buffer, so it's useless for checking whether something worked.
How can I handle this data the proper way ? Would the lazy module work, even if this data is not line oriented ? Is there some other module to handle streams that are not line oriented ?
It has indeed been said that there's extra work going on because Node has to take that buffer and then push it into V8 / cast it to a string. However, doing a toString() on the buffer isn't any better. There's no good solution to this right now, as far as I know, especially if your end goal is to get a string and fool around with it. It's one of the things Ryan mentioned at NodeConf as an area where work needs to be done.
As for delimiter, you can choose whatever you want. A lot of binary protocols choose to include a fixed header, such that you can put things in a normal structure, which a lot of times includes a length. In this way, you slice apart a known header and get information about the rest of the data without having to iterate over the entire buffer. With a scheme like that, one can use a tool like:
node-binary - https://github.com/substack/node-binary
node-ctype - https://github.com/rmustacc/node-ctype
As an aside, buffers can be accessed via array syntax, and they can also be sliced apart with .slice().
Lastly, check here: https://github.com/joyent/node/wiki/modules -- find a module that parses a simple tcp protocol and seems to do it well, and read some code.
You should use the new streams2 API. http://nodejs.org/api/stream.html
Here are some very useful examples: https://github.com/substack/stream-handbook
https://github.com/lvgithub/stick