DPDK hash failing to look up data from secondary process - hash

Failing to use an existing rte hash from the secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    // rte_hash_free(h);
} else {
    h = rte_hash_create(&params);
}
// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The primary and secondary processes are different binaries, but linked against the same DPDK library.
The key is added in the primary process as follows:
struct cdev_key {
    uint64_t len;
};

struct cdev_key key = { 0 };
if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in the secondary process as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data(h, &key, &data);

With DPDK version 19.02, I am able to run the 2 separate binaries without issues.
[EDIT-1] Based on the update in the ticket, I am able to look up a hash entry added from the primary in the secondary process.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
note: if using rte_hash_lookup, remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to lookup for hash test, else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix results in the assert logic being hit.
note: DPDK 19.02 has a bug where /var/run/dpdk/ is not cleaned up; hence 19.11.2 LTS is recommended.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;

struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
    rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));

As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes. As a result, the hash function is not available in the secondary process. The suggested workaround is to do the hashing part in the primary process and have the secondary process access the hash table using the hash value instead of the key.
From DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.

Related

Pedersen circom/circomlibjs inconsistency?

I am running a Pedersen hash in my frontend using circomlibjs. As a unit test for a larger use case, I have a circuit with a simple assert that checks whether the result from my frontend lines up with the Pedersen hash computed in the circom circuit. I generate a witness, feeding both the hashed and unhashed values to the circuit and recreating the hash to make sure the assert goes through.
The circuit I am using:
include "../node_modules/circomlib/circuits/bitify.circom";
include "../node_modules/circomlib/circuits/pedersen.circom";

template check() {
    signal input unhashed;
    signal input hashed;
    signal output createdHash[2];

    component hasher = Pedersen(256);
    component unhashedBits = Num2Bits(256);

    unhashedBits.in <== unhashed;
    for (var i = 0; i < 256; i++) {
        hasher.in[i] <== unhashedBits.out[i];
    }

    createdHash[0] <== hasher.out[0];
    createdHash[1] <== hasher.out[1];
    hashed === createdHash[1];
}

component main = check();
In the frontend, I am running the following:
import { buildPedersenHash } from 'circomlibjs';

export function buff2hex(buff) {
    function i2hex(i) {
        return ('0' + i.toString(16)).slice(-2);
    }
    return '0x' + Array.from(buff).map(i2hex).join('');
}

const secret = (new TextEncoder()).encode("Hello");
const pedersen = await buildPedersenHash();
const h = pedersen.hash(secret);
console.log(buff2hex(secret));
console.log(buff2hex(h));
The values that are printed are:
0x48656c6c6f
0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b
Which are consistent with the test done here.
So I then create an input.json file which looks as follows:
{
    "unhashed": "0x48656c6c6f",
    "hashed": "0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b"
}
And lastly run the following script to create a witness, in the hopes that the assert will go through.
# Compile the circuit
circom ${CIRCUIT}.circom --r1cs --wasm --sym --c
# Generate the witness.wtns
node ${CIRCUIT}_js/generate_witness.js ${CIRCUIT}_js/${CIRCUIT}.wasm input.json ${CIRCUIT}_js/witness.wtns
However, I keep getting an assert error,
Error: Error: Assert Failed.
Error in template check_11 line: 26
Which describes the assert in the circuit, so I assume there is an inconsistency in the hash.
I am new to circom so any insights would be greatly appreciated!
For anyone who stumbles across this: it turns out the cause of the issue is endianness. It was fixed by converting the unhashed value to little-endian in the input. I am not sure where exactly the problem lies, but it seems the hasher on the frontend reads the bytes as big-endian while the circuit expects the input in little-endian (or vice versa).
As I have managed to patch up a fix for the moment, I will stop investigating, but I implore anyone who understands this further to give a better explanation.

Key-pair generation for Kadena

Chainweaver uses the following code to generate a key-pair from a BIP 39-generated seed: https://github.com/kadena-io/cardano-crypto.js/blob/c50fb8c2fcd4e8d396506fb0c07de9d658aa1bae/kadena-crypto.js#L336
Is there any documentation regarding this algorithm, specifically about the reasons for the 1000x loop and for not following BIP 44 or a similar HD wallet derivation?
for (let i = 1; result === undefined && i <= 1000; i++) {
    try {
        const digest = crypto.hmac_sha512(seed, [Buffer.from(`Root Seed Chain ${i}`, 'ascii')])
        const tempSeed = digest.slice(0, 32)
        const chainCode = digest.slice(32, 64)
        result = trySeedChainCodeToKeypairV1(pwd, tempSeed, chainCode)
        ...
It also looks like this is a fork of Cardano code, so is there any reason Cardano was used as inspiration for Kadena as opposed to some other coin/chain? I would really like some historical context to why some of these decisions were made.
BIP-44 is designed for P2PKH, not ED25519. At the time, the cardano-crypto library seemed like the best available option.

BPF Ring Buffer Invalid Argument (-22)?

I wanted to use the newest eBPF map type, BPF_MAP_TYPE_RINGBUF, but I can't find much information online on how to use it, so I am doing some trial and error here. I defined and used it like this:
struct bpf_map_def SEC("maps") r_buf = {
    .type = BPF_MAP_TYPE_RINGBUF,
    .max_entries = 1 << 2,
};

SEC("lsm/task_alloc")
int BPF_PROG(task_alloc, struct task_struct *task, unsigned long clone_flags)
{
    uint32_t pid = task->pid;
    bpf_ringbuf_output(&r_buf, &pid, sizeof(uint32_t), 0); /* store the pid in the ring buffer */
    return 0;
}
But I got the following error when running:
libbpf: map 'r_buf': failed to create: Invalid argument(-22)
libbpf: failed to load object 'bpf_example_kern'
libbpf: failed to load BPF skeleton 'bpf_example_kern': -22
It seems like libbpf does not recognize BPF_MAP_TYPE_RINGBUF? I cloned the latest libbpf from GitHub and did make and make install. I am using Linux 5.8.0 kernel.
UPDATE: The issue seems to be resolved if I changed the max_entries to something like 4096 * 64, but I don't know why this is the case.
You are right, the problem is the size of the BPF_MAP_TYPE_RINGBUF map (the max_entries attribute in the libbpf map definition). It has to be a power of two and a multiple of the kernel page size (4096 bytes on most popular platforms). That explains why it worked when you specified 64 * 4096.
BTW, if you'd like to see some examples of using it, I'd start with BPF selftests:
user-space part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf.c
kernel (BPF) part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf.c

Problem reading Serial Port C#.net 2.0 to get Weighing machine output

I'm trying to read weight from Sartorius Weighing Scale model No BS2202S using the following code in C#.net 2.0 on a Windows XP machine:
public string readWeight()
{
    string lastError = "";
    string weightData = "";

    SerialPort port = new SerialPort();
    port.PortName = "COM1";
    port.BaudRate = 9600;
    port.Parity = Parity.Even;
    port.DataBits = 7;
    port.StopBits = StopBits.One;
    port.Handshake = Handshake.RequestToSend;

    try {
        port.Open();
        weightData = port.ReadExisting();
        if (weightData == null || weightData.Length == 0) {
            lastError = "Unable to read weight. The data returned from the weighing machine is empty or null.";
            return lastError;
        }
    }
    catch (TimeoutException) {
        lastError = "Operation timed out while reading weight";
        return lastError;
    }
    catch (Exception ex) {
        lastError = "The following exception occurred while reading data." + Environment.NewLine + ex.Message;
        return lastError;
    }
    finally {
        if (port.IsOpen) {
            port.Close();
            port.Dispose();
        }
    }
    return weightData;
}
I'm able to read the weight using the HyperTerminal application (supplied with Windows XP) with the same serial port parameters given above. But with the above code snippet, I can open the port, yet each time it returns empty data.
I tried opening the port using the code given in this Stack Overflow thread; it still returns empty data.
Kindly assist me.
I know this is probably old now... but for future reference:
Look at the handshaking. There is both hardware handshaking and software handshaking. Your problem could be either, so you need to try both.
For hardware handshaking you can try:
mySerialPort.DtrEnable = true;
mySerialPort.RtsEnable = true;
Note that
mySerialPort.Handshake = Handshake.RequestToSend;
does not, I believe, set the DTR line, which some serial devices might require.
Software handshaking is also known as XON/XOFF and can be set with
mySerialPort.Handshake = Handshake.XOnXOff;
or
mySerialPort.Handshake = Handshake.RequestToSendXOnXOff;
You may still need to enable DTR.
When all else fails, don't forget to check all of these combinations of handshaking.
Since someone else will probably have trouble with this in the future: handshaking is a selectable option on the balance itself.
In most of the balances you will see options for Software, Hardware 2 char, and Hardware 1 char. The default setting for Sartorius balances is Hardware 2 char; I usually recommend changing it to Software.
Also, if it stops working altogether, it can often be fixed by defaulting the unit using the 9 1 1 parameter and then resetting the communication settings.
An example of how to change the settings can be found on the manual on this page:
http://www.dataweigh.com/products/sartorius/cpa-analytical-balances/

Is it possible to prevent children inheriting the CPU/core affinity of the parent?

I'm particularly interested in doing this on Linux, for Java programs. There are already a few questions saying you have no control over this from Java, and some RFEs closed by Sun/Oracle.
If you have access to source code and use a low-level language, you can certainly make the relevant system calls. However, sand-boxed systems, possibly without source code, present more of a challenge. I would have thought that a tool to set this per-process, or a kernel parameter, would be able to control this from outside the parent process. This is really what I'm after.
I understand why inheriting affinity is the default. It looks like some versions of Windows may allow some control of this, but most do not. I was expecting Linux to allow control of it, but it seems that's not an option.
Provided you have sufficient privileges, you could simply call sched_setaffinity() before exec'ing in the child. In other words, from
if (fork() == 0)
    execve("prog", "prog", ...);
move to
/* simple example using taskset rather than setaffinity directly */
if (fork() == 0)
    execve("taskset", "taskset", "-c", "0-999999", ...);
[Of course using 999999 is not nice, but it can be substituted by a program which automatically determines the number of CPUs and resets the affinity mask as desired.]
What you could also do is change the affinity of the child from the parent, after the fork(). By the way, I'm assuming you're on Linux; some of this stuff, such as retrieving the number of cores with sysconf(), will differ across OSes and Unix flavors. The example here gets the CPU of the parent process and tries to ensure all child processes are scheduled on different cores, round robin.
/* get the number of cpus */
numcpu = sysconf(_SC_NPROCESSORS_ONLN);

/* get our CPU */
CPU_ZERO(&mycpuset);
sched_getaffinity(getpid(), sizeof mycpuset, &mycpuset);
for (i = 0; i < numcpu; i++) {
    if (CPU_ISSET(i, &mycpuset)) {
        mycpu = i;
        break;
    }
}

//...

while (1) {
    // Some other stuff.....
    /* now the fork */
    if ((pid = fork()) == 0) {
        // do your child stuff
    }
    /* Parent... can schedule child. */
    else {
        cpu = (cpu + 1) % numcpu;
        if (cpu == mycpu)
            cpu = (cpu + 1) % numcpu;
        CPU_ZERO(&mycpuset);
        CPU_SET(cpu, &mycpuset);
        /* set processor affinity */
        sched_setaffinity(pid, sizeof mycpuset, &mycpuset);
        // any other parent stuff
    }
}