Key-pair generation for Kadena - cardano

Chainweaver uses the following code to generate a key-pair from a BIP-39-generated seed: https://github.com/kadena-io/cardano-crypto.js/blob/c50fb8c2fcd4e8d396506fb0c07de9d658aa1bae/kadena-crypto.js#L336
Is there any documentation regarding this algorithm, specifically about the reason for the 1000-iteration loop, and about why it does not follow BIP-44 or a similar HD wallet derivation?
for (let i = 1; result === undefined && i <= 1000; i++) {
    try {
        const digest = crypto.hmac_sha512(seed, [Buffer.from(`Root Seed Chain ${i}`, 'ascii')])
        const tempSeed = digest.slice(0, 32)
        const chainCode = digest.slice(32, 64)
        result = trySeedChainCodeToKeypairV1(pwd, tempSeed, chainCode)
...
It also looks like this is a fork of Cardano code, so is there any reason Cardano was used as inspiration for Kadena as opposed to some other coin/chain? I would really like some historical context for why these decisions were made.

BIP-44 is designed for P2PKH, not Ed25519. At the time, the cardano-crypto library seemed like the best available option.
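For what it's worth, here is a minimal sketch in plain Node.js of what the loop above appears to do. This is an inference from the snippet, not official documentation; tryKeypair stands in for the library's trySeedChainCodeToKeypairV1, which presumably returns undefined when a candidate seed is rejected:

const crypto = require('crypto')

// Derive candidate (seed, chain code) pairs from successive salts until
// one yields a valid keypair; 1000 is simply a cap on the retries.
function deriveRootKeypair(seed, tryKeypair) {
    for (let i = 1; i <= 1000; i++) {
        const digest = crypto.createHmac('sha512', seed)
            .update(Buffer.from(`Root Seed Chain ${i}`, 'ascii'))
            .digest()
        const tempSeed = digest.slice(0, 32)   // candidate master secret
        const chainCode = digest.slice(32, 64) // candidate chain code
        const result = tryKeypair(tempSeed, chainCode) // undefined if rejected
        if (result !== undefined) return result
    }
    throw new Error('no valid keypair after 1000 attempts')
}

Read this way, the loop looks like bounded rejection sampling: some candidate seeds do not produce a valid Ed25519 extended key, so the salt is varied until one does.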

Related

Embedded Perl FREETMPS not cleaning up array of hash references properly

I have a project with several macros that help me write Perl functions and, essentially, "convert" C structs into Perl hashes. Some of these hashes nest within each other, but I'll get to the point.
My program's memory usage rises drastically over time. It's a social media frontend written in C and embedded Perl (an odd language choice, but it's whatever). I'm 100% certain I've found the root cause in the code, as removing it (and related pieces) makes the growth go away; I've profiled it as well, and every result points to my array of hashes.
if (notifs && notifs_len)
{
    // Leak occurs here
    mXPUSHs(newRV_noinc((SV*)perlify_notifications(notifs, notifs_len)));
}
else ARG_UNDEFINED();

// Run function
PERL_STACK_SCALAR_CALL("base_page");
char* dup = PERL_GET_STACK_EXIT;
where PERL_GET_STACK_EXIT is a macro that expands to:
#define PERL_GET_STACK_EXIT savesharedsvpv(POPs); \
    PUTBACK; \
    FREETMPS; \
    LEAVE; \
    perl_unlock()
// Macros for storing data
#define hvstores_str(hv, key, val) if (val) { hv_stores((hv), key, newSVpv((val), 0)); }
#define hvstores_int(hv, key, val) hv_stores((hv), key, newSViv((val)))
#define hvstores_ref(hv, key, val) if (val) { hv_stores((hv), key, newRV_noinc((SV*)(val))); }
perlify_notifications is created using another macro.
// Calls one of the `perlify_X` functions for each element in an array
// Note: I was not using av_push before; I only temporarily switched, thinking that was the issue or that I was misreading the docs
// EX: PERLIFY_MULTI(notification, notifications, notification_t)
#define PERLIFY_MULTI(type, types, mstype) AV* perlify_##types(const struct mstype* const types, size_t len) { \
    if (!(types && len)) return NULL; \
    AV* av = newAV(); \
    for (size_t i = 0; i < len; ++i) \
        av_push(av, newRV_noinc((SV*)perlify_##type(types + i))); \
    return av; \
}
Taking it a step further, here are the hash functions, such as perlify_notification referenced in the code above.
// Converts a notification struct into a Perl hash
HV* perlify_notification(const struct mstdnt_notification* const notif)
{
    if (!notif) return NULL;
    HV* notif_hv = newHV();

    // Profiler doesn't show much data leaking for these 4, so I am uncertain
    hvstores_str(notif_hv, "id", notif->id);
    hvstores_int(notif_hv, "created_at", notif->created_at);
    hvstores_str(notif_hv, "emoji", notif->emoji);
    hvstores_str(notif_hv, "type", mstdnt_notification_t_to_str(notif->type));

    // The functions below seem to leak the most (they store a lot of data);
    // if I comment them out, most of the memleak is gone. The functions
    // being called look equivalent to this one, so it's no secret.
    hvstores_ref(notif_hv, "account", perlify_account(notif->account));
    hvstores_ref(notif_hv, "pleroma", perlify_notification_pleroma(notif->pleroma));
    hvstores_ref(notif_hv, "status", perlify_status(notif->status));

    return notif_hv;
}
Since I'm on Gentoo, I compiled all of Perl from source to debug this further and look into the FREETMPS internals. I dumped all the references I could, yet every object had a REFCNT of 1. No luck.
When I ran this under massif and watched it in top, memory would just increase slowly. By trial and error: it starts at around 9 MB, then slowly climbs to 15-25 MB as long as I keep calling these functions over and over. Of course, memory usage increases at the same rate if I comment FREETMPS out, or just run the functions with nothing else. I'm still certain this is the root issue.
I'm really at a loss here. Am I doing something wrong with how I'm "perlifying" my C structs?

Pedersen circom/circomlibjs inconsistency?

I am running a Pedersen hash in my frontend using circomlibjs. As a unit test for a larger use case, I am checking that the hash I compute in the frontend matches the hash computed by a circom circuit: I feed both the hashed and unhashed values to a circuit containing a simple assert, recreate the hash inside it, and generate a witness to make sure the assert goes through.
The circuit I am using:
include "../node_modules/circomlib/circuits/bitify.circom";
include "../node_modules/circomlib/circuits/pedersen.circom";
template check() {
signal input unhashed;
signal input hashed;
signal output createdHash[2];
component hasher = Pedersen(256);
component unhashedBits = Num2Bits(256);
unhashedBits.in <== unhashed;
for (var i = 0; i < 256; i++){
hasher.in[i] <== unhashedBits.out[i];
}
createdHash[0] <== hasher.out[0];
createdHash[1] <== hasher.out[1];
hashed === createdHash[1];
}
component main = check();
In the frontend, I am running the following:
import { buildPedersenHash } from 'circomlibjs';

export function buff2hex(buff) {
    function i2hex(i) {
        return ('0' + i.toString(16)).slice(-2);
    }
    return '0x' + Array.from(buff).map(i2hex).join('');
}

// Note: TextEncoder's constructor takes no arguments
const secret = (new TextEncoder()).encode("Hello");
const pedersen = await buildPedersenHash();
const h = pedersen.hash(secret);
console.log(buff2hex(secret));
console.log(buff2hex(h));
The values that are printed are:
0x48656c6c6f
0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b
These are consistent with the test done here.
So I then create an input.json file which looks as follows,
{
    "unhashed": "0x48656c6c6f",
    "hashed": "0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b"
}
And lastly run the following script to create a witness, in the hopes that the assert will go through.
# Compile the circuit
circom ${CIRCUIT}.circom --r1cs --wasm --sym --c
# Generate the witness.wtns
node ${CIRCUIT}_js/generate_witness.js ${CIRCUIT}_js/${CIRCUIT}.wasm input.json ${CIRCUIT}_js/witness.wtns
However, I keep getting an assert error,
Error: Error: Assert Failed.
Error in template check_11 line: 26
This points at the assert in the circuit, so I assume there is an inconsistency in the hash.
I am new to circom so any insights would be greatly appreciated!
For anyone who stumbles across this: the cause of the issue is endianness. I fixed it by converting the unhashed value to little-endian in the input. I am not sure exactly where the problem lies, but it seems the hasher on the frontend reads the value as big-endian while the circuit expects little-endian input (or vice versa).
As I have managed to patch together a fix for the moment, I will stop investigating, but I encourage anyone who understands this better to give a fuller explanation.
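For anyone wanting to reproduce the fix, a minimal sketch of the conversion (toLittleEndianHex is a hypothetical helper, and this assumes the circuit expects the unhashed bytes in little-endian order, per the observation above):

// Reverse the byte order of a buffer before writing it into input.json,
// so the circuit's Num2Bits sees the bits in the order it expects.
function toLittleEndianHex(buff) {
    return '0x' + Array.from(buff).reverse()
        .map(i => ('0' + i.toString(16)).slice(-2))
        .join('');
}

// e.g. use "unhashed": toLittleEndianHex(secret) instead of buff2hex(secret)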

ranges::views::generate have generator function signal end of range

I'd like to have a generator that terminates, like Python's, but I can't tell from ranges::views::generate's interface whether this is supported.
You can roll it by hand easily enough (https://godbolt.org/z/xcGz6657r), although it's probably better to use a coroutine generator if you have one available.
You can return a std::optional from the generator, and stop taking elements once a std::nullopt is generated, using views::take_while:
auto out = ranges::views::generate(
               [i = 0]() mutable -> std::optional<int> {
                   if (i > 3)
                       return std::nullopt;
                   return { i++ };
               })
         | ranges::views::take_while([](auto opt) { return opt.has_value(); });

DPDK Hash failing to lookup data from secondary process

I am failing to use an existing rte_hash from a secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    //rte_hash_free(h);
}
else {
    h = rte_hash_create(&params);
}

// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The primary and secondary processes are different binaries but are linked to the same DPDK library.
The key is added in the primary as follows:
struct cdev_key {
    uint64_t len;
};

struct cdev_key key = { 0 };
if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in secondary as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data (h,&key,&data);
With DPDK version 19.02, I am able to run two separate binaries without issues.
[EDIT-1] Based on the update in the question, I am able to look up, from the secondary process, a hash entry added by the primary.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
Note: if using rte_hash_lookup, remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to lookup for hash test, else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix causes the assert logic to be hit.
Note: DPDK 19.02 has an inherent bug where it does not clean up /var/run/dpdk/; hence the recommendation to use 19.11.2 LTS.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;
struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
    rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));
As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes, so the hashing function is not available in the secondary process. The suggested workaround is to do the hashing part in the primary process, with the secondary process accessing the hash table using the hash value instead of the key.
From DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.

How to make Appstats show both small and read operations?

I'm profiling my application locally (using the dev server) to get more information about how GAE works. My tests compare the common full-entity query with the projection query. Both tests run the same query, but the projection is limited to 2 properties. The test kind has 100 properties, all with the same value for each entity, with a total of 10 entities. An image with the Datastore viewer and the Appstats-generated data is shown below. In the Appstats image, request 4 is a memcache flush, request 3 is the test database creation (it was already created, so no costs here), request 2 is the full-entity query, and request 1 is the projection query.
I'm surprised that both queries resulted in the same number of reads. My guess is that small and read operations are being reported identically by Appstats. If that is the case, I want to separate them in the reports. These are the query-related functions:
// Full Entity Query
public ReturnCodes doQuery() {
    DatastoreService dataStore = DatastoreServiceFactory.getDatastoreService();
    for (int i = 0; i < numIters; ++i) {
        Filter filter = new FilterPredicate(DBCreation.PROPERTY_NAME_PREFIX + i,
                FilterOperator.NOT_EQUAL, i);
        Query query = new Query(DBCreation.ENTITY_NAME).setFilter(filter);
        PreparedQuery prepQuery = dataStore.prepare(query);
        Iterable<Entity> results = prepQuery.asIterable();
        for (Entity result : results) {
            log.info(result.toString());
        }
    }
    return ReturnCodes.SUCCESS;
}
// Projection Query
public ReturnCodes doQuery() {
    DatastoreService dataStore = DatastoreServiceFactory.getDatastoreService();
    for (int i = 0; i < numIters; ++i) {
        String projectionPropName = DBCreation.PROPERTY_NAME_PREFIX + i;
        Filter filter = new FilterPredicate(DBCreation.PROPERTY_NAME_PREFIX + i,
                FilterOperator.NOT_EQUAL, i);
        Query query = new Query(DBCreation.ENTITY_NAME).setFilter(filter);
        query.addProjection(new PropertyProjection(DBCreation.PROPERTY_NAME_PREFIX + 0, Integer.class));
        query.addProjection(new PropertyProjection(DBCreation.PROPERTY_NAME_PREFIX + 1, Integer.class));
        PreparedQuery prepQuery = dataStore.prepare(query);
        Iterable<Entity> results = prepQuery.asIterable();
        for (Entity result : results) {
            log.info(result.toString());
        }
    }
    return ReturnCodes.SUCCESS;
}
Any ideas?
EDIT: To get a better overview of the problem, I created another test which does the same query but uses a keys-only query instead. For this case, Appstats correctly shows DATASTORE_SMALL operations in the report. I'm still pretty confused about the behavior of the projection query, which should also be reporting DATASTORE_SMALL operations. Please help!
[I wrote the Go port of Appstats, so this is based on my experience and recollection.]
My guess is that this is a bug in Appstats, which is a relatively unmaintained program. Projection queries are new, so Appstats may not be aware of them and treats them as normal read queries.
For some background, calculating costs is difficult. For write ops, the costs are returned with the results, as they must be, since the app has no way of knowing what changed (which is where the write costs happen). For reads and small ops, however, there is a formula to calculate the cost. Each Appstats implementation (Python, Java, Go) must implement this calculation, including reflection or whatever else is needed over the request object to determine what's going on. The APIs for doing this are not entirely obvious, there are lots of little things to handle, and so it's easy to get wrong and annoying to get right.