So here is a snippet from the config file I'd like to parse (it is an LVM2 config):
VolGroup00 {
    id = "vyllep-rfI6-LCvO-h6mN-zYZu-hiAN-QShmG6"
    seqno = 3
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 65536 # 32 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0
    physical_volumes {
        pv0 {
            id = "1yLiSl-x0fp-ZkyU-HMQl-eTVt-xiId-cFnih0"
            device = "/dev/xvda2" # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 31246425 # 14.8995 Gigabytes
            pe_start = 384
            pe_count = 476 # 14.875 Gigabytes
        }
    }
}
I would like to parse this into a Perl data structure. What format is this config in? My guess is that it looks like a Python data structure.
Any thoughts on the format, or better yet, an existing module to parse it with?
The config uses a custom config language specifically for LVM. The lvm userspace tools include code to parse this language.
You could grab the userspace code for lvm2 and attempt to replicate its parser, maybe using Parse::RecDescent.
Or maybe the Perl Linux::LVM module in CPAN provides the functionality to extract the information you need.
Failing to use existing rte Hash from secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    //rte_hash_free(h);
}
else {
    h = rte_hash_create(&params);
}
// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The primary and secondary processes are different binaries, but are linked to the same DPDK library.
The key is added in the primary as follows:
struct cdev_key {
    uint64_t len;
};
struct cdev_key key = { 0 };
if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in secondary as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data (h,&key,&data);
With DPDK version 19.02, I am able to run 2 separate binaries without issues.
[EDIT-1] Based on the update in the ticket, I am able to look up a hash entry added from the primary in the secondary process.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
Note: if using rte_hash_lookup, please remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to lookup for hash test, else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix results in the assert logic being hit.
Note: DPDK 19.02 has a known bug where /var/run/dpdk/ is not cleaned up; hence it is recommended to use 19.11.2 LTS.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;
struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
    rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));
As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes, so the hash function stored in the table is not usable from the secondary process. The suggested workaround is to do the hashing in the application code itself and have the secondary process access the hash table using the precomputed hash value instead of the key.
From DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.
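For illustration, here is a minimal sketch of that workaround, reusing the "test" hash and cdev_key from this question (a hypothetical fragment, not tested; error handling trimmed). The signature is computed by calling rte_jhash directly, so the secondary process never dereferences the hash-function pointer stored inside the table:
#include <stdio.h>
#include <rte_errno.h>
#include <rte_hash.h>
#include <rte_jhash.h>

struct cdev_key key = { 0 };
/* compute the signature ourselves; 0 matches hash_func_init_val from Code-1 */
hash_sig_t sig = rte_jhash(&key, sizeof(key), 0);

/* in the primary: */
if (rte_hash_add_key_with_hash_data(testptr, &key, sig, (void *)&test) < 0) {
    fprintf(stderr, "add failed: %s\n", rte_strerror(rte_errno));
}

/* in the secondary: */
void *data = NULL;
int ret = rte_hash_lookup_with_hash_data(h, &key, sig, &data);
Both _with_hash variants are part of the public rte_hash API, so the primary and secondary only need to agree on the hash function and its init value.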
I'm trying to get the LogBook data over BLE to my App.
This works fine for JSON, the data seems accurate.
But it takes a long time due to the JSON encoding.
Getting the SBEM data is way faster. But I can't find any documentation on the encoding. I found out that the "Content" string is Base64 encoded.
It starts with SBEM, which means it is uncompressed, as stated here:
https://bitbucket.org/suunto/movesense-device-lib/src/5bcf0b40644a17d48977cf011ebcf6191650c6f0/MovesenseCoreLib/resources/movesense-api/mem/logbook.yaml?fileviewer=file-view-default#lines-186
But I couldn't find anything else.
Does somebody have further information on that, or has anyone found out what the encoding is like?
Best regards
Alex
First some clarification: when requesting the JSON log from the MDS/Logbook/ service, the data itself is transferred from the Movesense sensor in SBEM format and the conversion is performed on the phone. If you have specific examples where said conversion is slow (there very well might be), it's a good idea to add a Bitbucket issue to movesense-mobile-lib.
About the SBEM format: it is a "Suunto Oy internal" binary format for presenting XML (and nowadays JSON) files. This means that its interpretation may change when the format evolves. With that warning aside, here's the format:
- Data is encoded in chunks with an ID (1-2 bytes), a length (1-4 bytes) and content
- The format consists of two separate sections, Descriptors & Data, which can be in separate "files" (like in the Logbook service)
- Descriptors describe the format of the data in the data chunks ("format string")
- Data chunks contain the binary data in the described format.
If you want to learn about the SBEM format that the DataLogger / Logbook services use, see the "generated/sbem-code" folder that is created during the build.
And finally, here is a simple Python script for parsing the SBEM format:
from __future__ import print_function
import sys

# usage: python sbem_parse.py <data_file> <descriptor_file>
data_path = sys.argv[1]
descriptor_path = sys.argv[2]

ReservedSbemId_e_Escape = b"\xff"  # escape marker byte, 255 (b"\255" would be an octal escape, 0xAD)
ReservedSbemId_e_Descriptor = 0

#print("data_path:", data_path)
print("descriptor_path:", descriptor_path)

# reads sbem ID up to uint16 from file
def readId(f):
    byte1 = f.read(1)
    id = None
    if not byte1:
        print("EOF found")
    elif byte1 < ReservedSbemId_e_Escape:
        id = int.from_bytes(byte1, byteorder='little')
        #print("one byte id:", id)
    else:
        # escape byte: read the 2 following bytes
        id_bytes = f.read(2)
        id = int.from_bytes(id_bytes, byteorder='little')
        #print("two byte id:", id)
    return id

# reads sbem length up to uint32 from file
def readLen(f):
    byte1 = f.read(1)
    if byte1 < ReservedSbemId_e_Escape:
        datasize = int.from_bytes(byte1, byteorder='little')
        #print("one byte len:", datasize)
    else:
        # escape byte: read the 4 following bytes
        len_bytes = f.read(4)
        datasize = int.from_bytes(len_bytes, byteorder='little')
        #print("4 byte len:", datasize)
    return datasize

# read sbem chunk header (id, datasize) from file
def readChunkHeader(f):
    id = readId(f)
    if id is None:
        return (None, None)
    datasize = readLen(f)
    ret = (id, datasize)
    print("SBEM chunk header:", ret)
    print(" offset:", f.tell())
    return ret

def readHeader(f):
    # read the 8-byte file header
    header_bytes = f.read(8)
    print("SBEM Header: ", header_bytes)

def parseDescriptorChunk(data_bytes):
    print("parseDescriptorChunk data:", data_bytes)
    return

def parseDataChunk(data_bytes):
    print("parseDataChunk data:", data_bytes)
    return

# read descriptors
with open(descriptor_path, 'rb') as f_desc:
    readHeader(f_desc)
    while True:
        (id, datasize) = readChunkHeader(f_desc)
        if id is None:
            break
        chunk_bytes = f_desc.read(datasize)
        if len(chunk_bytes) != datasize:
            print("ERROR: too few bytes returned.")
            break
        if id == ReservedSbemId_e_Descriptor:
            parseDescriptorChunk(chunk_bytes)
        else:
            print("WARNING: data chunk in descriptor file!")
            parseDataChunk(chunk_bytes)

# read data
with open(data_path, 'rb') as f_data:
    readHeader(f_data)
    while True:
        (id, datasize) = readChunkHeader(f_data)
        if id is None:
            break
        chunk_bytes = f_data.read(datasize)
        if len(chunk_bytes) != datasize:
            print("ERROR: too few bytes returned.")
            break
        if id == ReservedSbemId_e_Descriptor:
            parseDescriptorChunk(chunk_bytes)
        else:
            parseDataChunk(chunk_bytes)
Full Disclaimer: I work for the Movesense team
In the following code snippet, WinVerifyTrust returns CERT_E_UNTRUSTEDROOT for a kernel driver file (.sys) that is loaded and running on the system:
GUID guidAction = DRIVER_ACTION_VERIFY;
WINTRUST_FILE_INFO sWintrustFileInfo = { 0 };
WINTRUST_DATA sWintrustData = { 0 };
HRESULT hr = 0;
sWintrustFileInfo.cbStruct = sizeof(WINTRUST_FILE_INFO);
sWintrustFileInfo.pcwszFilePath = argv[1];
sWintrustFileInfo.hFile = NULL;
sWintrustData.cbStruct = sizeof(WINTRUST_DATA);
sWintrustData.dwUIChoice = WTD_UI_NONE;
sWintrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
sWintrustData.dwUnionChoice = WTD_CHOICE_FILE;
sWintrustData.pFile = &sWintrustFileInfo;
sWintrustData.dwStateAction = WTD_STATEACTION_VERIFY;
hr = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &guidAction, &sWintrustData);
A few interesting points:
- The driver is signed with a valid (purchased) certificate using SHA-256.
- KB3033929 is installed on the system (Win7/32)
- When viewing the certificate from the file properties, the entire certification chain shows up as valid
Am I calling WinVerifyTrust wrong?
Alternative question: is there another way of knowing (by the presence of a registry key or something similar) that SHA-256 based code signing verification is available on the target system? (I need to verify this during installation...)
Thanks :)
DRIVER_ACTION_VERIFY works well for WHQL-signed files, as far as I know. Try the generic Authenticode policy GUID instead:
WINTRUST_ACTION_GENERIC_VERIFY_V2
Here is something else you can refer to:
http://gnomicbits.blogspot.in/2016/03/how-to-verify-pe-digital-signature.html
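To make the suggestion concrete, here is a small sketch against the snippet from the question (same WINTRUST_DATA setup, only the policy GUID changes), including the WTD_STATEACTION_CLOSE call that releases the verification state and is easy to forget:
GUID guidAction = WINTRUST_ACTION_GENERIC_VERIFY_V2; // generic Authenticode policy

sWintrustData.dwStateAction = WTD_STATEACTION_VERIFY;
LONG lStatus = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &guidAction, &sWintrustData);
// lStatus is ERROR_SUCCESS if the whole chain is trusted

// release the state data opened by WTD_STATEACTION_VERIFY
sWintrustData.dwStateAction = WTD_STATEACTION_CLOSE;
WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &guidAction, &sWintrustData);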
I'm working on a program that will analyze object files in ELF and PE formats (a kind of school/research project). Right now I'm about to process dynamic import symbols in executable files, and I would like to find as much info about each symbol as possible.
In the PE format, imports are stored in the .idata section. There are several tables with different information, but what is interesting for me is that there is no problem finding out which library a symbol is defined in. There is always the name of the shared library, followed by the names/ordinals of the symbols imported from it.
I would like to find this kind of information in ELF files as well. All imports/exports are in the .dynsym section - the dynamic symbol table. Symbols which are imported are marked as undefined, for example:
00000000 DF *UND* 00000000 GLIBC_2.0 fileno
But there is no information about which file this symbol comes from. All needed shared libraries are listed in the .dynamic section, for example:
Dynamic Section:
NEEDED libz.so.1
The only library-related information in the symbol is the version string GLIBC_2.0. I was thinking about getting to the real library name through this, but when I looked at the output of objdump -p I found out that GLIBC_2.0 can be connected with more than one library:
Version References:
required from libm.so.6:
0x0d696910 0x00 13 GLIBC_2.0
required from libgcc_s.so.1:
0x0b792650 0x00 12 GLIBC_2.0
If I understand the ELF dynamic linking process correctly, it should not be possible to find this information in an ELF executable file: where exactly a symbol is imported from is determined by the linker after it loads all the symbol tables into memory.
But I would like to be sure about this before I move on, so my question is: is there any way to find out the name of a symbol's shared library from an ELF executable file?
Thank you for any advice.
A few months ago I was working on pretty similar stuff - I was able to answer all my questions by grabbing the source to nm and readelf (see http://ftp.gnu.org/gnu/binutils/).
I found this useful as well: http://www.skyfree.org/linux/references/ELF_Format.pdf
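If you'd rather not depend on BFD internals, here is a minimal sketch (assuming elfutils' libelf/gelf is available; compile with -lelf; untested) that walks the .gnu.version_r (SHT_GNU_verneed) section and prints, for each needed library, its version names together with the version indices that the per-symbol .gnu.version entries refer to:
#include <err.h>
#include <fcntl.h>
#include <gelf.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        errx(1, "usage: %s <elf-file>", argv[0]);
    if (elf_version(EV_CURRENT) == EV_NONE)
        errx(1, "libelf initialization failed");

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        err(1, "open %s", argv[1]);
    Elf *e = elf_begin(fd, ELF_C_READ, NULL);
    if (e == NULL)
        errx(1, "elf_begin: %s", elf_errmsg(-1));

    Elf_Scn *scn = NULL;
    while ((scn = elf_nextscn(e, scn)) != NULL) {
        GElf_Shdr shdr;
        if (gelf_getshdr(scn, &shdr) == NULL || shdr.sh_type != SHT_GNU_verneed)
            continue;

        Elf_Data *data = elf_getdata(scn, NULL);
        int off = 0;
        /* sh_info holds the number of Verneed (library) entries */
        for (unsigned int i = 0; i < shdr.sh_info; i++) {
            GElf_Verneed vn;
            if (gelf_getverneed(data, off, &vn) == NULL)
                break;
            /* sh_link points at the string table used by this section */
            printf("library %s:\n", elf_strptr(e, shdr.sh_link, vn.vn_file));

            int aux_off = off + vn.vn_aux;
            for (int j = 0; j < vn.vn_cnt; j++) {
                GElf_Vernaux va;
                if (gelf_getvernaux(data, aux_off, &va) == NULL)
                    break;
                /* vna_other is the index that per-symbol .gnu.version
                 * entries refer to */
                printf("  version %s (index %d)\n",
                       elf_strptr(e, shdr.sh_link, va.vna_name),
                       (int) va.vna_other);
                aux_off += va.vna_next;
            }
            off += vn.vn_next;
        }
    }

    elf_end(e);
    close(fd);
    return 0;
}
Matching a dynamic symbol's .gnu.version index against vna_other then yields the same (symbol, version, library) triple that the BFD-based code below extracts.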
OK, so it is probably generally impossible to assign a library name to each imported symbol. But I may have found a solution through symbol versioning. Of course, symbol version sections do not have to be present in every ELF file.
struct elf_obj_tdata *pElf = bfdFile->tdata.elf_obj_data;
for (long i = 0; i < dynNumSyms; i++)
{
    asymbol *dynSym = dynSymTab[i];
    // If there is version information in file.
    if (pElf->dynversym_section != 0
        && (pElf->dynverdef_section != 0
            || pElf->dynverref_section != 0))
    {
        unsigned int vernum;
        const char *version_string;
        const char *fileName;
        vernum = ((elf_symbol_type *) dynSym)->version & VERSYM_VERSION;
        if (vernum == 0) // local sym
            version_string = "";
        else if (vernum == 1) // global sym, defined in this object
            version_string = "Base";
        else if (vernum <= pElf->cverdefs)
            version_string = pElf->verdef[vernum - 1].vd_nodename;
        else
        {
            Elf_Internal_Verneed *t;
            version_string = "";
            fileName = "";
            // Iterate through all Verneed entries - all libraries
            for (t = pElf->verref; t != NULL; t = t->vn_nextref)
            {
                Elf_Internal_Vernaux *a;
                // Iterate through all Vernaux entries
                for (a = t->vn_auxptr; a != NULL; a = a->vna_nextptr)
                {
                    // Find associated entry
                    if (a->vna_other == vernum)
                    {
                        version_string = a->vna_nodename;
                        fileName = t->vn_filename;
                        break;
                    }
                }
            }
            // here we have got:
            //   name of symbol  = dynSym->name
            //   version string  = version_string
            //   name of library = fileName
        }
    }
}
So what do you think, is this correct?
I want to create a Chrome extension .crx file programmatically (not using chrome.exe, because it opens a new Chrome window). What are the alternatives? My preference is Java, but if it's possible in another language that is also okay.
As kylehuff stated, there are external tools that you could use. But you can always use the command line from Google Chrome to do this, which is cross-platform (Linux / Windows / Mac):
chrome.exe --pack-extension=[extension_path] --pack-extension-key=[extension_key]
--pack-extension is:
Package an extension to a .crx installable file from a given directory.
--pack-extension-key is:
Optional PEM private key to use in signing the packaged .crx.
The above does not run Google Chrome, it is just command line packing using Chromium's core crx algorithm that they use internally.
There are a variety of utilities to do this, in various languages (albeit they are mostly shell/scripting languages).
I cannot post the links to all of them, because I am a new Stack Overflow user - I can only post 1 link - so I created a page which lists them all, including the C one I speak about below: http://curetheitch.com/projects/buildcrx/6/
Anyway, I spent a few hours and put together a version in C which runs on Windows or Linux, as the other solutions require installation of a scripting language or shell (e.g. Python, Ruby, Bash) and OpenSSL. The utility I wrote has OpenSSL statically linked, so there are no interpreter or library requirements.
The repository is hosted on GitHub, but the link above has a list of my utility and other people's solutions.
Nothing listed for Java, which was your preference, but hopefully that helps!
// Method to generate the .crx signature header
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

// @param extensionContents: the bytes of your zip file
// @return byte[] of the header; use a ByteBuffer to concatenate header and
//         zip contents and you have your .crx
public static byte[] generateCrxHeader(byte[] extensionContents) throws Exception {
    KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
    SecureRandom random = new SecureRandom();
    keyGen.initialize(1024, random);
    KeyPair pair = keyGen.generateKeyPair();

    Signature sigInstance = Signature.getInstance("SHA1withRSA");
    sigInstance.initSign(pair.getPrivate());
    sigInstance.update(extensionContents);
    byte[] signature = sigInstance.sign();

    byte[] subjectPublicKeyInfo = pair.getPublic().getEncoded();

    final int headerLength = 4 + 4 + 4 + 4 + subjectPublicKeyInfo.length + signature.length;
    ByteBuffer headerBuf = ByteBuffer.allocate(headerLength);
    headerBuf.order(ByteOrder.LITTLE_ENDIAN);
    headerBuf.put(new byte[]{0x43, 0x72, 0x32, 0x34}); // Magic number "Cr24"
    headerBuf.putInt(2);                               // Version
    headerBuf.putInt(subjectPublicKeyInfo.length);     // Public key length
    headerBuf.putInt(signature.length);                // Signature length
    headerBuf.put(subjectPublicKeyInfo);
    headerBuf.put(signature);
    return headerBuf.array();
}
I needed to do this in Ruby. JavaHead's answer looks nice for CRX2 in Java. The current format is CRX v3, and the header is protobuf-based. I wrote a blog post about packing an extension with Ruby. There is also a Python project from another author.
I'll paste the Ruby versions of the CRX2 and CRX3 packing methods here for reference. For the complete code, see my blog.
So, the CRX3 method:
def self.header_v3_extension(zipdata, key: nil)
  key ||= OpenSSL::PKey::RSA.generate(2048)
  digest = OpenSSL::Digest.new('sha256')

  signed_data = Crx_file::SignedData.new
  signed_data.crx_id = digest.digest(key.public_key.to_der)[0...16]
  signed_data = signed_data.encode

  signature_data = String.new(encoding: "ASCII-8BIT")
  signature_data << "CRX3 SignedData\00"
  signature_data << [ signed_data.size ].pack("V")
  signature_data << signed_data
  signature_data << zipdata

  signature = key.sign(digest, signature_data)

  proof = Crx_file::AsymmetricKeyProof.new
  proof.public_key = key.public_key.to_der
  proof.signature = signature

  header_struct = Crx_file::CrxFileHeader.new
  header_struct.sha256_with_rsa = [proof]
  header_struct.signed_header_data = signed_data
  header_struct = header_struct.encode

  header = String.new(encoding: "ASCII-8BIT")
  header << "Cr24"
  header << [ 3 ].pack("V") # version
  header << [ header_struct.size ].pack("V")
  header << header_struct

  return header
end
And for historical purposes (this one is verified), CRX2:
# @note original crx2 format description:
#   https://web.archive.org/web/20180114090616/https://developer.chrome.com/extensions/crx
def self.header_v2_extension(zipdata, key: nil)
  key ||= OpenSSL::PKey::RSA.generate(2048)
  digest = OpenSSL::Digest.new('sha1')

  header = String.new(encoding: "ASCII-8BIT")
  signature = key.sign(digest, zipdata)
  signature_length = signature.length
  pubkey_length = key.public_key.to_der.length

  header << "Cr24"
  header << [ 2 ].pack("V") # version
  header << [ pubkey_length ].pack("V")
  header << [ signature_length ].pack("V")
  header << key.public_key.to_der
  header << signature

  return header
end
I have used the excellent service crx-checker to validate both v2 and v3 extension packing; in both cases I'm getting the expected RSASSA-PKCS1-v1_5 signature marked (Signature OK) (Developer Signature).
The extension will fail to load with CRX_REQUIRED_PROOF_MISSING if you try to add it to your browser from a URL, because it will be lacking the Google signature. But it will be loaded fine by Selenium when running tests. To load it normally you need to publish it on the Web Store.