How to generate entropy for gpg? CentOS 7 VM

I am using CentOS 7 in a VM (Parallels). My gpg key generation stalls because it needs more entropy, and I get the following message:
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
What can I do to generate random bytes for this process?
I have tried some tricks for generating entropy, but it seems I need more random bits.
Some things that have worked for others but not for me:
#1: ls -R /
#2: sudo md5sum /dev/sda &
Command #1 generates about 15 bits every 2 seconds.
Command #2 generates a minimal amount of entropy.
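For what it's worth, the standard fix on CentOS 7 is to feed the kernel entropy pool from a daemon instead of ad-hoc disk or keyboard activity. A sketch, assuming the packages are available to you (haveged typically comes from EPEL):
sudo yum install rng-tools
sudo systemctl start rngd
or
sudo yum install haveged
sudo systemctl enable haveged
sudo systemctl start haveged
rngd needs a hardware source (for example a virtio-rng device, if the hypervisor exposes one to the VM), while haveged synthesizes entropy from CPU timing jitter. You can watch the pool with:
cat /proc/sys/kernel/random/entropy_avail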

Related

Monitoring system integrity by analyzing proof of work performance

So I have some code that spawns a number of processes which generate pseudo-random sequences, hashes them, checks whether the hashes meet some criterion, and then saves the passing hashes, the random seed used, and the time it took to generate a passing sequence from that seed. My criterion is that the first 8 hex characters of the resulting sha256 hash are the same. I saw some strange output where the durations were roughly the same for a number of results, so I checked them by re-running the random seeds. On re-running, the times were much shorter (by more than 1000 seconds, where the originals were under 5000 seconds). This seems like a red flag for system integrity, but what to do about that is a separate question.
I want to perform a Student's t-test on the distribution of the n most recent durations so that I can trigger a validation process that re-runs the seeds and checks whether their time to completion changes. What distribution should I test against, and what is a good n for how many samples to examine?
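For illustration, a minimal Matlab-style sketch of the comparison step, using Welch's t-statistic so the two samples may have unequal variances (the variable names and the use of Welch's form are my assumptions, not from the question):
% durations: completion times in seconds, oldest first; n: window size
recent = durations(end-n+1:end);          % the n most recent durations
older  = durations(1:end-n);              % baseline sample
% Welch's t-statistic: difference of means over the combined standard error
t = (mean(recent) - mean(older)) / ...
    sqrt(var(recent)/numel(recent) + var(older)/numel(older));
revalidate = abs(t) > 2.0;                % threshold is illustrative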

Linux tools for highest compression of sorted integers

I am looking for tried-and-true Linux tools specifically designed for:
simple run length encoding
compression of sorted integers.
1. I have a sorted list of repeated words, one per line, in a 1.5 GB text file (617 million lines). There are ~200 unique words, ranging from 1 to < 20 characters in length.
I can "manually compress" using uniq -c to get a 3.4 KB file that is 1.2 KB when gzipped, and could then do a "manual decompression" with a simple awk function. A tested, optimized, maintained, dedicated tool is of course preferable to writing my own code: less error-prone and more time-effective.
gzip --best gives a 1.5 MB file, a woefully poor compression ratio for this particular problem.
bzip2 --best gives a 62 KB compressed file, which is good but still a clearly suboptimal compression ratio. And it takes far longer than simply running uniq -c.
A simple tool with a straightforward implementation of Run Length Encoding seems optimal, but I cannot find anything standard and reliable.
2. I have a sorted list of positive integers, one per line. Each integer is roughly in the range of 1 million to 300 million. There is no algorithmic pattern or formula; they are random. But the difference between consecutive integers is tightly distributed around 0 to 30, though with a tail.
Huffman coding of the differences of consecutive integers (or of the differences of the differences) should give a very high compression ratio, but I cannot find a simple, dedicated tool for sorted integers.
Another SO answer gives links to C libraries for these problems, but I am looking for a maintained, standalone Linux binary.
These are simple problems, but I don't have time to write, debug, test, and optimize my own code. This is a tiny piece of a larger project, and I am surprised there are no dedicated Linux utilities for these problems.
For #1, compress the compressed output with gzip. On the other end, decompress twice. (The maximum compression of the gzip format is 1032:1. For highly redundant data, the result of the compression is itself compressible.)
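Concretely, with illustrative file names:
gzip -9 -c words.txt | gzip -9 > words.txt.gz.gz
gzip -dc words.txt.gz.gz | gzip -dc > words.txt
The first pass gets close to the 1032:1 format limit; the second pass then compresses the highly redundant first-pass output.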
For #2, if you don't have time to write a super simple piece of code to write five bits for each difference (0..30, with 31 being an escape code followed by the next integer with no differencing), then you can't possibly finish your project.
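For concreteness, a Matlab sketch of that five-bit scheme (the function name and layout are mine; the first entry is emitted as a delta from zero, so large first values simply take the escape path):
% Pack a sorted vector of positive integers as 5-bit deltas.
% Deltas 0..30 are stored directly; 31 is an escape code followed
% by the full 32-bit value.
function bits = packdeltas(v)
    d = [v(1); diff(v(:))];                 % first entry: delta from 0
    bits = [];
    for k = 1:numel(d)
        if d(k) < 31
            bits = [bits, bitget(d(k), 5:-1:1)];               % 5-bit code
        else
            bits = [bits, ones(1, 5), bitget(d(k), 32:-1:1)];  % escape + raw
        end
    end
end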

Can a CRC32 engine be used for computing CRC16 hashes?

I'm working with a microcontroller with native HW functions to calculate CRC32 hashes from chunks of memory, where the polynomial can be freely defined. It turns out that the system has different data-links with different bit-lengths for CRC, like 16 and 8 bit, and I intend to use the hardware engine for it.
In simple tests with online tools I've concluded that it is possible to find a 32-bit polynomial that gives the same result as an 8-bit CRC. For example:
hashing "a sample string" with 8-bit engine and poly 0xb7 yelds a result 0x97
hashing "a sample string" with 16-bit engine and poly 0xb700 yelds a result 0x9700
...32-bit engine and poly 0xb7000000 yelds a result 0x97000000
(with zero initial value and zero final xor, no reflections)
So, padding the poly with zeros and right-shifting the results seems to work.
But is it 'always' possible to find a set of parameters (including poly, final xor, init value and inversions) that makes a 32-bit engine work as a 16- or 8-bit one?
To provide more context and head off 'bypass answers' like 'don't use the native engine': I have a safety-critical system where it's necessary to prevent a common design error from propagating to redundant processing nodes. One solution is a software-based CRC calculation in one node and a hardware-based one in its pair.
Yes, what you're doing will work in general for CRCs that are not reflected. The pre- and post-conditioning can be done very simply with code around the hardware instruction loop.
Assuming the hardware CRC doesn't have an option for this, to do a reflected CRC you would need to reflect each input byte and then reflect the final result. That may defeat the purpose of using a hardware CRC. (Though if your purpose is just to have a different implementation, then maybe it wouldn't.)
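For a desk check of the non-reflected case, a minimal bit-serial, MSB-first CRC in Matlab (the function name is mine; zero initial value and zero final xor, as in the question):
% Bit-serial CRC, MSB first, zero init, zero final xor, no reflection.
% data is a vector of byte values (0..255).
function r = crcmsb(data, poly, width)
    r = 0;
    top  = 2^(width - 1);
    mask = 2^width - 1;
    for b = data(:)'
        r = bitxor(r, b * 2^(width - 8));   % xor byte into the top bits
        for i = 1:8
            if bitand(r, top)
                r = bitand(bitxor(2*r, poly), mask);
            else
                r = bitand(2*r, mask);
            end
        end
    end
end
With msg = double('a sample string'), crcmsb(msg, hex2dec('b7'), 8) and crcmsb(msg, hex2dec('b7000000'), 32) should differ only by a 24-bit shift, matching the observation in the question: the 32-bit polynomial is the 8-bit one times x^24, so the remainder is the 8-bit remainder times x^24.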
You don't have to guess; you can calculate it. Because a CRC is the remainder of a division by a fixed polynomial, it is a 1-to-1 function on inputs no wider than the CRC itself.
So CRC16, for example, has to produce 65536 (64K) unique results if you run it over the inputs 0 through 65535.
To see if you get the same outcome by taking part of a CRC32, run it over 0 through 65535, keep the 2 bytes that you want to keep, and then check whether there are any collisions.
If your data is 32 bits wide, this should not be an issue. The issue arises when your numbers are narrower than 32 bits and get shuffled around in a 32-bit space: their first and last bytes are not guaranteed to be uniformly distributed.

Can online quantum random number generators be used in Matlab?

I want to avoid repetitions as much as possible. I've run the same Matlab program that uses "random" numbers multiple times and gotten exactly the same results. I learned that I should put rng('shuffle') at the beginning, so the program replaces the default seed with a new one based on the clock, but the sequence of outputs from the pseudo-random number generator will still contain a pattern.
I recently learned about a quantum-box random number generator (this or something like it), and while looking it up online I found a couple of web servers that deliver random numbers continuously generated by quantum mechanical means: ANU Photonics and ANU QRNG.
A quantum box looks too expensive to buy, so how might I integrate one of the online servers into Matlab?
On http://qrng.anu.edu.au, click the "download" link in the text; it takes you to a FAQ page that explains what to download to use the random number generator in different ways. The last item on the list is Matlab, for which it gives a link to directly download some code that fetches the random numbers, and a link to Matlab Central to download the JSON Parser it needs to work.
The code is very simple, and as a script only displays the values it fetches, but can easily be turned into a function. I unzipped the contents of parse_json.zip into C:/Program Files/MATLAB/[version]/toolbox/JSONparser, a new folder in the Toolboxes, navigated to the Toolboxes in the Current Folder in Matlab, right clicked JSONparser, and clicked Add to Path.
Read the comments on the Matlab Central page for JSON Parser to get an idea of the limits of how many random numbers you can pull down at a time.
The random numbers are 16-bit non-negative integers; to create a random integer with more bits, say, 32 bits, I'd recommend taking two of these integers, multiplying one by 2^16, and adding them. If you want a number between 0 and 1, divide the sum by 2^32.
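In Matlab terms, with a and b being two of the fetched 16-bit integers:
r32 = a * 2^16 + b;    % 32-bit non-negative integer
u   = r32 / 2^32;      % value in [0, 1)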

PN sequence generation using Matlab

I am trying to generate a PN sequence using five shift registers. The shift register should have 10 stages, giving a period of 1023 bits. I have tried hard to get it done, but I am completely confused about how to generate 1023 bits using 5 shift registers. I need to submit it by tomorrow and I am feeling really tense. Can someone please help me with the Matlab code?
P.S. I am not allowed to use MATLAB's built-in functions to generate the sequence.
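Note that a period of 1023 bits requires a 10-stage register (2^10 - 1 = 1023), not five. Assuming the 'no built-in functions' rule targets the toolbox sequence generators rather than elementary operations, and assuming the common primitive polynomial x^10 + x^3 + 1 (your assignment may specify different taps), a minimal Fibonacci LFSR sketch:
% 10-stage Fibonacci LFSR with feedback taps 10 and 3 (x^10 + x^3 + 1),
% producing an m-sequence of period 2^10 - 1 = 1023.
reg = ones(1, 10);                    % initial state: any nonzero fill
pn  = zeros(1, 1023);
for k = 1:1023
    pn(k) = reg(10);                  % output the last stage
    fb    = mod(reg(10) + reg(3), 2); % feedback from the taps
    reg   = [fb, reg(1:9)];           % shift and insert feedback
end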