python library to generate previous and next prime quickly

Is there a library in Python that, given an integer, can generate the closest previous and next prime numbers? I know there are some that will give me the next prime number, but I was hoping there was one that also did the previous prime.

SymPy's ntheory (number theory) module will do this.
https://docs.sympy.org/latest/modules/ntheory.html
sympy.ntheory.generate.nextprime(n)
sympy.ntheory.generate.prevprime(n)
It is decent, though not the fastest if you include non-Python libraries (roughly 10x slower than Pari/GP, 20-40x slower than Perl/ntheory). That probably does not matter to most users, who aren't doing vast numbers of calls or using 1000+ digit inputs.
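If pulling in SymPy is not an option, the same two functions can be sketched in pure Python with trial division. This is a simple illustration of the interface, not SymPy's implementation, and it is far slower for large inputs; `is_prime` is a helper written here, not a library function:

```python
def is_prime(n: int) -> bool:
    """Trial division: exact, but slow for large n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def nextprime(n: int) -> int:
    """Smallest prime strictly greater than n."""
    p = n + 1
    while not is_prime(p):
        p += 1
    return p

def prevprime(n: int) -> int:
    """Largest prime strictly less than n (requires n > 2)."""
    if n <= 2:
        raise ValueError("no prime below 2")
    p = n - 1
    while not is_prime(p):
        p -= 1
    return p
```

These match SymPy's semantics: both functions exclude n itself, so nextprime(7) is 11 and prevprime(7) is 5.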

Related

Calculate pseudo random number based on an increasing value

I need to calculate a pseudo random number in a given range (e.g. 0-150) based on another, strictly increasing number. Is there a mathematical way to solve this?
I am given one number x, which increases by 1 every day. Based on this number, I need to - somehow - calculate a number in a given range, which seems to be random.
I have a feeling that there is an easy mathematical solution for this, but sadly I am not able to find it. So any help would be appreciated. Thanks!
One sound way to do that is to hash the number x (either its binary representation or its text form) and then use the hash to produce the 'random' number in the desired range (say, by taking the first 32 bits of the hash and extracting the desired value by any known method). A cryptographic hash such as SHA-256 can be used, but that is not necessary; MurmurHash may be a good fit for your application.
Normally when you generate a random number, a seed value is used so that the same sequence of pseudorandom numbers isn't repeated. When a seed isn't explicitly given, many systems will use the current time as the seed.
Perhaps you could use x as a seed.
Here's an article explaining seeding: https://www.statisticshowto.com/random-seed-definition/
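A minimal sketch of the hash-based approach using Python's standard library (the function name and the 0-150 range are just for illustration):

```python
import hashlib

def daily_number(x: int, lo: int = 0, hi: int = 150) -> int:
    """Deterministically map an increasing counter x to a
    'random-looking' value in [lo, hi]."""
    digest = hashlib.sha256(str(x).encode("ascii")).digest()
    # Take the first 32 bits of the hash as an unsigned integer...
    value = int.from_bytes(digest[:4], "big")
    # ...and reduce it into the target range (the modulo bias is
    # negligible when the range is this small relative to 2**32).
    return lo + value % (hi - lo + 1)
```

The same x always yields the same output, so no state needs to be stored, while consecutive values of x look unrelated.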

Matlab random number rng: choosing a seed

I would like to know more precisely what happens when you choose a custom seed in Matlab, e.g.:
rng(101)
From my (limited, but nevertheless existing) understanding of how pseudo-random number generators work, one can see the seed conceptually as choosing a position in a "very long list of pseudo-random numbers".
Question: let's say, in my Matlab script, I choose rng(100) for my first computation (a sequence of instructions) and then rng(1e6) for my second. Please note that each computation involves generating up to about 300k random numbers.
-> Does that ensure there is no overlap between the sequence in the "list" starting at 100 and ending around 300k and the one starting at 1e6 and ending around 1,300,000? (The idea of "no overlap" comes from the fact that rng(100) and rng(1e6) are separated by much more than 300k.)
i.e. that these are 2 "independent" sequences (as far as I remember, this "long list" would be generated by a special PRNG algorithm, most likely involving modular arithmetic?)
No, that is not the case. The mapping between the seed and the "position" in the list of generated numbers is not linear; you could actually interpret it as a hash/one-way function. It could happen that two seeds produce the same sequence of numbers shifted by one position, but that is very unlikely.
By default, MATLAB uses the Mersenne Twister (source).
Not quite. The seed you give to rng is the initiation point for the Mersenne Twister algorithm (by default) that is used to generate the pseudorandom numbers. If you choose two different seeds (no matter their relative non-negative integer values, except for maybe a special case or two), you will have effectively independent pseudorandom number streams.
For "99%" of people, the major uses of seeding the rng are using the 'shuffle' argument (to use a non-default seed based on the time to help ensure independence of numbers generated across multiple sessions), or to give it one particular seed (to be able to reproduce the same pseudorandom stream at a later date). If you try to finesse the seeds further without being extremely careful, you are more likely to cause issues than do anything helpful.
RandStream can be used to break off separate streams of pseudorandom numbers if that really matters for your application (it likely doesn't).
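The two mainstream uses of seeding (reproducibility and independence) can be demonstrated with Python's random module, which, like MATLAB's default generator, is based on the Mersenne Twister. This is an analogue, not MATLAB code:

```python
import random

# Reproducibility: re-seeding with the same value replays the same stream.
random.seed(101)
a = [random.random() for _ in range(5)]
random.seed(101)
b = [random.random() for _ in range(5)]
assert a == b

# Independence: different seeds give effectively unrelated streams.
# The seed is not a "position" in one long shared list, so seeds
# 100 and 1e6 are not simply 999,900 steps apart.
random.seed(100)
c = [random.random() for _ in range(5)]
random.seed(10**6)
d = [random.random() for _ in range(5)]
assert c != d
```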

Can online quantum random number generators be used in Matlab?

I want to avoid repetitions as much as possible. I've run the same Matlab program that uses "random" numbers multiple times and gotten exactly the same results. I learned that I should put the command rng('shuffle') at the beginning, and then the program will replace the default seed with a new one based on the clock time. But the sequence of outputs from the pseudo-random number generator will still contain a pattern.
I recently learned about a quantum box random number generator (this or something like it), and in the process of looking it up online I found a couple web servers that deliver random numbers that are continuously generated by quantum mechanical means: ANU Photonics and ANU QRNG.
A quantum box looks too expensive for me to buy, so how might I integrate one of the online servers into Matlab?
On http://qrng.anu.edu.au, click the "download" link in the text, and it takes you to a FAQ page that explains what to download to use the random number generator in different ways. The last item on the list is Matlab, for which it gives a link to directly download some code that accesses the random numbers, and a link to Matlab Central to download the JSON Parser that the code needs in order to work.
The code is very simple, and as a script only displays the values it fetches, but can easily be turned into a function. I unzipped the contents of parse_json.zip into C:/Program Files/MATLAB/[version]/toolbox/JSONparser, a new folder in the Toolboxes, navigated to the Toolboxes in the Current Folder in Matlab, right clicked JSONparser, and clicked Add to Path.
Read the comments on the Matlab Central page for JSON Parser to get an idea of the limits of how many random numbers you can pull down at a time.
The random numbers are 16-bit non-negative integers; to create a random integer with more bits, say 32, take two of them, multiply one by 2^16, and add the other. If you want a number between 0 and 1, divide the sum by 2^32.
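In Python terms (helper names are illustrative, not part of the ANU code), the packing step looks like this:

```python
def to_uint32(hi16: int, lo16: int) -> int:
    """Pack two 16-bit non-negative integers into one 32-bit integer:
    the first supplies the high 16 bits, the second the low 16 bits."""
    assert 0 <= hi16 < 2**16 and 0 <= lo16 < 2**16
    return hi16 * 2**16 + lo16

def to_unit_interval(hi16: int, lo16: int) -> float:
    """Map the packed 32-bit value into the half-open interval [0, 1)."""
    return to_uint32(hi16, lo16) / 2**32
```

Note the result of the division lies in [0, 1): it can be exactly 0 but never exactly 1.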

Language for working on big numbers

I am working on a task that involves different operations on very big numbers. Example: multiplying two 50-digit numbers. Numbers that big cannot be handled using C's built-in types.
Can someone suggest a programming language that can handle operations on such big numbers without any special libraries, so that I can learn that language to implement my algorithm?
Python 3 can work on very large numbers (its integers have effectively no size limit) and that's automatic.
https://stackoverflow.com/a/7604998/3156085
You can try it yourself by entering very large numbers in the Python shell.
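For example, multiplying two 50-digit numbers in plain Python just works and gives an exact result:

```python
a = int("9" * 50)                       # 10**50 - 1, a 50-digit number
b = int("1234567890" * 5)               # another 50-digit number
product = a * b                         # exact arbitrary-precision result
print(len(str(product)))                # the product has 100 digits
```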
The BigInteger and BigDecimal classes from Java's standard library can work with numbers as large as you need, without any extra library.

Getting around floating point error with logarithms?

I'm trying to write a basic digit counter (an integer is inputted and the number of digits of that integer is outputted) for positive integers. This is my general formula:
dig(x) := Math.floor(Math.log(x,10))
I tried implementing the equivalent of dig(x) in Ruby, and found that when I was computing dig(1000) I was getting 2 instead of 3 because Math.log was returning 2.9999999999999996 which would then be truncated down to 2. What is the proper way to handle this problem? (I'm assuming this problem can occur regardless of the language used to implement this approach, but if that's not the case then please explain that in your answer).
To get an exact count of the number of digits in an integer, you can do the usual thing (in C/C++, assuming n is non-negative):
int digits = 0;
while (n > 0) {
    n = n / 10; // integer division, just drops the ones digit and shifts right
    digits = digits + 1;
}
I'm not certain but I suspect running a built-in logarithm function won't be faster than this, and this will give you an exact answer.
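A Python version of the same loop, with the n == 0 edge case handled explicitly (the bare loop would report 0 digits for it):

```python
def digit_count(n: int) -> int:
    """Exact decimal digit count via repeated integer division."""
    if n == 0:
        return 1
    digits = 0
    while n > 0:
        n //= 10   # drop the ones digit
        digits += 1
    return digits
```

Unlike the log-based formula, this never suffers roundoff, even for huge integers.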
I thought about it for a minute and couldn't come up with a way to make the logarithm-based approach work with any guarantees, and almost convinced myself that it is probably a doomed pursuit in the first place because of floating point rounding errors, etc.
Following The Art of Computer Programming, Volume 2, you can eliminate one bit of error before the floor function is applied by adding that bit back in.
Let x be the result of the log, then do x += x / 0x10000000 for a single-precision floating point number (C's float). Then pass the value into floor.
This is likely the fastest approach (assuming you already have the input in numeric form) because it uses only a few floating point instructions, but note it only compensates for about one unit in the last place of error.
Floating point is always subject to roundoff error; that's one of the hazards you need to be aware of, and actively manage, when working with it. The proper way to handle it, if you must use floats, is to figure out the expected amount of accumulated error and allow for that in comparisons and printouts: round off appropriately, test whether a difference falls within that range rather than comparing for equality, and so on.
There is no exact binary-floating-point representation of simple things like 1/10th, for example.
(As others have noted, you could rewrite the problem to avoid the floating-point-based solution entirely, but since you asked specifically about making log() work, I wanted to address that question; apologies if I'm off target. Some of the other answers provide specific suggestions for how you might round off the result. That would "solve" this particular case, but as your floating-point operations get more complicated you'll have to keep allowing for roundoff accumulating at each step, and either deal with the error at each step or deal with the cumulative error -- the latter being the more complicated but more accurate solution.)
If this is a serious problem for an application, folks sometimes use scaled fixed point instead (running financial computations in terms of pennies rather than dollars, for example). Or they use one of the "big number" packages which computes in decimal rather than in binary; those have their own round-off problems, but they round off more the way humans expect them to.
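Python's standard decimal module is one such package; a minimal illustration of the difference, assuming only the standard library:

```python
from decimal import Decimal

# Binary floating point cannot represent 1/10 exactly,
# so roundoff shows up even in a trivial sum:
print(0.1 + 0.2)                        # 0.30000000000000004
# Decimal arithmetic rounds the way humans expect:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```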