How can I calculate the order of an element in a finite field using NTL?

I'm trying to calculate the order of an element in a finite field (group) using NTL, but I did not find any function to do this.
Can anyone guide me, please?

I think there is no built-in way to do this, but you can write it yourself.
A field F has two operations, addition (+) and multiplication (*). First you have to specify whether you want the order of an element g in the group (F, +) or in the group (F \ {0}, *).
Find the order of g in (F,+):
This is the easy case: the order of every non-zero element in this group is p if the field has p^m elements (and the order of the neutral element 0 is 1).
Find the order of g in (F \ {0}, *):
This is a little bit harder. Computing the order of g in (F \ {0}, *) is closely related to the discrete logarithm problem. Naively you can try g^k for every k = 1, ..., p^m - 1, but this will take a while. Since the order of g divides the group size p^m - 1 (Lagrange's theorem), it is enough to test the divisors of p^m - 1; a faster general approach is the baby-step giant-step algorithm.
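For illustration, here is a minimal sketch of the divisor-testing approach for the prime-field case m = 1 (written in Scala for brevity; with NTL you would instead work with ZZ_p and its power function - the names below are purely illustrative):

// Naive multiplicative order in GF(p), p prime, for g != 0: the order of g
// is the smallest k >= 1 with g^k = 1, and it must divide p - 1 (Lagrange).
def multiplicativeOrder(g: BigInt, p: BigInt): BigInt = {
  val n = p - 1
  (BigInt(1) to n).filter(n % _ == 0)   // candidate orders: divisors of p - 1
    .find(d => g.modPow(d, p) == 1)     // smallest divisor d with g^d = 1
    .get
}

// Example: 3 generates (Z/7Z)*, so multiplicativeOrder(3, 7) == 6.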
I have never tried it, but you may also take a look at this discrete logarithm implementation using NTL.

How does aggregate generalise fold and fold generalise reduce?

As far as I understand, aggregate is a generalisation of fold, which in turn is a generalisation of reduce.
Similarly, combineByKey is a generalisation of aggregateByKey, which in turn is a generalisation of foldByKey, which in turn is a generalisation of reduceByKey.
However, I have trouble finding simple examples for each of these seven methods which can only be expressed with them and not with their less general versions. For example, I found http://blog.madhukaraphatak.com/spark-rdd-fold/ giving an example for fold, but I have been able to use reduce in the same situation as well.
What I found out so far:
I read that the more generalised methods can be more efficient, but that would be a non-functional requirement, and I would like to get examples which cannot be implemented with the more specific method.
I also read that e.g. the function passed to fold only has to be associative, while the one for reduce additionally has to be commutative: https://stackoverflow.com/a/25158790/4533188 (however, I still don't know any good simple example), whereas in https://stackoverflow.com/a/26635928/4533188 I read that fold needs both properties to hold...
We could think of the zero value as a feature (e.g. for fold over reduce), as in "add all elements and add 3", using 3 as the zero value, but that would be misleading, because 3 would be added for each partition, not just once. Also, this is simply not the purpose of fold as far as I understand - it wasn't meant as a feature, but as a necessity of the implementation, to be able to take non-commutative functions.
What would simple examples for those seven methods be?
Let's work through what is actually needed logically.
First, note that if your collection is unordered, any set of (binary) operations on it needs to be both commutative and associative, or you'll get different answers depending on which (arbitrary) order you choose each time. Since reduce, fold, and aggregate all use binary operations, if you use these things on a collection that is unordered (or is viewed as unordered), everything must be commutative and associative.
reduce is an implementation of the idea that if you can take two things and turn them into one thing, you can collapse an arbitrarily long collection into a single element. Associativity is exactly the property that it doesn't matter how you pair things up as long as you eventually pair them all and keep the left-to-right order unchanged, so that's exactly what you need.
a   b   c   d        a   b   c   d        a   b   c   d
a # b   c   d        a # b   c   d        a   b # c   d
(a#b)   c # d        (a#b) # c   d        a   (b#c) # d
(a#b) # (c#d)        ((a#b)#c) # d        a # ((b#c)#d)
All of the above are the same as long as the operation (here called #) is associative. There is no reason to swap around which things go on the left and which go on the right, so the operation does not need to be commutative (addition is: a+b == b+a; concat is not: ab != ba).
reduce is mathematically simple and requires only an associative operation
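As a small illustration in plain Scala (a sketch, not Spark-specific): string concatenation is associative but not commutative, and reduce over an ordered collection is still perfectly well-defined:

val parts = List("a", "b", "c", "d")
// Any pairing gives the same result, because + on String is associative:
// ((a+b)+c)+d == (a+b)+(c+d) == "abcd"
val joined = parts.reduce(_ + _)   // "abcd"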
Reduce is limited, though, in that it doesn't work on empty collections, and in that you can't change the type. If you're working sequentially, you can use a function that takes a new type and the old type and produces something with the new type. This is a sequential fold (left-fold if the new type goes on the left, right-fold if it goes on the right). There is no choice about the order of operations here, so commutativity and associativity are irrelevant; there's exactly one way to work through your list sequentially. (If you want your left-fold and right-fold to always be the same, then the operation must be associative and commutative, but since left- and right-folds don't generally get accidentally swapped, this isn't very important to ensure.)
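A sketch of such a sequential fold in Scala, where the result type (String) differs from the element type (Int):

val xs = List(1, 2, 3)
// The accumulator goes on the left; there is exactly one evaluation order,
// so no algebraic properties are required of the function.
val s = xs.foldLeft("")((acc, x) => acc + x.toString)   // "123"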
The problem comes when you want to work in parallel. You can't sequentially go through your collection; that's not parallel by definition! So you have to insert the new type at multiple places! Let's call our fold operation #, and we'll say that the new type goes on the left. Furthermore, we'll say that we always start with the same element, Z. Now we could do any of the following (and more):
a     b     c     d          a     b     c     d          a     b     c     d
Z#a   b     c     d          Z#a   b     Z#c   d          Z#a   Z#b   Z#c   Z#d
(Z#a) # b   c     d          (Z#a) # b   (Z#c) # d
((Z#a)#b) # c     d
(((Z#a)#b)#c) # d
Now we have a collection of one or more things of the new type. (If the original collection was empty, we just take Z.) We know what to do with that! Reduce! So we make a reduce operation for our new type (let's call it $, and remember it has to be associative), and then we have aggregate:
a     b     c     d          a     b     c     d            a     b     c     d
Z#a   b     c     d          Z#a   b     Z#c   d            Z#a   Z#b   Z#c   Z#d
(Z#a) # b   c     d          (Z#a) # b   (Z#c) # d          Z#a $ Z#b     Z#c $ Z#d
((Z#a)#b) # c     d          ((Z#a)#b) $ ((Z#c)#d)          ((Z#a)$(Z#b)) $ ((Z#c)$(Z#d))
(((Z#a)#b)#c) # d
Now, these things all look really different. How can we make sure that they end up being the same? There is no single concept that describes this, but the Z# operation has to be zero-like and $ and # have to be homomorphic, in that we need (Z#a)#b == (Z#a)$(Z#b). That's the actual relationship that you need (and it is technically very similar to a semigroup homomorphism). There are all sorts of ways to pick badly even if everything is associative and commutative. For example, if Z is the double value 0.0 and # is actually +, then Z is zero-like and # is associative and commutative. But if $ is actually *, which is also associative and commutative, everything goes wrong:
(0.0+2) * (0.0+3) == 2.0 * 3.0 == 6.0
((0.0+2) + 3) == 2.0 + 3 == 5.0
One example of a non-trivial aggregate is building a collection, where # is the "append an element" operation and $ is the "concatenate two collections" operation.
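Sketched against the Spark RDD API (sc is an assumed SparkContext; the element order in the result depends on the partitioning):

val rdd = sc.parallelize(Seq(1, 2, 3, 4))
val collected = rdd.aggregate(List.empty[Int])(   // Z = Nil
  (acc, x) => acc :+ x,                           // # : append an element
  (l1, l2) => l1 ++ l2                            // $ : concat two collections
)
// Homomorphism check: (Z#a)#b == List(a, b) == (Z#a)$(Z#b)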
aggregate is tricky and requires an associative reduce operation, plus a zero-like value and a fold-like operation that is homomorphic to the reduce
The bottom line is that aggregate is not simply a generalization of reduce.
But there is a simplification (a less general form) if you're not actually changing the type. If Z is actually z, an actual zero, we can just stick it in wherever we want and use reduce. Again, we don't need commutativity conceptually; we just stick in one or more z's and reduce, and our # and $ operations can be the same thing, namely the original # we used in the reduce:
a     b     c     d          ()  <- the empty collection
z#a   z#b   c     d          z
z#a   (z#b)#c     d
z#a   ((z#b)#c)#d
(z#a) # (((z#b)#c)#d)
If we just delete the z's from here, it works perfectly well, and in fact is equivalent to if (empty) z else reduce. But there's another way it could work too. If the operation # is also commutative, and z is not actually a zero but just occupies a fixed point of # (meaning z#z == z, but z#a is not necessarily just a), then you can run the same thing, and since commutativity lets you switch the order around, you can conceptually reorder all the z's together at the beginning and then merge them all together.
And this is a parallel fold, which is really a rather different beast than a sequential fold.
(Note that neither fold nor aggregate is strictly a generalization of reduce, even for unordered collections where operations have to be associative and commutative, as some operations do not have a sensible zero! For instance, reducing strings by shortest length has as its "zero" the longest possible string, which conceptually doesn't exist and practically would be an absurd waste of memory.)
fold requires an associative reduce operation, plus either a zero value, or a commutative reduce operation together with a fixed-point value
Now, when would you ever use a parallel fold that wasn't just a reduceOrElse(zero)? Probably never, actually, though they can exist. For example, if you have a ring, you often have fixed points of the type we need. For instance, 10 % 45 == (10*10) % 45, and * is both associative and commutative in integers mod 45. Thus, if our collection is numbers mod 45, we can fold with a "zero" of 10 and an operation of *, and parallelize however we please while still getting the same result. Pretty weird.
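A sketch of that mod-45 fold against the Spark RDD API (sc is an assumed SparkContext):

val nums = sc.parallelize(Seq(2L, 7L))
// 10 is a fixed point of * mod 45: (10*10) % 45 == 10, so inserting it once
// per partition does not make the result depend on the partitioning. Note
// that the result is 10 * (2*7) mod 45 = 5, not 2*7 mod 45 = 14.
val folded = nums.fold(10L)((a, b) => (a * b) % 45)   // 5, however the data is split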
However, note that you can just plug the zero and operation of fold into aggregate and get exactly the same result, so aggregate is a proper generalization of fold.
So, bottom line:
Reduce requires only an associative merge operation, but doesn't change the type, and doesn't work on empty collections.
Parallel fold tries to extend reduce, but requires either a true zero, or a fixed point together with a commutative merge operation.
Aggregate changes the type by (conceptually) running sequential folds followed by a (parallel) reduce, but there are complex relationships between the reduce operation and the fold operation--basically they have to be doing "the same thing".
An unordered collection (e.g. a set) always requires an associative and commutative operation for any of the above.
With regard to the byKey stuff: it's just the same as this, except that the operation is applied only to the collection of values associated with a (potentially repeated) key.
If Spark actually requires commutativity where the above analysis does not suggest it's needed, one could reasonably consider that a bug (or at least an unnecessary limitation of the implementation, given that operations like map and filter preserve order on ordered RDDs).
the function passed to fold only has to be associative, while the one for reduce has to be commutative additionally.
This is not correct. fold on RDDs requires the function to be commutative as well. It is not the same operation as fold on Iterable, a difference which is pretty well described in the official documentation:
This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.
As you can see, the order in which partial values are merged is not part of the contract, hence the function used for fold has to be commutative.
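A sketch that makes the difference visible (Scala; sc is an assumed SparkContext):

val words = Seq("a", "b", "c", "d")
// Sequential fold: the evaluation order is fixed, so this is always "abcd".
val local = words.foldLeft("")(_ + _)
// RDD fold: partition results may be merged in any order; string concatenation
// is associative but not commutative, so the contract does not promise "abcd".
val distributed = sc.parallelize(words, 4).fold("")(_ + _)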
I read that the more generalised methods can be more efficient
Technically speaking there should be no significant difference. For fold vs reduce you can check my answers to reduce() vs. fold() in Apache Spark and Why is the fold action necessary in Spark?
Regarding the *byKey methods, all are implemented using the same basic construct, combineByKeyWithClassTag, and can be reduced to three simple operations:
createCombiner - create the initial ("zero") accumulator from the first value seen for a key in a partition
mergeValue - merge a value into the accumulator
mergeCombiners - merge the accumulators created for each partition.
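For a simple example that genuinely needs this generality (the accumulator type differs from the value type, so reduceByKey and foldByKey cannot express it), consider per-key averages - a sketch, with sc an assumed SparkContext:

val pairs = sc.parallelize(Seq(("a", 1.0), ("a", 3.0), ("b", 2.0)))
val sumCount = pairs.combineByKey(
  (v: Double) => (v, 1L),                                              // createCombiner
  (acc: (Double, Long), v: Double) => (acc._1 + v, acc._2 + 1L),       // mergeValue
  (x: (Double, Long), y: (Double, Long)) => (x._1 + y._1, x._2 + y._2) // mergeCombiners
)
val averages = sumCount.mapValues { case (sum, n) => sum / n }   // ("a", 2.0), ("b", 2.0)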

OCaml "reading" a matrix (list of lists)

I have this problem in which I want to change the value of the element at a given line lb and column c of a matrix m. I already have a function for that, but I think I can make a better one; the only thing is I can't think of another way of getting an element from the matrix and putting it back.
I can get it using
List.nth (List.nth m lb) c
but I'm having trouble putting it back.
What I have for now is (the functions matrixleft and matrixright are not done):
matrixleft m @ ((List.nth (List.nth m lb) c + 1) :: matrixright m)
This code looks OK to me on a complexity basis, though it's going to traverse the input matrix twice--once to get the old value and once to install the new one. You can get the answer by traversing just once if you don't mind some more fiddly coding.
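For comparison, here is a sketch of the single-traversal version, written in Scala rather than OCaml (the structure carries over directly to OCaml's List.mapi): rebuild the matrix while visiting it once, changing only the entry at line l, column c.

def bump(m: List[List[Int]], l: Int, c: Int): List[List[Int]] =
  m.zipWithIndex.map { case (row, i) =>
    if (i != l) row                      // rows other than l are reused as-is
    else row.zipWithIndex.map { case (x, j) =>
      if (j == c) x + 1 else x           // the one entry we want to change
    }
  }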
If you aren't following some externally imposed requirement, you would be better off using a real matrix (an array of arrays). Then there's no traversing, so you get constant time updates.

Universal hashing: should I get the same hash value for the same key?

I have implemented a universal hash function using this expression:
h(k) = ((a*k + b) mod p) mod m (from Cormen)
where:
- p is a big prime number greater than k;
- a and b are two numbers chosen randomly, the first in the range [1, p-1] and the second in [0, p-1].
Now, when I implemented this, I chose the seed of the random function equal to k. That's because, if I don't do this, when I insert a value with the key k, it will generate a hash value that depends on the default seed of the Random function (maybe the time), so if I later search for the same key, the universal hash function returns a different value. I would appreciate it if you could tell me whether my reasoning is correct or not.
My doubt is that now, doing so, two elements with the same key will irremediably be stored in the same linked list (and I don't understand whether that is correct or not).
Thanks in advance.
I think you have a slight misunderstanding about how universal hashing works. Rather than choosing a and b at random every time you compute the hash, instead, before you do any hashing at all, select a random a and b. Once you've done that, every time you need to compute the hash, go and compute it using the formula above based on the input value k and the values a and b that you chose initially.
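A minimal sketch in Scala of that "choose once, reuse forever" structure (the concrete prime p, and the assumption that keys are smaller than p, are illustrative choices):

import scala.util.Random

class UniversalHash(m: Int) {
  private val p = 2147483647L                  // a prime exceeding every key (assumption)
  private def nonNeg(): Long = Random.nextLong() & Long.MaxValue
  // a and b are drawn once, at construction time, and reused for every hash.
  private val a = 1 + nonNeg() % (p - 1)       // in [1, p-1]
  private val b = nonNeg() % p                 // in [0, p-1]
  def hash(k: Long): Int = (((a * k + b) % p) % m).toInt
}

// The same key always lands in the same bucket for a given table instance:
// val h = new UniversalHash(1024); h.hash(42) == h.hash(42)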

network security - cryptography

I was solving an RSA problem and facing difficulty computing d.
Please help me with this.
Given p = 971, q = 52:
phi(n) = 506340
gcd(phi(n), e) = 1, where 1 < e < phi(n)
therefore gcd(506340, 83) = 1
e = 83
e * d mod phi(n) = 1
I want to compute d; I have all the info.
Can you help me compute d from this?
(83 * d) mod 506340 = 1
I am a little weak in maths, so I am having difficulty finding d from the above equation.
Your value for q is not prime: 52 = 2^2 * 13. Therefore you cannot find d, because the maths for calculating it relies upon the fact that both p and q are prime.
I suggest working your way through the examples given here: http://en.wikipedia.org/wiki/RSA_%28cryptosystem%29
Normally, I would hesitate to suggest a Wikipedia link such as that, but I found it very useful as a preliminary source when doing a project on RSA as part of my degree.
You will need to be quite competent at modular arithmetic to get to grips with how RSA works. If you want to understand how to find d, you will need to learn to find the modular multiplicative inverse - just google this; I didn't come across anything incorrect when doing so myself.
Good luck.
A worked example
Let's take p = 11, q = 5. In reality you would use very large primes, but we are going to be doing this by hand, so we want smaller numbers. Keep both of these private.
Now we need n, which is given as n=pq and so in our case n=55. This needs to be made public.
The next item we need is the totient of n. This is simply phi(n) = (p-1)(q-1), so for our example phi(n) = 40. Keep this private.
Now you calculate the encryption exponent, e, defined such that 1 < e < phi(n) and gcd(e, phi(n)) = 1. There are nearly always many possible different values of e - just pick one (in a real application your choice would be determined by additional factors - different choices of e make the algorithm easier or harder to crack). In this example we will choose e = 7. This needs to be made public.
Finally, the last item to be calculated is d, the decryption exponent. To calculate d we must solve the equation e*d mod phi(n) = 1. This is most commonly done using the extended Euclidean algorithm, which solves the equation phi(n)*x + e*d = 1 subject to 1 < d < phi(n), where x is an unknown (possibly negative) multiplier - this is just the previous equation written without the mod. In our particular example, solving this leads to d = 23. This should be kept private.
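A sketch of the extended Euclidean algorithm in Scala, applied to the example above (Scala's BigInt also ships a built-in modInverse that does the same job):

// Returns (g, x, y) such that a*x + b*y = g = gcd(a, b).
def extGcd(a: BigInt, b: BigInt): (BigInt, BigInt, BigInt) =
  if (b == 0) (a, BigInt(1), BigInt(0))
  else {
    val (g, x, y) = extGcd(b, a % b)
    (g, y, x - (a / b) * y)
  }

def modInverse(e: BigInt, phi: BigInt): BigInt = {
  val (g, x, _) = extGcd(e, phi)
  require(g == 1, "e must be coprime to phi(n)")
  ((x % phi) + phi) % phi                      // normalize into [0, phi-1]
}

// modInverse(7, 40) == 23, matching d in the worked example.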
Then your public key is: n=55, e=7
and your private key is: n=55, d=23
To see the workthrough of the Extended Euclidean Algorithm check out this youtube video https://www.youtube.com/watch?v=kYasb426Yjk. The values used in that video are the same as the ones used here.
RSA is complicated and the mathematics gets very involved. Try solving a couple of examples with small values of p and q until you are comfortable with the method before attempting a problem with large values.

Can someone please clarify the Birthday Effect for me?

Please help interpret the Birthday effect as described in Wikipedia:
A birthday attack works as follows:
1. Pick any message m and compute h(m).
2. Check if h(m) is in the list L: if (h(m), m) is already in L, a colliding message pair has been found; else save the pair (h(m), m) in the list L and go back to step 1.
From the birthday paradox we know that we can expect to find a matching entry after performing about 2^(n/2) hash evaluations.
Does the above mean 2^(n/2) iterations through the above entire loop (i.e. 2^(n/2) returns to step 1), OR does it mean 2^(n/2) comparisons to individual items already in L?
It means 2^(n/2) iterations through the loop. But note that L would not be a normal list here, but a hash table mapping h(m) to m. So each iteration would only need a constant number (O(1)) of comparisons on average, and there would be O(2^(n/2)) comparisons in total.
If L were a normal array or a linked list, the number of comparisons would be much larger, since you would need to search through the whole list on each iteration. That would be a bad way to implement this algorithm, though.
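To make the counting concrete, here is a sketch of the loop in Scala, with L kept as a hash map; the hash function h stands in for a truncated n-bit hash and is an assumption of the sketch:

import scala.collection.mutable
import scala.util.Random

def birthdayAttack(h: String => Long): (String, String) = {
  val seen = mutable.Map.empty[Long, String]   // L, keyed by digest for O(1) lookups
  Iterator
    .continually(Random.alphanumeric.take(16).mkString)   // step 1: pick a message
    .flatMap { m =>
      val d = h(m)
      seen.get(d) match {
        case Some(m0) if m0 != m => Some((m0, m))   // collision: two messages, one digest
        case _ => seen.update(d, m); None           // save (h(m), m) and go back to step 1
      }
    }
    .next()   // expected after roughly 2^(n/2) iterations of the loop
}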