Modular arithmetic and Euclidean algorithm - discrete-mathematics

I was studying how to find a modular inverse. Suppose the example is:
27x is congruent to 1 (mod 392).
Now we have to find x. In the process we rewrite this equation as:
x is congruent to 27^(-1) (mod 392).
Here is my confusion: in modular arithmetic, can we simply take the 27 from the left-hand side, move it to the right-hand side, and write it as 1/27 (mod 392), without considering the 1 (mod 392) that is already there, i.e. by inserting 1/27 between the 1 and the (mod 392)?
Because 27x was congruent to 1 (mod 392), but now we say x is congruent to 1/27 (mod 392).

This seems confused. If 27x = 1 (mod 392), then by definition x is 27^(-1) (mod 392). You don't solve this equation by "moving" things from the left-hand side to the right-hand side. You solve it by using the Extended Euclidean Algorithm to write 27x + 392y = 1, in which case x is the inverse you seek: rearranging the equation as 392y = 1 - 27x shows that 27x differs from 1 by a multiple of 392, hence 27x = 1 (mod 392).
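To make the Extended Euclidean step concrete, here is a minimal Python sketch (the helper name ext_gcd and the printed check are illustrative, not part of the original question):

def ext_gcd(a, b):
    # returns (g, x, y) such that a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(27, 392)
assert g == 1        # 27 and 392 are coprime, so the inverse exists
print(x % 392)       # 363, and indeed 27 * 363 = 9801 = 25 * 392 + 1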

Related

Understanding the question in a Racket programming assignment

The question is:
A univariate polynomial of order n is given by the following equation.
P_n(x) = a_n x^n + ... + a_2 x^2 + a_1 x + a_0
Here, ai are the coefficients of the polynomial, and x is its unique variable. You might implement a procedure poly-3 for computing the polynomial of order 3 of x as follows.
(define (poly-3 x a0 a1 a2 a3)
(+ a0 (* a1 x) (* a2 x x) (* a3 x x x)))
In poly-3, the coefficients and the variable are bundled together as arguments; and you would have to specify the coefficients each time you want to compute the same polynomial with different values of x.
Instead, implement the procedure make-poly-3 that generates a procedure that computes the polynomial for an arbitrary x.
(define (make-poly-3 a0 a1 a2 a3)
...)
(define my-poly-3
(make-poly-3 1 2 3 4))
(my-poly-3 2)
Next, write a function sum-poly-3-range which will sum up the results for calling my-poly-3 for the values in a range:
(define (sum-poly-3-range from to)
...)
(sum-poly-3-range 1 50)
I don't understand what I need to do (I am not asking for the programming solution, just the steps).
My confusions:
I can't understand the workflow, i.e. the steps I need to follow.
How do I pass the coefficients for the polynomial? Should I generate them randomly, or should I use constant values for a0, a1, a2, a3?
When looping through the range, should I use that value as x?
make-poly-3 is a procedure which takes four arguments, and which will return another procedure. The values of the four arguments it takes will be the values of the coefficients to the polynomial.
The procedure it returns will take a single argument, which will be the value of x at which the polynomial is to be evaluated.
So, for instance
> (define linear (make-poly-3 0 1 0 0))
> (linear 2)
2
> (define squared (make-poly-3 0 0 1 0))
> (squared 2)
4
The sum-poly-3-range function uses whatever value my-poly-3 currently has (it 'uses my-poly-3 free', to use a bit of jargon), evaluates it for every integer in the range you give it, and works out the sum of the results.
So, as a simple example:
> (define my-poly-3 (make-poly-3 1 0 0 0))
> (sum-poly-3-range 1 50)
50
This is because (make-poly-3 1 0 0 0) returns a polynomial function which evaluates to 1 for all arguments (the constant term is the only non-zero term).
And
> (define my-poly-3 (make-poly-3 0 1 0 0))
> (sum-poly-3-range 1 50)
1275
because this polynomial just returns its argument (only the linear coefficient is non-zero), and 1 + 2 + ... + 50 = 1275.
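If it helps to see the general shape of the answer without giving away the Racket code, here is a rough sketch of the same idea in Python (a factory function returning a closure, plus a summing loop; all names are illustrative):

def make_poly_3(a0, a1, a2, a3):
    # returns a function of x that remembers the four coefficients
    def poly(x):
        return a0 + a1 * x + a2 * x * x + a3 * x * x * x
    return poly

my_poly_3 = make_poly_3(1, 2, 3, 4)
print(my_poly_3(2))   # 1 + 2*2 + 3*4 + 4*8 = 49

def sum_poly_3_range(start, end):
    # sums my_poly_3 over the integers start..end inclusive
    return sum(my_poly_3(x) for x in range(start, end + 1))

print(sum_poly_3_range(1, 50))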

Calculating d value in RSA

I saw a couple of questions about this, but most of them were answered in an unhelpful way or didn't get a proper answer at all. I have these variables:
p = 31
q = 23
e - public key exponent = 223
phi - (p-1)*(q-1) = 660
Now I need to calculate the variable d (which I know equals 367). The problem is that I don't know how. I found this equation on the internet, but it doesn't work (or I can't use it):
e⋅d = 1 mod φ(n)
When I see that equation, I think it means this:
d = (1 mod φ(n)) / e
But apparently it doesn't, because (1 mod φ(n)) / e = (1 % 660) / 223 = 1/223 != 367.
Maybe I don't understand it and did something wrong; that's why I'm asking.
I did some more research and found a second equation:
d = 1/e mod φ(n)
or
d = e^(-1) mod φ(n)
But in the end it gives the same result:
1/e mod φ(n) = (1/223) % 660 = 1/223 != 367
Then I saw someone saying that to solve that equation you need the extended Euclidean algorithm. If anyone knows how to use it to solve this problem, I'd be very thankful for your help.
If you want to calculate something like a / b mod p, you can't just divide and take the division remainder. Instead, you have to find b^(-1) such that b^(-1) = 1/b mod p (b^(-1) is the modular multiplicative inverse of b mod p). If p is a prime, you can use Fermat's little theorem. It states that for any prime p, a^p = a mod p <=> a^(p-2) = 1/a mod p. So, instead of a / b mod p, you compute something like a * b^(p-2) mod p. b^(p-2) can be computed in O(log(p)) using exponentiation by squaring. If p is not a prime, the modular multiplicative inverse exists if and only if GCD(b, p) = 1. Here, we can use the extended Euclidean algorithm to solve the equation bx + py = 1 in logarithmic time. When we have bx + py = 1, we can take it mod p and we get bx = 1 mod p <=> x = 1/b mod p, so x is our b^(-1). If GCD(b, p) ≠ 1, b^(-1) mod p doesn't exist.
Using either Fermat's theorem or the Euclidean algorithm gives the same result with the same time complexity, but the Euclidean algorithm can also be used when we want to compute something modulo a number that's not prime (it just has to be coprime with the number we want to divide by).
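To tie this back to the question's numbers, here is a minimal Python sketch of the extended-Euclidean computation for e = 223 and φ(n) = 660 (the helper name ext_gcd is illustrative; on Python 3.8+ the built-in pow(223, -1, 660) gives the same answer):

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

e, phi = 223, 660
g, x, _ = ext_gcd(e, phi)
assert g == 1        # gcd(e, phi) must be 1 for d to exist
d = x % phi
print(d)             # 367, and indeed 223 * 367 = 81841 = 124 * 660 + 1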

Perceptron - MatLab Serious Confusion

This is my first stab at machine learning, and I can implement the code any way I want. I have MATLAB access, which I think will be simpler than Python, and I have pseudocode for implementing a PLA (perceptron learning algorithm). The last part of the code, however, absolutely baffles me, though it is simpler than the code I have seen on here thus far. It seems to call for the use of variables that are not declared. Here's what I have. I'll point out the line number at which I get stuck.
1) w <- (n + 1) x m (matrix of small random nums)
2) I <- I augmented with col. of 1s
3) for i = 1 to 1000
4) delta_W = (n + 1) x m (matrix of zeros) // weight changes
5) for each pattern 1 <= j <= p
6) Oj = (Ij * w) > 0 // j's are subscripts / vector-matrix product w/ threshold
7) Dj = Tj - Oj // diff. between target and actual
8) w = w + Ij(transpose) * Dj // the learning rule
Lines 1 thru 4 are coded.
My questions are on line 5: What does "for each pattern" mean (i.e., how does one say it in code)? Also, which j are they interested in? I have a j in the observation matrix and a j in the target matrix. Also, where did "p" come from (I have i's, j's, m's and n's, but no p's)? Any thoughts would be appreciated.
"for each pattern" refers to the inputs. All they are saying is to run that loop where Ij is the input to the perceptron.
To write this in MATLAB, it really depends on how your data is oriented. I would store your inputs as a mXn matrix, where m is the number of inputs and n is the size of each input.
Say our inputs look like:
input = [1 5 -1;
2 3 2;
4 5 6;
... ]
First 'augment' this with a column of ones for the bias input:
[r c] = size(input);
input = [input ones(r,1)];
Then, your for loop will simply be:
for inputNumber = 1:r
pattern = input(inputNumber,:);
and you can continue from there.
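If it helps to see the rest of the loop spelled out, here is a rough NumPy sketch of pseudocode lines 5-8; the array names I, T, w and the toy shapes are assumptions for illustration, not the asker's actual data:

import numpy as np

# assumed shapes: I is p x (n+1) (inputs augmented with a column of ones),
# T is p x m (targets), w is (n+1) x m (weights)
rng = np.random.default_rng(0)
p, n, m = 4, 3, 1
I = np.hstack([rng.normal(size=(p, n)), np.ones((p, 1))])
T = rng.integers(0, 2, size=(p, m)).astype(float)
w = rng.normal(scale=0.01, size=(n + 1, m))

for epoch in range(1000):
    for j in range(p):                            # line 5: for each pattern
        Oj = (I[j:j+1] @ w > 0).astype(float)     # line 6: thresholded output, 1 x m
        Dj = T[j:j+1] - Oj                        # line 7: target minus actual
        w = w + I[j:j+1].T @ Dj                   # line 8: the learning rule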

Struggling with calculating the inverse using Euler's theorem

I've been struggling all day trying to figure out the multiplicative inverse of 17 modulo 31. I know by "manual" computation that the actual inverse is 11, but how do I prove this with Euler's theorem?
We know that 31 is prime, so φ(31) = 30, and I end up with 17^30 = 1 (mod 31). But how do I proceed from here? I would be very thankful if someone could help me out, since I'm stuck.
Thanks in advance.
Well, let's formalize it. Let a = 17 and p = 31; you want to find a^(-1). By Euler's theorem we get a^(p-1) = 1 (mod p). Dividing both sides by a gives a^(p-2) = a^(-1) (mod p).
Answer: 17^29 (mod 31)
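Evaluating 17^29 (mod 31) is easy with square-and-multiply; a small Python sketch (Python's built-in pow(17, 29, 31) does the same thing) confirms the inverse of 11 mentioned in the question:

def power_mod(a, e, p):
    # square-and-multiply: computes a^e mod p in O(log e) multiplications
    result = 1
    a %= p
    while e > 0:
        if e & 1:
            result = result * a % p
        a = a * a % p
        e >>= 1
    return result

print(power_mod(17, 29, 31))   # 11, and indeed 17 * 11 = 187 = 6 * 31 + 1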

What is the Haskell / hmatrix equivalent of the MATLAB pos function?

I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent would be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type checks ok, but of course, I'm missing the "pos" call, and it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles where true == 1.0 and false == 0.0, it would turn negative values to zero and leave positive values unchanged?
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix
pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go where
go x | x > 0 = x
| otherwise = 0
Though MATLAB makes no distinction between Matrix and Vector, unlike Haskell.
But it's worth analyzing that MATLAB fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that
U * S * V' = Y
where, assuming Y is m x n, U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V have orthonormal columns. In linear-algebraic terms this separates the linear transformation Y into two "rotation" components and a central scaling component (the singular values).
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract a term tau (a scalar or a vector; MATLAB expands a scalar automatically). This might produce a diagonal containing negative values, which cannot properly be interpreted as singular values, so pos is there to trim out the negative entries, setting them to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) return a vector of singular values instead of a mostly-zero diagonal matrix.
(u, s, v) = thinSVD y
-- note the trans here, that was the ' in Matlab
a = u `multiply` diag (fmap (max 0) s) `multiply` trans v
Above, fmap maps max 0 over the Vector of singular values s, and then diag (from Numeric.Container) reinflates the Vector into a Matrix prior to the multiply calls. With a little thought it's easy to see that max 0 is just pos applied to a single element.
(A > 0) returns a matrix of ones and zeros indicating which elements of A are larger than zero.
So, for example, if you have
A = [ -1 2 -3 4
5 6 -7 -8 ]
then B = (A > 0) returns
B = [ 0 1 0 1
1 1 0 0]
Note that we have ones corresponding to elements of A which are larger than zero, and 0 otherwise.
Now if you multiply this elementwise with A using the .* notation, then you are multiplying each element of A that is larger than zero by 1, and by zero otherwise. That is, A .* B means
[ -1*0 2*1 -3*0 4*1
5*1 6*1 -7*0 -8*0 ]
giving finally,
[ 0 2 0 4
5 6 0 0 ]
So you need to write your own function that returns positive values intact and sets negative values to zero.
Also note that u and v do not match in dimension for a general SVD decomposition, so you would actually need to re-diagonalize pos(diagS - tau), so that u * diag(pos(diagS - tau)) * v' is dimensionally consistent.
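For readers more at home outside MATLAB, the same elementwise trick can be sketched in Python/NumPy, using the example matrix from above (this is just an analogy; the Haskell mapMatrix version earlier is what the question actually needs):

import numpy as np

A = np.array([[-1, 2, -3, 4],
              [ 5, 6, -7, -8]], dtype=float)
B = (A > 0).astype(float)   # 1.0 where A is positive, 0.0 otherwise
P = A * B                   # elementwise product: negatives become 0, positives are kept
print(P)                    # [[0. 2. 0. 4.]
                            #  [5. 6. 0. 0.]]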