Understanding the question in a Racket programming assignment

The question is :
A univariate polynomial of order n is given by the following equation:
P_n(x) = a_n*x^n + ... + a_2*x^2 + a_1*x + a_0
Here, the a_i are the coefficients of the polynomial, and x is its single variable. You might implement a procedure poly-3 for computing the polynomial of order 3 of x as follows.
(define (poly-3 x a0 a1 a2 a3)
  (+ a0 (* a1 x) (* a2 x x) (* a3 x x x)))
In poly-3, the coefficients and the variable are bundled together as arguments, so you have to re-specify the coefficients each time you want to evaluate the same polynomial at a different value of x.
Instead, implement the procedure make-poly-3 that generates a procedure that computes the polynomial for an arbitrary x.
(define (make-poly-3 a0 a1 a2 a3)
...)
(define my-poly-3
(make-poly-3 1 2 3 4))
(my-poly-3 2)
Next, write a function sum-poly-3-range which will sum up the results for calling my-poly-3 for the values in a range:
(define (sum-poly-3-range from to)
...)
(sum-poly-3-range 1 50)
I am not understanding what I need to do (I am not asking for the programming solution, just steps).
My confusions:
Can't understand the workflow or say the steps I need to follow.
How should I pass the coefficients to the polynomial? Should I generate them randomly, or should I use constant values for a0, a1, a2, a3?
When looping through the range, should I use each value in the range as x?

make-poly-3 is a procedure which takes four arguments, and which will return another procedure. The values of the four arguments it takes will be the values of the coefficients to the polynomial.
The procedure it returns will take a single argument, which will be the value of x at which the polynomial is to be evaluated.
So, for instance
> (define linear (make-poly-3 0 1 0 0))
> (linear 2)
2
> (define squared (make-poly-3 0 0 1 0))
> (squared 2)
4
The sum-poly-3-range function uses whatever value my-poly-3 currently has (my-poly-3 occurs 'free' in it, to use a bit of jargon), evaluates it at every integer in the range you give it, and returns the sum of the results.
So, as a simple example:
> (define my-poly-3 (make-poly-3 1 0 0 0))
> (sum-poly-3-range 1 50)
50
This is because (make-poly-3 1 0 0 0) returns a polynomial function which evaluates to 1 for all arguments (the constant term is the only non-zero term).
And
> (define my-poly-3 (make-poly-3 0 1 0 0))
> (sum-poly-3-range 1 50)
1275
because this polynomial is the identity function (it just returns its argument), and 1 + 2 + ... + 50 = 1275.
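To make the shapes concrete, here is a rough Python sketch of the same idea (the assignment itself is in Racket, and there sum-poly-3-range picks up my-poly-3 as a free variable rather than taking the polynomial as a parameter, as done here):

```python
# make_poly_3 returns a closure that remembers the coefficients;
# the returned function takes only x.
def make_poly_3(a0, a1, a2, a3):
    def poly(x):
        return a0 + a1 * x + a2 * x**2 + a3 * x**3
    return poly

# sum_poly_3_range evaluates a polynomial at every integer in [lo, hi]
# and sums the results.
def sum_poly_3_range(poly, lo, hi):
    return sum(poly(x) for x in range(lo, hi + 1))

my_poly_3 = make_poly_3(1, 2, 3, 4)
print(my_poly_3(2))                                        # 49

constant_one = make_poly_3(1, 0, 0, 0)                     # p(x) = 1
print(sum_poly_3_range(constant_one, 1, 50))               # 50

identity = make_poly_3(0, 1, 0, 0)                         # p(x) = x
print(sum_poly_3_range(identity, 1, 50))                   # 1275
```

The closure is the whole point of the exercise: the coefficients are fixed once, and only x varies between calls.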


How to add a constraint to my optimization?

I am working on formulating an optimization problem where I have a 2-D matrix A.
A = [ 0   f1  0   f2
      f3  f3  0   0
      ...            ]
And I have another 2-D matrix B that I should fill. B has the same size of A. I need b_ij (element of B) to be zero if a_ij=0 (element of A) and I need b_ij to be greater than zero and less than or equal to a_ij if a_ij is not zero.
How can I represent this in my formulation? I have added this constraint/condition:
b_ij<=a_ij
But this does not satisfy the condition that states that b_ij is not equal zero when a_ij is not equal zero. Any help?
If all elements are positive, keep the smaller element at each position by doing an element-by-element comparison:
B2 = min(A,B)
Alternatively, create a logical matrix indicating whether the condition is satisfied and multiply it element-by-element with the matrix B; only elements that satisfy the condition remain, the others are set to zero:
B = B.*(A~=0)
Then keep elements of B that are smaller or equal to elements of A, and replace them by the value of A otherwise.
B = B.*(B<=A) + A.*(B>A)
This option lets you generalize your constraint.
You indicated that b_ij must be greater than zero whenever a_ij is greater than zero. One option is to use the function max to ensure that all elements of B are positive.
B = max(1e-2,B); % exact value is yours to set.
This step is up to you and depends on your problem.
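For illustration, the element-wise operations above can be sketched in plain Python (the answer's own syntax is MATLAB; here nested list comprehensions play the role of the .* masking):

```python
# Small illustrative matrices (not from the question).
A = [[0, 2], [3, 0]]
B = [[1, 1], [5, 1]]

# B .* (A ~= 0): zero out entries of B wherever A is zero.
masked = [[b if a != 0 else 0 for a, b in zip(ra, rb)]
          for ra, rb in zip(A, B)]

# B .* (B <= A) + A .* (B > A): cap each entry of B at the matching entry of A.
capped = [[b if b <= a else a for a, b in zip(ra, rb)]
          for ra, rb in zip(A, masked)]

print(masked)  # [[0, 1], [5, 0]]
print(capped)  # [[0, 1], [3, 0]]
```

The second step shows why the combined expression generalizes the plain b_ij <= a_ij constraint: it clamps rather than discards over-large entries.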
You want to implement the implication
a = 0 => b = 0
a <> 0 => 0 < b <= a
If a is (constant) data this is trivial. If a is a variable then things are not so easy.
You implemented part of the implications as
b <= a
This implies a is non-negative: a >= 0. It also implies b is non-negative. The remaining implication a > 0 => b > 0 can now be implemented with a binary variable δ and big-M style constants (here 1000 must be an upper bound on a, and 1/1000 is the tolerance below which b counts as zero):
a <= δ * 1000
b >= δ / 1000
δ in {0,1}
Many MIP solvers support indicator constraints. That would allow you to say:
δ = 0 -> a = 0
δ = 1 -> b >= 0.001
δ in {0,1}
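As a sanity check of the big-M encoding above (not a MIP solver, just a pure-Python feasibility test, assuming 1000 is a valid upper bound on a and b >= 1/1000 is an acceptable reading of "b strictly positive"):

```python
M = 1000.0          # assumed upper bound on a
EPS = 1.0 / 1000.0  # threshold for "b strictly positive"

def feasible(a, b, delta):
    # b <= a (with non-negativity), a <= delta*M, b >= delta*EPS, delta binary
    return (delta in (0, 1)
            and 0 <= b <= a
            and a <= delta * M
            and b >= delta * EPS)

# a = 0 admits b = 0 (take delta = 0):
print(feasible(0, 0, 0))                       # True
# a > 0 with b = 0 is infeasible for either value of delta:
print(any(feasible(5, 0, d) for d in (0, 1)))  # False
# a > 0 with 0 < b <= a is feasible with delta = 1:
print(feasible(5, 0.5, 1))                     # True
```

The middle case is the one the plain b <= a constraint failed to exclude.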

What does y==x mean in MATLAB?

I came across some MATLAB code online and it was running just fine, but I couldn't understand the meaning of (y == x) where y is a column matrix and x is an integer.
someFunction(y == x);
Is it some kind of comparing or setting some value of y?
The instruction
y == x
checks which values in the array y (if any) are equal to the scalar x. It returns a logical array the size of y, with 1 in the locations where the element of y equals x and 0 elsewhere.
Note that the comparison is only reliable if y holds integer values; testing floating-point numbers for exact equality is generally unsafe.
Therefore, the function someFunction seems accepting as input a logical array.
As an example, with
y = [10 2 10 7 1 3 6 10 10 2]
and
x=10
the code
(y == x)
returns the logical array:
1 0 1 0 0 0 0 1 1 0
This logical array is what gets passed to someFunction.
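A rough Python equivalent of the same comparison (MATLAB returns a logical array; here we build a list of 0/1 flags with a comprehension):

```python
y = [10, 2, 10, 7, 1, 3, 6, 10, 10, 2]
x = 10

# Elementwise comparison of the array y against the scalar x,
# mirroring MATLAB's y == x.
mask = [1 if v == x else 0 for v in y]
print(mask)  # [1, 0, 1, 0, 0, 0, 0, 1, 1, 0]
```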

Modular arithmetic and the Euclidean Algorithm

I was studying how to find the modular inverse. Suppose the example is:
27*x is congruent to 1 (mod 392).
Now we have to find x. In the process we rewrite this equation as:
x is congruent to 27^(-1) (mod 392).
Here is my confusion: in modular arithmetic, can we simply take the 27 from the left-hand side, move it to the right-hand side, and write it as 1/27 (mod 392), inserting 1/27 between the 1 and the (mod 392) that were already there?
That is, 27*x was congruent to 1 (mod 392), but now we say x is congruent to 1/27 (mod 392).
This seems confused. If 27x = 1 (mod 392), then by definition x is 27^(-1) (mod 392). You don't solve this equation by "moving" things from the left-hand side to the right-hand side. You solve it by using the Extended Euclidean Algorithm to write 27x + 392y = 1; then x is the inverse you seek. Rearranging the equation as 392y = 1 - 27x shows that 27x differs from 1 by a multiple of 392, hence 27x = 1 (mod 392).
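A short Python sketch of the Extended Euclidean Algorithm applied to this example (the inverse turns out to be 363, since 27*363 = 9801 = 25*392 + 1):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("inverse does not exist: a and m are not coprime")
    return x % m  # normalize into the range [0, m)

print(mod_inverse(27, 392))  # 363
print((27 * 363) % 392)      # 1
```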

What is the Haskell / hmatrix equivalent of the MATLAB pos function?

I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent would be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type checks ok, but of course, I'm missing the "pos" call, and it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
    P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles with true == 1.0 and false == 0.0, it turns negative values to zero and leaves positive values unchanged?
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix
pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go
  where
    go x | x > 0     = x
         | otherwise = 0
Note that, unlike Haskell, Matlab makes no distinction between a matrix and a vector.
But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that
U * S * V' = Y
where, assuming Y is m x n (with m >= n), U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V have orthonormal columns. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and a central diagonal scaling component built from the singular values.
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract the term tau (a scalar, or a vector of matching length). This might produce a diagonal containing negative values, which cannot be valid singular values, so pos is there to clamp the negative entries to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) returns the singular values as a vector instead of a mostly-zero diagonal matrix.
(u, s, v) = thinSVD y
-- note the trans here, that was the ' in Matlab
a = u `multiply` diag (fmap (max 0) s) `multiply` trans v
Above, fmap maps max 0 over the Vector of singular values s, and then diag (from Numeric.Container) reinflates the Vector into a Matrix prior to the multiplies. With a little thought it's easy to see that max 0 is just pos applied to a single element.
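For comparison, an illustrative NumPy version of the same reconstruction (this assumes NumPy; the hmatrix code above is the answer's actual target):

```python
import numpy as np

def shrink(y, tau):
    """Rebuild y after clamping the shifted singular values at zero,
    i.e. U * diag(pos(s - tau)) * V'."""
    u, s, vt = np.linalg.svd(y, full_matrices=False)  # economy-sized SVD
    s_pos = np.maximum(s - tau, 0.0)                  # pos(diagS - tau)
    return u @ np.diag(s_pos) @ vt

y = np.array([[3.0, 0.0],
              [0.0, 1.0]])
# singular values are 3 and 1; shifting by tau = 2 and clamping gives 1 and 0,
# so the reconstruction is [[1, 0], [0, 0]]
print(shrink(y, 2.0))
```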
(A > 0) returns a logical array marking the positions of elements of A which are larger than zero.
So, for example, if you have
A = [ -1  2  -3   4
       5  6  -7  -8 ]
then B = (A > 0) returns
B = [ 0  1  0  1
      1  1  0  0 ]
Note that B has a one corresponding to each element of A which is larger than zero, and a zero otherwise.
Now if you multiply this elementwise with A using the .* notation, you are multiplying each element of A that is larger than zero by 1, and each remaining element by zero. That is, A .* B means
[ -1*0  2*1  -3*0   4*1
   5*1  6*1  -7*0  -8*0 ]
giving finally,
[ 0  2  0  4
  5  6  0  0 ]
So you need to write your own function that will return positive values intact, and negative values set to zero.
Also, for a general SVD decomposition, u and v do not match in dimension, so you would actually need to re-diagonalize pos(diagS - tau) with diag, so that u * diag(pos(diagS - tau)) * v' has consistent dimensions.
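In other words, pos keeps positive entries and zeroes out the rest; a throwaway Python sketch of the same masking (the thread's own code is MATLAB and Haskell):

```python
def pos(a):
    # Equivalent of MATLAB's  P = A .* double(A > 0):
    # negative entries become zero, positive entries pass through.
    return [[x if x > 0 else 0 for x in row] for row in a]

A = [[-1, 2, -3, 4],
     [5, 6, -7, -8]]
print(pos(A))  # [[0, 2, 0, 4], [5, 6, 0, 0]]
```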

Get values for 2 arguments from 1 equation

I have this equation: p = (1 + (q-1)*B*T) ^ (-1/q-1)
The values of p and T are known, and the p-T diagram forms a curve. I want to calculate q and B so that the curve is as close to a straight line as possible.
Some values are:
T    p
1    0,999147061
2    0,997121331
3    0,994562513
Is there any way to make MATLAB (or something else) give me the values of B and q?
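No answer is recorded here, but one possible reading (an assumption on my part) is a least-squares fit of q and B to the measured (T, p) pairs. A rough Python sketch, taking the exponent to mean -1/(q-1) and using a deliberately coarse grid search where MATLAB's lsqcurvefit or SciPy's curve_fit would normally be used:

```python
# Measured (T, p) pairs from the question (decimal commas written as points).
data = [(1, 0.999147061), (2, 0.997121331), (3, 0.994562513)]

def model(T, q, B):
    # Assumed model: p = (1 + (q-1)*B*T) ** (-1/(q-1))
    return (1 + (q - 1) * B * T) ** (-1 / (q - 1))

def sse(q, B):
    # Sum of squared errors against the measured pairs.
    return sum((model(T, q, B) - p) ** 2 for T, p in data)

# Coarse grid search for illustration only; q = 1 is excluded because the
# exponent -1/(q-1) is undefined there.
qs = [0.01, 0.5, 2.0]
bs = [i * 1e-4 for i in range(1, 31)]
best_err, best_q, best_B = min((sse(q, B), q, B) for q in qs for B in bs)
print(best_q, best_B, best_err)
```

A real solver would minimize sse over continuous (q, B) instead of a grid; the sketch only shows how to pose the objective.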