How to solve this recurrence by the iterative method? - discrete-mathematics

I have tried but failed to find the Big O of
T(n) = 3T(n/3) + n/lg(n)
Can anyone please give me a solution?

A good way to visualize this problem is a recursion tree diagram. Working out the first three levels of the tree:
At the first level, we have a total work of n / lg n.
At the second level, we have 3 calls of (n/3) / lg(n/3). Summing these calls gives a total work of n / lg(n/3) at this level.
At the third level, we have 9 calls of (n/9) / lg(n/9). Summing these calls gives a total work of n / lg(n/9), i.e. n / lg(n/3^2), at this level.
The recursive calls continue until we reach a call of T(1), which happens when n/3^k = 1, i.e. at depth k = log3(n).
So now we have a summation of the work over all of the levels:
T(n) = sum_{i=0}^{log3(n)} n / lg(n/3^i)
The factor n does not depend on the summation index and can be pulled out:
T(n) = n * sum_{i=0}^{log3(n)} 1 / lg(n/3^i)
Expanding the denominator, lg(n/3^i) = lg(n) - i*lg(3) = (log3(n) - i) * lg(3), so substituting j = log3(n) - i simplifies the summation to
T(n) = (n / lg 3) * sum_{j=1}^{log3(n)} 1/j
(the level where the denominator would be zero is just the constant-work base case T(1)).
The remaining summation is a harmonic series, H_{log3(n)} = Θ(log(log n)), so in Big Theta the total simplifies to Θ(n*(1 + log(log n))).
Simplifying further, we have Θ(n + n*log(log n)), and by the rules of Big Theta we can drop the first term, since it grows more slowly and doesn't matter in the final bound, leaving Θ(n*log(log n)).
But wait! You asked for Big O. Thankfully, Big Theta provides both an upper and a lower bound, while Big O only provides an upper bound. What does this mean for us? A Big Theta bound is automatically a Big O bound (but not the other way around), so you can say that the recurrence relation is O(n*log(log n)).
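If you want a numerical sanity check (my addition, not part of the derivation above; the base case T(1) = 1 is an arbitrary choice), you can evaluate the recurrence at powers of 3 in MATLAB and watch the ratio against n*lg(lg n):

T = 1;                                   % arbitrary base case T(1)
for k = 1:20
    n = 3^k;
    T = 3*T + n/log2(n);                 % T(n) = 3*T(n/3) + n/lg(n)
    fprintf('k = %2d   T/(n*lg(lg n)) = %.4f\n', k, T/(n*log2(log2(n))));
end

The printed ratio stays bounded and changes ever more slowly, consistent with the Θ(n*log(log n)) bound; don't expect it to flatten quickly, since harmonic sums approach their logarithmic limit very slowly.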
Hope this helps!

Related

Circular convolution of binary vectors (mod 2) using NTT

Let x, y be vectors of length n, with entries either 1 or 0. I want to efficiently compute the circular convolution
(x * y) mod 2
where each component of the result is taken mod 2.
I know how to do it using a Fast Fourier Transform
(multiply the Fourier transforms of x and y, transform back, then take each entry mod 2)
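For concreteness, that FFT recipe is just a few lines in MATLAB (a minimal sketch; n = 16 and the random vectors are mine, purely for illustration):

n = 16;                          % small n just for illustration
x = double(rand(1,n) > 0.5);     % random 0/1 vectors
y = double(rand(1,n) > 0.5);
c = ifft(fft(x) .* fft(y));      % floating-point circular convolution
c = mod(round(real(c)), 2);      % round away FP noise, then take mod 2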
However, this uses floating point calculations to solve a discrete problem and for large n (I'm interested in n ~ 10^7) it might lead to rounding errors. I expect there is a better way to do this using the number theoretic transform (NTT) but unfortunately I'm not familiar with number theory or NTT.
I looked at this website. Following the procedure there,
let's say n = 10^7. I need
a modulus M (use 10^7).
a prime N=kn+1 for some k. (use N = 3 * 10^7 + 1)
a root ω ≡ g^k mod N, where g is a generator (e.g. ω = 2744)
Do the transform, etc.
Question
This seems promising. However, I would need 32-bit integers to store each bit during this calculation?
Also, this is not making use of the fact that I only need results modulo 2.
Is there a way to make use of this to simplify the procedure?
Since I don't know the number theory, this is not obvious to me.
I'm not asking for a full solution, only for an argument as to whether my "mod 2" significantly simplifies the implementation (both in terms of the difficulty of implementing the necessary algorithms and in terms of computational resources).
Another question: If it's not possible to simplify using "mod 2", do you think it would still pay off to use NTT, as opposed to just throwing a well-known FFT library at the floating point problem?
For the NTT, your procedure looks correct. Yes, you would need a 32-bit integer for each bit of your original vector. Unfortunately, there's not a lot you can do there to exploit the fact that the end result is mod 2, since you need a root of order 10^7. You may be able to shrink that number by a couple of factors of two (doing the standard DFT for a few base levels of the recursion), but it wouldn't change much, relatively speaking.
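To make the algebra concrete, here is a toy-sized version of that procedure (my parameters, chosen small on purpose: n = 4, prime N = 3*4 + 1 = 13, root w = 8 of order 4 mod 13), using a direct O(n^2) transform in place of a fast NTT:

n = 4; N = 13; w = 8;                        % toy transform length and prime
winv = 5; ninv = 10;                         % 8*5 = 1 and 4*10 = 1 (mod 13)
x = [1 0 1 1]; y = [1 1 0 0];
E  = (0:n-1)' * (0:n-1);                     % exponent matrix i*j
V  = mod(w.^E, N);                           % forward transform matrix
Vi = mod(winv.^E, N);                        % inverse transform matrix
X = mod(V * x', N);  Y = mod(V * y', N);     % transforms of x and y
c = mod(ninv * mod(Vi * mod(X .* Y, N), N), N);  % exact convolution mod N
c2 = mod(c', 2)                              % reduce mod 2: gives [0 1 1 0]

Since every convolution entry is at most n = 4 < 13, the result mod N is the exact convolution, and only then is it reduced mod 2. With n = 10^7 and N ≈ 3*10^7 the per-butterfly products still fit exactly in doubles (below 2^53), but you would of course use a fast NTT rather than the matrix form.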
Note, for your FFT implementation, I believe you could use integer arithmetic since it's mod 2, but I'm not convinced it would be at all efficient. See this math stackexchange answer for details.

What is the difference between 'qr' and 'svd' in Matlab to get the singular vectors of a matrix?

Specifically, the following two pieces of code should ideally produce the same S and V. However, the second one is usually faster than the first in Matlab. Can someone tell me the reason?
Moreover, which method is more numerically stable?
Thanks.
[~,S,V] = svd(B,'econ');    % method 1: economy-size SVD of B directly
[Qc,Rc] = qr(B',0);         % method 2: thin QR factorization of B'
[U,S,~] = svd(Rc,'econ');   % SVD of the (much smaller) triangular factor
V = Qc*U;                   % recover the right singular vectors of B
The second method does not have to be faster. For nearly square matrices it can be slower. Consider as an example the Golub-Reinsch SVD algorithm:
Its work depends on the output you want to calculate (only S; S and V; or S, V, and U).
If you want to calculate S and V without performing any preprocessing, the required work is 4mn^2+8n^3.
If you perform a QR decomposition before this, the Householder transformation needs 2/3n^3+n^2+1/3n-2 of work. Now, if your matrix is nearly square, i.e. m = n, you will not have gained much, as R is still m x n. However, if m is larger than n, you can reduce R to an n x n matrix (the thin QR factorization). Calculating U and S from it then adds 12n^3 for your SVD algorithm.
So only SVD: 4mn^2+8n^3
SVD with QR: (12+2/3)n^3+n^2+1/3n-2
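For example (my numbers, just to make the comparison concrete): with m = 10n, only-SVD costs 4*(10n)*n^2 + 8n^3 = 48n^3, while SVD-with-QR costs about 12.7n^3, roughly a factor of four less; with m = n, only-SVD costs 12n^3 and the QR route is actually slightly more expensive.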
However, most SVD algorithms include an (R-)bidiagonalization step, which reduces the work to 2mn^2+11n^3.
You can also apply QR, then the R-bidiagonalization, and then the SVD to make it even faster, but it all depends on your matrix dimensions.
Matlab uses the LAPACK libraries for its SVD. You can look up the exact runtimes here. They're approximately the same as for the algorithm above.
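Two remarks of my own. First, why the two methods agree at all: if B' = Qc*Rc and Rc = U*S*W' is its SVD, then B = W*S*(Qc*U)', so the singular values coincide and V = Qc*U. Second, a rough timing sketch (my sizes, chosen so B is very wide and B' is tall and thin; absolute numbers depend on your machine and BLAS):

m = 200; n = 20000;                            % B deliberately very wide
B = randn(m, n);
tic; [~, S1, V1] = svd(B, 'econ'); t1 = toc;   % method 1: direct SVD
tic;
[Qc, Rc] = qr(B', 0);                          % thin QR: Qc is n-by-m
[U, S2, ~] = svd(Rc, 'econ');                  % SVD of the m-by-m factor
V2 = Qc * U;
t2 = toc;                                      % method 2: QR, then SVD
fprintf('direct SVD: %.3f s   QR+SVD: %.3f s   singular value diff: %.2e\n', ...
        t1, t2, norm(diag(S1) - diag(S2)));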
Hope this helps.

Least-squares minimization within threshold in MATLAB

The cvx suite for MATLAB can solve the (seemingly innocent) optimization problem below, but it is rather slow for the large, full matrices I'm working with. I'm hoping this is because using cvx is overkill, and that the problem actually has an analytic solution, or that a clever use of some built-in MATLAB functions can more quickly do the job.
Background: It is well-known that both x1=A\b and x2=pinv(A)*b solve the least-squares problem:
minimize norm(A*x-b)
with the distinction that norm(x2)<=norm(x1). In fact, x2 is the minimum-norm solution to the problem, so norm(x2)<=norm(x) for all possible solutions x.
Defining D=norm(A*x2-b) (equivalently, D=norm(A*x1-b)), x2 solves the problem
minimize norm(x)
subject to
norm(A*x-b) == D
Problem: I'd like to find the solution to:
minimize norm(x)
subject to
norm(A*x-b) <= D+threshold
In words, I don't need norm(A*x-b) to be as small as possible, just within a certain tolerance. I want the minimum-norm solution x that gets A*x within D+threshold of b.
I haven't been able to find an analytic solution to the problem (like using the pseudoinverse in the classic least-squares problem) on the web or by hand. I've been searching things like "least squares with nonlinear constraint" and "least squares with threshold".
Any insights would be greatly appreciated, but I suppose my real question is:
What is the fastest way to solve this "thresholded" least-squares problem in MATLAB?
Interesting question. I do not know the answer to your exact question, but a working solution is presented below.
Recap
Define res(x) := norm(Ax - b).
As you state, x2 minimizes res(x). In the overdetermined case (typically A having more rows than columns), x2 is the unique minimizer. In the underdetermined case, it is joined by infinitely many others*. However, among all of these, x2 is the unique one that minimizes norm(x).
To summarize, x2 minimizes (1) res(x) and (2) norm(x), and it does so in that order of priority. In fact, this characterizes (fully determines) x2.
The limit characterization
But, another characterization of x2 is
x2 := limit_{e-->0} x_e
where
x_e := argmin_{x} J(x;e)
where
J(x;e) := res(x)^2 + e * norm(x)^2
It can be shown that
x_e = (A'A + e*I)^{-1} A' b (eqn a)
It should be appreciated that this characterization of x2 is quite magical. The limit exists even if (A'A)^{-1} does not. And the limit somehow preserves priority (2) from above.
Using e>0
Of course, for finite (but small) e, x_e will not minimize res(x) (instead it minimizes J(x;e)). In your terminology, the difference is the threshold. I will rename it to
gap := res(x_e) - min_{x} res(x).
Decreasing the value of e is guaranteed to decrease the value of the gap. Reaching a specific gap value (i.e. the threshold) is therefore easy to achieve by tuning e.**
This type of modification (adding norm(x) to the res(x) minimization problem) is known as regularization in the statistics literature (this particular form is Tikhonov regularization, a.k.a. ridge regression), and is generally considered a good idea for stability (numerically and with respect to parameter values).
*: Note that x1 and x2 only differ in the underdetermined case
**: It does not even require any heavy computations, because the inverse in (eqn a) is easy to compute for any (positive) value of e once the SVD of A has been computed.
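To make this concrete, here is a sketch (mine, not part of the answer above; it assumes A, b and threshold are in scope and that A has full column rank, so e = 0 is safe to evaluate) that computes x_e from a single SVD and bisects on e until the gap matches the threshold:

[U, S, V] = svd(A, 'econ');
s = diag(S);
c = U' * b;
xe  = @(e) V * ((s ./ (s.^2 + e)) .* c);   % x_e = (A'A + e*I)^{-1} A' b
res = @(x) norm(A*x - b);
D = res(xe(0));                            % best attainable residual
% (if norm(b) <= D + threshold, x = 0 is already feasible and optimal)
lo = 0; hi = 1;                            % invariant: xe(lo) stays feasible
while res(xe(hi)) <= D + threshold
    hi = 2*hi;                             % grow hi until it violates
end
for it = 1:60                              % bisect for the largest feasible e
    mid = (lo + hi)/2;
    if res(xe(mid)) <= D + threshold, lo = mid; else, hi = mid; end
end
x = xe(lo);                                % small-norm x within the tolerance

Larger e shrinks norm(x_e) and grows the residual monotonically, so the largest feasible e gives the smallest norm(x) among the x_e family.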

Understanding how to count FLOPs

I am having a hard time grasping how to count FLOPs. One moment I think I get it, and the next it makes no sense to me. Some help explaining this would be greatly appreciated. I have looked at all the other posts about this topic and none explain it completely in a programming language I am familiar with (I know some MATLAB and FORTRAN).
Here is an example, from one of my books, of what I am trying to do.
For the following piece of code, the total number of flops can be written as (n*(n-1)/2)+(n*(n+1)/2) which is equivalent to n^2 + O(n).
[m,n] = size(A);
nb = n+1;
Aug = [A b];                        % augmented matrix [A | b]
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);          % solve for the last unknown directly
for i = n-1:-1:1                    % back substitution
    x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
I am trying to apply the same principle above to find the total number of FLOPs as a function of the number of equations n in the following code (MATLAB).
% e = subdiagonal vector
% f = diagonal vector
% g = superdiagonal vector
% r = right hand side vector
% x = solution vector
n = length(f);
% forward elimination
for k = 2:n
    factor = e(k)/f(k-1);
    f(k) = f(k) - factor*g(k-1);
    r(k) = r(k) - factor*r(k-1);
end
% back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k)*x(k+1))/f(k);
end
I'm by no means expert at MATLAB but I'll have a go.
I notice that none of the lines of your code index ranges of your vectors. Good; that means every operation I see before me involves a single pair of numbers. So I think the first loop is 5 FLOPs per iteration, and the second is 3 per iteration. And then there's that single operation in the middle.
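To make that tally concrete, here is your code again with a hand-rolled counter (my sketch; only arithmetic on the vector data is counted, and e, f, g, r are assumed given):

nflops = 0;
n = length(f);
for k = 2:n                            % forward elimination: 5 per pass
    factor = e(k)/f(k-1);              % 1 flop
    f(k) = f(k) - factor*g(k-1);       % 2 flops
    r(k) = r(k) - factor*r(k-1);       % 2 flops
    nflops = nflops + 5;
end
x(n) = r(n)/f(n);                      % the single operation in the middle
nflops = nflops + 1;
for k = n-1:-1:1                       % back substitution: 3 per pass
    x(k) = (r(k) - g(k)*x(k+1))/f(k);  % 3 flops
    nflops = nflops + 3;
end
fprintf('counted %d flops; 8n - 7 = %d\n', nflops, 8*n - 7);

So on this minimal count the total is 5(n-1) + 1 + 3(n-1) = 8n - 7, i.e. O(n).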
However, MATLAB stores everything by default as a double. So the loop variable k is itself being operated on once per loop and then every time an index is calculated from it. So that's an extra 4 for the first loop and 2 for the second.
But wait - the first loop computes 'k-1' three times, so in theory one could optimise that a bit by calculating and storing it, reducing the number of FLOPs by two per iteration. The MATLAB interpreter is probably able to spot that sort of optimisation for itself. And for all I know it can work out that k could in fact be an integer and everything would still be okay.
So the answer to your question is that it depends. Do you want to know the number of FLOPs the CPU does, or the minimum number expressed in your code (ie the number of operations on your vectors alone), or the strict number of FLOPs that MATLAB would perform if it did no optimisation at all? MATLAB used to have a flops() function to count this sort of thing, but it's not there anymore. I'm not an expert in MATLAB by any means, but I suspect that flops() has gone because the interpreter has gotten too clever and does a lot of optimisation.
I'm slightly curious to know why you wish to know. I used to use flops() to count how many operations a piece of maths did, as a crude way of estimating how much computing grunt I'd need to make it work in real time when written in C.
Nowadays I look at the primitives themselves (eg there's a 1k complex FFT, that'll be 7us on that CPU according to the library datasheet, there's a 2k vector multiply, that'll be 2.5us, etc). It gets a bit tricky because one has to consider cache speeds, data set sizes, etc. The maths libraries (eg fftw) themselves are effectively opaque so that's all one can do.
So if you're counting the FLOPs for that reason you'll probably not get a very good answer.

Minimization of L1-Regularized system, converging on non-minimum location?

This is my first post to Stack Overflow, so if this isn't the correct area I apologize. I am working on minimizing an L1-regularized system.
This weekend was my first dive into optimization. I have a basic linear system Y = X*B, where X is an n-by-p matrix, B is a p-by-1 vector of model coefficients, and Y is an n-by-1 output vector.
I am trying to find the model coefficients, and I have implemented both gradient descent and coordinate descent algorithms to minimize the L1-regularized system. To find my step size I am using the backtracking algorithm, and I terminate the algorithm by looking at the norm-2 of the gradient, stopping when it is 'close enough' to zero (for now I'm using 0.001).
The function I am trying to minimize is the following: (0.5)*(norm((Y - X*B),2)^2) + lambda*norm(B,1). (Note: by norm(Y,2) I mean the norm-2 value of the vector Y.) My X matrix is 150-by-5 and is not sparse.
If I set the regularization parameter lambda to zero I should converge on the least squares solution, I can verify that both my algorithms do this pretty well and fairly quickly.
If I start to increase lambda, my model coefficients all tend towards zero, which is what I expect. My algorithms never terminate, though, because the norm-2 of the gradient is always a positive number. For example, a lambda of 1000 gives me coefficients in the 10^(-19) range, but the norm-2 of my gradient is ~1.5, and this is after several thousand iterations. While my gradient values all converge to something in the 0 to 1 range, my step size becomes extremely small (10^(-37) range). If I let the algorithm run for longer, the situation does not improve; it appears to have gotten stuck somehow.
Both my gradient and coordinate descent algorithms converge on the same point and give the same norm2(gradient) number for the termination condition. They also work quite well with a lambda of 0. If I use a very small lambda (say 0.001) I get convergence; a lambda of 0.1 looks like it would converge if I ran it for an hour or two; with any greater lambda the convergence rate is so small it's useless.
I have a few questions that I think might relate to the problem:
In calculating the gradient I am using a finite difference method, (f(x+h) - f(x-h))/(2h), with h = 10^(-5). Any thoughts on this value of h?
Another thought was that at these very tiny steps it is traveling back and forth in a direction nearly orthogonal to the minimum, making the convergence rate so slow it is useless.
My last thought was that perhaps I should be using a different termination method, perhaps looking at the rate of convergence, if the convergence rate is extremely slow then terminate. Is this a common termination method?
The 1-norm isn't differentiable. This will cause fundamental problems with a lot of things, notably the termination test you chose; the gradient will change drastically around your minimum and fail to exist on a set of measure zero.
The termination test you really want will be along the lines of "there is a very short vector in the subgradient."
It is fairly easy to find the shortest vector in the subgradient of ||Ax-b||_2^2 + lambda ||x||_1. Choose, wisely, a tolerance eps and do the following steps:
Compute v = grad(||Ax-b||_2^2).
If x[i] < -eps, then subtract lambda from v[i]. If x[i] > eps, then add lambda to v[i]. If -eps <= x[i] <= eps, then add the number in [-lambda, lambda] to v[i] that minimises |v[i]|.
You can do your termination test here, treating v as the gradient. I'd also recommend using v for the gradient when choosing where your next iterate should be.
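For reference, that test is only a few lines of MATLAB (my sketch; A, b, x, lambda and the tolerances eps_tol and tol are assumed to be in scope, and the smooth part is taken as ||Ax-b||_2^2 as written above):

v = 2*A'*(A*x - b);                    % gradient of ||Ax - b||_2^2
for i = 1:length(x)
    if x(i) < -eps_tol
        v(i) = v(i) - lambda;          % subgradient of lambda*|x(i)| is -lambda
    elseif x(i) > eps_tol
        v(i) = v(i) + lambda;          % ... is +lambda here
    else                               % x(i) ~ 0: any t in [-lambda, lambda]
        v(i) = v(i) + max(-lambda, min(lambda, -v(i)));  % t minimising |v(i)+t|
    end
end
done = norm(v) < tol;                  % stop when the shortest subgradient is short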