What is the difference between O(1) and O(N) in Big-O notation? [closed] - swift

I need a clear-cut understanding. We know a constant-time method is O(1) and a linear-time method is O(N). Please briefly explain Type-1 and Type-2 below, what the difference between them is, and why Type-1 is O(1) while Type-2 is O(N). Thanks in advance.
Type-1:
func printLoop() {
    for p in 0..<100 {
        print(p)
    }
}
printLoop()
Type-2:
func printLoop(n: Int) {
    for p in 0..<n {
        print(p)
    }
}
printLoop(n: 100)

The difference here has to do with semantics. In fact, if you pass n = 100 to the second function, both versions take the same time to run. The first printLoop() executes a fixed number of prints, and so is said to have a constant O(1) running time. The second version loops n times, and therefore has an O(n) running time that depends on its input.

Type-1 is O(1) because there is no input parameter to the method, so you can't change the time it takes to complete. (If the Type-1 function takes 1 second to complete, it always takes 1 second, no matter where or how you use it.)
Type-2 is O(n) because if you pass it 100, it may take 1 second to complete (as assumed above), but if you pass 200, it will take twice as long. It is called "linear time" because the cost grows as a linear function: Y = alpha * X, where X is the number of loop iterations and Y is the time the Type-2 function takes to complete its operation.
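To make that linear relationship concrete, here is a small Swift sketch (the function name and the idea of counting iterations are only for illustration, not part of the question): the amount of work the Type-2 style loop does grows in direct proportion to n.

func workDone(n: Int) -> Int {
    var operations = 0
    for _ in 0..<n {
        operations += 1   // one unit of work per loop iteration
    }
    return operations
}

print(workDone(n: 100))   // 100
print(workDone(n: 200))   // 200 -- double the input, double the work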

Related

Hash functions and polynomial division [closed]

I understand that a CRC verifies data integrity by producing a checksum, which is the result of polynomial long division. I've heard hash values referred to as hash checksums, so my question is whether hash functions use some sort of polynomial division as well? I know they break the data up into block ciphers, so my guess would be that the hash functions create some relationship between the polynomial check value and how it's divided into the different blocks. Can someone let me know if I'm way off base here?
A CRC is a hash function, but there are many other ways to implement a hash function. The other ways generally do not use polynomial division, though there are some that use a CRC as a part of the hash calculation, in order to make use of hardware CRC instructions. Most hash functions use a long, convoluted series of ands, nots, exclusive-ors, integer additions, multiplications, and modulos.
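As an illustration of that "series of exclusive-ors and multiplications" style (not the specific hash the answer has in mind), here is a Swift sketch of FNV-1a, a simple non-cryptographic hash:

func fnv1a(_ bytes: [UInt8]) -> UInt64 {
    var hash: UInt64 = 0xcbf29ce484222325   // FNV-1a 64-bit offset basis
    let prime: UInt64 = 0x100000001b3       // FNV-1a 64-bit prime
    for byte in bytes {
        hash ^= UInt64(byte)   // exclusive-or in the next byte
        hash = hash &* prime   // wrapping multiply to mix the bits
    }
    return hash
}

print(String(fnv1a(Array("hello".utf8)), radix: 16))

There is no polynomial division anywhere in it, which is the answer's point.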

What's the difference between NOT second preimage resistant and NOT collision resistant [closed]

By definition, Not 2nd-preimage resistant means: there exists at least one x (which is known) such that it is easy to find another x', such that h(x) = h(x').
Not collision resistant, on the other hand, means: it is easy to find at least one pair (x, x') such that h(x) = h(x').
I don't see any difference here; can anyone explain it? Or am I using the wrong definitions?
And, it is said that "Not collision resistant not necessarily means Not 2nd-preimage resistant", why is that?
Putting this into another answer because it's just too much to type for a comment.
The definition of 2nd-preimage-resistant is you have h(x) and x, and can't create x'.
The definition of preimage-resistant (without second!) means you have only h(x), and can't create x.
And the definition of collision resistant is: you start with nothing, and may freely choose both x and x' (and therefore h(x)).
If you use the hash to sign a plaintext message, you need 2nd-preimage resistance, but not collision resistance. It doesn't matter to you if someone can find two colliding messages whose hash is different from yours, but you want to make sure no one is able to craft a different message that has the same hash as yours, even if they know your plaintext.
If you use the hash to store hashed passwords, you don't care about collision resistance, and you don't care about 2nd-preimage resistance; preimage resistance is all you need. If an attacker already knows one password, you don't really care whether they can use it to find a different one with the same hash.
So these were two examples where collision resistance is not required, but preimage-resistance or 2nd-preimage-resistance is.
As to "Not collision resistant not necessarily means Not 2nd-preimage resistant", why is that? , consider the hash function if x has less then 24 bits, then h(x)=0, else h(x)=sha256(x). This is very obviously not collision resistant (choose any 2 words that have less than 4 letters), but, as long as your text is longer, this function is preimage-resistant and 2nd-preimage-resistant (assuming sha256 hasn't been broken yet).
2nd preimage resistant means there is no (easy) way to find a second input x' when you have only h(x), and possibly x itself.
Not collision resistant means there is an (easy) way to find some pair (x, x') with h(x) = h(x'), where the attacker is free to pick both inputs.
So the second is the weaker attack. Think about what happened to MD5 a while ago: there is an algorithm that finds pairs of inputs that produce the same output. But it works only for specifically constructed input, not for arbitrary input. So, while it is possible to find messages that collide, the generic problem "x is some specific message; find a second message that has the same MD5 as x" has not been solved.

Ways to speed up a MATLAB application [closed]

I have a question about speeding up an application built in MATLAB. I need to know the effect of using vectorization and parallel computation on speeding up the application, and whether there is a better method than either of those in such a case. Thanks.
The first thing you need to do when your MATLAB code runs too slowly is to run it in the profiler. In recent versions of MATLAB, this can be done by pressing the "Run and Time" button on the main toolbar. This way, you will know which functions, and which lines in those functions, take up the most time. Once you know this, you may do one of the following, depending on your circumstances and the nature of the particular piece of code:
Think if your algorithm is the most optimal one in terms of O() complexity.
Try turning loops into vector operations. The efficacy of this has declined in recent versions of MATLAB because of improvements in how loops are executed.
If you have a multi-core CPU, try using the Parallel Computing Toolbox. If your code parallelizes well, you will get a speed-up nearly equal to the number of cores.
If you have an NVIDIA GPU, try using the GPU support. You can get a speed-up by a factor of 10 or more on some problems, but not all problems are amenable to this sort of optimization.
If everything else fails, you may outsource the slowest piece of your code to a low level language like C. See here for how to do this. You could then use low-level profiling tools like Intel vTune to get the absolute maximum speed from the low-level code.
If it is still too slow, you may need to buy an FPGA. See here for a brief tutorial.

Naive question about running time for a very large counter

I have a naive question about the maximal size of a counter. For example, the following code couldn't be done in a reasonable time, because it needs at least 2^512 arithmetic operations, or, more essentially, it needs to change the value of i 2^512 times!
c = 2 to the power 512;
j = 0;
for (i = 1; i < c; i++) {
    j = j + 1 / (i * i + 1);
}
But when I use the computer algebra software Mathematica, it gives me the answer in less than one second. My question is: how can it achieve this?
P.S. My naive idea about the size of the counter comes from my understanding of complexity. When I read some books about complexity (not too formal, because they focus on the cost of arithmetic operations only), they always omit the cost of the index. I can imagine this only if the counter is small.
At a guess, as your loop termination condition is fixed at 2^512, Mathematica might be able to treat this as a summed geometric sequence and so calculate it using a formula rather than having to iterate through all the loop values.
Take a look at the Wikipedia entry on Geometric Progression and the Wolfram page on Geometric Series.
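For reference, the closed form that the geometric-series guess relies on (whether Mathematica applies this exact identity here is only the answerer's speculation) is:

a + a*r + a*r^2 + ... + a*r^(n-1) = a * (r^n - 1) / (r - 1), for r != 1

so the whole sum costs a handful of operations instead of n - 1 additions.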
If this was in a normal programming language e.g. like C++, Java or C#, you'd be absolutely right! Also, 2^512 is a very large number and would overflow the "normal" datatypes in those languages.
Assuming you mean 2 to the power of 512 and not 2 xor 512 (which is 514).

Quicksort (Java) [closed]

Let's say you have an array of size n with randomly generated elements and you want to use quicksort to sort it. For large enough n (say 1,000,000), in order to speed up quicksort, it would make sense to stop recursing when the subarray gets small enough and use insertion sort instead. In such an implementation, the base case for quicksort is some value base > 1. What would be the optimal base value to choose, and why?
Think about the time complexity of quicksort (average and worst case) and the time complexity of other sorts that might do better for small n.
Try starting with Wikipedia - it has good starting info about comparing the two algorithms. When you have a more specific question, feel free to come back.
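For a concrete picture of the hybrid the question describes, here is a minimal Swift sketch (Swift rather than Java, to match the first question's code; the default cutoff of 16 is only a placeholder that you would have to tune by measurement):

func hybridSort(_ a: inout [Int], _ lo: Int, _ hi: Int, cutoff: Int = 16) {
    guard lo < hi else { return }
    if hi - lo + 1 <= cutoff {
        // Small range: insertion sort has low constant factors here.
        for i in (lo + 1)...hi {
            let key = a[i]
            var j = i - 1
            while j >= lo && a[j] > key {
                a[j + 1] = a[j]
                j -= 1
            }
            a[j + 1] = key
        }
        return
    }
    // Larger range: Lomuto partition around the last element, then recurse.
    let pivot = a[hi]
    var i = lo
    for j in lo..<hi where a[j] < pivot {
        a.swapAt(i, j)
        i += 1
    }
    a.swapAt(i, hi)
    hybridSort(&a, lo, i - 1, cutoff: cutoff)
    hybridSort(&a, i + 1, hi, cutoff: cutoff)
}

var numbers = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
hybridSort(&numbers, 0, numbers.count - 1, cutoff: 4)
print(numbers)   // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

With recursion stopping at the cutoff, insertion sort's O(n^2) worst case only ever applies to tiny subarrays, while quicksort's O(n log n) average behaviour handles the bulk of the work.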