Is the amortized cost for increment and decrement binary counters the same? - amortized-analysis

Is the amortized cost for an increment binary counter and a decrement binary counter the same, i.e. O(n) for n increments or n decrements?
Also, can somebody explain how to calculate the amortized cost of a binary decrement counter using the potential method?
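For reference, here is a minimal sketch (in Python, not from the question; the representation and function names are my own) of the two operations being asked about. Increment flips the trailing 1-bits to 0 and one 0-bit to 1; decrement flips the trailing 0-bits to 1 and one 1-bit to 0. The number of bit flips per call is the cost an amortized analysis would account for.

# Minimal sketch of a binary counter supporting increment and decrement.
# The returned value is the number of bit flips performed by the call.

def increment(bits):
    """bits is a list of 0/1, least significant bit first."""
    flips = 0
    i = 0
    # Turn trailing 1s into 0s...
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        flips += 1
        i += 1
    # ...then set the first 0 to 1 (overflow is ignored for simplicity).
    if i < len(bits):
        bits[i] = 1
        flips += 1
    return flips

def decrement(bits):
    flips = 0
    i = 0
    # Turn trailing 0s into 1s...
    while i < len(bits) and bits[i] == 0:
        bits[i] = 1
        flips += 1
        i += 1
    # ...then clear the first 1 (assumes the counter is not already zero).
    if i < len(bits):
        bits[i] = 0
        flips += 1
    return flips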

What’s the performance difference between moving(sum, X, 10) and msum(X, 10) and what causes the difference?

For a vector of length 1 million, what’s the performance difference between moving(sum, X, 10) and msum(X, 10) and what causes the difference?
The function msum is typically 50 to 200 times faster than the moving function; the exact ratio varies with the data volume. The reasons are as follows:
The two functions process data differently: msum loads the data into memory once and does not need to allocate memory separately for each calculation, while moving generates a sub-object for each calculation, allocates memory for that sub-object, and reclaims the memory after the calculation completes.
msum implements incremental computation: each calculation takes the previous result, adds the value entering the window, and subtracts the value leaving the window, while moving adds up all the data in the window for every calculation.
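To illustrate the incremental idea described above, here is a hedged Python sketch (not the DolphinDB implementation of msum; the function names are mine): the rolling sum reuses the previous window's result instead of re-adding every element in the window.

# Rolling sum over a window of size w, two ways.
# Full recomputation: O(w) work per output value (analogous to moving(sum, X, w)).
def rolling_sum_naive(x, w):
    return [sum(x[i - w + 1:i + 1]) for i in range(w - 1, len(x))]

# Incremental: add the element entering the window, subtract the one leaving,
# so each output after the first costs O(1) (analogous to msum(X, w)).
def rolling_sum_incremental(x, w):
    out = [sum(x[:w])]
    for i in range(w, len(x)):
        out.append(out[-1] + x[i] - x[i - w])
    return out

assert rolling_sum_naive(list(range(20)), 10) == rolling_sum_incremental(list(range(20)), 10)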

Amortization complexity when resizing arrays by a constant?

I know that when you grow an array by a multiplicative factor (like doubling its length, then copying all elements into the new, bigger array), the amortized time complexity of insertion is O(1).
But why is it not O(1) as well when you grow it by a constant (say, by +10 elements each time)?
Edit: https://www.cs.utexas.edu/~slaberge/docs/topics/amortized/dynamic_arrays/ this site seems to explain it, but I am very confused by the math. Where does big $N$ come from? I thought we were dealing with $k$?
If every k-th insertion (after the array fills up) costs as much as the number of elements already in the array (denote that count by n + N*k, where n is the initial size of the array and N is the number of resizes so far), then you get sequences of this type:
n O(1) Operations
Expensive operation of O(n)
k O(1) Operations
Expensive operation of O(n+k)
k O(1) Operations
Expensive operation of O(n+2k)
k O(1) Operations
Expensive operation of O(n+3k)
See where this is going? Each expensive insertion happens every k insertions (except the first time) and costs as much as the current number of elements.
This means that after, let's simplify, n + A*k insertions we made A copies of the initial n elements, plus A-1 copies of the first set of k elements, A-2 copies of the second set of k elements, and so on.
This sums up to O(A*n + A^2 * k) total work. And because we performed n + A*k insertions, we can divide to get the amortized cost per insertion.
This gives us (A*n + A^2 * k) / (n + A*k) = Θ(A).
So the amortized cost depends on A, and hence on the NUMBER OF INSERTIONS, which is bad: we cannot say that this array does a constant amount of work per insertion on average.
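As an illustration of the answer above, here is a minimal Python sketch (the function name and the cost model, counting one unit per copied element, are my own). With growth by a fixed constant k, the total copy cost grows roughly quadratically, so the average cost per insertion keeps growing with the number of insertions:

# Sketch: count element copies when a dynamic array grows by a fixed
# constant k instead of doubling.
def copy_cost_additive_growth(num_insertions, k=10, initial_capacity=10):
    capacity, size, copies = initial_capacity, 0, 0
    for _ in range(num_insertions):
        if size == capacity:
            copies += size      # copy every existing element into the new array
            capacity += k       # grow by a constant
        size += 1
    return copies

for n in (10_000, 100_000, 1_000_000):
    total = copy_cost_additive_growth(n)
    print(n, total / n)   # the per-insertion (amortized) copy cost keeps growing with n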

Hash table O(1) amortized or O(1) average amortized?

This question may seem a bit pedantic, but I've been really trying to dive deeper into amortized analysis and am a bit confused as to why insert for a hash table is O(1) amortized. (Note: I'm not talking about table doubling; I understand that.)
Using this definition, "Amortized analysis gives the average performance (over time) of each operation in the worst case," it seems like the worst case for N inserts into a hash table would result in a collision on every operation. I believe universal hashing guarantees collisions at a rate of 1/m when the load factor is kept low, but isn't it still theoretically possible to get a collision on every insert?
It seems like, technically, the right statement is that the hash table's insert is O(1) average amortized.
Edit: You can assume the hash table uses basic chaining, where the element is placed at the end of the corresponding linked list. The real meat of my question concerns amortized analysis of probabilistic algorithms.
Edit 2:
I found this post on quicksort:
"Also there’s a subtle but important difference between amortized running time and expected running time. Quicksort with random pivots takes O(n log n) expected running time, but its worst-case running time is in Θ(n^2). This means that there is a small possibility that quicksort will cost Θ(n^2) dollars, but the probability that this will happen approaches zero as n grows large." I think this probably answers my question.
You could theoretically get a collision on every insert, but that would mean you had a poorly performing hash function that failed to spread values across the "buckets" for keys. A theoretically perfect hash function would always put a new value into a new bucket, so that each key would refer to its own bucket. (I am assuming a chained hash table and referring to the chain field as a "bucket", just how I was taught.) A theoretically worst-case function would stick all keys into the same bucket, leading to a chain in that bucket of length N.
The idea behind the amortization is that, given a reasonably good hash function, you end up with linear total time for N inserts (i.e. O(1) amortized per insert), because the number of times an insertion costs more than O(1) is greatly dwarfed by the number of times an insertion is simple and O(1). That is not to say that insertion is without any calculation (the hash function still has to be computed, and in some special cases hash functions can be more computation-heavy than just looking through a list).
At the end of the day this brings us to an important concept in big-O, which is the idea that when calculating time complexity you need to look at the most frequently executed action. In this case that is the insertion of a value that does not collide with another hash.
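For reference, here is a minimal Python sketch (my own illustration, with lists standing in for the linked-list chains) of the chained hash table described above: each insert walks the chain for the key's bucket, so a single insert costs O(1) plus the length of that chain.

# Minimal chained hash table: each bucket holds a list ("chain") of (key, value)
# pairs; an insert walks the chain before appending at the end.
class ChainedHashTable:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):
            if k == key:               # key already present: overwrite
                chain[i] = (key, value)
                return
        chain.append((key, value))     # collision or empty bucket: append to the chain

    def lookup(self, key):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for k, v in chain:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.insert("a", 1)
t.insert("b", 2)
print(t.lookup("a"), t.lookup("missing"))   # 1 None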

How is O(n)/n=1 in aggregate method of amortized analysis

How is O(n)/n = 1 in the aggregate method of amortized analysis, as given in the Coursera course on data structures, lesson 5 (Amortized Analysis: Aggregate Method)?
Short answer
O(n)/n = cn/n = c = O(1)
Long answer
We use amortized analysis in order to analyze the cost of a sequence of operations rather than the cost of a single operation. In the latter case we use asymptotic analysis (some of the asymptotic notations are Theta, Big O, Big Omega, Little O and Little Omega), but it doesn't work that well when we come across a sequence of operations and want to understand the cost of that sequence.
The reason is that if we apply "regular" asymptotic analysis, the worst-case upper bound can be too pessimistic. The classic example is inserting into a dynamic array: you insert elements into a dynamically allocated array, and when it is full, you allocate a new array (twice as big, for example) and copy all the elements over. Most of the insertions work in constant time (O(1)), but when you need to reallocate the array, that insertion takes linear time (O(n)), because you need to copy all the elements.
So imagine that you insert n elements and need to reallocate the array only once. You have n operations, each of which is O(n) in the worst case, so the worst-case cost of the sequence of operations comes out as O(n^2). That seems too pessimistic, considering that most of your operations are O(1) in the worst case and only one of them is O(n).
We define the amortized cost per operation as (cost of n operations) / n. In your case the cost of n operations is O(n), which is at most cn (where c is some constant) just by the definition of Big O notation; divide it by n and you get just c, which is O(1) because, once again, c is just some constant.
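As a sanity check of the aggregate method, here is a hedged Python sketch under a simple cost model of my own (one unit per element written plus one unit per element copied): the total cost of n appends into a doubling array stays below 3n, so total cost divided by n is O(1).

# Aggregate method, empirically: total cost of n appends into a doubling
# dynamic array, counting 1 unit per element written plus 1 per element copied.
def total_append_cost(n, initial_capacity=1):
    capacity, size, cost = initial_capacity, 0, 0
    for _ in range(n):
        if size == capacity:
            cost += size        # copy all existing elements to the new array
            capacity *= 2       # double the capacity
        cost += 1               # write the new element
        size += 1
    return cost

for n in (1_000, 100_000, 1_000_000):
    print(n, total_append_cost(n) / n)   # stays below 3, i.e. O(1) amortized per append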

Number of comparisons during closed-address hashing?

Initially, all entries in the hash table are empty lists.
All elements with hash address i will be inserted into the linked list h[i]. If there is a collision while hashing keys, the key is added to the end of that linked list.
For the average case of a successful search, do I count the comparison that checks whether h[i] is null? If it is null, the linked list is empty and the search should return "not found". In terms of complexity, should that count as 1 comparison or 0 comparisons?
Sorry for the basic question; I'm still learning algorithm complexity.
For "big-O" complexity it just doesn't matter, as there is no such thing as "O(2N+1)" complexity (from counting element and pointer comparisons) - it simplifies to O(N), where N is the number of elements in the bucket h[i]. Alternatively, you might say the average big-O complexity across buckets is O(N) where N is size / buckets, aka load factor.
If you're not doing big-O complexity analysis, we can't really tell you what you want to count. I would point out that comparisons of pointers to nullptr are much cheaper than object comparison involving an extra level of indirection or scanning along a large object (e.g. std::string objects too long for any Short-String-Optimisation buffer), so can often be neglected.
If in doubt as to what's wanted, I'd suggest you report the comparisons as in "searching for an element that's not present involves N object value comparisons and N+1 pointer comparisons, where N is the number of elements chained from h[i]".
If you must give just one expression (for example, some computerised multiple-choice test), I'd suggest a count of element comparisons is likely the desired answer - the number of value comparisons (i.e. 0 for an empty hash bucket), as it's most common to be interested in the complexity as a function of the number of data elements.
0 comparisons. If at h[i] you see a list with one entry and it is a hit (since you are analyzing successful search), that would be 1 comparison, and so on.
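To make the counting concrete, here is a small Python sketch (my own illustration, using a list for the chain) that reports both the element value comparisons and the node/pointer checks made while searching one bucket, in the spirit of the first answer above:

# Count the two kinds of comparisons made when searching bucket h[i]:
# element value comparisons, and "is there another node?" checks (the
# pointer/null checks of a linked-list implementation).
def search_bucket(bucket, key):
    value_comparisons = 0
    node_checks = 0
    for element in bucket:
        node_checks += 1          # check that a node exists before reading it
        value_comparisons += 1    # compare the stored key with the target
        if element == key:
            return True, value_comparisons, node_checks
    node_checks += 1              # the final check that hits the end (null)
    return False, value_comparisons, node_checks

print(search_bucket([], 42))          # (False, 0, 1): empty bucket, 0 value comparisons
print(search_bucket([7, 42, 9], 42))  # (True, 2, 2): found at the second element

For an unsuccessful search this reproduces the "N value comparisons and N+1 pointer comparisons" breakdown suggested above, and for an empty bucket it shows why the answer "0 comparisons" refers to value comparisons only.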