In a circular queue represented by an array, how can one specify the number of elements in the queue in terms of “front”, “rear” and MAX-QUEUE-SIZE? Write a “C” function to delete the K-th element from the “front” of a circular queue.
What does “delete the K-th element” mean?
The number of elements in the queue will be
(rear - front + MAX_QUEUE_SIZE) % MAX_QUEUE_SIZE
rear - front gives the difference between the two index positions, which tells us how many elements lie between them. Since this is a circular queue, rear can also be smaller than front, so we add MAX_QUEUE_SIZE before taking the modulus to keep the result non-negative (in C, % can return a negative value for a negative operand).
Will update the post with solutions to the other two too!
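For illustration, here is a minimal C sketch of both parts, assuming the classic convention where front sits just before the first element and rear sits on the last one (the struct and function names are only illustrative):

```c
#define MAX_QUEUE_SIZE 8

typedef struct {
    int items[MAX_QUEUE_SIZE];
    int front;  /* index just before the first element (classic convention) */
    int rear;   /* index of the last element */
} CircularQueue;

/* Number of elements currently in the queue.
   One slot stays unused: the queue is full when (rear + 1) % MAX_QUEUE_SIZE == front. */
int queue_size(const CircularQueue *q)
{
    return (q->rear - q->front + MAX_QUEUE_SIZE) % MAX_QUEUE_SIZE;
}

/* Delete the k-th element (1-based) counted from the front.
   Elements behind it are shifted forward one slot to close the gap.
   Returns 0 on success, -1 if k is out of range. */
int delete_kth(CircularQueue *q, int k)
{
    int n = queue_size(q);
    if (k < 1 || k > n)
        return -1;

    /* Index of the k-th element from the front. */
    int i = (q->front + k) % MAX_QUEUE_SIZE;

    /* Shift the remaining elements toward the front. */
    while (i != q->rear) {
        int next = (i + 1) % MAX_QUEUE_SIZE;
        q->items[i] = q->items[next];
        i = next;
    }
    q->rear = (q->rear - 1 + MAX_QUEUE_SIZE) % MAX_QUEUE_SIZE;
    return 0;
}
```

Deleting the K-th element this way costs O(n - K) moves, since everything behind it has to shift forward one slot.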
Does anyone know the original hash table implementation?
Every realization I've found is based on separate chaining or open addressing methods
Chaining, by Hans Peter Luhn, in 1953.
https://en.wikipedia.org/wiki/Hash_table#History
The first implementation, if not the most common, is probably the one that uses an array (resized as needed) in which each entry points to a list of elements.
The hash code, taken mod the size of the array, gives the index of the entry whose list should contain the element being searched for. When hash codes collide, the elements accumulate in that entry's list.
So, once the hash code is computed, we have O(1) to reach the array entry and O(N) for the actual search of the element within its list by checking real equality. N must be kept low, for obvious performance reasons.
If collisions become too frequent, the array is resized to a larger number of entries, which reduces collisions accordingly, because the hash code is now taken mod a larger number than before.
Some more sophisticated implementations convert a list into a tree when it grows too long, improving the equality search from O(N) to O(log N).
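For concreteness, here is a minimal C sketch of the lookup path in such a chained table; the struct layout, names, and hash function (djb2 here) are illustrative, not taken from any particular implementation:

```c
#include <string.h>

/* One node in a bucket's linked list. */
typedef struct Node {
    char *key;
    int value;
    struct Node *next;
} Node;

typedef struct {
    Node **buckets;   /* array of list heads, resized as needed */
    size_t nbuckets;
} HashTable;

/* Any reasonable string hash works here; djb2 is a common choice. */
static unsigned long hash(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* O(1) to reach the bucket, O(N) to walk its list comparing keys. */
int *lookup(const HashTable *t, const char *key)
{
    size_t i = hash(key) % t->nbuckets;
    for (Node *n = t->buckets[i]; n != NULL; n = n->next)
        if (strcmp(n->key, key) == 0)
            return &n->value;
    return NULL;
}
```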
I am trying to do Dijkstra's shortest path on a grid. Right now I have it working, but I do have some confusion. Say I am looking at a cell in the grid: I evaluate it and then push all of its neighbors onto the heap if they have not been evaluated. The issue I run into is that I end up with multiple instances of the same cell in the heap, which really bogs down the process. To fix this, I set it to not push a cell onto the heap if it's already in the heap. Is this a correct approach, or could it lead to issues? This is over an unweighted grid.
If it's Dijkstra's, you don't have just a heap, you have a priority queue (a heap is one common way to implement one).
You also have to store the shortest distance at which you have reached each node. If you arrive at a node again with a higher distance, you simply ignore the arrival. If you arrive with a shorter distance, you have to update the node's priority in the queue accordingly instead of inserting it twice (if your queue's implementation doesn't support changing a priority, you just remove the node and reinsert it with the new priority).
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
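To illustrate the "ignore the arrival" part, here is a minimal C sketch of the common lazy-deletion variant, where an improved entry is pushed again and stale entries are skipped when popped instead of having their priority changed in place. The queue is a deliberately naive unsorted array (no overflow checks), and the node numbering and sizes are only illustrative:

```c
#include <limits.h>
#include <stdbool.h>

/* Deliberately naive priority queue: an unsorted array scanned for the
   minimum on each pop. A binary heap does the same job in O(log n). */
typedef struct { int node; int dist; } PQItem;

static PQItem pq[1 << 16];
static int pq_len = 0;

static void pq_push(PQItem item) { pq[pq_len++] = item; }

static bool pq_pop_min(PQItem *out)
{
    if (pq_len == 0)
        return false;
    int best = 0;
    for (int i = 1; i < pq_len; i++)
        if (pq[i].dist < pq[best].dist)
            best = i;
    *out = pq[best];
    pq[best] = pq[--pq_len];    /* swap-remove */
    return true;
}

#define NNODES 1024
static int dist[NNODES];        /* best distance found so far for each node */

/* Relax edge (u -> v) of weight w; on an unweighted grid w is always 1. */
void relax(int u, int v, int w)
{
    if (dist[u] != INT_MAX && dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        /* Push a new entry instead of updating the old one in place;
           the stale, more expensive copy is simply skipped when popped. */
        pq_push((PQItem){ v, dist[v] });
    }
}

void dijkstra(int source)
{
    for (int i = 0; i < NNODES; i++)
        dist[i] = INT_MAX;
    dist[source] = 0;
    pq_push((PQItem){ source, 0 });

    PQItem top;
    while (pq_pop_min(&top)) {
        if (top.dist > dist[top.node])
            continue;           /* stale entry: node already settled more cheaply */
        /* ...call relax(top.node, v, 1) for each grid neighbor v here... */
    }
}
```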
Is there a term for the process of creating an ID, based on connectivity information, that helps identify elements as matching based on their neighbors?
This can be as simple as looping over items in an array and accumulating next and previous items (possibly with bit-shifting, xor... etc).
Another example is using an order independent hash based on nodes connected by edges in a graph.
I've used this multiple times, but don't know if there is a term for it.
Typically it works like this: assign an ID to each element (often a number, created by hashing its contents).
Then iterate:
Store a copy of all IDs to avoid reading values modified during the pass.
Loop over each element and create a new ID from its own value combined with those of the connected elements.
With each iteration the range of influence elements have on each other increases, following a triangle number sequence.
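As a concrete illustration of the per-iteration update described above, here is a minimal C sketch of one pass over an adjacency-list graph; the function name and mixing constants are arbitrary choices, and the sum over neighbors is what makes the combination order-independent:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One refinement pass: each element's new ID is derived from its own ID
   combined (order-independently) with its neighbors' old IDs. */
void refine_ids(uint64_t *ids, int n, int **neighbors, const int *degree)
{
    /* Snapshot the current IDs so the pass never reads a value it has
       already modified (the "store a copy" step above). */
    uint64_t *old = malloc(n * sizeof *old);
    memcpy(old, ids, n * sizeof *old);

    for (int i = 0; i < n; i++) {
        uint64_t h = old[i] * 0x9E3779B97F4A7C15ULL;   /* mix the element's own ID */
        uint64_t acc = 0;
        for (int k = 0; k < degree[i]; k++)
            acc += old[neighbors[i][k]] * 0xC2B2AE3D27D4EB4FULL; /* sum is order-independent */
        ids[i] = h ^ (acc + (acc >> 29));
    }
    free(old);
}
```

Running refine_ids repeatedly widens each element's effective neighborhood by one hop per pass, which is the growing range of influence mentioned above.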
Is there a term for each iteration?
Is there a term for this entire process?
I am writing an algorithm in MATLAB to pre-process a large graph for use with a path-finding algorithm, and I am curious as to the best way that I can keep track of my moves in order to be able to reconstruct the solution and project it onto the original graph.
The pre-processing methods I am using so far are relatively simple; 3 techniques I am using are:
1) Remove long edges:
Any edge (a,b) that can be bypassed by a sequence (a,c,b) with (a,b) > (a,c) + (c,b) is removed, since the route through c is shorter.
2) Remove vertices with degree 1
If a vertex with only one edge coming out of it is neither the start nor the end-point of the path, then that vertex will never be part of the path, and it can be removed.
3) Remove vertices with degree 2
If a vertex b has two edges coming out of it, then b can be removed and edges (a,b) and (b,c) can be replaced by a single edge (a,c) with length (a,b) + (b,c).
The algorithm iterates through these 3 techniques until no further changes are possible in the graph, at which point it removes all the empty rows and columns in the graph adjacency matrix and returns the reduced graph for use with the path-finding algorithm.
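As an example of the bookkeeping involved, here is a hedged sketch of technique 3 applied to a weighted adjacency matrix, written in C rather than MATLAB purely for illustration; 0 is assumed to mean "no edge", and the names are illustrative:

```c
/* Technique 3: contract a degree-2 vertex b with neighbors a and c in a
   weighted n x n adjacency matrix W (0 meaning "no edge"). */
void contract_degree2(double **W, int n, int a, int b, int c)
{
    double combined = W[a][b] + W[b][c];

    /* Keep the shorter connection if an (a,c) edge already exists. */
    if (W[a][c] == 0.0 || combined < W[a][c]) {
        W[a][c] = combined;
        W[c][a] = combined;
    }

    /* Disconnect b entirely; its row and column become empty. */
    for (int j = 0; j < n; j++) {
        W[b][j] = 0.0;
        W[j][b] = 0.0;
    }
}
```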
The pre-processing algorithm works great: in some cases I am able to achieve a reduction of around 70% in graph size, and my path-finding algorithm finds a path of the same quality as on the un-processed graph, but an order of magnitude faster.
My problem now is in reconstructing the solution on the original graph, so-called "post-processing".
I feel like I should be keeping track of all the moves my pre-processing algorithm makes and then applying them in reverse order after it has finished; I am just not quite sure how I should go about that.
Here is what I had in mind:
First, keep track of all the empty rows and columns I removed from the matrix after pre-processing and re-insert them.
Then have a simple vector whose indices represent the move number and whose values represent the type of move.
Then have one cell array for each of the 3 move "types", containing the data from each move in the order they were performed, with its own iteration counter.
Then, if I iterate backwards over the move list, it will tell me which cell array to access, and I can apply the reverse operation that is next on that list (kind of like a stack data structure).
This seems a bit unwieldy to me, so I was wondering if anyone else had ideas for a good method of keeping track of my moves that is easily reversible?
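One way to make that concrete, sketched here in C only to show the shape of the data (a MATLAB struct array would play the same role), is a single chronological stack of tagged move records instead of one list per move type; the field names and move types below are just taken from the three techniques described above:

```c
#include <stdlib.h>

/* One record per pre-processing move, pushed in the order moves are made. */
typedef enum { LONG_EDGE_REMOVED, DEGREE1_REMOVED, DEGREE2_CONTRACTED } MoveType;

typedef struct {
    MoveType type;
    int a, b, c;       /* vertices involved (c unused for degree-1 removals) */
    double w_ab, w_bc; /* original edge lengths needed to restore the edges */
} Move;

typedef struct {
    Move *moves;
    int count, capacity;
} MoveStack;

void push_move(MoveStack *s, Move m)
{
    if (s->count == s->capacity) {
        s->capacity = s->capacity ? 2 * s->capacity : 64;
        s->moves = realloc(s->moves, s->capacity * sizeof *s->moves);
    }
    s->moves[s->count++] = m;
}

/* After path-finding, walk the stack backwards and undo each move,
   projecting the reduced-graph path back onto the original graph. */
void replay_in_reverse(const MoveStack *s)
{
    for (int i = s->count - 1; i >= 0; i--) {
        const Move *m = &s->moves[i];
        switch (m->type) {
        case LONG_EDGE_REMOVED:  /* re-insert edge (a,b) with length w_ab */
            break;
        case DEGREE1_REMOVED:    /* re-insert vertex b and edge (a,b) */
            break;
        case DEGREE2_CONTRACTED: /* restore (a,b) and (b,c); if the path used
                                    (a,c), splice b back into the path */
            break;
        }
    }
}
```

A single tagged stack avoids the separate iteration counters: the order of the records is the order of the moves, and walking it backwards gives the reverse order for free.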
EDIT: I thought about posting this on the Computer Science Stack Exchange, but my question isn't really about the pre-processing methods themselves; it's about data storage and retrieval and the implementation itself. Feel free to migrate it if you think it would be better suited elsewhere.
So I have an array of numbers that look something like
1,708,234
2,802,532
11,083,432
5,098,123
5,777,111
I want to find out when two numbers are within a certain distance from each other (say 1,500,000) so I can group them into the same location and have just one UI element represent both at the zoom level I am looking at. How would one go about doing this smartly or efficiently? I'm thinking I would just start with the first entry, loop through all the elements, and if one was close to another, flag those two and put them in a dictionary of some sort. That would be my brute-force method, but I'm thinking there has to be a better way.
I'm coding in obj-c btw if that makes or breaks any design decisions.
How many numbers are we dealing with here? If it's small enough:
Sort the numbers (generally O(n log n)).
Run through each number n and compare it with its bigger neighbor, n+1, to see if it's within your range.
Repeat for n+2, n+3, and so on, until the next number is no longer within your range.
Your brute-force method there is O(n^2) (roughly n^2/2 comparisons). This brings it down to O(n log n) for the sort, plus a sweep whose cost is proportional to the number of nearby pairs found.
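Here is a minimal C sketch of that sort-and-sweep idea, using the numbers from the question; the grouping policy (chain into the current group while consecutive gaps stay within range) is just one reasonable interpretation of "group them into the same location":

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp_long(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    long v[] = { 1708234, 2802532, 11083432, 5098123, 5777111 };
    int n = sizeof v / sizeof v[0];
    long range = 1500000;

    qsort(v, n, sizeof v[0], cmp_long);

    /* Sweep once: a new group starts whenever the gap to the previous
       (sorted) number exceeds the range. */
    int group = 0;
    printf("group %d: %ld\n", group, v[0]);
    for (int i = 1; i < n; i++) {
        if (v[i] - v[i - 1] > range)
            group++;
        printf("group %d: %ld\n", group, v[i]);
    }
    return 0;
}
```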