We are just learning about circular queues in class, and I have a few questions.
Since we define the tail as the empty space next to the last value, as shown below:
|1| |3|4|5|6|
The head will be pointing to the number 3, and the tail will be pointing to the empty space between 1 and 3. I am confused about what happens if that space gets filled up, for example:
|1|2|3|4|5|6|
Then the head will still be pointing to 3, but the tail needs to move to the box after the previously blank one, so it will also be pointing to 3, i.e. the head. What should I do about this?
When this situation occurs, your queue is full. You have a few options for dealing with new items that are pushed:
discard the pushed event: only when some other item is popped can new items be pushed again. Neither head nor tail is updated.
Example: think of an event queue that has filled up; new requests are simply ignored.
discard (pop) the oldest event on the queue: in this case you advance both the head and the tail pointer by one place (see the sketch after this list).
Example: buffering incoming image frames from a webcam for processing. For a 'live' feed you may prefer to discard the older frames when the processing has a hiccup.
create a bigger queue: that is, allocate more memory on the fly
Example: you use a circular queue since it is an efficient implementation that doesn't require memory allocation most of the time when you push items. However, you do not want to lose items on the queue, so you allow reallocating more memory once in a while.
What the right action is depends on your application.
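As a rough sketch of the second option (my illustration, not part of the original answer), assuming a fixed-size array queue[0..max] with head/tail indices and the usual one-empty-slot convention, a push that overwrites the oldest element when the buffer is full could look like this:
procedure putoverwrite(v: integer);
begin
  queue[tail] := v;
  tail := (tail + 1) mod (max + 1);
  if tail = head then                    { the buffer was full...            }
    head := (head + 1) mod (max + 1);    { ...so drop the oldest element too }
end;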
PS: Your implementation keeps an empty slot in the queue to distinguish a full buffer from an empty one. Another option is to keep a counter of the number of elements in the queue to make this distinction. More info can be found in the Circular buffer article on Wikipedia.
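A minimal sketch of that counter variant (the names below are made up for illustration): keep a count alongside head and tail, so count = 0 means empty and count = size means full, and no slot has to be sacrificed:
const size = 100;
var
  queue: array[0..size - 1] of integer;
  head, tail, count: integer;     { count = number of stored elements }
  { queueinitialize should set head := 0; tail := 0; count := 0 }

procedure put(v: integer);
begin
  if count = size then exit;      { full: this version simply discards the new item }
  queue[tail] := v;
  tail := (tail + 1) mod size;
  count := count + 1;
end;

function get: integer;
begin
  get := queue[head];
  head := (head + 1) mod size;
  count := count - 1;
end;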
As I think of it, a circular queue is circular in part because it never gets "filled up". It will simply always hold a set number of elements, throwing out old ones as needed.
So you will never fill up the empty space; if you insert a new element, you'll remove an old element, so there will still be an empty space.
In other words, in your diagram, if you insert a new number, say 0, into a queue that looks like the following (assuming 1 is the last element and 3 the first):
|1| |3|4|5|6|
it will then look as follows:
| |0|3|4|5|6|
However, some implementations of a circular queue simply throw an exception/error when they are full, if that's the behaviour you want. Check out this, for example.
Here is the most succinct explanation of queues that I have ever found. You can expand on your queues based on this foundation. Source: "Algorithms," by Robert Sedgewick.
const max = 100;
var
  queue: array[0..max] of integer;
  head, tail: integer;

procedure put(v: integer);
begin
  queue[tail] := v;
  tail := tail + 1;
  if (tail > max) then tail := 0;
end;

function get: integer;
begin
  get := queue[head];
  head := head + 1;
  if (head > max) then head := 0;
end;

procedure queueinitialize;
begin
  head := 0;
  tail := 0;
end;

function queueempty: boolean;
begin
  queueempty := (head = tail);
end;
"It is necessary to maintain two indices, one to the beginning of the queue (head) and one to the end (tail). The contents of the queue are all the elements in the array between head and tail, taking into account the "wraparound" back to 0 when the end of the array is encountered. If head = tail then the queue is defined to be empty; if head = tail + 1,
or tail = max and head = 0, it is defined to be full."
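Following the quoted definition, a matching full test in the same style (my addition, not from the book) would be:
function queuefull: boolean;
begin
  queuefull := (head = tail + 1) or ((tail = max) and (head = 0));
end;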
When the head and the tail point to the same place, we say that the queue is full. No more elements can be added. To add an element, you will have to remove one from the queue first; this increments the head and frees up a slot again.
Not sure if there are a lot of ACT-R programmers on here, but I can't seem to find a forum/group for it anywhere, so...
I'm writing a program which has a chunk type defined as follows (and the goal below):
(chunk-type position position-x position-y)
(chunk-type goal state last-pos)
In a production, I'm fetching the position of a thing on screen from the visual-location buffer, and then I need to create a position chunk and put it in my goal's last-pos slot. Here's the production rule:
(P attend-projectile
=goal>
ISA goal
state nil
=visual-location>
screen-x =pos-x
screen-y =pos-y
?visual>
state free
==>
+visual>
cmd move-attention
screen-pos =visual-location
=goal>
state attended
last-pos (position pos-x screen-x pos-y screen-y)
)
Or something like that. I've tried various syntaxes. The problem boils down to:
I need to instantiate a chunk within a production (the position chunk) based on values bound on the LHS,
then assign that chunk to a slot of the goal.
Somehow I can't seem to find an equivalent example in the doc...
EDIT:
I do need this to be a chunk, not just the x & y position stored directly. Eventually this chunk will be extended to include an ID (which will be obtained from the visual location, e.g. a different letter will be assigned to each moving object). I will be tracking those objects through time. Because I'm tracking through time, another chunk (trajectory) will contain three position chunks (with their IDs).
Other productions will expect to find this chunk (trajectory, once I have three position chunks) and make decisions based on it.
Obviously the above is a snippet of the code. But the conceptual difficulty I have is essentially manipulating (instantiating/creating, or whatever it's called in ACT-R nomenclature) chunks at runtime.
Why do you need another chunk? You have the chunk in the visual-location buffer with that information so why not use it:
(P attend-projectile
=goal>
ISA goal
state nil
=visual-location>
?visual>
state free
==>
+visual>
cmd move-attention
screen-pos =visual-location
=goal>
state attended
last-pos =visual-location
)
Of course, that doesn't answer the question that was asked.
To create a new chunk, the proper way to do so is through a request to the imaginal buffer which would then require a following production to harvest the result and place it into the slot of the goal buffer's chunk. Assuming that the slots you want in the new chunk are from the position chunk-type you show, and that the values are from the similarly named slots of the chunk in the visual-location buffer, this would create the new chunk in the imaginal buffer:
(P attend-projectile
=goal>
ISA goal
state nil
=visual-location>
screen-x =pos-x
screen-y =pos-y
?visual>
state free
==>
+visual>
cmd move-attention
screen-pos =visual-location
=goal>
state attended
+imaginal>
position-x =pos-x
position-y =pos-y
)
I am trying to have a loop that starts at 100 and drops until it hits a point where the while condition no longer holds true.
I started with
While Solar_Power_House_W_Solar_PER <= OneHundred AND BatChargePercent < OneHundred DO
State_Dis_Charge := false
FOR PLC_SetLoopChargeValue:= 100 TO 0 By -1 DO
ConvertoReal := INT_TO_LREAL(PLC_SetLoopChargeValue);
Divide := ConvertoReal DIV(100);
PLC_SetCharge := Divide;
PLC_Charge := 1500 * PLC_SetCharge;
RB_Charge := PLC_Charge;
Visual_RBPower := 1500 * PLC_SetCharge; (*Charge *)
END_FOR;
The problem, I believe, is that it cycles too fast: the condition never lets it out of the while loop because the system takes a while to update, so I thought of adding a delay:
While Solar_Power_House_W_Solar_PER <= OneHundred AND BatChargePercent < OneHundred DO
State_Dis_Charge := false;
wait(IN:=not wait.Q , PT:=T#50ms);
if Wait.Q Then
FOR PLC_SetLoopChargeValue:= 100 TO 0 By -1 DO
ConvertoReal := INT_TO_LREAL(PLC_SetLoopChargeValue);
Divide := ConvertoReal DIV(100);
PLC_SetCharge := Divide;
PLC_Charge := 1500 * PLC_SetCharge;
RB_Charge := PLC_Charge;
Visual_RBPower := 1500 * PLC_SetCharge; (*Charge *)
END_FOR;
END_IF;
END_WHILE;
The way I think it should work is that one FOR loop should run every 50 ms. Currently nothing happens every 50 ms.
You have to consider that WHILE and FOR are executed synchronously. That means they are blocking: the interpreter does not execute the next line until the previous line has finished.
This means that "running too fast" cannot apply here. It does not matter how fast it runs; the lines will always be executed in order.
The only thing I would change is the loop direction: not from 100 down to 0 but from 0 up to 100, because I am not sure the backward loop will work correctly. Then all you have to change is:
ConvertoReal := INT_TO_LREAL(100 - PLC_SetLoopChargeValue);
You do not show all of your code, so it is very hard to judge, but if the FOR loop is complete as shown, it makes no sense at all. You calculate some variables but you do not use them there. You know that you cannot use them outside of your FOR loop either, right? Outside of the FOR loop those variables will always hold the values from the last iteration.
In your second example, although your FOR loop might work, you should not use a timer to run a loop inside a loop, because loops are synchronous and timers are asynchronous.
As I understand your task, you do not need WHILE at all. With that approach, execution of the other parts of your program will be blocked until you reach 100%, and as far as I can see that might take a while. So you have to use IF:
IF Solar_Power_House_W_Solar_PER <= OneHundred AND BatChargePercent < OneHundred THEN
// ....
END_IF;
The difference is significant. WHILE will block your program until the WHILE finishes, and the other parts will not be executed for all that time; within a single PLC cycle the FOR might be executed many times.
With IF, the FOR will run once per PLC cycle, and it actually does not change your logic.
If you share your full code, or at least the parts where the variables shown here are used, so that the whole picture is visible, you might get better help. Edit your post and I'll edit my comment.
With this answer I'm only addressing your issue of the FOR loop not being executed every 50 ms.
The other answers about why the WHILE loop can't be exited are correct, unless the variables Solar_Power_House_W_Solar_PER and BatChargePercent are changed in a parallel task.
I assume wait is a TON function block. Please mind that the names of FB instances are case sensitive: wait.Q is possibly not the same as Wait.Q. I think that is the main reason your FOR loop is not executed: you are checking the output of a different FB instance. Maybe check your declaration list for duplicates that differ only in upper/lower case.
Another possibility is that the condition of your WHILE loop isn't met at all and you didn't notice. In that case the FOR loop wouldn't be executed either, of course.
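Putting these points together, here is a minimal, untested sketch (assuming wait really is a TON instance, the variables are declared as in your code, and PLC_SetLoopChargeValue is an INT initialised to 100 elsewhere) that takes one ramp step roughly every 50 ms without any WHILE or FOR. The outer IF replaces the WHILE, and the whole block simply runs once per PLC cycle:
IF (Solar_Power_House_W_Solar_PER <= OneHundred) AND (BatChargePercent < OneHundred) THEN
    State_Dis_Charge := FALSE;

    (* retrigger the 50 ms timer each time it elapses *)
    wait(IN := NOT wait.Q, PT := T#50MS);

    IF wait.Q AND (PLC_SetLoopChargeValue > 0) THEN
        PLC_SetLoopChargeValue := PLC_SetLoopChargeValue - 1;     (* one ramp step per elapsed timer *)
        ConvertoReal   := INT_TO_LREAL(PLC_SetLoopChargeValue);
        PLC_SetCharge  := ConvertoReal / 100.0;                   (* "/" instead of DIV, so the value is not truncated *)
        PLC_Charge     := 1500 * PLC_SetCharge;
        RB_Charge      := PLC_Charge;
        Visual_RBPower := 1500 * PLC_SetCharge;                   (* Charge *)
    END_IF;
END_IF;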
I developed a procedure that receives a TStream; the base type, so that any descendant stream type can be passed in.
This procedure is intended to create one thread per core, or multiple threads. Each thread will perform a detailed (read-only) analysis of the stream data, and since Pascal classes are assigned by reference, never by value, the threads will collide with each other, because the reading position will be interleaved between them.
To fix this, I want the procedure to do all the work of duplicating the received TStream in memory, assigning the copy to a new variable. This way I can duplicate the TStream as many times as needed so that each thread has its own unique TStream. When a thread ends, it frees that memory itself.
Note: the procedure is inside a DLL; the threading itself works.
Note 2: the goal is for the procedure to do all the necessary work itself, i.e. without any intervention from the calling code. I could easily pass an array of TStream rather than just a TStream, but I do not want that! The aim is for the service to be provided entirely by the procedure.
Do you have any idea how to do this?
Thank you.
Addition:
I had a low-level idea, but my knowledge of Pascal is limited:
Identify the object's address in memory, and its size.
Allocate a new block of memory of the same size as the original object.
Copy the entire raw contents of the object to this new address.
Create a TStream pointer that points to this new address in memory.
Would this work, or is it stupid? If it would work, how do I do it? An example, please!
2nd addition:
Just as an example, suppose the program performs brute-force attacks on encrypted streams (only an example, since that is not what this is for):
Scenario: a 30 GB file on a CPU with 8 cores:
1st - TMemoryStream:
Create 8 TMemoryStreams and copy the entire contents of the file into each of them. This would result in 240 GB of RAM in use simultaneously. I consider this idea broken. In addition, it would increase the processing time to the point where it would be faster not to use multithreading at all: I would have to read the entire file into memory and only then, once it is loaded, begin to analyze it. Broken!
* A poor alternative to TMemoryStream is to copy the file gradually into TMemoryStreams in batches of 100 MB per core (800 MB total), so as not to fill up the memory. Each thread then only looks at 100 MB, and the memory is freed until the entire file has been processed. But the problem is that this would require the Synchronize() function in the DLL, which we know does not work, as described in my open question "Synchronize() in DLL freezes without errors and crashes".
2nd - TFileStream:
This is worse in my opinion. See, I receive a TStream, create 8 TFileStreams and copy all 30 GB into each TFileStream. That is bad because it occupies 240 GB on disk, which is a lot even for an HDD. The read and write time of those copies on the HDD would make the multithreaded implementation end up more time consuming than a single thread. Broken!
Conclusion: the two approaches above require synchronize() to queue each thread's reads of the file. Therefore the threads would not operate simultaneously, even on a multicore CPU. I know that even if I allowed simultaneous access to the file (by directly creating several TFileStreams), the operating system would still queue the threads to read the file one at a time, because the HDD is not truly thread-safe; it cannot read two pieces of data at the same time. This is a physical limitation of the HDD! However, the OS's queue management is much more effective and reduces this inherent bottleneck efficiently, unlike anything I would implement manually with synchronize(). This justifies my idea of cloning the TStream: it would leave all the work of managing the file-access queue to the OS, without any intervention from me, and I know it will do that better than I would.
Example
In the example above, I want 8 threads to analyze the same stream, each differently and all simultaneously, bearing in mind that the threads do not know what kind of stream they are given; it can be a file stream, a stream from the Internet, or even a small TStringStream. The main program will create only one stream and pass it along with configuration parameters. A simple example:
TModeForceBrute = (M1, M2, M3, M4, M5...);
TModesFB = set of TModeForceBrute;
TService = record
  stream: TStream;
  modes: array of TModesFB;
end;
For example, it should be possible to analyze the stream with only M1, with only M2, or with both [M1, M2]. The composition of the TModesFB set changes the way the stream is analyzed.
Each item in the array "modes", which functions as a task list, will be processed by a different thread. An example of a task list (JSON representation):
{
Stream: MyTstream,
modes: [
[M1, m5],
[M1],
[M5, m2],
[M5, m2, m4, m3],
[M1, m1, m3]
]
}
Note: In analyzer [m1] + [m2] <> [m1, m2].
In Program:
function analysis(Task: TService; maxCores: integer): TMyResultType; external 'mydll.dll';
In DLL:
// Basic, simple and fast example! It may contain syntax or logic errors.
function analysis(Task: TService; maxCores: integer): TMyResultType;
var
i, processors : integer;
begin
processors := getCPUCount();
if (maxCores < processors) and (maxCores > 0) then
processors := maxCores;
setlength (globalThreads, processors);
for i := 0 to processors - 1 do
// Obviously the modes counter in the original code is not the same as the processors counter.
if i < length(Task.modes) then begin
globalThreads[i] := TAnalusysThread.create(true, Task.stream, Task.modes[i]);
globalThreads[i].start();
end;
[...]
end;
Note: With a single thread the program works beautifully, with no known errors.
I want each thread to take care of one type of analysis, and I cannot use Synchronize() in a DLL. Do you see what I mean? Is there an adequate, clean solution?
Cloning a stream looks like this:
streamdest := TMemoryStream.Create;
streamsrc.Position := 0;
streamdest.CopyFrom(streamsrc, 0);  // Count = 0 copies the entire source stream
streamsrc.Position := 0;
streamdest.Position := 0;
However, doing things across DLL borders is hard, since the DLL has its own copy of the libraries and of the library state. This is currently not recommended.
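Wrapped up as a helper, this is just a sketch (the names CloneStreamForThreads and TStreamArray are made up here) of making one in-memory copy per worker thread; note that it pulls the whole source into RAM once per copy, which, as the question's 2nd addition points out, is not practical for very large streams:
type
  TStreamArray = array of TMemoryStream;

function CloneStreamForThreads(Source: TStream; Count: Integer): TStreamArray;
var
  i: Integer;
begin
  SetLength(Result, Count);
  for i := 0 to Count - 1 do
  begin
    Result[i] := TMemoryStream.Create;
    Source.Position := 0;
    Result[i].CopyFrom(Source, 0);   // 0 = copy the entire source stream
    Result[i].Position := 0;
  end;
  Source.Position := 0;              // restore the source position for the caller
end;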
I'm answering my own question, because it seems no one had a really good solution. Perhaps because there is none!
So I adapted the idea of Marco van de Voort and Ken White into a solution that works using TMemoryStream with partial loads into memory in 50 MB batches, using TRTLCriticalSection for synchronization.
The solution also has the same drawbacks mentioned in the 2nd addition, namely:
Queuing access to the HDD is the responsibility of my program and not of the operating system;
A single thread loads the same data into memory twice.
Depending on the processor speed, a thread may analyze its 50 MB of memory very quickly, while loading it into memory can be very slow. That would make the multiple threads effectively run sequentially, losing the advantage of multithreading, because every thread gets congested at the file access and they run one after another as if they were a single thread.
So I consider this solution a dirty solution. But for now it works!
Below I give a simple example, which means this adaptation may contain obvious logic and/or syntax errors, but it is enough to demonstrate the idea.
Using the same example as in the question, instead of passing a stream to "analysis", a method pointer is passed. This method is responsible for reading the stream in synchronized 50 MB batches.
Both DLL and Program:
TLotLoadStream = function (var toStm: TMemoryStream; lot, id: integer): int64 of object;
TModeForceBrute = (M1, M2, M3, M4, M5...)
TModesFB = set of TModeForceBrute;
TaskTService = record
reader: TLotLoadStream; {changes here <<<<<<< }
modes: array of TModesFB;
end;
In Program:
type
{ another code here }
TForm1 = class(TForm)
{ another code here }
CS : TRTLCriticalSection;
stream: TFileStream;
function MyReader(var toStm: TMemoryStream; lot: integer): int64;
{ another code here }
end;
function analysis(Task: TService; maxCores: integer): TMyResultType; external 'mydll.dll';
{ another code here }
implementation
{ another code here }
function TForm1.MyReader(var toStm: TMemoryStream; lot: integer): int64;
const
lotSize = (1024*1024) * 50; // 50MB
var
ler: int64;
begin
result := -1;
{
  MUST BE DONE BEFOREHAND - FOR EXAMPLE IN TForm1.Create():
  InitCriticalSection(self.CS);
}
toStm.Clear;
ler := 0;
{ ENTERING THE CRITICAL SECTION }
EnterCriticalSection(self.CS);
{ POSITIONING AT THE BEGINNING OF THE LOT }
self.stream.Seek(lot * lotSize, soBeginning);
if (lot = 0) and (lotSize >= self.stream.size) then
ler := self.stream.size
else
if self.stream.Size >= (lotSize + (lot * lotSize)) THEN
ler := lotSize
else
ler := self.stream.Size - self.stream.Position; // does the stream start at 0?
{ COPYING }
if (ler > 0) then
toStm.CopyFrom(self.stream, ler);
{ LEAVING THE CRITICAL SECTION }
LeaveCriticalSection(self.CS);
result := ler;
end;
In DLL:
{ another code here }
// Basic, simple and fast example! It may contain syntax or logic errors.
function analysis(Task: TService; maxCores: integer): TMyResultType;
var
i, processors : integer;
begin
processors := getCPUCount();
if (maxCores < processors) and (maxCores > 0) then
processors := maxCores;
setlength (globalThreads, processors);
for i := 0 to processors - 1 do
// Obviously the modes counter in the original code is not the same as the processors counter.
if i < length(Task.modes) then begin
globalThreads[i] := TAnalusysThread.create(true, Task.reader, Task.modes[i]);
globalThreads[i].start();
end;
{ another code here }
end;
In DLL Thread Class:
type
{ another code here }
MyThreadAnalysis = class(TThread)
{ another code here }
reader: TLotLoadStream;
procedure Execute;
{ another code here }
end;
{ another code here }
implementation
{ another code here }
procedure MyThreadAnalysis.Execute;
var
  Stream: TMemoryStream;
  lot, n: integer;
  {My analyzer is already written entirely around buf; rewriting it would be too much work, so it stays like this: two reads, two loads into memory, as I already mentioned in the question!}
  buf: array[1..$F000] of byte; // 60K
begin
lot := 0;
Stream := TMemoryStream.Create;
self.reader(stream, lot);
while (assigned(Stream)) and (Stream <> nil) and (Stream.Size > 0) do begin
Stream.Seek(0, soBeginning);
{ 2nd load, into the memory buffer buf }
while (Stream.Position < Stream.Size) do begin
n := Stream.read(buf, sizeof(buf));
{ MY CODE HERE }
end;
inc(lot);
self.reader(stream, lot, integer(Pchar(name)));
end;
end;
So, as you can see, this is a stopgap solution. I still hope to find a clean solution that allows me to duplicate the stream in such a way that access to the data is the operating system's responsibility and not my program's.
I am wondering what is the canonical approach to solve the following problem in Rx: Say I have two observables, mouse_down and mouse_up, whose elements represent mouse button presses. In a very simplistic scenario, if I wanted to detect a long press, I could do it the following way (in this case using RxPy, but conceptually the same in any Rx implementation):
mouse_long_press = mouse_down.delay(1000).take_until(mouse_up).repeat()
However, problems arise when we need to hoist some information from the mouse_down observable to the mouse_up observable. For example, consider if the elements of the observable stored information about which mouse button was pressed. Obviously, we would only want to pair mouse_down with mouse_up of the corresponding button. One solution that I came up with is this:
mouse_long_press = mouse_down.select_many(lambda x:
rx.Observable.just(x).delay(1000)\
.take_until(mouse_up.where(lambda y: x.button == y.button))
)
If there is a more straightforward solution, I would love to hear it - but as far as I can tell this works. However, things get more complicated if we also want to detect how far the mouse has moved between mouse_down and mouse_up. For this we need to introduce a new observable mouse_move, which carries information about the mouse position.
mouse_long_press = mouse_down.select_many(lambda x:
mouse_move.select(lambda z: distance(x, z) > 100).delay(1000)\
.take_until(mouse_up.where(lambda y: x.button == y.button))
)
However, this is pretty much where I get stuck. Whenever a button is held down longer than 1 second, I get a bunch of boolean values, but I only want to detect a long press when all of them are false, which sounds like the perfect case for the all operator. It feels like there's only a small step missing, but I haven't been able to figure out how to make it work so far. Perhaps I am also approaching this backwards. Looking forward to any suggestions.
OK, I guess I found a possible answer. RxPy has a take_with_time operator, which works for this purpose. Not really as straightforward as I was hoping for (I'm not sure whether take_with_time is available in other Rx implementations).
mouse_long_press = mouse_down.select_many(lambda x:
mouse_moves.take_with_time(1000).all(lambda z: distance(x, z) < 100)\
.take_until(mouse_up.where(lambda y: x.button == y.button))
)
I will leave the question open for now in case somebody has a better suggestion.
I'd approach the problem differently, by creating a stream of mouse presses with length information, and filtering that for presses longer than 1s.
First let's assume that you only have one mouse button. Merge the mouse_up and mouse_down streams and assign time intervals between them with the time_interval() operator. You will get a stream of intervals since the previous event, along with the event itself. Assuming your mouse-ups and mouse-downs alternate, this means your events are now:
(down + time since last up), (up + time since last down), (down + time since last up) ...
Now, simply filter for x.value.type == "up" and x.interval > datetime.timedelta(seconds=1)
(You can also validate this with pairwise(), which always gives you the current + previous event, so you can check that the previous one is down and the current one is up.)
Second, add the mouse movement information, using the window() operator.
(This part is untested; I'm going off the docs for how it's supposed to behave, but the docs aren't very clear. So YMMV.)
The idea is that you can collect sequences of events from an observable, separated into groups based on another observable.
The window_openings observable is going to be the merged up/down stream, or the interval stream, whichever is more convenient. Then you can flat_map() (or select_many, which is the same thing) the result and work out the distance in whichever way you like.
Again, you should end up with a stream of distances between up/down events. Then you can zip() this stream with the interval stream, at which point you can filter for up events and get both time and distance until the previous down.
Third, what if you are getting events for multiple mouse buttons?
Simply use group_by() operator to split into per-button streams and proceed as above.
Full code below:
import collections
import datetime
import rx

Event = collections.namedtuple("Event", "event interval distance")

def sum_distance(move_stream):
    # put your distance calculation here; something like:
    return move_stream.pairwise().reduce(
        lambda acc, pair: acc + distance(pair[0], pair[1]), 0)

def mouse_press(updown_stream):
    # shared stream for less duplication
    shared = updown_stream.share()
    intervals = shared.time_interval()  # element is: (interval=timedelta, value=original event)
    distances = mouse_move.window(shared).flat_map(sum_distance)
    zipped = intervals.zip(distances, lambda i, d: Event(i.value, i.interval, d))
    return zipped

mouse_long_press = (
    # merge the mouse streams
    rx.Observable.merge(mouse_up, mouse_down)
    # separate into streams for each button
    .group_by(lambda x: x.button)
    # create per-button event streams per above and merge results back
    .flat_map(mouse_press)
    # filter by event type and length
    .filter(lambda ev: ev.event.type == "up" and ev.interval >= datetime.timedelta(seconds=1))
)
I want a form that, when I set it to the bottom of the z-order, stays there. I tried:
SetWindowPos(Handle,HWND_BOTTOM,Left,Top,Width,Height,SWP_NOZORDER);
and when I overlap it with some other apps it stays at the bottom as I need. However when I click on it, it rises to the top. I then tried:
SetWindowPos(Handle, HWND_BOTTOM, Left, Top, Width, Height,
SWP_NOACTIVATE or SWP_NOZORDER);
and various other switches from this website...
http://msdn.microsoft.com/en-us/library/ms633545.aspx
But it still rises to the top.
SetWindowPos sets the position of a window only at the time it is called; it does not establish a persistent state. Handling WM_WINDOWPOSCHANGING is the correct way to do this:
While this message is being processed, modifying any of the values in WINDOWPOS affects the window's new size, position, or place in the Z order. An application can prevent changes to the window by setting or clearing the appropriate bits in the flags member of WINDOWPOS.
type
TForm1 = class(TForm)
..
private
procedure WindowPosChanging(var Msg: TWMWindowPosMsg);
message WM_WINDOWPOSCHANGING;
end;
..
procedure TForm1.WindowPosChanging(var Msg: TWMWindowPosMsg);
begin
if Msg.WindowPos.flags and SWP_NOZORDER = 0 then
Msg.WindowPos.hwndInsertAfter := HWND_BOTTOM;
inherited;
end;
Never tried it, but you might get somewhere by trapping the message WM_WINDOWPOSCHANGING and twiddling with the Z order. It could get complicated though, and I personally would find it irritatingly non-standard.
A menu option equivalent to Cascade but with a z-order sort might be a better option; I mean, if they clicked on it, they expect to see it.
Out of instinct, I wouldn't trust that the app will always work as you expect. Other applications may have a much stricter method of forcefully bringing themselves in front of yours, or, in your case, vice versa: sending yours to the back. For example, a timer which repeatedly sends the window to the back. If you have two such apps layered, you would watch them fight each other, basically flickering back and forth. In the end, I wouldn't count on a permanent, 100% solution for this, because you never know what other applications might do to override yours.