Simultaneously incrementing the program counter and loading the Instruction register - cpu-architecture

In my Computer Architecture lectures, I was told that the IR assignment and PC increment are done in parallel. However, surely the order has an effect on which instruction is loaded.
If PC = 0 and the IR is loaded before the PC is incremented, then the IR will hold the instruction that was at address 0.
However, if PC = 0 and the PC is incremented before the IR is loaded, then the IR will hold the instruction that was at address 1.
So surely they can't be done simultaneously and the order must be defined?

You're not taking into account the wonders of flip-flops. The exact implementation depends, of course, on your specific design, but it's perfectly possible to read the value currently latched in some register or latch while at the same time preparing a different value to be stored there, as long as you know these values are independent (there's also the possibility of doing a "bypass" in more sophisticated designs, but that's beside the point here).
In this case, you'd be reading the current value of the PC (and using it to fetch the code from memory, or cache, or wherever), while preparing the next value (e.g. PC+4, or some branch target if you know it). This is how pipelines work.
Generally speaking, you either have enough time to do some work within the same cycle (incrementing the PC and using it for the code fetch), in which case both fit in the same pipestage, or, if you can't make it in time, you break these serial activities into two pipestages so that they can be done in "parallel": one of them belongs to the next operation flowing through the pipe, so there's no longer a dependency (aside from corner cases like branches or bubbles).
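To make the flip-flop timing concrete, here is a minimal Java sketch (no real ISA; the memory contents and the +1 increment are illustrative) of one fetch cycle: every next-state value is computed from the values latched at the start of the cycle, and all registers commit together on the clock edge:

public class FetchStage {
    static final int[] MEMORY = {0xA1, 0xB2, 0xC3, 0xD4}; // fake instruction words
    static int pc = 0; // value currently latched in the PC flip-flops
    static int ir = 0; // value currently latched in the IR flip-flops

    static void clockCycle() {
        int nextIr = MEMORY[pc]; // the fetch reads the OLD pc...
        int nextPc = pc + 1;     // ...while the incrementer prepares pc+1
        ir = nextIr;             // clock edge: both registers latch
        pc = nextPc;             // their new values simultaneously
    }

    public static void main(String[] args) {
        clockCycle();
        System.out.printf("IR=0x%X PC=%d%n", ir, pc); // prints IR=0xA1 PC=1
    }
}

Because both next-state values are derived from the old PC, the IR ends up with the instruction at address 0 even though the PC has already moved on to 1.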

Related

Considering differences in the same materials - Stations

I am trying to simulate a manufacturing assembly process where material items are processed following a single line. In the quality control stations, when a failure is detected, the object is sent to the repair area (by a truckpallet) and, when it is repaired, the same resource takes it and puts it at the start of the line. Up to this point, I have already programmed it.
The problem is the following: when it is a repaired object, it has to follow the same conveyor but with no stops at the stations. The object must enter a station and leave it with no delay (as the work related to the stations has already been done).
I thought the best would be to consider the difference (repaired vs. not repaired) in the Agent, but then I can't work with it in the Main agent... I have also tried alternative solutions, for example defining variables in each station and using them in the station delays and in the following station:
triangular( 109.1*delay_bec, delaytime_bec*delay_bec, 307.6*delay_bec)
Actions - in finished process:
if (delay_bec == 0) {
    delay_headers = 0;
    delay_bec = 1;
}
But nothing worked... Any help?
I thought the best would be to consider the difference (repaired vs. not repaired) in the Agent, but then I can't work with it in the Main agent...
This is actually the correct approach. You need to create a custom agent type, give it a boolean variable isRepaired (or similar), and then in the delay you can dynamically adjust the duration using that flag:
agent.isRepaired ? 0. : 100
This will delay normal agents by 100 time units and repaired agents not at all.
Obviously, you must make sure that the agents flowing through the flow blocks are of your custom agent type (see the help for how to do that).
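As a sketch of what that can look like (in practice you create the agent type through the AnyLogic GUI rather than writing the class by hand; the name Product is illustrative):

public class Product extends Agent {    // com.anylogic.engine.Agent
    public boolean isRepaired = false;  // set to true in the repair area
}

In each station's delay time you then use the expression above, e.g. agent.isRepaired ? 0.0 : triangular(109.1, delaytime_bec, 307.6), so repaired agents pass through with zero delay while normal agents get the usual processing time.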

Anylogic: Stop sourcing if storage is full

In the link above I present an example AnyLogic process flow; excuse me for the link, as I'm not allowed to upload pictures yet.
In this flow, is it possible to stop the source from sourcing if the rack system is full or filled to a certain level? (under the assumption that both rack picking and storing is done in that rack system.)
Sure, you can always switch a source off. It depends on how you defined the arrivals in the source, but for a "Rate" or "Interarrival Time" source, you can use:
mySource.set_rate(0);
All you need to do is call this at the correct point in your model, i.e. when the rack system is full. To do that, you might need to write a function isFull() that loops through all its rows, positions and levels and tests myRackSystem.isFree(row, position, level). If everything is full, you stop the source from creating more stuff.
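A sketch of such an isFull() function, written as an AnyLogic function body returning boolean; the rack dimensions below are placeholders for your actual rack setup:

int rows = 5, positions = 20, levels = 3;  // placeholders: use your rack's real dimensions
for (int row = 0; row < rows; row++)
    for (int pos = 0; pos < positions; pos++)
        for (int level = 0; level < levels; level++)
            if (myRackSystem.isFree(row, pos, level))
                return false;              // found a free cell, so not full
return true;                               // no free cell anywhere: rack is full

You would then call something like if (isFull()) mySource.set_rate(0); at the point where agents enter the rack system.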

Is there a way to know how much of the EEPROM memory is used?

I have looked through the "logbook" and "datalogger" APIs and there is no way of telling that the data logger is almost full. I found the API call with the path "/Mem/Logbook/IsFull". If I have understood it correctly, this will notify me when the log is full and the datalogger has stopped logging.
So my question is: is there a way to know how much of the memory is currently in use, so that I can clean up old data (I need to do some calculations on it before it is deleted) before the EEPROM is full and the DataLogger stops recording?
The data memory of Logbook/DataLogger is conceptually a ring buffer. That's why /Mem/DataLogger/IsFull always returns false on the Movesense sensor (Suunto uses the same API in its watches, where the situation is different). Therefore the sensor never stops recording; it just replaces the oldest data with new.
Here are a couple of strategies that you could use:
Plan A:
Create a new log (POST /Mem/Logbook/Entries => returns the logId for it)
Start Logging (PUT /Mem/DataLogger/State: LOGGING)
Every once in a while create a new log (POST /Mem/Logbook/Entries). Note: This can be done while logging is ongoing!
When you want to know what is the status of the log, read /Mem/Logbook/Entries. When the oldest entry has completely been overwritten, it disappears from the list. Note: The GET /Entries is a heavy operation so you may not want to do it when the logger is running!
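A sketch of Plan A as a loop; only the resource paths come from the steps above, while the SensorClient interface and the rotation interval are hypothetical stand-ins for whatever Movesense client library and cadence you actually use:

interface SensorClient {
    String post(String path);            // create a resource; returns e.g. the logId
    void put(String path, String value); // set a resource value
    String get(String path);             // read a resource
}

class PlanA {
    static void run(SensorClient client) throws InterruptedException {
        String firstLogId = client.post("/Mem/Logbook/Entries"); // create a new log
        client.put("/Mem/DataLogger/State", "LOGGING");          // start logging
        while (true) {
            Thread.sleep(60_000);                 // "every once in a while"
            client.post("/Mem/Logbook/Entries");  // rotate: new log, logging continues
            // Heavy call; the answer suggests avoiding it while the logger is
            // running -- shown here only to illustrate the status check.
            String entries = client.get("/Mem/Logbook/Entries");
            if (!entries.contains(firstLogId)) {
                // The oldest log has been overwritten by the ring buffer.
            }
        }
    }
}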
Plan B:
Every now and then start a new log and process the previous one. That way the log never overwrites something you have not processed.
Plan C:
(Note: This is low level and may break with some future Movesense sensor release)
GET the first 256 bytes of EEPROM chip #0 using the /Component/EEPROM API. This area contains a number of ExtflashChunkStorage::StorageHeader structs (see: ExtflashChunkStorage.h); the rest is filled with 0xFF. The last StorageHeader before the 0xFF fill is the current one. From that StorageHeader one can see where the ring buffer starts (firstChunk) and where the next data is written (cursor). The difference of the two is the used memory. (Note: since it is a ring buffer, the difference can be negative; in that case, add "size of Logbook area - 256" to it.)
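A sketch of the Plan C arithmetic, assuming you have already fetched the first 256 bytes. HEADER_SIZE and the two field offsets are placeholders, not the real layout; take the actual StorageHeader definition from ExtflashChunkStorage.h:

class LogbookUsage {
    static int usedBytes(byte[] first256, int logbookAreaSize) {
        final int HEADER_SIZE = 16; // placeholder, not the real struct size
        int last = -1;
        for (int off = 0; off + HEADER_SIZE <= 256; off += HEADER_SIZE) {
            if ((first256[off] & 0xFF) == 0xFF) break; // reached the 0xFF fill
            last = off;                                // most recent header so far
        }
        if (last < 0) return 0;                        // no headers written yet
        int firstChunk = readLe32(first256, last);     // placeholder field offset
        int cursor     = readLe32(first256, last + 4); // placeholder field offset
        int used = cursor - firstChunk;                // cursor minus ring start
        if (used < 0)                                  // ring buffer wrapped around
            used += logbookAreaSize - 256;             // per the note above
        return used;
    }

    static int readLe32(byte[] b, int off) {           // little-endian assumed
        return (b[off] & 0xFF) | (b[off + 1] & 0xFF) << 8
             | (b[off + 2] & 0xFF) << 16 | (b[off + 3] & 0xFF) << 24;
    }
}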
Full disclosure: I work for the Movesense team.

Can watchman send why a file changed?

Is watchman capable of telling the configured command why it's sending it a file?
For example:
a file that is new to a folder would possibly carry a FILE_CREATE flag;
a file that is deleted would send the FILE_DELETE flag to the command;
a file that is modified would send a FILE_MOD flag, etc.
Perhaps even a deleted folder (and therefore the files under it) would send a FOLDER_DELETE parameter naming the folder, as well as a FILE_DELETE for the files under it and a FOLDER_DELETE for the folders under it.
Is there such a thing?
No, it can't do that. The reasons why are pretty fundamental to its design.
The TL;DR is that it is a lot more complicated than you might think for a client to correctly process those individual events and in almost all cases you don't really want them.
Most file watching systems are abstractions that simply translate from the system-specific notification information into some common form. They don't deal, either very well or at all, with the notification queue overflowing, and don't provide their clients with a way to reliably respond to that situation.
In addition to this, the filesystem can be subject to many and varied changes in a very short amount of time, and from multiple concurrent threads or processes. This makes this area extremely prone to TOCTOU issues that are difficult to manage. For example, creating and writing to a file typically results in a series of notifications about the file and its containing directory. If the file is removed immediately after this sequence (perhaps it was an intermediate file in a build step), by the time you see the notifications about the file creation there is a good chance that it has already been deleted.
Watchman takes the input stream of notifications and feeds it into its internal model of the filesystem: an ordered list of observed files. Each time a notification is received watchman treats it as a signal that it should go and look at the file that was reported as changed and then move the entry for that file to the most recent end of the ordered list.
When you ask Watchman for information about the filesystem it is possible or even likely that there may be pending notifications still due from the kernel. To minimize TOCTOU and ensure that its state is current, watchman generates a synchronization cookie and waits for that notification to be visible before it responds to your query.
The combination of the two things above means that watchman result data has two important properties:
You are guaranteed to have observed all notifications that happened before your query
You receive the most recent information for any given file only once in your query results (the change results are coalesced together)
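As a toy model of that coalescing (an illustration of the idea only, not watchman's actual data structure), an insertion-ordered map captures both properties: each notification moves the file's entry to the most-recent end, so a query sees at most one entry per file, in recency order:

import java.util.LinkedHashMap;
import java.util.Map;

class ObservedFiles {
    private final Map<String, Long> byRecency = new LinkedHashMap<>();

    void onNotification(String path, long observedAt) {
        byRecency.remove(path);          // drop the old position in the order
        byRecency.put(path, observedAt); // re-insert at the most-recent end
    }

    Map<String, Long> changedSince(long t) { // each file appears only once
        Map<String, Long> out = new LinkedHashMap<>();
        byRecency.forEach((path, when) -> { if (when > t) out.put(path, when); });
        return out;
    }
}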
Let's talk about the overflow case. If your system is unable to keep up with the rate at which files are changing (e.g. you have a big project, are very quickly creating and deleting files, and the system is heavily loaded), the OS can't fit all of the pending notifications in the buffer resources allocated to the watches. When that happens, it blows those buffers and sends an overflow signal. What that means is that the client of the watching API has missed some number of events and is no longer in synchronization with the state of the filesystem. If that client maintains state about the filesystem, that state is no longer valid.
Watchman addresses this situation by re-examining the watched tree and synthetically marking all of the files as being changed. This causes the next query from the client to see everything in the tree. We call this a fresh instance result set because it is the same view you'd get when you are querying for the first time. We set a flag in the result so that the client knows that this has happened and can take appropriate steps to repair its own state. You can configure this behavior through query parameters.
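A sketch of the client side of that repair step, assuming your JSON library decodes the response into a Map; the field names "is_fresh_instance" and "files" are the ones watchman uses in its query results:

import java.util.List;
import java.util.Map;

class WatchmanClientState {
    private final Map<String, Object> knownFiles = new java.util.HashMap<>();

    @SuppressWarnings("unchecked")
    void handleResult(Map<String, Object> result) {
        if (Boolean.TRUE.equals(result.get("is_fresh_instance"))) {
            knownFiles.clear(); // everything we knew may be stale: rebuild, don't patch
        }
        for (Map<String, Object> f : (List<Map<String, Object>>) result.get("files")) {
            knownFiles.put((String) f.get("name"), f); // latest state per file
        }
    }
}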
In these fresh instance result sets, we don't know whether any given file really changed or not (it's possible that it changed in such a way that we can't detect via lstat) and even if we can see that its metadata changed, we don't know the cause of that change.
There can be multiple events that contribute to why a given file appears in the results delivered by watchman. We don't record them individually because we can't track them with unbounded history; imagine a file that is incrementally written once every second, all day long. Do we keep 86,400 change entries per day for it on hand and deliver those to our clients? What if there are hundreds of thousands of files like this? We'd have to truncate that data, and at that point the loss reduces how well you can reason about it.
At the end of all of this, it is very rare for a client to do much more than try to read a file or look at its metadata, and generally speaking, they want to do that only when the file has stopped changing. For this use case, watchman-wait, watchman-make and trigger all have the concept of a settle period that causes the change notifications to be delayed in delivery until after the filesystem has stopped changing.

Scheduling variable-sized work items efficiently

(I have also posted this question at math.stackexchange.com because I'm not sure where it should belong.)
I have a system with the following inputs:
Set of work items to be completed. These are variable-sized. They do not have to be completed in any particular order.
Historical data as to how long work items have taken to complete in the past. However, past performance is no guarantee of future success! That is, once we come to actually execute a work item, we may find that it takes more or less time than it did previously.
There can be work items that I have never seen before and hence have no historical data about.
Work items further have a "classification" of "parallel" or "serial".
Set of "agents" which are capable of picking up a work item and working on it. The number of agents is fixed and known in advance. An agent can only work on one work item at a time.
Set of "servers" against which the agents execute work items. Servers have different capabilities. Specifically, they are capable of handling different numbers of agents simultaneously.
Rules:
If a server is being used to execute a "serial" work item, it cannot simultaneously be used to execute any other work item.
Provided a server isn't being used to execute any "serial" work items, it can simultaneously handle as many agents as it is capable of, all executing "parallel" work items.
There are a handful of work items which must be executed against a specific server (although any agent can do that). These work items are "parallel", if that matters. (It may be easier to ignore this rule for now!)
Requirement:
Given the inputs and rules above, I need to execute the set of work items "as quickly as possible". Since we cannot know how long a work item will take until it is complete, we cannot possibly hope to derive a perfect solution up front (I suppose), so "as quickly as possible" means not manifestly doing something stupid like just using one agent to execute each work item one by one!
Historically, I've had a very simple round-robin algorithm and simply sorted the work items by descending historical duration, such that the longest-running work items get scheduled sooner and, hopefully, at the end of the cycle I'm able to keep all agents and servers reasonably well loaded with short-duration work items. This has resulted in a pretty good "square" shape to the utilization graph, with no long tail of long-duration work items hanging around at the end of the cycle.
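In code terms, that baseline boils down to the classic longest-processing-time greedy; here is a minimal Java sketch (agents only, ignoring servers and the serial/parallel rules, and sorting the input list in place):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

class LptScheduler {
    // Assign each duration (longest first) to the agent that frees up first.
    static List<List<Double>> schedule(List<Double> durations, int agents) {
        List<List<Double>> perAgent = new ArrayList<>();
        PriorityQueue<double[]> free =                 // entries: {busy-until, agent index}
            new PriorityQueue<>(Comparator.comparingDouble(a -> a[0]));
        for (int i = 0; i < agents; i++) {
            perAgent.add(new ArrayList<>());
            free.add(new double[]{0.0, i});
        }
        durations.sort(Comparator.reverseOrder());     // longest historical duration first
        for (double d : durations) {
            double[] a = free.poll();                  // earliest-free agent
            perAgent.get((int) a[1]).add(d);
            free.add(new double[]{a[0] + d, a[1]});    // now busy until old time + d
        }
        return perAgent;
    }
}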
This historical algorithm, however, has required me to pre-configure the number of agents and servers and pre-allocate work items to "pools" and assign pools to servers, and lots of other horrible stuff. I now need to support a dynamic number of agents and servers without having to reconfigure things. (Note that the number of servers will be fixed during a cycle - that is, the number will only change between cycles - but the number of agents may increase or decrease in the middle of the cycle.)
Once all work items are complete, we record how long each work item took to feed into the next cycle, and start again from the beginning!