I'm trying to answer a computer architecture past-paper question (NOT homework).
My question is how to calculate the miss rate. (The complete question asks for the average memory access time.) The complete question is:
For a given application, 30% of the instructions require memory access. Miss rate is 3%. An instruction can be executed in 1 clock cycle. L1 cache access time is approximately 3 clock cycles while L1 miss penalty is 72 clock cycles. Calculate the average memory access time.
Needed equations:
Average memory access time = Hit time + Miss rate x Miss penalty
Miss rate = no. of misses / total no. of accesses (this was found on Stack Overflow)
As I mentioned above, I found how to calculate the miss rate on Stack Overflow (I checked that question, but it does not answer my question). The problem is that I cannot see how to find the miss rate from the values given in this question.
What I have done up to now:
Average memory access time = 30% * (1 + 3% * 72) + 100% * (1 + M*72)
M - miss rate
What I need to find is M. (If I am correct up to now; if not, please tell me what I've messed up.)
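For comparison, here is a minimal worked calculation under the most common reading of such questions, namely that the stated 3% already is the per-access miss rate, so no separate M needs to be derived (this is an assumption about the paper's intent, not a confirmed fact):

```swift
// AMAT assuming the stated 3% already is the per-access miss rate.
let hitTime = 3.0       // L1 access time in clock cycles
let missRate = 0.03     // miss rate per memory access (assumed)
let missPenalty = 72.0  // L1 miss penalty in clock cycles

let amat = hitTime + missRate * missPenalty
print(amat)  // 5.16 clock cycles
```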
In order to simplify the question and hopefully the answer I will provide a somewhat simplified version of what I am trying to do.
Setting up fixed conditions:
Max Oxygen volume permitted in room = 100,000 units
Target Oxygen volume to maintain in room = 100,000 units
Maximum air processing cycles per second = 3.0 (minimum is 0.3)
Energy (watts) used per second is given by this formula: (100 W × cycles_per_second)²
Maximum Oxygen Added to Air per "cycle" = 100 units (minimum 0 units)
1 person consumes 10 units of O2 per second
Max occupancy of room is 100 people (1 person is the minimum)
Inputs are processed every cycle and outputs can be changed each cycle; however, if an output is fed back in as an input, it can only affect the next cycle.
Let's say I have these inputs:
A. current oxygen in room (range: 0 to 1000 units for simplicity - could be normalized)
B. current occupancy in room (0 to 100 people at max capacity) OR/AND could be changed to total O2 used by all people in room per second (0 to 1000 units per second)
C. current cycles per second of air processing (0.3 to 3.0 cycles per second)
D. Current energy used (which is the above current cycles per second * 100 and then squared)
E. Current Oxygen added to air per cycle (0 to 100 units)
(possible outputs fed back in as inputs?):
F. previous change to cycles per second (+ or - 0.0 to 0.1 cycles per second)
G. previous cycles O2 units added per cycle (from 0 to 100 units per cycle)
H. previous change to current occupancy maximum (0 to 100 persons)
Here are the actions (outputs) my program can take:
Change cycles per second by increment/decrement of (0.0 to 0.1 cycles per second)
Change O2 units added per cycle (from 0 to 100 units per cycle)
Change current occupancy maximum (0 to 100 persons) - (basically allowing for forced occupancy reduction and then allowing it to normalize back to maximum)
The GOALS of the program are to maintain homeostasis:
keep the O2 in the room as close to 100,000 units as possible
never allow the room to drop to 0 units of O2
allow for a current occupancy of up to 100 people per room for as long as possible without forcibly removing people (as O2 in the room is depleted over time and nears 0 units, people should be removed from the room down to the minimum, and the maximum should then be allowed to recover back up to 100 as more and more O2 is added back to the room)
and ideally use the minimum energy (watts) needed to maintain the above conditions. For instance, suppose the room is down to 90,000 units of O2 and there are currently 10 people in the room (using 100 units of O2 per second). Instead of running at 3.0 cycles per second (90 kW) with 100 units added per cycle, replenishing 300 units per second (a surplus of 200 units over the 100 being consumed) and closing the 10,000-unit deficit in 50 seconds for a total of 4,500 kJ of energy, it would be more ideal to run at, say, 2.0 cycles per second (40 kW), which produces 200 units per second (a surplus of 100 units over those consumed) for 100 seconds to replenish the 10,000-unit deficit, using a total of only 4,000 kJ.
NOTE: occupancy may fluctuate from second to second based on external factors that cannot be controlled (let's say people are coming and going from the room at liberty). The only control the system has is to forcibly remove people from the room and/or prevent new people from entering by changing the max capacity permitted at the next cycle in time (let's just say the system can do this). We don't want the system to impose a permanent reduction in capacity just because, at full power, it can only output enough O2 per second for 30 people. We have a large volume of available O2, and it would take a while before that was depleted to dangerous levels and the system was required to forcibly reduce capacity.
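To make these dynamics concrete, here is a minimal sketch of the per-second room model implied by the fixed conditions above (the type and property names are mine, and it assumes production and consumption are uniform within each second):

```swift
import Foundation

// Hypothetical per-second room model built from the fixed conditions above.
struct Room {
    var o2Units: Double          // current O2 in the room (target 100_000)
    var occupants: Int           // 0...100 people
    var cyclesPerSecond: Double  // 0.3...3.0
    var unitsPerCycle: Double    // 0...100 O2 units added per cycle

    // Energy draw in watts: (100 W * cycles per second) squared.
    var watts: Double { pow(100.0 * cyclesPerSecond, 2.0) }

    // Advance the simulation by one second.
    mutating func step() {
        let produced = cyclesPerSecond * unitsPerCycle  // O2 added this second
        let consumed = Double(occupants) * 10.0         // 10 units per person per second
        o2Units = max(0.0, o2Units + produced - consumed)
    }
}

var room = Room(o2Units: 90_000, occupants: 10, cyclesPerSecond: 2.0, unitsPerCycle: 100)
room.step()
print(room.o2Units, room.watts)  // 90100.0 40000.0
```

This matches the 2.0 cycles-per-second example in the GOALS: 200 units produced, 100 consumed, 40 kW drawn.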
My question:
Can someone explain how I might configure this neural network so it can learn from each action (cycle) it takes by monitoring for the desired results? My challenge is that most articles I find on the topic assume you know the correct output answer (e.g., if inputs A, B, C, D, E all have specific values, then output 1 should be to increase cycles per second by 0.1).
But what I want is to meet the conditions laid out in the GOALS above. So suppose that on one cycle the program decides to try increasing the cycles per second, and the result is that the available O2 is either declining by a smaller amount than in the previous cycle or is now increasing back towards 100,000; that output could then be considered more correct than reducing or maintaining the current cycles per second. I am simplifying here, since there are multiple variables that would create the "ideal" outcome, but I think I have made the point of what I am after. (A rough sketch of this scoring idea appears below, after the Code section.)
Code:
For this test exercise I am using a Swift library called Swift-AI (specifically its NeuralNet module): https://github.com/Swift-AI/NeuralNet
So if you want to tailor your response to that library, it would be helpful, but it is not required. I am mostly looking for the logic of how to set up the network and then configure it to do initial and iterative retraining of itself based on the conditions listed above. I would assume that at some point, after enough cycles and varied conditions, it would have the appropriate weights set up to handle any future condition, and retraining would become less and less impactful.
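In case it helps frame an answer, here is the rough sketch of the kind of per-cycle "success" score I imagine judging each action by, based on the GOALS above (this is entirely hypothetical and not part of Swift-AI; the weights are placeholders):

```swift
// Hypothetical per-cycle score: higher is better. Penalizes distance from
// the 100_000-unit O2 target, energy use, and forced occupancy reductions.
// The weights 10, 1 and 5 are placeholders that would need tuning.
func score(o2Units: Double, watts: Double, peopleRemoved: Int) -> Double {
    let o2Error = abs(100_000.0 - o2Units) / 100_000.0  // 0 when on target
    let energyCost = watts / 90_000.0                   // 1.0 at full power (3.0 cps)
    let removalCost = Double(peopleRemoved) / 100.0     // 1.0 if the room is emptied
    return -(10.0 * o2Error + 1.0 * energyCost + 5.0 * removalCost)
}
```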
This is a control problem, not a prediction problem, so you cannot just use a supervised learning algorithm. (As you noticed, you have no target values for learning directly via backpropagation.) You can still use a neural network (if you really insist); have a look at reinforcement learning. But if you already know what happens to the oxygen level when you take an action like forcing people out, why would you learn such simple facts through millions of trial-and-error evaluations instead of encoding them into a model?
I suggest looking at model predictive control. If nothing else, you should study how the problem is framed there. Or maybe even just plain old PID control. It seems really easy to build a good dynamical model of this process with a few state variables.
You may have a few unknown parameters in that model that you need to learn "online", but a simple PID controller can already tolerate and compensate for some amount of uncertainty, and it is much easier to fine-tune a few parameters than to learn the general cause-and-effect structure from scratch. The latter can be done, but it involves trying all possible actions. For all your algorithm knows, the best action might be to reduce the number of oxygen consumers to zero permanently by killing them, and then collect a huge reward for maintaining the oxygen level with little energy. When the algorithm knows nothing about the problem, it has to try everything out to discover the effects.
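To make the PID suggestion concrete, here is a minimal sketch of a plain PID loop on the O2 error (not Swift-AI; the gains kp, ki, kd are placeholders that would need tuning against a model or the real process):

```swift
// Minimal PID controller on the O2 error (target minus current).
struct PID {
    var kp = 0.001, ki = 0.0001, kd = 0.0005  // hypothetical gains
    var integral = 0.0
    var lastError = 0.0

    // dt = seconds per control cycle.
    mutating func update(error: Double, dt: Double) -> Double {
        integral += error * dt
        let derivative = (error - lastError) / dt
        lastError = error
        return kp * error + ki * integral + kd * derivative
    }
}

var pid = PID()
var cyclesPerSecond = 1.0
let raw = pid.update(error: 100_000.0 - 90_000.0, dt: 1.0)
// Clamp to the actuator limits from the question: at most a +/-0.1 change
// per cycle, and 0.3...3.0 cycles per second overall.
let delta = min(0.1, max(-0.1, raw))
cyclesPerSecond = min(3.0, max(0.3, cyclesPerSecond + delta))
```

The clamping encodes the actuator limits directly, which is exactly the kind of hard knowledge you would otherwise force a learner to rediscover by trial and error.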
This may be better posted on Mathematics, but I figured someone on Stack Overflow may have seen this before. I am trying to devise an equation for determining the average data transfer speed for backup appliances that offsite their data to a data center.
On weekdays during the 8:00a-5:00p hours (9 of the day's 24 hours, i.e. 3/8 of the day), the connection is throttled to 20% of the measured bandwidth. For the remaining 15 hours of the weekday (5:00p-8:00a), the connection is throttled to 80% of the measured bandwidth. On the weekend, from Friday 5:00p until Monday 8:00a, the connection is a constant 80% of the measured bandwidth.
The reason behind this is deciding whether to seed the data onto a hard drive versus letting it transfer over the internet. Making this decision depends on getting a reasonably accurate bandwidth average so that I can calculate the transfer time.
I had issues coming up with an equation, so I reverse-engineered a few real-world occurrences using just the weekday 80%/20% split. I came up with 57.5% of the measured bandwidth but could not extrapolate an equation from it. Now I want to write a program to determine this. I am thinking that factoring in the weekend being 80% the whole time would use a similar equation.
This would be a similar scenario to a car travelling at 20% of the speed limit for part of the day and at 80% of the speed limit for the rest of it, and then determining the car's average speed for the day. I searched online and could not find any reference to an equation for this. Any ideas?
Using the idea you provided, the equation is direct:
Average = (9/24) × bandwidth_1 + (15/24) × bandwidth_2
If bandwidth_1 = 20 and bandwidth_2 = 80, the equation gives 57.5%, which matches the weekday average you reverse-engineered.
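Extending that idea to the full week in the question (assuming exactly 9 throttled hours on each of the 5 weekdays, i.e. 45 of the week's 168 hours at 20% and the remaining 123 hours at 80%):

```swift
// Time-weighted weekly average of the throttled bandwidth.
let hoursPerWeek = 168.0
let throttledHours = 5.0 * 9.0                 // weekdays 8:00a-5:00p at 20%
let fullHours = hoursPerWeek - throttledHours  // all remaining hours at 80%

let weeklyAverage = (throttledHours * 0.20 + fullHours * 0.80) / hoursPerWeek
print(weeklyAverage)  // ~0.639, i.e. about 63.9% of the measured bandwidth

// Transfer time for a given backup then follows directly:
func transferSeconds(bytes: Double, measuredBytesPerSecond: Double) -> Double {
    return bytes / (measuredBytesPerSecond * weeklyAverage)
}
```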
I am a student taking an operating systems course for the first time. I have a doubt about the performance-degradation calculation used with demand paging. In the Silberschatz book on operating systems, the following lines appear:
"If we take an average page-fault service time of 8 milliseconds and a
memory-access time of 200 nanoseconds, then the effective access time in
nanoseconds is
effective access time = (1 - p) x (200) + p (8 milliseconds)
= (1 - p) x 200 + p x 8,000,000
= 200 + 7,999,800 x p.
We see, then, that the effective access time is directly proportional to the
page-fault rate. If one access out of 1,000 causes a page fault, the effective
access time is 8.2 microseconds. The computer will be slowed down by a factor
of 40 because of demand paging! "
How did they calculate the slowdown here? Are 'performance degradation' and 'slowdown' the same thing?
This whole thing is nonsensical. It assumes a fixed page-fault rate p, which is not realistic in itself. That rate is the fraction of memory accesses that result in a page fault.
1 - p is the fraction of memory accesses that do not result in a page fault.
T = (1 - p) x 200 ns + p x 8 ms is then the average time of a memory access.
Expanded:
T = 200 ns + p x (8 ms - 200 ns)
T = 200 ns + p x 7,999,800 ns
The whole thing is rather silly.
All you really need to know is that a nanosecond is a billionth of a second and a millisecond is a thousandth of a second, so the 8 ms page-fault service time is 8,000,000 ns, a factor of 40,000 longer than the 200 ns memory access.
With one fault per 1,000 accesses (p = 0.001), T = 200 + 7,999,800 x 0.001 = 8,199.8 ns, about 8.2 microseconds. The slowdown is that effective time divided by the 200 ns unfaulted time: 8,199.8 / 200 ≈ 41, which the book rounds to "a factor of 40". And yes, 'performance degradation' and 'slowdown' refer to the same ratio.
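A worked check of the quoted figures (a hypothetical snippet, just to show the arithmetic):

```swift
// Effective access time and slowdown for the figures quoted from the book.
let memoryNs = 200.0        // memory access time in nanoseconds
let faultNs = 8_000_000.0   // page-fault service time: 8 ms in nanoseconds
let p = 0.001               // one fault per 1,000 accesses

let eat = (1.0 - p) * memoryNs + p * faultNs
print(eat)              // 8199.8 ns, i.e. about 8.2 microseconds
print(eat / memoryNs)   // ~41: the book's "slowed down by a factor of 40"
```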
I would like to know whether I solved the equation below correctly.
Find the average memory access time for a processor with a 3 ns clock cycle time, a miss penalty of 40 clock cycles, a miss rate of 0.08 misses per instruction, and a cache access time of 1 clock cycle.
AMAT = Hit Time + Miss Rate * Miss Penalty
Hit Time = 3ns, Miss Penalty = 40ns, Miss Rate = 0.08
AMAT = 3 + 0.08 * 40 = 6.2ns
Check the "Miss Penalty". Be more careful to avoid trivial mistakes.
The question that you tried to answer cannot actually be answered, since you are given 0.08 misses per instruction but you don't know the average number of memory accesses per instruction. In an extreme case, if only 8 percent of instructions accessed memory, then every memory access would be a miss.
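For reference, here is the calculation the first answer is hinting at, under the assumption (flagged in the answer above) that the 0.08 is treated as a per-access miss rate:

```swift
// AMAT with the miss penalty converted from cycles to time, treating 0.08
// as a per-access miss rate (an assumption; see the caveat above about
// misses per instruction vs. misses per memory access).
let cycleNs = 3.0
let hitTimeNs = 1.0 * cycleNs        // cache access time: 1 clock cycle
let missPenaltyNs = 40.0 * cycleNs   // 40 clock cycles = 120 ns, not 40 ns
let missRate = 0.08

let amatNs = hitTimeNs + missRate * missPenaltyNs
print(amatNs)  // 12.6 ns
```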
I have been going over a previous exam for my computer architecture course on which I got an incorrect answer. How could I calculate the best possible speedup?
I understand there's a limit to how much a program can be sped up; I'm just unsure of the formula (the problem is part b). Any help will be upvoted and very much appreciated, thanks!
(6 points) To accelerate an application, two enhancements with the following speedups are proposed:
Speedup1 = 25
Speedup2 = 15
Enhancement 1 is usable for 40% of the instructions, and enhancement 2 is usable for 30% of the instructions. The two enhancements do not overlap.
a) What is the speedup if both enhancements are applied?
b) If you keep improving these two enhancements, what is the best speedup you can reach?
Rather than trying to memorize a formula, use common sense. Imagine that both portions of speed-up-able code could be sped up infinitely: that is, made to take no time at all. What would be left? How much time would it take?
Let t be the total run time. Then:
(a) Although you did not ask about this part, I am giving a full solution for future readers.
t' = modified run time = 0.4t / 25 + 0.3t / 15 + 0.3t = 0.336t
Thus, speedup = t / t' = t / 0.336t ≈ 2.98
(b) The question asks about improving THESE two enhancements further, so you cannot speed up the rest of the program. The best speedup you can reach is then, according to Amdahl's law, bounded by the sequential, unimprovable part: the maximum speedup is 1 / sequential_fraction. What is the sequential part in your case? Make sure you understand why.
The idea of Amdahl's law is that even if you could speed the improved parts up infinitely, the total speedup would still be bounded by the non-improved part.
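A quick numeric check of both parts (the 0.3 fraction untouched by either enhancement is the bound the answer is pointing at):

```swift
// Amdahl's law check for both parts of the question.
let f1 = 0.4, s1 = 25.0   // enhancement 1: applicable fraction and speedup
let f2 = 0.3, s2 = 15.0   // enhancement 2: applicable fraction and speedup
let rest = 1.0 - f1 - f2  // 0.3 of the run time is untouched by either

// (a) Both enhancements applied as given.
let speedupA = 1.0 / (f1 / s1 + f2 / s2 + rest)
print(speedupA)  // ~2.98

// (b) Let both enhancement speedups grow without bound.
let speedupB = 1.0 / rest
print(speedupB)  // ~3.33
```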