Capacity Provisioning for Server Farms (Markov Chain Queues)

Suppose we are using an M/M/N model. How many servers would we need to keep the probability that an arriving job has to wait below 0.2, given that jobs arrive at a rate of 400/second and the processing times are exponentially distributed with a mean of 1 second?
So I used Erlang's C-formula:
P[queueing] = ((lambda/mu)^c / c!) * (1/(1 - rho)) * pi_0
And got an answer of 4 servers. However, when I used the model they showed in the textbook:
rho = lambda/(k*mu) < 1  =>  k > lambda/mu
I get k = 400 servers. I'm not sure which is correct.
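For reference, the Erlang C probability can be checked numerically. Below is a minimal Python sketch; the function name, the recurrence-based implementation, and the upward search from the stability bound are my own choices rather than anything from the textbook:

def erlang_c(lmbda, mu, c):
    """Probability that an arriving job has to wait in an M/M/c queue."""
    a = lmbda / mu                      # offered load, lambda/mu
    rho = a / c                         # per-server utilisation
    if rho >= 1.0:
        return 1.0                      # unstable queue: effectively every arrival waits
    b = 1.0                             # Erlang B with 0 servers
    for k in range(1, c + 1):
        b = a * b / (k + a * b)         # Erlang B recurrence (avoids huge factorials)
    return b / (1.0 - rho * (1.0 - b))  # convert Erlang B to Erlang C

lmbda, mu = 400.0, 1.0                  # 400 jobs/s, mean service time 1 s
c = 401                                 # stability alone already forces c > lambda/mu = 400
while erlang_c(lmbda, mu, c) >= 0.2:
    c += 1
print(c, erlang_c(lmbda, mu, c))        # smallest c with waiting probability below 0.2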


Reciprocal cost allocation between units servicing each other (typical managerial accounting problem) in T-SQL

I am desperately searching for an efficient way, if there is one, to solve a recursive task in T-SQL. I could model it successfully in Excel and on paper with an iterative solution, as many CMAs would for a small example: re-allocating shares of cost between pairs of support units that service each other, and iterating until the balancing unit's unallocated cost leftover shrinks to a reasonably small number, at which point the iteration/recursion stops.
Now I am trying to find a good, scalable solution (or at least a feasible approach to one) for achieving the same in T-SQL for this typical computational task in the managerial accounting area: several internal support units service each other (and incur periodic costs, like salary etc.) while together producing, let's say, 2 or 3 final products as a firm. At the end of the costing exercise, their respective shares of internally generated support overhead need to be reasonably allocated to these products' costs, according to some physical allocation base, say man-hours spent on each.
It would be quite simple if there were no reciprocal services: one support unit provides some service to other support units during the period (and its costs need to be allocated alongside this service-quantity flow), and the second and third support units do the same for their support peers, before all their costs get properly buried into production costs and spread between the respective products they jointly serviced (not equally across support units; I'm using an activity-based-costing approach here). And in a real case there can be many more than just the 2-3 units one could manually solve in Excel or on paper. So it really needs an algorithm with dynamic parameters (X support units servicing X-1 peers and Y products serviced in the period, driven by some quantity-measure/percentage square-matrix allocation table) to spread their periodic cost to one unit of each product at the end. Preferably natively in SQL, without using external .NET or other assembly references.
Some numeric example:
each of 3 support units A,B,C incurred $100, $200, $300 of expenses in the period and worked 50 man hrs each, respectively
A-unit serviced B-unit for 10 hrs and C-unit for 5 hrs, B-unit serviced A-unit for 5 hrs, C-unit serviced A-unit for 3 hrs and B-unit for 10 hrs
The rest of the support units' work time (A-unit 35 hrs: 30% for P1 and 70% for P2; B-unit 45 hrs: 35% for P1 and 65% for P2; C-unit 37 hrs: 100% for P2) was spent servicing the output of the two products (P1 and P2). This portion of their direct time/effort allocates easily to the products, but because of the reciprocal services between support units, the share of each unit's cost that is shifted to a product's cost pool will not equal its direct time-to-product allocation (it needs an adjusted mix coefficient for the step-2 effects).
I could solve this in Excel with an iterative algorithm and VBA arrays:
(a) vector of period costs by each support unit (to finally reallocate to products and leave 0),
(b) 2dim array/matrix of coefficients of self-service between support units (based on man hrs - one to another),
(c) 2dim array/matrix of direct hrs service for each product by support units,
(d) minimal tolerable error of $1 (leftover of unallocated cost in a unit to stop iteration)
For just 2 or 3 elements (while still manually provable on paper) this is a feasible approach, but it becomes impossible to manually verify the correctness of the solution once I have 10-20+ support units and many products in the matrix; and I want to switch from Excel and VBA to MS SQL Server and T-SQL for other reasons.
Since this business case is not new at all, I was hoping more experienced colleagues could offer advice on how best to solve it; I believe this task must have been solved before (and not only in a pure programming environment/external code).
I am thinking of combining recursive CTEs, table variables, and aggregate window functions, but I am hesitating/struggling over how exactly to put all the puzzle pieces together so that it is truly scalable for my potentially growing unit/product matrix dimensions.
For my current level it's a little mind-blowing, so I'd be grateful for any advice.
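To make the iteration concrete before porting it to T-SQL, here is a minimal Python sketch of the pass-based re-allocation described in (a)-(d) above, using the numbers from the example. The dictionary layout, names, and the loop itself are my own illustration of the algorithm, not a T-SQL answer:

units = ["A", "B", "C"]
products = ["P1", "P2"]

cost = {"A": 100.0, "B": 200.0, "C": 300.0}           # (a) period cost per support unit
hours_worked = {"A": 50.0, "B": 50.0, "C": 50.0}

# (b) reciprocal service hours: service_hrs[giver][receiver]
service_hrs = {"A": {"B": 10.0, "C": 5.0},
               "B": {"A": 5.0},
               "C": {"A": 3.0, "B": 10.0}}

# (c) direct hours to products (split by the percentages in the example)
direct_hrs = {"A": {"P1": 35.0 * 0.30, "P2": 35.0 * 0.70},
              "B": {"P1": 45.0 * 0.35, "P2": 45.0 * 0.65},
              "C": {"P2": 37.0}}

TOLERANCE = 1.0                                        # (d) stop when every leftover is < $1
product_pool = {p: 0.0 for p in products}

while max(cost.values()) > TOLERANCE:
    next_cost = {u: 0.0 for u in units}
    for u in units:
        pool, total = cost[u], hours_worked[u]
        for peer, hrs in service_hrs.get(u, {}).items():
            next_cost[peer] += pool * hrs / total      # re-allocated again in the next pass
        for p, hrs in direct_hrs.get(u, {}).items():
            product_pool[p] += pool * hrs / total      # lands in the product cost pool
    cost = next_cost

print(product_pool)   # cost allocated to each product; less than $1 per unit remains unallocated

The same pass could plausibly be expressed set-based in T-SQL: a (giver, receiver, hours) table joined to the current cost-per-unit rows produces the next pass, and either a WHILE loop or a recursive CTE repeats it until the remaining support-unit cost drops below the tolerance; I have not benchmarked which scales better.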

How can a Neural Network learn from testing outputs against external conditions which it can not directly control

In order to simplify the question and hopefully the answer I will provide a somewhat simplified version of what I am trying to do.
Setting up fixed conditions:
Max Oxygen volume permitted in room = 100,000 units
Target Oxygen volume to maintain in room = 100,000 units
Maximum air processing cycles per sec = 3.0 cycles per second (min is 0.3)
Energy (watts) used per second is given by this formula: (100 W * cycles_per_second)^2
Maximum Oxygen Added to Air per "cycle" = 100 units (minimum 0 units)
1 person consumes 10 units of O2 per second
Max occupancy of room is 100 person (1 person is min)
inputs are processed every cycle and outputs can be changed each cycle - however if an output is fed back in as an input it could only affect the next cycle.
Lets say I have these inputs:
A. current oxygen in room (range: 0 to 1000 units for simplicity - could be normalized)
B. current occupancy in room (0 to 100 people at max capacity) OR/AND could be changed to total O2 used by all people in room per second (0 to 1000 units per second)
C. current cycles per second of air processing (0.3 to 3.0 cycles per second)
D. Current energy used (which is the above current cycles per second * 100 and then squared)
E. Current Oxygen added to air per cycle (0 to 100 units)
(possible outputs fed back in as inputs?):
F. previous change to cycles per second (+ or - 0.0 to 0.1 cycles per second)
G. previous cycles O2 units added per cycle (from 0 to 100 units per cycle)
H. previous change to current occupancy maximum (0 to 100 persons)
Here are the actions (outputs) my program can take:
Change cycles per second by increment/decrement of (0.0 to 0.1 cycles per second)
Change O2 units added per cycle (from 0 to 100 units per cycle)
Change current occupancy maximum (0 to 100 persons) - (basically allowing for forced occupancy reduction and then allowing it to normalize back to maximum)
The GOALS of the program are to maintain a homeostasis of:
as close to 100,000 units of O2 in the room as possible
never allow the room to drop to 0 units of O2
allow for a current occupancy of up to 100 people per room for as long as possible without forcibly removing people (as the O2 in the room is depleted over time and nears 0 units, people should be removed from the room down to the minimum, and then the maximum should be allowed to recover back up to 100 as more and more O2 is added back into the room)
and ideally use the minimum energy (watts) needed to maintain the above two conditions. For instance, if the room was down to 90,000 units of O2 and there are currently 10 people in the room (using 100 units of O2 per second), then instead of running at 3.0 cycles per second (90 kW) with 100 units added per cycle, replenishing 300 units per second in total (a surplus of 200 units over the 100 being consumed) for 50 seconds to make up the deficit of 10,000 units at a total of 4500 kW·s used, it would be more ideal to run at, say, 2.0 cycles per second (40 kW), which would produce 200 units per second (a surplus of 100 units over the consumed units) for 100 seconds to make up the deficit of 10,000 units, using a total of only 4000 kW·s.
NOTE: occupancy may fluctuate from second to second based on external factors that cannot be controlled (let's say people are coming and going into the room at liberty). The only control the system has is to forcibly remove people from the room and/or prevent new people from coming into the room by changing the max capacity permitted at the next cycle in time (let's just say the system can do this). We don't want the system to impose a permanent reduction in capacity just because it can only support outputting enough O2 per second for 30 people running at full power. We have a large volume of available O2, and it would take a while before it was depleted to dangerous levels and the system had to forcibly reduce capacity.
My question:
Can someone explain to me how I might configure this neural network so it can learn from each action (cycle) it takes by monitoring for the desired results? My challenge here is that most articles I find on the topic assume that you know the correct output answer (i.e.: if I know inputs A, B, C, D, E all have specific values, then Output 1 should be to increase by 0.1 cycles per second).
But what I want is to meet the conditions I laid out in the GOALS above. So, each time the program does a cycle, let's say it decides to try increasing the cycles per second, and the result is that the available O2 is either declining by a smaller amount than it was the previous cycle or is now increasing back towards 100,000; then that output could be considered more correct than reducing or maintaining the current cycles per second. I am simplifying here, since there are multiple variables that would create the "ideal" outcome, but I think I've made the point of what I am after.
Code:
For this test exercise I am using a Swift library called Swift-AI (specifically its NeuralNet module: https://github.com/Swift-AI/NeuralNet).
So if you want to tailor your response in relation to that library it would be helpful, but not required. I am more just looking for the logic of how to set up the network and then configure it to do initial and iterative re-training of itself based on the conditions I listed above. I would assume that at some point, after enough cycles and different conditions, it would have the appropriate weightings set up to handle any future condition, and re-training would become less and less impactful.
This is a control problem, not a prediction problem, so you cannot just use a supervised learning algorithm. (As you noticed, you have no target values for learning directly via backpropagation.) You can still use a neural network (if you really insist). Have a look at reinforcement learning. But if you already know what happens to the oxygen level when you take an action like forcing people out, why would you learn such simple facts by millions of trial-and-error evaluations, instead of encoding them into a model?
I suggest looking at model predictive control. If nothing else, you should study how the problem is framed there. Or maybe even just plain old PID control. It seems really easy to make a good dynamical model of this process with a few state variables.
You may have a few unknown parameters in that model that you need to learn "online". But a simple PID controller can already tolerate and compensate for some amount of uncertainty. And it is much easier to fine-tune a few parameters than to learn the general cause-effect structure from scratch. That can be done, but it involves trying all possible actions. For all your algorithm knows, the best action might be to reduce the number of oxygen consumers to zero permanently by killing them, and then collect a huge reward for maintaining the oxygen level with little energy. When the algorithm knows nothing about the problem, it has to try everything out to discover the effects.
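To make the PID suggestion concrete, here is a minimal Python sketch of a proportional-derivative loop driving cycles-per-second from the O2 error, with a toy plant built from the numbers in the question. The gains, the 300-tick horizon, and the plant model are illustrative assumptions, not a tuned controller:

TARGET_O2 = 100_000.0            # target O2 units in the room
O2_PER_CYCLE = 100.0             # units of O2 added per processing cycle
O2_PER_PERSON = 10.0             # units consumed per person per second

KP, KI, KD = 0.0005, 0.0, 0.005  # placeholder gains; KI left at zero to avoid wind-up in this toy run

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

o2 = 90_000.0                    # start with the 10,000-unit deficit from the example
occupancy = 10
cycles_per_sec = 0.3
integral, prev_error = 0.0, 0.0

for _ in range(300):             # simulate 300 one-second ticks
    error = TARGET_O2 - o2
    integral += error
    derivative = error - prev_error
    prev_error = error

    # controller output, interpreted as the change in cycles/sec (limited to +/-0.1 per tick)
    delta = clamp(KP * error + KI * integral + KD * derivative, -0.1, 0.1)
    cycles_per_sec = clamp(cycles_per_sec + delta, 0.3, 3.0)

    # toy plant: O2 added by the air processor minus O2 consumed by the occupants
    o2 += cycles_per_sec * O2_PER_CYCLE - occupancy * O2_PER_PERSON

energy_watts = (100.0 * cycles_per_sec) ** 2
print(round(o2), round(cycles_per_sec, 2), round(energy_watts))

A reinforcement-learning agent would instead have to discover this mapping from a reward built out of the stated GOALS; the sketch above simply exploits the fact that the plant dynamics are already known.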

How to guarantee that all nodes get infected in gossip-based protocols?

In gossip-based protocols, how do we guarantee that all nodes get infected by the message?
If we select a random number of nodes and send a message to these nodes, and these nodes do the same, there is a probability that some node will not receive the message.
Although I couldn't calculate it, it seems small. However, if the system runs for a long time, at some point one node will be unlucky and will be left out.
It's a bit hard to answer, due to two reasons:
There isn't really a single gossip-based protocol; at most, there are families of gossip-based algorithms.
The algorithms actually guarantee infection only under specific assumptions. E.g., if, as you put it, "the system is running for a long time" and any given link fails permanently under some exponential process (a very likely scenario), then with probability 1 some node will become completely isolated, and no protocol can overcome that.
However, IIUC, you're asking about a protocol with the following assumptions:
For any group V' ⊂ V of nodes, there is an active link u → v with u ∈ V' and v ∈ V \ V'.
Each node chooses uniformly d of its neighbors at each step, irrespective of their state, choices made by other nodes, total update state, etc.
Under these conditions, the problem you raised will have probability 0.
You can think about the infection as a Markov chain where the system is in state i if i nodes are infected. Suppose some change originated at some s ∈ V, so the system starts in state 1.
By property 1., there is a link from the i infected nodes to one of the n - i others.
By property 2., the probability of selecting this link is at least 1/n. This is because the node whose link happens to cross the cut has at most n neighbors, but at least one neighbor across the cut. Even if its selection is entirely stateless and uninformed, that is the chance that it will choose this neighbor.
Therefore, the probability that this will not happen for j steps is (1 - d/n)^j. Using the union bound, the probability that this will happen for any state i is at most n(1 - 1/n)^j. Take j = n^2, and this becomes n e^(-n); take j = n^3, and this becomes n e^(-n^2). Etc.
(Of course, gossip algorithm infection happens much sooner; this is an upper bound for the worst-possible conditions.)
So, if the system runs long enough, the probability that some node does not become infected decreases to 0 (very quickly). For Anti-Entropy Gossip Protocols, this is enough. For some other protocols, as you suspected, there is a chance that some node will be missed for some update.
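If you want to see that convergence empirically, here is a tiny push-gossip simulation under the simplest assumptions (complete graph, d = 1, every infected node pushes to one uniformly random node per round); the function name and parameters are my own:

import random

def rounds_to_full_infection(n, seed=None):
    rng = random.Random(seed)
    infected = {0}                          # the update originates at node 0
    rounds = 0
    while len(infected) < n:
        rounds += 1
        for u in list(infected):
            infected.add(rng.randrange(n))  # u pushes the update to a random node
    return rounds

# Average over a few runs. A node may push to itself or to an already-infected
# node, which just wastes that attempt -- exactly the slack the bound above allows for.
print(sum(rounds_to_full_infection(1024) for _ in range(20)) / 20)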
We can't provide an answer because you haven't fully defined your problem (hence the question is ambiguous):
The topology of the network is unknown, but the answer depends on it.
What's the stop condition of the algorithm? Does it stop or not?
Suppose that a given node is connected to all the other nodes (that's the topology) and that each node performs the same action when it receives a message.
You could simplify your problem into smaller sub-problems (that's the divide-et-impera approach): imagine that each node performs just one attempt (i.e. i = 1).
Since every node picks the receiver completely at random, and since this operation is repeated indefinitely, eventually all the nodes will receive the message. How many iterations are required to reach a given confidence (ratio of nodes which received the message to the total number of nodes) is up to you.
Once you get this, including the repeated attempts i is straightforward.
I made a little simulation of what you're trying to do. http://jsfiddle.net/ut78sega/
function gossip(nodes, tries, startNode, reached) {
    // Each stack entry is a pair of values pushed in order: node, then remaining tries.
    var stack = [startNode, tries];
    while (stack.length > 0) {
        var ttl = stack.pop();   // remaining tries for this node
        var n = stack.pop();     // the node itself
        reached[n] = 1;          // mark the node as reached
        if (ttl <= 0) { continue; }
        // Forward the message to ttl randomly chosen nodes, each with one try fewer.
        for (var i = 0; i < ttl; i++) {
            stack.push(Math.floor(Math.random() * nodes), ttl - 1);
        }
    }
    return reached;
}
nodes - the total number of nodes
tries - the starting amount of random selections
startNode - the node that gets the first message
reached - a hash set of nodes that were reached by the current simulation
At each level of the recursion the number of tries is decreased by one. It takes ~9 starting tries to get 100% coverage of 65536 (2^16) nodes.

Calculate the performance of a multicore architecture?

Consider a multicore architecture with 10 computing cores: 2 processor cores and 8 coprocessors. Each processor core can deliver 2.0 GFlops, while each coprocessor can deliver 1.0 GFlops. All computing cores can perform calculations simultaneously. Any instruction can execute on either processor or coprocessor cores unless there are explicit restrictions.
If 70% of dynamic instructions in an application are parallelizable, what is the maximum average performance (Flops) you can get in the optimal situation? Please note that the remaining 30% instructions can be executed only after the execution of the parallel 70% is over.
Consider another application where all the dynamic instructions can be partitioned into 6 groups (A, B, C, D, E, F) with the following dependency. For example, A --> C implies that all the instructions in A need to be completed before starting the execution of instructions in C. Each of the first four groups (A, B, C and D) contains 20% of the dynamic instructions whereas each of the remaining two groups (E and F) contains 10% of the dynamic instructions. All the instructions in each group must be executed sequentially on the same processor or coprocessor core. How to schedule them on the multicore architecture to achieve the best possible performance? What is the maximum average performance (Flops) now?
A(20%) --> C(20%) -->
                      E(10%) --> F(10%)
B(20%) --> D(20%) -->
For the first part, you need to use Amdahl's Law, which is:
max speed-up = 1/(1-p+p/n)
where p is the parallelizable part. n is the improvement factor in executing the parallel portion.
(Note that the Amdahl's Law formula can be used for first order estimates on other types of changes. E.g., given a factor of N reduction in ALU energy use and P fraction of energy used by the ALU, one can find the improvement in total energy use.)
In your case, since the serial portion would be executed on the higher performance (2 GFLOPS) processor core, n is 6 ([8 coprocessor cores * 1 GFLOPS/core + 2 processor cores * 2 GFLOPS/core]/ 2 GFLOPS/processor core).
A quick calculation shows the maximum speed-up: 1/(1 - 0.7 + 0.7/6) = 1/(0.3 + 0.1167) ≈ 2.4 relative to 1 processor core. The maximum FLOPS would therefore be the speed-up times the speed if the whole program were executed serially on one processor core, i.e., 2.4 * 2 GFLOPS = 4.8 GFLOPS.
For the second part, note that initially there are two independent instruction streams: A -> C and B -> D. Since the system has two processor cores, both streams can be executed in parallel on the higher-performance processor cores. Furthermore, both have the same amount of work (40% of the total for each stream), so on same-performance cores they will complete at the same time.
Since E depends on results from both C and D, it must be started after both finish. E and F would execute on a processor core (which core is arbitrary since E must wait for the tasks running on both processor cores to complete).
As you can see, 80% of the program (40% for A+C; 40% for B+D) can be parallelized by a factor of 2 and 20% of the program (E+F) is serial. You can then just plug the numbers into Amdahl's Law formula (p=0.8, n=2).
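As a quick sanity check of both parts (the resulting figure for the second application is my own arithmetic from the p = 0.8, n = 2 values above), a short Python sketch:

def speedup(p, n):
    """Amdahl's law: 1 / (1 - p + p/n)."""
    return 1.0 / ((1.0 - p) + p / n)

base_gflops = 2.0                        # serial execution on one 2 GFLOPS processor core
print(speedup(0.7, 6) * base_gflops)     # part 1: 2.4 * 2 = 4.8 GFLOPS
print(speedup(0.8, 2) * base_gflops)     # part 2: ~1.67 * 2 = ~3.33 GFLOPS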

Amdahl's law example

Can someone help me with this example please and show me how to work the second part?
The question is:
If one third of a weather prediction algorithm is inherently serial and the remainder parallelizable, what is the minimum number of cores needed to guarantee a 150% speedup over a single-core implementation?
ii. Your boss revises the figure to 200%. What is your new answer?
Thanks very much in advance !!
Guess: If the algorithm is 1/3 serial and 2/3 parallel...I would think that each core you added would give you a 66% increase in performance...So for 150% increase, you'd need 3 more cores, and for a 200% increase, you'd need 4.
This is a guess. Your textbook might be more helpful :)
If the algorithm runs on a single core and takes 90 minutes then 30 minutes is for the serial part and 60 minutes for the parallel part.
Add a CPU:
30 minutes is for the serial part and 30 minutes for the parallel part (the 60 minutes of parallel work is split between the two cores).
90 / 60 = 1.5, i.e. a 150% speedup.
I am a bit late, but here are the answers:
1) 150% speedup -> at least 2 cores required, as dbasnett said;
2) 200% speedup -> at least 4 cores required, based on Amdahl's law:
Here, 90 minutes overall are required to perform the calculation. P is the enhanced part of the algorithm (the parallelizable part), which is 2/3 of the 90 minutes, and N is the number of cores, so with one core only:
speedup = 1 / ((1 - P) + P/N) = 1 / ((1 - 2/3) + (2/3)/1) = 1
You get 1, which means 100%, which is how the algorithm performs the standard way (without multi-core acceleration and therefore no parallelization speedup).
Now, we must find the number of cores N for which the previous equation equals 2, where 2 means that the algorithm performs in half the time (45 minutes instead of 90 when there's no parallelization) and therefore gives a 200% speedup:
1 / ((1 - 2/3) + (2/3)/N) = 2
Since:
(1 - 2/3) + (2/3)/N = 1/2   =>   (2/3)/N = 1/2 - 1/3 = 1/6
We see that:
N = (2/3) / (1/6) = 4
So with 4 cores computing the parallel 2/3 of the algorithm you get a 200% speedup. The same goes for 150%: you will get 2, as dbasnett already told you.
Pretty simple.
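If you want to check such results programmatically, here is a small Python helper (my own sketch) that searches for the smallest core count reaching a target speedup for a given parallelizable fraction p:

def min_cores(p, target):
    """Smallest N with 1 / ((1 - p) + p/N) >= target.
    The target must be below the asymptotic limit 1 / (1 - p)."""
    n = 1
    while 1.0 / ((1.0 - p) + p / n) < target:
        n += 1
    return n

print(min_cores(2/3, 1.5))   # -> 2 cores for a 150% speedup
print(min_cores(2/3, 2.0))   # -> 4 cores for a 200% speedup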
Note that a complex algorithm may imply further divisions of its parallelizable parts (and in theory you can have a different number of processing units per parallelizable part running concurrently).
You can further look at Wikipedia (there's also an example):
http://en.wikipedia.org/wiki/Amdahl%27s_law#Description
Anyway, the principle is the same:
Let T be the time an algorithm needs in order to complete, A the time of its serial part, B the time of its parallelizable part, and N the number of parallel CPUs. You can divide B into further small sections and perform the calculation on each part:
T = A + B = A + (C + D + G),   and the parallelized execution time becomes   A + C/N + D/N + G/N
For C, D, G you may, e.g., adopt M CPUs instead of N (the speedup will of course differ if M != N).
And in the end, you will arrive at a point where having more CPUs doesn't matter anymore, since:
A + C/N + D/N + G/N -> A   as   N -> infinity
And your algorithm's speedup will at most tend to the total execution time (T) divided by the execution time of the serial part only (A):
speedup -> T / A
Therefore, parallel computation really comes in handy only when the execution time of the serial part of your algorithm is low.