Lottery Ticket Hypothesis - Iterative Pruning - neural-network

I was reading about The Lottery Ticket Hypothesis and it was mentioned in the paper:
we focus on iterative pruning, which repeatedly trains, prunes, and
resets the network over n rounds; each round prunes (p^(1/n))% of the
weights that survive the previous round.
Can someone please explain this with numbers for each round, say when n = 5 (rounds) and the desired final sparsity (p) = 70%?
In this example, the numbers I computed are as follows:
Round (p^(1/n))% of weights pruned
1 0.93114999
2 0.86704016
3 0.80734437
4 0.75175864
5 0.7
According to these calculations, the first round prunes approximately 93.11% of the weights, whereas the fifth round prunes 70%. It's as if the percentage of weights being pruned decreases as the rounds progress.
What am I doing wrong?
Thanks!

You are using p^(1/n) with a growing exponent. n is fixed (n = 5 for every round), so p^(1/n) = 0.7^(1/5) ≈ 0.9311 is one constant, applied in every round. The numbers you computed are p^(k/n) for k = 1..5, i.e. the value compounded over the first k rounds, not the fraction handled in round k alone. Every round applies the same per-round rate; it is only the cumulative product that decreases toward p.
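A quick numeric check (using the question's p = 0.7 and n = 5) shows that p^(1/n) is one constant and that the table in the question is the compounded value p^(k/n) after k rounds:

```python
p, n = 0.70, 5          # final value and number of rounds, as in the question
r = p ** (1 / n)        # per-round factor: constant, ~0.93115

value = 1.0
for k in range(1, n + 1):
    value *= r          # compound the same factor each round
    print(k, round(value, 8), round(p ** (k / n), 8))  # compare with p ** (k/n)
```

The second column reproduces the asker's table (0.93114999, 0.86704016, ..., 0.7) even though the per-round factor never changes.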

Related

Not getting the expected proportion of epidemics of size 1 using fast_SIR

I used fast_SIR to simulate 5400 epidemics: 30 for each of my 180 networks of 1000 nodes. I did not specify a node to start as initially infectious, so from my understanding it chooses a node at random as initially infectious. I calculated the expected proportion of epidemics of size 1 by considering the cases in which the initial random infectious node does not transmit the infection to any of its neighbours. From my understanding, fast_SIR samples the times to transmission and to recovery from exponential distributions with rates tau and gamma respectively.
P(epidemic of size 1) = sum_{N = 0}^{10} P(initial node has N neighbours) * P(initial node does not transmit to a neighbour)^N
Where P(initial node does not transmit to a neighbour) is the stationary distribution for the initial node and is (transmission rate)/(transmission rate + recovery rate) = tau/(tau + gamma) = 0.06/(0.06 + 0.076)
I accounted for the probability that the initial random node is an isolated node since not all my networks are fully connected.
The actual proportion of epidemics of size 1 = 0.26 and the expected proportion of epidemics of size 1 using the above formula was 0.095
I don't know what I am not accounting for in the formula and what other scenarios or cases would lead to epidemics of size 1... Any advice would be appreciated.
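In code, the sum above reads as follows (the degree distribution used here is a hypothetical placeholder; the empirical one from the networks would be plugged in instead):

```python
def expected_size1(degree_probs, q):
    """P(epidemic of size 1) = sum over degrees N of P(N neighbours) * q**N,
    where q is the per-neighbour no-transmission probability."""
    return sum(p_n * q ** n for n, p_n in enumerate(degree_probs))

# hypothetical degree distribution over degrees 0..3, and q as in the question
print(expected_size1([0.1, 0.3, 0.4, 0.2], 0.06 / (0.06 + 0.076)))
```

An isolated initial node (degree 0) contributes its full probability mass, since q**0 = 1.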

Is there an efficient way to calculate the smallest N numbers from a set of numbers in hardware (HDL)?

I am trying to calculate the smallest N numbers from a set and I've found software algorithms to do this. I'm wondering if there is an efficient way to do this in hardware (i.e. HDL - in System Verilog or Verilog)? I am specifically trying to calculate the smallest 2 numbers from a set.
I am trying to do this combinationally, optimizing with respect to area and speed (for a large set of signals), but I can only think of comparator trees to do this. Is there a more efficient way of doing this?
Thank you, any help is appreciated~
I don't think you can work around using comparator trees if you want to find the two smallest elements combinationally. However, if your goal isn't low latency, then a (possibly pipelined) sequential circuit could also be an option.
One approach that I can come up with on the spot would be to break down the operation doing a kind of incomplete bubble sort in hardware using small sorting networks. Depending on the amount of area you are willing to spend, you can use a smaller or larger p-sorting network that combinationally sorts p elements at a time, where p >= 3. You can then apply this network to your input set of size N, sorting p elements at a time. The two smallest elements of each group are stored in some sort of memory (e.g. an SRAM, if you want to process larger amounts of elements).
Here is an example for p=3 (the brackets indicate the grouping of elements the p-sorter is applied to):
(4 0 9) (8 6 7) (4 2 1) --> (0 4 9) (6 7 8) (1 2 4) --> 0 4 6 7 1 2
Now you start the next round:
You apply the p-sorter on the results of the first round.
Again you store the two smallest outputs of your p-sorter into the same memory overwriting values from the previous round.
Here the continuation of the example:
(0 4 6) (7 1 2) --> (0 4 6) (1 2 7) --> 0 4 1 2
In each round you reduce the number of elements to look at by a factor of 2/p. E.g. with p == 4 you discard half the elements in each round until the smallest two elements are stored at the first two memory locations. The number of rounds is therefore O(log N), and since the per-round element counts form a geometric series, the total number of sorting-network passes (and thus cycles) is O(N). For an actual hardware implementation, you probably want to stick to powers of two for the size p of the sorting network.
Although the control logic of such a circuit is not trivial to implement, the area should be mainly dominated by the size of your sorting network and the memory you need to hold the first 2/p*N intermediate results (assuming your input signals are not already stored in a memory that you can reuse for that purpose). If you want to tune your circuit towards throughput, you can increase p and pipeline the sorting network at the expense of additional area. Additional speedup could be gained by replacing the single memory with up to p two-port memories (1 read and 1 write port each), which would allow you to fetch and write back the data for the sorting network in a single cycle, thus increasing the utilization ratio of the comparators in the sorting network.
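To check the round-by-round behaviour, here is a small software model of the reduction (Python rather than HDL, purely to validate the algorithm; the function name is mine):

```python
def smallest_two(values, p=3):
    """Software model of the iterative p-sorter reduction: each round sorts
    groups of p elements and keeps only the two smallest per group."""
    data = list(values)
    while len(data) > 2:
        survivors = []
        for i in range(0, len(data), p):        # one p-sorter pass per group
            group = sorted(data[i:i + p])       # stands in for the p-sorting network
            survivors.extend(group[:2])         # keep the two smallest per group
        if len(survivors) == len(data):         # safety net for p < 3 (no progress)
            survivors = sorted(survivors)[:2]
        data = survivors
    return sorted(data)
```

Running it on the example from the answer, `smallest_two([4, 0, 9, 8, 6, 7, 4, 2, 1], p=3)` goes through exactly the intermediate sets shown above (0 4 6 7 1 2, then 0 4 1 2) before converging.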

Genetic algorithm techniques for allocation of electric vehicles

The problem I'm trying to solve is about the best allocation of electric vehicles (EVs) in the electrical power grid. My grid has 20 possible positions (busbars), each allowed to receive one EV. Each chromosome has length 20 and its genes can be 0 or 1, where 0 means no EV and 1 means there's an EV at that position (busbar).
I start my population (100 individuals) with a fixed number of EVs (5, for instance) allocated randomly, and let them evolve through my GA. The GA uses tournament selection, 2-point crossover and bit-flip mutation. Each chromosome/individual is evaluated through a fitness function that calculates the power losses between bars (sum of R*I^2). The best chromosome is the one with the lowest power losses.
The problem is that using 2-point crossover and bit-flip mutation changes the fixed number of EVs that must be in the grid. I would like to know the best techniques for my GA operations. Besides this, I get a weird-looking plot of the fittest chromosome throughout the generations.
I would appreciate any help/suggestions. Thanks.
You want to define your state space in such a way where the mutations you've chosen can't create an illegal configuration.
This is probably not a great fit for a genetic algorithm. If you want to pick 5 from 20, there are ~15k possibilities. Testing a population of 100 over 50 generations already gives you enough computations to have done 1/3 of the brute force work.
If you have N EVs to assign on your grid, you can use chromosomes of size N, each gene being an integer representing the position of an EV. For the crossover, first separate the values that are the same in both parents from the rest and apply a classic (1- or 2-point) crossover to the parts that differ; for mutation, move a gene to a randomly picked valid available position.
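A sketch of those two operators in Python (the grid size and EV count follow the question; the helper names are mine). Both operators preserve the invariant of exactly N distinct positions, so no repair step is needed:

```python
import random

GRID_SIZE = 20   # busbar positions 0..19, as in the question
N_EVS = 5        # fixed number of EVs

def crossover(parent1, parent2):
    """Position-list crossover that always yields exactly N_EVS distinct positions."""
    common = set(parent1) & set(parent2)
    d1 = [g for g in parent1 if g not in common]
    d2 = [g for g in parent2 if g not in common]
    cut = random.randint(0, len(d1))
    # d1 and d2 are disjoint (a shared value would be in `common`),
    # so the child can contain no duplicates
    return list(common) + d1[:cut] + d2[cut:]

def mutate(chromosome):
    """Move one EV to a random free busbar; the EV count stays fixed."""
    free = [pos for pos in range(GRID_SIZE) if pos not in chromosome]
    child = list(chromosome)
    child[random.randrange(len(child))] = random.choice(free)
    return child
```

Since both parents carry N_EVS genes, |d1| == |d2| and the child's length is |common| + cut + (|d2| - cut) = N_EVS by construction.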

How can I prevent my program from getting stuck at a local maximum (Feed forward artificial neural network and genetic algorithm)

I'm working on a feed-forward artificial neural network (FFANN) that will take input in the form of a simple calculation and return the result (acting as a pocket calculator). The outcome won't be exact.
The artificial network is trained using genetic algorithm on the weights.
Currently my program gets stuck at a local maximum at:
5-6% correct answers, with 1% error margin
30 % correct answers, with 10% error margin
40 % correct answers, with 20% error margin
45 % correct answers, with 30% error margin
60 % correct answers, with 40% error margin
I currently use two different genetic algorithms:
The first is a basic selection: pick two random individuals from my population, name the one with the better fitness the winner and the other the loser. The loser receives one of the weights from the winner.
The second is mutation, where the loser from the selection receives a slight modification based on the number of resulting errors (the fitness is decided by correct and incorrect answers).
So if the network outputs many errors, it receives a big modification, whereas if it has many correct answers, we are close to an acceptable goal and the modification will be smaller.
So to the question: What are ways I can prevent my ffann from getting stuck at local maxima?
Should I modify my current genetic algorithm to something more advanced with more variables?
Should I create additional mutation or crossover?
Or Should I maybe try and modify my mutation variables to something bigger/smaller?
This is a big topic so if I missed any information that could be needed, please leave a comment
Edit:
Tweaking the numbers of the mutation to a more suitable value has gotten me a better answer rate, but it is still far from acceptable:
10% correct answers, with 1% error margin
33 % correct answers, with 10% error margin
43 % correct answers, with 20% error margin
65 % correct answers, with 30% error margin
73 % correct answers, with 40% error margin
The network is currently a very simple 3 layered structure with 3 inputs, 2 neurons in the only hidden layer, and a single neuron in the output layer.
The activation function used is Tanh, placing values in between -1 and 1.
The selection type crossover is very simple working like the following:
[a1, b1, c1, d1] // Selected as winner due to most correct answers
[a2, b2, c2, d2] // Loser
The loser will end up receiving one of the values from the winner, moved straight down to the same index, since I believe the position in the array (of weights) matters to how it performs.
The mutation is very simple, adding a very small value (currently somewhere between about 0.01 and 0.001) to a random weight in the losers array of weights, with a 50/50 chance of being a negative value.
Here are a few examples of training data:
1, 8, -7 // the -7 represents + (1+8)
3, 7, -3 // -3 represents - (3-7)
7, 7, 3 // 3 represents * (7*7)
3, 8, 7 // 7 represents / (3/8)
A useful technique here is niching. The score of every solution (some form of quadratic error, I think) is adjusted to take into account its similarity to the rest of the population. This maintains diversity inside the population and avoids premature convergence and traps in local optima.
Take a look here:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.100.7342
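A minimal fitness-sharing sketch of the niching idea (the distance metric and the `sigma` niche radius are my own assumptions, not taken from the paper):

```python
def shared_fitness(fitnesses, genomes, sigma=2.0):
    """Fitness sharing: divide each raw fitness by the individual's niche count,
    so individuals in crowded regions of the search space are penalized."""
    def distance(a, b):                      # Euclidean distance between genomes
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    shared = []
    for f, g in zip(fitnesses, genomes):
        # niche count: how many individuals lie within the sharing radius
        niche = sum(max(0.0, 1.0 - distance(g, h) / sigma) for h in genomes)
        shared.append(f / niche)
    return shared
```

Two identical genomes split their fitness between them, while an isolated genome keeps its full score, which is exactly the diversity pressure described above.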
A common problem when using GAs to train ANNs is that the population becomes highly correlated as training progresses.
You could try increasing the mutation chance and/or its effect as the error improvement decreases.
In English: the population becomes genetically similar due to crossover and fitness selection as a local minimum is approached. You can reintroduce variation by increasing the chance of mutation.
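A minimal sketch of that idea (the `boost` factor and the stall threshold are arbitrary choices, to be tuned):

```python
def mutation_chance(base, prev_error, curr_error, boost=2.0):
    """Raise the mutation chance when the error stops improving,
    i.e. when the population is likely stuck near a local optimum."""
    improvement = prev_error - curr_error
    if improvement <= 1e-6:          # stalled: inject more variation
        return min(1.0, base * boost)
    return base                      # still improving: keep the base rate
```

Called once per generation with the best (or mean) error, this keeps mutation low while progress is being made and ramps it up only when the search stagnates.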
You can make a simple modification to the selection scheme: the population can be viewed as having a 1-dimensional spatial structure - a circle (consider the first and last locations to be adjacent).
The production of an individual for location i is permitted to involve only parents from i's local neighborhood, where the neighborhood is defined as all individuals within distance R of i. Aside from this restriction no changes are made to the genetic system.
It's only one or a few lines of code and it can help to avoid premature convergence.
References:
Trivial Geography in Genetic Programming (2005) - Lee Spector, Jon Klein
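The ring-neighborhood restriction described above can be sketched like this (the tournament size and `radius` are my assumptions; only the parent-selection step changes):

```python
import random

def select_parent(population, fitnesses, child_index, radius=5):
    """Pick a parent only from the ring neighborhood of the child's location:
    all individuals within `radius` of child_index, wrapping around the circle."""
    n = len(population)
    neighborhood = [(child_index + offset) % n for offset in range(-radius, radius + 1)]
    # size-2 tournament restricted to the neighborhood
    a, b = random.sample(neighborhood, 2)
    return population[a] if fitnesses[a] >= fitnesses[b] else population[b]
```

Everything else in the genetic system stays as it was; good genes can still spread, but only gradually around the ring, which slows down premature convergence.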

Amdahl's law example

Can someone help me with this example please, and show me how to work out the second part?
The question is:
If one third of a weather prediction algorithm is inherently serial and the remainder
parallelizable, what is the minimum number of cores needed to guarantee a 150% speedup over a
single core implementation?
ii. Your boss revises the figure to 200%. What is your new answer?
Thanks very much in advance !!
Guess: If the algorithm is 1/3 serial and 2/3 parallel...I would think that each core you added would give you a 66% increase in performance...So for 150% increase, you'd need 3 more cores, and for a 200% increase, you'd need 4.
This is a guess. Your textbook might be more helpful :)
If the algorithm runs on a single core and takes 90 minutes, then 30 minutes is for the serial part and 60 minutes for the parallel part.
Add a CPU:
30 minutes for the serial part and 30 for the parallel part (the 60 minutes of parallel work is split across the 2 cores).
90 / 60 = 1.5, i.e. a 150% speedup.
I am a bit late, but here are the answers:
1) 150% speedup -> at least 2 cores required, as dbasnett said;
2) 200% speedup -> at least 4 cores required, based on Amdahl's law:

S = 1 / ((1 - P) + P/N)

Here, 90 minutes overall are required to perform the calculation. P is the actually enhanced part of the algorithm (the parallelizable part), which is 2/3, and N is the number of cores. When there is one core only:

S = 1 / ((1 - 2/3) + (2/3)/1) = 1

You get 1, which means 100%, which is how the algorithm performs the standard way (without multi-core acceleration and therefore no parallelization speedup).
Now, we must find the N for which the previous equation equals 2, where 2 means that the algorithm performs in half the time (45 minutes instead of 90 when there is no parallelization) and therefore with a 200% speedup:

1 / ((1/3) + (2/3)/N) = 2

Since:

(1/3) + (2/3)/N = 1/2, i.e. (2/3)/N = 1/6

we see that:

N = 4

So with 4 cores computing the 2/3 of the algorithm in parallel you get a 200% speedup. The same goes for 150%: solving for S = 1.5 gives N = 2, as dbasnett already told you.
Pretty simple.
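The whole calculation fits in a few lines (P = 2/3 from the question):

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: S = 1 / ((1 - P) + P / N)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

print(speedup(2 / 3, 2))  # 150% speedup with 2 cores
print(speedup(2 / 3, 4))  # 200% speedup with 4 cores
```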
Note that a complex algorithm may imply further subdivision of its parallelizable parts (and in theory you can have a different number of processing units per parallelizable part running concurrently).
You can further look at Wikipedia (there's also an example):
http://en.wikipedia.org/wiki/Amdahl%27s_law#Description
Anyway, the principle is the same:
Let T be the time the algorithm needs to execute in order to complete, A the serial part of it, B its parallelizable part and N the number of parallel CPUs. You can divide B into further small sections and perform calculations on each part:

T(N) = A + B/N, with e.g. B = C + D + G

You may for C, D, G e.g. adopt M CPUs instead of N (the speedup will of course differ if M != N).
And at the end, you will arrive at a point where having more CPUs doesn't matter anymore, since:

B/N -> 0 as N -> infinity, so T(N) -> A

And your algorithm's speedup will at most tend to the total execution time (T) divided by the execution time of the serial part alone (A):

S_max = T / A = (A + B) / A

Therefore parallel computation really comes in handy only when the execution time of the serial part of your algorithm is low.