Number of transitions in n parallel activities of a Petri net - discrete-mathematics

Suppose that there are n parallel activities, i.e., all n activities need to be executed but any order is allowed. There are n! possible execution sequences. The transition system requires 2^n states and n×2^(n-1) transitions. I do not understand how n×2^(n-1) transitions are obtained.
Any hint please.
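As a hint, one way to see the count: identify each state of the transition system with the set of activities completed so far, which gives the 2^n subsets as states. From a state with k activities still pending there are exactly k outgoing transitions, so the total number of transitions is the sum of k·C(n,k) over k = 0..n, which equals n×2^(n-1). Below is a small brute-force check in Python (purely illustrative, not part of the original question):

```python
from itertools import combinations

# Identify each state with the set of activities already completed; executing a
# pending activity a in state S fires one transition leading to S ∪ {a}.
def count_states_and_transitions(n):
    activities = range(n)
    states = [frozenset(c) for k in range(n + 1) for c in combinations(activities, k)]
    transitions = [(s, a) for s in states for a in activities if a not in s]
    return len(states), len(transitions)

for n in range(1, 8):
    n_states, n_trans = count_states_and_transitions(n)
    assert n_states == 2 ** n and n_trans == n * 2 ** (n - 1)
    print(n, n_states, n_trans)
```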

Related

Distance between deterministic policies that are not probability distributions

This question asks whether there is a way to measure the distance between policies that are in fact probability distributions.
In the case of continuous control with deterministic policies, which take a state as input and return an action vector, what would be the best method to measure how close two policies are to each other?
A naive approach that came to my mind would be to:
Run both policies A and B to produce a trajectory each and record all states visited.
For each state encountered by policy A, ask policy B which action it would take (and vice-versa). Hence we would have, for every state encountered, both A and B action vectors.
For each state, compare the action vectors of A and B using a common distance (Euclidean distance?).
Take the average (maybe maximum) of those distances.
Does it make any sense from a theoretical point of view?
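For what it's worth, here is a rough Python sketch of that naive procedure (illustrative only; policy_a and policy_b are assumed to be callables mapping a state vector to an action vector, and rollout is an assumed helper that runs a policy in the environment and returns the list of visited states):

```python
import numpy as np

def policy_distance(policy_a, policy_b, rollout, aggregate=np.mean):
    # 1. Collect the states visited by either policy.
    states = rollout(policy_a) + rollout(policy_b)
    # 2-3. For each visited state, query both policies and compare the
    #      action vectors with a Euclidean distance.
    dists = [np.linalg.norm(policy_a(s) - policy_b(s)) for s in states]
    # 4. Aggregate (average by default, or pass aggregate=np.max).
    return aggregate(dists)
```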

When to reset the lottery ticket hypothesis algorithm to find initialization?

According to The Lottery Ticket Hypothesis paper, there are two types of pruning strategies, one-shot pruning and iterative pruning. Both are explained on page 2. Finding the initialization for one-shot pruning is easy: we train the network for j iterations and then reset the weights to the initialization using the obtained mask. What I do not understand is the iterative pruning. On page 2, it says:
we focus on iterative pruning, which repeatedly trains, prunes, and
resets the network over n rounds;
What does "resets the network over n rounds" mean? Does it mean that at each round of pruning we reset the network weights to the initialization, using the mask obtained for the current level of pruning? Or does it mean that we train and prune the network iteratively without resetting to the initialization, and then, after n levels of pruning, reset to the initialization using the last mask we have?
The weights are reset to the initial values every time.
The Lottery Ticket Hypothesis relies on the initial weights staying constant. If the starting weights are changed, then the masked subnetwork is no longer effective. So, they must be reset every time.
The authors demonstrated this point experimentally and summarized it on page 5:
This experiment supports the lottery ticket hypothesis’ emphasis on initialization:
the original initialization withstands and benefits from pruning, while the random reinitialization’s performance immediately suffers and diminishes steadily.
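To make the procedure concrete, here is a schematic sketch of iterative magnitude pruning with resetting (not the authors' code; train and the dict-of-arrays representation of the network are illustrative stand-ins):

```python
import copy
import numpy as np

def iterative_pruning(init_weights, rounds, prune_frac, train):
    """init_weights: dict of layer-name -> weight array; train: assumed helper that
    trains the masked network and returns the trained weights."""
    mask = {k: np.ones_like(w) for k, w in init_weights.items()}
    weights = copy.deepcopy(init_weights)
    for _ in range(rounds):
        trained = train(weights, mask)                  # train the masked network
        for k, w in trained.items():                    # prune the smallest surviving weights
            alive = np.abs(w[mask[k] == 1])
            threshold = np.quantile(alive, prune_frac)
            mask[k] = np.where(np.abs(w) < threshold, 0.0, mask[k])
        # Reset to the ORIGINAL initialization every round -- the key point above.
        weights = {k: init_weights[k] * mask[k] for k in init_weights}
    return weights, mask
```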

Episodic Semi-gradient Sarsa with Neural Network

While trying to implement Episodic Semi-gradient Sarsa with a neural network as the approximator, I wondered how to choose the optimal action based on the currently learned weights of the network. If the action space is discrete, I can just calculate the estimated value of the different actions in the current state and choose the one which gives the maximum. But this does not seem to be the best way of solving the problem. Furthermore, it does not work if the action space is continuous (like the acceleration of a self-driving car, for example).
So, basically, I am wondering how to implement the 10th line, "Choose A' as a function of q̂(S', ·, w)", in Sutton's pseudo-code.
How are these problems typically solved? Can anyone recommend a good example of this algorithm using Keras?
Edit: Do I need to modify the pseudo-code when using a network as the approximator? So that I simply minimize the MSE between the prediction of the network and the reward R, for example?
I wondered how to choose the optimal action based on the currently learned weights of the network
You have three basic choices:
Run the network multiple times, once for each possible value of A' to go with the S' value that you are considering. Take the maximum value as the predicted optimum action (with probability 1-ε; otherwise choose randomly, for the ε-greedy policy typically used in SARSA).
Design the network to estimate all action values at once, i.e. to have |A(s)| outputs (perhaps padded to cover "impossible" actions that you need to filter out). This alters the gradient calculations slightly: zero gradient should be applied to the inactive outputs of the last layer (i.e. anything not matching the A of (S,A)). Again, just take the maximum valid output as the estimated optimum action; a sketch of this option appears after this list. This can be more efficient than running the network multiple times, and it is also the approach used by the DQN Atari game-playing agent and by AlphaGo's policy networks.
Use a policy-gradient method, which works by using samples to estimate a gradient that would improve a policy estimator. See chapter 13 of Sutton and Barto's second edition of Reinforcement Learning: An Introduction for more details. Policy-gradient methods become attractive when there are large numbers of possible actions, and they can cope with continuous action spaces (by estimating the distribution function of the optimal policy, e.g. choosing the mean and standard deviation of a normal distribution, which you can sample from to take your action). You can also combine policy gradients with a state-value approach in actor-critic methods, which can be more efficient learners than pure policy-gradient approaches.
Note that if your action space is continuous, you don't have to use a policy-gradient method; you could just quantise the action. Also, in some cases, even when actions are in theory continuous, you may find the optimal policy involves only using extreme values (the classic mountain car example falls into this category: the only useful actions are maximum forward acceleration and maximum backward acceleration).
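As a sketch of the second option (a Q-network with one output per action, plus ε-greedy selection), here is a minimal Keras example; state_dim and n_actions are assumed sizes, and this is illustrative rather than a full SARSA implementation:

```python
import numpy as np
import tensorflow as tf

state_dim, n_actions = 4, 3   # assumed environment sizes

# Q-network with one output per action: a single forward pass scores every action.
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(n_actions)
])

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy: random action with probability epsilon, else the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    q_values = q_net(state[None, :].astype(np.float32))[0].numpy()
    return int(np.argmax(q_values))

print(choose_action(np.zeros(state_dim)))
```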
Do I need to modify the pseudo-code when using a network as the approximator? So that I simply minimize the MSE between the prediction of the network and the reward R, for example?
No. There is no separate loss function in the pseudocode, such as the MSE you would see used in supervised learning. The error term (often called the TD error) is given by the part in square brackets and achieves a similar effect. The term ∇q̂(S,A,w) literally means the gradient of the estimator itself, not the gradient of any loss function.
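For concreteness, here is an illustrative sketch of that update with a Keras Q-network that has one output per action (such as q_net in the sketch above). Note there is no loss function: the TD error from the square brackets simply scales the gradient of the estimate itself.

```python
import tensorflow as tf

def semi_gradient_sarsa_update(q_net, s, a, r, s_next, a_next, alpha=0.01, gamma=0.99):
    s = tf.convert_to_tensor([s], dtype=tf.float32)
    s_next = tf.convert_to_tensor([s_next], dtype=tf.float32)
    q_next = q_net(s_next)[0, a_next]          # target term, not differentiated (semi-gradient)
    with tf.GradientTape() as tape:
        q_sa = q_net(s)[0, a]                  # q̂(S, A, w)
    grads = tape.gradient(q_sa, q_net.trainable_variables)   # ∇q̂(S, A, w), not a loss gradient
    td_error = r + gamma * q_next - q_sa       # the bracketed error term
    for w, g in zip(q_net.trainable_variables, grads):
        w.assign_add(alpha * td_error * g)     # w <- w + alpha * delta * grad of q̂
```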

How can I add concurrency to neural network processing?

The basics of neural networks, as I understand them, are that there are several inputs, weights and outputs. There can be hidden layers that add to the complexity of the whole thing.
If I have 100 inputs, 5 hidden layers and one output (yes or no), presumably there will be a LOT of connections, somewhere on the order of 100^5. Doing backpropagation via gradient descent seems like it will take a VERY long time.
How can I set up the backpropagation in a way that is parallel (concurrent), to take advantage of multicore processors (or multiple processors)?
This is a language agnostic question because I am simply trying to understand structure.
If you have 5 hidden layers (assuming 100 nodes each), you have 5 * 100^2 weights (assuming the bias node is included in the 100 nodes), not 100^5, because there are 100^2 weights between two consecutive layers.
If you use gradient descent, you'll have to calculate the contribution of each training sample to the gradient, so a natural way of distributing this across cores would be to spread the training sample across the cores and sum the contributions to the gradient in the end.
With backpropagation, you can use batch backpropagation (accumulate weight changes from several training samples before updating the weights, see e.g. https://stackoverflow.com/a/11415434/288875 ).
I would think that the first option is much more cache-friendly (updates need to be merged only once between processors in each step).
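A rough Python sketch of that data-parallel idea for a single linear layer with squared error (illustrative only, not a full backpropagation implementation): each worker process computes the gradient contribution of its share of the training samples, and the contributions are summed at the end.

```python
import numpy as np
from multiprocessing import Pool

def chunk_gradient(args):
    """Unnormalised gradient contribution of one chunk of samples (linear model, squared error)."""
    W, X, y = args
    err = X @ W - y          # forward pass and error for this chunk
    return X.T @ err

def parallel_gradient(W, X, y, n_workers=4):
    # Spread the training samples across worker processes, then sum the contributions.
    chunks = list(zip([W] * n_workers, np.array_split(X, n_workers), np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        parts = pool.map(chunk_gradient, chunks)
    return sum(parts) / len(X)   # average gradient over the whole batch

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = rng.normal(size=(1000, 1))
    W = rng.normal(size=(20, 1))
    W -= 0.1 * parallel_gradient(W, X, y)   # one batch gradient-descent step
```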

parallel computing with matlab for dependent loops

I have a kinetic Monte Carlo code. Since it is kinetic, each loop iteration updates the current state to a future state, making it a dependent for loop.
I want to use the parallel computing features of MATLAB, but it seems the well-known 'parfor' command only works for independent loop iterations.
So my question is: is it possible to use parallel computing in MATLAB to parallelize code where the loop iterations are not independent?
Usually these kinds of calculations are done on a grid, and the grid is distributed across the workers, each worker having its own part of the grid to calculate. This can't be done independently in general because the value at one point on the grid will depend on neighbouring points. These boundary values are communicated between the workers using some mechanism such as message passing or shared memory.
In MATLAB you can either use spmd blocks or communicating jobs, together with the labSend and labReceive functions, or you can use distributed arrays.
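For illustration of the boundary-exchange pattern described above (not MATLAB code): a toy 1-D "diffusion" update in Python, using multiprocessing pipes where the send/recv calls play the role that labSend/labReceive would play inside an spmd block. The grid size, number of workers and update rule are made up.

```python
import numpy as np
from multiprocessing import Process, Pipe, Queue

def worker(rank, local, left, right, n_steps, out):
    for _ in range(n_steps):
        # Exchange only the boundary values with neighbouring workers each step.
        left_ghost, right_ghost = local[0], local[-1]
        if left is not None:
            left.send(local[0]); left_ghost = left.recv()
        if right is not None:
            right.send(local[-1]); right_ghost = right.recv()
        padded = np.concatenate(([left_ghost], local, [right_ghost]))
        local = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0   # dependent update
    out.put((rank, local))

if __name__ == '__main__':
    n_workers = 4
    chunks = np.array_split(np.random.rand(40), n_workers)   # each worker owns a slice of the grid
    pipes = [Pipe() for _ in range(n_workers - 1)]            # one pipe per pair of neighbours
    out = Queue()
    procs = []
    for r in range(n_workers):
        left = pipes[r - 1][1] if r > 0 else None
        right = pipes[r][0] if r < n_workers - 1 else None
        procs.append(Process(target=worker, args=(r, chunks[r], left, right, 10, out)))
    for p in procs:
        p.start()
    result = np.concatenate([part for _, part in sorted(out.get() for _ in procs)])
    for p in procs:
        p.join()
    print(result)
```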