How to simulate block diagram based (Simulink-like) time-domain models?

I have been wondering about this for some time and am curious about the most logical way to implement simulation of block-diagram-based time-domain models.
I don't know if that term is correct, but if you know Simulink you know what I mean.
The reason I am wondering is that I have made some simulation models myself, but I always get stuck when I am creating feedback loops. Most of the time this is not a problem when I am working with blocks that I can translate to the state-space domain, but it becomes a problem when I get more complex elements.
Practically, I cannot seem to get my head around how Simulink solves this.
I had thought that, for every timesample, each block computes its current output and passes it to the connected blocks for the next timesample. However, when you have:
->A->B->C->D
  ^-----|
i.e. 4 blocks with an input to A and a feedback from C back to A, it takes 3 timesamples for the input to reach C, after which C starts emitting to A again. Before that, C would have been emitting its initial value. It takes 4 timesamples to reach D.
When A, B, C, D are simple Laplace-like elements this is easily combined into a state-space model or a transfer function from A to D, but the results will be monumentally different, because in the block-by-block scheme every hop costs a timesample, both from A to D and from C back to A. I know that the transfer function requires you, in general, to specify initial conditions, but those conditions are not translatable (or I cannot see how) to the block-diagram solution.
How do you tackle this problem in a generic way?
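For what it's worth, here is a minimal sketch in Python (the Gain and UnitDelay classes are made up for illustration) of how fixed-step simulators typically handle this: within each timesample the blocks are evaluated in dependency order, and only blocks with state (delays, integrators) introduce a one-sample lag, so the loop does not accumulate one sample of delay per block:

# Minimal sketch: each timesample, evaluate blocks in dependency order;
# only the state-holding block (here a unit delay) costs one sample of lag.
class Gain:
    def __init__(self, k):
        self.k = k
    def step(self, u):            # direct feedthrough: output depends on current input
        return self.k * u

class UnitDelay:
    def __init__(self, x0=0.0):
        self.x = x0               # initial condition closes the loop at t=0
    def step(self, u):            # no direct feedthrough: output is last sample's input
        y, self.x = self.x, u
        return y

# ->A->B->C->D with feedback from C to A; C is the state-holding block.
A, B, C, D = Gain(0.5), Gain(1.0), UnitDelay(0.0), Gain(1.0)
y_c = 0.0
for t in range(10):
    u = 1.0                       # external unit-step input
    y_a = A.step(u + y_c)         # feedback enters at A
    y_b = B.step(y_a)
    y_c = C.step(y_b)             # only this hop costs a timesample
    y_d = D.step(y_c)
    print(t, y_d)

If every block in the loop had direct feedthrough, there would be no state to break the loop, and the simulator would have to solve an algebraic loop instead, which is exactly the situation in the next question.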

Related

Solving an Algebraic Loop in Simulink using an Initial Value

I am building a circuit model for a transformer which models the effects of hysteresis. It does so using the MATLAB Function block on the right, and works successfully when tested in isolation. However, the value of the magnetising inductance Lm depends on calculations requiring the value of Im, and Simulink cannot determine the value of Im without the value of Lm, thus forming an algebraic loop.
However, I have the initial value of the inductance, Lm_initial, loaded into the workspace. With this, I should be able to solve for the first Im value, which can be used to determine the next Lm, and so on. However, specifying Lm_initial in the variable inductor's properties doesn't work; Simulink tries to evaluate Lm with the nonexistent 'phi' and 'Im' values rather than solving for an initial Im using the value of the initial inductance.
I have tried solutions involving commenting/uncommenting blocks and implementing further subsystems which activate/deactivate depending on the time step, as well as unit delays, but these run into issues regarding tracking time for calculating the derivatives or output very incorrect/noisy waveforms.
Is there a relatively simple solution for this case? The problem appears as if it'd be relatively simple to solve, but I cannot seem to find a workaround for this.
Transformer Equivalent Model
The exact placement of the unit delay in the loop might be the key here: try placing the Unit Delay between the [lm] Goto block and the lm input of your MATLAB Function block fcn; that should work. And set its initial condition parameter to Lm_initial.
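To see why this works, here is a toy sketch in Python (the saturating lm_from() curve and all numbers are made up, not the asker's hysteresis model) of the loop once the delay is in place: each step solves the circuit with the previous step's Lm, so no algebraic loop remains, and Lm_initial seeds the very first step:

import numpy as np

Lm_initial = 1.0
dt, n_steps = 1e-4, 1000
v = np.ones(n_steps)              # applied voltage (toy input)

def lm_from(Im):                  # made-up saturating inductance curve
    return Lm_initial / (1.0 + Im ** 2)

Lm = Lm_initial                   # the unit delay's initial condition
Im = 0.0
for k in range(n_steps):
    Im += (v[k] / Lm) * dt        # integrate dIm/dt = v/Lm with last step's Lm
    Lm = lm_from(Im)              # updated Lm is only used at the next step

This one-step lag is essentially what the Unit Delay with initial condition Lm_initial does inside the Simulink loop.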

Difference in Workspace and Simulink step response. Why this difference?

My primary objective was to make a controller for the transfer function 5.551*s^2; using root locus I made the controller shown below. Analyzing the step response in the workspace using the step() function, I got a satisfactory answer, but when I try to transfer this to Simulink the response behaves differently. At steady state, for example, I want the smallest possible error, as obtained in the workspace, but in Simulink there is a big error. Also, at 8 seconds (the Simulink simulation time) there is a "jump", as shown on the display, and when I change the simulation time the "jump" changes too. I do not know why there are these differences between one environment and the other.
Step response in Workspace
Step response in Simulink with 8s of simulation
Step response in Simulink with 12s of simulation
Simulink controller
Simulink transfer function
I expected to make a controller with an error of less than 5% and an overshoot smaller than 25%. I first made a controller with two integrators to cancel the effect of the zeros at the origin; after that I added two more integrators at the origin to try to decrease the error. For the zero at -0.652 I used the angle condition, and for the gain of 0.240251 I used the magnitude condition.
I wasn't expecting the most optimized behavior possible, just something that meets the imposed conditions, so I didn't worry, for example, about the four integrators at the origin.
I tried using the sisotool() command, thinking that I had done something wrong, but the result changed a lot when simulating in Simulink, so I discarded this option and kept the controller I made using root locus.
Your MATLAB code and your Simulink model are not the same, and hence the different results.
MATLAB allows you to define the non-causal plant model P_ball, then form the causal closed loop CL, which can have its step response generated.
Simulink does not allow you to model non-causal blocks (even if the overall model is causal) and hence will not allow you to implement s^2, which I assume is why you have used two differentiation blocks. But a numerical differentiation is not the same as a Laplace s operator.
You would have to make the plant causal by incorporating two poles that are large enough to not adversely affect the overall simulation. So your plant model needs to be something like 5.551*s^2/((s/1000 + 1)(s/1000 + 1)), which can be implemented using a Transfer Function block with a numerator of 5.551*1000*1000*[1 0 0] and a denominator of [1 2*1000 1000*1000].
Alternatively you could just implement PID * P_ball (where you manually do the 2 zero/pole cancellations) which is causal.
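As a rough check of the suggested approximation, here is a sketch in Python using scipy as a stand-in for the Simulink Transfer Function block (the time grid is arbitrary):

import numpy as np
from scipy import signal

p = 1000.0                         # the two fast poles suggested above
num = [5.551 * p * p, 0.0, 0.0]    # 5.551 * p^2 * s^2
den = [1.0, 2.0 * p, p * p]        # (s + p)^2 expanded
plant = signal.TransferFunction(num, den)   # causal stand-in for 5.551*s^2

t, y = signal.step(plant, T=np.linspace(0.0, 0.02, 2000))

The response of this biproper plant is dominated by the fast poles at s = -1000, which is the sense in which they should not adversely affect the rest of the simulation.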

Finding Conditional Moments in a Markov Process

This question combines math and programming. I will first describe the general problem and then give an example that is (hopefully) simpler to understand.
General Question: Consider a Markov-chain process of N states with transition matrix Π. Each state is associated with a value x_n (n in {1,…,N}). Our goal is to find the unconditional average of the first two moments (mean and variance) along T-period paths, conditional on (i) the path starting in a subset of states, N_0, (ii) ending in a subset of states, N_T, and (iii) not going through a subset of states, N_not, in any of the periods 1 to T-1. By the unconditional average of these two moments, I basically mean the average of these two moments under the stationary distribution. To be more concrete, let me illustrate the goal of the exercise in a simple case.
Simple Example: Consider a 3-state Markov-chain process with transition matrix Π, and let the three states be denoted by A, B, and C. Each of these states is associated with some value (x_A, x_B, and x_C, respectively). We are interested in what happens along paths that satisfy the following condition: the path starts at state A, after 3 periods is in either state B or C, and between periods 1 and 3 never goes through state A again. Denote this condition by (#). So, for example, a path we are interested in would be {A,B,B,C} with the associated values {x_A, x_B, x_B, x_C}. We are interested in the mean and standard deviation along such paths. In particular, we would like to find the unconditional average of these first two moments over paths that satisfy (#).
Let me now propose a solution based on simulating the process. Since both T and N are quite large, this solution is too slow for my purpose.
Simulation Solution: Starting from some initial point, simulate the process for a very long time and drop the first τ periods. Extract all paths along the simulation that satisfy condition (#) and compute the mean and std along each of these paths. Finally, simply average these moments across the paths.
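For concreteness, a minimal sketch of this simulation approach in Python (a toy 3-state chain matching the example; A is state 0, N_T = {1, 2}, T = 3, and all numbers are made up):

import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.4, 0.3],
              [0.4, 0.1, 0.5]])
x = np.array([1.0, 2.0, 3.0])       # state values x_A, x_B, x_C
A, N_T, T, tau, steps = 0, {1, 2}, 3, 1000, 200000

s = np.empty(steps, dtype=int)
s[0] = 0
for t in range(1, steps):           # simulate the chain
    s[t] = rng.choice(3, p=P[s[t - 1]])
s = s[tau:]                         # drop the burn-in periods

means, stds = [], []
for t in range(len(s) - T):
    path = s[t:t + T + 1]           # T-period path s_t, ..., s_{t+T}
    if path[0] == A and path[-1] in N_T and A not in path[1:-1]:
        means.append(x[path].mean())    # moments along one valid path
        stds.append(x[path].std())
print(np.mean(means), np.mean(stds))    # average across valid paths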
I'm hoping there is a better and more efficient way to achieve the goal. Since I want the solution to be accurate, and T and N are large, the simulation takes a long time.
I would love to hear your thoughts and if you know of efficient methods to achieve this goal. Please let me know if something is not clear and I'll try to clarify it.
Thank you!!!
I think I know how to do this if N_0 consists of one state, let's call that state A.
The long run probability of being in A is pi(A) and can be obtained by solving pi = pi*P, with P the transition matrix.
The other thing you need to calculate is the probability of those transient paths. You probably need to introduce a modified P where all states i in the set N_not are absorbing (i.e. P[i,i] = 1 and P[i,j] = 0 for j ≠ i). Then, starting from a vector p(0) which has a 1 in the element corresponding to state A and 0 otherwise, you can keep calculating p(n) = p(n-1)*P to get the probabilities of your transient paths.
Multiply the result of that by pi(A) to get the unconditional probability.
You can probably do something like this as well when N_0 is a set, but I don't know how you should select p(0) in that case.
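A sketch of this recipe for the 3-state example in Python (the transition matrix is toy data; A is state 0, N_not = {A}, N_T = {B, C}, T = 3; the first transition uses the original P, since the start state itself becomes taboo only afterwards):

import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.4, 0.3],
              [0.4, 0.1, 0.5]])
A, T = 0, 3
N_not, N_T = [0], [1, 2]

# Stationary distribution: solve pi = pi*P (left eigenvector for eigenvalue 1).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Modified chain: make every taboo state absorbing.
Q = P.copy()
for i in N_not:
    Q[i, :] = 0.0
    Q[i, i] = 1.0

p = np.zeros(len(P)); p[A] = 1.0
p = p @ P                          # leave A using the original P
for _ in range(T - 1):
    p = p @ Q                      # afterwards, taboo states trap all mass

# Mass now sitting on N_T never visited N_not in periods 1..T-1;
# weight by pi(A) to get the unconditional path probability.
print(pi[A] * p[N_T].sum())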

How to analyze scale-free signals and get signal properties

I am new to signal processing. I have the following signals, which I got after some pre-processing of the original signals.
You can see that some of them have similarities with others and some don't, but the problem is that they have various ranges (in this example from 1000 to 3000).
Question
How can I analyze their properties in a scale-free way (by properties I mean statistical properties of the signals, or whatever else is useful)?
Note that I don't want to cross-compare the signals; I just want scale-independent signal signatures that I can run some process on later.
Anything would help.
If you want to make a filter that separates signals that follow this pattern from signals that don't, well, there's tons of things you could do!
Just think practically. As a first shot at it, you could do something like this (in this order; there's a code sketch after the list):
Check if the signals are all-positive
Check if the first element is close in value to the last element
Check if the maximum lies "in the middle" somewhere
Check if the first value is small, then the signal grows, then shrinks again
Check if the growth rates are gradual. You could for example analyze their derivatives (after smoothing):
a. derivative should be all-positive for a while, then all-negative.
b. derivative should be smooth (no jumps greater than some tolerance)
Without additional knowledge about the signal's nature/origin, it's going to be hard to come up with more meaningful metrics than these...
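A toy sketch of those checks for a 1-D signal in Python (the function name, the 5-sample smoothing window and the tolerances are all made up; tune them to your data):

import numpy as np

def matches_pattern(sig, tol=0.05):
    s = np.asarray(sig, dtype=float)
    s = s / np.max(np.abs(s))                  # remove the amplitude scale
    if np.any(s < 0):                          # 1. all-positive
        return False
    if abs(s[0] - s[-1]) > tol:                # 2. first value close to last
        return False
    sm = np.convolve(s, np.ones(5) / 5, mode="valid")   # light smoothing
    peak = int(np.argmax(sm))
    if not 0.25 * len(sm) < peak < 0.75 * len(sm):      # 3. peak "in the middle"
        return False
    d = np.diff(sm)                            # 4./5a. grow, then shrink
    if np.any(d[:peak] < -tol) or np.any(d[peak:] > tol):
        return False
    return bool(np.max(np.abs(np.diff(d))) < tol)       # 5b. no derivative jumps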

Matrix-Algebra Design Decomposition

I am looking at refactoring some very complex code which is a subsystem of a project I have at work. My examination shows that this code is incredibly complex, and contains a lot of inputs, intermediate values and outputs, depending on some core business logic.
I want to redesign this code to be easier to maintain as well as to execute a hell of a lot faster, so to start off I have been looking at each of the parameters and their dependencies on each other. This has led to quite a large and tangled graph, and I would like a mechanism for simplifying it.
A while back I came across a technique in a book about SOA design called "Matrix Design Decomposition" which uses a matrix of outputs and the dependencies they have on the inputs, applies some form of matrix algebra and can generate Business Process diagrams for those dependencies.
I know there is a web tool available at http://www.designdecomposition.com/ however it is limited in the number of input/output dependencies you can have. I have tried looking around for the algorithmic source for this tool (so I could attempt to implement it myself without the size limitation), however I have had no luck.
Does anybody know a similar technique that I could use? Currently I am even considering taking the dependency matrix and applying some Genetic Algorithms to see if evolution can come up with a simpler workflow...
Cheers,
Aidos
EDIT:
I will explain the motivation:
The original code was written for a system which computed all of the values (about 60) every time the user performed an operation (adding, removing or modifying certain properties of an item). This code was written over ten years ago and is definitely showing signs of age; others have added more complex calculations into the system and now we are getting completely unreasonable performance (up to 2 minutes before control is returned to the user). It has been decided to detach the calculations from the user actions and provide a button to "recalculate" the values.
My problem arises because there are so many calculations going on, and they are based on the assumption that all of the required data will be available for their computation; now, when I try to re-implement the calculations, I keep encountering problems because I haven't got the result of a different calculation that this calculation relies on.
This is where I want to use the matrix-decomposition approach. The MD approach allows me to specify all of the inputs and outputs and gives me the "simplest" workflow that I can use for generating all of the outputs.
I can then use this "workflow" to know the precedence of the calculations I need to perform to get the same result without generating any exceptions. It also shows me which parts of the calculation system I can parallelise and where the fork and join points will be (I won't worry about that part just yet). At the moment all I have is an insanely large matrix with lots of dependencies showing in it, with no idea where to start.
I will elaborate from my comment a little more:
I don't want to use the solution from the EA process in the actual program. I want to take the dependency matrix and decompose it into modules that I will then code manually - this is purely a design aid - I am just interested in what the inputs/outputs for these modules will be. Basically a representation of the complex interdependencies between these calculations, as well as some idea of precedence.
Say A requires B and C, D requires A and E, and F requires B, A and E. I want to effectively partition the problem space from a complex set of dependencies into a "workflow" that I can examine to get a better understanding. Once I have this understanding I can come up with a better design / implementation that is still human-readable; for the example I know I need to calculate B, C and E first, then A, then D and F.
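For that example, a small Python sketch (Kahn's topological sort; the dependency dict just mirrors the example above) that turns the dependency table into exactly that kind of precedence list:

from collections import deque

# Each output maps to the set of inputs it requires.
requires = {"A": {"B", "C"}, "D": {"A", "E"}, "F": {"B", "A", "E"},
            "B": set(), "C": set(), "E": set()}

# Invert the table: for each node, which nodes depend on it.
dependents = {n: set() for n in requires}
for node, deps in requires.items():
    for d in deps:
        dependents[d].add(node)

pending = {n: len(d) for n, d in requires.items()}
ready = deque(n for n, k in pending.items() if k == 0)
order = []
while ready:                       # nodes whose inputs are all available
    n = ready.popleft()
    order.append(n)
    for m in dependents[n]:
        pending[m] -= 1
        if pending[m] == 0:
            ready.append(m)

print(order)                       # e.g. ['B', 'C', 'E', 'A', 'D', 'F']

Nodes that become ready at the same time (B, C and E here, then D and F) are the candidates for parallel execution, i.e. the fork points mentioned above.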
--
I know this seems kind of strange, if you take a look at the website I linked to before the matrix based decomposition there should give you some understanding of what I am thinking of...
kquinn, if it's the piece of code I think he's referring to (I used to work there), it's already a black-box solution that no human can understand as-is. He's not looking to make it more complicated, less in fact. What he's dealing with is a whole heap of interlinked calculations.
What currently happens, is that whenever anything changes, it's an avalanche of events which cause a whole bunch of calculations to fire off, which in turn causes a whole bunch more events which continues on until finally it reaches a state of equilibrium.
What I assume he wants to do is find the dependencies for those outlying calculations and work in from there, so they can be rewritten and so he can find a way to stop the calculations from happening for the sake of it, rather than because they need to.
I can't offer much advice in regards to simplifying the graph, as unfortunately it's not something I have much experience in. That said, I would start looking for those outlying calculations which have no dependencies, and just traverse the graph from there. Start building up a new framework that includes the core business logic of each calculation in the simplest possible way, and refactor the crap out of it along the way.
If this is, as you say, "core business logic", then you really don't want to be screwing around with fancy decompositions and evolutionary algorithms that produce a "black box" solution that no one in the world understands or is capable of modifying. I would be very surprised if any of these techniques actually yielded any useful result; the human brain is still incomprehensibly more capable than any machine at untangling complicated relationships.
What you want to do is traditional refactoring: clean up the individual procedures, streamlining them and merging them where possible. Your goal is to make the code clear, so your successor doesn't have to go through the same process.
What language are you using?
Your problem should be pretty easy to model using Java Executors and Future<> tasks, but perhaps a similar framework is available on your chosen platform as well?
Also, if I understand this correctly, you want to generate a critical path for a large set of interdependent calculations -- is that something done dynamically, or do you "just" need a static analysis?
Regarding an algorithmic solution: pick up the closest copy of your numerical analysis textbook and refresh your memory on singular value decompositions and LU factorization; I'm guessing off the top of my head that this is what lies behind the tool you linked to.
EDIT: Since you're using Java, I'll give a brief outline of a proposal:
-> Use a thread pool executor to parallelize all calculations easily.
-> Solve interdependencies with an object map of Future<> or FutureTask<>s; i.e., if your variables are A, B and C, where A = B + C, do something like this:
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.ThreadPoolExecutor;

static final Map<String, FutureTask<Integer>> mapping = ...
static final ThreadPoolExecutor threadpool = ...

FutureTask<Integer> a = new FutureTask<Integer>(new Callable<Integer>() {
    public Integer call() throws Exception {
        // Blocks until the B and C tasks have completed.
        Integer b = mapping.get("B").get();
        Integer c = mapping.get("C").get();
        return b + c;
    }
});
FutureTask<Integer> b = new FutureTask<Integer>(...);
FutureTask<Integer> c = new FutureTask<Integer>(...);
mapping.put("A", a);
mapping.put("B", b);
mapping.put("C", c);
for (FutureTask<Integer> task : mapping.values())
    threadpool.execute(task);
Now, if I'm not totally off (and I may very well be, it has been a while since I worked in Java), you should be able to solve the apparent deadlock problem by tuning the thread pool size or using a growing thread pool. (You still have to make sure that there are no circularly dependent tasks though, such as A = B + C and B = A + 1...)
If the black box is linear, you can discover all the coefficients by simply concatenating many vectors of input and many vectors of output.
If you have inputs x[i] and outputs y[i], create a matrix Y whose columns are y[0], y[1], ..., y[n], and a matrix X whose columns are x[0], x[1], ..., x[n]. There will be a transformation Y = T * X, and you can determine T = Y * inverse(X) (in practice a least-squares solve, since X need not be square or invertible).
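A quick sketch of that idea in Python with synthetic data (a least-squares solve stands in for inverse(X)):

import numpy as np

rng = np.random.default_rng(0)
T_true = rng.normal(size=(3, 3))           # the unknown linear black box
X = rng.normal(size=(3, 10))               # columns are input vectors x[i]
Y = T_true @ X                             # columns are observed outputs y[i]

# Y = T*X  <=>  X^T * T^T = Y^T, solved column-wise by least squares.
T_est = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
print(np.allclose(T_est, T_true))          # True for noiseless linear data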
But since you said it is complex, I bet it is not linear. If you still want a general framework, you can use a factor graph:
https://ieeexplore.ieee.org/document/910572
I would be curious if you can do this.
What I think is easier is to understand the code and rewrite it using best practices.