Input signal optimization - scipy

I have a system, described by a black box, that takes as input a signal in time (say, something similar to a sine wave) and returns a single scalar as output.
My goal is to find the optimal signal that maximizes/minimizes the output. As constraints, the time average of the signal must be kept constant, and the minimum and maximum values of the signal must stay within a specific range.
The parameters to be optimized are all the points describing my signal in time.
I want to use the scipy.optimize library to minimize the output of my black-box system; however, every time the parameters are modified by the optimizer, I need to smooth the input signal to avoid discontinuities.
My question is: is it possible to access the input variables after they are modified by the algorithm, smooth them, and substitute the smoothed values back at every iteration of the algorithm?
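
One common workaround, for what it's worth, is to fold the smoothing into the objective itself, so the black box only ever sees a smoothed signal (scipy's callback mechanism lets you observe the iterate, but it is not meant to modify it). Below is a minimal sketch of that idea in Python; black_box, the smoothing width, the bounds, and the target mean are all placeholders to adapt:

    import numpy as np
    from scipy.optimize import minimize

    def black_box(signal):
        # Placeholder for your real system; returns a single scalar.
        return np.sum(np.diff(signal) ** 2)

    def smooth(x, width=5):
        # Simple moving-average smoothing; swap in any filter you prefer.
        return np.convolve(x, np.ones(width) / width, mode="same")

    def objective(x):
        # Smooth the optimizer's trial point before it reaches the system,
        # so the black box only ever evaluates a smooth signal.
        return black_box(smooth(x))

    n = 100
    x0 = np.sin(np.linspace(0.0, 2.0 * np.pi, n))  # initial guess: one sine period
    target_mean = x0.mean()

    res = minimize(
        objective,
        x0,
        method="SLSQP",
        bounds=[(-1.5, 1.5)] * n,                  # min/max range constraint
        constraints=[{"type": "eq",                # fixed time average
                      "fun": lambda x: smooth(x).mean() - target_mean}],
    )
    optimal_signal = smooth(res.x)                 # the signal the system actually saw

Note that the bounds here are applied to the raw parameters; if the range must hold for the smoothed signal instead, express that through smooth(x), as the equality constraint above does.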

Related

Moving Average block returns wrong values for column vector input

I am using Simulink for real-time classification with a trained Fine-KNN model. The input to the model is a 50-point moving-average vector [6x1]. I am using the DSP System Toolbox Moving Average block for this purpose with the sliding-window technique (window size = 50, simulating via the code generator). When I compare the input and the output of this block for real-time values, I get the following plot:
It is clear from the plot that something is wrong with the output, as there is quite a discrepancy between the input and the output. What could the problem be, or am I doing something wrong?
Edit (after Cris's comment):
Here are some screenshots to showcase some modeling parameters within Simulink:
Screenshot showing probes for measuring actual input and moving average output along with the Moving Average block parameters
Other block parameters that might be affecting the performance of the model:
a. OPC Config real-time block parameters
b. OPC Read block parameters
PS: One issue I can think of is that the actual input is fed to the Moving Average block in real time at a 10 ms time step, and I am not sure whether the block has a buffer to store up to "Window Length" samples as they come in. In other words, for quite some time the block may not yet have seen 50 values of the input signal, and I am not sure how it deals with that situation.
I can reproduce this with the following minimal example:
So a constant input of [1; 2; 3] gives a moving average of roughly 2 (the average of the input's elements) in all elements, when you would expect an output of [1; 2; 3], since each element is constant.
In your example, the inputs average approximately 0.62, which is what you are seeing in the output of the moving average.
Using a Demux block to split your vector up gives the desired output.
The docs say that the Moving Average block should be able to handle this, though:
The Moving Average block computes the moving average of the input signal along each channel independently over time.
It turns out that a channel in this case is a column of your vector. Since you have a column vector, its elements are read as consecutive samples of a single channel, so they get stacked into the same window and averaged together. Unfortunately the underlying code is sealed, so we can't check this theory other than by trying it out.
Reshape your input to a row array using the Reshape block.
Then you get the expected output
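
Since the block's internals are sealed, here is a rough NumPy sketch of the two interpretations as an illustration, assuming an Mx1 column is read as M consecutive samples of one channel and a 1xN row as N independent channels:

    import numpy as np

    window, steps = 50, 200
    u = np.array([1.0, 2.0, 3.0])      # the constant input vector
    kernel = np.ones(window) / window

    # Column-vector reading: one channel, three samples per step, so
    # 1, 2, 3, 1, 2, 3, ... all share the same sliding window.
    stream = np.tile(u, steps)
    col_out = np.convolve(stream, kernel, mode="valid")[-1]

    # Row-vector reading: three independent channels, one sample each per step.
    row_out = [np.convolve(np.full(steps, v), kernel, mode="valid")[-1] for v in u]

    print(col_out)   # roughly 2: the elements leak into each other
    print(row_out)   # [1.0, 2.0, 3.0]: each channel averaged on its own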

Time-Delay estimation of transient signals using XCORR in MATLAB

I have to compute the cross-correlation between two transient signals with non-zero mean. I read that the function xcorr in MATLAB works properly only with zero-mean inputs.
Since these signals represent transient phenomena, it doesn't make sense to me to subtract the mean value.
My objective is to compute the time delay between the maximum values of the two signals. The two signals are not exactly similar in shape, but I guess that is always the case in practice.
If I try to compute the time delay using xcorr, I get results close to what I expect (i.e., the delay I can check visually from where the maxima of the two signals occur) only with the 'unbiased' option.
Why is that? Does the 'unbiased' routine subtract the mean values from both of my signals?
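
For reference, 'unbiased' does not subtract any means: it only rescales the raw correlation at each lag by the number of overlapping samples, N - |lag|, instead of N. With non-zero-mean transients that overlap taper otherwise pulls the peak toward zero lag, which would explain why the unbiased estimate matches your visual check better. A sketch of the two scalings in Python/NumPy, as a stand-in for xcorr:

    import numpy as np

    def xcorr_scaled(x, y, scale="none"):
        # Cross-correlation with MATLAB-style scaling options (sketch).
        n = len(x)
        lags = np.arange(-(n - 1), n)
        c = np.correlate(x, y, mode="full").astype(float)  # raw correlation
        if scale == "biased":
            c /= n                     # divide by N at every lag
        elif scale == "unbiased":
            c /= n - np.abs(lags)      # divide by the overlap length N - |lag|
        return lags, c

    # Delay estimate: the lag at which the scaled correlation peaks
    # (check the sign convention against MATLAB's xcorr).
    # lags, c = xcorr_scaled(sig1, sig2, "unbiased")
    # delay_samples = lags[np.argmax(c)]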

Matlab: Comparing two signals with different time values and placed impulses

We are analysing, in MATLAB, some signals that contain an impulse in the form of a dip in the otherwise steady signal.
Signals
As you can see in the picture, we need to find the difference between the "Zlotty" and the "Krone". The two graphs next to each other are the ones that need to be analyzed.
As you can see, the impulses differ both in when they occur and in how long they last. We cannot use time as a measurement, because it can vary randomly.
Each graph is made of vectors containing 2.5 million data points.
How would you use MATLAB to find this difference?
You could split the problem into two parts: ensuring the same time scale for both signals, and finding a possible time shift in the alignment of the resulting signals. The first part could be achieved with MATLAB's resample function, and the second by cross-correlation. Using two nested for loops, you could search for the "best" stretch factor and time shift, i.e. the pair that yields the maximum correlation coefficient; see the sketch below.
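
As a language-agnostic illustration, here is that search sketched with SciPy's counterparts of resample and xcorr; krone and zlotty are hypothetical names for your two vectors, the shift sweep is done by the correlation itself rather than an explicit inner loop, and with 2.5 million points you would probably want to decimate both signals first:

    import numpy as np
    from scipy.signal import resample

    def best_alignment(ref, sig, stretch_factors):
        # For each stretch factor, find the time shift with the highest
        # normalized correlation, and keep the best combination overall.
        best = (None, None, -np.inf)               # (stretch, shift, score)
        for s in stretch_factors:
            stretched = resample(sig, int(round(len(sig) * s)))
            m = min(len(ref), len(stretched))
            a = ref[:m] - np.mean(ref[:m])
            b = stretched[:m] - np.mean(stretched[:m])
            c = np.correlate(a, b, mode="full")    # correlation at every shift
            score = c.max() / (np.linalg.norm(a) * np.linalg.norm(b))
            if score > best[2]:
                best = (s, int(np.argmax(c)) - (m - 1), score)
        return best

    # stretch, shift, score = best_alignment(krone, zlotty, np.linspace(0.9, 1.1, 21))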

Simulink linear adjustment

I am new to Simulink and am trying to achieve the following:
I have a signal which simulates the power output of an engine. I now want to be able to change this power output to a new value.
My question: how do I implement a linear adjustment from the current output to the newly requested output? Linear in the sense of a constant rate of change, e.g. x watts/second.
Thanks!
The simplest way to handle this is probably the Rate Limiter block.
If you cause a step change in the demand, the Rate Limiter takes the demand as input and produces an output whose rate of change is limited to the rate specified in its dialog parameters.
There is a dynamic version (Rate Limiter Dynamic) if you want the maximum slew rates to be specified via signals.
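
For intuition, the block's behavior amounts to clamping the per-step change of the output. A rough sketch of the same logic in Python (the 10 W/s rate, the 10 ms step, and the 0-to-100 W demand are all made-up numbers):

    import numpy as np

    def rate_limit(u, rate, dt):
        # Follow u, but change by at most `rate` units per second.
        y = np.empty_like(u, dtype=float)
        y[0] = u[0]
        max_step = rate * dt
        for k in range(1, len(u)):
            y[k] = y[k - 1] + np.clip(u[k] - y[k - 1], -max_step, max_step)
        return y

    t = np.arange(0.0, 15.0, 0.01)
    demand = np.where(t >= 1.0, 100.0, 0.0)         # step change in demanded power
    power = rate_limit(demand, rate=10.0, dt=0.01)  # ramps 0 -> 100 W over 10 s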

Using System Identification Toolbox transfer function with Simulink

I believe I am doing something fundamentally wrong when trying to import and test, in Simulink, a transfer function created with the System Identification Toolbox (SIT).
To give a simple example of what I am doing:
I have an input which is an offset sinusoidal wave from 12 s to 25 s, with an amplitude of 1 and a frequency of 1.5 rad/s, which gives a measured output.
I have used SIT to create a simple 2-pole, 1-zero transfer function, which gives the following agreement:
I have then tried to import this transfer function into Simulink for investigation, in the following configuration, which has a sinusoidal input of frequency 1.5 rad/s and a start time of t = 12 s. The LTI System block refers to the transfer function variable in the workspace:
When I run this simulation for 13 seconds, the input to the block is as expected, but the signal after the transfer function shows little agreement with what would be expected and is an order of magnitude off.
pre:
post:
Could someone give any insight into where I am going wrong, and why the output from the transfer function in Simulink bears so little resemblance to the model output displayed in the SIT? I have a basic grasp of control theory, but I am struggling to make sense of this.
This could be due to different initial conditions being used in Simulink and the SI Toolbox: the latter estimates initial conditions along with the model, while Simulink does nothing special with initial conditions unless you specify them yourself.
To me it seems that your original signals are in a periodic regime, since your output looks almost like a sine wave as well. In a periodic regime, initial conditions have little effect. You can verify this assumption by simulating your model for a longer time: if, at the end, your signal reaches the same amplitude and phase lag as in your data, you will know that the initial conditions were wrong.
In any case, you can get the estimated initial state from the toolbox, I think via the InitialState property of the resulting object.
Another thing that might go wrong is the time discretization that you use in Simulink, in case you estimated a continuous-time model (one in the Laplace variable s, not in z or q).
Edit: in that case, I would recommend checking what Simulink uses to discretize your CT model, by using c2d in MATLAB and a setup like the one below in Simulink. In MATLAB you can also "simulate" the response of a CT model using lsim, where you have to specify a discretization method.
This setup lets you load a CT model and a discretized variant (in this case a state-space representation). By comparing the signals, you can see whether the discretization method you use is the same one Simulink uses (this depends on the integration method you set in the solver settings).
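
The same comparison can also be scripted outside Simulink. Here is a sketch using SciPy's counterparts of lsim and c2d, with a made-up 2-pole, 1-zero model standing in for the identified one:

    import numpy as np
    from scipy import signal

    # Hypothetical continuous-time model (2 poles, 1 zero).
    ct = signal.TransferFunction([2.0, 1.0], [1.0, 0.8, 4.0])

    dt = 0.01
    t = np.arange(0.0, 13.0, dt)
    u = 1.0 + np.sin(1.5 * t)               # offset sinusoid at 1.5 rad/s

    # Continuous-time response (zero initial conditions, as in Simulink's default).
    _, y_ct, _ = signal.lsim(ct, u, t)

    # Discretize with a chosen method and simulate the discrete model.
    dtf = ct.to_discrete(dt, method="zoh")  # try "bilinear" (Tustin) as well
    _, y_dt = signal.dlsim(dtf, u, t)

    # If y_ct and y_dt differ noticeably, the discretization method matters
    # for this model at this step size.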