I'm using the program Dymola in this case. As you can see in the figure, we have our temperature setpoint (refTemp) and we are comparing it with a temperature in the system (KvvTemp). Our goal is to take the difference between these temperatures and then multiply the difference by a small number, so the value lies between 0 and 1 before entering the integrator. Now to my question: how is it possible for the integrator's output to be the temperature we want to send into the system (y1)? Is there any explanation of how the temperature that enters the system (y1) can be set through the integrator?
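What the diagram describes is essentially integral control: the integrator accumulates the scaled error, and its output keeps moving until refTemp and KvvTemp agree, so that output can be used directly as the temperature command y1. A minimal discrete-time sketch of the same idea (the gain, sample time, and the toy plant below are placeholders, not values from the Dymola model):

    % Minimal integral-control sketch (illustrative only; all values are placeholders).
    refTemp = 55;      % temperature setpoint
    KvvTemp = 20;      % measured system temperature, modelled here as a simple lag
    y1      = 20;      % temperature command sent to the system
    k       = 0.05;    % small gain that scales the error before the integrator
    dt      = 1;       % sample time

    for n = 1:500
        err = refTemp - KvvTemp;                    % compare setpoint and measurement
        y1  = y1 + k*err*dt;                        % integrator output becomes y1
        KvvTemp = KvvTemp + 0.1*(y1 - KvvTemp)*dt;  % toy first-order plant response
    end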
I've successfully plotted the signal strength coverage map for a generic narrow-band (i.e. single-frequency) horn antenna using MATLAB's built-in functions design(), txsite() and coverage().
MATLAB uses the Longley-Rice propagation model when terrain data are present; I downloaded terrain data and introduced it using addCustomTerrain().
However, I don't want my antenna to be narrow band operating at a single frequency.
I want to model the coverage map I would get on location with a known ultra-wide band (UWB) transient pulsed signal. I have the time domain E-field of this waveform as well as the FFT and energy spectral density.
My plan was to loop over many tx antennas, each having an operating frequency equal to one of the ~1000 frequency bins in the UWB spectral content and an output power equal to the scaled energy spectral density (ESD) multiplied by the frequency step size (df) and divided by the total time period of the measured pulsed signal (to get power). P = ESD * (df/T).
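A rough sketch of that loop (site coordinates, the base horn design, and the spectral arrays below are placeholders; treat the txsite/coverage calls as an illustration of the approach rather than verified code):

    % Sketch: one txsite per frequency bin of the UWB spectrum (illustrative only).
    % Assumes the custom terrain has already been added with addCustomTerrain().
    lat = 42.30;  lon = -71.35;          % placeholder site coordinates
    f   = linspace(1e9, 3e9, 1000);      % placeholder frequency bins from the FFT
    ESD = ones(size(f));                 % placeholder scaled energy spectral density
    df  = f(2) - f(1);                   % frequency step size
    T   = 10e-9;                         % placeholder pulse duration
    P   = ESD * df / T;                  % per-bin power, P = ESD*(df/T)

    baseAnt = horn;                      % generic horn, redesigned per bin below
    txs = cell(1, numel(f));
    for k = 1:numel(f)
        ant    = design(baseAnt, f(k));
        txs{k} = txsite('Latitude', lat, 'Longitude', lon, ...
                        'Antenna', ant, ...
                        'TransmitterFrequency', f(k), ...
                        'TransmitterPower', P(k));
    end
    txs = [txs{:}];                      % array of transmitter sites
    coverage(txs, 'PropagationModel', 'longley-rice');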
However, when I ran this looped code, I got:
"Error using em.EmStructures/savesolution
The calculated result is invalid; possible cause is a coarse mesh. Please consider refining the mesh
manually."
I assume this means MATLAB can't model 1000 different antennas at the exact same location... but does anyone have an idea about this error?
Is what I'm trying to do possible in MATLAB?
Are there alternative methods?
Thank you for any help in advance!
I'm running a Simulink model (an .slx file) from MATLAB. My system is mainly in MATLAB, but I run the .slx file and export the outputs to be used in MATLAB. The simulation is run for 48 seconds (1 second representing an hour). When I get the outputs, I expect them to have the same quality as what I see in Simulink, but they don't. Here is an example of what my data looks like in Simulink:
Here is how it looks when I plot it in MATLAB (the number of samples becomes 307 when exported):
I tried changing the step size or the solver in Simulink, but this distorted my Simulink output, as shown below.
My solver is ode45. How do I control the sampling frequency of my data so that I don't get a different resolution after exporting it to MATLAB?
P.S. Once I export it, I will interpolate the data to get samples in between the hours (a sample every minute instead of every hour). If I can do this directly by changing the step size, that would be perfect.
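For the interpolation step mentioned in the P.S., a sketch with interp1 (the hourly data below are synthetic placeholders standing in for the exported time and signal vectors):

    tout  = (0:48)';                               % placeholder: hourly time stamps from the export
    yout  = sin(2*pi*tout/24);                     % placeholder: exported signal values
    tFine = 0:1/60:48;                             % one sample per minute over the 48 "hours"
    yFine = interp1(tout, yout, tFine, 'pchip');   % shape-preserving interpolation
    plot(tFine, yFine), xlabel('Time (h)'), ylabel('Signal')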
Following your advice, I got this plot when plotting against time instead of samples:
Thank you
You are using a variable-step solver (ode45), so there is a very high chance you won't get a consistent sampling frequency.
The only way to ensure/control the sampling frequency is to use a fixed-step solver (ode4 for instance).
However, as to why the data look different between the Simulink scope and the plotted data: for variable-step solvers there is a refine factor (Configuration Parameters -> Data Import/Export -> Additional parameters). By default it is set to 1. Set it to 100 and you should get a more consistent-looking sample density.
What should be known about the refine factor?
To get smoother output and have a better time resolution, it is much faster to change the refine factor instead of reducing the step size.
When the refine factor is changed, the solvers generate additional points by evaluating a continuous extension formula at those points.
The refine factor applies to variable-step solvers and is most useful when you are using ode45.
Usually a value of 4 produces much smoother results.
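Programmatically, assuming the standard configuration-parameter names ('Refine', 'Solver', 'FixedStep'), the two options above look roughly like this:

    mdl = 'myModel';                     % placeholder model name
    load_system(mdl);

    % Option 1: keep ode45 but refine the output between solver steps.
    set_param(mdl, 'Refine', '100');

    % Option 2: switch to a fixed-step solver for truly constant sample spacing.
    % set_param(mdl, 'Solver', 'ode4', 'FixedStep', '1/60');

    out = sim(mdl);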
https://blogs.mathworks.com/simulink/2009/07/14/refining-the-output-of-a-simulation/
https://uk.mathworks.com/help/simulink/gui/refine-factor.html
I have accelerometer readings for three axes (X, Y and Z) and will be getting data at a rate of 62 records per second. Could you please suggest how I can calculate the displacement?
Data in hand:
Accelerometer readings with respect to time.
Do I need to calculate the displacement using time-domain data, or do I need to convert to the frequency domain? Which one will give more accurate results?
You can double integrate the acceleration vector over time to obtain the displacement. In theory this is a perfectly sensible solution.
But in practice, there will always be a component of g (acceleration due to gravity) acting on at least one of the axes all the time. Let's say you subtract the g component from your XYZ vectors. The problem is that any slight error in the readings (even a very small one) accumulates under double integration, rendering the displacement wildly inaccurate over time.
Based on the integrated values, you will most likely see even an idle object fly off into space. You'll need an additional sensor to tell you the orientation, like a gyroscope, and some point of reference (the Wiimote does this with an IR sensor).
This is primarily a time domain problem, but you could have a frequency domain stage where some amount of filtering is done to remove measurement error or process error.
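For illustration, a naive time-domain double integration (which will drift exactly as described above) could look like this; the acceleration data here are synthetic placeholders with gravity assumed already removed:

    fs = 62;                             % sampling rate: 62 records per second
    t  = (0:10*fs-1)'/fs;                % 10 s of placeholder time stamps
    ax = 0.02*randn(size(t));            % placeholder X-axis acceleration (m/s^2)

    vx = cumtrapz(t, ax);                % first integration:  acceleration -> velocity
    dx = cumtrapz(t, vx);                % second integration: velocity -> displacement

    plot(t, dx), xlabel('Time (s)'), ylabel('Displacement (m)')
    % Even with zero-mean noise only, dx wanders away from zero - the error
    % build-up described above.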
tl;dr Positional tracking with acceleration sensors alone is a hard problem.
I am trying to estimate the quasi-stationary part of a signal in MATLAB. It is a 1-second-long sound signal from a bird.
I am using MFCC to extract features but would like to have a window size for MFCC that is guaranteed to operate on the statistically quasi-stationary part.
My questions are:
Do you think it is a solid approach to iterate by varying my window size from 1 second down to smaller intervals, observing the change in the second moment of the features, and deciding on the window size at which the second moment stops changing? (See the sketch after this list.)
If I use the Shannon entropy method, again varying my MFCC window size, how would the number of bits at the output of the entropy algorithm help me identify the quasi-stationary part of the signal?
Are there any other ideas?
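For the window-size sweep in question 1, a rough sketch using the Audio Toolbox mfcc function; the file name, the candidate window lengths, the Hamming window, and the 'Window'/'OverlapLength' parameter names are assumptions to adapt to your setup:

    [x, fs] = audioread('birdcall.wav');    % placeholder 1-second recording
    x = x(:,1);                             % use a single channel
    winLens = round(fs * [0.5 0.25 0.1 0.05 0.025]);   % candidate window sizes in samples

    secondMoment = zeros(size(winLens));
    for k = 1:numel(winLens)
        w = hamming(winLens(k), 'periodic');
        coeffs = mfcc(x, fs, 'Window', w, 'OverlapLength', round(0.5*winLens(k)));
        secondMoment(k) = mean(var(coeffs, 0, 1));     % average per-coefficient variance across frames
    end

    plot(winLens/fs, secondMoment, '-o')
    xlabel('Window length (s)'), ylabel('Mean variance of MFCCs')
    % Look for the window length below which the curve stops changing.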
I am trying to fit a model of a battery to some measurements.
Input of my model is current and the output is voltage of battery terminal.
I am using some 3D lookup tables in my model.
The breakpoints (dimensions) are SOC (state of charge), temperature, and current amplitude, and the table data are the impedance values of the battery circuit elements.
The measurements were done using very small currents (in the range of 2 A).
After completing the model, I am supposed to validate it using a standard current input signal that includes very high current amplitudes (in the range of 250 A).
I then compare the output of my model (voltage) with the measured voltage obtained with that standard current signal.
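For context, each 3-D lookup amounts to something like the interpolation below (breakpoint vectors and table data are made-up placeholders, not the measured values); note that a query at 250 A lies far outside breakpoints measured around 2 A:

    soc  = 0:0.1:1;                      % state-of-charge breakpoints
    temp = [-10 0 10 25 40];             % temperature breakpoints (degC)
    curr = [0.5 1 1.5 2];                % current-amplitude breakpoints (A), measured range

    [S, Tg, C] = ndgrid(soc, temp, curr);
    R = 0.01 + 0.002*(1 - S) + 1e-4*(25 - Tg) + 1e-3*C;   % made-up impedance table data

    % A query outside the current breakpoints returns NaN here; the Simulink
    % n-D Lookup Table block instead clips or extrapolates, depending on its settings.
    Rq = interpn(soc, temp, curr, R, 0.7, 25, 250, 'linear')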
Now when I try to run the model, I get a singularity error on one of the integrators. I am sure this is caused by the high amplitude of the input current, but the problem is that I cannot limit the amplitude using e.g. saturation blocks. I also tried different solvers but could not fix the problem. Does anyone have an idea how to resolve this?
Please access my files from here.