I use SciPy's integrate.solve_ivp function to solve an ODE over a time interval. The method and precision are fixed as rtol=1e-5, atol=1e-5, method="RK45". But when I change the time interval from [0, 1] to [0, 0.01], solving is only about twice as fast. How can I configure it to be 100 times faster?
I have a problem with my calculation tool. I have a vector of current values and a time vector in datetime format. Now I need to get the integral for the overall power consumption, and I'm struggling.
I tried to integrate over the vector with trapz, but the outcome isn't realistic. I tested different vectors and signals, but got the same result.
My time vector wasn't used in this example.
%% Power calculation
Energy = Ch1L1 * 230;             % P = U*I: scale current vector by 230 V
EnergySum = trapz(Energy) / 1000  % Output: power in kW
Does anyone know a solution?
I think I'm misunderstanding how time enters this result. Normally, the integral should be the overall power consumption for the time over which I logged the data.
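For reference, a minimal sketch of how the time vector could enter the integral (assuming the datetime vector is called t and lines up sample-by-sample with Ch1L1; the unit conversion is my assumption, not from the original snippet):
t_sec = seconds(t - t(1));        % convert datetime stamps to elapsed seconds
Power = Ch1L1 * 230;              % P = U*I, in W
Energy_J = trapz(t_sec, Power);   % integrate power over time -> energy in J (W*s)
Energy_kWh = Energy_J / 3.6e6     % 1 kWh = 3.6e6 J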
I would like to compute the settling time of a signal y in MATLAB. It should give the amount of time required before the steady-state error |y(t) - y_ss| becomes smaller than some absolute value x and stays smaller than x for all future times.
I already tried the MATLAB function stepinfo, but it defines the threshold as "a fraction 2% of their peak value for all future times", and that is not what I want.
Is there any way I could adjust the MATLAB function stepinfo, or a way I could code this myself?
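One way this could be coded directly, as a minimal sketch (assuming y and t are equal-length vectors and that y_ss and the absolute tolerance x are already known):
% Settling time: first time after which |y - y_ss| stays below x for all remaining samples.
err = abs(y - y_ss);
last = find(err >= x, 1, 'last');   % last sample that violates the tolerance
if isempty(last)
    t_settle = t(1);                % within tolerance over the whole record
elseif last == numel(y)
    t_settle = NaN;                 % never settles within the logged data
else
    t_settle = t(last + 1);         % first time after the last violation
end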
I am a beginner with the FFT and I am trying to learn it well in MATLAB, but I have trouble with the concept of the FFT and the difference between the time and frequency domains.
I have two questions and I would be grateful if someone could help me with them.
1. Can the DFT be implemented in both the time and frequency domains? What is the difference between sampling in the time domain and in the frequency domain?
2. I want to compute the DFT of a step function (t = 45 seconds, and the sampling interval in the time domain is given as 0.01 s). Can anyone help me write this code in MATLAB?
Thanks,
1. A DFT is a transition from the time domain to the frequency domain, and an IDFT does the opposite (f -> t).
Sampling in time means you look at the signal at fixed time intervals (e.g. 0.001 s).
In the frequency domain, the spacing tells you how much frequency lies between consecutive values. The last value on the frequency axis is (roughly) the inverse of the sample time (1000 Hz in the example above). So if you do a DFT with 1000 samples, you get a spacing of 1 Hz.
2. Just insert 45 s / 0.01 s = 4500 samples into the DFT. Make sure that you capture exactly one period of the function, otherwise you get leakage from windowing.
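A minimal MATLAB sketch for question 2, under my assumptions (unit step, 45 s record, 0.01 s sampling):
dt = 0.01;             % sampling interval in s
t = 0:dt:45-dt;        % 4500 samples over 45 s
x = ones(size(t));     % step function, 1 for t >= 0
N = numel(x);
X = fft(x);            % DFT computed via the FFT
f = (0:N-1)/(N*dt);    % frequency axis: spacing 1/(N*dt) = 1/45 Hz, last value just below 1/dt
plot(f, abs(X)/N);     % normalised magnitude spectrum
xlabel('Frequency (Hz)'); ylabel('|X(f)|');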
I have written a simple code for a model in NetLogo. At the same time, the model is well studied in the literature through ordinary differential equations. Now I would like to compare some plots of the model obtained from both NetLogo and MATLAB (used to solve the differential equations). I used the "ticks" command to increment time in NetLogo, whereas MATLAB uses time in seconds. What kind of precautions (changes) should I keep in mind in order to compare the plots obtained from NetLogo and MATLAB?
In general, the ticks-axis of the plots from NetLogo should just be a constant rescaling of the time-axis of the MATLAB plots. That scaling factor is often referred to in simulations as dt or the "step size": the time per tick. If you were just using NetLogo to numerically solve differential equations (not recommended, though possible), you would likely set this explicitly (just as you do when numerically solving in MATLAB). In most NetLogo models, however, the step size is implicit.
Some common parameters that correspond to step size in models:
speed of agents
rates of growth or decay
rates of diffusion
So, for example, if we're modeling traffic on a street with a speed limit of 100 kph (= 100,000 m / 3,600 s = 27.8 m/s), our patch size is equal to 1 m, and our agents travel at most 0.5 patches per tick, then we have:
27.8 m/s = (0.5 patches/tick) * (1 m/patch) / (step-size s/tick) = (0.5 m/tick) / (step-size s/tick)
step-size s/tick = (0.5 m/tick) / (27.8 m/s) ≈ 0.018 s/tick
So, in this case, each tick is about 0.018 seconds.
Basically, you should try to find some "per tick" parameter in your model that corresponds to a "per second" parameter in the differential equations. Then, you should be able to determine how many seconds there are per tick by comparing these parameters.
Alternatively, you could kind of cheat by just comparing plots, seeing how they line up, and then determining the step size from that. Then you can work backwards to figure out which parameters in your models are determining the step size.
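As a hypothetical MATLAB sketch of the overlay (netlogoY, myModel and y0 are placeholder names; the 0.018 s/tick value is just the traffic example above):
dt_per_tick = 0.018;                 % seconds per tick, from the example above
ticks = 0:numel(netlogoY)-1;         % netlogoY: per-tick series exported from NetLogo
t_netlogo = ticks * dt_per_tick;     % convert ticks to seconds
[t_ode, y_ode] = ode45(@myModel, [0, max(t_netlogo)], y0);  % ODE version of the model
plot(t_netlogo, netlogoY, 'o', t_ode, y_ode, '-');
xlabel('Time (s)');
legend('NetLogo (ticks rescaled)', 'MATLAB ODE');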
I would like to incorporate importance sampling into a Monte Carlo simulation to speed up the calculation.
Here I have created a simple example in MATLAB:
a=randn(10,10000);
a_sum=sum(a,1);
quantile(a_sum, 0.01)
The value at risk comes out at about -7.3159, and around 100 scenarios lie in the tail beyond the value at risk. So, by using a shift of -1, I can get more scenarios beyond the value at risk:
b=a-1;
b_sum=sum(b,1);
A common way to calculate the value at risk is to use likelihood ratios. Since I have shifted each random number by -1, I can calculate a likelihood ratio for each shifted draw. But can I combine these likelihood ratios? And how can I then calculate the value at risk?
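A minimal sketch of one way the likelihood ratios can be combined (my assumptions: each of the 10 components is standard normal under the original measure, the proposal shifts each mean to -1, and the VaR is read off as a weighted 1% quantile):
n = 10; m = 10000;
b = randn(n, m) - 1;        % draws from the shifted proposal N(-1, 1)
b_sum = sum(b, 1);
% Per-component likelihood ratio: phi(b)/phi(b + 1) = exp(b + 1/2);
% the joint ratio for a scenario is the product over the 10 components.
w = exp(b_sum + n/2);       % = prod_i exp(b_i + 1/2)
w = w / sum(w);             % normalise the weights
% Weighted 1% quantile: sort the scenario sums and accumulate the weights.
[s, idx] = sort(b_sum);
cw = cumsum(w(idx));
VaR = s(find(cw >= 0.01, 1, 'first'))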