while (1)
{
    for (x = 0; x < 5;)  // note: x is incremented elsewhere
    {
        DAC->DHR12R1 = (uint16_t)(x / 5.0 * 4095 * 3.0 / 3.3);
    }
}
What does this loop mean? I know that DHR12R1 is the channel 1 12-bit right-aligned data holding register.
I've converted it to a standard C program to see what values are written to the DAC register.
#include <stdio.h>
#include <stdint.h>

int x;

int main() {
    for (x = 0; x <= 5; x++)  // Why x <= 5? See note at bottom
        printf("x=%d DAC->DHR12R1=%u\n", x, (uint16_t)(x / 5.0 * 4095 * 3.0 / 3.3));
    return 0;
}
Output:
$ gcc -Wall -Wextra dac.c -o dac && ./dac
x=0 DAC->DHR12R1=0
x=1 DAC->DHR12R1=744
x=2 DAC->DHR12R1=1489
x=3 DAC->DHR12R1=2233
x=4 DAC->DHR12R1=2978
x=5 DAC->DHR12R1=3722
This value will eventually end up in the DAC Channel 1 Data Output Register DAC->DOR1, and gets converted to a voltage according to the formula
U = Vref * DAC->DOR1 / 4095
So, if your Vref is 3 Volts, then you'd get 0 Volts at x=0, 0.545 Volts at x=1, etc.
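As a quick check, here is a small sketch of my own (not part of the original answer; Vref = 3.0 V is just the value assumed above):

#include <stdint.h>
#include <stdio.h>

/* U = Vref * DOR / 4095, per the formula above */
static double dac_to_volts(uint16_t dor, double vref) {
    return vref * dor / 4095.0;
}

int main(void) {
    printf("%.3f V\n", dac_to_volts(0, 3.0));     /* 0.000 V at x = 0 */
    printf("%.3f V\n", dac_to_volts(744, 3.0));   /* 0.545 V at x = 1 */
    printf("%.3f V\n", dac_to_volts(2978, 3.0));  /* 2.182 V at x = 4 */
    return 0;
}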
Note: I've assumed that x is incremented by 1 in some interrupt handler; then x can briefly be 5 before the for loop resets it to 0. If x can be incremented by arbitrary values, or if this interrupt can occur more than once per loop iteration, then the result will wrap around at 4096. It means that the output voltage will normally fall between GND and 0.727*Vref, with occasional short spikes above that.
Note also that if two increments occur in short succession at the wrong moment (one before x < 5 is checked, and the other right after that, before x = 0 is executed), then one pulse will be lost.
Therefore, you should consider moving the limit check into the interrupt where the increment occurs, like
x = (x + 1) % 5;
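For example, a minimal sketch (the handler name TIMx_IRQHandler is a placeholder, and the interrupt-flag clearing and the DAC definition come from your device header; this is not taken from the original code):

#include <stdint.h>

volatile unsigned x = 0;    /* shared between the ISR and the main loop */

void TIMx_IRQHandler(void)  /* hypothetical interrupt source that drives x */
{
    /* clear the peripheral's interrupt flag here (device-specific) */
    x = (x + 1) % 5;        /* increment and limit check in one place */
}

void main_loop(void)
{
    while (1)
    {
        /* x is now always in 0..4, so no pulse can be lost */
        DAC->DHR12R1 = (uint16_t)(x / 5.0 * 4095 * 3.0 / 3.3);
    }
}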
Related
My post is related to the following questions:
Avoid division by zero between matrices in MATLAB
How to stop MATLAB from rounding extremely small values to 0?
I am writing a MATLAB function that I am exporting with codegen. When codegen executes division between two numbers, both primitive doubles, codegen reports that the result is of type :Inf x :Inf. Here is my code snippet:
travel_distance = stop_position - start_position;
duration = stop_time - start_time;
velocity = (travel_distance / duration);
Neither the travel_distance nor the duration variable is zero. During codegen, when I examine the variables travel_distance and duration, they are both :Inf x 1. However, velocity is showing as :Inf x :Inf. The same is shown for the (travel_distance / duration) block of code. I suspect that I am running into the scenario mentioned by the author in the second link, which contains this quote:
MATLAB will not change the value to 0. However, it is possible that the result of using the value in an operation is indistinguishable from using 0
I've tried several things to try and solve my problem, and am still getting the same thing. For example:
% increment by a small number
velocity = ((travel_distance + 0.0001) / (duration + 0.0001));
% check if nan, and set to 0
velocity(isnan(velocity)) = 0;
% check if nan or inf and set to 0
if (isnan(velocity) || isinf(velocity))
velocity = 0;
end
I'd actually expect that the travel_distance, duration, and velocity are all of type 1x1, since I know these should be primitive results.
What can I do to ensure MATLAB performs codegen correctly, by allowing the velocity variable to be either an :Inf x 1 or a 1x1? (Double or int output is fine.)
I don't think this is related to division by zero, as shown by the attempts you made at avoiding that. :Inf x 1 refers to a vector of unknown length, and :Inf x :Inf to a matrix of unknown size. If duration is a vector, then travel_distance / duration is trying to solve a system of linear equations.
If you use ./ (element-wise division) instead of /, then Codegen might be able to generate the right code:
velocity = travel_distance ./ duration;
I have the following simple ODE:
dx/dt=-1
With initial condition x(0)=5, I am interested in when x(t)==1. So I have the following events function:
function [value,isterminal,direction] = test_events(t,x)
value = x-1;
isterminal = 0;
direction = 0;
end
This should produce an event at t=4. However, if I run the following code I get two events, one at t=4, and one at the nearby location t=4+5.7e-14:
options = odeset('Events', @test_events);
sol = ode45(@(t,x) -1, [0 10], 5, options);
fprintf('%.16f\n',sol.xe)
% 4.0000000000000000
% 4.0000000000000568
If I run similar codes to find when x(t)==0 or x(t)==-1 (value = x; or value = x+1; respectively), I have only one event. Why does this generate two events?
UPDATE: If the options structure is changed to the following:
options = odeset('Events', @test_events, 'RelTol', 1e-4);
...then the ODE only returns one event at t=4+5.7e-14. If 'RelTol' is set to 1e-5, it returns one event at t=4. If 'RelTol' is set to 1e-8, it returns the same two events as the default ('RelTol'=1e-3). Additionally, changing the initial condition from x(0)=5 to x(0)=4 produces a single event, but setting x(0)=4 and 'RelTol'=1e-8 produces two events.
UPDATE 2: Observing the sol.x and sol.y outputs (t and x, respectively), the time progresses as integers [0 1 2 3 4 5 6 7...], and x progresses as integers up until x(t=5) like so: [5 4 3 2 1 1.11e-16 -1.000 -2.000...]. This indicates that there is something that occurs between t=4 and t=5 that creates a 'bump' in the ODE solution. Why?
One speculation that might explain how rounding errors could occur in this simple problem: the solution is interpolated between the internal steps using the stage evaluations k_i of the ODE derivative function, also called "dense output". The theoretical form is
x(t_k + u*h) = x_k + h*(b_1(u)*k_1 + b_2(u)*k_2 + ... + b_s(u)*k_s)
where 0 <= u <= 1 is the parameter over the interval between the internal points, that is, t = (1-u)*t_k + u*t_{k+1} and h = t_{k+1} - t_k.
The coefficient polynomials b_i(u) are non-trivial. While in this example all the stage values k_i = -1 are constant, the evaluation of the sum b_1(u) + ... + b_s(u) can accumulate rounding errors that become visible in the solution value close to a root, even if x_k and x_{k+1} are exact. In that range of accumulated floating-point noise, the value might oscillate around the root, leading to the detection of multiple zero crossings.
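Spelling this out for this concrete ODE (my own elaboration, using the notation above):

\[
x(t_k + u h) \;=\; x_k + h \sum_{i=1}^{s} b_i(u)\,k_i \;=\; x_k - h \sum_{i=1}^{s} b_i(u),
\qquad k_i = -1 \text{ because } x' = -1.
\]

In exact arithmetic, consistency of the interpolant forces sum_i b_i(u) = u, so x(t_k + u*h) = x_k - u*h exactly. In floating point the computed sum is u + eps(u), with eps(u) on the order of machine epsilon, so the interpolant is x_k - u*h - h*eps(u); near the event level x = 1 this noise term can flip the sign of x - 1 more than once, producing two detected crossings.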
covergroup xxxx;
  yyyy : coverpoint (zzzz)
  {
    // transition bins use parentheses, not braces
    bins sequence_1 = (0 => 1 => 2 => 3);
    bins sequence_2 = (0 => 1 => 2 => 3 => 4);
    bins sequence_3 = (0 => 1 => 2 => 3 => 4 => 5 => 6 => 7 => 8 => 9);
    bins sequence_4 = (0 => 1 => 2 => 3 => 4 => 5 => 6 => 7 => 8 => 9 => 10 => 11 => 12 => 13 => 14 => 15 => 16 => 17);
    bins sequence_5 = (0 => 1 => 2 => 3 => 4 => 5 => 6 => 7 => 8 => 9 => 10 => 11 => 12 => 13 => 14 => 15 => 16 => 17 => 18);
    bins sequence_6 = (0 => 1 => 2 => 3 => 4 => 5 => 6 => 7 => 8 => 9 => 10 => 11 => 12 => 13 => 14 => 15 => 16 => 17 => 18 => 19);
  }
endgroup
zzzz is a counter register which counts from 0 up to 3, 4, 9, 17, 18, or 19, depending on its input.
In coding this functional coverage, the idea is to hit exactly one of the bins when a specific series of transitions occurs.
So if the counter goes, for example, from 0 up to 4 as in sequence_2, would that also hit sequence_1, since the 0-to-3 sequence is present at the start of it?
Thanks
Yes, if sequence_2 is hit, that implies that sequence_1 is also hit. What you really want is to cover what happens when the counter reaches its limit, i.e., does it go back to 0, or does it stay at the limit in the next cycle? There is no need to enumerate every intermediate value of the counter; a covergroup is not a checker. It only records that a certain scenario in your test was achieved.
I'm trying to code my own implementation of a Linear Feedback Shift Register (LFSR) in MATLAB in order to generate a pseudo-random sequence of numbers. Suppose I need to generate a sequence from 1 to 16,384 (2^14) in random order; my initial state is the number 329 and the tap is 7.
This is the code I've got so far:
function [rndV] = lfsr(limit, init, tap)
    X = -1;
    rndV = init;
    bits = nextpow2(limit);
    while (X ~= init)
        if (X == -1)
            X = init;
        end
        a = bitget(X, bits);            % output bit (MSB of the register)
        b = bitget(X, tap);             % tap bit
        X = bitshift(X, 1, bits);       % shift left, truncated to 'bits' bits
        X = bitset(X, 1, bitxor(a, b)); % feed the XOR back into bit 1
        rndV = [rndV X];
    end
end
The parameters are:
limit = 16,384
init = 329
tap = 7
If I understand the LFSR correctly, must the algorithm loop until the initial state is found again? Must this loop generate all numbers between 1 and 16,384 in random order?
Something is wrong in my code, or maybe I've misunderstood the LFSR algorithm, but I'm getting just 22 numbers in random order before the initial state (329) is found again.
I want to achieve the same as described here, but in MATLAB. Thanks!
Only a primitive polynomial can give the full range of random numbers. Check here for primitive polynomials, or here if you need a bigger order of LFSR.
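For contrast, here is a small C sketch of my own (not the linked code): a 14-bit Fibonacci LFSR whose feedback uses the tap set {14, 13, 12, 2}, i.e. the polynomial x^14 + x^13 + x^12 + x^2 + 1, which is commonly listed as maximal-length for 14 bits. With a primitive polynomial the register visits all 2^14 - 1 = 16383 nonzero states before returning to the seed; with the single tap pair {14, 7} from the question it falls into a much shorter cycle.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint16_t init = 329;   /* any nonzero 14-bit seed works */
    uint16_t lfsr = init;
    unsigned period = 0;

    do {
        /* feedback = XOR of bits 14, 13, 12, 2 (1-indexed) */
        uint16_t bit = ((lfsr >> 13) ^ (lfsr >> 12) ^ (lfsr >> 11) ^ (lfsr >> 1)) & 1u;
        lfsr = (uint16_t)(((lfsr << 1) | bit) & 0x3FFF);  /* keep 14 bits */
        period++;
    } while (lfsr != init);

    printf("period = %u\n", period);  /* expect 16383 = 2^14 - 1 */
    return 0;
}

The equivalent change in the MATLAB function would be to XOR bits 14, 13, 12, and 2 of X instead of just bits 'bits' and 'tap'.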
I am trying to understand roundoff error for basic arithmetic operations in MATLAB and I came across the following curious example.
(0.3)^3 == (0.3)*(0.3)*(0.3)
ans = 0
I'd like to know exactly how the left-hand side is computed. MATLAB documentation suggests that for integer powers an 'exponentiation by squaring' algorithm is used.
"Matrix power. X^p is X to the power p, if p is a scalar. If p is an integer, the power is computed by repeated squaring."
So I assumed (0.3)^3 and (0.3)*(0.3)^2 would return the same value. But this is not the case. How do I explain the difference in roundoff error?
I don't know anything about MATLAB, but I tried it in Ruby:
irb> 0.3 ** 3
=> 0.026999999999999996
irb> 0.3 * 0.3 * 0.3
=> 0.027
According to the Ruby source code, the exponentiation operator casts the right-hand operand to a float if the left-hand operand is a float, and then calls the standard C function pow(). The float variant of the pow() function must implement a more complex algorithm for handling non-integer exponents, which would use operations that result in roundoff error. Maybe MATLAB works similarly.
Interestingly, scalar ^ seems to be implemented using pow while matrix ^ is implemented using square-and-multiply. To wit:
octave:13> format hex
octave:14> 0.3^3
ans = 3f9ba5e353f7ced8
octave:15> 0.3*0.3*0.3
ans = 3f9ba5e353f7ced9
octave:20> [0.3 0;0 0.3]^3
ans =
3f9ba5e353f7ced9 0000000000000000
0000000000000000 3f9ba5e353f7ced9
octave:21> [0.3 0;0 0.3] * [0.3 0;0 0.3] * [0.3 0;0 0.3]
ans =
3f9ba5e353f7ced9 0000000000000000
0000000000000000 3f9ba5e353f7ced9
This is confirmed by running octave under gdb and setting a breakpoint in pow.
The same is likely true in matlab, but I can't really verify.
Thanks to @Dougal I found this:
#include <stdio.h>

int main() {
    double x = 0.3;
    printf("%.40f\n", (x * x * x));

    long double y = 0.3;
    printf("%.40f\n", (double)(y * y * y));
}
which gives:
0.0269999999999999996946886682280819513835
0.0269999999999999962252417162744677625597
The case is strange because the computation with more digits gives a worse result. This is due to the fact that the initial number 0.3 is in any case approximated with only a few digits, so we start with a relatively "large" error. In this particular case, the computation with few digits happens to produce another "large" error, but with the opposite sign, which compensates for the initial one. The computation with more digits instead produces a second, smaller error, but the first one remains.
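To see the compensation directly, here is a small sketch of my own (not from the answers above; it assumes a platform where long double is wider than double, such as x86). The %a format prints hex floats, so a one-ulp difference shows up in the last digit, matching the octave hex dump earlier:

#include <stdio.h>

int main(void) {
    double x = 0.3;                      /* nearest double, slightly below 0.3 */
    double xxx = x * x * x;              /* two intermediate roundings to double */

    long double lx = x;                  /* same start value, extended precision */
    double p = (double)(lx * lx * lx);   /* one final rounding, like pow() above */

    printf("x*x*x           = %a\n", xxx);
    printf("long double way = %a\n", p);
    printf("0.027           = %a\n", 0.027);  /* closest double to 0.027 */
    return 0;
}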
Here's a little test program that follows what the system pow() from Source/Intel/xmm_power.c, in Apple's Libm-2026, does in this case:
#include <stdio.h>

int main() {
    // basically lines 1130-1157 of xmm_power.c, modified a bit to remove
    // irrelevant things
    double x = .3;
    int i = 3;

    // calculate ix = f**i
    long double ix = 1.0, lx = (long double) x;

    // calculate x**i by doing lots of multiplication
    int mask = 1;

    // for each of the bits set in i, multiply ix by x**(2**bit_position)
    while (i != 0)
    {
        if (i & mask)
        {
            ix *= lx;
            i -= mask;
        }
        mask += mask;
        lx *= lx;  // In double this might overflow spuriously, but not in long double
    }
    printf("%.40f\n", (double) ix);
}
This prints out 0.0269999999999999962252417162744677625597, which agrees with the results I get for .3 ^ 3 in Matlab and .3 ** 3 in Python (and we know the latter just calls this code). By contrast, .3 * .3 * .3 for me gets 0.0269999999999999996946886682280819513835, which is the same thing that you get if you just ask to print out 0.027 to that many decimal places and so is presumably the closest double.
So there's the algorithm. We could trace exactly what value is set at each step, but it's not too surprising that a different algorithm for the same computation would round to a very slightly smaller number.
Read Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (this is a reprint by Oracle). Do understand it. Floating point numbers are not the real numbers of calculus. Sorry, no TL;DR version available.