How do I detect the timescale precision used in a simulation from within the source code?
Suppose I have a configuration parameter (cfg_delay_i) holding a delay value given by the user in femtoseconds. If the user gives 1000, my code has to wait 1000 fs (1 ps) before executing further.
#(cfg_delay_i * 1fs); // will wait only if timescale is 1ps/1fs
do_something();
If the timescale precision is 1fs there won't be any problem, but if the precision is coarser than that the expression rounds to zero and the statement acts as a zero delay.
So I want to write code that determines the timescale precision in effect and scales the delay accordingly. My expected pseudo-code is like below:
if (timeprecision == 1fs) #(cfg_delay_i * 1fs);
else if (timeprecision == 1ps) #(cfg_delay_i / 1000 * 1ps);
Please help me with the logic to determine the timescale unit and precision internally.
You can write if (int'(1fs)!=0) // the time precision is 1fs and so on. But there's no need to do this.
#(cfg_delay_i/1000.0 * 1ps)
The above works regardless of whether the precision is 1ps or smaller. Note the use of the real literal 1000.0 to keep the division real. 1ps is already a real number, so the result of the entire expression will be real. You could also do
#(cfg_delay_i/1.0e6 * 1ns)
If the time precision at the point where this code is located is coarser than 1fs, the result gets rounded to the nearest precision unit. For example, if cfg_delay_i is 500 and the current precision is 1ps, this would get rounded to #1ps.
Do be aware that whoever sets cfg_delay_i has to take the same care to make sure the value is expressed with the correct scaling/precision.
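For illustration, here is a minimal self-contained sketch (the module, names, and values are mine) showing the real-valued delay under an explicit time unit and precision:
module delay_example;
  timeunit 1ns; timeprecision 1ps;

  int unsigned cfg_delay_i = 1250; // user delay in fs (= 1.25 ps)

  initial begin
    #(cfg_delay_i / 1.0e6 * 1ns); // real-valued delay; rounds to 1 ps here
    $display("resumed at %0t", $realtime);
  end
endmodule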
This seems to work in Vivado:
// Example where we need to check the clock frequency or the time of an event
real     tscale_unit;
realtime t_edge1;
realtime t_edge2;
realtime t_event;
real     clk_freq;

initial begin
    t_edge1 = 0.0s;
    #1; // single time-unit delay
    tscale_unit = $realtime / 1ps; // normalise the time unit into picoseconds (1e-12 s)
end

always begin
    @(posedge clk); // event control (@), not a delay (#)
    t_edge2 = t_edge1;
    t_edge1 = $realtime;
    clk_freq = 1.0s / ((t_edge1 - t_edge2) * tscale_unit * 1ps);
end

always begin
    @(posedge some_event); // renamed: 'event' is a reserved keyword
    t_event = $realtime * tscale_unit * 1ps;
end
I am trying to read distance using a Raspberry Pi Pico and an ultrasonic distance sensor. While running the code in Thonny, I am getting the error:
TypeError: function missing 1 required positional arguments
The code is as below:
from machine import Pin, Timer
import utime

timer = Timer
trigger = Pin(3, Pin.OUT)
echo = Pin(2, Pin.IN)
distance = 0

def get_distance(timer):
    global distance
    trigger.high()
    utime.sleep(0.00001)
    trigger.low()
    while echo.value() == 0:
        start = utime.ticks_us()
    while echo.value() == 1:
        stop = utime.ticks_us()
    timepassed = stop - start
    distance = (timepassed * 0.0343) / 2
    print("The distance from object is ", distance, "cm")
    return distance

timer.init(freq=1, mode=Timer.PERIODIC, callback=get_distance)

while True:
    get_distance()
    utime.sleep(1)
Your initial problem is that you aren't passing the timer argument in your get_distance() call, but you have bigger problems than that. You are using a timer to call get_distance, but you are also calling get_distance in a loop. To top it off, you have two blocking while loops in your get_distance function. Who knows how long the value of echo will stay 1 or 0? Will it stay at one of those values longer than the next invocation from Timer? If so, you are going to have big problems. What you want to do is send periodic pulses to the pin and check the values. This can be done as below. This code isn't tested (although it probably works). It is at least a solid gist of the direction you should be moving in.
import machine, utime

trigger = machine.Pin(3, machine.Pin.OUT)
echo = machine.Pin(2, machine.Pin.IN)

def get_distance(timer):
    global echo, trigger  # you probably don't need this line
    trigger.value(0)
    utime.sleep_us(5)
    trigger.value(1)
    utime.sleep_us(10)
    trigger.value(0)
    pt = machine.time_pulse_us(echo, 1, 50000)
    print("The distance from object is {} cm".format((pt / 2) / 29.1))

timer = machine.Timer(-1)
timer.init(mode=machine.Timer.PERIODIC, period=500, callback=get_distance)
Parts of this code were borrowed from here and reformatted to fit your design. I was too lazy to figure out how to effectively get rid of your while loops, so I just let the internet give me that answer (machine.time_pulse_us(echo, 1, 50000)).
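One caveat worth handling (my addition, based on the documented behaviour of machine.time_pulse_us): the call returns a negative value when it times out, so you may want to guard the calculation:

def get_distance(timer):
    trigger.value(0)
    utime.sleep_us(5)
    trigger.value(1)
    utime.sleep_us(10)
    trigger.value(0)
    pt = machine.time_pulse_us(echo, 1, 50000)
    if pt < 0:  # -2: echo never started, -1: echo never ended within 50 ms
        print("echo timed out")
        return
    print("The distance from object is {} cm".format((pt / 2) / 29.1))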
Many of the ultrasonic units, such as the SRF04, nominally operate at 5 V, so you could have problems if that's what you are using.
The VL53L0X is a laser-based time-of-flight device. It only works over short ranges (about a metre or so), but it definitely works with 3.3 V on a Pico with MicroPython and Thonny:
https://www.youtube.com/watch?v=YBu6GKnN4lk
https://github.com/kevinmcaleer/vl53l0x
I am using a LinearProgressIndicator to visually display a countdown of time and to trigger a function at certain intervals. I am doing this by updating the LinearProgressIndicator's value property through a state variable _progress that gets decremented by 0.01 every 100 milliseconds.
When I set conditions based on two decimal places, or even on 0, e.g. if (_progress == 0.75), I discovered that the conditions were being skipped because the value of _progress quickly became a longer fraction that would not match my condition (e.g. 0.7502987777777). I assume this is an inherent issue of working with doubles, but my question then becomes: what is the best way to deal with this if you want to trigger actions based on the value of _progress? My approach is to broaden the conditions, for example if (_progress > 0.75 && _progress < 0.76).
Any tips/advice would be appreciated.
Thanks.
When dealing with floating-point values, you often cannot depend on strict equality. Instead you should check if the floating-point value you have is within a certain tolerance of the desired value. One approach:
bool closeTo(double value1, double value2, [double epsilon = 0.001]) =>
    (value1 - value2).abs() <= epsilon;

if (closeTo(_progress, 0.75)) {
  // Do something.
}
(package:matcher has a similar closeTo function for matching values in tests.)
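For instance, in a test (a sketch assuming package:test, which re-exports the matcher API):

import 'package:test/test.dart';

void main() {
  test('progress is near 0.75', () {
    const progress = 0.7502987777777;
    expect(progress, closeTo(0.75, 0.001));
  });
}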
Arguably, since floating-point precision scales with magnitude, your tolerance should depend on the magnitude of the values being compared.
In your case, you alternatively should strongly consider avoiding floating-point values for internal state: use fixed-point values by multiplying everything by 100 and using ints instead. That is, let _progress be an int, decrement it by 1 every 100 ms, and then you can compare against 75 or other values directly and exactly. This additionally would have the advantage of not accumulating floating-point error when you repeatedly decrement _progress. If you need to present _progress to the user or pass it to LinearProgressIndicator, use _progress / 100 instead.
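A minimal sketch of that fixed-point approach (names are hypothetical):

int _progress = 100; // hundredths of progress: 100 represents 1.00

void tick() {
  // Called every 100 ms; integer arithmetic, so no rounding error accumulates.
  _progress -= 1;
  if (_progress == 75) {
    // Fires exactly once, at 0.75.
  }
}

// Convert only at the display boundary:
// LinearProgressIndicator(value: _progress / 100)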
I need to collect voice pieces from a continuous audio stream. I later need to process the user's voice piece that has just been said (not for speech recognition). What I am focusing on is only segmentation of the voice based on its loudness.
If, after at least 1 second of silence, the voice becomes loud enough for a while and then silent again for at least 1 second, I say this is a sentence and the voice should be segmented there.
I just know I can get raw audio data from the AudioClip created by Microphone.Start(). I want to write some code like this:
void Start()
{
    audio = Microphone.Start(deviceName, true, 10, 16000);
}

void Update()
{
    audio.GetData(fdata, 0);
    for (int i = 0; i < fdata.Length; i++)
    {
        u16data[i] = Convert.ToUInt16(fdata[i] * 65535);
    }
    // ... Process u16data
}
But what I'm not sure about is:
1. Every frame, when I call audio.GetData(fdata, 0), do I get the latest 10 seconds of sound data if fdata is big enough, or less than 10 seconds if fdata is smaller?
2. fdata is a float array, and what I need is a 16 kHz, 16-bit PCM buffer. Is it right to convert the data like u16data[i] = fdata[i] * 65535?
3. What is the right way to detect loud moments and silent moments in fdata?
No, you have to read starting at the current position within the AudioClip, using Microphone.GetPosition:
    Get the position in samples of the recording.
and pass the obtained index to AudioClip.GetData:
    Use the offsetSamples parameter to start the read from a specific position in the clip.
fdata = new float[clip.samples * clip.channels];
var currentIndex = Microphone.GetPosition(null);
audio.GetData(fdata, currentIndex);
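Note that GetPosition returns the write position of what is effectively a ring buffer. As a rough sketch of my own (window is an assumed sample count, not from the original answer), you can read the most recent samples by stepping back from the write head and letting the read wrap around:

// Hypothetical: fetch the latest 'window' samples ending at the write head.
int pos = Microphone.GetPosition(deviceName);
int start = pos - window;
if (start < 0) start += audio.samples;  // wrap to the end of the ring buffer
audio.GetData(fdata, start);            // fills fdata.Length samples from 'start'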
I don't understand what exactly you convert this for. fdata will contain
    floats ranging from -1.0f to 1.0f (AudioClip.GetData)
so if for some reason you need values between short.MinValue (= -32768) and short.MaxValue (= 32767), then yes, you can do that, using a signed buffer (say s16data, a short[]) and
s16data[i] = Convert.ToInt16(fdata[i] * short.MaxValue);
(Convert.ToUInt16 would throw for negative samples.) Note however that Convert.ToInt16(float) returns the
    value, rounded to the nearest 16-bit signed integer. If value is halfway between two whole numbers, the even number is returned; that is, 4.5 is converted to 4, and 5.5 is converted to 6.
Mathf.RoundToInt uses the same round-half-to-even rule, so if you specifically want halfway values rounded up, use e.g. Mathf.FloorToInt(f + 0.5f) instead.
Your naming, however, suggests that you are actually trying to get unsigned ushort (UInt16) values. For those you cannot have negative values! So you have to shift the float values up to map the range (-1.0f | 1.0f) to (0.0f | 1.0f) before multiplying by ushort.MaxValue (= 65535):
u16data[i] = Convert.ToUInt16(Mathf.RoundToInt((fdata[i] + 1f) / 2f * ushort.MaxValue));
What you receive from AudioClip.GetData are the gain values of the audio track, between -1.0f and 1.0f.
So a "loud" moment would be one where
Mathf.Abs(fdata[i]) >= aCertainLoudThreshold;
and a "silent" moment one where
Mathf.Abs(fdata[i]) <= aCertainSilentThreshold;
where aCertainSilentThreshold might e.g. be 0.2f and aCertainLoudThreshold might e.g. be 0.8f.
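In practice (my own addition, not part of the answer above) you would usually average over a short window rather than reacting to individual samples, e.g. with an RMS level:

// Hypothetical helper: RMS loudness of one analysis window of fdata.
float WindowRms(float[] samples, int offset, int length)
{
    float sum = 0f;
    int end = Mathf.Min(offset + length, samples.Length);
    for (int i = offset; i < end; i++)
    {
        sum += samples[i] * samples[i];
    }
    return Mathf.Sqrt(sum / (end - offset));
}

// Classify each window against tunable thresholds (values assumed), then
// segment on: >= 1 s of silent windows, some loud windows, >= 1 s silent again.
// loud:   WindowRms(fdata, o, n) >= 0.2f
// silent: WindowRms(fdata, o, n) <= 0.05f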
I'm pretty new to FPGAs and Verilog, but I'm having a problem getting my code to run in the simulator the way I would expect. It seems the ISim simulator is not "operating" on integers in my code. Below is a snippet of the relevant code. I'm trying to divide the clk pulse by toggling SCK_gen every time the integer i reaches 10. When I run this code in ISim, SCK_gen never changes value. However, when I implement the code on the FPGA, it behaves as I would expect: I can observe a pulse at 1/10 the clock frequency. If anyone can point me in the right direction I'd be grateful. Thanks
//signals
//for SCK_clock
reg SCK_gen, SCK_hold;
integer i;
reg en_SCK;
wire neg_edge_SCK;

//SCK_generator
always @(posedge clk, posedge reset)
    if (reset)
    begin
        SCK_gen <= 0;
    end
    else
    begin
        i <= i + 1;
        SCK_hold <= SCK_gen;
        if (i == 10)
        begin
            SCK_gen <= ~SCK_gen;
            i <= 0;
        end
    end

//detect neg edge of SCK
assign neg_edge_SCK = SCK_hold & SCK_gen;
The result of any arithmetic or logical equality operation is 'x' if any of the operands is 'x'. Since it looks like i is never initialized, the statement i <= i + 1 has no effect on i ('x' + 1 is still 'x'), and so the comparison (i == 10) is never true. On the FPGA the register powers up with an actual value (typically 0), which is why the hardware behaves as expected even though the simulation does not.
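A minimal fix along those lines (my sketch of what this diagnosis implies) is to reset i together with the other registers so it never starts out as 'x':

always @(posedge clk, posedge reset)
    if (reset)
    begin
        SCK_gen  <= 0;
        SCK_hold <= 0;
        i        <= 0; // initialize i so i + 1 is never 'x' in simulation
    end
    else
    begin
        i <= i + 1;
        SCK_hold <= SCK_gen;
        if (i == 10)
        begin
            SCK_gen <= ~SCK_gen;
            i <= 0;
        end
    end

(Declaring integer i = 0; at the point of declaration would work in simulation as well.)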
I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to count with.
The function looks like this:
double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart /
               (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}
The problem I've been having recently (I don't think I had this problem before, and no changes have been made to the code) is that the result is not very accurate. The result does not contain any decimals, and it is even less accurate than 1 millisecond.
When I enter the expression in the debugger, the result is as accurate as I would expect.
I understand that a double cannot hold the full accuracy of a 64-bit integer, but at this time the performance counter only requires 46 bits (and a double can store 52 mantissa bits without loss).
Furthermore, it seems odd that the debugger would use a different format to do the division.
Here are some results I got. The program was compiled in Debug mode, with the Floating Point model in the C++ options set to the default (Precise, /fp:precise):
timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000
double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000
double result = perfCounter / perfFrequency;
2114117248.0000000
return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000
Result with same expression in debugger:
2114117188.0396111
Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646
Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
Does anyone know why the accuracy is different in the debugger's Watch compared to the result in my program?
Update: I tried deducting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it does appear to be accurate in all cases now.
Maybe the reason why I'm only seeing this behavior now might be because my computer's uptime is now 16 days, so the value is larger than I'm used to?
So it does appear to be a division accuracy issue with large numbers, but that still doesn't explain why the division was still correct in the Watch window.
Does it use a higher-precision type than double for its results?
Adion,
If you don't mind the performance hit, cast your QuadPart numbers to decimal instead of double before performing the division. Then cast the resulting number back to double.
You are correct about the size of the numbers. It throws off the accuracy of the floating point calculations.
For more about this than you probably ever wanted to know, see:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://docs.sun.com/source/806-3568/ncg_goldberg.html
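Since plain C++ has no built-in decimal type, a similar precision gain can be had (a sketch of mine, not from the answer above) by doing the whole-millisecond part in exact 64-bit integer arithmetic and leaving only the small remainder to the double:

double counterToMilliseconds(LONGLONG counter, LONGLONG freq)
{
    // counter * 1000 can overflow for extremely large counter values;
    // subtracting a start counter first (as in the update below) avoids that.
    LONGLONG wholeMs   = (counter * 1000) / freq; // exact integer division
    LONGLONG remainder = (counter * 1000) % freq;
    return (double)wholeMs + (double)remainder / (double)freq;
}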
Thanks, using decimal would probably be a solution too.
For now I've taken a slightly different approach, which also works well, at least as long as my program doesn't run longer than a week or so without restarting.
I just remember the performance counter of when my program started, and subtract this from the current counter before converting to double and doing the division.
I'm not sure which solution would be fastest, I guess I'd have to benchmark that first.
bool perfTimerInitialized = false;
double timerPerformanceFrequencyDbl;
LARGE_INTEGER timerPerformanceFrequency;
LARGE_INTEGER timerPerformanceCounterStart;

double timeGetExactTime()
{
    if (!perfTimerInitialized) {
        QueryPerformanceFrequency(&timerPerformanceFrequency);
        timerPerformanceFrequencyDbl = ((double)timerPerformanceFrequency.QuadPart) / 1000.0;
        QueryPerformanceCounter(&timerPerformanceCounterStart);
        perfTimerInitialized = true;
    }

    LARGE_INTEGER timerPerformanceCounter;
    if (QueryPerformanceCounter(&timerPerformanceCounter)) {
        timerPerformanceCounter.QuadPart -= timerPerformanceCounterStart.QuadPart;
        return ((double)timerPerformanceCounter.QuadPart) / timerPerformanceFrequencyDbl;
    }

    return (double)timeGetTime();
}