I am sending a fixed-size buffer (512 bytes) at 8 Mbit/s via UART out of an STM32F3, and I am seeing what seems to be a fixed delay (~2-3 bit periods) between consecutive frames.
In the screenshot below, after sending the dummy value 01010101, you can see the line idling high for quite a long time between the stop bit ("1") of one frame and the start bit ("0") of the next. The bit period of ~125 ns is as expected and the data is received successfully by another STM32, but the cumulative delay between frames (roughly 511 gaps x 2 bit periods x 125 ns, i.e. at least ~128 us over the entire buffer) is a problem for my application.
Scope screenshot
I have tried sending the buffer using HAL_UART_Transmit, HAL_UART_Transmit_DMA (a single call) and LL_USART_TransmitData8 (one byte at a time), with similar results.
Any idea what could be causing it? Thanks!
I am currently using libopus in order to encode some audio that I have.
Consulting the documentation for the encoder, one of the arguments the encode function takes is max_data_bytes, an opus_int32 with the following documentation:
Size of the allocated memory for the output payload. May be used to impose an upper limit on the instant bitrate, but should not be used as the only bitrate control
Unfortunately, I wasn't able to get much out of this definition as to how to choose that upper size, or how this argument relates to bitrate. I tried consulting some of the examples provided, such as this or this, but both define the argument as some constant without much explanation.
Could anyone help me understand the definition of this value, and what number I might be interested in using for it? Thank you!
It depends on the encoder version and the encoding parameters.
In 1.1.4 the encoder doesn't merge packets, so the upper limit should be 1275 bytes. On the decoder side, if the repacketizer is used, you may encounter packets of up to 3*1275 bytes.
Things may have changed in more recent versions; I'm fairly sure the repacketizer has been merged into the encoder in some way. Look into the RFC.
I'll just paste here some of my notes from about 1½ years ago...
//Max opus frame size is 1275 bytes, as per RFC 6716.
//If the frame duration is <= 20 ms, opus_encode always returns a one-frame packet.
//If CELT is used and the frame duration is 40 or 60 ms, a two- or three-frame packet is generated, since the max CELT frame size is 20 ms;
//in this very specific case the max packet size is multiplied by 2 or 3 respectively.
I am getting some readings off an accelerometer connected to an Arduino, which is in turn connected to MATLAB through serial communication. I would like to write the readings to a text file. A 10-second reading will write around 1000 entries, making the text file around 1 kB in size.
I will be using the following code:
%%%%%// Communication %%%%%
arduino = serial('COM6','BaudRate',9600);
fopen(arduino);
fileID = fopen('Readings.txt','w');

%%%%%// Reading from Serial %%%%%
Samples = 1000;                     % ~10 s of data at the current rate
vib = [];                           % accumulated readings
for i = 1:Samples
    scan = fscanf(arduino,'%f');
    if isfloat(scan)
        vib = [vib; scan];
        fprintf(fileID,'%0.3f\r\n',scan);
    end
end
fclose(fileID);
fclose(arduino);
Any suggestions on improving this code? Will it hit a time or size limit? This code is to be run for 3 days.
Do not use text files, use binary files. 42718123229.123123 is 18 bytes in ASCII but only 4 bytes in a binary file. Don't waste space unnecessarily. If your data is going to be used later in MATLAB, then I simply suggest you save .mat files.
Do not use a single file! Choose a reasonable file size (e.g. 100 MB) and make sure that when you reach that much data you switch to another file. You could do this by, for example, saving one file per hour. This way you minimize the damage if the software crashes 2 minutes before finishing.
Now, knowing the real dimensions of your problem, writing a text file is totally fine; nothing special is required to process such small data. But there is a problem with your code: you are growing the variable vib over time. That can cause bad performance, because you are not preallocating, and it may consume a lot of memory. I strongly recommend not keeping this variable; if you need the data, read it back from the file afterwards.
Another thing you should consider is verification of your data. What do you do when you receive fewer samples than you expect? Include timestamps! Be aware that these timestamps are not precise, because you add them on the PC side, but they let you identify whether just a few random samples are missing (which could be interpolated afterwards) or a consecutive run of maybe 100 samples is missing.
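A minimal sketch of how the points above could look together, assuming one float per line arrives over the serial port as in the question; the file-name pattern and the 3-day run time are illustrative only:
% Minimal sketch: binary logging with hourly file rotation and a timestamp per sample
arduino = serial('COM6','BaudRate',9600);
fopen(arduino);
currentHour = -1;
fileID = -1;
stopTime = now + 3;                               % run for ~3 days (datenums count days)
while now < stopTime
    c = clock;                                    % [year month day hour minute seconds]
    if c(4) ~= currentHour                        % start a new file every hour
        if fileID > 0, fclose(fileID); end
        currentHour = c(4);
        fileID = fopen(['Readings_' datestr(now,'yyyymmdd_HH') '.bin'],'w');
    end
    scan = fscanf(arduino,'%f');
    if isfloat(scan) && ~isempty(scan)
        fwrite(fileID,[now, scan(1)],'double');   % [timestamp, value] in binary form
    end
end
fclose(fileID);
fclose(arduino);
Reading a file back afterwards is then just data = fread(fid,[2 Inf],'double')', with the timestamps in the first column and the readings in the second.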
This might be the wrong place for this, but I am trying to use a simple BMP image in a piece of thermography software called IRSoft. Is anyone familiar with the .BMT file type?
I don't really want to reverse-engineer it too much, but maybe someone else has an idea.
In the status line of IRSoft you can see the resolution of your camera. In my case it is 160x120 pixels. My BMT files always have a size of 230588 bytes, which works out to roughly 12 bytes per pixel...
It seems to me that the last 160*120*4 = 76800 bytes of the BMT file represent the thermal image:
4 bytes for every pixel. At file offset 153788 I find the upper-left pixel, followed by the rest of the top line. At the last offset, 230584, I find the lower-right pixel.
I don't know the meaning of the rest of the file. Perhaps the real image, reference temperatures...
Do you know how to calculate the temperature from these values?
This table translates the 4-byte values approximately to temperatures in degrees Celsius (the values look as though they could simply be IEEE 754 single-precision floats):
I am afraid they differ from other files.
0x41d00000 and more: 26.0°C and more
0x41c00000 and more: 24.0°C and more
0x41b00000 and more: 22.0°C and more
0x41a00000 and more: 20.0°C and more
0x41900000 and more: 18.0°C and more
I would like to write a program that takes a video as input, creates an output video file, and (starting after a certain number of frames) begins writing modified frames to the output file frame by frame.
The modification will need to work on individual columns of pixels, one at a time.
Viewing this as a problem to be solved in Matlab, with each frame as a matrix... I cannot think of a way to make this computationally tractable.
I am hoping that someone might be able to offer suggestions on how I might begin to approach this problem.
Here are some details, in case it helps:
I'm interested in transforming a video in the following way:
Viewing a video as a sequence of (MxN) matrices, where each matrix is called a frame:
Take an input video and create a new file for the output video
For each column V in frame(i) of the output video, replace this column with column V in frame(i + V - N) of the input video.
For example: the new right-most column (column N) of frame(i) will contain column N of frame(i + N - N) = frame(i)... so that there is no replacement. The new 2nd to right-most column (column N-1) of frame(i) will contain column N-1 of [frame(i+N-1-N) = frame(i-1)].
In order to make this work (i.e. in order to not run out of previous frames), this column replacement will start on frame N of the video.
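To make the indexing concrete, here is the mapping written out in MATLAB (i and N are just example values):
% Column-to-source-frame mapping for one output frame (example values only)
i = 200;  N = 100;        % output frame index and frame width in columns
V = 1:N;                  % column indices
srcFrame = i + V - N;     % column V of output frame i comes from input frame (i + V - N)
% srcFrame(N)   == i         (right-most column: no delay)
% srcFrame(N-1) == i - 1
% srcFrame(1)   == i - N + 1 (left-most column: maximum delay)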
So... This is basically a variable delay running from left to right?
As you say, you do have two ways of going about this:
a) Use lots of memory
b) Use lots of file access
Your memory requirements increase as the cube of the (linear) size of the video: the size of each frame grows, AND the number of previous frames you need to have open or referenced grows. I.e. doubling the frame dimensions requires 4x the memory per frame and 2x the number of frames open.
I think that Matlab's memory management will probably make this hard to do for e.g. a 1080p video, unless you have a pretty high-end workstation. Do you? A quick test-read of a 720p video gives 1.2MB per frame. 1080p would then be approx 5MB per frame, and you would need to have 1920 frames open: approx 10GB needed.
It will be more efficient to load frames individually, if you don't have enough memory - otherwise you will be using pagefiles and that'll be slower than loading frame-by-frame.
Your basic code reading each frame individually could be something like this:
VR=VideoReader('My_Input_Video_Filename.avi');
VW=VideoWriter('My_Output_Video_Filename.mp4','MPEG-4'); % the MPEG-4 profile expects an .mp4 file name
open(VW);
NumInFrames=get(VR,'NumberOfFrames');
InWidth=get(VR,'Width');
InHeight=get(VR,'Height');
OutFrame=zeros(InHeight,InWidth,3,'uint8');
for frame=InWidth+1:NumInFrames
    for subindex=1:InWidth
        % column subindex comes from input frame (frame + subindex - InWidth),
        % matching the spec frame(i + V - N)
        CData=read(VR,frame+subindex-InWidth);
        OutFrame(:,subindex,:)=CData(:,subindex,:);
    end
    writeVideo(VW,OutFrame);
end
close(VW);
This will probably be slow, and I haven't fully checked it works, but it does use a minimum amount of memory.
The best case for minimum file access is probably a ring buffer arrangement using the maximum amount of memory, which would look something like this:
VR=VideoReader('My_Input_Video_Filename.avi');
VW=VideoWriter('My_Output_Video_Filename.mp4','MPEG-4'); % the MPEG-4 profile expects an .mp4 file name
open(VW);
NumInFrames=get(VR,'NumberOfFrames');
InWidth=get(VR,'Width');
InHeight=get(VR,'Height');
CDatas=read(VR,[1 InWidth]);          % pre-load the first InWidth frames into the ring buffer
BufferIndex=1;
OutFrame=zeros(InHeight,InWidth,3,'uint8');
for frame=InWidth+1:NumInFrames
    CDatas(:,:,:,BufferIndex)=read(VR,frame);   % newest frame overwrites the oldest slot
    tempindices=circshift(1:InWidth,[1,-1*BufferIndex]);
    for subindex=1:InWidth
        OutFrame(:,subindex,:)=CDatas(:,subindex,:,tempindices(subindex));
    end
    writeVideo(VW,OutFrame);
    BufferIndex=mod(BufferIndex,InWidth)+1;     % keep the ring index in 1..InWidth
end
close(VW);
The buffer indexing code may need some tweaking there, but something along those lines would be a minimum file access, maximum memory use solution.
For a given PC with more or less memory, you can implement something in between these two (i.e. keeping somewhere between 1 and all of the frames in memory per iteration) as the best fit.
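If it helps, here is a rough sketch of that middle ground, reusing the variables set up in the blocks above (VR, VW, OutFrame, etc.); K is an arbitrary tuning value (1 <= K <= InWidth) and the buffer bookkeeping is only lightly checked:
% Compromise: keep only the K most recent input frames in a ring buffer and
% re-read anything older from disk one frame at a time.
K = 256;                                           % tune to the available memory
BufSlot = @(f) mod(f-1,K)+1;                       % frame number -> ring-buffer slot
Buffer = zeros(InHeight,InWidth,3,K,'uint8');
initFrames = read(VR,[InWidth-K+1 InWidth]);       % pre-load the K newest "past" frames
for p = 1:K
    Buffer(:,:,:,BufSlot(InWidth-K+p)) = initFrames(:,:,:,p);
end
for frame = InWidth+1:NumInFrames
    Buffer(:,:,:,BufSlot(frame)) = read(VR,frame); % newest frame evicts the oldest
    for subindex = 1:InWidth
        src = frame - InWidth + subindex;          % source frame for this column
        if src > frame - K                         % still held in the ring buffer
            OutFrame(:,subindex,:) = Buffer(:,subindex,:,BufSlot(src));
        else                                       % too old: re-read from disk
            CData = read(VR,src);
            OutFrame(:,subindex,:) = CData(:,subindex,:);
        end
    end
    writeVideo(VW,OutFrame);
end
With K = InWidth this degenerates to the all-in-memory version above, and with K = 1 you are essentially back to reading every frame from disk.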
Matlab will be quite slow for this kind of task, but it is a good way of getting your algorithm right and working out indexing bugs and that kind of thing. Converting to a compiled language would give a good increase in speed; I converted a Matlab script to a C# program in a couple of hours and got a 10x increase in speed over an optimised script whose run time was dominated by the number of file reads.
Hope this helps, good luck!