How do I replay a .blf file in CANoe in real-time?

I found several questions on the subject here on SO, so I figured this one could fit as well.
In CANoe/CANalyzer Offline Mode, it is possible to start the measurement by replaying data that was previously logged, say in a .blf file. To configure this, please refer to these questions:
How do I play a blf file in CANalyzer
Running a Blf file in constant Loop for Emulation using CAPL
However, the replay runs as fast as possible. I remember reading this in a document, but I can't find where.
Is there a way to slow down the replay speed of a .blf (or any) log file, and what is it?
Or, conversely, does anybody have a reference to documentation stating that this can't be done? I have the feeling that this replay speed is affecting some scripts I'm developing, since my PC can't be "as fast as possible".

As you state, by default CANoe reads and replays the data as fast as possible in offline mode.
However, you can set an AnimationDelay, which slows down the replay. The value has to be set in the CAN.ini file. After setting it, you do not start the replay by clicking "Start" but rather by clicking "Animate" (the lightning bolt next to a sheet of paper).
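For illustration, the entry could look roughly like this. Note that only the AnimationDelay key itself is documented; the section name below is an assumption and may differ between CANoe versions, so verify it against your installation's CAN.ini:

    ; CAN.ini
    [Settings]           ; assumed section name - check your CANoe version
    AnimationDelay=500   ; per-event delay (ms) used by "Animate" replay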
You can search the CANoe documentation for "Animate" for more details.

Related

How do I start and stop the flow of a connection to an internet server?

I'm using an ESP8266 with ESP8266WiFi and ESP8266HTTPClient libraries. My app doesn't have enough memory to download the entire JSON file that I need, but all I really need is a few fields from it, so I can discard most of it as I read it in.
What I don't understand is how to start, stop, or otherwise slow down the incoming data so that I can process it and pick out what I need as it comes in from the server. I have to use a fairly small buffer when I make the connection due to memory limitations caused by the rest of the program.
Is there a way to fill the buffer from the server, pause the transmission, process and clear the data in the buffer, and then resume the transmission until the whole JSON file is processed?
Sounds like you will want to use a streaming JSON parser. There are a couple of forks of such a library on GitHub. https://github.com/mrfaptastic/json-streaming-parser2 seems to be the one still maintained.
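If it helps, here is a minimal sketch of the overall pattern, assuming the JsonStreamingParser/JsonListener API that these forks inherit from the original squix78 library (the field name "temperature" and the URL are placeholders, not part of your project). Note that you do not need an explicit pause/resume: if you stop calling read(), TCP flow control makes the server wait once its send window fills, so consuming one byte at a time naturally throttles the transfer.

    #include <ESP8266WiFi.h>
    #include <ESP8266HTTPClient.h>
    #include <JsonStreamingParser.h>  // streaming parser, never holds the full document
    #include <JsonListener.h>

    // Listener that keeps only the fields of interest and discards the rest.
    class FieldGrabber : public JsonListener {
      String currentKey;
    public:
      String temperature;  // placeholder field of interest
      void key(String k) { currentKey = k; }
      void value(String v) { if (currentKey == "temperature") temperature = v; }
      void whitespace(char c) {}
      void startDocument() {}
      void endDocument() {}
      void startObject() {}
      void endObject() {}
      void startArray() {}
      void endArray() {}
    };

    void fetchFields() {
      WiFiClient client;
      HTTPClient http;
      http.begin(client, "http://example.com/big.json");  // placeholder URL
      if (http.GET() == HTTP_CODE_OK) {
        JsonStreamingParser parser;
        FieldGrabber listener;
        parser.setListener(&listener);
        WiFiClient *stream = http.getStreamPtr();
        // Feed the parser one byte at a time; unread data simply waits in
        // the TCP receive window, so no explicit pause is needed.
        while (http.connected() || stream->available()) {
          if (stream->available()) {
            parser.parse((char)stream->read());
          } else {
            yield();  // let the WiFi stack run while waiting for data
          }
        }
        Serial.println(listener.temperature);
      }
      http.end();
    }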

Where does video_player keep network files, and can you keep multiple ones?

I've been working with the video_player package from Flutter. I'm mostly testing videos taken from a URL. My code is very similar to the one from the examples.
Everywhere I read about it, I see that this library does not support caching of videos. But what exactly does that mean? What exactly happens behind the scenes, and how would the behaviour change if caching were actually implemented? How is this different from buffering? Are the video files simply downloaded to our device?
If yes, then where are those files kept?
One additional question: how can I check the network consumption caused by such a connection? I've tried using Dev Tools, but the network tab is always empty.
One last thing: is it possible to pre-initialize the next videos, so that when we want to switch between them, they are already partially pre-loaded?
You can use a package that helps you manage caching: flutter_cache_manager
https://pub.dev/packages/flutter_cache_manager
You can use it in combination with video_player. However, you would have to download the whole file first to then be able to retrieve it for video_player to consume.
An idea would be to stream the video and also download a copy locally. This, however, would consume more data than just downloading and caching the video first, then playing it locally.
As for how to check network consumption, I am not sure.
Starting with the theory:
1. A cache is a high-speed storage area, while a buffer is a normal storage area in RAM for temporary storage.
2. A cache is made from static RAM, which is faster than the slower dynamic RAM used for a buffer.
3. A buffer is mostly used for input/output processes, while a cache is used during reading and writing processes from the disk.
4. A cache can also be a section of the disk, while a buffer is only a section of RAM.
5. A buffer can be used in keyboards to edit typing mistakes, while a cache cannot.
When it comes to video buffering, just look at the YouTube app: you can see the buffer being built as the grey line grows ahead of the red one. Information stored this way is mostly not accessible at all, as Android uses a combination of RAM allocation for both caching and buffering as it sees fit for the currently active process.
Technically, you could try pre-loading different videos by starting and pausing all of them at once, but I cannot imagine how much tampering with system memory control that would take; even YouTube doesn't work like that.

Using EEPROM in STM32f10x

I'm using an STM32F103, and in my program I need to save some bytes in the internal flash memory. But as far as I know, I have to erase a whole page before writing to it, which takes time.
This delay causes my display to blink.
Can anybody help me to save my data without consuming so much time?
Here is a list that may help:
1- MCU: STM32F103
2- IDE: Keil µVision
3- using the HAL driver provided by STM32CubeMX
4- sample data to save in flash: {0x53, 0xa0, 0x01, 0x54}
In the link below, you can find the code that I'm using.
FLASH_PAGE for Keil
The code you provide doesn't seem to be implemented well. It basically does two things each time you initiate a write operation:
Erase the page (this is the part that takes time)
Starting from the given pointer, write until it hits a zero.
This is a very inefficient way of using the flash.
Probably the simplest and most well-known method is the one described in ST's AN2594, although it has some limitations.
Still, at some point a page erase will be necessary regardless of the method you use, and there is no way to avoid some delay, unless your uC supports dual flash banks (the STM32F103 doesn't have this feature). You need to plan the timing of flash writes and display refreshes accordingly. If you need periodic writes to flash, there is probably some higher-level error in your design.
To solve this problem, I used another library that ST itself provides. I had to include "eeprom.h" in my project and then add "eeprom.c" to it. You can easily find these files on the Internet.
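For reference, usage of that layer typically looks like the sketch below. The function names (EE_Init, EE_ReadVariable, EE_WriteVariable) and the VirtAddVarTab table follow ST's AN2594 sample code; the virtual addresses and the packing of the question's sample bytes into half-words are choices made here for illustration:

    #include "stm32f1xx_hal.h"
    #include "eeprom.h"  /* ST's EEPROM emulation layer (AN2594 sample code) */

    /* Virtual addresses for the persisted variables; the values are arbitrary
       but must be unique. NB_OF_VAR comes from eeprom.h. */
    uint16_t VirtAddVarTab[NB_OF_VAR] = {0x1111, 0x2222};

    void save_settings(void)
    {
      HAL_FLASH_Unlock();
      EE_Init();  /* repairs/transfers pages if a previous write was interrupted */

      /* The emulation stores 16-bit values, so the four sample bytes
         {0x53, 0xa0, 0x01, 0x54} are packed into two half-words here. */
      EE_WriteVariable(VirtAddVarTab[0], 0x53a0);
      EE_WriteVariable(VirtAddVarTab[1], 0x0154);

      HAL_FLASH_Lock();
    }

    void load_settings(uint16_t *hi, uint16_t *lo)
    {
      EE_ReadVariable(VirtAddVarTab[0], hi);
      EE_ReadVariable(VirtAddVarTab[1], lo);
    }

The page erase still happens eventually, but only when a page fills up, so it moves out of the per-write hot path and the display blink becomes a rare event you can schedule around.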

Waveform representation of any audio on iPhone

I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this Solution
As this solution uses AVAssetReader, it takes too much time to display the waveform.
Can anyone please help? Is there any other way to display the waveform more quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least as of the last time I tried (this may have changed with iOS 6). A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.
An audio app that I built reads each CMSampleBufferRef into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReader). You can configure the queue to give you enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful not to use too much memory by making the buffer too big, or to make it so big that it causes issues with playback. I just do not know how long processing the waveform will take, and I will test it on my older iPods and an iPhone 4 before trying it on my iPhone 5 to see if they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio

How can I monitor an mp3 live stream to detect corruption?

Once a month the mp3 stream messes up, and the only way to tell it has messed up is by listening to it as it streams. Is there a script, program, or tool I can use to monitor the live stream at a given URL and raise some kind of flag when it corrupts?
Normally it plays a song or some music, but once a month, every month, at random, the stream corrupts and starts playing chipmunk-like trash audio. Any ideas on this? I am just getting started with this and have no idea at all.
Typically, this will happen when you play a track of the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders (going straight from files) will compress to MP3 just fine, but assume a fixed sample rate, whatever they are configured for; typically this is 44.1kHz. If you drop in a 48kHz track, or a 22.05kHz track, it will play at the wrong speed and cause all sorts of random issues with the stream.
The problem is easy enough to verify. Simply create a file with a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it automatically, since your stream isn't actually corrupt... it just sounds incorrect. You will have to scan all of your files for their sample rate. FFmpeg in a script should be able to help you with that.
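For example, a small shell loop around ffprobe (which ships with FFmpeg; the options shown are standard) prints the sample rate of every file so outliers stand out; treat it as a sketch and adapt the paths to your library:

    for f in *.mp3; do
      rate=$(ffprobe -v error -show_entries stream=sample_rate \
             -of default=noprint_wrappers=1:nokey=1 "$f")
      echo "$rate  $f"
    done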
Now, if the problem actually is a corrupt MP3 stream, then you have problems on your encoding side. I suspect simply swapping out whatever DLL or module you're using with a recent stable version of LAME will help.
To detect a corrupt MP3 stream, your encoder must be using CRC. If you enable it, you should be able to read through the headers of each frame to find the CRC, and then run it on the audio data. In the event you get an error (or several frames with errors), you can then trigger a warning.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
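As a starting point, a checker might parse each 4-byte frame header like the sketch below (MPEG-1 Layer III assumed for brevity). It verifies the sync word, reads the sample rate, and reports whether the protection bit says a CRC-16 follows the header; actually validating that CRC additionally requires reading the two CRC bytes and the side information, which is omitted here.

    #include <cstdint>

    /* MPEG-1 sample-rate table, indexed by the 2 sample-rate bits. */
    static const int kSampleRates[4] = {44100, 48000, 32000, 0 /* reserved */};

    struct Mp3Header {
        bool valid;
        bool crcProtected;  /* protection bit == 0 means a CRC-16 follows */
        int  sampleRate;
    };

    /* h points to the 4 header bytes of one frame, first byte first. */
    Mp3Header parseHeader(const uint8_t h[4]) {
        Mp3Header out = {false, false, 0};
        /* Frame sync: 11 set bits, i.e. 0xFF then the top 3 bits of h[1]. */
        if (h[0] != 0xFF || (h[1] & 0xE0) != 0xE0) return out;
        if ((h[1] & 0x18) != 0x18) return out;  /* version bits 11 = MPEG-1 */
        if ((h[1] & 0x06) != 0x02) return out;  /* layer bits 01 = Layer III */
        out.crcProtected = (h[1] & 0x01) == 0;  /* protection bit */
        int srIndex = (h[2] >> 2) & 0x03;       /* sample-rate index */
        out.sampleRate = kSampleRates[srIndex];
        out.valid = (out.sampleRate != 0);
        return out;
    }

A monitor built on this would scan the stream for sync words, parse successive headers, and raise the flag when the sample rate changes mid-stream or CRC checks begin to fail.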