How to interrupt streaming file on a condition? - perl

I'm using Asterisk::AGI and I need to stream a music file and interrupt the streaming according to some background condition (for example, checking data in a DB: if something changed, interrupt the streaming).
Can somebody advise me or point me to where I should look for a solution?
Thanks a lot
Pavel

You can use the AsyncAGI command, which will play the music-on-hold class until it gets some other action.
After that you have to use Asterisk::AMI to transfer that channel to another context/dialplan.
No, there is no way to do it using just the AGI interface, without AMI.
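For reference, the manager action that does the transfer is Redirect; a minimal sketch of the exchange is below (the login credentials, channel name, context and extension are placeholders for your setup):

    Action: Login
    Username: manager_user
    Secret: manager_secret

    Action: Redirect
    Channel: SIP/1000-00000001
    Context: after-stream
    Exten: s
    Priority: 1

From Perl you would send the same Redirect action through Asterisk::AMI once your background condition (e.g. the DB check) fires.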

Using EEPROM in STM32f10x

I'm using STM32f103 and in my program, I need to save some bytes in the internal flash memory. But as far as I know, I have to erase a whole page to write in it, which will take time.
This delay causes my display to blink.
Can anybody help me to save my data without consuming so much time?
Here is a list that may help:
1- MCU: STM32f103
2- IDE: Keil µVision
3- using the HAL driver provided by STM32CubeMX
4- sample data for saving in Flash: {0x53, 0xa0, 0x01, 0x54}
In the link below, you can find the code that I'm using.
FLASH_PAGE for Keil
The code you provided doesn't seem to be implemented well. It basically does 2 things each time you initiate a write operation:
Erase the page (this is the part that takes time)
Start from the given pointer and write until it hits a zero.
This is a very inefficient way of using the flash.
Probably the simplest and the most well-known way is to use the method described in ST's AN2594, although it has some limitations.
Still, at some point a page erase will be necessary regardless of the method you use, and there is no way to avoid some delay unless your uC supports dual flash banks (the STM32F103 doesn't have this feature). You need to plan the timing of flash writes and display refreshes accordingly. If you need periodic writes to the flash, there is probably some high-level error in your design.
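To make the timing issue concrete, here is a rough sketch of what a raw page update looks like with the F1 HAL; the target address is a placeholder and error handling is omitted. The erase call is the step that stalls everything, which is why it has to be scheduled around the display refresh:

    /* Minimal sketch: erase one page, then reprogram it half-word by half-word.
       FLASH_TARGET_ADDR is a placeholder; pick an unused page of your part. */
    #include "stm32f1xx_hal.h"

    #define FLASH_TARGET_ADDR  0x0800FC00UL   /* assumed: a spare 1 KB page */

    static void write_page(const uint16_t *data, uint32_t count)
    {
        FLASH_EraseInitTypeDef erase = {0};
        uint32_t page_error = 0;

        HAL_FLASH_Unlock();

        erase.TypeErase   = FLASH_TYPEERASE_PAGES;
        erase.PageAddress = FLASH_TARGET_ADDR;
        erase.NbPages     = 1;
        HAL_FLASHEx_Erase(&erase, &page_error);        /* the slow, blocking step */

        for (uint32_t i = 0; i < count; i++) {
            HAL_FLASH_Program(FLASH_TYPEPROGRAM_HALFWORD,
                              FLASH_TARGET_ADDR + 2u * i,
                              data[i]);                /* programming itself is fast */
        }

        HAL_FLASH_Lock();
    }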
To solve this problem, I used another library that ST itself provides. I had to include "eeprom.h" in my project and then add "eeprom.c" to it. You can easily find these files on the Internet.
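For illustration, here is roughly how that emulated-EEPROM API is used; the names follow ST's AN2594 sample code, and the virtual addresses and data values below are just placeholders:

    /* Sketch based on ST's EEPROM emulation sample (eeprom.c / eeprom.h, AN2594).
       The virtual address table and the value written are placeholders. */
    #include "stm32f1xx_hal.h"
    #include "eeprom.h"

    /* the emulation layer expects this table to be defined by the application */
    uint16_t VirtAddVarTab[NB_OF_VAR] = {0x5555, 0x6666, 0x7777};

    void save_sample(void)
    {
        uint16_t readback = 0;

        HAL_FLASH_Unlock();                 /* the emulation layer programs flash directly */
        EE_Init();                          /* sets up / recovers the two emulation pages */

        EE_WriteVariable(0x5555, 0xA053);   /* e.g. 0x53 and 0xA0 packed into one half-word */
        EE_ReadVariable(0x5555, &readback);

        HAL_FLASH_Lock();
    }

Note that the emulation layer still erases a page internally when one of its two pages fills up, so an occasional long write remains; it just happens far less often than erasing on every save.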

How do I replay a .blf file in CANoe in real-time?

I found several questions on the subject here on SO, so I figured this could fit here as well.
In CANoe/CANalyzer Offline Mode, it is possible to start the measurement replaying data that was previously logged, say to a .blf file. To configure this, please refer to these questions:
How do I play a blf file in CANalyzer
Running a Blf file in constant Loop for Emulation using CAPL
However, the replay is done as fast as possible. I remember reading this in a document, but I can't find where.
Is there a way to slow down the replay speed of .blf (or any) log file and what is it?
Or, conversely, does anybody have a reference to documentation stating that this can't be done? I have the feeling that this replay speed is impacting some scripts I'm developing, since my PC can't be "as fast as possible".
As you state, by default CANoe reads and replays the data as fast as possible in offline mode.
However, you can set an AnimationDelay which will slow down the replay. The value has to be set in the CAN.ini file. After setting the value you do not start the replay by clicking "Start" but rather by clicking "Animate" (a lightning bolt next to a sheet of paper).
You can search the CANoe documentation for "Animate" for more details.
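For reference, the entry would look something like the sketch below. The key name comes from the answer above; the section it belongs in and the value are assumptions, so check the documentation of your CANoe version:

    ; CAN.ini (section name and value are assumptions)
    [GLOBALS]
    AnimationDelay=20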

How to TEARDOWN only one track in the RTSP protocol?

I do:
PLAY rtsp://addr/track1
I get an OK response and then I send another one, but for track2. One is for audio and the other one is for video. The question is: how can I TEARDOWN only one of them, let's say TEARDOWN rtsp://addr/track1? Is this even possible, or should I just TEARDOWN rtsp://addr/ and PLAY again with only one track?
Yes, the RTSP specification definitely allows this, but in the end I guess this depends on server implementation/support. The easiest thing would be to write a quick script to test if your server supports it.
The "Media on Demand (Unicast)" example section shows such a scenario.

Running two APIs simultaneously using GCD in iOS

I am working on a radio application where I need to convert speech to text. For that I am using third-party APIs. To get better results I want to run two APIs at the same time and compare the output. This should happen when the user clicks the record button.
I know we can do this using GCD, but I don't have an exact idea of how to achieve it.
Need suggestion.
Thank you.
The short answer is that you create two GCD queues, one for each speech-to-text task. Within each block, you call one of the two APIs with the same input data. Then you either wait for the result, or get the block to invoke a callback status method when completed.
Note that you will need to ensure that the speech engines can safely run on background threads.
This is fairly straightforward if you want to record the audio first, then submit the data to two different engines for processing. But it sounds like you might want to start processing the audio as soon as the user clicks Record? In that case, it very much depends on the APIs as to how you feed them data in real time. You might want to just run them on separate threads explicitly and feed them data as it comes in.
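A minimal sketch of that pattern with libdispatch is below: two queues, one per engine, and a dispatch group that fires when both are done. The recognize_with_* functions are hypothetical stand-ins for the third-party SDK calls:

    /* Run two recognizers concurrently and compare the results when both finish.
       recognize_with_api_a/b are hypothetical placeholders for the real SDKs. */
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    extern char *recognize_with_api_a(const void *audio, size_t len);  /* hypothetical */
    extern char *recognize_with_api_b(const void *audio, size_t len);  /* hypothetical */

    void transcribe(const void *audio, size_t len)
    {
        __block char *result_a = NULL;
        __block char *result_b = NULL;
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t qa = dispatch_queue_create("stt.api.a", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t qb = dispatch_queue_create("stt.api.b", DISPATCH_QUEUE_SERIAL);

        dispatch_group_async(group, qa, ^{ result_a = recognize_with_api_a(audio, len); });
        dispatch_group_async(group, qb, ^{ result_b = recognize_with_api_b(audio, len); });

        /* when both blocks have finished, compare the transcripts on the main queue */
        dispatch_group_notify(group, dispatch_get_main_queue(), ^{
            printf("API A: %s\nAPI B: %s\n", result_a, result_b);
        });

        /* plain C / non-ARC cleanup; omit these if compiled with ARC */
        dispatch_release(qa);
        dispatch_release(qb);
        dispatch_release(group);
    }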

Architecture sketch for iPhone stock app

I am currently trying to build a (simplified) stock app (like the one built into the iPhone). I set up a simple server with a REST interface which my app can communicate with.
However I am struggling to find the right/best way to build this kind of (streaming data consumer) client on the iphone.
My best bet at the moment is to use a timer to regularly pull the XML payload from the server (the connection is async but the XML parsing is not, therefore the interface is sometimes blocked; I am a bit shy of thread programming since I learned some lessons the hard way on other platforms).
I read about WebSockets but it is not clear to me if and how they are supported on the iPhone.
How would you do it?
Any hint would be appreciated, Thanks.
websockets aren't going to help you -- that's a server-side technology to make a socket-like interface work over HTTP.
If you don't want to block the GUI, you need to use another thread. You are right to be scared of doing this, so share as little as possible (preferably nothing) between the two threads. Use a message passing mechanism to get information from the background thread to the UI thread.
Take a look at ActorKit: http://landonf.bikemonkey.org/code/iphone/ActorKit_Async_Messaging.20081203.html
Take a look at this question.
It talks about asynchronous vs synchronous connections. You will want to use an asynchronous call to get your data so you don't lock up your UI. You could use that in conjunction with a polling timer to get your data from the server.
You can find more info about NSURLConnection in Apple's documentation here.
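If you would rather not manage threads by hand, a rough sketch of the same poll-in-the-background, message-the-UI-thread pattern using libdispatch is below. fetch_and_parse_quotes() and update_ui() are hypothetical placeholders, and the 5-second interval is an arbitrary choice:

    /* Poll the server off the main thread and message the result back to the UI. */
    #include <dispatch/dispatch.h>

    extern void *fetch_and_parse_quotes(void);  /* hypothetical: blocking HTTP GET + XML parse */
    extern void update_ui(void *quotes);        /* hypothetical: must run on the main thread */

    dispatch_source_t start_polling(void)
    {
        dispatch_queue_t bg = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, bg);

        /* fire every 5 seconds with 1 second of leeway */
        dispatch_source_set_timer(timer, DISPATCH_TIME_NOW,
                                  5ull * NSEC_PER_SEC, 1ull * NSEC_PER_SEC);
        dispatch_source_set_event_handler(timer, ^{
            void *quotes = fetch_and_parse_quotes();   /* network + parsing stay off the UI thread */
            dispatch_async(dispatch_get_main_queue(), ^{ update_ui(quotes); });
        });
        dispatch_resume(timer);
        return timer;   /* keep this reference; call dispatch_source_cancel() to stop polling */
    }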