Increasing the call rate automatically in SIPp and dumping it to a CSV file - sip

I use this command in SIPp to generate load on my SIP servlet container:
./sipp -sf uac.xml 127.0.0.1:5080 -trace_rtt
I need two things. The first is increasing the load automatically, for example adding 100 calls/second. The second is that the CSV file I get only has the response time and timestamp; it does not include the call rate.
Can anyone help?

I found the answer in the SIPp documentation.
First problem:
-rate_increase 10 -fd 5s
This increases the call rate by 10 every 5 seconds.
Second problem:
Add this parameter:
-trace_stat
So my command should look like this:
./sipp -sf uac.xml 127.0.0.1:5080 -trace_rtt -trace_stat -rate_increase 10 -fd 5s

Related

ffmpeg: What is the best practice to keep a live connection/socket with a camera, and save time on ffprobe

Today I used the following command with subprocess.PIPE and subprocess.Popen in Python 3:
ffmpeg -i udp://{address_of_camera} \
-vf select='if(eq(pict_type,I),st(1,t),gt(t,ld(1)))',setpts=N/FRAME_RATE/TB \
-f rawvideo -an -vframes {NUM_WANTED_FRAMES} pipe:
This command helps me capture NUM_WANTED_FRAMES frames from a live camera at a given moment.
However, it takes about 4 seconds to read the frames, and about 2.5 seconds to open a socket between my computer and the camera's computer.
Is there a way to keep a socket/connection always open between my computer and the camera's computer, to save the 2.5 seconds?
I read something about fifo_size and overrun_fatal. I thought that maybe I could set fifo_size equal to NUM_WANTED_FRAMES, and overrun_fatal to True? Would this solve my problem? Or is there a different and simpler/better solution?
Should I record continuously (no -vframes flag), store the frames in a queue (with a max size), and read from my queue buffer whenever I want to slice the video? Would that work well with keyframes?
Also, what should I do when ffmpeg fails? Restart the ffmpeg command?
FFmpeg itself is a one-and-done type of app. So, to keep the camera running, the best option is to "record always (no -vframes flag)" and handle whether to drop or record frames in Python.
So, a rough sketch of the idea:
import subprocess as sp
from threading import Thread, Event
from queue import Queue

NUM_WANTED_FRAMES = 4  # whatever it is
width = 1920
height = 1080
ncomp = 3  # rgb
framesize = width * height * ncomp  # bytes per frame
nbytes = framesize * NUM_WANTED_FRAMES  # bytes per requested batch

proc = sp.Popen(<ffmpeg command>, stdout=sp.PIPE)
stdout = proc.stdout
buffer = Queue(NUM_WANTED_FRAMES)
req_frame = Event()  # set to record, default (cleared) to drop

def reader():
    while True:
        if req_frame.is_set():
            # frames requested: read a full batch and hand it over
            buffer.put(stdout.read(nbytes))
            req_frame.clear()
        else:
            # frames not requested: drop one frame's worth of bytes
            stdout.read(framesize)

rd_thread = Thread(target=reader)
rd_thread.start()

...
# elsewhere in your program, do this when you need the camera data
req_frame.set()
framedata = buffer.get()
...
Will it work well with the keyframe?
Yes, if your FFmpeg command has -discard nokey it'll read just keyframes.
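For example (just a sketch, reusing the question's placeholder address), the input option goes before -i:
ffmpeg -discard nokey -i udp://{address_of_camera} -f rawvideo -an pipe: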
What to do when ffmpeg fails? restart the ffmpeg command?
Have another thread monitor the health of proc (the Popen object); if it is dead, restart the subprocess with the same command and overwrite stdout with the new pipe. You probably want to protect your code with try-except blocks as well, and adding timeouts to the queue operations would be a good idea, too.
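A rough watchdog sketch along those lines (reusing proc, stdout, sp, and Thread from above; the 1-second polling policy is just an assumption):

import time

def monitor():
    global proc, stdout
    while True:
        if proc.poll() is not None:  # ffmpeg has exited
            # restart with the same command and swap in the new pipe
            proc = sp.Popen(<ffmpeg command>, stdout=sp.PIPE)
            stdout = proc.stdout
        time.sleep(1.0)  # polling interval; tune as needed

Thread(target=monitor, daemon=True).start()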

MFCreateFMPEG4MediaSink does not generate MSE-compatible MP4

I'm attempting to stream a H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream.
Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing
Below is an mp4dump of a moof/mdat fragment in the MPEG4 stream. It clearly shows that the TFHD contains an "illegal" base data offset parameter:
[moof] size=8+200
  [mfhd] size=12+4
    sequence number = 3
  [traf] size=8+176
    [tfhd] size=12+16, flags=1
      track ID = 1
      base data offset = 36690
    [trun] size=12+136, version=1, flags=f01
      sample count = 8
      data offset = 0
[mdat] size=8+1624
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating an MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
Based on roman-r's advice, I managed to fix the problem myself by intercepting the generated MPEG4 stream and performing the following modifications:
Modify the Track Fragment Header box (tfhd):
  remove the base_data_offset parameter (reduces stream size by 8 bytes)
  set the default-base-is-moof flag
Add the missing Track Fragment Decode Time box (tfdt) (increases stream size by 20 bytes):
  set the baseMediaDecodeTime parameter
Modify the Track Fragment Run box (trun):
  adjust the data_offset parameter
The field descriptions are documented in https://www.iso.org/standard/68960.html (free download).
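For illustration, a minimal Python sketch of the tfhd rewrite (field offsets per the ISO spec above; locating the boxes in the stream, shrinking the enclosing traf/moof sizes by 8 bytes, inserting the tfdt, and adjusting the trun data_offset are left out):

def fix_tfhd(payload):
    # payload = tfhd box body: version(1) + flags(3) + track_ID(4) + optional fields
    flags = int.from_bytes(payload[1:4], 'big')
    track_id = payload[4:8]
    rest = payload[8:]
    if flags & 0x000001:        # base-data-offset-present
        rest = rest[8:]         # drop the 8-byte base_data_offset
        flags &= ~0x000001      # clear base-data-offset-present
    flags |= 0x020000           # set default-base-is-moof
    return payload[:1] + flags.to_bytes(3, 'big') + track_id + rest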
Switching to MSE-based video streaming reduced the latency from ~2.0 to 0.7 sec. The latency was furthermore reduced to 0-1 frames by calling IMFSinkWriter::NotifyEndOfSegment after each IMFSinkWriter::WriteSample call.
There's a sample implementation available on https://github.com/forderud/AppWebStream
I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play an fmp4 via MSE. The fmp4 had been created from an mp4 using the following ffmpeg command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
Based on this question, I was able to find out that to have the fmp4 working in Chrome I had to add the "default_base_moof" flag. So, after creating the fmp4 with the following command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
I was able to play the video successfully using Media Source Extensions.
This Mozilla article helped me find the missing flag:
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE
The 0.7 sec latency mentioned in your Status Update is caused by Media Foundation's MFTranscodeContainerType_FMPEG4 containerizer, which (for unknown reasons) gathers roughly 1/3 second worth of frames and outputs them as one MP4 moof/mdat box pair. This means that you need to wait 19 frames before getting any output from MFTranscodeContainerType_FMPEG4 at 60 FPS.
To output a single MP4 moof/mdat pair per frame, simply lie that MF_MT_FRAME_RATE is 1 FPS (or any frame duration longer than 1/3 sec). To play the video at the correct speed, use Media Source Extensions' <video>.playbackRate, or better, update the timescale (i.e. multiply it by the real FPS) in the mvhd and mdhd boxes in your MP4 stream interceptor to get a correctly timed MP4 stream.
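A minimal sketch of that timescale update (assuming version 0 mvhd/mdhd boxes, i.e. 32-bit time fields; locating the boxes is again left out):

def scale_timescale(payload, real_fps):
    # payload = mvhd/mdhd box body (version 0 layout):
    # version(1) flags(3) creation_time(4) modification_time(4) timescale(4) duration(4) ...
    assert payload[0] == 0, 'version 1 boxes store 64-bit times at other offsets'
    ts = int.from_bytes(payload[12:16], 'big')
    return payload[:12] + (ts * real_fps).to_bytes(4, 'big') + payload[16:]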
Doing that, the latency can be squeezed to under 20 ms. This is barely noticeable when you see the output side by side on localhost in chains such as Unity (research) -> NvEnc -> MFTranscodeContainerType_FMPEG4 -> WebSocket -> Chrome Media Source Extensions display.
Note that MFTranscodeContainerType_FMPEG4 still introduces a 1-frame delay (1st frame in, no output; 2nd frame in, 1st frame out; ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing your own FMPEG4 containerizer, but that is an order of magnitude more complex than intercepting Media Foundation's MP4 streams.
The problem was solved by following roman-r's advice and modifying the generated MPEG4 stream. See the answer above.
Another way to do this is, again, using the same code @Fredrik mentioned, but writing my own IMFByteStream and checking the chunks written to the IMFByteStream.
FFmpeg writes the atoms almost one at a time, so you can check the atom name and make the modifications. It is the same thing. I wish there was an MSE-compliant Windows sinker.
Is there one that can generate .ts files for HLS?

Reload page if 'not available'?

I have a standalone Raspberry Pi which shows a webpage from another server.
It reloads after 30 minutes via JavaScript on the webpage.
In some cases, the server isn't reachable for a very short time and Chromium shows the usual "This webpage is not available" message and stops reloading
(because no JavaScript from the page triggers a reload).
In this case, how can I still reload the webpage after a few seconds?
I also had the idea of fetching the page contents via AJAX and replacing them in the current page if they were available.
Rather than refreshing the webpage every few minutes, what you can do is ping the server using JavaScript (pingjs is a nice library that can do that).
Now, if the ping is successful, reload the page. If it is not successful, wait 30 more seconds and ping again. Doing this continuously will basically make you wait until the server is up again (i.e. you can ping it).
I think this is a much simpler method compared to making your own Java browser or a browser plugin.
Extra info: you should use exponential backoff to avoid unnecessary processing overhead, i.e. the first time the ping fails, wait 30 seconds; the second time wait 30*(2^1) sec; the third time 30*(2^2) sec, and so on until you reach a maximum value.
Note: this assumes your server is really unreachable, and not just that the HTML page is unavailable (there's a small but appreciable difference).
My favored approach would be to copy the web page locally using a script every 30 minutes and point Chromium at the local copy.
The advantage is that the script can run every 30 seconds and check whether the last successful page pull happened within the last 30 minutes. If yes, it does nothing; if no, it keeps attempting to pull the page. In the meantime the browser is set to refresh the page every 5 seconds, but because it is pulling a local page it does little to no work for each refresh. You can then detect whether what it has pulled back contains the required content.
This approach assumes that your goal is to avoid refreshing the page every few seconds and thereby reduce load on the remote server.
Use these options to grab the whole page....
# exit if age of last reload is less than 1800 seconds (30 minutes)
AGE_IN_SECS=$(( $( perl -e 'print time();' ) - $(stat -c "%Y" /success/directory/index.html) ))
[[ $AGE_IN_SECS -lt 1800 ]] && exit
# copy whole page to the current directory
cd /temporary/directory
wget -p -k http://www.example.com/
and then you need to test the page in some way to ensure you have what you need, for example (using bash script)....
RESULT=$(grep -ci "REQUIRED_PATTERN_MATCH" expected_file_name )
[[ $RESULT -gt 0 ]] && cp -r /temporary/directory/* /success/directory
rm -rf /temporary/directory/*
NOTE:
This is only the bare bones of what you need, as I don't know your specifics. But you should also look at trying to:
ensure you have a timeout on the wget (e.g. -T 10 -t 2), so that you do not have multiple wgets running;
create some form of back-off so that you do not hammer the remote server when it is in trouble;
ideally show some message on the page if it is over 40 minutes old, so that the viewer knows a problem is being experienced;
use a Chromium refresh plugin to pull the page locally;
alter the page with your script once you have downloaded it, if you want to add additional/altered formatting (e.g. replace the CSS file?).
I see three solutions:
Load the page in an iframe (if not blocked), and check for content/response.
Create a simple browser in Java (not so hard, even if you don't know the language, using WebView).
Create a plugin for your browser.
Reloading a page via JavaScript is pretty easy:
function refresh() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status === 200)
            document.body.innerHTML = xhr.responseXML.body.innerHTML;
        else if (xhr.readyState == 4)
            setTimeout(refresh, 1500); // request failed: retry shortly
    };
    xhr.open('GET', window.location.href);
    xhr.responseType = "document";
    xhr.send();
}
setInterval(refresh, 30*60*1000);
This should work as you requested.

some issue about sipp

I wrote a program to run SIPp, but it cannot exit automatically after it completes a large total number of calls. Or is there some other way to know that SIPp has finished?
And a second issue: after a certain number of calls, the calls become very slow!
Simple solution: make a small number of calls (100) and start again with the next portion.
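For example, SIPp's -m option stops the test and exits once the given number of calls has been processed, so each portion could be run as (assuming the same scenario as above):
./sipp -sf uac.xml 127.0.0.1:5080 -m 100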

How many iterations are saved by JAGS/BUGS when burnin and thinning are specified?

I have a quick question about the details of running a model in JAGS and BUGS.
Say I run a model with n.burnin=5000, n.iter=5000 and thin=2. Does this mean that the program will:
Run 5,000 iterations, and discard results; and then
Run another 10,000 iterations, only keeping every second result?
If I save these simulations as a CODA object, are all 10,000 saved, or only the thinned 5,000? I'm just trying to understand which set of iterations is used to make the ACF plot.
With JAGS, n.burnin=5000, n.iter=5000 and thin=2 means you keep nothing: you run 5000 iterations and discard all 5000 of them as burn-in, so there is nothing left to thin (thinning keeps one value and discards the next, and so on).
Use, for example, n.burnin=2000, n.iter=7000, thin=50, n.chains=5: then you have (7000-2000)/50 * 5 = 500 values.
Could you be more specific about which software you're talking about? It looks like you're referring to the arguments of the function bugs() in the R2WinBUGS package (except that the argument is called n.thin, not thin). Looking at help(bugs), it just says n.burnin is the "number of iterations to discard at the beginning", which doesn't specifically answer your question, but looking at the source for bugs.script() in that package suggests to me that it would run 5000 iterations of burn-in, as you suspected. You could send a suggestion to the maintainers of that package to clarify their documentation.
In your example, bugs() would then run 0 further iterations after the burn-in. Here the documentation is clearer: n.iter is the total number of iterations, including the burn-in.
For your second question, the CODA output from WinBUGS (and any software which calls WinBUGS or OpenBUGS) will only include the thinned sample.