How to extract dvbsub teletext subtitles?

Does anyone know how to extract teletext subtitles?
I have tried ffmpeg, which says:
Invalid frame dimensions 0x0
CCExtractor, which says:
Missing ASF header. Abort
telxcc, which says:
! Invalid TS packet header; TS seems to be misaligned
I have done a lot of research, but with no luck. Can anyone offer some help?

dvb_subtitles cannot be extracted with ffmpeg easily, because they are images that overlay the original video. Good explanation: https://stackoverflow.com/a/20887655/2119685
There is a way to extract the dvb_teletext stream, which normally includes the subtitles too.
Install the following dependency:
sudo apt-get install libzvbi-dev
Then recompile from source ffmpeg with:
--enable-libzvbi
There is a good tutorial on how to compile FFmpeg from source here: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
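For reference, a minimal sketch of the rebuild, assuming a plain source checkout (add back any other configure flags your build needs):
# only --enable-libzvbi is required for teletext decoding
./configure --enable-libzvbi
make
sudo make install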
Then execute the following command to extract the subtitles into an .srt file:
ffmpeg -txt_format text -i INPUT1 -an -vn -scodec srt test.srt
And voila, your .srt subtitles will be in test.srt
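If ffmpeg complains that no matching subtitle stream exists, you can first confirm which stream carries the teletext (a quick check; ffprobe writes stream info to stderr):
ffprobe INPUT1 2>&1 | grep -i teletext
The teletext should show up as a subtitle stream with codec dvb_teletext; you can then select it explicitly with -map if needed.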

Did you try using GStreamer? Build a pipeline like appsrc -> tsdemux -> fakesink, and then get the PES data from the fakesink callback.
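For a first test without writing any code, a rough gst-launch sketch of that pipeline might look like this (filesrc stands in for appsrc here; on GStreamer 0.10 use plain gst-launch):
gst-launch-1.0 -v filesrc location=input.ts ! tsdemux name=demux demux. ! queue ! fakesink dump=true
In an application you would set signal-handoffs=true on the fakesink and connect to its "handoff" signal to receive each buffer.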

How to run densepose on video with Detectron2

I was wondering if it is possible to run densepose annotations on an mp4 with detectron2?
In the projects folder, you can run densepose with apply_net.py, but this only works on images. I tried running this command
cd demo/
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
--video-input video.mp4 \
[--other-options]
--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
with densepose weights and annotations, but Detectron2 gives me this error:
Non-existent config key: MODEL.DENSEPOSE_ON
I know the DensePose video code exists, but it is out of date, as it uses Caffe2 separate from PyTorch.
Is this possible, or can you not run it on video?
The easiest approach is to use something like ffmpeg to split your video into frames, and then run Detectron2 on each frame.
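A minimal sketch of that split, assuming an output directory named frames/ (frame rate and naming are up to you):
mkdir -p frames
ffmpeg -i video.mp4 frames/frame_%06d.png
You can then loop apply_net.py over the extracted images and, if needed, reassemble the annotated frames into a video with ffmpeg afterwards.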

ImageMagick convert command not generating images

I am trying to extract images on a CentOS machine using ImageMagick's convert command, as below:
convert -coalesce http://cdn.abcdf.com/p/f7/81/d3/40/f781d34031e68828eaasdwc937cf3f8.gif /mnt/temp/123.png
I am getting the following error:
convert: unable to open image `//cdn.adnxs.com/p/f7/81/d3/40/f781d34031e6882840eaa6dc937cf3f8.gif': No such file or directory # error/blob.c/OpenBlob/2701.
convert: no decode delegate for this image format `HTTP' # error/constitute.c/ReadImage/504.
convert: no images defined `/mnt/ephemeral2/creative_report/temp/123.png' # error/convert.c/ConvertImageCommand/3258
I tried reinstalling ImageMagick from source but it was of no use.
I resolved the issue by reinstalling IM and the missing libxml2-devel library. Below are the steps I followed:
1) cd (ImageMagick folder)
2) make uninstall
3) yum install tcl-devel libpng-devel libjpeg-devel ghostscript-devel bzip2-devel freetype-devel libtiff-devel libxml2-devel
4) wget ftp://ftp.imagemagick.org/pub/ImageMagick/ImageMagick-6.9.9-0.tar.gz
5) tar xvfz ImageMagick-6.9.9-0.tar.gz
6) cd (the folder created)
7) ./configure --prefix=/usr/local --with-bzlib=yes --with-fontconfig=yes --with-freetype=yes --with-gslib=yes --with-gvc=yes --with-jpeg=yes --with-jp2=yes --with-png=yes --with-tiff=yes
8) make clean
9) make
10) make install
On Windows I would put double quotes around the path, as it looks like the image path is being broken up.
Your link is bad. My browser tells me it cannot find the server. In ImageMagick, you should read the input image first. So if you fix your URL, then:
convert http://cdn.abcdf.com/p/f7/81/d3/40/f781d34031e68828eaasdwc937cf3f8.gif -coalesce /mnt/temp/123.png
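If the URL is valid but the "no decode delegate for this image format `HTTP'" error persists, your ImageMagick build likely lacks an HTTP delegate. A simple workaround sketch (the temporary path is an assumption) is to download the file first and convert locally:
curl -o /tmp/input.gif 'http://cdn.abcdf.com/p/f7/81/d3/40/f781d34031e68828eaasdwc937cf3f8.gif'
convert /tmp/input.gif -coalesce /mnt/temp/123.png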

Issue with FFMPEG drawtext

ffmpeg -i /home/mysite/public_html/videos/thankyou/thankyou_1.mp4 -strict -2 -vf
"[in]drawtext=fontfile=/home/mysite/fonts/OswaldFont/Oswald-Bold.ttf: x=450:
y=150: fontsize=152: fontcolor=0xAE0216#1: draw='if(gt(n,40),lt(n,300))':
text='THANK YOU',drawtext=fontfile=/home/mysite/fonts/OswaldFont/Oswald-Bold.ttf:
x=450: y=320: fontsize=200: fontcolor=0xAE0216#1: draw='if(gt(n,50),lt(n,300))':
text='JAMISON'" /home/mysite/public_html/videos/thankyou_2.mp4
When running the above, I'm getting the following. It seems to run properly on other distributions. Not sure where to check next.
[Parsed_drawtext_0 # 0x2835480] Option 'draw' not found
[AVFilterGraph # 0x283f980] Error initializing filter 'drawtext' with args 'fontfile=/home/mysite/fonts/OswaldFont/Oswald-Bold.ttf: x=450: y=150: fontsize=152: fontcolor=0xAE0216#1: draw=if(gt(n,40),lt(n,300)): text=THANK YOU'
Error opening filters!
Additionally, this original command works fine on Ubuntu, but gives the error above when running on CentOS.
According to the FFmpeg drawtext filter documentation:
draw
This option does not exist, please see the timeline system
This means you should use timeline editing instead.
To do that replace the draw='...' part of your command with:
enable=if(gt(n\,50)\,lt(n\,300))
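For example, applied to the first drawtext filter of the original command, a sketch (the second filter is adjusted the same way; the trailing #1 on fontcolor is also dropped here, since it is not valid color syntax):
ffmpeg -i /home/mysite/public_html/videos/thankyou/thankyou_1.mp4 -strict -2 \
  -vf "drawtext=fontfile=/home/mysite/fonts/OswaldFont/Oswald-Bold.ttf: x=450: y=150: fontsize=152: fontcolor=0xAE0216: text='THANK YOU': enable='if(gt(n,40),lt(n,300))'" \
  /home/mysite/public_html/videos/thankyou_2.mp4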
You should also check:
the FFmpeg versions on each machine: you might have an older version installed on Ubuntu, which supports the draw option, and a newer version on CentOS, in which the option was removed;
whether the font files exist on each machine.

Trouble using GstTIPlugin element for Gstreamer

For my project, I am trying to use a gumstix overo, with gstreamer and the TI plugin for making use of the DSP in order to stream video via RTP. I found these two tutorials and have even been able to follow them successfully:
http://jumpnowtek.com/index.php?option=com_content&view=article&id=81:gumstix-dsp-gstreamer&catid=35:gumstix&Itemid=67
^^In this one I am able to compile an embedded Linux OS with GStreamer and the GstTIPlugin element. After doing so, I am able to stream the videotestsrc to a remote PC successfully.
However, that tutorial is meant for a Caspa video camera; I am using the Logitech Pro C920 used in this tutorial:
http://www.oz9aec.net/index.php/gstreamer/473-using-the-logitech-c920-webcam-with-gstreamer
^^In this one we make use of a C920 camera in H.264 mode. Since the V4L2 drivers do not support this, we use a C program to capture from the camera frame by frame and stream it to standard out. From there we tell GStreamer to capture from a file source, in this case standard in (/dev/fd/0). Again I am able to complete this successfully and stream from the C920 camera, though without using the TI plugin to make use of the DSP.
Now on to the problem:
./capture -c 10000 -o | gst-launch -v -e filesrc location=/dev/fd/0 ! h264parse ! rtph264pay ! udpsink host=192.168.1.100 port=4000
^^This command will run the capture program, and GStreamer will grab and stream the video using the h264parse pipeline to encode it (I believe?).
When I replace h264parse with the TI plugin from the first tutorial, like this:
./capture -c 10000 -o | gst-launch -v -e filesrc location=/proc/self/fd/0 ! TIVidenc1 codecName=h264enc engineName=codecServer ! rtph264pay ! udpsink host=192.168.1.100 port=4000
I get this error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstTIVidenc1:tividenc10: failed to create video encoder: h264enc
Additional debug info:
gsttividenc1.c(1584): gst_tividenc1_codec_start (): /GstPipeline:pipeline0/GstTIVidenc1:tividenc10
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
I also tried keeping both elements in, and then the error says it cannot link h264parse0 to tividenc10.
Has anyone had any experience with the GstTIPlugin and know what I'm doing wrong?
Thanks!
What problem are you trying to solve, exactly? Are you trying to encode H.264 using TI's encoding element? Because if I'm reading this all correctly, the './capture' utility already receives frames in H.264, so there is no need to encode.
Assuming we have this golden example (this works for you, right?):
./capture -c 10000 -o | gst-launch -v -e filesrc location=/dev/fd/0 !
h264parse ! rtph264pay ! udpsink host=192.168.1.100 port=4000
The 'h264parse' parses an H.264 stream into H.264 NAL units for the benefit of the RTP sink. If that's working, then the h264parse element is happy because it is getting H.264 data from the capture program.
If you're trying to replace h264parse with a TI H.264 encoder element, well, that's just confusing. Again, I don't know exactly what problem you're trying to solve so I might not have the whole picture.
If you're not already familiar with it, get to know the 'gst-inspect' command. E.g., 'gst-inspect h264parse'. This will give you insight about what type of data an element can consume or produce.
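For example, a quick sketch of checking the TI encoder element (the exact output varies by plugin version):
gst-inspect TIVidenc1
If its sink pad caps only list raw video formats (e.g. video/x-raw-yuv), then feeding it the already-encoded H.264 bytes from 'capture' cannot work, which would also explain the failure to link h264parse0 to tividenc10.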

Is there a way to tell django compressor to create source maps

I want to be able to debug minified, compressed JavaScript code on my production site. Our site uses django compressor to create minified and compressed js files. I read recently about Chrome being able to use source maps to help debug such JavaScript. However, I don't know how (or whether it is possible) to tell django compressor to create source maps when compressing the js files.
I don't have a good answer regarding outputting separate source map files, however I was able to get inline working.
Prior to adding source maps my settings.py file used the following precompilers
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)
After a quick
$ lessc --help
you find out you can put the less and map files into the output css file. So my new text/less precompiler entry looks like:
('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
Hope this helps.
Edit: Forgot to add that lessc >= 1.5.0 is required for this; to upgrade, use:
$ [sudo] npm update -g less
While I couldn't get this to work with django-compressor (though it should be possible, I think I just had issues getting the app set up correctly), I was able to get it working with django-assets.
You'll need to add the appropriate command-line argument to the less filter source code as follows:
diff --git a/src/webassets/filter/less.py b/src/webassets/filter/less.py
index eb40658..a75f191 100644
--- a/src/webassets/filter/less.py
+++ b/src/webassets/filter/less.py
@@ -80,4 +80,4 @@ class Less(ExternalTool):
     def input(self, in_, out, source_path, **kw):
         # Set working directory to the source file so that includes are found
         with working_directory(filename=source_path):
-            self.subprocess([self.less or 'lessc', '-'], out, in_)
+            self.subprocess([self.less or 'lessc', '--line-numbers=mediaquery', '-'], out, in_)
Aside from that tiny addition:
make sure you've got the Node (not the Ruby gem) less compiler (>= 1.3.2, IIRC) available in your path.
turn on the sass source-maps option buried away in Chrome's web inspector config pages. (Yes, 'sass', not less: less tweaked their debug-info format to match sass's, since sass had already implemented a Chrome-compatible mapping and their formats weren't that different to begin with anyway.)
Not out of the box, but you can extend a custom filter:
from compressor.filters import CompilerFilter

class UglifyJSFilter(CompilerFilter):
    # note the backslash continuations and the trailing spaces,
    # so the pieces concatenate into one valid command line
    command = "uglifyjs -c -m " \
              "--source-map-root={relroot}/ " \
              "--source-map-url={name}.map.js " \
              "--source-map={relpath}/{name}.map.js -o {output}"