The following Python code, which adds three files to a GES timeline, throws the error below (one that others have also run into):
(GError('Your GStreamer installation is missing a plug-in.',), 'gstdecodebin2.c(3928): gst_decode_bin_expose (): /GESPipeline:gespipeline0/GESTimeline:gestimeline0/GESVideoTrack:gesvideotrack0/GnlComposition:gnlcomposition1/GnlSource:gnlsource0/GstBin:videosrcbin/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin4:\nno suitable plugins found')
from gi.repository import GES
from gi.repository import GstPbutils
from gi.repository import Gtk
from gi.repository import Gst
from gi.repository import GObject
import sys
import signal

VIDEOPATH = "file:///path/to/my/video/folder/"

class Timeline:
    def __init__(self, files):
        print Gst._version # prints 1
        self.pipeline = GES.Pipeline()
        container_caps = Gst.Caps.new_empty_simple("video/quicktime")
        video_caps = Gst.Caps.new_empty_simple("video/x-h264")
        audio_caps = Gst.Caps.new_empty_simple("audio/mpeg")
        self.container_profile = GstPbutils.EncodingContainerProfile.new("jane_profile", "mp4 concatation", container_caps, None )#Gst.Caps("video/mp4", None))
        self.video_profile = GstPbutils.EncodingVideoProfile.new(video_caps, None, None, 0)
        self.audio_profile = GstPbutils.EncodingAudioProfile.new(audio_caps, None, None, 0)
        self.container_profile.add_profile(self.video_profile)
        self.container_profile.add_profile(self.audio_profile)
        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus.connect("message", self.busMessageCb)
        self.timeline = GES.Timeline.new_audio_video()
        self.layer = self.timeline.append_layer()
        signal.signal(signal.SIGINT, self.handle_sigint)
        self.start_on_timeline = 0
        for file in files:
            asset = GES.UriClipAsset.request_sync(VIDEOPATH + file)
            print asset.get_duration()
            duration = asset.get_duration()
            clip = self.layer.add_asset(asset, self.start_on_timeline, 0, duration, GES.TrackType.UNKNOWN)
            self.start_on_timeline += duration
            print 'start:' + str(self.start_on_timeline)
        self.timeline.commit()
        self.pipeline.set_timeline(self.timeline)

    def handle_sigint(self, sig, frame):
        Gtk.main_quit()

    def busMessageCb(self, unused_bus, message):
        print message
        print message.type
        if message.type == Gst.MessageType.EOS:
            print "eos"
            Gtk.main_quit()
        elif message.type == Gst.MessageType.ERROR:
            error = message.parse_error()
            print (error)
            Gtk.main_quit()

if __name__=="__main__":
    GObject.threads_init()
    Gst.init(None)
    GES.init()
    gv = GES.version() # prints 1.2
    timeline = Timeline(['one.mp4', 'two.mp4', 'two.mp4'])
    done = timeline.pipeline.set_render_settings('file:///home/directory/output.mp4', timeline.container_profile)
    print 'done: {0}'.format(done)
    timeline.pipeline.set_mode(GES.PipelineFlags.RENDER)
    timeline.pipeline.set_state(Gst.State.PAUSED)
    Gtk.main()
I have set the GST_PLUGIN_PATH_1_0 environment variable to "/usr/local/lib:/usr/local/lib/gstreamer-1.0:/usr/lib/x86_64-linux-gnu:/usr/lib/i386-linux-gnu/gstreamer-1.0"
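To double-check whether the 1.0 registry actually picks these paths up, a quick sketch like this should list what Python sees (the plugin names 'playback', 'isomp4' and 'libav' are just the ones I expect to matter here, not verified output):

from gi.repository import Gst

Gst.init(None)
registry = Gst.Registry.get()

# List every plugin the 1.0 registry knows about and where it was loaded from
for plugin in registry.get_plugin_list():
    print plugin.get_name(), plugin.get_filename()

# Check the plugins that should provide decodebin, qtdemux and the libav decoders
for name in ('playback', 'isomp4', 'libav'):
    print name, 'found' if registry.find_plugin(name) else 'MISSING'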
I compiled and installed gstreamer1.0-1.2.4, together with the base, good, bad and ugly plugin packages for that version. GES is installed at version 1.2.1, as that was the closest match to the GStreamer version I could find. I also installed gst-libav 1.2.4.
According to the make install log for gst-plugins-base, decodebin2 should come from base and is linked into libgstplayback, which lives on my GST_PLUGIN_PATH_1_0:
/usr/local/lib/gstreamer-1.0 libgstplayback_la-gstdecodebin2.lo
I do have gstreamer0.10 installed as well; its decodebin2 shows up as blacklisted when I run 'gst-inspect-1.0 -b', since it sits in the gstreamer0.10 library path rather than the one for 1.0.
I tried clearing the ~/.cache/gstreamer files and running gst-inspect-1.0 again to regenerate the plugin registry, but I still keep getting the error from the Python code. The sample code itself might be wrong, as this is my first stab at writing a timeline with GStreamer Editing Services. I am on Ubuntu Trusty (14.04).
The file is an MP4, which is why I installed gst-libav for the required decoders.
The output of MP4Box -info on the file is:
Movie Info *
Timescale 90000 - Duration 00:00:08.405
Fragmented File no - 2 track(s)
File suitable for progressive download (moov before mdat)
File Brand mp42 - version 0
Created: GMT Mon Aug 17 17:02:26 2015
File has no MPEG4 IOD/OD
Track # 1 Info - TrackID 1 - TimeScale 50000 - Duration 00:00:08.360
Media Info: Language "English" - Type "vide:avc1" - 209 samples
Visual Track layout: x=0 y=0 width=1920 height=1080
MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
AVC/H264 Video - Visual Size 1920 x 1080
AVC Info: 1 SPS - 1 PPS - Profile Main # Level 4.2
NAL Unit length bits: 32
Pixel Aspect Ratio 1:1 - Indicated track size 1920 x 1080
Self-synchronized
Track # 2 Info - TrackID 2 - TimeScale 48000 - Duration 00:00:08.405
Media Info: Language "English" - Type "soun:mp4a" - 394 samples
MPEG-4 Config: Audio Stream - ObjectTypeIndication 0x40
MPEG-4 Audio MPEG-4 Audio AAC LC - 2 Channel(s) - SampleRate 48000 Synchronized on stream 1
Log: pastebin.com/BjJ8Z5Bd shows what I get when I run 'GST_DEBUG=3,gnl*:5 python ./timeline1.py > timeline1.log 2>&1'.
There is no "decodebin2" in GStreamer 1.x, which you're using here. It's just called "decodebin" now and is equivalent to "decodebin2" in 0.10.
Your problem here however is not that decodebin is not found. Your problem is that you're missing a plugin to play this specific media file. What kind of media file is it?
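If you want to check quickly from Python which of the decoders this particular file needs are actually available, a rough sketch like the following will tell you (the element names below are the ones typically needed for an H.264/AAC MP4 with gst-libav; adjust them if your file differs):

from gi.repository import Gst

Gst.init(None)

# decodebin comes from gst-plugins-base (playback), qtdemux from gst-plugins-good,
# h264parse from gst-plugins-bad, avdec_h264/avdec_aac from gst-libav
for name in ('decodebin', 'qtdemux', 'h264parse', 'avdec_h264', 'avdec_aac'):
    print name, 'OK' if Gst.ElementFactory.find(name) else 'MISSING'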
Related
For field research, where I want to study a plant-insect interaction, I am trying to set up a PICT (Plant Insect Interactions Camera Trap). There is a very detailed description available at https://zenodo.org/record/6301001, but I am still stuck.
I can access the camera through the browser, but the script won't start.
I am an absolute beginner and I have no idea what I am doing wrong. Can anybody help me get this running?
This is the script from the paper, which I saved as camera.py in /home/pi:
import picamera
import socket
import uuid
from datetime import datetime as dt

qual = 22 # level of image quality between 1 (highest quality, largest size) and 40 (lowest quality, smallest size), with typical values 20 to 25, default is 0.
video_duration = 3600 # video duration in seconds
video_number = 1000 # number of video sequences to shoot
UID = uuid.uuid4().hex[:4].upper()+'_'+dt.now().strftime('%Y-%m-%d_%H-%M') # generate random unique ID that will be used in video filename
HostName = socket.gethostname()

with picamera.PiCamera() as camera:
    camera.resolution = (1296, 972) # max fps is 42 at 1296x972
    camera.framerate = 15 # recommended are 12, 15, 24, 30
    camera.annotate_frame_num = True
    camera.annotate_text_size = int(round(camera.resolution[0]/64))
    camera.annotate_background = picamera.Color('black') # text background colour
    camera.annotate_foreground = picamera.Color('white') # text colour
    for filename in camera.record_sequence([
            '/home/pi/record/'+HostName+'_'+UID+'_%03d.h264' % (h + 1)
            for h in range(video_number)
        ], quality=qual):
        start = dt.now() # get the current date and time
        while (dt.now() - start).seconds < video_duration: # run until video_duration is reached
            camera.annotate_text = HostName+', '+str(camera.framerate)+' fps, Q='+str(qual)+', '+dt.now().strftime('%Y-%m-%d %H:%M:%S') # tag the video with a custom text
            camera.wait_recording(0.2) # pause the script for a short interval to save power
I am getting the following output:
~ $ python camera.py
Traceback (most recent call last):
  File "camera.py", line 23, in <module>
    ], quality=qual):
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 1270, in record_sequence
    camera_port, output_port = self._get_ports(True, splitter_port)
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 559, in _get_ports
    self._check_camera_open()
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 540, in _check_camera_open
    raise PiCameraClosed("Camera is closed")
picamera.exc.PiCameraClosed: Camera is closed
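For what it's worth, a stripped-down test along these lines (the output filename is just a placeholder I made up) should at least confirm whether the camera module itself can open and record, independently of the paper's script:

import picamera

# Minimal sanity check: open the camera, record a short clip, then close it
with picamera.PiCamera() as camera:
    camera.resolution = (1296, 972)
    camera.start_recording('/home/pi/test.h264')  # placeholder path
    camera.wait_recording(5)
    camera.stop_recording()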
I want to use object detection with TensorFlow Lite to detect either a clear face or a covered face, where the statement "door opens" is printed when a clear face is detected. I could run this code smoothly before, but after rebooting the Raspberry Pi 4, the TensorFlow Lite runtime is initialized and then the Raspberry Pi 4 disconnects from SSH completely. The following is the code:
######## Webcam Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 10/27/19
# Description:
# This program uses a TensorFlow Lite model to perform object detection on a live webcam
# feed. It draws boxes and scores around the objects of interest in each frame from the
# webcam. To improve FPS, the webcam object runs in a separate thread from the main program.
# This script will work with either a Picamera or regular USB webcam.
#
# This code is based off the TensorFlow Lite image classification example at:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py
#
# I added my own method of drawing boxes and labels using OpenCV.

# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
import simpleaudio as sa

# Define VideoStream class to handle streaming of video from webcam in separate processing thread
# Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the Picamera"""
    def __init__(self,resolution=(640,480),framerate=30):
        # Initialize the PiCamera and the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3,resolution[0])
        ret = self.stream.set(4,resolution[1])

        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()

        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update,args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return

            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True

# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
                    required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
                    default='masktracker.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
                    default='facelabelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
                    default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
                    default='640x480')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
                    action='store_true')

args = parser.parse_args()

MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu

# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'masktracker.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'

# Get path to current working directory
CWD_PATH = os.getcwd()

# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)

# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])

# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()

# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

floating_model = (input_details[0]['dtype'] == np.float32)

input_mean = 127.5
input_std = 127.5

# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()
global image_capture
image_capture = 0

# Initialize video stream
videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
time.sleep(1)

#for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
while True:

    # Start timer (for calculating frame rate)
    t1 = cv2.getTickCount()

    # Grab frame from video stream
    frame1 = videostream.read()

    # Acquire frame and resize to expected shape [1xHxWx3]
    frame = frame1.copy()
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'],input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[0]['index'])[0] # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[2]['index'])[0] # Confidence of detected objects
    #num = interpreter.get_tensor(output_details[3]['index'])[0] # Total number of detected objects (inaccurate and not needed)

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):

            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1,(boxes[i][0] * imH)))
            xmin = int(max(1,(boxes[i][1] * imW)))
            ymax = int(min(imH,(boxes[i][2] * imH)))
            xmax = int(min(imW,(boxes[i][3] * imW)))

            # Draw label
            object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
            if (object_name=="face unclear" ):
                color = (0, 255, 0)
                cv2.rectangle(frame, (xmin,ymin), (xmax,ymax),color, 2)
                print("Face Covered: Door Not Opened")
                if(image_capture == 0):
                    path = r'/home/pi/Desktop/tflite_1/photographs'
                    date_string = time.strftime("%Y-%m-%d_%H%M%S")
                    #print(date_string)
                    cv2.imwrite(os.path.join(path, (date_string + ".jpg")),frame)
                    #cv2.imshow("Photograph",frame)
                    #mp3File = input(alert_audio.mp3)
                    print("Photo Taken")
                    #w_object = sa.WaveObject.from_wave_file('alert_audio.wav')
                    #p_object = w_object.play()
                    #p_object.wait_done()
                    image_capture = 1
            else:
                color = (0, 0, 255)
                cv2.rectangle(frame, (xmin,ymin), (xmax,ymax),color, 2)
                print("Face Clear: Door Opened")
                image_capture = 0

            #cv2.rectangle(frame, (xmin,ymin), (xmax,ymax),color, 2)
            #image = np.asarray(ImageGrab.grab())
            label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
            label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text

    if ((scores[0] < min_conf_threshold)):
        cv2.putText(frame,"No Face Detected",(260,260),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, color=(255,0,0))
        print("No Face Detected")
        image_capture = 0

    # Draw framerate in corner of frame
    cv2.putText(frame,'FPS: {0:.2f}'.format(frame_rate_calc),(30,50),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,0),2,cv2.LINE_AA)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Calculate framerate
    t2 = cv2.getTickCount()
    time1 = (t2-t1)/freq
    frame_rate_calc= 1/time1

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

# Clean up
cv2.destroyAllWindows()
videostream.stop()
Any help is appreciated.
Regards,
MD
import PIL.Image

img = PIL.Image.new("RGB", (100,100))
img.show()
The error message:
FSPathMakeRef(/Applications/Preview.app) failed with error -43.
Following from Sean True's answer, an even quicker but temporary fix is to simply make a symbolic link to Preview.app in the old location. In the terminal run
ln -s /System/Applications/Preview.app /Applications/Preview.app
This fixed the problem for me.
There's an official fix on GitHub for Pillow 7, but I'm still on 6.
This appears to be a PIL ImageShow issue, with the PIL MacViewer using /Applications/Preview.app as an absolute path to the OSX Preview app.
It's not there in Catalina. I did a quick hack to ImageShow.py changing /Applications/Preview.app to just Preview.app and the issue went away. That might or might not still work on pre-Catalina OSX, but I don't have an easy way to test.
It has apparently moved to /System/Applications/Preview.app so a quick check at run time would probably cover both cases.
elif sys.platform == "darwin":

    class MacViewer(Viewer):
        format = "PNG"
        options = {'compress_level': 1}
        preview_locations = ["/System/Applications/Preview.app", "/Applications/Preview.app"]
        preview_location = None

        def get_preview_application(self):
            if self.preview_location is None:
                for pl in self.preview_locations:
                    if os.path.exists(pl):
                        self.preview_location = pl
                        break
                if self.preview_location is None:
                    raise RuntimeError("Can't find Preview.app in %s" % self.preview_locations)
            return self.preview_location

        def get_command(self, file, **options):
            # on darwin open returns immediately resulting in the temp
            # file removal while app is opening
            pa = self.get_preview_application()
            command = "open -a %s" % pa
            command = "(%s %s; sleep 20; rm -f %s)&" % (command, quote(file),
                                                        quote(file))
            return command

        def show_file(self, file, **options):
            """Display given file"""
            pa = self.get_preview_application()
            fd, path = tempfile.mkstemp()
            with os.fdopen(fd, 'w') as f:
                f.write(file)
            with open(path, "r") as f:
                subprocess.Popen([
                    'im=$(cat);'
                    'open -a %s $im;'
                    'sleep 20;'
                    'rm -f $im' % pa
                ], shell=True, stdin=f)
            os.remove(path)
            return 1
My Problem:
I want to use OpenCV functions like the MIL tracker or the MedianFlow tracker in MATLAB (these functions are not in mexopencv), but I don't know how to do this; the documentation of OpenCV/mexopencv doesn't help me. This doesn't help either: how do OpenCV shared libraries in matlab? - because the link in the answer is down.
So is there a way to use these functions in MATLAB? And if so, how?
Why? As part of my bachelor thesis I have to compare different already-implemented ways of tracking people.
If you would like to use these functions specifically in MATLAB, you could always write your own MEX file in C/C++ and pass the data back and forth between the two, but this requires some basic C++ knowledge and an understanding of how MEX files are created.
Personally, I would definitely recommend trying this with Python and the OpenCV Python interface, since it is so widely used and better supported than the MATLAB route (plus it's always a useful skill to be able to switch between Python and MATLAB as and when needed).
There is a full example with the MIL tracker and the MedianFlow tracker (and others) here (which demonstrates using them in C++ and Python!).
Python Example :
import cv2
import sys

(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')

if __name__ == '__main__' :

    # Set up tracker.
    # Instead of MIL, you can also use

    tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN']
    tracker_type = tracker_types[2]

    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        if tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        if tracker_type == 'KCF':
            tracker = cv2.TrackerKCF_create()
        if tracker_type == 'TLD':
            tracker = cv2.TrackerTLD_create()
        if tracker_type == 'MEDIANFLOW':
            tracker = cv2.TrackerMedianFlow_create()
        if tracker_type == 'GOTURN':
            tracker = cv2.TrackerGOTURN_create()

    # Read video
    video = cv2.VideoCapture("videos/chaplin.mp4")

    # Exit if video not opened.
    if not video.isOpened():
        print "Could not open video"
        sys.exit()

    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print 'Cannot read video file'
        sys.exit()

    # Define an initial bounding box
    bbox = (287, 23, 86, 320)

    # Uncomment the line below to select a different bounding box
    bbox = cv2.selectROI(frame, False)

    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)

    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break

        # Start timer
        timer = cv2.getTickCount()

        # Update tracker
        ok, bbox = tracker.update(frame)

        # Calculate Frames per second (FPS)
        fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer);

        # Draw bounding box
        if ok:
            # Tracking success
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)
        else :
            # Tracking failure
            cv2.putText(frame, "Tracking failure detected", (100,80), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)

        # Display tracker type on frame
        cv2.putText(frame, tracker_type + " Tracker", (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2);

        # Display FPS on frame
        cv2.putText(frame, "FPS : " + str(int(fps)), (100,50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50), 2);

        # Display result
        cv2.imshow("Tracking", frame)

        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27 : break
I would definitely try it using Python (if that is an option). Otherwise, if MATLAB is a must, then probably try implementing the C++ example code shown in the link above as a MEX file and linking OpenCV during compilation, i.e.
mex trackerMexOpenCV.cpp 'true filepath location to openCV lib'
I hope this helps!
Edit: In the previous version I used a very old ffmpeg API. I now use the newest libraries. The problem has only changed slightly, from "Main" to "High".
I am using the ffmpeg C API to create an mp4 video in C++.
I want the resulting video to use the "Constrained Baseline" profile, so that it can be played on as many platforms as possible, especially mobile, but I get the "High" profile every time, even though I hard-coded the codec profile to FF_PROFILE_H264_CONSTRAINED_BASELINE. As a result, the video does not play on all our testing platforms.
This is what "ffprobe video.mp4 -show_streams" tells about my video streams:
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.5.0
Duration: 00:00:13.20, start: 0.000000, bitrate: 553 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 320x180,
424 kb/s, 15 fps, 15 tbr, 15 tbn, 30 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 12
kb/s
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : SoundHandler
-------VIDEO STREAM--------
[STREAM]
index=0
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
profile=High <-- This should be "Constrained Baseline"
codec_type=video
codec_time_base=1/30
codec_tag_string=avc1
codec_tag=0x31637661
width=320
height=180
has_b_frames=0
sample_aspect_ratio=N/A
display_aspect_ratio=N/A
pix_fmt=yuv420p
level=30
timecode=N/A
is_avc=1
nal_length_size=4
id=N/A
r_frame_rate=15/1
avg_frame_rate=15/1
time_base=1/15
start_time=0.000000
duration=13.200000
bit_rate=424252
nb_frames=198
nb_read_frames=N/A
nb_read_packets=N/A
TAG:creation_time=1970-01-01 00:00:00
TAG:language=und
TAG:handler_name=VideoHandler
[/STREAM]
-------AUDIO STREAM--------
[STREAM]
index=1
codec_name=aac
codec_long_name=Advanced Audio Coding
profile=unknown
codec_type=audio
codec_time_base=1/44100
codec_tag_string=mp4a
codec_tag=0x6134706d
sample_fmt=s16
sample_rate=44100
channels=2
bits_per_sample=0
id=N/A
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/44100
start_time=0.000000
duration=13.165714
bit_rate=125301
nb_frames=567
nb_read_frames=N/A
nb_read_packets=N/A
TAG:creation_time=1970-01-01 00:00:00
TAG:language=und
TAG:handler_name=SoundHandler
[/STREAM]
This is the function I use to add a video stream. All the values that come from ptr-> are set from outside; do they have to be specific values to get the correct profile?
static AVStream *add_video_stream( Cffmpeg_dll * ptr, AVFormatContext *oc, enum CodecID codec_id )
{
    AVCodecContext *c;
    AVStream *st;
    AVCodec* codec;

    // Get correct codec
    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        av_log(NULL, AV_LOG_ERROR, "%s","Video codec not found\n");
        exit(1);
    }

    // Create stream
    st = avformat_new_stream(oc, codec);
    if (!st) {
        av_log(NULL, AV_LOG_ERROR, "%s","Could not alloc stream\n");
        exit(1);
    }

    c = st->codec;

    /* Get default values */
    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        av_log(NULL, AV_LOG_ERROR, "%s","Video codec not found (default values)\n");
        exit(1);
    }
    avcodec_get_context_defaults3(c, codec);

    c->codec_id = codec_id;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->bit_rate = ptr->video_bit_rate;
    av_log(NULL, AV_LOG_ERROR, " Bit rate: %i", c->bit_rate);
    c->qmin = ptr->qmin;
    c->qmax = ptr->qmax;
    c->me_method = ptr->me_method;
    c->me_subpel_quality = ptr->me_subpel_quality;
    c->i_quant_factor = ptr->i_quant_factor;
    c->qcompress = ptr->qcompress;
    c->max_qdiff = ptr->max_qdiff;

    // We need to set the level and profile to get videos that play (hopefully) on all platforms
    c->level = 30;
    c->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;

    c->width = ptr->dstWidth;
    c->height = ptr->dstHeight;
    c->time_base.den = ptr->fps;
    c->time_base.num = 1;
    c->gop_size = ptr->fps;
    c->pix_fmt = STREAM_PIX_FMT;
    c->max_b_frames = 0;

    // some formats want stream headers to be separate
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}
Additional info:
As a reference video, I use the gizmo.mp4 that Mozilla serves as an example that plays on every platform/browser. It definitely has the "Constrained Baseline" profile, and definitely works on all our testing smartphones. You can download it here. Our self-created video doesn't work on all platforms and I'm convinced this is because of the profile.
I am also using qt-faststart.exe to move the headers to the start of the file after creating the mp4, as this cannot be done in a good way in C++ directly. Could that be the problem?
Obviously, I am doing something wrong, but I don't know what it could be. I'd be thankful for every hint ;)
I have the solution. After spending some time (and some discussion in the ffmpeg bug tracker) and browsing around for profile-setting examples, I finally figured it out.
One needs to use av_opt_set(codecContext->priv_data, "profile", "baseline" (or any other desired profile), AV_OPT_SEARCH_CHILDREN)
So in my case that would be:
Wrong:
// We need to set the level and profile to get videos that play (hopefully) on all platforms
c->level = 30;
c->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;
Correct:
// Set profile to baseline
av_opt_set(c->priv_data, "profile", "baseline", AV_OPT_SEARCH_CHILDREN);
Completely unintuitive and contrary to the rest of the API usage, but that's ffmpeg philosophy. You don't need to understand it, you just need to understand how to use it ;)