MLT transparency of watermarks and tracks not working - png

I am trying to get melt to work, but it seems it doesn't support transparency (alpha channels). Using the official watermarking example, I get a black background behind the PNG and can't see any video through it.
melt \
test.mp4 out=1000 \
-track \
watermark1.png out=1000 \
-transition composite fill=1 in=0 out=1000 a_track=0 b_track=1 geometry=85%/5%:10%x10% sliced_composite=1 \
$*
I installed melt 6.20.0 via Homebrew on a Mac.
Any help would be much appreciated!

Your command works perfectly for me. I am on windows and I used the melt.exe command that comes with Shotcut. Perhaps there is something missing in your version of melt. Consider downloading Shotcut from here:
https://shotcut.org/download/
Then, use the included version of melt to try your command.
Also, are you absolutely sure that your png file has a transparent background? Sorry, but I just have to ask.
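One quick way to answer that last question without opening an image editor: the PNG header records the color type, and types 4 (grayscale+alpha) and 6 (RGBA) are the ones that carry an alpha channel. A minimal, stdlib-only Python sketch (the filename is just a placeholder; note a palette PNG can still be transparent via a tRNS chunk, which this quick check ignores):

```python
import struct

def png_has_alpha(path):
    """Check the PNG IHDR color type for an alpha channel.

    Color type 4 (grayscale+alpha) and 6 (RGBA) carry alpha. A palette
    PNG (type 3) can still be transparent via a tRNS chunk, which this
    quick check does not look for.
    """
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        # First chunk must be IHDR: 4-byte length, 4-byte type, 13-byte data.
        length, chunk_type = struct.unpack(">I4s", f.read(8))
        if chunk_type != b"IHDR":
            raise ValueError("malformed PNG: IHDR is not the first chunk")
        ihdr = f.read(13)
        color_type = ihdr[9]  # width(4) height(4) bitdepth(1) colortype(1) ...
        return color_type in (4, 6)

# Example: png_has_alpha("watermark1.png")
```

If this returns False for your watermark, melt's black background is expected: the PNG simply has no alpha channel to composite with.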

Related

How to take images with Raspberry Pi since "raspistill" and "raspivid" are deprecated

Since the Raspberry Pi is transitioning from the old raspistill and raspivid to the newer libcamera, how should I take an image now if I don't want to use the CLI or C as the programming language? I can't find any wrapper for libcamera in any language other than C, and the new official Picamera2 library is also in an alpha phase and not recommended for production use.
I am also using a 64-bit version of the Raspberry Pi OS so I can't use the legacy camera interface.
I could downgrade to 32-bit, but what is the point of deprecating the old system if the new one is clearly not ready for production use?
How do you guys handle using the camera of the Raspberry Pi at the moment if you want to use a wrapper like Picamera? Am I missing something?
At the moment, the best way, if you want to use Bullseye, is probably to run libcamera-vid and pipe its output into a Python script. You can either launch it with Python's subprocess module, or just build a shell pipeline:
libcamera-vid <params> | python script.py
Be sure to read from sys.stdin.buffer (the binary stream, not sys.stdin) to avoid CR/LF mangling.
Probably choose a YUV-based format to ensure frames are a deterministic length, as opposed to MJPEG where the frame length will vary according to image content and you'll have to search for JPEG SOI/EOI markers.
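A minimal sketch of the Python side of that pipeline, assuming 640x480 YUV420 output (adjust WIDTH and HEIGHT to match whatever you actually pass to libcamera-vid):

```python
import sys

# Must match the --width/--height passed to libcamera-vid.
WIDTH, HEIGHT = 640, 480
# YUV420: one full-size Y plane plus quarter-size U and V planes.
FRAME_SIZE = WIDTH * HEIGHT * 3 // 2

def read_frames(stream):
    """Yield fixed-size raw YUV420 frames from a binary stream."""
    while True:
        frame = stream.read(FRAME_SIZE)
        if len(frame) < FRAME_SIZE:  # EOF (or a truncated final frame)
            return
        yield frame

# Usage (example parameters):
#   libcamera-vid -t 0 --codec yuv420 --width 640 --height 480 -o - | python script.py
# then iterate:
#   for frame in read_frames(sys.stdin.buffer): ...
```

This is exactly why a YUV format is the easier choice here: every frame is the same number of bytes, so a plain fixed-size read suffices, with no scanning for JPEG markers.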
Did you try to see if the cam utility is installed?

How to remove 'Program Name' from Images?

I've tried the solutions from here: removing 'Program Name' in metadata for images, but nothing worked. Is there a way to remove all the 'Photoshop' program information from the image while retaining the other metadata created by Photoshop (e.g. name, contact info, copyright info, etc.)?
EDIT: The 'Program Name' from the properties menu when someone right clicks on an image in windows.
None of the exiftool-based approaches worked for me. Assuming you are on Windows, the solution to the problem is very simple: right-click the image, open Properties > Details, and use 'Remove Properties and Personal Information', where you can select which property you want to remove. If this worked for you, kindly mark it as the right answer.
It would be helpful if you could post an image, or link, so that we could see what you mean. Maybe you can run:
jhead -v -v yourimage.jpg
and edit your question and post the output.
In the meantime, you will probably find that one of the following options does what you want:
jhead -du -di -dx yourimage.jpg
You can test with the first command I gave above.
jhead is available from here.

appledoc: documentation appears in Xcode's Documentation list but is empty

I have been spending a while working out how to generate documentation via appledoc, with help from here.
Now I can see my new library listed on the left-hand side in Xcode/Help/Documentation. However, it is empty. I am still investigating, but I don't seem to be making any progress.
example.m is an example file whose documentation I want to display:
/** Query the geonames.org service for the name of the place near the given
 * position (WGS84).
 * @param latitude The latitude for the position.
 * @param longitude The longitude for the position. */
- (void)findNearbyPlaceNameForLatitude:(double)latitude longitude:(double)longitude;
Now I am doing
appledoc --project-name Example --project-company "MY_COMPANY"
--company-id ABC.com -o "/Users/Desktop/AppleDoc_Example/" -h -d -n ~/Users/Desktop/AppleDoc_Example/example.m
After launching Xcode, the documentation page for my library is empty.
Does anyone know what the problem is? Please advise; any comments are welcome.
Thanks
I've been using appledoc for a couple of years and I think you are missing the --install-docset argument which tells appledoc to install the docset into Xcode. I suspect appledoc is working fine, just not updating Xcode with the latest build.
You can see the script I use at https://github.com/drekka/dUsefulStuff/blob/master/scripts/createDocumentation.sh which may help.

Geoserver Configuration Reload

I am not quite sure if I should even be doing this. I will be dynamically generating my SLD file, and it looks like when you update the SLD in the GeoServer admin it does a reload. So I tried to do a reload using the REST API and curl, and it does not appear to work.
Here is my Curl
curl -uadmin:password-XPOST http://localhost:8080/geoserver/rest/reload
If there is another way to clear everything so my SLD reloads, that would be great as well. I just need to get this working and am not sure why it isn't.
Thank you
I have asked this on the GIS site and received an answer suitable for what I am doing.
https://gis.stackexchange.com/questions/7754/geoserver-configuration-reload
This documentation on the REST API for Configuration Reloading may be what you are looking for...
http://docs.geoserver.org/stable/en/user/restconfig/rest-config-api.html#configuration-reloading
Your curl request seems pretty close, I only had to change it slightly and it worked in my environment...
curl -u admin:password -v -XPOST http://localhost:8080/geoserver/rest/reload
If this doesn't work for you, can you describe more thoroughly in what way it is not working?
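If you end up triggering the reload from code rather than curl, a stdlib-only Python sketch of the same POST (the URL and admin:password credentials are the example values from the question; HTTP Basic auth is built by hand here and sent preemptively):

```python
import base64
import urllib.request

GEOSERVER = "http://localhost:8080/geoserver"  # adjust host/port as needed
USER, PASSWORD = "admin", "password"           # example credentials from the question

def reload_request(base_url, user, password):
    """Build an authenticated POST to GeoServer's /rest/reload endpoint."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{base_url}/rest/reload",
        data=b"",  # empty body; the POST itself triggers the reload
        method="POST",
        headers={"Authorization": f"Basic {token}"},
    )

# To actually fire it against a running GeoServer:
#   urllib.request.urlopen(reload_request(GEOSERVER, USER, PASSWORD))
```

A 200 response means the catalog was reloaded, which is the same effect as the curl command above.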

iPhone, OpenCV and CvBlobDetector

I found Yoshimasa Niwa's article about blob detection here:
http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
And something on realtime face detection here:
http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/
But what I really want to do is realtime blob detection (like http://www.youtube.com/watch?v=LIgsVoCXTXM) using the iPhone 4 camera.
I can find the headers for CvBlobDetector in cvvidsurv.hpp. But trying to use that without modification is not the right thing to do.
How do I get CvBlobDetector to work? Or is there an alternate solution?
Make sure you've followed the instructions to use it properly:
http://opencv.willowgarage.com/wiki/cvBlobsLib
One of the alternative solutions I used, and it works well, is:
http://code.google.com/p/cvblob/
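For reference, the core of what CvBlobDetector and cvblob do is connected-component labeling over a thresholded image. A pure-Python sketch of just that idea (illustrative only, not the OpenCV implementation, and far too slow for realtime use on a phone):

```python
from collections import deque

def find_blobs(mask):
    """Label 4-connected components in a binary mask (list of lists of 0/1).

    Returns a list of blobs, each a list of (row, col) pixel coordinates.
    Real blob libraries add contour extraction, area filtering, etc.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one connected component with BFS.
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs
```

Understanding this makes it easier to evaluate whichever library you pick: they all reduce to thresholding plus component labeling, differing mainly in speed and in what per-blob statistics they report.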