RPM build correctly setting Provides on openSUSE, but not Fedora - rpm-spec

I have an RPM specfile for MakeMKV which is uploaded to OBS to build packages for Fedora and openSUSE. The specfile is the same for both distributions, but only openSUSE correctly adds the built libraries to the built packages' Provides section, so package installation fails on Fedora.
I have tried placing all of the files in a single package, but this still doesn't allow installation on Fedora. openSUSE detects the libraries whether they are all placed in the same package or packaged individually.
Here is my specfile:
Name: makemkv
Version: 1.9.10
Release: 0
Summary: DVD and Blu-ray to MKV converter and network streamer
License: SUSE-NonFree
Group: Productivity/Multimedia/Other
Url: http://www.makemkv.com
Source0: %name-oss-%version.tar.gz
Source1: %name-bin-%version.tar.gz
BuildRequires: pkgconfig(zlib) pkgconfig(openssl) pkgconfig(expat) pkgconfig(libavcodec) pkgconfig(libavutil)
%if 0%{?centos} || 0%{?fedora} || 0%{?rhel} || 0%{?scientificlinux}
BuildRequires: pkgconfig
%endif
%if 0%{?centos} || 0%{?rhel} || 0%{?scientificlinux}
BuildRequires: libqt4-devel
%endif
%if 0%{?fedora}
BuildRequires: qt5-qtbase-devel
%endif
%if 0%{?suse_version}
BuildRequires: pkg-config libqt5-qtbase-devel update-desktop-files
%endif
%description
MakeMKV is your one-click solution to convert video that you own into a free and patent-unencumbered format that can be played everywhere. MakeMKV is a format converter, otherwise called a "transcoder". It converts the video clips from a proprietary (and usually encrypted) disc into a set of MKV files, preserving most information but not changing it in any way. The MKV format can store multiple video/audio tracks with all meta-information and preserve chapters. There are many players that can play MKV files on nearly all platforms, and there are tools to convert MKV files to many formats, including DVD and Blu-ray discs.
Additionally, MakeMKV can instantly stream decrypted video without intermediate conversion to a wide range of players, so you may watch Blu-ray and DVD discs with your favorite player on your favorite OS or on your favorite device.
%package -n libdriveio0
Summary: DVD and Blu-ray to MKV converter and network streamer
%description -n libdriveio0
MakeMKV is your one-click solution to convert video that you own into a free and patent-unencumbered format that can be played everywhere. MakeMKV is a format converter, otherwise called a "transcoder". It converts the video clips from a proprietary (and usually encrypted) disc into a set of MKV files, preserving most information but not changing it in any way. The MKV format can store multiple video/audio tracks with all meta-information and preserve chapters. There are many players that can play MKV files on nearly all platforms, and there are tools to convert MKV files to many formats, including DVD and Blu-ray discs.
Additionally, MakeMKV can instantly stream decrypted video without intermediate conversion to a wide range of players, so you may watch Blu-ray and DVD discs with your favorite player on your favorite OS or on your favorite device.
%package -n libmakemkv1
Summary: DVD and Blu-ray to MKV converter and network streamer
%description -n libmakemkv1
MakeMKV is your one-click solution to convert video that you own into a free and patent-unencumbered format that can be played everywhere. MakeMKV is a format converter, otherwise called a "transcoder". It converts the video clips from a proprietary (and usually encrypted) disc into a set of MKV files, preserving most information but not changing it in any way. The MKV format can store multiple video/audio tracks with all meta-information and preserve chapters. There are many players that can play MKV files on nearly all platforms, and there are tools to convert MKV files to many formats, including DVD and Blu-ray discs.
Additionally, MakeMKV can instantly stream decrypted video without intermediate conversion to a wide range of players, so you may watch Blu-ray and DVD discs with your favorite player on your favorite OS or on your favorite device.
%package -n libmmbd0
Summary: DVD and Blu-ray to MKV converter and network streamer
%description -n libmmbd0
MakeMKV is your one-click solution to convert video that you own into a free and patent-unencumbered format that can be played everywhere. MakeMKV is a format converter, otherwise called a "transcoder". It converts the video clips from a proprietary (and usually encrypted) disc into a set of MKV files, preserving most information but not changing it in any way. The MKV format can store multiple video/audio tracks with all meta-information and preserve chapters. There are many players that can play MKV files on nearly all platforms, and there are tools to convert MKV files to many formats, including DVD and Blu-ray discs.
Additionally, MakeMKV can instantly stream decrypted video without intermediate conversion to a wide range of players, so you may watch Blu-ray and DVD discs with your favorite player on your favorite OS or on your favorite device.
%prep
tar xf %{SOURCE0}
tar xf %{SOURCE1}
%build
cd %{name}-oss-%{version}
%configure
make %{?_smp_mflags}
%install
cd %{name}-oss-%{version}
make install DESTDIR=%buildroot
%if 0%{?suse_version}
%suse_update_desktop_file -r makemkv
%suse_update_desktop_file -c makemkv MakeMKV "DVD and Blu-ray to MKV converter and network streamer" makemkv makemkv AudioVideo AudioVideoEditing
%endif
cd ../%{name}-bin-%{version}
mkdir tmp
echo accepted > tmp/eula_accepted
make install DESTDIR=%buildroot
%if "/usr/lib" != "%_libdir"
mv %buildroot/usr/lib/ %buildroot/%_libdir
%endif
%post -n libdriveio0 -p /sbin/ldconfig
%postun -n libdriveio0 -p /sbin/ldconfig
%post -n libmakemkv1 -p /sbin/ldconfig
%postun -n libmakemkv1 -p /sbin/ldconfig
%post -n libmmbd0 -p /sbin/ldconfig
%postun -n libmmbd0 -p /sbin/ldconfig
%files
%defattr(-,root,root)
#oss
%dir /usr/share/icons/hicolor
%dir /usr/share/icons/hicolor/*
%dir /usr/share/icons/hicolor/*/apps
/%_bindir/makemkv
/usr/share/applications/makemkv.desktop
/usr/share/icons/hicolor/*/apps/makemkv.png
#bin
/%_bindir/makemkvcon
/%_bindir/mmdtsdec
/usr/share/MakeMKV/
%files -n libdriveio0
%defattr(-,root,root)
/%_libdir/libdriveio.so.0
%files -n libmakemkv1
%defattr(-,root,root)
/%_libdir/libmakemkv.so.1
%files -n libmmbd0
%defattr(-,root,root)
/%_libdir/libmmbd.so.0

The permissions on the libraries were not properly set, and Fedora's automatic dependency generator only scans shared libraries that are marked executable, so no Provides were generated for them; openSUSE's scanner picks them up regardless. Running
chmod 755 %buildroot/%_libdir/lib*.so*
fixed this, and Fedora then set the Provides field properly.
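In the specfile above, the natural place for this fix is at the end of the %install section, after both make install steps; a minimal sketch (only the chmod line and its comment are new):
%install
...
make install DESTDIR=%buildroot
# mark the shared libraries executable so Fedora's dependency
# generator scans them and emits the Provides
chmod 755 %buildroot/%_libdir/lib*.so*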

Related

Powershell ffmpeg

I am successfully using ffmpeg via PowerShell to compress video files; however, I can't get the compression to happen in a single location. I only have success when I use separate input and output paths.
For example, this command will be successful:
ffmpeg -y -i \\path\$x -vf scale=1920:1080 \\diff_path\$x
This will not do anything, or will corrupt the file:
ffmpeg -y -i \\path\$x -vf scale=1920:1080 \\path\$x
I think I understand why this doesn't work, but I'm having a hard time finding a solution. I want the script to address a file and compress it in its current location, leaving only a single compressed video file.
Thanks all
Not possible. Not the answer you want, but FFmpeg cannot perform in-place file editing, which means it has to write a new output file.
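The usual workaround is to encode to a temporary file and replace the original only if ffmpeg succeeds; a minimal PowerShell sketch (the tmp_ prefix and the exit-code check are my assumptions, paths follow the question):
ffmpeg -y -i \\path\$x -vf scale=1920:1080 \\path\tmp_$x
if ($LASTEXITCODE -eq 0) { Move-Item -Force \\path\tmp_$x \\path\$x }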

Objcopy elf to bin file

I have an STM32F404 board and I am trying to flash it. I am following this tutorial.
In the project Makefile:
$(PROJ_NAME).elf: $(SRCS)
$(CC) $(CFLAGS) $^ -o $@
$(OBJCOPY) -O ihex $(PROJ_NAME).elf $(PROJ_NAME).hex
$(OBJCOPY) -O binary $(PROJ_NAME).elf $(PROJ_NAME).bin
burn: proj
$(STLINK)/st-flash write $(PROJ_NAME).bin 0x8000000
The bin file is generated using OBJCOPY and then flashed using the Make target burn.
My questions:
Question 1: What does OBJCOPY=arm-none-eabi-objcopy do in this case? I opened the man page but I didn't fully understand it; can anyone explain it simply?
Question 2: Flashing the bin file gives the expected result (LEDs blinking). However, the LEDs do not blink when flashing the elf file with $(STLINK)/st-flash write $(PROJ_NAME).elf 0x8000000. Why?
Question 1: What does OBJCOPY=arm-none-eabi-objcopy do in this case? I opened the man page but I didn't fully understand it; can anyone explain it simply?
It assigns the value arm-none-eabi-objcopy to the make variable OBJCOPY.
When make executes this command:
$(OBJCOPY) -O binary $(PROJ_NAME).elf $(PROJ_NAME).bin
the actual command that runs is
arm-none-eabi-objcopy -O binary tim_time_base.elf tim_time_base.bin
Question 2: Flashing the bin file gives the expected result (LEDs blinking). However, the LEDs do not blink when flashing the elf file with $(STLINK)/st-flash write $(PROJ_NAME).elf 0x8000000. Why?
The tim_time_base.elf is an ELF file -- it has metadata associated with it. Run arm-none-eabi-readelf -h tim_time_base.elf to see some of that metadata.
But when your processor jumps to location 0x8000000 after reset, it expects to find executable instructions, not metadata. When it finds "garbage" it doesn't understand, it probably just halts. It certainly doesn't find instructions to blink the LEDs.
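You can see the difference directly; an illustrative check (file names follow the tutorial):
arm-none-eabi-readelf -h tim_time_base.elf   # prints the ELF header: magic bytes, entry point, section info
xxd tim_time_base.bin | head -n 2            # the raw image starts with the vector table: initial SP, then the reset handler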
In case someone wants to use the DFU ("Device Firmware Upgrade") function, this tutorial teaches how to load the binary file via USB, when the STM32 is operating as USB Host (or maybe OTG):
STM32 USB training - 11.3 USB MSC DFU host labs
This tutorial is part of a series of videos that is highly recommended for understanding a little better how the STM32 USB ports work and how to use them (the videos are provided by the STM32 manufacturer itself; I recommend watching all the videos on this channel):
MOOC - STM32 USB training
Note: The example code from the STM32 tutorials is available in the descriptions of the videos themselves.
The binary file (*.bin) can be obtained with the help of the command the colleague above explained (Employed Russian), and that command can also be adapted to produce a file containing the comparison value for CRC usage; some details can be seen in the following posts:
Hands-on: CRC Checksum Generation
Srec_cat could be used to generate a CRC checksum and put it into the HEX file. To simplify the process, please put srec_cat.exe into the root of the project folder.
Some tips and solutions about this CRC usage (Windows/Linux)
Unfortunately the amount of code is too big to post here directly, but I leave the code related to the other answer below:
arm-none-eabi-objcopy -O ihex "${BuildArtifactFileBaseName}.elf" "${BuildArtifactFileBaseName}.hex" && ..\checksum.bat ${BuildArtifactFileBaseName}.hex
Contents of the checksum.bat file:
#!/bin/bash
# Windows [Dos comment: REM]:
#..\srec_cat.exe %1 -Intel -fill 0xFF 0x08000000 0x080FFFFC -STM32 0x080FFFFC -o ROM.hex -Intel
# Linux [Linux comment: #]:
srec_cat $1 -Intel -fill 0xFF 0x08000000 0x080FFFFC -STM32 0x080FFFFC -o ROM.hex -Intel
Note: In this case, the file to be written is ROM.hex (you will need to configure STM32CubeIDE to be able to do this operation, since the IDE uses the *.elf file by default; see how to do it in the tips above).
This other tutorial deals with using the file with the *.DFU extension:
DFU - DfuSe
The key benefits of the DFU Bootloader are: No specific tools such as JTAG, ST-LINK or USB-to-UART cable are needed. The ability to program an "empty" STM32 device in a newly-assembled board via USB. And easy upgrading of the STM32 firmware during development or pre-production.
This need for a HEX file ties in naturally with the ROM.hex file generated with the CRC value, making the two procedures practically continuous:
You must generate a .DFU file from an .HEX or .S19 file; to do this, use the DFU File Manager.
But it seems that using the *.DFU file is not as standalone as using the *.BIN file, so I found this other code that converts the HEX file (generated with CRC) to a *.BIN file, which can be used from a USB stick, as per the tutorial cited at the beginning of this answer (11.3 USB MSC DFU host):
objcopy --input-target=ihex --output-target=binary code00.hex code00.bin
Source
It sounds a little confusing, but we have these steps:
1- The STM32CubeIDE generates the *.elf file.
2- After compilation, the *.elf file is converted to *.hex.
3- The CRC value is added to the *.hex file via the srec_cat application.
4- Now the *.hex file is converted to *.bin.
5- The BIN file is then stored on a USB flash drive.
6- The STM32 updates its firmware using the file on the USB flash drive.
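Put together as commands, the pipeline might look like this (a sketch only; file names and the fill/CRC range are taken from the examples above):
arm-none-eabi-objcopy -O ihex firmware.elf firmware.hex
srec_cat firmware.hex -Intel -fill 0xFF 0x08000000 0x080FFFFC -STM32 0x080FFFFC -o ROM.hex -Intel
objcopy --input-target=ihex --output-target=binary ROM.hex code00.bin
# copy code00.bin to the USB flash drive for the MSC DFU host firmware update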
To use the *.BIN file, the STM32 must already be programmed to load it. If it is not (the STM32 is empty, or its current program was not made to load a BIN file), you will need an ST-LINK or another programmer, or perhaps the DFU method described in the tutorial above (DFU - DfuSe).

Tesseract: Advantage to Multi-Page Training File vs. Multiple Separate Files?

This SO answer suggests that training tesseract with .tif files has an advantage over .png files because .tif files can have multiple pages and thus a larger training sample. Yet this SO question discusses procedures for training with multiple images at once. Moreover, the man page for, e.g., mftraining suggests that it can accept multiple training files.
Is there any reason then not to train with multiple separate image files?
It appears that using multiple images to train tesseract on a single font works just fine. Below is a sketch of the workflow I employ:
# Convert the .pdf pages to .png files
convert -density 600 Page1.pdf eng1.MyNewFont.exp1.png
convert -density 600 Page2.pdf eng1.MyNewFont.exp2.png
# Create .box files
tesseract eng1.MyNewFont.exp1.png eng1.MyNewFont.exp1 -l eng batch.nochop makebox
tesseract eng1.MyNewFont.exp2.png eng1.MyNewFont.exp2 -l eng batch.nochop makebox
## correct boxes with jTessBoxEditor or another box editor ##
# Create two new box.tr files: eng1.MyNewFont.exp1.box.tr and eng1.MyNewFont.exp2.box.tr
tesseract eng1.MyNewFont.exp1.png eng1.MyNewFont.exp1.box -l eng1 nobatch box.train.stderr
tesseract eng1.MyNewFont.exp2.png eng1.MyNewFont.exp2.box -l eng1 nobatch box.train.stderr
# Extract characters from the two .box files
unicharset_extractor eng1.MyNewFont.exp1.box eng1.MyNewFont.exp2.box
echo "MyNewFont 0 0 0 0 0" >> font_properties
# train using the two new box.tr files.
mftraining -F font_properties -U unicharset -O eng1.unicharset eng1.MyNewFont.exp1.box.tr eng1.MyNewFont.exp2.box.tr
cntraining eng1.MyNewFont.exp1.box.tr eng1.MyNewFont.exp2.box.tr
## rename files
mv inttemp eng1.inttemp
mv normproto eng1.normproto
mv pffmtable eng1.pffmtable
mv shapetable eng1.shapetable
combine_tessdata eng1. ## create .traineddata file.
You can certainly train with multiple image files; Tesseract would treat them as having different, separate fonts. And there is a limit (64) on the number of images. If they share a common font, it would be better to put them in a multi-page TIFF. According to its specs, a TIFF file can be a container holding many images.
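If you do go the multi-page route, the separate page images can be combined into one TIFF first; a minimal ImageMagick sketch (the output file name is an assumption):
convert eng1.MyNewFont.exp1.png eng1.MyNewFont.exp2.png eng1.MyNewFont.exp0.tif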
https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract
https://en.wikipedia.org/wiki/Tagged_Image_File_Format

Custom XMP data: Create namespace, or just plug it in somewhere?

If I am encoding arbitrary data into XMP, is it better to use an existing namespace or create my own? (NB: I barely know what I'm talking about.)
I'm using Exempi to edit XMP meta on an AVI file:
exempi -w -n http:// TEST -s PROPERTY1 -v VALUE1 example.avi
Creates:
<rdf:Description rdf:about=""
xmlns:TEST="http://">
<TEST:PROP1>VALUE1</TEST:PROP1>
</rdf:Description>
However, I don't have a URI (I'm not even sure what it's for) and I don't think this information will be universally accessible to other programs that read XMP.
Is there a standard place I should put arbitrary data (say, a "comment" field, perhaps serialized) that I can more reliably expect to be accessible to other programs?
Using exempi to add comment:
exempi -w -s xmpDM:logComment -v "a:2:{s:5:\"TEST1\";s:6:\"VALUE1\";s:5:\"TEST2\";s:6:\"VALUE2\";}" example.avi
Yields:
<rdf:Description rdf:about=""
xmlns:xmpDM="http://ns.adobe.com/xmp/1.0/DynamicMedia/">
<xmpDM:logComment>a:2:{s:5:"TEST1";s:6:"VALUE1";s:5:"TEST2";s:6:"VALUE2";}</xmpDM:logComment>
</rdf:Description>
This appears to be more accessible to other programs (exiftool, the properties dialog of most major video players, etc.).
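As a quick check that other tools can see the field, it can be read back with exiftool (illustrative; exiftool exposes the tag in its XMP-xmpDM group):
exiftool -XMP-xmpDM:LogComment example.avi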
I'm on Ubuntu 16.04.

Compare file sizes and download if they're different via wget

I'm downloading some .mp3 files (all legal) via wget:
wget -r -nc files.myserver.com
I have to stop the download sometimes, and at those times the file is left partially downloaded. For example, a 10-minute record.mp3 file becomes a 4-minute record.mp3 file. It plays correctly but is incomplete.
If I then use the same command above, wget skips record.mp3 because it already exists on my local computer, even though it isn't complete.
I wonder if there is a way to check the file sizes and, if the size on the remote server and the local computer isn't the same, re-download the file. (I've learned the --spider option gives the file size, but is there a way to automatically check the file sizes and decide whether to download?)
I would go with wget's -N option for timestamping, but note that wget will only compare the file sizes if you also specify the --no-if-modified-since option. Without it, incomplete files are indeed skipped on the next run because they receive a timestamp of the current time, which is newer than that on the server.
The reason is probably that with only -N, a GET request is sent for the file with the If-Modified-Since field set. The server responds with either 200 or 304, but the 304 doesn't contain the file size so wget can't check it.
With --no-if-modified-since wget sends a HEAD request instead to get the timestamp and file size, and checks both.
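You can watch the difference in wget's debug output (illustrative; the URL follows the question):
wget -N -d http://files.myserver.com/record.mp3 2>&1 | grep -E 'GET |HEAD |If-Modified-Since'
wget -N --no-if-modified-since -d http://files.myserver.com/record.mp3 2>&1 | grep -E 'GET |HEAD |If-Modified-Since'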
What I use for recursive download of a folder:
wget -T 300 -nv -t 1 -r -nd -np -l 1 -N --no-if-modified-since -P $my_folder $my_url
With:
-T 300: Set the network timeout to 300 seconds
-nv: Turn off verbose without being completely quiet
-t 1: Set number of tries to 1
-r: Turn on recursive retrieving
-nd: Do not create a hierarchy of directories when retrieving recursively
-np: Do not ever ascend to the parent directory when retrieving recursively
-l 1: Specify recursion maximum depth 1
-N: Turn on time-stamping
--no-if-modified-since: Do not send If-Modified-Since header in '-N' mode, send preliminary HEAD request instead
You may try the -c option to continue the download of partially downloaded files; however, the manual gives an explicit warning:
You need to be especially careful of this when using -c in conjunction
with -r, since every file will be considered as an "incomplete
download" candidate.
While there is no perfect solution to this problem, you could try the -N option to turn on timestamping. This might prevent errors when the file has changed on the server, but only if the server supports timestamping and partial downloads. Try it and see how it goes.
wget -r -N -c files.myserver.com
If you need to check whether a file was partially downloaded (has a different size) or was updated on the remote server (by timestamp), and in either case update it locally, use the -N option.
Here is some additional info about the -N (--timestamping) option from the Wget docs:
If the local file does not exist, or the sizes of the files do not match, Wget will download the remote file no matter what the time-stamps say.
From: https://www.gnu.org/software/wget/manual/wget.html (Chapter 5, Time-Stamping)