encrypt binary with 7z without filenames?

I like 7z compression, but how do I compress data only? I don't want an entry in the archive with file info, just raw data. How can I do this? It would also be nice if I could remove the headers too, but that isn't necessary.

From man 7z:
-si Read data from StdIn (eg: tar cf - directory | 7z a -si directory.tar.7z)
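A minimal sketch of both ideas, assuming the p7zip package is installed (secret.7z and data.bin are placeholder names, not from the question):

```shell
# A sketch; requires the p7zip package (skip cleanly where it's absent).
command -v 7z >/dev/null 2>&1 || exit 0
cd "$(mktemp -d)"

# data.bin is a placeholder for whatever raw bytes you want to store.
printf 'raw payload' > data.bin

# Feed the bytes via stdin so the archive never records a source filename.
7z a -si secret.7z < data.bin

# Extract the stream back to stdout:
7z e -so secret.7z > restored.bin

# To hide even the stored entry names, encrypt with a password and turn
# on header encryption:
#   7z a -p -mhe=on secret.7z somefile
```

With -si the archive holds one anonymous stream rather than a named file entry; -mhe=on goes further and encrypts the file table itself, which covers the "remove headers" wish when combined with -p.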

Related

What is the ExifTool syntax to extract thumbnails from raw to a specific folder?

My source folders are on an external hard drive, but I want my thumbnails local. The following works, but it puts all the extracted files in the same folder as the source files, which requires another step to collect them and move them to a folder on my local machine.
exiftool -b -ThumbnailImage -w _thumb.jpg -ext CR2 -r source_folder_path\ > _extraction_results.txt
Is there any way to write them to a different folder in the same call to ExifTool?
Thanks!
Add the directory path to the name given in the -w (textout) option (see the examples at that link).
Example:
exiftool -b -ThumbnailImage -w /path/to/thumbdir/%f_thumb.jpg -ext CR2 -r source_folder_path\ > _extraction_results.txt

How to handle similar names in wget?

I'm downloading so many images that their links are inside a file with the command:
wget -i file.txt
I suspect many of the files might have the same names, so I'm afraid they will be overwritten. Is there any way to make wget assign sequential names to the files, or handle duplicate names in some other way?
For wget 1.19.1, what you're looking for is the default behavior. Files that have the same names will be numbered when a matching file is found.
Assuming that file.txt looks like:
http://www.apple.com
http://www.apple.com
http://www.apple.com
http://www.apple.com
The output of wget -i file.txt will be four files, named:
index.html
index.html.1
index.html.2
index.html.3
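The suffixing rule is easy to mimic in plain shell; the sketch below only illustrates the logic, it is not wget's actual implementation (save_as is a made-up helper name):

```shell
# Sketch of the clash-handling rule: if the target name exists, try
# name.1, name.2, ... until a free name is found. save_as is a made-up
# helper, not part of wget.
save_as() {
    name=$1
    if [ -e "$name" ]; then
        n=1
        while [ -e "$name.$n" ]; do
            n=$((n + 1))
        done
        name="$name.$n"
    fi
    printf '%s\n' "$name"
}
```

Running `touch "$(save_as index.html)"` four times in an empty directory produces index.html, index.html.1, index.html.2, index.html.3, matching the listing above.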

centos 7 zip directory

How do I undo the gzip command in CentOS?
sudo gzip -r plugins
If I try sudo gunzip -r plugins, it gives me an error: not in gzip format.
What I want to do is zip the directory.
tar -zcvf archive.tar.gz directory/
Check these answers: https://unix.stackexchange.com/a/93158, https://askubuntu.com/a/553197 and https://www.centos.org/docs/2/rhl-gsg-en-7.2/s1-zip-tar.html
sudo find . -name "*.gz" -exec gunzip {} \;
I think you have two questions:
1. How do I undo what I did?
2. How do I zip a directory?
Have you even looked at man gzip or gzip --help?
Answers
1. find plugins -type f -name "*.gz" -exec gunzip {} +
2. tar -zcvf plugins.tar.gz plugins
2b. I suspect that your level of Linux experience is low, so you'd probably be more comfortable using zip. (Remember to do a zip --help or man zip before coming back for more advice.)
Explanation. gzip only zips up one file. If you want to do a bunch of files, you have to smush them up into one file first (using tar) and then compress that using gzip.
What you did was recursively gzip up each individual file in plugins/.
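To make the difference concrete, here is a self-contained sketch of both answers, using a throwaway copy of a plugins directory (the file names are stand-ins, not from the question):

```shell
set -e
cd "$(mktemp -d)"

# Recreate the situation: a directory whose files were each gzipped in place.
mkdir plugins
echo "plugin one" > plugins/a.txt
echo "plugin two" > plugins/b.txt
gzip -r plugins               # now plugins/ holds a.txt.gz and b.txt.gz

# 1. Undo: decompress every .gz file back in place.
find plugins -type f -name "*.gz" -exec gunzip {} +

# 2. What was actually wanted: one compressed archive of the directory.
tar -czf plugins.tar.gz plugins
```

After this, plugins/ is back to its original state and plugins.tar.gz holds the whole directory in a single compressed file.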

How to rename files downloaded with wget -r

I want to download an entire website using the wget -r command and change the name of the file.
I have tried with:
wget -r -o doc.txt "http....
hoping that wget would automatically create files in order, like doc1.txt, doc2.txt, but it actually saves the standard-output log to that file.
Is there any way to do this with just one command?
Thanks!
-r tells wget to recursively get resources from a host.
-o file saves log messages to file instead of standard error. That is not what you are looking for; you probably want -O file.
-O file stores the resource(s) in the given file, instead of creating a file in the current directory with the name of the resource. If used in conjunction with -r, it causes wget to store all resources concatenated to that file.
Since wget -r downloads and stores more than one file, recreating the server's file tree on the local system, it makes no sense to give the name of a single file to store everything in.
If what you want is to rename all downloaded files to match the pattern docX.txt, you can do it with a separate command after wget has finished:
wget -r http....
i=1
while IFS= read -r file; do
    mv "$file" "$(dirname "$file")/doc$i.txt"
    i=$((i + 1))
done < <(find . -type f)

Selecting files to tar with specified size limit

I have a directory with files 1.csv, 2.csv, 3.csv, ...
I'm using tar to archive the files:
tar -cvf file1.tar *.csv
and gzip to compress the archive:
gzip file1.tar
The problem is that the zipped tar file is more than 25 MB, and because of the attachment size restriction I cannot mail it as an email attachment.
So I'm looking for a shell script that will create a tar.gz of up to 25 MB, and if the files don't fit, create another tar.gz with the rest of the files, and so on.
I don't want to split and unsplit the tar. Is there anything that can be done here?
What about using zip's -s option to split your tar file into 25 MB pieces?
zip -s 25m my_archive.zip my_archive.tar
You might want to try bzip2 compression instead of gzip. It is slower, but generally more efficient.
Command line would become:
tar -cvjf file1.tar.bz2 *.csv
You would then extract with the following command:
tar -xjvf file1.tar.bz2
Hard to tell if it will be enough to get under 25 MB, though.
1. Ascertain what compression factor you're getting from gzip.
2. Tar only enough of your CSV files to produce a zipped archive smaller than 25 MB.
3. Repeat as necessary.
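Those steps can be sketched as a greedy shell loop. The 25 MB cap, the partN.tar.gz names, and the probe-archive trick are all assumptions, and a single file that compresses to more than 25 MB on its own will still end up in its own oversized part:

```shell
cd "$(mktemp -d)"

# Demo data: a few small CSV files standing in for 1.csv, 2.csv, ...
for i in 1 2 3 4; do
    printf 'col_a,col_b\n%s,%s\n' "$i" "$i" > "$i.csv"
done

limit=$((25 * 1024 * 1024))   # 25 MB cap per archive (assumed)
batch=1
current=""                    # files accumulated for the current batch

flush() {
    if [ -n "$current" ]; then
        # Filenames here contain no spaces, so unquoted expansion is safe.
        tar -czf "part$batch.tar.gz" $current
    fi
}

for f in *.csv; do
    trial="$current $f"
    # Probe: compress the candidate batch and measure the result.
    tar -czf probe.tar.gz $trial
    size=$(( $(wc -c < probe.tar.gz) ))
    if [ "$size" -gt "$limit" ] && [ -n "$current" ]; then
        flush                 # finalize the batch without this file
        batch=$((batch + 1))
        current="$f"
    else
        current="$trial"
    fi
done
flush                         # finalize the last batch
rm -f probe.tar.gz
```

Re-compressing the whole batch for each probe is wasteful for large inputs; estimating sizes from the compression factor in step 1 and only verifying the final archives would be cheaper, at the cost of occasionally overshooting.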