Using results from grep to write results line by line with sed

I am trying to take every file name in a directory that has the extension .text and write it to a file in that same directory, line by line, starting at line number 14.
This is what I have so far, but it doesn't work:
cp workDir | grep -r --include *.text | sed -i '14i' home.text
Any assistance is appreciated. Note: I am on Unix.

You can do the above task with the following shell command:
find workDir -name "*.text" >> home.text
This will do what you described in the comments.
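If you specifically need the list to start at line 14 of home.text, a hedged sketch is to use sed's r command, which appends a file's contents after a given line (this assumes GNU sed for -i and that home.text already has at least 13 lines):
find workDir -name "*.text" > /tmp/names.txt
sed -i '13r /tmp/names.txt' home.text
rm /tmp/names.txt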

cp workDir doesn't work because cp is for copying, as in cp source destination. Further explanation of cp can be read with man cp.
To achieve your goal, go to your directory with cd ~/path/to/workDir. There you can use the ls command and redirect its output, appending to your existing file with >>, for all files with the .text extension.
For example like this:
ls *.text >> home.text
This will append only the file names, line by line, to your home.text, without the leading ./ produced by the find command in the answer before.
Let me know if you'd like a different format for the file names you want to append.
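For example, if you would rather append absolute paths, a small sketch using the shell's own expansion:
printf '%s\n' "$PWD"/*.text >> home.text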

Related

Sed * only modifying first file

I would like to delete the first 40 lines of a good number of ASCII files and save those files without the 40 lines.
I'm working under OSX High Sierra and realized that the -i option in sed does not work unless I create a backup file, so I tried this command:
sed -i'backup' -e '1,40d' *.txt
However, it only deletes the first 40 lines of my first file (alphabetically), not the others.
How can I edit multiple files with just one command?
Thanks
You can use the following command, which will:
- look in the current folder
- ignore subfolders
- take into account only files whose names end in .txt
before executing the sed command.
Command:
find . -maxdepth 1 -type f -name '*.txt' -exec sed -i 'backup' -e '1,40d' {} \;
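If you don't want to keep the backup copies this creates, you can delete them afterwards. A sketch, assuming BSD sed on macOS, which appends the given suffix, so file.txt becomes file.txtbackup:
find . -maxdepth 1 -type f -name '*.txtbackup' -delete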

Command Line Mass Rename Jpg Files

I have a folder full of jpg files which all end with "-x-large.jpg". I would like to rename them all from the command line so that the -x-large goes away and the name just ends in .jpg.
So for example 123-x-large.jpg will become 123.jpg
Can someone tell me how I can do this with the ren command?
Thanks.
for img in *-x-large.jpg; do mv -i -v "$img" "${img%-x-large.jpg}.jpg"; done
This loops on all matching images and moves them into a new file with a truncated name (removing -x-large.jpg from the end) with the .jpg added back to the end of the file name. I'm invoking this interactively with mv -i so you are prompted before overwriting each file. To force overwriting (always say "yes"), change that to mv (remove the -i). To prevent overwriting (always say "no"), change that to mv -n.
Remove the -v (verbose) if you don't want to see each rename happen.
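If you'd like to preview the renames before committing to them, a safe sketch is to prefix mv with echo so the commands are printed instead of executed:
for img in *-x-large.jpg; do echo mv -i -v "$img" "${img%-x-large.jpg}.jpg"; done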
If you have a very large number of these files, the command line will be too long for the above command (since *-x-large.jpg will be expanded onto a command line). You can work around that with find and xargs as follows:
sh <(find . -maxdepth 1 -name '*-x-large.jpg' \
|sed -r 's/(.*)(-x-large.jpg)$/mv -i "\1\2" "\1.jpg"/')
This creates a shell script using bash process substitution, using find to generate a list of all files we want to rename and then piping them through sed to create the mv commands.
(See above for the mv flags. I removed -v because presumably this will be a very long list.)
See the version below if you want to check the script before running it.
The above one-liner requires GNU bash or Korn shell (ksh) as well as GNU sed.
Here's how to do it with neither (in three commands):
find . -maxdepth 1 -name '*-x-large.jpg' \
|sed 's/.*/mv "&" "&/; s/-x-large.jpg$/.jpg"/' > temp.sh
sh temp.sh
rm temp.sh
POSIX sed doesn't support -r to invoke ERE, and capture-group handling is less portable across older implementations, so this version simply writes most of the command and then fixes the ending (the missing trailing double quote in the first replacement is intentional; we add it in the second replacement). POSIX shell (/bin/sh proper) doesn't support process substitution, so we dump to a temporary file, run it, and then remove it.
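As a worked example, if the current directory held two hypothetical files 123-x-large.jpg and 456-x-large.jpg, temp.sh would contain:
mv "./123-x-large.jpg" "./123.jpg"
mv "./456-x-large.jpg" "./456.jpg"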
If we're referring to the Windows command line, then SET /? is your friend. Loads of good info in there.
rem enable !VAR! expansion so variables set inside the loop are re-read each iteration
setlocal ENABLEDELAYEDEXPANSION
set SEARCH_SUFFIX=-x-large.jpg
set REPLACE_SUFFIX=.jpg
rem loop over all matching files; %%~nxA expands to the file name plus extension
for %%A in ("*%SEARCH_SUFFIX%") do (
    set OLD_NAME=%%~nxA
    rem string substitution: swap the search suffix for the replacement suffix
    set NEW_NAME=!OLD_NAME:%SEARCH_SUFFIX%=%REPLACE_SUFFIX%!
    ren "!OLD_NAME!" "!NEW_NAME!"
)
endlocal

Perl using the -i option on a vboxsf share: Can't remove input_file Text file busy, skipping file

System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I am trying to use perl like sed in the terminal for in-place substitution of an input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With all other tools (sed, mv, vim, or whatever) there is no problem changing the file.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message that gedit shows.
So it is the same behavior, but googling around I can only find the gedit topic. It seems no one has noticed this with perl -i.
While you are running a Unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files the way Unix file systems do, and perl -i requires support for anonymous files.
The workaround is to use a temporary file by specifying -i<ext> (e.g. -i~) instead of plain -i.
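For example, using ~ as the backup extension:
perl -i~ -p -e 's/orig/replace/g' input_file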
I had the same problem. My solution is a bash script: copy the files to a tmp directory, search and replace there, overwrite the original files with the edited tmp files, then delete the tmp directory. If you need to, you can add parameters to the script for dynamic search & replace, and create an alias so you can call the script directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir /tmp/myTmpFiles/
echo "Copy .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Search and Replace in tmp-files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copy .log to sharedfolder"
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Remove tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."

how to print the progress of the files being copied in bash [duplicate]

I suppose I could compare the number of files in the source directory to the number of files in the target directory as cp progresses, or perhaps do it with folder size instead? I tried to find examples, but all bash progress bars seem to be written for copying single files. I want to copy a bunch of files (or a directory, if the former is not possible).
You can also use rsync instead of cp like this:
rsync -Pa source destination
This will give you a progress bar and an estimated time of completion. Very handy.
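If your rsync is version 3.1 or newer, you can also ask for a single overall progress figure instead of per-file output:
rsync -a --info=progress2 source destination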
To show a progress bar while doing a recursive copy of files & folders & subfolders (including links and file attributes), you can use gcp (easily installed in Ubuntu and Debian by running "sudo apt-get install gcp"):
gcp -rf SRC DEST
Here is the typical output while copying a large folder of files:
Copying 1.33 GiB 73% |##################### | 230.19 M/s ETA: 00:00:07
Notice that it shows just one progress bar for the whole operation, whereas if you want a single progress bar per file, you can use rsync:
rsync -ah --progress SRC DEST
You may have a look at the tool vcp. That's a simple copy tool with two progress bars: one for the current file and one for the overall operation.
EDIT
Here is the link to the sources: http://members.iinet.net.au/~lynx/vcp/
Manpage can be found here: http://linux.die.net/man/1/vcp
Most distributions have a package for it.
Here's another solution: use the tool bar.
You could invoke it like this:
#!/bin/bash
# copy the contents of directory $1 into directory $2, with an overall progress bar
filesize=$(du -sb "${1}" | awk '{ print $1 }')
tar -cf - -C "${1}" ./ | bar --size "${filesize}" | tar -xf - -C "${2}"
You have to go through tar, and it will be inaccurate for small files. Also, you must make sure that the target directory exists. But it is a way.
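For example, saving the script above as copybar.sh (a hypothetical name) and making sure the target exists first:
mkdir -p /backup/photos
./copybar.sh /home/user/photos /backup/photos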
My preferred option is Advanced Copy, as it uses the original cp source files.
$ wget http://ftp.gnu.org/gnu/coreutils/coreutils-8.21.tar.xz
$ tar xvJf coreutils-8.21.tar.xz
$ cd coreutils-8.21/
$ wget --no-check-certificate https://raw.githubusercontent.com/jarun/advcpmv/master/advcpmv-0.8-8.32.patch
$ patch -p1 -i advcpmv-0.8-8.32.patch
$ ./configure
$ make
The new programs are now located in src/cp and src/mv. You may choose to replace your existing commands:
$ sudo cp src/cp /usr/local/bin/cp
$ sudo cp src/mv /usr/local/bin/mv
Then you can use cp as usual, or specify -g to show the progress bar:
$ cp -g src dest
A simple Unix way is to go to the destination directory and run watch -n 5 du -s . You could make it prettier by rendering the size as a bar. This can help in environments where you have only the standard Unix utilities and no scope for installing additional tools. du -s is the key; watch just reruns it every 5 seconds.
Pros: works on any Unix system. Cons: no progress bar.
To add another option, you can use cpv. It uses pv to imitate the usage of cp.
It works like pv, but you can use it to recursively copy directories.
You can get it here
There's a tool, pv, to do this exact thing: http://www.ivarch.com/programs/pv.shtml
There's an Ubuntu package for it in apt.
How about something like
find . -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /DEST/$(dirname {})
It finds all the files in the current directory, pipes that list through pv (giving pv an estimated size so the progress meter works), and then pipes it to a cp command with the --parents flag so the DEST path matches the SRC path.
One problem I have yet to overcome is that if you issue this command
find /home/user/test -type f | pv -s $(find . -type f | wc -c) | xargs -i cp {} --parents /www/test/$(dirname {})
the destination path becomes /www/test/home/user/test/....FILES... and I am unsure how to tell the command to get rid of the '/home/user/test' part. That's why I have to run it from inside the SRC directory.
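One hedged workaround is to run the whole pipeline from a subshell, so the paths find produces stay relative and nothing extra is appended to DEST:
(cd /home/user/test && find . -type f | pv -s "$(find . -type f | wc -c)" | xargs -i cp --parents {} /www/test/)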
Check the source code for progress_bar in my git repository below:
https://github.com/Kiran-Bose/supreme
Also try my custom bash script package supreme to see how the progress bar works with the cp and mv commands.
Functionality overview
(1) Open Apps
----Firefox
----Calculator
----Settings
(2) Manage Files
----Search
----Navigate
----Quick access
|----Select File(s)
|----Inverse Selection
|----Make directory
|----Make file
|----Open
|----Copy
|----Move
|----Delete
|----Rename
|----Send to Device
|----Properties
(3) Manage Phone
----Move/Copy from phone
----Move/Copy to phone
----Sync folders
(4) Manage USB
----Move/Copy from USB
----Move/Copy to USB
There is the command progress, https://github.com/Xfennec/progress, a coreutils progress viewer.
Just run progress in another terminal to see the copy/move progress. For continuous monitoring, use the -M flag.
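For example, to start a copy in the background and monitor just that process ($! is the PID of the backgrounded cp; a sketch using the -m and -p flags):
cp -r src dest & progress -mp $!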

Appending URL to output file with wget

I'm using wget to read a batch of URLs from an input file and download everything to a single output file, and I'd like to append each URL before its downloaded content. Does anyone know how to do that?
Thanks!
afaik wget does not directly support the use case you are envisioning. however, using standard tools, you can emulate this feature.
we will proceed as follows:
call wget with logging enabled
let sed process the log executing the script detailed below
execute the transformation result as a shell/batch script
conventions: use the following filenames:
wgetin.txt: the file with the urls to fetch using wget
wgetout.sed: sed script
wgetout.final: the final result
wgetass.sh/.cmd: shell/batch script to assemble the downloaded files weaving in the url data
wget.log: the log file of the wget call
Linux
the sed script (linux):
# delete lines _not_ matching the regex
/^\(Saving to: .\|--[0-9: \-]\+-- \)/! { d; }
# turn remaining content into something else
s/^--[0-9: \-]\+-- \(.*\)$/echo '\1\n' >>wgetout.final/
s/^Saving to: .\(.*\).$/cat '\1' >>wgetout.final/
the command line (linux):
rm -f wgetout.final wgetass.sh; wget -i wgetin.txt -o wget.log && sed -f wgetout.sed wget.log >wgetass.sh && chmod 755 wgetass.sh && ./wgetass.sh
Windows
the syntax for windows batch scripts is slightly different. of course, the windows ports of wget and sed have to be installed first.
the sed script (windows):
# delete lines _not_ matching the regex
/^\(Saving to: .\|--[0-9: \-]\+-- \)/! { d; }
# turn remaining content into something else
s/^--[0-9: \-]\+-- \(.*\)$/echo "\1" >>wgetout.final/
s/^Saving to: .\(.*\).$/type "\1" >>wgetout.final/
the command line (windows):
del wgetout.final && del wgetass.cmd && wget -i wgetin.txt -o wget.log && sed -f wgetout.sed wget.log >wgetass.cmd && wgetass.cmd