I want to create or rename my image or data folders according to time and date, with a name in hexadecimal format.
date +%y%m%d
echo -n "date '+%y%m%d'/{1,2,3} "
mkdir obase=16; 'date '+%y%m%d'' | bc
mkdir -p echo -n 'date '+%y%m%d'/{1,2,3} ' | od -A n -t x1
Nothing works :(
x=$(date +%y%m%d%H%M) && y=$(echo "obase=16; $x" | bc) && mkdir "$y"
dir
83CB3212
You can also use mv to rename the folder.
Note: if the target folder already exists, mv moves the source folder into it instead of renaming it.
I have a lot of outputs from my Matlab runs in .csv format. I would like to combine them into one output.csv file.
My idea was to use the .csv files created by Matlab as input for my global output.csv.
#!/bin/ksh -p
# Reading results from results.csv
echo Study name?
read NAME
cd PROJECTS/04_${NAME}
sed -i 's/\r//g' Results.csv
while IFS=";" read -r R1 R2 R3
do
echo $R1
echo $R2
echo $R3
while IFS=";" write -r var1 var2 var3 var4
do
var1=$NAME
var2=$R1
var3=$R2
var4=$R3
done > >(tail -n +2 /PROJECTS/teste_output.csv)
done < <(tail /PROJECTS/04_${NAME}/Results.csv)
Each Results.csv is in this format:
2.1680114865303;0;-0.00516967741714325
Using my code for one specific file I get:
2.1680114865303
0
-0.00516967741714325
That means it's doing the first part, but it's not writing to my output.csv.
So I would like to know how to do the writing in this case. Is it possible to read more than one .csv at the same time?
In my dreams I would like to have code with one more while loop to read a list of files, get each Results.csv, and write it into output.csv:
#!/bin/ksh -p
# Reading results from results.csv
while IFS=";" read -r NAME c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11
do
cd PROJECTS/04_${NAME}
sed -i 's/\r//g' Results.csv
while IFS=";" read -r R1 R2 R3
do
echo $R1
echo $R2
echo $R3
while IFS=";" write -r var1 var2 var3 var4
do
var1=$NAME
var2=$R1
var3=$R2
var4=$R3
done > >(tail -n +2 /PROJECTS/teste_output.csv)
done < <(tail /PROJECTS/04_${NAME}/Results.csv)
done < <(tail -n +2 /PROJECTS/input.csv)
So I read a list of files from my input.csv, get NAME, get the results for that NAME, and put them into my global output.csv.
For now my code is able to read the list and read the results from Matlab (Results.csv), but it's not writing to my output.csv. If it's easier, I could split this into 2 or 3 bash scripts and do it step by step.
I already tried with bash and ksh, but neither worked.
Thanks for your help in advance :)
Is this what you are trying to accomplish?
(Making a few guesses...)
read -p "Study name? " name
in=PROJECTS/04_"${name}"/Results.csv
out=/PROJECTS/teste_output.csv
sed -i 's/\r//g' "$in"
printf "Study," > "$out"
head -1 "in" >> "$out"
while read -r line || [[ -n "$line" ]]
do echo "$name;$line"
done < <(tail -n +2 "$in") >> "$out"
I switched your input and output; sorry if that's not what you meant.
This doesn't parse the fields; it just leaves the separators in place, prepends the study name, and does it all in one loop.
Again, apologies if I misread your intent.
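To cover the "one more while" from the question (looping over input.csv), the same approach can be nested. A rough sketch, assuming input.csv has the study name in its first ;-separated field plus a header line to skip, and that each Results.csv has no header (as in the sample shown); the R1;R2;R3 column names are made up:
#!/bin/bash
out=/PROJECTS/teste_output.csv
printf "Study;R1;R2;R3\n" > "$out"           # header for the combined file (column names assumed)
while IFS=";" read -r name rest              # only the first field (NAME) is used, the rest is ignored
do
    in=PROJECTS/04_"${name}"/Results.csv     # adjust paths to your layout
    sed -i 's/\r//g' "$in"                   # strip Windows line endings, as in the question
    while read -r line || [[ -n "$line" ]]
    do echo "$name;$line"                    # prepend the study name to each result row
    done < "$in"
done < <(tail -n +2 /PROJECTS/input.csv) >> "$out"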
I am able to get the file name of a last created/modified file in a current directory with this command:
ls -t | head -n1
Then I use the obtained file name with the mv command to move the file to a directory, and I'm trying to do it like this:
mv $(ls -t | head -n1) directory/
But it doesn't move the file.
What am I doing wrong?
Maybe like this:
mv "$(ls -t | head -n1)" directory/
I can't boot my Windows PC today and I am on my second OS, Linux Mint. With my limited knowledge of Linux and shell scripts, I really don't have an idea how to do this.
I have a bunch of files in a directory, generated by my system, and I need to remove the last 12 characters to the left of ".txt".
Sample filenames:
filename1--2c4wRK77Wk.txt
filename2-2ZUX3j6WLiQ.txt
filename3-8MJT42wEGqQ.txt
filename4-sQ5Q1-l3ozU.txt
filename5--Way7CDEyAI.txt
Desired result:
filename1.txt
filename2.txt
filename3.txt
filename4.txt
filename5.txt
Any help would be greatly appreciated.
Here is a programmatic way of doing this while still trying to account for pesky edge cases:
#!/bin/sh
set -e
find . -name "filename*" > /tmp/filenames.list
while read -r FILENAME; do
NEW_FILENAME="$(
echo "$FILENAME" | \
awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}' | \
awk -F '/' '{print $NF}' | \
awk -F '-' '{print $1}'
)"
EXTENSION="$(echo "$FILENAME" | awk -F '.' '{print $NF}')"
if [[ "$EXTENSION" == "backup" ]]; then
continue
else
cp "$FILENAME" "${FILENAME}.backup"
fi
if [[ -z "$EXTENSION" ]]; then
mv "$FILENAME" "$NEW_FILENAME"
else
mv "$FILENAME" "${NEW_FILENAME}.${EXTENSION}"
fi
done < /tmp/filenames.list
Create a List of Files to Edit
First up, create a list of the files that you would like to edit (assuming that they all start with filename) under the current working directory (.):
find . -name "filename*" > /tmp/filenames.list
If they don't start with filename, fret not; you could always use a find command like:
find . -type f > /tmp/filenames.list
Iterate over a list of files
To accomplish this we use a while read loop:
while read -r LINE; do
# perform action
done < file
If you have the ability to use bash, you could instead use process substitution:
while read -r LINE; do
# perform action
done < <(
find . -type f
)
Create a rename variable
Next, we create a variable NEW_FILENAME and using awk we strip off the file extension and any trailing spaces using:
awk -F '.' '{$NF=""; gsub(/ /, "", $0); print}'
We could just use the following though if you know for certain that there aren't multiple periods in the filename:
awk -F '.' '{print $1}'
The leading ./ is stripped off via
awk -F '/' '{print $NF}'
although this could have been done just as easily with basename.
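For example, with one of the sample names:
basename "./filename1--2c4wRK77Wk.txt"         # filename1--2c4wRK77Wk.txt
basename "./filename1--2c4wRK77Wk.txt" .txt    # filename1--2c4wRK77Wk  (suffix stripped as well)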
With the following command, we strip everything after the first -:
awk -F '-' '{print $1}'
Creating backups
Feel free to remove this if you deem unnecessary:
if [[ "$EXTENSION" == "backup" ]]; then
continue
else
cp "$FILENAME" "${FILENAME}.backup"
fi
One thing that we definitely don't want is to make backups of backups. The above logic accounts for this.
Renaming the files
One thing that we don't want to do is append a period to a filename that doesn't have an extension. This accounts for that.
if [[ -z "$EXTENSION" ]]; then
mv "$FILENAME" "$NEW_FILENAME"
else
mv "$FILENAME" "${NEW_FILENAME}.${EXTENSION}"
fi
Other things of note
Odds are that your Linux Mint installation has a bash shell, so you could simplify some of these commands. For instance, you could use parameter expansion: echo "$FILENAME" | awk -F '.' '{print $NF}' would become "${FILENAME##*.}"
[[ is not defined in POSIX sh so you will likely just need to replace [[ with [, but review this document first:
https://mywiki.wooledge.org/BashFAQ/031
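For instance, a bash-only sketch of the renaming logic using parameter expansion instead of the awk pipelines (the sample value is hard-coded here just for illustration):
#!/bin/bash
filename="./filename1--2c4wRK77Wk.txt"   # in the real script this would come from the find loop
extension="${filename##*.}"              # txt: everything after the last dot
name="${filename%.*}"                    # ./filename1--2c4wRK77Wk: extension removed
name="${name##*/}"                       # filename1--2c4wRK77Wk: leading path removed (like basename)
name="${name%%-*}"                       # filename1: everything from the first '-' removed
echo "would rename to: ${name}.${extension}"   # prints: would rename to: filename1.txt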
From the pattern of the filenames, it looks like the first token before "-" can be picked from each filename. Use the following command to rename these files, after changing to the directory where the files are located:
for srcFile in `ls -1`; do fileN=`echo $srcFile | cut -d"-" -f1`; targetFile="$fileN.txt"; mv $srcFile $targetFile; done
If the above observation is wrong, the following command can be used to remove exactly the 12 characters before .txt (4 chars):
for srcFile in `ls -1`; do fileN=`echo $srcFile | rev | cut -c17- | rev`; targetFile="$fileN.txt"; mv $srcFile $targetFile; done
A pattern can be added to ls -1 to filter files from the current directory if that is required.
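For example, to restrict the loop to .txt files only, a glob pattern can do the filtering instead of parsing ls (same first-token logic as above):
for srcFile in *.txt; do fileN=`echo "$srcFile" | cut -d"-" -f1`; mv "$srcFile" "$fileN.txt"; done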
I want to rename all the files in my home directory (for example abc to abc_bkp) without using any loops, and it should be a single-line command in Unix (bash).
If the directory contains nothing but files, this should do it:
ls | xargs -I {} mv {} {}_bkp
If it contains subdirectories, links, and other things you don't want to rename, you must filter the output of ls. Here is a crude way to do it; maybe someone can suggest a more elegant approach:
ls -l | grep ^- | cut -d' ' -f 13 | xargs -I {} mv {} {}_bkp
If you don't want to use loops, then I believe the BEST way would be the find command. Try the following command as a DRY run first; once you are satisfied with the results, remove the echo from it to give it a real shot.
find -type f -or -type d | xargs -I % echo mv % %_bkp
-I: From the man xargs page:
-I replace-str
    Replace occurrences of replace-str in the initial-arguments with names read from standard input. Also, unquoted blanks do not terminate input items; instead the separator is the newline character. Implies -x and -L 1.
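For example, a dry run and the real run could look like this (a sketch restricted to regular files directly in the current directory via -maxdepth 1, a GNU find option):
find . -maxdepth 1 -type f | xargs -I % echo mv % %_bkp   # prints the mv commands only
find . -maxdepth 1 -type f | xargs -I % mv % %_bkp        # actually renames the files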
How can I compare a tar file (already compressed) of the original folder with the original folder?
First I created archive file using
tar -kzcvf directory_name.zip directory_name
Then I tried to compare using
tar -diff -vf directory_name.zip directory_name
But it didn't work.
--compare (-d) is more handy for that.
tar --compare --file=archive-file.tar
works if archive-file.tar is in the directory where it was created. To compare archive-file.tar against another location (e.g. if you have moved archive-file.tar to /some/where/), use the -C parameter:
tar --compare --file=archive-file.tar -C /some/where/
If you want to see tar working, use -v; without -v, only errors (missing files/folders) are reported.
Tip: this works with compressed tar.bz2/tar.gz archives, too.
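For example (backup.tar.gz is a placeholder name; recent GNU tar detects the compression automatically when reading, older versions may need an explicit -z or -j):
tar --compare --file=backup.tar.gz -C /some/where/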
It should be --diff
Try this (without the last directory_name):
tar --diff -vf directory_name.zip
The problem is that --diff only looks for differences in the files that exist in both the tar file and the folder. So, if a new file is added to the folder, --diff does not report it.
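One way to catch files that exist only in the folder is to compare the name lists instead; a rough sketch in bash, assuming the archive was created from directory_name as in the question:
# prints names present on disk but missing from the archive
comm -13 <(tar -tf directory_name.zip | sed 's|/$||' | sort) <(find directory_name | sort)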
The method of pix is very slow for large compressed tar files, because it extracts each file individually. I use the tar --diff method to look for files with a different modification time, and extract and diff only those. The files are extracted into a folder base.orig, where base is either the top-level folder of the tar file or the given comparison folder. This results in diffs that include the date of the original file.
Here is the script:
#!/bin/bash
set -o nounset
# Print usage
if [ "$#" -lt 1 ] ; then
echo 'Diff a tar (or compressed tar) file with a folder'
echo 'difftar-folder.sh <tarfile> [<folder>] [strip]'
echo default for folder is .
echo default for strip is 0.
echo 'strip must be 0 or 1.'
exit 1
fi
# Parse parameters
tarfile=$1
if [ "$#" -ge 2 ] ; then
folder=$2
else
folder=.
fi
if [ "$#" -ge 3 ] ; then
strip=$3
else
strip=0
fi
# Get path prefix if --strip is used
if [ "$strip" -gt 0 ] ; then
prefix=`tar -t -f $tarfile | head -1`
else
prefix=
fi
# Original folder
if [ "$strip" -gt 0 ] ; then
orig=${prefix%/}.orig
elif [ "$folder" = "." ] ; then
orig=${tarfile##*/}
orig=./${orig%%.tar*}.orig
elif [ "$folder" = "" ] ; then
orig=${tarfile##*/}
orig=${orig%%.tar*}.orig
else
orig=$folder.orig
fi
echo $orig
mkdir -p "$orig"
# Make sure tar uses english output (for Mod time differs)
export LC_ALL=C
# Search all files with a deviating modification time using tar --diff
tar --diff -a -f "$tarfile" --strip $strip --directory "$folder" | grep "Mod time differs" | while read -r file ; do
# Substitute ': Mod time differs' with nothing
file=${file/: Mod time differs/}
# Check if file exists
if [ -f "$folder/$file" ] ; then
# Extract original file
tar -x -a -f "$tarfile" --strip $strip --directory "$orig" "$prefix$file"
# Compute diff
diff -u "$orig/$file" "$folder/$file"
fi
done
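A typical call might look like this (backup.tar.gz and work are placeholder names):
./difftar-folder.sh backup.tar.gz            # compare against the current directory
./difftar-folder.sh backup.tar.gz work 1     # compare against ./work, stripping the top-level folder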
To ignore differences in some or all of the metadata (user, time, permissions), you can pipe the result to awk:
tar --compare --file=archive-file.tar -C /some/where/ | awk '!/Mode/ && !/Uid/ && !/Gid/ && !/time/'
That should output only the true differences between the tar and the directory /some/where/
I recently needed a better compare than what "tar --diff" produced so I made this short script:
#!/bin/bash
tar tf "$1" | while read ; do
if [ "${REPLY%/}" = "$REPLY" ] ; then
tar xOf "$1" "$REPLY" | diff -u - "$REPLY"
fi
done
The easy way is to write:
tar df file
This compares the archive with the current working directory and tells us whether any of the files have been removed.
tar df file -C path/folder
This compares the archive with the given folder.