Copy multiple files to different directories in Makefile

I have a Makefile where I currently have two files that should be copied to different directories. So far I have tested
echo ${dirs} | xargs -n 1 cp ${sources}
I understand that this will not work, since it tries to copy both source files to one of the directories each time. But is there a way to run the copy command once per source file and destination directory?
Best regards,
Simon

I think it is possible to deduce what you want from what you wrote, but as others pointed out, you should be more explicit, so we don't have to spend time deducing it.
Anyway, since you don't want to copy every file to every directory, you must somehow tell Make which files go where. The easiest way is to list the full paths of the copies you want in a variable such as $(COPIES), rather than just ${dirs}. In this answer I am going to assume the destination directories already exist.
.PHONY: all
all: $(COPIES)

PERCENT := %
.SECONDEXPANSION:
$(COPIES): %: $$(filter $$(PERCENT)/$$(notdir $$*), $(sources)) Makefile
	cp $< $@
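For concreteness, the two variables could look something like this near the top of the Makefile (the file names and directories are made up for illustration). With these values, the pattern above finds, for each destination path in $(COPIES), the entry in $(sources) whose file name matches, and copies it there:
sources := src/foo.txt src/bar.txt
COPIES  := dir1/foo.txt dir2/bar.txt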

Why does Yocto use absolute paths in TMPDIR?

Changing the path of a Yocto environment is not a good idea, as I found out. This also explains why e.g. bitbake can be run regardless of the current working directory: absolute paths are stored in many places during the build process, and even subdirectory structures containing them are created in the tmp directory tree. I ended up rebuilding from scratch, which takes a long time.
For documentation, here is how I tried to modify all the paths:
find . -name '*.conf' -exec sed -i 's/media\/rob\/3210bcd4-49ef-473e-97a6-e4b7a2c1973e/home/g' {} +
This step replaces the absolute paths inside the many generated .conf files (from xx/xx/linux to /home/linux, where linux was chosen for historical reasons; I could just as well have mounted the partition as /home/yocto or any other name).
Next I deleted the subdirectory structures containing the old path, in the hope that the build process would recognize these deletions and still rebuild quickly:
find . -name '*3210bcd4-49ef-473e-97a6-e4b7a2c1973e*' -exec fakeroot rm -r {} +
It was not recognized, so at that point I gave up.
This comes from a user new to Yocto, familiar with former/classic cross-build environments based on make menuconfig etc.
My question is:
Why are absolute paths generated & used throughout tmp instead of treating everything as relative?
Or, asked differently:
Why not use something like ${TOPDIR}/tmp throughout the build configuration, instead of hardcoding the absolute path to tmp?

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=x will skip that many leading remote directory components when saving files.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my list-of-files-to-download.txt has these kinds of URLs in it:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget sees that we already have a picture-same-name.jpg, so it won't download the second one or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need -N. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of url's the way I do now, keeping my parameters.
b.- get all files at the same directory and being able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the "-nc" or "--no-clobber" option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
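For reference, the first approach would look something like this on your original command line; note that -nc replaces -N here, since the two options cannot be combined:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -nc -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt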
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script:
while read -r i; do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check whether that file already exists on disk, and then decide whether to rename the temporary file to the name you want.
This gives you much more control over your downloads.
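A minimal sketch of that idea, assuming the unique local name can simply be derived from the URL path. The directory, flag values and derived names here are illustrative, not part of your original command, and -N is left out because it cannot be combined with -O, so the "is it newer" check is approximated by comparing contents:
#!/usr/bin/env bash
# Sketch: download each URL to a name derived from its path so that
# files sharing the same basename do not collide.
outdir=/directory/for/downloaded/files
mkdir -p "$outdir"

while read -r url; do
    [ -z "$url" ] && continue
    # e.g. http://website/directory2/picture-same-name.jpg -> directory2_picture-same-name.jpg
    name=$(printf '%s\n' "$url" | sed -e 's|^[a-z]*://[^/]*/||' -e 's|/|_|g')
    tmp=$(mktemp)
    if wget --user-agent='some-agent' --referer=http://some-referrer.html \
            --timeout=30 -O "$tmp" "$url"; then
        # Keep the new copy only if there is no local copy yet, or it differs.
        if [ ! -f "$outdir/$name" ] || ! cmp -s "$tmp" "$outdir/$name"; then
            mv "$tmp" "$outdir/$name"
        else
            rm -f "$tmp"
        fi
    else
        rm -f "$tmp"
    fi
done < list-of-files-to-download.txt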

GNU make: Can I delete obsolete files?

In my workflow, I have lots of xxx.smr files in a folder, and I need to convert them into another file format, xxx_step3.mat, by importing some data from xxx_info.xlsx. I learned that GNU make is powerful for keeping all the files up to date.
In a very simple "explicit" form (without sophisticated wildcard usage), the Makefile for this process would look like the one below. To handle multiple xxx.smr files and their descendants, I should be able to extend it by modifying this file.
.PHONY: all clean
all: xxx_step3.mat

xxx_step3.mat: xxx_step2.mat xxx_info.xlsx
	matlab -r "merge2files('xxx_step2.mat', 'xxx_info.xlsx')"

xxx_step2.mat: xxx_step1.mat
	matlab -r "convertmat('xxx_step1.mat')"

xxx_info.xlsx: master.xlsx
	matlab -r "extractfromMasterxlsx('master.xlsx', 'xxx_info.xlsx')"

xxx_step1.mat: xxx_step0.smr
	@echo "\nCreate " $@
	# I can't do this step from the command line, so I just leave a message

clean:
	rm -f xxx_step1.mat xxx_step2.mat xxx_step3.mat xxx_info.xlsx
However, I realized that when some xxx.smr files are found to be surplus and deleted at some point, running GNU make with this Makefile does not delete the obsolete descendant files, i.e. all the intermediate files and the final xxx_step3.mat files that depend on the deleted xxx.smr files.
For example, I start with three xxx.smr files and run make:
A.smr, B.smr, C.smr
It will create all the descendants, including the final target files:
A_step3.mat, B_step3.mat, C_step3.mat
Later, say, I find that B.smr contains a fatal error and decide to delete it from the folder, leaving:
A.smr, C.smr
Running make at this stage results in ... no change, because both A_step3.mat and C_step3.mat are newer than their direct prerequisites (and newer than A.smr and C.smr). However, I actually need to remove all the descendants of B.smr, such as B_step1.mat, B_step2.mat, B_step3.mat, and B_info.xlsx. If those obsolete files are kept, the final target B_step3.mat will be included in the subsequent analyses and affect the results.
I wonder if there is a "smart" way of removing xxx_step1.mat, xxx_step2.mat, xxx_step3.mat, xxx_info.xlsx files, when their corresponding xxx.smr files have been deleted.
Or should I just implement this with MATLAB or Python etc?
Since Makefile recipes are just shell commands, your clean: target can collect and remove all the files that correspond to your xxx.smr files using a for loop and parameter expansion (substring matching). Find all xxx.smr files; then for each one, extract the xxx part and remove all matching xxx_step?.* and xxx_info.* files; finally, remove xxx.smr itself. In multi-line form it would look like:
for i in *.smr; do
    for j in "${i%.*}"; do
        rm -f "${j}"_step?.* "${j}"_info.*
    done
    rm -f "$i"
done
Or, in a single line:
for i in *.smr; do for j in "${i%.*}"; do rm -f "${j}"_step?.* "${j}"_info.*; done; rm -f "$i"; done
Note this will remove all xxx_step... and xxx_info... files for each xxx.smr file. Make sure this is what you intend and run on a test directory first. You can tighten the extensions above to just remove xxx_info.xlsx by replacing xxx_info.* with xxx_info.xlsx, etc...
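If you put this into the Makefile's clean target, keep in mind that make runs each recipe line in its own shell, so the loop has to stay on one logical line: either the single-line form above, or joined with backslashes, with each $ doubled to $$ for make. A sketch of what that could look like, under the same assumptions as above:
clean:
	for i in *.smr; do \
	    for j in "$${i%.*}"; do \
	        rm -f "$$j"_step?.* "$$j"_info.*; \
	    done; \
	    rm -f "$$i"; \
	done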

Shell Script to update the contents of a folder

I'm a beginner in Unix Shell Scripting and Perl Scripting.
I would like to see an example program that teaches me how to update the contents of files in a directory.
The scenario is: there is a directory containing some number n of files.
Among those n files, m files have been modified.
I need to update the contents of those modified files in the directory.
Could you give me a simple shell script to do this?
Thanks and Regards,
Vijay
I would do it with find like this:
find your_directory -newermt time_of_last_check -exec modify_script.sh {} \;
where:
your_directory is the directory where you have the files.
time_of_last_check is when you last ran this command, and
modify_script.sh is the program that you will run to modify the files; it should take one argument, the filename to modify.
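One way to keep track of time_of_last_check is a stamp file that you touch after every run, using find's -newer against it instead of spelling out a timestamp for -newermt. A sketch, where your_directory and modify_script.sh are the same placeholders as above:
#!/bin/sh
# Process files changed since the last run, then record this run's time.
stamp=.last_check
[ -f "$stamp" ] || touch -t 197001010000 "$stamp"   # first run: treat everything as changed
find your_directory -type f -newer "$stamp" -exec modify_script.sh {} \;
touch "$stamp"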
In Perl:
To update a file's contents, see perlfaq5; you will find a lot of information and examples about file manipulation there.
To get file or directory statistics, see Perl's built-in stat function.
To traverse a directory tree, see
File::Find

How can I copy a directory but ignore some files in Perl?

In my Perl code, I need to copy a directory from one location to another on the same host excluding some files/patterns (e.g. *.log, ./myDir/abc.cl).
What would be the optimum way of doing this in Perl across all the platforms?
On Windows, xcopy is one such solution. On unix platforms, is there a way to do this in Perl?
I think you're looking for rsync. It's not Perl, but it's going to work a lot better than anything you make in Perl:
% rsync --exclude='*.log' --exclude='./myDir/abc.cl' SOURCE DEST
If you have a bunch of patterns, you can put those all in a file:
*.log
./myDir/abc.cl
Then tell rsync to ignore all the patterns in that file:
% rsync --exclude-from=do_not_sync.txt SOURCE DEST
I'd use File::Find and step over each file; instead of calling File::Copy's copy() on every file unconditionally, first test whether it matches one of the patterns and skip it (next) if it does.
On *nix, you can use the native tar command with its --exclude option. After creating the tar file, you can bring it over to your destination and untar it there.
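For example, the two tar steps can be combined into a single pipeline without writing an intermediate archive; this is a sketch where SOURCE and DEST stand for your actual directories and the exclude patterns come from the question:
tar -C SOURCE --exclude='*.log' --exclude='./myDir/abc.cl' -cf - . | tar -C DEST -xf -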