Writing to /proc - linux-device-driver

I have an FPGA setup that is connected to a folder within /proc. I need to write to this file, but when I do, the file size ends up being 0 and the file is not written, though no error is issued. Oddly, this behavior does not occur with scp.
I can echo to the file successfully: echo -ne "\000\000\000\000" > /proc/file
I can scp a file from a remote machine to /proc/file
I cannot copy a local file to it: cp localfile /proc/file
sftp also gives a 0 file size
My question is: what is different between cp, scp, and sftp, probably at a pretty low level, such that one works and the others don't?
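One way to dig into this (a sketch; it assumes strace is available and reuses the same localfile and /proc/file paths from above) is to trace the system calls each tool issues against the destination and compare them:
# trace what cp does to the destination
strace -f -e trace=openat,write,ftruncate,copy_file_range,sendfile cp localfile /proc/file 2> cp.trace
# compare with a plain redirect, which behaves like the echo above
strace -f -e trace=openat,write sh -c 'cat localfile > /proc/file' 2> cat.trace
Comparing the two traces should show exactly which call sequence the /proc entry accepts and which it silently ignores.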

Related

How to download all bucket files (the issue with the gsutil -m flag)

I am trying to copy all files from a Cloud Storage bucket recursively, and as far as I have investigated the problem is with the -m flag.
The command that I am running
gsutil -m cp -r gs://{{ src_bucket }} {{ bucket_backup }}
I am getting something like this:
CommandException: 1 file/object could not be transferred.
where the number of files/objects differs every time.
After investigating, I have tried to reduce the number of threads/processes used with the -m option, but this has not helped, so I am looking for some advice. I have 170 MiB of data in the bucket, which is approximately 300k files. I need to download them as fast as possible.
UPD:
Logs with -L flag
[Errno 2] No such file or directory: '<path>/en_.gstmp' -> '<path>/en'
6 errors like that.
The root of the issue might be that both a directory and a file of the same name exist in the GCS bucket. Try executing the command with the -L flag, so you will get additional logs on the execution and will be able to find the file that is causing this error.
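For example (the manifest file name is just a placeholder):
gsutil -m cp -L transfer_manifest.log -r gs://{{ src_bucket }} {{ bucket_backup }}
The manifest log lists each transferred object along with any error message, which should point at the offending path.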
I would suggest you delete that file, make sure there is no directory of that name in the bucket, and then upload the file to the bucket again.
Also check whether any directories were created with the same name (e.g., the JAR name); delete them and then proceed with copying the files.
Also check whether the required file already exists at the destination; if so, delete it at the destination and execute the copy again.
There are alternatives to cp; for example, it is possible to transfer files using rsync, as described here.
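A minimal sketch of that alternative, keeping the same placeholder bucket names as above:
gsutil -m rsync -r gs://{{ src_bucket }} {{ bucket_backup }}
rsync only transfers objects that are missing or changed at the destination, so an interrupted run can simply be repeated.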
You can also check similar threads: thread1, thread2 & thread3

How to make a file executable using Makefile

I want to copy a particular file using Makefile and then make this file executable. How can this be done?
The file I want to copy is a .pl file.
For copying I am using the general cp -rp command. This is done successfully. But now I want to make this file executable using the Makefile.
It's bad practice to use cp and chmod; use the install command instead.
all:
	install -m 0777 hello ../hello
You can use the -m option with install to set the permission mode, and note that install can set not only the permissions but also the owner and group of the file (via its -o and -g options).
You can still use cp and chmod accordingly, but it would be bad practice:
all:
	cp hello ../hello
	chmod +x ../hello
Update: install vs cp
cp simply copies files with their current permissions; install not only copies, but can also change permissions/ownership via argument flags. (This is what your requirement was.)
One significant difference is that cp truncates the destination file and starts copying data from the source into the destination file. install, on the other hand, removes the destination file first.
This is significant because if the destination file is already in use, bad things could happen to whoever is using that file when you cp a new file on top of it. For example, overwriting an executable that is running might fail. Truncating a data file that an existing process is busy reading/writing could cause pretty weird behavior. If you just remove the destination file first, as install does, things continue much as normal: the removed file isn't actually removed until all processes close it. [source]
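A quick way to observe this difference yourself (a sketch using the same hello example; the mode is illustrative):
ls -i ../hello           # note the inode number
cp hello ../hello
ls -i ../hello           # same inode: cp truncated and rewrote the existing file
install -m 0755 hello ../hello
ls -i ../hello           # different inode: install removed the old file and created a new one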
For more details, check these:
install vs. cp; and mmap
How is install -c different from cp
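Tying this back to the .pl file mentioned in the question, a minimal Makefile sketch could look like this (the script name and destination are placeholders, and recipe lines must be indented with a tab):
SCRIPT := myscript.pl
DEST   := /usr/local/bin/myscript.pl

install_script:
	install -m 0755 $(SCRIPT) $(DEST)
install copies the script and marks it executable in one step, replacing the separate cp -rp and chmod calls.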

How to check if wget has completed download successfully?

I have a small bash script that downloads files from another server; sometimes the download gets interrupted. How can I check whether wget has completed the download successfully?
If it gets interrupted, then it may only have part of the file?
If it only has part of the file, how you would know whether the file is complete depends on two different checks.
Either you already have the actual file (perhaps from another attempt of the same script), in which case you can compare the files using md5 to ensure they are identical.
The other, less accurate, method works on a single attempt: run du -sk on the file, and if it is above a certain size it passes. This in no way guarantees the file is 100% there if it was cut off at 99%.
But you could also look into wget -c, which resumes downloads, so maybe run it twice with this option:
wget --help 2>&1 |grep "\-\-continue"
-c, --continue resume getting a partially-downloaded file.
If it is a web server you are in control of, you could install:
https://metacpan.org/pod/Apache::OpenIndex
I think this displays the md5 sum of the directory index, so you can parse it and compare it to the local md5 sum of your downloaded file; if there is a mismatch, run wget -c.
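Putting the pieces above together, a rough sketch of the check could look like this (the URL and expected checksum are placeholders you would supply yourself):
url="http://example.com/file.tar.gz"              # placeholder URL
file="$(basename "$url")"
expected_md5="0123456789abcdef0123456789abcdef"   # hypothetical known checksum
wget -c "$url"                                    # -c resumes a partial download
actual_md5=$(md5sum "$file" | awk '{print $1}')
if [ "$actual_md5" = "$expected_md5" ]; then
    echo "download complete and verified"
else
    echo "checksum mismatch, run again to resume and re-check" >&2
fi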

Limit to number of files to cp in parallel

I'm running the gsutil cp command in parallel (with the -m option) on a directory with 25 4 GB JSON files (which I am also compressing with the -z option).
gsutil -m cp -z json -R dir_with_4g_chunks gs://my_bucket/
When I run it, it prints to the terminal that it is copying all but one of the files. By this I mean that it prints one of these lines per file:
Copying file://dir_with_4g_chunks/a_4g_chunk [Content-Type=application/octet-stream]...
Once the transfer for one of them is complete, it says that it'll be copying the last file.
The result of this is that there is one file that only starts to copy when one of the others finishes copying, significantly slowing down the process.
Is there a limit to the number of files I can upload with the -m option? Is this configurable in the boto config file?
I was not able to find the .boto file on my Mac (as per jterrace's answer above), so instead I specified these values using the -o switch:
gsutil -m -o "Boto:parallel_thread_count=4" cp directory1/* gs://my-bucket/
This seemed to control the rate of transfer.
From the description of the -m option:
gsutil performs the specified operation using a combination of
multi-threading and multi-processing, using a number of threads and
processors determined by the parallel_thread_count and
parallel_process_count values set in the boto configuration file. You
might want to experiment with these values, as the best value can vary
based on a number of factors, including network speed, number of CPUs,
and available memory.
If you take a look at your .boto file, you should see this generated comment:
# 'parallel_process_count' and 'parallel_thread_count' specify the number
# of OS processes and Python threads, respectively, to use when executing
# operations in parallel. The default settings should work well as configured,
# however, to enhance performance for transfers involving large numbers of
# files, you may experiment with hand tuning these values to optimize
# performance for your particular system configuration.
# MacOS and Windows users should see
# https://github.com/GoogleCloudPlatform/gsutil/issues/77 before attempting
# to experiment with these values.
#parallel_process_count = 12
#parallel_thread_count = 10
I'm guessing that you're on Windows or Mac, because the default values for non-Linux machines are 24 threads and 1 process. This would result in copying 24 of your files first, then the last file afterward. Try experimenting with increasing these values to transfer all 25 files at once.
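For example, you could uncomment and adjust those lines in your .boto file (the values below are just an illustration, not a recommendation):
parallel_process_count = 1
parallel_thread_count = 25
Alternatively, pass the overrides on the command line with the -o switch, as shown in the answer above, without editing the file.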

compare file size after ftp get with the original file on server

In SQL I'm using xp_cmdShell to run FTP commands. I have no problem getting the list of files or copying files to the local server, but I want to compare copied file size to the original to make sure the get has been successful.
Any ideas on how to compare file sizes?
From a command prompt you can use the DOS file compare command (fc). In your case you probably want to do a binary compare (there is no file-size compare). A binary compare should work in your case.
Most DOS commands will return some code that lets you know the status.
http://www.computerhope.com/fchlp.htm
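For example (the file names are placeholders; /b forces a binary comparison):
fc /b localcopy.dat originalcopy.dat
fc sets ERRORLEVEL to 0 when the files match, which the calling script can check.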
EDIT
Sorry, I read your question and realized you want to compare it against a file on the FTP server. I think this is a moot point, since if FTP reports a successful file transfer there is no reason to compare (unless your source of comparison is not the FTP site). Does that make sense?
What you could do is use the FTP ls command.
ftp> ls <filename>
where ftp> is the ftp prompt and not part of the command. This command gives you the file size in bytes. Then you need to use the DOS command for the local file. Here is a StackOverflow question (and answer) about that.
Windows command for file size only?
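Putting the two halves together (the file name is a placeholder): on the FTP side, ftp> ls myfile.zip prints a listing that includes the size in bytes, and on the local Windows side the size can be printed with:
for %I in (myfile.zip) do @echo %~zI
(inside a batch file the variable is written as %%I). If the two numbers match, the get was complete.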