Open File keeps growing despite emptied content - sed

How can I pipe a text stream into a file and, while the file is still in use, wipe it for log rotation?
Long version:
I've been struggling for a while with an apparently minor issue that is making my experiments impossible to continue.
I have software collecting data continuously from external hardware (a radio telescope project) and storing it in CSV format. Since the installation is at a remote location, I want to copy the saved data to a safe place once a day and then wipe the file's content; for the same reason, I can NOT stop the hardware/software, so tools such as log rotation are not an option.
For all the effort spent (see my previous post), the wiped file keeps growing although it is empty.
Bizarre behavior: show the file size, truncate the file, show the file size again:
pi@tower /media/data $ ls -la radio.csv; ls -s radio.csv; truncate radio.csv -s 1; ls -la radio.csv; ls -s radio.csv
-rw-r--r-- 1 pi pi 994277 Jan 18 21:32 radio.csv
252 radio.csv
-rw-r--r-- 1 pi pi 1 Jan 18 21:32 radio.csv
0 radio.csv
Then, as soon as more data comes in:
pi@tower /media/data $ ls -la radio.csv; ls -s radio.csv
-rw-r--r-- 1 pi pi 1011130 Jan 18 21:32 radio.csv
24 radio.csv
I thought of piping the output through a sed command and saving it right away, but with no luck either. The filesystem/hardware doesn't seem buggy (I've tried different hardware, distros, and filesystems).
Would anyone be so kind as to give me a hint on how to proceed?
Thank you in advance.

Solved by piping into tee with the -a option; the file was kept open by the originating process.
tee's append option makes every write go to the current end of file, so in this case, once the file is zeroed, new data lands at the (new) beginning rather than at the old offset.
For search engine and future reference:
sudo rtl_power -f 88M:108M:10k -g 1 - | tee -a radio.csv -
Now emptying the file with
echo -n > radio.csv
gets the file zeroed as expected.
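For the record, here is a minimal sketch of why appending matters; the writer loop and the demo.csv file below are hypothetical stand-ins for the data collector. A writer holding the file open at a fixed offset resumes writing past the old offset after truncation (recreating the old size as a sparse file), while an append-mode writer (shell >>, i.e. O_APPEND) writes at the current end of file every time:
( exec 3>>demo.csv; for i in 1 2 3 4 5; do echo "row $i" >&3; sleep 1; done ) &   # append-mode writer
sleep 2
: > demo.csv      # truncate while the writer is still running
wait
ls -la demo.csv   # small: only the rows written after the truncation remain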

Related

sed command modifying a file while another sed command modifies it too

My question revolves around a process I'm using to update a status file. I have a process running which does a simple
sed -i "s/info/newinfo/" file.txt
But this process can be called multiple times.
My question is, if two processes are running a sed command to modify the file at the same time, would that cause a problem?
I tried to test this by running two at commands at the same time performing two different sed modifications. They seemed to work fine, but I don't know whether they actually ran simultaneously. Maybe the command is so fast that it won't have a problem with read and write access from two different processes.
OK, let's demonstrate with a not-so-big file:
cd /tmp
seq 1000000 2000000 > mediumfile.txt
ls -hl mediumfile.txt
-rw-r--r-- 1 user user 7.7M Sep 26 16:53 mediumfile.txt
wc mediumfile.txt
1000001 1000001 8000008 mediumfile.txt
OK, there are 1,000,001 lines in my 7.7 MB file.
Now I drop 2 x 1001 lines simultaneously, via two separate sed processes (one deleting from 1801000 to 1802000, the other from 1803000 to 1804000):
sed '/1803000/,/1804000/d' -i mediumfile.txt & \
sed '/1801000/,/1802000/d' -i mediumfile.txt ;wait
[1] 30727
[1]+ Done sed '/1803000/,/1804000/d' -i mediumfile.txt
wc -l mediumfile.txt
999000 mediumfile.txt
That's 1001 lines too many!
grep '180[13]400' mediumfile.txt
1803400
So it is: 1803400 is still there, meaning one of the two deletions was silently lost. Each sed -i writes its result to a temporary file and then renames it over the original, so when two run concurrently the last rename wins and the other's changes vanish.
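If concurrent edits like this are possible, the safe fix is to serialize them; here is a minimal sketch using flock(1), where the lock-file path is an arbitrary choice:
flock /tmp/file.txt.lock sed -i "s/info/newinfo/" file.txt   # callers queue on the lock instead of racing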

Search and delete all tar.gz files on CentOS 6

I need to free up some disk space on my web server and would like to ask whether running the command below would break anything.
My server is running CentOS 6 with cPanel/WHM.
$ find / -type f -name "*.tar.gz" -exec rm -i {} \;
Any help or advice will be greatly appreciated.
Thanks.
Well, you'll lose logs if they are already compressed, and uploaded files if there are any. By default there shouldn't be any such files on a freshly installed system. Personally, I think it's wrong to just jettison whatever you can instead of trying to find the cause.
You can try finding what's occupying space by running:
du -hs / # shows how much root directory occupies
Compare that to the output of:
df -h # shows used space on disks
If the numbers don't match by far, you probably have deleted files that are still held open, and a simple reboot will reclaim this space for you.
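To confirm that without rebooting, one option (assuming lsof is installed) is:
lsof +L1   # lists open files with a link count of 0, i.e. deleted but still held open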
If not, you can proceed recursively with:
cd <dir>; du -hs * # enter directory and calculate size of its contents
You can do that starting from / and proceeding into the biggest directory each time. Eventually you'll find where the free space went. :)
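As a shortcut, assuming GNU du and sort, you can rank the top-level directories by size in one go:
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head   # biggest first; -x stays on one filesystem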
PS: CentOS doesn't compress logs by default. You will not detect those logs by searching for archived files, but they can be huge. Compressing them is an easy way to get some space:
Turn on compression in /etc/logrotate.conf:
compress
Compress already rotated logs with:
cd /var/log; find . -type f | grep '.*-[0-9]\+$' | xargs -n1 gzip -9

Do not show directories in rsync output

Does anybody know if there is an rsync option so that directories being traversed do not show up on stdout?
I'm syncing music libraries, and the massive number of directories makes it very hard to see which file changes are actually happening.
I've already tried -v and -i, but both also show directories.
If you're using --delete in your rsync command, the problem with filtering through grep -E -v '/$' is that it will also omit informational lines like:
deleting folder1/
deleting folder2/
deleting folder3/folder4/
If you're making a backup and the remote folder has been completely wiped out for whatever reason, rsync will then wipe out your local folder as well, and you won't see the deleting lines warning you about it.
To omit the already-existing folders but keep the deleting lines at the same time, you can use this expression:
rsync -av --delete remote_folder local_folder | grep -E '^deleting|[^/]$'
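As a usage note, adding -n (--dry-run) to the same pipeline gives a safe preview of what would be transferred and deleted before you run it for real:
rsync -avn --delete remote_folder local_folder | grep -E '^deleting|[^/]$'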
I'd be tempted to filter by piping through grep -E -v '/$', which uses an end-of-line anchor to remove lines that finish with a slash (i.e. directories).
Here's the demo terminal session where I checked it...
cefn@cefn-natty-dell:~$ mkdir rsynctest
cefn@cefn-natty-dell:~$ cd rsynctest/
cefn@cefn-natty-dell:~/rsynctest$ mkdir 1
cefn@cefn-natty-dell:~/rsynctest$ mkdir 2
cefn@cefn-natty-dell:~/rsynctest$ mkdir -p 1/first 1/second
cefn@cefn-natty-dell:~/rsynctest$ touch 1/first/file1
cefn@cefn-natty-dell:~/rsynctest$ touch 1/first/file2
cefn@cefn-natty-dell:~/rsynctest$ touch 1/second/file3
cefn@cefn-natty-dell:~/rsynctest$ touch 1/second/file4
cefn@cefn-natty-dell:~/rsynctest$ rsync -r -v 1/ 2
sending incremental file list
first/
first/file1
first/file2
second/
second/file3
second/file4
sent 294 bytes received 96 bytes 780.00 bytes/sec
total size is 0 speedup is 0.00
cefn@cefn-natty-dell:~/rsynctest$ rsync -r -v 1/ 2 | grep -E -v '/$'
sending incremental file list
first/file1
first/file2
second/file3
second/file4
sent 294 bytes received 96 bytes 780.00 bytes/sec
total size is 0 speedup is 0.00

copy the symbolic link in Solaris [closed]

I am trying to copy a symbolic link on Solaris, but I find that it does not simply copy the link; instead it copies the whole contents of the directory/file the link is pointing to. This doesn't happen on other OSes like AIX, HP-UX, and Linux.
Is this normal behaviour for Solaris?
Charlie was close: you want the -L, -H, or -P flags with the -R flag (probably just -R -P). Similar flags exist for chmod(1) and chgrp(1). I've pasted an excerpt from the man page below.
Example:
$ touch x
$ ln -s x y
$ ls -l x y
-rw-r--r-- 1 mjc mjc 0 Mar 31 18:58 x
lrwxrwxrwx 1 mjc mjc 1 Mar 31 18:58 y -> x
$ cp -R -P y z
$ ls -l z
lrwxrwxrwx 1 mjc mjc 1 Mar 31 18:58 z -> x
$
Alternatively, plain old tar will happily work with symbolic links by default, even the venerable version that ships with Solaris:
tar -cf - foo | ( cd bar && tar -xf - )
(where foo is a symlink or a directory containing symlinks).
/usr/bin/cp -r | -R [-H | -L | -P] [-fip#] source_dir... target
...
-H Takes actions based on the type and contents of the
file referenced by any symbolic link specified as a
source_file operand.
If the source_file operand is a symbolic link, then cp
copies the file referenced by the symbolic link for
the source_file operand. All other symbolic links
encountered during traversal of a file hierarchy are
preserved.
-L Takes actions based on the type and contents of the
file referenced by any symbolic link specified as a
source_file operand or any symbolic links encountered
during traversal of a file hierarchy.
Copies files referenced by symbolic links. Symbolic
links encountered during traversal of a file hierarchy
are not preserved.
-P Takes actions on any symbolic link specified as a
source_file operand or any symbolic link encountered
during traversal of a file hierarchy.
Copies symbolic links. Symbolic links encountered dur-
ing traversal of a file hierarchy are preserved.
You want cp -P, I believe (check the man page, as I don't have a Solaris box handy right now). I faintly suspect that's a System V-ism, but wouldn't swear to it.
It sounds like you're trying to duplicate a single symlink.
You might want to just do:
link_destination=`/bin/ls -l /opt/standard_perl/link | awk '{print $NF}'`
ln -s "$link_destination" /opt/standard_perl/link_new
If you are trying to copy a directory hierarchy, this can be very difficult to do in general without the GNU tools (or rsync). While there are solutions that often work, there is no solution that works on every "standard" Unix with every type of filename you might encounter. If you're going to be doing this regularly, you should install the GNU coreutils, find, cpio, and tar, as well as rsync.
Will cpio do the trick for you?
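For instance, here is a hedged sketch using cpio in pass-through mode, which copies symlinks as symlinks by default; the paths are illustrative:
cd /opt/standard_perl && find link -print | cpio -pdm /some/target_dir   # -d creates leading dirs, -m preserves mtimes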

How can I non-recursively migrate directories with Perl or shell?

We're migrating home folders to a new filesystem, and I am looking for a way to automate it using Perl or a shell script. I don't have much choice in programming languages as the systems are proprietary storage clusters that should remain as unchanged as possible.
Task: Under directory /home/ I have various users' home folders aaa, bbb, ccc, ... and they have certain permissions and user/group ownership that need to remain intact upon migration to /newhome/. Here's an example of what needs to be migrated from /home:
drwxr-xr-x 3 aaaaa xxxxxxxxx 4096 Feb 26 2008 aaaaa/
drwxrwxrwx 88 bbbbbbb yyyyyy 8192 Dec 16 16:32 bbbbbbb/
drwxr-xr-x 6 ccccc yyyyyy 4096 Nov 24 04:38 ccccc/
drwxr-xrwx 36 dddddd yyyyyy 4096 Jun 20 2008 dddddd/
drwxr-xr-x 27 eee yyyyyy 4096 Dec 16 02:56 eee/
So, the exact same folders, with permissions and ownership intact, should be created under /newhome. Copying/moving the files themselves is not a concern; it will be handled later.
Has anyone worked on such a script? I am really new to Perl, so I need help.
cp's -a flag will maintain permissions, modification times, etc. You should be able to do something like:
for a in /home/*; do cp -a "$a" /newhome/; done
Try it with one directory to see if it does what you need before automating it.
EDIT: You can disable recursive file copying by using rsync or tar, as mentioned by Paul. With rsync, the subdirectories themselves are still created, but the files inside them aren't copied:
sudo rsync -pgodt /home/ /newhome/
I haven't tried tar's --no-recursion, so can't comment on it.
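For completeness, here is a sketch of the tar variant, assuming GNU tar is available (which may not be true on a proprietary cluster):
cd /home && find . -maxdepth 1 -type d | tar -c --no-recursion -T - -f - | (cd /newhome && tar -xpf -)   # directory entries only, no contents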
EDIT 2: Another way
find /home/ -maxdepth 1 -print | sudo cpio -pamVd /newhome
You can only preserve the owner and group if you do the copying operation as root. Most of the commands given will work; the tar and the cp -rp variants will.
The only trick to worry about is non-writable directories, but that's an issue for non-root users. Then, I tend to use:
(cd /home; find . -depth) | cpio -pvdumB /newhome
The -depth option means that files and subdirectories are processed before the directories that contain them, so the no-write permission on a directory is only set after all of its contents have been copied into it. You can also use 'sort -r' to list files in reverse order, which ensures that directories appear after their contents.
This will create the directories and copy all the files.
cd /home; tar cvBf - . | (cd /newhome; tar xvpBf -)
If you don't want to copy all the files, you might be able to do that by adding a "--no-recursion" to the first tar command.
If these directories are on the same filesystem, why not simply
cp -p /home/* /newhome/