I am using JBoss 4.0.2 on Solaris to run a webapp.
JBoss is configured with the factory-default log4j.xml file, which has a ConsoleAppender, and I am redirecting the stdout of the JBoss Java process to a file.
Something interesting happens when I try to clean up this file, jboss.out.
This is where I start.
$ ls -alhrt jboss.out
-rw-r--r-- 1 ipunity ipunity 458M Jan 8 07:22 jboss.out
Then I clean up this file. Jboss is still running.
$ >jboss.out
$ ls -alhrt jboss.out
-rw-r--r-- 1 ipunity ipunity 0 Jan 8 07:24 jboss.out
Now if I click a link in my webapp, it starts logging, but the whole file kind of reappears!
$ ls -alhrt jboss.out
-rw-r--r-- 1 ipunity ipunity 458M Jan 8 07:25 jboss.out
Any ideas on what's going on?
Is ConsoleAppender buffering the data? I don't have enough memory to hold 458 MB, my disk swap is almost unused, and I don't see any temp file this large either.
This is probably a sparse file, created by the OS when JBoss issues a write with the file pointer set to +[whatever the old size of the file was].
Check the disk space actually used by the new file -- on most unices, du -k jboss.out should work. If the file is sparse, you should see something significantly less than the size shown by ls.
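You can verify the sparse-file theory yourself. The following Python sketch (throwaway temp paths, nothing from the JBoss setup) reproduces what happens: the process's saved file offset survives the truncation, so the next write lands far past the new end of file, and the filesystem fills the gap with a hole.

```python
import os
import tempfile

# Simulate an appender whose file descriptor still carries a large offset
# after the file was truncated out from under it.
fd, path = tempfile.mkstemp()
os.lseek(fd, 10 * 1024 * 1024, os.SEEK_SET)  # pretend 10 MB had been written
os.write(fd, b"new log line\n")              # write "past the end"
os.close(fd)

st = os.stat(path)
print("apparent size:", st.st_size)          # what ls reports
print("disk usage   :", st.st_blocks * 512)  # roughly what du reports
os.unlink(path)
```

On a filesystem that supports holes, the disk usage comes out as a tiny fraction of the apparent size, which is the same discrepancy du -k should reveal for jboss.out.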
Generally, removing log files while they're being written to is tricky. To avoid that issue when capturing stdout, I tend to pipe stdout to a program like cronolog or rotatelogs instead of straight to a file.
There is a server that starts and listens on a SOCK_SEQPACKET socket. The socket has a name, say #Foo. It is per user, i.e. started by each Linux user account, and there is only one service per user.
The goal is to make an external tool that forcefully kills the service when something goes wrong. The language is C++.
When I have to kill that service, I use netstat:
$ netstat -lnp | egrep Foo
I would not like to have a dependency on the net-tools package, though. I would like to do this with minimal dependencies on external tools and/or even libraries.
I've tried a Google search and could not find how. I guess I could read the netstat source code to see how it does it, but I'd like to keep that as a last resort. Using netstat I have learned how to kill the service, but that gives me a dependency on net-tools. The service's executable somehow shows up as /proc/self or similar. I could visit every process ID in /proc and check whether it looks like the service (inspect the executable name, etc.), but that is not necessarily enough to narrow things down to the one process that uses the #Foo SOCK_SEQPACKET socket. As a non-expert in socket/network programming, I am running out of ideas about how to proceed.
As #user253751 says, that's not actually a socket type but more of a socket behavior type. As they suggested, it is probably a unix domain socket. Given the '#' sign in it, it is almost certainly using the unix domain socket's "abstract" namespace (which means the socket doesn't have an endpoint directly visible in the filesystem). At the system call level, such endpoints are created with a null byte as the first character (the # is commonly substituted in command line utilities or wherever the socket is surfaced to user space for user convenience). See the unix(7) man page for more details.
Since the socket doesn't appear in the filesystem anywhere, it can be difficult to find the association of process to socket. One way you can find that is through use of the lsof utility -- but that is probably little improvement over the netstat utility.
It appears that the abstract namespace sockets do show up in the /proc/net/unix pseudo-file. In association with that, there is an inode column (second to the last). In conjunction with looking at each process's /proc/<pid>/fd directory, I think you can make the association.
For example, I created a small python program that creates a Unix domain SOCK_SEQPACKET socket and binds it to '#Foo'. That program is here:
import socket, time
sock = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
sock.bind('\0Foo') # Note the \0 for the initial null byte
time.sleep(10000) # Just hang around while we do other stuff
In a separate terminal window:
gh $ grep '#Foo' /proc/net/unix
0000000000000000: 00000002 00000000 00000000 0005 01 7733102 #Foo
gh $ sudo bash -c 'for proc in /proc/[0-9]*; do ls -l $proc/fd | grep 7733102 && echo $proc; done'
lrwx------ 1 gh gh 64 Feb 2 17:13 3 -> socket:[7733102]
/proc/339264
gh $ ps -fp 339264
UID PID PPID C STIME TTY TIME CMD
gh 339264 22444 0 17:11 pts/3 00:00:00 python3
gh $ ls -l /proc/339264/fd
total 0
lrwx------ 1 gh gh 64 Feb 2 17:13 0 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 1 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 2 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 3 -> 'socket:[7733102]'
Explaining:
First, we do a grep on /proc/net/unix for the '#Foo' socket. That gives us the inode number 7733102 (second to last column). The inode number is unique to every socket on the system.
Next, as root, we do an ls -l /proc/<pid>/fd for each process on the system and grep for the inode number found in the previous step. When it is found, we print the /proc/<pid> entry. (On Linux, that pseudo-directory contains a symlink describing each file descriptor open in the process; for sockets, that description is always socket:[<inode>].) Here we find that the process ID is 339264.
The remaining two steps just confirm that this is, in fact, our Python process, and you can see its four open files (the first three [stdin, stdout, stderr] all pointing to its terminal's pseudo-tty, and the fourth to the socket).
To make this into a more foolproof program, you'd need to account for the fact that the inode number you found in step 1 could be a substring of some other socket's inode number but that is left as an exercise for the reader. :)
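The walkthrough above can be turned into a small program. Here is a Python sketch (the function names are my own; it assumes a Linux /proc, and without root it will only see your own processes):

```python
import os

def abstract_socket_inode(name):
    """Find the inode of an abstract unix socket in /proc/net/unix.
    `name` is the displayed name without the leading '@'/'#' placeholder
    that stands in for the initial null byte."""
    with open("/proc/net/unix") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # The path is the optional 8th column.
            if len(fields) >= 8 and fields[7].lstrip("@#") == name:
                return int(fields[6])  # inode is the second-to-last column
    return None

def pids_using_inode(inode, pids=None):
    """Return the pids whose fd table contains socket:[inode]."""
    target = "socket:[%d]" % inode
    found = []
    for pid in (pids or (p for p in os.listdir("/proc") if p.isdigit())):
        fd_dir = "/proc/%s/fd" % pid
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(os.path.join(fd_dir, fd)) == target:
                    found.append(int(pid))
                    break
        except OSError:
            continue  # process exited, or permission denied
    return found
```

With those two helpers, killing the service is an os.kill(pid, signal.SIGKILL) for each pid in pids_using_inode(abstract_socket_inode("Foo")). Note that the inode-substring caveat mentioned below does not bite here, because the readlink comparison is exact rather than a grep.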
I have an old Perl script which has always worked, but suddenly something is broken and it is not deleting the file.
-rw-r--r-- 1 nobody uworld 6 Dec 03 11:15 shot32.file
The command to delete the above file is inside a perl script
`rm $shotfile`;
I have checked that $shotfile is shot32.file and that it is in the right location, so the file path and name are not the problem.
As for permissions, the Perl script runs as the nobody user as well, so what other reasons could there be for this not to work?
Appreciate your help.
To delete a file, you need write permissions on the directory the file is in. The permissions on the file don't matter.
That said, that's some pretty awful code you've got there. You're shelling out (without escaping anything, hello shell injection!) just to run rm (which you could've run directly without going through the shell), and you're capturing its output for no reason (and you're ignoring whatever was captured anyway). Also, you're not checking for errors (which would be harder in this form as well).
This is all much more complicated than it has to be. Perl has a built-in function for deleting files:
unlink $shotfile or warn "$0: can't unlink $shotfile: $!\n";
This will delete the file or warn you about any problems (with $! containing the reason for the failure). Change warn to die if you want the program to abort instead.
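The directory-permission rule is easy to demonstrate for yourself. A Python sketch (throwaway temp paths; run it as a non-root user, since root bypasses permission checks):

```python
import os
import tempfile

# Deleting a file needs write permission on the directory, not on the file.
d = tempfile.mkdtemp()
path = os.path.join(d, "shot32.file")
open(path, "w").close()

os.chmod(d, 0o555)             # directory: read + execute, but no write
try:
    os.unlink(path)
    print("deleted")           # root gets here regardless
except PermissionError:
    print("unlink failed: no write permission on the directory")

os.chmod(d, 0o755)             # give the write bit back...
if os.path.exists(path):
    os.unlink(path)            # ...and now the unlink succeeds
os.rmdir(d)
```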
On CentOS, how do I give the apache user permission to run "ant release" in a home directory it does not own? The ant release I am using is part of the Android SDK. I have a directory /home/myuser/android_project/, and ant release runs fine from there, but I would like to give apache the permissions it needs so that I can run it as
<?php shell_exec('/home/myuser/android_project/ant release') ?>
The gotcha
Also, since I sign the ant release, there is the issue of the password: I would like to have it handled, perhaps in a file, so that PHP can somehow magically "sign" the ant release.
Note to Mr Tinker: hold the horses - I know this might fall foul of the forum topic police, but in my considered opinion it is a Unix issue. I.e., I know how PHP does shell_exec, so I need no programming help; I know how to run ant release manually, so I need no installation help. I would like to sew together these two disparate manual "things" within Linux (the CentOS server), so I believe 100% this is a Unix issue.
As you've already stated, you need to give the apache user permission to execute the /home/myuser/android_project/ant file.
tl;dr : run the following command (be warned, it might not be the most secure thing in the world):
chmod 777 /home/myuser/android_project/ant
If you're interested in why this might fix your problem, continue to read below.
First, you need to get some more information.
Run the following command:
ls -l /home/myuser/android_project/ant
The ls -l command will give you the read, write, and execute permissions for the specified file, along with the ownership information. The first column contains the permission information. The 3rd column indicates the owning user, and the 4th column indicates the owning group.
For example:
$ ls -l /etc/passwd
-rw-r--r--. 1 root root 2177 Aug 26 21:23 /etc/passwd
 ^^^       Specified User Owner
    ^^^    Specified Group Owner
       ^^^ All Users & Groups
This can be interpreted as user root and group root owning the /etc/passwd file.
The permissions are read as groups of 3 rwx characters. The first group is for owning user, the 2nd for owning group, and the 3rd for everyone else on the system. The permissions in this example mean that the root user can read and write to the file, the root group can read, and everyone else can read.
Now, each group of permissions can be represented as an octal digit:
--- == 0
--x == 1
-w- == 2
-wx == 3
r-- == 4
r-x == 5
rw- == 6
rwx == 7
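The table is easy to check mechanically: each triplet is a 3-bit number with r=4, w=2, x=1. A small Python sketch (the helper names are mine):

```python
def triplet_to_digit(t):
    # r=4, w=2, x=1; a '-' contributes 0
    return (4 if t[0] == "r" else 0) + (2 if t[1] == "w" else 0) + (1 if t[2] == "x" else 0)

def mode_string_to_octal(s):
    # s is the 9-character permission field, e.g. "rw-r--r--"
    return "".join(str(triplet_to_digit(s[i:i + 3])) for i in (0, 3, 6))

print(mode_string_to_octal("rw-r--r--"))  # 644, the /etc/passwd example
print(mode_string_to_octal("rwxrwxrwx"))  # 777, what chmod 777 sets
```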
You now have enough information to understand why the chmod 777 command above works: it gives everyone on the system permission to read, write, and execute that ant file.
Ideally, you would give only the minimum permissions required to allow apache to execute the file; I'll leave that as an exercise for the reader.
Currently my log file sits at 32 meg. Did I miss an option that would split the log file as it grows?
You can use logrotate to do this job for you.
Put this in /etc/logrotate.d/mongod (assuming you use Linux and have logrotate installed):
/var/log/mongo/*.log {
daily
rotate 30
compress
dateext
missingok
notifempty
sharedscripts
copytruncate
postrotate
/bin/kill -SIGUSR1 `cat /var/lib/mongo/mongod.lock 2> /dev/null` 2> /dev/null || true
endscript
}
If you think that 32 megs is too large for a log file, you may also want to look inside to what it contains.
If the logs seem mostly harmless ("open connection", "close connection"), then you may want to start mongod with the --quiet switch. This will reduce some of the more verbose logging.
Rotate the logs yourself
http://www.mongodb.org/display/DOCS/Logging
or use 'logrotate' with an appropriate configuration.
Using logrotate is a good option. However, as fmchan commented, it will generate two log files per rotation, and you will have to follow Brett's suggestion to "add a line to your postrotate script to delete all mongod style rotated logs".
Also, copytruncate is not the best option: there is always a window between the copy and the truncate, during which some mongod log lines may be lost. Check the logrotate man page or refer to this copytruncate discussion.
Just to provide one more option: you could write a script that sends the rotate signal to mongod and removes the old log files. mongologrotate.sh is a simple reference script that I have written; you could set up a cron job to call it periodically, e.g. every 30 minutes.
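For reference, the core of such a script might look like the Python sketch below. The lock-file path, the log glob, and the 30-day retention are assumptions; adjust them to your install.

```python
import glob
import os
import signal
import time

MONGOD_LOCK = "/var/lib/mongo/mongod.lock"   # assumed location of the pid/lock file
LOG_GLOB = "/var/log/mongo/mongod.log.*"     # mongod-style rotated log names
KEEP_SECONDS = 30 * 24 * 3600                # assumed 30-day retention

def send_rotate_signal(lock_path=MONGOD_LOCK):
    """Ask mongod to rotate its own log, as `kill -SIGUSR1` does above."""
    with open(lock_path) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGUSR1)

def cleanup_rotated_logs(pattern=LOG_GLOB, keep_seconds=KEEP_SECONDS):
    """Delete rotated logs older than the retention window."""
    removed = []
    now = time.time()
    for path in glob.glob(pattern):
        if now - os.path.getmtime(path) > keep_seconds:
            os.remove(path)
            removed.append(path)
    return removed
```

Calling send_rotate_signal() followed by cleanup_rotated_logs() from a cron job every 30 minutes gives the behavior described above, without the copytruncate window.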
Is there any way to do it? I only have client access and no access to the server. Is there a command I've missed or some software that I can install locally that can connect and find a file by filename?
You could grep the output of
cvs rlog -Nh .
(note the period character at the end - this effectively means: the whole repository).
That should give you info about the whole shebang including removed files and files added on branches.
You can use
cvs rls -Rde <modulename>
which will list all files recursively, e.g.
foo:
/x.py/1.2/Mon Dec 1 23:33:51 2008//
/y.py/1.1/Mon Dec 1 23:33:31 2008//
D/bar////
foo/bar:
/xxx/1.1/Mon Dec 1 23:36:38 2008//
Notice that the -d option also gives you deleted files; I'm not sure whether you wanted that. Without -e, it only gives you the file names.
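If the goal is to find a file by name, that listing is easy to parse. A Python sketch (the helper name is mine; the format is the one shown above, "dir:" headers followed by /name/revision/date// entries):

```python
def find_file(rls_output, filename):
    """Search `cvs rls -Rde` output; return matching dir/name paths."""
    matches = []
    current_dir = ""
    for line in rls_output.splitlines():
        line = line.strip()
        if line.endswith(":"):        # a "foo:" directory header
            current_dir = line[:-1]
        elif line.startswith("/"):    # a "/name/revision/date//" file entry
            name = line.split("/")[1]
            if name == filename:
                matches.append(current_dir + "/" + name)
    return matches

listing = """foo:
/x.py/1.2/Mon Dec  1 23:33:51 2008//
/y.py/1.1/Mon Dec  1 23:33:31 2008//
D/bar////

foo/bar:
/xxx/1.1/Mon Dec  1 23:36:38 2008//
"""
print(find_file(listing, "xxx"))   # ['foo/bar/xxx']
print(find_file(listing, "x.py"))  # ['foo/x.py']
```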