Is there a way to skip a host while it is being scanned? I am providing a list of hosts to Nmap, and while it is scanning from that list I would like to skip one host, because the scripts keep running on that host and delay my scan. Please suggest.
Thanks
There is no way to stop scanning a host at runtime. However, you can impose time limits on how long Nmap spends on a particular host. The --host-timeout option will cause Nmap to drop all results and stop scanning a target when the timeout expires. Unfortunately, this means all that work is lost. But there is a better way if NSE scripts are slowing you down.
Nmap 7.30 added the --script-timeout option, which puts a time limit on each NSE script that runs against a target. Any script that exceeds the time limit will be terminated and will produce no output, but any other scripts will be allowed to run. No port scan, OS detection, or traceroute data will be lost.
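For example, a sketch of what this might look like against Nmap's own test host (the timeout value and the choice of a default script scan are arbitrary here):
nmap -sC --script-timeout 2m scanme.nmap.org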
Your last option, if NSE is taking too long, is to find out which script is causing the problem. Most NSE scripts are designed to run quickly; even most of the brute-force password guessing scripts enforce a 10-minute time limit. But sometimes there are bugs, and other times you may select a script with an intentionally long run time. In debug mode (-d, or press d during runtime), printing a status line (by pressing any key during execution) will show a list of running scripts when there are 5 or fewer running. At debug level 2 (-dd, or press d twice), a full stack trace of each running script thread is produced, which can help Nmap developers debug delays. If you suspect a misbehaving script, you can file a bug report on GitHub or send it to dev@nmap.org.
Nmap has a --host-timeout option which will give up on any host that takes longer than the provided value. So the command below would give up on any host that takes longer than 10 minutes. You can read more about the various timing-related options here.
nmap --host-timeout 10m <target(s)>
I have a server that runs PostgreSQL. In the logs I am seeing this message for my Resque-based 'worker' box, multiple times a minute. Some minutes there isn't a message; other minutes it can appear 10 times.
2016-01-12 13:40:36 EST:1.1.8.2(33899):[16141]: LOG: could not receive data from client: Connection reset by peer
Now when I go into the 1.1.8.2 box and look at netstat -ntp, I don't see port 33899, and most of the ports are at least in the 40xxx range by now. That may be conjecture, but I'm at a loss to find out why a Redis/Resque/Puma Rails stack would be printing these messages, let alone what they mean even if I get to the bottom of it.
Will I gain memory back if they are closed 'normally'?
Is this a thing to be wary of?
How does one debug OLD ports that are open when the db box and the worker box both don't display the ports any more?
This message is probably due to the resque worker task not closing the database connection before it exits. It's not a huge problem, but presumably Postgres is doing a little extra work to clean it up, and it makes a mess of your log file...
One solution is to add a hook to your resque worker's task file (the same file that contains the self.perform definition):
def self.after_perform(*args)
  # Explicitly close the ActiveRecord connection once the job has finished.
  ActiveRecord::Base.connection.disconnect!
end
I'm using RawCap to capture packets sent from my dev machine (one app) to itself (another app). I can get it to write the captures to a file like so:
RawCap.exe 192.168.125.50 piratedPackets.pcap
...and then open the *.pcap file in Wireshark to examine the details. This worked when I called my REST method from Postman, but when using Fiddler's Composer tab to attempt the same, the *.pcap file ends up being empty. I think this may be because my way of shutting down RawCap was rather raw itself - I simply closed the command prompt. Typing "exit" does nothing while it's busy capturing.
How can I make like a modern-day Mansel Alcantra if the captured packets sink to the bottom of the ocean before I can plunder the booty? How can I gracefully shut RawCap down so that it (hopefully) saves its contents to the log (*.pcap) file?
RawCap is gracefully closed by hitting Ctrl + C. Doing so will flush all packets from memory to disk.
You can also tell RawCap to only capture a certain number of packets (using the -c argument) or to end sniffing after a certain number of seconds (using the -s argument).
Here's one example using -s to sniff for 60 seconds:
RawCap.exe -s 60 192.168.125.50 piratedPackets.pcap
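Similarly, a sketch using -c to stop after a fixed number of packets (the count of 100 here is arbitrary):
RawCap.exe -c 100 192.168.125.50 piratedPackets.pcap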
Finally, if none of the above methods is available to you, then you might want to use the -f switch. With -f, all captured packets are flushed to disk immediately. However, this has a performance impact, so you run a greater risk of missing/dropping packets when sniffing with the -f switch.
You can run RawCap.exe --help to show the available command line arguments. They are also documented here:
http://www.netresec.com/?page=RawCap
I'm using a Raspberry Pi, and upon startup it sends an e-mail with the time and an IP address. The problem is that the time is not correct; it's the time from the last time the system was shut down. When I log in through ssh and run the date command, I get the correct time. In other words, the e-mail is sent before the system has updated its time.
I was thinking of automatically running ntpdate on boot, but after reading up on it, it seems like a bad idea due to the many risks of error.
So, can I somehow wait until the time has been updated before continuing in a script?
There is a tool included in the ntp reference implementation for this very purpose. The utility has a rather cryptic name: ntp-wait. Five minutes with the man page and you will be all set.
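For instance, a rough sketch of gating the e-mail on time synchronisation (send_ip_mail.sh is a hypothetical stand-in for whatever script currently sends your e-mail; -v just makes ntp-wait verbose):
ntp-wait -v && /home/pi/send_ip_mail.sh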
This question is an extension of that question.
Yet again: I'm working under CentOS 6.0 and I have a remote win7 folder, mounted with:
mount -t cifs //PC128/mnt /media/net -o "username=WORKGROUP\user,password=pwd,rw,noexec,soft,uid=user,gid=user"
When the remote folder is not available (e.g. the network cable is pulled out), an attempt to access the remote folder locks up an application I'm working on. At first I found that QDir::exists() caused locking for 20-90 seconds (I still can't figure out why the duration varies); later I found that any call to the stat() function leads to an application lock.
I followed the advice provided in the topic above and moved the QDir::exists() call (and later the call to the stat() function) to another thread, but this didn't solve the problem. The application still hangs when the connection is suddenly lost. The Qt trace shows that the lock is somewhere in the kernel:
0 __kernel_vsyscall
1 __xstat64@GLIBC_2.1 /lib/libc.so.6
2 QFSFileEnginePrivate::doStat stat.h
I also tried to check whether the remote share is still mounted before accessing the folder itself, but it didn't help. Approaches such as:
mount | grep /media/net
show that the shared folder is still mounted even if there is no active connection to the network.
Checking folder status differences such as:
stat -fc%t:%T /media/net/ != stat -fc%t:%T /media/net/..
also hangs for ~20 seconds.
So I have several questions:
Is there any way to change CIFS timeouts? I did try to find out, but it seems that there are no appropriate parameters and no CIFS config.
How can I check whether the remote folder is still mounted without getting locked?
How can I check whether the folder exists, again without getting locked?
Your problem, "an unreachable network filesystem", is a very well-known trigger of a Linux hung task, which is not the same thing as a zombie process at all (killing the parent PID won't do anything).
A hung task is a task that has made a system call which causes a problem in the kernel, so that the system call never returns.
The key particularity is that the scheduler puts the task in the "D" state, which means the program is in an uninterruptible state. This means you can do nothing to stop your program: you can send any signal to the task and it will not respond. Launching hundreds of SIGTERM/SIGKILL does nothing!
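If you want to confirm that this is what is happening, one sketch (not from the original answer) is to list the processes whose state column starts with D:
ps -eo stat,pid,cmd | awk '$1 ~ /^D/'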
This is the case with my old kernel: when my NFS server crashes, I need to reboot the client to kill the tasks using the filesystem. I compiled that kernel a long time ago (I still have the build tree on my HDD), and during configuration I saw this in lib/Kconfig.debug:
config DETECT_HUNG_TASK
        bool "Detect Hung Tasks"
        depends on DEBUG_KERNEL
        default LOCKUP_DETECTOR
        help
          Say Y here to enable the kernel to detect "hung tasks",
          which are bugs that cause the task to be stuck in
          uninterruptible "D" state indefinitely.
          When a hung task is detected, the kernel will print the
          current stack trace (which you should report), but the
          task will stay in uninterruptible state. If lockdep is
          enabled then all held locks will also be reported. This
          feature has negligible overhead.
It only proposed to detect such tasks, or to panic on detection: I haven't checked whether recent kernels can actually solve the problem (it seems to be the case, judging by your question), but I didn't think it was worth enabling.
There is a second problem: normally, the detection occurs after 120 seconds, but I also saw a Kconfig option for this:
config DEFAULT_HUNG_TASK_TIMEOUT
        int "Default timeout for hung task detection (in seconds)"
        depends on DETECT_HUNG_TASK
        default 120
        help
          This option controls the default timeout (in seconds) used
          to determine when a task has become non-responsive and should
          be considered hung.
          It can be adjusted at runtime via the kernel.hung_task_timeout_secs
          sysctl or by writing a value to
          /proc/sys/kernel/hung_task_timeout_secs.
          A timeout of 0 disables the check. The default is two minutes.
          Keeping the default should be fine in most cases.
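So on a kernel with the detector enabled you could, for example, shorten the timeout at runtime via the sysctl mentioned in that help text (the value of 30 seconds is arbitrary):
sysctl -w kernel.hung_task_timeout_secs=30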
This also works with kernel threads. Example: make a loop device backed by a file on a FUSE filesystem, then crash the userspace program controlling the FUSE filesystem!
You should get a kernel thread whose name is of the form loopX (X normally corresponds to your loop device number) hanging!
weblinks:
https://unix.stackexchange.com/questions/5642/what-if-kill-9-does-not-work (look at the answer written by ultrasawblade)
http://www.linuxquestions.org/questions/linux-general-1/kill-a-hung-task-when-kill-9-doesn't-help-697305/
http://forums-web2.gentoo.org/viewtopic-t-811557-start-0.html
http://comments.gmane.org/gmane.linux.kernel/1189978
http://comments.gmane.org/gmane.linux.kernel.cifs/7674 (This is a case similar to yours)
As for your three questions, you have the answer: this is due to what is probably a well-known bug in the Linux VFS kernel layer! (There are no CIFS timeouts.)
After much trial & error I found a solution that persists.
# vim /etc/fstab
//192.168.1.122/myshare /mnt/share cifs username=user,password=password,_netdev 0 0
The _netdev option is important since we are mounting a network device. Clients may hang during the boot process if the system encounters any difficulties with the network.
https://www.redhat.com/sysadmin/samba-windows-linux
I am a newbie to Perl. I am using the Perl Expect module to spawn a session to a remote system and execute a set of commands there, one after another, using the send method (like $exp->send("my command as a string goes here\n")). The problem is that the commands I execute take some time to process, and before all the commands finish, the remote machine times out and I come back to my host machine's prompt. Can you please help me handle this?
I have one more question. I have a command which returns 2 values after execution (say I am printing 2 values on the remote machine). I want to capture these 2 values and pass them as arguments to the next command I send. How do I do this?
Please help me with this problem.
Thanks.
I just found out something about the Expect module. There is an undef option that can be used with expect, like $exp->expect(undef). This will wait indefinitely and lets all commands finish their processing. The problem is that it does not return control to the host machine. There is one more option of using expect with eof, which will wait until it encounters an EOF and then returns to the host machine, although I have no idea precisely how to use it. An elegant solution I found is to use ssh to run commands on the remote machine rather than using Expect, in which case we do not have to deal with timeouts. :)
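For what it is worth, here is a minimal sketch of the indefinite-wait approach described above (the host, the prompt pattern, and the commands are all hypothetical, so adjust them for your environment):

use strict;
use warnings;
use Expect;

# Spawn an interactive ssh session to the remote machine.
my $exp = Expect->spawn("ssh user\@remote-host")
    or die "Cannot spawn ssh: $!";

# Wait indefinitely (timeout = undef) for a shell prompt ending in $ or #.
$exp->expect(undef, '-re', '[\$#]\s*$');

$exp->send("some_long_running_command\n");
# Wait for the prompt again so the command has time to finish.
$exp->expect(undef, '-re', '[\$#]\s*$');

$exp->send("exit\n");
# soft_close() waits for the spawned session to end (EOF) before returning
# control to the local script.
$exp->soft_close();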