I'm using boofuzz 0.1.6 on an Ubuntu machine. I'm trying to get process_monitor_unix to connect to the server program I want to fuzz. When I start procmon and my script, procmon prints the following output:
[05:47.20] Process Monitor PED-RPC server initialized:
[05:47.20] listening on: 0.0.0.0:26002
[05:47.20] crash file: /home/rico/PycharmProjects/iec104_server_fuzz/boofuzz-crash-bin
[05:47.20] # records: 0
[05:47.20] proc name: None
[05:47.20] log level: 1
[05:47.20] awaiting requests...
[05:47.24] updating target process name to './simple_server'
[05:47.24] updating stop commands to: [u'kill -SIGINT $(pidof simple_server)']
[05:47.24] updating start commands to: [u'/home/rico/iec60870/lib60870-master/lib60870-C/examples/cs104_server/simple_server']
[05:47.24] updating crash bin filename to 'boofuzz-crash-bin-2020-03-19T16-47-24'
[05:47.24] Starting target...
[05:47.24] starting target process
[05:47.24] done. waiting for start command to terminate.
APCI parameters:
t0: 10
t1: 15
t2: 10
t3: 20
k: 12
w: 8
The output "APCI parameters ..." is a message the server prints every time it starts, so I think it's up and running. My problem is that it isn't responding to incoming TCP packets.
The output of my fuzz script is the following:
[2020-03-19 17:47:24,314] Info: Web interface can be found at http://localhost:26000
[2020-03-19 17:47:24,316] Test Case: 1: activate->s_formatAPDU.no-name.1
[2020-03-19 17:47:24,316] Info: Type: Bytes. Default value: b'\x91\xef\xa5'. Case 1 of 270 overall.
[2020-03-19 17:47:24,316] Test Step: Calling procmon pre_send()
It gets stuck at this test step.
When I start the server first, then procmon, then the fuzz script, I get the following error:
[10:29.51] Process Monitor PED-RPC server initialized:
[10:29.51] listening on: 0.0.0.0:26002
[10:29.51] crash file: /home/rico/PycharmProjects/iec104_server_fuzz/boofuzz-crash-bin
[10:29.51] # records: 0
[10:29.51] proc name: None
[10:29.51] log level: 1
[10:29.51] awaiting requests...
[10:29.55] updating target process name to './simple_server'
[10:29.55] updating stop commands to: [u'kill -SIGINT $(pidof simple_server)']
[10:29.55] updating start commands to: [u'/home/rico/iec60870/lib60870-master/lib60870-C/examples/cs104_server/simple_server']
[10:29.55] updating crash bin filename to 'boofuzz-crash-bin-2020-03-19T21-29-55'
[10:29.55] Starting target...
[10:29.55] starting target process
[10:29.55] done. waiting for start command to terminate.
APCI parameters:
t0: 10
t1: 15
t2: 10
t3: 20
k: 12
w: 8
Starting server failed!
[10:29.56] searching for process by name "./simple_server"
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/rico/.local/lib/python2.7/site-packages/boofuzz/utils/debugger_thread_simple.py", line 130, in run
self.spawn_target()
File "/home/rico/.local/lib/python2.7/site-packages/boofuzz/utils/debugger_thread_simple.py", line 115, in spawn_target
self.watch()
File "/home/rico/.local/lib/python2.7/site-packages/boofuzz/utils/debugger_thread_simple.py", line 166, in watch
for (pid, name) in _enumerate_processes():
File "/home/rico/.local/lib/python2.7/site-packages/boofuzz/utils/debugger_thread_simple.py", line 36, in _enumerate_processes
yield (pid, psutil.Process(pid).name())
File "/home/rico/.local/lib/python2.7/site-packages/psutil/__init__.py", line 346, in __init__
self._init(pid)
File "/home/rico/.local/lib/python2.7/site-packages/psutil/__init__.py", line 386, in _init
raise NoSuchProcess(pid, None, msg)
NoSuchProcess: psutil.NoSuchProcess no process found with pid 21574
Now this seems strange to me, because pid 21574 isn't the pid of the running server process. Does someone know more about this? Even wild guesses are appreciated!
If you need other info as well, I will gladly provide it.
I fixed the error by deleting the line
"proc_name": '/home/rico/iec60870/lib60870-master/lib60870-C/examples/cs104_server/simple_server'
from my fuzz script. I also had to make sure that the server is not already running when I start the fuzz script.
Now the server starts in the terminal that runs procmon.
I don't know if there is a better way to fix this, but at least procmon can do its job now.
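For reference, here is a sketch of the working process-monitor options as plain Python data, built from the exact commands shown in the procmon output above. The dict name procmon_options is only illustrative, not boofuzz's API:

```python
# The process-monitor options that made procmon behave, as a plain dict.
# "proc_name" is deliberately absent: with it set, procmon went looking for
# the target by name and hit psutil.NoSuchProcess, as shown above.
procmon_options = {
    "start_commands": [
        "/home/rico/iec60870/lib60870-master/lib60870-C/examples/cs104_server/simple_server"
    ],
    "stop_commands": ["kill -SIGINT $(pidof simple_server)"],
    # "proc_name": "./simple_server",   # <- the line that had to go
}
```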
I've set up WAL archiving already, and when I run the following query:
SELECT * FROM pg_stat_archiver;
the system gives me back this: archived_count is 0, last_archived_wal and last_archived_time are empty, failed_count is 40 (and keeps growing), last_failed_wal is always the same WAL file name, and the last failure time and stats reset time are set.
The log file:
2023-01-13 00:01:37.846 JST [5456] LOG:  archive command failed with exit code 1
2023-01-13 00:01:37.846 JST [5456] DETAIL:  The failed archive command was: copy "pg_wal\000000010000002300000063" "C:\server\archivedir\000000010000002300000063"
The system cannot find the path specified.
2023-01-13 00:01:37.848 JST [5456] WARNING:  archiving write-ahead log file "000000010000002300000063" failed too many times, will try again later
PostgreSQL creates the WAL files (16 MB each) and also creates the .ready files, but when I check the status (SELECT * FROM pg_stat_archiver;) only the failure counters move.
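"The system cannot find the path specified" from copy usually means the destination directory does not exist (or the PostgreSQL service account cannot reach it). A minimal Windows setup, adapted from the PostgreSQL documentation and using this question's C:\server\archivedir path, which must be created beforehand:

```
# postgresql.conf -- archive settings; make sure C:\server\archivedir
# exists and is writable by the PostgreSQL service account
archive_mode = on
archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"'  # Windows
```

After fixing the path, the archiver retries on its own and archived_count in pg_stat_archiver should start increasing.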
I have the following Eclipse version on Windows 10:
Version: 2020-09 (4.17.0)
Build id: 20200910-1200
I am using PyDev along with it.
In my code I am using Selenium to make a number of URL calls (web scraping). When a particular URL is not present, or at least not present in the way most of the URLs I am reading are, I get the following error:
Traceback (most recent call last):
File "C:\Users\foobar\eclipse-workspace\WeatherUndergroundUnderground\historical\BWI_Fetch.py", line 44, in <module>
main(city, month_date, start_year, end_year)
File "C:\Users\foobar\eclipse-workspace\WeatherUndergroundUnderground\historical\BWI_Fetch.py", line 22, in main
driver.get(city_url);
File "C:\Users\foobar\AppData\Local\Programs\Python\Python38\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 333, in get
self.execute(Command.GET, {'url': url})
File "C:\Users\foobar\AppData\Local\Programs\Python\Python38\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\foobar\AppData\Local\Programs\Python\Python38\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: Reached error page: about:neterror
Exception ignored in: <function Popen.__del__ at 0x0000019267429F70>
Traceback (most recent call last):
File "C:\Users\foobar\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 945, in __del__
self._internal_poll(_deadstate=_maxsize)
File "C:\Users\foobar\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 1344, in _internal_poll
if _WaitForSingleObject(self._handle, 0) == _WAIT_OBJECT_0:
OSError: [WinError 6] The handle is invalid
When I get this particular error, Eclipse is still running, but pushing the red stop button does not end the program. I can usually use the red stop button for just about any other Python program I have written, but this code/error seems to hang things. How can I end the process from within Eclipse?
The error in the stack trace is not really related to PyDev, so it is only really fixable in Selenium/Python (the error says that __del__ is trying to access a process which is already dead).
Now, related to the reason why PyDev wasn't able to kill it, I think that you probably have some process which spawned a subprocess and is not reachable anymore because the parent process died and thus it's not possible to create a tree to kill that process from the initial process launched in PyDev.
The actual code which does this in PyDev is: https://github.com/fabioz/winp/blob/master/native/winp.cpp#L208
I think it should be possible to use the Windows API to create a JobObject, AssignProcessToJobObject for spawned processes, and on kill also kill the JobObject so that all associated processes die with it; set up that way, this situation wouldn't arise, but this isn't currently done.
As a note, I usually keep an alias for taskkill /im python.exe /f (which kills all python.exe processes running on the machine), and that's what I reach for in such cases: I just kill every python.exe on the machine.
Note, though, that if you spawned some other process, say chrome.exe, in that process tree, that process must also be killed for the initial shell that launched Python to really be disposed.
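A minimal cross-platform sketch of that "kill the whole tree" idea, using only the standard library. kill_tree and the sleeping child process are illustrative, not PyDev's actual implementation:

```python
import os
import signal
import subprocess
import sys

def kill_tree(proc: subprocess.Popen) -> None:
    """Best-effort kill of a process and its descendants.
    On Windows, taskkill /T walks the process tree (the case discussed
    above); on POSIX we rely on the child leading its own process group."""
    if os.name == "nt":
        subprocess.run(
            ["taskkill", "/PID", str(proc.pid), "/T", "/F"],
            capture_output=True,
        )
    else:
        try:
            os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        except ProcessLookupError:
            pass  # already gone

# Start the child in its own session so killpg reaches its descendants too.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    start_new_session=(os.name != "nt"),
)
kill_tree(child)
child.wait()  # reap it so no zombie (or invalid handle) is left behind
```

Calling wait() after the kill is what prevents the "Exception ignored in Popen.__del__" noise seen in the question: the handle is cleaned up deterministically instead of during garbage collection.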
This error message...
Exception ignored in: <function Popen.__del__ at 0x0000019267429F70>
...implies that the builtins module was destroyed before __del__ ran while the Popen object was being garbage-collected.
Hence PyDev is no longer able to communicate with the relevant Python modules. As a result, the Stop button stops functioning and raises the error:
OSError: [WinError 6] The handle is invalid
I just bought a new server that runs WHM/cPanel, same as the old one, and I'm trying to use the built-in tool to migrate multiple accounts/packages over. I'm able to connect to the other server; it lists all the packages and accounts, I select them all, and start the process.
Then it goes through each package and account and fails to copy anything over. This is the error given for a sample account:
Command failed with exit status 255
...etc...
Copying Suspension Info (if needed)...Done
Copying SSL certificates, CSRs, and keys...Privilege de-escalation before loading datastore either failed or was omitted. at /usr/local/cpanel/Cpanel/SSLStorage.pm line 1159
Cpanel::SSLStorage::_load_datastore('Cpanel::SSLStorage::Installed=HASH(0x3c72300)', 'lock', 1) called at /usr/local/cpanel/Cpanel/SSLStorage.pm line 1244
Cpanel::SSLStorage::_load_datastore_rw('Cpanel::SSLStorage::Installed=HASH(0x3c72300)') called at /usr/local/cpanel/Cpanel/SSLStorage/Installed.pm line 634
Cpanel::SSLStorage::Installed::_rebuild_records('Cpanel::SSLStorage::Installed=HASH(0x3c72300)') called at /usr/local/cpanel/Cpanel/SSLStorage.pm line 308
Cpanel::SSLStorage::__ANON__() called at /usr/local/cpanel/Cpanel/SSLStorage.pm line 1330
Cpanel::SSLStorage::_execute_coderef('Cpanel::SSLStorage::Installed=HASH(0x3c72300)', 'CODE(0x49ee958)') called at /usr/local/cpanel/Cpanel/SSLStorage.pm line 310
Cpanel::SSLStorage::rebuild_records('Cpanel::SSLStorage::Installed=HASH(0x3c72300)') called at /usr/local/cpanel/scripts/pkgacct line 2888
Script::Pkgacct::__ANON__('Cpanel::SSLStorage::Installed=HASH(0x3c72300)') called at /usr/local/cpanel/scripts/pkgacct line 2913
Script::Pkgacct::backup_ssl_for_user('jshea89', '/home/webwizard/cpmove-jshea89') called at /usr/local/cpanel/scripts/pkgacct line 532
Script::Pkgacct::script('Script::Pkgacct', '--use_backups', '--skiphomedir', 'jshea89', '/home/webwizard', '--split', '--compressed', '--mysql', 5.5, ...) called at /usr/local/cpanel/scripts/pkgacct line 111
==sshcontroloutput==
sh-4.1# exit $RET
exit
sh-4.1$ exit $RET
exit
sshcommandfailed=255
A bit of a hack, but I went to /usr/local/cpanel/Cpanel/SSLStorage.pm line 1244 and commented out the Carp.
Accounts from my old dead server are now archiving :)
After some research, I was able to determine that this was caused by incorrect ownership of the /home/user/ssl directory and its subdirectories. Someone had set the owner and group to root:root, when in fact it should have been user:user.
Hopefully this helps some of you solve the problem!
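The repair itself is one recursive chown. The sketch below runs against a scratch directory so it is safe to try anywhere; on the real server you would run it as root against /home/user/ssl with the actual account name:

```shell
# On the real server (as root):  chown -R user:user /home/user/ssl
# The scratch version below exercises the same commands harmlessly.
ACCOUNT=$(id -un)                     # stand-in for the cPanel account name
GRP=$(id -gn)                         # stand-in for the account's group
SSLDIR="$(mktemp -d)/ssl"             # stand-in for /home/user/ssl
mkdir -p "$SSLDIR"
chown -R "$ACCOUNT:$GRP" "$SSLDIR"    # undo the stray root:root ownership
stat -c '%U:%G' "$SSLDIR"             # prints owner:group, e.g. user:user
```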
I used to have all my Flask app code and Celery code in one file, and it worked fine with supervisor. However, it got very hairy, so I split my tasks out into celery_tasks.py, and this problem occurred.
In my project directory, I can start celery manually with the following command
celery -A celery_tasks worker --loglevel=INFO
However, because this is a server, I need Celery to run as a daemon in the background.
But it shows the following error when I call sudo supervisorctl restart celeryd:
celeryd: ERROR (abnormal termination)
and the log said:
Traceback (most recent call last):
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/bin/celery", line 9, in <module>
load_entry_point('celery==3.0.19', 'console_scripts', 'celery')()
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/local/lib/python2.7/site-packages/celery/__main__.py", line 14, in main
main()
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/local/lib/python2.7/site-packages/celery/bin/celery.py", line 957, in main
cmd.execute_from_commandline(argv)
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/local/lib/python2.7/site-packages/celery/bin/celery.py", line 901, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/local/lib/python2.7/site-packages/celery/bin/base.py", line 185, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/local/lib/python2.7/site-packages/celery/bin/base.py", line 300, in setup_app_from_commandline
self.app = self.find_app(app)
File "/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/local/lib/python2.7/site-packages/celery/bin/base.py", line 318, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
I used the following config.
[program:celeryd]
command = celery -A celery_tasks worker --loglevel=INFO
user=peerapi
numprocs=4
stdout_logfile = <path to log>
stderr_logfile = <path to log>
autostart = true
autorestart = true
environment=PATH="<path to my project>"
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
My code also initializes Celery properly:
celery = Celery('celery_tasks', broker='amqp://guest:guest@localhost:5672//',
                backend='amqp')
celery.config_from_object(celeryconfig)
and my celeryconfig.py is working normally
CELERY_TASK_SERIALIZER='json'
CELERY_RESULT_SERIALIZER='json'
CELERY_TIMEZONE='America/Los_Angeles'
CELERY_ENABLE_UTC=True
Any clue?
It looks like your application can't find your celeryconfig; this happens when, for example, your CWD is not set to the project directory. Try something like:
cd app_path; celeryd ...
You also need to set up the environment:
# local settings
PATH=/home/ubuntu/envs/app/bin:$PATH
PYTHONHOME=/home/ubuntu/envs/app/
PYTHONPATH=/home/ubuntu/projects/app/
Should work.
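In supervisor terms, the "cd first" fix maps to the directory option, which makes supervisord chdir before exec'ing the command. A sketch of the adjusted config, reusing the virtualenv paths visible in the traceback above (substitute your own layout):

```ini
[program:celeryd]
; chdir here before exec, so "-A celery_tasks" is importable
directory = /srv/www/learningapi.stanford.edu/peerAPI
command = /srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/bin/celery -A celery_tasks worker --loglevel=INFO
user = peerapi
environment = PATH="/srv/www/learningapi.stanford.edu/peerAPI/peerAPIenv/bin"
autostart = true
autorestart = true
```

Using the virtualenv's absolute celery path in command also sidesteps PATH problems entirely.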
We use Grid Engine (specifically, Open Grid Scheduler 2011.11p1) as our batch-queuing system. I just added an exec host named host094, but when jobs are submitted there, errors are raised and the job status is Eqw. The log in $SGE_ROOT/default/spool/host094/messages says:
shepherd of job 119232.1 exited with exit status = 26
can't open usage file active_jobs/119232.1/usage for job 119232.1: No such file or directory
What does this mean?