Crashes not displayed in Sulley fuzzing framework on fuzzer localhost:26000 - fuzzing

Question: Is the Sulley fuzz control supposed to update in real time?
Background:
It appears that my procmon script is not recording crashes in the crashbin file. I set up the Sulley fuzzing framework step by step from the install instructions, but I am not able to see access violations in the fuzzer script output or in the Sulley web app. I am fuzzing an application given to me in a course, and the application is crashing as expected. I have fuzzed multiple programs to test Sulley and triggered many crashes, yet the debugger never displays access violations. I have Sulley and PaiMei set up "perfectly" and can import all libraries from each folder location and globally. My fuzz script is configured correctly: all connections to the Sulley scripts succeed, and I get info, debug, and warning messages at log level 10. My crashbin is not growing when the application crashes, and I would appreciate any help fixing the issue.
Scripts run on the fuzzed machine:
python network_monitor.py -l 10 -d 0 -f "port 80" -P audits --port 26001
python process_monitor.py --port 26002 -l 10 -c audits/master_server.crashbin -p "application.exe"
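For reference, this is roughly how a Sulley fuzz script wires those two monitors in (a minimal sketch, not my real script; the target address, session filename, and the trivial request are placeholders):

from sulley import *

# a trivial request so the session has something to send
s_initialize("request")
s_string("GET / HTTP/1.1\r\n\r\n")

sess = sessions.session(session_filename="audits/app.session")
target = sessions.target("10.0.0.5", 80)           # fuzzed machine (address assumed)
target.netmon  = pedrpc.client("10.0.0.5", 26001)  # network_monitor.py
target.procmon = pedrpc.client("10.0.0.5", 26002)  # process_monitor.py (owns the crashbin)

sess.add_target(target)
sess.connect(s_get("request"))
sess.fuzz()

The process name and crashbin path are supplied on the process_monitor command line above, so they don't need to be repeated here.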
Installed libraries:
pydasm
pydbg
pcapy
impacket
sulley
tornado
flask
pedrpc
Installation instructions:
https://github.com/OpenRCE/sulley/wiki/Windows-Installation
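Once crashes do get recorded, Sulley ships a helper in its utils directory for inspecting the crashbin; run against the file from the process_monitor command above, it would look like:

python utils/crashbin_explorer.py audits/master_server.crashbin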
Following a guide, I fuzzed vulnserver's LTER /.:AAA and below is an output of the PEDRPC results.

Related

How can I launch postgres server headless (without terminal) on Windows?

Using Postgres 9.5 and the libpqxx C++ bindings, I want to launch a copy of Postgres that is not installed on the user's machine, but is instead packaged up in my application directory.
Currently, I am using pg_ctl.exe to start and stop the server; however, when we do this, pg_ctl.exe seems to launch postgres.exe in a new terminal window.
I want it to launch postgres.exe in a headless state, but can't work out how.
I have tried enabling/disabling the logging collector, setting the logging method to a csv file (instead of stdout/stderr), and a couple of other logging related things, but I don't think the issue is the logging.
I have also tried running postgres.exe manually (without pg_ctl) and can get it to run headless by spawning it as a background process and redirecting the logs, but I would prefer to use the "pg_ctl start" API for the "wait for startup" (-w) and "timeout" (-t) options it provides.
I believe you won't be able to do that with pg_ctl.
It is perfectly fine to start PostgreSQL directly through the server executable postgres.exe. Alternatively, you can use pg_ctl register to create a service and start the service.
In my use case, I was able to resolve the issue by running pg_ctl.exe using
CreateProcess, and providing the dwCreationFlags CREATE_NEW_PROCESS_GROUP | CREATE_NO_WINDOW.
I was originally using CREATE_NEW_PROCESS_GROUP | DETACHED_PROCESS, but DETACHED_PROCESS still allowed a postgres terminal to appear. This is because DETACHED_PROCESS spawns pg_ctl without a console, but any process that inherits stdin/stdout from pg_ctl will try to use its console, and since there isn't one, a new one is spawned. CREATE_NO_WINDOW, by contrast, launches the process with a conhost.exe whose console has no window. When the executables spawned by pg_ctl write to the terminal, they write to that windowless console instead.
I am now able to run pg_ctl from code with no console appearing.
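For anyone driving this from Python rather than calling the Win32 API directly, the same creation flags can be passed through the standard subprocess module (a minimal sketch; the pg_ctl and data directory paths are made-up placeholders):

import subprocess

# Windows-only flags (subprocess.CREATE_NO_WINDOW requires Python 3.7+),
# equivalent to the dwCreationFlags described above
flags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NO_WINDOW

subprocess.run(
    [r"C:\myapp\pgsql\bin\pg_ctl.exe", "start",
     "-D", r"C:\myapp\pgdata",  # data directory (placeholder path)
     "-w", "-t", "30"],         # wait for startup, with a 30 s timeout
    creationflags=flags,
    check=True,
)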

How to run Google Assistant library (on AIY kit), upon startup of Raspberry Pi?

We set up a voice kit using Raspberry Pi (using "the MagPi essentials AIY Projects" manual). We are able to enable Google Assistant using the command "src/assistant_library_demo.py" in the dev terminal, after Raspberry Pi starts up. We would like to embed the voice kit in a stuffed animal with a portable power supply (i.e., used to charge cell phone on the go). But when the portable power supply is charged, the Raspberry Pi resets. That requires us to go back into the Raspberry Pi, open the dev terminal, and run the Google Assistant file.
My question: Is it possible to run a startup script that automatically runs Google Assistant upon Raspberry Pi starting up? How to do this?
I ended up creating a crontab job that runs after a 10-second wait. Starting right at boot didn't give it enough time for the internet connection to come up fully.
In terminal type:
crontab -e
Choose an option if it asks how you want to open/edit the file. Then at the bottom put:
@reboot sleep 10 && /home/pi/pathtofile > /home/pi/cronlog 2>&1
Save the file and reboot or pull the cable out and plug it back in. The cronlog helped me troubleshoot this whole process and get feedback on why it didn't work.
Take a look at this page. It tells you how to set up a service which will run automatically.
If the link has gone bad, here is a short explanation of it:
Create a file called my_assistant.service in the src directory, and put in the following code:
[Unit]
Description=My awesome assistant app
[Service]
Environment=XDG_RUNTIME_DIR=/run/user/1000
ExecStart=/bin/bash -c 'python3 -u src/my_assistant.py'
WorkingDirectory=/home/pi/AIY-projects-python
Restart=always
User=pi
[Install]
WantedBy=multi-user.target
Where the file says src/my_assistant.py, replace my_assistant with your program's filename. Now go to the folder the .service file is in and run sudo mv my_assistant.service /lib/systemd/system/ to move the file into the systemd services folder. You can then manage the service with the following commands:
Enable the service: sudo systemctl enable my_assistant.service
Disable it: sudo systemctl disable my_assistant.service
Start it (runs it once; enabling makes it run on startup): sudo service my_assistant start
Stop it: sudo service my_assistant stop
See the logs, when the program was started, and whether an error occurred: sudo service my_assistant status

Using supervisor to run a flask app

I am deploying my Flask application on WebFaction. I am using flask-socketio, which has led me to deploy it as a Custom Websocket App (listening on a port). Flask-socketio's documentation instructs me to deploy the app by starting the server with the call socketio.run(app, port=<port_listening_on>) in my main Python script. I have installed eventlet on the server, so socketio.run should run the app on the eventlet web server.
I can run python <app>.py and all works great: the server runs, I can view it at the domain, sockets work, etc. My problems start when I try to turn this into a long-running process. I've been advised to use supervisor, which I have installed and configured for my web app following these instructions: https://community.webfaction.com/questions/18483/how-do-i-install-and-use-supervisord-to-control-long-running-processes
The problem is that once I add the command for supervisor to run my app, it fails with:
Exited too quickly
My log states the above error as well as:
(exit status 1; not expected)
In my supervisor config file I currently have the following program config:
[program:<prog_name>]
command=/usr/bin/python2.7 /home/<user>/webapps/<app_name>/<app>.py
autostart=true
autorestart=true
I have tried removing and adding settings, but it all leads to the same FATAL error.
This is what part of my supervisor config looks like; I'm using gunicorn to run my Flask app.
Also, I'm logging errors to a file from the supervisor config, so if you do that, it might help you see why it's not starting correctly.
[program:gunicorn]
command=/juzten/venv/bin/gunicorn run:app --preload -p rocket.pid -b 0.0.0.0:5000 --access-logfile "-"
directory=/juzten/app-folder-name
user=juzten
autostart=true
autorestart=unexpected
stdout_logfile=/juzten/gunicorn.log
stderr_logfile=/juzten/gunicorn.log
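For context, the run:app in the command above means gunicorn imports a module named run.py and serves the Flask instance bound to the name app. A minimal sketch of such a module (names assumed from the config above):

# run.py -- the module that gunicorn's "run:app" points at
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"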

Issues using Snort on Ubuntu

I installed Snort on Ubuntu 14.04 but am having trouble seeing the alerts. I also want to log the alerts to a GUI, but I am having trouble with the MySQL database as well. Any guidance would be appreciated.
You can test your installation by running snort -v. Make sure you run Snort as the root user, or else you will get an error like the one shown below.
Running in packet dump mode
--== Initializing Snort ==--
Initializing Output Plugins!
ERROR: Failed to lookup interface: no suitable device found. Please specify one with -i switch
Fatal Error, Quitting..
If snort -v works, then try running basic IDS mode using
snort -d -l ./log -c snort.conf
where log is the directory where you want to store the log and alert files, and snort.conf is the name of your Snort configuration file, which should contain your Snort rules.
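For example, a minimal test rule you could put in the rules file that snort.conf includes (a generic example, not from the original post):

alert icmp any any -> any any (msg:"ICMP test"; sid:1000001; rev:1;)

Pinging the machine should then produce an alert in the log directory.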
If you don't specify an output directory for the program, it will default to /var/log/snort.
Here is the manual: https://www.snort.org/documents/snort-users-manual

Display Postgres server logs output in terminal and record to logs at same time

I'm running Postgres 9.1 (Homebrew installation on Mac OSX) and I'd like to monitor my postgres server more closely.
My question relates to logs. I'd like to have the logs display in a terminal pane. Here's what the Postgres docs say about the logs:
"On Unix-like systems, by default, the server's standard output and standard error are sent to pg_ctl's standard output (not standard error). The standard output of pg_ctl should then be redirected to a file or piped to another process such as a log rotating program like rotatelogs; otherwise postgres will write its output to the controlling terminal (from the background) and will not leave the shell's process group. On Windows, by default the server's standard output and standard error are sent to the terminal. These default behaviors can be changed by using -l to append the server's output to a log file. Use of either -l or output redirection is recommended."
So, when I get my postgres server running with the following:
pg_ctl start -D /usr/local/var/postgres
The logs display in the terminal window. When I run:
pg_ctl start -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log
the logs go to my logfile and don't display in the terminal.
In short, it would be great if anyone could tell me what command to use, after I've directed logs to the file (with the second command), to make the logs also appear at the command line. It helps when I'm developing (in Django) to watch the SQL statements execute in real time.
You could watch the log with the command:
tail -f /usr/local/var/postgres/server.log
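If you want both behaviors at once, you can combine the two commands from this thread: start the server with -l so everything is recorded, then keep a tail running in another terminal pane:

pg_ctl start -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log
tail -f /usr/local/var/postgres/server.log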
Using Ubuntu 18.04 with PostgreSQL 10, I was able to find the logs with:
less /var/log/postgresql/postgresql-10-main.log
For CentOS 7 and PostgreSQL 12, the logs are in:
/var/lib/pgsql/12/data/log