How to erase coverage data before running test cases? - coverage.py

I use the coverage API in a WSGI project. Even when I don't run any test cases, the coverage report reaches 30%. So I want to erase the coverage data before running the test cases, i.e. after the WSGI project has started up.
I set up a socket port in control.py's start(). After running the start command gunicorn -c guniconf.py --env myapp.wsgi, I send "clean" to port 6300, but I couldn't remove the in-memory data that had already been collected.
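For what it's worth, one pattern that may achieve this, assuming the WSGI process holds a single coverage.Coverage() instance named cov that was started at startup: stop the collector, erase, and start again inside the socket handler, since erase() on its own does not necessarily reset a collector that is still running (behaviour varies by coverage.py version). A minimal sketch:
import coverage

cov = coverage.Coverage()
cov.start()  # at WSGI startup

def handle_clean_command():
    # Hypothetical handler for the "clean" message on port 6300:
    # stop collection, discard everything gathered so far (on disk
    # and in memory), then start collecting again from a clean slate.
    cov.stop()
    cov.erase()
    cov.start()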

Unable to debug PyCharm Remote Python scripts (docker-compose)

Using: PyCharm 2021.3.2 (Professional Edition)
Situation:
- I have a docker-compose.yml based deployment
- It contains one image that's deployed with different behaviour based on environment variables
What I'm finding is that the built-in Remote Python debugging works when the
image runs gunicorn (i.e. I can set breakpoints and pause the program),
but I cannot debug a plain Python script that exists in the same image (i.e. I cannot set breakpoints or pause the program).
I'm using separate Python Interpreters for each service in the docker-compose file,
as recommended. Each service uses the same image, controlled by
variations in the environment variables. I have an interpreter for the
gunicorn app in the docker-compose file, another for the Python script.
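A simplified sketch of that compose layout (service names, image, and variables are placeholders, not the real file):
version: "3"
services:
  web:
    image: myimage:latest          # same image for both services
    command: gunicorn app.app:app
    environment:
      - APP_MODE=web               # behaviour switched via env vars
  script:
    image: myimage:latest
    command: python test.py
    environment:
      - APP_MODE=script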
Summary of the gunicorn run-config
Script path: /usr/local/bin/gunicorn
Parameters: app.app:app
This runs, as expected, and can be debugged (i.e. I can set
breakpoints and pause the app).
Summary of the python script/module config
This is the same image, but instead of running gunicorn with arguments
I'm just running some python that's in the image.
Script path: test.py
Parameters:
I can launch the app in its container and see the container logs.
But, unlike the gunicorn run-config above,
debugging does not work. In this container I cannot set breakpoints
or pause execution.
I've tried everything, but it feels as though scripts/modules can't be
debugged in my image whereas gunicorn can.
I have other containers that run flask and celery as the main program and
they too can be debugged. But my attempts at debugging 'raw' Python scripts all fail.
This is the latest PyCharm pro and I am perplexed as to why the image can be
debugged when running gunicorn but not when running a python script.
Has anyone else encountered this?
What am I doing wrong?

How to run a ZAP scan from the command line?

I am running a pen test on an asp.net core web app using OWASP ZAP. When I run the test using the Windows app of OWASP ZAP, the tests run fine and give results, but when I try to run the tests from the command line I see this exception:
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x000001CCBD907D60>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it
Why is this happening and how to correct this?
I changed the ZAP_PATH environment variable to the folder where zap.sh is located. Now I am getting a different exception:
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
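(Worth noting: WinError 10061 is "connection refused", which generally means no ZAP daemon is listening on the API port the client is targeting. Before driving ZAP from a script, it usually has to be started in daemon mode first; the host and port below are examples, not required values.)
java -jar zap-2.10.0.jar -daemon -host 127.0.0.1 -port 8080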
Following the documentation here and here, I managed to run the basic scan from the Windows command line.
From the directory where ZAP is installed, in my case C:\Program Files\OWASP\Zed Attack Proxy, run the following command:
PS C:\Program Files\OWASP\Zed Attack Proxy> java -jar zap-2.10.0.jar -cmd -quickurl http://example.com/ -quickprogress
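If you also want the report saved to a file, the quick-scan options include -quickout (the output path here is just an example):
PS C:\Program Files\OWASP\Zed Attack Proxy> java -jar zap-2.10.0.jar -cmd -quickurl http://example.com/ -quickprogress -quickout C:\temp\zap-report.html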
You can use the ZAP docker image to run the test:
Baseline Scan
docker run -t owasp/zap2docker-stable zap-baseline.py -t http://google.com
Full Scan
docker run -t owasp/zap2docker-stable zap-full-scan.py -t http://google.com
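To get the scan report out of the container, you can mount a host directory at /zap/wrk (the path the ZAP scan scripts write to) and pass -r, e.g.:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://google.com -r report.html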

Crashes not displayed in sulley fuzzing framework on fuzzer localhost:26000

Question: Is the sulley fuzz control supposed to update in real time?
Background:
It appears that my procmon script is not recording crashes in the crashbin file. I have set up the sulley fuzzing framework step by step following the install instructions, but I am not able to see access violations in the fuzzer script output or in the sulley web app. I am fuzzing an application given to me in a course, and the application is crashing as expected. I have fuzzed multiple programs to test sulley and produced many crashes, but the debugger is not displaying access violations. I have sulley and PaiMei set up correctly and can import all of the required libraries (listed below) both from their folder locations and globally. My fuzz script is configured correctly: all connections between the sulley scripts succeed, and at log level 10 I get info, debug, and warning output. Still, my crashbin does not grow when the application crashes, and I would appreciate any help fixing this.
Scripts run on the fuzzed machine:
python network_monitor.py -l 10 -d 0 -f "port 80" -P audits --port 26001
python process_monitor.py --port 26002 -l 10 -c audits/master_server.crashbin -p "application.exe"
pydasm
pydbg
pcapy
impacket
sulley
tornado
flask
pedrpc
Installation instructions:
https://github.com/OpenRCE/sulley/wiki/Windows-Installation
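For reference, crashes only end up in the crashbin if the session script attaches the process monitor over PEDRPC; a minimal sketch of that wiring (host, target port, and process name are placeholders) looks like:
from sulley import sessions, pedrpc

sess = sessions.session(session_filename="audits/app.session")
target = sessions.target("192.168.1.10", 80)

# attach the network monitor (26001) and process monitor (26002)
target.netmon = pedrpc.client("192.168.1.10", 26001)
target.procmon = pedrpc.client("192.168.1.10", 26002)
target.procmon_options = {"proc_name": "application.exe"}

sess.add_target(target)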
Following a guide, I fuzzed vulnserver's LTER /.:AAA case, and below is an output of the PEDRPC results.

Can I run aws-xray on the same ECS container?

I don't want to have to deploy a whole separate ECS service just to enable X-Ray. I'm hoping I can run X-Ray on the same docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services that serve only analytical/logging functions; I already have a logstash container I'm not happy about. My feeling is that apps themselves should be able to do this sort of thing.
While we have the Dockerhub image of the X-Ray Daemon, you can absolutely run the daemon in the same docker container as your application - that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is to use supervisord (see the link for an example of that), but I ended up just making a very simple script:
# start.sh: launch the X-Ray daemon in the background, then run Tomcat in the foreground
/usr/bin/xray &
$CATALINA_HOME/bin/catalina.sh run
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
# unzip is needed to unpack the daemon archive; refresh the package index first
RUN apt-get update && apt-get install -y unzip
# download the X-Ray daemon and put the binary on the PATH
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
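For reference, a minimal supervisord.conf for that layout might look like the following; the Tomcat path assumes the default CATALINA_HOME of the tomcat base image:
[supervisord]
nodaemon=true

[program:xray]
command=/usr/bin/xray
autorestart=true

[program:tomcat]
command=/usr/local/tomcat/bin/catalina.sh run
autorestart=true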

Using supervisor to run a flask app

I am deploying my Flask application on WebFaction. I am using flask-socketio, which has led me to deploying it as a Custom Websocket App (listening on port). Flask-socketio's docs instruct me to deploy my app by starting the server with the call socketio.run(app, port=<port_listening_on>) in my main python script. I have installed eventlet on the server, so socketio.run should run the app on the eventlet web server.
I can call python < app >.py and all works great – server runs, can view it at the domain, sockets working, etc. My problems start when I attempt to turn this into a long running process. I've been advised to use supervisor which I have installed and configured on my webapp following these instructions: https://community.webfaction.com/questions/18483/how-do-i-install-and-use-supervisord-to-control-long-running-processes
The problem is that once I actually add the command for supervisor to run my app, it errors with:
Exited too quickly
My log states the above error as well as:
(exit status 1; not expected)
In my supervisor config file I currently have the following program config:
[program:<prog_name>]
command=/usr/bin/python2.7 /home/<user>/webapps/<app_name>/<app>.py
autostart=true
autorestart=true
I have tried removing and adding settings, but it all leads to the same FATAL error.
Here is what part of my supervisor config looks like; I'm using gunicorn to run my flask app.
Also, I'm logging errors to a file from the supervisor config, so if you do that, it might help you see why it's not starting correctly.
[program:gunicorn]
command=/juzten/venv/bin/gunicorn run:app --preload -p rocket.pid -b 0.0.0.0:5000 --access-logfile "-"
directory=/juzten/app-folder-name
user=juzten
autostart=true
autorestart=unexpected
stdout_logfile=/juzten/gunicorn.log
stderr_logfile=/juzten/gunicorn.log
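After changing the config, supervisor needs to pick it up, e.g.:
supervisorctl reread
supervisorctl update
supervisorctl status
One caveat specific to flask-socketio: its deployment docs run gunicorn with the eventlet worker class so websockets keep working, so the command above would become something like (the run:app module name is yours to adjust):
command=/juzten/venv/bin/gunicorn --worker-class eventlet -w 1 run:app -b 0.0.0.0:5000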