How to run OWASP Zed Attack Proxy (ZAP)'s zap-api-scan.py without requiring Docker

Is there a way to run zap-api-scan.py outside of docker?
I tried the steps below to run this Python script outside of Docker, and they completed successfully. However, the script itself checks whether it is running in Docker and, if it is not, starts ZAP in Docker itself.
git clone https://github.com/zaproxy/zaproxy.git
easy_install six
pip install python-owasp-zap-v2.4
pip uninstall chardet
pip install "chardet==3.0.2"
python zaproxy/docker/zap-api-scan.py

Answered on the ZAP User Group: https://groups.google.com/d/msg/zaproxy-users/ITE1W4V0H1Y/UFO6teGrBwAJ
Basically you just need to edit or comment out the parts that start ZAP in docker and ensure that your ZAP instance is configured in the same way the script sets ZAP up.
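If it helps, a minimal sketch of starting your own ZAP instance for the edited script to talk to (the port and API key here are placeholders, not values the script prescribes; check the script's source for the options it actually sets):

```shell
# Start a local ZAP daemon (headless) for the script to connect to.
# Port and API key are placeholders - match them to whatever your
# edited copy of zap-api-scan.py expects.
./zap.sh -daemon -port 8090 -config api.key=changeme
```

-daemon runs ZAP without the UI, and -config lets you set any ZAP configuration option from the command line.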

Related

How to start Github Action (self-runner) on own machine with Ubuntu and Docker?

I successfully downloaded and started the GitHub runner on my own Ubuntu machine. GitLab has a nice runner installer, but GitHub provides only a package of files with a run.sh script.
When I start run.sh, it works and the GitHub runner starts listening for actions.
But I couldn't find anywhere in the documentation how to correctly integrate the package into Ubuntu so that run.sh starts automatically after Ubuntu boots.
I also didn't find whether any steps are needed to secure the runner from the internet.
...and I also don't know where I can set it up to run parallel actions, or where I can configure resource limits, etc.
Thanks a lot for any help.
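For what it's worth, the runner package ships with a svc.sh helper (next to run.sh) that installs the runner as a systemd service, which takes care of starting it automatically on boot:

```shell
# From the runner's directory, after ./config.sh has registered it:
sudo ./svc.sh install   # creates a systemd unit for this runner
sudo ./svc.sh start     # starts it now; systemd will start it on boot
sudo ./svc.sh status    # checks that it is running
```

On parallel jobs: a single runner process executes one job at a time, so the usual approach is to register several runners on the same machine, each in its own directory.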

Use init container for running commands in the actual pod

I need to install some libraries in my pod before it starts working as expected.
My use case: I need some libraries that will support SMB (samba), and the image that I have to use does not have it installed.
Unfortunately, exec'ing into the actual pod and running commands there does not seem to be a very good idea.
Is there a way by which I can use an init-container to install libsmbclient-dev in my ubuntu pod?
Edit: Some restrictions in my case.
I use a helm chart to install my app (nextcloud). So I guess I cannot use a custom image (as far as I know, we cannot use our own images with an existing helm chart). This would have been the best solution.
I cannot run commands in the Kubernetes values.yaml since I do not use kubectl to install my app. Also, I need to restart apache2 after installing the library, and unfortunately restarting apache2 restarts the pod, effectively making the whole installation meaningless.
Since the nextcloud helm chart allows the use of initContainers, I wondered if that could be used, but as far as I understand initContainers, this is not possible (?).
You should build your own container image - e.g. with docker - and push it to a container registry that is accessible from your cluster, e.g. Docker Hub, AWS ECR, Google Artifact Registry, ...
First install docker (https://docs.docker.com/get-docker/)
Create an empty directory and change into it.
Then create a file Dockerfile with the following content:
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libsmbclient-dev \
&& rm -rf /var/lib/apt/lists/*
Execute
docker build -t myimage:latest .
This will download the Ubuntu base image and build a new container image in which the commands from the RUN statement are executed. The image name will be myimage and the tag will be latest.
Then push your image with docker push to your appropriate repository.
See also Docker best practices:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
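A sketch of the push step with placeholder names (replace myregistry with your Docker Hub user or registry host):

```shell
docker tag myimage:latest myregistry/myimage:latest
docker push myregistry/myimage:latest
```

Most charts then let you point at the custom image via values, typically keys like image.repository and image.tag; check the chart's values.yaml for the exact names.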

Docker-compose up not executing updated Dockerfile

I have made edits to a Dockerfile to install some PHP modules from PECL and packages via apt-get. To my disappointment, none of them seem to have worked.
I then tried to test my execution process by adding the string test to my Dockerfile, running docker-compose down on all my containers, and then calling docker-compose up -d to see whether the Dockerfile gets executed, but my containers loaded with no complaints about the test string.
Below is my code:
test
RUN pecl update-channels;
RUN pecl install memcache;
RUN service memcached start
RUN apt-get install memcached -y
I have manually typed each one of those commands (with the exception of test, of course) and everything worked as expected. I then put the commands into my Dockerfile so I don't have to manually execute them. Which is where this issue began.
What am I missing?
A more extended answer to the above.
Roughly speaking, docker-compose up starts containers from the images that already exist; it does not rebuild them. docker-compose up --build rebuilds the images first. Therefore, after adding new services, packages, etc. to the Dockerfile, you need to rebuild the image.
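Concretely, assuming a service named web in your docker-compose.yml (the name is an example), a rebuild looks like:

```shell
docker-compose build web             # rebuild the image from the edited Dockerfile
docker-compose build --no-cache web  # or force a rebuild, ignoring cached layers
docker-compose up -d                 # recreate the container from the new image
```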

Install MongoDB on Manjaro

I'm facing difficulties installing the MongoDB community server on Manjaro Linux.
There isn't official documentation on how to install it on Arch-based systems, and pacman can't find it since it is only available in the AUR.
Has anyone ever tried to install it?
Here is what I did to install.
As the package is not available in the official Arch repositories and can't be installed using pacman, you need to follow a few steps to install it.
First, you need the URL of the AUR repo with prebuilt binaries. At the time of writing, it was https://aur.archlinux.org/mongodb-bin.git
Simply clone the repo in your home directory or anywhere else. Do git clone https://aur.archlinux.org/mongodb-bin.git, then head to the cloned directory, cd mongodb-bin.
Now, all you need to do is run the makepkg -si command to make the package. The -s flag will handle the dependencies for you and the -i flag will install the package.
After makepkg finishes its execution, don't forget to start mongodb.service. Run systemctl start mongodb and if needed enable it with systemctl enable mongodb.
Type mongo in the terminal and if the Mongo Shell runs you are all set.
Later edit (8.2.2021): This package is now available in AUR.
It is available in the AUR, so you can view it with pamac using the -a flag,
eg.
pamac search -a mongodb-bin
pamac info -a mongodb-bin
And then build and install it (this can be done after manually cloning too) with:
pamac build mongodb-bin
Note that there's also a package named mongodb, but mongodb-bin is a newer release (you can check the version numbers with the search or info arguments).
I've been using mongodb via docker for a couple of years.
In my experience, it's easier than installing the regular way. (assuming you already have docker installed)
1. Ensure you have docker installed
If you don't already have it, you can install via pacman/pamac, because it's in the official Arch/Manjaro package repositories. The easiest way is to run the following command:
sudo pacman -S docker
2. Run a single docker command
sudo docker run -d -p 27017:27017 -v ~/mongodb_data:/data/db mongo
This command will run mongodb on port 27017 and place its data files into the folder ~/mongodb_data.
If you're running this command for the first time, it will also download all the required files.
Now you're successfully running a local instance of mongodb, and you can connect to it with your favorite db management tool or from your code.
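If you'd rather not retype the docker run flags each time, the same setup as a docker-compose file (a sketch; adjust the volume path to taste):

```yaml
# docker-compose.yml - equivalent of the docker run command above
services:
  mongo:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ~/mongodb_data:/data/db
```

Then sudo docker-compose up -d starts it.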

Docker workflow for scientific computing

I'm trying to imagine a workflow that could be applied on a scientific work environment. My work involves doing some scientific coding, basically with Python, pandas, numpy and friends. Sometimes I have to use some modules that are not common standards in the scientific community and sometimes I have to integrate some compiled code in my chain of simulations. The code I run is most of the time parallelised with IPython notebook.
What do I find interesting about docker?
The fact that I could create a Docker image containing my code and its working environment. I could then send the image to my colleagues without asking them to change their work environment, e.g., install an outdated version of a module, just so that they can run my code.
A rough draft of the workflow I have in mind goes something as follows:
Develop locally until I have a version I want to share with somebody.
Build a Docker image, possibly with a hook from a git repo.
Share the image.
Can somebody give me some pointers on what I should take into account to develop this workflow further? A point that intrigues me: can code running in a Docker container launch parallel processes on the several cores of the machine? e.g., an IPython notebook connected to a cluster.
Docker can launch multiple processes/threads on multiple cores. Running multiple processes in one container may require a supervisor (see: https://docs.docker.com/articles/using_supervisord/ )
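A minimal supervisord configuration along those lines (the program name and command are illustrative, not prescribed):

```ini
; /etc/supervisor/conf.d/notebook.conf - illustrative
[supervisord]
nodaemon=true

[program:notebook]
command=ipython notebook --ip=0.0.0.0 --no-browser
autorestart=true
```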
You should probably build an image that contains the things you always use, and use it as a base for all your projects. (It would save you the pain of writing a complete Dockerfile each time.)
Why not develop directly in a container and use the commit command to save your progress in a local docker registry? Then share the final image with your colleagues.
How to make a local registry : https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
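The commit-and-share loop, with placeholder names throughout:

```shell
# Snapshot a container you've been working in:
docker commit <container-id> myproject:wip

# Run a local registry (registry:2 is the official registry image):
docker run -d -p 5000:5000 registry:2

# Tag and push the snapshot to it:
docker tag myproject:wip localhost:5000/myproject:wip
docker push localhost:5000/myproject:wip
```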
Even though you'll have a full container, I think a package manager like conda can still be a solid part of the base image for your workflow.
FROM ubuntu:14.04
RUN apt-get update && apt-get install curl -y
# Install miniconda
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
RUN bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b
RUN rm Miniconda-latest-Linux-x86_64.sh
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
* from a nice example showing Docker + Miniconda + Flask
With regard to doing source activate <env> in the Dockerfile, you need to:
RUN /bin/bash -c "source activate <env> && <do something in the env>"
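Putting that together with the Miniconda base above, a sketch of creating and installing into an env inside the image (env and package names are just examples; each RUN step needs the explicit bash -c because RUN lines do not share shell state):

```dockerfile
RUN conda create -y -n sci numpy pandas
RUN /bin/bash -c "source activate sci && pip install flask"
```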