Populating user home directory in JupyterHub

I'm trying to populate the home directory of the user on JupyterHub. I've followed the Zero to JupyterHub with Kubernetes guide and have a working cluster. I have the folders I want to copy in the container but I'm not sure how to copy them so that they're available to the user.
lifecycleHooks:
  postStart:
    exec:
      command: ["cp", "-a", "mydir", "/home/jovyan/mydir"]
When I get a shell in my container the folders are there in /home/jovyan but when the exec hook runs these folders can't be found. I know I'm missing something simple here.

I found the best way is to copy the folders you need into a directory other than /home/jovyan, such as /tmp, and then copy them from there. (In the Zero to JupyterHub setup, the user's persistent volume is mounted over /home/jovyan at runtime, so anything baked into the image at that path is hidden by the time the hook runs.)
I now have something like this in my config.yaml, which allows running multiple commands separated by a semicolon:
lifecycleHooks:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          cp -r /tmp/folder_a /home/jovyan;
          cp -r /tmp/folder_b /home/jovyan
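One caveat worth noting: postStart runs on every container (re)start, so a plain cp will re-copy over files the user may have since edited. A guard like the following only seeds folders that are not already present (a sketch; HOME_DIR and STAGE_DIR are illustrative stand-ins for /home/jovyan and /tmp):

```shell
#!/bin/sh
# Idempotent seeding sketch: copy staged folders into the user's home
# only if they are not already there. In the real hook, HOME_DIR would
# be /home/jovyan and STAGE_DIR would be /tmp.
HOME_DIR="${HOME_DIR:-$PWD/home}"
STAGE_DIR="${STAGE_DIR:-$PWD/stage}"
mkdir -p "$HOME_DIR" "$STAGE_DIR/folder_a" "$STAGE_DIR/folder_b"

for d in folder_a folder_b; do
  # Skip folders already copied on a previous container start
  [ -e "$HOME_DIR/$d" ] || cp -r "$STAGE_DIR/$d" "$HOME_DIR/$d"
done
```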

Related

How to seed a docker container in Windows

I wanted to install a mongodb docker container from Docker Hub and then insert some data into it, so a mongodb seed container is needed. I did the following:
created a Dockerfile for the Mongo seed container at mongo_seed/Dockerfile, with the following contents:
FROM mongo:latest
WORKDIR /tmp
COPY data/shops.json .
COPY import.sh .
CMD ["/bin/bash", "-c", "source import.sh"]
The code of import.sh is the following:
#!/bin/bash
ls .
mongoimport --host mongodb --db data --collection shops --file shops.json
the shops.json file contains the data to be imported to Mongo
created a docker-compose.yml file in the current working directory, with the following contents:
version: '3.4'
services:
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    container_name: mongodb
  mongodb_seed:
    build: mongodb_seed
    links:
      - mongodb
The setup above successfully made the mongodb_seed service execute import.sh to import the JSON data from shops.json. It works perfectly on my Ubuntu machine. However, when I run docker-compose up -d --build mongodb_seed on Windows, the import fails with these errors in the logs:
Attaching to linux_mongodb_seed_1
mongodb_seed_1 | ls: cannot access '.'$'\r': No such file or directory
mongodb_seed_1 | 2019-04-02T08:33:45.552+0000 Failed: open shops.json: no such file or directory
mongodb_seed_1 | 2019-04-02T08:33:45.552+0000 imported 0 documents
Does anyone have any idea why this happens, and how to fix it so that it works on Windows as well?
Try changing the line endings in your script file to Unix style.
Notice the error: ls: cannot access '.'$'\r': No such file or directory.
One of the issues with Docker (or any Linux/macOS based system) on Windows is the difference in how line endings are handled.
Windows ends lines in a carriage return and a linefeed \r\n while Linux and macOS only use a linefeed \n. This becomes a problem when you try to create a file in Windows and run it on a Linux/macOS system, because those systems treat the \r as a piece of text rather than a newline.
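The failure mode is easy to reproduce in a plain shell (a sketch; tr -d '\r' does essentially what dos2unix does):

```shell
#!/bin/sh
# Simulate a script saved with Windows (CRLF) line endings
printf 'ls .\r\n' > crlf.sh

# Running it fails: the shell keeps the \r, so ls looks for a file
# literally named ".<CR>", producing "ls: cannot access '.'$'\r'"
sh crlf.sh || echo "CRLF script failed as expected"

# Stripping carriage returns (what dos2unix does) fixes it
tr -d '\r' < crlf.sh > lf.sh
sh lf.sh && echo "LF script runs fine"
```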
Make sure to run dos2unix on the script file whenever anyone edits it in any editor on Windows. Even if the script file was created in Git Bash, don't forget to run dos2unix:
dos2unix import.sh
See https://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/
In your case:
FROM mongo:latest
RUN apt-get update && apt-get install -y dos2unix
WORKDIR /tmp
COPY data/shops.json .
COPY import.sh .
RUN dos2unix import.sh && apt-get --purge remove -y dos2unix
CMD ["/bin/bash", "-c", "source import.sh"]

OWASP/ZAP dangling when trying to scan

I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work, and the documentation really does not help. I am trying to run a scan against my API, which runs in a Docker container locally on my Windows machine, so I run:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1. But it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP docker container is in an unhealthy state; after some time it crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even though I am following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker exactly.
Edit 1
My latest attempt, targeting my host IP address directly and the port that I am exposing my API on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with docker run -v $(pwd):/zap/wrk/:rw ..., you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine on which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue:
$ docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write the gen.conf file to the directory you have mounted on /zap/wrk.
Do you have write access to the cwd when it's not mounted?
The reason for that is: if you use the -r parameter, ZAP will attempt to generate the report file at /zap/wrk/. To make this work, we have to mount a directory at /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
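The permission failure can be reproduced without ZAP at all (a sketch; the chmod stands in for the uid/gid mismatch between the host directory and the container's zap user, and note that root ignores permission bits, so the first touch only fails for a non-root user):

```shell
#!/bin/sh
# Reproduce the class of error behind IOError: [Errno 13]:
# writing into a mounted directory the process cannot write to.
mkdir -p wrk
chmod 555 wrk   # read + execute only, like a mount owned by another uid
touch wrk/gen.conf 2>/dev/null || echo "Permission denied: wrk/gen.conf"

chmod 755 wrk   # what passing -u $(id -u):$(id -g) effectively achieves
touch wrk/gen.conf && echo "writable again"
```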
So, below is a working solution using a GitLab CI yml. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to vanilla docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be written.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the code above are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk.
Pass the current user and group on the host machine to the docker container so the process runs as the same user/group. This allows writing the reports to the directory mounted from the local host, and is done by -u $(id -u ${USER}):$(id -g ${USER}).
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

How does the copy artifacts job work in Kubernetes

I am trying to run a Hyperledger Fabric blockchain network on Kubernetes, using https://github.com/IBM/blockchain-network-on-kubernetes as the reference. In one of the steps, the artifacts (chaincode, configtx.yaml) are copied into the volume using the yaml file below:
https://github.com/IBM/blockchain-network-on-kubernetes/blob/master/configFiles/copyArtifactsJob.yaml
I am unable to understand how the files are copied into the shared persistent volume. Does the entrypoint command on line 24 copy the artifacts to the persistent volume? I do not see a cp here, so how does the copy happen?
command: ["sh", "-c", "ls -l /shared; rm -rf /shared/*; ls -l /shared; while [ ! -d /shared/artifacts ]; do echo Waiting for artifacts to be copied; sleep 2; done; sleep 10; ls -l /shared/artifacts; "]
Actually, this job does not copy anything. It is just used to wait until the copy completes.
Look at the setup_blockchainNetwork.sh script. The actual copy happens at line 82:
kubectl cp ./artifacts $pod:/shared/
This line copies the contents of ./artifacts into the /shared directory of the shared-pvc volume.
The job just makes sure that the copy is complete before proceeding with further tasks. When the copy is done, the job finds the files in the /shared/artifacts directory and runs to completion. Once the job has completed, the script proceeds with the next task. Look at the condition here.
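The wait pattern from the job's command can be sketched as a standalone script (directory names are illustrative; the background mkdir stands in for the kubectl cp finishing):

```shell
#!/bin/sh
# Sketch of the copy-artifacts job: block until some other process
# (here, a background subshell standing in for `kubectl cp`) has
# populated the shared volume, then exit so the Job completes.
SHARED="${SHARED:-$PWD/shared}"
mkdir -p "$SHARED"

# Simulate the external copy arriving a moment later
( sleep 1; mkdir -p "$SHARED/artifacts" ) &

while [ ! -d "$SHARED/artifacts" ]; do
  echo "Waiting for artifacts to be copied"
  sleep 2
done
echo "Artifacts found; job can complete"
```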

Multiple commands in the postStart hook of a container

In a Kubernetes Deployment yaml file, is there a simple way to run multiple commands in the postStart hook of a container?
I'm trying to do something like this:
lifecycle:
  postStart:
    exec:
      command: ["/bin/cp", "/webapps/myapp.war", "/apps/"]
      command: ["/bin/mkdir", "-p", "/conf/myapp"]
      command: ["touch", "/conf/myapp/ready.txt"]
But it doesn't work.
(looks like only the last command is executed)
I know I could embed a script in the container image and simply call it there... But I would like to be able to customize those commands in the yaml file without touching the container image.
thanks
Only one command is allowed, but you can use sh -c, like this:
lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          if [ -s /var/www/mybb/inc/config.php ]; then
            rm -rf /var/www/mybb/install;
          fi;
          if [ ! -f /var/www/mybb/index.php ]; then
            cp -rp /originroot/var/www/mybb/. /var/www/mybb/;
          fi
You can also create a bash or make script to group all those commands.
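Applied to the question's three commands, the same sh -c trick chains them into one invocation (a sketch; APP_DIR and CONF_DIR are illustrative stand-ins for the /apps and /conf paths from the question):

```shell
#!/bin/sh
# The three separate `command:` entries from the question, combined into
# the single shell string a postStart hook would pass to sh -c.
APP_DIR="${APP_DIR:-$PWD/apps}"
CONF_DIR="${CONF_DIR:-$PWD/conf}"
printf 'fake war' > myapp.war    # stand-in for /webapps/myapp.war

sh -c "mkdir -p '$APP_DIR' && cp myapp.war '$APP_DIR/' && \
       mkdir -p '$CONF_DIR/myapp' && touch '$CONF_DIR/myapp/ready.txt'"
```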

Fig up error exec: "bundle": executable file not found in $PATH

I'm trying to run a Dockerized sinatra app with no database using fig, but I keep getting this error:
$ fig up
Recreating my_web_1...
Cannot start container 93f4a091bd6387bd28d8afb8636d2b14623a08d259fba383e8771fee811061a3: exec: "bundle": executable file not found in $PATH
Here is the Dockerfile
FROM ubuntu-nginx
MAINTAINER Ben Bithacker ben@bithacker.org
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
WORKDIR /app
RUN ["/bin/bash", "-l", "-c", "bundle install"]
ADD config/container/start-server.sh /usr/bin/start-server
RUN chmod +x /usr/bin/start-server
ADD . /app
EXPOSE 9292
CMD ["/usr/bin/start-server"]
The config/container/start-server.sh looks like this
#!/bin/bash
cd /app
source /etc/profile.d/rvm.sh
bundle exec rackup config.ru
The fig.yml looks like this:
web:
  build: .
  command: bundle exec rackup config.ru
  volumes:
    - .:/app
  ports:
    - "3000:3000"
  environment:
    - SOME_VAR=adsfasdfgasdfdfd
    - SOME_VAR2=ba2gezcjsdhwzhlz24zurg5ira
I think there are a couple of problems with this setup. Where is bundler installed? Normally you would apt-get install ruby-bundler and it would always be on your path.
I believe your immediate problem is that you're overriding the CMD from the Dockerfile with the command in the fig.yml. I'm assuming (based on the contents of start-server.sh) that you need the PATH set up by /etc/profile.d/rvm.sh. You should remove the command line from the fig.yml.
You're also overriding the /app directory in the container with the volumes entry .:/app in the fig.yml. You probably want to remove that line as well.
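The `executable file not found in $PATH` error comes from the same lookup any shell performs; a quick way to check how (or whether) a name resolves is `command -v` (a generic sketch, not specific to this image):

```shell
#!/bin/sh
# Sketch: check whether an executable resolves via $PATH -- the same
# lookup that fails with: exec: "bundle": executable file not found in $PATH
check_on_path() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "$1 found at $(command -v "$1")"
  else
    echo "$1 NOT on PATH ($PATH)"
  fi
}

check_on_path sh        # present on any POSIX system
check_on_path bundle    # likely missing unless bundler is installed and on PATH
```

Running the same check inside the container (e.g. via docker run ... sh -c 'command -v bundle') shows whether bundler is visible to the non-login shell that executes the CMD.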