Docker-compose with jupyter data-science notebook failed to create new file

System: Ubuntu 18.04.5 LTS
Docker image: jupyter/datascience-notebook:latest
Docker-version:
Client: Docker Engine - Community
Version: 20.10.2
API version: 1.40
Go version: go1.13.15
Git commit: 2291f61
Built: Mon Dec 28 16:17:32 2020
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 19.03.13
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:01:06 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.18.0
GitCommit: fec3683
Hi, I am practicing setting up a Jupyter environment on a remote Ubuntu server via docker-compose; below is my config:
Dockerfile-v1
FROM jupyter/datascience-notebook
ARG PYTORCH_VER
RUN ${PYTORCH_VER}
docker-compose-v1.yml
version: "3"
services:
ychuang-pytorch:
env_file: pytorch.env
build:
context: .
dockerfile: Dockerfile-${TAG}
args:
PYTORCH_VER: ${PYTORCH_VERSION}
restart: always
command: jupyter notebook --NotebookApp.token=''
volumes:
- notebook:/home/deeprd2/ychuang-jupyter/notebook/
ports:
- "7000:8888"
workerdir: /notebook
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [gpu]
volumes:
notebook:
pytorch.env
TAG=v1
PYTORCH_VERSION=pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
and the files structure with permission:
$ tree
├── docker-compose-v1.yml
├── Dockerfile-v1
├── notebook
│   └── test.ipynb
└── pytorch.env
$ ls -l
-rw-rw-r-- 1 deeprd2 deeprd2 984 Jun 22 15:36 docker-compose-v1.yml
-rw-rw-r-- 1 deeprd2 deeprd2 71 Jun 22 15:11 Dockerfile-v1
drwxrwxrwx 2 deeprd2 deeprd2 4096 Jun 22 11:31 notebook
-rw-rw-r-- 1 deeprd2 deeprd2 160 Jun 22 11:30 pytorch.env
After executing docker-compose -f docker-compose-v1.yml --env-file pytorch.env up, it created the environment; however, it failed when I tried to open a new notebook, with this error message:
ychuang-pytorch_1 | [I 07:58:22.535 NotebookApp] Creating new notebook in
ychuang-pytorch_1 | [W 07:58:22.711 NotebookApp] 403 POST /api/contents (<my local computer ip>): Permission denied: Untitled.ipynb
ychuang-pytorch_1 | [W 07:58:22.711 NotebookApp] Permission denied: Untitled.ipynb
I am wondering if this is a mounting issue. Any help is appreciated.
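For reference, a common workaround with the jupyter docker-stacks images is to bind-mount the host notebook directory (instead of a fresh named volume, which is created root-owned) and let the image's start-up script align ownership via its NB_UID / CHOWN_HOME options. A minimal sketch, assuming the host user's UID is 1000 and the default jovyan home; not verified against this exact setup:
services:
  ychuang-pytorch:
    # ... build/ports/deploy as above ...
    user: root                    # needed so the start script may change ownership
    environment:
      NB_UID: 1000                # assumed host UID of deeprd2; check with `id -u`
      CHOWN_HOME: "yes"           # ask the image to chown the home directory on start
    working_dir: /home/jovyan/work
    volumes:
      # bind-mount the existing ./notebook directory instead of the named volume
      - ./notebook:/home/jovyan/work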

Related

Weird permissions podman docker-compose volume

I have specified a docker-compose.yml file with some volumes to mount. Here is an example:
backend-move:
  container_name: backend-move
  environment:
    APP_ENV: prod
  image: backend-move:latest
  logging:
    options:
      max-size: 250m
  ports:
    - 8080:8080
  tty: true
  volumes:
    - php_static_logos:/app/public/images/logos
    - ./volumes/nginx-php/robots.txt:/var/www/html/public/robots.txt
    - ./volumes/backend/mysql:/app/mysql
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf
After I run podman-compose up -d and go into the container via docker exec -it backend-move bash,
I get these strange permissions (??????????) on the mounted files:
bash-4.4$ ls -la
ls: cannot access 'welcome.conf': Permission denied
total 28
drwxrwxrwx. 2 root root 114 Apr 21 12:29 .
drwxrwxrwx. 5 root root 105 Apr 21 12:29 ..
-rwxrwxrwx. 1 root root 400 Mar 21 17:33 README
-rwxrwxrwx. 1 root root 2926 Mar 21 17:33 autoindex.conf
-rwxrwxrwx. 1 root root 1517 Apr 21 12:29 php.conf
-rwxrwxrwx. 1 root root 8712 Apr 21 12:29 ssl.conf
-rwxrwxrwx. 1 root root 1252 Mar 21 17:27 userdir.conf
-?????????? ? ? ? ? ? welcome.conf
Any suggestions?
[root@45 /]# podman-compose --version
['podman', '--version', '']
using podman version: 3.4.2
podman-composer version 1.0.3
podman --version
podman version 3.4.2
I am facing the exact same issue, although on macOS with the podman machine; since the parent directory is mounted into the podman machine there, I do get write permissions.
On Linux, however, it just fails as in your example.
To fix my issue, I had to add:
privileged: true
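In compose terms that means setting the flag at the service level; a minimal sketch based on the service above (on SELinux hosts the narrower :z / :Z volume suffixes may also be worth trying first):
backend-move:
  image: backend-move:latest
  privileged: true   # relaxes the restrictions that left the bind-mounted file unreadable
  volumes:
    - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf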

postgres docker container on Raspberry Pi time always wrong

PostgreSQL on my Raspbian always has the wrong time,
but the nginx container does not.
What's wrong with my Docker?
Nginx:
pi@raspberrypi:~$ docker run -it -e TZ=Asia/Shanghai nginx date
Mon Oct 25 14:12:45 CST 2021
Postgres:
pi@raspberrypi:~$ docker run -it postgres:alpine date
Tue Jun 30 15:19:12 UTC 2071
Postgres localtime:
pi@raspberrypi:~$ docker run -it -e TZ=Asia/Shanghai -v /etc/localtime:/etc/localtime:ro postgres:12 date
Thu 01 Jan 1970 08:00:00 AM CST
My docker info below:
pi@raspberrypi:~$ docker version
Client: Docker Engine - Community
Version: 20.10.9
API version: 1.41
Go version: go1.16.8
Git commit: c2ea9bc
Built: Mon Oct 4 16:06:55 2021
OS/Arch: linux/arm
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.9
API version: 1.41 (minimum version 1.12)
Go version: go1.16.8
Git commit: 79ea9d3
Built: Mon Oct 4 16:04:47 2021
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.4.11
GitCommit: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
It appears to be an issue with the libseccomp2 library on Raspberry Pi. I was experiencing the same issue and eventually resolved it by following the steps in this thread:
Add the following to /etc/apt/sources.list:
deb http://raspbian.raspberrypi.org/raspbian/ testing main
Run apt update
Run apt-get install libseccomp2/testing
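Put together, the steps above amount to the following (this just restates the commands from the linked thread):
# add the Raspbian testing repo, refresh the package index, then install libseccomp2 from it
echo "deb http://raspbian.raspberrypi.org/raspbian/ testing main" | sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt-get install libseccomp2/testing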
After running these updates the date/time should reflect that of your host. You may need to also mount /etc/localtime and /etc/timezone to get everything to match.
docker run -it -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro --entrypoint /bin/sh postgres
docker-compose.yml
services:
  db:
    container_name: postgres
    image: postgres:latest
    restart: unless-stopped
    environment:
      TZ: America/Chicago
      PGTZ: America/Chicago
      POSTGRES_DB: test
      POSTGRES_USER: testing
      POSTGRES_PASSWORD: password
    volumes:
      - /etc/localtime:/etc/localtime:ro

Deployment fails using Github Actions

This is my first CI/CD attempt using GitHub Actions, and for some reason my deployment keeps failing. I have my GitHub repo with the following files.
-rw-rw-r-- 1 ubuntu ubuntu 1.1K Sep 7 21:40 Dockerfile
-rw-rw-r-- 1 ubuntu ubuntu 1.1K Sep 7 18:06 README.md
-rwxrwxr-x 1 ubuntu ubuntu 132 Sep 8 18:09 deploy_to_aws.sh
-rw-rw-r-- 1 ubuntu ubuntu 275 Sep 8 18:05 docker-compose.yml
drwxrwxr-x 6 ubuntu ubuntu 4.0K Sep 7 23:27 flexdashboard
-rwxrwxr-x 1 ubuntu ubuntu 359 Sep 7 19:30 shiny-server.sh
Now I am trying to build, deploy, and run the Shiny application on a cloud instance (the Shiny application runs when I start it manually on the cloud). So I set up this workflow in GitHub Actions:
name: Deploy EC2
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - name: Run a one-line script
        run: echo Hello, world!
      - name: Install SSH key
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.SSH_KEY }}
          name: id_rsa # optional
          known_hosts: ${{ secrets.KNOWN_HOSTS }}
      - name: rsync over ssh
        run: ./deploy_to_aws.sh
Here are the contents of the deploy_to_aws.sh deployment script:
#!/bin/bash
echo 'Starting to Deploy...'
cd Illumina-mRNA-dashboard
docker-compose up -d
echo 'Deployment completed successfully'
I am now getting this error.
shell: /bin/bash -e {0}
Run ./deploy_to_aws.sh
./deploy_to_aws.sh
shell: /bin/bash -e {0}
/home/runner/work/_temp/48d3ea81-a97b-45d1-894f-177e77cb8ae5.sh: line 1: ./deploy_to_aws.sh: No such file or directory
##[error]Process completed with exit code 127.
I don't understand why it keeps telling me that deploy_to_aws.sh doesn't exist.
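For what it's worth, one way to check whether the script actually made it into the runner's workspace and is executable is to add a listing step just before the deploy step (a debugging sketch, not part of the original workflow):
      - name: list workspace
        run: ls -la
      - name: rsync over ssh
        run: ./deploy_to_aws.sh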

How can I deploy something to a k8s cluster via a k8s gitlab-ci runner?

cat /etc/redhat-release:
CentOS Linux release 7.2.1511 (Core)
docker version:
Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:59:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
gitlab version: 10.6-ce
gitlab runner image: gitlab/gitlab-runner:alpine-v10.3.0
I just integrated a Kubernetes cluster (not GKE, just a k8s cluster deployed by myself) into a GitLab project, and then installed a gitlab-runner on it.
All of this followed Adding an existing Kubernetes cluster.
After that, I added a .gitlab-ci.yml with a single stage, and pushed it to the repo. Here is the contents:
build-img:
  stage: docker-build
  script:
    # - docker build -t $CONTAINER_RELEASE_IMAGE .
    # - docker tag $CONTAINER_RELEASE_IMAGE $CONTAINER_LATEST_IMAGE
    # - docker push $CONTAINER_IMAGE
    - env | grep KUBE
    - kubectl --help
  tags:
    - kubernetes
  only:
    - develop
Then I got this:
$ env | grep KUBE
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
$ kubectl --help
/bin/bash: line 62: kubectl: command not found
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
kubectl is not installed in the runner yet, and some env vars like KUBE_TOKEN, KUBE_CA_PEM_FILE or KUBECONFIG are not present either (see Deployment variables).
I searched the official GitLab docs and found nothing.
So, how can I deploy a project via this runner?
The gitlab-runner has no built-in commands; it spins up a container with a predefined image and then remotely executes the commands from your script in that container.
You have not defined an image, so the default image defined in the gitlab-runner's setup will be used.
So, you could install the kubectl binary using curl before you use it, in your script: or in a before_script: section:
build-img:
  stage: docker-build
  before_script:
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
  script:
    - ...
    - ./kubectl version --client
Or create a separate deployment stage, with an image that has kubectl, e.g. roffe/kubectl:
stages:
  - docker-build
  - deploy

build-img:
  stage: docker-build
  script:
    - docker build -t $CONTAINER_RELEASE_IMAGE .
    - docker tag $CONTAINER_RELEASE_IMAGE $CONTAINER_LATEST_IMAGE
    - docker push $CONTAINER_IMAGE
  tags:
    - kubernetes

deploy:dev:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl .....
  tags:
    - kubernetes

scaling a service with docker compose

I am facing issues with scaling a service using docker compose and need help.
Below is what I have:
My docker-compose.yml
web:
  image: nginx
The command that I run:
docker-compose -f compose/nginx-docker-compose.yml scale web=3 up -d
The output of the command:
Creating and starting compose_web_1 ... done
Creating and starting compose_web_2 ... done
Creating and starting compose_web_3 ... done
ERROR: Arguments to scale should be in the form service=num
The output of docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fead93372574 nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_3
de110ae9606d nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_1
4d7f8fd39ccd nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_2
I should also mention that when I do not use the scale web=3 option, the service comes up just fine.
docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:51:19 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:51:19 2016
OS/Arch: linux/amd64
docker-compose version
docker-compose version 1.8.0, build f3628c7
docker-py version: 1.9.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
Let me know if anybody else has faced this and has found a solution.
Thanks.
You should remove the up if you use the scale option, e.g. docker-compose scale web=3.
See the scale documentation on the Docker site.
In your case:
docker-compose -f compose/nginx-docker-compose.yml scale web=3 up -d
The command apparently "thinks" that up is another service to be scaled (it would have to be up=3), so it throws that error.
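Concretely, running the two operations as separate commands avoids the ambiguity (same compose file and service name as in the question):
# bring the service up first, then scale it with a second command
docker-compose -f compose/nginx-docker-compose.yml up -d
docker-compose -f compose/nginx-docker-compose.yml scale web=3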