I am facing issues with scaling a service using docker compose and need help.
Below is what I have:
My docker-compose.yml
web:
  image: nginx
The command that I use to run:
docker-compose -f compose/nginx-docker-compose.yml scale web=3 up -d
The output of the command:
Creating and starting compose_web_1 ... done
Creating and starting compose_web_2 ... done
Creating and starting compose_web_3 ... done
ERROR: Arguments to scale should be in the form service=num
The output of docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fead93372574 nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_3
de110ae9606d nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_1
4d7f8fd39ccd nginx "nginx -g 'daemon off" 6 seconds ago Up 4 seconds 80/tcp, 443/tcp compose_web_2
I should also mention that when I do not use the scale web=3 option, the service comes up just fine.
docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:51:19 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:51:19 2016
OS/Arch: linux/amd64
docker-compose version
docker-compose version 1.8.0, build f3628c7
docker-py version: 1.9.0
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
Let me know if anybody else has faced this and has found a solution.
Thanks.
You should remove the "up" if you use the scale option. For example: docker-compose scale web=3.
See the documentation for scale on the Docker site.
In your case:
docker-compose -f compose/nginx-docker-compose.yml scale web=3 up -d
The command probably treats up as another service to be scaled (it would have to be written up=3), so it throws that error.
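In this case the fix is simply to split it into two commands; a minimal sketch, assuming the same compose file path as in the question:
docker-compose -f compose/nginx-docker-compose.yml up -d
docker-compose -f compose/nginx-docker-compose.yml scale web=3
Newer Compose releases also accept a --scale flag on up itself (docker-compose up -d --scale web=3), but on the 1.8.0 build shown above the separate scale command is the way to go.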
Related
After installing k8s 1.24, I would like to manage the containers using cri-o and podman.
I installed podman to use the podman commit function. I can see the image, but I can't see the containers. As far as I know, I need to change Podman's runtime; how can I change it?
Additionally, if I check the containers using the crictl command, I see the k8s containers.
My OS is ubuntu:20.04.
crictl ps -a command result
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
c798099462c9b 295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 33 minutes ago Running nginx 0 e1bd39fd2b241 nginx-deployment-6595874d85-wksxq
e2da3de955397 295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 33 minutes ago Running nginx 0 15bf749dc406f nginx-deployment-6595874d85-8zbpx
a58db13acc2bf 295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369 33 minutes ago Running nginx 0 f5d775b0b78c0 nginx-deployment-6595874d85-pspvg
75573509b4b1e docker.io/kubernetesui/dashboard@sha256:1b020f566b5d65a0273c3f4ed16f37dedcb95ee2c9fa6f1c42424ec10b2fd2d7 3 days ago Running kubernetes-dashboard 0 bfb3ade50b952 kubernetes-dashboard-f8bb6d75-9g6p7
2ab98c33f2713 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9 3 days ago Running dashboard-metrics-scraper 0 23da9b06873c9 dashboard-metrics-scraper-7bfdf779ff-qrdcj
ac41ff1d3a335 docker.io/calico/kube-controllers@sha256:57c40fdfb86dce269a8f93b4f5545b23b7ee9ba36d62e67e7ce367df8d753887 3 days ago Running calico-kube-controllers 0 fac206793f1d5 calico-kube-controllers-6766647d54-sqjqx
0ec5b4d17ed3b docker.io/calico/node@sha256:0a430186f9267218aed3e419a541e306eba3d0bbe5cf4f6a8b700d35d7a4f7fa 3 days ago Running calico-node 0 1a63a00117c7f calico-node-mkvpg
6ec33c0ece470 a87d3f6f1b8fdc077e74ad6874301f57490f5b38cc731d5d6f5803f36837b4b1 3 days ago Exited install-cni 0 1a63a00117c7f calico-node-mkvpg
8dd11d084b8f3 docker.io/calico/cni@sha256:7c7bd52f3c72917c28a6715740b4710e9a345a26a7d90556a471a3eb977d8cf7 3 days ago Exited upgrade-ipam 0 1a63a00117c7f calico-node-mkvpg
0ce4eaeb4d27b k8s.gcr.io/kube-proxy@sha256:058460abb91c7c867f61f5a8a4c29d2cda605d8ff1dd389b1f7bad6c1db60a5b 3 days ago Running kube-proxy 0
podman ps -a command result
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
podman system info command result
host:
arch: amd64
buildahVersion: 1.23.1
cgroupControllers:
- cpuset
- cpu
- cpuacct
- blkio
- memory
- devices
- freezer
- net_cls
- perf_event
- net_prio
- hugetlb
- pids
- rdma
cgroupManager: systemd
cgroupVersion: v1
conmon:
package: 'conmon: /usr/libexec/podman/conmon'
path: /usr/libexec/podman/conmon
version: 'conmon version 2.1.2, commit: '
cpus: 4
distribution:
codename: focal
distribution: ubuntu
version: "20.04"
eventLogger: journald
hostname: realworker
idMappings:
gidmap: null
uidmap: null
kernel: 5.4.0-122-generic
linkmode: dynamic
logDriver: journald
memFree: 4012826624
memTotal: 16728518656
ociRuntime:
name: crun
package: 'crun: /usr/bin/crun'
path: /usr/bin/crun
version: |-
crun version UNKNOWN
commit: ea1fe3938eefa14eb707f1d22adff4db670645d6
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
path: /run/podman/podman.sock
security:
apparmorEnabled: true
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: 'slirp4netns: /usr/bin/slirp4netns'
version: |-
slirp4netns version 1.1.8
commit: unknown
libslirp: 4.3.1-git
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.4.3
swapFree: 0
swapTotal: 0
uptime: 95h 1m 37.73s (Approximately 3.96 days)
plugins:
log:
- k8s-file
- none
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- docker.io
- quay.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "true"
imageStore:
number: 11
runRoot: /run/containers/storage
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 3.4.2
Built: 0
BuiltTime: Thu Jan 1 00:00:00 1970
GitCommit: ""
GoVersion: go1.15.2
OsArch: linux/amd64
Version: 3.4.2
I am running Win10, WSL and Docker Desktop. I have the following test YML which errors out:
version: '2.3'
services:
  cli:
    image: smartsheet-www
    volumes_from:
      - container:amazeeio-ssh-agent
➜ ~ ✗ docker-compose -f test.yml up
no such service: amazeeio-ssh-agent
Why does it try to find a service when I specified container: ?
The container exists, runs and has a volume.
docker inspect -f "{{ .Config.Volumes }}" amazeeio-ssh-agent
map[/tmp/amazeeio_ssh-agent:{}]
docker exec -it amazeeio-ssh-agent /bin/sh -c 'ls -l /tmp/amazeeio_ssh-agent/'
total 0
srw------- 1 drupal drupal 0 Apr 1 03:54 socket
Removing the volumes_from line and the one following it lets the cli service start just fine.
After a bit of searching, I finally found https://github.com/docker/compose/issues/8874 and https://github.com/pygmystack/pygmy-legacy/issues/60#issue-1037009622, which fix it:
Uncheck "Use Docker Compose V2" in the Docker Desktop settings.
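To confirm which Compose is active after toggling that setting, check the version; Compose v1 reports 1.x, while Compose V2 reports 2.x:
docker-compose version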
PostgreSQL on my Raspbian always has the wrong time!
The nginx container does not have the same problem.
What's wrong with my Docker?
Nginx:
pi@raspberrypi:~$ docker run -it -e TZ=Asia/Shanghai nginx date
Mon Oct 25 14:12:45 CST 2021
Postgres:
pi@raspberrypi:~$ docker run -it postgres:alpine date
Tue Jun 30 15:19:12 UTC 2071
Postgres localtime:
pi@raspberrypi:~$ docker run -it -e TZ=Asia/Shanghai -v /etc/localtime:/etc/localtime:ro postgres:12 date
Thu 01 Jan 1970 08:00:00 AM CST
My docker version output is below:
pi@raspberrypi:~$ docker version
Client: Docker Engine - Community
Version: 20.10.9
API version: 1.41
Go version: go1.16.8
Git commit: c2ea9bc
Built: Mon Oct 4 16:06:55 2021
OS/Arch: linux/arm
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.9
API version: 1.41 (minimum version 1.12)
Go version: go1.16.8
Git commit: 79ea9d3
Built: Mon Oct 4 16:04:47 2021
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.4.11
GitCommit: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
It appears to be an issue with the libseccomp2 library on Raspberry Pi. I was experiencing the same issues and eventually resolved them by following the steps in this thread:
Add the following to /etc/apt/sources.list:
deb http://raspbian.raspberrypi.org/raspbian/ testing main
Run apt update
Run apt-get install libseccomp2/testing
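Put together, the steps above look like this (a sketch; run as root):
echo 'deb http://raspbian.raspberrypi.org/raspbian/ testing main' >> /etc/apt/sources.list
apt update
apt-get install libseccomp2/testing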
After running these updates the date/time should reflect that of your host. You may need to also mount /etc/localtime and /etc/timezone to get everything to match.
docker run -it -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro --entrypoint /bin/sh postgres
docker-compose.yml
services:
  db:
    container_name: postgres
    image: postgres:latest
    restart: unless-stopped
    environment:
      TZ: America/Chicago
      PGTZ: America/Chicago
      POSTGRES_DB: test
      POSTGRES_USER: testing
      POSTGRES_PASSWORD: password
    volumes:
      - /etc/localtime:/etc/localtime:ro
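To check that the timezone settings took effect, bring the stack up and read the clock inside the container (this uses the container_name from the file above):
docker-compose up -d
docker exec postgres date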
cat /etc/redhat-release:
CentOS Linux release 7.2.1511 (Core)
docker version:
Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:59:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
gitlab version: 10.6-ce
gitlab runner image: gitlab/gitlab-runner:alpine-v10.3.0
I just integrated a Kubernetes cluster (not GKE, just a k8s cluster I deployed myself) with a GitLab project, and then installed a gitlab-runner on it.
All of this followed Adding an existing Kubernetes cluster.
After that, I added a .gitlab-ci.yml with a single stage and pushed it to the repo. Here are its contents:
build-img:
  stage: docker-build
  script:
    # - docker build -t $CONTAINER_RELEASE_IMAGE .
    # - docker tag $CONTAINER_RELEASE_IMAGE $CONTAINER_LATEST_IMAGE
    # - docker push $CONTAINER_IMAGE
    - env | grep KUBE
    - kubectl --help
  tags:
    - kubernetes
  only:
    - develop
Then I got this:
$ env | grep KUBE
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
$ kubectl --help
/bin/bash: line 62: kubectl: command not found
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
kubectl is not installed in the runner yet, and some env vars like KUBE_TOKEN, KUBE_CA_PEM_FILE, or KUBECONFIG are not found either (see Deployment variables).
I searched the official GitLab docs and found nothing.
So, how could I deploy a project via this runner?
The gitlab-runner has no built-in commands; it spins up a container with a predefined image and then remotely executes the commands from your script in that container.
You have not defined an image, so the default image defined in the gitlab-runner setup will be used.
So you could install the kubectl binary with curl before you use it, either in your script: or in a before_script: section:
build-img:
  stage: docker-build
  before_script:
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
  script:
    - ...
    - ./kubectl version --client
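If you prefer to pin a specific kubectl release instead of following stable.txt, the first curl can point at a fixed version; the version below is only an example:
- curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.5/bin/linux/amd64/kubectl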
Or create a separate deployment stage with an image that has kubectl, e.g. roffe/kubectl:
stages:
  - docker-build
  - deploy

build-img:
  stage: docker-build
  script:
    - docker build -t $CONTAINER_RELEASE_IMAGE .
    - docker tag $CONTAINER_RELEASE_IMAGE $CONTAINER_LATEST_IMAGE
    - docker push $CONTAINER_IMAGE
  tags:
    - kubernetes

deploy:dev:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl .....
  tags:
    - kubernetes
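If the deployment variables from GitLab's Kubernetes integration (e.g. KUBE_TOKEN and KUBE_CA_PEM_FILE mentioned above) are exposed to the job, the deploy script could wire them into kubectl roughly like this. This is only a sketch: the cluster URL variable and deployment.yaml are placeholders for whatever your integration and repo actually provide:
  script:
    - kubectl config set-cluster gitlab --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
    - kubectl config set-credentials gitlab --token="$KUBE_TOKEN"
    - kubectl config set-context gitlab --cluster=gitlab --user=gitlab
    - kubectl config use-context gitlab
    - kubectl apply -f deployment.yaml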
I use the mongo image in Docker, but I cannot connect to port 20217.
docker@default:~$ docker ps
The port info shows: 0.0.0.0:20217->20217/tcp, 27017/tcp
but,
gilbertdeMacBook-Pro:~ gilbert$ lsof -i tcp:20217
there is no PID,
gilbertdeMacBook-Pro:~ gilbert$ docker info
Containers: 3
Images: 43
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 50
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: MRAZ:ZG5E:HDMY:EJNQ:HFL4:PW6Y:AXIS:6JFL:PFI5:GBAY:5SMF:NYQR
Debug mode (server): true
File Descriptors: 25
Goroutines: 44
System Time: 2016-01-27T14:53:52.005531869Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: gilbertgan
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
I found this is because, on macOS, docker-machine runs inside a VM, so we need to use the VM's IP when connecting to the container.
The IP can be shown by: docker-machine ls
Your Docker container maps port 20217, which isn't the MongoDB default port. The correct port is 27017. And gilbert_gan is right as well: when running Docker on docker-machine, the Docker host is not localhost but rather the virtual machine under docker-machine's control.
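A minimal check from the Mac side, assuming the mongo shell is installed on the host, the default machine name, and a container that publishes the standard port (e.g. docker run -d -p 27017:27017 mongo):
docker-machine ip default
mongo --host $(docker-machine ip default) --port 27017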