I'm trying to deploy Mongo onto Azure Container Instances as part of a container group. To do this, I use a Storage Account with a file share to persist the mongo data. It isn't possible to mount the volume at the default /data/db location, so I mount it elsewhere and start mongod with the --dbpath flag. This all works fine using the CLI; the full command is below.
However, when I translate all of this into my YAML config file it doesn't work. Mongo crashes with a "no such file or directory" error. If I start the container without the --dbpath flag but still mount the volume, I can exec into the running container and see that the volume is there and attached. I can even manually create folders in the share via the Azure Portal and see them appear in the container.
Documentation and examples are a little thin on the ground, especially YAML based examples. The biggest difference with the container group is having to define a named volume separate from the container which is used by the volumeMounts property. Is it just a syntax error? Are the CLI command and the YAML not equivalent in some way?
CLI Command
az container create \
  --resource-group rsenu-hassPilots-group \
  --name mongo \
  --image mongo \
  --azure-file-volume-account-name <account> \
  --azure-file-volume-account-key "<key>" \
  --azure-file-volume-share-name mongodata \
  --azure-file-volume-mount-path "/data/mongoaz" \
  --ports 27017 \
  --cpu 1 \
  --ip-address public \
  --memory 1.5 \
  --os-type Linux \
  --protocol TCP \
  --command-line "mongod --dbpath=/data/mongoaz"
YAML Config
apiVersion: 2018-10-01
location: uksouth
name: trustNewArtGroup
properties:
  containers:
  - name: mongo
    properties:
      image: mongo:4.2.3
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 27017
      volumeMounts:
      - name: database
        mountPath: /data/azstorage
      environmentVariables:
      - name: 'MONGO_INITDB_DATABASE'
        value: 'trust-new-art-db'
      command:
      - "mongod --dbpath=/data/azstorage"
  osType: Linux
  ipAddress:
    type: Public
    dnsNameLabel: trustnewart
    ports:
    - protocol: tcp
      port: '27017'
  volumes:
  - name: database
    azureFile:
      sharename: mongodata
      storageAccountName: <account>
      storageAccountKey: <key>
tags: null
type: Microsoft.ContainerInstance/containerGroups
With a bit of help from this page in the documentation, I've discovered it was a syntax issue. The correct way to override the entrypoint in a YAML config file is as follows:
command: ['mongod', '--dbpath', '/data/azstorage']
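In other words, each argument becomes its own list item. Written in block style, the command section under the container's properties would look like the excerpt below (a minimal excerpt of the config above; the inline-list form on the previous line is equivalent YAML):

      command:
      - mongod
      - --dbpath
      - /data/azstorage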
Related
I have created a mongodb pod in my Minikube local cluster with the following configuration. Now I would like to migrate the data of my existing mongodb database (running on an AWS EC2 instance) into this database. How can I accomplish this?
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  volumes:
  - name: mongo-vol
    persistentVolumeClaim:
      claimName: mongo-pvc
  containers:
  - image: mongo
    name: container1
    command:
    - "mongod"
    - "--bind_ip"
    - "0.0.0.0"
    ports:
    - containerPort: 27017
    volumeMounts:
    - name: mongo-vol
      mountPath: /data/db
I can't add a comment because I don't have 50 reputation yet, but here is the thing:
You created a Pod running MongoDB, and I see the volumeMount set to /data/db. What backs this path? Is the volume a hostPath, NFS, or external storage?
The migration itself could be accomplished in two ways:
By running mongodump on your EC2 instance:
mongodump --host hostname --port 27017 --out /tmp/mongodb-dump
Then running mongorestore inside the pod so the data lands on your /data/db path (see the combined sketch after both options).
You can log in to the pod manually:
# copy your dump from the host to the pod
kubectl cp /tmp/mongodb-dump mongodb:/tmp/mongodb-dump
# log in to the pod
kubectl exec -it mongodb -- /bin/bash
#run restore
mongorestore /tmp/mongodb-dump
or
You can copy the entire /data directory from your EC2 instance and drop it onto the volume that your pod mounts at /data/db.
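Put together, the first option looks roughly like this. This is only a sketch: it assumes the pod is named mongodb as in the question, that the dump has already been copied from EC2 to the machine where kubectl runs, and that it lives at /tmp/mongodb-dump.

# on the EC2 instance: dump the existing database
mongodump --host localhost --port 27017 --out /tmp/mongodb-dump

# on the machine running kubectl: copy the dump into the pod
kubectl cp /tmp/mongodb-dump mongodb:/tmp/mongodb-dump

# restore inside the pod; the data is written under /data/db via the mounted volume
kubectl exec -it mongodb -- mongorestore /tmp/mongodb-dump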
I've created a manifest file that looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
    - name: "kuard-data"
      hostPath:
        path: "/home/developer/kubernetes/exercises"
  containers:
    - image: gcr.io/kuar-demo/kuard-amd64:1
      name: kuard
      volumeMounts:
        - mountPath: "/data"
          name: "kuard-data"
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
As you can see, the hostPath is:
path: "/home/developer/kubernetes/exercises"
and the mountPath is:
mountPath: "/data"
I've created a hello.txt file in the folder /home/developer/kubernetes/exercises, but when I enter the pod via kubectl exec -it kuard ash I cannot find the file hello.txt.
Where is the file?
kind runs its Kubernetes nodes as Docker containers, so files you create on your host (your Ubuntu machine) are not automatically visible inside those node containers.
(This gets even more complicated on macOS or Windows, where Docker itself runs inside a separate virtual machine.)
I assume there are some shared folders visible inside the kind Docker nodes, but I could not find this documented.
You can inspect the filesystem of the Docker node from inside the container using docker exec -it kind-control-plane /bin/sh and then look around with the usual tools.
If you need to make content from your development machine available you might want to have a look at ksync: https://github.com/vapor-ware/ksync
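If the goal is specifically to make a directory from the development machine visible inside the kind node (so that a hostPath volume like the one above can find it), kind also supports declaring extra mounts in its cluster configuration. A minimal sketch, reusing the path from the question; the file name kind-config.yaml and the single-node layout are illustrative assumptions:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  # hostPath is the path on the development machine;
  # containerPath is where it appears inside the kind node container,
  # and the pod's hostPath volume must point at that containerPath
  - hostPath: /home/developer/kubernetes/exercises
    containerPath: /home/developer/kubernetes/exercises

Create the cluster with this config (kind create cluster --config kind-config.yaml) before applying the pod manifest.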
I have a Tomcat + Postgres application that I test with docker-compose. I am trying to package the application in a Kubernetes config file.
For now, I am running Kubernetes (and kubectl) using my Docker Desktop for Windows installation. Eventually, I want to deploy to other environments.
I am currently trying to replicate some of the volume functionality in docker-compose within the following config file.
apiVersion: v1
kind: Pod
metadata:
  name: pg-pod
spec:
  volumes:
    - name: "pgdata-vol"
      #emptyDir: {}
      hostPath:
        path: /c/temp/vols/pgdata
  containers:
    - image: postgres
      name: db
      ports:
        - containerPort: 5432
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/pgdata"
          name: "pgdata-vol"
      env:
        - name: PGDATA
          value: /pgdata
When postgres launches, I see the following error.
fixing permissions on existing directory /pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
2019-07-26 20:43:41.844 UTC [78] FATAL: data directory "/pgdata" has wrong ownership
2019-07-26 20:43:41.844 UTC [78] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/pgdata"
running bootstrap script ...
I presume that I either need to provide some additional parameters to my volume definition or I need to try a different type of volume config (local vs hostPath).
I found a partial solution to this issue.
Interestingly, if I assign a Linux-style path as my hostPath (on Windows), then my pgdata-vol persists until Docker Desktop is restarted.
Instead of mounting a real Windows location:
volumes:
  - name: "pgdata-vol"
    hostPath:
      path: /c/temp/vols/pgdata
I use a "Linux" location as my Windows hostPath:
volumes:
  - name: "pgdata-vol"
    hostPath:
      path: /tmp/vols/pgdata
Curiously, I cannot actually find this path from Windows. I presume this /tmp is local to my Docker Desktop instance.
This solution does not offer true persistence, but it has helped me to work around a roadblock that was impacting testing.
This is a known issue with the Postgres Docker image on Windows. Right now it is not possible to correctly mount Windows directories as data volumes. You may, however, try to work around it by using a persistent (named) Docker volume. For example:
db:
  image: postgres
  environment:
    - POSTGRES_USER=<user>
    - POSTGRES_PASSWORD=<pass>
    - POSTGRES_DB=<db_name>
  ports:
    - <ports>
  volumes:
    - pgdata:<path>
  networks:
    - <network>

volumes:
  pgdata:
More Information:
data directory "/var/lib/postgresql/data" has wrong ownership
postgresql-data-pgdata-has-wrong-ownership
postgres-to-work-on-persistent-windows-mount
Please let me know if that helped.
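Since the question is actually about Kubernetes rather than docker-compose, the closest equivalent of this named-volume workaround is to let a PersistentVolumeClaim provide the data directory instead of a Windows hostPath. This is a rough sketch only: the names pgdata-pvc and pg-pod are illustrative, and it assumes the Docker Desktop cluster has a default StorageClass that can provision the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pg-pod
spec:
  volumes:
    - name: pgdata-vol
      persistentVolumeClaim:
        claimName: pgdata-pvc
  containers:
    - image: postgres
      name: db
      ports:
        - containerPort: 5432
      env:
        - name: PGDATA
          # use a subdirectory so initdb does not trip over a lost+found
          # directory that some provisioners create at the mount root
          value: /pgdata/data
      volumeMounts:
        - mountPath: /pgdata
          name: pgdata-vol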
Have you tried using WSL? My setup on Windows is WSL + Ubuntu + Docker for Windows, and I can mount volumes normally.
I followed this tutorial to configure my whole environment:
https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
As a beginner in container administration, I can't find a clear description of OpenShift's deployment stages and the related YAML statements, specifically where persistent volume mounting and shell command execution are concerned. The Red Hat documentation has plenty of examples; a simple one is 16.4. Pod Object Definition:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod
  labels:
    name: busybox-nfs-pod
spec:
  containers:
  - name: busybox-nfs-pod
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: nfsvol-2
      mountPath: /usr/share/busybox
      readOnly: false
  securityContext:
    supplementalGroups: [100003]
    privileged: false
  volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc
Now the question is: does the sleep command (or any other) execute before or after the mount of nfsvol-2 has finished? In other words, is it possible to use the volume's contents in such commands? And if it's not possible with this config, which event handlers should be used instead? I don't see any mention of an event like "volume mounted".
does the command sleep (or any other) execute before or after the
mount of nfsvol-2 is finished?
To understand this, let's dig into the underlying concepts of OpenShift.
OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. In other words, OpenShift is essentially an abstraction layer on top of Docker and Kubernetes, with additional features.
Regarding volumes and commands, let's consider the following example.
Run a Docker container with a volume attached, mapping the host machine's /home directory onto /root inside the container (-v is the option that attaches a volume):
$ docker run -it -v /home:/root ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
50aff78429b1: Pull complete
f6d82e297bce: Pull complete
275abb2c8a6f: Pull complete
9f15a39356d6: Pull complete
fc0342a94c89: Pull complete
Digest: sha256:f871d0805ee3ce1c52b0608108dbdf1b447a34d22d5c7278a3a9dd78fc12c663
Status: Downloaded newer image for ubuntu:latest
root@1f07f083ba79:/# cd /root/
root@1f07f083ba79:~# ls
lost+found  raghavendralokineni  raghu  user1
root@1f07f083ba79:~/raghavendralokineni# pwd
/root/raghavendralokineni
Now execute the sleep command in the container and exit.
root@1f07f083ba79:~/raghavendralokineni# sleep 10
root@1f07f083ba79:~/raghavendralokineni#
root@1f07f083ba79:~/raghavendralokineni# exit
Check the files available in the host's /home path, which we mounted into the container. The content is the same as that of the /root path in the container:
raghavendralokineni#iconic-glider-186709:/home$ ls
lost+found raghavendralokineni raghu user1
So when a volume is mounted into the container, any changes made in the volume are reflected on the host machine as well.
Hence the volume is mounted when the container starts, and the command is executed after the container has started, i.e. once the mount is already in place.
Coming back to your YAML file:
volumeMounts:
- name: nfsvol-2
  mountPath: /usr/share/busybox
It says: mount the volume nfsvol-2 into the container; the information about the volume itself is defined under volumes:
volumes:
- name: nfsvol-2
  persistentVolumeClaim:
    claimName: nfs-pvc
So the volume is mounted into the container, and then the command that is specified is executed:
containers:
- name: busybox-nfs-pod
  image: busybox
  command: ["sleep", "60000"]
Hope this helps.
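A quick way to convince yourself of the ordering is to make the container's command read from the mount point. Below is a sketch based on the pod from the question; wrapping the command in sh -c and the ls are illustrative additions, not part of the original example:

containers:
- name: busybox-nfs-pod
  image: busybox
  # the mount at /usr/share/busybox is already in place when this runs,
  # so the ls shows whatever the NFS share contains before the sleep starts
  command: ["sh", "-c", "ls /usr/share/busybox && sleep 60000"]
  volumeMounts:
  - name: nfsvol-2
    mountPath: /usr/share/busybox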
I am trying to run a Docker image on Google Container Engine. The instance comes up with no running Docker containers. I can SSH in and run the docker commands manually, and the service comes up, but nothing happens when I just launch the instance from the terminal. Can someone take a look at what I am doing wrong?
My Dockerfile looks like this:
FROM golang
RUN mkdir -p /app
COPY . /app
RUN go get golang.org/x/tools/cmd/present
ENTRYPOINT cd /app && /go/bin/present -http=":8080"
EXPOSE 8080
containers.yaml looks like
version: v1beta3
containers:
  - name: talks
    image: sheki/talks
    ports:
      - name: http-port
        containerPort: 8080
        hostPort: 80'
The command to launch the instance is
gcloud compute instances create zoop \
--image container-vm \
--metadata-from-file google-container-manifest=containers.yaml \
--zone us-central1-a \
--machine-type f1-micro
You mentioned in your question that you are using Google Container Engine, but in fact you are using a container VM (which is a bit different). If you want to use Container Engine, please check out the documentation for creating a container cluster.
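With a current gcloud CLI, creating a container cluster and running the image there looks roughly like the following. This is only a sketch; the cluster name talks-cluster is illustrative, and the exact commands have changed across releases:

# create a managed Kubernetes cluster instead of a single container VM
gcloud container clusters create talks-cluster --zone us-central1-a

# deploy the image to the cluster
kubectl run talks --image=sheki/talks --port=8080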
I ran your example, and in /var/log/kubelet.log saw the following error:
E0519 17:05:41.285556 2414 http.go:54] Failed to read URL: http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/google-container-manifest: received 'version: v1beta3
containers:
  - name: talks
    image: sheki/talks
    ports:
      - name: http-port
        containerPort: 8080
        hostPort: 80'
', but couldn't parse as neither single (error unmarshaling JSON: json: cannot unmarshal string into Go value of type int: {Version:v1beta3 ID: UUID: Volumes:[] Containers:[{Name:talks Image:sheki/talks Entrypoint:[] Command:[] WorkingDir: Ports:[{Name:http-port HostPort:0 ContainerPort:8080 Protocol: HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} CPU:0 Memory:0 VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath: Privileged:false ImagePullPolicy: Capabilities:{Add:[] Drop:[]}}] RestartPolicy:{Always:<nil> OnFailure:<nil> Never:<nil>} DNSPolicy: HostNetwork:false}) or multiple manifests (error unmarshaling JSON: json: cannot unmarshal object into Go value of type []v1beta1.ContainerManifest: []) nor single (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"containerPort":8080,"hostPort":"80'","name":"http-port"}]}],"version":"v1beta3"}') or multiple pods (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"containerPort":8080,"hostPort":"80'","name":"http-port"}]}],"version":"v1beta3"}').
It looks like the documentation for container VMs is out of date.
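Note that the parse failure itself comes from the stray quote in hostPort: 80', which makes the port parse as the string "80'" instead of an integer (hence "cannot unmarshal string into Go value of type int"). Assuming the v1beta3 manifest format from the question is otherwise what the container VM expects, a corrected containers.yaml would look like this:

version: v1beta3
containers:
  - name: talks
    image: sheki/talks
    ports:
      - name: http-port
        containerPort: 8080
        hostPort: 80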