I've been developing an app on my local laptop (Mac) with Minikube. Instead of packaging the code and files into the Docker image, I use a hostPath volume and a volumeMount that point to the code/file directory on my Mac, so I can avoid rebuilding the image every time.
Now I would like to do the same iterative testing with Google Cloud. What's the best way to "mount" my local code/file directory and run pods remotely on the cloud? I don't want to package the code into a Docker image, push it to Docker Hub, and then pull it from Docker Hub on gcloud. My Docker Hub account is a free one and would expose my code.
You want:
You want to mount your local file system into your remote Kubernetes cluster.
Answer:
As far as I know, you can't do this. It's possible in Minikube because Minikube can mount your local directory into its node.
Solution:
I can suggest an alternative. It may not be exactly what you want, but it can help you.
Do you use Git? If you do, and you have no problem keeping your files in a Git repository, the following approach will help.
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
When you create this Pod, my-git-repository will be cloned into the /mypath directory inside the Pod's container.
Basically, you tell the Pod to pull a specific revision (or branch) of the repository. So every time you change your code, push it and then recreate the Pod, as shown below.
Read the Kubernetes documentation on volumes (the gitRepo section).
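The iteration loop then looks roughly like this (a sketch; the branch, Pod, and manifest names are placeholders):
# push the code change
git push origin my-branch
# update the revision (or branch) in pod.yaml, then recreate the Pod
kubectl delete pod my-pod
kubectl apply -f pod.yaml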
The easiest way to replicate your setup would be to use a storage bucket as the mount point.
For your setup, just pull the code from the storage bucket onto the host whenever you need to build. I am assuming you have a build script to handle the configuration part.
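As a rough sketch (the bucket name and paths are placeholders), syncing the code down from the bucket could look like:
gsutil -m rsync -r gs://<your-bucket>/src ./src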
However, as per the other answer, you could just use GCR (Google Container Registry) to host your images and use Deployment Manager to build.
Steps for using the Google Container Registry:
Build Docker Image
docker build -t <image-name>:<tag> <path-to-dockerfile>
Tag for GCloud Container Registry
docker tag <image-name>:<tag> us.gcr.io/<gcloud-project-id>/<image-name>:<tag>
Push to the Container Registry
gcloud docker -- push us.gcr.io/<gcloud-project-id>/<image-name>:<tag>
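Note that the gcloud docker wrapper is deprecated in newer gcloud releases; as an alternative you can configure Docker credentials once and push directly:
gcloud auth configure-docker
docker push us.gcr.io/<gcloud-project-id>/<image-name>:<tag>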
Your spec will then point to the container registry path:
spec:
  containers:
  - name: hello-world
    image: us.gcr.io/<gcloud-project-id>/<image-name>:<tag>
    ports:
    - name: http
      containerPort: 8080
Related
I am using the Bitnami PostgreSQL image to deploy a StatefulSet in my cluster. I am not sure how to initialize the schema for the PostgreSQL pod without building on top of the Bitnami image. I have looked around on the internet, and someone suggested using init containers, but I am not sure how exactly I would do that.
From the Github Readme of the Bitnami Docker image:
When the container is executed for the first time, it will execute the
files with extensions .sh, .sql and .sql.gz located at
/docker-entrypoint-initdb.d.
In order to have your custom files inside the docker image you can
mount them as a volume.
You can just mount such scripts under that directory using a ConfigMap volume. An example could be the following:
First, create the ConfigMap with the scripts, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: p-init-sql
  labels:
    app: the-app-name
data:
  01_init_db.sql: |-
    -- content of the script goes here
  02_second_init_db.sql: |-
    -- more content for another script goes here
Second, under spec.template.spec.volumes, you can add:
volumes:
- name: p-init-sql
  configMap:
    name: p-init-sql
Then, under spec.template.spec.containers[0].volumeMounts, you can mount this volume with:
volumeMounts:
- mountPath: /docker-entrypoint-initdb.d
  name: p-init-sql
That said, you may find it easier to use Helm charts.
Bitnami provides Helm charts for all its images, which simplify their usage a lot (everything is ready to be installed and configured from a simple values.yaml file).
For example, there is such a chart for PostgreSQL, which can serve as inspiration for how to configure the Docker image even if you decide to write your own Kubernetes resources around it.
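For instance, the PostgreSQL chart exposes an initdbScripts value for exactly this purpose (the parameter name may differ between chart versions), so a minimal values.yaml sketch could be:
initdbScripts:
  01_init_db.sql: |
    -- schema initialization goes here
    CREATE TABLE IF NOT EXISTS example (id SERIAL PRIMARY KEY);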
I've used the Bitnami Helm chart to install SCDF into a k8s cluster generated by kOps in AWS.
I'm trying to add my development SCDF stream apps into the installation using a file URI and cannot figure out where or how the shared Skipper & Server mount point is set up. Exec'ing into either instance, there is no /home/cnb, and I'm not seeing anything shared via mount. As best I can tell, the Bitnami installation is using the MariaDB instance for shared "storage".
Is there a recommended way of installing local/dev Stream apps into the cluster?
There are a couple of parameters under the deployer section that allow you to mount volumes (see the link below):
deployer:
  ## @param deployer.volumeMounts Streaming applications extra volume mounts
  ##
  volumeMounts: {}
  ## @param deployer.volumes Streaming applications extra volumes
  ##
  volumes: {}
see https://github.com/bitnami/charts/tree/master/bitnami/spring-cloud-dataflow#deployer-parameters.
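Assuming these follow the usual Kubernetes volume/volumeMount structure (a sketch only; check the chart's documentation for the exact format, and the volume name and paths here are placeholders), they could be filled in along these lines:
deployer:
  volumeMounts:
    - name: apps-volume
      mountPath: /applications
  volumes:
    - name: apps-volume
      hostPath:
        path: /cdf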
Then, the mounted volume is used in the ConfigMaps (both server and skipper):
Server
https://github.com/bitnami/charts/blob/c351211a5501bb44b5e065a5e3a7d4b7414f84f3/bitnami/spring-cloud-dataflow/templates/server/configmap.yaml#L60
Skipper
https://github.com/bitnami/charts/blob/c351211a5501bb44b5e065a5e3a7d4b7414f84f3/bitnami/spring-cloud-dataflow/templates/skipper/configmap.yaml#L72
Apart from that, there are also server.extraVolumes and server.extraVolumeMounts to be set on the Dataflow Server Pod, and skipper.extraVolumes and skipper.extraVolumeMounts to be set on the Skipper Pod, in case that's useful for your use case.
Building on the previous answer, here is what I came up with:
Create an EBS Volume
Mount it on each EC2 instance in the cluster at the same location (/cdf)
Install CDF using the Bitnami chart and this config file:
server:
  extraVolumeMounts:
    # Location in container
    - mountPath: /applications
      # Refer to the volume below
      name: application-volume
  extraVolumes:
    - name: application-volume
      hostPath:
        # Location in host filesystem
        path: /cdf
        # this field is optional
        type: Directory
skipper:
  extraVolumeMounts:
    # Location in container
    - mountPath: /applications
      # Refer to the volume below
      name: application-volume
  extraVolumes:
    - name: application-volume
      hostPath:
        # Location in host filesystem
        path: /cdf
        # this field is optional
        type: Directory
Then I can copy my jars into /cdf on the host file system and install the applications using a file URI of file:///applications/<jar-file-name> and everything works.
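For reference, registering such a jar from the Data Flow shell then looks something like this (the app name, type, and jar file name are placeholders):
dataflow:> app register --name my-source --type source --uri file:///applications/my-source-0.0.1.jar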
I have tried to deploy one of the local container images I created, but I keep getting the error below:
Failed to pull image "webrole1:dev": rpc error: code = Unknown desc =
Error response from daemon: pull access denied for webrole1,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied
I followed an article to containerize my application and completed that successfully, but when I try to deploy it to a k8s Pod, it doesn't work.
My pod.yaml looks like below
apiVersion: v1
kind: Pod
metadata:
  name: learnk8s
spec:
  containers:
  - name: webrole1dev
    image: 'webrole1:dev'
    ports:
    - containerPort: 8080
and below are some images from my PowerShell
I am new to Docker and k8s, so thanks in advance for the help; I would appreciate a detailed response.
When you're working locally, you can use an image name like webrole1; however, that doesn't tell Docker where the image came from (because it didn't come from anywhere, you built it locally). When you start working with multiple hosts, you need to push the image to a Docker registry. For local Kubernetes experiments you can also change your setup so that you build your image in the same Docker environment Kubernetes is using, though the specifics depend on how you set up both Docker and Kubernetes; see the sketch below.
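For example, with Minikube you can build the image inside the cluster's own Docker daemon so no registry is needed, or push the image to a registry you control (a sketch; the registry and user names are placeholders):
# Option 1: build inside Minikube's Docker daemon, then set imagePullPolicy: Never (or IfNotPresent) in the Pod spec
eval $(minikube docker-env)
docker build -t webrole1:dev .
# Option 2: push to a registry and reference the full image name in the Pod spec
docker tag webrole1:dev <registry>/<user>/webrole1:dev
docker push <registry>/<user>/webrole1:dev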
What do I have to put into a container to get the agent to run? Just libjprofilerti.so on its own doesn't work; I get
Could not find agent.jar. The agentpath parameter must point to
libjprofilerti.so in an unmodified JProfiler installation.
which sounds like obvious nonsense to me - surely I can't have to install over 137.5 MB of files, 99% of which will be irrelevant, in each container in which I want to profile something?
-agentpath:/path/to/libjprofilerti.so=nowait
One approach is to use an init container.
The idea is to have an image for JProfiler separate from the application's image. Use the JProfiler image for an init container; the init container copies the JProfiler installation to a volume shared between it and the other containers that will be started in the Pod. This way, the JVM can reference the JProfiler agent from the shared volume at startup time.
It goes something like this (more details are in this blog article):
Define a new volume:
volumes:
- name: jprofiler
  emptyDir: {}
Add an Init Container:
initContainers:
- name: jprofiler-init
  image: <JPROFILER_IMAGE:TAG>
  command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
  volumeMounts:
  - name: jprofiler
    mountPath: "/tmp/jprofiler"
Replace /jprofiler/ above with the correct path to the installation directory in the JProfiler image. Notice that the copy command places the JProfiler installation under /tmp/jprofiler, which is the volume's mount path in the init container.
Define volume mount:
volumeMounts:
- name: jprofiler
  mountPath: /jprofiler
Add JProfiler as an agent to the JVM startup arguments:
-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849
Notice that there isn't a "nowait" argument, so the JVM will block at startup and wait for a JProfiler GUI to connect. The reason is that, with this configuration, the profiling agent receives its profiling settings from the JProfiler GUI.
Change the application deployment to start with only one replica. Alternatively, start with zero replicas and scale to one when ready to start profiling.
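Scaling the deployment can be done with a command along these lines (the namespace and deployment name are placeholders):
kubectl -n <namespace> scale deployment <deployment-name> --replicas=1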
To connect from the JProfiler's GUI to the remote JVM:
Find out the name of the pod (e.g. kubectl -n <namespace> get pods) and set up port forwarding to it:
kubectl -n <namespace> port-forward <pod-name> 8849:8849
Start JProfiler up locally and point it to 127.0.0.1, port 8849.
Change the local port 8849 (the number to the left of :) if it isn't available; then, point JProfiler to that different port.
Looks like you are missing the general concept here.
It's nicely explained why to use containers in the official documentation.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Of course you don't need to install the libraries in each container separately.
Kubernetes is using Volumes to share files between Containers.
So you can create a local-type Volume with the JProfiler libraries inside.
A local volume represents a mounted local storage device such as a disk, partition or directory.
You also need to keep in mind that if you share the Volume between Pods, those Pods will not automatically know about the JProfiler libraries being attached. You will need to configure the Pod with the correct environment variables/files through the use of Secrets or ConfigMaps.
You can configure your Pod to pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: jp-pod
  name: jp-pod
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: jp
    envFrom:
    - secretRef:
        name: jp-secret
jp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: jp-secret
type: Opaque
stringData:
  JPAGENT_PATH: "-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
I hope this helps you.
I have a K8S cluster running on DigitalOcean. I have a PostgreSQL database running there, and I want to create a volume using DigitalOcean Block Storage to be used by the PostgreSQL pod. Are there any examples of how to do that?
If it's not possible to use DigitalOcean Block Storage, then how do most companies run their persistent storage for databases?
There is no official support yet. You can try the example someone posted in this GitHub issue:
Update: I finished writing a volume plugin for digitalocean. Attach/detach is working on my cluster. Looking for anyone willing to
test this on their k8s digitalocean cluster. My branch is
https://github.com/wardviaene/kubernetes/tree/do-volume
You can use the following spec in your pod yml:
spec:
  containers:
  - name: k8s-demo
    image: yourimage
    volumeMounts:
    - mountPath: /myvol
      name: myvolume
    ports:
    - containerPort: 3000
  volumes:
  - name: myvolume
    digitaloceanVolume:
      volumeID: mykubvolume
      fsType: ext4
Here, mykubvolume is the volume created in DigitalOcean in the same region.
You will need to create a config file:
[Global]
apikey = do-api-key
region = your-region
and add these parameters to your Kubernetes processes:
--cloud-provider=digitalocean --cloud-config=/etc/cloud.config
I'm still waiting for an issue in the godo driver to be resolved,
before I can submit a PR (digitalocean/godo#102)
I found this link about FlexVolumes, which mentions how you can customize Kubernetes to load vendor volumes. There is also a script showing how to do this.
A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage.
https://github.com/digitalocean/csi-digitalocean
I have tested it with a MySQL StatefulSet, and it works fine.
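With the CSI driver installed, a PersistentVolumeClaim for the PostgreSQL pod could look like this (a sketch; the StorageClass installed by the driver is typically named do-block-storage, but verify the name in your cluster, and the claim name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi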