How to install the JProfiler agent in a Kubernetes container?

What do I have to put into a container to get the agent to run? Just libjprofilerti.so on its own doesn't work; I get
Could not find agent.jar. The agentpath parameter must point to
libjprofilerti.so in an unmodified JProfiler installation.
which sounds like obvious nonsense to me - surely I can't have to install over 137.5 MB of files, 99% of which will be irrelevant, in each container in which I want to profile something? The JVM flag I'm starting with is:
-agentpath:/path/to/libjprofilerti.so=nowait

One approach is to use an Init Container.
The idea is to have an image for JProfiler separate from the application's image. Use the JProfiler image for an Init Container; the Init Container copies the JProfiler installation to a volume shared between it and the other containers that will be started in the Pod. This way, the JVM can reference the JProfiler agent on the shared volume at startup.
It goes something like this (more details are in this blog article):
Define a new volume:
volumes:
- name: jprofiler
  emptyDir: {}
Add an Init Container:
initContainers:
- name: jprofiler-init
  image: <JPROFILER_IMAGE:TAG>
  command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
  volumeMounts:
  - name: jprofiler
    mountPath: "/tmp/jprofiler"
Replace /jprofiler/ above with the correct path to the installation directory in the JProfiler image. Notice that the copy command places the JProfiler installation under the /tmp/jprofiler directory, which is used as the mount path.
Define volume mount:
volumeMounts:
- name: jprofiler
  mountPath: /jprofiler
Add JProfiler as an agent to the JVM startup arguments:
-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849
Notice that there isn't a "nowait" argument. As a result, the JVM will block at startup and wait for a JProfiler GUI to connect; with this configuration, the profiling agent receives its profiling settings from the JProfiler GUI.
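How the argument reaches the JVM depends on how your image starts Java. One option, sketched below under the assumption that your application honors the standard JAVA_TOOL_OPTIONS environment variable (the container and image names are placeholders), is to set it in the container spec:
containers:
- name: app
  image: <APP_IMAGE:TAG>
  env:
  - name: JAVA_TOOL_OPTIONS   # picked up automatically by HotSpot JVMs at startup
    value: "-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"
  volumeMounts:
  - name: jprofiler           # the shared volume populated by the Init Container
    mountPath: /jprofiler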
Change the application deployment to start with only one replica. Alternatively, start with zero replicas and scale to one when ready to start profiling.
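For example, assuming the application is managed by a Deployment named my-app (a placeholder):
kubectl -n <namespace> scale deployment my-app --replicas=1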
To connect from the JProfiler GUI to the remote JVM:
Find out the name of the pod (e.g. kubectl -n <namespace> get pods) and set up port forwarding to it:
kubectl -n <namespace> port-forward <pod-name> 8849:8849
Start JProfiler up locally and point it to 127.0.0.1, port 8849.
Change the local port 8849 (the number to the left of the colon) if it isn't available; then point JProfiler to that different port.
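For example, to use local port 9849 instead (an arbitrary free port):
kubectl -n <namespace> port-forward <pod-name> 9849:8849
Then point JProfiler to 127.0.0.1, port 9849.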

Looks like you are missing the general concept here.
Why containers are used is nicely explained in the official documentation.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Of course you don't need to install the libraries in each container separately.
Kubernetes uses Volumes to share files between containers.
So you can create a local-type Volume with the JProfiler libraries inside.
A local volume represents a mounted local storage device such as a disk, partition or directory.
You also need to keep in mind that if you share the Volume between Pods, those Pods will not know about the JProfiler libs being attached. You will need to configure the Pod with the correct environment variables/files through the use of Secrets or ConfigMaps.
You can configure your Pod to pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
  name: jp-pod
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: jp
    envFrom:
    - secretRef:
        name: jp-secret
jp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: jp-secret
type: Opaque
stringData:
  JPAGENT_PATH: "-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
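Note that values under data must be base64-encoded; stringData (used above) accepts plain strings. Equivalently, you can create the Secret from the command line and let kubectl handle the encoding:
kubectl create secret generic jp-secret \
  --from-literal=JPAGENT_PATH='-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait'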
I hope this helps you.

Related

Kubernetes Pod mount first container's content into a second container

I've been searching, and every answer seems to be the same example (https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). In a pod you can create an empty volume, then mount it into two containers, and any content written to that mount will be seen in each container. While this is fine, my use case is slightly different.
Container A
/opt/content
Container B
/data
Container A has an install of about 4G of data. What I would like to do is mount /opt/content into Container B at /content. This way the 4G of data is accessible to Container B at runtime and I don't have to copy content or specially build Container B.
My question is: is this possible? If it is, what would be the proper pod syntax?
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /opt/content
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /content
From my research and testing, the best I can tell is that within a Pod two containers cannot see each other's file system. The volume mount will allow each container to have a mount created in the pod at the specified path (as the example shows), and any items written to it after that point will be seen in both. This works great for logs and the like.
In my context, this proves not to be possible: creating this mount and then having Container A copy the 4G directory to the newly created mount is too time-consuming to make this an option.
Best I can tell, the only way to do this is to create a PersistentVolume or something similar and mount that in Container B. This way Container A's contents are stored in the PersistentVolume and can be easily mounted when needed. The only issue with this is that the PersistentVolume will have to be set up in every Kube cluster, which is the pain point.
If any of this is wrong and I just didn't find the right document, please correct me. I would love to be able to do this.
Your code example in your question should work. Both containers use the same volume; you just mount it at different locations in each container.
nginx-container will have the shared-data content in /opt/content and debian-container will have it in /content.
With mountPath you specify where the volume should be mounted in the container.
When a container is started, first the container image (or more precisely, the layers of an image) is mounted. Afterwards, your custom volumes are mounted, hiding any data from the image at and below the mount path. So sharing data from an image among several containers without copying it is not possible.
The typical solution is, and remains, to use an init container which downloads or copies the actual data into an ephemeral volume, which is then shared by one or more other containers (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container).
initContainers:
- name: init
  image: <image-containing-the-data-based-on-some-basic-image>
  command: ["sh", "-c", "cp -ar /opt/content/* /mnt/target/"]
  volumeMounts:
  - name: shared-data
    mountPath: /mnt/target
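For completeness, a minimal sketch of the consuming container and the shared volume that go with the init container above (the app image name is a placeholder):
containers:
- name: app
  image: <your-app-image>
  volumeMounts:
  - name: shared-data
    mountPath: /content   # the data copied by the init container shows up here
volumes:
- name: shared-data
  emptyDir: {}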
What you would actually need is a container storage interface (CSI) driver that supports creating volumes from container images. I found two projects that do exactly that, but neither claims to be ready for production.
https://github.com/kubernetes-csi/csi-driver-image-populator
https://github.com/warm-metal/csi-driver-image

Schema initialization in Bitnami PostgreSQL image for Kubernetes cluster

I am using the Bitnami PostgreSQL image to deploy a StatefulSet on my cluster. I am not sure how to initialize the schema for the PostgreSQL pod without building on top of the Bitnami image. I have looked around on the internet, and someone said to use init containers, but I am also not sure how exactly I would do that.
From the GitHub README of the Bitnami Docker image:
When the container is executed for the first time, it will execute the
files with extensions .sh, .sql and .sql.gz located at
/docker-entrypoint-initdb.d.
In order to have your custom files inside the docker image you can
mount them as a volume.
You can just mount such scripts under that directory using a ConfigMap volume. An example could be the following:
First, create the ConfigMap with the scripts, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: p-init-sql
  labels:
    app: the-app-name
data:
  01_init_db.sql: |-
    -- content of the script goes here
  02_second_init_db.sql: |-
    -- more content for another script goes here
Second, under spec.template.spec.volumes, you can add:
volumes:
- name: p-init-sql
  configMap:
    name: p-init-sql
Then, under spec.template.spec.containers[0].volumeMounts, you can mount this volume with:
volumeMounts:
- mountPath: /docker-entrypoint-initdb.d
  name: p-init-sql
With this said, you may find it easier to use Helm charts.
Bitnami provides Helm charts for all its images, which simplify their usage by a lot (everything is ready to be installed and configured from a simple values.yaml file).
For example, there is such a chart for postgresql, which can serve as inspiration for how to configure the Docker image even if you decide to write your own Kubernetes resources around that image.
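For instance, a minimal sketch of installing that chart (the release name my-postgres is a placeholder; the Bitnami chart repository URL is the documented one):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql -f values.yaml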

Share local directory with Kind Kubernetes Cluster using hostpath

I want to share my non-empty local directory with a kind cluster.
Based on the answer here: How to reference a local volume in Kind (kubernetes in docker)
I tried a few variations of the following:
Kind Cluster yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/xyz/documents/k8_automation/data/manual/
    containerPath: /host_manual
  extraPortMappings:
  - containerPort: 30000
    hostPort: 10000
Pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: manual
spec:
  serviceAccountName: manual-sa
  containers:
  - name: tools
    image: tools:latest
    imagePullPolicy: Never
    command:
    - bash
    tty: true
    volumeMounts:
    - mountPath: /home/jenkins/agent/data
      name: data
  volumes:
  - name: data
    hostPath:
      path: /host_manual
      type: Directory
---
I see that the directory /home/jenkins/agent/data does exist when the pod gets created. However, the folder is empty.
kind's documentation is here: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
It should be the case that whatever is on the local machine at hostPath (/Users/xyz/documents/k8_automation/data/manual/) in extraMounts in the cluster YAML becomes available to the node at containerPath (/host_manual), which then gets mounted into the container at the volume mountPath (/home/jenkins/agent/data).
I should add that even if I change the hostPath in the cluster YAML file to a non-existent folder, the empty "data" folder still gets mounted in the container, so I think it's the connection from my local machine to the kind cluster that's the issue.
Why am I not getting the contents of /Users/xyz/documents/k8_automation/data/manual/, with its many files, also available at /home/jenkins/agent/data in the container?
How can I fix this?
Any alternatives if there is no fix?
Turns out the YAML configuration was just fine.
The reason the directory was not showing up in the container was related to Docker settings. And because "kind is a tool for running local Kubernetes clusters using Docker container 'nodes'", those settings matter.
It seems Docker restricts resource sharing and, by default, allows only specific directories to be bind-mounted into Docker containers. Once I added the directory I wanted to show up in the container to the list of directories under Preferences -> Resources -> File sharing, it worked!
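A quick way to check whether Docker can bind-mount a given directory at all, independent of kind (busybox is used only as a throwaway image):
docker run --rm -v /Users/xyz/documents/k8_automation/data/manual:/test busybox ls /test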

Does every container in the same pod have an independent file system?

To share a file between two containers within a pod, I must use a volume and create the file in the mount path of this volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-5
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "if [ -f /workdir/foo ]; then sleep 3600; else exit; fi"]
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  initContainers:
  - name: install
    image: busybox
    command: ["sh", "-c", "touch /workdir/foo; hostname > /workdir/foo"]
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  volumes:
  - name: workdir
    emptyDir: {}
If I don't use a volume, and instead create a file in the init container and try to read it from the other container, it will not work.
apiVersion: v1
kind: Pod
metadata:
  name: pod-5
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "if [ -f /workdir/foo ]; then sleep 3600; else exit; fi"]
  initContainers:
  - name: install
    image: busybox
    command: ["sh", "-c", "touch /workdir/foo; hostname > /workdir/foo"]
Why? I thought all containers within a pod should share both network and file system.
Containers within the same pod share the network namespace and IPC namespace, but they have separate mount namespaces and file systems. Hence we use volumes for sharing files. To learn more about namespaces, check the Linux namespaces documentation.
Let’s start by explaining what a Pod is in the first place. A Pod is the smallest unit that can be deployed and managed by Kubernetes. In other words, if you need to run a single container in Kubernetes, then you need to create a Pod for that container. At the same time, a Pod can contain more than one container, usually because these containers are relatively tightly coupled. How tightly coupled? Well, think of it this way: the containers in a pod represent processes that would have run on the same server in a pre-container world.
Now think of a pod as your local machine, where you are trying to run your containers.
Let's say, for example, you have the init container (container 1) and the main container (container 2) running in the same network. They are both running in your local environment. Now, if you create a file in one container and expect the file to be present in the other container, that's simply not true. The file is present in a different container, in its own file system, and there is no way the other container can access it. But to share the file system between two containers, you can create a volume mount from your local machine into container 1 and then mount the same path into container 2. Thus both containers can share the file system.
The same thing applies to a Pod in a Kubernetes environment as well.
In Kubernetes, you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. For most cases, it is sufficient to use a directory on the host that is shared with all containers within a Pod.
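You can verify this with the first example above (pod-5): the file written by the init container is visible from the main container, e.g.:
kubectl exec pod-5 -c busybox -- cat /workdir/foo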

Kubernetes on google cloud with hostPath mount

I've been developing an app on my local laptop (Mac) with Minikube. Instead of packaging the code and files into the Docker image, I use a hostPath volume mount that points to the code/file directory on my Mac, so that I can avoid rebuilding the image every time.
Now I would like to do the same iterative testing with Google Cloud. What's the best way to "mount" my local code/file directory and run pods remotely on the cloud? I don't want to package the code into a Docker image, push it to Docker Hub, and then pull from Docker Hub on gcloud. My Docker Hub account is free and would expose my code.
You want:
You want to mount your local file system into your remote Kubernetes cluster.
Answer:
As far as I know, you can't do this. It's possible in Minikube because you can mount your local directory into Minikube.
Solution:
I can tell you an alternative way. Maybe this is not what you want, but it can help you.
Do you use Git? If your answer is yes, and if you have no problem keeping your files in a Git repository, the following process will help you.
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
When you create this Pod, my-git-repository will be mounted into the directory /mypath inside your Pod's container.
Basically, you can tell your Pod to pull this repository at a specific revision. So every time you change your code, push it, then create the Pod again.
Read volumes/#gitrepo
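Note that the gitRepo volume type has since been deprecated; the replacement suggested in the Kubernetes documentation is an init container that clones the repository into an emptyDir volume. A sketch, reusing the placeholder repository above (alpine/git is just one image that ships a git binary):
initContainers:
- name: git-clone
  image: alpine/git    # entrypoint is git, so args below form the git command
  args: ["clone", "--single-branch", "git@somewhere:me/my-git-repository.git", "/mypath"]
  volumeMounts:
  - name: git-volume
    mountPath: /mypath
volumes:
- name: git-volume
  emptyDir: {}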
The easiest method to replicate your setup would be to use a storage bucket for the mount point.
For your setup, just pull the code from the storage bucket to the local host when you need to build. I am assuming you have a build script to do the configuration part.
However, as per the other answer, you could just use GCR to host your config files and use Deployment Manager to build.
Steps for using the Google Container Registry:
Build Docker Image
docker build -t <image-name>:<tag> <path-to-build-context>
Tag for GCloud Container Registry
docker tag <image-name>:<tag> us.gcr.io/<gcloud-project-id>/<image-name>:<tag>
Push to the Container Registry
gcloud docker -- push us.gcr.io/<gcloud-project-id>/<image-name>:<tag>
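Note that the gcloud docker -- push wrapper has since been deprecated; the current equivalent is to register gcloud as a Docker credential helper once and then push directly:
gcloud auth configure-docker
docker push us.gcr.io/<gcloud-project-id>/<image-name>:<tag>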
Your spec will then point to the container registry path:
spec:
  containers:
  - name: hello-world
    image: us.gcr.io/<gcloud-project-id>/<image-name>:<tag>
    ports:
    - name: http
      containerPort: 8080