Serving MLFlow artifacts through `--serve-artifacts` without passing credentials - kubernetes

A new version of MLflow (1.23) provides a `--serve-artifacts` option (added via this pull request), along with some example code. As I understand it, this should let me simplify the rollout of a server for data scientists: I only need to give them one URL for the tracking server, rather than a URI for the tracking server, a URI for the artifact store, and a username/password for the artifact store.
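For context, here is a minimal sketch of what I expect the client side to look like once the proxy works; the host name and experiment name are placeholders of my own, and the rest is standard MLflow client usage.
import mlflow

# Only the tracking server URL should be needed; no MinIO endpoint or AWS keys.
mlflow.set_tracking_uri("http://<name-of-server>:8090")
mlflow.set_experiment("proxy-test")  # hypothetical experiment name

with mlflow.start_run():
    with open("hello.txt", "w") as f:
        f.write("hello artifacts\n")
    # With --serve-artifacts, this upload should be proxied through the tracking server.
    mlflow.log_artifact("hello.txt")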
A complication that I have is that I need to use podman instead of docker for my containers (and without relying on podman-compose). I ask that you keep those requirements in mind; I'm aware that this is an odd situation.
What I did before this update (for MLflow 1.22) was to create a Kubernetes play YAML config and start a pod with a podman play kube ... command; from a different machine I could then run an experiment and save artifacts after setting the appropriate four env variables. I've been struggling to get things working with the newest version.
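(For reference, this is roughly what the old client-side setup looked like; the four variables are presumably the tracking URI plus the MinIO endpoint and credentials that show up again further down, with the same placeholder values.)
import os
import mlflow

# MLflow 1.22 style: the client writes artifacts to MinIO directly,
# which is why it needs all four of these.
os.environ["MLFLOW_TRACKING_URI"] = "http://<name-of-server>:8090"
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://<name-of-server>:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minioadmin"
os.environ["AWS_SECRET_ACCESS_KEY"] = "minioadmin"

with mlflow.start_run():
    mlflow.log_metric("dummy", 1.0)  # artifacts go straight to the MinIO bucket from the client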
I am following the docker-compose example provided here. I am trying a (hopefully) simpler approach. The following is my kubernetes play file defining a pod.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-01-14T19:07:15Z"
  labels:
    app: mlflowpod
  name: mlflowpod
spec:
  containers:
  - name: minio
    image: quay.io/minio/minio:latest
    ports:
    - containerPort: 9001
      hostPort: 9001
    - containerPort: 9000
      hostPort: 9000
    resources: {}
    tty: true
    volumeMounts:
    - mountPath: /data
      name: minio-data
    args:
    - server
    - /data
    - --console-address
    - :9001
  - name: mlflow-tracking
    image: localhost/mlflow:latest
    ports:
    - containerPort: 80
      hostPort: 8090
    resources: {}
    tty: true
    env:
    - name: MLFLOW_S3_ENDPOINT_URL
      value: http://127.0.0.1:9000
    - name: AWS_ACCESS_KEY_ID
      value: minioadmin
    - name: AWS_SECRET_ACCESS_KEY
      value: minioadmin
    command: ["mlflow"]
    args:
    - server
    - -p
    - 80
    - --host
    - 0.0.0.0
    - --backend-store-uri
    - sqlite:///root/store.db
    - --serve-artifacts
    - --artifacts-destination
    - s3://mlflow
    - --default-artifact-root
    - mlflow-artifacts:/
    # - http://127.0.0.1:80/api/2.0/mlflow-artifacts/artifacts/experiments
    - --gunicorn-opts
    - "--log-level debug"
    volumeMounts:
    - mountPath: /root
      name: mlflow-data
  volumes:
  - hostPath:
      path: ./minio
      type: Directory
    name: minio-data
  - hostPath:
      path: ./mlflow
      type: Directory
    name: mlflow-data
status: {}
I start this with podman play kube mlflowpod.yaml. On the same machine (or a different one, it doesn't matter), I have cloned and installed mlflow into a virtual environment. From that virtual environment, I set the environment variable MLFLOW_TRACKING_URI to <name-of-server>:8090. I then run the example.py file in the mlflow_artifacts example directory. I get the following response:
....
botocore.exceptions.NoCredentialsError: Unable to locate credentials
It seems like the client needs the server's MinIO credentials, which I thought the proxy was supposed to take care of.
If I also provide the env variables
$env:MLFLOW_S3_ENDPOINT_URL="http://<name-of-server>:9000/"
$env:AWS_ACCESS_KEY_ID="minioadmin"
$env:AWS_SECRET_ACCESS_KEY="minioadmin"
Then things work. But that kind of defeats the purpose of the proxy...
What is it about the proxy setup via the Kubernetes play YAML and podman that is going wrong?

Just in case anyone stumbles upon this: I had the same issue based on your description. However, the problem on my side was that I tested this with a preexisting experiment (the default one) and did not create a new one, so the old artifact-location setting carried over, resulting in MLflow trying to reach S3 directly with credentials instead of going through the HTTP proxy.
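A quick sketch of the check I used, in case it helps; the experiment name is made up, and the point is just that a freshly created experiment picks up the proxied artifact location while the old default experiment keeps its original one.
import mlflow

mlflow.set_tracking_uri("http://<name-of-server>:8090")

# Create a brand-new experiment so its artifact location comes from the 1.23 server settings.
exp_id = mlflow.create_experiment("fresh-proxied-experiment")
print(mlflow.get_experiment(exp_id).artifact_location)  # expect an mlflow-artifacts:/ URI
print(mlflow.get_experiment("0").artifact_location)     # the old default experiment keeps its old setting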
Hope this helps at least some of you out there.

Related

Kubernetes container grabbing variables from other container

UPDATE: Apologies for perhaps causing controversy, but it seems there was another cronjob running that was also calling a function that grabbed those apiKeys from the DB; I was not sure until I separated out the part that reads them from the environment variables ;_;.
So basically this whole post is wrong and one container was not grabbing env variables from another container. I am so ashamed that I wanted to delete this question, but I'm not sure whether that is a good idea or not.
A Kubernetes pod running two instances of basically the same NodeJS application seems to be taking environment variables from the other container. I logged the variable and it showed me the correct one, but when the app makes a request it seems to show two different results.
These variables are taken from two different secrets.
I have checked inside each container that they do indeed have different env variables, but for some reason, when NodeJS makes these requests out to a third-party API, it grabs both of the variables.
Yes, they do have the same name.
In the image below, you can see some log entries showing the Authorization header for an HTTP request; this header is taken from an environment variable. Technically speaking it should always stay the same, but for some reason it grabs the other one as well.
Here is the pod in YAML:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: <REDACTED>/32
    cni.projectcalico.org/podIPs: <REDACTED>/32
    kubectl.kubernetes.io/restartedAt: '2021-01-20T15:29:12Z'
  labels:
    app: mimercado-api
    pod-template-hash: 77fb65575
  name: mimercado-deployment-77fb65575-tpbsp
  namespace: default
spec:
  containers:
  - envFrom:
    - secretRef:
        name: secrets-mimercado-a
    image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
    imagePullPolicy: Always
    name: mimercado-a
    ports:
    - containerPort: 8080
    volumeMounts:
    - mountPath: /srv/mindi-mimercado/logfiles
      name: mindi-mimercado-a-logdir
  - envFrom:
    - secretRef:
        name: secrets-mimercado-b
    image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
    imagePullPolicy: Always
    name: mimercado-b
    ports:
    - containerPort: 8085
    volumeMounts:
    - mountPath: /srv/mindi-mimercado/logfiles
      name: mindi-mimercado-b-logdir
  imagePullSecrets:
  - name: regcred
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  serviceAccountName: default
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-a/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      type: DirectoryOrCreate
    name: mindi-mimercado-a-logdir
  - hostPath:
      path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-b/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      type: DirectoryOrCreate
    name: mindi-mimercado-b-logdir
There are still a lot of unknowns regarding your overall config, but if it helps, here are the potential issues that I see.
The fact that your requests return each secret in such a consistent manner leads me to believe that your pod configuration might be fine, but something else is routing your requests to both containers. This is easy to verify: display the logs of both containers simultaneously by running the following commands in two different terminals:
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-a
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-b
Send some requests like you did in your screenshot. If your requests appear to be distributed to both containers, it means that something is misconfigured in your service or ingress.
You might have old resources still around with slightly different configurations, or your service's label selector might be matching more than just your pod. Check that only this pod, only one service, and only one ingress are present. Also check that you don't have other deployments/pods/services with labels that overlap with your pod's.
You are using envFrom, which loads all the entries from your secret into your environment. Check that you don't have both entries in one of your secrets. You can also switch to the env form to be safe:
env:
- name: MY_SECRET
  valueFrom:
    secretKeyRef:
      name: secrets-mimercado-a
      key: my-secret-key
This is probably not even possible, but... I don't see any config that changes the port on which your app is listening. containerPort only tells Kubernetes which port your container is using, not which port your Node app should bind to. It shouldn't be possible for both containers to bind to the same port of the pod, but if you are running a deployment rather than a single pod, some pods of your deployment might have different containers bound to a specific port.
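To illustrate that last point: the application itself decides which port it binds to, typically from its own config or an environment variable; containerPort is purely informational. Your apps are Node.js, but here is the same idea as a tiny Python sketch (the PORT variable name is just an example).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# The bind port comes from the process environment, not from containerPort in the pod spec.
PORT = int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"serving on port {PORT}\n".encode())

HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()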

Copy file inside Kubernetes pod from another container

I need to copy a file into my pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volumeMount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl','cp','./test.json','init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. Your local machine is the one where the file exists, and you are trying to copy it to another machine (the pod's machine) with cp; that is what is happening here, and it cannot work from inside the pod.
What you can do instead is build your own Docker image for the init container and copy the file you want into it before building the image. The init container can then copy that file into the shared volume where you want it stored.
I agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I also think there are some resources that could be added to show you how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify above example to your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod, you will need the required permissions to access the Kubernetes API. You can do this by using a serviceAccount with appropriate permissions. More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
Your bitnami/kubectl container will run into CrashLoopBackOff errors because you're passing a single command that runs to completion. After that, the container reports status Completed and is restarted, resulting in the aforementioned CrashLoopBackOff. To avoid that you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.

Volume shared between two containers "is busy or locked"

I have a deployment that runs two containers. One of the containers attempts to build (during deployment) a javascript bundle that the other container, nginx, tries to serve.
I want to use a shared volume to place the javascript bundle after it's built.
So far, I have the following deployment file (with irrelevant pieces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
To the best of my ability, I have followed these guides:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
One other thing to point out is that I'm currently trying to run this locally using minikube.
EDIT: The Dockerfile I used to build this image is:
FROM node:alpine
WORKDIR /var/app
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest
CMD ["npm", "run", "build"]
I realize that I do not need to build this when I actually run the image, but my next goal is to insert pod instance information as environment variables, and with JavaScript I unfortunately can only build once that information is available to me.
Problem
The logs from the personal-site container reveal:
- Building for production...
ERROR Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
I'm not sure why the build is trying to remove /dist, but I also have a feeling that this is irrelevant. I could be wrong?
I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".
Question
What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?
The best way is to customize your image's entrypoint as follows:
Once you finish building the /var/app/dist folder, copy (or move) it to another empty path (e.g. /opt/dist):
cp -r /var/app/dist/* /opt/dist
PAY ATTENTION: this step must be done in the ENTRYPOINT script, not in a RUN layer.
Now use /opt/dist instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /opt/dist # <--- make it consistent with image's entrypoint algorithm
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
Good luck!
If it's not clear how to customize the entrypoint, share the image's entrypoint with us and we will implement it.

Why does the path not get mounted?

I've created a manifest file that looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
  - name: "kuard-data"
    hostPath:
      path: "/home/developer/kubernetes/exercises"
  containers:
  - image: gcr.io/kuar-demo/kuard-amd64:1
    name: kuard
    volumeMounts:
    - mountPath: "/data"
      name: "kuard-data"
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
As you can see, the hostpath is:
path: "/home/developer/kubernetes/exercises"
and the mountPath is:
mountPath: "/data"
I've created a hello.txt file in the folder /home/developer/kubernetes/exercises, but when I enter the pod via kubectl exec -it kuard ash I cannot find the file hello.txt.
Where is the file?
kind uses Docker containers to simulate Kubernetes nodes, so when you create files on your host (your Ubuntu machine), the containers will not automatically have access to them.
(This gets even more complicated on macOS or Windows, where Docker itself runs in a separate virtual machine...)
I assume that there are some shared folders visible inside the kind Docker nodes, but I could not find this documented.
You can inspect the filesystem content of the Docker node by entering the container with docker exec -it kind-control-plane /bin/sh and then working with the usual tools.
If you need to make content from your development machine available you might want to have a look at ksync: https://github.com/vapor-ware/ksync

How to pass a configuration file through YAML on Kubernetes to create a new replication controller

I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through Kubernetes, e.g. the way we would use the ADD command in a Dockerfile...
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way with which you are already familiar, but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
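For the secret option, the encoding step is the only non-obvious part. A small sketch (the file and secret names are just examples):
import base64

# Base64-encode the config file so it can be placed in a Secret's data field.
with open("app.conf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

print(f"""apiVersion: v1
kind: Secret
metadata:
  name: app-config
data:
  app.conf: {encoded}""")
The secret then shows up as a file at whatever mount path you choose, which your application can read like any other config file.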
I believe you can also download the config during container initialization.
See the example below; you could download your config instead of index.html, but I would not use this approach for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}