I am new to Kubernetes and Docker. I am trying to chain 2 containers in a pod such that the second container should not be up until the first one is running. I searched and found a solution here. It says to add a "depends" field in the YAML file for the container that depends on another container. Following is a sample of my YAML file:
apiVersion: v1beta4
kind: Pod
metadata:
  name: test
  labels:
    apps: test
spec:
  containers:
  - name: container1
    image: <image-name>
    ports:
    - containerPort: 8080
      hostPort: 8080
  - name: container2
    image: <image-name>
    depends: ["container1"]
Kubernetes gives me the following error when I apply the above YAML file:
Error from server (BadRequest): error when creating "new.yaml": Pod in version "v1beta4" cannot be handled as a Pod: no kind "Pod" is registered for version "v1beta4"
Is the apiVersion the problem here? I even tried v1, apps/v1, and extensions/v1 but got the following errors (respectively):
error: error validating "new.yaml": error validating data: ValidationError(Pod.spec.containers[1]): unknown field "depends" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
error: unable to recognize "new.yaml": no matches for apps/, Kind=Pod
error: unable to recognize "new.yaml": no matches for extensions/, Kind=Pod
What am I doing wrong here?
As I understand it, there is no field called depends in the Pod specification.
You can verify and validate this with the following command:
kubectl explain pod.spec --recursive
I have attached a link to help you understand the structure of the k8s resources:
kubectl-explain
There is no property "depends" in the Container API object.
You could split your containers into two different pods and let the Kubernetes CLI block until the first pod becomes available before creating the second one:
kubectl create -f container1.yaml
kubectl wait --for=condition=Ready pod/<first-pod-name>   # block until the first pod reports Ready
kubectl create -f container2.yaml
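For illustration, a rough sketch of what the split could look like (the pod names and images below are placeholders, not taken from the question):
container1.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-one
spec:
  containers:
  - name: container1
    image: <image-name>
    ports:
    - containerPort: 8080
container2.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-two
spec:
  containers:
  - name: container2
    image: <image-name>
With kubectl wait --for=condition=Ready pod/pod-one run in between, the second pod is only created once the first one reports Ready.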
I'm trying out KServe. I've followed the installation instructions as per the official docs. I'm trying to create a sample HTTP InferenceService, exactly the same as this one, by doing kubectl apply -f tensorflow.yaml:
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "flower-sample"
spec:
predictor:
tensorflow:
storageUri: "gs://kfserving-samples/models/tensorflow/flowers"
This YAML creates an InferenceService and the associated deployment, replica set, and pod. However, the InferenceService remains in an Unknown state. Upon investigating, I found that one container (the queue-proxy container) out of the two could not pass its readiness probe: Readiness probe failed: HTTP probe failed with statuscode: 503.
Upon further investigation, I saw the following logs from the kserve-container container:
2022-04-07 17:14:46.482205: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:365] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /mnt/models for servable flower-sample with error Not found: /mnt/models not found
So I understood the models were not present. I manually downloaded the same gs://kfserving-samples/models/tensorflow/flowers and copied it to the correct path. Finally, the InferenceService started working.
But I want to avoid this manual copying of models into the pod's container.
It should ideally be done by the storage-initializer init container, which is missing in my case.
I ran kubectl describe pod flower-sample-predictor-default-00001-deployment-5db9d7d9fgqfn4 and I don't see any initContainers section.
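(A quick way to confirm this directly from the pod spec, using the pod name above, is something like:
kubectl get pod flower-sample-predictor-default-00001-deployment-5db9d7d9fgqfn4 -n kserve -o jsonpath='{.spec.initContainers[*].name}'
which prints nothing when no init containers are present.)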
All these workloads are running in the kserve namespace. I have a ConfigMap inferenceservice-config which has the following (default) storageInitializer configuration:
{
  "image": "kserve/storage-initializer:v0.8.0",
  "memoryRequest": "100Mi",
  "memoryLimit": "1Gi",
  "cpuRequest": "100m",
  "cpuLimit": "1"
}
But still, when I do kubectl apply -f tensorflow.yaml, I face the same error. Could anyone help me figure out how to fix this?
Currently I'm using Kubernetes version 1.11+. Previously I always used the following step in my Cloud Build scripts:
- name: 'gcr.io/cloud-builders/kubectl'
  id: 'deploy'
  args:
  - 'apply'
  - '-f'
  - 'k8s'
  - '--recursive'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_REGION}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
The command worked as expected; at that time I was using k8s version 1.10+. However, recently I got the following error:
spec.clusterIP: Invalid value: "": field is immutable
metadata.resourceVersion: Invalid value: "": must be specified for an update
So I'm wondering: is this expected behavior for Service resources?
Here's my YAML config for my service:
apiVersion: v1
kind: Service
metadata:
  name: {name}
  namespace: {namespace}
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}'
spec:
  ports:
  - port: {port-num}
    targetPort: {port-num}
  selector:
    app: {label}
    environment: {env}
  type: NodePort
This is due to https://github.com/kubernetes/kubernetes/issues/71042.
https://github.com/kubernetes/kubernetes/pull/66602 should be cherry-picked into 1.11.
I sometimes run into this error when manually running kubectl apply -f somefile.yaml.
I think it happens when someone has changed the specification through the Kubernetes Dashboard instead of applying new changes through kubectl apply.
To fix it, I run kubectl edit services/servicename, which opens the YAML specification in my default editor. Then I remove the fields metadata.resourceVersion and spec.clusterIP, save, and run kubectl apply -f somefile.yaml again.
You need to set spec.clusterIP in your Service YAML file, with the value replaced by the cluster IP address of the existing Service, as shown below:
spec:
  clusterIP: <existing-cluster-IP>
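If it helps, one way to look up the current clusterIP of the existing Service (using the same placeholders as in the question) is:
kubectl get service {name} -n {namespace} -o jsonpath='{.spec.clusterIP}'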
Your issue is discussed in the following GitHub issue; there is also a workaround there to help you bypass it.
I have created a pod using the below YAML.
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
Then I created the pod using the below command.
$ kubectl create -f kubia-liveness-probe.yaml
It created a pod successfully.
Then I tried to create a LoadBalancer service to access it from the external world.
For that I'm using the below command.
$ kubectl expose rc kubia-liveness --type=LoadBalancer --name kubia-liveness-http
For this, I'm getting the below error.
Error from server (NotFound): replicationcontrollers "kubia-liveness" not found
I'm not sure how to create ReplicationControllers. Could anybody please give me the command to do so?
You are mixing two approaches here. One is creating stuff from a YAML definition, which is fine by itself (but bear in mind that it is really rare to create a bare Pod rather than a Deployment or ReplicationController). The other is exposing via the CLI, which makes some assumptions (i.e. it expects a ReplicationController) and creates the appropriate Service based on those assumptions. My suggestion would be to create the Service from a YAML manifest as well, so you can tailor it to fit your case, as sketched below.
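A rough sketch of such a Service (this assumes you add a label like app: kubia-liveness to the pod's metadata so the selector has something to match; the label and port numbers are placeholders, not from the original manifest):
apiVersion: v1
kind: Service
metadata:
  name: kubia-liveness-http
spec:
  type: LoadBalancer
  selector:
    app: kubia-liveness   # assumes this label was added to the pod
  ports:
  - port: 80              # port exposed by the load balancer
    targetPort: 8080      # port the container listens on, per the liveness probe
Once the pod carries the matching label, kubectl apply -f service.yaml creates the LoadBalancer service without needing a ReplicationController.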
We have tried to set up a HiveMQ manifest file. We have the HiveMQ Docker image in our private repository.
Step 1: I logged into the private repository:
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it, as below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
      - env:
          xxxxx some environment values I have passed
        name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:
        ports:
        - containerPort: 1883
It is created successfully, but I am getting the issues below. Could anyone please help solve this issue?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Sometimes I get this kind of issue:
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image
In order to use a private Docker registry with Kubernetes, it is not enough to run docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also describes the imagePullSecrets setting you have to add to your YAML deployment file, referencing that Secret.
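A rough sketch of those two steps (the secret name regcred, the registry address, and the credentials below are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<private-repo-name>:<port> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
Then reference it in the pod template of your Deployment:
    spec:
      imagePullSecrets:
      - name: regcred          # must match the secret created above
      containers:
      - name: hivemq
        image: <your-private-image>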
I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427
I'm trying to set node affinity using nodeSelector as discussed here: https://kubernetes.io/docs/user-guide/node-selection/
However, no matter whether I use a Pod, a ReplicationController, or a Deployment, I can't get kubectl create to work properly. This is the error I get, and it happens with all of them in the same way:
Error from server (BadRequest): error when creating "test-pod.yaml": Pod in version "v1" cannot be handled as a Pod: [pos 222]: json: expect char '"' but got char 't'
Substitute "Deployment" or "ReplicationController" for "Pod" and it's the same error everywhere. Here is my yaml file for the test pod:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    ingress: yes
If I remove the nodeSelector part of the file, the pod builds successfully, and this works with Deployments and ReplicationControllers as well. I made sure that the proper label was added to the node.
Any help would be appreciated!
In yaml, the token yes evaluates to a boolean true (http://yaml.org/type/bool.html)
Internally, kubectl converts YAML to JSON as a preprocessing step. Your node selector is converted to "nodeSelector":{"ingress":true}, which fails when it is decoded into a string-to-string map.
You can quote the value like this to force it to be treated as a string:
ingress: "yes"