Set container port based on environment property - kubernetes

I am setting a port value in an environment property while generating the Pod YAML:
master $ kubectl run nginx --image=nginx --restart=Never --env=MY_PORT=8080 --dry-run -o yaml > Pod.yaml
I am trying to use the environment property MY_PORT in the ports section of my Pod YAML:
spec:
  containers:
  - env:
    - name: MY_PORT
      value: "8080"
    image: nginx
    name: nginx
    ports:
    - containerPort: $(MY_PORT)
When I try to create the Pod, I get the following error message:
error: error validating "Pod.yaml": error validating data: ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
I tried referencing it as ${MY_PORT}, MY_PORT, etc., but I get the same error every time.
How can I use an environment variable's value in an integer field?

You can't use an environment variable there. In the ContainerPort API object the containerPort field is specified as an integer. Variable substitution is only supported in a couple of places, and where it is supported the documentation calls it out; see for example args and command in the higher-level Container API object.
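Where expansion is supported, it looks like this (a sketch; nginx has no --port flag, so that args line is purely illustrative):
containers:
- name: nginx
  image: nginx
  env:
  - name: MY_PORT
    value: "8080"
  # $(MY_PORT) is expanded by Kubernetes here, because args supports variable substitution
  args: ["--port=$(MY_PORT)"]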
There's no reason to make this configurable. In a Kubernetes environment the pod will have its own IP address, so there's no risk of conflict; if you want to use a different port number to connect, you can set up a Service where, e.g., port 80 on the Service forwards to port 8080 in the pod. (In plain Docker you can do a similar thing with a docker run -p 80:8080 option: you can always pick the external port even if the port number inside the container is fixed.) I'd delete the environment variable setting.
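A sketch of that Service mapping (the run=nginx selector assumes the label kubectl run applies by default):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx        # the label kubectl run puts on the pod
  ports:
  - port: 80          # the port clients connect to on the Service
    targetPort: 8080  # the port the container listens on inside the pod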

Related

Unable to update the deployment containers ports name on redeployment

What happened:
managementPort is provided by default while applying the k8s manifest.
"mng-mytest" is the name alias for containerPort in deployment manifest.
ports:
containerPort: 9095
name: mng-mytest
Recently we changed the value of default management port, however for existing deployments that are running
when redeployed the changes of new default mgmt port while getting applied fails with this issue,
The Deployment "mytestservice-deployment" is invalid: spec.template.spec.containers[0].ports[2].name: Duplicate value: "mng-mytest"
"mng-mytest" is the name alias for containerPort in deployment manifest.
ports:
containerPort: 9090
name: mng-mytest
What you expected to happen:
The new port value should get applied.
How to reproduce it (as minimally and precisely as possible):
First, add a port name and value to the containerPort section of the deployment manifest, then deploy.
Second, change the value of the containerPort but keep the name the same, then redeploy on top of the existing running deployment.
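A way to see the mechanism directly (my sketch, not from the report; the container name "mytest" is an assumption): the ports list is strategically merged with containerPort as its merge key, so a patch that changes only the number adds a second list entry instead of replacing the first, and the two entries then share one name.
# Assuming the container in the deployment is named "mytest":
kubectl patch deployment mytestservice-deployment --type=strategic -p '
spec:
  template:
    spec:
      containers:
      - name: mytest
        ports:
        - containerPort: 9090
          name: mng-mytest
'
# The merged pod spec now contains both containerPort 9095 and 9090, each named
# "mng-mytest", which fails validation with the Duplicate value error.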

Connecting two containers with Kubernetes using environment variables

I'm new to k8s and need some direction on how to troubleshoot.
I have a postgres container and a graphql container. The graphql container tries to connect to postgres on startup.
Problem
The graphql container can't connect to postgres. This is the error on startup:
{"internal":"could not connect to server: Connection refused\n\tIs the server running on host "my-app" (xxx.xx.xx.xxx) and accepting\n\tTCP/IP connections on port 5432?\n",
"path":"$","error":"connection error","code":"postgres-error"}
My understanding is that the graphql container doesn't recognize the IP my-app (xxx.xx.xx.xxx). This is the actual Pod Host IP, so I'm confused as to why it doesn't recognize it. How do I troubleshoot errors like these?
What I tried
Hardcoding the host in the connection uri in deployment.yaml to the actual pod host IP. Same error.
Bashed into the graphql container and verified that it had the correct env values with the env command.
deployment.yaml
spec:
  selector:
    matchLabels:
      service: my-app
  template:
    metadata:
      labels:
        service: my-app
    ...
      - name: my-graphql-container
        image: image-name:latest
        env:
        - name: MY_POSTGRES_HOST
          value: my-app
        - name: MY_DATABASE
          value: db
        - name: MY_POSTGRES_DB_URL # the postgres connection url that the graphql container uses
          value: postgres://$(user):$(pw)@$(MY_POSTGRES_HOST):5432/$(MY_DATABASE)
    ...
      - name: my-postgres-db
        image: image-name:latest
In k8s docs about pods you can read:
Pods in a Kubernetes cluster are used in two main ways:
Pods that run a single container. [...]
Pods that run multiple containers that need to work together. [...]
Note: Grouping multiple co-located and co-managed containers in a
single Pod is a relatively advanced use case. You should use this
pattern only in specific instances in which your containers are
tightly coupled.
Each Pod is meant to run a single instance of a given
application. [...]
Notice that your deployment doesn't fit this description, because you are trying to run two applications in one pod.
Remember to always run one application per pod, and only put multiple containers in a single pod if it's impossible to separate them (and for some reason they have to run together).
And the rest was already mentioned by David.
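To make the split concrete, a minimal sketch (the Service name is an assumption): run postgres in its own Deployment carrying the label service: my-postgres-db, expose it behind a Service, and point the graphql container's host variable at the Service name:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    service: my-postgres-db   # must match the labels on the postgres pods
  ports:
  - port: 5432
and in the graphql deployment:
env:
- name: MY_POSTGRES_HOST
  value: postgres             # resolved by the cluster's DNS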

Is it possible to remote debug a java program in kubernetes using the service name

Now I am remote debugging my java program in kubernetes (v1.15.2) using kubectl port-forward, like this:
kubectl port-forward soa-report-analysis 5018:5018 -n dabai-fat
I can use IntelliJ IDEA to connect to my localhost port 5018 and remotely debug my pod in the Kubernetes cluster in a remote datacenter. But now I am facing a problem: every time the pod is upgraded, I must change the pod name and re-attach the debugger. Is there any way to keep a stable channel for debugging?
For anyone looking for ways to debug Java (and Go, NodeJS, Python, .NET Core) applications in Kubernetes, I suggest looking at skaffold.
It is a simple CLI tool that uses the build and deploy configuration you already work with.
There is no need for additional installation in the cluster, modification of the existing deployment configuration, etc.
Install CLI: https://skaffold.dev/docs/install/
Open your project, and try:
skaffold init
This will make skaffold create skaffold.yaml (the only config file skaffold needs).
And then
skaffold debug
This will use your existing build and deploy config to build a container and deploy it. If needed, the necessary debug arguments will be injected into the container, and port forwarding will start automatically.
For more info look at:
https://skaffold.dev/docs/workflows/debug/
This can provide a consistent way to debug your application without having to be aware all time about the current pod or deployment state.
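For reference, the generated skaffold.yaml can be as small as this (a sketch; the schema version depends on your skaffold release, and the image name and manifest path are assumptions):
apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
  - image: soa-report-analysis   # assumed image name
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml                 # assumed location of your manifests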
I use this script to improve my workflow:
#!/usr/bin/env bash
set -u
set -e
set -x

# Show the matching pods (informational; set -x also traces each command)
kubectl get pods -n dabai-fat | grep "soa-illidan-service"
# Grab the name of the first pod matching the label and forward its debug port
POD=$(kubectl get pod -n dabai-fat -l k8s-app=soa-illidan-service -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward -n dabai-fat ${POD} 11014:11014
This script automatically gets the pod name and opens the remote debugging port-forward.
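As a side note, newer kubectl releases can also forward to a deployment directly, which saves the pod lookup (the deployment name here is an assumption based on the label above; kubectl still picks one pod under the hood, so the forward drops on pod restart):
kubectl port-forward -n dabai-fat deployment/soa-illidan-service 11014:11014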
We can use a Service of type NodePort to resolve your issue. Here is a sample YAML file:
apiVersion: v1
kind: Service
metadata:
  name: debug-service
spec:
  type: NodePort
  selector:
    app: demoapp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 8001        # the port exposed in the Dockerfile for debugging purposes
      targetPort: 8001
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30019
In IntelliJ, you will be able to connect to
Host: localhost
Port: 30019
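For this to work, the JVM inside the container must already be listening for a debugger on that port, typically via a JDWP agent flag along these lines (a sketch; on JDK 9+ the *: prefix is required to listen on all interfaces):
JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8001'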

How do I deploy this Traefik example to Kubernetes?

I am following the Getting Started guide for Traefik from here and am trying to launch the service into Kubernetes (Minikube) instead of Docker:
Edit your docker-compose.yml file and add the following at the end of your file.
# ...
whoami:
  image: emilevauge/whoami # A container that exposes an API to show its IP address
  labels:
    - "traefik.frontend.rule=Host:whoami.docker.localhost"
I am guessing I run it as:
kubectl run whoami-service --image=emilevauge/whoami --labels='traefik.frontend.rule=Host:whoami.docker.localhost'
however that generates an error of:
The Deployment "whoami-service" is invalid:
* metadata.labels: Invalid value: "'traefik.frontend.rule": name part must consist of alphanumeric characters, '-', '_' or '.', and....
So what am I missing here? How do I deploy the above to my Minikube Kubernetes cluster?
I'm not sure if this is along the lines of what you're looking for, but Traefik has a small tutorial for getting an Ingress controller set up on Kubernetes, with a great document on configuration, as well.
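For completeness, the Ingress-based equivalent of that compose label looks roughly like this for the Traefik 1.x era the guide covers (a sketch; the names are made up, and it assumes the Traefik Ingress controller from that tutorial is already running in Minikube):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: emilevauge/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
  - port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: whoami.docker.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: whoami
          servicePort: 80
With that applied, a request with the header Host: whoami.docker.localhost sent to the Traefik endpoint should reach the pod.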
If you'd just like to get that particular image working, you may be able to pass the label as an argument to the pod, possibly with kubectl run. From the output of kubectl run help:
# Start the nginx container using the default command, but use custom arguments (arg1 .. argN) for that command.
kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>
Or possibly manually in a manifest:
...
containers:
- name: whoami
image: emilevauge/whoami
args: ["traefik.frontend.rule: "Host:whoami.docker.localhost"]
Having never worked with the image in the example before, I don't know whether either of the above examples will actually work.
Hope that helps a little!

Chaining container error in Kubernetes

I am new to kubernetes and docker. I am trying to chain 2 containers in a pod such that the second container should not come up until the first one is running. I searched and got a solution here. It says to add a "depends" field in the YAML file for the container that depends on another container. Following is a sample of my YAML file:
apiVersion: v1beta4
kind: Pod
metadata:
  name: test
  labels:
    apps: test
spec:
  containers:
  - name: container1
    image: <image-name>
    ports:
    - containerPort: 8080
      hostPort: 8080
  - name: container2
    image: <image-name>
    depends: ["container1"]
Kubernetes gives me the following error after running the above yaml file:
Error from server (BadRequest): error when creating "new.yaml": Pod in version "v1beta4" cannot be handled as a Pod: no kind "Pod" is registered for version "v1beta4"
Is the apiVersion the problem here? I even tried v1, apps/v1, and extensions/v1 but got the following errors (respectively):
error: error validating "new.yaml": error validating data: ValidationError(Pod.spec.containers[1]): unknown field "depends" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
error: unable to recognize "new.yaml": no matches for apps/, Kind=Pod
error: unable to recognize "new.yaml": no matches for extensions/, Kind=Pod
What am I doing wrong here?
As I understand it, there is no field called depends in the Pod specification.
You can verify and validate this with the following command:
kubectl explain pod.spec --recursive
I have attached a link that explains the structure of the k8s resources:
kubectl-explain
There is no property "depends" in the Container API object.
You can split your containers into two different pods and use the kubernetes CLI to wait for the first pod to become available:
kubectl create -f container1.yaml
kubectl wait --for=condition=Ready pod/container1   # blocks until the pod reports Ready (assumes the pod in container1.yaml is named container1)
kubectl create -f container2.yaml
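Note that Ready only means what the pod's probes make it mean; if the second pod must wait for the first application to actually be serving, give the first container a readinessProbe, e.g. (the path and port are assumptions):
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080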