What happened:
A default managementPort is provided when the k8s manifest is applied. "mng-mytest" is the name given to this containerPort in the deployment manifest:
ports:
- containerPort: 9095
  name: mng-mytest
Recently we changed the default management port value. However, for existing running deployments, redeploying with the new default management port fails with this error:
The Deployment "mytestservice-deployment" is invalid: spec.template.spec.containers[0].ports[2].name: Duplicate value: "mng-mytest"
The same name is reused for the new containerPort in the deployment manifest:
ports:
- containerPort: 9090
  name: mng-mytest
What you expected to happen:
The new port value should get applied.
How to reproduce it (as minimally and precisely as possible):
First, add a port name and value to the containerPort section of the deployment manifest, then deploy.
Second, change the value of the containerPort but keep the same name, then redeploy on top of the existing running deployment.
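For example, with hypothetical manifest file names, the sequence that triggers it looks like this:
kubectl apply -f deployment-port-9095.yaml   # first deploy: containerPort 9095, name mng-mytest
kubectl apply -f deployment-port-9090.yaml   # same name, new port value; fails with the Duplicate value error above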
Related
Application A and application B are two applications running in the same Kubernetes cluster. Application A can access B by reading the B_HOST env variable (with value b.example.com) passed to A's container. Is there any way by which A would be able to access B:
internally: using the DNS name of B's service (b.default.svc.cluster.local)
externally: using the FQDN of B, that is also defined in the ingress resource (b.example.com)
at the same time?
For example,
If you try to curl b.example.com inside the pod/container of A, it should resolve to b.default.svc.cluster.local and get the result via that service.
If you try to curl b.example.com outside the k8s cluster, it should use ingress to reach the service B and get the results.
As a concept, adding an extra hosts entry (mapping B's FQDN to its service IP) to container A's /etc/hosts should work. But that doesn't seem to be a good practice, as it requires getting the IP address of B's service in advance and then creating A's pod with that hostAliases config. Patching this field into an existing pod is not allowed. The service IP changes when you recreate the service, and adding the DNS name of the service instead of its IP in hostAliases is also not supported.
So, what would be a good method to achieve this?
Found a similar discussion in this thread.
Additional Info:
I'm using Azure Kubernetes service (AKS) and using application gateway as ingress controller (AGIC).
You can try different methods, then see which one works for you.
Method 1 :
Modifying the CoreDNS configuration of your k8s cluster.
Reference: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
In AKS, it can be done as described here:
https://learn.microsoft.com/en-us/azure/aks/coredns-custom#rewrite-dns
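A minimal sketch of the coredns-custom approach described in the AKS doc above (the data key and the zone/hostnames are assumptions, so adapt them to your setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom      # AKS expects this name in the kube-system namespace for custom CoreDNS config
  namespace: kube-system
data:
  example.server: |         # the key name is an assumption; it has to end in .server
    example.com:53 {
        errors
        rewrite stop name b.example.com b.default.svc.cluster.local
        forward . /etc/resolv.conf
    }
After applying this, the CoreDNS pods typically need to be restarted to pick up the new configuration.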
Method 2 :
Specifying an externalIP manually for service B and then adding the same IP to pod A's /etc/hosts file using hostAliases seems to work.
Part of pod definition of app A:
apiVersion: v1
kind: Pod
metadata:
  name: a
  labels:
    app: a
spec:
  hostAliases:
  - ip: "10.0.3.165"
    hostnames:
    - "b.example.com"
Part of service definition of app B:
apiVersion: v1
kind: Service
metadata:
  name: b
spec:
  selector:
    app: b
  externalIPs:
  - 10.0.3.165
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
But I'm not sure if that is a good practice; there could be pitfalls. One is that the externalIP we define can be any valid IP address, private or public, as long as it doesn't conflict with other IPs used by cluster resources; unpredictable behaviour can result if overlapping IP ranges are used.
Method 3 :
The clusterIP of the service will be available inside pod A as an environment variable B_SERVICE_HOST by default.
So, instead of adding an externalIP, you can try to get the actual service IP (clusterIP) of B from the B_SERVICE_HOST env variable and add it to pod A's /etc/hosts - either using hostAliases or directly, whichever works.
echo $B_SERVICE_HOST 'b.example.com' >> /etc/hosts
You can do this using a postStart hook for the container in the pod definition:
containers:
- image: "myreg/myimagea:tag"
  name: container-a
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo $B_SERVICE_HOST 'b.example.com' >> /etc/hosts"]
Since this is a container lifecycle hook, the change is specific to that one container; other containers in the same pod will not have the same entry in their hosts file.
Also note that B's service should be created before pod A so that the B_SERVICE_HOST env variable is populated.
Method 4 :
You can try to create a public DNS zone and a private DNS zone in your cloud tenant, then add records in them pointing to the service. For example, create a private DNS zone in Azure and do either of the following:
Add an A record mapping b.example.com to service B's clusterIP.
Add a CNAME record mapping b.example.com to the internal load balancer DNS label provided by Azure for the service. On a wider perspective, if you have multiple applications in the cluster with the same requirement, create a static IP, create a LoadBalancer-type service for your ingress controller using this static IP as loadBalancerIP and with the annotation service.beta.kubernetes.io/azure-dns-label-name as described here (a sketch follows below). You'll get a DNS label for that service. Then add a CNAME record in your private zone mapping *.example.com to this Azure-provided DNS label. Still, I doubt whether this would be suitable if your ingress controller is Azure Application Gateway.
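A rough sketch of such a LoadBalancer service (the name, selector, and static IP are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller-lb
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: my-apps   # yields my-apps.<region>.cloudapp.azure.com
spec:
  type: LoadBalancer
  loadBalancerIP: 20.50.10.10        # the pre-created static IP (assumed value)
  selector:
    app: ingress-controller          # assumed label on your ingress controller pods
  ports:
  - port: 80
    targetPort: 80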
NOTE:
Also consider how the method you adopt will affect your debugging process in the future if any networking-related issue arises.
If you feel that would be a problem, consider using two separate environment variables, B_HOST and B_PUBLIC_HOST, for internal and external access.
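For example, the container spec of A could carry both names (the values shown are one possible split and are assumptions):
env:
- name: B_HOST               # internal access via the cluster DNS name of B's service
  value: "b.default.svc.cluster.local"
- name: B_PUBLIC_HOST        # external access via the FQDN defined in the ingress
  value: "b.example.com"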
How is containerPort different from targetPort in a container in Kubernetes?
Are they used interchangeably, if so why?
I came across the below code snippet where containerPort is used to denote the port on a pod in Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres-pod
      app: demo-voting-app
  template:
    metadata:
      name: postgres-pod
      labels:
        name: postgres-pod
        app: demo-voting-app
    spec:
      containers:
      - name: postgres
        image: postgres:9.4
        ports:
        - containerPort: 5432
In the above code snippet, they have given 5432 for the containerPort parameter (in the last line). So, how is this containerPort different from targetPort?
As far as I know, the term port in general refers to the port on the Service (Kubernetes). Correct me if I'm wrong.
In a nutshell: targetPort and containerPort basically refer to the same port (so if both are used they are expected to have the same value) but they are used in two different contexts and have entirely different purposes.
They cannot be used interchangeably as both are part of the specification of two distinct kubernetes resources/objects: Service and Pod respectively. While the purpose of containerPort can be treated as purely informational, targetPort is required by the Service which exposes a set of Pods.
It's important to understand that declaring containerPort with a specific value in your Pod/Deployment specification does not make your Pod expose that port. E.g. if you declare in the containerPort field that your nginx Pod exposes port 8080 instead of the default 80, you still need to configure the nginx server in your container to actually listen on that port.
Declaring containerPort in the Pod specification is optional. Even without it, your Service will know where to direct requests based on the targetPort declared in its own spec.
It's good to remember that it's not required to declare targetPort in the Service definition. If you omit it, it defaults to the value you declared for port (which is the port of the Service itself).
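For illustration, a hypothetical Service for the postgres Deployment above could look like this (the Service name is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    name: postgres-pod
    app: demo-voting-app
  ports:
  - port: 5432         # port of the Service itself
    targetPort: 5432   # port the postgres container listens on; could be omitted here since it equals port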
ContainerPort in pod spec
List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed
targetPort in service spec
Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map).
Hence targetPort in the Service needs to match the containerPort in the pod spec, because that's how the Service knows which container port is the destination to forward traffic to.
containerPort is the port on which the app inside the container can be reached.
targetPort is the port that is exposed within the cluster, through which the Service connects the pod to other services or users.
Is there a way to map a port directly from the node to a pod running on the same node, bypassing services, load balancers and ingress? So basically, if a pod is deployed on node X, then node X needs to listen on port 1234 and map that port directly into a pod also running on node X on port 1234. There would be no cross-node connections, and whatever node Kubernetes decides to deploy the pod on becomes the new host for external connections.
I am fully aware that this goes against all design principles of a Kubernetes cluster. But I am trying to host an old custom-built cloud app that was written for a one-off custom cloud solution, and see if I can host it on Kubernetes. Each pod in the stateful set needs a dedicated public IP assigned to it, as the public IP gets sent to external devices to redirect them to the correct pod. The protocol is also custom, so there is no Layer 7 load balancer for it. So the only solution I can come up with is a direct port mapping from the node into the pod.
You can use hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet in the pod spec
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
This means the pod will listen on the port 8080 directly using the host's network and can be accessed via nodeip:8080 without creating a service or ingress.
Have you considered using a Service of type NodePort? This will give you a "static" port on every node in the cluster that forwards to the pod. The port number will be high (30000+) but reliable. This is described in the narrative documentation.
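A minimal NodePort sketch for the scenario above (the names and labels are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app             # assumed label on the stateful set's pods
  ports:
  - port: 1234
    targetPort: 1234
    nodePort: 31234         # must fall in the cluster's node port range (default 30000-32767)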
I am setting a port value in an environment property while generating the Pod yaml.
master $ kubectl run nginx --image=nginx --restart=Never --env=MY_PORT=8080 --dry-run -o yaml > Pod.yaml
I am trying to use the environment property MY_PORT in the ports section of my Pod yaml.
spec:
  containers:
  - env:
    - name: MY_PORT
      value: "8080"
    image: nginx
    name: nginx
    ports:
    - containerPort: $(MY_PORT)
When I try to create the Pod I get the following error message.
error: error validating "Pod.yaml": error validating data: ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
I tried referencing it as ${MY_PORT}, MY_PORT, etc., but I get the same error every time.
How can I use an environment variable value in an integer field?
You can't use an environment variable there. In the ContainerPort API object the containerPort field is specified as an integer. Variable substitution is only supported in a couple of places, and where it is, it is called out; see for example args and command in the higher-level Container API object.
There's no reason to make this configurable. In a Kubernetes environment the pod will have its own IP address, so there's no risk of conflict; if you want to use a different port number to connect, you can set up a service where e.g. port 80 on the service forwards to port 8080 in the pod. (In plain Docker, you can do a similar thing with a docker run -p 80:8080 option: you can always pick the external port even if the port number inside the container is fixed.) I'd delete the environment variable setting.
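For example, a Service like this (the name and selector are assumptions) lets clients connect on port 80 while the container keeps listening on its fixed port 8080:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx           # label added by `kubectl run nginx` (assumed)
  ports:
  - port: 80             # port clients use to reach the Service
    targetPort: 8080     # fixed port the container listens on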
I have a kubernetes cluster of 3 hosts where each Host has a unique id label.
On this cluster, there is a software that has 3 instances (replicas).
Each replica needs to talk to all other replicas. In addition, there is a service that contains all pods so that this application is permanently available.
So I have:
Instance1 (with labels run: theTool,instanceid: 1)
Instance2 (with labels run: theTool,instanceid: 2)
Instance3 (with labels run: theTool,instanceid: 3)
and
Service1 (selecting pods with label instanceid=1)
Service2 (selecting pods with label instanceid=2)
Service3 (selecting pods with label instanceid=3)
Service (selecting pods with label run=theTool)
This approach works, but I cannot scale or use the rolling-update feature.
I would like to define a deployment with 3 replicas, where each replica gets a unique generic label (for instance a replica id like 1/3, 2/3, and so on).
Within the services, I could use the selector to match this label, which would still exist after an update.
Another solution might be to select the pod/deployment depending on the host it is running on. I could use a DaemonSet, or just a pod/deployment with affinity, to ensure that each host has exactly one replica of my deployment.
But I don't know how to select a pod based on a label of the host it runs on.
Using the hostname is not an option as hostnames change in different environments.
I have searched the docs but didn't find anything matching this use case. Hopefully, someone here has an idea how to solve this.
The feature you're looking for is called StatefulSets, which just launched to beta with Kubernetes 1.5 (note that it was previously available in alpha under a different name, PetSets).
In a StatefulSet, each replica has a unique name that is persisted across restarts. In your example, these would be instance-0, instance-1, instance-2 (a minimal sketch follows the documentation links below). Since the instance names are persisted (even if the pod is recreated on another node), you don't need a service-per-instance.
The documentation has more details:
Using StatefulSets
Scaling a StatefulSet
Deleting a StatefulSet
Debugging a StatefulSet
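To illustrate, a minimal sketch using the current apps/v1 API (the names, image and port are assumptions; pods get stable names of the form <statefulset-name>-<ordinal>):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: instance
spec:
  serviceName: the-tool          # headless service below gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      run: theTool
  template:
    metadata:
      labels:
        run: theTool
    spec:
      containers:
      - name: the-tool
        image: myreg/the-tool:tag    # assumed image
        ports:
        - containerPort: 1234        # assumed application port
---
apiVersion: v1
kind: Service
metadata:
  name: the-tool
spec:
  clusterIP: None                    # headless: resolves instance-0.the-tool, instance-1.the-tool, ...
  selector:
    run: theTool
  ports:
  - port: 1234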
You can map NodeIP:NodePort to PodIP:PodPort. Your pod is running on some node (instance/VM).
Assign a label to your nodes:
http://kubernetes.io/docs/user-guide/node-selection/
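For example (the node name is a placeholder):
kubectl label nodes <node-name> nodename=mysqlnode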
Write a service for your pod, for example
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    label: mysql-service
spec:
  type: NodePort
  ports:
  - port: 3306         # Port on which your service is running
    nodePort: 32001    # Node port on which you can access it statically
    targetPort: 3306
    protocol: TCP
    name: http
  selector:
    name: mysql-selector # bind pod here
Add a node selector (in the spec field) to your deployment.yaml
deployment.yaml:
spec:
  nodeSelector:
    nodename: mysqlnode # labelkey=labelvalue assigned in the first step
With this you will be able to access your pod's service at NodeIP:NodePort. If I labeled node 10.11.20.177 with
nodename=mysqlnode
I would add in the node selector:
nodeSelector:
  nodename: mysqlnode
Since I specified nodePort in the service, I can now access the pod's service (which is running in the container) at
10.11.20.177:32001
But you need to be on the same network as the node to reach the pod this way. For outside access, make port 32001 publicly accessible with firewall configuration. The node port is static forever; the label takes care of your dynamic pod IPs.