My microservice connects to a server.
For now, the server IP address and port number are hard-coded in my values.yaml; I need to remove that hard-coded part.
I need to get the address and the port automatically, using these commands:
kubectl get endpoints server-xx -o=jsonpath='{ .subsets[*].addresses[*].ip }'
kubectl get endpoints server-xx -o=jsonpath='{ .subsets[*].ports[*].port }'
I created a pre-install hook that contains those two commands (following the Helm documentation).
How could I inject the address and port (that I get from the hook) into the templates where I need those 2 values?
Or any other ideas? Maybe I shouldn't use a hook but a different method?
Thanks
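For reference, one hook-free alternative is to resolve the two values at install time and pass them in with --set, so the templates can keep reading them from .Values. This is only a sketch: the release name myservice, the chart path ./chart, and the value keys server.ip and server.port are assumptions, not names from the question.

```shell
# Resolve the server address and port (the same lookups as above),
# then inject them as chart values at install time.
# NOTE: release name, chart path, and value keys are assumptions.
SERVER_IP=$(kubectl get endpoints server-xx -o=jsonpath='{.subsets[*].addresses[*].ip}')
SERVER_PORT=$(kubectl get endpoints server-xx -o=jsonpath='{.subsets[*].ports[*].port}')
helm install myservice ./chart \
  --set server.ip="$SERVER_IP" \
  --set server.port="$SERVER_PORT"
```

In the templates you would then reference {{ .Values.server.ip }} and {{ .Values.server.port }} as usual.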
I want to create a Kafka Connect connector in an OpenShift project through Postman, but when I send the POST request through Postman I get the error below. Is there a specific command I need to run in OpenShift to expose a pod as a service so that Postman can reach it? Please advise.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The error you are getting is because you haven't created a route to reach the required service.
In OpenShift, a service is exposed to applications outside the cluster through routes.
Use the following command to expose a service outside the cluster:
oc expose service <service-name>
You can include many options with the above command. To see them all, run:
oc expose service --help
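To make the steps concrete, here is a minimal sketch (the service name kafka-connect is an assumption) that creates the route and then prints the hostname Postman should target:

```shell
# Expose the service as a route, so clients outside the cluster can reach it.
# NOTE: "kafka-connect" is an assumed service name for illustration.
oc expose service kafka-connect
# Print the generated hostname; use it as the base URL in Postman.
oc get route kafka-connect -o jsonpath='{.spec.host}'
```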
Let's suppose I have pods deployed in a Kubernetes cluster and I have exposed them via a NodePort service. Is there a way to get the pod's external endpoint in one command?
For example:
kubectl <cmd>
Response: <IP_OF_NODE_HOSTING_THE_POD>:30120 (30120 being the NodePort external port)
The requirement is a complex one and requires querying the object list. I am going to explain with some assumptions. Additionally, if you need the internal address, you can use the Endpoints object (ep), because target resolution is done at the endpoint level.
Assumptions: 1 Pod and 1 NodePort service (32320->80), both named nginx.
The following command works under the stated assumptions, and I hope it gives you an idea of the best approach for your requirement.
Note: this answer is valid under the assumptions stated above. For a more generalized solution I recommend using -o jsonpath='{range.. for this type of complex query. For now, the following command will work.
command:
kubectl get pods,ep,svc nginx -o jsonpath=' External:http://{..status.hostIP}{":"}{..nodePort}{"\n Internal:http://"}{..subsets..ip}{":"}{..spec.ports..port}{"\n"}'
Output:
External:http://192.168.5.21:32320
Internal:http://10.44.0.21:80
If the NodePort service name is known, then something like kubectl get svc <svc_name> -o=jsonpath='{.spec.clusterIP}:{.spec.ports[0].nodePort}' should work.
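For the external endpoint specifically, here is a small sketch that assembles <node_ip>:<node_port> from two lookups. The service name nginx and the use of the first node's InternalIP are assumptions; on a multi-node cluster you would pick the node actually hosting the pod.

```shell
# Grab the first node's internal IP and the service's node port,
# then print them in <ip>:<port> form.
# NOTE: service name "nginx" and "first node" are assumptions.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "${NODE_IP}:${NODE_PORT}"
```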
Is it possible to add a custom DNS entry (type A) inside Kubernetes 1.19? I'd like to be able to do:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
host custom-dns-entry.example.com
custom-dns-entry.example.com has address 10.0.0.72
with custom-dns-entry.example.com not being registered inside my upstream DNS server (and also not having a corresponding k8s service at all).
The example at https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/ seems to provide a solution, but it is a bit complex and may be deprecated. Is there a simpler way to do it, please? For example, with a few kubectl commands?
The reason I need this is that I run my workload on kind, so my ingress DNS record is not registered in the upstream DNS, and some pods require access to this ingress DNS record from inside the cluster (for example to configure a JavaScript client, served by the pods, which will effectively access the ingress DNS record from outside). However, I cannot modify the workload code as I am not maintaining it, so adding this custom DNS entry seems like a reasonable solution.
CoreDNS would be the place to do this. You can also do similar-ish things using ExternalName-type Services but that wouldn't give you full control over the hostname (it would be a Service name like anything else).
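For completeness, here is a sketch of the CoreDNS route mentioned above, using the hosts plugin. The record name and IP are taken from the question; the exact Corefile layout depends on your cluster, so treat the placement as an assumption.

```shell
# Open the CoreDNS Corefile for editing:
kubectl -n kube-system edit configmap coredns
#
# Inside the Corefile's server block, before the "forward" directive, add:
#
#     hosts {
#         10.0.0.72 custom-dns-entry.example.com
#         fallthrough
#     }
#
# "fallthrough" lets queries for other names continue to the next plugin.
# Then restart CoreDNS so it reloads the configuration:
kubectl -n kube-system rollout restart deployment coredns
```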
I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployment, and I am seeing two different commands which look like they do similar things.
The below command is from google code lab (URL: https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7 )
$ kubectl create service loadbalancer hello-java --tcp=8080:8080
Another command is being seen in a different place along with the Kubernetes site (https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/)
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
As per my understanding, both commands create a LoadBalancer service from a deployment and expose it to the outside world.
I don't think there would be two separate commands for the same task; there must be some difference that I am not able to understand.
Would anyone please clarify this for me?
There are cases where the expose command is not sufficient and your only practical option is to use create service.
Overall there are four different types of Kubernetes services; for some of them it really doesn't matter whether you use expose or create, while for others it matters very much.
The types of Kubernetes services are:
ClusterIP
NodePort
LoadBalancer
ExternalName
So, for example, in the case of the NodePort type of service, let's say we wanted to set a node port with the value 31888:
Example 1:
In the following command there is no argument for the node port value; the expose command assigns one automatically:
kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80
To set a specific node port value, you have to update the service after it is created, for example with the edit command: kubectl edit service demo
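As a side note, the value can also be changed non-interactively. Here is a sketch using kubectl patch (service name demo as in the example above; the port index 0 assumes the service has a single port):

```shell
# Set the node port of an existing service without opening an editor.
# NOTE: assumes the service "demo" has one port entry (index 0).
kubectl patch service demo --type='json' \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":31888}]'
```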
Example 2:
In this example, the create service nodeport command is dedicated to creating a NodePort-type service and has arguments that let us control the node port value:
kubectl create service nodeport demo --tcp=8080:80 --node-port=31888
In Example 2 the node port value is set on the command line, and there is no need to manually edit the value as in Example 1.
Important :
The create service [service-name] command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.
To set the selector labels that target specific pods, you need to follow up create service [service-name] with the set selector command:
kubectl set selector service [NAME] [key1]=[value1]
So, for Example 2 above, if you want the service to work with a deployment whose pods are labeled myapp: hello, this is the follow-up command needed:
kubectl set selector service demo myapp=hello
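A quick way to sanity-check the result of the two commands above (names from the example; the endpoint output shown depends on your cluster):

```shell
# Confirm the selector is now set on the service:
kubectl get service demo -o jsonpath='{.spec.selector}'
# Confirm the service resolved matching pods (Endpoints should be non-empty):
kubectl get endpoints demo
```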
The main differences can be seen from the docs.
1.- kubectl create command
Create a resource from a file or from stdin.
JSON and YAML formats are accepted.
2.- kubectl expose command
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or
pod by name and uses the selector for that resource as the selector
for a new service on the specified port. [...]
Even though both achieve the same thing in the examples you provided, the create command is the more general one: with it you can create any resource, either from the command line or from a YAML/JSON file. The expose command, however, will only create a service resource, and it's mainly used to expose other, already existing resources.
Source: K8s Docs
I hope this helps a little. The key here is to understand the difference between services and deployments. As per this link [1], you will notice that a deployment deals with the mortality of Pods automatically. However, if a Pod is terminated and another is spun up, how do the Pods continue to communicate when their IPs change? They use Services: "a Service is an abstraction which defines a logical set of Pods and a policy by which to access them".
Additionally, it may be of interest to view this link [2], as it describes how the kubectl expose command creates a service, which in turn provides an external IP and a load balancer. As a beginner, it may also help to review the command language used with Kubernetes: this link [3] describes (as mentioned in another answer) how the kubectl create command lets you be more specific about the objects it creates, and how with create you can create a larger variety of objects.
[1]: Service: https://kubernetes.io/docs/concepts/services-networking/service/
[2]: Deploying a containerized web application: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_6_expose_your_application_to_the_internet
[3]: How to create objects: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/#how-to-create-objects
From my understanding, approach 1 (using create service) just creates the service object, and since a label selector matching existing pods is not specified, it does not target any underlying pods. But in approach 2 (using expose deployment), the service load-balances across all the pods created by the deployment, because the service automatically inherits the required labels as its selector.
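One way to observe this difference is to inspect the selector of each service (service names taken from the examples above; the exact output depends on your cluster):

```shell
# A service made with "kubectl expose deployment" inherits the deployment's
# pod selector; one made with "kubectl create service" gets a generated
# selector that may not match any existing pods.
kubectl get service my-service -o jsonpath='{.spec.selector}{"\n"}'
kubectl get service hello-java -o jsonpath='{.spec.selector}{"\n"}'
```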
I have a Kafka deployment and service deployed via Kubernetes. Each of its pods has an internal IP, and with a command like this
kubectl describe services broker --namespace=kafka | grep Endpoints | awk '{print $2}'
I can get them all: 10.244.1.11:9092,10.244.2.15:9092,10.244.2.16:9092
I have another service deployed with Kubernetes, after my Kafka, that needs the result of that command as an environment variable KAFKA_BOOTSTRAP_SERVERS.
How can I get the result of that command into an environment variable in my service kubernetes YML file?
You could develop a client program in Python or Go that, using the service account mounted in each container, hits the API server endpoint and retrieves the Kafka Endpoints object. Parse the JSON output and grab the actual broker IP addresses.
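If a full client program is more than you need, the same lookup can be sketched with curl from inside a pod, using the standard in-cluster service-account mount paths. This assumes the pod's service account has RBAC permission to read Endpoints in the kafka namespace.

```shell
# Query the API server from inside a pod using the mounted
# service-account credentials, then inspect the returned JSON.
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA/token")
curl -s --cacert "$SA/ca.crt" -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/kafka/endpoints/broker"
# The response contains .subsets[].addresses[].ip and .subsets[].ports[].port,
# which you can join into the host:port list you need.
```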
Kubernetes allows you to use environment variables. Here is the documentation.
You can also use Helm, whose templates likewise allow the use of environment variables.
In your case, you can get the result into an env variable as below:
SOME_ENV_VARIABLE=$( command... )
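Applied to the Kafka case above, that pattern would look like this. Note this is a sketch that must run somewhere kubectl is available (for example in a wrapper or entrypoint script), not inside the Kubernetes YAML itself:

```shell
# Capture the broker endpoints into the environment variable,
# using the exact command from the question:
KAFKA_BOOTSTRAP_SERVERS=$(kubectl describe services broker --namespace=kafka \
  | grep Endpoints | awk '{print $2}')
export KAFKA_BOOTSTRAP_SERVERS
```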