OpenShift Kafka Connect REST API not accessible from outside through Route URL - kubernetes

I am running a Kafka Connect worker on an OpenShift cluster and am able to access the connector APIs from the pod's terminal as shown below (the listener in connect-distributed.properties is https://localhost:20000):
sh-4.2$ curl -k -X GET https://localhost:20000
{"version":"5.5.0-ce","commit":"dad78e2df6b714e3","kafka_cluster_id":"XojxTYmbTXSwHguxJ_flWg"}
I have created an OpenShift route with the below config:
- kind: Route
  apiVersion: v1
  metadata:
    name: '${APP_SHORT_NAME}-route'
    labels:
      app: '${APP_SHORT_NAME}'
    annotations:
      description: Route for application's http service.
  spec:
    host: '${APP_SHORT_NAME}.${ROUTE_SUFFIX}'
    port:
      targetPort: 20000-tcp
    tls:
      termination: reencrypt
      destinationCACertificate: '${DESTINATION_CA_CERTIFICATE}'
    to:
      kind: Service
      name: '${APP_NAME}-service'
Port 20000 is exposed from the Dockerfile, but the route URL throws the below error instead:
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running
The same OpenShift route URL works fine with a normal Spring Boot service, but the Kafka Connect worker is not binding to the route created as above. (Note that the Kafka Connect worker itself is running fine, per the logs in the OpenShift pods.)

Set the listeners back to the default bind address of 0.0.0.0 so that external connections can be accepted: with listeners=https://localhost:20000, the REST API binds only to the loopback interface, so traffic the Service forwards to the pod IP is refused.
If you need a different port, you can use port forwarding in the k8s Service rather than doing that in the application config.
If you're not using Strimzi (which it doesn't look like you are, based on 5.5.0-ce), you'll also want to add this env var so a cluster can be formed:
- name: CONNECT_REST_ADVERTISED_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
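Putting both pieces together, the container's env section could look like the sketch below. This assumes the Confluent cp-kafka-connect image convention of mapping CONNECT_* environment variables onto connect-distributed.properties entries; if you configure the worker through a mounted properties file instead, set listeners=https://0.0.0.0:20000 there.

env:
  # Bind the REST API to all interfaces instead of loopback only
  - name: CONNECT_LISTENERS
    value: "https://0.0.0.0:20000"
  # Advertise the pod IP so the workers can reach one another
  - name: CONNECT_REST_ADVERTISED_HOST_NAME
    valueFrom:
      fieldRef:
        fieldPath: status.podIP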

Related

Mosquitto Broker - DNS name instead of IP address for MQTT clients to use

I am able to get the Eclipse Mosquitto broker up and running, with the MQTT clients able to talk to the broker using the broker's IP address. However, as I am running these on Kubernetes, the broker IP keeps changing on restart. I would like to enable a DNS name for the broker, so the clients can use the broker name instead of the IP. coreDNS is running by default in Kubernetes.
Any suggestions on what can be done?
$ nslookup kubernetes.default
Server: 10.43.0.10
Address: 10.43.0.10:53
** server can't find kubernetes.default: NXDOMAIN
** server can't find kubernetes.default: NXDOMAIN
You can achieve that using a headless service. You create one by setting the clusterIP field in the service spec to None. Once you do that, instead of returning a single DNS A record for the service IP, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment.
With this, your client can perform a single DNS A record lookup to fetch the IPs of all the pods that are part of the service. A headless service is also often used as a service discovery mechanism.
apiVersion: v1
kind: Service
metadata:
  name: your-headless-service
spec:
  clusterIP: None # <-- This makes the service headless!
  selector:
    app: your-mosquito-broker-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
You can also resolve the DNS name with a regular service. The difference is that with a headless service you talk to the pod directly, instead of having the service act as a load balancer or proxy.
Resolving the service through DNS is easy, and you do it with the following pattern:
backend-broker.default.svc.cluster.local
Here backend-broker corresponds to the service name, default stands for the namespace the service is defined in, and svc.cluster.local is a configurable cluster domain suffix used in all cluster-local service names.
Note that if your client and broker are in the same namespace you can omit the svc.cluster.local suffix and the namespace. You then refer to the service as:
backend-broker
I highly encourage you to read more about DNS in Kubernetes.
All,
Thanks for answering the query, especially Thomas for the code pointers. With your suggestions, once I created a Service for the pod, I was able to get DNS working, as core-dns was already running. I was able to use the hostname in the MQTT broker after this:
opts.AddBroker(fmt.Sprintf("tcp://mqtt-broker:1883"))
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-02-01T19:08:46Z"
  labels:
    app: ipc
  name: mqtt-broker
  namespace: default
BTW, I wasn't able to get the headless service working; I was getting the error below, so I continued with ClusterIP itself plus the exposed 1883 port for MQTT. Any suggestions, please?
`services "mqtt-broker" was not valid:`
`spec.clusterIPs[0]: Invalid value: []string{"None"}: may not change once set`
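(For anyone hitting the same error: clusterIP is immutable once a Service exists, so an existing ClusterIP service cannot be edited into a headless one. Deleting the Service and recreating it with clusterIP: None should work. A sketch, reusing the mqtt-broker name and the app: ipc label from the snippet above:)

apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker   # recreate rather than edit; clusterIP may not change once set
  namespace: default
spec:
  clusterIP: None
  selector:
    app: ipc
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883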

How to create an HTTPS endpoint in Google Cloud from an HTTP-based server for Kubernetes Engine?

I have been trying to create an HTTPS endpoint in the Google Cloud K8s environment.
I have built a flask application in Python that is served by the Waitress production server on port 5000.
serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30)
I created a Dockerfile and pushed this to the Google Cloud repository. Then I created a Kubernetes cluster with one workload containing this image. After that, I exposed it via an external IP by creating a LoadBalancer. (After pushing the image to the Google repository, everything is managed through the Google Cloud Console; I do not have any configuration file, it should all be done through the Google Cloud Console.)
Now I do have an exposed IP address and port number to access my application. Let's say this IP address and port is 11.111.11.222:1111. I can access this IP via Postman and get a result.
My goal is, if it is possible, to expose this IP address via HTTPS as well, by using any Google Cloud resources (redirection, creating an ingress, etc.).
So, in the end, I want to reach the application through both http://11.111.11.222:1111 and https://11.111.11.222:1111.
Any suggestions?
A LoadBalancer translates to a network load balancer. You can configure multiple ports for it, e.g. 80 and 443, but then your application must handle the TLS part itself.
The Ingress resource, by contrast, creates an HTTP(S) load balancer.
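A minimal sketch of the LoadBalancer option (the names and the second target port are assumptions; it only makes sense if the app itself serves TLS on that port):

apiVersion: v1
kind: Service
metadata:
  name: flask-lb
spec:
  type: LoadBalancer
  selector:
    app: flask            # assumed pod label
  ports:
    - name: http
      port: 80
      targetPort: 5000    # the waitress HTTP listener from the question
    - name: https
      port: 443
      targetPort: 5443    # assumed: a second, TLS-enabled listener in the app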
From the GKE perspective you can try to configure an Ingress resource with HTTPS enabled:
Steps:
Create a basic flask app inside a pod (for example purposes only)
Expose an app via service object of type nodePort
Create a certificate
Create an Ingress resource
Test
Additional information (added by EDIT)
Create a basic flask app inside a pod (for example purposes only)
Below is a flask script which will respond with <h1>Hello!</h1>:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<h1>Hello!</h1>"

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080)
By default it will respond on port 8080.
Link to an answer with above script.
Expose an app via service object of type nodePort
Assuming that the deployment is configured correctly with a working app inside, you can expose it via a Service object of type NodePort with the following YAML definition:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: ubuntu
  ports:
    - name: flask-port
      protocol: TCP
      port: 80
      targetPort: 8080
Please make sure that:
the selector is configured correctly (a matching Deployment sketch follows below)
targetPort points to the port the app is running on
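For illustration, a Deployment whose pod labels line up with the selector above (a sketch; the name and image are placeholders, assuming the image runs the flask script on port 8080):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu       # must match the Service selector
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: flask
        image: IMAGE.WITH/FLASK-APP   # placeholder image running the script above
        ports:
        - containerPort: 8080         # must match the Service targetPort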
Create a certificate
For the Ingress object to work with HTTPS you will need to provide a certificate. You can create one by following the official GKE documentation: Cloud.google.com: Managed certificates.
Be aware that you will need a domain name to do that.
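A minimal sketch of such a managed certificate (the apiVersion varies by GKE version; flask-certificate and DOMAIN.NAME mirror the Ingress below):

apiVersion: networking.gke.io/v1beta1   # may be v1beta2 or v1 on newer GKE
kind: ManagedCertificate
metadata:
  name: flask-certificate
spec:
  domains:
    - DOMAIN.NAME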
Create an Ingress resource
Below is an example Ingress resource which will point your requests to your flask application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    networking.gke.io/managed-certificates: flask-certificate
    kubernetes.io/ingress.global-static-ip-name: flask-static-ip
spec:
  rules:
    - host: DOMAIN.NAME
      http:
        paths:
          - path: /
            backend:
              serviceName: flask-service
              servicePort: flask-port
Please take a specific look at the part of the YAML definition below and change it according to your case:
networking.gke.io/managed-certificates: flask-certificate
kubernetes.io/ingress.global-static-ip-name: flask-static-ip
Please allow some time for everything to be provisioned correctly.
After that you will have access to your application at domain.name on ports:
80 (http)
443 (https)
Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-- Kubernetes.io: Ingress TLS
Test
You can check if the above steps are configured correctly by:
entering https://DOMAIN.NAME in your web browser and checking that it responds with Hello over HTTPS
using a tool like curl: curl -v https://DOMAIN.NAME
Please let me know if this solution works for you.
Additional information (added by EDIT)
You can try to configure a Service object of type LoadBalancer, which will operate at layer 4, as Florian said in his answer.
Please refer to the official documentation: Kubernetes.io: Create external load balancer
You can also use the NGINX Ingress controller and either:
expose a TCP/UDP service by following Kubernetes.github.io: Ingress nginx: Exposing tcp udp services, which operates at L4, or
create an Ingress resource with SSL passthrough configured by following Kubernetes.github.io: Ingress nginx: Ssl passthrough (a sketch follows below).
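A minimal sketch of the passthrough variant. Assumptions: the NGINX ingress controller runs with --enable-ssl-passthrough, and the backend pod terminates TLS itself (passthrough forwards the raw TLS stream, so a plain-HTTP backend will not work):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-passthrough
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: DOMAIN.NAME
      http:
        paths:
          - path: /
            backend:
              serviceName: flask-service
              servicePort: flask-port   # assumes the app serves TLS on this port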
After researching, I found the answer in Google Cloud Run. It is very simple to deploy an HTTP-based flask app in a container, e.g. serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30) (no need for a self-signed certificate or HTTPS in this part, just make sure the HTTP app works), and then push it to Cloud Run.
Adjust the service parameters depending on how many resources you need to run it. In the machine settings, set the port that you are using in the docker container to be mapped; in my case, it is 5000. When you create the service, Google provides you with a domain address with HTTPS. You can use that URL and access your resources.
That's it!
For more information on Cloud Run:
https://cloud.google.com/serverless-options
The differences between computing platforms: https://www.signalfx.com/blog/gcp-serverless-comparison/

How to use an API that is mapped to a service in Kubernetes

I want to access my backend pods using an internal Kubernetes DNS name. Instead of using http://somepodip:8080/get, I want to use http://backend:8080/get to reach my backend.
I am currently running my backend pods and have hooked them up to a service:
kind: Service
apiVersion: v1
metadata:
  name: backend
spec:
  selector:
    app: myapp-backend
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
This does assign my pods to the backend service. But when I try to use a frontend pod with http://backend/get, it does not find the resource.
Am I incorrectly configuring the service?
Your service seems to be OK. The issue is possibly that your frontend is not server-rendered, which means that your browser is trying to look up the name backend; in that case you cannot rely on the Kubernetes service name, as your browser does not recognize it as a valid hostname. Service names only resolve through the cluster's DNS, i.e. from inside the cluster.
If you want to access it externally by name instead of by IP, check how to set up an Ingress entry: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
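A quick way to confirm the in-cluster name works (a sketch; the pod name is a placeholder and curl must be available in the image):

# run the lookup from inside a frontend pod, where cluster DNS applies
kubectl exec -it FRONTEND-POD-NAME -- curl http://backend:8080/get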

how to give service name and port in configmap yaml?

I have a Service (ClusterIP) like the following, which exposes the ports of a backend pod.
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
    - name: s-port
      port: 9780
    - name: b-port
      port: 8780
The frontend pod should be able to connect to the backend pod using the service. Should we replace the hostname with the service name to connect from the frontend pod to the backend pod?
I have to supply the service name and port through environment variables to the frontend pod's container. The environment variables are set using a ConfigMap.
Is it enough to give the service name fsimulator as the hostname to connect to? How do I refer to the service if it is created inside a namespace?
Thanks
Check out this documentation. The internal service PORT / IP pairs for active services are indeed passed into the containers by default.
As the documentation also says, it is possible (and recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from outside the namespace (or just service from inside it) will resolve to the correct service route. This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes; use the available tools if at all possible!
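For the concrete question, a minimal sketch (the ConfigMap name and keys are assumptions; the service name and ports come from the YAML above). From within the myns namespace the short name fsimulator is enough; from elsewhere use fsimulator.myns.svc.cluster.local:

apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: myns
data:
  BACKEND_HOST: fsimulator          # or fsimulator.myns.svc.cluster.local
  BACKEND_S_PORT: "9780"
  BACKEND_B_PORT: "8780"

The frontend container can then pick these up via envFrom / configMapRef in its pod spec.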

How can I access Concourse built with helm outside of the cluster?

I am using the Concourse helm chart provided at https://github.com/kubernetes/charts/tree/master/stable/concourse to set up Concourse inside of our Kubernetes cluster. I have been able to get the setup working and I am able to access it within the cluster, but I am having trouble accessing it outside the cluster. The notes from the build show that I can just use kubectl port-forward to get to the webpage, but I don't want all of the developers to have to forward the port just to get to the web UI. I have tried creating a service with a NodePort like this:
apiVersion: v1
kind: Service
metadata:
  name: concourse
  namespace: concourse-ci
spec:
  ports:
    - port: 8080
      name: atc
      nodePort: 31080
    - port: 2222
      name: tsa
      nodePort: 31222
  selector:
    app: concourse-web
  type: NodePort
This allows me to get to the webpage and interact with it in most ways, but when I try to look at a build's status it never loads the events that happened. Instead, a network request for /api/v1/builds/1/events is stuck in pending and the steps of the build never load. Any ideas what I can do to fully access Concourse from outside the cluster?
EDIT: It seems like the events network request normally responds with a text/event-stream data type, and maybe the Kubernetes service isn't handling an event stream correctly. Or there is something about Concourse that handles event streams differently than the norm.
After plenty of investigation I have found that the NodePort service is actually working and it is just my antivirus (Sophos) that is silently blocking the response from the events request.
Also, you can expose your port through a LoadBalancer in Kubernetes:
kubectl get deployments
kubectl expose deployment <web deployment name> --port=80 --target-port=8080 --name=expoport --type=LoadBalancer
It will create a public IP for you, and you will be able to access Concourse on port 80.
Not sure, since I'm also a newbie, but... you can configure your chart by providing your own version of https://github.com/kubernetes/charts/blob/master/stable/concourse/values.yaml
helm install stable/concourse -f custom_values.yaml
There is an 'externalURL' param; it may be worth trying to set it to your URL:
## URL used to reach any ATC from the outside world.
##
# externalURL:
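For example, in custom_values.yaml (a sketch; the exact key location depends on the chart version, and the URL and port are placeholders matching the NodePort above):

## URL used to reach any ATC from the outside world.
externalURL: http://concourse.example.com:31080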
In addition, if you are on GKE, you can use an internal load balancer; set it up in your values.yaml file:
service:
  ## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
  ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
  ##
  # type: ClusterIP
  type: LoadBalancer

  ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
  # loadBalancerIP: 172.217.1.174

  ## Annotations to be added to the web service.
  ##
  annotations:
    # May be used in example for internal load balancing in GCP:
    cloud.google.com/load-balancer-type: Internal