Kubernetes: access a Service in another namespace via HTTP request

I've got InfluxDB running as a database service in the default namespace.
Its service is called influxdb and works fine with Chronograf to visualize the data.
Now I'd like to connect to this service from another deployment in the namespace test. It's a Python application; the standard Python InfluxDB library uses Requests to connect to the db.
Architecture Overview
Istio is also installed.
Namespace: default
InfluxDB Deployment
InfluxDB Service
Chronograf Deployment (visualises InfluxDB)
Chronograf Service to Ingress (for external web access)
Namespace: test
Python app which should connect to InfluxDB for processing etc.
InfluxDB Service (which points to influxdb.default.svc.cluster.local)
Therefore I created a Service in the namespace test which points to the influxdb Service in the default namespace.
apiVersion: v1
kind: Service
metadata:
  name: influxdb
  labels:
    app: pythonapp
  namespace: test
spec:
  type: ExternalName
  externalName: influxdb.default.svc.cluster.local
  ports:
    - port: 8086
      name: http
    - port: 8088
      name: http-flux
I then deployed the Python app, which points to the influxdb Service, but it keeps getting an HTTP connection error.
2020-07-03 13:02:05 - db.meterdb [meterdb.__init__:57] - ERROR - Oops, something wen't wrong during init of db. message: HTTPConnectionPool(host='influxdb', port=8086): Max retries exceeded with url: /query?q=SHOW+DATABASES (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6863ed6310>: Failed to establish a new connection: [Errno 111] Connection refused'))
2020-07-03 13:02:05 - db.meterdb [meterdb.check_connection:113] - ERROR - can't reach db server... message: HTTPConnectionPool(host='influxdb', port=8086): Max retries exceeded with url: /ping (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6863e57650>: Failed to establish a new connection: [Errno 111] Connection refused'))
When I visualise the traffic with Kiali, I see that the Python app tries to connect to the influxdb Service, but the destination shows up as unknown for HTTP traffic.
I don't know how to get it to use the influxdb Service I created.
Connection settings for the Python InfluxDB client library (link to python influxdb lib):
host=influxdb
port=8086
Traffic from Kiali
How can I route the traffic to the right Service?
It seems to me that the traffic is routed to an unknown service because it's HTTP and not TCP.
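A quick sanity check that the ExternalName Service resolves and that the port answers (a sketch; the test images below are arbitrary debugging images, not part of the setup above):
# Check DNS resolution of the ExternalName Service from inside the test namespace
kubectl run -n test dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup influxdb
# Check the InfluxDB /ping endpoint through the fully qualified name
kubectl run -n test curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sv http://influxdb.default.svc.cluster.local:8086/ping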

You don't need the extra Service in the test namespace:
kind: Service
metadata:
  name: influxdb
  labels:
    app: pythonapp
  namespace: test
Just access the service directly in your Python request (note the http:// scheme, which Requests needs):
requests.get('http://influxdb.default.svc.cluster.local:8086')
And this can be made more configurable:
# Kubernetes deployment
containers:
  - name: pythonapp
    env:
      - name: DB_URL
        value: http://influxdb.default.svc.cluster.local:8086
# python
DB = os.environ['DB_URL']
requests.get(DB)
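Since the question uses the Python InfluxDB client library, the same idea with that client might look like the sketch below; the DB_HOST and DB_PORT variable names are just examples, not part of the original setup.
# Sketch: point the InfluxDB client at the service in the default namespace.
import os
from influxdb import InfluxDBClient

client = InfluxDBClient(
    host=os.environ.get('DB_HOST', 'influxdb.default.svc.cluster.local'),
    port=int(os.environ.get('DB_PORT', '8086')),
)
print(client.ping())  # returns the InfluxDB server version if it is reachable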

Related

HTTPRoute set a timeout

I am trying to set up a multi-cluster architecture. I have a Spring Boot API that I want to run on a second cluster (for isolation purposes). I have set that up using the gateway.networking.k8s.io API. I am using a Gateway that has an SSL certificate and matches an IP address that's registered to my domain in the DNS registry. I am then setting up an HTTPRoute for each service that I am running on the second cluster. That works fine and I can communicate between our clusters and everything works as intended but there is a problem:
There is a timeout of 30s by default and I cannot change it. I want to increase it, as the application in the second cluster is a WebSocket and I obviously would like our WebSocket connections to stay open for more than 30s at a time. I can see that in the backend service that's created from our HTTPRoute there is a timeout specified as 30s. I found a command to increase it: gcloud compute backend-services update gkemcg1-namespace-store-west-1-8080-o1v5o5p1285j --timeout=86400
When I run that command, the timeout is increased and the WebSocket connection is kept alive. But after a few minutes this change gets overridden (I suspect because it's managed by the YAML file). This is the YAML file from which my backend service is created:
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: public-store-route
  namespace: namespace
  labels:
    gateway: external-http
spec:
  hostnames:
    - "my-website.example.org"
  parentRefs:
    - name: external-http
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /west
      backendRefs:
        - group: net.gke.io
          kind: ServiceImport
          name: store-west-1
          port: 8080
I have tried to add either a timeout, timeoutSec, or timeoutSeconds under every level with no success. I always get the following error:
error: error validating "public-store-route.yaml": error validating data: ValidationError(HTTPRoute.spec.rules[0].backendRefs[0]): unknown field "timeout" in io.k8s.networking.gateway.v1beta1.HTTPRoute.spec.rules.backendRefs; if you choose to ignore these errors, turn validation off with --validate=false
Surely there must be a way to configure this. But I wasn't able to find anything in the documentation referring to a timeout. Am I missing something here?
How do I configure the timeout?
Edit:
I have found this resource: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-gateway-resources
I have been trying to set up an LBPolicy and attach it to the Gateway, HTTPRoute, Service, or ServiceImport, but nothing has made a difference. Am I doing something wrong, or is this not working as it is supposed to? This is my YAML:
kind: LBPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: store-timeout-policy
  namespace: sandstone-test
spec:
  default:
    timeoutSec: 50
  targetRef:
    name: public-store-route
    group: gateway.networking.k8s.io
    kind: HTTPRoute
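For reference, a hedged sketch: newer versions of the Gateway API (v1, via GEP-1742) define a per-rule timeouts field on HTTPRoute, which would look roughly like the fragment below; whether the GKE Gateway controller in use honours it depends on the controller version.
# Sketch only: per-rule timeouts on an HTTPRoute rule (Gateway API v1 / GEP-1742)
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /west
    timeouts:
      request: 86400s
    backendRefs:
      - group: net.gke.io
        kind: ServiceImport
        name: store-west-1
        port: 8080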

Unable to connect to my Docker container running inside a single-node Kubernetes cluster

Kubernetes newbie here.
First, let me tell you about the functionality of my Node.js sample application. It is a simple web server that responds with the text "Hello from Node" in response to a GET request to the root (/) route. Also, when the server starts, it outputs the text "Server listening on port 8000".
Currently, the app is running inside a container on a single-node Kubernetes cluster. (I am using Minikube)
When I run the command kubectl logs web-server, I get the desired response. web-server is the name of the running pod.
But when I try to connect to the application using the command curl 192.168.59.100:31515, I get the response: Connection refused. I should see the response: "Hello from node" instead.
Please see the picture below.
Please note that in the picture above, k & m are aliases for kubectl & minikube respectively.
My YAML files are as follows:
node-server-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    web: server
spec:
  containers:
    - name: web-server-container
      image: sundaray/node-server:v1
      ports:
        - containerPort: 3000
node-server-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-server-port
spec:
  type: NodePort
  ports:
    - port: 3050
      targetPort: 3000
      nodePort: 31515
  selector:
    web: server
What am I doing wrong?
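A few checks that could narrow down where the connection is refused (a sketch; names are taken from the manifests above, and note that the server log mentions port 8000 while the Pod and Service target port 3000):
kubectl get svc web-server-port          # confirm NodePort 31515 was actually assigned
kubectl get endpoints web-server-port    # empty ENDPOINTS means the selector matches no ready pod
minikube service web-server-port --url   # the URL Minikube actually exposes for this service
kubectl exec web-server -- wget -qO- localhost:3000   # is anything listening on the targetPort? (only if the image has wget)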

GKE config connector issue - Post i/o timeout

I am running into the below error when creating a compute IP.
Config connector is already enabled, and it is a private cluster hosted on a shared network.
Version 1.17.15-gke.800
$ kubectl apply -f webapp-compute-ip.yaml
Error from server (InternalError): error when creating "webapp-compute-ip.yaml": Internal error occurred: failed calling webhook "annotation-defaulter.cnrm.cloud.google.com": Post https://cnrm-validating-webhook.cnrm-system.svc:443/annotation-defaulter?timeout=30s: dial tcp 192.168.66.130:9443: i/o timeout
$ cat webapp-compute-ip.yaml
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: webapp-ip-test
  namespace: sandbox
  labels:
    app: webapp
    environment: test
  annotations:
    cnrm.cloud.google.com/project-id: "cluster-name"
spec:
  location: global
This problem was due to a config connector version issue.
There was a change in the webhook default port, from 443 to 9443.
The Config Connector version depends on the GKE version, so I did not have any control over it; moreover, there is no public documentation on which Config Connector version ships with which GKE version. There is an existing request here.
The solution for me was to add port 9443 to the firewall rule.
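A sketch of the kind of firewall rule that this describes; the rule name, network, control-plane CIDR and node tag are placeholders, not values from this setup.
# Allow the GKE control plane to reach the Config Connector webhook on port 9443
gcloud compute firewall-rules create allow-cnrm-webhook-9443 \
    --network=SHARED_VPC_NETWORK \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:9443 \
    --source-ranges=CONTROL_PLANE_CIDR \
    --target-tags=GKE_NODE_TAG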

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a Service + Ingress to access it from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
        - image: asia.gcr.io/my-app/my-app:latest
          name: webapp
          ports:
            - containerPort: 80
              name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly:
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
Keep note of the service's cluster IP and the pod's IP.
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the routes in your iptables rules, which are managed by kube-proxy, and that would be an issue with the cluster.
Finally, if both the pod and the service are working, there is an issue with the load balancer health checks, which Google would need to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
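A sketch of what adding a readinessProbe to the webapp container could look like, so the GCLB health check is derived from it; the path / is an assumption and must be something that returns HTTP 200.
containers:
  - image: asia.gcr.io/my-app/my-app:latest
    name: webapp
    ports:
      - containerPort: 80
        name: http-server
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10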

Access SQL Server database from Kubernetes Pod

My deployed Spring Boot application is trying to connect to an external SQL Server database from a Kubernetes Pod, but every time it fails with the error:
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have exec'd into the Pod and can successfully ping the DB server without any issues.
Below are the solutions I have tried:
Created a Service and Endpoints, provided the DB IP in the configuration file, and tried to bring up the application in the Pod
Tried using the internal IP from the Endpoints instead of the DB IP in the configuration, to see whether the internal IP resolves to the DB IP
Both of these cases gave the same result. Below is the YAML I am using to create the Service and Endpoints.
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
    - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
  - addresses:
      - ip: <<DB IP>>
    ports:
      - port: 1433
Please let me know if I am wrong or missing something in this setup.
Additional information about the K8s setup:
It is a clustered master with external etcd cluster topology
OS on the nodes is CentOS
Able to ping the server from all nodes and the pods that are created
For this scenario, a Service of type ExternalName is very useful. You redirect traffic to the external host without defining an Endpoints object.
kind: "Service"
apiVersion: "v1"
metadata:
namespace: "your-namespace"
name: "ftp"
spec:
type: ExternalName
externalName: your-ip
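If the application then connects through the in-cluster Service name, the Spring Boot datasource URL would look roughly like the line below; the service/namespace come from the question's manifests and the database name is illustrative.
# Sketch: JDBC URL pointing at the in-cluster Service DNS name
spring.datasource.url=jdbc:sqlserver://mssql.cattle.svc.cluster.local:1433;databaseName=MyDatabase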
The issue was resolved by updating the deployment YAML with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints to access the DB. Thank you for all the input on the post.