Kubernetes service not reachable

I have a service created using:
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    app: traefik
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
    - name: websecure
      protocol: TCP
      port: 443
      targetPort: 443
    - name: admin
      protocol: TCP
      port: 8080
      targetPort: 8080
Here is the service:
% kubectl -n kube-system get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
traefik   ClusterIP   172.20.89.154   <none>        80/TCP,443/TCP,4080/TCP,4043/TCP,8080/TCP
When I try to connect to this service from the same or a different namespace, I get
curl: (6) Could not resolve host: treafik.kube-system.svc.cluster.local
But if I use the cluster IP 172.20.89.154:8080, everything works. I am able to use the service name for other services and it works.
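For reference, a minimal sketch of checking the in-cluster DNS name directly (a throwaway busybox pod; the pod name dns-test is arbitrary):

# Run nslookup against the service FQDN from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup traefik.kube-system.svc.cluster.local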
What is wrong in this setup?

Related

Using the external IP service of the load balancer for different pod Kubernetes

I have deployed RabbitMQ in Kubernetes using a service of type LoadBalancer. When the service is created, an external IP is assigned. Could you please tell me if I can bind another deployment to this IP on other ports? Thanks.
It is possible; you just have to create a Service with multiple ports, for example:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: any-other-port
      port: <port-number>
      targetPort: <target-port>
  selector:
    app: app
  type: LoadBalancer
And you will get output similar to this:
$ kubectl get svc
service-name   LoadBalancer   <Internal-IP>   <External-IP>   80:30870/TCP,443:32602/TCP,<other-port>:32388/TCP
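Note that the single selector (app: app above) has to match the pods of every deployment you want behind this external IP. As a minimal sketch, assuming a hypothetical second deployment, its pod template just needs to carry the same label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: second-app                           # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app                               # same label the Service selects on
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: second-app
          image: example/second-app:latest   # placeholder image
          ports:
            - containerPort: 5672            # placeholder; should match the Service's <target-port> above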

How to deploy custom nginx app on kubernetes?

I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file looks as follows:
kubepodDeploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: privateRepo/my-nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      protocol: TCP
      name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
My service details are:
kubectl describe service my-nginx ~/Project/htmlBasic
Name:                     my-nginx
Namespace:                default
Labels:                   run=my-nginx
Annotations:              <none>
Selector:                 run=my-nginx
Type:                     NodePort
IP:                       10.99.107.194
Port:                     http  8080/TCP
TargetPort:               80/TCP
NodePort:                 http  30488/TCP
Endpoints:                10.32.0.4:80,10.32.0.5:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  32430/TCP
Endpoints:                10.32.0.4:443,10.32.0.5:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
You cannot access the application on ipaddress:8080 without running a proxy server in front or changing iptables rules (not a good idea). The NodePort service type exposes the service in the node port range, by default 30000-32767.
So at any point, your service will be running on ipaddress:some_higher_port.
Run a proxy in front that redirects traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well.
Just to add: the proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app on some DNS name.
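For illustration, with the values from the describe output above, the app is already reachable on any node at http://<node-ip>:30488. A minimal sketch of the LoadBalancer variant (same selector and ports as in the question; only the type changes, and the cloud provider assigns the external address):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer        # cloud provider provisions an external IP / DNS name
  selector:
    run: my-nginx
  ports:
    - name: http
      port: 8080            # external port, as in the question
      targetPort: 80        # nginx container port
      protocol: TCP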

Gloo gateway-proxy External IP on GKE : telnet: Unable to connect to remote host: Connection refused

I deployed NiFi and the Gloo API Gateway on the same GKE cluster. The external IP exposed as a LoadBalancer works well (it opens in a web browser and via telnet). However, when I use telnet to connect to the Gloo API Gateway from the GKE Cloud Shell, the connection is refused.
Based on related causes and solutions, I have allowed traffic to flow into the cluster by creating a firewall rule:
gcloud compute firewall-rules create my-rule --allow=all
What can I do about it?
kubectl get -n gloo-system service/gateway-proxy-v2 -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gloo","gateway-proxy-id":"gateway-proxy-v2","gloo":"gateway-proxy"},"name":"gateway-proxy-v2","namespace":"gloo-system"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"gateway-proxy":"live","gateway-proxy-id":"gateway-proxy-v2"},"type":"LoadBalancer"}}
  labels:
    app: gloo
    gateway-proxy-id: gateway-proxy-v2
    gloo: gateway-proxy
  name: gateway-proxy-v2
  namespace: gloo-system
spec:
  clusterIP: 10.122.10.215
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 30189
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 30741
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    gateway-proxy: live
    gateway-proxy-id: gateway-proxy-v2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 34.xx.xx.xx
kubectl get svc -n gloo-system
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
gateway-proxy-v2   LoadBalancer   10.122.10.215   34.xx.xx.xx   80:30189/TCP,443:30741/TCP   63m
gloo               ClusterIP      10.122.5.253    <none>        9977/TCP                     63m
You can try bumping to Gloo version 1.3.6.
Please take a look at https://docs.solo.io/gloo/latest/upgrading/1.0.0/ to track any possible breaking changes.
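A minimal sketch of what that bump could look like, assuming Gloo was installed with its Helm chart into gloo-system under a release named gloo and a repo alias gloo (the release name, repo alias, and namespace are assumptions; adjust them to your installation):

# Assumed Helm-based install of Gloo
helm repo update
helm upgrade gloo gloo/gloo --namespace gloo-system --version 1.3.6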

Can't send logs into Graylog on Kubernetes

How do I expose a node port on an ingress?
NAME                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                       AGE
logs-graylog                 NodePort    10.20.8.187   <none>        80:31300/TCP,12201:31301/UDP,1514:31302/TCP   5d3h
logs-graylog-elasticsearch   ClusterIP   None          <none>        9200/TCP,9300/TCP                             5d3h
logs-graylog-master          ClusterIP   None          <none>        9000/TCP                                      5d3h
logs-graylog-slave           ClusterIP   None          <none>        9000/TCP                                      5d3h
logs-mongodb-replicaset      ClusterIP   None          <none>        27017/TCP                                     5d3h
This is what my service looks like; there are some node ports.
The Graylog web interface is exposed on port 80.
But I am not able to send logs to the URL. My Graylog web URL is https://logs.example.com.
It's running on HTTPS, and cert-manager is set up on the Kubernetes ingress.
I am not able to send GELF UDP logs to the URL. Am I missing something to open the port from the ingress, or some UDP filter?
This is my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logs-graylog-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: graylog
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - logs.example.io
      secretName: graylog
  rules:
    - host: logs.example.io
      http:
        paths:
          - backend:
              serviceName: logs-graylog
              servicePort: 80
          - backend:
              serviceName: logs-graylog
              servicePort: 12201
          - backend:
              serviceName: logs-graylog
              servicePort: 31301
Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: graylog
    chart: graylog-0.1.0
    component: graylog-service
    heritage: Tiller
    name: graylog
    release: logs
  name: logs-graylog
spec:
  clusterIP: 10.20.8.187
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 31300
      port: 80
      protocol: TCP
      targetPort: 9000
    - name: udp-input
      nodePort: 31301
      port: 12201
      protocol: UDP
      targetPort: 12201
    - name: tcp-input
      nodePort: 31302
      port: 1514
      protocol: TCP
      targetPort: 1514
  selector:
    graylog: "true"
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
UDP services aren't normally exposed via an Ingress Controller like TCP HTTP(S) services are. I'm not sure any ingress controllers even support UDP, certainly not with 3 protocols combined in a single ingress definition.
If the cluster is hosted on a cloud service, most support a Service with type LoadBalancer to map external connections into a cluster.
apiVersion: v1
kind: Service
metadata:
  name: logs-direct-graylog
spec:
  selector:
    graylog: "true"
  ports:
    - name: udp-input
      port: 12201
      protocol: UDP
      targetPort: 12201
    - name: tcp-input
      port: 1514
      protocol: TCP
      targetPort: 1514
  type: LoadBalancer
If a Service of type LoadBalancer is not available in your environment, you can use a NodePort Service. The nodePorts you have defined will be available on the external IP of each of your nodes.
A nodePort is not strictly required for the http port, as the nginx Ingress Controller takes care of that for you elsewhere in its own Service.
apiVersion: v1
kind: Service
metadata:
  name: logs-graylog
spec:
  selector:
    graylog: "true"
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 9000
The ports other than 80 can be removed from your ingress definition.
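To check the GELF UDP input end to end, here is a minimal sketch (it assumes a GELF UDP input is listening in Graylog on 12201; <node-ip> is a placeholder for any node's external IP, and 31301 is the nodePort from the Service above):

# Send a single GELF JSON message over UDP with netcat
echo -n '{"version":"1.1","host":"test-host","short_message":"hello graylog"}' \
  | nc -u -w1 <node-ip> 31301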

DigitalOcean Loadbalancer behavior Kubernetes TCP 443

Currently I have this LoadBalancer Service on my Kubernetes cluster.
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes      ClusterIP      [HIDDEN]     <none>        443/TCP         44h
load-balancer   LoadBalancer   [HIDDEN]     [HIDDEN]      443:30014/TCP   39h
This is my .yaml config file:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
spec:
  selector:
    app: nodeapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 443
      targetPort: 3000
      name: https
For some reason DigitalOcean does not set up HTTPS; instead it leaves it as TCP 443. I then have to manually go to DigitalOcean, change TCP to HTTPS, and create the Let's Encrypt certificate. How can I make Kubernetes create a load balancer that uses HTTPS on port 443 instead of TCP 443?
According to their documentation, you need to add additional annotations like this:
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
How to add an SSL certificate: https://www.digitalocean.com/docs/networking/load-balancers/how-to/custom-ssl-cert/
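To fill in the certificate ID used in the annotation above, assuming doctl is installed and authenticated, the certificates and their IDs can be listed with:

doctl compute certificate list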
A Service of type LoadBalancer will create a Layer 4 LB (network LB), which is aware only of the IP and port.
You need a Layer 7 LB (application LB) that is application-aware.
So, to enable HTTPS, you will need to manage it via an Ingress.
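A minimal sketch of that approach, assuming an nginx ingress controller and a cert-manager ClusterIssuer are already installed; the host, issuer, and secret names are placeholders, and the backend points at the existing Service from the question (which would then typically be switched to type ClusterIP):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodeapp-ingress                          # hypothetical name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # assumes such an issuer exists
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com                        # placeholder host
      secretName: nodeapp-tls                    # cert-manager stores the certificate here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: load-balancer              # the Service from the question
                port:
                  number: 443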