Can't send logs into Graylog on Kubernetes

How to expose node port on ingress?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
logs-graylog NodePort 10.20.8.187 <none> 80:31300/TCP,12201:31301/UDP,1514:31302/TCP 5d3h
logs-graylog-elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 5d3h
logs-graylog-master ClusterIP None <none> 9000/TCP 5d3h
logs-graylog-slave ClusterIP None <none> 9000/TCP 5d3h
logs-mongodb-replicaset ClusterIP None <none> 27017/TCP 5d3h
This is what my service looks like; it has several node ports.
The Graylog web interface is exposed on port 80.
But I am not able to send logs to the URL. My Graylog web URL is https://logs.example.com
It runs over HTTPS; cert-manager is set up on the Kubernetes ingress.
I am not able to send GELF UDP logs to the URL. Am I missing something to open the port on the ingress, or some UDP setting?
this is my ingress :
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logs-graylog-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: graylog
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - logs.example.io
    secretName: graylog
  rules:
  - host: logs.example.io
    http:
      paths:
      - backend:
          serviceName: logs-graylog
          servicePort: 80
      - backend:
          serviceName: logs-graylog
          servicePort: 12201
      - backend:
          serviceName: logs-graylog
          servicePort: 31301
Service :
apiVersion: v1
kind: Service
metadata:
  labels:
    app: graylog
    chart: graylog-0.1.0
    component: graylog-service
    heritage: Tiller
    name: graylog
    release: logs
  name: logs-graylog
spec:
  clusterIP: 10.20.8.187
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31300
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: udp-input
    nodePort: 31301
    port: 12201
    protocol: UDP
    targetPort: 12201
  - name: tcp-input
    nodePort: 31302
    port: 1514
    protocol: TCP
    targetPort: 1514
  selector:
    graylog: "true"
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

UDP services aren't normally exposed via an Ingress controller the way TCP HTTP(S) services are. Few ingress controllers support UDP at all, and certainly not with three protocols combined in a single Ingress definition.
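One caveat: ingress-nginx can forward raw TCP/UDP streams, but through dedicated ConfigMaps rather than Ingress rules. A sketch, assuming the controller was started with its --udp-services-configmap flag and that the Graylog service lives in the default namespace (adjust both to your setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service>:<service port>"
  "12201": "default/logs-graylog:12201"
```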
If the cluster is hosted on a cloud provider, most support a Service of type LoadBalancer to map external connections into the cluster.
apiVersion: v1
kind: Service
metadata:
  name: logs-direct-graylog
spec:
  selector:
    graylog: "true"
  ports:
  - name: udp-input
    port: 12201
    protocol: UDP
    targetPort: 12201
  - name: tcp-input
    port: 1514
    protocol: TCP
    targetPort: 1514
  type: LoadBalancer
If a Service of type LoadBalancer is not available in your environment, you can use a NodePort service. The nodePorts you have defined will be available on the external IP of each of your nodes.
A nodePort is not strictly required for the HTTP port, as the nginx Ingress controller takes care of that for you elsewhere in its own Service.
apiVersion: v1
kind: Service
metadata:
  name: logs-graylog
spec:
  selector:
    graylog: "true"
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
The ports other than 80 can be removed from your ingress definition.
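Trimmed down along those lines, the Ingress from the question would keep only the web backend; a sketch reusing the question's own names:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logs-graylog-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: graylog
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - logs.example.io
    secretName: graylog
  rules:
  - host: logs.example.io
    http:
      paths:
      - backend:
          serviceName: logs-graylog
          servicePort: 80
```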

Related

kubernetes service not reachable

I have a service created with:
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    app: traefik
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80
  - name: websecure
    protocol: TCP
    port: 443
    targetPort: 443
  - name: admin
    protocol: TCP
    port: 8080
    targetPort: 8080
Here is the service
% kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
traefik ClusterIP 172.20.89.154 <none> 80/TCP,443/TCP,4080/TCP,4043/TCP,8080/TCP
When I try to connect to this service from the same or a different namespace, I get:
curl: (6) Could not resolve host: treafik.kube-system.svc.cluster.local
But if I use the cluster IP 172.20.89.154:8080, everything works. I am able to use the service name for other services and it works.
What is wrong with this setup?

Cluster IP service isn't working as expected

I created a NodePort service for the httpd pods and a ClusterIP service for the tomcat pods; they're in the same namespace behind an nginx LB. There is a weird issue with the app when the httpd and tomcat services are not the same type. When I change both to ClusterIP, or both to NodePort, everything works fine...
Traffic flow is like this:
HTTP and HTTPS traffic -> LB -> Ingress -> Httpd -> Tomcat
HTTPS virtual host custom port traffic -> LB -> Tomcat
TCP traffic -> LB -> Tomcat
Is there anything that can cause issues between httpd and Tomcat? Even though I can telnet to the httpd and tomcat pods from outside, for some reason the app functionality breaks (some static and JSP pages do get processed, though).
httpd-service:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - name: port-80
    port: 80
    protocol: TCP
    targetPort: 80
  - name: port-443
    port: 443
    protocol: TCP
    targetPort: 443
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  externalTrafficPolicy: Local
tomcat-service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
  annotations:
spec:
  selector:
    app: tomcat7 # metadata label of the deployment's pod template or pod metadata label
  ports:
  - name: port-8080 # optional when there is only one port
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
ingress lb:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1234": "test-web-dev/httpd:1234"
  "8262": "test-web-dev/tomcat7:8262"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: port-8262
    port: 8262
    protocol: TCP
    targetPort: 8262
Answering my own question.
NodePort services are required when a service needs to be exposed outside of the cluster, e.g. to the internet.
ClusterIP services are used when services need to communicate internally, e.g. frontend to backend.
In my case, users need to connect to both httpd and Tomcat (a specific app port) from outside, so both the tomcat and httpd services have to be of type NodePort. Configuring Tomcat as ClusterIP breaks the app, since the Tomcat app port isn't reachable from the internet.
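The fix described above amounts to adding type: NodePort to the tomcat7 service; a sketch based on the manifest in the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
spec:
  type: NodePort   # changed from the implicit default of ClusterIP
  selector:
    app: tomcat7
  ports:
  - name: port-8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262
```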

How to deploy custom nginx app on kubernetes?

I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file looks as follows:
kubepodDeploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: privateRepo/my-nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
My service details are:
kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: Selector: run=my-nginx
Type: NodePort
IP: 10.99.107.194
Port: http 8080/TCP
TargetPort: 80/TCP
NodePort: http 30488/TCP
Endpoints: 10.32.0.4:80,10.32.0.5:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32430/TCP
Endpoints: 10.32.0.4:443,10.32.0.5:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You cannot access the application on ipaddress:8080 without running a proxy server in front or changing iptables rules (not a good idea). The NodePort service type always exposes the service in the port range 30000-32767.
So at any point, your service will be reachable on ipaddress:some_higher_port.
Run a proxy in front which redirects traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well.
Note that the proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app via a DNS name.
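On a cloud provider, that last suggestion could look like the sketch below; the external port 8080 and the service name my-nginx-lb are assumptions, while the selector and target port come from the question's manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-lb   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    run: my-nginx
  ports:
  - name: http
    port: 8080       # external port requested in the question
    targetPort: 80   # nginx container port
    protocol: TCP
```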

Serving MQTT over WebSocket in a Kubernetes GCP environment

I'm currently serving MQTT messages over WebSocket to js clients. I use RabbitMQ to write messages on queue from a java backend and have them routed to the clients/frontend apps.
I deployed everything on a Kubernetes cluster on Google Cloud Platform and everything works just fine as long as I publish the RabbitMQ pod with a Kubernetes Load Balancer directly to the internet.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
  - name: http-manager
    nodePort: 30019
    port: 80
    protocol: TCP
    targetPort: 15672
  - name: mqtt-broker
    nodePort: 31571
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: ws-service
    nodePort: 32048
    port: 15675
    protocol: TCP
    targetPort: 15675
  selector:
    app: rabbitmq
I tried to replace the Kubernetes load balancer with a NodePort service and expose it through an Ingress and a GCP balancer, but the health probe fails and never recovers.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
spec:
  ports:
  - name: ws-port
    port: 15675
    protocol: TCP
    targetPort: 15675
  - name: manager-port
    port: 15672
    protocol: TCP
    targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basictest
  namespace: default
spec:
  rules:
  - host: mqtt-host.dom.cloud
    http:
      paths:
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15675
        path: /ws/*
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15672
        path: /*
The probe is HTTP, so I tried to assign a custom TCP probe and even to trick GCP by switching to a probe that points to another HTTP port on the same pod, with no success.
I need to use the GCP balancer to have a unified frontend that can hold an SSL certificate for both the HTTPS and WSS protocols.
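On GKE, one way to steer the load balancer's health check is a BackendConfig attached to the Service. A sketch, assuming a GKE cluster where the BackendConfig CRD is available and the RabbitMQ management UI on 15672 answers plain HTTP (the name rabbitmq-hc is hypothetical):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: rabbitmq-hc
spec:
  healthCheck:
    type: HTTP
    requestPath: /   # path the GCP health check should probe
    port: 15672      # management port instead of the WS port
```

It would then be referenced from the NodePort Service via the annotation cloud.google.com/backend-config: '{"default": "rabbitmq-hc"}'.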

ingress points to wrong port on a service

I have a Kubernetes service that exposes two ports as follows
Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
And here is the ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: f5
    virtual-server.f5.com/http-port: "80"
    virtual-server.f5.com/ip: controller-default
    virtual-server.f5.com/round-robin: round-robin
  creationTimestamp: 2018-10-05T18:54:45Z
  generation: 2
  name: m-ingress
  namespace: m-ns
  resourceVersion: "39557812"
  selfLink: /apis/extensions/v1beta1/namespaces/m-ns
  uid: 20241db9-c8d0-11e8-9fac-0050568d4d4a
spec:
  rules:
  - host: www.myhost.com
    http:
      paths:
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /first/path
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /second/path
status:
  loadBalancer:
    ingress:
    - ip: 172.31.74.89
But when I go to www.myhost.com/first/path I end up at the service that is listening on port 8888 of m-svc. What might be going on?
Another piece of information: I am sharing a service between two ingresses that point to different ports on the same service; is this a problem? There is a different ingress pointing at port 8888 on this service, and that one works fine.
I am also using an F5 controller.
After a lot of time investigating this, it looks like the root cause is in the F5s: because the name of the backend (Kubernetes service) is the same, it only creates one entry in the pool and routes the requests to this backend and the one port that gets registered in the F5 policy. Is there a fix for this? A workaround is to create a unique service for each port, but I don't want to make this change; is this possible at the F5 level?
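For reference, the workaround mentioned would look roughly like this: one Service per port, both selecting the same pods (the names m-svc-first/m-svc-second are hypothetical; selector and ports come from the describe output above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: m-svc-first
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: first
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: m-svc-second
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: second
    port: 8888
    targetPort: 8888
```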
From what I can see, you don't have a Selector field in your service. Without it, the service will not forward to any backend or pod. What makes you think it's going to port 8888? What's strange is that your service does have Endpoints. Did you create them manually?
The service would have to be something like this:
Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
Then in your deployment definition:
selector:
  matchLabels:
    app: my-application
Or in a pod:
apiVersion: v1
kind: Pod
metadata:
  annotations: { ... }
  labels:
    app: my-application
You should also be able to describe your Endpoints:
$ kubectl describe endpoints m-svc
Name: m-svc
Namespace: default
Labels: app=my-application
Annotations: <none>
Subsets:
Addresses: x.x.x.x
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
first 8080 TCP
second 8888 TCP
Events: <none>
Your Service appears to be what is called a headless service: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services. This would explain why the Endpoints object was created automatically.
Something is amiss, because it should be impossible for your HTTP request to arrive at your pods without .spec.selector populated.
I suggest deleting the Service you created, deleting the Endpoints object with the same name, and then recreating the Service with type: ClusterIP and spec.selector properly populated.
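A recreated Service along those lines might look like this; it is a sketch built from the describe output earlier in the question (names, ports, and the app=my-application selector all come from there):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: m-svc
  namespace: m-ns
spec:
  type: ClusterIP
  selector:
    app: my-application   # must match the pods' labels
  ports:
  - name: first
    port: 8080
    targetPort: 8080
  - name: second
    port: 8888
    targetPort: 8888
```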