cetic-nifi Invalid host header issue - kubernetes

Helm version: v3.5.2
Kubernetes version: v1.20.4
nifi chart version: 1.0.2 (latest release)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values.yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must to have minimal 12 length key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi starts, I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</code>] in the request [<code>/</code>]. Check for request manipulation or third-party intercept.</h2>
<h3>Valid host headers are [<code>empty</code>] or:</h3>
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I have tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not applied to nifi.properties inside the pod. Is there any reason for this?
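A quick way to check what actually ends up in nifi.properties inside the pod (the container name server and the conf path are assumptions based on the standard apache/nifi image layout and may differ in your release):
kubectl exec -n jeed-cluster nifi-0 -c server -- \
  grep -E 'nifi.web.proxy.host|nifi.web.https.port' /opt/nifi/nifi-current/conf/nifi.properties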
Appreciate help!

As a NodePort service, you can also assign a port number from the 30000-32767 range.
You can apply the values when you install your chart, for example:
properties:
  webProxyHost: localhost
  httpsPort: <port>
This should let NiFi whitelist your https://localhost:<port>.
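For example, applying the updated values to the existing release could look like this (a sketch; the release name nifi and namespace jeed-cluster are taken from the Helm output above, and the cetic repo alias is an assumption):
helm upgrade nifi cetic/nifi -n jeed-cluster -f values.yaml
After the pod restarts, check nifi.properties again to confirm nifi.web.proxy.host now contains the address you use in the browser.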

Related

How to use an ExternalName service to access an internal service that is exposed with ingress

I am trying out a possible Kubernetes scenario in a local minikube cluster: accessing an internal service that is exposed with an ingress in one cluster from another cluster, using an ExternalName service. I understand that with an ingress the service is already accessible within the cluster. As I am trying this out locally with minikube, I am unable to run two clusters simultaneously; I just wanted to verify whether it is possible to access an ingress-exposed service through an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal within a running pod, the error that I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 3000
      targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
    - host: k8s-yaml-hello.info
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: k8s-yaml-hello
                port:
                  number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
    - name: ''
      appProtocol: http
      protocol: TCP
      port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
As you are getting the error curl: (7) Failed to connect:
The above error message means that no web server is running on the specified (or implied) IP and port.
Check /etc/hosts (for example with nano /etc/hosts) to see whether the IP is pointing to the correct domain. If it isn't, provide the correct IP and port.
Refer to this SO for more information.
In ingress.yaml use port 80, and in service.yaml the port should also be 80. The service port and targetPort should be different; as per your YAML they are the same. Change it to 80 and give it a try. If you get any errors, post them here.
The problem is that minikube tunnel by default binds to the localhost address 127.0.0.1. Every node, machine, VM, container, etc. has its own localhost address, and it is always the same address; it exists to reach local services without having to know the IP address of a network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (it always points to "myself").
To make it work like you want, you first have to find out the ip address of your hosts network interface e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this your service should be reachable from within the container. Please check first with the ip address:
curl http://192.168.1.10
Then make sure name resolution with /etc/hosts works in your container with dig, nslookup, getent hosts or something similar that is available in your container.
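For example, a minimal sketch of such a check from a throwaway pod (the busybox image and the pod name dns-check are assumptions; use whatever tools your own images provide):
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
  sh -c 'nslookup k8s-yaml-hello-internal; nslookup k8s-yaml-hello.info'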

All Kubernetes proxy targets down - Prometheus Operator

I have a k8s cluster deployed in OpenStack, and I have deployed Prometheus Operator on it to monitor the cluster. But I am getting a "Kubernetes proxy down" alert for all the nodes.
I would like to know the basics of how Prometheus Operator scrapes the Kubernetes proxy, and what configuration is needed to fix this.
I can see that kube-proxy is running on port 10249 on all nodes.
Error :
Get http://10.8.10.11:10249/metrics: dial tcp 10.8.10.11:10249: connect: connection refused
HELM values configuration
kubeProxy:
  enabled: true

  ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  service:
    port: 10249
    targetPort: 10249
    # selector:
    #   k8s-app: kube-proxy

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## Enable scraping kube-proxy over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false
Set the kube-proxy metricsBindAddress so the metrics endpoint listens on all interfaces:
$ kubectl edit cm/kube-proxy -n kube-system
...
kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:10249
...
$ kubectl delete pod -l k8s-app=kube-proxy -n kube-system
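Once the kube-proxy pods have restarted, the metrics endpoint should answer on the node IP (a quick check sketch; 10.8.10.11 is one of the node IPs from the error above):
curl -s http://10.8.10.11:10249/metrics | head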

Remote access Zero to JupyterHub over Ethernet with ingress in Kubernetes

Context
I installed Kubernetes on a bare-metal server (4 nodes) and deployed Zero to JupyterHub to it.
This works fine; I can correctly access the hub from the master-node.
Now I want to access the Hub on the server from an external computer via Ethernet. Therefore, I followed the official instructions and installed MetalLB in order to provide an external IP for my proxy-public service (which gets set correctly).
Additionally, I installed the nginx-ingress controller in order to be able to create an Ingress, which also successfully gets an external IP (little hint: use the Helm chart; I couldn't get the service running when applying the other recommended steps).
Since I had a little trouble figuring out how to do this ingress, here is an example:
kubectl apply -f ingress.yaml --namespace jhub
# ingress.yaml:
# apiVersion: networking.k8s.io/v1beta1
# kind: Ingress
# metadata:
#   name: jupyterhub-ingress
#   annotations:
#     nginx.ingress.kubernetes.io/rewrite-target: /$1
# spec:
#   rules:
#   - host: jupyterhub.cluster
#     http:
#       paths:
#       - path: /
#         backend:
#           serviceName: proxy-public
#           servicePort: 80
Anyhow, I cannot open the external IP proxy-public provides (meaning I'm inserting the external IP in my browser).
Question
How can I remotely access my JupyterHub over the external IP; what am I missing?
I missed that this can be achieved in the same way as with the Kubernetes Dashboard: you have to establish an SSH connection (i.e. open a tunnel) from the external computer.
Of course this is not the "external" access I had in mind, but it is a working and fast solution for my test environment (and maybe yours).
How to establish this SSH connection:
First, get the external IP-address of your proxy-public:
$: kubectl get services --namespace jhub
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 10.99.241.72 <none> 8081/TCP 95m
proxy-api ClusterIP 10.107.175.27 <none> 8001/TCP 95m
proxy-public LoadBalancer 10.102.171.162 192.168.1.240 80:31976/TCP,443:32568/TCP 95m
Note: The range of the external IP was defined in my layer2 in my MetalLB-config.
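For reference, a layer2 address pool of that kind could look roughly like the following (a sketch using MetalLB's legacy ConfigMap format; the address range is an assumption chosen to match the external IP shown above):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250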
Using this information (and assuming you're on Linux), open a terminal and use the following command:
$ ssh pi@10.10.10.2 -L 8000:192.168.1.240:80
# -L opens a localhost connection (the tunnel)
# pi@10.10.10.2 logs me into my second node as user pi
Note 1: That port 8000 is configured as the targetPort for proxy-public's http port can also be seen when you describe the service and take a look at spec.ports (you can also get the settings for https there):
kind: Service
apiVersion: v1
metadata:
  name: proxy-public
  namespace: jhub
  ...
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8000
      nodePort: 31976
    - name: https
      ...
Finally, type http://localhost:8000/ into your browser - et voila, you get to your JupyterHub login-page!
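A quick sanity check from the same external machine while the tunnel is open (a sketch; curl is assumed to be installed there):
curl -I http://localhost:8000/
# any HTTP response here (typically a redirect towards the hub's login page) confirms the tunnel reaches proxy-public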

Expose port 80 on Digital Ocean's managed Kubernetes without a load balancer

I would like to expose my Kubernetes Managed Digital Ocean (single node) cluster's service on port 80 without the use of Digital Ocean's load balancer. Is this possible? How would I do this?
This is essentially a hobby project (I am beginning with Kubernetes) and just want to keep the cost very low.
You can deploy an Ingress configured to use the host network and port 80/443.
DO's firewall for your cluster doesn't have 80/443 inbound open by default.
If you edit the auto-created firewall the rules will eventually reset themselves. The solution is to create a separate firewall also pointing at the same Kubernetes worker nodes:
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:CLUSTER_UUID \
--name=k8s-extra-mycluster
(Get the CLUSTER_UUID value from the dashboard or the ID column from doctl kubernetes cluster list)
Create the nginx ingress using the host network. I've included the helm chart config below, but you could do it via the direct install process too.
EDIT: The Helm chart in the above link has been DEPRECATED. Therefore, the correct way of installing the chart (as per the new docs) is:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
After this repo is added & updated
# For Helm 2
$ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml
# For Helm 3
$ helm install myingress stable/nginx-ingress -f myingress.values.yml
# EDIT: the new way to install with Helm 3
helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yml
myingress.values.yml for the chart:
---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true
you should be able to access the cluster on :80 and :443 via any worker node IP and it'll route traffic to your ingress.
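A quick way to confirm the controller is listening on a node's port 80 (a sketch; WORKER_IP is a placeholder for any worker node's public IP):
curl -i http://WORKER_IP/
# expect an HTTP response from the ingress controller (typically a 404 from its default backend) rather than a connection timeout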
since node IPs can & do change, look at deploying external-dns to manage DNS entries to point to your worker nodes. Again, using the helm chart and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):
# For Helm 2
$ helm install --name=mydns -f mydns.values.yml stable/external-dns
# For Helm 3
$ helm install mydns stable/external-dns -f mydns.values.yml
mydns.values.yml for the chart:
---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
  # domains you want external-dns to be able to edit
  - example.com
rbac:
  create: true
create a Kubernetes Ingress resource to route requests to an existing Kubernetes service:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing123-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: testing123.example.com # the domain you want associated
      http:
        paths:
          - path: /
            backend:
              serviceName: testing123-service # existing service
              servicePort: 8000 # existing service port
after a minute or so you should see the DNS records appear and be resolvable:
$ dig testing123.example.com # should return worker IP address
$ curl -v http://testing123.example.com # should send the request through the Ingress to your backend service
(Edit: editing the automatically created firewall rules eventually breaks, add a separate firewall instead).
A NodePort Service can do what you want. Something like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      nodePort: 80
      targetPort: 80
This will redirect incoming traffic from port 80 of the node to port 80 of your pod. Publish the node IP in DNS and you're set.
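To find the address to publish, the node's public IP shows up in the EXTERNAL-IP column (a quick check, nothing DigitalOcean-specific assumed):
kubectl get nodes -o wide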
In general exposing a service to the outside world like this is a very, very bad idea, because the single node passing through all traffic to the service is both going to receive unbalanced load and be a single point of failure. That consideration doesn't apply to a single-node cluster, though, so with the caveat that LoadBalancer and Ingress are the fault-tolerant ways to do what you're looking for, NodePort is best for this extremely specific case.

How can I access Concourse built with helm outside of the cluster?

I am using the Concourse Helm chart provided at https://github.com/kubernetes/charts/tree/master/stable/concourse to set up Concourse inside our Kubernetes cluster. I have been able to get the setup working and I am able to access it within the cluster, but I am having trouble accessing it outside the cluster. The notes from the chart show that I can just use kubectl port-forward to get to the web page, but I don't want all of the developers to have to forward the port just to get to the web UI. I have tried creating a service with a NodePort like this:
apiVersion: v1
kind: Service
metadata:
  name: concourse
  namespace: concourse-ci
spec:
  ports:
    - port: 8080
      name: atc
      nodePort: 31080
    - port: 2222
      name: tsa
      nodePort: 31222
  selector:
    app: concourse-web
  type: NodePort
This allows me to get to the web page and interact with it in most ways, but when I try to look at a build's status it never loads the events that happened: the network request for /api/v1/builds/1/events is stuck in pending and the steps of the build never load. Any ideas what I can do to be able to completely access Concourse from outside the cluster?
EDIT: It seems like the events network request normally responds with a text/event-stream data type, and maybe the Kubernetes service isn't handling an event stream correctly, or there is something about Concourse that handles event streams differently than the norm.
After plenty of investigation I have found that the NodePort service is actually working and it is just my antivirus (Sophos) that is silently blocking the response from the events request.
Also, you can expose your port through a LoadBalancer service in Kubernetes:
kubectl get deployments
kubectl expose deployment <web pod name> --port=80 --target-port=8080 --name=expoport --type=LoadBalancer
It will create a public IP for you, and you will be able to access concourse on port 80.
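For example, to watch for the public IP to be assigned to the new service (expoport is the service name from the command above):
kubectl get svc expoport --watch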
Not sure, since I'm also a newbie, but you can configure your chart by providing your own version of https://github.com/kubernetes/charts/blob/master/stable/concourse/values.yaml:
helm install stable/concourse -f custom_values.yaml
There is an 'externalURL' param; it may be worth trying to set it to your URL:
## URL used to reach any ATC from the outside world.
##
# externalURL:
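A minimal custom_values.yaml along those lines might look like the following sketch (the placement of externalURL simply mirrors the commented-out entry above, so double-check the exact nesting in the chart's values.yaml; the URL is a placeholder built from the NodePort service in the question):
externalURL: http://<node-ip>:31080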
In addition, if you are on GKE, you can use an internal load balancer; set it up in your values.yaml file:
service:
  ## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
  ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
  ##
  # type: ClusterIP
  type: LoadBalancer

  ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
  # loadBalancerIP: 172.217.1.174

  ## Annotations to be added to the web service.
  ##
  annotations:
    # May be used in example for internal load balancing in GCP:
    cloud.google.com/load-balancer-type: Internal
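Once deployed with these values, the internal load balancer's address can be read from the web service (a sketch; the <release>-web service name follows the chart's usual naming and may differ per chart version):
kubectl get svc <release>-web
# the EXTERNAL-IP column shows the internal load balancer address inside your VPC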