How to deploy Ingress to expose MinIO cluster outside - kubernetes

I have set up MinIO in Kubernetes (k3s) as a single-node deployment.
Services
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio
  labels:
    app: minio
spec:
  clusterIP: None
  selector:
    app: minio
  ports:
    - port: 9011
      name: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio
  labels:
    app: minio
spec:
  type: LoadBalancer
  selector:
    app: minio
  ports:
    - port: 9012
      targetPort: 9011
      protocol: TCP
StatefulSet
[. . .]
containers:
  - name: ches
    image: minio/minio
    args:
      - server
      - /data
[. . .]
    - containerPort: 9000
      hostPort: 9011
[. . .]
The command kubectl logs minio-0 -n minio returns the following:
API: http://10.42.0.14:9000 http://127.0.0.1:9000
Console: http://10.42.0.14:41989 http://127.0.0.1:41989
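As an aside, the console port (41989 above) is picked at random on every start; MinIO can pin it with the --console-address flag. A minimal sketch of the StatefulSet args, assuming you want the console fixed to 9090:

args:
  - server
  - /data
  - --console-address
  - ":9090"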
I am trying to set up Ingress. The steps I followed are:
Setup an Ingress Controller
From here: https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
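Before moving on, it's worth confirming the controller actually came up; a quick check, assuming the default names the chart creates:

kubectl get pods -n ingress-nginx
kubectl get svc ingress-nginx-controller -n ingress-nginx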
Create an Internal Service
apiVersion: v1
kind: Service
metadata:
  name: minio-service-ingress
  namespace: minio
  labels:
    app: minio
spec:
  selector:
    app: minio
  ports:
    - port: 9011
      name: minio
Create Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  namespace: minio
spec:
  rules:
    - host: minio.com
      http:
        paths:
          - backend:
              service:
                name: minio-service-ingress
                port:
                  number: 9011
            path: /
            pathType: Prefix
When executing kubectl get ing -n minio:
NAME            CLASS    HOSTS       ADDRESS        PORTS   AGE
minio-ingress   <none>   minio.com   192.168.1.14   80      43m
In /etc/hosts I added the entry:
192.168.1.14 minio.com
However, when I try to open http://minio.com/ in a browser I get:
Bad Gateway
Am I missing something here?
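One way to narrow down a Bad Gateway like this is to check whether the Service the Ingress points at actually has endpoints, and whether the pod answers on the port the Service targets. A minimal debugging sketch (the health path assumes a reasonably recent MinIO release):

kubectl get endpoints minio-service-ingress -n minio
kubectl port-forward minio-0 9000:9000 -n minio
curl -I http://127.0.0.1:9000/minio/health/live

If the endpoints list is empty, or lists a port the pod is not actually listening on, the Ingress backend is misdirected.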

Related

Kubernetes service not exposed on minikube

I'm new to k8s and I'm trying to figure out how to deploy my first Docker image on minikube.
My k8.yaml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-docker-image
          image: my-docker-image:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: my-service
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: my-ingress
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080
Everything seems fine to me; however, I'm not able to reach my service on the cluster.
I tried to create a tunnel using the minikube tunnel command, and this is the result when I execute kubectl get services:
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
my-service   LoadBalancer   10.109.154.236   127.0.0.1     8080:30558/TCP   2m53s
However, if I try to call my service at 127.0.0.1:30558, the host is unreachable.
Can someone help me?
There is an issue in the service selector as well, so first we need to fix the service selector so that it matches the deployment label:
replicas: 1
selector:
  matchLabels:
    app: my-app
The service should refer to this selector; in the service it should be app: my-app, the same as in the deployment:
type: LoadBalancer
selector:
  app: my-app
ports:
  - port: 8080
    targetPort: 8080
To access it from the host:
minikube service my-service
Then re-apply the manifests:
kubectl delete -f myapp.yaml
kubectl apply -f myapp.yaml
Full working manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-docker-image
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
Also worth considering the service type for Minikube.
What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?
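For local minikube work, a NodePort Service is often the simplest choice, since it needs no tunnel at all. A minimal sketch for the corrected manifest above (the nodePort value is an arbitrary pick from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Running minikube service my-service --url then prints a URL reachable from the host.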

Kubernetes ingress nginx "not found" (les jackson tutorial)

I'm following the tutorial from Les Jackson about Kubernetes but I'm stuck around 04:40:00. I always get a 404 returned from my Ingress Nginx Controller. I followed everything he does, but I can't get it to work.
I also read that this could have something to do with IIS, so I stopped the default website which also runs on port 80.
The apps running in the containers are .NET Core.
Commands-depl & ClusterIP
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
        - name: commandservice
          image: maartenvissershub/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
    - name: commandservice
      protocol: TCP
      port: 80
      targetPort: 80
Platforms-depl & ClusterIP
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: maartenvissershub/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
    - name: platformservice
      protocol: TCP
      port: 80
      targetPort: 80
Ingress-srv
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: acme.com
      http:
        paths:
          - path: /api/platforms
            pathType: Prefix
            backend:
              service:
                name: platforms-clusterip-srv
                port:
                  number: 80
          - path: /api/c/platforms
            pathType: Prefix
            backend:
              service:
                name: commands-clusterip-srv
                port:
                  number: 80
I also added this to my hosts file:
127.0.0.1 acme.com
And I applied this from the nginx documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
kubectl get ingress
kubectl describe ing ingress-srv
Dockerfile CommandService
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "PlatformService.dll" ]
kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx
Am I missing something?
I found my solution. There was a process (PID 4) listening on 0.0.0.0:80. I could stop it using NET stop HTTP in an admin cmd.
I noticed that running kubectl get services -n ingress-nginx returned an ingress-nginx-controller service, which is fine, but with no EXTERNAL-IP. Running kubectl get ingress also didn't show an ADDRESS. Now they both show "localhost" as the value for EXTERNAL-IP and ADDRESS.
Reference: Port 80 is being used by SYSTEM (PID 4), what is that?
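For reference, a quick way on Windows to confirm which process is holding port 80 before stopping it (run from an elevated cmd):

netstat -ano | findstr :80
tasklist /fi "PID eq 4"
NET stop HTTP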
So this can occur for several reasons:
Pods or containers are not working - try using kubectl get pods -n <your namespace> to see if any are not in 'running' status.
Assuming they are running, try kubectl describe pod <pod name> -n <your namespace> to see the events on your pod, just to make sure it's running properly.
I have noticed you are not exposing ports in your deployments. Please update your deployments like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: maartenvissershub/platformservice:latest
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
    - name: platformservice
      protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
        - name: commandservice
          image: maartenvissershub/commandservice:latest
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
    - name: commandservice
      protocol: TCP
      port: 80
      targetPort: 80
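Once the updated manifests are applied, a quick sanity check that both services now resolve to pod endpoints and the ingress sees its backends:

kubectl get endpoints platforms-clusterip-srv commands-clusterip-srv
kubectl describe ing ingress-srv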
Hope this helps!

Kubernetes nginx ingress controller throws error when trying to obtain endpoints for service

I am trying to set up microservices on Kubernetes on Google Cloud Platform. I've created deployment, ClusterIP and ingress configuration files.
First, after creating a cluster, I run this command to install the nginx ingress:
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I use helm v3.
Then I apply the deployment and ClusterIP configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-production-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: app-production
  template:
    metadata:
      labels:
        component: app-production
    spec:
      containers:
        - name: app-production
          image: eu.gcr.io/my-project/app:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluser-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
My ingress config is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-production-cluster-ip-service
              servicePort: 80
I get this error from the Google Cloud Platform logs within the ingress controller:
Error obtaining Endpoints for Service "default/app-production-cluster-ip-service": no object matching key "default/app-production-cluster-ip-service" in local store
But when I run the kubectl get endpoints command, the output is this:
NAME                               ENDPOINTS                     AGE
app-production-cluser-ip-service   10.60.0.12:80,10.60.1.13:80   17m
I am really not sure what I'm doing wrong.
The service name mentioned in the ingress does not match the actual service name: the ingress references app-production-cluster-ip-service, but the service is named app-production-cluser-ip-service (note the typo). Please recreate the service and check:
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
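After recreating the service under the corrected name, a quick way to confirm the ingress can now resolve its backend:

kubectl get svc app-production-cluster-ip-service
kubectl get endpoints app-production-cluster-ip-service
kubectl describe ingress ingress-service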

Migrate to istio from nginx ingress

I have a simple single-page Golang web application that I'm trying to migrate to Istio.
My prod setup (via nginx ingress):
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goapp
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - mycustomapp.mycustomapp.com
      secretName: go-tls
  rules:
    - host: mycustomapp.mycustomapp.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mycustomapp
              servicePort: 80
And I'm trying to build at least an HTTP configuration for Istio:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goapp
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
    - host: mycustomapp.mycustomapp.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mycustomapp
              servicePort: 80
But I always get a 404 from the Istio load balancer on a clean cluster with only Istio 0.7.1 installed. Samples like bookinfo and httpbin work well.
Application yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mycustomapp
  template:
    metadata:
      labels:
        k8s-app: mycustomapp
    spec:
      containers:
        - name: mycustomapp
          image: xxxx.azurecr.io/mycustomapp:999
          ports:
            - containerPort: 80
              protocol: TCP
      imagePullSecrets:
        - name: xxxx
      serviceAccountName: mycustomapp
---
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    k8s-app: mycustomapp
To get rid of the 404 error in your case, it should be enough to add the correct port name to the service and deployment YAML files, add the Istio sidecar to the deployment YAML file, and then redeploy all changed files.
Perhaps you may need to add the label app: mycustomapp to the service and deployment, but I'm not sure whether it is required or optional.
Here is an example of the service.yaml file with the correct port name (more about port names you can read here):
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: mycustomapp
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  type: ClusterIP
  ports:
    - name: http-80
      port: 80
      targetPort: 80
  selector:
    k8s-app: mycustomapp
Ensure you also have the correct port name in your deployment file.
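For completeness, a sketch of what that could look like in the deployment's container spec; naming the container port is optional, but mirroring the service's port name keeps the manifests consistent:

containers:
  - name: mycustomapp
    image: xxxx.azurecr.io/mycustomapp:999
    ports:
      - name: http-80
        containerPort: 80
        protocol: TCP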
You can add the istio sidecar to the container manually, following these steps:
Download and unpack the latest Istio release suitable for your OS from https://github.com/istio/istio/releases.
Change directory to the Istio package. For example, if the package is istio-0.7:
cd istio-0.7
Create inject config:
kubectl create -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml --dry-run -o=jsonpath='{.data.config}' > inject-config.yaml
Create mesh config:
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
Add istio sidecar container to your deployment:
bin/istioctl kube-inject \
  --injectConfigFile inject-config.yaml \
  --meshConfigFile mesh-config.yaml \
  --filename path/to/original/deployment.yaml \
  --output deployment-injected.yaml
Deploy new deployment:
kubectl apply -f deployment-injected.yaml
If you want to have automatic sidecar injection, follow this manual.
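With the injector webhook from that manual installed, enabling automatic injection is typically just a namespace label plus a pod restart:

kubectl label namespace default istio-injection=enabled
# the deployment recreates the pod, this time with the sidecar injected
kubectl delete pod -l k8s-app=mycustomapp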
You can check if the sidecar has been injected into the deployment:
$ kubectl get deployment mycustomapp -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                    SELECTOR
mycustomapp   1         1         1            1           3h    mycustomapp,istio-proxy   nginx:1.7.9,docker.io/istio/proxy:0.7.1   k8s-app=mycustomapp

Configure Ingress Kubernetes - accessible only on single node

I set up ingress on my Kubernetes cluster running on VMware virtual machines by following everything similar to the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After the creation of the ingress/controllers, I need to get the IP where the nginx-controller runs:
nginx-ingress-rc-kgfmd   1/1   Running   0   21h   172.16.5.5   x.x.x.12
So it usually runs on either x.x.x.12 or x.x.x.13, and then when I do this it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 https://master.federated.fds/coffee
where master.federated.fds is the DNS-resolvable name of the master.
I need to make it work without the help of an IP address, with only the DNS-resolvable name, or at least with any of the node IPs.
E.g. http://node2.federated.fds/coffee; when I curl this I get a 'Connection refused' error.
Updating with specifications
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
    - host: jciamaster.federated.fds
      http:
        paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
nginx ingress controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:0.8.1
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - containerPort: 80
              hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs and not on any other node. Could someone please let me know how to access the application through all node IPs or through a URL like jciamaster.federated.fds?
Thanks,
Update:
I tried to run the nginx controller with a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:0.8.1
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - containerPort: 80
          # Uncomment the lines below to enable extensive logging and/or customization of
          # NGINX configuration with configmaps
          #args:
          #- -v=3
          #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.:30000/coffee it just hangs and does nothing. Is there anything I am doing wrong?
You can expose the nginx controller Pod with a NodePort Service, then you can access it on all nodes.
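One detail worth checking against the update above: the NodePort Service there selects name: nginx-ingress, while the controller pods are labeled app: nginx-ingress, so the Service matches no pods, which would explain the hang. A minimal sketch with the selector aligned to the pod label:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ing-svc
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      nodePort: 30000

With that in place, http://<any-node-ip>:30000/coffee should reach the controller from every node.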