Unable to get ClusterIP service url from minikube - kubernetes

I have created a ClusterIP service according to configuration files below, however I can't seem to get the URL from minikube for that service
k create -f service-cluster-definition.yaml
➜ minikube service myapp-frontend --url
😿 service default/myapp-frontend has no node port
And if I try to add NodePort to the ports section of service-cluster-definition.yaml, it complains with an error that such a key is deprecated.
What am I missing or doing wrong?
service-cluster-definition.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: etl
deployment-definition.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    env: experiment
    type: etl
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        env: experiment
        type: etl
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.1
  replicas: 3
  selector:
    matchLabels:
      type: etl
➜ k get pods --selector="app=myapp,type=etl" -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deployment-59856c4487-2g9c7 1/1 Running 0 45m 172.17.0.9 minikube <none> <none>
myapp-deployment-59856c4487-mb28z 1/1 Running 0 45m 172.17.0.4 minikube <none> <none>
myapp-deployment-59856c4487-sqxqg 1/1 Running 0 45m 172.17.0.8 minikube <none> <none>
➜ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

First let's clear some concepts from Documentation:
ClusterIP: Exposes the Service on a cluster-internal IP.
Choosing this value makes the Service only reachable from within the cluster.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort).
You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort.
Question 1:
I have created a ClusterIP service according to configuration files below, however I can't seem to get the URL from minikube for that service.
Since Minikube is a virtualized environment on a single host, it's easy to forget that the cluster is isolated from the host computer. If you set a service as ClusterIP, Minikube will not expose it externally.
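If you just need to reach the ClusterIP service from the host while testing, a port-forward is usually enough. A minimal sketch, reusing the service name and port from the question (local port 8080 is an arbitrary choice):
kubectl port-forward service/myapp-frontend 8080:80
# then, from another terminal on the host
curl http://localhost:8080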
Question 2:
And if I try to add NodePort to the ports section of service-cluster-definition.yaml, it complains with an error that such a key is deprecated.
You were probably pasting it in the wrong place. You should just replace the field type: ClusterIP with type: NodePort. Here is the corrected form of your yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: etl
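Alternatively, if you don't want to edit and re-apply the file, the type can be switched in place with a patch; a small sketch assuming the same service name:
kubectl patch service myapp-frontend -p '{"spec": {"type": "NodePort"}}'
minikube service myapp-frontend --url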
Reproduction:
user@minikube:~$ kubectl apply -f deployment-definition.yaml
deployment.apps/myapp-deployment created
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-deployment-59856c4487-7dw6x 1/1 Running 0 5m11s
myapp-deployment-59856c4487-th7ff 1/1 Running 0 5m11s
myapp-deployment-59856c4487-zvm5f 1/1 Running 0 5m11s
user@minikube:~$ kubectl apply -f service-cluster-definition.yaml
service/myapp-frontend created
user@minikube:~$ kubectl get service myapp-frontend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-frontend NodePort 10.101.156.113 <none> 80:32420/TCP 3m43s
user@minikube:~$ minikube service list
|-------------|----------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|----------------|-----------------------------|-----|
| default | kubernetes | No node port | |
| default | myapp-frontend | http://192.168.39.219:32420 | |
| kube-system | kube-dns | No node port | |
|-------------|----------------|-----------------------------|-----|
user@minikube:~$ minikube service myapp-frontend --url
http://192.168.39.219:32420
user@minikube:~$ curl http://192.168.39.219:32420
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...{{output suppressed}}...
As you can see, with the service set as NodePort, minikube starts serving the app at MinikubeIP:NodePort, routing the connection to the matching pods.
Note that the nodePort is chosen by default from the 30000-32767 range.
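If you need a predictable port rather than a randomly assigned one, you can pin nodePort explicitly within that range. A sketch of the same service, where 32080 is just an arbitrary value picked for illustration:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 32080   # must stay inside 30000-32767 unless the apiserver range is changed
  selector:
    app: myapp
    type: etl
EOF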
If you have any questions, let me know in the comments.

To access it from inside the cluster, run kubectl get svc to get the cluster IP, or use the service name directly.
To access it from outside the cluster, you can use NodePort as the service type.
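For a quick in-cluster check, you can curl the service by name from a throwaway pod; a sketch assuming the myapp-frontend service from the question (curlimages/curl is just a small image that ships curl):
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- curl -s http://myapp-frontend:80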

Related

linkerd Top feature only shows /healthz requests

I am doing Lab 7.2. Service Mesh and Ingress Controller from the Kubernetes Developer course from the Linux Foundation, and there is a problem I am facing - the Top feature only shows the /healthz requests.
It is supposed to show / requests too, but it does not. I would really like to troubleshoot it, but I have no idea how to even approach it.
More details
Following the course instructions I have:
A k8s cluster deployed on two GCE VMs
linkerd
nginx ingress controller
A simple LoadBalancer service off the httpd image. In effect, this is a NodePort service, since the LoadBalancer is never provisioned. The name is secondapp
A simple ingress object routing to the secondapp service.
I have no idea what information is useful to troubleshoot the issue. Here is some that I can think of:
Setup
Linkerd version
student@master:~$ linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
student@master:~$
nginx ingress controller version
student@master:~$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myingress default 1 2022-09-28 02:09:35.031108611 +0000 UTC deployed ingress-nginx-4.2.5 1.3.1
student@master:~$
The service list
student@master:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d4h
myingress-ingress-nginx-controller LoadBalancer 10.106.67.139 <pending> 80:32144/TCP,443:32610/TCP 62m
myingress-ingress-nginx-controller-admission ClusterIP 10.107.109.117 <none> 443/TCP 62m
nginx ClusterIP 10.105.88.244 <none> 443/TCP 3h42m
registry ClusterIP 10.110.129.139 <none> 5000/TCP 3h42m
secondapp LoadBalancer 10.105.64.242 <pending> 80:32000/TCP 111m
student@master:~$
Verifying that the ingress controller is known to linkerd
student@master:~$ k get ds myingress-ingress-nginx-controller -o json | jq .spec.template.metadata.annotations
{
  "linkerd.io/inject": "ingress"
}
student@master:~$
The secondapp pod
apiVersion: v1
kind: Pod
metadata:
  name: secondapp
  labels:
    example: second
spec:
  containers:
  - name: webserver
    image: httpd
  - name: busy
    image: busybox
    command:
    - sleep
    - "3600"
The secondapp service
student@master:~$ k get svc secondapp -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-09-28T01:21:00Z"
  name: secondapp
  namespace: default
  resourceVersion: "433221"
  uid: 9266f000-5582-4796-ba73-02375f56ce2b
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.105.64.242
  clusterIPs:
  - 10.105.64.242
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    example: second
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
student@master:~$
The ingress object
student@master:~$ k get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-test <none> www.example.com 80 65m
student@master:~$ k get ingress ingress-test -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-09-28T02:20:03Z"
  generation: 1
  name: ingress-test
  namespace: default
  resourceVersion: "438934"
  uid: 1952a816-a3f3-42a4-b842-deb56053b168
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: secondapp
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer: {}
student@master:~$
Testing
secondapp
student@master:~$ curl "$(curl ifconfig.io):$(k get svc secondapp '--template={{(index .spec.ports 0).nodePort}}')"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 340 0 --:--:-- --:--:-- --:--:-- 348
<html><body><h1>It works!</h1></body></html>
student@master:~$
through the ingress controller
student@master:~$ url="$(curl ifconfig.io):$(k get svc myingress-ingress-nginx-controller '--template={{(index .spec.ports 0).nodePort}}')"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 100 15 0 0 319 0 --:--:-- --:--:-- --:--:-- 319
student@master:~$ curl -H "Host: www.example.com" $url
<html><body><h1>It works!</h1></body></html>
student@master:~$
And without the Host header:
student@master:~$ curl $url
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
student@master:~$
And finally the linkerd dashboard Top snapshot:
Where are the GET / requests?
EDIT 1
So on the linkerd Slack someone suggested having a look at https://linkerd.io/2.12/tasks/using-ingress/#nginx, and that made me examine my pods more carefully. It turns out one of the nginx-ingress pods could not start, and it is clearly due to the linkerd injection. Please observe:
Before linkerd
student@master:~$ k get pod
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-gbmbg 1/1 Running 0 19m
myingress-ingress-nginx-controller-qtdhw 1/1 Running 0 3m6s
secondapp 2/2 Running 4 (13m ago) 12h
student@master:~$
After linkerd
student@master:~$ k get ds myingress-ingress-nginx-controller -o yaml | linkerd inject --ingress - | k apply -f -
daemonset "myingress-ingress-nginx-controller" injected
daemonset.apps/myingress-ingress-nginx-controller configured
student@master:~$
And checking the pods:
student@master:~$ k get pod
NAME READY STATUS RESTARTS AGE
myingress-ingress-nginx-controller-gbmbg 1/1 Running 0 40m
myingress-ingress-nginx-controller-xhj5m 1/2 Running 8 (5m59s ago) 17m
secondapp 2/2 Running 4 (34m ago) 12h
student@master:~$
student@master:~$ k describe pod myingress-ingress-nginx-controller-xhj5m |tail
Normal Created 19m kubelet Created container linkerd-proxy
Normal Started 19m kubelet Started container linkerd-proxy
Normal Pulled 18m (x2 over 19m) kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974" already present on machine
Normal Created 18m (x2 over 19m) kubelet Created container controller
Normal Started 18m (x2 over 19m) kubelet Started container controller
Warning FailedPreStopHook 18m kubelet Exec lifecycle hook ([/wait-shutdown]) for Container "controller" in Pod "myingress-ingress-nginx-controller-xhj5m_default(93dd0189-091f-4c56-a197-33991932d66d)" failed - error: command '/wait-shutdown' exited with 137: , message: ""
Warning Unhealthy 18m (x6 over 19m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 502
Normal Killing 18m kubelet Container controller failed liveness probe, will be restarted
Warning Unhealthy 14m (x30 over 19m) kubelet Liveness probe failed: HTTP probe failed with statuscode: 502
Warning BackOff 4m29s (x41 over 14m) kubelet Back-off restarting failed container
student@master:~$
I will process the link I was given on the linkerd slack and update this post with any new findings.
The solution was provided by the user Axenow on the linkerd2 Slack. The problem is that ingress-nginx cannot share a namespace with the services it provides the ingress functionality to. In my case, all of them were in the default namespace.
To quote Axenow:
When you deploy nginx, by default it sends traffic to the pod directly.
To fix it you have to make this configuration:
https://linkerd.io/2.12/tasks/using-ingress/#nginx
To elaborate, one has to update the values.yaml file of the downloaded ingress-nginx helm chart to make sure the following is true:
controller:
  replicaCount: 2
  service:
    externalTrafficPolicy: Cluster
  podAnnotations:
    linkerd.io/inject: enabled
And install the controller in a dedicated namespace:
helm upgrade --install --create-namespace --namespace ingress-nginx -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
(Having uninstalled the previous installation, of course)
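For completeness, a couple of sanity checks I would run after the reinstall, assuming the ingress-nginx namespace from the helm command above; each controller pod should report 2/2 ready once the linkerd-proxy sidecar is injected and healthy:
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
linkerd check --proxy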

A sample containerized application in Kubernetes unable to be shown as targets in Prometheus for scraping metrics

My goal is to reproduce the observations in this blog post: https://medium.com/kubernetes-tutorials/monitoring-your-kubernetes-deployments-with-prometheus-5665eda54045
So far I am able to deploy the example rpc-app application in my cluster; the following shows that the two pods for this application are running:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default rpc-app-deployment-64f456b65-5m7j5 1/1 Running 0 3h23m 10.244.0.15 my-server-ip.company.com <none> <none>
default rpc-app-deployment-64f456b65-9mnfd 1/1 Running 0 3h23m 10.244.0.14 my-server-ip.company.com <none> <none>
The application exposes metrics, as confirmed by:
root@xxxxx:/u01/app/k8s # curl 10.244.0.14:8081/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
rpc_durations_seconds{service="uniform",quantile="0.5"} 0.0001021102787270781
rpc_durations_seconds{service="uniform",quantile="0.9"} 0.00018233200374804932
rpc_durations_seconds{service="uniform",quantile="0.99"} 0.00019828258205623097
rpc_durations_seconds_sum{service="uniform"} 6.817882693745326
rpc_durations_seconds_count{service="uniform"} 68279
My Prometheus pod is running in the same cluster. However, I am unable to see any rpc_* metrics in Prometheus.
monitoring prometheus-deployment-599bbd9457-pslwf 1/1 Running 0 30m 10.244.0.21 my-server-ip.company.com <none> <none>
In the Prometheus GUI:
Clicking Status -> Service Discovery, I get:
Service Discovery
rpc-metrics (0 / 3 active targets)
Clicking Status -> Targets shows nothing (0 targets).
Clicking Status -> Configuration, the content can be seen here: https://gist.github.com/denissun/14835468be3dbef7bc924032767b9d7f
I am really new to Prometheus/Kubernetes monitoring, appreciate your help to troubleshoot this issue.
Update 1 - I created the service:
# cat rpc-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rpc-app-service
  labels:
    app: rpc-app
spec:
  ports:
  - name: web
    port: 8081
    targetPort: 8081
    protocol: TCP
    nodePort: 32325
  selector:
    app: rpc-app
  type: NodePort
# kubectl get service rpc-app-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rpc-app-service NodePort 10.110.204.119 <none> 8081:32325/TCP 9h
Did you create the Kubernetes Service to expose the Deployment?
kubectl create -f rpc-app-service.yaml
The Prometheus configuration watches Service endpoints, not Deployments or Pods.
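A quick way to confirm that point is to check whether the Service actually has endpoints behind it; a sketch assuming the service name from the update above:
kubectl get endpoints rpc-app-service
kubectl describe service rpc-app-service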
Have a look at the Prometheus Operator. It's slightly more involved than running a Prometheus Deployment in your cluster but it represents a state-of-the-art deployment of Prometheus with some elegant abstractions such as PodMonitors and ServiceMonitors.
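If you do go the Operator route, scraping is declared with a ServiceMonitor rather than hand-written scrape configs. A rough sketch matching the labels and port name of rpc-app-service above; the release: prometheus label is an assumption and has to match whatever selector your Prometheus instance uses:
kubectl apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rpc-app
  labels:
    release: prometheus   # assumption: must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: rpc-app        # label set on rpc-app-service
  endpoints:
  - port: web             # the named port of the service
    interval: 30s
EOF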

Why can't I find service discovery in Kubernetes?

I am creating a simple grpc example using Kubernetes in an on-premises environment.
When nodejs makes a request to pythonservice, pythonservice responds with helloworld, which is displayed on a web page.
However, pythonservice's ClusterIP is reachable, but http://pythonservice:8000 is not.
Thinking there might be a problem with coredns, I checked various things and deleted the kube-dns service in kube-system.
And if I look up pythonservice.default.svc.cluster.local with nslookup, I see a different address from the ClusterIP of pythonservice.
Sorry, I'm not good at English.
This is the node.js code:
var setting = 'test';
var express = require('express');
var app = express();
const port = 80;

var PROTO_PATH = __dirname + '/helloworld.proto';
var grpc = require('grpc');
var protoLoader = require('@grpc/proto-loader');
var packageDefinition = protoLoader.loadSync(
    PROTO_PATH,
    {keepCase: true,
     longs: String,
     enums: String,
     defaults: true,
     oneofs: true
    });

// http://pythonservice:8000
// 10.109.228.152:8000
// pythonservice.default.svc.cluster.local:8000
// 218.38.137.28

var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;

function main(callback) {
  var client = new hello_proto.Greeter("http://pythonservice:8000",
                                       grpc.credentials.createInsecure());
  var user;
  if (process.argv.length >= 3) {
    user = process.argv[2];
  } else {
    user = 'world';
  }
  client.sayHello({name: user}, function(err, response) {
    console.log('Greeting:', response.message);
    setting = response.message;
  });
}

var server = app.listen(port, function () {});

app.get('/', function (req, res) {
  main();
  res.send(setting);
  //res.send(ip2);
  //main(function(result){
  //  res.send(result);
  //})
});
This is the yaml file for pythonservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: practice-dp2
spec:
  selector:
    matchLabels:
      app: practice-dp2
  replicas: 1
  template:
    metadata:
      labels:
        app: practice-dp2
    spec:
      hostname: appname
      subdomain: default-subdomain
      containers:
      - name: practice-dp2
        image: taeil777/greeter-server:v1
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: pythonservice
spec:
  type: ClusterIP
  selector:
    app: practice-dp2
  ports:
  - port: 8000
    targetPort: 8000
This is kubectl get all:
root@pusik-server0:/home/tinyos/Desktop/grpc/node# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/practice-dp-55dd4b9d54-v4hhq 1/1 Running 1 68m
pod/practice-dp2-7d4886876-znjtl 1/1 Running 0 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34d
service/nodeservice ClusterIP 10.100.165.53 <none> 80/TCP 68m
service/pythonservice ClusterIP 10.109.228.152 <none> 8000/TCP 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/practice-dp 1/1 1 1 68m
deployment.apps/practice-dp2 1/1 1 1 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/practice-dp-55dd4b9d54 1 1 1 68m
replicaset.apps/practice-dp2-7d4886876 1 1 1 18h
root@pusik-server0:/home/tinyos/Desktop/grpc/python# nslookup pythonservice.default.svc.cluster.local
Server: 127.0.1.1
Address: 127.0.1.1#53
Name: pythonservice.default.svc.cluster.local
Address: 218.38.137.28
1. Answering the first question:
pythonservice's ClusterIP is accessible, but not http://pythonservice:8000.
Please refer to Connecting Applications with Services
The type of service/pythonservice is ClusterIP. If you are interested in exposing the service outside the cluster, please use the service type NodePort or LoadBalancer. According to the attached screens, your application is accessible from within the cluster (ClusterIP service).
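As a sketch, the quickest way to try NodePort here would be to patch the existing service and then hit the node's IP on the port Kubernetes assigns (check the PORT(S) column for the value after the colon):
kubectl patch service pythonservice -p '{"spec": {"type": "NodePort"}}'
kubectl get service pythonservice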
2. Answering the second question:
The output
exec failed: container_linux.go:345: starting container process caused "exec: \"nslookup\": executable file not found in $PATH": unknown command terminated with exit code 126
means that your pod probably does not have tools like nslookup inside, so please run a pod with the tools installed in the same namespace and verify again:
kubectl run ubuntu --rm -it --image ubuntu --restart=Never --command -- bash -c 'apt-get update && apt-get -y install dnsutils && bash'
kubectl exec ubuntu -- nslookup pythonservice.default.svc.cluster.local
-- Update
Please verify the state of all nodes, pods and services, especially in the kube-system namespace:
kubectl get nodes,pods,svc --all-namespaces -o wide
In order to start debugging, please get more information about the particular problem, e.g. for coredns use:
kubectl describe pod <your_coredns_pod> -n kube-system
kubectl logs <your_coredns_pod> -n kube-system
Please refer to:
Debug Services
Debugging DNS Resolution
Hope this helps.

installing nginx-ingress on Kubernetes to run on localhost MacOs - Docker for Mac(Edge)

Update:
I got the NodePort to work: kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d
my-release-nginx-ingress-controller NodePort 10.105.64.135 <none> 80:32706/TCP,443:32253/TCP 10m
my-release-nginx-ingress-default-backend ClusterIP 10.98.230.24 <none> 80/TCP 10m
Do I port-forward then?
Installing Ingress using Helm on Docker for Mac (Edge with Kubernetes)
https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress
Will this work on localhost - and if so, how to access a service?
Steps:
helm install stable/nginx-ingress
Output:
NAME: washing-jackal
LAST DEPLOYED: Thu Jan 18 12:57:40 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
washing-jackal-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
washing-jackal-nginx-ingress-controller LoadBalancer 10.105.122.1 <pending> 80:31494/TCP,443:32136/TCP 1s
washing-jackal-nginx-ingress-default-backend ClusterIP 10.103.189.14 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
washing-jackal-nginx-ingress-controller 1 1 1 0 0s
washing-jackal-nginx-ingress-default-backend 1 1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
washing-jackal-nginx-ingress-controller-5b4d86c948-xxlrt 0/1 ContainerCreating 0 0s
washing-jackal-nginx-ingress-default-backend-57947f94c6-h4sz6 0/1 ContainerCreating 0 0s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w washing-jackal-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
As far as I can tell from the output you posted, everything should be running smoothly in your local kubernetes cluster.
However, your ingress controller is exposed using a LoadBalancer Service as you can tell from the following portion of the output you posted:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
washing-jackal-nginx-ingress-controller LoadBalancer 10.105.122.1 <pending> 80:31494/TCP,443:32136/TCP 1s
Services of type LoadBalancer require support from the underlying infrastructure, and will not work in your local environment.
However, a LoadBalancer service is also a NodePort Service. In fact you can see in the above snippet of output that your ingress controller is listening to the following ports:
80:31494/TCP,443:32136/TCP
This means you should be able to reach your ingress controller on ports 31494 and 32136 at your node's IP address.
You could make your ingress controller listen on more standard ports, such as 80 and 443, but you'll probably have to manually edit the resources created by the helm chart to do so.
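If you would rather not edit things by hand, the chart also lets you pin the node ports at install time. A sketch using the old stable chart's values; the exact value names may differ between chart versions, so verify them against the chart's values.yaml:
helm install stable/nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443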
To see the app running, go to localhost:<nodePort> to reach your service if you're on minikube, or use your computer's name instead of localhost if you're on a different computer in your network. If you're using VMs, use the node VM's IP in the browser instead of localhost.
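For example, with the NodePort values from the update at the top of this question, and assuming Docker for Mac publishes node ports on localhost, that would look like:
curl http://localhost:32706
curl -k https://localhost:32253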

How to publicly expose Traefik ingress controller on Google Cloud Container Engine?

I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.
I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).
I then removed the LoadBalancer, and followed this tutorial: https://docs.traefik.io/user-guide/kubernetes/
So I got a new traefik-ingress-controller deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.
I then create my ingress for my http service, but here comes my issue: I can't find a way to expose that externally.
I want it to be accessible by anybody via an external IP.
What am I missing?
Here is the output of kubectl get --export all:
NAME READY STATUS RESTARTS AGE
po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h
po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h
po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mywebservice 10.51.254.147 <none> 80/TCP 1d
svc/kubernetes 10.51.240.1 <none> 443/TCP 1d
svc/traefik-ingress-controller 10.51.248.165 <nodes> 80:31447/TCP,8080:32481/TCP 25m
svc/traefik-web-ui 10.51.248.65 <none> 80/TCP 3h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mywebservice 2 2 2 2 1d
deploy/traefik-ingress-controller 1 1 1 1 3h
NAME DESIRED CURRENT READY AGE
rs/mywebservice-3818647231 2 2 2 23h
rs/traefik-ingress-controller-957212644 1 1 1 3h
You need to expose the Traefik service. Set the service spec type to LoadBalancer. Try the service file below, which I've used previously:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
  - port: 80
    targetPort: 80
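After applying it, GKE should provision an external IP for the service, which you can watch for and then point DNS (or your tests) at. A small sketch, assuming the manifest above is saved as traefik-service.yaml:
kubectl apply -f traefik-service.yaml
kubectl get service traefik -w   # wait for EXTERNAL-IP to change from <pending> to a real address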