I currently have a problem installing Kong with PostgreSQL and adding a service through REST calls to the Kong admin server.
My install command is below:
helm install kong kong/kong -n kong \
--set ingressController.installCRDs=false \
--set admin.enabled=true \
--set admin.http.enabled=true \
--set postgresql.enabled=true \
--set postgresql.auth.username=kong \
--set postgresql.auth.database=kong \
--set postgresql.service.ports.postgresql=5432 \
--set postgresql.image.tag=13.6.0-debian-10-r52 \
--set migrations.init=false \
--set migrations.preUpgrade=false \
--set migrations.postUpgrade=false
It installs normally, but when I register a service I get the error shown further below.
(Don't worry about the LoadBalancer showing <pending>; it will be changed to NodePort later.)
root@nlu-framework-master-1:~# k get all -n kong
NAME READY STATUS RESTARTS AGE
pod/kong-kong-5b685cd4b9-t95mx 2/2 Running 1 3m22s
pod/kong-postgresql-0 1/1 Running 1 3m22s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong-kong-admin NodePort 10.233.7.63 <none> 8001:31422/TCP,8444:31776/TCP 3m22s
service/kong-kong-proxy LoadBalancer 10.233.0.19 <pending> 80:30511/TCP,443:30358/TCP 3m22s
service/kong-postgresql ClusterIP 10.233.42.35 <none> 5432/TCP 3m22s
service/kong-postgresql-headless ClusterIP None <none> 5432/TCP 3m22s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kong-kong 1/1 1 1 3m22s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kong-kong-5b685cd4b9 1 1 1 3m22s
NAME READY AGE
statefulset.apps/kong-postgresql 1/1 3m22s
My add-service command is below:
curl -X POST http://10.233.7.63:8001/services \
-H 'Content-Type: application/json' \
-d '{"name":"k8s-api","url":"https://192.168.0.50:6443/api/v1/"}'
The add-service call returns the following message:
{"code":12,"message":"cannot create 'services' entities when not using a database","name":"operation unsupported"}
Can anybody help me?
I solved the problem myself.
PostgreSQL 14 or higher seems to trigger a bug here, so install the cetic/postgresql Helm chart instead; that chart ships PostgreSQL 11.5.
https://artifacthub.io/packages/helm/cetic/postgresql
helm install postgres cetic/postgresql -n kong \
--set postgresql.username=kong \
--set postgresql.password=kong \
--set postgresql.database=kong \
--set postgresql.port=5432
Then install bitnami/kong with the external PostgreSQL:
helm install kong -n kong bitnami/kong \
--set postgresql.enabled=false \
--set postgresql.external.host=postgres-postgresql \
--set postgresql.external.user=kong \
--set postgresql.external.password=kong \
--set postgresql.external.database=kong
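To double-check that the chart rendered against the external database, you can look for Kong's KONG_PG_HOST setting on the deployment (a sketch; how exactly the chart injects this value may differ between chart versions; the deployment is named kong, as in the output below):
kubectl -n kong get deployment kong -o yaml | grep -B1 -A3 KONG_PG_HOST
# the value should point at postgres-postgresql, the service created above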
k get all -n kong
NAME READY STATUS RESTARTS AGE
pod/kong-9688f7f55-42cfm 2/2 Running 3 2m15s
pod/kong-9688f7f55-5ntvw 2/2 Running 3 2m15s
pod/postgres-postgresql-0 1/1 Running 0 4m54s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong ClusterIP 10.233.39.160 <none> 80/TCP,443/TCP 2m15s
service/postgres-postgresql ClusterIP 10.233.23.169 <none> 5432/TCP 4m54s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kong 2/2 2 2 2m15s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kong-9688f7f55 2 2 2 2m15s
NAME READY AGE
statefulset.apps/postgres-postgresql 1/1 4m54s
Change the service type to NodePort and add the admin ports to the service:
k edit service/kong -n kong
- name: http-admin
  port: 8001
  protocol: TCP
  targetPort: http-admin
- name: https-admin
  port: 8444
  protocol: TCP
  targetPort: https-admin
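The same change can also be made non-interactively. A sketch using kubectl patch (the default strategic merge keeps the existing proxy ports and merges the new entries by port number; the target port names are the same ones used above):
kubectl -n kong patch service kong --type strategic -p '{
  "spec": {
    "type": "NodePort",
    "ports": [
      {"name": "http-admin", "port": 8001, "protocol": "TCP", "targetPort": "http-admin"},
      {"name": "https-admin", "port": 8444, "protocol": "TCP", "targetPort": "https-admin"}
    ]
  }
}'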
Test adding a service through the newly exposed admin port on the service's cluster IP:
curl -X POST http://10.233.39.160:8001/services \
> -H 'Content-Type: application/json' \
> -d '{"name":"k8s-api","url":"https://192.168.0.50:6443/api/v1/"}'
Check that the service has been added successfully:
curl http://10.233.39.160:8001/services | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 401 100 401 0 0 97k 0 --:--:-- --:--:-- --:--:-- 97k
{
"next": null,
"data": [
{
"id": "5cc7f7ce-3494-44fa-b76c-47795192f541",
"host": "192.168.0.50",
"path": "/api/v1/",
"protocol": "https",
"retries": 5,
"ca_certificates": null,
"write_timeout": 60000,
"port": 6443,
"tags": null,
"name": "k8s-api",
"tls_verify": null,
"client_certificate": null,
"tls_verify_depth": null,
"connect_timeout": 60000,
"enabled": true,
"created_at": 1649513267,
"updated_at": 1649513267,
"read_timeout": 60000
}
]
}
Related
For some reason, I cannot use the Helm chart given here inside my premises. Is there any reference for how we can do this?
Yes, you can deploy JupyterHub without using Helm.
Follow the tutorial on the JupyterHub GitHub installation page.
However, the Helm installation was created to automate a long part of the installation process.
I know you can't use external Helm repositories on your premises, but you can download the package manually and install it.
That will be much easier and faster than creating the whole setup manually.
TL;DR: the only thing different from the documentation will be this command:
helm upgrade --install jhub jupyterhub-0.8.2.tgz \
--namespace jhub \
--version=0.8.2 \
--values config.yaml
Below is my full reproduction of the local installation.
user@minikube:~/jupyterhub$ openssl rand -hex 32
e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee
user@minikube:~/jupyterhub$ cat config.yaml
proxy:
  secretToken: "e278e128a9bff352becf6c0cc9b029f1fe1d5f07ce6e45e6c917c2590654e9ee"
user@minikube:~/jupyterhub$ wget https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
2020-02-10 13:25:31 (60.0 MB/s) - ‘jupyterhub-0.8.2.tgz’ saved [27258/27258]
user@minikube:~/jupyterhub$ helm upgrade --install jhub jupyterhub-0.8.2.tgz \
--namespace jhub \
--version=0.8.2 \
--values config.yaml
Release "jhub" does not exist. Installing it now.
NAME: jhub
LAST DEPLOYED: Mon Feb 10 13:27:20 2020
NAMESPACE: jhub
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing JupyterHub!
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
user@minikube:~/jupyterhub$ k get all -n jhub
NAME READY STATUS RESTARTS AGE
pod/hub-68d9d97765-ffrz6 0/1 Pending 0 19m
pod/proxy-56694f6f87-4cbgj 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hub ClusterIP 10.96.150.230 <none> 8081/TCP 19m
service/proxy-api ClusterIP 10.96.115.44 <none> 8001/TCP 19m
service/proxy-public LoadBalancer 10.96.113.131 <pending> 80:31831/TCP,443:31970/TCP 19m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hub 0/1 1 0 19m
deployment.apps/proxy 1/1 1 1 19m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hub-68d9d97765 1 1 0 19m
replicaset.apps/proxy-56694f6f87 1 1 1 19m
NAME READY AGE
statefulset.apps/user-placeholder 0/0 19m
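In the output above the hub pod is still Pending; if it stays that way, describing it usually shows the reason (commonly an unbound PersistentVolumeClaim on clusters without a default StorageClass). A sketch, assuming the chart's usual component=hub label:
kubectl --namespace=jhub describe pod -l component=hub
# the Events section at the bottom explains why the pod cannot be scheduled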
If you have any problem in the process, just let me know.
How to change the default nodeport range on Mac (docker-desktop)?
I'd like to change the default NodePort range on Mac. Is it possible? I'm glad to have found this article: http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range. Since I can't find /etc/kubernetes/manifests/kube-apiserver.yaml in my environment, I tried to achieve what I want by running sudo kubectl edit pod kube-apiserver-docker-desktop --namespace=kube-system and adding the parameter --service-node-port-range=443-22000. But when I tried to save it, I got the following error:
# pods "kube-apiserver-docker-desktop" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
(I get the same error even if I don't touch port 443.) Can someone please share his/her thoughts or experience? Thanks!
Append:
skwok-mbp:kubernetes skwok$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
docker compose 1/1 1 1 15d
docker compose-api 1/1 1 1 15d
ingress-nginx nginx-ingress-controller 1/1 1 1 37m
kube-system coredns 2/2 2 2 15d
skwok-mbp:kubernetes skwok$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default fortune-configmap-volume 2/2 Running 4 14d
default kubia-2qzmm 1/1 Running 2 15d
docker compose-6c67d745f6-qqmpb 1/1 Running 2 15d
docker compose-api-57ff65b8c7-g8884 1/1 Running 4 15d
ingress-nginx nginx-ingress-controller-756f65dd87-sq6lt 1/1 Running 0 37m
kube-system coredns-fb8b8dccf-jn8cm 1/1 Running 6 15d
kube-system coredns-fb8b8dccf-t6qhs 1/1 Running 6 15d
kube-system etcd-docker-desktop 1/1 Running 2 15d
kube-system kube-apiserver-docker-desktop 1/1 Running 2 15d
kube-system kube-controller-manager-docker-desktop 1/1 Running 29 15d
kube-system kube-proxy-6nzqx 1/1 Running 2 15d
kube-system kube-scheduler-docker-desktop 1/1 Running 30 15d
Update: The example from the documentation shows a way to adjust apiserver parameters during Minikube start:
minikube start --extra-config=apiserver.service-node-port-range=1-65535
--extra-config: A set of key=value pairs that describe configuration that may be passed to different components. The key should be '.' separated, and the first part before the dot is the component to apply the configuration to. Valid components are: kubelet, apiserver, controller-manager, etcd, proxy, scheduler. link
The list of available options can be found in the CLI documentation.
Another way to change kube-apiserver parameters for Docker-for-desktop on Mac:
Log in to the Docker VM:
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
# (you can also use a privileged container for the same purpose)
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
#or
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
# as suggested here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/5
# in case of minikube use the following command:
$ minikube ssh
Edit kube-apiserver.yaml (it is one of the static pods; they are created by the kubelet from the files in /etc/kubernetes/manifests):
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
# for minikube
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following line to the pod spec:
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.65.3
    ...
    - --service-node-port-range=443-22000 # <-- add this line
    ...
Save and exit. The kube-apiserver pod will be restarted with the new parameters.
Exit the Docker VM (for screen: Ctrl-a, k; for the container: Ctrl-d).
Check the results:
$ kubectl get pod kube-apiserver-docker-desktop -o yaml -n kube-system | less
Create a simple deployment and expose it with a service:
$ kubectl run nginx1 --image=nginx --replicas=2
$ kubectl expose deployment nginx1 --port 80 --type=NodePort
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
nginx1 NodePort 10.99.173.234 <none> 80:14966/TCP 5s
As you can see, the NodePort was chosen from the new range.
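If you want a specific port from the new range rather than a random one, you can set nodePort explicitly on the service. A sketch (strategic merge matches the existing entry by port 80):
$ kubectl patch svc nginx1 -p '{"spec":{"ports":[{"port":80,"nodePort":443}]}}'
$ kubectl get svc nginx1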
There are other ways to expose your container: HostNetwork, HostPort, MetalLB
You need to add the correct security context for that purpose; check out how the ingress addon in minikube works, for example:
...
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
...
securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - ALL
I have followed the steps described in this link.
When I am at the Helm install section (Step 2) and try to run:
helm install --name web ./demo
I get the following error:
Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
Expected Result: It should install and deploy the chart.
This issue relates to your Kubernetes configuration and not to Helm itself.
I assume you are also not able to see output from other Helm commands like helm list, etc.
Lots of people have this issue because of a misconfigured CNI (typically Calico), and sometimes it happens because the kubeconfig is missing.
Solutions are:
migrate from Calico to Flannel
change the --pod-network-cidr for Calico from 192.168.0.0/16 to 172.16.0.0/16 when using kubeadm to init the cluster, e.g. kubeadm init --pod-network-cidr=172.16.0.0/16
More related info can be found in the similar GitHub Helm issue.
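Before reinstalling anything, two quick sanity checks help narrow down which of those cases you are in (a sketch):
kubectl cluster-info                     # fails immediately if the kubeconfig is missing or wrong
kubectl -n kube-system get pods -o wide | grep -Ei 'calico|flannel|coredns'
# crash-looping CNI or coredns pods usually explain the i/o timeout towards 10.96.0.1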
Simple single-machine example:
1) kubeadm init --pod-network-cidr=172.16.0.0/16
2) kubectl taint nodes --all node-role.kubernetes.io/master-
3) kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
4) install Helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
5) create and install the chart
$ helm create demo
Creating demo
$ helm install --name web ./demo
NAME: web
LAST DEPLOYED: Tue Jul 16 10:44:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-demo 0/1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
web-demo-6986c66d7d-vctql 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-demo ClusterIP 10.106.140.176 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=demo,app.kubernetes.io/instance=web" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
6) result
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-demo-6986c66d7d-vctql 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
service/web-demo ClusterIP 10.106.140.176 <none> 80/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-demo 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-demo-6986c66d7d 1 1 1 75s
You can find more info on how to configure Helm and Kubernetes itself in the Get Started With Kubernetes Using Minikube article.
I've been toying around with Kubernetes and have run into an issue. The core of my problem is that while I can access the services on the master node by curling localhost, attempting to access the same services via the public IP and port from another machine (or a web browser) hangs forever.
I've configured the cluster using a Terraform script (it runs as root):
#!/bin/bash -v
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>startup_log.out 2>&1
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
cat << EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker-ce
apt-mark hold docker-ce
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
apt-mark hold kubelet kubeadm kubectl
kubeadm init --token=${k8stoken} --pod-network-cidr=10.244.0.0/16
mkdir -p /home/ubuntu/.kube
cp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
chown -R $(id -u ubuntu):$(id -g ubuntu) /home/ubuntu/.kube/
usermod -aG docker ubuntu
echo "net.bridge.bridge-nf-call-iptables=1" | tee -a /etc/sysctl.conf
sysctl -p
runuser -l ubuntu -c '\
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
'
After that I SSH to the master node (which works fine) and run:
$ git clone https://github.com/linuxacademy/robot-shop.git
$ kubectl create namespace robot-shop
$ kubectl -n robot-shop create -f ~/robot-shop/K8s/descriptors/
ubuntu@ip-10-0-100-167:~$ kubectl -n robot-shop get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cart ClusterIP 10.105.152.135 <none> 8080/TCP 15s
catalogue ClusterIP 10.97.111.197 <none> 8080/TCP 15s
dispatch ClusterIP None <none> 55555/TCP 15s
mongodb ClusterIP 10.107.178.183 <none> 27017/TCP 15s
mysql ClusterIP 10.110.254.52 <none> 3306/TCP 15s
payment ClusterIP 10.99.195.138 <none> 8080/TCP 15s
rabbitmq ClusterIP 10.99.70.232 <none> 5672/TCP,15672/TCP 15s
ratings ClusterIP 10.98.80.21 <none> 80/TCP 15s
redis ClusterIP 10.101.232.84 <none> 6379/TCP 15s
shipping ClusterIP 10.106.246.97 <none> 8080/TCP 15s
user ClusterIP 10.109.120.146 <none> 8080/TCP 15s
web NodePort 10.97.162.113 <none> 8080:30080/TCP 15s
Services appear to start up fine, but I can't access anything on this host externally. I've tried following this debugging guide and identified a discrepancy: when I type
$ iptables-save | grep
I get nothing. But the doc doesn't make a recommendation for a fix. Has anyone seen this issue before? I'm concerned my config file has a problem with it, but I can't figure out what!
As @Ashworth mentioned in the comments, the issue was solved after the appropriate service port was enabled in the security group for the relevant master node instance.
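Once the rule is in place, the quickest check is to hit the NodePort (30080 for the web service above) from outside the cluster (a sketch; substitute the master node's public IP):
curl -I http://<master-public-ip>:30080/
# any HTTP response, even an error page, proves the port is reachable; a hang still points at the security group or a host firewall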
I've installed Istio 1.1 RC on a fresh GKE cluster using Helm and enabled mTLS (some options, such as Grafana and Kiali, omitted):
helm template istio/install/kubernetes/helm/istio \
--set global.mtls.enabled=true \
--set global.controlPlaneSecurityEnabled=true \
--set istio_cni.enabled=true \
--set istio-cni.excludeNamespaces={"istio-system"} \
--name istio \
--namespace istio-system >> istio.yaml
kubectl apply -f istio.yaml
Next, I installed the Bookinfo example app like this:
kubectl label namespace default istio-injection=enabled
kubectl apply -f istio/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f istio/samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl apply -f istio/samples/bookinfo/networking/destination-rule-all-mtls.yaml
Then I've gone about testing by following the examples at: https://istio.io/docs/tasks/security/mutual-tls/
My results show the config is incorrect, but the verification guide above doesn't provide any hints about how to fix or diagnose issues. Here's what I see:
istio/bin/istioctl authn tls-check productpage.default.svc.cluster.local
Stderr when execute [/usr/local/bin/pilot-discovery request GET /debug/authenticationz ]: gc 1 #0.015s 6%: 0.016+1.4+1.0 ms clock, 0.064+0.31/0.45/1.6+4.0 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 2 #0.024s 9%: 0.007+1.4+1.0 ms clock, 0.029+0.15/1.1/1.1+4.3 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
productpage.default.svc.cluster.local:9080 OK mTLS mTLS default/ default/istio-system
This appears to show that mTLS is OK, and the previous checks all pass as well, like checking that the CA chain is present, etc. The above check passes for all the Bookinfo components.
However, the following checks show an issue:
1: Confirm that plain-text requests fail, as TLS is required to talk to productpage, with the following command:
kubectl exec $(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://productpage:9080/productpage -o /dev/null -s -w '%{http_code}\n'
200 <== Error. Should fail.
2: Confirm TLS requests without client certificate also fail:
kubectl exec $(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://productpage:9080/productpage -o /dev/null -s -w '%{http_code}\n' -k
000 <=== Correct behaviour
command terminated with exit code 35
3: Confirm TLS request with client certificate succeed:
kubectl exec $(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://productpage:9080/productpage -o /dev/null -s -w '%{http_code}\n' --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k
000 <=== Incorrect. Should succeed.
command terminated with exit code 35
What else can I do to debug my installation? I've followed the installation process quite carefully. Here's my cluster info:
Kubernetes master is running at https://<omitted>
calico-typha is running at https://<omitted>/api/v1/namespaces/kube-system/services/calico-typha:calico-typha/proxy
GLBCDefaultBackend is running at https://<omitted>/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://<omitted>/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://<omitted>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://<omitted>/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Kubernetes version: 1.11.7-gke.4
I guess I'm after either a more comprehensive guide or some specific things I can check.
Edit: Additional info:
Pod status:
default ns:
NAME READY STATUS RESTARTS AGE
details-v1-68868454f5-l7srt 2/2 Running 0 3h
productpage-v1-5cb458d74f-lmf7x 2/2 Running 0 2h
ratings-v1-76f4c9765f-ttstt 2/2 Running 0 2h
reviews-v1-56f6855586-qszpm 2/2 Running 0 2h
reviews-v2-65c9df47f8-ztrss 2/2 Running 0 3h
reviews-v3-6cf47594fd-hq6pc 2/2 Running 0 2h
istio-system ns:
NAME READY STATUS RESTARTS AGE
grafana-7b46bf6b7c-2qzcv 1/1 Running 0 3h
istio-citadel-5bf5488468-wkmvf 1/1 Running 0 3h
istio-cleanup-secrets-release-1.1-latest-daily-zmw7s 0/1 Completed 0 3h
istio-egressgateway-cf8d6dc69-fdmw2 1/1 Running 0 3h
istio-galley-5bcd455cbb-7wjkl 1/1 Running 0 3h
istio-grafana-post-install-release-1.1-latest-daily-vc2ff 0/1 Completed 0 3h
istio-ingressgateway-68b6767bcb-65h2d 1/1 Running 0 3h
istio-pilot-856849455f-29nvq 2/2 Running 0 2h
istio-policy-5568587488-7lkdr 2/2 Running 2 3h
istio-sidecar-injector-744f68bf5f-h22sp 1/1 Running 0 3h
istio-telemetry-7ffd6f6d4-tsmxv 2/2 Running 2 3h
istio-tracing-759fbf95b7-lc7fd 1/1 Running 0 3h
kiali-5d68f4c676-qrxfd 1/1 Running 0 3h
prometheus-c4b6997b-6d5k9 1/1 Running 0 3h
Example destinationrule:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: "2019-02-21T15:15:09Z"
  generation: 1
  name: productpage
  namespace: default
spec:
  host: productpage
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
If you are using Istio 1.1 RC, you should be looking at the docs at https://preliminary.istio.io/ instead of https://istio.io/. The preliminary.istio.io site is always the working copy of the docs, corresponding to the next Istio release to be published (currently 1.1).
That said, those docs are currently changing a lot day-to-day as they are being cleaned up and corrected during final testing before 1.1 is released, probably in the next couple of weeks.
A possible explanation for the plain-text HTTP request returning 200 in your test is that you may be running in permissive mode.
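If permissive mode is indeed the cause, you can check whether a mesh-wide authentication policy exists and what mode it uses, and switch it to strict mTLS if that is what you want. A sketch based on the Istio 1.1 authentication API (the MeshPolicy below is the documented way to require mTLS mesh-wide; apply it only if that is the intended setup):
kubectl get meshpolicy default -o yaml    # look for "mode: PERMISSIVE" under peers/mtls
kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
EOF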