I have a VM on Hyper-V running Kubernetes. I set up the istio-ingressgateway as shown below.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: tech-ingressgateway
namespace: tech-ingress-ns
spec:
selector:
istio: ingressgateway # default istio ingressgateway defined in istio-system namespace
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: PASSTHROUGH
hosts:
- '*'
I open two ports: one for HTTP and one for HTTPS. I also have two backend services, whose VirtualService definitions are:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: activemq-vs
namespace: tech-ingress-ns
spec:
hosts:
- '*'
gateways:
- tech-ingressgateway
http:
- match:
- uri:
prefix: '/activemq'
route:
- destination:
host: activemq-svc.tech-ns.svc.cluster.local
port:
number: 8161
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: userprofile-vs
namespace: tech-ingress-ns
spec:
hosts:
- '*'
gateways:
- tech-ingressgateway
http:
- match:
- uri:
prefix: '/userprofile'
route:
- destination:
host: userprofile-svc.business-ns.svc.cluster.local
port:
number: 7002
The istio-ingressgateway responds successfully when I curl 192.168.x.x, but when I try to hit the activemq backend service it fails, and I don't know why. I face the same issue when accessing the userprofile service. Here are my curl commands:
curl -i 192.168.x.x/activemq
curl -i 192.168.x.x/userprofile
curl -i 192.168.x.x/userprofile/getUserDetails
The userProfile service has three endpoints: getUserDetails, verifyNaturalOtp and updateProfile.
EDIT
Here are the Deployment and Service manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
name: userprofile-deployment
namespace: business-ns
labels:
app: userprofile
spec:
replicas: 1
selector:
matchLabels:
app: userprofile
template:
metadata:
labels:
app: userprofile
spec:
containers:
- env:
- name: APPLICATION_PORT
valueFrom:
configMapKeyRef:
name: configmap-cf
key: userProfilePort
- name: APPLICATION_NAME
valueFrom:
configMapKeyRef:
name: configmap-cf
key: userProfileName
- name: IAM_URL
valueFrom:
configMapKeyRef:
name: configmap-cf
key: userProfileIam
- name: GRAPHQL_DAO_LAYER_URL
valueFrom:
configMapKeyRef:
name: configmap-cf
key: userProfileGraphqlDao
- name: EVENT_PUBLISHER_URL
valueFrom:
configMapKeyRef:
name: configmap-cf
key: userProfilePublisher
name: userprofile
image: 'abc/userprofilesvc:tag1'
imagePullPolicy: IfNotPresent
resources: {}
ports:
- containerPort: 7002
---
apiVersion: v1
kind: Service
metadata:
name: userprofile-svc
namespace: business-ns
labels:
app: userprofile
spec:
selector:
app: userprofile
ports:
- name: http
protocol: TCP
port: 7002
targetPort: 7002
EDIT: The output of kubectl describe pod activemq-deployment-5fb57d5f7c-2v9x5 -n tech-ns is:
Name: activemq-deployment-5fb57d5f7c-2v9x5
Namespace: tech-ns
Priority: 0
Node: kworker-2/192.168.18.223
Start Time: Fri, 27 May 2022 06:11:50 +0000
Labels: app=activemq
pod-template-hash=5fb57d5f7c
Annotations: cni.projectcalico.org/containerID: e2a5a843ee02655ed3cfc4fa538abcccc3dae34590cc61dab341465aa78565fb
cni.projectcalico.org/podIP: 10.233.107.107/32
cni.projectcalico.org/podIPs: 10.233.107.107/32
kubesphere.io/restartedAt: 2022-05-27T06:11:42.602Z
Status: Running
IP: 10.233.107.107
IPs:
IP: 10.233.107.107
Controlled By: ReplicaSet/activemq-deployment-5fb57d5f7c
Containers:
activemq:
Container ID: docker://94a1f07489f6d2db51d2fe3bfce0ed3654ea7150eb17223696363c1b7f355cd7
Image: vialogic/activemq:cluster1.0
Image ID: docker-pullable://vialogic/activemq@sha256:f3954187bf1ead0a0bc91ec5b1c654fb364bd2efaa5e84e07909d0a1ec062743
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 01 Jun 2022 06:51:19 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 30 May 2022 08:21:52 +0000
Finished: Wed, 01 Jun 2022 06:21:43 +0000
Ready: True
Restart Count: 20
Environment: <none>
Mounts:
/home/alpine/apache-activemq-5.16.4/conf/jetty-realm.properties from active-creds (rw,path="jetty-realm.properties")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvbwv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
active-creds:
Type: Secret (a volume populated by a Secret)
SecretName: creds
Optional: false
kube-api-access-nvbwv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 55m (x5 over 58m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 51m kubelet Container image "vialogic/activemq:cluster1.0" already present on machine
Normal Created 51m kubelet Created container activemq
Normal Started 51m kubelet Started container activemq
As per the pod description shared, neither the istio-init nor the istio-proxy container is injected into the application pod, so routing may not be happening from the gateway to the application pod. The namespace (or deployment) has to be Istio-enabled so that the Istio sidecar is injected at pod creation time; after that, routing to the application will work.
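A quick way to confirm this (just a sketch; the namespace names are taken from the manifests above) is to check the injection label on the namespaces and the READY column of the pods, which shows 2/2 once the istio-proxy sidecar is present:
# Are the application namespaces labelled for injection?
kubectl get namespace tech-ns business-ns --show-labels
# Pods with the sidecar injected report 2/2 in the READY column
kubectl get pods -n tech-ns
kubectl get pods -n business-ns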
Your Istio sidecar is not injected into the pod. For a pod that has been injected, you would see output like this in the Labels and Annotations sections:
pod-template-hash=546859454c
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=userprofile
service.istio.io/canonical-revision=latest
Annotations: kubectl.kubernetes.io/default-container: svc
kubectl.kubernetes.io/default-logs-container: svc
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
sidecar.istio.io/status:
{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
You can enforce the injection at the namespace level with the following command:
kubectl label namespace <namespace> istio-injection=enabled --overwrite
Since the injection takes place at the pod's creation, you will need to kill the running pods after the command is issued.
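For example (a rough sketch using the namespace and deployment names from the manifests above), you could label both application namespaces and restart the deployments so the pods are recreated with the sidecar:
kubectl label namespace tech-ns istio-injection=enabled --overwrite
kubectl label namespace business-ns istio-injection=enabled --overwrite
kubectl rollout restart deployment activemq-deployment -n tech-ns
kubectl rollout restart deployment userprofile-deployment -n business-ns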
Check out the Sidecar Injection Problems document as a reference.
Related
I'm learning Kubernetes and below are the yaml configuration files:
mongo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-config
data:
mongo-url: mongo-service
mongo-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
type: Opaque
data:
mongo-user: bW9uZ291c2Vy
mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongodb
image: mongo:5.0
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-user
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
spec:
selector:
app: webapp
ports:
- protocol: TCP
port: 27017
targetPort: 27017
webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp-deployment
labels:
app: webapp
spec:
replicas: 1
selector:
matchLabels:
app: webapp
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: nanajanashia/k8s-demo-app:v1.0
ports:
- containerPort: 3000
env:
- name: USER_NAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-user
- name: USER_PWD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
- name: DB_URL
valueFrom:
secretKeyRef:
name: mongo-config
key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
name: webapp-service
spec:
type: NodePort
selector:
app: webapp
ports:
- protocol: TCP
port: 3000
targetPort: 3000
nodePort: 30100
After starting a test webapp I came across the error below:
NAME READY STATUS RESTARTS AGE
mongo-deployment-7875498c-psn56 1/1 Running 0 100m
my-go-app-664f7475d4-jgnsk 1/1 Running 1 (7d20h ago) 7d20h
webapp-deployment-7dc5b857df-6bx4s 0/1 CreateContainerConfigError 0 29m
If I try to get more details about the CreateContainerConfigError, I get:
~/K8s/K8s-demo$ kubectl describe pod webapp-deployment-7dc5b857df-6bx4s
Name: webapp-deployment-7dc5b857df-6bx4s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Thu, 06 Jan 2022 12:20:02 +0200
Labels: app=webapp
pod-template-hash=7dc5b857df
Annotations: <none>
Status: Pending
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/webapp-deployment-7dc5b857df
Containers:
webapp:
Container ID:
Image: nanajanashia/k8s-demo-app:v1.0
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CreateContainerConfigError
Ready: False
Restart Count: 0
Environment:
USER_NAME: <set to the key 'mongo-user' in secret 'mongo-secret'> Optional: false
USER_PWD: <set to the key 'mongo-password' in secret 'mongo-secret'> Optional: false
DB_URL: <set to the key 'mongo-url' in secret 'mongo-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkflh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wkflh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/webapp-deployment-7dc5b857df-6bx4s to minikube
Warning Failed 28m (x12 over 30m) kubelet Error: secret "mongo-config" not found
Normal Pulled 27s (x142 over 30m) kubelet Container image "nanajanashia/k8s-demo-app:v1.0" already present on machine
It seems that the configuration issue is:
Warning Failed 28m (x12 over 30m) kubelet Error: secret "mongo-config" not found
I don't have a Secret named "mongo-config", but there is a ConfigMap named "mongo-config":
>:~/K8s/K8s-demo$ kubectl get secret
NAME TYPE DATA AGE
default-token-gs25h kubernetes.io/service-account-token 3 5m57s
mongo-secret Opaque 2 5m48s
>:~/K8s/K8s-demo$ kubectl get configmap
NAME DATA AGE
kube-root-ca.crt 1 6m4s
mongo-config 1 6m4s
Could you please advise what the issue is here?
You have secretKeyRef in:
- name: DB_URL
valueFrom:
secretKeyRef:
name: mongo-config
key: mongo-url
You have to use configMapKeyRef:
- name: DB_URL
valueFrom:
configMapKeyRef:
name: mongo-config
key: mongo-url
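After applying that change, the Deployment rolls out a new pod and the CreateContainerConfigError should clear; for example (assuming the file is still named webapp.yaml as in the question):
kubectl apply -f webapp.yaml
kubectl get pods -w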
Basically, your ConfigMap is what's called a ConfigMap created from literals. The only way to use that ConfigMap data as your environment variables is:
- name: webapp
image: nanajanashia/k8s-demo-app:v1.0
envFrom:
- configMapRef:
name: mongo-config
Modify your webapp Deployment YAML this way. Also modify the ConfigMap itself: use DB_URL instead of mongo-url as the key.
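For reference, a minimal sketch of what the modified ConfigMap could look like under that approach (the key is renamed so that envFrom exposes it directly as the DB_URL variable):
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  DB_URL: mongo-service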
The mongo secret has to be in the same namespace:
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
namespace: mongodb
labels:
app.kubernetes.io/component: mongodb
type: Opaque
data:
mongodb-root-password: ""
mongodb-passwords: ""
mongodb-metrics-password: ""
mongodb-replica-set-key: ""
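As a side note (a small sketch, with a placeholder password), the values under data must be base64-encoded, so they would be filled in along these lines:
# Encode a value by hand for the data section
echo -n 'changeme' | base64
# Or let kubectl do the encoding for you
kubectl create secret generic mongo-secret -n mongodb --from-literal=mongodb-root-password='changeme' --dry-run=client -o yaml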
I'm currently setting up a web app on Google Cloud Platform, but I ran into an issue while deploying with the GKE ingress controller.
I'm unable to access the app using my sub domain name. The page is showing this message:
502 Server Error
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
Despite having configured a health check, the ingress still does not seem to respond properly.
Meanwhile, my SSL certificate is working properly for my subdomain name.
This is my ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontend-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: "myapp-static-addr"
networking.gke.io/v1beta1.FrontendConfig: "frontend-ingress-config"
spec:
rules:
- host: test.myapp.com
http:
paths:
- path: /
backend:
serviceName: frontend-service
servicePort: 3333
tls:
- secretName: stage-ssl
and this is my service:
apiVersion: v1
kind: Service
metadata:
name: frontend-service
namespace: default
annotations:
cloud.google.com/backend-config: '{"default": "frontend-config"}'
spec:
type: NodePort
selector:
app: frontend-server
ports:
- port: 3333
protocol: TCP
targetPort: 3000
and finally my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-deployment
labels:
app: frontend-server
spec:
replicas: 1
selector:
matchLabels:
app: frontend-server
template:
metadata:
labels:
app: frontend-server
spec:
containers:
- name: client-ssr
image: eu.gcr.io/myapp-test/client-ssr
ports:
- name: client-port
containerPort: 3000
resources:
requests:
cpu: "0.35"
limits:
cpu: "0.55"
env:
- name: CONFIG_ENV
value: "STAGE"
imagePullPolicy: Always
volumeMounts:
- name: certificate
mountPath: "/etc/certificate"
readOnly: true
volumes:
- name: certificate
secret:
secretName: stage-ssl
And this is the output of the ingress describe:
Name: frontend-ingress
Namespace: default
Address: 34.106.6.15
Default backend: default-http-backend:80 (10.26.31.88:8080)
TLS:
stage-ssl terminates
Rules:
Host Path Backends
---- ---- --------
test.myapp.com
/ frontend-service:3333 (10.22.46.111:3000)
Annotations:
kubernetes.io/ingress.global-static-ip-name: myapp-static-addr
ingress.kubernetes.io/backends: {"k8s-be-32171--41df3ab30d90ff92":"HEALTHY","k8s1-41df3ab3-default-frontend-service-3333-5186e808":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/redirect-url-map: k8s2-rm-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/ssl-cert: k8s2-cr-opm63ww1-f42czv69pq6f2emd-5bd4c7395be5bd4e
ingress.kubernetes.io/https-target-proxy: k8s2-ts-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/target-proxy: k8s2-tp-opm63ww1-default-frontend-ingress-8gn6ll7p
ingress.kubernetes.io/url-map: k8s2-um-opm63ww1-default-frontend-ingress-8gn6ll7p
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.global-static-ip-name":"myapp-static-addr","networking.gke.io/v1beta1.FrontendConfig":"frontend-ingress-config"},"name":"frontend-ingress","namespace":"default"},"spec":{"rules":[{"host":"test.myapp.com","http":{"paths":[{"backend":{"serviceName":"frontend-service","servicePort":3333},"path":"/"}]}}],"tls":[{"secretName":"stage-ssl"}]}}
networking.gke.io/v1beta1.FrontendConfig: frontend-ingress-config
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 18m loadbalancer-controller UrlMap "k8s2-um-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 18m loadbalancer-controller TargetProxy "k8s2-tp-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 18m loadbalancer-controller ForwardingRule "k8s2-fr-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 18m loadbalancer-controller TargetProxy "k8s2-ts-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal IPChanged 18m loadbalancer-controller IP is now 34.106.6.15
Normal Sync 18m loadbalancer-controller ForwardingRule "k8s2-fs-opm63ww1-default-frontend-ingress-8gn6ll7p" created
Normal Sync 16m loadbalancer-controller UrlMap "k8s2-um-opm63ww1-default-frontend-ingress-8gn6ll7p" updated
Normal Sync 16m loadbalancer-controller TargetProxy "k8s2-tp-opm63ww1-default-frontend-ingress-8gn6ll7p" updated
Normal Sync 2m18s (x8 over 19m) loadbalancer-controller Scheduled for sync
How can I make my web app work?
Thank you in advance.
I believe a GCE Ingress must have a defaultBackend; see my question "Is defaultBackend mandatory for Ingress (gce)?"
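For reference, a sketch of what that could look like with the networking.k8s.io/v1beta1 API used in the question (in v1beta1 the field is spec.backend; in the newer networking.k8s.io/v1 it is spec.defaultBackend). Here the existing frontend-service simply doubles as the default backend:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myapp-static-addr"
    networking.gke.io/v1beta1.FrontendConfig: "frontend-ingress-config"
spec:
  backend:                      # default backend for traffic that matches no rule
    serviceName: frontend-service
    servicePort: 3333
  rules:
  - host: test.myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 3333
  tls:
  - secretName: stage-ssl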
I have this working for one site I already set up, and now I am just trying to replicate the exact same thing for a different site/domain in another namespace.
So staging.correct.com is my working HTTPS domain,
and staging.example.com is my non-working HTTPS domain (HTTP works, just not HTTPS).
When I run the following it shows 3 certificates: the working one for correct, and then 2 for example.com when there should only be one:
kubectl get -A certificate
correct staging-correct-com True staging-correct-com-tls 10d
example staging-example-com False staging-example-com-tls 16h
example staging-example-website-com False staging-example-com-tls 17h
When I do:
kubectl get -A certificaterequests
It shows 2 certificate requests for example:
example staging-example-com-nl46v False 15h
example staging-example-website-com-plhqb False 15h
When I do:
kubectl get ingressroute -A
NAMESPACE NAME AGE
correct correct-ingress-route 10d
correct correct-secure-ingress-route 6d22h
kube-system traefik-dashboard 26d
example example-website-ingress-route 15h
example example-website-secure-ingress-route 15h
routing dashboard 29d
routing traefik-dashboard 6d21h
When I do:
kubectl get secrets -A (just showing the relevant ones)
correct default-token-bphcm kubernetes.io/service-account-token
correct staging-correct-com-tls kubernetes.io/tls
example default-token-wx9tx kubernetes.io/service-account-token
example staging-example-com-tls Opaque
example staging-example-com-wf224 Opaque
example staging-example-website-com-rzrvw Opaque
Logs from the cert-manager pod:
1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="staging.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-bqjsj" "related_resource_namespace"="example" "related_resource_version"="v1beta1" "resource_kind"="Challenge" "resource_name"="staging-example-com-ltjl6-1661100417-771202110" "resource_namespace"="example" "resource_version"="v1" "type"="HTTP-01"
When I do:
kubectl get challenge -A
example staging-example-com-nl46v-1661100417-2848337980 staging.example.com 15h
example staging-example-website-com-plhqb-26564845-3987262508 pending staging.example.com
When I do: kubectl get order -A
NAMESPACE NAME STATE AGE
example staging-example-com-nl46v-1661100417 pending 17h
example staging-example-website-com-plhqb-26564845 pending 17h
My yml files:
My ingress route:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
namespace: example
name: example-website-ingress-route
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: example-issuer-staging
traefik.ingress.kubernetes.io/router.entrypoints: web
traefik.frontend.redirect.entryPoint: https
spec:
entryPoints:
- web
routes:
- match: Host(`staging.example.com`)
middlewares:
- name: https-only
kind: Rule
services:
- name: example-website
namespace: example
port: 80
my issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: example-issuer-staging
namespace: example
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: example@example.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: staging-example-com-tls
# Enable the HTTP-01 challenge provider
solvers:
# An empty 'selector' means that this solver matches all domains
- http01:
ingress:
class: traefik
my middleware:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: https-only
namespace: example
spec:
redirectScheme:
scheme: https
permanent: true
my secure ingress route:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
namespace: example
name: example-website-secure-ingress-route
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: example-issuer-staging
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.frontend.redirect.entryPoint: https
spec:
entryPoints:
- websecure
routes:
- match: Host(`staging.example.com`)
kind: Rule
services:
- name: example-website
namespace: example
port: 80
tls:
domains:
- main: staging.example.com
options:
namespace: example
secretName: staging-example-com-tls
my service:
apiVersion: v1
kind: Service
metadata:
namespace: example
name: 'example-website'
spec:
type: ClusterIP
ports:
- protocol: TCP
name: http
port: 80
targetPort: 80
- protocol: TCP
name: https
port: 443
targetPort: 80
selector:
app: 'example-website'
my certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: staging-example-com
namespace: example
spec:
secretName: staging-example-com-tls
issuerRef:
name: example-issuer-staging
kind: Issuer
commonName: staging.example.com
dnsNames:
- staging.example.com
my app:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
namespace: example
name: 'example-website'
labels:
app: 'example-website'
tier: 'frontend'
spec:
replicas: 1
selector:
matchLabels:
app: 'example-website'
template:
metadata:
labels:
app: 'example-website'
spec:
containers:
- name: example-website-container
image: richarvey/nginx-php-fpm:1.10.3
imagePullPolicy: Always
env:
- name: SSH_KEY
value: 'secret'
- name: GIT_REPO
value: 'url of source code for site'
- name: GIT_EMAIL
value: 'example@example.com'
- name: GIT_NAME
value: 'example'
ports:
- containerPort: 80
How can I delete all these secrets, orders, certificates and other resources in the example namespace and try again? Does cert-manager let you do this without it continuously recreating them?
EDIT:
I deleted the namespace and redeployed, then:
kubectl describe certificates staging-example-com -n example
Spec:
Common Name: staging.example.com
Dns Names:
staging.example.com
Issuer Ref:
Kind: Issuer
Name: example-issuer-staging
Secret Name: staging-example-com-tls
Status:
Conditions:
Last Transition Time: 2020-09-26T21:25:06Z
Message: Issuing certificate as Secret does not contain a certificate
Reason: MissingData
Status: False
Type: Ready
Last Transition Time: 2020-09-26T21:25:07Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: staging-example-com-gnbl4
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 3m10s cert-manager Issuing certificate as Secret does not exist
Normal Reused 3m10s cert-manager Reusing private key stored in existing Secret resource "staging-example-com-tls"
Normal Requested 3m9s cert-manager Created new CertificateRequest resource "staging-example-com-qrtfx"
So then I did:
kubectl describe certificaterequest staging-example-com-qrtfx -n example
Status:
Conditions:
Last Transition Time: 2020-09-26T21:25:10Z
Message: Waiting on certificate issuance from order example/staging-example-com-qrtfx-1661100417: "pending"
Reason: Pending
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal OrderCreated 8m17s cert-manager Created Order resource example/staging-example-com-qrtfx-1661100417
Normal OrderPending 8m17s cert-manager Waiting on certificate issuance from order example/staging-example-com-qrtfx-1661100417: ""
So I did:
kubectl describe challenges staging-example-com-qrtfx-1661100417 -n example
Status:
Presented: true
Processing: true
Reason: Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200'
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 11m cert-manager Challenge scheduled for processing
Normal Presented 11m cert-manager Presented challenge using HTTP-01 challenge mechanism
I figured it out. The issue seems to be that IngressRoute (which is used by Traefik) does not work with cert-manager. I just deployed the file below; the HTTP-01 check was then confirmed, and afterwards I could delete it again. Hope this helps others with the same issue.
It seems cert-manager does support Traefik's IngressRoute? I opened an issue here, so let's see what they say: https://github.com/jetstack/cert-manager/issues/3325
kubectl apply -f example-ingress.yml
File:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
namespace: example
name: example-ingress
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: example-issuer-staging
spec:
rules:
- host: staging.example.com
http:
paths:
- path: /
backend:
serviceName: example-website
servicePort: 80
tls:
- hosts:
- staging.example.com
secretName: staging-example-com-tls
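After applying that Ingress, the issuance can be verified with something along these lines (a sketch, using the names from the manifests above):
kubectl get certificate -n example
kubectl describe certificate staging-example-com -n example
kubectl get challenges -n example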
My ingress pod is having trouble reaching two ClusterIP services by IP. There are plenty of other ClusterIP services it has no trouble reaching, including in the same namespace. Another pod has no problem reaching the service (I tried from the default backend in the same namespace and it was fine).
Where should I look? Here are my actual services; it cannot reach the first but can reach the second:
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-08-23T16:59:10Z"
labels:
app: pka-168-emtpy-id
app.kubernetes.io/instance: palletman-pka-168-emtpy-id
app.kubernetes.io/managed-by: Tiller
helm.sh/chart: pal-0.0.1
release: palletman-pka-168-emtpy-id
name: pka-168-emtpy-id
namespace: palletman
resourceVersion: "108574168"
selfLink: /api/v1/namespaces/palletman/services/pka-168-emtpy-id
uid: 539364f9-c5c7-11e9-8699-0af40ce7ce3a
spec:
clusterIP: 100.65.111.47
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: pka-168-emtpy-id
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-03-05T19:57:26Z"
labels:
app: production
app.kubernetes.io/instance: palletman
app.kubernetes.io/managed-by: Tiller
helm.sh/chart: pal-0.0.1
release: palletman
name: production
namespace: palletman
resourceVersion: "81337664"
selfLink: /api/v1/namespaces/palletman/services/production
uid: e671c5e0-3f80-11e9-a1fc-0af40ce7ce3a
spec:
clusterIP: 100.65.82.246
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: production
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
My ingress pod:
apiVersion: v1
kind: Pod
metadata:
annotations:
sumologic.com/format: text
sumologic.com/sourceCategory: 103308/CT/LI/kube_ingress
sumologic.com/sourceName: kube_ingress
creationTimestamp: "2019-08-21T19:34:48Z"
generateName: ingress-nginx-65877649c7-
labels:
app: ingress-nginx
k8s-addon: ingress-nginx.addons.k8s.io
pod-template-hash: "2143320573"
name: ingress-nginx-65877649c7-5npmp
namespace: kube-ingress
ownerReferences:
- apiVersion: extensions/v1beta1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: ingress-nginx-65877649c7
uid: 97db28a9-c43f-11e9-920a-0af40ce7ce3a
resourceVersion: "108278133"
selfLink: /api/v1/namespaces/kube-ingress/pods/ingress-nginx-65877649c7-5npmp
uid: bcd92d96-c44a-11e9-8699-0af40ce7ce3a
spec:
containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
- --configmap=$(POD_NAMESPACE)/ingress-nginx
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: ingress-nginx
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-dg5wn
readOnly: true
dnsPolicy: ClusterFirst
nodeName: ip-10-55-131-177.eu-west-1.compute.internal
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-dg5wn
secret:
defaultMode: 420
secretName: default-token-dg5wn
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-08-21T19:34:48Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-08-21T19:34:50Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-08-21T19:34:48Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://d597673f4f38392a52e9537e6dd2473438c62c2362a30e3d58bf8a98e177eb12
image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
imageID: docker-pullable://gcr.io/google_containers/nginx-ingress-controller@sha256:c9d2e67f8096d22564a6507794e1a591fbcb6461338fc655a015d76a06e8dbaa
lastState: {}
name: ingress-nginx
ready: true
restartCount: 0
state:
running:
startedAt: "2019-08-21T19:34:50Z"
hostIP: 10.55.131.177
phase: Running
podIP: 172.6.218.18
qosClass: BestEffort
startTime: "2019-08-21T19:34:48Z"
It could be connectivity to the node where your Pod is running (or something network-overlay related). You can check where that pod is running:
$ kubectl get pod -o=json | jq .items[0].spec.nodeName
Check if the node is 'Ready':
$ kubectl get node <node-from-above>
If it's ready, then ssh into the node to further troubleshoot:
$ ssh <node-from-above>
Is your overlay pod running on the node? (Calico, Weave, CNI, etc)
You can further troubleshoot by connecting to the pod/container:
# From <node-from-above>
$ docker exec -it <container-id-in-pod> bash
# Check connectivity (ping, dig, curl, etc)
Also, you can use the kubectl command line (if you have network connectivity to the node):
$ kubectl exec -it <pod-id> -c <container-name> bash
# Troubleshoot...
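As an additional check (a sketch using the ClusterIP and names from the first Service above), you can hit the Service directly from inside the ingress container, or via its ClusterIP from the node, to see whether the problem is the Service itself or the path from the ingress pod:
# From inside the ingress-nginx container (or from the node, using the ClusterIP)
curl -v http://100.65.111.47:80/
curl -v http://pka-168-emtpy-id.palletman.svc.cluster.local/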
I have a Deployment and a Service in GKE. I exposed the Deployment as a LoadBalancer but I cannot access it through the Service (curl or browser). I get:
curl: (7) Failed to connect to <my-Ip-Address> port 443: Connection refused
I can port forward directly to the pod and it works fine:
kubectl --namespace=redfalcon port-forward web-service-rf-76967f9c68-2zbhm 9999:443 >> /dev/null
curl -k -v --request POST --url https://localhost:9999/auth/login/ --header 'content-type: application/json' --header 'x-profile-key: ' --data '{"email":"<testusername>","password":"<testpassword>"}'
I have most likely misconfigured my Service but cannot see how. Any help on what I did wrong would be very much appreciated.
Service Yaml:
---
apiVersion: v1
kind: Service
metadata:
name: red-falcon-lb
namespace: redfalcon
spec:
type: LoadBalancer
ports:
- name: https
port: 443
protocol: TCP
selector:
app: web-service-rf
Deployment YAML
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: web-service-rf
spec:
selector:
matchLabels:
app: web-service-rf
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: web-service-rf
spec:
initContainers:
- name: certificate-init-container
image: proofpoint/certificate-init-container:0.2.0
imagePullPolicy: Always
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- "-namespace=$(NAMESPACE)"
- "-pod-name=$(POD_NAME)"
- "-query-k8s"
volumeMounts:
- name: tls
mountPath: /etc/tls
containers:
- name: web-service-rf
image: gcr.io/redfalcon-186521/redfalcon-webserver-minimal:latest
# image: gcr.io/redfalcon-186521/redfalcon-webserver-full:latest
command:
- "./server"
- "--port=443"
imagePullPolicy: Always
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
ports:
- containerPort: 443
resources:
limits:
memory: "500Mi"
cpu: "100m"
volumeMounts:
- mountPath: /etc/tls
name: tls
- mountPath: /var/secrets/google
name: google-cloud-key
volumes:
- name: tls
emptyDir: {}
- name: google-cloud-key
secret:
secretName: pubsub-key
Output of kubectl describe svc red-falcon-lb:
Name: red-falcon-lb
Namespace: redfalcon
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"red-falcon-lb","namespace":"redfalcon"},"spec":{"ports":[{"name":"https","port...
Selector: app=web-service-rf
Type: LoadBalancer
IP: 10.43.245.9
LoadBalancer Ingress: <EXTERNAL IP REDACTED>
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31524/TCP
Endpoints: 10.40.0.201:443,10.40.0.202:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 39m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 38m service-controller Ensured load balancer
I figured out what it was...
My Go app was listening on localhost instead of 0.0.0.0. This meant that kubectl port forwarding worked, but any Service exposure didn't.
I had to add "--host 0.0.0.0" to my container command so that it listened to requests from outside localhost.
My command ended up being:
"./server --port 8080 --host 0.0.0.0"