Istio 1.2 -- Two protocols, same port?

We have an application running in a k8s Deployment that opens a TCP socket on port 8000 and listens for both HTTP and gRPC traffic. We also have an Istio Gateway listening on port 443 for HTTPS traffic, connected to two VirtualServices: one for HTTP traffic, the other for gRPC traffic (matching on headers/URI). Those VirtualServices direct traffic to two different ports on the Service, port 8000 for HTTP and port 5001 for gRPC, but both have a target port of 8000 (see specs below). We're having issues connecting via either protocol: HTTP returns a generic 500, and gRPC returns a "not found" error. However, if we split the traffic between two ports (i.e. each protocol gets its own port), things work fine, but that unfortunately forces us to use an older version of the app.
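For reference, here is a minimal sketch of what the Gateway and the two VirtualServices described above might look like; they are not part of the original specs, so the host, certificate secret, and the URI/header match rules are assumptions:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
  namespace: test-ns
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: test-tls-cert   # assumed certificate secret
    hosts:
    - "app.example.com"               # assumed host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-http
  namespace: test-ns
spec:
  hosts:
  - "app.example.com"                 # assumed host
  gateways:
  - test-gateway
  http:
  - match:
    - uri:
        prefix: /api                  # assumed HTTP match
    route:
    - destination:
        host: test-deployment-svc
        port:
          number: 8000
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-grpc
  namespace: test-ns
spec:
  hosts:
  - "app.example.com"                 # assumed host
  gateways:
  - test-gateway
  http:
  - match:
    - headers:
        content-type:
          prefix: application/grpc    # assumed gRPC match
    route:
    - destination:
        host: test-deployment-svc
        port:
          number: 5001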
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    spec:
      containers:
      - image: <Image name>
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /live
            port: 8000
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 60
        name: test-container
        ports:
        - containerPort: 8000
          protocol: TCP
        - containerPort: 8000
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: 8000
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 60
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 10Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 20
Service:
apiVersion: v1
kind: Service
metadata:
  name: test-deployment-svc
  namespace: test-ns
spec:
  clusterIP: <IP>
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: http2
    port: 5001
    protocol: TCP
    targetPort: 8000
  selector:
    <some label>
  sessionAffinity: None
  type: ClusterIP
Any suggestions would be greatly appreciated!

Related

Kubernetes Statefulsets traffic issue

I am trying to deploy 3 replicas using a StatefulSet.
Now, I have three pods:
statefulset-test-0
statefulset-test-1
statefulset-test-2
Then I use the following command to perform a rolling update:
kubectl rollout restart statefulsets/statefulset-test
It stops the statefulset-test-2 pod, creates a new statefulset-test-2 pod, and then stops statefulset-test-1.
At this point, statefulset-test-2 is running the new image, statefulset-test-1 has been stopped so it cannot accept requests, and statefulset-test-0 is running the old image.
I was wondering how Kubernetes handles requests to those pods. Does it send requests to test-0 and test-2 randomly, or does it send them only to the new pod?
here is my yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  labels:
    app: statefulset-test
spec:
  minReadySeconds: 20
  serviceName: "statefulset-test"
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  selector:
    matchLabels:
      app: statefulset-test
  template:
    metadata:
      labels:
        app: statefulset-test
    spec:
      containers:
      - name: statefulset-test
        image: ..
        imagePullPolicy: Always
        ports:
        - containerPort: 123
        - containerPort: 456
        livenessProbe:
          httpGet:
            path: /api/Health
            port: 123
          initialDelaySeconds: 180
          periodSeconds: 80
          timeoutSeconds: 20
          failureThreshold: 2
        readinessProbe:
          httpGet:
            path: /api/Health
            port: 123
          initialDelaySeconds: 20
          periodSeconds: 5
          successThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: statefulset-service
spec:
  type: NodePort
  selector:
    app: statefulset-test
  ports:
  - name: statefulset-main
    protocol: TCP
    port: 123
    targetPort: 123
    nodePort: 678
  - name: statefulset-grpc
    protocol: TCP
    port: 456
    targetPort: 456
    nodePort: 679
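To see what happens during the rollout, you can watch which pods are Ready and which addresses the Service is actually forwarding to; a Service only routes to pods whose readiness probe passes, and it spreads requests across all Ready endpoints regardless of which image a pod is running (the label and Service name below are taken from the YAML above):
kubectl get pods -l app=statefulset-test -w
kubectl get endpoints statefulset-service -w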

What are some good commands to troubleshoot my helm deployment?

So I'm working on my first Helm deployment. I'm deploying polr, a URL shortener.
I'm having issues with this first deployment: absolutely nothing starts up, and I'm puzzled about where to go from here.
I'm using commands like:
kubectl describe deployment/polr
helm lint pre-polr
helm install polr pre-polr --dry-run --debug
However, these don't give me any useful details, and since there are no pods spinning up, I feel like I'm missing some commands that might help. Could anyone suggest any?
Here are my manifests:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: polr
  labels:
    app: polr-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: polr-app
  template:
    metadata:
      labels:
        app: polr-app
    spec:
      containers:
      - name: polr
        image: matthewspah/polr
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
      - name: polr-db
        image: bitnami/mysql
        ports:
        - name: mysql
          containerPort: 3306
          protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 15
          periodSeconds: 20
Service
apiVersion: v1
kind: Service
metadata:
  name: polr
  labels:
    name: polr-app
spec:
  selector:
    app: polr-app
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
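A few standard commands that usually help narrow this down (the release name polr and the label app=polr-app are taken from the manifests above; <pod-name> is a placeholder):
helm status polr
helm get manifest polr
kubectl get all -l app=polr-app
kubectl get events --sort-by=.lastTimestamp
kubectl describe pod -l app=polr-app
kubectl logs <pod-name> -c polr
kubectl logs <pod-name> -c polr-db
helm get manifest shows exactly what YAML Helm rendered and applied, kubectl get events surfaces scheduling, image-pull, and probe failures, and describe/logs on the individual pods usually point at the failing container.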

Accessing kubernetes headless service over ambassador

I have deployed my service as a headless service, following the Kubernetes configuration mentioned in this link (http://vertx.io/docs/vertx-hazelcast/java/#_using_this_cluster_manager). My service is load balanced and proxied using Ambassador. Everything was working fine as long as the service was not headless; once the service was changed to headless, Ambassador could no longer discover it, because it looks for a clusterIP and that is missing now that the service is headless. What do I need to include in my deployment.yaml so that these services are discovered by Ambassador?
The error I see is "upstream connect error or disconnect/reset before headers. reset reason: connection failure".
I need these services to be headless because that is the only way to create a cluster using Hazelcast, and I am creating a WebSocket connection and a Vert.x event bus.
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service
  labels:
    chart: "abt-login-service-0.1.0-SNAPSHOT"
  annotations:
    fabric8.io/expose: "true"
    fabric8.io/ingress.annotations: 'kubernetes.io/ingress.class: nginx'
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      name: login_mapping
      ambassador_id: default
      kind: Mapping
      prefix: /login/
      service: abt-login-service.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - name: hz-port-name
    port: 5701
    protocol: TCP
Deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: RELEASE-NAME-abt-login-service
  labels:
    draft: draft-app
    chart: "abt-login-service-0.1.0-SNAPSHOT"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: RELEASE-NAME-abt-login-service
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        draft: draft-app
        app: RELEASE-NAME-abt-login-service
        component: abt-login-service
    spec:
      serviceAccountName: vault-auth
      containers:
      - name: abt-login-service
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "dev"
        - name: _JAVA_OPTIONS
          value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dhazelcast.diagnostics.enabled=true"
        image: "draft:dev"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        - containerPort: 5701
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 500m
            memory: 1024Mi
          requests:
            cpu: 400m
            memory: 512Mi
      terminationGracePeriodSeconds: 10
How can I make these services discoverable by ambassador?
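One possible workaround (just a sketch, not taken from the Ambassador docs): keep the headless Service so Hazelcast can still discover its members, and add a second, plain ClusterIP Service over the same pods for Ambassador to route to. The name abt-login-service-lb below is made up for illustration:
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service-lb   # illustrative name, used only by Ambassador
spec:
  type: ClusterIP
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
The Mapping's service: field would then point at abt-login-service-lb.default.svc.cluster.local, while the Hazelcast cluster manager keeps using the headless abt-login-service for member discovery.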

Kubernetes StatefulSet - obtain spec.replicas metadata and reference elsewhere in configuration

I am configuring a StatefulSet where I want the number of replicas (spec.replicas as shown below) to somehow be passed as a parameter into each application instance. My application needs the replica count so it knows which rows to load from a MySQL table. I don't want to hard-code the number of replicas in both spec.replicas and the application parameter, as that will not work when scaling the number of replicas up or down: the application parameter needs to adjust whenever the StatefulSet is scaled.
Here is my StatefulSet config:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    run: my-app
  name: my-app
  namespace: my-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app
  serviceName: my-app
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        command:
        - /bin/sh
        - /bin/start.sh
        - dev
        - 2000m
        - "0"
        - "3"            # <-- needs to be replaced with the number of replicas
        - 127.0.0.1
        - "32990"
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /health
            port: 8081
          initialDelaySeconds: 180
          periodSeconds: 10
          timeoutSeconds: 3
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /ready
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 3
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            memory: 2500Mi
      imagePullSecrets:
      - name: snapshot-pull
      restartPolicy: Always
I have read the Kubernetes docs, and the fields that can be exposed to a pod appear to be scoped at the pod or container level, never the StatefulSet level, at least as far as I have seen.
Thanks in advance.
You could use a YAML anchor to do this. Check out https://helm.sh/docs/chart_template_guide/yaml_techniques/#yaml-anchors:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    run: my-app
  name: my-app
  namespace: my-ns
spec:
  replicas: &numReplicas 3
  selector:
    matchLabels:
      run: my-app
  serviceName: my-app
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        command:
        - /bin/sh
        - /bin/start.sh
        - dev
        - 2000m
        - "0"
        - *numReplicas
        - 127.0.0.1
        - "32990"
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /health
            port: 8081
          initialDelaySeconds: 180
          periodSeconds: 10
          timeoutSeconds: 3
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /ready
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 3
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            memory: 2500Mi
      imagePullSecrets:
      - name: snapshot-pull
      restartPolicy: Always
Normally you would use the Downward API for this kind of thing: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
However, it is currently not possible for Kubernetes to propagate Deployment/StatefulSet spec data into the pod spec with the Downward API, nor should it be. If you are responsible for this software, I'd set up some internal functionality so that it can find its peers and determine their count periodically.
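As a concrete sketch of that kind of self-discovery (the env var name POD_NAME is only an illustration): the Downward API can expose each pod's own name, from which the application can parse its ordinal, and the current peer count can be discovered at runtime, for example by resolving the StatefulSet's governing Service, assuming my-app is the usual headless governing Service:
        env:
        - name: POD_NAME              # illustrative name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
A pod named my-app-2 can read its ordinal (2) from POD_NAME, and a DNS lookup of the headless Service my-app returns one record per Ready pod, which gives the current count without hard-coding it anywhere.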

Kubernetes deployment incurs downtime

When running a deployment I get downtime: requests start failing after a variable amount of time (20-40 seconds).
The readiness check for the entry container fails when the preStop hook sends SIGUSR1, waits for 31 seconds, and then sends SIGTERM. In that timeframe the pod should be removed from the service, as the readiness check is set to fail after 2 failed attempts with 5-second intervals.
How can I see the events for pods being added to and removed from the service, to find out what's causing this? And the events around the readiness checks themselves?
I use Google Container Engine version 1.2.2 and GCE's network load balancer.
service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: https
    port: 443
    targetPort: https
    protocol: TCP
  selector:
    app: myapp
deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: 1.0.0-61--66-6
    spec:
      containers:
      - name: myapp
        image: ****
        resources:
          limits:
            cpu: 100m
            memory: 250Mi
          requests:
            cpu: 10m
            memory: 125Mi
        ports:
        - name: http-direct
          containerPort: 5000
        livenessProbe:
          httpGet:
            path: /status
            port: 5000
          initialDelaySeconds: 30
          timeoutSeconds: 1
        lifecycle:
          preStop:
            exec:
              # SIGTERM triggers a quick exit; gracefully terminate instead
              command: ["sleep 31;"]
      - name: haproxy
        image: travix/haproxy:1.6.2-r0
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 25Mi
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        env:
        - name: "SSL_CERTIFICATE_NAME"
          value: "ssl.pem"
        - name: "OFFLOAD_TO_PORT"
          value: "5000"
        - name: "HEALT_CHECK_PATH"
          value: "/status"
        volumeMounts:
        - name: ssl-certificate
          mountPath: /etc/ssl/private
        livenessProbe:
          httpGet:
            path: /status
            port: 443
            scheme: HTTPS
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
          initialDelaySeconds: 0
          timeoutSeconds: 1
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 2
        lifecycle:
          preStop:
            exec:
              # SIGTERM triggers a quick exit; gracefully terminate instead
              command: ["kill -USR1 1; sleep 31; kill 1"]
      volumes:
      - name: ssl-certificate
        secret:
          secretName: ssl-c324c2a587ee-20160331
When a probe fails, the prober emits a warning event with the reason Unhealthy and a message like "xx probe errored: xxx".
You should be able to find those events using either kubectl get events or kubectl describe pods -l app=myapp,version=1.0.0-61--66-6 (filtering the pods by their labels).
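For example, with a reasonably recent kubectl you can narrow the output down to the probe failures and also watch pods being added to and removed from the Service during the deployment (myapp is the Service name from the spec above):
kubectl get events --sort-by=.lastTimestamp
kubectl get events --field-selector reason=Unhealthy
kubectl get endpoints myapp --watch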