Calls by secure port (https) do not work in istio - kubernetes

I hope you can help me. I have the following problem with Istio: I want to receive HTTPS requests, but I get the error "curl: (52) Empty response from server", while HTTP requests work correctly. I attach my manifests below.
A certificate has already been generated, and a secret has been created with the .crt and .key files.
I don't know what else I need for HTTPS requests to work.
Istio Version: 1.8.2
kubectl client version: 1.20.2
The response for HTTP & HTTPS and the curl -Iv output for HTTPS are attached as screenshots.
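For reference, the mysite-secret referenced in the Gateway below would typically be a kubernetes.io/tls secret built from those .crt and .key files, e.g. with kubectl create secret tls mysite-secret --cert=mysite.crt --key=mysite.key. A minimal sketch of such a secret (file names and base64 placeholders are illustrative; with Istio's SDS-based credentialName the secret generally has to live in the same namespace as the ingress gateway workload, istio-system in a default install):
apiVersion: v1
kind: Secret
metadata:
  name: mysite-secret
  namespace: istio-system   # assumption: default istio-ingressgateway namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded contents of the .crt file>
  tls.key: <base64-encoded contents of the .key file>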
Gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: eks-gateway
  namespace: development
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - mysite.domine.com
      tls:
        httpsRedirect: false
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - mysite.domine.com
      tls:
        mode: SIMPLE
        credentialName: mysite-secret
VirtualService.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: eks-virtualservice
  namespace: development
spec:
  hosts:
    - mysite.domine.com
  gateways:
    - eks-gateway
  http:
    - match:
        - uri:
            prefix: /WeatherForecast
      route:
        - destination:
            host: eks-service
            port:
              number: 80
  tls:
    - match:
        - port: 443
          sniHosts:
            - mysite.domine.com
      route:
        - destination:
            host: eks-service
            port:
              number: 80
DestinationRule.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: eks-destinationrule
  namespace: development
spec:
  host: eks-service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: eks-service
  namespace: development
  labels:
    app: eks-app
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: container-port
      protocol: TCP
      name: http-sv
    - port: 443
      targetPort: container-port
      protocol: TCP
      name: https-sv
  selector:
    app: eks-app
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-02-12T07:40:55Z"
  generation: 1
  labels:
    app: eks-app
    app.kubernetes.io/version: v1
    draft: draft-app
spec:
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: eks-app
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        buildID: ""
      creationTimestamp: null
      labels:
        app: eks-app
        draft: draft-app
        version: v1
    spec:
      containers:
        - image: XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/DockerRepo:v1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /liveness
              port: container-port
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: eks-app
          ports:
            - containerPort: 80
              name: container-port
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readiness
              port: container-port
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 4
  observedGeneration: 1
  readyReplicas: 4
  replicas: 4
  updatedReplicas: 4

I think this is a similar issue to the questions linked below: the Gateway and VirtualService definitions don't match. The Gateway terminates TLS (mode: SIMPLE on port 443), but the VirtualService is defined to receive TLS traffic.
Try just deleting the tls section from the VirtualService (a sketch follows after the linked questions).
How do I properly HTTPS secure an application when using Istio?
Kubernetes Istio exposure not working with Virtualservice and Gateway
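For illustration, a minimal sketch of the VirtualService with the tls section removed, assuming the Gateway keeps terminating TLS with mode: SIMPLE and the backend is reached over plain HTTP on port 80 (all names are taken from the manifests above):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: eks-virtualservice
  namespace: development
spec:
  hosts:
    - mysite.domine.com
  gateways:
    - eks-gateway
  http:
    - match:
        - uri:
            prefix: /WeatherForecast
      route:
        - destination:
            host: eks-service
            port:
              number: 80
Both the HTTP and HTTPS listeners of the Gateway then forward decrypted HTTP requests, so a single http route section is enough.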

Apparently there is a problem between Istio and EKS, so I decided to install the AWS controller to get it working properly.

Related

Hangfire dashboard url setup on kubernetes

I have the Hangfire dashboard working properly in my local environment. Now I'm trying to set it up as a container (pod) inside my cluster, deployed to Azure, so I can access the Hangfire dashboard through its URL. However, I'm having an issue accessing it.
Below is my setup:
[UsedImplicitly]
public void Configure(IApplicationBuilder app)
{
    var hangFireServerOptions = new BackgroundJobServerOptions
    {
        Activator = new ContainerJobActivator(app.ApplicationServices)
    };

    app.UseHealthChecks("/liveness");
    app.UseHangfireServer(hangFireServerOptions);
    app.UseHangfireDashboard("/hangfire", new DashboardOptions()
    {
        AppPath = null,
        DashboardTitle = "Hangfire Dashboard",
        Authorization = new[]
        {
            new HangfireCustomBasicAuthenticationFilter
            {
                User = Configuration.GetSection("HangfireCredentials:UserName").Value,
                Pass = Configuration.GetSection("HangfireCredentials:Password").Value
            }
        }
    });

    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });

    HangfireJobScheduler.ScheduleJobs(app.ApplicationServices.GetServices<IScheduledTask>()); //serviceProvider.GetServices<IScheduledTask>()
}
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  selector:
    app: task-scheduler-api
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler
  template:
    metadata:
      labels:
        app: task-scheduler
    spec:
      containers:
        - name: task-scheduler
          image: <%image-name%>
          # Resources and limit
          resources:
            requests:
              cpu: <%cpu_request%>
              memory: <%memory_request%>
            limits:
              cpu: <%cpu_limit%>
              memory: <%memory_limit%>
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 7
Ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: task-scheulder-api-ingress
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 80
  tls:
    - hosts:
        - example.com
      secretName: task-scheduler-tls-production
I'm trying to access the dashboard at example.com/hangfire, but I get 503 Service Temporarily Unavailable.
I'm checking the logs on the pod. Everything seems to be fine:
...
...
Content root path: /data
Now listening on: http://0.0.0.0:80
Now listening on: https://0.0.0.0:443
Application started. Press Ctrl+C to shut down.
....
Would anyone know what I'm missing and how to resolve it? Thank you.
I have figured out the issue. The main problem was that the selector app values in deployment.yml and service.yml did not match. Running kubectl get ep showed no endpoints assigned for the task-scheduler service, meaning the service was not actually selecting the pods.
As soon as I updated the values in deployment.yml and service.yml, the URL became accessible.
service.yml
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  selector:
    app: task-scheduler-api # This needs to match the value inside the deployment
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler # This needs to match the value in the service
  template:
    metadata:
      labels:
        app: task-scheduler # This needs to match as well
    spec:
      containers:
        - name: task-scheduler
          image: <%image-name%>
          # Resources and limit
          resources:
            requests:
              cpu: <%cpu_request%>
              memory: <%memory_request%>
            limits:
              cpu: <%cpu_limit%>
              memory: <%memory_limit%>
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 7
Hopefully someone will find this useful. Thank you.
This could be related to the ingress class, because it moved from an annotation to its own field in networking.k8s.io/v1:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-scheulder-api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "example.com"
      secretName: task-scheduler-tls-production
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /hangfire
            pathType: ImplementationSpecific
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 8080
You also do not need to specify ports 80 & 443 on the Service, as the ingress is responsible for terminating TLS:
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: task-scheduler-api
For convenience you should also update the deployment:
- name: http
  containerPort: 80
  protocol: TCP

502 Bad Gateway when deploying Apollo Server application to GKE

I'm trying to deploy my Apollo Server application to my GKE cluster. However, when I visit the static IP for my site, I receive a 502 Bad Gateway error. I was able to get my client to deploy properly in a similar fashion, so I'm not sure what I'm doing wrong. My deployment logs show that the server started properly, but my ingress reports the service as unhealthy because it is failing its health check.
Here is my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT_NAME>
  labels:
    app: <DEPLOYMENT_NAME>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <POD_NAME>
  template:
    metadata:
      name: <POD_NAME>
      labels:
        app: <POD_NAME>
    spec:
      serviceAccountName: <SERVICE_ACCOUNT_NAME>
      containers:
        - name: <CONTAINER_NAME>
          image: <MY_IMAGE>
          imagePullPolicy: Always
          ports:
            - containerPort: <CONTAINER_PORT>
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - '/cloud_sql_proxy'
            - '-instances=<MY_PROJECT>:<MY_DB_INSTANCE>=tcp:<MY_DB_PORT>'
          securityContext:
            runAsNonRoot: true
My service.yml
apiVersion: v1
kind: Service
metadata:
  name: <MY_SERVICE_NAME>
  labels:
    app: <MY_SERVICE_NAME>
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: <CONTAINER_PORT>
  selector:
    app: <POD_NAME>
And my ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: <INGRESS_NAME>
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <CLUSTER_NAME>
    networking.gke.io/managed-certificates: <CLUSTER_NAME>
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: <SERVICE_NAME>
              servicePort: 80
Any ideas what is causing this failure?
With Apollo Server you need the health check to look at the correct endpoint. So add the following to your deployment.yml under the container.
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>

Accessing kubernetes headless service over ambassador

I have deployed my service as a headless service, following the Kubernetes configuration described in this link (http://vertx.io/docs/vertx-hazelcast/java/#_using_this_cluster_manager). My service is load balanced and proxied using Ambassador. Everything was working fine as long as the service was not headless. Once the service became headless, Ambassador was no longer able to discover it, because it looks for a clusterIP and headless services do not have one. What do I need to include in my deployment.yaml so that these services are discovered by Ambassador?
The error I see is: "upstream connect error or disconnect/reset before headers. reset reason: connection failure"
I need these services to be headless because that is the only way to create a cluster using Hazelcast, and I am creating a WebSocket connection and a Vert.x event bus.
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service
  labels:
    chart: "abt-login-service-0.1.0-SNAPSHOT"
  annotations:
    fabric8.io/expose: "true"
    fabric8.io/ingress.annotations: 'kubernetes.io/ingress.class: nginx'
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      name: login_mapping
      ambassador_id: default
      kind: Mapping
      prefix: /login/
      service: abt-login-service.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
    - name: hz-port-name
      port: 5701
      protocol: TCP
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: RELEASE-NAME-abt-login-service
  labels:
    draft: draft-app
    chart: "abt-login-service-0.1.0-SNAPSHOT"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: RELEASE-NAME-abt-login-service
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        draft: draft-app
        app: RELEASE-NAME-abt-login-service
        component: abt-login-service
    spec:
      serviceAccountName: vault-auth
      containers:
        - name: abt-login-service
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "dev"
            - name: _JAVA_OPTIONS
              value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dhazelcast.diagnostics.enabled=true"
          image: "draft:dev"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 5701
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 500m
              memory: 1024Mi
            requests:
              cpu: 400m
              memory: 512Mi
      terminationGracePeriodSeconds: 10
How can I make these services discoverable by ambassador?

How to setup HTTPS load balancer in kubernetes

I have a requirement to make my application support requests over HTTPS and to block the HTTP port. I want to use the certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to set up HTTPS in GKE. I have seen a couple of documents, but they are not clear. This is my current Kubernetes deployment file. Please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:8080",
            "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
            "--rollout_strategy=managed",
          ]
        - name: integeration-container
          image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 500M
          env:
            - name: LOGGING_FILE
              value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
You have to create a secret that contains your SSL certificate and then reference that secret in your ingress spec as explained here
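A minimal sketch of what that could look like (the secret name oms-tls-secret and the PEM placeholders are hypothetical; a company-provided JKS keystore would first need to be converted to a PEM certificate and key):
apiVersion: v1
kind: Secret
metadata:
  name: oms-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded PEM certificate>
  tls.key: <base64-encoded PEM private key>
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - secretName: oms-tls-secret
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
Once the load balancer serves HTTPS, blocking plain HTTP is a separate step; on GCE ingresses the kubernetes.io/ingress.allow-http: "false" annotation can be used for that.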

Kubernetes - Best strategy for pods with same port?

I have a Kubernetes cluster with 2 worker nodes ("slaves"). I have 4 Docker containers which all use a Tomcat image and expose ports 8080 and 8443. When I put each container into a separate pod, I get an issue with the ports, since I only have 2 worker nodes.
What would be the best strategy for my scenario?
Current error message is: 1 PodToleratesNodeTaints, 2 PodFitsHostPorts.
Should I put all containers into one pod? This is my current setup (times 4):
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: myApp1
  namespace: appNS
  labels:
    app: myApp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myApp1
  template:
    metadata:
      labels:
        app: myApp1
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
        - image: myregistry:5000/myApp1:v1
          name: myApp1
          ports:
            - name: http-port
              containerPort: 8080
            - name: https-port
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
---
kind: Service
apiVersion: v1
metadata:
  name: myApp1-srv
  namespace: appNS
  labels:
    version: "v1"
    app: "myApp1"
spec:
  type: NodePort
  selector:
    app: "myApp1"
  ports:
    - protocol: TCP
      name: http-port
      port: 8080
    - protocol: TCP
      name: https-port
      port: 8443
You should not use hostNetwork unless absolutely necessary. Without host networking, you can have multiple pods listening on the same port number, as each pod gets its own dedicated network namespace.
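For illustration, a sketch of the same Deployment with host networking removed (everything else is copied from the manifest above); each pod then listens on 8080/8443 inside its own network namespace, and the NodePort Service keeps exposing it:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: myApp1
  namespace: appNS
  labels:
    app: myApp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myApp1
  template:
    metadata:
      labels:
        app: myApp1
    spec:
      containers:
        - image: myregistry:5000/myApp1:v1
          name: myApp1
          ports:
            - name: http-port
              containerPort: 8080
            - name: https-port
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 6
With this, all four Tomcat deployments can run their pods on the same two nodes without colliding on host ports 8080 and 8443.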