Why doesn't the GCP load balancer support the ECDSA certificate? - kubernetes

I have created a Kubernetes Ingress with a FrontendConfig and an ECDSA P-384 TLS certificate on Google Cloud Platform. A few seconds into the creation process I received the following error:
Error syncing to GCP: error running load balancer syncing routine:
loadbalancer -default--ingress-****** does not exist:
Cert creation failures -
k8s2-cr---***** Error:googleapi:
Error 400: The ECDSA curve is not supported.,
sslCertificateUnsupportedCurve
Why is the ECDSA curve not supported? Is there any way to enable support for it?
Command used to create the TLS secret:
kubectl create secret tls tls --key [key-path] --cert [cert-path]
Frontend-config:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  labels:
    kind: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: frontend-config
spec:
  tls:
    - hosts:
        - '*.mydomain.com'
      secretName: tls
  rules:
    - host: mydomain.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: spa-ingress-service
                port:
                  number: 80
    - host: api.mydomain.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-ingress-service
                port:
                  number: 80
spa services:
# SERVICE LOAD BALANCER
apiVersion: v1
kind: Service
metadata:
  name: spa-service
  labels:
    app/name: spa
spec:
  type: LoadBalancer
  selector:
    app/template: spa
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
---
# SERVICE NODE PORT - FOR INGRESS
apiVersion: v1
kind: Service
metadata:
  name: spa-ingress-service
  labels:
    app/name: ingress.spa
spec:
  type: NodePort
  selector:
    app/template: spa
  ports:
    - name: https
      protocol: TCP
      port: 80
      targetPort: http
api services:
# SERVICE LOAD BALANCER
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app/name: api
spec:
  type: LoadBalancer
  selector:
    app/template: api
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
---
# SERVICE NODE PORT - FOR INGRESS
apiVersion: v1
kind: Service
metadata:
  name: api-ingress-service
  labels:
    app/name: ingress.api
spec:
  type: NodePort
  selector:
    app/template: api
  ports:
    - name: https
      protocol: TCP
      port: 80
      targetPort: http
kubectl describe ingress response:

The GCP load balancer supports RSA-2048 or ECDSA P-256 certificates. DownstreamTlsContexts also support multiple TLS certificates, which may be a mix of RSA and P-256 ECDSA certificates.
The error occurs because the certificate currently in use is P-384, which is not supported, rather than P-256.
For additional information, refer to the Load Balancing Overview.
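If re-issuing the certificate is an option, a P-256 key and certificate can be generated and the secret recreated along these lines (a minimal sketch; the file names are placeholders and the self-signed certificate is for illustration only, in practice the certificate would come from your CA):
# Generate an ECDSA P-256 (prime256v1) private key, which the GCP load balancer accepts
openssl ecparam -name prime256v1 -genkey -noout -out key.pem
# Self-signed certificate for illustration; request one from your CA for production use
openssl req -new -x509 -key key.pem -out cert.pem -days 365 -subj "/CN=*.mydomain.com"
# Recreate the secret referenced by the Ingress
kubectl delete secret tls
kubectl create secret tls tls --key key.pem --cert cert.pem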

Related

Connect local Docker Kubernetes to localhost's app

I have macOS and local Docker Desktop with Kubernetes enabled.
I want a service in the local Kubernetes cluster to connect to my local Java app running on port 8087.
Here is what I have so far:
Service
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: app-auth
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8087
---
kind: Endpoints
apiVersion: v1
metadata:
  name: auth
subsets:
  - addresses:
      - ip: <127.0.0.1 outside of cluster>
    ports:
      - port: 8087
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: router
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /api/user
            pathType: Prefix
            backend:
              service:
                name: auth
                port:
                  number: 80
I already checked the following, but without success, since I am not using either Minikube or VirtualBox:
access-mysql-running-on-localhost-from-minikube
how-to-access-hosts-localhost-from-inside-kubernetes-cluster
The Question: what IP should I use for my to-os-localhost-service?
Thank you
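Not an answer from the original thread, but a hedged sketch of one common approach: on Docker Desktop the host machine is usually reachable from inside the cluster via the special DNS name host.docker.internal, so an ExternalName Service can stand in for the hand-written Endpoints (port 8087 below is the local Java app's port, and the Ingress backend would then reference that port):
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  type: ExternalName
  externalName: host.docker.internal   # Docker Desktop's alias for the host machine
  ports:
    - port: 8087      # the port the local Java app listens on
      protocol: TCP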

I'm trying to set up a Kubernetes service that points to an external API secured with TLS, so it needs to keep the original host header

I'm trying to set up the following:
an external user calls https://service1.mycluster.com, my cluster calls https://service1.externalservice.com and then returns the response to the user.
I'm doing this to leverage the Istio and Kubernetes deployment in my cluster to provide centralised access to services, but some of my legacy services can't be moved into the cluster.
I believe I'm going to need a Service with an externalName to represent the external service, but I'm unsure how to get it to resolve the TLS and keep the hostname of service1.externalservice.com so the TLS handshake will pass.
Any ideas would be much appreciated, thanks.
Currently I have the following:
service
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: testservice1
spec:
  externalName: https://service1.externalservice.com
  internalTrafficPolicy: Cluster
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  sessionAffinity: None
  type: ExternalName
ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: xxx
    traefik.ingress.kubernetes.io/router.tls: "true"
  name: test1
spec:
  ingressClassName: xxx
  rules:
    - host: service1.mycluster.com
      http:
        paths:
          - backend:
              service:
                name: testservice1
                port:
                  number: 443
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - service1.mycluster.com
      secretName: tls-test1-ingress
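One thing that stands out (an observation, not an answer from the original thread): spec.externalName must be a plain DNS hostname, not a URL, since it is used as a CNAME target. A corrected sketch of the Service:
apiVersion: v1
kind: Service
metadata:
  name: testservice1
spec:
  type: ExternalName
  externalName: service1.externalservice.com   # hostname only, no https:// scheme
  ports:
    - name: https
      port: 443
      protocol: TCP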

Access log entries are not logged in istio sidecar for ingress traffic

I have an ALB Ingress which routes its traffic to the istio-ingressgateway.
From there I have a Gateway:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: "X-gateway"
  namespace: dev
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "dev.xxx.com"
Also I have the virtual service in place:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-istio-ingress
  namespace: dev
spec:
  gateways:
    - X-gateway
  hosts:
    - "dev.xxx.com"
  http:
    - route:
        - destination:
            host: serviceX
            port:
              number: 8080
From there I have the service defined:
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: serviceX
  labels:
    app: appX
spec:
  selector:
    app: podX
  ports:
    - port: 8080
I have access log enabled in the operator by setting:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
The issue: when I hit the service via the Ingress, the ingressgateway itself has an access log entry, but the sidecar of the service does not (it's a single pod). However, when the request reaches the service from within the service mesh, the log entry does appear in the sidecar proxy's access log.
Istio version is : 1.10.0
k8s version is : v1.21.4
The service port should have a name:
spec:
  selector:
    app: podX
  ports:
    - name: http
      port: 8080
This solves the issue.
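For background: Istio uses the service port name prefix (http, http-..., grpc-..., and so on) for explicit protocol selection. On newer Kubernetes and Istio versions the appProtocol field can serve the same purpose; a hedged alternative sketch:
spec:
  selector:
    app: podX
  ports:
    - port: 8080
      appProtocol: http   # equivalent to naming the port "http" for protocol selection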

Using nginx-ingress, HTTP works but HTTPS does not on self-hosted Kubernetes

I use nginx-ingress in my Kubernetes cluster.
I installed the nginx-ingress-controller successfully.
Here is my Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: demo.test.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 443
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: hello-node
  ports:
    - port: 443        # service port
      targetPort: 8080 # container port
      protocol: TCP
      # nodePort: port exposed on the host machine
Opening the HTTP URL [http://demo.test.cn/] in a browser works well.
Opening the HTTPS URL [https://demo.test.cn/] in a browser does not work.
Checking the nginx-ingress-controller access log:
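Not part of the original post, but a likely cause is that the Ingress has no tls section, so the controller has nothing but its default self-signed certificate to serve for HTTPS. A hedged sketch, assuming a TLS secret named demo-tls has been created for demo.test.cn:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo.test.cn
      secretName: demo-tls   # e.g. kubectl create secret tls demo-tls --key key.pem --cert cert.pem
  rules:
    - host: demo.test.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 443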

Kubernetes Istio exposure not working with Virtualservice and Gateway

So we have the following use case running on Istio 1.8.2/Kubernetes 1.18:
Our cluster is exposed via an external load balancer on Azure. When we expose the app the following way, it works:
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    ...
  name: frontend
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: applicationname
  template:
    metadata:
      labels:
        app: appname
        name: frontend
        customer: customername
    spec:
      imagePullSecrets:
        - name: yadayada
      containers:
        - name: frontend
          image: yadayada
          imagePullPolicy: Always
          ports:
            - name: https
              protocol: TCP
              containerPort: 80
          resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: frontend
  labels:
    name: frontend-svc
    customer: customername
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    name: frontend
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend
  namespace: frontend
  annotations:
    kubernetes.io/ingress.class: istio
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: "customer.domain.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: frontend-svc
              servicePort: 80
  tls:
    - hosts:
        - "customer.domain.com"
      secretName: certificate
When we start using a VirtualService and Gateway, we fail to make it work for some reason. We want to use VirtualServices and Gateways because they offer more flexibility and options (like URL rewriting). Other apps running on Istio don't have this issue (they are much simpler as well), and we don't have a NetworkPolicy in place (yet). We simply cannot reach the web page. Does anyone have an idea? The VirtualService and Gateway are below; the other two ReplicaSets are not shown because they are not the problem:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: virtualservice-name
  namespace: frontend
spec:
  gateways:
    - frontend
  hosts:
    - customer.domain.com
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: frontend
            port:
              number: 80
          weight: 100
    - match:
        - uri:
            prefix: /api/
      route:
        - destination:
            host: backend
            port:
              number: 8080
          weight: 100
    - match:
        - uri:
            prefix: /auth/
      route:
        - destination:
            host: keycloak
            port:
              number: 8080
          weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend
  namespace: frontend
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http2
        protocol: HTTP
      tls:
        httpsRedirect: True
      hosts:
        - "customer.domain.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: PASSTHROUGH
        credentialName: customer-cert
      hosts:
        - "customer.domain.com"
Your Gateway specifies PASSTHROUGH, but your VirtualService provides an HTTP route. This means the TLS connection is not terminated by the Gateway, while the VirtualService expects terminated TLS. See also this somewhat similar question:
How do I properly HTTPS secure an application when using Istio?
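One hedged way to resolve that mismatch is to terminate TLS at the Gateway with SIMPLE mode, so the HTTP routes in the VirtualService apply (this assumes the customer-cert secret lives in the namespace of the ingress gateway pod, typically istio-system, which credentialName requires):
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: SIMPLE
    credentialName: customer-cert   # secret must be readable by the ingress gateway
  hosts:
    - "customer.domain.com"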
#user140547 Correct, we changed that now. But we still couldn't access the application.
We found out that one of the important services was not receiving gateway traffic, since that one wasn't set up correctly. It was our first time deploying Istio with multiple services, so we thought each of them needed its own Gateway. Little did we know that one Gateway was more than enough...
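For reference, a single shared Gateway can also be used by VirtualServices in other namespaces by qualifying it as <gateway-namespace>/<gateway-name>; a minimal sketch with assumed names:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: another-service        # hypothetical
  namespace: backend           # hypothetical namespace
spec:
  gateways:
    - frontend/frontend        # <gateway-namespace>/<gateway-name>
  hosts:
    - customer.domain.com
  http:
    - match:
        - uri:
            prefix: /other/
      route:
        - destination:
            host: another-service
            port:
              number: 8080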