app1: HSTS enabled at the backend
app2: HSTS not enabled
I am trying to enable HSTS for a specific domain at the NGINX Ingress Controller (https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/basic-configuration/).
However, one application in that cluster already has HSTS enabled while the other does not.
So if I add it to the ConfigMap it will take effect for both services, which will produce a duplicate HSTS header for app1.
I am currently enabling HSTS for everything like this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config-map
  namespace: default
data:
  http2: "true"
  ssl-redirect: "true"
  ssl-protocols: TLSv1.2 TLSv1.3
  ssl-prefer-server-ciphers: "true"
  ssl-ciphers: #########
  set-real-ip-from: 0.0.0.0/0
  real-ip-header: X-Forwarded-For
  # hsts enabled
  server-snippets: 'add_header Strict-Transport-Security "max-age=31536000; includeSubDomains;"'
I'm not sure whether I'm on the right track to resolve the HSTS issue, so I'm looking forward to hearing from others. :)
Update
I came across a way to perform an if/else, but I'm wondering whether there is any way to differentiate by virtual server?
https://github.com/nginxinc/kubernetes-ingress/blob/3f0740d182a9f46cfc83ef085d9721cb102c97b9/examples/customization/nginx-config.yaml#L43
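One possible direction (a sketch only; it assumes your controller version supports the nginx.org/server-snippets annotation and that snippets are allowed by its configuration): keep the HSTS header out of the global ConfigMap and attach it only to the Ingress of the app that does not already set it. App1 then keeps sending its own header from the backend and app2 gets it added at the ingress, so neither ends up with a duplicate. The names and host below are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress   # hypothetical: the app without backend HSTS
  annotations:
    nginx.org/server-snippets: |
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
spec:
  rules:
    - host: app2.example.com   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc   # hypothetical service
                port:
                  number: 80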
Related
I have a service running in Kubernetes that limits each IP to 2 requests per day.
Since it is behind an ingress proxy, the request IP is always the same, so it is limiting the total number of requests to 2.
It's possible to turn on proxy protocol with a config like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
But this would turn it on for all services, and since they don't expect proxy-protocol they would break.
Is there a way to enable it for only one service?
It is possible to configure the Ingress so that it includes the original client IP in the HTTP headers.
For this I had to change the service config.
The service is called ingress-nginx-ingress-controller (or similar) and can be found with kubectl get services -A:
spec:
  externalTrafficPolicy: Local
And then configure the ConfigMap with the same name:
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
Restart the pods; the HTTP requests will then contain the X-Forwarded-For and X-Real-Ip headers.
This method won't break deployments that don't expect the proxy protocol.
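Putting the two pieces together, a minimal sketch of what the controller ConfigMap ends up containing (the name and namespace are illustrative; use the ConfigMap whose name matches the Service found with kubectl get services -A):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-ingress-controller   # illustrative; match your install
  namespace: ingress-nginx                 # illustrative namespace
data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
On the controller Service itself, only spec.externalTrafficPolicy needs to change to Local; the backend can then read the client address from X-Forwarded-For / X-Real-Ip instead of expecting the proxy protocol.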
I have a service deployed in Google Kubernetes Engine and have been asked to support TLS 1.3 connections on that service. Currently I do not get higher than TLS 1.2. Do I need to define my Ingress differently?
My Ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-tls-__CI_ENVIRONMENT_SLUG__
  namespace: __KUBE_NAMESPACE__
  labels:
    app: __CI_ENVIRONMENT_SLUG__
    ref: __CI_ENVIRONMENT_SLUG__
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - __SERVICE_TLS_ENDPOINT__
      secretName: __CI_ENVIRONMENT_SLUG__-service-cert
  rules:
    - host: __SERVICE_TLS_ENDPOINT__
      http:
        paths:
          - path: /
            backend:
              serviceName: service-__CI_ENVIRONMENT_SLUG__
              servicePort: 8080
Master version 1.17.13-gke.600
Pool version 1.17.13-gke.600
Your Ingress resource looks good. I used the same setup as yours and received a message that TLS 1.3 was supported.
The official documentation states:
Default TLS Version and Ciphers
To provide the most secure baseline configuration possible,
nginx-ingress defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.
Please check which version of nginx-ingress-controller you are running:
Kubernetes.github.io: Ingress-nginx: Deploy: Detect installed version
You can also check whether TLS 1.3 is enabled in the nginx.conf of your nginx-ingress-controller pod (ssl_protocols TLSv1.2 TLSv1.3;). You will need to exec into the pod.
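If TLSv1.3 turns out to be missing from the rendered nginx.conf, one way to enable it explicitly is through the controller's ConfigMap (a sketch; the ConfigMap name and namespace depend on how your nginx-ingress was installed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # illustrative; match your installation
  namespace: ingress-nginx         # illustrative namespace
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"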
Troubleshooting steps for ensuring support for TLS 1.3
Does your server (nginx-ingress) support TLS 1.3?
You can check if your Ingress controller supports it by running an online analysis:
SSLLabs.com: SSLTest: Analyze
You should get a message stating that TLS 1.3 is supported:
You can also use alternative online tools:
Geekflare.dev: TLS test
Geekflare.com: 10 Online Tool to Test SSL, TLS and Latest Vulnerability
Does your client support TLS 1.3?
Please make sure that the client connecting to your Ingress supports TLS 1.3.
The client connecting to the server was not mentioned in the question.
Assuming that it's a web browser, you can check it with a tool similar to the one used for the server:
Clienttest.ssllabs.com:8443: SSLTest: ViewMyClient
Assuming that it is some other tool (curl, nmap, openssl, etc.), please check its documentation for reference.
Additional reference:
Github.com: Kubernetes: Ingress nginx: Enable tls 1.3 in the nginx image
En.wikipedia.org: Wiki: Transport Layer Security Adoption
Is there a way to force an SSL upgrade for incoming connections on the ingress load balancer? Or, if that is not possible, can I disable port :80? I haven't found a good documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller you could have a look at the NGINX Ingress Controller. This controller is different from the built-in one in several ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and you gain support for IPv6 and websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress . If you want to disable that behaviour globally, you
can use ssl-redirect: "false" in the NGINX config map.
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
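Applied to a single Ingress, that looks roughly like this (a sketch; the name is hypothetical, and newer controller releases use the nginx.ingress.kubernetes.io/force-ssl-redirect prefix instead):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress   # hypothetical name
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ...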
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) rewrites are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update: a FrontendConfig can now be created to configure the Ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You'll need to make sure that your load balancer supports HTTP and HTTPS
Worked on this for a long time. In case anyone isn't clear on the posts above: you rebuild your ingress with the annotation kubernetes.io/ingress.allow-http: "false",
then delete your ingress and redeploy. The annotation makes the ingress create a load balancer only for 443, instead of both 443 and 80.
Then you create a Compute Engine HTTP LB yourself, not one managed by GKE.
GUI directions:
Create a load balancer and choose HTTP(S) Load Balancing -- Start configuration.
Choose 'From Internet to my VMs' and continue.
Choose a name for the LB
leave the backend configuration blank.
Under Host and path rules, select Advanced host and path rules with the action set to
Redirect the client to different host/path.
Leave the Host redirect field blank.
Select Prefix Redirect and leave the Path value blank.
Choose 308 as the redirect response code.
Tick the Enable box for HTTPS redirect.
For the Frontend configuration, leave http and port 80, for ip address select the static
IP address being used for your GKE ingress.
Create this LB.
You will now have all http traffic go to this and 308 redirect to your https ingress for GKE. Super simple config setup and works well.
Note: If you just try to delete the port 80 LB that GKE creates (without doing the annotation change and rebuilding the ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your Ingress along the lines of: error 400: invalid value for field 'resource.ipAddress', "" is in use and would result in a conflict. GKE is trying to spin up the port 80 LB again and can't, because you already have an LB on port 80 using the same IP. It still works, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment from @Andrej Palicka and the page he provided, https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect, I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set up an HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
Here's a tutorial on how to do this in Ambassador.
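For the ingress-nginx case specifically, the first two scenarios map onto ConfigMap options that also appear elsewhere in this thread. A hedged sketch (names are illustrative; enable only the option that matches how your load balancer forwards traffic):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # illustrative; match your installation
  namespace: ingress-nginx
data:
  # L7 load balancer that terminates TLS and sets X-Forwarded-Proto:
  use-forwarded-headers: "true"
  # TCP/L4 load balancer speaking the PROXY protocol (use instead of the above):
  # use-proxy-protocol: "true"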
I'm trying to configure nginx-ingress for mutual TLS, but only for a specific remote address. I tried to use a snippet but without success:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($remote_addr = 104.214.x.x) {
    auth-tls-verify-client: on;
    auth-tls-secret: namespace/nginx-ca-secret;
    auth-tls-verify-depth: 1;
    auth-tls-pass-certificate-to-upstream: false;
  }
The auth-tls annotations work when applied as annotations, but inside the snippet they don't.
Any idea how to configure this or maybe a workaround to make it work?
The job of mTLS is basically restricting access to a service by requiring the client to present a certificate. If you expose a service and then require only clients with specific IP addresses to present a certificate, the entire rest of the world can still access your service without a certificate, which completely defeats the point of mTLS.
If you want more info, here is a good article that explains why TLS and mTLS exist and what is the difference between them.
There are two ways to make a sensible setup out of this:
Just use regular TLS instead of mTLS
Make a service in your cluster require mTLS to access it regardless of IP addresses
If you go for option 2, you need to configure the service itself to use mTLS, and then configure ingress to pass through the client certificate to the service. Here's a sample configuration for nginx ingress that will work with a service that expects mTLS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443
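If you instead want the ingress itself to verify client certificates for every request to that backend (the effect the question's snippet was aiming for), the auth-tls settings can be applied as regular annotations rather than inside a snippet. A sketch, reusing the secret named in the question and assuming TLS is already terminated at this Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-at-ingress   # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "namespace/nginx-ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443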
I'm looking to redirect all traffic from
http://example.com to https://example.com, like nearly all websites do.
I've looked at this link with no success:
Kubernetes HTTPS Ingress in Google Container Engine
And I have tried the following annotations in my ingress.yaml file:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"
All without any success. To be clear, I can access https://example.com and http://example.com without any errors, I need the http call to redirect to https.
Thanks
GKE uses GCE L7. The rules that you referenced in the example are not supported; the HTTP-to-HTTPS redirect should be controlled at the application level.
The L7 load balancer inserts the x-forwarded-proto header, which you can use to tell whether the frontend traffic arrived over HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS
There is also an example in that link for Nginx (copied for convenience):
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks, etc.) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:
ingress with a google-managed SSL cert and additional NGINX server configuration in front of your app/site
the NGINX ingress controller with self-managed/third-party SSL certs
The following are the steps to a working setup using the former.
1 The door to your app
nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    ...
    keepalive_timeout 620s;

    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ...

    server {
        listen 80;

        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }

        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }

        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }

        root /usr/src/app/www;
        index index.html index.htm;

        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
NOTE: One serving port only. The global forwarding rule adds the http_x_forwarded_proto header to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service and ingress), this header will (crucially!) always be set. Note the check and redirect above: serving only continues if the header value is 'https'. The root, index and location values may differ depending on your project (this is an Angular project). keepalive_timeout is set to the value recommended by Google. I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an include statement, as sketched below. The comments highlight where other settings you may want to add once everything is working will go, such as gzip/brotli, security headers, where logs are saved, and so on.
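For reference, the import mentioned above is just an include directive inside the http block of the main nginx.conf, for example:
http {
    ...
    include /etc/nginx/conf.d/custom.conf;
}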
Dockerfile:
...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
NOTE: only the final two lines matter. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts the server in the foreground.
2 Create a deployment manifest and apply/create
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
        - name: uber-ctr
          image: gcr.io/uber/beta:v1 # or some other registry
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 60
            httpGet:
              path: /health/liveness
              port: 80
              scheme: HTTP
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            httpGet:
              path: /health/readiness
              port: 80
              scheme: HTTP
          ports:
            - containerPort: 80
          imagePullPolicy: Always
NOTE: only one port needs to be specified, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity we're using the same endpoints for the liveness and readiness probes; these checks will be handled by the NGINX server, but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.
3 Create a service manifest and apply/create
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
    - name: default-port
      port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort
NOTE: default-port exposes port 80 on the service; with no targetPort set, it forwards to port 80 on the container.
4 Get a static IP address
On GCP, in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP to static, or create a new one. Take note of the name and address.
5 Create an SSL cert and default zone
In the hamburger menu: Network Services -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name and your static IP as the IPv4 address. Save.
6 Create an ingress manifest and apply/create
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mypt-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: mypt-svc
    servicePort: 80
NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by the NGINX server. The annotations connect your app with the SSL certificate and force-allow HTTP for the health checks. Combined, the service and ingress configure the L7 load balancer (global forwarding rule, backend and frontend 'services', SSL certs, target proxies, etc.).
7 Make a cup of tea or something
Everything needs ~10 minutes to configure. Clear your cache and test your domain with various browsers (Tor, Opera, Safari, IE, etc.). Everything will be served over HTTPS.
What about the NGINX Ingress Controller?
I've seen discussion of it being better because it's cheaper and uses fewer resources, and because it's more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4), and you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility: namely, letting you get on with more pressing matters.
For everyone like me who searches for this question about once a month: Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management
GKE uses its own Ingress Controller, which does not support forcing HTTPS.
That's why you will have to manage an NGINX Ingress Controller yourself.
See this post on how to do it on GKE.
Hope it helps.
For what it's worth, I ended up using NGINX as a reverse proxy.
You need to create secrets and sync them into your containers.
You need to create a ConfigMap with your nginx config, as well as a default config that references this additional config file.
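The Deployment further down mounts these as the ConfigMaps nginx-config and default and the Secret nginxsecret. As one hedged sketch, the site config could live in a ConfigMap like this (the data key is illustrative and must match the path your main config includes):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  config-file.conf: |
    # paste the contents of config.conf (shown further down) here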
Here is my configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    default_type application/octet-stream;

    # Logging Configs
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # Puntdoctor Proxy Config
    include /path/to/config-file.conf;

    # PubSub allows 10MB Files. lets allow 11 to give some space
    client_max_body_size 11M;
}
Then, the config.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name example.com;

    ssl_certificate /certs/tls.crt;
    ssl_certificate_key /certs/tls.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://deployment-name:8080/;
        proxy_read_timeout 90;

        proxy_redirect http://deployment-name:8080/ https://example.com/;
    }
}
Create a deployment:
Here are the .yaml files
---
apiVersion: v1
kind: Service
metadata:
  name: puntdoctor-lb
spec:
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: puntdoctor-nginx-deployment
  type: LoadBalancer
  loadBalancerIP: 35.195.214.7
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: puntdoctor-nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: puntdoctor-nginx-deployment
    spec:
      containers:
        - name: adcelerate-nginx-proxy
          image: nginx:1.13
          volumeMounts:
            - name: certs
              mountPath: /certs/
            - name: site-config
              mountPath: /etc/site-config/
            - name: default-config
              mountPath: /etc/nginx/
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
      volumes:
        - name: certs
          secret:
            secretName: nginxsecret
        - name: site-config
          configMap:
            name: nginx-config
        - name: default-config
          configMap:
            name: default
Hope this helps someone solve this issue, thanks for the other 2 answers, they both gave me valuable insight.
As of GKE version 1.18.10-gke.600, you can use FrontendConfig to create HTTP -> HTTPS redirection in Google Kubernetes Engine
HTTP to HTTPS redirects are configured using the redirectToHttps field
in a FrontendConfig custom resource. Redirects are enabled for the
entire Ingress resource so all services referenced by the Ingress will
have HTTPS redirects enabled.
The following FrontendConfig manifest enables HTTP to HTTPS redirects.
Set the spec.redirectToHttps.enabled field to true to enable HTTPS
redirects. The spec.responseCodeName field is optional. If it's
omitted a 301 Moved Permanently redirect is used.
For example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-frontend-config-name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
MOVED_PERMANENTLY_DEFAULT is one of the available RESPONSE_CODE field values; it returns a 301 redirect response code (the default if responseCodeName is unspecified).
You can find more options here: HTTP to HTTPS redirects
Then you have to link your FrontendConfig to the Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: your-frontend-config-name
spec:
  tls:
    ...