python kubernetes api connection unauthorized 401 - kubernetes

I'm having trouble connecting to the Kubernetes API with the Python Kubernetes client: when I run curl -k https://ip-address:30000/pods, which hits an endpoint I wrote to connect to Kubernetes and list pods, I get the 401 Unauthorized error below. I'm running this locally through minikube; any suggestions on what to do?
The error:
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Content-Length': '129', 'Date': 'Wed, 18 Jul 2018 01:08:30 GMT'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
How I'm connecting to api:
from kubernetes import config,client
from kubernetes.client import Configuration, ApiClient
config = Configuration()
config.api_key = {'authorization': 'Bearer my-key-here'}
config.ssl_ca_cert = '/etc/secret-volume/ca-cert'
config.cert_file = 'ca-cert.crt'
config.key_file = '/etc/secret-volume/secret-key'
config.assert_hostname = False
api_client = ApiClient(configuration=config)
v1 = client.CoreV1Api(api_client)
# I've also tried using the config loaders below, but that gives an SSL
# certificate max-retry error, so I opted for the manual config above
try:
    config.load_incluster_config()
except:
    config.load_kube_config()
v1 = client.CoreV1Api()
The way I'm getting the API key is by decoding the token from the service account I created. However, according to the documentation here, it says:
on a server with token authentication configured, and anonymous access
enabled, a request providing an invalid bearer token would receive a
401 Unauthorized error. A request providing no bearer token would be
treated as an anonymous request.
So it seems like my API token is somehow not valid? However, I followed the steps to decode it and everything. I was pretty much following this.
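For reference, a long-lived token for a service account like test-sa typically comes from a kubernetes.io/service-account-token Secret; a minimal sketch of such a Secret (test-sa-token is just a placeholder name, not something from the manifests below):

apiVersion: v1
kind: Secret
metadata:
  name: test-sa-token            # placeholder name
  namespace: test-ns
  annotations:
    kubernetes.io/service-account.name: test-sa
type: kubernetes.io/service-account-token

Once the token controller populates it, data.token holds the base64-encoded bearer token and data.ca.crt the cluster CA certificate.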
My yaml files:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: first-app
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: first-app
  replicas: 3
  template:
    metadata:
      labels:
        app: first-app
    spec:
      serviceAccountName: test-sa
      containers:
      - name: first-app
        image: first-app
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
        ports:
        - containerPort: 80
        env:
        - name: "SECRET_KEY"
          value: /etc/secret-volume/secret-key
        - name: "SECRET_CRT"
          value: /etc/secret-volume/secret-crt
        - name: "CA_CRT"
          value: /etc/secret-volume/ca-cert
      volumes:
      - name: secret-volume
        secret:
          secretName: test-secret
---
apiVersion: v1
kind: Service
metadata:
  name: first-app
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: first-app
  ports:
  - protocol: TCP
    port: 443
    nodePort: 30000
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test-ns
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:test-app
  namespace: test-ns
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  - services
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:test-app
  namespace: test-ns
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: test-ns
roleRef:
  kind: ClusterRole
  name: system:test-app
  apiGroup: rbac.authorization.k8s.io
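Since the response is a 401 (authentication failed) rather than a 403 (authenticated but not authorized), one way to check the token on its own is a TokenReview object; a minimal sketch, with the token value as a placeholder:

apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: token-check              # placeholder name
spec:
  # paste the decoded service account token here
  token: my-key-here

Creating this object (for example with kubectl create -f tokenreview.yaml -o yaml) returns a status block whose authenticated field shows whether the API server accepts the token.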

Related

Access consul-api from consul-connect-service-mesh

In a Consul Connect service mesh (using Kubernetes), how do you get to the Consul API itself?
For example, to access the Consul KV store.
I'm working through this tutorial, and I'm wondering how you can bind the Consul (HTTP) API in a service to localhost.
Do you have to configure the Helm chart further?
I would have expected the Consul agent to always be an upstream service.
The only way I found to access the API is via the Kubernetes service consul-server.
Environment:
Kubernetes (1.22.5, docker-desktop)
Helm consul chart (0.42)
Consul (1.11.3)
Helm values used:
global:
  name: consul
  datacenter: dc1
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsGroup: 0
    runAsUser: 0
    fsGroup: 0
ui:
  enabled: true
  service:
    type: 'NodePort'
connectInject:
  enabled: true
controller:
  enabled: true
You can access the Consul API on the local agent by using the Kubernetes downward API to inject an environment variable into the pod with the IP address of the host. This is documented on Consul.io under Installing Consul on Kubernetes: Accessing the Consul HTTP API.
You will also need to exclude port 8500 (or 8501) from redirection using the consul.hashicorp.com/transparent-proxy-exclude-outbound-ports annotation.
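A minimal sketch of how those two pieces can look on a pod spec (the pod name and image are assumptions for illustration, not part of the chart values above):

apiVersion: v1
kind: Pod
metadata:
  name: consul-api-client                 # assumed name
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
    'consul.hashicorp.com/transparent-proxy-exclude-outbound-ports': '8500'
spec:
  containers:
  - name: app
    image: curlimages/curl:latest          # assumed image, for illustration only
    command: ['/bin/sh', '-c', 'sleep infinity']
    env:
    - name: HOST_IP                        # node IP injected via the downward API
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

The container can then reach the agent's HTTP API at http://$HOST_IP:8500 from inside the pod.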
My current final solution is a Connect service based on a reverse proxy (nginx) that targets Consul.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-kv-proxy
data:
  nginx.conf.template: |
    error_log /dev/stdout info;
    server {
      listen 8500;
      location / {
        access_log off;
        proxy_pass http://${MY_NODE_IP}:8500;
        error_log /dev/stdout;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: consul-kv-proxy
spec:
  selector:
    app: consul-kv-proxy
  ports:
  - protocol: TCP
    port: 8500
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-kv-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-kv-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-kv-proxy
  template:
    metadata:
      name: consul-kv-proxy
      labels:
        app: consul-kv-proxy
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
      - name: consul-kv-proxy
        image: nginx:1.14.2
        volumeMounts:
        - name: config
          mountPath: "/usr/local/nginx/conf"
          readOnly: true
        command: ['/bin/bash']
        # we have to transform the nginx config to use the node IP address
        args:
        - -c
        - envsubst < /usr/local/nginx/conf/nginx.conf.template > /etc/nginx/conf.d/consul-kv-proxy.conf && nginx -g 'daemon off;'
        ports:
        - containerPort: 8500
          name: http
        env:
        - name: MY_NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      volumes:
      - name: config
        configMap:
          name: consul-kv-proxy
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: consul-kv-proxy
A downstream service (called static-client) can now be declared like this:
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
  - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'consul-kv-proxy:8500'
    spec:
      containers:
      - name: static-client
        image: curlimages/curl:latest
        # Just spin & wait forever, we'll use `kubectl exec` to demo
        command: ['/bin/sh', '-c', '--']
        args: ['while true; do sleep 30; done;']
      serviceAccountName: static-client
Assume we have a key-value entry in Consul called "test".
From a pod of the static-client we can now access the Consul HTTP API with:
curl http://localhost:8500/v1/kv/test
This solution still lacks fine-tuning (I have not tried HTTPS or ACLs).

Argo(events) Trigger an existing ClusterWorkflowTemplate using Sensor

I'm trying to trigger a pre-existing ClusterWorkflowTemplate from a POST request in Argo / Argo Events.
I've been following the example here, but I don't want to define the workflow in the sensor; I want to keep them separate.
I can't get the sensor to reference and trigger the workflow. What is the problem?
# kubectl apply -n argo-test -f templates/whalesay/workflow-template.yml
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: workflow-template-submittable
spec:
  entrypoint: whalesay-template
  arguments:
    parameters:
    - name: message
      value: hello world
  templates:
  - name: whalesay-template
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
# kubectl apply -n argo-events -f templates/whalesay/event-source.yml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
spec:
  service:
    ports:
    - port: 12000
      targetPort: 12000
  webhook:
    # event-source can run multiple HTTP servers. Simply define a unique port to start a new HTTP server
    example:
      # port to run HTTP server on
      port: "12000"
      # endpoint to listen to
      endpoint: /example
      # HTTP request method to allow. In this case, only POST requests are accepted
      method: POST
# kubectl apply -n argo-events -f templates/whalesay/sensor.yml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: workflow
  namespace: argo-events
  finalizers:
  - sensor-controller
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
  - name: http-post-trigger
    eventSourceName: webhook
    eventName: example
  triggers:
  - template:
      name: workflow-trigger-1
      argoWorkflow:
        group: argoproj.io
        version: v1alpha1
        kind: Workflow
        operation: submit
        metadata:
          generateName: cluster-workflow-template-hello-world-
        spec:
          entrypoint: whalesay-template
          workflowTemplateRef:
            name: cluster-workflow-template-submittable
            clusterScope: true
# to launch
curl -d '{"message":"this is my first webhook"}' -H "Content-Type: application/json" -X POST http://argo-events-51-210-211-4.nip.io/example
# error
{
  "level": "error",
  "ts": 1627655074.716865,
  "logger": "argo-events.sensor-controller",
  "caller": "sensor/controller.go:69",
  "msg": "reconcile error",
  "namespace": "argo-events",
  "sensor": "workflow",
  "error": "template workflow-trigger-1 is invalid: argoWorkflow trigger does not contain an absolute action"
}
References:
special-workflow-trigger.yaml
cluster workflow templates
spec
I had to:
Ensure that my service account operate-workflow-sa had cluster privileges
Correct my sensor.yml syntax (see the corrected spec below)
Cluster privileges:
# kubectl apply -f ./k8s/workflow-service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: argo-events
  name: operate-workflow-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operate-workflow-role
  # namespace: argo-events
rules:
- apiGroups:
  - argoproj.io
  verbs:
  - "*"
  resources:
  - workflows
  - clusterworkflowtemplates
  - workflowtemplates
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: operate-workflow-role-binding
  namespace: argo-events
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: operate-workflow-role
subjects:
- kind: ServiceAccount
  name: operate-workflow-sa
  namespace: argo-events
sensor.yml (note the addition of the serviceAccountName for the workflow too):
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: workflow
  namespace: argo-events
  finalizers:
  - sensor-controller
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
  - name: http-post-trigger
    eventSourceName: webhook
    eventName: example
  triggers:
  # https://github.com/argoproj/argo-events/blob/master/api/sensor.md#triggertemplate
  - template:
      name: workflow-trigger-1
      argoWorkflow:
        # https://github.com/argoproj/argo-events/blob/master/api/sensor.md#argoproj.io/v1alpha1.ArgoWorkflowTrigger
        group: argoproj.io
        version: v1alpha1
        resource: Workflow
        operation: submit
        metadata:
          generateName: cluster-workflow-template-hello-world-
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              name: special-trigger
            spec:
              serviceAccountName: operate-workflow-sa
              entrypoint: whalesay-template
              workflowTemplateRef:
                name: whalesay-template
                clusterScope: true

authenticate session not found in Keycloak-Gatekeeper configuration

I am trying to use Keycloak as my identity provider for accessing the Kubernetes dashboard, and I use keycloak-gatekeeper to authenticate.
My Keycloak Gatekeeper configuration, deployed on my pod pod1, is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: gatekeeper
        image: carlosedp/keycloak-gatekeeper:latest
        args:
        - --config=/etc/keycloak-gatekeeper.conf
        ports:
        - containerPort: 3000
          name: service
        volumeMounts:
        - name: gatekeeper-config
          mountPath: /etc/keycloak-gatekeeper.conf
          subPath: keycloak-gatekeeper.conf
        - name: gatekeeper-files
          mountPath: /html
      volumes:
      - name: gatekeeper-config
        configMap:
          name: gatekeeper-config
      - name: gatekeeper-files
        configMap:
          name: gatekeeper-files
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper-config
  namespace: kubernetes-dashboard
  creationTimestamp: null
data:
  keycloak-gatekeeper.conf: |+
    discovery-url: http://keycloak.<IP>.nip.io:8080/auth/realms/k8s-realm
    skip-openid-provider-tls-verify: true
    client-id: k8s-client
    client-secret: <SECRET>
    listen: 0.0.0.0:3000
    debug: true
    ingress.enabled: true
    enable-refresh-tokens: true
    enable-logging: true
    enable-json-logging: true
    redirection-url: http://k8s.dashboard.com/dashboard/
    secure-cookie: false
    encryption-key: vGcLt8ZUdPX5fXhtLZaPHZkGWHZrT6aa
    enable-encrypted-token: false
    upstream-url: http://127.0.0.0:80
    forbidden-page: /html/access-forbidden.html
    headers:
      Bearer: <bearer token>
    resources:
    - uri: /*
      groups:
      - k8s-group
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper-files
  namespace: kubernetes-dashboard
  creationTimestamp: null
data:
  access-forbidden.html: html file
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
  namespace: kubernetes-dashboard
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: service
  selector:
    app: db
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: db
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
spec:
  rules:
  - host: k8s.dashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: db
            port:
              number: 80
When I access k8s.dashboard.com I get this URL, and it navigates me to the Keycloak page for authentication:
http://keycloak.<IP>.nip.io:8080/auth/realms/k8s-realm/protocol/openid-connect/auth?client_id=k8s-client&redirect_uri=http%3A%2F%2Fk8s.dashboard.com%2Fdashboard%2Foauth%2Fcallback&response_type=code&scope=openid+email+profile&state=23c4b0ff-259f-45c0-934a-98fc780363e6
After logging in to Keycloak, it throws a 404 page, and the URL it redirects to is
http://k8s.dashboard.com/dashboard/oauth/callback?state=23c4b0ff-259f-45c0-934a-98fc780363e6&session_state=4c698f90-4e03-44a9-b231-01a418f0d569&code=9ab6a309-98ad-4d61-989f-116f0b151522.4c698f90-4e03-44a9-b231-01a418f0d569.520395c1-d601-4502-981a-b1c08861ab3d
As you can see, the extra /oauth/callback endpoint is appended after k8s.dashboard.com/dashboard. If I remove /oauth/callback, it redirects me to the Kubernetes dashboard login page.
My pod log file is as follows:
{"level":"info","ts":1626074166.8771496,"msg":"client request","latency":0.000162174,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/favicon.ico"}
{"level":"info","ts":1626074166.9270697,"msg":"client request","latency":0.000054857,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
{"level":"error","ts":1626074176.2642884,"msg":"no session found in request, redirecting for authorization","error":"authentication session not found"}
{"level":"info","ts":1626074176.264481,"msg":"client request","latency":0.000197256,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/"}
{"level":"info","ts":1626074176.2680361,"msg":"client request","latency":0.000041917,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
{"level":"error","ts":1626074185.140641,"msg":"no session found in request, redirecting for authorization","error":"authentication session not found"}
{"level":"info","ts":1626074185.1407247,"msg":"client request","latency":0.000091046,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/"}
{"level":"info","ts":1626074185.1444902,"msg":"client request","latency":0.000042129,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
{"level":"error","ts":1626074202.1827211,"msg":"no session found in request, redirecting for authorization","error":"authentication session not found"}
{"level":"info","ts":1626074202.182838,"msg":"client request","latency":0.000122802,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/favicon.ico"}
{"level":"info","ts":1626074202.1899397,"msg":"client request","latency":0.000032541,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
What is wrong here? Any help will be appreciated!

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD. While creating a deployment with a single nginx pod, which I would like to access on port 80 from a remote server as well as locally, the pod sits in a Pending state: after running kubectl get pods, the pod disappears after around 40 seconds on average and an nginx pod with a different name starts up. This seems to be happening in a loop.
The error is:
W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
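For reference, the RoleBinding that the suggested kubectl create rolebinding command would produce looks roughly like this (a sketch only; ROLEBINDING_NAME, YOUR_NS and YOUR_SA are the placeholders from the warning):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ROLEBINDING_NAME                 # placeholder from the warning
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: YOUR_SA                          # placeholder from the warning
  namespace: YOUR_NS                     # placeholder from the warning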
I was following this article about service accounts and roles:
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created this correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve my error above?
All of my .yaml files for this are below.
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pod should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      # Run this image
      - image: nginx:1.14
        name: nginx
        ports:
        - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          resource:
            kind: nginx-service
            name: nginx-deployment
          #service:
          #  name: nginx
          #  port: 80
          #serviceName: nginx
          #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
- apiGroups: ["example.com"]
  resources: ["ResourceAll"]
  verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
- kind: ServiceAccount
  name: user
  namespace: example

How can I correctly set up custom headers with nginx ingress?

I have the following configuration:
daemonset:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.4.2-alpine
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
main config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-set-headers: "nginx-ingress/custom-headers"
  proxy-connect-timeout: "11s"
  proxy-read-timeout: "12s"
  client-max-body-size: "5m"
  gzip-level: "7"
  use-gzip: "true"
  use-geoip2: "true"
custom headers:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: nginx-ingress
data:
  X-Forwarded-Host-Test: "US"
  X-Using-Nginx-Controller: "true"
  X-Country-Name: "UK"
I am encountering the following situations:
If I change one of "proxy-connect-timeout", "proxy-read-timeout" or "client-max-body-size", I can see the changes appearing in the generated configs in the controller pods
If I change one of "gzip-level" (even tried "use-gzip") or "use-geoip2", I see no changes in the generated configs (eg: "gzip on;" is always commented out and there's no other mention of zip, the gzip level doesn't appear anywhere)
The custom headers from "ingress-nginx/custom-headers" are not added at all (was planning to use them to pass values from geoip2)
Otherwise, all is well, the controller logs show that my only backend (an expressJs app that dumps headers) is server correctly, I get expected responses and so on.
I've copied as much as I could from the examples on github, making a minimum of changes but no results (including when looking at the examples for custom headers).
Any ideas or pointers would be greatly appreciated.
Thanks!
Use ingress rule annotations.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "server: hide";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "X-Frame-Options: DENY";
      more_set_headers "X-Xss-Protection: 1";
spec:
  tls:
  - hosts:
I used nginx server 1.15.9
It looks like you are using kubernetes-ingress from NGINX itself instead of ingress-nginx, which is the community nginx ingress controller.
If you look at the supported ConfigMap keys for kubernetes-ingress, none of the gzip options are supported. If you look at the ConfigMap options for ingress-nginx, you'll see all the gzip keys that can be configured.
Try switching to the community nginx ingress controller.
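For illustration, with the community ingress-nginx controller the equivalent ConfigMap could look roughly like this (a sketch; the ConfigMap name and namespace are assumptions based on a default ingress-nginx install and may differ from yours):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller         # assumed name from a default ingress-nginx install
  namespace: ingress-nginx               # assumed namespace
data:
  use-gzip: "true"
  gzip-level: "7"
  use-geoip2: "true"
  proxy-connect-timeout: "11s"
  proxy-read-timeout: "12s"
  proxy-body-size: "5m"                  # ingress-nginx uses proxy-body-size rather than client-max-body-size
  add-headers: "ingress-nginx/custom-headers"    # ConfigMap of response headers, as namespace/name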
Update:
You can also do it using the configuration-snippet annotation:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-Host-Test: US";
  more_set_headers "X-Using-Nginx-Controller: true";
  more_set_headers "X-Country-Name: UK";
  more_set_headers "Header: Value";
...
For posterity:
nginx community controller => quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
nginx kubernetes controller => nginx/nginx-ingress:edge (as shown in the docs)
custom headers configmap for community => proxy-set-headers: "nginx-ingress/custom-headers"
custom headers configmap for kubernetes => add-headers: "nginx-ingress/custom-headers"
annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    more_set_input_headers "headername: value";
When using Helm for the kubernetes/ingress-nginx installation, set up your custom header, e.g. My-Custom-Header, as:
controller:
  addHeaders:
    X-My-Custom-Header: Allow
This will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
You can access it in logs:
controller:
  log-format-upstream: '{"x-my-custom-header" : "$http_x_my_custom_header"}'
You can create a new ConfigMap with your desired headers and add them to each response coming from your nginx ingress controller that way:
kind: ConfigMap
apiVersion: v1
data:
  X-Content-Type-Options: "..."
  X-Frame-Options: "..."
metadata:
  name: custom-headers
  namespace: your-namespace
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: webauto-nginx-configuration
  namespace: your-namespace
data:
  add-headers: "your-namespace/custom-headers"
  ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webauto-nginx-ingress-controller
  namespace: your-namespace
spec:
  replicas: 2
  ...
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:VERSION_YOU_WANT
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/webauto-nginx-configuration
        ...