Ansible variable conversion to int is ignored - kubernetes

With Ansible, I want to find which port is available in a range on a K8s cluster and use this port to expose a service temporarily.
I'm able to find and extract the port, but when I declare the NodePort using that port the task fails.
It seems that Ansible is not converting my "port" variable to an int with the expression {{ port|int }}.
- block:
    - name: List all ports in range 32200 to 32220
      wait_for:
        port: "{{ item|int }}"
        timeout: 1
        state: stopped
        msg: "Port {{ item }} is already in use"
      register: available_ports
      with_sequence: start=32200 end=32220
      ignore_errors: yes

    - name: extract first unused port from list
      set_fact:
        port: "{{ available_ports.results | json_query(\"[? state=='stopped'].port\") | first }}"

    - debug:
        var: port

    - name: Expose service as a nodeport service
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: "{{ namespace }}-service-nodeport"
            namespace: "{{ namespace }}"
          spec:
            type: NodePort
            selector:
              component: my-app
            ports:
              - protocol: TCP
                targetPort: 5432
                nodePort: "{{ port|int }}"
                port: 5432
This outputs the following:
TASK [../roles/my-role : debug] ***************************************************************************************************************************************************************************************************
ok: [127.0.0.1] => {
"port": "32380"
}
TASK [../roles/my-role : Expose service as a nodeport service] *******************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "error": 400, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Service in version \\\\\"v1\\\\\" cannot be handled as a Service: v1.Service.Spec: v1.ServiceSpec.Ports: []v1.ServicePort: v1.ServicePort.NodePort: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|dePort\\\\\": \\\\\"32380\\\\\", \\\\\"p|..., bigger context ...|rotocol\\\\\": \\\\\"TCP\\\\\", \\\\\"targetPort\\\\\": 5432, \\\\\"nodePort\\\\\": \\\\\"32380\\\\\", \\\\\"port\\\\\": 5432}]}}|...\",\"reason\":\"BadRequest\",\"code\":400}\\n'", "reason": "Bad Request", "status": 400}
If I set the nodePort to a fixed value such as 32800, it works.
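For context (not part of the original question): Ansible's Jinja2 templating normally returns strings, so "{{ port|int }}" inside quotes still reaches the k8s module as the string "32380". A minimal sketch of one common workaround, assuming Ansible 2.7+ with native Jinja2 types enabled (for example ANSIBLE_JINJA2_NATIVE=true), in which case the cast survives templating and an integer is sent to the API:
# Sketch only: with jinja2_native enabled, this template resolves to an integer
ports:
  - protocol: TCP
    targetPort: 5432
    nodePort: "{{ port | int }}"
    port: 5432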

Related

How to access Redis as a k8s service with NestJS TypeORM's cache server option?

I'd like to deploy my Kubernetes setup with a NestJS backend server and Redis.
To split the user service out of the NestJS core service, I would like to run the user service as a Kubernetes service, and also run the Redis cache used by the user service's database as a service in Kubernetes.
To do that, I set up the user service's database config module like this.
import { Module } from '@nestjs/common'
import { TypeOrmModule, TypeOrmModuleAsyncOptions, TypeOrmModuleOptions } from '@nestjs/typeorm'
import { SnakeNamingStrategy } from 'typeorm-naming-strategies'

let DATABASE_NAME = 'test'
if (process.env.NODE_ENV) {
  DATABASE_NAME = `${DATABASE_NAME}_${process.env.NODE_ENV}`
}

const DB_HOST: string = process.env.DB_HOST ?? 'localhost'
const DB_USERNAME: string = process.env.DB_USERNAME ?? 'user'
const DB_PASSWORD: string = process.env.DB_PASSWORD ?? 'password'
const REDIS_HOST: string = process.env.REDIS_HOST ?? 'localhost'

const databaseConfig: TypeOrmModuleAsyncOptions = {
  useFactory: (): TypeOrmModuleOptions => ({
    type: 'mysql',
    host: DB_HOST,
    port: 3306,
    username: DB_USERNAME,
    password: DB_PASSWORD,
    database: DATABASE_NAME,
    autoLoadEntities: true,
    synchronize: true,
    namingStrategy: new SnakeNamingStrategy(),
    logging: false,
    cache: {
      type: 'redis',
      options: {
        host: REDIS_HOST,
        port: 6379,
      },
    },
    timezone: '+09:00',
  }),
}

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      ...databaseConfig,
    }),
  ],
})
export class DatabaseModule {}
And, to implement this on Kubernetes, I used Helm.
Helm's template folders are as follows.
- configmap
- deployment
- pod
- service
The files under those folders are as follows.
// configmap/redis.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis-config: |
    maxmemory 20mb
    maxmemory-policy allkeys-lru
// deployment/user_service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
  namespace: default
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: user-service
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - image: {{ .Values.user_service.image }}:{{ .Values.user_service_version }}
          imagePullPolicy: Always
          name: user-service
          ports:
            - containerPort: 50051
              protocol: TCP
          env:
            - name: COGNITO_CLIENT_ID
              value: "some value"
            - name: COGNITO_USER_POOL_ID
              value: "some value"
            - name: DB_HOST
              value: "some value"
            - name: DB_PASSWORD
              value: "some value"
            - name: DB_USERNAME
              value: "some value"
            - name: NODE_ENV
              value: "test"
            - name: REDIS_HOST
              value: "10.100.77.0"
// pod/redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
    - name: redis
      image: redis:latest
      command:
        - redis-server
        - "/redis-master/redis.conf"
      env:
        - name: MASTER
          value: "true"
      ports:
        - containerPort: 6379
          name: redis
      volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-config
        items:
          - key: redis-config
            path: redis.conf
// service/user_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  clusterIP: 10.100.88.0
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 50051
      targetPort: 50051
// service/redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: 10.100.77.0
  selector:
    app: redis
  ports:
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: 6379
With the above YAML files, I installed a Helm chart named test.
After installing, the result of kubectl get svc,po,deploy,configmap is as follows.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 4d4h
service/user-service ClusterIP 10.100.88.0 <none> 50051/TCP 6s
service/redis ClusterIP 10.100.77.0 <none> 6379/TCP 6s
NAME READY STATUS RESTARTS AGE
pod/user-service-78548d4d8f-psbr2 0/1 ContainerCreating 0 6s
pod/redis 0/1 ContainerCreating 0 6s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/user-service 0/1 1 0 6s
NAME DATA AGE
configmap/kube-root-ca.crt 1 4d4h
configmap/redis-config 1 6s
But when I checked the user-service deployment's logs, this error occurred.
[Nest] 1 - 02/07/2023, 7:15:32 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
Error: connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
I also checked through console logging that the REDIS_HOST environment variable is 10.100.77.0 in the database config of user-service, yet as shown above the error refers to localhost.
Is there any error in the part I set?
You can use the Service to connect to Redis. To do this, use redis.redis as REDIS_HOST in your application.
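As a hedged illustration of that answer (it uses the <service>.<namespace> form; the manifests above appear to live in the default namespace, so the exact name here is an assumption), the key point is to set REDIS_HOST to the Service's DNS name rather than a hard-coded ClusterIP:
// deployment/user_service.yaml (sketch of the env entry only)
env:
  - name: REDIS_HOST
    # <service>.<namespace>.svc.cluster.local; plain "redis" also resolves
    # when the client pod and the Service share a namespace.
    value: "redis.default.svc.cluster.local"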

Python gRPC client does not connect to Golang gRPC server in a different pod

I have a Python client running in the POD-A pod trying to connect to a Golang server running in the POD-B pod and I am getting this error:
"<_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.UNAVAILABLE\n\tdetails = \"DNS resolution failed for service: pod-b\"\n\tdebug_error_string = \"{\"created\":\"#1649433162.980011551\",\"description\":\"Resolver transient failure\",\"file\":\"src/core/ext/filters/client_channel/client_channel.cc\",\"file_line\":1357,\"referenced_errors\":[{\"created\":\"#1649433162.979997474\",\"description\":\"DNS resolution failed for service: pod-b\",\"file\":\"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc\",\"file_line\":359,\"grpc_status\":14,\"referenced_errors\":[{\"created\":\"#1649433162.979938651\",\"description\":\"C-ares status is not ARES_SUCCESS qtype=A name=pod-b is_balancer=0: Timeout while contacting DNS servers\",\"file\":\"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc\",\"file_line\":724}]}]}\"\n>"
The Python client code is:
channel = grpc.insecure_channel("pod-b")
stub = campaign_service_pb2_grpc.CampaignServiceStub(channel)
request = campaign_service_pb2.CreateCampaignRequest(amount=12345)
response = stub.CreateCampaign(request)
return response.id
The Golang server code is:
// Server starts a new gRPC Server
func New(conf config.Config, services *services.Services, logger config.Logger) (*Server, error) {
	flag.Parse()
	conn, err := net.Listen("tcp", fmt.Sprintf(conf.GRPC.Host+":%d", conf.GRPC.Port)) // host:127.0.0.1 port:50051
	if err != nil {
		return nil, errors.Wrap(err, "failed to listen")
	}
	server := Server{
		conn:     conn,
		logger:   logger,
		services: services,
	}
	s := grpc.NewServer()
	pb.RegisterCampaignServiceServer(s, &server)
	server.s = s
	return &server, nil
}
docker-compose.yaml of the server:
version: "3"
services:
  pod-b:
    ports:
      - "7007:8080"
      - "50051:50051"
    build:
      context: .
      args:
        - DEPLOY_KEY=${DEPLOY_KEY}
    depends_on:
      - crdb
    environment:
      - BUDGET_MANAGER_COCKROACHDB_HOST=${BUDGET_MANAGER_COCKROACHDB_HOST}
      - BUDGET_MANAGER_COCKROACHDB_PORT=${BUDGET_MANAGER_COCKROACHDB_PORT}
      - BUDGET_MANAGER_COCKROACHDB_USER=${BUDGET_MANAGER_COCKROACHDB_USER}
      - BUDGET_MANAGER_COCKROACHDB_PASSWORD=${BUDGET_MANAGER_COCKROACHDB_PASSWORD}
      - BUDGET_MANAGER_COCKROACHDB_DB=${BUDGET_MANAGER_COCKROACHDB_DB}
      - BUDGET_MANAGER_COCKROACHDB_MIGRATE=${BUDGET_MANAGER_COCKROACHDB_MIGRATE}
      - BUDGET_MANAGER_COCKROACHDB_SSL=${BUDGET_MANAGER_COCKROACHDB_SSL}
      - BUDGET_MANAGER_COCKROACHDB_KEY=${BUDGET_MANAGER_COCKROACHDB_KEY}
      - BUDGET_MANAGER_COCKROACHDB_CERT=${BUDGET_MANAGER_COCKROACHDB_CERT}
  crdb:
    image: cockroachdb/cockroach:v21.2.5
    ports:
      - "26257:26257"
      - "8081:8080"
    command: start-single-node --insecure
service.yaml of the server:
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.name }}"
  labels:
    app: "{{ .Values.name }}"
    monitor: "true"
spec:
  type: NodePort
  ports:
    - port: {{ .Values.servicePort }}
      name: http
      targetPort: {{ .Values.port }}
    - port: {{ .Values.grpc.servicePort }}
      name: grpc
      targetPort: {{ .Values.grpc.port }}
  selector:
    app: "{{ .Values.name }}"
values.yaml of the server:
deployment:
  image: pod-b
  tag: 0.1.0
name: pod-b
replicas: 2
resources:
  requests:
    memory: 50Mi
    cpu: 50m
  limits:
    memory: 300Mi
    cpu: 150m
http:
  port: 8080
  servicePort: 80
grpc:
  port: 50051
  servicePort: 50051
secret:
  create: false
monitoring: "apps-metrics"
env:
  config:
    environment: staging
    port: 8080
  cockroachdb:
    host: something
    port: 26257
    user: something
    db: something
    migrate: true
    ssl: true
    key: something
    cert: something
Does anyone know what is going on?
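Two details stand out, offered only as a hedged sketch rather than a confirmed diagnosis: the Go server listens on 127.0.0.1, which is reachable only from inside its own pod, and the Python client dials "pod-b" with no port. Assuming the Helm chart creates a Service named pod-b in the same namespace, the client side might be configured like this (GRPC_SERVER_ADDR is an illustrative name, not from the original code):
# Sketch: point the client at the server's Service DNS name and gRPC port
env:
  - name: GRPC_SERVER_ADDR            # hypothetical variable name
    value: "pod-b.default.svc.cluster.local:50051"
# On the server side, binding to 0.0.0.0:50051 instead of 127.0.0.1:50051
# is what makes the port reachable from other pods.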

Not connecting gRPC request using AWS NLB with ACM cert

I have a Kubernetes cluster serving gRPC through an Envoy proxy: all gRPC and web requests hit Envoy and are passed to the backend. The Envoy Service is exposed through an NLB, and an ACM certificate is attached to the NLB. Without the certificate on the NLB, requests reach the backend correctly and get a response, but I need to use nlb-url:443. As soon as I attach the ACM cert to the NLB, I get no response at all. Why?
Or do I need to use another Ingress to handle SSL and routing?
envoy-svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:12345676789:certificate/ss304s07-3ss2-4s73-8744-bs2sss123460
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  creationTimestamp: "2021-06-11T02:50:24Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  name: envoy-service
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 31156
      port: 443
      protocol: TCP
      targetPort: 80
  selector:
    name: envoy
  sessionAffinity: None
  type: LoadBalancer
envoy-conf
admin:
  access_log_path: /dev/stdout
  address:
    socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 80
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http2_protocol_options: {}
                access_log:
                  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
                  #
                  # You can also configure this extension with the qualified
                  # name envoy.access_loggers.http_grpc
                  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
                  - name: envoy.access_loggers.file
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      # Console output
                      path: /dev/stdout
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: /
                            grpc:
                          route:
                            cluster: greeter_service
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        # custom-header-1 is just an example. the grpc-web
                        # repository was missing grpc-status-details-bin header
                        # which used in a richer error model.
                        # https://grpc.io/docs/guides/error/#richer-error-model
                        allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type
                        expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
                        max_age: "1728000"
                http_filters:
                  - name: envoy.filters.http.grpc_web
                    # This line is optional, but adds clarity to the configuration.
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.cors
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                  - name: envoy.filters.http.grpc_json_transcoder
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
                      proto_descriptor: "/etc/envoy-sync/sync.pb"
                      ignore_unknown_query_parameters: true
                      services:
                        - "com.tk.system.sync.Synchronizer"
                      print_options:
                        add_whitespace: true
                        always_print_primitive_fields: true
                        always_print_enums_as_ints: true
                        preserve_proto_field_names: true
                  - name: envoy.filters.http.router
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
    - name: greeter_service
      type: LOGICAL_DNS
      connect_timeout: 0.25s
      lb_policy: round_robin
      load_assignment:
        cluster_name: greeter_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: micro-deployment
                      port_value: 8081
      http2_protocol_options: {} # Force HTTP/2
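One hedged observation, not from the original post: when an ACM certificate is attached via aws-load-balancer-ssl-cert, it usually has to be paired with aws-load-balancer-ssl-ports so the cloud provider knows which listener terminates TLS, and gRPC additionally requires HTTP/2 end to end, so whatever sits behind the TLS listener must still speak HTTP/2. A minimal annotation sketch under those assumptions:
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:123456789012:certificate/example   # placeholder ARN
    # Terminate TLS only on the 443 listener; other ports stay plain TCP.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"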

OPA (running as a separate host-level service) policies are not getting enforced via Envoy while calling the API

My scenario: one example-app service is exposed through a NodePort in a K8s cluster.
The service has one Envoy sidecar. OPA is running as a separate service on the same node.
My app-deployment specs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      initContainers:
        - name: proxy-init
          image: openpolicyagent/proxy_init:v5
          # Configure the iptables bootstrap script to redirect traffic to the
          # Envoy proxy on port 8000, specify that Envoy will be running as user
          # 1111, and that we want to exclude port 8282 from the proxy for the
          # OPA health checks. These values must match up with the configuration
          # defined below for the "envoy" and "opa" containers.
          args: ["-p", "8000", "-u", "1111"]
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
            runAsNonRoot: false
            runAsUser: 0
      containers:
        - name: app
          image: openpolicyagent/demo-test-server:v1
          ports:
            - containerPort: 8080
        - name: envoy
          image: envoyproxy/envoy:v1.14.4
          securityContext:
            runAsUser: 1111
          volumeMounts:
            - readOnly: true
              mountPath: /config
              name: proxy-config
          args:
            - "envoy"
            - "--config-path"
            - "/config/envoy.yaml"
      volumes:
        - name: proxy-config
          configMap:
            name: proxy-config
My Envoy spec (loaded through a ConfigMap):
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 8000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: service
                http_filters:
                  - name: envoy.filters.http.ext_authz
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
                      grpc_service:
                        envoy_grpc:
                          cluster_name: opa
                        timeout: 0.250s
                  - name: envoy.filters.http.router
                    typed_config: {}
  clusters:
    - name: service
      connect_timeout: 0.25s
      type: strict_dns
      lb_policy: round_robin
      load_assignment:
        cluster_name: service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 8080
    - name: opa
      connect_timeout: 0.250s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      http2_protocol_options: {}
      load_assignment:
        cluster_name: opa
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: opa
                      port_value: 9191
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
OPA deployment specs (exposed through nodeport as a service):-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opa
  labels:
    app: opa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opa
  template:
    metadata:
      labels:
        app: opa
      name: opa
    spec:
      containers:
        - name: opa
          image: openpolicyagent/opa:latest-envoy
          ports:
            - name: http
              containerPort: 8181
          securityContext:
            runAsUser: 1111
          args:
            - "run"
            - "--server"
            - "--log-level=info"
            - "--log-format=json-pretty"
            - "--set=decision_logs.console=true"
            - "--set=plugins.envoy_ext_authz_grpc.addr=:9191"
            - "--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow"
            - "--ignore=.*"
            - "/policy/policy.rego"
          volumeMounts:
            - readOnly: true
              mountPath: /policy
              name: opa-policy
      volumes:
        - name: opa-policy
          configMap:
            name: opa-policy
Sample policy.rego (loaded through configmap)-
package envoy.authz

import input.attributes.request.http as http_request

default allow = false

token = {"valid": valid, "payload": payload} {
    [_, encoded] := split(http_request.headers.authorization, " ")
    [valid, _, payload] := io.jwt.decode_verify(encoded, {"secret": "secret"})
}

allow {
    is_token_valid
    action_allowed
}

is_token_valid {
    token.valid
    now := time.now_ns() / 1000000000
    token.payload.nbf <= now
    now < token.payload.exp
}

action_allowed {
    http_request.method == "GET"
    token.payload.role == "guest"
    glob.match("/people*", [], http_request.path)
}

action_allowed {
    http_request.method == "GET"
    token.payload.role == "admin"
    glob.match("/people*", [], http_request.path)
}

action_allowed {
    http_request.method == "POST"
    token.payload.role == "admin"
    glob.match("/people", [], http_request.path)
    lower(input.parsed_body.firstname) != base64url.decode(token.payload.sub)
}
While calling the REST API, I get the response below:
curl -i -H "Authorization: Bearer $ALICE_TOKEN" http://$SERVICE_URL/people
HTTP/1.1 403 Forbidden
date: Sat, 13 Feb 2021 08:50:29 GMT
server: envoy
content-length: 0
OPA decision logs -
{
  "addrs": [
    ":8181"
  ],
  "diagnostic-addrs": [],
  "level": "info",
  "msg": "Initializing server.",
  "time": "2021-02-13T08:48:27Z"
}
{
  "level": "info",
  "msg": "Starting decision logger.",
  "plugin": "decision_logs",
  "time": "2021-02-13T08:48:27Z"
}
{
  "addr": ":9191",
  "dry-run": false,
  "enable-reflection": false,
  "level": "info",
  "msg": "Starting gRPC server.",
  "path": "",
  "query": "data.envoy.authz.allow",
  "time": "2021-02-13T08:48:27Z"
}
What am I doing wrong here? There is nothing in the decision logs for my REST call to the API. The OPA policy should be invoked through the Envoy filter, but that is not happening.
Please help.
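A hedged guess based only on the manifests shown (the OPA Service itself is not included in the question, so this is an assumption): Envoy's opa cluster dials the DNS name opa on port 9191, but the OPA Deployment only declares containerPort 8181 and OPA runs outside the app pod, so the ext_authz traffic depends entirely on a Service named opa exposing the gRPC plugin port. A sketch of such a Service:
apiVersion: v1
kind: Service
metadata:
  name: opa            # must match the address used by Envoy's "opa" cluster
spec:
  selector:
    app: opa
  ports:
    - name: http
      port: 8181
      targetPort: 8181
    - name: grpc       # ext_authz gRPC plugin started with --set=plugins.envoy_ext_authz_grpc.addr=:9191
      port: 9191
      targetPort: 9191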

How to set up a custom HTTP error in Kubernetes

I want to create a custom 403 error page.
Currently I already have an Ingress created and in the annotations I have something like this:
"nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,88.100.01.01"
So any attempt to access my web app outside that IP range receives a 403 error.
In order to create a custom page I tried adding the following annotations:
"nginx.ingress.kubernetes.io/custom-http-errors": "403",
"nginx.ingress.kubernetes.io/default-backend": "default-http-backend"
where default-http-backend is the name of an app already deployed.
The Ingress has this:
{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "my-app-ingress",
    "namespace": "my-app-test",
    "selfLink": "/apis/extensions/v1beta1/namespaces/my-app-test/ingresses/my-app-ingress",
    "uid": "8f31f2b4-428d-11ea-b15a-ee0dcf00d5a8",
    "resourceVersion": "129105581",
    "generation": 3,
    "creationTimestamp": "2020-01-29T11:50:34Z",
    "annotations": {
      "kubernetes.io/ingress.class": "nginx",
      "nginx.ingress.kubernetes.io/custom-http-errors": "403",
      "nginx.ingress.kubernetes.io/default-backend": "default-http-backend",
      "nginx.ingress.kubernetes.io/rewrite-target": "/",
      "nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,90.108.01.012"
    }
  },
  "spec": {
    "tls": [
      {
        "hosts": [
          "my-app-test.retail-azure.js-devops.co.uk"
        ],
        "secretName": "ssl-secret"
      }
    ],
    "rules": [
      {
        "host": "my-app-test.retail-azure.js-devops.co.uk",
        "http": {
          "paths": [
            {
              "path": "/api",
              "backend": {
                "serviceName": "my-app-backend",
                "servicePort": 80
              }
            },
            {
              "path": "/",
              "backend": {
                "serviceName": "my-app-frontend",
                "servicePort": 80
              }
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {}
      ]
    }
  }
}
Yet I always get the default 403.
What am I missing?
I've reproduced your scenario and it worked for me.
I will guide you through the steps I followed.
Cloud provider: GKE
Kubernetes Version: v1.15.3
Namespace: default
I'm using 2 deployments of 2 images with a service for each one.
Service 1: default-http-backend - with nginx image, it will be our default backend.
Service 2: custom-http-backend - with the inanimate/echo-server image; this service will be displayed if the request comes from a whitelisted IP.
Ingress: Nginx ingress with annotations.
Expected behavior: The ingress will be configured to use the default-backend, custom-http-errors and whitelist-source-range annotations. If the request was made from a whitelisted IP, the ingress will route to custom-http-backend; if not, it will be redirected to default-http-backend.
Deployment 1: default-http-backend
Create a file default-http-backend.yaml with this content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
        - name: default-http-backend
          image: nginx
          ports:
            - name: http
              containerPort: 80
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Apply the yaml file: k apply -f default-http-backend.yaml
Deployment 2: custom-http-backend
Create a file custom-http-backend.yaml with this content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-http-backend
spec:
  selector:
    matchLabels:
      app: custom-http-backend
  template:
    metadata:
      labels:
        app: custom-http-backend
    spec:
      containers:
        - name: custom-http-backend
          image: inanimate/echo-server
          ports:
            - name: http
              containerPort: 8080
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend
spec:
  selector:
    app: custom-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Apply the yaml file: k apply -f custom-http-backend.yaml
Check that the services are up and running
I'm using the alias k for kubectl
➜ ~ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
custom-http-backend ClusterIP 10.125.5.227 <none> 80/TCP 73s
default-http-backend ClusterIP 10.125.9.218 <none> 80/TCP 5m41s
...
➜ ~ k get pods
NAME READY STATUS RESTARTS AGE
custom-http-backend-67844fb65d-k2mwl 1/1 Running 0 2m10s
default-http-backend-5485f569bd-fkd6f 1/1 Running 0 6m39s
...
You could test the service using port-forward:
default-http-backend
k port-forward svc/default-http-backend 8080:80
Try to access http://localhost:8080 in your browser to see the nginx default page.
custom-http-backend
k port-forward svc/custom-http-backend 8080:80
Try to access http://localhost:8080 in your browser to see the custom page provided by the echo-server image.
Ingress configuration
At this point we have both services up and running, and we need to install and configure the nginx ingress. You can follow the official documentation; this will not be covered here.
After installing it, let's deploy the ingress. Based on the code you posted I made some modifications: removed tls, used another domain, removed the path /api (for test purposes only) and added my home IP to the whitelist.
Create a file my-app-ingress.yaml with the content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '403'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
    nginx.ingress.kubernetes.io/whitelist-source-range: 207.34.xxx.xx/32
spec:
  rules:
    - host: myapp.rabello.me
      http:
        paths:
          - path: "/"
            backend:
              serviceName: custom-http-backend
              servicePort: 80
Apply the spec: k apply -f my-app-ingress.yaml
Check the ingress with the command:
➜ ~ k get ing
NAME HOSTS ADDRESS PORTS AGE
my-app-ingress myapp.rabello.me 146.148.xx.xxx 80 36m
That's all!
If I test from home with my whitelisted IP, the custom page is shown, but if I try to access it using my cellphone on a 4G network, the nginx default page is displayed.
Note I'm using the ingress and services in the same namespace; if you need to work with a different namespace you need to use an ExternalName Service.
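For reference, a minimal sketch of that ExternalName approach (the backends namespace below is purely illustrative, not from this answer):
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend          # local name the Ingress can reference
  namespace: default
spec:
  type: ExternalName
  externalName: custom-http-backend.backends.svc.cluster.local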
I hope that helps!
References:
kubernetes deployments
kubernetes service
nginx ingress
nginx annotations
I want to create a custom 403 error page. Currently I already have an Ingress created with annotations.
So any attempt to access my web app outside that IP range receives a 403 error.
In order to create a custom page I tried adding the following annotations:
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '403'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
    nginx.ingress.kubernetes.io/whitelist-source-range: 125.10.156.36/32
spec:
  rules:
    - host: venkat.dev.vboffice.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: custom-http-backend
              servicePort: 80
where default-http-backend is the name of an app already deployed with the default nginx page.
If I test from home with my whitelisted IP, the custom page is shown, but if I try to access it using my cellphone on a 4G network, it displays the default backend 404.
Do I need to add any nginx config change to the custom-http-backend pod?
Deployment 1: default-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
        - name: default-http-backend
          image: nginx
          ports:
            - name: http
              containerPort: 80
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Deployment 2: custom-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-http-backend
spec:
  selector:
    matchLabels:
      app: custom-http-backend
  template:
    metadata:
      labels:
        app: custom-http-backend
    spec:
      containers:
        - name: custom-http-backend
          image: inanimate/echo-server
          ports:
            - name: http
              containerPort: 8080
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend
spec:
  selector:
    app: custom-http-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
One can customize the 403 error page for ingress-nginx (/etc/nginx/template) just by editing the nginx.tmpl file and then mounting it into the ingress-nginx controller deployment. Below is the part of nginx.tmpl that needs to be edited:
{{/* Build server redirects (from/to www) */}}
{{ range $redirect := .RedirectServers }}
## start server {{ $redirect.From }}
server {
    server_name {{ $redirect.From }};

    {{ buildHTTPListener $all $redirect.From }}
    {{ buildHTTPSListener $all $redirect.From }}

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    error_page 403 /403.html;

    {{ if gt (len $cfg.BlockUserAgents) 0 }}
    if ($block_ua) {
        return 403;
    }
    {{ end }}
    {{ if gt (len $cfg.BlockReferers) 0 }}
    if ($block_ref) {
        return 403;
    }
    {{ end }}

    location = /403.html {
        root /usr/local/nginx/html/;
        internal;
    }

    set_by_lua_block $redirect_to {
        local request_uri = ngx.var.request_uri
        if string.sub(request_uri, -1) == "/" then
            request_uri = string.sub(request_uri, 1, -2)
        end
        {{ if ne $all.ListenPorts.HTTPS 443 }}
        {{ $redirect_port := (printf ":%v" $all.ListenPorts.HTTPS) }}
        return string.format("%s://%s%s%s", ngx.var.scheme, "{{ $redirect.To }}", "{{ $redirect_port }}", request_uri)
        {{ else }}
        return string.format("%s://%s%s", ngx.var.scheme, "{{ $redirect.To }}", request_uri)
        {{ end }}
    }

    return {{ $all.Cfg.HTTPRedirectCode }} $redirect_to;
}
## end server {{ $redirect.From }}
{{ end }}

{{ range $server := $servers }}
## start server {{ $server.Hostname }}
server {
    server_name {{ buildServerName $server.Hostname }} {{range $server.Aliases }}{{ . }} {{ end }};

    error_page 403 /403.html;

    {{ if gt (len $cfg.BlockUserAgents) 0 }}
    if ($block_ua) {
        return 403;
    }
    {{ end }}
    {{ if gt (len $cfg.BlockReferers) 0 }}
    if ($block_ref) {
        return 403;
    }
    {{ end }}

    location = /403.html {
        root /usr/local/nginx/html/;
        internal;
    }

    {{ template "SERVER" serverConfig $all $server }}

    {{ if not (empty $cfg.ServerSnippet) }}
    # Custom code snippet configured in the configuration configmap
    {{ $cfg.ServerSnippet }}
    {{ end }}

    {{ template "CUSTOM_ERRORS" (buildCustomErrorDeps "upstream-default-backend" $cfg.CustomHTTPErrors $all.EnableMetrics) }}
}
## end server {{ $server.Hostname }}
{{ end }}
In the above snippet, error_page 403 /403.html; is declared before we return 403. Then the location of /403.html is defined. The root path is the same place where one should mount the 403.html page; in this case it is /usr/local/nginx/html/.
The snippet below will help you mount the volume with the custom pages.
volumes:
  - name: custom-errors
    configMap:
      # Provide the name of the ConfigMap you want to mount.
      name: custom-ingress-pages
      items:
        - key: "404.html"
          path: "404.html"
        - key: "403.html"
          path: "403.html"
        - key: "50x.html"
          path: "50x.html"
        - key: "index.html"
          path: "index.html"
This solution doesn't require you to spawn another/extra service or pod of any kind to work.
For more info: https://engineering.zenduty.com/blog/2022/03/02/customizing-error-pages
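For completeness, a hedged sketch of the pieces the snippet above assumes but does not show — the ConfigMap holding the custom pages and the matching volumeMounts on the controller container (names and the HTML body are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ingress-pages       # referenced by the "custom-errors" volume above
data:
  403.html: |
    <html><body><h1>403 - Access denied</h1></body></html>
---
# volumeMounts on the ingress-nginx controller container, matching the
# root path used in the edited nginx.tmpl
volumeMounts:
  - name: custom-errors
    mountPath: /usr/local/nginx/html/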
You need to create and deploy a custom default backend which will return a custom error page. Follow the doc to deploy a custom default backend and configure the nginx ingress controller by modifying its deployment YAML to use this custom default backend.
The deployment yaml for the custom default backend is here and the source code is here.
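As a hedged illustration of what "modifying the deployment YAML" typically means here: the ingress-nginx controller accepts a --default-backend-service flag pointing at the custom backend Service (the namespace, Service name and image tag below are placeholders):
# args of the ingress-nginx controller container (sketch)
containers:
  - name: controller
    image: registry.k8s.io/ingress-nginx/controller:v1.8.1   # placeholder tag
    args:
      - /nginx-ingress-controller
      - --default-backend-service=ingress-nginx/custom-default-backend   # <namespace>/<service>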