AWX-Operator on K8s - Domain Ingress Problem

I installed the AWX Operator on Kubernetes using kustomize.
After configuration, AWX starts correctly and I can access it via:
http://server_ip:30080
Now I'm setting up the YAML files so that I can also reach it through my own domain.
The ingress.yaml file looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: awx-ingress
  name: awx-ingress
  namespace: awx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - someDomain.com
    secretName: awx-secret-tls
  rules:
  - host: someDomain.com
    http:
      paths:
      - backend:
          service:
            name: awx-service
            port:
              number: 80
        path: /
        pathType: Prefix
My main awx.yaml looks like this:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  auto_upgrade: true
  admin_user: admin
  admin_password_secret: awx-admin-password
  ingress_type: ingress
  ingress_tls_secret: awx-secret-tls
  hostname: someDomain.com
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: awx-postgres-volume
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
All I want is to be able to open the AWX GUI via someDomain.com.
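One thing worth checking: with ingress_type: ingress set, the AWX Operator creates and manages an Ingress for the configured hostname itself, so a separately hand-written ingress.yaml may conflict with it. Recent operator versions let you pass annotations through the AWX spec instead; a sketch, assuming the installed operator version supports the ingress_annotations field:

```yaml
# Sketch only - assumes an awx-operator release that supports ingress_annotations.
spec:
  ingress_type: ingress
  hostname: someDomain.com
  ingress_tls_secret: awx-secret-tls
  ingress_annotations: |
    nginx.ingress.kubernetes.io/rewrite-target: /
```

With this, the operator-generated Ingress carries the annotation and no separate Ingress object is needed.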

Related

Cluster-issuer secret won't replicate with multiple ingress

I was under the impression that the main point of a ClusterIssuer is that it's cluster-scoped (not namespaced) and doesn't have to be recreated across different resources; in general, there could be one main ClusterIssuer that manages all Ingresses across the cluster.
From what I am seeing, the ClusterIssuer can only create one secret, and if it's in use by one Ingress, the second one won't be created properly because the name is already taken.
Is there any way to create one ClusterIssuer to manage all Ingresses across the cluster?
Code included below
Cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-grafana
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: foo@gmail.com
    privateKeySecretRef:
      name: letsencrypt-grafana
    solvers:
    - selector:
        dnsZones:
        - "foo.com"
      dns01:
        route53:
          region: eu-central-1
          hostedZoneID: foo
          accessKeyID: foo
          secretAccessKeySecretRef:
            name: aws-route53-creds
            key: password.txt
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: loki
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-grafana
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "125m"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grafana.foo.com
    secretName: letsencrypt-grafana # < cert-manager will store the created certificate in this secret.
  rules:
  - host: grafana.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: loki-grafana
            port:
              number: 80
I would recommend creating a wildcard certificate using an Issuer/ClusterIssuer.
That way you have a single secret holding the wildcard certificate, which you can reference from all Ingresses.
Since you are already using DNS-01 validation this will work well; wildcard certificates do not support HTTP-01 validation.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: test123@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector:
        dnsZones:
        - "devops.example.in"
      dns01:
        route53:
          region: us-east-1
          hostedZoneID: Z0152EXAMPLE
          accessKeyID: AKIA5EXAMPLE
          secretAccessKeySecretRef:
            name: route53-secret
            key: secret-access-key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: Issuer
    name: letsencrypt-prod
  commonName: "*.devops.example.in"
  dnsNames:
  - "*.devops.example.in"
Read my full article: https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2
Ingress & secret example
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: dns01
    certmanager.k8s.io/acme-dns01-provider: route53
  name: ingress-resource-tls
  namespace: default
spec:
  rules:
  - host: "hello.devops.example.in"
    http:
      paths:
      - backend:
          serviceName: hello-app
          servicePort: 8080
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - "hello.devops.example.in"
    secretName: tls-secret
@Harsh Manvar, while I do appreciate your answer, I found something that is a better fit for my needs.
The cert-manager documentation lists multiple options to sync secrets across namespaces.
The one I chose was reflector. The installation steps are included in the documentation, but for the sake of completeness I'll post them here as well.
Requirements: Helm
Installation:
helm repo add emberstack https://emberstack.github.io/helm-charts
helm repo update
helm upgrade --install reflector emberstack/reflector
Setup:
Add the annotation reflector.v1.k8s.emberstack.com/reflection-allowed: "true" to your secret; it should look like the following:
apiVersion: v1
kind: Secret
metadata:
  name: source-secret
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
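A caveat worth noting: as far as I can tell from the reflector README, reflection-allowed on its own only permits mirrors to be requested; for reflector to create the mirror secrets automatically in other namespaces, the auto-reflection annotations are needed as well. A sketch, under that assumption:

```yaml
# Sketch - annotation names taken from the emberstack/reflector README.
apiVersion: v1
kind: Secret
metadata:
  name: source-secret
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    # Omit or leave empty to allow all namespaces; otherwise a comma-separated list.
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: ""
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: ""
```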
Done! Your secret should now be replicated into all namespaces. For multiple Ingress configurations within the same namespace, you can edit your ingress.yaml like this:
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-global
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "125m"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - jenkins.foo.com
    - nginx.foo.com
    secretName: letsencrypt-global # < cert-manager will store the created certificate in this secret.
  rules:
  - host: jenkins.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 80
  - host: nginx.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Generate multiple Kubernetes Ingress with kustomize

So, I have a situation where I need to create 700 (301) redirects for a website. It looks like I cannot do it in a single Ingress object and have to create one Ingress object per redirect, like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirect-001 # should be different
  namespace: XXXXX
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/permanent-redirect: "/redirect-path001" # should be different
spec:
  tls:
  - hosts:
    - "DNS Name"
    secretName: cert
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/path-001" # should be different
        backend:
          service:
            name: my-svc
            port:
              number: 80
    host: "DNS Name"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirect-002 # should be different
  namespace: XXXX
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/permanent-redirect: "/redirect-path002" # should be different
spec:
  tls:
  - hosts:
    - "DNS Name"
    secretName: cert
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/path-002" # should be different
        backend:
          service:
            name: my-svc
            port:
              number: 80
    host: "DNS Name"
So can I use Kustomize to generate these Ingress objects? I want to keep a redirects.yaml with the above content in the base directory, and have kustomization.yaml produce the Ingress objects in overlays/dev/redirects.yaml. What would the contents of my kustomization.yaml be? The only values that need to differ are:
name: "Ingress Name"
path: "/path"
annotations:
  nginx.ingress.kubernetes.io/permanent-redirect: "/redirect-path"
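Kustomize on its own has no generator for stamping out N variants of one resource (you would need one overlay/patch per redirect, or a generator plugin), so a small templating script is a common workaround. A minimal sketch, assuming the path-to-target mapping lives in a redirects.txt file (file and resource names here are made up; the TLS section is omitted for brevity):

```shell
# Sample mapping: request path -> redirect target, one pair per line.
cat > redirects.txt <<'MAP'
/path-001 /redirect-path001
/path-002 /redirect-path002
MAP

# Emit one Ingress per mapping line, numbering the names redirect-001, redirect-002, ...
i=0
while read -r path target; do
  i=$((i + 1))
  name=$(printf 'redirect-%03d' "$i")
  cat <<EOF
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${name}
  namespace: XXXXX
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/permanent-redirect: "${target}"
spec:
  rules:
  - host: "DNS Name"
    http:
      paths:
      - pathType: Prefix
        path: "${path}"
        backend:
          service:
            name: my-svc
            port:
              number: 80
EOF
done < redirects.txt > redirects-generated.yaml

grep -c 'kind: Ingress' redirects-generated.yaml   # prints 2 for the sample mapping
```

The generated redirects-generated.yaml can then be listed under resources: in the kustomization.yaml, or applied directly with kubectl apply -f.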

PathPrefixStrip is ignored on ingress

Traefik version 2.5.6
I have the following ingress settings:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: users
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/app-root: /users
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
  - host: dev.[REDACTED]
    http:
      paths:
      - backend:
          service:
            name: users-service
            port:
              number: 80
        path: /users
        pathType: Prefix
But when I call:
curl -i http://dev.[REDACTED]/users/THIS-SHOULD-BE-ROOT
I get in the pod serving the service:
error: GET /users/THIS-SHOULD-BE-ROOT 404
What can be the reason for that?
Try using a Traefik IngressRoute, as in the example below:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`dev.[REDUCTED]`) && PathPrefix(`/users`)
    kind: Rule
    services:
    - name: users-service
      port: 80
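Note that PathPrefixStrip is a Traefik v1 concept; in Traefik v2 the prefix is stripped with a StripPrefix middleware attached to the route, so an IngressRoute alone still forwards the full /users/... path. A sketch (the middleware name strip-users is made up):

```yaml
# Sketch - Traefik v2 middleware that removes the /users prefix before proxying.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-users
  namespace: default
spec:
  stripPrefix:
    prefixes:
    - /users
---
# In the IngressRoute, reference it from the route:
#   routes:
#   - match: Host(`dev.example.com`) && PathPrefix(`/users`)
#     kind: Rule
#     middlewares:
#     - name: strip-users
#     services:
#     - name: users-service
#       port: 80
```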

How to fix the 404 not found error when to expose service through ingress

I want to expose the web service through Ingress with hostNetwork set to true, but when I try to reach www.example.com/example-api, the response is always a 404 Not Found error.
--- Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 3000
--- service
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - name: example-api
    port: 3000
    targetPort: example-api # 3000
This was because I had not defined the nginx ingress class on the Ingress:
--- Ingress
...
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: example-ingress
...
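On current Kubernetes (v1.18+), the kubernetes.io/ingress.class annotation is deprecated in favor of the ingressClassName field, so the same fix can also be expressed as:

```yaml
# Equivalent fix using the ingressClassName field instead of the annotation.
spec:
  ingressClassName: nginx
```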

Kubernetes Ingress to External Service?

Say I have a service that isn't hosted on Kubernetes. I also have an ingress controller and cert-manager set up on my kubernetes cluster.
Because it's so much simpler and easier to use a Kubernetes Ingress to control access to services, I wanted to have a Kubernetes Ingress that points to a non-Kubernetes service.
For example, I have a service hosted at https://10.0.40.1:5678 (SSL required, but self-signed certificate) and want to access it at service.example.com.
You can do this by manually creating Service and Endpoints objects for your external server.
The objects will look like this:
apiVersion: v1
kind: Service
metadata:
  name: external-ip
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 5678
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-ip
subsets:
- addresses:
  - ip: 10.0.40.1
  ports:
  - name: app
    port: 5678
    protocol: TCP
Then you can create an Ingress object that points to the external-ip Service on port 80:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-service
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: external-ip
          servicePort: 80
        path: /
So I got this working using ingress-nginx to proxy a managed external service over a non-standard port:
apiVersion: v1
kind: Service
metadata:
  name: external-service-expose
  namespace: default
spec:
  type: ExternalName
  externalName: <external-service> # eg example.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-service-expose
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # important
spec:
  rules:
  - host: <some-host-on-your-side> # eg external-service.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service-expose
            port:
              number: <port of external service> # eg 4589
  tls:
  - hosts:
    - external-service.yourdomain.com
    secretName: <tls secret for your domain>
Of course, you need to make sure that the managed URL is reachable from inside the cluster. A simple check is to launch a debug pod and run:
curl -v https://example.example.com:4589
If your external service has a DNS entry configured, you can use a Kubernetes ExternalName service.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: myexternal.http.service.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-name-service
  namespace: prod
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /
This way, Kubernetes creates a CNAME record for my-service pointing to myexternal.http.service.com.
I just want to update @Moulick's answer for Kubernetes v1.21.1, as the Ingress configuration has changed a little bit.
In my example I am using Let's Encrypt with my nginx controller:
apiVersion: v1
kind: Service
metadata:
  name: external-service
  namespace: default
spec:
  type: ExternalName
  externalName: <some-host-on-your-side> # eg managed.yourdomain.com
  ports:
  - port: <port of external service> # eg 4589
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: external-service
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # important
spec:
  tls:
  - hosts:
    - <some-host-on-your-side> # eg managed.yourdomain.com
    secretName: tls-external-service
  rules:
  - host: <some-host-on-your-side> # eg managed.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port:
              number: <port of external service> # eg 4589