I'm new to Kubernetes and Helm and want to set up SSO with OIDC using vouch-proxy.
I found a tutorial that explains how to do it and was able to write some helmfiles that Kubernetes accepted.
I added the ingress configuration to the values.yaml that I load in my helmfile.yaml.
helmfile.yaml
bases:
  - environments.yaml
---
releases:
  - name: "vouch"
    chart: "halkeye/vouch"
    version: {{ .Environment.Values.version }}
    namespace: {{ .Environment.Values.namespace }}
    values:
      - values.yaml
values.yaml
# vouch config
# bare minimum to get vouch running with OpenID Connect (such as okta)
config:
  vouch:
    some:
      other:
        values:
# important part
ingress:
  enabled: true
  hosts:
    - "vouch.minikube"
  paths:
    - /
With this configuration helmfile creates an Ingress for the correct host, but when I open the URL in my browser it returns a 404 Not Found, which makes sense, since I didn't specify the correct port (9090).
I tried several notations for adding the port, but they led either to helmfile not updating the pod or to 500 Internal Server errors.
How can I add the port in this configuration? And is this the "correct" way to do it, or should ingresses still be handled with kubectl?
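For context on where the port has to end up: the rendered Ingress object points at a Service port in its backend. A minimal sketch with the networking.k8s.io/v1 API (the service name and port number here are assumptions for illustration, not necessarily what the halkeye/vouch chart renders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vouch
spec:
  rules:
    - host: vouch.minikube
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vouch      # assumed Service name
                port:
                  number: 9090   # the Service port fronting vouch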
I'm messing around with Kubernetes and I've set up a cluster on my local PC using kind. I have also installed Traefik as an ingress controller, and I have already managed to access an API that I deployed in the cluster, and a Grafana instance, through ingresses (without doing port forwards or anything like that). But with Mongo I can't. While the API and Grafana need an IngressRoute, Mongo needs an IngressRouteTCP.
The IngressRouteTCP that I have defined is this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
    - web
  routes:
    - match: HostSNI(`mongo.localhost`)
      services:
        - name: mongodb
          port: 27017
But I get this error:
I know I can use a port forward, but I want to do it this way (with an ingress).
Thanks a lot!
You need to specify TLS parameters, like this:
tls:
  certResolver: "bar"
  domains:
    - main: "snitest.com"
      sans:
        - "*.snitest.com"
To avoid using TLS, you need to match all routes with HostSNI(`*`).
It is important to note that the Server Name Indication is an extension of the TLS protocol.
Hence, only TLS routers will be able to specify a domain name with that rule.
However, there is one special use case for HostSNI with non-TLS routers: when one wants a non-TLS router that matches all (non-TLS) requests, one should use the specific HostSNI(`*`) syntax.
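Applied to the manifest above, a minimal non-TLS sketch; this assumes a dedicated TCP entry point for MongoDB (here called mongo) in Traefik's static configuration, since without SNI there is no hostname to route on and the entry point does the selecting:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
    - mongo                 # assumed dedicated TCP entry point
  routes:
    - match: HostSNI(`*`)   # the only match possible without TLS
      services:
        - name: mongodb
          port: 27017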
DOCS
I have just set up a Kubernetes cluster on bare metal using kubeadm, Flannel, and MetalLB. The next step for me is to install ArgoCD.
I installed the ArgoCD YAML from the "Getting Started" page and logged in.
When adding my Git repositories ArgoCD gives me very weird error messages:
The error message seems to suggest that ArgoCD for some reason is resolving github.com to my public IP address (I am not exposing SSH, therefore connection refused).
I cannot find any reason why it would do this. When using https:// instead of SSH I get the same result, but on port 443.
I have put a dummy pod in the same namespace as ArgoCD and made some DNS queries. These queries resolved correctly.
What makes ArgoCD think that github.com resolves to my public IP address?
EDIT:
I have also checked for network policies in the argocd namespace and found no policy that was restricting egress.
I have had this working on clusters in the same network previously and have not changed my router firewall since then.
I solved my problem!
My /etc/resolv.conf had two lines that caused trouble:
domain <my domain>
search <my domain>
These lines were put there as a step in the installation of my host machine's OS, and I did not realize they would affect me in this way. Presumably the resolver appended the search domain, so github.com was looked up as github.com.<my domain>, which resolved to my public IP. After removing these lines, everything is now working perfectly.
Multiple people told me to check resolv.conf, but I didn't realize what these two lines did until now.
That looks like argoproj/argo-cd issue 1510, where the initial diagnosis was that the cluster was blocking outbound connections to GitHub, and it was suggested to check the egress configuration.
Yet, the issue was resolved with an ingress rule configuration:
You need to define it in values.yaml.
By default argo-cd is served from a subdomain, but in our case it was served from /argocd:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  path: /argocd
  hosts:
    - www.example.com
and I have defined this under templates >> argocd-server-deployment.yaml:
containers:
  - name: argocd-server
    image: {{ .Values.server.image.repository }}:{{ .Values.server.image.tag }}
    imagePullPolicy: {{ .Values.server.image.pullPolicy }}
    command:
      - argocd-server
      - --staticassets
      - /shared/app
      - --repo-server
      - argocd-repo-server:8081
      - --insecure
      - --basehref
      - /argocd
The same case includes an instance very similar to yours:
In any case, do check your git configuration (git config -l) as seen from the ArgoCD cluster, to look for any insteadOf entry (e.g. url.<local mirror>.insteadOf=https://github.com/) which would automatically rewrite github.com into a local URL (as seen here).
I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and refer to it across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize, as sketched below.
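For example, an overlay kustomization.yaml can reuse a shared base spec under a different name (the paths, prefix, and label values here are illustrative assumptions):
# overlays/team-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # base holds the common Deployment/Service spec
namePrefix: team-a-       # prepended to every resource name from the base
commonLabels:
  app.kubernetes.io/instance: team-a
A second overlay that sets namePrefix: team-b- then produces the same spec under different names.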
Use Helm. In ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with helm, using values.yaml for the dynamic values you want to set. Also, you will need to add a kubeconfig file to your pipeline, but that is a separate issue.
This is the best advice I can give; for anything more specific I would need to inspect your ArgoCD setup.
I was faced with this problem and I resolved it using Helm 3 charts:
A Chart.yaml file where I declare my release name and version.
A values.yaml where I define all the variables to use for a specific environment.
A values-test.yaml file to use, for example, in a test environment, where you only put the variables that must change from one environment to another. A small sketch follows.
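For instance (all names and values illustrative), the chart's values.yaml holds the defaults and values-test.yaml only overrides what differs, e.g. helm install myapp ./mychart -f values-test.yaml (the chart's own values.yaml is applied by default and -f files override it):
# values.yaml (defaults for every environment)
replicaCount: 3
ingress:
  host: app.example.com
# values-test.yaml (test-only overrides)
replicaCount: 1
ingress:
  host: app.test.example.com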
I hope that helps you resolve your issue.
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699); this can be very annoying, especially for multi-environment setups. Fortunately, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template that generates your Vault application:
values.yaml
server: 1.2.3.4 # target Kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
              paths:
                - /
          ha:
            enabled: true
            config: |
              ui = true
        ...
These values will then override any base configuration residing in the values.yaml specified by {{ .Values.vault.repoURL }} which can contain config which doesn't change for each environment.
I am trying to deploy the kube-prometheus-stack.
I have added it as a dependency in the Chart.yaml as below.
...
dependencies:
  - name: kube-prometheus-stack
    version: 13.4.1
    repository: https://prometheus-community.github.io/helm-charts
...
I have also configured an ingress rule to route the /grafana/?(.*) path to the service solutions-helm-grafana at port 80.
- path: /grafana/?(.*)
  pathType: Prefix
  backend:
    service:
      name: helm-grafana
      port:
        number: 80
However, when I try to open /grafana/ in the browser, it returns a 404 after redirecting to /login. What templates do I need to add to deploy this successfully? Are there any examples I can refer to?
Hi Moses, can you try removing the ?(.*) from the path?
A 404 comes up when the ingress is not registered with the ingress controller,
probably because the release has not been deployed successfully.
Try the following steps to debug the issue:
Check whether the pods have been deployed using kubectl get pods.
Try debugging the ingress object: kubectl describe ing <ing_object_name>.
Check whether the Endpoints have been created using kubectl get ep.
Next, check the service using kubectl get service.
Use a throwaway pod to curl the service above and check whether Grafana is being served through it (a sketch of such a pod follows).
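A minimal debug pod sketch (the image choice is an illustrative assumption; note that busybox ships wget rather than curl, so curlimages/curl is used here):
apiVersion: v1
kind: Pod
metadata:
  name: debug-curl
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.5.0
      # keep the pod alive so we can `kubectl exec` into it
      command: ["sleep", "3600"]
Then kubectl exec -it debug-curl -- curl -v http://<grafana-service>.<namespace> shows whether the service answers.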
Update:
Add the following configuration to serve Grafana on a subpath:
env:
  GF_SERVER_DOMAIN: <domain>
  GF_SERVER_ROOT_URL: https://<domain>/grafana/
  GF_SERVER_SERVE_FROM_SUB_PATH: true
and use this path in the ingress:
path: /grafana/
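Applied to the ingress rule from the question, the path entry then becomes (service name and port carried over from the question):
- path: /grafana/
  pathType: Prefix
  backend:
    service:
      name: helm-grafana
      port:
        number: 80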
Sources:
Run Grafana behind reverse-proxy
Grafana configuration root_url
What field should I add to the service/ingress YAML so that I can reach the service from another pod in the same cluster using its associated (external) hostname specified in the ingress?
I'm using microk8s with the default ingress class (nginx), and I need a solution that works on any Kubernetes platform (Azure, GKE, AKS).
I need to reach my authentication server (Keycloak) from my Node.js application using the ingress hostname. I can't use the service name, because token validation would fail (JWT iss checking).
Thanks!
Based on this SO post, this can be done using Helm custom values and hostAliases.
A Helm-templated solution to the original question; I tested this with Helm 3.
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- with .Values.hostAliases }}
      hostAliases:
{{ toYaml . | indent 8 }}
      {{- end }}
For values such as:
hostAliases:
  - ip: "10.0.0.1"
    hostnames:
      - "host.domain.com"
If hostAliases is omitted or commented out in the values, the hostAliases section is omitted when the template is rendered.
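Note that hostAliases only adds entries to the pod's /etc/hosts, so the IP you map the ingress hostname to must actually reach the ingress controller (for example the controller's Service IP or a load-balancer address); traffic then flows through the ingress exactly as it would from outside the cluster.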