Kiali Ingress resource not created after Kiali deployment - kubernetes

I deployed the kiali-operator Helm chart in the kiali-operator namespace and Istio in the istio-system namespace. Now I am trying to deploy the Kiali workload in istio-system.
But somehow the Ingress rule was not created. I have attached the Kiali deployment YAML for reference.
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  istio_labels:
    app_label_name: "app.kubernetes.io/name"
  installation_tag: "kiali"
  istio_namespace: "istio-system"
  version: "default"
  auth:
    strategy: token
  custom_dashboards:
  - name: "envoy"
  deployment:
    accessible_namespaces: ["**"]
    ingress:
      # default: additional_labels is empty
      # additional_labels:
      #   ingressAdditionalLabel: "ingressAdditionalLabelValue"
      class_name: "nginx"
      # default: enabled is undefined
      enabled: true
      # default: override_yaml is undefined
      override_yaml:
        metadata:
          annotations:
            kubernetes.io/ingress.class: "nginx"
            nginx.ingress.kubernetes.io/secure-backends: "true"
            nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        spec:
          rules:
          - http:
              paths:
              - path: "/kiali"
                pathType: Prefix
                backend:
                  service:
                    name: "kiali"
                    port:
                      number: 20001
    instance_name: "kiali"
  external_services:
    custom_dashboards:
      enabled: true
    istio:
      component_status:
        components:
        - app_label: "istiod"
          is_core: true
          is_proxy: false
        - app_label: "istio-ingressgateway"
          is_core: true
          is_proxy: true
          # default: namespace is undefined
          namespace: istio-system
        - app_label: "istio-egressgateway"
          is_core: false
          is_proxy: true
          # default: namespace is undefined
          namespace: istio-system
        enabled: true
      config_map_name: "istio"
      envoy_admin_local_port: 15000
      # default: istio_canary_revision is undefined
      istio_canary_revision:
        current: "1-9-9"
        upgrade: "1-10-2"
      istio_identity_domain: "svc.cluster.local"
      istio_injection_annotation: "sidecar.istio.io/inject"
      istio_sidecar_annotation: "sidecar.istio.io/status"
      istio_sidecar_injector_config_map_name: "istio-sidecar-injector"
      istiod_deployment_name: "istiod"
      istiod_pod_monitoring_port: 15014
      root_namespace: ""
      url_service_version: ""
    prometheus:
      # Prometheus service name is "metrics" and is in the "telemetry" namespace
      url: "<prome_url>"
    grafana:
      auth:
        ca_file: ""
        insecure_skip_verify: false
        password: "password"
        token: ""
        type: "basic"
        use_kiali_token: false
        username: "user"
      enabled: true
      # Grafana service name is "grafana" and is in the "telemetry" namespace.
      in_cluster_url: '<grafana_url>'
      url: '<grafana_url>'
    tracing:
      enabled: true
      in_cluster_url: '<jaeger-url>'
      use_grpc: true
Thanks in advance for any help!
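One way to narrow this down (a diagnostic sketch; the resource and deployment names assume the chart defaults used above) is to check whether the operator created the Ingress at all and what it logged while reconciling the Kiali CR:
# was an Ingress created for Kiali?
kubectl get ingress -n istio-system
# reconciliation status and events recorded on the Kiali CR
kubectl describe kiali kiali -n istio-system
# operator logs, filtered for ingress-related messages
kubectl logs -n kiali-operator deploy/kiali-operator | grep -i ingress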

Related

Example Nginx plus ingress for sticky sessions during canary deployment

I’m deploying 2 services to Kubernetes pods which simply echo a version number: echo-v1 and echo-v2.
echo-v2 is considered the canary deployment, and I can demonstrate sticky sessions as the canary weight is reconfigured from 0 to 100 using the canary and canary-weight annotations.
2 ingresses are used:
The first routes to echo-v1 with a session cookie annotation.
The second routes to echo-v2 with the canary, canary-weight and session cookie annotations.
I can apply the second ingress without impacting sessions started on the first ingress, and new sessions follow the canary weighting as expected.
However, I’ve since learned that those annotations are for the NGINX community controller and won’t work with NGINX Plus.
How can I achieve the same using ingress(es) with NGINX Plus?
This is the ingress configuration that works for me using NGINX community vs NGINX Plus.
Nginx community:
(coffee-v1 service)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
  name: ingress-coffee
spec:
  rules:
  - http:
      paths:
      - path: /coffee
        pathType: Exact
        backend:
          service:
            name: coffee-v1
            port:
              number: 80
(coffee-v2 'canary' service)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
  name: ingress-coffee-canary
spec:
  rules:
  - http:
      paths:
      - path: /coffee
        pathType: Exact
        backend:
          service:
            name: coffee-v2
            port:
              number: 80
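With the community controller, the weighting can then be shifted in place by updating the annotation on the canary ingress, for example (a sketch using the annotation already shown above):
kubectl annotate ingress ingress-coffee-canary \
  nginx.ingress.kubernetes.io/canary-weight="50" --overwrite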
Nginx plus:
(coffee-v1 & coffee-v2 as type 'virtualserver' not 'ingress')
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cloudbees-training.group.net
  tls:
    secret: cloudbees-trn.aks.group.net-tls
  upstreams:
  - name: coffee-v1
    service: coffee-v1-svc
    port: 80
    sessionCookie:
      enable: true
      name: srv_id_v1
      path: /coffee
      expires: 2h
  - name: coffee-v2
    service: coffee-v2-svc
    port: 80
    sessionCookie:
      enable: true
      name: srv_id_v2
      path: /coffee
      expires: 2h
  routes:
  - path: /coffee
    matches:
    - conditions:
      - cookie: srv_id_v1
        value: ~*
      action:
        pass: coffee-v1
    - conditions:
      - cookie: srv_id_v2
        value: ~*
      action:
        pass: coffee-v2
    # 3 options to handle new sessions below:
    #
    # 1) All new sessions to v1:
    # action:
    #   pass: coffee-v1
    #
    # 2) All new sessions to v2:
    # action:
    #   pass: coffee-v2
    #
    # 3) Split new sessions by weight
    # Note: 0,100 / 100,0 weightings causes sessions
    # to drop for the 0 weighted service:
    # splits:
    # - weight: 50
    #   action:
    #     pass: coffee-v1
    # - weight: 50
    #   action:
    #     pass: coffee-v2
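For reference, option 3 from the comments written out as a complete route (a sketch: the cookie matches keep existing sessions pinned to their version, and the splits apply only to new sessions that match neither cookie):
routes:
- path: /coffee
  matches:
  - conditions:
    - cookie: srv_id_v1
      value: ~*
    action:
      pass: coffee-v1
  - conditions:
    - cookie: srv_id_v2
      value: ~*
    action:
      pass: coffee-v2
  # new sessions are split 50/50 between the two upstreams
  splits:
  - weight: 50
    action:
      pass: coffee-v1
  - weight: 50
    action:
      pass: coffee-v2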

Error: connect ECONNREFUSED 127.0.0.1:443 http://ingress-nginx-controller.ingress-nginx.svc.cluster.local

kubectl get namespace
NAME              STATUS   AGE
default           Active   3h33m
ingress-nginx     Active   3h11m
kube-node-lease   Active   3h33m
kube-public       Active   3h33m
kube-system       Active   3h33m
kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.102.205.190   localhost     80:31378/TCP,443:31888/TCP   3h12m
ingress-nginx-controller-admission   ClusterIP      10.103.97.209    <none>        443/TCP                      3h12m
When I make the request from the Next.js getInitialProps to http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser, it throws Error: connect ECONNREFUSED 127.0.0.1:443.
LandingPage.getInitialProps = async () => {
  if (typeof window === "undefined") {
    const { data } = await axios.get(
      "http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser",
      {
        headers: {
          Host: "ticketing.dev",
        },
      }
    );
    return data;
  } else {
    const { data } = await axios.get("/api/users/currentuser");
    return data;
  }
};
My auth.deply.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: sajeebxn/auth
        env:
        - name: MONGO_URI
          value: 'mongodb://tickets-mongo-srv:27017/auth'
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
And my ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - ticketing.dev
    # secretName: e-ticket-secret
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: client-srv
            port:
              number: 3000
Try using http://ingress-nginx-controller.ingress-nginx/api/users/currentuser.
This worked for me
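A minimal sketch of the adjusted server-side call, assuming the Host header is still needed so the ingress rules above match:
LandingPage.getInitialProps = async () => {
  if (typeof window === "undefined") {
    // server side: call the ingress controller via its namespace-qualified
    // service name, as suggested above
    const { data } = await axios.get(
      "http://ingress-nginx-controller.ingress-nginx/api/users/currentuser",
      { headers: { Host: "ticketing.dev" } }
    );
    return data;
  }
  // browser: a relative URL goes through the same ingress
  const { data } = await axios.get("/api/users/currentuser");
  return data;
};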

In Grafana I am getting a "400 Bad Request Client sent an HTTP request to an HTTPS server" when trying to update datasource configmaps

In Grafana I notice that when I deploy a ConfigMap that should add a datasource, it makes no change and does not add the new datasource - note that the ConfigMap is in the cluster and in the correct namespace.
If I make a change to the ConfigMap, I get the following error in the logs of the grafana-sc-datasources container:
POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 400 Bad Request Client sent an HTTP request to an HTTPS server.
I assume I do not see any changes because it cannot make the POST request.
I played around a bit and at one point I did see changes being made/updated in the datasources:
I changed the protocol to http under grafana.ini: / server: / protocol: and I was NOT able to open the Grafana website, but I did notice that if I made a change to a datasource ConfigMap in the cluster, I would see a successful 200 message in the logs of the grafana-sc-datasources container: POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 200 OK {"message":"Datasources config reloaded"}.
So I assume I just need to know how to get the datasource sidecar to send the POST request over https instead of http.
Can someone point me to what might be wrong and how to fix it?
Note that I am pretty new to K8s, Grafana and Helm charts.
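One way to confirm that only the scheme is the problem (a diagnostic sketch; the pod name and admin password are placeholders, the password being the one in the existing grafana secret referenced below, and curl may need to be swapped for wget depending on the image):
# hit the reload endpoint over HTTPS from inside the Grafana container;
# -k skips verification of the self-signed certificate
kubectl exec -it <grafana-pod> -c grafana -- \
  curl -k -u admin:<admin-password> -X POST \
  https://localhost:3000/api/admin/provisioning/datasources/reload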
Here is a configmap that I am trying to get to work:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jaeger-${NACKLE_ENV}-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  jaeger-datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Jaeger-${NACKLE_ENV}
      type: jaeger
      access: browser
      url: http://jaeger-${NACKLE_ENV}-query.${NACKLE_ENV}.svc.cluster.local:16690
      version: 1
      basicAuth: false
Here is the current Grafana values file:
# use 1 replica when using a StatefulSet
# If we need more than 1 replica, then we'll have to:
#   - remove the `persistence` section below
#   - use an external database for all replicas to connect to (refer to Grafana Helm chart docs)
replicas: 1
image:
  pullSecrets:
  - docker-hub
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: eks.amazonaws.com/capacityType
          operator: In
          values:
          - ON_DEMAND
persistence:
  enabled: true
  type: statefulset
  storageClassName: biw-durable-gp2
podDisruptionBudget:
  maxUnavailable: 1
admin:
  existingSecret: grafana
sidecar:
  datasources:
    enabled: true
    label: grafana_datasource
  dashboards:
    enabled: true
    label: grafana_dashboard
    labelValue: 1
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    node-exporter:
      gnetId: 1860
      revision: 23
      datasource: Prometheus
    core-dns:
      gnetId: 12539
      revision: 5
      datasource: Prometheus
    fluentd:
      gnetId: 7752
      revision: 6
      datasource: Prometheus
ingress:
  apiVersion: networking.k8s.io/v1
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/api/health'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Redirect to HTTPS at the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  spec:
    rules:
    - http:
        paths:
        - path: /*
          pathType: ImplementationSpecific
          backend:
            service:
              name: ssl-redirect
              port:
                name: use-annotation
    defaultBackend:
      service:
        name: grafana
        port:
          number: 80
livenessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }
readinessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" } }
service:
  type: NodePort
  name: grafana
rolePrefix: app-role
env: eks-test
serviceAccount:
  name: grafana
  annotations:
    eks.amazonaws.com/role-arn: ""
pod:
  spec:
    serviceAccountName: grafana
grafana.ini:
  server:
    # don't use enforce_domain - it causes an infinite redirect in our setup
    # enforce_domain: true
    enable_gzip: true
    # NOTE - if I set the protocol to http I do see it make changes to datasources but I can not see the website
    protocol: https
    cert_file: /biw-cert/domain.crt
    cert_key: /biw-cert/domain.key
  users:
    auto_assign_org_role: Editor
  # https://grafana.com/docs/grafana/v6.5/auth/gitlab/
  auth.gitlab:
    enabled: true
    allow_sign_up: true
    org_role: Editor
    scopes: read_api
    auth_url: https://gitlab.biw-services.com/oauth/authorize
    token_url: https://gitlab.biw-services.com/oauth/token
    api_url: https://gitlab.biw-services.com/api/v4
    allowed_groups: nackle-teams/devops
securityContext:
  fsGroup: 472
  runAsUser: 472
  runAsGroup: 472
extraConfigmapMounts:
- name: "cert-configmap"
  mountPath: "/biw-cert"
  subPath: ""
  configMap: biw-grafana-cert
  readOnly: true
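If the Grafana chart version in use exposes them, the sidecar's reload call can be pointed at HTTPS through values along these lines; the key names below (reloadURL, skipTlsVerify) are assumptions to verify against that chart version's values.yaml:
sidecar:
  skipTlsVerify: true   # assumed key; the certificate mounted above is self-signed
  datasources:
    enabled: true
    label: grafana_datasource
    reloadURL: "https://localhost:3000/api/admin/provisioning/datasources/reload"   # assumed key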

Spin up a local database with Kubernetes and Rancher

How do you change the CockroachDB YAML configuration from the Rancher catalog so that it will just:
Use local disk (testing on a local computer)
Use just 1GB of disk space (should be enough for testing)
Here's the complete YAML
clusterDomain: cluster.local
conf:
  attrs: []
  cache: 25%
  cluster-name: ''
  disable-cluster-name-verification: false
  http-port: 8080
  join: []
  locality: ''
  logtostderr: INFO
  max-disk-temp-storage: 0
  max-offset: 500ms
  max-sql-memory: 25%
  port: 26257
  single-node: false
  sql-audit-dir: ''
image:
  credentials: {}
  pullPolicy: IfNotPresent
  repository: cockroachdb/cockroach
  tag: v20.1.3
ingress:
  annotations: {}
  enabled: false
  hosts: []
  labels: {}
  paths:
  - /
  tls: []
init:
  affinity: {}
  annotations: {}
  labels:
    app.kubernetes.io/component: init
  nodeSelector: {}
  resources: {}
  tolerations: []
labels: {}
networkPolicy:
  enabled: false
  ingress:
    grpc: []
    http: []
service:
  discovery:
    annotations: {}
    labels:
      app.kubernetes.io/component: cockroachdb
  ports:
    grpc:
      external:
        name: grpc
        port: 26257
      internal:
        name: grpc-internal
        port: 26257
    http:
      name: http
      port: 8080
  public:
    annotations: {}
    labels:
      app.kubernetes.io/component: cockroachdb
    type: ClusterIP
statefulset:
  annotations: {}
  args: []
  budget:
    maxUnavailable: 1
  env: []
  labels:
    app.kubernetes.io/component: cockroachdb
  nodeAffinity: {}
  nodeSelector: {}
  podAffinity: {}
  podAntiAffinity:
    type: soft
    weight: 100
  podManagementPolicy: Parallel
  priorityClassName: ''
  replicas: 3
  resources: {}
  secretMounts: []
  tolerations: []
  updateStrategy:
    type: RollingUpdate
storage:
  hostPath: ''
  persistentVolume:
    annotations: {}
    enabled: true
    labels: {}
    size: 100Gi
    storageClass: ''
tls:
  certs:
    clientRootSecret: cockroachdb-root
    nodeSecret: cockroachdb-node
    provided: false
    tlsSecret: false
  enabled: false
  init:
    image:
      credentials: {}
      pullPolicy: IfNotPresent
      repository: cockroachdb/cockroach-k8s-request-cert
      tag: '0.4'
  serviceAccount:
    create: true
    name: ''
Storage: 100Gi
Storage Size
It looks like Storage and storage.persistentVolume.size can both be set to 1Gi if you are looking for one gigabyte of storage.
Local Storage
Then, I would check if you have a storage class by running kubectl get storageclass. Many times, clusters come with the local-path-provisioner storage class. If you have that, I would try setting storage.persistentVolume.storageClass to the name of the local-path storage class installed on your system. If you don't have that or an alternative, I would consider installing it.
More Info
I'm not certain, but it's possible that it is using this Helm chart to install the database. This section of the chart deals with the volume claim management, so I would look here if you need to do more troubleshooting: https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/templates/statefulset.yaml#L376
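A minimal sketch of the value overrides described above (assuming a local-path storage class is installed; the class name may differ on your cluster):
storage:
  persistentVolume:
    size: 1Gi                 # one gigabyte instead of the default 100Gi
    storageClass: local-path  # assumed name of the local-path-provisioner class
Storage: 1Gi                  # the top-level field shown at the bottom of the values above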

Kubernetes Ingress looking for secret in wrong place

I have the Keycloak chart (https://codecentric.github.io/helm-charts), where I configured the ingress to use my secret for certificates, but instead it is looking in the wrong place:
W0830 15:05:12.330745 7 controller.go:1387] Error getting SSL certificate "default/tls-keycloak-czv9g": local SSL certificate default/tls-keycloak-czv9g was not found
Here is what the chart values look like:
keycloak:
  basepath: auth/
  username: admin
  password: password
  route:
    tls:
      enabled: true
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
    - name: KEYCLOAK_IMPORT
      value: /keycloak/master-realm.json
    - name: JAVA_OPTS
      value: >-
        -Djboss.socket.binding.port-offset=1000
  extraVolumes: |
    - name: realm-secret
      secret:
        secretName: realm-secret
  extraVolumeMounts: |
    - name: realm-secret
      mountPath: "/keycloak/"
      readOnly: true
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      cert-manager.io/cluster-issuer: "keycloak-issuer"
    path: /auth/?(.*)
    hosts:
    - keycloak.localtest.me
    tls:
    - hosts:
      - keycloak.localtest.me
      secretName: tls-keycloak-czv9g
That is what I see from the console:
$ kubectl get secret
NAME                      TYPE                                  DATA   AGE
default-token-lbt48       kubernetes.io/service-account-token   3      22m
keycloak-admin-password   Opaque                                1      15m
keycloak-realm-secret     Opaque                                1      15m
tls-keycloak-czv9g        Opaque                                1      15m
$ kubectl describe secrets/tls-keycloak-czv9g
Name:         tls-keycloak-czv9g
Namespace:    default
Labels:       cert-manager.io/next-private-key=true
Annotations:  <none>

Type:  Opaque

Data
====
tls.key:  1704 bytes
Why is the ingress looking in the wrong place?
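One diagnostic sketch, assuming cert-manager is meant to issue the certificate into that secret (the Certificate name below is a placeholder): check whether issuance actually completed, since the ingress controller will keep reporting a missing certificate until it does.
kubectl get certificate,certificaterequest -n default
kubectl describe certificate <certificate-name> -n default
kubectl describe clusterissuer keycloak-issuer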