I am trying to create a MonitoringNotificationChannel using Config Connector in GCP (Kubernetes)

I want to create a MonitoringNotificationChannel in GCP to send alerts to Opsgenie, so we are using the webhook URL provided by the Opsgenie channel:
apiVersion: monitoring.cnrm.cloud.google.com/v1beta1
kind: MonitoringNotificationChannel
metadata:
  name: monitoringnotificationchannel-webhook_tokenauth
spec:
  type: webhook_tokenauth
  # The spec.labels field below is for configuring the desired behaviour of the notification channel.
  # It does not apply labels to the resource in the cluster.
  labels:
    description: Sends notifications to indicated webhook URL using HTTP-standard basic authentication. Should be used in conjunction with SSL/TLS to reduce the risk of attackers snooping the credentials.
  sensitiveLabels:
    authToken:
      valueFrom:
        secretKeyRef:
          key: url
          name: quota
  enabled: true
After applying this, the channel's labels come back as null.
We want to reference the Opsgenie URL from sensitiveLabels.
Format of the Opsgenie URL: https://api.opsgenie.com/v1/json/googlestackdriver?apiKey=xxxxxxxxxxx
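For reference, a minimal sketch of how the fields are usually arranged for this channel type (the label key, secret name, and secret key below are assumptions, not values from this cluster): the webhook URL is a plain label under spec.labels.url, while sensitiveLabels.authToken is reserved for the token value itself.

```yaml
apiVersion: monitoring.cnrm.cloud.google.com/v1beta1
kind: MonitoringNotificationChannel
metadata:
  name: opsgenie-webhook
spec:
  type: webhook_tokenauth
  labels:
    # Non-sensitive channel configuration goes under labels.
    url: "https://api.opsgenie.com/v1/json/googlestackdriver"
  sensitiveLabels:
    authToken:
      valueFrom:
        secretKeyRef:
          # Assumed Secret holding only the Opsgenie apiKey value.
          name: opsgenie-api-key
          key: token
  enabled: true
```

Note that sensitiveLabels only covers authToken, password, and serviceKey; since Opsgenie passes the apiKey as a URL query parameter rather than an auth token header, storing the full URL (including ?apiKey=...) via sensitiveLabels may not behave as expected.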
Docs
https://cloud.google.com/config-connector/docs/reference/resource-docs/monitoring/monitoringnotificationchannel

Related

Integrating SSO for Argo Workflows using Keycloak

I have a requirement to integrate SSO for Argo Workflows, and for this we have made the necessary changes in quick-start-postgres.yaml.
Here is the YAML file we are using to start Argo locally:
https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
Below are the sections we are modifying to support SSO integration.
Deployment section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
      - args:
        - server
        - --namespaced
        - --auth-mode=sso
workflow-controller-configmap section:
apiVersion: v1
data:
  sso: |
    # This is the root URL of the OIDC provider (required).
    issuer: http://localhost:8080/auth/realms/master
    # This is the name of the secret and the key in it that contain the OIDC client
    # ID issued to the application by the provider (required).
    clientId:
      name: dummyClient
      key: client-id
    # This is the name of the secret and the key in it that contain the OIDC client
    # secret issued to the application by the provider (required).
    clientSecret:
      name: jdgcFxs26SdxdpH9Z5L33QCFAmGYTzQB
      key: client-secret
    # This is the redirect URL supplied to the provider (required). It must
    # be in the form <argo-server-root-url>/oauth2/callback. It must be
    # browser-accessible.
    redirectUrl: http://localhost:2746/oauth2/callback
  artifactRepository: |
    s3:
      bucket: my-bucket
We are starting Argo by issuing the two commands below:
kubectl apply -n argo -f modified-file/quick-start-postgres.yaml
kubectl -n argo port-forward svc/argo-server 2746:2746
After executing the above commands and trying to log in with single sign-on, it is not redirected to the Keycloak login page. Instead it is redirected to https://localhost:2746/oauth2/redirect?redirect=https://localhost:2746/workflows
This page isn't working. localhost is currently unable to handle this request.
HTTP ERROR 501
What could be the issue here? Are we missing anything?
Are there arguments that need to be passed when starting Argo?
Can someone please suggest something on this?
Try adding --auth-mode=client to your argo-server container args.
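That is, in the Deployment from the question, the args would look roughly like this (a sketch; argo-server accepts the --auth-mode flag multiple times to enable several modes at once):

```yaml
containers:
- args:
  - server
  - --namespaced
  - --auth-mode=sso
  - --auth-mode=client   # repeat the flag to enable an additional auth mode
```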

Argo Events webhook authentication with GitHub

I'm trying to integrate a GitHub repo with the Argo Events webhook event source, following the example (link). When GitHub delivers the configured event, it returns an error:
'Invalid Authorization Header'
Code:
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: ci-pipeline-webhook
spec:
  service:
    ports:
    - port: 12000
      targetPort: 12000
  webhook:
    start-pipeline:
      port: "12000"
      endpoint: /start-pipeline
      method: POST
      authSecret:
        name: my-webhook-token
        key: my-token
If you want to use a secure GitHub webhook as an event source, you will need to use the GitHub event source type. GitHub webhooks send a special authorization header, X-Hub-Signature/X-Hub-Signature-256, that contains a hashed value derived from the webhook secret. The "regular" webhook event source instead expects a standard bearer token in an authorization header of the form "Authorization: Bearer <webhook-secret>".
You can read more about GitHub webhook delivery headers here, and then compare that to the Argo Events webhook event source authentication documentation here.
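To make the mismatch concrete: GitHub never sends the secret itself. It derives X-Hub-Signature-256 by computing an HMAC-SHA256 of the request body keyed with the webhook secret, which a bearer-token check cannot validate. A sketch of that computation (the secret and payload values here are made up):

```python
import hashlib
import hmac


def github_signature(secret: bytes, payload: bytes) -> str:
    """Compute the X-Hub-Signature-256 header value GitHub would send."""
    mac = hmac.new(secret, payload, hashlib.sha256)
    return "sha256=" + mac.hexdigest()


def verify(secret: bytes, payload: bytes, header: str) -> bool:
    """Check a received header against a locally recomputed signature."""
    expected = github_signature(secret, payload)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, header)
```

Receivers that understand this scheme (like the GitHub event source type) recompute the HMAC over the body and compare, rather than looking for "Authorization: Bearer ...".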
There are basically two options when creating the GitHub webhook event source:
1. Provide GitHub API credentials in a Kubernetes secret so Argo Events can make the API call to GitHub to create the webhook on your behalf.
2. Omit the GitHub API credentials in the EventSource spec and create the webhook yourself, either manually or through whichever means you normally create webhooks (Terraform, scripted API calls, etc.).
Here is an example for the second option:
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github-events
  namespace: my-namespace
spec:
  service:
    ports:
    - name: http
      port: 12000
      targetPort: 12000
  github:
    default:
      owner: my-github-org-or-username
      repository: my-github-repo-name
      webhook:
        url: https://my-argo-events-server-fqdn
        endpoint: /push
        port: "12000"
        method: POST
      events:
      - "*"
      webhookSecret:
        name: my-secret-name
        key: my-secret-key
      insecure: false
      active: true
      contentType: "json"

Encrypting secret to read GitHub source in Flux

In my Kubernetes cloud I have FluxCD to manage all components. FluxCD uses SOPS to decrypt all the passwords. This results in a declaration like this:
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: load-balancer-controller
  namespace: flux-system
spec:
  interval: 1m
  ref:
    branch: main
  url: https://github.com/fantasyaccount/load-balancer-controller.git
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: load-balancer-controller
  namespace: flux-system
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg
  interval: 1m
  path: "./deployment"
  prune: true
  sourceRef:
    kind: GitRepository
    name: load-balancer-controller
Within the load-balancer-controller repo I can use SOPS-encrypted secrets; that is clear to me.
However, is it possible to use SOPS as well for encrypting the secret token that grants access to the repo itself? I know I can use kubectl create secret ... to add the secret token to Kubernetes, but that is not what I want. I would like to use a SOPS-encrypted token here as well.
The challenge in encrypting the secret for the initial GitRepository is then defining what the cluster provisioning process would look like, as this represents a bit of a chicken-and-egg problem.
One way I can see this working is to install Flux with a source that supports contextual authentication, such as Bucket. With that, you could store in an S3 bucket the encrypted Git secret, the GitRepository pointing at the current repository, and the Kustomization that applies them to your cluster.
Here's more information about the contextual authentication for EKS:
https://fluxcd.io/docs/components/source/buckets/#aws-ec2-example
Just note that with this approach, your cluster deployment pipeline would have to store your GPG key, as you would need to deploy that secret before (or soon after) you install Flux into the cluster.
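A sketch of what such a bootstrap Bucket source could look like (the bucket name, region, and metadata names are assumptions):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: Bucket
metadata:
  name: flux-bootstrap
  namespace: flux-system
spec:
  interval: 5m
  provider: aws          # enables contextual (IAM-based) authentication on EKS
  bucketName: my-bootstrap-bucket
  endpoint: s3.amazonaws.com
  region: us-east-1
```

A Kustomization pointing at this Bucket can then apply the encrypted Git secret and the real GitRepository, after which Flux reconciles from Git as usual.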

Encrypting Secret Data at Rest in Kubernetes AKS?

I am unable to figure out how to change my kube-apiserver. The current version I am using on Azure AKS is 1.13.7.
Below is what I need to change in the kube-apiserver.
The kube-apiserver process accepts an argument --encryption-provider-config that controls how API data is encrypted in etcd.
Additionally, I am unable to find the kube-apiserver itself.
YAML file, formatted:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - identity: {}
      - aesgcm:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - secretbox:
          keys:
            - name: key1
              secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
I have tried to apply this YAML file, but the error I get is below:
error: unable to recognize "examplesecret.yaml": no matches for kind
"EncryptionConfiguration" in version "apiserver.config.k8s.io/v1"
I created an AKS cluster in Azure and used the example encryption YAML file, expecting to be able to create encrypted-at-rest secrets, but the creation fails.
The kind EncryptionConfiguration is understood only by the api-server, via the flag --encryption-provider-config= (ref); it is not a resource you can kubectl apply, which is why apply reports no matching kind. In AKS there is no way to pass that flag to the api-server, as it is a managed service. Feel free to request the feature in the public forum.
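For contrast, on a self-managed control plane the flag would be set in the api-server's static Pod manifest, roughly like this (file paths and mount assumed):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    # Points at the EncryptionConfiguration file from the question,
    # mounted into the api-server container.
    - --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
```

The kubelet restarts the api-server when the static manifest changes, after which newly written secrets are encrypted with the first configured provider.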

Kubernetes service account custom token

I am trying to create a service account with a known, fixed token used by Jenkins to deploy stuff into Kubernetes. I managed to create the token all right with the following YAML:
apiVersion: v1
kind: Secret
metadata:
  name: integration-secret
  annotations:
    kubernetes.io/service-account.name: integration
type: kubernetes.io/service-account-token
data:
  token: YXNkCg== # yes, this is base64
Then I attached the secret to the 'integration' service account, and it's visible:
-> kubectl describe sa integration
Name:                integration
Namespace:           default
Labels:              <none>
Annotations:         <none>
Mountable secrets:   integration-secret
                     integration-token-283k9
Tokens:              integration-secret
                     integration-token-283k9
Image pull secrets:  <none>
But the login fails. If I remove data and data.token, a token gets auto-created and login works. Is there something I'm missing? My goal is to have a fixed token for CI so that I won't have to update it everywhere when creating a project (don't worry, this is just for dev environments). Is it possible, for example, to define a username/password for service accounts for API access?
Is it possible for example to define username/password for service accounts for API access?
No, the tokens must be valid JWTs, signed by the service account token signing key.
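For illustration, the token in the Secret above (YXNkCg==, i.e. "asd\n") is not a JWT at all, which is why authentication fails. A real service-account token has three dot-separated base64url segments; a quick sketch to inspect one without verifying its signature:

```python
import base64
import json


def jwt_header(token: str) -> dict:
    """Decode a JWT's header segment without verifying the signature."""
    segments = token.split(".")
    if len(segments) != 3:
        raise ValueError("not a JWT: expected 3 dot-separated segments")
    header_b64 = segments[0]
    # JWT segments use unpadded base64url; restore padding before decoding.
    header_b64 += "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64))
```

Calling this on "asd" raises ValueError, while a token minted by the control plane decodes to a header such as {"alg": "RS256", ...}, signed with the cluster's service-account key.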