Argo Events webhook authentication with GitHub

I'm trying to integrate a GitHub repo with an Argo Events webhook EventSource, following the example (link). When the webhook is triggered from GitHub, it returns an error:
'Invalid Authorization Header'.
Code:
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: ci-pipeline-webhook
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  webhook:
    start-pipeline:
      port: "12000"
      endpoint: /start-pipeline
      method: POST
      authSecret:
        name: my-webhook-token
        key: my-token

If you want to use a secure GitHub webhook as an event source, you will need to use the GitHub event source type. GitHub webhooks send a special authorization header, X-Hub-Signature/X-Hub-Signature-256, that contains an HMAC signature of the payload computed with the webhook secret. The "regular" webhook event source instead expects a standard Bearer token in an Authorization header of the form "Authorization: Bearer <webhook-secret>".
You can read more about GitHub webhook delivery headers here. You can then compare that to the Argo Events webhook event source authentication documentation here.
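For reference, the authSecret in the question's spec points at a plain shared token that the caller must send as a Bearer token, which GitHub webhooks do not do. A minimal sketch of such a Secret, using the names from the question and a placeholder value:
apiVersion: v1
kind: Secret
metadata:
  name: my-webhook-token
type: Opaque
stringData:
  # Placeholder value; a caller would have to send
  # "Authorization: Bearer <this value>" for the plain webhook event source to accept it.
  my-token: some-shared-token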
There are basically two options when creating the GitHub webhook event source.
Provide GitHub API credentials in a Kubernetes secret so Argo Events can make the API call to GitHub to create the webhook on your behalf.
Omit the GitHub API credentials in the EventSource spec and create the webhook yourself either manually or through whichever means you normally create a webhook (Terraform, scripted API calls, etc).
Here is an example for the second option:
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github-events
  namespace: my-namespace
spec:
  service:
    ports:
      - name: http
        port: 12000
        targetPort: 12000
  github:
    default:
      owner: my-github-org-or-username
      repository: my-github-repo-name
      webhook:
        url: https://my-argo-events-server-fqdn
        endpoint: /push
        port: "12000"
        method: POST
      events:
        - "*"
      webhookSecret:
        name: my-secret-name
        key: my-secret-key
      insecure: false
      active: true
      contentType: "json"
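For the first option, the github event source would additionally reference a Secret holding a GitHub API token so that Argo Events can register the webhook for you. A hedged sketch of just that part of the spec (the secret name and key are assumptions, not from the original post):
  github:
    default:
      owner: my-github-org-or-username
      repository: my-github-repo-name
      # same webhook/webhookSecret settings as above, plus API credentials
      # so Argo Events can create the webhook via the GitHub API
      apiToken:
        name: github-access   # assumed Secret name
        key: token            # assumed key containing a personal access token
In both cases, the value stored in webhookSecret must match the secret configured on the GitHub webhook itself, since GitHub uses it to compute the X-Hub-Signature-256 header.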

Related

Argo Events Kafka triggers cannot parse message headers to enable distributed tracing

TL;DR - Argo Events Kafka eventsource triggers do not currently parse the headers of consumed Kafka messages, which is needed to enable distributed tracing. I submitted a feature request (here) - if you face the same problem, please upvote, and I'm curious whether anyone has figured out a workaround.
====================================
Context
A common pattern in the Argo Workflows we deploy is Kafka event-driven, asynchronous distributed workloads, e.g.:
Service "A" Kafka producer that emits message to topic
Argo Events eventsource Kafka trigger listening to that topic
Argo Workflow gets triggered, and post-processing...
... service "B" Kafka producer at end of workflow emits that work is done.
To monitor the entire system for user-centric metrics ("how long did it take & where are the bottlenecks"), I'm looking to instrument distributed tracing from service "A" to service "B". We use Datadog as the aggregator, with dd-trace.
The pattern I've seen is manual propagation of trace context via Kafka headers: inject headers into Kafka messages before emitting (similar to HTTP headers, carrying parent trace metadata), and the receiving consumer, once done processing the message, adds a child_span to the parent_span received from upstream.
Example of the above: https://newrelic.com/blog/how-to-relic/distributed-tracing-with-kafka
Issue
The Argo Events Kafka event source trigger does not parse any headers, only passing the JSON body for the downstream Workflow to use at eventData.Body.
[source code]
Simplified views of my Argo Eventsource -> Trigger -> Workflow:
# eventsource/my-kafka-eventsource.yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
spec:
  kafka:
    my-kafka-eventsource:
      topic: <my-topic>
      version: "2.5.0"
# sensors/trigger-my-workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
spec:
  dependencies:
    - name: my-kafka-eventsource-dep
      eventSourceName: my-kafka-eventsource
      eventName: my-kafka-eventsource
  triggers:
    - template:
        name: start-my-workflow
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              spec:
                entrypoint: my-sick-workflow
                arguments:
                  parameters:
                    - name: proto_message
                      value: needs to be overriden
                    # I would like to be able to add this
                    - name: msg_headers
                      value: needs to be overriden
                templates:
                  - name: my-sick-workflow
                    dag:
                      tasks:
                        - name: my-sick-workflow
                          templateRef:
                            name: my-sick-workflow
                            template: my-sick-workflow
          parameters:
            # content/body of consumed message
            - src:
                dependencyName: my-kafka-eventsource-dep
                dataKey: body
              dest: spec.arguments.parameters.0.value
            # I would like to do this - get msg.headers() if exists.
            - src:
                dependencyName: my-kafka-eventsource-dep
                dataKey: headers
              dest: spec.arguments.parameters.1.value
# templates/my-sick-workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
  templates:
    - name: my-sick-workflow
      container:
        image: <image>
        command: [ "python", "/main.py" ]
        # I want to add the 2nd arg - msg_headers - here
        args: [ "{{workflow.parameters.proto_message}}", "{{workflow.parameters.msg_headers}}" ]
        # so that in my Workflow DAG step source code,
        # I can access the headers of the Kafka msg from upstream via
        # body=sys.argv[1], headers=sys.argv[2]
Confluent-Kafka API docs on accessing message headers: [doc]
Q's
Has anyone found a workaround for passing tracing context from an upstream to a downstream service when it travels through Kafka Producer <> Argo Events?
I considered changing my Argo Workflows sensor trigger to an HTTP trigger accepting payloads: a new Kafka consumer would listen for the message that currently triggers my Argo Workflow, then forward an HTTP payload with the parent trace metadata in its headers.
It's an anti-pattern relative to the rest of my workflows, though, so I would like to avoid it if there's a simpler solution.
As you pointed out, the only real workaround, short of forking some part of Argo Events or implementing your own Source/Sensor, would be to use a Kafka consumer (or Kafka Connect) and call a Webhook EventSource (or another event source that can extract the information you need).
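For illustration, such a bridging consumer could POST the Kafka message body together with its headers to a plain webhook EventSource and let the Sensor map them to workflow parameters. A minimal sketch under that assumption (all names here are made up):
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: kafka-bridge-webhook
spec:
  webhook:
    kafka-bridge:
      port: "12000"
      endpoint: /kafka-bridge
      method: POST
      # The bridging consumer would POST JSON such as
      #   {"message": <original message body>, "headers": {"x-datadog-trace-id": "..."}}
      # so a Sensor dependency can map body.message and body.headers
      # to the proto_message and msg_headers parameters via dataKey.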

Integrating SSO for Argo Workflows using Keycloak

I have a requirement to integrate SSO for Argo Workflows, and for this we have made the necessary changes in quick-start-postgres.yaml.
Here is the YAML file we are using to start Argo locally.
https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
And below are the sections we are modifying to support SSO integration.
Deployment section:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
        - args:
            - server
            - --namespaced
            - --auth-mode=sso
workflow-controller-configmap section:
apiVersion: v1
data:
  sso: |
    # This is the root URL of the OIDC provider (required).
    issuer: http://localhost:8080/auth/realms/master
    # This is the name of the secret and the key in it that contain the OIDC client
    # ID issued to the application by the provider (required).
    clientId:
      name: dummyClient
      key: client-id
    # This is the name of the secret and the key in it that contain the OIDC client
    # secret issued to the application by the provider (required).
    clientSecret:
      name: jdgcFxs26SdxdpH9Z5L33QCFAmGYTzQB
      key: client-secret
    # This is the redirect URL supplied to the provider (required). It must
    # be in the form <argo-server-root-url>/oauth2/callback. It must be
    # browser-accessible.
    redirectUrl: http://localhost:2746/oauth2/callback
  artifactRepository: |
    s3:
      bucket: my-bucket
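For reference, clientId and clientSecret are each meant to name a Kubernetes Secret and a key within it rather than literal values; a hedged sketch of such a Secret with placeholder values:
apiVersion: v1
kind: Secret
metadata:
  name: argo-server-sso   # the name that clientId.name / clientSecret.name should point to
type: Opaque
stringData:
  client-id: argo-server                    # placeholder Keycloak client ID
  client-secret: <keycloak-client-secret>   # placeholder Keycloak client secret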
We are starting Argo by issuing the below 2 commands:
kubectl apply -n argo -f modified-file/quick-start-postgres.yaml
kubectl -n argo port-forward svc/argo-server 2746:2746
After executing the above commands and trying to log in with single sign-on, it does not redirect to the Keycloak user login page. Instead it is redirected to https://localhost:2746/oauth2/redirect?redirect=https://localhost:2746/workflows
This page isn’t working localhost is currently unable to handle this request.
HTTP ERROR 501
What could be the issue here? Are we missing anything?
Are there arguments we need to pass while starting Argo?
Can someone please suggest something on this.
Try adding --auth-mode=client to your argo-server container args
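In the quick-start manifest above, that means listing both auth modes on the argo-server container, roughly like this (a sketch based on the Deployment snippet from the question):
      containers:
        - args:
            - server
            - --namespaced
            - --auth-mode=sso
            - --auth-mode=client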

How to reflect the HTTP method to a Keycloak resource when using an Ambassador filter

I'm trying to integrate Ambassador and Keycloak, so that all my microservices behind Ambassador can be protected by Keycloak.
Now I can implement the easy case by setting the filter + filter policy. Say my resource is GET /products/:productId; if the user wants to visit this page, Ambassador will intercept the request and redirect to the Keycloak login page. The filter policy settings look like:
apiVersion: getambassador.io/v2
kind: FilterPolicy
metadata:
  name: keycloak-filter-policy
  namespace: ambassador
spec:
  rules:
    - host: "*"
      path: /product/:productId
      filters:
        - name: keycloak-filter
          namespace: ambassador
          arguments:
            scopes:
My question is, how could I define a policy like POST /product/:productId? On Keycloak I have resources + policies such as product:view and product:edit; how can I translate these resources to Ambassador's filter policies?
To directly answer your question, currently, you cannot add the HTTP method to the FilterPolicy. There is a workaround if you need to define more granular access control based on what you are trying to do with the resource.
For example, if you are using HTTP/2 or HTTP/3, you can get the method from the request headers; there is a pseudo-header called :method.
Link for HTTP spec: https://httpwg.org/specs/rfc7540.html#HttpRequest
Link for Ambassador's Filters Doc: https://www.getambassador.io/docs/edge-stack/latest/topics/using/filters/
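One way to act on that information is to route the request through an External filter and make the method-level decision in your own authorization service. A rough sketch, assuming Ambassador Edge Stack's External filter type and a hypothetical authorization service (service name and port are made up):
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: method-aware-authz
  namespace: ambassador
spec:
  External:
    # Hypothetical service that inspects the request it receives (method, path,
    # Authorization header) and checks product:view vs product:edit against Keycloak.
    auth_service: "http://method-authz.ambassador:3000"
    proto: http
    allowed_request_headers:
      - "authorization"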

I am trying to create MonitoringNotificationChannel using Config Connector in GCP

I want to create a MonitoringNotificationChannel in GCP to send alerts to Opsgenie, so we are using the webhook channel provided by Opsgenie:
apiVersion: monitoring.cnrm.cloud.google.com/v1beta1
kind: MonitoringNotificationChannel
metadata:
  name: monitoringnotificationchannel-webhook_tokenauth
spec:
  type: webhook_tokenauth
  # The spec.labels field below is for configuring the desired behaviour of the notification channel.
  # It does not apply labels to the resource in the cluster.
  labels:
    description: Sends notifications to indicated webhook URL using HTTP-standard basic authentication. Should be used in conjunction with SSL/TLS to reduce the risk of attackers snooping the credentials.
  sensitiveLabels:
    authToken:
      valueFrom:
        secretKeyRef:
          key: url
          name: quota
  enabled: true
After applying this, we are getting the labels as null.
We want to reference the Opsgenie URL from sensitiveLabels.
Format of the Opsgenie URL: https://api.opsgenie.com/v1/json/googlestackdriver?apiKey=xxxxxxxxxxx
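The secretKeyRef above expects a Secret named quota with a key url holding that full Opsgenie URL, along these lines (placeholder API key):
apiVersion: v1
kind: Secret
metadata:
  name: quota
type: Opaque
stringData:
  url: https://api.opsgenie.com/v1/json/googlestackdriver?apiKey=<your-api-key>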
Docs
https://cloud.google.com/config-connector/docs/reference/resource-docs/monitoring/monitoringnotificationchannel

Istio: Can I add a randomly generated unique value as a header to every request before it reaches my application?

I have a RESTful service within a Spring Boot application. This Spring Boot app is deployed inside a Kubernetes cluster, and we have Istio as a service mesh attached as a sidecar to each container pod in the cluster. Every request to my service first hits the service mesh, i.e. Istio, and then gets routed accordingly.
I need to add a validation for a request header, and if that header is not present, randomly generate a unique value and set it as a header on the request. I know that there is Headers.HeaderOperations which I can use in the destination rule, but how can I generate a unique value every time the header is missing? I don't want to write the logic inside my application, as this is a general rule to apply to all the applications inside the cluster.
There is important information that needs to be said on this subject. It looks to me like you are trying to work around tracing for applications that do not forward/propagate headers in your cluster, so I am going to mention a few problems that can be encountered with this solution (just in case).
As mentioned in the answer from Yuri G., you can configure unique x-request-id headers, but they will not be very useful in terms of tracing if the requests pass through applications that do not propagate those x-request-id headers.
This is because tracing an entire request path requires a unique x-request-id throughout its entire trace. If the x-request-id value is different in various parts of the path the request takes, how are we going to put together the entire trace path?
In a scenario where two requests are received in an application pod at the same time, even if they had unique x-request-id headers, only the application is able to tell which inbound request matches which outbound connection. One of the requests could take longer to process, and without a forwarded trace header we can't tell which one is which.
Anyway, for applications that do support forwarding/propagating x-request-id headers, I suggest following the guide from the Istio documentation.
Hope it helps.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: MERGE
        value:
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
            always_set_request_id_in_response: true
From reading the Istio and Envoy documentation, it seems like this is not supported by Istio/Envoy out of the box. As a workaround you have 2 options.
Option 1: Set the x-envoy-force-trace header in a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews.prod.svc.cluster.local
  http:
    - headers:
        request:
          set:
            x-envoy-force-trace: "true"
It will generate an x-request-id header if one is missing, but it seems like an abuse of the tracing mechanism.
Option 2: Use consistentHash load balancing based on a header, e.g.:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-custom-request-id
It will generate the header x-custom-request-id for any request that doesn't have it. In this case, requests with the same x-custom-request-id value will always go to the same pod, which can cause uneven balancing.
The answer above works well! I have updated it for the latest Istio (the filter name is given in full):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
      patch:
        operation: MERGE
        value:
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
            always_set_request_id_in_response: true