Append Workload Identity bindings using GCP Config Connector - Kubernetes

I am initially creating an IAMPolicy with some bindings
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicy
metadata:
  name: storage-admin-policy
  namespace: cnrm-system
  annotations:
    argocd.argoproj.io/sync-wave: "2"
    cnrm.cloud.google.com/project-id: "mysten-sui"
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: storage-admin
    namespace: cnrm-system
  bindings:
    - role: roles/iam.workloadIdentityUser
      members:
        - serviceAccount:mysten-sui.svc.id.goog[monitoring/thanos-compactor]
But I want to append some new workloadIdentityUser entries, for which I was using an IAMPartialPolicy:
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPartialPolicy
metadata:
  name: ax001-sa-policy
  namespace: ax001
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: storage-admin
    namespace: cnrm-system
  bindings:
    - role: roles/iam.workloadIdentityUser
      members:
        - member: serviceAccount:mysten-sui.svc.id.goog[ax001/ax001-prometheus]
But the entries are not being appended; instead, they are being overridden. Is there any way to append rather than override?
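For reference, one pattern that should avoid the overwrite (a sketch, not verified against this exact setup): IAMPolicy is authoritative and replaces the entire policy on the referenced resource, so mixing it with an IAMPartialPolicy means the IAMPolicy keeps winning. Keeping every binding non-authoritative, e.g. one IAMPolicyMember per member, lets Config Connector merge the entries instead (the metadata name below is just illustrative):
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  # illustrative name, not taken from the original manifests
  name: ax001-prometheus-workload-identity
  namespace: ax001
spec:
  member: serviceAccount:mysten-sui.svc.id.goog[ax001/ax001-prometheus]
  role: roles/iam.workloadIdentityUser
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: storage-admin
    namespace: cnrm-system
The existing thanos-compactor binding would then be expressed the same way (or via an IAMPartialPolicy) rather than through the authoritative IAMPolicy.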

Related

Mirror traffic using Traefik 2

We are using Traefik v2 running in kubernetes in a shared namespace (called shared), with multiple namespaces for different projects/services. We are utilising the IngressRoute CRD along with middlewares.
We need to mirror (duplicate) all incoming traffic for a specific URL (blah.example.com/newservice) and forward it to 2 backend services in 2 different namespaces. Because they are split across 2 namespaces, they run under the same name, with the same port.
I've looked at the following link, but don't seem to understand it:
https://doc.traefik.io/traefik/v2.3/routing/providers/kubernetes-crd/#mirroring
This is my config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: shared-ingressroute
  namespace: shared
spec:
  entryPoints: []
  routes:
    - kind: Rule
      match: Host(`blah.example.com`) && PathPrefix(`/newservice/`)
      middlewares:
        - name: shared-middleware-testing-middleware
          namespace: shared
      priority: 0
      services:
        - kind: Service
          name: customer-mirror
          namespace: namespace1
          port: TraefikService
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: shared-middleware-testing-middleware
  namespace: shared
spec:
  stripPrefix:
    prefixes:
      - /newservice/
---
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: customer-mirror
  namespace: namespace1
spec:
  mirroring:
    name: newservice
    port: 8011
    namespace: namespace1
    mirrors:
      - name: newservice
        port: 8011
        percent: 100
        namespace: namespace2
What am I doing wrong?
Based on the docs, in your case the kind should be TraefikService:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: shared-ingressroute
  namespace: shared
spec:
  entryPoints: []
  routes:
    - kind: Rule
      match: Host(`blah.example.com`) && PathPrefix(`/newservice/`)
      middlewares:
        - name: shared-middleware-testing-middleware
          namespace: shared
      services:
        - kind: TraefikService
          name: customer-mirror
          namespace: namespace1
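One more thing worth checking (an assumption about the setup, since the IngressRoute, the TraefikService and the backend Services live in different namespaces): depending on the Traefik 2.x version, cross-namespace references from the CRD provider may be disabled by default and need to be enabled explicitly, for example in the static configuration:
# Traefik static configuration (sketch); the equivalent CLI flag is
# --providers.kubernetescrd.allowCrossNamespace=true
providers:
  kubernetesCRD:
    allowCrossNamespace: true
Without this, the IngressRoute in shared cannot point at the customer-mirror TraefikService in namespace1, nor can the mirror target a Service in namespace2.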

Kubernetes RBAC configuration w/ nodejs client

My design is: EventController lives in "default" namespace and starts Jobs in "gamespace" namespace.
I'm getting this error trying to create a Job with the Node.js kubernetes client:
jobs.batch is forbidden: User "system:serviceaccount:default:event-manager" cannot create resource "jobs" in API group "batch" in the namespace "gamespace"
from this line of code:
const job = await batchV1Api.createNamespacedJob('gamespace', kubeSpec.job)
kubeSpec.job is:
{
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'event-60da4bee95e237001d65e355',
    namespace: 'gamespace',
    labels: {
      tier: 'event-servers',
    }
  },
  spec: {
    backoffLimit: 1,
    activeDeadlineSeconds: 14400,
    ttlSecondsAfterFinished: 86400,
    template: { spec: [Object] }
  }
}
And here's my RBAC configuration:
apiVersion: v1
kind: Namespace
metadata:
  name: gamespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: event-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-manager-role
rules:
  - apiGroups: ['', 'batch'] # '' means "core"
    resources: ['jobs', 'services']
    verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: event-manager-clusterrole-binding
  # Specifies the namespace the ClusterRole can be used in.
  namespace: gamespace
subjects:
  - kind: ServiceAccount
    name: event-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: event-manager-role
The container making the function call is configured like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventcontroller-deployment
  labels:
    app: eventcontroller
spec:
  selector:
    matchLabels:
      app: eventcontroller
  replicas: 1
  template:
    metadata:
      labels:
        app: eventcontroller
    spec:
      # see accounts-namespaces.yaml
      serviceAccountName: 'event-manager'
      imagePullSecrets:
        - name: registry-credentials
      containers:
        - name: eventcontroller
          image: eventcontroller
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: eventcontroller-config
          ports:
            - containerPort: 3003
I'm not sure if I'm using the client incorrectly (why is namespace needed in the spec and the function call?), or if I've configured RBAC incorrectly.
Any suggestions?
Thank you!
I can't comment, so I will share some ideas about what could be the issue and how I would debug the problem further.
Your Deployment has no namespace defined. Could it be that the Pod is running in a different namespace (!= gamespace), while your ServiceAccount's permissions only apply in gamespace?
A RoleBinding grants permissions within a specific namespace.
If this is not the error, you might want to try a service account that already exists and grants all permissions, to rule out other errors for a start.
Here is an example manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: all-permissions
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: service-account-all-permissions
    namespace: gameplay
Set the service account to 'service-account-all-permissions' in your Deployment and see if you still get a permission error from the Kubernetes API.
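Beyond that, two things stand out in the original manifests (a sketch of a possible fix, assuming the ServiceAccount really lives in default since no namespace was set on it): the RoleBinding's roleRef says kind: Role while event-manager-role was created as a ClusterRole, and the subject is missing a namespace, so the binding may not resolve to the intended account:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: event-manager-clusterrole-binding
  namespace: gamespace                 # permissions are granted inside gamespace
subjects:
  - kind: ServiceAccount
    name: event-manager
    namespace: default                 # assumption: the ServiceAccount was created in default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole                    # matches the kind event-manager-role was actually created with
  name: event-manager-role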

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD. I am creating a deployment with a single nginx pod, which I would like to access on port 80 from a remote server as well as locally. The pod sits in a pending state; when I run kubectl get pods, after around 40 seconds on average the pod disappears and a pod with a different nginx name starts up. This seems to happen in a loop.
The error is
* W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created these correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve the error above?
All of my .yaml files for this are below.
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
      - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pod should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
        # Run this image
        - image: nginx:1.14
          name: nginx
          ports:
            - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              resource:
                kind: nginx-service
                name: nginx-deployment
              #service:
              #  name: nginx
              #  port: 80
              #serviceName: nginx
              #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
  - apiGroups: ["example.com"]
    resources: ["ResourceAll"]
    verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
  - kind: ServiceAccount
    name: user
    namespace: example
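For reference, the commented-out service backend in Ingress_v1_networking.yaml is the form networking.k8s.io/v1 expects when pointing at a normal Service (resource backends are meant for custom resources, not Services); a minimal sketch using the nginx-service defined above:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            # Prefix instead of Exact so sub-paths are matched as well (assumption)
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80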

Rate limits not limiting anything

I'm trying to use Istio rate limits to limit access to the service hello
(1 call per second max).
I used the template from the Bookinfo demo application.
This is the configuration I've got so far:
Handler
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
      - name: requestcountquota.instance.istio-system
        maxAmount: 1
        validDuration: 1s
Instance
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      source: request.headers["x-forwarded-for"] | "unknown"
      destination: destination.labels["app"] | "unknown"
QuotaSpec
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
    - quotas:
        - charge: 1
          quota: requestcountquota
QuotaSpecBinding
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
    - name: request-count
      namespace: istio-system
  services:
    - name: hello
      namespace: default
Rule
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
    - handler: quotahandler
      instances:
        - requestcountquota
Needless to say, curling the service's IP still works even well above 1 request per second while the limit is supposedly active...
FYI, I used the serviceIP / virtualService (+ gateway).
Also I'm using the "In Memory" version and not the Redis version.
Any help in understanding where the error is would be greatly appreciated!
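One thing that is easy to miss here (a sketch; option names are for Istio 1.x): since Istio 1.1, Mixer policy checks are disabled by default, and in that case memquota rules are silently never consulted. A quick check:
# Print the mesh config and look for disablePolicyChecks (sketch)
kubectl -n istio-system get cm istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks
# If it prints "disablePolicyChecks: true", policy enforcement has to be re-enabled
# (e.g. via the installation option global.disablePolicyChecks=false) before the quota applies.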

How to debug QuotaSpecBinding for rate-limits in istio?

I am trying to enable rate limiting for my Istio-enabled service, but it doesn't work. How do I debug whether my configuration is correct?
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
    - name: requestcount.quota.istio-system
      maxAmount: 5
      validDuration: 1s
      overrides:
        - dimensions:
            engine: myEngineValue
          maxAmount: 5
          validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
    - quotas:
        - charge: 1
          quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
    - name: request-count
      namespace: istio-system
  services:
    # - service: '*' ; I tried with this as well
    - name: my-service
      namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
    - handler: handler.memquota
      instances:
        - requestcount.quota
I tried with - service: '*' as well in the QuotaSpecBinding, but no luck.
How do I confirm that my configuration is correct? my-service is the Kubernetes Service for my deployment. (Does this have to be an Istio VirtualService for rate limits to work? Edit: Yes, it does!)
I followed this doc except the VirtualService part.
I have a feeling I am making a mistake somewhere with the namespaces.
You have to define a VirtualService for the service my-service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
    - myservice
  http:
    - route:
        - destination:
            host: myservice
This way, you let Istio know which host/service you are referring to.
In terms of debugging, there is a project named Kiali that aims to improve observability in Istio environments. It includes validations for some Istio and Kubernetes objects: see its Istio configuration browsing features.
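Beyond Kiali, a rough way to confirm the objects were actually accepted is to list them and watch the Mixer policy logs (a sketch; resource and deployment names can differ between Istio versions):
kubectl -n istio-system get quotaspecs.config.istio.io,quotaspecbindings.config.istio.io
kubectl -n istio-system get rules.config.istio.io,memquotas.config.istio.io,quotas.config.istio.io
# assumption: the policy Mixer runs as the istio-policy deployment with a "mixer" container
kubectl -n istio-system logs deploy/istio-policy -c mixer | grep -i quota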