https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig
I have only seen examples assigning one securityPolicy, but I want to assign multiple.
I created the following BackendConfig with two policies and applied it to my service with the annotation beta.cloud.google.com/backend-config: my-backend-config:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backend-config
spec:
  securityPolicy:
    name: "policy-one"
    name: "policy-two"
When I deploy, only "policy-two" is applied. Can I assign two policies somehow? I see no docs for this.
There's nothing in the docs that says you can specify more than one policy. The spec even uses the singular securityPolicy, and the YAML structure is not an array.
Furthermore, if you look at your spec:
spec:
  securityPolicy:
    name: "policy-one"
    name: "policy-two"
Because a YAML mapping cannot contain duplicate keys, parsers keep only the last one: the first name: "policy-one" is dropped, which explains why only name: "policy-two" is used. You can check this on YAMLlint. To hold more than one value, securityPolicy would have to be converted to an array, something like this:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backend-config
spec:
  securityPolicy:
  - name: "policy-one"
  - name: "policy-two"
The issue with this is that it's probably not supported by GCP.
The same behavior applies to regular HTTP(S) Load Balancers: it looks like only a single security policy can be attached per target, and this also affects the HTTP(S) Load Balancers created by the GKE Ingress.
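For reference, on a plain external HTTP(S) load balancer the policy is likewise attached with a single flag per backend service, which reflects the one-policy-per-target model (the backend service name below is a placeholder):

gcloud compute backend-services update my-backend-service \
    --security-policy ca-how-to-security-policy \
    --global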
It is, however, possible to add more rules to that single security policy. New rules are added in the same way as the first one, but each rule must have a different priority, as in the example below:
~$ gcloud beta compute security-policies rules create 1000 \
--security-policy ca-how-to-security-policy \
--src-ip-ranges "192.0.2.0/24" \
--action "deny-404"
~$ gcloud beta compute security-policies rules create 1001 \
--security-policy ca-how-to-security-policy \
--src-ip-ranges "11.16.0.0/24" \
--action "deny-404"
So I have a kubernetes cronjob object set to run periodically.
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
ticketing-job-lifetime-manager 45 */4 * * * False 0 174m 25d
and I know how to call it manually:
# ticketing-job-manual-call will be the name of the job that runs
kubectl create job --from=cronjobs/ticketing-job-lifetime-manager ticketing-job-manual-call
BUT - what I want to do is call the job, but modify portions of it (shown below) before it is called. Specifically items.metadata.annotations and items.spec.jobTemplate.spec.containers.args.
If this is possible on-the-fly, I'd be over the moon. If it requires creating a temporary object, then I'd appreciate an approach to doing this that is robust, performant - and safe. Thanks!
apiVersion: v1
items:
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    annotations:
      <annotation-1> <- want to modify these
      <annotation-2>
      ..
      <annotation-n>
    creationTimestamp: "2022-05-03T13:24:49Z"
    labels:
      AccountID: foo
      FooServiceAction: "true"
      FooServiceManaged: "true"
      CronName: foo
    name: foo
    namespace: my-namespace
    resourceVersion: "298013999"
    uid: 57b2-4612-88ef-a0d5e26c8
  spec:
    concurrencyPolicy: Replace
    jobTemplate:
      metadata:
        annotations:
          <annotation-1> <- want to modify these
          <annotation-2>
          ..
          <annotation-n>
        creationTimestamp: null
        labels:
          AccountID: 7761777c38d93b
          TicketServiceAction: "true"
          TicketServiceManaged: "true"
          CronName: ticketing-actions-7761777c38d93b-0
        name: ticketing-actions-7761777c38d93b-0
        namespace: rias
      spec:
        containers:
        - args:
          - --accountid=something <- want to modify these
          - --faultzone=something
          - --type=something
          - --cronjobname=something
          - --plans=something
          command:
          - ./ticketing-job
          env:
          - name: FOO_BAR <- may want to modify these
            value: "false"
          - name: FOO_BAZ
            value: "true"
The way to think about this is that Kubernetes resources are defined (definitively) by YAML|JSON config files. A useful advantage to having config files is that these can be checked into source control; you automatically audit your work if you create unique files for each resource (for every change).
Kubernetes (kubectl) isn't optimized|designed for tweaking resources, although you can use kubectl patch to update deployed resources.
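For example, a small targeted change such as overwriting one annotation on the CronJob's job template can be done with kubectl patch alone (the annotation key and value here are placeholders):

kubectl patch cronjob ticketing-job-lifetime-manager \
  --namespace=${NAMESPACE} \
  --type=merge \
  --patch='{"spec":{"jobTemplate":{"metadata":{"annotations":{"example-key":"example-value"}}}}}'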
For anything more involved, I encourage you to consider an approach that is applicable to any Kubernetes resource (not just Jobs) and that focuses on using YAML|JSON files as the way to represent state:
kubectl get the resource and output it as YAML|JSON (--output=json|yaml), persisting the result to a file (which could be source-controlled)
Mutate the file using any of many tools, but preferably YAML|JSON processing tools (e.g. yq or jq)
kubectl create or kubectl apply the resulting file, which reflects the intended configuration of the new resource
By way of example, assuming you use jq:
# Output the CronJob 'ticketing-job-lifetime-manager' as a JSON file
kubectl get cronjob/ticketing-job-lifetime-manager \
--namespace=${NAMESPACE} \
--output=json > ${PWD}/ticketing-job-lifetime-manager.json

# E.g. replace '.metadata.annotations' entirely (annotations are a map, not a list)
jq '.metadata.annotations={"foo":"x","bar":"y"}' \
${PWD}/ticketing-job-lifetime-manager.json \
> ${PWD}/new-job.json

# E.g. append an argument to the container named 'foo'
# (write to a new file; redirecting back onto the input file would truncate it)
jq '(.spec.jobTemplate.spec.containers[] | select(.name=="foo") | .args) += ["--key=value"]' \
${PWD}/new-job.json \
> ${PWD}/new-job-final.json

# Etc.

# Apply
kubectl create \
--filename=${PWD}/new-job-final.json \
--namespace=${NAMESPACE}
NOTE You can pipe the output from kubectl get through jq and straight into kubectl create if you wish, but it's useful to keep a file-based record of the resource.
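A piped variant of the same flow (the jq filter is just an illustrative edit):

kubectl get cronjob/ticketing-job-lifetime-manager --namespace=${NAMESPACE} --output=json \
| jq '.metadata.annotations={"foo":"x","bar":"y"}' \
| kubectl create --filename=- --namespace=${NAMESPACE}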
Having to deal with YAML|JSON config files is a common issue with Kubernetes (and every other technology that uses them). There are other tools, e.g. jsonnet and CUE, that try to provide a more programmatic way to manage YAML|JSON.
I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as pushing to a particular repo) and it will trigger the build. The sample yaml they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes-resource-file
  - --image=gcr.io/project-id/image:tag
  - --location=${_CLOUDSDK_COMPUTE_ZONE}
  - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how this would relate to the image property of the Cloud Build trigger yaml. I also don't know how you could then have multiple "image" properties in the trigger yaml where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate deploying of multiple deployments/services in this fashion when they're all bundled into one repo? Or have Google just over simplified things for this tutorial - the reality being that most services would be in their own repo so as to be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment yaml? e.g:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
      - name: photoalbum-app
        image: gcr.io/[PROJECT_ID]/photoalbum-app#[DIGEST]
        tty: true
        ports:
        - containerPort: 8080
        env:
        - name: PROJECT_ID
          value: "[PROJECT_ID]"
The command that you use is fine for testing the deployment of one image, but when you work with Kubernetes (K8s) and GCP's managed version of it (GKE), you usually don't do this.
You use YAML files to describe your deployments, services and any other K8s objects that you want. When you deploy, you can run something like this:
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want:
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  ...
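To tie this back to Cloud Build: you can run that same kubectl apply against the whole config directory from a build step. A sketch, assuming the standard kubectl cloud builder and the same substitution variables as the docs you linked (this is not shown in the tutorial itself):

steps:
# apply every manifest in the repo's config/ directory to the cluster
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "config/"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}"
  - "CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}"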
I'm currently playing around with Knative and bootstrapped a simple installation using gloo and glooctl. Everything worked fine out of the box. However, I'm wondering whether there is a possibility to change the generated URL where the service is made available.
I already changed the domain, but I want to know if I could choose a domain name that doesn't contain the namespace, so that helloworld-go.namespace.mydomain.com would become helloworld-go.mydomain.com.
The current YAML-definition looks like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  labels:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: Go Sample v1
Thank you for your help!
This is configurable via the ConfigMap named config-network in the namespace knative-serving. See the ConfigMap in the deployment resources:
apiVersion: v1
data:
  _example: |
    ...
    # domainTemplate specifies the golang text template string to use
    # when constructing the Knative service's DNS name. The default
    # value is "{{.Name}}.{{.Namespace}}.{{.Domain}}". And those three
    # values (Name, Namespace, Domain) are the only variables defined.
    #
    # Changing this value might be necessary when the extra levels in
    # the domain name generated is problematic for wildcard certificates
    # that only support a single level of domain name added to the
    # certificate's domain. In those cases you might consider using a value
    # of "{{.Name}}-{{.Namespace}}.{{.Domain}}", or removing the Namespace
    # entirely from the template. When choosing a new value be thoughtful
    # of the potential for conflicts - for example, when users choose to use
    # characters such as `-` in their service, or namespace, names.
    # {{.Annotations}} can be used for any customization in the go template if needed.
    # We strongly recommend keeping namespace part of the template to avoid domain name clashes
    # Example '{{.Name}}-{{.Namespace}}.{{ index .Annotations "sub"}}.{{.Domain}}'
    # and you have an annotation {"sub":"foo"}, then the generated template would be {Name}-{Namespace}.foo.{Domain}
    domainTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
    ...
kind: ConfigMap
metadata:
  labels:
    serving.knative.dev/release: "v0.8.0"
  name: config-network
  namespace: knative-serving
Therefore, your config-network should look like this:
apiVersion: v1
data:
  domainTemplate: "{{.Name}}.{{.Domain}}"
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
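One way to apply just that key, without editing the whole ConfigMap by hand (a sketch):

kubectl patch configmap config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"domainTemplate":"{{.Name}}.{{.Domain}}"}}'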
You can also have a look at, and customize, the config-domain ConfigMap to configure the domain name that is appended to your services.
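For example, a config-domain that serves everything under mydomain.com would look roughly like this (the domain is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # requests are served under this domain by default
  mydomain.com: ""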
Assuming you're running knative over an istio service mesh, there's an example of how to use an Istio Virtual Service to accomplish this at the service level in the knative docs.
So a typical k8s deployment file that I'm working on looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    ...
  name: ${service-name}
spec:
  replicas: 1
  strategy:
    ...
  template:
    metadata:
      ...
    spec:
      serviceAccountName: test
      ...
The goal is to create multiple services that all use the same serviceAccount.
This structure works fine when test exists in
kubectl get serviceaccount
The question is: how can I fall back to the default serviceAccount if test does not exist in the namespace (for any reason)? I don't want the deployment to fail.
I essentially need to have something like
serviceAccountName: {test:-default}
P.S. clearly I can assign a variable to serviceAccountName and parse the yaml file from outside, but wanted to see if there's a better option
As long as you want to run this validation inside the cluster, the only way is to use a MutatingAdmissionWebhook.
It intercepts requests that match the rules defined in a MutatingWebhookConfiguration before they are persisted to etcd. The MutatingAdmissionWebhook performs the mutation by sending an admission request to a webhook server, which is just a plain HTTP server that adheres to the admission API.
Thus, you can check whether the service account exists and set the default one if it doesn't.
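A sketch of the registration side (the webhook Service name, namespace and path are placeholders, and the webhook server itself still has to implement the lookup-and-patch logic):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: serviceaccount-defaulter
webhooks:
- name: serviceaccount-defaulter.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
  clientConfig:
    service:
      name: sa-defaulter          # placeholder Service fronting the webhook server
      namespace: webhooks         # placeholder namespace
      path: /mutate
    caBundle: <base64-encoded CA certificate>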
Here is an example of such a webhook, which validates and sets custom labels.
More info about Admission Controller Webhooks
It seems that the only way to create node pools on Google Kubernetes Engine is with the command gcloud container node-pools create. I would like to have all the configuration in a YAML file instead. What I tried is the following:
apiVersion: v1
kind: NodeConfig
metadata:
  annotations:
    cloud.google.com/gke-nodepool: ares-pool
spec:
  diskSizeGb: 30
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-1
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring
  - https://www.googleapis.com/auth/service.management.readonly
  - https://www.googleapis.com/auth/servicecontrol
  - https://www.googleapis.com/auth/trace.append
  serviceAccount: default
But kubectl apply fails with:
error: unable to recognize "ares-pool.yaml": no matches for kind "NodeConfig" in version "v1"
I am surprised that Google yields almost no relevant results for all my searches. The only documentation that I found was the one on Google Cloud, which is quite incomplete in my opinion.
Node pools are not Kubernetes objects, they are part of the Google Cloud API. Therefore Kubernetes does not know about them, and kubectl apply will not work.
What you actually need is a solution called "infrastructure as code": code that tells GCP what kind of node pool you want.
If you don't strictly need YAML, you can check out Terraform, which handles this use case. See: https://terraform.io/docs/providers/google/r/container_node_pool.html
You can also look into Google Deployment Manager or Ansible (which has a GCP module and uses YAML syntax); they also address this need.
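In the meantime, for completeness, the fields in your YAML map almost one-to-one onto flags of the gcloud command you mentioned (the cluster name here is a placeholder):

gcloud container node-pools create ares-pool \
    --cluster=my-cluster \
    --machine-type=n1-standard-1 \
    --disk-type=pd-standard \
    --disk-size=30 \
    --image-type=COS \
    --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/trace.append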
I don't know if it answers your needs exactly, but if you want to do IaC in general with Kubernetes, you can use the Crossplane CRDs. If you already have a running cluster, you just have to install their Helm chart and you can provision a cluster this way:
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: gke-crossplane-cluster
spec:
  forProvider:
    initialClusterVersion: "1.19"
    network: "projects/development-labs/global/networks/opsnet"
    subnetwork: "projects/development-labs/regions/us-central1/subnetworks/opsnet"
    ipAllocationPolicy:
      useIpAliases: true
    defaultMaxPodsConstraint:
      maxPodsPerNode: 110
And then you can define an associated node pool as follows:
apiVersion: container.gcp.crossplane.io/v1alpha1
kind: NodePool
metadata:
  name: gke-crossplane-np
spec:
  forProvider:
    autoscaling:
      autoprovisioned: false
      enabled: true
      maxNodeCount: 2
      minNodeCount: 1
    clusterRef:
      name: gke-crossplane-cluster
    config:
      diskSizeGb: 100
      # diskType: pd-ssd
      imageType: cos_containerd
      labels:
        test-label: crossplane-created
      machineType: n1-standard-4
      oauthScopes:
      - "https://www.googleapis.com/auth/devstorage.read_only"
      - "https://www.googleapis.com/auth/logging.write"
      - "https://www.googleapis.com/auth/monitoring"
      - "https://www.googleapis.com/auth/servicecontrol"
      - "https://www.googleapis.com/auth/service.management.readonly"
      - "https://www.googleapis.com/auth/trace.append"
    initialNodeCount: 2
    locations:
    - us-central1-a
    management:
      autoRepair: true
      autoUpgrade: true
If you want, you can find a full example of GKE provisioning with Crossplane here.
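Once the provider is installed, applying and watching these resources is just standard kubectl (the file names here are assumptions):

# create the cluster and node pool, then watch Crossplane reconcile them
kubectl apply -f gke-crossplane-cluster.yaml -f gke-crossplane-np.yaml
kubectl get gkecluster gke-crossplane-cluster
kubectl get nodepool gke-crossplane-np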