how to use postgres.db.name for multiple databases in kubernetes configMaps - postgresql

So I want to create multiple postgresql databases in the kubernetes deployment.
I tried with the below configMap configurations but the databases are not being created. I tried to log into the postgres db pod with one of the database names I used in the configMaps, but it says the database doesn't exist.
method 1:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra-kratos-postgres-config
  labels:
    app: hydra-kratos-db
data:
  postgres.db.user: pguser
  postgres.db.password: secret
  postgres.db.name:
    - postgredb1
    - postgredb2
    - postgredb3
method 2:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra-kratos-postgres-config
  labels:
    app: hydra-kratos-db
data:
  POSTGRES_USER: pguser
  POSTGRES_PASSWORD: secret
  POSTGRES_MULTIPLE_DATABASES:
    - kratos
    - hydra
Would appreciate any suggestions on this. Thank you.

I assume you are using the official Postgres image - by default it doesn't support declaring multiple databases on init. You could try building your own Postgres image like in this repo. If you create a k8s deployment based on such an image, I think there is a chance that your POSTGRES_MULTIPLE_DATABASES variable could work.
Let me know if you decide to try this.
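An alternative to maintaining a custom image: the official Postgres image runs any *.sh or *.sql files it finds in /docker-entrypoint-initdb.d on first initialization, so you can ship a small init script through a second ConfigMap and mount it there. A rough sketch of the idea - the ConfigMap name, the comma-separated POSTGRES_MULTIPLE_DATABASES format and the mount path wiring are my assumptions, not something the official image defines:

apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra-kratos-postgres-init
data:
  create-multiple-dbs.sh: |
    #!/bin/bash
    # Runs once on first init; creates one database per comma-separated name.
    set -e
    for db in $(echo "$POSTGRES_MULTIPLE_DATABASES" | tr ',' ' '); do
      psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -c "CREATE DATABASE \"$db\";"
    done

In the postgres Deployment you would then set POSTGRES_MULTIPLE_DATABASES: "kratos,hydra" as a plain string value and mount the ConfigMap into the init directory:

        volumeMounts:
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: init-scripts
        configMap:
          name: hydra-kratos-postgres-init

Keep in mind that ConfigMap data values must be strings, which is why a YAML list under data (as in method 1 and method 2 above) won't work as-is.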

Related

RabbitMQ Kubernetes Operator - Set Username and Password with Secret

I am using the RabbitMQ Kubernetes operator for a dev-instance and it works great. What isn't great is that the credentials generated by the operator are different for everyone on the team (I'm guessing it generates random creds upon init).
Is there a way to provide a secret and have the operator use those credentials in place of the generated ones?
Yaml:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster-deployment
  namespace: message-brokers
spec:
  replicas: 1
  service:
    type: LoadBalancer
Ideally, I can just configure some yaml to point to a secret and go from there. But, struggling to find the documentation around this piece.
Example Username/Password generated:
user: default_user_wNSgVBIyMIElsGRrpwb
pass: cGvQ6T-5gRt0Rc4C3AdXdXDB43NRS6FJ
I figured it out. Looks like you can just add a secret configured like the below example and it'll work. I figured this out by reverse engineering what the operator generated. So, please chime in if this is bad.
The big thing to remember is the default_user.conf setting. Other than that, it's just a secret.
kind: Secret
apiVersion: v1
metadata:
  name: rabbitmq-cluster-deployment-default-user
  namespace: message-brokers
stringData:
  default_user.conf: |
    default_user = user123
    default_pass = password123
  password: password123
  username: user123
type: Opaque
rabbitmq-cluster-deployment-default-user comes from the RabbitmqCluster's metadata.name + -default-user (see the yaml in the question).
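For reference, this is also how you can read back whatever credentials that secret ends up holding (the generated ones or your own); the names below simply reuse the yaml above:

kubectl -n message-brokers get secret rabbitmq-cluster-deployment-default-user \
  -o jsonpath='{.data.username}' | base64 -d
kubectl -n message-brokers get secret rabbitmq-cluster-deployment-default-user \
  -o jsonpath='{.data.password}' | base64 -d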

GKE automating deploy of multiple deployments/services with different images

I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as pushing to a particular repo) and it will trigger the build. The sample yaml they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes-resource-file
  - --image=gcr.io/project-id/image:tag
  - --location=${_CLOUDSDK_COMPUTE_ZONE}
  - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how this would relate to the image property of the cloud build trigger yaml. I also don't know how you could then have multiple "image" properties in the trigger yaml where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate deploying of multiple deployments/services in this fashion when they're all bundled into one repo? Or have Google just oversimplified things for this tutorial - the reality being that most services would be in their own repo so as to be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment yaml? e.g:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
      - name: photoalbum-app
        image: gcr.io/[PROJECT_ID]/photoalbum-app@[DIGEST]
        tty: true
        ports:
        - containerPort: 8080
        env:
        - name: PROJECT_ID
          value: "[PROJECT_ID]"
The command that you use is perfect for testing the deployment of one image. But when you work with Kubernetes (K8S), and the managed version on GCP (GKE), you usually never do this.
You use YAML files to describe your deployments, services and all the other K8S objects that you want. When you deploy, you can run something like this
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec: ...
...
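To tie this back to the Cloud Build trigger: instead of gke-deploy with a single --image, you can use the kubectl cloud builder to apply the whole config/ directory after building and pushing your images. A minimal sketch of such a cloudbuild.yaml, assuming the _CLOUDSDK_* values are defined as trigger substitutions and that ./app is where the photoalbum-app Dockerfile lives (both assumptions on my side):

steps:
# build and push one of the images (repeat a pair of steps like this per image)
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/photoalbum-app:$COMMIT_SHA", "./app"]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/photoalbum-app:$COMMIT_SHA"]
# apply every manifest in the config/ directory against the target cluster
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "config/"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}"
  - "CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}"

Note that kubectl apply deploys exactly the image references already written in those manifests, so you would still need to update the image tag/digest in the yaml (or template it) if you want each build to roll out the freshly pushed image.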

Does a Pod use the k8s API Server to fetch spec declarations?

I'm going through this post, where we bind a Role to a Service Account and then query the API Server using said Service Account. The role only has list permission to the pods resource.
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
So I'm left wondering when a pod actually needs to query the API Server for resources, or if the pod creation process is special and gets the resources through other means.
Here is the actual list of resources I used for my test:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rb
subjects:
- kind: ServiceAccount
  name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
data:
  password: c3RhY2tvdmVyZmxvdw==
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: webserver
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /mysecrets
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
...
I must admit that at first I didn't quite get your point, but when I read your question again I think now I can see what it's all about. First of all I must say that your initial interpretation is wrong. Let me explain it.
You wrote:
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account
Actually the key word here is "I". The question is: who creates the Pod and who mounts a random Secret into this Pod? And the answer to that question, from your perspective, is simple: me. When you create a Pod you don't use the above-mentioned ServiceAccount; you authorize your access to the kubernetes API through entries in your .kube/config file. During the whole Pod creation process the ServiceAccount you created is not used a single time.
and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
Why would it query the Secret if it doesn't use it?
You can test it in a very simple way. You just need to kubectl exec into your running Pod and try to run kubectl, query the kubernetes API directly, or use one of the officially supported kubernetes client libraries. Then you will see that you're allowed to perform only the specific operations listed in your Role, i.e. list Pods. If you attempt to run kubectl get secrets from within your Pod, it will fail.
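For example, a quick way to see what the Pod's ServiceAccount is and isn't allowed to do (the default namespace below is my assumption, and the curl variant assumes curl is available in the container image):

# from your workstation, impersonate the ServiceAccount
kubectl auth can-i list pods --as=system:serviceaccount:default:example-sa    # yes
kubectl auth can-i get secrets --as=system:serviceaccount:default:example-sa  # no

# from inside the Pod, call the API with the token mounted for the ServiceAccount
kubectl exec -it example-pod -- sh -c '
  TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
  curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
       -H "Authorization: Bearer $TOKEN" \
       https://kubernetes.default.svc/api/v1/namespaces/default/pods'   # allowed: list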
The result you get is totally expected and there is nothing surprising in the fact that a random Secret is successfully mounted and the Pod is created successfully every time. It's you who queries the kubernetes API and requests the creation of a Pod with a Secret mounted. It's not the Pod's ServiceAccount.
So I'm left wondering when a pod actually needs to query the API Server for resources, or if the pod creation process is special and gets the resources through other means.
Unless your Pod runs specific queries, e.g. code written in Python that uses the Kubernetes Python client library, or you use the kubectl command from within such a Pod, you won't see it making any queries to the kubernetes API, as all the queries needed for its creation are performed by you, with the permissions given to your user.

error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"

https://codelabs.developers.google.com/codelabs/cloud-mongodb-statefulset/index.html?index=..%2F..index#5
error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"
The following command causes the above response in google cloud shell:
kubectl apply -f mongo-statefulset.yaml
I am working on deploying the MongoDB StatefulSet with sidecar and am following the instructions to a T in this demo, but I received the above error. Does anyone have an explanation for the error, or know a way to deploy a MongoDB StatefulSet in GKE?
mongo-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
Changing apiVersion in the yaml file to v1 returns a similar error:
error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "v1"
Explanation:
The apiVersion of a particular kubernetes resource is subject to change over time. The fact that StatefulSet once used apiVersion: apps/v1beta1 doesn't mean it will use it forever.
As dany L already suggested in his answer, apps/v1beta1 has most probably already been deprecated in the version of kubernetes you're using. The mentioned changes in supported api versions were introduced quite a long time ago (the article was published on Thursday, July 18, 2019), so chances are that you're using a version newer than 1.15.
I'd like to put it also in the body of the answer, to make it clearly visible without the need to additionally search through the docs. So what actually happened when 1.16 was released was (among other major changes):
StatefulSet in the apps/v1beta1 and apps/v1beta2 API versions is no longer served.
Migrate to use the apps/v1 API version, available since v1.9. Existing persisted data can be retrieved/updated via the new version.
Solution:
Workaround: create your cluster with version 1.15 and it should work.
As to the above workaround, I agree that it is a workaround and of course it will work, but this is not a recommended approach. New kubernetes versions are developed and released for a reason. Apart from changes in APIs, many other important things are introduced: bugs are fixed, existing functionalities are improved and new ones are added. So there is no point in using something that will soon be deprecated anyway.
Instead of that you should use the currently supported/required apiVersion for the particular kubernetes resource, in your case StatefulSet.
The easiest way to check which apiVersion your kubernetes installation expects for StatefulSet is by running:
kubectl explain statefulset
which will tell you (among many other interesting things) which apiVersion it is supposed to use:
KIND: StatefulSet
VERSION: apps/v1
...
To summarize your particular case:
If you're using a kubernetes version newer than 1.15, edit your mongo-statefulset.yaml and replace apiVersion: apps/v1beta1 with the currently supported apiVersion: apps/v1.
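Keep in mind that apps/v1 also makes spec.selector required for StatefulSets, so after bumping the apiVersion you will most likely need to add a selector matching the template labels. A minimal sketch of the top of the adjusted manifest (the rest stays as in your file):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      role: mongo
      environment: test
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    ...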
Possible explanation.
Workaround: create your cluster with version 1.15 and it should work.

how to build DSN env from several ConfigMap resources?

In order for a service to work, it needs an environment variable called DSN which resolves to something like postgres://user:password@postgres.svc.cluster.local:5432/database. I built this value with a ConfigMap resource:
apiVersion: v1
kind: ConfigMap
metadata:
  name: idp-config
  namespace: diary
data:
  DSN: postgres://user:password@postgres.svc.cluster.local:5432/database
This ConfigMap is mounted as an environment variable in my service Pod. Since the user and password values are different and these PostgreSQL credentials live in other k8s resources (a Secret and a ConfigMap), how can I properly build this DSN environment variable in a k8s resource yaml so my service can connect to the database?
Digging into the Kubernetes docs, I was able to find the answer. According to Define Environment Variables for a Container:
Environment variables that you define in a Pod’s configuration can be used elsewhere in the configuration, for example in commands and arguments that you set for the Pod’s containers. In the example configuration below, the GREETING, HONORIFIC, and NAME environment variables are set to Warm greetings to, The Most Honorable, and Kubernetes, respectively. Those environment variables are then used in the CLI arguments passed to the env-print-demo container.
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: GREETING
      value: "Warm greetings to"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(GREETING) $(HONORIFIC) $(NAME)"]