I'm following this
https://github.com/mongodb/mongodb-enterprise-kubernetes
and
https://docs.opsmanager.mongodb.com/current/tutorial/install-k8s-operator/
to deploy MongoDB inside a Kubernetes cluster on DigitalOcean.
So far everything has worked except the last step: deploying MongoDB itself. I'm trying to do it as suggested in the documentation:
---
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: mongodb-rs
  namespace: mongodb
spec:
  members: 3
  version: 4.0.4
  persistent: true
  project: project-0
  credentials: mongodb-do-ops
It doesn't work. The MongoDbReplicaSet resource is created, but no pods or services are deployed as described in the docs.
kubectl --kubeconfig="iniside-k8s-test-kubeconfig.yaml" describe MongoDbReplicaSet mongodb-rs -n mongodb
Name:         mongodb-rs
Namespace:    mongodb
Labels:       <none>
Annotations:  <none>
API Version:  mongodb.com/v1
Kind:         MongoDbReplicaSet
Metadata:
  Creation Timestamp:  2018-11-21T21:35:30Z
  Generation:          1
  Resource Version:    2948350
  Self Link:           /apis/mongodb.com/v1/namespaces/mongodb/mongodbreplicasets/mongodb-rs
  UID:                 5e83c7b0-edd5-11e8-88f5-be6ffc4e4dde
Spec:
  Credentials:  mongodb-do-ops
  Members:      3
  Persistent:   true
  Project:      project-0
  Version:      4.0.4
Events:  <none>
I got it working.
As stated in the documentation here:
https://docs.opsmanager.mongodb.com/current/tutorial/install-k8s-operator/
data.projectName is not optional. After looking at the operator logs, the operator couldn't create the replica set deployment because projectName was missing from the ConfigMap.
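For reference, here is a minimal sketch of what that project ConfigMap can look like with data.projectName set. The names and the Ops Manager base URL below are placeholders for illustration, not values from my setup:
apiVersion: v1
kind: ConfigMap
metadata:
  name: project-0
  namespace: mongodb
data:
  projectName: project-0                          # required; the operator fails without it
  baseUrl: https://ops-manager.example.com:8443   # placeholder Ops Manager URL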
I used the following guide to set up my chaostoolkit cluster: https://chaostoolkit.org/deployment/k8s/operator/
I am attempting to kill a pod using Kubernetes, however I get the following error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\" cannot list resource \"pods\" in API group \"\" in the namespace \"task-dispatcher\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
I set my serviceAccountName to a service account with an RBAC role that I created, but for some reason Kubernetes defaults to "system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb".
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-run
data:
  experiment.yaml: |
    ---
    version: 1.0.0
    title: Terminate Pod Experiment
    description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time.
    tags: ["kubernetes"]
    secrets:
      k8s:
        KUBERNETES_CONTEXT: "docker-desktop"
    method:
    - type: action
      name: terminate-k8s-pod
      provider:
        type: python
        module: chaosk8s.pod.actions
        func: terminate_pods
        arguments:
          label_selector: ''
          name_pattern: my-release-rabbitmq-[0-9]$
          rand: true
          ns: default
---
apiVersion: chaostoolkit.org/v1
kind: ChaosToolkitExperiment
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-crd
spec:
  serviceAccountName: test-user
  automountServiceAccountToken: false
  pod:
    image: chaostoolkit/chaostoolkit:full
    imagePullPolicy: IfNotPresent
    experiment:
      configMapName: my-chaos-exp
      configMapExperimentFileName: experiment.yaml
    restartPolicy: Never
The error you shared shows the run using the default generated service account ("chaostoolkit-b3af262edb"). It looks like the role associated with it does not have the proper permissions.
The service account "test-user" that is used in the ChaosToolkitExperiment definition should have a role granting access to list and delete pods.
Please specify a service account with the proper role access.
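For illustration, a sketch of a Role and RoleBinding that would give test-user permission to list and delete pods in the task-dispatcher namespace; the resource names are made up and the namespaces are taken from the error message, so adjust them to your setup:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-killer            # hypothetical name
  namespace: task-dispatcher  # namespace from the 403 error
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-killer-binding    # hypothetical name
  namespace: task-dispatcher
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: chaostoolkit-run # namespace where the experiment pod runs
roleRef:
  kind: Role
  name: pod-killer
  apiGroup: rbac.authorization.k8s.io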
I am using minikube (docker driver) with kubectl to test an Agones fleet deployment. Upon running kubectl apply -f lobby-fleet.yml (and when I try to apply any other Agones YAML file) I receive the following error:
error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
lobby-fleet.yml:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
I am running this on WSL2, but I receive the same error when using the Windows installation of kubectl (through choco). I have minikube installed and running for Ubuntu in WSL2 using Docker.
I am still new to using k8s, so apologies if the answer to this question is clear, I just couldn't find it elsewhere.
Thanks in advance!
In order to create a resource of kind Fleet, you first have to apply the Custom Resource Definition (CRD) that defines what a Fleet is.
I've looked into the YAML installation instructions for Agones, and the install manifest contains the CRDs; you can find them by searching for kind: CustomResourceDefinition.
I recommend first trying to install according to the instructions in the docs.
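For example, a rough sketch of the YAML-based install described in the Agones docs followed by re-applying the fleet; the release version in the URL is just an example and the exact flags may differ between versions, so copy the command from the docs for your release:
kubectl create namespace agones-system
kubectl apply --server-side -f https://raw.githubusercontent.com/googleforgames/agones/release-1.31.0/install/yaml/install.yaml
kubectl apply -f lobby-fleet.yml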
I am new to Kubernetes. I am trying to mimic a behavior a bit like what I do with docker-compose when I serve a Couchbase database in a Docker container.
couchbase:
  image: couchbase
  volumes:
    - ./couchbase:/opt/couchbase/var
  ports:
    - 8091-8096:8091-8096
    - 11210-11211:11210-11211
I managed to create a cluster on my local machine using a tool called "kind":
kind create cluster --name my-cluster
kubectl config use-context my-cluster
Then I am trying to use that cluster to deploy a Couchbase service.
I created a file named couchbase.yaml with the following content (I am trying to mimic what I do with my docker-compose file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchbase
  namespace: my-project
  labels:
    platform: couchbase
spec:
  replicas: 1
  selector:
    matchLabels:
      platform: couchbase
  template:
    metadata:
      labels:
        platform: couchbase
    spec:
      volumes:
      - name: couchbase-data
        hostPath:
          # directory location on host
          path: /home/me/my-project/couchbase
          # this field is optional
          type: Directory
      containers:
      - name: couchbase
        image: couchbase
        volumeMounts:
        - mountPath: /opt/couchbase/var
          name: couchbase-data
Then I start the deployment like this:
kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
However, my deployment never actually starts:
kubectl get deployments -n my-project couchbase
NAME READY UP-TO-DATE AVAILABLE AGE
couchbase 0/1 1 0 6m14s
And when I look for the logs I see this:
kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating
As OP mentioned in a comment, the issue was solved using an extra mount as explained in the documentation: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
Here is OP's comment, but formatted so it's more readable:
the error shows up when I run this command:
kubectl describe pods -n my-project couchbase
I could fix it by creating a new kind cluster:
kind create cluster --config cluster.yaml
Passing this content in cluster.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/me/my-project/couchbase
    containerPath: /couchbase
In couchbase.yaml the path becomes path: /couchbase of course.
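To make that concrete, here is a sketch of how the volumes section of couchbase.yaml from the question would look after the change, with only the hostPath updated to the path mounted into the kind node:
      volumes:
      - name: couchbase-data
        hostPath:
          # path as seen inside the kind node (the extraMounts containerPath)
          path: /couchbase
          type: Directory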
I have a docker-compose.yml file we have been using to set up our development environment.
The file declares some services, all of them more or less following the same pattern:
services:
  service_1:
    image: some_image_1
    environment:
      - ENV_VAR_1
      - ENV_VAR_2
    depends_on:
      - another_service_of_the_same_compose_file
With a view to migrating to Kubernetes, running:
kompose convert -f docker-compose.yml
produces, for each service, a pair of deployment/service manifests.
Two questions about the generated deployments:
1.
the examples in the official documentation seem to hint that the selector field is needed for a Deployment to be aware of the pods to manage.
However, the deployment manifests created do not include a selector field, and look as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.6.0 (e4adfef)
  creationTimestamp: null
  labels:
    io.kompose.service: service_1
  name: service_1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: service_1
    spec:
      containers:
      - image: my_image
        name: my_image_name
        resources: {}
      restartPolicy: Always
status: {}
2.
the apiVersion in the generated deployment manifest is extensions/v1beta1, however the examples in the Deployments section of the official documentation default to apps/v1.
The recommendation seems to be
for versions before 1.9.0 use apps/v1beta2
Which is the correct version to use? (I'm using Kubernetes 1.8.)
Let's begin by saying that Kubernetes and Kompose are two different, independent systems. Kompose tries to map everything in the Compose file onto Kubernetes resources.
At the moment the selector fields are generated by Kubernetes; in the future this might be handled by Kompose itself.
If you would like to check your selector fields, use the following commands:
kubectl get deploy
kubectl describe deploy DEPLOY_NAME
After Kubernetes 1.9, all of the long-running workload objects are part of the apps group.
We’re excited to announce General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.
I have attached the link below for further reading:
kubernetes-19-workloads
As a selector field wasn't required for Deployments in the extensions/v1beta1 API, Kompose doesn't set one (the selector basically tells the Deployment which pods, matched by their labels, it should manage).
I wouldn't edit the apiVersion, because Kompose assumes that version when defining the rest of the resource. Also, if you are using Kubernetes 1.8, read the 1.8 docs: https://v1-8.docs.kubernetes.io/docs/
In Kubernetes 1.16 the Deployment's spec.selector became required. Kompose (as of 1.20) does not yet add it automatically, so you will have to add this to every *-deployment.yaml file it creates:
  selector:
    matchLabels:
      io.kompose.service: alignment-processor
If you use a JetBrains IDE, you can use the following search/replace patterns on the folder where you put the conversion results:
Search for this multiline regexp:
    io.kompose.service: (.*)
  name: \1
spec:
  replicas: 1
  template:
Replace with this pattern:
    io.kompose.service: $1
  name: $1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: $1
  template:
The (.*) captures the name of the service, the \1 matches the (first and only) capture, and the $1 substitutes the capture in the replacement.
You will also have to substitute all extensions/v1beta1 with apps/v1 in all *-deployment.yaml files.
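If you prefer the command line, the same substitution can be done with a quick sed sketch (assuming GNU sed and that the generated manifests sit in the current directory):
sed -i 's|extensions/v1beta1|apps/v1|g' *-deployment.yaml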
I also found that secrets have to be massaged a bit, but this goes beyond the scope of this question.
After installing Heapster in my k8s cluster, I got the following errors:
2016-04-09T16:08:27.437604037Z I0409 16:08:27.433278 1 heapster.go:60] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086
2016-04-09T16:08:27.437781968Z I0409 16:08:27.433390 1 heapster.go:61] Heapster version 1.1.0-beta1
2016-04-09T16:08:27.437799021Z F0409 16:08:27.433556 1 heapster.go:73] Failed to create source provide: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
Security is low priority for my demo, so I'd like to disable it for now. My apiserver also does not have security enabled. Any suggestions?
Check out the Heapster docs, where it is described how to configure the source without security:
https://github.com/kubernetes/heapster/blob/master/docs/source-configuration.md
--source=kubernetes:http://<YOUR_API_SERVER>?inClusterConfig=false
Not sure if that will work in your setup, but it works here (on-premise Kubernetes install; no GCP involved :) ).
Best wishes,
Matthias
Start the apiserver with "--admission_control=ServiceAccount", so it will create a secret for the default service account (tested with Kubernetes 1.2).
Use "http" instead of "https" to avoid security.
NOTE: this is only for demoing the feature; it should not be used in production.
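For illustration, a rough sketch of the relevant apiserver flags under those assumptions (flag names from the Kubernetes 1.2 era; adjust to however you launch the apiserver):
# --admission-control=ServiceAccount makes the controller create the token secret for the default service account
# --insecure-port serves plain HTTP (demo only)
kube-apiserver \
  --admission-control=ServiceAccount \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  <your other existing flags>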
If you didn't enable HTTPS for the API server, you might see this error. Check Matthias's answer for the official guide. Below is the YAML file for the Heapster replication controller I used; replace the API server IP and port with yours.
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    k8s-app: heapster
    name: heapster
    version: v6
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: heapster
    version: v6
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v6
    spec:
      containers:
      - name: heapster
        image: kubernetes/heapster:canary
        imagePullPolicy: Always
        command:
        - /heapster
        - --source=kubernetes:http://<api server ip>:<port>?inClusterConfig=false
        - --sink=influxdb:http://monitoring-influxdb:8086