Unable to deploy MinIO in a Kubernetes cluster using Helm

I am trying to deploy MinIO in Kubernetes using the Helm stable chart.
When I check the status of the release with
helm status minio
the desired pod capacity is 4, but the current is 0.
I looked through journalctl for any logs from the kubelet, but found none.
I have attached the rendered Helm templates below; can someone please point out what I am doing wrong?
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
type: Opaque
data:
  accesskey: RFJMVEFEQU1DRjNUQTVVTVhOMDY=
  secretkey: bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==
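Secret values are base64-encoded, so when credentials misbehave it can help to decode them and confirm what the pods will actually receive. A quick check against the Secret above, using nothing but `base64`:

```shell
# Decode the accesskey/secretkey values from the rendered Secret above
# to confirm the credentials the MinIO pods will receive.
ACCESS=$(echo 'RFJMVEFEQU1DRjNUQTVVTVhOMDY=' | base64 -d)
SECRET=$(echo 'bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==' | base64 -d)
echo "accesskey: $ACCESS"   # prints: accesskey: DRLTADAMCF3TA5UMXN06
echo "secretkey: $SECRET"
```

These decoded values are what ends up in the `MINIO_ACCESS_KEY`/`MINIO_SECRET_KEY` environment variables of the StatefulSet below.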
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
data:
  initialize: |-
    #!/bin/sh
    set -e ; # Have script exit in the event of a failed command.

    # connectToMinio
    #   Use a check-sleep-check loop to wait for Minio service to be available
    connectToMinio() {
      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
      set -e ; # fail if we can't read the keys.
      ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
      set +e ; # The connections to minio are allowed to fail.
      echo "Connecting to Minio server: http://$MINIO_ENDPOINT:$MINIO_PORT" ;
      MC_COMMAND="mc config host add myminio http://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
      $MC_COMMAND ;
      STATUS=$? ;
      until [ $STATUS = 0 ]
      do
        ATTEMPTS=`expr $ATTEMPTS + 1` ;
        echo "Failed attempts: $ATTEMPTS" ;
        if [ $ATTEMPTS -gt $LIMIT ]; then
          exit 1 ;
        fi ;
        sleep 2 ; # 2 second intervals between attempts
        $MC_COMMAND ;
        STATUS=$? ;
      done ;
      set -e ; # reset `e` as active
      return 0
    }

    # checkBucketExists ($bucket)
    #   Check if the bucket exists, by using the exit code of `mc ls`
    checkBucketExists() {
      BUCKET=$1
      CMD=$(/usr/bin/mc ls myminio/$BUCKET > /dev/null 2>&1)
      return $?
    }

    # createBucket ($bucket, $policy, $purge)
    #   Ensure bucket exists, purging if asked to
    createBucket() {
      BUCKET=$1
      POLICY=$2
      PURGE=$3

      # Purge the bucket, if set & exists
      # Since PURGE is user input, check explicitly for `true`
      if [ $PURGE = true ]; then
        if checkBucketExists $BUCKET ; then
          echo "Purging bucket '$BUCKET'."
          set +e ; # don't exit if this fails
          /usr/bin/mc rm -r --force myminio/$BUCKET
          set -e ; # reset `e` as active
        else
          echo "Bucket '$BUCKET' does not exist, skipping purge."
        fi
      fi

      # Create the bucket if it does not exist
      if ! checkBucketExists $BUCKET ; then
        echo "Creating bucket '$BUCKET'"
        /usr/bin/mc mb myminio/$BUCKET
      else
        echo "Bucket '$BUCKET' already exists."
      fi

      # At this point, the bucket should exist, skip checking for existence
      # Set policy on the bucket
      echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
      /usr/bin/mc policy $POLICY myminio/$BUCKET
    }

    # Try connecting to Minio instance
    connectToMinio
    # Create the bucket
    createBucket bucket none false
  config.json: |-
    {
      "version": "26",
      "credential": {
        "accessKey": "DR06",
        "secretKey": "lt0ZxGkw"
      },
      "region": "us-east-1",
      "browser": "on",
      "worm": "off",
      "domain": "",
      "storageclass": {
        "standard": "",
        "rrs": ""
      },
      "cache": {
        "drives": [],
        "expiry": 90,
        "maxuse": 80,
        "exclude": []
      },
      "notify": {
        "amqp": {
          "1": {
            "enable": false,
            "url": "",
            "exchange": "",
            "routingKey": "",
            "exchangeType": "",
            "deliveryMode": 0,
            "mandatory": false,
            "immediate": false,
            "durable": false,
            "internal": false,
            "noWait": false,
            "autoDeleted": false
          }
        },
        "nats": {
          "1": {
            "enable": false,
            "address": "",
            "subject": "",
            "username": "",
            "password": "",
            "token": "",
            "secure": false,
            "pingInterval": 0,
            "streaming": {
              "enable": false,
              "clusterID": "",
              "clientID": "",
              "async": false,
              "maxPubAcksInflight": 0
            }
          }
        },
        "elasticsearch": {
          "1": {
            "enable": false,
            "format": "namespace",
            "url": "",
            "index": ""
          }
        },
        "redis": {
          "1": {
            "enable": false,
            "format": "namespace",
            "address": "",
            "password": "",
            "key": ""
          }
        },
        "postgresql": {
          "1": {
            "enable": false,
            "format": "namespace",
            "connectionString": "",
            "table": "",
            "host": "",
            "port": "",
            "user": "",
            "password": "",
            "database": ""
          }
        },
        "kafka": {
          "1": {
            "enable": false,
            "brokers": null,
            "topic": ""
          }
        },
        "webhook": {
          "1": {
            "enable": false,
            "endpoint": ""
          }
        },
        "mysql": {
          "1": {
            "enable": false,
            "format": "namespace",
            "dsnString": "",
            "table": "",
            "host": "",
            "port": "",
            "user": "",
            "password": "",
            "database": ""
          }
        },
        "mqtt": {
          "1": {
            "enable": false,
            "broker": "",
            "topic": "",
            "qos": 0,
            "clientId": "",
            "username": "",
            "password": "",
            "reconnectInterval": 0,
            "keepAliveInterval": 0
          }
        }
      }
    }
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: service
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
    release: RELEASE-NAME
---
# Source: minio/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
spec:
  serviceName: RELEASE-NAME-minio
  replicas: 4
  selector:
    matchLabels:
      app: minio
      release: RELEASE-NAME
  template:
    metadata:
      name: RELEASE-NAME-minio
      labels:
        app: minio
        release: RELEASE-NAME
    spec:
      containers:
        - name: minio
          image: node1:5000/minio/minio:RELEASE.2018-09-01T00-38-25Z
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh",
            "-ce",
            "cp /tmp/config.json &&
            /usr/bin/docker-entrypoint.sh minio -C server
            http://RELEASE-NAME-minio-0.RELEASE-NAME-minio.default.svc.cluster.local/export
            http://RELEASE-NAME-minio-1.RELEASE-NAME-minio.default.svc.cluster.local/export
            http://RELEASE-NAME-minio-2.RELEASE-NAME-minio.default.svc.cluster.local/export
            http://RELEASE-NAME-minio-3.RELEASE-NAME-minio.default.svc.cluster.local/export" ]
          volumeMounts:
            - name: export
              mountPath: /export
            - name: minio-server-config
              mountPath: "/tmp/config.json"
              subPath: config.json
            - name: minio-config-dir
              mountPath:
          ports:
            - name: service
              containerPort: 9000
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: RELEASE-NAME-minio
                  key: accesskey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: RELEASE-NAME-minio
                  key: secretkey
          livenessProbe:
            tcpSocket:
              port: service
            initialDelaySeconds: 5
            periodSeconds: 30
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: service
            periodSeconds: 15
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
      volumes:
        - name: minio-user
          secret:
            secretName: RELEASE-NAME-minio
        - name: minio-server-config
          configMap:
            name: RELEASE-NAME-minio
        - name: minio-config-dir
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: export
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: local-fast
        resources:
          requests:
            storage: 49Gi
---
# Source: minio/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
  tls:
    - hosts:
        - minio.sample.com
      secretName: tls-secret
  rules:
    - host: minio.sample.com
      http:
        paths:
          - path: /
            backend:
              serviceName: RELEASE-NAME-minio
              servicePort: 9000

I suspect you are not getting the persistent volume. Check your kube-controller-manager logs on your active master. How to do that varies depending on the cloud you are using: AWS, GCP, Azure, OpenStack, etc. The kube-controller-manager usually runs in a Docker container on the master, so you can do something like:
docker logs <kube-controller-manager-container>
Also, check:
kubectl get pvc
kubectl get pv
Hope it helps.

A bit more digging gave me the answer: the StatefulSet was deployed, but the pods were not created.
kubectl describe statefulset -n <namespace> minio
The events said it was looking for a mount path, which was "" (empty, in that version of the charts); setting it solved my issue.
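For comparison, a sketch of what the fixed mount looks like. The `/root/.minio/` path is an assumption based on later versions of the stable/minio chart (it is the directory the `-C` flag in the container command should point at), not something taken from the rendered templates above:

```yaml
# Sketch only: in the rendered StatefulSet above this mountPath was empty.
# /root/.minio/ is the config dir used by newer chart versions (assumption).
volumeMounts:
  - name: minio-config-dir
    mountPath: /root/.minio/
```

With the path filled in, the kubelet no longer rejects the pod spec and the StatefulSet can create its pods.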

Related

dapr.io/config annotation to access a secret

I am trying to deploy a k8s pod with a Dapr sidecar container.
I want the Dapr container to access a secret key named "MY_KEY" stored in a secret called my-secrets.
I wrote this manifest for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: my-app
        dapr.io/app-port: "80"
        dapr.io/config: |
          {
            "components": [
              {
                "name": "secrets",
                "value": {
                  "MY_KEY": {
                    "secretName": "my-secrets",
                    "key": "MY_KEY"
                  }
                }
              }
            ]
          }
    spec:
      containers:
        - name: my_container
          image: <<image_source>>
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: my-secrets
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Development
            - name: ASPNETCORE_URLS
              value: http://+:80
      imagePullSecrets:
        - name: <<image_pull_secret>>
but it seems that it cannot create the configuration; the dapr container log is:
time="2023-01-24T09:05:50.927484097Z" level=info msg="starting Dapr Runtime -- version 1.9.5 -- commit f5f847eef8721d85f115729ee9efa820fe7c4cd3" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.927525344Z" level=info msg="log level set to: info" app_id=emy-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.927709269Z" level=info msg="metrics server started on :9090/" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.metrics type=log ver=1.9.5
time="2023-01-24T09:05:50.92795239Z" level=info msg="Initializing the operator client (config: {
"components": [
{
"name": "secrets",
"value": {
"MY_KEY": {
"secretName": "my-secrets",
"key": "MY_KEY"
}
}
}
]
}
)" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.93737904Z" level=fatal msg="error loading configuration: rpc error: code = Unknown desc = error getting configuration: Configuration.dapr.io "{
"components": [
{
"name": "secrets",
"value": {
"MY_KEY": {
"secretName": "my-secrets",
"key": "MY_KEY"
}
}
}
]
}" not found" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
Can anyone tell me what I am doing wrong?
Thanks in advance for your help.

Containerized logic app not working when deployed to AKS

We are trying to deploy a logic app as a containerized workload in AKS. Following is our Dockerfile:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0.14492-appservice
ENV AzureWebJobsStorage=<StorageAccount connection string>
ENV AZURE_FUNCTIONS_ENVIRONMENT Development
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
ENV FUNCTIONS_V2_COMPATIBILITY_MODE=true
COPY ./bin/release/netcoreapp3.1/publish/ /home/site/wwwroot
Following is our deployment manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pfna-pgt-sf-pdfextract
  namespace: canary
  labels:
    app: pfna-pgt-sf-pdfextract
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pfna-pgt-sf-pdfextract
  template:
    metadata:
      labels:
        app: pfna-pgt-sf-pdfextract
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
        - name: pfna-pgt-sf-pdfextract
          image: "image_link"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          env:
            - name: AzureBlob_connectionString
              value: <connection_string>
            - name: AzureWebJobsStorage
              value: <connection_string>
      imagePullSecrets:
        - name: sbx-acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: pfna-pgt-sf-pdfextract
  namespace: canary
  labels:
    app: pfna-pgt-sf-pdfextract
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http-pfna-pgt-sf-pdfextract
  selector:
    app: pfna-pgt-sf-pdfextract
Following is connections.json:
{
  "serviceProviderConnections": {
    "AzureBlob": {
      "parameterValues": {
        "connectionString": "@appsetting('AzureWebJobsStorage')"
      },
      "serviceProvider": {
        "id": "/serviceProviders/AzureBlob"
      },
      "displayName": "localAzureBlob"
    }
  },
  "managedApiConnections": {}
}
Following is the host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Backend.VariableOperation.MaximumStatelessVariableSize": "5000000"
      }
    }
  }
}
The image runs successfully in Docker Desktop, but when deployed to AKS we get 'Function host is not running'.
Please help resolve this.
You need to specify WEBSITE_HOSTNAME as well (it doesn't matter what it is, it just needs to be set).
That being said, as of today there is another issue that is causing the function host to not start (libadvapi32.dll).
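A minimal sketch of that change against the Deployment above; the value itself is an arbitrary placeholder, per the note that only its presence matters:

```yaml
# Add to the container's env list; the value is arbitrary (assumption:
# the host only checks that the variable is set).
env:
  - name: WEBSITE_HOSTNAME
    value: "localhost"
```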

the server could not find the metric nginx_vts_server_requests_per_second for pods

I installed kube-prometheus-0.9.0 and want to deploy a sample application on which to test Prometheus-metrics-based autoscaling, with the following resource manifest file (hpa-prome-demo.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
        - name: nginx-demo
          image: cnych/nginx-vts:v1.0
          resources:
            limits:
              cpu: 50m
            requests:
              cpu: 50m
          ports:
            - containerPort: 80
              name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/status/format/prometheus"
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
  selector:
    app: nginx-server
  type: NodePort
For testing purposes I used a NodePort Service, and luckily I can get the HTTP response after applying the deployment. Then I installed the
Prometheus Adapter via its Helm chart, creating a new hpa-prome-adapter-values.yaml file to override the default values, as follows:
rules:
  default: false
  custom:
    - seriesQuery: 'nginx_vts_server_requests_total'
      resources:
        overrides:
          kubernetes_namespace:
            resource: namespace
          kubernetes_pod_name:
            resource: pod
      name:
        matches: "^(.*)_total"
        as: "${1}_per_second"
      metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
I added a custom rule and specified the address of Prometheus, then installed Prometheus-Adapter with the following command:
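For illustration, for a pods query the adapter expands the `metricsQuery` template into PromQL roughly like this (the concrete label matchers are generated by the adapter at query time; the ones below are stand-ins, using the label names from the `overrides` section):

```promql
# Hypothetical expansion of the rule's metricsQuery:
#   <<.Series>>        -> nginx_vts_server_requests_total
#   <<.LabelMatchers>> -> the namespace/pod matchers for the request
#   <<.GroupBy>>       -> kubernetes_pod_name
sum(rate(nginx_vts_server_requests_total{kubernetes_namespace="default",kubernetes_pod_name=~".+"}[1m])) by (kubernetes_pod_name)
```

This only works if the series actually carries the `kubernetes_namespace` and `kubernetes_pod_name` labels named in the overrides, which is exactly what goes wrong below.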
$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Finally, the adapter was installed successfully, and I can get the HTTP response, as follows.
$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
But it was supposed to be like this,
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
Why can't I get the metric pods/nginx_vts_server_requests_per_second? As a result, the query below also failed:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
Could anybody please help? Many thanks.
ENV:
all Prometheus charts installed via Helm from prometheus-community https://prometheus-community.github.io/helm-chart
k8s cluster enabled by Docker for Mac
Solution:
I met the same problem. From the Prometheus UI, I found the metric had a namespace label but no pod label, as below:
nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
I thought Prometheus might not be using pod as a label, so I checked the Prometheus config and found:
121 - action: replace
122 source_labels:
123 - __meta_kubernetes_pod_node_name
124 target_label: node
Then I searched
https://prometheus.io/docs/prometheus/latest/configuration/configuration/ and added a similar block below every __meta_kubernetes_pod_node_name occurrence I found (i.e. in 2 places):
125 - action: replace
126 source_labels:
127 - __meta_kubernetes_pod_name
128 target_label: pod
After a while, the ConfigMap reloaded, and the UI and the API could find the pod label:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
It is worth knowing that with the kube-prometheus repository you can also install components such as the Prometheus Adapter for the Kubernetes Metrics APIs, so there is no need to install it separately with Helm.
I will use your hpa-prome-demo.yaml manifest file to demonstrate how to monitor the nginx_vts_server_requests_total metric.
First of all, we need to install Prometheus and the Prometheus Adapter with the appropriate configuration, as described step by step below.
Clone the kube-prometheus repository and refer to the Kubernetes compatibility matrix in order to choose a compatible branch:
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ git checkout release-0.9
Install the jb, jsonnet and gojsontoyaml tools:
$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest
Uncomment the (import 'kube-prometheus/addons/custom-metrics.libsonnet') + line from the example.jsonnet file:
$ cat example.jsonnet
local kp =
(import 'kube-prometheus/main.libsonnet') +
// Uncomment the following imports to enable its patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
// (import 'kube-prometheus/addons/node-ports.libsonnet') +
// (import 'kube-prometheus/addons/static-etcd.libsonnet') +
(import 'kube-prometheus/addons/custom-metrics.libsonnet') + <--- This line
// (import 'kube-prometheus/addons/external-metrics.libsonnet') +
...
Add the following rule to the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file in the rules+ section:
{
  seriesQuery: "nginx_vts_server_requests_total",
  resources: {
    overrides: {
      namespace: { resource: 'namespace' },
      pod: { resource: 'pod' },
    },
  },
  name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
  metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
},
After this update, the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file should look like this:
NOTE: This is not the entire file, just an important part of it.
$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.common.namespace,
      // Rules for custom-metrics
      config+:: {
        rules+: [
          {
            seriesQuery: "nginx_vts_server_requests_total",
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
            metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
          },
...
Use the jsonnet-bundler update functionality to update the kube-prometheus dependency:
$ jb update
Compile the manifests:
$ ./build.sh example.jsonnet
Now simply use kubectl to install Prometheus and other components as per your configuration:
$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/
After configuring Prometheus, we can deploy a sample hpa-prom-demo Deployment:
NOTE: I've deleted the annotations because I'm going to use a ServiceMonitor to describe the set of targets to be monitored by Prometheus.
$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
        - name: nginx-demo
          image: cnych/nginx-vts:v1.0
          resources:
            limits:
              cpu: 50m
            requests:
              cpu: 50m
          ports:
            - containerPort: 80
              name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
  selector:
    app: nginx-server
  type: LoadBalancer
Next, create a ServiceMonitor that describes how to monitor our NGINX:
$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  selector:
    matchLabels:
      app: nginx-server
  endpoints:
    - interval: 15s
      path: "/status/format/prometheus"
      port: http
After waiting some time, let's check the hpa-prom-demo logs to make sure that it is being scraped correctly:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hpa-prom-demo-bbb6c65bb-49jsh 1/1 Running 0 35m
$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
...
Finally, we can check if our metrics work as expected:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
--
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "hpa-prom-demo-bbb6c65bb-49jsh",
        "apiVersion": "/v1"
      },
      "metricName": "nginx_vts_server_requests_per_second",
      "timestamp": "2022-02-04T09:32:59Z",
      "value": "533m",
      "selector": null
    }
  ]
}

Drone CI Stuck on Pending for arm

I'm trying to test CI/CD with Gitea and Drone, but builds are stuck in pending.
I was able to verify that my Gitea is connected to my drone-server.
Here is my .drone.yaml:
kind: pipeline
type: docker
name: arm64

platform:
  os: linux
  arch: arm64

steps:
  - name: test
    image: 'golang:1.10-alpine'
    commands:
      - go test
  - name: build
    image: 'golang:1.10-alpine'
    commands:
      - go build -o ./myapp
  - name: publish
    image: plugins/docker
    settings:
      username: mjayson
      password:
        from_secret: docker_pwd
      repo: mjayson/sample
      tags: latest
  - name: deliver
    image: sinlead/drone-kubectl
    settings:
      kubernetes_server:
        from_secret: k8s_server
      kubernetes_cert:
        from_secret: k8s_cert
      kubernetes_token:
        from_secret: k8s_token
    commands:
      - kubectl apply -f deployment.yml
I have set up Gitea and Drone in my k8s cluster. Configuration below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-config
  namespace: dev-ops
data:
  DRONE_GITEA_SERVER: 'http://192.168.1.150:30000'
  DRONE_GITEA_CLIENT_ID: '746a6cd1-cd31-4611-971b-e005bb80e662'
  DRONE_GITEA_CLIENT_SECRET: 'O-NpPnTiFyIGZwqN7aeNDqIWR1sGIEJj8Cehcl0CtVI='
  DRONE_RPC_SECRET: '1be6d1769148d95b5d04a84694cc0447'
  DRONE_SERVER_HOST: '192.168.1.150:30001'
  DRONE_SERVER_PROTO: 'http'
  DRONE_LOGS_TRACE: 'true'
  DRONE_LOGS_PRETTY: 'true'
  DRONE_LOGS_COLOR: 'true'
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: drone-server-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/infra/drone"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drone-server-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
kind: Service
apiVersion: v1
metadata:
  name: drone-server-service
spec:
  type: NodePort
  selector:
    app: drone-server
  ports:
    - name: drone-server-http
      port: 80
      targetPort: 80
      nodePort: 30001
    - name: drone-server-ssh
      port: 443
      targetPort: 443
      nodePort: 30003
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: drone-server-deployment
  labels:
    app: drone-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-server
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
        - name: drone-server
          image: drone/drone:1.9
          ports:
            - containerPort: 80
              name: gitea-http
            - containerPort: 443
              name: gitea-ssh
          envFrom:
            - configMapRef:
                name: drone-config
          volumeMounts:
            - name: pv-data
              mountPath: /data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: drone-server-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-deployment
  labels:
    app: drone-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner
  template:
    metadata:
      labels:
        app: drone-runner
    spec:
      containers:
        - name: drone-runner
          image: 'drone/drone-runner-kube:latest'
          ports:
            - containerPort: 3000
              name: runner-http
          env:
            - name: DRONE_RPC_HOST
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_SERVER_HOST
            - name: DRONE_RPC_PROTO
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_SERVER_PROTO
            - name: DRONE_RPC_SECRET
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_RPC_SECRET
            - name: DRONE_RUNNER_CAPACITY
              value: '2'
            - name: DRONE_LOGS_TRACE
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_LOGS_TRACE
            - name: DRONE_LOGS_PRETTY
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_LOGS_PRETTY
            - name: DRONE_LOGS_COLOR
              valueFrom:
                configMapKeyRef:
                  name: drone-config
                  key: DRONE_LOGS_COLOR
and here are the drone server logs:
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:16:27Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:16:57Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:17:07Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:17:37Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:17:47Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:18:17Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:18:27Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: context canceled",
"os": "",
"time": "2020-08-08T19:18:57Z",
"type": "kubernetes",
"variant": ""
}
{
"arch": "",
"kernel": "",
"kind": "pipeline",
"level": "debug",
"msg": "manager: request queue item",
"os": "",
"time": "2020-08-08T19:19:07Z",
"type": "kubernetes",
and my drone runner log:
time="2020-08-08T19:13:07Z" level=info msg="starting the server" addr=":3000"
time="2020-08-08T19:13:07Z" level=info msg="successfully pinged the remote server"
time="2020-08-08T19:13:07Z" level=info msg="polling the remote server" capacity=2 endpoint="http://192.168.1.150:30001" kind=pipeline type=kubernetes
I'm not sure how to deal with this, as it is my first time facing such an issue. I also tried updating the drone server image from 1 to 1.9, but still nothing happens.
I replaced 'drone/drone-runner-kube:latest' with 'drone/drone-runner-docker:1' and specified the mount point /var/run/docker.sock.
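A sketch of that swap against the runner Deployment above; the image tag and the hostPath mount of the Docker socket are the essential parts, the volume name is arbitrary:

```yaml
# Sketch only: replaces the drone-runner-kube container in the Deployment
# above with the Docker runner, which needs the host's Docker socket.
containers:
  - name: drone-runner
    image: 'drone/drone-runner-docker:1'
    volumeMounts:
      - name: docker-sock
        mountPath: /var/run/docker.sock
volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
```

The `env` entries from the original runner Deployment (RPC host, proto, secret, capacity) carry over unchanged.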

How to deploy keycloak on kubernetes with custom configuration?

I want to deploy Keycloak with the custom configuration below, applied before it starts:
a new realm
a role
a client
an admin user under the new realm
I am using the deployment file below to create the Keycloak pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: REALM
              value: "ntc"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
          volumeMounts:
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
and I am using the client.json and realm.json files below to generate a ConfigMap for Keycloak.
client.json
{
  "id": "7ec4ccce-d6ed-461f-8e95-ea98e4912b8c",
  "clientId": "ntc-app",
  "enabled": true,
  "clientAuthenticatorType": "client-secret",
  "secret": "0b360a88-df24-48fa-8e96-bf6577bbee95",
  "directAccessGrantsEnabled": true
}
realm.json
{
  "realm": "ntc",
  "id": "ntc",
  "enabled": "true",
  "revokeRefreshToken": true,
  "accessTokenLifespan": 900,
  "passwordPolicy": "length(8) and digits(1) and specialChars(1)",
  "roles": {
    "realm": [
      {
        "id": "c9253f52-1960-4c9d-af99-5facca0c0846",
        "name": "admin",
        "description": "admin role",
        "scopeParamRequired": false,
        "composite": false,
        "clientRole": false,
        "containerId": "ntc"
      },
      {
        "id": "1e7ed0c8-9585-44b0-92f8-59e472573461",
        "name": "user",
        "description": "user role",
        "scopeParamRequired": false,
        "composite": false,
        "clientRole": false,
        "containerId": "ntc"
      }
    ]
  }
}
Both files are saved under the elements folder, which is used in the command below to generate the ConfigMap:
kubectl create configmap keycloak-elements --from-file=elements
Still, I don't see any new realm, role or client created in the Keycloak console.
I think you need to import the realm and client as described here: http://www.mastertheboss.com/jboss-frameworks/keycloak/keycloak-with-docker
Some environment variables might help to accomplish the task.
When you set up Keycloak on Kubernetes, you only need to import the new realm (realm.json) and the corresponding clients (client.json) during the first run, so a Job should be created instead of adding the import to the Deployment.
Once the Job has run, the JSON is imported into the Keycloak database and the Job can be suspended. Adding the import to the Deployment would cause Keycloak to try to import the JSON files on every restart.
Please follow the steps in this blog post: https://blog.knoldus.com/migrate-keycloak-h2-database-to-postgres-on-kubernetes/
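A rough sketch of such a one-shot import Job, reusing the keycloak-elements ConfigMap from the question. The `-Dkeycloak.migration.*` flags are the import mechanism of the WildFly-based Keycloak distributions; the mount path, file naming expected by the `dir` provider, and whether your JSON files are in the exact export format it expects are all assumptions to verify against the blog post above:

```yaml
# Sketch only: a one-shot realm-import Job (names and paths are assumptions).
apiVersion: batch/v1
kind: Job
metadata:
  name: keycloak-realm-import
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: import
          image: quay.io/keycloak/keycloak:10.0.1
          args:
            - "-Dkeycloak.migration.action=import"
            - "-Dkeycloak.migration.provider=dir"
            - "-Dkeycloak.migration.dir=/elements"
          volumeMounts:
            - name: elements
              mountPath: /elements
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
```

Note that the container keeps serving after the import finishes, so, as the answer says, the Job should be suspended or deleted once the import is confirmed in the admin console.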