Is it possible to have multiple handlers in a container probe?
Something like
livenessProbe: {
  httpGet: {
    path: "/ping",
    port: 9099
  },
  exec: {
    command: [
      "verify-correctness.sh",
    ]
  }
}
Update:
On Kubernetes 1.6.x, kubectl apply for a config like this returns:
spec.template.spec.containers[0].livenessProbe.httpGet: Forbidden: may not specify more than 1 handler type
So maybe it's not supported?
Update 2:
After Ara Pulido's answer, I combined the httpGet check into the exec command like this:
"livenessProbe": {
"exec": {
"command": [
"sh",
"-c",
"reply=$(curl -s -o /dev/null -w %{http_code} http://127.0.0.1:9099/ping); if [ \"$reply\" -lt 200 -o \"$reply\" -ge 400 ]; then exit 1; fi; verify-correctness.sh;"
]
}
}
It is not supported.
There is an open issue about this, which contains several workarounds people use.
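For reference, the probe API enforces exactly one handler. If you want to see the allowed handler fields yourself, kubectl explain shows them (just an inspection aid, not from the original answers):
# httpGet, exec, and tcpSocket appear as sibling fields in the output;
# the API accepts exactly one of them per probe.
kubectl explain pod.spec.containers.livenessProbe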
I have CircleCI set up and normally running fine; it creates deployments for me. Today I suddenly had an issue with the deployment-creation step, due to an error related to Kubernetes.
My config.yml follows the doc from https://circleci.com/developer/orbs/orb/circleci/kubernetes
Here is the relevant setup in my config file:
version: 2.1
orbs:
  kube-orb: circleci/kubernetes@1.3.0

commands:
  docker-check:
    steps:
      - docker/check:
          docker-username: MY_USERNAME
          docker-password: MY_PASS
          registry: $DOCKER_REGISTRY

jobs:
  create-deployment:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: Name of the EKS cluster
        type: string
    steps:
      - checkout
      # It failed on this step
      - kube-orb/delete-resource:
          now: true
          resource-names: my-frontend-deployment
          resource-types: deployments
          wait: true
Below is a copy of the error log:
#!/bin/bash -eo pipefail
#!/bin/bash
RESOURCE_FILE_PATH=$(eval echo "$PARAM_RESOURCE_FILE_PATH")
RESOURCE_TYPES=$(eval echo "$PARAM_RESOURCE_TYPES")
RESOURCE_NAMES=$(eval echo "$PARAM_RESOURCE_NAMES")
LABEL_SELECTOR=$(eval echo "$PARAM_LABEL_SELECTOR")
ALL=$(eval echo "$PARAM_ALL")
CASCADE=$(eval echo "$PARAM_CASCADE")
FORCE=$(eval echo "$PARAM_FORCE")
GRACE_PERIOD=$(eval echo "$PARAM_GRACE_PERIOD")
IGNORE_NOT_FOUND=$(eval echo "$PARAM_IGNORE_NOT_FOUND")
NOW=$(eval echo "$PARAM_NOW")
WAIT=$(eval echo "$PARAM_WAIT")
NAMESPACE=$(eval echo "$PARAM_NAMESPACE")
DRY_RUN=$(eval echo "$PARAM_DRY_RUN")
KUSTOMIZE=$(eval echo "$PARAM_KUSTOMIZE")
if [ -n "${RESOURCE_FILE_PATH}" ]; then
    if [ "${KUSTOMIZE}" == "1" ]; then
        set -- "$@" -k
    else
        set -- "$@" -f
    fi
    set -- "$@" "${RESOURCE_FILE_PATH}"
elif [ -n "${RESOURCE_TYPES}" ]; then
    set -- "$@" "${RESOURCE_TYPES}"
    if [ -n "${RESOURCE_NAMES}" ]; then
        set -- "$@" "${RESOURCE_NAMES}"
    elif [ -n "${LABEL_SELECTOR}" ]; then
        set -- "$@" -l
        set -- "$@" "${LABEL_SELECTOR}"
    fi
fi
if [ "${ALL}" == "true" ]; then
    set -- "$@" --all=true
fi
if [ "${FORCE}" == "true" ]; then
    set -- "$@" --force=true
fi
if [ "${GRACE_PERIOD}" != "-1" ]; then
    set -- "$@" --grace-period="${GRACE_PERIOD}"
fi
if [ "${IGNORE_NOT_FOUND}" == "true" ]; then
    set -- "$@" --ignore-not-found=true
fi
if [ "${NOW}" == "true" ]; then
    set -- "$@" --now=true
fi
if [ -n "${NAMESPACE}" ]; then
    set -- "$@" --namespace="${NAMESPACE}"
fi
if [ -n "${DRY_RUN}" ]; then
    set -- "$@" --dry-run="${DRY_RUN}"
fi
set -- "$@" --wait="${WAIT}"
set -- "$@" --cascade="${CASCADE}"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
    set -x
fi
kubectl delete "$@"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
    set +x
fi
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Exited with code exit status 1
CircleCI received exit code 1
Does anyone have an idea what is wrong with it? I'm not sure whether the issue is happening on the CircleCI side or the Kubernetes side.
I had been facing the exact same issue since yesterday morning (16 hours ago). Taking @Gavy's advice, I simply added this to my config.yml:
steps:
  - checkout
  # !!! HERE !!!
  - kubernetes/install-kubectl:
      kubectl-version: v1.23.5
  - run:
And now it works. Hope it helps.
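For background: the "invalid apiVersion" error comes from the exec credential plugin entry in the kubeconfig; client.authentication.k8s.io/v1alpha1 was removed in newer kubectl releases, so a kubeconfig generated by an older tool no longer matches a newer kubectl (or vice versa). A rough sketch of how to check and fix this, assuming an EKS cluster and a recent AWS CLI (cluster name and region are placeholders):
# Inspect which exec plugin apiVersion the current kubeconfig declares
grep -B2 -A2 'client.authentication.k8s.io' ~/.kube/config
# Regenerating the kubeconfig with a recent AWS CLI writes the newer v1beta1 apiVersion
aws eks update-kubeconfig --name <cluster-name> --region <region>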
I am new to k8s and need some help, please.
I want to make a change in a pod's deployment configuration and change readOnlyRootFilesystem to false.
This is what I am trying to do, but it doesn't seem to work. Please suggest what's wrong:
kubectl patch deployment eric-ran-rdm-singlepod -n vdu -o yaml -p {"spec":{"template":{"spec":{"containers":[{"name":"eric-ran-rdm-infra":{"securityContext":[{"readOnlyRootFilesystem":"true"}]}}]}}}}
Thanks very much!!
Your JSON is invalid. You need to make sure you are providing valid JSON, and it should also be in the correct structure as defined by the k8s API. You can use jsonlint.com to validate it.
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "eric-ran-rdm-infra",
            "securityContext": {
              "readOnlyRootFilesystem": "true"
            }
          }
        ]
      }
    }
  }
}
Note: I have only checked the syntax here, not checked/tested the structure of this JSON against the k8s API, but I think it should be right; please correct me if I am wrong.
It might be easier to specify a deployment in a .yaml file and just apply that using kubectl apply -f my_deployment.yaml.
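For instance, a minimal sketch of such a manifest, reusing the names from the question (the image is a placeholder, and only the fields relevant to readOnlyRootFilesystem are shown):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eric-ran-rdm-singlepod
  namespace: vdu
spec:
  selector:
    matchLabels:
      app: eric-ran-rdm-infra
  template:
    metadata:
      labels:
        app: eric-ran-rdm-infra
    spec:
      containers:
        - name: eric-ran-rdm-infra
          image: <your-image>  # placeholder
          securityContext:
            readOnlyRootFilesystem: false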
First, you should fix your JSON syntax issue as suggested by @Mushroomator:
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "eric-ran-rdm-infra",
            "securityContext": {
              "readOnlyRootFilesystem": "true"
            }
          }
        ]
      }
    }
  }
}
Then, the JSON needs to be specified with escape characters before the double quotes, like this:
kubectl patch deployment eric-ran-rdm-singlepod -n vdu -o yaml -p {\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\": \"eric-ran-rdm-infra\",\"securityContext\":{\"readOnlyRootFilesystem\":\"true\"}}]}}}}
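As an alternative (a sketch, not from the original answers): wrapping the JSON in single quotes avoids the escaping entirely, and since readOnlyRootFilesystem is a boolean in the k8s API, an unquoted false matches what the question is actually after:
# Single quotes keep the inner double quotes intact, so no escaping is needed
kubectl patch deployment eric-ran-rdm-singlepod -n vdu \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"eric-ran-rdm-infra","securityContext":{"readOnlyRootFilesystem":false}}]}}}}'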
I know the "guest" user is the default for RabbitMQ, but I thought I'd configured everything to use different names.
My stack is Django / Celery / RabbitMQ, running in Docker.
First up, the error - I just get loads of these, every few seconds:
rabbitmq_1 | 2020-07-29 08:28:00.775 [warning] <0.1234.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:05.775 [warning] <0.1240.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:10.776 [warning] <0.1246.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:15.776 [warning] <0.1252.0> HTTP access denied: user 'guest' - invalid credentials
RabbitMQ Dockerfile
FROM rabbitmq:management-alpine
ENV RABBITMQ_USER rabbit_user
ENV RABBITMQ_PASSWORD rabbit_user
ADD rabbitmq.conf /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.conf /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
rabbitmq.conf
management.load_definitions = /etc/rabbitmq/definitions.json
definitions.json
{
  "users": [
    {
      "name": "rabbit_user",
      "password": "rabbit_user",
      "tags": ""
    },
    {
      "name": "admin",
      "password": "admin",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "\/phoenix"
    }
  ],
  "permissions": [
    {
      "user": "rabbit_user",
      "vhost": "\/phoenix",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [],
  "policies": [],
  "exchanges": [],
  "bindings": [],
  "queues": [
    {
      "name": "high_prio",
      "vhost": "\/phoenix",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    },
    {
      "name": "low_prio",
      "vhost": "\/phoenix",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ]
}
docker-compose.yml
rabbitmq:
  build:
    context: ./rabbitmq
    dockerfile: Dockerfile
  # image: rabbitmq:3-management-alpine
  ports:
    - "15672:15672" # RabbitMQ management plugin
  environment:
    - RABBITMQ_DEFAULT_USER=rabbit_user
    - RABBITMQ_DEFAULT_PASS=rabbit_user
    - RABBITMQ_DEFAULT_VHOST=phoenix
  expose:
    - "5672" # Port exposed between docker containers
  depends_on:
    - db
    - cache

celery_worker:
  <<: *django
  command: bash -c "celery -A phoenix.celery worker --loglevel=INFO -n worker1@%h"
  environment:
    - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
    - EMAIL_HOST_PASSWORD=${EMAIL_HOST_PASSWORD}
    - DJANGO_SETTINGS=${DJANGO_SETTINGS}
    # HC the rabbit user. Not secure obvs, but OK for PoC.
    - RABBITMQ_DEFAULT_USER=rabbit_user
    - RABBITMQ_DEFAULT_PASS=rabbit_user
  ports: []
  links:
    - rabbitmq
    - cache
  depends_on:
    - db
    - cache
    - rabbitmq
settings.py
CELERY_BROKER_URL = "amqp://rabbit_user:rabbit_user@rabbitmq:5672/phoenix"
CELERY_BROKER_VHOST = "phoenix"
CELERY_RESULT_BACKEND = "django-db"
CELERY_CACHE_BACKEND = "default"
CELERY_TIME_ZONE = TIME_ZONE
I had it all working before when I just pulled the default RabbitMQ container in the docker-compose YAML file. Now I've created a specific Dockerfile for RabbitMQ and set up rabbit_user and the vhost "phoenix". It all seems to be working - tasks run, and I see the message stats in the Rabbit console - but I'm suffering these repeated "guest" login attempts. The word "guest" appears nowhere in my codebase, so somewhere RabbitMQ is using the default instead of "rabbit_user", but I can't see where.
Rather typical that I solve this by "fixing" something else...
I noticed in my RMQ panel that the low_prio and high_prio queues had the vhost "/phoenix", while the celery workers had the vhost "phoenix" (from my reading, I'd thought the RMQ config required the leading slash). I amended this so that all queues were allocated to "phoenix", and the mystery guest login disappeared.
I can only assume that since Celery was configured for the vhost "phoenix", "/phoenix" was treated as a different vhost, with no users assigned to it, so RabbitMQ tried to use the "guest" default.
Not entirely sure why things were connecting to it - I'd sent nothing to those queues yet - but in case somebody else has this issue, this is what solved it for me.
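If you want to verify this kind of vhost mismatch yourself, a sketch using the standard rabbitmqctl commands (assuming the compose service is named rabbitmq, as above):
# List all vhosts; a stray "/phoenix" alongside "phoenix" would show up here
docker-compose exec rabbitmq rabbitmqctl list_vhosts
# Show which users have permissions on the vhost the workers actually use
docker-compose exec rabbitmq rabbitmqctl list_permissions -p phoenix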
I want to access the limits.memory value returned by the get command in k8s:
kubectl get resourcequota default -n 103000-p4-dev -o custom-columns=USED:.status.used
USED
map[limits.memory:0 requests.cpu:0 requests.memory:0]
I tried many ways but couldn't succeed. For example, this returns nothing:
[root@iaasn00126847 ~]# k get resourcequota default -n 103000-p4-dev -o custom-columns=USED:.status.used.limits.memory
Is there a delimiter I can use to fetch this value?
Try with jsonpath:
kubectl get resourcequota default -n 103000-p4-dev -o jsonpath="{.status.used.limits\.memory}"
This is what I tried
$ kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml
resourcequota/mem-cpu-demo created
$ kubectl get resourcequota
NAME CREATED AT
mem-cpu-demo 2019-10-09T06:38:39Z
$
$ kubectl get resourcequota mem-cpu-demo -o json
{
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"ResourceQuota\",\"metadata\":{\"annotations\":{},\"name\":\"mem-cpu-demo\",\"namespace\":\"default\"},\"spec\":{\"hard\":{\"limits.cpu\":\"2\",\"limits.memory\":\"2Gi\",\"requests.cpu\":\"1\",\"requests.memory\":\"1Gi\"}}}\n"
        },
        "creationTimestamp": "2019-10-09T06:38:39Z",
        "name": "mem-cpu-demo",
        "namespace": "default",
        "resourceVersion": "975",
        "selfLink": "/api/v1/namespaces/default/resourcequotas/mem-cpu-demo",
        "uid": "0d74d782-b717-4845-a0da-424776c05d45"
    },
    "spec": {
        "hard": {
            "limits.cpu": "2",
            "limits.memory": "2Gi",
            "requests.cpu": "1",
            "requests.memory": "1Gi"
        }
    },
    "status": {
        "hard": {
            "limits.cpu": "2",
            "limits.memory": "2Gi",
            "requests.cpu": "1",
            "requests.memory": "1Gi"
        },
        "used": {
            "limits.cpu": "0",
            "limits.memory": "0",
            "requests.cpu": "0",
            "requests.memory": "0"
        }
    }
}
$
$ kubectl get resourcequota mem-cpu-demo -o jsonpath="{.status.used}"
map[limits.cpu:0 limits.memory:0 requests.cpu:0 requests.memory:0]$
$
$ kubectl get resourcequota mem-cpu-demo -o jsonpath="{.status.used.limits\.memory}"
0
$
$ kubectl get resourcequota mem-cpu-demo -o jsonpath="{.status.hard.limits\.memory}"
2Gi
$
For keys containing /, you don't need to escape the slash, just the dots, using bracket notation:
$ kubectl -n istio-system get service http2-service-ingress \
-o jsonpath="{.metadata.annotations['service\.beta\.kubernetes\.io/aws-load-balancer-type']}"
Since your key (limits.memory) contains a dot, maybe you should try it like this:
[root@iaasn00126847 ~]# k get resourcequota default -n 103000-p4-dev -o custom-columns=USED:.status.used.'limits\.memory'
There is no need to use jsonpath. You can still use the custom-columns output, but you need to put the key in (single or double) quotes, and escape all the dots, like this:
k get resourcequota default -n 103000-p4-dev -o custom-columns=USED:.status.used."limits\.memory"
I am currently using this with kubectl v1.17, to list nodes, as follows:
kubectl get nodes -o custom-columns=NAME:.metadata.name,ZONE:.metadata.labels.'topology\.kubernetes\.io/region'
kubectl get nodes -o custom-columns=NAME:.metadata.name,ZONE:.metadata.labels."topology\.kubernetes\.io/region"
I'm trying to get my caller-ID script to send a notification to my Boxee Box-connected TV. I've got the script working using mgetty and notify-send on a couple of my computers.
Here is my cidscript.sh, which gets triggered by mgetty:
#!/bin/sh
# send message to computer
ssh -o ConnectTimeout=10 mrplow@192.168.1.10 "DISPLAY=:0 notify-send 'Phone call from... $CALLER_NAME $CALLER_ID'" &
sleep 0.2
ssh -o ConnectTimeout=10 christine@192.168.1.3 "DISPLAY=:0 notify-send 'Phone call from... $CALLER_NAME $CALLER_ID'" &
sleep 0.2
ssh -o ConnectTimeout=10 mrplow@192.168.1.120 "DISPLAY=:0 notify-send 'Phone call from... $CALLER_NAME $CALLER_ID'" &
sleep 0.2
su mrplow -c "DISPLAY=:0.0 notify-send 'Phone call from... $CALLER_NAME $CALLER_ID'" &
sleep 5
# update logs
echo `date +"%F %a %r"`"|$CALLER_ID|$CALLER_NAME" >> /home/mrplow/answering_machine/logs/incoming-calls.log
scp -o ConnectTimeout=10 /home/mrplow/answering_machine/logs/incoming-calls.log christine@192.168.1.3:/home/christine/Desktop/incoming-calls.log
sleep 0.2
exit 1
I think JSON-RPC is going to be the only way to get this to work.
I've managed to telnet into the Boxee Box on raw port 9090 and then paired my device,
so the script will need to send the connect command:
{"jsonrpc": "2.0", "method": "Device.Connect", "params":{"deviceid": "############"}, "id": 1}
then the actual notification:
{"jsonrpc": "2.0", "method": "GUI.NotificationShow", "params":{"msg" : "Phone call from... $CALLER_NAME $CALLER_ID"}, "id": 1}
I tried this, to no avail:
curl -d '{"jsonrpc": "2.0", "method": "Device.Connect", "params":{"deviceid": "00112fa696c9"}, "id": 1}\
{"jsonrpc": "2.0", "method": "GUI.NotificationShow", "params":{"msg" : "test"}, "id": 1}' -i 192.168.1.6 9090
Figured it out...
echo { \"jsonrpc\": \"2.0\", \"method\": \"Device.Connect\", \"params\":{\"deviceid\": \"############\"}, \"id\": 1}\
{ \"jsonrpc\": \"2.0\", \"method\": \"GUI.NotificationShow\", \"params\":{\"msg\" : \"Phone call from... $CALLER_NAME $CALLER_ID\"}, \"id\": 1 } | telnet 192.168.1.6 9090 &> /dev/null