How to list non-active revisions of Cloud Run? - gcloud

This is somewhat similar to How to check if the latest Cloud Run revision is ready to serve
I would like to list non-active revisions of my Cloud Run service so that I can delete them. I can list them using:
gcloud run revisions list --region europe-west1 --service service-name
The listing looks like:
REVISION ACTIVE SERVICE DEPLOYED DEPLOYED BY
✔ xxxxx-server-00083-ban yes xxxxx-server 2022-12-22 18:13:50 UTC xxxxx-server@***.iam.gserviceaccount.com
✔ xxxxx-server-00082-few xxxxx-server 2022-12-22 18:09:27 UTC xxxxx-server@***.iam.gserviceaccount.com
✔ xxxxx-server-00081-zex xxxxx-server 2022-12-22 18:03:00 UTC xxxxx-server@***.iam.gserviceaccount.com
✔ xxxxx-server-00080-bad xxxxx-server 2022-12-22 18:02:02 UTC xxxxx-server@***.iam.gserviceaccount.com
Now I would like to filter only those which do not have ACTIVE: yes. I have tried adding --filter='-active:*', but it does not seem to have any effect, and I get a warning:
WARNING: The following filter keys were not present in any resource : active
When I try listing the information with --format=JSON or --format=YAML, I am overwhelmed with information, which includes listing all past status transitions like:
status:
  conditions:
  - lastTransitionTime: '2022-12-22T18:14:04.208603Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2022-12-22T18:24:23.335439Z'
    reason: Reserve
    severity: Info
    status: Unknown
    type: Active
I have no idea if or how I can filter based on this.
How can I list only non-active Cloud Run revisions of my service?
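For reference, the noisy YAML can be trimmed to just the relevant fields by projecting them explicitly; a minimal sketch reusing the region and service name from above:
gcloud run revisions list \
  --region europe-west1 \
  --service service-name \
  --format="yaml(metadata.name,status.conditions)"
This at least shows that the Active flag lives under status.conditions rather than as a top-level active field, which is why the plain --filter=-active:* attempt warns about missing keys.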

After some experimentation, this is what eventually worked for me, including iterating over the list and deleting all inactive revisions:
# List last created revision (it should be the active one)
ACTIVE=$(gcloud run services describe xxxxx-server --format="value(status.latestCreatedRevisionName)" --region=europe-west1)
if [ "$ACTIVE" != "" ]; then
gcloud run revisions list --region europe-west1 --service xxxxx-server --filter="metadata.name!=$ACTIVE" --format="get(metadata.name)" >nonactive
echo "Delete all but $ACTIVE"
while read p; do gcloud run revisions delete --region europe-west1 $p --quiet; done < nonactive
fi
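One caveat with anchoring on status.latestCreatedRevisionName: if the most recent deployment failed, the latest created revision is not necessarily the one serving traffic. A possible tweak, just a sketch I have not battle-tested, is to anchor on the latest ready revision instead:
# Keep the latest *ready* revision rather than the latest created one
ACTIVE=$(gcloud run services describe xxxxx-server --format="value(status.latestReadyRevisionName)" --region=europe-west1)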

You can use this command to do that :)
# Get non running revisions
gcloud run revisions list \
--filter="status.conditions.type:Active AND status.conditions.status:'False'" \
--format='value(metadata.name)'
And here is how you can delete them all at once:
REVS=$(gcloud run revisions list --filter="status.conditions.type:Active AND status.conditions.status:'False'" --format='value(metadata.name)')
for rev in $REVS; do
  echo "$rev"
  gcloud run revisions delete "$rev" --quiet &
done
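If you would rather not background the deletes, and your revisions live in a specific region, the same loop can be run sequentially and region-scoped; a sketch assuming europe-west1, as in the question:
for rev in $REVS; do
  echo "Deleting $rev"
  gcloud run revisions delete "$rev" --region europe-west1 --quiet
done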

Related

ansible dynamic inventory kubernetes

I am trying to use the Kubernetes plugin in Ansible to be able to use a dynamic inventory based on my k8s cluster. I have followed this doc https://docs.ansible.com/ansible/latest/scenario_guides/kubernetes_scenarios/k8s_inventory.html, however I keep getting a "failed to parse" error.
# ansible-inventory --list -i k8s.yaml
[WARNING]: * Failed to parse /etc/ansible/k8s.yaml with ansible_collections.kubernetes.core.plugins.inventory.k8s plugin: Invalid value "kubernetes.core.k8s" for configuration option "plugin_type: inventory
plugin: ansible_collections.kubernetes.core.plugins.inventory.k8s setting: plugin ", valid values are: ['k8s']
[WARNING]: Unable to parse /etc/ansible/k8s.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}
Extract from ansible.cfg:
# egrep -i "\[inventory\]|kubernetes" ansible.cfg
[inventory]
enable_plugins = kubernetes.core.k8s
k8s.yaml:
# cat k8s.yaml
plugin: kubernetes.core.k8s
The error suggests that kubernetes.core.k8s is an invalid value and that valid values are ['k8s'], yet this is exactly what's in the documentation. I have tried all manner of variations on the plugin name with no success.
Can anyone steer me on what I am missing here?
So I managed to get it working by editing /usr/lib/python3/dist-packages/ansible_collections/kubernetes/core/plugins/inventory/k8s.py. It seems my version only listed k8s as a plugin name; I replaced it with kubernetes.core.k8s and it worked:
options:
  plugin:
    description: token that ensures this is a source file for the 'k8s' plugin.
    required: True
    choices: ['kubernetes.core.k8s']
I did plan to raise it as a PR on the project, but it seems this was already updated several months back, so I must have just had outdated files.
https://github.com/ansible-collections/kubernetes.core/blob/60933457e81fcfa1000f556b2bc3425bbf080602/plugins/inventory/k8s.py#L27
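Rather than patching the installed k8s.py by hand, the cleaner fix is probably to pull a newer copy of the collection; a sketch, assuming your ansible-core version is recent enough to support --upgrade:
# Upgrade the kubernetes.core collection in place instead of editing its source
ansible-galaxy collection install kubernetes.core --upgrade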

Bazel Kubernetes Object Error: no objects passed to apply (Google Container Registry)

I have a k8s_object rule to apply a deployment to my Google Kubernetes Cluster. Here is my setup:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deployment",
template = ":gateway.deployment.yaml",
kind = "deployment",
cluster = "gke_cents-ideas_europe-west3-b_cents-ideas",
images = {
"gcr.io/cents-ideas/gateway:latest": ":image"
},
)
But when I run bazel run //services/gateway:k8s_deployment.apply, I get the following error:
INFO: Analyzed target //services/gateway:k8s_deployment.apply (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //services/gateway:k8s_deployment.apply up-to-date:
bazel-bin/services/gateway/k8s_deployment.apply
INFO: Elapsed time: 0.113s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /snap/bin/kubectl --kubeconfig= --cluster=gke_cents-ideas_europe-west3-b_cents-ideas --context= --user= apply -f -
2020/02/12 14:52:44 Unable to publish images: unable to publish image gcr.io/cents-ideas/gateway:latest
error: no objects passed to apply
error: no objects passed to apply
It doesn't push the new image to the Google Container Registry.
Strangely, this worked a few days ago. But I didn't change anything.
Here is the full code if you need to take a closer look: https://github.com/flolude/cents-ideas/blob/069c773ade88dfa8aff492f024a1ade1f8ed282e/services/gateway/BUILD
Update
I don't know if this has something to do with the issue, but when I run
gcloud auth configure-docker
I get some warnings:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: Your config file at [/home/flolu/.docker/config.json] contains these credential helper entries:
{
    "credHelpers": {
        "asia.gcr.io": "gcloud",
        "staging-k8s.gcr.io": "gcloud",
        "us.gcr.io": "gcloud",
        "gcr.io": "gcloud",
        "marketplace.gcr.io": "gcloud",
        "eu.gcr.io": "gcloud"
    }
}
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
gcloud credential helpers already registered correctly.
I had google-cloud-sdk installed via snap. What I did to make it work was to remove google-cloud-sdk via
snap remove google-cloud-sdk
and then follow those instructions to install it via
sudo apt install google-cloud-sdk
Now it works fine.
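As a side note, the configure-docker warning above also recommends registering the credential helper for only the registry you actually push to; a sketch of that for gcr.io, the registry used in the images mapping:
# Register the gcloud credential helper for gcr.io only, instead of every GCR host
gcloud auth configure-docker gcr.io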

Unable to bring up Eclipse che on Kubernetes

Getting ERR_TIMEOUT: Timeout set to pod wait timeout 300000 while downloading images.
I am new to Eclipse Che and Kubernetes. I got Kubernetes installed on Ubuntu and am trying to run chectl server:start, but it is failing. What am I doing wrong? Below is the trace I get. Is there a log file where I could get more details? Please help.
Details:
✔ Verify Kubernetes API...OK
✔  Looking for an already existing Che instance
✔ Verify if Che is deployed into namespace "che"
✔ Found running che deployment
✔ Found running plugin registry deployment
✔ Found running devfile registry deployment
✔  Starting already deployed Che
✔ Scaling up Che Deployments...done.
❯ ✅ Post installation checklist
❯ Che pod bootstrap
✔ scheduling...done.
✖ downloading images
→ ERR_TIMEOUT: Timeout set to pod wait timeout 300000
starting
Retrieving Che Server URL
Che status check
Error: ERR_TIMEOUT: Timeout set to pod wait timeout 300000
at KubeHelper.<anonymous> (/usr/local/lib/chectl/lib/api/kube.js:578:19)
at Generator.next (<anonymous>)
at fulfilled (/usr/local/lib/chectl/node_modules/tslib/tslib.js:107:62)
Values.yaml
#
# Copyright (c) 2012-2017 Red Hat, Inc.
# This program and the accompanying materials are made
# available under the terms of the Eclipse Public License 2.0
# which is available at https://www.eclipse.org/legal/epl-2.0/
#
# SPDX-License-Identifier: EPL-2.0
#
# the following section is for secure registries. when uncommented, a pull secret will be created
#registry:
#  host: my-secure-private-registry.com
#  username: myUser
#  password: myPass
cheWorkspaceHttpProxy: ""
cheWorkspaceHttpsProxy: ""
cheWorkspaceNoProxy: ""
cheImage: eclipse/che-server:nightly
cheImagePullPolicy: Always
cheKeycloakRealm: "che"
cheKeycloakClientId: "che-public"
#customOidcUsernameClaim: ""
#customOidcProvider: ""
#workspaceDefaultRamRequest: ""
#workspaceDefaultRamLimit: ""
#workspaceSidecarDefaultRamLimit: ""
global:
  cheNamespace: ""
  multiuser: false
  # This value can be passed if custom Oidc provider is used, and there is no need to deploy keycloak in multiuser mode
  # default (if empty) is true
  #cheDedicatedKeycloak: false
  ingressDomain: <xx.xx.xx.xx.nip.io>
  # See --annotations-prefix flag (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md)
  ingressAnnotationsPrefix: "nginx."
  # options: default-host, single-host, multi-host
  serverStrategy: multi-host
  tls:
    enabled: false
    useCertManager: true
    useStaging: true
    secretName: che-tls
  gitHubClientID: ""
  gitHubClientSecret: ""
  pvcClaim: "1Gi"
  cheWorkspacesNamespace: ""
  workspaceIdleTimeout: "-1"
log:
  loggerConfig: ""
  appenderName: "plaintext"
Try to increase the timeout by setting --k8spodreadytimeout=500000.
[1] https://github.com/che-incubator/chectl
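For example, the full restart could look like this (a sketch: --platform k8s matches a plain Kubernetes install like the one in the question, and the timeout value is just an example):
# Restart Che with a larger pod-ready timeout
chectl server:start --platform k8s --k8spodreadytimeout=500000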
Following https://github.com/eclipse/che/issues/13871 (which is for minishift), give this a try:
kubectl delete namespaces che
chectl server:start --platform minikube
I hope that by now you have managed to install Eclipse Che on Kubernetes successfully.

dashDB Local on fedora 25 - error code 130

I tried the 30-day trial of dashDB Local. I followed the steps described in the link:
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/linux_deploy.html
I did not create a node configuration file because mine is an SMP setup.
I logged into my Docker Hub account and pulled the image:
docker login -u xxx -p yyyyy
docker pull ibmdashdb/local:latest-linux
The pull took 5 minutes or so. I waited for the image download to complete.
I ran the following command, and it completed successfully:
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/local:latest-linux
Then I ran the logs command:
docker logs --follow dashDB
This showed that dashDB did not start but exited with error code 130:
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0f008f8e413d ibmdashdb/local:latest-linux "/usr/sbin/init" 16 seconds ago Exited (130) 1 seconds ago dashDB
#
The logs command shows this:
2017-05-17T17:48:11.285582000Z Detected virtualization docker.
2017-05-17T17:48:11.286078000Z Detected architecture x86-64.
2017-05-17T17:48:11.286481000Z
2017-05-17T17:48:11.294224000Z Welcome to dashDB Local!
2017-05-17T17:48:11.294621000Z
2017-05-17T17:48:11.295022000Z Set hostname to <orion>.
2017-05-17T17:48:11.547189000Z Cannot add dependency job for unit systemd-tmpfiles-clean.timer, ignoring: Unit is masked.
2017-05-17T17:48:11.547619000Z [ OK ] Reached target Timers.
<snip>
2017-05-17T17:48:13.361610000Z [ OK ] Started The entrypoint script for initializing dashDB local.
2017-05-17T17:48:19.729980000Z [100209.207731] start_dashDB_local.sh[161]: /usr/lib/dashDB_local_common_functions.sh: line 1816: /tmp/etc_profile-LOCAL.cfg: No such file or directory
2017-05-17T17:48:20.236127000Z [100209.713223] start_dashDB_local.sh[161]: The dashDB Local container's environment is not set up yet.
2017-05-17T17:48:20.275248000Z [ OK ] Stopped Create Volatile Files and Directories.
<snip>
2017-05-17T17:48:20.737471000Z Sending SIGTERM to remaining processes...
2017-05-17T17:48:20.840909000Z Sending SIGKILL to remaining processes...
2017-05-17T17:48:20.880537000Z Powering off.
So it looks like start_dashDB_local.sh is failing at line 1816 of /usr/lib/dashDB_local_common_functions.sh? I exported the image, and this is the function around line 1816 of dashDB_local_common_functions.sh:
update_etc_profile()
{
    local runtime_env=$1
    local cfg_file

    # Check if /etc/profile/dashdb_env.sh is already updated
    grep -q BLUMETAHOME /etc/profile.d/dashdb_env.sh
    if [ $? -eq 0 ]; then
        return
    fi

    case "$runtime_env" in
        "AWS" | "V1.5" ) cfg_file="/tmp/etc_profile-V15_AWS.cfg"
            ;;
        "V2.0" ) cfg_file="/tmp/etc_profile-V20.cfg"
            ;;
        "LOCAL" ) # dashDB Local Case and also the default
            cfg_file="/tmp/etc_profile-LOCAL.cfg"
            ;;
        *) logger_error "Invalid ${runtime_env} value"
            return
            ;;
    esac
I also see /tmp/etc_profile-LOCAL.cfg in the image. Did I miss any step here?
I also created the /mnt/clusterfs/nodes file, but it did not help; the same docker run command failed in the same way.
Please help.
I am using x86_64 Fedora 25.
# docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
 Go version:      go1.7.4
 Git commit:      ae7d637/1.12.6
 Built:           Mon Jan 30 16:15:28 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
 Go version:      go1.7.4
 Git commit:      ae7d637/1.12.6
 Built:           Mon Jan 30 16:15:28 2017
 OS/Arch:         linux/amd64
#
# cat /etc/fedora-release
Fedora release 25 (Twenty Five)
# uname -r
4.10.15-200.fc25.x86_64
#
Thanks for bringing this to our attention. I reached out to our developer team. It seems this is happening because, inside the container, tmpfs gets mounted onto /tmp and wipes out all the scripts.
We have seen this issue, and moving to the latest version of Docker seems to fix it. Your docker version output shows that you are running an older version.
So please install the latest Docker version, retry the deployment of dashDB Local, and update here.
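For anyone hitting the same thing, a rough sketch of moving from the distro-packaged Docker 1.12 to the upstream docker-ce packages on Fedora (the package and repo names follow Docker's Fedora installation docs; treat them as assumptions and check the current instructions):
# Remove the older distro packages
sudo dnf remove docker docker-common docker-engine
# Add Docker's Fedora repository and install the current engine
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce
sudo systemctl start docker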
Regards
Murali

How to set kube-scheduler print log to file

The Kubernetes version is 1.2.
I want to watch the scheduler's logs. How can I set kube-scheduler to print its log to a file?
The kube-scheduler's configuration is at this path: /etc/kubernetes/scheduler.
And the global configuration is at this path: /etc/kubernetes/config.
So we can see these notes:
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
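Building on those notes, one way to send the log to files instead of the journal is to turn off stderr logging and point the scheduler at a log directory. This is only a sketch against the file-based config layout above; --log-dir is the standard glog flag, but the KUBE_SCHEDULER_ARGS variable name is an assumption about this particular packaging, so check your /etc/kubernetes/scheduler first.
# In /etc/kubernetes/config: stop logging to stderr / the journal
KUBE_LOGTOSTDERR="--logtostderr=false"
# In /etc/kubernetes/scheduler: write log files under /var/log/kubernetes (the directory must exist)
KUBE_SCHEDULER_ARGS="--log-dir=/var/log/kubernetes"
Then restart the scheduler (for example, systemctl restart kube-scheduler) so it picks up the new flags.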
You can tail the logs of the service if it is running under systemd, e.g. journalctl -u apiserver -f, substituting the scheduler's unit name (such as kube-scheduler).
Or, if it runs as a container, find the container ID of the scheduler and tail it with Docker: docker logs -f <container-id>
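Putting those two hints together, a small sketch for the container case (the kube-scheduler name filter is a guess; adjust it to whatever docker ps actually shows for your scheduler container):
# Find the scheduler container and follow its log
CID=$(docker ps --filter "name=kube-scheduler" --format "{{.ID}}" | head -n 1)
docker logs -f "$CID"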