Any way to reduce output from Google Cloud Run emulator? - gcloud

The Google Cloud Run emulator (gcloud beta code dev) watches for file changes and rebuilds on every change.
So, in my terminal, there's a constant churn of building messages as I type, and it's distracting.
I tried the following (reference: https://cloud.google.com/sdk/gcloud/reference):
--verbosity="none" (no effect)
--quiet just eliminates interactivity.
--no-user-output-enabled crashes the emulator with
Flag --enable-rpc has been deprecated, flags --rpc-port or --rpc-http-port now imply --enable-rpc=true, so please use only those instead
^CException in thread Thread-13:
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
and a whole bunch more that I can copy if it matters.
Is there a way to silence the build logs but still get (1) my own console.logs and (2) errors?

I suspect (your question is the first time I became aware of this [useful] facility) that, because gcloud beta code dev uses minikube locally (in my case), the output is generated by the minikube (kubelet) process rather than by gcloud, and so you can't (yet!) control it with gcloud flags.
It's a good suggestion and I recommend you file an issue on Google's Issue Tracker.
While it's running, kubectl picks up a configuration (context) that points at the minikube cluster, and I'm able to run kubectl logs deployment/${APP} from another terminal to view only my app's logs:
kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
${APP}   1/1     1            1           1m
kubectl logs deployment/${APP}
2022/01/06 17:21:58 Entered
2022/01/06 17:21:58 Starting server [:8080]
2022/01/06 17:21:58 Sleeping
2022/01/06 17:26:58 Awake
2022/01/06 17:26:58 Sleeping
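To keep watching instead of taking a one-off snapshot, the logs can also be followed (a sketch; -f and --tail are standard kubectl flags):
kubectl logs -f --tail=20 deployment/${APP}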
~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /path/to/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 06 Jan 2022 09:21:47 PST
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: cluster_info
    server: https://192.168.49.2:8443
  name: gcloud-local-dev
contexts:
- context:
    cluster: gcloud-local-dev
    extensions:
    - extension:
        last-update: Thu, 06 Jan 2022 09:21:47 PST
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: context_info
    namespace: default
    user: gcloud-local-dev
  name: gcloud-local-dev
current-context: gcloud-local-dev
kind: Config
preferences: {}
users:
- name: gcloud-local-dev
  user:
    client-certificate: /path/to/.minikube/profiles/gcloud-local-dev/client.crt
    client-key: /path/to/.minikube/profiles/gcloud-local-dev/client.key
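Since other clusters may share the same kubeconfig, the minikube-backed context can also be selected explicitly; a small sketch using the context name from the config above:
kubectl config get-contexts
kubectl --context gcloud-local-dev logs -f deployment/${APP}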

Related

EKS: Use cluster config yaml file with eksctl to create a new cluster but node can't join cluster

I am new to eks. I use this cluster config yaml file to create a new cluster,
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: h2-dev-cluster
  region: us-west-2
nodeGroups:
  - name: h2-dev-ng-1
    instanceType: t2.small
    desiredCapacity: 2
    ssh: # use existing EC2 key
      publicKeyName: dev-eks-node
but eksctl got stuck at
waiting for at least 1 node(s) to become ready in "h2-dev-ng-1"
and then timed out.
I have checked all the points in this AWS document: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html
All of them check out except The ClusterName in your worker node AWS CloudFormation template, which I can't verify because the UserData has been encrypted by CloudFormation.
I logged in to one of the nodes, ran journalctl -u kubelet, and found these errors:
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.007677 4541 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.391913 4541 kubelet.go:2272] node "ip-192-168-53-151.us-west-2.compute.internal" not found
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.434158 4541 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.
Jul 03 08:22:31 ip-192-168-53-151.us-west-2.compute.internal kubelet[4541]: E0703 08:22:31.492746 4541 kubelet.go:2272] node "ip-192-168-53-151.us-west-2.compute.internal" not found
Then I ran cat /var/lib/kubelet/kubeconfig and saw the following:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: MASTER_ENDPOINT
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /usr/bin/aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"
        - --region
        - "AWS_REGION"
I noticed that the server parameter was MASTER_ENDPOINT, so I ran /etc/eks/bootstrap.sh h2-dev-cluster to set the cluster name. The parameter then became correct, as follows (I masked the URL):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://XXXXXXXX.gr7.us-west-2.eks.amazonaws.com
  name: kubernetes
I then ran sudo service restart kubectl, but journalctl -u kubelet still shows the same errors, and the nodes still can't join the cluster.
How can I resolve this?
eksctl: 0.23.0 rc1 (also tested with 0.20.0; same error)
kubectl: 1.18.5
os: ubuntu 18.04 (using a new EC2 instance)
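(For reference, a sketch of the restart step mentioned above, assuming a standard systemd-managed EKS worker node: the unit to restart after re-running the bootstrap script is kubelet, not kubectl.)
sudo /etc/eks/bootstrap.sh h2-dev-cluster
sudo systemctl restart kubelet
sudo journalctl -u kubelet -f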

rendering env-var inside kubernetes kubeconfig yaml file

I need to use an environment variable inside my kubeconfig file to point to the NODE_IP of the Kubernetes API server.
My config is:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://$NODE_IP:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    ......
But it seems the variable is not rendered in the kubeconfig file when I run the command:
kubectl --kubeconfig mykubeConfigFile get pods.
It complains as below:
Unable to connect to the server: dial tcp: lookup $NODE_IP: no such host
Did anyone try to do something like this or is it possible to make it work?
Thanks in advance
This thread contains explanations and answers:
... either wait for the implementation of the templating proposal in k8s (Implement templates · Issue #23896 · kubernetes/kubernetes, not merged yet)
... or preprocess your yaml with tools like:
envsubst:
export NODE_IP="127.0.11.1"
kubectl --kubeconfig <(envsubst < mykubeConfigFile.yml) get pods
sed:
kubectl --kubeconfig <(sed 's/\$NODE_IP/127.0.11.1/' mykubeConfigFile.yml) get pods
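If process substitution isn't available (e.g., plain sh), a rendered copy can be written out first and pointed at explicitly; a minimal sketch, with the file names chosen only for illustration:
export NODE_IP="127.0.11.1"
envsubst < mykubeConfigFile.yml > /tmp/mykubeConfigFile.rendered.yml
kubectl --kubeconfig /tmp/mykubeConfigFile.rendered.yml get pods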

How helm rollback works in kubernetes?

While going through the Helm documentation, I came across the rollback feature.
It's a cool feature, but I have some doubts about how it is implemented.
How is it implemented? If it uses some datastore to preserve old release configs, which datastore is it?
Is there any upper limit on consecutive rollbacks? If so, how many rollbacks are supported, and can that limit be changed?
As the documentation says, it rolls back the entire release. Helm generally stores release metadata in its own ConfigMaps. Every time you release changes, it appends them to the existing data. Your changes can include a new deployment image, new ConfigMaps, storage, etc. On rollback, everything goes back to the previous version.
Helm 3 changed the default release information storage to Secrets in the namespace of the release. Following helm documentation should provide some of the details in this regard:
https://helm.sh/docs/topics/advanced/#storage-backends
For example (only for illustration purpose here) -
$ helm install test-release-1 .
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:27:53 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
We can now see the history and Secret information for the above release as follows:
$ helm history test-release-1
REVISION   UPDATED                    STATUS     CHART                              APP VERSION   DESCRIPTION
1          Sun Feb 20 13:27:53 2022   deployed   fleetman-helm-chart-test-1-0.1.0   1.16.0        Install complete
$ kubectl get secrets
NAME                                   TYPE                 DATA   AGE
sh.helm.release.v1.test-release-1.v1   helm.sh/release.v1   1      41s
$ kubectl describe secrets sh.helm.release.v1.test-release-1.v1
Name:         sh.helm.release.v1.test-release-1.v1
Namespace:    default
Labels:       modifiedAt=1645363673
              name=test-release-1
              owner=helm
              status=deployed
              version=1
Annotations:  <none>
Type:         helm.sh/release.v1

Data
====
release:  1924 bytes
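As an aside, the release payload in that Secret is gzipped JSON that Helm base64-encodes (and the API encodes Secret data in base64 once more), so it can be inspected with something like the following sketch:
kubectl get secret sh.helm.release.v1.test-release-1.v1 -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c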
Now, it is upgraded to a new version as follows:
$ helm upgrade test-release-1 .
Release "test-release-1" has been upgraded. Happy Helming!
NAME: test-release-1
LAST DEPLOYED: Sun Feb 20 13:30:26 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
Following is the updated information in Kubernetes Secrets:
$ kubectl get secrets
NAME                                   TYPE                 DATA   AGE
sh.helm.release.v1.test-release-1.v1   helm.sh/release.v1   1      2m53s
sh.helm.release.v1.test-release-1.v2   helm.sh/release.v1   1      20s
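A rollback then reads the stored release back out of those Secrets; continuing the same illustration, rolling back to revision 1 would look roughly like this:
$ helm rollback test-release-1 1
$ helm history test-release-1
The rollback is recorded as a new revision (3, backed by a new sh.helm.release.v1.test-release-1.v3 Secret) rather than by deleting revision 2, so the history keeps the full chain.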

Puppet kubernetes module

I installed the puppet kubernetes module to manage pods of my kubernetes cluster with https://github.com/garethr/garethr-kubernetes/blob/master/README.md
I am not able to get any pod information back when I run
puppet resource kubernetes_pod
It just returns an empty line.
I am using a minikube k8s cluster to test the puppet module against.
cat /etc/puppetlabs/puppet/kubernetes.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://<ip address>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
I am able to use curl with the certs to talk to the K8s REST API
curl --cacert /root/.minikube/ca.crt --cert /root/.minikube/apiserver.crt --key /root/.minikube/apiserver.key https://<minikube ip>:8443/api/v1/pods/
It looks like the garethr-kubernetes package hasn't been updated since August 2017, so you probably need a version of the kubeclient gem from around that time or older. kubeclient 3.0 came out quite recently, so you might want to try the latest version of the 2.5 series (currently 2.5.2).
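For example, a sketch of pinning that gem version against the Ruby bundled with puppet-agent (the /opt/puppetlabs path is the usual AIO location and is an assumption; adjust it if your setup uses the system Ruby):
sudo /opt/puppetlabs/puppet/bin/gem install kubeclient -v 2.5.2
/opt/puppetlabs/puppet/bin/gem list kubeclient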
From the requirements, this could be related to a credentials issue.
Or the configuration is set to a namespace with nothing in it.
As shown in this issue, check the following:
kubectl get pods works fine at the command line, and my ~/.puppetlabs/etc/puppet/kubernetes.conf file is generated as suggested:
mc0e@xxx:~$ kubectl config view --raw=true
apiVersion: v1
clusters:
- cluster:
    server: http://localhost:8080
  name: test-doc
contexts:
- context:
    cluster: test-doc
    user: ""
  name: test-doc
current-context: test-doc
kind: Config
preferences: {}
users: []

Naming gitRepo mount path in Kubernetes

When using a gitRepo volume in Kubernetes, the repo is cloned into the mountPath directory. For the following pod specification, for example:
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/docroot
      name: docroot-volume
  volumes:
  - name: docroot-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
The directory appears in the container at /usr/share/docroot/my-git-repository. This means my container needs to know my repository name. I don't want my container knowing anything about the repository name. It should just know there is a "docroot", however initialized. The only place the git repository name should appear is in the pod specification.
Is there any way in Kubernetes to specify the full internal path to a git repo volume mount?
Currently there is no native way to do this, but I filed an issue for you.
Under the hood Kubernetes is just doing a git clone $source over an emptyDir volume, but since the source is passed as a single argument there is no way to specify the destination name.
Fri, 09 Oct 2015 18:35:01 -0700 Fri, 09 Oct 2015 18:49:52 -0700 90 {kubelet stclair-minion-nwpu} FailedSync Error syncing pod, skipping: failed to exec 'git clone https://github.com/kubernetes/kubernetes.git k8s': Cloning into 'kubernetes.git k8s'...
error: The requested URL returned error: 400 while accessing https://github.com/kubernetes/kubernetes.git k8s/info/refs
fatal: HTTP request failed
: exit status 128
In the meantime, I can think of 2 options to avoid the dependency on the repository name:
Supply the repository name as an environment variable, which you can then use from your container
Modify your container's command to move (or symlink) the repository to the desired location before continuing; see the sketch below
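A rough sketch of the second option, assuming an nginx container whose docroot is expected at /usr/share/docroot/current (the symlink name and the shell command are hypothetical; only the pod spec references the repository name):
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    # Hypothetical: link the cloned repo to a fixed path, then start nginx as usual,
    # so the image itself never needs to know the repository name.
    command: ["/bin/sh", "-c"]
    args:
    - ln -s /usr/share/docroot/my-git-repository /usr/share/docroot/current && exec nginx -g 'daemon off;'
    volumeMounts:
    - mountPath: /usr/share/docroot
      name: docroot-volume
  volumes:
  - name: docroot-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"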