Unable to Get Data from GitHub API in Grails App

I'm trying to make an HTTP request to the GitHub API in a Grails controller.
I just started learning Grails yesterday and I'm stuck. I searched the internet for hours, but there seems to be very little discussion about Grails online.
I simply want to call the GitHub API and get user data. I am familiar with the API endpoint and have used it with other frameworks, but I am unable to figure out this (maybe tiny) problem in Grails.
Can anybody explain how we make API calls in a Grails controller?
Thanks in advance, and apologies for a naïve question.

Solution:
See the project at github.com/jeffbrown/tauseefahmedgithubapi.
src/main/groovy/tauseefahmedgithubapi/GitHubClient.groovy
package tauseefahmedgithubapi

import io.micronaut.http.annotation.Get
import io.micronaut.http.annotation.Header
import io.micronaut.http.client.annotation.Client

import static io.micronaut.http.HttpHeaders.USER_AGENT

@Client('https://api.github.com/')
interface GitHubClient {

    @Get('/orgs/{org}/repos')
    @Header(name = USER_AGENT, value = 'Micronaut Demo Application')
    List<GitHubRepository> listRepositoriesForOrganization(String org)
}
src/main/groovy/tauseefahmedgithubapi/GitHubRepository.groovy
package tauseefahmedgithubapi

import io.micronaut.core.annotation.Introspected

@Introspected
class GitHubRepository {
    String name
}
grails-app/init/tauseefahmedgithubapi/BootStrap.groovy
package tauseefahmedgithubapi

import org.springframework.beans.factory.annotation.Autowired

class BootStrap {

    @Autowired
    GitHubClient client

    def init = { servletContext ->
        def repos = client.listRepositoriesForOrganization('micronaut-projects')
        for (def repo : repos) {
            log.info "Repo Name: ${repo.name}"
        }
    }

    def destroy = {
    }
}
At application startup you may see output like this:
2021-06-28 13:03:16.834 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: static-website
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: presentations
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-core
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-profiles
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-guides-old
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-examples
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: static-website-test
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-docs
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-test
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-kotlin
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-spring
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-oauth2
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-liquibase
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-flyway
2021-06-28 13:03:16.836 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-elasticsearch
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-graphql
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-grpc
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-kafka
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-netflix
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-groovy
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-micrometer
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-sql
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-mongodb
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-redis
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-neo4j
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-rabbitmq
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-aws
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-rss
2021-06-28 13:03:16.837 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-gcp
2021-06-28 13:03:16.838 INFO --- [ restartedMain] tauseefahmedgithubapi.BootStrap : Repo Name: micronaut-kubernetes
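Since the question asked specifically about making the call from a controller, here is a minimal sketch (the controller name and JSON rendering are illustrative, not part of the linked project) that injects the same declarative client into a Grails controller:
grails-app/controllers/tauseefahmedgithubapi/RepoController.groovy
package tauseefahmedgithubapi

import grails.converters.JSON
import org.springframework.beans.factory.annotation.Autowired

class RepoController {

    @Autowired
    GitHubClient client

    // GET /repo?org=micronaut-projects
    def index(String org) {
        // Delegate the HTTP call to the declarative Micronaut client
        def repos = client.listRepositoriesForOrganization(org ?: 'micronaut-projects')
        render repos.collect { [name: it.name] } as JSON
    }
}
The same pattern extends to other endpoints; for user data you could add a @Get('/users/{username}') method returning a matching @Introspected class to the GitHubClient interface.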

Related

the server could not find the metric nginx_vts_server_requests_per_second for pods

I installed kube-prometheus-0.9.0 and want to deploy a sample application on which to test autoscaling on Prometheus metrics, using the following resource manifest file (hpa-prome-demo.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/status/format/prometheus"
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort
For testing purposes, I used a NodePort Service, and luckily I can get an HTTP response after applying the deployment. Then I installed the
Prometheus Adapter via its Helm chart, creating a new hpa-prome-adapter-values.yaml file to override the default values, as follows:
rules:
  default: false
  custom:
  - seriesQuery: 'nginx_vts_server_requests_total'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^(.*)_total"
      as: "${1}_per_second"
    metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
This adds a custom rule and sets the address of Prometheus. I installed the Prometheus Adapter with the following command:
$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Finally, the adapter was installed successfully, and I can get an HTTP response, as follows:
$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
But it was supposed to look like this:
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
Why can't I get the metric pods/nginx_vts_server_requests_per_second? As a result, the query below also failed:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
Could anybody please help? Many thanks.
ENV:
helm installed all Prometheus charts from prometheus-community https://prometheus-community.github.io/helm-chart
k8s cluster enabled by Docker for Mac
Solution:
I met the same problem. From the Prometheus UI, I found the metric had a namespace label but no pod label, as below:
nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
I thought Prometheus might NOT be using pod as a label, so I checked the Prometheus config and found:
- action: replace
  source_labels:
  - __meta_kubernetes_pod_node_name
  target_label: node
Then I searched https://prometheus.io/docs/prometheus/latest/configuration/configuration/ and added a similar block below every __meta_kubernetes_pod_node_name occurrence I found (i.e. in 2 places):
- action: replace
  source_labels:
  - __meta_kubernetes_pod_name
  target_label: pod
After a while, the configmap reloaded, and the UI and API could find the pod label:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
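A quick way to check which labels a series actually carries (rather than eyeballing the UI) is the Prometheus series API. A sketch, port-forwarding the prometheus-k8s Service referenced in the adapter values above (the local port is arbitrary):
$ kubectl -n monitoring port-forward svc/prometheus-k8s 9090 &
$ curl -s 'http://localhost:9090/api/v1/series?match[]=nginx_vts_server_requests_total' | jq '.data[0]'
If no pod-identifying label is present in the returned label set, the adapter cannot associate the series with the pod resource, which is exactly the symptom above.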
It is worth knowing that using the kube-prometheus repository, you can also install components such as Prometheus Adapter for Kubernetes Metrics APIs, so there is no need to install it separately with Helm.
I will use your hpa-prome-demo.yaml manifest file to demonstrate how to monitor nginx_vts_server_requests_total metrics.
First of all, we need to install Prometheus and Prometheus Adapter with appropriate configuration as described step by step below.
Clone the kube-prometheus repository and refer to the Kubernetes compatibility matrix to choose a compatible branch:
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ git checkout release-0.9
Install the jb, jsonnet and gojsontoyaml tools:
$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest
Uncomment the (import 'kube-prometheus/addons/custom-metrics.libsonnet') + line from the example.jsonnet file:
$ cat example.jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +   <--- This line
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  ...
Add the following rule to the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file in the rules+ section:
{
  seriesQuery: "nginx_vts_server_requests_total",
  resources: {
    overrides: {
      namespace: { resource: 'namespace' },
      pod: { resource: 'pod' },
    },
  },
  name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
  metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
},
After this update, the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file should look like this:
NOTE: This is not the entire file, just an important part of it.
$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.common.namespace,
      // Rules for custom-metrics
      config+:: {
        rules+: [
          {
            seriesQuery: "nginx_vts_server_requests_total",
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
            metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
          },
...
Use the jsonnet-bundler update functionality to update the kube-prometheus dependency:
$ jb update
Compile the manifests:
$ ./build.sh example.jsonnet
Now simply use kubectl to install Prometheus and other components as per your configuration:
$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/
After configuring Prometheus, we can deploy a sample hpa-prom-demo Deployment:
NOTE: I've deleted the annotations because I'm going to use a ServiceMonitor to describe the set of targets to be monitored by Prometheus.
$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: LoadBalancer
Next, create a ServiceMonitor that describes how to monitor our NGINX:
$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  selector:
    matchLabels:
      app: nginx-server
  endpoints:
  - interval: 15s
    path: "/status/format/prometheus"
    port: http
After waiting some time, let's check the hpa-prom-demo logs to make sure it is being scraped correctly:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hpa-prom-demo-bbb6c65bb-49jsh 1/1 Running 0 35m
$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
...
Finally, we can check if our metrics work as expected:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
--
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "hpa-prom-demo-bbb6c65bb-49jsh",
        "apiVersion": "/v1"
      },
      "metricName": "nginx_vts_server_requests_per_second",
      "timestamp": "2022-02-04T09:32:59Z",
      "value": "533m",
      "selector": null
    }
  ]
}
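With the metric exposed for both pods and namespaces, it can drive a HorizontalPodAutoscaler. A minimal sketch (the replica bounds and target value are illustrative; use autoscaling/v2beta2 on clusters older than 1.23):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-prom-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-prom-demo
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx_vts_server_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"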

I have an RBAC problem, but everything I test seems ok?

This is a continuation of the problem described here (How do I fix a role-based problem when my role appears to have the correct permissions?)
I have done much more testing and still do not understand the error:
Error from server (Forbidden): pods is forbidden: User "dma" cannot list resource "pods" in API group "" at the cluster scope
UPDATE: Here is another hint from the API server:
watch chan error: etcdserver: mvcc: required revision has been compacted
I found this thread, but I am working with a current version of Kubernetes:
How fix this error "watch chan error: etcdserver: mvcc: required revision has been compacted"?
My user exists
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
dma 77m kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued
The clusterrole exists
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kubelet-runtime"},"rules":[{"apiGroups":["","extensions","apps","argoproj.io","workflows.argoproj.io","events.argoproj.io","coordination.k8s.io"],"resources":["*"],"verbs":["*"]},{"apiGroups":["batch"],"resources":["jobs","cronjobs"],"verbs":["*"]}]}
  creationTimestamp: "2021-12-16T00:24:56Z"
  name: kubelet-runtime
  resourceVersion: "296716"
  uid: a4697d6e-c786-4ec9-bf3e-88e3dbfdb6d9
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  - argoproj.io
  - workflows.argoproj.io
  - events.argoproj.io
  - coordination.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - '*'
The sandbox namespace exists
NAME STATUS AGE
sandbox Active 6d6h
My user is bound to the kubelet-runtime ClusterRole, including in the namespace "sandbox":
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "ClusterRoleBinding",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"rbac.authorization.k8s.io/v1\",\"kind\":\"ClusterRoleBinding\",\"metadata\":{\"annotations\":{},\"name\":\"dma-kubelet-binding\"},\"roleRef\":{\"apiGroup\":\"rbac.authorization.k8s.io\",\"kind\":\"ClusterRole\",\"name\":\"kubelet-runtime\"},\"subjects\":[{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"argo\"},{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"argo-events\"},{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"sandbox\"}]}\n"
    },
    "creationTimestamp": "2021-12-16T00:25:42Z",
    "name": "dma-kubelet-binding",
    "resourceVersion": "371397",
    "uid": "a2fb6d5b-8dba-4320-af74-71caac7bdc39"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "kubelet-runtime"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "argo"
    },
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "argo-events"
    },
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "sandbox"
    }
  ]
}
My user has the correct permissions
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "Role",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"rbac.authorization.k8s.io/v1\",\"kind\":\"Role\",\"metadata\":{\"annotations\":{},\"name\":\"dma\",\"namespace\":\"sandbox\"},\"rules\":[{\"apiGroups\":[\"\",\"apps\",\"autoscaling\",\"batch\",\"extensions\",\"policy\",\"rbac.authorization.k8s.io\",\"argoproj.io\",\"workflows.argoproj.io\"],\"resources\":[\"pods\",\"configmaps\",\"deployments\",\"events\",\"pods\",\"persistentvolumes\",\"persistentvolumeclaims\",\"services\",\"workflows\"],\"verbs\":[\"get\",\"list\",\"watch\",\"create\",\"update\",\"patch\",\"delete\"]}]}\n"
    },
    "creationTimestamp": "2021-12-21T19:41:38Z",
    "name": "dma",
    "namespace": "sandbox",
    "resourceVersion": "1058387",
    "uid": "94191881-895d-4457-9764-5db9b54cdb3f"
  },
  "rules": [
    {
      "apiGroups": [
        "",
        "apps",
        "autoscaling",
        "batch",
        "extensions",
        "policy",
        "rbac.authorization.k8s.io",
        "argoproj.io",
        "workflows.argoproj.io"
      ],
      "resources": [
        "pods",
        "configmaps",
        "deployments",
        "events",
        "pods",
        "persistentvolumes",
        "persistentvolumeclaims",
        "services",
        "workflows"
      ],
      "verbs": [
        "get",
        "list",
        "watch",
        "create",
        "update",
        "patch",
        "delete"
      ]
    }
  ]
}
My user is configured correctly on all nodes
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://206.81.25.186:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dma
  name: dma@kubernetes
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dma
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Based on this website, I have been searching for a watch event.
I think I have rebuilt everything above the control plane, but the problem persists.
The next step would be to rebuild the entire cluster, but it would be so much more satisfying to find the actual problem.
Please help.
FIX:
So the policy for the sandbox namespace was wrong. I fixed that and the problem is gone!
I think I finally understand RBAC (policies and all). Many thanks to the members of the Kubernetes Slack channel. These policies have passed the first set of tests for a development environment ("sandbox") for Argo workflows. Still testing.
policies.yaml file:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev
  namespace: sandbox
rules:
- apiGroups:
  - "*"
  attributeRestrictions: null
  resources: ["*"]
  verbs:
  - get
  - watch
  - list
- apiGroups: ["argoproj.io", "workflows.argoproj.io", "events.argoproj.io"]
  attributeRestrictions: null
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  - eventbus
  - eventsource
  - sensor
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dma-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev
subjects:
- kind: User
  name: dma
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dma-admin
subjects:
- kind: User
  name: dma
  namespace: sandbox
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
  namespace: sandbox
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access
...
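Note that the working bindings use kind: User (matching the client certificate's identity), where the earlier ClusterRoleBinding used kind: ServiceAccount subjects. A quick way to verify bindings like these without switching contexts is kubectl auth can-i with impersonation; a sketch (this is the expected output after applying the policies above):
$ kubectl auth can-i list pods --as dma -n sandbox
yes
$ kubectl auth can-i list pods --as dma --all-namespaces
yes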

Getting "Error: Failed to load current kubeconfig. Please confirm that your kubeconfig is valid." when using VS Code Bridge to kubernetes

When trying to use the Bridge to Kubernetes extension for VS Code, having configured
tasks.json as follows:
"version": "2.0.0",
"tasks": [
{
"label": "bridge-to-kubernetes.service",
"type": "bridge-to-kubernetes.service",
"service": "frontend",
"ports": [
8080
],
"targetCluster": "minikube",
"targetNamespace": "ecomm-ns"
}
]
}
And my launch.json as follows:
{
    "name": "Launch Package with Kubernetes",
    "type": "go",
    "request": "launch",
    "mode": "debug",
    "program": "${workspaceFolder}",
    "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "somepath/ecomm-key.json"
    },
    "preLaunchTask": "bridge-to-kubernetes.service"
}
I get the following output:
Target cluster: minikube
Current cluster: minikube
Target namespace: ecomm-ns
Current namespace: ecomm-ns
Target service name: frontend
Target service ports: 8080
Error: Failed to load current kubeconfig. Please confirm that your kubeconfig is valid.
The terminal process terminated with exit code: 1.
kubectl config view gives me the correct output.
Looking at the logs of the bridge plugin, I have the following:
2021-02-02T07:40:18.1876210Z | Library | WARNG | Failed to load kubeconfig at '/Users/scaucheteux/.kube/config': (Line: 10, Col: 5, Idx: 1804) - (Line: 10, Col: 6, Idx: 1805): Expected 'MappingStart', got 'SequenceStart' (at Line: 10, Col: 5, Idx: 1804).
My kubeconfig looks fine and is correctly parsed by various yaml plugins and by kubectl:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJ
    server: https://35.205.91.182
  name: gke_sca-ecommerce-291313_europe-west1-b_ecomm-demo
- cluster:
    certificate-authority: /Users/someuser/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Feb 2021 15:27:30 CET
        provider: minikube.sigs.k8s.io
        version: v1.17.1
      name: cluster_info
    server: https://127.0.0.1:55000
  name: minikube
contexts:
- context:
    cluster: gke_sca-ecommerce-291313_europe-west1-b_ecomm-demo
    namespace: ecomm-ns
    user: gke_sca-ecommerce-291313_europe-west1-b_ecomm-demo
  name: gke_sca-ecommerce-291313_europe-west1-b_ecomm-demo
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Mon, 01 Feb 2021 15:27:30 CET
        provider: minikube.sigs.k8s.io
        version: v1.17.1
      name: context_info
    namespace: ecomm-ns
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: gke_sca-ecommerce-291313_europe-west1-b_ecomm-demo
  user:
    auth-provider:
      config:
        access-token: ya29.A0A
        cmd-args: config config-helper --format=json
        cmd-path: /Users/someuser/Devs/gcloud/google-cloud-sdk/bin/gcloud
        expiry: "2021-02-01T18:23:02Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
- name: minikube
  user:
    client-certificate: /Users/someuser/.minikube/profiles/minikube/client.crt
    client-key: /Users/someuser/.minikube/profiles/minikube/client.key
I read elsewhere that removing the extensions sections fixes it for minikube:
https://github.com/microsoft/mindaro/issues/111
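Based on that issue, a workaround is to delete the extensions blocks from the minikube cluster and context entries, so they reduce to something like this (a sketch keeping the original paths):
- cluster:
    certificate-authority: /Users/someuser/.minikube/ca.crt
    server: https://127.0.0.1:55000
  name: minikube
...
- context:
    cluster: minikube
    namespace: ecomm-ns
    user: minikube
  name: minikube
This matches the parser error above: the bridge's YAML parser chokes on the extensions sequence (Expected 'MappingStart', got 'SequenceStart') even though kubectl accepts it.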

PATCH 405 (Method Not Allowed) Groovy

I'm trying to perform an HTTP PATCH request from a Groovy script. If I make the request with the Postman interface I get a 200 OK, but when I make it from the Groovy script I get a 405 error code.
Postman request: (screenshot omitted)
The request is made from Groovy with JSON data.
The function that processes the request is the following:
public Object sendHttpRequest(String url, String operation, String jsdonData,
                              String user, String password) throws Exception {
    println("Start sendHttpRequest() method");
    Object gesdenResponse = null;
    HttpURLConnection conn = null;
    try {
        println("Opening HTTP connection...");
        println("URL: " + url);
        URL obj = new URL(url);
        conn = (HttpURLConnection) obj.openConnection();
        conn.setRequestProperty("Authorization", String.format("Basic %s", getProtectedCredentials(user, password)));
        println("Header \"Authorization: *****\" set up");
        String method = null;
        switch (operation) {
            case "PASSWORD":
                method = 'PATCH';
                println("PASSWORD Operation");
                break;
            default:
                break;
        }
        if (method?.equals("PUT") || method?.equals("POST") || method?.equals("PATCH")) {
            conn.setDoOutput(true);
        }
        if (method == "PATCH") {
            // HttpURLConnection does not accept PATCH, so tunnel it through POST
            println("MODIFYING HEADERS FOR PATCH");
            conn.setRequestProperty("X-HTTP-Method-Override", "PATCH");
            conn.setRequestMethod("POST");
        } else {
            conn.setRequestMethod(method);
        }
        println("Setting up custom HTTP headers...");
        conn.setRequestProperty(GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_VALUE);
        println(String.format("Custom header \"%s: %s\" set up", GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_SYSTEM_VALUE));
        conn.setRequestProperty(GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_VALUE);
        println(String.format("Custom header \"%s: %s\" set up", GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_ACCEPT_VALUE));
        conn.setRequestProperty(GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_VALUE);
        println(String.format("Custom header \"%s: %s\" set up", GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_KEY, GesdenConstants.HTTP_CUSTOM_HEADER_CONTENT_TYPE_VALUE));
        if (jsdonData != null && !jsdonData.isEmpty()) {
            conn.setRequestProperty("Content-Length", Integer.toString(jsdonData.getBytes().length));
            conn.getOutputStream().write(jsdonData.getBytes("UTF-8"));
            println("JSON data set up " + conn);
        }
        println("Waiting for server response...");
        println("conn is " + conn);
        BufferedReader input = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String inputLine;
        StringBuffer data = new StringBuffer();
        while ((inputLine = input.readLine()) != null) {
            data.append(inputLine);
            println("line " + inputLine);
        }
    } catch (Exception e) {
        throw e;
    } finally {
        if (conn != null) {
            conn.disconnect();
            println("HTTP connection closed");
        }
        println("Finish sendHttpRequest() method");
    }
    return gesdenResponse;
}
The log output is the following:
2019-08-30 10:18:07.981 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : ***** SET PASSWORD started ******
2019-08-30 10:18:08.579 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : Show me the url: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.586 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Start toJSON() method
2019-08-30 10:18:08.589 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Finish toJSON() method
2019-08-30 10:18:08.589 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : The JSON body is: {"password":"Pabloarevalo11"}
2019-08-30 10:18:08.589 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : ***** SET PASSWORD antes del response ******
2019-08-30 10:18:08.592 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Start sendHttpRequest() method
2019-08-30 10:18:08.592 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Opening HTTP connection...
2019-08-30 10:18:08.592 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : URL: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.594 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Header "Authorization: *****" set up
2019-08-30 10:18:08.594 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : PASSWORD Operation
2019-08-30 10:18:08.594 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Method: PATCH
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : MODIFYING HEADERS FOR PATCH
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Setting up custom HTTP headers...
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Custom header "Sistema: Sanitas" set up
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Custom header "Accept: application/json" set up
2019-08-30 10:18:08.595 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Custom header "Content-Type: application/json" set up
2019-08-30 10:18:08.596 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : JSON data set up sun.net.www.protocol.http.HttpURLConnection:http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.596 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Waiting for server response...
2019-08-30 10:18:08.596 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : conn is sun.net.www.protocol.http.HttpURLConnection:http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Received: Server returned HTTP response code: 405 for URL: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : HTTP connection closed
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.groovy.gesden.common.ConnectorUtils : Finish sendHttpRequest() method
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : An exception occurred while setting the password for user popen070: Server returned HTTP response code: 405 for URL: http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password
2019-08-30 10:18:08.598 INFO 22051 --- [ container-1] c.g.gesden.ResetPasswordScriptConnector : ****** SET PASSWORD finished ******
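A likely cause: java.net.HttpURLConnection does not support PATCH as a request method, so the code above tunnels it as POST with an X-HTTP-Method-Override header, and that only works if the server is configured to honor that header; Postman sends a genuine PATCH, which is why it returns 200. A minimal sketch (assuming Apache HttpClient 4.x is available; the credentials and JSON body are placeholders) that issues a real PATCH:
@Grab('org.apache.httpcomponents:httpclient:4.5.13')
import org.apache.http.client.methods.HttpPatch
import org.apache.http.entity.ContentType
import org.apache.http.entity.StringEntity
import org.apache.http.impl.client.HttpClients

def patch = new HttpPatch('http://10.4.8.107:8080/ServiceGesdenScim-1.0/Users/popen070/password')
// Basic auth header, same as the original code builds by hand
patch.setHeader('Authorization', "Basic ${'user:password'.bytes.encodeBase64()}")
patch.entity = new StringEntity('{"password":"..."}', ContentType.APPLICATION_JSON)

HttpClients.createDefault().withCloseable { client ->
    client.execute(patch).withCloseable { response ->
        println response.statusLine
        println response.entity?.content?.text
    }
}
Alternatively, keep HttpURLConnection and configure the server side to honor X-HTTP-Method-Override.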

Sample project throwing NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper

I have implemented the project as per https://spring.io/guides/gs/batch-processing/,
but I am getting:
Error creating bean with name 'batchConfigurer': Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper
I am new to spring-batch. Can anyone please help?
The following is working as expected:
$>git clone https://github.com/spring-guides/gs-batch-processing.git
$>cd gs-batch-processing/complete
$>./mvnw clean install
$>./mvnw spring-boot:run
The output is the following:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.1.4.RELEASE)
2019-05-30 12:23:12.642 INFO 90644 --- [ main] hello.Application : Starting Application on localhost with PID 90644 (/private/tmp/gs-batch-processing/complete/target/classes started by mbenhassine in /private/tmp/gs-batch-processing/complete)
2019-05-30 12:23:12.646 INFO 90644 --- [ main] hello.Application : No active profile set, falling back to default profiles: default
2019-05-30 12:23:13.333 INFO 90644 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-05-30 12:23:13.338 WARN 90644 --- [ main] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=org.hsqldb.jdbcDriver was not found, trying direct instantiation.
2019-05-30 12:23:13.683 INFO 90644 --- [ main] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Driver does not support get/set network timeout for connections. (feature not supported)
2019-05-30 12:23:13.687 INFO 90644 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-05-30 12:23:14.091 INFO 90644 --- [ main] o.s.b.c.r.s.JobRepositoryFactoryBean : No database type set, using meta data indicating: HSQL
2019-05-30 12:23:14.277 INFO 90644 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
2019-05-30 12:23:14.437 INFO 90644 --- [ main] hello.Application : Started Application in 2.114 seconds (JVM running for 5.2)
2019-05-30 12:23:14.438 INFO 90644 --- [ main] o.s.b.a.b.JobLauncherCommandLineRunner : Running default command line with: []
2019-05-30 12:23:14.503 INFO 90644 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=importUserJob]] launched with the following parameters: [{run.id=1}]
2019-05-30 12:23:14.530 INFO 90644 --- [ main] o.s.batch.core.job.SimpleStepHandler : Executing step: [step1]
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Jill, lastName: Doe) into (firstName: JILL, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Joe, lastName: Doe) into (firstName: JOE, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Justin, lastName: Doe) into (firstName: JUSTIN, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: Jane, lastName: Doe) into (firstName: JANE, lastName: DOE)
2019-05-30 12:23:14.590 INFO 90644 --- [ main] hello.PersonItemProcessor : Converting (firstName: John, lastName: Doe) into (firstName: JOHN, lastName: DOE)
2019-05-30 12:23:14.604 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : !!! JOB FINISHED! Time to verify the results
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JILL, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JOE, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JUSTIN, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JANE, lastName: DOE> in the database.
2019-05-30 12:23:14.607 INFO 90644 --- [ main] hello.JobCompletionNotificationListener : Found <firstName: JOHN, lastName: DOE> in the database.
2019-05-30 12:23:14.610 INFO 90644 --- [ main] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=importUserJob]] completed with the following parameters: [{run.id=1}] and the following status: [COMPLETED]
As you can see, there is no NoClassDefFoundError.
Two jars were corrupted: jackson-databind and log4j.
During mvn clean install, the error "invalid LOC header (bad signature)" appeared in the logs, but the build was still successful, so I missed it.
I had to delete the corresponding folders from .m2 and run mvn clean install again; this resolved the issue.
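One way to find and clear corrupted artifacts without hunting through .m2 manually (a sketch; the find test detects the same "invalid LOC header" corruption, and the purge goal re-downloads the listed artifact on the next build):
$ # list jars that fail the zip integrity check
$ find ~/.m2/repository -name "*.jar" -exec sh -c 'unzip -tq "$1" > /dev/null || echo "CORRUPT: $1"' _ {} \;
$ # remove and re-resolve a specific artifact
$ mvn dependency:purge-local-repository -DmanualInclude=com.fasterxml.jackson.core:jackson-databind
$ mvn clean install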