Adding an environment variable inside a Kubernetes config file

I am trying to set my company proxy inside my KUBECONFIG file, hoping it would be picked up when I run kubectl from the command line. I have tried many things but nothing has helped so far. Here is my config file:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURqRENDQW5TZ0F3SUJBZ0lVRmdMaWRLWWlCcHlqbERBdEgvenRPRTE3ZTVBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hqRUxNQWtHQTFVRUJoTUNWVk14RGpBTUJnTlZCQWdUQlZSbGVHRnpNUTh3RFFZRFZRUUhFd1pCZFhOMAphVzR4RHpBTkJnTlZCQW9UQms5eVlXTnNaVEVNTUFvR0ExVUVDeE1EVDBSWU1ROHdEUVlEVlFRREV3WkxPRk1nClEwRXdIaGNOTWpBd05USXhNVGMxTlRBd1doY05NalV3TlRJd01UYzFOVEF3V2pCZU1Rc3dDUVlEVlFRR0V3SlYKVXpFT01Bd0dBMVVFQ0JNRlZHVjRZWE14RHpBTkJnTlZCQWNUQmtGMWMzUnBiakVQTUEwR0ExVUVDaE1HVDNKaApZMnhsTVF3d0NnWURWUVFMRXdOUFJGZ3hEekFOQmdOVkJBTVRCa3M0VXlCRFFUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLYWFxcFBMVXJTcVh0N1d2Rm5zZEpnTHdQUWlaelhLci9JTzJDQVcKdklFa3VEOWt5VDlnNWQ5RzNwZFlkdW53THhLcG1DOE1VdWdBZmZ3VTFSNDNNSGNXK3MxTzFKS0dnd3hzMElyVApGRkZLZ0lEMTBDMXY3Wkp3amNPY0JvWXZXUTJ4TjN6czBITEt5cGMvY2Y5ZkpMTy9zWWJ4aXNQMDNZdHNGajMrClVUckNJS25XSWRyWlhqeEI5YVJKcmtXbVpKMTlIUG9oUE5TT2hYOTdLVDNJTnZIT1JFdldIbnZMVmN5VXlqWkUKQnBCcHNlc3N0aHcvNDBNOHRUSTJMdExFQzRKbE9NdXl6azB4Z0hJRGtKSzlCa214cVkwdkE4Y3RxVTEvbjRNeQpiRlRmV1poYmFwTW9FcGJINmZFRU9FalVoVmwzdjdTYWswMWRiek10L0RPdkd5OENBd0VBQWFOQ01FQXdEZ1lEClZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGSWJKQmI3QTMzbEEKb1Z5aHBnZ0JsRDFJekhPRU1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ1MzRytEWTVKdTdIelFORjRUa2g1Mgp4cE1zdG84SzZBaGpWb2dqc21kWDQ5dlQ1WFdOU05GclJLeHdDcWIrU09DQ25hVTZvUFBPZVltSWI1YnBNcVZDCmsrYm9INUVXY2F1QzNWeWZCenppeTh0cktZdnZvam1PYTlBYkJnbHhNUVJ5VjNtQnJOZ0hGZktwaHV3N2FwZ0EKbWFrRWQwWjZTcXMzMSs0KzNGREJRL0Y4N0hpQ3hkbTZ4YmM0ayt3WFZPWFU3V3JEQlJ4cFRXT2J3bjNtWnRYeQpLYmw0UnBISGVOMnVkSFR2bE1rT3RCNHRlMGwrRURDbzFtbzZuZmJIM0w2QUJ5b3FyV0p1RzNCcWYzMWs1bEJhClNVcWdGaENxb3lDNnhWN09iUFdiN3BCSjF1UWdWck1DeFVTV3djVndrVkxKNTJTaWlUNWIyMy9LQWRrWFlGa0UKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://xxx.xxx.xx.xx:xxxx
  name: cluster-ctdozjqhfqt
contexts:
- context:
    cluster: cluster-ctdozjqhfqt
    namespace: xxxxxxxxxx
    user: user-ctdozjqhfqt
  name: context-ctdozjqhfqt
current-context: context-ctdozjqhfqt
kind: Config
preferences: {}
users:
- name: user-ctdozjqhfqt
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - xxxxxxx
      - --region
      - xxxxxxxx
      - --profile
      - xxxxxxxx
      command: xxxx
      env:
      - name: HTTP_PROXY
        value: "http://xxxxxxx"
      - name: HTTPS_PROXY
        value: "http://xxxxx"
If I set the same environment variables in my terminal and run kubectl again, it just works. Am I missing something, or doing something wrong in my config file?

I think your problem is tracked by the PR kubernetes-proxy-kubeconfig.
That PR would help people accessing a cloud kube context that needs an explicit proxy, while a local context like Minikube / docker-desktop / on-premise should not use the explicit proxy (otherwise you get a fairly logical 502 Bad Gateway). Note that the env entries under exec in your file are only passed to the credential plugin process; they do not affect kubectl's own connection to the API server, which is why exporting the variables in your terminal works while setting them in the kubeconfig does not.
ADDITIONAL INFO:
Supporting streaming endpoints over a SOCKS5 proxy is more involved; however, it seems like this would be useful for many people using standard HTTP proxies.
Take a look: http-proxy-kubeconfig, kubernetes-proxy-setup-kubeconfig.
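Depending on your kubectl version, the proxy-url field in the cluster entry may also do what you want. This is only a sketch: the proxy address is a placeholder, and the field requires a reasonably recent kubectl (around v1.19 or newer):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: https://xxx.xxx.xx.xx:xxxx
    # hypothetical proxy address; kubectl traffic to this cluster goes through it
    proxy-url: http://proxy.example.com:3128
  name: cluster-ctdozjqhfqt

With that in place, only this cluster uses the proxy, so local contexts such as Minikube or docker-desktop are unaffected.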

Related

kubernetes: change the current/default context via kubectl command

I am doing an exercise from KodeKloud, which provides CKAD certification training.
The exercise has a my-kube-config.yml file located under /root/. The file content is below
(I omitted some unrelated parts):
apiVersion: v1
kind: Config
clusters:
- name: production
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
- name: development
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
- name: test-cluster-1
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
contexts:
- name: test-user@production
  context:
    cluster: production
    user: test-user
- name: research
  context:
    cluster: test-cluster-1
    user: dev-user
users:
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
current-context: test-user@development
The exercise asks me to:
use the dev-user to access test-cluster-1. Set the current context
to the right one so you can do that.
Since I can see in the config file that there is a context named research which meets the requirement, I ran the following command to change the current context to the required one:
kubectl config use-context research
but the console gives me an error: error: no context exists with the name: "research".
OK, I guessed maybe the name research is not acceptable; maybe I have to follow the convention of <user-name>@<cluster-name>? I was not sure, but I then tried the following:
I modified the name from research to dev-user@test-cluster-1, so that the context part becomes:
- name: dev-user@test-cluster-1
  context:
    cluster: test-cluster-1
    user: dev-user
After that I ran the command kubectl config use-context dev-user@test-cluster-1, but I get the error:
error: no context exists with the name: "dev-user@test-cluster-1"
Why? Based on the course material, that is the way to change the default/current context. Is the course outdated, so that I am using a deprecated command? What is the problem?
Your initial idea was correct. You would need to change the context to research, which can be done using
kubectl config use-context research
But the command is not applied to the correct config in this instance. You can see the difference by checking the current-context with and without a kubeconfig pointed at the my-kube-config file.
kubectl config current-context
kubernetes-admin@kubernetes
kubectl config --kubeconfig=/root/my-kube-config current-context
test-user@development
So run the use-context command against the correct kubeconfig:
kubectl config --kubeconfig=/root/my-kube-config use-context research
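Alternatively, you can export KUBECONFIG so that every subsequent kubectl call uses that file without repeating the --kubeconfig flag (a short sketch, assuming the file really is at /root/my-kube-config):

export KUBECONFIG=/root/my-kube-config
kubectl config use-context research
kubectl config current-context   # should now print: research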
To be able to change the context, you have to edit the $HOME/.kube/config file with your config data and merge it with the default one. I've tried to replicate your config file and it was possible to change the context; however, your config file looks very strange.
See the lines from my console for reference:
bazhikov@cloudshell:~ (nb-project-326714)$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
  name: development
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.246.22.167
  name: gke_nb-project-326714_europe-west2_cluster-west2
...
...
...
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
bazhikov@cloudshell:~ (nb-project-326714)$ kubectl config use-context research
Switched to context "research".
Copy your default config file prior to editing it if you don't want to ruin your cluster config :)
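If you would rather merge the exercise file into your default config than edit it by hand, kubectl can flatten the two for you (a sketch; back up first and adjust the paths to your setup):

cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:/root/my-kube-config kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
kubectl config use-context research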

Unable to sign in with default config file generated with Oracle cloud

I have generated a config file with Oracle Cloud for Kubernetes. The generated file keeps throwing the error "Not enough data to create auth info structure." What are the methods for fixing this?
I have created a new Oracle Cloud account and set up a cluster for Kubernetes (small, with only 2 nodes, using the quick setup). When I upload the generated config file to the Kubernetes dashboard, it throws the error "Not enough data to create auth info structure".
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURqRENDQW5TZ0F3SUJBZ0lVZFowUzdXTTFoQUtDakRtZGhhbWM1VkxlRWhrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hqRUxNQWtHQTFVRUJoTUNWVk14RGpBTUJnTlZCQWdUQlZSbGVHRnpNUTh3RFFZRFZRUUhFd1pCZFhOMAphVzR4RHpBTkJnTlZCQW9UQms5eVlXTnNaVEVNTUFvR0ExVUVDeE1EVDBSWU1ROHdEUVlEVlFRREV3WkxPRk1nClEwRXdIaGNOTVRrd09USTJNRGt6T0RBd1doY05NalF3T1RJME1Ea3pPREF3V2pCZU1Rc3dDUVlEVlFRR0V3SlYKVXpFT01Bd0dBMVVFQ0JNRlZHVjRZWE14RHpBTkJnTlZCQWNUQmtGMWMzUnBiakVQTUEwR0ExVUVDaE1HVDNKaApZMnhsTVF3d0NnWURWUVFMRXdOUFJGZ3hEekFOQmdOVkJBTVRCa3M0VXlCRFFUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLSDFLeW5lc1JlY2V5NVlJNk1IWmxOK05oQ1o0SlFCL2FLNkJXMzQKaE5lWjdzTDFTZjFXR2k5ZnRVNEVZOFpmNzJmZkttWVlWcTcwRzFMN2l2Q0VzdnlUc0EwbE5qZW90dnM2NmhqWgpMNC96K0psQWtXWG1XOHdaYTZhMU5YbGQ4TnZ1YUtVRmdZQjNxeWZYODd3VEliRjJzL0tyK044NHpWN0loMTZECnVxUXp1OGREVE03azdwZXdGN3NaZFBSOTlEaGozcGpXcGRCd3I1MjN2ZWV0M0lMLzl3TXN6VWtkRzU3MnU3aXEKWG5zcjdXNjd2S25QM0U0Wlc1S29YMkRpdXpoOXNPZFkrQTR2N1VTeitZemllc1FWNlFuYzQ4Tk15TGw4WTdwcQppbEd2SzJVMkUzK0RpWXpHbFZuUm1GU1A3RmEzYmFBVzRtUkJjR0c0SXk5QVZ5TUNBd0VBQWFOQ01FQXdEZ1lEClZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGUFprNlI0ZndpOTUKOFR5SSt0VWRwaExaT2NYek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ0g2RVFHbVNzakxsQURKZURFeUVFYwpNWm9abFU5cWs4SlZ3cE5NeGhLQXh2cWZjZ3BVcGF6ZHZuVmxkbkgrQmZKeDhHcCszK2hQVjJJZnF2bzR5Y2lSCmRnWXJJcjVuLzliNml0dWRNNzhNL01PRjNHOFdZNGx5TWZNZjF5L3ZFS1RwVUEyK2RWTXBkdEhHc21rd3ZtTGkKRmd3WUJHdXFvS0NZS0NSTXMwS2xnMXZzMTMzZE1iMVlWZEFGSWkvTWttRXk1bjBzcng3Z2FJL2JzaENpb0xpVgp0WER3NkxGRUlKOWNBNkorVEE3OGlyWnJyQjY3K3hpeTcxcnFNRTZnaE51Rkt6OXBZOGJKcDlNcDVPTDByUFM0CjBpUjFseEJKZ2VrL2hTWlZKNC9rNEtUQ2tUVkFuV1RnbFJpTVNiRHFRbjhPUVVmd1kvckY3eUJBTkkxM2QyMXAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://czgkn3bmu4t.uk-london-1.clusters.oci.oraclecloud.com:6443
  name: cluster-czgkn3bmu4t
contexts:
- context:
    cluster: cluster-czgkn3bmu4t
    user: user-czgkn3bmu4t
  name: context-czgkn3bmu4t
current-context: context-czgkn3bmu4t
kind: ''
users:
- name: user-czgkn3bmu4t
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t
      command: oci
      env: []
If you could help me resolve this I would be extremely grateful.
You should be able to solve this by downloading a v1 kubeconfig, i.e. by specifying the --token-version=1.0.0 flag on the create-kubeconfig command:
oci ce cluster create-kubeconfig <options> --token-version=1.0.0
Then use that kubeconfig in the dashboard.
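For reference, a fuller form of that command might look like the following (a sketch only; the cluster OCID is the one from your config, while the file path and region are placeholders you may need to adjust):

oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t \
  --file ~/.kube/config \
  --region uk-london-1 \
  --token-version 1.0.0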

kubectl does not work with multiple clusters config

I have ~/.kube/config with the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes-jenkins
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.sk1.us-east-1.eks.amazonaws.com
  name: kuberntes-dev
contexts:
- context:
    cluster: kubernetes-dev
    user: aws-dev
  name: aws-dev
- context:
    cluster: kubernetes-jenkins
    user: aws-jenkins
  name: aws-jenkins
current-context: aws-dev
kind: Config
preferences: {}
users:
- name: aws-dev
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKS_DEV_CLUSTER
      command: heptio-authenticator-aws
      env: null
- name: aws-jenkins
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKS_JENKINS_CLUSTER
      command: heptio-authenticator-aws
      env: null
But when I try to run kubectl cluster-info I get:
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As far as I understand, something is wrong in my kubeconfig, but I don't see what exactly.
I also tried to find any related issues, but with no luck.
Could you suggest something?
Thanks.
You need to choose the context that you'd like to use. More information on how to use multiple clusters with multiple users is available here.
Essentially, you can view your current context (for the currently configured cluster):
$ kubectl config current-context
To view all the configured clusters:
$ kubectl config get-clusters
And to choose your cluster:
$ kubectl config use-context <context-name>
There are options to set different users per cluster in case you have them defined in your ~/.kube/config file.
Your cluster name has a typo in it (name: kuberntes-dev) compared with the reference in the context (cluster: kubernetes-dev)
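In other words, the second cluster entry should be named kubernetes-dev so the aws-dev context can resolve it:

- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.sk1.us-east-1.eks.amazonaws.com
  name: kubernetes-dev   # was: kuberntes-dev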

error: the server doesn't have resource type "svc"

Admins-MacBook-Pro:~ Harshin$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
I am following this document:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?refid=gs_card
This error occurs while I am trying to test my configuration in step 11 of "Configure kubectl for Amazon EKS".
apiVersion: v1
clusters:
- cluster:
    server: ...
    certificate-authority-data: ....
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "kunjeti"
        # - "-r"
        # - "<role-arn>"
      # env:
      #   - name: AWS_PROFILE
      #     value: "<aws-profile>"
Change "name: kubernetes" to actual name of your cluster.
Here is what I did to work through it:
1. Enabled verbose output to make sure the config files are read properly:
kubectl get svc --v=10
2. Modified the file as below:
apiVersion: v1
clusters:
- cluster:
    server: XXXXX
    certificate-authority-data: XXXXX
  name: abc-eks
contexts:
- context:
    cluster: abc-eks
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "abc-eks"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "aws"
I have faced a similar issue; however, this is not a direct solution but a workaround. Use AWS CLI commands to create the cluster rather than the console. As per the documentation, the user or role which creates the cluster will have master access.
aws eks create-cluster --name <cluster name> --role-arn <EKS Service Role> --resources-vpc-config subnetIds=<subnet ids>,securityGroupIds=<security group id>
Make sure that the EKS service role has IAM access (I gave it full access, however AssumeRole should do, I guess).
The EC2 machine role should have eks:CreateCluster and IAM access. Worked for me :)
I had this issue and found it was caused by the default key setting in ~/.aws/credentials.
We have a few AWS accounts for different customers plus a sandbox account for our own testing and research, so our credentials file looks something like this:
[default]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[cpproto]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
[sandbox]
aws_access_key_id = abc
aws_secret_access_key = xyz
region=us-east-1
I was messing around in our sandbox account, but the [default] section was pointing to another account.
Once I put the keys for sandbox into the default section, the "kubectl get svc" command worked fine.
It seems we need a way to tell aws-iam-authenticator which keys to use, the same as --profile in the aws CLI.
I guess you should uncomment the "env" item and change it to refer to your profile in ~/.aws/credentials, because aws-iam-authenticator requires the exact AWS credentials.
Refer to this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
To have the AWS IAM Authenticator for Kubernetes always use a specific named AWS credential profile (instead of the default AWS credential provider chain), uncomment the env lines and substitute the profile name to use.
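For example, if your ~/.aws/credentials contains a profile named sandbox (as in the earlier answer), the exec section could pin the authenticator to it. A sketch; substitute your own cluster name and profile:

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
      env:
        - name: AWS_PROFILE
          value: "sandbox"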

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my Spring Boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained at https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine; I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable in my GKE setup. These are the steps I completed, thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on GitLab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Set up the volume:
...
volumes:
- name: google-application-credentials-volume
  secret:
    secretName: google-application-credentials
    items:
    - key: application-credentials.json # default name created by the create secret from-file command
      path: application-credentials.json
3. Set up the volume mount:
spec:
  containers:
  - name: my-service
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
4. Set up the environment variable:
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
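Putting steps 2–4 together, the relevant part of a Deployment manifest would look roughly like this (a sketch; my-service, the secret name, and the image are taken from or added to the examples above as placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: gcr.io/my-project/my-service:latest   # placeholder image
        volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
      volumes:
      - name: google-application-credentials-volume
        secret:
          secretName: google-application-credentials
          items:
          - key: application-credentials.json
            path: application-credentials.json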
That means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure that you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard, or navigate to APIs & Services from the menu.
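If you prefer the command line, the same check should be possible with gcloud (vision.googleapis.com is the Vision API's service name):

# list services already enabled in the current project
gcloud services list --enabled
# enable the Cloud Vision API if it is missing
gcloud services enable vision.googleapis.com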
Would it help if you added the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables, as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"