spinnaker/halyard: Unable to communicate with the Kubernetes cluster

I am trying to deploy Spinnaker on multiple nodes. I have 2 VMs: the first runs Halyard and kubectl, the second hosts the Kubernetes master API.
My kubectl is configured correctly and is able to communicate with the remote Kubernetes API;
"kubectl get namespaces" works:
kubectl get namespaces
NAME          STATUS    AGE
default       Active    16d
kube-public   Active    16d
kube-system   Active    16d
But when I run this command
hal config provider -d kubernetes account add spin-kubernetes --docker-registries myregistry
I get this error:
Add the spin-kubernetes account
Failure
Problems in default.provider.kubernetes.spin-kubernetes:
- WARNING You have not specified a Kubernetes context in your
halconfig, Spinnaker will use "default-system" instead.
? We recommend explicitly setting a context in your halconfig, to
ensure changes to your kubeconfig won't break your deployment.
? Options include:
- default-system
! ERROR Unable to communicate with your Kubernetes cluster:
Operation: [list] for kind: [Namespace] with name: [null] in namespace:
[null] failed..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
- Failed to add account spin-kubernetes for provider
kubernetes.

From the error message there seem to be two approaches to this: set your halconfig to use the default-system context so it can communicate with your cluster, or the other way around, that is, configure your kubeconfig context.
Try this:
kubectl config view
I suppose you'll see the context and current-context there set to default-system; try changing those.
For more help, run
kubectl config --help
I guess you're looking for the set-context option.
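A minimal sketch of what that might look like, assuming your kubeconfig already points at the remote cluster; the context name my-context and the placeholders are hypothetical, so adjust them to whatever kubectl config view shows:
kubectl config set-context my-context --cluster=<your-cluster> --user=<your-user> --namespace=default
kubectl config use-context my-context
hal config provider kubernetes account add spin-kubernetes --docker-registries myregistry --context my-context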
Hope that helps.

You can set this in your halconfig as mentioned by @Naim Salameh.
Another way is to try setting your K8S cluster info in your default Kubernetes config ~/.kube/config.
Not certain this will work since you are running halyard and kubectl on different VMs.
# ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    server: http://my-kubernetes-url
  name: my-k8s-cluster
contexts:
- context:
    cluster: my-k8s-cluster
    namespace: default
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users: []
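Once that file is in place on the VM running halyard, a quick sanity check (a sketch, using the context name from the config above) is to confirm that kubectl on that same VM can list namespaces through it, since that is essentially the call halyard's validation performs:
kubectl --kubeconfig ~/.kube/config --context my-context get namespaces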

Related

Install calico GlobalNetworkPolicy via helm chart

I am trying to install a Calico GlobalNetworkPolicy that will be applicable to all the pods in the cluster regardless of namespace. To apply a GlobalNetworkPolicy, as per the docs here:
Calico network policies and Calico global network policies are applied
using calicoctl
i.e. with the calicoctl command (assuming the calicoctl binary is installed on the host):
calicoctl apply -f global-policy.yaml
OR, if we have a calicoctl pod running:
kubectl exec -ti -n kube-system calicoctl -- /calicoctl apply -f global-deny.yaml -o wide
global-policy.yaml:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace == "kube-system"
  types:
  - Ingress
  - Egress
Question: How do I install such a policy via a Helm chart? Helm implicitly applies it through the Kubernetes API (like kubectl does), and that causes an error on install.
Error when using kubectl or helm:
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "default-deny" namespace: "" from "": no matches for kind "GlobalNetworkPolicy" in version "projectcalico.org/v3"
As per the doc you linked, a Calico global network policy is a non-namespaced resource and can be applied to any kind of endpoint (pods, VMs, host interfaces) independent of namespace.
But you are using a namespace in the YAML; that might be the reason for the error. Kindly remove the namespace and try again.
Because global network policies use kind: GlobalNetworkPolicy, they are grouped separately from kind: NetworkPolicy. For example, global network policies will not be returned from calicoctl get networkpolicy, and are rather returned from calicoctl get globalnetworkpolicy.
Below is the reference YAML from the doc:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-tcp-port-6379
For more information, refer to Global Network Policy, Calico Install Via Helm and Calico command line tools.
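One extra check worth doing (this is an assumption on my part, not something from the linked docs) is to verify whether the projectcalico.org API is actually served in the cluster, since "no matches for kind" means the API group is not registered yet:
kubectl api-versions | grep projectcalico
kubectl get crd | grep projectcalico.org
If the API is present, the GlobalNetworkPolicy manifest can live under the chart's templates/ directory like any other resource and be applied with a normal helm install.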

How to properly access multiple Kubernetes clusters using kubectl

I have two clusters and the config files are stored in .kube. I am exporting KUBECONFIG as below
export KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2
Checking the contexts:
kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin
          cluster-2   cluster-2   kubernetes-admin
But when I choose cluster-2 as my current context, I get an error:
kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin
          cluster-2   cluster-2   kubernetes-admin
kubectl config use-context cluster-2
Switched to context "cluster-2".
kubectl get pods -A
error: You must be logged in to the server (Unauthorized)
If I export only the config for cluster-2 and try running kubectl, it works fine.
My question is whether I am exporting the config files properly, or whether I should be doing something more.
You need a separate AUTHINFO (context.user in the config file) for each cluster, with the respective credentials.
For example:
apiVersion: v1
clusters:
- cluster:
    server: https://192.168.10.190:6443
  name: cluster-1
- cluster:
    server: https://192.168.99.101:8443
  name: cluster-2
contexts:
- context:
    cluster: cluster-1
    user: kubernetes-admin-1
  name: cluster-1
- context:
    cluster: cluster-2
    user: kubernetes-admin-2
  name: cluster-2
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-1.crt
    client-key: /home/user/.minikube/credential-for-cluster-1.key
- name: kubernetes-admin-2
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-2.crt
    client-key: /home/user/.minikube/credential-for-cluster-2.key
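If you prefer to keep the two separate files, one common way to produce a single merged config like the one above is (a sketch, reusing the file paths from the question):
KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2 \
  kubectl config view --flatten > /home/vagrant/.kube/config
Note that when both files define a user with the same name (here kubernetes-admin), kubeconfig merging keeps the first definition it finds, so cluster-2 ends up using cluster-1's credentials; renaming the users (e.g. kubernetes-admin-1 and kubernetes-admin-2) before merging avoids that collision.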
You can find more useful tips in the following article:
Using different kubectl versions with multiple Kubernetes clusters:
When you are working with multiple Kubernetes clusters, it’s easy to
mess up with contexts and run kubectl in the wrong cluster. Beyond
that, Kubernetes has restrictions for versioning mismatch between the
client (kubectl) and server (kubernetes master), so running commands
in the right context does not mean running the right client version.
To overcome this:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to change between multiple kubeconfig files
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to change fast between clusters/namespaces
Use aliases to combine them all together
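For example, a few of these combined might look like this (just a sketch; kubectx and kubens have to be installed separately, and the alias names are arbitrary):
export KUBECONFIG=$HOME/.kube/config-cluster1:$HOME/.kube/config-cluster2
alias k=kubectl
alias kctx=kubectx   # switch between clusters
alias kns=kubens     # switch between namespaces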
I also recommend the following reads:
Mastering the KUBECONFIG file by Ahmet Alp Balkan (Google Engineer)
How Zalando Manages 140+ Kubernetes Clusters by Henning Jacobs (Zalando Tech)
I wrote a script to switch kubeconfig and namespace easily. Hope it can help you.
. k-use -k <kubeconfig> -n <namespace>
https://github.com/kingonion/k-use

K8S api cloud.google.com not available in GKE v1.16.13-gke.401

I am trying to create a BackendConfig resource on a GKE cluster v1.16.13-gke.401 but it gives me the following error:
unable to recognize "backendconfig.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
I have checked the available APIs with the kubectl api-versions command and cloud.google.com is not available. How can I enable it?
I want to create a BackendConfig with a custom health check like this:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 8
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 3
    type: HTTP
    requestPath: /health
    port: 10257
And attach this BackendConfig to a Service like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
As mentioned in the comments, the issue was caused by the lack of the HTTP Load Balancing add-on in your cluster.
When you create a GKE cluster with all default settings, features like HTTP Load Balancing are enabled.
The HTTP Load Balancing add-on is required to use the Google Cloud Load Balancer with Kubernetes Ingress. If enabled, a controller will be installed to coordinate applying load balancing configuration changes to your GCP project
More details can be found in GKE documentation.
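If you want to check whether the add-on is currently enabled, something like the following should work (a sketch; the cluster name and zone are placeholders, and the field path is my assumption about the describe output):
gcloud container clusters describe <clustername> --zone=<your-zone> \
  --format="value(addonsConfig.httpLoadBalancing.disabled)"
It should print True when the add-on is disabled and nothing when it is enabled.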
As a test, I created Cluster-1 without the HTTP Load Balancing add-on. There was no BackendConfig CRD (Custom Resource Definition).
The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. The name of a CRD object must be a valid DNS subdomain name.
Without the BackendConfig CRD and without the cloud.google.com apiVersion, as shown below (both greps return nothing),
user@cloudshell:~ (k8s-tests-XXX)$ kubectl get crd | grep backend
user@cloudshell:~ (k8s-tests-XXX)$ kubectl api-versions | grep cloud
I was not able to create any BackendConfig.
user@cloudshell:~ (k8s-tests-XXX)$ kubectl apply -f bck.yaml
error: unable to recognize "bck.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
To make it work, you have to enable HTTP Load Balancing. You can do it via the UI or the command line.
Using the UI:
Navigation Menu > Clusters > [Cluster-Name] > Details > Click on Edit > Scroll down to Add-ons and expand > Find HTTP load balancing and change it from Disabled to Enabled.
or command:
gcloud beta container clusters update <clustername> --update-addons=HttpLoadBalancing=ENABLED --zone=<your-zone>
$ gcloud beta container clusters update cluster-1 --update-addons=HttpLoadBalancing=ENABLED --zone=us-central1-c
WARNING: Warning: basic authentication is deprecated, and will be removed in GKE control plane versions 1.19 and newer. For a list of recommended authentication methods, see: https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication
After a while, when the add-on was enabled:
$ kubectl get crd | grep backend
backendconfigs.cloud.google.com 2020-10-23T13:09:29Z
$ kubectl api-versions | grep cloud
cloud.google.com/v1
cloud.google.com/v1beta1
$ kubectl apply -f bck.yaml
backendconfig.cloud.google.com/my-backendconfig created

kubectl not working from other host, but works fine from localhost

What happened:
I'm testing Kubernetes 1.9.0 to upgrade a production cluster and I cannot access it with kubectl from another host.
I'm getting the following error:
pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\"
I tried with the admin user and with another user created earlier with a read-only role.
What you expected to happen:
Works fine on kubernetes 1.5
How to reproduce it (as minimally and precisely as possible):
I installed kubernetes 1.9.0 with kubeadm.
I can access the local cluster from the master with the following command:
kubectl --kubeconfig kubeconfig get pods
with server: https://127.0.0.1:6443
I added a rule on haproxy to redirect that port to another, and did some tests:
The old environment has a proxy configured so that all requests for https://example.org/api/k8s are redirected to the k8s API endpoint.
I configured this new environment with the same configuration, but it is not working. (Error: pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\")
I configured this new environment with a new DNS name, proxying in TCP mode from port 443 to 6443, but it is not working. (Error: pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\")
The kubeconfig file sets the server field to: https://k8s.example.org
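For reference, a TCP-mode passthrough like the one described in that last test would look roughly like this in haproxy (a sketch with placeholder names and a hypothetical master address):
frontend k8s-api
    bind *:443
    mode tcp
    option tcplog
    default_backend k8s-api-master
backend k8s-api-master
    mode tcp
    server master1 10.0.0.10:6443 check
In TCP mode the TLS session, including the client certificate, is passed through unchanged to the API server.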
Anything else we need to know?:
kubeconfig file (kubeconfig for admin user is similar):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    server: https://127.0.0.1:6443
    #server: https://k.example.org
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: read_only
  name: read_only-context
current-context: read_only-context
kind: Config
preferences: {}
users:
- name: read_only
  user:
    as-user-extra: {}
    client-certificate: /etc/kubernetes/users/read_only/read_only.crt
    client-key: /etc/kubernetes/users/read_only/read_only.key
Environment:
Kubernetes version (use kubectl version): 1.9.0
Cloud provider or hardware configuration: bare metal (in fact a VM on AWS)
OS (e.g. from /etc/os-release): Centos 7
Kernel (e.g. uname -a): 3.10.0-514.10.2.el7.x86_64
Install tools: kubeadm
Others:

Kubectl Error when accessing Namespaces

I was trying out the Tectonic Kubernetes sandbox setup and according to their documentation:
https://coreos.com/tectonic/docs/latest/tutorials/first-app.html
I downloaded kubectl and the corresponding kube-config files, but when I tried to get the namespaces using the following command:
kubectl get namespaces
I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What is this? From where is it picking up this port localhost:8080?
EDIT:
Joe-MacBook-Pro:~ joe$ kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
Joe-MacBook-Pro:~ joe$
I'm lacking some details on your setup, but the problem is basically clear: you're not connected to the cluster. (localhost:8080 is simply the default address kubectl falls back to when it has no kubeconfig or context configured.)
You should have a kubeconfig file containing the cluster connection information, i.e. the context; I assume if you run kubectl config view you'll get nothing.
I'm on Windows using Git Bash; if I run the same command I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://platform-svc-integration.net
  name: svc-integration
contexts:
- context:
    cluster: svc-integration
    user: svc-integration-admin
  name: svc-integration-system
current-context: svc-integration-system
kind: Config
preferences: {}
users:
- name: svc-integration-admin
  user:
    client-certificate: <path>/admin/admin.crt
    client-key: <path>/admin/admin.key
Basically, what I'm trying to say is that you need to configure your context. Start by running kubectl config --help to list your options; it's pretty straightforward, but if you don't manage, just refer to the documentation.
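Since Tectonic gives you a downloaded kubeconfig file, the simplest options are probably to point kubectl at it directly, or to build a context by hand (a sketch; the file path and the cluster/user/context names are placeholders, not from the tutorial):
export KUBECONFIG=$HOME/Downloads/kubectl-config
kubectl get namespaces
# or, building the context manually:
kubectl config set-cluster my-cluster --server=https://<cluster-api-endpoint>
kubectl config set-credentials my-user --client-certificate=<path>/admin.crt --client-key=<path>/admin.key
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context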