Istio installation failed - ibm-cloud

I followed the instructions on https://github.com/IBM/cloud-native-starter/blob/master/documentation/IKSDeployment.md on my Mac, Kubernetes is running on IBM Cloud.
The command
$ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
hangs/does not return.
I couldn't validate the Istio installation either:
$ istioctl verify-install
Error: unknown command "verify-install" for "istioctl"
Run 'istioctl --help' for usage.

We were able to solve the problem by following the prerequisite steps for the workshop: https://github.com/IBM/cloud-native-starter/blob/master/workshop/00-prerequisites.md#361-automated-creation-of-a-cluster-with-istio-for-the-workshop

Check out our managed Istio offering. To install Istio on your Kubernetes cluster on IBM Cloud, run one of the following, where xxxx is the name of your cluster:
Install Istio:
ic ks cluster-addon-enable istio --cluster xxxx
Install Istio with extras (tracing and monitoring):
ic ks cluster-addon-enable istio-extras --cluster xxxx
Install Istio with the Bookinfo sample:
ic ks cluster-addon-enable istio-sample-bookinfo --cluster xxxx
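If you do install the CRDs manually instead, a variant of the loop from the question that turns a silent hang into an explicit timeout may help with debugging. This is only a sketch, assuming the file layout from the cloud-native-starter guide:

```shell
# Apply each Istio CRD file, then wait (with a timeout) until the API
# server reports it as Established, so a stuck apply fails loudly
# instead of hanging silently.
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do
  kubectl apply -f "$i"
  kubectl wait --for=condition=established --timeout=60s -f "$i"
done
```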

Related

GKE gke-gcloud-auth-plugin

I'm trying to connect to a cluster and I'm getting the following error:
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
I have installed Google Cloud SDK 400, kubectl 1.22.12, and gke-gcloud-auth-plugin 0.3.0, and also set up ~/.bashrc with
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
but when I try to connect to the cluster I always get the same error. Any ideas?
Thanks
The cluster exists in that region, and I also verified the environment variable
with
echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
I installed the gke-gcloud-auth-plugin using gcloud components install... I do not know what else I can check.
gcloud components list
I solved the same problem by removing my current kubeconfig context for GCP.
Get your context name by running:
kubectl config get-contexts
Delete the context:
kubectl config delete-context CONTEXT_NAME
Reconfigure the credentials:
gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
The warning message should be gone by now.
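The steps above can be combined into one sketch. The gke_<project>_<region>_<cluster> context-name format is the default GKE naming scheme, and the CLUSTER/REGION/PROJECT values below are taken from the question; substitute your own:

```shell
# Make sure kubectl goes through the new auth plugin.
export USE_GKE_GCLOUD_AUTH_PLUGIN=True

CLUSTER=cluster1 REGION=europe-west2 PROJECT=my-project

# Drop the stale kubeconfig context (GKE contexts are normally named
# gke_<project>_<region>_<cluster>), then regenerate it.
kubectl config delete-context "gke_${PROJECT}_${REGION}_${CLUSTER}" || true
gcloud container clusters get-credentials "$CLUSTER" \
  --region "$REGION" --project "$PROJECT"

# The regenerated user entry should now use exec-based auth:
kubectl config view --minify -o jsonpath='{.users[0].user.exec.command}'
```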

AWS EKS EFS-CSI Driver version and how to upgrade it

With lots of known issues[1][2] in the EFS CSI driver, I'm planning to upgrade the driver on a running cluster.
I have read most of the AWS documentation but couldn't find straightforward answers to the following questions:
How to view the current efs-csi driver version in an EKS cluster.
How to upgrade the efs-csi driver to a specific version in a running cluster.
[1] https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/616
[2] https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/673
If you already have the efs-csi driver installed and used Helm to install it, the upgrade is straightforward. According to the documentation:
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa
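For the first question, viewing the currently installed driver version: one way is to read the image tag off the controller deployment. This is a sketch assuming the standard kube-system install with the default efs-csi-controller deployment name; adjust if your install differs:

```shell
# The image tag of the driver container is the driver version.
# containers[0] assumes the efs-plugin container is listed first;
# check the pod spec if a sidecar image is returned instead.
kubectl get deployment efs-csi-controller -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# If the driver was installed with Helm, the chart and app versions
# also show up in the release listing:
helm list -n kube-system --filter aws-efs-csi-driver
```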
Regards

Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)

Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears the --name is no longer needed so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers; the command above now works, however I cannot connect to the database using SQL Studio Manager from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
mymssql-mssql-linux-service   NodePort   10.107.98.68   <none>        1433:32489/TCP   7s
3) Then try to connect to the database using SQL Studio Manager 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Studio Manager.
Check whether the stable repo is added:
helm repo list
If not, add it:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
Then run the following to install mssql-linux:
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm in version 3 does not have any repository added by default (helm v2 had the stable repository added by default), so you need to add it manually.
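Note that the storage.googleapis.com URL used above has since been decommissioned; the archived stable charts moved to charts.helm.sh. A sketch using the current location:

```shell
# The old https://kubernetes-charts.storage.googleapis.com endpoint no
# longer serves charts; the archive now lives at charts.helm.sh.
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm search repo stable/mssql-linux
```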
Update:
First of all, if you are using Helm, keep everything in Helm values; it makes things cleaner and easier to find later, rather than mixing kubectl and Helm - I am referring to exposing the service via kubectl.
Ad. 1, 2. You have to read some docs to understand Kubernetes services.
With the expose command and type NodePort you are exposing your SQL Server instance on port 32489 on the Kubernetes nodes. You can check the IPs of the Kubernetes nodes with kubectl get nodes -o wide, so your database is available on <NodeIP>:32489. This approach is very tricky: it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Ad. 3. For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to SQL Server on localhost:1433.
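Putting that together with the login problem from the update: the chart generates the sa password into a Kubernetes secret, so reading it from there is more reliable than guessing. The secret name (<release>-mssql-linux-secret) and key (sapassword) below are assumptions based on the stable chart's templates; verify them with kubectl get secrets first:

```shell
# Forward SQL Server's port to localhost in the background.
kubectl port-forward deployment/mymssql-mssql-linux 1433:1433 &

# Read the generated sa password out of the chart's secret
# (name and key are assumptions; check "kubectl get secrets").
SA_PASSWORD=$(kubectl get secret mymssql-mssql-linux-secret \
  -o jsonpath='{.data.sapassword}' | base64 --decode)

# Connect with sqlcmd, or use "localhost,1433" in SQL Server Management Studio.
sqlcmd -S localhost,1433 -U sa -P "$SA_PASSWORD"
```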
In case if the chart you want to use is not published to hub you can install the package directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer

Desired GKE pod not found , google cloud composer

I am using Google Cloud Composer and have created a Composer environment. The environment is ready (has a green tick); now I am trying to set variables used in the DAG Python code using Google Cloud Shell.
command to set variables:
gcloud composer environments run test-environment \
--location us-central1 variables -- \
--set gcp_project xxx-gcp
Exact error message:
ERROR: (gcloud.composer.environments.run) Desired GKE pod not found. If the environment was recently started, please wait and retry.
I tried the following things as part of my investigation, but got the same error each time:
I created a new environment using the UI instead of gcloud shell commands.
I checked the pods in Kubernetes Engine and all are green; I did not see any issue.
I verified that the Composer API, Billing, Kubernetes, and all other required APIs are enabled.
I have the 'Editor' role assigned.
I added a screenshot; the first time I saw some failures:
Error with exit code 1
The Google troubleshooting guide describes: if the exit code is 1, the container crashed because the application crashed.
This is a side effect of Composer version 1.6.0 if you are using a google-cloud-sdk that is too old, because it now launches pods in namespaces other than default. The error you see is a result of looking for Kubernetes pods in the default namespace and failing to find them.
To fix this, run gcloud components update. If you cannot yet update, a workaround to execute Airflow commands is to manually SSH to a pod yourself and run airflow. To start, obtain GKE cluster credentials:
$ gcloud container clusters get-credentials $COMPOSER_GKE_CLUSTER_NAME
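If you don't know $COMPOSER_GKE_CLUSTER_NAME, the Composer API can report it; this sketch reuses the environment name and location from the question:

```shell
# Ask Composer which GKE cluster backs the environment; the output is a
# full resource path like projects/<proj>/zones/<zone>/clusters/<name>,
# from which you can take the cluster name and zone.
gcloud composer environments describe test-environment \
  --location us-central1 --format='value(config.gkeCluster)'
```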
Once you have the credentials, you should find which namespace the pods are running in (which you can also find using Cloud Console):
$ kubectl get namespaces
NAME                                    STATUS   AGE
composer-1-6-0-airflow-1-9-0-6f89fdb7   Active   17h
default                                 Active   17h
kube-public                             Active   17h
kube-system                             Active   17h
You can then SSH into any scheduler/worker pod, and run commands:
$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl -- airflow list_dags -r
You can also open a shell if you prefer:
$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl -- bash
airflow@airflow-worker-569bc59df5-x6jhl:~$ airflow list_dags -r
The failed airflow-database-init-job jobs are unrelated and will not cause problems in your Composer environment.

Jenkins-x cluster set up failed when specifying options like --nodes, master-size and others

If I run jx create cluster aws it creates the cluster on AWS without any issues, but if I want to specify some options like this:
jx create cluster aws --zones us-east-2b --nodes=2 --node-size=t2.micro --master-size=t2.micro
Then it fails constantly, whatever I tried to change, giving these kinds of errors for almost all options:
Error: unknown flag: --node-size, and the same for other options. Options were taken from here: https://jenkins-x.io/commands/jx_create_cluster_aws/
Setting up the cluster with kops with the same options doesn't have any issues.
I asked about this in a comment, but the actual answer appears to be that you are on a version of jx that doesn't match the documentation, because this is my experience with a freshly downloaded binary:
$ ./jx create cluster aws --verbose=true --zones=us-west-2a,us-west-2b,us-west-2c --cluster-name=sample --node-size=5 --master-size=m5.large
kops not found
kubectl not found
helm not found
? Missing required dependencies, deselect to avoid auto installing: [Use arrows to move, type to filter]
❯ ◉ kops
◉ kubectl
◉ helm
? nodes [? for help] (3)
^C
$ ./jx --version
1.3.90
You can see which version of jx you are using via:
jx version
You can check the options of a command via jx help create cluster aws, or by browsing the online CLI reference for the command: jx create cluster aws
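Given that, upgrading the CLI so it matches the documentation is the likely fix before retrying the flagged command. jx upgrade cli is the standard upgrade subcommand, though its exact behavior depends on how jx was installed:

```shell
# Upgrade the jx binary to the latest release, confirm the version,
# then retry the create command with the extra flags.
jx upgrade cli
jx version
jx create cluster aws --zones us-east-2b --nodes=2 \
  --node-size=t2.micro --master-size=t2.micro
```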