I'm using Knative to deploy serverless applications on Kubernetes. Recently I noticed that cluster-local-gateway is missing from the istio-system namespace. Is there a reason for this? I'm wondering whether Istio removed it in the latest versions, or whether it has to be installed on the Knative side.
FYI - I do my deployments in GCP with Istio enabled; I did not install Istio manually.
The cluster-local gateway needs to be installed as part of the Knative installation. Since you are using GKE to install Istio instead of using Helm, you need to install it manually:
kubectl apply -f https://raw.githubusercontent.com/knative/serving/master/third_party/${VERSION}/istio-knative-extras.yaml
Here VERSION is the Istio version, e.g. istio-1.5.0:
https://github.com/knative/serving/blob/master/third_party/istio-1.5.0/istio-knative-extras.yaml
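For example, pinning the version and then verifying that the gateway service appears afterwards (a minimal sketch, using istio-1.5.0 as above):
VERSION=istio-1.5.0
kubectl apply -f https://raw.githubusercontent.com/knative/serving/master/third_party/${VERSION}/istio-knative-extras.yaml
kubectl get svc cluster-local-gateway -n istio-system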
I have added Istio to an existing GKE cluster. This cluster was initially deployed from the GKE UI with Istio "disabled".
I deployed Istio from the CLI using kubectl, and while everything works fine (istio namespace, pods, services, etc.) and I was later able to deploy an app with Istio sidecar pods, I wonder why the GKE UI still reports that Istio is disabled on this cluster. This is confusing: Istio is in fact deployed in the cluster, but the UI reports the opposite.
Is that a GKE bug?
Deployed Istio using:
kubectl apply -f install/kubernetes/istio-auth.yaml
Deployment code can be seen here:
https://github.com/hassanhamade/istio/blob/master/deploy
From my point of view this doesn't look like a bug. I assume the status is disabled because you have deployed a custom version of Istio on your cluster; this flag indicates the status of the GKE-managed version.
If you want to update your cluster to use GKE managed version, you can do it as following:
With mTLS enforced
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_STRICT
or
With mTLS in permissive mode
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_PERMISSIVE
Check this for more details.
Be careful since you already have deployed Istio, enabling the GKE managed one may cause issues.
Istio will only show as enabled in the GKE cluster UI when using the Istio on GKE addon. If you manually install Istio OSS, the cluster UI will show "disabled".
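If you want to double-check from the CLI what the UI is reflecting, one option is to inspect the cluster's add-on configuration (a sketch; that the add-on state surfaces under addonsConfig.istioConfig is my assumption):
gcloud beta container clusters describe CLUSTER_NAME --format="value(addonsConfig.istioConfig)"
An empty or disabled value here matches the "disabled" shown in the UI.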
I'm working with Kubernetes. I recently tried DigitalOcean's Kubernetes, which is very easy to install and access, but how can I install metrics-server on it, and how can I autoscale in Kubernetes on DigitalOcean?
Please reply as soon as possible.
The Metrics Server can be installed to your cluster with Helm:
https://github.com/helm/charts/tree/master/stable/metrics-server
helm init
helm upgrade --install metrics-server --namespace=kube-system stable/metrics-server
If your cluster has RBAC enabled, see the more comprehensive instructions for installing Helm into it:
https://github.com/helm/helm/blob/master/docs/rbac.md
If you wish to deploy without Helm, the manifests are available from the GitHub repository:
https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B
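Once the Metrics Server is up, autoscaling on DigitalOcean works the same as on any other Kubernetes cluster: a HorizontalPodAutoscaler reads CPU metrics from the Metrics Server. A minimal sketch, where my-app is a hypothetical Deployment name:
kubectl top nodes        # confirms the Metrics Server is serving metrics
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=5
kubectl get hpa          # shows current vs. target CPU for the autoscaler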
I read about Istio and I need to install it in Kubernetes.
I don't know the best way to install Istio in a multi-node Kubernetes cluster.
The setup is a Kubernetes cluster with multiple master nodes and multiple worker nodes.
Is the best way to install it Istio multicluster, or automatic sidecar injection?
Regards.
It makes no difference how many master and worker nodes your Kubernetes cluster has when installing Istio.
You can follow the instructions from this link.
Briefly, you need to:
Download the Istio release
Install Istio’s Custom Resource Definitions using kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml from that release
Install the Istio components using one of these options:
without mutual TLS authentication between sidecars, using kubectl apply -f install/kubernetes/istio-demo.yaml
with default mutual TLS authentication, using kubectl apply -f install/kubernetes/istio-demo-auth.yaml
Render the Kubernetes manifests with Helm and deploy them with kubectl (see the sketch after this list)
Use Helm and Tiller to manage the Istio deployment
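For the render-with-Helm option, a minimal sketch using the Helm 2-era syntax that matches the Tiller-based docs of those releases (paths are relative to the downloaded release root):
helm template install/kubernetes/helm/istio --name istio --namespace istio-system > istio.yaml
kubectl create namespace istio-system
kubectl apply -f istio.yaml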
For automatic injection, you need to install the istio-sidecar-injector component and add the istio-injection=enabled label to any Namespace in which you want it to work.
Example of commands:
kubectl label namespace <namespace> istio-injection=enabled
kubectl create -n <namespace> -f <your-app-spec>.yaml
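To verify that automatic injection is working, a quick check (application pods should show 2/2 containers: the app plus the Envoy sidecar):
kubectl get namespace -L istio-injection
kubectl get pods -n <namespace>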
I am trying to install Istio. I can easily package the Helm chart if I clone the repo from GitHub, but I am wondering whether there is a Helm chart repo that I can use instead.
If you're looking for a way to install an Istio version higher than 1.8.0, there is good news.
According to the documentation, Helm support is back, currently in alpha.
We’ve added support for installing Istio with Helm 3. This includes both in-place upgrades and canary deployment of new control planes, after installing 1.8 or later. Helm 3 support is currently Alpha, so please try it out and give your feedback.
There is Istio documentation about installing Istio with Helm 3; Helm 2 is not supported for installing Istio.
These are the prerequisites:
Download the Istio release
Perform any necessary platform-specific setup
Check the Requirements for Pods and Services
Install a Helm client with a version higher than 3.1.1
These are the installation steps for Istio 1.8.1:
Note that the default chart configuration uses secure third-party tokens for the service account token projections used by Istio proxies to authenticate with the Istio control plane. Before proceeding to install any of the charts below, verify whether third-party tokens are enabled in your cluster by following the steps described here. If third-party tokens are not enabled, add the option --set global.jwtPolicy=first-party-jwt to the Helm install commands. If jwtPolicy is not set correctly, pods associated with istiod, gateways, or workloads with injected Envoy proxies will not be deployed due to the missing istio-token volume.
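For example, if you find that third-party tokens are not enabled, the base chart install from step 3 below would become:
helm install -n istio-system istio-base manifests/charts/base \
  --set global.jwtPolicy=first-party-jwt
(and the same flag would be appended to the istiod and gateway installs).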
1. Download the Istio release and change directory to the root of the release package and then follow the instructions below.
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.8.1 sh -
cd istio-1.8.1
2. Create a namespace istio-system for Istio components:
kubectl create namespace istio-system
3. Install the Istio base chart which contains cluster-wide resources used by the Istio control plane:
helm install -n istio-system istio-base manifests/charts/base
4. Install the Istio discovery chart which deploys the istiod service:
helm install --namespace istio-system istiod manifests/charts/istio-control/istio-discovery \
--set global.hub="docker.io/istio" --set global.tag="1.8.1"
5. Install the Istio ingress gateway chart which contains the ingress gateway components:
helm install --namespace istio-system istio-ingress manifests/charts/gateways/istio-ingress \
--set global.hub="docker.io/istio" --set global.tag="1.8.1"
6. (Optional) Install the Istio egress gateway chart which contains the egress gateway components:
helm install --namespace istio-system istio-egress manifests/charts/gateways/istio-egress \
--set global.hub="docker.io/istio" --set global.tag="1.8.1"
7. Verify that all Kubernetes pods in the istio-system namespace are deployed and have a STATUS of Running:
kubectl get pods -n istio-system
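You can also check the Helm releases themselves; istio-base, istiod, and istio-ingress (plus istio-egress if you performed step 6) should each show a STATUS of deployed:
helm ls -n istio-system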
Yes, there is. A quick Google search turned this up: https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio
It's a pain to find, and they don't really reference it properly in the documentation, but according to these two comments, the charts can be found in the following locations:
master: https://gcsweb.istio.io/gcs/istio-prerelease/daily-build/master-latest-daily/charts/
v1.1.x: https://gcsweb.istio.io/gcs/istio-prerelease/daily-build/release-1.1-latest-daily/charts/
For a more recent answer, you can now add a Helm repository for Istio for a specific version with helm repo add istio.io https://storage.googleapis.com/istio-release/releases/{{< istio_full_version >}}/charts/ (where {{< istio_full_version >}} stands for the Istio version), according to the documentation here.
It seems that helm repo add istio.io https://storage.googleapis.com/istio-release/releases/charts works too, but only for older versions (up to 1.1.2). It is not yet documented but follows a more idiomatic versioning. An issue is open on Istio: https://github.com/istio/istio/issues/15498
The official Helm chart has arrived!
https://artifacthub.io/packages/helm/istio-official/gateway
Be careful: note the comment in issue #31275:
Note: this is a 1.12 prerelease, so you need to pass --devel to all helm commands and should not run it in prod yet.
Because the chart is still in alpha, we need to pass the --devel flag or specify a chart version to allow development versions.
Install steps:
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install --devel istio-ingressgateway istio/gateway
# or --version 1.12.0-alpha.1
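After installing, you can check that the release deployed; a quick sketch, assuming the chart's default of naming the deployment after the release (istio-ingressgateway here):
helm status istio-ingressgateway
kubectl get deployment istio-ingressgateway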
helm repo add istio https://istio.io/charts works. I found it in this PR.
I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster using helm install.
I'm aware of Helm dependencies, but I want to run some business logic along with the installations, and I'd rather do it in the installer pod than on the host triggering the whole installation process.
I found suggestions to use the Kubernetes REST API from inside a pod, but Helm requires kubectl to be installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's secret already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed Helm, which was already using kubectl to access the cluster and install Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
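For reference, the manual steps inside the Debian pod amounted to something like this (a sketch; the kubectl version and download URL pattern are illustrative):
apt-get update && apt-get install -y curl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x kubectl && mv kubectl /usr/local/bin/
kubectl cluster-info   # already works, thanks to the mounted service account token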
You could add kubectl to your installer pod.
"In cluster" credentials could be provided via service account in "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/