How to disable istio-proxy sidecar access log for specific deployments in Kubernetes

I'm using the istio-proxy sidecar with Kubernetes; sidecars are automatically injected into the pods.
I want to turn off the access log for one single deployment (without disabling the sidecar).
Is there an annotation to do that?

As I mentioned in the comments:
If you want to disable Envoy's access logging globally, you can use istioctl or the Istio operator to do that. There is Istio documentation about it.
Remove, or set to "", the meshConfig.accessLogFile setting in your Istio install configuration.
There is an istioctl command:
istioctl install --set meshConfig.accessLogFile=""
There is an example with the Istio operator:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: ""
If you want to disable it for a specific pod, you can use the command below; there is Envoy documentation about that.
curl -X POST http://localhost:15000/logging?level=off
As you're looking for a way to do that per deployment, that trick with an init container and the above curl command might actually work.
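A minimal sketch of that idea follows. Note that an init container runs before the injected istio-proxy is up, so this sketch uses a plain helper container that retries until the admin endpoint answers; the deployment name, labels, and helper image are illustrative assumptions, not anything Istio provides:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # hypothetical deployment name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest            # your application image
      - name: disable-proxy-access-log  # illustrative helper, not an Istio feature
        image: curlimages/curl          # any image with sh and curl will do
        command: ["sh", "-c"]
        args:
        - |
          # keep retrying until the sidecar's Envoy admin API is reachable,
          # then turn its logging off (same call as the curl command above)
          until curl -fsS -X POST "http://localhost:15000/logging?level=off"; do
            sleep 2
          done
          # keep the container alive so the pod stays running
          while true; do sleep 3600; done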

Related

How to change Istio-ingressgateway type from "LoadBalancer" to "ClusterIP"

I am using Azure Kubernetes. I installed Istio 1.6.1, and it installed istio-ingressgateway as a LoadBalancer. I don't want to use the Istio ingress gateway because I want to use Kong ingress.
I tried to run the command below to change the istio-ingress service from LoadBalancer to ClusterIP, but I'm getting errors.
$ kubectl patch svc istio-ingressgateway -p '{"spec": {"ports": "type": "ClusterIP"}}' -n istio-system
Error from server (BadRequest): invalid character ':' after object key:value pair
I'm not sure if I can make the change, or if I should delete and re-create the istio-ingress service?
The better option would be to reinstall Istio without the ingress gateway. Do not install the default profile, as it will install the ingress gateway along with the other components. Check the various settings mentioned on the Istio installation page and disable the ingress gateway.
Also check the documentation about using Istio and Kong together on Kubernetes, and see what needs to be done on the Kong installation in order to enable communication between Kong and the other services.
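For Istio 1.6 that roughly translates to turning the gateway component off in the IstioOperator spec. A sketch only, applied with something like istioctl install -f no-ingress.yaml (the file name is illustrative):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    ingressGateways:
    - name: istio-ingressgateway    # the gateway installed by the default profile
      enabled: false                # do not deploy it at all
If you only want to fix the patch command from the question instead, the type field sits directly under spec, so the JSON would be: kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec": {"type": "ClusterIP"}}'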

How do I install Prometheus with Helm so that it's available from the browser?

I'm installing Prometheus on GKE with Helm using the standard chart as in
helm install -n prom stable/prometheus --namespace hal
but I need to be able to pull up the Prometheus UI in the browser. I know that I can do it with port forwarding, as in
kubectl port-forward -n hal svc/prom-prometheus-server 8000:80
but I'm being told "No, just expose it." Of course, there's already a service so just doing
kubectl expose deploy -n hal prom-prometheus-server
isn't going to work. I assume there's some value I can set in values.yaml that will give me an external IP, but I can't figure out what it is.
Or am I misunderstanding when they tell me "Just expose it"?
It is generally a very bad idea to expose Prometheus itself as it has no authentication mechanism, but you can absolutely set up a LoadBalancer service or Ingress aimed at the HTTP port if you want.
More commonly (and supported by the chart) you'll use Grafana for the public view and only connect to Prom itself via port-forward when needed for debugging.
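If you do decide to expose it anyway, the chart value the question is looking for is most likely server.service.type. A hedged sketch, assuming the stable/prometheus chart's value layout:
# values-lb.yaml -- key names assume the stable/prometheus chart
server:
  service:
    type: LoadBalancer
helm upgrade prom stable/prometheus -f values-lb.yaml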
I agree that it's a bad idea to expose Prometheus publicly, but if it's a demo it's OK.
Run:
kubectl expose deploy -n hal prom-prometheus-server --type=LoadBalancer
Kubernetes will create a GCP Load Balancer with an external IP.
Hope it helps!

Does Istio provide a dashboard similar to Kubernetes (yet)?

Does Istio have a dashboard similar to https://github.com/kubernetes/dashboard yet?
While it's not quite the same as the Kubernetes dashboard, Istio does come bundled with Grafana, which provides a UI for visualizing Istio metrics (via Prometheus).
See Istio's documentation for how to access: https://istio.io/docs/tasks/telemetry/using-istio-dashboard/
TL;DR, run this command: kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
then navigate here for the dashboard: http://localhost:3000/dashboard/db/istio-mesh-dashboard
A relatively new project called Kiali aims to provide a UI to observe Istio service mesh.
https://github.com/kiali/kiali
Yes, it has the Kiali dashboard. You can see Istio configurations there and also visualize the Istio mesh.
You can refer to the official documentation for the same:
https://istio.io/docs/tasks/observability/kiali/
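To actually open Kiali, a port-forward works much like the Grafana one above (20001 is Kiali's default port; adjust if your install differs):
kubectl -n istio-system port-forward svc/kiali 20001:20001
or, with a recent istioctl:
istioctl dashboard kiali
then browse to http://localhost:20001 (the console is typically served under /kiali).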

Does a DaemonSet need RBAC in Kubernetes?

When I deploy a DaemonSet in Kubernetes (1.7+), e.g. the nginx ingress controller as a DaemonSet, do I need to set some RBAC rules? I know I need to set some RBAC rules if I use a Deployment.
To deploy the ingress controller, you need to enable some RBAC rules. You can find them in the nginx controller repository: https://github.com/kubernetes/ingress/blob/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml
To create a DaemonSet you don't need to create RBAC rules for it. You might need RBAC for what is running in your pod, be it via a Deployment, a DaemonSet or whatever. It is the software you're running inside that might want to interact with the Kubernetes API, as is the case with an ingress controller. So it is in fact irrelevant how you make the pod happen; the RBAC (Cluster)Role, bindings, etc. are defined by what software you deploy and what access it needs.
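A sketch of what that looks like in practice: the RBAC objects bind to a ServiceAccount, and the DaemonSet merely references it. Names, namespace, API versions and image tag below are illustrative, and the ClusterRole itself would come from the controller's RBAC manifest linked above:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole      # defined in the controller's RBAC manifest
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: apps/v1                    # older clusters used extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress-serviceaccount   # RBAC follows this, not the workload kind
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
        args:
        - /nginx-ingress-controller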
I was able to enable RBAC using helm (--set rbac.create=true) and this error is not seen anymore, and the nginx ingress controller is working as expected!
helm install --name my-release stable/nginx-ingress --set rbac.create=true

Enabling Kubernetes PodPresets with kops

I've got a kubernetes cluster which was set up with kops with 1.5, and then upgraded to 1.6.2. I'm trying to use PodPresets. The docs state the following requirements:
You have enabled the api type settings.k8s.io/v1alpha1/podpreset
You have enabled the admission controller PodPreset
You have defined your pod presets
I'm seeing that for 1.6.x the first is taken care of (how can I verify?). How can I apply the second? I can see that there are three kube-apiserver-* pods running in the cluster (I imagine that's one per AZ). I guess I could edit their YAML config from the Kubernetes dashboard and add PodPreset to the admission-control string, but is there a better way to achieve this?
You can list the API groups which are currently enabled in your cluster either with the api-versions kubectl command, or by sending a GET request to the /apis endpoint of your kube-apiserver:
$ curl localhost:8080/apis
{
  "paths": [
    "/api",
    "/api/v1",
    "...",
    "/apis/settings.k8s.io",
    "/apis/settings.k8s.io/v1alpha1",
    "..."
  ]
}
Note: The settings.k8s.io/v1alpha1 API is enabled by default on Kubernetes v1.6 and v1.7 but will be disabled by default in v1.8.
You can use a kops ClusterSpec to customize the configuration of your Kubernetes components during the cluster provisioning, including the API servers.
This is described on the documentation page Using A Manifest to Manage kops Clusters, and the full spec for the KubeAPIServerConfig type is available in the kops GoDoc.
Example:
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - PodPreset
To update an existing cluster, perform the following steps:
1. Get the full cluster configuration with
   kops get cluster name --full -o yaml
2. Copy the kubeAPIServer spec block from it.
3. Do not push back the full configuration. Instead, edit the cluster configuration with
   kops edit cluster name
4. Paste the kubeAPIServer spec block, add the missing bits, and save.
5. Update the cluster resources with
   kops update cluster name --yes
6. Perform a rolling update to apply the changes:
   kops rolling-update cluster name --yes
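For completeness, once the admission controller is active, a pod preset itself is just a settings.k8s.io/v1alpha1 object. A minimal sketch (the name, label selector and injected values are illustrative):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database          # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      role: frontend            # applied to pods carrying this label
  env:
  - name: DB_PORT
    value: "6379"
  volumeMounts:
  - mountPath: /cache
    name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}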