How to deploy Open Policy Agent in a Google Kubernetes cluster

I'm new to k8s, and I want to deploy OPA in the same pod as my application in Google Kubernetes Engine, but I don't know how to do this. Are there any references I can consult for more details? Could you please help me figure out the steps I should follow?

It should be similar to deploying to any other Kubernetes cluster, as documented here. The difference is that you may want to use a LoadBalancer-type Service instead of NodePort.
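As an illustration of the "same pod" part of the question, here is a minimal sketch of running OPA as a sidecar container alongside the application; the Deployment name, application image, and ports are hypothetical placeholders, not taken from the question.

```yaml
# Minimal sketch: "my-app" and its image/port are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app                      # your application container
          image: gcr.io/my-project/my-app:latest
          ports:
            - containerPort: 8080
        - name: opa                         # OPA runs as a sidecar in the same pod
          image: openpolicyagent/opa:latest
          args:
            - "run"
            - "--server"
            - "--addr=localhost:8181"       # reachable only from containers in this pod
```

The application then queries OPA over localhost:8181, which is the point of the sidecar pattern: policy decisions stay inside the pod.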

Related

How do Kubernetes clouds like GKE, EKS, etc. allocate an external IP for a Service of type LoadBalancer?

I'm new to K8s. I'm trying to self-deploy a k8s cluster on an internal company server, and I have a question about how to set up my cluster so it can allocate an external IP for a Service of type LoadBalancer. Can you tell me how this works in GKE or EKS?
Updated based on your comment:
What I mean is, how do EKS or GKE allocate the IP behind the scenes? What is the mechanism?
Here's the EKS version and here's the GKE version. It's a complex topic; I suggest you use these materials as a starting point before diving into the technical details (for which the previous answer gave you the sources). If you are thinking of an on-premises k8s cluster, it depends on the CNI you will use; a well-known CNI is Calico.
In GKE you can define Services to expose the applications running in the cluster. There are several kinds of Services; one of them is the LoadBalancer Service, which gets an external IP address.
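For reference, a minimal LoadBalancer Service might look like the sketch below; the name, selector, and ports are assumptions for illustration.

```yaml
# Hypothetical Service: name, selector and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer          # the cloud controller provisions an external load balancer
  selector:
    app: my-app               # pods carrying this label receive the traffic
  ports:
    - port: 80                # port exposed on the load balancer
      targetPort: 8080        # container port the traffic is forwarded to
```

Once the cloud provider has provisioned the load balancer, `kubectl get service my-app-lb` shows the allocated address under EXTERNAL-IP.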

Deploying Prometheus to a different Kubernetes cluster

We have a central monitoring cluster that monitors different k8s clusters (running various microservices).
Currently we've deployed Prometheus using manifests, but we plan to move to the Prometheus Operator.
My question is, is service discovery possible for Prometheus in this kind of setup? Will I be able to annotate my pods?
Of course, you'll be able to do service discovery with the Prometheus Operator for Kubernetes.
However, it does not work the way it does with a standalone Prometheus server and the kubernetes_sd_config configuration.
With the operator, service discovery works through a custom resource called ServiceMonitor. This resource uses a label selector that targets Services carrying a specific label. You can find an example here, on the official GitHub page.
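As a rough sketch (the names, labels, and port name below are hypothetical), a ServiceMonitor that selects Services labelled `app: my-app` could look like this:

```yaml
# Hypothetical ServiceMonitor: names, labels and the port name are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
  labels:
    release: prometheus       # must match the Prometheus resource's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app             # targets Services carrying this label
  endpoints:
    - port: metrics           # name of the port in the target Service to scrape
      interval: 30s
```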

Installation of Istio on GKE / Google Cloud

I have created a free account on GCP as well as my first cluster.
I want to deploy istio on my GKE cluster, so I am following the official instructions.
At some point, the instructions indicate that I should
Ensure that the Google Kubernetes Engine API is enabled for your
project (also found by navigating to “APIs & Services” -> “Dashboard”
in the navigation bar)
What is that supposed to mean?
Isn't the API already active since I have created and I am running a cluster?
How can a cluster be running without the API being enabled?
Enabling the GKE API is a prerequisite for running GKE. If you are already running GKE, you can skip this part.
You can enable Istio as part of GKE cluster creation. Here are the instructions from Google: https://cloud.google.com/istio/docs/istio-on-gke/installing
That page describes installing Istio via the "Istio on GKE" add-on.
If you are interested in installing Istio manually, you can find Google's instructions here.
To verify whether the API for GKE is enabled or disabled, go to "APIs & Services" in the Cloud Console and search for "Kubernetes Engine API". Its overview page provides more information about this API.

How to integrate Kubernetes Service Type "LoadBalancer" with Specific Cloud Load Balancers

I have a question around K8S Service Type "LoadBalancer".
I am working on developing a new "Kubernetes As a Service" Platform (like GKE etc.) for multi cloud.
Question is: the K8S Service type "LoadBalancer" works with cloud load balancers (which are external to Kubernetes). GKE & other cloud-based solutions provide direct integration with them, so if I create a GKE cluster & create a Service of type "LoadBalancer", it will transparently create a new GCP load balancer & show the load balancer IP in Kubernetes (as the external IP). The same applies to other cloud providers.
I want to offer a similar feature on my new "Kubernetes As a Service" platform, where users can choose a cloud provider, create a Kubernetes cluster & then apply a K8S Service of type "LoadBalancer", and this will result in a load balancer being created on the (user-selected) cloud platform.
I am able to automate the flow up to Kubernetes cluster creation, but I am clueless when it comes to the "K8S Service & external load balancer" integration.
Can anyone please help me with how to approach integrating the K8S Service type "LoadBalancer" with specific cloud load balancers? Do I need to write a new CRD, or is there any similar code available on Git (in case anyone knows a link for reference)?
You have to understand how Kubernetes interacts with the cloud provider. For example, I previously deployed Kubernetes on AWS with kops, and I saw that Kubernetes uses the AWS access key & access secret to interact with AWS. If I remember correctly, I saw some CLI options in kube-proxy or kubelet to support AWS. (I have searched the man pages of all the Kubernetes binaries for AWS options, but I couldn't find any to point you to.)
For example, look at the kubelet man page: it provides an option called --google-json-key to authenticate against GCP. You will get some idea if you deploy Kubernetes on AWS with kops or kube-aws and dig through the setup and its configuration/options, etc. (The same applies to other cloud providers.)
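On the consumer side, the cloud controller manager's service controller watches Services of type LoadBalancer and typically steers cloud-specific behaviour through annotations. A hedged sketch (the Service name and ports are placeholders; the annotation shown is the AWS one for requesting a Network Load Balancer):

```yaml
# Sketch only: Service name and ports are placeholders; the annotation is the
# AWS one that asks the cloud controller for a Network Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app-aws
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```

For your own platform, the analogous piece would be a controller (usually a cloud-controller-manager implementation) that watches such Services, creates or updates the corresponding cloud load balancer, and writes its address back into the Service's status.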

Rancher: connect to Kubernetes instead of starting Kubernetes

Rancher is designed (as best as I can tell) to own and run a kubernetes cluster. Rancher does provide a configuration so that kubectl can interact w/ the kubernetes cluster. Rancher seems like a nice tool. But as far as I can tell, there is no way to connect to an existing kubernetes cluster. Is there any way to do this?
If you are looking for a service that can connect to existing k8s cluster(s), then try Containership. You can use kubectl and/or the Containership UI to manage your workloads, config maps, etc. on multiple clusters.
Hope this helps!
I got this answer on the Rancher forums:
There is not, most of the value we can add at the moment is around configuring, managing, and controlling access to the installation we setup.
https://forums.rancher.com/t/rancher-connect-to-kubernetes-instead-of-start-kubernetes/3209