Exposing the Kubernetes API to the outside world

I was reviewing some material on Kubernetes security and found that it is possible to expose the Kubernetes API server so that it is accessible from the outside world. My question is: what would be the benefit of doing something this risky? Does anyone know of business cases that would justify it?
Thanks

Simply put, you can use the API endpoints to deploy any service from your local machine; of course, you must secure the API first.
I have created an application locally which builds images using the Docker API and deploys using the Kubernetes API.
Don't forget about securing your APIs.
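As a sketch of what that looks like in practice (the endpoint, token, and CA file below are hypothetical placeholders, not values from a real cluster): with the API server reachable from outside, deployments can be inspected or driven with nothing more than curl, which is exactly why TLS, authentication, and RBAC are non-negotiable:

```shell
# Hypothetical endpoint and credentials; adjust to your own cluster.
APISERVER=https://my-cluster.example.com:6443
TOKEN=$(cat ./service-account-token)

# List Deployments in the "default" namespace through the exposed API.
curl --cacert ./ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/apps/v1/namespaces/default/deployments"
```

Anyone who obtains a valid token can do the same, so the token should belong to a service account with the narrowest RBAC role that still allows the deployment it performs.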

Kubernetes: point a domain to a local service

I'm new to Kubernetes and trying to point all requests for a domain to another local service.
Both applications are running in the same cluster, in different namespaces.
Example domains
a.domain.com hosting first app
b.domain.com hosting the second app
When I do a curl request from the first app to the second app (b.domain.com), the request travels over the internet to reach the second app.
Usually what I could do is in /etc/hosts point b.domain.com to localhost.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure that's the correct approach.
Also, as I understand it, we could just call `service-name.namespace:port` from the first app, but I would like to keep the full URL.
Let me know if you need more details to help me solve this.
The way to do it is by using the Kubernetes Gateway API. Now, it is true that you can deploy your own implementation since this is an Open Source project, but there are a lot of solutions already using it and it would be much easier to learn how to implement those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a cloud environment, you can take a look at Anthos Service Mesh, Google's managed version of Istio.
Finally, take a look at the blog post Welcome to the service mesh era: traffic management between services is one element of the service mesh paradigm, along with others such as monitoring and logging.
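As a concrete sketch of the Istio approach (all names here are hypothetical): a VirtualService can resolve b.domain.com inside the mesh and route it straight to the second app's Service, so the curl from the first app keeps the full URL but the traffic never leaves the cluster:

```yaml
# Sketch, assuming Istio sidecars are injected in both apps and that the
# second app's Service is app-b in namespace app-b (hypothetical names).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: b-domain-internal
  namespace: app-b
spec:
  hosts:
    - b.domain.com                # the full URL the caller keeps using
  http:
    - route:
        - destination:
            host: app-b.app-b.svc.cluster.local   # in-cluster Service
            port:
              number: 80
```

Because no `gateways` field is set, the rule applies to traffic originating inside the mesh, which is exactly the app-to-app case here.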

Kubernetes beginner: trigger simple actions between apps

Very simple question: I have a front-end app running in Kubernetes, and I would like to create a containerized back-end app that would, obviously, also run in Kubernetes.
User actions in the frontend would need to trigger the execution of a command on the backend (`echo success!`, for example). The UI also needs to know the command's output.
What is the best way to implement this in k8s?
Either through an internal Service, or the two apps could also run in the same Pod.
Perhaps there is some kind of messaging involved, with applications such as RabbitMQ?
That depends on your application and how you plan to architect it.
Some people host the frontend in a storage bucket and send HTTP requests from there to the backend.
You can keep the frontend and backend in different Pods, or in a single Pod.
For example, if you are using Node.js with Express, you can run the backend as a simple API-service Pod and serve the frontend from it as well.
You can use the Kubernetes Service name for internal communication instead of adding a message broker (RabbitMQ; Redis can also be used), unless your web app really needs one.
I would also recommend checking out this tutorial: https://learnk8s.io/deploying-nodejs-kubernetes
GitHub repo of the application: https://github.com/learnk8s/knote-js/tree/master/01
Official example: https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
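To make the Service-name approach concrete, here is a minimal sketch (all names hypothetical) of a ClusterIP Service in front of the backend; the frontend then issues a plain HTTP request to the Service's DNS name and reads the command's output from the response body:

```yaml
# Sketch: a ClusterIP Service exposing the backend Pods inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend          # must match the backend Pod labels
  ports:
    - port: 8080          # port the frontend calls
      targetPort: 8080    # port the backend container listens on
```

From the frontend, the call is just `http://backend:8080` within the same namespace, or `http://backend.default.svc.cluster.local:8080` from another namespace; no message broker is needed for this simple request/response pattern.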

GKE with Hashicorp Vault - Possible to use Google Cloud Run?

I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by Hashicorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.
However, all of the documentation and YouTube videos that cover this type of setup always mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster, thus requiring more VMs.
I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly.
This video (it should start at the right time) mentions that Vault can be run as a Deployment, so I don't see there being an issue with state. And since Google mentions that each Cloud Run service gets its own stable HTTPS endpoint, I believe I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to Kubernetes, so I'm not sure if I'm entirely correct here.
Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.
In beta for three weeks, and not yet officially announced (it should be in a couple of days), you can have a look at Secret Manager. It's a serverless secret manager with, I think, all the basic features that you need.
The main reason it has not yet been announced is that the client libraries for several languages aren't finished/released yet.
The awesome guy in your video link, Seth Vargo, has been involved in this project.
He has also released Berglas. It's written in Go, uses Cloud KMS to encrypt the secrets and Google Cloud Storage to store them. I also recommend it.
I built a Python library to make it easy to use Berglas secrets from Python.
Hope this secret management tool meets your expectations. In any case, it's serverless and quite cheap!
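For a quick feel of Secret Manager (the secret name below is hypothetical, and while the product is still in beta these commands live under `gcloud beta secrets`), storing and reading a secret is essentially two commands:

```shell
# Create a secret and add its first version from stdin.
echo -n "s3cr3tvalue" | gcloud secrets create db-password \
    --replication-policy="automatic" \
    --data-file=-

# Read the latest version back.
gcloud secrets versions access latest --secret=db-password
```

Access is governed by IAM, so a workload only needs a service account with the Secret Manager accessor role rather than a dedicated Vault cluster.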

How to consume an Istio-based service that enables `mtls`?

Currently, I want to introduce Istio as our service-mesh framework for our microservices. I have played with it for some time (< 1 week), and my understanding is that Istio really provides an easy way to secure service-to-service communication. Much (or all?) of the Istio documentation provides examples of how a client and a server that each have istio-proxy (Envoy) installed as a sidecar container can establish secure communication using mTLS.
However, our existing clients (which I have no control over) consume our service (which will be migrated to Istio) without having Istio themselves, and I still don't understand how best to handle this.
Is there any tutorial or example that covers my use case?
How can a non-Istio client use mTLS to consume our Istio-based service? Think of simulating this with a basic curl command.
Also, I am thinking of distributing a specific service account (Kubernetes, GCP IAM service account, etc.) to clients to limit their privileges when calling our service. I have many questions about how these pieces (GCP IAM service accounts, Istio, RBAC, mTLS, JWT tokens, etc.) contribute to securing our service API.
Any advice?
You want to add a third party to your Istio mesh from outside your network, via SSL over the public internet?
I don't think Istio is really meant for federating external services, but you could have an Istio ingress gateway proxy sitting at the edge of your network to route traffic into, and back out of, your application.
https://istio.io/docs/tasks/traffic-management/ingress/
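A sketch of that edge setup (hostname and secret name are hypothetical): an ingress Gateway terminates ordinary TLS for external clients that cannot speak Istio mTLS, while traffic between sidecars inside the mesh can still be mTLS:

```yaml
# Sketch: an Istio ingress Gateway for non-mesh clients.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: external-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway         # the default ingress gateway Pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE              # plain server-side TLS for external clients
        credentialName: api-tls-cert   # hypothetical TLS certificate secret
      hosts:
        - api.example.com         # hypothetical public hostname
```

An external client then uses a basic `curl https://api.example.com/...` with no Istio involvement, and the gateway bridges into the mesh where sidecar-to-sidecar mTLS applies.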
If you're building microservices, then surely you already have an endpoint or API gateway; that seems more sensible to me. Try Apigee or something similar.

REST communication between two microservices without a static IP in Amazon ECS

In this question, I managed to set up REST communication between two microservices using a user-defined bridge network in docker-compose.
Now, I'm trying to do the same when hosting my microservices on AWS.
I could really use some pointers as to how to achieve this, because I'm terribly lost.
I've tried following numerous tutorials, both written and on pluralsight, but none seem to be close enough to my use case.
My project architecture is as follows:
https://i.stack.imgur.com/vc6TX.png
And my project infrastructure should probably look like this:
https://i.stack.imgur.com/X73HA.png
Thanks
You can use an internal load balancer for each service and create DNS records pointing at it for app-to-app communication. ECS also has a service discovery feature that is useful in this scenario.
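To sketch the service-discovery route (all names, IDs, and ARNs below are hypothetical placeholders): create a private DNS namespace in AWS Cloud Map, then register the ECS service in it, so the other service can reach it at a stable DNS name instead of a static IP:

```shell
# Create a private DNS namespace in the VPC (hypothetical VPC ID).
aws servicediscovery create-private-dns-namespace \
    --name local \
    --vpc vpc-0123456789abcdef0

# Create the ECS service registered in that namespace; the registry ARN
# comes from a prior "aws servicediscovery create-service" call.
aws ecs create-service \
    --cluster my-cluster \
    --service-name orders \
    --task-definition orders:1 \
    --desired-count 2 \
    --service-registries registryArn=arn:aws:servicediscovery:region:account:service/srv-example
```

The other microservice can then call `http://orders.local:8080` and Cloud Map keeps the DNS records in sync as tasks come and go, which replaces the static-IP wiring from the docker-compose setup.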