I'm new to Kubernetes. Recently, I successfully managed a Kubernetes cluster on a server with internet access. But when I moved to an isolated environment (an offline server), my whole environment ran well except when I deploy some images. The only difference is the internet connection.
First of all, I want to deploy the Kubernetes dashboard to make it easier to maintain Kubernetes. Can I deploy the Kubernetes dashboard in offline mode?
Thanks for your help :).
The dashboard only needs to be able to talk to the Kubernetes API. It doesn't have an "online" or "offline" mode. As with all air-gapped networks, you would need a local image proxy or similar to transfer the container image to your local network. How you implement that is up to you and well out of the scope of the dashboard.
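For example, a minimal sketch of moving the dashboard image across the air gap (the image tag and registry.local:5000 are assumptions; use the tag pinned in your dashboard manifest and your own local registry):

    # On a machine with internet access: pull and export the dashboard image
    docker pull kubernetesui/dashboard:v2.7.0
    docker save kubernetesui/dashboard:v2.7.0 -o dashboard.tar

    # Transfer dashboard.tar to the offline network, then load it and
    # push it to the local registry your nodes can reach
    docker load -i dashboard.tar
    docker tag kubernetesui/dashboard:v2.7.0 registry.local:5000/kubernetesui/dashboard:v2.7.0
    docker push registry.local:5000/kubernetesui/dashboard:v2.7.0

Then edit the image: field in the dashboard Deployment manifest to point at your local registry before you kubectl apply it.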
My application has a backend and a frontend. The frontend is currently hosted in a Google Cloud Storage bucket, and I am migrating the backend from Compute Engine VMs to Kubernetes Engine Autopilot.
When migrating, does it make more sense to move everything to Kubernetes Engine, or would I be better off keeping the frontend in the bucket? The backend and frontend are different projects in different git repositories.
I am asking because I saw that it is possible to manage Kubernetes services' exposure, even at the level of URL maps and load balancers, so I thought of perhaps entrusting the hosting of all my projects (backend and frontend) to Kubernetes, since I know that Kubernetes is a very complete and powerful solution.
There is no problem with keeping your frontend on Cloud Storage (or elsewhere) and having your backend on Kubernetes (GKE).
It's not a "perfect pattern" because you can't handle and deploy all the parts of your application with Kubernetes alone, and you don't have end-to-end control-plane management.
On one side, you deploy your frontend and configure your load balancer; on the other, you deploy your backend on Kubernetes with YAML.
In addition, your application is not portable to other Kubernetes clusters (because it's not a full Kubernetes deployment but a hybrid between Kubernetes and Google Cloud, so you are partly tied to Google Cloud). But if portability is not a requirement, that's fine.
In the end, if you expose your app behind a load balancer with the frontend on Cloud Storage and the backend on GKE, the user won't see any difference. If one day you want to package your frontend in a container and deploy it on GKE, keep the same load balancer (or at least the same domain name) and your users won't notice the change!
No worries for now, you can keep going! (And it's cheaper for now: you don't pay for processing to serve static resources from Cloud Storage.)
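If you do want a single entry point in front of both, here is a rough sketch with the gcloud CLI (frontend-bucket, web-map, and api-backend are placeholder names; api-backend is assumed to be an existing backend service already wired to your GKE workloads):

    # Serve static files from the bucket by default
    gcloud compute backend-buckets create frontend-bucket \
        --gcs-bucket-name=my-frontend-bucket --enable-cdn

    gcloud compute url-maps create web-map \
        --default-backend-bucket=frontend-bucket

    # Route API calls to the GKE backend service instead
    gcloud compute url-maps add-path-matcher web-map \
        --path-matcher-name=api-paths \
        --new-hosts="*" \
        --default-backend-bucket=frontend-bucket \
        --backend-service-path-rules="/api/*=api-backend"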
Sorry to bother you, but I am having a serious issue with my online DevOps learning.
I am taking a DevOps course and we are using Google Cloud Platform as our cloud. When I create my cluster with gcloud container clusters create xxx and then run the describe command, gcloud container clusters describe xxx, it works, but I get no information regarding the login and password for Kubernetes.
That is one of the problems.
After creating the cluster, I got no Kubernetes dashboard link from the command kubectl cluster-info. Normally I should have a Kubernetes dashboard to manage my app. Instead of the Kubernetes dashboard, there is something called Kubernetes system metrics.
Can somebody help me fix this problem, ideally someone who is used to practicing on GCP?
Best regards
Can you please go through the Google Cloud Kubernetes dashboards docs [1]?
I'm able to see the Kubernetes dashboard in my console, so I don't know why you are not able to see it. I also checked whether there is any service outage for Kubernetes on the Google Cloud Status Dashboard [2], but it's working fine. So kindly go through the Kubernetes docs; from them you will get a better understanding of working with Kubernetes on GCP.
If you're still facing any issue or abnormal behavior, please go to the public issue tracker [3] or support in the GCP console and raise a ticket.
[1]. https://cloud.google.com/kubernetes-engine/docs/concepts/dashboards
[2]. https://status.cloud.google.com/
[3]. https://cloud.google.com/support/docs/issue-trackers#trackers-list
When you visit the GCP dashboard docs, you should see a red warning at the top of the page, saying:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. As an alternative, use the Cloud Console dashboards described in this guide.
Below that, you read:
Starting with GKE v1.15, you will no longer be able to enable the Kubernetes Dashboard by using the add-on API. You will still be able to install Kubernetes Dashboard manually by following the instructions in the project's repository. For clusters in which you have already deployed the add-on, it will continue to function but you will need to manually apply any updates and security patches that are released.
To deploy it, follow the instructions in the k8s dashboard GitHub repo.
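In practice the manual install boils down to applying the recommended manifest and reaching the UI through kubectl proxy (v2.0.0 below is just an example tag; pick the release matching your cluster version):

    # Deploy the dashboard from the project's release manifest
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

    # Open a local proxy to the API server, then browse to:
    # http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
    kubectl proxy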
Our company uses Kubernetes in all our environments, as well as on our local MacBooks using minikube.
We have many microservices, and most of them run on the JVM, which requires a large amount of memory. We have started facing an issue where we cannot run our stack on minikube because the local machine runs out of memory.
We thought about multiple solutions:
The first was to create a k8s cloud development environment: when a developer is working on a single microservice on his local MacBook, he redirects the outbound traffic to the cloud instead of the local minikube. But this solution creates new problems:
How would a pod inside the cloud dev environment send data back to the local developer machine? It's not just a single request/response scenario.
We have many developers, and they can overlap each other with different versions of each service they need deployed in the cloud. (We could give each developer a separate namespace, but we would need a huge cluster to support that.)
The second solution was to use a tool like skaffold or draft to deploy our current code into the cloud development environment. That would solve issue #1, but again we see problems:
Slow development cycle: building a Java image, pushing it to the remote cloud, and waiting for it to initialize takes too much time for a developer to work effectively.
And we still face issue #2.
Another thought was: Kubernetes supports multiple nodes, so why not just add another node, a remote node that sits in the cloud, to our local minikube? The main issue is that minikube is a single-node solution. Also, we didn't find any resources on this on the web.
The last thought was to connect the minikube Docker daemon to a remote machine, so we would use minikube on the local machine but Docker would run the containers on a remote cloud server. But no luck so far: minikube crashes when we attempt this manipulation (sketched below), and we didn't find any resources on this on the web either.
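For reference, the manipulation we attempted looks roughly like this (remote-host, the port, and the cert path are placeholders):

    # Point the local Docker client at a remote daemon over TLS
    # (2376 is Docker's default TLS port)
    export DOCKER_HOST=tcp://remote-host:2376
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=$HOME/.docker/remote-certs

    # The Docker CLI now talks to the remote daemon...
    docker ps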
Any thoughts on how to solve our issue? Thank you!
Can anyone give an example of using kiam on Kubernetes to manage service-level access control to AWS resources?
According to the docs:
The server is the only process that needs to call sts:AssumeRole and can be placed on an isolated set of EC2 instances that don't run other user workloads.
I would like to know how to run the server part of it away from the nodes that host your services.
Answer: The kiam architecture is well explained here:
https://www.bluematador.com/blog/iam-access-in-kubernetes-kube2iam-vs-kiam
Basically, you want to use the master nodes in your cluster, with IAM::STS permissions on them, to install the server portion of kiam, and then let your worker nodes connect to the master nodes to retrieve credentials.
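Once the server runs on the masters and the agent runs on the workers, access control itself is annotation-driven. A minimal sketch (my-app, my-pod, and my-role are placeholder names; in practice you'd set the role annotation in the pod template of your Deployment rather than on a live pod):

    # Allow pods in the namespace to assume roles matching this regex
    kubectl annotate namespace my-app iam.amazonaws.com/permitted="my-role.*"

    # Ask kiam to assume a specific role for a pod
    kubectl annotate pod my-pod --namespace my-app iam.amazonaws.com/role="my-role"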
DISCLAIMER: I did some digging on kube2iam and kiam without going all the way to taking them to a test bench, and I wasn't happy with what I found. It turns out we don't need them anymore starting with Kubernetes 1.13 on EKS, that is, as of September 4th, since AWS has added native support for pods to access IAM STS.
https://docs.aws.amazon.com/en_pv/eks/latest/userguide/iam-roles-for-service-accounts.html
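With IAM roles for service accounts, the wiring is roughly as follows (my-cluster, my-app, my-service-account, and the policy ARN are placeholders; eksctl sets up the OIDC provider and the role binding for you):

    # One-time: associate an OIDC provider with the cluster
    eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve

    # Create a Kubernetes service account bound to an IAM role
    eksctl create iamserviceaccount \
        --cluster my-cluster \
        --namespace my-app \
        --name my-service-account \
        --attach-policy-arn arn:aws:iam::123456789012:policy/my-policy \
        --approve

Pods that use that service account then receive credentials for the role with no kiam or kube2iam involved.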
We have a 5-node Azure Service Fabric Cluster as our main Production microservices hub. Up until now, for testing purposes, we've just been pushing out separate versions of our applications (the production application with ".Test" appended to the name) to that production SFC.
We're looking for a better approach, namely a separate test Service Fabric cluster. But the issue comes down to cost: the smallest SFC you can create in Azure is 3 nodes. Furthermore, you can't shut down an SFC when it's not being used, which we would also need to do to save on costs.
So now I'm looking at just spinning up a plain Windows VM in Azure and installing the local Service Fabric cluster app (which allows a one-node setup). Is it possible to do this and still communicate with the cluster from outside the VM?
What you are trying to accomplish is to set up a standalone cluster. The steps are documented in the standalone cluster docs.
Yes, you can access the cluster from outside the VM. In simple terms, enable access to the network and open the firewall ports.
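For example, with the Azure CLI and the default Service Fabric ports (19000 for client connections, 19080 for Service Fabric Explorer; the VM and resource group names are placeholders):

    # Open the default Service Fabric ports on the VM's network security group
    az vm open-port --resource-group my-rg --name my-sf-vm --port 19000 --priority 900
    az vm open-port --resource-group my-rg --name my-sf-vm --port 19080 --priority 901

    # Also allow the same ports through Windows Firewall inside the VM,
    # plus any application ports your services listen on.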
Technically both deployments (the standalone guide and the dev cluster) are very similar; the main difference is that the standalone guide gives you better control over the templates, whereas with the development setup you don't have many options and the whole process is automated.
PS: I would highly recommend having a UAT/staging cluster with the exact same specs as the production version; the approach you used could be a good idea for a staging environment. Environments whose specs differ from production increase the risk of issues, mainly related to configuration and concurrency.