I have been setting up the ELK Stack (Elasticsearch, Logstash, Kibana) with Docker and have deployed the containers to my VPS. It works well, but I'm still concerned about performance when all three components run on the same VPS. Are there any recommendations for deploying the ELK Stack?
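One common mitigation when all three components share one host is to cap each container's memory and the JVM heaps explicitly, so a single component cannot starve the others. A minimal sketch (the image tags, heap sizes, and limits below are illustrative assumptions, not tuned recommendations):

```shell
# Write a docker-compose.yml that caps memory for each ELK component.
# Image tags and limits are illustrative assumptions.
cat > docker-compose.yml <<'EOF'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms1g -Xmx1g   # keep heap well under the container limit
    mem_limit: 2g
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    environment:
      - LS_JAVA_OPTS=-Xms512m -Xmx512m
    mem_limit: 1g
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    mem_limit: 1g
EOF
```

With explicit limits like these you can see at a glance whether the VPS has enough RAM for all three; Elasticsearch in particular should get roughly half of its container limit as JVM heap.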
Thanks for any help!
I currently have an air-gapped, Docker-installed Rancher v2.4.8 instance hosting a few clusters created in the Rancher UI.
I created a new air-gapped Rancher v2.6.6 instance on an RKE Kubernetes cluster, and I need to migrate those clusters.
The question is: after I upgrade the Docker instance, is there any way to migrate them? I saw an article from 2021 stating it's not possible, but maybe there's a way now, or a solid workaround?
Thanks everyone.
I searched online for solutions, but I am planning this migration ahead of time to avoid problems.
Let's say I have a Flask app, a PostgreSQL database, and a Redis instance. What is the best-practice way to develop these apps locally so that they can later be deployed to Kubernetes?
I have tried developing in minikube with ksync, but I had difficulty getting detailed debug log information.
Any ideas?
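As a point of comparison, a plain docker-compose setup is often the simplest local workflow before moving to Kubernetes, since the same images can be deployed to the cluster later. A minimal sketch (the image tags, ports, and the `./app` build context are assumptions):

```shell
# Minimal local dev stack: a Flask app plus Postgres and Redis.
# Image tags, ports, and the ./app build context are illustrative.
cat > docker-compose.yml <<'EOF'
services:
  web:
    build: ./app              # assumes a Dockerfile for the Flask app here
    ports:
      - "5000:5000"
    environment:
      - FLASK_DEBUG=1         # full debug logs locally
      - DATABASE_URL=postgresql://dev:dev@db:5432/dev
      - REDIS_URL=redis://cache:6379/0
    volumes:
      - ./app:/app            # live code reload without rebuilding
    depends_on: [db, cache]
  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=dev
      - POSTGRES_DB=dev
  cache:
    image: redis:7
EOF
```

Detailed debug output is then one `docker compose logs -f web` away, which sidesteps the log-visibility problem entirely during local development.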
What we do with our systems is develop and test them locally. I am not very knowledgeable about Flask and ksync, but, for example, if you are using the Lagom microservices framework in Java, you run your app locally using the sbt shell, where you can view all your logs. We then automate the deployment using Lightbend Orchestration.
When you then decide to test the app on Kubernetes, you can choose to use minikube, but you have to configure logging properly. You can set up centralised logging for Kubernetes using the EFK stack (Elasticsearch, Fluentd, Kibana). It will collect the logs from the various components of your app and store them in Elasticsearch, and you can then view them in the Kibana dashboard. You can do a lot with the dashboard: view logs for a given period, or search logs by Kubernetes namespace or by container.
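Once EFK is in place, the logs can also be queried directly, not only through Kibana. A sketch of searching Elasticsearch for one namespace's recent log lines (the `logstash-*` index pattern and the `kubernetes.namespace_name` field are Fluentd defaults and may differ in your setup):

```shell
# Build a query for the 20 newest log lines from the "staging" namespace.
# "logstash-*" and "kubernetes.namespace_name" are Fluentd's default
# index pattern and metadata field; adjust them to your configuration.
cat > query.json <<'EOF'
{
  "query": {
    "term": { "kubernetes.namespace_name": "staging" }
  },
  "sort": [ { "@timestamp": "desc" } ],
  "size": 20
}
EOF
python3 -m json.tool query.json > /dev/null   # sanity-check the payload
# Then, against your Elasticsearch endpoint:
#   curl -s -H 'Content-Type: application/json' \
#        -d @query.json 'http://localhost:9200/logstash-*/_search'
```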
There are multiple solutions for this (often discussed under the umbrella of GitOps with Kubernetes):
Skaffold
Draft
Flux - IMO the most mature.
Ksonnet
GitKube
Argo - A bit more of a workflow engine.
Metaparticle - Deploy with actual code.
I think the solution is to use Skaffold.
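For completeness, a minimal Skaffold setup looks roughly like this (the `apiVersion`, image name, and manifest path are assumptions; `skaffold init` can generate a starting config for you):

```shell
# Minimal skaffold.yaml: build one image and deploy the k8s manifests.
# apiVersion, image name, and manifest path are illustrative assumptions.
cat > skaffold.yaml <<'EOF'
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-flask-app        # hypothetical image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
EOF
# `skaffold dev` then rebuilds and redeploys on every file change,
# streaming container logs to your terminal.
```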
The default stack in PCF Dev is cflinuxfs2 (based on Ubuntu 14.04), but I would like to use a custom stack (specifically cflinuxfs3, based on Ubuntu 18.04).
I have built it successfully (and also created a corresponding BOSH release), but I am unable to register it in my PCF Dev environment. The problem seems to be that PCF Dev does not support the BOSH Director.
I have also tried to use the CF stacks API and partially succeeded in registering a new stack (name and description), but I was not able to upload the actual rootfs tarball.
Could anyone tell me how to properly upload a custom stack to PCF Dev?
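For reference, the metadata-registration half can be done with `cf curl` against the v2 stacks endpoint. A sketch (the field names are assumed from the v2 API; this registers the name and description only and does not upload the rootfs tarball, which is the missing piece in PCF Dev):

```shell
# Hypothetical payload for registering a stack via the CF v2 API.
# Field names are assumptions; this registers metadata only and does
# not upload the rootfs tarball.
cat > stack.json <<'EOF'
{
  "name": "cflinuxfs3",
  "description": "Ubuntu 18.04-based rootfs"
}
EOF
python3 -m json.tool stack.json > /dev/null   # sanity-check the payload
# Then, with a targeted and logged-in cf CLI:
#   cf curl /v2/stacks -X POST -d @stack.json
```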
In my experience, PCF Dev is mainly suitable for application development. If you want to create a BOSH release, you should be using BOSH Lite.
I highly recommend a BOSH Lite setup with Cloud Foundry, where you can adjust anything you want. I have been able to do that by following the BOSH Lite and Cloud Foundry instructions, and I recommend https://github.com/starkandwayne/bucc to help you get started with BOSH Lite, after which you can add CF on top.
I am trying to finally choose between Spring Cloud Netflix, Kubernetes, and Swarm for building our microservices environment. They are all very cool, which makes the choice very hard.
I'll briefly describe the kind of problems I want to solve.
I couldn't find a good way to design an API gateway (not a simple load balancer) with Kubernetes or Swarm, which is why I want to use Zuul. On the other hand, the API gateway must use service discovery, which in the case of Kubernetes or Swarm is embedded inside the orchestrator. With Kubernetes I can use its Spring Cloud integration, but that way I would have both server-side discovery (in Kubernetes) and client-side discovery, which I think is overkill.
I am wondering whether anyone has experience with these and has any suggestions.
Thanks.
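One way to avoid the double-discovery overlap described above is to let Zuul route to Kubernetes Service DNS names directly, so only the orchestrator performs discovery and no client-side registry is needed. A sketch (the service names, namespace, and ports are assumptions):

```shell
# Zuul routes pointing at Kubernetes Service DNS names instead of a
# client-side registry like Eureka. Names, namespace, ports are illustrative.
cat > application.yml <<'EOF'
zuul:
  routes:
    orders:
      path: /orders/**
      url: http://orders.default.svc.cluster.local:8080
    users:
      path: /users/**
      url: http://users.default.svc.cluster.local:8080
ribbon:
  eureka:
    enabled: false   # disable client-side discovery; k8s handles it
EOF
```

With `url`-based routes, Kubernetes' built-in Service load balancing does the work that Eureka and Ribbon would otherwise duplicate.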
Kubernetes and Docker Swarm are container orchestration tools.
Spring Cloud is a collection of tools to build microservices/streaming architectures.
There is a bit of overlap, such as service discovery, gateway, or configuration services, but you could use Spring Cloud without containers and deploy the jars yourself, without needing Kubernetes or Swarm.
So you'll have to choose between Kubernetes and Swarm for the orchestration of your containers, if you use containers at all.
Comparison: https://dzone.com/articles/deploying-microservices-spring-cloud-vs-kubernetes
I've read the Bluemix Local documentation: https://console.ng.bluemix.net/docs/local/index.html#local.
It's not clear whether Bluemix Local supports Docker containers or not.
Thanks in advance!
Bluemix Local does not support Docker containers at this time. For questions like this about Bluemix platform capabilities, the best place to ask is IBM developerWorks Answers; Stack Overflow is intended for technical programming questions.