I am building an automated pipeline that's supposed to check out some code from a repository, build a Docker container, and launch it as a service in ECS, entirely programmatically. I got as far as needing to provision a load balancer for a service, and here I'm stuck: I couldn't find an API or any documentation on how to create load balancers programmatically.
Am I trying to do something that isn't supported? For now the only way I see is to manually configure an ALB for every new service, but that defeats the whole point.
CLI, or Python, or Java...
Load balancers can absolutely be created programmatically: the Elastic Load Balancing v2 API is exposed in the AWS CLI (aws elbv2 create-load-balancer) and in all of the SDKs. You can also use CloudFormation within AWS, or a third-party tool like HashiCorp's Terraform.
The CLI and the SDKs are all thin wrappers around the same underlying API, which is cool because it means they are organized largely the same way regardless of language. If you ever can't find something in the SDK, start by finding it in the CLI.
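For example, with the Python SDK (boto3) the whole chain can be scripted end to end. A minimal sketch, assuming placeholder subnet/security-group/VPC IDs and an existing cluster and task definition (swap in your own):

```python
import boto3

elbv2 = boto3.client("elbv2")
ecs = boto3.client("ecs")

# 1. Create the ALB itself (subnets and security group are placeholders).
lb = elbv2.create_load_balancer(
    Name="my-service-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-00001111"],
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# 2. Create a target group for the ECS tasks to register into.
tg = elbv2.create_target_group(
    Name="my-service-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-00001111",
    TargetType="ip",  # "ip" for awsvpc/Fargate tasks, "instance" for the EC2 launch type
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# 3. Add a listener that forwards incoming traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# 4. Create the ECS service wired to that target group.
# (With TargetType="ip" the service also needs an awsvpc
# networkConfiguration, omitted here for brevity.)
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=2,
    loadBalancers=[{
        "targetGroupArn": tg_arn,
        "containerName": "web",  # must match the container name in the task definition
        "containerPort": 80,
    }],
)
```

The equivalent CLI commands are aws elbv2 create-load-balancer, create-target-group, create-listener, and aws ecs create-service.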
I have a single-screen desktop application developed in Java. It is a file-conversion tool: given a file in .abc format, it converts it to .xyz format. The tool works offline and acts as a translator converting a file from one form to another.
Now, to improve the infrastructure, there are discussions about moving the tool to Kubernetes or providing REST services for the file conversion. I have no experience with containers or REST APIs, as I am a front-end developer.
More about the tool: as I said earlier, it is a single-screen application, very light, doing a very minimal job, used by approximately 200 users in total. Given this shape and size of the application, which approach would be best to go with, and why? Basically, I am looking for a short evaluation of Kubernetes vs. a REST service and an architecture recommendation with reasons.
Currently your application is a standalone application, which is quite an old-fashioned model.
Here are the high-level changes needed when your file-conversion logic is exposed over a REST API in the Kubernetes world.
You can go through the following areas one by one to get a better design-level understanding:
Your Java code becomes the backend, and the public methods that currently take input from UI actions are exposed over a REST API (a sketch follows this list).
There are multiple REST frameworks for Java (Jersey, RESTEasy, etc.; the Spring/Spring Boot framework also provides REST support); you can go through any of them to get an understanding.
Once your backend is exposed over a REST API, it needs to be containerized, meaning your backend will run inside a container. Go through the Docker documentation and build a sample containerized app; there is a huge amount of material in this area.
Once your backend is containerized, it can be deployed to a Kubernetes cluster.
Kubernetes is basically a container orchestration tool, and it's quite a broad topic; you can go through its official documentation for a basic understanding.
The UI will still run on the client machine, just as you launch it from your desktop today, but it will communicate with the Kubernetes cluster where your application now runs, packaged in a container.
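To make the first point concrete, here is a minimal sketch of what exposing the conversion over REST looks like. It uses Python/Flask for brevity (your tool is Java, where a Spring Boot controller would have the same shape), and convert_file is a hypothetical stand-in for your existing conversion logic:

```python
import io

from flask import Flask, request, send_file

app = Flask(__name__)

def convert_file(abc_bytes: bytes) -> bytes:
    """Hypothetical stand-in for the existing .abc -> .xyz conversion logic."""
    return abc_bytes

@app.route("/convert", methods=["POST"])
def convert():
    # The UI uploads the .abc file instead of reading it from the local disk.
    uploaded = request.files["file"]
    xyz_bytes = convert_file(uploaded.read())
    return send_file(
        io.BytesIO(xyz_bytes),
        download_name="converted.xyz",  # Flask >= 2.0; older versions use attachment_filename
        as_attachment=True,
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The containerization and Kubernetes steps above then wrap exactly this server process.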
References:
Docker: https://docs.docker.com/
Kubernetes: https://kubernetes.io/
As I dive into the world of Cloud Composer, Airflow, Google Kubernetes Engine, and Kubernetes I've not yet found a good answer to what exactly makes Cloud Composer better than Helm and GKE.
Here are some things I've found that could be unique to Composer but mostly seem like they could be handled by GKE.
On their homepage:
End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline.
On the features page:
Identity-Aware Proxy protects the interface
Cloud Composer associates a Cloud Storage bucket with the environment. The associated bucket stores the DAGs, logs, custom plugins, and data for the environment.
The downsides of Composer I've seen include:
It takes many hours to spin up a new instance
It doesn't support Kubernetes Executor
It is risky to change the underlying GKE config because it could be changed back by a composer update
Errors often happen during auto-scaling, but they are documented as known issues
Upgrading environments is still in beta
To be clear, I'm not saying Cloud Composer is bad. I'm just having trouble seeing why people like it. When I've asked folks why it is better than Helm + GKE, they haven't had any compelling answers, even though they can tell many stories of Composer being unpredictable and having lots of issues.
Are you comparing the same things?
On one side, with GKE, you have a container orchestrator: declare what you want, and it deploys and maintains the stability of the cluster according to the declared configuration. That configuration can be packaged with Helm to make it easier to write. And because you deploy containers, you can use whatever language you want in your services.
On the other side, you have a workflow manager with a scheduler, retry policies, parallel tasks, and context forwarding. You write DAGs in Python (only!) and you have operators to interact with external products/services. It's mainly designed for data processing and is used a lot by data science and data engineering teams.
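For a feel of what that looks like, a minimal sketch of an Airflow DAG; the task commands are placeholders, and the BashOperator import path differs slightly between Airflow 1.10 and 2.x:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator  # airflow.operators.bash_operator in 1.10

with DAG(
    dag_id="nightly_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",  # the scheduler side
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},  # retry policy
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")

    # Dependency graph; fanning out for parallel tasks works the same way.
    extract >> transform >> load
```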
Note: Cloud Composer is deployed on top of GKE (scheduler and workers), Redis, App Engine, and Cloud SQL.
You are comparing two different worlds: the Ops world (GKE/Helm) and the App/Data world (Composer/Airflow). Have a look at this new video
Update 1:
My bad, I misunderstood! Anyway, personally I don't want to manage things myself: a cluster, Kubernetes updates, VM patching, replicas, snapshots, backup/restore, ...
If someone can do this for me, I prefer that, and managed services are perfect for me!
Do you ask yourself this question about Cloud SQL versus a database you manage yourself on a Compute Engine instance? If not (because Cloud SQL solves a lot of boring issues), my opinion is the same for Composer.
But it's only an opinion; I haven't tested both and compared performance, cost, and ease of use.
For local development, I was hoping to set up a localhost profile for the AWS Toolkit that I could then use in Eclipse to interact with resources on LocalStack, but I'm at a loss as to how to set this up. There is a local (localhost) option in the AWS Toolkit, but I don't see how it would know which endpoints to use for the various services in LocalStack.
It seems like a relatively logical thing to want to do, or do I have to do all my interaction with the aws (or awslocal) cli?
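I can't say whether the Toolkit's local option can be pointed at LocalStack, but at the SDK level the endpoint is simply an override per client, so you are not forced into CLI-only interaction. A boto3 sketch, assuming LocalStack's default edge port 4566 and its usual acceptance of dummy credentials:

```python
import boto3

# Point the client at LocalStack instead of real AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack edge port (assumes default config)
    region_name="us-east-1",
    aws_access_key_id="test",   # LocalStack doesn't validate credentials
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="my-local-bucket")
print(s3.list_buckets()["Buckets"])
```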
Let's say I have a Flask app, a PostgreSQL database, and a Redis instance. What is the best-practice way to develop those apps locally so that they can later be deployed to Kubernetes?
I have tried developing in minikube with ksync, but I have difficulty getting detailed debug log information.
Any ideas?
What we do with our systems is develop and test them locally. I am not very knowledgeable about Flask and ksync, but say, for example, you are using the Lagom microservices framework in Java: you run your app locally using the sbt shell, where you can view all your logs. We then automate the deployment using Lightbend Orchestration.
When you then decide to test the app on Kubernetes, you can choose to use minikube, but you have to configure logging properly. You can configure centralised logging for Kubernetes using the EFK stack (Elasticsearch, Fluentd, Kibana). This will collect the logs from the various components of your app and store them in Elasticsearch. You can then view these logs in the Kibana dashboard. You can do a lot with the dashboard: view logs for a given period, or search logs by Kubernetes namespace or by container.
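Independent of the logging stack, one habit that makes the local-then-Kubernetes workflow easier (a general sketch, not ksync-specific; the environment-variable names here are illustrative) is having the Flask app read its PostgreSQL and Redis locations from environment variables, so the identical code runs against local containers during development and against cluster Services once deployed:

```python
import os

import psycopg2
import redis
from flask import Flask

app = Flask(__name__)

# Locally these default to localhost; in Kubernetes the Deployment sets
# them to the Service names (e.g. "postgres", "redis").
PG_HOST = os.environ.get("POSTGRES_HOST", "localhost")
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")

@app.route("/health")
def health():
    # Touch both backends so a failure surfaces in the logs with a stack trace.
    conn = psycopg2.connect(host=PG_HOST, dbname="app", user="app",
                            password=os.environ.get("POSTGRES_PASSWORD", "app"))
    conn.close()
    redis.Redis(host=REDIS_HOST).ping()
    return "ok"

if __name__ == "__main__":
    app.run(debug=True)  # debug=True gives the detailed local logs and tracebacks
```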
There are multiple solutions for this (aka GitOps with Kubernetes):
Skaffold
Draft
Flux - IMO the most mature.
Ksonnet
GitKube
Argo - A bit more of a workflow engine.
Metaparticle - Deploy with actual code.
I think the solution is to use Skaffold.
I'm looking to build an API on an application that is going to run in its own Docker container. It needs to work with some applications via its REST APIs. I'm new to development and don't understand the process very well. Can you share the broad steps necessary to build and release the APIs so that my application runs safely inside Docker while whatever external communication needs to happen still works out well?
For context: I'm going to be working on a Google Compute Engine VM instance, and the application I'm building is a Hyperledger Fabric program written in Go.
Links to reference material and code would also be appreciated.
REST API implementation is very easy in Go; you can use the built-in net/http package. Here's a tutorial that will help you understand its usage: https://tutorialedge.net/golang/creating-restful-api-with-golang/
Note: if you are planning to develop a production server, the default HTTP client is not recommended; it can bring the server down under high-frequency calls. In that case, you have to use a custom HTTP client, as described here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
For learning Docker I would recommend the Docker docs; they're very good and cover a lot of ground. Docker Swarm and orchestration are useful things to learn, but most people aren't using Docker Swarm anymore and use things like Kubernetes instead. Same principles, different tech. I would definitely go through this website: https://docs.docker.com/ and try things out on your own computer. Then just practice by looking at other people's Dockerfiles and building your own. A good understanding of Linux will definitely help with installing packages and so on.
I haven't used Go myself, but I suspect it shouldn't be too hard to deploy it in a Docker container.
The last step of production deployment will be similar whether or not you're using Docker. The VM will need a web server like Apache or nginx to expose the ports you wish to make public; you then run the Docker container or the Go server behind it, and you'll have your system!
Hope this helps!