How do you deploy GeoIP on ECS Fargate?

How do I productionise https://hub.docker.com/r/fiorix/freegeoip so that it is launched as a Fargate task, and how do I take care of the geoipupdate functionality so that the GeoLite2-City.mmdb is kept up to date in the task?
I have the required environment details (GEOIPUPDATE_ACCOUNT_ID, GEOIPUPDATE_LICENSE_KEY and GEOIPUPDATE_EDITION_IDS), but I could not understand the deployment flow, as there are two separate Dockerfiles/images, one for geoip and one for geoipupdate.
Has anyone deployed this on Fargate? If so, could you please list the high-level steps? I have already researched whether something like this has been deployed on ECS, but I can only find examples for Lambda and EC2.
Thanks
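One common pattern for this kind of setup on Fargate (offered here as a sketch rather than a verified recipe for these specific images) is a single task definition with two containers sharing a task-scoped volume: the geoipupdate container keeps GeoLite2-City.mmdb in the volume up to date, and the freegeoip container reads the database from the same path (check the freegeoip image's docs for the flag or environment variable that points it at a local database file). A rough boto3 sketch, with placeholder image names, role ARN, sizes and paths:

```python
# Rough sketch only: one Fargate task, two containers, one shared volume.
# Image names, role ARN, sizes and paths are placeholders to adapt.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="freegeoip",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    volumes=[{"name": "geoipdb"}],  # task-scoped volume both containers mount
    containerDefinitions=[
        {
            # Keeps GeoLite2-City.mmdb in the shared volume up to date.
            "name": "geoipupdate",
            "image": "maxmindinc/geoipupdate",  # the second image mentioned in the question
            "essential": False,
            "environment": [
                {"name": "GEOIPUPDATE_ACCOUNT_ID", "value": "<account id>"},
                {"name": "GEOIPUPDATE_LICENSE_KEY", "value": "<license key>"},  # better kept in a secret
                {"name": "GEOIPUPDATE_EDITION_IDS", "value": "GeoLite2-City"},
                {"name": "GEOIPUPDATE_FREQUENCY", "value": "24"},  # hours between update runs
            ],
            "mountPoints": [{"sourceVolume": "geoipdb", "containerPath": "/usr/share/GeoIP"}],
        },
        {
            # Serves the API, reading the database from the shared volume.
            "name": "freegeoip",
            "image": "fiorix/freegeoip",
            "essential": True,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "mountPoints": [{"sourceVolume": "geoipdb",
                             "containerPath": "/usr/share/GeoIP",
                             "readOnly": True}],
            # Point freegeoip at /usr/share/GeoIP/GeoLite2-City.mmdb using whatever
            # flag or env var the image documents; verify against the image's README.
        },
    ],
)
```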

Related

Why does ECS integration new-relic task require AmazonEC2ContainerServiceforEC2Role?

We are trying to use the AWS CloudFormation way of installing the ECS integration for our clusters with New Relic, as described in this link.
I observed that this CloudFormation template first creates a few IAM roles for the task that will be executed as a daemon service, and one of the roles attached to the task is AmazonEC2ContainerServiceforEC2Role, which includes permissions to operate on container instances, including deregistering the container instance.
I am interested to understand under what circumstances this daemon task would need to deregister an instance, or for that matter create a cluster or register an instance. The complete list of permissions granted by IAM is below. Can someone please elaborate on why we would need this in the first place?
I tried posting this in the New Relic discussion forums but haven't had any luck yet.
"ec2:DescribeTags", "ecs:CreateCluster", "ecs:DeregisterContainerInstance", "ecs:DiscoverPollEndpoint", "ecs:Poll", "ecs:RegisterContainerInstance", "ecs:StartTelemetrySession", "ecs:UpdateContainerInstancesState", "ecs:Submit*", "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "logs:CreateLogStream", "logs:PutLogEvents"

GCP - spark on GKE vs Dataproc

Our organisation has recently moved its infrastructure from AWS to Google Cloud, and I figured Dataproc clusters are a good solution for running our existing Spark jobs. But when it comes to comparing the pricing, I also realised that I could just fire up a Google Kubernetes Engine cluster and install Spark on it to run Spark applications.
Now my question is: how do running Spark on GKE and using Dataproc compare? Which one would be the better option in terms of autoscaling, pricing and infrastructure? I've read Google's documentation on GKE and Dataproc, but there isn't enough there to be sure about the advantages and disadvantages of one over the other.
Any expert opinion will be extremely helpful.
Thanks in advance.
Spark on Dataproc is proven and in use at many organizations. Although it's not fully managed, you can automate cluster creation and teardown, job submission and so on through the GCP API, but it is still another stack you have to manage.
Spark on GKE is something newer: Spark started adding features to support Kubernetes from 2.4 onwards, and Google updated its Kubernetes support for the preview a couple of days back (Link).
I would just go with Dataproc if I had to run jobs in a production environment today; otherwise you could experiment with Docker yourself and see how it fares, though I think it needs a little more time to become stable. From a purely cost perspective it would be cheaper with Docker, as you can share resources with your other services.
Adding my two cents to the above answer.
I would favor Dataproc, because it is managed and supports Spark out of the box, with no hassles. More importantly, it is cost-optimized: you may not need clusters all the time, and with Dataproc you can use ephemeral clusters.
With GKE, I would need to explicitly discard the cluster and recreate it when necessary, which is additional care that has to be taken.
I could not find any direct service offering from GCP for data lineage. In that case I would probably use Apache Atlas with the Spark-Atlas-Connector on a Spark installation managed by myself, and in that scenario running Spark on GKE, with all the control in my own hands, would make a compelling choice.
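Both answers touch on automating cluster creation and teardown through the GCP API; as a rough illustration of the ephemeral-cluster idea (project, region, bucket, machine types and file names below are placeholders), the google-cloud-dataproc Python client can create a cluster, submit a PySpark job and delete the cluster in one script:

```python
# Rough sketch of an ephemeral Dataproc cluster driven through the API:
# create, submit one PySpark job, tear down. All identifiers are placeholders.
from google.cloud import dataproc_v1

project_id, region, cluster_name = "my-project", "europe-west1", "ephemeral-spark"
endpoint = {"api_endpoint": f"{region}-dataproc.googleapis.com:443"}

clusters = dataproc_v1.ClusterControllerClient(client_options=endpoint)
jobs = dataproc_v1.JobControllerClient(client_options=endpoint)

# 1. Create a small cluster and wait for it to come up.
clusters.create_cluster(request={
    "project_id": project_id, "region": region,
    "cluster": {
        "project_id": project_id, "cluster_name": cluster_name,
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        },
    },
}).result()

# 2. Submit a PySpark job stored in GCS and wait for it to finish.
jobs.submit_job_as_operation(request={
    "project_id": project_id, "region": region,
    "job": {"placement": {"cluster_name": cluster_name},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/my_job.py"}},
}).result()

# 3. Tear the cluster down so you only pay while the job runs.
clusters.delete_cluster(request={
    "project_id": project_id, "region": region, "cluster_name": cluster_name,
}).result()
```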

How to run multiple Kubernetes jobs in sequence?

I would like to run a sequence of Kubernetes jobs one after another. It's okay if they are run on different nodes, but it's important that each one run to completion before the next one starts. Is there anything built into Kubernetes to facilitate this? Other architecture recommendations also welcome!
This requirement to add control flow, even if it's a simple sequential flow, is outside the scope of Kubernetes native entities as far as I know.
There are many workflow engine implementations for Kubernetes, most of them are focusing on solving CI/CD but are generic enough for you to use however you want.
Argo: https://applatix.com/open-source/argo/
Adds a custom resource definition (CRD) to Kubernetes for Workflow entities
Brigade: https://brigade.sh/
Takes a more serverless-like approach and is built on JavaScript, which is very flexible
Codefresh: https://codefresh.io
Has a unique approach where you can use the SaaS to easily get started without complicated installation and maintenance, and you can point Codefresh at your Kubernetes nodes to run the workflow on.
Feel free to Google for "Kubernetes Workflow", and discover the right platform for yourself.
Disclaimer: I work at Codefresh
I would try to use CronJobs and set the concurrency policy to Forbid so that concurrent runs are not allowed.
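For reference, that setting lives in the CronJob spec; with concurrencyPolicy set to "Forbid", the controller skips a scheduled run while the previous one is still active (note this prevents overlap rather than chaining different jobs). A sketch with placeholder names, image and schedule:

```python
# Hypothetical CronJob spec: "Forbid" skips a new run while the previous
# one is still running (prevents overlap; does not sequence distinct jobs).
cronjob = {
    "apiVersion": "batch/v1",  # batch/v1beta1 on clusters older than 1.21
    "kind": "CronJob",
    "metadata": {"name": "nightly-task"},
    "spec": {
        "schedule": "0 2 * * *",
        "concurrencyPolicy": "Forbid",
        "jobTemplate": {"spec": {"template": {"spec": {
            "restartPolicy": "Never",
            "containers": [{"name": "task", "image": "busybox",
                            "command": ["sh", "-c", "echo run"]}],
        }}}},
    },
}
```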
I have worked with IBM TWS (Workload Automation), which is a scheduler similar to cron where you can specify dependencies between jobs.
You can specify that a job runs only after its dependencies have run, using the follows keyword.
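If a full workflow engine is more than you need, a small driver script can enforce the ordering itself: create a Job, poll its status until it succeeds, then create the next one. A rough sketch using the official Kubernetes Python client, with placeholder images and commands:

```python
# Minimal sequential-job driver: submit each Job, wait for completion, then
# move on. Assumes kubectl-style credentials are available locally.
import time
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() if running inside a pod
batch = client.BatchV1Api()

def job_manifest(name, image, command):
    return {
        "apiVersion": "batch/v1", "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 0,
            "template": {"spec": {
                "restartPolicy": "Never",
                "containers": [{"name": name, "image": image, "command": command}],
            }},
        },
    }

def run_and_wait(manifest, namespace="default", poll=10):
    batch.create_namespaced_job(namespace, manifest)
    name = manifest["metadata"]["name"]
    while True:
        status = batch.read_namespaced_job_status(name, namespace).status
        if status.succeeded:
            return
        if status.failed:
            raise RuntimeError(f"job {name} failed, aborting the sequence")
        time.sleep(poll)

# Each step starts only after the previous one succeeded, even if the
# underlying pods land on different nodes.
for step in ["step-one", "step-two", "step-three"]:
    run_and_wait(job_manifest(step, "busybox", ["sh", "-c", f"echo {step}"]))
```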

Hosting Spring Boot / MongoDB Application in Cloud

I want to host my Spring Boot / MongoDB application, developed with Java 8, in the cloud (in Europe and, if possible, in Germany, which is a requirement from the customer).
I did some research and found a lot of possibilities.
The ones that I think fit best are
Microsoft Azure and
AWS
but honestly I don't know how to start. Does anyone know of a good tutorial to get started, e.g. for installing MongoDB and then uploading my jar file?
And then I would start my application with java -jar myApp.jar.
Is there a good how-to link?
If you're open to using Kubernetes, then you could look at the examples at https://github.com/nhatthai/spring-mongodb-minikube or https://github.com/elizabetht/kubernetes-mongo-docker-spring-boot
You could use Azure's AKS, as I'm guessing you wouldn't want to spend much time on cluster management. (AWS's EKS offering is still in preview mode at the moment.) If you did go this route you could test on a local cluster with minikube.
It sounds like you're looking for a cloud provider, but you might instead want to use Kubernetes as a cloud-agnostic orchestrator for your application. (On this you might want to look at Is Kubernetes + Docker + AWS = Azure + Service Fabric?)
This is just a suggestion - you could instead choose to go for something provider-specific, e.g. using Azure's CosmosDB https://github.com/Azure-Samples/azure-cosmos-db-mongodb-spring
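The linked repos ship the Kubernetes setup as YAML manifests; purely to illustrate the shape of what they do (all image names, labels and the Mongo URI below are placeholders), the same two Deployments plus a Service for Mongo could be created with the official Kubernetes Python client:

```python
# Sketch only: mirrors the shape of the manifests in the linked repos.
# Image names, labels and the Mongo URI are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # e.g. pointing at a local minikube cluster
apps, core = client.AppsV1Api(), client.CoreV1Api()

def deployment(name, image, env=None, port=8080):
    return {
        "apiVersion": "apps/v1", "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name, "image": image,
                    "ports": [{"containerPort": port}],
                    "env": [{"name": k, "value": v} for k, v in (env or {}).items()],
                }]},
            },
        },
    }

# MongoDB plus a ClusterIP Service the app can reach at "mongo:27017".
apps.create_namespaced_deployment("default", deployment("mongo", "mongo:4.0", port=27017))
core.create_namespaced_service("default", {
    "apiVersion": "v1", "kind": "Service",
    "metadata": {"name": "mongo"},
    "spec": {"selector": {"app": "mongo"},
             "ports": [{"port": 27017, "targetPort": 27017}]},
})

# The Spring Boot jar, baked into an image; Spring picks up the URI via
# relaxed binding of SPRING_DATA_MONGODB_URI -> spring.data.mongodb.uri.
apps.create_namespaced_deployment("default", deployment(
    "myapp", "myregistry/myapp:latest",
    env={"SPRING_DATA_MONGODB_URI": "mongodb://mongo:27017/mydb"},
))
```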

Best practice for getting RDS password to docker container on ECS

I am using Postgres Amazon RDS and Amazon ECS for running my docker containers.
The question is. What is the best practice for getting the username and password for the RDS database into the docker container running on ECS?
I see a few options:
Build the credentials into the Docker image. I don't like this, since everyone with access to the image can then get the password.
Put the credentials in the user data of the launch configuration used by the Auto Scaling group for ECS. With this approach, all Docker containers running on my ECS cluster have access to the credentials. I don't really like that either: if an attacker finds a security hole in any of my services (even services that do not use the database), they will be able to get the credentials for the database.
Put the credentials in an S3 bucket and limit access to that bucket with an IAM role that the ECS instances have. Same drawbacks as putting them in the user data.
Put the credentials in the ECS task definition. I don't see any drawbacks here.
What are your thoughts on the best way to do this? Did I miss any options?
regards,
Tobias
Building it into the container is never recommended. It makes the image hard to distribute and change.
Putting it onto the ECS instances does not help your containers use it. They are isolated, and you'd end up with the credentials on all instances instead of only the ones running the containers that need them.
Putting them into S3 means you'll have to write that retrieval functionality into your container, and it's another place to keep configuration.
Putting them into your task definition is the recommended way. You can use the environment section for this. It's flexible, and it's also how PaaS offerings like Heroku and Elastic Beanstalk pass DB connection strings to Ruby on Rails and other services. A final benefit is that it makes it easy to run your containers against different databases (dev, test, prod) without rebuilding containers or building odd workarounds.
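For concreteness, the "environment section" referred to above is the environment list in a container definition. A minimal fragment of a task definition, expressed as a boto3-style Python dict with made-up names and values:

```python
# Fragment of a task definition's containerDefinitions entry, showing the
# "environment" section. Names, image and values are placeholders.
container_definition = {
    "name": "my-api",
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-api:latest",
    "essential": True,
    "environment": [
        {"name": "DB_HOST", "value": "mydb.xxxxxx.eu-west-1.rds.amazonaws.com"},
        {"name": "DB_USER", "value": "app_user"},
        {"name": "DB_PASSWORD", "value": "change-me"},  # visible to anyone who can read the task definition
    ],
}
```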
The accepted answer recommends configuring environment variables in the task definition. This configuration is buried deep in the ECS web console. You have to:
Navigate to Task Definitions
Select the correct task and revision
Choose to create a new revision (not allowed to edit existing)
Scroll down to the container section and select the correct container
Scroll down to the Env Variables section
Add your configuration
Save the configuration and task revision
Choose to update your service with the new task revision
This tutorial has screenshots that illustrate where to go.
Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.
For what it's worth, while putting credentials into environment variables in your task definition is certainly convenient, it's generally regarded as not particularly secure: other processes can access your environment variables.
I'm not saying you can't do it this way, and I'm sure there are lots of people doing exactly this, but I wouldn't call it "best practice" either. Using AWS Secrets Manager or SSM Parameter Store is definitely more secure, although getting your credentials out of there for use has its own challenges, and on some platforms those challenges may make configuring your database connection much harder.
Still, it seems like a good idea for anyone running across this question to at least be aware that using the task definition for secrets is, shall we say, frowned upon.
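For reference, ECS can do the fetching for you: instead of "environment", a container definition can use the "secrets" field with a valueFrom pointing at a Secrets Manager secret or SSM parameter, and the ECS agent injects the value as an environment variable at startup (the task execution role needs permission to read it). A minimal fragment, with placeholder names and ARN:

```python
# Fragment of a containerDefinitions entry using "secrets" instead of plain
# environment variables. The ARN and names are placeholders; the task
# execution role must be allowed to read the secret (or SSM parameter).
container_definition = {
    "name": "my-api",
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-api:latest",
    "essential": True,
    "secrets": [
        {
            "name": "DB_PASSWORD",  # exposed to the container as an env var
            "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:prod/db-password",
        }
    ],
}
```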