Is it possible to add a host entry to the /etc/hosts file via Fargate 1.3.0 or 1.4.0? - amazon-ecs

I need to add a host entry to the /etc/hosts file of a container running on Fargate, and extraHosts in the task definition is not supported with the awsvpc network mode. We are currently using Fargate platform version 1.3.0 and are ready to upgrade to 1.4.0 if that resolves the issue.
Also, the question asked here - https://github.com/aws/containers-roadmap/issues/886 - suggests it is possible on both 1.3.0 and 1.4.0, but I cannot find how to do it either way.
Can somebody suggest how it can be done?
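A commonly suggested workaround, in lieu of extraHosts, is to append the entry when the container starts, via a wrapper entrypoint. This is only a sketch: it assumes the container runs as root and that /etc/hosts is writable on your platform version, and the IP and hostname are placeholders.

#!/bin/sh
# entrypoint.sh -- append a static host entry, then hand off to the app.
# 10.0.0.5 / internal.example.com are placeholders for the real entry.
echo "10.0.0.5 internal.example.com" >> /etc/hosts
exec "$@"

Wire it in from the Dockerfile (the app command is a placeholder):

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-app-command"]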

Related

Falco pod initcontainer is not working. curl: (22) The requested URL returned error: 404

I am trying to install Falco on my Kubernetes cluster with the Helm chart. I am deploying it as a DaemonSet using the eBPF driver, but I am getting an error in my init containers. What should I do?
This is my values.yaml
You are getting this error message because the kernel headers are not installed, so the eBPF driver cannot be compiled.
Before compiling the eBPF driver, the loader script tries to download a prebuilt one from https://download.falco.org, but it doesn't find it because the Oracle Linux distribution is not officially supported (more precisely, no prebuilt driver is offered for it).
The quickest solution would be to install the kernel headers on each Kubernetes node, so Falco can compile the driver the next time it tries to start.
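On an Oracle Linux node, installing the headers would look roughly like this; the package names assume the stock UEK (or RHCK) kernel and should be verified against what the node actually runs:

# On each node, install headers matching the running kernel.
# UEK kernel (Oracle's default):
yum install -y kernel-uek-devel-$(uname -r)
# RHCK kernel instead:
# yum install -y kernel-devel-$(uname -r)

After that, restarting the Falco pods should let the loader compile the eBPF driver locally.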
It is also possible to use the Driverkit project to build the Falco drivers yourself (as the Falco project does) and make them available somewhere else, but then you'd need to pass the driver URL to the Helm chart. This avoids polluting the system with packages you'd only need once.
You are also welcome to contribute to the project by adding support for the Oracle Linux distribution, which is relatively simple since it is quite similar to the Red Hat distribution. Once it is supported, the drivers will be available to anyone using the same kernel/distribution.
For further information, you can visit the Falco Slack channel and ask for help there, or ping anyone in the community.

creating a proper kubeconfig file for a 2 node gentoo linux kubernetes cluster

I have two servers at home running Gentoo Linux ~amd64. I would like to install Kubernetes on them to play with it a bit.
Gentoo now packages all the Kubernetes-related dependencies in a single package called sys-cluster/kubernetes, and the latest version available at the moment is 1.18.3.
The last time I played with Kubernetes was several years ago, and I think I have completely forgotten everything.
So I installed Kubernetes on both servers. Since I use systemd and the package only ships a kubelet systemd service, I also created systemd units for kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler.
The package also comes with kubeadm, but I would like to know how to install and configure Kubernetes manually.
Now I want to create a kubeconfig file for my cluster configuration.
I googled and found the following URL: http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/
The first step there is "Make sure you can access the cluster", but I want to create the kubeconfig precisely so that the services know how to access my cluster!
That page also talks about secrets that were already configured, which mine aren't. I'm starting from scratch, so that is probably not the way to go.
In general, I want to know how to properly create a kubeconfig file for my setup; then I'll configure the services to use this kubeconfig file and go on from there.
Any information regarding this issue would be greatly appreciated.
I also asked this in the Kubernetes Slack channel, and they pointed me to this project: https://github.com/kelseyhightower/kubernetes-the-hard-way
It's a documentation project on how to configure Kubernetes the hard way. The documentation sets the cluster up on Google Cloud, but it's easy to understand what they did there and how to configure the same thing on your own network.
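For the kubeconfig question specifically, that guide builds each kubeconfig with kubectl config subcommands. A minimal sketch for an admin kubeconfig; the cluster name, certificate file names, and API server address are placeholders for whatever you generated:

# Point the kubeconfig at the cluster's API server and CA.
kubectl config set-cluster home-cluster \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.10:6443 \
  --kubeconfig=admin.kubeconfig

# Add the client certificate credentials.
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

# Tie them together and make the context active.
kubectl config set-context default \
  --cluster=home-cluster \
  --user=admin \
  --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig

Each component (kubelet, kube-proxy, controller-manager, scheduler) gets its own kubeconfig built the same way with its own certificate pair.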

AWS Elasticsearch domain deployed through CloudFormation. How to update ES version without replacement?

We have an AWS Elasticsearch domain we created through CloudFormation running version 6.3 of ES. When we update the ElasticsearchVersion property in the template, it replaces the Elasticsearch domain with a new one running the new version instead of updating the existing one.
How does anyone upgrade Elasticsearch domains that were deployed with CloudFormation if it doesn't do an in-place upgrade? At this point I am almost thinking I need to create and manage my ES domains through boto3.
Any insight or ideas would be greatly appreciated.
This is now possible (as of 25/11/2019) by setting an UpdatePolicy with EnableVersionUpgrade: true.
For example:
ElasticSearchDomain:
  Type: AWS::Elasticsearch::Domain
  Properties: ...
  UpdatePolicy:
    EnableVersionUpgrade: true
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-upgradeelasticsearchdomain
Received correspondence back from AWS Support regarding an ES in-place upgrade through CloudFormation.
tl;dr It is currently not supported but a feature request is already active for this functionality.
You are correct in saying that an ES in-place upgrade is not supported by CFN at this moment. Upgrading ES from 6.3 to 6.4 via the CLI or the AWS Console will keep the existing domain, but CloudFormation will launch a new domain and discard the existing one.
I see that there is already an active feature request for this. I will go ahead and pass your sentiment about this matter to our internal team as well.
Unfortunately, AWS Support does not have visibility into the service enhancement implementation roadmap, so I would not be able to provide you with an exact time frame.
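For completeness, before the UpdatePolicy support mentioned above landed, the in-place upgrade had to be triggered outside of CloudFormation. A minimal boto3 sketch using the UpgradeElasticsearchDomain API (the domain name and target version are placeholders); note that changing the version outside CloudFormation leaves the stack template out of sync with the actual resource:

import boto3

# 'es' is the Elasticsearch service client in boto3.
es = boto3.client('es')

# Kick off an in-place version upgrade of an existing domain.
response = es.upgrade_elasticsearch_domain(
    DomainName='my-domain',    # placeholder
    TargetVersion='6.4',       # placeholder
    PerformCheckOnly=False,    # True runs the upgrade eligibility checks only
)
print(response['DomainName'], response['TargetVersion'])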

AWS EB deployment - where is my app?

I packaged my Scala/LiftWeb app with the sbt one-jar plugin into a single executable jar file and packed it up with Docker, exposing the embedded Jetty's port in the Dockerfile.
It runs fine locally on Docker and apparently deploys cleanly on AWS EB using the CLI deployment tools. At the EB URL I received, however, all I see is the congratulations page saying "Your Docker Container is now running in Elastic Beanstalk on your own dedicated environment in the AWS Cloud."
So, where is my app? Do I miss any steps making my app publicly available on my EB instance?
For future reference, the problem was caused by using an obsolete 2.x version of the aws-eb-cli tools package. Upgrading to 3.x made the error obvious: building the Docker image had failed on AWS.
What I was actually looking for was running an existing Docker image; I found instructions for this scenario at https://aws.amazon.com/blogs/aws/aws-elastic-beanstalk-for-docker/.
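For anyone hitting the same thing: deploying a prebuilt image to a single-container Docker environment on Elastic Beanstalk is done with a Dockerrun.aws.json file at the root of the application bundle. A minimal sketch, with the image name and port as placeholders:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myaccount/my-lift-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}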
Thanks a lot to Nick for asking the right questions, which made me realize the tools package was obsolete!

haproxy and keepalived on ec2

I read about using HAProxy and keepalived to avoid a single point of failure for HAProxy. Is it possible to do this on EC2?
Say I have two instances, each with HAProxy and keepalived installed. The VIP would be an AWS Elastic IP.
In theory this should be possible; there are blog posts dotted around with instructions on how to do it. However, I have been trying to test this over the past few days and have not had any luck with it.
Amazon blocks multicast on EC2 (Classic), so the primary communication method for keepalived will not work. You need to install the latest version (at the time of writing this is 1.2.13), which has unicast support built in. This should allow you to bypass the multicast restrictions that Amazon puts in place. I think the keepalived package in the repos (yum install keepalived) is version 1.2.7, which does not include the unicast patch.
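Once a unicast-capable build is installed (see the build steps below), the VRRP instance is pointed directly at the peer instead of relying on multicast. A rough keepalived.conf sketch; the IPs, interface name, and failover script are placeholders, and on EC2 the "VIP" move is really a script that re-associates the Elastic IP through the AWS CLI:

vrrp_instance haproxy_vip {
    state MASTER                 # BACKUP on the second instance
    interface eth0
    virtual_router_id 51
    priority 101                 # use a lower priority on the backup
    unicast_src_ip 10.0.0.11     # this instance's private IP (placeholder)
    unicast_peer {
        10.0.0.12                # the other haproxy instance (placeholder)
    }
    # There is no floating layer-2 VIP on EC2; promote-to-master.sh would
    # re-associate the Elastic IP, e.g. with:
    # aws ec2 associate-address --instance-id i-... --allocation-id eipalloc-...
    notify_master /etc/keepalived/promote-to-master.sh
}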
You should be able to use wget to download the latest tarball, unpack it, and build it from source (./configure --prefix=/, make, make install). Make sure you have the gcc and openssl-devel packages installed before running configure, as it will fail with errors otherwise.
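Concretely, the build looks something like the following; the download URL reflects the 1.2.13 tarball naming at the time and should be verified:

# Build prerequisites.
yum install -y gcc openssl-devel

# Download, unpack, and build keepalived with unicast support.
wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz
tar xzf keepalived-1.2.13.tar.gz
cd keepalived-1.2.13
./configure --prefix=/
make
make install   # as root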
If I get it working in the meantime I will come back and put a link to my blog with the exact steps needed :)