I'm trying to run two containers in one task. The two containers must be resolvable by their DNS names.
What I did: I defined the two containers in the same task definition:
MyTwoContainerTaskDefinition:
  Type: 'AWS::ECS::TaskDefinition'
  Properties:
    NetworkMode: awsvpc
    RuntimePlatform:
      OperatingSystemFamily: LINUX
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      - Name: container1
        ...
      - Name: container2
        ...
    ...
Then I use two ServiceDiscovery resources and two Service resources (one of each per container) to enable DNS resolution:
Container1CloudmapDiscoveryservice:
  Type: AWS::ServiceDiscovery::Service
  ...

Container1Service:
  Type: 'AWS::ECS::Service'
  Properties:
    ServiceName: container1
    DesiredCount: 1
    LaunchType: FARGATE
    TaskDefinition: !Ref MyTwoContainerTaskDefinition
    ServiceRegistries:
      - RegistryArn: !GetAtt Container1CloudmapDiscoveryservice.Arn
        Port: 7070
    ...
And the same resources for container 2.
The deployment works, but when I go to the AWS console I see two tasks, each containing the two containers.
I would like to have only one task containing my two containers.
Do you know if that's possible and what I'm missing?
Yes, it's possible to have multiple containers in one task definition. See here: AWS ECS start multiple containers in one task definition
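If it helps, here is a minimal sketch of a single AWS::ECS::Service that runs the task once, reusing the names from the question; the service resource name, subnet, and security group references are placeholders I made up, and as far as I know ECS only allows one service registry per service. Also note that containers in the same awsvpc task share one ENI, so they can reach each other over localhost without Cloud Map:

MyCombinedService:
  Type: 'AWS::ECS::Service'
  Properties:
    ServiceName: my-two-container-service
    DesiredCount: 1
    LaunchType: FARGATE
    TaskDefinition: !Ref MyTwoContainerTaskDefinition
    ServiceRegistries:
      - RegistryArn: !GetAtt Container1CloudmapDiscoveryservice.Arn   # one registry per service
        Port: 7070
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets:
          - !Ref PrivateSubnetA        # placeholder
        SecurityGroups:
          - !Ref TaskSecurityGroup     # placeholder

With a single service you get a single task, which is what you're after; whether you still need a second Cloud Map entry depends on callers outside the task, since inside the task localhost already works.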
As I am quite familiar with Docker Compose and not so familiar with Amazon's CloudFormation, I found it extremely nice to be able to basically run your Docker Compose files via the ECS integration and, voilà, behind the scenes everything you need is created for you. You get your load balancer (if not already created) and your ECS cluster with your services running, everything is connected, and it just works. When I started wanting to do more advanced things, I ran into a problem that I can't seem to find an answer to online.
I have two services in my Docker Compose file: my Spring Boot web app and my Postgres DB. I wanted to implement SSL and redirect all traffic to HTTPS. After a lot of research and a lot of trial and error, I finally got it to work by extending my compose file with x-aws-cloudformation and adding native CloudFormation YAML. In doing so I was forced to choose an application load balancer over a network load balancer, as it operates on layer 7 (HTTP/HTTPS). However, my problem is that now I have no way of reaching my Postgres database and running queries against it via, for example, IntelliJ. My Spring Boot app can read/write to my database, so that works fine. Before the whole SSL implementation I didn't specify a load balancer in my compose file, so it gave me a network load balancer every time I ran my compose file, and then I could connect to my database via IntelliJ and run queries. I have tried adding an inbound rule on my security group that allows all inbound traffic to my database via 5432, but that didn't help. I may not be setting the correct host in my connection details in IntelliJ, but I have tried using the following:
the DNS name of the load balancer
the IP address of the load balancer
the public IP of my Postgres DB task (launch type: Fargate)
I would simply like to reach my database and run queries against it, even though it is running inside an AWS ECS cluster behind an application load balancer. Is there a way of achieving what I am trying to do, or do I have to have two separate load balancers (one application LB and one network LB)?
Here is my docker-compose file (I have omitted a few irrelevant env variables):
version: "3.9"
x-aws-loadbalancer: arn:my-application-load-balancer
services:
my-web-app:
build:
context: .
image: hub/my-web-app
x-aws-pull_credentials: xxxxxxxx
container_name: my-app-name
ports:
- "80:80"
networks:
- my-app-network
depends_on:
- postgres
deploy:
replicas: 1
resources:
limits:
cpus: '0.5'
memory: 2048M
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/my-db?currentSchema=my-db_schema
- SPRING_DATASOURCE_USERNAME=dbpass
- SPRING_DATASOURCE_PASSWORD=dbpass
- SPRING_DATASOURCE_DRIVER-CLASS-NAME=org.postgresql.Driver
- SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect
postgres:
build:
context: docker/database
image: hub/my-db
container_name: my-db
networks:
- my-app-network
deploy:
replicas: 1
resources:
limits:
cpus: '0.5'
memory: 2048M
environment:
- POSTGRES_USER=dbpass
- POSTGRES_PASSWORD=dbpass
- POSTGRES_DB=my-db
networks:
my-app-network:
name: my-app-network
x-aws-cloudformation:
Resources:
MyWebAppTCP80TargetGroup:
Properties:
HealthCheckPath: /actuator/health
Matcher:
HttpCode: 200-499
MyWebAppTCP80Listener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Protocol: HTTP
Port: 80
LoadBalancerArn: xxxxx
DefaultActions:
- Type: redirect
RedirectConfig:
Port: 443
Host: "#{host}"
Path: "/#{path}"
Query: "#{query}"
Protocol: HTTPS
StatusCode: HTTP_301
MyWebAppTCP443Listener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Protocol: HTTPS
Port: 443
LoadBalancerArn: xxxxxxxxx
Certificates:
- CertificateArn: "xxxxxxxxxx"
DefaultActions:
- Type: forward
ForwardConfig:
TargetGroups:
- TargetGroupArn:
Ref: MyWebAppTCP80TargetGroup
MyWebAppTCP80RedirectRule:
Type: AWS::ElasticLoadBalancingV2::ListenerRule
Properties:
ListenerArn:
Ref: MyWebAppTCP80Listener
Priority: 1
Conditions:
- Field: host-header
HostHeaderConfig:
Values:
- "*.my-app.com"
- "www.my-app.com"
- "my-app.com"
Actions:
- Type: redirect
RedirectConfig:
Host: "#{host}"
Path: "/#{path}"
Query: "#{query}"
Port: 443
Protocol: HTTPS
StatusCode: HTTP_301
I need to use Traefik as a reverse proxy for Docker. My use case requires spinning up containers from different docker-compose.yml files. Ideally I want to use one docker-compose.yml file for Traefik itself and separate docker-compose.yml files for my other websites. Our websites are interconnected but come from different development streams (and different repositories).
This is so a developer can pull the sites down to their local machine, spin up each one, develop code, and then push up to the relevant repository.
I am looking for examples of how to use labels correctly to do this (if labels are the right approach).
Thanks A.
Using Traefik and its labels for dynamic deployments is probably the best choice you can make; it makes the routing very easy to work with. We use it inside Docker Swarm, but that's just Compose with a few extra steps, so you can reuse our configuration.
You must have one common network shared by Traefik and all containers so that Traefik can parse their labels.
For the labels on the services side I use:
labels:
  # Traefik
  - "traefik.enable=true"
  - "traefik.docker.network=traefik-proxy" # that common network I was talking about
  # Routers
  - "traefik.http.routers.service-name.rule=Host(`$SWARM_HOST`) && PathPrefix(`/service-path`)"
  - "traefik.http.routers.service-name.service=service-name"
  - "traefik.http.routers.service-name.entrypoints=http" # configured inside the traefik stack
  - "traefik.http.routers.service-name.middlewares=strip-path-prefix" # we use this to strip the /service-path/... part off the request so all requests hit / inside our containers (no need to worry about that on the API side)
  # Services
  - "traefik.http.services.service-name.loadbalancer.server.port=${LISTEN_PORT}"
For the actual Traefik service I will attach the whole Compose configuration; you can cut out only the parts you need and skip the Swarm-specific stuff:
version: '3.9'

services:
  traefik:
    # Pin a Traefik v2 image
    image: traefik:v2.5.4
    healthcheck:
      test: ["CMD", "traefik", "healthcheck", "--ping"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s
    deploy:
      mode: global
      update_config:
        order: start-first
        failure_action: rollback
        parallelism: 1
        delay: 15s
        monitor: 30s
      restart_policy:
        condition: any
        delay: 10s
        max_attempts: 3
      labels:
        # Enable Traefik for this service, to make it available in the public network
        - "traefik.enable=true"
        # Use the traefik-proxy network (declared below)
        - "traefik.docker.network=traefik-proxy"
        # Host rule for the dashboard
        - "traefik.http.routers.dashboard.rule=Host(`swarm-traefik.company.org`)"
        - "traefik.http.routers.dashboard.entrypoints=http"
        # Use the special Traefik service api@internal with the web UI/Dashboard
        - "traefik.http.routers.dashboard.service=api@internal"
        # Enable HTTP Basic auth, using the middleware declared below
        - "traefik.http.routers.dashboard.middlewares=admin-auth"
        # Define the port inside of the Docker service to use
        - "traefik.http.services.dashboard.loadbalancer.server.port=8080"
        # Middlewares
        - "traefik.http.middlewares.strip-path-prefix.replacepathregex.regex=^/[a-z0-9-]+/(.*)"
        - "traefik.http.middlewares.strip-path-prefix.replacepathregex.replacement=/$$1"
        # admin-auth middleware with HTTP Basic auth
        - "traefik.http.middlewares.admin-auth.basicauth.users=TODO_GENERATE_USER_BASIC_AUTH"
      placement:
        constraints:
          - "node.role==manager"
    volumes:
      # Mount the Docker socket, so that Traefik can read the labels of other services
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command:
      # Enable Docker in Traefik, so that it reads labels from Docker services
      - --providers.docker
      # Do not expose all Docker services, only the ones explicitly exposed
      - --providers.docker.exposedbydefault=false
      # Enable Docker Swarm mode
      - --providers.docker.swarmmode
      # Set the default network
      - --providers.docker.network=traefik-proxy
      # Create an entrypoint "http" listening on port 80
      - --entrypoints.http.address=:80
      # Enable the Traefik log, for configuration and errors
      - --log
      #- --log.level=INFO
      # Enable the Dashboard and API
      - --api
      # Enable the access log - in our case we don't need it because we have Nginx in front, which has top-level access logs
      # - --accesslog
      # Enable the /ping healthcheck route
      - --ping=true
      # Enable Zipkin tracing & configuration
      #- --tracing.zipkin=true
      #- --tracing.zipkin.httpEndpoint=https://misc-zipkin.company.org/api/v2/spans
    networks:
      # Use the public network created to be shared between Traefik and
      # any other service that needs to be publicly available with HTTPS
      - traefik-proxy

networks:
  traefik-proxy:
    external: true
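One more thing to keep in mind: because the network is declared as external, you create it once up front, before bringing any of the compose files up, roughly like this (the overlay driver is only needed for Swarm):

docker network create traefik-proxy
# or, for swarm:
docker network create --driver overlay traefik-proxy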
I have a very simple Jelastic installation manifest which installs a Kubernetes cluster:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - attachIpToWorkerNodes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
    attachIpToWorkerNodes:
      - forEach(node:nodes.cp):
          - jelastic.env.binder.AttachExtIp:
              envName: ${settings.envName}
              nodeId: ${#node.id}
If I install that manifest, I get my cluster up and running, but the worker nodes do not get an IPv4 address attached. If, after installing that manifest, I additionally install the following update manifest, then it works:
jpsVersion: 1.3
jpsType: update
application:
  id: attach-ext-ip
  name: Attach external IP
  version: 0.0
  onInstall:
    - attachIpToWorkerNodes
  actions:
    attachIpToWorkerNodes:
      - forEach(node:nodes.cp):
          - jelastic.env.binder.AttachExtIp:
              nodeId: ${#node.id}
What am I doing wrong in the install manifest? Why aren't the IPs attached to my worker nodes, while they are if I perform that action after installation with an update manifest?
Please note that the "public IP binding" feature is not available in production yet. It's under active development and will be officially announced in one of our next releases.
In the current stable version, some of the functionality related to it may not work properly. Right now it's not recommended for production use, but you can try it for test purposes.
As for the "attachIpToWorkerNodes" action in the original manifest, the issue was that "nodes.cp" of the created environment wasn't available in the scope where "forEach" was invoked. The correct version of the action is:
attachIpToWorkerNodes:
  install:
    envName: ${settings.envName}
    jps:
      type: update
      name: Attach IP To Worker Nodes
      onInstall: jelastic.env.binder.AttachExtIp [nodes.cp.join(id,)]
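If you are folding this back into the original install manifest, the onInstall list and the installKubernetes action stay exactly as in the question; only the attachIpToWorkerNodes action changes. An untested sketch of the resulting actions section:

onInstall:
  - installKubernetes
  - attachIpToWorkerNodes

actions:
  installKubernetes:
    # unchanged from the question
  attachIpToWorkerNodes:
    install:
      envName: ${settings.envName}
      jps:
        type: update
        name: Attach IP To Worker Nodes
        onInstall: jelastic.env.binder.AttachExtIp [nodes.cp.join(id,)]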
Please let us know if you have any further questions.
I am trying to create a YAML file to deploy a GKE cluster in a custom network I created. I get the error:
JSON payload received. Unknown name "network": Cannot find field.
I have tried a few names for the resources, but I am still seeing the same issue:
resources:
  - name: myclus
    type: container.v1.cluster
    properties:
      network: projects/project-251012/global/networks/dev-cloud
      zone: "us-east4-a"
      cluster:
        initialClusterVersion: "1.12.9-gke.13"
        currentMasterVersion: "1.12.9-gke.13"
        ## Initial NodePool config.
        nodePools:
          - name: "myclus-pool1"
            initialNodeCount: 3
            version: "1.12.9-gke.13"
            config:
              machineType: "n1-standard-1"
              oauthScopes:
                - https://www.googleapis.com/auth/logging.write
                - https://www.googleapis.com/auth/monitoring
                - https://www.googleapis.com/auth/ndev.clouddns.readwrite
              preemptible: true
  ## Duplicates node pool config from v1.cluster section, to get it explicitly managed.
  - name: myclus-pool1
    type: container.v1.nodePool
    properties:
      zone: us-east4-a
      clusterId: $(ref.myclus.name)
      nodePool:
        name: "myclus-pool1"
I expect it to place the cluster nodes in this network.
The network field needs to be part of the cluster spec. The top level of properties should just be zone and cluster; network should be at the same indentation as initialClusterVersion. See the container.v1.cluster API reference page for more.
Your manifest should look more like:
EDIT: there is some confusion in the API reference docs concerning deprecated fields. I originally offered YAML that applies to the new API, not the one you are using. I've updated with the correct syntax for the basic v1 API, and further down I've added the newer API (which currently relies on gcp-types to deploy).
resources:
  - name: myclus
    type: container.v1.cluster
    properties:
      projectId: [project]
      zone: us-central1-f
      cluster:
        name: my-clus
        zone: us-central1-f
        network: [network_name]
        subnetwork: [subnet] ### leave this field blank if using the default network
        initialClusterVersion: "1.13"
        nodePools:
          - name: my-clus-pool1
            initialNodeCount: 0
            config:
              imageType: cos
  - name: my-pool-1
    type: container.v1.nodePool
    properties:
      projectId: [project]
      zone: us-central1-f
      clusterId: $(ref.myclus.name)
      nodePool:
        name: my-clus-pool2
        initialNodeCount: 0
        version: "1.13"
        config:
          imageType: ubuntu
The newer API (which provides more functionality and allows you to use more features, including the v1beta1 API and beta features) would look something like this:
resources:
  - name: myclus
    type: gcp-types/container-v1:projects.locations.clusters
    properties:
      parent: projects/shared-vpc-231717/locations/us-central1-f
      cluster:
        name: my-clus
        zone: us-central1-f
        network: shared-vpc
        subnetwork: local-only ### leave this field blank if using the default network
        initialClusterVersion: "1.13"
        nodePools:
          - name: my-clus-pool1
            initialNodeCount: 0
            config:
              imageType: cos
  - name: my-pool-2
    type: gcp-types/container-v1:projects.locations.clusters.nodePools
    properties:
      parent: projects/shared-vpc-231717/locations/us-central1-f/clusters/$(ref.myclus.name)
      nodePool:
        name: my-clus-separate-pool
        initialNodeCount: 0
        version: "1.13"
        config:
          imageType: ubuntu
Another note: you may want to modify your scopes. The current scopes will not allow you to pull images from gcr.io, so some system pods may not spin up properly, and if you are using Google's registry you will be unable to pull your own images.
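As a hedged example (the exact scope set depends on what your workloads need; devstorage.read_only is the scope that lets nodes pull images from gcr.io), the node config could look like:

config:
  machineType: "n1-standard-1"
  oauthScopes:
    - https://www.googleapis.com/auth/devstorage.read_only   # image pulls from gcr.io
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/monitoring
    - https://www.googleapis.com/auth/ndev.clouddns.readwrite
  preemptible: true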
Finally, you don't want to repeat the node pool definition both in the cluster spec and as a separate resource. Instead, create the cluster with a basic (default) node pool, and create all additional node pools as separate resources so you can manage them without going through the cluster. There are very few updates you can perform on a node pool aside from resizing.
I've created a Fargate cluster on ECS. But when I run my instance, I encounter the following error message:
Error: The hook `orm` is taking too long to load. Make sure it is
triggering its `initialize()` callback, or else set
`sails.config.orm._hookTimeout` to a higher value (currently 20000)
    at Timeout.tooLong [as _onTimeout]
But in the MongoDB EC2 instance, I've already configured bindIp like this:
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
But when I run this Docker instance from my local machine, I don't get that error message, and when I deploy the source code on EC2 there is no error either. Please let me know how to solve this issue. Thanks.
Here is my sample diagram
You're not specifying whether the MongoDB that you run and connect to from your local Docker instance is also local, or whether it's the same MongoDB instance in AWS (which you would presumably connect to using either a VPN or SSH tunneling).
So why the Docker instance works locally and not in AWS is going to be a bit hard to explain; I'd suggest it's network-connectivity related.
We run ECS Fargate services that connect to an EC2 instance running MongoDB. The key to this is to make sure the security group relationship is established as well.
This could, for instance, look like the CloudFormation example below. You have the rAppFargateSecurityGroup security group (exposing the app via port 8080) attached to your Fargate service, and the rMongoDbEc2SecurityGroup security group attached to the MongoDB EC2 instance (exposing MongoDB via port 27017).
You will notice that the glue here is "SourceSecurityGroupId: !Ref rAppFargateSecurityGroup", which allows Fargate to connect to MongoDB.
rAppFargateSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: !Sub '${pAppName}-${pEnvironment} ECS Security Group'
    VpcId: !Ref pVpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 8080
        ToPort: 8080
        SourceSecurityGroupId: !Ref rAppAlbSecurityGroup

rMongoDbEc2SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: !Sub '${pAppName}-${pEnvironment} MongoDb Security Group'
    VpcId: !Ref pVpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 27017
        ToPort: 27017
        SourceSecurityGroupId: !Ref rAppFargateSecurityGroup
You would have the Fargate service configured along the lines of:
rFargateService:
  Type: AWS::ECS::Service
  Properties:
    ...
    NetworkConfiguration:
      AwsvpcConfiguration:
        SecurityGroups:
          - !Ref rAppFargateSecurityGroup
        Subnets:
          - !Ref pPrivateSubnetA
          - !Ref pPrivateSubnetB
          - !Ref pPrivateSubnetC
The Fargate service subnets would need to be configured in the same VPC as your MongoDB host if you're not using e.g. VPC peering or PrivateLink.
I should also add that other things that could trip you up are NACLs, and of course local host firewalls (like iptables).
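For completeness, here is a hedged sketch of how that MongoDb security group could be attached to the MongoDB EC2 instance; the instance resource and its other properties are assumptions of mine, not something taken from your stack:

rMongoDbInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref pMongoDbAmiId          # placeholder parameter
    InstanceType: t3.medium              # placeholder
    SubnetId: !Ref pPrivateSubnetA       # same VPC/subnets as the Fargate service
    SecurityGroupIds:
      - !Ref rMongoDbEc2SecurityGroup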