Our images have environment variables that need to be defined at docker run time. Any idea how to add these variables to the CloudFormation file? We currently have something like:
Task:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: testenv
    Cpu: 256
    Memory: 512
    NetworkMode:
    RequiresCompatibilities:
      - FARGATE
    ExecutionRoleArn: !ImportValue ECSTaskExecutionRole
    ContainerDefinitions:
      - Name: bonalds
        Image: gcr.io/zonalds-21/id-me:latest  # image comes from gcr
        Cpu: 256
        Memory: 512
        PortMappings:
          - ContainerPort: 4567
            Protocol: tcp
        LogConfiguration:
          LogDriver:
          Options:
            awslogs-group: 'zonalds'
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: 'routme'
I can't seem to find any info in the AWS documentation. What would be the best way to add the environment variables?
Your container definition can hold environment variables.
ContainerDefinitions:
  - Name: bonalds
    Image: gcr.io/zonalds-21/id-me:latest  # image comes from gcr
    Cpu: 256
    Environment:
      - Name: Test
        Value: 'test'
    Memory: 512
    PortMappings:
      - ContainerPort: 4567
        Protocol: tcp
    LogConfiguration:
      LogDriver:
      Options:
        awslogs-group: 'zonalds'
        awslogs-region: !Ref AWS::Region
        awslogs-stream-prefix: 'routme'
More information is in the AWS documentation for the Environment property of ContainerDefinitions.
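If the values need to change per deployment, one option is to feed them in through template parameters and reference them in the Environment list. A minimal sketch, assuming a made-up AppEnv parameter and APP_ENV/PORT variable names:

Parameters:
  AppEnv:
    Type: String
    Default: staging  # hypothetical parameter, named for illustration

Resources:
  Task:
    Type: AWS::ECS::TaskDefinition
    Properties:
      # ...Family, Cpu, Memory, roles, etc. as above...
      ContainerDefinitions:
        - Name: bonalds
          # ...Image, Cpu, Memory, PortMappings, LogConfiguration as above...
          Environment:
            - Name: APP_ENV
              Value: !Ref AppEnv  # resolved at stack deploy time
            - Name: PORT
              Value: '4567'       # literal values are plain strings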
Related
I am writing an email plugin with PGP encryption for Drone, which must react to the status of the previous steps in order to send the right email template, but I don't know how to get this information.
I took a look at the environment variables that are passed into my container, but there is no information about the previous steps. How do other applications react to the outcome of previous steps?
Here is an excerpt of my drone.yaml.
kind: pipeline
type: kubernetes
name: notification-test

node_selector:
  kubernetes.io/os: linux
  kubernetes.io/arch: amd64

steps:
  - name: exit-code
    commands:
      - apk update
      - apk add bash
      - bash -c "env | sort"
      - exit 1
    image: docker.io/library/alpine:3.16.0
    resources:
      limits:
        cpu: 150
        memory: 150M

  - name: post-env
    commands:
      - apk update
      - apk add bash
      - bash -c "env | sort"
    depends_on:
      - exit-code
    image: docker.io/library/alpine:3.16.0
    resources:
      limits:
        cpu: 150
        memory: 150M

  - name: drone-email
    depends_on:
      - post-env
      - exit-code
    environment:
      SMTP_FROM_ADDRESS:
        from_secret: smtp_from_address
      SMTP_FROM_NAME:
        from_secret: smtp_from_name
      SMTP_HOST:
        from_secret: smtp_host
      SMTP_USERNAME:
        from_secret: smtp_username
      SMTP_PASSWORD:
        from_secret: smtp_password
    image: docker.io/volkerraschek/drone-email:latest
    pull: always
    resources:
      limits:
        cpu: 150
        memory: 150M
    when:
      status:
        - changed
        - failure

trigger:
  event:
    exclude:
      - tag
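One way to get at this, sketched below as an additional step, is to let a step run regardless of the outcome (via when: status) and read Drone's built-in status variables. DRONE_BUILD_STATUS is a standard Drone variable; DRONE_FAILED_STEPS is an assumption here and may depend on the Drone version/runner in use:

  - name: report-status
    image: docker.io/library/alpine:3.16.0
    commands:
      # DRONE_BUILD_STATUS holds the pipeline status so far (success/failure).
      # DRONE_FAILED_STEPS (assumed) should list the names of the failed steps.
      - echo "build status is $DRONE_BUILD_STATUS"
      - echo "failed steps are $DRONE_FAILED_STEPS"
    depends_on:
      - exit-code
    when:
      status:
        - success
        - failure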
I'm using CloudFormation scripts to build an EC2-backed ECS container for Ksql Server (a Docker container). I have already built the other components within MSK, i.e. the bootstrap servers and listeners.
Within the AWS::ECS::TaskDefinition I have tried to add the bootstrap servers and listeners by using the 'Container' & 'Environment' properties within 'ContainerDefinitions'. However, doing this leaves the EcsService stuck, with the status staying at CREATE_IN_PROGRESS.
# Creating the ECS Task for KsqlDB
EcsKsqlTask:
  Type: AWS::ECS::TaskDefinition
  Properties:
    NetworkMode: awsvpc
    Cpu: '256'
    Memory: '1024'
    RequiresCompatibilities:
      - EC2
    ContainerDefinitions:
      - Name: KsqlServer
        Image: 123.dkr.ecr.eu-west-2.amazonaws.com/confluentinc/cp-ksql-server
        Essential: true
        # Environment:
        #   Name: KSQL_BOOTSTRAP_SERVERS
        #   Value: b-1.kafka.123.d1.eu-west-2.amazonaws.com:9092
        Command:
          - 'bin/bash docker run -d \ -v / KSQL_BOOTSTRAP_SERVERS=b-1.kafka.123.c3.eu-west-2.amazonaws.com:9092 \ -e KSQL_KSQL_SERVICE_ID=ksql_standalone_1_ \ -e KSQL_KSQL_QUERIES_FILE=/path/in/container/queries.sql \ confluentinc/ksqldb-server:0.26.0'
        PortMappings:
          - ContainerPort: 8080
            Protocol: tcp
          - ContainerPort: 22
            Protocol: tcp
    ExecutionRoleArn: !Ref EcsRole
    TaskRoleArn: !Ref EcsRole

# Creating the ECS Service for KsqlDB
EcsService:
  Type: AWS::ECS::Service
  Properties:
    ServiceName: EcsKsqlService
    TaskDefinition: !Ref EcsKsqlTask
    Cluster: !Ref EcsCluster
    LaunchType: EC2
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: DISABLED
        SecurityGroups:
          - !Ref EcsSecurityGroup
        Subnets:
          - !Ref PrivateSubnetOne
          - !Ref PrivateSubnetTwo
Any help on any property I am missing would be greatly appreciated!
Added it like so:
ContainerDefinitions:
  - Name: KsqlCli
    Image: Images/ksql-cli
    Essential: true
    Environment:
      - Name: KSQL_BOOTSTRAP_SERVERS
        Value: b-3.boostrap.amazonaws.com
      - Name: KSQL_KSQL_SERVICE_ID
        Value: confluent_ksql_01
      - Name: KSQL_LISTENERS
        Value: http://localhost:8088
I created a Docker volume as such:
sudo docker volume create --driver=local --name=es-data1 --opt type=none --opt o=bind --opt device=/usr/local/contoso/data1/elasticsearch/data1
/usr/local/contoso/data1/elasticsearch/data1 is a symlink.
And I'm instantiating three Elasticsearch Docker containers in my docker-compose.yml file as such:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    logging:
      driver: none
    container_name: elasticsearch1
    environment:
      - node.name=elasticsearch1
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1G -Xmx1G"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: unless-stopped
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - es-logs:/var/log
      - es-data1:/usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9200:9200
      - 9300:9300
    healthcheck:
      test: wget -q -O - http://127.0.0.1:9200/_cat/health
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    logging:
      driver: none
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1G -Xmx1G"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: unless-stopped
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - es-logs:/var/log
      - es-data2:/usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9201:9200
    healthcheck:
      test: wget -q -O - http://127.0.0.1:9200/_cat/health
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    logging:
      driver: none
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1G -Xmx1G"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: unless-stopped
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - es-logs:/var/log
      - es-data3:/usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9202:9200
    healthcheck:
      test: wget -q -O - http://127.0.0.1:9200/_cat/health
volumes:
  es-data1:
    driver: local
    external: true
  es-data2:
    driver: local
    external: true
  es-data3:
    driver: local
    external: true
  es-logs:
    driver: local
    external: true
networks:
  elastic:
    external: true
  ingress:
    external: true
My Problem:
The Elasticsearch containers are persisting index data to both the host filesystem and the mounted symlink.
My Question:
How do I modify my configuration so that the Elasticsearch containers are only persisting index data to the mounted symlink?
It seems to be the default behavior of the local volume driver that the files are additionally stored on the host machine. You can change the volume settings in your docker-compose.yml to prevent Docker from persisting (copying) files to the host file system (see nocopy: true), like so:
version: '3.7'
services:
  elasticsearch:
    ....
    volumes:
      - type: volume
        source: es-data1
        target: /usr/share/elasticsearch/data
        volume:
          nocopy: true
    ....
volumes:
  es-data1:
    driver: local
    external: true
You may also want to check this question: Docker-compose - volumes driver local meaning. There are Docker volume plugins built specifically for portability, such as Flocker or Hedvig, but I haven't used a plugin for this purpose, so I can't really recommend one yet.
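If the goal is for the index data to live only under the bind path from the question, another sketch (assuming that host path) is to drop the named volume entirely and bind-mount the directory with the long volume syntax:

services:
  elasticsearch:
    # ...rest of the service definition as above...
    volumes:
      - es-logs:/var/log
      - type: bind
        source: /usr/local/contoso/data1/elasticsearch/data1
        target: /usr/share/elasticsearch/data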
I am having trouble deploying a Fargate cluster; it is failing on the Docker image pull with the error "CannotPullContainerError". I am creating the stack with CloudFormation, which is not optional, and it creates the full stack but fails when trying to start the task, with the above error.
I have attached the CloudFormation stack file, which might highlight the problem, and I have double-checked that the subnet has a route to a NAT gateway (below). I also SSHed into an instance in the same subnet, which was able to route externally. I am wondering if I have not placed the required pieces correctly, i.e. the service + load balancer are in the private subnet; or should I not be placing the internal LB in the same subnet?
This subnet is the one that currently has the placement, but all 3 in the file have the same NAT settings.
subnet routable (subnet-34b92250)
* 0.0.0.0/0 -> nat-05a00385366da527a
Cheers in advance.
YAML CloudFormation script:
AWSTemplateFormatVersion: 2010-09-09
Description: Cloudformation stack for the new GRPC endpoints within existing vpc/subnets and using fargate
Parameters:
  StackName:
    Type: String
    Default: cf-core-ci-grpc
    Description: The name of the parent Fargate networking stack that you created. Necessary
  vpcId:
    Type: String
    Default: vpc-0d499a68
    Description: The name of the parent Fargate networking stack that you created. Necessary
Resources:
  CoreGrcpInstanceSecurityGroupOpenWeb:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupName: sgg-core-ci-grpc-ingress
      GroupDescription: Allow http to client host
      VpcId: !Ref vpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
  LoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    DependsOn:
      - CoreGrcpInstanceSecurityGroupOpenWeb
    Properties:
      Name: lb-core-ci-int-grpc
      Scheme: internal
      Subnets:
        # # pub
        # - subnet-f13995a8
        # - subnet-f13995a8
        # - subnet-f13995a8
        # pri
        - subnet-34b92250
        - subnet-82d85af4
        - subnet-ca379b93
      LoadBalancerAttributes:
        - Key: idle_timeout.timeout_seconds
          Value: '50'
      SecurityGroups:
        - !Ref CoreGrcpInstanceSecurityGroupOpenWeb
  TargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    DependsOn:
      - LoadBalancer
    Properties:
      Name: tg-core-ci-grpc
      Port: 3000
      TargetType: ip
      Protocol: HTTP
      HealthCheckIntervalSeconds: 30
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 10
      HealthyThresholdCount: 4
      Matcher:
        HttpCode: '200'
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: '20'
      UnhealthyThresholdCount: 3
      VpcId: !Ref vpcId
  LoadBalancerListener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    DependsOn:
      - TargetGroup
    Properties:
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
  EcsCluster:
    Type: 'AWS::ECS::Cluster'
    DependsOn:
      - LoadBalancerListener
    Properties:
      ClusterName: ecs-core-ci-grpc
  EcsTaskRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                # - ecs.amazonaws.com
                - ecs-tasks.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: iam-policy-ecs-task-core-ci-grpc
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'ecr:**'
                Resource: '*'
  CoreGrcpTaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    DependsOn:
      - EcsCluster
      - EcsTaskRole
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref EcsTaskRole
      Cpu: '1024'
      Memory: '2048'
      ContainerDefinitions:
        - Name: container-core-ci-grpc
          Image: 'nginx:latest'
          Cpu: '256'
          Memory: '1024'
          PortMappings:
            - ContainerPort: '80'
              HostPort: '80'
          Essential: 'true'
  EcsService:
    Type: 'AWS::ECS::Service'
    DependsOn:
      - CoreGrcpTaskDefinition
    Properties:
      Cluster: !Ref EcsCluster
      LaunchType: FARGATE
      DesiredCount: '1'
      DeploymentConfiguration:
        MaximumPercent: 150
        MinimumHealthyPercent: 0
      LoadBalancers:
        - ContainerName: container-core-ci-grpc
          ContainerPort: '80'
          TargetGroupArn: !Ref TargetGroup
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          SecurityGroups:
            - !Ref CoreGrcpInstanceSecurityGroupOpenWeb
          Subnets:
            - subnet-34b92250
            - subnet-82d85af4
            - subnet-ca379b93
      TaskDefinition: !Ref CoreGrcpTaskDefinition
Unfortunately, AWS Fargate only supports images hosted in ECR or in public Docker Hub repositories; it does not support private repositories hosted on Docker Hub. For more info: https://forums.aws.amazon.com/thread.jspa?threadID=268415
We faced the same problem with AWS Fargate a couple of months back. You have only two options right now:
1. Migrate your images to Amazon ECR.
2. Use AWS Batch with a custom AMI, where the custom AMI is built with the Docker Hub credentials in the ECS config (which we are using right now).
Edit: As mentioned by Christopher Thomas in the comments, ECS Fargate now supports pulling images from Docker Hub private repositories. More info on how to set it up can be found here.
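In CloudFormation terms, that support maps to the RepositoryCredentials property of the container definition, which points at a Secrets Manager secret holding the Docker Hub username and password. A rough sketch (the image name and secret ARN are placeholders, and the task execution role must be allowed to read the secret):

ContainerDefinitions:
  - Name: container-core-ci-grpc
    Image: myorg/private-image:latest  # placeholder private Docker Hub image
    RepositoryCredentials:
      # Secret containing {"username": "...", "password": "..."}; the execution
      # role needs secretsmanager:GetSecretValue on it.
      CredentialsParameter: arn:aws:secretsmanager:eu-west-2:123456789012:secret:dockerhub-creds
    Essential: true
    PortMappings:
      - ContainerPort: 80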
Define this policy on your ECR repository and attach the IAM role to your task.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "new statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::99999999999:role/ecsEventsRole"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }
  ]
}
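If the image lives in ECR in the same account, an alternative to a repository policy is to grant the task's execution role the standard pull permissions, for example via the AWS-managed AmazonECSTaskExecutionRolePolicy. A sketch (the logical role name is illustrative) that can then be referenced from ExecutionRoleArn:

EcsTaskExecutionRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ecs-tasks.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    ManagedPolicyArns:
      # Grants ECR auth/pull permissions and CloudWatch Logs writes.
      - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy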
I'm attempting to set up a service broker to add Postgres to our Cloud Foundry installation. We're running our system on VMware. I'm using this release in order to do that:
cf-services-contrib-release
I need to set up the networks: section in the manifest, and what I'm setting there isn't working.
This is what my networks look like in the VMware vCenter UI:
And this is what my clusters and resource pools look like in the vCenter UI:
I tried both with and without quotes around the 'name' of the network. But I'm now getting an error saying that BOSH can't find the network:
Failed compiling packages > rootfs_lucid64/9b3f611b46e076b94b37645c98f9100e7bcef5dd: Can't find network: VLAN1130_LB_100.114.130.0 (00:00:01)
Failed compiling packages > postgresql93/06163819b694f8d9836586d024f64c11efe30180: Can't find network: VLAN1130_LB_100.114.130.0 (00:00:01)
Failed compiling packages > postgresql92/2867893e714aae6e6b76bd06e7aa30d47023c46e: Can't find network: VLAN1130_LB_100.114.130.0 (00:00:01)
Error 100: Can't find network: VLAN1130_LB_100.114.130.0
Task 2430 error
This was my latest configuration attempt:
networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: VLAN1130_LB_100.114.130.0
I also tried using single quotes as below. But I got the same error as above!
networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: 'VLAN1130_LB_100.114.130.0'
Our network that we're on is this one: 100.114.130.0/24
So it makes sense to select VLAN1130_LB_100.114.130.0 in the config.
I've tried setting each of these network names in the YAML file, with no quotes, and none of them seem to work:
- USH_UCS_CLOUD_FOUNDRY: postgres_2432_debug.txt (https://gist.github.com/bluethundr/18ac490e96a5e02fad65)
- USH_UCS_CLOUD_FOUNDRY_DVS: postgres_2433_debug.txt
- USH_UCS_CLOUD_FO-DVUplinks-435272: postgres_2434_debug.txt
- VLAN1129_LB_100.114.129.0: postgres_2435_debug.txt
- VLAN1130_LB_100.114.130.0: postgres_2436_debug.txt
- VLAN14-ESXI_MGMT-3.156.14.0: postgres_2437_debug.txt (https://gist.github.com/bluethundr/dbde624e63842721a133)
I wouldn't expect VLAN1129_LB_100.114.129.0 to work, but I tried it anyway, just to be complete.
I've supplied debug dumps of each failed attempt next to each setting you see above. Surely one of them must work! But as you can see none of them did.
Here's my complete yaml file that I deployed with the 'bosh deploy' command:
name: cf-22b9f4d62bb6f0563b71
director_uuid: fd713790-b1bc-401a-8ea1-b8209f1cc90c
releases:
- name: cf-services-contrib
version: 6
compilation:
workers: 3
network: default
reuse_compilation_vms: true
cloud_properties:
ram: 5120
disk: 10240
cpu: 2
update:
canaries: 1
canary_watch_time: 30000-60000
update_watch_time: 30000-60000
max_in_flight: 4
networks:
- name: default
type: manual
subnets:
- range: 100.114.130.0/24
gateway: 100.114.130.1
cloud_properties:
name: VLAN1130_LB_100.114.130.0
resource_pools:
- name: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
network: default
stemcell:
name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
version: '2865.1'
cloud_properties:
cpu: 2
ram: 4096
disk: 10240
datacenters:
- name: 'Universal City'
clusters:
- USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}
jobs:
- name: gateways
release: cf-services-contrib
templates:
- name: postgresql_gateway_ng
instances: 1
resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
networks:
- name: default
default: [dns, gateway]
properties:
# Service credentials
uaa_client_id: "cf"
uaa_endpoint: http://uaa.devcloudwest.example.com
uaa_client_auth_credentials:
username: admin
password: secret
- name: postgresql_service_node
release: cf-services-contrib
template: postgresql_node_ng
instances: 1
resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
persistent_disk: 10000
properties:
postgresql_node:
plan: default
networks:
- name: default
default: [dns, gateway]
properties:
networks:
apps: default
management: default
cc:
srv_api_uri: http://api.devcloudwest.example.com
nats:
address: 100.114.130.11
port: 25555
user: nats #CHANGE
password: secret
authorization_timeout: 5
service_plans:
postgresql:
default:
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1
postgresql_gateway:
token: f75df200-4daf-45b5-b92a-cb7fa1a25660
default_plan: default
supported_versions: ["9.3"]
version_aliases:
current: "9.3"
cc_api_version: v2
postgresql_node:
supported_versions: ["9.3"]
default_version: "9.3"
max_tmp: 900
password: secret
How can we get past this issue?
From Amit's comment:
The name used in cloud_properties must include any nested sub-folders. In the provided configuration the network is nested under USH_UCS_CLOUD_FOUNDRY, so the value for name should reflect that, i.e. USH_UCS_CLOUD_FOUNDRY/VLAN1130_LB_100.114.130.0; no quotes are required.
networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: USH_UCS_CLOUD_FOUNDRY/VLAN1130_LB_100.114.130.0