Docker compose to AWS ECS fails at the end - docker-compose

I'm deploying a project via docker compose to AWS ECS, but it fails on the last couple of steps. It is based on the new "docker compose" integration with an AWS context.
The error I receive is:
MicroservicedocumentGeneratorService TaskFailedToStart: ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr....
The image is in an ECR private repository along with the others from the compose file.
I have authenticated with:
aws ecr get-login-password
The docker compose is:
microservice_documentGenerator:
  image: xxx.dkr.ecr.xxx.amazonaws.com/microservice_documentgenerator:latest
  networks:
    - publicnet
The original Dockerfile is:
FROM openjdk:11-jdk-slim
COPY /Microservice.DocumentGenerator/Microservice.DocumentGenerator.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The output before the error was:
[+] Running 54/54
- projext DeleteComplete 355.3s
- PublicnetNetwork DeleteComplete 310.5s
- LogGroup DeleteComplete 306.1s
- MicroservicedocumentGeneratorTaskExecutionRole DeleteComplete 272.2s
- MicroservicedocumentGeneratorTaskDefinition Del... 251.2s
- MicroservicedocumentGeneratorServiceDiscoveryEntry DeleteComplete 220.1s
- MicroservicedocumentGeneratorService DeleteComp... 211.9s

Try authenticating with:
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Also, could you mention where you are making the call from, and whether the server has permission to make the call to ECR?
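If the docker login itself succeeds, this ResourceInitializationError usually means the ECS task execution role cannot call ECR, or the task's network cannot reach the ECR endpoint. As a hedged sketch (the role name below is an assumption based on the CloudFormation resource shown in the output; check the actual IAM role name), you could verify that the standard managed policy is attached:

# list the policies currently attached to the task execution role
aws iam list-attached-role-policies --role-name MicroservicedocumentGeneratorTaskExecutionRole

# attach the AWS-managed policy that allows pulling from ECR and writing logs
aws iam attach-role-policy \
  --role-name MicroservicedocumentGeneratorTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

Also check that the task's subnets can reach ECR at all: a public IP, a NAT gateway, or VPC endpoints for ecr.api, ecr.dkr, and S3.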

Related

Github Action failing to Build Images for the plugins being used in workflow

I am trying to use a plugin in my EKS-based Kubernetes cluster.
I am using a GitHub Actions controller that spawns on-demand containers as self-hosted runners.
When the GitHub Action starts, this plugin, or any other that needs to build itself as a Docker image, fails with the error below. Any thoughts or ideas?
This is my self-hosted runner image: Link
FYI: if I run a standalone Alpine container in the cluster, all typical commands work, and this also works with the default Ubuntu-based self-hosted runner, so I don't think it's the cluster.
/usr/local/bin/docker build -t 60e226:1b6fc15462134e6fb8520b7df48cf7fd -f "/runner/_work/_actions/aquasecurity/trivy-action/master/Dockerfile" "/runner/_work/_actions/aquasecurity/trivy-action/master"
Sending build context to Docker daemon 644.6kB
Step 1/5 : FROM ghcr.io/aquasecurity/trivy:0.37.1
0.37.1: Pulling from aquasecurity/trivy
c158987b0551: Pulling fs layer
67a7d067ef7d: Pulling fs layer
67a7d067ef7d: Download complete
67a7d067ef7d: Pull complete
2ec1cdd48f38: Verifying Checksum
2ec1cdd48f38: Download complete
2ec1cdd48f38: Pull complete
fe56e6aa700e: Pull complete
Digest: sha256:7c167f7f3002948f1ec099555aa968bd8b8b097780603a38cc801fe965da0a69
Status: Downloaded newer image for ghcr.io/aquasecurity/trivy:0.37.1
---> c3e68408cd24
Step 2/5 : COPY entrypoint.sh /
---> 1f1da443ea86
Step 3/5 : RUN apk --no-cache add bash curl npm
---> Running in 647f7f479cac
fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
48ABC73BEB7F0000:error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1889:
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.17/main: Permission denied
fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
48ABC73BEB7F0000:error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1889:
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.17/community: Permission denied
ERROR: unable to select packages:
bash (no such package):
required by: world[bash]
curl (no such package):
required by: world[curl]
npm (no such package):
required by: world[npm]
The command '/bin/sh -c apk --no-cache add bash curl npm' returned a non-zero code: 3
Warning: Docker build failed with exit code 3, back off 6.807 seconds before retry.
It was expected to build the Docker image and proceed with the GitHub Actions workflow.
I tried different flavors of image and nothing worked except for ubuntu-latest.
The plugin in question:
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'test:latest'
    format: 'table'
    exit-code: '1'
    ignore-unfixed: true
    vuln-type: 'os,library'
    severity: 'CRITICAL,HIGH'
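Since the standalone Alpine container worked, a hedged way to narrow this down is to run the exact image from the build log as a pod and retry the failing apk step there; if it fails the same way, TLS is being intercepted on the network path rather than inside the runner image (the pod name is arbitrary):

# run the image the build log shows, and repeat the failing step
kubectl run trivy-apk-test --rm -it \
  --image=ghcr.io/aquasecurity/trivy:0.37.1 \
  --command -- sh -c 'apk --no-cache add bash curl npm'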

Unknown host when using localstack with Spring Cloud AWS 2.3

"ResourceLoader" with AWS S3 works fine with these properties:
cloud:
  aws:
    s3:
      endpoint: s3.amazonaws.com  # <-- custom endpoint added in spring cloud aws 2.3
    credentials:
      accessKey: XXXXXX
      secretKey: XXXXXX
    region:
      static: us-east-1
    stack:
      auto: false
However, when I bring up a localstack container locally and try to use it with these properties (as per this release doc: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available):
cloud:
  aws:
    s3:
      endpoint: http://localhost:4566
    credentials:
      accessKey: test
      secretKey: test
    region:
      static: us-east-1
    stack:
      auto: false
I get this exception:
17:12:12.130 [reactor-http-nio-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [23efd000-1] 500 Server Error for HTTP GET "/getresource/test"
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.localhost
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/getresource/test" [ExceptionHandlingWebHandler]
Stack trace:
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Caused by: java.net.UnknownHostException: mybucket.localhost
at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]
I can view my localstack bucket files otherwise fine in an S3 browser.
Here is the docker compose config for my localstack:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - EDGE_PORT=4566
      - SERVICES=lambda,s3
    ports:
      - '4566-4583:4566-4583'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is how I am reading a text file:
public class ResourceTransferManager {

    @Autowired
    ResourceLoader resourceLoader;

    public void resourceLoadingMethod() throws IOException {
        Resource resource = resourceLoader.getResource("s3://mybucket/index.txt");
        InputStream inputStream = resource.getInputStream();
        System.out.println("File content: " + IOUtils.toString(inputStream, StandardCharsets.UTF_8));
    }
}
By default, the S3 client builds a URL with the bucket name as a subdomain, and that is what causes this issue.
There are a couple of ways to address it:
1) With localstack, do not use the endpoint http://localhost:4566; use the standard-format endpoint http://s3.localhost.localstack.cloud:4566 instead. This hostname actually resolves via DNS to the localhost IP internally, so it works fine. (The only caveat is that it resolves using public DNS, so it either needs an internet connection or you will need to add hosts entries prefixed with the bucket name, for example 127.0.0.1 <yourexpectedbucketName>.s3.localhost.localstack.cloud in your hosts file.)
2) If you are using Docker, then instead of adding hosts entries you can create a network alias for your localstack container, like <yourexpectedbucketName>.s3.localhost.localstack.cloud (see the sketch after this list).
3) A better way is an extension of the first approach: instead of using aliases for each of your buckets (which may not always be feasible), you can spin up a local DNS container and use a wildcard DNS config there. See the simplified sample at https://gist.github.com/paraspatidar/c29e4adb172a5afc92852a57e621323d
(original reference: https://gist.github.com/NAR8789/92da076d0c35b434107fb4f4f198fd12)
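A minimal sketch of the alias approach (2) with the docker CLI; the network and container names here are assumptions (a compose project already creates a default network you can reuse instead), and the bucket name matches the mybucket example above:

# attach the localstack container to a user-defined network with
# aliases for both the endpoint host and the bucket subdomain
docker network create localnet
docker network connect \
  --alias s3.localhost.localstack.cloud \
  --alias mybucket.s3.localhost.localstack.cloud \
  localnet localstack

Other containers on that network can then use http://s3.localhost.localstack.cloud:4566 as the endpoint, and the virtual-host-style bucket URL resolves to the localstack container.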

How to use the postgres db on the windows-latest agent used in the azure pipeline?

I have a Java Maven project that I am building with an Azure pipeline with "windows-latest" as host, as it contains the correct Java 13 version. However, for the integration tests I need a postgres db, and the "windows-latest" agent contains a postgres db, see: link. But how can I use this? I tried to use it by including its service name in the Maven task as a service:
services:
  postgres: postgresql-x64-13
But then I get an error that it cannot find a service by that name.
I tried defining the db properties through env settings (see the yml below), and then it shows the error:
Caused by: java.net.ConnectException: Connection refused
I also tried running it through a script task via the docker-compose.yml in the root of the project that I use during development, but docker-compose throws an error saying it can't find the compose file; I also doubt this is the correct way.
So, can I use the postgres db on the Windows agent, and how?
My azure pipeline snippet:
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
  MAVEN_OPTS: "-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)"
  application_name: clearsky
  service_name: backend
  mygetUsername: myserUsername
  mygetPassword: mytoken
  SPRING_DATASOURCE_URL: jdbc:postgresql://localhost:5432/postgres
  SPRING_DATASOURCE_USER: postgres
  SPRING_DATASOURCE_PASSWORD: root
stages:
  - stage: create_artifact
    displayName: Create artifact
    jobs:
      - job: build
        displayName: Build, test and publish artifact
        steps:
          - task: Maven@3
            name: maven_package
            displayName: Maven package
            inputs:
              goals: "package"
              mavenPomFile: "backend/pom.xml"
              options: '--settings backend/.mvn/settings.xml -DmygetUsername=$(mygetUsername) -DmygetPassword=$(mygetPassword)'
              mavenOptions: "-Xmx3072m $(MAVEN_OPTS)"
              javaHomeOption: "JDKVersion"
              jdkVersionOption: "1.13"
              mavenAuthenticateFeed: true
On the Azure DevOps Windows agent, PostgreSQL is stopped and disabled by default.
Here is the configuration doc.
Property          Value
ServiceName       postgresql-x64-13
Version           13.2
ServiceStatus     Stopped
ServiceStartType  Disabled
You could try the following command to start PostgreSQL:
"C:\Program Files\PostgreSQL\13\bin\pg_ctl.exe" start -D "C:\Program Files\PostgreSQL\13\data" -w

Cloud Code for Visual Studio Code errors on Cloud Code: Deploy

I've been trying to set up Cloud Code with VSCode, and I've been running into problems when starting the deploy process with Cloud Code: Deploy.
I've tried deploying the samples, python-hello-world-1 as well as go-hello-world-1, to my Kubernetes cluster on GKE, but I always end up getting errors when the deploy process starts downloading packages:
Go Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 49869 --filename skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 49869
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- go-hello-world -> gcr.io/abx-lernende/go-hello-world:latest
Checking cache...
- go-hello-world: Not found. Building
Building [go-hello-world]...
Sending build context to Docker daemon 57.86kB
Step 1/8 : FROM golang:1.13
---> 6586e3d10e96
Step 2/8 : RUN go get -u -v github.com/go-delve/delve/cmd/dlv
---> Running in b75ce8e5dae9
github.com/go-delve/delve (download)
# cd .; git clone -- https://github.com/go-delve/delve /go/src/github.com/go-delve/delve
Cloning into '/go/src/github.com/go-delve/delve'...
fatal: unable to access 'https://github.com/go-delve/delve/': Failed to connect to github.com port 443: Connection refused
package github.com/go-delve/delve/cmd/dlv: exit status 128
failed to build: build failed: building [go-hello-world]: build artifact: unable to stream build output: The command '/bin/sh -c go get -u -v github.com/go-delve/delve/cmd/dlv' returned a non-zero code: 1
Exited with code 1.
Python Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 50185 --filename
skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 50185
Skaffold &{Version:v1.3.1 ConfigVersion:skaffold/v2alpha3 GitVersion: GitCommit:6ba887a42438d1da578a005cf550e618fee6dfb8 GitTreeState:clean BuildDate:2020-01-31T19:55:18Z GoVersion:go1.13.4 Compiler:gc Platform:windows/amd64}
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- python-hello-world -> Tags generated in 0s
gcr.io/abx-lernende/python-hello-world:latest
Checking cache...
- python-hello-world: Cache check complete in 6.0001ms
Not found. Building
Building [python-hello-world]...
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:3.8
---> efdecc2e377a
Step 2/7 : WORKDIR /app
---> Using cache
---> a131b81cad66
Step 3/7 : COPY requirements.txt .
---> Using cache
---> 4625ef1862bd
Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
---> Running in 4da23a158ae3
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f17ba9c9d60>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/flask/
I'm assuming this is because I am behind a corporate proxy. As countermeasures, I have explicitly configured VSCode, Git, pip, Go, and the Google Cloud SDK to use said proxy. On top of that, I set the Windows environment variables for the proxy, sadly without success.
Thanks!
You can configure docker to pass through proxy information into the containers by adding something like the following to your ~/.docker/config.json:
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128"
    }
  }
}
Docker will set the HTTP_PROXY/HTTPS_PROXY environment variables within the container which is picked up by many tools.
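If changing the daemon config is not an option: HTTP_PROXY and HTTPS_PROXY are also predefined build args in Docker, so as a hedged alternative (proxy address reused from the sample above) you can pass the proxy to a single build:

docker build \
  --build-arg HTTP_PROXY=http://192.168.1.12:3128 \
  --build-arg HTTPS_PROXY=http://192.168.1.12:3128 \
  -t go-hello-world .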

How to set up SonarQube with Docker using SaltStack, and how to use it from CI

This post contains some information about how we integrated SonarQube into our workflow, using Docker and SaltStack for Docker container configuration management.
It also contains the setup used with Gradle in Travis CI in order to execute analysis of code and analysis of pull requests on GitHub.
Also, if you see any improvements to this setup, please comment!
(If using Docker Compose, see https://github.com/SonarSource/docker-sonarqube. Feel free to maintain this answer here or copy it to a SCM.)
Requires Docker Engine 1.9
Setting up a SonarQube Server using Salt
Create this pillar file applicable for your SonarQube server:
sonar-qube:
  name: sonar-qube
  port: 9000
  version: <ENTER SOME VERSION>
  version_postgresql: <ENTER SOME VERSION>
  # Using a shared disk allows you to move the SonarQube container between different servers and still keep the data.
  host_storage_path: /some/shared/disk
Create this sonarqube.sls as your Docker State file.
(It requires you to have a network named sonarnet, configured in a state named sonarnet-config.)
{% set name = salt['pillar.get']('sonar-qube:name') %}
{% set port = salt['pillar.get']('sonar-qube:port') %}
{% set tag = salt['pillar.get']('sonar-qube:version') %}
{% set pg_tag = salt['pillar.get']('sonar-qube:version_postgresql') %}
{% set host_storage_path = salt['pillar.get']('sonar-qube:host_storage_path') %}

include:
  - <state file of the sonarnet-config network definition>

sonar-qube-image:
  dockerng.image_present:
    - name: sonarqube:{{tag}}

sonar-qube:
  dockerng.running:
    - name: {{name}}
    - image: sonarqube:{{tag}}
    - network_mode: sonarnet
    - port_bindings:
      - {{port}}:{{port}}
    - environment:
      - SONARQUBE_JDBC_URL: jdbc:postgresql://sonar-db:5432/sonar
    - binds:
      - {{host_storage_path}}/sonarqube/conf:/opt/sonarqube/conf
      - {{host_storage_path}}/sonarqube/data:/opt/sonarqube/data
      - {{host_storage_path}}/sonarqube/extensions:/opt/sonarqube/extensions
      - {{host_storage_path}}/sonarqube/lib/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    - require:
      - dockerng: sonarnet-config

sonar-db:
  dockerng.running:
    - image: postgres:{{pg_tag}}
    - network_mode: sonarnet
    - port_bindings:
      - 5432:5432
    - environment:
      - POSTGRES_USER: sonar
      - POSTGRES_PASSWORD: sonar
    - binds:
      - {{host_storage_path}}/postgresql:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - {{host_storage_path}}/postgresql/data:/var/lib/postgresql/data
    - require:
      - dockerng: sonarnet-config
Use regular salt to start your containers.
Once this SonarQube server is started, you should be able to reach the SonarQube web GUI.
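For example, a hedged sketch (the minion target is an assumption; the port comes from the pillar above): apply the state and check that the web GUI answers:

# apply the state on the target minion
salt 'sonar*' state.apply sonarqube

# then, from the SonarQube host, check that the web GUI responds
curl -sI http://localhost:9000/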
Execute automated analysis (with Gradle in Travis CI)
These bullets will be described one by one:
1) Enable the Gradle plugin
2) Create users at SonarQube and GitHub
3) Write a bash script that executes analysis
4) Invoke the bash script from Travis CI
1) Enable the Gradle plugin
Enable the plugin according to documentation at https://plugins.gradle.org/plugin/org.sonarqube
plugins {
  id "org.sonarqube" version "2.0.1"
}
2) Set up users in GitHub and Sonar
GitHub requires a user with write access (soon only read access?) to the repo. Create a sonar-ci user, add it to a team, and give the team write access to the repo. See this post: https://github.com/janinko/ghprb/issues/232#issuecomment-149649126 Then create an access token for that user; the access token must grant "Full control of private repositories".
Sonar requires a user that has permission to "Execute Analysis" and "Create Projects" under Global Permissions. It also needs permissions to "BROWSE", "SEE SOURCE CODE" and "EXECUTE ANALYSIS" under Project Permissions. Generate an access token for this user.
3) Write bash script
This script will do a full analysis and publish the result to the SonarQube web GUI when merged to the git branch master. This keeps track of how the code evolves over time. It will also analyze pull requests on GitHub and write its findings directly as review comments.
These env variables need to be set:
TRAVIS_* - set by Travis: see https://docs.travis-ci.com/user/environment-variables/
SONAR_TOKEN is the access token for the Sonar server
GITHUB_SONAR_TOKEN is the access token for the Sonar analysis user on GitHub
sonarqube.sh:
#!/bin/bash

SONAR_URL="https://sonar.example.com"

if [ -z "$SONAR_TOKEN" ] || [ -z "$GITHUB_SONAR_TOKEN" ]; then
  echo "Missing environment variable(s) for SonarQube. Make sure all environment variables are set."
  exit 1
fi

if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
  echo "Running SonarQube analysis for pull request nr $TRAVIS_PULL_REQUEST..."
  ./gradlew sonarqube \
    -Dsonar.host.url=$SONAR_URL \
    -Dsonar.login=$SONAR_TOKEN \
    -Dsonar.github.pullRequest=$TRAVIS_PULL_REQUEST \
    -Dsonar.github.repository=$TRAVIS_REPO_SLUG \
    -Dsonar.github.oauth=$GITHUB_SONAR_TOKEN \
    -Dsonar.analysis.mode=issues
elif [ "$TRAVIS_BRANCH" == "master" ]; then
  echo "Starting to publish SonarQube analysis results to $SONAR_URL"
  ./gradlew sonarqube \
    -Dsonar.host.url=$SONAR_URL \
    -Dsonar.login=$SONAR_TOKEN \
    -Dsonar.analysis.mode=publish
fi
4) Integrate from Travis CI
In the .travis.yml add:
after_success:
  - ./sonarqube.sh
before_cache:
  - rm -rf $HOME/.gradle/caches/*/gradle-sonarqube-plugin
cache:
  directories:
    - $HOME/.sonar
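One hedged gotcha: after_success invokes the script as ./sonarqube.sh, so make sure it is committed with the execute bit set (chmod +x sonarqube.sh), or call it as bash sonarqube.sh instead.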