Is it possible to use variables in a codeship-steps.yml file?

We currently use Codeship Pro to push Docker images to a private registry on AWS, as well as to deploy those images to an ECS cluster.
However, the codeship-steps.yml file hard-codes the AWS region being pushed to. For example:
- name: push_production
  service: app
  type: push
  image_name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/project/app-name
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  dockercfg_service: aws_generator
I would like to be able to switch this to deploy to a different AWS region without much effort. Hence the question:
Is it possible to use variables in a codeship-steps.yml file?
I know some of the properties can use a handful of built-in variables provided by Codeship (such as the {{.Timestamp}} value used for the image_tag property), but I don't know if, for example, values from an env_file can be used in the image_name, registry, and/or command properties of a step.
I'm imagining something like this...
codeship-steps.yml:
- name: push_production
  service: app
  type: push
  image_name: "123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com/project/app-name"
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: "https://123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com"
  dockercfg_service: aws_generator
... but that results in an "error parsing image name during push step: invalid reference format" on the push step.
I've tried simply not specifying the registry in the image_name...
image_name: project/app-name
... but I get a "Build Error: no basic auth credentials" on the push step. At this point, I'm running out of ideas.

Is it possible to use [environment] variables in a codeship-steps.yml file?
While the image_tag property can take advantage of Go templates, the same is not true for image_name, registry, or anything else. Those templating variables are a separate set that is accessible only during image_tag generation.
As for environment variables in general (CI environment variables or those defined in the service configs), these values can be used in codeship-steps.yml in a command step when passed through a shell command. For example:
- service: app
  command: echo The branch name is: $CI_BRANCH
Results in:
The branch name is: $CI_BRANCH
- service: app
  command: /bin/sh -c 'echo The branch name is: $CI_BRANCH'
Results in:
The branch name is: master
The first form prints the literal string because the step's command is executed directly rather than through a shell, so $CI_BRANCH is never expanded; wrapping the command in /bin/sh -c gives a shell the chance to expand the variable.
As for your 'no basic auth credentials' error message, it's possible that there's an issue with how you are retrieving the basic auth credentials for access to your image registry. If you are on a macOS device, I would recommend reviewing our documentation on how to generate Docker credentials.
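For the ECS deployment half of your setup, that shell-wrapping behavior is the usual escape hatch: a command step can read AWS_REGION from an env_file or the service's environment at runtime. A minimal sketch, assuming a service (here called awsdeployment) whose image includes the AWS CLI and whose environment provides credentials plus the hypothetical AWS_REGION, ECS_CLUSTER, and ECS_SERVICE variables:
- name: deploy_production
  service: awsdeployment
  tag: master
  # The shell expands the variables before the AWS CLI runs
  command: /bin/sh -c 'aws ecs update-service --cluster "$ECS_CLUSTER" --service "$ECS_SERVICE" --force-new-deployment --region "$AWS_REGION"'
The push step itself, however, cannot be parametrized this way, since image_name and registry do not go through a shell.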

Related

Load environment variables when running locally via serverless offline

I want to load env variables from a .env file when running locally. So here's my serverless.yaml file:
functions:
  health:
    handler: src/api/health.check
    name: ${self:provider.stackName}-health
    environment:
      USER_POOL_APP_CLIENT_ID: !Ref UserPoolClient
You see, it sets a user pool ID that gets created in the resources section as an environment variable on a Lambda. This works perfectly fine when deployed, as expected.
However, when I try to run it locally via serverless-offline, no matter how I set the env variables, via dotenv or manually, they seem to get overridden by serverless; in this case all I see is "[object Object]".
Other workflows that I have seen load all env variables from files, like below:
functions:
  health:
    handler: src/api/health.check
    name: ${self:provider.stackName}-health
    environment:
      USER_POOL_APP_CLIENT_ID: ${env:USER_POOL_APP_CLIENT_ID}
But wouldn't this require us to have the variables for all stages stored locally as files?
I was hoping to store only the dev version locally and have all the rest fetched from the serverless file itself, automatically via !Ref as shown at the beginning.
So, how do I prevent serverless from populating/polluting my env variables when I run locally, while sticking to the first format?
Or are there other better ways to handle this?
This happened to me with the new version of serverless-offline (v12.0.4).
My solution was to use: https://www.serverless.com/plugins/serverless-plugin-ifelse
See the example below:
serverlessIfElse:
  - If: '"${env:NODE_ENV}" == "development"'
    Set:
      functions.health.environment.USER_POOL_APP_CLIENT_ID: "${env:USER_POOL_APP_CLIENT_ID, ''}"
You can change it for your use case.
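For completeness, both plugins also need to be registered in serverless.yaml for the condition above to take effect; a minimal sketch, assuming serverless-offline is already part of your setup:
plugins:
  - serverless-plugin-ifelse
  - serverless-offline
With NODE_ENV set to development, the Set rule replaces the !Ref value only for local runs, so deployed stages keep resolving the user pool client from the resources section.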

Github actions incorrectly thinks variable is a secret and so does not set outputs

A step in my workflow file returns some IDs of EC2 instances in my AWS account, and I then set these IDs as a GitHub output to be used by other jobs in the workflow file.
I have done this in many workflows, and the step will return something like this:
["i-0d945b001544f2614","i-0b90ba69d37aad78c"]
However, in one workflow file GitHub is masking the IDs, as it thinks they are a secret for some reason, so it returns:
["i-***2d571abc6d7d***4ef","i-***186ce12c5cd8e744"]
Therefore I get this error message on the workflow job summary:
Skip output 'instanceIDs' since it may contain secret.
And so the other jobs in my workflow file that rely on this output will fail, as GitHub won't set the output.
I have tried to use base64 as suggested in this post, but I haven't been able to get that to work.
Is there any other workaround?
Recently, GitHub released a new feature: configuration variables in workflows.
Configuration variables allow you to store non-sensitive data as plain-text variables that can be reused across the workflows in your repository or organization.
You can define variables at the Organization, Repository, or Environment level based on your requirements.
These variables are accessible from the workflow via the vars context.
Example:
jobs:
  display-variables:
    runs-on: ${{ vars.RUNNER }}
    steps:
      - name: Use variables
        run: |
          echo "Repository variable : ${{ vars.REPOSITORY_VAR }}"
          echo "Organization variable : ${{ vars.ORGANIZATION_VAR }}"
In this example, we have the following configuration variables: RUNNER, REPOSITORY_VAR, ORGANIZATION_VAR. As opposed to repository secrets, the values of these variables won't be masked.
For more details, see Defining configuration variables for multiple workflows in the GitHub documentation.
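If you prefer to manage these from the command line, recent releases of the GitHub CLI also expose a variable subcommand; a minimal sketch (the variable name and value here are illustrative):
# Define a repository-level configuration variable
gh variable set REPOSITORY_VAR --body "example value"
# List configured variables to confirm it was created
gh variable list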

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an Environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions Environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step's output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 Bucket inside our AWS account. We can read the dynamically created S3 Bucket name using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are two crucial points here: First, we should use id inside the deployment step to define a step name we can easily reference inside our environment:url. Second, we need to define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create a variable called s3_url. You could replace the pulumi stack output bucketUrl with any other command or tool you use that returns your dynamic environment url.
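Note that the ::set-output workflow command has since been deprecated; on current runners the same step output is written to the GITHUB_OUTPUT file instead, e.g.:
echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"
The environment:url expression stays exactly the same either way.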
Also be sure to add an http:// or https:// prefix in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
      ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then directly references the variable containing our S3 url. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
There's also a fully working workflow embedded inside an example project.

What does Lock services in Azure Pipelines' task DockerCompose actually do?

I am learning to use Azure Pipelines for CI/CD. I read the official documentation and found that the Docker Compose task has an action called Lock services. I have no idea what this action actually does and what it means by locking the images.
Can anyone explain it to me, or provide some examples of when and how to use it?
The source code of this task is public, so you can check this page to analyze what exactly this command does.
An image has two different identifiers: a tag and a digest. Now, let's consider one scenario:
Most of the time, a tagged image in a container registry is mutable, so with appropriate permissions you or anyone else can update, push, or delete an image with the same tag in that registry. As a result, when you deploy an image to a production environment, you cannot be sure that the image with a specific tag has not been overwritten and is still the one you want to deploy.
In that situation, a digest is the better choice, because a digest is a SHA256 hash calculated from the image content that identifies it uniquely. Any change to the image changes the corresponding SHA256 value as well.
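To make the tag-vs-digest distinction concrete, here's a small illustration with plain Docker commands (the digest placeholder below stands in for a real sha256 value from your registry):
# Show the digests of images pulled locally
docker images --digests
# Pull an image pinned to an immutable digest instead of a mutable tag
docker pull redis@sha256:<digest-from-the-listing>
A pull by digest always returns exactly the same image bytes, no matter what the tag now points to.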
Explanation of this action:
Check this code line (defined here). Its logic is to read out the image(s) used in the docker-compose.yml file, pull them, and resolve a digest for each. Next, a new docker-compose.yml file is generated automatically, in which each image is specified by digest.
Sample:
The task definition I used:
- task: DockerCompose@0
  displayName: 'Lock services'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: {service connection name}
    dockerComposeFile: 'Docker/docker-compose.yml'
    action: 'Lock services'
    removeBuildOptions: true
The docker-compose.yml:
version: '3'
services:
web:
image: xxxx/testwebapp
ports:
- "1983:80"
newsfeed:
image: xxx/merlin
redis:
image: redis
See the build log of this task, and the contents of the new docker-compose.yml it generated (listed by using the cat xxx command):
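The generated file pins every image by digest; a hypothetical sketch of its shape (the sha256 values are placeholders for the real digests the task resolves):
version: '3'
services:
  web:
    image: xxxx/testwebapp@sha256:<digest>
    ports:
      - "1983:80"
  newsfeed:
    image: xxx/merlin@sha256:<digest>
  redis:
    image: redis@sha256:<digest>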
Now, when you deploy the images to production, just use the new docker-compose.yml file the task generated automatically. This guarantees that the deployed image is the version you built at the beginning, even if someone overwrites the tag later.

How to configure a custom resource type in a concourse pipeline?

I've already done a Google search to find a way to set up a custom resource in a Concourse pipeline, but the answers/documentation do not work.
Can someone provide a working example of a custom resource type that is pulled from a local registry and used in a build plan?
For example, say I were to clone the git resource, slightly modify it, and push it to my local registry.
The git resource image would be named localhost:5000/local_git:latest.
How would you be able to use this custom resource (local_git:latest) in a pipeline definition?
There are two main settings to consider here when running a local registry:
You must use insecure_registries:
insecure_registries: ["my.local.registry:8080"]
If you are running your registry on "localhost", you shouldn't use localhost as the registry address; if you do, the Docker image will resolve localhost to the container itself instead of your local machine. To avoid this problem, use the IP address of your local machine (DON'T use 127.0.0.1).
You can define your custom resource type in your pipeline under the resource_types key in the pipeline yml.
Eg:
resource_types:
  - name: custom-git
    type: docker-image
    source:
      repository: localhost:5000/local_git
An important note is that custom resource type images are fetched in a manner identical to using a base resource in your pipeline, so for your case of a private Docker registry, you will just need to configure the necessary source: on the docker-image resource (See the docs for the docker-image-resource)
You can then use the type for resources as you would any of the base types:
resources:
  - name: some-custom-git-resource
    type: custom-git
    source: ...
Note the type: key of the resource matches the name: on the resource type.
Take a look at the Concourse Documentation for Configuring Resource Types for more information on how to use custom types in your pipeline.
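To round this out, the resource is then consumed in a job's build plan exactly like a base resource would be; a minimal sketch (the job name is hypothetical):
jobs:
  - name: build-something
    plan:
      # Fetch the custom resource and trigger the job on new versions
      - get: some-custom-git-resource
        trigger: true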