CircleCI cannot find Serverless Framework after serverless installation

I'm trying to use Serverless Compose to deploy multiple services to AWS via CircleCI. I have 3 test services for a POC, and so far deploying these to a personal AWS account from the terminal works just fine. However, when I configure it to go through CircleCI with a config.yml file, I get this error:
Could not find the Serverless Framework CLI installation. Ensure Serverless Framework is installed before continuing.
I'm puzzled because my config.yml file looks like this:
version: 2.1

orbs:
  aws-cli: circleci/aws-cli@3.1.1
  serverless-framework: circleci/serverless-framework@2.0.0
  node: circleci/node@5.0.2

jobs:
  deploy:
    parameters:
      stage:
        type: string
    executor: serverless-framework/default
    steps:
      - checkout
      - aws-cli/install
      - serverless-framework/setup
      - run:
          command: serverless config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
          name: Configure serverless
      - run:
          command: npm install @serverless/compose
          name: Install @serverless/compose
      - run:
          command: serverless deploy --stage << parameters.stage >>
          name: Deploy staging

workflows:
  deploy-staging:
    jobs:
      - node/test:
          version: 17.3.0
      - deploy:
          context: aws-*******-developers
          name: ******-sandbox-use1
          stage: staging
The Serverless Framework is set up and the orb is present, yet the CLI reportedly can't be found. All steps succeed until Deploy staging. I've been digging through the documentation but can't find where it's going wrong with CircleCI. Does anyone know what I may be missing?

Turns out this required a somewhat counterintuitive fix: it's best to remove the following:
The orb serverless-framework: circleci/serverless-framework@2.0.0
The setup step in the job - serverless-framework/setup
The Configure serverless step
Once these are removed, modify the Install @serverless/compose step to run npm install and install all the packages. Then run npx serverless deploy instead of serverless deploy. This fixed the problem for me.
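For reference, a minimal sketch of the trimmed job after those changes. It assumes serverless and @serverless/compose are declared in package.json (so npx resolves the local binary), the node orb's default executor replaces the serverless-framework one, and the AWS credentials come from the context's environment variables:

version: 2.1

orbs:
  aws-cli: circleci/aws-cli@3.1.1
  node: circleci/node@5.0.2

jobs:
  deploy:
    parameters:
      stage:
        type: string
    executor: node/default
    steps:
      - checkout
      - aws-cli/install
      - run:
          # Installs serverless and @serverless/compose from package.json
          command: npm install
          name: Install dependencies
      - run:
          # npx runs the locally installed serverless binary
          command: npx serverless deploy --stage << parameters.stage >>
          name: Deploy staging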

Related

Google Cloud, GitHub pipe with Google Cloud Run

I'm trying to set up a deploy pipeline with GitHub and Google Cloud using Cloud Run, because I'm using Docker containers on the server. This is my GitHub Actions workflow code:
name: Build and Deploy to Cloud Run

on:
  push:
    branches:
      - master

env:
  PROJECT_ID: ${{ secrets.RUN_PROJECT }}
  RUN_REGION: us-west2-a
  SERVICE_NAME: helloworld-python

jobs:
  setup-build-deploy:
    name: Setup, Build, and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      # Setup gcloud CLI
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - uses: google-github-actions/setup-gcloud@v0
        with:
          version: '390.0.0'
          service_account_email: ${{ secrets.ACC_MAIL }}
          service_account_key: ${{ secrets.RUN_SA_KEY }}
          project_id: ${{ secrets.RUN_PROJECT }}

      # Build and push image to Google Container Registry
      - name: Build
        run: |-
          gcloud builds submit \
            --quiet \
            --tag "gcr.io/$PROJECT_ID/$SERVICE_NAME:$GITHUB_SHA"

      # Deploy image to Cloud Run
      - name: Deploy
        run: |-
          gcloud run deploy "$SERVICE_NAME" \
            --quiet \
            --region "$RUN_REGION" \
            --image "gcr.io/$PROJECT_ID/$SERVICE_NAME:$GITHUB_SHA" \
            --platform "managed" \
            --allow-unauthenticated
Everything seems to be "correct", but the moment I run the workflow, this error appears:
ERROR: (gcloud.builds.submit) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
The project ID is in the RUN_PROJECT secret; I don't know what else to do.
Is there any problem that is keeping this from working?
Edit: changing the version to 390.0.0 worked, but now I'm receiving this error:
ERROR: (gcloud.builds.submit) Invalid value for [source]: Dockerfile required when specifying --tag
For the first error:
ERROR: (gcloud.builds.submit) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
the gcloud CLI has not been properly configured with credentials for your project.
According to the Authorization section of google-github-actions/setup-gcloud:
This action installs the Cloud SDK (gcloud). To configure its authentication to Google Cloud, use the google-github-actions/auth action.
So, you need to configure it for authorization using any one of the supported methods there.
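For example, a rough sketch using the service account key you already store as a secret (the action versions here are assumptions, not taken from your workflow):

- uses: google-github-actions/auth@v1
  with:
    # Reuses the JSON key already stored as RUN_SA_KEY
    credentials_json: ${{ secrets.RUN_SA_KEY }}

- uses: google-github-actions/setup-gcloud@v1
  with:
    project_id: ${{ secrets.RUN_PROJECT }}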
For your second error:
ERROR: (gcloud.builds.submit) Invalid value for [source]: Dockerfile required when specifying --tag
a Dockerfile is missing: when you pass --tag, gcloud builds submit expects a Dockerfile in the source directory. To build with a Dockerfile at a different path, use a build config (cloudbuild.yaml) that runs docker build -f instead.
See this relevant SO thread for more details:
Specify Dockerfile for gcloud build submit
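For instance, a hedged sketch of such a build config (the Dockerfile path and image tag are placeholders based on the workflow above):

steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/helloworld-python'
      - '-f'
      - 'path/to/Dockerfile'
      - '.'
# Pushes the built image to the registry after the steps finish
images:
  - 'gcr.io/$PROJECT_ID/helloworld-python'

You would then submit it with gcloud builds submit --config=cloudbuild.yaml . instead of --tag.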

Gcloud build trigger environment variable substitution in app.yaml for appEngine

I am trying to substitute variables in app.yaml with a Cloud Build trigger.
I added substitution variables in the build trigger.
I added environment variables to app.yaml in a way that they can be easily substituted with build trigger variables, like this:
env_variables:
  SECRET_KEY: %SECRET_KEY%
Then I added a step in cloudbuild.yaml to substitute all %XXX% variables inside app.yaml with their values from the build trigger.
steps:
  - name: node:10.15.1
    entrypoint: npm
    args: ["install"]
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: bash
    args:
      - '-c'
      - |
        sed -i 's/%SESSION_SECRET%/'${_SESSION_SECRET}'/g' app.yaml
timeout: "1600s"
The problem is that Cloud Build throws an exception:
Already have image (with digest): gcr.io/cloud-builders/gcloud
bash: _L/g: No such file or directory
Why? How can I make this substitution in my app.yaml work?
The app.yaml is at the root of the project, at the same level as cloudbuild.yaml.
UPDATED
I am trying to build and debug the build locally with this command:
sudo cloud-build-local --config=cloudbuild.yaml --write-workspace=../workspace --dryrun=false --substitutions=_SESSION_SECRET=test --push .
When I look into the app.yaml file, the substitution worked as expected and there was no exception at all.
What is different about the Cloud Build environment?
OK, I finally decided to use GitHub Actions instead of Google Cloud Build triggers, since the triggers weren't able to find their own app.yaml and manage the environment variables by themselves.
Here is how to do it:
My environment :
App engine,
standard (not flex),
Nodejs Express application,
a PostgreSQL CloudSql
First, the setup:
1. Create a new Google Cloud project (or select an existing project).
2. Initialize your App Engine app with your project.
3. Create a Google Cloud service account or select an existing one.
4. Add the following Cloud IAM roles to your service account:
   App Engine Admin - allows for the creation of new App Engine apps
   Service Account User - required to deploy to App Engine as the service account
   Storage Admin - allows upload of source code
   Cloud Build Editor - allows building of source code
5. Download a JSON service account key for the service account.
6. Add the following secrets to your repository's secrets:
   GCP_PROJECT: Google Cloud project ID
   GCP_SA_KEY: the downloaded service account key
The app.yaml:
runtime: nodejs14
env: standard

env_variables:
  SESSION_SECRET: $SESSION_SECRET

beta_settings:
  cloud_sql_instances: SQL_INSTANCE
Then the GitHub Action:
name: Build and Deploy to GKE
on: push

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  DATABASE_URL: ${{ secrets.DATABASE_URL }}

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '12'
      - run: npm install
      - uses: actions/checkout@v1
      - uses: ikuanyshbekov/app-yaml-env-compiler@v1.0
        env:
          SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
      - shell: bash
        run: |
          sed -i 's/SQL_INSTANCE/'${{ secrets.DATABASE_URL }}'/g' app.yaml
      - uses: actions-hub/gcloud@master
        env:
          PROJECT_ID: ${{ secrets.GKE_PROJECT }}
          APPLICATION_CREDENTIALS: ${{ secrets.GCLOUD_AUTH }}
          CLOUDSDK_CORE_DISABLE_PROMPTS: 1
        with:
          args: app deploy app.yaml
To add secrets for a GitHub Action, go to Settings/Secrets in your repository.
Note that I could have handled all of the substitution with the bash script alone, which would remove the dependency on the GitHub project ikuanyshbekov/app-yaml-env-compiler@v1.0.
It's a shame that GAE doesn't offer an easier way to handle environment variables for app.yaml. I didn't want to use KMS, since I also need to update the beta_settings/cloud_sql_instances value; I really needed to substitute everything in the app.yaml.
This way I can make a specific action for the right environment and manage the secrets.
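For illustration, a sed-only sketch of that idea, replacing the compiler action with a plain bash step (the placeholder name matches the app.yaml above; treat it as untested):

- shell: bash
  run: |
    # Replace the literal $SESSION_SECRET placeholder in app.yaml
    sed -i 's/\$SESSION_SECRET/'"${{ secrets.SESSION_SECRET }}"'/g' app.yaml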
The entrypoint should be an executable, use /bin/bash or /bin/sh.
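Applied to the failing step from the question, a sketch of the fix could look like this (same sed command, only the entrypoint changes):

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: /bin/bash
  args:
    - '-c'
    - |
      sed -i 's/%SESSION_SECRET%/'${_SESSION_SECRET}'/g' app.yaml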
How to inspect inside the image (in general):
$ docker pull gcr.io/cloud-builders/gcloud
Using default tag: latest
latest: Pulling from cloud-builders/gcloud
...
$ docker images
REPOSITORY                     TAG      IMAGE ID       CREATED             SIZE
gcr.io/cloud-builders/gcloud   latest   8499764c4ef6   About an hour ago   4.01GB
$ docker run -ti --entrypoint '/bin/bash' 8499764c4ef6
root@60354dfb588a:/#
You can test your commands from there without having to send them to Cloud Build each time.

gcloud beta command in build step in cloudbuild.yaml. Should I use entrypoint or args?

I'm trying to build and deploy a Docker image to Cloud Run. And I'd like to set min-instances=1 so I can avoid cold starts.
I'm building and deploying it using Cloud Build through the gcloud CLI.
So this was my first attempt from the gcloud CLI:
gcloud builds submit . --config=./cloudbuild.yaml
And here are the build steps that are described in my cloudbuild.yaml:
steps:
  # STEP_1: DOCKER BUILDS IMAGE
  # STEP_2: DOCKER PUSHES IMAGE TO CLOUD REGISTRY
  # STEP_3: GCLOUD SHOULD DEPLOY TO CLOUD RUN (DESCRIBED BELOW)
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "my-service"
      - "--image=gcr.io/$PROJECT_ID/my-image"
      - "--platform=managed"
      - "--region=us-central1"
      - "--min-instances=1"
You can see that build STEP_3 runs: gcloud run deploy my-service ... --min-instances=1
And I'm getting the following error:
The `--min-instances` flag is not supported in the GA release track on the fully managed version of Cloud Run. Use `gcloud beta` to set `--min-instances` on Cloud Run (fully managed).
So I guess I'll have to use gcloud beta commands. But I have some questions in that case:
Do I also need to add the beta command to my gcloud builds submit . command?
And how should I set it in cloudbuild.yaml? Do I add it to the entrypoint or as an argument in args?
OPTION #1
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: "gcloud beta"
  args:
    - "run"
    # ETC
OPTION #2
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
    - "beta"
    - "run"
    # ETC
There is no hidden reason to prefer one over the other. Put beta under args (option #2): all of the elements are concatenated into a single command string anyway.
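Concretely, a sketch of STEP_3 on the beta track (the flags are copied from the question):

- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
    - "beta"
    - "run"
    - "deploy"
    - "my-service"
    - "--image=gcr.io/$PROJECT_ID/my-image"
    - "--platform=managed"
    - "--region=us-central1"
    - "--min-instances=1"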

How to cache docker-compose build inside github-action

Is there any way to cache docker-compose builds so that the images are not rebuilt again and again?
Here is my action workflow file:
name: Github Action
on:
  push:
    branches:
      - staging

jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Bootstrap app on Ubuntu
        uses: actions/setup-node@v1
        with:
          node-version: '12'
      - name: Install global packages
        run: npm install -g yarn prisma
      - name: Install project deps
        if: steps.cache-yarn.outputs.cache-hit != 'true'
        run: yarn
      - name: Build docker-compose
        run: docker-compose -f docker-compose.test.prisma.yml up --build -d
I want to cache the Docker build step. I tried gating the build with if: steps.cache-docker.outputs.cache-hit != 'true', but it didn't work.
What you are referring to is called "docker layer caching", and it is not yet natively supported in GitHub Actions.
This is discussed extensively in several places, like:
Cache docker image forum thread
Cache a Docker image built in workflow forum thread
Docker caching issue in actions/cache repository
As mentioned in the comments, there are some 3rd party actions that provide this functionality (like this one), but for such a core and fundamental feature, I would be cautious with anything that is not officially supported by GitHub itself.
For those arriving here via Google, this is now "supported", or at least it works: https://github.community/t/use-docker-layer-caching-with-docker-compose-build-not-just-docker/156049.
The idea is to build the images using docker (and its cache) and then use docker compose to run (up) them.
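A minimal sketch of that pattern (the image name is a placeholder), assuming Buildx with the GitHub Actions cache backend:

- uses: docker/setup-buildx-action@v2

- uses: docker/build-push-action@v4
  with:
    context: .
    # Load the built image into the local Docker engine so compose can use it
    load: true
    tags: username/imagename:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max

# The compose file references image: username/imagename:latest,
# so `up` reuses the image built above instead of rebuilding it
- run: docker compose up -d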
If you're using docker/bake-action or docker/build-push-action and want to access a cached image in subsequent steps:
Use load: true to save the image.
Use the same image name as the cached image across steps in order to skip rebuilds.
Example:
...
- name: Build and push
  uses: docker/bake-action@master
  with:
    push: false
    load: true
    set: |
      web.cache-from=type=gha
      web.cache-to=type=gha

- name: Test via compose
  run: docker compose run web tests
...
services:
  web:
    build:
      context: .
    image: username/imagename
    command: echo "Test run successful!"
See the Docker team's responses:
How to access the bake-action cached image in subsequent steps?
How to use this plugin for a docker-compose?
How to share layers with Docker Compose?
Experiment on caching docker compose images in GitHub Actions

Gitlab + GKE + Gitlab CI unable to clone Repository

I'm trying to use GitLab CI with a GKE cluster to execute pipelines. I have experience using the Docker runner, but GKE is still pretty new to me. Here's what I did:
Create a GKE cluster via Project settings in GitLab.
Install Helm Tiller via GitLab Project settings.
Install GitLab Runner via GitLab Project settings.
Create .gitlab-ci.yml with the following content:
before_script:
  - php -v

standard:
  image: falnyr/php-ci-tools:php-cs-fixer-7.0
  script:
    - php-cs-fixer fix --diff --dry-run --stop-on-violation -v --using-cache=no

lint:7.1:
  image: falnyr/php-ci:7.1-no-xdebug
  script:
    - composer build
    - php vendor/bin/parallel-lint --exclude vendor .
  cache:
    paths:
      - vendor/
Push a commit to the repository.
The pipeline output is the following:
Running with gitlab-runner 10.3.0 (5cf5e19a)
on runner-gitlab-runner-666dd5fd55-h5xzh (04180b2e)
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image falnyr/php-ci:7.1-no-xdebug ...
Waiting for pod gitlab-managed-apps/runner-04180b2e-project-5-concurrent-0nmpp7 to be running, status is Pending
Running on runner-04180b2e-project-5-concurrent-0nmpp7 via runner-gitlab-runner-666dd5fd55-h5xzh...
Cloning repository...
Cloning into '/group/project'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@git.domain.tld/group/project.git/': The requested URL returned error: 403
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
Now I think I should add a gitlab-ci-token user with a password somewhere, but I'm not sure if it is supposed to work like this.
Thanks!
After reading more about the topic, it seems that pipelines should be executed via HTTPS only (not SSH).
I enabled HTTPS communication, and when I execute the pipeline as a user who is a member of the project, it works without a problem (an admin who is not added to the project gets this error).