Argo Workflows - How to share files/directories between containers/tasks? - kubernetes

I have a workflow that runs a DAG with 2 tasks.
Each task clones a repo.
I want the repo cloned by the first task to be available to the second task, which runs in another container.
How can I share the repos between them? Using a persistent volume, artifacts, outputs?
kind: Workflow
metadata:
  name: create-configmap-dag-workflow
spec:
  entrypoint: create-configmap-dag
  templates:
    - name: create-configmap-dag
      dag:
        tasks:
          - name: automation-npm-packages
            template: run-shell-command
            arguments:
              parameters:
                - name: cmd
                  value: git clone https://github.com/org/pulumi-automation.git && cd automation && npm install
          - name: campaign-update-npm-packages
            template: run-shell-command
            depends: automation-npm-packages.Succeeded
            arguments:
              parameters:
                - name: cmd
                  value: git clone https://github.com/org/campaign-update.git && cd campaign-update/infra && npm install
    - name: run-shell-command
      inputs:
        parameters:
          - name: cmd
      container:
        image: amazonaws.com/jenkins-slave:ecs-global-node_master-3
        command: [ "sh", "-c" ]
        args: [ "{{inputs.parameters.cmd}}" ]

Related

GitHub Actions workflow error: You have an error in your yaml syntax

I am trying to deploy to Google App Engine using GitHub Actions, and my YAML config is as follows:
name: "Deploy to GAE"
on:
push:
branches: [production]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v2
- name: Install Dependencies
run: composer install -n --prefer-dist
- name: Generate key
run: php artisan key:generate
- name: GCP Authenticate
uses: GoogleCloudPlatform/github-actions/setup-gcloud#master
with:
version: "273.0.0"
service_account_key: ${{ secrets.GCP_SA_KEY }}
- name: Set GCP_PROJECT
env:
GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
run: gcloud --quiet config set project ${GCP_PROJECT}
- name: Deploy to GAE
run: gcloud app deploy app.yaml
and GitHub Actions is throwing the error below:
Invalid workflow file: .github/workflows/main.yml#L10
You have an error in your yaml syntax on line 10
FYI, line 10 is - uses: actions/checkout@v2
The steps indentation level is incorrect; it should be nested inside the deploy job:
name: "Deploy to GAE"
on:
push:
branches: [production]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v2
- name: Install Dependencies
run: composer install -n --prefer-dist
- name: Generate key
run: php artisan key:generate
- name: GCP Authenticate
uses: GoogleCloudPlatform/github-actions/setup-gcloud#master
with:
version: "273.0.0"
service_account_key: ${{ secrets.GCP_SA_KEY }}
- name: Set GCP_PROJECT
env:
GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
run: gcloud --quiet config set project ${GCP_PROJECT}
- name: Deploy to GAE
run: gcloud app deploy app.yaml

How to run a Checkov scan on a Terraform plan

I would like to have Checkov scan the Terraform plan output, but I am not having any success with that. Below is my code in terragrunt.hcl, my GitHub Actions workflow, and the message I got when my workflow completed. I have tried a few methods to get this working, but I am still unable to configure it correctly so that Checkov can analyse the JSON output of terraform plan. I would appreciate any help with this. Thank you in advance for your assistance.
terragrunt.hcl
terraform {
  after_hook "after_hook_plan" {
    commands = ["plan"]
    execute  = ["sh", "-c", "terraform show -json tfplan.binary > ${get_parent_terragrunt_dir()}/plan.json"]
  }
}
GitHub Actions workflow
name: 'Checkov Security Scan'
on:
  push:
    branches:
      - test
jobs:
  Terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ${{ env.tf_working_dir }}
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: us-east-1
          role-to-assume: ${{ env.dev_role_arn }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1.3.2
        with:
          terraform_version: ${{ env.tf_version }}
          terraform_wrapper: true
      - name: Setup Terragrunt
        uses: autero1/action-terragrunt@v1.1.0
        with:
          terragrunt_version: ${{ env.tg_version }}
      - name: Init
        id: init
        run: |
          terragrunt run-all init --terragrunt-non-interactive
      - name: Plan
        id: plan
        run: |
          terragrunt run-all plan -out=tfplan.binary -no-color --terragrunt-non-interactive
      - name: 'Test Plan (Checkov)'
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./applied/test/
          quiet: false # optional: display only failed checks
          framework: terraform # optional: run only on a specific infrastructure {cloudformation,terraform,kubernetes,all}
          output_format: json # optional: the output format, one of: cli, json, junitxml, github_failed_only
Checkov output message
{
  "passed": 0,
  "failed": 0,
  "skipped": 0,
  "parsing_errors": 0,
  "resource_count": 0,
  "checkov_version": "2.0.706"
I guess the Checkov action doesn't support scanning a Terraform plan directly; however, you can try converting the plan to JSON and scanning that:
- name: Terraform Plan
  id: plan
  if: github.event_name == 'pull_request'
  run: terraform plan --out tfplan.binary -no-color
  continue-on-error: true
- name: Terraform Show
  id: show
  run: terraform show -json tfplan.binary | jq '.' > tfplan.json
- name: Set up Python 3.8
  uses: actions/setup-python@v1
  with:
    python-version: 3.8
  id: setup_py
- name: Install Checkov
  id: checkov
  run: |
    python3 -m pip install --upgrade pip
    pip3 install checkov
  continue-on-error: true
- name: Run Checkov
  id: run_checkov
  run: checkov -f tfplan.json -o sarif -s
  continue-on-error: true
- name: Upload SARIF file
  id: upload_sarif
  uses: github/codeql-action/upload-sarif@v1
  with:
    sarif_file: results.sarif
    category: checkov
  continue-on-error: true

Injecting variables into my Next.js app using Kubernetes

I'm trying to generate some env variables when deploying my code with Kubernetes. What I'm trying to do is generate a ConfigMap to hold my variables, but it's not working.
I'm using Azure Pipelines to do my build and publish steps.
Dockerfile:
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package.json .
COPY . .
RUN npm cache clean --force
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
My azure-pipelines.yml:
stages:
  # Build Dev
  - stage: BuildDev
    displayName: Build and Push Dev
    jobs:
      - job: Development
        displayName: Build and Push Dev
        timeoutInMinutes: 0
        pool:
          vmImage: ubuntu-18.04
        steps:
          - checkout: self
          - task: Docker@1
            displayName: Build Image
            inputs:
              azureSubscriptionEndpoint: my-subscription
              azureContainerRegistry: my-container-registry
              command: build
              imageName: tenant/front/dev:$(Build.BuildId)
              includeLatestTag: true
              buildContext: '**'
          - task: Docker@1
            displayName: Push Image
            inputs:
              azureSubscriptionEndpoint: my-subscription
              azureContainerRegistry: my-container-registry
              command: push
              imageName: tenant/front/dev:$(Build.BuildId)
              buildContext: '**'
  # Deploy Dev
  - stage: DeployDev
    displayName: Deploy Dev
    jobs:
      - deployment: Deploy
        displayName: Deploy Dev
        timeoutInMinutes: 0
        pool:
          vmImage: ubuntu-18.04
        environment: Development-Front
        strategy:
          runOnce:
            deploy:
              steps:
                - task: Kubernetes@1
                  displayName: 'kubectl apply'
                  inputs:
                    kubernetesServiceEndpoint: 'AKS (standard subscription)'
                    command: apply
                    useConfigurationFile: true
                    configurationType: inline
                    inline: |
                      apiVersion: apps/v1beta1
                      kind: Deployment
                      metadata:
                        name: $(appNameDev)
                        labels:
                          app: $(appNameDev)
                      spec:
                        replicas: 1
                        selector:
                          matchLabels:
                            app: $(appNameDev)
                        template:
                          metadata:
                            labels:
                              app: $(appNameDev)
                          spec:
                            containers:
                              - name: $(appNameDev)
                                image: tenant/front/dev:$(Build.BuildId)
                                imagePullPolicy:
                                env:
                                  - name: NEXT_PUBLIC_APP_API
                                    value: development
                                ports:
                                  - name: http
                                    containerPort: 80
                                    protocol: TCP
                                volumeMounts:
                                  - name: environment-variables
                                    mountPath: /usr/src/app/.env
                                    readOnly: true
                            volumes:
                              - name: environment-variables
                                configMap:
                                  name: environment-variables
                                  items:
                                    - key: .env
                                      path: .env
                      ---
                      apiVersion: v1
                      kind: Service
                      metadata:
                        name: $(appNameDev)
                        labels:
                          app: $(appNameDev)
                      spec:
                        type: LoadBalancer
                        ports:
                          - port: 80
                            targetPort: 80
                            protocol: TCP
                            name: http
                        selector:
                          app: $(appNameDev)
                      ---
                      apiVersion: v1
                      kind: ConfigMap
                      metadata:
                        name: environment-variables
                      data:
                        .env: |
                          NEXT_PUBLIC_APP_API=development
                          API=http://another.endpoint.com/serverSide
When I try to access this NEXT_PUBLIC_APP_API variable, I receive undefined. In my next.config.js, I'm exporting the variable as publicRuntimeConfig.
If you are using GitHub Actions, the first thing is to add a step to your image build process to include dynamic variables:
- name: Create variables
  id: vars
  run: |
    branch=${GITHUB_REF##*/}
    echo "API_URL=API_${branch^^}" >> $GITHUB_ENV
    echo "APP_ENV=APP_${branch^^}" >> $GITHUB_ENV
    echo "BASE_URL=BASE_${branch^^}" >> $GITHUB_ENV
    sed -i "s/GIT_VERSION/${{ github.sha }}/g" k8s/${branch}/api-deployment.yaml
The second step is to build the Docker image with extra arguments. If you are using another CI, just add the variables directly in the build args as below:
--build-arg PROD_ENV=NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL]}}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}
Pay attention to the \n used to break lines, so that Docker understands you are sending multiple variables to the build process.
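For reference, the full build command in a GitHub Actions step could look roughly like this (a sketch only: the step name, image name, and tag are placeholders, while the secret lookups reuse the variables created in the first step):

- name: Build image
  run: |
    # The literal \n sequences are expanded later by the printf call in the Dockerfile below.
    docker build \
      --build-arg PROD_ENV="NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL] }}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}" \
      -t my-registry/my-app:${{ github.sha }} .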
The last thing is to add the extra args inside the Dockerfile
# Install dependencies only when needed
FROM node:16.13.0-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Rebuild the source code only when needed
FROM node:16.13.0-alpine AS builder
ARG PROD_ENV=""
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN printf "$PROD_ENV" >> .env.production
RUN yarn build
# Production image, copy all the files and run next
FROM node:16.13.0-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/.env* ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
RUN npx next telemetry disable
CMD ["yarn", "start"]
I send PROD_ENV as an extra arg and then build a .env.production file on the fly with the required values.
Mark as answer if it helps you

GitHub Action: [!] Error: Cannot find module 'rollup-plugin-commonjs'

My package.json includes rollup and rollup-plugin-commonjs,
but inside GitHub Actions those packages cannot be found!
If I do not add rollup to the global package installation step of the GitHub Action, it reports that rollup is not found. But after adding both rollup and rollup-plugin-commonjs, I get [!] Error: Cannot find module 'rollup-plugin-commonjs'
This is my workflow file:
name: Github Action
on:
  push:
    branches:
      - fix/auto-test
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Bootstrap app on Ubuntu
        uses: actions/setup-node@v1
        with:
          node-version: '11.x.x'
      - name: Install global packages
        run: npm install -g prisma rollup rollup-plugin-commonjs
      - name: Get yarn cache directory path
        id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: Cache Project dependencies test
        uses: actions/cache@v1
        id: yarn-cache
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install project deps
        if: steps.yarn-cache.outputs.cache-hit != 'true'
        run: yarn
      - name: Run docker
        run: docker-compose -f docker-compose.test.prisma.yml up --build -d
      - name: Sleep
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
      - name: Reset the database for safety
        run: yarn reset:backend
      - name: Deploy
        run: yarn deploy:backend
      - name: Build this great app
        run: yarn build
      - name: start app and worker concurrently and create some instances
        run: |
          yarn start &
          yarn start:worker &
          xvfb-run --auto-servernum yarn test:minimal:runner
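One thing worth trying (a sketch only, assuming the project's build script itself invokes the locally installed rollup, e.g. rollup -c): drop rollup and its plugin from the global install step and let the build resolve them from the project's own node_modules, since a project build typically cannot resolve globally installed plugins:

- name: Install global packages
  run: npm install -g prisma        # keep only the tools that truly need to be global
- name: Install project deps
  run: yarn                         # installs rollup and rollup-plugin-commonjs locally
- name: Build this great app
  run: yarn build                   # rollup now resolves rollup-plugin-commonjs from node_modules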

Concourse CI: static_buildpack issue

I am deploying a simple Angular 4 application to Cloud Foundry using the staticfile_buildpack. While accessing the application I always get an nginx 403 error.
jobs:
  - name: app
    serial: true
    plan:
      - get: develop-repo
      - task: npm-build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: node
          run:
            path: sh
            args:
              - -exec
              - |
                cd develop-repo
                npm install
                npm run dist
          inputs:
            - name: develop-repo
          outputs:
            - name:
      - put: develop
        params:
          manifest: develop-repo/manifest.yml
          current_app_name: app
          path: develop-repo
resources:
  - name: develop-repo
    type: git
  - name: develop
    type: cf
manifest.yml:
---
applications:
  - name: app
    instances: 1
    memory: 512M
    disk_quota: 512M
    buildpack: staticfile_buildpack
    stack: cflinuxfs2
All I am doing is git clone -> npm build -> cf deploy
Note: all resource variables are set correctly; they are just omitted here for readability.
After trying out a couple of options, I found that by publishing the build artifacts to an output folder, we can push the app from that folder:
---
jobs:
  - name: app
    serial: true
    plan:
      - get: develop
      - task: npm-build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: node
          inputs:
            - name: develop
          outputs:
            - name: artifacts
          run:
            path: sh
            args:
              - -exec
              - |
                cd develop
                npm install
                npm run dist
                ls
                cp -R dist ../artifacts/
      - put: deploy-cf
        params:
          manifest: develop/ci/manifests/manifest-int.yml
          path: artifacts/dist
resources:
  - name: develop
    type: git
    source:
      uri: <<GITHUB-URI>>
      branch: <<GITHUB-BRANCH>>
      username: <<GITHUB-USERNAME>>
      password: <<GITHUB-PASSWORD>>
  - name: deploy-cf
    type: cf
    source:
      api: <<CF-API>>
      username: <<CF-USERNAME>>
      password: <<CF-PASSWORD>>
      organization: <<CF-ORG>>
      space: <<CF-SPACE>>