How do I run a one-line terminal command in a Concourse task?
The command I use in the terminal:
export ENVIRONMENT=development NODE_ENV=local; mvn clean install
How do I use this in the Concourse run config? Are the lines below correct?
run:
  path: /usr/bin/mvn
  dir: pr
  args:
  - -exc
  - |
  - export
    ENVIRONMENT = development
    NODE_ENV= local
  - clean
  - install
You can run the command directly as a shell command:
run:
  path: /bin/sh
  dir: pr
  args:
  - -exc
  - |
    export ENVIRONMENT=development NODE_ENV=local
    mvn clean install
Alternatively, the variables being exported can be set under params in the task config before run:
params:
  ENVIRONMENT: development
  NODE_ENV: local
run:
  path: /usr/bin/mvn
  dir: pr
  args:
  - clean
  - install
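For reference, the params variant dropped into a complete task config might look like this; a minimal sketch, where the maven image_resource and the pr input are assumptions (the input name is inferred from dir: pr in the question):
# task.yml - minimal sketch
platform: linux
image_resource:
  type: registry-image
  source:
    repository: maven   # assumed image; use whatever image provides mvn
inputs:
- name: pr
params:
  ENVIRONMENT: development
  NODE_ENV: local
run:
  path: mvn
  dir: pr
  args:
  - clean
  - install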
I don't believe that npm run prod is actually running in my GitHub Action, despite not throwing any kind of error. The reasons I believe that are:
If I delete my public/js/app.js file locally and push the change, it doesn't get rebuilt and my production site breaks as there's no app.js file.
If I leave the file in place and push my code to production, it's not minified, and one of the keys I need to reference still contains the dev value.
If I replace the aforementioned key with a different value and run npm run prod locally, then app.js is minified and contains my updated value.
Why would the npm run prod command not work within a GitHub Action, yet indicate that it ran successfully?
Here's my entire workflow file:
name: Prod
on:
  push:
    branches: [ main ]
jobs:
  laravel_tests:
    runs-on: ubuntu-20.04
    env:
      DB_CONNECTION: mysql
      DB_HOST: localhost
      DB_PORT: 3306
      DB_DATABASE: testdb
      DB_USERNAME: root
      DB_PASSWORD: root
    steps:
      - name: Set up MySQL
        run: |
          sudo systemctl start mysql
          mysql -e 'CREATE DATABASE testdb;' -uroot -proot
          mysql -e 'SHOW DATABASES;' -uroot -proot
      - uses: actions/checkout@main
      - name: Copy .env
        run: php -r "file_exists('.env') || copy('.env.example', '.env');"
      - name: Install Dependencies
        run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress
      - name: Generate key
        run: php artisan key:generate
      - name: Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Clean Install
        run: npm ci
      - name: Compile assets
        run: npm run prod
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        run: vendor/bin/phpunit
  forge_deploy:
    runs-on: ubuntu-20.04
    needs: laravel_tests
    steps:
      - name: Make Get Request
        uses: satak/webrequest-action@master
        with:
          url: ${{ secrets.PROD_DEPLOY_URL }}
          method: GET
UPDATE:
My suspicion is that running the build process in the action isn't actually updating the repo (in fact I'm fairly certain it isn't, as that would likely not be the desired behavior). So the deploy URL I'm using to push the code is likely just grabbing the repo as-is and deploying it.
I need a way to update only the public folder in the repo with the output of the npm run prod command. I'm not sure if this is possible, or advisable, but I'm nearly positive that's what's going on.
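One way to do that, as a sketch rather than a confirmed fix: add a step to the laravel_tests job after "Compile assets" that commits the build output back to the repo. This assumes the workflow's GITHUB_TOKEN is allowed to push to main; the bot identity is just a common convention.
- name: Commit compiled assets
  run: |
    # git commit requires an identity; the github-actions bot is conventional
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add public/
    # Only commit if the build actually changed something
    git diff --staged --quiet || git commit -m "Compile production assets"
    git push
The Forge deploy triggered afterwards would then pull a repo that already contains the freshly built public/ assets.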
How to run the docker-compose entrypoint configuration option with multiple bash commands
The commands:
yarn install
yarn build
sleep infinity
In docker-compose.yml, for the service gvhservice:
gvhservice:
  entrypoint:
    - "/bin/sh"
    - -ecx
    - |
      yarn install
      yarn build
      sleep infinity
OR
Optionally, add all these commands to a file, say entrypoint.sh,
and in docker-compose.yml,
gvhservice:
  entrypoint: entrypoint.sh
OR,
Use entrypoint.sh together with the command configuration option in docker-compose.yml (suitable when a variable number of commands needs to be passed at runtime).
entrypoint.sh
#!/bin/sh
set -ex
exec "$@"
docker-compose.yml
command:
  - /bin/sh
  - -ecx
  - |
    yarn install
    yarn build
    sleep infinity
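For the entrypoint.sh variants to work, the script also has to be executable and present inside the image. A minimal Dockerfile fragment, where the destination path is an assumption:
# Copy the entrypoint script into the image and make it executable;
# /usr/local/bin is on PATH, so `entrypoint: entrypoint.sh` resolves
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
Note that an entrypoint set in docker-compose.yml overrides any ENTRYPOINT declared in the Dockerfile.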
I'm trying to create a cache for the following GitHub Action:
name: dockercompose
on: push
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Cache Docker Compose
        id: cache-docker
        uses: actions/cache@v1
        with:
          path: fhe_app/
          key: cache-docker
      - name: Build the stack
        run: docker-compose up -d
        working-directory: fhe_app/
with the following Dockerfile:
FROM tensorflow/tensorflow:nightly-py3
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN python3 -m pip install --upgrade pip
COPY local_requirements.txt /usr/src/app/local_requirements.txt
RUN \
apt-get update && \
apt-get -y install python3 postgresql-server-dev-10 gcc python3-dev musl-dev netcat
RUN python3 -m pip install -r local_requirements.txt
# copy entrypoint.sh
COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x entrypoint.sh
# copy project
COPY . /usr/src/app/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
When pushing to GitHub, instead of a success message, I get:
Cache Docker Compose
Cache not found for input keys: cache-docker.
And:
Post Cache Docker Compose
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/e13e2694-e020-476d-888e-cb29cb9184b6/cache.tgz -C /home/runner/work/fhe_server/fhe_server/fhe_app .
/bin/tar: ./app: file changed as we read it
##[warning]The process '/bin/tar' failed with exit code 1
I have other YAML files not using Docker that are caching properly, so the overall structure of the YAML should be fine. Is this the right way to cache docker-compose?
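For context, actions/cache only archives files on disk between runs; it never sees images inside the Docker daemon, and here the post-job tar step is also racing against the still-running containers writing into fhe_app/ (hence "file changed as we read it"). One common workaround is to save and load the built image explicitly; a sketch, where the image name fhe_app_web is an assumption based on Compose's default project_service naming:
- name: Cache Docker image
  id: cache-docker
  uses: actions/cache@v1
  with:
    path: /tmp/docker-cache
    key: docker-image-${{ hashFiles('fhe_app/Dockerfile') }}
# Restore the image into the local daemon on a cache hit
- name: Load cached image
  if: steps.cache-docker.outputs.cache-hit == 'true'
  run: docker load -i /tmp/docker-cache/image.tar
- name: Build the stack
  run: docker-compose up -d
  working-directory: fhe_app/
# Export the freshly built image so the post-job cache step can store it
- name: Save image for next run
  if: steps.cache-docker.outputs.cache-hit != 'true'
  run: |
    mkdir -p /tmp/docker-cache
    docker save -o /tmp/docker-cache/image.tar fhe_app_web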
This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and builds a new image for my web container. However, my working code is not inside the freshly built image, so "cd myDir" always fails.
I figured the following lines in my Dockerfile would make my code available when it's built, but it appears that it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Use COPY. Your Dockerfile should look something like this:
FROM <base-image>
COPY . /opt/app
WORKDIR /opt/app
# ... more commands ...
ENTRYPOINT ["..."]
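Applied to the Dockerfile fragment from the question, that would look like the following; a sketch that simply swaps ADD for COPY as suggested, keeping the question's own paths:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
# COPY bakes the checkout into the image at build time, so the code is
# present when docker-compose run starts a container from it
COPY . $APPLICATION_ROOT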
I've got a repository with multiple Dockerfiles which take ~20min each to build: https://github.com/fredrikaverpil/pyside2-wheels
I'd like to divide these Dockerfiles efficiently so that each is built in its own job.
Right now, this is my .travis.yml:
language: python
sudo: required
dist: trusty
python:
  - 2.7
  - 3.5
services:
  - docker
install:
  - docker build -f Dockerfile-Ubuntu16.04-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This creates two jobs, one per Python version. However, I'd rather have one job per Dockerfile, as I'm about to add more such files.
How can this be achieved?
Managed to solve it, I think.
language: python
sudo: required
dist: trusty
services:
  - docker
matrix:
  include:
    - env: DOCKER_OS=ubuntu16.04
      python: 2.7
    - env: DOCKER_OS=ubuntu16.04
      python: 3.5
    - env: DOCKER_OS=centos7
      python: 2.7
install:
  - docker build -f Dockerfile-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This results in three job builds.
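Each entry under matrix.include becomes its own job, so supporting another Dockerfile is just one more entry; for example, a hypothetical addition assuming a Dockerfile-centos7-py3.5 exists in the repo:
    - env: DOCKER_OS=centos7
      python: 3.5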