Volumes and Entrypoints - codeship

I have a volume declaration in a service:
volumes:
- .:/var/www
The service's container uses an entrypoint shell script to prepare resources (npm install and gulp build). It runs fine locally with Jet, but the files created by the entrypoint are never detected when it runs for real.
What is different about volumes on the actual service?

The biggest difference between your local environment and the remote environment is that the build machines are created new every time.
Locally, you probably have npm modules and build files already present. Remotely, however, you won't have access to those. A way to test this with jet is to download the repository fresh and run it directly without any initial build processes - just jet steps.
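As a rough sketch of that test (the repository URL and directory name are placeholders):

git clone git@github.com:your-org/your-repo.git fresh-checkout
cd fresh-checkout
jet steps    # run the Codeship steps against a clean checkout with no local node_modules or build output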
Container Files
- var
  |- www
     |- node_modules
     |  |- //installed modules
     |- build
     |  |- //build files
     |- src
        |- //source files
Build Machine Files
- root_folder
  |- src
     |- //source files
The difficulty with volumes during the container runtime is that whatever is in your root directory will override what was created during the image build.
The volume mapping remotely is, in most cases, unnecessary. You want to test the container in complete isolation.
I would recommend removing the volumes directive in the codeship-services.yml file - this should solve your issue.
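For illustration, a minimal codeship-services.yml sketch with the volume mapping removed (the service name and build settings are placeholder assumptions, not taken from the question):

app:
  build:
    image: myorg/myapp
    dockerfile: Dockerfile
  cached: true
  # no volumes entry: the files the entrypoint creates under /var/www stay inside
  # the container instead of being overridden by the host directory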

Related

How to install an old version of the DirectX API in GitHub Actions

I'm working on an implementation of continuous integration in this project, which requires an old version of the DirectX SDK from June 2010. Is it possible to install this as a part of a GitHub Actions workflow at all? It may build with any version of the SDK as long as it's compatible with Windows 7.
Here's the workflow I've written so far, and here's the general building for Windows guide I'm following...
I have a working setup for a project using DX2010; however, I am not running the installer (which always failed for me during the beta, maybe it's fixed nowadays) but extracting only the parts required for the build. Looking at the link you provided, this is exactly what the guide recommends :)
First, the DXSDK_DIR variable is set using the ::set-env "command". The variable should most likely point to a directory outside the default location, which could otherwise be overwritten if the repository is checked out after preparing the DX files.
- name: Config
  run: echo ::set-env name=DXSDK_DIR::$HOME/cache/
  shell: bash
I didn't want to include the DX files in the repository, so they had to be downloaded while the workflow is running. To avoid doing that over and over again, the cache action is used to keep the files between builds.
- name: Cache
  id: cache
  uses: actions/cache@v1
  with:
    path: ~/cache
    key: cache
And finally, downloading and extracting DX2010. This step will run only if the cache wasn't created previously or the current workflow cannot create/restore caches (such as on: schedule or on: repository_dispatch).
- name: Cache create
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    curl -L https://download.microsoft.com/download/a/e/7/ae743f1f-632b-4809-87a9-aa1bb3458e31/DXSDK_Jun10.exe -o _DX2010_.exe
    7z x _DX2010_.exe DXSDK/Include -o_DX2010_
    7z x _DX2010_.exe DXSDK/Lib/x86 -o_DX2010_
    mv _DX2010_/DXSDK $HOME/cache
    rm -fR _DX*_ _DX*_.exe
  shell: bash
Aaand that's it, the project is ready for compilation.
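For context, a later build step can then rely on the DXSDK_DIR variable set in the Config step. The step below is only a hypothetical placeholder (the real compile command depends on the project):

- name: Build
  run: |
    echo "Using DirectX SDK from $DXSDK_DIR"
    # the actual build command goes here and should pick up headers and libraries
    # from $DXSDK_DIR/Include and $DXSDK_DIR/Lib/x86
  shell: bash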

How do I create a directory inside a container with Jib?

I am trying to specify a mount point in a Kubernetes deployment descriptor, and for that I need there to be a directory:
volumeMounts:
- name: volume-mount
  mountPath: /dev/bus/usb/003/005
to correspond to:
volumes:
- name: my-host-volume
  hostPath:
    path: /dev/bus/usb/003/005
How do I create this using jib?
UPDATE: newer Jib versions allow specifying a copy destination directory when using <extraDirectories>, so you no longer have to manually prepare the target directory structure beforehand.
Create an empty directory <project root>/src/main/jib/dev/bus/usb/003/005 in your source repo.
Details
Jib allows adding arbitrary extra files and directories using the <extraDirectories> (Maven / Gradle) configuration. Files and (sub-)directories under <extraDirectories> will be recursively copied into the root directory of the image. By default, <project root>/src/main/jib is one such "extra directory", so you can simply create an empty directory with the structure you like.
You can also tune the permissions of files and directories using <permissions> if you want.
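As an illustration (not taken from the question), the jib-maven-plugin configuration for an extra directory with an explicit destination and permissions could look roughly like this inside the plugin's <configuration> block; the <from> directory name is a placeholder:

<extraDirectories>
  <paths>
    <path>
      <from>src/main/custom-extra-dir</from>
      <into>/dev/bus/usb/003/005</into>
    </path>
  </paths>
  <permissions>
    <permission>
      <file>/dev/bus/usb/003/005</file>
      <mode>755</mode>
    </permission>
  </permissions>
</extraDirectories>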

Gitlab Runner - New folder for each build

I'm using GitLab CI for my project. When I push on the develop branch, it runs the tests and updates the code on my test environment (a remote server).
But the GitLab runner always uses the same build folder: builds/a3ac64e9/0/myproject/myproject
Instead, I would like it to create a new folder every time:
builds/a3ac64e9/1/myproject/myproject
builds/a3ac64e9/2/myproject/myproject
builds/a3ac64e9/3/myproject/myproject
and so on
Using this, I could update my website just by changing a symbolic link to point at the latest runner directory.
Is there a way to configure GitLab Runner this way?
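For illustration, the symlink swap described above would be roughly the following (the runner home and web-root paths are placeholders):

ln -sfn /home/gitlab-runner/builds/a3ac64e9/3/myproject/myproject /var/www/mysite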
While it doesn't make sense to use your build directory as your deployment directory, you can set up a custom build directory:
Open config.toml in a text editor (more info on where to find it here).
Set enabled = true under [runners.custom_build_dir] (more info here):
[runners.custom_build_dir]
enabled = true
In your .gitlab-ci.yml file, under variables set GIT_CLONE_PATH. It must start with $CI_BUILDS_DIR/, e.g. $CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME, which will probably give you what you're looking for, although if you have multiple stages, they will have different job IDs. Alternatively, you could try $CI_BUILDS_DIR/$CI_COMMIT_SHA, which would give you a unique folder for each commit. (More info here)
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
Unfortunately, there is currently an issue with using $CI_BUILDS_DIR in GIT_CLONE_PATH if you're using Windows and PowerShell, so you may have to do something like this as a work-around, if all your runners have the same build directory: GIT_CLONE_PATH: 'C:\GitLab-Runner/builds/$CI_JOB_ID/$CI_PROJECT_NAME'
You may want to take a look at the variables available to you (predefined variables) to find the most suitable variables for your path.
You might want to read the following answer: "Changing the build intermediate paths for gitlab-runner".
I'll repost my answer here:
Conceptually, this approach is not the way to go; the build directory is not a deployment directory, it's a temporary directory to build from or to deploy from, even though with a shell executor it happens to be a fixed path.
So what you need is to deploy from that directory to the correct deployment directory with a script, as in the .gitlab-ci.yml below.
stages:
  - deploy
variables:
  TARGET_DIR: /home/ab12/public_html/$CI_PROJECT_NAME
deploy:
  stage: deploy
  script:
    - mkdir -pv $TARGET_DIR
    - rsync -r --delete ./ $TARGET_DIR
  tags:
    - myrunner
This will move your project files into /home/ab12/public_html/.
By naming your projects project1 .. projectn, all of them could use this same .gitlab-ci.yml file.
You cannot achieve this with the GitLab CI runner configuration alone, but you can create two runners and assign one exclusively to each branch by using a combination of the only and tags keywords.
Assuming your two branches are named master and develop, and the two runners have been tagged with master_runner and develop_runner, your .gitlab-ci.yml can look like this:
master_job:
  <<: *your_job
  only:
    - master
  tags:
    - master_runner

develop_job:
  <<: *your_job
  only:
    - develop
  tags:
    - develop_runner
(<<: *your_job merges in your actual job definition, which you can factor out as a YAML anchor)
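For completeness, a sketch of how that shared anchor might be defined above the two jobs in the same file (the hidden job name and script are placeholders):

.job_template: &your_job
  stage: deploy
  script:
    - ./deploy.sh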

version control of docker-compose.yml

My application has 4 docker containers that talk to each other and is specified with a docker-compose.yml file, so I can just do docker-compose up -d from the location where that file is stored and it starts.
I am virtually at the end of setting up my CI service to go from a commit to the git repository, to testing, and then to building the docker images that I need for my deploy. I now need to sort out how to deploy.
I already have the current version running, and my docker-compose.yml file is configured via environment variables held in a .env file. It is unlikely that the docker-compose.yml file will change between versions, but it might. What will change is the .env file, as that specifies the image names and tags that the CI system has just built and which the docker-compose.yml file will use to start the new version of the running system. .env is created on the fly by scripts in the repository, which are run by the CI system in its workspace. My deploy step is really just about copying .env and docker-compose.yml into place, stopping the old set of services, and starting the new ones.
My question is: if I change the .env file or docker-compose.yml under a running version, will docker-compose down properly stop the old running images, so that when I immediately follow it with docker-compose up -d I swap over to the new images? Is there a better way of handling this situation?
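For illustration only, a minimal sketch of the pattern described above, with placeholder service, variable, and image names:

# docker-compose.yml - image names and tags are supplied by the .env file next to it
services:
  web:
    image: ${WEB_IMAGE}:${WEB_TAG}
  api:
    image: ${API_IMAGE}:${API_TAG}

# .env - regenerated by the CI scripts for each release
WEB_IMAGE=registry.example.com/myapp/web
WEB_TAG=build-123
API_IMAGE=registry.example.com/myapp/api
API_TAG=build-123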

TeamCity Build & Deploy: How do you pass dependent artifact paths to a script?

How do you pass the artifact paths to a script in TeamCity?
The scenario is this:
Build Project
Deploy Project (with an artifact dependency to #1)
Step 2 consists of a script which:
Stops a service (to unlock files)
Copies the build artifacts to the server
Restarts the service
I'm struggling with step 2. I figure I need to pass the path of the build artifacts into the script, but I can't see how to do it.
We do something like this. It is not 100% clear, but it looks like you want to do the build and deployment as two separate builds in TeamCity, with an artifact dependency from the deployment build on the main build, which is exactly what we do. Here is how we do it.
Setup your artifacts from the main build which it sounds like you have already done.
Example: **\bin\release\*.* => bin
Set up the artifact dependency (we also do a snapshot dependency, but you don't have to) to pull your artifacts from the main build and put them into a local folder in your deployment build.
Example: Artifacts paths: bin\**\*.* Destination path: bin\
We use a mixture of MSBuild and PowerShell for doing the actual deployment work. In each case you can reference the artifacts using a relative path.
If the build working folder looks like this:
root
 |- bin (Artifacts pulled in from main build)
 |- src
 |- build (Where your build and deployment scripts live)
You would access the bin files from your deployment script located in the build folder like:
..\bin\[your files]
You can then pass the path to your build artifacts like this:
%teamcity.build.checkoutDir%\bin\
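As a rough sketch (not the poster's actual setup), a PowerShell deployment script living in the build folder could then perform step 2 like this; the service name and deployment path are placeholders:

param(
    [string]$ArtifactsDir = "..\bin",             # artifacts pulled in by the artifact dependency
    [string]$DeployDir    = "C:\Services\MyApp",  # placeholder deployment target
    [string]$ServiceName  = "MyAppService"        # placeholder Windows service name
)

Stop-Service -Name $ServiceName                   # stop the service to unlock files
Copy-Item -Path "$ArtifactsDir\*" -Destination $DeployDir -Recurse -Force
Start-Service -Name $ServiceName                  # restart the service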