Setting up an environment with Puppet and testing it with Travis CI (GitHub)

I want to use Travis CI to test pull requests for my GitHub repo, but I use Puppet to set up the environment and install dependencies. Is there any way to build the dependencies with Puppet from .travis.yml?

You need to customize the build environment by writing a shell script that installs and runs puppet.
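A minimal sketch of what that could look like in .travis.yml, assuming the Puppet manifests live under puppet/ in the repository (the language, module path, manifest name, and test command are all placeholders):

language: ruby
before_install:
  # Install Puppet as a gem and apply the repo's manifest to provision the build VM
  - gem install puppet
  - sudo puppet apply --modulepath=puppet/modules puppet/manifests/site.pp
script:
  - bundle exec rake test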

Related

GitLab Runner cannot retrieve a dependency repo in a PowerShell executor

CI Runner Context
- GitLab version: 13.12.2 (private server)
- GitLab Runner version: 14.9.1
- Executor: shell executor (PowerShell)
- Operating system: Windows 10
- Project in Python (may be unrelated), using Poetry for dependency management
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If I run the CI with a poetry update in the .yml file, the job console exits with error code 128 when git clone is called on my internal dependency.
To isolate the problem, I tried simply calling git clone on that same repo. The response shows that the runner cannot authenticate itself to the GitLab server.
What I Have Tried
Reading through the GitLab docs, I found that runners need authorization to pull any private dependencies. For that, GitLab provides deploy keys.
So I followed the instructions to create a deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows with PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only user that I have not found a way to log in as interactively. I had to make the CI runner perform the SSH key creation steps itself.)
Example .gitlab-ci.yml file:
# Commands in PowerShell
but_first:
  # The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
I solved my problem of pulling internal dependencies by completely bypassing the SSH pull of the source code and switching from Poetry to Hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context, a Python wheel).
I then used GitLab's Packages and Registries feature to host my package. Instead of having packages in each source code project, I pushed the packages of all my dependencies to a single project created for this purpose.
My .gitlab-ci.yml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only wheel is built (without target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched Poetry)
My first problem was getting pip to find the extra PyPI repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple
[install]
trusted-host = <private gitlab repo>
This is functionally the same as passing the --extra-index-url and --trusted-host flags when calling pip install.
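For reference, the equivalent one-off command would look something like this (the token, host, and project number are the same placeholders as above):

pip install dependency-project --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple --trusted-host <private gitlab repo>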
Since I was using a dependency manager, I was not calling pip directly but through the manager's wrapper around it. And here comes the main reason I decided to change dependency managers: Poetry does not read or recognize pip.ini, so any changes made in that file are ignored.
With the pip.ini file configured, any dependencies I have in the private package repo will also be found when installing projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]

mkdocs site doesn't exist after build on codeship

I'm trying to use Codeship to automate building docs from a repository.
After executing the command mkdocs build --clean, I get a path to where my site folder is supposed to be:
INFO - Cleaning site directory
INFO - Building documentation to directory: /home/rof/src/bitbucket.org/josephkobti/test/site
The thing is that I can't find that folder when using the SSH console for debugging.
The reason the folder doesn't exist is a misunderstanding of Codeship's SSH Debug Build feature, documented here: https://documentation.codeship.com/basic/builds-and-configuration/ssh-access/
The VMs started for the debug feature are not the actual VMs that ran the automated builds. They are new VMs running the same initialization steps as the automated builds (i.e. cloning the repository, configuring project-specific environment variables, ...) but none of the actual setup or test commands.
Because of this, the mkdocs build --clean command wasn't run either when Joseph connected to the debug VM, and as such the generated site wasn't available.
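One way to confirm this inside the debug session is to re-run the build by hand (the repository path below is taken from the log output in the question):

cd /home/rof/src/bitbucket.org/josephkobti/test
mkdocs build --clean
ls site/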

How to execute Octo.exe from VSTS?

I wish to execute Octo.exe from a PowerShell script on VSTS, like this:
Octo.exe push --package $_.FullName --replace-existing --server https://deploy.mydomain.com --apiKey API-xxxxxxxx
But I don't know the correct path to Octo.exe, or whether it is present on VSTS at all. Is it possible to install it there? Or will I have to add octo.exe to my source and call it from there?
You can't call the Octo.exe command if you are using the hosted build agent, and it is impossible to install it on that build agent too.
If you can call Octo.exe without installing it, you can add octo.exe to source control and map it to the build agent (Repository > Mappings); then you can call it via PowerShell. The path could be something like $(build.sourcesdirectory)\Tool\octo.exe, depending on how you map it to the source directory.
If Octo.exe needs to be installed, you have to set up an on-premises build agent and install Octo on that agent.
On the other hand, there is the Octopus Deploy Integration extension, which you can install and use directly.
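As a rough sketch of the source-control approach, a PowerShell build script could resolve the mapped path through the agent's environment variables and push every package in the staging directory (the Tool\ folder, server URL, and API key are placeholders):

# Assumes octo.exe was mapped into Tool\ under the sources directory
$octo = Join-Path $env:BUILD_SOURCESDIRECTORY "Tool\octo.exe"
Get-ChildItem "$env:BUILD_STAGINGDIRECTORY\*.nupkg" | ForEach-Object {
    & $octo push --package $_.FullName --replace-existing --server https://deploy.mydomain.com --apiKey API-xxxxxxxx
}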
Instead of cluttering the source code repository with binaries, the cleanest approach is to use the Octopus REST API for pushing a package.
An example of how to push a package is provided by Octopus itself.
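As an illustration, a push over the REST API from PowerShell could look something like the following (the server URL, API key, and package path are placeholders):

# Upload a package to the Octopus built-in repository via the HTTP API
$server  = "https://deploy.mydomain.com"
$apiKey  = "API-xxxxxxxx"
$package = "C:\packages\MyApp.1.0.0.nupkg"
$wc = New-Object System.Net.WebClient
$wc.UploadFile("$server/api/packages/raw?apiKey=$apiKey", $package)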

How do you trigger a gulp/grunt task on a remote server after deploy?

I've just switched to the Roots Sage starter theme for WordPress (roots.io/sage/docs/), and I'm currently reading up on deployment processes.
My process is usually:
- make changes
- build with grunt/gulp
- commit (including compiled scripts)
- deploy
Sage's .gitignore file excludes the dist folder (compiled files) from the repo, i.e. no CSS/JS in the repo. Are you supposed to install Node/npm and build the assets on the staging/production environment after deploy? If so, how do you trigger a gulp/grunt task on a remote server after deploy?
I'm using https://www.springloops.com/ for managing git and deploy.
Are you supposed to install node/npm and build the assets on staging/production environment after deploy?
You should avoid doing this. Opinion is also mixed about committing compiled assets to a VCS, as you stated you were doing previously.
Let's look at an example.
You finished all your testing locally. You haven't run an npm update in a few days, and one of your dependencies has a loose version constraint specified, something like "~1.0.0" (which allows any 1.0.x patch release).
You deploy. On the server, npm install is run before gulp or grunt. gulp runs, the build of your assets completes successfully, and the new version of your app is now live.
Unknown to you, version 1.0.1 of that dependency was released yesterday. For whatever reason, 1.0.1 introduced a change that breaks functionality within your app. That breaking change is now live on your site in production.
Even if you could guarantee that all dependencies pulled by npm install on the server mirror what you had locally/in staging, the headache of maintaining yet another set of software on the server (Node.js, Ruby, etc...) just for compiling assets should be enough to keep you from compiling in production.
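If you must install on the server, pinning the exact resolved versions with a lockfile mitigates this particular failure mode; with npm of that era, that meant something like:

npm shrinkwrap   # writes npm-shrinkwrap.json, pinning the exact versions npm install will use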
IMO, you should keep compiled assets out of your VCS, and rsync them to your server(s) as part of your deployment.
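As an illustration of that last suggestion, a deploy step along these lines builds locally and syncs only the compiled output (the build command, paths, and host are placeholders):

# Build assets locally, then push only the compiled dist folder to the server
gulp --production
rsync -avz --delete dist/ deploy@example.com:/var/www/wp-content/themes/my-theme/dist/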

How to make Octopus Deploy choose the package version in multiple environments?

We are building packages for multiple deployment environments using a TeamCity server and OctoPack. The problem is that the Tentacle agent chooses the latest package version by number, so the same (latest) package is deployed to all environments. Here's a summary of our setup:
- Environments DEV and STAGE;
- Deployment to DEV is triggered from the Git "dev" branch;
- Deployment to STAGE is triggered from the Git "stage" branch;
- OctoPack is configured to generate packages MyProduct.1.0.0.dev-%build_counter% for the DEV build configuration;
- OctoPack is configured to generate packages MyProduct.1.0.0.%build_counter% for the STAGE build configuration;
- TeamCity is configured to expose the OctoPack artifacts (NuGet packages) via its NuGet feed;
- The Octopus project is configured to deploy packages with NuGet ID MyProduct from the TeamCity NuGet feed.
So what happens is that since DEV builds run more frequently, they have a larger %build_counter%, and STAGE never gets a deployment of its own packages - Octopus prefers the packages with the 1.0.0.dev-* suffix.
This must be a fairly common scenario, but I haven't found a simple way to solve it.
There are some parts that are not documented here: https://github.com/OctopusDeploy/Octopus-Tools. But if you look at https://github.com/OctopusDeploy/Octopus-Tools/blob/master/source/OctopusTools/Commands/CreateReleaseCommand.cs it is possible to figure out what you can do.
I think the tool is backwards compatible, but I'm not 100% sure about that.
When you are using the Octo tools, which I expect you do, you can set the version option (also called releasenumber now) to specify the release number. If you don't specify anything else, it will take the latest package, so what you want to do is set the packageversion (also called defaultpackageversion now) that should be used for the release.
I think that should do it. If it doesn't, what are you using to create the release?
Here is an example of what we run from TeamCity with the Octo tools, which we have added to the environment path on the build agents:
create-release --server=%conf.OctoServerApi% --project=%conf.OctoProject% --version=%env.OctopusPackageVersion% --deployto=%conf.OctoDeployEnv% --packageversion=%env.OctoPackPackageVersion% --apiKey=%conf.OctoApiKey% --waitfordeployment %conf.OctoExtraParams%
UPDATE:
The documentation for 2.0 is much better: http://docs.octopusdeploy.com/pages/viewpage.action?pageId=360596
Inspired by Tomas Jansson's answer, simply adding the following to Additional command line arguments in the OctopusDeploy: Create release build step (TeamCity v9) worked for me:
--packageversion=%build.number%