I want to build a docker image in my pipeline and then run a job inside it, without pushing or pulling the image.
Is this possible?
It's by design that you can't pass artifacts between jobs in a pipeline without using some kind of external resource to store them. However, you can pass artifacts between tasks within a single job. Also, images are specified at the task level rather than the job level. So the simplest way to do what you want may be a single job whose first task builds the Docker image and whose second task consumes it as the container image.
In your case, you would build the Docker image in the build task and use docker export to export the image's filesystem to a rootfs, which you can put into the output (my-task-image). Keep in mind that the output needs to match a particular schema: it must contain rootfs/... (the extracted docker export) and a metadata.json, which can just contain an empty JSON object. You can look at the in script within the docker-image-resource for more information on how to match the schema: https://github.com/concourse/docker-image-resource/blob/master/assets/in. Then, in the subsequent task, you can add the image parameter in your pipeline YAML as such:
- task: use-task-image
  image: my-task-image
  file: my-project/ci/tasks/my-task.yml
in order to use the built image in the task.
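For reference, here is a minimal sketch of what the build task's script could look like, assuming a privileged task with a Docker daemon available (e.g. via a docker:dind setup) and an output directory named my-task-image; the image name my-app is illustrative:

# Build the image, then export a container's filesystem into the output,
# matching the rootfs/ + metadata.json schema described above.
docker build -t my-app .
cid=$(docker create my-app)
mkdir -p my-task-image/rootfs
docker export "$cid" | tar -xf - -C my-task-image/rootfs
echo '{}' > my-task-image/metadata.json
docker rm "$cid"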
UPDATE: the PR was rejected
This answer doesn't currently work, as the "dry_run" PR was rejected. See https://github.com/concourse/docker-image-resource/pull/185
I will update here if I find an approach which does work.
The "dry_run" parameter which was added to the docker resource in Oct 2017 now allows this (github pr)
You need to add a dummy docker resource like:
resources:
- name: dummy-docker-image
  type: docker-image
  icon: docker
  source:
    repository: example.com
    tag: latest
- name: my-source
  type: git
  source:
    uri: git@github.com:me/my-source.git
Then add a build step which pushes to that docker resource but with "dry_run" set so that nothing actually gets pushed:
jobs:
- name: My Job
  plan:
  - get: my-source
    trigger: true
  - put: dummy-docker-image
    params:
      dry_run: true
      build: path/to/build/scope
      dockerfile: path/to/build/scope/path/to/Dockerfile
Related
I am running:
drone-server on top of Kubernetes
and drone-kubernetes-runner to dynamically provision runners as pods.
After investigation, I found that the Pod YAML of each runner defines the first step, "git clone", using the image drone/git.
I am running the pipeline in an offline environment, so I have to specify nexus.company.local/drone/git instead of drone/git to avoid fetching from the public registry.
I have searched everywhere but found no way to do this.
Even image_pull_secrets only applies to the explicit steps that I define myself.
It does NOT apply to implicit steps like the "clone" step.
You could disable the automatic cloning and add an explicit step, specifying your own image with the nexus mirror origin.
For example:
kind: pipeline

clone:
  disable: true

steps:
- name: clone
  image: nexus.company.local/drone/git
Alternatively, you can set the runner's default clone image via an environment variable:
DRONE_RUNNER_CLONE_IMAGE=nexus.company.local/drone/git
REF: https://docs.drone.io/runner/docker/configuration/reference/drone-runner-clone-image/
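If your runner is deployed in Kubernetes, one way to apply this is as an environment variable on the runner's container spec. A minimal sketch, assuming a standard Deployment for the runner (note that the linked reference documents the Docker runner, so check your own runner's configuration reference):

# env entry added to the drone runner container in its Deployment
env:
- name: DRONE_RUNNER_CLONE_IMAGE
  value: nexus.company.local/drone/git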
In our release-candidate pipeline to our QA (think "stage") environment, we are using a vmImage to build our Docker container and then push it to our Non-Prod registry.
pool:
  vmImage: "ubuntu-latest"

steps:
# pseudo-code: get everything prepped for buildAndPush
- task: Docker@2
  inputs:
    containerRegistry: "Our Non-Prod Registry"
    repository: "apps/app-name"
    command: "buildAndPush"
    Dockerfile: "**/Dockerfile"
    tags: |
      $(Build.SourceBranchName)
These are release-candidates. Once the code is approved for release, we want to push the same container to our Production registry; however, we can't figure out how to do it in this framework. Seems like we have only two options:
rebuild it in a different run of the pipeline later which will push it to our Production registry
push every release-candidate container to our Production registry
I don't like either of these options. What I want is a way to retain the container somewhere, and then push it to our Production registry when we decide that we want to release it.
How would we do this?
You can actually do this:
Push the Docker image to your non-production registry.
Once it is approved for release (I don't know what exactly you mean by this in terms of your pipeline), you can pull the image, tag it, and push it to the new registry (please check this topic):
docker pull old-registry/app
docker tag old-registry/app new-registry/app
docker push new-registry/app
You can also take a look here, where the above case is described.
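For illustration, a rough sketch of how the retag-and-push could look as Azure Pipelines steps in a later release stage; the service connection names, registry hosts, and the imageTag variable are placeholders:

- task: Docker@2
  displayName: Login to non-prod registry
  inputs:
    command: login
    containerRegistry: "Our Non-Prod Registry"
- task: Docker@2
  displayName: Login to prod registry
  inputs:
    command: login
    containerRegistry: "Our Prod Registry"
- script: |
    # Pull the approved release candidate, retag it for production, and push it
    docker pull nonprod.example.com/apps/app-name:$(imageTag)
    docker tag nonprod.example.com/apps/app-name:$(imageTag) prod.example.com/apps/app-name:$(imageTag)
    docker push prod.example.com/apps/app-name:$(imageTag)
  displayName: Retag and push the approved image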
I'm trying to run a container job inside a locally built and cached Docker image (built from a Dockerfile) instead of pulling the image from a registry. Based on my tests so far, the agent only tries to pull the image from a registry and doesn't search for the image locally. I know this functionality is not documented, but I wonder if there is a way to make it work.
stages:
- stage: Build
  jobs:
  - job: Docker
    steps:
    - script: |
        docker build -t builder:0.1 .
      displayName: 'Build Docker'
  - job: GCC
    dependsOn: Docker
    container: builder:0.1
    steps:
    - script: |
        cd src
        make
      displayName: 'Run GCC'
I am afraid there is currently no way to find a locally cached image and run a container job on it.
From the documentation you mentioned:
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry
If you define a container in the YAML file, the image will be pulled from Docker Hub by default.
Alternatively, you can add the endpoint field to specify another registry (e.g. Azure Container Registry).
Here is the definition of the Container:
container:
  image: string  # container image name
  options: string  # arguments to pass to container at startup
  endpoint: string  # endpoint for a private container registry
  env: { string: string }  # list of environment variables to add
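For example, a container job that pulls its image from a private registry could look like this; the image name and service connection name are placeholders:

container:
  image: myregistry.azurecr.io/builder:0.1
  endpoint: MyRegistryServiceConnection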
This means that the container job always pulls the image from a registry and cannot use a locally cached image.
That said, this requirement is a valuable one.
You could add your request for this feature on our UserVoice site, which is our main forum for product suggestions.
I am having an issue getting an image to build in Azure Devops from a docker-compose file.
It appears that the first issue is that the image does not build.
This is, I believe, causing the push step to fail, since no image is created; it is just running an existing image.
What can I do to "force" the process to build an image from this to push into our repo? Here is our current docker-compose file:
version: '3.4'
services:
  rabbit:
    image: rabbitmq:3.6.16-management
    labels:
      NAME: "rabbit"
    environment:
      - "RabbitMq/Host=localhost"
    ports:
      - "15672:15672"
      - "5672:5672"
    container_name: rabbit
    restart: on-failure:5
Here are the build and push steps (truncating the top, which doesn't really matter):
Build: [task configuration screenshot not shown]
Push: [task configuration screenshot not shown]
I spent a fair amount of time fighting with this today (similar issue, anyway). I do not believe the image being non-local is necessarily your problem.
Looks like you are using the "Docker Compose" Task in Azure DevOps for your Build. I had the same problem - I could get it to build fine, but couldn't ever seem to "pipe" the result to the Push Task. I suspect one could add another Task between them to resolve this, but there is an easier way.
Instead, try using the "Docker" Task to do your build. Without really changing anything else, I was able to make that work, and the "Push" step next in line was happy as could be.
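For reference, a rough sketch of what that Docker task could look like in the pipeline YAML; the service connection and repository names are placeholders:

- task: Docker@2
  displayName: Build and push image
  inputs:
    containerRegistry: "My Registry Service Connection"
    repository: "apps/rabbit"
    command: buildAndPush
    Dockerfile: "**/Dockerfile"
    tags: |
      $(Build.BuildId)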
It's not clear to me from the documentation whether it's even possible to pass one job's output to another job (not from task to task, but from job to job).
I don't know if I'm conceptually doing the right thing (maybe it should be modeled differently in Concourse), but what I'm trying to achieve is a pipeline for a Java project split into several granular jobs, which can be executed in parallel and triggered independently if I need to re-run some job.
How I see the pipeline:
First job:
pulls the code from github repo
builds the project with maven
deploys artifacts to the maven repository (mvn deploy)
updates SNAPSHOT versions of the Maven project submodules
copies artifacts (jar files) to the output directory (output of the task)
Second job:
picks up jar's from the output
builds docker containers for all of them (in parallel)
Pipeline goes on
I was unable to pass the output from job 1 to job 2.
Also, I am curious if any changes I introduce to the original git repo resource will be present in the next job (from job 1 to job 2).
So the questions are:
What is a proper way to pass build state from job to job (I know, jobs might get scheduled on different nodes, and definitely in different containers)?
Is it necessary to store the state in a resource (say, S3/git)?
Is the Concourse stateless by design (in this context)?
Where's the best place to get more info? I've tried the manual, but it's just not that detailed.
What I've found so far:
outputs are not passed from job to job
Any changes put to the resource (i.e. pushed to the git repo) are fetched in the next job, but changes made only in the working copy are not.
Minimal example (it fails with the error missing inputs: gist-upd, gist-out if the commented lines are uncommented):
---
resources:
- name: gist
  type: git
  source:
    uri: "git@bitbucket.org:snippets/foo/bar.git"
    branch: master
    private_key: {{private_git_key}}

jobs:
- name: update
  plan:
  - get: gist
    trigger: true
  - task: update-gist
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: concourse/bosh-cli}
      inputs:
      - name: gist
      outputs:
      - name: gist-upd
      - name: gist-out
      run:
        path: sh
        args:
        - -exc
        - |
          git config --global user.email "nobody@concourse.ci"
          git config --global user.name "Concourse"
          git clone gist gist-upd
          cd gist-upd
          echo `date` > test
          git commit -am "upd"
          cd ../gist
          echo "foo" > test
          cd ../gist-out
          echo "out" > test
  - put: gist
    params: {repository: gist-upd}

- name: fetch-updated
  plan:
  - get: gist
    passed: [update]
    trigger: true
  - task: check-gist
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      inputs:
      - name: gist
      #- name: gist-upd
      #- name: gist-out
      run:
        path: sh
        args:
        - -exc
        - |
          ls -l gist
          cat gist/test
          #ls -l gist-upd
          #cat gist-upd/test
          #ls -l gist-out
          #cat gist-out/test
To answer your questions one by one.
All build state needs to be passed from job to job in the form of a resource which must be stored on some sort of external store.
It is necessary to store the state on some sort of external store. Each resource type handles the upload and download itself, so for your specific case I would check out this maven custom resource type, which seems to do what you want.
Yes, this statelessness is the defining trait behind concourse. The only stateful element in concourse is a resource, which must be strictly versioned and stored on an external data store. When you combine the containerization of tasks with the external store of resources, you get the guaranteed reproducibility that concourse provides. Each version of a resource is going to be backed up on some sort of data store, and so even if the data center that your ci runs on is to completely fall down, you can still have strict reproducibility of each of your ci builds.
To get more info, I would recommend doing a tutorial of some kind to get your hands dirty and build a pipeline yourself. Stark & Wayne have a tutorial that could be useful. To help understand resources, there is also a resources tutorial, which might be helpful for you specifically.
Also, to get to your specific error: the reason you are seeing missing inputs is that Concourse will look for directories (made by resource gets) named after each of those inputs. So you would need to get resource instances named gist-upd and gist-out prior to starting the task, as sketched below.
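For illustration, a minimal sketch of what that could look like in the second job, assuming gist-upd and gist-out were also declared as resources in the pipeline and the update job put to them (the resource definitions are omitted here):

- name: fetch-updated
  plan:
  - get: gist
    passed: [update]
    trigger: true
  - get: gist-upd
    passed: [update]
  - get: gist-out
    passed: [update]
  - task: check-gist
    # config as in the original example, now with gist-upd and gist-out listed as inputs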