Why does Concourse `get` a resource after `put`ing it? - concourse

When I configure the following pipeline:
resources:
- name: my-image-src
  type: git
  source:
    uri: https://github.com/concourse/static-golang
- name: my-image
  type: docker-image
  source:
    repository: concourse/static-golang
    username: {{username}}
    password: {{password}}
jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
After building and pushing the image to the Docker registry, it subsequently fetches the image. This can take some time and ultimately doesn't really add anything to the build. Is there a way to disable it?

Every put implies a get of the version that was created. There are a few reasons for this:
The primary reason for this is so that the newly created resource can be used by later steps in the build plan. Without the get there is no way to introduce "new" resources during a build's execution, as they're all resolved to a particular version to fetch when the build starts.
There are some side-benefits to doing this as well. For one, it immediately warms the cache on one worker. So it's at least not totally worthless; later jobs won't have to fetch it. It also acts as validation that the put actually had the desired effect.
In this particular case, as it's the last step in the build plan, the primary reason doesn't really apply. But we didn't bother optimizing it away since in most cases the side benefits make it worth not having the secondary question arise ("why do only SOME put steps imply a get?").
It also cannot be disabled as we resist adding so many knobs that you'll want to turn one day and then have to go back and turn back off once you actually do need it back to the default.
Docs: https://concourse-ci.org/put-step.html

Related

Cloudformation submitted information does not contain changes when updating task formation image version

If my cloud formation script is like this:
myServiceName:
  Type: "AWS::ECS::Service"
  Properties:
    ServiceName: "myServiceName"
    TaskDefinition: !Ref myTaskName
myTaskName:
  Type: "AWS::ECS::TaskDefinition"
  Properties:
    ContainerDefinitions:
      - Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.1"
And I update the task definition to 1.1.2
Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/docker-image-name:1.1.2"
Then trying to run a Cloud formation update command gives me this error:
Submitted information does not contain changes.
Is it just not possible to update the task definition to point to a new image in ECR without changing the service?
All the documentation I've read says that this error comes up when you don't change any properties of your resource, so Cloudformation doesn't see any resources as changed, and therefore won't redeploy.
But you are changing a property, and yet it's still happening, which is weird. I haven't been able to find any record of such behavior.
Debugging suggestion: try adding an arbitrary new property to your resource, e.g. a tag field. If it updates successfully, it means for some reason the changed Image doesn't trigger an update, and the fix would be to always change something else too. If it still doesn't update, then I suspect something is going wrong somewhere else in your process and you're not actually uploading your changed template at all.
I found the following in the CloudFormation User Guide that may help.
Troubleshooting CloudFormation - No updates to perform
I encountered an issue adding a DeletionPolicy attribute (which is not a property). According to the documentation, adding/changing metadata will cause CloudFormation to accept certain changes.

GitHub Actions: Are there security concerns using an external action in a workflow job?

I have a workflow that FTPs files by using an external action from someuser:
- name: ftp deploy
  uses: someuser/ftp-action#master
  with:
    config: ${{ secrets.FTP_CONFIG }}
Is this a security concern? For example could someuser change ftp-action#master to access my secrets.FTP_CONFIG? Should I copy/paste their action into my workflow instead?
If you use ftp-action#master then every time your workflow runs it will fetch the master branch of the action and build it. So yes, I believe it would be possible for the owner to change the code to capture secrets and send them to an external server under their control.
What you can do to avoid this is use a specific version of the action and review their code. You can use a commit hash to refer to the exact version you want, such as ftp-action#efa82c9e876708f2fedf821563680e2058330de3. You could use a tag if it has release tags. e.g. ftp-action#v1.0.0
Although, this is maybe not as secure because tags can be changed.
Alternatively, and probably the most secure, is to fork the action repository and reference your own copy of it. my-fork/ftp-action#master.
The GitHub help page does mention:
Anyone with write access to a repository can read and use secrets.
If someuser does not have write access to the repository, there should be no security issue.
As commented below, you should specify the exact commit of the workflow you are using, in order to make sure it does not change its behavior without your knowledge.
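As an added example that is not part of the original thread, a small Python audit of the pinning advice above: it scans a constructed workflow excerpt for `uses:` references and reports the third-party ones that track a branch or tag instead of a full commit SHA. The workflow text and step names are constructed, and both the page's `#` separator and `@` are accepted.

```python
import re

# Constructed workflow excerpt (not from any real repository); it uses the
# page's "owner/action#ref" form, so both "#" and "@" are accepted as separators.
SAMPLE_WORKFLOW = """\
jobs:
  deploy:
    steps:
    - uses: actions/checkout#v2
    - name: ftp deploy
      uses: someuser/ftp-action#master
      with:
        config: ${{ secrets.FTP_CONFIG }}
    - uses: someuser/ftp-action#efa82c9e876708f2fedf821563680e2058330de3
    - uses: ./.github/actions/local-step
"""

USES_LINE_RE = re.compile(r"^\s*-?\s*uses:\s*(\S+)\s*$")
REF_RE = re.compile(r"^[^#@\s]+[#@](\S+)$")
FULL_SHA_RE = re.compile(r"^[0-9a-f]{40}$")
TAG_RE = re.compile(r"^v?\d+(\.\d+)*$")

def classify(uses):
    """Classify how freely the action's owner can change what runs.

    local:  path inside this repo, covered by our own write access
    sha:    immutable full commit SHA, the strongest pin
    tag:    version tag, which the owner can move
    branch: tracks the owner's pushes and changes without notice
    """
    if uses.startswith("./"):
        return "local"
    m = REF_RE.match(uses)
    if not m:
        return "branch"        # no ref at all resolves to the default branch
    ref = m.group(1)
    if FULL_SHA_RE.match(ref):
        return "sha"
    if TAG_RE.match(ref):
        return "tag"
    return "branch"

def audit_workflow(text):
    """Return (line number, uses value, class) for every reference not pinned to a SHA."""
    findings = []
    for n, line in enumerate(text.splitlines(), start=1):
        m = USES_LINE_RE.match(line)
        if not m:
            continue
        kind = classify(m.group(1))
        if kind in ("tag", "branch"):
            findings.append((n, m.group(1), kind))
    return findings
```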

Go Stackdriver debugger error loading program

I am trying to set up Stackdriver debugging using Go. Using the article and this great medium post I came up with this solution.
Key parts, in cloudbuild.yaml
- name: gcr.io/cloud-builders/wget
  args: [
    "-O",
    "go-cloud-debug",
    "https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug"
  ]
...
In the Dockerfile I have
...
COPY gopath/bin/stackdriver-demo /stackdriver-demo
ADD go-cloud-debug /
ADD source-context.json /
CMD ["/go-cloud-debug","-sourcecontext=./source-context.json", "-appmodule=go-errrep","-appversion=1.0","--","/stackdriver-demo"]
...
However, the pods keep crashing; the container logs show this error:
Error loading program: decoding dwarf section info at offset 0x0: too short
EDIT: Using https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug may be outdated, as I haven't seen it used outside Daz's medium post. The official docs use the package cloud.google.com/go/cmd/go-cloud-debug-agent
I have updated the cloudbuild.yaml file to install this package:
- name: 'gcr.io/cloud-builders/go'
  args: ["get", "-u", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
- name: 'gcr.io/cloud-builders/go'
  args: ["install", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
And in the Dockerfile I can get access to the binary in gopath/bin/go-cloud-debug-agent
When I execute the gopath/bin/go-cloud-debug-agent with my own program as an argument:
/go-cloud-debug-agent -sourcecontext=./source-context.json -appmodule=go-errrep -appversion=1.0 -- /stackdriver-demo
I get another opaque error:
Error loading program: AttrStmtList not present or not int64 for unit 88
So basically using the cloud-debug binary from https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug and cloud-debug-agent binary from the package cloud.google.com/go/cmd/go-cloud-debug-agent both don't work and give different errors.
Would appreciate any tips on what I'm doing wrong and how to fix it.
OK :-)
Yes, you should follow the current Stackdriver documentation, e.g. go-cloud-debug-agent
Unfortunately, there are now various issues with my post including a (currently broken) gcr.io/cloud-builders/kubectl for regions.
I think your issue pertains to your use of golang:alpine. Alpine uses musl rather than the glibc that you find on most other Linux distros, and so you really must compile for Alpine to ensure your binaries reference the correct libc.
I'm able to get your solution working primarily by switching your Dockerfile to pull the Cloud Debug Agent while on Alpine and to compile your source on Alpine:
FROM golang:alpine
RUN apk add git
RUN go get -u cloud.google.com/go/cmd/go-cloud-debug-agent
ADD main.go src
RUN CGO_ENABLED=0 go build -gcflags=all='-N -l' src/main.go
ADD source-context.json /
CMD ["bin/go-cloud-debug-agent","-sourcecontext=/source-context.json", "-appmodule=stackdriver-demo","-appversion=1.0","--","main"]
I think that should get you beyond the errors that you documented and you should be able to deploy your container to Kubernetes.
I've made my version of your image publicly available (and will retain it for a few days for you):
gcr.io/dazwilkin-190402-55473323/roberson34#sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d
You may wish to reference it (by that digest) in your deployment.yaml
NB For Error Reporting to be "interesting", your code needs to generate errors and, with your example, this is going to be challenging (usually a good thing). You may consider adding another errorful handler that always results in errors so that you may test the service.

Provide Proxy Information to Job

Wondering if anyone has come across this: Is it possible to provide proxy information to Concourse job? Something along lines of this:
- name: bosh-deploy-0
  ...
jobs:
- name: deploybosh
  properties:
    http_proxy_url: <http_proxy_url>:<http_proxy_port>
    https_proxy_url: <https_proxy_url>:<http_proxy_port>
    no_proxy:
    - localhost
    - 127.0.0.1
If anyone has a working example, I'd be very much appreciative!!
You can only set these properties per worker. https://github.com/concourse/concourse-bosh-release/blob/v4.2.1/jobs/worker/spec#L142-L153.
If you want a job to run with specific proxy information set, you need to
Deploy a worker with those properties set, and with some worker tag.
Configure every step of the job with that same tag.
You could also set the proxy settings at the beginning of your job task (and optionally pass the proxy endpoint with parameters or a config server backend). That's maybe not the nicest way, however, it works quite well.

Concourse CI - S3 trigger not firing. How often does it check?

I've got a Concourse job that uses the appearance of a file in an Amazon S3 bucket as a trigger to a suite of tests. Using this resource --> https://github.com/concourse/s3-resource . Problem is, the job is not firing when the file appears. When I trigger the job manually, it does see the file and start the test suite.
Yaml config looks like this:
- name: s3-trigger-file
  type: s3
  source:
    bucket: my-bucket-name
    regexp: qabot_request_(.*).json
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}
jobs:
- name: my-job
  public: true
  plan:
  - get: s3-trigger-file
    trigger: true
When I click on the trigger itself in the Concourse UI, I see what looks like a running monitor:
As I said, the job isn't firing when the file appears, but a manual trigger does verify the S3 input is found.
How can I debug why the automatic trigger isn't firing? Also, how much latency is expected for the s3 resource to detect a new file has appeared?
Concourse 3.4. Thanks ~~
The capturing group in your regexp must refer to a semver compliant version.
See the documentation:
The version extracted from this pattern is used to version the resource. Semantic versions, or just numbers, are supported. Accordingly, full regular expressions are supported, to specify the capture groups.
Your capturing group is currently making the captured "version" quote2. You should probably delete the pipeline and regenerate it with a modified regex (e.g. qabot_request_quote(\d+).json)
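To make the answer concrete, an added Python example (not from the thread or the s3-resource project) applies both regexps to constructed object keys and shows which captures could serve as a version. The numeric-or-dotted version check is a simplified assumption standing in for the resource's semver parsing.

```python
import re

# Constructed object keys standing in for what the S3 resource would list
# in the bucket; no real bucket is consulted.
OBJECT_KEYS = [
    "qabot_request_quote1.json",
    "qabot_request_quote2.json",
    "qabot_request_quote10.json",
    "results.txt",
]

# Assumed approximation of an acceptable version: a bare number or a dotted
# numeric version such as 1.2.3 (the real resource uses a semver library).
VERSION_RE = re.compile(r"^\d+(\.\d+){0,2}$")

def extract_versions(pattern, keys):
    """Return (key, captured text, accepted?) for every key the regexp matches."""
    rx = re.compile(pattern)
    rows = []
    for key in keys:
        m = rx.match(key)
        if m is None or m.lastindex is None:
            continue        # no match, or no capturing group to version by
        captured = m.group(1)
        rows.append((key, captured, VERSION_RE.match(captured) is not None))
    return rows

def version_key(v):
    """Sort key that orders 10 after 2 rather than lexically."""
    return tuple(int(p) for p in v.split("."))

def latest_version(pattern, keys):
    """Highest accepted version, or None when nothing is versionable (so no trigger fires)."""
    accepted = [v for _, v, ok in extract_versions(pattern, keys) if ok]
    return max(accepted, key=version_key) if accepted else None
```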