Packer failure due to non-existent custom cookbook path created by shell-local provisioner (berks) - chef-solo

I have a Packer configuration that provisions using chef-solo in AWS EC2, and that works well. I have now introduced Berkshelf to manage 3rd-party cookbooks, and this part isn't going as smoothly.
I am working in a Chef repo that has locally developed cookbooks, roles, data bags, etc. By introducing berks, I want to keep the cookbooks directory clean and put 3rd-party cookbooks into vendor/cookbooks (which is git-excluded, so it keeps the repo clean and minimizes the chance of other devs adding/pushing berks-managed cookbooks into VCS). So I added a shell-local provisioner before the chef-solo provisioner, which runs berks vendor vendor/cookbooks, and updated the chef-solo provisioner with cookbook_paths ["cookbooks","vendor/cookbooks"]. My idea is that the shell-local would run before chef-solo and both cookbook paths would be available.
However, when I run packer build, it fails fast while resolving the cookbook paths, before the AWS builder even starts building, with a reference to the non-existent vendor/cookbooks directory. Here is the Packer provisioners segment:
"provisioners" : [
{
"type": "shell-local",
"command": "bundle install && bundle exec berks vendor vendor/cookbooks"
},
{
"type" : "chef-solo",
"cookbook_paths" : ["cookbooks","vendor/cookbooks"],
"environments_path" : "environments",
"roles_path" : "roles",
"run_list" : ["role[somerole]"]
}
],
When I run this, it fails:
amazon-ebs output will be in this color.
1 error(s) occurred:
* Bad cookbook path 'vendor/cookbooks': stat vendor/cookbooks: no such file or directory
Is there a mechanism in Packer that will run the shell-local provisioner first, before resolving the chef-solo provisioner? I'd like to avoid running berks in the builder (i.e. I want the cookbooks resolved by the host running Packer), and would ideally like to have this run solely in Packer as opposed to wrapper scripts that run berks first. I have worked around this for now by vendoring into cookbooks, but would like to avoid that route too if possible.

Just create an empty directory vendor/cookbooks.
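For example, from the directory holding the template (the path matches the cookbook_paths entry above):
mkdir -p vendor/cookbooks   # empty placeholder so the upfront path check passes; berks vendor fills it during the build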
Is there a mechanism in packer that will run the shell-local first before resolving the chef-solo provisioner?
No
and would ideally like to have this run solely in packer as opposed to wrapper scripts that run berks first.
I would recommend reconsidering that if you hit another problem like this. Packer tries to do one thing well and leaves a lot out that is better solved by a wrapper script.
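If you do go the wrapper route, it can stay very small - a minimal sketch, assuming your template is named template.json:
#!/bin/bash
# vendor 3rd-party cookbooks on the host, then hand off to packer
set -e
bundle install
bundle exec berks vendor vendor/cookbooks
packer build template.json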

Related

Why is Access to the path '/bin/roslyn' denied?

Long story short, I was able to build a Bitbucket .NET/MVC/Angular project successfully on a Windows 2019 Azure-hosted agent, but I am unable to make it build successfully on an Ubuntu agent.
The reason I want to build it on Ubuntu is that I noticed the build time is way faster than on the Windows agent, which makes sense considering the platforms.
I am facing this error:
Copying file from "/home/vsts/work/1/s/Bobby.ProjectA/obj/Debug/Bobby.ProjectA.pdb" to "/home/vsts/work/1/s/Bobby.ProjectA/bin/Bobby.ProjectA.pdb".
CopyRoslynCompilerFilesToOutputDirectory:
Creating directory "/bin/roslyn".
Creating directory "/bin/roslyn".
Creating directory "/bin/roslyn".
Creating directory "/bin/roslyn".
/home/vsts/work/1/s/packages/Microsoft.CodeDom.Providers.DotNetCompilerPlatform.1.0.8/build/net45/Microsoft.CodeDom.Providers.DotNetCompilerPlatform.props(17,5):
warning MSB3021: Unable to copy file "/home/vsts/work/1/s/packages/Microsoft.Net.Compilers.2.4.0/build/../tools/csc.exe" to "/bin/roslyn/csc.exe". Access to the path '/bin/roslyn' is denied. [/home/vsts/work/1/s/Bobby.ProjectA/Bobby.ProjectA.csproj]
According to this post, the issue is that the VBCSCompiler is locking the source.
So I have exhausted all of the solutions here to kill the VBCSCompiler, but none of them worked.
I also can't restart the Ubuntu agent during a build due to a CI limitation, and running a killall VBCSCompiler bash script before the msbuild task resulted in this error: VBCSCompiler: no process found
So now I am stumped: if the VBCSCompiler process is not even running, why is access to the path '/bin/roslyn' denied?
I've also tried updating/installing the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package per this post here, but that didn't help anything.
This is what my build tasks look like, if it helps:

Can't deploy additional packages in a pipeline job

I use a Node.js app in continuous delivery. Recently I installed a package (puppeteer) which fails to launch because it requires some shared libraries (xlib). This issue is documented (here) and I just need to install additional packages.
So I added these additional lines to my "BUILD" job:
#!/bin/bash
npm install
sudo apt-get update
sudo apt-get install -y --fix-missing libx11-6 libx11-xcb1 libxcb1 .......
It installs successfully (a couple of errors, though) and the build job ends with success. (6 upgraded, 133 newly installed, 0 to remove and 55 not upgraded.)
But when I start the app in the "deploy" stage, the shared library is still missing!
Am I installing this properly?
2020-05-20T08:27:03.83+0000 [APP/PROC/WEB/0] ERR Unhandled Rejection at: Error: Failed to launch the browser process!
2020-05-20T08:27:03.83+0000 [APP/PROC/WEB/0] ERR /home/vcap/deps/0/node_modules/puppeteer/.local-chromium/linux-756035/chrome-linux/chrome: error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory
You may want to discuss this problem directly on our public Slack.
Self-register here: https://ic-devops-slack-invite.us-south.devops.cloud.ibm.com/
then ask your question here: https://ibm-devops-services.slack.com/
I suspect you should add the missing dependencies to your package.json
Sorry to hear that registration did not work. Simply go here https://ic-devops-slack-invite.us-south.devops.cloud.ibm.com/, enter your email address, and get your invite. You should receive an email to register - pick a password of your choice.
Anyhow, I'll check on your issue ASAP.
1 - Ensure the puppeteer dependencies are installed without any errors. You wrote "it installs successfully (couple of errors though)" and "55 not upgraded"; possibly the dependencies are not fully installed or not at the required level.
2 - As suggested in the previous comments, you are using the pipeline base image. You may want to build and use your own custom image, one that matches all your prerequisites:
https://cloud.ibm.com/docs/ContinuousDelivery?topic=ContinuousDelivery-custom_docker_images
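A custom image along those lines could be as simple as this sketch - the base image name is a placeholder, and only the apt packages come from the question above:
cat > Dockerfile <<'EOF'
# extend whatever base image the pipeline currently uses (placeholder name below)
FROM ibmcom/pipeline-base-image:latest
RUN apt-get update && \
    apt-get install -y --no-install-recommends libx11-6 libx11-xcb1 libxcb1 && \
    rm -rf /var/lib/apt/lists/*
EOF
docker build -t my-custom-pipeline-image .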
OK, got it sorted. data_Henrik was right from the start.
What I was doing above in the deployment jobs was useless - it is NOT what will be deployed with the app.
Instead, you need to deploy with multiple buildpacks: for my app, the standard Node.js buildpack plus a buildpack made specifically to install Debian dependencies: https://github.com/cloudfoundry/apt-buildpack. Example here: https://ict.swisscom.ch/2019/11/no-root-access-no-debian-packages-on-cloud-foundry-thats-past-with-the-apt-buildpack/
So for my Node.js app it ends up with:
1 - a specific apt.yml file containing the list of dependencies (note I had to add a couple more, e.g. libgbm-dev)
2 - a specific multi-buildpack.yml containing the list of buildpacks (both files are sketched below)
And that is it. I run the usual build and deploy jobs.
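A rough sketch of the two files, written out as shell here-docs - the package list is abbreviated to the libraries mentioned above, and the exact buildpack references may differ for your setup:
# apt.yml - Debian packages for the apt-buildpack to install
cat > apt.yml <<'EOF'
---
packages:
  - libx11-6
  - libx11-xcb1
  - libxcb1
  - libgbm-dev
EOF
# multi-buildpack.yml - buildpacks applied in order (apt first, then Node.js)
cat > multi-buildpack.yml <<'EOF'
buildpacks:
  - https://github.com/cloudfoundry/apt-buildpack
  - https://github.com/cloudfoundry/nodejs-buildpack
EOF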

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the ExternalBuilder. I set up core.yaml and rebuilt the containers to use it. On "peer lifecycle chaincode install .tgz..." I get an error that the path to the scripts in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and am using the first-network setup. I dropped the part of byfn.sh that connects to the cli container so that I can do that part manually; the create, join, and update-anchors steps succeed, and then the install fails. On the install, I'm failing on the /bin/detect step, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external builder configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means in the cli container. Otherwise, I've tried to walk the code to see how the docker containers are created dynamically, and from what image, but I haven't been able to nail that down yet.
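A quick way to sanity-check the bind mount from the host - the container name and builder path below are just placeholders based on the first-network setup, not something from the docs:
# the builder scripts must be visible inside the peer container at the exact
# path that chaincode.externalBuilders[].path points to in core.yaml
docker exec peer0.org1.example.com ls -l /builders/external/bin
# expect to see: detect, build, release (and optionally run)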

How to avoid redundancy and time loss when re-building images during development?

As a Vagrant user, when trying Docker I noticed one significant difference between the development workflow with Vagrant and with Docker - with Docker I need to rebuild my image from scratch every time, even if I only made minor changes to the code.
This is a major problem for me, because rebuilding the image is often very redundant and time-consuming.
Perhaps some smart workflows for Docker have already been invented; if so, what are they?
I filed a feature request for the vagrant-cachier plugin for saving docker build data and attached a bash workaround for that process. If you are okay with hacking around it yourself, you can implement the scripts in Vagrant.
caching docker build data with vagrant
Note that this procedure needs the vagrant-cachier plugin to be installed and has to save and load 300+ MB files from disk if they are new to the machine. Thus it's really slow if you have Dockerfiles with just 1-5 lines of code, but it's fast if you have Dockerfiles with a lot of LOC or images that have to be downloaded from the net.
Also note that this approach saves every intermediate build step. So if you are building an image, change a line in the middle of the Dockerfile, and build again, the docker build process will reuse all cached intermediate containers up to the changed line.
Using base images is still the preferred way, but you can combine both procedures.
Feel free to post improvements and subscribe so that fgrehm will maybe implement this natively in his plugin.
As Mark O'Connor suggested, one of the tips may be building a base image for your container(s). This image should contain the dependencies, package installation, downloads... or any other time-consuming activity. Such a base image should need rebuilding less frequently than the other one(s). In a similar way, if the final state of each step of your Dockerfile doesn't change, Docker doesn't build that layer again. Thus, try to execute the commands that change this state on almost every run (e.g. apt-get update) as late as you can, so Docker doesn't have to rebuild the steps before them. Likewise, try to put the parts of your Dockerfile you edit often in the later steps rather than the first ones.
Another option, if you compile or download something inside the container, is to have it downloaded or compiled into a host folder and attach it to the container using the -v or --volume option of docker run.
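For example - the image name and paths below are only placeholders:
# dependencies are built/downloaded on the host into ./deps, then mounted
# into the container instead of being baked into the image on every build
docker run -v "$(pwd)/deps:/opt/app/deps:ro" my-image /opt/app/start.sh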
Finally, there are other approaches to this issue, such as the one used by Chef with knife container. In this approach you build the container using Chef cookbooks, and each time you rebuild it (because you have edited your cookbooks...) the changes are applied as a new Docker layer (AUFS layer) and you don't have to repeat the whole process. I wouldn't recommend this solution unless you have experience with Chef and already have cookbooks to manage your software. You have to work harder to get it working, and if you want Chef only to manage Docker containers I don't think it's worth it (although Chef is a great option for managing infrastructure).
To automate the build process when you have several images that depend on each other, you can use a bash script to help with that task (credits to smola#github):
#!/bin/bash
IMAGES="${IMAGES:-stratio/base:test stratio/mesos:test stratio/spark-mesos:test stratio/ingestion:test}"
LATEST_TAG="${LATEST_TAG:-test}"

for image in $IMAGES ; do
  # split "user/name:tag" into its parts
  USER=${image/\/*/}
  aux=${image/*\//}
  NAME=${aux/:*/}
  TAG=${aux/*:/}
  # each image is expected to live in a NAME/TAG directory containing its Dockerfile
  DIR=${NAME}/${TAG}
  pushd $DIR
  docker build --tag=${USER}/${NAME}:${TAG} .
  if [[ $TAG = $LATEST_TAG ]] ; then
    docker tag ${USER}/${NAME}:${TAG} ${USER}/${NAME}:latest
  fi
  popd
done
There are a couple of tricks that might improve your workflow (very web-focused).
Docker caching
Always make sure you add your source to your Docker image at the very end of the Dockerfile.
Example:
COPY data/package.json /data/
RUN cd /data && npm install
COPY data/ /data
This will make sure you get optimal caching when building the image, and that Docker doesn't have to rebuild the npm packages when you change your source.
Also, make sure you don't have a base image that adds folders/files that change often (like base images doing COPY . /data/).
fig mount
Use fig (or another tool), and mount your source directory when developing. This way, you can develop with instant changes and still use the current version of your code when building the image.
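A minimal fig.yml for that kind of setup could look like this - the service name, port, and paths are placeholders:
cat > fig.yml <<'EOF'
web:
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/data    # mount the host source over the image's /data for instant changes
EOF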
development server
You can start your development web server while you are developing, and nginx when not (if you are developing a www app, but the same idea applies to other apps).
For example, in your startup script, do something like:
if [[ $DEBUG ]]; then
/usr/bin/supervisorctl start gulp
else
/usr/bin/supervisorctl start nginx
fi
And have autostart=false in your supervisord.conf files.
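A matching supervisord.conf fragment might look like this - the program commands are placeholders; only autostart=false and the program names come from the snippet above:
cat >> /etc/supervisor/conf.d/app.conf <<'EOF'
[program:gulp]
; placeholder dev-server command, started via "supervisorctl start gulp" when DEBUG is set
command=gulp watch
autostart=false

[program:nginx]
; started via "supervisorctl start nginx" otherwise
command=nginx -g "daemon off;"
autostart=false
EOF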
auto-refresh app
If you are developing a web app, use tools like gulp and e.g. gulp-connect; if you are developing a Python/Django app, use the runserver utility. Both reload the server when they detect changes in the files.
If you are using the if [[ $DEBUG ]] ... trick, make them listen on the same port as your normal instance (nginx). That way you can have one configuration for your reverse proxy, i.e. just send the traffic to, for example, www:8080, and it will hit your web page both in production and while you are developing.
Create a base image that holds the bulk of your application's dependencies. This will significantly reduce your docker build times.

Puppet - recognize new build versions and deploy

I have a Puppet master that sources my application builds from a master folder, e.g. xxxxx_v1.0.0.zip and yyyyy_v1.0.8.zip [xxxxx gets deployed to one set of servers and yyyyy to another set of servers].
What is the best way to handle new versions of my application builds on the Puppet master, without editing the .pp files on the master to reference the new build number in the filename - preferably automatically?
Thanks
A good way is to build a suitable package for your operating system instead. Puppet can use those with
package { 'application-x': ensure => latest }
Failing that, you can solve this
on the agent side, by fetching your application metadata from somewhere, e.g. with an exec of wget, then having it run a script to perform the deployment if necessary (see the sketch after this list)
on the master side, by using an ENC like the Puppet Dashboard, or better yet Hiera, to hold your latest version information
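For the agent-side variant, a minimal sketch - the URLs, paths, and VERSION file are all hypothetical:
#!/bin/bash
# check-and-deploy script, intended to be run by a Puppet exec on the agent
set -e
APP=application_x
LATEST=$(wget -qO- http://buildserver.example.com/${APP}/latest)   # metadata: latest version string
CURRENT=$(cat /opt/${APP}/VERSION 2>/dev/null || true)
if [ "$LATEST" != "$CURRENT" ]; then
  wget -q "http://buildserver.example.com/${APP}/${APP}_v${LATEST}.zip" -O /tmp/${APP}.zip
  unzip -o /tmp/${APP}.zip -d /opt/${APP}
  echo "$LATEST" > /opt/${APP}/VERSION
fi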
If you really want to do this through Puppet's fileserver without touching any metadata, just dropping the files into your modules, you can try the generate function.
$latest_zip_application_x = generate("/usr/local/bin/find_latest application_x")
file { 'application_x.zip':
...
source => "puppet:///modules/application_x/path/to/$latest_zip_application_x",
}
where /usr/local/bin/find_latest is a script that will find the most recent version of your package and write it to stdout.
This is pretty horrible practice though - you are really not catering to Puppet's strengths with constructs like these.
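For illustration, a hypothetical find_latest could be as simple as this - the module file path is an assumption:
#!/bin/bash
# print the basename of the newest versioned zip for the given application prefix
prefix="$1"
ls /etc/puppet/modules/${prefix}/files/${prefix}_v*.zip | sort -V | tail -n 1 | xargs -n 1 basename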