Has anyone attempted to convert Opsworks Chef v11 recipes to Chef v12?
I'm running multiple stacks on Chef 11 and decided to start converting some of them to Chef 12. Since AWS dropped their OpsWorks app layers, such as the Rails layer recipes, we (OpsWorks users) are now responsible for creating the deploy user, checking out git repos into deploy_to, and so on.
It's all good in terms of flexibility, and there are no more namespace conflicts, but we are missing all the good stuff OpsWorks was giving us for free.
Has anyone converted these recipes for Chef 12 and open-sourced them? If not, is the community interested in such recipes at all? I'm pretty sure I'm not alone here.
Thank you in advance!
The opsworks_ruby cookbook on the Supermarket is basically everything you need. It even puts the apps into the same directories (e.g. /srv/www/app_name/), sets up your database.yml, and so on.
The main difference between this cookbook and other non-OpsWorks recipes is that it pulls everything out of the OpsWorks configuration for you. You don't have to customize the recipes; just make sure your app and layers are named correctly and it will build everything from there, including your RDS configuration for the database.yml!
The main thing you lose is that the layers in OpsWorks won't be "Ruby aware", so you won't have fields for Rails-ish or Ruby-ish things and will instead need to manage those elsewhere. The way ENV vars are loaded is a little different too.
Also be sure to read up on AWS' implementation of Chef 12 for OpsWorks. They technically have two Chef cookbooks running: their internal one and yours. Theirs handles keeping the agent up to date, loading users (for SSH), wiring up monitoring, and so on. You'll have to manage the rest.
We've either replaced pieces of their huge cookbook with individual cookbooks from the Supermarket or just rewrote them. For instance, the old Chef 11 opsworks_initial_setup had a couple of things around tweaking network and Linux settings, so we recreated that.
It also uses a deploy user where applicable, for instance:
$ ps -eo user,command
USER COMMAND
// snip
root nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
aws opsworks-agent: master 10820
aws opsworks-agent: keep_alive of master 10820
aws opsworks-agent: statistics of master 10820
aws opsworks-agent: process_command of master 10820
deploy unicorn_rails master --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[0] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[1] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[2] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[3] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
nginx nginx: worker process
nginx nginx: worker process
That's just a small sample of the process output, but root boots things as needed and each process runs under its own user to limit rights and access.
I think the most common way is to use the "application" cookbook from the Supermarket: https://supermarket.chef.io/cookbooks/application/versions/4.1.6 (which is also based on Poise). Attention: use version ~4; they removed almost all of the good features in v5.
It will create the directory structure, supports different deploy strategies, and offers some events to hook into.
Be aware: in my opinion, the OpsWorks documentation is only semi-good when it comes to the "deploy with OpsWorks and Chef 12" topic. The information from the GUI (like the repo URL, etc.) is not on the node object but in a data bag for the application. For debugging it can be very helpful to look into the /var/chef/runs/<run-id>/ directory to see what is available from where.
A small snippet that shows the idea:
app = search("aws_opsworks_app").first

application app['shortname'] do
  owner 'root'
  group 'root'
  repository app['app_source']['url']
  revision 'master'
  path "/srv/#{app['shortname']}"
end
This will create the releases/current directory structure under /srv and check out the code. Note: you might think the SSH key you specify in the GUI is somehow automatically put in the proper place. It's not; you'll have to take care of that on your own. Check the Chef 11 OpsWorks cookbook for reference: https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/scm_helper/libraries/git.rb
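What that helper boils down to is writing the key to disk and pointing git at a wrapper script. A rough shell sketch of the same idea (the env var, paths and file names here are purely illustrative, not anything OpsWorks provides; the key value comes from app['app_source']['ssh_key']):
# assumes the key has already been read from the app data bag into $DEPLOY_SSH_KEY
mkdir -p /root/.ssh
install -m 0600 /dev/null /root/.ssh/opsworks-deploy-key
printf '%s\n' "$DEPLOY_SSH_KEY" > /root/.ssh/opsworks-deploy-key
# wrapper so git uses that key and skips host-key prompts
cat > /usr/local/bin/opsworks-git-ssh <<'EOF'
#!/bin/sh
exec ssh -i /root/.ssh/opsworks-deploy-key -o StrictHostKeyChecking=no "$@"
EOF
chmod +x /usr/local/bin/opsworks-git-ssh
# tell git to use the wrapper for the checkout
export GIT_SSH=/usr/local/bin/opsworks-git-ssh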
I don't know about the old OpsWorks cookbooks, but check out https://github.com/poise/application_examples/ for some examples of doing Rails (and more) deploys using plain Chef (these will work on OpsWorks too).
Is there any way we can get the Hyperledger Fabric binaries, or build them from the source code, given that our machines are behind firewalls? I am not able to run
curl -sSL goo.gl/byy2Qj | bash -s 1.0.5
which uses the following commands:
docker pull hyperledger/fabric-$IMAGES:$FABRIC_TAG
docker tag hyperledger/fabric-$IMAGES:$FABRIC_TAG hyperledger/fabric-$IMAGES
Docker Hub is blocked and downloading external images is not allowed.
I believe this is an issue for most enterprises whose systems sit behind firewalls and whose Docker access is restricted as well.
Just download the binaries for the orderer and peers (and configtx, etc.) directly, using this line from the script at goo.gl/byy2Qj. Browse the repository manually to find your flavor and release.
echo "===> Downloading platform binaries"
curl https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/${ARCH}-${VERSION}/hyperledger-fabric-${ARCH}-${VERSION}.tar.gz | tar xz
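For example, on 64-bit Linux with the 1.0.5 release mentioned above, the variables would typically expand like this (double-check the exact path by browsing the Nexus repository manually, as the layout has changed between releases):
ARCH=linux-amd64
VERSION=1.0.5
curl https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/${ARCH}-${VERSION}/hyperledger-fabric-${ARCH}-${VERSION}.tar.gz | tar xz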
You may still have to clone and install the CA server, and install CouchDB, Postgres, Kafka, ZooKeeper, etc., depending on how you want to set things up.
And you can always clone the main Fabric repo and build the binaries yourself.
You can then run them without Docker (note: the chaincode (cc) container still needs Docker available, but no images) or modify the Docker scripts and create your own containers.
This page in the docs gives some good clues if you want to build them yourself. You really only need to make peer and orderer, but you can also do make dist-clean all. Making all can take 45 minutes to an hour. You don't have to build or run any of the tests. And don't use Vagrant.
https://hyperledger-fabric.readthedocs.io/en/release/dev-setup/build.html
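If you do build from source, a minimal sequence for a 1.0.x release looks roughly like this (it assumes Go and the other prerequisites from that page are installed, and that the repo sits under your GOPATH as the 1.0.x Makefile expects):
mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
git clone https://github.com/hyperledger/fabric.git
cd fabric
git checkout v1.0.5
make peer orderer   # builds just the two binaries you need; much faster than 'make dist-clean all'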
I'm not sure I completely understand the role of Docker in the process of development and deployment.
Say, I create a Dockerfile with nginx, some database and something else which creates a container and runs fine.
I drop it somewhere in the cloud and execute it to install and configure all the dependencies and environment settings.
Next, I have a repository with a web application which I want to run inside the container I created and deployed in the first 2 steps. I regularly work on it and push the changes.
Now, how do I integrate the web application into the container?
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
Or, do I deploy the container once but have procedures inside Dockerfile that install utils that pull the code from repo by command or via hooks?
What if a container is running but I want to change some settings of, say, nginx? Do I add these changes into Dockerfile and recreate the image?
In general, what's the role of Docker in the daily app development routine? Is it used often if the infrastructure is running fine and only code is changing?
I think there is no single "use only this" answer; as you already outlined, there are different viable concepts available.
Deployment to staging/production/pre-production
a)
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
This is for sure the most Docker-ish way and aligns fully with the Docker philosophy. It is highly portable and reproducible, and suits anything from one container to a swarm of thousands. For example, this concept has no issue suddenly scaling horizontally when you need more containers, let's say due to heavy traffic/load.
It also aligns with the idea that only configuration/data should be dynamic in a Docker container, not code/binaries/artifacts.
This strategy should be chosen for production use, where deployments are less frequent. If you care about downtime during container rebuilds (on upgrade), there are good concepts to deal with that too.
We use this for production and pre-production instances.
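In practice, a) usually means CI builds a fresh image per code revision and the servers just pull and restart it. A rough sketch (the registry, image name and tag are placeholders):
# on CI
docker build -t registry.example.com/myapp:$(git rev-parse --short HEAD) .
docker push registry.example.com/myapp:$(git rev-parse --short HEAD)
# on the target host / orchestrator, for the revision you want to deploy
docker pull registry.example.com/myapp:abc1234
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:abc1234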
b)
Or, do I deploy the container once but have procedures inside
Dockerfile that install utils that pull the code from repo by command
or via hooks?
This is a more common practice for very frequent deployments. You can go with the pull concept (what you said) or the push concept (docker cp / ssh scp); I guess the latter is preferred in this kind of environment.
We use this kind of strategy for staging instances, which should basically reflect the current codebase and its status. We also use it for smoke tests and CI, depending on the application. If the app changes its dependencies a lot and a clean build requires rebuilding with those dependencies to really ensure things are tested as they are supposed to be, we actually rebuild the image during CI.
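A push-style deployment into a running staging container could look roughly like this (the container name, paths and restart command are made up for the example):
# build/package the artifact on CI first, then:
docker cp ./build/. staging_app_1:/srv/app/current
# reload however the app supports it (signal, restart script, ...)
docker exec staging_app_1 /srv/app/current/bin/restart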
Configuration management
1.
What if a container is running but I want to change some settings of,
say, nginx? Do I add these changes into Dockerfile and recreate the
image?
I am not labeling this c) since it is configuration management, not application deployment, and the answer can be very complicated depending on your case. In general, if redeployment requires configuration changes, then whether you can go with b) or always have to go with a) depends on your configuration management.
E.g. if you use https://github.com/markround/tiller with Consul as the backend, you can push the configuration changes into Consul and regenerate the configuration with tiller, using consul watch -prefix /configuration tiller as a watch task to react to those value changes.
This enables you to go with b) and still fix the configuration.
You can also use https://github.com/markround/tiller and, on deployment, e.g. change ENV vars or some kind of yml file (tiller supports different backends), and call tiller during deployment yourself. This most probably requires SSH access, or that you SSH onto the host and use docker cp and docker exec.
Development
In development, you generally reuse the docker-compose.yml file you use for production, but overload it with a docker-compose-dev.yml to e.g. mount your code folder, set RAILS_ENV=development, reconfigure/mount some other configuration like xdebug or more verbose nginx logging, whatever you need. You can also add some fake MTA services like fermata and so on.
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
docker-compose-dev.yml only overrides some values; it does not redefine or duplicate the whole file.
Depending on how powerful your configuration management is, you can also do a pre-installation while bringing up the development stack.
We actually use scaffolding for that: we use https://github.com/xeger/docker-compose, and after running it, we use docker exec and docker cp to preinstall an instance or stage something. Some examples are here: https://github.com/EugenMayer/docker-sync/wiki/7.-Scripting-with-docker-sync
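As a rough illustration of that kind of scripted pre-installation (the compose project/container names, database and seed file are invented for the example):
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up -d
# container names depend on your compose project name; adjust accordingly
docker exec myproject_app_1 bundle install
docker cp ./seeds/dev-data.sql myproject_db_1:/tmp/dev-data.sql
docker exec myproject_db_1 psql -U app -d app_development -f /tmp/dev-data.sql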
If you are developing under OSX and you face performance issues due to OSXFS / code shares, you probably want to have a look at http://docker-sync.io (I am biased, though).
I'm trying to use sidekiq on Bluemix. I think that I'm on the right track, but it's not working completely.
I have an app with Sinatra that uses sidekiq jobs to make many actions. I set the following line in my manifest.yml file:
command: bundle exec rackup config.ru -p $PORT && bundle exec sidekiq -r ./server.rb -c 3
I thought that with this command Sidekiq would run. However, when I call the endpoint that creates a job, the job just stays in the "Queue" section of the Sidekiq panel.
What actions do I need to take to get Sidekiq to process the job?
PS: I'm a beginner on Bluemix. I'm trying to migrate my app from Heroku to Bluemix.
Straightforward answer to this question "as asked":
Your start-up command never gets to the second part, the one after '&&'. If you try starting that in your local environment, the result will be the same: the server starts up and the console simply tails the server logs. The first command doesn't exit (and so never evaluates to success) until you send it a kill signal, so the part after '&&' never gets to run at the same time.
Substituting that with a single '&' sort-of-kinda fixes it, since both commands will then run at the same time.
command: bundle exec rackup config.ru -p $PORT & bundle exec sidekiq
What is not ideal with that solution? Probably a lot of things. The biggest offender, though: having two processes active at the same time, with only one of them expected and observed (the second one).
Sending cf stop to the application instance created by this manifest stops only the observed process before decommissioning the instance; in that case we cannot be sure that the first process freed up external resources by properly sending notifications, closing connections, and so on.
What you probably could consider instead:
1. Separate instances.
Bluemix is a CF implementation, and with a quick manifest.yml deploy, there is nothing preventing you from having the app server and the Sidekiq workers run on separate instances.
2. Better shell invocation (a concrete version for this app follows the list).
command: sh -c 'command1 & command2 & wait'
3. TBD; probably a lot of options, but I am a beginner as well.
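Applied to the manifest from the question, option 2 might look like this (a sketch, keeping the original Sidekiq flags):
command: sh -c 'bundle exec rackup config.ru -p $PORT & bundle exec sidekiq -r ./server.rb -c 3 & wait'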
Separate app instances on CloudFoundry for your rack-based application and your workers would be preferable because you can then:
Scale web / workers independently (more traffic? Just scale the web application)
Deploy each component independently, if needed
Make sure each process is health-checked
The downside of using & to join commands, as suggested in the other answer, is that the first process will launch in the background. This means you won't have reliable monitoring and automatic restarts if the first process crashes.
There's a slightly out-of-date example on the Cloud Foundry website which demonstrates using two application manifests (one for web, one for workers) to deploy each part independently.
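In practice that means keeping two manifests in the repo and pushing each part separately, roughly like this (the file and app names are just an example; the worker manifest would typically have no route and use the Sidekiq start command):
cf push myapp-web -f manifest-web.yml
cf push myapp-worker -f manifest-worker.yml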
In a microservice architecture, I'm having a hard time grasping how one can manage environment-specific config (e.g. IP address and credentials for database or message broker).
Let's say you have three microservices ("A", "B", and "C"), each owned and maintained by a different team. Each team is going to need a team integration environment... where they work with the latest snapshot of their microservice, along with stable versions of all dependency microservices. Of course, you'll also need QA/staging/production environments as well. A simplified view of the big picture would look like this:
"Microservice A" Team Environment
Microservice A (SNAPSHOT)
Microservice B (STABLE)
Microservice C (STABLE)
"Microservice B" Team Environment
Microservice A (STABLE)
Microservice B (SNAPSHOT)
Microservice C (STABLE)
"Microservice C" Team Environment
Microservice A (STABLE)
Microservice B (STABLE)
Microservice C (SNAPSHOT)
QA / Staging / Production
Microservice A (STABLE, RELEASE, etc)
Microservice B (STABLE, RELEASE, etc)
Microservice C (STABLE, RELEASE, etc)
That's a lot of deployments, but that problem can be solved by a continuous integration server and perhaps something like Chef/Puppet/etc. The really hard part is that each microservice would need some environment data particular to each place in which it's deployed.
For example, in the "A" Team Environment, "A" needs one address and set of credentials to interact with "B". However, over in the "B" Team Environment, that deployment of "A" needs a different address and credentials to interact with that deployment of "B".
Also, as you get closer to production, environmental config info like this probably needs security restrictions (i.e. only certain people are able to modify or even view it).
So, with a microservice architecture, how do you maintain environment-specific config info and make it available to the apps? A few approaches come to mind, although they all seem problematic:
Have the build server bake them into the application at build-time - I suppose you could create a repo of per-environment properties files or scripts, and have the build process for each microservice reach out and pull in the appropriate script (you could also have a separate, limited-access repo for the production stuff). You would need a ton of scripts, though. Basically a separate one for every microservice in every place that microservice can be deployed.
Bake them into base Docker images for each environment - If the build server is putting your microservice applications into Docker containers as the last step of the build process, then you could create custom base images for each environment. The base image would contain a shell script that sets all of the environment variables you need. Your Dockerfile would be set to invoke this script prior to starting your application. This has similar challenges to the previous bullet-point, in that now you're managing a ton of Docker images.
Pull in the environment info at runtime from some sort of registry - Lastly, you could store your per-environment config inside something like Apache ZooKeeper (or even just a plain ol' database), and have your application code pull it in at runtime when it starts up. Each microservice application would need a way of telling which environment it's in (e.g. a startup parameter), so that it knows which set of variables to grab from the registry. The advantage of this approach is that now you can use the exact same build artifact (i.e. application or Docker container) all the way from the team environment up to production. On the other hand, you would now have another runtime dependency, and you'd still have to manage all of that data in your registry anyway.
How do people commonly address this issue in a microservice architecture? It seems like this would be a common thing to hear about.
Docker compose supports extending compose files, which is very useful for overriding specific parts of your configuration.
This is very useful at least for development environments and may be useful in small deployments too.
The idea is having a base shared compose file you can override for different teams or environments.
You can combine that with environment variables with different settings.
Environment variables are good if you want to replace simple values; if you need to make more complex changes, then you use an extension file.
For instance, you can have a base compose file like this:
# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name-a"
    ports:
      - "${PORT_A}"
  service-b:
    image: "image-name-b"
    ports:
      - "${PORT_B}"
  service-c:
    image: "image-name-c"
    ports:
      - "${PORT_C}"
If you want to change the ports, you can just pass different values for the PORT_X variables.
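For example, each team could bring up the same compose file on its own ports by exporting the variables inline (or putting them into an .env file next to the compose file):
PORT_A=8080 PORT_B=8081 PORT_C=8082 docker-compose up -d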
For complex changes you can have separate files that override specific parts of the compose file. You can override specific parameters for specific services; any parameter can be overridden.
For instance you can have an override file for service A with a different image and add a volume for development:
# docker-compose.override.yml
services:
  service-a:
    image: "image-alternative-a"
    volumes:
      - /my-dev-data:/var/lib/service-a/data
Docker Compose picks up docker-compose.yml and docker-compose.override.yml by default; if you have more files, or files with different names, you need to specify them in order:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.dev-service-a.yml up -d
For more complex environments the solution is going to depend on what you use. I know this is a Docker question, but nowadays it's hard to find pure Docker systems, as most people use Kubernetes. In any case, you are always going to have some sort of secret management provided by the environment and managed externally; from the Docker side of things you then just have variables that are provided by that environment.
I have a (RedHat) Puppet Master server, with Puppet Master's configuration files in /etc/puppet.
I've placed the entire contents of /etc/puppet into source control and would like my CI server (TeamCity on Windows) to be able to deploy changes to the Puppet Master server.
How can I accomplish this?
I have an idea that I can use scp, but copying to /etc/puppet would require sudo privileges. At the same time I would like a simple setup.
If there are any alternative or better ways of deploying puppet master configuration files, those answers would also be helpful.
It's unlikely that the whole /etc/puppet should be subjected to CI.
It might be more appropriate to move your $manifestdir and $modulepath contents outside that tree and make a dedicated CI user their owner. Just be careful to ensure read privileges for the puppet user.
This way, you could rely on SSH without too much of a security hole (but then, opening up your manifests for writing is always risky), and avoid the need to make the master configuration writeable to a non-root user.
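A minimal sketch of that approach from the CI side, assuming the manifests and modules have been moved under /opt/puppet and a dedicated ci user owns them (the host name, paths and user are only illustrative, and the TeamCity agent needs rsync/ssh available):
rsync -az --delete ./modules/ ci@puppetmaster.example.com:/opt/puppet/modules/
rsync -az --delete ./manifests/ ci@puppetmaster.example.com:/opt/puppet/manifests/
# make sure the puppet user can still read what was synced
ssh ci@puppetmaster.example.com 'chmod -R o+rX /opt/puppet/modules /opt/puppet/manifests'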