Offline Hyperledger Fabric setup on Red Hat 7.0

Is there any way we can get the Hyperledger Fabric binaries, or build them from the source code, given that our machines are behind a firewall? I am not able to run
curl -sSL goo.gl/byy2Qj | bash -s 1.0.5
which uses the following commands:
docker pull hyperledger/fabric-$IMAGES:$FABRIC_TAG
docker tag hyperledger/fabric-$IMAGES:$FABRIC_TAG hyperledger/fabric-$IMAGES
Docker Hub is blocked and external images are not allowed to be downloaded.
I believe this is an issue for most enterprises whose systems sit behind firewalls and are given only restricted access to Docker as well.

Download the binaries for the orderer and peers (and configtx, etc.) directly from this line in the script at goo.gl/byy2Qj; just browse manually to find your flavor and release.
echo "===> Downloading platform binaries"
curl https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/${ARCH}-${VERSION}/hyperledger-fabric-${ARCH}-${VERSION}.tar.gz | tar xz
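If the machine that needs the binaries has no internet access at all, the same tarball can be fetched on any machine that does and carried across. A rough example (the architecture string and version below are illustrative; substitute your own platform and release):
# Example values only: on 64-bit Linux with Fabric 1.0.5 the script would resolve
# ARCH to something like linux-amd64 and VERSION to 1.0.5.
curl -sSL -o hyperledger-fabric-linux-amd64-1.0.5.tar.gz https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/linux-amd64-1.0.5/hyperledger-fabric-linux-amd64-1.0.5.tar.gz
# Copy the tarball to the firewalled host over whatever channel is allowed, then:
tar xzf hyperledger-fabric-linux-amd64-1.0.5.tar.gz   # unpacks a bin/ directory with peer, orderer, configtxgen, ...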
You may still have to clone and install the CA server, and install CouchDB, Postgres, Kafka, ZooKeeper, etc., depending on how you want to set things up.
And you can always clone the main Fabric repo and make the binaries yourself.
You can then run them without Docker (note: the chaincode container needs Docker available, but no images) or modify the Docker scripts and create your own containers.
This page in the docs gives some good clues if you want to build the binaries yourself. You really only need to make peer and orderer, but you can do make dist-clean all. Making all can take 45 minutes to an hour. You don't have to build or run any of the tests, and don't use Vagrant.
https://hyperledger-fabric.readthedocs.io/en/release/dev-setup/build.html
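For the build-from-source route, a minimal sketch (assuming Git, a Go toolchain and the build prerequisites from the page above are available internally; the release tag is an example):
git clone https://github.com/hyperledger/fabric.git $GOPATH/src/github.com/hyperledger/fabric
cd $GOPATH/src/github.com/hyperledger/fabric
git checkout v1.0.5            # pick the release tag you need
make peer orderer              # builds just the two binaries (under .build/bin/ in the 1.0.x tree)
# or, for everything: make dist-clean all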

Related

How can I get the latest Business Central Docker image?

I am deploying a Business Central Docker image to Azure. Where can I find the details for the --image tag?
I am using
--image mcr.microsoft.com/businesscentral/sandbox:base
(but this gives a message that it is over 90 days old)
I also use
--image microsoft/bcsandbox:latest
and it still gives the error:
You are trying to run a container which is more than 90 days old.
Microsoft recommends that you always run the latest version of our containers.
Set the environment variable ACCEPT_OUTDATED to 'Y' if you want to run this container anyway.
How can I run the latest version without having to set the ACCEPT_OUTDATED flag?
The images are no longer being maintained; that is why you get the outdated-version warning. You can read more about that in these blog posts.
If you want to host the latest version in Azure, you will have to create a Windows Server with Containers and then use BcContainerHelper to build and run the required container image.

GitLab CI for systemd service

I have built a deb package for a systemd service, which I would like to test after building it with GitLab CI.
The image I use is based on debian:stable (i.e. buster at the time of this writing).
I am doing a basic smoke test like this:
test:
  stage: test
  script:
    - dpkg -i myservice.deb
    - systemctl start myservice
This fails with an error message, because systemctl is not found. If I install that as part of the test, it still fails because systemd is not the first process on the system.
How can I test a systemctl service on GitLab CI? Is there a Debian image which runs systemd?
systemd is not compatible with Docker out of the box, and it is normally not needed: the Docker paradigm is one service per container, so a service-control framework does not make much sense, unless you specifically rely on systemd for your tests…
I found instructions for running systemd inside a Docker container at https://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/ (and, as a follow-up, https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container/). They are based on Red Hat, but the principles should apply to other distributions as well (and they give some more background on systemd/Docker compatibility).
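A rough sketch of that approach translated to Debian (image name, package list and mounts are my own illustration, not taken from the blog posts; on GitLab CI it additionally requires a runner that allows privileged containers or an equivalent cgroup setup):
# Build a throwaway image whose PID 1 is systemd (sketch only, not a hardened setup).
docker build -t deb-systemd - <<'EOF'
FROM debian:buster
RUN apt-get update && apt-get install -y systemd systemd-sysv && apt-get clean
CMD ["/lib/systemd/systemd"]
EOF
# Boot it with the cgroup filesystem available, then drive the test from outside.
docker run -d --name sut --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v "$PWD:/pkg" deb-systemd
docker exec sut dpkg -i /pkg/myservice.deb
docker exec sut systemctl start myservice
docker exec sut systemctl is-active myservice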
However, at this point I am wondering if tests conducted on such a setup are still meaningful. It might be better to rely on a local VM for tests.

How to use Docker in the development/deployment workflow?

I'm not sure I completely understand the role of Docker in the process of development and deployment.
Say I create a Dockerfile with nginx, some database and something else; it builds into a container and runs fine.
I drop it somewhere in the cloud and execute it to install and configure all the dependencies and environment settings.
Next, I have a repository with a web application which I want to run inside the container I created and deployed in the first 2 steps. I regularly work on it and push the changes.
Now, how do I integrate the web application into the container?
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
Or, do I deploy the container once but have procedures inside Dockerfile that install utils that pull the code from repo by command or via hooks?
What if a container is running but I want to change some settings of, say, nginx? Do I add these changes into Dockerfile and recreate the image?
In general, what's the role of Docker in the daily app development routine? Is it used often if the infrastructure is running fine and only code is changing?
I think there is no single "use only this" answer; as you already outlined, there are different viable concepts available.
Deployment to staging/production/pre-production
a)
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
This is for sure the most Docker-ish way and aligns fully with the Docker philosophy. It is highly portable and reproducible, and suits anything from one container to "swarming" thousands of them. For example, this concept has no issue suddenly scaling horizontally when you need more containers, let's say due to heavy traffic/load.
It also aligns with the idea that only configuration/data should be dynamic in a Docker container, not code/binaries/artifacts.
This strategy should be chosen for production use, where deployments are less frequent. If you care about downtime during container rebuilds (on upgrades), there are good concepts to deal with that too.
We use this for production and pre-production instances.
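As a minimal sketch of a) (registry, image name and tag scheme are purely illustrative), the code is baked into the image at build time and the host only ever pulls and restarts:
# On the CI server: build an image that already contains the code, tagged with the commit.
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:$TAG .
docker push registry.example.com/myapp:$TAG
# On the target host: replace the running container with the freshly built, immutable image.
docker pull registry.example.com/myapp:$TAG
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:$TAG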
b)
Or, do I deploy the container once but have procedures inside
Dockerfile that install utils that pull the code from repo by command
or via hooks?
This is more common practice for very frequent deployments. You can go with the pull concept (what you described) or the push concept (docker cp / scp over ssh); I guess the latter is preferred in this kind of environment.
We use this strategy for staging instances, which should basically reflect the current codebase and its status. We also use it for smoke tests and CI, depending on the application. If the app changes its dependencies a lot and a clean build requires a rebuild with those dependencies to really ensure things are tested as intended, we actually rebuild the image during CI.
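A minimal sketch of the push variant of b) (container name, paths and the reload hook are assumptions about your app), which leaves the image untouched and only refreshes the code inside the running container:
# Copy the freshly built code into the running container, then trigger a reload.
docker cp ./build/. staging_app:/var/www/app
docker exec staging_app bin/reload   # hypothetical reload/restart hook of your application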
Configuration management
1.
What if a container is running but I want to change some settings of,
say, nginx? Do I add these changes into Dockerfile and recreate the
image?
I am not labelling this c), since it is configuration management, not application deployment, and the answer can be very complicated, depending on your case. In general, if a redeployment needs configuration changes, it depends on your configuration management whether you can go with b) or always have to go with a).
E.g. if you use https://github.com/markround/tiller with Consul as the backend, you can push the configuration changes into Consul and regenerate the configuration with tiller, using consul watch -prefix /configuration tiller as a watch task to react to those value changes.
This enables you to go with b) and fix the configuration.
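A tiny sketch of that flow (the key name is made up; it just has to match the prefix your tiller templates and the watch task use):
# Push the changed value into Consul; the watch task quoted above then re-runs tiller,
# which regenerates e.g. nginx.conf from its templates without rebuilding the image.
consul kv put configuration/nginx/worker_processes 8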
You can also use https://github.com/markround/tiller and, on deployment, e.g. change ENV vars or some kind of YAML file (tiller supports different backends), and call tiller during deployment yourself. This most probably requires ssh access, or you ssh onto the host and use docker cp and docker exec.
Development
In development, you generally reuse the docker-compose.yml file you use for production, but overload it with a docker-compose-dev.yml to e.g. mount your code folder, set RAILS_ENV=development, and reconfigure/mount some other bits like xdebug or more verbose nginx logging, whatever you need. You can also add fake MTA services like fermata and so on.
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
docker-compose-dev.yml only overloads some values; it does not redefine or duplicate the whole file.
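For illustration, such an override file can be very small (the service name "web" and the paths are assumptions about your stack):
# Write the development-only deltas into docker-compose-dev.yml, then start the
# stack with the combined command shown above.
cat > docker-compose-dev.yml <<'EOF'
version: "2"
services:
  web:
    volumes:
      - ./src:/var/www/app        # mount the code folder instead of baking it in
    environment:
      - RAILS_ENV=development
EOF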
Depending on how powerful your configuration management is, you can also do a pre-installation while bringing the development stack up.
We actually use scaffolding for that: we use https://github.com/xeger/docker-compose and, after running it, we use docker exec and docker cp to pre-install an instance or stage something. Some examples are here: https://github.com/EugenMayer/docker-sync/wiki/7.-Scripting-with-docker-sync
If you are developing under OSX and you face performance issues due to OSXFS / code shares, you probably want to have a look at http://docker-sync.io (I am biased, though).

Opsworks Chef 12 recipes

Has anyone attempted to convert OpsWorks Chef v11 recipes to Chef v12?
I'm running multiple stacks on Chef 11 and decided to start converting some of them to Chef 12. Since AWS dropped their OpsWorks app layers, such as the Rails layer recipes, we (OpsWorks users) are now responsible for creating the deploy user, checking out git repos into deploy_to, and so on.
It's all good as far as flexibility goes, and there are no more namespace conflicts, but we are missing all that good stuff OpsWorks used to give us for free.
Has anyone converted these recipes to Chef 12 and open-sourced them? Otherwise, is the community interested in such recipes at all? I'm pretty sure I'm not alone here.
Thank you in advance!
The opsworks_ruby cookbook on the Supermarket is basically everything you need. It even puts the apps into the same directories (i.e. /srv/www/app_name/), sets up your database.yml, and so on.
The main difference between this recipe and other non-OpsWorks recipes is that this will pull everything out of the OpsWorks configuration for you. You don't have to customize the recipes, just make sure your app and layers are named correctly - it'll build everything from there - including your RDS configuration for the database.yml!
Another difference is that the layers in OpsWorks won't be "Ruby aware", so you won't have fields for Rails-ish or Ruby-ish things and will instead need to manage those elsewhere. The way ENV vars are loaded is a little different too.
Also be sure to read up on AWS's implementation of Chef 12 for OpsWorks. They technically have two Chef cookbooks running: their internal one and yours. Theirs handles things like keeping the agent up to date, loading users (for ssh), wiring up monitoring, etc. You'll have to manage the rest.
We've either replaced stuff from their huge cookbook with individual cookbooks from the Supermarket or just rewritten it. For instance, the old Chef 11 opsworks_initial_setup had a couple of things around tweaking network and Linux settings; we recreated that.
It also does use deploy users as applicable, for instance:
$ ps -eo user,command
USER     COMMAND
// snip
root     nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
aws      opsworks-agent: master 10820
aws      opsworks-agent: keep_alive of master 10820
aws      opsworks-agent: statistics of master 10820
aws      opsworks-agent: process_command of master 10820
deploy   unicorn_rails master --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy   unicorn_rails worker[0] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy   unicorn_rails worker[1] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy   unicorn_rails worker[2] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy   unicorn_rails worker[3] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
nginx    nginx: worker process
nginx    nginx: worker process
Just a small example of the process output, but root boots things as needed and each process then runs under its own user to limit rights and access.
I think the most common way is to use the "application" cookbook from the Supermarket: https://supermarket.chef.io/cookbooks/application/versions/4.1.6 (which is also based on Poise). Attention: use version ~4; they removed almost all of the good features in v5.
It will create the directory structure, supports different deploy-strategies and offers some events to hook.
Be aware: in my opinion, the OpsWorks documentation is only semi-good when it comes to the "deploy with OpsWorks and Chef 12" topic. The information from the GUI (like the repo URL etc.) is not on the node object but in a data bag for the application. For debugging, it can be very helpful to have a look into the /var/chef/runs/<run-id>/ directory to see what is available from where.
Small snippet that shows the idea:
app = search("aws_opsworks_app").first

application "#{app['shortname']}" do
  owner 'root'
  group 'root'
  repository app['app_source']['url']
  revision 'master'
  path "/srv/#{app['shortname']}"
end
This will create the releases/current directory structure under /srv and check out the code. Note: you might think the SSH key you specify in the GUI is somehow automatically put in the proper place. It's not; you'll have to take care of that on your own. Check the Chef 11 OpsWorks cookbook: https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/scm_helper/libraries/git.rb
I don't know about the old OpsWorks cookbooks, but check out https://github.com/poise/application_examples/ for some examples of doing Rails (and more) deploys using plain Chef (it will work on OpsWorks too).

How can I deploy Puppet Master configuration files from my build server?

I have a (RedHat) Puppet Master server, with Puppet Master's configuration files in /etc/puppet.
I've placed the entire contents of /etc/puppet into source control and would like my CI server (TeamCity on Windows) to be able to deploy changes to the Puppet Master server.
How can I accomplish this?
I have an idea that I can use scp, but copying to /etc/puppet would require sudo privileges. At the same time I would like a simple setup.
If there are any alternative or better ways of deploying puppet master configuration files, those answers would also be helpful.
It's unlikely that the whole /etc/puppet should be subjected to CI.
It might be more appropriate to move the directories behind your $manifestdir and $modulepath outside that tree and make some CI client account their owner. Just be careful to ensure read privileges for the puppet user.
This way, you could rely on SSH without too much of a security hole (but then, opening up your manifests for writing is always risky), and avoid the need to make the master configuration writable to a non-root user.
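A rough sketch of that setup (paths, hostname and the "ci" account are assumptions; the CI agent needs an ssh/rsync or scp client available):
# One-time, by hand on the master: point Puppet at a code directory owned by the CI user,
# e.g. in /etc/puppet/puppet.conf:
#   manifestdir = /opt/puppet-code/manifests
#   modulepath  = /opt/puppet-code/modules
# Each deployment from the CI server is then an unprivileged copy plus a permission check:
rsync -az --delete ./manifests ./modules ci@puppetmaster:/opt/puppet-code/
ssh ci@puppetmaster 'chmod -R o+rX /opt/puppet-code'   # keep everything readable for the puppet user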