Chef cookbook for installing mongodb-shell only - mongodb

I am trying to install a mongo client via chef. Essentially this is what I have been doing in manual installs:
sudo vi /etc/yum.repos.d/mongodb.repo
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1
sudo yum install mongodb-org-shell-2.6.7
I don't want to reinvent the wheel here, nor do I want to install anything other than the shell. This cookbook looks like a good resource, but I cannot get it to install just the shell:
https://github.com/edelight/chef-mongodb
But it doesn't seem to allow the main components to be installed individually. Will I need to write an LWRP?

Well, I picked apart the mongodb cookbook, to this tune:
yum_repository 'mongodb-org-3.0' do
  description 'mongodb RPM Repository'
  baseurl "http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/#{node['kernel']['machine'] =~ /x86_64/ ? 'x86_64' : 'i686'}"
  action :create
  gpgcheck false
  enabled true
end

case node['platform_family']
when 'debian'
  # this option lets us bypass the complaint about a pre-existing init file;
  # necessary until upstream fixes the ENABLE_MONGOD/DB flag
  packager_opts = '-o Dpkg::Options::="--force-confold" --force-yes'
when 'rhel'
  # add the --nogpgcheck option when the package is signed
  # see: https://jira.mongodb.org/browse/SERVER-8770
  packager_opts = '--nogpgcheck'
else
  packager_opts = ''
end

package node[:frt_mongodb][:package_name] do
  options packager_opts
  action :install
  version node[:frt_mongodb][:package_version]
end
That said, it looks like I should be able to use that cookbook, configured with the right attributes, to accomplish this. The biggest problem is that the recipe manipulates files that aren't necessary for the shell.
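For reference, on a RHEL-family node the recipe above boils down to roughly these manual steps (a sketch; the repo file name follows the yum_repository resource name, and installing mongodb-org-shell without a pinned version is an assumption):

# create the repo definition the yum_repository resource would generate
sudo tee /etc/yum.repos.d/mongodb-org-3.0.repo <<'EOF'
[mongodb-org-3.0]
name=mongodb RPM Repository
baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1
EOF

# install only the shell, skipping the GPG check as the recipe does on rhel
sudo yum install -y --nogpgcheck mongodb-org-shell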

Related

How to remove the getty@tty1 link in the Yocto dunfell branch at compile time

I am building a Linux system for the Raspberry Pi 4, but for some reason I need to remove the getty@tty1 service in Yocto.
I have created a systemd_%.bbappend file for that.
The host PC is Ubuntu 18.04.
This works with the warrior branch.
Now I am trying to compile with the dunfell branch,
but while compiling systemd it gives an error like:
"cannot remove /etc/systemd/system/getty.target.wants/getty@tty1: no such file or directory"
Yet in the final image I can still see getty@tty1.service.
I also can't find any other recipe that creates this link.
My systemd_%.bbappend looks like this:
DESCRIPTION = "Customization of systemD services."
do_install_append() {
    rm ${D}${sysconfdir}/systemd/system/getty.target.wants/getty@tty1.service
}
FILES_${PN} += "${sysconfdir}/systemd/system"
REQUIRED_DISTRO_FEATURES = "systemd"
Thanks
Margish
On more recent versions of systemd (like the one in Yocto dunfell), the links to services are not created by the build system (ninja), but instead by running systemctl preset-all on the running system after installation (see here). This command reads the systemd preset files to determine which units to enable or disable by default.
In Yocto, what this means is that instead of the links being created as part of the systemd recipe, systemctl preset-all is run as part of the IMAGE_PREPROCESS_COMMAND during image creation in image.bbclass (see here). This is why the old method of deleting the symbolic links in /etc/systemd/system from the systemd recipe no longer works.
Instead, what you need to do is modify the 90-systemd.preset file to disable the getty@tty1 preset (or any other default system service) by changing this line:
enable getty@.service
to this:
disable getty@.service
You can accomplish this using a bbappend file as follows*:
# systemd_%.bbappend
do_install_append() {
    # Disable getty@tty1 from starting at boot time.
    sed -i -e "s/enable getty@.service/disable getty@.service/g" ${D}${systemd_unitdir}/system-preset/90-systemd.preset
}
*https://stackoverflow.com/a/67505478/286701
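To double-check the result, you can inspect the preset file and the unit state on the running target (a sketch; the path mirrors the ${systemd_unitdir}/system-preset location used in the sed above, typically /lib/systemd on dunfell):

# on the running target
grep getty /lib/systemd/system-preset/90-systemd.preset   # should show "disable getty@.service"
systemctl is-enabled getty@tty1.service                   # should now report "disabled"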

Auditd in Yocto

I'm trying to add auditd to Yocto Linux.
I added the selinux layer and its dependent layers: openembedded-core and meta-virtualization.
I added the layers to bblayers.conf.
I added DISTRO_FEATURES_append = " acl xattr pam selinux"
and PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls" to the local.conf file.
After building (with bitbake core-image-base) and running it in QEMU, the kauditd process is running, but none of the user-space tools are.
The /etc/audit folder does not exist, none of audit's config files (e.g. audit.rules) exist, and no user-space audit process is running.
The layer's info declares "User space tools for kernel auditing".
What am I missing?
Thanks.
I think I found something that will answer your question: if you know the name of a binary or library you expect to be in the target image, you can find which recipe provides it and then add that package to the image.
Start with the name of a binary or library you expect to be in the image and run the following. For me, I am using a CAN bus executable called candump. Which recipe is it in? To find out, I issue:
devtool search candump
Which returns:
can-utils
If nothing is returned, I'd double-check your conf/bblayers.conf to make sure the layer you think it's in is actually visible to your build system. If you are unsure, take a look at the link below; OpenEmbedded also has a handy search utility for packages.
After you find the recipe, you can then include it in your build.
Here is a good reference on the Yocto Project wiki for doing what I think you're asking:
https://wiki.yoctoproject.org/wiki/Cookbook:Example:Adding_packages_to_your_OS_image
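Applying the same approach to the audit user-space tools, something like this should tell you which recipe provides them and which layer it lives in (a sketch; the recipe name audit and the package name auditd are confirmed in the next answer):

devtool search audit
# or, to see matching recipes and the layers they come from:
bitbake-layers show-recipes "*audit*"
# then pull the package into the image, e.g. in local.conf:
# IMAGE_INSTALL_append = " auditd"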
I just added auditd to my system. This is what I did.
First I got the repository checked out.
cd /path/to/yocto
git clone git://git.yoctoproject.org/meta-selinux
cd meta-selinux
# checkout the branch matching the Yocto release you are on
git checkout thud
Then I added auditd to my build.
cd /path/to/build
bitbake-layers add-layer /path/to/yocto/meta-selinux
cat >> conf/local.conf <<'END'
IMAGE_INSTALL_append = " auditd"
END
bitbake my_normal_image_target
Even though the Yocto recipe is called audit, the package name is auditd.
Of course, auditd without selinux is useless but it did attempt to run (journalctl -u auditd) and /etc/audit exists.
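Once the image boots, a quick sanity check of the user-space side looks like this (a sketch; assumes the audit utilities made it into the image):

systemctl status auditd                  # is the daemon running?
auditctl -s                              # kernel audit status: enabled flag and auditd pid
ls /etc/audit                            # auditd.conf, audit.rules, rules.d/
ausearch -m USER_LOGIN --start today     # query recorded events, if any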
FWIW: to get auditd to the point where it reports, say, login success/failure, I had to do a few more things. I'm not just adding it to a standard Yocto image, but to a custom image and custom machine. I'm already using systemd, so I didn't have to change that (the layer seems to indicate it's required?). My local.conf looked like this:
# enable selinux
DISTRO_FEATURES_append = " acl xattr pam selinux"
# set the policy
PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls"
# install selinux packages and auditd
IMAGE_INSTALL_append = " packagegroup-core-selinux auditd"
# tell the kernel to enable selinux (non-enforcing) and auditing
APPEND_append = " selinux=1 enforcing=0 audit=1"
I also had to change linux-yocto_selinux.inc to load selinux.cfg later. Probably layer/recipe ordering could have solved this too?
-SRC_URI += "${@bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
+SRC_URI_append = "${@bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
With all that in place, I see audit logs in my journal.

RTEMS libbsd compilation issue

I followed the steps mentioned in the link
https://github.com/RTEMS/rtems-libbsd
for SPARC and version 4.12.
# cd /opt
# mkdir RTEMS
# cd RTEMS
# sandbox="$PWD/sandbox"
# mkdir sandbox
# cd "$sandbox"
# git clone git://git.rtems.org/rtems-source-builder.git
# git clone git://git.rtems.org/rtems.git
# git clone git://git.rtems.org/rtems-libbsd.git
Build and install the tools.
# cd rtems-source-builder/rtems
# ../source-builder/sb-set-builder --prefix="$sandbox/rtems-4.12" 4.12/rtems-sparc
Bootstrap the RTEMS sources:
-----------------------------
# cd "$sandbox"
# cd rtems
# PATH="$sandbox/rtems-4.12/bin:$PATH"
# ./bootstrap
# cd "$sandbox" or cd ..
# mkdir b-sis
# cd b-sis
# "$sandbox/rtems/configure" --target=sparc-rtems4.12 --prefix="$sandbox/rtems-4.12" --disable-networking --enable-tests=samples --enable-rtemsbsp=sis
# make
# make install
Build and install rtems-libbsd
================================
# cd "$sandbox"
# cd rtems-libbsd
# git submodule init
# git submodule update rtems_waf
# waf configure --prefix="$sandbox/rtems-4.12" --rtems-bsps=sparc/sis
At this step I got an error:
Setting top to : /home/subhilash/RTEMS/sandbox/rtems-libbsd
Setting out to : /home/subhilash/RTEMS/sandbox/rtems-libbsd/build
No valid arch/bsps found
The error means waf configure was unable to find a sparc/sis BSP installed in your prefix. Most likely configure and make failed without an obvious error, because the sis BSP was removed from RTEMS during the 4.12 development cycle. Try using erc32 instead of sis.
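For example, rebuilding the BSP for erc32 and reconfiguring libbsd would look roughly like this (a sketch based on the commands above; only the BSP name changes):

# cd "$sandbox"
# mkdir b-erc32
# cd b-erc32
# "$sandbox/rtems/configure" --target=sparc-rtems4.12 --prefix="$sandbox/rtems-4.12" --disable-networking --enable-tests=samples --enable-rtemsbsp=erc32
# make
# make install
# cd "$sandbox/rtems-libbsd"
# waf configure --prefix="$sandbox/rtems-4.12" --rtems-bsps=sparc/erc32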
You may get timelier responses about RTEMS by inquiring on the users mailing list.
You should also be aware that the sis simulator for the erc32 does not have a simulated NIC. And as I write this, the greth NIC that you probably want for LEON CPUs doesn't yet have a driver in the rtems-libbsd TCP/IP stack; it is only supported by the legacy IPv4 stack.
We would welcome a contribution to port this driver to the new stack.
I don't know whether the greth device is supported by QEMU, but the basic leon3 board is supported.

How to configure a Jenkins plugin from a Dockerfile

I have a user that just has access to pull from GitHub. In my Dockerfile I have added the plugins for Jenkins, such as github:1.22.4, but I want to configure the plugins, as some of the people who will build the image won't know how to do the configuration and don't care to learn.
So, I have some plugins for Jenkins and I want to be able to configure them using the Dockerfile. How can I do that?
My Dockerfile is pretty basic right now:
FROM jenkins
COPY plugins.txt /plugins.txt
RUN /usr/local/bin/plugins.sh /plugins.txt
and I have several plugins in plugins.txt, but the one I want to configure is to pull the code from github.
Did you check this git repository?
Let's say you have a plugins.txt like:
github:1.22.4
maven-plugin:2.7.1
ant:1.3
and a Dockerfile like the one in your question.
You can take a look at the example plugins.sh; here is the part that installs plugins. Since you want to configure some plugins, you can add your configuration while the plugin is being installed:
if ! grep -q "${plugin[0]}:${plugin[1]}" "$TEMP_ALREADY_INSTALLED"
then
    echo "Downloading ${plugin[0]}:${plugin[1]}"
    curl --retry 3 --retry-delay 5 -sSL -f "${JENKINS_UC_DOWNLOAD}/plugins/${plugin[0]}/${plugin[1]}/${plugin[0]}.hpi" -o "$REF/${plugin[0]}.jpi"
    unzip -qqt "$REF/${plugin[0]}.jpi"
    # if [ some plugin ] then
    #     here your configuration
    # fi
    (( COUNT_PLUGINS_INSTALLED += 1 ))
else
    echo " ... skipping already installed: ${plugin[0]}:${plugin[1]}"
fi
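For example, for the github plugin the "here your configuration" part could drop a pre-baked settings file into the image's ref directory, which the official Jenkins image copies into JENKINS_HOME on first start (a sketch; the file name github-plugin-configuration.xml and its source path are assumptions, and you would need to COPY that file into the image in your Dockerfile first):

if [ "${plugin[0]}" = "github" ]; then
    # $REF (usually /usr/share/jenkins/ref) is copied into JENKINS_HOME on first start,
    # so a plugin config XML placed here becomes the plugin's saved configuration.
    cp /tmp/github-plugin-configuration.xml "$REF/github-plugin-configuration.xml"
fi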

puppet 4.0 vagrant modules missing

I am trying to use puppet modules in vagrant.
My box is running puppet 4.0
I am installing modules using:
if [ ! -d /etc/puppet/modules/ ]; then
  puppet module install puppetlabs-java
fi
In site.pp I have:
class { 'java':
  distribution => 'jdk',
}
I keep getting an error about could not find declared class java.
Why can't Puppet find my module?
/etc/puppet/modules/ is the default path, isn't it?
My Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "bento/centos-7.2"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = true
    vb.memory = "8192"
  end
  config.vm.provision :shell, :path => "upgrade_puppet.sh"
  config.vm.provision :shell, :path => "puppet_modules.sh"
  config.vm.provision :puppet do |puppet|
    puppet.options = '--verbose --debug'
    puppet.environment_path = "puppet/environments"
    puppet.environment = "production"
  end
end
Updated answer now that Vagrantfile has been provided
Locations have changed in puppet 4 and directory environments are now in use by default.
So how you are using the puppet provisioner is correct. However, based on your Vagrantfile, Vagrant will upload all the directories it needs to the guest, under:
/tmp/vagrant-puppet/environments/production
When Vagrant calls puppet apply, it will look for the modules it requires in:
/tmp/vagrant-puppet/environments/production/modules
and that module directory does not exist on your host.
You can change your if block to be:
if [ ! -d /vagrant/puppet/environments/production/modules ]; then
  puppet module install puppetlabs-java --modulepath /vagrant/puppet/environments/production/modules
fi
/vagrant is shared between host and guest. This would install the java module and its dependencies on your host machine under:
puppet
|
+-- environments
    |
    +-- production
        |
        +-- manifests
        |   |
        |   +-- site.pp
        |
        +-- modules
            |
            +-- java
            |
            +-- stdlib
When you do your vagrant up, this content gets uploaded to the guest under:
/tmp/vagrant-puppet
Tested and confirmed based on your Vagrantfile.
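A quick way to see what actually landed on the guest (a sketch; the paths come from the description above):

vagrant ssh -c 'find /tmp/vagrant-puppet/environments/production -maxdepth 2 -type d'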
As Jaxim mentions, it's because the default directory locations have changed in the newer version of Puppet.
If you're interested in installing modules automatically with Puppet, I'd recommend the R10K Vagrant plugin: you can pin module versions, which makes updating them much easier, and it lets you pull in modules that aren't on the Forge, such as git repos.
https://github.com/jantman/vagrant-r10k
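Installing the plugin itself is a one-liner (the modules you want are then declared in a Puppetfile in your project, not shown here):

vagrant plugin install vagrant-r10k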
A little bit late, but I am switching from Chef over to Puppet (company policy, don't ask! :) ) and ran into the exact same situation. Coming from a Chef background, I was refusing to "pollute" my project folder with so much Puppet-specific stuff; in my opinion, I should only need the Vagrantfile and nothing else.
I was also getting the "Could not find declared class java at /tmp/vagrant-puppet/environments/production" error message. After much messing around, I found that in puppet.options you can provide any arguments that you would normally pass to puppet apply on the command line.
So, in case it helps, try modifying puppet.options in your Vagrantfile as follows:
config.vm.provision :puppet do |puppet|
  puppet.options = '--verbose --modulepath=/etc/puppetlabs/code/environments/production/modules'
  puppet.environment_path = "puppet/environments"
  puppet.environment = "production"
end
This helps Puppet find its own nose: instead of assuming everything is available under the /tmp folder, it looks for the modules in its own folder location, where they have already been installed.
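If you want to confirm where the module actually ended up versus where Puppet is looking, something like this on the guest can help (a sketch; the path matches the --modulepath used above):

vagrant ssh -c 'ls /etc/puppetlabs/code/environments/production/modules && puppet config print modulepath'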