How to deploy custom RPMs onto salt-minions?

I'm working with SaltStack to set up multiple machines, and I wanted to ask how we can deploy RPMs (placed at a custom location on the master) onto the minions. I already know how to install packages using a top.sls file and the name of the package to be installed on the minions, but what I'm looking for is how to deploy my custom RPMs onto the minions from the master.

There are two ways to approach this:
Option 1:
Define the list of RPMs in a pillar file:
package_names:
  - custom-rpm1: custom-rpm1-2.6.1-2.el7.x86_64.rpm
  - custom-rpm2: custom-rpm2-release-el7-3.noarch.rpm
  - custom-rpm3: custom-rpm3-latest.noarch.rpm
Then in an SLS file:
install-rpm:
  pkg.installed:
    - sources: {{ pillar['package_names'] }}
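As a usage sketch (assuming the state above is saved as install-rpm.sls under file_roots, and that the pillar entries resolve to locations the minions can fetch, typically salt:// or http(s):// URLs), you could apply it from the master like this:
# refresh pillar data on the minions so they pick up package_names
salt '*' saltutil.refresh_pillar
# apply the state; "install-rpm" is the assumed SLS file name here
salt '*' state.apply install-rpm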
Option 2:
Copy the directory containing the RPMs (salt://rpms in the example below is relative to file_roots) to the target machine and use the rpm command to install them (with a wildcard):
copy-rpms-dir:
  file.recurse:
    - name: /tmp/rpms
    - source: salt://rpms

install-rpms:
  cmd.run:
    - name: rpm -ivh /tmp/rpms/*.rpm
    - success_retcodes:
        - 2
Installing with the rpm command requires an extra check for return codes, since rpm returns a non-zero code (2) when the RPM is already installed.
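If you want to avoid relying on the special return code entirely, here is a minimal sketch of a guarded install (assuming the RPMs have already been copied to /tmp/rpms as above) that only runs rpm for packages that are not installed yet:
# install each RPM only if the package it contains is not installed yet
for f in /tmp/rpms/*.rpm; do
  pkg=$(rpm -qp --qf '%{NAME}' "$f")   # package name inside the RPM file
  rpm -q "$pkg" > /dev/null 2>&1 || rpm -ivh "$f"
done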

Related

ModuleNotFoundError: No module named 'azure.mgmt.network.version'

After upgrading Ansible to version 2.10.5 and Python to 3.8.10, my playbook.yml fails with this error:
ModuleNotFoundError: No module named 'azure.mgmt.monitor.version'
fatal: [localhost]: FAILED! => {"attempts": 1, "changed": false, "msg": "Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on certrenewplay's Python /usr/bin/python3`
The module is there if I run python3 -c "import azure.mgmt.monitor", and if I run pip3 list I see it installed as azure-mgmt-monitor==2.0.0.
The exact part of the playbook that is failing is this:
- name: Create _acme-challenge record for zone "{{ env_name_dot }}"
  azure_rm_dnsrecordset:
    subscription_id: "{{ mgmt_subscription }}"
    client_id: "{{ mgmt_vault_azure_client_id }}"
    tenant: "{{ mgmt_vault_azure_tenant_id }}"
    secret: "{{ mgmt_vault_azure_client_secret }}"
    resource_group: "{{ mgmt_rg }}"
    relative_name: "_acme-challenge.{{ env_name }}"
    zone_name: "{{ dns_zone_name }}.{{ dns_zone_domain }}"
    record_type: TXT
    state: present
    records:
      - entry: "{{ cn_challenge_data }}"
    time_to_live: 60
  when: dns_zone_name != 'activedrop'
  register: add_record
  retries: 1
  delay: 10
  until: add_record is succeeded
I'm not sure what I'm doing wrong. Can anyone advise or help me with this, please?
Thanks
This same issue happened to me because Ansible now ships with its own version of the Azure collection and it was conflicting with the version I had manually installed in my own playbook using the "ansible-galaxy collection" command.
What I suggest you do is only use the version that ships with Ansible and then install its requirements like so:
pip install -r /usr/lib/python3/dist-packages/ansible_collections/azure/azcollection/requirements-azure.txt
It is easier to set this up correctly on a freshly installed system (e.g. in Docker) than it is to fix a broken one.
I think you did not follow the instructions for installing the Azure collection from https://github.com/ansible-collections/azure
Installing the collection itself does not install its Python dependencies; those are installed with pip, and you need to make sure you install them into the same Python (v)env where Ansible is installed, or Ansible will give you the error you saw when it tries to load the module.
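As a rough sketch of what that looks like in practice (the collection path below is the default ansible-galaxy install location and may differ on your system):
# install the collection and its Python requirements into the same environment
ansible-galaxy collection install azure.azcollection
pip install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
# sanity check: ansible and the azure SDK should report the same interpreter
ansible --version | grep "python version"
python3 -m pip show azure-mgmt-monitor | grep Location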
Unfortunately the azure-mgmt-monitor package is bugged, even on 3.0.0: it does not properly create a version submodule. I haven't been able to track down exactly where in the code it's broken, but the Ansible Galaxy module imports that submodule directly, which causes it to fail. For now you should use the Azure CLI and forget about azure_rm.
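For reference, a rough Azure CLI equivalent of the task above might look like this (the resource group, zone, and record values are placeholders standing in for the playbook variables):
# create or extend the TXT record set used for the ACME challenge
az network dns record-set txt add-record \
  --resource-group my-mgmt-rg \
  --zone-name example.com \
  --record-set-name _acme-challenge.myenv \
  --value "some-challenge-data"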

How to show available updates for installed charts

Is there a way to use Helm to show available chart updates for installed charts?
For example I have a "web-app" chart installed as "test" with version 1.2.4, but in my repo 1.2.7 is available:
# helm ls
NAME    NAMESPACE    REVISION    UPDATED                                    STATUS      CHART            APP VERSION
test    default      1           2020-06-04 07:33:07.822952298 +0000 UTC    deployed    web-app-1.2.4    0.0.1

# helm search repo myrepo
NAME              CHART VERSION    APP VERSION    DESCRIPTION
myrepo/ingress    0.1.0            1.16.0         A Helm chart for Kubernetes
myrepo/sandbox    1.2.3            1.16.0         A Helm chart for Kubernetes
myrepo/web-app    1.2.7            0.0.1          A Helm chart for Kubernetes
My goal is to write a script that sends notifications about any charts that need updating, so that I can review and run the updates. I'd also be happy to hear about any DevOps-style tools that do this.
As of August 28th, 2022, there is no way of knowing which repository an already installed Helm chart came from.
If you want to be able to do some sort of automation, currently you need to track the information of which chart came from which repo externally.
Examples would be: storing configuration in Source Control, Installing charts as argo apps (if you're using argocd), a combination of both, etc.
Now, since this question doesn't describe the use of any of these methods, I'll just make an assumption and give an example based on one of the methods I mentioned.
Let's say you store all of the helm charts as dependencies of some local chart in your source control.
An example would be a Chart.yaml that looks something like this:
apiVersion: v2
name: chart-of-charts
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
dependencies:
  - name: some-chart
    version: 0.5.1
    repository: "https://somechart.io"
  - name: web-app
    version: 0.2.2
    repository: "https://myrepo.io"
What you could do in this case is traverse the dependencies and perform a lookup to compare the versions in the Chart.yaml with the versions available in the repositories.
An example of a bash script:
#!/bin/bash
# requires:
# - helm
# - yq (https://github.com/mikefarah/yq)
chart=Chart.yaml
length=$(yq '.dependencies | length' "$chart")
for i in $(seq 1 "$length"); do
  iter=$(($i - 1))
  repo=$(yq ".dependencies[$iter].repository" "$chart")
  name=$(yq ".dependencies[$iter].name" "$chart")
  version=$(yq ".dependencies[$iter].version" "$chart")
  # only if this app points to an external helm chart
  if helm repo add "repo$iter" "$repo" > /dev/null 2>&1
  then
    available_version=$(helm search repo "repo$iter/$name" --versions | sed -n '2p' | awk '{print $2}')
    if [ "$available_version" != "$version" ]; then
      echo APP: $(echo "$chart" | sed 's|/Chart.yaml||')
      echo repository: "$repo"
      echo chart name: "$name"
      echo current version: "$version" Available version: "$available_version"
      echo
    fi
  fi
done
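Saved next to the Chart.yaml as, say, check-chart-updates.sh (a hypothetical name), it can be run like this:
chmod +x check-chart-updates.sh
./check-chart-updates.sh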
With the command
helm search repo --regexp "myrepo/web-app" --versions
you can list all available versions.
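If the release name and repository are known up front, a direct comparison is also possible. A minimal sketch, assuming jq is installed and using the release test and chart myrepo/web-app from the question:
# chart version currently deployed as release "test"
installed=$(helm ls -o json | jq -r '.[] | select(.name=="test") | .chart' | sed 's/^web-app-//')
# newest chart version published in the repo
latest=$(helm search repo myrepo/web-app -o json | jq -r '.[0].version')
if [ "$installed" != "$latest" ]; then
  echo "web-app: $installed -> $latest available"
fi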

Auto-Completion on Busybox

I run Busybox version 1.23.2. The default shell is the Bourne shell. Auto-completion only half works: suppose there are two directories, a123 and a120. I type cd a, press TAB, and just get:
# cd a
a123/ a120/
# cd a
I cannot cycle through the possibilities with TAB; I have to complete the full name by hand.
I tried to install bash-completion, but this is all I get:
# opkg list | grep bash-completion
kmod-bash-completion - 20+git0+d9c7175859-r0 - Tools for managing Linux kernel modules
libdbus-glib-1-bash-completion - 0.104-r0 - High level language (GLib) binding for D-Bus
libglib-2.0-bash-completion - 1:2.44.0-r0 - A general-purpose utility library
util-linux-bash-completion - 2.26.1-r0 - A suite of basic system administration utilities
What can I do so that I can tab through the possibilities?

How to force Devel::Cover to ignore a folder when using perl-helpers via Travis CI

The MetaCPAN Travis CI coverage builds are quite slow. See https://travis-ci.org/metacpan/metacpan-web/builds/238884497 This is likely in part because we're not successfully ignoring the /local folder that gets created by Carton as part of our build. See https://coveralls.io/builds/11809290
We're using perl-helpers to help with our Travis configuration. I thought I should be able to use the DEVEL_COVER_OPTIONS environment variable in order to fix this, but I guess I don't have the correct incantation. I've included the entire config below because a few snippets out of context seemed misleading.
language: perl
perl:
  - "5.22"
matrix:
  fast_finish: true
  allow_failures:
    - env: COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - env: USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
env:
  global:
    # Carton --deployment only works on the same version of perl
    # that the snapshot was built from.
    - DEPLOYMENT_PERL_VERSION=5.22
    - DEVEL_COVER_OPTIONS="-ignore ^local/"
  matrix:
    # Get one passing run with coverage and one passing run with Test::Vars
    # checks. If run together they more than double the build time.
    - COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
    - USE_CPANFILE_SNAPSHOT=true
before_install:
  - git clone git://github.com/travis-perl/helpers ~/travis-perl-helpers
  - source ~/travis-perl-helpers/init
  - npm install -g less js-beautify
  # Pre-install from backpan to avoid upgrade breakage.
  - cpanm -n http://cpan.metacpan.org/authors/id/M/ML/MLEHMANN/common-sense-3.6.tar.gz
  - cpanm -n App::cpm Carton
install:
  - cpan-install --coverage # installs coverage prereqs, if enabled
  - 'cpm install `test "${USE_CPANFILE_SNAPSHOT}" = "false" && echo " --resolver metadb" || echo " --resolver snapshot"`'
before_script:
  - coverage-setup
script:
  # Devel::Cover isn't in the cpanfile
  # but if it's installed into the global dirs this should work.
  - carton exec prove -lr -j$(test-jobs) t
after_success:
  - coverage-report
notifications:
  email:
    recipients:
      - olaf#seekrit.com
    on_success: change
    on_failure: always
  irc: "irc.perl.org#metacpan-travis"
# Use newer travis infrastructure.
sudo: false
cache:
  directories:
    - local
The syntax for the Devel::Cover options on the command line is a bit odd: the values need to be comma-separated, at least when you use PERL5OPT.
DEVEL_COVER_OPTIONS="-ignore,^local/"
See for example https://github.com/simbabque/AWS-S3/blob/master/.travis.yml#L26, where it's a whole lot of stuff with commas.
PERL5OPT=-MDevel::Cover=-ignore,"t/",+ignore,"prove",-coverage,statement,branch,condition,path,subroutine prove -lrs t
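For a quick local run outside Travis, the same ignore rule can be passed through PERL5OPT directly (a sketch, assuming the suite is run with prove as in the config above):
# run the tests with coverage, ignoring anything under local/
PERL5OPT=-MDevel::Cover=-ignore,'^local/' prove -lr t
# summarize the collected coverage data
cover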

Chef cookbook for installing mongodb-shell only

I am trying to install a mongo client via chef. Essentially this is what I have been doing in manual installs:
sudo vi /etc/yum.repos.d/mongodb.repo
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1
sudo yum install mongodb-org-shell-2.6.7
I don't want to reinvent the wheel here, nor do I want to install anything other than the shell. This cookbook looks like a good resource, but I cannot get it to install just the shell:
https://github.com/edelight/chef-mongodb
But it doesn't seem to allow installing just one of the main components. Will I need to write an LWRP?
Well, I picked apart the mongodb cookbook, to this tune:
yum_repository 'mongodb-org-3.0' do
  description 'mongodb RPM Repository'
  baseurl "http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/#{node['kernel']['machine'] =~ /x86_64/ ? 'x86_64' : 'i686'}"
  action :create
  gpgcheck false
  enabled true
end

case node['platform_family']
when 'debian'
  # this options lets us bypass complaint of pre-existing init file
  # necessary until upstream fixes ENABLE_MONGOD/DB flag
  packager_opts = '-o Dpkg::Options::="--force-confold" --force-yes'
when 'rhel'
  # Add --nogpgcheck option when package is signed
  # see: https://jira.mongodb.org/browse/SERVER-8770
  packager_opts = '--nogpgcheck'
else
  packager_opts = ''
end

package node[:frt_mongodb][:package_name] do
  options packager_opts
  action :install
  version node[:frt_mongodb][:package_version]
end
That said, it looks like I should be able to use that cookbook, configured with the right attributes, to accomplish this. The biggest problem is that the recipe manipulates files that aren't necessary for the shell.
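After a converge, a quick way to confirm that only the shell ended up on the node is to check the installed packages and the binary the shell package provides (a sketch; the package and binary names are taken from the manual install in the question):
# list any mongodb-org packages that were installed
rpm -qa 'mongodb-org*'
# the shell package provides the mongo binary
mongo --version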