"community.kubernetes.k8s" can't be resolved in Ansible even if the collection is installed - kubernetes

I want to create some Kubernetes objects using Ansible. The community.kubernetes.k8s module can do this; it is included in the community.kubernetes collection. When I try to create a namespace:
- name: Create ns
  community.kubernetes.k8s:
    api_version: v1
    kind: Namespace
    name: myapp
    state: present
Ansible throws an error that the collection is not installed:
ERROR! couldn't resolve module/action 'community.kubernetes.k8s'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/home/user/ansible-project/ansible/roles/k8s/tasks/main.yml': line 14, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Create ns
^ here
But it is already installed, as ansible-galaxy collection install confirms:
$ ansible-galaxy collection install community.kubernetes
Process install dependency map
Starting collection install process
Skipping 'community.kubernetes' as it is already installed
My installed Ansible version is 2.9.6 on Python 3.8.10, with ansible_python_interpreter=/usr/bin/python3 set and no Python 2 installed on the workstation (targeted with ansible_connection=local).
What am I doing wrong?
What I've already tried
Using old + new namings
Ansible 2.9+ is required to install collections with ansible-galaxy, so this should work. In the collection documentation I found this notice:
IMPORTANT The community.kubernetes collection is being renamed to kubernetes.core. As of version 2.0.0, the collection has been replaced by deprecated redirects for all content to kubernetes.core. If you are using FQCNs starting with community.kubernetes, please update them to kubernetes.core.
Although this seems confusing, since the Ansible documentation still refers to community.kubernetes.k8s, I tried this too:
- name: Create ns
  kubernetes.core.k8s:
    # ...
And to be sure
$ ansible-galaxy collection install kubernetes.core
Process install dependency map
Starting collection install process
Skipping 'kubernetes.core' as it is already installed
But it still throws the same couldn't resolve module/action 'kubernetes.core.k8s' error. Both directories ~/.ansible/collections/ansible_collections/kubernetes/core/ and ~/.ansible/collections/ansible_collections/community/kubernetes/ exist, so I'd guess that both (old + new naming) should work.
Checking the directory
By calling ansible-galaxy with the -vvv switch, I verified that /home/user/.ansible/collections/ansible_collections is used. It also shows that installing either collection installs two packages under the hood: the old community.kubernetes and the new kubernetes.core:
Installing 'community.kubernetes:2.0.0' to '/home/user/.ansible/collections/ansible_collections/community/kubernetes'
Downloading https://galaxy.ansible.com/download/community-kubernetes-2.0.0.tar.gz to /home/user/.ansible/tmp/ansible-local-1610573465r9kd/tmpz_hw9gza
Installing 'kubernetes.core:2.1.1' to '/home/user/.ansible/collections/ansible_collections/kubernetes/core'
Downloading https://galaxy.ansible.com/download/kubernetes-core-2.1.1.tar.gz to /home/user/.ansible/tmp/ansible-local-1610573465r9kd/tmpz_hw9gza
which seems even more confusing to me, since the old repo says
This repo hosts the community.kubernetes (a.k.a. kubernetes.core) Ansible Collection.
To me this sounds like they're just changing the name. But as we can see, kubernetes.core has its own repo and version (2.1.1 vs 2.0.0).
To make sure that this directory is used, I added the following to my local ansible.cfg at project scope:
[defaults]
collections_paths = /home/user/.ansible/collections/ansible_collections/
This didn't make any difference.

It turned out that specifying collections_paths works, but without the ansible_collections suffix. Ansible expects the collections directory:
collections_paths = /home/user/.ansible/collections/
It's also nice that ~ works as a placeholder for the current user's home directory, so we can keep it independent of the current user like this:
collections_paths = ~/.ansible/collections/
Now my playbook runs fine and the namespace is created:
$ kgns | grep myapp
myapp Active 9m42s
Alternatively
It's also possible to install them globally for the entire system by passing a system-wide directory such as /usr/share/ansible/collections as the target directory (-p switch) to ansible-galaxy:
sudo ansible-galaxy collection install -r requirements.yml -p /usr/share/ansible/collections
where requirements.yml contains (for testing purposes both collections, see next section):
collections:
- community.kubernetes
- kubernetes.core
But as long as there is no good reason to install packages globally, I'd keep them local - so imho specifying collections_paths for the local user in ansible.cfg seems the preferable solution; this way we also avoid executing ansible-galaxy with root permissions.
Which package to use now?
For testing purposes, I installed both to isolate the cause of my error. Since community.kubernetes is deprecated, I'd prefer kubernetes.core. This means changing the requirements file to
collections:
- name: kubernetes.core
or alternatively use ansible-galaxy collection install kubernetes.core - but I'd recommend using a requirements.yml, which keeps your requirements well documented and makes it easier for others to install them (especially if there is more than one).
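If you want reproducible installs, the requirements file can also pin a version; a minimal sketch (the version constraint here is only an example, not a recommendation):

```yaml
collections:
  - name: kubernetes.core
    version: ">=2.1.1"
```

ansible-galaxy collection install -r requirements.yml then resolves the constraint the same way a bare install would.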
In your playbooks/roles, you just have to use kubernetes.core.* instead of community.kubernetes.*. At first glance, not much seems to have changed yet - but it still makes sense to follow the new documentation for kubernetes.core.* to avoid issues caused by outdated documentation.

Related

Chef recipe not found for cookbook

I am trying to use Chef's PostgreSQL cookbook:
https://supermarket.chef.io/cookbooks/postgresql#readme
I am getting this error:
Chef::Exceptions::RecipeNotFound: could not find recipe default for cookbook postgresql
I don't see a default.rb in the repo:
https://github.com/sous-chefs/postgresql/find/main
I've added the dependencies to Berksfile and metadata.rb and in my recipe added:
include_recipe 'postgresql'
I also added a default.rb to my repo and include_recipe 'postgresql' to that.
Still keeps saying no default recipe. Am I missing something here?
Edit:
Based on seshadri_c's answer, this error is now gone.
I'm trying to install extension.
Have this in my default.rb:
postgresql_extension 'postgres adminpack' do
  database 'postgres'
  extension 'adminpack'
end
But get error
FATAL: NoMethodError: postgresql_extension[postgres adminpack] (******::default line 5) had an error: NoMethodError: bash[CREATE EXTENSION postgres adminpack] (/tmp/packer-chef-solo/local-mode-cache/cache/cookbooks/postgresql/resources/extension.rb line 31) had an error: NoMethodError: undefined method `[]' for nil:NilClass
A major change was introduced in the postgresql cookbook v7.0. Quoting from supermarket page:
If you are wondering where all the recipes went in v7.0+, or how on earth I use this new cookbook please see upgrading.md for a full description.
In short, all of the cookbooks functionality has been moved from recipes to custom resources.
So, now the correct way to reuse that functionality is to "invoke" the appropriate resource instead of "including" recipes.
Example to install PostgreSQL client from my_pg_client cookbook:
In my_pg_client/metadata.rb:
depends 'postgresql' # version pin as required
Then in my_pg_client/recipes/default.rb:
# Install client software
postgresql_client_install 'My PostgreSQL Client install' do
  version '9.5'
end
There are other similar custom resources, if you want to install server for example:
postgresql_server_install 'My PostgreSQL Server install' do
  version '9.5'
  action :install
end
There are a number of examples on how to use the custom resources in: https://github.com/sous-chefs/postgresql/tree/main/test/cookbooks/test/recipes
Update:
The postgresql_extension resource by default installs the extension whose name is supplied as the resource name, so the extension 'adminpack' property can be omitted. You could try something like:
# Considering that a DB called "postgres" exists
postgresql_extension 'adminpack' do
  database 'postgres'
end

Bootstrap failed: 5: Input/output error while running any service on macOS Big Sur version 11.5.2

I am trying to run the mongodb-community@4.2 service using brew services start mongodb-community@4.2 (I face a similar error when running the httpd service or any other service).
Following is the error:
Error: Failure while executing; /bin/launchctl bootstrap gui/502 /Users/chiragsingla/Library/LaunchAgents/homebrew.mxcl.mongodb-community@4.2.plist exited with 5.
There can be multiple reasons behind this error message, so the first thing to do is find out where your mongo-related logs are stored. To do that, run the following command:
sudo find / -name mongod.conf
This will locate the MongoDB config file. On running this command, I got /usr/local/etc/mongod.conf. You may find it directly under /etc.
On opening mongod.conf, you will find the log path mentioned there. You can open the log file itself, or instead get its last 15-20 lines via the tail command:
tail -n 15 <<your mongo db log path>>
Now, you will need to debug the issue mentioned in the logs. Generally, I have seen these three sets of issues -
Permission issue with /tmp/mongodb-27017.sock - While some SO answers asked to change the permissions for this file as a solution, my issue with this only went away after I removed this file.
Compatibility issue - If you see a message like Version incompatibility detected, it means that the mongodb version you have currently installed is different from the version whose data is present on your system. Uninstall the current mongodb version and then install the correct older version (if you don't want to lose the data).
Once you have done it, and your mongo is up and running, and you want to upgrade mongodb version, follow this SO answer.
Permission issues with WiredTiger - Using chmod to change file permissions resolved these.
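For the permission cases, a minimal sketch of the fix. The path below is the Homebrew default data directory and only an assumption; use the dbPath from your mongod.conf, and expect to need sudo if the files are owned by another user:

```shell
# Assumed Homebrew default data dir; replace with the dbPath from your mongod.conf
MONGO_DATA="${MONGO_DATA:-/usr/local/var/mongodb}"
if [ -d "$MONGO_DATA" ]; then
  # Give the current user ownership and read/write access to the data files
  chown -R "$(whoami)" "$MONGO_DATA"
  chmod -R u+rwX "$MONGO_DATA"
fi
```

After fixing the permissions (or removing the stale /tmp/mongodb-27017.sock), retry brew services start.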
In case you have any issue other than these three, you will still need to search more on SO and figure it out. Hope this was of some help! :)

Puppetdb and postgresql module: How to manage Postgresql but NOT manage Postgresql REPO using Hiera?

I am trying to disable the managing of the Postgresql repo using Hiera when using the puppetlabs/postgresql module. I have tried every Hiera combination I can think of (from reading the docs/code) but nothing works.
puppetdb::database::postgresql::manage_package_repo: false
puppetdb::globals::manage_package_repo: false
postgresql::globals::manage_package_repo: false
It still adds /etc/apt/sources.list.d/apt.postgresql.org.list, which won't work since we are using our own Aptly mirror and the servers cannot reach the internet directly; the apt update fails and the entire Puppet agent run fails with it.
How do I disable the management of /etc/apt/sources.list.d/apt.postgresql.org.list using Hiera?
System:
OS: Ubuntu 16.04 and 20.04
puppet version: 6.16.0
puppetserver version: 6.12.0
mod 'puppetlabs/postgresql', '6.5.0'
mod 'puppetlabs/puppetdb', '7.4.0'
mod 'puppetlabs/stdlib', '6.3.0'
According to the source for the PostgreSQL Puppet module, the repos are disabled by setting
postgresql::globals::manage_package_repo: false
And according to the source for PuppetDB, its repos are disabled with
puppetdb::manage_package_repo: false
You'll need to set both.
Note that setting these values to false won't remove the repo if it has already been installed, so if that has happened, you'll need to remove it by hand before running Puppet.
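Put together in a Hiera data file, the two settings could look like this (the file name common.yaml is just an example; place it wherever your hierarchy expects):

```yaml
# common.yaml - disable external package repo management for both modules
postgresql::globals::manage_package_repo: false
puppetdb::manage_package_repo: false
```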
I don't think you should be doing anything with puppetdb - that's Puppet's own database, where it stores node data and agent run reports. The configuration is definitely going to be somewhere below the postgresql:: namespace.

Upgrading postgres 9.5 to 11

So I've been tasked with upgrading our Postgres server to version 11, but all the guides I've found either don't work for me or are incomplete.
I have tried 2 methods and had to roll back all changes:
https://www.hutsky.cz/blog/2019/02/upgrade-postgresql-from-9-3-to-11/
With this method, not only are the dependency checks and upgrade commands exactly the same, but none of these commands work for me; I keep getting the error:
"You must identify the directory where the new cluster binaries reside.
Please use the -B command-line option or the PGBINNEW environment variable.
Failure, exiting"
And I've been unable to find any fix for this.
I also tried the delete-old method:
https://techcyclist.com/postgres/upgrading-postgres-to-the-latest-version-on-centos-7-server/
but in this method he deletes the old Postgres completely, including the config files. Our config files were written by the ex-sysadmin; I simply don't have the time to study them well enough to redo them for the new version, and I can't risk simply replacing the new config file with the old one.
If anyone has done such an assignment and is willing to help, I would much appreciate it.
I used:
yum install postgresql11 postgresql11-contrib postgresql11-devel postgresql11-libs postgresql11-server
to install the new Postgres 11, and:
/usr/pgsql-11/bin/initdb -D /var/lib/pgsql/11/data
to init it, with a few dependencies installed in between.
Afterwards, the pg_upgrade command:
/usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-9.3/bin/ --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/9.3/data/ --new-datadir=/var/lib/pgsql/11/data/ --check
gave the errors described above.

GraphQL error when launching KeystoneJS application

I just created a KeystoneJS app using yarn create keystone-app my-app.
When I try to run it using yarn dev and browse to it I get the following error:
Error: Cannot use GraphQLSchema "[object GraphQLSchema]" from another module or realm.
Ensure that there is only one instance of "graphql" in the node_modules
directory. If different versions of "graphql" are the dependencies of other
relied on modules, use "resolutions" to ensure only one version is installed.
https://yarnpkg.com/en/docs/selective-version-resolutions
Duplicate "graphql" modules cannot be used at the same time since different
versions may have different capabilities and behavior. The data from one
version used in the function from another could produce confusing and
spurious results.
at instanceOf (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/jsutils/instanceOf.js:28:13)
at isSchema (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/type/schema.js:36:34)
at assertSchema (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/type/schema.js:40:8)
at validateSchema (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/type/validate.js:44:28)
at graphqlImpl (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/graphql.js:79:62)
at /my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/graphql.js:28:59
at new Promise (<anonymous>)
at graphql (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/node_modules/graphql/graphql.js:26:10)
at _graphQLQuery.<computed> (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/lib/Keystone/index.js:477:7)
at Keystone.executeQuery (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/lib/Keystone/index.js:252:14)
at Object.module.exports [as onConnect] (/my/home/path/my-first-ks-app/initial-data.js:10:22)
at /my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/lib/Keystone/index.js:323:35
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async executeDefaultServer (/my/home/path/my-first-ks-app/node_modules/@keystonejs/keystone/bin/utils.js:114:3)
error Command failed with exit code 1.
I am on Windows 10 / WSL (v1) with Ubuntu. KeystoneJS runs from Linux, and the MongoDB server is installed and running on Windows. This is because when I had it running in Linux, mongod showed as running and listening, but I was not able to connect to it (via KeystoneJS or via the shell using the mongo command).
How do I fix this issue?
I was using graphql@15.0.0 when I got this error.
I fixed it by downgrading to graphql@14.6.0.
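The error message itself suggests Yarn's selective version resolutions as an alternative to downgrading the direct dependency; a minimal sketch of what that could look like in package.json (the pinned version is just the one that worked here):

```json
{
  "resolutions": {
    "graphql": "14.6.0"
  }
}
```

After adding the field, run yarn install again so that only a single copy of graphql ends up in node_modules.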
I got this issue on a Keystone project with Apollo.
Run this line:
rm -rf node_modules/@keystonejs/keystone/node_modules/graphql
or add it to the Dockerfile used for building the production image.