Chef recipe not found for cookbook - postgresql

I am trying to use Chef's PostgreSQL cookbook:
https://supermarket.chef.io/cookbooks/postgresql#readme
I am getting this error:
Chef::Exceptions::RecipeNotFound: could not find recipe default for cookbook postgresql
I don't see a default.rb in the repo:
https://github.com/sous-chefs/postgresql/find/main
I've added the dependencies to Berksfile and metadata.rb and in my recipe added:
include_recipe 'postgresql'
I also added a default.rb to my repo with include_recipe 'postgresql' in it.
It still says there is no default recipe. Am I missing something here?
Edit:
Based on seshadri_c's answer, this error is now gone.
I'm now trying to install an extension. I have this in my default.rb:
postgresql_extension 'postgres adminpack' do
  database 'postgres'
  extension 'adminpack'
end
But I get this error:
FATAL: NoMethodError: postgresql_extension[postgres adminpack] (******::default line 5) had an error: NoMethodError: bash[CREATE EXTENSION postgres adminpack] (/tmp/packer-chef-solo/local-mode-cache/cache/cookbooks/postgresql/resources/extension.rb line 31) had an error: NoMethodError: undefined method `[]' for nil:NilClass

A major change was introduced in the postgresql cookbook v7.0. Quoting from the Supermarket page:
If you are wondering where all the recipes went in v7.0+, or how on earth I use this new cookbook please see upgrading.md for a full description.
In short, all of the cookbook's functionality has been moved from recipes to custom resources.
So, now the correct way to reuse that functionality is to "invoke" the appropriate resource instead of "including" recipes.
Example to install PostgreSQL client from my_pg_client cookbook:
In my_pg_client/metadata.rb:
depends 'postgresql' # version pin as required
Then in my_pg_client/recipes/default.rb:
# Install client software
postgresql_client_install 'My PostgreSQL Client install' do
  version '9.5'
end
There are other similar custom resources; for example, if you want to install the server:
postgresql_server_install 'My PostgreSQL Server install' do
  version '9.5'
  action :install
end
There are a number of examples on how to use the custom resources in: https://github.com/sous-chefs/postgresql/tree/main/test/cookbooks/test/recipes
Update:
By default, the postgresql_extension resource installs the extension whose name is supplied as the resource name, so the extension 'adminpack' property can be omitted. You could try something like:
# Considering that a DB called "postgres" exists
postgresql_extension 'adminpack' do
  database 'postgres'
end
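For context, here is a minimal sketch that puts the server-install and extension resources from this answer together in one recipe. The version value is illustrative, and depending on your platform additional server setup properties or actions may be needed:

# Sketch only: combines the resources shown above; values are illustrative
postgresql_server_install 'My PostgreSQL Server install' do
  version '9.5'
  action :install
end

# Requires the server (and the "postgres" database) to be available
postgresql_extension 'adminpack' do
  database 'postgres'
end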

Related

pg: unknown authentication message response: 10 (Golang) [duplicate]

I'm trying to follow the diesel.rs tutorial using PostgreSQL. When I get to the Diesel setup step, I get an "authentication method 10 not supported" error. How do I resolve it?
You have to upgrade the PostgreSQL client software (in this case, the libpq used by the Rust driver) to a later version that supports the scram-sha-256 authentication method introduced in PostgreSQL v10.
Downgrading password_encryption in PostgreSQL to md5, changing all the passwords and using the md5 authentication method is a possible, but bad alternative. It is more effort, and you get worse security and old, buggy software.
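For reference, a rough sketch of what that (not recommended) downgrade involves; the pg_hba.conf line and the role name below are placeholders:

# postgresql.conf: store newly set passwords as md5 hashes
password_encryption = md5

# pg_hba.conf: use md5 instead of scram-sha-256 on the relevant lines, e.g.
host    all    all    0.0.0.0/0    md5

-- in psql, after reloading the server configuration: re-set each password so it is re-hashed with md5
ALTER ROLE app_user PASSWORD 'new_password';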
This isn't a Rust-specific question; the issue applies to any application connecting to a Postgres DB that doesn't support the scram-sha-256 authentication method. In my case it was a problem with the Perl application connecting to Postgres.
These steps are based on a post.
You need to have the latest Postgres client installed.
The client bin directory (SRC) is "C:\Program Files\PostgreSQL\13\bin" in this example. The target (TRG) directory is where my application binary is installed: "C:\Strawberry\c\bin". My application failed while attempting to connect to the Postgres DB with the error "... authentication method 10 not supported ...".
set SRC=C:\Program Files\PostgreSQL\13\bin
set TRG=C:\Strawberry\c\bin

rem Inspect the source DLL and the target DLL (the target will be replaced from SRC)
dir "%SRC%\libpq.dll"
dir "%TRG%\libpq__.dll"

rem Copy the new libpq into the target directory and regenerate the import library
copy "%SRC%\libpq.dll" "%TRG%"
cd "%TRG%"
pexports libpq.dll > libpq.def
dlltool --dllname libpq.dll --def libpq.def --output-lib ..\lib\libpq.a

rem Rename the original DLL to a backup, then give the new DLL the original name
move "%TRG%\libpq__.dll" "%TRG%\libpq__.dll_BUP"
move "%TRG%\libpq.dll" "%TRG%\libpq__.dll"
At this point I was able to successfully connect to Postgres from my Perl script.
The post mentioned above also suggested copying these other DLLs from the source to the target directory:
libiconv-2.dll
libcrypto-1_1-x64.dll
libssl-1_1-x64.dll
libintl-8.dll
However, I was able to resolve my issue without copying these libraries.
Downgrading to PostgreSQL 12 helped

Bareos Postgres Plugin NOT backing up remote PostgreSQL13 database

I've installed Bareos 20.0.1 on Ubuntu 20.04.3 according to their documentation here.
I'm trying to back up a remote PostgreSQL database. Apparently there are three possible scenarios, and the pros of the PostgreSQL Plugin (the third solution) make it the obvious choice.
Following the PostgreSQL Plugin documentations, in the Prerequisites for the PostgreSQL Plugin section, there is a line saying:
The plugin must be installed on the same host where the PostgreSQL database runs.
Now what I'm failing to understand is: if I'm supposed to install the plugin on my database node, how will the Bareos machine and the plugin on the DB machine communicate?
Furthermore, I've checked the source code for this module on their GitHub, and I can see that the plugin tries to find files locally, which supports the statement above.
In a desperate act, I tried installing the plugin and its dependencies on the Bareos node, but I keep getting the error Error: python3-fd-mod: Could not read Label File /var/lib/postgresql/13/main/backup_label, which shows it is actually looking for the backup_label file on the Bareos node.
Here is the configuration for my fileset:
FileSet {
  Name = "psql"
  Include {
    Options {
      compression = GZIP
      signature = MD5
    }
    Plugin = "python"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-postgres"
             ":postgresDataDir=/var/lib/postgresql/13/main"
             ":walArchive=/var/lib/postgresql/13/wal_archive/"
             ":dbHost=DATABASE_DNS"
             ":dbuser=DATABASE_USER"
  }
}
Note that the plugin documentation describes the dbHost parameter as:
useful, if socket is not in default location. Specify socket-directory with a leading / here
However, since I'm targeting a remote database, I'm using its DNS address. I verified that Bareos can connect to the database and made sure the backup_label file gets created while the PostgreSQL backup job runs.
I'll be happy to provide more details if necessary. Appreciate any help or even guesses :-D

"community.kubernetes.k8s" can't be resolved in Ansible even if the collection is installed

I want to create some Kubernetes objects using Ansible. The community.kubernetes.k8s module, which is part of the community.kubernetes collection, can do this. When I try to create a namespace:
- name: Create ns
  community.kubernetes.k8s:
    api_version: v1
    kind: Namespace
    name: myapp
    state: present
Ansible throws an error that the collection is not installed:
ERROR! couldn't resolve module/action 'community.kubernetes.k8s'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/home/user/ansible-project/ansible/roles/k8s/tasks/main.yml': line 14, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Create ns
^ here
But it is already installed, as ansible-galaxy collection install confirms:
$ ansible-galaxy collection install community.kubernetes
Process install dependency map
Starting collection install process
Skipping 'community.kubernetes' as it is already installed
My installed Ansible version is 2.9.6 on Python 3.8.10, where ansible_python_interpreter=/usr/bin/python3 is set and Python 2 is not installed on the workstation (targeted with ansible_connection=local).
What am I doing wrong?
What I've already tried
Using old + new namings
Ansible 2.9+ is required to install collections with ansible-galaxy, so this should work. In the collection documentation I found this notice:
IMPORTANT The community.kubernetes collection is being renamed to kubernetes.core. As of version 2.0.0, the collection has been replaced by deprecated redirects for all content to kubernetes.core. If you are using FQCNs starting with community.kubernetes, please update them to kubernetes.core.
Although this seems confusing, since the Ansible documentation still refers to community.kubernetes.k8s, I tried this too:
- name: Create ns
  kubernetes.core.k8s:
    # ...
And to be sure
$ ansible-galaxy collection install kubernetes.core
Process install dependency map
Starting collection install process
Skipping 'kubernetes.core' as it is already installed
But it still throws the same couldn't resolve module/action 'kubernetes.core.k8s' error. Both directories ~/.ansible/collections/ansible_collections/kubernetes/core/ and ~/.ansible/collections/ansible_collections/community/kubernetes/ exist, so I'd guess that both (old + new naming) should work.
Checking the directory
By calling ansible-galaxy with the -vvv switch, I verified that /home/user/.ansible/collections/ansible_collections is used. It also shows that the install pulls in two collections under the hood: the old community.kubernetes and the new kubernetes.core:
Installing 'community.kubernetes:2.0.0' to '/home/user/.ansible/collections/ansible_collections/community/kubernetes'
Downloading https://galaxy.ansible.com/download/community-kubernetes-2.0.0.tar.gz to /home/user/.ansible/tmp/ansible-local-1610573465r9kd/tmpz_hw9gza
Installing 'kubernetes.core:2.1.1' to '/home/user/.ansible/collections/ansible_collections/kubernetes/core'
Downloading https://galaxy.ansible.com/download/kubernetes-core-2.1.1.tar.gz to /home/user/.ansible/tmp/ansible-local-1610573465r9kd/tmpz_hw9gza
which seems even more confusing to me, since the old repo says
This repo hosts the community.kubernetes (a.k.a. kubernetes.core) Ansible Collection.
For me this sounds like they're just changing the name. But as we can see, kubernetes.core has its own repo and version (2.1.1 vs 2.0.0).
To make sure that this directory is used, I added the following to my local ansible.cfg at project scope:
[defaults]
collections_paths = /home/user/.ansible/collections/ansible_collections/
That didn't make any difference.
I found out that specifying collections_paths works, but without the trailing ansible_collections part - Ansible expects the path to the collections directory itself:
collections_paths = /home/user/.ansible/collections/
It's also nice that using ~ as a placeholder for the current user's home directory works, so we can keep it independent of the user like this:
collections_paths = ~/.ansible/collections/
Now my playbook runs fine and the namespace is created:
$ kgns | grep myapp
myapp Active 9m42s
Alternatively
It's also possible to install them globally for the entire system by specifying /usr/share/ansible/collections as the target directory (-p switch) to ansible-galaxy:
sudo ansible-galaxy collection install -r requirements.yml -p /usr/share/ansible/collections
where requirements.yml contains (both collections, for testing purposes; see the next section):
collections:
- community.kubernetes
- kubernetes.core
But as long as there are no good reasons to install collections globally, I'd keep them local - so imho pointing collections_paths at the current user's directory in ansible.cfg seems the preferable solution; it also avoids executing ansible-galaxy with root permissions.
Which package to use now?
For testing purposes, I installed both to isolate the cause of my error. Since community.kubernetes is deprecated, I'd prefer kubernetes.core. This means changing the requirements file to:
collections:
- name: kubernetes.core
or, alternatively, using ansible-galaxy collection install kubernetes.core - but I'd recommend a requirements.yml, which keeps your requirements documented and makes it easier for others to install them (especially if there is more than one).
In your playbooks/roles, you just have to use kubernetes.core.* instead of community.kubernetes.*, as in the task below. At first glance, not much seems to have changed yet, but it still makes sense to follow the new documentation for kubernetes.core.* to avoid issues caused by outdated documentation.
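For example, the namespace task from the question only needs its FQCN swapped:

- name: Create ns
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: myapp
    state: present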

What causes error "Connection test failed: spawn npm; ENOENT" when creating new Strapi project with MongoDB?

I am trying to create a new Strapi app on Ubuntu 16.04 using MongoDB. After stepping through the tutorial here: https://strapi.io/documentation/3.0.0-beta.x/guides/databases.html#mongodb-installation, I get the following error: Connection test failed: spawn npm; ENOENT
The error seems obvious, but I'm having issues getting to the cause of it. I've installed latest version of MongoDB and have ensured it is running using service mongod status. I can also connect directly using nc, like below.
$ nc -zvv localhost 27017
Connection to localhost 27017 port [tcp/*] succeeded!
Any help troubleshooting this would be appreciated! Does Strapi perhaps log setup errors somewhere, or is there a way to get verbose logging? Is it possible the connection error would be logged by MongoDB somewhere?
I was able to find the answer. The problem was with using npx instead of Yarn. Strapi documentation states that either should work; however, it is clear from my experience that there is a bug when using npx.
I switched to Yarn and the process proceeded as expected without error. Steps were otherwise exactly the same.
Update: There is also a typo in Strapi documentation for yarn. They include the word "new" before the project name, which will create a project called new and ignore the project name.
Strapi docs (incorrect):
yarn create strapi-app new my-project
Correct usage, based on my experience:
yarn create strapi-app my-project
The ENOENT error is "an abbreviation of Error NO ENTry (or Error NO ENTity), and can actually be used for more than files/directories."
Why does ENOENT mean "No such file or directory"?
Everything I've read on this points toward issues with environment variables and the process.env.PATH.
"NOTE: This error is almost always caused because the command does not exist, because the working directory does not exist, or from a windows-only bug."
How do I debug "Error: spawn ENOENT" on node.js?
If you take the function that Jiaji Zhou provides in the link above and paste it into the top of your config/functions/bootstrap.js file (above module.exports), it might give you a better idea of where the error is occurring; specifically, it should tell you the command it ran. Then run > which nameOfCommand to see what file path it returns.
"miss-installed programs are the most common cause for a not found command. Refer to each command documentation if needed and install it." - laconbass (from the same link, below Jiaji Zhou's answer)
This is how I interpret all of the above and form a solution. Put that function in bootstrap.js, then take the command it reports and run > which nameOfCommand. Then in bootstrap.js (you can comment out the function), put console.log(process.env.PATH), which will print a string of all the directories your current environment checks for executables. If the path returned by your which command isn't in your process.env.PATH, you can move the command into one of those directories, or try re-installing. A rough sketch of this debugging approach follows.
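As a sketch (this is not Jiaji Zhou's exact function; the wrapper below is illustrative and assumes the 3.0.0-beta config/functions/bootstrap.js layout), something like the following logs both the PATH Strapi sees and every command it tries to spawn:

// config/functions/bootstrap.js
const childProcess = require('child_process');
const originalSpawn = childProcess.spawn;

// Log every command Strapi tries to spawn, so the one triggering ENOENT is visible
childProcess.spawn = function (command, args, options) {
  console.log('spawning:', command, Array.isArray(args) ? args.join(' ') : '');
  return originalSpawn.apply(this, arguments);
};

module.exports = () => {
  // Directories the current environment searches for executables
  console.log('PATH:', process.env.PATH);
};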

"Unable to find component name" on myodbc-installer of driver

Trying to follow the directions for installing the MySQL ODBC driver.
When I try to run:
myodbc-installer -a -d -n "MySQL ODBC 8.0 Driver" -t "Driver=/usr/local/lib/libmyodbc8w.so"
It says:
[ERROR] SQLInstaller error 6: Unable to find component name
I've found a handful of cases of people reporting this same message, e.g., here and here. Yet there seems to be no resolution.
Noticing the slight variations in the -n name string for the various drivers, I wondered if perhaps the name was something subtly different and the documentation hadn't been updated. But I used a hex editor to look in /usr/local/lib/libmyodbc8w.so and the literal string "MySQL ODBC 8.0 Driver" is in it.
There may be some instances of a name mismatch causing the problem (e.g. in one of the linked-to cases, they use -n "MySQL" instead of the prescribed -n "MySQL ODBC 5.3" from the notes).
However...in my case it was a matter of not using sudo. The error message is not very helpful in indicating that the problem could be a matter of privileges! :-/ But at the very top of the linked instruction page it says (emphasis mine):
To install the driver from a tarball distribution (.tar.gz file), download the latest version of the driver for your operating system and follow these steps, substituting the appropriate file and directory names based on the package you download (some of the steps below might require superuser privileges)
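In other words, running the same command with superuser privileges succeeded:

sudo myodbc-installer -a -d -n "MySQL ODBC 8.0 Driver" -t "Driver=/usr/local/lib/libmyodbc8w.so"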
What's going on is that unixODBC has system-wide odbcinst.ini and odbc.ini files. You are not supposed to edit these files directly; they are meant to be edited via an API that unixODBC provides. That API is called by the MySQL helper utility myodbc-installer:
The error message is delivered by this print_installer_error() routine
...which is called from add_driver() when the routine SQLInstallDriverExW() returns false
(Note: on most platforms unixODBC provides the wide-character version of SQLInstallDriverEx(), but myodbc-installer defines its own SQLInstallDriverExW() shim if it is not available.)
This API apparently doesn't have a way of saying it can't get the necessary privileges to the files (in /usr/local/etc or perhaps just in /etc). So myodbc-installer is just parroting what it got. Sigh.