Facts.d - Pluginfacts could not be retrieved during puppet run - upgrade

I've just upgraded my Puppet environment from 3.4.2 to 3.4.3 through Puppet Labs' apt repos, upgrading both the agent(s) and the master. Doing an agent run leads to the following error:
Info: Retrieving pluginfacts
Debug: Failed to load library 'msgpack' for feature 'msgpack'
Debug: file_metadata supports formats: pson yaml b64_zlib_yaml raw
Debug: Failed to load library 'msgpack' for feature 'msgpack'
Debug: file_metadata supports formats: pson yaml b64_zlib_yaml raw
Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://<puppetserver>/pluginfacts
Debug: Finishing transaction [...]
Nevertheless I still retrieve a catalog from the master, so the agent run works and seems to do what it should. (Or let's say, I actually can't determine whether anything related to the error message is going wrong.)
However, I want to get rid of the error message.
I double-checked the Puppet version with puppet --version on both agent and master. I run the puppetmaster under Passenger. Facter is at version 2.0.1. So what did I miss?
Addition: When running an agent with the previous version 3.4.2, there is no error message.
Any ideas? Many thanks for your support.
ITL

This is due to this bug: https://tickets.puppetlabs.com/browse/PUP-3655
The issue is that for pluginsync to work, there must be at least one module in the environment that has a facts.d directory directly off of the top level of the module.
My workaround for this was to create an executable facts.d/README file at the top level of one of our main internal modules, containing the following:
#!/bin/bash
# This directory is where external fact scripts would go, if we had any. This
# directory exists only because with directory environments puppet will
# complain if no module in the environment has a facts.d directory.
echo "bug=https://tickets.puppetlabs.com/browse/PUP-3655"
exit 0
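For example, creating that file could look like this (a sketch; /etc/puppet/modules/mymodule is a placeholder for one of your own modules):
mkdir -p /etc/puppet/modules/mymodule/facts.d
$EDITOR /etc/puppet/modules/mymodule/facts.d/README   # paste the script above into this file
chmod +x /etc/puppet/modules/mymodule/facts.d/README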

The problem you encounter here comes from the Facter update and the way you distribute your own facts with Puppet 3.x and Facter 2.x when you want to use external facts (that was my case).
As said in the Facter 2.2 documentation, you need to relocate your facts folder into the module tree:
The best way to distribute external facts is with pluginsync, which
added support for them in Puppet 3.4/Facter 2.0.1. To add external
facts to your puppet modules, just place them in
MODULEPATH/MODULE/facts.d/.
So, while in older versions the path for such facts was:
MODULEPATH/MODULE/lib/facter/external_fact.rb
if you change it to:
MODULEPATH/MODULE/facts.d/external_fact.rb
then you won't encounter the problem any more.
Regards
--
rustx

Facter 2.0.1 was released yesterday. That's your problem. Downgrade to 1.7.x and you should be fine.
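On Debian/Ubuntu with the Puppet Labs apt repos, a pinned downgrade might look roughly like this (the 1.7.x version string below is only an example; check what apt-cache madison reports for your repo):
apt-cache madison facter
sudo apt-get install facter=1.7.5-1puppetlabs1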

Caught the same error today, reconfiguring my puppet master:
Info: Retrieving pluginfacts
Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://puppet/pluginfacts
Info: Retrieving plugin
Error: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://puppet/plugins
Info: Caching catalog for puppet
Info: Applying configuration version '1405577010'
Here are my versions:
grundic@puppet:~$ puppet --version
3.6.2
grundic@puppet:~$ facter --version
2.1.0
Restarting the daemon helped me (I run the puppet master behind Passenger):
grundic@puppet:~$ sudo service apache2 restart
* Restarting web server apache2
... waiting ...done.
grundic@puppet:~$ sudo puppet agent --test --verbose
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppet
Info: Applying configuration version '1405607835'
Notice: Dummy message for debugging
Notice: /Stage[main]/Main/Notify[Dummy message for debugging]/message: defined 'message' as 'Dummy message for debugging'
Notice: Finished catalog run in 0.06 seconds

I had the same errors running Puppet 3.6.2 on CentOS 6.5.
Downgrading puppet, puppet-server, facter and hiera to the previous versions (3.6.1, 2.0.2, 1.3.3) 'resolves' the issue.
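On CentOS that downgrade can be done with yum, roughly like this (versions taken from above; adjust to what your repo actually provides):
sudo yum downgrade puppet-3.6.1 puppet-server-3.6.1 facter-2.0.2 hiera-1.3.3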

As Grundic said, restart the master.
Then clean up the agent's certs on the master and remove the certs on the agent. Then re-run puppet agent -t and puppet cert sign --all. It will all go away. That worked for me.
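In commands, that cleanup looks roughly like this (agent.example.com is a placeholder for the agent's certname, and /var/lib/puppet/ssl is the default Puppet 3 ssldir):
# on the master
puppet cert clean agent.example.com
# on the agent
rm -rf /var/lib/puppet/ssl
puppet agent -t
# back on the master
puppet cert sign --all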

for path in `ls */lib/facter | grep :$ | sed "s,:,,"`; do
    MODULE=`echo $path | sed "s,/lib/facter,,"`
    cd $MODULE && ln -s lib/facter facts.d && cd ..
done
These parts are especially important:
`ls */lib/facter | grep :$ | sed "s,:,,"`
`echo $path | sed "s,/lib/facter,,"`
This code snippet ought to be run from /etc/puppet/modules as well as from the modules/ path of each environment under /etc/puppet/environments.

Related

Confluent 5.3.0 - missing confluent shell script

I am trying to start Confluent 5.3.0 on Ubuntu and need some help.
The first thing I tried was navigating to confluent-5.3.0/bin and running sudo ./confluent start (because that's the way I've been starting Confluent 5.2.1).
This gave an error message: sudo: ./confluent: command not found.
I then checked the folder confluent-5.3.0/bin and found it's missing a shell script named confluent, which is included in 5.2.1.
I checked the Release Notes and the Quickstart. Both of them say I should start Confluent with confluent local start instead of confluent start. However, I still get the same error message because the shell script is still missing: sudo: ./confluent: command not found
For those of you who are able to start Confluent 5.3.0, how are you doing it? Can you paste your command line here?
Did you have to write your own confluent shell script for 5.3.0?
Can I just copy my confluent shell script from 5.2.1? I'm assuming it must be incompatible, since they removed it from 5.3.0.
From 5.3.0 onwards you have to install the Confluent CLI separately:
https://docs.confluent.io/current/cli/installing.html#cli-install
curl -L https://cnfl.io/cli | sh -s -- -b /path-to-directory/bin
Then you can run
confluent local start --path <path-to-directory>
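For example, if you installed the CLI binary into ~/confluent-5.3.0/bin (a hypothetical location), you would put that directory on your PATH and point --path at your Confluent installation:
export PATH="$HOME/confluent-5.3.0/bin:$PATH"
confluent local start --path "$HOME/confluent-5.3.0"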

Drush cannot locate mysql on localhost MAMP

Using drush commands to update Drupal 8 core on a localhost build in MAMP, I've found that drush won't acknowledge my MySQL.
From reading a few threads, this is apparently due to MAMP's default MySQL location not matching what drush expects.
I've followed a few forum suggestions for fixes, but so far I have not had any luck.
The latest attempt gives me this permission error:
[warning] The command 'mysql' is required for preflight but cannot be found.
Please install it and retry. Drush Commandline Tool 9.2.3
Other attempts:
I followed the suggestion from March 14th on this thread:
https://github.com/drush-ops/drush/issues/3464
which gave me this error:
[info] Executing: mysql --defaults-file=/private/tmp/drush_iBYWVg --database=drupal20180405 --host=localhost --port=3306 --silent < /private/tmp/drush_7T1mwj
[info] Executing: mysql --defaults-file=/private/tmp/drush_bvCyn3 --database=drupal20180405 --host=localhost --port=3306 --silent < /private/tmp/drush_a9aRha
In Connection.php line 149:
[PDOException (2002)] SQLSTATE[HY000] [2002] No such file or directory
Another potential solution I tried came from Chrisblomm's answer on this thread:
Drush cannot connect to MySQL on MAMP?
Unfortunately for me that triggered the first error again:
[warning] The command 'mysql' is required for preflight but cannot be found.
Please install it and retry. Drush Commandline Tool 9.2.3
UPDATE: I found a solution.
Andrew Patton's comments on this thread solved it for me:
https://stackoverflow.com/a/29990624/2639928
Specifically his tip to "define and export mysql and mysqladmin as functions".
Once I added his suggested lines of code to my Mac's local .bash_profile, drush correctly identified MySQL.
This meant I was able to use all the drush commands I needed that had previously triggered drush errors.
Andrew Patton's comments here solved it for me:
https://stackoverflow.com/a/29990624/2639928
Specifically his tip to "define and export mysql and mysqladmin as functions".
Once I added that to ~/.bash_profile on my Mac, drush acknowledged MySQL and I was able to use all the commands I needed that had previously given me drush errors.
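For reference, the idea is to point the mysql and mysqladmin commands at MAMP's bundled binaries. A minimal sketch of those ~/.bash_profile lines (paths assume MAMP's default install location):
mysql() { /Applications/MAMP/Library/bin/mysql "$@"; }
mysqladmin() { /Applications/MAMP/Library/bin/mysqladmin "$@"; }
export -f mysql mysqladmin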
I had the same issue in my PHP container:
[warning] The shell command 'mysql' is required but cannot be found. Please install it and retry.
The MySQL client was not installed, so to fix it I added the MySQL client:
apt-get install -y default-mysql-client

Firebird 2.5 Database Server on FreeBSD 11.2

I installed a Firebird database server (ver. 2.5) according to the instructions on https://www.howtoforge.com/the-perfect-database-server-firebird-2.5-and-freebsd-8.1 and I get this message: "Please do not build firebird as 'root' because this may cause conflicts with SysV semaphores of running services".
Trying to compile as a normal user failed because I do not have write access to that directory.
After installing Firebird as root, when I try to create a local database I get this error:
# isql-fb
Use CONNECT or CREATE DATABASE to specify a database
SQL> CREATE DATABASE '/test/my.fdb';
Bus error (core dumped)
Can someone help me please?
The easiest way would be to install the package as the root user, for example:
# pkg install firebird25-server
If you would like to use the ports collection instead, try this:
# cd /usr/ports/databases/firebird25-server
# make install clean
In either case, you will get a message something like the one below (you can ignore it and continue with the installation; it just waits 5 seconds and then proceeds):
> pkg install firebird25-server
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
Updating poudriere repository catalogue...
poudriere repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
The following 2 package(s) will be affected (of 0 checked):
New packages to be INSTALLED:
firebird25-server: 2.5.8_1 [FreeBSD]
firebird25-client: 2.5.8_1 [FreeBSD]
Number of packages to be installed: 2
The process will require 22 MiB more space.
5 MiB to be downloaded.
Proceed with this action? [y/N]: y
[1/2] Fetching firebird25-server-2.5.8_1.txz: 100% 2 MiB 2.4MB/s 00:01
[2/2] Fetching firebird25-client-2.5.8_1.txz: 100% 3 MiB 943.7kB/s 00:03
Checking integrity... done (0 conflicting)
[1/2] Installing firebird25-client-2.5.8_1...
[1/2] Extracting firebird25-client-2.5.8_1: 100%
[2/2] Installing firebird25-server-2.5.8_1...
===> Creating groups.
Creating group 'firebird' with gid '90'.
===> Creating users
Creating user 'firebird' with uid '90'.
###############################################################################
** IMPORTANT **
Keep in mind that if you build firebird server as 'root', this may cause
conflicts with SysV semaphores of running services.
If you want to cancel it, press ctrl-C now if you need check some things
before of build it.
###############################################################################
Here it sleeps for 5 seconds and then continues:
[2/2] Extracting firebird25-server-2.5.8_1: 100%
Message from firebird25-server-2.5.8_1:
###############################################################################
Firebird was installed.
1) Support for Super Server has been added
2) Before start the server ensure that the following line exists in /etc/services:
gds_db 3050/tcp #InterBase Database Remote Protocol
3) If you use inetd (Classic Server) then add the following line to /etc/inetd.conf
gds_db stream tcp nowait firebird /usr/local/sbin/fb_inet_server fb_inet_server
And finally restart inetd.
4) If you want to use SuperClassic Server then you must add the following lines
to /etc/rc.conf file.
firebird_enable="YES"
firebird_mode="superclassic"
5) If you want to use Super Server then you must add the following lines to
/etc/rc.conf file.
firebird_enable="YES"
firebird_mode="superserver"
Note: Keep in mind that you only can add one of them but never both modes on
the same time
6) It is STRONGLY recommended that you change the SYSDBA
password with:
# gsec -user SYSDBA -pass masterkey
GSEC> modify SYSDBA -pw newpassword
GSEC> quit
before doing anything serious with Firebird.
7) See documentation in /usr/local/share/doc/firebird/ for more information.
8) Some firebird tools were renamed for avoid conflicts with some other ports
/usr/local/bin/isql -> /usr/local/bin/isql-fb
/usr/local/bin/gstat -> /usr/local/bin/fbstat
/usr/local/bin/gsplit -> /usr/local/bin/fbsplit
9) Enjoy it ;)
To start it, just add the lines indicated in point 4 or 5 of the message to /etc/rc.conf, for example:
firebird_enable="YES"
firebird_mode="superserver"
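After that, the server can be started right away with its rc script (assuming the script installed by the port is /usr/local/etc/rc.d/firebird):
# service firebird start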
To compile it as non-root, an easy way could be to change the owner of the port directory to your user, for example:
# chown -R foo:foo /usr/ports/databases/firebird25-server
Then, as your user, cd into the port directory and build by typing only make:
$ cd /usr/ports/databases/firebird25-server
$ make
Then switch back to root to install the port:
# make install
Here is a procedure I used to get around this issue in the past (based on FreeBSD 10.2). It is for the Firebird client, but should work similarly for the server. This procedure assumes sudo is set up for the user performing the installation.
cd /usr/ports
sudo chown non-root-user-name distfiles (was root)
cd /usr/ports/databases
sudo chown non-root-user-name firebird25-client (was root)
cd /usr/ports/databases/firebird25-client
make -DPACKAGE_BUILDING (Note: no sudo is used here! This process can take a long time, and you may be asked for the root password during this step.)
make install clean (Note: you may be asked for the root password on this step as well.)
cd /usr/ports
sudo chown root distfiles
cd /usr/ports/databases
sudo chown root firebird25-client
As for FreeBSD 11.x and Firebird: I was seeing the same "Bus error". I have concluded for now (perhaps incorrectly) that Firebird is not yet compatible with FreeBSD 11.x. If you revert to FreeBSD 10.x, you should not see this problem.

Switching between or adding multiple VOLTTRON Historian Framework

I have the below agent installed in my VOLTTRON platform:
AGENT - IDENTITY - TAG
sqlhistorianagent-3.6.1 - platform.historian - platform_historian
Following the documentation: http://volttron.readthedocs.io/en/4.1/core_services/historians/index.html
I tried to install another historian (Mongo Historian) following this doc: http://volttron.readthedocs.io/en/4.1/core_services/historians/Mongo-Historian.html#prerequisites
Below the steps followed to install mongodb on Ubuntu:
Prerequisites
1.Mongodb
cd volttron
. env/bin/activate
sudo scripts/historian-scripts/root_install_mongo_ubuntu.sh
2.Mongodb connector
pip install pymongo
The installation completed successfully. However, I am using the commands below to check the status of the installed agents:
volttron -l log1&
volttron-ctl status
For some reason, it is not showing up under my agents.
Question:
Is it possible to have both agents in the same VOLTTRON? If it is not, please let me know how to switch between the historian agents (i.e. replace the SQLHistorianAgent with the Mongodb Historian agent) or enable the Mongodb Historian agent.
It is worth mentioning that I also have the Crate Historian installed.
pymongo is required for connecting to the mongo database. You still need to install the MongodbHistorian.
You can look at https://github.com/VOLTTRON/volttron/blob/master/scripts/historian-scripts/start-historian-mysql.sh for an example of what you will need to do to install the agent itself. The following assumes that you are running it from the root of the volttron directory and that you have modified the config file in the mongodbhistorian directory to connect to your mongodb instance.
#!/usr/bin/env bash
if [ ! -e "./volttron/platform" ]; then
    echo "Please execute from root of volttron repository."
    exit 0
fi

export HIST="services/core/MongodbHistorian"
export HIST_CONFIG="$HIST/config.mongodb"
SCRIPTS_CORE="./scripts/core"

$SCRIPTS_CORE/start_historian.sh $1
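The config file referenced above (config.mongodb) holds the MongoDB connection details. A minimal sketch of creating it, with field names as described in the Mongo-Historian documentation and placeholder host/credentials:
cat > services/core/MongodbHistorian/config.mongodb <<'EOF'
{
    "connection": {
        "type": "mongodb",
        "params": {
            "host": "localhost",
            "port": 27017,
            "database": "historian",
            "user": "historian",
            "passwd": "historian"
        }
    }
}
EOF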
One thing also to note is that shortly we will be updating master to releases/5.0rc, so the installation methodology will change.

How to deploy a meteor application to my own server?

How to deploy a meteor application to my own server?
flavour 1: the development and deployment server are the same;
flavour 2: the development server is one (maybe my localhost) and the deployment server is another (maybe a VPS in the cloud);
flavour 3: I want to make a "meteor hosting" domain, just like "meteor.com". Is it possible? How?
Update:
I'm running Ubuntu and I don't want to "demeteorize" the application. Thank you.
Meteor documentation currently says:
"[...] you need to provide Node.js 0.8 and a MongoDB server. You can
then run the application by invoking node, specifying the HTTP port
for the application to listen on, and the MongoDB endpoint."
So, among the several ways to install Node.js, I got it up and running by following the best advice I found, which is basically unpacking the latest version available directly from the official Node.js website, already compiled for Linux (64-bit, in my case):
# Does NOT need to be root user:
# create directory
mkdir -p ~/.nodes && cd ~/.nodes
# download latest Node.js distribution
curl -O http://nodejs.org/dist/v0.10.13/node-v0.10.13-linux-x64.tar.gz
# unpack it
tar -xzf node-v0.10.13-linux-x64.tar.gz
# discard it
rm node-v0.10.13-linux-x64.tar.gz
# rename unpacked folder
mv node-v0.10.13-linux-x64 0.10.13
# create symlink
ln -s 0.10.13 current
# add path to PATH
export PATH="~/.nodes/current/bin:$PATH"
# check
node --version
npm --version
And to install MongoDB, I simply followed the instructions in the MongoDB manual available in the Documentation section of its official website:
# Needs to be root user (apply "sudo" if not at root shell)
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
apt-get update
apt-get install mongodb-10gen
The server is ready to run Meteor applications! For deployment, the main "issue" is where the "bundle" operation happens. We need to run the meteor bundle command from inside the application source tree. For example:
cd ~/leaderboard
meteor bundle leaderboard.tar.gz
If the deployment will happen on another server (flavour 2), we need to upload the bundle tar.gz file to it, using sftp, ftp, or any other file transfer method. Once the file is there, we follow both the Meteor documentation and the README file which is magically included in the root of the bundle tree:
# unpack the bundle
tar -xvzf leaderboard.tar.gz
# discard tar.gz file
rm leaderboard.tar.gz
# rebuild native packages
pushd bundle/programs/server/node_modules
rm -r fibers
npm install fibers@1.0.1
popd
# setup environment variables
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
# start the server
node main.js
If the deployment will be on the same server (flavour 1), the bundle tar.gz file is already there, and we don't need to recompile the native packages. (Just skip the corresponding section above.)
Cool! With these steps, I've got the "Leaderboard" example deployed to my custom server, not "meteor.com"... (only to learn and value their services!)
I still have to make it run on port 80 (I plan to use Nginx for this), persist environment variables, start Node.js detached from the terminal, et cetera... I am aware this setup is a "barely naked" one... just the base, the first step, the basic foundation stones.
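For instance, to keep the process running after logging out, a minimal sketch using the forever package might look like this (URL and paths are placeholders, and the environment variables are the same ones from the README steps above):
npm install -g forever                 # process manager that keeps node running, detached from the terminal
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
forever start bundle/main.js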
The application has been "manually" deployed, without taking advantage of the meteor deploy command's magic features... I've seen people publish their "meteor.sh" and "meteoric.sh" and I am following the same path... creating a script to emulate the "single command deploy" feature... aware that in the near future all this stuff will matter only to the pioneer Meteor explorers, as it grows into a whole Galaxy! and most of these issues will be an archaic thing of the past.
Anyway, I am very happy to see how fast the deployed application runs in the cheapest VPS ever, with a surprisingly low latency and almost instant simultaneous updates in several distinct browsers. Fantastic!
Thank you!!!
Try Meteor Up too.
With it you can deploy to any Ubuntu server. It uses the meteor build command internally, and it is used by many for deploying production apps.
I created Meteor Up to allow developers to deploy production-quality Meteor apps until Galaxy comes.
I would recommend flavour two with a separate deployment server. Separation of concerns leads to a more stable environment for your code, and it's easier to debug.
To do it, there's the excellent Meteoric bash script that helps you deploy to Amazon's EC2 or your own server.
As for how to roll your own meteor.com, I suggest you break that out into its own Stack Overflow question, as it's not related. Plus, I can't answer it :)
I did this a few days ago. I deployed my Meteor application to my own server on DigitalOcean. I used the Meteor Up tool to manage the deployment and Nginx on the server to serve the app.
It's very simple to use. You should install Meteor Up with the command:
npm install -g mup
Then create a folder for the deployment configuration and go into that directory. Then run the mup init command. It will create two configuration files. We are interested in the mup.json file, which holds the configuration for the deployment process. It looks like this:
{
// Server authentication info
"servers": [
{
"host": "hostname",
"username": "root",
"password": "password",
// or pem file (ssh based authentication)
//"pem": "~/.ssh/id_rsa",
// Also, for non-standard ssh port use this
//"sshOptions": { "port" : 49154 },
// server specific environment variables
"env": {}
}
],
// Install MongoDB on the server. Does not destroy the local MongoDB on future setups
"setupMongo": true,
// WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
"setupNode": true,
// WARNING: nodeVersion defaults to 0.10.36 if omitted. Do not use v, just the version number.
"nodeVersion": "0.10.36",
// Install PhantomJS on the server
"setupPhantom": true,
// Show a progress bar during the upload of the bundle to the server.
// Might cause an error in some rare cases if set to true, for instance in Shippable CI
"enableUploadProgressBar": true,
// Application name (no spaces).
"appName": "meteor",
// Location of app (local directory). This can reference '~' as the users home directory.
// i.e., "app": "~/Meteor/my-app",
// This is the same as the line below.
"app": "/Users/arunoda/Meteor/my-app",
// Configure environment
// ROOT_URL must be set to https://YOURDOMAIN.com when using the spiderable package & force SSL
// your NGINX proxy or Cloudflare. When using just Meteor on SSL without spiderable this is not necessary
"env": {
"PORT": 80,
"ROOT_URL": "http://myapp.com",
"MONGO_URL": "mongodb://arunoda:fd8dsjsfh7#hanso.mongohq.com:10023/MyApp",
"MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s#smtp.mailgun.org:587/"
},
// Meteor Up checks if the app comes online just after the deployment.
// Before mup checks that, it will wait for the number of seconds configured below.
"deployCheckWaitTime": 15
}
After you fill in all the data fields you can start the setup process with the command mup setup. It will set up your server.
After a successful setup you can deploy your app. Just type mup deploy in the console.
Another alternative is to just develop on your own server to start with.
I just created a Digital Ocean box and then connected my Cloud9 IDE account.
Now, I can develop right on the machine in a cloud IDE, and deployment is easy -- just copying files.
I created a tutorial that shows exactly how my setup works.
I had a lot of trouble with Meteor Up, so I decided to write my own deploy script. I also added additional information on how to set up Nginx and MongoDB. Hope it helps!
See the /sh folder in the repository.
What the script meteor-deploy.sh does (a rough sketch of such a script follows at the end of this answer):
Select environment (./meteor-deploy.sh for staging, ./meteor-deploy.sh prod for production)
Build and bundle production version of the meteor app
Copy bundle to server
SSH into server
Do a mongodump to backup database
Stop the running app
Unpack bundle
Overwrite app files
Re-install app node package dependencies
Start the app (uses forever)
Tested for the following server configurations:
Ubuntu 14.04.4 LTS
meteor --version 1.3.2.4
node --version v0.10.41
npm --version 3.10.3
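For illustration only, a deploy script along these lines might look roughly like the sketch below. The hostnames, paths and app name are placeholders, and this is not the author's actual meteor-deploy.sh:
#!/usr/bin/env bash
# Illustrative sketch of a Meteor deploy script -- adjust everything to your setup.
set -e

# select environment: ./meteor-deploy.sh for staging, ./meteor-deploy.sh prod for production
if [ "$1" = "prod" ]; then
    SERVER="deploy@prod.example.com"      # placeholder production host
else
    SERVER="deploy@staging.example.com"   # placeholder staging host
fi
APP="myapp"                               # placeholder app name
REMOTE_DIR="/var/www/$APP"

# build and bundle the production version of the Meteor app
meteor build ../build --architecture os.linux.x86_64

# copy the bundle to the server
scp "../build/$APP.tar.gz" "$SERVER:/tmp/"

# on the server: back up the database, stop the running app, unpack the bundle
# over the old app files, reinstall the node package dependencies, start again (uses forever)
ssh "$SERVER" "
    mongodump --out /var/backups/mongo-\$(date +%F)
    forever stop $REMOTE_DIR/bundle/main.js || true
    tar -xzf /tmp/$APP.tar.gz -C $REMOTE_DIR
    (cd $REMOTE_DIR/bundle/programs/server && npm install)
    forever start $REMOTE_DIR/bundle/main.js
"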