PuppetDB stringifies structured facts like hashes

I have a problem with PuppetDB and my structured facts (hashes):
they get stringified on their way through PuppetDB. My environment:
Ubuntu 14.04
puppetserver = 3.8.7
facter = 2.4.4
puppetdb = 2.3.8-1
In puppet.conf on both the clients and the server I have included:
stringify_facts = false
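For context, the setting sits in the [main] section of puppet.conf on each node:
[main]
    stringify_facts = false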
In my site.pp I have the following entry:
if is_hash($::os) {
    notify { 'hash': }
    notify { $os['family']: }
}
if is_string($::os) {
    notify { 'string': }
    notify { $os['family']: }
}
If my puppet.conf on the server contains:
storeconfigs = true
storeconfigs_backend = puppetdb
and PuppetDB is running, I get the following message on the client's Puppet run:
os is not a hash or array when accessing it with family.
If I change my site.pp to contain only:
if is_string($::os) {
    notify { 'os is a string': }
}
then I get the message 'os is a string'.
If I change the puppet.conf on my server to:
storeconfigs = false
storeconfigs_backend = puppetdb
then everything is OK: the os fact is identified as a hash.
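One way to narrow down where the stringification happens is to compare what Facter reports locally with what PuppetDB has stored for the node (the hostname and certname below are placeholders):
# on the agent: Facter 2.x can print the structured fact as JSON
facter --json os
# against PuppetDB's v4 API (default non-SSL port 8080)
curl http://puppetdb.example.com:8080/v4/nodes/agent01.example.com/facts/os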
Does anyone have an idea?
Please help :)
Taner

Related

Bridge startup error connecting to Zookeeper

I'm getting the following error trying to set up the Bridge component using Zookeeper, following the steps described in https://docs.corda.r3.com/website/releases/3.1/bridge-configuration-file.html?highlight=zookeeper:
> java -jar corda-bridgeserver-3.1.jar
BridgeSupervisorService: active = false
[ERROR] 20:59:31-0300 [main-EventThread] imps.EnsembleTracker.processConfigData - Invalid config event received: {server.1=10.102.32.104:2888:3888:participant, version=100000000, server.3=10.102.32.108:2888:3888:participant, server.2=10.102.32.107:2888:3888:participant}
[ERROR] 20:59:32-0300 [main-EventThread] imps.EnsembleTracker.processConfigData - Invalid config event received: {server.1=10.102.32.104:2888:3888:participant, version=100000000, server.3=10.102.32.108:2888:3888:participant, server.2=10.102.32.107:2888:3888:participant}
My bridge.conf:
bridgeMode = BridgeInner
outboundConfig {
    artemisBrokerAddress = "10.102.32.97:10010"
    alternateArtemisBrokerAddresses = [ "10.102.32.98:10010" ]
}
bridgeInnerConfig {
    floatAddresses = ["10.102.32.103:12005", "10.102.32.105:12005"]
    expectedCertificateSubject = "CN=Float Local,O=Local Only,L=London,C=GB"
    customSSLConfiguration {
        keyStorePassword = "bridgepass"
        trustStorePassword = "trustpass"
        sslKeystore = "./bridgecerts/bridge.jks"
        trustStoreFile = "./bridgecerts/trust.jks"
        crlCheckSoftFail = true
    }
}
haConfig {
    haConnectionString = "zk://10.102.32.104:2181,zk://10.102.32.107:2181,zk://10.102.32.108:2181"
}
networkParametersPath = ./network-parameters
Any thoughts?
This error is harmless. It indicates that the Dockerised Zookeeper has bad IP addresses, so some checks fail when Apache Curator is sent the dynamic topology. It does not invalidate the static configuration, and everything should work fine.
Note that as of Corda Enterprise 3.2, you must use the Zookeeper version that is compatible with the Apache Curator library, which is 3.5.3-beta, and NOT the latest version.
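For reference, a static ensemble matching the haConnectionString above would be defined in each Zookeeper node's zoo.cfg along these lines (the timings, dataDir, and peer ports here are assumptions):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=10.102.32.104:2888:3888
server.2=10.102.32.107:2888:3888
server.3=10.102.32.108:2888:3888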

How to configure an external PostgreSQL database in Chef Automate server

I want to know how to configure the Chef Automate server to use an external PostgreSQL database. I have one Chef server configured with external Elasticsearch and PostgreSQL databases, and now I want to use that same PostgreSQL database for the Chef Automate server. Can somebody tell me how to achieve that?
Here is my delivery.rb configuration:
delivery_fqdn "192.168.0.101"
delivery['chef_username'] = "delivery"
delivery['chef_private_key'] = "/etc/delivery/delivery.pem"
delivery['chef_server'] = "https://192.168.0.102/organizations/automate_org"
insights['enable'] = true
elasticsearch['urls'] = ['http://192.168.0.103:9200']
elasticsearch['external'] = true
data_collector['token'] = 'helloworld123'
postgresql['version'] = '9.6'
postgresql['external'] = true
postgresql['vip'] = '192.168.0.103'
postgresql['port'] = '5432'
postgresql['username'] = 'admin'
postgresql['superuser_username'] = 'admin'
postgresql['superuser_password'] = 'admin123'
Here is my chef-server.rb
postgresql['external'] = true
postgresql['vip'] = '192.168.0.103'
postgresql['port'] = 5432
postgresql['db_superuser'] = 'admin'
postgresql['db_superuser_password'] = 'admin123'
opscode_erchef['search_provider'] = 'elasticsearch'
opscode_solr4['external'] = true
opscode_solr4['external_url'] = 'http://192.168.0.103:9200'
opscode_solr4['elasticsearch_shard_count'] = 3
opscode_solr4['elasticsearch_replica_count'] = 2
opscode_erchef['search_queue_mode'] = 'batch'
rabbitmq['enable'] = false
rabbitmq['management_enabled'] = false
rabbitmq['queue_length_monitor_enabled'] = false
opscode_expander['enable'] = false
dark_launch['actions'] = false
data_collector['root_url'] = 'https://192.168.0.101/data-collector/v0'
profiles['root_url'] = 'https://192.168.0.101'
The Chef Automate server still uses Chef Server; see the Automate setup docs. delivery.rb is for Automate settings that are non-default. The Chef Server settings likely still need to be set in chef-server.rb (see the Chef Server docs), followed by running chef-server-ctl reconfigure, since Automate still stores cookbooks and data in Chef Server.
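In practice that means something like the following on each box (the automate-ctl command assumes Automate 1.x with delivery.rb; treat the exact names as assumptions for your version):
# on the Chef server, after editing /etc/opscode/chef-server.rb
sudo chef-server-ctl reconfigure
# on the Automate server, after editing /etc/delivery/delivery.rb
sudo automate-ctl reconfigure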

Google Cloud SQL or node-mysql takes a long time to answer

We have a project using Polymer as the front end and Node.js as the API consumed by Polymer, and our Node API takes a really long time to reply, especially if you leave the page alone for about 10 minutes. Upon further investigation, by inserting a date calculation around the MySQL query, I found out that it is MySQL that responds after a really long time. The code looks like this:
var query = dataStruct['formed_query'];
console.log(query);
var now = Date.now();
console.log("Getting Data for Foobar Query============ " + Date());
GLOBAL.db_foobar.getConnection(function (err1, connection) {
    if (err1 == null) {
        connection.query(query, function (err, rows, fields) {
            console.log("response from MySQL Foobar Query============= " + Date());
            console.log("MySQL response Foobar Query=========> " + (Date.now() - now) + " ms");
            if (err == null) {
                // respond.respondJSON is just a res.json(msg); a similar timing
                // calculation runs from the Express route until res.json fires
                respond.respondJSON(dataJSON['resVal'], res, req);
            } else {
                var msg = {
                    "status": "Error",
                    "desc": "[Foobar Query]Error Running Query",
                    "err": err,
                    "db_name": "common",
                    "query": query
                };
                respond.respondError(msg, res, req);
            }
            connection.release();
        });
    } else {
        var msg = {
            "status": "Error",
            "desc": "[Foobar Query]Error Getting Connection",
            "err": err1,
            "db_name": "common",
            "query": query
        };
        respond.respondJSON(msg, res, req);
        respond.emailError(msg);
        try {
            connection.release();
        } catch (err_release) {
            respond.LogInConsole(err_release);
            respond.LogInConsole(err_release.stack);
        }
    }
});
When Chrome Developer Tools reports a long pending time for the API, this is what shows up in my log:
SELECT * FROM `foobar_table` LIMIT 0,20;
MySQL response Foobar Query=========> 10006 ms
I'm dumbfounded as to why this is happening.
We have our system hosted on Google Cloud. Our MySQL is a Google Cloud SQL instance with an activation policy of ALWAYS. We've also set up our Node server, a Google Compute Engine instance, to keep TCP4 connections alive via:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
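Whether the setting took effect can be verified with:
cat /proc/sys/net/ipv4/tcp_keepalive_time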
I'm using a mysql pool from node-mysql:
db_init.database = 'foobar_dbname';
db_init = ssl_set(db_init);
//GLOBAL.db_foobar = mysql.createConnection(db_init);
GLOBAL.db_foobar = mysql.createPool(db_init);
GLOBAL.db_foobar.on('connection', function (connection) {
    setTimeout(tryForceRelease, mysqlForceTimeOut, connection);
});
db_init looks like this:
db_init = {
    host: 'ip_address_of_GCS_SQL',
    user: 'user_name_of_GCS_SQL',
    password: '',
    database: '',
    supportBigNumbers: true,
    connectionLimit: 100
};
I'm also forcing connections to be released if they haven't been released within 2 minutes, just to make sure:
function tryForceRelease(connection) {
    try {
        //console.log("force releasing connection");
        connection.release();
    } catch (err) {
        // do nothing; the connection was already released
        //console.log("connection already released");
    }
}
This is really racking my brain. If anyone can help, please do.
I'll post the same answer here as I posted in node-mysql pool experiences ETIMEDOUT.
The questions are sufficiently different that I'm not sure it's worth duping them.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive, and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets, which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
    connectionLimit: 100,
    host: '123.123.123.123',
    user: 'foo',
    password: 'bar',
    database: 'baz',
    stream: function (opts) {
        var socket = net.connect(opts.config.port, opts.config.host);
        socket.setKeepAlive(true);
        return socket;
    }
});
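Usage of the pool is unchanged from node-mysql; a quick smoke test could be:
pool.query('SELECT 1 + 1 AS two', function (err, rows) {
    if (err) throw err;
    console.log(rows[0].two); // prints 2
});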

GitLab "Reply-To" feature using Omnibus not working?

We are currently running the latest version of GitLab (v8.0.1), installed using the Omnibus package, and are trying to enable the new "reply-to" feature, but nothing is happening.
We followed these instructions:
http://doc.gitlab.com/ce/incoming_email/README.html (specifically the Gmail instructions). We configured a new Gmail account with "less secure apps" access allowed, and we also use the SMTP configuration.
The email, when replied to, is delivered to the Gmail account, but from there nothing happens. The docs seem a little sparse, but is GitLab supposed to pick that email up (via IMAP) and update the issue? If so, nothing is happening.
Our settings in /etc/gitlab/gitlab.rb look like this (I had to add the incoming-email section manually because it was not there):
# SMTP setup
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "aws"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "AWSUSER"
gitlab_rails['smtp_password'] = "AWSPASS"
gitlab_rails['smtp_domain'] = "git.ourdomain.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false
# gitlab_rails['smtp_openssl_verify_mode'] = 'none' # Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert', see http://api.rubyonrails.org/classes/ActionMailer/Base.html
# gitlab_rails['smtp_ca_path'] = "/etc/ssl/certs"
# gitlab_rails['smtp_ca_file'] = "/etc/ssl/certs/ca-certificates.crt"
# Configuration for Gmail / Google Apps, assumes mailbox gitlab-incoming@gmail.com
gitlab_rails['incoming_email_enabled'] = true
gitlab_rails['incoming_email_address'] = "gitlab+%{key}@ourdomain.com"
gitlab_rails['incoming_email_email'] = "gitlab@ourdomain.com"
gitlab_rails['incoming_email_password'] = "GLPASS"
gitlab_rails['incoming_email_host'] = "imap.gmail.com"
gitlab_rails['incoming_email_port'] = 993
gitlab_rails['incoming_email_ssl'] = true
gitlab_rails['incoming_email_start_tls'] = false
gitlab_rails['incoming_email_mailbox_name'] = "inbox"
For me, installing the latest update and restarting the server seemed to solve the problem (I had restarted the server the first time as well, but it still wasn't working).
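For an Omnibus install, the update-and-restart cycle is roughly the following (the package name assumes GitLab CE on a Debian-based host; tailing mailroom is just a way to watch the IMAP polling):
sudo apt-get update && sudo apt-get install gitlab-ce
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
sudo gitlab-ctl tail mailroom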

How to set arguments for Savon version 2

I'm reading RailsCast #290, which uses Savon version 1.
I tried to translate the commands to version 2, but I couldn't manage it.
http://railscasts.com/episodes/290-soap-with-savon?view=asciicast
I replaced the commands like this:
ver1: client = Savon::Client.new("http://www.webservicex.net/uszip.asmx?WSDL")
ver2: client = Savon::Client.new(wsdl: "http://www.webservicex.net/uszip.asmx?WSDL")
ver1: client.wsdl.soap_actions
ver2: client.operations
ver1: client.request :web, :get_info_by_zip, body: { "USZIP" => "90210" }
ver2: client.call(:get_info_by_zip) # need more
How can I set the web namespace and the body parameter USZIP to 90210?
Try this (www.webservicex.net is not very reliable, though):
require 'savon'

WSDL_URL = 'http://www.webservicex.net/uszip.asmx?wsdl'

client = Savon.client(wsdl: WSDL_URL,
                      log: true,             # set true to switch on logging
                      log_level: :debug,
                      pretty_print_xml: true)

zip = ARGV[0] || "10004"
response = client.call(:get_info_by_zip, message: { "USZip" => zip })
print response
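Savon 2 parses the SOAP response into a hash with snake_cased symbol keys, so the payload can be dug out along these lines (the exact key names depend on the service's WSDL and are an assumption here):
body = response.body
# for this service the path is likely:
# body[:get_info_by_zip_response][:get_info_by_zip_result]
puts body.inspect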