Where to get/generate SSL files for new relic postgres integration - postgresql

I am trying to configure the new_relic integration to monitor PostgreSQL database connections. One of the configuration options is whether or not to use SSL, along with settings for where the SSL files are stored. The documentation does not explain where to get these files or how to generate them.
Where can one get, or generate, these SSL files?
ENABLE_SSL: true
TRUST_SERVER_CERTIFICATE: false
SSL_CERT_LOCATION: <needed postgresql.crt>
SSL_ROOT_CERT_LOCATION: <needed root_cert.crt>
SSL_KEY_LOCATION: <needed postgresql.key>
For reference, see: https://docs.newrelic.com/docs/infrastructure/infrastructure-monitoring/infrastructure-security/infrastructure-security/

Those are Postgres server-side certificates, and their creation and setup are documented in the official PostgreSQL manual (the "Secure TCP/IP Connections with SSL" chapter).
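For example, the manual shows how to create a simple self-signed server certificate with openssl; a minimal sketch, using dbhost.example.com as a placeholder hostname and suitable for testing rather than production:

# generate a self-signed certificate and key for the Postgres server
openssl req -new -x509 -days 365 -nodes -text \
  -out server.crt -keyout server.key \
  -subj "/CN=dbhost.example.com"
chmod og-rwx server.key
# then enable SSL on the server in postgresql.conf:
#   ssl = on
#   ssl_cert_file = 'server.crt'
#   ssl_key_file = 'server.key'

The integration's SSL_ROOT_CERT_LOCATION would then point at the CA certificate (or, for a setup like the above, the self-signed certificate) used to verify the server, while SSL_CERT_LOCATION and SSL_KEY_LOCATION are only needed if you also use client-certificate authentication.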

Related

Freeradius 2.x to 3.x LDAP configuration with multiple AD trees

I am trying to migrate an older 2.x server to 3.x because of the LDAPS connectivity requirement for a new AD tree/domain that is being created. I had to upgrade not only FreeRADIUS but also the server OS to support newer versions of TLS. I think I roughly had the configuration correct in 2.x, but I cannot be 100% certain, because authentication to the new AD tree structure never completely worked due to the SSL/TLS incompatibility. I am having a harder time with the new module configuration layout in 3.x.
The current 2.x performs authentication for 2 methods:
1) LDAP to the existing AD tree using a redundant server setup
2) SQL/PERL via a custom module.
The new 3.x server I need to perform 3 authentication checks via 2 methods:
1) LDAP to the existing AD tree using a redundant server setup
2) LDAPS to the new AD tree (possible redundant server setup)
3) SQL/PERL via the custom module
I have read that this may require templates for the LDAP configuration, but have not found any examples for that. Any help/guidance would be greatly appreciated.
The config is all in the LDAP module configuration file, raddb/mods-available/ldap - the ldap attribute map is in there, too.
To connect to two different LDAP servers, create two instances of the ldap module, e.g. where you have
ldap {
...
}
add another copy of that config with
ldap ldap-new {
...
}
then you can call ldap or ldap-new as appropriate in the relevant sections of the virtual server configuration to query the required LDAP server.
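As a rough sketch of what the second instance might contain for the LDAPS tree (server names, DNs, credentials and certificate paths below are placeholders; check the option names against the comments in your own raddb/mods-available/ldap):

ldap ldap-new {
    server = 'ldaps://dc1.newdomain.example.com'
    port = 636
    identity = 'CN=radius-svc,OU=Service Accounts,DC=newdomain,DC=example,DC=com'
    password = 'REDACTED'
    base_dn = 'DC=newdomain,DC=example,DC=com'
    user {
        base_dn = "${..base_dn}"
        filter = "(sAMAccountName=%{%{Stripped-User-Name}:-%{User-Name}})"
    }
    tls {
        # CA that signed the new AD tree's LDAPS certificate
        ca_file = ${certdir}/newdomain-root-ca.pem
        require_cert = 'demand'
    }
}

The existing ldap instance and ldap-new can then both be listed in authorize (and authenticate, if you authenticate against LDAP) in the appropriate virtual server.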
Make sure you create the appropriate symlinks to enable the module, e.g. raddb/mods-enabled/ldap -> ../mods-available/ldap.
You can certainly use templates to save duplicating config, but to begin with it's a lot easier to just copy the ldap config file, change the instance name in the new file and then tweak from there. Templates are likely to make things more confusing unless you know what you're doing.

Postgresql : SSL certificate error unable to get local issuer certificate

In PostgreSQL, whenever I call an API URL over a secure connection with a query
like the one below
select *
from http_get('https://url......');
I get an error
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation directory at the following path:
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there an SSL extension available, or do I need configuration changes or some other effort?
Please let me know the possible solutions.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPT values (see: https://github.com/pramsey/pgsql-http#curl-options), which are settable with http_set_curlopt().
For endpoints using self-signed certificates, I expect the CURLOPT you'd want to include support for (to ignore SSL verification errors) is CURLOPT_SSL_VERIFYPEER.
If there are other issues, like SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that could be patched in, but those are also not available without customizing the contrib module.
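For what it's worth, if you do end up with a build of pgsql-http that exposes that option (patched locally, or a later release), the call is just a string option name and value, roughly:

-- only works if your pgsql-http build exposes CURLOPT_SSL_VERIFYPEER;
-- note that disabling peer verification also removes protection against MITM attacks
select http_set_curlopt('CURLOPT_SSL_VERIFYPEER', '0');
select *
from http_get('https://url......');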
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the github page of the maintainer and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.

Understanding OPC-UA Security using Eclipse Milo

I am new to this OPC-UA world and Eclipse Milo.
I do not understand how the security works here.
Looking at the client-examples provided by eclipse-milo,
I see a few security-related properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
The ExampleServer is configured to run with the Anonymous and UsernamePassword identity providers. It is also bound to accept SecurityPolicy.None, Basic128Rsa15, Basic256, and Basic256Sha256, with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None where MessageSecurityMode is None too.
The problem is that with the AnonymousProvider I can connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without providing client certificates).
But I cannot do the same with the UsernameProvider: only the SecurityPolicy/MessageSecurityMode pair None/None succeeds.
All other pairs throw a security checks failed exception (when a certificate is provided) or user access denied (when no client certificate is provided). How do I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation apart from the example code, and the examples are not documented.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider supplies the credentials that identify the user of the session, if any.
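To make that concrete, here is a rough sketch of how these pieces are wired together with the Milo client SDK (the method names follow more recent Milo releases, e.g. OpcUaClient.create; older 0.2.x releases constructed the client directly, so treat this as an outline rather than exact API):

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.config.OpcUaClientConfig;
import org.eclipse.milo.opcua.sdk.client.api.identity.UsernameProvider;
import org.eclipse.milo.opcua.stack.core.types.structured.EndpointDescription;

import java.security.KeyPair;
import java.security.cert.X509Certificate;

public class MiloSecurityExample {

    // endpoint: an EndpointDescription already chosen for the desired
    // SecurityPolicy / MessageSecurityMode combination;
    // certificate/keyPair: required whenever the mode is Sign or SignAndEncrypt
    public static OpcUaClient buildClient(EndpointDescription endpoint,
                                          X509Certificate certificate,
                                          KeyPair keyPair) throws Exception {
        OpcUaClientConfig config = OpcUaClientConfig.builder()
            .setEndpoint(endpoint)
            .setCertificate(certificate)
            .setKeyPair(keyPair)
            // the identity provider is independent of the transport security:
            // it identifies the user of the session over the secure channel
            .setIdentityProvider(new UsernameProvider("user", "password"))
            .build();

        OpcUaClient client = OpcUaClient.create(config);
        client.connect().get();
        return client;
    }
}

With AnonymousProvider instead of UsernameProvider, the certificate and key pair are still what the secure channel uses; the identity provider only changes who the session is established for.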
When the ExampleServer starts up it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a rejected folder where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.
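For example, using the temp directory from the log line above and the folder names described here (the exact directory layout can vary between Milo versions, so check what the server actually created):

mv /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security/rejected/* \
   /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security/trusted/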

Unable to update composer in a Symfony project

I just imported a Symfony project from GitHub into IntelliJ IDEA. I used the usual method: https://www.jetbrains.com/help/phpstorm/2016.2/cloning-a-repository-from-github.html
Now I want to update composer and start working. But when I type the command line:
composer update
I got this error:
your configuration does not allow connections to http://packagist.org/packages.json...
And I can't continue. Where am I going wrong?
Newer versions of Composer do not allow connections via unsecured HTTP anymore by default:
Defaults to true. If set to true only HTTPS URLs are allowed to be downloaded via Composer. If you really absolutely need HTTP access to something then you can disable it, but using Let's Encrypt to get a free SSL certificate is generally a better alternative.
Source
To resolve this, make sure to use HTTPS to connect to the repositories, or change your Composer config.
If your resource URL is secured (using SSL), add https:// in front of your URL.
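For example, if your composer.json declares a repository entry over plain HTTP, switching it to HTTPS might look something like this (a sketch; adjust it to whatever repository entry you actually use):

"repositories": [
    {
        "type": "composer",
        "url": "https://packagist.org"
    }
]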
If you want to allow insecure connections, add:
"config": {
"secure-http": false
},
in your composer.json
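The same setting can also be applied from the command line, which simply writes the entry above into your composer.json config section (assuming a reasonably recent Composer):

composer config secure-http false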
Warning: please note that it is always good practice to use SSL certificates and to allow only secured connections.

Chef LWRP, Definition, or Cookbook for abstracting creation of Nginx virtual hosts

I'm trying to figure out the correct way to architect a solution to automatically configure new Rails App servers.
I've looked at the chef-rails cookbook and it seems a little verbose. In our case we always deploy Nginx a certain way, always perform backups a certain way, etc, so much of the configuration would be redundant from one node definition to the next.
My goal is to be able to create a new Rails App server by defining just the following information.
wh_webhead "test_app" do
ssl :enable
backups :enable
passenger :enable
ruby_version "2.0.0"
db_type :mysql
db_user "testuser"
db_pass "3207496r9w6"
nagios_ssl_string_match "login"
end
Then I would like Chef to perform the following actions:
Create user accounts
Setup box and install
Install Nginx w/wildcard SSL cert
Configure log rotation
Setup firewall rules to allow traffic to ports 80 and 443
Install Passenger and RVM with Ruby 2.0.0
Create Rails app dirs following template (e.g. /opt/local/test_app)
Create new database on MySQL server, grant access, and setup firewall rules
Create firewall rules for Nagios and configure Nagios to monitor:
port 80 for redirection to port 443
port 443 for HTTP 200 status
port 443 for the text "login"
Configure backups for app dir (e.g. /opt/local/test_app)
I'm already using the community cookbooks for Nginx, Nagios, Ufw, etc and have created recipes in a custom cookbook to configure Mysql and Nginx. There's just a lot of duplicate code from one app's Nginx/Mysql cookbook to the next.
What I'm struggling with is where to use Cookbooks, Recipes, LWRPs and Definitions to properly abstract this.
Should I put the default configuration for Nginx and Mysql in Definitions and then use those in recipes or create custom wrapper cookbooks with the defaults?
First, take a look at the application_ruby and artifact cookbooks, both of which can automate these workflows for you.
I specifically enjoy using the artifact cookbook, as it provides a lot of flexibility, but the application_ruby cookbook has built-in support for Passenger, Unicorn, and other tools you'd normally find among a Rails application's requirements.
As for your question regarding Cookbooks, Recipes, LWRPs and Definitions, I would definitely look at #sethvargo's answer at https://stackoverflow.com/a/21733093/747032. It provides a good guide on what to use when, from an employee at Opscode (now Chef, the company) who is constantly involved in the Chef community and thus has excellent knowledge of this topic.
As far as my advice goes (which I'll keep concise):
Use LWRPs to wrap resources that are always called together; for example, we use an "AWS EBS" LWRP to create, mount, and format new EBS volumes (see the sketch after this list).
Use recipes to call all of your LWRPs (both custom and public) and resources.
Don't use definitions; in my opinion they are effectively superseded by LWRPs.
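As a rough illustration of that first point applied to the wh_webhead resource sketched in the question (the cookbook, attribute, and template names here are hypothetical, and the provider body only shows a couple of the steps from your list):

# wh/resources/webhead.rb
actions :create
default_action :create

attribute :app_name, kind_of: String, name_attribute: true
attribute :ssl, kind_of: [TrueClass, FalseClass], default: true
attribute :ruby_version, kind_of: String, default: "2.0.0"
attribute :db_type, kind_of: Symbol, default: :mysql

# wh/providers/webhead.rb
use_inline_resources  # Chef 11+: lets notifications from these resources behave normally

action :create do
  # assumes the community nginx cookbook's default recipe is already in the run_list;
  # this provider just wraps the per-app resources you always declare together
  directory "/opt/local/#{new_resource.app_name}" do
    recursive true
  end

  template "#{node['nginx']['dir']}/sites-available/#{new_resource.app_name}" do
    source "vhost.erb"
    variables(app_name: new_resource.app_name, ssl: new_resource.ssl)
  end

  link "#{node['nginx']['dir']}/sites-enabled/#{new_resource.app_name}" do
    to "#{node['nginx']['dir']}/sites-available/#{new_resource.app_name}"
    notifies :reload, "service[nginx]"
  end
end

A recipe (or a thin wrapper cookbook holding your site-wide Nginx/MySQL defaults) then only has to declare wh_webhead "test_app" with the handful of attributes that actually vary per application, which keeps the per-app code down to roughly the block you sketched above.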