FreeRADIUS 2.x to 3.x LDAP configuration with multiple AD trees - CentOS

I am trying to migrate an older 2.x server to 3.x because a new AD tree/domain that is being created requires LDAPS connectivity. I had to upgrade not only FreeRADIUS but also the server OS to support newer versions of TLS. I think I roughly had the configuration correct in 2.x, but cannot be 100% certain, as authentication to the new AD tree was never fully working because of the SSL/TLS incompatibility. I am having a harder time with the new module configuration layout in 3.x.
The current 2.x server performs authentication via two methods:
1) LDAP to the existing AD tree using a redundant server setup
2) SQL/Perl via a custom module
The new 3.x server needs to perform three authentication checks via two methods:
1) LDAP to the existing AD tree using a redundant server setup
2) LDAPS to the new AD tree (possibly a redundant server setup)
3) SQL/Perl via the custom module
I have read that this may require templates for the LDAP configuration, but have not found any examples for that. Any help/guidance would be greatly appreciated.

The config is all in the LDAP module configuration file, raddb/mods-available/ldap - the ldap attribute map is in there, too.
To connect to two different LDAP servers, create two instances of the ldap module, e.g. where you have
ldap {
...
}
add another copy of that config with
ldap ldap-new {
...
}
then you can call ldap or ldap-new as appropriate in the virtual server sections where needed to query the required LDAP server.
Make sure you create the appropriate symlinks to enable the module, e.g. raddb/mods-enabled/ldap -> ../mods-available/ldap.
You can certainly use templates to save duplicating config, but to begin with it's a lot easier to just copy the ldap config file, change the instance name in the new file, and tweak from there. Templates are likely to make things more confusing unless you know what you're doing.
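For example, a rough sketch of what the second instance and the virtual server call might look like (the server names, bind credentials, base DN and CA file below are placeholders, not values from your environment):

ldap ldap-new {
    # multiple server lines give redundancy, as with the existing tree
    server = 'ldaps://dc1.newdomain.example.com'
    server = 'ldaps://dc2.newdomain.example.com'
    port = 636
    identity = 'cn=radius-bind,dc=newdomain,dc=example,dc=com'
    password = 'bindsecret'
    base_dn = 'dc=newdomain,dc=example,dc=com'
    tls {
        ca_file = /etc/raddb/certs/newdomain-ca.pem
        require_cert = 'demand'
    }
}

Then call each instance by name in the authorize (and, if needed, authenticate) sections of the relevant virtual server under raddb/sites-enabled/:

authorize {
    ...
    ldap
    ldap-new
    ...
}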

EF Core 3.1 using Authentication=Active Directory Integrated

[Update 1]
I could make it work using the following connection string
Server=tcp:mydatabaseserver.database.windows.net,1433;Initial Catalog=mydbname
and implementing an interceptor as mentioned in this article (a sketch of that approach is shown below).
This proves that Azure is correctly configured, and the problem is somewhere in the application (maybe a missing package?).
Anyway, I would still like to be able to change the connection string and switch between AAD authentication and sql authentication, without additional logic in the application.
[/Update 1]
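For reference, a minimal sketch of the interceptor approach, assuming the Microsoft.Azure.Services.AppAuthentication package; the class name and registration shown here are my own illustration, not code from the article:

using System.Data.Common;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class AadTokenInterceptor : DbConnectionInterceptor
{
    // Attach an AAD access token to every connection before it is opened.
    public override InterceptionResult ConnectionOpening(
        DbConnection connection, ConnectionEventData eventData, InterceptionResult result)
    {
        var tokenProvider = new AzureServiceTokenProvider();
        ((SqlConnection)connection).AccessToken = tokenProvider
            .GetAccessTokenAsync("https://database.windows.net/")
            .GetAwaiter().GetResult();
        return base.ConnectionOpening(connection, eventData, result);
    }
}

// In Startup.ConfigureServices:
services.AddDbContext<NetCoreDataContext>(builder =>
    builder.UseSqlServer(connectionString)
           .AddInterceptors(new AadTokenInterceptor()));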
I'm using EF Core 3.1.4 on an Azure WebApp, and I would like to use the Azure AD identity assigned to the application for authentication, but I run into the following exception:
ArgumentException: Invalid value for key 'authentication'.
Microsoft.Data.Common.DbConnectionStringBuilderUtil.ConvertToAuthenticationType(string keyword, object value)
This is the connection string:
{
  "ConnectionStrings": {
    "Admin": "Server=tcp:mydatabaseserver.database.windows.net,1433;Initial Catalog=mydbname;Authentication=Active Directory Integrated"
  }
}
I initialize the context using the following code:
var connectionString = this.Configuration.GetConnectionString("Admin");
services.AddDbContext<NetCoreDataContext>(builder => builder.UseSqlServer(connectionString));
The Microsoft.Azure.Services.AppAuthentication package (version 1.5.0) is also referenced.
Active Directory Integrated wasn't working for me in .NET Core 3.1, but it started working once I installed the NuGet package Microsoft.Data.SqlClient (version 2.0.1). It now works with the following connection string:
"MyDbConnStr": "Server=tcp:mydbserver.database.windows.net,1433;Database=MyDb;Authentication=ActiveDirectoryIntegrated"
Note: it also works if I have spaces between the words like this:
"MyDbConnStr": "Server=tcp:mydbserver.database.windows.net,1433;Database=MyDb;Authentication=Active Directory Integrated"
And it also works if I include escaped quotes like this:
"MyDbConnStr": "Server=tcp:mydbserver.database.windows.net,1433;Database=MyDb;Authentication=\"Active Directory Integrated\""
Finally, note that there are additional properties which can also be used in the connection string:
;User ID=myruntimeuser#mydomain.com;Persist Security Info=true;Encrypt=true;TrustServerCertificate=true;MultipleActiveResultSets=true
Welcome to the .NET frameworks/runtimes hell.
Currently the ActiveDirectoryIntegrated and ActiveDirectoryInteractive authentication options are not supported for .NET Core apps.
The reason is that starting with v3.0, EF Core uses Microsoft.Data.SqlClient instead of System.Data.SqlClient, and the most recent version of Microsoft.Data.SqlClient at this time (including the preview versions) supports these two options only for .NET Framework.
You can see a similar question in their issue tracker, Why does SqlClient for .Net Core not allow an authentication method 'Active Directory Interactive'? #374, as well as the documentation of the SqlAuthenticationMethod enum - ActiveDirectoryIntegrated (emphasis is mine):
The authentication method uses Active Directory Integrated. Use Active Directory Integrated to connect to a SQL Database using integrated Windows authentication. Available for .NET Framework applications only.
With that being said, use the workaround described in [Update 1], or wait for this option to eventually be implemented for .NET Core.
Upgrading the NuGet packages Microsoft.EntityFrameworkCore and Microsoft.EntityFrameworkCore.SqlServer to 6.0.1 and using Authentication=Active Directory Managed Identity in the connection string helped me resolve the issue.
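For example (the server and database names are placeholders):
"Admin": "Server=tcp:mydatabaseserver.database.windows.net,1433;Initial Catalog=mydbname;Authentication=Active Directory Managed Identity"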
UPDATE
If you use Azure MSI (managed identity), please read this document:
https://learn.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi
PREVIOUS
Your problem may be that things are not configured in the portal. You can follow the official documentation to finish the configuration, then try again.
First, you need to create a SQL managed instance, which may take a long time. Then you need to configure the Active Directory admin and your database. When you have finished, you will find the ADO.NET (Active Directory password authentication) connection string under your SQL database -> Connection strings in the portal. You can copy and paste it into your code to solve the issue.
I have tried it myself, and it works for me. For more detail, you can see this post.
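For reference, the ADO.NET (Active Directory password authentication) string from the portal looks roughly like the following sketch (placeholder server, database and user values; the exact string the portal shows may differ slightly):
Server=tcp:mydatabaseserver.database.windows.net,1433;Initial Catalog=mydbname;Persist Security Info=False;User ID=myuser@mydomain.com;Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Authentication="Active Directory Password";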

PostgreSQL: SSL certificate error: unable to get local issuer certificate

In PostgreSQL, whenever I execute an API URL over a secure connection with a query
like the one below
select *
from http_get('https://url......');
I get an error
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation directory at the following path:
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there an SSL extension available, or do I need configuration changes or some other effort?
Please let me know the possible solutions.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPT values (see: https://github.com/pramsey/pgsql-http#curl-options), which are settable with http_set_curlopt().
For endpoints using self-signed certificates, the CURLOPT you'd want support for in order to ignore SSL errors is CURLOPT_SSL_VERIFYPEER; an example of how that would be used is sketched below.
If there are other issues, like SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that can be patched in, but those also are not available without customization of the contrib module.
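If that option were exposed by your build of the extension, usage would look roughly like this (a sketch only; stock builds at the time of this answer did not accept this option):

-- disable peer verification for subsequent requests in this session
SELECT http_set_curlopt('CURLOPT_SSL_VERIFYPEER', '0');
SELECT * FROM http_get('https://url......');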
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the github page of the maintainer and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.

Chef LWRP, Definition, or Cookbook for abstracting creation of Nginx virtual hosts

I'm trying to figure out the correct way to architect a solution to automatically configure new Rails App servers.
I've looked at the chef-rails cookbook and it seems a little verbose. In our case we always deploy Nginx a certain way, always perform backups a certain way, etc., so much of the configuration would be redundant from one node definition to the next.
My goal is to be able to create a new Rails App server by defining just the following information.
wh_webhead "test_app" do
ssl :enable
backups :enable
passenger :enable
ruby_version 2.0.0
db_type :mysql
db_user "testuser"
db_pass "3207496r9w6"
nagios_ssl_string_match "login"
end
Then I would like Chef to perform the following actions:
Create user accounts
Setup box and install
Install Nginx w/wildcard SSL cert
Configure log rotation
Setup firewall rules to allow traffic to ports 80 and 443
Install Passenger and RVM with Ruby 2.0.0
Create Rails app dirs following template (e.g. /opt/local/test_app)
Create new database on MySQL server, grant access, and setup firewall rules
Create firewall rules for Nagios and configure Nagios to monitor:
port 80 for redirection to port 443
port 443 for HTTP 200 status
port 443 for the text "login"
Configure backups for app dir (e.g. /opt/local/test_app)
I'm already using the community cookbooks for Nginx, Nagios, UFW, etc. and have created recipes in a custom cookbook to configure MySQL and Nginx. There's just a lot of duplicate code from one app's Nginx/MySQL cookbook to the next.
What I'm struggling with is where to use Cookbooks, Recipes, LWRPs and Definitions to properly abstract this.
Should I put the default configuration for Nginx and Mysql in Definitions and then use those in recipes or create custom wrapper cookbooks with the defaults?
First, take a look at the application_ruby and artifact cookbooks, both of which can automate these workflows for you.
I specifically enjoy using the artifact cookbook, as it provides a lot of flexibility, but the application_ruby cookbook has built-in support for Passenger, Unicorn and other tools you'd normally find in a Rails application's requirements.
As for your question regarding Cookbooks, Recipes, LWRPs and Definitions, I would definitely look at #sethvargo's answer at https://stackoverflow.com/a/21733093/747032. It provides a good guide on what to use when, from an employee at Opscode (the company now known as Chef) who is constantly involved in the Chef community and thus has excellent knowledge of this topic.
As far as my advice (which I'll keep concise):
Use LWRPs to wrap a lot of resources that are always called together; for example, we use an "AWS EBS" LWRP to create, mount and format new EBS volumes. A rough sketch of this kind of wrapper for your case is shown after this list.
Use recipes to call all your LWRPs (both custom and public) and resources.
Don't use definitions; in my opinion they are effectively deprecated in favor of LWRPs.
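To make the LWRP idea concrete, here is a rough sketch of a wh_webhead resource matching the interface in your question (the cookbook layout, attribute handling and provider body are assumptions; the provider only hints at where the shared Nginx/MySQL/Nagios defaults would go):

# resources/webhead.rb in a cookbook named "wh" (so the resource is wh_webhead)
actions :create
default_action :create

attribute :app_name, kind_of: String, name_attribute: true
attribute :ssl, kind_of: Symbol, default: :enable
attribute :backups, kind_of: Symbol, default: :enable
attribute :passenger, kind_of: Symbol, default: :enable
attribute :ruby_version, kind_of: String, default: "2.0.0"
attribute :db_type, kind_of: Symbol, default: :mysql
attribute :db_user, kind_of: String
attribute :db_pass, kind_of: String
attribute :nagios_ssl_string_match, kind_of: String

# providers/webhead.rb
use_inline_resources

action :create do
  # Render the standard vhost template with this app's settings; the firewall,
  # database, backup and Nagios pieces would follow the same pattern with your defaults.
  template "/etc/nginx/sites-available/#{new_resource.app_name}" do
    source "nginx_vhost.erb"
    variables(app_name: new_resource.app_name,
              ssl: new_resource.ssl == :enable,
              passenger: new_resource.passenger == :enable)
  end
end

A recipe would then just declare wh_webhead "test_app" with the per-app values, as in your example, and all the shared defaults live in one place.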

Can I call synchroniseUserDirectories (ConfluenceRpc) via REST, SOAP or XML-RPC?

I am using Confluence 4.2.5 (build 3284) with CAS SSO connected to my LDAP server and would like to be able to call synchroniseUserDirectories() from the LDAP server when a user changes their password so that the change is instantaneous.
The way it works now is that users have to wait for Confluence to run its periodic LDAP synchronization, which can be disconcerting for them.
I have tried using the XML-RPC interface to call changeUserPassword() (as an administrator) but it doesn't work. The operation raises an exception, "Error changing password for user ...". I presume that is because the user is defined in LDAP, but I can't tell for sure because the exception message wasn't clear about the cause.
Here is example code that I would like to be able to use. It doesn't work.
#!/usr/bin/env python
import xmlrpclib
url = 'https://docs.example.com'
admin_user = 'frobisher'
admin_pass = 'supersecretstuff'
username = 'bigbob'
new_password = 'bigbobsbigsecret'
server = xmlrpclib.ServerProxy(url + '/rpc/xmlrpc')
token = server.confluence2.login(admin_user, admin_pass)
# CITATION: https://developer.atlassian.com/display/CONFDEV/Remote+Confluence+Methods
# this doesn't exist but would be my preferred approach.
# It raises a NoSuchMethodException exception.
server.confluence2.synchroniseUserDirectories(token)
# this throws a general exception, because of the LDAP? The message
# wasn't clear about the source of the problem.
#server.confluence2.changeUserPassword(token,
#                                      username,
#                                      new_password)
server.confluence2.logout(token)
Is there any way to do this using SOAP or REST? I was concerned about REST because it sounds like it is still a prototype.
If none of those approaches will work, can it be done with a simple plugin considering that this must be a push operation from the LDAP server to the Confluence server? I have no experience writing plugins but I do some java work occasionally.
Any hints would be greatly appreciated.
The short answer is "no". The ability to synchronise remote user directories is not exposed as a remote operation in Confluence.
The long answer is "yes", you can write a plugin to do this. If you're already familiar with Java, then perhaps the best answer is to just show you some source code I've written that performs a similar function: https://bitbucket.org/jaysee00/confluence-user-sync-api. This plugin gives you SOAP, XML-RPC and JSON-RPC methods to force an individual user account to be synced into Confluence from a remote directory.
That might suit your purposes as-is, but I imagine it would be possible to edit the source of this plugin and change it to synchronise an entire directory, too.

How do I use Puppet's ralsh with resource types provided by modules?

I have installed the postgresql module from the Puppet Forge.
How can I query PostgreSQL resources using ralsh?
None of the following works:
# ralsh postgresql::db
# ralsh puppetlabs/postgresql::db
# ralsh puppetlabs-postgresql::db
I was hoping to use this to get a list of databases (including attributes such as character sets) and user names/passwords from the current system in a form that I can paste into a puppet manifest to recreate that setup on a different machine.
In principle, any Puppet client gets the current state of your system from another program called Facter. You could create a custom fact (a Facter module) and then include it in your Puppet client. Afterwards, I think you could call this custom fact instead of relying on ralsh.
More information about creating a custom fact can be found here.
In creating your own fact, you would execute your SQL query and then save the result into a particular variable.
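A minimal sketch of what such a fact could look like, assuming shell access to psql as the postgres user (the fact name, query and connection details are illustrative and not part of the puppetlabs/postgresql module):

# lib/facter/postgres_databases.rb in your module
Facter.add(:postgres_databases) do
  setcode do
    # List non-template database names and expose them as a comma-separated string.
    output = Facter::Util::Resolution.exec(
      "psql -U postgres -At -c 'SELECT datname FROM pg_database WHERE NOT datistemplate'"
    )
    output ? output.split("\n").join(',') : nil
  end
end

You could then read the value with facter -p postgres_databases (or reference it in a manifest) and use it as a starting point for the postgresql::db declarations on the other machine.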