How can I tell if postgres has kerberos installed? [closed] - postgresql

I am trying to configure postgres (8.4.13) to work with Kerberos, and I cannot seem to get it to work. The one "gotcha" I keep reading about is that postgres must be built with Kerberos support. Well, the postgres I have is an rpm downloaded from the internet. How can I tell whether this postgres was built with Kerberos support or not? Is there a way to list "installed components"? Thanks!!!

pg_config might be helpful, e.g.:
pg_config --configure
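You can then grep the configure string for the Kerberos-related switches. The sample string below is illustrative only (the actual flags vary by packager); on a real system you would pipe `pg_config --configure` into the grep instead:

```shell
# Illustrative configure string; on a real system, use:
#   pg_config --configure | grep -oE -- '--with-(krb5|gssapi)'
sample="'--prefix=/usr' '--with-openssl' '--with-krb5' '--with-gssapi'"
echo "$sample" | grep -oE -- '--with-(krb5|gssapi)'
```

If neither `--with-krb5` nor `--with-gssapi` shows up, the build most likely has no Kerberos support.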

Some binaries have command-line options which will allow you to see how they were compiled. I'm not sure if postgres will do that. You can use --version (but this is generally very minimalistic) and --describe-config, which will dump values like:
Security and Authentication STRING FILE:/usr/local/etc/postgresql/krb5.keytab
Sets the location of the Kerberos server key file
if Kerberos capabilities are configured for the PostgreSQL installation.
As well, many packaging systems have methods for capturing compile-time options that were passed to the build process (FreeBSD's pkg info -f, for example, does this). It's been a while since I have used rpm, and newer versions may have methods for this sort of query directly on the binary. On rpm-based systems I have administered, I would keep the src.rpm and .spec files on hand in a local repository for each installed application. This was in order to comply with our in-house "policies" :-) and to track the configure and OPTFLAGS settings, the source code used in the build, etc.
Here is a response to a similar question:
https://serverfault.com/questions/36037/how-can-i-find-what-options-an-rpm-was-compiled-with
A generic UNIX method for seeing which libraries a binary is linked against is to use "ldd" like this:
$~/ ldd /usr/local/bin/postgres
/usr/local/bin/postgres:
libgssapi.so.3 => /usr/local/lib/libgssapi.so.3 (0x800a38000)
libxml2.so.5 => /usr/local/lib/libxml2.so.5 (0x800b74000)
libpam.so.5 => /usr/lib/libpam.so.5 (0x800dc4000)
libssl.so.6 => /usr/lib/libssl.so.6 (0x800ecc000)
libcrypto.so.6 => /lib/libcrypto.so.6 (0x80101f000)
libm.so.5 => /lib/libm.so.5 (0x8012bf000)
libc.so.7 => /lib/libc.so.7 (0x8013df000)
libintl.so.9 => /usr/local/lib/libintl.so.9 (0x801621000)
libheimntlm.so.1 => /usr/local/lib/libheimntlm.so.1 (0x80172a000)
libkrb5.so.26 => /usr/local/lib/libkrb5.so.26 (0x801830000)
libheimbase.so.1 => /usr/local/lib/libheimbase.so.1 (0x8019ad000)
libhx509.so.5 => /usr/local/lib/libhx509.so.5 (0x801ab1000)
libwind.so.0 => /usr/local/lib/libwind.so.0 (0x801bf9000)
libsqlite3.so.8 => /usr/local/lib/libsqlite3.so.8 (0x801d21000)
libasn1.so.8 => /usr/local/lib/libasn1.so.8 (0x801ec3000)
libcom_err.so.2 => /usr/local/lib/libcom_err.so.2 (0x802058000)
libiconv.so.3 => /usr/local/lib/libiconv.so.3 (0x80215b000)
libroken.so.19 => /usr/local/lib/libroken.so.19 (0x802358000)
libcrypt.so.5 => /lib/libcrypt.so.5 (0x802469000)
libthr.so.3 => /lib/libthr.so.3 (0x802589000)
libz.so.5 => /lib/libz.so.5 (0x8026a2000)
As you can see, on my system the postgres binary is linked against libkrb5.so.26, libgssapi.so.3, libheimntlm.so.1, etc. (these are Heimdal Kerberos libraries).
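A quick way to apply this check is to filter the ldd output for Kerberos-related library names. The pipeline below runs the grep over a captured sample of the output above; on a live system you would use `ldd "$(command -v postgres)"` as the input instead:

```shell
# Sample ldd lines piped through the filter; any krb5/gssapi hit
# indicates the binary is linked against Kerberos libraries.
printf '%s\n' \
  'libgssapi.so.3 => /usr/local/lib/libgssapi.so.3' \
  'libssl.so.6 => /usr/lib/libssl.so.6' \
  'libkrb5.so.26 => /usr/local/lib/libkrb5.so.26' |
  grep -Ei 'krb5|gssapi'
```

No output at all would suggest a build without Kerberos support.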
[EDIT: I still think Milen's response is most likely the best, most thorough, and recommended approach, BUT one caveat I ran into just today: on most of my systems (mostly FreeBSD) pg_config appears to be installed with the postgresql-client pkg, and so it can potentially have different options set than those selected when the postgresql-server is built. I tend to build lots of functionality into the clients so they can connect to a range of servers, which are often running on different machines. The package with the client command-line shell and libraries (postgresql-devel on most RPM-based Linux systems) is what gives database modules and libraries for Python, Perl, etc. the ability to connect to your DB server. The client libraries often reside on a separate host when you have a web application that is grabbing and storing data (CRUD) in a database back end. That said, most likely the binary client/server/devel packages are built with the same options set ;-)
Anyway, just another data point ... cheers.]
Hope that helps.
Cheers,

Related

How to add changes for NO ENCRYPT DB2 option to db2RestoreStruct

I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there DB2 sample code where I can check how to set the "NO Encrypt" option in DB2? I am adding the below code snippet.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO;
The 'C' API named db2Restore will restore an encrypted image to an unencrypted database, when used correctly.
You can use a modified version of IBM's sample files, dbrestore.sqc and related files, to see how to do it.
Depending on your 'C' compiler version and settings, you might get a lot of warnings from IBM's code, because IBM does not appear to maintain its sample code as the years pass. However, you do not need to run IBM's sample code; you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2-server version+fixpack, and you must use the C include files that come with your Db2-server version+fixpack to get the relevant definitions.
The modifications to IBM's samples code include:
When using the db2Restore API, ensure its first argument has a value that is compatible with your server's Db2 version and fixpack, to access the required functionality. If you specify the wrong version number for the first argument, for example a version of Db2 that did not support this functionality, then the API will fail. For example, on my Db2-LUW v11.1.4.6, I used the predefined db2Version1113, like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field, enable the flag DB2RESTORE_NOENCRYPT. For example, in IBM's sample, include the additional flag:
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the encrypted-backup alias name.
I tested with Db2 v11.1.4.6 (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.

Is it possible to notify a service on a different host with puppet?

I have a puppet module for host-1 doing some file exchanges.
Is it possible to inform another puppet agent on host-2 (e.g. with a notify) about a change made on host-1?
And if it is possible, what would be a best practice way to do that?
class fileexchangehost1 {
  file { '/var/apache2/htdocs':
    ensure  => directory,
    source  => "puppet:///modules/${module_name}/var/apache2/htdocs",
    owner   => 'root',
    group   => 'root',
    recurse => true,
    purge   => true,
    force   => true,
    notify  => Service['restart-Service-on-host-2'],
  }
}
Many have asked this question and at various times there has been talk of implementing a feature to make it possible. But it's not possible, and not likely to be possible any time soon.
Exported resources were considered an early solution to problems similar to this, although some (e.g. here) have argued it is not a good solution, and I don't see exported resources used often nowadays.
I think, nowadays, the recommended approach would be to keep-it-simple, and use something like Puppet Bolt to simply run commands on node A, and then on node B, in order.
If not Puppet Bolt, you could also use MCollective's successor, Choria, or even Ansible for this.
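To sketch the Bolt route: a YAML plan can order the two operations explicitly, so the restart on host-2 only runs after the change on host-1 succeeds. Everything below (module, task, service, and host names) is hypothetical:

```yaml
# plans/sync_and_restart.yaml in a hypothetical module "myapp".
# Run with: bolt plan run myapp::sync_and_restart
parameters: {}
steps:
  # Step 1: apply the file change on host-1 (hypothetical task name).
  - name: push_files
    task: myapp::push_htdocs
    targets: host-1
  # Step 2: only after step 1 succeeds, restart the service on host-2.
  - name: restart_service
    command: systemctl restart my-service
    targets: host-2
```

Bolt runs the steps in order and stops on failure, which gives you the cross-host ordering that a plain Puppet manifest cannot express.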
Puppet has no direct way of notifying a service on one host from the manifest of another.
That said, could you use exported resources for this? We use exported resources with Icinga, so one host generates Icinga configuration for itself, then exports it to the Icinga server, which restarts the daemon.
For example, on the client host:
@@file { "/etc/icinga2/conf.d/puppet/${::fqdn}.conf":
  ensure => file,
  [...]
  tag    => "icinga_client_conf",
}
And on the master host:
File <<| tag == "icinga_client_conf" |>> {
  notify => Service['icinga2'],
}
In your case there doesn't appear to be a resource being exported, but would this give you the tools to build something to do what you need?

Zend_Cache with Memcachier

Can I use memcachier with ZendFramework 1.12 ?
The provider that I am using (AppFog) only offers MemCachier (Memcached has been "coming soon" for 10 months), and my app will need a lot of caching when it starts. I don't want to stick with APC, so I have no other good alternative.
So this is just a half answer right now; I'll try to figure out the rest. I work for MemCachier, by the way, so please shoot us an email at support@memcachier.com if you have more questions.
PHP includes two memcache bindings by default: memcache and memcached. The first one (memcache) is its own implementation of the memcache protocol, while the second one (memcached) is a PHP binding to the libmemcached C++ library.
The memcached binding for PHP does support SASL these days (since version 2.0.0). Sadly, it isn't documented. It is also an optional part of the memcached module, so you'll need to make sure it is compiled on your machine (or on AppFog) with SASL support enabled. The steps to do that are roughly:
Install libmemcached. I used version 1.0.14.
Install php-memcached. Make sure to pass the "--enable-memcached-sasl" option when running ./configure.
When building both of these you can sanity-check the output of "./configure" to make sure that SASL support is indeed enabled; sadly, right now that can be tricky.
Edit your php.ini file. Put the following lines into it:
[memcached]
memcached.use_sasl = 1
I did all of this on OSX 10.8 using homebrew. If that is the case for you the following should work:
$ brew install libmemcached
$ brew edit php54-memcached
# you'll need to add the line:
#   args << "--enable-memcached-sasl"
# to the brew formula
$ brew install php54-memcached
Now, to actually use SASL support, here is a test file that demonstrates it and serves as a good sanity check.
<?php
/**
 * Test of the PHP Memcached extension.
 */
error_reporting(E_ALL & ~E_NOTICE);

$use  = ini_get("memcached.use_sasl");
$have = Memcached::HAVE_SASL;

echo "Have SASL? $have\n";
echo "Using SASL? $use\n\n";

$mc = new Memcached();
$mc->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
$mc->setSaslAuthData("user-1", "pass");
$mc->addServer("localhost", 11211);

$mc->set("foo", "Hello!");
$mc->set("bar", "Memcached...");

$arr = array(
    $mc->get("foo"),
    $mc->get("bar")
);
var_dump($arr);
?>
Adapting this to work in the Zend Framework is unknown to me right now. I'm not familiar with it, so it may take me some time to install and figure out. It seems very doable, though, given that one of the backends works with SASL auth.

Perl Net::LDAPS SSL version negotiation

Platform
OS: RHEL 5.4
Perl: 5.8.8
OpenSSL: 0.9.8e
Net::LDAP: 0.33
Details
We have a Perl script which monitors LDAP servers used by our application and sends alerts if these are not reachable or functional in some way. On the whole this works well, except for a particular LDAP server which only accepts SSLv2. The LDAP object creation looks as follows:
my $current_server = Net::LDAP->new( 'ldaps://10.1.1.1:636/',
    verify     => 'none',
    sslversion => 'sslv23',
    timeout    => 30 );
Notice sslv23, which according to the documentation, allows SSLv2 or 3 to be used. The relevant entry in the docs is
sslversion => 'sslv2' | 'sslv3' | 'sslv23' | 'tlsv1'
However, the above does not work. When I change "sslversion" to "sslv2", the script does work.
Conclusion
How can I get Net::LDAPS to retry with sslv2 if sslv3 does not work?
Thanks,
Fred.
To answer my own question - I could not find a way to make the library fall back to SSLv2 if SSLv3 fails so I did that in my own code by first trying SSLv3 and, if that fails, SSLv2. This works for my specific situation.
I encourage anyone who has more information to comment.
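For reference, the fallback described above looks roughly like this (a sketch only, not tested against a live server; the URI is a placeholder):

```perl
use Net::LDAP;

# Try SSLv3 first, then fall back to SSLv2 if the connection fails.
my $current_server;
for my $ver ('sslv3', 'sslv2') {
    $current_server = Net::LDAP->new( 'ldaps://10.1.1.1:636/',
        verify     => 'none',
        sslversion => $ver,
        timeout    => 30 );
    last if $current_server;
}
die "Could not connect with SSLv3 or SSLv2\n" unless $current_server;
```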
Which version of IO::Socket::SSL are you using?
Which version of Net::SSLeay are you using?
If you look at the source code for IO::Socket::SSL, you will see that sslv23 means it will use Net::SSLeay::CTX_new() which is equivalent to Net::SSLeay::CTX_v23_new(). SSLv2 may have been dropped from your library's CTX_v23_new implementation.

How to deploy bugzilla with psgi on dotcloud?

I want to deploy bugzilla on dotcloud, but the perl environment is psgi.
The doc said I must use "modules to add PSGI hooks to legacy CGI or FastCGI applications".
I found the CGI::Emulate::PSGI module but could not figure out how to use it.
I am a Python programmer and have no experience in Perl.
I had partial success with bugzilla-4.0.2 on a local openSUSE box. I don't think Bugzilla will be suitable for cloud deployment in the short term because of the large amount of manual setup necessary. Follow the instructions referenced from docs/en/html/index.html, then run
plackup -MPlack::App::CGIBin -e'Plack::App::CGIBin->new(root => ".")->to_app'
and visit http://localhost:5000/index.cgi. The static files, e.g. stylesheets, are missing. Something along the lines of
plackup -MPlack::Builder -MPlack::App::Directory -MPlack::App::CGIBin -e 'builder {
mount "/" => Plack::App::CGIBin->new(root => ".")->to_app;
mount "/" => Plack::App::Directory->new({ root => "." })->to_app;
}'
is necessary, but mounting to the same path actually does not work in Plack 0.9985, or I'm doing it wrong.
I did not try it, but this sounds like what you want: it's Bugzilla deployed to Stackato, a cloud platform.
You can join Stackato and then deploy the Bugzilla sample.
https://github.com/Stackato-Apps/bugzilla