Please help, as I am a little clueless.
I am trying to upgrade our Airflow installation, which uses Kerberos authentication, from the LocalExecutor to the CeleryExecutor. Currently, we run the whole Airflow installation on a single server.
The exact same Sqoop job fails under the CeleryExecutor with a Kerberos authentication error, while it succeeds under the LocalExecutor:
Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "lsrv****.linux.****/10.251.128.148"; destination host is: "lsrv***.linux.****":8020; , while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over lsrv****.linux.****/10.251.128.104:8020 after 1 failover attempts. Trying to failover after sleeping for 1377ms.'
Kerberos settings:
[kerberos]
ccache = /tmp/krb5cc_32606
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = /var/lib/airhome/.certs/airflow.keytab
Is there anything that needs to change in the Kerberos or Celery setup to make Kerberos work in combination with the CeleryExecutor? Or does anything need to change in the Cloudera Hadoop settings (e.g. hadoop.security.token.service.use_ip)?
Does it have something to do with the additional IP address mentioned in the error message?
You also need to run airflow kerberos wherever you run airflow worker. It is the Kerberos ticket renewer component of Airflow.
https://airflow.apache.org/docs/stable/security.html#kerberos
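A minimal sketch of what that looks like on each Celery worker host, assuming the same airflow.cfg [kerberos] section and keytab are deployed there (how you supervise the two processes is up to you):
airflow kerberos &   # the ticket renewer: performs the kinit and keeps the ccache fresh
airflow worker       # tasks on this worker then find a valid ticket in the configured ccache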
I have created a connection in Glue to a DocumentDB cluster. The cluster is running, and I can connect from my laptop and also from AWS Athena to run queries over it. The connection URL in Glue follows this format:
mongodb://host:27017/database
When creating the connection, I tried both enabling and disabling the SSL connection option.
I have also disabled TLS on the cluster and rebooted the database. Every time I test the connection from Glue I get this error:
Check that your connection definition references your Mongo database with correct URL syntax, username, and password.
Exiting with error code 30
I have also tried setting the user and password in the URL, but I get the same error.
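For example, with credentials the URL followed the standard MongoDB URI format (the values here are placeholders):
mongodb://myuser:mypassword@host:27017/database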
How can I solve this?
Thanks!!!
First of all, does the "database" actually exist in the DocumentDB cluster? Make sure you select the right VPC for Glue; it has to be the same as the DocumentDB cluster's. When using the Test Connection option, one of the security groups has to have an allow-all rule, or the source security group in your inbound rule can be restricted to the same security group.
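For example, a self-referencing allow-all inbound rule can be added with the AWS CLI like this (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol -1 --source-group sg-0123456789abcdef0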
This blog post has some good info on how to set up a Glue connection to MongoDB/DocumentDB.
I have solved the problem: disabling TLS both on DocumentDB and in the Glue connection works. I still have to find a way to make it work with TLS enabled.
How do we connect to PostgreSQL through Kerberos?
I tried to connect by:
1. adding the Kerberos module to the drivers folder,
2. then adding jaasApplicationName and kerberosServerName to the DB URL,
3. and providing the JAAS config file via the command-line parameter java.security.auth.login.config=
But while starting the Corda node, it throws an error with the message:
no valid credentials provided.. Mechanism Level: Failed to find kerberos tgt
However, the same setup works from a simple Java program.
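For reference, a minimal sketch of the setup described above, using the PostgreSQL JDBC driver's jaasApplicationName and kerberosServerName connection properties (hosts, paths, and the realm are placeholders):
jdbc:postgresql://dbhost:5432/mydb?jaasApplicationName=pgjdbc&kerberosServerName=postgres
and the JAAS config file passed via -Djava.security.auth.login.config=/path/to/jaas.conf:
pgjdbc {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/user.keytab"
  principal="dbuser@EXAMPLE.COM"
  doNotPrompt=true;
};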
Kerberos is a product unrelated to R3; as of now there is no demo integration between Kerberos and Corda.
Hence there is no documentation around this on docs.corda.net. However, we are exploring its potential in our research.
We have implemented HashiCorp's open-source Vault as a single node with a Consul backend.
We need help implementing a 3-node Vault cluster for HA in a single datacenter, as well as across multiple datacenters.
Could you please help me with this?
The Vault Deployment guide has more on this.
https://learn.hashicorp.com/vault/operations/ops-deployment-guide#help-and-reference
Combine it with this guide: https://learn.hashicorp.com/vault/operations/ops-vault-ha-consul
I shall assume, just based on the fact that you've already gotten a single node up with a Consul backend, that you already know a little about scripting, Git, configuring new SSH connections, installing software, and Virtual Machines.
Also, these topics are hard to explain here and have much better resources elsewhere.
Further, if you get stuck with the prerequisites, tools to install, or downloading the code, please have a look at the resources on the internet.
If you get an error with Vault working improperly, though, make a GitHub issue ASAP.
Anyway, with that out of the way, the short answer is this:
Step 1: Set up 3 Consul servers, each with references to each other.
Step 2: Set up 3 Vault servers, each of them independent, but with a reference to a Consul address as their Storage Backend.
Step 3: Initialize the Cluster with your brand new Vault API.
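To make Steps 1 and 2 concrete, here is a minimal sketch of the two configuration files involved (the IPs match the Vagrant boxes used below; the file names, paths, and disabled TLS are assumptions for a demo, not a hardened setup).
A Consul server config (one per Consul server):
server           = true
bootstrap_expect = 3
retry_join       = ["192.168.13.35", "192.168.13.36", "192.168.13.37"]
data_dir         = "/opt/consul/data"
A Vault server config (one per Vault server, pointing at its local Consul agent):
storage "consul" {
  address = "127.0.0.1:8500"   # local Consul agent
  path    = "vault/"
}
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1              # demo only; use TLS in production
}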
Now for the long answer.
Prerequisites
OS-Specific Prerequisites
MacOS: OSX 10.13 or later
Windows: Windows must have PowerShell 3.0 or later. If you're on Windows 7, I recommend Windows Management Framework 4.0, because it's easier to install
Vagrant
Set up Vagrant, because it will take care of all of the networking and resource setup for the underlying infrastructure used here.
Especially for Vagrant, the Getting Started guide takes about 30 minutes once you have Vagrant and Virtualbox installed: https://www.vagrantup.com/intro/getting-started/index.html
Install Tools
Make sure you have Git installed
Install the latest version of Vagrant (NOTE: WINDOWS 7 AND WINDOWS 8 REQUIRE POWERSHELL >= 3)
Install the latest version of VMWare or Virtualbox
Download the Code for this demonstration
Related Vendor Documentation Link: https://help.github.com/articles/cloning-a-repository
git clone https://github.com/v6/super-duper-vault-train.git
Use this Code to Make a Vault Cluster
Related Vagrant Vendor Documentation Link: https://www.vagrantup.com/intro/index.html#why-vagrant-
cd super-duper-vault-train
vagrant up ## NOTE: You may have to wait a while for this, and there will be some "connection retry" errors for a long time before a successful connection occurs, because the VM is booting. Make sure you have the latest version, and try the Vagrant getting started guide, too
vagrant status
vagrant ssh instance5
After you do this, you'll see your command prompt change to show vagrant@instance5.
You can also vagrant ssh to other VMs listed in the output of vagrant status.
You can now use Vault or Consul from within the VM for which you ran vagrant ssh.
Vault
Explore the Vault Cluster
ps -ef | grep vault ## Check the Vault process (run while inside a Vagrant-managed Instance)
ps -ef | grep consul ## Check the Consul process (run while inside a Vagrant-managed Instance)
vault version ## Output should be Vault v0.10.2 ('3ee0802ed08cb7f4046c2151ec4671a076b76166')
consul version ## Output should show Consul Agent version and Raft Protocol version
The Vagrant boxes have the following IP addresses:
192.168.13.35
192.168.13.36
192.168.13.37
Vault is on port 8200.
Consul is on port 8500.
Click the Links
http://192.168.13.35:8200 (Vault)
http://192.168.13.35:8500 (Consul)
http://192.168.13.36:8200 (Vault)
http://192.168.13.36:8500 (Consul)
http://192.168.13.37:8200 (Vault)
http://192.168.13.37:8500 (Consul)
Initialize Vault
Related Vendor Documentation Link: https://www.vaultproject.io/api/system/init.html
Initialize Vault by running this command on one of the Vagrant-managed VMs, or from anywhere on your computer that has curl installed.
curl -s --request PUT -d '{"secret_shares": 3,"secret_threshold": 2}' http://192.168.13.35:8200/v1/sys/init
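The response contains the unseal keys and the initial root token, shaped roughly like this (all values here are placeholders):
{"keys":["abcd12345678...","efgh910111213...","ijkl1415161718..."],"keys_base64":["..."],"root_token":"14d1316e-..."}
Save these somewhere safe: you will need any 2 of the 3 keys to unseal, and the root token to configure Vault.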
Unseal Vault
Related Vendor Documentation Link: https://www.vaultproject.io/api/system/unseal.html
This will unseal the Vault at 192.168.13.35:8200. You can use the same process for 192.168.13.36:8200 and 192.168.13.37:8200.
Use one of your unseal keys from the init response to replace the placeholder value abcd12345678..., and run this on the Vagrant-managed VM.
curl --request PUT --data '{"key":"abcd12345678..."}' http://192.168.13.35:8200/v1/sys/unseal
Run that curl command again, but use a different value for "key": replace efgh910111213... with a different key than you used in the previous step, from the keys you received when running the v1/sys/init endpoint.
curl --request PUT --data '{"key":"efgh910111213..."}' http://192.168.13.35:8200/v1/sys/unseal
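You can check progress at any point with the seal-status endpoint; once sealed is false in the response, that Vault node is unsealed and ready:
curl http://192.168.13.35:8200/v1/sys/seal-status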
Non-Vagrant
Please refer to the file PRODUCTION_INSTALLATION.md in this repository.
Codified Vault Policies and Configuration
To provision Vault via its API, please refer to the provision_vault folder. It has data and scripts.
The data folder's tree corresponds to the HashiCorp Vault API endpoints, similar to the following:
https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration#layout-and-design
You can use the codified Vault policies and configuration with your initial root token, after initializing and unsealing Vault, to configure Vault quickly via its API.
The .json files inside each folder correspond to the payloads to send to Vault via its API, but there may also be .hcl, .sample, and .sh files for convenience's sake.
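As a hedged sketch, sending one of those payloads looks something like this (the token value, folder layout, and policy name here are placeholders, not the repository's actual contents):
export VAULT_TOKEN=your-initial-root-token
curl --header "X-Vault-Token: $VAULT_TOKEN" --request PUT --data @provision_vault/data/sys/policy/my-policy.json http://192.168.13.35:8200/v1/sys/policy/my-policy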
HashiCorp has written some guidance on how to get started with setting up an HA Vault and Consul configuration:
https://www.vaultproject.io/guides/operations/vault-ha-consul.html
I have successfully mounted and used NFS version 4 with a Solaris server and a FreeBSD client.
The problem arises with a FreeBSD server and a FreeBSD client at version 4; version 3 works excellently.
I have used the FreeBSD NFS server since FreeBSD version 4.5 (back then with IBM AIX clients).
The problem:
It mounts OK, but no principals appear in the Kerberos cache, and when trying to read or write on the mounted filesystem I get the error: Input/output error
The nfs/server-fqdn@REALM and nfs/client-fqdn@REALM principals are created on the Kerberos server and stored properly in keytab files on both sides.
I issue TGTs from the KDC using the principals above, on both sides, into root's Kerberos cache.
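For reference, that step looks something like this (keytab path and REALM are placeholders; MIT kinit syntax shown, while Heimdal in the FreeBSD base system uses --keytab=...):
kinit -k -t /etc/krb5.keytab nfs/server-fqdn@REALM   # on the server
kinit -k -t /etc/krb5.keytab nfs/client-fqdn@REALM   # on the client
klist                                                # verify the TGT is in root's cache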
I start services properly:
file /etc/rc.conf
rpcbind_enable="YES"
gssd_enable="YES"
rpc_statd_enable="YES"
rpc_lockd_enable="YES"
mountd_enable="YES"
nfsuserd_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
Then I start the services:
on the client: rpcbind, gssd, nfsuserd;
on the server: all of the above, with the following exports file:
V4: /marble/nfs -sec=krb5:krb5i:krb5p -network 10.20.30.0 -mask 255.255.255.0
I mount:
# mount_nfs -o nfsv4 servername:/ /my/mounted/nfs
#
# mkdir /my/mounted/nfs/e
# mkdir: /my/mounted/nfs/e: Input/output error
#
The same result occurs even for an ls command.
klist does not show any new principals in root's cache, or any other cache.
I love the amazing performance of version 3, but I need the local file locking feature of NFSv4.
The second reason is security: I need Kerberized RPC calls (-sec=krb5p).
If any of you have achieved this using a FreeBSD server for NFS version 4, please give feedback on this question; I'll be glad if you do.
Comments are not good for giving code examples, so here is the setup of a FreeBSD client and FreeBSD server that works for me. I don't use Kerberos, but if you get it working with this minimal configuration, then you can add Kerberos afterwards (I believe).
Server rc.conf:
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
mountd_flags="-r"
Server /etc/exports:
/parent/path1 -mapall=1001:1001 192.168.2.200
/parent/path2 -mapall=1001:1001 192.168.2.200
... (more shares)
V4: /parent/ -sec=sys 192.168.2.200
Client rc.conf:
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
Client fstab:
192.168.2.100:/path1/ /mnt/path1/ nfs rw,bg,late,failok,nfsv4 0 0
192.168.2.100:/path2/ /mnt/path2/ nfs rw,bg,late,failok,nfsv4 0 0
... (more shares)
As you can see, the client mounts only what is below the /parent/ path specified in the V4: line on the server. 192.168.2.100 is the server IP and 192.168.2.200 is the client IP. This setup will only allow that one client to connect to the server.
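On the client you can verify what was actually mounted and negotiated, e.g.:
mount -t nfs   # list the active NFS mounts
nfsstat -m     # show the mount options in effect (FreeBSD)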
I hope I haven't missed anything. BTW, please raise questions like this on SuperUser or ServerFault rather than StackOverflow. I am surprised this question hasn't been closed yet because of that ;)
I'm trying to set up a cell and a collective in a WAS for Bluemix service. I've found a few steps online for generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Log in to the Admin Console as wsadmin.
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below, you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin, or SSH to your host as wsadmin with its password.
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each application server's server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of
`/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not
permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at http://yourhostname:9080/appname.
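For example, from any machine that can reach the host (hostname and context root are placeholders):
curl http://yourhostname:9080/appname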