Rundeck env variables work for Ubuntu but not for CentOS machines

I have exported a few environment variables in /etc/sysconfig/rundeckd for CentOS and /etc/default/rundeckd for Ubuntu. The variables seem to be picked up by the rd CLI on Ubuntu but not on CentOS. What am I missing?

The configuration must be set in the user's shell (for example, in the .bashrc file); check the configuration documentation here.
For example:
export RD_URL=http://192.168.33.10:4440
export RD_USER=admin
export RD_PASSWORD=admin
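To make these persist for the user who runs rd, one option is to append them to that user's ~/.bashrc and reload the shell. A minimal sketch, assuming a bash login shell; the final rd system info call is only a convenient connectivity check, if that subcommand is available in your rd version:
cat >> ~/.bashrc <<'EOF'
export RD_URL=http://192.168.33.10:4440
export RD_USER=admin
export RD_PASSWORD=admin
EOF
source ~/.bashrc
rd system info   # should now reach the Rundeck server without extra flags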


vagrant up fails with: Cannot translate name @ rb_sysopen when trying to run Homestead

When I run vagrant up I get the following error:
Vagrant/embedded/gems/2.2.14/gems/vagrant-2.2.14/plugins/hosts/suse/host.rb:20:in `initialize': Cannot translate name. @ rb_sysopen - /etc/os-release (Errno::ELOOP)
I have installed Vagrant for Windows and I'm trying to launch Laravel's Homestead, which I cloned inside WSL2, by cd'ing in PowerShell into the Z: drive that maps to the WSL2 filesystem (so that I have access to the Vagrant that's installed on Windows).
cd Z:\home\coder\projects\homestead
If I'm understanding correctly, Vagrant tries to detect the OS from the filesystem. So if you run the Windows Vagrant across a network share that points into Unix/WSL/Linux, it will try to behave as if it were on Unix and fail.
Solution
I was able to copy the homestead directory from the network share into my Windows environment, navigate to that directory, and run vagrant up successfully from PowerShell.
Another Option
It sounds like you should also be able to install Vagrant within WSL2 and use it from within WSL2 instead of PowerShell.
Another possibility to note is that you can invoke .exe files from within WSL2, but it sounds like it will not work properly if you try to run the Windows Vagrant from within WSL2.
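A minimal sketch of that second option, assuming an Ubuntu-based WSL2 distro (the apt package may lag behind the Windows installer, and the VAGRANT_WSL_ENABLE_WINDOWS_ACCESS variable comes from the WSL documentation linked under Research):
# inside the WSL2 shell, install a Linux Vagrant instead of calling the Windows .exe
sudo apt-get update
sudo apt-get install -y vagrant
# per the Vagrant WSL docs, allow Vagrant to reach the Windows-side provider
export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"
cd ~/projects/homestead
vagrant up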
Research
https://github.com/roots/trellis/issues/1083
https://www.vagrantup.com/docs/other/wsl.html
https://discourse.roots.io/t/command-vagrant-up-in-wsl-is-failed/16528

Jupyter directories when in a virtual environment

Where does jupyter store kernelspecs and other data, when running inside a virtual environment?
(I'm interested in conda environments, but knowing about other kinds of virtual envs would be interesting too).
I think I found it.
When inside a virtual environment, one can run
jupyter --paths
and one will see jupyter locations (for the jupyter installed inside the currently active environment).
Something like:
config:
    /home/<user>/.jupyter
    /home/<user>/anaconda3/envs/<this-env>/etc/jupyter
    /usr/local/etc/jupyter
    /etc/jupyter
data:
    /home/<user>/.local/share/jupyter
    /home/<user>/anaconda3/envs/<this-env>/share/jupyter
    /usr/local/share/jupyter
    /usr/share/jupyter
runtime:
    /home/<user>/.local/share/jupyter/runtime
The directory where kernelspecs are stored would then be /home/<user>/anaconda3/envs/<this-env>/share/jupyter/kernels.
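To confirm which kernelspecs the environment's Jupyter actually sees (and where they live), you can also run jupyter kernelspec list from inside the environment. A small sketch; the kernel name and path in the comment are only illustrative:
source activate <this-env>
jupyter kernelspec list
# Available kernels:
#   python3    /home/<user>/anaconda3/envs/<this-env>/share/jupyter/kernels/python3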

Jupyter PySpark outputs: No module named sknn.mlp

I have an HDInsight Spark cluster with one worker node. I need to use the scikit-neuralnetwork and vaderSentiment modules in PySpark from Jupyter.
I installed the libraries using the commands below:
cd /usr/bin/anaconda/bin/
export PATH=/usr/bin/anaconda/bin:$PATH
conda update matplotlib
conda install Theano
pip install scikit-neuralnetwork
pip install vaderSentiment
Next I open the pyspark terminal and I am able to import the module successfully.
Now, when I open a Jupyter PySpark notebook, the import fails with "No module named sknn.mlp".
Just to add, I am able to import pre-installed modules from Jupyter, like import pandas.
The installation goes to:
admin123@hn0-linuxh:/usr/bin/anaconda/bin$ sudo find / -name "vaderSentiment"
/usr/bin/anaconda/lib/python2.7/site-packages/vaderSentiment
/usr/local/lib/python2.7/dist-packages/vaderSentiment
For pre-installed modules:
admin123@hn0-linuxh:/usr/bin/anaconda/bin$ sudo find / -name "pandas"
/usr/bin/anaconda/pkgs/pandas-0.17.1-np19py27_0/lib/python2.7/site-packages/pandas
/usr/bin/anaconda/pkgs/pandas-0.16.2-np19py27_0/lib/python2.7/site-packages/pandas
/usr/bin/anaconda/pkgs/bokeh-0.9.0-np19py27_0/Examples/bokeh/compat/pandas
/usr/bin/anaconda/Examples/bokeh/compat/pandas
/usr/bin/anaconda/lib/python2.7/site-packages/pandas
The sys.executable path is the same in both Jupyter and the terminal:
print(sys.executable)
/usr/bin/anaconda/bin/python
Any help would be greatly appreciated.
The issue is that while you are installing the libraries on the headnode (one of the VMs), you are not installing them on all the other VMs (the worker nodes). When the PySpark app for Jupyter gets created, it runs in YARN cluster mode, so the application master starts on a random worker node.
One way of installing the libraries on all worker nodes is to create a script action that runs against the worker nodes and installs the necessary libraries:
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-customize-cluster-linux/
Do note that there are two Python installations in the cluster, and you have to refer to the Anaconda installation explicitly. Installing scikit-neuralnetwork would look something like this:
sudo /usr/bin/anaconda/bin/pip install scikit-neuralnetwork
The second way of doing this is to simply ssh into the worker nodes from the headnode. First, ssh into the headnode, then find the worker node IPs by going to Ambari at https://YOURCLUSTER.azurehdinsight.net/#/main/hosts. Then ssh into each worker node (ssh 10.0.0.#) and execute the installation commands yourself on every one of them.
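A rough sketch of that second approach, run from the headnode once per worker node (the 10.0.0.# address is the placeholder from Ambari, and the python -c line is just a hypothetical sanity check against the Anaconda interpreter that Jupyter uses):
ssh 10.0.0.#                       # log into one worker node
sudo /usr/bin/anaconda/bin/pip install scikit-neuralnetwork vaderSentiment
/usr/bin/anaconda/bin/python -c "import sknn.mlp, vaderSentiment"   # prints nothing if the imports work
exit                               # back to the headnode; repeat for the next worker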
I did this for scikit-neuralnetwork, and while it does import correctly, it throws an error saying it cannot create a file in ~/.theano. Because YARN runs PySpark sessions as the nobody user, Theano cannot create its config file. Doing a little bit of digging around, I see that there's a way to change where Theano writes/looks for its config file. Please also take care of that while doing the installation: http://deeplearning.net/software/theano/library/config.html#envvar-THEANORC
I forgot to mention: to modify an environment variable, you need to set it when the PySpark session is created. Execute this in the Jupyter notebook:
%%configure -f
{
    "conf": {
        "spark.executorEnv.THEANORC": "{YOURPATH}",
        "spark.yarn.appMasterEnv.THEANORC": "{YOURPATH}"
    }
}
Thanks!
The easy way to resolve this was:
Create a bash script:
#!/usr/bin/env bash
cd /usr/bin/anaconda/bin/
export PATH=/usr/bin/anaconda/bin:$PATH
conda update -y matplotlib    # -y so the script action can run non-interactively
conda install -y Theano
pip install scikit-neuralnetwork
pip install vaderSentiment
Copy the bash script created above to any container in an Azure storage account.
While creating the HDInsight Spark cluster, use a script action and give the script's blob path as the URL. Ex: https://sa-account-name.blob.core.windows.net/containername/path-of-installation-file.sh
Install it on both the head nodes and the worker nodes.
Now, open Jupyter and you should be able to import the modules.

Postgres in Conda Environment (Ubuntu 14.04)

Being new to Anaconda, I am having some trouble properly setting up a conda environment. What I am interested in achieving is setting up an environment for a Django application with a Postgres database. The following command creates the environment:
$ conda create -n django1.7-webdev python=3.4 django=1.7 postgresql=9.1
This second command activates the environment:
$ source activate django1.7-webdev
At this point, though, when trying to run psql, I get the following error:
$ psql
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
How can I start PostgreSQL in the conda environment? The following command starts the PostgreSQL installed outside the activated conda environment, which is not what I want:
$ sudo service postgresql start
The PostgreSQL documentation on starting servers is at https://www.postgresql.org/docs/9.1/static/server-start.html - before that, you might also need to initialize a database cluster: https://www.postgresql.org/docs/9.1/static/creating-cluster.html
The conda package should include any binaries necessary to follow those directions. Moreover, these binaries should already be on PATH, since you are activating the environment.
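A minimal sketch of what that can look like with the environment active (the data directory and database name below are just examples, not anything conda creates for you):
# with django1.7-webdev activated, initdb/pg_ctl/createdb/psql come from the conda package
initdb -D ~/django-webdev-data                      # initialize a new database cluster
pg_ctl -D ~/django-webdev-data -l ~/django-webdev-data/server.log start
createdb djangoapp                                  # create an application database
psql djangoapp                                      # connects over the local socket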
In general, if you're starting a command with sudo to interact with conda, something is wrong. Unless you are trying to do some centrally-owned install that several users use, conda should never require admin rights.

pass kinit a custom krb5.conf file

I'm using kinit to log into a server that my sys admin didn't anticipate us using. It seems that the default location for the config file is /etc/krb5.conf, but I don't have root access so I can't edit this file to add a new server. How can I pass kinit a custom config file?
OK, I solved the problem: the default config file location can be overridden by setting the KRB5_CONFIG environment variable.
I had the same issue today. Here's the command that worked for me, for future reference:
env KRB5_CONFIG=/path/to/custom/krb5.conf kinit <your..args..here>
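If you hit that server often, you could also export the variable for the whole shell session instead of prefixing every command. A small sketch; the path is whatever location you chose for your custom file, and klist is just a convenient way to inspect the resulting ticket:
export KRB5_CONFIG=/path/to/custom/krb5.conf
kinit <your..args..here>
klist    # verify the ticket was obtained using the custom realm/server settings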
If you are using the Java kinit, try passing the java.security.krb5.conf system property:
On Windows:
-Djava.security.krb5.conf=C:/IBM/IBMSSO/krb5.ini
On non-Windows:
-Djava.security.krb5.conf=/opt/IBM/IBMSSO/krb5.conf
Example on Windows (with IBM Java)
java -Djava.security.krb5.conf=C:/IBM/IBMSSO/krb5.ini com.ibm.security.krb5.internal.tools.Kinit -k -t C:/IBM/IBMSSO/SSOICNTilo.keytab HTTP/myserver.123.com@123.COM