I have installed the Kerberos executor plugin by placing its zip file in the /libext directory.
But the plugin is not listed under Node Executors.
Are there any other configuration steps needed for the plugin?
After copying the zip file to the libext directory, you should have a new node executor for your projects named "SSH Kerberos Executor". (Tested under Rundeck 3.2.1.)
If it doesn't work, try restarting Rundeck and you'll see the new node executor for sure.
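A minimal sketch of the install steps, assuming a deb/rpm layout where libext lives at /var/lib/rundeck/libext (the zip filename below is a placeholder):

# Copy the plugin zip into Rundeck's plugin directory, then restart the service
# /var/lib/rundeck/libext and the zip name are assumptions; adjust to your install
cp kerberos-executor-plugin.zip /var/lib/rundeck/libext/
sudo systemctl restart rundeckd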
I'm exploring the Python package mrjob to run MapReduce jobs in Python. I've tried running it in the local environment and it works perfectly.
I have Hadoop 3.3 running on a Kubernetes (GKE) cluster, and I also managed to run mrjob successfully from inside the name-node pod.
Now I've got a Jupyter Notebook pod running in the same Kubernetes cluster (same namespace), and I wonder whether I can run mrjob MapReduce jobs from the Jupyter Notebook.
The problem seems to be that I don't have $HADOOP_HOME defined in the Jupyter Notebook environment, so, based on the documentation, I created a config file called mrjob.conf as follows:
runners:
  hadoop:
    cmdenv:
      PATH: <pod name>:/opt/hadoop
However, mrjob is still unable to detect the Hadoop binary and gives the error below:
FileNotFoundError: [Errno 2] No such file or directory: 'hadoop'
So is there a way in which I can configure mrjob to run with my existing Hadoop installation on the GKE cluster? I've tried searching for similar examples but was unable to find one.
mrjob is a wrapper around hadoop-streaming, and therefore requires the Hadoop binaries to be installed on the server(s) where the code will run (pods here, I guess), including the Jupyter pod that submits the application.
IMO, it would be much easier for you to deploy PySpark/PyFlink/Beam applications on k8s than hadoop-streaming, since you don't "need" Hadoop in k8s to run such distributed processes.
Beam would be recommended, since it is compatible with GCP Dataflow.
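For completeness: if the Hadoop binaries were installed inside the Jupyter pod itself (say under /opt/hadoop, as in the question), a sketch of an mrjob.conf pointing mrjob at them could look like this; hadoop_bin and hadoop_streaming_jar are standard mrjob hadoop-runner options, but the jar path is a version-specific assumption:

runners:
  hadoop:
    # Path to the hadoop executable inside this pod (assumes an /opt/hadoop install)
    hadoop_bin: /opt/hadoop/bin/hadoop
    # Streaming jar for Hadoop 3.3.x; adjust the version to match your install
    hadoop_streaming_jar: /opt/hadoop/share/hadoop/tools/lib/hadoop-streaming-3.3.0.jar

Note that the config in the question points PATH at another pod, which cannot work: mrjob shells out to a local hadoop binary and has no notion of remote pods.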
I am trying to install Confluent on Windows using WSL. I have done most of the setup as described here, but I am facing the following error while trying to start Confluent:
sai@DESKTOP-IRLOG8O:~$ confluent local services start
The local commands are intended for a single-node development environment only,
NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.266515
Error: fork/exec /mnt/c/jdk-15.0.2/bin/java: no such file or directory
This is my JAVA_HOME
sai@DESKTOP-IRLOG8O:~$ echo $JAVA_HOME
/mnt/c/jdk-15.0.2
Which also means that I have Java on my Windows machine at C:\jdk-15.0.2.
Inside my WSL bash, I am able to see the java binary under /mnt/c/jdk-15.0.2/bin.
I am not sure what the issue is here; please help me resolve it. Let me know if any other details are needed.
I have a solution. Somehow, a space in my JAVA_HOME path was not getting resolved properly. I moved the JDK on my Windows machine to a path without any spaces and the issue was resolved. The WSL Linux host uses the JAVA_HOME path as mounted under /mnt/c/<PATH_TO_JDK_ON_WINDOWS_HOST>.
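A minimal sketch of the resulting WSL setup, assuming the JDK was moved to C:\jdk-15.0.2 on the Windows side (e.g. in ~/.bashrc):

# Point JAVA_HOME at the Windows JDK as mounted in WSL; note there are no spaces in the path
export JAVA_HOME=/mnt/c/jdk-15.0.2
export PATH="$JAVA_HOME/bin:$PATH"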
I am able to configure Fastlane locally and it works well from the terminal, but when I try to run it with Jenkins (I have configured Jenkins locally on my MacBook) it fails every time (I have installed Ruby 2.5.0 again).
Any help on this would be highly appreciated.
I am attaching a screenshot for your reference.
Jenkins runs its build scripts as a specified user, 'jenkins'. You might want to check whether the 'jenkins' user has installed the dependencies required to run Fastlane, e.g. Ruby; one way to check is sketched below.
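A hedged way to verify this, assuming the 'jenkins' user has a usable login shell (the version numbers you see will depend on your setup):

# Run the version checks as the 'jenkins' user to see what its environment provides
sudo -u jenkins -i ruby -v
sudo -u jenkins -i fastlane --version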
Have you set up your PATH in Jenkins? In the configuration of your node, in the environment variables section, you'll want to include /usr/local/bin/ in Jenkins's PATH by entering /usr/local/bin/:$PATH.
How do I run Apache Storm in single-node mode on Windows? Can anyone provide a link for that?
Install Java
Download and install a JDK (Storm works with both Oracle and OpenJDK 6/7). For this setup I used JDK 7 from Oracle.
I installed Java in:
C:\Java\jdk1.7.0_45\
Install Python
To test the installation, we'll be deploying the "word count" sample from the storm-starter project, which uses a multi-lang bolt written in Python. I used Python 2.7.6, which can be downloaded here.
I installed python in:
C:\Python27\
Install and Run Zookeeper
Download Apache Zookeeper 3.3.6 and extract it. Configure and run Zookeeper with the following commands:
> cd zookeeper-3.3.6
> copy conf\zoo_sample.cfg conf\zoo.cfg
> .\bin\zkServer.cmd
Install Storm
The changes that allow Storm to run seamlessly on Windows have not been officially released yet, but you can download a build with those changes incorporated here.
(Source branch for that build can be found here).
Extract that file to the location of your choice. I chose C:.
Configure Environment Variables
On Windows, Storm requires the STORM_HOME and JAVA_HOME environment variables to be set, as well as some additions to the PATH variable:
JAVA_HOME:
C:\Java\jdk1.7.0_45\
STORM_HOME:
C:\storm-0.9.1-incubating-SNAPSHOT-12182013\
PATH: (add)
%STORM_HOME%\bin;%JAVA_HOME%\bin;C:\Python27;C:\Python27\Lib\site-packages\;C:\Python27\Scripts\;
PATHEXT: (add)
.PY
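One way to set these is from a Command Prompt with setx, which persists the values for the current user; literal paths are used below because setx expands %VAR% references at command time, and setting the variables through the System Properties dialog works just as well:

:: Persist the environment variables for the current user (paths from above)
setx JAVA_HOME "C:\Java\jdk1.7.0_45"
setx STORM_HOME "C:\storm-0.9.1-incubating-SNAPSHOT-12182013"
:: %PATH% and %PATHEXT% expand to their current values when the command runs
setx PATH "C:\storm-0.9.1-incubating-SNAPSHOT-12182013\bin;C:\Java\jdk1.7.0_45\bin;C:\Python27;C:\Python27\Lib\site-packages;C:\Python27\Scripts;%PATH%"
setx PATHEXT "%PATHEXT%;.PY"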
Start Nimbus, Supervisor, and Storm UI Daemons
For each daemon, open a separate command prompt.
Nimbus
cd %STORM_HOME%
storm nimbus
Supervisor
cd %STORM_HOME%
storm supervisor
Storm UI
cd %STORM_HOME%
storm ui
Verify that Storm is running by opening http://localhost:8080/ in a browser.
Deploy the “Word Count” Topology
Either build the storm-starter project from source, or download a pre-built jar.
Deploy the Word Count topology to your local cluster with the storm jar command:
storm jar storm-starter-0.0.1-SNAPSHOT-jar-with-dependencies.jar storm.starter.WordCountTopology WordCount -c nimbus.host=localhost
I'm using CKAN as my open data portal and am installing the Archiver Extension by following the instructions at https://github.com/ckan/ckanext-archiver. I have installed celery as shown:
Successfully installed celery kombu kombu-sqlalchemy messytables flask anyjson amqplib xlrd python-magic chardet json-table-schema lxml Werkzeug
But I am unable to run it.
/usr/lib/ckan/default/src$ paster celeryd -c /etc/ckan/default
Command 'celeryd' not known (you may need to run setup.py egg_info)
Known commands:
create Create the file layout for a Python distribution
exe Run #! executable files
help Display help
make-config Install a package and create a fresh config file/directory
points Show information about entry points
post Run a request for the described application
request Run a request for the described application
serve Serve the described application
setup-app Setup an application, given a config file
My CKAN root directory: /usr/lib/ckan/default/src
Path to CKAN config file: /etc/ckan/default
Hope someone can help solve my issue. Thanks.
Sorry for the late answer, but you need to be in the /usr/lib/ckan/default/src/ckan directory in order to run the command.
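In other words, something like the following, reusing the config path from the question:

cd /usr/lib/ckan/default/src/ckan
paster celeryd -c /etc/ckan/default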