I want to update the project settings for an existing Rundeck project, for example to add resources/nodes to the project using the CLI. How do I do that?
Solution:
Below is the Rundeck CLI command to add a resource file in .xml format to an existing Rundeck project:
$ rd projects configure set -p MG-Test-CLI -- \
  --resources.source.1.config.file="/home/rundeck/iidas/resources.xml" \
  --resources.source.1.config.generateFileAutomatically=true \
  --resources.source.1.config.includeServerNode=true \
  --resources.source.1.type=file
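You can read the configuration back afterwards to verify that the node source was saved (using the same project as above):
rd projects configure get -p MG-Test-CLI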
Here are more useful Rundeck CLI project commands:
# create project
rd projects create -p project1
# get project configuration
rd projects configure get -p project1
# Configure nodes from remote URL (GitLab)
rd projects configure update -p project1 -- \
--resources.source.1.type=url \
--resources.source.1.config.url='https://git.i.example.com/api/v4/projects/3/repository/files/project1%2Fdev%2Fnodes.json/raw?ref=master&private_token=1234567890' \
--resources.source.1.config.timeout=10 \
--resources.source.1.config.cache=false
See also the Rundeck CLI documentation and the project configuration parameters.
Related
I use Kafka and Kafka Connect (image: confluentinc/cp-kafka-connect).
When you use Kafka in a Docker container, if you want to operate Kafka you have to go into the container (like 'docker exec -it kafka' or 'docker exec -it kafka-connect', which is another thing I want to ask about), right?
I tried putting some connectors (JDBC connector, MySQL connector) into the kafka-connect container, but it didn't work.
So my question is:
After docker-compose up, if I want to start Connect with some connectors ('./bin/connect-distributed.sh ./etc/kafka/connect-distributed.properties'), which container do I have to go into?
And if I set a plugin path, where should I write it (kafka? kafka-connect?)
Sorry if this is difficult to read.
No, you don't need to exec anywhere unless you cannot download Kafka on your host machine to get the CLI scripts. But you'd only exec for kafka-topics, the console producer/consumer, kafka-consumer-groups, etc., not any of the Connect scripts.
The Connect container automatically runs the distributed script, and you simply provide CONNECT_PLUGIN_PATH as an environment variable pointing to any directory in the container you want to use for the plugins (I like /opt/connectors if I mount a volume, but that's not where confluent-hub installs to for that image). That variable doesn't do anything for the broker image, only Connect.
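For example, a minimal sketch with plain docker run (the host path, container paths, and container name are illustrative assumptions; the image also needs its usual CONNECT_* settings such as CONNECT_BOOTSTRAP_SERVERS, omitted here for brevity):
# Mount a host directory of connector JARs and point the plugin path at it
docker run -d --name kafka-connect \
  -p 8083:8083 \
  -v /path/on/host/connectors:/opt/connectors \
  -e CONNECT_PLUGIN_PATH=/opt/connectors,/usr/share/java \
  confluentinc/cp-kafka-connect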
Related: How to install connectors to the Docker image of Apache Kafka Connect
If your requirement is to stand up a Kafka Connect server, you can use the basic guide published by Confluent, "Build Your Own Apache Kafka® Demos".
Basically, you need to execute the following instructions:
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one
git checkout 7.1.1-post
docker-compose up -d
This brings up Control Center at http://localhost:9021
If you need to install a connector, you can go to https://www.confluent.io/hub and select your specific connector.
Then, you can create your own Docker image of a specific Kafka Connect server.
1.- Write a Dockerfile.
vim Dockerfile
2.- Add a connector (for example JDBC) from Confluent Hub:
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.5.0
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
    | tar -xzf - -C /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib \
      --strip-components=1 mysql-connector-java-${MYSQL_DRIVER_VERSION}/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
3.- Build the docker image.
docker build . -t my-kafka-connect-jdbc:1.0.0
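Optionally, you can check that the connector files landed in the image (the path below is the one used in the Dockerfile above):
docker run --rm --entrypoint ls my-kafka-connect-jdbc:1.0.0 /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib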
4.- Then, edit your docker-compose.yml and change line 57
from:
image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
to:
image: my-kafka-connect-jdbc:1.0.0
5.- Finally, stop and start your Confluent Platform local environment:
docker-compose down
docker-compose up
Verify your containers are running:
docker ps
Test your Connect server:
curl --location --request GET 'http://localhost:8083/connectors'
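Once the Connect server responds, a connector instance is created by POSTing its configuration to the same REST API. A sketch with placeholder connection settings (the name, database URL, and credentials are illustrative, not a working config):
curl -X POST -H 'Content-Type: application/json' http://localhost:8083/connectors \
  --data '{"name": "my-jdbc-source", "config": {"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "connection.url": "jdbc:mysql://mysql:3306/mydb", "connection.user": "user", "connection.password": "password", "mode": "incrementing", "incrementing.column.name": "id", "topic.prefix": "mysql-"}}'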
We have a requirement to access a file hosted in our private GitHub repo from our Azure Databricks notebook.
Currently we are doing it using a curl command with the Personal Access Token of a user.
curl -H 'Authorization: token INSERTACCESSTOKENHERE' \
  -H 'Accept: application/vnd.github.v3.raw' \
  -O -L https://api.github.com/repos/*owner*/*repo*/contents/*path*
Is there a way we can avoid using a PAT and use deploy keys or something similar instead?
Since summer 2021, Databricks has offered Git repos integration.
More info here: https://learn.microsoft.com/en-us/azure/databricks/repos
If you add your file (Excel, JSON, etc.) to the repo, then you can use a relative path to access and read it,
e.g. pd.read_excel("./test_data.xlsx")
Be aware that you need a cluster with Databricks Runtime 8.4+ (or 9.1+?).
You can also check what your current working directory is by executing os.getcwd().
If you have correctly integrated the repo, the result should be something like:
/Workspace/Repos/george@myemail.com/REPO_FOLDER/analysis
otherwise it will be something like /databricks/driver.
Integrate Git and Azure Databricks.
This documentation shows how to integrate Git and Azure Databricks:
Step 1: Get the raw URL of the file.
Step 2: Use wget to access the file:
wget https://raw.githubusercontent.com/githubtraining/hellogitworld/master/resources/labels.properties
For a project, I have created a project in the Coverity server and 2 streams, for Java and CPP, in that project.
I'm running Coverity for the project in Jenkins, and the Coverity report is appended to a mail template.
I also want to give a link to the project in the coverity server.
Like http://192.168.1.20:8081/defects/index.htm?projectId=10068.
I found out the project is listed only after the Coverity run finishes; only then can I see the project and its project ID on the server.
If I get the project ID, I can create the project link.
I'm running the command below in a script to export the report to a CSV file by passing the project name:
/opt/coverity/cov-sa-linux64-5.5.3/bin/cov-manage-im --mode defects --show --action Undecided --project Jenkins_Week34_Coverity --host 192.168.1.20 --user admin --password admin123 --port 8081 --fields cid,file >/opt/cov/curr.csv
In a similar way, is there any way to get the project ID by passing the project name?
Or do we get the project ID while committing the report to the server?
It's possible to craft a link that uses search parameters. Using your values as an example, the link below should work for the email:
http://192.168.1.20/query/defects.htm?project=Jenkins_Week34_Coverity
Note that the project name is case sensitive.
Hi, I found the answer:
The API lists all project names along with the projectKey (project ID):
curl -u username:pw https://your-coverity-server/api/v2/projects
Or you can get specific project info:
curl -u username:pw https://your-coverity-server/api/v2/projects/project-name
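If you want just the ID for one project name, you can filter that JSON response (a sketch assuming jq is installed and that the response exposes projectName/projectKey fields as in the v2 API):
curl -s -u username:pw https://your-coverity-server/api/v2/projects | jq '.projects[] | select(.projectName=="Jenkins_Week34_Coverity") | .projectKey'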
I cannot understand the concept of Docker. I am trying to install this component (graphite rendering graphs from influxdb):
https://github.com/vimeo/graphite-api-influxdb-docker
This is my first time facing Docker, and it is important that I deploy graphite+influxdb from that link by tonight.
The question is: do I need to search GitHub for graphite and influxdb, install them, and after that make them work under Docker?
What is Docker for, and how do I quickly deploy this project?
As I understood, I need to do the following steps from the GitHub link:
#cd /root
#yum install docker
#docker pull vimeo/graphite-api-influxdb
#git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
#cd graphite-api-influxdb-docker
#ls
Dockerfile graphite-api.sh graphite-api.yaml LICENSE NOTICE README.md
#vi graphite-api.yaml (change <host> to localhost)
#docker build .
#docker run -p 8000:8000 <image-id> (is it correct that I set <image-id> here to vimeo/graphite-api-influxdb?)
I feel that I am thinking in the wrong direction, and I hope a few words about what you think will help me a little.
First, you need to clone the GitHub repository:
git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
Second, you can add your own graphite-api.yaml (if you want).
Build it:
docker build .
If you need more information about how to build a Docker image from a Dockerfile, read the "Building an image from a Dockerfile" section at this link.
You can add a name with the -t option (and use it instead of the ID in the next step).
And, finally, run the container:
docker run -p 8000:8000 [ID]
[ID] is provided to you when you build the Docker image (it is explained in the link).
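For example (the tag name graphite-api is only an illustration):
docker build -t graphite-api .
docker run -p 8000:8000 graphite-api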
I hope my answer will help you.
I've got a project that I want to build with Jenkins.
The project is hosted in a private GitHub repo, and I've put the SSH public key of my user "deploy" into GitHub.
The project gets checked out fine thanks to the deploy credential in the Jenkins Git plugin section of the build config.
But a vendor lib, hosted as a private repo in the same GitHub organisation, is loaded via a build step command:
php composer.phar install -o --prefer-dist --no-dev
I've installed the Jenkins Git plugin in order to check out the main repo from GitHub via a private SSH key.
But when Composer tries to check out the sub-project, I get:
Failed to clone the git@github.com:Organisation/Repo.git repository, try running in interactive mode so that you can enter your GitHub credentials
I've tried to get the Composer command to run as a different user, without success, with things like:
su - deploy -c 'php composer.phar install -o --prefer-dist --no-dev'
That looks weird anyway. I'd like to figure out the proper way of having Composer do its job. Thoughts?
Jenkins actually runs the shell commands as the "jenkins" user.
That means "jenkins" needs access to GitHub.
Then all git@github.com:Organisation/Repo.git URLs will work without additional credentials.
Here is how to grant Jenkins access to GitHub over SSH:
# Login as the jenkins user and specify the shell explicitly,
# since the default shell is /bin/false for most
# jenkins installations.
sudo su jenkins -s /bin/bash
ssh-keygen -t rsa -C "your_email@example.com"
# Copy ~/.ssh/id_rsa.pub into your GitHub SSH keys
# Allow adding the SSH host key to your known_hosts
ssh -T git@github.com
# Exit from su
exit
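Once the key is registered, you can check that the jenkins user can reach the private repo before re-running the build (using the same repository as in the error message):
sudo su jenkins -s /bin/bash -c 'git ls-remote git@github.com:Organisation/Repo.git'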
Inspired by: Managing SSH keys within Jenkins for Git