Add instance in AWX - ansible-awx

I deployed Ansible AWX on k3s (managed with kubectl), but I have other servers that I would like to add as instances so they can run tasks in AWX.
I've already researched and looked in several places for where to add them, and I couldn't find it. Can someone help me?
Thanks in advance!
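One place to start digging, if it helps: AWX ships an awx-manage command that can list the instances and instance groups it currently knows about, and you can run it from inside the task pod. A minimal sketch, assuming the deployment lives in a namespace called awx; the pod and container names are guesses about a typical k3s install and will likely differ:

kubectl get pods -n awx                                  # find the AWX task pod (name is install-specific)
kubectl exec -it -n awx <awx-task-pod> -c awx-task -- \
    awx-manage list_instances                            # instances/instance groups AWX currently knows about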


Jenkins JobDSL - Found multiple extensions which provide method kubectlBuildWrapper with arguments []

I'm new to Jenkins and Groovy scripts, but I'm turning to the community for help because I couldn't find anything useful on the web so far.
We have a set of Job DSL Groovy scripts that I'm trying to learn, but I'm not a developer, so I'm stuck!
This is the error message that Jenkins returns:
Found multiple extensions which provide method kubectlBuildWrapper
with arguments
[applications_apt$_run_closure1$_closure5$_closure9$_closure40$_closure42$_closure44$_closure48#6e592d55]:
[org.jenkinsci.plugins.kubernetes.cli.KubectlBuildWrapper,
org.csanchez.jenkins.plugins.kubernetes.KubectlBuildWrapper]
I don't have a clue where to start fixing this.
We have two Jenkins installations. One of them runs in a VM (a normal deployment) and the second one runs on Kubernetes.
On the first installation this pipeline runs fine, but the second one gives me the error I've mentioned above.
So, if anyone could help, I would be very thankful.
Thank you!
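For what it's worth, the error means that two installed plugins on the Kubernetes Jenkins both contribute a kubectlBuildWrapper method to the generated Job DSL (the kubernetes-cli plugin and the kubernetes plugin), so the bare method name is ambiguous there; presumably only one of them is present on the VM Jenkins. One way to work around it without removing a plugin is to skip the generated method and write the wrapper explicitly with a configure block, naming the exact class you want. A sketch, assuming you want the kubernetes-cli variant; the job name, serverUrl, and credentialsId below are placeholders:

// Sketch only: job name, serverUrl and credentialsId are placeholders.
// Writes the wrapper from the kubernetes-cli plugin into the job XML by its
// full class name, bypassing the ambiguous kubectlBuildWrapper {} method.
job('example-job') {
    configure { project ->
        project / 'buildWrappers' << 'org.jenkinsci.plugins.kubernetes.cli.KubectlBuildWrapper' {
            serverUrl('https://kubernetes.default')
            credentialsId('my-kubeconfig')
        }
    }
}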

Tagging AMI on AWS Regions

I am trying to tag an AWS AMI that was shared with me by another team. The AMI shows up under "Private Images". I can't seem to tag it with Terraform, even though the whole environment is built with Terraform. Have you encountered an issue like this? Any tool would help; I also looked into Packer, but Packer does not seem to tag images that it did not create.
I tried a Python script and a bash script, but they become difficult to manage when you have six tags.
For example, in Python I have to write out each tag pair as
Key = "environment"
Value = "dev"
so this becomes difficult. Any suggestion would be appreciated.
You can only tag with Terraform during resource creation or modification; you can write Python code to do this instead.
I can help you if needed.
Please share the requirements in detail, with a screenshot.
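To illustrate the Python route: with boto3 all the tags can live in one dict and be applied in a single create_tags call, which keeps six (or more) tags manageable. A sketch with a placeholder region, AMI ID, and tag values; whether EC2 accepts tags on an image owned by another account is a separate question, this only shows the shape of the call:

# Sketch: the region, AMI ID, and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tags = {
    "environment": "dev",
    "team": "platform",
    "owner": "someone@example.com",
    # ...the remaining tags...
}

ec2.create_tags(
    Resources=["ami-0123456789abcdef0"],
    Tags=[{"Key": k, "Value": v} for k, v in tags.items()],
)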

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the external builder. I set up core.yaml and rebuilt the containers to use it. On "peer lifecycle chaincode install .tgz...", I get an error that the path to the scripts configured in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and I'm using the first-network setup. I dropped the part of byfn.sh that connects to the cli container so that I can do that part manually; the channel create, join, and update anchors steps succeed, and then the install fails. Specifically, it fails on /bin/detect, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external builder configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means they should run in the cli container. Otherwise, I've tried to walk the code to see how the Docker containers are created dynamically, and from what image, but I haven't been able to nail that down yet.
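For reference, the relevant piece of core.yaml is the externalBuilders list. The peer forks path/bin/detect (and then bin/build, bin/release) itself, so the configured path has to exist inside whatever container the peer process runs in, not just on the host or in the cli container; that would explain a fork/exec failure on bin/detect. A sketch with a hypothetical builder name and mount path:

# core.yaml sketch: the builder name and path below are placeholders.
# The path must be present inside the peer container and contain
# executable bin/detect, bin/build and bin/release scripts.
chaincode:
  externalBuilders:
    - name: my-external-builder
      path: /opt/external-builder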

Capistrano -- the task `staging:symlink' does not exist

I'm attempting to deploy to a server via Capistrano and I keep getting the error the task `staging:symlink' does not exist.
I've run cap deploy:setup and cap deploy successfully, getting the releases and shared directories created, but the above error always shows at the end, and I think it's stopping my code from getting moved to the root of the directory where it belongs.
I'm new to using Capistrano and I've Googled the issue, but I cannot find anything that helps. I can include my code and anything else; I just don't know what would be helpful to show, so let me know!
Thanks for any help you can provide!
The built-in symlink task for Capistrano 2.x is cap deploy:symlink.
staging:symlink is not a valid task unless you've defined it yourself. If you're not defining it, you are accidentally calling it somewhere in your config files (deploy.rb or one of your stage config files, if you're using multistage).
Additionally, deploy:symlink is called automatically as part of the deploy task, so you don't need to call it manually.
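If something in your setup really does need a staging:symlink task, one option (a Capistrano 2.x sketch, not something Capistrano provides) is to define it yourself in deploy.rb as a thin wrapper around the built-in task:

# deploy.rb sketch: only needed if something insists on calling staging:symlink.
# It simply delegates to the built-in deploy:symlink task.
namespace :staging do
  task :symlink do
    find_and_execute_task("deploy:symlink")
  end
end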

MapReduce programs using Eclipse in CDH4

I am very new to Java, Eclipse, and Hadoop, so pardon my mistake if my question seems too silly.
The question is:
I have a 3-node CDH4 cluster of RHEL5 on a cloud platform. The CDH4 setup has been completed and now I want to write some sample MapReduce programs to learn about it.
Here is my understanding of how to do it:
To write Java MapReduce programs I will have to install Eclipse on my main server, right? Which version of Eclipse should I go for?
And just installing Eclipse will not be enough; I will have to change some settings so that it can use my CDH cluster. What is needed to do this?
And last but not least, could you please suggest some sites where I can get more info about the same? Remember, I am just a beginner in all of this. :)
Thanks in advance...
pankaj
Pankaj, you can always visit the official page. Apart from that, you might find these links helpful:
http://blog.cloudera.com/blog/2013/08/how-to-use-eclipse-with-mapreduce-in-clouderas-quickstart-vm/
http://cloudfront.blogspot.in/2012/07/how-to-run-mapreduce-programs-using.html#.UgH8OWT08Vk
It is not mandatory to have Eclipse on the main server (by main server do you mean the master machine?). Any of the last three versions of Eclipse works perfectly fine; I don't know about earlier versions. You can either run your job through Eclipse directly, or you can write your job in Eclipse and export it as a jar. You can then copy this jar to your JobTracker machine and execute it there through the shell using the hadoop jar command. If you are running your job directly through Eclipse, you need to tell it the location of your NameNode and JobTracker machines through these properties:
import org.apache.hadoop.conf.Configuration;

// Point the client at the cluster instead of the default local configuration
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://NN_HOST:9000");   // NameNode
conf.set("mapred.job.tracker", "JT_HOST:9001");       // JobTracker
(Change the hostnames and ports as per your configuration).
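And for the jar route mentioned above, the submission on the JobTracker machine is a single command; the jar name, driver class, and HDFS paths below are placeholders:

hadoop jar my-mr-job.jar com.example.WordCountDriver /user/pankaj/input /user/pankaj/output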
One quick suggestion, though: you can always search for this kind of thing before posting a question. A lot of info is available on the net, and it is very easily accessible.
HTH