Python winrm node definition - rundeck

Currently I'm using the Rundeck WinRM Overthere plugin for Windows connections and command executions, with this node definition template:
<node name="XXXXXX"
hostname="1.1.1.1"
osFamily="Windows"
osName="Microsoft Windows Server 2016"
osArch="amd64"
node-executor="overthere-winrm"
file-copier="overthere-winrm"
winrm-cert-trust="all"
winrm-auth-type="ntlm"
winrm-protocol="https"
winrm-cmd="PowerShell"
winrm-password-storage-path="keys/project/TEST/XXXXX.password"/>
Note the lines above with:
node-executor="overthere-winrm"
file-copier="overthere-winrm"
I'm trying to set up the py-winrm plugin in a similar way to the WinRM Overthere plugin, but I don't know whether it's possible to set the pywinrm node executor and file copier in the node definition when using the pywinrm plugin.
Question
Can I set up the pywinrm plugin with the node-executor and file-copier attributes in the node definition?
This setup would let me use SSH globally on a project and pywinrm on the Windows nodes.
Thanks in advance

Sure, it is defined as a node attribute just like the old Overthere WinRM plugin; take a look at the Rundeck documentation.
In the Pywinrm plugin case:
node-executor="WinRMPython"
file-copier="WinRMcpPython"

Related

Wildfly 26.1.3 Domain Batch JBeret WFLYCTL0030: No resource definition is registered for address

I am trying to run JBeret Batch on a Wildfly Domain Cluster locally, but I keep getting the error
"WFLYCTL0030: No resource definition is registered for address [ (\"deployment\" => \"ExampleJob.war\"), (\"subsystem\" => \"batch-jberet\") ]"
To host the cluster I tried using the default configuration. I am using Wildfly 26.1.3.
To run my setup I use these commands on my Windows machine to start the master and the slave:
.\bin\domain.bat --host-config=host-master.xml -Djboss.domain.base.dir=domain1
and
.\bin\domain.bat --host-config=host-slave.xml -Djboss.domain.base.dir=host1 -Djboss.domain.master.address=127.0.0.1 -Djboss.management.http.port=9991
After that I deploy a batch application, try to run it in the admin panel, and get the error.
I also tried using the CLI, which didn't change anything.
I also tried running it without the base.dir config, so both hosts use the same folder, but that did not change anything either.
I also tried different JDKs (I am now using JDK 11).
To rule out my own application I tried this job from JBeret directly, as well as other example jobs like csv2json, and I keep getting the same error.
In standalone mode the example jobs work.
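One thing worth checking for this error: in domain mode the batch-jberet subsystem must be present in the profile your server group uses, otherwise no resource definition is registered at that address. A quick diagnostic sketch from the domain CLI (the profile name full is an assumption; substitute the profile your server group actually references):
.\bin\jboss-cli.bat --connect
# inside the CLI: does the profile contain the subsystem at all?
/profile=full/subsystem=batch-jberet:read-resource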

How do you configure simple-subscriptions to run using Postgraphile CLI?

I would like to add subscriptions to a React project I have, which currently references GraphQL using the port 5000 endpoint created by running postgraphile. I start this via the command line using a globally installed postgraphile (via npm install -g postgraphile). This works fine.
Via the documentation (https://www.graphile.org/postgraphile/subscriptions/) I see that I should be able to enable subscriptions via the CLI:
https://www.graphile.org/postgraphile/subscriptions/#enabling-with-the-cli-1
However, when I run this I get the error:
"Error: Cannot find module '--simple-subscriptions'"
I'm not sure if it's something to do with pg-pubsub. As I installed postgraphile globally, I did the same with pg-pubsub. Therefore, rather than:
postgraphile --plugins @graphile/pg-pubsub --subscriptions --simple-subscriptions etc
I have:
postgraphile --plugins pg-pubsub --subscriptions --simple-subscriptions etc
However, I know it picks up the plugin OK, because running it with "--append-plugins MySubscriptionPlugin.js" rather than simple-subscriptions starts the server with "(subscriptions enabled)" listed.
Has anyone managed to get simple subscriptions running via CLI?
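For what it's worth, the docs linked above keep the scoped package name @graphile/pg-pubsub in the --plugins flag even when everything is installed globally, so a sketch of the documented invocation would be (the connection string is a placeholder):
npm install -g postgraphile @graphile/pg-pubsub
postgraphile --plugins @graphile/pg-pubsub --subscriptions --simple-subscriptions -c postgres://user:pass@localhost:5432/mydb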

Syndesis (Fuse Online) integration build failed for unknown host "repo1.maven.org"

We installed Fuse Online 7.4 on OpenShift 3.11. We created an integration containing an OpenApiProvider connection and an SQL connection.
When we publish the integration, the build fails with the following error:
"repo1.maven.org: Name or service not known: Unknown host repo1.maven.org: Name or service not known"
OpenShift is installed behind an enterprise HTTP proxy.
The image registry.access.redhat.com/fuse7/fuse-ignite-s2i is pulled correctly, since Docker is configured with the proxy.
The syndesis-server DeploymentConfig has been set with the proxy environment variables.
I suppose that, since the BuildConfig for the integration is created dynamically, it is not possible to inject the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY env variables into the build pod.
We read https://docs.openshift.com/container-platform/3.11/install_config/http_proxies.html#s2i-builds but since we don't have rights to modify the S2I image we cannot proceed.
Is there any way to provide proxy information during the Fuse Online integration build?
In the end we succeeded in injecting the HTTP proxy environment variables into the dynamically created build pods.
We modified the syndesis-server-config config map, adding the proxy variables to the mavenOptions key like this:
mavenOptions: "-XX:+UseG1GC -XX:+UseStringDeduplication -Xmx310m -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts="
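For anyone reproducing this, a sketch of applying the edit (the namespace and the proxy host/port values here are hypothetical placeholders, and the exact location of the mavenOptions key inside the config map may vary between Fuse Online versions):
$ oc edit configmap syndesis-server-config -n syndesis
# then set, for example:
#   mavenOptions: "-XX:+UseG1GC -XX:+UseStringDeduplication -Xmx310m -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 ..."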
Thanks for the support.
Let me know if you have any other ideas for resolving the issue.
Can you check the DNS of your network connection? Not sure why, but sometimes I have to use one of the "reliable" DNS servers on my machine (like Google's 8.8.8.8) to make sure repo1.maven.org is reachable.
You can check whether this is the problem by trying a simple:
$ ping repo1.maven.org
If that doesn't work, you have to check your DNS.
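To compare what your configured resolver returns against a public one (the 8.8.8.8 mentioned above), you can also try:
$ nslookup repo1.maven.org
$ nslookup repo1.maven.org 8.8.8.8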

Fabric8/JBoss Fuse SSH container creation

I'm trying to add a new SSH container to an existing fabric, and it was successfully created using this command:
fabric:container-create-ssh --proxy-uri http://"root-container":8181/maven/download/ --jvm-opts "-Xms1024m -XX:MaxPermSize=1024m -Xmx2014m -Djavax.net.debug=ssl" --path /app/testing/ --host "testIP" --private-key ~/.ssh/id_rsa --profile default --profile anotherprofile --resolver localhostname --zookeeper-password "zookeepr pass" testing-container
The problem is that once I created this new container, all the existing SSH containers changed their Maven download/upload proxy to the new container's IP,
so instead of using http://"currentroot":8181/maven/download/ they now use http://testIP:8181/maven/download/.
I have tried repeatedly to change the Maven proxy from the "root-container" fabric profile and the default profile, but still couldn't find a solution.
Is there a missing step I should take so that I can add a new SSH container without updating the existing containers' Maven repo?
It depends on what anotherprofile is.
http://host:port/maven/download/ URIs are registered in the Zookeeper registry that's used by the fabric environment.
fabric-maven-proxy, a feature declared in the fabric profile, is the feature responsible for starting a Maven proxy inside any container that has this feature/profile installed. Please check whether your new container has this feature installed. Maybe anotherprofile has the fabric profile set as a parent?
How containers use remote repositories
Each container, when provisioned by the fabric-agent, uses the io.fabric8.agent PID configuration (OSGi Configuration Admin), and the relevant property is org.ops4j.pax.url.mvn.repositories. It contains the list of remote repositories that are searched for artifacts to install in the container.
But there's some dynamism involved too. The fabric agent always searches the Zookeeper registry and finds the URIs registered by other containers that run the above-mentioned feature (fabric-maven-proxy). All such discovered URIs are prepended to the list found in the org.ops4j.pax.url.mvn.repositories property.
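For illustration, the static part of that property looks something like this in the io.fabric8.agent PID (the repository URLs are typical defaults, not necessarily what your fabric uses; url@id=name is the pax-url-aether repository syntax):
org.ops4j.pax.url.mvn.repositories = \
    http://repo1.maven.org/maven2@id=central, \
    https://repository.jboss.org/nexus/content/groups/public@id=jboss-public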
How to check maven problem in logs
If you add the karaf profile to a container, you'll have the logging configuration available in the org.ops4j.pax.logging PID - you can conveniently configure it in hawtio. By default, there's a commented section like this:
# help with identification of maven-related problems with fabric-maven
#log4j.logger.org.eclipse.aether = TRACE
#log4j.logger.org.apache.http.headers = DEBUG
#log4j.logger.io.fabric8.maven.util = TRACE
#log4j.logger.io.fabric8.maven.url = TRACE
#log4j.logger.io.fabric8.agent.download = DEBUG
You can uncomment these to see (much) more information about how maven repositories are used.
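If you prefer the shell to hawtio, the same kind of change can be applied to a profile with profile-edit; a sketch (the target profile here is karaf, following the note above; adjust to wherever your logging PID is configured):
fabric:profile-edit --pid org.ops4j.pax.logging/log4j.logger.org.eclipse.aether=TRACE karaf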

Installing Kubernetes on CoreOS with rkt and an automated script

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic, and I have some questions.
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
thanks!
Update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2. I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it keeps repeating the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
So I was guessing that it's stuck in that phase because I didn't define the etcd-related TLS certificates in the controller script.
On my MacBook Pro laptop I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
So when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
So I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
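So, reusing the domain from your update, the two variables would look something like this (both the hostname and the IP here are hypothetical examples):
export CONTROLLER_ENDPOINT=https://kube-controller.tux-in.com   # may be a DNS name
export ADVERTISE_IP=192.168.1.20                                # must be an IP address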
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
These scripts include/start the rkt API service. As you can see below, it also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service