Switch job to an online node in Jenkins

I have Jenkins and a job in Jenkins where users select the "Node" (server), with a "Label Expression" (e.g. server1||server2). But if my server1 goes offline, I want my job to start on server2 automatically. Can anyone help me?
Thanks.

Jenkins' node labels are meant to be used the other way round. See Manage Jenkins → Manage Nodes → select a node → Configure → click the help icon to the right of the Labels field:
Labels (AKA tags) are used for grouping multiple slaves into one logical group.
So each of your servers (Server1, Server2) should have the same label assigned to it, let's say build. Define this label in the Label Expression of your project.
Select Ignore offline nodes under This build is parameterized → Node → Node eligibility.
That way, if a user selects an offline node, another node in the group will be taken instead (provided it is online).
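If you want to check up front which nodes carrying that label are currently online (for example from a script before triggering the build), here is a minimal sketch against the Jenkins JSON API; the base URL, the credentials and the build label are assumptions for illustration:

import requests

JENKINS_URL = "https://jenkins.example.com"  # assumption: your Jenkins base URL
AUTH = ("user", "api-token")                 # assumption: user + API token
LABEL = "build"                              # the shared label from the answer above

# /computer/api/json lists all nodes with their labels and offline state
resp = requests.get(
    JENKINS_URL + "/computer/api/json",
    params={"tree": "computer[displayName,offline,assignedLabels[name]]"},
    auth=AUTH,
)
resp.raise_for_status()
online = [
    c["displayName"]
    for c in resp.json()["computer"]
    if not c["offline"]
    and any(l.get("name") == LABEL for l in c.get("assignedLabels", []))
]
print("Online nodes with label '%s': %s" % (LABEL, online))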

Related

Rundeck ansible inventory: static instead of dynamic

I deployed Rundeck (rundeck/rundeck:4.2.0), importing and discovering my inventory using the Ansible Resource Model Source. I have 300 nodes, of which statistically ~150 are accessible/online; the rest are offline (IoT devices). All working fine.
My challenge is that when creating jobs I can assign only those nodes which are online, while I wanted to assign ALL nodes (including the offline ones) and keep retrying the job for the failed ones only. Only this way can I track the completeness of my deployment. Ideally I would love Rundeck to be intelligent enough to automatically deploy the job as soon as a node goes back online.
Any ideas/hints on how to achieve that?
Thanks,
The easiest way is to use the health checks feature (only available on PagerDuty Process Automation On-Prem, formerly "Rundeck Enterprise"); that way you can use a node filter that targets only "healthy" (up) nodes.
Using this approach (e.g. configuring a command health check against all nodes) you can dispatch your jobs only to "up" nodes out of the global set of nodes, by using .* as the node filter and !healthcheck:status: HEALTHY as the exclude node filter. If any "offline" node turns on again, the filter/exclude filter should pick it up automatically.
For the Ansible/Rundeck integration, it works using the following environment variable: ANSIBLE_HOST_KEY_CHECKING=False, or host_key_checking=false in the ansible.cfg file (in the [defaults] section).
That way you can see all Ansible hosts as Rundeck nodes, and your commands/jobs should be dispatched only to the online nodes; if any "offline" node changes its status, the filter should pick it up.
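If you want to verify which nodes a given filter would actually match (and therefore which nodes a job would be dispatched to), you can query the Rundeck resources API. A minimal sketch, assuming the base URL, API token, project name and API version shown here; the response shape follows Rundeck's resource-JSON format:

import requests

RUNDECK_URL = "https://rundeck.example.com"  # assumption: your Rundeck base URL
API_TOKEN = "XXXXXXXXXX"                     # assumption: a Rundeck API token
PROJECT = "iot-fleet"                        # assumption: your project name
NODE_FILTER = ".*"                           # the node filter discussed above

# List the project nodes matching a node filter expression
resp = requests.get(
    RUNDECK_URL + "/api/41/project/" + PROJECT + "/resources",
    headers={"X-Rundeck-Auth-Token": API_TOKEN, "Accept": "application/json"},
    params={"filter": NODE_FILTER},
)
resp.raise_for_status()
for name, attrs in resp.json().items():
    # with a command health check configured, its status shows up as a node attribute
    print(name, attrs.get("healthcheck:status", "n/a"))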

Deployment group search

Can deployment groups be searched for a particular machine? I need a way to find if a server is already part of a deployment group. I can't tell from the group names if they are applicable to the project.
You could use the REST API for Deployment Groups and look in the Machines property.
Or, if you have access, you can open the deployment groups navigation and look at the targets.
You may want to employ a more robust implementation of tags for the machines in your deployment groups, so you know you'll only deploy to machines in the group that are applicable.
You could simply use the REST API to list all deployment groups in a specific team project:
GET https://dev.azure.com/fabrikam/{project}/_apis/distributedtask/deploymentgroups?api-version=5.0-preview.1
Conversely, it is currently not possible to list all deployment groups for a particular machine.
As a workaround, you can add a tag to a machine in the deployment group configuration.
After that, specify the tag on the release's deployment group phase.
This will make the release choose all machines within the deployment group that match your tag.
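To answer the original question ("is this server already in some deployment group?") programmatically, you can combine the listing call above with each group's targets. A minimal sketch, where the organization, project, personal access token, API version and the targets sub-endpoint of the Deployment Groups REST API are assumptions for illustration:

import base64
import requests

ORG = "fabrikam"            # assumption: your organization
PROJECT = "MyProject"       # assumption: your team project
PAT = "xxxxxxxx"            # assumption: a personal access token
MACHINE = "web-server-01"   # the machine you are looking for

auth = base64.b64encode((":" + PAT).encode()).decode()
headers = {"Authorization": "Basic " + auth}
base = "https://dev.azure.com/%s/%s/_apis/distributedtask/deploymentgroups" % (ORG, PROJECT)

# 1. List all deployment groups in the project (the call shown above)
groups = requests.get(base, params={"api-version": "5.0-preview.1"}, headers=headers)
groups.raise_for_status()
for group in groups.json()["value"]:
    # 2. List the targets (machines) of each group and compare the agent names
    targets = requests.get(
        base + "/%d/targets" % group["id"],
        params={"api-version": "5.0-preview.1"},
        headers=headers,
    )
    targets.raise_for_status()
    names = [t["agent"]["name"] for t in targets.json()["value"]]
    if MACHINE in names:
        print("%s is a target of deployment group '%s'" % (MACHINE, group["name"]))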

Deploy to a specific machine in group

Often when trying to deploy to a group of machines, at least one of the machines in the group will fail for one reason or another (offline, application in use, etc.).
Is there a way to selectively deploy to a specific machine in a Deployment group, without putting each machine in its own group?
You can add a tag to a machine in the deployment group configuration.
After that, specify the tag on the release's deployment group phase.
This will make the release choose all machines within the deployment group that match your tag.
You can also filter machines through tags when configuring the deployment group phase.
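If you want to check which machines in a group actually carry a given tag (i.e. which ones a tagged deployment group phase would pick), here is a minimal sketch; the tags query parameter of the targets endpoint, along with the organization, project, group id and token, are assumptions:

import base64
import requests

ORG, PROJECT, PAT = "fabrikam", "MyProject", "xxxxxxxx"  # assumptions
GROUP_ID = 42   # assumption: the deployment group id
TAG = "web"     # the tag you added to the machines

auth = base64.b64encode((":" + PAT).encode()).decode()
url = ("https://dev.azure.com/%s/%s/_apis/distributedtask/deploymentgroups/%d/targets"
       % (ORG, PROJECT, GROUP_ID))
# List only the targets that carry the given tag
resp = requests.get(
    url,
    params={"tags": TAG, "api-version": "5.0-preview.1"},
    headers={"Authorization": "Basic " + auth},
)
resp.raise_for_status()
for target in resp.json()["value"]:
    print(target["agent"]["name"], target.get("tags", []))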

One or more placement constraints on the service are undefined on all nodes that are currently up

While trying to set up specific services to deploy to specific node types, I am getting this error from the Visual Studio publish dialog (it breaks when calling the new-servicefabricapplication PS command).
I am using the service manifest to define the placementConstraints like this:
<StatelessServiceType ServiceTypeName="VisualObjects2.WebServiceType">
  <PlacementConstraints>(nodeType==node2)</PlacementConstraints>
</StatelessServiceType>
How can I define these placement constraints on the nodes?
In the Azure portal, go to your SF cluster, select the node types, and for each one you can add a key-value list of placement constraints. There I put the key-value nodetype = node2. After this, the deployment went only to the nodes with this attribute.

Deployment in an IBM WebSphere 7 cluster with high-availability nodes

Environment:
Java EE web app
JDK: 1.6
AS: WebSphere Application Server 7
OS: Red Hat zLinux
I am not a WebSphere admin, and I have been asked to develop a way or a script to solve the issue below:
I have a cluster with three nodes, NodeA, NodeB and NodeC. My application runs on this cluster. I want to deploy my application on these nodes in such a way that I don't need to bring all of them down at once. These days the deployment is done this way: we come at night and stop all the servers at once from the console. Then we install the application on the main node (which is on the same machine as the deployment manager), and then we synchronize and bring all the servers back up one by one.
What I am asked to do is to upgrade the application, or install the new EAR file, without bringing everything down, as this is causing downtime for the application. Is there a way to achieve this? WAS 7 is a very mature product, so I am sure there must be a way to do it.
I looked at the documentation/tutorials; we can do something like "Update", where we select the application (from Applications > WebSphere enterprise applications), select Update, then select the radio buttons "Replace Entire Application" and "Local file system" and point to the new EAR file. But in that case the doc says that it will bring down all the servers as well when updating. It's the same as before: no online deployment.
I am a Java programmer, so I thought of using what tools I have to solve this.
Tell me if this can be an issue:
1) We bring down NodeA.
2) We remove NodeA from the cluster (by pressing the remove node button or using removeNode.sh).
3) Install the new EAR on NodeA (can we do this in the same admin console, or through a shell script or Jython, or maybe like a standalone server?).
4) We then start it up again and add it back to the cluster.
Now we have NodeA with the new application while NodeB and NodeC still have the old application version.
Then we bring down NodeB,
remove NodeB from the cluster,
install the application on NodeB,
start it up again,
and add it back to the cluster.
Now we have two nodes with the new application and NodeC with the old one.
We try the same process for NodeC.
Will this work? Has anyone tried this? What issues can you think of that could happen?
I will really appreciate any feedback. I am sure there are experienced people on this forum. I don't think this is a rare issue; I believe this is something any organization with high-availability requirements would want.
Thanks for any help in advance.
Syed...
This is a possible duplicate of How can i do zero down time deployment on cluster environment?. Here is essentially my answer from that question:
After updating the application, you can utilize the "Rollout Update" feature. Rather than saving and synchronizing the nodes after updating, you can use this feature which automatically performs the following tasks to enable the changes to propagate to all deployment targets while maintaining high availability (assuming you have a horizontal cluster, such that cluster members exist on multiple nodes, which it sounds like you do):
Save session changes to the master configuration
For each node in the cluster (one at a time, to enable continuous availability):
Stop the cluster members on the node
Synchronize the node
Start the application servers (which automatically starts the application)
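The same stop / synchronize / start sequence can also be scripted with wsadmin instead of clicking through the console. A minimal Jython sketch, assuming hypothetical application, node and cluster-member names, to be run against the deployment manager with wsadmin.sh -lang jython -f rollout.py:

# rollout.py -- ripple an application update through the cluster one node at a time
APP_NAME = "MyApp"                    # assumption: your application name
EAR_PATH = "/tmp/MyApp.ear"           # assumption: path to the new EAR
NODES = [("NodeA", ["memberA1"]),     # assumption: node -> its cluster members
         ("NodeB", ["memberB1"]),
         ("NodeC", ["memberC1"])]

# 1. Update the application in the master configuration (no node sync yet)
AdminApp.update(APP_NAME, 'app', '[-operation update -contents ' + EAR_PATH + ']')
AdminConfig.save()

# 2. For each node: stop its members, sync the node, start the members again
for node, members in NODES:
    for member in members:
        AdminControl.stopServer(member, node)
    sync = AdminControl.completeObjectName('type=NodeSync,node=' + node + ',*')
    AdminControl.invoke(sync, 'sync')
    for member in members:
        AdminControl.startServer(member, node)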
Alternatively, you can follow this procedure:
1) Stop all node agents except the one on Node A.
2) Comment out or disable Node A in the load balancer or plugin (so traffic does not reach the node).
3) Deploy the application.
4) Changes will be synchronized only on Node A, as its node agent is up.
5) Uncomment/enable Node A in the plugin / load balancer.
6) Comment out/disable Node B in the plugin / load balancer to stop incoming traffic to the node.
7) Start the node agent of Node B so it synchronizes the file changes on the node. The EAR application will stop and start after synchronization.
8) Uncomment/enable Node B in the plugin / load balancer.
9) Repeat steps 6, 7 and 8 for all the remaining nodes.
Regards,
Laique Ahmed