I have buildbot 0.8.6p1 configured. There is one master and one slave so far.
It is possible to configure several slaves:
c['slaves'] = [
    BuildSlave("eng-hwsim-n1", "123"),
    BuildSlave("eng-hwsim-n2", "123"),
]
It is also possible to add one or more slaves to a builder in the builders array:
c['builders'].append(
    BuilderConfig(name="runnightly-top",
                  slavenames=["eng-hwsim-n1", "eng-hwsim-n2"],
                  factory=fac_nightly_top,
                  builddir='../../runnightly-top',
                  slavebuilddir='runnightly-top'))
In this case, will buildbot run the same builder on all slaves or one of the slaves?
Is there a way to configure buildbot to run a builder on one of the slaves, whichever is available/least loaded/etc?
Thanks so much.
The builder will only build on one slave from the pool of slaves for any given build. If I understand correctly, this is the behavior you desire.
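By default buildbot simply picks one of the attached, idle slaves in the pool for each build. If you want more control over which one is chosen (for example a crude random or "least loaded" policy), BuilderConfig in 0.8.x accepts, if I recall correctly, a nextSlave callable that receives the builder and the list of currently available slave builders and returns the one to use. A minimal sketch building on your snippet (the random choice is only a placeholder for whatever selection rule you prefer):

import random
from buildbot.config import BuilderConfig

def pick_slave(builder, available_slaves):
    # available_slaves is the list of slave builders that are free right now;
    # replace random.choice with your own "least loaded" logic if you track
    # load somewhere else.
    return random.choice(available_slaves)

c['builders'].append(
    BuilderConfig(name="runnightly-top",
                  slavenames=["eng-hwsim-n1", "eng-hwsim-n2"],
                  factory=fac_nightly_top,
                  builddir='../../runnightly-top',
                  slavebuilddir='runnightly-top',
                  nextSlave=pick_slave))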
I have a question about Rundeck features. Is it possible to include conditions within job execution? As it is quite difficult to explain, I provide an example:
You have 2 redundant firewalls in your network. You implement a job 'job1' whose aim is to update the firewalls' configuration. Suppose the master is down; in that case you do not want to update the slave, because the slave would have to restart and for a short time there would be no firewall running at all. So, what I want to do is to check, before running the update, that none of my firewalls is out of service. If the master is down, then do not update the slave.
So, is it possible to involve multiple nodes within one job?
Thanks for helping!
Create a job which pings both firewalls; it will succeed only if both are up. Now create another job whose workflow includes this check job before the update job, and make the workflow proceed only if the check step succeeds. That should solve your problem.
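As an illustration, the check job's script step could be something along these lines; this is only a sketch, and the firewall host names, the plain ICMP ping test and the Linux-style ping options are all assumptions to adapt to your environment:

#!/usr/bin/env python
# Hypothetical Rundeck pre-check step: exit 0 only if both firewalls answer
# a ping, so the update step that follows in the workflow runs only when
# neither firewall is out of service.
import subprocess
import sys

FIREWALLS = ["fw-master.example.com", "fw-slave.example.com"]  # placeholders

def is_up(host):
    # one ICMP echo request with a 2-second timeout (Linux ping syntax)
    return subprocess.call(["ping", "-c", "1", "-W", "2", host]) == 0

down = [host for host in FIREWALLS if not is_up(host)]
if down:
    print("Firewall(s) not responding, aborting update: %s" % ", ".join(down))
    sys.exit(1)
print("Both firewalls are up, safe to proceed with the update.")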
We are putting together an architecture to support high availability for our Postgres 9.5 database. We have 1 master and 3 slaves replicating the master's data. When the master goes down, Slave 1 is promoted to be the new master, but Slave 2 and Slave 3 are still pointing to the previous master and not to the new master node.
Is there a way to make the slaves read from the new master dynamically, or does it require changing the configurations manually and restarting the slaves?
There's no short answer, but I'll try:
When the primary server fails, you'll promote one slave and reconfigure all the other slaves to target the new master. However, there's one scenario in which reconfiguring the other slaves might not be needed: if you are using "WAL archiving" and your archive is stored on a shared drive that survived the failure of the old primary. If the new primary continues to use the same shared store, you might not need to reconfigure the other slaves. Again, I've never tried this - you can try.
If your replication mechanism is based on "replication slots" (introduced in PostgreSQL 9.4), then you have to reconfigure all the slaves. In this case you'll actually have to rebuild replication on all the other slaves from scratch (as if they had never been slaves at all). Nevertheless, in my opinion "replication slots" are the better choice.
Regarding automation: You've asked whether it is possible to automatically reconfigure the other slaves, but one thing you didn't mention is whether you have any failover automation implemented. What I'm trying to say is that PostgreSQL itself will not automatically perform failover (promote one of the slaves when the master fails). At the very least you have to create the "trigger file" on the slave to be promoted, and you have to do this manually or by using another product (for example pgpool2). A minimal sketch of that promotion step follows.
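Just to make that step concrete, here is a trivial sketch of the manual promotion; the host name and trigger path are purely illustrative:

import subprocess

# Assumption: recovery.conf on the slave to be promoted contains a line like
#   trigger_file = '/tmp/postgresql.trigger.5432'
# Creating that file tells the standby to leave recovery and become the new
# primary ("pg_ctl promote" on that host achieves the same thing).
subprocess.check_call(["ssh", "slave1.example.com",
                       "touch /tmp/postgresql.trigger.5432"])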
If you use pgpool2, you can set up automatic slave reconfiguration by setting the follow_master_command value in pgpool.conf; a sketch of such a command is shown below.
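As an illustration only (this is not pgpool's own tooling), follow_master_command could point at a small script along these lines; every path, host name and placeholder below is an assumption to adapt to your setup, and the meaning of %h / %H should be checked against your pgpool version's documentation:

# Hypothetical script referenced from pgpool.conf, e.g.
#   follow_master_command = '/etc/pgpool/follow_master.py %h %H'
# with %h assumed to be the slave to re-point and %H the new primary host.
import os
import subprocess
import sys
import tempfile

slave_host, new_primary = sys.argv[1], sys.argv[2]
pgdata = "/var/lib/postgresql/9.5/main"   # assumption: adjust to your layout

# Build a recovery.conf that points the slave at the new primary.
conf = (
    "standby_mode = 'on'\n"
    "primary_conninfo = 'host=%s port=5432 user=replicator'\n" % new_primary
)
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write(conf)
    tmp_path = tmp.name

# Copy it over and restart the slave so it starts streaming from the new
# primary; key-based ssh access as the postgres user is assumed here.
subprocess.check_call(["scp", tmp_path,
                       "%s:%s/recovery.conf" % (slave_host, pgdata)])
subprocess.check_call(["ssh", slave_host,
                       "pg_ctl -D %s restart -m fast" % pgdata])
os.remove(tmp_path)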
Finally, I strongly recommend reading this tutorial - it'll make your life easier.
Edit:
I forgot to mention two things:
Automatically reconfiguring all the other slaves as soon as the new master is promoted might not be a good idea, especially if you have many slaves. It will put additional pressure on your new primary and on your network, so in some cases it is better to postpone this until night hours, for example. More on this in the tutorial mentioned above.
I wrote the tutorial.
As e4c5 commented, you can use repmgr to manage this type of task. I have tried repmgr and it worked without a problem.
I followed a tutorial for doing that, and here is the link:
http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool
I hope that by following this tutorial you can do what you want without any problem.
I have a production environment that consists of several (persistent and ad-hoc) EMR Spark clusters.
I would like to use one instance of spark-jobserver to manage the job JARs for this environment in general, and to be able to specify the intended master right when I POST /jobs, rather than permanently in the config file (via the master = "local[4]" configuration key).
Obviously I would prefer to have spark-jobserver running on a standalone machine, and not on any of the masters.
Is this somehow possible?
You can write a SparkMasterProvider:
https://github.com/spark-jobserver/spark-jobserver/blob/master/job-server/src/spark.jobserver/util/SparkMasterProvider.scala
A complex example is here https://github.com/spark-jobserver/jobserver-cassandra/blob/master/src/main/scala/spark.jobserver/masterLocators/dse/DseSparkMasterProvider.scala
I think all you have to do is write one that returns the value supplied in the job config as the Spark master; that way you can pass the master as part of the job config. A rough client-side sketch follows.
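For instance, once such a provider reads the master out of the submitted job configuration, the client side could look roughly like this; the host/port, appName, classPath, context and the exact config key your provider looks up (spark.master here) are all assumptions:

import requests  # third-party HTTP client

# spark-jobserver accepts a Typesafe-config style body on POST /jobs; the
# idea is that your custom SparkMasterProvider reads "spark.master" from this
# config instead of using the master hard-coded in the server's config file.
job_config = """
spark.master = "spark://emr-cluster-1.internal:7077"
input.path = "s3://my-bucket/some/data"
"""

resp = requests.post(
    "http://jobserver-host:8090/jobs",
    params={
        "appName": "my-uploaded-jar",        # name the JAR was uploaded under
        "classPath": "com.example.MyJob",    # your job's main class
        "context": "adhoc-context",          # optional, depends on your setup
    },
    data=job_config,
)
print(resp.status_code, resp.text)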
Is there any way to do the following:
I have 2 jobs, and one job has to trigger the second one on an offline node. Are there any plugins in Jenkins that can do this? I know that TeamCity has a way of achieving this, but I think that Jenkins is more restrictive.
When you configure your node, you can set Availability to "Take this slave on-line when in demand and off-line when idle".
Set Usage to "Leave this machine for tied jobs only".
Finally, configure the job to be executed only on that node.
This way, when the job goes to the queue and cannot execute (because the node is offline), Jenkins will try to bring this node online. After the job is finished, the node will go back offline.
This of course relies on the fact that Jenkins is configured to be able to start this node.
One instance will always be turned on, and the main job can run on it. I have created a job which checks the DB and, if there are no running instances recorded in the DB, prepares one node. A third job cleans up my environment after the tests have run.
I am planning to implement multiple slave agents in my env and a single master agent.
Can anybody show me an example of how to use this feature in UVM? And how do we start an item on a particular slave sequencer from the testcase?
This is pretty typical. Without details, the general outline is:
In build_phase:
Create and configure the agents just as you have described.
The master and slave agents will be configured as active.
A slave agent is typically a reactive agent that responds to stimulus from the DUT; in that case, sequence items for the slave driver are initiated in response to the DUT, so they aren't under direct testcase control.