I have set up an ESB cluster (1x management node and 2x worker nodes) using JDBC connections to MS SQL databases for the local and remotely mounted config and governance registries.
Our .car file contains some WS-Security policy artifacts which go to the config registry. When I deploy it to the management node it deploys OK. I have SVN deployment synchronization set up for the cluster, and when a worker picks up the .car it starts to deploy but fails while loading the policy files into the config registry: it tries to create a duplicate of the policy in the shared config registry and fails. Of course that is correct behaviour, but how should I deploy these 'shared' artifacts when a .car file is distributed by SVN? I need to be able to control the deployment properly. The only way I can see is via Dev Studio, which is terrible for our change-management practice.
Thanks for your help.
I can recommend multiple solutions; you can decide which one suits you.
Since you have only two worker nodes, you can get rid of (disable) deployment synchronization and deploy the car files to all the nodes yourself. I believe you have some automated process, so it won't be a problem to deploy to all nodes. While doing so, modify your project to bundle the policies into one car file and the services into another. When deploying, deploy the policies car only to the management node and the services car to all nodes, e.g. with a script like the sketch below.
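A minimal sketch of such a deployment script, assuming dep-sync is disabled on every node, a default Carbon layout, and hypothetical host names and paths:

    # Hypothetical hosts and install path; adjust to your environment.
    CARBON_APPS=/opt/wso2esb/repository/deployment/server/carbonapps

    # The policies car goes to the management node only.
    scp policies.car "user@mgt-node:$CARBON_APPS/"

    # The services car goes to every node.
    for host in mgt-node worker1 worker2; do
        scp services.car "user@$host:$CARBON_APPS/"
    done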
The second option is to add the policies to the local registry, i.e. not the config registry and not the governance registry. Then, when you deploy the car to the management node, it will add the policies to the management node's local registry. When the car file is dep-synced, the worker nodes will deploy it and add the policies to their own local registries. This avoids the worker nodes trying to add the policies to the same shared location.
By going through the question, I got the impression that you use external databases for the local registry too, but that is not necessary. You can use the internal H2 database for the local registry. H2 databases sometimes get corrupted; if that happens, all you have to do is delete the H2 database and restart the server with the -Dsetup option, as sketched below. Having an external DB is fine, but it's overkill.
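For example, a sketch assuming a default Carbon installation layout (CARBON_HOME is the ESB installation directory):

    # Remove the corrupted embedded H2 database...
    rm "$CARBON_HOME/repository/database/WSO2CARBON_DB.h2.db"
    # ...and let the server recreate and repopulate it on startup.
    sh "$CARBON_HOME/bin/wso2server.sh" -Dsetup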
I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count deployment property when deploying the stream. I am not sure what you mean by 'agents' here.
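For example, from the Data Flow shell (a sketch; the stream and app names are hypothetical, and depending on your SCDF version the property prefix is deployer.<app>. or app.<app>.):

    dataflow:> stream create --name ticktock --definition "time | log"
    dataflow:> stream deploy --name ticktock --properties "deployer.log.count=2"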
I have configured a SOA cluster with one admin node and two managed nodes, all on three different machines. Once I deploy a BPEL to one managed node it automatically deploys to the other managed nodes as well (the default behavior). In SOA Enterprise Manager the deployed BPELs can be viewed under [SOA -> managed node -> Default -> ...], the same place where we deploy new BPELs. I accidentally undeployed all BPELs (you can do this by right-clicking a managed node and choosing the un-deploy option).
Now I'm having a hard time getting back to the previous state: how do I deploy all those projects again to a specific managed node? I tried restarting the node hoping it would sync again, yet the managed server went to the "admin" state (not the OK state).
Is there anything that needs to be done?
Thanks, Hemal
You'll need to start the server from the command line; it will work.
For managing 'managed servers' from EM or the WLS console, there's one additional step needed during the installation process.
Please modify the nodemanager.properties of WLS and set the property StartScriptEnabled=true.
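The relevant line, for reference (the file usually lives in the Node Manager home, e.g. WL_HOME/common/nodemanager/nodemanager.properties):

    # nodemanager.properties
    StartScriptEnabled=true

Restart the Node Manager afterwards for the change to take effect.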
http://download.oracle.com/docs/cd/E12839_01/core.1111/e10105/start.htm#CIHBACFI
I have a (RedHat) Puppet Master server, with Puppet Master's configuration files in /etc/puppet.
I've placed the entire contents of /etc/puppet into source control and would like my CI server (TeamCity on Windows) to be able to deploy changes to the Puppet Master server.
How can I accomplish this?
My idea is to use scp, but copying to /etc/puppet would require sudo privileges, and I would like to keep the setup simple.
If there are any alternative or better ways of deploying puppet master configuration files, those answers would also be helpful.
It's unlikely that the whole /etc/puppet should be subjected to CI.
It might be more appropriate to move the directories referenced by your $manifestdir and $modulepath settings outside that tree and make some CI client user their owner. Just be careful to ensure read privileges for the puppet user.
This way, you could rely on SSH without too much of a security hole (though opening up your manifests for writing is always risky), and avoid the need to make the master configuration writable by a non-root user.
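A sketch of what that could look like; the /srv/puppet paths and the deploy user are hypothetical:

    # /etc/puppet/puppet.conf on the master
    [master]
    manifestdir = /srv/puppet/manifests
    modulepath  = /srv/puppet/modules

    # On the CI side, sync as the unprivileged owner of /srv/puppet:
    rsync -az --delete manifests/ deploy@puppetmaster:/srv/puppet/manifests/
    rsync -az --delete modules/ deploy@puppetmaster:/srv/puppet/modules/

Just make sure /srv/puppet remains readable by the puppet user after each sync.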
I'm using Amazon Web Services to create an autoscaling group of application instances behind an Elastic Load Balancer. I'm using a CloudFormation template to create the autoscaling group + load balancer and have been using Ansible to configure other instances.
I'm having trouble wrapping my head around how to design things such that when new autoscaling instances come up, they can automatically be provisioned by Ansible (that is, without me needing to find out the new instance's hostname and run Ansible for it). I've looked into Ansible's ansible-pull feature but I'm not quite sure I understand how to use it. It requires a central git repository which it pulls from, but how do you deal with sensitive information which you wouldn't want to commit?
Also, the current way I'm using Ansible with AWS is to create the stack using a CloudFormation template, get the hostnames as output from the stack, and then generate a hosts file for Ansible to use. This doesn't feel quite right; is there a "best practice" for this?
Yes. Another way is to simply run your playbooks locally once the instance starts. For example, you can create an EC2 AMI for your deployment whose rc.local file (Linux) calls ansible-playbook -i <inventory-only-with-localhost-file> <your-playbook>.yml. rc.local is almost the last script run at startup.
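A minimal sketch of such an rc.local; the inventory and playbook paths are hypothetical:

    #!/bin/sh
    # Apply the baked-in playbook to this machine only.
    /usr/bin/ansible-playbook \
        -i /etc/ansible/local-inventory \
        --connection=local \
        /opt/provision/site.yml >> /var/log/ansible-boot.log 2>&1
    exit 0

where /etc/ansible/local-inventory is a one-line file containing just localhost.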
You could just store that sensitive information in your EC2 AMI, but this is a very wide topic and really depends on what kind of sensitive information it is. (You can also use private git repositories to store sensitive data).
If, for example, your playbooks get updated regularly, you can create a cron entry in your AMI that runs every so often and actually runs your playbook, so your instance configuration is always up to date. This avoids having to "push" from a remote workstation.
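For instance, a sketch (the schedule and paths are assumptions):

    # /etc/cron.d/ansible-local: re-apply the playbook every 30 minutes.
    */30 * * * * root /usr/bin/ansible-playbook -i /etc/ansible/local-inventory --connection=local /opt/provision/site.yml >> /var/log/ansible-cron.log 2>&1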
This is just one approach; there could be many others, and it depends on what kind of service you are running, what kind of data you are using, etc.
I don't think you should use Ansible to configure new auto-scaled instances. Instead, use Ansible to configure a new image, create an AMI (Amazon Machine Image) from it, and tell AWS autoscaling to launch from that instead.
On top of this, you should also use Ansible to easily update your existing running instances whenever you change your playbook.
Alternatives
There are a few ways to do this. First, I wanted to cover some alternative ways.
One option is to use Ansible Tower. This creates a dependency, though: your Ansible Tower server needs to be up and running at the time autoscaling (or similar) happens.
The other option is to use something like packer.io and build fully-functioning server AMIs. You can install all your code into these using Ansible. This has no non-AWS runtime dependencies and has the advantage that servers start up fast. Generally speaking, building AMIs is the recommended approach for autoscaling; a minimal Packer template is sketched below.
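A sketch of such a Packer template (the region, source AMI, SSH user and playbook name are all hypothetical):

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "eu-west-1",
        "source_ami": "ami-12345678",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "app-server-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "ansible",
        "playbook_file": "site.yml"
      }]
    }

Running packer build template.json produces an AMI ID that you then reference from your Autoscaling launch configuration.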
Ansible Config in S3 Buckets
The alternative route is a bit more complex, but it has worked well for us when running a large site (millions of users). It's "serverless", depends only on AWS services, supports multiple Availability Zones well, and doesn't require running any central server.
I've put together a GitHub repo that contains a fully-working example with Cloudformation. I also put together a presentation for the London Ansible meetup.
Overall, it works as follows:
Create S3 buckets for storing the pieces that you're going to need to bootstrap your servers.
Save your Ansible playbook and roles etc in one of those S3 buckets.
Have your Autoscaling process run a small shell script. This script fetches things from your S3 buckets and uses them to "bootstrap" Ansible (see the sketch below).
Ansible then does everything else.
All secret values, such as database passwords, are stored in CloudFormation parameter values. The 'bootstrap' shell script copies these into an Ansible fact file.
So that you're not dependent on external services being up, you also need to save any build dependencies (e.g. .deb files, package install files or similar) in an S3 bucket. You don't want to require ansible.com or similar to be up and running for your Autoscale bootstrap script to work. Generally speaking, I've tried to depend only on Amazon services like S3.
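A condensed sketch of such a bootstrap script; the bucket name, fact-file layout and playbook are hypothetical (the GitHub repo linked above contains the full working version):

    #!/bin/bash
    set -e

    # 1. Fetch the playbook, roles and pinned dependencies from S3.
    aws s3 sync s3://example-bootstrap-bucket/ansible /opt/bootstrap/ansible

    # 2. Write CloudFormation parameters (assumed exported by the user-data
    #    script, e.g. DB_PASSWORD) into a local fact file so the playbook
    #    can read them as ansible_local facts.
    mkdir -p /etc/ansible/facts.d
    cat > /etc/ansible/facts.d/cloudformation.fact <<EOF
    [params]
    db_password=$DB_PASSWORD
    EOF

    # 3. Run Ansible against this machine only.
    ansible-playbook -i 'localhost,' -c local /opt/bootstrap/ansible/site.yml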
In our case, we then also use AWS CodeDeploy to actually install the Rails application itself.
The key bits of the config relating to the above are:
S3 Bucket Creation
Script that copies things to S3
Script to bootstrap Ansible. This is the core of the process. It also writes the Ansible fact files based on the CloudFormation parameters.
Use the Facts in the template.
So I am a little confused after reading the documentation.
I want to set up AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the domain.
I want to be able to have AppFabric1 be the main host but also be part of the cache cluster.
What about AppFabric2, or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e. set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so, how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it. Nothing about computers joined to a domain.
I want to set up AppFabric caching and hosting.
Caching and hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously code-named Velocity. The cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host to the cluster without restarting it; the cluster configuration is kept in a shared storage location (an XML file on a share, or SQL Server).
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there is a powerful PowerShell module to do the same thing.
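For example, from the Caching Administration Windows PowerShell console on AppFabric1 (a sketch; the cache name is hypothetical):

    Import-Module DistributedCacheAdministration
    Use-CacheCluster                 # connect using this host's cluster configuration
    Get-CacheHost                    # list the hosts registered in the cluster
    Start-CacheCluster               # start the caching service on all hosts
    New-Cache -CacheName MyCache     # create a named cache for your clients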
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (auto-start, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically you just have to configure the monitoring/persistence databases for each server.
Just try it!
When you add the second node to the AppFabric cluster, make sure to choose the Join Cluster option (instead of New Cluster) and point to the path of the share where you stored the configuration (assuming you used a file share to store the cluster configuration). The share you used must be accessible from AppFabric2.