Is there a deployment folder where a WAR can be placed on the master node so that it gets deployed to all the slave nodes in a domain-managed setup in JBoss AS7?
I know that we can use the JBoss CLI to deploy to a server group, which places the artifact in the JBOSS_HOME/domain/data//content directory.
However, I would like to find out whether the WAR can instead be placed in a deployments folder under the domain of the master node (e.g. JBOSS_HOME/domain/deployments), similar to the one available in standalone mode (i.e. JBOSS_HOME/standalone/deployments), so that the deployment scanner picks it up and makes it available to the slave nodes in the domain without an explicit deploy command via the CLI.
To summarize the comments above: There is no deployment directory in domain mode.
You can use the CLI, the web console, the Maven plugin, or create your own deployment manager.
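For example, a minimal CLI session (the server group name and WAR path are illustrative, not from the question):

JBOSS_HOME/bin/jboss-cli.sh --connect
[domain@localhost:9999 /] deploy /path/to/myapp.war --server-groups=main-server-group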
I wrote a (now old) blog post on how to do this on a standalone server, but it could be changed slightly to work against a domain server. Have a look at how it's done in the jboss-as-maven-plugin for an example.
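As a rough sketch of what the Maven route looks like in domain mode (the plugin version and server group name are examples; check the plugin documentation for the exact element names):

<plugin>
  <groupId>org.jboss.as.plugins</groupId>
  <artifactId>jboss-as-maven-plugin</artifactId>
  <version>7.9.Final</version>
  <configuration>
    <domain>
      <server-groups>
        <server-group>main-server-group</server-group>
      </server-groups>
    </domain>
  </configuration>
</plugin>

Running mvn jboss-as:deploy then pushes the artifact to the listed server groups.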
Related
My goal is to automatically deploy an EAR file to a WebSphere Application Server cluster by Monitored Directory Deployment, so my deployment target is the cluster. The WAS version is 9.0.0.10. Everything works fine if I drop the EAR file into the monitored directory, for example '/{monitored_directory_defined_in_WAS}/clusters/my_cluster_name/': the application is deployed and started. But I also want to deploy that application to the IBM HTTP Server (which resides in the same WebSphere cell as the cluster) in the same automatic process via Monitored Directory Deployment.
I tried to manually predefine the deploymentTargets (cluster and IBM HTTP Server) in a deployment.xml file, put it in the EAR file, and drop the EAR into '/{monitored_directory_defined_in_WAS}/clusters/my_cluster_name/', but WAS deploys the EAR only to the cluster. As a consequence I must manually map all modules from the EAR to the IBM HTTP Server via the WAS console, which I do not want to do.
My second idea/attempt was to create a separate monitored directory for the IBM HTTP Server: '/{monitored_directory_defined_in_WAS}/servers/my_ibm_http_server_name/'.
First I drop the EAR into '/{monitored_directory_defined_in_WAS}/clusters/my_cluster_name/', and right after that I drop the EAR into '/{monitored_directory_defined_in_WAS}/servers/my_ibm_http_server_name/'. The result is that the EAR modules are deployed only to the web server, which is not my goal.
Is that even possible with the WAS Monitored Directory Deployment functionality?
Is it allowed to manually create the deployment.xml file and add it to the EAR file?
First of all, installing via a monitored directory is not recommended in production environments, as it lacks control.
As you correctly suspected, it is not possible to install to both the cluster and the web server - check Installing enterprise application files by adding them to a monitored directory:
Because you can use only one server directory, drag and drop to map applications to combinations of servers is limited. Scenarios requiring use of more than one server, such as mapping to an application server and a web server, are not supported by direct drag and drop of an application file.
However, if you still want to use it, you can deploy a properties file into the monitored directory. That properties file can fully customize your deployment, e.g. also configuring the module mapping to the web server.
Check Installing enterprise application files by adding properties files to a monitored directory for more details.
UPDATE
If you have issues, I'd suggest the following approach: install your application 'classically' via the admin console and map it to both the web server and the cluster. Then run a wsadmin command to extract the properties:
AdminTask.extractConfigProperties('[-propertiesFileName myApp.props -configData Deployment=MyApplication -options [[SimpleOutputFormat true]]]')
Try to use the format from that exported file for your properties.
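For completeness, the extracted file can also be applied back directly from wsadmin (Jython) instead of going through the monitored directory; a minimal sketch, assuming the file name from the extract step above:

AdminTask.applyConfigProperties('[-propertiesFileName myApp.props]')
AdminConfig.save()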
I had time to run it in my environment. I have an app with two modules inside: one module is mapped just to the cluster, the other is mapped to both the cluster and the web server. Here is the relevant part of the property file:
taskName=MapModulesToServers
row1={ module=HelloTestUI #readonly
uri=HelloTestUI.war,WEB-INF/web.xml #readonly
server=WebSphere:cell=!{cellName},cluster=!{clusterName} }
row0={ module=HelloTestWeb #readonly
uri=HelloTestWeb.war,WEB-INF/web.xml #readonly
server=WebSphere:cell=!{cellName},cluster=!{clusterName}+WebSphere:cell=!{cellName},node=!{nodeName},server=!{serverName} }
I didn't try to use that property file to deploy the app via the monitored directory, but as you can see the entry is created, and the mapping is done via the + sign that connects the cluster and the web server.
If you don't see the mapping to your web server, make sure you saved the changes made in the console and only then connected via wsadmin; otherwise wsadmin will not have current data.
I am not yet clear on how Service Fabric handles deployment.
Since the applications are created in a single VS solution, let me ask in terms of the file types for better understanding.
In a single Visual Studio solution, there are
a single .sln
a single .sfproj
multiple .csproj(s)
As I see these files, multiple services (.csproj files) are bound to a single Service Fabric application (.sfproj file), which is under a single solution file (.sln file).
Can I individually deploy a .csproj project to the Service Fabric cluster, or are these bound to the .sfproj so that I have to deploy all the services (each created from a .csproj and bound to the .sfproj) together?
The answer to your question is yes and no at the same time. Let me explain it in detail.
Can I individually deploy .csproj project to the Service Fabric cluster
The answer is no, you can't deploy a single service - in terms of Service Fabric, the minimal unit of deployment is the application (the .sfproj one). So no matter what changes you have, you still need to deploy the application.
But as we all understand, performing a full deployment of all application services is very costly: it consumes lots of time and causes lots of disturbance to the cluster. To avoid this massive update, all Service Fabric components have their own versions (take a closer look at ServiceManifest.xml and ApplicationManifest.xml). So each time the application is deployed to the cluster, Service Fabric goes through all the services included in the application and updates only the components that have changed (i.e. have a different version).
This approach allows you to perform updates of very high granularity, i.e. you can update only the <Config /> package of a single service.
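To illustrate the versioning, here is a hedged ServiceManifest.xml sketch (all names and versions are made up) where only the Config package and the manifest version are bumped, so an upgrade would touch nothing else:

<ServiceManifest Name="HelloServicePkg" Version="2.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="HelloServiceType" />
  </ServiceTypes>
  <!-- unchanged code package keeps its old version -->
  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ExeHost><Program>HelloService.exe</Program></ExeHost>
    </EntryPoint>
  </CodePackage>
  <!-- only the config package changed, so only it gets a new version -->
  <ConfigPackage Name="Config" Version="2.0.0" />
</ServiceManifest>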
I have installed jboss-fuse-karaf-6.3.0 and created a project in Developer Studio.
I'm not able to figure out certain concepts around it.
In Apache Fuse, how are Karaf and Fabric containers related? What I understood is that Karaf provides the runtime environment for the project, while Fabric is for managing deployments. Is that correct?
I have started the Karaf container by running FuseInstall/bin/start.bat. How do I start the Fabric container?
Is http://localhost:8181/hawtio the Fabric console?
Is there a way to directly deploy a project to the Karaf container using Maven, or do we need to deploy the project to Fabric?
Thanks!
Fuse is an ESB product by Red Hat. And yes, you understood correctly: Karaf provides an OSGi runtime, whereas Fabric is for managing multi-container deployments.
You don't start a Fabric container; you need a Fabric agent or something similar for that. I'm not very familiar with it, but you can refer to Fuse's documentation here and here regarding this.
Hawtio is basically a visual management console for various containers.
You can definitely deploy your OSGi bundle directly into a Karaf container. There are commands such as osgi:install, or you can simply place the bundle in FuseInstallDir/deploy. The documentation explains it much better.
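For example, from the Fuse 6.3 (Karaf 2.4-based) shell, something like this should work (the Maven coordinates are made up; -s starts the bundle after install):

karaf@root> osgi:install -s mvn:com.example/my-bundle/1.0.0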
A Fabric is just a group of commonly managed Karaf containers. It lets you manage your containers using Profiles instead of just features and bundles.
Once you have started a Karaf container you can CREATE a Fabric. Follow these instructions: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.2.1/html-single/Fabric_Guide/index.html#Deploy-Fabric-Create . Any other Karaf containers you start can then be JOINED to the existing Fabric.
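As a rough sketch of the commands involved (the host, port, and password are examples):

# on the first container: create the Fabric
JBossFuse:karaf@root> fabric:create --clean --wait-for-provisioning
# on any other container: join it to the existing Fabric
JBossFuse:karaf@root> fabric:join --zookeeper-password admin firsthost:2181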
Once the Fabric has been created, localhost:8181/hawtio will show Fabric-specific content.
If you are using Fabric, you can use the fabric8 Maven plugin to deploy your application directly to a profile. See more details here: https://fabric8.io/gitbook/mavenPlugin.html . Basically, you can just run mvn fabric8:deploy and it will update the Fabric to use your new code. Be careful here, as this tells Fabric where to find your new code in its list of Maven repositories: if you have not deployed your code to a central or shared repository and it exists only on your local machine, while the container receiving the deployment is on a separate machine, it will not work.
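A hedged pom.xml sketch of the plugin setup (the profile name is an example; set the property to the plugin version matching your Fuse release):

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>${fabric8.version}</version>
  <configuration>
    <profile>my-app-profile</profile>
  </configuration>
</plugin>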
Be sure to read up on how profiles work as well, because adding your code to a profile does not add it to a container unless that container is already set up to include the profile you are updating. The Fabric guide I linked first explains this well.
I have an application running in a clustered WildFly environment.
Server-one on machine one and server-one on machine two form an HA cluster. Server-one also acts as the domain controller in my cluster environment.
When I go to the management console at http://machineOneIp:9990/console/App.html#domain-deployments and try to replace or update the deployed WAR, it starts deploying on both servers.
Is there any way to change the deployment scanner so that it stops scanning for new changes?
Any help would be of great use.
This is not the deployment scanner; that scanner is available in standalone mode only and picks up *.?ar files from the standalone/deployments folder.
In domain mode a deployment is activated for a server group.
In this case ALL servers of this group will deploy it.
If you want two servers with different deployments, you need to create two server groups; they can share the same profile.
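A hedged CLI sketch of that setup (the group, profile, and host names are examples):

[domain@localhost:9990 /] /server-group=other-server-group:add(profile=full-ha, socket-binding-group=full-ha-sockets)
[domain@localhost:9990 /] /host=machinetwo/server-config=server-one:write-attribute(name=group, value=other-server-group)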
But if both servers are in the same cluster, I don't understand why you wouldn't want the same application deployed on both.
Is it possible to set a remote path (maybe an FTP location) as the hot deployment directory in a Karaf environment?
I am aware of the org.apache.felix.fileinstall-deploy.cfg file under KARAF_HOME/etc, but this seems to be useful only for pointing the deployment directory at a local path.
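For reference, this is roughly what that file holds; both properties expect a local filesystem path (the values are illustrative):

# etc/org.apache.felix.fileinstall-deploy.cfg
felix.fileinstall.dir = ${karaf.base}/deploy
felix.fileinstall.poll = 1000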
Actually I have several Karaf installations running on different machines, and I need to deploy my bundle to all instances. To avoid deploying to each one separately, I was planning to keep the artifact in some FTP location that would be treated as the deployment directory for all instances.
Any idea?
I strongly suggest against using the deployment directory in production environments.
For your use case I'd rather start using Karaf Cellar and let it handle the deployment across your distributed Karaf installations. This works especially well in conjunction with either a Maven repository or a dedicated Karaf instance also running Karaf Cave, which acts as an OBR server.
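A hedged sketch of the Cellar route (the cluster group name and Maven coordinates are made up; the bundle then lands on every node in the group):

karaf@root()> feature:repo-add cellar
karaf@root()> feature:install cellar
karaf@root()> cluster:bundle-install default mvn:com.example/my-bundle/1.0.0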