Specify target machine for application deployment in Cloud Foundry

I have a Cloud Foundry setup with multiple cells (virtual machines) to host the deployed applications.
After 'cf push', the apps get deployed on any of the cells (as determined by the auctioneer algorithm).
Is there a way to define the target VM that hosts the application?
The production machines should be separate from the pre-production machines.
How do I maintain production and pre-production setups in Cloud Foundry?
Thanks

You can assign placement tags to Diego cells and define isolation segments to ensure that apps in a given space are placed only on those specific cells.
That way, apps from pre-production and production spaces are hosted on different VMs. Configuration steps are here: http://docs.cloudfoundry.org/adminguide/isolation-segments.html.
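As a rough sketch of the operator-side steps (the segment, org, and space names here are examples, and the cells themselves must first carry a matching placement tag in the Diego deployment manifest):

```shell
# Requires cf CLI v6.26+ and admin privileges.
cf create-isolation-segment production              # register the segment
cf enable-org-isolation my-org production           # entitle an org to use it
cf set-space-isolation-segment prod-space production
```

Apps subsequently pushed to prod-space are scheduled only on cells tagged "production"; already-running apps must be restarted to move onto the segment.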

Related

two networks in one docker-compose file

I have a rather complex docker-compose setup. My docker-compose file consists of following services:
A_mysql
A_apache
B_mysql
B_apache
B_sync
I need my host machine to access two different PHP projects via HTTP, running on A_apache and B_apache. Furthermore, I need to separate the networks (one for all A_* services and one for all B_* services). The B_sync service needs to access the A_mysql database to sync data to the B_mysql of its own network.
How can I separate these services into two networks and access a particular service (A_mysql) from another network (B_sync)? How do I set a fixed IP for this service?
I know that putting all services on one and the same network would make the sync job unnecessary, but since this is a smoke test, that wouldn't fit the production environment.
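A minimal docker-compose sketch of this layout (image names, subnets, and the fixed address are illustrative assumptions): define two networks and attach B_sync to both. A static address for A_mysql can be set via an ipam subnet plus ipv4_address, although services sharing a network can usually reach each other by service name, which makes a fixed IP unnecessary.

```yaml
version: "3.5"

services:
  A_mysql:
    image: mysql:5.7               # illustrative image
    networks:
      a_net:
        ipv4_address: 172.28.1.5   # fixed IP, reachable from B_sync
  A_apache:
    image: php:7-apache
    ports: ["8080:80"]             # project A: http://localhost:8080
    networks: [a_net]
  B_mysql:
    image: mysql:5.7
    networks: [b_net]
  B_apache:
    image: php:7-apache
    ports: ["8081:80"]             # project B: http://localhost:8081
    networks: [b_net]
  B_sync:
    image: alpine                  # placeholder for the sync job
    networks: [a_net, b_net]       # member of BOTH networks

networks:
  a_net:
    ipam:
      config:
        - subnet: 172.28.0.0/16    # fixed subnet so A_mysql can take a static address
  b_net:
```

B_sync can then reach A_mysql (at 172.28.1.5 or simply by the name A_mysql) and B_mysql, while A_* and B_* services remain otherwise isolated from each other.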

Spring cloud data flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host, host-A, and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Apart from host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache Yarn Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deployment. I am not sure what you mean by the agents here.
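For illustration, the count deployment property looks like this in the Data Flow shell (the stream definition and app names are made up):

```shell
dataflow:> stream create --name httptest --definition "http | log"
dataflow:> stream deploy --name httptest --properties "deployer.log.count=3"
```

Here deployer.&lt;app&gt;.count asks the deployer for three instances of the log app; how those instances are spread across hosts depends on the chosen deployer implementation.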

Azure Service Fabric-based Services: Prerequisite is always a prepared cluster?

If I've understood the docs properly, Azure Service Fabric-based apps/microservices cannot be installed together with their Service Fabric operational environment in one "packaged installer" step. For example, if I want to deploy a set of microservices on premises at a company that is running a typical Windows Server 2012 or VMware IT center, then I'm out of luck? I'd have to require the company to first commit to (and execute) an installation of an Azure Service Fabric cluster on several machines.
If this is the case, then Azure Service Fabric is only an option for pure cloud operations, where the Service Fabric cluster can be created on demand by the provider, or for companies that have already committed to Azure Service Fabric. This means that a provider of classical "installer-based" software cannot evolve toward the Azure Service Fabric advantages, since the datacenter policies of potential customers are unknown.
What have I missed?
Yes, you always have to have a cluster to run Service Fabric applications and microservices. It is, however, no longer limited to a pure cloud environment: as of September last year, the on-premises version of Azure Service Fabric for Windows Server went GA (https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/), and it lets you run your own cluster on your own machines (physical or virtual, it doesn't matter) or in another data center (or even at another cloud provider).
Of course, as you say, this requires your customer company to either have their own cluster or have you set one up for them (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server). They will also need the competence to manage that cluster over time. It could be argued, though, that this shouldn't be much more difficult than managing a VMware farm or setting up and managing, say, Docker container hosts.
For the traditional 'shrink-wrapped-DVD-installer' type of software vendor, this might not be as easy as just supplying an .exe and some system requirements; I agree with you on that. If the customer can't or doesn't want to run their own cluster, and cloud is not an option, then it definitely adds complexity to selling and delivering your solution.
The fact that you can run your own cluster on any Windows Server environment means that there is no real lock-in to Azure as a cloud platform; I think this is a big pro for Service Fabric as a framework. Once you have a cluster to receive your applications, you can focus on developing those; the same cannot be said of most other cloud-based PaaS frameworks/services.
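For reference, standing up a standalone cluster with the downloadable Windows Server package boils down to a couple of scripts shipped in that package (the config file path is an example; run from an elevated PowerShell prompt on one of the target machines, with ClusterConfig.json listing the nodes to join):

```powershell
# Validate the machines listed in the cluster manifest, then create the cluster.
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

# Afterwards, connect to the cluster and deploy applications to it.
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000
```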

Does IBM Bluemix eliminate the need to maintain servers?

Currently we are maintaining server for each environment like DEV, FVT, UAT and PROD.
I think we can create spaces in Bluemix to replicate the above setup, but does Bluemix completely remove the need for servers?
I think we at least need to maintain a Sandbox environment to test the code before pushing it to Bluemix.
And how does the deployment process differ in Bluemix compared to the traditional way?
The concept of spaces [1] is perfect for separating out environments like DEV, FVT & PROD. I don't think there's anything wrong with having a sandbox as well, but the spaces concept in Bluemix should satisfy your needs.
In Bluemix, in terms of HA, you have the choice of two deployment methods. We use an intelligent update service called Active Deploy [2], and we also employ the zero-downtime concept of "Blue-green" deployments [3]. The difference between the two is that in Blue-green deployments, both versions are never active at the same time, whereas with Active Deploy, minimal traffic is allowed to both versions during the ramp-up phase [4].
[1] https://console.ng.bluemix.net/docs/admin/orgs_spaces.html#spaceinfo
[2] https://console.ng.bluemix.net/docs/services/ActiveDeploy/index.html
[3] https://console.ng.bluemix.net/docs/manageapps/updapps.html#blue_green
[4] https://console.ng.bluemix.net/docs/services/ActiveDeploy/faq.html#bluegreendeployments
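Since Bluemix is built on Cloud Foundry, the spaces setup can be sketched with the cf CLI (org and space names are examples; run after logging in):

```shell
cf create-space dev
cf create-space fvt
cf create-space prod
cf target -o my-org -s dev   # point subsequent pushes at the DEV space
cf push my-app
```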

AppFabric setup in a domain

So I am a little confused by reading the documents.
I want to set up AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the DC.
I want AppFabric1 to be the main host but also part of the cache cluster.
What about AppFabric2, or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e., set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and AppFabric2 and have them cluster together and "make it work"? If so, how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it; nothing about computers joined to a domain.
I want to set up AppFabric caching and hosting.
Caching and hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously named Velocity. A cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host without restarting the cluster; hosts are registered in the cluster's configuration storage location (an XML file share or SQL Server).
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there are powerful PowerShell modules to do the same thing.
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (autostart, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically, you just have to configure the monitoring/persistence databases for each server.
Just try it!
When you are adding the second node to the AppFabric cluster, make sure to choose the option Join Cluster (instead of New Cluster) and point to the path of the share where you stored the configuration (assuming that you used FILE SHARE to store the cluster configuration). The share that you used should be accessible from AppFabric2.
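The PowerShell route mentioned above can be sketched roughly like this, assuming a SQL Server configuration store (server names, the connection string, and the service account are placeholders):

```powershell
# On AppFabric2, after installing the caching feature, register this
# machine against the existing cluster's configuration store.
Import-Module DistributedCacheConfiguration
Register-CacheHost -Provider "System.Data.SqlClient" `
    -ConnectionString "Data Source=SQL1;Initial Catalog=CacheConfig;Integrated Security=True" `
    -Account "DOMAIN\svc-appfabric" -CachePort 22233 -ClusterPort 22234 `
    -ArbitrationPort 22235 -ReplicationPort 22236 -HostName AppFabric2

# Then bring the new host online and verify cluster membership.
Import-Module DistributedCacheAdministration
Use-CacheCluster -Provider "System.Data.SqlClient" `
    -ConnectionString "Data Source=SQL1;Initial Catalog=CacheConfig;Integrated Security=True"
Start-CacheHost -HostName AppFabric2 -CachePort 22233
Get-CacheHost   # both AppFabric1 and AppFabric2 should now be listed
```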