Does IBM Bluemix eliminate the need to maintain servers? - ibm-cloud

Currently we maintain a server for each environment: DEV, FVT, UAT, and PROD.
I think we can create spaces in Bluemix to replicate this setup, but does Bluemix completely remove the need for servers?
I think we at least need to maintain a Sandbox environment to test the code before pushing it to Bluemix.
And how does the deployment process differ in Bluemix compared to the traditional way?

The concept of spaces [1] is perfect for separating out environments like DEV, FVT & PROD. I don't think there's anything wrong with having a sandbox as well, but the spaces concept in Bluemix should satisfy your needs.
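As a minimal sketch (the org, space, and app names are hypothetical), creating one space per environment with the cf CLI looks like this:

```
# Hypothetical org and app names; one space per environment
cf login -a https://api.ng.bluemix.net -o my-org
cf create-space dev
cf create-space fvt
cf create-space uat
cf create-space prod

# Target a space before pushing, e.g. for DEV:
cf target -o my-org -s dev
cf push my-app
```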
In Bluemix, in terms of HA, you have the choice of two deployment methods. We use an intelligent update service called Active Deploy [2], and we also employ the zero-downtime concept of "Blue-Green" deployments [3]. The difference between the two is that in Blue-Green deployments, both versions are never active at the same time, whereas with Active Deploy, a minimal amount of traffic is allowed to both versions during the ramp-up phase [4]. A manual cf CLI version of the Blue-Green flow is sketched after the references.
[1] https://console.ng.bluemix.net/docs/admin/orgs_spaces.html#spaceinfo
[2] https://console.ng.bluemix.net/docs/services/ActiveDeploy/index.html
[3] https://console.ng.bluemix.net/docs/manageapps/updapps.html#blue_green
[4] https://console.ng.bluemix.net/docs/services/ActiveDeploy/faq.html#bluegreendeployments
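For reference, a manual Blue-Green swap with the cf CLI looks roughly like the sketch below; the app names, hostnames, and domain are hypothetical, and Active Deploy automates a similar flow for you.

```
# Push the new version alongside the old one, on a temporary route
cf push my-app-green -n my-app-temp
# ...smoke-test my-app-temp.mybluemix.net...

# Switch production traffic to the new version, then drain the old one
cf map-route my-app-green mybluemix.net -n my-app
cf unmap-route my-app-blue mybluemix.net -n my-app
cf delete my-app-blue -f
```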

Related

Setting up a highly available GitLab server

I need to set up a highly available GitLab server on bare metal. I would also like to know the best practices (security, networking, authorization, firewalls, etc.) to get the job done.
Configuring GitLab to be highly available is a complex process. Even a minimally scaled environment consists of at least 5 distinct nodes/servers, and something much closer to actual high availability can require 11 nodes. See the High Availability documentation for more information.
Please note that GitLab EE Omnibus also includes some Premium/Ultimate-only features that make HA much easier: bundled Redis Sentinel, Consul, PgBouncer, repmgr, etc. That is in addition to access to the Support team for help with HA setup and configuration.
Not that I'm trying to sell you on GitLab EE, but that may help illustrate that this is a complex topic. If you truly need HA GitLab, it's probably a really critical part of your organization, and that's why GitLab provides those features and services to Premium/Ultimate customers.
That said, HA GitLab can be achieved with GitLab CE/Core, but you will have to know how to configure each component yourself, including PostgreSQL replication, Redis/Redis Sentinel, load balancers, etc.
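As a minimal sketch of just the Redis piece (the IPs, master name, and password are placeholders, and a full HA topology needs far more than this), the Omnibus settings on a CE Rails node pointing at an external Redis/Sentinel set look roughly like:

```
# Append to /etc/gitlab/gitlab.rb on the Rails node (placeholder values)
cat >> /etc/gitlab/gitlab.rb <<'EOF'
redis['master_name'] = 'gitlab-redis'
redis['master_password'] = 'CHANGE-ME'
gitlab_rails['redis_sentinels'] = [
  { 'host' => '10.0.0.11', 'port' => 26379 },
  { 'host' => '10.0.0.12', 'port' => 26379 },
  { 'host' => '10.0.0.13', 'port' => 26379 }
]
EOF
gitlab-ctl reconfigure
```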

Running multiple build agents and deployment agents that service different Organisations on one Server

Is it possible to run multiple Azure self-hosted build agents and multiple deployment agents on one server? Also, can these agents service more than one organisation, or even multiple Azure AD tenants?
I do realise the consequences of the server straining under IO bottlenecks and the like; these agents will probably never have to manage more than 3 projects being built and/or deployed at a time, but the sources can be from different projects in different organisations, or possibly different tenants.
I have deployed my deployment agents to the servers and they function fine with a Microsoft-hosted build agent (my question is about ONE of these servers, though it would apply to all of them eventually), but I am hesitant to start deploying the build agents to the same servers.
This approach is very doable and is actually really cost-effective if you do not have continuous deployments or your virtual machine has the IO capacity to handle the planned traffic.
It helps to understand the basics of an agent. When you host a Windows agent, it creates a Windows service, which internally runs a separate process that performs the actions for the agent.
Since these are independent processes, they are not impacted at all by the operations of other agents. As long as they are not trying to access the same files/resources, this is actually a great approach and well worth trying.
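As a sketch of what that looks like in practice (the organisation URLs, PATs, pool and agent names below are hypothetical placeholders), each agent is a separately extracted copy of the agent package configured against its own organisation:

```
REM Agent 1 -> organisation "org-one"
cd C:\agents\org-one
config.cmd --unattended --url https://dev.azure.com/org-one ^
  --auth pat --token <PAT_FOR_ORG_ONE> ^
  --pool Default --agent %COMPUTERNAME%-one --runAsService

REM Agent 2 -> organisation "org-two" (separate extracted copy, same server)
cd C:\agents\org-two
config.cmd --unattended --url https://dev.azure.com/org-two ^
  --auth pat --token <PAT_FOR_ORG_TWO> ^
  --pool Default --agent %COMPUTERNAME%-two --runAsService
```

Each registration creates its own Windows service, which is what keeps the agents independent of one another.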

Specify target machine for application deployment in Cloud Foundry

I have a Cloud Foundry setup with multiple cells (virtual machines) to host the deployed applications.
After 'cf push', the apps get deployed on any of the cells (as per the auctioneer algorithm).
Is there a way to define the target VM to host the application?
The production machines should be different from pre-production machines.
How to maintain production and pre-production setup in Cloud Foundry?
Thanks
You can assign placement tags on Diego cells and define isolation segments to ensure apps in a space are placed on those specific cells.
That way, you can ensure apps from pre-production and production spaces are hosted on different VMs. Configuration steps are here: http://docs.cloudfoundry.org/adminguide/isolation-segments.html.
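As a sketch (the segment, org, space, and app names are hypothetical, and a platform operator must first register the segment's placement tags on the Diego cells), the CLI side looks like:

```
cf create-isolation-segment prod-segment
cf enable-org-isolation my-org prod-segment
cf set-space-isolation-segment prod prod-segment
# Apps in the space must be restarted to be rescheduled onto the segment's cells
cf restart my-app
```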

Azure Service Fabric-based Services: Prerequisite is always a prepared cluster?

If I've understood the docs properly, Azure Service Fabric-based apps/microservices cannot be installed together with their Service Fabric operational environment in one "packaged installer" step. For example, if I want to deploy a set of microservices on premises at a company that runs a typical Windows Server 2012 or VMware IT center, then I'm out of luck? I'd have to require the company to first commit to (and execute) an installation of an Azure Service Fabric cluster on several machines.
If this is the case, then Azure Service Fabric is only an option for pure cloud operations, where the Service Fabric cluster can be created on demand by the provider, or for companies that have already committed to it. This means that a provider of classical "installer-based" software cannot evolve toward the Azure Service Fabric advantages, since the datacenter policies of potential customers are unknown.
What have I missed?
Yes, you always have to have a cluster to run Service Fabric applications and microservices. However, it is no longer limited to a pure cloud environment: as of September last year, the on-premises version of Azure Service Fabric for Windows Server went GA (https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/), which lets you run your own cluster on your own machines (physical or virtual, it doesn't matter) or in another data center (or even at another cloud provider).
Of course, as you say, this requires your customer company either to have their own cluster or to have you set one up for them (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server). They will also need the competence to manage that cluster over time. It could be argued, though, that this shouldn't be much more difficult than managing a VMware farm or setting up and managing, say, Docker container hosts.
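For illustration, bootstrapping a standalone cluster from the Windows Server package comes down to running its setup script from PowerShell on one of the target machines, with a JSON config listing your own nodes (the config file name below is one of the samples shipped with the package):

```
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.DevCluster.json -AcceptEULA
```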
For the traditional 'shrink-wrapped-DVD-installer' type of software vendor, this might not be as easy as just supplying an .exe and some system requirements; I agree with you on that. If the customer can't or doesn't want to run their own cluster, and cloud is not an option, then it definitely adds complexity to selling and delivering your solution.
The fact that you can run your own cluster in any Windows Server environment means there is no real lock-in to Azure as a cloud platform, which I think is a big pro for Service Fabric as a framework. Once you have a cluster to receive your applications, you can focus on developing them; that cannot be said of most other cloud-based PaaS frameworks/services.

How do micro services in Cloud Foundry communicate?

I'm a newbie in Cloud Foundry. In following the reference application provided by Predix (https://www.predix.io/resources/tutorials/tutorial-details.html?tutorial_id=1473&tag=1610&journey=Connect%20devices%20using%20the%20Reference%20App&resources=1592,1473,1600), the application consists of several modules, and each module is implemented as a microservice.
My question is: how do these microservices talk to each other? I understand they must be using some sort of REST calls, but the problem is:
Service registry: say I have services A, B and C. How do these components 'discover' the REST URLs of the other components, given that a component's URL is only known after the service is pushed to Cloud Foundry?
How does Cloud Foundry control the components' dependencies during service startup and shutdown? Say A cannot start until B is started, and B needs to be shut down if A is shut down.
The ref-app 'application' consists of several 'apps' and Predix 'services'. An app is bound to a service via an entry in the manifest.yml, and it gets the service endpoint and other important configuration information via this binding. When an app is bound to a service, the 'cf env <app>' command returns the needed info.
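As a minimal sketch (the app and service instance names are hypothetical, and the service instance is assumed to already exist in the space), the binding and the resulting environment look like:

```
cat > manifest.yml <<'EOF'
applications:
- name: my-frontend
  memory: 256M
  services:
  - my-predix-timeseries   # bound service; credentials/endpoints land in VCAP_SERVICES
EOF
cf push my-frontend
cf env my-frontend   # prints VCAP_SERVICES, including the bound service's endpoint
```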
There might still be some Service endpoint info in a property file, but that's something that will be refactored out over time.
The individual apps of the ref-app application are put in separate microservices, since they get used as components of other applications; hence the microservices approach. If there were startup dependencies across apps, the CI/CD pipeline that pushes the apps to the cloud would need to manage them. The dependencies in ref-app are simply the obvious ones; read on.
While it's true that coupling of microservices is not in the design, there are some obvious reasons it might happen: language and function. If you have a "back-end" microservice written in Java used by a "front-end" UI microservice written in JavaScript on Node.js, these are pushed as two separate apps. Theoretically the UI won't work too well without the back-end, though there is a plan to make it cope with some canned JSON. Still, there is some logical coupling there.
The nice thing you get from microservices is that they might need to scale differently, and Cloud Foundry makes that quite easy with the 'cf scale' command. They might also be used by multiple other microservices, creating new scale requirements. So thinking about what needs to scale, and also about the release cycle of the functionality, helps in deciding what comprises a microservice.
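For example (app names, instance counts, and memory sizes are arbitrary):

```
cf scale my-backend -i 4            # four instances of the back-end
cf scale my-frontend -i 2 -m 128M   # fewer, smaller UI instances
```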
As for ordering: the Google Maps API, for example, might be required by your application, so it could be said that it should be launched first and your application second. But in reality, your application should take into account that the Maps API might be down. Your goal should be that your app behaves well when a dependent microservice is not available.
The 'apps' of the 'application' know about each other by name and by the URL that the cloud gives them. There are actually many copies of the reference app running in various clouds and spaces, prefaced with things like Dev or QA or Integration, etc. Could we get the Dev front end talking to the QA back-end microservice? Sure, it's just a URL.
In addition to the aforementioned etcd (which I haven't tried yet), you can also create a CUPS (user-provided service) 'definition'. This is also a set of key/value pairs, which you can tie to the space (dev/qa/stage/prod) and bind via the manifest. This way you get the properties from the environment.
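A minimal sketch, with hypothetical names and keys:

```
# One user-provided service per space, holding that environment's config
cf cups env-config -p '{"backend_url":"https://qa-backend.example.com","log_level":"debug"}'
# Bind it under 'services:' in manifest.yml (or: cf bind-service my-app env-config);
# the key/value pairs then show up in VCAP_SERVICES at runtime.
```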
If microservices do need to talk to each other, it's generally via REST, as you have noticed (though microservice purists may be against such dependencies). That apart, service discovery is enabled by publishing available endpoints to a service registry, etcd in the case of Cloud Foundry. Once the registry is available, the various instances of a given service register themselves with it; this is self-registration. A client then only needs to know the published endpoint, not each individual service instance's endpoint. The client either communicates with a load balancer such as ELB, which looks up the service registry, or it must be aware of the service registry itself.
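As a sketch of self-registration against an etcd registry (the addresses and key layout are hypothetical; note that etcd's v2 key API uses PUT to set a key):

```
# Each instance publishes its own endpoint under a well-known key with a TTL,
# re-registering periodically as a heartbeat
curl -X PUT http://etcd.example.com:2379/v2/keys/services/catalog/instance-1 \
  -d value='{"url":"https://catalog-a1b2.example.com"}' -d ttl=60

# Clients (or a load balancer) list the live instances
curl http://etcd.example.com:2379/v2/keys/services/catalog
```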
For (2), there should not be such a hard dependency between microservices; per the microservice definition, designing such a tightly coupled set of services indicates looming issues with orchestration and synchronization. If such dependencies do emerge, you will have to rely on service registries, health checks, and circuit breakers for fallback.