The recent Bluemix terms of use document mentions availability only for applications deployed across at least two regions. Previous versions of the Bluemix terms of use (e.g. here) included a so-called service level objective that was not tied to a specific deployment model.
What is the Bluemix service availability if I want to deploy my application in only one region?
Just for info, the answer was added at:
https://developer.ibm.com/answers/questions/246896/bluemix-public-single-region-availability-sla.html
Recently I looked into the AWS Proton service and tried a hands-on exercise, but unfortunately I was not able to get it working.
What I am not able to understand is what advantage Proton gives me, because I can already build the end-to-end pipeline using CodeCommit, CodeDeploy, CodePipeline, and CloudFormation.
It would be great if someone could jot down the use cases where Proton is preferable to the components I suggested above.
From what I understand, AWS Proton is similar to AWS Service Catalog in that it allows
administrators to prepare CloudFormation (CFN) templates which developers/users can provision when they need them. The difference is that AWS Service Catalog is geared towards general users, e.g. those who just want to start an instance pre-configured by administrators, or provision entire infrastructures from a set of approved architectures (e.g. instance + RDS + Lambda functions). In contrast, AWS Proton is geared towards developers, so that they can provision by themselves the entire architectures they need for development, such as CI/CD pipelines.
In both cases, CFN is the primary way these architectures are defined and provisioned. You can think of AWS Service Catalog and AWS Proton as high-level services, and of CFN as the low-level service used as a building block for the other two.
because the end to end pipeline I can build using CodeCommit, CodeDeploy, CodePipeline, and CloudFormation
Yes, in both cases (AWS Service Catalog and AWS Proton) you can do all of that. But not everyone wants to. Many AWS users and developers do not have the time and/or interest to define all the solutions they need in CFN. It is time-consuming and requires experience. Also, it's not a good security practice to allow everyone in your account to provision anything they want without any constraints.
AWS Service Catalog and AWS Proton solve these issues: you pre-define a set of CFN templates and allow your users and developers to provision them easily. They also provide clear role separation in your account, so you have administrators who manage the infrastructure, and users/developers who consume it. This way both groups concentrate on what they know best - infrastructure as code and software development.
I'm a newbie in Cloud Foundry. Following the reference application provided by Predix (https://www.predix.io/resources/tutorials/tutorial-details.html?tutorial_id=1473&tag=1610&journey=Connect%20devices%20using%20the%20Reference%20App&resources=1592,1473,1600), the application consists of several modules, and each module is implemented as a microservice.
My question is, how do these microservices talk to each other? I understand they must be using some sort of REST calls, but the problem is:
(1) Service registry: say I have services A, B, C. How do these components 'discover' the REST URLs of the other components, given that a component's URL is only known after the service is pushed to Cloud Foundry?
(2) How does Cloud Foundry control the components' dependencies during service startup and shutdown? Say A cannot start until B is started, and B needs to be shut down if A is shut down.
The ref-app 'application' consists of several 'apps' and Predix 'services'. An app is bound to a service via an entry in the manifest.yml. Thus, it gets the service endpoint and other important configuration information via this binding. When an app is bound to a service, the 'cf env' command returns the needed info.
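To illustrate, Cloud Foundry exposes bound-service credentials to an app through the VCAP_SERVICES environment variable as JSON. A minimal sketch of pulling an endpoint out of it, assuming a hypothetical user-provided service named `my-backend` with a `url` credential (the real structure varies per service, so check your `cf env` output):

```python
import json

def service_url(vcap_json, service_name):
    """Extract the 'url' credential of the first instance of a named
    service from a VCAP_SERVICES-style JSON document."""
    services = json.loads(vcap_json)
    for instances in services.values():  # keyed by service label
        for instance in instances:
            if instance.get("name") == service_name:
                return instance["credentials"]["url"]
    raise KeyError(service_name)

# Illustrative VCAP_SERVICES payload; the real shape varies per
# service broker, so treat these names as placeholders.
sample = json.dumps({
    "user-provided": [
        {"name": "my-backend",
         "credentials": {"url": "https://backend.example.com"}}
    ]
})

print(service_url(sample, "my-backend"))  # https://backend.example.com
```

In a deployed app you would read `os.environ["VCAP_SERVICES"]` instead of the sample string.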
There might still be some Service endpoint info in a property file, but that's something that will be refactored out over time.
The individual apps of the ref-app application are put in separate microservices, since they get used as components of other applications. Hence the microservices approach. If there were startup dependencies across apps, the CI/CD pipeline that pushes the apps to the cloud would need to manage them. The dependencies in ref-app are simply the obvious ones; read on.
While it's true that coupling of microservices is not in the design, there are some obvious reasons it might happen: language and function. If you have a "back-end" microservice written in Java used by a "front-end" UI microservice written in JavaScript on Node.js, then these are pushed as two separate apps. Theoretically the UI won't work too well without the back-end, but there is a plan to make it degrade gracefully with some canned JSON. Still, there is some logical coupling there.
One of the nice things you get from microservices is that they might need to scale differently, and Cloud Foundry makes that quite easy with the 'cf scale' command. They might be used by multiple other microservices, creating new scale requirements. So, thinking about what needs to scale, and also about the release cycle of the functionality, helps in deciding what comprises a microservice.
As for ordering: for example, the Google Maps API might be required by your application, so it could be said that it should be launched first and your application second. But in reality, your application should take into account that the Maps API might be down. Your goal should be that your app behaves well when a dependent microservice is not available.
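The "behave well when a dependency is down" idea can be sketched as a simple fallback wrapper. This is a toy illustration, not from the ref-app itself; the backend function and canned response are made up:

```python
def with_fallback(fetch, fallback):
    """Call fetch(); if the dependent service is unreachable or
    errors, return a canned fallback instead of failing the app."""
    try:
        return fetch()
    except Exception:
        return fallback

# Simulate a dependent microservice that is currently down.
def broken_backend():
    raise ConnectionError("backend unavailable")

# Canned JSON-style response, as mentioned above.
canned = {"status": "degraded", "results": []}

print(with_fallback(broken_backend, canned))
```

A real implementation would catch only the network/HTTP errors of the client library in use, rather than bare `Exception`.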
The 'apps' of the 'application' know about each other via their names and the URLs the cloud gives them. There are actually many copies of the reference app running in various clouds and spaces. They are prefixed with things like Dev or QA or Integration, etc. Could we get the Dev front-end talking to the QA back-end microservice? Sure, it's just a URL.
In addition to the aforementioned etcd (which I haven't tried yet), you can also create a CUPS (user-provided service) 'definition'. This is also a set of key/value pairs, which you can tie to the space (dev/qa/stage/prod) and bind via the manifest. This way you get the properties from the environment.
If microservices do need to talk to each other, it's generally via REST, as you have noticed. However, microservice purists may be against such dependencies. That apart, service discovery is enabled by publishing available endpoints to a service registry - etcd in the case of Cloud Foundry. Various instances of a given service register themselves with the registry using a POST operation. A client only needs to know the published endpoint, not each individual service instance's endpoint. This is self-registration. The client either communicates with a load balancer such as ELB, which looks up the service registry, or the client itself must be aware of the service registry.
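The self-registration pattern above can be sketched in miniature. This is a toy in-memory stand-in for a real registry such as etcd (where registration would be an HTTP POST and entries would carry TTLs); the service names and endpoints are hypothetical:

```python
import itertools

class ServiceRegistry:
    """Toy registry: instances register under a service name; clients
    look up the name and receive instances round-robin, never needing
    to know individual endpoints up front."""
    def __init__(self):
        self._instances = {}
        self._cycles = {}

    def register(self, service, endpoint):
        self._instances.setdefault(service, []).append(endpoint)
        # Rebuild the round-robin iterator over the updated list.
        self._cycles[service] = itertools.cycle(self._instances[service])

    def lookup(self, service):
        return next(self._cycles[service])

registry = ServiceRegistry()
registry.register("service-b", "https://b-1.example.com")
registry.register("service-b", "https://b-2.example.com")
print(registry.lookup("service-b"))  # alternates across instances
```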
For (2), there should not be such a hard dependency between microservices; by definition, designing such a tightly coupled set of services indicates imminent issues with orchestration and synchronization. If such dependencies do emerge, you will have to rely on service registries, health checks, and circuit breakers for fallback.
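The circuit-breaker idea mentioned above can be sketched as follows. This is a simplified illustration (thresholds are arbitrary, and a production breaker would also have a timed "half-open" state to probe for recovery):

```python
class CircuitBreaker:
    """After max_failures consecutive failures the circuit 'opens':
    calls short-circuit to the fallback without touching the
    dependency at all."""
    def __init__(self, func, fallback, max_failures=3):
        self.func = func
        self.fallback = fallback
        self.max_failures = max_failures
        self.failures = 0

    def call(self, *args):
        if self.failures >= self.max_failures:  # circuit open
            return self.fallback
        try:
            result = self.func(*args)
            self.failures = 0  # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            return self.fallback

# Demo with a dependency that always fails.
def flaky():
    raise ConnectionError("dependency down")

breaker = CircuitBreaker(flaky, fallback={"status": "degraded"}, max_failures=2)
for _ in range(3):
    print(breaker.call())  # always the fallback; flaky() is skipped once open
```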
I have my website on Bluemix, and all of yesterday the EU region was down. I want to know if it is possible to have another instance in US or Sydney and, if one is down, automatically redirect to the next.
The platform doesn't have such a feature to automatically redirect to applications in other regions on error conditions. Applications in other regions are treated as separate applications.
Optimally, to handle rare conditions like the one this weekend, you can create a load balancer with something like NGINX or HAProxy outside of Bluemix to direct traffic to the best/available geography.
For example: https://www.howtoforge.com/high-availability-load-balancer-haproxy-heartbeat-debian-etch
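The "direct to the best/available geography" idea can also be sketched at the application level: health-check each region's endpoint and pick the first healthy one. The hostnames and check function below are hypothetical, and NGINX/HAProxy do this far more robustly at the connection level:

```python
def pick_region(check, regions):
    """Return the first region endpoint whose health check passes."""
    for region in regions:
        if check(region):
            return region
    raise RuntimeError("no healthy region")

# Hypothetical per-region endpoints for the same app.
regions = [
    "https://myapp.eu-gb.example.com",
    "https://myapp.ng.example.com",
]

# Stand-in health check: pretend the EU region is down this weekend.
# A real check would issue an HTTP request with a short timeout.
def health(url):
    return "eu-gb" not in url

print(pick_region(health, regions))  # falls through to the US endpoint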
It has been necessary for IBM to restart its Bluemix servers this weekend due to an urgent security patch. The IBM recommendation is to take advantage of the capability to have multiple application instances deployed in different regions, as indicated in Ram's answer.
The maintenance phase in the EU-GB and Sydney regions is now complete. It is ongoing for the US region. For the latest updates and details on this maintenance, check http://ibm.biz/bluemixstatus.
To build on Vennam's response, you could create a load balancer in Bluemix using containers (or VMs) running NGINX or HAProxy (of course this workaround doesn't work if the containers themselves are down). You could also use Bluemix containers as a test environment before moving your load balancer to an outside server.
I've seen several status updates on Bluemix saying that applications are being restarted and there will be issues logging in, e.g.
During this time, you might experience temporary errors logging into Bluemix or managing applications, such as starting, staging, and so on. If this situation occurs, retry the operation later. The latest status will be available at http://ibm.biz/bluemixstatus throughout the upgrade process.
Existing applications will see a brief restart of instances, but near continuous availability is expected.
Is it possible then to build a high-availability application on Bluemix?
IBM Bluemix supports deploying applications in multiple regions.
Minimising downtime during platform issues can be achieved by hosting your application in multiple regions simultaneously and using an external load-balancer to move traffic between the instances depending on availability.
Replicating application data between regions will be dependent on the individual services you're using. For example, Cloudant supports multi-master replication, allowing you to failover without any manual intervention.
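As an illustration of the Cloudant case: Cloudant follows the CouchDB replication model, where replication is requested by POSTing a JSON document naming a source and target database. A sketch of building such a request for continuous replication between two regions (the account URLs are hypothetical; consult the Cloudant replication docs for authentication and the `_replicator` database variant):

```python
import json

def replication_doc(source, target, continuous=True):
    """Build the JSON body for a CouchDB/Cloudant-style replication
    request between two database URLs."""
    return json.dumps({
        "source": source,
        "target": target,
        "continuous": continuous,  # keep replicating as changes arrive
    })

body = replication_doc(
    "https://eu-account.cloudant.com/mydb",
    "https://us-account.cloudant.com/mydb",
)
# POST this body (Content-Type: application/json) to the account's
# _replicate endpoint, and repeat in the opposite direction to get
# the multi-master setup described above.
print(body)
```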
Is it possible to run more than one instance of MongoDB in Azure? In the future I will need to partition the database across many nodes.
You can run multiple instances if you use a replica set, as you can then use internal endpoints for inter-node communication. If you only have standalone instances, they won't be able to communicate with each other and won't share data.
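As a sketch, a client connects to such a replica set with a connection string that lists the members and names the set; the driver then discovers the primary from any reachable member. The hostnames, ports, and set name below are hypothetical (with pymongo you would pass this URI to `MongoClient`):

```python
def replica_set_uri(hosts, replica_set, db="mydb"):
    """Build a MongoDB connection URI for a replica set: the driver
    discovers the current primary from any listed member."""
    return "mongodb://{}/{}?replicaSet={}".format(
        ",".join(hosts), db, replica_set
    )

uri = replica_set_uri(
    ["node0.cloudapp.net:27017",
     "node1.cloudapp.net:27017",
     "node2.cloudapp.net:27017"],
    "rs0",
)
print(uri)
```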
I've been presenting this at various MongoDB conferences (DC, Silicon Valley), and you can watch the Silicon Valley video recording of my presentation here.
EDIT: 10gen has now published a .NET project that launches a replica set, improving upon my original work; it may be downloaded here, with docs here.