Worker Role Importance in Windows Azure?

In my cloud application I am using only one role, a web role; I am not using a worker role. I am a little confused about the difference between web roles and worker roles.
1) Are they the same or not?
2) What is the importance of a worker role in our application?
3) If I want to add a worker role to my cloud application, how can I add it? Please advise.
Thanks in advance,
PCSSCP.

You'll find this answered in several places, such as here. They're both Windows servers, but Web Roles have IIS enabled. You can do the same things in Web and Worker Roles; if, say, you're using Tomcat as your web server, it's better to use a Worker Role, since you don't need IIS.
You go with separate roles when you have different scaling needs. For example, maybe you want a Small (single-core) Web Role for your website, running 2-5 instances depending on traffic, and a Medium (2-core) Worker Role with 2-10 instances depending on the job workload in your Azure queue. That's a made-up scenario, but hopefully it illustrates why you might want multiple roles defined. You're also free to run all your code in a single role.

Related

Expressing that a service requires another

I'm new to k8s, so this question might be kind of weird; please correct me as necessary.
I have an application which requires a redis database. I know that I should configure it to connect to <redis service name>.<namespace> and the cluster DNS will get me to the right place, if it exists.
It feels to me like I want to express the relationship between the application and the database. Like I want to say that the application shouldn't be deployable until the database is there and working, and maybe that it's in an error state if the DB goes away. Is that something you'd normally do, and if so - how? I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.
Is the alternative to try to connect early and exit 1, so that the cluster keeps on retrying? Feels like that would work but it's not very declarative.
Design for resiliency
Modern applications and Kubernetes are (or should be) designed for resiliency. Applications should be designed without a single point of failure and should be resilient to changes in, e.g., network topology. Also see Twelve-Factor App: IV. Backing services.
This means that your Redis should typically be a cluster of, say, 3 instances. It also means that your app should retry failed connections, and a connection can fail at any time after startup: upgrades of a cluster (or a rolling upgrade of an app) are done by terminating one instance at a time while a replacement instance is launched. E.g. the cluster instance your app is currently connected to might go away, and your app then needs to reconnect, perhaps to a different instance in the same cluster.
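For illustration only, here is a minimal sketch of running three Redis instances as a StatefulSet behind a headless Service. All names are hypothetical, and real Redis replication or Sentinel configuration is deliberately omitted; the point is just that the backing service runs as multiple instances rather than one:

```yaml
# Hypothetical sketch: 3 Redis pods behind a headless Service.
# Actual Redis clustering/Sentinel setup needs additional configuration.
apiVersion: v1
kind: Service
metadata:
  name: redis            # apps connect via redis.<namespace>
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: redis
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3            # no single point of failure at the pod level
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
```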
SQL Databases and schemas
I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.
Yes, this is a different case. On Kubernetes your app is typically deployed with at least 2 replicas, or more, for high-availability reasons. You need to take that into account when managing schema changes for your app. Common tools to manage the schema are Flyway and Liquibase, and they can be run as Kubernetes Jobs: e.g. first launch a Job to create your DB tables, and after that deploy your app. When, some weeks later, you want to change some tables, launch a new Job for that schema migration.
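As a hedged sketch of that Job approach (the names, image tag, and connection details below are assumptions, not taken from the question):

```yaml
# Hypothetical sketch: run a Flyway schema migration as a Kubernetes Job
# before deploying (or upgrading) the app itself.
apiVersion: batch/v1
kind: Job
metadata:
  name: schema-migration
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: flyway
          image: flyway/flyway:9      # official Flyway image
          args: ["migrate"]           # apply pending migrations, then exit
          # Migration SQL files are expected under /flyway/sql; mount them
          # from a ConfigMap or bake a derived image (omitted here).
          env:
            - name: FLYWAY_URL
              value: jdbc:postgresql://postgres:5432/appdb  # assumed DB service
            - name: FLYWAY_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: FLYWAY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
```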
As you've seen, YAML objects cannot express such dependencies. As suggested by #fabian-lopez, your application pod may include an initContainer that waits for its dependencies to be available before the main container starts.
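A minimal sketch of that pattern, assuming a Redis Service named redis in the same namespace (all names and the app image here are hypothetical):

```yaml
# Hypothetical sketch: block the main container until Redis accepts TCP connections.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
        - name: wait-for-redis
          image: busybox:1.36
          # Poll until the Service resolves and the port answers; the main
          # container is not started until this init container exits 0.
          command:
            - sh
            - -c
            - until nc -z redis 6379; do echo waiting for redis; sleep 2; done
      containers:
        - name: app
          image: registry.example.com/my-app:1.0  # assumed image
          ports:
            - containerPort: 8080
```

Note that an init container only gates startup; if Redis disappears later, you are back to the retry logic discussed above.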
Now, if you want a state machine capable of provisioning a database, initializing its schema, maybe importing some records, and only then creating your application: you're looking for an operator. For that, you may use the operator-sdk ( https://github.com/operator-framework/operator-sdk ), or pretty much anything that integrates with a Kubernetes cluster API.
I think Init Containers are something you could leverage for this use case.
This is up to your application code; it's not something Kubernetes either helps or hinders.

High-Availability of Keycloak across remote sites

I've been looking into Keycloak as an on-prem IAM and SSO solution for my company. One thing I'm unclear on from reading the documentation is whether Keycloak's clustered mode can handle our requirements for instance federation across sites.
We have some remote manned sites that occasionally run critical telemetry-gathering processes. Our AD domain is replicated to those sites.
The issue is that there is a single internet link to the sites. If we had Keycloak at the main office and the internet link went down for a day, any software at the remote sites that relies on Keycloak to authenticate wouldn't work (which would be a big problem).
Can we set up Keycloak in cluster mode (i.e., putting an instance at each site), so that if this link goes out, remote users can connect to their local instance automatically and authenticate with local apps? And what happens when the connection is restored and the databases are out of sync: does Keycloak automatically repair this?
Cheers
In general the answer is "yes": you can set up two Keycloak instances in different locations and link them with each other in a cluster (under the hood this is Infinispan cache replication). But it depends on the details of your infrastructure.
The main goal of a Keycloak cluster is to replicate the session cache between nodes. So in the simplest case you can set up two nodes that point to the same DB instance; when the first node goes down, the second handles the whole job, but if the DB also goes down, the second node is useless. For your case, each site should therefore have both its own Keycloak node and a DB replica (how to achieve DB replication is out of scope of this topic). A third option is to use the multitenancy feature of the Keycloak application adapter: you secure the application with two separate Keycloak instances that know nothing about each other.
Try starting with this documentation article:
https://www.keycloak.org/docs/latest/server_installation/index.html#crossdc-mode

Cloud Foundry for SaaS

I am implementing a service broker for my SaaS application on Cloud Foundry.
On create-service of my SaaS application, I also create an instance of another service (say, service-A); i.e., a new service instance of service-A is created for every tenant that on-boards onto my application.
The details of the newly created service instance (service-A) are passed to my service broker via an environment variable.
To be able to process this newly injected environment variable, the service broker needs to be restaged/restarted.
This means downtime for the service broker for every new on-boarding customer.
I have the following questions:
1) How are these kinds of use cases handled in Cloud Foundry?
2) Why did Cloud Foundry choose to use environment variables to pass the info required to use a service? It seems limiting, as it requires an application restart.
As a first guess, your service could be some kind of API provided to a customer. This API must store the data it is sent in some database (e.g. MongoDB or MySQL). So MongoDB or MySQL would be what you call service-A.
Since you want the performance of the API endpoints for your customers to be independent of each other, you are provisioning a dedicated database for each of your customers, that is, for each service instance of your service.
You are right that you would need to restage your service broker if you were to get the credentials for these databases from its environment. Or at least you would have to re-read the VCAP_SERVICES environment variable. Yet there is another solution:
Use the CC API to create the services and bind them to whatever app you like. Then use the CC API again to query the bindings of that app; the response will include the credentials. Here is the link to the API docs for this endpoint:
https://apidocs.cloudfoundry.org/247/apps/list_all_service_bindings_for_the_app.html
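For illustration, the credentials sit inside each binding's entity in the v2 response. This is an abridged, from-memory sketch of the response shape; the GUIDs and the credential fields are entirely hypothetical (what a broker puts in credentials is up to the broker):

```yaml
# Abridged, illustrative shape of the v2 "list service bindings" response.
resources:
  - entity:
      app_guid: "7b5f3b3e-..."                # hypothetical GUID
      service_instance_guid: "f4d8b2a1-..."   # hypothetical GUID
      credentials:
        uri: "mysql://user:secret@db-host:3306/tenant_db"  # hypothetical
```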
It sounds like you are not using services in the 'correct' manner. It's very hard to tell without more detail of your use case. For instance, why does your broker need to have this additional service attached?
To answer your questions:
1) Not like this. You're using service bindings to represent data, rather than using them as backing services. Many service brokers (I've written quite a few) need to dynamically provision things like Cassandra clusters, but they keep the state about which Cassandra cluster belongs to which CF service instance in a data store of their own. The broker does not bind to each thing it is responsible for creating.
2) Because 12-factor applications should treat backing services as attached, static resources. It is not normal to, say, add a new MySQL database to a running application.

Is it possible to isolate applications from one another in Service Fabric?

When running a Service Fabric cluster, it would make sense to have multiple applications running in it, but those applications might not be dependent on each other in any way. For example, I can have a CustomerApp in there, and a WikiApp.
Now from a security standpoint, it would be great if the WikiApp could be isolated from the CustomerApp, as a wiki clearly should not be able to connect to services from an app that holds customer data. I could put authentication into the services of the CustomerApp itself to allow only calls from authenticated services, but in addition, it would be even better if the WikiApp were not even able to connect to or see the other app, and could not resolve an endpoint address from the naming service.
So is there a way to really isolate applications from each other in Service Fabric with a platform feature? I could not find anything about it in the documentation, and I also doubt it's possible the way Service Fabric works, but it would be very useful.
And to be clear, I'm really talking about isolating applications (ApplicationTypes) from each other, not services within a single application.
There are some levels of isolation built in:
Application instances have process-level isolation, in that each application instance runs in its own process.
Node isolation is possible, using placement constraints, to "isolate" services from each other by constraining them to run on different nodes.
Container support will be available in the future, where applications and services can run inside containers for further environment and resource isolation.
Services can run under unique user accounts, which you can use to perform authentication yourself at the application level.
But unfortunately there is no fine-grained role-based access mechanism built into the platform today. So, for example, system-wide operations like running queries to get a list of applications or services, or resolving endpoints using the naming service, don't have any role-based access control built in.

cloudformation best practices in AWS

We are at an early stage with running our services on AWS. We have our servers hosted in AWS, in a VPC with private and public subnets, and have multiple instances in those subnets, using an ELB and auto scaling (based on AMIs) for the front-end web servers. The whole environment (VPC, security groups, EC2 instances, DB instances, S3 buckets, CloudFront) was set up manually using the AWS console at first.
The application servers host JBoss, and WAR files are deployed onto them.
As per AWS best practices, we want to create the whole infrastructure using CloudFormation and set up test/stage/prod environments.
-Would it be a good idea to have all the above components (VPC, security groups, EC2 instances, DB instances, S3 buckets, CloudFront, etc.) in one CloudFormation stack/template? Or should we create two stacks: 1) one with the network-related components and 2) one with the EC2-related components?
-Once we have a prod environment running from a CloudFormation stack, and we want to roll out new AMIs to prod in the future, how can we update the live running EC2 instances using CloudFormation without interruptions?
-What are the best practices for (and the various approaches to) deploying code to multiple EC2 nodes when a new release is made? We don't use continuous integration at the moment.
It's a very good idea to separate your setup into multiple stacks. One obvious reason is that stacks have certain limits that you may eventually reach. A more practical reason is that you don't really need to update, say, your VPC every time you just want to deploy a new version; the network architecture typically changes far less frequently. Another reason to avoid having one huge template, or making changes to an "important" template needlessly, is that you always run the risk of messing things up: if there's an error in your template and you remove an important resource by accident (e.g. it gets commented out), you'll be very sorry. So separating stacks out of sheer caution is probably a good idea.
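If you split the stacks, the network stack can export identifiers that the application stack imports. This is a hedged sketch using CloudFormation cross-stack exports; the resource and export names are made up:

```yaml
# network-stack template (hypothetical): export the VPC ID.
# Assumes this template defines a VPC resource named Vpc.
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: prod-network-VpcId
---
# app-stack template (hypothetical): import the exported value where needed.
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Front-end web tier
      VpcId: !ImportValue prod-network-VpcId
```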
If you want to update your application, you can simply update the template with the new AMIs, and CFN will know what needs to be recreated or updated. You can read about rolling updates here. However, I'd recommend considering something a bit more straightforward for deploying your actual code, like Ansible or Chef.
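As an illustrative sketch of what a rolling update looks like in a template (the resource names, AMI ID, and sizes are assumptions, not from the question):

```yaml
# Hypothetical sketch: replace instances in batches when the AMI changes.
Resources:
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0abc12345example   # update this to roll out a new AMI
      InstanceType: t2.small
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 1   # keep serving traffic during the update
        MaxBatchSize: 1            # replace one instance at a time
        PauseTime: PT5M            # wait between batches
    Properties:
      LaunchConfigurationName: !Ref WebLaunchConfig
      MinSize: "2"
      MaxSize: "4"
      AvailabilityZones: !GetAZs ""
```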
I'd also recommend you look into Docker for packaging and deploying your application's nodes. Very handy.