Apps in the same space or in different spaces - ibm-cloud

I know that every application deployed to a CF space is deployed to an isolated container. Applications in the same space can share a service instance, which is not the case for applications in different spaces ...
My question is this: I know that an application in one space has no way to impact an application in another space, but if two applications are deployed to the same space, is there some way for one of them to gain "privileges" to harm the other application in the space (from a security perspective) that would not be available to applications deployed to different spaces?

No, whether two apps are pushed to the same or different spaces does not make a difference in that respect.
The reason they cannot share a service instance unless they're part of the same space is an organizational restriction (e.g. to prevent you from accidentally binding your production database service to your dev space's apps) and is not enforced at the network level.
(You can confirm this by copying the service instance's credentials, creating a user-provided service instance in the other space, and binding that to the other app: both apps can access the target service fine. This is a work-around for certain use cases, such as a message queue shared by apps in different spaces, for which we are currently exploring a proper solution.)
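For illustration, a minimal sketch of that work-around, assuming a hypothetical database service whose credentials look like the JSON below (the key names and values are placeholders; copy the real ones from the original instance's credentials):

{
    "uri": "postgres://dbuser:secret@db.example.com:5432/appdb",
    "username": "dbuser",
    "password": "secret"
}

Save that as credentials.json, target the other space, and run cf create-user-provided-service shared-db -p credentials.json followed by cf bind-service other-app shared-db (the service and app names here are placeholders). After restarting or restaging, the app in the other space sees the same credentials in VCAP_SERVICES and can reach the service directly.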

if an application needs several containers running on the same host, why not just make a single container with everything you need?

I started working on Kubernetes. I have already worked with single-container pods. Now I want to work with multi-container pods. I read a statement like
if an application needs several containers running on the same host, why not just make a single container with everything you need?
meaning two containers with a single IP address. My doubt is: in which cases do two or more containers use the same host?
Could anybody please explain the above scenario with an example?
This is called "multiple processes per container".
https://docs.docker.com/config/containers/multi-service_container/
It's been discussed on the internet many times and it has many gotchas. Basically there's not a lot of benefit to doing it.
Ideally you want a container to host one process and its threads/subprocesses.
So if your database process is in a crash loop, let it crash and let docker restart it. This should not impact your web container.
Also, putting processes in separate containers lets you set different memory/CPU limits for your web container and your database container.
That's why Kubernetes exposes the Pod concept, which lets you run multiple containers in the same namespace. Read this page fully: https://kubernetes.io/docs/concepts/workloads/pods/pod/
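As a rough illustration of that Pod concept, here is a minimal sketch of a Pod manifest with two containers, written in JSON (which kubectl accepts as well as YAML). The containers share the Pod's network namespace but get their own memory/CPU limits; the names, images, and limit values are only placeholders:

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "web-with-helper" },
  "spec": {
    "containers": [
      {
        "name": "web",
        "image": "nginx:1.25",
        "ports": [ { "containerPort": 80 } ],
        "resources": { "limits": { "cpu": "500m", "memory": "256Mi" } }
      },
      {
        "name": "helper",
        "image": "busybox:1.36",
        "command": [ "sh", "-c", "while true; do sleep 3600; done" ],
        "resources": { "limits": { "cpu": "100m", "memory": "64Mi" } }
      }
    ]
  }
}

If the helper container crashes, only that container is restarted; the web container keeps serving.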
Typically it is recommended to run one process per Docker container. Although it is very much possible to run multiple processes in a container, it is discouraged more for application-architecture and DevOps reasons than for technical ones.
Following discussion gives a better insight into this debate: https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container
When one runs multiple processes from different packages/tools in the same Docker container, it can run into the dependency and upgradability issues that Docker was meant to solve in the first place. Nevertheless, for many applications it makes sense to scale and manage the application in blocks rather than as individual components; sometimes that makes life a bit easier. So Pods are basically a balance between the two: they give each process isolation for better service manageability and yet group them together so that the application as a whole can be better managed.
I would say segregating responsibilities can be a boon. Maintenance is a whole lot easier this way. A container can have a single entrypoint, and a pod's health is checked via that main process in the entrypoint. If your front-end application (say Java on Tomcat) is working fine but the database is not (although one should never run a production database in a container), you will not get the feedback that you would get if they were separate containers.
Also, one of the best parts of Docker images is that they are available as separate modules, and maintenance is much easier that way.

Azure Service Fabric deployment

I am deploying an API to Service Fabric nodes, and by default it goes to the D drive (the temp drive). I would like to change this default behavior and deploy to another drive, or to the C drive, to avoid application loss in case of VMSS deallocation. How can I do this?
You say you want to do this to avoid application loss, however:
SF already replicates your application package to multiple machines when you Register the application package in the Image Store (part of the provisioning/deployment process)
Generally, if you want your application code and config to be safe, keeping it somewhere outside the cluster (wherever you're deploying from, or in blob storage) is usually a better answer.
SF doesn't really support deallocating the VMs out from under it and then bringing them back later. See the FAQ answer here.
So overall I'm not sure that what you're trying to do is the right solution to your real problem, and it looks like you're heading into multiple unsupported scenarios, which usually means there's some misunderstanding.
That all said, of course, it's configurable.
Within a node type, you can specify the dataPath (example here). However, it's not recommended that you change this.
"settings": {
"dataPath": "D:\\\\SvcFab",
},
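If you do decide to change it despite the caveats above, the dataPath setting normally lives in the Service Fabric node extension on the VM scale set in the cluster's ARM template. A trimmed, hedged sketch of that fragment with the data path moved off the temp drive follows; everything besides dataPath (the extension name, node type name, endpoint reference, and durability level) is a placeholder from a typical template and will differ in yours:

{
    "name": "ServiceFabricNodeVmExt_NodeType0",
    "properties": {
        "type": "ServiceFabricNode",
        "publisher": "Microsoft.Azure.ServiceFabric",
        "settings": {
            "clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
            "nodeTypeRef": "NodeType0",
            "dataPath": "C:\\SvcFab",
            "durabilityLevel": "Bronze"
        }
    }
}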

How to organize pods with non-replicatable containers in kubernetes?

I'm trying to get my head around kubernetes.
I understand that pods are a great way to organize containers that are related. I understand that replication controllers are a great way to ensure they are up and running.
However, I do not get how to do it in real life.
Take a web app with, say, a rails app on unicorn behind nginx, with a postgres database.
The nginx and rails app can autoscale horizontally (if they are shared-nothing), but postgres can't out of the box.
Does that mean I can't put the postgres database in the same pod as nginx and rails when I want to have two servers behind a load balancer? Does postgres need its own replication controller, and is it simply a service within the cluster?
The general question is: in common web scenarios, what kinds of containers go into one pod? I know this can't be answered in general, so the ideas behind it are what interest me.
The answer to this really depends on how far down the rabbithole you want to go. You are really describing 3 independent parts of your app - nginx, rails, postgres. Each of those parts has different needs when it comes to monitoring, scaling, and updating.
You CAN put all 3 into a single pod. That replicates the experience of a VM, but with some advantages for manageability, deployment etc. But you're already calling out one of the major disadvantages - you want to scale (for example) the rails app but not the postgres instance. It's time to decompose.
What if you instead made 2 pods - one for rails+nginx and one for postgres. Now you can scale your frontend without messing up your database deployment. You might go even further and split your rails app and nginx into distinct pods, if that makes sense. Or split your rails app into 5 smaller apps.
This is the whole idea behind microservices (which is a very hyped word, I know). Decompose the problem into smaller and more manageable chunks. This is WHY kubernetes exists - to help you manage the resulting ocean of microservices.
To answer your last question - there is no single recipe. It's all about how your application is structured and what makes sense FOR YOU. Maybe you want to decompose along team boundaries, or along departments in your company, or along admin roles. The questions to ask yourself are things like "if I update or scale this pod, is there any part of it that I don't want updated/scaled at the same time?"
In some sense it is a data normalization problem. A pod should be a model of one concept or owner or scope. I hope that helps a little.
You should put containers into the same pod when you want to deploy and update them at the same time or if they need to share local state (disk, network, etc). In some edge cases, you may also want to co-locate them for performance reasons.
In your scenario, since nginx and the rails app can scale horizontally, they should be in their own pods so that you can provision the right number of replicas for each tier of your application (unless you would always scale them at the same time). The postgres database would be in a separate pod, accessed via a service.
This allows you to update to a newer version of nginx without changing anything else about your service. Same for the rails app. And they could each scale independently.
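As a rough sketch of that split, written in JSON (which kubectl accepts as well as YAML), you might have a replication controller for the nginx+rails frontend and a separate pod plus service for postgres. The names, images, replica count, and password below are placeholders:

{
  "apiVersion": "v1",
  "kind": "List",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "ReplicationController",
      "metadata": { "name": "frontend" },
      "spec": {
        "replicas": 2,
        "selector": { "app": "frontend" },
        "template": {
          "metadata": { "labels": { "app": "frontend" } },
          "spec": {
            "containers": [
              { "name": "nginx", "image": "nginx:1.25", "ports": [ { "containerPort": 80 } ] },
              { "name": "rails", "image": "example/rails-app:1.0", "ports": [ { "containerPort": 3000 } ] }
            ]
          }
        }
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": { "name": "postgres", "labels": { "app": "postgres" } },
      "spec": {
        "containers": [
          {
            "name": "postgres",
            "image": "postgres:9.6",
            "env": [ { "name": "POSTGRES_PASSWORD", "value": "example" } ],
            "ports": [ { "containerPort": 5432 } ]
          }
        ]
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": { "name": "postgres" },
      "spec": {
        "selector": { "app": "postgres" },
        "ports": [ { "port": 5432 } ]
      }
    }
  ]
}

The rails containers reach the database at postgres:5432 via the service, and you can scale the frontend replication controller up or down without touching the postgres pod.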

How to develop a SaaS application with limited resources per tenant

I'd like to develop a bunch of SaaS applications in Java and I'm not sure what the best way to go is.
Each application will have a WAR containing the web service and at least one worker WAR, which is a thread waiting for new tasks to show up in the DB and then working those tasks off. This worker contains the intelligence of the application and uses a lot of CPU. The web service gives users the possibility to add new tasks and other things ...
Resource Limitations
The infrastructure must ensure the following:
The web service must always get a certain amount of CPU time to be able to respond to the user, so the hungry worker must not take all the CPU time for its work.
Each tenant has its own worker, and the workers must not interfere with each other; it must not be possible to block the whole system (and all tenants) with a single task.
Resource Sharing
It would be nice to be able to share resources, but always ensure that in extreme situations every worker and web service gets the required minimum.
Versioning
As new versions of an application are released, each tenant must have the possibility to initiate an update on its own once it has adapted to the API changes. Furthermore, a tenant must be able to keep more than one application endpoint (let's call them channels), so that it has a production channel and a beta channel. In the beta channel the tenant can test against new versions, and when it feels comfortable with the new version it can update its production channel.
User-Management
All applications of a tenant must share a user database and have the same way to authenticate.
Environment
I want to use Java EE 7. I would enjoy using Wildfly.
Question
What is the best infrastructure to approach these aims? I want to host this on my own servers.
What I already found
I understand that you cannot limit CPU usage within a JVM, so the workers must have their own JVMs.
I looked at PaaS providers like OpenShift Origin, but it seems that they encourage you to run an application server per tenant, per application, which sounds like a resource eater to me.
Is there no way to have one Wildfly instance running and limit the amount of CPU usage per tenant and app?
Thank You
Lukas

How To Deploy Web Application

We have an internal web system that handles the majority of our company's business. Hundreds of users use it throughout the day; it's very high priority and must always be running. We're looking at moving to ASP.NET MVC 2; at the moment we use web forms. The beauty of using web forms is that we can instantaneously release a single web page as opposed to deploying the entire application.
I'm interested to know how others are deploying their applications whilst still making them accessible to the user. Using the deployment tool in Visual Studio would supposedly cause a halt. I'm looking for a method that's super quick.
If you had high-priority bug fixes, for example, would it be wise to mix web forms with MVC and replace the view with a code-behind web form until you make the next proper release, which isn't a web form?
I've also seen other solutions where two copies of the same web application run side by side on the same server and you either change the root directory in IIS or change the web.config to point to a different folder, but the problem with this is that you have to do an entire build and deploy even for a simple bug fix.
EDIT: To elaborate, how do you deploy the application without causing any disruption to users?
How is everyone else doing it?
I guess you can also run the MVC application uncompiled and just replace .cs files/views and such on the fly.
A web setup uninstall/install is very quick, but it kills the application pool, which might cause problems depending on how your site is built.
The smoothest way is to run it on two servers and store the sessions in SQL Server or shared state. Then you can just bring S1 down and patch it => bring S1 back up again and bring S2 down => patch S2 and then bring it up again. Although this might not work if you make any major changes to the session parts of the code.
Have multiple instances of your website running on multiple servers. The best way to do it is to have a production environment, a test environment, and a development environment. You can create test cases and run them every time you have a new build; if it gets through all the tests, move the version into production ;).
You could have two physical servers each running IIS and hosting a copy of the site. OR you could run two copies of the site under different IIS endpoints on the SAME server.
Either way you cut it you are going to need at least two copies of the site in production.
I call this an A<->B switch method.
Firstly, have each production site on a different IP address. In your company's DNS, add an entry set to one of the IPs and give it a really short TTL. Then you can update site B and also pre-test/warm-up the site by hitting the IP address. When it's ready to go, get your DNS switched to the new site B. Once your TTL has expired you can take down site A and update it.
Using a shared session state will help to minimise the transition of users between sites.