Installing all IIS components with ISES, WKC and Stewardship Center on one computer - ibm-information-server

I need to install all IIS components with ISES, WKC and Stewardship Center on one computer.
I am wondering whether this is practically possible, and I have listed my questions below to make clear exactly what information we are missing:
Is it possible to install and configure these components (IIS, ISES, WKC and Stewardship Center) on one node/machine?
If yes, what hardware sizing and allocated resources would be suitable for this node?
If no, what hardware sizing and allocated resources would you suggest as the minimum dedicated nodes/resources for this PoC?
On the other hand, I have done many IIS installations with different standard topologies, but only in on-premises environments. It would be nice to have, from your experience, any documents or links describing the installation of the products above, with tips and steps.

You can do it with a minimum of 2 VMs/servers, because the Microservices Tier (WKC) has to be on its own VM/server.
You can put the Stewardship Center, Engine, Repository and Services Tiers on one VM/server.
The minimum configuration is 16 cores and 64 GB RAM for the Microservices Tier, and 4 cores with 32 GB RAM for the other tiers.
What do you plan to do with Stewardship Center? In V11.7.1 Activiti is provided as another option for workflow and it is part of the Microservices Tier.
What PoC are you planning to do?

Related

Setting up a highly available GitLab server

I need to set up a highly available GitLab server on bare metal, and I would also like to know the best practices (covering security, networking, authorization, firewalls, etc.) to get the job done.
Configuring GitLab to be highly available is a complex process. Even a minimally scaled environment consists of at least 5 distinct nodes/servers. Something much closer to actual high availability can require 11 nodes. See the High Availability documentation for more information.
Please note that GitLab EE Omnibus also includes some Premium/Ultimate-only features that make HA much easier - bundled Redis Sentinel, Consul, PgBouncer, RepMgr, etc. This is in addition to access to the Support team for HA setup and configuration assistance.
Not that I'm trying to sell you on GitLab EE. But that may help to illustrate that this is a complex topic. If you truly need HA GitLab then it's probably a really critical part of your organization and that's why GitLab provides those features and services to Premium/Ultimate customers.
That said, HA GitLab can be achieved with GitLab CE/Core. But you will have to know how to configure each component including PostgreSQL replication, Redis/Redis Sentinel, load balancers, etc.
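To make the Redis piece of that list concrete, here is a minimal sketch (my illustration, not part of the original answer) of how an application discovers the current Redis master through Sentinel instead of connecting to a fixed host, using the Python redis-py client. The Sentinel hostnames and the "gitlab-redis" master name are placeholders for whatever your own configuration defines.

```python
# Minimal sketch: client-side view of Redis Sentinel failover, using redis-py.
# Hostnames and the "gitlab-redis" master name are placeholders.
from redis.sentinel import Sentinel

# Point the client at the Sentinel nodes, not at Redis directly.
sentinel = Sentinel(
    [("sentinel1.example.com", 26379),
     ("sentinel2.example.com", 26379),
     ("sentinel3.example.com", 26379)],
    socket_timeout=0.5,
)

# Sentinel tracks which node is currently the master; after a failover,
# the same call transparently returns the newly promoted master.
master = sentinel.master_for("gitlab-redis", socket_timeout=0.5)
master.set("ha-check", "ok")

# Replicas can serve reads to spread load.
replica = sentinel.slave_for("gitlab-redis", socket_timeout=0.5)
print(replica.get("ha-check"))
```

This is the pattern the bundled Sentinel setup automates for GitLab's own components; with CE/Core you wire it up yourself.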

Does it make sense to run Service Fabric on a single machine?

Service Fabric looks great, but right now I do not have enough demand to justify renting 5 machines (I think that is the minimum number of nodes in a cluster).
I was thinking to install Service Fabric SDK on a single Azure Virtual Machine.
I know that I will not get the main benefits of a Service Fabric application, reliability and scalability, but I will be developing on a framework where I can easily add more machines and scale in the future, if necessary, without changing anything.
Right now I have 15 microservices and I plan to add 10 more. At present I am using IIS, and deployment and maintenance are not very fast. It seems that Service Fabric could solve that, plus it would be easily scalable.
Does it make sense to use Service Fabric on a single machine, or is it better to stay on IIS?
Technically it is possible, though it doesn't make much sense. A one-node cluster runs with a special configuration, so scaling out that cluster is not supported. You can use a single-node cluster for testing and then create another one for production use.
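Even on a one-box dev cluster, you develop against the same management endpoints as a full cluster. As a rough illustration (an assumption on my part, not from the original answer), here is a Python sketch that queries the local Service Fabric REST gateway, assuming the default unsecured dev setup on localhost:19080:

```python
# Rough sketch: querying a local one-box Service Fabric dev cluster through
# its REST management gateway (default unsecured dev endpoint on port 19080).
# Paths and the api-version parameter follow the public Service Fabric REST API.
import requests

BASE = "http://localhost:19080"

# List the nodes. A dev cluster simulates its nodes on one machine, which is
# why such a cluster cannot later be scaled out into a real multi-machine ring.
nodes = requests.get(f"{BASE}/Nodes", params={"api-version": "6.0"}).json()
for node in nodes.get("Items", []):
    print(node["Name"], node["HealthState"])

# Overall cluster health, handy as a smoke test after deploying services.
health = requests.get(f"{BASE}/$/GetClusterHealth",
                      params={"api-version": "6.0"}).json()
print("Cluster health:", health["AggregatedHealthState"])
```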

Apache Mesos vs Google Kubernetes

What's the difference between Apache's Mesos and Google's Kubernetes?
I read the accepted answers but I'm still confused what the differences are.
If Kubernetes is a cluster management system, then what does Mesos do? (I understand what it does from watching a bunch of videos, but I suppose I'm more confused about how the two work together.)
From what I've read, both Kubernetes and Marathon are "frameworks" sitting on top of Mesos?
What is Mesos responsible for and what are Kubernetes/Marathon responsible for and how do they work with each other?
EDIT:
I think the better question is: when would I want to use Kubernetes on top of Mesos vs. just running Mesos alone?
Mesos is another abstraction layer. It simply abstracts the underlying hardware, so software that wants to run on top of it only has to define the resources it requires, without having to know anything else about the machines underneath.
Kubernetes can do a similar thing, but without the abstraction provided by Mesos you can't run other frameworks (e.g., Spark or Cassandra) on the same machines without manually dividing the resources between those frameworks.
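For a sense of what "defining required resources" looks like on the Kubernetes side, here is a small illustrative sketch using the official Kubernetes Python client (the image, pod name, and resource figures are arbitrary examples, not from the original answer):

```python
# Illustrative sketch: a Kubernetes pod declares only the resources it needs;
# the scheduler decides which machine it lands on. Names and figures are examples.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig credentials

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="resource-demo"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="app",
            image="nginx:alpine",
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "128Mi"},  # what it needs
                limits={"cpu": "500m", "memory": "256Mi"},    # what it may burst to
            ),
        )
    ]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```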
Apache Mesos is a resource manager that shares resources (CPU shares, RAM, disk, ports) across a cluster of machines in a fair way. By sharing, I mean it offers these resources to so-called framework schedulers (such as Marathon) and thereby achieves a clear separation of concerns between resource management and scheduling decisions (the latter implemented by the framework scheduler, depending on the job type, for example long-running or batch). See also the Mesos architecture for further details.
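The same division of labor shows up in Marathon's API: an app definition states only its resource needs, and Mesos finds agents whose offers can satisfy them. A hedged sketch using Marathon's REST API via Python (the Marathon host below is a placeholder for your own deployment):

```python
# Illustrative sketch: submitting an app to Marathon (a Mesos framework
# scheduler) via its REST API. Marathon matches the declared resource needs
# against the resource offers Mesos presents; no machine is named anywhere.
import requests

app = {
    "id": "/hello-world",
    "cmd": "python3 -m http.server $PORT0",  # Marathon injects $PORT0
    "cpus": 0.25,    # resource requirements only
    "mem": 128,      # MB
    "instances": 3,  # Marathon keeps three copies running somewhere
}

resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
resp.raise_for_status()
print("Submitted:", resp.json()["id"])
```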

OpenStack API Implementations

I have spent the last 6 hours reading through buzzword-riddled, lofty, high-level documents/blogs/articles/slideshares, trying to wrap my head around what OpenStack is, exactly. I understand that:
OpenStack is a free and open-source cloud computing software platform. Users primarily deploy it as an infrastructure as a service (IaaS) solution.
But again, that's a very lofty, high-level, gloss-over-the-details summary that doesn't really have meaning to me as an engineer.
I think I get the basic concept, but would like to bounce my understanding off of SO, and additionally I am having a tough time seeing the "forest for the trees" on the subject of OpenStack's componentry.
My understanding is that OpenStack:
Installs as an executable application on 1+ virtual machines (guest VMs); and
Somehow, all instances of your OpenStack cluster know about each other (that is, all instances running on all VMs you just installed them on) and form a collective pool of resources; and
Each OpenStack instance (again, running inside its own VM) houses the dashboard app ("Horizon") as well as 10 or so other components/modules (Nova, Cinder, Glance, etc.); and
Nova is the OpenStack component/module that CRUDs VMs/nodes for your tenants, and is somehow capable of turning the guest VM that it is running inside of into its own hypervisor, spinning up 1+ VMs inside of it (hence you have a VM inside of a VM) for any particular tenant
So please, if anything I have stated about OpenStack so far is incorrect, please begin by correcting me!
Assuming I am more or less correct, my understanding of the various OpenStack components is that they are really just APIs and require the open source community to provide concrete implementations:
Nova (VM manager)
Keystone (auth provider)
Neutron (networking manager)
Cinder (block storage manager)
etc...
Above, I believe all components are APIs. But these APIs have to have implementations that make sense for the OpenStack deployer/maintainer. So I would imagine that there are, say, multiple Neutron API providers, multiple Nova API providers, etc. However, after reviewing all of the official documentation this morning, I can find no such providers for these APIs. This leaves a sick feeling in my stomach, like I am fundamentally misunderstanding OpenStack's componentry. Can someone help connect the dots for me?
Not quite.
Installs as an executable application on 1+ virtual machines (guest VMs); and
OpenStack isn't a single executable; there are many different modules, some required and some optional. You can install OpenStack on a VM (see DevStack, a distro that is friendly to VMs), but that is not the intended usage for production; you would only do that for testing or evaluation purposes.
When you are doing it for real, you install OpenStack on a cluster of physical machines. The OpenStack Install Guide recommends the following minimal structure for your cloud:
A controller node, running the core services
A network node, running the networking service
One or more compute nodes, where instances are created
Zero or more object and/or block storage nodes
But note that this is a minimal structure. For a more robust install you would have more than one controller node and more than one network node.
Somehow, all instances of your OpenStack cluster know about each other (that is, all instances running on all VMs you just installed them on) and form a collective pool of resources;
The OpenStack nodes (be they VMs or physical machines, it does not make a difference at this point) talk among themselves. Through configuration, they all know how to reach the others.
Each OpenStack instance (again, running inside its own VM) houses the dashboard app ("Horizon") as well as 10 or so other components/modules (Nova, Cinder, Glance, etc.); and
No. In OpenStack jargon, the term "instance" is associated with the virtual machines that are created in the compute nodes. Here you meant "controller node", which does include the core services and the dashboard. And once again, these do not necessarily run on VMs.
Nova is the OpenStack component/module that CRUDs VMs/nodes for your tenants, and is somehow capable of turning the guest VM that it is running inside of into its own hypervisor, spinning up 1+ VMs inside of it (hence you have a VM inside of a VM) for any particular tenant
I think this is easier to understand if you forget about the "guest VM". In a production environment OpenStack would be installed on physical machines. The compute nodes are beefy machines that can host many VMs. The nova-compute service runs on these nodes and interfaces to a hypervisor, such as KVM, to allocate virtual machines, which OpenStack calls "instances".
If your compute nodes are hosted on VMs instead of on physical machines, things work pretty much the same way. In this setup the hypervisor is typically QEMU, which can be installed in a VM and can then create VMs inside that VM just fine, though there is a big performance hit compared to running the compute nodes on physical hardware.
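To make the Nova workflow concrete, here is a hedged sketch of creating an instance with the official openstacksdk Python client. The cloud name and the image, flavor, and network IDs are placeholders; behind this call, nova-compute and the configured hypervisor do the actual work on a compute node:

```python
# Sketch: what "Nova CRUDs instances" looks like from the API side, using the
# official openstacksdk client. IDs and the cloud name are placeholders.
import openstack

# Reads credentials for the named cloud from clouds.yaml.
conn = openstack.connect(cloud="mycloud")

server = conn.compute.create_server(
    name="demo-instance",
    image_id="IMAGE_UUID",                 # placeholder
    flavor_id="FLAVOR_UUID",               # placeholder
    networks=[{"uuid": "NETWORK_UUID"}],   # placeholder
)

# Block until Nova reports the instance ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```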
Assuming I am more or less correct, my understanding of the various OpenStack components is that they are really just APIs
No. These services expose themselves as APIs, but that is not all they are. The APIs are also implemented.
and require the open source community to provide concrete implementations
Most services need to interface with an external service. Nova needs to talk to a hypervisor; Neutron to interfaces, bridges, gateways, etc.; Cinder and Swift to storage providers; and so on. This is really a small part of what an OpenStack service does; there is a lot more built on top that is independent of the low-level external service. The OpenStack services include support for the most common external services, and of course anybody who is interested can implement more of these.
Above, I believe all components are APIs. But these APIs have to have implementations that make sense for the OpenStack deployer/maintainer. So I would imagine that there are, say, multiple Neutron API providers, multipe Nova API providers, etc.
No. There is one Nova API implementation, and one Neutron API implementation. Through configuration you tell each of these services how to interface with lower-level services such as the hypervisor, the networking stack, etc. And as I said above, support for a range of these is already implemented, so if you are using ordinary x86 hardware for your nodes, you should be fine.
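As a small follow-on sketch (my illustration, not the answerer's), you can see that pluggability from the same client API: listing hypervisors through openstacksdk shows which backend each compute node was configured with, while the Nova API you talk to stays the same. The "mycloud" name is again a placeholder, and this particular call typically requires admin credentials:

```python
# Sketch: one Nova API, pluggable hypervisor backends chosen by configuration.
# Listing hypervisors usually requires admin credentials; "mycloud" is a placeholder.
import openstack

conn = openstack.connect(cloud="mycloud")

for hv in conn.compute.hypervisors():
    # hypervisor_type is e.g. "QEMU" or "KVM", per each node's nova.conf.
    print(hv.name, hv.hypervisor_type)
```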

Connecting laptop(s)/desktop(s) to form a MATLAB computing cluster?

I have experience running parallel jobs on a remote cluster, and parallel (parfor) jobs on a single local machine, but I have never tried making a cluster of my own. I have access to a couple of laptops/desktops/servers (root access on all except one server), and was wondering if I could connect them all (or some) to form a local cluster (about 30 cores total).
Once you move beyond working with one machine, you move license types from the Parallel Computing Toolbox to a Distributed Computing Server license. The licenses are available in clusters of 8 workers and up. List price on an 8-worker cluster is $6K; 32 workers are $21K. You can get more information on the MathWorks product page. Also note that submitting jobs to the workers requires the Parallel Computing Toolbox.
Once you have the worker licenses, the only supported way to distribute jobs to the workers is through a scheduler. The server licenses come with a basic MathWorks scheduler that has some limitations but is ideal for single users or small groups. Beyond that you would need to go with one of the higher-end schedulers such as LSF. A full list of supported schedulers is on the product page. Moving from a PCT setup on a single machine to a distributed setup can be fairly involved.
Are you prepared to pay the license cost for this? You can use local clusters (up to 8 workers) with one copy of the Parallel Computing Toolbox license. But to use distributed clusters, you need a distributed computing license for each "node" (processor core) in the cluster. I'm not familiar with how to set this up. I know that I have access to a few of these clusters, and I also use local clusters extensively. We opted not to create our own distributed cluster for this reason. We also have data showing that distributed clusters were slow for our particular tasks (a lot of file I/O was happening in our case).
I know this doesn't answer your question, just a few things to think about.