What are the server requirements for the Blue Prism tool with 5 bots?

The company that I work for is planning to purchase the Blue Prism tool, and we need the server requirements (RAM size, HDD size, etc.) for running 5 bots.
Thanks in advance for your help.

The automation platform you need depends on the nature of your automations.
You can run everything on a single node, and 8GB of RAM with 4 cores will be enough, provided you are not automating applications that require many resources and you do not need to run anything concurrently.
On the other hand, for production purposes:

1 Blue Prism Application Server (on Windows Server 2012 R2; this orchestrates your platform). I usually allocate at least 8GB RAM and 4 cores; the disk space Blue Prism itself uses is insignificant, so 10GB free after OS installation should be good.

1 SQL Server (where the structure of your automations and the execution logs are stored). At least 8GB RAM; disk size depends on your transaction volume, logging, and the other configuration choices you make when you automate.

1 Interactive Client (where you will monitor your platform): 4GB RAM, 50GB HDD.

1 Runtime Resource (where your automations will run). Here you have to take into consideration all the resources your automation will use, in particular disk space and processor needs. A runtime resource can only run one automation at a time, so if you need to run 5 different automations (what I think you are calling robots) concurrently, you need 5 runtime resources, each with the applications its automation drives installed. A rough capacity sketch follows below.
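To make the concurrency point concrete, here is a back-of-the-envelope sizing calculation in Python using the figures above. The runtime resource specs (2 cores, 4GB each) and the SQL Server core count are my assumptions, not official Blue Prism minimums; adjust them to what your automated applications actually need.

    # Rough capacity sketch for 5 concurrent automations.
    # Per-component figures follow the suggestions above; the runtime
    # resource specs (2 cores / 4GB each) are assumptions.
    components = {
        # name: (machine_count, cores_each, ram_gb_each)
        "application_server": (1, 4, 8),
        "sql_server":         (1, 4, 8),   # disk grows with logging volume
        "interactive_client": (1, 2, 4),
        "runtime_resource":   (5, 2, 4),   # one per concurrent automation
    }

    total_cores = sum(n * c for n, c, _ in components.values())
    total_ram   = sum(n * r for n, _, r in components.values())
    machines    = sum(n for n, _, _ in components.values())
    print(f"total: {machines} machines, {total_cores} cores, {total_ram} GB RAM")
    # -> total: 8 machines, 20 cores, 40 GB RAM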

Related

Cassandra + Redis Project Server setup

Currently I'm running a dedicated VPS with 4GB RAM and a 50GB hard disk, and I have a SaaS solution on it with more than 1,500 customers. Now I'm going to upgrade the project's business plan: there will be 25,000 customers, with about 500-1,000 of them using the project in real time. At the moment it takes 5 seconds to fetch Cassandra database records from the server to the application. Then I came across Redis, and it's said that keeping a copy of the data in Redis helps fetch it much faster and lowers server overhead.
Am I right about this?
If I need to improve the overall performance, can anybody tell me what I need to upgrade?
Can a server with the configuration above handle Cassandra and Redis together?
Thanks in advance.
A machine with 4GB of RAM will probably only be single-core, so it's too small for any production workload and only suitable for dev usage where you run 1 or 2 transactions per second, mostly for functional testing.
We generally recommend deploying Cassandra on machines with at least 2 cores + 8GB allocated to the heap (so need at least 16GB of RAM) for low production loads. For moderate loads, 4 cores + 32GB RAM is ideal so you can allocate 16GB to the heap.
If you're just at the proof-of-concept stage, there's a tier on DataStax Astra that's free forever and doesn't require a credit card to create an account. I recommend it to most people because you can launch a cluster in a few clicks and you can quickly focus on developing your app. Cheers!
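As for the Redis question itself: yes, a cache-aside pattern in front of Cassandra can cut read latency dramatically for hot keys, at the cost of extra RAM and cache-invalidation logic. A minimal sketch in Python, assuming the redis and cassandra-driver packages and a hypothetical users table:

    import json

    import redis
    from cassandra.cluster import Cluster

    cache = redis.Redis(host="localhost", port=6379)
    session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical keyspace

    CACHE_TTL = 300  # seconds; tune to how stale your data may safely be

    def get_user(user_id):
        # 1. Try Redis first: an in-memory hit avoids the Cassandra round trip.
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)

        # 2. On a miss, read from Cassandra...
        row = session.execute(
            "SELECT id, name, email FROM users WHERE id = %s", (user_id,)
        ).one()
        if row is None:
            return None

        # 3. ...and populate the cache so later reads are served from RAM.
        user = {"id": str(row.id), "name": row.name, "email": row.email}
        cache.setex(key, CACHE_TTL, json.dumps(user))
        return user

Keep in mind that Redis holds its dataset in memory and so competes with Cassandra for the same RAM, which is another reason a 4GB box won't comfortably run both.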

Installing all IIS components with ISES, WKC and Stewardship center on one computer

I need to install all IIS components with ISES, WKC and Stewardship Center on one computer.
I am wondering whether this is practically possible. I have listed my questions below to be very clear about each piece of information we are missing:
Is it possible to install and configure those components (IIS, ISES, WKC and Stewardship Center) on one node/machine?
If yes, what hardware sizing and allocated resources are suitable for this node?
If no, what hardware sizing and allocated resources would be considered the minimum dedicated nodes/resources for this PoC?
On the other hand, I have done many IIS installations with different standard topologies, but only in on-premises environments. It would be nice to have, from your experience, any documents or links describing the installation of the above products, with tips and steps.
You can do it with a minimum of 2 VMs/servers, because the Microservices Tier (WKC) has to be on its own VM/server.
You can put the Stewardship Center, Engine, Repository and Services tiers on one VM/server.
The minimum configuration is 16 cores and 64GB RAM for the Microservices Tier, and 4 cores with 32GB RAM for the VM hosting the other tiers.
What do you plan to do with Stewardship Center? In V11.7.1 Activiti is provided as another option for workflow and it is part of the Microservices Tier.
What PoC are you planning to do?

Batch jobs and reduced SSD lifetime?

I am working on a batch job that imports data from a legacy database, transforms the data into 3NF, and inserts the result into another database (the target database). The batch job is written with Spring Batch.
While I was developing the steps of the job, I wrote unit tests for the functionality of each step. Now that development of the steps is finished, I want to test the system in a testing environment of sorts before rolling the batch job out to production. To that end, I imported the legacy database into a local MySQL server and also created a local version of the target database. These MySQL servers are deployed on my MacBook Pro with a 256 GB SSD. I have already run the job a few times while fixing small bugs, but it occurred to me that SSDs are more sensitive to write cycles than a standard HDD. So I checked the mysqld process in my activity monitor and noticed that 424.64 GB has been written to my SSD in the last three days.
How much influence (lifetime, write cycles) will this volume of writes have on my SSD? Would you recommend deploying the database on a normal HDD instead of using my SSD? Or do you think I am falsely alarmed?
I would recommend you deploy the database to a normal HDD, because the NAND flash in your SSD does have a maximum erase threshold. In other words, you are wearing down your SSD. Although SSDs have wear-leveling features to ensure the NAND flash wears down evenly, you are definitely wearing it down much faster than normal usage would.
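To put the 424.64 GB in perspective, here is a rough endurance estimate. The 150 TBW (terabytes written) rating below is an assumption for a typical consumer 256 GB SSD; check your drive's datasheet for the real figure:

    # Back-of-the-envelope SSD endurance estimate.
    # The 150 TBW rating is an assumption; consult the drive's datasheet.
    tbw_rating_tb = 150          # total terabytes written the drive is rated for
    written_gb = 424.64          # observed writes over the measurement window
    days = 3

    daily_tb = written_gb / 1000 / days              # ~0.14 TB/day
    years_to_rating = tbw_rating_tb / daily_tb / 365

    print(f"{daily_tb:.3f} TB/day -> rating reached in ~{years_to_rating:.1f} years")
    # -> 0.142 TB/day -> rating reached in ~2.9 years

At this rate the drive would reach a 150 TBW rating in roughly three years of continuous testing, which quantifies how much faster than normal desktop usage this workload wears the flash.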

Setting up the optimal number of processors/cores per processor virtual machine (VMware)

I was looking for an answer but didn't find one.
I'm trying to create a new VM to develop a web application. What would be the optimal processor settings?
I have an i7 (6th gen) with hyper-threading.
Host OS: Windows 10. Guest OS: CentOS.
Off topic: should the RAM I give to the VM be 50% of my memory? Would that be OK? (I have 16GB RAM.)
Thanks!
This is referred to as 'right-sizing' a VM, and it depends on the application workload that will run inside it. Ideally, you want to provide the VM with the minimum amount of resources the app requires to run correctly. "Correctly" is subjective, based on your expectations.
Inside your VM (CentOS) you can run top to see how much memory and CPU % is being used. You can also install htop, which you may find friendlier than top.
RAM
If you see a low % of RAM being used, you can probably reduce what you're giving the VM. If you are seeing any swap memory used (paging to disk), you may want to increase the RAM. Start with 2GB and see how the app behaves.
CPU
You may want to start with no more than 2 vCPUs, check top to see how utilized the application is under load, and then make an assessment for more or fewer vCPUs.
The way a hosted hypervisor (VMware Workstation) handles guest CPU usage is through a CPU scheduler. When you give a VM x vCPUs, the VM needs to wait until that many cores are free on the CPU to do 'work'. The more vCPUs you give it, the harder (slower) it is to schedule. It's more complicated than this, but I'm trying to keep it high level. CPU scheduling deep dive.
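If you'd rather sample utilization over time than eyeball top, here is a small sketch using the psutil package (an assumption: pip install psutil) run inside the guest. Sustained swap usage or pegged CPU while the app is under load suggests the VM is undersized:

    import psutil

    # Sample guest CPU and memory every 5 seconds for one minute.
    for _ in range(12):
        cpu = psutil.cpu_percent(interval=5)   # averaged across all vCPUs
        mem = psutil.virtual_memory()
        swap = psutil.swap_memory()
        print(f"cpu={cpu:5.1f}%  ram={mem.percent:5.1f}%  "
              f"swap_used={swap.used / 2**20:7.1f} MiB")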

Docker instead of multiple VMs

So we have around 8 VMs running on a server with 32 GB RAM and 8 physical cores. Six of them each run a mail server (Zimbra); two of them run multiple web applications. The load on the server is very high, primarily because of the heavy load on each VM.
We recently came across Docker. It seems like a cool idea to create containers for applications. Do you think it's viable to run the applications of each of these VMs inside 8 Docker containers? Currently the server is heavily utilized because multiple VMs have serious I/O issues.
Or should Docker only be utilized where we are running web applications, and not email or other infrastructure apps? Please advise...
Docker will certainly alleviate your server's CPU load by removing the hypervisor's overhead in that respect.
Regarding I/O, my tests revealed that Docker has its own I/O overhead, due to how AUFS (or, more recently, device mapper) works. On that front you will still gain something over the hypervisor's I/O overhead, but not bare-metal I/O performance. My observations, for my own needs, were that Docker was not 'bare-metal-like' when dealing with I/O-intensive services.
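One common mitigation for I/O-heavy containers is to keep the write-heavy data on a bind mount or named volume, since volumes bypass the AUFS/device-mapper storage driver and write straight to the host filesystem. A sketch using the Docker SDK for Python (the image name and paths are hypothetical):

    import docker

    client = docker.from_env()

    # Bind-mount the mail store from the host so heavy writes go straight
    # to the host filesystem instead of through the storage driver.
    client.containers.run(
        "example/zimbra",          # hypothetical image
        detach=True,
        name="mail1",
        volumes={"/srv/mail1": {"bind": "/opt/zimbra", "mode": "rw"}},
    )

The same effect is available on the command line with docker run -v /srv/mail1:/opt/zimbra.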
Have you thought about adding more RAM, say 64GB or more? For a large Zimbra deployment, 4GB per VM may not be enough. Zimbra, like all messaging and collaboration systems, is an I/O-bound application.
Having zmdiaglog (/opt/zimbra/libexec/zmdiaglog) data to see whether you are allocating memory correctly would help, as described here:
http://wiki.zimbra.com/wiki/Performance_Tuning_Guidelines_for_Large_Deployments#Memory_Allocation