How much RAM does Eclipse Che use for itself?

For those who have installed Eclipse Che: did you have a look at the RAM usage of the core system? I mean, before starting to code?

See What are the minimum requirements to run an Eclipse Che server for 1 user?
I personally run Minishift on Windows 10 in order to play with Red Hat CodeReady Workspaces 1.0.1 (the product version of Che 6.x). I recommend giving Minishift 6GB of RAM, if you can spare it.
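If it helps, here is roughly how that looks on the command line (a minimal sketch; the --memory flag accepts unit suffixes on recent Minishift releases, while older ones expect a plain value in MB, e.g. 6144):

```
# Give the Minishift VM 6GB of RAM for this start.
minishift start --memory 6GB

# Or persist the setting so every future start picks it up:
minishift config set memory 6GB
minishift stop && minishift start
```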

The resource requirements are detailed in the Che documentation admin guide.
For a multi-user Che deployment, there are 3 containers, which require RAM and storage space for persistent volumes. The absolute minimum resources are:
Che workspace server: 750MB of RAM, 1GB of disk in a PVC
Keycloak: 1GB of RAM, 2 PVCs, 1GB each
PostgreSQL: ~515MB of RAM, 1GB PVC for the database
So just to run the Che server stack itself (all three containers), before starting any workspaces, you will need about 3GB of RAM and 4GB of persistent storage.
In addition, workspaces will need ~2GB of RAM each (requirements change based on language runtime and developer tools). Workspaces can use ephemeral storage or persistent storage for source code and work. If using ephemeral storage, you will need to commit your work before the workspace is stopped or auto-suspended. If using persistent storage, you can use one large PVC which is shared by all workspaces, or a separate PVC per workspace (which may take more storage resources, and makes right-sizing more difficult).
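If you want to sanity-check what your deployment actually provisioned, something like this works (a sketch assuming a multi-user Che deployed into a namespace named che; adjust to your installation, and note that kubectl top needs a metrics server running in the cluster):

```
# List the persistent volume claims Che created (the PVCs described above).
kubectl get pvc -n che

# Show actual RAM consumption per pod (requires metrics-server/heapster).
kubectl top pods -n che
```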

Related

How to recover a Fedora Server 36 storage pool after upgrading to v37?

I recently upgraded Fedora Server from v36 to v37. The F36 server had a volume group consisting of three physical drives combined to form a storage pool, which I named “BigDrive”. During the upgrade the logical volume information seems to have been lost, and BigDrive didn't appear or mount on the F37 server. I've been unable to find any backup of the logical volume information. At present the three drives are installed on the F37 server. I would welcome advice on how to recombine the three drives, recover the logical volume information, and access the data stored in the shared pool. Can anyone suggest a process to do that, or a utility that could rebuild the storage pool from the physical drives?
I haven't found any helpful information in the Fedora documentation or on the usual websites: everything I've found relies on the backed-up logical volume information, which somehow didn't survive the upgrade. This is because the OS hard drive was wiped and repartitioned as part of the upgrade. The drives that formed the storage pool were not formatted, nor do they store any OS or application files; they were purely data storage.
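Not a definitive answer, but since LVM also writes a copy of its metadata onto the physical volumes themselves, the standard recovery tools are the usual starting point. A hedged sketch, assuming the volume group really was named BigDrive and the three drives show up as /dev/sdb, /dev/sdc and /dev/sdd (substitute your actual devices):

```
# Scan the attached drives for LVM physical volumes and volume groups.
sudo pvscan
sudo vgscan

# If the VG is detected, activating it may be all that is needed:
sudo vgchange -ay BigDrive
sudo lvs                                  # list the logical volumes it contains

# If the metadata is damaged, look for an archived configuration.
# (/etc/lvm/archive lived on the wiped OS disk, so it may be empty here;
# in that case pvck can examine the metadata area directly on each PV.)
sudo vgcfgrestore --list BigDrive
sudo vgcfgrestore -f /etc/lvm/archive/BigDrive_00000.vg BigDrive   # path is illustrative
sudo vgchange -ay BigDrive
```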

MongoDB on the cloud

I'm preparing my production environment on the Hetzner cloud, but I have some doubts (I'm more of a developer than a devops person).
I will get 3 servers for the replica set, each with 8 cores, 32 GB RAM and a 240 GB SSD. I'm a bit worried about the size of the SSD the servers come with, and Hetzner has the option to create volumes that can be attached to the servers. Since MongoDB uses a single folder for the DB data, I was wondering how I can use the 240 GB that comes with the server in combination with external volumes. At the beginning I can use the 240 GB, but then I will have to move the data folder to a volume when it reaches capacity. I'm fine with this, but it looks to me that once I move to volumes, those 240 GB will no longer be used (well, I can use them for the MongoDB journal, which the docs suggest storing on a separate partition).
So, my noob question is, how can I use both the disk that comes with the server and the external volumes?
Thank you
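One common pattern, sketched under assumptions (volume mounted at /mnt/volume1, Debian/Ubuntu-style paths /var/lib/mongodb and /etc/mongod.conf; adjust to your layout): move dbPath onto the attached volume once it's needed, and keep using the fast local SSD for the journal by symlinking the journal directory, which is the documented way to put it on a separate disk.

```
# Stop mongod before touching the data files.
sudo systemctl stop mongod

# Copy the data directory onto the attached volume.
sudo rsync -a /var/lib/mongodb/ /mnt/volume1/mongodb/
sudo chown -R mongodb:mongodb /mnt/volume1/mongodb

# Point dbPath at the volume in /etc/mongod.conf:
#   storage:
#     dbPath: /mnt/volume1/mongodb

# Keep the journal on the local SSD by replacing the journal directory
# with a symlink to a directory on the server's own disk.
sudo mv /mnt/volume1/mongodb/journal /var/lib/mongodb-journal
sudo ln -s /var/lib/mongodb-journal /mnt/volume1/mongodb/journal
sudo chown -R mongodb:mongodb /var/lib/mongodb-journal

sudo systemctl start mongod
```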

How is storage space allocated in Minikube?

I am using Minikube to bootstrap a Kubernetes cluster on my local machine (for learning purposes). I am on Windows. Minikube is installed on the C drive, which is low on disk space due to some personal files and other software. According to the Minikube documentation, it requires 20GB of disk space for its VM. However, when I try to bootstrap the Kubernetes cluster, booting sometimes fails with a low-disk-space error, even though disk space is available on my other drives.
By default, on which drive does Minikube allocate its space? The drive it is installed on? Is there any way to specify on which drive Minikube allocates its 20GB?
As pointed out in the comments, disk allocation is done by the driver that is used to create the VM. In my case I was using hyperv as my VM driver, so I used the following steps. (Your steps may vary slightly according to your Windows version; I am using Windows 10.)
Start ---> Hyper-V Manager ---> Hyper-V Settings ---> Change the default folder to store virtual hard disk files
You can find a detailed illustration here.
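Besides moving the Hyper-V default VHD folder, minikube itself lets you size the VM disk when the cluster is created (a sketch; the flag was spelled --vm-driver on older minikube releases):

```
# Throw away any half-created VM first, otherwise the old settings stick.
minikube delete

# Recreate the cluster with an explicit disk size on the hyperv driver.
minikube start --driver=hyperv --disk-size=20g
```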

Migrate to Kubernetes

We're planning to migrate our software to run in Kubernetes with autoscaling. This is our current infrastructure:
PHP and Apache are running on Google Compute Engine n1-standard-4 (4 vCPUs, 15 GB memory)
MySQL is running in Google Cloud SQL
Data files (csv, pdf) and the code are stored on a single SSD Persistent Disk
I found many posts recommending storing the data files in Google Cloud Storage and using the API to fetch files and upload them to the bucket. We have very limited time, so I decided to use NFS to share the data files across the pods. The problem is that NFS is slow: it's around 100 MB/s when I copy a file with pv, while the result from iperf is 1.96 Gbits/sec. Do you know how to achieve the same result without implementing cloud storage, or how to increase the NFS speed?
Data files (csv, pdf) and the code are stored on a single SSD Persistent Disk
There's nothing stopping you from volume mounting an SSD into the Pod so you can continue to use an SSD. I can only speak to AWS terminology, but some EC2 instances come with "local" SSD hardware, and thus you would only need to use a nodeSelector to ensure your Pods were scheduled onto machines that had said local storage available.
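As a concrete sketch of that nodeSelector approach (the label disktype=local-ssd and the mount path are invented for illustration; GCE conventionally mounts local SSDs under /mnt/disks):

```
# Label the nodes that have a local SSD attached.
kubectl label nodes my-node-1 disktype=local-ssd

# Constrain the Pod to those nodes and mount the local disk into it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: php-apache
spec:
  nodeSelector:
    disktype: local-ssd
  containers:
  - name: web
    image: php:7-apache
    volumeMounts:
    - name: data
      mountPath: /var/www/data
  volumes:
  - name: data
    hostPath:
      path: /mnt/disks/ssd0   # wherever the local SSD is mounted on the node
EOF
```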
Where you're going to run into problems is if you are currently just using one php+apache and thus just one SSD, but now you want to scale the application up and it requires that all php+apache have access to the same SSD. That's a classic distributed application architecture problem, and something kubernetes itself can't fix for you.
If you're willing to expend the effort, you can also try any of the other distributed filesystems (Ceph, GlusterFS, etc.) and see if they perform better for your situation. Then again, "we have very limited time" pretty much means that's off the table, I guess.

Should I use SSD or HDD as local disks for a Kubernetes cluster?

Is it worth using an SSD as the boot disk? I'm not planning to access local disks within pods.
Also, GCP by default creates a 100GB disk. If I use a 20GB disk, will it cripple the cluster, or is it OK to use smaller disks?
Why one or the other? Kubernetes (Google Container Engine) is mainly memory- and CPU-intensive unless your applications need huge throughput on the hard drives. If you want to save money, you can put tags on the HDD nodes and use node affinity to tweak which pods go where, so you can have a few nodes with SSDs and target them with the affinity tags.
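A sketch of that tagging idea (the disk label and node names are invented), using the full node-affinity syntax:

```
# Tag SSD-backed and HDD-backed nodes differently.
kubectl label nodes node-a disk=ssd
kubectl label nodes node-b disk=hdd

# Pods that need fast disks opt into the SSD nodes via node affinity.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: io-heavy-app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx
EOF
```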
I would always recommend SSD considering the small difference in price and large difference in performance. Even if it just speeds up the deployment/upgrade of containers.
Reducing the disk size to what is required for running your pods should save you more. I cannot give a general recommendation for disk size, since it depends on the OS you are using, how many pods end up on each node, and how big each pod is going to be. To give an example: when I run CoreOS-based images with staging deployments for nginx, PHP and some application servers, I can reduce the disk size to 10GB with ample free room (both for master and worker nodes). On the extreme end, if I run self-contained Go application containers with no storage needs, each pod only requires a few MB of space.
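On the disk-size point: with GKE the boot-disk size and type are fixed per node pool at creation time, so shrinking from the 100GB default looks roughly like this (cluster and pool names are placeholders):

```
# Create a node pool with 20GB SSD boot disks instead of the 100GB default.
gcloud container node-pools create small-ssd-pool \
  --cluster my-cluster \
  --disk-size 20 \
  --disk-type pd-ssd
```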