When trying to attach a second Portable Storage volume to the VSI, I receive:
SoftLayer_Exception_Virtual_Guest_MaxPortableVolumes - Unable to attach portable volume. The destination guest has reached the maximum number of allowed disks.
Is this a SoftLayer limitation that allows only a single Portable Volume to be connected to the instance?
Yes, it is, and the limit depends on the server that you ordered. Some virtual servers allow only 2 disks; others allow 5 or more. You can see the maximum for your server in the control portal by clicking "Modify Configuration"; the disks section shows the maximum number of disks allowed for that server. You can also see this when you order a new server.
I recently upgraded Fedora Server (F) from v36 to v37. The F36 server had a volume group consisting of three physical drives combined to form a storage pool, which I named “BigDrive”. During the upgrade the logical volume information seems to have been lost, and BigDrive didn’t appear or mount on the F37 server. I’ve been unable to find any backup of the logical volume information. At present the 3 drives are installed in the F37 server. I would welcome advice on how to recombine the three drives, recover the logical volume information, and access the data stored in the shared pool. Can anyone suggest a process to do that, or a utility that could rebuild the storage pool from the physical drives?
I haven't found any helpful information in the various Fedora docs or the usual websites; everything I've found assumes you can restore from the backed-up logical volume metadata, which in my case didn't survive the upgrade because the OS hard drive was wiped and repartitioned as part of the upgrade. The drives that formed the storage pool were not formatted, nor do they store any OS or application files; they were purely data storage.
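For reference, if the LVM headers on the three data drives are intact (each physical volume carries a copy of the volume group's metadata in its header), the standard rediscovery sequence is worth trying first. A sketch; the volume group name is taken from the question, everything else is stock LVM:

    # Scan all attached drives for LVM physical volumes and volume groups
    pvscan
    vgscan
    # If the BigDrive VG is found, activate it and list its logical volumes
    vgchange -ay BigDrive
    lvs BigDrive
    # If activation fails, see whether LVM kept an archived copy of the VG metadata
    # (these archives normally live under /etc/lvm/archive on the OS disk)
    vgcfgrestore --list BigDrive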
So, I am in the process of setting up a WSFC to enable use of Always On Basic for SQL Server 2019. I am using Windows Server 2019 and have enabled Failover Clustering on both server nodes which are on the same domain. I am not planning to use shared storage in the cluster itself, only a fileshare on another node (not part of this cluster, but on the same domain) as the witness.
When running the Cluster Validation wizard, I get a "Physical disk {...} does not have the inquiry data (SCSI page 83h VPD descriptor) that is required by failover clustering." failure message.
As the cluster will not rely on any shared storage, can I safely deselect the Storage and Storage Spaces Direct tests during validation and proceed with the setup?
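For reference, validation can also be run with whole test categories excluded; a minimal sketch using the FailoverClusters PowerShell module (node names are placeholders):

    # Run cluster validation but skip the storage test categories
    Test-Cluster -Node node1, node2 -Ignore "Storage", "Storage Spaces Direct"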
We're planning to migrate our software to run in Kubernetes with autoscaling. This is our current infrastructure:
PHP and Apache are running on a Google Compute Engine n1-standard-4 instance (4 vCPUs, 15 GB memory)
MySQL is running in Google Cloud SQL
Data files (CSV, PDF) and the code are stored on a single SSD persistent disk
I found many posts recommending storing the data files in Google Cloud Storage and using the API to fetch files from and upload them to the bucket. We have very limited time, so I decided to use NFS to share the data files across the pods. The problem is that NFS is slow: copying a file with pv shows around 100 MB/s, while iperf between the machines reports 1.96 Gbit/s. Do you know how to achieve the same result without implementing Cloud Storage, or how to increase the NFS speed?
Data files (CSV, PDF) and the code are stored on a single SSD persistent disk
There's nothing stopping you from volume mounting an SSD into the Pod so you can continue to use an SSD. I can only speak to AWS terminology, but some EC2 instances come with "local" SSD hardware, and thus you would only need to use a nodeSelector to ensure your Pods were scheduled onto machines that had said local storage available.
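For instance, a minimal sketch of that approach on Kubernetes (the node label is a hypothetical one you would apply yourself; /mnt/disks/ssd0 is a typical local-SSD mount point on GKE, so verify it on your nodes):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: php-apache
    spec:
      nodeSelector:
        disktype: local-ssd            # hypothetical node label marking local-SSD nodes
      containers:
      - name: web
        image: php:8-apache
        volumeMounts:
        - name: data
          mountPath: /var/www/data
      volumes:
      - name: data
        hostPath:
          path: /mnt/disks/ssd0        # assumed local-SSD mount point on the node
    EOF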
Where you're going to run into problems is if you are currently using just one php+apache instance, and thus just one SSD, but now you want to scale the application up so that every php+apache replica has access to the same SSD. That's a classic distributed-application architecture problem, and something Kubernetes itself can't fix for you.
If you're willing to expend the effort, you can also try any one of the other distributed filesystems (Ceph, GlusterFS, etc) and see if they perform better for your situation. Then again, "We have very limited time" I guess pretty much means that's off the table.
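If you do stay on NFS for now, the usual Kubernetes pattern is to expose it as a ReadWriteMany PersistentVolume that every replica mounts; a minimal sketch (the server address and export path are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-data
    spec:
      capacity:
        storage: 500Gi
      accessModes: ["ReadWriteMany"]
      nfs:
        server: 10.0.0.10              # assumed NFS server address
        path: /exports/data            # assumed export path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: ""
      volumeName: shared-data
      resources:
        requests:
          storage: 500Gi
    EOF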
I am trying to understand how horizontal scaling (virtualization) works in terms of disk storage.
Virtualization is a layer on top of the hardware compute nodes that manages the resources needed for each request.
So my question is: what happens when I deploy my WAR to the web server, for example? Do I get replicated storage across different nodes?
After doing some research I came across NAS vs. SAN, so I expect SAN replication is used for data stability. Is that true?
And where exactly is my storage disk when I am on a horizontally scaled platform like Google Compute Engine or AWS?
Hopefully a couple of examples will help. Let's take a general, crude example; I'll try to keep the information simple to understand. Say I have a business running on a LAMP stack: Apache+PHP runs on the WEB1 server, MySQL on the DB1 server, and customer data sits on WEB1.
SAN replication
First, your question about replication: that's mostly for disaster recovery. For data stability/reliability, SANs have appropriate RAID levels, service level agreements, and spare disks. For example, RAID5 tolerates the failure of 1 disk in a RAID set, RAID6 tolerates the failure of 2 disks, and so on. Hot-spare disks help repopulate a failed RAID disk quickly. Organizations also snapshot their disk volumes and replay them in a different data center so as to have a second copy of their data. This is done over and above regular backups and VM snapshots.
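A SAN handles this internally, but the same idea is visible in Linux software RAID; a sketch of a RAID6 set with a hot spare (device names are assumptions):

    # 4 active disks in RAID6 (tolerates 2 simultaneous failures) plus 1 hot spare
    mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf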
AWS disks
AWS offers 2 types of disks:
Ephemeral: instance-store disks attached directly to the EC2 instance's host
Elastic Block Store (EBS)
Ephemeral storage
Don't use this for anything critical. AWS offers EC2 instances with ephemeral storage (that means the VM has disks attached to the physical server) and recommends that users purchase a slice of disk in the form of EBS (Elastic Block Store). I'd choose not to run anything on ephemeral storage, because if the EC2 instance stops, the information on ephemeral storage is gone! However, if my partitions are on an EBS volume, an EC2 restart is seamless: all data stays alive on my EBS volume.
EBS
When I want a VM, I choose an EC2 instance type (CPU/memory). Then I buy disk in the form of an EBS volume of 100GB (or more if I want to do RAID/LVM, etc.) and attach it to my EC2 instance. Now I can install the OS on my EBS volume; partitions are all created on the EBS volume. When the EC2 instance reboots, my data stays as-is.
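As a concrete sketch with the AWS CLI (all IDs and the availability zone are placeholders):

    # Create a 100 GB volume in the instance's availability zone
    aws ec2 create-volume --size 100 --volume-type gp3 --availability-zone us-west-2a
    # Attach it to the instance as a new block device
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/xvdf
    # On the instance: create a filesystem and mount it
    mkfs -t ext4 /dev/xvdf
    mount /dev/xvdf /data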
Disk scaling
Let's say I began my business with an EC2 instance + 100GB of EBS volume. All's well until my customers begin to upload really large files. My disk is getting full and I need to expand a partition. With AWS, I can buy another 100GB slice of EBS volume and expand my partition to use this additional 100GB.
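One way to do that, assuming the data partition sits on LVM (the volume group and logical volume names below are hypothetical), is to fold the newly attached volume into the existing volume group:

    # The second 100 GB EBS volume shows up as /dev/xvdg (placeholder device name)
    pvcreate /dev/xvdg
    vgextend datavg /dev/xvdg
    # Grow the logical volume and its filesystem in one step
    lvextend -r -l +100%FREE /dev/datavg/data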
Server scaling
Let's say my business is doing really well and my EC2 instance isn't keeping up with traffic. I need more horsepower, so I choose to add another server, WEB2, running Apache+PHP with its own EBS volume. But what about customer data? Will I store some data on WEB1 and some on WEB2? That would be hard to reconcile.
Keeping the code the same on WEB1 and WEB2
Code from Git (or the version control system of your choice) will be deployed to both WEB1 and WEB2 simultaneously. That keeps both servers' code up to date. Configuration management of my servers can happen through Ansible/Puppet/Chef.
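A deliberately simple sketch of such a deployment step (host names and the path are assumptions):

    # Push the same revision to both web servers
    for host in web1 web2; do
        ssh "$host" 'cd /var/www/app && git pull --ff-only origin main'
    done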
Streamlining data storage
I have some options; let's discuss two that will allow WEB1 and WEB2 to share data/disk space. Important note: an EBS volume cannot be shared by multiple EC2 instances. An EBS volume can be attached to only one EC2 instance.
First option: stand up another server, DATA1, attach a large EBS volume to it, and move the customer files there. WEB1 and WEB2 will send customer data to DATA1 (rsync/FTP/SCP) and will read/write from the DB1 database as well. I could even safeguard my data by taking snapshots of the EBS volume and replaying them on another server, DATA2, in a different AWS region or availability zone in case DATA1 is unavailable.
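For example, the rsync variant of that hand-off could be as simple as (hosts and paths are assumptions):

    # Ship newly uploaded customer files from a web node to DATA1
    rsync -az /var/www/uploads/ data1:/srv/customer-data/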
Second option: AWS has S3 storage. It's reliable and cheaper than EBS. Instead of standing up DATA1 and DATA2, it is much easier and cheaper to create a bucket on S3 and store customer data there. WEB1 and WEB2 can read from and write to S3 seamlessly.
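A sketch with the AWS CLI (the bucket name is a placeholder):

    # Copy one uploaded file into the bucket
    aws s3 cp /var/www/uploads/report.pdf s3://example-customer-data/uploads/
    # Or keep a whole directory in sync with the bucket
    aws s3 sync /var/www/uploads/ s3://example-customer-data/uploads/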
Where are my disks on AWS?
I don't know, and I don't need to know. AWS must have racks and racks of disks, and I am getting a slice of disk space from somewhere in there. Their disks are likely to have redundancy, but EBS failures are possible. For our own sanity, it is good to RAID and snapshot EBS volumes over and above taking regular backups.
Similarly, AWS must have racks and racks of servers, and I am getting a virtual machine, in the form of the EC2 instance of my choice, from somewhere in those racks. When I shut down and restart an EC2 instance, I may get a VM of the same specification from a different rack. However, my EBS volume will remain the same unless I terminate it and buy a new one.
One thing to recognize is that if I bought an EC2 instance in Oregon, my EBS volume will be in the same Oregon region, and in the same availability zone.
Note - this is a very generic answer.
Resources:
node1: Physical cluster node 1.
node2: Physical cluster node 2.
cluster1: Cluster containing node1 and node2 used to host virtual machines.
san1: Dell MD3200 highly available storage device (SAN).
lun1: A lun dedicated to file server storage located on san1.
driveZ: A 100GB hard drive, currently a resource on node1, with the drive letter Z:\. This drive is lun1, which resides on san1.
virtual1: A virtual server used as a file server only.
Synopsis / Goals:
I have two nodes/servers on my network. These two nodes (node1 and node2) are part of a cluster (cluster1) that is used for hosting all my virtual machines. There is a SAN involved (san1) that has many LUNs created on it, one of which (lun1) will be used to store all data dedicated to a virtual machine (virtual1). Ultimately, lun1 is created, given the name "storage", and used strictly by the virtual machine "virtual1" to store and access data.
What I have currently in place:
- I have created the SAN (san1), created a disk group with the virtual disk (storage), and assigned a LUN (lun1) to it.
- I have set up two physical servers that are connected to the SAN via SAS cables (multipath).
- I have set up the clustering feature on those two servers and have the Hyper-V role installed on each as well.
- I have created a cluster (cluster1) with server members node1 and node2.
- I have created a virtual server (virtual1) and made it highly available on the cluster (cluster1).
Question:
Is it possible to have lun1 (driveZ) brought up and accessed by virtual1?
What I have tried:
I had lun1 (a.k.a. driveZ) showing up in node1's Disk Management, and I added it as a resource to the cluster storage area. I then tried two different things. (1) I tried to add it as a Cluster Shared Volume; shortly after, I realized that only the cluster members could see/access it and not the virtual machines, even though the VMs were created as a service in the cluster. (2) I tried to move the resource (driveZ) to the virtual machine (virtual1) within cluster1. After doing that I went into the virtual machine settings, added the drive as a SCSI drive (using lun1, 100GB), and refreshed Disk Management on the virtual machine (virtual1). The drive showed up and allowed me to assign a drive letter, then asked me if I wanted to format it... What about all my data that's on it?? Was that a bust? Anyway, that's where I'm at right now... Ideas?
Thoughts:
Just so I'm clear, all of this is for testing at the moment... actual sizes of resources in production differ greatly. I was thinking about adding driveZ (lun1) as a Cluster Shared Volume and then adding a new Hyper-V virtual SCSI drive to my VM (say 50GB, so later I can try to expand it to 100GB, the full size of the physical/SAN drive), storing the fixed VHD (virtual hard disk) inside the Cluster Shared Volume "driveZ". I'm testing it out now... but I have concerns: 1) What happens when I try to create a really large VHD (around 7TB)? 2) Can a fixed-disk VHD be expanded in any way? I plan on making my new SAN virtual disk larger than 7TB in the future... currently it's going to stay at 7TB, but it will expand at some point...
Figured it out!
The correct way to do it is...
Set up a SAN, create a disk group with two virtual disks, and assign LUNs to them.
Set up your 2 physical servers with Windows Server 2008 R2 and connect them both to the SAN.
Add the Failover Clustering feature and the Hyper-V role to both servers.
Bring the two drives (from the SAN) online and initialize them both. Create a simple volume on each drive if you wish; you can even format them if you want.
Create a cluster and add one of the virtual disks from the SAN as a Cluster Shared Volume. This will be used to store the virtual machines.
Create a virtual machine and store it on the CSV (e.g. C:\ClusterStorage\Volume1\), then power it up.
Take the second drive offline; it should just be a drive on the host server, and it has to be offline! After you right-click and choose Offline, right-click again and go to Properties. On that page, look for the LUN number and write it down.
Open the VM settings, go down to the SCSI controller, and add a drive. Choose the physical drive option and select the correct LUN number. Hit OK and it should show up in the VM's storage manager. (A scripted version of these last two steps is sketched below.)
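On newer hosts the same two steps can be scripted with the Storage and Hyper-V PowerShell modules (this thread is on Server 2008 R2, where it's GUI-only; the disk number is a placeholder):

    # Take the physical disk offline on the host so the VM can own it
    Set-Disk -Number 2 -IsOffline $true
    # Attach it to the VM as a pass-through SCSI disk
    Add-VMHardDiskDrive -VMName virtual1 -ControllerType SCSI -DiskNumber 2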
As a helpful resource, check these pages out...
Configuring Disks and Storage
Hyper-V Clustering Video 1
Hyper-V Clustering Video 2
Hyper-V Clustering Video 3
Hyper-V Clustering Video 4
Hyper-V Clustering Video 5