How can we determine the available capacity in an IBM Cloud data center or region? E.g., how many free bare metal servers, VSIs, subnets, VLANs, etc. are available in the Dallas 13 data center?
According to IBM Cloud support, 'there is not really a public facing document that shows the DC availability. VLAN and subnets are on a case by case review, the default is no more than 5 VLANs per router and IP limitations depend on utilization.'
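Since there is no public document for free capacity, the best you can do programmatically is inventory your own account's allocation and see how close each router is to that per-router VLAN default. A rough sketch with the SoftLayer Python client (assumes SL_USERNAME and SL_API_KEY are set in the environment):

```python
import SoftLayer

client = SoftLayer.create_client_from_env()

# Fetch the account's VLANs along with the router each one lives on.
vlans = client.call(
    'Account', 'getNetworkVlans',
    mask='mask[id,vlanNumber,primaryRouter[hostname]]'
)

# Group by router to compare against the "no more than 5 VLANs per
# router" default that support describes above.
by_router = {}
for vlan in vlans:
    router = vlan.get('primaryRouter', {}).get('hostname', 'unknown')
    by_router.setdefault(router, []).append(vlan['vlanNumber'])

for router, numbers in sorted(by_router.items()):
    print(f"{router}: {len(numbers)} VLAN(s) -> {numbers}")
```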
Related
I am really stuck deciding whether I need a VPC on AWS to deploy my MongoDB instance (and a GraphQL server) into. I'm working on a project that's going to have a GraphQL server serving a mobile app, along with a MongoDB instance to store the data. I've read everywhere that you must use a VPC. Why, though? Can't I just use the security groups that AWS provides? That would allow me to lock down my MongoDB instance, right?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
The reason I don't want to use a VPC is purely the extra cost! The project I'm working on has a small budget, and paying the extra money (at minimum $60 a month) for a VPC on AWS just isn't viable. Maybe if I were building an application that was going to be massive, with tens of thousands of users, and that required scale and added security for peace of mind, then I'd consider using a VPC. But since it's not going to be that, and the budget is small, is it okay to use security groups to lock down my MongoDB EC2 instance?
I've looked into other hosting solutions, in particular DigitalOcean, since they provide a free VPC service; however, DigitalOcean does not have data centers in my region (among other things), plus I've used AWS a fair bit in the past and would love to keep using it.
I would love any suggestions about what I could/should do.
Security groups are a feature of VPCs and are tightly coupled with how EC2 instances are hosted. You need a VPC to define your networking rules, including whether the instances that host your MongoDB and GraphQL servers are public or private and what their security group rules are.
I'm not sure what costs you are referring to, as VPCs are free and all accounts come with one already created for you (the default VPC). You only pay for the ingress/egress traffic you use, so if you aren't doing anything massive, the cost ($0.02/GB) will be tiny compared to the cost of the instances themselves.
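For example, a quick boto3 check (the region is just an example) shows the default VPC already sitting in your account at no charge:

```python
import boto3

# List the VPCs in one region and point out the default one; every new
# AWS account comes with a default VPC in each region, free of charge.
ec2 = boto3.client('ec2', region_name='us-east-1')  # example region
for vpc in ec2.describe_vpcs()['Vpcs']:
    if vpc['IsDefault']:
        print('Default VPC:', vpc['VpcId'], vpc['CidrBlock'])
```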
To address your comment: a NAT gateway would only be needed if you want your instances on private subnets but still want those subnets to have internet access. It is not required if you are comfortable putting your instances on public subnets and locking them down with security group and NACL rules (this is not the best security practice, but it is a compromise you can make to save on costs).
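As a sketch of that lockdown (the group IDs below are placeholders), the following rule only lets instances in the GraphQL server's security group reach MongoDB's default port, so the database is never open to the internet even on a public subnet:

```python
import boto3

ec2 = boto3.client('ec2')

MONGO_SG = 'sg-0123456789abcdef0'  # placeholder: attached to the MongoDB instance
APP_SG = 'sg-0fedcba9876543210'    # placeholder: attached to the GraphQL server

# Allow TCP 27017 (MongoDB's default port) only from members of APP_SG;
# with no CIDR-based rule, nothing else can reach the database port.
ec2.authorize_security_group_ingress(
    GroupId=MONGO_SG,
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 27017,
        'ToPort': 27017,
        'UserIdGroupPairs': [{'GroupId': APP_SG}],
    }],
)
```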
I have created a DocumentDB cluster in AWS with two instances running in it, but I need to know the exact storage that will be used for storing the data, and also how AWS charges for a cluster.
When you provision an Amazon DocumentDB cluster, you don’t need to specify how much storage or I/Os you need for your cluster. Amazon DocumentDB uses a unique storage system that automatically scales from 10 GB up to 64 TB of data per cluster in 10 GB increments.
Storage is at the cluster level, which means all your instances share the storage. You can view how much storage you are using by monitoring the VolumeBytesUsed metric in the Monitoring tab of the Amazon DocumentDB console. Storage in DocumentDB is priced as low as $0.02 per GB-month (prices may vary across AWS regions). Details here - https://aws.amazon.com/documentdb/pricing/. To see how much you are paying for storage, you can also go to the AWS Billing console and view the details of your DocumentDB bill.
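If you'd rather script it, here is a minimal sketch that reads the same VolumeBytesUsed metric through CloudWatch (the cluster identifier is a placeholder):

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

# DocumentDB publishes storage usage under the AWS/DocDB namespace,
# keyed by the cluster identifier (placeholder below).
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/DocDB',
    MetricName='VolumeBytesUsed',
    Dimensions=[{'Name': 'DBClusterIdentifier', 'Value': 'my-docdb-cluster'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(f"{point['Timestamp']}: {point['Average'] / 1024**3:.2f} GiB")
```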
What is the cheapest possible VM SKU you can run a Service Fabric cluster on in Azure?
I just tried to create a 1-node cluster using DS1_V2 and got a warning, "Not enough disk space". Then I tried DS2_V2 Promo and the warning went away.
It costs 142.85 USD per VM, and you need 5 of them, so that will be a total cost of 714.25 USD a month plus usage.
Is the minimum cost for a Service Fabric cluster really around 1,000 USD a month?
What are the minimum requirements for running it on premises?
Is it possible to deploy 1 virtual machine in Azure, install Service Fabric on it, and deploy to that? (I know that won't scale, be fault tolerant, etc.)
For a production environment, you are correct: you will need at least 5x D-class machines for a Service Fabric cluster.
For a QA environment you can set up a 3 node cluster with a Bronze durability level which should bring down the costs a bit.
For a development environment, you could use the Service Fabric Local Cluster Manager, which allows you to emulate a 1-node or a 5-node environment on your local machine, and recently there is a new option in Azure to create and run a 1-node cluster - see below.
As for capacity planning, you can find some good guidelines in the official docs.
For production workloads:
The recommended VM SKU is Standard D3 or Standard D3_V2 or equivalent with a minimum of 14 GB of local SSD.
The minimum supported use VM SKU is Standard D1 or Standard D1_V2 or equivalent with a minimum of 14 GB of local SSD.
Partial core VM SKUs like Standard A0 are not supported for production workloads.
Standard A1 SKU is specifically not supported for production workloads for performance reasons.
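To make the cost comparison concrete, here is a quick back-of-the-envelope calculation using the DS2_V2 Promo price quoted in the question (actual prices vary by region and over time):

```python
# Rough monthly cost per cluster size, using the per-VM price from the
# question (142.85 USD/month for DS2_V2 Promo) as an assumed constant.
VM_PRICE_PER_MONTH = 142.85

for name, nodes in [('Production (5 nodes)', 5),
                    ('QA, Bronze durability (3 nodes)', 3),
                    ('Dev (1 node)', 1)]:
    total = nodes * VM_PRICE_PER_MONTH
    print(f"{name}: {nodes} x {VM_PRICE_PER_MONTH:.2f} = {total:.2f} USD/month")
```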
When trying to attach a second Portable Storage volume to the VSI, I receive:
SoftLayer_Exception_Virtual_Guest_MaxPortableVolumes - Unable to attach portable volume. The destination guest has reached the maximum number of allowed disks.
Is this a SoftLayer limitation that allows only a single Portable Volume to be connected to the instance?
Yes, it is. It depends on the server that you ordered: some virtual servers only allow 2 disks, others allow 5 or more. You can see the maximum capacity of your server in the control portal by clicking "modify configuration"; in the disks section you will see the maximum number of disks allowed for that server. You can also see this when you order a new server.
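If you want to check from a script rather than the portal, here is a minimal sketch with the SoftLayer Python client (the guest ID is a placeholder; note the list also counts boot, swap, and CD devices, not just portable disks):

```python
import SoftLayer

client = SoftLayer.create_client_from_env()
guest_id = 12345678  # placeholder: your VSI's id

# List every block device currently attached to the virtual guest.
devices = client.call('Virtual_Guest', 'getBlockDevices', id=guest_id)
print(f"{len(devices)} block device(s) attached (includes boot/swap/CD)")
for dev in devices:
    print(dev.get('device'), dev.get('mountType'))
```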
I'm working on setting up some monitoring on a Google Cloud SQL node and am not seeing how to do it. I was able to install the monitoring agent on my Google Compute Engine instances to monitor CPU, network, etc., but I have not been able to figure out how to do so on the Cloud SQL instance. I have access to these types of monitoring:
Storage Usage (GB)
Number of Read/Write operations
Egress Bytes
Active Connections
MySQL Queries
MySQL Questions
InnoDB Pages Read/Written (pages/sec)
InnoDB Data fsyncs (operations/sec)
InnoDB Log fsyncs (operations/sec)
I'm sure these are great options, but at this point all I want to pay attention to is whether my node is performing well from a CPU/RAM standpoint, as those seem to be the first and foremost measures of performance.
If I'm missing something, or misunderstanding what I'm trying to do, any advice is appreciated.
Thanks!
Google Stackdriver is for logging and monitoring Google and AWS cloud infrastructure. It can monitor nearly everything on GCP, and you can create visualizations to monitor your Cloud SQL instance in one dashboard. You just have to:
1. Log in to Stackdriver and go to any existing dashboard; if you don't have one, create one.
2. Add a chart and select Cloud SQL as the resource name.
3. Select CPU Utilization as the metric and save. You can also monitor memory, disk I/O, the delta count of queries, server uptime, and much more.
If you want to monitor any other resource (Compute Engine, App Engine, Kubernetes Engine, a storage bucket, Bigtable, or Pub/Sub), you just have to select the appropriate resource name from the list. Hope you got your answer.
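If you'd rather pull the same numbers from a script instead of a dashboard, here is a minimal sketch against the Stackdriver (now Cloud Monitoring) API; the project ID is a placeholder:

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = 'projects/my-project'  # placeholder project id

# Read Cloud SQL CPU utilization for the last hour.
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {'start_time': {'seconds': now - 3600}, 'end_time': {'seconds': now}}
)
series = client.list_time_series(
    request={
        'name': project_name,
        'filter': 'metric.type = "cloudsql.googleapis.com/database/cpu/utilization"',
        'interval': interval,
        'view': monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    for point in ts.points:
        print(ts.resource.labels['database_id'], point.value.double_value)
```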
You can view all of them directly from the "Overview" tab of the Cloud SQL console.
I have added this as a feature request, issue 110:
https://code.google.com/p/googlecloudsql/issues/detail?id=110