I've just learned that the maximum number of vCPUs one can use per region with gcloud is 96. At the same time, I'm pretty sure you can build a managed instance group composed of 2,200 machines, which would add up to far more than 96 vCPUs. Suppose I want to use more than 96 vCPUs: what would be the best way to do that, if it's possible? Should I just create another 96 vCPUs in another region, or perhaps use managed instance groups?
Indeed, the maximum for N-type machines is 96 vCPUs, and you would need to create instances in multiple regions if you needed more.
However, the Machine Types documentation describes the M1 and M2 machine types, which support more than that; M2 machines, for example, support 208 or 416 vCPUs. I would recommend you take a look at that documentation to confirm whether this would help you. :)
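As a quick way to check what is available to you, here is a minimal sketch (assuming the google-cloud-compute Python client; the project ID and zone are placeholders) that lists every machine type in a zone with more than 96 vCPUs:

from google.cloud import compute_v1  # pip install google-cloud-compute

client = compute_v1.MachineTypesClient()

# "my-project" and the zone are placeholders; substitute your own values.
for machine_type in client.list(project="my-project", zone="us-central1-a"):
    if machine_type.guest_cpus > 96:
        # M1/M2 shapes such as m2-ultramem-208 (208 vCPUs) show up here.
        print(machine_type.name, machine_type.guest_cpus, "vCPUs")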
Let me know if the information helped you!
This is my first question here.
I'm working on a project to automate a Hyper-V Replica solution.
My problem is with the PowerShell cmdlet:
set-vmreplication -RecoveryHistory
The -RecoveryHistory parameter has a maximum value of 24, and I want to bypass that; I need 72.
I tried to look at the source code, but it's compiled binary, so I can't do anything that way.
I posted on the Microsoft forum and got nothing.
The error message (in French) just says that 72 > 24, and that's all.
So if someone has a solution, or even the beginning of one, that would be very helpful.
Thank you all, and have a nice day.
@gvee
Hi, and thank you for your help, but...
I tested that in my lab and it's not true. The PowerShell command:
Set-VMReplication VM01 -RecoveryHistory 24 -VSSSnapshotFrequencyHour 4
keeps 24 snapshots, and every 4 hours it takes an application-consistent snapshot; in total I still have 24 snapshots (18 standard and 6 application-consistent).
So in the end it doesn't solve my initial problem: storing more than 24 hours of snapshots with Hyper-V Replica.
I realize this is an older post, but I would like to accomplish the same thing: three days or more of checkpoints. Why? Because ransomware tends to strike over weekends, and my replica would be useless by Monday. What I have done is enable Previous Versions on my replica target, so that I can restore a previous version of a replica that might be a month or four days old. At least the data will be there, and in the meantime I can figure out how to restore it. Aside from this, I plan to use extended replication to another server as well as Veeam replication. Hopefully I will come up with a cleaner solution once I see how these technologies work in practice. I'm wondering what others are doing about this? Replication for 24 hours is not enough.
It is well documented that 24 is the maximum value allowed for -RecoveryHistory.
-RecoveryHistory
Specifies whether to store additional recovery points on the Replica virtual machine. Storing more than the most recent recovery point of the primary virtual machine allows you to recover to an earlier point in time. However, storing additional recovery points requires more storage and processing resources. You can configure as many as 24 recovery points to be stored.
Based on your comment:
my need it's more than 24h
It sounds like you're assuming your interval can only be 24 × 1 hour, which is not the case.
The 24 recovery points are a fixed maximum, but you can change how frequently the snapshots are taken using the -VSSSnapshotFrequencyHour parameter.
Specifies the frequency, in hours, at which Volume Shadow Copy Service (VSS) performs a snapshot backup of the virtual machines
So for example you could do:
Set-VMReplication VM01 -RecoveryHistory 24 -VSSSnapshotFrequencyHour 4
Effectively that would give you 4 days of history!
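If it helps, here is a tiny sketch of that arithmetic (plain Python, nothing Hyper-V specific): given the hours of history you want and the hard cap of 24 recovery points, it computes the -VSSSnapshotFrequencyHour value to pass alongside -RecoveryHistory 24. (The comment above reports different behaviour in practice.)

import math

# Sketch of the arithmetic in this answer: 24 recovery points spaced
# -VSSSnapshotFrequencyHour hours apart cover 24 * frequency hours.
MAX_RECOVERY_POINTS = 24

def vss_frequency_for(target_hours):
    """Smallest -VSSSnapshotFrequencyHour that covers target_hours of history."""
    return math.ceil(target_hours / MAX_RECOVERY_POINTS)

print(vss_frequency_for(72))  # 3 -> -RecoveryHistory 24 -VSSSnapshotFrequencyHour 3
print(vss_frequency_for(96))  # 4 -> the four-day example above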
I need to create a scaled-up iSCSI setup for some testing (around 1024 ports), but all I have is limited hardware. My requirement is to create a large number of iSCSI ports, each with a unique IQN, that can be discovered by a storage controller as separate physical entities.
In FC I would be able to do this with NPIV, where I could virtualize a single port into multiple WWNs, but I can't find an equivalent solution for iSCSI.
Any suggestions?
I don't have any simulators like SANBlaze handy, so I am trying to explore options that can be done at the operating-system level.
You can use software iSCSI to set up as many targets as you like (and have backing store for).
For example, you can use openSUSE (disclaimer: I work for SUSE) running something like Leap 42.2 and use targetcli to set up the targets. The man page for targetcli(1) is pretty clear and includes examples. You can, for instance, set up a separate 1 GB file for each target -- that is, if you have 1 TB of storage. Scale the file size down so that 1024 of them fit on your disc.
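As a rough sketch of how that could be scripted (assumptions: targetcli is installed, and the IQN prefix, backing-file paths and 1G size below are placeholders you would adjust, raising N towards 1024 as storage allows):

import subprocess

# Sketch: create N file-backed iSCSI targets, each with its own unique IQN.
N = 4  # raise towards 1024 once the backing files fit on your disc
IQN_PREFIX = "iqn.2003-01.org.linux-iscsi.test"  # placeholder prefix

for i in range(N):
    backstore = f"file{i:04d}"
    backing = f"/var/lib/iscsi_disks/disk{i:04d}.img"  # placeholder path
    iqn = f"{IQN_PREFIX}:target{i:04d}"

    # File-backed backstore, then a target with a unique IQN, then a LUN on it.
    subprocess.run(["targetcli", "/backstores/fileio", "create", backstore, backing, "1G"], check=True)
    subprocess.run(["targetcli", "/iscsi", "create", iqn], check=True)
    subprocess.run(["targetcli", f"/iscsi/{iqn}/tpg1/luns", "create",
                    f"/backstores/fileio/{backstore}"], check=True)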
I'm trying to compose some rather large (~50-10GB) objects on Cloud Storage, and the compose limit seems arbitrarily low - only 32 components. So I'm currently using 350 MB chunks -- why is the limit so low?
I read somewhere that an iterative compose doesn't work - is that correct? I.e., if I wanted to compose 64 objects into two, and then those two into one - does that not work?
A single request can only compose 32 components, but you can compose already-composed objects. The overall limit is 1024 components:
https://developers.google.com/storage/docs/composite-objects#_Compose
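For example, here is a minimal sketch using the google-cloud-storage Python client (the bucket and object names are placeholders) that does exactly the 64 -> 2 -> 1 composition you describe:

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# 64 source chunks, uploaded beforehand under placeholder names.
chunks = [bucket.blob(f"chunks/part-{i:03d}") for i in range(64)]

# Pass 1: a single compose call accepts at most 32 components,
# so build two intermediate objects of 32 chunks each.
intermediates = []
for n, group in enumerate((chunks[:32], chunks[32:])):
    intermediate = bucket.blob(f"intermediate-{n}")
    intermediate.compose(group)
    intermediates.append(intermediate)

# Pass 2: compose the two intermediates into the final object.
bucket.blob("final-object").compose(intermediates)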
It's a simple question - what is the per-key overhead in Riak's Bitcask backend? - with apparently a multitude of answers.
Findings range anywhere from:
a. 22 bytes as per Basho's documentation:
http://docs.basho.com/riak/latest/references/appendices/Bitcask-Capacity-Planning/
b. ~450 bytes over here:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-August/005178.html
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-May/004292.html
c. Anecdotal reports of overheads anywhere in the range of 45 to 200 bytes.
Why isn't there a straight answer to this? I understand it's an intricate problem - one of the mailing list entries above makes that clear! - but is even coming up with a consistent ballpark so difficult? Why isn't Basho's documentation clearer about this?
I have another set of problems related to how to structure my logic given the key overhead (storing lots of small values versus "collecting" them in larger structures), but I guess that's another question.
The static overhead is stated on our capacity planner as 22 bytes because that's the size of the C struct. As noted on that page, the capacity planner is simply providing a rough estimate for sizing.
The old post on the mailing list by Nico that you link to is probably the best complete accounting of Bitcask internals you will find, and it is accurate. Figuring in the 8 bytes for a pointer to the entry and the 13 bytes of Erlang overhead on the bucket/key pair, you arrive at 43 bytes on a 64-bit system.
As for there not being a straight answer ... actually asking us (via email, the mailing list, IRC, carrier pigeon, etc) will always produce an actual answer.
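If it is useful, here is a back-of-envelope sketch using the numbers above (43 bytes of fixed per-key cost on a 64-bit system, plus the bucket/key bytes themselves); the key count and average key size are made-up example values:

# 22-byte C struct + 8-byte entry pointer + 13 bytes of Erlang overhead
# on the bucket/key pair = 43 bytes of fixed cost per key (64-bit system).
FIXED_PER_KEY = 22 + 8 + 13

def bitcask_keydir_ram(num_keys, avg_bucket_plus_key_bytes):
    """Rough keydir RAM estimate, in bytes, for one node's share of keys."""
    return num_keys * (FIXED_PER_KEY + avg_bucket_plus_key_bytes)

# Example: 100 million keys averaging ~30 bytes of bucket + key data each.
print(bitcask_keydir_ram(100_000_000, 30) / 1e9, "GB")  # ~7.3 GB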
Bitcask requires all keys to be held in memory. As far as I can see, the overhead referenced in a) is the one to use when estimating the total amount of RAM Bitcask will require across the cluster due to this requirement.
When writing data to disk, Riak stores the actual value together with various metadata, e.g. the vector clock. The post mentioning 450 bytes listed in b) appears to be an estimate of the storage overhead on disk and would therefore probably also apply to other backends.
Nico's post seems to contain a good and accurate explanation.
What is the largest known Neo4j cluster (in db size, graph stats, or # of machines)?
The limits on the number of nodes and relationships were recently (with the 1.3 release) expanded to 32 billion each, plus another 64 billion for properties. If you look at the mailing list, there have been recent inquiries about quite large data stores.
As an approach to an answer, you might want to check out this interview with Emil Eifrem (Neo's founder): http://www.infoq.com/interviews/eifrem-graphdbs. In particular, check out the part on "From a data complexity perspective, how does Neo4j help remove some of the implementation complexity in storing your data?": "hundreds of millions is probably a large one. And billions, that's definitely a large one."
I was in conversation with Neo Technology recently, in which they shared that the largest installations they know of, machine-wise, have no more than 3-5 machines.
They also said that the size of graph Neo4j can handle efficiently depends on the number of nodes and edges in the graph. If they can all be kept in memory, most queries will be fast. You can find the in-memory sizes for nodes and relationships at http://wiki.neo4j.org/content/Configuration_Settings (it's 9 bytes per node and 33 bytes per relationship).
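A back-of-envelope sketch based on those per-record figures (9 bytes per node, 33 bytes per relationship; the counts below are made-up example values, and real memory needs will be higher once properties and caches are included):

# Per-record sizes quoted above from the Neo4j configuration settings page.
NODE_BYTES = 9
RELATIONSHIP_BYTES = 33

def graph_store_bytes(nodes, relationships):
    """Rough size of the node and relationship stores, ignoring properties."""
    return nodes * NODE_BYTES + relationships * RELATIONSHIP_BYTES

# Example: 100 million nodes and 500 million relationships.
print(graph_store_bytes(100_000_000, 500_000_000) / 1e9, "GB")  # ~17.4 GB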