SoftLayer: Create a flavor VM instance using Terraform - ibm-cloud

I found that there are some flavor VM types, e.g. c1.1x1 or b1.2x4, in the Bluemix portal.
But ibm_compute_vm_instance seems to only be able to set the number of cores.
Can I create an instance of type c1 or m1?
Or, which CPU type is used by default when deploying?

They have added the ability to create flavor-based instances using Terraform (see the provider reference).
Check the new field flavor_key_name.
But you still need to set local_disk: it must be true when using a bl1 or bl2 flavor, and false otherwise.
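A minimal sketch of such a resource, reusing the arguments from the example later in this post and the C1_2X2X25 key name from the API output shown below (the resource and host names here are just placeholders):
resource "ibm_compute_vm_instance" "flavor_sample" {
  hostname          = "flavor-sample"
  domain            = "bar.example.com"
  os_reference_code = "DEBIAN_7_64"
  datacenter        = "wdc01"
  hourly_billing    = true

  # Select the flavor by key name instead of specifying cores/memory.
  flavor_key_name = "C1_2X2X25"

  # C1 flavors are SAN-backed, so local_disk stays false here;
  # it must be true only for the bl1/bl2 (local storage) flavors.
  local_disk = false
}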

The IBM Terraform provider does not have an attribute for specifying a flavor when creating a VM, the way you can in the Bluemix portal or with the presetId attribute used in other languages (the flavor_key_name field mentioned above was added later).
This issue has already been reported, you can see it in this link: https://github.com/IBM-Cloud/terraform-provider-ibm/issues/151
To create a new VM with Terraform you have to specify the CPU, RAM and first disk separately.
For example, the flavor "C1.2x2x25" means 2 x 2.0 GHz cores, 2 GB RAM, and a 25 GB (SAN) first disk.
There is no default CPU type when deploying; you have to choose one.
To find these values you can use the following REST API call:
Method: GET
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/getCreateObjectOptions
You will find a result like the following:
{
  "flavor": {
    "keyName": "C1_2X2X25",
    "name": "C1.2x2x25",
    "configuration": [
      {
        "category": {
          "name": "Computing Instance"
        },
        "price": {
          "hourlyRecurringFee": ".045",
          "item": {
            "description": "2 x 2.0 GHz Cores"
          }
        }
      },
      {
        "category": {
          "name": "First Disk"
        },
        "price": {
          "hourlyRecurringFee": "0",
          "item": {
            "description": "25 GB (SAN)"
          }
        }
      },
      {
        "category": {
          "name": "RAM"
        },
        "price": {
          "hourlyRecurringFee": ".03",
          "item": {
            "description": "2 GB"
          }
        }
      }
    ],
    "totalMinimumHourlyFee": "0.075",
    "totalMinimumRecurringFee": "49.77"
  },
  "template": {
    "id": null,
    "supplementalCreateObjectOptions": {
      "flavorKeyName": "C1_2X2X25"
    }
  }
},
This is an example of how you could send the Terraform request:
resource "ibm_compute_vm_instance" "twc_terraform_sample" {
hostname = "twc-terraform-sample-name"
domain = "bar.example.com"
os_reference_code = "DEBIAN_7_64"
datacenter = "wdc01"
network_speed = 10
hourly_billing = true
private_network_only = false
cores = 2
memory = 2048
disks = [25]
dedicated_acct_host_only = true
local_disk = false
}
The RAM (memory) value must be specified in MB (e.g. 2 GB is 2048 MB).

Related

Error connecting to environment 1 Org Local Fabric: Error querying channels: 14 UNAVAILABLE: failed to connect to all addresses

I am unable to run my IBM Evote blockchain application in Hyperledger Fabric. I am using IBM Evote in VS Code (v1.39) on Ubuntu 16. When I start my local Fabric (1 Org Local Fabric), I am facing the above error.
Following is my local_fabric_connection.json file:
{
  "name": "local_fabric",
  "version": "1.0.0",
  "client": {
    "organization": "Org1",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": [
        "peer0.org1.example.com"
      ],
      "certificateAuthorities": [
        "ca.org1.example.com"
      ]
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpc://localhost:17051"
    }
  },
  "certificateAuthorities": {
    "ca.org1.example.com": {
      "url": "http://localhost:17054",
      "caName": "ca.org1.example.com"
    }
  }
}
and the following is the screenshot.
Based on your second image it doesn't look like your 1 Org Local Fabric started properly in the first place (you have no gateways and for some reason your wallets aren't grouped together).
If you tear down your 1 Org Local Fabric and then start it again, hopefully it'll work.

What thresholds should be set in Service Fabric Placement / Load balancing config for Cluster with large number of guest executable applications?

I am having trouble with Service Fabric trying to place too many services onto a single node too fast.
To give an example of cluster size: there are 2-4 worker node types, 3-6 worker nodes per node type, each node type may run 200 guest executable applications, and each application has at least 2 replicas. The nodes are more than capable of running the services once they are up; it is just at startup time that CPU is too high.
The problem seems to be the thresholds or defaults for placement and load balancing rules set in the cluster config. As examples of what I have tried: I have turned on InBuildThrottlingEnabled and set InBuildThrottlingGlobalMaxValue to 100, and I have set the Global Movement Throttle settings to various percentages of the total application count.
At this point there are two distinct scenarios I am trying to solve for. In both cases, the nodes go to 100% for an amount of time such that Service Fabric declares the node as down.
1st: Starting an entire cluster from all nodes being off without overwhelming nodes.
2nd: A single node being overwhelmed by too many services starting after a host comes back online.
Here are my current parameters on the cluster:
"Name": "PlacementAndLoadBalancing",
"Parameters": [
{
"Name": "UseMoveCostReports",
"Value": "true"
},
{
"Name": "PLBRefreshGap",
"Value": "1"
},
{
"Name": "MinPlacementInterval",
"Value": "30.0"
},
{
"Name": "MinLoadBalancingInterval",
"Value": "30.0"
},
{
"Name": "MinConstraintCheckInterval",
"Value": "30.0"
},
{
"Name": "GlobalMovementThrottleThresholdForPlacement",
"Value": "25"
},
{
"Name": "GlobalMovementThrottleThresholdForBalancing",
"Value": "25"
},
{
"Name": "GlobalMovementThrottleThreshold",
"Value": "25"
},
{
"Name": "GlobalMovementThrottleCountingInterval",
"Value": "450"
},
{
"Name": "InBuildThrottlingEnabled",
"Value": "false"
},
{
"Name": "InBuildThrottlingGlobalMaxValue",
"Value": "100"
}
]
},
Based on the discussion in the answer below, I wanted to leave a graph image: if a node goes down, the act of shuffling services onto the remaining nodes can cause a second node to go down, as noted here. The green node goes down, then the purple one goes down due to too many resources being shuffled onto it.
From SF's perspective, 1 & 2 are the same problem. Also, as a note, SF doesn't evict a node just because CPU consumption is high. So "the nodes go to 100% for an amount of time such that Service Fabric declares the node as down" needs some more explanation. The machines might be failing for other reasons, or I guess they could be so loaded that the kernel-level failure detectors can't ping other machines, but that isn't very common.
For config changes: I would remove all of these to go with the defaults:
{
  "Name": "PLBRefreshGap",
  "Value": "1"
},
{
  "Name": "MinPlacementInterval",
  "Value": "30.0"
},
{
  "Name": "MinLoadBalancingInterval",
  "Value": "30.0"
},
{
  "Name": "MinConstraintCheckInterval",
  "Value": "30.0"
},
For the inbuild throttle to work, this needs to flip to true:
{
  "Name": "InBuildThrottlingEnabled",
  "Value": "false"
},
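That is, with the throttle enabled (and keeping the 100 limit already used in the question's config), the pair of settings would read:
{
  "Name": "InBuildThrottlingEnabled",
  "Value": "true"
},
{
  "Name": "InBuildThrottlingGlobalMaxValue",
  "Value": "100"
},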
Also, since these are likely constraint violations and placement (not proactive rebalancing), we need to explicitly instruct SF to throttle those operations as well. There is config for this in SF; although it is not documented or publicly supported at this time, you can see it in the settings. By default only balancing is throttled, but you should be able to turn on throttling for all phases and set appropriate limits via something like the settings below.
These first two settings are also within PlacementAndLoadBalancing, like the ones above.
{
"Name": "ThrottlePlacementPhase",
"Value": "true"
},
{
"Name": "ThrottleConstraintCheckPhase",
"Value": "true"
},
These next settings, which set the limits, are in their own sections; each is a map from node type name to the limit you want to throttle for that node type.
{
  "name": "MaximumInBuildReplicasPerNodeConstraintCheckThrottle",
  "parameters": [
    {
      "name": "YourNodeTypeNameHere",
      "value": "100"
    },
    {
      "name": "YourOtherNodeTypeNameHere",
      "value": "100"
    }
  ]
},
{
  "name": "MaximumInBuildReplicasPerNodePlacementThrottle",
  "parameters": [
    {
      "name": "YourNodeTypeNameHere",
      "value": "100"
    },
    {
      "name": "YourOtherNodeTypeNameHere",
      "value": "100"
    }
  ]
},
{
  "name": "MaximumInBuildReplicasPerNodeBalancingThrottle",
  "parameters": [
    {
      "name": "YourNodeTypeNameHere",
      "value": "100"
    },
    {
      "name": "YourOtherNodeTypeNameHere",
      "value": "100"
    }
  ]
},
{
  "name": "MaximumInBuildReplicasPerNode",
  "parameters": [
    {
      "name": "YourNodeTypeNameHere",
      "value": "100"
    },
    {
      "name": "YourOtherNodeTypeNameHere",
      "value": "100"
    }
  ]
}
I would make these changes and then try again. Additional information like what is actually causing the nodes to be down (confirmed via events and SF health info) would help identify the source of the problem. It would probably also be good to verify that starting 100 instances of the apps on the node actually works and whether that's an appropriate threshold.

How to measure per user bandwidth usage on google cloud storage?

We want to charge users based on the amount of traffic their data generates; specifically, the amount of downstream bandwidth their data consumes.
I have exported google cloud storage access_logs. From the logs, I can count the number of times a file is accessed. (filesize * count will be the bandwidth usage)
But the problem is that this doesn't work well with cached content. My calculated value is much more than the actual usage.
I went with this method because I expected our traffic to be new and not served from cache, which would mean the difference wouldn't matter. But in reality, it seems it is a real problem.
This is a common use case, and I think there should be a better way to solve this problem with Google Cloud Storage.
{
  "insertId": "-tohip8e1vmvw",
  "logName": "projects/bucket/logs/cloudaudit.googleapis.com%2Fdata_access",
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "authenticationInfo": {
      "principalEmail": "firebase-storage@system.gserviceaccount.com"
    },
    "authorizationInfo": [
      {
        "granted": true,
        "permission": "storage.objects.get",
        "resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
        "resourceAttributes": {}
      },
      {
        "granted": true,
        "permission": "storage.objects.getIamPolicy",
        "resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
        "resourceAttributes": {}
      }
    ],
    "methodName": "storage.objects.get",
    "requestMetadata": {
      "destinationAttributes": {},
      "requestAttributes": {
        "auth": {},
        "time": "2019-07-02T11:58:36.068Z"
      }
    },
    "resourceLocation": {
      "currentLocations": [
        "eu"
      ]
    },
    "resourceName": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
    "serviceName": "storage.googleapis.com",
    "status": {}
  },
  "receiveTimestamp": "2019-07-02T11:58:36.412798307Z",
  "resource": {
    "labels": {
      "bucket_name": "bucket.appspot.com",
      "location": "eu",
      "project_id": "project-id"
    },
    "type": "gcs_bucket"
  },
  "severity": "INFO",
  "timestamp": "2019-07-02T11:58:36.062Z"
}
This is an example entry from the log. We are using a single bucket for now; we can also use multiple buckets if that helps.
One possibility is to have a separate bucket for each user and get each bucket's bandwidth usage through the time series API.
The endpoint for this purpose is:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
And the following are the parameters to get the bytes sent over one hour (any time range above 60s can be specified); summing the resulting values gives the total bytes sent from the bucket:
{
  "dataSets": [
    {
      "timeSeriesFilter": {
        "filter": "metric.type=\"storage.googleapis.com/network/sent_bytes_count\" resource.type=\"gcs_bucket\" resource.label.\"project_id\"=\"<<<< project id here >>>>\" resource.label.\"bucket_name\"=\"<<<< bucket name here >>>>\"",
        "perSeriesAligner": "ALIGN_SUM",
        "crossSeriesReducer": "REDUCE_SUM",
        "secondaryCrossSeriesReducer": "REDUCE_SUM",
        "minAlignmentPeriod": "3600s",
        "groupByFields": [
          "resource.label.\"bucket_name\""
        ],
        "unitOverride": "By"
      },
      "targetAxis": "Y1",
      "plotType": "LINE",
      "legendTemplate": "${resource.labels.bucket_name}"
    }
  ],
  "options": {
    "mode": "COLOR"
  },
  "constantLines": [],
  "timeshiftDuration": "0s",
  "y1Axis": {
    "label": "y1Axis",
    "scale": "LINEAR"
  }
}
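As a rough sketch (not part of the original answer), an equivalent query can also be sent directly to the timeSeries.list REST endpoint with the same filter and aggregation as the chart above. The project ID, bucket name and timestamps below are placeholders, the request needs an OAuth 2.0 bearer token, and the query parameter values must be URL-encoded:
Method: GET
https://monitoring.googleapis.com/v3/projects/[PROJECT_ID]/timeSeries
  ?filter=metric.type="storage.googleapis.com/network/sent_bytes_count" resource.type="gcs_bucket" resource.label."bucket_name"="[BUCKET_NAME]"
  &interval.startTime=2019-07-02T11:00:00Z
  &interval.endTime=2019-07-02T12:00:00Z
  &aggregation.alignmentPeriod=3600s
  &aggregation.perSeriesAligner=ALIGN_SUM
  &aggregation.crossSeriesReducer=REDUCE_SUM
  &aggregation.groupByFields=resource.label."bucket_name"
Summing the returned points over the chosen interval gives the bytes sent per bucket, which maps back to per-user usage when each user has their own bucket.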

What is the DNS Record logged by kubedns?

I'm using Google Container Engine and I'm noticing entries like the following in my logs
{
  "insertId": "1qfzyonf2z1q0m",
  "internalId": {
    "projectNumber": "1009253435077"
  },
  "labels": {
    "compute.googleapis.com/resource_id": "710923338689591312",
    "compute.googleapis.com/resource_name": "fluentd-cloud-logging-gke-gas2016-4fe456307445d52d-worker-pool-",
    "compute.googleapis.com/resource_type": "instance",
    "container.googleapis.com/cluster_name": "gas2016-4fe456307445d52d",
    "container.googleapis.com/container_name": "kubedns",
    "container.googleapis.com/instance_id": "710923338689591312",
    "container.googleapis.com/namespace_name": "kube-system",
    "container.googleapis.com/pod_name": "kube-dns-v17-e4rr2",
    "container.googleapis.com/stream": "stderr"
  },
  "logName": "projects/cml-236417448818/logs/kubedns",
  "resource": {
    "labels": {
      "cluster_name": "gas2016-4fe456307445d52d",
      "container_name": "kubedns",
      "instance_id": "710923338689591312",
      "namespace_id": "kube-system",
      "pod_id": "kube-dns-v17-e4rr2",
      "zone": "us-central1-f"
    },
    "type": "container"
  },
  "severity": "ERROR",
  "textPayload": "I0718 17:05:20.552572 1 dns.go:660] DNS Record:&{worker-7.default.svc.cluster.local. 6000 10 10 false 30 0 }, hash:f97f8525\n",
  "timestamp": "2016-07-18T17:05:20.000Z"
}
Is this an actual error or is the severity incorrect? Where can I find the definition for the struct that is being printed?
The severity is incorrect. This is some tracing/debugging that shouldn't have been left in the binary, and has been removed since 1.3 was cut. It will be removed in a future release.
See also: Google container engine cluster showing large number of dns errors in logs

Internal Server Error while Verifying Order in Endurance Storage

I am verifying an order using the below REST call in Mozilla Poster:
https://[username]:[apiKey]#api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder
and the JSON payload for this call is:
{
  "parameters": [
    {
      "location": 138124,
      "packageId": 240,
      "osFormatType": {
        "id": 12,
        "keyName": "LINUX"
      },
      "complexType":
        SoftLayer_Container_Product_Order_Network_Storage_Enterprise",
      "prices": [
        {
          "id": 45064 # Endurance Storage
        },
        {
          "id": 45104 # Bloack Storage
        },
        {
          "id": 45074 # 0.25 IOPS/GB
        },
        {
          "id": 45354 # 100 GB Storage space
        },
        {
          "id": 6028 # 5 GB Snapshot space
        }
      ],
      "quantity": 1
    }
  ]
}
The call used for snapshot space is:
https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/240/getItemPrices?objectFilter={%22itemPrices%22: {%22categories%22: {%22categoryCode%22: {%22operation%22: %22storage_snapshot_space%22}}}}
But I am still facing an issue; the error I get is 500 Internal Server Error.
Please assist me with this. Thanks in advance.
There were some mistakes in your template:
The start quote (") for the "SoftLayer_Container_Product_Order_Network_Storage_Enterprise" value is missing.
The item price 45354 refers to "2000 GB Storage Space", not 100 GB.
The id 6028 is an "itemId". You should specify the price identifier (the "id" property returned by SoftLayer_Product_Package::getItemPrices) instead of the "itemId".
Try this template:
{
  "parameters": [
    {
      "location": 138124,
      "packageId": 240,
      "osFormatType": {
        "id": 12,
        "keyName": "LINUX"
      },
      "complexType": "SoftLayer_Container_Product_Order_Network_Storage_Enterprise",
      "prices": [
        {
          "id": 45064 # Endurance Storage
        },
        {
          "id": 45104 # Block Storage
        },
        {
          "id": 45074 # 0.25 IOPS/GB
        },
        {
          "id": 45214 # 100 GB Storage space
        },
        {
          "id": 46126 # 5 GB Snapshot space
        }
      ],
      "quantity": 1
    }
  ]
}
Some important references:
Location-based Pricing and You
Ordering and Configuring Endurance and Performance Block Storage with VMware@SoftLayer