Internal Server Error while verifying an Endurance Storage order - rest

I am verifying an order using the REST call below in Mozilla Poster:
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder
The JSON payload for this call is:
{
"parameters": [
{
"location": 138124,
"packageId": 240,
"osFormatType": {
"id": 12,
"keyName": "LINUX"
},
"complexType":
SoftLayer_Container_Product_Order_Network_Storage_Enterprise",
"prices": [
{
"id": 45064 # Endurance Storage
},
{
"id": 45104 # Bloack Storage
},
{
"id": 45074 # 0.25 IOPS/GB
},
{
"id": 45354 # 100 GB Storage space
},
{
"id": 6028 # 5 GB Snapshot space
}
],
"quantity": 1
}
]
}
The call used to look up snapshot space prices is:
https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/240/getItemPrices?objectFilter={%22itemPrices%22: {%22categories%22: {%22categoryCode%22: {%22operation%22: %22storage_snapshot_space%22}}}}
But I am still facing an issue: the call returns a 500 Internal Server Error. Please assist. Thanks in advance.

There were some mistakes in your template:
The opening quote (") for the "SoftLayer_Container_Product_Order_Network_Storage_Enterprise" value is missing.
The item price 45354 refers to "2000 GB Storage Space", not 100 GB.
The id 6028 is an itemId, not a price id. You should specify the price identifier (the "id" property returned by SoftLayer_Product_Package::getItemPrices) instead of the itemId.
Try this template:
{
"parameters":[
{
"location":138124,
"packageId":240,
"osFormatType":{
"id":12,
"keyName":"LINUX"
},
"complexType":"SoftLayer_Container_Product_Order_Network_Storage_Enterprise",
"prices":[
{
"id":45064 # Endurance Storage
},
{
"id":45104 # Block Storage
},
{
"id":45074 # 0.25 IOPS/GB
},
{
"id":45214 # 100 GB Storage space
},
{
"id":46126 # 5 GB Snapshot space
}
],
"quantity":1
}
]
}
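If you prefer to drive this from a script instead of Poster, here is a minimal sketch using the SoftLayer Python client (an illustration only; the credentials are placeholders and the price IDs are the ones from the template above, which may differ for your account and location). It first confirms the snapshot-space price IDs for package 240 and then verifies the same order container:
# Sketch: verify the corrected Endurance order with the SoftLayer Python client.
# Assumes `pip install SoftLayer`; credentials are placeholders.
import SoftLayer

client = SoftLayer.create_client_from_env(username='SL_USERNAME', api_key='SL_API_KEY')

# List valid snapshot-space price ids for package 240 (same objectFilter
# as the REST call shown in the question).
snapshot_prices = client.call(
    'SoftLayer_Product_Package', 'getItemPrices',
    id=240,
    mask='mask[id,item[description]]',
    filter={'itemPrices': {'categories': {'categoryCode': {'operation': 'storage_snapshot_space'}}}},
)
print([(p['id'], p['item']['description']) for p in snapshot_prices])

order = {
    'complexType': 'SoftLayer_Container_Product_Order_Network_Storage_Enterprise',
    'location': 138124,
    'packageId': 240,
    'osFormatType': {'id': 12, 'keyName': 'LINUX'},
    'prices': [{'id': pid} for pid in (45064, 45104, 45074, 45214, 46126)],
    'quantity': 1,
}

# verifyOrder only validates the container; switch to 'placeOrder'
# once the response looks correct.
result = client.call('SoftLayer_Product_Order', 'verifyOrder', order)
print(result)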
Some important references:
Location-based Pricing and You
Ordering and Configuring Endurance and Performance Block Storage with VMware@SoftLayer

Related

How to create duplicate block storage from snapshot using Softlayer rest api

I have created a snapshot but am unable to create block storage from it. We can't find any API documentation. Can anyone help me with this request?
Try using the following slcli command to order a duplicate volume:
slcli block volume-duplicate --origin-snapshot-id 11111 --billing monthly 22222
Replace 11111 with your snapshot ID and 22222 with your volume ID.
To get the list of the snapshot ids for your volume you can use the following command:
slcli block snapshot-list 1234
Replace 1234 with your volume ID.
You can order a duplicate volume through a REST call too; see the example below:
Method: POST
https://[username]:[apiKey]@api.softlayer.com/rest/v3.1/SoftLayer_Product_Order/verifyOrder
Body: JSON
{"parameters": [{
"complexType":"SoftLayer_Container_Product_Order_Network_Storage_AsAService",
"packageId": 759,
"location":449600,
"quantity": 1,
"prices": [
{ "id": 225129,
"item": {
"id": 13215,
"description": "Storage space for 2 IOPS per GB"
}},
{ "id": 192043,
"item": {
"id": 5938,
"description": "0.25 IOPS per GB"
}},
{"id": 192473,
"item": {
"id": 5130,
"description": "20 GB Storage Space"
}},
{"id":189433,
"item": {
"id": 9571,
"description": "Storage as a Service"
}},
{"id":189443,
"item": {
"id": 5944,
"description": "Block Storage"
}}],
"useHourlyPricing": false,
"duplicateOriginSnapshotId": 11111,
"duplicateOriginVolumeId": 22222,
"osFormatType": {
"id":12,
"keyName":"LINUX"
},
"volumeSize": 16000
}
]}
Replace 11111 with your snapshot ID and 22222 with your volume ID.
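If you would rather script the REST call than use a REST client, here is a rough sketch with Python requests that POSTs the same container to verifyOrder (the credentials, snapshot/volume IDs and price IDs are the placeholders from the body above; replace them with your own):
# Sketch: POST the duplicate-volume container to verifyOrder with requests.
# Placeholders (username, api key, snapshot/volume ids, price ids) come from
# the example body above; only the price ids are required, the "item" blocks
# shown above are informational.
import requests

USERNAME = 'your_username'   # placeholder: your SoftLayer API username
API_KEY = 'your_api_key'     # placeholder: your SoftLayer API key

order = {
    'complexType': 'SoftLayer_Container_Product_Order_Network_Storage_AsAService',
    'packageId': 759,
    'location': 449600,
    'quantity': 1,
    'prices': [{'id': pid} for pid in (225129, 192043, 192473, 189433, 189443)],
    'useHourlyPricing': False,
    'duplicateOriginSnapshotId': 11111,   # your snapshot id
    'duplicateOriginVolumeId': 22222,     # your volume id
    'osFormatType': {'id': 12, 'keyName': 'LINUX'},
    'volumeSize': 16000,
}

resp = requests.post(
    'https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Order/verifyOrder',
    auth=(USERNAME, API_KEY),
    json={'parameters': [order]},
)
resp.raise_for_status()
print(resp.json())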

How to measure per user bandwidth usage on google cloud storage?

We want to charge users based on the amount of traffic their data generates; specifically, the downstream bandwidth their data consumes.
I have exported the Google Cloud Storage access logs. From the logs, I can count the number of times a file is accessed (file size * count gives the bandwidth usage).
But the problem is that this doesn't work well with cached content. My calculated value is much more than the actual usage.
I went with this method because I assumed our traffic would be new and would not hit the cache, so the difference would not matter. In reality, it turns out to be a real problem.
This is a common use case and I think there should be a better way to solve this problem with google cloud storage.
{
"insertId": "-tohip8e1vmvw",
"logName": "projects/bucket/logs/cloudaudit.googleapis.com%2Fdata_access",
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"authenticationInfo": {
"principalEmail": "firebase-storage#system.gserviceaccount.com"
},
"authorizationInfo": [
{
"granted": true,
"permission": "storage.objects.get",
"resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"resourceAttributes": {}
},
{
"granted": true,
"permission": "storage.objects.getIamPolicy",
"resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"resourceAttributes": {}
}
],
"methodName": "storage.objects.get",
"requestMetadata": {
"destinationAttributes": {},
"requestAttributes": {
"auth": {},
"time": "2019-07-02T11:58:36.068Z"
}
},
"resourceLocation": {
"currentLocations": [
"eu"
]
},
"resourceName": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"serviceName": "storage.googleapis.com",
"status": {}
},
"receiveTimestamp": "2019-07-02T11:58:36.412798307Z",
"resource": {
"labels": {
"bucket_name": "bucket.appspot.com",
"location": "eu",
"project_id": "project-id"
},
"type": "gcs_bucket"
},
"severity": "INFO",
"timestamp": "2019-07-02T11:58:36.062Z"
}
An example log entry.
We are using a single bucket for now, but we can also use multiple buckets if that helps.
One possibility is to have a separate bucket for each user and get each bucket's bandwidth usage through the Cloud Monitoring time series API.
The endpoint for this purpose is:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
The following parameters retrieve the bytes sent over one hour (a time range above 60s can be specified); their sum is the total bytes sent from the bucket.
{
"dataSets": [
{
"timeSeriesFilter": {
"filter": "metric.type=\"storage.googleapis.com/network/sent_bytes_count\" resource.type=\"gcs_bucket\" resource.label.\"project_id\"=\"<<<< project id here >>>>\" resource.label.\"bucket_name\"=\"<<<< bucket name here >>>>\"",
"perSeriesAligner": "ALIGN_SUM",
"crossSeriesReducer": "REDUCE_SUM",
"secondaryCrossSeriesReducer": "REDUCE_SUM",
"minAlignmentPeriod": "3600s",
"groupByFields": [
"resource.label.\"bucket_name\""
],
"unitOverride": "By"
},
"targetAxis": "Y1",
"plotType": "LINE",
"legendTemplate": "${resource.labels.bucket_name}"
}
],
"options": {
"mode": "COLOR"
},
"constantLines": [],
"timeshiftDuration": "0s",
"y1Axis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
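As a sketch of how that metric could be pulled from a script rather than from a dashboard, the following uses the google-cloud-monitoring Python client to sum storage.googleapis.com/network/sent_bytes_count for one bucket over the last hour (the project ID and bucket name are placeholders; treat this as an outline, not a tested recipe):
# Sketch: sum sent_bytes_count for one bucket over the last hour.
# Assumes `pip install google-cloud-monitoring`; project and bucket are placeholders.
import time
from google.cloud import monitoring_v3

PROJECT_ID = "project-id"            # placeholder
BUCKET_NAME = "bucket.appspot.com"   # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 3600},
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
    }
)
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type="storage.googleapis.com/network/sent_bytes_count" '
            'resource.type="gcs_bucket" '
            f'resource.label."bucket_name"="{BUCKET_NAME}"'
        ),
        "interval": interval,
        "aggregation": aggregation,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
total_bytes = sum(
    point.value.int64_value for series in results for point in series.points
)
print(f"Bytes sent from {BUCKET_NAME} in the last hour: {total_bytes}")
Note that this metric is reported per bucket, which is why the one-bucket-per-user approach suggested above is needed to get per-user numbers.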

Softlayer: Create flavor vm instance using Terraform

I found that there are some flavor VM types, e.g. c1.1x1 or b1.2x4, in the Bluemix portal.
But ibm_compute_vm_instance only seems to be able to set up cores.
Can I create an instance of type c1 or m1?
Or, which CPU type is used by default when deploying?
They have since added the ability to create flavors using Terraform (see the provider reference).
Check the new field 'flavor_key_name'.
Note that 'local_disk' still needs to be set: it should be 'true' when using bl1 or bl2 flavors, and 'false' otherwise.
The IBM Terraform provider does not have an attribute for specifying a flavor value when creating a VM, the way you can in the Bluemix portal, or the presetId attribute that is used in other languages.
This issue has already been reported, you can see it in this link: https://github.com/IBM-Cloud/terraform-provider-ibm/issues/151
To create a new VM with Terraform you have to choose the CPU, RAM and first disk separately.
For example, you can choose the flavor:
"name": "C1.2x2x25"
It means 2 x 2.0 GHz cores, 2 GB RAM and a 25 GB (SAN) first disk.
There is no default CPU type when deploying; you have to choose one.
To find these values you can use the following REST API:
Method: GET
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/getCreateObjectOptions
You will find results like the following:
{
"flavor": {
"keyName": "C1_2X2X25",
"name": "C1.2x2x25",
"configuration": [
{
"category": {
"name": "Computing Instance"
},
"price": {
"hourlyRecurringFee": ".045",
"item": {
"description": "2 x 2.0 GHz Cores"
}
}
},
{
"category": {
"name": "First Disk"
},
"price": {
"hourlyRecurringFee": "0",
"item": {
"description": "25 GB (SAN)"
}
}
},
{
"category": {
"name": "RAM"
},
"price": {
"hourlyRecurringFee": ".03",
"item": {
"description": "2 GB"
}
}
}
],
"totalMinimumHourlyFee": "0.075",
"totalMinimumRecurringFee": "49.77"
},
"template": {
"id": null,
"supplementalCreateObjectOptions": {
"flavorKeyName": "C1_2X2X25"
}
}
},
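If you want to pull the available flavor key names from a script rather than reading the raw REST output, a small sketch with the SoftLayer Python client could look like the following (credentials are placeholders, and it assumes the response groups flavor options under a 'flavors' key, as the excerpt above suggests):
# Sketch: list available flavor key names via getCreateObjectOptions.
# Credentials are placeholders; the 'flavors' key is assumed from the
# excerpt above.
import SoftLayer

client = SoftLayer.create_client_from_env(username='SL_USERNAME', api_key='SL_API_KEY')

options = client.call('SoftLayer_Virtual_Guest', 'getCreateObjectOptions')

for entry in options.get('flavors', []):
    flavor = entry['flavor']
    print(flavor['keyName'], '-', flavor['name'])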
This is an example of how you could send the Terraform request:
resource "ibm_compute_vm_instance" "twc_terraform_sample" {
hostname = "twc-terraform-sample-name"
domain = "bar.example.com"
os_reference_code = "DEBIAN_7_64"
datacenter = "wdc01"
network_speed = 10
hourly_billing = true
private_network_only = false
cores = 2
memory = 2048
disks = [25]
dedicated_acct_host_only = true
local_disk = false
}
The RAM (memory) value must be given in MB (e.g. 2 GB is 2048 MB).

Can I request an encrypted volume through the SoftLayer API?

Currently, I call 'SoftLayer_Virtual_Guest/getUpgradeItemPrices' to get local or SAN disks, and 'SoftLayer_Product_Package/id' (block storage is based on package '222') to get external disks. I noticed that the SoftLayer portal can provision an encrypted file/block volume.
My question is how I can request an encrypted disk through these SoftLayer API methods.
Thank you. :)
UPDATE
Encryption is enabled automatically once provisioning completes.
Note that encryption is only available in data centers marked with an asterisk (so-called upgraded data centers). You may use the SoftLayer_Network_Storage::getFileBlockEncryptedLocations method to identify which they are.
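As a quick pre-check from a script, a minimal sketch with the SoftLayer Python client (assuming getFileBlockEncryptedLocations can be called without an object id, as its REST path suggests) could list the encryption-capable data centers:
# Sketch: list data centers that support encrypted file/block storage.
# Assumes getFileBlockEncryptedLocations can be called without an object id.
import SoftLayer

client = SoftLayer.create_client_from_env(username='SL_USERNAME', api_key='SL_API_KEY')

locations = client.call('SoftLayer_Network_Storage', 'getFileBlockEncryptedLocations')
for loc in locations:
    print(loc.get('id'), loc.get('longName'))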
Try the following REST requests:
For Block storage:
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder
method: POST
{
"parameters":[
{
"complexType": "SoftLayer_Container_Product_Order_Network_Storage_AsAService",
"location": 449494,
"packageId": 759,
"volumeSize": 500,
"prices": [
{
"id": 189433
},
{
"id": 189443
},
{
"id": 193373
},
{
"id": 194633
},
{
"id": 193433
}],
"osFormatType": {
"keyName": "LINUX"
}
}
]
}
For File Storage:
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder
method: POST
{
"parameters":[
{
"complexType": "SoftLayer_Container_Product_Order_Network_Storage_AsAService",
"location": 449600,
"packageId": 759,
"volumeSize": 250,
"prices": [
{
"id": 189433
},
{
"id": 189453
},
{
"id": 192043
},
{
"id": 193013
},
{
"id": 192053
}
],
"osFormatType": {
"keyName": "LINUX"
}
}
]
}
For more information please see below:
https://knowledgelayer.softlayer.com/procedure/migrate-file-storage-encrypted-file-storage
https://knowledgelayer.softlayer.com/faqs/1483#7277

Magento 2 REST API - get product by slug, or get media in REST search

I have an Ember app which I'm using as the front end. I need to fetch a product from the REST API, but instead of using the SKU I need to use the slug. So I access the following endpoint, which works fine: http://*.com/index.php/rest/V1/products?searchCriteria[filter_groups][0][filters][0][field]=url_key&searchCriteria[filter_groups][0][filters][0][value]=daniels-icecream-slug
However, the result is obviously a product list as opposed to the single-product endpoint, so some of the data is omitted, namely the media_gallery_entries field. So is there any way I can either return this data from the /products?searchCriteria endpoint, or fetch /products/:slug instead of /products/:sku from the product endpoint?
You need to define condition_type as well in the API call, like the following:
V1/products/?searchCriteria[filterGroups][0][filters][0][field]=url_key&searchCriteria[filterGroups][0][filters][0][value]=%shirt%&searchCriteria[filterGroups][0][filters][0][condition_type]=like
Parameters:
searchCriteria[filterGroups][0][filters][0][field]=url_key
searchCriteria[filterGroups][0][filters][0][value]=%shirt%
searchCriteria[filterGroups][0][filters][0][condition_type]=like
Note: make sure to prefix and suffix the value with % as needed for your use case.
I am using the same approach in my API calls and it works.
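For reference, here is a rough sketch of the same lookup from a script using Python requests (the base URL and token are placeholders, and it uses condition_type 'eq' for an exact slug match instead of the 'like' wildcard shown above); the returned sku can then be fed to /products/:sku to fetch the full record, including media_gallery_entries:
# Sketch: find a product by url_key via the Magento 2 search API.
# Base URL, token and url_key are placeholders; uses an exact ('eq') match.
import requests

BASE_URL = 'http://example.com/index.php/rest/V1'   # placeholder store URL
TOKEN = 'your_admin_or_integration_token'           # placeholder token

params = {
    'searchCriteria[filterGroups][0][filters][0][field]': 'url_key',
    'searchCriteria[filterGroups][0][filters][0][value]': 'daniels-icecream-slug',
    'searchCriteria[filterGroups][0][filters][0][condition_type]': 'eq',
}

resp = requests.get(
    f'{BASE_URL}/products',
    params=params,
    headers={'Authorization': f'Bearer {TOKEN}'},
)
resp.raise_for_status()
items = resp.json().get('items', [])
if items:
    # Use the sku from the search hit against /products/:sku to get the
    # full product, including media_gallery_entries.
    print(items[0]['sku'])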
I'm using Magento v2.2, and when I do a search, each item has an image attribute (in the custom_attributes list) that Magento automatically adds to a product when you add an image to it:
{
"items": [{
"sku": "MH07-XS-Black",
"name": "Hero Hoodie-XS-Black",
"custom_attributes": [{
"attribute_code": "description",
"value": "<p>Gray and black color blocking sets you apart as the Hero Hoodie keeps you warm on the bus, campus or cold mean streets. Slanted outsize front pockets keep your style real . . . convenient.</p>\n<p>• Full-zip gray and black hoodie.<br />• Ribbed hem.<br />• Standard fit.<br />• Drawcord hood cinch.<br />• Water-resistant coating.</p>"
},
{
"attribute_code": "image",
"value": "/m/h/mh07-black_main.jpg"
},
{
"attribute_code": "small_image",
"value": "/m/h/mh07-black_main.jpg"
},
{
"attribute_code": "thumbnail",
"value": "/m/h/mh07-black_main.jpg"
},
{
"attribute_code": "color",
"value": "49"
},
{
"attribute_code": "minimal_price",
"value": "54.0000"
},
{
"attribute_code": "category_ids",
"value": [
"15"
]
},
{
"attribute_code": "options_container",
"value": "container2"
},
{
"attribute_code": "required_options",
"value": "0"
},
{
"attribute_code": "has_options",
"value": "0"
},
{
"attribute_code": "url_key",
"value": "hero-hoodie-xs-black"
},
{
"attribute_code": "msrp_display_actual_price_type",
"value": "0"
},
{
"attribute_code": "tax_class_id",
"value": "2"
},
{
"attribute_code": "size",
"value": "167"
}
]
}]
}