I'm trying to deploy Vyatta in a SoftLayer environment using the REST API and wanted some leads on how this can be achieved. I did see a Python client that can do it, but we cannot use any of those options (Python/PHP/Java) and have to use the REST API exclusively to deploy a network appliance to the SoftLayer infrastructure.
I tried using the CLI, but I don't see how to look up a valid Operating System Code when it prompts for one:
slcli virtual create
Hostname: test
Domain: test.com
Datacenter: dal09
Operating System Code:
Does anyone know how to deploy Vyatta using the REST API or the CLI, or how I can query for the OS code and network VLANs needed to deploy Vyatta?
Thank you,
Anish
https://softlayer-python.readthedocs.io/en/latest/cli/ordering.html
To place an order using the slcli client, you can refer to the doc above.
Here is an example specifically for ordering a Vyatta gateway appliance:
$ slcli order package-list | grep -i gateway
Network Gateway Appliance NETWORK_GATEWAY_APPLIANCE BARE_METAL_GATEWAY
Network Gateway Appliance Cluster NETWORK_GATEWAY_APPLIANCE_CLUSTER GATEWAY_RESOURCE_GROUP
Network Gateway Appliance (10 Gbps) 2U_NETWORK_GATEWAY_APPLIANCE_1O_GBPS BARE_METAL_GATEWAY
Virtual Router Appliance VIRTUAL_ROUTER_APPLIANCE_1_GPBS BARE_METAL_GATEWAY
Virtual Router Appliance (10 Gpbs) VIRTUAL_ROUTER_APPLIANCE_10_GPBS BARE_METAL_GATEWAY
$ slcli order package-locations NETWORK_GATEWAY_APPLIANCE
:.........:.......:........................:...............:
: id : dc : description : keyName :
:.........:.......:........................:...............:
: 265592 : ams01 : AMS01 - Amsterdam : AMSTERDAM :
...
...
: 814994 : ams03 : AMS03 - Amsterdam : AMSTERDAM03 :
$ slcli order item-list NETWORK_GATEWAY_APPLIANCE | grep -i vyatta
os OS_VYATTA_6_X_SUBSCRIPTION_EDITION_64_BIT Vyatta 6.x Subscription Edition (64 bit)
os OS_VYATTA_5600_5_X_UP_TO_1GBPS_SUBSCRIPTION_EDITION_64_BIT Virtual Router Appliance 5.x (up to 2 Gbps) Subscription Edition (64 Bit)
$ slcli order place --verify NETWORK_GATEWAY_APPLIANCE WASHINGTON07 OS_VYATTA_5600_5_X_UP_TO_1GBPS_SUBSCRIPTION_EDITION_64_BIT ...
The command above is an example of how to verify a Vyatta order; depending on the flavor you would like to order, the exact command may vary.
You will need to specify, in the command, an item for each required category shown in the table below (a fuller sketch follows the table).
Once you are happy with the order, remove --verify and the actual order will be placed.
$ slcli order category-list NETWORK_GATEWAY_APPLIANCE
:........................................:.......................:............:
: name : categoryCode : isRequired :
:........................................:.......................:............:
: Server : server : Y :
: Surcharges : premium : N :
: Operating System : os : Y :
: RAM : ram : Y :
: Disk Controller : disk_controller : Y :
: First Hard Drive : disk0 : Y :
: Second Hard Drive : disk1 : N :
: Third Hard Drive : disk2 : N :
: SRIOV Enabled : sriov_enabled : Y :
: Fourth Hard Drive : disk3 : N :
: Public Bandwidth : bandwidth : Y :
: Uplink Port Speeds : port_speed : Y :
: Remote Management : remote_management : Y :
: Primary IP Addresses : pri_ip_addresses : Y :
: Primary IPv6 Addresses : pri_ipv6_addresses : Y :
: Monitoring : monitoring : Y :
: Notification : notification : Y :
: Response : response : Y :
: VPN Management - Private Network : vpn_management : Y :
: Vulnerability Assessments & Management : vulnerability_scanner : Y :
:........................................:.......................:............:
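Putting it together, a hedged sketch of a full order command might look like the following. Only the OS keyName is taken from the item-list output above; the other item keyNames are placeholders you need to replace with real ones from slcli order item-list, and the --complex-type/--extras options are used as described in the ordering doc linked above.
$ slcli order place --verify \
    --complex-type SoftLayer_Container_Product_Order_Hardware_Server_Gateway_Appliance \
    --extras '{"hardware": [{"hostname": "gateway", "domain": "example.com"}]}' \
    NETWORK_GATEWAY_APPLIANCE AMSTERDAM03 \
    OS_VYATTA_5600_5_X_UP_TO_1GBPS_SUBSCRIPTION_EDITION_64_BIT \
    <SERVER_ITEM_KEYNAME> <RAM_ITEM_KEYNAME> <DISK_CONTROLLER_ITEM_KEYNAME> \
    <DISK0_ITEM_KEYNAME> <BANDWIDTH_ITEM_KEYNAME> <PORT_SPEED_ITEM_KEYNAME> ...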
As Xiang Wang commented, it should be possible to order using the slcli order command.
There are also some examples in Python and Go that you can try:
https://softlayer.github.io/python/orderVyatta/
https://softlayer.github.io/python/order_vyatta.py/
https://softlayer.github.io/go/order_vyatta_gateway.go/
REST
The following is a sample JSON structure you can use to build the order. Keep in mind that the prices can change depending on the package and location, and some of them can conflict with each other.
To retrieve the list of available prices you can use the getItems or getItemPrices methods.
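For example, a rough curl sketch (the username and API key are placeholders, and the package id 1055 is the one used in the payload below):
curl -g -u <username>:<apiKey> \
    "https://api.softlayer.com/rest/v3/SoftLayer_Product_Package/1055/getItemPrices.json?objectMask=mask[id,item[description],categories[categoryCode]]"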
Use placeOrder instead of verifyOrder when you are ready to order.
POST:
https://api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder
PAYLOAD:
{
"parameters": [{
"orderContainers": [{
"complexType": "SoftLayer_Container_Product_Order_Hardware_Server_Gateway_Appliance",
"hardware": [
{
"hostname": "gateway",
"domain": "softlayer.com"
}
],
"quantity": 1,
"location": "AMSTERDAM",
"packageId": 1055,
"prices": [
{
"id": 206251,
"item": { "description": "Single Intel Xeon E3-1270 v6 (4 Cores, 3.80 GHz)" }
},
{
"id": 209453,
"item": { "description": "16 GB RAM" }
},
{
"id": 201199,
"item": { "description": "Virtual Router Appliance 5.x (up to 2 Gbps) Subscription Edition (64 Bit)" }
},
{
"id": 32927,
"item": { "description": "Non-RAID" }
},
{
"id": 83483,
"item": { "description": "2.00 TB SATA" }
},
{
"id": 33867,
"item": { "description": "20000 GB Bandwidth Allotment" }
},
{
"id": 96817,
"item": { "description": "1 Gbps Public & Private Network Uplinks" }
},
{
"id": 80263,
"item": { "description": "Host Ping and TCP Service Monitoring" }
},
{
"id": 32627,
"item": { "description": "Automated Notification" }
},
{
"id": 35310,
"item": { "description": "Nessus Vulnerability Assessment & Reporting" }
},
{
"id": 32500,
"item": { "description": "Email and Ticket" }
},
{
"id": 25014,
"item": { "description": "Reboot / KVM over IP" }
},
{
"id": 212715,
"item": { "description": "SRIOV Enabled" }
},
{
"id": 34807,
"item": { "description": "1 IP Address" }
},
{
"id": 33483,
"item": { "description": "Unlimited SSL VPN Users & 1 PPTP VPN User per account" }
}
]
}]
}]
}
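As a sketch, you could submit this payload with curl like so (username and API key are placeholders; save the JSON above as order.json, and switch verifyOrder to placeOrder once the verification result looks good):
curl -u <username>:<apiKey> -X POST \
    -H "Content-Type: application/json" \
    -d @order.json \
    "https://api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder.json"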
If you want a High Availability (HA) device, you need to specify two hardware objects in the hardware parameter, and the quantity must be 2.
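For example, the relevant part of the order container would then look roughly like this (the hostnames are placeholders):
"quantity": 2,
"hardware": [
    { "hostname": "gateway1", "domain": "softlayer.com" },
    { "hostname": "gateway2", "domain": "softlayer.com" }
],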
Related
I use the following Terraform ARM template to deploy to Azure Stack:
...
resource "azurestack_template_deployment" "nsg-rule1" {
count = "${var.nsgr_map["nsg_sourceportranges"] == "" && var.nsgr_map["nsg_destinationportranges"] == "" && var.nsgr_map["nsg_sourceaddressprefixes"] == "" && var.nsgr_map["nsg_destinationaddressprefixes"] == "" ? 1 : 0}"
name = "${var.nsgr_map["nsg_rulename"]}"
resource_group_name = "${var.nsgr_map["rsg_name"]}"
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"networkSecurityGroupName": {
"type": "String"
},
"networkSecurityGroupRuleName": {
"type" : "String"
},
"protocol" : {
"type" : "String"
},
"sourcePortRange": {
"type" : "String"
},
"destinationPortRange": {
"type" : "String"
},
"sourceAddressPrefix" : {
"type" : "String"
},
"destinationAddressPrefix" : {
"type" : "String"
},
"access" : {
"type" : "String"
},
"priority" : {
"type" : "String"
},
"direction" : {
"type" : "String"
},
"sourcePortRanges" : {
"type" : "String"
},
"destinationPortRanges" : {
"type" : "String"
},
"sourceAddressPrefixes" : {
"type" : "String"
},
"destinationAddressPrefixes" : {
"type" : "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Network/networkSecurityGroups/securityRules",
"apiVersion": "2017-10-01",
"name": "[concat(parameters('networkSecurityGroupName'),'/',parameters('networkSecurityGroupRuleName'))]",
"properties": {
"protocol": "[parameters('protocol')]",
"sourcePortRange": "[parameters('sourcePortRange')]",
"destinationPortRange": "[parameters('destinationPortRange')]",
"sourceAddressPrefix": "[parameters('sourceAddressPrefix')]",
"destinationAddressPrefix": "[parameters('destinationAddressPrefix')]",
"access": "[parameters('access')]",
"priority": "[parameters('priority')]",
"direction": "[parameters('direction')]",
"sourcePortRanges": "[parameters('sourcePortRanges')]",
"destinationPortRanges": "[parameters('destinationPortRanges')]",
"sourceAddressPrefixes": "[parameters('sourceAddressPrefixes')]",
"destinationAddressPrefixes": "[parameters('destinationAddressPrefixes')]"
}
}
]
}
DEPLOY
# these key-value pairs are passed into the ARM Template's `parameters` block
parameters = {
networkSecurityGroupName = "${var.nsgr_map["nsg_name"]}"
networkSecurityGroupRuleName = "${var.nsgr_map["nsg_rulename"]}"
protocol = "${var.nsgr_map["nsg_protocol"]}"
sourcePortRange = "${var.nsgr_map["nsg_source_portrange"]}"
destinationPortRange = "${var.nsgr_map["nsg_destination_portrange"]}"
sourceAddressPrefix = "${var.nsgr_map["nsg_sourceaddressprefix"]}"
destinationAddressPrefix = "${var.nsgr_map["nsg_destinationaddressprefix"]}"
access = "${var.nsgr_map["nsg_access"]}"
priority = "${var.nsgr_map["nsg_priority"]}"
direction = "${var.nsgr_map["nsg_direction"]}"
sourcePortRanges = ""
destinationPortRanges = ""
sourceAddressPrefixes = ""
destinationAddressPrefixes = ""
}
deployment_mode = "Incremental"
}
...
In fact, what I want is to add some NSG rules to an existing Network Security Group (NSG) on Azure Stack.
The problem is that if I deploy different rules with the same name under the same resource group, the deployment fails, because the NSG is not part of the ID that identifies the resource to create.
In other words, I have two rules with the same name, say 'NSG_Rule_Open_VPN', under the resource group 'Virtual_Network_1', but for two different network security groups 'nsg_1' and 'nsg_2'.
The deployment then fails with an error, because Terraform deploys the same resource twice (even though the targeted NSGs are not the same).
If I look at the activity log on Azure Stack, it becomes clear that Terraform doesn't use the name of the targeted NSG in the ID it creates for the resource:
"resourceId": "/subscriptions/yyyxxx/resourcegroups/RSG_99_VirtualNetwork_01/providers/Microsoft.Resources/deployments/NSR_out_TCP_allow_VMtoINTERNET-HTTPS",
It uses only the resource group name 'RSG_99_VirtualNetwork_01' and the rule name 'NSR_out_TCP_allow_VMtoINTERNET-HTTPS', but not the NSG name.
Is there a way to avoid this, so that Terraform creates a resource ID that also depends on the NSG name?
This happens because that is the deployment name, not the NSG rule name, so you need to update this bit to include the NSG name, a counter, or something random:
resource "azurestack_template_deployment" "nsg-rule1" {
count = not_important_removed
name = "${var.nsgr_map["nsg_rulename"]}" # this bit needs to be updated
resource_group_name = "${var.nsgr_map["rsg_name"]}"
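For example, a sketch only (reusing the map keys from your snippet; adjust them to whatever your map actually contains):
resource "azurestack_template_deployment" "nsg-rule1" {
  # include the NSG name so deployments for different NSGs get distinct names
  name                = "${var.nsgr_map["nsg_name"]}-${var.nsgr_map["nsg_rulename"]}"
  resource_group_name = "${var.nsgr_map["rsg_name"]}"
  # ... count, template_body, parameters and deployment_mode stay as before
}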
I found that there are some flavor VM types, e.g. c1.1x1 or b1.2x4, in the Bluemix portal site.
But ibm_compute_vm_instance only seems to be able to set the number of cores.
Can I create an instance of type c1 or m1?
And which CPU type is used by default when deploying?
They have added the ability to create flavors using Terraform (ref).
Check the new field 'flavor_key_name'.
But you still need to set 'local_disk': it should be 'true' when using BL1 or BL2 flavors, otherwise it should be 'false'.
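A minimal sketch, assuming a provider version that supports flavor_key_name (hostname, domain and datacenter are placeholder values):
resource "ibm_compute_vm_instance" "flavor_sample" {
  hostname          = "flavor-sample"
  domain            = "example.com"
  os_reference_code = "DEBIAN_7_64"
  datacenter        = "wdc01"
  hourly_billing    = true
  # the flavor replaces cores, memory and the first disk
  flavor_key_name   = "C1_2X2X25"
  # false for SAN-backed flavors (B1/C1/M1), true for local-disk flavors (BL1/BL2)
  local_disk        = false
}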
The IBM Terraform provider does not have an attribute for the flavor value that you would use in the Bluemix portal, or for the presetId attribute that is used in other languages, to create a VM.
This issue has already been reported, you can see it in this link: https://github.com/IBM-Cloud/terraform-provider-ibm/issues/151
To create a new VM with Terraform you have to choose the CPU, RAM and first disk separately.
E.g. you can choose this flavor:
"name": "C1.2x2x25"
It corresponds to 2 x 2.0 GHz cores, 2 GB RAM and a 25 GB (SAN) first disk.
There is no default CPU type when deploying; you have to choose one.
To find these values you can use the following REST API call:
Method: GET
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/getCreateObjectOptions
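For example, with curl (username and API key are placeholders):
curl -u <username>:<apiKey> \
    "https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/getCreateObjectOptions.json"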
In the result you will find entries like the following:
{
"flavor": {
"keyName": "C1_2X2X25",
"name": "C1.2x2x25",
"configuration": [
{
"category": {
"name": "Computing Instance"
},
"price": {
"hourlyRecurringFee": ".045",
"item": {
"description": "2 x 2.0 GHz Cores"
}
}
},
{
"category": {
"name": "First Disk"
},
"price": {
"hourlyRecurringFee": "0",
"item": {
"description": "25 GB (SAN)"
}
}
},
{
"category": {
"name": "RAM"
},
"price": {
"hourlyRecurringFee": ".03",
"item": {
"description": "2 GB"
}
}
}
],
"totalMinimumHourlyFee": "0.075",
"totalMinimumRecurringFee": "49.77"
},
"template": {
"id": null,
"supplementalCreateObjectOptions": {
"flavorKeyName": "C1_2X2X25"
}
}
},
This is an example of how you could send the Terraform request:
resource "ibm_compute_vm_instance" "twc_terraform_sample" {
hostname = "twc-terraform-sample-name"
domain = "bar.example.com"
os_reference_code = "DEBIAN_7_64"
datacenter = "wdc01"
network_speed = 10
hourly_billing = true
private_network_only = false
cores = 2
memory = 2048
disks = [25]
dedicated_acct_host_only = true
local_disk = false
}
The RAM (memory) value must be specified in MB (e.g. 2 GB is 2048 MB).
Short: IT DOES NOT SSO
Longer: I am trying to unite the logins of two services via CAS (v5.0.4). I have configured the services and am now able to log into both. The problem is that CAS is not acting as an SSO provider: logging into one of the services logs you in, BUT you still have to enter your credentials for the second service (and vice versa). I suspect that I am missing some configuration options.
Here are my services:
{
"#class" : "org.apereo.cas.services.RegexRegisteredService",
"serviceId" : "^(http|https)://service1.*",
"name" : "service1",
"id" : 12345678,
"accessStrategy" : {
"#class" : "org.apereo.cas.services.DefaultRegisteredServiceAccessStrategy",
"enabled" : true,
"ssoEnabled" : true
}
}
and
{
"#class" : "org.apereo.cas.services.OidcRegisteredService",
"clientId": "client",
"clientSecret": "secret",
"serviceId" : "^https://service2.*",
"signIdToken": true,
"bypassApprovalPrompt": true,
"name": "OIDC",
"id": 87654321,
"evaluationOrder": 1,
"attributeReleasePolicy" : {
"#class" : "org.apereo.cas.services.ReturnAllAttributeReleasePolicy"
},
"accessStrategy" : {
"#class" : "org.apereo.cas.services.DefaultRegisteredServiceAccessStrategy",
"enabled" : true,
"ssoEnabled" : true
}
}
Thank you!
The problem was a bug in v5.0.x. Upgrading to v5.1.x fixes the issue.
In the AWS resource AWS::ApplicationAutoScaling::ScalableTarget, how do I retrieve the value of ResourceId, which is the unique resource identifier associated with the scalable target?
The format used in the ResourceId property is service/cluster_name/service_name.
You are correct that the ResourceId property should be in the form service/<cluster_name>/<service_name>.
Incidentally (and to help future Googlers), the error message that you get when you use an incorrect ResourceId or ScalableDimension is:
Unsupported service namespace, resource type or scalable dimension
To achieve this in CloudFormation you can use a Fn::Join for the ResourceId. Here is a snippet of the Scalable target that I am using:
"ECSServiceScalableTarget": {
"Type": "AWS::ApplicationAutoScaling::ScalableTarget",
"Properties": {
"MaxCapacity": 5,
"MinCapacity": 1,
"ResourceId": { "Fn::Join" : ["/", [
"service",
{ "Ref": "ECSCluster" },
{ "Fn::GetAtt" : ["ECSService", "Name"] }
]]},
"RoleARN": { "Fn::Join" : ["", ["arn:aws:iam::", { "Ref" : "AWS::AccountId" }, ":role/ApplicationAutoscalingECSRole"]]},
"ScalableDimension": "ecs:service:DesiredCount",
"ServiceNamespace": "ecs"
}
}
I use the {"Fn::Join" : [ ":", [ "a", "b", "c" ] ]"} to get the value of the "ResourceId".
Well I'm trying to show the following entity:
{
"contextResponses" : [
{
"contextElement" : {
"type" : "City",
"isPattern" : "false",
"id" : "Miraflores",
"attributes" : [
{
"name" : "position",
"type" : "coords",
"value" : "-12.119816, -77.028916",
"metadatas" : [
{
"name" : "location",
"type" : "string",
"value" : "WSG84"
}
]
}
]
},
"statusCode" : {
"code" : "200",
"reasonPhrase" : "OK"
}
}
]
}
I'm wiring the NGSI Source and NGSI Entity to PoI operators to the MapViewer widget (Insert/Update PoI) with the following settings:
NGSI Source
NGSI server URL: mydirection:1026
NGSI proxy URL: http://mashup.lab.fi-ware.org:3000/
NGSI entities: City
NGSI Attributes: position
NGSI Entity to Poi
Coordinates attribute: position
But nothing shows up in the map! Can somebody help me figure out what the problem is?
It seems your configuration is correct (I'm assuming mydirection:1026 is a full URL, i.e. it includes the protocol), but your network is probably filtering port 3000. Try using http://ngsiproxy.lab.fi-ware.org as the NGSI proxy instead of http://mashup.lab.fi-ware.org:3000/.
In fact, I recommend enabling https notifications in your context broker instance and using https://ngsiproxy.lab.fi-ware.org instead, especially if you are creating your WireCloud dashboard on an https web page (e.g. https://mashup.lab.fi-ware.org), as using this NGSI proxy will avoid some mixed-content problems; see:
Chrome: https://support.google.com/chrome/answer/1342714?hl=en
Firefox: https://blog.mozilla.org/tanvi/2013/04/10/mixed-content-blocking-enabled-in-firefox-23/
Update: FIWARE has moved from fi-ware.org to fiware.org. The recommended NGSI proxy server is now ngsiproxy.lab.fiware.org (ngsiproxy.lab.fi-ware.org still works).
Three simple steps to start MapViewer on FIWARE:
Update the Orion Context Broker on your system
Check that the rush and redis daemons are installed and running on your system (see the sketch after this list)
Create a correct boot sequence in init.d: redis, rush and contextBroker
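A quick way to check, as a sketch only (assuming a SysV-style system where these services were installed as init scripts; the exact service names depend on how rush and redis were installed on your distribution):
# check that each daemon is installed and running
service redis status
service rush status
service contextBroker status
# enable them at boot; the actual start order comes from the init scripts themselves
chkconfig redis on
chkconfig rush on
chkconfig contextBroker on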
After these steps, you can build your viewing interface in WireCloud using the MapViewer widget and the NGSI Source and NGSI Entity to PoI operators.
You must use correctly structured JSON messages, as in the following example:
{ "contextElements":
[
{
"type": "iotdevice","isPattern": "false","id": "edison1", "attributes":
[
{
"name": "temperature",
"type": "string",
"value": "10"
},
{
"name" : "position",
"type" : "coords",
"value" : "-20, 35",
"metadatas" : [
{
"name" : "location",
"type" : "string",
"value" : "WSG84"
}
]
}
]
}
],
"updateAction": "APPEND"
}