This is my code to order a bare metal server. The placeOrder function returns the order receipt. I then checked the order status in the SoftLayer portal and it had changed to approved.
My question is: are there any APIs to check when the bare metal server has been provisioned? We need to configure the server after it has been provisioned.
var sess = session.New(userName, apiKey, endpoint)
accountService := services.GetAccountService(sess)

order := datatypes.Container_Product_Order{
    Quantity: sl.Int(1),
    Hardware: []datatypes.Hardware{
        {
            Hostname: sl.String("test10g"),
            Domain:   sl.String("example.com"),
            PrimaryBackendNetworkComponent: &datatypes.Network_Component{
                NetworkVlan: &datatypes.Network_Vlan{Id: sl.Int(2288425)},
            },
        },
    },
    Location:  sl.String("DALLAS10"),
    PackageId: sl.Int(911), // Single E3-1270 v6
    Prices: []datatypes.Product_Item_Price{
        {Id: sl.Int(206249)}, // server
        {Id: sl.Int(209427)}, // ram
        {Id: sl.Int(175789)}, // os
        {Id: sl.Int(32927)},  // disk controller
        {Id: sl.Int(49761)},  // disk 0
        {Id: sl.Int(50359)},  // bandwidth
        {Id: sl.Int(35686)},  // portSpeed
        {Id: sl.Int(34241)},  // monitoring
        {Id: sl.Int(34996)},  // response
        {Id: sl.Int(33483)},  // vpn management
        {Id: sl.Int(35310)},  // vulnerabilityScanner
        {Id: sl.Int(34807)},  // pri_ip_address
        {Id: sl.Int(32500)},  // notification
        {Id: sl.Int(25014)},  // remote_management
    },
}

service := services.GetProductOrderService(sess)
receipt, err := service.PlaceOrder(&order, sl.Bool(false))
// Any functions to check the order status here?
// Need some code to wait for the bare metal server to become ready.
You need to use the SoftLayer_Hardware::getObject method and query the server repeatedly until the "provisionDate" property is populated; once it is filled in, provisioning has completed.
For more information see the following:
How can I reliably track the status of a newly provisioned BareMetal server using the REST API
what is the API (REST) to find out whether bare metal server provisioned or not?
https://softlayer.github.io/reference/services/SoftLayer_Virtual_Guest/getLastTransaction/
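Roughly, the polling loop could look like this with the same softlayer-go client used in the question (only a sketch: the waitForProvisioning helper and the way you obtain hardwareId, for example from SoftLayer_Account::getHardware filtered on the hostname after the order is approved, are assumptions and not part of the original code):

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/softlayer/softlayer-go/services"
    "github.com/softlayer/softlayer-go/session"
)

// waitForProvisioning polls SoftLayer_Hardware_Server::getObject until
// provisionDate is populated, which indicates provisioning has finished.
// hardwareId is assumed to have been looked up after the order was placed.
func waitForProvisioning(sess *session.Session, hardwareId int) {
    hardwareService := services.GetHardwareServerService(sess)
    for {
        hw, err := hardwareService.Id(hardwareId).Mask("id;provisionDate").GetObject()
        if err != nil {
            log.Fatal(err)
        }
        if hw.ProvisionDate != nil {
            fmt.Println("provisioning completed at", hw.ProvisionDate)
            return
        }
        // Bare metal provisioning can take hours, so poll sparingly.
        time.Sleep(5 * time.Minute)
    }
}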
Hi, I am wondering why the custom events I have set up don't seem to be showing up on the flurry.com portal.
I am going to guess it has something to do with how I have set it up, but according to the Flurry documentation I have done it correctly.
This is the result when I click a button that fires logEvent
msg = <FlurryStreamEvent: 0x28350e000, type = 134, json = { "fl.event.type" : "CUSTOM_EVENT", "fl.event.id" : 2, "fl.timed.event.duration" : 0, "fl.event.timed" : false, "fl.event.uptime" : 2284321185, "fl.timed.event.starting" : false, "fl.event.user.parameters" : { "RXBpc29kZV90aXRsZQ==" : "WW91ciBTbyBTdHVwaWQ=", "cG9kY2FzdA==" : "Mm5lcmRzIEluIEEgUm9vbQ==" }, "fl.event.name" : "UG9kY2FzdF9QbGF5", "fl.frame.version" : 1, "fl.event.flurry.parameters" : { }, "fl.event.timestamp" : 1603026151045 }>
My concern is that "fl.event.flurry.parameters" : { } is empty; I have no idea if it is meant to be empty.
This is how I am calling it:
let data = ["podcast": post.title, "Episode_title": podcast.title]
Flurry.logEvent("Podcast_Play", withParameters: data)
fl.event.user.parameters is the one that contains the parameters you set, so in your example they are reporting in. Not seeing them in the portal could be due to the expected time it takes for data to propagate. If it takes more than several hours, email us at support@flurry.com with the details.
I created persistent PostgreSQL databases for nodes in Corda. After setting up the databases and building the nodes I'm able to record flows to the database, but after rebuilding the nodes and running them again I'm unable to run the same flow anymore.
I guess this means that the database still has old information about the nodes, but then how can one update the nodes and retain the old states from the database?
This is the error I get from running the same flow after rebuilding.
"net.corda.core.CordaRuntimeException: The Initiator of CollectSignaturesFlow must pass in exactly the sessions required to sign the transaction. "
My deployNodes task:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
        'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
        'database.transactionIsolationLevel' : 'READ_COMMITTED',
        'database.initialiseSchema': "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=NetworkMapAndNotary,L=Helsinki,C=FI"
        notary = [validating : true]
        rpcSettings {
            address "localhost:10004"
            adminAddress "localhost:10044"
        }
        p2pPort 10002
        extraConfig = ext.extraConfig + [
            'dataSourceProperties.dataSource.url' :
                "jdbc:postgresql://localhost:5432/nms_db?currentSchema=nms_schema",
            'dataSourceProperties.dataSource.user' : "nms_corda",
            'dataSourceProperties.dataSource.password' : "corda1234",
        ]
        drivers = ext.drivers
    }
    node {
        name "O=AccountOperator,L=Helsinki,C=FI"
        p2pPort 10005
        rpcSettings {
            address "localhost:10006"
            adminAddress "localhost:10046"
        }
        webPort 10007
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
        extraConfig = ext.extraConfig + [
            'dataSourceProperties.dataSource.url' :
                "jdbc:postgresql://localhost:5432/ao_db?currentSchema=ao_schema",
            'dataSourceProperties.dataSource.user' : "ao_corda",
            'dataSourceProperties.dataSource.password' : "corda1234",
        ]
        drivers = ext.drivers
    }
}
I have also tried running clearNetworkMapCache via each node's RPC shell. After running it I can see that the node_info_hash and node_infos tables are empty, but then how can I update those tables with the right node info?
For example, this Migrating data from Corda 2 to Corda 3 question says I should rerun every transaction after upgrading CorDapps; does that also apply to a regular CorDapp update, and how is it done? Also, the https://docs.corda.net/head/node-operations-upgrade-cordapps.html instructions say "CorDapps must ship with database migration scripts or clear documentation about how to update the database to be compatible with the new version." But I tried to migrate some previous states to a new database instance with no luck.
I faced this issue once when I migrated code from Corda 3 to Corda 4; I fixed it by passing the sessions to both CollectSignaturesFlow and FinalityFlow, as sketched below.
Hope that helps.
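For reference, the change looks roughly like this on the initiating side in Corda 4 (only a sketch; ExampleFlow, otherParty and partiallySignedTx are placeholder names, not from the question):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.CollectSignaturesFlow
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.Party
import net.corda.core.transactions.SignedTransaction

// The initiating flow opens a FlowSession per counterparty and passes the
// same sessions to both CollectSignaturesFlow and FinalityFlow.
@InitiatingFlow
class ExampleFlow(private val otherParty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val partiallySignedTx: SignedTransaction = TODO("build and sign the transaction")
        val session = initiateFlow(otherParty)
        val fullySignedTx = subFlow(CollectSignaturesFlow(partiallySignedTx, listOf(session)))
        return subFlow(FinalityFlow(fullySignedTx, listOf(session)))
    }
}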
Running on DataPower 7.5.2.0
I created a JWT Generator as part of an AAA Policy and it is working fine: I am able to generate, sign, and then externally verify the JWT with no issues.
Now I want to add a custom claim to the JWT, so I ticked the box for Custom and then uploaded this Gateway script file:
var claim = {
    "result" : {
        "user" : "hardcode"
    }
};
session.output.write(claim);
and it generates the correct JWT with the user attribute. However when I try to add a second value to it like so:
var claim = {
    "result" : {
        "user" : "hardcode",
        "name" : "myname"
    }
};
session.output.write(claim);
I now get this error:
[Error: Required CustomClaim Name or Value field missing] errorMessage: 'Required CustomClaim Name or Value field missing', errorCode: '0x8580005c', errorDescription: 'GatewayScript console log message.', errorSuggestion: 'GatewayScript console log message. Refer to the message for more information.'
This is the same message I got before I realized, from the InfoCenter's vague documentation, that I had to set the output to result.
How do I add multiple custom claims in the JWT Generator Gateway script?
It would appear that DataPower only allows you to add a single custom claim, so you just need to make that a complex object like so:
var claim = {
    "result" : {
        "claim" : {
            "user" : "hardcode",
            "one" : true,
            "clientId" : "asdf-asdf-asdf",
            "endpoint" : "http://192.168.142:8080/member/ws"
        }
    }
};
session.output.write(claim);
This then generates the correct JWT with a nested claim.
eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhcGljIiwic3ViIjoiYWRtaW4iLCJleHAiOjE0ODIyNjU5ODQsImlhdCI6MTQ4MjI2MjM4NCwianRpIjoiZDhjNTE1ZDEtZmVjMS00ZGVmLThiNDctZmYzY2E2OWVjOWRiIiwibm9uY2UiOiJtN2lVZlBqTCIsImF1ZCI6ImlkMSIsImNsYWltIjp7InVzZXIiOiJmcmVkIiwib25lIjp0cnVlLCJjbGllbnRJZCI6ImFzZGYtYXNkZi1hc2RmIiwiZW5kcG9pbnQiOiJodHRwOi8vMTkyLjE2OC4xNDI6ODA4MC9tZW1iZXIvd3MifX0.viakwnM5bhhmGIn0QmDJTmsWCuIciO2BOdUVyxYpsFA
I have the following data structure; users and services have a many-to-many relationship:
users : {
    user1 : {
        name : blah,
        email : a@a.com,
        services : {
            servicekey1 : true,
            servicekey9 : true
        }
    }
}
services : {
    servicekey1 : {
        name : blahserve,
        category : blahbers,
        providers : {
            user1 : true,
            user7 : true
        }
    }
}
I want to get the list of user objects for a service. How can this be done with and without AngularFire?
I came up with this query (without AngularFire):
var refu = new Firebase("https://myapp.firebaseio.com/users");
var refs = new Firebase("https://myapp.firebaseio.com/services");
refs.child("servicekey1/providers").once("value", function(s){
    var users = s.val();
    angular.forEach(users, function(v, k){
        refu.child(k).once("value", function(su){
            console.log(su.val());
        });
    });
});
This solves my purpose, but I feel there should be a better way to do it, maybe with AngularFire. Please suggest if there are any other/better ways to achieve this.
I am developing an AngularJS/Node.js application.
The following is the Payment collection find function and its result:
var collectionId = "5673d6c7da28e94f51277894";
Payment.find({ id: collectionId }).exec(function(err, payment) {
    console.log(payment);
});
Console result :
{
    "_id" : ObjectId("5673d6c7da28e94f51277894"),
    "response" : {
        "status" : "approved",
        "id" : "PAY-9N740711P28316116KZX5U4I"
    }
}
I need to find the payment document using the response id.
My code is here:
var paymentId = "PAY-9N740711P28316116KZX5U4I";
Payment.find({ response: { id: paymentId } }).exec(function(err, payment) {
    console.log(payment);
});
Console result :
undefined
If the question is not clear, please comment.
Hoping for an answer, thanks.
It should be:
Payment.find({ 'response.id': paymentId }).exec(function(err, payment) {
    console.log(payment);
});