How can I sign a JSON transaction?

I have a JSON representation of an XRPL transaction, like the one below, and I want to sign it in C++ using rippled. How can I do that?
{
"TransactionType" : "Payment",
"Account" : "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
"Destination" : "ra5nK24KXen9AHvsdFTKHSANinZseWnPcX",
"Amount" : "1000000"
}

Parse a transaction, represented by the class STTx (which stands for "serialized type: transaction"), from JSON. ripple-libpp has good example code.
Construct a signing key, represented by the type SecretKey. If you have a Base58-encoded signing key, you can use parseBase58 (pass TokenType::AccountSecret for the first parameter).
Derive the verifying key (represented by the class PublicKey) from the signing key with derivePublicKey (pass KeyType::secp256k1 or KeyType::ed25519 for the first parameter, depending on the signing algorithm you choose to use).
Sign the transaction with STTx::sign.
Read the signature via Blob const signature = sttx.getFieldVL(sfTxnSignature) (a Blob is a vector of bytes).
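For reference, here is a minimal sketch of those steps put together. It assumes rippled's protocol headers are available; exact header paths, namespaces and optional handling differ between rippled versions, the secret below is a placeholder, and error handling is omitted, so treat it as an outline rather than copy-paste code.

#include <ripple/basics/strHex.h>
#include <ripple/json/json_reader.h>
#include <ripple/protocol/PublicKey.h>
#include <ripple/protocol/STParsedJSON.h>
#include <ripple/protocol/STTx.h>
#include <ripple/protocol/SecretKey.h>

using namespace ripple;

Blob signPaymentExample()
{
    // 1. Parse the transaction JSON.
    Json::Value txJson;
    Json::Reader{}.parse(R"({
        "TransactionType" : "Payment",
        "Account" : "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
        "Destination" : "ra5nK24KXen9AHvsdFTKHSANinZseWnPcX",
        "Amount" : "1000000"
    })", txJson);

    // 2. Construct the signing key from a Base58-encoded secret (placeholder value).
    auto const secretKey = *parseBase58<SecretKey>(
        TokenType::AccountSecret, "s...your Base58 secret...");

    // 3. Derive the verifying key from the signing key.
    auto const publicKey = derivePublicKey(KeyType::secp256k1, secretKey);

    // The signing public key is part of the data that gets signed.
    txJson["SigningPubKey"] = strHex(publicKey.slice());

    STParsedJSONObject parsed("tx_json", txJson);
    STTx tx(std::move(*parsed.object));

    // 4. Sign the transaction.
    tx.sign(publicKey, secretKey);

    // 5. Read the signature back out (a Blob is a vector of bytes).
    return tx.getFieldVL(sfTxnSignature);
}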


Can I add new REST properties without making a breaking change?

We currently have 4 content types that might be included in a delivery. In about 8-12 months we will probably have another 2-4 content types. I'm working on a public REST API now and wondering whether I can write the API in such a way that future additions won't require a version bump.
Currently we are thinking a delivery would return a JSON result kind of like this:
{
"dateDelivered": "2016-01-01",
"clientId": "000001",
"contentCounts" : {
"total" : 100,
"articles": 75,
"slideshows": 25
... // other content types as we add them
}
"content" : {
"articles" : "http://api.example.com/v0/deliveries/1234/content/articles",
"slideshows" : "http://api.example.com/v0/deliveries/1234/content/slideshows",
... // other content types as we add them
}
}
That defines contentCounts and content as objects with an optional property for each available content type. I suppose I could define that as an array of objects for each content type, but I don't see how that would really change anything.
Is there any reason it would be a breaking change if in the future the result object looked more like this:
{
"dateDelivered": "2016-01-01",
"clientId": "000001",
"contentCounts" : {
"total" : 150,
"articles": 75,
"slideshows": 25,
"events": 25,
"videos": 25
},
"content" : {
"articles" : "http://api.example.com/v0/deliveries/1234/content/articles",
"slideshows" : "http://api.example.com/v0/deliveries/1234/content/slideshows",
"events" : "http://api.example.com/v0/deliveries/1234/content/events",
"videos" : "http://api.example.com/v0/deliveries/1234/content/videos"
}
}
A breaking change is a relative notion.
A change is breaking if it breaks client code that does not account for it.
In your case, if a client of your REST API has hardcoded the content types, then they will need to change their code to account for the new content types.
In that sense, their code is broken because it does not handle all of the content types.
In another sense, their code is not broken, because as long as you do not remove content types, it will continue to work for the content types it supports.
If the client code is smart enough to iterate through the properties and be flexible about the changes, it is fine; the sketch below shows what that can look like.
In any case, if you plan on changing the model, you should mention it in your documentation.
Whether it is adding, removing or renaming content types, if the client knows about it, they can write code that will successfully use your REST API. In that case, NO, it would not be a breaking change, because the response has a dynamic structure (content types can vary) but in a structured way.
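For illustration, here is a small client-side sketch (TypeScript, with a hypothetical Delivery type based on the example response) that iterates over whatever content types the response contains instead of hardcoding them, so later additions do not break it:

interface Delivery {
  dateDelivered: string;
  clientId: string;
  contentCounts: { total: number; [contentType: string]: number };
  content: { [contentType: string]: string };
}

function summarize(delivery: Delivery): void {
  // Iterate over whatever content types the server returned;
  // types added later are picked up without any code change.
  for (const [type, url] of Object.entries(delivery.content)) {
    const count = delivery.contentCounts[type] ?? 0;
    console.log(`${type}: ${count} items at ${url}`);
  }
}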

Need help storing a map with interface{} keys in my MongoDB database using Go

I'm in the process of creating an application where my back end is in Go and the database is MongoDB. My problem is that I have a map in my struct, declared like:
type Data struct {
    data map[interface{}]interface{}
}
after adding values to it like
var data Data
data.data = make(map[interface{}]interface{})
data.data["us"] = "country"
data.data[2] = "number"
data.data["mother"] = "son"
I'm inserting it like
c.Insert(&data)
When I insert this I'm losing my keys and can only see the values...
{
"_id" : Object Id("57e8d9048c1c6f751ccfaf50"),
"data" : {
"<interface {} Value>" : "country",
"<interface {} Value>" : "number",
"<interface {} Value>" : "son"
},
}
Is there any way to use interface{} keys and still get both the keys and the values into MongoDB? Thanks.
You can use nothing but strings as keys in MongoDB documents. Even if you defined your Data structure as map[int]interface{}, Mongo wouldn't allow you to insert this object into the database (I don't know whether mgo converts types). Actually, you can't use anything but strings as JSON keys at all, as anything else wouldn't be JSON (try JSON.parse('{2:"number"}') in your browser console).
So define your Data as bson.M (a shortcut for map[string]interface{}) and use the strconv package to convert your numbers into strings.
But I guess you should look at arrays/slices instead, as about the only reason someone needs numbers as keys in JSON is to iterate through those fields later, and for iteration we use arrays.
Update: I just checked how mgo deals with map[int]interface{}. It inserts an entry like {"<int Value>" : "hello"} into the DB, where <int Value> is not a number but literally the string "<int Value>".
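For illustration, here is a minimal sketch of that suggestion: convert every key to a string and insert a bson.M. The connection string, database and collection names are placeholders.

package main

import (
    "strconv"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }
    defer session.Close()
    c := session.DB("test").C("data")

    // Original data with mixed key types.
    raw := map[interface{}]interface{}{
        "us":     "country",
        2:        "number",
        "mother": "son",
    }

    // Convert every key to a string so the document is valid BSON.
    doc := bson.M{}
    for k, v := range raw {
        switch key := k.(type) {
        case string:
            doc[key] = v
        case int:
            doc[strconv.Itoa(key)] = v
        }
    }

    if err := c.Insert(bson.M{"data": doc}); err != nil {
        panic(err)
    }
}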

How to send a nested array parameter using Alamofire's multipart form data

How do I send this parameter as multipart form data?
let dictionary: [String: Any] = [
"user" :
[
"email" : "\(email!)",
"token" : "\(loginToken!)"
],
"photo_data" :[
"name" : "Toko Tokoan1",
"avatar_photo" : photo,
"background_photo" : photo,
"phone" : "0222222222",
"addresses" :[[
"address" : "Jalan Kita",
"provinceid" : 13,
"cityid" : 185,
"postal" : "45512"
]],
"banks" :[[
"bank_name" : "PT Bank BCA",
"account_number" : "292993122",
"account_name" : "Tukiyeum"
]]
]
]
I tried the code below, but I can't encode value (which is an NSDictionary) as UTF-8:
for (key, value) in current_user {
if key == "avatar_photo" || key == "background_photo"{
multipartFormData.appendBodyPart(fileURL: value.data(using: String.Encoding.utf8)!, name: key) // value error because its NSDic
}else{
multipartFormData.appendBodyPart(data: value.data(using: String.Encoding.utf8)!, name: value) // value error because its NSDic
}
}
value in appendBodyPart cannot be used because it's an NSDictionary, not a string. What is the right way to put that parameter into multipartFormData?
It is allowed to have nested multiparts.
The use of a Content-Type of multipart in a body part within another multipart entity is explicitly allowed. In such cases, for obvious reasons, care must be taken to ensure that each nested multipart entity must use a different boundary delimiter.
RFC 1341
So you have to do the same as you did on the outer level: simply loop through the contents of the dictionary, generating key-value pairs. Obviously you have to set a different part delimiter, so the receiving side can distinguish a nested part change from a top-level part change.
Maybe it is easier to send the whole structure as application/json.
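For illustration, here is a sketch of a pragmatic middle ground, assuming Alamofire 4's upload(multipartFormData:to:) API: photos stay binary parts, and each nested dictionary is JSON-encoded and appended as a single part. The endpoint URL and the key handling are placeholders based on the question's dictionary.

import Alamofire
import Foundation

func uploadStore(dictionary: [String: Any], photoData: Data) {
    Alamofire.upload(multipartFormData: { multipartFormData in
        for (key, value) in dictionary {
            if key == "avatar_photo" || key == "background_photo" {
                // Binary data goes in as a raw part.
                multipartFormData.append(photoData, withName: key)
            } else if let nested = value as? [String: Any],
                      let json = try? JSONSerialization.data(withJSONObject: nested) {
                // Nested dictionaries are serialized to JSON and sent as one part
                // instead of building nested multipart bodies by hand.
                multipartFormData.append(json, withName: key, mimeType: "application/json")
            } else if let string = value as? String,
                      let data = string.data(using: .utf8) {
                multipartFormData.append(data, withName: key)
            }
        }
    }, to: "https://api.example.com/v1/stores") { encodingResult in
        switch encodingResult {
        case .success(let upload, _, _):
            upload.responseJSON { response in print(response) }
        case .failure(let error):
            print(error)
        }
    }
}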

Extracting and printing a substring (based on a delimiter or pattern) from a key-value pair in MongoDB

We have a document in MongoDB with key-value pairs, containing two fields, "_id" and "value", that looks like this:
{
"_id" : ObjectId("53cf9048b6e9e884602db85f"),
"value" : "Security ID:\t\tS-1-0-0\tAccount Name:\t\tKanav Narula\tAccount Domain:\t\tINDIA\t
}
Now we want to execute a query on the value field.
We want to extract Account Name and Account Domain from the value field defined in the above document.
Expected output:
{
"_id" : ObjectId("53cf9048b6e9e884602db85f"),
"value" : "Security ID:\t\tS-1-0-0\tAccount Name:\t\tKanav Narula\tAccount Domain:\t\tINDIA\t,
"Account Name" : Kanav Narula,
"Account Domain" : INDIA
}
Can anyone suggest ways to perform this in MongoDB?
Thanks in advance.
In order to make it searchable, the components of the value string should have been separate fields; otherwise, value is just one large string. The schema is wrong for what you are trying to do. Did you port it from Redis or some other key/value store? MongoDB is not a key/value store, so you need to rethink your use case in a new light, so to speak.
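For illustration, a sketch of what the restructured document and the query could look like in the mongo shell (the collection name is hypothetical):

db.events.insert({
    "Security ID" : "S-1-0-0",
    "Account Name" : "Kanav Narula",
    "Account Domain" : "INDIA"
})

// The "extraction" then becomes an ordinary query with a projection,
// and "Account Name" / "Account Domain" can be indexed.
db.events.find(
    { "Account Domain" : "INDIA" },
    { "Account Name" : 1, "Account Domain" : 1 }
)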

Querying sub array with $where

I have a collection with the following document:
{
"_id" : ObjectId("51f1fd2b8188d3117c6da352"),
"cust_id" : "abc1234",
"ord_date" : ISODate("2012-10-03T18:30:00Z"),
"status" : "A",
"price" : 27,
"items" : [{
"sku" : "mmm",
"qty" : 5,
"price" : 2.5
}, {
"sku" : "nnn",
"qty" : 5,
"price" : 2.5
}]
}
I want to use "$where" on the fields of "items", something like this:
{$where:"this.items.sku==mmm"}
How can I do it? It works when the field is not of array type.
You don't need a $where operator to do this; just use a query object of:
{ "items.sku": mmm }
As for why your $where isn't working: the value of that operator is executed as JavaScript, so it's not going to check each element of the items array; it's just going to treat items as a plain object and compare its sku property (which is undefined) to mmm.
You are comparing this.items.sku to a variable mmm, which isn't initialized and thus has the value undefined. What you want to do is iterate over the array and compare each entry to the string 'mmm'. This example does so using the array method some, which returns true when the passed function returns true for at least one of the entries:
{$where:"return this.items.some(function(entry){return entry.sku =='mmm'})"}
But really, don't do this. In a comment to the answer by JohnnyHK you said "my service is just a interface between user and mongodb, totally unaware what the field client want's to store". You aren't really explaining your use-case, but I am sure you can solve this better.
The $where operator invokes the JavaScript engine even though this trivial expression could be done with a normal query. This means unnecessary performance overhead.
Every single document in the collection is passed to the function, so an index cannot be used.
When the JavaScript function is generated from something provided by the client, you must be careful to sanitize and escape it properly, or your application becomes vulnerable to code injection.
I've been reading through your comments in addition to the question. It sounds like your users can generically add some attributes, which you are storing in an array within a document. Your client needs to be able to query an arbitrary pair from the document in a generic manner. The pattern to achieve this is typically as follows:
{
  ...
  attributes: [
    { k: "some user defined key", v: "the value" },
    { k: ..., v: ... },
    ...
  ]
}
Note that in your case, items is attributes. Now to get the document, your query will be something like:
db.collection.find({attributes: {$elemMatch: {k: "sku", v: "mmm"}}});
(with an index on attributes.k and attributes.v)
This allows your service to provide a way to query the data while letting the client specify what the k,v pairs are. The one caveat with this design is that you always need to be aware that documents have a 16MB limit (unless you have a use case that makes GridFS appropriate). There are operators like $slice which may help with controlling this.
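For completeness, a small sketch of creating the compound index mentioned above in the mongo shell (use ensureIndex instead on very old shells):

db.collection.createIndex({ "attributes.k" : 1, "attributes.v" : 1 })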