I just finished the Pluralsight course and completed the tutorial in the official project documentation without problems. Nevertheless, using the CLI I could not use the functions get_acc_ast_tx and get_acc_tx. I checked that the peer keys and the configuration files correspond to the genesis file, where admin#test is allowed to use these functions, and I get:
[2019-12-08 04:55:57.883070400] [E] [CLI/ResponseHandler/Query]: Query is stateless invalid.
The genesis file I use is the initial one from the git repository:
{
"blockV1": {
"payload": {
"transactions": [{
"payload": {
"reducedPayload": {
"commands": [{
"addPeer": {
"peer": {
"address": "127.0.0.1:10001",
"peerKey": "bddd58404d1315e0eb27902c5d7c8eb0602c16238f005773df406bc191308929"
}
}
}, {
"createRole": {
"roleName": "admin",
"permissions": ["can_add_peer", "can_add_signatory", "can_create_account", "can_create_domain", "can_get_all_acc_ast", "can_get_all_acc_ast_txs", "can_get_all_acc_detail", "can_get_all_acc_txs", "can_get_all_accounts", "can_get_all_signatories", "can_get_all_txs", "can_get_blocks", "can_get_roles", "can_read_assets", "can_remove_signatory", "can_set_quorum"]
}
}, {
"createRole": {
"roleName": "user",
"permissions": ["can_add_signatory", "can_get_my_acc_ast", "can_get_my_acc_ast_txs", "can_get_my_acc_detail", "can_get_my_acc_txs", "can_get_my_account", "can_get_my_signatories", "can_get_my_txs", "can_grant_can_add_my_signatory", "can_grant_can_remove_my_signatory", "can_grant_can_set_my_account_detail", "can_grant_can_set_my_quorum", "can_grant_can_transfer_my_assets", "can_receive", "can_remove_signatory", "can_set_quorum", "can_transfer"]
}
}, {
"createRole": {
"roleName": "money_creator",
"permissions": ["can_add_asset_qty", "can_create_asset", "can_receive", "can_transfer"]
}
}, {
"createDomain": {
"domainId": "test",
"defaultRole": "user"
}
}, {
"createAsset": {
"assetName": "coin",
"domainId": "test",
"precision": 2
}
}, {
"createAccount": {
"accountName": "admin",
"domainId": "test",
"publicKey": "313a07e6384776ed95447710d15e59148473ccfc052a681317a72a69f2a49910"
}
}, {
"createAccount": {
"accountName": "test",
"domainId": "test",
"publicKey": "716fe505f69f18511a1b083915aa9ff73ef36e6688199f3959750db38b8f4bfc"
}
}, {
"appendRole": {
"accountId": "admin#test",
"roleName": "admin"
}
}, {
"appendRole": {
"accountId": "admin#test",
"roleName": "money_creator"
}
}],
"quorum": 1
}
}
}],
"txNumber": 1,
"height": "1",
"prevBlockHash": "0000000000000000000000000000000000000000000000000000000000000000"
}
}
}
I use the Hyperledger Docker image, on macOS Catalina.
I followed the tutorial from this manual: https://iroha.readthedocs.io/en/latest/build/index.html
Thank you very much for the help.
Unfortunately, the CLI is rather outdated – we are working on a new solution for it, but meanwhile it is better to use one of the available SDKs – for Java, Python, JS, or iOS (if you prefer mobile development).
All of them contain examples, so it should not be too tricky to use them. However, if you encounter any issues, please contact us using one of the chats here.
This is due to the outdated CLI. A newer version is in development and will replace it, but it is not yet ready.
The exact problem is that pagination metadata was added to these queries in Iroha, but the CLI was not updated to set it properly. The protobuf transport allows the CLI to send a query without fields that were added later, but Iroha refuses to handle such a query.
You can use one of client libraries that are always kept up to date: https://iroha.readthedocs.io/en/latest/develop/libraries.html.
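To make the pagination issue concrete, here is a minimal sketch in plain Python. The dicts below merely mirror the rough shape of the protobuf messages and are not real SDK objects; the field names are assumptions based on the explanation above.

```python
# Illustrative only: plain dicts mirroring the approximate shape of Iroha's
# GetAccountTransactions query payload (not real SDK/protobuf objects).

def make_get_acc_tx_query(account_id, page_size=None):
    """Build a query payload; page_size stands in for TxPaginationMeta.page_size."""
    query = {"get_account_transactions": {"account_id": account_id}}
    if page_size is not None:
        # The field the outdated CLI never sets; without it, newer Iroha
        # versions reject the query as "stateless invalid".
        query["get_account_transactions"]["pagination_meta"] = {"page_size": page_size}
    return query

old_cli_query = make_get_acc_tx_query("admin#test")          # missing pagination meta
well_formed_query = make_get_acc_tx_query("admin#test", 10)  # what an SDK would send
```

The client libraries linked above set this field for you, which is why they keep working where the CLI fails.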
I've recently updated my app from Micronaut 2 to Micronaut 3, and as a result all Mongo automatic CSFLE encryption/decryption has stopped working.
If I create a ClientEncryption object and manually decrypt the field, that works, and the logging shows that it is fetching KMS and key information needed to decrypt it:
INFO org.mongodb.driver.client - executeStateMachine: READY
INFO org.mongodb.driver.client - executeStateMachine: NEED_MONGO_KEYS
INFO org.mongodb.driver.client - executeStateMachine: NEED_KMS
// manual decryption result here
But for the automatic process, it just prints the READY state only, and no encryption/decryption takes place.
Are there any examples showing automatic CSFLE working with Micronaut 3, or has anyone run into this issue? Could this be a bug in Micronaut 3?
The two relevant dependencies in the Micronaut 3 upgrade are:
implementation "io.micronaut.mongodb:micronaut-mongo-reactive:4.2.0" // driver
implementation "org.mongodb:mongodb-crypt:1.5.2" // uses libmongocrypt
and the mongodb-enterprise-cryptd v5.0.6 binary is installed on the ubuntu:20.04 OS that we're running the app on. The mongocryptdSpawnPath extra options property in the Mongo connection is pointed at the location of the installation.
Server version: Enterprise 4.2.21
I can't give exact schemaMap and DB details, but here is a similar one generated by the same code, for a DB called zoo and two collections using CSFLE called dogAnimals and catAnimals.
sample dogAnimals document:
{
"basicDetails": {
"dogName":"Barney", // should be encrypted
"age":5,
},
"furtherDetails": {
"dogBreedInfo": { // should be encrypted
"breedName": "Golden Retriever",
"averageLifeSpanInYears": 20
}
}
}
sample catAnimals document:
{
"catName":"Mrs Miggins", // should be encrypted
"age":2,
"catFacts": {
"favouriteHuman": "Robert Bingley", // should be encrypted
"mood": "snob"
}
}
Matching schemaMap:
{
"zoo.dogAnimals": {
"bsonType": "object",
"encryptMetadata": {
"keyId": [
{
"$binary": {
"base64": "12345678",
"subType": "04"
}
}
]
},
"properties": {
"basicDetails": {
"bsonType": "object",
"properties": {
"dogName": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
}
}
}
},
"futherDetails": {
"bsonType": "object",
"properties": {
"dogBreedInfo": {
"encrypt": {
"bsonType": "object",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
}
}
}
}
}
},
"zoo.catAnimals": {
"bsonType": "object",
"encryptMetadata": {
"keyId": [
{
"$binary": {
"base64": "12345678",
"subType": "04"
}
}
]
},
"properties": {
"catName": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
}
},
"catFacts": {
"bsonType": "object",
"properties": {
"favouriteHuman": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
}
}
}
}
}
}
}
Writing this as an answer since it's quite big.
But for the automatic process, it just prints the READY state only
AFAIK, this doesn't say a lot, since this information can be cached from previous attempts (if it's not the first run).
I've tried your documents and schemaMap above, and auto encryption encrypts 3 of your 4 fields; it doesn't work with dogAnimals.furtherDetails, because you have a typo: furtherDetails vs futherDetails. So make sure there are no other typos in your schemaMap.
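One way to catch this class of mistake early is to compare the property names in the schemaMap against the keys of a known-good sample document. A minimal sketch, assuming both are already parsed into plain dicts (the function name and reduced structures are my own):

```python
def find_unknown_schema_keys(schema_props, sample_doc):
    """Return schemaMap property names that do not appear in the sample document."""
    unknown = []
    for name, spec in schema_props.items():
        if name not in sample_doc:
            unknown.append(name)
        elif isinstance(spec, dict) and "properties" in spec and isinstance(sample_doc[name], dict):
            # Recurse into nested "properties" blocks, prefixing the parent name.
            unknown += [name + "." + child
                        for child in find_unknown_schema_keys(spec["properties"], sample_doc[name])]
    return unknown

# The dog schema above, reduced to its property names (note the typo).
dog_schema_props = {
    "basicDetails": {"properties": {"dogName": {}}},
    "futherDetails": {"properties": {"dogBreedInfo": {}}},  # typo from the schemaMap
}
sample_dog = {
    "basicDetails": {"dogName": "Barney", "age": 5},
    "furtherDetails": {"dogBreedInfo": {}},
}
```

Run against the dog schema above, this flags futherDetails, since the sample document spells it furtherDetails.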
After much debugging, it turns out the JNA library being used is not invoking the crypto binaries correctly, so it sounds like a bug. I will report this to Mongo and see if they can help fix it.
I am unable to run my IBM eVote blockchain application in Hyperledger Fabric. I am using IBM eVote in VS Code (v1.39) on Ubuntu 16.04. When I start my local Fabric (1 Org Local Fabric), I get the above error.
The following is my local_fabric_connection.json file:
{
"name": "local_fabric",
"version": "1.0.0",
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300"
},
"orderer": "300"
}
}
},
"organizations": {
"Org1": {
"mspid": "Org1MSP",
"peers": [
"peer0.org1.example.com"
],
"certificateAuthorities": [
"ca.org1.example.com"
]
}
},
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:17051"
}
},
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "http://localhost:17054",
"caName": "ca.org1.example.com"
}
}
}
And the following is the snapshot:
Based on your second image, it doesn't look like your 1 Org Local Fabric started properly in the first place (you have no gateways, and for some reason your wallets aren't grouped together!).
If you tear down your 1 Org Local Fabric and then start it again, hopefully it'll work.
I have some dumped BSON and JSON files from a MongoDB server running on Google Cloud Platform (GCP), and I want to restore the data into a new local server running version 4.0.3. However, I got errors showing that the indexes could not be restored. I had to convert {"$numberInt": "1"} in the JSON files to 1 to make the restore process succeed. Why do I need to fix the format of the dumped files myself? Is it due to the different versions of the source and target servers, or did I do something incorrectly?
I have googled and searched Stack Overflow, but I did not find anyone discussing this problem. And the MongoDB release notes do not mention any changes related to it.
Here is a JSON example that mongorestore version 4.0.3 cannot accept:
{
"options": {},
"indexes": [
{
"v": {
"$numberInt": "2"
},
"key": {
"_id": {
"$numberInt": "1"
}
},
"name": "_id_",
"ns": "demo.item"
},
{
"v": {
"$numberInt": "2"
},
"key": {
"itemId": {
"$numberDouble": "1.0"
}
},
"name": "itemId_1",
"ns": "demo.item"
}
],
"uuid": "8ce4755612da4d048b0fd38a793f2b55"
}
And this is the accepted version, which I converted on my own.
{
"options": {},
"indexes": [
{
"v": 2,
"key": {
"_id": 1
},
"name": "_id_",
"ns": "demo.item"
},
{
"v": 2,
"key": {
"itemId": 1.0
},
"name": "itemId_1",
"ns": "demo.item"
}
],
"uuid": "8ce4755612da4d048b0fd38a793f2b55"
}
And here is the script I use to do the conversion.
Questions:
Why does mongorestore not accept the dumped files created by mongodump?
Is there any way to avoid modifying the dumped files manually?
You need to use mongorestore version 4.2+, which supports the Extended JSON v2.0 format (Canonical or Relaxed mode). See the reference here.
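If upgrading mongorestore is not immediately possible, the Extended JSON v2.0 numeric wrappers in the metadata files can be collapsed to plain numbers programmatically. A hedged sketch (not the asker's script; it handles only the numeric wrappers shown above):

```python
import json

# Map of Extended JSON v2.0 numeric wrapper keys to Python constructors.
_NUM_WRAPPERS = {"$numberInt": int, "$numberLong": int, "$numberDouble": float}

def unwrap(value):
    """Recursively replace {"$numberInt": "1"}-style wrappers with plain numbers."""
    if isinstance(value, dict):
        if len(value) == 1:
            (key, inner), = value.items()
            if key in _NUM_WRAPPERS:
                return _NUM_WRAPPERS[key](inner)
        return {k: unwrap(v) for k, v in value.items()}
    if isinstance(value, list):
        return [unwrap(v) for v in value]
    return value

# A fragment shaped like the index metadata above.
metadata = json.loads(
    '{"indexes": [{"v": {"$numberInt": "2"}, "key": {"itemId": {"$numberDouble": "1.0"}}}]}'
)
converted = unwrap(metadata)
```

This covers only the wrappers that appear in the dumped index metadata; other Extended JSON types ($binary, $date, and so on) would need their own handling.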
So I just spent two working days trying to figure this out. We are automating the rendering process for maps. All the data is stored in an SQL database, and my job is to write a "wrapper" so we can implement this in our in-house framework. I managed all but one of the needed requests.
That request is POST featuretype, since this is the way to create a layer that can later be rendered.
I have all the requests saved in Postman for pre-testing on example data provided by GeoServer itself. I can't even get a response with status code 201 and always get 500 Internal Server Error. This status is described as a possible syntax error. But I actually just copied and pasted the example and used GeoServer-provided data.
This is the request: http://127.0.0.1:8080/geoserver/rest/workspaces/tiger/datastores/nyc/featuretypes
and its body:
{
"name": "poi",
"nativeName": "poi",
"namespace": {
"name": "tiger",
"href": "http://localhost:8080/geoserver/rest/namespaces/tiger.json"
},
"title": "Manhattan (NY) points of interest",
"abstract": "Points of interest in New York, New York (on Manhattan). One of the attributes contains the name of a file with a picture of the point of interest.",
"keywords": {
"string": [
"poi",
"Manhattan",
"DS_poi",
"points_of_interest",
"sampleKeyword\\#language=ab\\;",
"area of effect\\#language=bg\\;\\#vocabulary=technical\\;",
"Привет\\#language=ru\\;\\#vocabulary=friendly\\;"
]
},
"metadataLinks": {
"metadataLink": [
{
"type": "text/plain",
"metadataType": "FGDC",
"content": "www.google.com"
}
]
},
"dataLinks": {
"org.geoserver.catalog.impl.DataLinkInfoImpl": [
{
"type": "text/plain",
"content": "http://www.google.com"
}
]
},
"nativeCRS": "GEOGCS[\"WGS 84\", \n DATUM[\"World Geodetic System 1984\", \n SPHEROID[\"WGS 84\", 6378137.0, 298.257223563, AUTHORITY[\"EPSG\",\"7030\"]], \n AUTHORITY[\"EPSG\",\"6326\"]], \n PRIMEM[\"Greenwich\", 0.0, AUTHORITY[\"EPSG\",\"8901\"]], \n UNIT[\"degree\", 0.017453292519943295], \n AXIS[\"Geodetic longitude\", EAST], \n AXIS[\"Geodetic latitude\", NORTH], \n AUTHORITY[\"EPSG\",\"4326\"]]",
"srs": "EPSG:4326",
"nativeBoundingBox": {
"minx": -74.0118315772888,
"maxx": -74.00153046439813,
"miny": 40.70754683896324,
"maxy": 40.719885123828675,
"crs": "EPSG:4326"
},
"latLonBoundingBox": {
"minx": -74.0118315772888,
"maxx": -74.00857344353275,
"miny": 40.70754683896324,
"maxy": 40.711945649065406,
"crs": "EPSG:4326"
},
"projectionPolicy": "REPROJECT_TO_DECLARED",
"enabled": true,
"metadata": {
"entry": [
{
"#key": "kml.regionateStrategy",
"$": "external-sorting"
},
{
"#key": "kml.regionateFeatureLimit",
"$": "15"
},
{
"#key": "cacheAgeMax",
"$": "3000"
},
{
"#key": "cachingEnabled",
"$": "true"
},
{
"#key": "kml.regionateAttribute",
"$": "NAME"
},
{
"#key": "indexingEnabled",
"$": "false"
},
{
"#key": "dirName",
"$": "DS_poi_poi"
}
]
},
"store": {
"#class": "dataStore",
"name": "tiger:nyc",
"href": "http://localhost:8080/geoserver/rest/workspaces/tiger/datastores/nyc.json"
},
"cqlFilter": "INCLUDE",
"maxFeatures": 100,
"numDecimals": 6,
"responseSRS": {
"string": [
4326
]
},
"overridingServiceSRS": true,
"skipNumberMatched": true,
"circularArcPresent": true,
"linearizationTolerance": 10,
"attributes": {
"attribute": [
{
"name": "the_geom",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "com.vividsolutions.jts.geom.Point"
},
{},
{},
{}
]
}
}
So this is the example case, and I can't get any useful response from the server. I get code 500 with the body name (the first item in the JSON). Similarly, I get the same code with the body FeatureTypeInfo when trying an XML body (the first tag).
I already tried the request on a new instance of GeoServer in Docker (with a changed port) and still had no success.
I checked that the datastore and workspace are available and that the layer "poi" doesn't exist yet.
Here are also some logs of the request (similar for the XML body):
2018-08-03 07:35:02,198 ERROR [geoserver.rest] -
com.thoughtworks.xstream.mapper.CannotResolveClassException: name at
com.thoughtworks.xstream.mapper.DefaultMapper.realClass(DefaultMapper.java:79)
at .....
Does anyone know the solution to this and have it working? I am using GeoServer 2.13.1.
So I was still looking for the answer and, using this post (https://gis.stackexchange.com/questions/12970/create-a-layer-in-geoserver-using-rest), got to the right content to POST a featureType and hence create a layer in GeoServer.
The documentation in the REST API docs is off.
Using the above link, I found out that when using JSON there is a missing wrapper object. For the API to work, we need to send:
{
"featureType": {
"name": "...",
"nativeName": "...",
...
}
}
So the body doesn't start with the "name" attribute; instead, everything is contained in "featureType".
I didn't try this for XML, but I guess it could be similar.
Hope this helps someone out there struggling like I did.
Blaz is correct here: you need an outer featureType object and then an inner object with your config. So:
{
"featureType": {
"name": "layer",
"nativeName": "poi",
"your config": "stuff"
}
}
I find, though, that using a POST request I get very little response, if any, and it's not obvious whether the layer creation worked. But you can call http://IP:8080/geoserver/rest/layers.json to check whether your new layer is there.
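The wrapping step is easy to automate. A small sketch in Python (the field values are placeholders; the POST and the follow-up layers.json check are indicated in comments rather than performed):

```python
import json

def wrap_feature_type(config):
    """GeoServer's JSON featuretypes endpoint expects the config nested
    under a single "featureType" key rather than at the top level."""
    return {"featureType": config}

body = json.dumps(wrap_feature_type({"name": "poi", "nativeName": "poi"}))
# POST `body` with Content-Type: application/json to
# http://127.0.0.1:8080/geoserver/rest/workspaces/tiger/datastores/nyc/featuretypes
# and then GET .../rest/layers.json to confirm the new layer exists.
```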
It cost me a lot of time to create FeatureTypes using the REST API. Using JSON like this really works:
{
"featureType": {
"name": "layer",
"nativeName": "poi",
"otherProperties...": "values..."
}
}
And use the JSON below to create a workspace:
{
"workspace": {
"name": "test_workspace"
}
}
The REST API documentation is out of date now. That's disappointing. Does anyone know how to get the latest REST API documentation?
I'm currently working with an action package that declares it will handle the following intents:
actions.intent.MAIN
actions.intent.TEXT
actions.intent.OPTION
I've started with the first two, and by proxying these to my own NLP/response gathering I'm able to get basic functionality working. I'm now trying to move forward with showing the user lists using askWithList. My action package is defined as follows:
{
"actions": [
{
"name": "MAIN",
"fulfillment": {
"conversationName": "JamesTest"
},
"intent": {
"name": "actions.intent.MAIN"
}
},
{
"name": "TEXT",
"fulfillment": {
"conversationName": "JamesTest"
},
"intent": {
"name": "actions.intent.TEXT"
}
},
{
"name": "OPTION",
"fulfillment": {
"conversationName": "JamesTest"
},
"intent": {
"name": "actions.intent.OPTION"
}
}
],
"conversations": {
"JamesTest": {
"name": "JamesTest",
"url": "myngrok"
}
}
}
When I try to respond with askWithList and test in the simulator I get the following error:
{
"name": "ResponseValidation",
"subDebugEntry": [{
"name": "MalformedResponse",
"debugInfo": "expected_inputs[0].possible_intents[0]: intent 'actions.intent.OPTION' is only supported for version 2 and above."
}]
}
Per the documentation, my understanding was that all projects created after May 17, 2017 would use the version 2 SDK by default. I also cannot find any indication that I could explicitly declare which version I would like to use in the action package definition.
Has anyone run into this? Is this just a limitation of the simulator, or am I missing something obvious?
It looks as though there is an undocumented (at least I can't find it) field in the conversations block called fulfillmentApiVersion that must be set to 2 in your action package. Answer sourced from here: askWithList on Actions on Google
You're missing something not in the least bit obvious. {: The documentation for this is somewhat hidden, and the gactions command still generates a version 1 JSON file.
The action package must explicitly indicate the version that it is using, otherwise it will be assumed to be using version 1.
To specify version 2, your "conversations" section should look something like:
"conversations": {
"JamesTest": {
"name": "JamesTest",
"url": "myngrok",
"fulfillmentApiVersion": 2
}
}
Note the "fulfillmentApiVersion" parameter.
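To retrofit an existing version 1 action package (for example, one generated by gactions), a small sketch along these lines could stamp the field onto every conversation; the helper name and the minimal package dict are my own:

```python
def set_api_version(package, version=2):
    """Add fulfillmentApiVersion to every entry in the conversations block."""
    for conv in package.get("conversations", {}).values():
        conv["fulfillmentApiVersion"] = version
    return package

# Hypothetical minimal action package, mirroring the structure above.
pkg = {"conversations": {"JamesTest": {"name": "JamesTest", "url": "myngrok"}}}
patched = set_api_version(pkg)
```

After patching, the package can be re-serialized with json.dump and pushed with gactions as usual.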