Unable to create flow in Corda after rebuilding nodes (PostgreSQL database)

I created persistent PostgreSQL databases for the nodes in Corda. After setting up the databases and building the nodes I'm able to record a flow to the database, but after rebuilding the nodes and running them again I'm unable to create the same flow anymore.
I guess this means that the database still has old information about the nodes, but then how can one update the nodes and retain the old states from the database?
This is the error I get from running the same flow after rebuilding.
"net.corda.core.CordaRuntimeException: The Initiator of CollectSignaturesFlow must pass in exactly the sessions required to sign the transaction. "
My deployNodes task:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
            'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
            'database.transactionIsolationLevel' : 'READ_COMMITTED',
            'database.initialiseSchema': "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=NetworkMapAndNotary,L=Helsinki,C=FI"
        notary = [validating : true]
        rpcSettings {
            address "localhost:10004"
            adminAddress "localhost:10044"
        }
        p2pPort 10002
        extraConfig = ext.extraConfig + [
                'dataSourceProperties.dataSource.url' :
                        "jdbc:postgresql://localhost:5432/nms_db?currentSchema=nms_schema",
                'dataSourceProperties.dataSource.user' : "nms_corda",
                'dataSourceProperties.dataSource.password' : "corda1234",
        ]
        drivers = ext.drivers
    }
    node {
        name "O=AccountOperator,L=Helsinki,C=FI"
        p2pPort 10005
        rpcSettings {
            address "localhost:10006"
            adminAddress "localhost:10046"
        }
        webPort 10007
        rpcUsers = [[user: "user1", password: "test", permissions: ["ALL"]]]
        extraConfig = ext.extraConfig + [
                'dataSourceProperties.dataSource.url' :
                        "jdbc:postgresql://localhost:5432/ao_db?currentSchema=ao_schema",
                'dataSourceProperties.dataSource.user' : "ao_corda",
                'dataSourceProperties.dataSource.password' : "corda1234",
        ]
        drivers = ext.drivers
    }
}
I have also tried running run clearNetworkMapCache on each node's RPC shell. After running it I can see that the node_info_hash and node_infos tables are empty, but then how can I update those tables with the right node info?
For example, this "Migrating data from Corda 2 to Corda 3" question says I should rerun every transaction after upgrading CorDapps; does that also apply to a regular CorDapp update, and how is it done? Also, this instruction (https://docs.corda.net/head/node-operations-upgrade-cordapps.html) says "CorDapps must ship with database migration scripts or clear documentation about how to update the database to be compatible with the new version." But I tried to migrate some previous states to a new database instance with no luck.

I faced this issue once when I migrated code from Corda 3 to Corda 4. I fixed it by passing the flow sessions into both CollectSignaturesFlow and FinalityFlow: in Corda 4 the initiator must pass exactly one session per signing counterparty to CollectSignaturesFlow, and sessions for the counterparties to FinalityFlow as well.
Hope that helps.

Related

Prisma fails on Postgres query with findMany

I have the following query in Prisma that returns all users whose campaign id is one from the array I provide and who were added to the system within the defined time range. I also have another entity, Click; each user's clicks should be included in the response.
const users = await this.prisma.user.findMany({
  where: {
    campaign: {
      in: [
        ...campaigns.map((campaign) => campaign.id),
        ...campaigns.map((campaign) => campaign.name),
      ],
    },
    createdAt: {
      gte: dateRange.since,
      lt: dateRange.until,
    },
  },
  include: {
    clicks: true,
  },
});
The problem is that this query runs fine on localhost, where I don't have much data, but the production database has nearly 500,000 users and 250,000 clicks in total. I am not sure if that is the root cause, but the query fails with the following exception:
Error:
Invalid `this.prisma.user.findMany()` invocation in
/usr/src/app/dist/xyx/xyx.service.js:135:58
132 }
133 async getUsers(campaigns, dateRange) {
134 try {
→ 135 const users = await this.prisma.user.findMany(
Can't reach database server at `xyz`:`25060`
Please make sure your database server is running at `xyz`:`25060`.
Prisma error code is P1001.
(xyz is redacted, for obvious reasons, in the paths and the DB connection string.)
The only solution we found is to determine the limit your query can handle and then use the pagination params (skip, take) in a loop to download the data part by part and glue it back together. Not optimal, but it works. See this existing bug report, for example:
https://github.com/prisma/prisma/issues/8832
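For illustration, here is a minimal sketch of that paginated workaround. The page size, the function name getUsersPaged, and the stable ordering on id are my choices, not from the bug report; tune the page size to whatever your connection survives.

// Hypothetical paginated version of the query above: fetch users in
// fixed-size pages via skip/take and concatenate the results.
const PAGE_SIZE = 10000; // assumed limit, not from the bug report

async function getUsersPaged(prisma, campaigns, dateRange) {
  const where = {
    campaign: { in: campaigns.map((campaign) => campaign.id) },
    createdAt: { gte: dateRange.since, lt: dateRange.until },
  };
  const users = [];
  for (let skip = 0; ; skip += PAGE_SIZE) {
    const page = await prisma.user.findMany({
      where,
      include: { clicks: true },
      orderBy: { id: "asc" }, // stable order so pages don't overlap
      skip,
      take: PAGE_SIZE,
    });
    users.push(...page);
    if (page.length < PAGE_SIZE) break; // last page reached
  }
  return users;
}

Prisma also supports cursor-based pagination (the cursor option on a unique field), which avoids the increasingly expensive OFFSET scans that skip incurs on later pages, at the cost of a little more bookkeeping.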

MongoDB Replica Set Snapshot with Lambda Function best practice

I'm quite new to MongoDB and a bit confused. I'm trying to create an automated backup routine on AWS to be used in production, and I want to make sure I'm doing it correctly.
So far I have set up a replica set with 1 arbiter and 3 members (1 primary, 1 secondary, 1 hidden member delayed by 4 hours). Each member has 3 separate EBS volumes (data 100GB, journal 20GB, log 10GB).
I created a Lambda function in NodeJS that runs every hour (via a CloudWatch Event) to take a snapshot; it performs the following operations:
MongoClient connects to the hidden delayed member mongodb://admin:password@ec2.private.ip:27017/admin
Flush all pending write operations with db.command({ fsync: 1, lock: true })
Create an EC2 connection to retrieve all EBS volumes tagged LambdaSnapshot:
const ec2Conn = new AWS.EC2({ region: 'us-west-1' });
const params = {
  Filters: [
    {
      Name: "tag-key",
      Values: ["LambdaSnapshot"],
    },
  ],
};
const volumes = (await ec2Conn.describeVolumes(params).promise()).Volumes;
const volumeIds = volumes.map((volume) => volume.VolumeId);
Create a snapshot on each volume:
return Promise.all(
  volumeIds.map(async (volumeId) => {
    const formattedDate = moment().format("DD/MM/YYYY HH:mm:ss");
    const snapshot = await ec2Conn
      .createSnapshot({
        Description: "Snapshot " + volumeId + " taken on " + formattedDate,
        VolumeId: volumeId,
        TagSpecifications: [
          {
            ResourceType: "snapshot",
            Tags: [
              {
                Key: "Name",
                Value: volumeId + " " + formattedDate, // was `volume`, which is not in scope here
              },
              {
                Key: snapshotTag, // snapshotTag is a constant defined elsewhere in the Lambda
                Value: volumeId + " " + formattedDate,
              },
            ],
          },
        ],
      })
      .promise();
    return snapshot.SnapshotId;
  })
);
Unlock the instance for writes with db.command({ fsyncUnlock: 1 })
I have two main doubts.
I have tagged as LambdaSnapshot only the EBS volume containing the data (the 100GB one) of the hidden delayed member. I'm not sure whether I have to take snapshots of the journal and log volumes as well.
I noticed that even though I'm using await on the command that creates the snapshot, the function continues anyway and unlocks the instance while the snapshot is still pending. I believe createSnapshot() only tells AWS to start the snapshot and resolves the promise without waiting for completion. So I'm unsure whether I have to unlock the db outside the Lambda function once the snapshot completes; in that case I don't know how to listen for the completion event to run a second Lambda function that unlocks the db.
Thanks in advance
As stated in the docs, EBS snapshot creation is asynchronous:
Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
Backup documentation says:
To get a correct snapshot of a running mongod process, you must have journaling enabled and the journal must reside on the same logical volume as the other MongoDB data files. Without journaling enabled, there is no guarantee that the snapshot will be consistent or valid.
Unless you have another documentation reference saying you do NOT need to back up log or journal data with all the other data, I suggest backing up everything together.
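On the second doubt (unlocking while the snapshot is still pending): one option is to hold the fsync lock until the snapshots actually complete, using the SDK's built-in "snapshotCompleted" waiter, which polls DescribeSnapshots for you. A minimal sketch, assuming the AWS SDK for JavaScript v2 and the official mongodb driver; the function name and the waiter limits are my choices:

// Sketch: flush and lock, start the snapshots, wait for them to complete,
// then release the lock. Assumes AWS SDK v2 and the mongodb Node driver.
const AWS = require("aws-sdk");
const { MongoClient } = require("mongodb");

async function snapshotWithLock(volumeIds, mongoUri) {
  const ec2 = new AWS.EC2({ region: "us-west-1" });
  const client = await MongoClient.connect(mongoUri);
  const admin = client.db("admin");
  await admin.command({ fsync: 1, lock: true }); // flush and block writes
  try {
    const snapshots = await Promise.all(
      volumeIds.map((VolumeId) => ec2.createSnapshot({ VolumeId }).promise())
    );
    // Wait for completion; raise the waiter limits, since the default
    // (40 attempts x 15s) is far less than the hours a big snapshot can take.
    await ec2
      .waitFor("snapshotCompleted", {
        SnapshotIds: snapshots.map((s) => s.SnapshotId),
        $waiter: { delay: 30, maxAttempts: 720 }, // roughly 6 hours
      })
      .promise();
  } finally {
    await admin.command({ fsyncUnlock: 1 }); // always release the lock
  }
  await client.close();
}

Note that a single Lambda invocation is capped at 15 minutes, so for snapshots that take hours you would have to move the wait-and-unlock step out of the original function, for example into a Step Functions workflow or a second scheduled Lambda that polls DescribeSnapshots and unlocks the member once everything is completed.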

How to pass a parameter to dashDB query using Node-RED editor?

In my Bluemix Node-RED application I use the Cloudant and dashDB services. I replicated a Cloudant database into dashDB; it contains multiple values stored in a table, such as DALERT, DEVICE, ID, etc.
I am trying to search for records in the CLOUDANT table in my dashDB where the DALERT column equals critical.
I am trying it this way in the Node-RED editor, but I am unable to retrieve data from dashDB:
[
{"id":"9941f62b.66be08","type":"http in","name":"","url":"/get/specificcritical","method":"get","swaggerDoc":"","x":103.5,"y":409,"z":"c96fb1cb.36905","wires":[["b52196bf.4ade68","292d3e2e.d6d2c2"]]},
{"id":"b52196bf.4ade68","type":"function","name":"","func":"msg.Dalert=msg.payload.Dalert;\n\nreturn msg;","outputs":1,"noerr":0,"x":326,"y":353,"z":"c96fb1cb.36905","wires":[["457cf34.fba830c","d937915d.26c87"]]},
{"id":"457cf34.fba830c","type":"dashDB in","service":"dashDB-9a","query":"select * from XXXXX.CLOUDANT WHERE DALERT=?;","params":"msg.Dalert","name":"","x":510,"y":404,"z":"c96fb1cb.36905","wires":[["60d36407.9f2c9c","886c48df.7793b8"]]},
{"id":"d937915d.26c87","type":"debug","name":"","active":false,"console":"true","complete":"payload","x":599,"y":327,"z":"c96fb1cb.36905","wires":[]},
{"id":"60d36407.9f2c9c","type":"debug","name":"","active":true,"console":"false","complete":"false","x":758,"y":397,"z":"c96fb1cb.36905","wires":[]},
{"id":"886c48df.7793b8","type":"http response","name":"","x":771,"y":477,"z":"c96fb1cb.36905","wires":[]},
{"id":"292d3e2e.d6d2c2","type":"debug","name":"","active":true,"console":"true","complete":"payload","x":321,"y":483,"z":"c96fb1cb.36905","wires":[]}
]
Please let me know if there is any solution.
If I understood your question correctly, you just need to modify your function node with something similar to this:
msg.dalert="critical";
return msg;
This assumes your DALERT column in the CLOUDANT table is of type VARCHAR; you may need to change it if the column is a different type in your database.
Running the application like:
http://yourappname.mybluemix.net/get/specificcritical
will result in output similar to this for my table:
[
  {
    "DALERT": "critical",
    "DEVICE": "device1",
    "ID": 1
  },
  {
    "DALERT": "critical",
    "DEVICE": "device3",
    "ID": 3
  },
  {
    "DALERT": "critical",
    "DEVICE": "device5",
    "ID": 5
  }
]
Here is the new node flow I created with the changes (I added an input node with blank message just to test the flow in the editor):
[{"id":"c7468303.38b98","type":"http in","name":"","url":"/get/specificcritical","method":"get","swaggerDoc":"","x":125,"y":245,"z":"8e2ae4a.f71d518","wires":[["c685ce8c.397a3","20dfeeba.df2012"]]},{"id":"c685ce8c.397a3","type":"function","name":"","func":"msg.dalert=\"critical\";\nreturn msg;","outputs":1,"noerr":0,"x":347.5,"y":189,"z":"8e2ae4a.f71d518","wires":[["e1f8c153.1e074","1f2d6f8e.e0d29"]]},{"id":"e1f8c153.1e074","type":"dashDB in","service":"dashDB-0a","query":"select * from CLOUDANT WHERE DALERT=?;","params":"msg.dalert","name":"","x":531.5,"y":240,"z":"8e2ae4a.f71d518","wires":[["f1810e4c.0e7ef","1401dc1a.ebfe24"]]},{"id":"1f2d6f8e.e0d29","type":"debug","name":"","active":false,"console":"true","complete":"payload","x":620.5,"y":163,"z":"8e2ae4a.f71d518","wires":[]},{"id":"f1810e4c.0e7ef","type":"debug","name":"dashDB Output","active":true,"console":"false","complete":"payload","x":779.5,"y":233,"z":"8e2ae4a.f71d518","wires":[]},{"id":"1401dc1a.ebfe24","type":"http response","name":"","x":792.5,"y":313,"z":"8e2ae4a.f71d518","wires":[]},{"id":"20dfeeba.df2012","type":"debug","name":"","active":true,"console":"true","complete":"payload","x":342.5,"y":319,"z":"8e2ae4a.f71d518","wires":[]},{"id":"1f530672.e0acfa","type":"inject","name":"","topic":"","payload":"","payloadType":"none","repeat":"","crontab":"","once":false,"x":122,"y":112,"z":"8e2ae4a.f71d518","wires":[["c685ce8c.397a3"]]}]
I found the answer to my question.
If we want to pass a value as a parameter, just write msg.variable = msg.payload.variable; return msg; inside the function node, and reference the same msg.variable in the query params of the dashDB in node, e.g. msg.Dalert = msg.payload.Dalert;
The value critical is then passed with the URL, as in http://yourappname.mybluemix.net/get/specificcritical?Dalert=critical
Here is the simple working Node-RED flow:
[
{"id":"9941f62b.66be08","type":"http in","name":"","url":"/get/specificcritical","method":"get","swaggerDoc":"","x":103.5,"y":409,"z":"c96fb1cb.36905","wires":[["b52196bf.4ade68","292d3e2e.d6d2c2"]]},
{"id":"b52196bf.4ade68","type":"function","name":"","func":"msg.Device=msg.payload.Device;\n\nreturn msg;","outputs":1,"noerr":0,"x":326,"y":353,"z":"c96fb1cb.36905","wires":[["457cf34.fba830c","d937915d.26c87"]]},
{"id":"457cf34.fba830c","type":"dashDB in","service":"dashDB-XX","query":"select * from XXXXX.CLOUDANT WHERE DEVICE=?","params":"msg.Device","name":"","x":510,"y":404,"z":"c96fb1cb.36905","wires":[["60d36407.9f2c9c","886c48df.7793b8"]]},
{"id":"d937915d.26c87","type":"debug","name":"","active":false,"console":"true","complete":"payload","x":599,"y":327,"z":"c96fb1cb.36905","wires":[]},
{"id":"60d36407.9f2c9c","type":"debug","name":"","active":true,"console":"false","complete":"false","x":758,"y":397,"z":"c96fb1cb.36905","wires":[]},
{"id":"886c48df.7793b8","type":"http response","name":"","x":771,"y":477,"z":"c96fb1cb.36905","wires":[]},
{"id":"292d3e2e.d6d2c2","type":"debug","name":"","active":true,"console":"true","complete":"payload","x":321,"y":483,"z":"c96fb1cb.36905","wires":[]}
]
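For reference, the function node in that flow boils down to the following body; the fallback default is my addition, not part of the original flow:

// Node-RED function node: for an "http in" GET node, the parsed
// query-string parameters arrive in msg.payload, so a request like
// /get/specificcritical?Dalert=critical populates msg.payload.Dalert.
msg.Dalert = msg.payload.Dalert || "critical"; // assumed default when the param is missing
return msg;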

Meteor application not seeding db in deployment

I have a Meteor application which seeds MongoDB documents on startup:
Meteor.startup(function () {
  Dynamics.remove({});
  Dynamics.insert({ name : "voteTimer", time : 0 });
  Dynamics.insert({ name : "winningWord", content : "" });
});
These are then read in a React component, e.g.:
getMeteorData() {
  return {
    winningWord: Dynamics.findOne({name: "winningWord"}).content
  }
},
On my local machine this works fine. Once deployed via meteor deploy, however, the app crashes:
Cannot read property 'content' of undefined
This indicates that there are no documents in the Dynamics collection. Stranger still, I am able to access these variables in the Chrome dev console.
Even if you insert the items on startup, those inserts are asynchronous, and your component's getMeteorData probably still tries to fetch the document before it has been inserted. Since getMeteorData is reactive (I think), you simply need to check that findOne returns an actual document, and it will work as soon as the document is ready:
getMeteorData() {
  var dynamic = Dynamics.findOne({name: "winningWord"});
  if (dynamic) {
    return {
      winningWord: dynamic.content
    }
  }
  return {winningWord: ""}; // whatever
},

How to query attributes inside a role in Chef?

I am using Chef version 10.16.2.
I have a role (in Ruby format) and need to access an attribute set in one of the cookbooks, e.g.:
name "basebox"
description "A basic box with some packages, ruby and rbenv installed"
deployers = node['users']['names'].find {|k,v| v['role'] == "deploy" }
override_attributes {
{"rbenv" => {
"group_users" => deployers
}
}
}
run_list [
"recipe[users]",
"recipe[packages]",
"recipe[nginx]",
"recipe[ruby]"
]
I am using chef-solo, so I cannot use search as described at http://wiki.opscode.com/display/chef/Search#Search-FindNodeswithaRoleintheExpandedRunList
How do I access node attributes in a role definition?
Roles are JSON data.
That is, when you upload the role Ruby file to the server with knife, it is converted to JSON. Consider this role:
name "gaming-system"
description "Systems used for gaming"
run_list(
"recipe[steam::installer]",
"recipe[teamspeak3::client]"
)
When I upload it with knife role from file gaming-system.rb, I have this on the server:
{
  "name": "gaming-system",
  "description": "Systems used for gaming",
  "json_class": "Chef::Role",
  "default_attributes": {
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "recipe[steam::installer]",
    "recipe[teamspeak3::client]"
  ],
  "env_run_lists": {
  }
}
The reason for the Ruby DSL is that it is "nicer" or "easier" to write than the JSON. Compare the lines and syntax, and it's easy to see which is preferable to new users (who may not be familiar with JSON).
That data is consumed through the API. If you need to do any logic with attributes on your node, do it in a recipe.
Not sure if I follow 100%, but if you want to access an attribute set by a role from within a recipe, you just call it like any other node attribute. For example, in the case you presented, assuming the node has the basebox role in its run_list, you would just call:
node['rbenv']['group_users']
The role attributes are merged into the node.
HTH