column "scopes" does not exist when using Loopback 3.8.0 - loopback

I hit a weird problem where it says column "scopes" does not exist. Here is the log I see on the server but not in my local environment:
Unhandled error for request GET /api/continents?access_token=aaaaaaaaaaabbbbbbbbbbbbbbbbL1AwzSoH8eHXwPdjzQATRXqto3lngEokVxR2j: error: column "scopes" does not exist
2017-05-05T04:35:06.642201+00:00 app[web.1]: at Connection.parseE (/app/node_modules/pg/lib/connection.js:569:11)
2017-05-05T04:35:06.642202+00:00 app[web.1]: at Connection.parseMessage (/app/node_modules/pg/lib/connection.js:396:17)
2017-05-05T04:35:06.642203+00:00 app[web.1]: at TLSSocket.<anonymous> (/app/node_modules/pg/lib/connection.js:132:22)
2017-05-05T04:35:06.642204+00:00 app[web.1]: at emitOne (events.js:96:13)
2017-05-05T04:35:06.642209+00:00 app[web.1]: at TLSSocket.emit (events.js:188:7)
2017-05-05T04:35:06.642210+00:00 app[web.1]: at readableAddChunk (_stream_readable.js:176:18)
2017-05-05T04:35:06.642210+00:00 app[web.1]: at TLSSocket.Readable.push (_stream_readable.js:134:10)
2017-05-05T04:35:06.642211+00:00 app[web.1]: at TLSWrap.onread (net.js:547:20)
All APIs involving an access token fail for the same reason. If no access token is set, the APIs work as expected (public endpoints return data; endpoints requiring authentication return 401/403).
I tried running locally and it works; I tried heroku local and it works too. After a lot of testing, I found (and verified) that the difference is that both my local setup and heroku local are running Loopback 3.4.0, while my server is running 3.8.0.
After I forced the server to use 3.4.0, everything went back to normal.
Digging into /node_modules/loopback/common/models/access-token.json, here are the differences between 3.4.0 and 3.8.0:
Loopback 3.4.0:
"name": "AccessToken",
"properties": {
  "id": { "type": "string", "id": true },
  "ttl": { "type": "number", "ttl": true, "default": 1209600, "description": "time to live in seconds (2 weeks by default)" },
  "created": { "type": "Date", "defaultFn": "now" }
},
Loopback 3.8.0:
"name": "AccessToken",
"properties": {
  "id": { "type": "string", "id": true },
  "ttl": { "type": "number", "ttl": true, "default": 1209600, "description": "time to live in seconds (2 weeks by default)" },
  "scopes": {
    "type": ["string"],
    "description": "Array of scopes granted to this access token."
  },
  "created": { "type": "Date", "defaultFn": "now" }
},
Since I don't check node_modules into source control, does anyone know how I can fix the issue?

I just ran into the same issue after upgrading to Loopback v3.8.
You can remedy the issue by autoupdating the AccessToken table using a script. Here is a basic version of an autoupdate script:
var path = require('path');
var app = require(path.resolve(__dirname, '../server/server'));
var ds = app.datasources.db;

function update() {
  // migrate AccessToken
  ds.autoupdate('AccessToken', function (err) {
    console.log("ds.autoupdate('AccessToken') err=", err);
    if (err) throw err;
    ds.disconnect();
  });
}

// console.log("ds=", ds)
console.log("ds.connected=", ds.connected);
if (ds.connected) {
  // Run autoupdate right away
  update();
} else {
  // Wait for the datasource to connect before running autoupdate
  ds.once('connected', function () {
    update();
  });
}
You can run this by naming it autoupdate.js, putting it in the server directory, and then running node autoupdate.js from the console.
Then you will be golden.
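Alternatively, the same call can live in a Loopback boot script so the table is updated automatically whenever the app starts. This is only a minimal sketch, assuming your datasource is named db (as in the script above) and that you are comfortable running autoupdate on every startup; the boot file name is hypothetical:
// server/boot/update-access-token.js (hypothetical file name)
module.exports = function (app) {
  var ds = app.datasources.db; // assumes the datasource is called "db"
  // Add any columns (such as "scopes") that are missing from the AccessToken table
  ds.autoupdate('AccessToken', function (err) {
    if (err) {
      console.error('autoupdate of AccessToken failed:', err);
      return;
    }
    console.log('AccessToken table is up to date.');
  });
};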

I faced the same trouble moving from Loopback 2.x to 3.17.0.
I just did an alter table:
ALTER TABLE accesstoken ADD scopes TEXT;
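If you might run the migration more than once, you can make it idempotent (a sketch; this relies on PostgreSQL 9.6 or newer, which supports IF NOT EXISTS on ADD COLUMN):
ALTER TABLE accesstoken ADD COLUMN IF NOT EXISTS scopes TEXT;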

Related

Error connecting to environment 1 Org Local Fabric: Error querying channels: 14 UNAVAILABLE: failed to connect to all addresses

I am unable to run my IBM Evote blockchain application in Hyperledger Fabric. I am using IBM Evote in VS Code (v1.39) on Ubuntu 16. When I start my local fabric (1 Org Local Fabric), I am facing the above error.
Following is my local_fabric_connection.json file:
{
  "name": "local_fabric",
  "version": "1.0.0",
  "client": {
    "organization": "Org1",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": [
        "peer0.org1.example.com"
      ],
      "certificateAuthorities": [
        "ca.org1.example.com"
      ]
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpc://localhost:17051"
    }
  },
  "certificateAuthorities": {
    "ca.org1.example.com": {
      "url": "http://localhost:17054",
      "caName": "ca.org1.example.com"
    }
  }
}
and following is the snapshot
Based on your second image, it doesn't look like your 1 Org Local Fabric started properly in the first place (you have no gateways, and for some reason your wallets aren't grouped together!).
If you tear down your 1 Org Local Fabric and then start it again, hopefully it'll work.
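One quick way to confirm it really came back up (an assumption here is that the 1 Org Local Fabric runs its peer, orderer, and CA as Docker containers, which is the default for the extension) is to list the running containers and check their status:
docker ps --format "table {{.Names}}\t{{.Status}}"
# If the fabric peer/orderer/CA containers are missing or keep restarting,
# the environment did not start and a teardown + restart is the next step.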

Azure DB for PostgreSQL - changes to log_line_prefix parameter not implemented

I have a General Purpose Single Server instance of Azure DB for PostgreSQL where I have installed the pgAudit plugin.
I am trying to add more data to the pgAudit session auditing entries by following the instructions on Microsoft's page and PostgreSQL's page, and I tried setting log_line_prefix to the following configurations:
t=%t c=%c a=%a u=%u d=%d r=%r% h=h% e=e c=%c
%t,%c,%a,%u,%d,%r,%h,%e,%c
%t%c%a%u%d%r%h%e%c
None of these has any effect on the events collected. Here is most of what an INSERT looks like:
{
  "LogicalServerName": "postgresql4moi",
  "SubscriptionId": "****",
  "ResourceGroup": "OLC_Research",
  "time": "2020-05-05T12:10:59Z",
  "resourceId": "***",
  "category": "PostgreSQLLogs",
  "operationName": "LogEvent",
  "properties": {
    "prefix": "t=2020-05-05 12:10:59 UTC c=5eb157c4.5c a=DBeaver 7.0.1 - SQLEditor <testingScript.sql> u=system d=postgres r=****.234(4344)h=he=e c=5eb157c4.5c",
    "message": "AUDIT: SESSION,6,1,WRITE,INSERT,,,\"INSERT INTO public.koko_table VALUES ('kokoMoko','kokoMoko')\",<none>",
    "detail": "",
    "errorLevel": "LOG",
    "domain": "postgres-11",
    "schemaName": "",
    "tableName": "",
    "columnName": "",
    "datatypeName": ""
  }
}
Is there something else I forgot to configure?
I even restarted the database after each attempt to set the parameter.
Thanks in advance.

Strapi EADDRNOTAVAIL error while deploying on Dokku

I am trying to deploy Strapi on a Dokku instance on a DigitalOcean droplet. I originally ran into some issues connecting to the Mongo database, but after some trial and error and a lot of review of these docs and this issue, I was able to get it to stop complaining about the Mongo connection. Here is my final config/environments/production/database.json:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "mongoose",
      "settings": {
        "client": "mongo",
        "uri": "${process.env.MONGO_URL}",
        "database": "${process.env.DATABASE_NAME}",
        "username": "${process.env.DATABASE_USERNAME}",
        "password": "${process.env.DATABASE_PASSWORD}",
        "port": "${process.env.DATABASE_PORT || 27017}"
      },
      "options": {
        "authenticationDatabase": "${process.env.DATABASE_AUTHENTICATION_DATABASE || ''}",
        "useUnifiedTopology": "${process.env.USE_UNIFIED_TOPOLOGY || false}",
        "ssl": "${process.env.DATABASE_SSL || false}"
      }
    }
  }
}
Here is my config/environments/production/server.json
{
  "host": "${process.env.HOST || '0.0.0.0'}",
  "port": "${process.env.PORT || 1337}",
  "production": true,
  "proxy": {
    "enabled": false
  },
  "cron": {
    "enabled": false
  },
  "admin": {
    "autoOpen": false
  }
}
I believe the original issue was that I was accidentally using the PORT variable for the database instead of the DATABASE_PORT variable.
However, now that I have that worked out I am getting this error:
error Error: listen EADDRNOTAVAIL: address not available <my-host-ip>:5000
I thought maybe there was some wrong port being cached somewhere, but regardless of what I do, I can't seem to get it to work. Do I need to enable SSL and then add a Let's Encrypt cert to my domain? Am I using the wrong ports? Should I set a proxy in server.json?
PS. I am using Dokku Mongo. I didn't think that would be an issue, considering the dynos don't go to sleep like they would on Heroku. Is that an incorrect assumption?
Also, there are other apps running on the droplet. Maybe a proxy problem?

Hyperledger Explorer Error 12 UNIMPLEMENTED: service protos.Endorser

I am trying to run Hyperledger Explorer for my blockchain network. I have followed the instructions almost word for word using the Hyperledger Explorer.
But any time I do the final call, ./start.sh, I get a litany of errors:
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 12 UNIMPLEMENTED: unknown service protos.Endorser
at new createStatusError (/home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:64:15)
at /home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:583:15
error: [Client.js]: Failed Installed Chaincodes Query. Error: Error: 12 UNIMPLEMENTED: unknown service protos.Endorser
at new createStatusError (/home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:64:15)
at /home/ubuntu/bludev/blockchain-explorer/node_modules/grpc/src/client.js:583:15
...
And so on. For more info, I am using Node.js 6.9 and PostgreSQL 9.5. This is what my config.json file looks like:
{
  "network-config": {
    "org1": {
      "name": "peerOrg1",
      "mspid": "Org1MSP",
      "peer1": {
        "requests": "grpc://127.0.0.1:7051",
        "events": "grpc://127.0.0.1:7053",
        "server-hostname": "peer0.org1.example.com",
        "tls_cacerts": "/home/ubuntu/bludev/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      },
      "admin": {
        "key": "/home/ubuntu/bludev/hlcomposer/fabric-dev-servers/fabric-scripts/hlfv1/composer/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/keystore",
        "cert": "/home/ubuntu/bludev/hlcomposer/fabric-dev-servers/fabric-scripts/hlfv1/composer/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/signcerts"
      }
    }
  },
  "host": "localhost",
  "port": "3000",
  "channel": "mychannel",
  "keyValueStore": "/tmp/fabric-client-kvs",
  "eventWaitTime": "30000",
  "pg": {
    "host": "12.109.99.233",
    "port": "3000",
    "database": "fabricexplorer",
    "username": "postgres",
    "passwd": "password1"
  }
}
The problem is that your Hyperledger network does not have any endorser in the network.
Try the first-network sample from the official fabric-samples folder, rebuild the Explorer, and then try again.
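For reference, bringing up the first-network sample usually looks like this (a sketch; it assumes you have cloned fabric-samples and downloaded the matching Fabric binaries and Docker images):
cd fabric-samples/first-network
./byfn.sh generate          # create crypto material and channel artifacts
./byfn.sh up -c mychannel   # start the network with channel "mychannel"
Once that network is running, point the Explorer's config.json at it and run ./start.sh again.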

JasperReports Server 6.2 - Error 400:bad request - User creation with roles REST v2

I am not sure what is going wrong with the create user API with roles.
Observations:
When the request is fired without roles, it works fine; the payload is given below:
{
  "fullName": "unittestuser",
  "emailAddress": null,
  "enabled": true,
  "password": "39HN=K?E",
  "roles": null
}
When the same endpoint is invoked with the addition of roles, it fails with HTTP error code 400 (Bad Request):
{
  "fullName": "unittestuser",
  "emailAddress": null,
  "enabled": true,
  "password": "39HN=K?E",
  "roles": [
    { "name": "unittest" },
    { "name": "UsernamePasswordAuthentication" },
    { "name": "Platform_NamedUser" },
    { "name": "Platform_Anyone" },
    { "name": "Platform_Metadata_MetadataInitializeUser" }
  ]
}
The roles part works when the default roles shipped with the JasperReports Server installation are sent:
{
  "fullName": "unittestuser3",
  "emailAddress": null,
  "externallyDefined": false,
  "enabled": true,
  "password": "39HN=K?E",
  "roles": [
    { "name": "ROLE_USER" },
    { "name": "ROLE_ADMINISTRATOR" }
  ]
}
I have checked that the new roles I have created are present on the JasperReports Server before the create user call is made, so I am not sure what is going wrong with the newly created roles. I am using the REST API v2 for role creation as well as user creation.
I have also tried creating the user first with empty roles and then adding the roles in an update call; it still fails with the same error.
Let me know if anyone has a clue.
Fixed... from 6.0 onwards, the tenantId has to be passed along with the name of the role.
So instead of:
{ "name": "unittest" }
I passed: { "name": "unittest", "tenantId": "myorg" }
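For completeness, here is what the full create-user payload looks like with the tenantId added to every custom role (a sketch based on the payloads above; "myorg" is assumed to be the organization the roles were created under):
{
  "fullName": "unittestuser",
  "emailAddress": null,
  "enabled": true,
  "password": "39HN=K?E",
  "roles": [
    { "name": "unittest", "tenantId": "myorg" },
    { "name": "UsernamePasswordAuthentication", "tenantId": "myorg" },
    { "name": "Platform_NamedUser", "tenantId": "myorg" },
    { "name": "Platform_Anyone", "tenantId": "myorg" },
    { "name": "Platform_Metadata_MetadataInitializeUser", "tenantId": "myorg" }
  ]
}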