Not able to set up my loopback model. Error: Persisted model has not been correctly attached to a DataSource - loopback

restraunt.json file
{
"name": "restraunt",
"base": "PersistedModel",
"idInjection": true,
"options": {
"validateUpsert": true
},
"properties": {
"name": {
"type": "string",
"required": true
},
"location": {
"type": "string",
"required": true
}
},
"validations": [],
"relations": {},
"acls": [],
"methods": {}
}
restraunt.js file
module.exports = function(Restraunt) {
  Restraunt.find({ where: { id: 1 } }, function(err, data) {
    console.log(data);
  });
};
model-config.json file
"restraunt": {
"dataSource": "restrauntManagement"
}
datasources.json file
{
"db": {
"name": "db",
"connector": "memory"
},
"restrauntManagement": {
"host": "localhost",
"port": 0,
"url": "",
"database": "restraunt-management",
"password": "restraunt-management",
"name": "restrauntManagement",
"user": "rohit",
"connector": "mysql"
}
}
I am able to GET, PUT, and POST from the explorer, which means the SQL DB has been set up properly, but I am not able to 'find' from the restraunt.js file. It throws an error:
"Error: Cannot call restraunt.find(). The find method has not been setup. The PersistedModel has not been correctly attached to a DataSource"

Besides executing code in the boot folder, there is also the option of using the event that is emitted after the model is attached.
You can write your code right in the model's .js file instead of the boot folder.
It looks like:
Model.once("attached", function () {})
where Model = Accounts (for example).
I know this is an old topic, but maybe this helps someone else.
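A minimal sketch of this approach for the restraunt model from the question (the query itself is only an illustration):
module.exports = function(Restraunt) {
  // Wait until the model has been attached to its datasource;
  // only then are find/create/etc. wired up.
  Restraunt.once('attached', function() {
    Restraunt.find({ where: { id: 1 } }, function(err, restraunts) {
      if (err) return console.error(err);
      console.log(restraunts);
    });
  });
};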

Try installing the MySQL connector again:
npm i -S loopback-connector-mysql
Take a look at your datasources.json, because the MySQL port might be wrong; the default port is 3306. You could also try changing localhost to 0.0.0.0.
"restrauntManagement": {
"host": "localhost", /* if you're using docker, you need to set it to 0.0.0.0 instead of localhost */
"port": 0, /* default port is 3306 */
"url": "",
"database": "restraunt-management",
"password": "restraunt-management",
"name": "restrauntManagement",
"user": "rohit",
"connector": "mysql"
}
model-config.json must be:
"restraunt": {
"dataSource": "restrauntManagement" /* this name must be the same name in datasources object key (in your case it is restrauntManagement not the connector name which is mysql) */
}
You also need to execute the migration for the restraunt model:
Create migration.js in /server/boot and add this:
'use strict';
module.exports = function(server) {
  // use the datasource key from datasources.json ("restrauntManagement" here), not the connector name
  var ds = server.dataSources.restrauntManagement;
  ds.autoupdate('restraunt');
};
You need to migrate every single model you'll use. You also need to migrate the default models (ACL, AccessToken, etc.) if you're going to attach them to a datasource.
The docs also say you can't perform any operation inside the model's .js file, because at that point the system is not fully loaded. Any operation you need to execute must live in a .js file in the /boot directory, where the system is completely loaded. You can also perform operations inside remote methods, because the system is loaded there as well.
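For example, a minimal sketch of a boot script (say /server/boot/test-find.js; the file name and query are only illustrative) that runs the query once the app has loaded:
'use strict';
module.exports = function(server) {
  // All models are attached to their datasources by the time boot scripts run.
  var Restraunt = server.models.restraunt;
  Restraunt.find({ where: { id: 1 } }, function(err, results) {
    if (err) return console.error(err);
    console.log(results);
  });
};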

Related

Azure ARM Template parameters for parametrized linked service

Please forgive the title if it is confusing, but it does describe the problem I am having.
So, I have a linked service in my Azure Data Factory. It is used to connect to an Azure SQL Database.
The database name and user name are taken from parameters defined on the linked service itself. Here is a snippet of the JSON config:
"typeProperties": {
"connectionString": "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=myserver.database.windows.net;Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultLink",
"type": "LinkedServiceReference"
},
"secretName": "DBPassword"
},
"alwaysEncryptedSettings": {
"alwaysEncryptedAkvAuthType": "ManagedIdentity"
}
}
This works fine when debugging in the Azure portal. However, when I export the ARM template for the whole thing, the ARM template deployment asks for an input connection string for the linked service. If I go to the linked service definition and look up its connection string, it comes out this way:
"connectionString": "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=dmsql.database.windows.net;Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}"
Then, when I input it during the ARM template deployment, should I be replacing "#{linkedService().dbName}" and "#{linkedService().dbUserName}" with actual values at the spot where I am entering it? I am confused because during the ARM template deployment there are no separate fields for these parameters, and these linked-service-specific parameters are not present as separate parameters in the ARM template definition.
I created a database in my Azure portal and enabled system-assigned managed identity for the SQL DB.
I created an Azure Key Vault and created a secret.
I created a new access policy for Azure Data Factory.
I created an Azure Data Factory instance and enabled system-assigned managed identity.
I created a new parameterized linked service to connect to the database with the parameters dbName and userName; the database name and user name are supplied dynamically through these parameters.
The linked service was created successfully.
JSON format of my linked service:
{
"name": "SqlServer1",
"properties": {
"parameters": {
"dbName": {
"type": "String"
},
"userName": {
"type": "String"
}
},
"annotations": [],
"type": "SqlServer",
"typeProperties": {
"connectionString": "Integrated Security=False;Data Source=dbservere;Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().userName}",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "AzureKeyVault1",
"type": "LinkedServiceReference"
},
"secretName": "DBPASSWORD"
},
"alwaysEncryptedSettings": {
"alwaysEncryptedAkvAuthType": "ManagedIdentity"
}
}
}
}
I exported the ARM template of the data factory.
These are the parameters for my linked service in the ARM template:
"SqlServer1_connectionString": {
"type": "secureString",
"metadata": "Secure string for 'connectionString' of 'SqlServer1'",
"defaultValue": "Integrated Security=False;Data Source=dbservere;Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().userName}"
},
"AzureKeyVault1_properties_typeProperties_baseUrl": {
"type": "string",
"defaultValue": "https://keysqlad.vault.azure.net/"
}
The dbName and userName parameters are kept in the linked service definition in my ARM template:
{
"name": "[concat(parameters('factoryName'), '/SqlServer1')]",
"type": "Microsoft.DataFactory/factories/linkedServices",
"apiVersion": "2018-06-01",
"properties": {
"parameters": {
"dbName": {
"type": "String"
},
"userName": {
"type": "String"
}
},
"annotations": [],
"type": "SqlServer",
"typeProperties": {
"connectionString": "[parameters('SqlServer1_connectionString')]",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "AzureKeyVault1",
"type": "LinkedServiceReference"
},
"secretName": "DBPASSWORD"
},
"alwaysEncryptedSettings": {
"alwaysEncryptedAkvAuthType": "ManagedIdentity"
}
}
},
"dependsOn": [
"[concat(variables('factoryId'), '/linkedServices/AzureKeyVault1')]"
]
}
If you did not get the parameters in your ARM template definition, copy the value of "connectionString", modify what you need, leave the parameter expressions in place, and add it to the "connectionString" override parameter in your Azure Release Pipeline; it will work.
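For example (the server name below is a placeholder), the override value keeps the linked-service expressions verbatim so they are still resolved at runtime:
Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=myserver.database.windows.net;Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}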

Connect strapi with mongoose to a MongoDb (mLab)

I tried to connect Strapi to mLab with this database.js config, but it doesn't work. I get this error:
ConnectorError: connector "strapi-hook-mongoose" not found: Cannot find module 'strapi-connector-strapi-hook-mongoose'
Here is my database.js config file:
{
"defaultConnection": "default",
"connections": {
"default": {
"connector": "strapi-hook-mongoose",
"settings": {
"database": "strapi-test",
"host": "ds131914.mlab.com",
"srv": false,
"port": "31914",
"username": "root",
"password": "root010101"
},
"options": {
"authenticationDatabase": "strapi-test"
}
}
}
}
What should I do?
After some searching, it appears that this database.js config was from an old tutorial (this one). So to solve this problem, you first need to run npm i -S strapi-connector-mongoose in order to install the right connector.
Now, you need to change your database.js config for the desired environment. In my case, it was production. So edit config/environments/production/database.js like this:
{
"defaultConnection": "default",
"connections": {
"default": {
"connector": "mongoose",
"settings": {
"client": "mongo",
"host": "ds131914.mlab.com",
"port": "31914",
"srv": false,
"database": "strapi-test",
"username": "root",
"password": "root010101"
},
"options": {
"authenticationDatabase": "strapi-test",
"ssl": false
}
}
}
}
With this, it should work!

loopback w/ table schemas, not identifying schema in options

Nature of the issue
My DB2 database makes wide use of table schemas for organization, so the table in question is LIVE.TBLADDRESS.
My model uses "options" to specify the table schema:
"options": {
"idInjection": false,
"db2": {
"schema": "LIVE",
"table": "TBLADDRESS"
}
}
The model is in model-config.json using:
,"Tbladdress": {
"dataSource": "x3",
"public": true
}
I get an error when I try to use the explorer to do a simple 'get' or any other API call.
"statusCode": 500,
"name": "Error",
"message": "[IBM][CLI Driver][DB2/LINUXX8664] SQL0204N "DB2X.TBLADDRESS" is an undefined name. SQLSTATE=42704\r\n",
Expected behavior
Once I specified the schema, I'd expect the API calls to resolve correctly.
Actual behavior
The default schema for the DB user is used at all times, regardless of the schema specified in options.
Suggested resolution
Maybe I set it in the wrong place; I will continue to look for the information. It is possible I am missing something.
This is what I "see" using DB Viewer...so you have an idea what I'm referring to.
DEV - host:50000/DEV
-schemas
|-AAA
|-BBB
|-DB2X (this is the schema that the error is referring to...but NOT the one specified in the model)
|-DDD
|-LIVE (this is the correct schema)
|--Tables
|--|-TBLA
|--|-TBLADDRESS
|-ZZZ
If it helps, this happens with manually created models as well as models generated by discovery scripts.
These are my config files, and model
/common/models/Tbladdress.json
{
"name": "Tbladdress",
"options": {
"idInjection": false,
"db2": {
"schema": "LIVE",
"table": "TBLADDRESS"
}
},
"properties": {
...
}
}
/datasources.json
{
"db": {
"name": "db",
"connector": "memory"
},
"x3": {
"name": "x3",
"connector": "db2",
"username": "...",
"password": "...",
"database": "...",
"hostname": "...",
"port": 50000
}
}
/model-config.json
{
"_meta": {
...
},
"User": {
"dataSource": "db"
},
"AccessToken": {
"dataSource": "db",
"public": false
},
"ACL": {
"dataSource": "db",
"public": false
},
"RoleMapping": {
"dataSource": "db",
"public": false,
"options": {
"strictObjectIDCoercion": true
}
},
"Role": {
"dataSource": "db",
"public": false
}
,"Tbladdress": {
"dataSource": "x3",
"public": true
}
}
http://localhost:3000/explorer/#!/Tbladdress/Tbladdress_findById
{
"error": {
"statusCode": 500,
"name": "Error",
"message": "[IBM][CLI Driver][DB2/LINUXX8664] SQL0204N \"DB2X.TBLADDRESS\" is an undefined name. SQLSTATE=42704\r\n",
"errors": [],
"error": "[node-ibm_db] SQL_ERROR",
"state": "42S02",
"stack": "Error: [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N \"DB2X.TBLADDRESS\" is an undefined name. SQLSTATE=42704\r\n"
}
}
...Headers...
{
"date": "Sun, 18 Feb 2018 05:20:36 GMT",
"x-content-type-options": "nosniff",
"x-download-options": "noopen",
"x-frame-options": "DENY",
"content-type": "application/json; charset=utf-8",
"transfer-encoding": "chunked",
"connection": "keep-alive",
"access-control-allow-credentials": "true",
"vary": "Origin, Accept-Encoding",
"x-xss-protection": "1; mode=block"
}
USING:
loopback-cli v3 to generate express app
loopback-connector-db2 to connect to DB2 v10
Node v8.9.2
The package.json dependencies look like this (as mentioned, it's a default install with one model added, to see if I could get it to work):
"dependencies": {
"compression": "^1.0.3",
"cors": "^2.5.2",
"helmet": "^1.3.0",
"loopback": "^3.0.0",
"loopback-boot": "^2.6.5",
"loopback-component-explorer": "^5.0.0",
"loopback-connector-db2": "^2.1.1",
"serve-favicon": "^2.0.1",
"strong-error-handler": "^2.0.0"
},
Yes, the DB2 connector worked fine when I specified the "LIVE" schema during data discovery, but it does NOT seem to be working when I use the API. I don't know if it's the connector or the LoopBack app.
For loopback-connector-db2, you must define SCHEMA in the datasources.json config file.
{
"x3": {
"name": "x3",
"connector": "db2",
"username": "...",
"password": "...",
"database": "...",
"hostname": "...",
"port": 50000
},
"x3Live": {
"name": "x3Live",
"connector": "db2",
"schema": "LIVE",
"username": "...",
"password": "...",
"database": "...",
"hostname": "...",
"port": 50000
}
}
Unfortunately, you will need to create a new datasource (e.g. x3Live). Use the old x3 datasource for the models using the DB2X schema, and the new x3Live datasource for the models using the LIVE schema.
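For example, model-config.json would then point Tbladdress at the new datasource (a sketch, assuming Tbladdress is the only model living in the LIVE schema):
"Tbladdress": {
"dataSource": "x3Live",
"public": true
}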

Exporting an AWS Postgres RDS Table to AWS S3

I wanted to use AWS Data Pipeline to pipe data from a Postgres RDS to AWS S3. Does anybody know how this is done?
More precisely, I wanted to export a Postgres table to AWS S3 using Data Pipeline. The reason I am using Data Pipeline is that I want to automate this process, and this export is going to run once every week.
Any other suggestions will also work.
There is a sample on github.
https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RDStoS3
Here is the code:
https://github.com/awslabs/data-pipeline-samples/blob/master/samples/RDStoS3/RDStoS3Pipeline.json
You can define a copy activity in the Data Pipeline interface to extract data from a Postgres RDS instance into S3:
Create a data node of the type SqlDataNode. Specify table name and select query.
Set up the database connection by specifying the RDS instance ID (the instance ID is in your URL, e.g. your-instance-id.xxxxx.eu-west-1.rds.amazonaws.com) along with the username, password, and database name.
Create a data node of the type S3DataNode.
Create a Copy activity and set the SqlDataNode as input and the S3DataNode as output.
Another option is to use an external tool like Alooma. Alooma can replicate tables from a PostgreSQL database hosted on Amazon RDS to Amazon S3 (https://www.alooma.com/integrations/postgresql/s3). The process can be automated, and you can run it once a week.
I built a pipeline from scratch using the MySQL sample and the documentation as references.
You need to have the roles in place: DataPipelineDefaultResourceRole and DataPipelineDefaultRole.
I haven't loaded the parameters, so you need to go into the architect and fill in your credentials and folder paths.
Hope it helps.
{
"objects": [
{
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"pipelineLogUri": "#{myS3LogsPath}",
"scheduleType": "ONDEMAND",
"name": "Default",
"id": "Default"
},
{
"database": {
"ref": "DatabaseId_WC2j5"
},
"name": "DefaultSqlDataNode1",
"id": "SqlDataNodeId_VevnE",
"type": "SqlDataNode",
"selectQuery": "#{myRDSSelectQuery}",
"table": "#{myRDSTable}"
},
{
"*password": "#{*myRDSPassword}",
"name": "RDS_database",
"id": "DatabaseId_WC2j5",
"type": "RdsDatabase",
"rdsInstanceId": "#{myRDSId}",
"username": "#{myRDSUsername}"
},
{
"output": {
"ref": "S3DataNodeId_iYhHx"
},
"input": {
"ref": "SqlDataNodeId_VevnE"
},
"name": "DefaultCopyActivity1",
"runsOn": {
"ref": "ResourceId_G9GWz"
},
"id": "CopyActivityId_CapKO",
"type": "CopyActivity"
},
{
"dependsOn": {
"ref": "CopyActivityId_CapKO"
},
"filePath": "#{myS3Container}#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"id": "S3DataNodeId_iYhHx",
"type": "S3DataNode"
},
{
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"instanceType": "m1.medium",
"name": "DefaultResource1",
"id": "ResourceId_G9GWz",
"type": "Ec2Resource",
"terminateAfter": "30 Minutes"
}
],
"parameters": [
]
}
You can now do this with the aws_s3.query_export_to_s3 function within Postgres itself: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html
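A minimal sketch of calling it from Node with the pg client (this assumes the aws_s3 extension from the linked docs is installed; the bucket, key, region, and table names are placeholders):
const { Client } = require('pg');

async function exportTableToS3() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // query_export_to_s3 runs inside the RDS instance and writes the result set straight to S3.
  await client.query(
    `SELECT * FROM aws_s3.query_export_to_s3(
       'SELECT * FROM my_table',
       aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1'),
       options := 'format csv'
     )`
  );
  await client.end();
}

exportTableToS3().catch(console.error);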

How to use MongoDB operators in Strongloop

I want to use $inc to update an attribute of a model (user), but I have two problems: I can't find where to set the allowExtendedOperators: true parameter, and I don't know if this is written correctly:
common/user.js
user.updateAttribute('coins',{ '$inc': {coins: -1} }, function(err,...);
common/user.json
"name": "user",
"base": "User",
"strict": true,
"idInjection": true,
"options": {
"validateUpsert": true
},
...
"settings": {
"mongodb": {
"allowExtendedOperators": true
}
},
I tried this, but nothing happens...
server/datasource.development.js
"MongoDB": {
"host": "...",
"port": "..."
"database": "...",
"name": "MongoDB",
"connector": "mongodb",
"allowExtendedOperators": true
}
I was looking at the StrongLoop documentation, and the only example is for updateAll; it says:
There are two ways to enable the allowExtendedOperators flag: in the
model definition JSON file and as an option passed to the update
method.
But nothing works for me.
Call the method as follows:
user.updateAttributes({"$inc": {coins: -1}}, callback);
although the callback will always return the old instance before decrementing.
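Per the documentation quoted above, the flag can also be passed as an option to the update method instead of (or in addition to) the model definition JSON. A sketch of the same idea applied to updateAttributes (assuming the MongoDB connector honors the per-call option here as well; the callback shape is only illustrative):
user.updateAttributes({ $inc: { coins: -1 } }, { allowExtendedOperators: true }, function(err, updatedUser) {
  if (err) return console.error(err);
  console.log(updatedUser);
});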