How do you connect over SSL to Postgres in Loopback v3

My datasources.json file looks like this...
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "mydb": {
    "host": "mydbhost.db.ondigitalocean.com",
    "port": 25060,
    "url": "",
    "database": "mydb-staging",
    "password": "mypassword",
    "name": "mydb",
    "user": "myuser",
    "connector": "postgresql",
    "ssl": true
  }
}
But DigitalOcean's managed Postgres provides you with a CA file to use.
Where do I put it?
How do I configure LB3 to know about it?
The LoopBack docs (https://loopback.io/doc/en/lb3/PostgreSQL-connector.html) say:
"The PostgreSQL connector uses node-postgres as the driver. For more information about configuration parameters, see the node-postgres documentation." (https://node-postgres.com/features/ssl)
I just don't understand how to set this up in LoopBack.
When I start my server up I get...
Unhandled rejection error: permission denied for database mydb-staging

If you are using the managed database service on DigitalOcean, only the default "doadmin" user can read and write on any database; any other added user can only read data.
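If that is the cause, connecting once as doadmin and widening the new user's privileges should clear the "permission denied" error. A minimal sketch, with the role and database names taken from the question:

-- run as doadmin; role and database names are from the question
GRANT ALL PRIVILEGES ON DATABASE "mydb-staging" TO myuser;
-- table-level rights are granted separately in Postgres
GRANT ALL ON ALL TABLES IN SCHEMA public TO myuser;

As for the CA file: datasources.json is plain JSON, so it cannot read a certificate from disk, but LB3 also loads server/datasources.local.js and merges it over datasources.json, and the connector hands the ssl setting through to node-postgres, which accepts an object in place of a boolean (see the node-postgres link above). A sketch, assuming the per-key merge behaves as loopback-boot documents and that the CA file is saved next to the config (the filename is an assumption, use whatever you saved it as):

// server/datasources.local.js -- merged over datasources.json at boot
const fs = require('fs');
const path = require('path');

module.exports = {
  mydb: {
    // per https://node-postgres.com/features/ssl, ssl may be an object;
    // 'ca-certificate.crt' is the CA file downloaded from DigitalOcean
    ssl: {
      rejectUnauthorized: true,
      ca: fs.readFileSync(path.join(__dirname, 'ca-certificate.crt')).toString(),
    },
  },
};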

Related

Kafka Connect FileConfigProvider not working

I'm running Kafka Connect with JDBC Source Connector for DB2 in standalone mode. Everything works fine, but I'm putting the passwords and other sensitive info into my connector file in plain text. I'd like to remove this, so I found that FileConfigProvider can be used:
https://docs.confluent.io/current/connect/security.html#fileconfigprovider
However, when I try to use this it does not seem to pick up my properties file. Here's what I'm doing:
connect-standalone.properties -
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
secrets.properties -
password=thePassword
Source Config -
"connection.password": "${file:/Users/me/app/context/src/main/kafkaconnect/connector/secrets.properties:password}",
"table.whitelist": "MY_TABLE",
"mode": "timestamp",
When I try to load my source connector (via rest api) I get the following error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nInvalid value com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][2013][11249][4.26.14] Connection authorization failure occurred. Reason: User ID or Password invalid. ERRORCODE=-4214, SQLSTATE=28000 for configuration Couldn't open connection to jdbc:db2://11.1.111.111:50000/mydb\nInvalid value com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][2013][11249][4.26.14] Connection authorization failure occurred. Reason: User ID or Password invalid. ERRORCODE=-4214, SQLSTATE=28000 for configuration Couldn't open connection to jdbc:db2://11.1.111.111:50000/mydb\nYou can also find the above list of errors at the endpoint /{connectorType}/config/validate"}
The password I'm providing is correct. It works if I just hardcode it into my source json. Any ideas? Thanks!
Edit: As a note, I get similar results on the sink side inserting into a Postgres database.
Edit: Result of GET /connectors:
{
  "name": "jdbc_source_test-dev",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "timestamp.column.name": "UPDATED_TS",
    "connection.password": "${file:/opt/connect-secrets.properties:dev-password}",
    "validate.non.null": "false",
    "table.whitelist": "MY_TABLE",
    "mode": "timestamp",
    "topic.prefix": "db2-test-",
    "transforms.extractInt.field": "kafka_id",
    "_comment": "The Kafka topic will be made up of this prefix, plus the table name ",
    "connection.user": "username",
    "name": "jdbc_source_test-dev",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "connection.url": "jdbc:db2://11.1.111.111:50000/mydb",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter"
  },
  "tasks": [
    {
      "connector": "jdbc_source_test-dev",
      "task": 0
    }
  ],
  "type": "source"
}
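For reference, config.providers is a worker-level setting, so it has to live in the properties file you pass to connect-standalone.sh, and the worker has to be restarted to pick it up. A minimal sketch of the worker config, with illustrative paths and the usual standalone settings assumed:

# connect-standalone.properties -- worker config, passed as the first
# argument to connect-standalone.sh (paths are illustrative)
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets

# register FileConfigProvider under the alias "file"
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# /opt/connect-secrets.properties -- the key after the second colon in
# ${file:/opt/connect-secrets.properties:dev-password} must exist here
dev-password=thePassword

If the provider is not registered in that worker file, the placeholder is never resolved and the literal ${file:...} string is sent to DB2 as the password, which would explain the "User ID or Password invalid" failure shown above.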

Setting up Strapi using MongoDB via the PM2 Runtime

I'm quite new to Strapi and I'm following the Strapi deployment documentation at https://strapi.io/documentation/3.0.0-beta.x/guides/deployment.html#configuration. I have set up Strapi using MongoDB and it seems to work both in production and dev on my server. I can create content types and add data...
Now I'm trying to start Strapi using the PM2 Runtime. I have set up the ecosystem.config.js file (see below) and I run pm2 start ecosystem.config.js. The Strapi app seems to start just fine, but now what happens in the browser is that I am prompted to create a new admin user. It seems like all users and data are lost... Is MongoDB not being accessed, or what's going on?
This is my ecosystem.config.js file:
module.exports = {
  apps: [{
    name: 'cms.strapi',
    cwd: '/var/www/domain/public_html',
    script: 'server.js',
    env: {
      NODE_ENV: 'production',
      DATABASE_HOST: '127.0.0.1',
      DATABASE_PORT: '28015',
      DATABASE_NAME: 'db-name',
      DATABASE_USERNAME: 'db-u-name',
      DATABASE_PASSWORD: 'pw',
    },
  }],
};
What am I missing?
Hi Jim, and thanks for your reply! I believe the problem was a mix-up between the prod and the dev environment. Sorry, my bad: I thought I was in one environment when I was really in the other. It should be obvious when you start the server from the prompt whether you're starting dev or prod, but once the web server is up and running I can't tell from the browser GUI which one it is; at least I can't find an indication other than that the admin usernames (and possibly data) are different... Hmm.
Anyway, my production/database.json file looks like this:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "mongoose",
      "settings": {
        "uri": "mongodb://localhost:27017/db-prod",
        "database": "db-prod",
        "host": "127.0.0.1",
        "srv": false,
        "port": 27017,
        "username": "u-name-prd",
        "password": "pw"
      },
      "options": {
        "ssl": false
      }
    }
  }
}
PM2 Runtime seems to be working correctly with Strapi and Mongo now :-)
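For anyone hitting the same mix-up: pm2 can hold both environments in one ecosystem file, and you pick one explicitly with --env, which makes it unambiguous which config the server came up with. A minimal sketch (name, cwd and script are taken from the question; the env split is an assumption about how you might arrange things):

// ecosystem.config.js -- a sketch; pm2 applies `env` by default and
// `env_production` when started with `--env production`. Strapi 3 reads
// its database settings from config/environments/<env>/database.json,
// so NODE_ENV decides which of the two files is used.
module.exports = {
  apps: [{
    name: 'cms.strapi',
    cwd: '/var/www/domain/public_html',
    script: 'server.js',
    env: {
      NODE_ENV: 'development',
    },
    env_production: {
      NODE_ENV: 'production',
    },
  }],
};

Then pm2 start ecosystem.config.js starts dev and pm2 start ecosystem.config.js --env production starts prod, so the command itself shows which environment you asked for.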

Import database from mysql in orientdb

I'm trying to import a database with only one table into OrientDB using their ETL import functionality. I wrote this JSON:
{
  "config": {
    "log": "debug"
  },
  "extractor": {
    "jdbc": {
      "driver": "com.mysql.jdbc.Driver",
      "url": "jdbc:mysql://localhost:8889/footballEvents",
      "userName": "root",
      "userPassword": "root",
      "query": "select * from 10eventslight_2"
    }
  },
  "transformers": [
    { "vertex": { "class": "events" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "remote:localhost/footballEvents",
      "dbUser": "root",
      "dbPassword": "root",
      "serverUser": "root",
      "serverPassword": "root",
      "dbAutoCreate": true
    }
  }
}
Then I run the command sudo ./oetl.sh importScript.json and I don't get any error; the script runs normally. I attached the output of the command here.
Reading the [orientdb] INFO committing message at the end, I tried to connect to my database and run the commit command, but the system tells me that no transaction is running. I'm quite sure that the dbURL and the db/server credentials in my JSON are good, because I can use this address to connect to my database via the OrientDB console. Concerning the MySQL part, no doubt it's working, because it extracts data from the database and I know my credentials are OK.
So it looks like it's working, no error comes up, but nothing happens and I don't understand why.
In case it matters, I'm on macOS 10.13.1 with OrientDB 2.2.29.
Thanks in advance.
OrientDB Teleporter is a tool that synchronizes an RDBMS to an OrientDB database. Teleporter is fully compatible with several RDBMSs that have a JDBC driver: it has been successfully tested with Oracle, SQLServer, MySQL, PostgreSQL and HyperSQL. Teleporter manages all the necessary type conversions between the different DBMSs and imports all your data as a graph in OrientDB. This feature is available in both the OrientDB Enterprise Edition and the OrientDB Community Edition. But beware: in the Community Edition you can migrate your source relational database, but you cannot use the synchronize feature, which is only available in the Enterprise Edition.
For more information: https://orientdb.com/docs/last/Teleporter-Home.html
Hope it helps
Regards
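For the database in the question, a Teleporter run would look roughly like this (oteleporter.sh ships in OrientDB's bin/ directory; the -ourl target path is an assumption, point it wherever you want the graph database created):

./oteleporter.sh -jdriver mysql \
                 -jurl "jdbc:mysql://localhost:8889/footballEvents" \
                 -juser root \
                 -jpasswd root \
                 -ourl plocal:../databases/footballEvents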

How to connect Loopback API with MongoDB port in Google Kubernetes Engine

I am following this blog to deploy MongoDB in GKE, and I came to a point where I need my LoopBack API image, which runs in a different pod in the same cluster, to talk to the database.
Local development works as expected with the following datasources.json:
{
  "db": {
    "host": "database",
    "port": 27017,
    "url": "",
    "database": "test",
    "password": "",
    "name": "mongoDS",
    "user": "",
    "connector": "mongodb"
  }
}
In the tutorial, it is written that the connection string URI would be:
"mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?"
I am not sure how to apply this to the above datasources.json. Any help will be appreciated.
The tutorial creates a headless service with the name "mongo" in the default namespace.
Replace "host": "database" with "host": "mongo" in your pod's datasources.json.
First, in LoopBack's datasource, the name attribute should be the same as the key of the datasource. Second, the host attribute ought to be the name of the pod that contains MongoDB.
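Alternatively, the LoopBack MongoDB connector also accepts a full url, which takes precedence over host/port when it is non-empty, so you can use the tutorial's connection string almost as-is. A sketch, assuming the headless service and StatefulSet pods are named as in the tutorial, and reusing the database name from the question's config:

{
  "mongoDS": {
    "name": "mongoDS",
    "connector": "mongodb",
    "url": "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/test"
  }
}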

Bluemix - error connecting loopback starter with mongoDB compose

I am trying to set up the LoopBack starter app on Bluemix with MongoDB. I have set up the MongoDB instance in Compose. However, I keep getting a connection error, even though I have followed all instructions and can even connect using the mongo shell.
{ [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
Take a look here: http://www.technicaladvices.com/2015/10/06/deploying-your-existing-strongloop-application-to-ibm-bluemix/
It shows the details of deploying a "StrongLoop" app in IBM Bluemix.
If the issue is still there open a support request directly from your Bluemix console or you can open a new ticket here: https://support.ng.bluemix.net/gethelp/
I was able to solve the problem by using the following configuration format in datasources.json:
"mongoDs": {
"host": "candidate.53.mongolayer.com",
"port": 10852,
"database": "SiteRite",
"username": "xxxx",
"password": "xxxx",
"name": "mongoDs",
"connector": "mongodb"
}
NOT using the 'url' key, and instead using 'host' and 'port' with a separate 'username' and 'password' for the database, is what seemed to fix it.