Problems connecting to a MySQL database using the mysql1 package on Flutter web

I'm trying to connect to my MySQL database using the mysql1 package. I've tried sqljocky as well, but neither of them works.
I get the error: Error: Unsupported operation: RawSocket constructor
My code is exactly like the one in the example. Here it is, maybe you'll see what I'm doing wrong.
import 'package:mysql1/mysql1.dart';

class Database {
  static var s = ConnectionSettings(
    user: "user",
    password: "password",
    host: "host",
    port: 3306,
    db: "db",
  );

  static Future<MySqlConnection> connect() async {
    return await MySqlConnection.connect(s);
  }
}

Unfortunately, mysql1 does not support web: https://github.com/adamlofts/mysql1_dart#flutter-web
I think that, in general, a web application is better off using a REST API approach, since you don't want to expose your SQL credentials in the frontend.
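A minimal sketch of that approach, assuming a small Node backend with the express and mysql2 packages (the endpoint name, table, and credentials are placeholders, not from the question). The Flutter web app would then call this endpoint over HTTP instead of opening a socket to MySQL:

import express from 'express';
import mysql from 'mysql2/promise';

const app = express();

// SQL credentials stay on the server; the Flutter web client never sees them.
const pool = mysql.createPool({
  host: 'host',
  port: 3306,
  user: 'user',
  password: 'password',
  database: 'db',
});

// Example endpoint the Flutter web app can call with package:http.
app.get('/items', async (_req, res) => {
  try {
    const [rows] = await pool.query('SELECT * FROM items');
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: 'query failed' });
  }
});

app.listen(3000);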

Related

Getting correct socketPath for TypeORM config

I'm trying to connect a Cloud Run service to Cloud SQL postgres instance. I believe I'm nearly there, but am having some trouble getting the deployed instance to connect properly. My local environment can connect (via SSL) to the database intended for production, but the deployed version can't...
I'm using TypeORM, and have everything set up properly in the configuration...
@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => {
        const socketPath = configService.get('DB_SOCKET_PATH');
        const extra = socketPath ? {
          socketPath: socketPath,
          ssl: {
            rejectUnauthorized: false,
            ca: Buffer.from(process.env.DB_SSL_CA, 'base64').toString('ascii'),
            cert: Buffer.from(process.env.DB_SSL_CERT, 'base64').toString('ascii'),
            key: Buffer.from(process.env.DB_SSL_KEY, 'base64').toString('ascii'),
          },
        } : {};
        return ({
          type: 'postgres',
          host: socketPath || configService.get('DB_HOST'),
          port: configService.get('DB_PORT'),
          username: configService.get('DB_USER'),
          password: configService.get('DB_PASS'),
          database: configService.get('DB_NAME'),
          extra: extra,
          entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
          namingStrategy: new SnakeNamingStrategy(),
          synchronize: true,
        });
      },
    }),
  ],
})
export class DatabaseModule { }
Despite that I'm getting an error when I try to use the socketPath as the host rather than the actual host variable (necessary for GCP). It seems that TypeORM is adding extra characters, /.s.PGSQL.5432, at the end of my connection string that I don't want. And just to clarify, the socket path is in the form of /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
Error: connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
This used to work for me at an earlier point in time, but I guess something changed in the TypeORM library. Does anybody have any ideas on this? Thanks!
EDIT: As of now I've gotten it to connect to the server correctly, but it's now giving me an error that says the server doesn't support SSL connections, which makes no sense given that I can connect via SSL fine on my local machine...?
SOLUTION: The issue does not seem to be any code's fault, but rather some networking configuration on the GCP side. I configured the service and the database to run through a VPC and then just used a private IP address for the host.
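For reference, with the VPC + private IP setup the useFactory above no longer needs the socket branch at all; roughly (assuming DB_HOST now holds the instance's private IP):

useFactory: (configService: ConfigService) => ({
  type: 'postgres',
  host: configService.get('DB_HOST'),   // private IP of the Cloud SQL instance
  port: configService.get('DB_PORT'),
  username: configService.get('DB_USER'),
  password: configService.get('DB_PASS'),
  database: configService.get('DB_NAME'),
  entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
  namingStrategy: new SnakeNamingStrategy(),
  synchronize: true,
}),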
It seems that TypeORM is adding extra characters, /.s.PGSQL.5432
This is actually intended - the Postgres spec requires that the unix sockets end with this suffix.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
The error means that the socket wasn't found - usually because there was a misconfiguration and the Cloud SQL proxy couldn't start. You can check your logs at instance startup to see if the proxy left any errors, but generally it comes down to the following:
The Cloud SQL Admin API needs to be enabled
Your service account needs to have Cloud SQL Connect IAM role (or equivalent)
The service needs to be configured for Cloud SQL.
For a full list of instructions, see the Connecting from Cloud Run to Cloud SQL page.
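One note on the EDIT above: when the connection goes through the Cloud SQL unix socket, the proxy layer already encrypts the traffic and the socket itself does not speak SSL, which is likely why the driver reports that the server doesn't support SSL connections. A sketch of the extra factory under that assumption (same placeholder variables as in the question):

const socketPath = configService.get('DB_SOCKET_PATH');
const extra = socketPath
  ? { socketPath }   // unix socket: no ssl block, the Cloud SQL proxy handles encryption
  : {
      ssl: {
        rejectUnauthorized: false,
        ca: Buffer.from(process.env.DB_SSL_CA, 'base64').toString('ascii'),
        cert: Buffer.from(process.env.DB_SSL_CERT, 'base64').toString('ascii'),
        key: Buffer.from(process.env.DB_SSL_KEY, 'base64').toString('ascii'),
      },
    };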

Setting MongoDB connection with Airflow

According to Astronomer docs here:
Despite this, I'm still not quite sure how to structure the JSON in Extras for this. I've tried:
{ uri: mongodb+srv://myuser:mypass@my-cluster.dwxnd.gcp.mongodb.net/mydb?retryWrites=true&w=majority } in the Extras, but that doesn't work.
It seems like this should be obvious, yet I am struggling. What's the correct way, using our MongoDB URI from MongoDB Atlas, to create this connection in Airflow?
This setup worked for me in MongoDB Atlas. The Extra part is important, as it adds mongodb+srv to the final connection URI. Make sure you have the provider package installed (http://airflow.apache.org/docs/apache-airflow-providers-mongo/stable/index.html).
Conn Id: mongo_connection
Conn Type: MongoDB
Host: cluster0.mtfak.mongodb.net
Schema: MyDatabaseName
Login: myuser
Password: mypass
Port: empty
Extra: {"srv": true}
This is what I would try:
Conn Type: mongodb+srv (or mongodb)
Host: my-cluster.blahlah.mongodb.net
Login: <username>
Password: <password>
Schema: admin (or your authDB)
The JSON object is as simple as this:
{
  "retryWrites": true,
  "<field>": "<value>",
  "w": "majority"
}

Debugging a PostgreSQL connection in Loopback 4

I'm on a Mac (OS 10.14) using Node.js 14 and PostgreSQL 12.
I just installed LoopBack 4 and, after following this tutorial, I'm not able to use any of the endpoints that use models, i.e. the ones that connect to Postgres; I constantly get a timeout.
It seems like it's not even reaching the Postgres server, but the error gives no information, just that the request times out.
There are no issues with the Postgres server since I can connect and request information with other nodejs applications to the same database.
I also tried setting the host to host: '/var/run/postgresql/', with the same result.
I now tried the approach with a Docker container, setting the datasource files as follows:
import {inject, lifeCycleObserver, LifeCycleObserver} from '@loopback/core';
import {juggler} from '@loopback/repository';

const config = {
  name: 'mydb',
  connector: 'postgresql',
  url: 'postgres://postgres:mysecretpassword@localhost:5434/test',
  ssl: false,
};

// Observe application's life cycle to disconnect the datasource when
// application is stopped. This allows the application to be shut down
// gracefully. The `stop()` method is inherited from `juggler.DataSource`.
// Learn more at https://loopback.io/doc/en/lb4/Life-cycle.html
@lifeCycleObserver('datasource')
export class PostgresSqlDataSource extends juggler.DataSource
  implements LifeCycleObserver {
  static dataSourceName = 'PostgresSQL';
  static readonly defaultConfig = config;

  constructor(
    @inject('datasources.config.PostgresSQL', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
With that same URL I can log in from the command line on my Mac.
Is there a way to add logging and print any connection error? Other ways to debug it?
[UPDATE]
As of today, the LoopBack 4 Postgres connector does not work properly with Node.js 14.
When starting the application, instead of running
npm start, you can set the debug string by running:
DEBUG=loopback:connector:postgresql npm start
If you want it to be more generic, you can use:
DEBUG=loopback:* npm start
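If the DEBUG output still isn't conclusive, one extra step (a suggestion beyond the original answer) is to try the exact same URL with the plain pg driver, which prints the underlying connection error directly. A minimal sketch, assuming the pg package is installed:

import { Client } from 'pg';

// Same connection string as in the datasource config above.
const client = new Client({
  connectionString: 'postgres://postgres:mysecretpassword@localhost:5434/test',
});

async function main() {
  try {
    await client.connect();                       // fails fast if the server is unreachable
    const res = await client.query('SELECT 1');   // trivial round trip
    console.log('connected, result:', res.rows);
  } catch (err) {
    console.error('connection failed:', err);     // shows the raw driver error
  } finally {
    await client.end();
  }
}

main();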

Can't use the .native function in sails-mongo

I've been looking for a way to use .native() to do a simple aggregation in Sails with Mongo.
I already followed the steps to install the dependencies (http://sailsjs.org/documentation/reference/waterline-orm/models/native).
But it still returns this error: .native is not a function
Did I miss something?
You may be using the wrong adapter. You can check this in your models.js under the connection key. It might be commented out; if it is, Sails is going to connect to the local-disk adapter. Check that the object in connections.js that holds your MongoDB config has the same name that models.js points to, e.g.:
connections.js
mongoServer: {
  adapter: 'sails-mongo',
  host: 'localhost',
  port: 27017,
  database: 'dbname'
}
models.js
connection: 'mongoServer'
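Once the model is actually bound to the sails-mongo connection, a .native() aggregation looks roughly like this (a sketch based on the linked docs; MyModel and the pipeline are placeholders):

MyModel.native(function (err, collection) {
  if (err) return console.error(err);

  // `collection` is the raw MongoDB driver collection, so aggregate() is available.
  collection.aggregate(
    [{ $group: { _id: '$status', count: { $sum: 1 } } }],
    function (err, results) {
      if (err) return console.error(err);
      console.log(results);
    }
  );
});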

SailsJS deployment to Heroku, connect to Mongolabs MongoDB

I am attempting my first Heroku deployment of a SailsJS API. My app uses SailsJS v0.11 and sails-mongo 0.11.2.
I have updated config/connections.js to include the connection information for the MongoDB database I have hosted for free at MongoLab.
mongodb: {
  adapter: 'sails-mongo',
  url: "mongodb://db-user:password123@ds047812.mongolab.com:47812/testing-db"
}
Also updated config/models.js to point to that adapter.
module.exports.models = {
  connection: 'mongodb',
  migrate: 'safe'
};
This is basically all I have changed from running the code locally. When I deploy to Heroku, the app crashes and I get this error...
/home/zacharyhustles/smallChangeAPI/node_modules/connect-mongo/lib/connect-mongo.js:186
throw err;
^
at Socket.emit (events.js:107:17)
2015-07-08T19:37:00.778316+00:00 app[web.1]:
at Socket.<anonymous> (/app/node_modules/connect-mongo/node_modules/mongodb/lib/mongodb/connection/connection.js:534:10)
Error: Error connecting to database: failed to connect to [localhost:27017]
How do I get rid of this and make sure Sails does not try to connect to a localhost database?
OK, the problem was with storing sessions.
My solution was to set up a Redis database to store sessions.
In config/session.js, make sure everything is commented out except the session store you actually want to use.
Mine looked like this:
module.exports.session = {
  adapter: 'redis',
  host: 'example.redistogo.com',
  port: 1111,
  db: '/redistogo',
  pass: 'XXXXXYYYYYYXYXYXYYX',
};
This solved the problem I posted, hope it helps someone else out.