Error while connecting to local DB using Ruby ROM with Sinatra - sinatra

I'm fairly new to Sinatra and I'm using the rom gem to connect to my database. However, I always get the following error message when I try to execute my rake task:
ArgumentError: URIs without an explicit scheme are not supported anymore.
Here is my Rakefile:
require_relative './config/environment'
require 'rom/sql/rake_task'

namespace :db do
  task :setup do
    configuration = ROM::Configuration.new(:sql, ENV['DATABASE_URL'])
    ROM::SQL::RakeSupport.env = configuration
  end
end
And this is my .env.development file, where I've placed the database URL:
DATABASE_URL='postgres://username:password@localhost/convertor_dev'
I'm using the following gems (if this helps):
rom
rom-sql
pg
rake

OK, so after some research I found that ROM::Configuration.new(:sql, ENV['DATABASE_URL']) internally calls Sequel.connect(ENV['DATABASE_URL']). A quick fix is therefore to open the connection yourself and hand it to ROM:
connection = Sequel.connect(ENV['DATABASE_URL'])
configuration = ROM::Configuration.new(:sql, connection)
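Before (or instead of) reaching for that workaround, it can be worth ruling out a mis-loaded DATABASE_URL, since a nil or scheme-less value is one way to end up with this ArgumentError. The following diagnostic is my own suggestion (not from the rom docs) and uses only Ruby's standard library; the fallback URL mirrors the one from the question:

```ruby
require 'uri'

# Check that DATABASE_URL actually carries an explicit scheme before
# ROM/Sequel ever sees it. The fallback value is the URL from the question.
url = ENV.fetch('DATABASE_URL', 'postgres://username:password@localhost/convertor_dev')
scheme = URI.parse(url).scheme
raise "DATABASE_URL has no scheme: #{url.inspect}" if scheme.nil?
puts scheme   # prints "postgres" when the URL is well-formed
```

If this raises, the problem is that the .env.development file was never loaded (e.g. dotenv isn't required in config/environment), not ROM itself.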

Related

In a Golang app, SASL auth succeeds when passing variables to the process, but fails when exported variables are used. Why?

We have a Golang app built with Gorm that connects to PostgreSQL 14 (hosted on the same machine) with an SSL mode of verify-full. The certificates were issued by ZeroSSL and generated with the help of acme.sh. In the PostgreSQL configuration, we have set the following certificate-related variables:
ssl = on
ssl_cert_file = '/etc/ssl/certs/our-certificate.pem'
ssl_key_file = '/etc/ssl/private/our-certificate.key'
The Golang app and the PostgreSQL server are hosted in an Ubuntu 20.04 machine.
Our Golang app depends on a bunch of environment variables for, among others, database passwords and usernames. When we pass those variables to our app directly, i.e.:
DB_HOST=dbhost DB_NAME=dbname DB_USERNAME=username DB_PASSWORD=123456789 [...] ./go-app
Everything works well.
On the other hand, we also tried exporting those variables first. Exporting the variables is done by putting the variables inside an env file first, then doing the following:
set -a
source env-file.env
set +a
At this point the environment variables are exported, and we can echo them from the terminal.
Unfortunately, when our Golang app reaches the point where it attempts to connect to the PostgreSQL server, we get the following error message:
error="failed to connect to `host=dbhost user=username database=dbname`: failed SASL auth (FATAL: password authentication failed for user \"username\" (SQLSTATE 28P01))"
I have already logged the contents of the exported database-related environment variables to the console, and they had the same values as the variables passed directly to the process.
Why does this happen? It's weird behaviour to me, but I believe I am simply lacking some necessary information.

PostgREST on Google Cloud SQL: unix socket URI format?

Do any of you have experience with PostgREST and Cloud SQL?
I have my SQL instance ready with open access (0.0.0.0/0), and I can access it from a local PostgREST through the Cloud SQL proxy app.
Now I want to run PostgREST from an instance of the same project, but I can't find a URI format for PostgREST that supports Cloud SQL, as Google Cloud SQL only uses Unix sockets like /cloudsql/INSTANCE_CONNECTION_NAME.
Config 1
db-uri = "postgres://postgres:password@/unix(/cloudsql/INSTANCE_CONNECTION_NAME)/mydatabase"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
Returns {"details":"could not translate host name \"unix(\" to address: Unknown host\n","code":"","message":"Database connection error"}
Config 2
db-uri = "postgres://postgres:password@/mydatabase?unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
The parser rejects the question mark:
{"details":"invalid URI query parameter: \"unix_socket\"\n","code":"","message":"Database connection error"}
Config 3
db-uri = "postgres://postgres:password@/mydatabase"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
server-unix-socket= "/cloudsql/INSTANCE_CONNECTION_NAME"
server-unix-socket appears to only take a socket lock file path. Feeding it /cloudsql/INSTANCE_CONNECTION_NAME makes PostgREST try to delete the file, as in: postgrest.exe: /cloudsql/INSTANCE_CONNECTION_NAME: DeleteFile "/cloudsql/INSTANCE_CONNECTION_NAME": invalid argument (The filename, directory name, or volume label syntax is incorrect.)
Documentation
Cloud SQL Doc
https://cloud.google.com/sql/docs/mysql/connect-run
PostgREST
http://postgrest.org/en/v6.0/configuration.html
https://github.com/PostgREST/postgrest/issues/1186
https://github.com/PostgREST/postgrest/issues/169
Environment
PostgreSQL version: 11
PostgREST version: 6.0.2
Operating system: Win10 and Alpine
First you have to add the Cloud SQL connection to the Cloud Run instance:
https://cloud.google.com/sql/docs/postgres/connect-run#configuring
After that, the DB connection will be available in the service on a Unix domain socket at path /cloudsql/<cloud_sql_instance_connection_name> and you can set the PGRST_DB_URI environment variable to reflect that.
Here's the correct format:
postgres://<pg_user>:<pg_pass>@/<db_name>?host=/cloudsql/<cloud_sql_instance_connection_name>
e.g.
postgres://postgres:postgres@/postgres?host=/cloudsql/project-id:zone-id-1:sql-instance
According to Connecting with Cloud SQL, the example is:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql//.s.PGSQL.5432
Then you can try (just as @marian.vladoi mentioned):
db-uri = "postgres://postgres:password@/mydatabase?unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432"
Keep in mind that the connection name should include:
ProjectID:Region:InstanceID
For example: myproject:myregion:myinstance
Anyway, you can find here more options to connect from external applications and from within Google Cloud.
I tried many variations but couldn't get it to work out of the box, so I'll post this workaround.
FWIW, I was able to use an alternate socket location with PostgREST locally, but when trying to use the Cloud SQL location it doesn't seem to interpret it right; perhaps the colons in the socket path are throwing it off?
In any case, as @Steve_Chávez mentions, this approach does work: db-uri = "postgres://user:password@/dbname", which defaults to the PostgREST default socket location (/run/postgresql/.s.PGSQL.5432). So in the Docker entrypoint we can symlink this location to the actual socket injected by Cloud Run.
First, add the following to the Dockerfile (above USER 1000):
RUN mkdir -p /run/postgresql/ && chown postgrest:postgrest /run/postgresql/
Then add an executable file at /etc/entrypoint.bash containing:
#!/bin/bash
set -eEuo pipefail
CLOUDSQL_INSTANCE_NAME=${CLOUDSQL_INSTANCE_NAME:-PROJECT_REGION_INSTANCE_NAME}
POSTGRES_SOCKET_LOCATION=/run/postgresql
ln -s /cloudsql/${CLOUDSQL_INSTANCE_NAME}/.s.PGSQL.5432 ${POSTGRES_SOCKET_LOCATION}/.s.PGSQL.5432
postgrest /etc/postgrest.conf
Change the Dockerfile entrypoint to CMD /etc/entrypoint.bash. Then add CLOUDSQL_INSTANCE_NAME as an env var in Cloud Run. The PGRST_DB_URI env var then looks like postgres://authenticator:password@/postgres.
An alternative approach, if you don't like this one, would be to connect via a Serverless VPC connector.
I struggled with this too.
I ended up using a one-liner keyword/value connection string for the DB-URI env variable:
host=/cloudsql/project-id:zone:instance-id user=user port=5432 dbname=dbname password=password
However, I have PostgREST running on Cloud Run, which lets you specify the instance connection name via
INSTANCE_CONNECTION_NAME=/cloudsql/project-id:zone:instance-id
Maybe you can host it there and run it serverless; I'm not sure where you are running it currently.
https://cloud.google.com/sql/docs/mysql/connect-run

What causes error "Connection test failed: spawn npm; ENOENT" when creating new Strapi project with MongoDB?

I am trying to create a new Strapi app on Ubuntu 16.04 using MongoDB. After stepping through the tutorial, here: https://strapi.io/documentation/3.0.0-beta.x/guides/databases.html#mongodb-installation, I get the following error: Connection test failed: spawn npm; ENOENT
The error seems obvious, but I'm having issues getting to the cause of it. I've installed latest version of MongoDB and have ensured it is running using service mongod status. I can also connect directly using nc, like below.
$ nc -zvv localhost 27017
Connection to localhost 27017 port [tcp/*] succeeded!
Any help troubleshooting this would be appreciated! Does Strapi perhaps log setup errors somewhere, or is there a way to get verbose logging? Is it possible the connection error would be logged by MongoDB somewhere?
I was able to find the answer. The problem was with using npx instead of Yarn. Strapi documentation states that either should work, however, it is clear from my experience that there is a bug when using npx.
I switched to Yarn and the process proceeded as expected without error. Steps were otherwise exactly the same.
Update: There is also a typo in Strapi documentation for yarn. They include the word "new" before the project name, which will create a project called new and ignore the project name.
Strapi docs (incorrect):
yarn create strapi-app new my-project
Correct usage, based on my experience:
yarn create strapi-app my-project
The ENOENT error is "an abbreviation of Error NO ENTry (or Error NO ENTity), and can actually be used for more than files/directories."
Why does ENOENT mean "No such file or directory"?
Everything I've read on this points toward issues with environment variables and the process.env.PATH.
"NOTE: This error is almost always caused because the command does not exist, because the working directory does not exist, or from a windows-only bug."
How do I debug "Error: spawn ENOENT" on node.js?
If you take the function that Jiaji Zhou provides in the link above and paste it into the top of your config/functions/bootstrap.js file (above module.exports), it might give you a better idea of where the error is occurring, specifically it should tell you the command it ran. Then run the command > which nameOfCommand to see what file path it returns.
"miss-installed programs are the most common cause for a not found command. Refer to each command documentation if needed and install it." - laconbass (from the same link, below Jiaji Zhou's answer)
This is how I interpret all of the above and form a solution: put that function in bootstrap.js, take the command returned from the function, and run > which nameOfCommand. Then in bootstrap.js (you can comment out the function), put console.log(process.env.PATH), which will print a string of all the directories your current environment checks for executables. If the path returned from your which command isn't in your process.env.PATH, you can move the command into a directory that is on the path, or try re-installing it.

Unicorn Procfile and development databases (Ruby on Rails 4, PostgreSQL, Heroku, Resque)

I am developing a RoR4 app, using a PostgreSQL database, that does some background database processing with Resque and is hosted on Heroku. My question concerns running locally for development.
It is my understanding (in this particular project) that when I start the web server using rails server, it connects to a development database, and when I start it using foreman start (with an appropriate Procfile), it connects to some other local database.
My problem is that my Resque jobs look for ActiveRecords in the development database, forcing me to use rails server. However, I need access to some environment variables stored in a .env file, and it is my understanding that only foreman can read these.
How do I approach a workaround for this problem?
More specifically, how can I get my Resque jobs to look for ActiveRecords in the same database that foreman start uses? Or, how do I have foreman start use the development database?
My guess for the latter would be edit the Procfile, but I haven't had luck finding a simple solution.
Procfile.dev:
web: bundle exec unicorn -c ./config/unicorn.rb -E $RAILS_DEV
NOTE: $RAILS_DEV=development
Any help would be appreciated. Thanks.
I found out what the problem is. It turns out that my .env file had an explicit URL to connect to my Heroku database. Once I removed that line, foreman start connected to my development database.
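For reference, the shape of a .env that keeps foreman start pointed at the local database might look like this (only RAILS_DEV appears in the question; the rest is illustrative):

```shell
# .env — local development only
RAILS_DEV=development

# Leave DATABASE_URL out locally; with it absent, Rails falls back to
# config/database.yml, while Heroku injects its own DATABASE_URL in production.
# DATABASE_URL=postgres://...
```

The fix described above amounts to deleting (or commenting out) the explicit Heroku DATABASE_URL line.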

How to connect to PostgreSQL in Erlang using epgsql driver?

I would like to access a PostgreSQL database in Erlang. I downloaded the epgsql driver, it was a few directories and files, but I don't understand how to use it.
How can I write an Erlang program and use the epgsql driver to access a PostgreSQL database?
I made a new folder and copied all the files from the driver's src/ directory, plus pgsql.hrl, into it. Then I created a simple test program:
-module(dbtest).
-export([dbquery/0]).

dbquery() ->
    {ok, C} = pgsql:connect("localhost", "postgres", "mypassword",
                            [{database, "mydatabase"}]),
    {ok, Cols, Rows} = pgsql:equery(C, "select * from mytable").
Then I started erl and compiled the modules with c(pgsql). and c(dbtest).
But when I execute dbtest:dbquery(). I get this error:
** exception error: undefined function pgsql:connect/4
in function dbtest:dbquery/0
Any suggestions on how I can connect to a PostgreSQL database using Erlang?
Rebar is a good tool to use, but I find it's good to know how your project should be structured so you can tell what to do when things go wrong. Try organizing your project like this:
/deps/epgsql/
/src/dbtest.erl
/ebin
Then cd into deps/epgsql and run make to build the library.
Your dbtest.erl file should also explicitly reference the library, add this near the top:
-include_lib("deps/epgsql/include/pgsql.hrl").
You'll probably want to use a Makefile (or rebar) that compiles your code when you make changes, but try this to compile things right now: erlc -I deps/epgsql/ebin -o ebin src/dbtest.erl.
When testing, make sure your load paths are set correctly; try: erl -pz deps/epgsql/ebin/ ebin/. When the erl console loads up, try dbtest:dbquery(). and see what happens!
I don't have PostgreSQL set up on my machine, but I was able to get more reasonable-looking errors with this setup.
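If you go the rebar route mentioned above instead of hand-managing deps/, a minimal rebar.config pulling epgsql in as a dependency might look like this (the git URL and branch are illustrative, not taken from the question):

```erlang
%% rebar.config (sketch): declare epgsql as a dependency so rebar
%% fetches and builds it into deps/ automatically.
{deps, [
    {epgsql, ".*",
     {git, "https://github.com/epgsql/epgsql.git", {branch, "master"}}}
]}.
```

With that in place, rebar get-deps compile replaces the manual cd-and-make step.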
I recommend using the ejabberd pgsql driver: https://svn.process-one.net/ejabberd-modules/pgsql/trunk/