How can I solve a SCRAM authentication issue in Mongodb with Docker and Prisma? - mongodb

When I try to run any Prisma command I get this error message:
$ npx prisma migrate reset
Error: MongoDB error
SCRAM failure: Authentication failed.
0: migration_core::state::Reset
at migration-engine/core/src/state.rs:341
When I run docker ps:
CONTAINER ID | IMAGE | COMMAND | CREATED | STATUS | PORTS | NAMES
388a7219da3d | mongo:latest | "docker-entrypoint.s…" | About an hour ago | Up About an hour | 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp | auction-mongodb
my schema.prisma:
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["mongodb"]
}
datasource db {
  provider = "mongodb"
  url      = env("DATABASE_URL")
}
model Test {
  id   String @id @default(auto()) @map("_id") @db.ObjectId
  text String
}
.env:
DATABASE_URL=mongodb://auction-user:password@localhost:27017/auction-db?authSource=admin

Simply add ?authSource=admin to the end of the DATABASE_URL in your .env file.

In my case the following solved it.
The user and password are supposed to have the root role.
DATABASE_URL="mongodb://root:example@localhost:3333/liz?retryWrites=true&w=majority&authSource=admin"
DATABASE_URL="mongodb://username:password@ip:port/db?retryWrites=true&w=majority&authSource=admin"
Also remove +srv (i.e. use mongodb:// rather than mongodb+srv://) if you want to connect to your local MongoDB.
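For completeness, the credentials in the URL have to match the root user the Mongo container was started with. A minimal sketch, assuming the official mongo image and the container name from the question (user, password and port are placeholders to adapt):

# hypothetical container setup; MONGO_INITDB_ROOT_USERNAME/PASSWORD create the
# root user (in the admin database) that DATABASE_URL authenticates against
docker run -d --name auction-mongodb \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=auction-user \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  mongo:latest

# matching .env entry (authSource=admin because the root user lives in the admin database)
DATABASE_URL=mongodb://auction-user:password@localhost:27017/auction-db?authSource=admin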

Related

How can I add a custom variable to Sqitch, to be used in target postgres

I would like to add a few variables, "username" and "database", in my sqitch.conf for a defined target.
file sqitch.conf=>
engine = pg
[core "variables"]
  username = jv_root
  database = test
[target "dev_1"]
  uri = db:pg://username@sqlhost:5432/database
[target "dev_2"]
  uri = db:pg://username@sqlhost2:5432/database
When I run:
sqitch deploy -t dev_1
it throws an error =>
ERROR: no such user: username
You can add environment-specific variables like this:
[target.dev_1.variables]
username = jv_root
password = test
How you reference them in your SQL files depends on the SQL dialect.
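With the pg engine, sqitch hands those variables to psql, which interpolates them using its :name / :"name" / :'name' syntax. A minimal sketch of a deploy script using the variables above (script and object names are illustrative):

-- deploy/grant_app_access.sql
BEGIN;
-- :"database" and :"username" are interpolated by psql as quoted identifiers
GRANT CONNECT ON DATABASE :"database" TO :"username";
COMMIT;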

The value supplied for parameter 'instanceProfileName' is not valid

Running cdk deploy, I receive the following error message:
CREATE_FAILED | AWS::ImageBuilder::InfrastructureConfiguration | TestInfrastructureConfiguration The value supplied for parameter 'instanceProfileName' is not valid. The provided instance profile does not exist. Please specify a different instance profile and try again. (Service: Imagebuilder, Status Code: 400, Request ID: 41f431d7-8544-48e9-9faf-a870b83b0100, Extended Request ID: null)
The C# code looks like this:
var instanceProfile = new CfnInstanceProfile(this, "TestInstanceProfile", new CfnInstanceProfileProps {
    InstanceProfileName = "test-instance-profile",
    Roles = new string[] { "TestServiceRoleForImageBuilder" }
});
var infrastructureConfiguration = new CfnInfrastructureConfiguration(this, "TestInfrastructureConfiguration", new CfnInfrastructureConfigurationProps {
    Name = "test-infrastructure-configuration",
    InstanceProfileName = instanceProfile.InstanceProfileName,
    InstanceTypes = new string[] { "t2.medium" },
    Logging = new CfnInfrastructureConfiguration.LoggingProperty {
        S3Logs = new CfnInfrastructureConfiguration.S3LogsProperty {
            S3BucketName = "s3-test-assets",
            S3KeyPrefix = "ImageBuilder/Logs"
        }
    },
    SubnetId = "subnet-12f3456f",
    SecurityGroupIds = new string[] { "sg-12b3e4e5b67f8900f" }
});
The TestServiceRoleForImageBuilder exists and was working previously. Same code was running successfully about a month ago. Any suggestions?
If I remove the CfnInfrastructureConfiguration creation part, deployment runs successfully (but takes at least 2 minutes to complete):
AwsImageBuilderStack: deploying...
AwsImageBuilderStack: creating CloudFormation changeset...
0/3 | 14:24:37 | REVIEW_IN_PROGRESS | AWS::CloudFormation::Stack | AwsImageBuilderStack User Initiated
0/3 | 14:24:43 | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | AwsImageBuilderStack User Initiated
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata)
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::IAM::InstanceProfile | TestInstanceProfile
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::IAM::InstanceProfile | TestInstanceProfile Resource creation Initiated
1/3 | 14:24:48 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata) Resource creation Initiated
1/3 | 14:24:48 | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata)
1/3 Currently in progress: AwsImageBuilderStack, TestInstanceProfile
3/3 | 14:26:48 | CREATE_COMPLETE | AWS::IAM::InstanceProfile | TestInstanceProfile
3/3 | 14:26:49 | CREATE_COMPLETE | AWS::CloudFormation::Stack | AwsImageBuilderStack
Is this perhaps a race condition? Should I use multiple stacks to achieve my goal?
Would it be possible to use a wait condition (AWS::CloudFormation::WaitCondition) to wait out the 2 minutes of creation time, in case that delay is intended (AWS::IAM::InstanceProfile resources seem to always take exactly 2 minutes to create)?
Environment
CDK CLI Version: 1.73.0
Node.js Version: 14.13.0
OS: Windows 10
Language (Version): C# (.NET Core 3.1)
Update
Since the cause seems to be AWS internal, I used a pre-created instance profile as a workaround. The profile can be created either through the IAM Management Console or the CLI. However, it would be nice to have a proper solution.
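For reference, pre-creating the profile from the CLI looks roughly like this (profile and role names taken from the question):

aws iam create-instance-profile --instance-profile-name test-instance-profile
aws iam add-role-to-instance-profile --instance-profile-name test-instance-profile --role-name TestServiceRoleForImageBuilder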
You have to create an explicit dependency between the two constructs. CDK does not infer it when you use the optional name parameter, as opposed to the logical ID (which doesn't seem to work in this situation).
infrastructureConfiguration.node.addDependency(instanceProfile)
Here are the relevant docs: https://docs.aws.amazon.com/cdk/api/latest/docs/core-readme.html#construct-dependencies
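In C#, the equivalent call should presumably look like this (variable names as in the question):

// ensure CloudFormation creates the instance profile before the infrastructure configuration
infrastructureConfiguration.Node.AddDependency(instanceProfile);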

Error: failed to connect to database: password authentication failed in Rust

I am trying to connect to a Postgres database in Rust using the sqlx crate.
main.rs:
use dotenv;
use sqlx::Pool;
use sqlx::PgPool;
use sqlx::query;
#[async_std::main]
async fn main() -> Result<(), Error> {
    dotenv::dotenv().ok();
    pretty_env_logger::init();
    let url = std::env::var("DATABASE_URL").unwrap();
    dbg!(url);
    let db_url = std::env::var("DATABASE_URL")?;
    let db_pool: PgPool = Pool::new(&db_url).await?;
    let rows = query!("select 1 as one").fetch_one(&db_pool).await?;
    dbg!(rows);
    let mut app = tide::new();
    app.at("/").get(|_| async move { Ok("Hello Rustacean!") });
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}

#[derive(thiserror::Error, Debug)]
enum Error {
    #[error(transparent)]
    DbError(#[from] sqlx::Error),
    #[error(transparent)]
    IoError(#[from] std::io::Error),
    #[error(transparent)]
    VarError(#[from] std::env::VarError),
}
Here is my .env file:
DATABASE_URL=postgres://localhost/twitter
RUST_LOG=trace
Error log:
error: failed to connect to database: password authentication failed for user "ayman"
--> src/main.rs:19:16
|
19 | let rows = query!("select 1 as one").fetch_one(&db_pool).await?;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: aborting due to previous error
error: could not compile `backend`.
Note:
There exists a database called twitter.
I have included the macros feature in sqlx's dependency:
sqlx = {version="0.3.5", features = ["runtime-async-std", "macros", "chrono", "json", "postgres", "uuid"]}
Am I missing some level of authentication for connecting to the database? I could not find it in the docs for the sqlx::query! macro.
The reason it is unable to authenticate is that you must provide credentials before accessing the database.
There are two ways to do it.
Option 1: Change your URL to contain the credentials. For instance:
DATABASE_URL=postgres://localhost?dbname=mydb&user=postgres&password=postgres
Option 2: Use PgConnectOptions. For instance:
use sqlx::postgres::PgConnectOptions;
use sqlx::{PgPool, Pool, Postgres};

let pool_options = PgConnectOptions::new()
    .host("localhost")
    .port(5432)
    .username("dbuser")
    .database("dbtest")
    .password("dbpassword");
let pool: PgPool = Pool::<Postgres>::connect_with(pool_options).await?;
Note: The sqlx version that I am using is sqlx = {version="0.5.1"}
For more information refer the docs - https://docs.rs/sqlx/0.5.1/sqlx/postgres/struct.PgConnectOptions.html#method.password
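Applied to the setup in the question, the .env would then look something like this (the user is taken from the error message, the password is a placeholder):

DATABASE_URL=postgres://ayman:your-password@localhost:5432/twitter
RUST_LOG=trace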
Hope this helps you.

Cannot delete an instance of module in Terraform which contains a provider

I have a module which contains resources for:
azure postgres server
azure postgres database
postgres role (user)
postgres provider (for the server and used to create the role)
In one of my env directories I can have 0-N .tf files, each of which is an instance of that module and specifies the database name etc. So if I add another .tf file with a new name, a new database server with a database will be provisioned. All this works fine.
However, if I now delete an existing database module (one of the .tf files in my env directory) I run into issues. Terraform will try to refresh the state of all the previously existing resources, and since that specific provider (for that postgres server) is now gone, Terraform cannot get the state of the created postgres role and fails with "a provider configuration block is required for all operations".
I understand why this happens, but I cannot figure out how to solve it. I want to "dynamically" create (and remove) postgres servers with a database on them, but this requires "dynamic" providers, which is where I get stuck.
Example of how it looks
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
Above you see how, in the module, we create a postgres server, a postgres db and also a postgres role (where only the role utilizes the postgres provider). So if I now define an instance datadb.tf such as:
module "datadb" {
source = "../../modules/postgres"
db_name = "datadb"
resource_group = "${azurerm_resource_group.resource-group.name}"
location = "${azurerm_resource_group.resource-group.location}"
}
then it will be provisioned successfully. The issue is that if I later delete that same file (datadb.tf), planning fails because Terraform tries to read the state of the postgres role without the postgres provider being present.
The postgres provider is only needed for the postgres role, which will effectively be destroyed as soon as the azure provider destroys the postgres db and postgres server, so the actual removal of that role is not necessary. Is there a way to tell Terraform "if this resource should be removed, you don't have to do anything, because it will disappear along with the server it lives on"? Or does anyone see any other solutions?
I hope my goal and issue is clear, thanks!
I think the only solution is a two-step solution, but I think it's still clean enough.
What I would do is have two files per database (name them how you want).
db-1-infra.tf
db-1-pgsql.tf
Put everything except your postgres resources in db-1-infra.tf
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
Put your PostgreSQL resources in db-1-pgsql.tf
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
When you want to get rid of your database, first delete the file db-1-pgsql.tf and apply. Next, delete db-1-infra.tf and apply again.
The first step will destroy all postgres resources and free you up to run the second step, which will remove the postgres provider for that database.
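For illustration, the two-step removal then boils down to (file names as above):

# step 1: drop the resources that need the postgres provider
rm db-1-pgsql.tf
terraform apply

# step 2: drop the server, database and provider configuration
rm db-1-infra.tf
terraform apply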

How to destroy Postgres databases owned by other users in RDS using Terraform?

I managed to get Terraform nicely creating databases and roles in an RDS Postgres database, but due to the stripped-down permissions of rds_superuser I can't see an easy way to destroy the created databases that are owned by another user.
Using the following config:
resource "postgresql_role" "app" {
name = "app"
login = true
password = "foo"
skip_reassign_owned = true
}
resource "postgresql_database" "database" {
name = "app_database"
owner = "${postgresql_role.app.name}"
}
(For reference, skip_reassign_owned is required because the rds_superuser group doesn't get the necessary permissions to reassign ownership.)
leads to this error:
Error applying plan:
1 error(s) occurred:
* postgresql_database.database (destroy): 1 error(s) occurred:
* postgresql_database.database: Error dropping database: pq: must be owner of database debug_db1
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Using local-exec provisioners I was able to grant the role that owned the database to the admin user and the application user:
resource "aws_db_instance" "database" {
...
}
provider "postgresql" {
host = "${aws_db_instance.database.address}"
port = 5432
username = "myadminuser"
password = "adminpassword"
sslmode = "require"
connect_timeout = 15
}
resource "postgresql_role" "app" {
name = "app"
login = true
password = "apppassword"
skip_reassign_owned = true
}
resource "postgresql_role" "group" {
name = "${postgresql_role.app.name}_group"
skip_reassign_owned = true
provisioner "local-exec" {
command = "PGPASSWORD=adminpassword psql -h ${aws_db_instance.database.address} -U myadminuser postgres -c 'GRANT ${self.name} TO myadminuser, ${postgresql_role.app.name};'"
}
}
resource "postgresql_database" "database" {
name = "mydatabase"
owner = "${postgresql_role.group.name}"
}
which seems to work, compared to setting ownership only for the app user. I do wonder if there's a better way to do this without having to shell out in a local-exec, though?
After raising this question I managed to raise a pull request with the fix, which was released in version 0.1.1 of the PostgreSQL provider, so this now works fine in the latest release of the provider.
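With that fix, the straightforward configuration from the top of the question should destroy cleanly, provided the provider version is new enough; a minimal sketch of pinning it (other settings copied from above):

provider "postgresql" {
  version         = ">= 0.1.1" # includes the fix for dropping databases owned by other roles
  host            = "${aws_db_instance.database.address}"
  port            = 5432
  username        = "myadminuser"
  password        = "adminpassword"
  sslmode         = "require"
  connect_timeout = 15
}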