dbt to Snowflake connection fails via profiles.yml - snowflake-task

I'm trying to connect to Snowflake via dbt, but the connection fails with the error below:
Using profiles.yml file at /home/myname/.dbt/profiles.yml
Using dbt_project.yml file at /mnt/c/Users/Public/learn_dbt/rks-learn-dbt/learn_dbt/dbt_project.yml
Configuration:
profiles.yml file [ERROR invalid]
dbt_project.yml file [OK found and valid]
Profile loading failed for the following reason:
Runtime Error
Could not find profile named 'learn_dbt'
Required dependencies:
- git [OK found]
Any advice please.
Note: I am learning to set up dbt connections by following Udemy videos.
Below is my profiles.yml file:
learn_dbt:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: XXXXXX
      user: XXXX
      password: XXXX
      role: transform_role
      database: analytics
      warehouse: transform_wh
      schema: dbt
      threads: 1
      client_session_keep_alive: False

My first guess is that you have a profiles.yml file in your dbt project folder and dbt is not actually using the one in /home/myname/.dbt/.
Could you try running the following?
dbt debug --profiles-dir /home/myname/.dbt
The --profiles-dir flag works on most dbt CLI commands and lets you use a custom profiles.yml that's outside your project. I use this flag all the time.
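If dbt debug passes with the explicit flag, the same flag (or the DBT_PROFILES_DIR environment variable, a standard dbt setting not mentioned in the original post) points any other command at that directory. A minimal sketch, reusing the path from the question:

dbt run --profiles-dir /home/myname/.dbt
dbt test --profiles-dir /home/myname/.dbt

# Or set it once for the shell session:
export DBT_PROFILES_DIR=/home/myname/.dbt
dbt debug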

I had to run pip install dbt-snowflake and then it worked.
It seems dbt has separated its modules into dbt-core and its adapters: dbt-snowflake, dbt-postgres, etc.
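As a minimal sketch (assuming a pip-based setup), installing the adapter pulls in dbt-core as a dependency, and dbt --version confirms both are visible:

pip install dbt-snowflake
dbt --version   # should list dbt-core and the installed snowflake plugin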

I think this is a similar issue to what I had when using the cloud environment.
If you are using a Snowflake instance on the West coast (historically the default region), the account name looks like <xxx12345>, with no region suffix.
If you are using a Snowflake instance on the East coast, the account name looks like <xxx12345.us-east-1>.

Overall, this error means dbt is unable to read your environment template to get your Snowflake account details.
'env.pd.template.bat' or 'env.pd.template.sh' is the base file which holds your Snowflake account settings, so you have to run this script to connect to Snowflake from your editor.
You can use the '.bat' or '.sh' version depending on whether you are in PowerShell/CMD or a Unix shell.
In my scenario I ran 'env.pd.private.bat'; you need to run this script every time to connect to your Snowflake account with your credentials. I ran it in a cmd window, and it fixed my error.

Related

Connect Postgres db hosted in azure storage using docker

I am trying to connect to a Postgres database hosted in an Azure storage account from within Flyway; Flyway is running as a Docker image in a Docker container:
docker run --rm flyway/flyway -url=jdbc:postgresql://postgres-azure-db:5432/postgres -user=user -password=password info
But I am getting the error: ERROR: Unable to obtain connection from database
Any idea/doc-link would be helpful.
You have a similar error (different context, same general solution) in this Flyway issue:
My missing piece for reaching private Cloud SQL instances from private worker pools in Cloud Build was a missing network route.
The fix is ensuring the Service Networking VPC peering has the "Export custom routes" setting enabled, and that the Cloud Router advertises the route.
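A hedged sketch of that GCP-side fix (the peering name shown is the usual default for Service Networking, and the network name is a placeholder):

gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-vpc \
    --export-custom-routes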
In your context (Azure), see "Quickstart: Use Azure Data Studio to connect and query PostgreSQL"
You can also try with a local Postgres instance first, and Azure Data Studio, for testing.
After exploring a few options, I implemented Flyway using an Azure Container Instance (ACI): I created an ACI to hold the Flyway Docker image and to execute the commands inside it, and also created a file share to keep the config file and SQL scripts.
All these resources (storage, ACI, file share) were created via Terraform scripts triggered from Jenkins.
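For the Docker side of this, a minimal sketch (assuming the volume paths documented for the flyway/flyway image) is to mount the config file and SQL scripts into the container instead of passing everything on the command line:

docker run --rm \
    -v "$(pwd)/sql:/flyway/sql" \
    -v "$(pwd)/conf:/flyway/conf" \
    flyway/flyway migrate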

Unknown property errors trying to do a Flyway migration with per-script config files

My company is evaluating Flyway for database releases. We have an AWS PostgreSQL version 11.2 database and I have installed Flyway Community Edition version 6.1.2.
I have successfully baselined the database and run several basic DDL scripts using Flyway migrate. However, I am now testing a more complicated scenario in which I need to run multiple scripts as one migration, but each script has to connect as a different PostgreSQL user. I have tried to do this by setting up two SQL files, each with its own config file, as described here: https://flywaydb.org/documentation/scriptconfigfiles
Every time I run the migrate command I get a property error: "ERROR: Unknown script configuration property: flyway.user" or "ERROR: Unknown script configuration property: user", etc, etc.
For debugging purposes I removed one SQL-and-config combo so that I now have only one of each. The files are named V2020.1.14.08.41.00__role_test1.sql and V2020.1.14.08.41.00__role_test1.sql.conf. I did confirm that any changes to that config file are being picked up by the migrate command. My config file contains the following properties (values changed for security reasons):
flyway.url=jdbc:postgresql://...
flyway.user=user1
flyway.password=password
flyway.schemas=test
I have also tried removing the flyway prefix:
url=jdbc:postgresql://...
user=user1
password=password
schemas=test
And removing the url parameter (both flyway.url and url) so the migration reads that value from the default flyway.conf file. Example:
user=user1
password=password
schemas=test
I get the errors every time. Anyone have any ideas? All help is greatly appreciated.
There is a typo in your code:
flyeay.user=user1
It should be:
flyway.user=user1

AWS DMS Streaming Replication: Logical Decoding Output Plugin (test_decoding) not accessible

I'm trying to migrate a PostgreSQL DB persisted on cloud (on DO droplet) to RDS using AWS Database Migration Service (DMS).
I've successfully configured the replication instance and endpoints.
I've created a task with "Migrate existing data and replicate ongoing changes". When I start the task it shows the error: ERROR: could not access file "test_decoding": No such file or directory.
I've also tried to create a replication slot manually from my DB console, and it throws the same error.
I've followed the procedures suggested in the DMS documentation for Postgres.
I'm using PostgreSQL 9.4.6 on my source endpoint.
I presume that the problem is the output plugin test_decoding was not accessible to do the replication.
Please assist me to resolve this. Thanks in advance!
You must install the postgresql-contrib additional supplied modules on your source endpoint.
If it is already installed, make sure the directory where the test_decoding module is located is the same as the directory where PostgreSQL expects it.
On *nix, you can check the module directory with this command:
pg_config --pkglibdir
If it is not the same, copy the module, make a symlink, or use whatever other solution you prefer.
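As a minimal sketch, assuming a Debian/Ubuntu source host (package names vary by distro and by PostgreSQL version, 9.4 per the question):

# Install the contrib modules and confirm test_decoding is in place:
sudo apt-get install postgresql-contrib-9.4
ls "$(pg_config --pkglibdir)/test_decoding.so"

# Then check logical decoding end to end from psql:
#   SELECT * FROM pg_create_logical_replication_slot('dms_test', 'test_decoding');
#   SELECT pg_drop_replication_slot('dms_test');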

How do I connect to an AWS PostgreSQL RDS instance using SSL and the sslrootcert parameter from a Windows environment?

We have a Windows EC2 instance on which we are running a custom command line application (C# console app using NpgSQL) to connect to a PostgreSQL RDS instance. Based on the instructions here:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.SSL
we created a new DB parameter group with rds.force_ssl set to 1 and rebooted our RDS instance. We also downloaded and imported to Windows the pem file referenced on the page.
I was able to connect to the RDS instance from my Windows EC2 instance via pgAdmin by specifying SSL mode as Verify-Full. Our command-line application reads connection strings from a file and they look like this now that I've added the sslmode parameter:
Server=OurInstanceAddress;Port=5432;SearchPath='$user,public,topology';Database=OurDatabase;User Id=username;Password=mypassword;sslmode=verify-full;
Using this connection string failed with the error referenced at the bottom of the page:
FATAL: no pg_hba.conf entry for host "host.ip", user "someuser", database "postgres", SSL off
I tried adding the sslrootcert parameter, but I'm not sure if I'm dealing with it properly. I tried using the example (sslrootcert=rds-ssl-ca-cert.pem) and I tried using the name of the pem that I downloaded. I feel like there is something about the path information that I'm giving to the sslrootcert parameter that isn't right, especially in a Windows environment. I've tried using the name, I've tried using the following paths:
- sslrootcert=C:\keys\rds-combined-ca-bundle.pem - single backslashes
- sslrootcert=C:\\keys\\rds-combined-ca-bundle.pem - double backslashes
- sslrootcert=C:/keys/rds-combined-ca-bundle.pem - forward slashes
All of these produced the same error mentioned above.
Any insight would be appreciated.
I solved it by using environment variables to specify the cert paths instead of putting them in the connection URL:
PGSSLROOTCERT=/certs/root.crt
PGSSLKEY=/certs/amazon-postgresql.key
PGSSLCERT=/certs/amazon-postgresql.crt
Note that I'm on Cygwin. There are some hints about using SSL on Windows in the documentation here: https://www.postgresql.org/docs/9.0/static/libpq-ssl.html
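As a way to verify the certificate paths themselves, a minimal sketch for a libpq-based client such as psql (paths and host are placeholders; NpgSQL does not necessarily read these variables, so this checks the certificates rather than fixing the C# app directly):

export PGSSLMODE=verify-full
export PGSSLROOTCERT=/certs/root.crt
psql "host=OurInstanceAddress port=5432 dbname=OurDatabase user=username"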

Replic-Action log error: Cannot open Link: TNS could not resolve the service name

Hi, I am working with the Replic-Action tool to transfer data from a Lotus Notes view to an Oracle database.
When I create the Link document for the Oracle DB, it is created successfully without any error.
When I create the Include Table for the Oracle DB, it is created successfully and all columns are listed.
When I create the Replication, it is also created successfully.
But when the job executes, it gives this error in the log:
05/08/2012 01:37:16 AM Starting Replication: BADtoProductPortal
05/08/2012 01:37:19 AM Error: <ODBC Error> [DataDirect][ODBC Oracle driver][Oracle]ORA-12154: TNS:could not resolve service name
05/08/2012 01:37:19 AM Error: Information: Unable to open Link: PPLink
05/08/2012 01:37:19 AM Error: Replication to Link <PPLink> did not complete
05/08/2012 01:37:20 AM End of Replication: BADtoProductPortal
If the error were with the service name, then I think we should not have been able to create the Link document either.
When I use an ODBC connection for the link, I am unable to create the Replication job; it gives an error like: Notes Data field "ID" does not match the source data field.
But I know it was working before.
I suggest checking that the task running the job uses the same TNS entry as you do when running it "manually".
I also suggest checking that the task has access to your Oracle driver. Does it have the rights to run it?
The ORA-12154 error is thrown during the logon process to a database. It indicates that Oracle's networking software (TNS, i.e. SQL*Net or Net8) did not recognize the host/service name specified in the connection parameters.
So the issue is most likely an "environment difference" between your configuration when you run the replication manually and when the job runs it.
Hope this helps.
I'm assuming here that when you successfully replicate you're doing it manually from your local machine, and when the job fails it's running scheduled on a server. If that's the case I agree with Emmanuel. Remember that running the job locally uses the local tnsnames.ora file, while running it scheduled uses the tnsnames.ora file on the server. You may not be aware of anything changing, but are you responsible for maintenance on the server?
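A minimal sketch for comparing the two environments (PPLINK_SERVICE is a placeholder service name; run this both in your interactive session and on the server that executes the scheduled job, using %TNS_ADMIN% in cmd on Windows):

echo $TNS_ADMIN                           # directory Oracle reads tnsnames.ora from
grep -i pplink "$TNS_ADMIN/tnsnames.ora"  # confirm the entry exists
tnsping PPLINK_SERVICE                    # verify the name actually resolves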