dbt could not find an incremental strategy macro with the name "get_incremental_insert_sql" - amazon-redshift

In DBT Redshift, I have a model with this config:
{{ config(
    materialized='incremental',
    incremental_strategy='insert'
) }}
It runs fine with dbt-core 1.2:
dbt-core==1.2.4
dbt-extractor==0.4.1
dbt-postgres==1.2.4
dbt-redshift==1.2.0
But if I upgrade dbt-core/postgres/redshift to 1.3.0, it breaks with this error:
dbt could not find an incremental strategy macro with the name "get_incremental_insert_sql"
Is the insert strategy not supported, or do I need to alter the config?

I don't think insert was ever supported; it was probably invalid config being quietly ignored. They refactored this config in 1.3 (see this issue), which added validation.
Your options for Redshift are append or delete+insert. Since you haven't defined a unique key, I think you probably want append.
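For example, a sketch of the corrected config (same model, with the strategy swapped to append):
{{ config(
    materialized='incremental',
    incremental_strategy='append'
) }}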
This should probably be better documented here, but that page doesn't mention PG/Redshift.

Related

How to add changes for NO ENCRYPT DB2 option to db2RestoreStruct

I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there DB2 sample code where I can check how to set the "NO Encrypt" option in DB2? I am adding the code snippet below.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO;
The 'C' API named db2Restore will restore an encrypted image to an unencrypted database, when used correctly.
You can use a modified version of IBM's sample files, dbrestore.sqc and related files, to see how to do it.
Depending on your 'C' compiler version and settings, you might get a lot of warnings from IBM's code, because IBM does not appear to maintain the code of their samples as the years pass. However, you do not need to run IBM's sample code; you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2-server version+fixpack, and you must use the C include files that come with your Db2-server version+fixpack to get the relevant definitions.
The modifications to IBM's samples code include:
When using the db2Restore API, ensure its first argument has a value that is compatible with your server's Db2 version and fixpack to access the required functionality. If you specify the wrong version number for the first argument, for example a version of Db2 that did not support this functionality, then the API will fail. For example, on my Db2-LUW v11.1.4.6, I used the predefined db2Version1113, like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field, enable the DB2RESTORE_NOENCRYPT flag. For example, in IBM's sample, include the additional flag:
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the encrypted-backup alias name.
I tested with Db2 v11.1.4.6 (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.
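Putting those modifications together, a minimal sketch of the relevant fragment might look like the following. This is not a drop-in replacement for IBM's sample: the function name and aliases are placeholders, and the db2DbEncryptOptionsStruct type name and the iCallerAction setting are assumptions to verify against the db2ApiDf.h shipped with your server version.
#include <db2ApiDf.h>  /* db2Restore, db2RestoreStruct, DB2RESTORE_* flags */
#include <sqlenv.h>    /* struct sqlca */

void restoreNoEncrypt(void)
{
    struct sqlca sqlca = {0};
    db2RestoreStruct restoreStruct = {0};
    db2DbEncryptOptionsStruct encryptOpts = {0};  /* assumed type name; check db2ApiDf.h */

    restoreStruct.piSourceDBAlias = "ENCDB";    /* placeholder: alias of the encrypted backup */
    restoreStruct.piTargetDBAlias = "PLAINDB";  /* must differ from the backup alias */

    encryptOpts.encryptDb = SQL_ENCRYPT_DB_NO;  /* request an unencrypted target database */
    restoreStruct.piDbEncOpts = &encryptOpts;

    restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB
                           | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD
                           | DB2RESTORE_NOENCRYPT;
    restoreStruct.iCallerAction = DB2RESTORE_RESTORE;  /* assumed; IBM's sample loops on caller actions */

    /* first argument must match your server's version+fixpack */
    db2Restore(db2Version1113, &restoreStruct, &sqlca);
}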

PostgreSQL "forgets" default schema when closing data source connection

I am running into a very strange issue with Spring Boot and Spring Data: after I manually close a connection, the formerly working application seems to "forget" which schema it's using and complains about missing relations.
Here's the code snippet in question:
try (Connection connection = this.dataSource.getConnection()) {
    ScriptUtils.executeSqlScript(connection, new ClassPathResource("/script.sql"));
}
This code works fine, but after it executes, the application immediately starts throwing errors like the following:
org.postgresql.util.PSQLException: ERROR: relation "some_table" does not exist
Prior to executing the code above, the application works fine (including referencing the table it later complains about). If I remove the try-with-resources block and do not close the Connection, everything also works fine, except that I've now created a resource leak. I have also tried explicitly setting the default schema (public) in the following ways:
In the JDBC URL with the currentSchema parameter
With the spring.datasource.hikari.schema parameter
With the spring.jpa.properties.hibernate.default_schema property
The last does alleviate the issue with respect to Hibernate managed classes, but the issue persists with native queries. I could, of course, make the schema explicit in those queries, but that doesn't seem to address the root issue. Why would closing a connection trigger this behavior?
My environment:
Spring Boot 2.5.1
PostgreSQL 12.7
Thanks to several users above who immediately saw what I did not. The script, adapted from an older pg_dump run, was indeed mucking with the search_path:
SELECT pg_catalog.set_config('search_path', '', false);
Removing that line, and some other unnecessary ones, resolved the problem. Since the Connection came from the Hikari pool, close() only returned it to the pool instead of ending the session, so the emptied search_path stuck to that session and broke whatever borrowed the connection next. Big duh on my part.
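If the pg_dump preamble had to stay, a sketch of an alternative fix (assuming all objects live in public) would be to restore the search_path at the end of the script:
SELECT pg_catalog.set_config('search_path', 'public', false);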

YAML schema validation in PowerShell

I'm working with powershell-yaml to parse my YAML into a PowerShell object.
Currently, I have a problem validating my YAML schema. I've used the yaml-schema-validator package for my JavaScript project, and I couldn't find any similar function/module to help me solve this problem in PowerShell.
Is there a schema validation language for YAML in PowerShell?
Simply put, no, I don't believe there are any PowerShell-native options for validating a doc against a YAML schema.
Since YAML is a superset of JSON, one could (depending on the YAML being validated) use a schema expressed in JSON and validate with Test-Json.
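For example, a rough sketch of that round-trip, assuming PowerShell 7+ for Test-Json's -SchemaFile parameter and hypothetical file names config.yml and schema.json:
# Parse the YAML, re-serialize as JSON, then validate against a JSON Schema
Import-Module powershell-yaml
$yamlObject = ConvertFrom-Yaml (Get-Content -Raw .\config.yml)
$json = $yamlObject | ConvertTo-Json -Depth 10
Test-Json -Json $json -SchemaFile .\schema.json  # $true if valid; errors describe violations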
There are two active YAML modules I'm aware of: (1) PSYaml and (2) powershell-yaml, the second being what you use today. I don't believe either of them validates YAML docs against a schema.
I believe there are schema validation modules/projects in the following:
Ruby
Python
PHP
JavaScript
You can see the list in Schema Validation for YAML.
You could always do your validation in another language and wrap that call in PowerShell. You just have to handle the integration yourself.

Play Framework 2.0 evolutions: how to mark an inconsistent state as resolved in PROD

I have an application developed in Scala with Play 2.0.
It worked successfully locally, but it failed when deployed to Heroku.
The reason for the failure is that locally I was using an H2 database,
and Heroku uses PostgreSQL, so I had to change one of the data types from "clob" to "text".
The problem now is that the database on Heroku is in an "inconsistent state", according to the Play 2.0 documentation.
In DEV mode (locally), you can just click "Mark it as resolved" when the HTML error page appears.
How do you "mark it as resolved" in the Heroku PROD environment?
http://www.playframework.com/documentation/2.1.1/Evolutions
PS: because it was a new application, I just deleted the database and re-started.
However, here I am asking what the proper way to handle evolutions in the PROD env is;
that is, the "Mark it as resolved" step for PROD is not explained here: http://www.playframework.com/documentation/2.1.1/Evolutions
Although I couldn't find a way to do it via the play command, you can do it by editing the database directly.
Imagine you're trying to go from 5.sql to 6.sql. Here's what you do:
Figure out and fix the problem(s) that caused the database to enter an inconsistent state (i.e. manually apply your !Ups and fix all the problems with them).
Manually apply your !Downs so that the database is in the state it was after 5.sql was applied.
Go into your database, find the table called play_evolutions, and look at the row with id 6. It should say something like applying ups in the state column and have the error message in the last_problem column.
Delete the row with id 6 (the SQL one-liner is shown after this list). This will make Play think you are in the state you were in with 5.sql.
Now you should be able to run play -DapplyEvolutions.default=true start to evolve to 6.sql.
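For the delete in step 4, assuming the default evolutions table name:
DELETE FROM play_evolutions WHERE id = 6;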
Inconsistent state just means that the evolutions could not be applied and thus the application is blocked. Update your evolution scripts and re-deploy.

Hortonworks-oozie

I am trying to run a workflow in a Hortonworks cluster using Oozie.
I am getting the following error:
Error: Invalid workflow-app, org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hive'.
Does anyone know the reason?
At least a sample hive workflow.xml that can be run on a Hortonworks distribution would be helpful.
This has to do with the first line of your workflow:
<workflow-app name="${workflowName}" xmlns="uri:oozie:workflow:0.4">
Specifically: uri:oozie:workflow:0.4
The xmlns value tells Oozie which XML schema to follow. I am assuming you used an online resource to build an action, which may be in a newer scheme than the one you specified.
There are versions:
- uri:oozie:workflow:0.1
- uri:oozie:workflow:0.2
- uri:oozie:workflow:0.2.5
- uri:oozie:workflow:0.3
- uri:oozie:workflow:0.4
See: Oozie Workflow Schemes
But usually setting yours to the code example above (0.4) will work for all newer workflows.
Actions also have schemes, so it is important to look at what functions they have in each version.
The hive action currently goes up to 0.5, I believe, although I use 0.4 with this line:
<hive xmlns="uri:oozie:hive-action:0.4">
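For reference, a minimal hive workflow along these lines might look like the following sketch; ${jobTracker}, ${nameNode}, and script.q are placeholders you would supply via your job properties:
<workflow-app name="sample-hive-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="hive-node"/>
    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.4">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>script.q</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>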
If this does not help, please update the question with your workflow for further help.