Cannot create schema migrations table in redshift with ecto - amazon-redshift

I have a project where the database is Redshift and I am using the Postgrex adapter in my Phoenix project. Locally I am using PostgreSQL and everything works fine, but when I deploy and try to run migrations, I get this error:
15:39:27.201 [error] Could not create schema migrations table. This error usually happens due to the following:
* The database does not exist
* The "schema_migrations" table, which Ecto uses for managing
migrations, was defined by another library
* There is a deadlock while migrating (such as using concurrent
indexes with a migration_lock)
To fix the first issue, run "mix ecto.create".
To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create". Alternatively you may configure Ecto to use
another table for managing migrations:
    config :my_service, MyService.Repo,
      migration_source: "some_other_table_for_schema_migrations"
The full error report is shown below.
Given the following expression: Elixir.MyService.StartupTasks.init()
The remote call failed with:
** (exit) %Postgrex.Error{connection_id: 5598, message: nil, postgres: %{code: :feature_not_supported, file: "/home/ec2-user/padb/src/pg/src/backend/commands/tablecmds.c", line: "3690", message: "timestamp or timestamp with time zone column do not support precision.", pg_code: "0A000", routine: "xen_type_size_from_attr", severity: "ERROR"}, query: nil}
    (ecto_sql) lib/ecto/adapters/sql.ex:629: Ecto.Adapters.SQL.raise_sql_call_error/1
    (elixir) lib/enum.ex:1336: Enum."-map/2-lists^map/1-0-"/2
    (ecto_sql) lib/ecto/adapters/sql.ex:716: Ecto.Adapters.SQL.execute_ddl/4
    (ecto_sql) lib/ecto/migrator.ex:633: Ecto.Migrator.verbose_schema_migration/3
    (ecto_sql) lib/ecto/migrator.ex:477: Ecto.Migrator.lock_for_migrations/4
    (ecto_sql) lib/ecto/migrator.ex:401: Ecto.Migrator.run/4
    (my_service) lib/my_service/startup_tasks.ex:11: MyService.StartupTasks.migrate/0
    (stdlib) erl_eval.erl:680: :erl_eval.do_apply/6
It seems that Redshift does not support some of the data types that Postgres supports, so is there a better way to go about this, or can I create my own schema migrations table with a different timestamp type?

There are limitations that the driver cannot overcome, since the way Redshift works differs from a Postgres database; here is the documentation for the Ecto adapter.
The documentation states:
We highly recommend reading the Designing Tables section from the AWS
Redshift documentation.
If you want to continue to use Postgres locally, then you will need to create two separate repos and, correspondingly, separate migrations; the commands you can use to migrate a separate repo are shown in the sketch below.
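A minimal sketch of that setup, assuming a hypothetical MyService.RedshiftRepo (the module name, priv path, and adapter choice below are illustrative, not from the question):

# lib/my_service/redshift_repo.ex - a second repo pointed at Redshift;
# use whichever adapter your Redshift library provides.
defmodule MyService.RedshiftRepo do
  use Ecto.Repo, otp_app: :my_service, adapter: Ecto.Adapters.Postgres
end

# config/config.exs - register both repos and give the Redshift repo its own
# priv directory, so its migrations live in priv/redshift_repo/migrations,
# separate from the local Postgres migrations.
config :my_service, ecto_repos: [MyService.Repo, MyService.RedshiftRepo]

config :my_service, MyService.RedshiftRepo,
  priv: "priv/redshift_repo"

The repo-aware mix tasks then take the repo explicitly, e.g. mix ecto.gen.migration create_events -r MyService.RedshiftRepo and mix ecto.migrate -r MyService.RedshiftRepo.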
However, I recommend getting a dev instance of Redshift and using it for development, since working with a database like Redshift is different and you can easily make a mistake.

Related

deleting knex migrate_lock table - is it safe?

I actually stopped a rollback migration midway, and now I don't really know what the problem is; it won't roll back migrations any more. I used the command knex migrate:rollback --knexfile=knexfile-client.ts --verbose <name> for the rollback.
What I do know is that these records are kept in the migrations and migrations_lock tables that knex creates automatically, and I was wondering whether the problem will be solved if I delete these two. Would they regenerate, making things fully functional again? My database is heavy, and I'd like to avoid pulling and recreating everything.
Currently this is the error:
$ knex migrate:rollback --knexfile=knexfile-client.ts --verbose
Requiring external module ts-node/register
Using environment: development
FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
FS-related option specified for migration configuration. This resets migrationSource to default FsMigrations
migration file "20210413082306_create_find_report.ts" failed
migration failed with error: drop table "find_report" - table "find_report" does not exist
error: drop table "find_report" - table "find_report" does not exist
at Parser.parseErrorMessage (/app/node_modules/pg-protocol/src/parser.ts:357:11)
at Parser.handlePacket (/app/node_modules/pg-protocol/src/parser.ts:186:21)
at Parser.parse (/app/node_modules/pg-protocol/src/parser.ts:101:30)
at Socket.<anonymous> (/app/node_modules/pg-protocol/src/index.ts:7:48)
at Socket.emit (node:events:369:20)
at addChunk (node:internal/streams/readable:313:12)
at readableAddChunk (node:internal/streams/readable:288:9)
at Socket.Readable.push (node:internal/streams/readable:227:10)
at TCP.onStreamRead (node:internal/stream_base_commons:190:23)
error Command failed with exit code 1.
I've triple-checked from pgAdmin that the find_report table exists. Any help would be appreciated. Thanks in advance.
PS: I'm actually using a make script to run the above command so that I don't have to specify the file name.

Scaffolding with geometry data in PostgreSql mapping error

I'm trying to do Model First (scaffolding) of an existing database in PostgreSQL with geometry data.
In the VS project I have installed all the necessary NuGet packages (EntityFrameworkCore, EntityFrameworkCore.Design, EntityFrameworkCore.Relational, EntityFrameworkCore.Tools, Npgsql.EntityFrameworkCore.PostgreSQL, Npgsql.EntityFrameworkCore.PostgreSQL.Design and Npgsql.NetTopologySuite).
In VS PM, when launching the command:
Scaffold-DbContext "Host=myserver;Database=spatial;Username=postgres;Password=xxxxxxxx" Npgsql.EntityFrameworkCore.PostgreSQL -Schemas spu -OutputDir Spatials
It gives me these exceptions:
Could not find type mapping for column 'spu.nuts.geom' with data type
'geometry(Geometry,4326)'. Skipping column.
And it doesn't map the geometry columns; all the other columns are OK.
What am I doing wrong?
Can I specify scaffolding using NetTopologySuite?
Thanks a lot
Edit: Solved.

Failure to find table when using multiple schemas in PostgreSQL

WPF PostgreSQL 11.1
Npgsql.PostgresException: '42P01: relation "testme" does not exist'
When attempting to use a PostgreSQL database with multiple schemas, I have defined the following connection strings in the App.config. Note that the only difference is in the SearchPath:
<system.data>
  <DbProviderFactories>
    <add name="Npgsql Data Provider" invariant="Npgsql" support="FF" description=".Net Framework Data Provider for Postgresql Server" type="Npgsql.NpgsqlFactory, Npgsql, Version=4.0.4.0, Culture=neutral" />
  </DbProviderFactories>
</system.data>
<connectionStrings>
  <clear />
  <add name="localconnection" providerName="Npgsql" connectionString="Server=127.0.0.1;Port=5432;Database=chaos;User Id=postgres;Password=****;Searchpath=nova" />
  <add name="phoenixconnection" providerName="Npgsql" connectionString="Server=127.0.0.1;Port=5432;Database=chaos;User Id=postgres;Password=****;SearchPath=phoenix;" />
</connectionStrings>
The Npgsql data provider was installed using NuGet (Runtime Version: v4.0.30319, Version: 4.0.4.0).
In PostgreSQL, in the Phoenix schema:
CREATE TABLE phoenix.testme
(
name text COLLATE pg_catalog."default" NOT NULL
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE phoenix.testme
OWNER to postgres;
Using PgAdmin, displaying the testme table works without problem:
select * from phoenix.testme;
I have configured the WCF service using the above connection strings. Using PetaPoco, I write the following script:
public string SayHello()
{
    string msg;
    using (var db = new chaosDB("phoenixconnection"))
    {
        var m = db.ExecuteScalar<string>("select version()");
        msg = string.Format("Hello from {0}", m);
        m = db.ExecuteScalar<string>("select current_schema");
        msg = string.Format("{0} Current Schema is {1}", msg, m);
        var ss = db.ExecuteScalar<string>("show search_path");
        var s = db.Fetch<string>("select * from testme"); // <-- THIS FAILS!
        msg = string.Format("{0} I Am {1}", msg, m);
    }
    return msg;
}
All works correctly until "select * from testme" is executed, at which point I receive the above error. Note: ss from "show search_path" correctly returns "phoenix".
WHAT AM I DOING WRONG? How do I get this to work?
Any help is most appreciated!
After much head scratching the answer became self-evident. First I reset the search_path in the database. This did not help. Then I rebuilt the POCOs with PetaPoco and quickly discovered that not only was the new table, "testme", not picked up, but no POCOs were generated at all. Checking the Database.tt file in PetaPoco showed that it had the wrong ConnectionStringName. Changing the ConnectionStringName to "phoenixconnection" allowed the POCOs to be built, but it still failed to find the "testme" table.
Then the mistake became readily apparent: as stated above, both the "phoenixconnection" and the "localconnection" pointed to the same port. From previous development, I had PostgreSQL v10.1 running on the same port as the newer PostgreSQL v11.1. Apparently, the older PostgreSQL v10.1 was receiving the connection (and not the newer PostgreSQL v11.1).
Going to services (services.msc) and shutting down v10.1 and running Database.TT now gave the error:
System.InvalidOperationException: Sequence contains more than one matching element
Apparently v10.1 (which I was using for development) only had ONE schema, but v11.1 has multiple schemas. I take the error message to mean that PetaPoco was seeing multiple tables with the same table name, i.e., it was not distinguishing between schemas.
So, the problem is now solved.
Fix the ports! The older single-schema PostgreSQL v10.1 is kept on port 5432.
The newer multiple-schema PostgreSQL v11.1 is kept on port 5433. The v10.1 instance will be used for generating the POCOs.
Fix the connection strings in the App.config of the WCF so that at run time, the WCF will use the newer v11.1. Once generated, LEAVE THE POCOs alone and reference them in the WCF file.
Apparently, PetaPoco can only work with one schema when generating its POCOs, but at runtime it will read the connection strings from the App.config of the WCF to execute its queries, etc. (So in the App.config where Database.tt resides, point PetaPoco to the "development" database with only a single schema, but in the WCF environment, point the connection string to the new database with multiple schemas. The SearchPath of the connection string IS respected when running through Npgsql.)
It would be nice if PetaPoco could generate POCOs specific to a schema in a multi-schema environment, but at the moment, I guess it can't :(
Addendum Note: It turns out that a given instance of PostgreSQL can have multiple databases. So if the connection string for Npgsql is specific to a development database (i.e., a database with only one schema), then during development PetaPoco works great for creating the POCOs. These POCOs can then be used directly in a WCF service project and uploaded to an IIS website. The App.config of the web site can then be directed (again via the connection string) to the deployed run-time database. All works well! :)

DB2 exception: Cannot create PoolableConnectionFactory SQLCODE=-142,

I keep getting the following error in MobileFirst Platform 6.3:
Runtime: org.apache.commons.dbcp.SQLNestedException: Cannot create
PoolableConnectionFactory (DB2 SQL Error: SQLCODE=-142,
SQLSTATE=42612, SQLERRMC=null, DRIVER=4.19.26)
This is my adapter code:
var test2 = WL.Server.createSQLStatement("SELECT * FROM WSDIWC.WBPTRR1");
function getCEID(cnum) {
return WL.Server.invokeSQLStatement({
preparedStatement : test2,
parameters : []
});
}
And adapter XML:
<connectivity>
<connectionPolicy xsi:type="sql:SQLConnectionPolicy">
<!-- Example for using a JNDI data source, replace with actual data source
name -->
<!-- <dataSourceJNDIName>${training-jndi-name}</dataSourceJNDIName> -->
<!-- Example for using MySQL connector, do not forget to put the MySQL
connector library in the project's lib folder -->
<dataSourceDefinition>
<driverClass>com.ibm.db2.jcc.DB2Driver</driverClass>
<url>jdbc:db2://***</url>
<user>**</user>
<password>**</password>
</dataSourceDefinition>
</connectionPolicy>
</connectivity>
I have removed the url, user and password.
I hope you can help me clarify the current problem.
I already know that the SQL is not being accepted, even though it's just a simple query.
I have also found reports of z/OS DB2 having an issue with the same error code, SQLCODE=-142: http://answers.splunk.com/answers/117024/splunk-db-connect-db2.html
While you say that this is a "simple query", the exception error code mentions the following:
-142
THE SQL STATEMENT IS NOT SUPPORTED
Explanation
An SQL statement was detected that is not supported by the database.
The statement might be valid for other IBM® relational database
products or it might be valid in another context. For example,
statements such as VALUES and SIGNAL or RESIGNAL SQLSTATE can be used
only in certain contexts, such as in a trigger body or in an SQL
Procedure.
System action
The statement cannot be processed.
Programmer response
Change the syntax of the SQL statement or remove the statement from
the program.
You should review the DB2 SQL guidelines for how to achieve what you want to achieve, and also explain that in the question if you'd like further assistance. For example, are you sure "WSDIWC.WBPTRR1" is actually available?
I encountered this same problem with JDBC connections to mainframe DB2 in MobileFirst 6.3. Connections to DB2 LUW worked fine. It appears that the default pool validationQuery is valid for DB2 LUW but not for DB2 z/OS.
You can work around the bug by doing the data source configuration in the Liberty profile server.xml. From the Eclipse Servers view, expand MobileFirst Development Server and edit the Server Configuration. Add the driver and data source there; for example:
<library id="db2jcc">
<fileset dir="whereever" includes="db2jcc4.jar db2jcc_license_cisuz.jar"/>
</library>
<dataSource id="db2" jndiName="jdbc/db2">
<jdbcDriver libraryRef="db2jcc"/>
<properties.db2.jcc databaseName="mydb" portNumber="5021"
serverName="myserver" user="myuser" password="mypw" />
</dataSource>
Then reference it in your adapter XML under connectionPolicy:
<dataSourceJNDIName>jdbc/db2</dataSourceJNDIName>
A benefit of configuring data sources in server.xml (vs the adapter XML) is that you have access to all data source, JDBC, and JCC properties. So if the connection pool gives you other problems, you can customize it or switch to another data source type, such as type="javax.sql.DataSource".

iReport Designer 4.5.1/4.6.0 cannot interact with Hive

I have followed the instructions from here and installed the updated plugin. The error has become:
Query error
Message: net.sf.jasperreports.engine.JRException:
Error executing SQL statement for : null Level: SEVERE Stack Trace:
Error executing SQL statement for : null com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:113)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:154)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
After downgrading to 4.5.0 the error has become (the connection is verified and I am able to query the table from hive):
Query error
Message: net.sf.jasperreports.engine.JRException: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats' Level:
SEVERE Stack Trace: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats'
com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:260)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:146)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
I am using Hive 0.8.1 on OS X Lion 10.7.4.
Is your query as simple as select * from panstats? I suspect that the query is not the problem, but you'll want to confirm that first.
You could try querying that table from a tool like SQuirreL SQL. If that tool also cannot get the data, then it's probably a Hive issue. If it can... then it's probably an issue with iReport or the Hive plugin.
It sounds like Hive is not configured to share metadata. It uses the annoying default configuration with Derby, so outside connections don't get access to your panstats table. I came across this article about configuring Hive earlier this year; it documents using MySQL instead of Derby. If that's indeed the problem, then it's just a Hive configuration issue, and following that article would solve things both for SQuirreL and for iReport.