Create table and alter table dynamically in Scala Slick

How do I create and alter tables dynamically in Scala Slick?
Is the only way to use plain SQL queries?

Do you mean the Slick table objects or the schema in the database? In case of the former, you can call .column on your table object at any time to request a column dynamically. In case of the latter, Slick allows you to create new schemas. For manipulating existing schemas, use plain SQL or Play evolutions. More info here: Play!: Does Slick's DDL replace Evolutions?
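For instance, a minimal sketch of both points, assuming Slick 3.x; the users table, its columns, and the ALTER TABLE statement are illustrative assumptions, not part of the original answer:

import slick.jdbc.PostgresProfile.api._

// Hypothetical table definition:
class Users(tag: Tag) extends Table[(Int, String)](tag, "users") {
  def id   = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name)
}
val users = TableQuery[Users]

// Request a column dynamically by name (it must exist on the table):
val dynamicNames = users.map(_.column[String]("name"))

// Manipulate an existing schema with plain SQL via the sqlu interpolator:
val addAgeColumn = sqlu"ALTER TABLE users ADD COLUMN age INT"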

Related

Postgres - Find only by UUID

I've got a PostgreSQL DB with multiple schemas and tables in those schemas. Every row in a table has a primary UUID, like "Ref_Key" => "41bf3b1e-91f0-491c-a6bd-c48a17e7c252".
Is it possible to find a row only by its UUID, without specifying the schema and table?
No, that is not possible. You can only query tables that explicitly appear in the FROM clause of a SELECT statement.

Can Slick automatically create tables in the database (generate SQL and execute) from the models?

I understand that slick-codegen can generate Scala classes from database tables. Can we do the opposite and create tables from the models if they don't exist in the database?
You can create tables in Slick from a model; it's not related to the codegen tool, though.
When you define a model in Slick, you can use the .schema method to generate the database schema commands. There are examples of this in the Slick manual:
// Assuming we have coffees and suppliers queries, we combine the schemas:
val schema = coffees.schema ++ suppliers.schema
// Now we can run a variety of commands to CREATE TABLE etc.:
db.run(DBIO.seq(
  schema.create,
  schema.createIfNotExists,
  schema.drop,
  schema.dropIfExists
))
However, that's not automatic: you'd need to write something in your start-up code to decide whether or not to run the DDL commands.
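For instance, a minimal start-up sketch, assuming Slick 3.3+ (where createIfNotExists was introduced), the schema value from above, and an assumed config path "mydb":

import scala.concurrent.Await
import scala.concurrent.duration._
import slick.jdbc.PostgresProfile.api._

val db = Database.forConfig("mydb") // assumed config path
// createIfNotExists only creates missing tables, so it is safe on every start-up:
Await.result(db.run(schema.createIfNotExists), 10.seconds)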

How to upsert/delete DB2 source table data using PySpark/SQL/DataFrames/Spark RDDs?

I'm trying to upsert/delete some of the values in a DB2 source table, which is an existing table on DB2. Is this possible using PySpark / Spark SQL / DataFrames?
There is no direct way to update/delete rows in a relational database from a PySpark job, but there are workarounds.
(1) You can create an identical empty table (a secondary table) in the relational database, insert the data into the secondary table from your PySpark job, and write a DML trigger that performs the desired DML operation on your primary table.
(2) You can create a DataFrame (e.g. a) in Spark that is a copy of your existing relational table, merge it with the DataFrame holding the current changes (e.g. b), and produce a new DataFrame (e.g. c) containing the latest state. Then truncate the relational table and reload it from the merged DataFrame (c), as sketched below.
These are just workarounds, not an optimal solution for huge amounts of data.
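A rough sketch of workaround (2) using Spark's Scala API (the DB2 JDBC URL, credentials, table name, key column id, and the example change set are all assumptions; the same calls exist in PySpark):

import org.apache.spark.sql.{SparkSession, SaveMode}

val spark = SparkSession.builder.appName("db2-merge").getOrCreate()
import spark.implicits._

val jdbcUrl = "jdbc:db2://host:50000/MYDB"   // assumed connection details
val props = new java.util.Properties()
props.setProperty("user", "db2user")
props.setProperty("password", "secret")

// (a) copy of the existing relational table
val existing = spark.read.jdbc(jdbcUrl, "MYSCHEMA.MYTABLE", props)
// (b) the incoming changes; columns must match the target table
val changes = Seq((1, "updated value")).toDF("id", "value")
// (c) latest state: changed rows plus the untouched existing rows
val latest = changes.union(existing.join(changes, Seq("id"), "left_anti"))
// Truncate and reload the table with the merged result:
latest.write
  .mode(SaveMode.Overwrite)
  .option("truncate", "true")
  .jdbc(jdbcUrl, "MYSCHEMA.MYTABLE", props)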

Is there any benefit to using indexes or foreign keys in a Slick table schema?

Should I use indexes and foreign keys in a Slick table schema? Are there any benefits, such as performance or query-planner improvements?
We are using the Flyway DB migration tool, so we won't use this schema with schema.create.
Foreign keys in a Slick table schema would help you in two cases.
The first is from the Slick docs (illustrated in the sketch below):
...foreign key can be used to navigate to the referenced data with a join. For this purpose, it behaves the same as a manually defined utility method for finding the joined data ...
The second is if you generate the DB schema using Slick (for example, in tests).
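A minimal sketch of the first case, loosely following the manual's coffees/suppliers example; the exact column names are assumptions:

import slick.jdbc.PostgresProfile.api._

class Suppliers(tag: Tag) extends Table[(Int, String)](tag, "SUPPLIERS") {
  def id   = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def *    = (id, name)
}
val suppliers = TableQuery[Suppliers]

class Coffees(tag: Tag) extends Table[(String, Int)](tag, "COFFEES") {
  def name  = column[String]("COF_NAME", O.PrimaryKey)
  def supID = column[Int]("SUP_ID")
  def *     = (name, supID)
  // The foreign key doubles as a utility method for joins:
  def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id)
}
val coffees = TableQuery[Coffees]

// Navigate to the referenced data without spelling out the join condition:
val coffeesWithSuppliers = for {
  c <- coffees
  s <- c.supplier
} yield (c.name, s.name)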
Setting up indexes speeds up your searching (data retrieval) but slows down insertion, so you need to decide according to your requirements: if there is more searching and the data in the DB is huge, you should go for indexing (declaring one in Slick is sketched below).
Foreign keys, on the other hand, are used to maintain the relationships between tables that are joined in a relational DB. Adding them enforces referential integrity; it does not by itself speed up queries.
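For completeness, this is how an index is declared in a Slick table definition (a hypothetical accounts table); note it only becomes a CREATE INDEX statement if Slick generates the schema, which the question rules out in favour of Flyway:

import slick.jdbc.PostgresProfile.api._

class Accounts(tag: Tag) extends Table[(Int, String)](tag, "accounts") {
  def id    = column[Int]("id", O.PrimaryKey)
  def email = column[String]("email")
  def *     = (id, email)
  // Declared here, but only emitted as CREATE INDEX if Slick creates the schema:
  def emailIdx = index("idx_accounts_email", email, unique = true)
}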
You can get more insight on indexing here: Indexing, and on foreign keys here: Foreign Key.

Create a new table with existing style from layer_styles table in postgres

Is there any way to create a PostGIS table with an existing style from the layer_styles table? Say, for example, I have many styles stored in the layer_styles table. I need to assign one of the styles from layer_styles to the new table I am going to create. Can that be done in PostgreSQL, with an SQL command, during table creation?
You need to identify the layer IDs of interest (or the name, or any column you want) and create the new table using this data and structure. However, using the styles in this secondary table may not be that easy:
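-- Copy structure and data for the styles of interest (IDs 2 and 3 are examples):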
create table public.layer_styles_2 as
(select * from public.layer_styles where id in (2,3));