I have two scripts: one creates a table, the other fills it in. They look like this:
databaseChangeLog:
  - changeSet:
      id: "0"
      author: author
      changes:
        - createTable:
            tableName: boards
            columns:
              - column:
                  constraints:
                    nullable: false
                    primaryKey: true
                    primaryKeyName: board_id
                  name: id
                  type: integer
              - column:
                  constraints:
                    nullable: false
                  name: engines
                  type: varchar(45)
              # ...more columns
databaseChangeLog:
  - changeSet:
      id: board_table_fill
      author: 777
      changes:
        - insert:
            tableName: boards
            columns:
              - column:
                  name: id
                  value: 777
              - column:
                  name: engines
                  value: stock
              - column:
                  name: markets
                  value: index
        # ...more changes
I need to run both scripts; how do I do that? These are my application properties:
spring:
  application:
    name: 777
  datasource:
    driverClassName: org.postgresql.Driver
    username: 777
    password: 777
    url: 777
  jpa:
    hibernate:
      ddl-auto: validate
  liquibase:
    change-log: "classpath:db/changelog/db.changelog-777.yml"
In my case only the script that creates the table runs, but I need both. I'm new to Liquibase, sorry if this is a basic question.
Answer provided by CT Liv in comments:
You need to create a master changelog that includes the other two.
See here: https://docs.liquibase.com/concepts/changelogs/attributes/include.html
The example is in XML but the YAML version is straightforward.
Here is an example: https://github.com/thombergs/code-examples/blob/master/spring-boot/data-migration/liquibase/src/main/resources/db/changelog/db.changelog-master.yaml
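For reference, a minimal YAML master changelog along the lines of that example might look like this (the two file names below are assumptions standing in for the actual changelog files from the question):

```yaml
databaseChangeLog:
  # Liquibase runs the included changelogs in the order they are listed,
  # so the table is created before it is filled.
  - include:
      file: db/changelog/db.changelog-create-boards.yml
  - include:
      file: db/changelog/db.changelog-fill-boards.yml
```

Then point spring.liquibase.change-log at this master file instead of at one of the two scripts.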
Related
I have a Spring Boot app with Postgres which I want to run with Docker; the user and database should be created automatically from environment variables.
docker-compose.yml
version: '3'
services:
  microservice:
    build: ./
    image: someimage
    ports:
      - "8080:8080"
    environment:
      MEMORY_OPTS: "-Xmx512M"
      PG_DB_HOST: postgres
      PG_DB_PORT: 5432
      PG_DB_PASSWORD: postgres
    depends_on:
      - postgres
  postgres:
    image: postgres:10.4-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=db
    ports:
      - 5432:5432
application.yml
spring:
  config:
    activate:
      on-profile: local
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
    show-sql: true
  datasource:
    url: jdbc:postgresql://postgres:5432/db
    username: postgres
    password: postgres
  liquibase:
    change-log: classpath:/db/changelog/db.changelog-master.yaml
db/changelog/db.changelog-master.yaml
databaseChangeLog:
  - logicalFilePath: db/changelog/db.changelog-master.yaml
  - preConditions:
      - runningAs:
          username: postgres
  - changeSet:
      id: 1
      author: postgres
      changes:
        - createTable:
            tableName: employees
            columns:
              - column:
                  name: EMPLOYEE_ID
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: FIRST_NAME
                  type: varchar(50)
              - column:
                  name: LAST_NAME
                  type: varchar(50)
                  constraints:
                    nullable: false
              - column:
                  name: LOCATION
                  type: varchar(50)
And when I run docker compose up, I always get

Caused by: org.postgresql.util.PSQLException: FATAL: role "postgres" does not exist

or

Caused by: org.postgresql.util.PSQLException: FATAL: database "db" does not exist

If I create the user and database manually it works, but as you understand, I need this to happen automatically; I suspect a problem with the environment variables.
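One common cause of exactly these errors (an assumption on my part, since the compose file itself looks correct) is that the postgres image only creates the user and database from the POSTGRES_* variables when its data directory is empty; a volume left over from a run before those variables were set keeps serving the old users and databases. Resetting the stack's volumes forces re-initialization:

```shell
# Remove the containers AND their volumes so the postgres entrypoint
# re-runs its init step with the current POSTGRES_* variables.
docker compose down -v
docker compose up --build
```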
I'm creating a Glue database, a Glue table with a schema, and a Glue crawler using CFT; please find my code below. In the Glue crawler I would like to reference the Glue table "myTestTable" and its schema, so that when any schema update happens (adding or removing a field) the crawler automatically updates the table with the change.

How can I achieve this using CFT? It would be really appreciated if someone could help me resolve this issue.
GlueDatabase:
  Type: AWS::Glue::Database
  Properties:
    CatalogId: !Ref AWS::AccountId
    DatabaseInput:
      Name: myTestGlueDB

GlueTable:
  Type: AWS::Glue::Table
  Properties:
    CatalogId: !Ref AWS::AccountId
    DatabaseName: !Ref GlueDatabase
    TableInput:
      Parameters:
        classification: parquet
      Name: myTestTable
      Owner: owner
      Retention: 0
      StorageDescriptor:
        Columns:
          - Name: productName
            Type: string
          - Name: productId
            Type: string
        InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
        OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
        Compressed: false
        NumberOfBuckets: -1
        SerdeInfo:
          SerializationLibrary: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
          Parameters:
            serialization.format: '1'
        Location: s3://test-bucket
        BucketColumns: []
        SortColumns: []
        StoredAsSubDirectories: false
      PartitionKeys:
        - Name: id
          Type: string
        - Name: year
          Type: string
        - Name: month
          Type: string
        - Name: day
          Type: string
      TableType: EXTERNAL_TABLE

GlueCrawler:
  Type: AWS::Glue::Crawler
  Properties:
    Name: myTestCrawler
    Role: GlueCrawlerRole
    DatabaseName: myTestGlueDB
    TablePrefix: myTable-
    Targets:
      S3Targets:
        - Path: s3://test-bucket
    SchemaChangePolicy:
      UpdateBehavior: UPDATE_IN_DATABASE
      DeleteBehavior: LOG
    Configuration: "{\"Version\":1.0,\"Grouping\":{\"TableGroupingPolicy\":\"CombineCompatibleSchemas\"}}"
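One option worth sketching (an assumption, not confirmed by the source): AWS::Glue::Crawler accepts CatalogTargets, which point the crawler at an existing Data Catalog table instead of an S3 path, so schema changes the crawler detects are applied to that table rather than to a newly created one. A hedged sketch reusing the resources above:

```yaml
GlueCrawler:
  Type: AWS::Glue::Crawler
  Properties:
    Name: myTestCrawler
    Role: GlueCrawlerRole
    Targets:
      # Crawl the existing catalog table so detected schema changes
      # update myTestTable in place instead of creating a new table.
      CatalogTargets:
        - DatabaseName: !Ref GlueDatabase
          Tables:
            - !Ref GlueTable
    SchemaChangePolicy:
      UpdateBehavior: UPDATE_IN_DATABASE
      # Catalog-target crawlers generally require a LOG delete behavior.
      DeleteBehavior: LOG
```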
In the Grafana Helm chart I'm trying to add notifiers and I get an error. The notifiers config is below:
notifiers: {}
- name: email-notifier
  type: email
  uid: email1
  # either:
  org_id: 1
  # or
  org_name: Main Org.
  is_default: true
  settings:
    addresses: an_email_address@example.com
The critical part that was missing was the notifiers.yaml: / notifiers: nesting inside the top-level notifiers: key (which I had left as notifiers: {}):

notifiers:
  notifiers.yaml:
    notifiers:
      - name: email-notifier
        type: email
        uid: email1
        # either:
        org_id: 1
        # or
        org_name: Main Org.
        is_default: true
        settings:
          addresses: an_email_address@example.com
I've got an OpenAPI schema (edited it to be a minimal working example):
---
openapi: 3.0.0
info:
  title: Players API
  version: 0.0.1-alpha1
paths:
  /players/{id}:
    get:
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                allOf:
                  - $ref: '#/components/schemas/Player'
                  - type: object
                    required:
                      - id
                    properties:
                      spec:
                        type: object
                        required:
                          - display_name
components:
  schemas:
    Player:
      type: object
      properties:
        spec:
          $ref: '#/components/schemas/PlayerSpec'
      additionalProperties: false
    PlayerSpec:
      type: object
      properties:
        display_name:
          type: string
          description: The name of the player.
          example: LeBron
        environment:
          allOf:
            - $ref: '#/components/schemas/PlayerReference'
            - required:
                - related
          description: The environment to which the player belongs.
      additionalProperties: false
    PlayerReference:
      type: object
      required:
        - id
      properties:
        id:
          type: string
          example: 'lebron-23'
and after I run redoc-cli bundle example.yaml to generate the docs, I can see that id: lebron-23 is there, i.e., the docs look as expected.

The problem is that in order to make it work I had to add example: 'lebron-23' to the definition of the generic PlayerReference component, but I'd rather move this example: 'lebron-23' line to this section instead:
environment:
  allOf:
    - $ref: '#/components/schemas/PlayerReference'
    - required:
        - related
  description: The environment to which the player belongs.
  # <-------- add id.example here or something
Since environment is an object, an object-level example would look like this:
environment:
  allOf:
    - $ref: '#/components/schemas/PlayerReference'
    - required:
        - related
  description: The environment to which the player belongs.
  example:
    id: lebron-23
This example will probably override any examples from allOf subschemas (rather than merge with them), so make sure to include all property values you want to see in this example.
I am trying to create a Swagger doc for a POST meetings call. I am unsure how to add body fields (like meeting.name, meeting.time, meeting.duration, etc.) under parameters. I get the error "allowedValues: path, query, header, cookie" in the parameters and am unsure which one to pick. I have done this previously by writing "- in: body", but body doesn't seem to be an option here.
openapi: '3.0.0'
info:
  title: WebcastCreateMeeting
  version: "1.1"
servers:
  - url: https://api.webcasts.com/api
paths:
  '/event/create/{event_title}/{folder_id}/{type}/{scheduled_start_time}/{scheduled_duration}/{timezone}/{region}/{acquisition_type}/{audience_capacity}/{event_expiration_months}/{token}':
    post:
      tags:
        - CreateMeetingCallbody
      summary: EventGM
      parameters:
        - in: path
          name: token
          description: the auth token
          required: true
          schema:
            type: string
            example: 123j1lkj3lk1j3i1j23l1kj3k1j2l3j12l3j1l2
        - in: path
          name: event_title
          description: Name of the event from Cvent
          required: true
          schema:
            type: string
        - in: body
          name: user
          description: this is a test user
          schema:
            type: object
            required:
              - username
            properties:
              username:
                type: string
        - in: path
          name: folder_id
          description: ID of the folder under which the Meeting is to be added
          required: true
          schema:
            type: string
        - in: path
          name: type
          description: Type of the Meeting
          required: true
          schema:
            type: string
        - in: path
          name: scheduled_start_time
          description: Start time of
          required: true
          schema:
            type: string
            format: date-time
        - in: path
          name: scheduled_duration
          description: Duration
          required: true
          schema:
            type: integer
            example: 60
        - in: path
          name: timezone
          description: TimeZone of the event LU table
          required: true
          schema:
            type: string
        - in: path
          name: region
          description: Region from Zoom
          required: true
          schema:
            type: integer
            example: 1
        - in: path
          name: acquisition_type
          description: To be added from GM
          required: true
          schema:
            type: integer
            example: 0
        - in: path
          name: audience_capacity
          description: To be added for capacity
          required: true
          schema:
            type: integer
        - in: path
          name: event_expiration_months
          description: the month it expires on.
          required: true
          schema:
            type: integer
            example: 3
      responses:
        200:
          description: This would be the response.
          content:
            application/json;charset=utf-8:
              schema:
                type: array
                items:
                  properties:
                    scheduled_duration:
                      type: integer
                      example: 30
                    event_id:
                      type: integer
                      example: 0000000
                    audience_capacity:
                      type: integer
                      example: 30
                    folder_name:
                      type: string
                      example: Folder_Name
                    viewer_link:
                      type: string
                      example: https://viewer_link
                    scheduled_start_time:
                      type: string
                      format: date-time
                      example: 1541674800000
                    scheduled_player_close_time:
                      type: integer
                      example: 10
                    event_title:
                      type: string
Kindly advise on how to describe the body fields I am supposed to pass in the code.
Thanks
Got it, it's passed separately in "requestBody".

Thanks.
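To spell that out: in OpenAPI 3.0 the body is described in a requestBody section of the operation rather than as an "- in: body" parameter. A minimal sketch for the user object from the question (the JSON media type is an assumption):

```yaml
post:
  requestBody:
    description: this is a test user
    required: true
    content:
      application/json:
        schema:
          type: object
          required:
            - username
          properties:
            username:
              type: string
```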