Steps:
We have created a Kafka topic called pgsqlcountry which carries all the
streaming data from the PostgreSQL DB.
We created a stream called country for processing the topic into a
table.
The stream was created successfully.
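For reference, the stream was created with a statement along these lines (a sketch: the columns match the DESCRIBE output below, but VALUE_FORMAT is an assumption):
CREATE STREAM country (id BIGINT, country VARCHAR, created_at BIGINT, updated_at BIGINT)
  WITH (KAFKA_TOPIC='pgsqlcountry', VALUE_FORMAT='AVRO');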
-
ksql> describe country;
Field | Type
------------------------------
ROWTIME | BIGINT
ROWKEY | VARCHAR(STRING)
ID | BIGINT
COUNTRY | VARCHAR(STRING)
CREATED_AT | BIGINT
UPDATED_AT | BIGINT
-
When we run the SQL command "select * from country", we get the error below:
-
ksql> select * from country;
null | null | null | null | null | null
Exception in thread "ksql_query_1-8f1f36a7-e83c-476d-8561-98fe9ed8866b-StreamThread-2" java.lang.NullPointerException
Please find my stacktrace in this screenshot
When I had a java.lang.NullPointerException like this, I had incompatible versions of the ksqlDB images.
There are images with a cp prefix, e.g.:
cp-ksqldb-server
cp-ksqldb-cli
and images without it:
ksqldb-server
ksqldb-cli
Don't mix and match across these two groups; try to stick with one group or the other.
You can run some other commands (e.g. SHOW PROPERTIES;) and see if they also give the same java.lang.NullPointerException. If they do, the problem is probably the images.
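For example, a minimal docker-compose sketch that keeps both images in the same family and pinned to the same tag (the tag shown is only illustrative):
# Illustrative excerpt: both ksqlDB images come from the same
# family (confluentinc/ksqldb-*) and use the same version tag.
services:
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.28.2
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.28.2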
I'm new to Flyway and have been going through its documentation, but I couldn't find a page that describes what each column in schema_version_history (or whatever you have configured as the name of the Flyway history table) means. I'm specifically intrigued by the column named "type". So far, the only values for this column that I've observed in a legacy project at work are SQL and DELETE, but I have no clue what these mean in terms of Flyway migrations.
Below are some sample rows from the table. Note that installed_rank 54 and 56 reference the same migration file with the same checksum, but one has type SQL and the other DELETE.
-[ RECORD 53 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 54
version | 2022.11.18.11.35.49.65
description | add column seqence in attribute table
type | SQL
script | V2022_11_18_11_35_49_65__add_column_seqence_in_attribute_table.sql
checksum | 408921517
installed_by | postgres
installed_on | 2022-11-18 12:04:47.652058
execution_time | 345
success | t
-[ RECORD 54 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 55
version | 2022.11.15.14.17.44.36
description | update address column in attribute table
type | DELETE
script | V2022_11_15_14_17_44_36__update_address_column_in_attribute_table.sql
checksum | 1347853326
installed_by | postgres
installed_on | 2022-11-18 14:52:09.265902
execution_time | 0
success | t
-[ RECORD 55 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 56
version | 2022.11.18.11.35.49.65
description | add column seqence in attribute table
type | DELETE
script | V2022_11_18_11_35_49_65__add_column_seqence_in_attribute_table.sql
checksum | 408921517
installed_by | postgres
installed_on | 2022-11-18 14:52:09.265902
execution_time | 0
success | t
-[ RECORD 56 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 58
version | 2022.11.18.11.35.49.65
description | add column seqence in attribute table
type | SQL
script | V2022_11_18_11_35_49_65__add_column_seqence_in_attribute_table.sql
checksum | 408921517
installed_by | postgres
installed_on | 2022-12-09 14:01:59.352589
execution_time | 174
success | t
Great question. This is as close as I got to documentation on that table:
https://www.red-gate.com/hub/product-learning/flyway/exploring-the-flyway-schema-history-table
That article doesn't really describe the type column well at all; it suggests the column only has two possible values, and I've seen at least three: DELETE, SQL, and JDBC. I'm not sure what else it may hold.
EDIT: I have now also confirmed two more values: BASELINE and UNDO_SQL.
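To see which values actually occur in your own installation, you can query the history table directly; a quick sketch, assuming the table is named schema_version_history as in the question:
select type, count(*)
from schema_version_history
group by type
order by count(*) desc;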
It's actually marked as intentionally not documented since it's not a part of the public API:
https://flywaydb.org/documentation/learnmore/faq#case-sensitive
We recently upgraded the OS:
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
After upgrading, we are facing a lot of issues with GitLab (predominantly with Postgres).
Our GitLab is dockerized, i.e. GitLab (and all its internal services, including PostgreSQL) runs inside a single container. The container does not have its own glibc, so it is using the one from the OS.
ERROR: canceling statement due to statement timeout
STATEMENT:
SELECT relnamespace::regnamespace as schemaname,
relname as relname,
pg_total_relation_size(oid) bytes FROM pg_class WHERE relkind = 'r';
The timeout messages appear continuously and this results in users facing 502 errors when accessing GitLab.
I checked the statement timeout set on the database.
gitlabhq_production=# show statement_timeout;
statement_timeout
-------------------
1min
(1 row)
I don't know what to make of this. This is probably the default setting. Is this an issue with postgres? What does this mean? Anything I can do to fix this?
EDIT:
Checked pg_stat_activity and don't see any locks as the server was rebooted earlier. The same query is running fine now but we keep seeing this issue intermittently.
Ran \d pg_class to check whether the table uses any indexes and also to check the string column.
gitlabhq_production=# \d pg_class
Table "pg_catalog.pg_class"
Column | Type | Modifiers
---------------------+-----------+-----------
relname | name | not null
relnamespace | oid | not null
reltype | oid | not null
reloftype | oid | not null
relowner | oid | not null
relam | oid | not null
relfilenode | oid | not null
reltablespace | oid | not null
relpages | integer | not null
reltuples | real | not null
relallvisible | integer | not null
reltoastrelid | oid | not null
relhasindex | boolean | not null
relisshared | boolean | not null
relpersistence | "char" | not null
relkind | "char" | not null
relnatts | smallint | not null
relchecks | smallint | not null
relhasoids | boolean | not null
relhaspkey | boolean | not null
relhasrules | boolean | not null
relhastriggers | boolean | not null
relhassubclass | boolean | not null
relrowsecurity | boolean | not null
relforcerowsecurity | boolean | not null
relispopulated | boolean | not null
relreplident | "char" | not null
relfrozenxid | xid | not null
relminmxid | xid | not null
relacl | aclitem[] |
reloptions | text[] |
Indexes:
"pg_class_oid_index" UNIQUE, btree (oid)
"pg_class_relname_nsp_index" UNIQUE, btree (relname, relnamespace)
"pg_class_tblspc_relfilenode_index" btree (reltablespace, relfilenode)
Would reindexing all tables and possibly alter tables help?
You should check whether the query is actually running for a minute or whether it is blocked behind a database lock. This can be seen from the pg_stat_activity row for the backend: state = 'active' together with wait_event_type and wait_event being set indicates that the query is waiting for a lock.
If it is a lock, get rid of the locking transaction. It may be a prepared transaction, so check for these too.
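A sketch of both checks (wait_event_type and wait_event are available from PostgreSQL 9.6 on):
-- sessions that are currently waiting on a lock
select pid, state, wait_event_type, wait_event, query
from pg_stat_activity
where state = 'active' and wait_event_type = 'Lock';

-- prepared transactions that may still hold locks
select * from pg_prepared_xacts;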
If there is no lock at fault, it could be that your indexes have become corrupted by the operating system upgrade:
Since PostgreSQL uses operating system collations, database indexes on strings are sorted in collation order, and an operating system upgrade can (and often does) change collations due to bug fixes in the C library. You should therefore rebuild all indexes on string columns after such an upgrade.
The statement that you are showing does not use an index scan, so it should not be affected, but other statements may be.
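If corrupted indexes turn out to be the cause, the simplest recovery is to rebuild them all; a sketch (REINDEX takes locks, so schedule a maintenance window):
reindex database gitlabhq_production;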
Also, if you are using Docker, it may be that your container uses its own glibc that was not upgraded, and then you are not affected.
I am trying to import a CSV file into a Postgres table, which I can do successfully using COPY FROM:
import.sql
\copy myTable FROM '..\CSV_OUTPUT.csv' DELIMITER ',' CSV HEADER;
But that command only adds rows that are not already in the database; otherwise it exits with an error: Key (id)=(#) already exists.
myTable
id | alias | address
------+-------------+---------------
11 | red_foo | 10.1.1.11
12 | blue_foo | 10.1.1.12
CSV_OUTPUT.csv
id | alias | address
------+-------------+---------------
10 | black_foo | 10.1.1.11
12 | blue_foo | 10.1.1.12
13 | grey_foo | 10.1.1.13
14 | pink_foo | 10.1.1.14
My desired output is to insert the rows from the CSV file into PostgreSQL only if the address does not already exist. After the import, myTable should also contain grey_foo and pink_foo, but not black_foo, since its address already exists.
What should be the right queries to use in order to achieve this? Your suggestions and ideas are highly appreciated.
Copy the data into a staging table first, and then insert into your main table (myTable) only the rows whose address does not already exist.
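A sketch of the staging step (an assumption: staging mirrors myTable's columns and carries no constraints; truncate it between imports):
-- staging has the same columns as myTable but no keys or constraints
create table staging (like mytable);
\copy staging FROM '..\CSV_OUTPUT.csv' DELIMITER ',' CSV HEADER;
Then insert only the rows whose address is not present yet: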
with nw as (
select s.id, s.alias, s.address
from staging as s
left join mytable as m on m.address=s.address
where m.address is null
)
insert into mytable
(id, alias, address)
select id, alias, address
from nw;
If you can upgrade to Postgres 9.5, you could instead use an INSERT command with the ON CONFLICT DO NOTHING clause.
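A sketch of that variant (an assumption here: address has a unique constraint, which ON CONFLICT needs in order to detect the duplicate):
insert into mytable (id, alias, address)
select id, alias, address
from staging
on conflict (address) do nothing;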
There are a few ways to store report output in JasperReports Server: FS, FTP, and Repository. The Repository output is the default one. I guess the files in the repository must be stored in the DB or the file system. Are the files kept forever? How can I manage the repository and, for example, set a file's lifetime?
The repository outputs are stored in the database. Usually there is no need to set the lifetime.
As of JasperReports Server v6.3.0, the reference to all resources is kept in the jiresource table, while their content is kept in jicontentresource.
In my case I was able to retrieve all output reports with:
select r.id,r.name,r.creation_date
from jiresource r, jicontentresource c
where r.id = c.id;
The definition of jicontentresource is
jasperserver=# \d+ jicontentresource
  Column   |         Type          | Modifiers | Storage  | Stats target | Description
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | bigint                | not null  | plain    |              |
 data      | bytea                 |           | extended |              |
 file_type | character varying(20) |           | extended |              |
I am using OrientDB Community Edition 1.7.9 on Mac OS X.
Database Info:
DISTRIBUTED CONFIGURATION: none (OrientDB is running in standalone
mode)
DATABASE PROPERTIES
NAME | VALUE|
Name | null |
Version | 9 |
Date format | yyyy-MM-dd |
Datetime format | yyyy-MM-dd HH:mm:ss |
Timezone | Asia/xxxx |
Locale Country | US |
Locale Language | en |
Charset | UTF-8 |
Schema RID | #0:1 |
Index Manager RID | #0:2 |
Dictionary RID | null |
Command flow:
create cluster xyz physical default default append
alter class me add cluster xyz
move vertex #1:2 to cluster:xyz
The Studio UI throws the following error:
2014-10-22 14:59:33:043 SEVE Internal server error:
com.orientechnologies.orient.core.command.OCommandExecutorNotFoundException:
Cannot find a command executor for the command request: sql.MOVE
VERTEX #1:2 TO CLUSTER:xyz [ONetworkProtocolHttpDb]
The console returns a record, just as a select does. I do not see any error in the log.
I am planning a critical feature that relies on moving selected records to another cluster.
Could anyone help in this regard?
Thanks in advance.
Cheers
The MOVE VERTEX command is not supported in 1.7.x;
you have to switch to 2.0-M2.
The OrientDB Console is a Java Application made to work against OrientDB databases and Server instances.