How does JasperReports Server store report output internally?

There are a few ways to store report output in JasperReports Server: the file system (FS), FTP, and the repository. Repository output is the default. I assume the files in the repository must be stored either in the database or on the file system. Are the files kept forever? How can I manage the repository and, for example, set a file's lifetime?

The repository outputs are stored in the database. Usually there is no need to set the lifetime.

As of JasperReports Server v6.3.0, the reference to every resource is kept in the jiresource table, while the content itself is kept in jicontentresource.
In my case I was able to retrieve all output reports with:
select r.id,r.name,r.creation_date
from jiresource r, jicontentresource c
where r.id = c.id;
The definition of jicontentresource is:
jasperserver=# \d+ jicontentresource
  Column   |          Type          | Modifiers | Storage  | Stats target | Description
-----------+------------------------+-----------+----------+--------------+-------------
 id        | bigint                 | not null  | plain    |              |
 data      | bytea                  |           | extended |              |
 file_type | character varying(20)  |           | extended |              |
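If you ever need to reclaim space or apply your own retention policy, you can first see how much each stored output occupies. A minimal sketch against the same two tables (assuming a PostgreSQL repository database, as above; the 90-day cutoff is only an example):

select r.id, r.name, r.creation_date, length(c.data) as size_bytes
from jiresource r
join jicontentresource c on r.id = c.id
where r.creation_date < now() - interval '90 days'
order by size_bytes desc;

Deleting old outputs is safer done through the repository UI or REST API than by removing rows directly, so that the related jiresource metadata stays consistent.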


What do the various columns in schema_version_history table of flyway represent?

I'm new to Flyway and have been going through its documentation, but I couldn't find anything that describes what each column in schema_version_history (or whatever you have configured as the name of the Flyway table) means. I'm specifically intrigued by the column named "type". So far, the possible values I've observed for this column in a legacy project at work are SQL and DELETE, but I have no clue what they mean in terms of Flyway migrations.
Below are some sample rows from the table. Note that installed_rank 54 and 56 reference the same migration file with the same checksum, but one has type SQL and the other DELETE.
-[ RECORD 53 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 54
version | 2022.11.18.11.35.49.65
description | add column seqence in attribute table
type | SQL
script | V2022_11_18_11_35_49_65__add_column_seqence_in_attribute_table.sql
checksum | 408921517
installed_by | postgres
installed_on | 2022-11-18 12:04:47.652058
execution_time | 345
success | t
-[ RECORD 54 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 55
version | 2022.11.15.14.17.44.36
description | update address column in attribute table
type | DELETE
script | V2022_11_15_14_17_44_36__update_address_column_in_attribute_table.sql
checksum | 1347853326
installed_by | postgres
installed_on | 2022-11-18 14:52:09.265902
execution_time | 0
success | t
-[ RECORD 55 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 56
version | 2022.11.18.11.35.49.65
description | add column seqence in attribute table
type | DELETE
script | V2022_11_18_11_35_49_65__add_column_seqence_in_attribute_table.sql
checksum | 408921517
installed_by | postgres
installed_on | 2022-11-18 14:52:09.265902
execution_time | 0
success | t
-[ RECORD 56 ]-+---------------------------------------------------------------------------------------------------
installed_rank | 58
version | 2022.11.18.11.35.49.65
description | add column seqence in attribute table
type | SQL
script | V2022_11_18_11_35_49_65__add_column_seqence_in_attribute_table.sql
checksum | 408921517
installed_by | postgres
installed_on | 2022-12-09 14:01:59.352589
execution_time | 174
success | t
Great question. This is as close as I got to documentation on that table:
https://www.red-gate.com/hub/product-learning/flyway/exploring-the-flyway-schema-history-table
That article doesn't really describe the type column well at all: it suggests there are only two possible values, and I've seen at least three (DELETE, SQL and JDBC). Not sure what else it may have.
EDIT: I've since also confirmed two more values: BASELINE and UNDO_SQL.
It's actually marked as intentionally undocumented, since it's not part of the public API:
https://flywaydb.org/documentation/learnmore/faq#case-sensitive
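If you want to see which type values occur in your own installation, a quick aggregate over the history table works (a minimal sketch, assuming the table is named schema_version_history as in the question):

select type, count(*) as occurrences
from schema_version_history
group by type
order by occurrences desc;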

Can't import a CSV file into PostgreSQL

Summary
I failed to import CSV files into a table on PostgreSQL.
Even though it says that the import completed successfully, no rows were created.
How did this happen, and how can I fix this? Thank you.
Details
1. The CSV file I tried (and failed) to import looks like this (screenshot: 1. CSV file imported):
| number | ticket | category | question | answer | url | note |
|--------|------------|-----------|--------------------------|-----------------------|----------------|----------|
| 1 | #0000000 | Temp>123 | *confirming* | Would you...? | https:///....a | - |
| 2 | #1234567 | AAA / BBB | "a" vs "b" | If A, "a". If B, "b". | https:///....b | #0000000 |
| 3 | #1234567-2 | AAA>abc | Can we do sth using "a"? | Yes, blah blah blah. | https:///....b | - |
And this is the table on PostgreSQL
number : numeric
ticket : char
category : char[]
question : char
answer : char
url : char
note : char
2. The message after the import
Even though it says that the import was "successfully completed",
when I hit "More details" on the import popup (3. Message - Completed), it shows:
--command " "\\copy public.test (\"number\", ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '\"' ESCAPE '''';""
3. Checking whether the file was actually imported
When I click "Count Rows", it says "Table rows counted: 0".
I also tried the following query in the Query Tool on the table, and it returns no rows:
SELECT * FROM (table name)
For reference: a Postgres log was created, but it contains only the header.
Screenshots: 1. CSV file imported / 2. Import Preference / 3. Message - Completed / 4. No row created / 5. postgres_log
After changing the name of a column from "number" to "consecutive", the error message showed up in Query Tool (not in Import/Export)
Tried the Query Tool instead of Import/Export
--> the situation didn’t change
Changed the first column name from "number" to "consecutive" in both the CSV and the psql table
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '"' ESCAPE '''';
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'
--> got error message
ERROR: could not open file "/Users/alice/Desktop/test5.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Check the column settings in the Columns tab of the Import/Export dialog (shown in the "2. Import Preference" image); it is just to the right of the Options tab. The column order set there should match the order in your file. Also check "More details" in the "3. Message - Completed" popup.
This is a file permission issue: a server-side COPY runs as the PostgreSQL server process, which cannot read your file. Open a shell terminal, go to the directory where the data file is stored, give the server's OS user read access to the file (for example with chmod +r, making sure the directories in the path are traversable as well), and retry loading the data file into your DB.
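Alternatively, follow the HINT in the error message and use psql's client-side \copy, which reads the file with your own permissions instead of the server's. A minimal sketch, reusing the table and file path from the question:

\copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'

Note that \copy is a psql meta-command, so it runs in psql rather than in pgAdmin's Query Tool (pgAdmin's Import/Export dialog invokes it for you, as the --command output above shows).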

How to debug "Sugar CRM X Files May Only Be Used With A Sugar CRM Y Database."

Sometimes one gets a message like:
Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.
I am wondering how Sugar determines what version of the database it is using. In the above case, I get the following output:
select * from config where name='sugar_version';
+----------+---------------+-------+
| category | name | value |
+----------+---------------+-------+
| info | sugar_version | 6.4.5 |
+----------+---------------+-------+
1 row in set (0.00 sec)
cat config.php | grep sugar_version
'sugar_version' => '6.4.5',
Given the above output, I am wondering how to debug the output "Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.": Sugar seems to think the files are not of version 6.4.5 even though the sugar_version is 6.4.5 in config.php; where should I look next?
There are two options for this issue:
Option 1: Update your database to the latest version.
Option 2: Follow the steps below and change the SugarCRM config version.
mysql> select * from config where name ='sugar_version';
+----------+---------------+---------+----------+
| category | name | value | platform |
+----------+---------------+---------+----------+
| info | sugar_version | 7.7.0.0 | NULL |
+----------+---------------+---------+----------+
1 row in set (0.00 sec)
Update your SugarCRM version to the appropriate value:
mysql> update config set value='7.7.1.1' where name ='sugar_version';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The above commands seem to be correct. Sugar seems to check that config.php and the config table in the database contain the same version. In my case I was making the mistake of using the wrong database -- so if you're like me and tend to have your databases mixed up, double check in config.php that 'dbconfig' is indeed pointing to the right database.
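One quick way to confirm you are connected to the database that config.php points to is to compare the schema you are actually in with the 'db_name' entry under 'dbconfig' ('dbconfig'/'db_name' are the usual SugarCRM config.php keys, but verify them in your own file). A minimal check on the MySQL side:

-- run these over the connection described by config.php's 'dbconfig'
select database();                                       -- the schema you are actually connected to
select value from config where name = 'sugar_version';   -- the version recorded in that schema

If database() does not match db_name in config.php, you are looking at the wrong database, which produces exactly the misleading version-mismatch message described above.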

OrientDB: Cannot find a command executor for the command request: sql.MOVE VERTEX

I am using OrientDB Community Edition 1.7.9 on Mac OS X.
Database Info:
DISTRIBUTED CONFIGURATION: none (OrientDB is running in standalone
mode)
DATABASE PROPERTIES
NAME | VALUE|
Name | null |
Version | 9 |
Date format | yyyy-MM-dd |
Datetime format | yyyy-MM-dd HH:mm:ss |
Timezone | Asia/xxxx |
Locale Country | US |
Locale Language | en |
Charset | UTF-8 |
Schema RID | #0:1 |
Index Manager RID | #0:2 |
Dictionary RID | null |
Command flow:
create cluster xyz physical default default append
alter class me add cluster xyz
move vertex #1:2 to cluster:xyz
Studio UI throw the following error:
2014-10-22 14:59:33:043 SEVE Internal server error:
com.orientechnologies.orient.core.command.OCommandExecutorNotFoundException:
Cannot find a command executor for the command request: sql.MOVE
VERTEX #1:2 TO CLUSTER:xyz [ONetworkProtocolHttpDb]
The console returns a record, just as a SELECT does, and I do not see any error in the log.
I am planning a critical feature around moving selected records to a different cluster.
Could anyone help in this regard?
Thanks in advance.
Cheers
The MOVE VERTEX command is not supported in 1.7.x; you have to switch to 2.0-M2.
As for why the console behaves differently: "The OrientDB Console is a Java Application made to work against OrientDB databases and Server instances", so it does not go through the server's HTTP command executor the way the Studio UI does (the error above comes from ONetworkProtocolHttpDb).

to_tsvector in simple mode throwing away non-English text in some setups

On some pg installs I am noticing the following happens
sam=# select '你好 世界'::tsvector;
tsvector
---------------
'世界' '你好'
(1 row)
sam=# select to_tsvector('simple', '你好 世界');
to_tsvector
-------------
(1 row)
Even though my db is configured like so:
MBA:bin sam$ ./psql -l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-------+----------+-------------+-------------+-------------------
postgres | sam | UTF8 | en_AU.UTF-8 | en_AU.UTF-8 |
sam | sam | UTF8 | en_AU.UTF-8 | en_AU.UTF-8 |
template0 | sam | UTF8 | en_AU.UTF-8 | en_AU.UTF-8 | =c/sam +
| | | | | sam=CTc/sam
template1 | sam | UTF8 | en_AU.UTF-8 | en_AU.UTF-8 | =c/sam +
| | | | | sam=CTc/sam
(4 rows)
On other similar setups I am seeing select to_tsvector('simple', '你好 世界'); correctly return the tokens.
How do I diagnose the simple tokeniser to figure out why it is throwing away these characters?
The simplest repro seems to be installing Postgres via Postgres.app. It does not happen when installing Postgres on Ubuntu with a locale set.
Unfortunately, the default parser used by text search depends heavily on how the database was initialized, and especially on lc_collate and the database's encoding.
This is due to the inner workings of the default text parser, which are only vaguely documented:
Note: The parser's notion of a "letter" is determined by the database's locale setting, specifically lc_ctype. Words containing only the basic ASCII letters are reported as a separate token type, since it is sometimes useful to distinguish them.
The important part is these comments in PostgreSQL source code:
/* [...]
* Notes:
* - with multibyte encoding and C-locale isw* function may fail
* or give wrong result.
* - multibyte encoding and C-locale often are used for
* Asian languages.
* - if locale is C then we use pgwstr instead of wstr.
*/
and below:
/*
* any non-ascii symbol with multibyte encoding with C-locale is
* an alpha character
*/
Consequently, if you want to use the default parser with Chinese, make sure your database is initialized with the C locale and you have a multibyte encoding, so all characters above U+007F will be treated as alpha (including spaces such as IDEOGRAPHIC SPACE U+3000 !). Typically, the following initdb call will do what you expect:
initdb --locale=C -E UTF-8
Otherwise, Chinese characters will be skipped and treated as blank.
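Before re-initializing anything, you can check which locale and encoding an existing database was created with (a minimal query over the standard pg_database catalog):

SELECT datname, datcollate, datctype, pg_encoding_to_char(encoding) AS encoding
FROM pg_database
WHERE datname = current_database();

A result showing datcollate/datctype other than C (for example en_AU.UTF-8, as in the \l output above) is consistent with the token-dropping behaviour described here.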
You can check this with debug function ts_debug. With a database initialized with lc_collate=en_US.UTF-8 or any other configuration where tokenization fails, you will get:
SELECT * FROM ts_debug('simple', '你好 世界');
alias | description | token | dictionaries | dictionary | lexemes
-------+---------------+-----------+--------------+------------+---------
blank | Space symbols | 你好 世界 | {} | |
Conversely, with lc_collate=C and a UTF-8 database (initialized as above), you will get the proper result:
SELECT * FROM ts_debug('simple', '你好 世界');
alias | description | token | dictionaries | dictionary | lexemes
-------+-------------------+-------+--------------+------------+---------
word | Word, all letters | 你好 | {simple} | simple | {你好}
blank | Space symbols | | {} | |
word | Word, all letters | 世界 | {simple} | simple | {世界}
It seems, however, that you mean to tokenize Chinese text where the words are already separated by regular spaces, i.e. tokenization/segmentation does not happen within PostgreSQL. For this use case, I strongly suggest using a custom parser. This is especially true if you do not use other features of PostgreSQL's simple parser, such as tokenizing URLs.
A parser that tokenizes on space characters is very easy to implement. In fact, contrib/test_parser contains sample code doing exactly that (see the sketch below). That parser works regardless of locale. There was a buffer overrun bug in it that was fixed in 2012, so make sure you use a recent version.
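If you go that route, installing and trying the sample parser looks roughly like this (a sketch; the extension name test_parser and the parser name testparser are taken from its contrib documentation and may not be available on every build, since the module was later moved out of contrib):

-- install the sample space-tokenizing parser and wire it into a configuration
CREATE EXTENSION test_parser;
SELECT * FROM ts_parse('testparser', '你好 世界');
CREATE TEXT SEARCH CONFIGURATION testcfg (PARSER = testparser);
ALTER TEXT SEARCH CONFIGURATION testcfg ADD MAPPING FOR word WITH simple;
SELECT to_tsvector('testcfg', '你好 世界');

The word token type is mapped to the simple dictionary here rather than an English stemmer, since the input is already-segmented Chinese.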