SQL results don't show the Korean character field of a MariaDB table - Eclipse

I am using MariaDB 10.1.34 with Eclipse Oxygen.
I made a simple table and inserted some records containing Korean characters.
When I execute "select * from member" in the MariaDB console, I can see
the Korean characters correctly, as in the picture.
But when I execute the same query in the Eclipse SQL scrapbook,
the result doesn't show the records; I can only see empty lines.
I just started learning programming in school, so I don't know exactly what
the problem is.
It seems the DB saved the records correctly, but when JDBC fetches data from
the DB, it can't read the Korean characters.
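
If the data is stored correctly, a common culprit is the character set negotiated by the JDBC connection rather than the database itself. Below is a minimal sketch of what to try; the URL options are MySQL Connector/J-style and may be ignored by some driver versions, and the host, database, user and password are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KoreanSelect {
    public static void main(String[] args) throws Exception {
        // Ask the driver for a Unicode connection; without this, some
        // driver/OS combinations negotiate a Latin-1 style charset and
        // Korean characters come back as ? or empty strings.
        String url = "jdbc:mariadb://localhost:3306/testdb"
                + "?useUnicode=true&characterEncoding=UTF-8";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM member")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // first column of member
            }
        }
    }
}

If the rows come back but still display incorrectly, also check Eclipse's encoding settings (Window > Preferences > General > Workspace > Text file encoding) and set them to UTF-8, since the display side can garble text even when JDBC reads it correctly.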

Related

Unable to paste entire text into Postgres table

I'm trying to insert data into a Postgres table via the Windows command prompt using copy/paste, but there seems to be a character limit of 4077. I tried different data and got the same result, limited to 4077 characters, so I moved on to Windows PowerShell. While I was able to paste a few more characters in PowerShell, I'm still unable to get all the data into the table. The field of interest has its datatype set to text. There is no error message, just pasted text that is truncated and cannot be extended with a follow-up paste.
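
One way to sidestep the console paste limit entirely is to load the text from a file instead of pasting it. Here is a minimal JDBC sketch; the table big_text(body text), the file name, and the connection details are hypothetical.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertLongText {
    public static void main(String[] args) throws Exception {
        // Read the full text from a file; no console paste limit applies.
        String body = new String(
                Files.readAllBytes(Paths.get("data.txt")), StandardCharsets.UTF_8);

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "INSERT INTO big_text (body) VALUES (?)")) {
            ps.setString(1, body); // bound as one parameter, so length is not an issue
            ps.executeUpdate();
        }
    }
}

Without writing any code, psql's \copy command can also load a file directly into a table, which avoids the terminal altogether.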

Kettle: converting from other databases to PostgreSQL

I have a problem when I convert a MySQL database to Postgres.
The MySQL table names and columns are all uppercase, but Kettle creates the Postgres tables all lowercase when I run this job. The Table Output component prints a log like this:
INSERT INTO USER_MENU ("FLOW_ID", "USER_ID" .... ,
When I try MySQL with everything lowercase, it runs successfully. I know Postgres is case-sensitive, but how do I handle the case where MySQL is all uppercase, or how do I make the Table Output step emit lowercase SQL?
Using Kettle 6.1.0.1-R
Quick answer: The CREATE TABLE statement is editable text. In particular, you can copy/paste it into Notepad (or any editor), change everything to lowercase, and copy/paste it back before pressing the Create button. (This is also useful for non-standard SQL dialects, e.g. Date/Time/Boolean types.)
Neat answer: Edit the connection. On the right panel you have a General/Advanced/Options/Pooling/Cluster menu. Go to the Advanced panel. There you can tell whether your database is using uppercase or lowercase.
Stupid but quick use of the clever answer: Use the Advanced menu to force quotes around identifiers.
Really smart answer: Edit the MySQL connection, select the Options menu, and refer to that page. Postgres conforms strictly to standard SQL, so be reluctant to change the Options defaults for Postgres connections.
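
For background, the behavior these options work around: PostgreSQL folds unquoted identifiers to lowercase, while quoted identifiers keep their exact case. A small JDBC illustration using the USER_MENU/FLOW_ID names from the question (connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CaseFolding {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement st = con.createStatement()) {
            // Quoted DDL creates the table with the exact (uppercase) name.
            st.execute("CREATE TABLE \"USER_MENU\" (\"FLOW_ID\" int)");

            // Works: quoted references preserve the case.
            st.execute("INSERT INTO \"USER_MENU\" (\"FLOW_ID\") VALUES (1)");

            try {
                // Fails: unquoted identifiers are folded to user_menu/flow_id,
                // which do not exist.
                st.execute("INSERT INTO USER_MENU (FLOW_ID) VALUES (2)");
            } catch (SQLException e) {
                System.out.println(e.getMessage()); // relation "user_menu" does not exist
            }
        }
    }
}

So either everything is quoted consistently (which is what forcing quotes around identifiers produces), or identifiers are created unquoted/lowercase so that the folding is harmless.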

How to populate a table via Pentaho Data Integration's Table Output step?

I am performing an ETL job via Pentaho 7.1.
The job is to populate a table 'PRO_T_TICKETS' in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields with respect to the stream fields (see the Mapped Fields screenshot).
My table PRO_T_TICKETS has its schema (column names) in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL Job?
I duplicated the TABLE_OUTPUT step for PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with a lowercase schema and populated the data into it.
But I want this data to be loaded into the table PRO_T_TICKETS only, with the UPPERCASE schema if possible.
I am attaching the whole job here along with the error thrown by Pentaho (see the Pentaho Error screenshot). I have also tried the query with double quotes added to the column names, as you can see in the error, but it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced on the left panel and click on "Force to upper case" or "Force to lower case" or, even better, "Preserve case of reserved words".
To know which option to choose, copy the 4th line of your error log (the line starting with INSERT INTO "public"."PRO_T_TICKETS("OID"...) into your SQL developer tool and change the connection's advanced parameters until it works.
Also, at debug time, don't use batch updates, don't use lazy conversion on previous steps, and try with one (1) field rather than all (25).
Just as a complement: it worked for me following the tips from AlainD, with some specific configurations that I'd like to share with you. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. In both DBs I have uppercase objects.
I did the following steps to get it working:
In the Table Input (MySQL) the objects are uppercase too, but I typed them in lowercase and it worked; I didn't set any special option in the DB connection.
In the Table Output (PostgreSQL) I typed everything in uppercase (schema, table name and columns) and I also set "Specify the database fields" (clicking on "Get fields").
In the target DB connection (PostgreSQL) I set the options (in the "Advanced" section): "Quote all in database" and "Preserve case of reserved words".
PS: The last option is because I found out there was one more problem with my fields: there was a column called "Admin" (yes guys, they created a camelcase column using a reserved word!), and for that reason I had to set "Preserve case of reserved words" and type it as "Admin" (without quotes and in camelcase) in the Table Output.
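
To see why the "Admin" column needed special handling: once a column is created with a quoted mixed-case name, every later reference must quote it with exactly the same case. A tiny sketch (the table name t and the connection details are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MixedCaseColumn {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE t (\"Admin\" text)");        // quoted camelcase column
            st.execute("INSERT INTO t (\"Admin\") VALUES ('x')"); // must quote, same case
            // st.execute("SELECT Admin FROM t"); // would fail: folded to admin
        }
    }
}

This is exactly what the "Quote all in database" and "Preserve case of reserved words" options automate in the connection settings.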

How do I use SQL Developer to insert and select Unicode 7.0 values in Oracle 12.2 tables?

I need to test my company's product compatibility with new features that Oracle declared they have in 12.2. One of them is support for Unicode 7.0.
I checked that the NLS_CHARACTERSET in my database is set to AL32UTF8, and I've got a table with VARCHAR2 columns, but I have absolutely no idea how to insert Unicode values into it.
I looked at the changeset Unicode published and at this post about Unicode emojis and pictographs (the highest-ranking answer). The problem is that SQL Developer (and DBeaver, for that matter) turns everything in the new languages into ? or squares, and I don't know how to use SQL to insert values that will be returned as pictographs or emoticons.
Thanks in advance
The procedure is as follows:
Go and download BabelPad.
Open a new file in it and copy+paste the Unicode code points of the pictographs you want to insert into the database. Highlight them and hit Alt+X to convert them to the pictographs.
Open Notepad++ and create a new .txt file containing an INSERT INTO statement for the table you created. Copy+paste the pictographs into it and save the file.
Connect to your Oracle database from your computer via SQL*Plus and run #\
et voila.
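
As an alternative to pasting pictographs through editors, you can bind them from code, which sidesteps the client display problem entirely. Here is a minimal JDBC sketch; the table emoji_test(c VARCHAR2(20 CHAR)) and the connection details are hypothetical, and it assumes an Oracle JDBC driver on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class EmojiInsert {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orclpdb", "user", "password")) {

            // U+1F600 (grinning face) as a UTF-16 surrogate pair; binding it
            // avoids any copy/paste or editor encoding issues.
            String emoji = "\uD83D\uDE00";
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO emoji_test (c) VALUES (?)")) {
                ps.setString(1, emoji);
                ps.executeUpdate();
            }

            // Read it back and print the code point to verify the round trip.
            try (PreparedStatement ps = con.prepareStatement("SELECT c FROM emoji_test");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("U+%X%n", rs.getString(1).codePointAt(0));
                }
            }
        }
    }
}

In pure SQL, Oracle's UNISTR function accepts UTF-16 escape sequences, e.g. UNISTR('\D83D\DE00') for the same character, so you never have to type the pictograph itself.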

How can I change the character encoding from Shift-JIS to UTF-8 when I copy data from DB2 to Postgres?

I'm trying to migrate data from DB2 to Postgres using Pentaho ETL.
The character encoding on DB2 is Shift-JIS (a Japanese-specific encoding) and on Postgres it is UTF-8.
I could migrate the data from DB2 to Postgres successfully, but the Japanese characters were not transformed properly (they came out as garbled characters).
How can I convert the encoding from Shift-JIS to UTF-8 when I transfer the data?
It was a bit of a tough problem for me, but I finally solved it.
First, you need to add a "Modified Java Script Value" step to the transformation and write a script like the one below.
(I'm assuming that the value in the table is column1 and the new value is value1.)
Here is an example of the source code (you can specify multiple values if you need to):
// Re-decode the string: the bytes were read as ISO-8859-1 but are really Shift-JIS.
var value1 = new Packages.java.lang.String(
    new Packages.java.lang.String(column1).getBytes("ISO8859_1"),
    "Shift-JIS").replaceAll(" ", "");
// You can drop replaceAll() if you don't need to strip spaces from the string.
Finally, click "Get variables" and the value will be shown in the fields table below.
Then you can use "value1" in the next step, and it will have been converted to the correct encoding (the one you specified).
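
For reference, here is the same re-decoding trick as a standalone Java program, which makes it easy to test the conversion outside Pentaho (the sample string is hypothetical):

import java.io.UnsupportedEncodingException;

public class ReDecode {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Simulate the bug: Shift-JIS bytes decoded with the wrong charset.
        byte[] sjisBytes = "日本語".getBytes("Shift_JIS"); // the real Shift-JIS bytes
        String garbled = new String(sjisBytes, "ISO8859_1"); // what the step receives

        // The fix: re-encode with the wrong charset to recover the raw bytes,
        // then decode them with the right one.
        String fixed = new String(garbled.getBytes("ISO8859_1"), "Shift_JIS");
        System.out.println(fixed); // prints 日本語
    }
}

The round trip is lossless because ISO-8859-1 maps every byte value 0x00-0xFF to exactly one character, so getBytes("ISO8859_1") returns the original bytes unchanged.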