How can I import array data into QuickSight from PostgreSQL? For example, {1,2,3,4,5}. I tried to import all the data into QuickSight, but it doesn't recognize the arrays. However, if I download the data as CSV from PostgreSQL and then import the local CSV file into QuickSight, it recognizes the arrays as strings.
Have you tried converting the data type using a custom SQL command that will convert the array? https://docs.aws.amazon.com/quicksight/latest/user/adding-a-SQL-query.html
-- '->' is the PostgreSQL JSON accessor, so this assumes column_name holds json/jsonb data
SELECT column_name->'name' AS name
FROM table_name
I achieved that with custom SQL plus unnest() on the array column; the difference is that you will later have to keep in mind that some rows are no longer unique.
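A minimal sketch of that unnest() approach as a QuickSight custom SQL query, where my_table, id, and my_array are placeholder names:
SELECT id, unnest(my_array) AS my_array_value  -- one row per array element
FROM my_table;
Each array element becomes its own row, so id repeats for multi-element arrays and any later aggregation has to account for that.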
You can use array_join(array, ';') (in PostgreSQL itself the equivalent function is array_to_string(array, ';')). This will show your array elements in QuickSight separated by ;, like [1;2;3;4;5].
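A minimal sketch of that flattening approach as PostgreSQL custom SQL, again with my_table, id, and my_array as placeholder names:
-- Collapse the array into one ;-separated string so QuickSight sees a plain text column
SELECT id, array_to_string(my_array, ';') AS my_array_text
FROM my_table;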
I'm trying to import a CSV into an existing PostgreSQL table using the DBeaver import tool, and I need to transform a numeric value by multiplying it by 100.
Does someone know the syntax to be used in the expression?
The official documentation talks about JEXL expressions, but I can't find any examples.
I can't post images, but the one in this question is exactly where I need to put the expression.
I was expecting something like:
${column}*100
It seems the columns are exposed as variables, so a simple(r) column * 100 should do.
Hi, I am trying to split a string column that uses a comma (',') as the delimiter.
DROP TABLE IF EXISTS #Address;
CREATE TABLE #Address (stir VARCHAR(MAX));
GO
INSERT INTO #Address (stir)
VALUES ('aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"')
     , ('ca,"23","42316eg3","pom","","1800-01-01","9999","","1"')
     , ('daa,"","1324567a","","catty","","756432","213",""');
GO
Expected output:
I am using PARSENAME, but it is returning null values. Can you guide me on my expected output?
Thanks in advance.
The best solution here would be to just create a flat CSV file based on your current insert data, and then use SQL Server's bulk import tool to load it into a table. The following CSV data should be workable here:
aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"
ca,"23","42316eg3","pom","","1800-01-01","9999","","1"
daa,"","1324567a","","catty","","756432","213",""
Just make sure that you specify double quote as the field escape character.
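A minimal sketch of that bulk load, assuming the three rows above are saved as C:\data\address.csv and a destination table dbo.Address with one column per CSV field already exists (the path and table name are placeholders; FORMAT = 'CSV' and FIELDQUOTE require SQL Server 2017 or later):
-- Load the quoted CSV; the double quote acts as both the quote and escape character
BULK INSERT dbo.Address
FROM 'C:\data\address.csv'
WITH (FORMAT = 'CSV', FIELDQUOTE = '"');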
I have to push parquet file data, which I am reading with IBM Cloud SQL Query, to Db2 on Cloud.
My parquet file has data in array format, and I want to push that to Db2 on Cloud too.
Is there any way to push the array data from the parquet file to Db2 on Cloud?
Have you checked out this advice in the documentation?
https://cloud.ibm.com/docs/services/sql-query?topic=sql-query-overview#limitations
If a JSON, ORC, or Parquet object contains a nested or arrayed structure, a query with CSV output using a wildcard (for example, SELECT * from cos://...) returns an error such as "Invalid CSV data type used: struct." Use one of the following workarounds:
For a nested structure, use the FLATTEN table transformation function. Alternatively, you can specify the fully nested column names instead of the wildcard, for example, SELECT address.city, address.street, ... from cos://....
For an array, use the Spark SQL explode() function, for example, select explode(contact_names) from cos://....
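A minimal sketch of the explode() workaround in SQL Query, assuming a Parquet object with an id column and an array column contact_names (the bucket URIs and column names are placeholders):
SELECT id, explode(contact_names) AS contact_name
FROM cos://us-geo/mybucket/mydata.parquet STORED AS PARQUET
INTO cos://us-geo/mybucket/result/ STORED AS CSV
The flattened CSV result can then be loaded into Db2 on Cloud with its usual load tooling.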
I have a CLOB(2000000) field in a DB2 (v10) database, and I would like to run a simple UPDATE query on it to replace each occurrence of "foo" with "baaz".
Since the contents of the field are more than 32k, I get the following error:
"{some char data from field}" is too long.. SQLCODE=-433, SQLSTATE=22001
How can I replace the values?
UPDATE:
The query was the following (changed UPDATE into SELECT for easier testing):
SELECT REPLACE(my_clob_column, 'foo', 'baaz') FROM my_table WHERE id = 10726
UPDATE 2
As mustaccio pointed out, REPLACE does not work on CLOB fields (or at least not without casting the data to VARCHAR, which in my case is not possible since the data is larger than 32k), so the question is about finding an alternative way to achieve the REPLACE functionality for CLOB fields.
Thanks,
krisy
Finally, since I found no way to do this with an SQL query, I ended up exporting the table, editing its LOB content in Notepad++, and importing the table back again.
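For reference, a rough sketch of that export/edit/import round trip from the DB2 command line, where my_table and the file paths are placeholders (MODIFIED BY LOBSINFILE writes the CLOBs to separate files that can be edited):
EXPORT TO my_table.del OF DEL LOBS TO ./lobs/ MODIFIED BY LOBSINFILE SELECT * FROM my_table;
-- edit the exported LOB files, then reload, replacing the table contents:
IMPORT FROM my_table.del OF DEL LOBS FROM ./lobs/ MODIFIED BY LOBSINFILE REPLACE INTO my_table;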
Not sure if this applies to your case: there are two different REPLACE functions offered by DB2, SYSIBM.REPLACE and SYSFUN.REPLACE. The version of REPLACE in SYSFUN accepts CLOBs and supports values up to 1 MByte. If your values are longer than that, you would need to write your own (SQL-based?) function.
BTW: You can check function resolution by executing "values(current path)"
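A minimal sketch of forcing the SYSFUN version, using the table and column names from the question and assuming the CLOB values stay within the 1 MByte limit mentioned above:
UPDATE my_table
SET my_clob_column = SYSFUN.REPLACE(my_clob_column, 'foo', 'baaz')
WHERE id = 10726;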
I'm creating a BIRT report and I need to split a comma-delimited string from a dataset into multiple columns in a table.
The data looks like:
256,1400.031,-70.014,1,4.544,0.36,10,31,30.89999962,0
256,1400,-69.984,2,4.574,1.36,10,0,0,0
...
The data is stored this way in the database and I can't change it, but I need to be able to display it as a table. I'm new to BIRT; any ideas?
I think the easiest way is to create a computed column in the dataset for each field.
For example if the merged field from database is named "mergedData" you can split it with this kind of expression:
First field (computed column) expression:
var tempArray=row["mergedData"].split(",");
tempArray[0];
Second field:
var tempArray=row["mergedData"].split(",");
tempArray[1];
etc..
This depends on some variables that you did not mention.
If the dataset is stagnant (not updated much, or ever), open the dataset with Excel, converting it from .csv to .xls, and save.
Then use the Excel file as a data source. Assuming you are using BIRT 4.1 or newer, this should work fine.
I don't think there is any SQL code that easily converts .csv.