REST API V2: Converting String type parameter to List type

I'm trying to execute a JasperReports report through the REST V2 services, passing the parameter value as part of the URL. In the report, I have a SQL query that takes a list-type parameter. How do I convert the String parameter to a List so the query can run?
Here, the String parameter holds comma-separated values, like below:
https://[host_name]:[port]/jasperserver/rest_v2/reports/reports/samples/[report_name].pdf?param_str=value1,value2,value3
We need to convert param_str from a String type to a List type.
I'm getting a cast exception like this:
Caused by: net.sf.jasperreports.engine.fill.JRExpressionEvalException: Error evaluating expression :
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.List

Can't believe I am doing this; it seems you can earn quite a lot consulting on Jasper's obscure and badly documented areas. But I work with open-source tech, so I should have the correct mindset. :-)
Here you are: param_str=value1&param_str=value2&param_str=value3
If you find out how to do the same with a Map, please tell.
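To spell out the trick above: the REST v2 service treats a repeated query parameter as a multi-valued (collection) parameter, so declare param_str as java.util.List in the JRXML and repeat the key in the URL. A small sketch with Python requests (host, port, report path, and credentials are all hypothetical placeholders):

import requests  # assumption: the requests library is installed

url = ("https://myserver:8443/jasperserver/rest_v2"
       "/reports/reports/samples/report_name.pdf")  # hypothetical report path

# Passing a list makes requests repeat the key:
# ?param_str=value1&param_str=value2&param_str=value3
resp = requests.get(url,
                    params={"param_str": ["value1", "value2", "value3"]},
                    auth=("user", "password"))  # hypothetical credentials
resp.raise_for_status()
with open("report_name.pdf", "wb") as f:
    f.write(resp.content)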

Related

Problem writing an enum to PostgreSQL using a PySpark DataFrame with JDBC write

So I am moving data from a MySQL (5.7) database to a PostgreSQL (12.7) one using PySpark (Spark 3.0.1, Scala 2.12). A table in the destination model has a column that is an enum.
CREATE TYPE ORDER_STATUS AS ENUM (
'SHIPPED','PAID','REFUNDED','PARTIALLY_REFUNDED','PROCESSING');
When inserting:
df_orders.select(df_orders.columns).write.format('jdbc').options(**postgres_write_opts_table).mode('append').save()
I am getting the following exception:
Caused by: org.postgresql.util.PSQLException: ERROR: column "status" is of type order_status but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Basically I need to cast the status column to ORDER_STATUS. I have tried to use a UserDefinedType (PySpark does not have SQLUserDefinedType), but without really knowing what I am doing, because the documentation is not very clear.
class StatusUDT(UserDefinedType):
    @classmethod
    def sqlType(cls):
        # NullType here is what later makes Spark try to cast string to null
        return NullType()

    @classmethod
    def module(cls):
        return cls.__module__

    def serialize(self, obj):
        return f"{obj.value}::order_status_type"

    def deserialize(self, datum):
        # 'Some' is the enum class (not shown in the question)
        return {x.value: x for x in Some}[datum]
And then I try the cast:
df_orders = df_orders.withColumn("status", col("status").cast(StatusUDT()))
Then I get the following error:
AnalysisException: cannot resolve 'CAST(`status` AS NULL)' due to data type mismatch: cannot cast string to null;;
Is there any way to cast this Enum?
So I was finally able to overcome this issue. I temporarily removed the enum so I could keep doing more tests, and then I had a similar issue with a JSON type.
Searching about it, I found this post: How to save String as JSONB type in Postgres when using AWS Glue. And I fixed it by setting the property:
'stringtype': "unspecified"
as the post's answer suggests.
Then I put the enum back into the table and this property worked for it as well. I was able to run the insertions with no further issues.
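For reference, a minimal sketch of what the write looks like with that property in place (connection details here are hypothetical): with pgjdbc, stringtype=unspecified makes the driver send strings as untyped parameters, letting Postgres coerce them to the enum (or jsonb) column type.

# Hypothetical connection details; the key line is 'stringtype'
postgres_write_opts_table = {
    "url": "jdbc:postgresql://localhost:5432/mydb",
    "dbtable": "orders",
    "user": "postgres",
    "password": "secret",
    "driver": "org.postgresql.Driver",
    # The fix: bind strings as untyped values so Postgres can coerce them
    "stringtype": "unspecified",
}

(df_orders.select(df_orders.columns)
    .write.format("jdbc")
    .options(**postgres_write_opts_table)
    .mode("append")
    .save())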

Postgres: How to cast a JSONB value to numeric

I am having issues casting a jsonb value and would love some guidance.
What we are trying to achieve: some data came in as strings, and we want to cast it to numbers.
Consider the following update statement:
update customer
set traits = jsonb_set(traits, '{arr}',traits->'arr'::text::integer)
where jsonb_typeof(traits->'arr') = 'string'
and traits->'arr' is not null
We currently get the following error:
[22P02] ERROR: invalid input syntax for type integer: "arr"
I have tried all sorts of casting incantations but can't find a way past this.
Anyone have a path forward for us?
A working solution looks like THIS:
update customer
set traits = jsonb_set(traits, '{arr}',(traits->>'arr')::integer::text::jsonb)
where jsonb_typeof(traits->'arr') = 'string'
and traits->'arr' is not null
with a triple cast. Smells a bit off.
The problem is that your expression
traits->'arr'::text::integer
is evaluated as
traits->('arr'::text::integer)
which is trying to cast the literal 'arr' to an integer (failing, for obvious reasons, with the error message you mention). Instead, you want
(traits->>'arr')::integer
Note the ->> operator: -> returns jsonb, and casting a jsonb string to text keeps the surrounding double quotes (you get '"42"', not '42'), so (traits->'arr')::text::integer would still fail. ->> extracts the value directly as unquoted text, after which a single cast to integer suffices.
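To see why ->> is the operator you want here, a quick sanity check from Python (this sketch assumes the psycopg2 driver and a hypothetical local database):

import psycopg2  # assumption: psycopg2 is installed

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()

# -> returns jsonb; casting that to text keeps the JSON quotes
cur.execute("""SELECT ('{"arr": "42"}'::jsonb -> 'arr')::text""")
print(cur.fetchone()[0])  # prints "42" with quotes, so ::integer would fail

# ->> extracts the unquoted text, so one cast to integer is enough
cur.execute("""SELECT ('{"arr": "42"}'::jsonb ->> 'arr')::integer""")
print(cur.fetchone()[0])  # prints 42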

PySpark Schema should be specified in DDL format as a string literal or output of the schema_of_json function instead of schemaofjson(`col1`);

I'm trying to infer a schema from a JSON-like string with the schema_of_json function, and then use that schema to parse the string value into a struct with the from_json function. My code is:
import pyspark.sql.functions as sqlf
from pyspark.sql.functions import col

dfTemp = readFromEventHubs()  # helper (not shown) that returns a streaming DataFrame
df = dfTemp.withColumn("col1", sqlf.get_json_object(col("jsonString"), '$.*'))
col1Val = df.col1
jsonSchema = sqlf.schema_of_json(col1Val)  # passes a Column, not a string literal
df.select(sqlf.from_json(df.col1, jsonSchema).alias("jsonCol"))
but I get the following exception:
AnalysisException: 'Schema should be specified in DDL format as a string literal or output of the schema_of_json function instead of schemaofjson(`col1Val`);'
Just one precision: I'm using Spark Streaming.
What's wrong with my code? Thank you.
schema_of_json expects a string representing a valid JSON object. You’re passing it a pyspark.sql.Column, likely because you’re hoping it will infer the schema of every single row. That won’t happen though.
from_json expects as its first positional argument a Column that contains JSON strings, and as its second argument a pyspark.sql.types.StructType or pyspark.sql.types.ArrayType, or even (since 2.3) a DDL-formatted string or a JSON-format schema string.
That means you can't infer a different schema for each row.
If you know the schema before reading (chances are that you do), then pass it in ("schema-on-read") when you call from_json. If you're not fixed on Databricks Delta, you could use a different DataFrameReader, spark.read.json, leaving its schema keyword argument unspecified so that it infers the schema.
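For example, a minimal sketch with an explicit schema (the field names here are hypothetical; substitute the fields your JSON actually carries):

import pyspark.sql.functions as sqlf
from pyspark.sql.types import StructType, StructField, LongType, StringType

# Hypothetical schema; replace with your real fields
schema = StructType([
    StructField("id", LongType()),
    StructField("name", StringType()),
])
parsed = df.select(sqlf.from_json(df.col1, schema).alias("jsonCol"))

# Equivalently, as a DDL-formatted string literal
parsed = df.select(sqlf.from_json(df.col1, "id BIGINT, name STRING").alias("jsonCol"))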

In PostgreSQL, which types can be cast with the type name first?

Reading the PostgreSQL docs, I see that you can cast a longish bit of text to xml like this:
SELECT xml '<long>long text, may span many lines</long>'
Curious, I found that I could do the same with JSON:
SELECT json '{"arg1":"val1", <more args spanning many lines>}'
(I couldn't find an official reference for this one. It just works!)
By contrast, this does not work:
SELECT float8 3.14159
I like this alternate syntax from a readability perspective. Now I'm looking for a reference listing which types may be specified up front like this, but I haven't found it yet.
Any pointers?
The documentation says:
A constant of an arbitrary type can be entered using any one of the following notations:
type 'string'
'string'::type
CAST ( 'string' AS type )
The string constant's text is passed to the input conversion routine for the type called type. The result is a constant of the indicated type. The explicit type cast can be omitted if there is no ambiguity as to the type the constant must be (for example, when it is assigned directly to a table column), in which case it is automatically coerced.
The form you are asking about is the first one, so it can be used with all PostgreSQL types.
Note that the data must be specified as a string literal (in single or dollar quotes) when you use that syntax.
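As a quick check of that last point, here is a small sketch contrasting the quoted and unquoted forms (assuming psycopg2 and a hypothetical database):

import psycopg2  # assumption: psycopg2 is installed

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()

# Type name first works when the value is a string literal...
cur.execute("SELECT float8 '3.14159'")
print(cur.fetchone()[0])  # 3.14159

# ...but not with a bare numeric literal
try:
    cur.execute("SELECT float8 3.14159")
except psycopg2.errors.SyntaxError as e:
    print("fails:", e)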

Passing a String parameter to a subreport Dataset

I'm trying to pass a parameter from Java to a subreport. The problem comes when I have to check all the values inside an IN clause. The parameter $P{Itens} arrives in this format, (1234,5678,9012), and it's a String.
How can I solve it?
You should pass the parameter from Java to the report as a List carrying all the possible values. Afterwards, edit your report JRXML and set the $P{Itens} parameter's class to java.util.List instead of String.
In the report query, replace iae.COD_PECA IN $P{Itens} with $X{IN, iae.COD_PECA, $P{Itens}}. Here, $X{} is built-in support provided by JasperReports for SQL clause functions.
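Tying this back to the first question above: once $P{Itens} is declared as java.util.List, the same repeated-query-parameter trick works when running the report through the REST v2 service. A brief sketch with Python requests (host, report path, values, and credentials are all hypothetical):

import requests  # assumption: the requests library is installed

url = ("https://myserver:8443/jasperserver/rest_v2"
       "/reports/reports/samples/my_report.pdf")  # hypothetical report path

# Repeating the key sends Itens as a multi-valued (collection) parameter:
# ?Itens=1234&Itens=5678&Itens=9012
resp = requests.get(url,
                    params={"Itens": ["1234", "5678", "9012"]},
                    auth=("user", "password"))  # hypothetical credentials
resp.raise_for_status()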