crystal report error: "a string is required here" on select - crystal-reports

I run the following code in Crystal Reports and it tells me
"A string is required here"
on the second Case. I've tried casting {?Pm-Student_course_attendance_csv.Level} as a string, but no luck.
Global NumberVar grade;
Global NumberVar baseline;
Select ({?Pm-Student_course_attendance_csv.Level})
Case "R", "TL", "TU", "OTH", "P", "BTECFS":
If (CStr({PM_csv.mv_value}) = "Pass") Then
"On Track"
Else
"Below"
Case "H", "ASD", "AD", "AS", "G": //error from this line onwards inclusive
Select (CStr({PM_csv.mv_value}))
Case "A*":
grade := 11
Case "A*/A":
grade := 10
Case "A":
grade := 9
Case "A/B":
grade := 8
Case "B":
grade := 7
Case "B/C":
grade := 6
Case "C":
grade := 5
Case "C/D":
grade := 4
Case "D":
grade := 3
Case "D/E":
grade := 2
Case "E":
grade := 1
Case "U":
grade := 0
Default :
grade := 0;

Your formula cannot have two different return types. From the error and the code above it, the first branch returns a string, but in the second branch you are trying to return a number. (A formula's return type is determined by the last expression evaluated, so if the last thing a branch does is grade := 1, the formula tries to return the number 1, which Crystal won't allow.) You have to add a string return value for each case in the second half of the formula.
...
Case "H", "ASD", "AD", "AS", "G": //error from this line onwards inclusive
Select (CStr({PM_csv.mv_value}))
Case "A*":
grade := 11;
"Return value" //<---- Must be a string
Case "A*/A":
...
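Putting the fix together, the second branch could look like the sketch below. The returned strings ("On Track", "Below") are placeholders for whatever text your report should display; only the pattern of assignment-then-string matters:

```
Case "H", "ASD", "AD", "AS", "G":
    Select (CStr({PM_csv.mv_value}))
    Case "A*":
        (grade := 11; "On Track")
    Case "A*/A":
        (grade := 10; "On Track")
    // ... one (assignment; string) pair per remaining grade ...
    Default:
        (grade := 0; "Below")
```

Wrapping each case body in parentheses with statements separated by semicolons lets the string literal be the last expression, so every branch of the formula returns a string.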

Related

PySpark: Explode schema columns does not match with underlying nested schema

I am using pyspark in combination with Azure Synapse. I am reading multiple nested JSON files with the same structure into a dataframe, using the sample below:
{
"AmountOfOrders": 2,
"TotalEarnings": 1800,
"OrderDetails": [
{
"OrderNumber": 1,
"OrderDate": "2022-7-06",
"OrderLine": [
{
"LineNumber": 1,
"Product": "Laptop",
"Price": 1000
},
{
"LineNumber": 2,
"Product": "Tablet",
"Price": 500
},
{
"LineNumber": 3,
"Product": "Mobilephone",
"Price": 300
}
]
},
{
"OrderNumber": 2,
"OrderDate": "2022-7-06",
"OrderLine": [
{
"LineNumber": 1,
"Product": "Printer",
"Price": 100,
"Discount": 0
},
{
"LineNumber": 2,
"Product": "Paper",
"Price": 50,
"Discount": 0
},
{
"LineNumber": 3,
"Product": "Toner",
"Price": 30,
"Discount": 0
}
]
}
]
}
I am trying to get the LineNumbers of OrderNumber 1 in a separate dataframe, using a generic function which extracts the arrays and structs of the dataframe. I am using the code below:
def read_nested_structure(df, excludeList, messageType, coll):
    display(df.limit(10))
    print('read_nested_structure')
    cols = []
    match = 0
    match_field = ""
    print(df.schema[coll].dataType.fields)
    for field in df.schema[coll].dataType.fields:
        for c in excludeList:
            if c == field.name:
                print('Match = ' + field.name)
                match = 1
        if match == 0:
            # cols.append(coll)
            cols.append(col(coll + "." + field.name).alias(field.name))
        match = 0
    # cols.append(coll)
    print(cols)
    df = df.select(cols)
    return df
def read_nested_structure_2(df, excludeList, messageType):
    cols = []
    match = 0
    for coll in df.schema.names:
        if isinstance(df.schema[coll].dataType, ArrayType):
            print(coll + "-- : Array")
            df = df.withColumn(coll, explode(coll).alias(coll))
            cols.append(coll)
        elif isinstance(df.schema[coll].dataType, StructType):
            if messageType == 'Header':
                for field in df.schema[coll].dataType.fields:
                    cols.append(col(coll + "." + field.name).alias(coll + "_" + field.name))
            elif messageType == 'Content':
                print('Struct - Content')
                for field in df.schema[coll].dataType.fields:
                    cols.append(col(coll + "." + field.name).alias(field.name))
        else:
            for c in excludeList:
                if c == coll:
                    match = 1
            if match == 0:
                cols.append(coll)
    df = df.select(cols)
    return df
df = spark.read.load(datalakelocation + '/test.json', format='json')
df = unpack_to_content_dataframe_simple_2(df,exclude)
df = df.filter(df.OrderNumber == 1)
df = unpack_to_content_dataframe_simple_2(df,exclude)
display(df.limit(10))
which results in the following dataframe:
As you can see, the yellow-marked attribute is added to the dataframe even though it is not part of OrderNumber 1. How can I filter a row in the dataframe so that the result has an updated schema (in this case, without the Discount attribute)?
I have used the read_nested_structure_2() function in the following way to get the same results as yours:
x = read_nested_structure_2(df,[],'Header')
y = read_nested_structure_2(x,[],'Content')
y = y.filter(y.OrderNumber == 1)
z = read_nested_structure_2(y,[],'Header')
final = read_nested_structure_2(z,[],'Content')
display(final)
The output after using this code is:
The column Discount will be created even if it is specified for only one Product in the entire input JSON. To remove this column, we have to handle it separately and produce another dataframe without Discount (only when the column carries no values).
Since the same function is used to extract data from StructType and ArrayType columns, it is not recommended to also make it remove fields (such as Discount) whose values are all null; doing so would complicate the code.
Instead, we can write another function which does this work for us: it should remove a column when all of its values are null. The following function can be used to do this.
def exclude_fields_that_dont_exist(filtered_df):
    cols = []
    # iterate through columns
    for column in filtered_df.columns:
        # null_count is the count of null values in the column
        null_count = filtered_df.filter(filtered_df[column].isNull()).count()
        # if null_count equals the total row count, the column
        # holds no values and is not required (like Discount)
        if filtered_df.select(column).count() != null_count:
            cols.append(column)
    # return dataframe with required columns only
    return filtered_df.select(*cols)
When you use this function on the filtered dataframe (final in my case), then you get a resulting dataframe as shown below:
mydf = exclude_fields_that_dont_exist(final)
# removes columns from a dataframe that have all null values.
display(mydf)
NOTE:
Suppose that for OrderNumber=1 the product Laptop has a 10% discount while the rest of the products for the same order number have no discount value in the JSON. In that case the function must keep the Discount column, since it carries required information.
To avoid extra loops inside a function, you can also consider replacing all the null values with 0, since a Product with no Discount specified (null) is the same as a Product with a Discount of 0. If this is feasible, you can use fillna() (or df.na.fill()) to fill null values with any desired value.
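As a plain-Python sketch of the two options above (dropping all-null columns versus filling nulls with 0), with rows represented as dicts and None standing in for Spark nulls — this only illustrates the logic, not the Spark API:

```python
# Sample rows: Discount is null everywhere, like in the filtered dataframe.
rows = [
    {"OrderNumber": 1, "Product": "Laptop", "Price": 1000, "Discount": None},
    {"OrderNumber": 1, "Product": "Tablet", "Price": 500, "Discount": None},
]

def drop_all_null_columns(rows):
    """Keep only columns that have at least one non-null value."""
    keep = [c for c in rows[0] if any(r[c] is not None for r in rows)]
    return [{c: r[c] for c in keep} for r in rows]

def fill_nulls(rows, value=0):
    """Alternative: replace every null with a default instead of dropping."""
    return [{c: (value if v is None else v) for c, v in r.items()} for r in rows]
```

The first function mirrors exclude_fields_that_dont_exist; the second mirrors the fillna() suggestion.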

I'm looking for a way to pass in either a slice of int or a comma delimited string into a database/sql 'in' query [duplicate]

I am trying to execute the following query against the PostgreSQL database in Go using pq driver:
SELECT COUNT(id)
FROM tags
WHERE id IN (1, 2, 3)
where 1, 2, 3 is passed as a slice tags := []string{"1", "2", "3"}.
I have tried many different things like:
s := "(" + strings.Join(tags, ",") + ")"
if err := Db.QueryRow(`
SELECT COUNT(id)
FROM tags
WHERE id IN $1`, s,
).Scan(&num); err != nil {
log.Println(err)
}
which results in pq: syntax error at or near "$1". I also tried
if err := Db.QueryRow(`
SELECT COUNT(id)
FROM tags
WHERE id IN ($1)`, strings.Join(stringTagIds, ","),
).Scan(&num); err != nil {
log.Println(err)
}
which also fails with pq: invalid input syntax for integer: "1,2,3"
I also tried passing a slice of integers/strings directly and got sql: converting Exec argument #0's type: unsupported type []string, a slice.
So how can I execute this query in Go?
Pre-building the SQL query (preventing SQL injection)
If you're generating an SQL string with a param placeholder for each of the values, it's easier to just generate the final SQL right away.
Note that since the values are strings, there's room for an SQL injection attack, so we first check that all the string values are indeed numbers, and only proceed if so:
tags := []string{"1", "2", "3"}
buf := bytes.NewBufferString("SELECT COUNT(id) FROM tags WHERE id IN(")
for i, v := range tags {
if i > 0 {
buf.WriteString(",")
}
if _, err := strconv.Atoi(v); err != nil {
panic("Not number!")
}
buf.WriteString(v)
}
buf.WriteString(")")
Executing it:
num := 0
if err := Db.QueryRow(buf.String()).Scan(&num); err != nil {
log.Println(err)
}
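The pre-building step above can be wrapped in a small, self-contained helper — a sketch, where buildInQuery is a hypothetical name and an error is returned instead of panicking:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// buildInQuery validates that every tag is numeric and builds the final SQL.
// Returning an error (rather than panicking) lets callers handle bad input.
func buildInQuery(tags []string) (string, error) {
	var sb strings.Builder
	sb.WriteString("SELECT COUNT(id) FROM tags WHERE id IN (")
	for i, v := range tags {
		if _, err := strconv.Atoi(v); err != nil {
			return "", fmt.Errorf("not a number: %q", v)
		}
		if i > 0 {
			sb.WriteString(",")
		}
		sb.WriteString(v)
	}
	sb.WriteString(")")
	return sb.String(), nil
}

func main() {
	q, err := buildInQuery([]string{"1", "2", "3"})
	if err != nil {
		panic(err)
	}
	fmt.Println(q) // SELECT COUNT(id) FROM tags WHERE id IN (1,2,3)
}
```

The returned string can then be passed to Db.QueryRow as in the snippet above.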
Using ANY
You can also use Postgresql's ANY, whose syntax is as follows:
expression operator ANY (array expression)
Using that, our query may look like this:
SELECT COUNT(id) FROM tags WHERE id = ANY('{1,2,3}'::int[])
In this case you can declare the text form of the array as a parameter:
SELECT COUNT(id) FROM tags WHERE id = ANY($1::int[])
Which can simply be built like this:
tags := []string{"1", "2", "3"}
param := "{" + strings.Join(tags, ",") + "}"
Note that no check is required in this case as the array expression will not allow SQL injection (but rather will result in a query execution error).
So the full code:
tags := []string{"1", "2", "3"}
q := "SELECT COUNT(id) FROM tags WHERE id = ANY($1::int[])"
param := "{" + strings.Join(tags, ",") + "}"
num := 0
if err := Db.QueryRow(q, param).Scan(&num); err != nil {
log.Println(err)
}
This is not really a Go issue: you are using a string to compare against an integer (id) in your SQL request. That means SQL receives:
SELECT COUNT(id)
FROM tags
WHERE id IN ('1, 2, 3')
instead of what you want to give it. You just need to convert your tags into integers and pass them to the query.
EDIT:
Since you are trying to pass multiple values to the query, you need to tell it how many:
params := make([]string, 0, len(tags))
for i := range tags {
params = append(params, fmt.Sprintf("$%d", i+1))
}
query := fmt.Sprintf("SELECT COUNT(id) FROM tags WHERE id IN (%s)", strings.Join(params, ", "))
This builds a query ending in ($1, $2, $3, ...). Then convert your tags to integers:
// database/sql's variadic parameter is ...interface{}, so collect the
// converted values in a []interface{} rather than a []int
values := make([]interface{}, 0, len(tags))
for _, s := range tags {
val, err := strconv.Atoi(s)
if err != nil {
// Do whatever is required with the error
fmt.Println("Err:", err)
} else {
values = append(values, val)
}
}
And finally, you can use it in the query:
Db.QueryRow(query, values...)
This should do it.
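The placeholder-building and conversion steps above can be combined into one compilable sketch (buildPlaceholderQuery is a hypothetical name):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// buildPlaceholderQuery returns the query with $1..$n placeholders plus the
// matching argument slice, ready to hand to Db.QueryRow(query, args...).
func buildPlaceholderQuery(tags []string) (string, []interface{}, error) {
	placeholders := make([]string, 0, len(tags))
	args := make([]interface{}, 0, len(tags))
	for i, s := range tags {
		val, err := strconv.Atoi(s)
		if err != nil {
			return "", nil, fmt.Errorf("tag %q is not an integer: %w", s, err)
		}
		placeholders = append(placeholders, fmt.Sprintf("$%d", i+1))
		args = append(args, val)
	}
	query := fmt.Sprintf("SELECT COUNT(id) FROM tags WHERE id IN (%s)",
		strings.Join(placeholders, ", "))
	return query, args, nil
}

func main() {
	query, args, err := buildPlaceholderQuery([]string{"1", "2", "3"})
	if err != nil {
		panic(err)
	}
	fmt.Println(query) // SELECT COUNT(id) FROM tags WHERE id IN ($1, $2, $3)
	fmt.Println(args)  // [1 2 3]
}
```

Unlike the earlier loop, this one rejects non-numeric input outright instead of silently skipping it, which keeps placeholders and arguments in sync.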
Extending @icza's solution, you can use pq.Array instead of building the parameter yourself.
So using his example, the code can look like this:
tags := []string{"1", "2", "3"}
q := "SELECT COUNT(id) FROM tags WHERE id = ANY($1::int[])"
num := 0
if err := Db.QueryRow(q, pq.Array(tags)).Scan(&num); err != nil {
log.Println(err)
}

Postgres: How to string pattern match query a json column?

I have a column of json type and I'm wondering how to filter on it, i.e.
select * from fooTable where myjson like 'orld';
How would I query for a substring match like the above, say searching for 'orld' under the 'bar' key?
{ "foo": "hello", "bar": "world"}
I took a look at this documentation but it is quite confusing.
https://www.postgresql.org/docs/current/static/datatype-json.html
Use the ->> operator to get json attributes as text. Example:
with my_table(id, my_json) as (
values
(1, '{ "foo": "hello", "bar": "world"}'::json),
(2, '{ "foo": "hello", "bar": "moon"}'::json)
)
select t.*
from my_table t
where my_json->>'bar' like '%orld'
id | my_json
----+-----------------------------------
1 | { "foo": "hello", "bar": "world"}
(1 row)
Note that you need a placeholder % in the pattern.
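If the match may occur anywhere in the value, or should be case-insensitive, the same ->> extraction works with wildcards on both sides and ILIKE — a sketch against the my_table example above:

```sql
select t.*
from my_table t
where my_json->>'bar' ilike '%orld%';
```

ILIKE is a PostgreSQL extension for case-insensitive matching; plain LIKE with '%orld%' matches the substring anywhere but is case-sensitive.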


Count Aggregates in clingo

Test data
addEmployee(EmplID, Name1, Name2, TypeOfWork, Salary, TxnDate)
addEmployee("tjb1998", "eva", "mcdowell", "ra", 55000, 20).
addEmployee("tjb1987x", "ben", "xena", "cdt", 68000, q50).
addEmployee("tjb2112", "ryoko", "hakubi", "ra", 63000, 60).
addEmployee("tjb1987", "ben", "croshaw", "cdt", 68000, 90).
addEmployee("tjb3300m", "amane", "mauna", "ma", 61000, 105).
I want to group employees by type of work and count the employees for each type of work.
e.g:
ra 4
cdt 2
ma 1
Below is the query I am trying to run:
employee(TOW) :- addEmployee(_,_,_,TOW,_,_).
nmbrEmployeesOfSameType (N) :- N = #count { employee(TOW) }.
Please advise; I am a beginner with Clingo.
Try this:
addEmployee("tjb1998", "eva", "mcdowell", "ra", 55000, 20).
addEmployee("tjb1987x", "ben", "xena", "cdt", 68000, q50).
addEmployee("tjb2112", "ryoko", "hakubi", "ra", 63000, 60).
addEmployee("tjb1987", "ben", "croshaw", "cdt", 60000, 90).
addEmployee("tjb3300m", "amane", "mauna", "ma", 61000, 105).
getType(P, X) :- addEmployee(X, _, _, P, _, _).
type(P) :- addEmployee(_, _, _, P, _, _).
result(P, S) :- S = #count{ I : getType(P,I)}, type(P).
#show result/2.
And the output will look like:
clingo version 4.5.3
Reading from test.lp
Solving...
Answer: 1
result("ra",2) result("cdt",2) result("ma",1)
SATISFIABLE
You can also copy my code and run it in an online clingo environment to see that it works.