How to change a SQL keyword in a jrxml query? - jasper-reports

I want to change the ORDER BY direction in a jrxml query, for example:
SELECT * FROM table WHERE Id = 1 ORDER BY Name ASC
How can I make ASC dynamic?
I tried the following, but neither works:
SELECT * FROM table WHERE Id = $P{Id} ORDER BY Name $P{OrderType}
EXEC sp_executeSQL 'SELECT * FROM table WHERE Id = $P{Id} ORDER BY Name $P{OrderType}'
I have to change it in the jrxml, so I can't use code like C# or Java.

Related

I want to get the returned row count when executing select into table_n from table

When I run
select * into mobile_n from mobile where c_name='dic'
I want to get the result of select count(1) from mobile_n
I tried
select count(1)
from (
select * into mobile_n from mobile where c_name='dic'
return *
)
but it did not work
You can't. See https://www.postgresql.org/docs/current/sql-selectinto.html:
"SELECT INTO creates a new table and fills it with data computed by a query. The data is not returned to the client, as it is with a normal SELECT. The new table's columns have the names and data types associated with the output columns of the SELECT."
A workaround:
create table mobile_n as select * from mobile limit 0;
with a as(
insert into mobile_n select * from mobile where c_name = 'dic' returning 1)
select count(*) from a;
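If pure SQL isn't required, most database drivers also report the affected-row count for the INSERT directly. A minimal sketch of the same create-empty-copy-then-insert pattern, using Python's sqlite3 purely for illustration (table names reused from the thread; any driver exposing DB-API rowcount works similarly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Sample schema and data standing in for the thread's "mobile" table.
cur.execute("CREATE TABLE mobile (c_name TEXT, number TEXT)")
cur.executemany("INSERT INTO mobile VALUES (?, ?)",
                [("dic", "111"), ("dic", "222"), ("other", "333")])

# Create an empty copy, then INSERT ... SELECT into it;
# the cursor's rowcount is the number of rows copied.
cur.execute("CREATE TABLE mobile_n AS SELECT * FROM mobile LIMIT 0")
cur.execute("INSERT INTO mobile_n SELECT * FROM mobile WHERE c_name = 'dic'")
print(cur.rowcount)  # number of rows that went into mobile_n
```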
You can try a CTE to capture the count, but note that SELECT INTO cannot be used inside a CTE and RETURNING only applies to INSERT/UPDATE/DELETE, so create the table first and count the inserted rows:
CREATE TABLE mobile_n AS SELECT * FROM mobile LIMIT 0;
WITH result AS (
INSERT INTO mobile_n
SELECT * FROM mobile
WHERE c_name = 'dic'
RETURNING 1
)
SELECT count(*)
FROM result;

Trying to use a CTE calculation to update a column

I am trying to update column3 based on a calculation between column1 and column2. The theory is relatively simple; however, I seem to be struggling with CTEs. If column1 is not null, then column1 * AVG(column2) gets put in column3.
I have searched the forums and tried a few different methods, including CTEs and standard UPDATE queries, but I seem to be making a mistake.
WITH cte_avg1 AS (
SELECT "column1" * AVG("column2") AS avg
FROM table1
)
UPDATE table1
SET "column3" = cte_avg1.avg
FROM cte_avg1
WHERE "column1" IS NOT NULL;
The error message which I am getting is as follows:
ERROR: column must appear in the GROUP BY clause or be used in an aggregate function
LINE 5: SELECT "column1" * AVG("column2"...
In an aggregating query, every column in the SELECT list must either appear in the GROUP BY clause or be an argument to an aggregate function. Move the multiplication out of the CTE:
WITH cte_avg1
AS
(
SELECT avg(column2) avg
FROM table1
)
UPDATE table1
SET column3 = column1 * cte_avg1.avg
FROM cte_avg1
WHERE column1 IS NOT NULL;
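The same fix also works without a CTE, as a plain scalar subquery in the SET clause. A runnable sketch using SQLite via Python's sqlite3, with a schema invented to match the thread's column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (column1 REAL, column2 REAL, column3 REAL)")
cur.executemany("INSERT INTO table1 VALUES (?, ?, NULL)",
                [(5.0, 2.0), (None, 4.0)])

# Aggregate once in a scalar subquery; multiply per row in SET.
# The WHERE clause leaves rows with NULL column1 untouched.
cur.execute("""
    UPDATE table1
    SET column3 = column1 * (SELECT avg(column2) FROM table1)
    WHERE column1 IS NOT NULL
""")
print(cur.execute("SELECT column3 FROM table1 ORDER BY rowid").fetchall())
```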

Sum of a column within the subquery in Postgresql

I have a PostgreSQL table with two fields, ID and Name (column1 and column2 in the SQL Fiddle). The default record_count I put for a particular ID is 1. I want to get the record_count for column1 and sum that record_count by column1.
I tried this query, but it raises an error:
select sum(column_record) group by column_record ,
* from (select column1,1::int4 as column_record from test) a
SQL Fiddle for the same :
http://sqlfiddle.com/#!15/12fe9/1
If you want to use a window function (though you may prefer normal grouping, which is usually faster), this is the way to do it:
-- create temp table test as (select * from (values ('a', 'b'), ('c', 'd')) a(column1, column2));
select sum(column_record) over (partition by column_record),
* from (select column1, 1::int4 as column_record from test) a;
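The plain-grouping version alluded to above can be sketched as a runnable example with Python's sqlite3 (sample data invented; each row contributes record_count = 1 and GROUP BY sums it per column1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (column1 TEXT, column2 TEXT)")
cur.executemany("INSERT INTO test VALUES (?, ?)",
                [("a", "b"), ("a", "c"), ("c", "d")])

# GROUP BY collapses the per-row counts into one sum per column1 value.
rows = cur.execute("""
    SELECT column1, sum(record_count) AS record_count
    FROM (SELECT column1, 1 AS record_count FROM test) a
    GROUP BY column1
    ORDER BY column1
""").fetchall()
print(rows)
```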

Pass String in Postgres Query

I am using PostgreSQL 9.3 and I am trying to pass column names as a string into my query. For newtable the number of columns can be dynamic, sometimes 3 or more, so I am trying to select the column names from another table and pass the result of that query as a string into the existing query.
Please help; how can I do this?
select * from crosstab (
'select "TIMESTAMP_S","VARIABLE","VALUE" from archieve_export_db_a3 group by 1,2,3 order by 1,2',
'select distinct "VARIABLE" From archieve_export_db_variables order by 1'
) AS newtable (TIMESTAMP_S int,_col1 integer,_col2 integer);

PostgreSQL - return most common value for all columns in a table

I've got a table with a lot of columns in it and I want to run a query to find the most common value in each column.
Ordinarily for a single column, I'd run something like:
SELECT country
FROM users
GROUP BY country
ORDER BY count(*) DESC
LIMIT 1
Does PostgreSQL have a built in function for doing this or can anyone suggest a query I could run to achieve this?
Using the same query, for more than one column you should do:
SELECT *
FROM
(
SELECT country
FROM users
GROUP BY 1
ORDER BY count(*) DESC
LIMIT 1
) country
,(
SELECT city
FROM users
GROUP BY 1
ORDER BY count(*) DESC
LIMIT 1
) city
This works for any type and returns all the values in the same row, with each column keeping its original name.
For more columns, just add more subqueries:
,(
SELECT someOtherColumn
FROM users
GROUP BY 1
ORDER BY count(*) DESC
LIMIT 1
) someOtherColumn
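The subquery-per-column pattern above carries over unchanged to other engines; a runnable sketch using Python's sqlite3 with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (country TEXT, city TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [("US", "Paris"), ("US", "Paris"), ("FR", "Lyon")])

# One single-row subquery per column; the implicit cross join
# glues the per-column winners into a single row.
row = cur.execute("""
    SELECT * FROM
    (SELECT country FROM users GROUP BY 1 ORDER BY count(*) DESC LIMIT 1) AS country,
    (SELECT city    FROM users GROUP BY 1 ORDER BY count(*) DESC LIMIT 1) AS city
""").fetchone()
print(row)
```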
Edit:
You could also do this with window functions, but it would be no better in performance or readability.
Starting from PostgreSQL 9.4 there is an aggregate function for this:
mode() WITHIN GROUP (ORDER BY sort_expression)
It returns the most frequent input value (arbitrarily choosing the first one if there are multiple equally-frequent results). For example: SELECT mode() WITHIN GROUP (ORDER BY country) FROM users;
And for earlier versions, you could create one...
CREATE OR REPLACE FUNCTION mode_array(anyarray)
RETURNS anyelement AS
$BODY$
SELECT a FROM unnest($1) a GROUP BY 1 ORDER BY COUNT(1) DESC, 1 LIMIT 1;
$BODY$
LANGUAGE SQL IMMUTABLE;
CREATE AGGREGATE mode(anyelement)(
SFUNC = array_append, --Function to call for each row. Just builds the array
STYPE = anyarray,
FINALFUNC = mode_array, --Function to call after everything has been added to array
INITCOND = '{}'--Initialize an empty array when starting
) ;
Usage: SELECT mode(column) FROM table;
If I were doing this, I'd write a query like this one:
(SELECT 'country', country
FROM users
GROUP BY country
ORDER BY count(*) DESC
LIMIT 1)
UNION ALL
(SELECT 'city', city
FROM users
GROUP BY city
ORDER BY count(*) DESC
LIMIT 1)
-- etc.
It should be noted this only works if all the columns are of compatible types. If they are not, you'll probably need a different solution.
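The same label/value layout can also be written with each per-column pick in a scalar subquery, which some engines require instead of parenthesized UNION branches. A runnable sketch with Python's sqlite3 and invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (country TEXT, city TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [("US", "Paris"), ("US", "Paris"), ("FR", "Lyon")])

# One labelled row per column; each scalar subquery picks that
# column's most frequent value.
rows = cur.execute("""
    SELECT 'country' AS col,
           (SELECT country FROM users
            GROUP BY country ORDER BY count(*) DESC LIMIT 1) AS most_common
    UNION ALL
    SELECT 'city',
           (SELECT city FROM users
            GROUP BY city ORDER BY count(*) DESC LIMIT 1)
""").fetchall()
print(rows)
```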
This window function version reads the users table and the computed table once each, whereas the correlated subquery version reads the users table once for each column. If the columns are many, as in the OP's case, my guess is that this is faster.
select distinct on (country_count, age_count) *
from (
select
country,
count(*) over(partition by country) as country_count,
age,
count(*) over(partition by age) as age_count
from users
) s
order by country_count desc, age_count desc
limit 1