I can't figure out why this query won't run; it's saying I don't have a Boolean expression where one should be.
The error I'm getting says:
An error occurred while executing the query.
An expression of non-boolean type specified in a context where a condition is expected, near 'AND'. (Microsoft SQL Server Report Builder)
Server Name: xxxxxxxxxx
Error Number: 4145
Severity: 15
State: 1
Line Number: 9
SELECT
REF_TPI_NPI_PHYS.MNE,
REF_TPI_NPI_PHYS.REFERRING,
RPT_MK_PRACT_PHYS_PT_CNT.COUNT_PTS AS TOTAL_PTS,
Sum(IIf(HBA1C_DATE IS NOT NULL, 1, 0)) AS COUNT_HBA1C,
Format((Sum(IIf(HBA1C_DATE IS NOT NULL, 1, 0)) / ([RPT_MK_PRACT_PHYS_PT_CNT].COUNT_PTS)), 'Percent') AS PANELED_HBA1C,
Format(Sum(IIf(HBA1C_VALUE IS NOT NULL AND IsNumeric(CAST(HBA1C_VALUE AS FLOAT)) AND CAST(HBA1C_VALUE AS FLOAT) <= 100, 1, 0)) / Sum(IIf(HBA1C_DATE IS NOT NULL, 1, 0)), 'Percent') AS HBA1C_COMP,
Sum(IIf(LDL_DATE IS NOT NULL, 1, 0)) AS COUNT_LDL,
Format((Sum(IIf(LDL_DATE IS NOT NULL, 1, 0)) / ([RPT_MK_PRACT_PHYS_PT_CNT].COUNT_PTS)), 'Percent') AS PANELED_LDL,
Format(Sum(IIf(LDL_VALUE IS NOT NULL AND IsNumeric(CAST(LDL_VALUE AS FLOAT)) AND CAST(LDL_VALUE AS FLOAT) <= 100, 1, 0)) / Sum(IIf(LDL_DATE IS NOT NULL, 1, 0)), 'Percent') AS LDL_COMP
FROM
(
(
TBL_HBA1C_LDL LEFT JOIN TBL_REGISTRATION
ON CAST(RTRIM(LTRIM(TBL_HBA1C_LDL.PAT_MRN)) AS INTEGER) = TBL_REGISTRATION.MRN
)
LEFT JOIN
REF_TPI_NPI_PHYS
ON TBL_REGISTRATION.PCP = REF_TPI_NPI_PHYS.REFERRING
)
LEFT JOIN
RPT_MK_PRACT_PHYS_PT_CNT
ON TBL_REGISTRATION.PCP = RPT_MK_PRACT_PHYS_PT_CNT.PCP
GROUP BY
REF_TPI_NPI_PHYS.MNE,
REF_TPI_NPI_PHYS.REFERRING,
RPT_MK_PRACT_PHYS_PT_CNT.COUNT_PTS
ORDER BY
REF_TPI_NPI_PHYS.MNE,
REF_TPI_NPI_PHYS.REFERRING
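A likely cause of error 4145 here: IsNumeric() returns an int, not a Boolean, so it cannot stand on its own as an operand of AND inside IIF(); it needs an explicit comparison such as IsNumeric(...) = 1, or (on SQL Server 2012+) a TRY_CAST check. A minimal sketch of just the HBA1C expression rewritten that way (the alias COUNT_HBA1C_COMP is illustrative, not taken from the original query):
-- Sketch: make the condition explicitly Boolean; TRY_CAST returns NULL for non-numeric values.
SUM(IIF(HBA1C_VALUE IS NOT NULL
        AND TRY_CAST(HBA1C_VALUE AS FLOAT) IS NOT NULL
        AND TRY_CAST(HBA1C_VALUE AS FLOAT) <= 100, 1, 0)) AS COUNT_HBA1C_COMP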
I am trying to convert this MySQL query to MongoDB format. I can't find an answer on how to do it; please suggest a solution.
I've tried several ways, but nothing works. Please help.
I know I have to use an aggregation query, but it's difficult because I have a lot of subqueries. How should I configure the stages?
SELECT @rowcnt := COUNT(*) AS cnt
, SUM(keywordcount) AS total_k_count
, SUM(exposecount) AS total_exp, SUM(clickcount) AS total_cc
, IF( SUM(exposecount) = 0, 0, ROUND((SUM(clickcount) / SUM(exposecount))* 100, 2) ) AS total_cr
, IF( SUM(clickcount) = 0, 0, ROUND(SUM(cost) / SUM(clickcount)) ) AS total_ccost
, ROUND(SUM(cost)) AS total_cost
, SUM(COUNT) AS total_count, SUM(directcount) AS total_d_count, SUM(indirectcount) AS total_ind_count
, IF( SUM(clickcount) = 0, 0, ROUND( ( SUM(COUNT) / SUM(clickcount) ) * 100, 2) ) AS total_con_r
, IF( SUM(COUNT) = 0, 0, ROUND((SUM(cost) / SUM(COUNT))) ) AS total_con_expense
, SUM(conversioncost) AS total_con_cost, SUM(conversiondirectcost) AS total_con_d_cost, SUM(conversionindirectcost) AS total_con_ind_cost
, IF( SUM(cost) = 0, 0, ROUND( ( SUM(conversioncost) / SUM(cost) * 100 ) ) ) AS total_roas
FROM (
SELECT a.s_date
, a.master_id, REPLACE( c.media_name, '\'', '\`' ) AS media_name, a.media_code
, b.keywordcount, SUM(a.exposecount) AS exposecount, SUM(a.clickcount) AS clickcount, SUM(a.cost) AS cost
, IF( SUM(a.exposecount) = 0, 0, ROUND((SUM(a.clickcount) / SUM(a.exposecount))* 100, 2) ) AS clickrate
, IF( SUM(a.clickcount)= 0, 0, ROUND( (SUM(a.cost) / SUM(a.clickcount)) , 2) ) AS clickcost
, b.count, b.directcount, b.indirectcount
, IF( SUM(a.clickcount) = 0, 0, ROUND((b.count / SUM(a.clickcount))* 100, 2) ) AS conversionrate
, IF( b.count = 0, 0, ROUND((SUM(a.cost) / b.count)) ) AS conversionexpense
, b.conversioncost, b.conversiondirectcost, b.conversionindirectcost
, IF( SUM(a.cost) = 0, 0, ROUND(( b.conversioncost / SUM(a.cost)) * 100) ) AS roas
FROM TMP_Report AS a
LEFT OUTER JOIN (
SELECT nrc.media_code, COUNT(*) AS keywordcount, nrc.master_id, SUM(nrc.count) AS COUNT, SUM(nrc.directcount) AS directcount, SUM(nrc.indirectcount) AS indirectcount
, SUM(nrc.conversioncost) AS conversioncost, SUM(nrc.conversiondirectcost) AS conversiondirectcost, SUM(nrc.conversionindirectcost) AS conversionindirectcost
FROM (
SELECT nrc.media_code, nrc.s_date, nrc.master_id, nrc.keyword_id, SUM(nrc.count) AS COUNT, SUM(nrc.directcount) AS directcount, SUM(nrc.indirectcount) AS indirectcount
, SUM(nrc.cost) AS conversioncost, SUM(nrc.directcost) AS conversiondirectcost, SUM(nrc.indirectcost) AS conversionindirectcost
FROM TMP_ReportConv AS nrc
GROUP BY nrc.master_id, nrc.keyword_id, media_code, s_date
) AS nrc
INNER JOIN autoanswer.calendar AS c2 ON nrc.s_date = c2.cal_date
GROUP BY nrc.master_id, nrc.media_code
) AS b ON a.master_id = b.master_id AND a.media_code = b.media_code
LEFT OUTER JOIN naver.mst_media c ON a.media_code = c.media_id
WHERE a.s_date BETWEEN $get_s_date AND $get_e_date
GROUP BY a.master_id, a.media_code
) AS m;
I am not going to do your job, but this could be a starting point:
db.TMP_Report.aggregate([
{
$group: {
_id: {master_id: "$master_id", media_code: "$media_code"},
count: { $count: {} },
clickcount: { $sum: "$clickcount" },
exposecount: { $sum: "$exposecount" }
}
},
{
$set: {
total_cr: { $round: [{ $multiply: [{ $divide: ["$clickcount", {$cond: ["$exposecount", "$exposecount", null]}] }, 100] }, 2] }
}
}
])
I have a lines (multilinestring) table in my PostGIS database (Postgres 11), which I have converted to linestrings, and I have also checked the validity (ST_IsValid()) of the new linestring geometries.
create table my_line_tbl as
select
gid gid_multi,
adm_code, t_count,
st_length((st_dump(st_linemerge(geom))).geom)::int len,
(st_dump(st_linemerge(geom))).geom geom
from
my_multiline_tbl
order by gid;
alter table my_line_tbl add column id serial primary key not null;
The first 10 rows look like this:
id, gid_multi, adm_code, t_count, len, geom
1, 1, 30, 5242, 407, LINESTRING(...)
2, 1, 30, 3421, 561, LINESTRING(...)
3, 2, 50, 5248, 3, LINESTRING(...)
4, 2, 50, 1458, 3, LINESTRING(...)
5, 2, 60, 2541, 28, LINESTRING(...)
6, 2, 30, 3325, 4, LINESTRING(...)
7, 2, 20, 1142, 5, LINESTRING(...)
8, 2, 30, 1425, 7, LINESTRING(...)
9, 3, 30, 2254, 4, LINESTRING(...)
10, 3, 50, 2254, 50, LINESTRING(...)
I am trying to develop the logic.
Find all <= 10 m segments and merge those to neighboring (previous or next) geometries > 10 m.
If there are many <= 10 m segments next to each other, merge them to make > 10 m segments (min length: > 10 m).
In case of intersections, merge any <= 10 m segments to the longest neighboring geometry.
I thought of using SQL window functions to check the length (st_length()) of succeeding geometries (lead(id) over()) and then merging them, but the problem with this approach is that geometries with successive IDs are not necessarily next to each other (they do not intersect, st_intersects()).
My attempt (dynamic SQL) is below, where I try to separate the <= 10 m and > 10 m geometries.
with lt10mseg as (
select
id, gid_multi,
len, geom lt10m_geom
from
my_line_tbl
where len <= 10
order by id
), gt10mseg as (
select
id, gid_multi,
len, geom gt10m_geom
from
my_line_tbl
where len > 10
order by id
)
select
st_intersects(lt10m_geom,gt10m_geom)
from
lt10mseg, gt10mseg
order by lt10mseg.id
Any help/suggestions (dynamic SQL/PLPGSQL) to continue developing the above logic? The ultimate goal is to get rid of the <= 10 m segments by merging them into their neighbors.
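One possible starting point (only a sketch, and only a single pass; chains of several consecutive <= 10 m segments would still need repeated passes or a PL/pgSQL loop): for each short segment, pick the longest intersecting > 10 m neighbour with a LATERAL join and union the two geometries.
-- sketch: attach each <= 10 m segment to its longest intersecting > 10 m neighbour
select
    s.id as short_id,
    l.id as long_id,
    st_linemerge(st_union(s.geom, l.geom)) as merged_geom
from my_line_tbl s
join lateral (
    select n.id, n.geom
    from my_line_tbl n
    where n.len > 10
      and st_intersects(n.geom, s.geom)
    order by n.len desc
    limit 1
) l on true
where s.len <= 10;
Repeating the pass (or wrapping it in a PL/pgSQL loop that updates the long geometry and deletes the short one) would be needed to handle chains of short segments.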
I keep getting an unexpected SELECT error as well as an unexpected ON error in rows 61 and 64 of my SnowSQL statement.
I'm not sure why; if anyone can help, that would be great. I've added the relevant portion of my SnowSQL statement below.
I'm trying to use a SELECT statement within a WHERE clause; is there a way to do this? (See the sketch after the query below.)
AS select
t1.sunday_date,
t1.sunday_year_month,
t1.sunday_month,
t1.dc,
t1.source_sku,
t1.Product_Family,
t1.Product_type,
t1.Product_Subtype,
t1.Material,
t1.Color,
t1.Size,
t1.EOL_Date,
t1.NPI_Date,
t1.period_start,
t1.period_month,
IIF( t4.period_start < t1.sunday_date, iif(ISNULL(ta.actual_quantity), 0, ta.actual_quantity),
IIF(ISNULL(tfc.SOPFCSTOVERRIDE ), iif(ISNULL(tf.Period_Start), 0, tf.dc_forecast) , tfc.SOPFCSTOVERRIDE
)) AS forecast_updated,
iif(ISNULL(tf.Period_Start),t4.period_start,tf.Period_Start) AS period_start_forecast,
iif(ISNULL(ti.VALUATED_UNRESTRICTED_USE_STOCK), 0, ti.VALUATED_UNRESTRICTED_USE_STOCK) AS inventory_quantity,
iif(ISNULL(ti.HCI_DS_KEYFIGURE_QUANTITY), 0, ti.HCI_DS_KEYFIGURE_QUANTITY) AS in_transit_quantity,
iif(ISNULL(ti.planned_quantity), 0, ti.planned_quantity) AS inbound_quantity,
iif(ISNULL(tbac.backlog_ecomm ), 0, tbac.backlog_ecomm) + iif(ISNULL(tbac_sap.backlog_sap_open), 0, tbac_sap.backlog_sap_open) AS backlog_quantity,
iif(ISNULL(ta.actual_quantity), 0, ta.actual_quantity) AS actual_quantity,
iif(ISNULL(tso.open_orders), 0, tso.open_orders) AS open_orders,
iif(ISNULL(tf.Period_Start), 0, tf.dc_forecast) AS forecast,
tfc.SOPFCSTOVERRIDE AS forecast_consumption,
iif(ISNULL(tpc.SHIP_DATE), 0, tpc.SHIP_DATE) AS production_current_week,
iif(ISNULL(tpc.SHIP_DATE), 0, tpc.SHIP_DATE) AS production_next_week,
NOW() AS updated_timestamp
FROM ( ( ( ( ( ( ( ( (
SELECT
e.sunday_date,
e.sunday_month,
e.sunday_year_month,
d.dc,
c.SOURCE_SKU,
c.Product_Family,
c.Product_Type,
c.Product_Subtype,
c.Material,
c.Color,
c.Size,
c.EOL_Date,
c.NPI_Date,
b.period_start,
b.period_month
FROM
(SELECT sunday_date, sunday_month, sunday_year_month FROM bas_report_date) AS e,
(SELECT distinct Week_Date AS period_start, DateSerial('445_Year','445_Month',1) AS period_month from inv_bas_445_Month_Alignment) AS b,
(SELECT source_sku AS source_sku, Product_Family, Product_Type, Product_Subtype, Material, Color, Size, EOL_Date, NPI_Date from inv_vw_product_dev ) AS c,
(SELECT dc AS dc FROM inv_bas_dc_site_lookup) AS d
WHERE b.period_start >=
( select
MIN(mt.Reference_Date )
FROM BAS_report_date tr
INNER JOIN inv_bas_445_Month_Alignment mt ON tr.sunday_month = DateSerial(mt.'445_Year', mt.'445_Month', 1)
)
AND b.period_start <= DateAdd("ww", 26,e.sunday_date)
) t1
LEFT JOIN
(
SELECT
MATERIAL_NUMBER,
CINT(LOCATION_NUMBER) AS Int_Location_ID,
HCI_DS_KEYFIGURE_DATE,
HCI_DS_KEYFIGURE_QUANTITY,
PLANNED_QUANTITY,
VALUATED_UNRESTRICTED_USE_STOCK
FROM inv_vw_ibp_transit_inventorry_dev
) ti
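As an aside on the question of putting a SELECT inside a WHERE clause: Snowflake does accept a scalar subquery there, so the pattern itself is fine. A minimal sketch with hypothetical table and column names:
-- Hypothetical orders/calendar tables, only to show a scalar subquery in a WHERE clause.
SELECT o.order_id
FROM orders o
WHERE o.order_date >= (SELECT MIN(c.cal_date) FROM calendar c);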
You can replace the DateSerial() function (which comes from VBA / MS Access / Excel, i.e. the Microsoft universe) with DATE_FROM_PARTS().
DATE_FROM_PARTS() also supports the non-obvious functionality of DateSerial():
DateSerial(2020, 1, 1 - 1) gets you New Year's Eve - the day before New Year's Day
DATE_FROM_PARTS(2020, 1 - 1, 1 - 1) is the month before the day before New Year's Day
DATE_FROM_PARTS(y, m + 1, 0) is End Of Month (EOM).
etc., etc.
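For example (a small illustrative query; the literal dates are only examples):
-- DateSerial()-style arithmetic expressed with DATE_FROM_PARTS().
SELECT
    DATE_FROM_PARTS(2020, 1, 1 - 1)     AS new_years_eve,     -- 2019-12-31
    DATE_FROM_PARTS(2020, 1 - 1, 1 - 1) AS month_before_that, -- 2019-11-30
    DATE_FROM_PARTS(2020, 2 + 1, 0)     AS end_of_february;   -- 2020-02-29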
I am using PostgreSQL 8.2 and I am also new to PostgreSQL.
I have to add one condition to the WHERE clause depending upon a specific value (49) of the field (activity.type). Here is my query:
SELECT activity.*
FROM activity
LEFT JOIN event_types ON activity.customstatusid = event_types.id, getviewableemployees(3222, NULL) AS report
WHERE
(
CASE WHEN activity.type = 49 THEN
'activity.individualid IN(SELECT individualid from prospects where prospects.individualid = activity.individualid)'
ELSE 1
END
)
AND activity.date BETWEEN '2016-10-01' AND '2016-10-06'
AND activity.type IN (21, 22, 49, 50, 37, 199)
AND (event_types.status = 1 or event_types.status IS NULL);
When I run the above query from the PostgreSQL command line, I get the error below:
ERROR: invalid input syntax for integer: "activity.individualid IN(SELECT individualid from prospects where prospects.individualid = activity.individualid)"
What am I missing here?
Implement your WHERE clause as:
WHERE (
activity.type != 49 OR
activity.individualid IN (
SELECT individualid from prospects
WHERE prospects.individualid = activity.individualid)
)
AND activity.date BETWEEN '2016-10-01' AND '2016-10-06'
AND activity.type IN (21, 22, 49, 50, 37, 199)
AND (event_types.status = 1 or event_types.status IS NULL);
The first clause will only be true when either:
activity.type != 49; or
activity.type = 49 and activity.individualid is found in the subquery.
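If you want to keep the CASE structure, an equivalent formulation (a sketch) has the CASE return a Boolean rather than a quoted string; the quoted string is what PostgreSQL tried, and failed, to read as an integer:
-- Same filter, keeping the CASE but returning a Boolean; the remaining AND conditions stay unchanged.
WHERE CASE WHEN activity.type = 49
           THEN EXISTS (SELECT 1 FROM prospects
                        WHERE prospects.individualid = activity.individualid)
           ELSE TRUE
      END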
Could anyone explain what the exclude_nodata_value argument to ST_DumpValues does?
For example, given the following:
WITH
-- Create a raster 4x4 raster, with each value set to 8 and NODATA set to -99.
tbl_1 AS (
SELECT
ST_AddBand(
ST_MakeEmptyRaster(4, 4, 0, 0, 1, -1, 0, 0, 4326),
1, '32BF', 8, -99
) AS rast
),
-- Set the values in rows 1 and 2 to -99.
tbl_2 AS (
SELECT
ST_SetValues(
rast, 1, 1, 1, 4, 2, -99, FALSE
) AS rast FROM tbl_1)
Why does the following select statement return NULLs in the first two rows:
SELECT ST_DumpValues(rast, 1, TRUE) AS cell_values FROM tbl_2;
Like this:
{{NULL,NULL,NULL,NULL},{NULL,NULL,NULL,NULL},{8,8,8,8},{8,8,8,8}}
But the following select statement returns -99s?
SELECT ST_DumpValues(rast, 1, FALSE) AS cell_values FROM tbl_2;
Like this:
{{-99,-99,-99,-99},{-99,-99,-99,-99},{8,8,8,8},{8,8,8,8}}
Clearly, with both statements the first two rows really contain -99s. However, in the first case (exclude_nodata_value=TRUE) these values have been masked (but not replaced) by NULLs.
Thanks for any help. The subtle differences between NULL and NODATA within PostGIS have been driving me crazy for several days.
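For what it's worth, one way to confirm that the cells still hold -99 and are only being masked is to read a single cell with and without the NODATA mask (this assumes the same WITH chain as above, i.e. tbl_2 is available):
-- Read cell (1,1) of band 1 with and without the NODATA mask, plus the band's NODATA value itself.
SELECT
    ST_BandNoDataValue(rast, 1)    AS nodata_value,  -- -99
    ST_Value(rast, 1, 1, 1, TRUE)  AS masked_value,  -- NULL
    ST_Value(rast, 1, 1, 1, FALSE) AS raw_value      -- -99
FROM tbl_2;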