Incorrect value from count in postgres on enum rows - postgresql

I have the following query written in sqlalchemy
_stats = db.session.query(
    func.count(a.status == s.STATUS_IN_PROGRESS).label("in_progress_"),
    func.count(a.status == s.STATUS_COMPLETED).label("completed_"),
) \
    .filter(a.uuid == some_uuid) \
    .first()
This returns (7, 7), which is incorrect; it should return (7, 0), i.e. in_progress_ = 7, completed_ = 0.
When I do this in two queries I get the correct values:
_stats_in_progress = db.session.query(
    func.count(a.status == s.STATUS_IN_PROGRESS).label("in_progress_"),
) \
    .filter(a.uuid == some_uuid) \
    .first()
_stats_in_complete = db.session.query(
    func.count(a.status == s.STATUS_COMPLETED).label("completed_"),
) \
    .filter(a.uuid == some_uuid) \
    .first()
The corresponding SQL also does not work when using the two counts
SELECT count(a.status = 'IN_PROGRESS') AS in_progress_,
count(a.status = 'STATUS_COMPLETED') AS completed_
FROM a
WHERE a.uuid = '9a353554a6874ebcbf0fe88eb8223d33'
This returns (7, 7) too, while if I run the query with just one count I get the correct values.
Does anyone know what I'm doing wrong?
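In PostgreSQL, count(expression) counts every row where the expression is not NULL, and a boolean such as a.status = 'IN_PROGRESS' is non-NULL whenever status is set, so both columns end up counting all 7 rows. A minimal sketch of one way around this, assuming PostgreSQL 9.4+ and the same table and enum labels as in the SQL above (an aggregate FILTER clause only counts matching rows; a SUM over a CASE works the same way on older versions):
SELECT count(*) FILTER (WHERE a.status = 'IN_PROGRESS')      AS in_progress_,
       count(*) FILTER (WHERE a.status = 'STATUS_COMPLETED') AS completed_
FROM a
WHERE a.uuid = '9a353554a6874ebcbf0fe88eb8223d33';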

Related

How do I split() the result of a split()?

My PySpark data frame has a column with a value of the form 0000-00-00-00-00-00-000_000.xxxx, where 0 is a digit and x is a letter. The value represents an observation timestamp with some other values mixed in.
In my notebook, I have a cell that attempts to split the column containing the timestamp. For the most part, it works. I get most of the work done with the following:
splitDF = ( df
    .withColumn("fn_year", split(df["fn"], "-").getItem(0))
    .withColumn("fn_month", split(df["fn"], "-").getItem(1))
    .withColumn("fn_day", split(df["fn"], "-").getItem(2))
    .withColumn("fn_hour", split(df["fn"], "-").getItem(3))
    .withColumn("fn_min", split(df["fn"], "-").getItem(4))
    .withColumn("fn_sec", split(df["fn"], "-").getItem(5))
    .withColumn("fn_milli", split(df["fn"], "-").getItem(6))
)
I need to extract two values from the string; the 000 preceding the underscore and the 000 following the underscore. I would normally (my usual language / environment is C# / .NET 7, web API stuff) just split the string multiple times using the two delimiters ('_' and '.') and grab the necessary components. I can't get that to work in this case. When I try to pass the split into another split I get ["", "", "", "", "", "", "", "", ""] for the result (.getItem(x) omitted).
Here's an example of what I thought might work to split on the underscore and then the period:
splitDF = df.withColumn("fn_qc", split(split(df["fn"], "_").getItem(1), ".").getItem(0))
Basically, we split the string on the dash; that returns an array which is reused across the columns. In the last statement, we split again on the underscore. For the last value you could use a substring, split again on the period, or just replace xxxx if it is a static value...
Hope this helps.
from pyspark.sql.functions import split, col, substring
date_list = [["2023-01-02-03-04-05-666_777.xxxx"], ["2023-12-11-10-09-08-444_333.xxxx"]]
cols = ["fn"]
df = spark.createDataFrame(date_list, cols)
splitDF = df.withColumn("split_on_dash", split(col("fn"), "-")) \
    .withColumn("fn_year", col("split_on_dash")[0]) \
    .withColumn("fn_month", col("split_on_dash")[1]) \
    .withColumn("fn_day", col("split_on_dash")[2]) \
    .withColumn("fn_hour", col("split_on_dash")[3]) \
    .withColumn("fn_min", col("split_on_dash")[4]) \
    .withColumn("fn_sec", col("split_on_dash")[5]) \
    .withColumn("fn_milli", split(col("split_on_dash")[6], "_")[0]) \
    .withColumn("fn_after_underscore", substring(split(col("split_on_dash")[6], "_")[1], 0, 3))
display(splitDF)
You can select only the required columns later or drop the unnecessary ones...
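The nested split from the question can also be made to work: the pattern argument to split() is a regular expression, so a literal period has to be escaped. A minimal sketch, assuming the same df and fn column as above:
from pyspark.sql.functions import split, col

# split() takes a regex pattern, so escape the period to split on a literal "."
splitDF = df.withColumn(
    "fn_qc",
    split(split(col("fn"), "_").getItem(1), "\\.").getItem(0)
)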

Rebuild sphinx index fail

We have 4 Sphinx indexes built using data from one table. All indexes have the same source settings, except that they take different documents: we use a check like mod(id, 4) = <index number> to distribute documents and document attributes between the indexes.
Question: one of the four indexes (always the same one) fails to rebuild almost every time we rebuild the indexes. The other indexes never have this issue and are rebuilt correctly.
We have partitioned the documents and attributes tables. For example, this is how the documents table is partitioned:
PARTITION BY HASH(mod(id, 4))(
PARTITION `p0` COMMENT '',
PARTITION `p1` COMMENT '',
PARTITION `p2` COMMENT '',
PARTITION `p3` COMMENT ''
);
We think the indexer hangs after it has received all documents but before it starts receiving attributes. We can see this when we check the sessions on the MySQL server.
The index which fails to rebuild is using mod(id, 4) = 0 condition.
We use Sphinx 2.0.4-release on Ubuntu 64bit 12.04.02 LTS.
Data source config
source ble_job_2 : ble_job
{
sql_query = select job_notice.id as id, \
body, title, source, company, \
UNIX_TIMESTAMP(insertDate) as date, \
substring(company, 1, 1) as companyletter, \
job_notice.locationCountry as country, \
location_us_state.stateName as state, \
0 as expired, \
clusterId, \
groupCity, \
groupCityAttr, \
job_notice.cityLat as citylat, \
job_notice.cityLng as citylng, \
job_notice.zipLat as ziplat, \
job_notice.zipLng as ziplng, \
feedId, job_notice.rating as rating, \
job_notice.cityId as cityid \
from job_notice \
left join location_us_state on job_notice.locationState = location_us_state.stateCode \
where job_notice.status != 'expired' \
and mod(job_notice.id, 4) = 1
sql_attr_multi = uint attr from query; \
select noticeId, attributeId as attr from job_notice_attribute where mod(noticeId, 4) = 1
} # source ble_job_2
Index config
index ble_job_2
{
type = plain
source = ble_job_2
path = /var/lib/sphinxsearch/data/ble_job_2
docinfo = extern
mlock = 0
morphology = none
stopwords = /etc/sphinxsearch/stopwords/blockwords.txt
min_word_len = 1
charset_type = utf-8
enable_star = 0
html_strip = 0
} # index_ble_job_2
Any help would be greatly appreciated.
Warm regards.
Luckily we have fixed the issue.
We applied the ranged query setup and this made the index rebuild stable. I think this is because Sphinx runs several queries, each returning a relatively small set of results. This allows MySQL to complete each query normally and send all the results back to Sphinx.
The same issue is described on the Sphinx forum: Indexer Hangs & MySQL Query Sleeps.
The changes in the data source config are:
sql_query_range = SELECT MIN(id),MAX(id) FROM job_notice where mod(job_notice.id, 4) = 1
sql_range_step = 200000
sql_query = select job_notice.id as id, \
...
and mod(job_notice.id, 4) = 1 and job_notice.id >= $start AND job_notice.id <= $end
Please note that no ranges should be applied to the sql_attr_multi query - see Bad query in Sphinx MVA.

Sphinx weird behavior

I have weird trouble creating an index on Sphinx 2.0.5-id64-release (r3308).
/etc/sphinx/sphinx.conf
source keywords
{
// ..
sql_query = \
SELECT keywords.lid, keywords.keyword FROM keywords_sites \
LEFT JOIN keywords ON keywords_sites.kid = keywords.kid \
GROUP BY keywords_sites.kid \
sql_attr_uint = lid
sql_field_string = keyword
// ...
}
I get the warning:
WARNING: attribute 'lid' not found - IGNORING
But when I change the query to:
sql_query = \
SELECT 1, keywords.lid, keywords.keyword FROM keywords_sites \
LEFT JOIN keywords ON keywords_sites.kid = keywords.kid \
GROUP BY keywords_sites.kid \
I don't get any warnings. Why does this happen?
The first column from the sql_query is ALWAYS used as the document_id.
The document_id cannot be defined as an attribute.
If you want to store the primary key in an attribute as well, then you need to include it twice in the query.
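A minimal sketch of what that could look like for the query above (column names taken from the question; kid is assumed to be the intended document id, and the kid_attr alias is just an illustrative name):
sql_query = \
    SELECT keywords_sites.kid, keywords_sites.kid AS kid_attr, \
        keywords.lid, keywords.keyword FROM keywords_sites \
    LEFT JOIN keywords ON keywords_sites.kid = keywords.kid \
    GROUP BY keywords_sites.kid
sql_attr_uint = kid_attr
sql_attr_uint = lid
sql_field_string = keyword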

Sphinx + Postgres + uuid issues

I have a sql_query for a source defined like so:
sql_query = SELECT \
criteria.item_uuid, \
criteria.user_id, \
criteria.color, \
criteria.selection, \
criteria.item_id, \
home.state, \
item.* \
FROM criteria \
INNER JOIN item USING (item_uuid) \
INNER JOIN user_info home USING (user_id) \
WHERE criteria.item_uuid IS NOT NULL
And then an index:
index csearch {
source = csearch
path = /usr/local/sphinx/var/data/csearch
docinfo = extern
enable_star = 1
min_prefix_len = 0
min_infix_len = 0
morphology = stem_en
}
But when I run indexer --rotate csearch I get:
indexing index 'csearch'...
WARNING: zero/NULL document_id, skipping
The idea is that the item_uuid column is the identifier I want, based on some combination of the other columns. The item_uuid column is a uuid type in Postgres: perhaps Sphinx does not support this? Anyway, any ideas here would be greatly appreciated.
Read the docs: the document_id must be a unique, unsigned, non-zero integer.
http://www.sphx.org/docs/manual-1.10.html#data-restrictions
You could try using SELECT row_number(), uuid, etc...
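A minimal sketch of that suggestion, assuming PostgreSQL 8.4+ (row_number() needs an OVER clause) and the columns from the query above, with item.* omitted for brevity; the uuid is cast to text so Sphinx can still index it:
sql_query = SELECT \
    row_number() OVER (ORDER BY criteria.item_uuid) AS id, \
    criteria.item_uuid::text AS item_uuid, \
    criteria.user_id, \
    criteria.color, \
    criteria.selection, \
    criteria.item_id, \
    home.state \
    FROM criteria \
    INNER JOIN item USING (item_uuid) \
    INNER JOIN user_info home USING (user_id) \
    WHERE criteria.item_uuid IS NOT NULL
# keeps the uuid retrievable as a string attribute (Sphinx 1.10+)
sql_field_string = item_uuid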

How to solve code duplication in the following PostgreSQL query?

I have a table Inputs and a derived table Parameters
CREATE TABLE Configurables
(
id SERIAL PRIMARY KEY
);
CREATE TABLE Inputs
(
configurable integer REFERENCES Configurables( id ),
name text,
time timestamp,
PRIMARY KEY( configurable, name, time )
);
CREATE TABLE Parameters
(
configurable integer,
name text,
time timestamp,
value text,
FOREIGN KEY( configurable, name, time ) REFERENCES Inputs( configurable, name, time )
);
The following query checks whether a parameter has been changed, or is not present yet, and inserts the parameter with a new value.
QString PostgreSQLQueryEngine::saveParameter( int configurable, const QString& name, const QString& value )
{
return QString( "\
INSERT INTO Inputs( configurable, name, time ) \
WITH MyParameter AS \
( \
SELECT configurable, name, time, value \
FROM \
( \
SELECT configurable, name, time, value \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') AND time = \
( \
SELECT max( time ) \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') \
) \
UNION \
SELECT %1 AS configurable, '%2' AS name, '-infinity' AS time, NULL AS value \
)AS foo \
) \
SELECT %1 AS configurable, '%2' AS name, 'now' AS time FROM MyParameter \
WHERE time = (SELECT max(time) FROM MyParameter) AND (value <> '%3' OR value IS NULL); \
\
INSERT INTO Parameters( configurable, name, time, value ) \
WITH MyParameter AS \
( \
SELECT configurable, name, time, value \
FROM \
( \
SELECT configurable, name, time, value \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') AND time = \
( \
SELECT max( time ) \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') \
) \
UNION \
SELECT %1 AS configurable, '%2' AS name, '-infinity' AS time, NULL AS value \
)AS foo \
) \
SELECT %1 AS configurable, '%2' AS name, 'now' AS time, '%3' AS value FROM MyParameter \
WHERE time = (SELECT max(time) FROM MyParameter) AND (value <> '%3' OR value IS NULL); \
" ).arg( configurable ).arg( name ).arg( value );
}
How should I best solve the duplication of the two MyParameter subqueries?
Any other tips on cleaning up a query like this?
You should avoid de-normalized tables. Use a view for an easy overview of the Parameters table instead - it would be much, much easier.
You should only use a de-normalized summary table if your view isn't fast enough. But any de-normalized table should be maintained using triggers, as otherwise you risk these tables going out of sync.
For this you can create a trigger on Parameters that upserts into Inputs on insert. If you ever delete or update these columns on Parameters, maintaining Inputs gets complicated: you'd have to delete rows when there is no corresponding row left in Parameters, which means maintaining counts in Inputs just to know when that happens. Concurrent insert/update/delete performance will suffer, as any change in Parameters has to lock a row in Inputs. This is all ugly and bad - a view is a much better solution.
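A minimal sketch of that approach, using the schema from the question; the function, trigger, and view names are just illustrative:
-- Keep Inputs in sync on every insert into Parameters
CREATE OR REPLACE FUNCTION parameters_sync_inputs() RETURNS trigger AS $$
BEGIN
    INSERT INTO Inputs( configurable, name, time )
    SELECT NEW.configurable, NEW.name, NEW.time
    WHERE NOT EXISTS (
        SELECT 1 FROM Inputs
        WHERE configurable = NEW.configurable
          AND name = NEW.name
          AND time = NEW.time
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parameters_sync_inputs_trg
    BEFORE INSERT ON Parameters
    FOR EACH ROW EXECUTE PROCEDURE parameters_sync_inputs();

-- Or, instead of a derived table, a view over Parameters with the latest value per parameter
CREATE VIEW LatestParameters AS
    SELECT DISTINCT ON ( configurable, name ) configurable, name, time, value
    FROM Parameters
    ORDER BY configurable, name, time DESC;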