I'm having weird trouble creating an index on Sphinx 2.0.5-id64-release (r3308).
/etc/sphinx/sphinx.conf
source keywords
{
# ...
sql_query = \
SELECT keywords.lid, keywords.keyword FROM keywords_sites \
LEFT JOIN keywords ON keywords_sites.kid = keywords.kid \
GROUP BY keywords_sites.kid
sql_attr_uint = lid
sql_field_string = keyword
# ...
}
I get this warning:
WARNING: attribute 'lid' not found - IGNORING
But when I change the query to:
sql_query = \
SELECT 1, keywords.lid, keywords.keyword FROM keywords_sites \
LEFT JOIN keywords ON keywords_sites.kid = keywords.kid \
GROUP BY keywords_sites.kid
I don't get any warnings. Why does this happen?
The first column from the sql_query is ALWAYS used as the document_id.
The document_id cannot be defined as an attribute.
If you want to store the primary key in an attribute as well, then you need to include it twice in the query.
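For example, reusing the query from the question, you can select the key twice: the first copy becomes the document_id and the second copy, matched by column name, becomes the attribute. An untested sketch based on the config above:
sql_query = \
    SELECT keywords.lid AS id, keywords.lid, keywords.keyword FROM keywords_sites \
    LEFT JOIN keywords ON keywords_sites.kid = keywords.kid \
    GROUP BY keywords_sites.kid
sql_attr_uint = lid
sql_field_string = keyword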
I have the following query written in SQLAlchemy:
_stats = db.session.query(
func.count(a.status==s.STATUS_IN_PROGRESS).label("in_progress_"),
func.count(a.status==s.STATUS_COMPLETED).label("completed_"),
) \
.filter(a.uuid==some_uuid) \
.first()
This returns (7, 7), which is incorrect; it should return (7, 0), i.e. in_progress_ = 7 and completed_ = 0.
When I do this in two queries, I get the correct values:
_stats_in_progress = db.session.query(
func.count(a.status==s.STATUS_IN_PROGRESS).label("in_progress_"),
) \
.filter(a.uuid==some_uuid) \
.first()
_stats_in_complete = db.session.query(
func.count(a.status==s.STATUS_COMPLETED).label("completed_"),
) \
.filter(a.uuid==some_uuid) \
.first()
The corresponding SQL also does not work when using the two counts:
SELECT count(a.status = 'IN_PROGRESS') AS in_progress_,
count(a.status = 'STATUS_COMPLETED') AS completed_
FROM a
WHERE a.uuid = '9a353554a6874ebcbf0fe88eb8223d33'
This returns (7, 7) too, while if I run the query with just one count I get the correct value.
Does anyone know what I'm doing wrong?
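For reference: in MySQL, COUNT(expr) counts every row where expr is non-NULL, and a comparison like a.status = 'STATUS_COMPLETED' evaluates to 0 or 1 for every non-NULL status, so both counts end up equal to the total row count. A conditional aggregate is the usual workaround; a sketch using the literals from the question (the exact stored status strings are an assumption):
SELECT SUM(a.status = 'IN_PROGRESS')      AS in_progress_,
       SUM(a.status = 'STATUS_COMPLETED') AS completed_
FROM a
WHERE a.uuid = '9a353554a6874ebcbf0fe88eb8223d33'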
We have 4 Sphinx indexes built using data from one table. All indexes have the same source settings, except that each takes different documents. We use checks like mod(id, 4) = <index number> to distribute documents and document attributes between the indexes.
Question: One of the four indexes (the same one) fails to rebuild almost every time we rebuild the indexes. The other indexes never have this issue and are rebuilt correctly.
We have partitioned the documents and attribute tables. For example, this is how the documents table is partitioned:
PARTITION BY HASH(mod(id, 4))(
PARTITION `p0` COMMENT '',
PARTITION `p1` COMMENT '',
PARTITION `p2` COMMENT '',
PARTITION `p3` COMMENT ''
);
We think that the indexer hangs after it has received all documents but before it starts receiving attributes. We can see this when we check the sessions on the MySQL server.
The index that fails to rebuild uses the mod(id, 4) = 0 condition.
We use Sphinx 2.0.4-release on 64-bit Ubuntu 12.04.2 LTS.
Data source config
source ble_job_2 : ble_job
{
sql_query = select job_notice.id as id, \
body, title, source, company, \
UNIX_TIMESTAMP(insertDate) as date, \
substring(company, 1, 1) as companyletter, \
job_notice.locationCountry as country, \
location_us_state.stateName as state, \
0 as expired, \
clusterId, \
groupCity, \
groupCityAttr, \
job_notice.cityLat as citylat, \
job_notice.cityLng as citylng, \
job_notice.zipLat as ziplat, \
job_notice.zipLng as ziplng, \
feedId, job_notice.rating as rating, \
job_notice.cityId as cityid \
from job_notice \
left join location_us_state on job_notice.locationState = location_us_state.stateCode \
where job_notice.status != 'expired' \
and mod(job_notice.id, 4) = 1
sql_attr_multi = uint attr from query; \
select noticeId, attributeId as attr from job_notice_attribute where mod(noticeId, 4) = 1
} # source ble_job_2
Index config
index ble_job_2
{
type = plain
source = ble_job_2
path = /var/lib/sphinxsearch/data/ble_job_2
docinfo = extern
mlock = 0
morphology = none
stopwords = /etc/sphinxsearch/stopwords/blockwords.txt
min_word_len = 1
charset_type = utf-8
enable_star = 0
html_strip = 0
} # index_ble_job_2
Any help would be greatly appreciated.
Warm regards.
Luckily we have fixed the issue.
We applied the ranged query setup and this made the index rebuild stable. I think this is because Sphinx runs several queries, each returning a relatively small, limited set of results. This allows MySQL to complete each query normally and send all the results back to Sphinx.
The same issue is described on the Sphinx forum: Indexer Hangs & MySQL Query Sleeps.
The changes to the data source config are:
sql_query_range = SELECT MIN(id),MAX(id) FROM job_notice where mod(job_notice.id, 4) = 1
sql_range_step = 200000
sql_query = select job_notice.id as id, \
...
and mod(job_notice.id, 4) = 1 and job_notice.id >= $start AND job_notice.id <= $end
Please note that no ranges should be applied to the sql_attr_multi query; see Bad query in Sphinx MVA.
In my application I'm using this MySQL query:
SELECT DISTINCT * FROM forum_topic \
LEFT JOIN forum_post ON forum_post.id_topic = forum_topic.Id \
WHERE MATCH (forum_post.content) AGAINST ('searching text') \
AND !MATCH (forum_topic.topic_name) AGAINST ('searching text') \
GROUP BY forum_topic.Id
but now I want to migrate to Sphinx. I created a config file and an sph_counter table in the DB. Now my config looks like this:
source main
{
type = mysql
sql_host = localhost
sql_user = root
sql_pass =
sql_db = sphinx
sql_port = 3306 # optional, default is 3306
sql_query_pre = SET NAMES utf8
sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(Id) FROM forum_post
sql_query = SELECT * FROM forum_topic LEFT JOIN forum_post ON forum_post.id_topic = forum_topic.Id \
WHERE forum_post.Id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 ) \
AND MATCH (forum_post.content) AGAINST ('searching text') \
AND !MATCH (forum_topic.topic_name) AGAINST ('searching text') \
GROUP BY(forum_topic.Id)
sql_attr_uint = id_topic
}
source delta : main
{
sql_query_pre = SET NAMES utf8
sql_query = SELECT * FROM forum_topic LEFT JOIN forum_post ON forum_post.id_topic = forum_topic.Id \
WHERE forum_post.Id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 ) \
AND MATCH (forum_post.content) AGAINST ('searching text') \
AND !MATCH (forum_topic.topic_name) AGAINST ('searching text') \
GROUP BY(forum_topic.Id)
}
index main
{
source = main
path = /var/data/main_sphinx
charset_type = utf-8
}
index delta : main
{
source = delta
path = /var/data/delta_sphinx
charset_type = utf-8
}
Is this the right way to search with Sphinx? Or do I have to do this from a PHP script?
You don't put the 'query' in the config file. You want the Sphinx index to contain ALL your documents. Sphinx runs the query offline and indexes the results. Sphinx will then run queries against its index.
So you actually want something like:
sql_query = SELECT p.*,t.* FROM forum_post p INNER JOIN forum_topic t ON p.id_topic = t.Id \
WHERE p.Id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
I wouldn't suggest the GROUP BY id_topic, because that would mean one document per topic. Sphinx would then only see one post per thread, so most of the topic wouldn't be searchable.
I have also moved the tables around so that posts come first, so that the Sphinx document_id (the first column in the SELECT list) is the post id, because that is what is unique.
You have the topic id as an attribute, so you can do grouping in Sphinx if need be.
Now you can run the indexer with this index and index every document.
Then you run queries against the index (like your 'searching text' example):
$cl->setMatchMode(SPH_MATCH_EXTENDED);
$res = $cl->Query('#content searching text','index');
This way, you build one index, then run arbitrary queries against it.
(Using the #content syntax means only the content column is searched with your query, which is better than searching everything and then excluding matches from the topic name.)
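If you keep the delta source, it would normally pick up only the posts added since sph_counter was last updated, rather than repeating the main query. A sketch following the stock Sphinx main+delta pattern, adapted to the table names above (untested):
source delta : main
{
    sql_query_pre = SET NAMES utf8
    sql_query = SELECT p.*,t.* FROM forum_post p INNER JOIN forum_topic t ON p.id_topic = t.Id \
        WHERE p.Id > ( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}
Note that only the main source updates sph_counter in its sql_query_pre; the delta just reads it.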
I have a sql_query for a source defined like so:
sql_query = SELECT \
criteria.item_uuid, \
criteria.user_id, \
criteria.color, \
criteria.selection, \
criteria.item_id, \
home.state, \
item.* \
FROM criteria \
INNER JOIN item USING (item_uuid) \
INNER JOIN user_info home USING (user_id) \
WHERE criteria.item_uuid IS NOT NULL
And then an index:
index csearch {
source = csearch
path = /usr/local/sphinx/var/data/csearch
docinfo = extern
enable_star = 1
min_prefix_len = 0
min_infix_len = 0
morphology = stem_en
}
But when I run indexer --rotate csearch I get:
indexing index 'csearch'...
WARNING: zero/NULL document_id, skipping
The idea is that the item_uuid column is the identifier I want, based on some combination of the other columns. The item_uuid column is a uuid type in Postgres; perhaps Sphinx does not support this? Anyway, any ideas here would be greatly appreciated.
Read the docs: document_ids must be unique, unsigned, non-zero integers.
http://www.sphx.org/docs/manual-1.10.html#data-restrictions
You could try using SELECT row_number(), uuid, etc...
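For example (untested sketch; in PostgreSQL row_number() is a window function and needs an OVER clause, and the ordering column here is an arbitrary choice):
sql_query = SELECT \
    row_number() OVER (ORDER BY criteria.item_uuid) AS id, \
    criteria.item_uuid, \
    criteria.user_id, \
    criteria.color, \
    criteria.selection, \
    criteria.item_id, \
    home.state, \
    item.* \
    FROM criteria \
    INNER JOIN item USING (item_uuid) \
    INNER JOIN user_info home USING (user_id) \
    WHERE criteria.item_uuid IS NOT NULL
Keep in mind the generated ids are not stable across reindexes, so anything that stores document ids externally would need a real integer key instead.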
I have a table Inputs and a derived table Parameters:
CREATE TABLE Configurables
(
id SERIAL PRIMARY KEY
);
CREATE TABLE Inputs
(
configurable integer REFERENCES Configurables( id ),
name text,
time timestamp,
PRIMARY KEY( configurable, name, time )
);
CREATE TABLE Parameters
(
configurable integer,
name text,
time timestamp,
value text,
FOREIGN KEY( configurable, name, time ) REFERENCES Inputs( configurable, name, time )
);
The following query checks whether a parameter has been changed, or is not present yet, and inserts the parameter with a new value.
QString PostgreSQLQueryEngine::saveParameter( int configurable, const QString& name, const QString& value )
{
return QString( "\
INSERT INTO Inputs( configurable, name, time ) \
WITH MyParameter AS \
( \
SELECT configurable, name, time, value \
FROM \
( \
SELECT configurable, name, time, value \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') AND time = \
( \
SELECT max( time ) \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') \
) \
UNION \
SELECT %1 AS configurable, '%2' AS name, '-infinity' AS time, NULL AS value \
)AS foo \
) \
SELECT %1 AS configurable, '%2' AS name, 'now' AS time FROM MyParameter \
WHERE time = (SELECT max(time) FROM MyParameter) AND (value <> '%3' OR value IS NULL); \
\
INSERT INTO Parameters( configurable, name, time, value ) \
WITH MyParameter AS \
( \
SELECT configurable, name, time, value \
FROM \
( \
SELECT configurable, name, time, value \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') AND time = \
( \
SELECT max( time ) \
FROM Parameters \
WHERE (configurable = %1) AND (name = '%2') \
) \
UNION \
SELECT %1 AS configurable, '%2' AS name, '-infinity' AS time, NULL AS value \
)AS foo \
) \
SELECT %1 AS configurable, '%2' AS name, 'now' AS time, '%3' AS value FROM MyParameter \
WHERE time = (SELECT max(time) FROM MyParameter) AND (value <> '%3' OR value IS NULL); \
" ).arg( configurable ).arg( name ).arg( value );
}
How should I best solve the duplication of the two MyParameter subqueries?
Any other tips on cleaning up a query like this?
You should avoid denormalized tables. Use a view for an easy overview of the Parameters table; it would be much, much easier.
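One possible shape for such a view, showing the latest value per (configurable, name) pair, using PostgreSQL's DISTINCT ON (a sketch; the view name is made up):
CREATE VIEW LatestParameters AS
SELECT DISTINCT ON (configurable, name)
       configurable, name, time, value
FROM Parameters
ORDER BY configurable, name, time DESC;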
You should only use a denormalized summary table if your view isn't fast enough. But any denormalized tables should be maintained using triggers, as otherwise you risk these tables going out of sync.
For this you can create a trigger on Parameters that will upsert into Inputs on insert. If you ever delete or update these columns in Parameters, then maintaining Inputs becomes complicated: you'd have to delete rows when there's no corresponding row left in Parameters, which means maintaining counts in Inputs just to know when that is the case. Concurrent insert/update/delete performance will suffer, as any change in Parameters will have to lock a row in Inputs. This is all ugly and bad; a view is a much better solution.
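A rough sketch of such a trigger in PL/pgSQL (names invented, untested); it fills Inputs before a row lands in Parameters so the foreign key is satisfied:
CREATE OR REPLACE FUNCTION parameters_fill_inputs() RETURNS trigger AS $$
BEGIN
    -- insert the key into Inputs unless it is already there
    INSERT INTO Inputs( configurable, name, time )
    SELECT NEW.configurable, NEW.name, NEW.time
    WHERE NOT EXISTS (
        SELECT 1 FROM Inputs
        WHERE configurable = NEW.configurable
          AND name = NEW.name
          AND time = NEW.time );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parameters_fill_inputs
    BEFORE INSERT ON Parameters
    FOR EACH ROW EXECUTE PROCEDURE parameters_fill_inputs();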