Changing name of `id` column in SphinxSearch index - sphinx

I have created a SphinxSearch index which looks like this:
+---------+-----------+
| Field   | Type      |
+---------+-----------+
| id      | bigint    |
| message | field     |
| created | timestamp |
+---------+-----------+
Is there a way to run the indexer to change the name of the id column? I'm concerned about having multiple indexes all with a column called id. I would prefer to name it message_id or something more descriptive.

No. id is a fixed name; it's the unique document ID.
You can duplicate it into a separate attribute if you want.
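A minimal sketch of that idea, assuming an SQL-backed Sphinx source over a hypothetical messages table (the table and column names are assumptions, not from the question): select the column twice, keep the first copy as the document id, and declare the aliased duplicate as an attribute (e.g. with sql_attr_bigint) in the source definition:
-- sql_query for the Sphinx source: the first column stays the mandatory
-- document id, the aliased duplicate is exposed as an attribute named message_id
SELECT id, id AS message_id, message, created
FROM messages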

Related

PSQL import fails: ERROR: invalid input syntax for type uuid: "id"

I exported the users table from my Heroku-hosted SQL database. The export looks fine, but when I try to import it, I get ERROR: invalid input syntax for type uuid: "id"
This is the command used, per the Heroku site:
\copy users FROM ~/user_export.csv WITH (FORMAT CSV);
EDIT:
I didn't include this, but the error also includes:
CONTEXT: COPY users, line 1, column id: "id"
I had done programmer math and read "line 1" as zero-based (i.e. pointing at the second line of the file), but maybe it's actually the header row that's the issue?
-- ANSWER: YES. argh.
/EDIT
I've found some posts in different places that seem to involve JSON fields, but the schema is fairly simple, and only simple objects are used:
Table "public.users"
Column | Type | Collation | Nullable | Default
------------------+--------------------------+-----------+----------+---------------------
id | uuid | | not null |
name | text | | not null |
username | text | | not null |
password_hash | text | | not null |
created_at | timestamp with time zone | | |
updated_at | timestamp with time zone | | |
tournament_id | uuid | | |
background | text | | |
as_player | boolean | | |
as_streamer | boolean | | |
administrator | administrator | | not null | 'no'::administrator
creator | boolean | | not null | false
creator_approved | boolean | | not null | true
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
"uq:users.username" UNIQUE CONSTRAINT, btree (username)
Referenced by:
TABLE "tokens" CONSTRAINT "tokens_userID_fkey" FOREIGN KEY ("userID") REFERENCES users(id) ON DELETE CASCADE
TABLE "tournament_player_pivot" CONSTRAINT "tournament_player_pivot_playerID_fkey" FOREIGN KEY ("playerID") REFERENCES users(id)
The table the data was exported from and the table I'm trying to import into have identical schemas. I've come across suggestions that there is a specific single-quoted format for UUID fields, but manually modifying that has no effect.
What is the problem here?
This is a sample from the export file using a testing user:
id,name,username,password_hash,created_at,updated_at,tournament_id,background,as_player,as_streamer,administrator,creator,creator_approved
ad5230b4-2377-4a8d-8725-d49cd78121af,z9,z9#test.com,$2b$12$97GXVp1p.nfke8L4EYK2Fuev9IE3k0WFAf4O3NvziYHjogFCAppO6,2022-05-07 06:03:44.020019+00,2022-05-07 06:03:44.020019+00,,,f,f,no,f,t
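Given the answer in the EDIT above (line 1 is the CSV header row, so COPY tries to parse the literal string "id" as a uuid), the fix should simply be to tell COPY to skip that row:
-- same command as before, plus HEADER so the first CSV row is skipped
\copy users FROM ~/user_export.csv WITH (FORMAT CSV, HEADER);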

Postgres: jsonb to jsonb[]?

Let's say I have this table:
ams=# \d player
                          Table "public.player"
   Column    |           Type           | Collation | Nullable |      Default
-------------+--------------------------+-----------+----------+-------------------
 id          | integer                  |           | not null |
 created     | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
 player_info | jsonb                    |           | not null |
And then I have this:
ams=# \d report
                      Table "public.report"
 Column  |           Type           | Collation | Nullable | Default
---------+--------------------------+-----------+----------+---------
 id      | integer                  |           | not null |
 created | timestamp with time zone |           | not null |
 data    | jsonb[]                  |           | not null |
How can I take the player_info from all the rows in the player table and insert that into a single row in the report table (into the data jsonb[] field)? My attempts with jsonb_agg() return a jsonb, and I can't for the life of me figure out how to go from jsonb to jsonb[]. Any pointers would be very much appreciated! Thanks in advance.
If you plainly want to copy the values, just treat it like any other data type, and use ARRAY_AGG.
SELECT ARRAY_AGG(player_info)
FROM player
WHERE id IN (...)
should return something of type jsonb[].
Since jsonb[] is a PostgreSQL array type (as opposed to a JSON array inside a single jsonb value), use array_agg() instead of jsonb_agg().
insert into report
select 1 as id, now() as created, array_agg(player_info)
from player
;
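For completeness, if you are starting from a value that is already a JSON array (a single jsonb) rather than from rows, one way to get a jsonb[] out of it is to unnest and re-aggregate; a small sketch with an inline literal:
-- jsonb_array_elements() turns a jsonb array into a set of jsonb rows,
-- and array_agg() turns that set into a jsonb[]
SELECT array_agg(elem)
FROM jsonb_array_elements('[{"a": 1}, {"b": 2}]'::jsonb) AS t(elem);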

Does size of row or column affect aggregation queries in PostgreSQL?

Consider the following table definition:
     Column      |           Type           | Collation | Nullable |   Default
-----------------+--------------------------+-----------+----------+-------------
 id              | uuid                     |           | not null |
 reference_id    | text                     |           |          |
 data            | jsonb                    |           |          |
 tag             | character varying(255)   |           |          |
 created_at      | timestamp with time zone |           |          |
 updated_at      | timestamp with time zone |           |          |
 is_active       | boolean                  |           | not null | true
 status          | integer                  |           |          | 0
 message         | text                     |           |          |
 batch_id        | uuid                     |           | not null |
 config          | jsonb                    |           |          |
The overall table size is over 500M, and the data column in every row holds a JSON document of over 50MB.
Questions -
Does the size of the data column affect aggregations such as count?
Assume we are running the below query -
select count(*)
from table
where batch_id = '88f30539-32d7-445c-8d34-f1da5899175c';
Does the size of the data column affect aggregations such as sum?
Assume we are running the below queries -
Query 1 -
select sum((data->>'count')::int)
from table
where batch_id = '88f30539-32d7-445c-8d34-f1da5899175c';
Query 2 -
select sum(jsonb_array_length(data->'some_array'))
from table
where batch_id = '88f30539-32d7-445c-8d34-f1da5899175c';
The best way to know is to measure.
Once the data is large enough to always be TOASTed, its size will no longer affect the performance of queries that do not need to access the TOASTed contents, like your first one. Your last two do need to access the contents, so their performance will depend on the size.
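A sketch of how to measure, assuming the table is actually called my_table (the question doesn't give a name): compare the timing and buffer counts of a query that never touches data against one that has to detoast it:
-- count(*) does not need to read the (TOASTed) jsonb contents
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM my_table
WHERE batch_id = '88f30539-32d7-445c-8d34-f1da5899175c';

-- summing over a jsonb key must detoast every matching row's data column
EXPLAIN (ANALYZE, BUFFERS)
SELECT sum((data->>'count')::int)
FROM my_table
WHERE batch_id = '88f30539-32d7-445c-8d34-f1da5899175c';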

How to add pivot table to pivot table

My DB is something like this: participant <=> event
I have a table "event_participant" to link the two others.
Now I want a new table, "Eat", that stores the dinner date for each day of an event, per participant.
I imagine the "Eat" table like this: event_id, participant_id, date
But maybe it's better to do it like this: event_participant_id, date
Or maybe there's another way; you tell me.
I use Eloquent for Laravel 5.1 but any SQL answer could help.
Assuming your participant table
id | participant_fname | participant_lname | created_at | updated_at
| | | |
Assuming your event table
id | event_name | created_at | updated_at
| | |
Assuming your event_participant table
id | participant_id | event_id | created_at | updated_at
| | | |
As you have mentioned, for the new Eat table you could store the primary key of the event_participant table. By doing so you can fetch the related Event and Participant details with a JOIN query (see the sketch after the table layouts below). Both of these would be pivot tables.
Eat table
id | event_participant_id | created_at | updated_at
| | |
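A rough SQL sketch of that layout, using the assumed names above and MySQL-style auto-increment (adjust for your actual driver); the eaten_on column name is also just an assumption:
-- "Eat" rows hang off the event_participant pivot row
CREATE TABLE eat (
    id                   INT AUTO_INCREMENT PRIMARY KEY,
    event_participant_id INT NOT NULL,
    eaten_on             DATE NOT NULL,
    FOREIGN KEY (event_participant_id) REFERENCES event_participant(id)
);

-- fetching the event and participant details for each dinner date via JOINs
SELECT e.event_name, p.participant_fname, p.participant_lname, eat.eaten_on
FROM eat
JOIN event_participant ep ON ep.id = eat.event_participant_id
JOIN event e ON e.id = ep.event_id
JOIN participant p ON p.id = ep.participant_id;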
Eloquent Relations
Laravel Relationships
Within your models you could then define the relationships between the tables and fetch values using Eloquent relations.
Hope this is helpful.

Postgres: unique on array and varchar column

I would like to ask if it is possible to set UNIQUE on an array column - I need the array items themselves to be unique.
Secondly, I would also like a second column to be included in this.
To illustrate what I need, here is an example: imagine you have entries with domains and aliases. The domain column is a varchar holding the main domain, and aliases is an array which can be empty. Logically, nothing in the domain column may appear in aliases, and vice versa.
If there is any way to do this, I would be glad to see how. Ideally the answer would also show how to do it in sqlalchemy (table declaration, used in TurboGears).
Postgresql: 9.2
sqlalchemy: 0.7
UPDATE:
I have found how to do a multi-column unique in sqlalchemy; however, it does not work on the array:
client_table = Table('client', metadata,
    Column('id', types.Integer, autoincrement = True, primary_key = True),
    Column('name', types.String),
    Column('domain', types.String),
    Column('alias', postgresql.ARRAY(types.String)),
    UniqueConstraint('domain', 'alias', name = 'domains')
)
Then desc:
wb=# \d+ client
                                      Table "public.client"
 Column |        Type         |                      Modifiers                       | Storage  | Description
--------+---------------------+------------------------------------------------------+----------+-------------
 id     | integer             | not null default nextval('client_id_seq'::regclass)  | plain    |
 name   | character varying   |                                                       | extended |
 domain | character varying   | not null                                              | extended |
 alias  | character varying[] |                                                       | extended |
Indexes:
    "client_pkey" PRIMARY KEY, btree (id)
    "domains" UNIQUE CONSTRAINT, btree (domain, alias)
And select (after test insert):
wb=# select * from client;
 id | name  |    domain     |          alias
----+-------+---------------+--------------------------
  1 | test1 | www.test.com  | {www.test1.com,test.com}
  2 | test2 | www.test1.com |
  3 | test3 | www.test.com  |
Thanks in advance.
Figure out how to do this in pure PostgreSQL syntax, then use DDL to emit it exactly.
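A plain UNIQUE constraint over (domain, alias) only compares whole rows, so it can never catch a domain reappearing inside another row's alias array. One common PostgreSQL-level approach, sketched here under the assumption that a side table is acceptable (client_name and its columns are not from the question), is to normalize every name into its own row and put the uniqueness there:
-- every domain or alias lives as one row here, so a single PRIMARY KEY
-- enforces global uniqueness across both columns and all clients
CREATE TABLE client_name (
    name      character varying PRIMARY KEY,
    client_id integer NOT NULL REFERENCES client (id) ON DELETE CASCADE
);

-- back-fill from the existing data (this will fail if the current rows
-- already violate the rule, as the sample rows above do)
INSERT INTO client_name (name, client_id)
SELECT domain, id FROM client
UNION ALL
SELECT unnest(alias), id FROM client;
That CREATE TABLE can then be emitted verbatim from sqlalchemy (for example via its DDL construct), which is what the answer is getting at.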