How to search on a single OR multiple columns with TSVECTOR and TSQUERY - postgresql

I used some boilerplate code (below) that creates a normalized tsvector _search column from all the columns I specify (in searchObjects) that I'd like full-text search on.
For the most part this is fine. I'm using it in conjunction with Sequelize, so my query looks like:
const articles = await Article.findAndCountAll({
  where: {
    [Sequelize.Op.and]: [
      Sequelize.fn(
        'article._search @@ plainto_tsquery',
        'english',
        Sequelize.literal(':query')
      ),
      { status: STATUS_TYPE_ACTIVE }
    ]
  },
  replacements: { query: q }
});
Search index setup:
const vectorName = '_search';
const searchObjects = {
  articles: ['headline', 'cleaned_body', 'summary'],
  brands: ['name', 'cleaned_about'],
  products: ['name', 'cleaned_description']
};

module.exports = {
  up: async queryInterface =>
    await queryInterface.sequelize.transaction(t =>
      Promise.all(
        Object.keys(searchObjects).map(table =>
          queryInterface.sequelize
            .query(`ALTER TABLE ${table} ADD COLUMN ${vectorName} TSVECTOR;`, {
              transaction: t
            })
            .then(() =>
              queryInterface.sequelize.query(
                `UPDATE ${table} SET ${vectorName} = to_tsvector('english', ${searchObjects[table].join(" || ' ' || ")});`,
                { transaction: t }
              )
            )
            .then(() =>
              queryInterface.sequelize.query(
                `CREATE INDEX ${table}_search ON ${table} USING gin(${vectorName});`,
                { transaction: t }
              )
            )
            .then(() =>
              queryInterface.sequelize.query(
                `CREATE TRIGGER ${table}_vector_update
                 BEFORE INSERT OR UPDATE ON ${table}
                 FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger(${vectorName}, 'pg_catalog.english', ${searchObjects[table].join(', ')});`,
                { transaction: t }
              )
            )
            .catch(console.log)
        )
      )
    ),
  down: async queryInterface =>
    await queryInterface.sequelize.transaction(t =>
      Promise.all(
        Object.keys(searchObjects).map(table =>
          queryInterface.sequelize
            .query(`DROP TRIGGER ${table}_vector_update ON ${table};`, {
              transaction: t
            })
            .then(() =>
              queryInterface.sequelize.query(`DROP INDEX ${table}_search;`, {
                transaction: t
              })
            )
            .then(() =>
              queryInterface.sequelize.query(
                `ALTER TABLE ${table} DROP COLUMN ${vectorName};`,
                { transaction: t }
              )
            )
        )
      )
    )
};
The problem is that, because the code concatenates all the columns in each searchObjects array, what gets stored is a single combined index of every column in that array.
For example, on the articles table, headline, cleaned_body and summary all feed into that one generated _search vector.
Because of this, I can't search by ONLY headline or ONLY cleaned_body, etc. I'd like to be able to search each column separately and also all of them together.
The use case: in my search typeahead I only want to search headline, but on my search results page I want to search all the columns specified in searchObjects.
Can someone give me a hint on what I need to change? Should I create a new tsvector for each column?

If anyone is curious, here's how you can create a tsvector for each column:
// Runs inside the migration's up(), using an unmanaged transaction:
const transaction = await queryInterface.sequelize.transaction();
try {
  for (const table in searchObjects) {
    for (const col of searchObjects[table]) {
      await queryInterface.sequelize.query(
        `ALTER TABLE ${table} ADD COLUMN ${col + vectorName} TSVECTOR;`,
        { transaction }
      );
      await queryInterface.sequelize.query(
        `UPDATE ${table} SET ${col + vectorName} = to_tsvector('english', ${col});`,
        { transaction }
      );
      await queryInterface.sequelize.query(
        `CREATE INDEX ${table}_${col}_search ON ${table} USING gin(${col + vectorName});`,
        { transaction }
      );
      await queryInterface.sequelize.query(
        `CREATE TRIGGER ${table}_${col}_vector_update
         BEFORE INSERT OR UPDATE ON ${table}
         FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger(${col + vectorName}, 'pg_catalog.english', ${col});`,
        { transaction }
      );
    }
  }
  await transaction.commit();
} catch (err) {
  await transaction.rollback();
  throw err;
}
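With one vector per column, the typeahead can hit only the headline vector, while the results page can concatenate the vectors (Postgres joins tsvectors with ||) and search them all at once. A minimal sketch, assuming the headline_search / cleaned_body_search / summary_search columns created above and the same q variable as in the question:

// Typeahead: search ONLY the headline.
const typeahead = await Article.findAll({
  where: Sequelize.literal("headline_search @@ plainto_tsquery('english', :query)"),
  replacements: { query: q }
});

// Results page: concatenate the per-column vectors and search all of them.
const results = await Article.findAndCountAll({
  where: Sequelize.literal(
    "(headline_search || cleaned_body_search || summary_search) @@ plainto_tsquery('english', :query)"
  ),
  replacements: { query: q }
});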

Related

MongoDB and triggers

I have a POST call that inserts records into an Atlas MongoDB database. The Atlas service has an active trigger that increments a correlative field (no_hc). After the insert I query for that correlative field using the _id generated for the document. I've put the code below so you can tell me what I'm doing wrong, since this code sometimes returns null for the no_hc field. Thanks in advance.
To add some detail about what I need to execute: I have a collection where, every time a document is added, the MongoDB server runs a trigger that increments a certain field. The problem is that the findOneAndUpdate (upsert) query returns the inserted document with the auto-increment field set to null, since the trigger apparently runs after findOneAndUpdate returns its result. How can I solve this?
router.post('/admisionUpsert', (req, res) => {
  admisionUpsert(req.body)
    .then(data => res.json(data))
    .catch(err => res.status(400).json({ error: err + ' Unable to add ' }));
});

async function admisionUpsert(body) {
  let nohc = body.no_hc;
  delete body['no_hc'];
  let id;
  let noprot;
  await Paciente.findOneAndUpdate(
    { no_hc: nohc },
    {
      apellido: body.apellido,
      nombre: body.nombre,
      sexo: body.sexo,
      no_doc: body.no_doc,
      fec_nac: body.fec_nac,
      calle: body.calle,
      no_tel: body.no_tel,
      email: body.email
    },
    {
      new: true,
      upsert: true
    }
  ).then(data => { id = data._id });
  console.log("id pac", id);
  Paciente.findOne({ _id: id }).then(data => { nohc = data.no_hc });
  console.log("nohc pac", nohc);
  await Protocolo.findOneAndUpdate(
    { no_prot: "" },
    {
      cod_os: body.cod_os,
      no_os: body.no_os,
      plan_os: body.plan_os,
      no_hc: nohc,
      sexo: body.sexo,
      medico: body.medico,
      diag: body.diag,
      fec_prot: body.fec_prot,
      medicacion: body.medicacion,
      demora: body.demora
    },
    {
      new: true,
      upsert: true
    }
  ).then(data => { id = data._id });
  console.log("id prot", id);
  Protocolo.findOne({ _id: id }).then(data => { noprot = data.no_prot });
  console.log("noprot prot", noprot);
  let practicasProtocolo = [];
  body.practicas_solicitadas.map((practica, index1) => {
    practicasProtocolo.push({
      no_prot: noprot,
      cod_ana: practica.codigo,
      cod_os: practica.cod_os,
      estado_administrativo: practica.estado_administrtivo,
      estado_muestra: practica.estado_muestra,
      estado_proceso: practica.estado_proceso,
    });
    practica.parametro.map((parametro, index) => {
      par = parametro.codigo;
      tipo_dato = parametro.tipo_dato;
      if ((tipo_dato === "numerico") || (tipo_dato === "frase") || (tipo_dato === "codigo")) {
        practicasProtocolo[index1][par] = null;
      }
    });
  });
  console.log(practicasProtocolo);
  practicasProtocolo.map(async (practica, index) => {
    await new Practica(practica).save();
  });
  return { nohc: nohc, noprot: noprot };
}
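Two things stand out in the code above: the Paciente.findOne / Protocolo.findOne lookups are never awaited, so nohc and noprot are read before those queries resolve; and even when awaited, the Atlas trigger runs asynchronously after the upsert commits, so no_hc can still be null on the first read. A minimal sketch of handling both (fetchNoHc is a hypothetical helper; the retry count and delay are arbitrary):

// Await the lookup and retry briefly, since the Atlas trigger that
// sets no_hc runs asynchronously after the upsert commits.
async function fetchNoHc(id, attempts = 5, delayMs = 200) {
  for (let i = 0; i < attempts; i++) {
    const doc = await Paciente.findOne({ _id: id });
    if (doc && doc.no_hc != null) return doc.no_hc;
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return null; // caller decides what to do if the trigger never fired
}

// Usage inside admisionUpsert:
// nohc = await fetchNoHc(id);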

Saving AWS S3 image location links to a Postgres db table

I have a Postgres db via Heroku, with a PUT route to save uploaded S3 bucket image links to the database; however, the links are not saving to the database table. I'm not receiving any errors, the links are simply not saved to the table by the update query I'm calling for the db. Can anyone say what could be wrong here?
//Here is the table schema
CREATE TABLE Users_Channel(
  id SERIAL PRIMARY KEY UNIQUE,
  userID INT UNIQUE,
  FOREIGN KEY(userID) REFERENCES Users(id),
  channelName varchar(255) UNIQUE,
  FOREIGN KEY(channelName) REFERENCES Users(Username),
  Profile_Avatar TEXT NULL,
  Slider_Pic1 TEXT NULL,
  Slider_Pic2 TEXT NULL,
  Slider_Pic3 TEXT NULL,
  Subscriber_Count INT NULL,
  UNIQUE(channelName, userID)
);
//Database update query to update a channel by channel name
async function updateChannel({
  channelname,
  profile_avatar,
  slider_pic1,
  slider_pic2,
  slider_pic3
}) {
  try {
    const { rows } = await client.query(
      `
      UPDATE users_channel
      SET profile_avatar=$2, slider_pic1=$3, slider_pic2=$4, slider_pic3=$5
      WHERE channelname=$1
      RETURNING *;
      `,
      [channelname, profile_avatar, slider_pic1, slider_pic2, slider_pic3]
    );
    return rows;
  } catch (error) {
    throw error;
  }
}
//API PUT route
usersRouter.put(
  "/myprofile/update/:channelname",
  profileUpdate,
  requireUser,
  async (req, res, next) => {
    const { channelname } = req.params;
    const pic1 = req.files["avatar"][0];
    const pic2 = req.files["slide1"][0];
    const pic3 = req.files["slide2"][0];
    const pic4 = req.files["slide3"][0];
    try {
      const result = await uploadFile(pic1);
      const result1 = await uploadFile(pic2);
      const result2 = await uploadFile(pic3);
      const result3 = await uploadFile(pic4);
      console.log(result, result1, result2, result3);
      const updateData = {
        profile_avatar: result.Location,
        slider_pic1: result1.Location,
        slider_pic2: result2.Location,
        slider_pic3: result3.Location,
      };
      console.log(updateData);
      const updatedchannel = await updateChannel(channelname, updateData);
      res.send({ channel: updatedchannel });
    } catch (error) {
      console.error("Could not update user profile", error);
      next(error);
    }
  }
);
Solved it: I had to rework my update query as shown below. The function originally took a single destructured object, but I was calling it with two arguments, so I changed it to accept the channel name and a photos object and destructure the photos inside the function.
async function updateChannel(channelname, photos) {
  const { profile_avatar, slider_pic1, slider_pic2, slider_pic3 } = photos;
  try {
    const { rows } = await client.query(
      `
      UPDATE users_channel
      SET profile_avatar=$2, slider_pic1=$3, slider_pic2=$4, slider_pic3=$5
      WHERE channelname=$1
      RETURNING *;
      `,
      [channelname, profile_avatar, slider_pic1, slider_pic2, slider_pic3]
    );
    return rows;
  } catch (error) {
    throw error;
  }
}
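For anyone wondering why the original version failed silently: the old signature destructured a single object, but the route called updateChannel(channelname, updateData) with two arguments, so every destructured field came out undefined and the WHERE clause matched zero rows; an UPDATE that matches nothing is not an error. A quick illustration:

// Old signature destructured ONE object, but the call site passed two args.
function oldUpdateChannel({ channelname, profile_avatar }) {
  // Destructuring the string "mychannel" yields undefined for every key.
  console.log(channelname, profile_avatar); // undefined undefined
}
oldUpdateChannel("mychannel", { profile_avatar: "https://..." });
// => WHERE channelname = NULL matches no rows, so the UPDATE silently does nothing.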

How to alter a PostgreSQL constraint with Knex?

The table is created with:
exports.up = function (knex) {
  return knex.schema.createTable('organisations_users', (table) => {
    table.uuid('organisation_id').notNullable().references('id').inTable('organisations').onDelete('SET NULL').index();
  });
};

exports.down = function (knex) {
  return knex.schema.dropTableIfExists('organisations_users');
};
In another migration file I would like to alter the onDelete command to "CASCADE".
I tried (among other things):
exports.up = function (knex) {
  return knex.schema.alterTable('organisations_users', (table) => {
    table.uuid('organisation_id').alter().notNullable().references('id').inTable('organisations').onDelete('CASCADE').index();
  });
};
But then Knex states that the constraint already exists (which is true; that's why I want to alter it).
What would be the command for this? I'm also fine with a knex.raw string.
Thank you
Solved it by:
exports.up = function (knex) {
  return knex.schema.alterTable('organisations_users', async (table) => {
    // First drop the existing foreign-key constraint
    await knex.raw('ALTER TABLE organisations_users DROP CONSTRAINT IF EXISTS organisations_users_organisation_id_foreign');
    // Re-add the reference with the correct delete behaviour (was SET NULL, now CASCADE)
    table.uuid('organisation_id').alter().notNullable().references('id').inTable('organisations').onDelete('CASCADE');
  });
};

exports.down = function (knex) {
  return knex.schema.dropTableIfExists('organisations_users');
};
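Since a knex.raw string is also fine, here's a sketch of doing it in a single raw statement instead (this assumes Knex's default constraint name, organisations_users_organisation_id_foreign; Postgres accepts multiple actions in one ALTER TABLE):

exports.up = function (knex) {
  return knex.raw(`
    ALTER TABLE organisations_users
      DROP CONSTRAINT IF EXISTS organisations_users_organisation_id_foreign,
      ADD CONSTRAINT organisations_users_organisation_id_foreign
        FOREIGN KEY (organisation_id) REFERENCES organisations (id)
        ON DELETE CASCADE;
  `);
};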

How to return the serial primary key from a db insert to use for another db insert

I'm using the Express generated template with PostgreSQL, and I have two REST route methods for creating consignments and tracking.
However, I want tracking to be updated on each consignment insert, and I require the serial primary key to do it. So I need the createCon function to return the id after the insert, to use for the cid field in createConTracking.
routes/index.js file
var db = require('../queries');

router.post('/api/cons', db.createCon);
router.post('/api/cons/:id/tracking', db.createConTracking);
queries.js
var promise = require('bluebird');

var options = {
  promiseLib: promise
};

var pgp = require('pg-promise')(options);
var db = pgp(connectionString);

function createCon(req, res, next) {
  var conid = parseInt(req.body.conid);
  db.none('insert into consignments(conid, payterm,........)' +
    'values($1, $2, ......)',
    [conid, req.body.payterm,........])
    .then(function () {
      res.status(200)
        .json({
          status: 'success',
          message: 'Inserted one con'
        });
    })
    .catch(function (err) {
      return next(err);
    });
}

function createConTracking(req, res, next) {
  var cid = parseInt(req.params.id);
  var userid = req.user.email;
  var conid = parseInt(req.body.conid);
  db.none('insert into tracking(status, remarks, depot, userid, date, cid, conid)' +
    'values($1, $2, $3, $4, $5, $6, $7)',
    [req.body.status, req.body.remarks, req.body.depot, userid, req.body.date, cid, conid])
    .then(function (data) {
      res.status(200)
        .json({
          data: data,
          status: 'success',
          message: 'Updated Tracking'
        });
    })
    .catch(function (err) {
      return next(err);
    });
}
DB
CREATE TABLE consignments (
  ID SERIAL PRIMARY KEY,
  conId INTEGER,
  payTerm VARCHAR,
CREATE TABLE tracking (
  ID SERIAL PRIMARY KEY,
  status VARCHAR,
  remarks VARCHAR,
  cid INTEGER
);
I'm the author of pg-promise.
You should execute multiple queries within a task (method task) when not changing data, or a transaction (method tx) when changing data. In the case of making two changes to the database, like in your example, it should be a transaction.
You would append RETURNING id to your first insert query and then use method one to indicate that you expect one row back.
function myRequestHandler(req, res, next) {
  db.tx(async t => {
    const id = await t.one('INSERT INTO consignments(...) VALUES(...) RETURNING id', [param1, etc], c => +c.id);
    return t.none('INSERT INTO tracking(...) VALUES(...)', [id, etc]);
  })
    .then(() => {
      res.status(200)
        .json({
          status: 'success',
          message: 'Inserted a consignment + tracking'
        });
    })
    .catch(error => {
      return next(error);
    });
}
In the example above, we execute the two queries inside a transaction. For the first query we use the third parameter for an easy return-value transformation plus conversion (in case the column is 64-bit, like BIGSERIAL).
Simply add a RETURNING clause to your INSERT statement. This clause allows you to return data concerning the actual values in the inserted record.
insert into consignments(conid, payterm,........)
values($1, $2, ......)
returning id;

Alter string to enum

I am using the method below to change a column's type from string to enum. Is there an alternative way to do this?
Is it possible to use knex.raw to form a query like this?
CREATE TYPE logic AS ENUM ('disabled', 'include', 'exclude');
ALTER TABLE test_table ALTER COLUMN test_col DROP DEFAULT;
ALTER TABLE test_table
  ALTER COLUMN test_col TYPE logic USING (test_col::logic),
  ALTER COLUMN test_col SET DEFAULT 'disabled'::logic;
return schema
  .table('offers', function (table) {
    // Rename the old string columns out of the way
    cols.forEach(function (column) {
      table.renameColumn(column, column + '_old');
    });
  }).then(function () {
    var schema = knex.schema;
    // Recreate each column as an enum with the new default
    return schema.table('offers', function (table) {
      cols.forEach(function (column) {
        table.enum(column, ['disabled', 'include', 'exclude']).defaultTo('disabled');
      });
    });
  }).then(function () {
    return knex.select('*').from('offers');
  }).then(function (rows) {
    // Copy the data back row by row (Promise.map here is Bluebird's)
    return Promise.map(rows, function (row) {
      var data = {};
      cols.forEach(function (column) {
        data[column] = row[column + '_old'];
      });
      return knex('offers').where('id', '=', row.id).update(data);
    });
  }).then(function () {
    var schema = knex.schema;
    // Drop the renamed originals
    return schema.table('offers', function (table) {
      cols.forEach(function (column) {
        table.dropColumn(column + '_old');
      });
    });
  });
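To answer the knex.raw part: yes, the whole thing can run as one raw migration. A minimal sketch, assuming the test_table / test_col / logic names from the SQL above (node-postgres executes the semicolon-separated statements as a single multi-statement query):

exports.up = function (knex) {
  return knex.raw(`
    CREATE TYPE logic AS ENUM ('disabled', 'include', 'exclude');
    ALTER TABLE test_table ALTER COLUMN test_col DROP DEFAULT;
    ALTER TABLE test_table
      ALTER COLUMN test_col TYPE logic USING (test_col::logic),
      ALTER COLUMN test_col SET DEFAULT 'disabled'::logic;
  `);
};

exports.down = function (knex) {
  return knex.raw(`
    ALTER TABLE test_table ALTER COLUMN test_col DROP DEFAULT;
    ALTER TABLE test_table ALTER COLUMN test_col TYPE varchar(255) USING (test_col::text);
    ALTER TABLE test_table ALTER COLUMN test_col SET DEFAULT 'disabled';
    DROP TYPE logic;
  `);
};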