I am trying to implement a following/follower concept like many social networks have. I have two INT fields called me and following; both hold users' unique IDs. I want to make it so that if user 23 is following user 7 (23->7), then user 23 cannot follow user 7 again, because the relationship already exists. For example:
In my table the first two rows are 31->27 twice, i.e. user 31 is following user 27 twice, which is redundant. Is there some constraint I can use to prevent that from happening? I am using Postgres version 9.4.
You can do this by creating a unique index. But not just any unique index, one on expressions:
create unique index unq_t_me_following
    on t (least(me, following), greatest(me, following));  -- replace t with your table name
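Note that least/greatest makes the index symmetric: once 23->7 exists, the reverse pair 7->23 is rejected as well. If mutual follows should stay allowed and you only want to block exact duplicates, a plain unique constraint is enough (again assuming the table is called t):

alter table t add constraint unq_t_me_following unique (me, following);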
I need to define and create indices for a PostgreSQL DB used as a translation memory.
This is related to a question I've posted earlier (Database design question regarding performance), and the oversimplified design follows this answer (How to design a database for translation dictionary?). The only difference is that I have a segment (basically a sentence) instead of a word.
Tables:
I. languages
ID  NAME
-------------
1   English
2   Slovenian
II. segments
ID  CONTENT       LANGUAGE_ID
------------------------------
1   Hello World   1
2   Zdravo, svet  2
III. translation_records (this table has more columns, omitted here, such as domain, user, etc.)
ID  SOURCE_SEGMENT_ID  TARGET_SEGMENT_ID
----------------------------------------
1   1                  2
I want to index the segments table for searching existing translations and for searching combinations of words in the DB.
My question is: is it enough to create an index on the segments table's CONTENT column, or should I also tokenize CONTENT into a new column TOKENS and index that as well? (A sketch of the tokenized approach follows the edit below.)
Also, am I missing something else that might be important for creating such indices?
---EDIT---
Querying examples:
When a user enters a new text to translate, the app returns a predefined number of existing translation records whose source segment's content matches the entered text by a certain percentage.
When a user triggers a manual query, the app lists a predefined number of existing translation records whose source segment's content includes the words marked by the user (i.e. the concordance search).
Since there is only one table for all language combinations, the first condition for querying would be the language combination (an attribute of translation_record).
---EDIT---
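For illustration, here is a sketch of what the tokenized-index approach could look like in PostgreSQL. The tsvector column, the 'simple' text-search configuration, the pg_trgm extension, and the index names are assumptions for the example, not part of the original design:

-- Token column plus GIN index for word searches (the concordance case)
alter table segments add column tokens tsvector;
update segments set tokens = to_tsvector('simple', content);
create index idx_segments_tokens on segments using gin (tokens);

-- Trigram index for the "matches by a certain percent" style fuzzy lookups
create extension if not exists pg_trgm;
create index idx_segments_content_trgm on segments using gin (content gin_trgm_ops);

-- Example fuzzy lookup: segments similar to the entered text
select id, content, similarity(content, 'Hello World') as sim
from segments
where content % 'Hello World'  -- % matches above pg_trgm's similarity threshold
order by sim desc
limit 10;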
How can I properly paginate when ordering on a column that can have repeated values? I have a table called posts with a column num_likes that holds the number of likes of a post, and I want to order by num_likes DESC. But I run into a problem: a new row inserted between two page fetches causes repeated data to be returned.
Keyset pagination is the usual solution to this problem, but from what I've seen it only works if the column the rows are sorted on is distinct/unique. How would I do this if that is not the case?
You can easily make the sort key unique by adding the primary key to it.
You don't have to display the primary key to the user, just use it internally to tell “equal” rows apart.
For querying and indexing, you can make use of PostgreSQL's ability to compare row values like this: (num_likes, id) >= (4, 325698).
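A sketch of what that could look like (the literals 4 and 325698 stand for the num_likes and id of the last row on the previous page; the table and index names are assumptions):

-- Next page: rows strictly after the last seen row in DESC order
select id, num_likes
from posts
where (num_likes, id) < (4, 325698)
order by num_likes desc, id desc
limit 20;

-- A matching index lets PostgreSQL serve this efficiently
create index idx_posts_likes_id on posts (num_likes desc, id desc);

Note the comparison flips to < here because the sort order is descending.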
I would like to create a unique 10-digit ID in MongoDB for every user under a particular client that we get in our system. Note that we store all of our users (who are from different clients) in the same collection. The ID needs to be unique only among users of the same client; two users from two different clients can share the same ID, even though they exist in the same collection.
Any suggestions on how this 10-digit ID can be made unique for every user?
mongoid_token looks like it can do what you need:
https://github.com/thetron/mongoid_token
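Whichever library generates the token, the per-client uniqueness itself can be enforced with a compound unique index (the field names client_id and user_code are assumptions for the example):

// Unique per (client_id, user_code): two users under different clients
// may still share the same user_code
db.users.createIndex({ client_id: 1, user_code: 1 }, { unique: true })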
I have tons of data in my database, and I have many duplicates in it. I want to remove the duplicates and keep only one document for each.
For example, I have data like this in my login collection:
id name
123 abc
124 abc
125 abc
126 abc
127 pqr
128 pqs
I want to keep only one user whose name is 'abc' and remove all the other users whose name is 'abc'.
How can I achieve this in MongoDB?
Thanks in advance.
One method is to create a unique index on the user's name with the dropDups option: http://docs.mongodb.org/manual/core/indexes/#drop-duplicates
So for your example:
// Unique index on name; dropDups deletes all but the first document per name
db.user.ensureIndex({ name: 1 }, { unique: true, dropDups: true })
This creates a unique index on the name and drops all other duplicates.
The added benefit is that the unique index then ensures you don't get duplicates again.
You can also do this client-side, but that is likely to be slower than building an index.
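Note that dropDups was removed in MongoDB 3.0. On modern versions, one approach is to find the duplicates with an aggregation and delete all but one document per name (a sketch, assuming the collection is named login):

// Group by name, remember all _ids, keep the first and delete the rest
db.login.aggregate([
    { $group: { _id: "$name", ids: { $push: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
]).forEach(function (doc) {
    doc.ids.shift();  // keep the first document
    db.login.deleteMany({ _id: { $in: doc.ids } });
});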
I am supposed to generate a partially random unique ID to be stored as an identifier for users.
Criteria:
8 digits
The first 4 digits are my own choice (e.g. the year)
The last 4 digits can be random.
How do I use Entity Framework to make sure this ID is unique? I don't want a loop that generates an ID and then checks the database. Can something like this be done in one database call?
The only way to do this in a single call would be to call a stored procedure that generates the ID and checks uniqueness.
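For illustration, such a procedure might look like the following T-SQL sketch (the dbo.Users table, UserId column, and procedure name are assumptions; a unique constraint on the column should remain the real guarantee):

-- Retries random 4-digit suffixes until an unused 8-digit ID is found
CREATE PROCEDURE dbo.GenerateUserId
    @Prefix CHAR(4),         -- e.g. the year, '2024'
    @NewId  CHAR(8) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Candidate CHAR(8);
    WHILE 1 = 1
    BEGIN
        -- Random number 0-9999, left-padded to 4 digits
        SET @Candidate = @Prefix +
            RIGHT('0000' + CAST(CAST(RAND() * 10000 AS INT) AS VARCHAR(4)), 4);
        IF NOT EXISTS (SELECT 1 FROM dbo.Users WHERE UserId = @Candidate)
        BEGIN
            SET @NewId = @Candidate;
            BREAK;
        END;
    END;
END;

The retry loop lives inside the procedure, so Entity Framework makes only one call; the unique constraint still protects against the race between the existence check and the insert.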