I have been experimenting with Supabase recently, trying to build a Twitter-like 'replying to #user' comment feature.
I have attached a database ERD below for reference. As you can see, each comment has a userid, and also a 'replyingTo' column which stores the userid of the author of the comment being replied to.
I can query each of these two tables individually, so fetching a comment along with its author's profile is easy. However, when I try to fetch both the profile of the comment's creator and the profile referenced by replyingTo, I get the following error:
Could not embed because more than one relationship was found for 'Comments' and 'profiles'
I'm not too experienced with PostgreSQL, so I'm not sure how to approach something of this scope. Below is the code I'm currently using, which produces the error described above:
const { data, error } = await supabase
.from('Comments')
.select('ReplyingTo, profiles:profiles(id), profiles:profiles(id)')
.eq('commentid', cmtid)
My expected outcome is to get the comment, the profile of the user who created it, and the profile of the user being replied to.
Thank you for your time and patience.
As the error says, the problem is that you have two relationships between profiles and Comments. In this case you need to disambiguate the embeds (tell PostgREST which foreign key column to use for each join), as specified in the PostgREST docs. With the supabase-js client it looks like:
const { data, error } = await supabase
.from('Comments')
.select('ReplyingTo, profiles1:profiles!userid(*), profiles2:profiles!ReplyingTo(*)')
.eq('commentid', cmtid)
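With the aliases in place, each embedded profile comes back under its own key on the returned row. As a minimal sketch of consuming the result (assuming the schema above; .single() just unwraps the one matching row):

const { data, error } = await supabase
  .from('Comments')
  .select('ReplyingTo, profiles1:profiles!userid(*), profiles2:profiles!ReplyingTo(*)')
  .eq('commentid', cmtid)
  .single()

if (!error) {
  console.log(data.profiles1) // profile of the comment's author (via userid)
  console.log(data.profiles2) // profile of the user being replied to (via ReplyingTo)
}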
Related
Hello, I have a problem. I created a registration form, and I'm trying to check whether any user with a certain username already exists in the Firebase database. I tried to get a reference to all the users:
let users = Database.database().reference(withPath: "users")
But I don't know how I could check if there is any user with a specified username.
You'll want to use a query for that. Something like:
let query = users.queryOrdered(byChild: "username").queryEqual(toValue: "two")
Then execute the query and check whether the result snapshot exists.
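For illustration, here is a minimal sketch of that check using the Firebase Web SDK (v9); the Swift API is analogous, and the 'users' path and 'username' child follow the question:

import { getDatabase, ref, query, orderByChild, equalTo, get } from 'firebase/database'

const db = getDatabase()
// Find any user whose username equals "two" and run the query once.
const q = query(ref(db, 'users'), orderByChild('username'), equalTo('two'))
const snapshot = await get(q)
// exists() is true if at least one child matched.
console.log(snapshot.exists() ? 'username taken' : 'username available')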
Note though that you won't be able to guarantee uniqueness in this way. If multiple users perform the check at the same time, they may both end up claiming the same user name.
To guarantee a unique user name, you will need to store the user names as keys, since keys are by definition unique within their parent node. There are many existing questions and answers that cover this pattern in more detail.
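A sketch of that idea (Firebase Web SDK v9 for illustration; the /usernames path and the claimUsername helper are hypothetical): each username is stored as a key, and a transaction makes the claim atomic, so two users cannot grab the same name.

import { getDatabase, ref, runTransaction } from 'firebase/database'

async function claimUsername(username: string, uid: string): Promise<boolean> {
  const db = getDatabase()
  const result = await runTransaction(ref(db, `usernames/${username}`), current => {
    if (current === null) return uid // name is free: claim it for this user
    return                           // name is taken: abort the transaction
  })
  return result.committed // true only if this client won the name
}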
I have added posts to Firebase and I am wondering how I can pull the posts chronologically, based on when each user posted them.
My database is set up like below.
The first node after comments is the user ID, and the posts are underneath that. Obviously these posts are in order, but if a new user posts something in between "posting" and "another 1", for example, how would I pull them so the new post shows up in between?
Is there a way to remove the autoID and just use the userID as a key? The problem I am running into is that the previous post is then overwritten.
I am accepting this answer as it is the most thorough. What I did to solve my problem was create the unique key as the first node, with the UID and the comment as children. Then I pull the unique keys, which are in order, and find the comment associated with each UID.
The other answers all have merit but a more complete solution includes timestamping the post and denormalizing your data so it can be queried (assuming it would be queried at some point). In Firebase, flatter is better.
posts
  post_0
    title: "Posts And Posting"
    msg: "I think there should be more posts about posting"
    by_uid: "uid_0"
    timestamp: "20171030105500"
    inv_timestamp: "-20171030105500"
    uid_time: "uid_0_20171030105500"
    uid_inv_time: "uid_0_-20171030105500"

comments
  comment_0
    for_post: "post_0"
    text: "Yeah, posts about posting are informative"
    by_uid: "uid_1"
    timestamp: "20171030105700"
    inv_timestamp: "-20171030105700"
    uid_time: "uid_1_20171030105700"
    uid_inv_time: "uid_1_-20171030105700"
  comment_1
    for_post: "post_0"
    text: "Noooo mooooore posts, please"
    by_uid: "uid_2"
    timestamp: "20171030110300"
    inv_timestamp: "-20171030110300"
    uid_time: "uid_2_20171030110300"
    uid_inv_time: "uid_2_-20171030110300"
With this structure we can:
get posts and their comments, and order them ascending or descending
query for all posts within the last week
get all comments or posts made by a user
get all comments or posts made by a user within a date range (tricky, huh)
I threw a couple of other key: value pairs in there to round it out a bit: compound values, querying ascending and descending, timestamps. The compound uid_time key is what makes the per-user date-range query work, as sketched below.
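A minimal sketch of that date-range query (Firebase Web SDK v9 for illustration; the field names follow the structure above, and the date bounds are made up):

import { getDatabase, ref, query, orderByChild, startAt, endAt, get } from 'firebase/database'

const db = getDatabase()
// All posts by uid_0 during October 2017, via the compound uid_time key.
const q = query(
  ref(db, 'posts'),
  orderByChild('uid_time'),
  startAt('uid_0_20171001000000'),
  endAt('uid_0_20171031235959'),
)
const snapshot = await get(q)
snapshot.forEach(child => { console.log(child.key, child.val()) })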
You cannot use the userID as the key instead of the autoID, because keys must be unique; that's why Firebase just updates the value rather than adding another child with the same key.
Normally Firebase nodes are ordered chronologically by default, so if you pull the values they should already be in the right order. If you want to make sure of that, you can add a timestamp value, set it to the server timestamp, and order the pulled data by it. (I think there is actually a timestamp saved automatically by Firebase that you can access somehow, but you would need to look that up in the documentation.)
If I understood correctly, you need to change the structure of your database to accomplish what you want. For example, you could keep the autoID as the key but save the userID you wanted to use as a value, if you need it. I hope I got your idea right; if not, be more precise and I will try to help.
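For what it's worth, here is a sketch of writing a post with a server-side timestamp (Firebase Web SDK v9 for illustration; the posts path and field names are assumptions):

import { getDatabase, ref, push, serverTimestamp } from 'firebase/database'

// push() generates a chronologically ordered key, and serverTimestamp()
// lets the server fill in the write time, so client clock skew doesn't matter.
await push(ref(getDatabase(), 'posts'), {
  text: 'hello',
  by_uid: 'uid_0',
  timestamp: serverTimestamp(),
})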
Firebase keys are chronological by default - it's built into their key generation algorithm. I think you need to restructure/rethink your data.
Your POSTS database should (possibly) have the comments listed with each post; you can then duplicate them on the user record for faster retrieval if they need to be accessed by user. So something like:
POSTS
  - post (unique key)
    - title (text)
    - date (timestamp)
    - comments
      - comment (unique key)
        - text (text)
        - user_id (user key)
        - date (timestamp)
When you pull the comments, you shouldn't be pulling them from a bunch of different users. That could result in a lot of queries and a ton of load time. Instead, the comments can be added (chronologically, of course) to the post object itself, and also to the user if you want to keep a reference there. Unlike in MySQL, NoSQL databases can tolerate quite a bit of this data duplication.
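As a sketch of that duplication (Firebase Web SDK v9 for illustration; every path and field name here is hypothetical), a single multi-path update can write the comment to both places atomically:

import { getDatabase, ref, child, push, update } from 'firebase/database'

const db = getDatabase()
const commentKey = push(child(ref(db), 'comments')).key // chronological key
const comment = { text: 'Nice post', user_id: 'uid_1', date: Date.now() }

// Write the same comment under the post and under the user in one call;
// either both writes succeed or neither does.
await update(ref(db), {
  [`posts/post_0/comments/${commentKey}`]: comment,
  [`users/uid_1/comments/${commentKey}`]: comment,
})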
I'd like to write a factory for a blog post that doesn't create a new user record for every post, but rather picks a random user from those that already exist. How would I do this?
You could randomly order your table, take a record, and assign it to your Post. Bear in mind that there is definitely a cleaner way to do this, but here's one that works, obviously assuming your users are already in your test database:
user = User.order("RANDOM()").take #PostgreSQL
user = User.order("RAND()").take #MySQL
post = create(:post, user: user)
I have a very basic news feed modelled in IBM Graph (TitanDB backed by Cassandra) as shown below:
I am trying to write a query that does the following:
Start at vertex USER: John.Smith
Get the 15 most recent posts from the user's FRIENDS, combined with his own.
Check to see if USER: John.Smith likes any of those posts and return as a simple is_liked boolean property for each post.
There are a couple of pre-requisites for this query:
In each returned post, the properties of the posting USER should also be returned. For the sake of this question, only the avatar property is required.
I need to be able to paginate these results. i.e. Once I have retrieved the top 15 posts, I then need to be able to return the next 15, then the next etc.
I have no problem getting the user's friends and their LATEST_POST vertices:
g.V().hasLabel("USER").has("userid", "John.Smith").both("FRIEND").out("LATEST_POST");
I have read the Tinkerpop documentation but am finding myself still lost as to how to begin building upon this query in order to meet my requirements.
Also, any commentary on this approach in terms of performance, data modelling, schema, or indexing would be extremely helpful. I.e. should I expect this approach to retrieve feeds in real time at scale?
Thanks in advance.
For the given graph schema, the query would be something like this:
g.V().has("user", "userid", "John.Smith").as("john").
union(identity(), both("FRIEND")).as("user").
out("LATEST_POST").
flatMap(emit().repeat(out("PREVIOUS_POST")).range(page * pageSize, (page + 1) * pageSize)).as("post").
choose(__.in("LIKED").where(eq("john")), constant(true), constant(false)).as("likedByJohn")
select("user", "post", "likedByJohn")
But Alaa already pointed out that this approach won't scale and how you could improve your graph schema.
You should check the pagination recipe at http://tinkerpop.apache.org/docs/3.2.3-SNAPSHOT/recipes/#pagination. Here's a simplified way to retrieve one range/page at a time:
gremlin> g.V().hasLabel('person').range(0,2)
==>v[1]
==>v[2]
gremlin> g.V().hasLabel('person').range(2,4)
==>v[4]
==>v[6]
Regarding the model you have, I would avoid using a LATEST_POST edge, as you will need to keep updating this edge every time a user has a new post. It's better to add a timestamp property to the post; you can always sort the returned results on the timestamp to get the latest posts, as sketched below.
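For example, assuming a timestamp property on each post and a hypothetical POSTED edge from each user to all of their posts, the 15 latest posts from John and his friends could be fetched directly, in the same console style as above:

gremlin> g.V().has('user', 'userid', 'John.Smith').union(identity(), both('FRIEND')).out('POSTED').order().by('timestamp', decr).range(0, 15)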
I have a simple app which displays a list of records; the user should also be able to edit a particular record by id.
Because the list is big, I don't fetch it as a whole, but partially, via Product.fetch(data: $.param(page: 1)).
Then when someone tries to edit a record, I call Product.find(id). If the record was already prefetched with fetch, this works fine, but when the record has not been fetched yet I get an error like: "Product" model could not find a record for the ID "1152"
So the question is: why does find not perform an ajax call, and how can I make it perform one? Or is there another solution?
Spine.find only looks at the records that are already loaded; making an ajax request isn't what find does. So you have to try/catch your find, and when it throws this error, fetch the record:
id = 1152
try
  product = Product.find id
catch err
  Product.fetch(
    data:
      id: id
    processData: true
  )
  # Try again after Product.refresh
To be honest, I don't think this is a nice way to do it, but it is how Spine works... I would rather have it fetch the record automatically, or at least not throw an error on find.