Order Posts by Most Votes (Overall, Last Month, etc.) with Laravel MongoDB

I am trying to understand more advanced features of MongoDB and Laravel but am having trouble with this. Currently my schema is set up with users, posts, and posts_votes collections. The posts_votes collection has user_id, post_id, and timestamp fields.
In a relational DB, I would just left join the posts_votes table, count, and order by that count, excluding date ranges when needed.
With MongoDB I'm having difficulty because there's no direct left-join equivalent, so I'd like to learn how to accomplish my goal in a more document-oriented way.
On my Post model in Laravel, I reference the votes this way, so looking at an individual post I can get the vote count, see if the current user voted for it, and so on:
public function votes()
{
    return $this->hasMany(PostVote::class, 'post_id');
}
And my current working query looks like this:
$posts = Post::forCategoryType($type)
    ->with('votes', 'author', 'businessType')
    ->where('approved', true)
    ->paginate(25);
The forCategoryType method is just a query scope I added. Here it is on the Post model/document class:
public function scopeForCategoryType($builder, $catType)
{
    if ($catType->exists) {
        return $builder->where('cat_id', $catType->id);
    }

    return $builder;
}
So when I look at posts like this one, it's close to what I want to accomplish, but I am not applying it properly. For instance, I changed my main query to look like this:
$posts = Post::forCategoryType($type)
    ->with('votes', 'author', 'businessType')
    ->where('approved', true)
    ->sortByVotes()
    ->paginate(25);
And created this new method on the Post model:
public function scopeSortByVotes($builder, $dir = 'desc')
{
    return $builder->raw(function($collection) {
        return $collection->aggregate([
            [
                '$group' => [
                    '_id' => ['post_id' => 'votes.$post_id', 'user_id' => 'votes.$user_id']
                ],
                'vote_count' => ['$sum' => 1]
            ],
            ['$sort' => ['vote_count' => -1]]
        ]);
    });
}
This returns the exception: A pipeline stage specification object must contain exactly one field.
Not sure how to fix that (still looking), so then I tried:
return $collection->aggregate([
    ['$unwind' => '$votes'],
    ['$group' => [
        '_id' => ['post_id' => ['$votes.post_id', 'user_id' => '$votes.user_id']],
        'count' => ['$sum' => 1]
    ]]
]);
This returns an empty ArrayIterator, so then I tried:
public function scopeSortByVotes($builder, $dir = 'desc')
{
    return $builder->raw(function($collection) {
        return $collection->aggregate([
            '$lookup' => [
                'from' => 'community_posts_votes',
                'localField' => 'post_id',
                'foreignField' => '_id',
                'as' => 'vote_count'
            ]
        ]);
    });
}
But with this setup I just get the list of posts, unsorted. I'm on version 3.2.8.
The default loads everything by most recent. Ultimately I want to be able to pull posts based on how many votes they got over their lifetime, but also query which posts got the most votes in the last week, month, etc.
The example I linked keeps the grand total on the Post model along with an array of all the user ids that voted on it. With the way I have things set up, using a separate collection holding the user_id, post_id, and timestamp of when each vote happened, can I still accomplish the same goal?
Note: I'm using this Laravel MongoDB library.
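For reference, the "exactly one field" error from the first attempt happens because each element of the pipeline array must be a document with exactly one stage operator such as $group or $sort. Below is a rough, untested sketch of the kind of pipeline that could produce the lifetime ordering, written in mongo shell syntax rather than PHP, assuming the votes live in a posts_votes collection whose post_id references the post's _id (the projected title field is only a placeholder):
db.posts.aggregate([
    { $match: { approved: true } },
    { $lookup: {
        from: "posts_votes",        // assumed collection name
        localField: "_id",          // the post's _id ...
        foreignField: "post_id",    // ... matched against each vote's post_id
        as: "votes"
    } },
    { $project: {
        title: 1,                           // keep whatever post fields are needed
        vote_count: { $size: "$votes" }     // lifetime total
    } },
    { $sort: { vote_count: -1 } }
])
For the "last week/month" variants, the joined votes array could be narrowed by its timestamp field (for example with $filter, which is available in 3.2) before taking $size.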

Related

How to find all the matches in a nested array with the _id with mongoose

This question may be easy for some of you, but I can't figure out how this query should work.
In the attached picture: https://i.stack.imgur.com/KzK0O.png
Number 1 is the endpoint with the query I can't get to work.
Number 2 is the endpoint where you can see how I am storing the object match in the database.
Number 3 is the data structure in the frontend.
Number 4 is the Match mongoose model.
I am trying to get all the matches that have the _id I am sending as a param in any of their members arrays.
I am trying it with $in, but I am not sure how querying a property inside a nested array of objects works.
I am very new at web development and this is quite difficult for me; any help would be highly appreciated, even some documentation for beginners, since I can't understand the one on the official site.
Thanks in advance
router.get("/profile/:_id", (req, res) => {
const idFromParam = req.params._id;
console.log("params", req.params._id);
Match.find({ match: [ { teams: [{ members: $in: [_id: idFromParam ] } ] ] }}).populate("users")
.then((response) => {
res.json(response);
console.log("response", response);
}) })
.catch((err) =>
res.status(500).json({ code: 500, message: "Error fetching", err })
);
});
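The exact document shape is only visible in the linked screenshot, so the following is just a sketch that assumes each match document contains a teams array whose elements hold a members array of objects with an _id. With that shape, dot notation is usually enough to match inside both nested arrays; $in only becomes necessary when matching against several ids at once:
router.get("/profile/:_id", (req, res) => {
  const idFromParam = req.params._id;

  // Dot notation walks through both nested arrays; adjust the path
  // (e.g. "match.teams.members._id") to the real schema.
  Match.find({ "teams.members._id": idFromParam })
    .populate("users")
    .then((response) => res.json(response))
    .catch((err) =>
      res.status(500).json({ code: 500, message: "Error fetching", err })
    );
});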

Insert into relationship table using id created at user registration

I have two tables as seen below
The first table is for users and is populated via a registration form on the client side. When a new user is created, I need the second 'quotas' table to be populated with date, amount, and linked with the user id. The 'user_id' is used to pull the quotas information in a GET and display client side. I am having issues using the 'id' to populate the second table at the time of creation. I am using knex to make all queries. Would I be using join to link them in knex?
server
hydrateRouter // get all users
  .route('/api/user')
  .get((req, res) => {
    knexInstance
      .select('*')
      .from('hydrate_users')
      .then(results => {
        res.json(results)
      })
  })
  .post(jsonParser, (req, res, next) => { // register new users
    const { username, glasses } = req.body;
    const password = bcrypt.hashSync(req.body.password, 8);
    const newUser = { username, password, glasses };
    knexInstance
      .insert(newUser)
      .into('hydrate_users')
      .then(user => {
        res.status(201).json(user);
      })
      .catch(next);
  })
client
export default class Register extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      username: '',
      password: '',
      glasses: 0
    }
  }
  handleSubmit(event) {
    event.preventDefault();
    fetch('http://localhost:8000/api/user', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(this.state)
    })
      .then(response => response.json())
      .then(responseJSON => {
        this.props.history.push('/login');
      })
  }
server side route for displaying the water amount
hydrateRouter
  .route('/api/user/waterconsumed/:user_id') // display water consumed/day
  .all(requireAuth)
  .get((req, res, next) => {
    const { user_id } = req.params;
    knexInstance
      .from('hydrate_quotas')
      .select('amount')
      .where('user_id', user_id)
      .first()
      .then(water => {
        res.json(water)
      })
      .catch(next)
  })
Thank you!
Getting the id of an inserted row
So this is a common pattern in relational databases, where you can't create the egg until you have the unique id of the chicken that lays it! Clearly, the database needs to tell you how it wants to refer to the chicken.
In Postgres, you can simply use Knex's .returning function to make it explicit that you want the new row's id column returned to you after a successful insert. That'll make the first part of your query look like this:
knexInstance
  .insert(newUser)
  .into('users')
  .returning('id')
Note: not all databases support this in the same way. In particular, if you happen to be developing locally using SQLite, it will return the number of rows affected by the query, not the id, since SQLite doesn't support SQL's RETURNING. It's best to just develop locally using Postgres to avoid nasty surprises.
Ok, so we know which chicken we're after. Now we need to make sure we've waited for the right id, then go ahead and use it:
.then(([ userId ]) => knexInstance
  .insert({
    user_id: userId,
    date: knex.fn.now(),
    amount: userConstants.INITIAL_QUOTA_AMOUNT
  })
  .into('quotas')
)
Or however you choose to populate that second table.
Note: DATE is a SQL keyword. For that reason, it doesn't make a great column name. How about created or updated instead?
Responding with sensible data
So that's basic "I have the ID, let's insert to another table" strategy. However, you actually want to be able to respond with the user that was created... this seems like sensible API behaviour for a 201 response.
What you don't want to do is respond with the entire user record from the database, which will expose the password hash (as you're doing in your first code block from your question). Ideally, you'd probably like to respond with some UI-friendly combination of both tables.
Luckily, .returning also accepts an array argument. This allows us to pass a list of columns we'd like to respond with, reducing the risk of accidentally exposing something to the API surface that we'd rather not transmit.
const userColumns = [ 'id', 'username', 'glasses' ]
const quotaColumns = [ 'amount' ]

knexInstance
  .insert(newUser)
  .into('users')
  .returning(userColumns)
  .then(([ user ]) => knexInstance
    .insert({
      user_id: user.id,
      date: knex.fn.now(),
      amount: userConstants.INITIAL_QUOTA_AMOUNT
    })
    .into('quotas')
    .returning(quotaColumns)
    .then(([ quota ]) => res.status(201)
      .json({
        ...user,
        ...quota
      })
    )
  )
Async/await for readability
These days, I'd probably avoid a promise chain like that in favour of the syntactic sugar that await provides us.
try {
  const [ user ] = await knexInstance
    .insert(newUser)
    .into('users')
    .returning(userColumns)

  const [ quota ] = await knexInstance
    .insert({
      user_id: user.id,
      date: knex.fn.now(),
      amount: userConstants.INITIAL_QUOTA_AMOUNT
    })
    .into('quotas')
    .returning(quotaColumns)

  res
    .status(201)
    .json({
      ...user,
      ...quota
    })
} catch (e) {
  next(Error("Something went wrong while inserting a user!"))
}
A note on transactions
There are a few assumptions here, but one big one: we assume that both inserts will be successful. Sure, we provide some error handling, but there's still the possibility that the first insert will succeed, and the second fail or time out for some reason.
Typically, we'd do multiple insertions in a transaction block. Here's how Knex handles this:
try {
  const userResponse = await knexInstance.transaction(async tx => {
    const [ user ] = await tx.insert(...)
    const [ quota ] = await tx.insert(...)

    return {
      ...user,
      ...quota
    }
  })

  res
    .status(201)
    .json(userResponse)
} catch (e) {
  next(Error('...'))
}
This is good general practice for multiple inserts that depend on each other, since it sets up an "all or nothing" approach: if something fails, the database will go back to its previous state.
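For completeness, here is a sketch of how the two elided inserts above might be filled in, reusing the newUser, userColumns, quotaColumns and userConstants names from the earlier snippets (adjust knexInstance.fn.now() to however your knex instance is named):
try {
  const userResponse = await knexInstance.transaction(async tx => {
    // Both inserts run on `tx`, so they commit or roll back together.
    const [ user ] = await tx
      .insert(newUser)
      .into('users')
      .returning(userColumns)

    const [ quota ] = await tx
      .insert({
        user_id: user.id,
        date: knexInstance.fn.now(),
        amount: userConstants.INITIAL_QUOTA_AMOUNT
      })
      .into('quotas')
      .returning(quotaColumns)

    return { ...user, ...quota }
  })

  res.status(201).json(userResponse)
} catch (e) {
  next(Error("Something went wrong while inserting a user!"))
}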

Filter data on call to getHyperCubeData

When I run the following, I get all records from my table object (assuming I have 100 records in all). Is there a way to send a selection/filter? For example, I want to retrieve only the rows where department = 'procuring'.
table.getHyperCubeData('/qHyperCubeDef', [{
  qWidth: 8,
  qHeight: 100
}]).then(data => console.log(data));
I figured out the answer. Before getting the hypercube data, I need to get the field from the Doc class, then do the following:
.then(doc => doc.getField('department'))
.then(field => field.clear().then(() => field.select({qMatch: filter['procuring']})))
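Pieced together, the flow might look roughly like the sketch below, where app stands for the opened enigma.js Doc and table for the object whose hypercube is being paged; qTop and qLeft are added to make the page definition explicit, and the literal 'procuring' stands in for filter['procuring']:
app.getField('department')
  .then(field => field.clear().then(() => field.select({ qMatch: 'procuring' })))
  .then(() => table.getHyperCubeData('/qHyperCubeDef', [{
    qTop: 0,
    qLeft: 0,
    qWidth: 8,
    qHeight: 100
  }]))
  .then(data => console.log(data)); // now only the rows matching the selection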

How do I get a Wyam pipeline of documents based of a comma-delimited meta value from a previous pipeline?

I have a Wyam pipeline called "Posts" filled with documents. Some of these documents have a Tags meta value, which is a comma-delimited list of tags. For example, let's say it has three documents, with Tags meta of:
gumby,pokey
gumby,oscar
oscar,kermit
I want a new pipeline filled with one document for each unique tag found in all documents in the "Posts" pipeline. These documents should have the tag in a meta value called TagName.
So, the above values should result in a new pipeline consisting of four documents, with the TagName meta values of:
gumby
pokey
oscar
kermit
Here is my solution. This technically works, but I feel like it's inefficient, and I'm pretty sure there has to be a better way.
Documents(c => c.Documents["Posts"]
    .Select(d => d.String("Tags", string.Empty))
    .SelectMany(s => s.Split(",".ToCharArray()))
    .Select(s => s.Trim().ToLower())
    .Distinct()
    .Select(s => c.GetNewDocument(
        string.Empty,
        new List<KeyValuePair<string, object>>()
        {
            new KeyValuePair<string, object>("TagName", s)
        }
    ))
)
So, I'm calling Documents and passing in a ContextConfig which:
Gets the documents from "Posts" (I have a collection of documents)
Selects the Tags meta value (now I have a collection of strings)
Splits this on the comma (a bigger collection of strings)
Trims and lower-cases each value (still a collection of strings)
De-dupes it (a smaller collection of strings)
Then creates a new document for each value in the list, with an empty body and a TagName value for the string (I should end up with a collection of new documents)
Again, this works. But is there a better way?
That's actually not bad at all - part of the challenge here is getting the comma-separated list of tags into something that can be processed by a LINQ expression or similar. That part is probably unavoidable and accounts for 3 of the lines in your expression.
That said, Wyam does provide a little help here with the ToLookup() extension (see the bottom of this page: http://wyam.io/getting-started/concepts).
Here's how that might look (this code is from a self-contained LINQPad script and would need to be adjusted for use in a Wyam config file):
public void Main()
{
    Engine engine = new Engine();

    engine.Pipelines.Add("Posts",
        new PostsDocuments(),
        new Meta("TagArray", (doc, ctx) => doc.String("Tags")
            .ToLowerInvariant().Split(',').Select(x => x.Trim()).ToArray())
    );

    engine.Pipelines.Add("Tags",
        new Documents(ctx => ctx.Documents["Posts"]
            .ToLookup<string>("TagArray")
            .Select(x => ctx.GetNewDocument(new MetadataItems { { "TagName", x.Key } }))),
        new Execute((doc, ctx) =>
        {
            Console.WriteLine(doc["TagName"]);
            return null;
        })
    );

    engine.Execute();
}

public class PostsDocuments : IModule
{
    public IEnumerable<IDocument> Execute(IReadOnlyList<IDocument> inputs, IExecutionContext context)
    {
        yield return context.GetNewDocument(new MetadataItems { { "Tags", "gumby,pokey" } });
        yield return context.GetNewDocument(new MetadataItems { { "Tags", "gumby,oscar" } });
        yield return context.GetNewDocument(new MetadataItems { { "Tags", "oscar,kermit" } });
    }
}
Output:
gumby
pokey
oscar
kermit
A lot of that is just housekeeping to set up the fake environment for testing. The important part that you're looking for is this:
engine.Pipelines.Add("Tags",
new Documents(ctx => ctx.Documents["Posts"]
.ToLookup<string>("TagArray")
.Select(x => ctx.GetNewDocument(new MetadataItems { { "TagName", x.Key } }))),
// ...
);
Note that we still have to do the work of getting the comma-delimited tags list into an array - it's just happening earlier, up in the "Posts" pipeline.

How to remove ranking of query results

I have the following pg_search scope on my stories.rb model:
pg_search_scope :with_text,
                :against => :title,
                :using => { :tsearch => { :dictionary => "english" } },
                :associated_against => { :posts => :contents }
I want the query to return the results ignoring any ranking (I care only about the date the story was last updated, ordered DESC). I know this is an easy question for most of the people who view it, but how do I turn off the rank ordering in pg_search?
I'm the author of pg_search.
You could do something like this, which uses ActiveRecord::QueryMethods#reorder:
MyModel.with_text("foo").reorder("updated_at DESC")