bar__icontains QuerySet empty when querying for old Google Cloud Datastore records

I have a model Foo which has a field called bar.
from django.db import models

class Foo(models.Model):
    bar = models.CharField(max_length=70)
Given an existing instance of Foo whose bar field is set to 'qux', the following query returns an empty QuerySet:
Foo.objects.filter(bar__icontains="qux")
However, if I reference/save the previous instance or I create/save a new Foo, I am able to find it using a similar query.
So, how can I find old, existing records using icontains?
Djangae's documentation makes specific reference to using contains and icontains, but I see no mention of this particular behavior or how to address it. (I do see the index being added to djangaeidx.yaml) I also see nothing in the Migration documentation which makes me think I need to explicitly add an index or similar.

The answer can be found in the 0.9.10 migration guide.
In this situation, you'd need to run something like:
defer_iteration(Foo.objects.all(), Foo.save, _target="your-new-app-version")
in order to add the necessary indexes to existing records.
While this worked, it definitely feels heavy handed to me. I'd be happy to hear from anyone else who might have an alternative solution.
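For what it's worth, the deferred task is essentially just re-saving every instance so that Djangae writes the new contains/icontains index entries (which is also why saving a record by hand makes it findable, as noted above). For a small table, the same effect can be had with a plain loop run from a shell; this is only a rough sketch of that idea, not Djangae's API:

# Re-save every existing Foo so the special contains/icontains index
# entries get written for records created before the index existed.
# Fine for small tables; use defer_iteration for anything large.
for foo in Foo.objects.all().iterator():
    foo.save()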

Disable global query filters *inside* another global query filter

Currently using EF Core 3.1 (although upgrading to EF Core 5 soon, so if this becomes easier in that version, I'm still interested).
I'm implementing some global query filters to restrict access to entity data based on various criteria.
One of the filters operates on a type called "Encounter". An Encounter is linked to a single Person (via foreign key PersonID), but a Person can have many Encounters. The Person record has a navigation property to all linked encounters via Person.Encounters.
The query filter for Encounter is to restrict records based on the value of a particular property on Encounter (call it EncounterType). This works fine if my query filter is something like this:
x => x.EncounterType == "MyType"
However, I need to extend this logic so that an Encounter is allowed/loaded if any encounter linked to the Person meets the criteria.
So, the query filter for Encounter needs to be something like this:
x => x.Person.Encounters.Any(y => y.EncounterType == "MyType")
This does not currently work, because we run into a cycle and a StackOverflowException [1]; the global query filter for Encounter ends up expanding itself infinitely. (I think it's because we access the Person.Encounters navigation property and evaluate the same Encounter query filter for each encounter in Person.Encounters.)
What I really want to do is to completely disable the global query filter for any navigation properties used in this particular expression. In this scenario, I want to consider all other Encounters linked to the Person, without further filtering.
I know when actually running a query, I can call IgnoreQueryFilters(). I want something like that, but available inside the expression or when adding the query filter with HasQueryFilter().
Is this possible? Is there any other way I can accomplish this with global query filters?
[1] Which, while frustrating, is pretty cool for me...I've never posted a question about an actual stack overflow on StackOverflow :)

Is it possible to prevent the reading and/or setting of a field value with DBIx::Class?

I'm working in a project that uses Catalyst and DBIx::Class.
I have a requirement where, under a certain condition, users should not be able to read or set a specific field in a table (e.g. the last_name field in a list of users that will be presented and may be edited by the user).
Rather than applying the conditional logic to every part of the project where that table field is read or set, and risking existing or future cases where the logic is missed, is it possible to implement the logic directly in the DBIx::Class-based module, so that it never returns or changes the value of that field when the condition is met?
I've been trying to find the answer, and I'm still reading, but I'm somewhat new to DBIx::Class and its documentation. Any help would be highly appreciated. Thank you!
I'd use an around Moose method modifier on the column accessor generated by DBIC.
This won't be a real security solution, as you can still access the data without going through the Result class, for example when using HashRefInflator.
The same applies to calling get_column.
Real security would be at the database level, with column-level security and by not allowing the database user used by the application to fetch that field.
Another solution I can think of is an additional Result class for that table that doesn't include the column, perhaps even defaulting to it and only using the one that includes the column when the user has a special role.

What is a secure way to prevent a user from updating specific fields in a MongoDB document?

I am trying to prevent users from updating certain fields in a mongodb object for which they are allowed to edit every other field. For instance the user should be able to edit/add/remove all fields except the field "permissions". My current approach is to test each key the user is trying to "$set" and see if it starts with the substring "permissions" (to cover dot notation). Example in python:
def sanitize_set(son):
    return {"$set": {k: v for k, v in son.get("$set", {}).items()
                     if not k.startswith("permissions")}}
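For reference, a rough sketch of how the sanitizer might be applied before an update reaches MongoDB, using pymongo (the collection name, widget_id and user_supplied_update are made-up placeholders):

# Hypothetical usage: strip any "permissions*" keys from the user-supplied
# update document, then apply whatever is left.
safe_update = sanitize_set(user_supplied_update)
if safe_update["$set"]:  # skip the round trip if nothing editable remains
    db.widgets.update_one({"_id": widget_id}, safe_update)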
This approach is beautifully simple and seems to work. I wanted to reach out to the community to see if anyone else has tackled this issue before or sees obvious flaws in my approach. Thank you,
Joshua
Without seeing some example data and an explanation of what should and shouldn't be updatable, it's hard to say for sure, but the way I would prevent this is to not let the user directly supply the fields that will be updated. For example, say you had a function called update_employee which updated information in an employee document. If you implement it like this:
def update_employee(employee):
    db.employees.update_one({"_id": session.user_id}, {"$set": employee})
Whatever gets passed in as the employee object is what will be updated. Instead you could create the update object using the values passed in like so:
def update_employee(employee):
    updated_employee = {
        "email": employee["email"],
        "address": employee["address"],
        "phone": employee["phone"],
    }
    db.employees.update_one({"_id": session.user_id}, {"$set": updated_employee})
This way you have complete control over what is being updated in your database. So if an extra field (such as salary) is passed in, it will be ignored.
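A quick, hypothetical illustration of that behaviour with the sketch above:

# "salary" is never copied into updated_employee, so it never reaches the database.
update_employee({"email": "sam@example.com", "address": "12 Main St",
                 "phone": "555-0100", "salary": 99999})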
Since (as far as I know) MongoDB does not have a field lock, what you can do in this case is create a routine to pick up the specific document, present it to the user in any way you wish, but simply only show the fields they are allowed to edit.
You can present the entire JSON representation to the user (editor) and have a routine which simply does not allow changes to the fields that are locked. In other words, if you don't want the field {"name": "Sam"} to be edited, then even if the editor changes the value to {"name": "Joe"}, just kick it out before updating and only update the fields which are allowed to be edited. Since it is all done in memory before the actual update (upsert), you have total control over what is and is not edited.
If you follow a naming scheme with a prefix, say e_address, where you have decided that any field starting with e_ is editable, the job is that much easier programmatically (see the short sketch at the end of this answer).
Even in user-defined roles I have not seen any possibility of locking specific fields in a collection. (I could be wrong here.)
The programming constructs here are simple though.
A. Read the fields into memory
B. The editor makes their edits
C. Update only the fields which are allowed to be edited; ignore any other changes
(I kept this answer generic as I do not use Python, though the construct should apply to any language.)
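For concreteness, the prefix idea above might look something like this in Python (the field names and values are made up):

# Keep only the keys the editor is allowed to change, based on the e_ prefix.
edited = {"e_address": "12 Main St", "name": "Joe", "salary": 99999}
allowed_changes = {k: v for k, v in edited.items() if k.startswith("e_")}
# allowed_changes == {"e_address": "12 Main St"}; "name" and "salary" are dropped.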

In what scenarios would I need to use the CREATEREF, DEREF and REF keywords?

This question is about why I would use the above keywords. I've found plenty of MSDN pages that explain how. I'm looking for the why.
What query would I be trying to write that means I need them? I ask because the examples I have found appear to be achievable in other ways...
To try and figure it out myself, I created a very simple entity model using the Employee and EmployeePayHistory tables from the AdventureWorks database.
One example I saw online demonstrated something similar to the following Entity SQL:
SELECT VALUE
DEREF(CREATEREF(AdventureWorksEntities3.Employee, row(h.EmployeeID))).HireDate
FROM
AdventureWorksEntities3.EmployeePayHistory as h
This seems to pull back the HireDate without having to specify a join?
Why is this better than the SQL below (that appears to do exactly the same thing)?
SELECT VALUE
h.Employee.HireDate
FROM
AdventureWorksEntities3.EmployeePayHistory as h
Looking at the above two statements, I can't work out what extra the CREATEREF, DEREF bit is adding since I appear to be able to get at what I want without them.
I'm assuming I have just not found the scenarios that demonstrate the purpose. I'm assuming there are scenarios where using these keywords is either simpler or the only way to accomplish the required result.
What I can't find are those scenarios....
Can anyone fill in the gap? I don't need entire sets of SQL. I just need a starting point to play with i.e. a brief description of a scenario or two... I can expand on that myself.
Look at this post
One of the benefits of references is that it can be thought as a ‘lightweight’ entity in which we don’t need to spend resources in creating and maintaining the full entity state/values until it is really necessary. Once you have a ref to an entity, you can dereference it by using DEREF expression or by just invoking a property of the entity
TL;DR - REF/DEREF are similar to C++ pointers. They are references to persisted entities (not entities which have not yet been saved to a data source).
Why would you use such a thing? A reference to an entity uses less memory than having the DEREF'ed (or expanded, or filled, or instantiated) entity. This may come in handy if you have a bunch of records that have image information and image data (say, 4 GB files stored in the database). If you didn't use a REF and you pulled back 10 of these entities just to get the image metadata, you'd quickly fill up your memory.
I know, I know. It'd be easier just to pull back the metadata in your query, but then you lose the point of what REF is good for :-D

How do you create a nested form using HTML::FormHandler in Catalyst?

I have a situation where we have a base recordset with about one hundred thousand records. And, we are creating a separate application that shares some of the dataset, but not most, so we are creating a detail table that has a one to one relationship with the original table. What I want to do is pull the existing information from the original table and display it as read only, but I want the fields in the detail table to be writeable.
I've started by creating a listing with the contents of the original table. I want the user to be able to seamlessly hit "edit" next to an entry, be taken to the form to create the detail record, click save, and have it update or create the record.
So, what is the best way to do this?
This question is old, but in order to help future searchers...
Including a related table in a form is handled automatically if your DBIx::Class result sources are set up correctly. A related table can be pulled into the form by simply defining a compound field named after the relationship and defining subfields with the names of the columns in the related table.
See: HTML::FormHandler::Manual::Fields
And: HTML::FormHandler::Manual::Database
This isn't a real answer, in that I can't really provide you with a solution, but I can hopefully provide you with somewhere to start. I think you need to define a form in HTML::FormHandler which represents a single row from your detail table, then build your read-only form from multiple instances of that single-row form. This article describes the problem and a partial solution much better than I have:
http://catdev.blogspot.com/2009/05/defining-form-processing-problem.html
I'm fairly new to Perl and HTML::FormHandler, and there's a good chance that there is a better way of doing this. It's a problem that I have solved 'manually' in PHP before, but one which I am sure I will come across in Perl.