Query records before insertion with Ecto (similar to an AR callback)

I'm new to Elixir and Phoenix (less than 10 days) but very excited about it, and like many others I come from a Rails background.
I understand Ecto is not AR and that callbacks have been deprecated or removed, but I need to add a custom validation that should run only on creation and needs to perform a query.
Here's what my Reservation schema basically looks like:
schema "reservations" do
  field :ends_at, :utc_datetime
  field :name, :string, null: false
  field :starts_at, :utc_datetime
  field :user_id, :id
end
and then I have another schema Slot, which looks like this:
schema "slots" do
  field :ends_at, :utc_datetime
  field :name, :string, null: false
  field :starts_at, :utc_datetime
  field :admin_id, :id
end
Whenever I add a new reservation, I need to query my DB to check whether there are any slots with matching ends_at and starts_at. If there are, I need to prevent the record from being saved and add an error to it (similar to what we accomplish in Rails with throw :abort and errors.add).
Can someone please shed some light on this? What's the Ecto way of doing it?
Best regards

*edit: added examples using separate changesets for creation and update
You can add a custom validation function to your changeset pipeline and do DB queries inside it.
I haven't run this code, but something like this should work:
# separate changeset for creation
def create_changeset(struct, params) do
  struct
  |> cast(params, [...list of fields...])
  |> unique_constraint(:name) # let's say name has to be unique
  |> validate_slots() # custom validation
end

# separate changeset for updates, no slot check
def update_changeset(struct, params) do
  struct
  |> cast(params, [...list of fields...])
  |> unique_constraint(:name)
end
# assumes `import Ecto.Query` for the `from` macro
def validate_slots(changeset) do
  starts_at = get_field(changeset, :starts_at)
  ends_at = get_field(changeset, :ends_at)
  slots = Repo.all(from s in Slot, where: s.starts_at == ^starts_at and s.ends_at == ^ends_at)

  if Enum.empty?(slots) do
    changeset
  else
    add_error(changeset, :starts_at, "has slot with similar starts_at/ends_at")
  end
end
# ---- using the changesets
# creation
%Reservation{} |> Reservation.create_changeset(params) |> Repo.insert()
# update: start from the existing record, not an empty struct
reservation |> Reservation.update_changeset(params) |> Repo.update()
Although, from the look of it, you should probably normalize starts_at and ends_at into a separate table (booking_time_frames, say) and add a unique index to it.
Otherwise you might end up with more types of bookings and then have to check starts_at/ends_at across three tables, and so on.

Related

Why is unique constraint not working in Ecto?

I have the following in my User model:
def changeset(user, attrs) do
  user
  |> cast(attrs, [:login, :email])
  |> validate_required([:login, :email])
  |> unique_constraint(:login)
  |> unique_constraint(:email)
end
However, just setting the unique_constraint this way does not work. I'm still getting duplicate login and email values when testing my controller.
I got this working, but I had to set the :unique keyword argument to true in the model schema as well as create a unique index per column in the migration.
Is Ecto not checking the constraint itself in addition to the PostgreSQL unique index? Is there any point in adding unique_constraint to the changeset/2 function?
The unique constraint works by relying on the database to check if the unique constraint has been violated or not and, if so, Ecto converts it into a changeset error.
— Ecto.Changeset.unique_constraint/3
That said, the reason unique_constraint/3 exists is to unify errors: it makes a changeset error out of what was received from the DB, which eases and standardizes error handling.
Ecto cannot check the constraint on its own without relying on the DB: any check it ran before inserting could be invalidated by a concurrent insert between the check and the write.
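The same pattern applies outside Ecto: the database index does the real enforcement, and the library's job is only to translate the violation into a friendly error. A minimal sketch of that translation in Python with sqlite3 (table and function names are illustrative, not Ecto's):

```python
import sqlite3

def insert_user(conn, login, email):
    """Attempt the insert and translate a DB-level unique violation
    into a per-field error, as unique_constraint/3 does for changesets."""
    try:
        with conn:  # transaction: commit on success, rollback on error
            conn.execute(
                "INSERT INTO users (login, email) VALUES (?, ?)", (login, email)
            )
        return {"ok": True, "errors": {}}
    except sqlite3.IntegrityError as exc:
        # message looks like "UNIQUE constraint failed: users.login"
        field = str(exc).rsplit(".", 1)[-1]
        return {"ok": False, "errors": {field: "has already been taken"}}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT UNIQUE, email TEXT UNIQUE)")
first = insert_user(conn, "alice", "a@example.com")   # succeeds
second = insert_user(conn, "alice", "b@example.com")  # duplicate login
```

Only the database can make this check race-free; two concurrent application-level SELECT-then-INSERT checks could both pass.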

MS Access Subform conditional format based on data on a table

My database has a form for the Lab to input the results of their tests on each batch we produce.
I have a query as the subform's data source, and a date combo box so the lab guys can see the production for the day. My problem is that when they enter the value of a test (say batch 5, product A, test result = 10), I need the subform to apply conditional formatting based on a min/max specification located in another table. Say this table allows product A a range of 11-15: then, when the technician enters 10, he should be prompted to quarantine the product. One easy way is to mark the cell in bold red.
This is my Subform layout:
Product | Batch # | Test result
My Spec sheet is something like this:
Product | Min | Max
I tried using DLookup without success; maybe my syntax is wrong.
Any recommendations on how to tackle this?
One nice idea that came to mind would be a button to automatically notify my quality guy.
Thank you.
Updated for easier understanding
The Autonumber ID field in Products is set as the primary key, yet the Product field in Batch is text. Change it to a Long Integer number type; this is done correctly in the [Product Specification] table.
Once that field type is fixed, you can include the [Product Specification] table in LabBatchQuery, and the specification fields will be available for direct reference in Conditional Formatting.
SELECT Batch.*, [Product Specification].*
FROM [Product Specification] RIGHT JOIN Batch ON [Product Specification].[Product Code] = Batch.Product
WHERE (((Batch.[Production Date])=[Forms]![TNA Batch Analysis]![Today Date]));
Change the RecordSource to simply reference LabBatchQuery instead of the SQL statement. Or, instead of having a dynamic parameter in the query object (I never do), put it in the RecordSource SQL statement: SELECT * FROM LabBatchQuery WHERE [Production Date]=[Forms]![TNA Batch Analysis]![Today Date];
The Conditional Formatting rules will be like: Value Is Not Between [pH Min] And [pH Max]
Change the [Today Date] textbox AfterUpdate code to: Me.[LabBatchQuery subform].Requery
I advise not using spaces or punctuation/special characters (underscore being the only exception) in your naming convention.
I also recommend not setting lookups in tables.
If Products and [Product Specification] have a 1-to-1 relationship, why not combine them into one table?
You can use Form_BeforeUpdate to validate the data before it is saved.
The code below will reject pH values that are outside the range defined in the Product Specification table; you can do the same for the other fields.
Put this code in your subform:
Private Sub Form_BeforeUpdate(Cancel As Integer)
    Dim pH_Min As Double, pH_Max As Double
    If Nz(Product, -1) <> -1 And Nz(Me.Batch__, "") <> "" And Nz(Me.pH, -1) <> -1 Then
        pH_Min = DLookup("[pH Min]", "Product Specification", "[Product Code]=" & Product)
        pH_Max = DLookup("[pH Max]", "Product Specification", "[Product Code]=" & Product)
        If Me.pH.Value < pH_Min Or Me.pH.Value > pH_Max Then
            MsgBox "Invalid Value", vbInformation
            Cancel = True
        End If
    End If
End Sub
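Independent of Access, the check itself is just "look up the product's min/max and compare". A sketch in Python, with a hypothetical in-memory spec table standing in for [Product Specification]:

```python
# Hypothetical in-memory spec table: product -> (min, max),
# mirroring the [Product Specification] table from the question.
SPECS = {"A": (11.0, 15.0), "B": (5.0, 9.0)}

def within_spec(product, value):
    """True if the test result is in range; False means quarantine
    (the bold-red / Cancel = True case in the Access form)."""
    lo, hi = SPECS[product]
    return lo <= value <= hi

print(within_spec("A", 10))  # False -> prompt to quarantine, as in the batch-5 example
print(within_spec("A", 12))  # True  -> within spec
```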

Within transaction: Use inserted id for reference to next insertion

I want to create a wallet for a user when the user itself is being created. Ideally I want this to happen within a transaction, as one may not exist without the other.
I want something like this, in the Ecto paradigm:
BEGIN;
INSERT INTO albums [...];
INSERT INTO album_images (album_id, image_id) VALUES (lastval(), [...]);
COMMIT;
Taken from https://github.com/elixir-ecto/ecto/issues/2154.
How would I achieve this?
Consider using Ecto.Multi and, at the end, handing everything accumulated in the Multi to Repo.transaction().
Ecto.Multi helps you organize this flow because Multi.run receives the results of the previous steps; that's why you can safely use the inserted id: if the first operation fails, the second is never run and the whole transaction is rolled back.
A clean way to write it is to put the business operations in separate functions, where the second one accepts the map of previous results keyed by step name.
Multi.new()
|> Multi.insert(:albums, insert_albums(arguments))
|> Multi.run(:album_images, AlbumImage, :insert, [])
|> Repo.transaction()
where AlbumImage.insert might look like:
defmodule AlbumImage do
  def insert(%{albums: albums}) do
    # code: `albums` is the record inserted in the :albums step
  end
end
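For comparison, here is the same idea outside Ecto: use the id generated by the first insert for the second insert, with both statements inside one transaction so neither row exists without the other. A sketch in Python with sqlite3, where cursor.lastrowid plays the role of lastval() (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE albums (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute(
    "CREATE TABLE album_images (album_id INTEGER REFERENCES albums(id), image_id INTEGER)"
)

def create_album_with_image(conn, title, image_id):
    # `with conn` wraps both inserts in one transaction: commit on success,
    # rollback on error, so neither row can exist without the other.
    with conn:
        cur = conn.execute("INSERT INTO albums (title) VALUES (?)", (title,))
        album_id = cur.lastrowid  # id generated by the insert above
        conn.execute(
            "INSERT INTO album_images (album_id, image_id) VALUES (?, ?)",
            (album_id, image_id),
        )
    return album_id

album_id = create_album_with_image(conn, "Holiday", 42)
```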

Does ActiveRecord#first method always return record with minimal ID?

Env: Rails 4.2.4, Postgres 9.4.1.0
Is there a guarantee that ActiveRecord#first will always return the record with the minimal ID, and ActiveRecord#last the one with the maximum ID?
I can see from the Rails console that for these two methods the appropriate ORDER BY ... ASC/DESC is added to the generated SQL. But the author of another SO thread, "Rails with Postgres data is returned out of order", says that the first method returned NOT the first record...
ActiveRecord first:
2.2.3 :001 > Account.first
Account Load (1.3ms) SELECT "accounts".* FROM "accounts" ORDER BY "accounts"."id" ASC LIMIT 1
ActiveRecord last:
2.2.3 :002 > Account.last
Account Load (0.8ms) SELECT "accounts".* FROM "accounts" ORDER BY "accounts"."id" DESC LIMIT 1
==========
ADDED LATER:
So, I did my own investigation (based on D-side's answer) and the answer is NO. Generally speaking, the only guarantee is that first returns the first record from a collection. It may, as a side effect, add an ORDER BY primary-key clause to the SQL, but that depends on whether the records were already loaded into cache/memory.
Here's methods extraction from Rails 4.2.4:
/activerecord/lib/active_record/relation/finder_methods.rb
# Find the first record (or first N records if a parameter is supplied).
# If no order is defined it will order by primary key.
# ---> NO, IT IS NOT. <--- This comment is WRONG.
def first(limit = nil)
  if limit
    find_nth_with_limit(offset_index, limit)
  else
    find_nth(0, offset_index) # <---- `find_nth_with_limit` (which adds ORDER BY) is only reached when `loaded?` is false
  end
end

def find_nth(index, offset)
  if loaded?
    @records[index] # <--- Here's the `problem`: the record is just returned by index, no ORDER BY is applied
  else
    offset += index
    @offsets[offset] ||= find_nth_with_limit(offset, 1).first
  end
end
Here are a few examples to make this clear:
Account.first # True, records are ordered by ID
a = Account.where('free_days > 1') # False, No ordering
a.first # False, no ordering, record simply returned by #records[index]
Account.where('free_days > 1').first # True, Ordered by ID
a = Account.all # False, No ordering
a.first # False, no ordering, record simply returned by #records[index]
Account.all.first # True, Ordered by ID
Now some examples with a has-many relationship:
Account has_many AccountStatuses, AccountStatus belongs_to Account
a = Account.first
a.account_statuses # No ordering
a.account_statuses.first
# Here is a tricky part: sometimes it returns #record[index] entry, sometimes it may add ORDER BY ID (if records were not loaded before)
Here is my conclusion:
Treat first as returning the first record from an already-loaded collection (which may have been loaded in any order, i.e. unordered). If I want to be sure that first returns the record with the minimal ID, then the collection I apply first to must be appropriately ordered beforehand.
And the Rails documentation for the first method is simply wrong and needs to be rewritten.
http://guides.rubyonrails.org/active_record_querying.html
1.1.3 first
The first method finds the first record ordered by the primary key. <--- No, it is not!
If sorting is not chosen, the rows will be returned in an unspecified
order. The actual order in that case will depend on the scan and join
plan types and the order on disk, but it must not be relied on. A
particular output ordering can only be guaranteed if the sort step is
explicitly chosen.
http://www.postgresql.org/docs/9.4/static/queries-order.html (emphasis mine)
So ActiveRecord actually adds ordering by primary key, whichever that is, to keep the result deterministic. Relevant source code is easy to find using pry, but here are extracts from Rails 4.2.4:
# show-source Thing.all.first
def first(limit = nil)
  if limit
    find_nth_with_limit(offset_index, limit)
  else
    find_nth(0, offset_index)
  end
end

# show-source Thing.all.find_nth
def find_nth(index, offset)
  if loaded?
    @records[index]
  else
    offset += index
    @offsets[offset] ||= find_nth_with_limit(offset, 1).first
  end
end

# show-source Thing.all.find_nth_with_limit
def find_nth_with_limit(offset, limit)
  relation = if order_values.empty? && primary_key
               order(arel_table[primary_key].asc) # <-- ATTENTION
             else
               self
             end
  relation = relation.offset(offset) unless offset.zero?
  relation.limit(limit).to_a
end
It may change depending on your database engine: with MySQL, first always returns the minimal ID, but it does not work the same on PostgreSQL. I had several issues with this when I was a newbie; my app worked as expected locally with MySQL, but everything was messed up when deployed to Heroku with PostgreSQL. So, to avoid issues with PostgreSQL, always order your records by id before taking the first:
Account.order(:id).first
The above ensures the minimal ID on MySQL, PostgreSQL, and any other database engine, as you can see in the query:
SELECT `accounts`.* FROM `accounts` ORDER BY `accounts`.`id` ASC LIMIT 1
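The underlying point is plain SQL: the explicit ORDER BY id ASC LIMIT 1 is what guarantees the minimal id, regardless of insertion or on-disk order. A quick demonstration in Python, with sqlite3 standing in for any engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
# Insert rows in deliberately non-sequential id order.
conn.executemany(
    "INSERT INTO accounts (id, name) VALUES (?, ?)",
    [(3, "c"), (1, "a"), (2, "b")],
)

# Equivalent of Account.order(:id).first: the explicit ORDER BY is what
# makes "first row" mean "minimal id".
row = conn.execute("SELECT id, name FROM accounts ORDER BY id ASC LIMIT 1").fetchone()
print(row)  # (1, 'a')
```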
I don't think that answer you reference is relevant (even to the question it is on), as it refers to non-ordered querying, whereas first and last do apply an order based on id.
In some cases, where you are applying your own group on the query, you cannot use first or last because an order by cannot be applied if the grouping does not include id, but you can use take instead to just get the first row returned.
There have been versions where first and/or last did not apply the order (one of the late Rails 3 releases on PostgreSQL, as I recall), but those were bugs.

Running cleaning/validation code before committing in sqlalchemy

I'm totally new to PostgreSQL and SQLAlchemy, and I'm trying to figure out how to run validation/cleaning code on a SQLAlchemy model before it is committed to the database. The idea is to ensure data consistency beyond the standard type enforcement that comes built into SQL databases. For example, if I have a User model built on SQLAlchemy's models,
class User(db.Model):
    ...
    email = db.Column(db.String())
    zipCode = db.Column(db.String())
    lat = db.Column(db.Float())
    lng = db.Column(db.Float())
    ...
Before committing this record, I want to:
trim any leading & trailing spaces off the email field
ensure that the zip code is a 5-digit string of numbers (I'll define a regex for this)
automatically look up the corresponding latitude/longitude of the zip code and save those in the lat & lng fields.
other things that a database schema can't enforce
Does SQLAlchemy provide an easy way to provide Python code that is guaranteed to run before committing to do arbitrary tasks like this?
I found the easiest approach is to hook into the update and insert events: http://docs.sqlalchemy.org/en/latest/orm/events.html
from sqlalchemy import event

def my_before_insert_listener(mapper, connection, target):
    target.email = target.email.strip()  # Python strings use strip(), not trim()
    # ...all the other cleanup

# associate the listener function with User,
# to execute during the "before_insert" hook
event.listen(User, 'before_insert', my_before_insert_listener)
You can also create custom SQLAlchemy types that do this sort of thing.
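The cleaning steps themselves are plain Python and can be called from such a before_insert listener. A sketch covering the email trim and the 5-digit zip check (function and names are illustrative; the lat/lng geocoding lookup is omitted since it needs an external data source):

```python
import re

ZIP_RE = re.compile(r"\d{5}")

def clean_user_fields(email, zip_code):
    """Normalize fields before insert: trim the email, validate the zip.
    Raises ValueError on bad data; returns the cleaned (email, zip_code)."""
    email = email.strip()  # trim leading and trailing spaces
    if not ZIP_RE.fullmatch(zip_code):
        raise ValueError(f"zip code must be a 5-digit string, got {zip_code!r}")
    return email, zip_code

print(clean_user_fields("  bob@example.com ", "02139"))
# ('bob@example.com', '02139')
```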