I have the following in my User model:
def changeset(user, attrs) do
  user
  |> cast(attrs, [:login, :email])
  |> validate_required([:login, :email])
  |> unique_constraint(:login)
  |> unique_constraint(:email)
end
However, just setting the unique_constraint this way does not work: I'm still getting duplicate login and email values when testing my controller.
I got this working, but I had to set the :unique keyword argument to true in the model schema as well as create a unique index per column in the migration.
Is Ecto not checking the constraint itself in addition to the PostgreSQL unique index? Is there any point to adding a unique_constraint to the changeset/2 function?
The unique constraint works by relying on the database to check if the unique constraint has been violated or not and, if so, Ecto converts it into a changeset error.
— Ecto.Changeset.unique_constraint/3
That said, the reason unique_constraint/3 exists at all is to unify errors (turn what was received from the DB into a changeset error), which eases and standardizes error handling.
Ecto cannot check the constraint on its own without relying on the database.
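To make that concrete, here is a minimal sketch (the module and table names are assumptions): the migration creates the PostgreSQL unique indexes that unique_constraint/3 relies on, and a duplicate insert then comes back as a changeset error instead of an exception.

defmodule MyApp.Repo.Migrations.AddUniqueIndexesToUsers do
  use Ecto.Migration

  def change do
    # These database-level indexes do the actual uniqueness checking;
    # their default names match what unique_constraint(:login) / (:email) expect.
    create unique_index(:users, [:login])
    create unique_index(:users, [:email])
  end
end

# With the indexes in place, Repo.insert/1 returns an error tuple on a duplicate:
#
#   case Repo.insert(User.changeset(%User{}, attrs)) do
#     {:ok, user} -> {:ok, user}
#     {:error, changeset} ->
#       # changeset.errors includes e.g. {:login, {"has already been taken", _}}
#       {:error, changeset}
#   end

Without the migration there is nothing in the database to reject the duplicate, so there is no constraint error for unique_constraint/3 to convert.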
Is it possible to fetch only one field from the database using the SORM Framework?
What I want in plain SQL would be:
SELECT node_id FROM messages
I can't seem to reproduce this in SORM. I know this might be against how SORM is supposed to work, but right now I have two huge tables with different kinds of messages. I was asked to get all the unique node_ids from both tables.
I know I could just query both tables using SORM and parse through all the data, but I would like to put the database to work. Ideally, I would get only unique node_ids in a single DB call.
Right now with just querying everything and parsing it, it takes way too long.
There doesn't seem to be ORM support for what you want to do, unless node_id happens to be the primary key of your Message object:
val messages = Db.query[Message].fetchIds()
In this case you shouldn't need to worry about it being UNIQUE, since primary keys are by definition unique. Alternatively, you can run a custom SQL query:
Db.fetchWithSql[Message]("SELECT DISTINCT node_id FROM messages")
Note that the latter might be typed wrong: you'd have to try it against your database. You might need fetchWithSql[Int] or some other variation; it is unclear what SORM does when the primary key hasn't been queried.
I want to update an indexed field in Cassandra using the phantom Scala SDK, like:
this.update.where(_.id eqs folderId)
  .and(_.owner eqs owner)
  .modify(_.parent setTo parentId)
The parent field is an indexed field in the table, but the operation is not allowed; when I compile the code, I get a compile exception like:
[error] C:\User\src\main\scala\com\autodesk\platform\columbus\cassandra\DataItem.scala:161: could not find implicit value for evidence parameter of type com.websudos.phantom.column.ModifiableColumn[T]
The error is caused by updating a field that is indexed.
My workaround is to delete the record and insert a new one to "update" the record.
Is there a better way for the situation?
You are not allowed to update a field that is part of the primary key, because if you do so you are rendering Cassandra unable to ever re-compose the hash of the row you are updating.
Read here for details on the topic. In essence, if you had a HashMap[K, V] what you are trying to do is update the K, but in doing so you will never be able to retrieve the same V again.
So in Cassandra, just like in the HashMap, an update to an index is done with a DELETE and then a new INSERT. That's why phantom intentionally prevents you from compiling your query; I wrote those compile-time restrictions in for the specific purpose of preventing invalid CQL.
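If you keep the delete-then-insert approach, it can be written directly with phantom as one small helper. This is only a rough sketch against an assumed table definition (the DataItems/DataItem names, the changeParent helper, and the column set are assumptions; exact imports vary by phantom version, and an implicit session, keyspace and execution context are taken to be in scope):

import java.util.UUID
import scala.concurrent.Future
import com.websudos.phantom.dsl._

// Inside the table class, e.g. class DataItems extends CassandraTable[DataItems, DataItem]:
// since the indexed column cannot be modified in place, "moving" a folder means
// deleting the old row and inserting a copy that carries the new parent value.
def changeParent(item: DataItem, newParent: UUID): Future[ResultSet] =
  for {
    _ <- delete.where(_.id eqs item.id).and(_.owner eqs item.owner).future()
    inserted <- insert
      .value(_.id, item.id)
      .value(_.owner, item.owner)
      .value(_.parent, newParent)
      .future()
  } yield inserted

Note that the two statements are not atomic; if the pair must be applied together, a logged batch is the usual compromise.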
I write lots of code using SQLAlchemy on top of Postgres 9.3. I often have to do an insert after checking that the record does not already exist. To do so, I do the following:
c = session.query(ClassName).filter(ClassName.id == new.id).count()
if c == 0:
    session.add(new)
    session.commit()
This is sort of tedious. Is there any way to set up SQLAlchemy + Postgres to handle that checking automatically? I'm not necessarily looking for a uniqueness index in Postgres (which will throw an error if the record already exists) so much as an "add" operation that knows what to do if a record is already there.
Why not define your own "add" operation that knows what to do if a record is already there?
def addIfNotExist(session, new):
    # Same existence check as in the question, wrapped up in one place.
    c = session.query(ClassName).filter(ClassName.id == new.id).count()
    if not c:
        session.add(new)
        session.commit()
    else:
        pass  # put other code here if needed

addIfNotExist(session, new)
Without putting a unique index on id, this is the most direct thing I can think of, as there isn't (to my knowledge) a built-in way of doing what you want to do.
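If you are ever willing to lean on a unique or primary-key constraint on id after all, the other common pattern is to just try the insert and treat a violation as "already there", which also avoids the window between the check and the add. A sketch (the add_if_missing name is made up):

from sqlalchemy.exc import IntegrityError

def add_if_missing(session, new):
    # Attempt the insert; the database-level constraint on id rejects
    # duplicates, and we simply roll back in that case.
    try:
        session.add(new)
        session.commit()
    except IntegrityError:
        session.rollback()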
I get the following ArgumentNullException when I insert a rather simple entity into a table.
I don't think it matters, but the database is an SQL Server Compact .sdf file.
Value cannot be null.
Parameter name: source
at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source, Func`2 predicate)
at System.Data.Entity.Internal.InternalContext.WrapUpdateException(UpdateException updateException)
at System.Data.Entity.Internal.InternalContext.SaveChanges()
at System.Data.Entity.Internal.LazyInternalContext.SaveChanges()
at System.Data.Entity.DbContext.SaveChanges()
{{ the code below }}
Here is the code that I'm using:
var newMaterial = _localJobs.DbMaterials.Create();
newMaterial.JobID = a_job.ID;
newMaterial.MaterialName = material.Name;
newMaterial.UseType = material.UseType;
newMaterial.Length = material.Length;
newMaterial.Width = material.Width;
newMaterial.Thickness = material.Thickness;
newMaterial.DefaultLength = material.DefaultLength;
newMaterial.DefaultWidth = material.DefaultWidth;
newMaterial.DefaultThickness = material.DefaultThickness;
_localJobs.DbMaterials.Add(newMaterial);
_localJobs.SaveChanges(); // <- The exception occurs here.
I am properly populating every field with valid data. The only key here is JobID; it is a foreign-key GUID with an explicit relation to a table called Job. The proper record already exists in the database.
Here is my table schema.
It's a very old thread, but I am still going to post an answer to it because I had a similar issue and could not find an answer. Maybe it will help someone.
The answer for me was very silly: my ID column was not a primary key of the table.
This error occurs when you have a field that:
Does not allow nulls
Does not have a default value
Is not given a value in the insert
The reason for the error could be:
you are setting one of the fields to null
SaveChanges is also trying to save a row that was added earlier and is not shown in the code you posted.
Try using SQL Profiler; then you will see which insert statement is causing the error.
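Since the database here is SQL Server Compact, SQL Profiler may not be an option; a similar view of the failing statement can be had from EF itself. A minimal sketch, assuming EF6 and that _localJobs derives from DbContext:

// Log every SQL command EF sends, so the INSERT (and its parameter values)
// appears in the debug output just before the exception is thrown.
_localJobs.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);

_localJobs.DbMaterials.Add(newMaterial);
_localJobs.SaveChanges();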
Given you will have checked all of the sensible suggestions already provided, another possibility is that your model and DB don't match.
Do you have a migration outstanding? Double-check. I have seen this message before ;-)
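One quick way to check that from code (a sketch, assuming EF6 Code First and that _localJobs derives from DbContext):

// Returns false when the Code First model no longer matches the database schema,
// which usually means there is a migration you still need to create or apply.
bool inSync = _localJobs.Database.CompatibleWithModel(throwIfNoMetadata: false);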
I have the following query, called searchit:
SELECT 2 AS sourceID, BLOG_COMMENTS.bID, BLOG_TOPICS.Topic_Title,
BLOG_TOPICS.LFD, BLOG_TOPICS.LC,
BLOG_COMMENTS.Comment_Narrative
FROM BLOG_COMMENTS INNER JOIN BLOG_TOPICS
ON BLOG_COMMENTS.bID = BLOG_TOPICS.bID
WHERE (BLOG_COMMENTS.Comment_Narrative LIKE #Phrase)
This query executes AND returns the correct results in the query builder!
HOWEVER, the query needs to run in code-behind, so I have the following line:
DataTable blogcomments = btad.searchit(aphrase);
There are no null fields in any row of any column in EITHER of the tables. The tables are small enough I can easily detect null data. Note that bID is key for blog_topics and cID is key for blog comments.
In any case, when I run this I get the following error:
Failed to enable constraints. One or more rows contain values
violating non-null, unique, or foreign-key constraints.
The tables have a 1-to-N relationship: many comments for each blog entry. IF I run the query with DISTINCT and remove Comment_Narrative from the return fields, it returns data correctly (but I need the other columns!). However, when I return the other columns, I get the above error.
I think this tells me that there is a constraint on the return table that I did not put there, so it must somehow be inheriting that constraint from the query itself, because one of the tables happens to have a primary key defined (which it MUST have). But why does the query work fine in the query builder? The query builder does not care that bID is duplicated in the result (and it should not), but the code-behind DOES care.
Addendum:
Just as tests,
I removed the bID from the return list and I still get the error.
I removed the primary key from blog_topics.bID and I get the same error.
This kinda tells me that it's not the fact that my bID is duped that is causing the problem.
Another test:
I went into the designer code (I know it's nasty, I'm just desperate).
I added the following:
// zzz
try
{
    this.Adapter.Fill(dataTable);
}
catch ( global::System.Exception ex )
{
}
Oddly enough, when I run it, I get the same error as before AND it doesn't show the changes I've made in the error message:
Line 13909: }
Line 13910: BPLL_Dataset.BLOG_TOPICSDataTable dataTable = new BPLL_Dataset.BLOG_TOPICSDataTable();
Line 13911: this.Adapter.Fill(dataTable);
Line 13912: return dataTable;
Line 13913: }
I'm stumped... unless maybe it sees that I'm not doing anything in the try/catch and is optimizing it away for me.
Another addendum:
Suspecting that it was ignoring the test code I added to the designer, I added something to the catch. It produces the SAME error and acts like it does not see this code. (Well, okay, it DOES NOT see this code, because it prints out the same as before into the browser.)
// zzz
try
{
    this.Adapter.Fill(dataTable);
}
catch ( global::System.Exception ex )
{
    System.Web.HttpContext.Current.Response.Redirect("errorpage.aspx");
}
The thing is, when I made the original post, I was ALREADY trying to do a workaround. I'm not sure how far I can afford to go down the rabbit hole. Maybe I read the whole mess into C# and do all the joins and crap myself. I really hate to do that, because I've only recently gotten out of the habit, but I perceive I'm making a good-faith effort to use the tool the way God and Microsoft intended. From wit's end, tff.
You don't really show how you're running this query from C#... but I'm assuming it's either straight text in a SqlCommand or it's being done by some ORM... Have you attempted writing this query as a stored procedure and calling it that way? The stored procedure would be easier to test and run by itself with sample data.
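For instance, the question's query wrapped as a procedure might look roughly like this (the dbo.SearchIt name, the parameter type, and the @Phrase T-SQL parameter syntax are assumptions):

CREATE PROCEDURE dbo.SearchIt
    @Phrase NVARCHAR(255)
AS
BEGIN
    -- Same query as in the question, just parameterised as a procedure,
    -- so it can be tested on its own, e.g. EXEC dbo.SearchIt @Phrase = N'%foo%'
    SELECT 2 AS sourceID, BLOG_COMMENTS.bID, BLOG_TOPICS.Topic_Title,
           BLOG_TOPICS.LFD, BLOG_TOPICS.LC,
           BLOG_COMMENTS.Comment_Narrative
    FROM BLOG_COMMENTS INNER JOIN BLOG_TOPICS
        ON BLOG_COMMENTS.bID = BLOG_TOPICS.bID
    WHERE BLOG_COMMENTS.Comment_Narrative LIKE @Phrase
END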
Given that the error mentions null values, I would presume that, if it is a problem with the query and not some other element of your code, it would have to be on one of the following fields:
BLOG_COMMENTS.bID
BLOG_TOPICS.bID
BLOG_COMMENTS.Comment_Narrative
If any of those fields are nullable, then you should be doing a COALESCE or an ISNULL on them before using them in any comparison or join. It's situations like these that explain why most DBAs prefer to have as few nullable columns in tables as possible - they cause overhead and are prone to errors.
If that still doesn't fix your problem, then COALESCE/ISNULL all fields that are nullable and are being returned by this query. Take all null values out of the equation and just get the thing working and then, if you really need the null values to be null, go back through and remove the COALESCE/ISNULLs one at a time until you find the culprit.
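Applied to the query above, that would look something like this sketch (the '' and 0 replacement defaults are placeholders to adjust to your column types, and #Phrase is kept exactly as in the question):

SELECT 2 AS sourceID, ISNULL(BLOG_COMMENTS.bID, 0) AS bID, BLOG_TOPICS.Topic_Title,
       BLOG_TOPICS.LFD, BLOG_TOPICS.LC,
       ISNULL(BLOG_COMMENTS.Comment_Narrative, '') AS Comment_Narrative
FROM BLOG_COMMENTS INNER JOIN BLOG_TOPICS
    ON ISNULL(BLOG_COMMENTS.bID, 0) = ISNULL(BLOG_TOPICS.bID, 0)
WHERE (BLOG_COMMENTS.Comment_Narrative LIKE #Phrase)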
My problem came from ignorance and a bit of dullness. I did not realize that just because a field is a key in the SQL table, it does not have to stay a key in the TableAdapter. If one has a key field defined in the SQL table and then creates a TableAdapter, the corresponding field in the adapter will also be a key. All I had to do was unset the key field in the TableAdapter and it worked.
Solution:
Select the key field in the adapter.
Right click
Select "Delete Key" (keeps the field, but removes the "key" icon)
That's it.