Phoenix form validation check not showing up, but non-null constraint error from Postgres

When trying to build form validations, my error shows up in the form when I use a username exceeding 20 characters, but not when I enter nothing. Instead I get a Postgrex.Error view complaining that the insert violates a non-null constraint.
# user.ex
def changeset(model, params \\ %{}) do
  model
  |> cast(params, ~w(name username), [])
  |> validate_length(:username, min: 1, max: 20)
end
This is probably because of the migration:
def change do
  create table(:users) do
    add :name, :string
    add :username, :string, null: false
    add :password_hash, :string
    timestamps
  end

  create unique_index(:users, [:username])
end
Going through the Programming Phoenix book, which is unfortunately getting a little outdated, I can't find a quick solution to this problem.
The Postgres error shouldn't be raised before the validation checks run. Any idea how to make this error go away?

If the validation still doesn't show up, I assume you're using Phoenix 1.3, so try changing this code:
def changeset(model, params \\ %{}) do
  model
  |> cast(params, ~w(name username), [])
  |> validate_length(:username, min: 1, max: 20)
end
with this:
def changeset(model, params \\ %{}) do
  model
  |> cast(params, ~w(name username)a)
  |> validate_required([:username])
  |> validate_length(:username, min: 1, max: 20)
end
You can find more details in the documentation for Ecto.Changeset.validate_required/3.
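As a quick sanity check, a changeset built from empty params should now be invalid before Ecto ever talks to the database. A hypothetical iex session (the MyApp.User module name is assumed):

iex> changeset = MyApp.User.changeset(%MyApp.User{}, %{})
iex> changeset.valid?
false
iex> changeset.errors[:username]
{"can't be blank", [validation: :required]}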
Hope that helps!

Elixir: Variable undefined during AST expansion

I have a module like the following. ast1 and ast2 look the same, but I get an error saying rest is undefined in the second one. Can someone explain the problem?
defmodule PacketDef do
  pk_def = {:pk_name, [
    {:unk_int1, :int},
    {:unk_int2, :int},
  ]}

  {pkn, field_defs} = pk_def

  field_decs = Enum.map(field_defs, fn
    ({var_name, var_type}) when var_type in [:int] ->
      rest = Macro.var(:rest, __MODULE__)
      dec_name = String.to_atom("decode_#{var_type}")
      xvar_name = Macro.var(var_name, __MODULE__)

      quote do
        {:ok, unquote(xvar_name), unquote(rest)} = unquote(dec_name)(unquote(rest))
      end

    (_field_def) ->
      nil
  end)

  ast1 = quote do
    def decode(unquote(pkn), rest) do
      {:ok, unk_int1, rest} = decode_int(rest)
      {:ok, unk_int2, rest} = decode_int(rest)
      {:ok, rest}
    end
  end

  ast2 = quote do
    def decode(unquote(pkn), rest) do
      unquote_splicing(field_decs)
      {:ok, rest}
    end
  end

  IO.puts("ast1")
  IO.inspect(ast1, width: 100)
  IO.puts("ast2")
  IO.inspect(ast2, width: 100)

  def decode(unquote(pkn), rest) do
    {:ok, unk_int1, rest} = decode_int(rest)
    {:ok, unk_int2, rest} = decode_int(rest)
    {:ok, rest}
  end

  # why get error *rest* here
  def decode(unquote(pkn), rest) do
    unquote_splicing(field_decs)
    {:ok, rest}
  end

  def decode_int(<<b::32-little, rest::binary>>) do
    {:ok, b, rest}
  end
end
Update:
What I want to do is, given pk_def, generate a decode function like the one in ast1, but with the per-field decoding generated dynamically.
The problem lies in the function body, not its head; specifically this line:
unquote_splicing(field_decs)
If you remove this line, the code will compile. The reason is that when the field_decs AST is spliced in with unquote_splicing, the spliced clauses refer to a rest variable whose context was set by Macro.var(:rest, __MODULE__), which is not the same as the rest bound in the function head, so the compiler treats it as undefined. Fixing how your AST gets evaluated will fix this as well.
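If the goal (per the update above) is to generate the whole decode/2 clause from pk_def, one way to make the contexts line up is to build every reference to rest with the same Macro.var, including the one in the generated function head. A minimal sketch reusing the question's names, not a drop-in replacement:

defmodule PacketDef do
  pk_def = {:pk_name, [unk_int1: :int, unk_int2: :int]}
  {pkn, field_defs} = pk_def

  # one variable AST, one context, reused everywhere
  rest = Macro.var(:rest, __MODULE__)

  field_decs =
    Enum.map(field_defs, fn {var_name, :int} ->
      xvar = Macro.var(var_name, __MODULE__)

      quote do
        {:ok, unquote(xvar), unquote(rest)} = decode_int(unquote(rest))
      end
    end)

  # the head binds unquote(rest), so the spliced clauses and the head
  # now refer to the very same variable
  def decode(unquote(pkn), unquote(rest)) do
    unquote_splicing(field_decs)
    {:ok, unquote(rest)}
  end

  def decode_int(<<b::32-little, rest::binary>>), do: {:ok, b, rest}
end

In this trimmed-down form the generated unk_* bindings go unused, so the compiler will warn; in real code you would presumably collect them into the return value.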
This looks like an XY problem to me. I'm not exactly sure what you're trying to do here, but when dealing with language extension and custom DSLs, you should break the work down into multiple smaller, composable macros (with the majority of the functionality implemented in private functions) and take good care of macro hygiene. That will substantially reduce your code's complexity and make code expansion easier to deal with in general, since you won't have to manipulate ASTs directly.

How to create Anorm query to skip updating None values in DB (Scala)

I am using Anorm (2.5.1) in my Play + Scala application (2.5.x, 2.11.11). I keep running into the case where, if a case class field is None, I don't want that value to be inserted or updated in the SQL database. For example:
case class EditableUser(
  user_pk: String,
  country: Option[String],
  country_phone_code: Option[Int],
  phonenumber: Option[String],
  emailid: Option[String],
  format_all: Option[String]
)
....
val eUser: EditableUser = EditableUser("PK0001", None, None, None, Some("xyz#email.com"), Some("yes"))
...
SQL"""
update #$USR SET
COUNTRY=${eUser.country},
COUNTRY_PHONE_CODE=${eUser.country_phone_code},
PHONENUMBER=${eUser.phonenumber},
EMAILID=${emailid},
FORMAT_ALL=${format_all}
where (lower(USER_PK)=lower(${eUser.user_pk}))
""".execute()
Here, when a value is None, Anorm will insert null into the corresponding column. Instead, I want to write the query so that Anorm skips the values which are None, i.e. does not overwrite them.
You should use bound/prepared statements and, while setting values for the query, simply not set values for the columns which are None.
For example
SQL(
  """
    select * from Country c
    join CountryLanguage l on l.CountryCode = c.Code
    where c.code = {countryCode};
  """
).on("countryCode" -> "FRA")
Or in your case:
import play.api.db.DB
import anorm._
val stat = DB.withConnection(implicit c =>
  SQL("SELECT name, email FROM user WHERE id={id}").on("id" -> 42)
)
While writing your query, check whether the value you are about to pass in on(x -> ...) is None; if it is, leave it out, and the columns whose values are None will not be updated.
Without the ability (or a library) to access the attribute names themselves, it is still possible, if slightly clunky, to build the update statement dynamically depending on which values are present in the case class:
case class Foo(name: String, age: Option[Int], heightCm: Option[Int])
...
def phrase(k: String, v: Option[Int]): String =
  if (v.isDefined) s", $k={$k}" else ""

def update(foo: Foo): Int = DB.withConnection { implicit c =>
  def stmt(foo: Foo) =
    "update foo set " +
      //-- non option fields
      "name={name}" +
      //-- option fields
      phrase("age", foo.age) +
      phrase("heightCm", foo.heightCm)

  SQL(stmt(foo))
    .on('name -> foo.name, 'age -> foo.age, 'heightCm -> foo.heightCm)
    .executeUpdate()
}
Symbols that are not present in the submitted SQL can still be specified in on. Catering for other data types is also needed.

Storing jsonb data through ecto

I'm trying to pass jsonb data into Postgres via Ecto. I'd like to be able to take a valid JSON string, pass it as a GraphQL argument, and see that JSON in my table.
migration
defmodule MyApp.CreateJsonTable do
  use Ecto.Migration

  def change do
    create table(:geodata) do
      add(:json, :map)
      timestamps(type: :utc_datetime)
    end
  end
end
My understanding is that you need to define a struct for Poison for JSONB, and then decode into that when you insert.
defmodule Geodatajson do
  use MyApp, :model

  embedded_schema do
    field(:latitude, :float)
    field(:longitude, :float)
  end
end
Now the model:
defmodule MyApp.Geodata do
  use MyApp, :model

  alias MyApp.Repo
  alias MyApp.Geodata

  schema "geodata" do
    embeds_one(:json, Geodatajson)
    timestamps()
  end

  def changeset(struct, params \\ %{}) do
    struct
    |> cast(params, [:json])
  end

  def add_geodata(str) do
    json = str |> Poison.decode!(as: Geodatajson)
    data = %Geodata{json: json}
    Repo.insert(data)
  end
end
I try to pass in the data like this:
iex> MyApp.Geodata.add_geodata("{\"latitude\": 1.23, \"longitude\": 4.56}")
but the JSONB does not get decoded:
{:ok,
 %MyApp.Geodata{
   __meta__: #Ecto.Schema.Metadata<:loaded, "geodata">,
   id: 26,
   inserted_at: ~N[2018-04-28 13:28:42.346382],
   json: %Geodatajson{
     id: "3b22ef94-92eb-4c64-8174-9ce1cb88e8c5",
     latitude: nil,
     longitude: nil
   },
   updated_at: ~N[2018-04-28 13:28:42.346392]
 }}
What can I do to get this data into postgres?
Poison's as: requires you to pass a struct instance, not just the name of the module.
json = str |> Poison.decode!(as: Geodatajson)
should be:
json = str |> Poison.decode!(as: %Geodatajson{})
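Applied to the question's module, add_geodata/1 then looks roughly like this (same names as above, nothing else changed):

def add_geodata(str) do
  # decode the JSON string into the embedded struct, then insert as usual
  json = Poison.decode!(str, as: %Geodatajson{})
  Repo.insert(%Geodata{json: json})
end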

Conversion error with a custom DateRange Ecto type

I'm having trouble with a custom Ecto type that I'm writing. It is backed by the %Postgrex.Range{} type.
The code is
defmodule Foo.Ecto.DateRange do
  @behaviour Ecto.Type

  def type, do: :daterange

  def cast(%{"lower" => lower, "upper" => upper}) do
    new_lower = Date.from_iso8601! lower
    new_upper = Date.from_iso8601! upper
    {:ok, Date.range(new_lower, new_upper)}
  end

  def cast(%Date.Range{} = range) do
    {:ok, range}
  end

  def cast(_), do: :error

  def load(%Postgrex.Range{lower: lower, upper: upper}) do
    {:ok, Date.range(lower, upper)}
  end

  def load(_), do: :error

  def dump(%Date.Range{} = range) do
    {:ok, %Postgrex.Range{lower: range.first, upper: range.last}}
  end

  def dump(_), do: :error
end
The migration is
def change do
  create table(:users) do
    add :email, :string, null: false
    add :username, :string
    add :name, :string, null: false
    add :password_hash, :text, null: false
    add :period, :daterange

    timestamps()
  end
end
The user schema is
schema "users" do
field :username, :string
field :name, :string
field :email, :string
field :password_hash, :string
field :password, :string, virtual: true
field :period, Foo.Ecto.DateRange
The problematic code in my seeds.exs is this one:
today = Date.utc_today()

{:ok, user2} = create_user %{name: "Gloubi Boulga",
  email: "gloub#boul.ga", password: "xptdr32POD?é23PRK*efz",
  period: Date.range(today, Timex.shift(today, months: 2))
}
And finally, the error is this one:
** (CaseClauseError) no case clause matching: {~D[2017-11-04]}
    (ecto) lib/ecto/adapters/postgres/datetime.ex:40: Ecto.Adapters.Postgres.TypeModule.encode_value/2
    (ecto) /home/tchoutri/dev/Projects/Foo/deps/postgrex/lib/postgrex/type_module.ex:717: Ecto.Adapters.Postgres.TypeModule.encode_params/3
    […]
    priv/repo/seeds.exs:33: anonymous fn/0 in :elixir_compiler_1.__FILE__/1
And of course, I do not understand why this kind of conversion is happening, and this is very frustrating, especially considering that creating a custom Ecto type backed by %Postgrex.Range{} should be somewhat trivial.
EDIT: I've put some Logger.debug in the cast function and I can see
[debug] Casting new_date #DateRange<~D[2017-11-11], ~D[2018-01-11]>
appearing and
%Postgrex.Range{lower: ~D[2017-11-11], lower_inclusive: true, upper: ~D[2018-01-11], upper_inclusive: true}
in the dump function.
Within a %Postgrex.Range{}, the current version of Postgrex (0.13.3) expects %Postgrex.Date{} structs. See the relevant test here.
However, as seen in the link, %Postgrex.Date{} is deprecated in the next release, and you are expected to use %Date{} from 0.14 onwards (still in development).
I came across this today. I hope this still helps:
def dump(%Date.Range{} = range) do
  {:ok, %Postgrex.Range{lower: Date.to_erl(range.first), upper: Date.to_erl(range.last)}}
end
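For completeness, load/1 then needs the inverse conversion when the erl-style date tuples come back from the database. A sketch under that assumption:

def load(%Postgrex.Range{lower: lower, upper: upper}) do
  {:ok, Date.range(Date.from_erl!(lower), Date.from_erl!(upper))}
end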
Here's what I ended up with:
defmodule DateRange do
  @moduledoc false
  @behaviour Ecto.Type

  @doc """
  Does use the `:tsrange` postgrex type.
  """
  def type, do: :daterange

  @doc """
  Can cast various formats:

      # Simple maps (default to `[]` semantic like Date.range)
      %{"lower" => "2015-01-23", "upper" => "2015-01-23"}

      # Postgrex range with Date structs for upper and lower bound
      %Postgrex.Range{lower: #Date<2015-01-23>, upper: #Date<2015-01-23>}
  """
  def cast(%Date.Range{first: lower, last: upper}), do: cast(%{lower: lower, upper: upper})
  def cast(%{"lower" => lower, "upper" => upper}), do: cast(%{lower: lower, upper: upper})
  def cast(%Postgrex.Range{lower: %Date{}, upper: %Date{}} = range), do: {:ok, range}

  def cast(%{lower: %Date{} = lower, upper: %Date{} = upper}) do
    {:ok, %Postgrex.Range{lower: lower, upper: upper}}
  end

  def cast(%{lower: lower, upper: upper}) do
    try do
      with {:ok, new_lower} <- Date.from_iso8601(lower),
           {:ok, new_upper} <- Date.from_iso8601(upper) do
        {:ok, %Postgrex.Range{lower: new_lower, upper: new_upper}}
      else
        _ -> :error
      end
    rescue
      FunctionClauseError -> :error
    end
  end

  def cast(_), do: :error

  @end_of_times ~D[9999-12-31]
  @start_of_times ~D[0000-01-01]

  defp canonicalize_bounds(date, inclusive, offset, infinite_bound) do
    with {:ok, date} <- Date.from_erl(date) do
      case inclusive do
        false -> {:ok, Timex.shift(date, days: offset)}
        true -> {:ok, date}
      end
    else
      ^inclusive = false when is_nil(date) -> {:ok, infinite_bound}
      _ -> :error
    end
  end

  @doc """
  Does load the postgrex returned range and converts data back to Date structs.
  """
  def load(%Postgrex.Range{lower: lower, lower_inclusive: lower_inclusive,
                           upper: upper, upper_inclusive: upper_inclusive}) do
    with {:ok, lower} <- canonicalize_bounds(lower, lower_inclusive, 1, @start_of_times),
         {:ok, upper} <- canonicalize_bounds(upper, upper_inclusive, -1, @end_of_times) do
      {:ok, Date.range(lower, upper)}
    else
      _ -> :error
    end
  end

  def load(_), do: :error

  @doc """
  Does convert the Date bounds into erl format for the db.
  """
  def dump(%Postgrex.Range{lower: %Date{} = lower, upper: %Date{} = upper} = range) do
    with {:ok, lower} <- Ecto.DataType.dump(lower),
         {:ok, upper} <- Ecto.DataType.dump(upper) do
      {:ok, %{range | lower: lower, upper: upper}}
    else
      _ -> :error
    end
  end

  def dump(_), do: :error
end

How to handle embedded schema in Phoenix?

Let's say I have a user model like the following:
schema "users" do
field :user_name, :string
field :password, :string
end
and an address model like the following:
schema "address" do
field :line1, :string
field :country, :string
end
I am using MongoDB as the database, so I want a JSON format like the following:
{ _id: "dfd", user_name: "$$$$", password: "xxx", address: { line1 : "line1", country: "india" } }
1) How do I create and validate a changeset where user_name in the user model and country in the address model are required fields?
2) How do I get the final changeset after validating both?
Assuming the mongo adapter works similarly to postgresql jsonb columns:
defmodule User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :user_name, :string
    field :password, :string
    embeds_one :address, Address
  end

  def changeset(model, params \\ %{}) do
    model
    |> cast(params, [:user_name, :password])
    |> cast_embed(:address, required: true)
    |> validate_required([:user_name, :password])
  end
end

defmodule Address do
  use Ecto.Schema
  import Ecto.Changeset

  @primary_key false
  embedded_schema do
    field :line1, :string
    field :country, :string
  end

  def changeset(model, params \\ %{}) do
    model
    |> cast(params, [:line1, :country])
    |> validate_required([:line1, :country])
  end
end
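To answer the second question: casting params that contain a nested "address" map runs Address.changeset/2 via cast_embed/2 and yields one combined changeset. A rough iex example, assuming the modules above:

params = %{
  "user_name" => "someuser",
  "password" => "xxx",
  "address" => %{"line1" => "line1", "country" => "india"}
}

User.changeset(%User{}, params).valid?
#=> true

# A missing country invalidates the parent changeset as well;
# the detailed error lives on the embedded changeset:
cs = User.changeset(%User{}, %{params | "address" => %{"line1" => "line1"}})
cs.valid?
#=> false
cs.changes.address.errors
#=> [country: {"can't be blank", [validation: :required]}]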