I need to know when a row changes in my database. I'm using Phoenix 1.2.4. I already have triggers set up in Postgres, but I'm not sure whether I actually need them.
Do you know how I could solve my problem?
NOTE: The database isn't necessarily changed from the controllers; I also have a cron job that updates some parts.
I saw this tutorial (Publish/subscribe with PostgreSQL and Phoenix Framework) a few days ago, and it seems to contain exactly what you want.
It sets up the notification from the DB and then broadcasts it. In your case, you just need the notification part and you should be all good.
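In case you do end up wanting the broadcast half as well, it is a single call from wherever you handle the notification; a minimal sketch, where MyApp.Endpoint, the topic, and the event name are assumptions for illustration:

# Push the notification payload out to any connected channel subscribers.
MyApp.Endpoint.broadcast("table:my_table", "change", %{payload: payload})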
I hope that helps :)
Postgrex.Notifications is the module that uses PostgreSQL's LISTEN/NOTIFY to deliver messages to an Elixir process.
A simple example:
defmodule MyListener do
  use GenServer

  def start_link(), do: GenServer.start_link(__MODULE__, [])

  def init(_arg) do
    # Start a dedicated notification connection using the repo's config,
    # then subscribe to the "my_table" channel.
    {:ok, pid} = Postgrex.Notifications.start_link(MyRepo.config())
    Postgrex.Notifications.listen(pid, "my_table")
    {:ok, []}
  end

  # Every NOTIFY on the subscribed channel is delivered to this process
  # as a message; the payload is the string passed to pg_notify.
  def handle_info({:notification, _connection_pid, _ref, _channel, payload}, state) do
    # ... do something with payload ...
    {:noreply, state}
  end
end
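For the trigger side (you mentioned you already have triggers, so yours may look similar), here is a minimal sketch as an Ecto migration; the table name, channel name, trigger name, and function name are assumptions for illustration:

defmodule MyRepo.Migrations.AddMyTableNotifyTrigger do
  use Ecto.Migration

  def up do
    # Publish each inserted or updated row as JSON on the "my_table" channel.
    execute """
    CREATE OR REPLACE FUNCTION notify_my_table_changes() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('my_table', row_to_json(NEW)::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    """

    execute """
    CREATE TRIGGER my_table_changed
    AFTER INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE notify_my_table_changes();
    """
  end

  def down do
    execute "DROP TRIGGER my_table_changed ON my_table;"
    execute "DROP FUNCTION notify_my_table_changes();"
  end
end

Because the trigger lives in the database, the notification fires no matter which client writes, so rows touched by your cron job are picked up exactly the same as rows changed through Phoenix.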
I am looking for a way to isolate which of my review environments processes which jobs.
We are using delayed_job and are running some Kubernetes alias clusters based on a master cluster.
Is this at all possible? I found a simple way to prefix the worker's name, but I can't find a way to pass this on to the actual job.
Any help is appreciated.
The way I figured it should work is something like this.
I'm not sure if this is the right way to go; perhaps the same thing could be achieved using the lifecycle events? I could just add a column and use the lifecycle events to set the data and query it?
Crossposted to collectiveidea/delayed_job/issues/1125
Eventually, I ended up with the following solution. Add a varchar column named cluster to the delayed_jobs table and BOOM. Works like a charm.
require 'delayed/backend/active_record'

module Delayed
  module Backend
    module ActiveRecord
      class Configuration
        attr_accessor :cluster
      end

      # A job object that is persisted to the database.
      # Contains the work object as a YAML field.
      class Job < ::ActiveRecord::Base
        READY_SQL = <<~SQL.squish.freeze
          ((cluster = ? AND run_at <= ? AND (locked_at IS NULL OR locked_at < ?)) OR locked_by = ?) AND failed_at IS NULL
        SQL

        before_save :set_cluster

        def self.ready_to_run(worker_name, max_run_time)
          where(READY_SQL, cluster, db_time_now, db_time_now - max_run_time, worker_name)
        end

        # When a worker is exiting, make sure we don't have any locked jobs.
        def self.clear_locks!(worker_name)
          where(cluster: cluster, locked_by: worker_name)
            .update_all(locked_by: nil, locked_at: nil) # rubocop:disable Rails/SkipsModelValidations
        end

        def self.cluster
          Delayed::Backend::ActiveRecord.configuration.cluster
        end

        def set_cluster
          self.cluster ||= self.class.cluster
        end
      end
    end
  end
end
Delayed::Backend::ActiveRecord.configuration.cluster = ENV['CLUSTER'] if ENV['CLUSTER']
In the code below, I'm trying to do two operations: one, to create a customer in a db, and the other, to create an event in the db. The creation of the event is dependent on the creation of the user.
I'm new to Scala, and confused about the role of Futures here. I'm trying to query a db and see if the user is there, and if not, create the user. The code below is supposed to check if the user exists with the customerByPhone() function, and if it doesn't exist, then go into the createUserAndEvent() function.
What it's actually doing is skipping the response from customerByPhone and going straight into createUserAndEvent(). I thought that by using flatMap, the program would automatically wait for the response and that I wouldn't have to use Await.result. Is that not the case? Is there a way to avoid Await.result so as not to block the thread in production code?
override def findOrCreate(phoneNumber: String, creationReason: String): Future[AvroCustomer] = {
  //query for customer in db
  //TODO this goes into createUserAndEvent before checking that response comes back empty from querying for user
  customerByPhone(phoneNumber)
    .flatMap(_ => createUserAndEvent(phoneNumber, creationReason, 1.0))
}
You don't need to use Await.result or any other form of blocking. You do in fact have the result from customerByPhone; you're just ignoring it with the _. I think what you want is something like this:
customerByPhone(phoneNumber)
  .flatMap(customer => {
    if (customer == null)
      createUserAndEvent(phoneNumber, creationReason, 1.0)
    else
      Future.successful(customer) // wrap the existing customer without scheduling new work
  })
You just need to code the logic to do something only if the customer isn't there. (And if customerByPhone returns an Option rather than a nullable value, pattern match on Some/None instead of checking for null.)
So I recently decided I wanted to learn Elixir for the new year, and have been going through the Phoenix framework's book on how web development works in Elixir.
So far I am really enjoying it, and am already starting to love the language. I've come across a few issues with the Comeonin package, though.
One was compiling it, which is fine. But I am wondering if that is causing problems; the issue is I am having trouble figuring out how to debug this.
defmodule Rumbl.Auth do
  import Plug.Conn
  import Comeonin.Bcrypt, only: [checkpw: 2, dummy_checkpw: 0]

  def init(opts) do
    Keyword.fetch!(opts, :repo)
  end

  def call(conn, repo) do
    user_id = get_session(conn, :user_id)
    user = user_id && repo.get(Rumbl.User, user_id)
    assign(conn, :current_user, user)
  end

  def login(conn, user) do
    conn
    |> assign(:current_user, user)
    |> put_session(:user_id, user.id)
    |> configure_session(renew: true)
  end

  def logout(conn) do
    configure_session(conn, drop: true)
  end

  def login_by_username_and_pass(conn, username, given_pass, opts) do
    repo = Keyword.fetch!(opts, :repo)
    user = repo.get_by(Rumbl.User, username: username)

    cond do
      user && checkpw(given_pass, user.password_hash) ->
        {:ok, login(conn, user)}
      user ->
        {:error, :unauthorized, conn}
      true ->
        # Simulate a password check so attackers can't detect missing users by timing.
        dummy_checkpw()
        {:error, :not_found, conn}
    end
  end
end
That is the code; everything compiles, and I can see the data is being sent through correctly. But for some reason the password never verifies. I made another user with the password "password" and even did something like this:
checkpw("password", "$2b$12$aa4dos3r4YwX7HKgj.JiL.bEzg42QjxBvWwm5M")
Just to see if it was how I was passing the information; obviously that is the hash in my database, and that also does not work. I am at a loss as to what I am doing wrong, or, since this is my first time using Bcrypt and I'm not 100% sure how the salting works, whether it's how I am using the library itself.
I am hashing the passwords with this:
defp put_pass_hash(changeset) do
  case changeset do
    %Ecto.Changeset{valid?: true, changes: %{password: pass}} ->
      put_change(changeset, :password_hash, Comeonin.Bcrypt.hashpwsalt(pass))
    _ ->
      changeset
  end
end
I've looked over everything I can think of, and it all looks correct, but for some reason Comeonin is not comparing the passwords correctly. Any help would be much appreciated, thanks!
The issue I was having had nothing to do with Elixir or the Comeonin library!
I had only allowed a varchar of 45 for my password hashes, so the stored hash was being truncated. I am just going to leave this here in case anyone does something as silly as this in the future!
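For reference, a bcrypt hash is 60 characters long, so a varchar(45) cuts off the end and checkpw can never match (which is also why the hash pasted above looks short). A minimal sketch of the fix as an Ecto migration; the users table name is an assumption:

defmodule Rumbl.Repo.Migrations.WidenPasswordHash do
  use Ecto.Migration

  def change do
    alter table(:users) do
      # bcrypt output is a fixed 60 characters; :text removes the length limit entirely.
      modify :password_hash, :text
    end
  end
end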
Ecto.Model.Callbacks are now deprecated.
I am trying to achieve the same behavior as before_insert, but to no avail! I can't even get IO.puts("hello") inside my changeset/2 to print.
Here's what I have:
def changeset(model, params \\ :empty) do
  IO.puts "HELLO" # never prints
  model
  |> cast(params, @required_fields, @optional_fields)
  |> put_change(:column_name, "hello")
end
Instead of put_change, I've tried subbing in change, cast, and practically everything else inside Ecto.Changeset.
I've also tried the non-piping version, just in case:
chset = cast(model, params, @required_fields, @optional_fields)
put_change(chset, :column_name, "hello")
The end goal is shifting the row's inserted_at to compute a new value, so a simple default: "hello" on the schema won't suffice.
Many thanks!
I ended up solving it via a Postgres fragment.
It's not quite what I was looking for, so I'll leave this question open for someone with an answer involving the changeset, because that's still doing absolutely nothing for me.
Here's my temporary solution, involving the Postgres migration file:
create table(:stuffs) do
  add :expires_at, :datetime, default: fragment("now() + interval '60 days'")
end
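One note on why the changeset may be doing nothing: if the IO.puts inside changeset/2 never prints, the insert is not going through the changeset at all; that typically means Repo.insert is being called on a bare struct rather than on model |> changeset(params). Once the changeset actually runs, the same effect as the fragment can be sketched like this (a hypothetical illustration: the [:name] field list is a placeholder, NaiveDateTime.add/3 requires Elixir 1.4+, and the Ecto version must accept NaiveDateTime values):

def changeset(model, params \\ %{}) do
  # Computed at insert time; equivalent to the fragment's now() + interval '60 days'.
  expires_at = NaiveDateTime.add(NaiveDateTime.utc_now(), 60 * 60 * 24 * 60, :second)

  model
  |> cast(params, [:name]) # placeholder field list
  |> put_change(:expires_at, expires_at)
end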
I have a simple Mongo application that happens to be async (using Akka).
I send a message to an actor, which in turn writes 3 records to a database.
I'm using WriteConcern.SAFE because I want to be sure the write happened (I also tried WriteConcern.FSYNC_SAFE).
I pause for a second to let the writes happen, then do a read, and get nothing.
So my write code might be:
collection.save( myObj, WriteConcern.SAFE )
println("--1--")
collection.save( myObj, WriteConcern.SAFE )
println("--2--")
collection.save( myObj, WriteConcern.SAFE )
println("--3--")
then in my test code (running outside the actor, in another thread) I print out the number of records I find:
println( collection.findAll(...) )
My output looks like this:
--1--
--2--
--3--
(pauses)
0
Indeed, if I look in the database I see no records. Sometimes I actually do see data there and the test works. Async code can be tricky, and it's possible the test code is being hit before the writes happen, so I also tried printing out timestamps to ensure these statements execute in the order presented; they do. The data should be there. Sample output with timestamps below:
Saved: brand_1 / dev 1375486024040
Saved: brand_1 / dev2 1375486024156
Saved: brand_1 / dev3 1375486024261
1375486026593 0 found
So the 3 saves clearly happened (and should have written) a full 2 seconds before the read was attempted.
I understand that with the more liberal WriteConcerns you could get this behavior, but I thought the two safest ones would assure me the write had actually happened before proceeding.
Subtle but simple problem: I was using a def to create my connection, which I then proceeded to call twice as if it were a val. So I actually had two different writers, which explained why my results sometimes differed. Refactored to a val and all was predictable. Agonizing to identify, easy to understand and fix.