Update a single value in a database table through form submission - postgresql

Here is my table in the database:
 id  | account_name | account_number | account_type | address |            email             | ifsc_code | is_default_account |  phone_num  | User
-----+--------------+----------------+--------------+---------+------------------------------+-----------+--------------------+-------------+------
 201 | helloi32irn  | 55265766432454 | Savings      |         | mypal.appa99721989@gmail.com | 5545      | f                  | 98654567876 | abc
 195 | hello        | 55265766435523 | Savings      |         | mypal.1989@gmail.com         | 5545      | t                  | 98654567876 | axyz
 203 | what         | 01010101010101 | Current      |         | guillaume@sample.com         | 6123      | f                  | 09099990    | abc
On form submission the view posts a single parameter, name="activate", which corresponds to the column "is_default_account" in the table.
I want to change the value of "is_default_account" from "t" to "f". For example, in the table above it is "t" for account_name "hello". I want to deactivate it, i.e. make it "f", and activate whichever account was sent through the form.

This will update your table and make account 'what' the default (assuming that is_default_account is a BOOLEAN field):
UPDATE table
SET is_default_account = (account_name = 'what');
You may want to limit updates if the table holds more than just the few rows you listed, like this:
UPDATE table
SET is_default_account = (account_name = 'what')
WHERE is_default_account != (account_name = 'what')
AND <limit updates by some other criteria like user name>;

I think that to accomplish what you want, you should send at least two values from the form: one for the id of the account you want to update, and one for the action ("activate" here). You can also just send the id and have it toggle. There are many ways to do this, but I can't figure out exactly what you are trying to do, or whether you want SQL or Play Framework code. Without limiting your update somehow (for example by id), you can't precisely control which specific rows get updated. Please clarify your question and add some more code if you want help on the Play Framework side, which I suspect you do.
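If you do send the account id, a minimal sketch of the two-step approach could look like this (the table name accounts, the quoted "User" column, and the literal values are all assumptions, not taken from the question):
-- Hypothetical sketch: make the posted account the default for its user.
BEGIN;

-- Deactivate the user's current default account.
UPDATE accounts
SET is_default_account = false
WHERE "User" = 'abc'
  AND is_default_account = true;

-- Activate the account whose id came in with the form.
UPDATE accounts
SET is_default_account = true
WHERE id = 203;

COMMIT;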


create JSONB array grouped from column values with incrementing integers

For a PostgreSQL table, suppose the following data is in table A:
    key_path     |   key   |   value
-----------------+---------+------------
 foo[1]__scrog   | scrog   | apple
 foo[2]__scrog   | scrog   | orange
 bar             | bar     | peach
 baz[1]__biscuit | biscuit | watermelon
The goal is to group data when there is an incrementing number present for an otherwise identical value for column key_path.
For context, key_path is a JSON key path and key is the leaf key. The desired outcome would be:
         key_path_group         |   key   |     values
--------------------------------+---------+-----------------
 [foo[1]__scrog, foo[2]__scrog] | scrog   | [apple, orange]
 bar                            | bar     | peach
 [baz[1]__biscuit]              | biscuit | [watermelon]
Also noting that for key_path=baz[1]__biscuit even though there is only a single incrementing value, it still triggers casting to an array of length 1.
Any tips or suggestions much appreciated!
May have answered my own question (sometimes just typing it out helps). The following gets very close to, if not exactly, what I'm looking for:
select
  regexp_replace(key_path, '(.*)\[(\d+)\](.*)', '\1[x]\3') as key_path_group,
  key,
  jsonb_agg(value) as values
from A
group by key_path_group, key;
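If the original key_path values should also be collected per group, as in the desired output above, one possible extension (a sketch; note it folds every group, including bar, into arrays) is to group on the normalized path while aggregating the originals:
-- Sketch: aggregate both the original key_paths and the values per group.
select
  jsonb_agg(key_path) as key_path_group,
  key,
  jsonb_agg(value)    as values
from A
group by regexp_replace(key_path, '(.*)\[(\d+)\](.*)', '\1[x]\3'), key;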

How to create a Postgres trigger that calculates values

How would you create a trigger that uses the values of the row being inserted in a calculation, so that the value actually inserted is the transformed one?
Let's say I have this table labor_rates,
+---------------+-----------------+--------------+------------+
| labor_rate_id | rate_per_minute | unit_minutes | created_at |
+---------------+-----------------+--------------+------------+
| bigint | numeric | numeric | timestamp |
+---------------+-----------------+--------------+------------+
Each time a new record is created, I need the rate to be calculated as rate/unit (the smallest unit here being a minute).
So, for example, when inserting a new record:
INSERT INTO labor_rates(rate, unit)
VALUES (60, 480);
It would create a new record with these values:
+---------------+-----------------+--------------+----------------------------+
| labor_rate_id | rate_per_minute | unit_minutes | created_at |
+---------------+-----------------+--------------+----------------------------+
| 1000000 | 1.1979 | 60 | 2017-03-16 01:59:47.208111 |
+---------------+-----------------+--------------+----------------------------+
One could argue that this should be left as a calculated field instead of storing the calculated value. But in this case, it would be best if the calculated value is stored.
I am fairly new to triggers so any help would be much appreciated.
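A minimal sketch of a BEFORE INSERT trigger along these lines, assuming the raw rate and unit arrive in the rate_per_minute and unit_minutes columns and the trigger overwrites rate_per_minute with the derived per-minute value (the function and trigger names, and that column mapping, are assumptions to adapt):
-- Sketch: compute the stored per-minute rate before each insert.
CREATE OR REPLACE FUNCTION labor_rates_calc()
RETURNS trigger AS $$
BEGIN
    -- NULLIF guards against division by zero.
    NEW.rate_per_minute := NEW.rate_per_minute / NULLIF(NEW.unit_minutes, 0);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_labor_rates_calc
BEFORE INSERT ON labor_rates
FOR EACH ROW
EXECUTE PROCEDURE labor_rates_calc();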

How do I add additional information to a Pivot (using Fluent)?

In Vapor, we can create many-to-many relationships by creating a Pivot<U, T> object, where U and T are the models that we want to link together. So if I want to create a system where Users can have many Files and many Files can belong to many Users, I'd associate them like this:
var alice = User(name: "Alice")
try! alice.save()
var sales = File(name: "sales.xclx")
try! sales.save()
var pivot = Pivot<User, File>(alice, sales)
try! pivot.save()
What I can't figure out for the life of me is how I would make a Pivot<User, File> contain additional information. For example, I'd like to know when this file was associated to Alice, or what permissions she has over it.
On a Relational database, Fluent creates this table for the Pivot<User, File> type.
+---------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| file_id | int(11) | NO | | NULL | |
| user_id | int(11) | NO | | NULL | |
+---------+---------+------+-----+---------+----------------+
But I'd like the ability to represent something like this:
+---------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| file_id | int(11) | NO | | NULL | |
| user_id | int(11) | NO | | NULL | |
| date | date | NO | | NULL | |
| perms | varchar | NO | | READ | |
+---------+---------+------+-----+---------+----------------+
The Pivot<U, T> object can be thought of as the "bare minimum" required fields for a pivoted relation like siblings.
If you want to add custom fields to this table, you can create your own class to act as the pivot as long as it has the required elements:
Table name for Foo and Bar is bar_foo (lowercase, alphabetically ordered)
The table has at least the three columns: id, bar_id, foo_id
In other words, the table created by your pivot class must have at least the elements a Pivot<Foo, Bar> preparation would have created.
With this done, you can create new pivot relations by creating and saving instances of your pivot class.
When .siblings() relations are called on your models that use this pivot table, the default Pivot<U, T> will still be created to perform the fetch. But, this won't create any issues since the required fields are present on the pivot table.
So, after having the same problem described by Andy and asking for a solution on the Vapor Slack, I was redirected here.
My implementation (using PostgreSQL) of the solution proposed by Tanner can be found here
The key is the Rating model:
it’s a plain Model subclass
it has an entity name of movie_user (as described by Tanner: the names of the relating models, in alphabetical order)
it has the fields userId (mapping to "user_id") and movieId (mapping to "movie_id"); both are of type Node.
in prepare(Database) it again uses the name "movie_user" and defines the Id fields as Ints.
With that set up you can define the following relationship convenience methods:
On Movie: all raters
extension Movie {
    func raters() throws -> Siblings<User> {
        return try siblings()
    }
}
On User: all rated movies
extension User {
    func ratedMovies() throws -> Siblings<Movie> {
        return try siblings()
    }
}
A new rating for a movie (for a user) can be added like this:
ratings.post() { request in
    var rating = try Rating(node: request.json)
    try rating.save()
    return rating
}
As Rating is a Model subclass, we can create it directly from the request's JSON. This requires the client to send a JSON document that conforms to the node structure of the Rating class:
{
    "user_id": <the rating user's id>,
    "movie_id": <the id of the movie to be rated>,
    "stars": <1-5 or whatever makes sense for your use case>
}
To get all actual Ratings for a given movie, you seem to have to resolve the relationship manually (at least I think so; maybe somebody can give me a hint on how to do this better):
let ratings = try Rating.query().filter("movie_id", movieId).all()
Also, there seems to be no way of calculating an average on the database right now. I would have loved something like this to work:
// this is NOT how it works, sadly
let averageRating = try Rating.query().filter("movie_id", movieId).average("stars")
So I hope this helps anybody coming across this problem. And thanks to all the wonderful people who contribute to the Vapor project!
Thanks to @WERUreo for pointing out that the part where a rating is created was missing.

Sane way to store different data types within same column in postgres?

I'm currently attempting to modify an existing API that interacts with a postgres database. Long story short, it essentially stores descriptors/metadata to determine where an actual 'asset' (typically a file of some sort) is stored on the server's hard disk.
Currently, it's possible to 'tag' these 'assets' with any number of undefined key-value pairs (i.e. uploadedBy, addedOn, assetType, etc.). These tags are stored in a separate table with a structure similar to the following:
+---------------+----------------+-------------+
|assetid (text) | tagid(integer) | value(text) |
|---------------+----------------+-------------|
|someStringValue| 1234 | someValue |
|---------------+----------------+-------------|
|aDiffStringKey | 1235 | a username |
|---------------+----------------+-------------|
|aDiffStrKey | 1236 | Nov 5, 1605 |
+---------------+----------------+-------------+
assetid and tagid are foreign keys from other tables. Think of the assetid representing a file and the tagid/value pair is a map of descriptors.
Right now, the API (which is in Java) creates all these key-value pairs as a Map object. This includes things like timestamps/dates. What we'd like to do is somehow store different types of data for the value in the key-value pair, or at least store it differently within the database, so that if we needed to, we could run queries checking date ranges and the like on these tags. However, if they're stored as text items in the db, then we'd have to a) know that a given tag is actually a date/time/timestamp item, and b) convert it into something that we could actually run such a query on.
There is only one idea I could think of thus far without completely changing the layout of the db:
Expand the assettag table (shown above) with additional columns for the various types (numeric, text, timestamp), allow them to be null, and then on insert check the corresponding 'key' to figure out what type of data it really is. However, I can see a lot of problems with that sort of implementation.
Can any PostgreSQL-Ninjas out there offer a suggestion on how to approach this problem? I'm only recently getting thrown back into the deep-end of database interactions, so I admit I'm a bit rusty.
You've basically got two choices:
Option 1: A sparse table
Have one column for each data type, but only use the column that matches the data type you want to store. Of course this leads to most columns being null - a waste of space, but the purists like it because of the strong typing. It's a bit clunky having to check each column for null to figure out which datatype applies. Also, too bad if you actually want to store a null - then you must choose a specific value that "means null" - more clunkiness.
Option 2: Two columns - one for content, one for type
Everything can be expressed as text, so have a text column for the value, and another column (int or text) for the type, so your app code can restore the value as the correct type of object. The good news is that you don't have lots of nulls; more importantly, you can easily extend the types beyond SQL data types to application classes by storing their value as JSON and their type as the class name.
I have used option 2 several times in my career and it was always very successful.
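A rough sketch of Option 2 (all names and sample values here are assumptions, not the asker's schema):
-- Sketch of Option 2: value stored as text, with an explicit type column.
CREATE TABLE assettag (
    assetid    text    NOT NULL,
    tagid      integer NOT NULL,
    value      text,
    value_type text    NOT NULL   -- e.g. 'text', 'numeric', 'timestamp'
);

INSERT INTO assettag VALUES
    ('someStringValue', 1234, 'someValue',           'text'),
    ('aDiffStrKey',     1236, '1605-11-05 00:00:00', 'timestamp');

-- Date-range check: the CASE guards the cast so it never runs on rows
-- whose value is not actually a timestamp.
SELECT assetid
FROM assettag
WHERE CASE WHEN value_type = 'timestamp'
           THEN value::timestamp >= '1600-01-01'
           ELSE false
      END;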
Another option, depending on what you're doing, could be to have just one value column but store some JSON around the value...
This could look something like:
{
    "type": "datetime",
    "value": "2019-05-31 13:51:36"
}
That could even go a step further, using a JSON or XML column.
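With a jsonb column, for instance, such wrappers can be filtered by their declared type and cast at query time (a sketch; the table and column names are assumed):
-- Sketch: each tag value wrapped as {"type": ..., "value": ...} in jsonb.
-- The cast in the select list only runs on rows that pass the WHERE filter.
SELECT assetid,
       (value->>'value')::timestamp AS added_on
FROM assettag_json
WHERE value->>'type' = 'datetime';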
I'm not in any way a PostgreSQL ninja, but I think that instead of two columns (one for the value and one for the type) you could look at the hstore data type:
data type for storing sets of key/value pairs within a single PostgreSQL value. This can be useful in various scenarios, such as rows with many attributes that are rarely examined, or semi-structured data. Keys and values are simply text strings.
Of course, you have to check how dates/timestamps convert into and out of this type and see if it works for you.
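A quick hstore sketch (the table name and tag keys are assumptions); note that everything round-trips through text:
-- Sketch: tags as an hstore column; values come back as text and must be cast.
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE asset_tags (
    assetid text NOT NULL,
    tags    hstore
);

INSERT INTO asset_tags
VALUES ('someStringValue', 'addedOn => "2019-05-31", uploadedBy => "alice"');

SELECT assetid
FROM asset_tags
WHERE (tags->'addedOn')::timestamp >= '2019-01-01';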
You can use two different techniques:
If the type can vary for each tagid
Define, for every tagid-assetid combination, which data table holds the value and the row id within it, plus the actual data tables:
maintable:
+---------------+----------------+-----------------+---------------+
|assetid (text) | tagid(integer) | tablename(text) | table_id(int) |
|---------------+----------------+-----------------+---------------|
|someStringValue| 1234 | tablebool | 123 |
|---------------+----------------+-----------------+---------------|
|aDiffStringKey | 1235 | tablefloat | 123 |
|---------------+----------------+-----------------+---------------|
|aDiffStrKey | 1236 | tablestring | 123 |
+---------------+----------------+-----------------+---------------+
tablebool
+-------------+-------------+
| id(integer) | value(bool) |
|-------------+-------------|
| 123 | False |
+-------------+-------------+
tablefloat
+-------------+--------------+
| id(integer) | value(float) |
|-------------+--------------|
| 123 | 12.345 |
+-------------+--------------+
tablestring
+-------------+---------------+
| id(integer) | value(string) |
|-------------+---------------|
| 123 | 'text' |
+-------------+---------------+
In case every tagid has a fixed type
Create a tagid description table:
tag descriptors
+---------------+----------------+-----------------+
|assetid (text) | tagid(integer) | tablename(text) |
|---------------+----------------+-----------------|
|someStringValue| 1234 | tablebool |
|---------------+----------------+-----------------|
|aDiffStringKey | 1235 | tablefloat |
|---------------+----------------+-----------------|
|aDiffStrKey | 1236 | tablestring |
+---------------+----------------+-----------------+
and the corresponding data tables
tablebool
+-------------+----------------+-------------+
| id(integer) | tagid(integer) | value(bool) |
|-------------+----------------+-------------|
| 123 | 1234 | False |
+-------------+----------------+-------------+
tablefloat
+-------------+----------------+--------------+
| id(integer) | tagid(integer) | value(float) |
|-------------+----------------+--------------|
| 123 | 1235 | 12.345 |
+-------------+----------------+--------------+
tablestring
+-------------+----------------+---------------+
| id(integer) | tagid(integer) | value(string) |
|-------------+----------------+---------------|
| 123 | 1236 | 'text' |
+-------------+----------------+---------------+
All this is just the general idea. You should adapt it to your needs.
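For example, fetching the float value behind one maintable row of the first variant could look like this (using the sketch names above):
-- Sketch: resolve one maintable entry into its typed data table.
SELECT m.assetid, m.tagid, f.value
FROM maintable m
JOIN tablefloat f ON f.id = m.table_id
WHERE m.tablename = 'tablefloat'
  AND m.tagid = 1235;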

SHOW STATUS LIKE in Sphinx SE

After executing this query:
select count(*) from tablename WHERE query=';';
the query returns a count of 20. But the table has 771498 records in total. When I execute SHOW STATUS LIKE 'sphinx_%'; it returns this:
+--------------------+--------+
| Variable_name | Value |
+--------------------+--------+
| sphinx_error | 5732 |
| sphinx_time | 837 |
| sphinx_total | 1000 |
| sphinx_total_found | 771498 |
| sphinx_word_count | 0 |
| sphinx_words | |
+--------------------+--------+
Here I have some doubts:
What is sphinx_error?
What is sphinx_time?
What is sphinx_total?
What is sphinx_total_found?
What is sphinx_word_count?
What is sphinx_words?
Any explanation would be very helpful. Thanks in advance.
Firstly, SphinxSE is not a real MySQL table; it's a fake table. It accepts a query, forwards it to a running Sphinx instance in the background, and returns the results to produce a 'table' for MySQL.
So count(*) won't work as you expect: it simply runs the query and counts the returned rows, and there are only 20 rows unless you ask for more.
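For instance, SphinxSE accepts options inside the query string after the search text, so you can ask for more rows explicitly, still capped by max_matches (a sketch with placeholder search terms):
-- Sketch: request up to 1000 matches through the SphinxSE query string.
SELECT * FROM tablename WHERE query='my search terms;limit=1000';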
sphinx_error - indicates an error; maybe SHOW WARNINGS would get the text.
sphinx_time - how long the query took, in milliseconds.
sphinx_total - how many records you can actually retrieve (subject to max_matches).
sphinx_total_found - how many records actually match.
sphinx_word_count - the number of words in your query.
sphinx_words - how many docs/hits match each of the words; because you have no query, it's empty.