DBVisualizer query syntax to add a range of values

I need to be able to do something like this in DBVisualizer:
WHERE
Column = {12, 23, 55, 33, 22}
Instead, I am doing this
WHERE
Column = 12 OR
Column = 23 OR
Column = 55 OR
Column = 33 OR
Column = 22
Is there some sort of syntax reserved for this purpose? The latter is very tedious, and I am not a database person. So any help is very much appreciated!

WHERE Column in (12,23,55,33,22)
Note that this syntax is not a feature of DBVisualizer but of the database you are connecting to. The above works in every RDBMS I've ever used.
http://www.w3schools.com/sql/sql_in.asp
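A quick way to convince yourself the IN form is equivalent to the chained ORs is to run it against any SQL engine. A minimal sketch using Python's built-in sqlite3 (the table name and values here are made up for illustration; the IN syntax itself is standard SQL):

```python
import sqlite3

# In-memory database with a single-column table of test values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col INTEGER)")
conn.executemany("INSERT INTO t (col) VALUES (?)",
                 [(v,) for v in (12, 23, 55, 33, 22, 99)])

# IN matches any value in the list, just like the chained OR version.
rows = conn.execute(
    "SELECT col FROM t WHERE col IN (12, 23, 55, 33, 22)"
).fetchall()
print(sorted(r[0] for r in rows))  # 99 is filtered out
```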
I think this wants the SQL tag!

Related

update several rows of a table at once

In postgresql, I want to update several rows of a table according to their id. This doesn't work:
UPDATE table SET othertable_id = 5 WHERE id = 2, 45, 22, 75
What is the correct syntax for this?
Use an IN operator:
update the_table
set othertable_id = 5
where id in (2,45,22,75);
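The same UPDATE ... WHERE id IN (...) pattern can be sketched end to end with Python's sqlite3 (the table and column names mirror the answer above; the syntax is standard SQL and works the same way in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (id INTEGER PRIMARY KEY, othertable_id INTEGER)")
conn.executemany("INSERT INTO the_table VALUES (?, ?)",
                 [(2, 1), (45, 1), (22, 1), (75, 1), (99, 1)])

# One statement updates all four rows; id 99 is left untouched.
conn.execute("UPDATE the_table SET othertable_id = 5 WHERE id IN (2, 45, 22, 75)")

rows = conn.execute(
    "SELECT id FROM the_table WHERE othertable_id = 5 ORDER BY id"
).fetchall()
print([r[0] for r in rows])  # [2, 22, 45, 75]
```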

Using select statement in pyspark changes values in column

I'm experiencing a very weird behavior in pyspark (databricks).
In my initial dataframe (df_original) I have multiple columns (id, text, and some others), and I add a new column 'detected_language'. The new column is added by joining with another dataframe, df_detections (with columns id and detected_language). The ids in the two dataframes correspond to each other.
df_detections is created like this:
ids = [125, ...] # length x
detections = ['ko', ...] # length x
detections_with_id = list(zip(ids, detections))
df_detections = spark.createDataFrame(detections_with_id, ["id", "detected_language"])
df = df_original.join(df_detections, on='id', how='left')
Here is the weird part: whenever I display the dataframe using a select statement, I get the correct detected_language value. However, displaying the full dataframe without select, I get a totally different value (e.g. 'fr' or some other language code) for the same entry (see the statements and their corresponding results below).
How is that possible? Can anybody think of a reason why this is? And how would I solve something like this?
Displaying correct value with select:
display(df.select(['id', 'text', 'detected_language']))
id  | text             | detected_language
125 | 내 한국어 텍스트 | ko
... | ...              | ...
Displaying wrong value without select:
display(df)
id  | text             | other_columns... | detected_language
125 | 내 한국어 텍스트 | ...              | fr
... | ...              | ...              | ...
I appreciate any hints or ideas! Thank you!

Why am I getting an ambiguous column in my PG on conflict insert?

Here is my query:
insert into zoning_algorithm_value (algorithm_value_id, value, zoning_id)
values (
61,
21,
7321
)
on conflict(algorithm_value_id)
DO
update set value = 21 where zoning_id = 7321 and algorithm_value_id = 61;
I am only referencing one table. Here is the error I am getting.
[42702] ERROR: column reference "zoning_id" is ambiguous
How can it be ambiguous when there is only one table and one column with that name? How do I make this upsert work?
You need to qualify the columns in the WHERE clause with either the table name or EXCLUDED.
For example, if you only want to update value when the "new" zoning_id and algorithm_value_id are 7321 and 61, respectively:
insert into zoning_algorithm_value (algorithm_value_id, value, zoning_id)
values (61, 21, 7321)
on conflict(algorithm_value_id)
DO
update set value = 21 where EXCLUDED.zoning_id = 7321 and EXCLUDED.algorithm_value_id = 61;
If you instead want the WHERE to reference the "existing" record values, use the table name.
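The qualified upsert can be sketched with Python's sqlite3, since SQLite (3.24+) borrowed PostgreSQL's ON CONFLICT ... DO UPDATE syntax, including the EXCLUDED alias. The table and values below are taken from the question; the starting value of 10 is made up so the update is visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE zoning_algorithm_value (
    algorithm_value_id INTEGER PRIMARY KEY, value INTEGER, zoning_id INTEGER)""")
conn.execute("INSERT INTO zoning_algorithm_value VALUES (61, 10, 7321)")

# The second insert conflicts on the primary key; qualifying the WHERE
# columns with EXCLUDED resolves the "ambiguous column" error.
conn.execute("""
    INSERT INTO zoning_algorithm_value (algorithm_value_id, value, zoning_id)
    VALUES (61, 21, 7321)
    ON CONFLICT(algorithm_value_id)
    DO UPDATE SET value = 21
    WHERE EXCLUDED.zoning_id = 7321 AND EXCLUDED.algorithm_value_id = 61
""")

val = conn.execute(
    "SELECT value FROM zoning_algorithm_value WHERE algorithm_value_id = 61"
).fetchone()[0]
print(val)  # 21
```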

How to simply transpose two columns into a single row in postgres?

Following is the output of my query:
key                                ; value
"2BxtRdkRvwc-2hPjF8LBmHD-finapril" ; 4
"3QXORSfsIY0-2sDizCyvY6m-finapril" ; 12
"4QXORSfsIY0-2sDizCyvY6m-curr"     ; 12
"5QXORSfsIY0-29Xcom4SHVh-finapril" ; 12
What I want is simply to pivot the rows into columns, so that only one row remains and each key becomes a column name.
I have seen examples using crosstab for much more complex use cases, but I want to know if there is a simpler way to achieve this in my particular case.
Any help is appreciated.
Thanks
Postgres Version : 9.5.10
A query cannot return columns whose number and names are unknown in advance. The simplest way to get a similar effect is to generate a JSON object, which a client app can easily interpret as a pivot table, for example:
with the_data(key, value) as (
values
('2BxtRdkRvwc-2hPjF8LBmHD-finapril', 4),
('3QXORSfsIY0-2sDizCyvY6m-finapril', 12),
('4QXORSfsIY0-2sDizCyvY6m-curr', 12),
('5QXORSfsIY0-29Xcom4SHVh-finapril', 12)
)
select jsonb_object_agg(key, value)
from the_data;
The query returns this json object:
{
"4QXORSfsIY0-2sDizCyvY6m-curr": 12,
"2BxtRdkRvwc-2hPjF8LBmHD-finapril": 4,
"3QXORSfsIY0-2sDizCyvY6m-finapril": 12,
"5QXORSfsIY0-29Xcom4SHVh-finapril": 12
}
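The same aggregation can be checked outside Postgres: SQLite's json_group_object (assuming the bundled JSON1 functions are available, as they are in modern builds) is the analogue of jsonb_object_agg, and Python's sqlite3 is enough to run it. As in the Postgres output above, key order in the resulting object is not guaranteed:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_data (key TEXT, value INTEGER)")
conn.executemany("INSERT INTO the_data VALUES (?, ?)", [
    ("2BxtRdkRvwc-2hPjF8LBmHD-finapril", 4),
    ("3QXORSfsIY0-2sDizCyvY6m-finapril", 12),
    ("4QXORSfsIY0-2sDizCyvY6m-curr", 12),
    ("5QXORSfsIY0-29Xcom4SHVh-finapril", 12),
])

# json_group_object folds all rows into a single key/value object.
(obj,) = conn.execute(
    "SELECT json_group_object(key, value) FROM the_data"
).fetchone()
print(json.loads(obj)["4QXORSfsIY0-2sDizCyvY6m-curr"])  # 12
```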

Ruby on Rails: How to get monthly count by using PG

I'm facing an issue: I want to write a statement that returns a monthly count.
For example, for the period 2014-01 to 2014-12, return an ordered array like
["Jan, 5", "Feb, 0",...,"Dec, 55" ]
The only solution I can think of is:
1. get a scope to return monthly record
2. calculate the period number, like here is 12
3. repeat 12.times to get record size for each month
4. build array
The problem is I would have to repeat the query 12 times! That seems wrong.
I know group_by could be a better choice, but I have no idea how to get the performance I'm after. Could anyone help me?
Format your date column using Postgres's to_char and then use it in ActiveRecord's group method.
start = Date.new(2014, 1, 1)
finish = Date.new(2014, 12, 31)
range = start..finish
return_hash = ModelClass.
where(created_at: range).
group("to_char(created_at, 'Mon YYYY')").
count
That will return a hash like {"Nov 2014" => 500}.
To 'fill in the gaps' you can create a month_names array and do:
month_names.each{ |month| return_hash[month] ||= 0 }
Consider creating a new hash altogether that has keys sorted according to your month_names variable.
Then to get your desired output:
return_hash.map{ |month, count| "#{month}, #{count}" }
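The group-and-fill idea above is not Rails-specific. A rough sketch of the same steps with Python's sqlite3, where strftime('%Y-%m', ...) plays the role of Postgres's to_char in the GROUP BY (the table name and sample dates are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model_class (created_at TEXT)")
conn.executemany("INSERT INTO model_class VALUES (?)",
                 [("2014-01-05",), ("2014-01-20",), ("2014-11-03",)])

# One query, grouped by month, instead of twelve separate queries.
counts = dict(conn.execute("""
    SELECT strftime('%Y-%m', created_at) AS month, COUNT(*)
    FROM model_class
    WHERE created_at BETWEEN '2014-01-01' AND '2014-12-31'
    GROUP BY month
""").fetchall())

# Fill in the gaps so months with no rows report 0, as in the answer.
months = [f"2014-{m:02d}" for m in range(1, 13)]
result = [f"{m}, {counts.get(m, 0)}" for m in months]
print(result[0], result[10])  # 2014-01, 2 and 2014-11, 1
```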
I use the groupdate gem (https://github.com/ankane/groupdate)
Then add .group_by_month(:created_at).count to your query