I already have a method in Excel, but I want the padding to be done in the query itself to reduce my effort later in the process.
Excel Example
=TEXT(LEFT(A2,FIND(".",A2,1)-1),"000") & "." & TEXT(MID(A2,FIND(
".",A2,1)+1,FIND(".",A2,FIND(".",A2,1)+1)-FIND(".",A2,1)-1),"000")
& "." & TEXT(MID(A2,FIND(".",A2,FIND(".",A2,1)+1)+1,FIND(".",A2,
FIND(".",A2,FIND(".",A2,1)+1)+1)-FIND(".",A2,FIND(".",A2,1)+1)-1),
"000") & "." & TEXT(RIGHT(A2,LEN(A2)-FIND(".",A2,FIND(".",A2,FIND(
".",A2,1)+1)+1)),"000")
I searched the PostgreSQL documentation, but nothing obvious came up for converting to a zero-padded format.
I also investigated doing a CAST, as I have done for hostnames using a regex.
Hostname CAST Example for PostgreSQL
UPPER(regexp_replace(da.host_name, '([\.][\w\.]+)', '', 'g')) AS hostname
But I am hitting a roadblock here. Any suggestions?
I'm assuming that by "padding an IP" you want to lpad each part of the IP with zeros.
Using regexp_replace you can do the following:
SELECT regexp_replace(regexp_replace('19.2.2.2', '([0-9]{1,3})', '00\1', 'g'), '(0*)([0-9]{3})', '\2', 'g');
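To see what each pass contributes, here is the same pair of substitutions mirrored with Python's re module (just a sketch for illustration; the SQL above is the actual answer):

```python
import re

def pad_ip(ip: str) -> str:
    """Zero-pad each octet of a dotted-quad string to 3 digits,
    mirroring the two nested regexp_replace calls above."""
    # Pass 1: prefix every 1-3 digit run with two zeros.
    padded = re.sub(r'([0-9]{1,3})', r'00\1', ip)
    # Pass 2: keep only the last 3 digits of each run, dropping surplus zeros.
    return re.sub(r'(0*)([0-9]{3})', r'\2', padded)

print(pad_ip('19.2.2.2'))    # 019.002.002.002
print(pad_ip('128.0.10.1'))  # 128.000.010.001
```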
Optionally, if you are on 9.4 or newer, you can get crafty with unnest() or regexp_split_to_table() and the new WITH ORDINALITY clause to split each IP part (along with the key from the table) out into its own row. Then you can lpad() each part with zeros and string_agg() it back together, ordering by the ordinal that was preserved by the unnest or regexp_split_to_table:
user=# SELECT * FROM test;
id | ip
----+--------------
1 | 19.16.2.2
2 | 20.321.123.1
(2 rows)
user=# SELECT id, string_agg(lpad(ip_part, 3, '0'),'.' ORDER BY rn) FROM test t, regexp_split_to_table(t.ip, '\.') WITH ORDINALITY s(ip_part, rn) GROUP BY id;
id | string_agg
----+-----------------
1 | 019.016.002.002
2 | 020.321.123.001
(2 rows)
Theoretically this would work in older versions, since ordinals seem to be preserved during unnest(), but that feels more like luck, and I wouldn't put code into production that depends on it.
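The split / pad / re-join pipeline is also easy to sanity-check outside the database; a Python mirror with a made-up helper name:

```python
def pad_ip_parts(ip: str) -> str:
    """Split on '.', left-pad each part to 3 characters with '0',
    then join back in the original order (same idea as the
    regexp_split_to_table / lpad / string_agg query above)."""
    return '.'.join(part.rjust(3, '0') for part in ip.split('.'))

print(pad_ip_parts('19.16.2.2'))     # 019.016.002.002
print(pad_ip_parts('20.321.123.1'))  # 020.321.123.001
```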
Related
I have a value in Salesforce that we store in another database. The value in Salesforce has commas. I have to delete the commas and then place the new number in Postgres before running a query. Is there a way to do this in fewer steps?
Example: SF Number = 1,234,567
1) Paste in Postgres
2) Remove commas manually
3) Run Query
The number can be 4, 5, 6, or 7 digits long, so substr doesn't work consistently.
The perfect format at the end should look like --- select * from customers where id=1234567
postgres=# select REGEXP_REPLACE('1,234,567', ',', '', 'g');
regexp_replace
----------------
1234567
(1 row)
postgres=#
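If the goal is simply fewer manual steps, the comma stripping can also happen before the value ever reaches Postgres; a sketch in Python (the variable names are made up):

```python
# Clean up the value in whatever script handles the Salesforce number,
# so no manual editing is needed before running the query.
sf_number = '1,234,567'               # value as copied from Salesforce
cleaned = sf_number.replace(',', '')  # drop every comma
query = f"select * from customers where id={cleaned}"
print(query)  # select * from customers where id=1234567
```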
I have a column with the following entries:
01:02:02:02
02:01:100:300
128:02:12:02
input
I need a way to choose which parts to display, like
01:02:02
02:01:100
128:02:12
output
or
01:02
02:01
128:02
I tried solutions suggested in similar questions, without success, for example:
select substring(column_name, '[^:]*$') from table_name;
How can I make this work?
To get the first three parts, you can use
SELECT substring(column_name FROM '^(([^:]*:){2}[^:]*)')
FROM table_name;
For the first two parts, omit the {2}. For the first four parts, make it {3}.
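The same anchored pattern works in Python's re, which makes it easy to verify the repetition count before putting it in a query (hypothetical helper name):

```python
import re

def first_parts(value: str, n: int) -> str:
    """Return the first n colon-separated parts, mirroring
    substring(... FROM '^(([^:]*:){m}[^:]*)') with m = n - 1."""
    m = re.match(r'^(([^:]*:){%d}[^:]*)' % (n - 1), value)
    return m.group(1)

print(first_parts('02:01:100:300', 3))  # 02:01:100
print(first_parts('128:02:12:02', 2))   # 128:02
```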
Try split_part() (where you can specify which field you want), e.g.:
t=# with s as (select '128:02:12:02'::text m) select split_part(m,':',1),split_part(m,':',2) from s;
split_part | split_part
------------+------------
128 | 02
(1 row)
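For reference, split_part is 1-indexed and returns an empty string for field numbers past the end; a Python mirror of that behaviour (a sketch, not a Postgres API):

```python
def split_part(s: str, delim: str, n: int) -> str:
    """Mimic Postgres split_part: 1-based field number,
    empty string when the requested field does not exist."""
    parts = s.split(delim)
    return parts[n - 1] if 0 < n <= len(parts) else ''

print(split_part('128:02:12:02', ':', 1))  # 128
print(split_part('128:02:12:02', ':', 2))  # 02
```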
I have a table with a string column. I want to remove the stop words. I used this query, which seems OK:
SELECT to_tsvector('english',colName)from tblName order by colName asc;
but it does not update the column in the table.
I want to see PostgreSQL's stop words and what the query found, so that I can replace the list with my own file if needed. I also checked this path and could not find the stop word list file. Actually, the path does not exist.
$SHAREDIR/tsearch_data/english.stop
There is no function to do that.
You could use something like this (in this example in German):
SELECT array_to_string(tsvector_to_array(to_tsvector('Hallo, Bill und Susi!')), ' ');
array_to_string
-----------------
bill hallo susi
(1 row)
This removes stop words, but also stems and non-words, and it does not care about word order, so I doubt that the result will make you happy.
If that doesn't fit the bill, you can use regexp_replace like this:
SELECT regexp_replace('Bill and Susi, hand over or die!', '\y(and|or|if)\y', '', 'g');
regexp_replace
-----------------------------
Bill Susi, hand over die!
(1 row)
But that requires that you include your list of stop words in the query string. An improved version would store the stop words in a table.
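Note that \y is Postgres's word-boundary escape; in most other regex flavors, including Python's re, it is spelled \b. A mirror of the same substitution, which also shows the doubled spaces that removed words leave behind:

```python
import re

sentence = 'Bill and Susi, hand over or die!'
# \y in Postgres corresponds to \b here: match stop words only as whole
# words, so 'and' inside 'hand' and 'or' inside 'over' are untouched.
result = re.sub(r'\b(and|or|if)\b', '', sentence)
print(result)  # Bill  Susi, hand over  die!
```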
The accepted answer did not match my requirement, but I found a solution for this:
SELECT regexp_replace('Bill and Susi, hand over or die!', '[^ ]*$','');
regexp_replace
-----------------------------
Bill and Susi, hand over or
(1 row)
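For comparison, the same pattern in Python's re, which makes it easy to see that what gets removed is the final space-free token and that the trailing space survives:

```python
import re

# '[^ ]*$' anchors at the end of the string and eats the last
# run of non-space characters ('die!'), leaving everything before it.
result = re.sub(r'[^ ]*$', '', 'Bill and Susi, hand over or die!')
print(repr(result))  # 'Bill and Susi, hand over or '
```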
I have the same question as this:
Splitting a comma-separated field in Postgresql and doing a UNION ALL on all the resulting tables
Just that my 'fruits' column is delimited by '|'. When I try:
SELECT
yourTable.ID,
regexp_split_to_table(yourTable.fruits, E'|') AS split_fruits
FROM yourTable
I get the following:
ERROR: type "e" does not exist
Q1. What does the E do? I saw some examples where E is not used. The official docs don't explain it in their "quick brown fox..." example.
Q2. How do I use '|' as the delimiter for my query?
Edit: I am using PostgreSQL 8.0.2. Neither unnest() nor regexp_split_to_table() is supported.
A1
E is a prefix for POSIX-style escape strings. You don't normally need it in modern Postgres. Only prepend it if you want special characters in the string to be interpreted, like E'\n' for a newline character. Details and links to documentation:
Insert text with single quotes in PostgreSQL
SQL select where column begins with \
E is pointless noise in your query. Worse, on your Postgres 8.0.2 it fails outright: the E'...' syntax was only introduced in 8.1, so the parser reads E'|' as a cast to the (nonexistent) type "e", which is where your error comes from. The answer you are linking to is not very good, I am afraid.
A2
Not quite as is: | has special meaning in regular expressions (alternation), so it must be escaped in the pattern. The E, however, can simply be dropped:
SELECT id, regexp_split_to_table(fruits, '\|') AS split_fruits
FROM tbl;
For simple delimiters, you don't need expensive regular expressions. This is typically faster:
SELECT id, unnest(string_to_array(fruits, '|')) AS split_fruits
FROM tbl;
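As a side note, string_to_array treats '|' as a literal delimiter rather than a regex, exactly like str.split in other languages; a quick Python check with made-up sample data:

```python
fruits = 'apple|banana|cherry'
# string_to_array(fruits, '|') splits on the literal pipe, like str.split:
print(fruits.split('|'))  # ['apple', 'banana', 'cherry']
```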
In Postgres 9.3+ you'd rather use a LATERAL join for set-returning functions:
SELECT t.id, f.split_fruits
FROM tbl t
LEFT JOIN LATERAL unnest(string_to_array(fruits, '|')) AS f(split_fruits)
ON true;
Details:
What is the difference between LATERAL and a subquery in PostgreSQL?
PostgreSQL unnest() with element number
Amazon Redshift is not Postgres
It only implements a reduced set of features, as documented in its manual. In particular, there are no table functions, including the essential unnest(), generate_series(), and regexp_split_to_table(), in queries that access any tables (i.e., run on its "compute nodes").
You should go with a normalized table layout to begin with (extra table with one fruit per row).
Or here are some options to create a set of rows in Redshift:
How to select multiple rows filled with constants in Amazon Redshift?
This workaround should do it:
Create a table of numbers with at least as many rows as the maximum number of fruits in your column. Make it temporary, or permanent if you'll keep using it. Say we never have more than 9:
CREATE TEMP TABLE nr9(i int);
INSERT INTO nr9(i) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9);
Join to the number table and use split_part(), which is actually implemented in Redshift:
SELECT *, split_part(t.fruits, '|', n.i) As fruit
FROM nr9 n
JOIN tbl t ON split_part(t.fruits, '|', n.i) <> ''
Voilà.
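The mechanics of the numbers-table join can be mirrored procedurally; a Python sketch (function name and sample data are made up) showing why the <> '' condition drops the surplus rows:

```python
def split_with_numbers(fruits: str, max_parts: int = 9):
    """Emulate the numbers-table join: for i = 1..max_parts, keep
    split_part(fruits, '|', i) whenever it is non-empty."""
    fields = fruits.split('|')
    # split_part returns '' for field numbers past the end of the string,
    # and the JOIN condition <> '' filters exactly those rows out.
    return [fields[i - 1] for i in range(1, max_parts + 1)
            if i <= len(fields) and fields[i - 1] != '']

print(split_with_numbers('apple|banana'))  # ['apple', 'banana']
print(split_with_numbers('one'))           # ['one']
```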
I am getting this error in pg production mode, but it's working fine in SQLite3 development mode.
ActiveRecord::StatementInvalid in ManagementController#index
PG::Error: ERROR: column "estates.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "estates".* FROM "estates" WHERE "estates"."Mgmt" = ...
^
: SELECT "estates".* FROM "estates" WHERE "estates"."Mgmt" = 'Mazzey' GROUP BY user_id
#myestate = Estate.where(:Mgmt => current_user.Company).group(:user_id).all
If user_id is the PRIMARY KEY then you need to upgrade PostgreSQL; newer versions will correctly handle grouping by the primary key.
If user_id is neither unique nor the primary key for the 'estates' relation in question, then this query doesn't make much sense, since PostgreSQL has no way to know which value to return for each column of estates where multiple rows share the same user_id. You must use an aggregate function that expresses what you want, like min, max, avg, string_agg, array_agg, etc or add the column(s) of interest to the GROUP BY.
Alternately you can rephrase the query to use DISTINCT ON and an ORDER BY if you really do want to pick a somewhat arbitrary row, though I really doubt it's possible to express that via ActiveRecord.
Some databases - including SQLite and MySQL - will just pick an arbitrary row. This is considered incorrect and unsafe by the PostgreSQL team, so PostgreSQL follows the SQL standard and considers such queries to be errors.
If you have:
col1 col2
fred 42
bob 9
fred 44
fred 99
and you do:
SELECT col1, col2 FROM mytable GROUP BY col1;
then it's obvious that you should get the row:
bob 9
but what about the result for fred? There is no single correct answer to pick, so the database will refuse to execute such unsafe queries. If you wanted the greatest col2 for any col1 you'd use the max aggregate:
SELECT col1, max(col2) AS max_col2 FROM mytable GROUP BY col1;
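In procedural terms, the aggregate version computes one well-defined value per group, which is why it is safe; a small Python sketch over the same sample rows:

```python
rows = [('fred', 42), ('bob', 9), ('fred', 44), ('fred', 99)]

# GROUP BY col1 with max(col2): exactly one value per key,
# so there is no ambiguity for the database to refuse.
max_col2 = {}
for col1, col2 in rows:
    max_col2[col1] = max(col2, max_col2.get(col1, col2))

print(max_col2)  # {'fred': 99, 'bob': 9}
```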
I recently moved from MySQL to PostgreSQL and encountered the same issue. Just for reference, the best approach I've found is to use DISTINCT ON as suggested in this SO answer:
Elegant PostgreSQL Group by for Ruby on Rails / ActiveRecord
This will let you get one record for each unique value in your chosen column that matches the other query conditions:
MyModel.where(:some_col => value).select("DISTINCT ON (unique_col) *")
I prefer DISTINCT ON because I can still get all the other column values in the row. DISTINCT alone will only return the value of that specific column.
After often receiving this error myself, I realised that Rails (I am using Rails 4) automatically adds ORDER BY id at the end of your grouping query, which often results in the error above. So make sure you append your own .order(:group_by_column) at the end of your Rails query. You will then have something like this:
#problems = Problem.select('problems.username, sum(problems.weight) as weight_sum').group('problems.username').order('problems.username')
#myestate1 = Estate.where(:Mgmt => current_user.Company)
#myestate = #myestate1.select("DISTINCT(user_id)")
This is what I did.