I have a file like this.
Column1 Column2
-3500 value1
-3480 value2
-3460 value 3
9920 value 50
9940
9960
10000
10020
40000 Last value
Look at this example:
awk 'NR>1{$1=$1-4.91}1' file
Column1 Column2
-3504.91 value1
-3484.91 value2
-3464.91 value 3
9915.09 value 50
9935.09
9955.09
9995.09
10015.1
39995.1 Last value
I would like to have the correct values, not the rounded ones, like this:
Column1 Column2
-3504.91 value1
-3484.91 value2
-3464.91 value 3
9915.09 value 50
9935.09
9955.09
9995.09
10015.09
39995.09 Last value
I am trying to subtract a constant value, 4.91, from the first column. Everything works fine until 9980, but starting at 10000, awk subtracts only 4.9 from the data and gives values 0.01 less than expected. I think it is rounding to some number of decimal places, but I don't want the rounded values. Can anybody suggest a workaround?
Any other suggestions using shell script or Perl are welcome.
This can be done in Perl:
perl -pe 's/^([-0-9]+)/$1 - 4.91/e' your_file
Details:
-p reads the file line by line, runs the code, and prints each line
-e runs Perl code with the line content in $_
s/.../.../e - replaces the regexp match with the result of evaluating the expression
^([-0-9]+) - matches digits and/or a - sign at the start of the line, and captures the matched fragment
$1 - 4.91 - does the work using the value captured by the regexp
Here is how to do it with awk:
You simply set how many decimals you like using the printf function.
awk 'NR>1{printf "%.2f\n",$1-4.91}' file
-3504.91
-3484.91
-3464.91
9915.09
9935.09
9955.09
9995.09
10015.09
39995.09
$ awk 'BEGIN{CONVFMT="%.2f"} NR>1{$1=$1-4.91}1' file
Column1 Column2
-3504.91 value1
-3484.91 value2
-3464.91 value 3
9915.09 value 50
9935.09
9955.09
9995.09
10015.09
39995.09 Last value
or if you prefer:
$ awk 'NR>1{$1=sprintf("%.2f",$1-4.91)}1' file
Column1 Column2
-3504.91 value1
-3484.91 value2
-3464.91 value 3
9915.09 value 50
9935.09
9955.09
9995.09
10015.09
39995.09 Last value
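The reason the rounding appears in the first place: awk converts numbers to strings using CONVFMT (on assignment, as in $1=$1-4.91) or OFMT (when print formats a bare number), and both default to "%.6g", i.e. six significant digits, which is too few once the values reach 10000. A quick sketch of the difference:

```shell
# print of a bare non-integer number uses OFMT (default "%.6g")
awk 'BEGIN { print 10015.09 }'
# -> 10015.1

# assigning to a field converts the number to a string via CONVFMT
echo '10020 x' | awk '{ $1 = $1 - 4.91 } 1'
# -> 10015.1 x

# widening CONVFMT fixes the field-assignment case
echo '10020 x' | awk 'BEGIN { CONVFMT = "%.2f" } { $1 = $1 - 4.91 } 1'
# -> 10015.09 x
```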
Related
I am trying to make a function for the aggregate consumption by mid in a kdb+ table (aggregate value by mid). This table is being imported from a csv file like this:
table: ("JJP";enlist",")0:`:data.csv
where the metadata for the table columns is:
mid is type long (j), value is type long (j) and ts is type timestamp (p).
Here is my function:
agg: {select avg value by mid from table}
but I get the following error:
'type
[0] get select avg value by mid from table
But the type of value is long (j), so I am not sure why I can't get the avg. I also tried this with type int.
value can't be used as a column name because it is a keyword in kdb+. Renaming the column should correct the issue.
value is a keyword and should not be used as a column name.
https://code.kx.com/q/ref/value/
You can remove it as a column name using .Q.id
https://code.kx.com/q/ref/dotq/#qid-sanitize
q)t:flip`value`price!(1 2;1 2)
q)t
value price
-----------
1 1
2 2
q)t:.Q.id t
q)t
value1 price
------------
1 1
2 2
Or xcol
https://code.kx.com/q/ref/cols/#xcol
q)(enlist[`value]!enlist[`val]) xcol t
val price
---------
1 1
2 2
You can rename the value column as you read it:
flip`mid`val`ts!("JJP";",")0:`:data.csv
below is the sample data
column1 column2
20      23,24,32,xyz,78
21      xx,32,ss,11,78
22      pqr,sql,2,77a,67
Now how can I update the value at the 4th position of column2 to 'TRUE'?
For record 1 the value 'xyz' would be replaced by 'TRUE',
for record 2 the value '11' would be replaced by 'TRUE',
and for record 3 the value '77a' would be replaced by 'TRUE'.
The updated table would look like:
column1 column2
20      23,24,32,TRUE,78
21      xx,32,ss,TRUE,78
22      pqr,sql,2,TRUE,67
You can convert the comma separated string into an array, unnest that array and while aggregating back into the dreaded comma separated string, replace the value at the fourth position:
update badly_designed_table
set column2 = (select string_agg(case idx when 4 then 'TRUE' else ch end, ',' order by idx)
from unnest(string_to_array(column2, ',')) with ordinality as x(ch, idx))
;
If you need that frequently, you can write a function that sets the value at a specific position of such a string.
The correct solution however is to not store data this way.
If you really need to de-normalize your model, then at least use a proper array rather than a comma separated string.
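For comparison, outside the database the same fourth-position replacement on a flat comma-separated file is a one-liner in awk (a sketch, assuming the values themselves never contain embedded commas):

```shell
printf '23,24,32,xyz,78\nxx,32,ss,11,78\npqr,sql,2,77a,67\n' |
  awk -F, -v OFS=, '{ $4 = "TRUE" } 1'
# 23,24,32,TRUE,78
# xx,32,ss,TRUE,78
# pqr,sql,2,TRUE,67
```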
We have an existing column (type double precision) in our Postgres table and we want to convert that column's data type to numeric. We've tried the approaches below, but all of them truncated or lost data in the last decimal positions:
directly converting to numeric
converting to numeric with precision and scale
converting to text and then to numeric
converting to text only
The data loss I mentioned looks like this: for example, if we have the value 23.291400909423828, then after altering the column's data type that value becomes 23.2914009094238, losing the last 2 decimal places.
Note: this happens only if the value has more than 13 digits to the right of the decimal point.
One way to possibly do this:
show extra_float_digits ;
extra_float_digits
--------------------
3
create table float_numeric(number_fld float8);
insert into float_numeric values (21.291400909423828), (23.291400909422436);
select * from float_numeric ;
number_fld
--------------------
21.291400909423828
23.291400909422435
alter table float_numeric alter COLUMN number_fld type numeric using number_fld::text::numeric;
\d float_numeric
Table "public.float_numeric"
Column | Type | Collation | Nullable | Default
------------+---------+-----------+----------+---------
number_fld | numeric | | |
select * from float_numeric ;
number_fld
--------------------
21.291400909423828
23.291400909422435
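The loss described in the question matches float8 precision: a double precision value carries 15-17 significant decimal digits, and text output limited to 15 significant digits drops the tail, which is why extra_float_digits matters before the ::text::numeric cast. A quick illustration with printf, using the question's sample value:

```shell
# 23.291400909423828 rendered at only 15 significant digits loses the
# last two decimals - exactly the truncation described in the question
awk 'BEGIN { printf "%.15g\n", 23.291400909423828 }'
# -> 23.2914009094238
```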
I have a column in a Postgresql table that is unique and character varying(10) type. The table contains old alpha-numeric values that I need to keep. Every time a new row is created from this point forward, I want it to be numeric only. I would like to get the max numeric-only value from this table for this column then create a new row with that max value incremented by 1.
Is there a way to query this table for the max numeric value only for this column?
For example, if this column currently has the values:
1111
A1111A
1234
1234A
3331
B3332
C-3333
33-D33
3**333*
Is there a query that will return 3333, AKA cut out all the non-numeric characters from the values and then perform a MAX() on them?
Not precisely what you're asking, but something that I think will work better for you.
To go over all the values in the column, strip the non-digits, cast to a number and return the max (nullif guards against values with no digits at all, and bigint avoids overflow on long digit strings):
SELECT MAX(nullif(regexp_replace(my_column, '[^0-9]', '', 'g'), '')::bigint) FROM public.foobar;
This gets you your max value... say 2999.
Now, going forward, consider making the default for your column a serial-like value, and convert it to text... that way you set the "MAX" once, and then let postgres do all the work for future values.
-- create simple integer sequence
CREATE SEQUENCE public.foobar_my_column_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
-- use new sequence as default value for column __and__ convert to text
ALTER TABLE foobar
ALTER COLUMN my_column
SET DEFAULT nextval('public.foobar_my_column_seq'::regclass)::text;
-- initialize "next value" of sequence to whatever is larger than
-- what you already have in your data ... say 3000:
ALTER SEQUENCE public.foobar_my_column_seq RESTART WITH 3000;
Because you're simply setting default, you don't change your current alpha-numeric values.
I figured it out. The following query works.
select text_value, regexp_replace(text_value, '[^0-9]+', '') as new_value from the_table;
Result:
text_value | new_value
-----------------------+-------------
4*215474 | 4215474
740024 | 740024
4*100535 | 4100535
42356 | 42356
CASH |
4*215474 | 4215474
740025 | 740025
740026 | 740026
4*5089655798 | 45089655798
4*15680 | 415680
4*224034 | 4224034
4*265718708 | 4265718708
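As a quick sanity check outside the database, the same strip-non-digits-then-max idea can be sketched in the shell against the sample values from the question:

```shell
printf '1111\nA1111A\n1234\n1234A\n3331\nB3332\nC-3333\n33-D33\n3**333*\n' |
  sed 's/[^0-9]//g' | sort -n | tail -n 1
# -> 3333
```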
I need to remove the prefix chr in the first column
column1 column2
chr1 123456
chr2 125679
to look like
1 123456
2 125679
I tried sed -i 's/chr//g', but it will create an empty space.
Try this one without the g:
sed -i 's/chr//' file
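Anchoring the pattern to the start of the line is a slightly safer variant, since it cannot touch a later occurrence of "chr" in the data:

```shell
printf 'chr1 123456\nchr2 125679\n' | sed 's/^chr//'
# 1 123456
# 2 125679
```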