Error in org-table-sum org-mode? - emacs

I am just getting started with Emacs org-mode and I am already getting really confused about a simple column sum (org-table-sum). I start with
| date | sum   |
|------+-------|
|      | 16.2  |
|      | 6.16  |
|      | 6.16  |
|      |       |
When I hit C-c + (org-table-sum) below the second column I get the correct sum 28.52. If I add another line to make it
| date | sum   |
|------+-------|
|      | 16.2  |
|      | 6.16  |
|      | 6.16  |
|      | 13.11 |
|      |       |
C-c + gives me 41.629999999999995. ???
If I change the last line from 13.11 to 13.12, C-c + will give me (the correct) 41.64.
WTF?
Any explanation appreciated! Thanks!

Most decimal numbers cannot be represented exactly in binary floating point encoding (either single or double precision).
Paste 13.11 into an online IEEE-754 converter to see this: after conversion to single-precision floating point, the nearest representable number is 13.109999656677246; the double-precision approximation used by Emacs is much closer, but still not exactly 13.11.
This problem is not Emacs-related, but a fundamental issue when working with a floating point representation in a different base (binary rather than decimal).
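You can reproduce the effect in any language that uses IEEE-754 doubles, for example in Python (the decimal module is used here only to display the exact value of the double; the outputs noted in the comments are what CPython typically prints):
from decimal import Decimal

# The exact value of the double nearest to 13.11: close to, but not exactly, 13.11.
print(Decimal(13.11))

# Summing the four values left to right in double precision reproduces
# the org-table-sum result, e.g. 41.629999999999995.
print(16.2 + 6.16 + 6.16 + 13.11)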
Using calc's vsum, the result is OK:
| date | sum   |
|------+-------|
|      | 16.2  |
|      | 6.16  |
|      | 6.16  |
|      | 13.11 |
|------+-------|
|      | 41.63 |
#+TBLFM: @6$2=vsum(@I..@II)
This works because calc uses arbitrary-precision arithmetic and does not encode the numbers in a binary floating point format.
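The same idea can be sketched with Python's decimal module: if the numbers are built from their decimal string form, they are never converted to binary floating point, so the sum comes out exact.
from decimal import Decimal

# Construct each value from its string form so no binary rounding occurs.
values = [Decimal("16.2"), Decimal("6.16"), Decimal("6.16"), Decimal("13.11")]

print(sum(values))  # prints 41.63, matching calc's vsum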

Related

Hive Double and Decimal value incorrect format

I am migrating SQL statements from Greenplum to HiveSQL, run via PySpark scripts. We are encountering an issue with double and decimal values: the value changes when we sum a double or decimal column.
The source data looks like this:
+------------------------+
|            test_amount |
+------------------------+
|    6.987713783438851E8 |
|    1.42712398248124E10 |
|    6.808514167337805E8 |
|  1.5215888908771296E10 |
+------------------------+
Greenplum produces the following output when summing the amount values:
2022-01-16 2 14970011203.1514433
2022-01-17 2 15896793850.6107666
The sum output on the Hive/PySpark side differs:
+-------------+------+------------------------+
|         _c0 |  _c1 |                    _c2 |
+-------------+------+------------------------+
|  2022-01-16 |    2 |  1.4970011203156286E10 |
|  2022-01-17 |    2 |  1.5896740325505075E10 |
+-------------+------+------------------------+
please help us.
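This looks like the same binary floating point issue as the org-mode sum above: the amounts are being read and summed as doubles (note the E-notation), so the totals pick up rounding error. A minimal PySpark sketch of one common workaround, casting to a decimal type before aggregating; the table name, the date column name, and the precision/scale here are illustrative assumptions:
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DecimalType

spark = SparkSession.builder.getOrCreate()

# Hypothetical source table; "test_amount" is the column from the question,
# "txn_date" stands in for whatever the real date column is called.
df = spark.table("source_table")

result = (
    df
    # Cast before aggregating so the sum runs in exact decimal arithmetic
    # instead of binary double precision; precision/scale are assumptions.
    .withColumn("amount_dec", F.col("test_amount").cast(DecimalType(38, 7)))
    .groupBy("txn_date")
    .agg(F.count("*").alias("cnt"), F.sum("amount_dec").alias("total_amount"))
)
result.show(truncate=False)
Note that if the amounts are already stored as doubles in the source, some precision was lost before the cast; to match Greenplum exactly, the column should be read as a decimal type end to end.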

How to convert text-based table to image from terminal in the same structure?

I'm trying to convert a text-based table to an image, but the structure is broken after conversion.
I have a file with the following structure:
+-----------+------+------------+-----------------+
| City name | Area | Population | Annual Rainfall |
+-----------+------+------------+-----------------+
| Adelaide  | 1295 | 1158259    | 600.5           |
| Brisbane  | 5905 | 1857594    | 1146.4          |
| Darwin    | 112  | 120900     | 1714.7          |
| Hobart    | 1357 | 205556     | 619.5           |
| Sydney    | 2058 | 4336374    | 1214.8          |
| Melbourne | 1566 | 3806092    | 646.9           |
| Perth     | 5386 | 1554769    | 869.4           |
+-----------+------+------------+-----------------+
After converting with ImageMagick using the command below:
convert label:"$(cat test.txt)" result1.png
I get the following image:
As you can see, the structure of the columns is broken.
Do you have an idea of how can such an issue be solved?
Regards,
Ihor
You need to set the typeface to a monospace font to match the terminal:
convert -font "Liberation-Mono" label:@test.txt result1.png
You can list the monospace fonts available on your system with:
identify -list font | grep Mono

Weird ghost records in PostgreSQL - what are they?

I have a very weird issue in our PostgreSQL DB. I have a table called "statement" which has some strange records in it.
Using the command-line console psql, I query select * from customer.statement where type in ('QUOTE'); and get 12 rows back. 7 rows look normal; 5 are missing all data except a single column, which is nullable but seems to hold real values entered by the user. psql tells me that 7 rows were returned even though I can see 12. Most of the other columns are not nullable. The weird records look like this:
select * from customer.statement where type = 'QUOTE';
id | issuer_id | recipient_id | recipient_name | recipient_reference | source_statement_id | catalogue_id | reference | issue_date | due_date | description | total | currency | type | tax_level | rounding_mode | status | recall_requested | time_created | time_updated | time_paid
------------------+------------------+------------------+----------------+---------------------+---------------------+--------------+-----------+------------+------------+------------------------------------------------------------------+-----------+----------+-------+-----------+---------------+-----------+------------------+----------------------------+----------------------------+-----------
... 7 valid records removed ...
| | | | | | | | | | Build bulkheads and sheet with plasterboard. +| | | | | | | | | |
| | | | | | | | | | Patch all patches. +| | | | | | | | | |
| | | | | | | | | | Set and sand all joints ready for painting. +| | | | | | | | | |
| | | | | | | | | | Use wall angle on bulkhead in main bedroom. +| | | | | | | | | |
| | | | | | | | | | Build nib and sheet and set in entrance | | | | | | | | | |
(7 rows)
If I run the same query using pgAdmin, I don't see those weird records.
Anyone know what these are?
The plus sign before the separator (+|) indicates a newline character in the displayed string value in psql. So no additional rows, just the same row continued with line breaks. The final line of output in your quote confirms as much: (7 rows).
In pgAdmin you don't see the extra lines as long as you don't increase the height of the field (or copy/paste the content somewhere), but the multiple lines are there as well.
Try in psql and in pgAdmin:
test=# SELECT E'This\nis\na\ntest.' AS multi_line, 'foo' AS single_line;
 multi_line | single_line
------------+-------------
 This      +| foo
 is        +|
 a         +|
 test.      |
(1 row)
The manual about psql:
linestyle
Sets the border line drawing style to one of ascii, old-ascii, or unicode. [...] The default setting is ascii. [...]
ascii style uses plain ASCII characters. Newlines in data are shown using a + symbol in the right-hand margin. [...]

Tableau - Calculated field for difference between date and maximum date in table

I have the following table loaded in Tableau (it has only one column, CreatedOnDate):
+-----------------+
| CreatedOnDate   |
+-----------------+
| 1/1/2016        |
| 1/2/2016        |
| 1/3/2016        |
| 1/4/2016        |
| 1/5/2016        |
| 1/6/2016        |
| 1/7/2016        |
| 1/8/2016        |
| 1/9/2016        |
| 1/10/2016       |
| 1/11/2016       |
| 1/12/2016       |
| 1/13/2016       |
| 1/14/2016       |
+-----------------+
I want to be able to find the maximum date in the table, compare it with every date in the table and get the difference in days. For the above table, the maximum date in table is 1/14/2016. Every date is compared to 1/14/2016 to find the difference.
Expected Output
+-----------------+------------+
| CreatedOnDate   | Difference |
+-----------------+------------+
| 1/1/2016        | 13         |
| 1/2/2016        | 12         |
| 1/3/2016        | 11         |
| 1/4/2016        | 10         |
| 1/5/2016        | 9          |
| 1/6/2016        | 8          |
| 1/7/2016        | 7          |
| 1/8/2016        | 6          |
| 1/9/2016        | 5          |
| 1/10/2016       | 4          |
| 1/11/2016       | 3          |
| 1/12/2016       | 2          |
| 1/13/2016       | 1          |
| 1/14/2016       | 0          |
+-----------------+------------+
My goal is to create this Difference calculated field. I am struggling to find a way to do this using DATEDIFF.
Any help would be appreciated!
woodhead92, that approach would work, but it means you have to use table calculations. A much more flexible approach (available since v8) is a Level of Detail expression:
First, define the MAX date for the whole dataset with this calculated field, called MaxDate LOD:
{FIXED : MAX([CreatedOnDate])}
This always calculates the maximum date for the whole table (it overrides ordinary filters as well; if you need it to reflect them, add those filters to context).
Then you can use pretty much the same calculated field, but there is no need for ATTR or table calculations:
DATEDIFF('day', [CreatedOnDate], [MaxDate LOD])
Hope this helps!

Using named fields to determine ranges with vsum in Emacs org-table-mode, impossible?

I have been trying to simplify a semi-complex table by adding named fields, which worked fine until I got to the vsum operator. I had the formula set to $M=vsum($3..@-4), which works; however, I am continuously having to add and remove items in that range, which changes the numbering, so I have to update the range in the vsum formula after every change. I therefore tried naming the top and bottom fields, with the thought of supplying the named variables to vsum, giving me a table similar to the following:
| / | <>     | <>      |
|---+--------+---------|
|   | Title1 | Title 2 |
|---+--------+---------|
| _ |        | START   |
|   | name   | 1000    |
|   | name   | 3456    |
|   | name   | 123     |
| ^ |        | END     |
|---+--------+---------|
| _ |        | MT      |
| # | Total  | #ERROR  |
| # |        |         |
|---+--------+---------|
#+TBLFM: $MT=vsum($START..$END)
This is the debug formula output from the above table:
Substitution history of formula
Orig: vsum($START..$END)
$xyz-> vsum((1000)..(123))
@r$c-> vsum((1000)..(123))
$1-> vsum((1000)..(123))
-----------^
Error: Expected `)'
I have tried enclosing the named field variables in parentheses, and several other things, but have thus far not been able to get this to work. I hope I am just missing something and being blind, but perhaps this is not possible?
I have also tried the sum-up function, with no success. Thank you in advance for your assistance.
The substitution history shows the problem: $START and $END are replaced by the values of the fields they name before the range is parsed, producing vsum((1000)..(123)), which is not a valid range. The following solution works instead, using @II and @III to refer to all entries between the second and third hline.
| / | <>     | <>      |
|---+--------+---------|
|   | Title1 | Title 2 |
|---+--------+---------|
|   | name   | 1000    |
|   | name   | 3456    |
|   | name   | 123     |
|---+--------+---------|
| _ |        | MT      |
| # | Total  | 4579    |
| # |        |         |
|---+--------+---------|
#+TBLFM: $MT=vsum(@II..@III)
Documentation: http://orgmode.org/manual/References.html#References