Org-mode radio tables, pandoc, and docx

Pandoc's docx converter (version 1.13.1) apparently requires a blank line before the start of a table and after its end. With radio tables, this means a blank line after the BEGIN RECEIVE cookie and a blank line before END RECEIVE. To get the orgtbl SEND command to create these blank lines, I need to specify :tstart and :tend as follows:
<!--
#+ORGTBL: send testtbl orgtbl-to-orgtbl :tstart newline :tend "\\"
| col1 | col2 |
|------+------|
| a    | b    |
| here | is   |
-->
<!-- BEGIN RECEIVE ORGTBL testtbl -->
| col1 | col2 |
|------+------|
| a    | b    |
| here | is   |
<!-- END RECEIVE ORGTBL testtbl -->
My question: Why is different syntax required for :tstart and :tend? Using newline or "\\" for both doesn't work.

Related

Create full text search configuration with two dictionaries

I want to perform a full text search on a PostgreSQL column using both the english_stem dictionary and the simple dictionary. I can do something like this:
ALTER TEXT SEARCH CONFIGURATION english_simple_conf
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart, word, hword, hword_part
WITH english_stem, simple;
But this checks that the word is in both dictionaries. Is there a way to alter this configuration so the word can be matched with one dictionary OR the other?
Edit:
The reason I think they are not being checked in order is because when searching for a partial word that should be found in the simple dictionary, nothing is returned.
select * from ts_debug('english', 'gutter cleaning services');
   alias   |   description   |  token   |  dictionaries  |  dictionary  | lexemes
-----------+-----------------+----------+----------------+--------------+----------
 asciiword | Word, all ASCII | gutter   | {english_stem} | english_stem | {gutter}
 blank     | Space symbols   |          | {}             |              |
 asciiword | Word, all ASCII | cleaning | {english_stem} | english_stem | {clean}
 blank     | Space symbols   |          | {}             |              |
 asciiword | Word, all ASCII | services | {english_stem} | english_stem | {servic}
select * from ts_debug('simple', 'gutter cleaning services');
   alias   |   description   |  token   | dictionaries | dictionary |  lexemes
-----------+-----------------+----------+--------------+------------+------------
 asciiword | Word, all ASCII | gutter   | {simple}     | simple     | {gutter}
 blank     | Space symbols   |          | {}           |            |
 asciiword | Word, all ASCII | cleaning | {simple}     | simple     | {cleaning}
 blank     | Space symbols   |          | {}           |            |
 asciiword | Word, all ASCII | services | {simple}     | simple     | {services}
select name from categories where (to_tsvector('english_simple_conf', name) @@ (to_tsquery('english_simple_conf', 'cleani:*')));
 name
------
(0 rows)
But searching for a partial in the english dictionary returns as expected.
select name from categories where (to_tsvector('english_simple_conf', name) @@ (to_tsquery('english_simple_conf', 'clea:*')));
           name
--------------------------
 Gutter Cleaning Services
> But this checks that the word is in both dictionaries.
That's not correct. As noted in the docs (see the description of the dictionary_name parameter), it checks them in order; it only consults the second dictionary if the first did not return anything for the token. You can verify this with ts_debug().
testdb=# ALTER TEXT SEARCH CONFIGURATION english_simple_conf
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart, word, hword, hword_part
WITH simple;
ALTER TEXT SEARCH CONFIGURATION
testdb=# select * from ts_debug('public.english_simple_conf', 'cars boats n0taword');
   alias   |       description        |  token   | dictionaries | dictionary |  lexemes
-----------+--------------------------+----------+--------------+------------+------------
 asciiword | Word, all ASCII          | cars     | {simple}     | simple     | {cars}
 blank     | Space symbols            |          | {}           |            |
 asciiword | Word, all ASCII          | boats    | {simple}     | simple     | {boats}
 blank     | Space symbols            |          | {}           |            |
 numword   | Word, letters and digits | n0taword | {simple}     | simple     | {n0taword}
(5 rows)
testdb=# ALTER TEXT SEARCH CONFIGURATION english_simple_conf
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart, word, hword, hword_part
WITH english_stem, simple;
ALTER TEXT SEARCH CONFIGURATION
testdb=# select * from ts_debug('public.english_simple_conf', 'cars boats n0taword');
   alias   |       description        |  token   |     dictionaries      |  dictionary  |  lexemes
-----------+--------------------------+----------+-----------------------+--------------+------------
 asciiword | Word, all ASCII          | cars     | {english_stem,simple} | english_stem | {car}
 blank     | Space symbols            |          | {}                    |              |
 asciiword | Word, all ASCII          | boats    | {english_stem,simple} | english_stem | {boat}
 blank     | Space symbols            |          | {}                    |              |
 numword   | Word, letters and digits | n0taword | {simple}              | simple       | {n0taword}
(5 rows)
The reason for the difference in the last two queries is that english_stem stems 'Cleaning' to 'clean', so searching for 'cleani:*' will not match. Try adding the to_tsvector and to_tsquery expressions as columns and removing them from the WHERE; you'll see that "Gutter Cleaning Services" is stemmed to 'clean':2 'gutter':1 'servic':3.
testdb=# select to_tsvector('english_simple_conf', name), to_tsquery('english_simple_conf', 'cleani:*'), name from categories;
           to_tsvector           | to_tsquery |           name
---------------------------------+------------+--------------------------
 'clean':2 'gutter':1 'servic':3 | 'cleani':* | Gutter Cleaning Services
(1 row)
testdb=# select to_tsvector('english_simple_conf', name), to_tsquery('english_simple_conf', 'cleaning:*'), name from categories;
           to_tsvector           | to_tsquery |           name
---------------------------------+------------+--------------------------
 'clean':2 'gutter':1 'servic':3 | 'clean':*  | Gutter Cleaning Services
(1 row)
If you change the tsquery to search for cleaning:* instead, that will get stemmed as well and again match. But english_stem cannot figure out that 'cleani' is meant to stem to 'clean' unless it also sees the 'ng'. So that token falls through to simple, which performs no stemming, and you end up with the mismatch: a trailing i still in the tsquery, but not in the tsvector.
Stemming isn't meant to work on arbitrary prefixes of words, only on whole ones; for prefix matching, you'd use a traditional left-anchored LIKE.
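For example, a minimal sketch against the same categories table (the pattern needs a leading wildcard here because 'cleani' occurs mid-string; a truly left-anchored pattern, with no leading %, could also use an index):

-- matches any row whose name contains 'cleani', case-insensitively
select name from categories where name ilike '%cleani%';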

Does SQL have a way to group rows without squashing the group into a single row?

I want to do a single query that outputs an array of arrays of table rows. Think along the lines of <table><rowgroup><tr><tr><tr><rowgroup><tr><tr>. Is SQL capable of this? (specifically, as implemented in MariaDB, though migration to AWS RDS might occur one day)
The GROUP BY statement alone does not do this; it creates one row per group.
Here's an example of what I'm thinking of…
SELECT * FROM memes;
+------------+----------+
| file_name  | file_ext |
+------------+----------+
| kittens    | jpeg     |
| puppies    | gif      |
| cats       | jpeg     |
| doggos     | mp4      |
| horses     | gif      |
| chickens   | gif      |
| ducks      | jpeg     |
+------------+----------+
SELECT * FROM memes GROUP BY file_ext WITHOUT COLLAPSING GROUPS;
+------------+----------+
| file_name  | file_ext |
+------------+----------+
| kittens    | jpeg     |
| cats       | jpeg     |
| ducks      | jpeg     |
+------------+----------+
| puppies    | gif      |
| horses     | gif      |
| chickens   | gif      |
+------------+----------+
| doggos     | mp4      |
+------------+----------+
I've been using MySQL for ~20 years and have not come across this functionality before but maybe I've just been looking in the wrong place ¯\_(ツ)_/¯
I haven't seen an array rendering such as the one you want, but you can simulate it with nested GROUP BY / GROUP_CONCAT() queries.
For example:
select concat('[', group_concat(g), ']') as a
from (
  select concat('[', group_concat(file_name), ']') as g
  from memes
  group by file_ext
) x;
Result:
a
---------------------------------------------------------
[[puppies,horses,chickens],[kittens,cats,ducks],[doggos]]
See running example at DB Fiddle.
You can tweak the delimiters (the comma, the [, and the ]).
A plain SELECT ... ORDER BY file_ext will come close to your second output.
Using GROUP BY ... WITH ROLLUP would let you put subtotals under each group, which is not what you wanted either, but it does give you extra rows where you want the breaks.
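For instance, a quick sketch of that ROLLUP variant against the same memes table:

-- One row per (file_ext, file_name) pair, plus a subtotal row with
-- file_name = NULL after each file_ext group and a grand-total row at
-- the end; those NULL rows mark the group breaks.
SELECT file_ext, file_name, COUNT(*) AS n
FROM memes
GROUP BY file_ext, file_name WITH ROLLUP;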

PostgreSQL, import a CSV file

I'm trying to copy a CSV file into an empty table. I already tried matching the columns in the CSV, which failed with the exact same error.
COPY books
FROM '/path/to/file/books.csv' CSV HEADER;
error:
ERROR: extra data after last expected column
CONTEXT: COPY books, line 2: "1,Harry Potter and the Half-Blood Prince (Harry Potter #6),J.K. Rowling/Mary GrandPré,4.57,0439785..."
SQL state: 22P04
Also, I would like publication_date to be of type date so it can be queried. How can that be applied during the copy?
A piece of the CSV file:
bookID | title                          | authors | average_rating | isbn  | isbn13 | num_pages | ratings_count | text_reviews_count | publication_date
-------+--------------------------------+---------+----------------+-------+--------+-----------+---------------+--------------------+------------------
     1 | harry potter (harry Potter #6) | author  | 4              | "num" | "num"  | 600       | 3243          | 534                | 01/01/2000
SELECT * FROM books
output (column names and types):
bookID | title             | authors | average_rating | isbn | isbn13 | language_code     | num_pages | ratings_count | text_reviews_count | publication_date | publisher
-------+-------------------+---------+----------------+------+--------+-------------------+-----------+---------------+--------------------+------------------+-------------------
text   | character varying | text    | integer        | text | text   | character varying | integer   | bigint        | bigint             | date             | character varying
First of all, the number of columns in the CSV file doesn't match the number of columns in the table, but you can specify via the COPY command exactly which table columns you need:
COPY books (bookID, title, authors, average_rating, isbn, isbn13, num_pages, ratings_count, text_reviews_count, publication_date)
FROM '/path/to/file/books.csv' CSV HEADER DELIMITER ',';
You can also specify your delimiter, as shown above.
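As for casting publication_date to a date during the import: COPY itself performs no type conversions, so the usual pattern is to stage the raw text and cast on insert. A rough sketch, assuming the CSV layout above and US-style dates (books_raw is a hypothetical staging table):

-- Staging table: every CSV column as text, so COPY accepts the file as-is.
CREATE TEMP TABLE books_raw (
    bookID text, title text, authors text, average_rating text, isbn text,
    isbn13 text, num_pages text, ratings_count text,
    text_reviews_count text, publication_date text
);

COPY books_raw FROM '/path/to/file/books.csv' CSV HEADER;

-- Cast while moving rows into the real table; 'MM/DD/YYYY' matches 01/01/2000.
-- average_rating is rounded because the sample shows 4.57 but the column is integer.
INSERT INTO books (bookID, title, authors, average_rating, isbn, isbn13,
                   num_pages, ratings_count, text_reviews_count, publication_date)
SELECT bookID, title, authors, round(average_rating::numeric)::integer, isbn,
       isbn13, num_pages::integer, ratings_count::bigint,
       text_reviews_count::bigint, to_date(publication_date, 'MM/DD/YYYY')
FROM books_raw;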

Generating a markdown table with key bindings in Spacemacs

What is the best way to generate a markdown table with key bindings in Spacemacs (evil mode)?
Update: To clarify, this question is not about editing markdown, but automatically generating the table content for a large number of key bindings.
This could be an elisp function iterating through the possible single keystrokes (letters, numbers, punctuation, and possibly space and some control characters, with and without modifier keys), seeing what function each key is bound to (if any), and getting the description of the function.
You can do that manually using SPC h d k, but it would be handy to generate a table, given the number of possible key bindings and the way they can depend on the buffer mode and the state.
The table should show single keystrokes (letters, numbers, punctuation) with and without modifiers, the function bound to them, and the first line of the function description.
The result should look something like this:
https://github.com/cjolowicz/howto/blob/master/spacemacs.md
| Key | Mnemonic | Description | Function |
| ------ | -------- | --------------------------------------------------------------- | ------------------------ |
| a | *append* | Switch to Insert state just after point. | `evil-append` |
| b | *backward* | Move the cursor to the beginning of the COUNT-th previous word. | `evil-backward-word-begin` |
| c | *change* | Change text from BEG to END with TYPE. | `evil-change` |
| d | *delete* | Delete text from BEG to END with TYPE. | `evil-delete` |
| e | *end* | Move the cursor to the end of the COUNT-th next word. | `evil-forward-word-end` |
| f | *find* | Move to the next COUNT’th occurrence of CHAR. | `evil-find-char` |
| g | *goto* | (prefix) | |
| h | | Move cursor to the left by COUNT characters. | `evil-backward-char` |
| i | *insert* | Switch to Insert state just before point. | `evil-insert` |
| j | | Move the cursor COUNT lines down. | `evil-next-line` |
| k | | Move the cursor COUNT lines up. | `evil-previous-line` |
| l | | Move cursor to the right by COUNT characters. | `evil-forward-char` |
| m | *mark* | Set the marker denoted by CHAR to position POS. | `evil-set-marker` |
| n | *next* | Goes to the next occurrence. | `evil-ex-search-next` |
| o | *open* | Insert a new line below point and switch to Insert state. | `evil-open-below` |
| p | *paste* | Disable paste transient state if there is more than 1 cursor. | `evil-mc-paste-after` |
| q | | Record a keyboard macro into REGISTER. | `evil-record-macro` |
| r | *replace* | Replace text from BEG to END with CHAR. | `evil-replace` |
| s | *substitute* | Change a character. | `evil-substitute` |
| t | *to* | Move before the next COUNT’th occurrence of CHAR. | `evil-find-char-to` |
| u | *undo* | Undo changes. | `evil-tree-undo` |
| v | *visual* | Characterwise selection. | `evil-visual-char` |
| w | *word* | Move the cursor to the beginning of the COUNT-th next word. | `evil-forward-word-begin` |
| x | *cross* | Delete next character. | `evil-delete-char` |
| y | *yank* | Saves the characters in motion into the kill-ring. | `evil-yank` |
| z | *scroll* | (prefix) | |
(The Mnemonic column would of course be handcrafted.)
The orgtbl-mode minor mode that comes with Org (and therefore Emacs itself) should be able to help here. Activate it, then use TAB and RET to navigate from cell to cell, letting orgtbl create and balance cells as you go. (Balancing happens when you navigate to a new cell, e.g. with TAB.)
You'll have to start the table yourself, e.g. with something like
| Key | Mnemonic | Description | Function |
|-
but from there orgtbl can take over. You can also use things like org-table-insert-column and org-table-move-row-down to make other kinds of tabular changes.
I'm not entirely sure how nicely this will play with evil-mode or which bindings it will come with out of the box, but it's worth a try.
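As for generating the rows automatically, here is a rough, untested sketch of the kind of elisp the question describes (the function name is made up; it only walks plain and Meta-modified printable ASCII keys, and docstrings containing | would still need escaping):

(require 'cl-lib)

(defun my/insert-binding-table ()
  "Insert a markdown table of the commands bound to printable keys.
Covers plain and Meta-modified printable ASCII characters in the
keymaps active in the current buffer."
  (interactive)
  (insert "| Key | Description | Function |\n")
  (insert "|-----|-------------|----------|\n")
  (dolist (mod '("" "M-"))
    (cl-loop
     for char from ?! to ?~
     do (let* ((key (concat mod (char-to-string char)))
               (binding (key-binding (kbd key))))
          ;; Skip unbound keys and prefix keymaps; keep named commands only.
          (when (and binding (symbolp binding) (fboundp binding))
            (insert (format "| %s | %s | `%s` |\n"
                            key
                            ;; First line of the docstring, if there is one.
                            (car (split-string (or (documentation binding) "")
                                               "\n"))
                            binding)))))))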

Cross tab with a list of values instead of summation

I want a crosstab that lists field values and counts them instead of just giving a count for the summation. I know I could make this with groups, but I can't list the values vertically that way. From my research I believe I have to use a Display String formula.
SQL Field Data
------------------------------------------------
| Play # | Formation | Back Set | R/P | PLAY   |
------------------------------------------------
|      1 | TREY      | FG       | R   | TRUCK  |
|      2 | T         | FG       | R   | RHINO  |
|      3 | D         | FG       | P   | 5 STEP |
|      4 | D         | FG       | P   | 5 STEP |
|      5 | K JET     | NG       | R   | DOG    |
------------------------------------------------
Desired report structure:
---------------------------------------------
| Back Set & Formation | Run     | Pass     |
---------------------------------------------
| NG K JET             | BULLA 1 |          |
|                      | HELL 3  |          |
---------------------------------------------
| FG D                 |         | 5 STEP 2 |
---------------------------------------------
| NG K JET             | DOG     |          |
---------------------------------------------
| FG T                 | RHINO   |          |
---------------------------------------------
I don't see why a crosstab is necessary for this, especially if the entire body of the report is just that table.
1. Group your records by Back Set and Formation. If that's not something natively available in your table, make a new Formula field and group on that.
2. Drop the 3 relevant fields into whichever section you need to display. (It might be a Footer, depending on whether or not you want repeats.)
3. Write a formula to determine whether Run or Pass is displayed, and place it in their suppression field; a sketch follows this list. (Good luck getting a crosstab to do that for you! It tends to prefer 0s over blanks.)
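For step 3, the suppression formula for the Run column might be as simple as this (Crystal formula syntax; the table name is hypothetical, and suppression fires when the formula evaluates to true):

// Suppress the Run column whenever the record is not a run play
{Plays.R/P} <> "R"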
If there's more to the report than just this table, you can cheat the system by placing your "table" into a subreport. And of course you can stretch Line objects across the sections, and they will stretch to form the table outlines.