Cannot get NS records - postgresql

I currently have a PostgreSQL backed PowerDNS setup. The records table for a domain is currently set up somewhat like this:
 id | domain_id | name            | type  | content                                                      | ttl | prio | change_date | disabled | ordername | auth
----+-----------+-----------------+-------+--------------------------------------------------------------+-----+------+-------------+----------+-----------+------
 16 |         2 | server1.abcd.me | A     | 1.2.3.4                                                      | 300 |      |  1458164023 | f        |           | f
 17 |         2 | abcd.me         | CNAME | server1.abcd.me                                              | 300 |      |  1458164023 | f        |           | f
 13 |         2 | abcd.me         | NS    | e.ns.buddyns.com                                             | 300 |      |  1458165277 | f        |           | f
 15 |         2 | abcd.me         | NS    | ns3.abcd.me                                                  | 300 |      |  1458165277 | f        |           | f
 14 |         2 | abcd.me         | NS    | ns2.abcd.me                                                  | 300 |      |  1458165277 | f        |           | f
 12 |         2 | abcd.me         | SOA   | server1.abcd.me abcdef.ghijk@gmail.com 0 3600 3600 36000 300 | 300 |      |  1458165447 | f        |           | f
 18 |         2 | www.abcd.me     | CNAME | server1.abcd.me                                              | 300 |      |  1458165629 | f        |           | f
However, dig @localhost abcd.me NS comes up with:
; <<>> DiG 9.9.5-11ubuntu1.3-Ubuntu <<>> @localhost abcd.me NS
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50639
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1680
;; QUESTION SECTION:
;abcd.me. IN NS
;; ANSWER SECTION:
abcd.me. 300 IN CNAME server1.abcd.me.
;; AUTHORITY SECTION:
abcd.me. 300 IN SOA server1.abcd.me. abcdef\.ghijk.gmail.com. 1458165629 3600 3600 36000 300
;; Query time: 2 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Mar 16 23:01:02 CET 2016
;; MSG SIZE rcvd: 120
I cannot understand why the NS records do not show up; any help would be much appreciated.
Thanks!

I just figured out why this wasn't working.
Per RFC 1912 (correct me if I'm wrong), a name that has a CNAME record cannot have any other record types attached to it. Replacing the abcd.me CNAME with an abcd.me A record solved it for me.
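For reference, the change in the backend looked roughly like this (a sketch against the standard PowerDNS PostgreSQL schema shown above; the IDs and address are illustrative):
-- Replace the zone-apex CNAME with an A record so it can coexist
-- with the NS and SOA records at abcd.me (values illustrative).
BEGIN;
DELETE FROM records
 WHERE domain_id = 2 AND name = 'abcd.me' AND type = 'CNAME';
INSERT INTO records (domain_id, name, type, content, ttl, disabled, auth)
VALUES (2, 'abcd.me', 'A', '1.2.3.4', 300, false, true);
COMMIT;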


Check previous and next record

I'm trying to compare costs across different periods, but I don't know how to compare a single record with the records before and after it. What I need is a yes or no in my dataset when the costs of a record are the same as those of both the record before and the record after.
My dataset looks like this:
+--------+-----------+----------+------------+-------+-----------+
| Client | Provision | CAK Year | CAK Period | Costs | Serial Nr |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 210 | 2017 | 13 | 150 | 1 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 210 | 2018 | 1 | 200 | 2 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 210 | 2018 | 2 | 170 | 3 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 210 | 2018 | 3 | 150 | 4 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 210 | 2018 | 4 | 150 | 5 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 210 | 2018 | 5 | 150 | 6 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 689 | 2018 | 1 | 345 | 1 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 689 | 2018 | 2 | 345 | 1 |
+--------+-----------+----------+------------+-------+-----------+
| 1 | 689 | 2018 | 3 | 345 | 1 |
+--------+-----------+----------+------------+-------+-----------+
What I've tried so far:
CASE
    WHEN Provision = Provision
     AND Costs = LEAD(Costs, 1, 0) OVER(ORDER BY [CAK Year], [CAK Period])
     AND Costs = LAG(Costs, 1, 0) OVER(ORDER BY [CAK Year], [CAK Period])
    THEN 'Yes'
    ELSE 'No'
END
My expected result:
+--------+-----------+----------+------------+-------+-----------+--------+
| Client | Provision | CAK Year | CAK Period | Costs | Serial Nr | Result |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 210       | 2017     | 13         | 150   | 1         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 210       | 2018     | 1          | 200   | 2         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 210       | 2018     | 2          | 170   | 3         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 210       | 2018     | 3          | 150   | 4         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 210       | 2018     | 4          | 150   | 5         | Yes    |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 210       | 2018     | 5          | 150   | 6         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 689       | 2018     | 1          | 345   | 1         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 689       | 2018     | 2          | 345   | 1         | Yes    |
+--------+-----------+----------+------------+-------+-----------+--------+
| 1      | 689       | 2018     | 3          | 345   | 1         | No     |
+--------+-----------+----------+------------+-------+-----------+--------+
Can you help me further? I don't get the expected result.
You need to add partition by Provision, otherwise your lag and lead ordering will run across all Provision values (the 1, 0 arguments also give lead and lag a default of 0 when there is no next or previous row, so the first and last rows of each group correctly come out as No):
declare @d table(Client int, Provision int, CAKYear int, CAKPeriod int, Costs int, SerialNr int);

insert into @d values
 (1,210,2017,13,150,1)
,(1,210,2018,1,200,2)
,(1,210,2018,2,170,3)
,(1,210,2018,3,150,4)
,(1,210,2018,4,150,5)
,(1,210,2018,5,150,6)
,(1,689,2018,1,345,1)
,(1,689,2018,2,345,1)
,(1,689,2018,3,345,1);

select *
      ,case when Costs = lead(Costs, 1, 0) over(partition by Provision order by CAKYear, CAKPeriod)
             and Costs = lag(Costs, 1, 0) over(partition by Provision order by CAKYear, CAKPeriod)
            then 'Yes'
            else 'No'
       end as Result
from @d
order by Provision
        ,CAKYear
        ,CAKPeriod;
Output
+--------+-----------+---------+-----------+-------+----------+--------+
| Client | Provision | CAKYear | CAKPeriod | Costs | SerialNr | Result |
+--------+-----------+---------+-----------+-------+----------+--------+
| 1 | 210 | 2017 | 13 | 150 | 1 | No |
| 1 | 210 | 2018 | 1 | 200 | 2 | No |
| 1 | 210 | 2018 | 2 | 170 | 3 | No |
| 1 | 210 | 2018 | 3 | 150 | 4 | No |
| 1 | 210 | 2018 | 4 | 150 | 5 | Yes |
| 1 | 210 | 2018 | 5 | 150 | 6 | No |
| 1 | 689 | 2018 | 1 | 345 | 1 | No |
| 1 | 689 | 2018 | 2 | 345 | 1 | Yes |
| 1 | 689 | 2018 | 3 | 345 | 1 | No |
+--------+-----------+---------+-----------+-------+----------+--------+

SQL Server 2008 R2 - converting columns to rows and have all values in one column

I am having a hard time wrapping my head around the pivot/unpivot concepts and am hoping someone can help or give me some guidance on how to approach my problem.
Here is a simplified sample of the table I have:
+-------+------+------+------+------+------+
| SAUID | COM1 | COM2 | COM3 | COM4 | COM5 |
+-------+------+------+------+------+------+
| 1 | 24 | 22 | 100 | 0 | 45 |
| 2 | 34 | 55 | 789 | 23 | 0 |
| 3 | 33 | 99 | 5552 | 35 | 4675 |
+-------+------+------+------+------+------+
The end result I am looking for is a table similar to the one below:
+-------+-----------+-------+
| SAUID | OCCUPANCY | VALUE |
+-------+-----------+-------+
| 1 | COM1 | 24 |
| 1 | COM2 | 22 |
| 1 | COM3 | 100 |
| 1 | COM4 | 0 |
| 1 | COM5 | 45 |
| 2 | COM1 | 34 |
| 2 | COM2 | 55 |
| 2 | COM3 | 789 |
| 2 | COM4 | 23 |
| 2 | COM5 | 0 |
| 3 | COM1 | 33 |
| 3 | COM2 | 99 |
| 3 | COM3 | 5552 |
| 3 | COM4 | 35 |
| 3 | COM5 | 4675 |
+-------+-----------+-------+
I'm looking around, but most of the examples seem to use PIVOT, and I'm having a hard time applying that to my case since I need all the values in one column.
I was hoping to experiment with some hardcoding to get familiar with the technique, but my actual tables have ~100 columns with varying numbers of SAUID rows per table, so it looks like this will require dynamic SQL?
Thanks for the help in advance.
Use UNPIVOT:
SELECT u.SAUID, u.OCCUPANCY, u.VALUE
FROM yourTable t
UNPIVOT
(
    VALUE FOR OCCUPANCY IN (COM1, COM2, COM3, COM4, COM5)
) u
ORDER BY
    u.SAUID, u.OCCUPANCY;
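Since the real table has around 100 COM columns, the IN (...) list can be generated dynamically from the table metadata instead of hardcoded. A rough sketch (the name dbo.yourTable is carried over from the answer above; it assumes every column other than SAUID is to be unpivoted and that those columns share a compatible type):
-- Build the column list from sys.columns, then run the UNPIVOT.
-- FOR XML PATH is used because SQL Server 2008 R2 has no STRING_AGG.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(c.name)
    FROM sys.columns c
    WHERE c.object_id = OBJECT_ID(N'dbo.yourTable')
      AND c.name <> N'SAUID'
    ORDER BY c.column_id
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 1, '');

SET @sql = N'
SELECT u.SAUID, u.OCCUPANCY, u.VALUE
FROM dbo.yourTable t
UNPIVOT (VALUE FOR OCCUPANCY IN (' + @cols + N')) u
ORDER BY u.SAUID, u.OCCUPANCY;';

EXEC sys.sp_executesql @sql;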

Are pg_stat_database and pg_stat_activity really listing the same stuff aka how do I get a list of all backends

In this answer to the question "Right query to get the current number of connections in a PostgreSQL DB", the poster implies that
SELECT sum(numbackends) FROM pg_stat_database;
and
SELECT count(*) FROM pg_stat_activity;
give the same results.
However, if I do this on my db the first one says 119 and the second one 30.
This is the difference as shown by summing numbackends and counting:
+------+-------------+-------+
| | numbackends | count |
+------+-------------+-------+
| db1 | 1 | 1 |
| db2 | 1 | 1 |
| db3 | 1 | 1 |
| db4 | 1 | 1 |
| db5 | 2 | 2 |
| db6 | 2 | 2 |
| db7 | 12 | 3 | <--
| db8 | 4 | 4 |
| db9 | 5 | 5 |
| db10 | 78 | 35 | <--
+------+-------------+-------+
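For what it's worth, the comparison above can be produced with a query along these lines (a sketch; the column aliases are mine, and pg_stat_activity's pid column is called procpid before PostgreSQL 9.2):
-- numbackends as reported by the stats collector, versus the
-- sessions actually visible in pg_stat_activity, per database.
SELECT d.datname,
       d.numbackends,
       count(a.pid) AS count
FROM pg_stat_database d
LEFT JOIN pg_stat_activity a ON a.datname = d.datname
GROUP BY d.datname, d.numbackends
ORDER BY d.datname;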
Why does this difference exist?
How can I list each of the 119-30=89 backends not shown in pg_stat_activity?

conditional sum(sumif) in org-table

I have a table like this:
#+NAME: ENTRY
|------+--------|
| Item | Amount |
|------+--------|
| A | 100 |
| B | 20 |
| A | 120 |
| C | 40 |
| B | 50 |
| A | 20 |
| C | 16 |
|------+--------|
and then I need to sum each item in another table:
#+NAME: RESULT
|------+-----|
| Item | Sum |
|------+-----|
| A | 240 |
| B | 70 |
| C | 56 |
|------+-----|
I've tried using vlookup and remote references in this table, but I'm not able to sum the resulting list, as in:
#+TBLFM: $2=vsum((vconcat (org-lookup-all $1 '(remote(ENTRY,#2$1..#>$1)) '(remote(ENTRY,#2$2..#>$2)))))
But it does not give the answer, so I have to use a placeholder column to hold the resulting list and then sum it:
#+NAME: RESULT
|------+--------------+-----|
| Item | Placeholder | Sum |
|------+--------------+-----|
| A | [100 120 20] | 240 |
| B | [20 50] | 70 |
| C | [40 16] | 56 |
|------+--------------+-----|
#+TBLFM: $2='(vconcat (org-lookup-all $1 '(remote(ENTRY,#2$1..#>$1)) '(remote(ENTRY,#2$2..#>$2))))::$3=vsum($2)
Is there a better solution for this?
One way to do this is without vsum:
#+TBLFM: $2='(apply '+ (mapcar 'string-to-number (org-lookup-all $1 '(remote(ENTRY,#2$1..#>$1)) '(remote(ENTRY,#2$2..#>$2)))))
If you want to use a calc function, you can always use calc-eval:
#+TBLFM: $2='(calc-eval (format "vsum(%s)" (vconcat (org-lookup-all $1 '(remote(ENTRY,#2$1..#>$1)) '(remote(ENTRY,#2$2..#>$2))))))

Comint Mode Inserts Line Break Every 4096 Characters

Using Emacs 23.2.1 on Ubuntu Lucid, any mode based on Comint inserts occasional line breaks into larger outputs (see the example Shell-mode and SQL-mode output below). I've tried this in both SQL mode and Shell mode, with the same result in either case. Running similar commands in a plain terminal emulator does not cause these problems (for both the shell and mysql commands).
Things I have tried:
Using MySQL in SQL Mode, adding the following flags: -A, -C, -t, -f, -n, and setting max_allowed_packet to 16MB.
Setting comint-buffer-maximum-size to 10240.
None of these have any effect on this behavior.
If I scroll up to the lines in question and delete the line breaks, the output then appears correctly, so a possible solution to this problem could involve a hook that deletes every 4096th character, if such a thing is possible.
Note: In the terminal examples, the output appears to be cut off at points other than every 4096 characters. In SQL-mode, it is exactly every 4096 (a suspicious number indeed).
Here is some sample output:
brent@battlecruiser:/$ for i in {1..4096}; do echo -n 0; done; echo;
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
In this case, it should print out a single line of 0s, but in fact a newline character has been inserted after 904 characters.
Also an Example in SQL Mode using MySQL:
mysql> show variables like '%n%';
+-----------------------------------------+----------------------------------+
| Variable_name | Value |
+-----------------------------------------+----------------------------------+
| auto_increment_increment | 1 |
| auto_increment_offset | 1 |
| binlog_cache_size | 32768 |
| binlog_format | STATEMENT |
| bulk_insert_buffer_size | 8388608 |
| character_set_client | utf8 |
| character_set_connection | utf8 |
| collation_connection | utf8_general_ci |
| collation_database | latin1_swedish_ci |
| collation_server | latin1_swedish_ci |
| completion_type | 0 |
| concurrent_insert | 1 |
| connect_timeout | 10 |
| delayed_insert_limit | 100 |
| delayed_insert_timeout | 300 |
| div_precision_increment | 4 |
| engine_condition_pushdown | ON |
| error_count | 0 |
| event_scheduler | OFF |
| foreign_key_checks | ON |
| ft_boolean_syntax | + -><()~*:""&| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit | 20 |
| general_log | OFF |
| general_log_file | /var/lib/mysql/battlecruiser.log |
| group_concat_max_len | 1024 |
| have_community_features | YES |
| have_dynamic_loading | YES |
| have_innodb | YES |
| have_ndbcluster | NO |
| have_openssl | DISABLED |
| have_partitioning | YES |
| have_symlink | YES |
| hostname | battlecruiser |
| identity | 0 |
| ignore_builtin_innodb | OFF |
| init_connect | |
| init_file | |
| init_slave | |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_size | 8388608 |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend
|
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_io_threads | 4 |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | |
| innodb_force_recovery | 0 |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 1048576 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 90 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_open_files | 300 |
| innodb_rollback_on_timeout | OFF |
| innodb_stats_on_metadata | ON |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 20 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 8 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_legacy_cardinality_algorithm | ON |
| insert_id | 0 |
| interactive_timeout | 28800 |
| join_buffer_size | 131072 |
| keep_files_on_create | OFF |
| key_cache_division_limit | 100 |
| language | /usr/share/mysql/english/ |
| last_insert_id | 0 |
| lc_time_names | en_US |
| license | GPL |
| local_infile | ON |
| locked_in_memory | OFF |
| log_bin | OFF |
| log_bin_trust_function_creators | OFF |
| log_bin_trust_routine_creators | OFF |
| log_queries_not_using_indexes | OFF |
| log_warnings | 1 |
| long_query_time | 10.000000 |
| lower_case_table_names | 0 |
| max_binlog_cache_size | 4294963200 |
| max_binlog_size | 104857600 |
| max_connect_errors | 10 |
| max_connections | 151 |
| max_error_count | 64 |
| max_insert_delayed_threads | 20 |
| max_join_size | 18446744073709551615 |
| max_length_for_sort_data | 1024
|
| max_prepared_stmt_count | 16382 |
| max_sort_length | 1024 |
| max_sp_recursion_depth | 0 |
| max_user_connections | 0 |
| max_write_lock_count | 4294967295 |
| min_examined_row_limit | 0 |
| multi_range_count | 256 |
| myisam_data_pointer_size | 6 |
| myisam_recover_options | BACKUP |
| net_buffer_length | 16384 |
| net_read_timeout | 30 |
| net_retry_count | 10 |
| net_write_timeout | 60 |
| new | OFF |
| open_files_limit | 1024 |
| optimizer_prune_level | 1 |
| plugin_dir | /usr/lib/mysql/plugin |
| profiling | OFF |
| profiling_history_size | 15 |
| protocol_version | 10 |
| query_cache_min_res_unit | 4096 |
| query_cache_wlock_invalidate | OFF |
| rand_seed1 | |
| rand_seed2 | |
| range_alloc_block_size | 4096 |
| read_only | OFF |
| read_rnd_buffer_size | 262144 |
| relay_log_index | |
| relay_log_info_file | relay-log.info |
| rpl_recovery_rank | 0 |
| skip_external_locking | ON |
| skip_networking | OFF |
| slave_net_timeout | 3600 |
| slave_transaction_retries | 10 |
| slow_launch_time | 2 |
| sql_auto_is_null | ON |
| sql_log_bin | ON |
| sql_max_join_size | 18446744073709551615 |
| sql_notes | ON |
| sql_slave_skip_counter | |
| sql_warnings | OFF |
| storage_engine | MyISAM |
| sync_binlog | 0 |
| sync_frm | ON |
| system_time_zone | EDT |
| table_definition_cache | 256 |
| table_open_cache | 64 |
| thread_handling | one-thread-per-connection |
| time_zone | SYSTEM |
| transaction_alloc_block_size | 8192 |
| transaction_prealloc_size | 4096 |
| tx_isolation
| REPEATABLE-READ |
| unique_checks | ON |
| version | 5.1.41-3ubuntu12.10 |
| version_comment | (Ubuntu) |
| version_compile_machine | i486 |
| version_compile_os | debian-linux-gnu |
| warning_count | 0 |
+-----------------------------------------+----------------------------------+
159 rows in set (0.00 sec)
Here the output is always interrupted by a newline at exact intervals of 4096 characters.
In addition to possible solutions, any new ways to find more information about what is happening would be appreciated.
I had similar problems, though my breaks seemed to be at 1024 characters (ah-ha! in version 21.1 this was the case). This wasn't that big a deal for me, but I did write something that properly concatenated the results so I could post-process them. That didn't affect the output though, so it won't be much help.
The root of your problem lies in read_process_output in process.c, which hard codes the 4096:
/* Read pending output from the process channel,
starting with our buffered-ahead character if we have one.
Yield number of decoded characters read.
This function reads at most 4096 characters.
If you want to read all available subprocess output,
you must call it repeatedly until it returns zero.
The characters read are decoded according to PROC's coding-system
for decoding. */
static int
read_process_output (proc, channel)
Lisp_Object proc;
register int channel;
{
// ... snip
int readmax = 4096;
As you mentioned in your question, a possible solution would be to write a function (call it clean-up-comint-output-at-4096-chars) and add it to comint-output-filter-functions. Something like this (note: untested code):
(add-hook 'comint-output-filter-functions 'clean-up-comint-output-at-4096-chars)

(defun clean-up-comint-output-at-4096-chars (&optional str)
  "Look for a string of 4096 length and remove the newline in the buffer."
  (let ((magic-block-size 4096))
    (save-match-data
      (when (= magic-block-size (length str))
        ;; at the magic block size, look for a newline
        (goto-char (point-max))
        (when (and (search-backward str nil t)
                   (progn
                     (forward-char magic-block-size)
                     (looking-at "\n")))
          (delete-char 1))))))
I have found the solution to this problem. I had put the following code, sourced from http://www.emacswiki.org/emacs/SqlMode, in my configuration file:
(defun sql-add-newline-first (output)
  "Add newline to beginning of OUTPUT for `comint-preoutput-filter-functions'."
  (concat "\n" output))

(defun sqli-add-hooks ()
  "Add hooks to `sql-interactive-mode-hook'."
  (add-hook 'comint-preoutput-filter-functions
            'sql-add-newline-first))

(add-hook 'sql-interactive-mode-hook 'sqli-add-hooks)
After removing that code (which, because it sets comint-preoutput-filter-functions, affects shell-mode as well), I no longer experience these issues.
My proposed replacement for this code to get the behavior I want (works for me so far):
(defun sql-add-newline-first (output)
  "Add newline to beginning of OUTPUT for `comint-preoutput-filter-functions'."
  (remove-hook 'comint-preoutput-filter-functions
               'sql-add-newline-first)
  (concat "\n" output))

(defun sql-send-region-better (start end)
  "Send a region to the SQL process."
  (interactive "r")
  (if (buffer-live-p sql-buffer)
      (save-excursion
        (add-hook 'comint-preoutput-filter-functions
                  'sql-add-newline-first)
        (comint-send-region sql-buffer start end)
        (if (string-match "\n$" (buffer-substring start end))
            ()
          (comint-send-string sql-buffer "\n"))
        (message "Sent string to buffer %s." (buffer-name sql-buffer))
        (if sql-pop-to-buffer-after-send-region
            (pop-to-buffer sql-buffer)
          (display-buffer sql-buffer)))
    (message "No SQL process started.")))

(defvar sql-mode-map
  (let ((map (make-sparse-keymap)))
    (define-key map (kbd "C-c C-c") 'sql-send-paragraph)
    (define-key map (kbd "C-c C-r") 'sql-send-region-better)
    (define-key map (kbd "C-c C-s") 'sql-send-string)
    (define-key map (kbd "C-c C-b") 'sql-send-buffer)
    map)
  "Mode map used for `sql-mode'.")
Essentially, I add the hook right before my sql-send-region-better code sends the region; inside the hook I remove the hook again, guaranteeing that it inserts only the one newline that I want.
Here is my implementation of only prepending "\n" once per input:
(defvar sql-last-prompt-pos 1
  "Position of last prompt when added recording started.")
(make-variable-buffer-local 'sql-last-prompt-pos)
(put 'sql-last-prompt-pos 'permanent-local t)

(defun sql-add-newline-first (output)
  "Add newline to beginning of OUTPUT for
`comint-preoutput-filter-functions'.
This fixes up the display of queries sent to the inferior
buffer programmatically, but also adds an extra newline for
interactive commands."
  (let ((begin-of-prompt
         (or (and comint-last-prompt-overlay
                  ;; sometimes this overlay is not on the prompt
                  (save-excursion
                    (goto-char (overlay-start comint-last-prompt-overlay))
                    (and (looking-at-p comint-prompt-regexp)
                         (point))))
             1)))
    (if (> begin-of-prompt sql-last-prompt-pos)
        (progn
          (setq sql-last-prompt-pos begin-of-prompt)
          (concat "\n" output))
      output)))

(defun le-sqli-setup ()
  "Add hooks to `sql-interactive-mode-hook'."
  (add-hook 'comint-preoutput-filter-functions
            'sql-add-newline-first t t))

(add-hook 'sql-interactive-mode-hook 'le-sqli-setup)
My solution: add a newline, but then remove the hook to prevent multiple newlines breaking up the text, and re-add the hook at every input prompt.
(defun sql-add-newline-first (output)
  "Add newline to beginning of sql OUTPUT, but remove the hook so
that it doesn't output a newline every time the output cache is
filled."
  (remove-hook 'comint-preoutput-filter-functions 'sql-add-newline-first)
  (concat "\n" output))

(defun sql-readd-newline-first (ignore)
  "Re-add the newline-putting hook."
  (add-hook 'comint-preoutput-filter-functions 'sql-add-newline-first))

(defun sqli-add-hooks ()
  "Add the 'suicidal' newline printing hook, and another hook to
respawn it at every input prompt."
  (add-hook 'comint-preoutput-filter-functions 'sql-add-newline-first)
  (add-hook 'comint-input-filter-functions 'sql-readd-newline-first))

(add-hook 'sql-interactive-mode-hook 'sqli-add-hooks)
Also, in my case I was using PostgreSQL, which has the nasty habit of putting extra prompts after a multiline query (like database-# database-# database-# | col | col |), which pushes the column names away. To solve it, I eventually did this:
(defun sql-remove-continuing-prompts (output)
  (concat "\n" (replace-regexp-in-string "warren_hero[^=()]# " "" output)))

(defun sqli-add-hooks ()
  (add-hook 'comint-preoutput-filter-functions 'sql-remove-continuing-prompts))

(add-hook 'sql-interactive-mode-hook 'sqli-add-hooks)