KDB port gets reset after closing the current session

I am working with a KDB+ 4.0 database. I can see in the Kx docs that we can set the port with the following command:
\p 5000
This sets the port to 5000. But once I close the KDB terminal, the port gets reset to 0i, and all the tables and data created in the session on port 5000 vanish.
Does anybody know how to set the port so that it stays the same for every session until I change it?

Add -p 5000 to the start command of your q session.
https://code.kx.com/q/basics/listening-port/
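For instance, assuming q is on your PATH, a minimal sketch of launching a session that listens on 5000 from the start, then confirming the port from inside q:

q -p 5000
q)\p        / show the current listening port
5000i

Note that the port, like any in-memory state, lasts only for the lifetime of the process; passing -p on the command line (or putting \p 5000 in a startup script) sets it again on every launch.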

Related

pg_settings discrepancy between psql and npgsql

When connecting to a pg 11 instance and executing
select setting, source from pg_settings
where name='tcp_keepalives_interval';
I get two different responses between connecting via psql and via a C# script using Npgsql.
The command line psql client returns
0 | default
while the Npgsql script will return
75 | default
75 matches net.ipv4.tcp_keepalive_intvl but I still would have expected 0.
What is the cause of this discrepancy and how can I account for it generally in C# with Npgsql?
Looking into my crystal ball, I see that your database server is not on Windows and the psql session is running on the database server. Your psql session is connected via UNIX sockets (a local connection).
The documentation says (emphasis mine):
keepalives_count
Controls the number of TCP keepalives that can be lost before the client's connection to the server is considered dead. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where TCP_KEEPCNT or an equivalent socket option is available; on other systems, it has no effect.
If you connect with psql via TCP, you should see 75 as well.
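To force TCP from psql, pass a host explicitly (e.g. psql -h 192.168.2.1 -d mydb). A minimal C# sketch, with placeholder host and credentials, that reads the setting over the same kind of TCP connection Npgsql uses:

using System;
using Npgsql;

class Program
{
    static void Main()
    {
        // Host= forces a TCP connection; host name and credentials are placeholders.
        var connString = "Host=192.168.2.1;Username=user;Password=secret;Database=mydb";
        using var conn = new NpgsqlConnection(connString);
        conn.Open();
        using var cmd = new NpgsqlCommand(
            "select setting, source from pg_settings where name = 'tcp_keepalives_interval';",
            conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            Console.WriteLine($"{reader.GetString(0)} | {reader.GetString(1)}");
    }
}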

KDB - using a gui client

I'm just getting started using KDB again. At my old job everything was already set up on a server with a number of tables, and I would just query the data. I'm now loading my own data files and want to be able to query them from a GUI client.
The two I used in the past were QStudio and QPadInsight. For both of these, I need to connect to the server. I'm pretty sure it should listen on localhost, since the server is running on the same desktop computer as the client. I don't know what port to set it to. Also, do I need to do anything to have the server running other than opening a command prompt and running q (c:\q\w32\q.exe)?
Thanks for the help.
You only need to set the port to connect to it via QPad.
You can also load a specific file into the session from the command prompt:
c:\q\w32>q server.q -p 1234        / loads the server.q file into the q session
KDB+ 3.5 2017.11.30 Copyright (C) 1993-2017 Kx Systems
q)
If you just bring up a q session, you have to set the port and load any server-specific code manually:
c:\q\w32>q
KDB+ 3.5 2017.11.30 Copyright (C) 1993-2017 Kx Systems
q)\l server.q
q)\p 1234
Now it can be connected to from QStudio or QPad using the connection string `::1234 (localhost, port 1234).
Set QHOME to your q installation directory so q can find its support files.
You can set the QINIT variable to point to a q file that acts as a bootstrap for every q session you run on your box (e.g. helper functions).
You can add the commands to a .bat file to avoid any manual steps:
rem q installation directory
set QHOME=C:\q
rem script loaded automatically at the start of every q session
set QINIT=C:\code\server.q
rem make the q executable reachable from any directory
set PATH=%PATH%;%QHOME%;%QHOME%\w32
q -p 1234
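A quick way to confirm the server is reachable is to query it from a second q session on the same machine; a sketch (the handle name h is arbitrary):

q)h:hopen `::1234      / open a handle to localhost, port 1234
q)h"tables[]"          / run a query over the handle
q)hclose h             / close the handle when done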

intermittent "connection reset by peer" sql postgres

After a period of inactivity, my Go web service gets a net.OpError with message read tcp x.x.x.x:52086->x.x.x.x:24414: read: connection reset by peer when executing the first Postgres SQL query. After the error, subsequent requests work fine.
The Postgres database is hosted with compose.com, which has HAProxy in front of the Postgres db. My Go web app uses the standard sql package and sqlx.
I've tried running a ticker invoking db.Ping() every 15 minutes, but this hasn't fixed the issue.
Why is the Go standard sql lib not handling these connection drops?
Since no one has written it explicitly: the solution to this problem is to set db.SetConnMaxLifetime(time.Minute). I tried it and it works. Connection resets occur often on AWS, where there is an inactivity limit of 350 seconds; after that, a TCP RST is returned.
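A minimal Go sketch, with a placeholder DSN and the lib/pq driver standing in for whichever database/sql driver you use, that caps connection lifetime below the proxy's idle limit:

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // placeholder; any database/sql Postgres driver works
)

func main() {
	// DSN is a placeholder.
	db, err := sql.Open("postgres", "postgres://user:pass@host:24414/mydb")
	if err != nil {
		log.Fatal(err)
	}
	// Recycle pooled connections after a minute, well under the ~350 s
	// idle limit, so the pool never hands back a connection the proxy
	// has already reset.
	db.SetConnMaxLifetime(time.Minute)

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}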

PostgreSQL on remote database: No buffer space available (maximum connections reached)?

I'm trying to load a huge amount of data into PostgreSQL (PostGIS, to be precise):
about 100 scenes, each scene containing 12 bands of raster imagery, each image about 100 MB.
What I do (pseudocode):
for each scene in scenes (
    for each band in scene (
        open connection to the PostGIS db
        add band
    )
    SET PGPASSWORD=password
    psql -h 192.168.2.1 -p 5432 -U user -d spatial_db -f combine_bands.sql
)
It ran well until scene #46, then failed with the error No buffer space available (maximum connections reached).
I run the script on Windows 7; my remote server is on Ubuntu 12.04 LTS.
UPDATE: I connect to the remote server and run the SQL file there.
This message:
No buffer space available (maximum connections reached?)
comes from a Java exception, not the PostgreSQL server. A Java stack trace may be useful to get some context.
If the connection was rejected by PostgreSQL, the message would be:
FATAL: connection limit exceeded for non-superusers
Still, it may be that the program exceeds its maximum number of open sockets by not closing its connections to PostgreSQL. Your script should close each DB connection as soon as it's finished with it, or open just one and reuse it throughout the whole process.
Simultaneous connections from the same program are only needed when issuing queries in parallel, which doesn't seem to be the case here.
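As a sketch of the reuse approach (file names are placeholders): concatenate the per-band scripts and run them through a single psql invocation, i.e. one connection per scene instead of one per band:

SET PGPASSWORD=password
REM Concatenate the per-band scripts (placeholder names) into one file,
REM then run the whole file over a single connection.
type band_*.sql > all_bands.sql
psql -h 192.168.2.1 -p 5432 -U user -d spatial_db -f all_bands.sql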

Is there a timeout for idle PostgreSQL connections?

1 S postgres 5038 876 0 80 0 - 11962 sk_wai 09:57 ? 00:00:00 postgres: postgres my_app ::1(45035) idle
1 S postgres 9796 876 0 80 0 - 11964 sk_wai 11:01 ? 00:00:00 postgres: postgres my_app ::1(43084) idle
I see a lot of them. We are trying to fix our connection leak, but meanwhile we want to set a timeout for these idle connections, maybe 5 minutes at most.
It sounds like you have a connection leak in your application because it fails to close pooled connections. You aren't having issues just with <idle> in transaction sessions, but with too many connections overall.
Killing connections is not the right answer for that, but it's an OK-ish temporary workaround.
Rather than restarting PostgreSQL to boot all other connections off a PostgreSQL database, see: How do I detach all other users from a postgres database? and How to drop a PostgreSQL database if there are active connections to it? The latter shows a better query.
For setting timeouts, as @Doon suggested, see How to close idle connections in PostgreSQL automatically?, which advises you to use PgBouncer to proxy for PostgreSQL and manage idle connections. This is a very good idea if you have a buggy application that leaks connections anyway; I very strongly recommend configuring PgBouncer.
A TCP keepalive won't do the job here, because the app is still connected and alive; it just shouldn't be.
In PostgreSQL 9.2 and above, you can use the new state_change timestamp column and the state field of pg_stat_activity to implement an idle connection reaper. Have a cron job run something like this:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'regress'
  AND pid <> pg_backend_pid()
  AND state = 'idle'
  AND state_change < current_timestamp - INTERVAL '5' MINUTE;
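For example, a hypothetical crontab entry (database name and user are placeholders) that runs the reaper every five minutes:

*/5 * * * * psql -U postgres -d regress -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'regress' AND pid <> pg_backend_pid() AND state = 'idle' AND state_change < current_timestamp - INTERVAL '5' MINUTE;"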
In older versions you need to implement complicated schemes that keep track of when the connection went idle. Do not bother; just use PgBouncer.
In PostgreSQL 9.6, there's a new option idle_in_transaction_session_timeout which should accomplish what you describe. You can set it using the SET command, e.g.:
SET SESSION idle_in_transaction_session_timeout = '5min';
In PostgreSQL 9.1, you can kill the idle connections with the following query. It helped me ward off a situation that would otherwise have required restarting the database. This happens mostly with JDBC connections that are opened and not closed properly.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > '00:10:00';
If you are using PostgreSQL 9.6+, then in your postgresql.conf you can set
idle_in_transaction_session_timeout = 30000    # in milliseconds
There is a timeout on broken connections (i.e. due to network errors), which relies on the OS' TCP keepalive feature. By default on Linux, broken TCP connections are closed after ~2 hours (see sysctl net.ipv4.tcp_keepalive_time).
There is also a timeout on abandoned transactions, idle_in_transaction_session_timeout and on locks, lock_timeout. It is recommended to set these in postgresql.conf.
But there is no timeout for a properly established client connection. If a client wants to keep the connection open, then it should be able to do so indefinitely. If a client is leaking connections (like opening more and more connections and never closing), then fix the client. Do not try to abort properly established idle connections on the server side.
A possible workaround that enables a database session timeout without an external scheduled task is the pg_timeout extension, which I have developed.
Another option is to set tcp_keepalives_idle; see the documentation for more: https://www.postgresql.org/docs/10/runtime-config-connection.html
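A sketch of the relevant postgresql.conf settings; the values here are illustrative, not recommendations:

# Seconds of inactivity before the server sends a TCP keepalive probe.
tcp_keepalives_idle = 300
# Seconds between probes when no response is received.
tcp_keepalives_interval = 75
# Number of lost probes before the connection is considered dead.
tcp_keepalives_count = 5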