I need some help! Some time ago we migrated an Oracle database to DB2 on Cloud.
My application uses a WildFly server, and with the Oracle database it previously needed only about 5 minutes to start, which was a normal startup time.
However, since we started using DB2 on Cloud, startup takes about 14 minutes.
Using IBM Data Server Manager, I tried to monitor what is happening in the database during the application server connection, and I can see a process called "Compilation Process" which is responsible for most of the time needed to complete the connection. I do not understand whether that is normal, or whether there is any way to reduce the time spent in that process. Thank you very much!
I am running long queries on managed PostgreSQL server instances. Since I have no access to the underlying OS, I need to use a client (DBeaver, pgAdmin, etc.) on my local laptop to run the queries. I'd like to prevent my queries from stopping when I have an internet outage, and also avoid having to leave my laptop on for hours just to maintain a connection.
With MS SQL Server, I used SQL Server Agent to run the queries directly on the server, so I could shut down my laptop and the queries would carry on without issue.
Is there a way to trigger a query to run independently from the client admin software that triggered it?
Thanks
No, that is not possible, unless you have a third-party extension like pg_timetable or pg_cron installed.
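If pg_cron does happen to be available on your managed instance, the schedule lives on the server, so the job keeps running after the client disconnects. A minimal sketch of registering a job over JDBC; the host, credentials, job name, cron expression, and the scheduled SQL are made-up placeholders, and the three-argument cron.schedule form assumes pg_cron 1.3 or later:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ScheduleServerSideJob {
    public static void main(String[] args) throws Exception {
        // Placeholder host and credentials for the managed instance.
        String url = "jdbc:postgresql://my-managed-host:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             PreparedStatement ps = conn.prepareStatement("SELECT cron.schedule(?, ?, ?)")) {
            ps.setString(1, "nightly_summary");            // job name
            ps.setString(2, "0 3 * * *");                  // run at 03:00 every day
            ps.setString(3, "INSERT INTO report_summary "  // hypothetical long-running query
                    + "SELECT now(), count(*) FROM big_table");
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                System.out.println("Scheduled job id: " + rs.getLong(1));
            }
        }
    }
}

The job then runs entirely on the server, and cron.unschedule removes it again. pg_timetable works along similar lines but is configured through its own tables and a separate scheduler process.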
Looking for any thoughts on what is going on here:
Environment:
Java 11 GCP Function that copies data into a table
Postgres 11 Cloud SQL using the JDBC driver (org.postgresql:postgresql:42.2.5)
No changes to any code or configuration in 2 weeks.
I'm connecting to the private SQL IP address, so the URL looks like jdbc:postgresql://10.90.123.4/...
I'm not requiring an SSL cert.
There is Serverless VPC Access set up between the Function and SQL.
This is happening across two different GCP projects and SQL servers.
Prior to this Saturday (2/22), everything was working fine. We are using Postgres' CopyManager to load data into a table: copyManager.copyIn(sql, this.reader);
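For context, the copy path is essentially the stock CopyManager API from the pgjdbc driver; a minimal self-contained sketch, with placeholder table, columns, credentials, and data:

import java.io.Reader;
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class CopyInSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; the real function connects to the private IP
        // over Serverless VPC Access.
        String url = "jdbc:postgresql://10.90.123.4:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            CopyManager copyManager = new CopyManager((BaseConnection) conn);
            // In the real function the Reader wraps the incoming file rather than a literal.
            Reader reader = new StringReader("1,alpha\n2,beta\n");
            long rows = copyManager.copyIn(
                    "COPY my_table (id, label) FROM STDIN WITH (FORMAT csv)", reader);
            System.out.println("Copied rows: " + rows);
        }
    }
}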
After 2/22, this started failing with "SSL error: DATA_LENGTH_TOO_LONG" as seen in the SQL server log. These failures are 100% consistent and still happening. I can see that SQL was restarted by Google a few hours before the issue started, and I'm wondering if this is somehow related to whatever maintenance happened (a SQL version upgrade, perhaps?). I'm unclear what version we had before Saturday, but it's now 11.6.
Interestingly enough, I can avoid the error if the file loaded into the table is under a certain size:
14,052 bytes (16 KB on disk): This fails every time.
14,051 bytes (16 KB on disk): This works every time.
I'd appreciate it if someone from Google could confirm what took place during the maintenance window that might be causing this error. We are currently blocked by this, as we load much larger datasets into the database than ~14,000 bytes.
FYI: this was caused by a JDK issue with TLS v1.3, addressed in JDK 11.0.5. Google will likely upgrade the JDK used for the Cloud Functions JVMs from 11.0.4 to something newer next week. See https://bugs.openjdk.java.net/browse/JDK-8221253
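Until the runtime JDK is upgraded, one workaround commonly reported for this class of TLS 1.3 bug is to pin the client to TLS 1.2 so the affected code path is never exercised. A sketch, assuming the property can be set before the driver opens its first SSL connection (not verified against the Cloud Functions runtime):

public class Tls12Workaround {
    public static void main(String[] args) {
        // Restrict JSSE clients to TLS 1.2; this must run before the first TLS
        // handshake, i.e. before the JDBC connection to Cloud SQL is opened.
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2");

        // ... open the JDBC connection and run CopyManager as usual ...
    }
}

Where the runtime allows custom JVM options, the same setting can be passed as a flag instead: -Djdk.tls.client.protocols=TLSv1.2.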
I am new to the forum and also new to PostgreSQL.
Normally I use MySQL for my projects, but I've decided to start migrating towards PostgreSQL for some good reasons I found in this database.
Expanding on the problem:
I need to analyze data via some mathematical formulas but in order to do this I need to get the data from the software via the API.
The software, the API, and PostgreSQL v11.4 (which I installed on a desktop) are all running on Windows. So far I've managed to take the data via the API and import it into PostgreSQL.
My problem is how to transfer this data from the local PostgreSQL (on the PC) to a web PostgreSQL (installed on a web server) which is running Linux.
For example, if I take the data every five minutes from the software via the API and put it into the local PostgreSQL database, how can I transfer this data (automatically if possible) to the database on the web server running Linux? I rejected a data dump because importing the whole database every time is not viable.
What I would like is to import only the five-minute data that gradually adds to the previous data.
I also rejected the idea of a master-slave architecture, because I don't know the total amount of data: the web server has almost 2 TB of disk space, while the local PC has only one hard disk, which serves only to collect the data and then send it to the web server for analysis.
Could someone please help by giving some good advice regarding how to achieve this objective?
Thanks to all for any answers.
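One way to do this without setting up replication, sketched here under the assumption that every row carries a timestamp usable as a high-water mark (the table and column names below are made up for illustration), is a small job that runs every five minutes, asks the web server for the newest row it already has, and pushes only the rows that are newer:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class IncrementalPush {
    public static void main(String[] args) throws Exception {
        // Hypothetical table measurements(recorded_at timestamp, value double precision);
        // hosts and credentials are placeholders.
        try (Connection local = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/sourcedb", "user", "password");
             Connection remote = DriverManager.getConnection(
                     "jdbc:postgresql://web-server-host:5432/targetdb", "user", "password")) {

            // High-water mark: the newest row the web server already has.
            Timestamp watermark;
            try (Statement st = remote.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT coalesce(max(recorded_at), timestamp 'epoch') FROM measurements")) {
                rs.next();
                watermark = rs.getTimestamp(1);
            }

            // Copy only the rows newer than the watermark.
            try (PreparedStatement select = local.prepareStatement(
                         "SELECT recorded_at, value FROM measurements "
                                 + "WHERE recorded_at > ? ORDER BY recorded_at");
                 PreparedStatement insert = remote.prepareStatement(
                         "INSERT INTO measurements (recorded_at, value) VALUES (?, ?)")) {
                select.setTimestamp(1, watermark);
                try (ResultSet rs = select.executeQuery()) {
                    while (rs.next()) {
                        insert.setTimestamp(1, rs.getTimestamp(1));
                        insert.setDouble(2, rs.getDouble(2));
                        insert.addBatch();
                    }
                }
                insert.executeBatch();
            }
        }
    }
}

Scheduled every five minutes with the Windows Task Scheduler, this moves only the new rows. If the tables get more involved, PostgreSQL 11's built-in logical replication (CREATE PUBLICATION on the PC, CREATE SUBSCRIPTION on the web server) can do the same per-table incremental copying without custom code, provided the web server can connect back to the PC.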
Brief Background:
We have a cloud-based Warehouse Management System that uses Glassfish to serve the Java interface. The Warehouse Management System consists of a Dashboard and a mobile application, both of which talk constantly with the Glassfish server (through a web browser).
Issue:
Recently our PostgreSQL database server HDD failed. After restoring from a backup and moving the database to an Amazon Web Service Server, idle connections seem to be dropping out. This causes the entire Warehouse Management System to fail. Restarting the Glassfish server seems to fix the issue until the idle connection causes it to fail again.
It happens around 3-4 times per day, after approximately 20 minutes of idle time, i.e. during our customers' lunch breaks, after hours, etc.
Question:
Is there a setting that I'm missing in the postgresql.conf file? What else could be causing this?
Attachments:
I've attached a screenshot containing the output of running 'select * from pg_stat_activity;' and also the postgresql.conf file.
Log:
postgresql-8.4-main.log shows this occasionally, although it doesn't seem to coincide with the moments when it cuts out.
2015-10-19 07:51:41 NZDT [9971-1] postgres#customerName LOG: unexpected EOF on client connection
glassfish server.log is riddled with these lines:
[#|2015-10-19T07:46:49.715+1300|SEVERE|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=25;_ThreadName=Thread-2;|WebModule[/pns-CustomerName]Received InterruptedException on request thread
[#|2015-10-20T09:34:42.351+1300|WARNING|glassfish3.1.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=17;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(2).|
[#|2015-10-20T07:33:55.414+1300|WARNING|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=14;_ThreadName=Thread-2;|Response Error during finishResponse java.lang.NullPointerException
Thanks in advance
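In case this turns out to be an idle TCP timeout somewhere between Glassfish and the AWS-hosted database rather than a postgresql.conf setting, one mitigation that is often suggested is to enable TCP keep-alives on the client side. The PostgreSQL JDBC driver exposes this as the tcpKeepAlive connection property; a minimal sketch with placeholder host, database, and credentials (in Glassfish the same property can be added to the JDBC connection pool instead of code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class KeepAliveConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "postgres");    // placeholder credentials
        props.setProperty("password", "secret");
        // Ask the driver to enable TCP keep-alive probes so an idle connection
        // keeps producing traffic and is less likely to be silently dropped
        // by a firewall or NAT along the way.
        props.setProperty("tcpKeepAlive", "true");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://aws-db-host:5432/warehouse", props)) {
            System.out.println("Connection valid: " + conn.isValid(5));
        }
    }
}

On the server side, the tcp_keepalives_idle, tcp_keepalives_interval, and tcp_keepalives_count settings in postgresql.conf cover the same ground for connections the server considers idle.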
We are having an issue with SQL Server 2008 R2 (64-bit) not responding to stored procedure calls. About every 2 weeks or so, the database stops responding to stored procedures called from an ADO connection/command set (.NET Framework 4.0). We have been working on this for several months now, with little improvement.
System changes:
We upgraded an existing vendor product from SQL Server 2005 to SQL Server 2008 R2 via their upgrade method. The database instance moved from a 32-bit Windows 2003 Server to a 64-bit Windows 2008 Server.
The pattern of failure:
The application is run throughout the day, executed by different users via Citrix without issue. Every few weeks, the application stops responding, around the same time frame each time. Once the database stops responding to the hosted instance of the application, any execution of the procedure from the application hangs (whether installed on the Citrix server, installed on various physical systems, or run under the debugger in Visual Studio 2010). After an hour of checking logs, server status, and SQL monitoring tools, and tracing the repeated execution attempts, the server decides to respond to the application without any intervention.
The strange thing is, when the server is not responding to ADO.NET calls, we can execute the stored procedure from SQL Server Management Studio and receive results in 1 to 2 seconds. We are using the same login to access SQL Server Management Studio, and executing the stored procedure with the same parameters.
Looking at the connection string passed to the ADO connection, I don’t see anything unusual:
connectionString="Data Source=myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
Tried so far:
Added an extra 2 GB of RAM to the OS: no change.
Added an extra tempdb file and expanded the tempdb log file from 1 GB to 5 GB: reduced the issue from weekly to every 2nd or 3rd week.
Installed SQL Server 2008 R2 SP3: no change.
The black cloud:
To me, the repeating time pattern of failure implies an issue at the database host (server or resources), but the DBAs do not see a load or resource issue. If it were purely a host issue, why does it respond to SQL Server Management Studio calls and not to ADO.NET calls?
The last occurrence lasted over two hours and was resolved after rebooting the database server. Not a great fallback, but desperate times and all…
Updating the ADO.NET connection to use named pipes has resolved the issue for our application. Prefixing the server name in the Data Source with "np:" makes the connection use named pipes.
connectionString="Data Source=np:myserver\myinstance;Initial Catalog=databaseName;Persist Security Info=True;User ID=xxxxx;Password=yyyyy;Connect Timeout=45"
The issue returned on 5/14. This query timeout posting gave us hints on how to force SQL Server Management Studio to behave like the ADO.NET connection, and allowed us to recognize that this is a "parameter sniffing" issue. We have applied changes to disable parameter sniffing within the stored procedure.