One of our Google Cloud SQL instances has been in limbo for several hours. First it rejected communication packets; I then tried to restart it, and it has been stuck in that state for a few hours now. There is no way to create a replica or bring up another instance with the same data, so we are at a standstill.
Inspecting the logs in the developer console reveals the following entries:
I 2015-12-16 12:12:18.976 2015-12-16 11:12:18 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
I 2015-12-16 12:12:18.976 2015-12-16 11:12:18 0 [Note] (mysqld 5.6.26-log) starting as process 847865 ...
I 2015-12-16 12:12:18.979 2015-12-16 11:12:18 847865 [ERROR] COLLATION 'utf8_general_ci' is not valid for CHARACTER SET 'utf8mb4'
I 2015-12-16 12:12:18.979 2015-12-16 11:12:18 847865 [ERROR] Aborting
Even worse, since it is a Cloud SQL instance and it is stuck in limbo, there is no way to change, influence, or fix any setting.
Help is highly appreciated.
Update: The instance is back online.
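For reference, the error itself just means the configured default collation does not belong to the configured character set: utf8_general_ci is a collation of the utf8 character set, while utf8mb4 needs a utf8mb4_* collation such as utf8mb4_general_ci. On an instance you can actually reach, you can list the valid pairings yourself; the following is only a minimal JDBC sketch with a placeholder URL and credentials, not something that can be run against an instance that is stuck in limbo.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CollationCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own instance.
        String url = "jdbc:mysql://127.0.0.1:3306/mysql";
        try (Connection conn = DriverManager.getConnection(url, "root", "secret");
             Statement stmt = conn.createStatement();
             // Lists the collations that may legally be paired with utf8mb4,
             // e.g. utf8mb4_general_ci; utf8_general_ci is not among them.
             ResultSet rs = stmt.executeQuery("SHOW COLLATION WHERE Charset = 'utf8mb4'")) {
            while (rs.next()) {
                System.out.println(rs.getString("Collation"));
            }
        }
    }
}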
I have some jobs running on SQL Server 2000 that call stored procedures or run queries against remote SQL Servers (different editions).
Each job calls a DTS package, and it is the DTS package that makes the remote connection and executes the stored procedure or fetches the query results from the remote server.
This has been working without errors for years. I don't know why, but over the last month I've been getting random errors on these kinds of jobs. I've read some other posts and it seems to be related to a security issue, but I repeat: most of the time the jobs work, and only some runs fail with the following error.
Executed as user: SERVER\user.
DTSRun: Loading...
DTSRun: Executing...
DTSRun OnStart: DTSStep_DTSDynamicPropertiesTask_2
DTSRun OnError: DTSStep_DTSDynamicPropertiesTask_2, Error = -2147467259 (80004005)
Error string: Login timeout expired
Error source: Microsoft OLE DB Provider for SQL Server
Help file:
Help context: 0
Error Detail Records:
Error: -2147467259 (80004005); Provider Error: 0 (0)
Error string: Login timeout expired
Error source: Microsoft OLE DB Provider for SQL Server
Help file:
Help context: 0
DTSRun OnFinish: DTSStep_DTSDynamicPropertiesTask_2
DTSRun: Package execution complete.
Process Exit Code 1. The step failed.
I really don't know what to check. After rebooting the server the problems are still there. Any help would be appreciated.
EDIT 2019-02-14 16:15 -------------------------------------------------------------------------------------------
One of the solutions I found was to change the remote login timeout property from the default of 20 seconds to 30 seconds, or to 0 (zero means no timeout), by executing the following code:
sp_configure 'remote login timeout', 30 --Or 0 seconds for infinite
go
reconfigure with override
go
From: https://support.microsoft.com/es-es/help/314530/error-message-when-you-execute-a-linked-server-query-in-sql-server-tim
I've tried this solution, changing it to 30 seconds, but got the same result. Of course I didn't set it to 0, for obvious reasons; the timeouts are there for a reason. I also tried 300 seconds (5 minutes to make a login!) and it's still the same.
EDIT 2019-02-25 11:25 -------------------------------------------------------------------------------------------
Very similar to my problem, still not solved...
https://www.sqlservercentral.com/Forums/Topic1727739-391-1.aspx
For the moment I have a temporary workaround, which is to increase the Connect Timeout on the Connection object.
It was blank (probably using its default value).
Since I changed this property (Connection Object > Advanced... > Connect Timeout) to 300, I'm no longer having problems with these DTS packages. I left two DTS packages without the change to confirm that the problem was still occurring, and those are the only ones that still fail. The changed ones are working fine now.
I have been working with the HFC SDK for Node.js and it used to work, but since last night I have been having problems.
When running helloblockchain.js, it only works a few times; most of the time I get this error when it tries to enroll a new user:
E0113 11:56:05.983919636 5288 handshake.c:128] Security handshake failed: {"created":"#1484304965.983872199","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484304965.983866102","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Error: Failed to register and enroll JohnDoe: Error
Other times the enrollment works and the failure appears while deploying the chaincode:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:14:27.341527043 5455 handshake.c:128] Security handshake failed: {"created":"#1484306067.341430168","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306067.341421859","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
Failed to deploy chaincode: request={"fcn":"init","args":["a","100","b","200"],"chaincodePath":"chaincode","certificatePath":"/certs/peer/cert.pem"}, error={"error":{"code":14,"metadata":{"_internal_repr":{}}},"msg":"Error"}
Or:
Enrolled and registered JohnDoe successfully
Deploying chaincode ...
E0113 12:15:27.448867739 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.448692244","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.448668047","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
events.js:160
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/usr/lib/node_modules/hfc/node_modules/grpc/src/node/src/client.js:217:12)
E0113 12:15:27.563487641 5483 handshake.c:128] Security handshake failed: {"created":"#1484306127.563437122","description":"Handshake read failed","file":"../src/core/lib/security/transport/handshake.c","file_line":237,"referenced_errors":[{"created":"#1484306127.563429661","description":"FD shutdown","file":"../src/core/lib/iomgr/ev_epoll_linux.c","file_line":948}]}
This code worked yesterday, so I don't know what could be happening.
Does anybody know how I can fix it?
Thanks,
Javier.
These types of intermittent issues are usually related to GRPC. An initial suggestion is to ensure that you are using at least GRPC version 1.0.0.
If you are using a Mac, then the maximum number of open file descriptors should be checked (using ulimit -n). Sometimes this is initially set to a low value such as 256, so increasing the value could help.
There are a couple of GRPC issues with similar symptoms.
https://github.com/grpc/grpc/issues/8732
https://github.com/grpc/grpc/issues/8839
https://github.com/grpc/grpc/issues/8382
There is a grpc.initial_reconnect_backoff_ms property that is mentioned in some of these issues. Increasing the value past the 1000 ms level might help reduce the frequency of issues. Below are instructions for how the helloblockchain.js file can be modified to set this property to a higher value.
Open the helloblockchain.js file in the Hyperledger Fabric Client example and find the enrollAndRegisterUsers function.
Add "grpc.initial_reconnect_backoff_ms": 5000 to the setMemberServicesUrl call.
chain.setMemberServicesUrl(ca_url, {
    pem: cert,
    "grpc.initial_reconnect_backoff_ms": 5000
});
Add "grpc.initial_reconnect_backoff_ms": 5000 to the addPeer call.
chain.addPeer("grpcs://" + peers[i].discovery_host + ":" + peers[i].discovery_port, {
    pem: cert,
    "grpc.initial_reconnect_backoff_ms": 5000
});
Note that setting the grpc.initial_reconnect_backoff_ms property may reduce the frequency of issues, but it will not necessarily eliminate all issues.
The connection to the eventhub that is made in the helloblockchain.js file can also be a factor. There is an earlier version of the Hyperledger Fabric Client that does not utilize the eventhub. This earlier version could be tried to determine if this makes a difference. After running git clone https://github.com/IBM-Blockchain/SDK-Demo.git, run git checkout b7d5195 to use this prior level. Before running node helloblockchain.js from a Node.js command window, the git status command can be used to check the code level that is being used.
I'm experiencing intermittent connectivity errors from a Spring Boot application communicating with a D1 Google Cloud SQL server, using the configuration settings described here: HikariCP MySQL settings.
I was wondering if anyone has encountered this before.
I've read the FAQ posted here (Hikari FAQ) and I'm wondering if my default idleTimeout and maxLifetime (30 min) settings might be at fault; wait_timeout and interactive_timeout on the server are both set to the default of 28800 s (8 hours).
The FAQ says these two settings should be about a minute less than the server settings, but if I'm losing connections after 30 minutes, I can't see how raising maxLifetime to 7 hrs 59 mins is going to improve the situation.
Does anyone have any recommendations?
Redacted stack trace(s):
I get these from time to time:
org.springframework.security.authentication.InternalAuthenticationServiceException: Could not get JDBC Connection; nested exception is java.sql.SQLException: Timeout after 30018ms of waiting for a connection.
at org.springframework.security.authentication.dao.DaoAuthenticationProvider.retrieveUser(DaoAuthenticationProvider.java:110)
at org.springframework.security.authentication.dao.AbstractUserDetailsAuthenticationProvider.authenticate(AbstractUserDetailsAuthenticationProvider.java:132)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:156)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:177)
...
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: Timeout after 30023ms of waiting for a connection.
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
....
Caused by: java.sql.SQLException: Timeout after 30023ms of waiting for a connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:208)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:108)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77)
... 59 common frames omitted
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:630)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:695)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:727)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:737)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:787)
Hibernate search:
2015-02-17 10:34:17.090 INFO 1 --- [ entityloader-2] o.h.s.i.SimpleIndexingProgressMonitor : HSEARCH000030: 31050 documents indexed in 1147865 ms
2015-02-17 10:34:17.090 INFO 1 --- [ entityloader-2] o.h.s.i.SimpleIndexingProgressMonitor : HSEARCH000031: Indexing speed: 27.050219 documents/second; progress: 99.89%
2015-02-17 10:41:59.917 WARN 1 --- [ntifierloader-1] com.zaxxer.hikari.proxy.ConnectionProxy : Connection com.mysql.jdbc.JDBC4Connection#372f2018 (HikariPool-0) marked as broken because of SQLSTATE(08S01), ErrorCode(0).
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,611,087 milliseconds ago. The last packet sent successfully to the server was 927,899 milliseconds ago.
The indexing isn't particularly quick at the moment, I think because I'm not using projections. The process takes about 30 minutes to run.
Thanks
It could be a couple of things here. First, the network infrastructure (firewalls, load balancers, etc.) between the application tier and the database tier can impose its own connection timeouts, regardless of MySQL settings.
The indexing failure indicates that the connection was out of the pool for ~27 minutes with no SQL activity when that failure occurred.
Second, specifically regarding the "Could not get JDBC Connection" error, you may be running into Cloud SQL connection limits.
I recommend three things. One, make sure you are on the latest HikariCP (2.3.2) and the latest MySQL Connector/J driver (5.1.34). Two, enable DEBUG-level logging for the com.zaxxer.hikari package; HikariCP debug logging is not "chatty", but it will log pool statistics every 30 seconds (and sometimes more detail in failure conditions). Lastly, try setting maximumPoolSize to something smaller (unless it is already at the default), and setting maxLifetime to 15 or 20 minutes (1,200,000 ms).
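As a rough illustration of those last two points, here is a minimal HikariCP configuration sketch; the JDBC URL, credentials, and pool size below are placeholders rather than values from the original setup, and DEBUG logging for com.zaxxer.hikari is enabled in the logging framework's configuration rather than in code.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details -- substitute your Cloud SQL instance.
        config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/mydb");
        config.setUsername("appuser");
        config.setPassword("secret");
        // Keep the pool modest so Cloud SQL connection limits are not exhausted.
        config.setMaximumPoolSize(10);
        // Retire connections after 20 minutes (1,200,000 ms), well inside the
        // 8-hour server wait_timeout and any intermediate network timeouts.
        config.setMaxLifetime(1200000);
        return new HikariDataSource(config);
    }
}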
If the error occurs again, post updated logs containing the HikariCP debug output around the time of failure. Also, feel free to open a tracking issue on GitHub, as larger logs etc. are easier to handle there.
I was following the processes in this very helpful post:
How do I run a Play Framework 2.0 application as a Windows service?
when I ran into trouble in Step 9. When I execute runConsole.bat, the service cycles between the running and restarting states. The full log is here:
wrapper.log
but what jumps out at me in the log is the following:
INFO|7268/0|play.core.server.NettyServer|13-12-28 13:07:28|Oops, cannot start the server.
INFO|7268/0|play.core.server.NettyServer|13-12-28 13:07:28|Configuration error: Configuration error[Cannot connect to database [default]]
...
INFO|7268/0|play.core.server.NettyServer|13-12-28 13:07:28|Caused by: org.h2.jdbc.JdbcSQLException: Database may be already in use: "Locked by another process". Possible solutions: close all other connection(s); use the server mode [90020-168]
...
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:28|restart process due to default exit code rule
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:28|restart internal RUNNING
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:28|stopping process with pid/timeout 7268 45000
INFO|wrapper|play.core.server.NettyServer|13-12-28 13:07:30|process exit code: -1
...
INFO|7812/1|play.core.server.NettyServer|13-12-28 13:07:45|[error] c.j.b.h.AbstractConnectionHook - Failed to obtain initial connection Sleeping for 0ms and trying again. Attempts left: 0. Exception: null
INFO|7812/1|play.core.server.NettyServer|13-12-28 13:07:45|Oops, cannot start the server.
INFO|7812/1|play.core.server.NettyServer|13-12-28 13:07:45|Configuration error: Configuration error[Cannot connect to database [default]]
repeats several times...
Halfway through writing this post, I realized that before step 9 you should terminate the start.bat script that you started in step 6. Once I did that, runConsole.bat ran normally without any of the errors listed above.
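In other words, the lock error came from two processes (one started by start.bat, one started by runConsole.bat) opening the same file-based H2 database. If two processes really did need to share the database, H2's automatic mixed mode is one option; the sketch below is only illustrative, and the database path is hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;

public class H2SharedAccess {
    public static void main(String[] args) throws Exception {
        // AUTO_SERVER=TRUE makes the first process that opens the file start an
        // embedded server, so later processes attach to it instead of failing
        // with "Database may be already in use". The path is hypothetical.
        String url = "jdbc:h2:./db/myapp;AUTO_SERVER=TRUE";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}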
I am currently trying to replace Hypersonic with MS-SQL 2008 R2 in JBoss AS 5.1.0GA.
I have followed the instructions in the JBoss Server Configuration Guide; however, the server fails to start with this error:
2013-09-26 17:06:04,479 WARN [org.jboss.resource.adapter.jms.inflow.JmsActivation] (WorkManager(2)-3) Failure in jms activation org.jboss.resource.adapter.jms.inflow.JmsActivationSpec#8bb1eb(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter#c54851 destination=queue/iam/im/jms/queue/wpUtilQueue destinationType=javax.jms.Queue tx=true durable=false reconnect=10 provider=DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=30000 useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ DLQUser=null DLQMaxResent=10)
javax.naming.NameNotFoundException: DLQ not bound
(I left out the stack trace for brevity; it isn't important.)
I have checked, and DLQ is defined in destinations-service.xml
I'm not sure how to proceed from here; every answer I can find on Google suggests that defining the queue in destinations-service.xml solved the issue for almost everyone.
Any help would be appreciated.
It turns out that the instructions in the Configuration Guide aren't 100% complete. The issue was that a ChannelFactory was referenced in the mssql-persistence-service.xml; however, this environment is not clustered, and so there were no ChannelFactory objects defined.
Removing the reference to the ChannelFactory was sufficient to resolve the issue.