Google SQL query (MySQL) - google-cloud-sql

Sorry, I'm a newbie in SQL. I have the following table in Google Cloud SQL (MySQL).
How can I get the time difference between adjacent rows, such as 164 and 165?
I want to find the periods (downtime) when no sensor was working, with the condition that the downtime is longer than 20 minutes.
autoID | Datetime | Number_of_sensor
163 | 2020-04-06 13:46:42 | C3
164 | 2020-04-06 13:46:45 | C4
165 | 2020-04-06 15:10:48 | C3
166 | 2020-04-06 15:46:48 | C4
I tried a few things but couldn't get the result.

You would normally use window functions, which are only available in MySQL as of version 8.0.
Google Cloud SQL for MySQL only goes up to version 5.7 for now.
However, if you use the PostgreSQL flavor of Cloud SQL, you will be able to run this as a window-function query.
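For reference, a minimal sketch of the window-function approach (assuming the table is named test, as in the answer below); LAG() pulls in the previous row's timestamp so each row can be compared with its predecessor. This runs on MySQL 8.0+; on PostgreSQL you would subtract the timestamps instead, since TIMESTAMPDIFF() is MySQL-specific:
SELECT autoID, Datetime, downtime
FROM (
    SELECT autoID,
           Datetime,
           -- minutes since the previous row; NULL for the first row
           TIMESTAMPDIFF(MINUTE,
                         LAG(Datetime) OVER (ORDER BY autoID),
                         Datetime) AS downtime
    FROM test
) t
WHERE downtime > 20;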

You can use the native TIMESTAMPDIFF() function, comparing each row with the next one via a self-join offset by one autoID. This works on MySQL 5.7, which is what Google Cloud SQL currently runs.
SELECT a.autoID, a.Datetime,
       TIMESTAMPDIFF(MINUTE, a.Datetime, b.Datetime) AS downtime
FROM test a
JOIN test b ON b.autoID = a.autoID + 1
HAVING downtime > 20;
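Note that the inner join naturally drops the last row, which has no successor to compare against. MySQL also allows referencing the downtime alias directly in HAVING, which is what lets this filter work without a subquery.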

Related

Why does a SERIAL id get discontinuous values after failover in RDS Aurora PostgreSQL?

I'm testing failover with RDS Aurora PostgreSQL.
First, I created an RDS Aurora PostgreSQL cluster and connected to the writer instance to create a users table.
$ CREATE TABLE users (
    id SERIAL PRIMARY KEY NOT NULL,
    name varchar(10) NOT NULL,
    createdAt TIMESTAMP DEFAULT Now() );
And I added one row and checked the table.
$ INSERT INTO users(name) VALUES ('test');
$ SELECT * FROM users;
+----+------+----------------------------+
| id | name | createdAt                  |
+----+------+----------------------------+
|  1 | test | 2022-02-02 23:09:57.047981 |
+----+------+----------------------------+
After failover of RDS Aurora Cluster, I added another row and checked the table.
$ INSERT INTO users(name) VALUES ('temp');
$ SELECT * FROM users;
+----+------+----------------------------+
| id | name | createdAt                  |
+----+------+----------------------------+
|  1 | test | 2022-02-01 11:09:57.047981 |
| 32 | temp | 2022-02-01 11:25:57.047981 |
+----+------+----------------------------+
After failover, the id value that should be 2 became 32.
Why is this happening?
Is there any way to solve this problem?
That is to be expected. Sequence modifications are not WAL-logged on every nextval() call, because that could become a performance bottleneck. Instead, a WAL record is written every 32 calls, pre-logging the next 32 values. That means the sequence can skip some values after a crash or a failover to the standby.
You may want to read my ruminations about gaps in sequences.
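As a side note, gaps can appear even without a failover, because nextval() is never rolled back. A minimal illustration, using the users table from the question:
$ BEGIN;
$ INSERT INTO users(name) VALUES ('rolled-back');
$ ROLLBACK;
$ INSERT INTO users(name) VALUES ('kept');
-- the 'kept' row skips an id: the rolled-back INSERT still consumed one from the sequence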

How to identify truncated columns in SQL Server 2016

I have been experimenting with the code below, and it doesn't seem to work.
DBCC TRACEON (460);
CREATE TABLE #aa (name varchar(5));
INSERT INTO #aa
SELECT '1234567890';
Actual error:
String or binary data would be truncated
Expected error:
String or binary data would be truncated in table #aa, column name. Truncated value: '1234567890'
According to https://www.procuresql.com/blog/2018/09/26/string-or-binary-data-get-truncated/, SQL Server 2019 will be able to identify the columns that have been truncated, and the feature can be enabled in SQL Server 2016 using trace flag 460.
In terms of roles, I have "public", "processadmin", and "sysadmin".
Looking at sys.messages, I think my instance has the patch for this feature, based on message_id = 2628:
+------------+------------------------------------------------------------------------------------------------------+
| message_id | text |
+------------+------------------------------------------------------------------------------------------------------+
| 2628 | String or binary data would be truncated in table '%.*ls', column '%.*ls'. Truncated value: '%.*ls'. |
| 8152 | String or binary data would be truncated. |
+------------+------------------------------------------------------------------------------------------------------+
Details:
Microsoft SQL Server 2016 Standard (64-bit)
Version : 13.0.5149.0
Is Clustered : False
Is HADR Enabled : False
Is XTP Supported : True
The new error message hasn't yet been back-ported to SQL Server 2016. From this post (emphasis mine):
This new message is also backported ... (and in an upcoming SQL Server 2016 SP2 CU) ...
This CU has not been delivered yet. The most recent, CU5 (13.0.5264.1), was released in January and did not include it.
And just a small correction, you need to opt in to this behavior (via the trace flag) even in the SQL Server 2019 CTPs. The reason is that a different error number is produced, and this could break existing applications and unit tests that behave based on the error number raised. This will be documented as a breaking change when SQL Server 2019 is released, but I'm sure it will still bite some people when they upgrade.
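For completeness, once you are on a build that includes the new message, opting in is just a matter of the trace flag; a minimal sketch (the -1 variant enables it server-wide until restart):
DBCC TRACEON (460);     -- current session only
DBCC TRACEON (460, -1); -- all sessions
-- re-run the failing INSERT to get the detailed message 2628 instead of 8152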

`ds.` prefix on Hive tables when accessed via JDBC

I have a HiveServer2 running with JDBC connections and it works fine from Impala, Beeline and Spark clients. The metastore is running in a PostgreSQL server.
For example, the columns in Hive are
select * from testdb.test_table limit 3;
dt | val_test | val_test_b | test_c
12 | 0.2      | B          | C
13 | 1.2      | B          | A
14 | 9.4      | T          | C
When I try to access the same tables from Zoomdata, all the table columns get a ds. prefix that is not in the original column names:
ds.dt | ds.val_test | ds.val_test_b | ds.test_c
12    | 0.2         | B             | C
13    | 1.2         | B             | A
14    | 9.4         | T             | C
and sometimes, when accessing the data, the Zoomdata JDBC client reports errors such as:
Error: cannot find `ds.val_test` column.
Error: cannot find `ds_val_test` column.
What could be causing this?

MongoDB vs Couchbase performance on single node

I am doing a POC on document data stores, and I have selected MongoDB and Couchbase for evaluation.
The environment details are as follows:
Machine: CentOS 6.7, 2-core CPU, CPU MHz: 2494.078, RAM: 7 GB (6 GB free)
MongoDB db version v3.2.0 with default configuration
CouchBase Version: 4.1.0-5005 Enterprise Edition (Cluster RAM: 3GB, Bucket RAM: 1GB)
Document Size : 326 B
The results of the POC are as follows:
+--------------+----------------------------------+--------------+--------------+--------------+--------------+--------------+-------------+-------------+
| Operation    | insert (10 batches of 100K each) | select query | select query | select query | select query | select query | range query | range query |
+--------------+----------------------------------+--------------+--------------+--------------+--------------+--------------+-------------+-------------+
| Record Count | 1000K                            | 0            | 100          | 33k          | 140k         | 334k         | 114k        | 460k        |
| Mongo        | 99 sec                           | 568ms        | 792ms        | 1500ms       | 3800ms       | 7800ms       | -           | 15387ms     |
| CouchBase    | 370 sec                          | 8ms          | 250ms        | 6700ms       | 28000ms      | 69000ms      | 28644ms     | -           |
+--------------+----------------------------------+--------------+--------------+--------------+--------------+--------------+-------------+-------------+
Client: I used the Java SDK and Spring Data.
There is a big difference in the performance of Couchbase and MongoDB on a single node. Is there any configuration parameter to increase the performance of Couchbase?
It appears the current version of Spring Data MongoDB uses WriteConcern.UNACKNOWLEDGED - it's fire and forget. You should enable WriteResultChecking.EXCEPTION or use WriteConcern.ACKNOWLEDGED.
http://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mongo.mongo-3.write-concern
What durability options are you using? Are you running out of bucket memory? 2 CPUs is on the low side for Couchbase's minimum requirements; if compaction is running at the same time as your test, I would expect that to make a difference. This can be disabled in the settings.
Couchbase on a single node is not something I would ever run in production (minimum 3 nodes), so if you have the time, increasing your node count might give you some more meaningful figures.
If you have 6GB of memory available, you might want to increase the amount of memory allocated to your bucket. In MongoDB 3.2, WiredTiger will use 60% of memory minus 1GB; for 7GB, that's 2.6 to 3.2GB (I'm not sure whether it's 60% of available or 60% of total memory). Perhaps configure your bucket to match that.

Postgres - How to debug/trace 'Idle in transaction' connection

I am using Postgres for one of my applications, and sometimes (not very frequently) one of the connections goes into <IDLE> in transaction state and keeps its acquired locks, which causes other connections to wait on those locks and ultimately makes my application hang.
Following is the output from pg_stat_activity table for that process:
select * from pg_stat_activity
24081 | db | 798 | 16384 | db | | 10.112.61.218 | | 59034 | 2013-09-12 23:46:05.132267+00 | 2013-09-12 23:47:31.763084+00 | 2013-09-12 23:47:31.763534+00 | f | <IDLE> in transaction
This indicates that PID 798 is in the <IDLE> in transaction state. The client process on the web server can be found as follows, using the client_port (59034) from the output above:
sudo netstat -apl | grep 59034
tcp 0 0 ip-10-112-61-218.:59034 db-server:postgresql ESTABLISHED 23843/pgbouncer
I know that something is wrong in my application code (killing one of the running application cron jobs freed the locks) that is causing the connection to hang, but I am not able to trace it.
This is not very frequent and I can't find any definite reproduction steps either as this only occurs on the production server.
I would like to get input on how to trace such an idle connection, e.g. getting the last executed query or some kind of traceback to identify which part of the code is causing this issue.
If you upgrade to 9.2 or higher, the pg_stat_activity view will show you what the most recent query executed was for idle in transaction connections.
select * from pg_stat_activity \x\g\x
...
waiting | f
state | idle in transaction
query | select count(*) from pg_class ;
You can also (even in 9.1) look in pg_locks to see what locks are being held by the idle in transaction process. If it only has locks on very commonly used objects, this might not narrow things down much, but if it was a peculiar lock that could tell you exactly where in your code to look.
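For example, a query along these lines (a sketch, assuming the backend PID 798 from the pg_stat_activity output in the question) lists the locks that backend is holding, resolving relation OIDs to table names:
select l.locktype, l.mode, l.granted, c.relname
from pg_locks l
left join pg_class c on c.oid = l.relation  -- relation is NULL for non-table locks
where l.pid = 798;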
If you are stuck on 9.1, you can perhaps use a debugger to get all but the first 22 characters of the query (the first 22 are overwritten by the <IDLE> in transaction\0 message). For example:
(gdb) printf "%s\n", ((MyBEEntry->st_activity)+22)