Implementing Office Online in my application (PHP)...
Excel & PowerPoint work like a charm!
But if I try to edit a Word document, I get "Session expired" in the editor view.
I've created a log file on my server...
date - request type - mismatch or "ok" - current lock string - request WOPI lock - old WOPI lock
2017-06-08 14:35:00 - CheckFileInfo -- Current Lock: || X-Wopi-Lock: || X-Wopi-OldLock:
2017-06-08 14:35:01 - Lock - OK -- Current Lock: || X-Wopi-Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":4} || X-Wopi-OldLock:
2017-06-08 14:35:01 - GetFile -- Current Lock: || X-Wopi-Lock: || X-Wopi-OldLock:
2017-06-08 14:35:01 - Lock - Missmatch -- Current Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":4} || X-Wopi-Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":6,"E":2,"M":"2e113d2cc5e0457f94543c383961bd37","P":"3823A714-066D-4B80-9D32-439DA70147B0"} || X-Wopi-OldLock:
2017-06-08 14:35:01 - Lock - Missmatch -- Current Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":4} || X-Wopi-Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":6,"E":2,"M":"2e113d2cc5e0457f94543c383961bd37","P":"3823A714-066D-4B80-9D32-439DA70147B0"} || X-Wopi-OldLock:
2017-06-08 14:35:01 - Lock - Missmatch -- Current Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":4} || X-Wopi-Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":6,"E":2,"M":"2e113d2cc5e0457f94543c383961bd37","P":"3823A714-066D-4B80-9D32-439DA70147B0"} || X-Wopi-OldLock:
2017-06-08 14:35:05 - Unlock - Missmatch -- Current Lock: {"S":"fdcc8cdd-1173-4935-b94b-aca35228a332","F":4} || X-Wopi-Lock: GetCurrentLock-00000000-0000-0000-0000-000000000000 || X-Wopi-OldLock:
If I "haked" a 200 response in the "missmatch", the session expired is gone :P
But then, I've no Co-Authoring.
Okay, figured it out! I didn't get the OldLock header as espected beacause it is case sensetive in the header array..
changed $headers['X-Wopi-OldLock'] to $headers['X-Wopi-Oldlock'] and now I'm back in business!
thanks
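For anyone hitting the same thing, a more robust approach than matching the exact casing is to normalize the header keys before looking them up. A minimal sketch, assuming the headers come from getallheaders() (this is not the original code):
// Lower-case all header names so the WOPI lock headers are found regardless of casing
$headers = array_change_key_case(getallheaders(), CASE_LOWER);
$lock    = isset($headers['x-wopi-lock'])    ? $headers['x-wopi-lock']    : '';
$oldLock = isset($headers['x-wopi-oldlock']) ? $headers['x-wopi-oldlock'] : '';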
Please help: I'm setting up PostgreSQL monitoring via Zabbix 4.2, using the standard built-in PostgreSQL template. All data is displayed correctly except for the metrics from pgsql.query.time.sql; data from that query is not displayed.
When I try to execute this request manually, I get an error:
psql -qtAX -h "$1" -p "$2" -U "$3" -d "$4" -v tmax=$5 -f "/var/lib/zabbix/postgresql/pgsql.query.time.sql"
psql:/var/lib/zabbix/postgresql/pgsql.query.time.sql:31: ERROR:  syntax error at or near ")"
LINE 22: ...'epoch' FROM (clock_timestamp() - query_start)) > )::integer...
Here is the query itself from /var/lib/zabbix/postgresql/pgsql.query.time.sql
WITH T AS
(SELECT db.datname,
coalesce(T.query_time_max, 0) query_time_max,
coalesce(T.tx_time_max, 0) tx_time_max,
coalesce(T.mro_time_max, 0) mro_time_max,
coalesce(T.query_time_sum, 0) query_time_sum,
coalesce(T.tx_time_sum, 0) tx_time_sum,
coalesce(T.mro_time_sum, 0) mro_time_sum,
coalesce(T.query_slow_count, 0) query_slow_count,
coalesce(T.tx_slow_count, 0) tx_slow_count,
coalesce(T.mro_slow_count, 0) mro_slow_count
FROM pg_database db NATURAL
LEFT JOIN (
SELECT datname,
extract(epoch FROM now())::integer ts,
coalesce(max(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle', 'idle in transaction', 'idle in transaction (aborted)') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) query_time_max,
coalesce(max(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) tx_time_max,
coalesce(max(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query ~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) mro_time_max,
coalesce(sum(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle', 'idle in transaction', 'idle in transaction (aborted)') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) query_time_sum,
coalesce(sum(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) tx_time_sum,
coalesce(sum(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query ~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) mro_time_sum,
coalesce(sum((extract('epoch' FROM (clock_timestamp() - query_start)) > :tmax)::integer * (state NOT IN ('idle', 'idle in transaction', 'idle in transaction (aborted)') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) query_slow_count,
coalesce(sum((extract('epoch' FROM (clock_timestamp() - query_start)) > :tmax)::integer * (state NOT IN ('idle') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) tx_slow_count,
coalesce(sum((extract('epoch' FROM (clock_timestamp() - query_start)) > :tmax)::integer * (state NOT IN ('idle') AND query ~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) mro_slow_count
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
GROUP BY 1) T
WHERE NOT db.datistemplate )
SELECT json_object_agg(datname, row_to_json(T))
FROM T
The documentation says:
-c command
[...]
command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command.
So you cannot use variables that way.
You could use a "here document":
psql <<EOF
\set x $5
SELECT :x
EOF
Or you use the shell variable directly in the statement:
psql -c "SELECT $5"
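Or, as the Zabbix template script actually does, you can define a psql variable with -v and reference it as :tmax inside the file passed with -f. A minimal sketch (the value 30 is just an example):
psql -v tmax=30 -f /var/lib/zabbix/postgresql/pgsql.query.time.sql   # the script refers to the value as :tmax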
The issue here is that you are passing an empty value to tmax=, which is indicated by the error:
psql:/var/lib/zabbix/postgresql/pgsql.query.time.sql:31: ERROR:  syntax error at or near ")"
LINE 22: ...'epoch' FROM (clock_timestamp() - query_start)) > )::integer...
^-- missing value here
I believe the {$PG.SLOW_QUERIES.MAX.WARN} macro is the culprit here. Make sure that:
it exists in the template macros;
it is not being overwritten with an empty value by a host macro.
Now, this may not be the actual reason why monitoring doesn't work. To be sure what exactly goes wrong, we need to see the Zabbix agent logs; you can find them in /var/log/zabbix/zabbix_agentd.log. You may need to change DebugLevel in the agent config to 3 or even 4 (be wary: level 4 produces a lot of output).
I enabled DebugLevel=4, and in the Zabbix agent log I now see the correct command:
psql -qtAX -h "127.0.0.1" -p "5432" -U "zbx_monitor" -d "test2" -v tmax=30 -f "/var/lib/zabbix/postgresql/pgsql.query.time.sql"
But in the output of the command I get 0 values for all metrics for all databases, even though I ran a query against the test2 database in parallel that takes 14 seconds (I have run this query many times). Why is that?
Here is the output of the command:
psql -qtAX -h "127.0.0.1" -p "5432" -U "zbx_monitor" -d "test2" -v tmax=30 -f "/var/lib/zabbix/postgresql/pgsql.query.time.sql"
{ "postgres" : {"datname":"postgres","query_time_max":0,"tx_time_max":0,"mro_time_max":0,"query_time_sum":0,"tx_time_sum":0,"mro_time_sum":0,"query_slow_count":0,"tx_slow_count":0,"mro_slow_count":0}, "test" : {"datname":"test","query_time_max":0,"tx_time_max":0,"mro_time_max":0,"query_time_sum":0,"tx_time_sum":0,"mro_time_sum":0,"query_slow_count":0,"tx_slow_count":0,"mro_slow_count":0}, "test2" : {"datname":"test2","query_time_max":0,"tx_time_max":0,"mro_time_max":0,"query_time_sum":0,"tx_time_sum":0,"mro_time_sum":0,"query_slow_count":0,"tx_slow_count":0,"mro_slow_count":0} }
Hope you're all okay.
We hit this limit quite often. We know there is no way to raise the 500-connection limit on concurrent user connections in Redshift. We also know certain views (pg_user_info) provide info about a user's actual limit.
We are looking for some answers not found in this forum, plus any guidance based on your experience.
Questions:
Would recreating the cluster with bigger EC2 instances yield a higher limit?
Would adding new nodes to the existing cluster yield a higher limit?
From the app development perspective: what specific strategies/actions would you recommend in order to spot or predict a situation where this limit will be hit?
Txs - Jimmy
Okay folks, thanks to all who answered.
I posted a support ticket with AWS and this is their recommendation. I'm pasting it all here; it's long, but I hope it helps the many people running into this issue. The idea is to catch the situation before it happens:
To monitor the number of connections made to the database, you can create a CloudWatch alarm based on the DatabaseConnections metric that triggers a Lambda function when a certain threshold is reached. This Lambda function can then terminate idle connections by calling a procedure that terminates them.
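For illustration only (the alarm name, threshold, cluster identifier, and SNS topic below are placeholders, not part of the AWS recommendation), such an alarm could be created with boto3 along these lines:
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the Redshift DatabaseConnections metric; the SNS topic would in turn
# invoke the cleanup Lambda. Names, the threshold, and the ARN are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="redshift-connections-high",
    Namespace="AWS/Redshift",
    MetricName="DatabaseConnections",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-cluster"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=450,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:redshift-connections"],
)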
Please find below the queries that create a procedure to log and terminate long-running inactive sessions:
1. Add a view to get all currently inactive sessions in the cluster
CREATE OR REPLACE VIEW inactive_sessions as (
select a.process,
trim(a.user_name) as user_name,
trim(c.remotehost) as remotehost,
a.usesysid,
a.starttime,
datediff(s,a.starttime,sysdate) as session_dur,
b.last_end,
datediff(s,case when b.last_end is not null then b.last_end else a.starttime end,sysdate) idle_dur
FROM
(
select starttime,process,u.usesysid,user_name
from stv_sessions s, pg_user u
where
s.user_name = u.usename
and u.usesysid>1
and process NOT IN (select pid from stv_inflight where userid>1
union select pid from stv_recents where status != 'Done' and userid>1)
) a
LEFT OUTER JOIN (
select
userid,pid,max(endtime) as last_end from svl_statementtext
where userid>1 and sequence=0 group by 1,2) b ON a.usesysid = b.userid AND a.process = b.pid
LEFT OUTER JOIN (
select username, pid, remotehost from stl_connection_log
where event = 'initiating session' and username <> 'rsdb') c on a.user_name = c.username AND a.process = c.pid
WHERE (b.last_end > a.starttime OR b.last_end is null)
ORDER BY idle_dur
);
2. Add a table for logging information about long-running transactions that were terminated
CREATE TABLE IF NOT EXISTS terminated_inactive_sessions (
process int,
user_name varchar(50),
remotehost varchar(50),
starttime timestamp,
session_dur int,
idle_dur int,
terminated_on timestamp DEFAULT GETDATE()
);
3. Add a procedure to log and terminate any inactive transactions running for longer than 'n' seconds
CREATE OR REPLACE PROCEDURE terminate_and_log_inactive_sessions (n INTEGER)
AS $$
DECLARE
expired RECORD ;
BEGIN
FOR expired IN SELECT process, user_name, remotehost, starttime, session_dur, idle_dur FROM inactive_sessions where idle_dur >= n
LOOP
EXECUTE 'INSERT INTO terminated_inactive_sessions (process, user_name, remotehost, starttime, session_dur, idle_dur) values (' || expired.process || ' , ''' || expired.user_name || ''' , ''' || expired.remotehost || ''' , ''' || expired.starttime || ''' , ' || expired.session_dur || ' , ' || expired.idle_dur || ');';
EXECUTE 'SELECT PG_TERMINATE_BACKEND(' || expired.process || ')';
END LOOP ;
END ;
$$ LANGUAGE plpgsql;
4. Execute the procedure by running the following command:
call terminate_and_log_inactive_sessions(100);
Here is a sample lambda function that attempts to close idle connections by querying the view 'inactive_sessions' created above, which you can use as a reference.
import datetime
import logging
import sys

import psycopg2

# NOTE: the connection settings (db_database, db_user, db_password, db_port,
# db_host) and session_idle_limit are assumed to be defined elsewhere,
# e.g. as environment variables or Lambda configuration.

# Current time
now = datetime.datetime.now()
query = "SELECT process, user_name, session_dur, idle_dur FROM inactive_sessions where idle_dur >= %d"
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    try:
        conn = psycopg2.connect("dbname=" + db_database + " user=" + db_user + " password=" + db_password + " port=" + db_port + " host=" + db_host)
        conn.autocommit = True
    except:
        logger.error("ERROR: Unexpected error: Could not connect to Redshift cluster.")
        sys.exit()
    logger.info("SUCCESS: Connection to RDS Redshift cluster succeeded")
    with conn.cursor() as cur:
        cur.execute(query % (session_idle_limit))
        row_count = cur.rowcount
        if row_count >= 1:
            result = cur.fetchall()
            for row in result:
                print("terminating session with pid %s that has been idle for %d seconds at %s" % (row[0], row[3], now))
                cur.execute("SELECT PG_TERMINATE_BACKEND(%s);" % (row[0]))
            conn.close()
        else:
            conn.close()
As you said, this is a hard limit in Redshift and there is no way to raise it. Redshift is not a high-concurrency / high-connection database.
I expect that if you need the large data-analytics horsepower of Redshift, you can get around this with connection sharing. Pgpool is a common tool for this.
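As a rough sketch only (host name, port, and sizing are assumptions to be tuned to your workload), a pgpool-II setup that funnels many client connections through a smaller pool of Redshift connections might look like this in pgpool.conf:
# pgpool.conf (placeholder values)
listen_addresses = '*'
port = 9999                     # clients connect to pgpool on this port
backend_hostname0 = 'my-cluster.example.us-east-1.redshift.amazonaws.com'
backend_port0 = 5439
num_init_children = 200         # max concurrent client connections pgpool accepts
max_pool = 2                    # cached backend connections per pgpool child
connection_cache = on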
I am trying to build a log table in DynamoDB, and my table looks like this:
Pid[HashKey] || TableName[SecondaryIndex] || CreateDate[RangeKey] || OldValue || NewValue
10 || Product || 10.10.2013 00:00:01 || Shoe || Skirt
10 || Product || 10.10.2013 00:00:02 || Skirt || Pant
11 || ProductCategory || 10.10.2013 00:00:01 || Shoes || Skirts
19 || ProductCategory || 10.10.2013 00:00:01 || Tables || Armchairs
Pid = My main db tables primary key
TableName = My main db table names
CreateDate = Row created date
Now I want to get the list of rows where
where (Pid = 10 AND TableName = "Product") OR (Pid = 11 AND
TableName="ProductCategory")
in a single request (it wouldn't be this short in practice; it could include many tables and pids).
I tried BatchGetItem, but I couldn't use it because it can't query by the secondary index; it needs the range key with the equals operator.
I tried Query, but then I couldn't send multiple hash keys in the same request.
Any ideas or suggestions?
Thank you.
The problem here is the OR. Generally you cannot evaluate this WHERE condition with a single Query operation without modifying your rows.
Solution 1: issue two Query operations and append them to the same result set.
where (Pid = 10 AND TableName = "Product")
union
where (Pid = 11 AND TableName = "ProductCategory")
Those operations should run in parallel to optimize performance.
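A minimal sketch of that approach with boto3 (the table name and index name are assumptions; this loop issues the queries sequentially, but they could just as well be run in parallel):
import boto3
from boto3.dynamodb.conditions import Key

# Assumes a secondary index whose range key is TableName; names are placeholders.
table = boto3.resource("dynamodb").Table("ChangeLog")

conditions = [(10, "Product"), (11, "ProductCategory")]
items = []
for pid, table_name in conditions:
    resp = table.query(
        IndexName="TableNameIndex",
        KeyConditionExpression=Key("Pid").eq(pid) & Key("TableName").eq(table_name),
    )
    items.extend(resp["Items"])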
Solution 2: create a field xxx that describes your condition and maintain it on writes; then you can create a global secondary index on it and perform a single query.
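Continuing the previous sketch, an illustration of Solution 2 (the combined attribute name and index name are assumptions):
# On every write, also store the combined condition field...
item = {"Pid": 10, "TableName": "Product", "CreateDate": "10.10.2013 00:00:01", "OldValue": "Shoe", "NewValue": "Skirt"}
item["PidTableName"] = "%s#%s" % (item["Pid"], item["TableName"])
table.put_item(Item=item)

# ...and query a GSI whose hash key is that field; each (Pid, TableName)
# combination maps to exactly one key value.
resp = table.query(
    IndexName="PidTableNameIndex",
    KeyConditionExpression=Key("PidTableName").eq("10#Product"),
)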
I'm trying to write a function in an Oracle database. When I'm in the function editor and attempt to run it, DbVis inserts an additional parameter.
#call [dbvis - v0] = SP_GET_ANNUAL_SALES_HISTORY( [dbvis - v1], 'DAL', '00105315', '2013' );
#echo returnValue = [dbvis - v0];
#echo p1 = [dbvis - v0];
Then I get this error:
... Physical database connection acquired for: JdaTest
10:36:40 [#CALL - 0 row(s), 0.000 secs] [Error Code: 6550, SQL State: 65000] ORA-06550: line 1, column 13:
PLS-00306: wrong number or types of arguments in call to 'SP_GET_ANNUAL_SALES_HISTORY'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
10:36:40 [#ECHO - 0 row(s), 0.000 secs] returnValue = null
10:36:40 [#ECHO - 0 row(s), 0.000 secs] p1 = null
... 3 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.000/0.000 sec [2 successful, 0 warnings, 1 errors]
This is my function. It's my first foray into stored procedures. At this point, I'm just trying to get it to run and deliver some results. The return type is a type I also created: 'CREATE OR REPLACE TYPE "SAPMGR"."ANNUAL_SALES_HISTORY" is Varray(12) of number'
CREATE OR REPLACE FUNCTION "SAPMGR"."SP_GET_ANNUAL_SALES_HISTORY" (loc_in IN varchar2,item_in IN varchar2,year_in IN varchar2)
RETURN annual_sales_history
AS
yearStart Date;
yearEnd Date;
start_date sales_history.start_date%TYPE;
qty sales_history.quantity%TYPE;
ash annual_sales_history;
cursor c1 is
select start_date,QTY
from sales_history
where item = item_in
and loc = loc_in
and start_date between yearStart and yearEnd
order by item, loc, start_date;
BEGIN
open c1;
Loop
fetch c1 into start_date, qty;
exit when c1%notfound;
ash(extract(month from start_date)) := qty;
DBMS_OUTPUT.PUT_LINE( 'ash' || ' ' || ash(0) || ' ' || ash(1) || ' ' || ash(2) || ' ' || ash(3) || ' ' || ash(4) || ' ' || ash(5)|| ' ' || ash(6) || ' ' || ash(7));
End loop;
close c1;
commit;
RETURN ash;
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
RETURN ash;
END;
What is [dbvis - v1] and how do I get rid of it? Or please show me where I might be leaving something out.
Thanks.
I received mail from DbVisualizer saying that their editor does not recognize custom data types, which is what I was attempting to use. So stay with native types and you're OK.
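For example, one native-type alternative (a sketch only, not the original function; names mirror the question's table and it simply returns the cursor rows instead of the custom VARRAY) would be to return a SYS_REFCURSOR:
CREATE OR REPLACE FUNCTION sp_get_annual_sales_history_rc (
  loc_in IN varchar2, item_in IN varchar2, year_in IN varchar2)
  RETURN SYS_REFCURSOR
AS
  rc SYS_REFCURSOR;
BEGIN
  OPEN rc FOR
    select start_date, qty
      from sales_history
     where item = item_in
       and loc = loc_in
     order by start_date;
  RETURN rc;
END;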
I have a Play Framework 2.0.4 application that needs to modify rows in the db.
I need to update a few messages in the db to status "opened" (read messages).
I did it like this:
String sql = " UPDATE message SET opened = true, opened_date = now() "
+" WHERE id_profile_to = :id1 AND id_profile_from = :id2 AND opened IS NOT true";
SqlUpdate update = Ebean.createSqlUpdate(sql);
update.setParameter("id1", myProfileId);
update.setParameter("id2", conversationProfileId);
int modifiedCount = update.execute();
I have configured PostgreSQL to log all queries.
modifiedCount is the actual number of modified rows, but the query runs in a transaction.
After the query is executed in the db there is a ROLLBACK, so the UPDATE is not applied.
I have tried switching the db to H2, with the same result.
This is the query from the Postgres log:
2012-12-18 00:21:17 CET : S_1: BEGIN
2012-12-18 00:21:17 CET : <unnamed>: UPDATE message SET opened = true, opened_date = now() WHERE id_profile_to = $1 AND id_profile_from = $2 AND opened IS NOT true
2012-12-18 00:21:17 CET : parameters: $1 = '1', $2 = '2'
2012-12-18 00:21:17 CET : S_2: ROLLBACK
..........
The Play Framework documentation and the Ebean docs state that there is no transaction if none is declared, or a transient one per query if needed.
So... I tried this trick:
Ebean.beginTransaction();
int modifiedCount = update.execute();
Ebean.commitTransaction();
Ebean.endTransaction();
Logger.info("update mod = " + modifiedCount);
But this makes no difference; the same behavior. Then I tried:
Ebean.execute(update);
Again, the same...
Next, I annotated the method with
@Transactional(type=TxType.NEVER)
and
@Transactional(type=TxType.MANDATORY)
None of them made a difference.
I am so frustrated with Ebean :(
Can anybody help, please?
BTW.
I set
Ebean.getServer(null).getAdminLogging().setDebugGeneratedSql(true);
Ebean.getServer(null).getAdminLogging().setDebugLazyLoad(true);
Ebean.getServer(null).getAdminLogging().setLogLevel(LogLevel.SQL);
to see the query in the Play console. Other queries are logged, but this update is not.
Just remove the initial space... Yes, I couldn't believe it either...
Change from " UPDATE... to "UPDATE...
And that's all...
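In other words, a sketch of the corrected code from the question (only the leading space in the SQL string changes):
String sql = "UPDATE message SET opened = true, opened_date = now() "
           + "WHERE id_profile_to = :id1 AND id_profile_from = :id2 AND opened IS NOT true";

SqlUpdate update = Ebean.createSqlUpdate(sql);
update.setParameter("id1", myProfileId);
update.setParameter("id2", conversationProfileId);
int modifiedCount = update.execute();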
I think you have to use raw SQL instead of the createSqlUpdate statement.