Backup restore fails on multi-user mode error - progress-4gl

I have a script to automate restoring a database from a backup. My script first stops all appserver instances, stops all databases, then restores from a backup. Below is the pseudo-code:
foreach appserver:
    asbman -name (appserver) -stop
foreach database:
    dbman -name (database) -stop
proutil database.db -C enablelargefiles
echo y | prorest database.db backup.bak -verbose
Once my script reaches the prorest command, it outputs the following error:
** The database D:\Directory\Wrk\db\database is in use in multi-user mode. (276)
After waiting ~60 seconds, running the prorest command again executes as expected and the database is restored correctly. My guess is that there are processes tied to the database that are still running after the database is stopped. I would like to solve this without resorting to a sleep-and-retry loop to detect when the database is ready to be restored. Is there a solution to this problem, or is there a better way to restore a database like this?

There are some timeouts that can come into play:
When an unconditional batch shutdown runs (PROSHUT -by), the following sequence of events takes place:
If there are any running processes left after:
30 Seconds - wake up clients waiting for locks.
60 Seconds - wake up clients waiting for locks.
90 Seconds - wake up clients waiting on screen input.
5 Minutes - Resend the shutdown signal to remaining clients.
10 Minutes - Send a terminate (SIGTERM) signal to remaining clients.
More info here:
http://knowledgebase.progress.com/articles/Article/P3222
You can tail the database.lg file and look for the messages telling you that the database is shut down:
[2017/02/06#20:20:56.353+0100] P-14292 T-13420 I SHUT 5: (542) Server shutdown started by Jens on CON:.
[2017/02/06#20:20:56.499+0100] P-10276 T-11404 I BROKER 0: (15193) The normal shutdown of the database will continue for 10 Min 0 Sec if required.
[2017/02/06#20:20:56.499+0100] P-10276 T-11404 I BROKER 0: (2248) Begin normal shutdown
[2017/02/06#20:20:57.499+0100] P-10276 T-11404 I BROKER 0: (2263) Resending shutdown request to 0 user(s).
[2017/02/06#20:21:01.692+0100] P-10276 T-11404 I BROKER 0: (15109) At Database close the number of live transactions is 0.
[2017/02/06#20:21:01.692+0100] P-10276 T-11404 I BROKER 0: (15743) Before Image Log Completion at Block 1 Offset 5300.
[2017/02/06#20:21:01.693+0100] P-10276 T-11404 I BROKER 0: (453) Logout by Jens on CON:.
[2017/02/06#20:21:01.694+0100] P-10276 T-11404 I BROKER : (16869) Removed shared memory with segment_id: 50528256
[2017/02/06#20:21:01.694+0100] P-10276 T-11404 I BROKER : (334) Multi-user session end.
[2017/02/06#20:21:02.356+0100] P-14292 T-13420 I SHUT 5: (453) Logout by Jens on CON:.
The (334) message is basically telling you that the database is shut down.
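If you want to script that wait, below is a minimal sketch in Python that watches the end of the .lg file for message (334); the path, timeout, and polling interval are assumptions to adapt to your environment:
import time

def wait_for_shutdown(lg_path, timeout=600, poll=0.5):
    # Watch the tail of the database .lg file until the broker writes
    # message (334) "Multi-user session end.", or give up after 'timeout' seconds.
    # Start tailing *before* issuing the stop, or the message may be missed.
    deadline = time.time() + timeout
    with open(lg_path, "r", errors="replace") as f:
        f.seek(0, 2)  # jump to the current end of the log
        while time.time() < deadline:
            line = f.readline()
            if not line:
                time.sleep(poll)
                continue
            if "(334)" in line:
                return True
    return False

# Example (hypothetical path): wait_for_shutdown(r"D:\Directory\Wrk\db\database.lg")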
Another option could be to check for the database lock file (database.lk). It's only there if the database is running:
...
2017-02-06 20:21 2 228 224 mySportsDb.b1
2017-02-06 20:21 1 703 936 mySportsDb.d1
2017-02-06 20:21 32 768 mySportsDb.db
2017-02-06 20:21 89 643 mySportsDb.lg
2017-02-06 18:00 920 mySportsDb.lic
2017-02-06 20:26 265 mySportsDb.lk
...
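A sketch of that check in Python, polling until the .lk file disappears (the path is an assumption based on the error message above):
import os
import time

def wait_for_lk_removal(lk_path, timeout=600, poll=0.5):
    # The broker removes the .lk file once the database is fully shut down,
    # so poll until it is gone or the timeout expires.
    deadline = time.time() + timeout
    while os.path.exists(lk_path):
        if time.time() > deadline:
            return False
        time.sleep(poll)
    return True

# Example: run prorest only after this returns True
# wait_for_lk_removal(r"D:\Directory\Wrk\db\database.lk")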
There are also a couple of scripts you can run to check the status of the database. See more here:
http://knowledgebase.progress.com/articles/Article/P136887
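For example, PROUTIL has a BUSY qualifier that reports whether a database is in use. As a sketch, a retry loop around it could look like this in Python; the exit-code convention (0 meaning "not in use") should be verified against the return codes documented for your OpenEdge version:
import subprocess
import time

def db_in_use(db_path):
    # "proutil <db> -C busy" reports via its exit code whether the database
    # is in use; 0 is documented as "not in use" (verify for your release).
    return subprocess.run(["proutil", db_path, "-C", "busy"],
                          capture_output=True).returncode != 0

deadline = time.time() + 600
while db_in_use(r"D:\Directory\Wrk\db\database.db") and time.time() < deadline:
    time.sleep(0.5)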

Related

Unable to start tomcat 9 with flowable war- PUBLIC.ACT_DE_DATABASECHANGELOGLOCK error

I have downloaded Flowable from flowable.com/open-source and placed flowable-ui.war and flowable-rest.war in the Tomcat 9.0.52 webapps folder.
When I start the server, after some time I see the lines below repeating in the console, and then the server stops.
SELECT LOCKED FROM PUBLIC.ACT_DE_DATABASECHANGELOGLOCK WHERE ID=1
2021-08-13 20:45:05.818 INFO 8316 --- [ main] l.lockservice.StandardLockService : Waiting for changelog lock.
Why is this issue occurring? I have not made any changes.
The message
l.lockservice.StandardLockService : Waiting for changelog lock.
occurs while Flowable waits for the lock on the DB changelog to be released.
If that never happens, it means that some other node has picked up the lock and not released it properly. I would suggest manually clearing the lock row in that table (ACT_DE_DATABASECHANGELOGLOCK), as sketched below.
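As a sketch, assuming a PostgreSQL backing database and the standard Liquibase lock columns (the driver, credentials, and column names are assumptions to check against your setup):
import psycopg2  # assumed driver; use the client library for your Flowable database

# Hypothetical connection details.
conn = psycopg2.connect(host="localhost", dbname="flowable",
                        user="flowable", password="secret")
with conn, conn.cursor() as cur:
    # Release the changelog lock left behind by a node that did not shut down cleanly.
    cur.execute("UPDATE ACT_DE_DATABASECHANGELOGLOCK "
                "SET LOCKED = FALSE, LOCKGRANTED = NULL, LOCKEDBY = NULL "
                "WHERE ID = 1")
conn.close()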
In addition to that, there is no need to run both flowable-ui.war and flowable-rest.war. flowable-rest.war is a subset of flowable-ui.war.

Fusion Freeswitch Maximum Calls In Progress

I use FusionPBX with a FreeSWITCH core to build my PBX server.
My version:
FreeSWITCH version: 1.10.2-release-14-f7bdd3845a~64bit (-release-14-f7bdd3845a 64bit)
It was working fine until last month, but once user registrations reached about 1000:
I have checked the FreeSWITCH log (debug level); FreeSWITCH is still running.
I have checked the PostgreSQL log; it is still running too.
But clients get disconnected (WebRTC from the web using SIP.js, and Zoiper over TCP) and cannot reconnect to FreeSWITCH to register, so no calls can be made at that point.
When I look at the log at that moment, it shows "Maximum Calls In Progress".
I have tried increasing the session limit to 5000 and sessions per second to 1000, and flushing the cache / restarting FreeSWITCH, but it is still not working.
Here is my switch.conf.xml.
Here is my postgresql.conf.
Here is my log from when the server went down: fs_log
You can see where I restarted FreeSWITCH in this log:
2020-07-29 14:39:08.291394 [INFO] switch_core.c:2879 Shutting down ca289c03-0617-46bf-a7af-eda4a4fe2fbb
2020-07-29 14:39:08.291394 [NOTICE] switch_core_session.c:407 Hangup sofia/internal/1100365#125.212.xxx.xxx [CS_NEW] [SYSTEM_SHUTDOWN]
Please take a look and help me solve this.

Randomly Login Timeout Expired errors from SQL 2000 DTS against SQL2008R2 databases

I have some JOBs running on SQL Server 2000 which call stored procedures or run queries against remote SQL Servers (different editions).
The JOB calls a DTS package, and it is the DTS package that makes the remote connection and executes the stored procedure or fetches the query results from the remote server.
This has been working without errors for years. I don't know why, but during the last month I have been getting random errors on these kinds of jobs. I've read some other posts and it seems to be related to a security issue, but I repeat: most of the time the jobs work; only some runs fail with this error.
Executed as user: SERVER\user.
DTSRun: Loading... DTSRun: Executing...
DTSRun OnStart: DTSStep_DTSDynamicPropertiesTask_2
DTSRun OnError: DTSStep_DTSDynamicPropertiesTask_2, Error = -2147467259 (80004005)
Error string: Login timeout expired
Error source: Microsoft OLE DB Provider for SQL Server
Help file: Help context: 0
Error Detail Records: Error: -2147467259 (80004005); Provider Error: 0 (0)
Error string: Login timeout expired
Error source: Microsoft OLE DB Provider for SQL Server
Help file: Help context: 0
DTSRun OnFinish: DTSStep_DTSDynamicPropertiesTask_2
DTSRun: Package execution complete. Process Exit Code 1. The step failed.
I really don't know what to check. After reboot the server the problems are still there. Any help from you guys would be appreciated.
EDIT 2019-02-14 16:15 -------------------------------------------------------------------------------------------
One of the solutions I found was to change the Remote Login Timeout property from the default 20 seconds to 30 seconds, or to 0 (zero means no timeout), by executing the following code:
sp_configure 'remote login timeout', 30 --Or 0 seconds for infinite
go
reconfigure with override
go
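('remote login timeout' is a dynamic setting, so RECONFIGURE applies the new value immediately; no server restart is needed.)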
From: https://support.microsoft.com/es-es/help/314530/error-message-when-you-execute-a-linked-server-query-in-sql-server-tim
I've tried this solution, changing the value to 30 seconds, but with the same result. Of course I didn't set it to 0, for obvious reasons; the timeouts are there for something. I also tried 300 seconds (5 minutes to make a login!) and it is still the same.
EDIT 2019-02-25 11:25 -------------------------------------------------------------------------------------------
Very similar to my problem, still not solved...
https://www.sqlservercentral.com/Forums/Topic1727739-391-1.aspx
For the moment I have a temporary solution, which is to increase the Connect Timeout on the Connection object.
It was blank (probably using its default value).
Since I changed this property (Connection Object > Advanced... > Connect Timeout) to 300, I'm no longer having problems with these DTS packages. I left 2 DTS packages without the change to confirm that the problem persisted, and those are the only ones that still fail. The changed ones are working fine now.
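For what it's worth, the same knob exists as a login-timeout parameter in most client libraries, which can help when testing the connection outside DTS. A sketch in Python with pyodbc (driver name and connection details are hypothetical):
import pyodbc

# 'timeout' is the login/connect timeout in seconds, i.e. the same knob the
# DTS Connection object exposes as "Connect Timeout".
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=remoteserver;"
    "DATABASE=mydb;UID=user;PWD=secret",
    timeout=300,
)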

How to fill data up to a size on multiple disks?

I am creating 4 mount-point disks on Windows. I need to copy files up to a threshold value (say 50 GB).
I tried vdbench. It works fine, but it throws an exception at the end. Here is my parameter file:
compratio=4
dedupratio=1
dedupunit=256k
* Host Definition section
hd=default,user=Administator,shell=vdbench,jvms=1
hd=localhost,system=localhost
********************************************************************************
* Storage Definition section
fsd=fsd1,anchor=C:\UnMapTest-Volume1\disk1\,depth=1,width=1,files=1,size=5g
fsd=fsd2,anchor=C:\UnMapTest-Volume2\disk2\,depth=1,width=1,files=1,size=5g
fwd=fwd1,fsd=fsd*,operation=write,xfersize=1m,fileio=sequential,fileselect=random,threads=10
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=1h,interval=1
Below is the exception from vdbench. Because of it, my calling script fails.
05:29:14.287 Message from slave localhost-0:
05:29:14.289 file=C:\UnMapTest-Volume1\disk1\\vdb.1_1.dir\vdb_f0001.file,busy=true
05:29:14.290 Thread: FwgThread write C:\UnMapTest-Volume1\disk1\ rd=rd1 For loops: None
05:29:14.291
05:29:14.292 last_ok_request: Thu Dec 28 05:28:57 PST 2017
05:29:14.292 Duration: 16.92 seconds
05:29:14.293 consecutive_blocks: 10001
05:29:14.294 last_block: FILE_BUSY File busy
05:29:14.294 operation: write
05:29:14.295
05:29:14.296 Do you maybe have more threads running than that you have
05:29:14.296 files and therefore some threads ultimately give up after 10000 tries?
05:29:14.300 *
05:29:14.301 ******************************************************
05:29:14.302 * Slave localhost-0 aborting: Too many thread blocks *
05:29:14.302 ******************************************************
05:29:14.303 *
05:29:21.235
05:29:21.235 Slave localhost-0 prematurely terminated.
05:29:21.235
05:29:21.235 Slave aborted. Abort message received:
05:29:21.235 Too many thread blocks
05:29:21.235
05:29:21.235 Look at file localhost-0.stdout.html for more information.
05:29:21.735
05:29:21.735 Slave localhost-0 prematurely terminated.
05:29:21.735
java.lang.RuntimeException: Slave localhost-0 prematurely terminated.
at Vdb.common.failure(common.java:335)
at Vdb.SlaveStarter.startSlave(SlaveStarter.java:198)
at Vdb.SlaveStarter.run(SlaveStarter.java:47)
I am using PowerShell on a Windows machine. If some other tool, such as DiskSpd, has a way to fill data up to a threshold, please let me know.
I found the answer myself.
I did this using diskspd.exe as follows.
The following command fills 50 GB of data in the given disk folder:
.\diskspd.exe -c50G -b4K -t2 C:\UnMapTest-Volume1\disk1\testfile1.dat
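(In that command, -c50G creates a 50 GB test file, -b4K uses a 4 KiB block size, and -t2 runs two threads per target.)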
It is much simpler than vdbench for my requirement.
Caution: it does not write real data, so the consumed size on the array side does not show up as the file size.
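If you do need the file to contain incompressible "real" data so the array-side usage matches, a small script can write random blocks up to the threshold. A minimal Python sketch (path and sizes are assumptions):
import os

def fill_with_random(path, total_bytes, block=1024 * 1024):
    # Write incompressible random data until 'total_bytes' is reached, so
    # thin-provisioned or deduplicating arrays report real consumption.
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            chunk = os.urandom(min(block, total_bytes - written))
            f.write(chunk)
            written += len(chunk)

# Example: fill_with_random(r"C:\UnMapTest-Volume1\disk1\testfile1.dat", 50 * 1024**3)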

orientdb 2.2.26 cluster setup - queries hang

Orientdb version - 2.2.26
Cluster - 3 node setup, readQuorum = 2, writeQuorum = "majority", ridBag.embeddedToSbtreeBonsaiThreshold = 2147483647
Nodes - CentOS 7.0, 24 cores and 96 GB RAM
Gremlin-scala/tinkerpop APIs are used for querying and inserting.
This code works fine on a single-node setup.
The code checks for an existing vertex in the graph. If the vertex does not exist, the insert operations are batched and sent to the db within a transaction.
I see the following warnings in the OrientDB log on all three nodes:
2017-09-15 16:37:31:025 WARNI [dev2] Timeout (852567ms) on waiting for synchronous responses from nodes=[dev1, dev3, dev2] responsesSoFar=[] request=(id=1.354 task=record_read(#65:22)) [ODistributedDatabaseImpl]
2017-09-15 16:52:18:239 WARNI [dev2] Timeout (1049042ms) on waiting for synchronous responses from nodes=[dev1, dev3, dev2] responsesSoFar=[] request=(id=1.568 task=record_read(#63:24)) [ODistributedDatabaseImpl]
2017-09-15 17:25:22:477 WARNI [dev2] Timeout (1984236ms) on waiting for synchronous responses from nodes=[dev1, dev3, dev2] responsesSoFar=[] request=(id=1.889 task=record_read(#63:24)) [ODistributedDatabaseImpl]
There is no problem with the network, and the firewall is disabled on all three nodes.
Are these log messages related to the problem?
What else should I check to fix the problem?