C0000005 ACCESS_VIOLATION Exception with Progress OpenEdge when calling a Web Service - soap

I am trying to call an external web service using Progress OE 11.5. When I execute the code from the GUI Procedure Editor or the CHUI Procedure Editor, the session crashes on the call to the API:
RUN ProcessTrack IN hTrackPortType(INPUT lcRequest, OUTPUT lcResponse) no-error.
I don't get any ABL errors; the Progress GUI window just crashes.
When I traced the logs, they show a "C0000005 ACCESS_VIOLATION" exception. Any idea what causes this? The same web service works fine from SoapUI and from a Python program, so I am not sure whether Progress OpenEdge has any access restrictions on contacting external APIs.
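For reference, the call uses the usual CREATE SERVER / CONNECT / RUN portType pattern, roughly like the sketch below (the WSDL URL, port-type name and payload are simplified placeholders, not the exact production code):

DEFINE VARIABLE hWebService    AS HANDLE   NO-UNDO.
DEFINE VARIABLE hTrackPortType AS HANDLE   NO-UNDO.
DEFINE VARIABLE lcRequest      AS LONGCHAR NO-UNDO.
DEFINE VARIABLE lcResponse     AS LONGCHAR NO-UNDO.

/* Connect to the service through its WSDL (placeholder URL). */
CREATE SERVER hWebService.
hWebService:CONNECT("-WSDL 'http://example.com/TrackService?wsdl'") NO-ERROR.
IF NOT hWebService:CONNECTED() THEN
    RETURN ERROR "Could not connect to the web service".

/* Bind the port type, then call the operation that crashes. */
RUN TrackPortType SET hTrackPortType ON SERVER hWebService NO-ERROR.
RUN ProcessTrack IN hTrackPortType (INPUT lcRequest, OUTPUT lcResponse) NO-ERROR.
IF ERROR-STATUS:ERROR THEN
    MESSAGE ERROR-STATUS:GET-MESSAGE(1) VIEW-AS ALERT-BOX ERROR.

/* Clean up. */
DELETE PROCEDURE hTrackPortType.
hWebService:DISCONNECT().
DELETE OBJECT hWebService.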
I have the full stack trace here.
=====================================================
PROGRESS stack trace as of Fri Aug 07 12:26:40 2020
=====================================================
Progress OpenEdge Release 11.5 build 1114 on WINNT
Startup parameters:
-pf C:\Progressx86\OpenEdge\startup.pf,-cpinternal ISO8859-1,-cpstream ISO8859-1,-cpcoll Basic,-cpcase Basic,-d mdy,-numsep 44,-numdec 46,(end .pf),-param C:\.....\api_request.p
Exception code: C0000005 ACCESS_VIOLATION
Fault address: 025C21CC 1C3:0034002D
Registers:
EAX:086496B8
EBX:00000002
ECX:03100000
EDX:03100000
ESI:59DF2175
EDI:085AA1E0
CS:EIP:0023:025C21CC
SS:ESP:002B:00F4BFA0 EBP:00F4BFD0
DS:002B ES:002B FS:0053 GS:002B
Flags:00210206
Debugging dll: C:\Progressx86\OpenEdge\bin\DBGHELP.DLL
Symbol Path:
C:\Progressx86\OpenEdge\bin;C:\Progressx86\OpenEdge\pdbfiles
Call Stack:
Address Frame
025C21CC 00F4BF9C 0000:00000000
085AA1E0 00F4BFD0 0000:00000000
59DF27DB 00F4BFDC WSDLAttribute::getHandle+3F52B
59DA415C 00F4F130 WSDLArray_Empty+23ABC
59DCD9A0 00F4F144 WSDLAttribute::getHandle+1A6F0
59E502A8 00F4F198 WSDLAttribute::getHandle+9CFF8
59E5032C 00F4F1CC WSDLAttribute::getHandle+9D07C
59DCF04E 00F4F204 WSDLAttribute::getHandle+1BD9E
59D9B724 00F4F240 WSDLArray_Empty+1B084
59E50403 00F4F260 WSDLAttribute::getHandle+9D153
59D55D6F 00F4F2A0 csp_tweakFileURL+312F
** ABL Stack Trace **
--> C:\....\p56215_api_request.ped at line 54 (C:\.....\p56215_api_request.ped)
adecomm/_runcode.p at line 665 (adecomm/_runcode.r)
ExecuteRun adeedit/_proedit.p at line 3613 (adeedit/_proedit.r)
RunFile adeedit/_proedit.p at line 10625 (adeedit/_proedit.r)
USER-INTERFACE-TRIGGER adeedit/_proedit.p at line 1985 (adeedit/_proedit.r)
adeedit/_proedit.p at line 12280 (adeedit/_proedit.r)
_edit.p at line 408 (C:\Progressx86\OpenEdge\gui\_edit.r)
** Persistent procedures/Classes **
** PROPATH **
.,C:\Progressx86\OpenEdge\gui,C:\Progressx86\OpenEdge\gui\ablunit.pl,C:\Progressx86\OpenEdge\gui\adecomm.pl,C:\Progressx86\OpenEdge\gui\adecomp.pl,C:\Progressx86\OpenEdge\gui\adedesk.pl,C:\Progressx86\OpenEdge\gui\adedict.pl,C:\Progressx86\OpenEdge\gui\adeedit.pl,C:\Progressx86\OpenEdge\gui\adeicon.pl,C:\Progressx86\OpenEdge\gui\aderes.pl,C:\Progressx86\OpenEdge\gui\adeshar.pl,C:\Progressx86\OpenEdge\gui\adeuib.pl,C:\Progressx86\OpenEdge\gui\adeweb.pl,C:\Progressx86\OpenEdge\gui\adexml.pl,C:\Progressx86\OpenEdge\gui\dataadmin.pl,C:\Progressx86\OpenEdge\gui\OpenEdge.BusinessLogic.pl,C:\Progressx86\OpenEdge\gui\OpenEdge.Core.pl,C:\Progressx86\OpenEdge\gui\OpenEdge.ServerAdmin.pl,C:\Progressx86\OpenEdge\gui\prodict.pl,C:\Progressx86\OpenEdge\gui\protools.pl,C:\Progressx86\OpenEdge,C:\Progressx86\OpenEdge\bin
** Databases (logical/type/physical) **
** End of Protrace **

This Knowledge Base article indicates that this is a known defect. If you are running a version below 11.7.1, you should consider upgrading to the latest 11.7 service pack (currently 11.7.6). If you are running a version later than 11.7.1 that is still mentioned in the article, you should contact Progress support.
EDIT: since you are running 11.5, upgrading should be a priority!

Related

kernel - postgres segfault error 15 in libc-2.19.so

Yesterday we had a crash of PostgreSQL 9.5.14 running on Debian 8 (Linux xxxxxx 3.16.0-7-amd64 #1 SMP Debian 3.16.59-1 (2018-10-03) x86_64 GNU/Linux): a segmentation fault. The database closed all connections and reinitialized itself, staying in recovery mode for about a minute.
PostgreSQL log:
2018-10-xx xx:xx:xx UTC [580-2] LOG: server process (PID 16461) was
terminated by signal 11: Segmentation fault
kern.log:
Oct xx xx:xx:xx xxxxxxxx kernel: [117977.301353] postgres[16461]:
segfault at 7efd3237db90 ip 00007efd3237db90 sp 00007ffd26826678 error
15 in libc-2.19.so[7efd322a2000+1a1000]
According to the libc documentation (https://support.novell.com/docs/Tids/Solutions/10100304.html), error code 15 means:
NX_EDEADLK 15 resource deadlock would occur - which does not tell me much.
Could you please tell me if we can do something to avoid this problem in the future? This is, of course, a production server.
All packages are currently up to date. Upgrading PostgreSQL is unfortunately not an option. The server runs on Google Compute Engine.
error code 15 means: NX_EDEADLK 15
No, it doesn't mean that. This answer explains how to interpret 15 here.
The 15 here is the kernel page-fault error code bitmask: 15 = 1 + 2 + 4 + 8, i.e. bits 0, 1, 2 and 3 are set, which decodes to protection fault, write access, user mode, and use of a reserved bit. Most likely your postgres process attempted to write through some wild pointer.
if we can do something to avoid this problem in the future?
The only thing you can do is find the bug and fix it, or upgrade to a release of postgres where that bug is already fixed (and hope that no new ones were introduced).
To understand where the bug might be, you should check whether a core dump was produced (if not, enable core dumps). If you have the core, run gdb /path/to/postgres /path/to/core and then the where GDB command. That will give you the crash stack trace, which may allow you to find similar bug reports.
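A minimal sketch of that workflow, assuming a Debian-style installation (the binary and core file paths are placeholders; adjust them for your system):

# allow core dumps for the postgres server processes
ulimit -c unlimited

# after the next crash, open the core file together with the postgres binary
gdb /usr/lib/postgresql/9.5/bin/postgres /path/to/core

# inside gdb, print the stack trace of the crashed process
(gdb) where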

Endeca baseline update is failing: SEVERE: Utility 'rmdir_dgraph-input-old' failed

The baseline update on Endeca is failing. Please find the logs below:
INFO: Finished pushing content to dgraph.
INFO: [AuthoringMDEXHost] Starting shell utility 'rmdir_dgraph-input-old'.
INFO: [LiveMDEXHostA] Starting shell utility 'cleanDir_local-dgraph-input'.
INFO: [LiveMDEXHostA] Starting shell utility 'rmdir_dgraph-input-old'.
SEVERE: Utility 'rmdir_dgraph-input-old' failed. Refer to utility logs in [ENDECA_CONF]/logs/shell on host LiveMDEXHostA.
Occurred while executing line 7 of valid BeanShell script:
AuthoringDgraphCluster.copyIndexToDgraphServers();
AuthoringDgraphCluster.applyIndex();
LiveDgraphCluster.cleanDirs();
LiveDgraphCluster.copyIndexToDgraphServers();
LiveDgraphCluster.applyIndex();
SEVERE: Error executing valid BeanShell script.
Occurred while executing line 19 of valid BeanShell script:
Dgidx.run();
// distribute index, update Dgraphs
DistributeIndexAndApply.run();
// Upload the generated dimension values to Workbench
WorkbenchManager.cleanDirs();
SEVERE: Caught an exception while invoking method 'run' on object 'BaselineUpdate'. Releasing locks.
Caused by java.lang.reflect.InvocationTargetException
sun.reflect.NativeMethodAccessorImpl invoke0 - null
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.EacComponentControlException
com.endeca.soleng.eac.toolkit.utility.Utility run - Utility 'rmdir_dgraph-input-old' failed. Refer to utility logs in [ENDECA_CONF]/logs/shell on host LiveMDEXHostA.
INFO: Released lock 'update_lock'.
Has anyone seen this type of error before? Please let me know a potential solution. The baseline update also runs for 2 to 3 hours before it fails, which makes this painful.
Thanks!
Check the logs under
endeca/PlatformServices/workspace/logs/shell
There should be a log named something like appName.rmdir_dgraph-input-old.log with more information about the error.
Most likely the utility is trying to remove a folder that does not exist. If that is the case, just create the folder the utility is trying to remove and run the baseline update again.
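For example, something along these lines (the application name and paths are hypothetical; take the real ones from the utility log):

# read the utility log on the failing host (LiveMDEXHostA)
cat /opt/endeca/PlatformServices/workspace/logs/shell/MyApp.rmdir_dgraph-input-old.log

# recreate the directory the utility complained about (hypothetical path)
mkdir -p /opt/endeca/apps/MyApp/data/dgraphs/dgraph_input_old

# rerun the baseline update from the application's control directory
/opt/endeca/apps/MyApp/control/baseline_update.sh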

Exceptions in MongoDB/Cursor.pm line 161

Using MongoDB 2.4.5 with version 0.702 of the Perl MongoDB driver, I frequently run into exceptions like these:
recv timed out (30000 ms) at ...MongoDB/Cursor.pm line 161.
couldn't get response to throw out at ...MongoDB/Cursor.pm line 161.
missed the response we wanted, please try again at ...MongoDB/Cursor.pm line 161
invalid header received at ...MongoDB/Cursor.pm line 161.
can't get db response, not connected at ...MongoDB/Cursor.pm line 161.
The exceptions are intermittent, and often vanish on the next request (this is a web app). Occasionally, the exceptions will persist over several consecutive requests.
This is a tiny database running the default configuration (no sharding or anything fancy). I've tried using some of the tools listed here and here, but I'm unclear on how to apply them to this situation.
This is all running on Debian 7.1 64-bit. The web server is Mojolicious' hypnotoad 4.07 on perl 5.16.3 running behind apache2.
Can you kindly suggest some tools & strategies for diagnosing the problem? Thanks for your time.

Crystal Reports Pro 11 crashes when linking tables

I have a serious problem with Crystal Reports Pro version 11.5.11.1470 SP5. When I create a new report, add multiple tables, and press the Link tab to create links between the tables, the application crashes and closes. I have uninstalled and reinstalled the program many times, but in vain. I googled this issue all day with no luck at all. The technical issue log shows the following:
Problem Event Name: APPCRASH
Application Name: crw32.exe
Application Version: 11.5.11.1470
Application Timestamp: 492e9155
Fault Module Name: MFC71U.DLL
Fault Module Version: 7.10.3077.0
Fault Module Timestamp: 3e77fc29
Exception Code: c0000005
Exception Offset: 00032e72
OS Version: 6.1.7601.2.1.0.256.4
Locale ID: 1033
Additional Information 1: 1fc1
Additional Information 2: 1fc163a1c57ae45571bce37d539b233f
Additional Information 3: 12a3
Additional Information 4: 12a3f41f6385dcf1330ec81c9df2618c

Unable to read crash dump in windbg

I have been getting a StackOverflowException in my program, which may be originating from a third-party library, microsoft.sharepoint.client.runtime.dll.
I used adplus to create a crash dump, but I'm struggling to get any information from it when I open it in WinDbg. This is what I get as a response:
> 0:000> .restart /f
Loading Dump File [C:\symbols\FULLDUMP_FirstChance_epr_Process_Shut_Down_DocumentumMigrator.exe__0234_2011-11-17_15-19-59-426_0d80.dmp]
User Mini Dump File with Full Memory: Only application data is available
Comment: 'FirstChance_epr_Process_Shut_Down'
Symbol search path is: C:\symbols
Executable search path is:
Windows 7 Version 7601 (Service Pack 1) MP (8 procs) Free x64
Product: Server, suite: Enterprise TerminalServer SingleUserTS
Machine Name:
Debug session time: Thu Nov 17 15:19:59.000 2011 (UTC + 2:00)
System Uptime: 2 days 2:44:48.177
Process Uptime: 0 days 0:13:05.000
.........................................WARNING: rsaenh overlaps cryptsp
.................WARNING: rasman overlaps apphelp
......
..WARNING: webio overlaps winhttp
.WARNING: credssp overlaps mswsock
.WARNING: IPHLPAPI overlaps mswsock
.WARNING: winnsi overlaps mswsock
............
wow64cpu!CpupSyscallStub+0x9:
00000000`74e42e09 c3 ret
Any ideas on how I can get more information from the dump, or how to use it to find where my stack overflow is occurring?
The problem you are facing is that the process is 32-bit, but you are running on 64-bit Windows, so your dump is a 64-bit dump. To make use of it you have to run the following commands:
.load wow64exts
.effmach x86
!analyze -v
The last command should give you a meaningful stack trace.
This page provides lots of useful information and methods for analyzing the problem:
http://www.dumpanalysis.org/blog/index.php/2007/09/11/crash-dump-analysis-patterns-part-26/
You didn't mention whether your code is managed or unmanaged. Assuming it is unmanaged, run this in the debugger:
.symfix
.reload
~*kb
Look through the call stacks of all threads and identify the thread that caused the stack overflow. It is easy to spot, because its call stack will be extra long. Switch to that thread using the command ~<N>s, where <N> is the thread number, then dump more of its call stack using k 200 to show up to 200 frames. At the very bottom of the call stack you should be able to see the code that started the runaway recursion.
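Put together, a typical session looks roughly like this (thread 5 is only an example; use the number of the thread with the oversized stack):

$$ fix the symbol path to the Microsoft public symbol server and reload symbols
.symfix
.reload
$$ dump the call stacks of all threads and look for the extra-long one
~*kb
$$ switch to the suspect thread (example: thread 5) and show up to 200 frames
~5s
k 200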
If your code is managed, use the SOS extension to dump the call stacks.