Pacemaker does not start the jboss and pgsql (PostgreSQL) resources

I am testing Pacemaker on two servers.
Both nodes run CentOS 7 x64 with:
jdk-7u80-linux-x64
JBoss 7.1.1.Final
pgsql (PostgreSQL) 9.2.24
pcs --version
0.9.165
I set up three resources. IPaddr2 works without problems, but jboss and pgsql do not.
If I start them manually with the commands
/bin/sh /usr/lib/ocf/resource.d/heartbeat/pgsql start
/bin/sh /usr/lib/ocf/resource.d/heartbeat/jboss start
they work, but Pacemaker does not see them.
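Note that OCF resource agents read their parameters from OCF_RESKEY_* environment variables, so a bare invocation like the above runs with the agent's defaults. A minimal sketch of a manual test that mimics what Pacemaker does (the parameter values are assumptions, not taken from this cluster's config):
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_pgctl=/usr/bin/pg_ctl        # assumption: PostgreSQL 9.2 from the CentOS base repo
export OCF_RESKEY_pgdata=/var/lib/pgsql/data   # assumption: default data directory
/usr/lib/ocf/resource.d/heartbeat/pgsql start; echo "rc=$?"   # 0 = success, 6 = 'not configured'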
[root@centos-test1 heartbeat]# pcs status --all
Cluster name: test
Stack: corosync
Current DC: centos-test1 (version 1.1.19-8.el7_6.2-c3c624ea3d) - partition with quorum
Last updated: Wed Dec 26 06:58:21 2018
Last change: Wed Dec 26 06:07:27 2018 by root via cibadmin on centos-test1
2 nodes configured
3 resources configured
Online: [centos-test1 centos-test2]
Full list of resources:
virtual_ip (ocf::heartbeat:IPaddr2): Started centos-test1
jboss (ocf::heartbeat:jboss): Stopped
pgsql (ocf::heartbeat:pgsql): Stopped
Failed Actions:
* jboss_start_0 on centos-test1 'unknown error' (1): call=18, status=Timed Out, exitreason='',
    last-rc-change='Wed Dec 26 06:08:16 2018', queued=0ms, exec=20002ms
* pgsql_start_0 on centos-test1 'not configured' (6): call=15, status=complete, exitreason='',
    last-rc-change='Wed Dec 26 06:07:56 2018', queued=0ms, exec=115ms
* jboss_start_0 on centos-test2 'unknown error' (1): call=14, status=Timed Out, exitreason='',
    last-rc-change='Wed Dec 26 13:07:04 2018', queued=0ms, exec=20002ms
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
  
In the ocf::heartbeat agents there were errors with environment variables; we had to hard-code the paths in the agent files:
# Initialization:
: ${OCF_FUNCTIONS_DIR=/usr/lib/ocf/lib/heartbeat}
. /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs
#: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
#. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
corosync.log:
Dec 26 14:19:21 [27771] centos-test1 pengine: info: common_print: virtual_ip (ocf::heartbeat:IPaddr2): Started centos-test1
Dec 26 14:19:21 [27771] centos-test1 pengine: info: common_print: jboss (ocf::heartbeat:jboss): FAILED centos-test1
Dec 26 14:19:21 [27771] centos-test1 pengine: info: common_print: pgsql (ocf::heartbeat:pgsql): Stopped
Dec 26 14:19:21 [27771] centos-test1 pengine: info: pe_get_failcount: jboss has failed INFINITY times on centos-test1
Dec 26 14:19:21 [27771] centos-test1 pengine: warning: check_migration_threshold: Forcing jboss away from centos-test1 after 1000000 failures (max=1000000)
Dec 26 14:19:21 [27771] centos-test1 pengine: info: pe_get_failcount: pgsql has failed INFINITY times on centos-test1
Dec 26 14:19:21 [27771] centos-test1 pengine: warning: check_migration_threshold: Forcing pgsql away from centos-test1 after 1000000 failures (max=1000000)
Dec 26 14:19:21 [27771] centos-test1 pengine: info: pe_get_failcount: jboss has failed INFINITY times on centos-test2
Dec 26 14:19:21 [27771] centos-test1 pengine: warning: check_migration_threshold: Forcing jboss away from centos-test2 after 1000000 failures (max=1000000)
Dec 26 14:19:21 [27771] centos-test1 pengine: info: native_color: Resource jboss cannot run anywhere
Dec 26 14:19:21 [27771] centos-test1 pengine: info: native_color: Resource pgsql cannot run anywhere
Dec 26 14:19:21 [27771] centos-test1 pengine: info: LogActions: Leave virtual_ip (Started centos-test1)
Dec 26 14:19:21 [27771] centos-test1 pengine: notice: LogAction: * Stop jboss ( centos-test1 ) due to node availability
Dec 26 14:19:21 [27771] centos-test1 pengine: info: LogActions: Leave pgsql (Stopped)
Dec 26 14:19:21 [27771] centos-test1 pengine: notice: process_pe_message: Calculated transition 5, saving inputs in /var/lib/pacemaker/pengine/pe-input-266.bz2
Dec 26 14:19:21 [27772] centos-test1 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Dec 26 14:19:21 [27772] centos-test1 crmd: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1545823161-30) derived from /var/lib/pacemaker/pengine/pe-input-266.bz2
Dec 26 14:19:21 [27772] centos-test1 crmd: notice: te_rsc_command: Initiating stop operation jboss_stop_0 locally on centos-test1 | action 2
Dec 26 14:19:21 [27772] centos-test1 crmd: info: do_lrm_rsc_op: Performing key=2:5:0:19594a89-d772-4748-8c9a-5a7888a82914 op=jboss_stop_0
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/54)
Dec 26 14:19:21 [27769] centos-test1 lrmd: info: log_execute: executing - rsc:jboss action:stop call_id:18
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: Diff: --- 0.15.35 2
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: Diff: +++ 0.15.36 (null)
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: + /cib: #num_updates=36
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: + /cib/status/node_state[#id='1']/lrm[#id='1']/lrm_resources/lrm_resource[#id='jboss']/lrm_rsc_op[#id='jboss_last_0']: #operation_key=jboss_stop_0, #operation=stop, #transition-key=2:5:0:19594a89-d772-4748-8c9a-5a7888a82914, #transition-magic=-1:193;2:5:0:19594a89-d772-4748-8c9a-5a7888a82914, #call-id=-1, #rc-code=193, #op-status=-1, #last-run=1545823161, #last-rc-change=1545823161, #exec-time=0
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=centos-test1/crmd/54, version=0.15.36)
Dec 26 14:19:21 jboss(jboss)[28346]: INFO: JBoss[jboss] is already stopped.
Dec 26 14:19:21 [27769] centos-test1 lrmd: info: log_finished: finished - rsc:jboss action:stop call_id:18 pid:28346 exit-code:0 exec-time:21ms queue-time:0ms
Dec 26 14:19:21 [27772] centos-test1 crmd: notice: process_lrm_event: Result of stop operation for jboss on centos-test1: 0 (ok) | call=18 key=jboss_stop_0 confirmed=true cib-update=55
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/55)
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: Diff: --- 0.15.36 2
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: Diff: +++ 0.15.37 (null)
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: + /cib: #num_updates=37
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_perform_op: + /cib/status/node_state[#id='1']/lrm[#id='1']/lrm_resources/lrm_resource[#id='jboss']/lrm_rsc_op[#id='jboss_last_0']: #transition-magic=0:0;2:5:0:19594a89-d772-4748-8c9a-5a7888a82914, #call-id=18, #rc-code=0, #op-status=0, #exec-time=21
Dec 26 14:19:21 [27767] centos-test1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=centos-test1/crmd/55, version=0.15.37)
Dec 26 14:19:21 [27772] centos-test1 crmd: info: match_graph_event: Action jboss_stop_0 (2) confirmed on centos-test1 (rc=0)
Dec 26 14:19:21 [27772] centos-test1 crmd: notice: run_graph: Transition 5 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-266.bz2): Complete
Dec 26 14:19:21 [27772] centos-test1 crmd: info: do_log: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Dec 26 14:19:21 [27772] centos-test1 crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Dec 26 14:19:26 [27767] centos-test1 cib: info: cib_process_ping: Reporting our current digest to centos-test1: 1441d742a8ffbf1c1f45b9d38dd1a776 for 0.15.37 (0x55a05cd6c580 0)

Both resources have reached the maximum number of retries to start, so the cluster has given up on them.
crm_resource can help reset the failcount and trigger a start of the resource on a node.
You have to clean up the resources on each node to trigger a restart.
On the command line, execute "crm_resource --help" and look at the end of the output for an example of crm_resource --cleanup.
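A minimal sketch of that cleanup, using the resource names from this cluster (run once per failed resource):
# Clear the failcount and failed-operation history so the policy engine retries the start:
crm_resource --cleanup --resource jboss
crm_resource --cleanup --resource pgsql
# Equivalent with pcs:
pcs resource cleanup jboss
pcs resource cleanup pgsql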

Openliberty arquillian testing

I'm running into an issue when running an integration test with Arquillian against Open Liberty: it is timing out on the deployment. Apparently you can set appDeployTimeout, which I have done; however, it does not appear to be picked up, because the test times out at the same point.
Am I missing something? Can I set this in server.xml?
EDIT 1: Logging:
[INFO] Running za.co.nb.offermanagement.apis.resources.RecalculateOfferIT
Picked up JAVA_TOOL_OPTIONS: -Dcom.ibm.ws.logging.console.log.level=INFO -Dsystem.context.root=/weboffer
Launching defaultServer (Open Liberty 21.0.0.11/wlp-1.0.58.cl211120211019-1900) on OpenJDK 64-Bit Server VM, version 1.8.0_312-b07 (en_ZA)
[AUDIT ] CWWKE0001I: The server defaultServer has been launched.
[AUDIT ] CWWKG0093A: Processing configuration drop-ins resource: C:\Users\cc327150\dev\projects\nedbank\bpmexjee-offermanagement\WebOfferManagement\target\liberty\wlp\usr\servers\defaultServer\configDropins\overrides\liberty-plugin-variable-config.xml
[INFO ] CWWKE0002I: The kernel started after 3.386 seconds
[INFO ] CWWKF0007I: Feature update started.
[INFO ] Aries Blueprint packages not available. So namespaces will not be registered
[AUDIT ] CWWKZ0058I: Monitoring dropins for applications.
[AUDIT ] CWWKI0001I: The CORBA name server is now available at corbaloc:iiop:localhost:2809/NameService.
[WARNING ] CWWKZ0014W: The application weboffer could not be started as it could not be found at location weboffer.war.
[INFO ] CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv6) port 9080.
[AUDIT ] CWWKF0012I: The server installed the following features: [ejb-3.2, ejbHome-3.2, ejbLite-3.2, ejbPersistentTimer-3.2, ejbRemote-3.2, jaxrs-2.1, jaxrsClient-2.1, jca-1.7, jdbc-4.1, jndi-1.0, jsonp-1.1, localConnector-1.0, mdb-3.2, servlet-4.0].
[INFO ] CWWKF0008I: Feature update completed in 8.636 seconds.
[AUDIT ] CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 12.014 seconds.
[ERROR ] CWWKZ0013E: It is not possible to start two applications called weboffer.
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: Listing all apps...
[INFO ] SESN8501I: The session manager did not find a persistent storage location; HttpSession objects will be stored in the local application server's memory.
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: Size of results: 43
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryPool,name=Metaspace
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryPool,name=PS Old Gen
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=Runtime
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: osgi.core:service=permissionadmin,version=1.2,framework=org.eclipse.osgi,uuid=1e536b3a-5a70-420e-b7f2-a0d1a4d1bb51
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:name=com.ibm.websphere.config.mbeans.FeatureListMBean
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=GarbageCollector,name=PS Scavenge
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=Threading
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:name=com.ibm.websphere.runtime.update.RuntimeUpdateNotificationMBean
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryPool,name=PS Eden Space
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.nio:type=BufferPool,name=mapped
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: jdk.management.jfr:type=FlightRecorder
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:name=com.ibm.ws.jmx.mbeans.sessionManagerMBean
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryPool,name=Compressed Class Space
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:name=com.ibm.ws.config.serverSchemaGenerator
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:feature=kernel,name=ServerInfo
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryPool,name=PS Survivor Space
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.util.logging:type=Logging
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: osgi.compendium:service=cm,version=1.3,framework=org.eclipse.osgi,uuid=1e536b3a-5a70-420e-b7f2-a0d1a4d1bb51
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: osgi.core:type=bundleState,version=1.7,framework=org.eclipse.osgi,uuid=1e536b3a-5a70-420e-b7f2-a0d1a4d1bb51
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: osgi.core:type=packageState,version=1.5,framework=org.eclipse.osgi,uuid=1e536b3a-5a70-420e-b7f2-a0d1a4d1bb51
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=Compilation
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:name=com.ibm.ws.jmx.mbeans.generatePluginConfig
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=OperatingSystem
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:feature=PluginUtility,name=PluginConfigRequester
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: JMImplementation:type=MBeanServerDelegate
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryManager,name=Metaspace Manager
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=ClassLoading
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: com.sun.management:type=HotSpotDiagnostic
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:name=com.ibm.websphere.config.mbeans.ServerXMLConfigurationMBean
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryManager,name=CodeCacheManager
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:feature=ejbPersistentTimer,type=EJBPersistentTimerService,name=EJBPersistentTimerService
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=MemoryPool,name=Code Cache
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: osgi.core:type=framework,version=1.7,framework=org.eclipse.osgi,uuid=1e536b3a-5a70-420e-b7f2-a0d1a4d1bb51
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:feature=persistence,type=DDLGenerationMBean,name=DDLGenerationMBean
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.nio:type=BufferPool,name=direct
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: osgi.core:type=serviceState,version=1.7,framework=org.eclipse.osgi,uuid=1e536b3a-5a70-420e-b7f2-a0d1a4d1bb51
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=weboffer
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:feature=kernel,name=ServerEndpointControl
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=GarbageCollector,name=PS MarkSweep
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: com.sun.management:type=DiagnosticCommand
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: WebSphere:service=com.ibm.ws.kernel.filemonitor.FileNotificationMBean
Nov 24, 2021 2:30:17 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: java.lang:type=Memory
Nov 24, 2021 2:30:20 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: Listing all apps...
Nov 24, 2021 2:30:20 PM io.openliberty.arquillian.managed.WLPManagedContainer logAllApps
INFO: Size of results: 43
[... the same 43 MBean names as in the 2:30:17 listing are repeated here ...]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 43.261 s <<< FAILURE! - in za.co.nb.offermanagement.apis.resources.RecalculateOfferIT
[ERROR] za.co.nb.offermanagement.apis.resources.RecalculateOfferIT Time elapsed: 43.257 s <<< ERROR!
org.jboss.arquillian.container.spi.client.container.DeploymentException: Timeout while waiting for "weboffer" ApplicationMBean to reach STARTED. Actual state: STOPPED.
Stopping server defaultServer.
[AUDIT ] CWWKE0055I: Server shutdown requested on Wednesday 24 November 2021 at 2:30 PM. The server defaultServer is shutting down.
[AUDIT ] CWWKE1100I: Waiting for up to 30 seconds for the server to quiesce.
[INFO ] CWWKO0220I: TCP Channel defaultHttpEndpoint has stopped listening for requests on host * (IPv6) port 9080.
[INFO ] CWWKE1101I: Server quiesce complete.
[AUDIT ] CWWKI0002I: The CORBA name server is no longer available at corbaloc:iiop:localhost:2809/NameService.
[AUDIT ] CWWKE0036I: The server defaultServer stopped after 43.785 seconds.
Server defaultServer stopped.
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] RecalculateOfferIT » Deployment Timeout while waiting for "weboffer" Applicati...
[INFO]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
[INFO]
EDIT 2: messages.log:
********************************************************************************
product = Open Liberty 21.0.0.11 (wlp-1.0.58.cl211120211019-1900)
wlp.install.dir = C:/Users/cc327150/dev/projects/nedbank/bpmexjee-offermanagement/WebOfferManagement/target/liberty/wlp/
java.home = C:\Users\cc327150\dev\tools\jdk8u312\jre
java.version = 1.8.0_312
java.runtime = OpenJDK Runtime Environment (1.8.0_312-b07)
os = Windows 10 (10.0; amd64) (en_ZA)
process = 18348#V105P10PRA4704
********************************************************************************
[2021/11/25 15:29:37:212 CAT] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server defaultServer has been launched.
[2021/11/25 15:30:01:917 CAT] 00000013 com.ibm.ws.config.xml.internal.ServerXMLConfiguration A CWWKG0093A: Processing configuration drop-ins resource: C:\Users\cc327150\dev\projects\nedbank\bpmexjee-offermanagement\WebOfferManagement\target\liberty\wlp\usr\servers\defaultServer\configDropins\overrides\liberty-plugin-variable-config.xml
[2021/11/25 15:30:02:520 CAT] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 26.211 seconds
[2021/11/25 15:30:03:573 CAT] 0000001c com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[2021/11/25 15:30:12:061 CAT] 00000014 .apache.cxf.cxf.core.3.2:1.0.58.cl211120211019-1900(id=149)] I Aries Blueprint packages not available. So namespaces will not be registered
[2021/11/25 15:30:12:781 CAT] 00000013 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
[2021/11/25 15:30:14:909 CAT] 00000014 com.ibm.ws.transport.iiop.internal.ORBWrapperInternal A CWWKI0001I: The CORBA name server is now available at corbaloc:iiop:localhost:2809/NameService.
[2021/11/25 15:30:16:598 CAT] 0000001f com.ibm.ws.app.manager.AppMessageHelper W CWWKZ0014W: The application weboffer could not be started as it could not be found at location weboffer.war.
[2021/11/25 15:30:16:614 CAT] 0000001c com.ibm.ws.tcpchannel.internal.TCPPort I CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv6) port 9080.
[2021/11/25 15:30:16:786 CAT] 0000001c com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0012I: The server installed the following features: [ejb-3.2, ejbHome-3.2, ejbLite-3.2, ejbPersistentTimer-3.2, ejbRemote-3.2, jaxrs-2.1, jaxrsClient-2.1, jca-1.7, jdbc-4.1, jndi-1.0, jsonp-1.1, localConnector-1.0, mdb-3.2, servlet-4.0].
[2021/11/25 15:30:16:786 CAT] 0000001c com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0008I: Feature update completed in 14.273 seconds.
[2021/11/25 15:30:16:786 CAT] 0000001c com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 40.472 seconds.
[2021/11/25 15:30:17:177 CAT] 0000000f com.ibm.ws.kernel.launch.internal.ServerCommandListener A CWWKE0055I: Server shutdown requested on Thursday 25 November 2021 at 3:30 PM. The server defaultServer is shutting down.
[2021/11/25 15:30:17:724 CAT] 00000029 com.ibm.ws.runtime.update.internal.RuntimeUpdateManagerImpl A CWWKE1100I: Waiting for up to 30 seconds for the server to quiesce.
[2021/11/25 15:30:17:756 CAT] 0000001e com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0220I: TCP Channel defaultHttpEndpoint has stopped listening for requests on host * (IPv6) port 9080.
[2021/11/25 15:30:17:776 CAT] 00000029 com.ibm.ws.runtime.update.internal.RuntimeUpdateManagerImpl I CWWKE1101I: Server quiesce complete.
[2021/11/25 15:30:17:931 CAT] 00000029 com.ibm.ws.transport.iiop.internal.ORBWrapperInternal A CWWKI0002I: The CORBA name server is no longer available at corbaloc:iiop:localhost:2809/NameService.
[2021/11/25 15:30:20:081 CAT] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0036I: The server defaultServer stopped after 43.771 seconds.
EDIT 3: server.xml:
<server description="Sample Liberty server">
    <featureManager>
        <feature>jaxrs-2.1</feature>
        <feature>jsonp-1.1</feature>
        <feature>localConnector-1.0</feature>
        <feature>servlet-4.0</feature>
        <feature>ejb-3.2</feature>
    </featureManager>
    <variable name="default.http.port" defaultValue="9080"/>
    <variable name="default.https.port" defaultValue="9443"/>
    <webApplication location="weboffer.war" contextRoot="/" />
    <applicationManager startTimeout="200s" />
    <applicationMonitor updateTrigger="mbean" />
    <httpEndpoint host="*" httpPort="${default.http.port}"
                  httpsPort="${default.https.port}" id="defaultHttpEndpoint"/>
</server>
Which Jakarta EE version are you using?
I have created some templates and projects for Jakarta EE 8 and 9 over the past years.
Check the Jakarta EE 8 template project and the Jakarta EE 9 template project; both include the Arquillian Open Liberty managed and remote adapter configuration. Be sure to read the related docs for details; links are provided in the readme of these projects.
For the arquillian.xml configuration, check the Jakarta EE 9 readme:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="http://jboss.org/schema/arquillian"
            xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <engine>
        <property name="deploymentExportPath">target/</property>
    </engine>
    <container qualifier="liberty-remote">
        <configuration>
            <property name="hostName">localhost</property>
            <property name="serverName">defaultServer</property>
            <property name="username">admin</property>
            <property name="password">admin</property>
            <property name="httpPort">9080</property>
            <property name="httpsPort">9443</property>
            <!-- change appDeployTimeout here -->
            <property name="appDeployTimeout">120</property>
        </configuration>
    </container>
</arquillian>
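As a quick sanity check (a sketch assuming the standard Maven layout; the failsafe test selector is only an example), arquillian.xml has to end up on the test classpath for the adapter to pick up appDeployTimeout:
ls src/test/resources/arquillian.xml
mvn verify -Dit.test=RecalculateOfferIT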

Prevent OOM inside container

I'm running conda install -c anaconda pycodestyle in a container with this spec:
apiVersion: v1
kind: Pod
metadata:
  name: conda-${PYTHON_VERSION}
spec:
  securityContext:
    runAsUser: 0
    runAsGroup: 0
  containers:
    - name: python
      image: continuumio/conda-ci-linux-64-python${PYTHON_VERSION}
      command:
        - /bin/bash
      args:
        - "-c"
        - "sleep 99d"
      workingDir: /home/jenkins/agent
      resources:
        requests:
          memory: "256Mi"
          cpu: "1"
        limits:
          memory: "256Mi"
          cpu: "1"
My understanding was that if I set limits and requests to the same value, the OOM killer would not be invoked... it seems I was wrong.
To make myself clear: I don't want overprovisioning to happen at all; I don't want the kernel to panic because it handed out memory it cannot back with real memory.
Ideally, I want to understand how to prevent these errors from happening in Kubernetes at all, not specifically for conda, but if there's any way to limit conda itself to a particular amount of memory, that would help too.
The machine running these containers has 16 GB of memory and would, at most, run three of those pods.
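As for limiting conda itself, one process-level option (a sketch, outside Kubernetes; the 200000 KiB cap is an arbitrary assumption) is an address-space ulimit, which makes allocations fail inside Python with a MemoryError instead of invoking the kernel OOM killer:
bash -c 'ulimit -v 200000; conda install -c anaconda pycodestyle'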
The OOM message looks like this:
[Wed Jul 21 14:07:03 2021] Task in /kubepods/burstable/poda6df66b5-bfc5-43be-b02d-66f09e7ecf0f/2203670eb25d83d72428831a35773b90445f19ee37c117f196d6774442022db8 killed as a result of limit of /kubepods/burstable/poda6df66b5-bfc5-43be-b02d-66f09e7ecf0f/2203670eb25d83d72428831a35773b90445f19ee37c117f196d6774442022db8
[Wed Jul 21 14:07:03 2021] memory: usage 262144kB, limit 262144kB, failcnt 17168
[Wed Jul 21 14:07:03 2021] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[Wed Jul 21 14:07:03 2021] kmem: usage 11128kB, limit 9007199254740988kB, failcnt 0
[Wed Jul 21 14:07:03 2021] Memory cgroup stats for /kubepods/burstable/poda6df66b5-bfc5-43be-b02d-66f09e7ecf0f/2203670eb25d83d72428831a35773b90445f19ee37c117f196d6774442022db8: cache:104KB rss:250524KB rss_huge:0KB shmem:0KB mapped_file:660KB dirty:0KB writeback:0KB inactive_anon:125500KB active_anon:125496KB inactive_file:8KB active_file:12KB unevictable:0KB
[Wed Jul 21 14:07:03 2021] Tasks state (memory values in pages):
[Wed Jul 21 14:07:03 2021] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[Wed Jul 21 14:07:03 2021] [3659435] 0 3659435 1012 169 40960 22 984 sleep
[Wed Jul 21 14:07:03 2021] [3660809] 0 3660809 597 155 45056 26 984 sh
[Wed Jul 21 14:07:03 2021] [3660823] 0 3660823 597 170 40960 18 984 sh
[Wed Jul 21 14:07:03 2021] [3660824] 0 3660824 597 14 40960 9 984 sh
[Wed Jul 21 14:07:03 2021] [3660825] 0 3660825 597 170 45056 23 984 sh
[Wed Jul 21 14:07:03 2021] [3660827] 0 3660827 162644 44560 753664 38159 984 conda
[Wed Jul 21 14:07:03 2021] [3660890] 0 3660890 1012 169 49152 22 984 sleep
[Wed Jul 21 14:07:03 2021] Memory cgroup out of memory: Kill process 3660827 (conda) score 1123 or sacrifice child
[Wed Jul 21 14:07:03 2021] Killed process 3660827 (conda) total-vm:650576kB, anon-rss:165968kB, file-rss:12272kB, shmem-rss:0kB
[Wed Jul 21 14:07:03 2021] oom_reaper: reaped process 3660827 (conda), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
I also don't like the word "burstable" here... I thought this pod was supposed to be "guaranteed".
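For reference, Kubernetes assigns the Guaranteed QoS class only when every container in the pod (including any injected sidecars, e.g. a Jenkins jnlp agent) has requests equal to limits for both cpu and memory; otherwise the pod is Burstable. A quick way to check what was actually assigned (the pod name is an assumption):
kubectl get pod conda-3.9 -o jsonpath='{.status.qosClass}'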

I can't receive any email when email forwarding is enabled

If I enable email forwarding, I can't receive any email; when I disable email forwarding, webmail works fine. Can anyone please look into the email logs below? I don't see any sending or receiving problems when email forwarding is disabled on CyberPanel.
This problem only appears after enabling email forwarding.
Mar 29 07:14:01 blastoff postfix/bounce[2809]: 3DC8B3410DD: sender non-delivery notification: C9CED3410DE
Mar 29 07:14:01 blastoff postfix/qmgr[1082]: 3DC8B3410DD: removed
Mar 29 07:14:01 blastoff postfix/smtp[2934]: connect to gmail-smtp-in.l.google.com[2607:f8b0:4023:c03::1b]:25: Network is unreachable
Mar 29 07:14:02 blastoff postfix/smtp[2934]: C9CED3410DE: to=<riadloud@gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.137.26]:25, delay=0.86, delays=0.01/0.07/0.36/0.42, dsn=2.0.0, status=sent (250 2.0.0 OK 1617002042 i21si16857578otj.220 - gsmtp)
Mar 29 07:14:02 blastoff postfix/qmgr[1082]: C9CED3410DE: removed
Mar 29 07:14:32 blastoff dovecot: imap-login: Login: user=<riad@blastoff.us>, method=PLAIN, rip=::1, lip=::1, mpid=2967, TLS, session=<QfcVoqe+6pUAAAAAAAAAAAAAAAAAAAAB>
Mar 29 07:14:32 blastoff dovecot: imap(riad@blastoff.us)<2967><QfcVoqe+6pUAAAAAAAAAAAAAAAAAAAAB>: Logged out in=89 out=1045 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0
Mar 29 07:16:32 blastoff dovecot: imap-login: Login: user=<riad@blastoff.us>, method=PLAIN, rip=::1, lip=::1, mpid=3043, TLS, session=<b41Aqae+7JUAAAAAAAAAAAAAAAAAAAAB>
Mar 29 07:16:32 blastoff dovecot: imap(riad@blastoff.us)<3043><b41Aqae+7JUAAAAAAAAAAAAAAAAAAAAB>: Logged out in=89 out=1045 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0
Mar 29 07:17:18 blastoff postfix/anvil[2916]: statistics: max connection rate 1/60s for (smtp:209.85.217.53) at Mar 29 07:13:56
Mar 29 07:17:18 blastoff postfix/anvil[2916]: statistics: max connection count 1 for (smtp:209.85.217.53) at Mar 29 07:13:56
Mar 29 07:17:18 blastoff postfix/anvil[2916]: statistics: max cache size 1 at Mar 29 07:13:56
Mar 29 07:17:24 blastoff dovecot: imap-login: Login: user=<riad@blastoff.us>, method=PLAIN, rip=::1, lip=::1, mpid=3087, TLS, session=<3BlfrKe+7pUAAAAAAAAAAAAAAAAAAAAB>
Mar 29 07:17:24 blastoff dovecot: imap(riad@blastoff.us)<3087><3BlfrKe+7pUAAAAAAAAAAAAAAAAAAAAB>: Logged out in=240 out=1300 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0
Mar 29 07:18:17 blastoff spamd[3146]: logger: removing stderr method
Mar 29 07:18:17 blastoff spamd[3148]: config: no rules were found! Do you need to run 'sa-update'?
Mar 29 07:18:18 blastoff spamd[3146]: child process [3148] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 3034.
Mar 29 07:18:19 blastoff spamd[3150]: logger: removing stderr method
Mar 29 07:18:19 blastoff spamd[3152]: config: no rules were found! Do you need to run 'sa-update'?
Mar 29 07:18:20 blastoff spamd[3150]: child process [3152] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 3034.
Mar 29 07:18:21 blastoff spamd[3156]: logger: removing stderr method
Mar 29 07:18:21 blastoff spamd[3158]: config: no rules were found! Do you need to run 'sa-update'?
Mar 29 07:18:22 blastoff spamd[3156]: child process [3158] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 3034.
Mar 29 07:18:23 blastoff spamd[3159]: logger: removing stderr method
Mar 29 07:18:23 blastoff spamd[3161]: config: no rules were found! Do you need to run 'sa-update'?
Mar 29 07:18:24 blastoff spamd[3159]: child process [3161] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 3034.
Mar 29 07:18:24 blastoff spamd[3162]: logger: removing stderr method
Mar 29 07:18:24 blastoff spamd[3164]: config: no rules were found! Do you need to run 'sa-update'?
Mar 29 07:18:25 blastoff spamd[3162]: child process [3164] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 3034.
Mar 29 07:18:32 blastoff dovecot: imap-login: Login: user=<riad@blastoff.us>, method=PLAIN, rip=::1, lip=::1, mpid=3176, TLS, session=<qhxjsKe+8JUAAAAAAAAAAAAAAAAAAAAB>
Mar 29 07:18:32 blastoff dovecot: imap(riad@blastoff.us)<3176><qhxjsKe+8JUAAAAAAAAAAAAAAAAAAAAB>: Logged out in=89 out=1053 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0
Mar 29 07:19:15 blastoff postfix/smtpd[3197]: connect from mail-vs1-f45.google.com[209.85.217.45]
Mar 29 07:19:15 blastoff postfix/smtpd[3197]: 9043434088F: client=mail-vs1-f45.google.com[209.85.217.45]
Mar 29 07:19:15 blastoff postfix/smtpd[3197]: warning: connect to /var/log/policyServerSocket: No such file or directory
Mar 29 07:19:16 blastoff postfix/smtpd[3197]: warning: connect to /var/log/policyServerSocket: No such file or directory
Mar 29 07:19:16 blastoff postfix/smtpd[3197]: warning: problem talking to server /var/log/policyServerSocket: No such file or directory
Mar 29 07:19:16 blastoff postfix/cleanup[3201]: 9043434088F: hold: header Received: from mail-vs1-f45.google.com (mail-vs1-f45.google.com [209.85.217.45])??by mail.blastoff.us (Postfix) with ESMTPS id 9043434088F??for <riad@blastoff.us>; Mon, 29 Mar 2021 07:19:15 +0000 (UTC from mail-vs1-f45.google.com[209.85.217.45]; from=<riadloud@gmail.com> to=<riad@blastoff.us> proto=ESMTP helo=<mail-vs1-f45.google.com>
Mar 29 07:19:16 blastoff postfix/cleanup[3201]: 9043434088F: message-id=<CACGWsS=QumtoJMTYX49XNFv7Kbk_-+xhJ4TrZdFezAytvToTow@mail.gmail.com>
Mar 29 07:19:16 blastoff opendkim[920]: 9043434088F: s=20161025 d=gmail.com SSL
Mar 29 07:19:16 blastoff postfix/smtpd[3197]: disconnect from mail-vs1-f45.google.com[209.85.217.45] ehlo=2 starttls=1 mail=1 rcpt=1 data=1 quit=1 commands=7
Mar 29 07:19:17 blastoff postfix/qmgr[1082]: 07B043410DD: from=<riadloud@gmail.com>, size=2541, nrcpt=2 (queue active)
Mar 29 07:19:19 blastoff postfix/pipe[3212]: 07B043410DD: to=<riad@blastoff.us>, relay=spamassassin, delay=3.7, delays=1.6/0.01/0/2, dsn=5.3.0, status=bounced (command line usage error. Command output: lda: Fatal: Unknown argument: unix Usage: dovecot-lda [-c <config file>] [-d <username>] [-p <path>] [-m <mailbox>] [-e] [-k] [-f <envelope sender>] [-a <original envelope recipient>] [-r <final envelope recipient>] )
Mar 29 07:19:19 blastoff postfix/pipe[3213]: 07B043410DD: to=<riadshout@gmail.com>, orig_to=<riad@blastoff.us>, relay=spamassassin, delay=3.7, delays=1.6/0.02/0/2, dsn=5.3.0, status=bounced (command line usage error. Command output: lda: Fatal: Unknown argument: unix Usage: dovecot-lda [-c <config file>] [-d <username>] [-p <path>] [-m <mailbox>] [-e] [-k] [-f <envelope sender>] [-a <original envelope recipient>] [-r <final envelope recipient>] )
Mar 29 07:19:19 blastoff postfix/cleanup[3201]: 3AEDA3410DE: message-id=<20210329071919.3AEDA3410DE@mail.blastoff.us>
Mar 29 07:19:19 blastoff postfix/bounce[3217]: 07B043410DD: sender non-delivery notification: 3AEDA3410DE
Mar 29 07:19:19 blastoff postfix/qmgr[1082]: 3AEDA3410DE: from=<>, size=6095, nrcpt=1 (queue active)
Mar 29 07:19:19 blastoff postfix/qmgr[1082]: 07B043410DD: removed
Mar 29 07:19:19 blastoff postfix/smtp[3220]: connect to gmail-smtp-in.l.google.com[2607:f8b0:4023:c03::1a]:25: Network is unreachable
Mar 29 07:19:19 blastoff postfix/smtp[3220]: 3AEDA3410DE: to=<riadloud@gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.137.26]:25, delay=0.67, delays=0/0.02/0.29/0.35, dsn=2.0.0, status=sent (250 2.0.0 OK 1617002359 g9si17981450plj.221 - gsmtp)
Mar 29 07:19:19 blastoff postfix/qmgr[1082]: 3AEDA3410DE: removed
There are many configuration issues here.
The one that is causing the bounce is:
command line usage error. Command output: lda: Fatal: Unknown argument: unix Usage: dovecot-lda [-c <config file>] [-d <username>] [-p <path>]
But there are also SpamAssassin issues (no rules were found) and policy-service issues (/var/log/policyServerSocket is missing).
It doesn't look like a forwarding issue but like a general misconfiguration.
It is difficult to give precise advice here; you should review the whole configuration with someone familiar with this stack.
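As a starting point for the dovecot-lda error, the broken invocation lives in the Postfix master.cf entry for the spamassassin transport; a sketch of how to inspect it (the service name is taken from relay=spamassassin in the log):
# Print the master.cf definition of the spamassassin pipe service:
postconf -Mf spamassassin/unix
# Compare its argv= against the dovecot-lda usage message in the bounce.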

code-oss (Code - OSS) on FreeBSD-CURRENT: renderer process crashing, the application can no longer open my workspace

FreeBSD editors/vscode recently began crashing for me at startup.
I removed and reinstalled the package; no improvement.
I wondered whether removing rapid_render.json would work around the issue; it did not.
Another user of the system can start the application without crashing.
Please: how might I resolve the issue?
grahamperrin@mowa219-gjp4-8570p:~ % less ~/.config/Code\ -\ OSS/Backups/workspaces.json
{"rootURIWorkspaces":[],"folderURIWorkspaces":[],"emptyWorkspaceInfos":[{"backupFolder":"1579922206882"}],"emptyWorkspaces":["1579922206882"]}
grahamperrin@mowa219-gjp4-8570p:~ % less ~/.config/Code\ -\ OSS/logs/20201217T074443/main.log
[2020-12-17 07:44:44.168] [main] [info] update#ctor - updates are disabled as there is no update URL
[2020-12-17 07:44:48.938] [main] [error] [VS Code]: renderer process crashed!
[2020-12-17 07:44:59.171] [main] [error] [VS Code]: renderer process crashed!
[2020-12-17 07:45:09.432] [main] [error] [VS Code]: renderer process crashed!
grahamperrin@mowa219-gjp4-8570p:~ % ls -ahlrt ~/.config/Code\ -\ OSS/
total 174
drwx------ 3 grahamperrin grahamperrin 3B 26 Dec 2019 Code Cache
-rw-r--r-- 1 grahamperrin grahamperrin 36B 26 Dec 2019 machineid
drwxr-xr-x 5 grahamperrin grahamperrin 6B 2 Jan 2020 User
drwxr-xr-x 2 grahamperrin grahamperrin 2B 25 Jan 2020 Workspaces
drwxr-xr-x 3 grahamperrin grahamperrin 3B 4 Jul 06:00 clp
drwx------ 2 grahamperrin grahamperrin 7B 16 Jul 10:57 GPUCache
-rw------- 1 grahamperrin grahamperrin 0B 26 Sep 09:08 .org.chromium.Chromium.uev8QR
drwxr-xr-x 3 grahamperrin grahamperrin 3B 26 Sep 17:24 CachedData
drwxr-xr-x 3 grahamperrin grahamperrin 4B 20 Nov 17:57 Backups
drwxr-xr-x 2 grahamperrin grahamperrin 4B 6 Dec 19:43 CachedExtensions
drwx------ 3 grahamperrin grahamperrin 284B 14 Dec 11:17 Cache
-rw------- 1 grahamperrin grahamperrin 2.0K 16 Dec 05:56 TransportSecurity
-rw------- 1 grahamperrin grahamperrin 20K 16 Dec 05:57 Cookies
-rw------- 1 grahamperrin grahamperrin 0B 16 Dec 05:57 Cookies-journal
-rw-r--r-- 1 grahamperrin grahamperrin 446B 16 Dec 05:58 rapid_render.json
-rw------- 1 grahamperrin grahamperrin 2.8K 16 Dec 05:58 .org.chromium.Chromium.eOeolK
-rw------- 1 grahamperrin grahamperrin 2.8K 16 Dec 14:06 .org.chromium.Chromium.a8khF7
-rw------- 1 grahamperrin grahamperrin 0B 16 Dec 14:20 .org.chromium.Chromium.fuVPBw
-rw------- 1 grahamperrin grahamperrin 2.8K 16 Dec 20:47 .org.chromium.Chromium.ICOK4n
-rw------- 1 grahamperrin grahamperrin 2.8K 16 Dec 21:56 .org.chromium.Chromium.21Y7f0
-rw------- 1 grahamperrin grahamperrin 2.8K 17 Dec 05:00 .org.chromium.Chromium.Ro52fi
-rw------- 1 grahamperrin grahamperrin 2.8K 17 Dec 05:29 .org.chromium.Chromium.J1dZMh
-rw------- 1 grahamperrin grahamperrin 0B 17 Dec 06:35 .org.chromium.Chromium.P3gDsE
-rw-r--r-- 1 grahamperrin grahamperrin 75K 17 Dec 06:42 storage.json
-rw------- 1 grahamperrin grahamperrin 0B 17 Dec 06:42 .org.chromium.Chromium.bQdorG
-rw------- 1 grahamperrin grahamperrin 2.8K 17 Dec 06:47 Network Persistent State
drwx------ 2 grahamperrin grahamperrin 8B 17 Dec 06:47 Session Storage
-rw------- 1 grahamperrin grahamperrin 0B 17 Dec 07:32 .org.chromium.Chromium.AQtB07
-rw------- 1 grahamperrin grahamperrin 0B 17 Dec 07:39 .org.chromium.Chromium.FT0hEj
drwx------ 3 grahamperrin grahamperrin 3B 17 Dec 07:44 blob_storage
-rw-r--r-- 1 grahamperrin grahamperrin 12K 17 Dec 07:44 languagepacks.json
drwxr-xr-x 12 grahamperrin grahamperrin 12B 17 Dec 07:44 logs
-rw------- 1 grahamperrin grahamperrin 0B 17 Dec 07:45 .org.chromium.Chromium.KK5qMG
drwx------ 14 grahamperrin grahamperrin 35B 17 Dec 07:45 .
drwxr-xr-x 100 grahamperrin grahamperrin 264B 17 Dec 08:11 ..
grahamperrin@mowa219-gjp4-8570p:~ % ls -ahlrRt ~/.config/Code\ -\ OSS/Backups/
total 10
drwxr-xr-x 3 grahamperrin grahamperrin 4B 20 Nov 17:57 .
drwxr-xr-x 4 grahamperrin grahamperrin 4B 28 Nov 06:29 1579922206882
-rw-r--r-- 1 grahamperrin grahamperrin 142B 17 Dec 07:44 workspaces.json
drwx------ 14 grahamperrin grahamperrin 35B 17 Dec 07:45 ..
/home/grahamperrin/.config/Code - OSS/Backups/1579922206882:
total 10
drwxr-xr-x 3 grahamperrin grahamperrin 4B 20 Nov 17:57 ..
drwxr-xr-x 4 grahamperrin grahamperrin 4B 28 Nov 06:29 .
drwxr-xr-x 2 grahamperrin grahamperrin 2B 10 Dec 15:14 file
drwxr-xr-x 2 grahamperrin grahamperrin 14B 16 Dec 05:56 untitled
/home/grahamperrin/.config/Code - OSS/Backups/1579922206882/file:
total 1
drwxr-xr-x 4 grahamperrin grahamperrin 4B 28 Nov 06:29 ..
drwxr-xr-x 2 grahamperrin grahamperrin 2B 10 Dec 15:14 .
/home/grahamperrin/.config/Code - OSS/Backups/1579922206882/untitled:
total 63
drwxr-xr-x 4 grahamperrin grahamperrin 4B 28 Nov 06:29 ..
-rw-r--r-- 1 grahamperrin grahamperrin 551B 16 Dec 05:56 b2bd717a77da570a5c596af6934cadc7
-rw-r--r-- 1 grahamperrin grahamperrin 652B 16 Dec 05:56 0ea542ac1d82a4ad63b68365c0270c53
-rw-r--r-- 1 grahamperrin grahamperrin 1.6K 16 Dec 05:56 109fbbd2da4537c9ab3475d44131d9f8
-rw-r--r-- 1 grahamperrin grahamperrin 2.5K 16 Dec 05:56 2f0c80a5829bd778936522620f8dc240
-rw-r--r-- 1 grahamperrin grahamperrin 317B 16 Dec 05:56 387795c86765346eca0c041bac00348b
-rw-r--r-- 1 grahamperrin grahamperrin 902B 16 Dec 05:56 3e42341b68b5e3d2ec3af201cdb461a0
-rw-r--r-- 1 grahamperrin grahamperrin 242B 16 Dec 05:56 5a4df22f62baaaa5684aacc5372f2b14
-rw-r--r-- 1 grahamperrin grahamperrin 115B 16 Dec 05:56 8526d8318dcbce336eae5b633e7f2b20
-rw-r--r-- 1 grahamperrin grahamperrin 4.4K 16 Dec 05:56 85a25ec2bf655a740ef43253dcde2851
-rw-r--r-- 1 grahamperrin grahamperrin 538B 16 Dec 05:56 bba55dec34aadf10f7d0655859dd3ade
-rw-r--r-- 1 grahamperrin grahamperrin 238B 16 Dec 05:56 d45b5ea50824ae45a6f3cae14bb85e07
-rw-r--r-- 1 grahamperrin grahamperrin 184B 16 Dec 05:56 e5e5e2d9b68c3afbc119011b57046d5a
drwxr-xr-x 2 grahamperrin grahamperrin 14B 16 Dec 05:56 .
grahamperrin@mowa219-gjp4-8570p:~ % less ~/.config/Code\ -\ OSS/Backups/1579922206882/untitled/b2bd717a77da570a5c596af6934cadc7
untitled:Untitled-4
net user Administrator | find /i "Password last set"
runas /user:Administrator powershell
Start-Process powershell -Verb runAs
cd "c:\Windows\Downloaded Program Files\" ; date ; whoami ; query user ; wget https://extranet.brighton.ac.uk/public/download/BIGIPComponentInstaller.msi -OutFile BIGIPComponentInstaller.msi ; wget https://extranet.brighton.ac.uk/public/download/f5vpn_setup.exe -OutFile f5vpn_setup.exe ; dir . | sort LastWriteTime | Out-Default ; winver ; .\BIGIPComponentInstaller.msi ; .\f5vpn_setup.exe ; cd ~
grahamperrin@mowa219-gjp4-8570p:~ % less ~/.config/Code\ -\ OSS/rapid_render.json
{"id":"monaco-parts-splash","colorInfo":{"foreground":"#cccccc","editorBackground":"#1e1e1e","titleBarBackground":"#3c3c3c","activityBarBackground":"#333333","sideBarBackground":"#252526","statusBarBackground":"#007acc","statusBarNoFolderBackground":"#68217a"},"layoutInfo":{"sideBarSide":"left","editorPartMinWidth":220,"titleBarHeight":0,"activityBarWidth":48,"sideBarWidth":170,"statusBarHeight":22,"windowBorder":false},"baseTheme":"vs-dark"}
grahamperrin@mowa219-gjp4-8570p:~ % rm ~/.config/Code\ -\ OSS/rapid_render.json
grahamperrin@mowa219-gjp4-8570p:~ % code-oss --verbose
[main 2020-12-17T08:17:09.207Z] Starting VS Code
[main 2020-12-17T08:17:09.224Z] from: /usr/local/share/code-oss/resources/app
[main 2020-12-17T08:17:09.225Z] args: {
_: [],
diff: false,
add: false,
goto: false,
'new-window': false,
'reuse-window': false,
wait: false,
help: false,
'list-extensions': false,
'show-versions': false,
version: false,
verbose: true,
status: false,
'prof-startup': false,
'disable-extensions': false,
'disable-gpu': false,
telemetry: false,
logExtensionHostCommunication: false,
'skip-release-notes': false,
'disable-restore-windows': false,
'disable-telemetry': false,
'disable-updates': false,
'disable-crash-reporter': false,
'disable-user-env-probe': false,
'skip-add-to-recently-opened': false,
'unity-launch': false,
'open-url': false,
'file-write': false,
'file-chmod': false,
'driver-verbose': false,
force: false,
'do-not-sync': false,
trace: false,
'force-user-env': false,
'no-proxy-server': false,
nolazy: false,
'force-renderer-accessibility': false,
'ignore-certificate-errors': false,
'allow-insecure-localhost': false
}
[main 2020-12-17T08:17:09.230Z] Resolving machine identifier...
[main 2020-12-17T08:17:09.231Z] Resolved machine identifier: 76d5dcb36bedd2b6a2ae2706b11c68da607ea2bce16973ed535e6bfdec09baac (trueMachineId: undefined)
[main 2020-12-17T08:17:09.659Z] [storage state.vscdb] open(/home/grahamperrin/.config/Code - OSS/User/globalStorage/state.vscdb, retryOnBusy: true)
[main 2020-12-17T08:17:09.662Z] lifecycle (main): phase changed (value: 2)
[main 2020-12-17T08:17:09.664Z] windowsManager#open
[main 2020-12-17T08:17:09.667Z] window#validateWindowState: validating window state on 2 display(s) { mode: 0, x: 0, y: 0, width: 1133, height: 510 }
[main 2020-12-17T08:17:09.668Z] window#validateWindowState: multi-monitor working area { x: 0, y: 0, width: 1920, height: 1080 }
[main 2020-12-17T08:17:09.669Z] window#ctor: using window state { mode: 0, x: 0, y: 0, width: 1133, height: 510 }
[main 2020-12-17T08:17:10.515Z] lifecycle (main): phase changed (value: 3)
[main 2020-12-17T08:17:10.516Z] update#ctor - updates are disabled as there is no update URL
[6407:1217/081711.075297:ERROR:buffer_manager.cc(488)] [.DisplayCompositor]GL ERROR :GL_INVALID_OPERATION : glBufferData: <- error from previous GL command
[main 2020-12-17T08:17:11.808Z] [storage state.vscdb] Trace (event): SELECT * FROM ItemTable
[main 2020-12-17T08:17:11.810Z] [storage state.vscdb] getItems(): 41 rows
[main 2020-12-17T08:17:11.913Z] [storage state.vscdb] updateItems(): insert(Map(3) {storage.serviceMachineId => 735a3a8a-3134-4ebb-abad-e6b9359a2727, telemetry.lastSessionDate => Thu, 17 Dec 2020 07:44:45 GMT, telemetry.currentSessionDate => Thu, 17 Dec 2020 08:17:11 GMT}), delete(Set(0) {})
[main 2020-12-17T08:17:11.914Z] [storage state.vscdb] Trace (event): BEGIN TRANSACTION
[main 2020-12-17T08:17:11.915Z] [storage state.vscdb] Trace (event): INSERT INTO ItemTable VALUES ('storage.serviceMachineId','735a3a8a-3134-4ebb-abad-e6b9359a2727'),('telemetry.lastSessionDate','Thu, 17 Dec 2020 07:44:45 GMT'),('telemetry.currentSessionDate','Thu, 17 Dec 2020 08:17:11 GMT')
[main 2020-12-17T08:17:11.916Z] [storage state.vscdb] Trace (event): END TRANSACTION
[main 2020-12-17T08:17:13.521Z] getShellEnvironment: running on CLI, skipping
[6407:1217/081713.710517:ERROR:buffer_manager.cc(488)] [.DisplayCompositor]GL ERROR :GL_INVALID_OPERATION : glBufferData: <- error from previous GL command
(node:7307) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:7307) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:7307) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:7307) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
[main 2020-12-17T08:17:17.184Z] Shared process: IPC ready
(node:9633) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:9633) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
[main 2020-12-17T08:17:17.813Z] Shared process: init ready
[main 2020-12-17T08:17:20.956Z] [VS Code]: renderer process crashed!
(node:9971) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:9971) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:9971) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:9971) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
[main 2020-12-17T08:17:28.334Z] [VS Code]: renderer process crashed!
[main 2020-12-17T08:17:29.665Z] Lifecycle#window.on('closed') - window ID 1
[main 2020-12-17T08:17:29.666Z] Lifecycle#onWillShutdown.fire()
[6407:1217/081729.728513:WARNING:x11_util.cc(1399)] X error received: serial 478, error_code 173 (GLXBadWindow), request_code 153, minor_code 32 (X_GLXDestroyWindow)
[6407:1217/081729.728617:WARNING:x11_util.cc(1399)] X error received: serial 482, error_code 3 (BadWindow (invalid Window parameter)), request_code 4, minor_code 0 (X_DestroyWindow)
grahamperrin@mowa219-gjp4-8570p:~ % less ~/.config/Code\ -\ OSS/.org.chromium.Chromium.eOeolK
{"net":{"http_server_properties":{"servers":[{"isolation":[],"server":"https://davidwang.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ajshort.gallery.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://stkb.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ionutvmi.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ms-vscode.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ms-python.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ms-ceintl.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ms-iot.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ms-iot.gallery.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://bgforge.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://classix.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://vmssoftwareinc.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://killerall.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://siamz.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://alexhenriquepv.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://leighlondon.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ionutvmi.gallery.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://miusuncle.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://yedhrab.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://rjarouche.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://neptunedesign.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://jakob101.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://dariofuzinato.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://sryze.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://flesler.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://stkb.gallery.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://ms-ceintl.gallery.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://tomashubelbauer.gallerycdn.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://tomashubelbauer.gallery.vsassets.io","supports_spdy":true},{"isolation":[],"server":"https://marketplace.visualstudio.com","supports_spdy":true}],"version":5},"network_qualities":{"CAASABiAgICA+P////8B":"4G","CAYSABiAgICA+P////8B":"Offline"}}}
grahamperrin@mowa219-gjp4-8570p:~ % code-oss --disable-extensions --verbose
[main 2020-12-17T08:19:01.005Z] Starting VS Code
[main 2020-12-17T08:19:01.011Z] from: /usr/local/share/code-oss/resources/app
[main 2020-12-17T08:19:01.011Z] args: {
_: [],
diff: false,
add: false,
goto: false,
'new-window': false,
'reuse-window': false,
wait: false,
help: false,
'list-extensions': false,
'show-versions': false,
version: false,
verbose: true,
status: false,
'prof-startup': false,
'disable-extensions': true,
'disable-gpu': false,
telemetry: false,
logExtensionHostCommunication: false,
'skip-release-notes': false,
'disable-restore-windows': false,
'disable-telemetry': false,
'disable-updates': false,
'disable-crash-reporter': false,
'disable-user-env-probe': false,
'skip-add-to-recently-opened': false,
'unity-launch': false,
'open-url': false,
'file-write': false,
'file-chmod': false,
'driver-verbose': false,
force: false,
'do-not-sync': false,
trace: false,
'force-user-env': false,
'no-proxy-server': false,
nolazy: false,
'force-renderer-accessibility': false,
'ignore-certificate-errors': false,
'allow-insecure-localhost': false
}
[main 2020-12-17T08:19:01.016Z] Resolving machine identifier...
[main 2020-12-17T08:19:01.016Z] Resolved machine identifier: 76d5dcb36bedd2b6a2ae2706b11c68da607ea2bce16973ed535e6bfdec09baac (trueMachineId: undefined)
[main 2020-12-17T08:19:01.143Z] [storage state.vscdb] open(/home/grahamperrin/.config/Code - OSS/User/globalStorage/state.vscdb, retryOnBusy: true)
[main 2020-12-17T08:19:01.146Z] lifecycle (main): phase changed (value: 2)
[main 2020-12-17T08:19:01.148Z] windowsManager#open
[main 2020-12-17T08:19:01.151Z] window#validateWindowState: validating window state on 2 display(s) { mode: 0, x: 0, y: 0, width: 1133, height: 510 }
[main 2020-12-17T08:19:01.152Z] window#validateWindowState: multi-monitor working area { x: 0, y: 0, width: 1920, height: 1080 }
[main 2020-12-17T08:19:01.153Z] window#ctor: using window state { mode: 0, x: 0, y: 0, width: 1133, height: 510 }
[main 2020-12-17T08:19:01.681Z] lifecycle (main): phase changed (value: 3)
[main 2020-12-17T08:19:01.684Z] update#ctor - updates are disabled as there is no update URL
[13433:1217/081902.366547:ERROR:buffer_manager.cc(488)] [.DisplayCompositor]GL ERROR :GL_INVALID_OPERATION : glBufferData: <- error from previous GL command
[main 2020-12-17T08:19:02.583Z] [storage state.vscdb] getItems(): 41 rows
[main 2020-12-17T08:19:02.585Z] [storage state.vscdb] Trace (event): SELECT * FROM ItemTable
[main 2020-12-17T08:19:02.686Z] [storage state.vscdb] updateItems(): insert(Map(3) {storage.serviceMachineId => 735a3a8a-3134-4ebb-abad-e6b9359a2727, telemetry.lastSessionDate => Thu, 17 Dec 2020 08:17:11 GMT, telemetry.currentSessionDate => Thu, 17 Dec 2020 08:19:02 GMT}), delete(Set(0) {})
[main 2020-12-17T08:19:02.689Z] [storage state.vscdb] Trace (event): BEGIN TRANSACTION
[main 2020-12-17T08:19:02.692Z] [storage state.vscdb] Trace (event): INSERT INTO ItemTable VALUES ('storage.serviceMachineId','735a3a8a-3134-4ebb-abad-e6b9359a2727'),('telemetry.lastSessionDate','Thu, 17 Dec 2020 08:17:11 GMT'),('telemetry.currentSessionDate','Thu, 17 Dec 2020 08:19:02 GMT')
[main 2020-12-17T08:19:02.693Z] [storage state.vscdb] Trace (event): END TRANSACTION
[main 2020-12-17T08:19:02.864Z] getShellEnvironment: running on CLI, skipping
(node:13490) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:13490) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:13490) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:13490) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
[main 2020-12-17T08:19:04.147Z] [VS Code]: renderer process crashed!
[13433:1217/081904.745930:ERROR:buffer_manager.cc(488)] [.DisplayCompositor]GL ERROR :GL_INVALID_OPERATION : glBufferData: <- error from previous GL command
[main 2020-12-17T08:19:06.291Z] Shared process: IPC ready
(node:13567) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
(node:13567) Electron: Loading non context-aware native modules in the renderer process is deprecated and will stop working at some point in the future, please see https://github.com/electron/electron/issues/18397 for more information
[main 2020-12-17T08:19:06.356Z] Shared process: init ready
[main 2020-12-17T08:19:07.353Z] Lifecycle#window.on('closed') - window ID 1
[main 2020-12-17T08:19:07.353Z] Lifecycle#onWillShutdown.fire()
grahamperrin@mowa219-gjp4-8570p:~ % gdb /usr/local/bin/code-oss ./code-oss.core
GNU gdb (GDB) 10.1 [GDB v10.1 for FreeBSD]
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-portbld-freebsd13.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
"0x7fffffffe080s": not in executable format: file format not recognized
[New LWP 102724]
[New LWP 116825]
[New LWP 116826]
[New LWP 116827]
[New LWP 116828]
[New LWP 116829]
[New LWP 116830]
[New LWP 116831]
[New LWP 116832]
[New LWP 116833]
[New LWP 116834]
[New LWP 116835]
[New LWP 116836]
[New LWP 116838]
[New LWP 116839]
[New LWP 116840]
[New LWP 116841]
[New LWP 116842]
[New LWP 116843]
[New LWP 116844]
[New LWP 116845]
[New LWP 116846]
[New LWP 116847]
[New LWP 116848]
[New LWP 116849]
[New LWP 116850]
[New LWP 116851]
[New LWP 116867]
[New LWP 116888]
Core was generated by `code-oss: --disable-extensions --verbose --no-sandbox'.
Program terminated with signal SIGBUS, Bus error.
--Type <RET> for more, q to quit, c to continue without paging--
#0 0x00000000025d0c77 in ?? ()
[Current thread is 1 (LWP 102724)]
(gdb) bt
#0 0x00000000025d0c77 in ?? ()
#1 0x000000081432a0c0 in ?? ()
#2 0x000000081227a608 in ?? ()
#3 0x00007fffffffd750 in ?? ()
#4 0x0000000002c5e17b in ?? ()
#5 0x000000081227a608 in ?? ()
#6 0x000000081227a620 in ?? ()
#7 0x000000081432a0c0 in ?? ()
#8 0x000000081227a620 in ?? ()
#9 0x000000081227bff0 in ?? ()
#10 0x0000000007740100 in ?? ()
#11 0x000000081432a0c0 in ?? ()
#12 0x00000008133b6730 in ?? ()
#13 0x00000008133b6730 in ?? ()
#14 0x00000008133b6740 in ?? ()
#15 0xecf3e8d0b6254a2e in ?? ()
#16 0x000000080f6d9620 in ?? ()
#17 0x000000081227a608 in ?? ()
#18 0x00007fffffffd7e8 in ?? ()
#19 0x0000000000000005 in ?? ()
#20 0x0000000001a3c432 in ?? ()
#21 0x00007fffffffd7d0 in ?? ()
#22 0x0000000002c5e06e in ?? ()
#23 0x0000000000000000 in ?? ()
(gdb) q
grahamperrin@mowa219-gjp4-8570p:~ % pkg query '%o %v %R' vscode
editors/vscode 1.46.1 FreeBSD
grahamperrin@mowa219-gjp4-8570p:~ % uname -v
FreeBSD 13.0-CURRENT #74 r368589: Sun Dec 13 07:55:46 GMT 2020 root@mowa219-gjp4-8570p:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG
grahamperrin@mowa219-gjp4-8570p:~ %
Unfortunately, there is nothing useful in the backtrace.
Bug reported:
Code - OSS crashed consistently at start time – renderer process crashed! – until after I removed cached data workbench.desktop.main-17c1ea9255cc303c9339b9c2ce2b4a02.code · Issue #113069 · microsoft/vscode
With the bugged ~/.config/Code - OSS directory restored from a backup:
After removing the offending file, a first run of the application reported:
Extension host terminated unexpectedly.
This recurred a few seconds into every subsequent run of the application.
Output from code-oss --verbose at https://pastebin.com/gPXMNdrv
After disabling all extensions that could be disabled (the English (United Kingdom) Language Pack for Visual Studio Code cannot be), then re-enabling them all: touch wood, no recurrence of Extension host terminated unexpectedly.
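For anyone hitting the same crash, a minimal recovery sketch, assuming the cached data sits under the usual Code - OSS user profile (the hash in the file name will differ per build, so locate it first rather than trusting the path here):
# locate the cached workbench file named in the issue report
find ~/.config/Code\ -\ OSS -name 'workbench.desktop.main-*.code'
# move the offending file aside rather than deleting it, then retry
mv ~/.config/Code\ -\ OSS/CachedData/*/workbench.desktop.main-*.code /tmp/
code-oss --verbose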

FreeRADIUS daloRADIUS authentication failure

I followed this tutorial to install FreeRADIUS and daloRADIUS on a Raspberry Pi:
http://www.binaryheartbeat.net/2013/12/raspberry-pi-based-freeradius-server.html
I tested the file-based authentication and it worked fine, but after installing daloRADIUS and switching to MySQL, authentications fail for unknown reasons.
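For context, the debug trace below comes from running the server in the foreground. A minimal sketch of how to reproduce it, assuming the Debian-style service name used by the tutorial (the password and shared secret here are placeholders, not values from this setup):
sudo service freeradius stop   # stop the daemon so the debug instance can bind the ports
sudo freeradius -X             # full debug mode; prints every module decision
# in another shell, a quick local sanity check (radtest speaks PAP by default;
# the failing client below is doing PEAP/MSCHAPv2 through the NAS):
radtest ccc 'the-password' 127.0.0.1 0 testing123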
Here is the FreeRADIUS debug output that occurs when trying to authenticate a user:
rad_recv: Access-Request packet from host 192.168.1.1 port 32779, id=216, length=172
User-Name = "ccc"
State = 0xf9775519ff7f4c9188c14494359a170f
EAP-Message = 0x0208005b190017030100500d2898ca35aa9fa9e4febd8816c9e6deda71960fe5692b7c3d0499f2b5bba6b531483e373e14f8aff517aa081e214edc98e2c8bb22d16a961ecff4f498d20d152535b4d11ace1484b985bd2501ade77b
Service-Type = Framed-User
Framed-MTU = 1420
NAS-IP-Address = 192.168.1.1
Message-Authenticator = 0x49fc781b8a152fbec467b2c1f275a1a1
Tue Dec 29 18:38:47 2015 : Info: # Executing section authorize from file /etc/freeradius/sites-enabled/default
Tue Dec 29 18:38:47 2015 : Info: +group authorize {
Tue Dec 29 18:38:47 2015 : Info: ++[preprocess] = ok
Tue Dec 29 18:38:47 2015 : Info: ++[chap] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[mschap] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[digest] = noop
Tue Dec 29 18:38:47 2015 : Info: [suffix] No '@' in User-Name = "ccc", looking up realm NULL
Tue Dec 29 18:38:47 2015 : Info: [suffix] No such realm "NULL"
Tue Dec 29 18:38:47 2015 : Info: ++[suffix] = noop
Tue Dec 29 18:38:47 2015 : Info: [eap] EAP packet type response id 8 length 91
Tue Dec 29 18:38:47 2015 : Info: [eap] Continuing tunnel setup.
Tue Dec 29 18:38:47 2015 : Info: ++[eap] = ok
Tue Dec 29 18:38:47 2015 : Info: +} # group authorize = ok
Tue Dec 29 18:38:47 2015 : Info: Found Auth-Type = EAP
Tue Dec 29 18:38:47 2015 : Info: # Executing group from file /etc/freeradius/sites-enabled/default
Tue Dec 29 18:38:47 2015 : Info: +group authenticate {
Tue Dec 29 18:38:47 2015 : Info: [eap] Request found, released from the list
Tue Dec 29 18:38:47 2015 : Info: [eap] EAP/peap
Tue Dec 29 18:38:47 2015 : Info: [eap] processing type peap
Tue Dec 29 18:38:47 2015 : Info: [peap] processing EAP-TLS
Tue Dec 29 18:38:47 2015 : Info: [peap] eaptls_verify returned 7
Tue Dec 29 18:38:47 2015 : Info: [peap] Done initial handshake
Tue Dec 29 18:38:47 2015 : Info: [peap] eaptls_process returned 7
Tue Dec 29 18:38:47 2015 : Info: [peap] EAPTLS_OK
Tue Dec 29 18:38:47 2015 : Info: [peap] Session established. Decoding tunneled attributes.
Tue Dec 29 18:38:47 2015 : Info: [peap] Peap state phase2
Tue Dec 29 18:38:47 2015 : Info: [peap] EAP type mschapv2
Tue Dec 29 18:38:47 2015 : Info: [peap] Got tunneled request
EAP-Message = 0x0208003e1a0208003931461c2f1334a4b7bab38912e9d82dd97b000000000000000070fb7810a938a00d884f17dc01b62eaa7dde9fbb7ab2cf4200636363
server {
Tue Dec 29 18:38:47 2015 : Info: [peap] Setting User-Name to ccc
Sending tunneled request
EAP-Message = 0x0208003e1a0208003931461c2f1334a4b7bab38912e9d82dd97b000000000000000070fb7810a938a00d884f17dc01b62eaa7dde9fbb7ab2cf4200636363
FreeRADIUS-Proxied-To = 127.0.0.1
User-Name = "ccc"
State = 0x4bb6eef44bbef48a7072f4e023895561
server inner-tunnel {
Tue Dec 29 18:38:47 2015 : Info: # Executing section authorize from file /etc/freeradius/sites-enabled/inner-tunnel
Tue Dec 29 18:38:47 2015 : Info: +group authorize {
Tue Dec 29 18:38:47 2015 : Info: ++[chap] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[mschap] = noop
Tue Dec 29 18:38:47 2015 : Info: [suffix] No '@' in User-Name = "ccc", looking up realm NULL
Tue Dec 29 18:38:47 2015 : Info: [suffix] No such realm "NULL"
Tue Dec 29 18:38:47 2015 : Info: ++[suffix] = noop
Tue Dec 29 18:38:47 2015 : Info: ++update control {
Tue Dec 29 18:38:47 2015 : Info: ++} # update control = noop
Tue Dec 29 18:38:47 2015 : Info: [eap] EAP packet type response id 8 length 62
Tue Dec 29 18:38:47 2015 : Info: [eap] No EAP Start, assuming it's an on-going EAP conversation
Tue Dec 29 18:38:47 2015 : Info: ++[eap] = updated
Tue Dec 29 18:38:47 2015 : Info: ++[files] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[expiration] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[logintime] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[pap] = noop
Tue Dec 29 18:38:47 2015 : Info: +} # group authorize = updated
Tue Dec 29 18:38:47 2015 : Info: Found Auth-Type = EAP
Tue Dec 29 18:38:47 2015 : Info: # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel
Tue Dec 29 18:38:47 2015 : Info: +group authenticate {
Tue Dec 29 18:38:47 2015 : Info: [eap] Request found, released from the list
Tue Dec 29 18:38:47 2015 : Info: [eap] EAP/mschapv2
Tue Dec 29 18:38:47 2015 : Info: [eap] processing type mschapv2
Tue Dec 29 18:38:47 2015 : Info: [mschapv2] # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel
Tue Dec 29 18:38:47 2015 : Info: [mschapv2] +group MS-CHAP {
Tue Dec 29 18:38:47 2015 : Info: [mschap] No Cleartext-Password configured. Cannot create LM-Password.
Tue Dec 29 18:38:47 2015 : Info: [mschap] No Cleartext-Password configured. Cannot create NT-Password.
Tue Dec 29 18:38:47 2015 : Info: [mschap] Creating challenge hash with username: ccc
Tue Dec 29 18:38:47 2015 : Info: [mschap] Client is using MS-CHAPv2 for ccc, we need NT-Password
Tue Dec 29 18:38:47 2015 : Info: [mschap] FAILED: No NT/LM-Password. Cannot perform authentication.
Tue Dec 29 18:38:47 2015 : Info: [mschap] FAILED: MS-CHAP2-Response is incorrect
Tue Dec 29 18:38:47 2015 : Info: ++[mschap] = reject
Tue Dec 29 18:38:47 2015 : Info: +} # group MS-CHAP = reject
Tue Dec 29 18:38:47 2015 : Info: [eap] Freeing handler
Tue Dec 29 18:38:47 2015 : Info: ++[eap] = reject
Tue Dec 29 18:38:47 2015 : Info: +} # group authenticate = reject
Tue Dec 29 18:38:47 2015 : Info: Failed to authenticate the user.
Tue Dec 29 18:38:47 2015 : Info: Using Post-Auth-Type REJECT
Tue Dec 29 18:38:47 2015 : Info: # Executing group from file /etc/freeradius/sites-enabled/inner-tunnel
Tue Dec 29 18:38:47 2015 : Info: +group REJECT {
Tue Dec 29 18:38:47 2015 : Info: [attr_filter.access_reject] expand: %{User-Name} -> ccc
Tue Dec 29 18:38:47 2015 : Debug: attr_filter: Matched entry DEFAULT at line 11
Tue Dec 29 18:38:47 2015 : Info: ++[attr_filter.access_reject] = updated
Tue Dec 29 18:38:47 2015 : Info: +} # group REJECT = updated
} # server inner-tunnel
Tue Dec 29 18:38:47 2015 : Info: [peap] Got tunneled reply code 3
MS-CHAP-Error = "\010E=691 R=1"
EAP-Message = 0x04080004
Message-Authenticator = 0x00000000000000000000000000000000
Tue Dec 29 18:38:47 2015 : Info: [peap] Got tunneled reply RADIUS code 3
MS-CHAP-Error = "\010E=691 R=1"
EAP-Message = 0x04080004
Message-Authenticator = 0x00000000000000000000000000000000
Tue Dec 29 18:38:47 2015 : Info: [peap] Tunneled authentication was rejected.
Tue Dec 29 18:38:47 2015 : Info: [peap] FAILURE
Tue Dec 29 18:38:47 2015 : Info: ++[eap] = handled
Tue Dec 29 18:38:47 2015 : Info: +} # group authenticate = handled
Sending Access-Challenge of id 216 to 192.168.1.1 port 32779
EAP-Message = 0x0109002b190017030100205991bfd8f9e7f70794477d653c848e8b443626b3b935a5b3f049ac7af1534d3e
Message-Authenticator = 0x00000000000000000000000000000000
State = 0xf9775519fe7e4c9188c14494359a170f
Tue Dec 29 18:38:47 2015 : Info: Finished request 7.
Tue Dec 29 18:38:47 2015 : Debug: Going to the next request
Tue Dec 29 18:38:47 2015 : Debug: Waking up in 0.4 seconds.
rad_recv: Access-Request packet from host 192.168.1.1 port 32779, id=217, length=124
User-Name = "ccc"
State = 0xf9775519fe7e4c9188c14494359a170f
EAP-Message = 0x0209002b190017030100202a7f1a72de2970b689e44c005661d1e1e444854af7499ebeb23eabc7bfad7b64
Service-Type = Framed-User
Framed-MTU = 1420
NAS-IP-Address = 192.168.1.1
Message-Authenticator = 0xc9b0d8e268df2d8e4b484725c3efa189
Tue Dec 29 18:38:47 2015 : Info: # Executing section authorize from file /etc/freeradius/sites-enabled/default
Tue Dec 29 18:38:47 2015 : Info: +group authorize {
Tue Dec 29 18:38:47 2015 : Info: ++[preprocess] = ok
Tue Dec 29 18:38:47 2015 : Info: ++[chap] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[mschap] = noop
Tue Dec 29 18:38:47 2015 : Info: ++[digest] = noop
Tue Dec 29 18:38:47 2015 : Info: [suffix] No '@' in User-Name = "ccc", looking up realm NULL
Tue Dec 29 18:38:47 2015 : Info: [suffix] No such realm "NULL"
Tue Dec 29 18:38:47 2015 : Info: ++[suffix] = noop
Tue Dec 29 18:38:47 2015 : Info: [eap] EAP packet type response id 9 length 43
Tue Dec 29 18:38:47 2015 : Info: [eap] Continuing tunnel setup.
Tue Dec 29 18:38:47 2015 : Info: ++[eap] = ok
Tue Dec 29 18:38:47 2015 : Info: +} # group authorize = ok
Tue Dec 29 18:38:47 2015 : Info: Found Auth-Type = EAP
Tue Dec 29 18:38:47 2015 : Info: # Executing group from file /etc/freeradius/sites-enabled/default
Tue Dec 29 18:38:47 2015 : Info: +group authenticate {
Tue Dec 29 18:38:47 2015 : Info: [eap] Request found, released from the list
Tue Dec 29 18:38:47 2015 : Info: [eap] EAP/peap
Tue Dec 29 18:38:47 2015 : Info: [eap] processing type peap
Tue Dec 29 18:38:47 2015 : Info: [peap] processing EAP-TLS
Tue Dec 29 18:38:47 2015 : Info: [peap] eaptls_verify returned 7
Tue Dec 29 18:38:47 2015 : Info: [peap] Done initial handshake
Tue Dec 29 18:38:47 2015 : Info: [peap] eaptls_process returned 7
Tue Dec 29 18:38:47 2015 : Info: [peap] EAPTLS_OK
Tue Dec 29 18:38:47 2015 : Info: [peap] Session established. Decoding tunneled attributes.
Tue Dec 29 18:38:47 2015 : Info: [peap] Peap state send tlv failure
Tue Dec 29 18:38:47 2015 : Info: [peap] Received EAP-TLV response.
Tue Dec 29 18:38:47 2015 : Info: [peap] The users session was previously rejected: returning reject (again.)
Tue Dec 29 18:38:47 2015 : Info: [peap] *** This means you need to read the PREVIOUS messages in the debug output
Tue Dec 29 18:38:47 2015 : Info: [peap] *** to find out the reason why the user was rejected.
Tue Dec 29 18:38:47 2015 : Info: [peap] *** Look for "reject" or "fail". Those earlier messages will tell you.
Tue Dec 29 18:38:47 2015 : Info: [peap] *** what went wrong, and how to fix the problem.
Tue Dec 29 18:38:47 2015 : Info: [eap] Handler failed in EAP/peap
Tue Dec 29 18:38:47 2015 : Info: [eap] Failed in EAP select
Tue Dec 29 18:38:47 2015 : Info: ++[eap] = invalid
Tue Dec 29 18:38:47 2015 : Info: +} # group authenticate = invalid
Tue Dec 29 18:38:47 2015 : Info: Failed to authenticate the user.
Tue Dec 29 18:38:47 2015 : Info: Using Post-Auth-Type REJECT
Tue Dec 29 18:38:47 2015 : Info: # Executing group from file /etc/freeradius/sites-enabled/default
Tue Dec 29 18:38:47 2015 : Info: +group REJECT {
Tue Dec 29 18:38:47 2015 : Info: [sql] expand: %{User-Name} -> ccc
Tue Dec 29 18:38:47 2015 : Info: [sql] sql_set_user escaped user --> 'ccc'
Tue Dec 29 18:38:47 2015 : Info: [sql] expand: %{User-Password} ->
Tue Dec 29 18:38:47 2015 : Info: [sql] ... expanding second conditional
Tue Dec 29 18:38:47 2015 : Info: [sql] expand: %{Chap-Password} ->
Tue Dec 29 18:38:47 2015 : Info: [sql] expand: INSERT INTO radpostauth (username, pass, reply, authdate) VALUES ( '%{User-Name}', '%{%{User-Password}:-%{Chap-Password}}', '%{reply:Packet-Type}', '%S') -> INSERT INTO radpostauth (username, pass, reply, authdate) VALUES ( 'ccc', '', 'Access-Reject', '2015-12-29 18:38:47')
Tue Dec 29 18:38:47 2015 : Debug: rlm_sql (sql) in sql_postauth: query is INSERT INTO radpostauth (username, pass, reply, authdate) VALUES ( 'ccc', '', 'Access-Reject', '2015-12-29 18:38:47')
Tue Dec 29 18:38:47 2015 : Debug: rlm_sql (sql): Reserving sql socket id: 29
Tue Dec 29 18:38:47 2015 : Debug: rlm_sql (sql): Released sql socket id: 29
Tue Dec 29 18:38:47 2015 : Info: ++[sql] = ok
Tue Dec 29 18:38:47 2015 : Info: [attr_filter.access_reject] expand: %{User-Name} -> ccc
Tue Dec 29 18:38:47 2015 : Debug: attr_filter: Matched entry DEFAULT at line 11
Tue Dec 29 18:38:47 2015 : Info: ++[attr_filter.access_reject] = updated
Tue Dec 29 18:38:47 2015 : Info: +} # group REJECT = updated
Tue Dec 29 18:38:47 2015 : Info: Delaying reject of request 8 for 1 seconds
Tue Dec 29 18:38:47 2015 : Debug: Going to the next request
Tue Dec 29 18:38:47 2015 : Debug: Waking up in 0.1 seconds.
Tue Dec 29 18:38:47 2015 : Info: Cleaning up request 0 ID 209 with timestamp +11
Tue Dec 29 18:38:47 2015 : Debug: Waking up in 0.3 seconds.
Tue Dec 29 18:38:47 2015 : Info: Cleaning up request 1 ID 210 with timestamp +11
Tue Dec 29 18:38:47 2015 : Debug: Waking up in 0.3 seconds.
Tue Dec 29 18:38:48 2015 : Info: Cleaning up request 2 ID 211 with timestamp +12
Tue Dec 29 18:38:48 2015 : Debug: Waking up in 0.1 seconds.
Tue Dec 29 18:38:48 2015 : Info: Sending delayed reject for request 8
Sending Access-Reject of id 217 to 192.168.1.1 port 32779
EAP-Message = 0x04090004
Message-Authenticator = 0x00000000000000000000000000000000
Found the solution: the problem was that I didn't configure the /etc/raddb/sites-available/inner-tunnel file to use sql.
With sql missing from the inner-tunnel authorize section, nothing ever fetched the user's password from MySQL, which is exactly what the "[mschap] FAILED: No NT/LM-Password. Cannot perform authentication." lines above are complaining about.
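For reference, a minimal sketch of that change, assuming the stock FreeRADIUS 2.x file layout (the debug output above reads from /etc/freeradius/sites-enabled/, so substitute that prefix for /etc/raddb/ if it matches your install):
# uncomment the bare 'sql' lines (authorize, and optionally session/post-auth)
sudo sed -i 's/^#\([[:space:]]*sql[[:space:]]*\)$/\1/' /etc/freeradius/sites-available/inner-tunnel
# verify the module is now listed, then restart
grep -n '^[[:space:]]*sql[[:space:]]*$' /etc/freeradius/sites-available/inner-tunnel
sudo service freeradius restart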