Maximum execution time exceeded due to session start? - zend-framework

I am getting the following error when I use Zend_Session::start() in my bootstrap file:
Maximum execution time of 30 seconds exceeded in G:\wamp\library\Zend\Session.php on line 480
On line 480 of Zend\Session.php the code is:
$startedCleanly = session_start();
Because of this, the browser keeps loading the page seemingly forever, as if stuck in an infinite loop.
Context
class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
{
    protected $_config;
    protected $_acl;
    protected $_auth;

    public function _initMyAutoloader()
    {
        $autoloader = Zend_Loader_Autoloader::getInstance();
        $autoloader->pushAutoloader(new Zend_Application_Module_Autoloader(array(
            'basePath'  => APPLICATION_PATH . '/',
            'namespace' => ''
        )));
        return $autoloader;
    }

    public function _initMyConfig()
    {
        Zend_Session::start();
        $this->_config = new Zend_Config($this->getOptions());
        Zend_Registry::set('config', $this->_config);
        return $this->_config;
    }
}
Thanks.

It's not caused by the session itself, but by max_execution_time.
max_execution_time can be set in php.ini, so you can raise it:
max_execution_time = 60 ; Maximum execution time of each script, in seconds
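If php.ini is not under your control (on shared hosting, say), the same directive can often be overridden per directory instead; a minimal sketch, assuming Apache with mod_php (the value is illustrative):

```apacheconf
# .htaccess in the application's document root
php_value max_execution_time 60
```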

Related

GWT Async DataProvider always jumping to CellTable's first page

I have a simple CellTable with:
a) A timer for retrieving the whole list of values from the server (via RPC). When the data comes from the server:
public void onSuccess(final Object result) {
    myList = (List<myObject>) result;
    setRowData(myList);
}
b) An AsyncDataProvider to refresh the currently displayed page, where:
protected void onRangeChanged(HasData<myObject> display) {
    final Range range = display.getVisibleRange();
    int start = range.getStart();
    int end = start + range.getLength();
    List<myObject> dataInRange = myList.subList(start, end);
    // Push the data back into the list.
    setRowData(start, dataInRange);
}
This works fine, refreshing the table when new data arrives from the server... BUT it jumps to the first page of my displayed table regardless of the current page (page size = 20).
It is as if the start and dataInRange values of the onRangeChanged method were being ignored.
The statement:
setRowData(myList);
fires the onRangeChanged event of the DataProvider properly, but somehow the 'start' value becomes 0.
Any tip?
Many thanks
The problem is that when you call setRowData(myList); you also change the row range.
See the documentation:
Set the complete list of values to display on one page.
Equivalent to calling setRowCount(int) with the length of the list of values, setVisibleRange(Range) from 0 to the size of the list of values, and setRowData(int, List) with a start of 0 and the specified list of values.
In fact, this is the implementation:
public final void setRowData(List<? extends T> values) {
    setRowCount(values.size());
    setVisibleRange(0, values.size());
    setRowData(0, values);
}
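In other words, the fix is to keep pushing data through the two-argument setRowData(int, List) for the current visible range only, and to update the total with setRowCount(int), never calling the one-argument overload. The slicing itself is also worth getting right, since start + length can overrun the last page. A plain-Java sketch of that logic (RangePreservingUpdate and pageSlice are made-up names for illustration, not GWT API):

```java
import java.util.Arrays;
import java.util.List;

public class RangePreservingUpdate {
    // Recompute the slice for the page currently shown, instead of
    // resetting the visible range to 0 as setRowData(List) does.
    // start/length stand in for display.getVisibleRange().
    static List<String> pageSlice(List<String> all, int start, int length) {
        int end = Math.min(start + length, all.size()); // clamp the last, partial page
        return all.subList(start, end);
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c", "d", "e");
        System.out.println(pageSlice(data, 2, 2)); // → [c, d]
        System.out.println(pageSlice(data, 4, 2)); // last page is partial → [e]
    }
}
```

In the real onRangeChanged you would then pass the slice to setRowData(range.getStart(), dataInRange), leaving the visible range untouched.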

Lumen Database Queue: first job always failing with "Allowed memory exhausted"

I have a very odd situation: I set up a job to run in my Lumen database queue, and every job except the first is processed. I keep getting this particular error:
[2017-12-12 22:07:10] lumen.ERROR: Symfony\Component\Debug\Exception\FatalErrorException: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 702558208 bytes) in /var/www/vhosts/XXXXXXXXX$
Stack trace:
#0 /var/www/vhosts/XXXXXXXX/vendor/laravel/lumen-framework/src/Concerns/RegistersExceptionHandlers.php(54): Laravel\Lumen\Application->handleShutdown()
#1 [internal function]: Laravel\Lumen\Application->Laravel\Lumen\Concerns\{closure}()
#2 {main}
I have tried raising the memory limit, but I keep getting the same error with differing values for the exhausted memory.
I find it very odd that it is always the first job while all of the rest of the jobs run perfectly fine. Should I be looking for bad data in the first job?
My code basically looks like this:
This is my Command file
namespace App\Console\Commands;
use App\Jobs\UpdateNNNAppListJob;
use Illuminate\Console\Command;
use App\Services\MiddlewareApi;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;
use Mockery\Exception;
use Illuminate\Support\Facades\Queue;
class AddEmailsToAppList extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'addemails:nnnmobileapp';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'This will add all mobile app users in the database to the nnn mobile app list.';

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        $chunkSize = 500; // this is the most middleware can handle with its bulk signup call
        $emailChunks = $this->getEmailsToAdd($chunkSize);
        $jobDelay = 120; // time between queued jobs
        $jobDelayTimeKeeper = 60; // this will be the actual time delay that will be put into the later method
        foreach ($emailChunks as $emailChunk) {
            Queue::later($jobDelayTimeKeeper, new UpdateMmpAppListJob($emailChunk));
            $jobDelayTimeKeeper = $jobDelayTimeKeeper + $jobDelay;
        }
    }

    public function getEmailsToAdd($chunkSize)
    {
        $emails = DB::table('app_users')
            ->join('app_datas', 'app_datas.customer_number', '=', 'app_users.customer_number')
            ->select('app_users.email')
            ->get()
            ->chunk($chunkSize);
        return $emails;
    }
}
Here is my job file:
<?php
namespace App\Jobs;

use App\Services\MiddlewareApi;
use Illuminate\Support\Facades\Log;
use Mockery\Exception;

class UpdateMmpAppListJob extends Job
{
    /**
     * Array of emails to update the list with
     * @var array
     */
    protected $emailArray;

    /**
     * The number of times the job may be attempted.
     *
     * @var int
     */
    public $tries = 2;

    public function __construct($emailArray)
    {
        $this->emailArray = $emailArray;
    }

    public function handle()
    {
        $listCodeToAddTo = 'NNNAPP';
        $sourceId = 'NNNNNNN';
        $middlewareApi = new MiddlewareApi();
        try {
            $middlewareApi->post_add_customer_signup_bulk($listCodeToAddTo, $this->emailArray, $sourceId);
        } catch (\Exception $e) {
            Log::error('An error occurred with the UpdateMmpAppListJob: ' . $e);
            mail('djarrin@NNN.com', 'UpdateNnnAppListJob Failure', 'A failure in the UpdateNnnAppListJob, here is the exception: ' . $e);
        }
    }

    public function failed(\Exception $exception)
    {
        mail('djarrin@moneymappress.com', 'Push Processor Que Failure', 'A failure in the UpdateMmpAppListJob, here is the exception: ' . $exception);
    }
}
Any help/suggestions on this issue would be appreciated.
Your code calls ->get(), which loads the entire result set into memory. That causes the huge memory allocation you're seeing. Remove it and let ->chunk(...) work on the query builder instead of the in-memory Collection that get() returned. You also have to provide a callback to chunk that processes each chunk.
public function handle()
{
    $chunkSize = 500; // this is the most middleware can handle with its bulk signup call
    $jobDelay = 120; // time between queued jobs
    $jobDelayTimeKeeper = 60; // this will be the actual time delay that will be put into the later method
    DB::table('app_users')
        ->join('app_datas', 'app_datas.customer_number', '=', 'app_users.customer_number')
        ->select('app_users.email')
        ->chunk($chunkSize, function ($emailChunk) use (&$jobDelayTimeKeeper, $jobDelay) {
            Queue::later($jobDelayTimeKeeper, new UpdateMmpAppListJob($emailChunk));
            $jobDelayTimeKeeper = $jobDelayTimeKeeper + $jobDelay;
        });
}
The above concept is correct, but the following syntax was required to get past
[2017-12-14 22:08:26] lumen.ERROR: RuntimeException: You must specify an orderBy clause when using this function. in /home/vagrant/sites/nnn/vendor/illuminate/database/Query/Builder.php:1877
This is for Lumen 5.5:
public function handle()
{
    $chunkSize = 500; // this is the most middleware can handle with its bulk signup call
    $jobDelay = 120; // time between queued jobs
    $jobDelayTimeKeeper = 60; // this will be the actual time delay that will be put into the later method
    $emails = DB::table('app_users')
        ->join('app_datas', 'app_datas.customer_number', '=', 'app_users.customer_number')
        ->select('app_users.email')
        ->orderBy('app_users.id', 'desc');
    $emails->chunk($chunkSize, function ($emailChunk) use (&$jobDelayTimeKeeper, $jobDelay) {
        Queue::later($jobDelayTimeKeeper, new UpdateMmpAppListJob($emailChunk));
        $jobDelayTimeKeeper = $jobDelayTimeKeeper + $jobDelay;
    });
}

Rules are skipped in KnowledgeBase

We are using Drools 5.5 Final. We have thousands of objects and two rules, so we fetch the objects in chunks (of size 100), create a knowledge base for every chunk, and fire the rules. Since creating the KnowledgeBase is expensive, this gave us a performance problem, so we now create the KnowledgeBase once and reuse it for every chunk. In this case, after 4 to 5 chunks have been executed, from the 6th chunk onwards the rules are no longer fired even though there are matches. Please suggest what can be done.
Sample code:
public static KnowledgeBase getPackageKnowledgeBase(PackageDescr pkg) {
    KnowledgeBuilderConfiguration builderConf = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
    KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(builderConf);
    kbuilder.add(ResourceFactory.newDescrResource(pkg), ResourceType.DESCR);
    Collection<KnowledgePackage> kpkgs = kbuilder.getKnowledgePackages();
    if (kbuilder.hasErrors()) {
        LOGGER.error(kbuilder.getErrors());
    }
    KnowledgePackage knowledgePackage = kpkgs.iterator().next();
    KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
    kbase.addKnowledgePackages(Collections.singletonList(knowledgePackage));
    return kbase;
}
Calling code:
int chunkSize = 100;
int start = 0;
int Count = -1;
KnowledgeBase kbase = getPackageKnowledgeBase(pkgdscr); // pkgdscr contains all rules fetched from the db
while (Count != 0 && Count <= chunkSize) {
    LOGGER.debug("Deduction not getting " + mappedCustomerId);
    Objects inputObjects = handler.getPaginatedInputObjects(start);
    Count = inputObjects.size();
    start = start + chunkSize;
    StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
    for (Object object : inputObjects) {
        ksession.insert(object);
    }
    ksession.fireAllRules();
    ksession.dispose();
}
Below is the essential part of your loop. It looks to me as if this loop terminates as soon as Count exceeds chunkSize (100). Are you sure this never happens?
while (Count != 0 && Count <= chunkSize) {
    Objects inputObjects = ...;
    Count = inputObjects.size();
    ...
    StatefulKnowledgeSession ksession = ...;
    for (Object object : inputObjects) {
        ksession.insert(object);
    }
    ksession.fireAllRules();
    ...
}
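If the termination condition is indeed the culprit, a safer pattern is to loop until a page comes back empty, so a full page (size == chunkSize) can never end the loop early. A plain-Java sketch of that pattern (getPage and processAll are made-up stand-ins for the pagination and the insert/fireAllRules work, not the Drools API):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkLoop {
    // Simulated pagination: returns up to chunkSize items starting at 'start'.
    static List<Integer> getPage(List<Integer> all, int start, int chunkSize) {
        if (start >= all.size()) return new ArrayList<>();
        int end = Math.min(start + chunkSize, all.size());
        return new ArrayList<>(all.subList(start, end));
    }

    // Process every chunk; terminate only when a page comes back empty,
    // so a full page never stops the loop prematurely.
    static int processAll(List<Integer> all, int chunkSize) {
        int processed = 0;
        int start = 0;
        while (true) {
            List<Integer> page = getPage(all, start, chunkSize);
            if (page.isEmpty()) break; // no more data
            processed += page.size();  // stand-in for insert + fireAllRules
            start += chunkSize;
        }
        return processed;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 250; i++) data.add(i);
        System.out.println(processAll(data, 100)); // prints 250: all chunks processed
    }
}
```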

Omnet++/INET: parameter added to cMessage becomes 0 when received

In our simulation we added two fields to the cMessage class as protected members:
/* sequence number for log files */
long seqNo = 0;
/* timestamp at sending message */
simtime_t sendingTime;
and we added the following public methods:
public:
    void setSeqNo(long n) {
        this->seqNo = n;
    }
    long getSeqNo() {
        return this->seqNo;
    }
    void setSentTime(simtime_t t) {
        this->sendingTime = t;
    }
    simtime_t getSentTime() {
        return this->sendingTime;
    }
Now, when the simulated server application runs, before each message is sent it performs:
pkt->setSeqNo(numPkSent);
pkt->setSentTime(simTime());
fprintf(this->analyticsCorrespondentNode, "PKT %u SENT AT TIME %f TO NODE %s \n", numPkSent, pkt->getSentTime().dbl(), d->clientAddr.get4().str().c_str());
On the other hand, when the message is received by the simulated client application, it performs:
double recvTime = simTime().dbl();
fprintf(this->analyticsMobileNode, "RECEIVED PKT num. %d SENT AT TIME: %f RECEIVED AT TIME %f TRANSMISSION TIME ELAPSED %f \n", msg->getSeqNo(), msg->getSentTime().dbl(), recvTime, recvTime - msg->getSentTime().dbl());
The problem is that seqNo is correctly logged by the client, just as it was set by the server before sending. But the method
msg->getSentTime().dbl()
always returns 0 in the client log file, while it is correctly set by the server in the server log file. I don't understand why; maybe something strange is happening in the conversion between cMessage and cPacket in the client application... do you know about this?
To add your own fields to a packet, you should just prepare the definition in a *.msg file. For example, FooPacket.msg:
packet FooPacket {
    long seqNo;
    simtime_t sendingTime;
    // other fields...
}
Then, in your source file *.cc add:
#include "FooPacket_m.h"
The class FooPacket, which derives from cPacket, as well as all setter and getter methods, will be generated automatically during compilation - you will see the files FooPacket_m.h and FooPacket_m.cc.
When your client receives a message, you should check whether the type is the same as you expected and then cast it to FooPacket type. For example this way:
void handleMessage(cMessage *msg) {
    if (dynamic_cast<FooPacket *>(msg)) {
        FooPacket *pkt = check_and_cast<FooPacket *>(msg);
        simtime_t t = pkt->getSendingTime();
    }
    // ...
}
It could be the conversion from cMessage to cPacket. Have you tried this?
Packet *pk = check_and_cast<Packet *>(msg);
pk->getSentTime().dbl();
Also, you can check whether there is a problem with the simtime_t/double conversion somewhere; try a plain double for the sentTime parameter.

SWT, TypedEvent: how to make use of the time variable

The TypedEvent class has the member variable time. I want to use it to discard events that are too old. Unfortunately, it is of type int, whereas System.currentTimeMillis() returns long, and the two are very different, even when masking with 0xFFFFFFFFL as the JavaDoc of time tells me. How should the time be interpreted?
Note: As you haven't mentioned the operating system, I am assuming Windows (because that is what I have).
Answer
If you look closely at the org.eclipse.swt.widgets.Widget class, you will find that TypedEvent.time is initialized as follows:
event.time = display.getLastEventTime();
which in turn calls OS.GetMessageTime();
Now, SWT works directly with OS widgets, so on a Windows machine the call OS.GetMessageTime() translates directly to the Windows GetMessageTime API.
Check GetMessageTime on MSDN. As per that page:
Retrieves the message time for the last message retrieved by the GetMessage function. The time is a long integer that specifies the elapsed time, in milliseconds, from the time the system was started to the time the message was created (that is, placed in the thread's message queue).
Pay special attention to the phrase from the time the system was started to the time the message was created, which means it is not the standard System.currentTimeMillis(), which is the elapsed time, in milliseconds, since 1 January 1970.
Also:
To calculate time delays between messages, verify that the time of the second message is greater than the time of the first message; then subtract the time of the first message from the time of the second message.
See the example code below, which prints two different messages for times less than 5 seconds and greater than 5 seconds. (Note: the timer starts with the first event, so the calculation is always relative to the first event.) Because of this relative nature, TypedEvent.time might not be suitable for your purpose, as the first event may come very late.
>> Code
import java.util.Calendar;
import org.eclipse.swt.events.KeyEvent;
import org.eclipse.swt.events.KeyListener;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
public class ControlF
{
    static Calendar first = null;

    public static void main(String[] args)
    {
        Display display = new Display();
        final Shell shell = new Shell(display);
        shell.addKeyListener(new KeyListener() {
            public void keyReleased(KeyEvent e) {
            }
            public void keyPressed(KeyEvent e)
            {
                long eventTime = (e.time & 0xFFFFFFFFL);
                if (first == null)
                {
                    System.out.println("in");
                    first = Calendar.getInstance();
                    first.setTimeInMillis(eventTime);
                }
                Calendar cal = Calendar.getInstance();
                cal.setTimeInMillis(eventTime);
                long dif = (cal.getTimeInMillis() - first.getTimeInMillis()) / 1000;
                if (dif <= 5)
                {
                    System.out.println("Within 5 secs [" + dif + "]");
                } else {
                    System.out.println("Oops!! out of 5 second range !!");
                }
            }
        });
        shell.setSize(200, 200);
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) display.sleep();
        }
        display.dispose();
    }
}
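The 0xFFFFFFFFL mask in the example simply reinterprets the signed int event time as an unsigned 32-bit value; deltas between two event times then come out correctly even when the int has wrapped negative. A minimal sketch of that arithmetic (class and values are made up for illustration):

```java
public class EventTimeDelta {
    // Reinterpret SWT's signed int event time as an unsigned 32-bit value.
    static long unsigned(int eventTime) {
        return eventTime & 0xFFFFFFFFL;
    }

    // Milliseconds elapsed between two event times.
    static long deltaMillis(int earlier, int later) {
        return unsigned(later) - unsigned(earlier);
    }

    public static void main(String[] args) {
        // An int that has wrapped negative still maps to a large unsigned value.
        System.out.println(unsigned(-1));            // 4294967295
        System.out.println(deltaMillis(1000, 6000)); // 5000
    }
}
```

Since Java 8, Integer.toUnsignedLong(int) performs the same masking.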