Laravel Forge Queues not firing

In my Forge server I go to Queues and check the worker status, which always shows RUNNING.
However, when I push a job onto the queue it never fires; yet when I SSH into the server and run php artisan queue:listen, it does.
This is on Laravel 5.4 with two new Forge load-balanced servers. I've restarted the workers, deleted them and created new ones... I've even restarted the servers, but no joy.
Do I have to start the queues manually with Artisan?
Here's my config/queue.php
'beanstalkd' => [
    'driver'      => 'beanstalkd',
    'host'        => 'localhost',
    'queue'       => 'default',
    'retry_after' => 90,
],
My .env
QUEUE_DRIVER=beanstalkd
The Forge Queue Worker Status
worker-124625:worker-124625_00 RUNNING pid 17929, uptime 13:52:44
Forge Queue settings
Nothing is out of the ordinary... I don't think...
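For context, Forge runs each queue worker under Supervisor, so the RUNNING status above only means the process is alive, not that it is consuming the queue your jobs land on. A Forge worker definition looks roughly like this (the paths, program name and options here are assumptions for illustration, not taken from the server above):

```ini
[program:worker-124625]
command=php /home/forge/default/artisan queue:work beanstalkd --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
```

If the worker's connection or queue name differs from the one jobs are pushed onto, it will sit idle even while Supervisor reports RUNNING.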

Related

Is it possible to notify a service on a different host with puppet?

I have a Puppet module for host-1 that does some file exchanges.
Is it possible to inform another Puppet agent on host-2 (e.g. with a notify) about a change made on host-1?
And if it is possible, what would be a best-practice way to do that?
class fileexchangehost1 {
  file { '/var/apache2/htdocs':
    ensure  => directory,
    source  => "puppet:///modules/${module_name}/var/apache2/htdocs",
    owner   => 'root',
    group   => 'root',
    recurse => true,
    purge   => true,
    force   => true,
    notify  => Service['restart-Service-on-host-2'],
  }
}
Many have asked this question, and at various times there has been talk of implementing a feature to make it possible. But it's not possible, and not likely to be possible any time soon.
Exported resources were an early solution considered for problems like this, although some have argued it is not a good solution, and I don't see exported resources used often nowadays.
I think the recommended approach nowadays would be to keep it simple and use something like Puppet Bolt to run commands on node A, and then on node B, in order.
If not Puppet Bolt, you could also use MCollective's successor, Choria, or even Ansible for this.
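To make the Bolt suggestion concrete, a minimal sketch might run the two nodes in order (the host names and the restart command are assumptions for illustration):

```shell
# Apply the file exchange on host-1 first...
bolt command run 'puppet agent -t' --targets host-1
# ...then restart the dependent service on host-2
bolt command run 'systemctl restart apache2' --targets host-2
```

A Bolt plan could express the same ordering declaratively if you need it as reusable code.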
Puppet has no direct way of notifying a service on one host from the manifest of another.
That said, could you use exported resources for this? We use exported resources with Icinga, so one host generates Icinga configuration for itself, then exports it to the Icinga server, which restarts the daemon.
For example, on the client host:
@@file { "/etc/icinga2/conf.d/puppet/${::fqdn}.conf":
  ensure => file,
  [...]
  tag    => "icinga_client_conf",
}
And on the master host:
File <<| tag == "icinga_client_conf" |>> {
  notify => Service['icinga2'],
}
In your case there doesn't appear to be a resource being exported, but would this give you the tools to build something to do what you need?

How to monitor and control MDBs from JBoss in domain mode (list, stop-delivery, start-delivery)

I want to list information about, start, and stop the delivery of MDBs running in several servers. This page https://access.redhat.com/solutions/428023
shows how to stop and start MDB delivery in standalone mode:
[standalone@localhost:9999 /] /deployment=MDBStopDeliveryApplication.jar/subsystem=ejb3/message-driven-bean=TestMDB:start-delivery()
{"outcome" => "success"}
[standalone@localhost:9999 /] /deployment=MDBStopDeliveryApplication.jar/subsystem=ejb3/message-driven-bean=TestMDB:stop-delivery()
Can this be done in domain mode for all the servers? If so, how?
[domain@ip:9999 /] /deployment=name.ear/subsystem=ebj3/whatever
[domain@ip:9999 /] /deployment=name.ear/subsystem=ebj3:whatever()
I can't perform any operation on subsystem=ebj3 or any of its children, and Tab completion does nothing either. The result of any operation is always:
{
    "outcome" => "failed",
    "failure-description" => "JBAS014883: No resource definition is registered for address [
        (\"deployment\" => \"name.ear\"),
        (\"subsystem\" => \"ebj3\")
    ]",
    "rolled-back" => true
}
In domain mode you cannot query or manipulate these attributes globally. Your configuration is stored in a profile, the profile is assigned to a server group, and server instances are assigned to that group. Servers run on a host, which acts as a slave connected to a domain controller. There can be multiple hosts running on different machines, and each host can manage server instances assigned to different groups.
To achieve your goal you need to execute those commands on each server where your application is deployed. If you want to automate it, you can first query the list of servers belonging to the server group and then iterate over them, for example in a bash script invoking the CLI. To run the command against a specific server, just prefix it with /host=<your_host>/server=<your_server>/
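For example, a rough bash sketch of that iteration (the CLI path, host, deployment and bean names are placeholders to substitute for your own topology):

```shell
#!/bin/bash
# Placeholder names throughout; adjust the CLI path, host, deployment and MDB.
CLI=./bin/jboss-cli.sh
HOST=master
# List the server instances on the host, then stop delivery on each one.
for server in $($CLI -c --command="ls /host=$HOST/server"); do
  $CLI -c --command="/host=$HOST/server=$server/deployment=MDBStopDeliveryApplication.jar/subsystem=ejb3/message-driven-bean=TestMDB:stop-delivery()"
done
```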
This was a bug in EAP 6.4 and prior versions. It is fixed in EAP 6.4.5. You can use the CLI command below to start/stop MDB delivery:
/host=master/server=server-three/deployment=xxxx.jar/subsystem=ejb3/message-driven-bean=xxx:start-delivery()

Laravel mail queue wrong mailgun domain

I'm using Laravel 5.1, and for sending emails I use:
Mail::queue(XYZ, compact(ABC), function ($message) use ($mailTo) {
    $message->from(XXX, XXX);
    $message->to(XXX)->subject(XXX);
});
In services.php I have:
'mailgun' => [
    'domain' => env('MAILGUN_DOMAIN'),
    'secret' => env('MAILGUN_SECRET'),
],
and everything works as expected.
However, I had to change my Mailgun domain from sandboxXXXXXXXXXXXXX.mailgun.org to my.domain.com,
and emails are not delivered using the new domain unless I use Mail::send instead of Mail::queue.
I also ran php artisan queue:restart and php artisan cache:clear, and finally I restarted Supervisor on my server, but it didn't work.
In my log file I can see that, using Mail::queue, Guzzle is still contacting Mailgun with the old domain even though there is no trace of the old domain in the code anymore.
Any suggestions?
How can I fix this issue and be able to queue my emails using the new domain?
Thanks for your help!

Cannot create a JMS topic in JBoss 6.0.1 using the CLI in a cluster environment

I am trying to create a jms-topic in JBoss 6.0.1 using the JBoss command-line interface.
I am able to do it on a standalone server, but issues arise when I run the command in a cluster environment.
Here is what i do:
Standalone server command:
jms-topic add --topic-address=java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic --entries=java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic
Executes successfully.
Now, when I run the same command in a JBoss cluster environment, it asks me to give a profile. I give the profile and execute this command:
jms-topic add --profile=hornetq-server --topic-address=java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic --entries=java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic
It gives me the following error:
JBAS014766: Resource [("profile" => "hornetq-server")] does not exist; a resource at address [
    ("profile" => "hornetq-server"),
    ("subsystem" => "messaging"),
    ("hornetq-server" => "default"),
    ("jms-topic" => "java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic")
] cannot be created until all ancestor resources have been added
Is my profile name wrong? I am able to create the topic from the JBoss management console.
Please suggest.
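For what it's worth, the error suggests hornetq-server is being parsed as a profile name, but hornetq-server is a resource inside the messaging subsystem, not a profile. The --profile argument should name one of the profiles defined in domain.xml; here is a sketch assuming a profile called full-ha (an assumption; check your own domain.xml for the actual name):

```
jms-topic add --profile=full-ha --topic-address=java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic --entries=java:/com.matrix.jms.samp.imp.solution.topic.ReplyTopic
```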

Accessing Google Cloud SQL instance from Google Compute Engine?

After spending a few hours, this is the only real documentation I can find for accessing Cloud SQL from outside of GAE: https://developers.google.com/cloud-sql/docs/external
The problem is, this is for a Java application (via JDBC).
I need to access my Cloud SQL DB from within a PHP, Dart, or Node.js application. I thought that by giving my GCE instance rights to connect to Cloud SQL this would be easy, but no arrangement of socket strings (using MySQL drivers) seems to work.
For argument's sake, let's say I'm trying to connect with a PHP app. My mysql connection array looks like this:
(
    'driver'      => 'mysql',
    'unix_socket' => '/cloudsql/project-id:instance-id',
    'host'        => 'localhost',
    'database'    => 'dbname',
    'username'    => 'root',
    'password'    => '',
    'charset'     => 'utf8',
    'collation'   => 'utf8_unicode_ci',
    'prefix'      => '',
)
This is as close as I got, but I'll get a generic "Can't connect to local MySQL server through socket" error.
While this is an older question, I just thought I should share what I've found regarding this.
First off, you were attempting to connect to a MySQL server on your GCE instance, not your remote Cloud SQL instance.
To begin:
1. Go into your dashboard and request an IP for your Cloud SQL instance.
2. Go to Cloud SQL Access Control and add your GCE IP address.
3. Connect to Cloud SQL from GCE via mysql-client and add a new (non-root) user.
4. Use the Cloud SQL IP and the new non-root user to access Cloud SQL from your GCE PHP files.
Hope this helps.
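As a sketch of that last step, a PHP connection from GCE goes over TCP to the instance IP rather than the /cloudsql/ unix socket (that socket path only exists on App Engine). The IP, user, password and database below are placeholders:

```php
<?php
// Placeholder credentials: substitute your Cloud SQL instance IP and
// the non-root user created above.
$pdo = new PDO(
    'mysql:host=173.194.0.10;dbname=dbname;charset=utf8',
    'appuser',
    'secret'
);
```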
The Cloud SQL team are working on improving connectivity from Compute Engine. If you send this question to google-cloud-sql-discuss@googlegroups.com, they will be able to follow up.
You could connect indirectly, i.e. create a Java-based App Engine app that provides an interface to the database for you, and consume that interface from your PHP app.
For example, the Java App Engine app has a 'getEmployees' method that runs a SELECT query on the DB and then formats and returns the results as JSON. Your PHP app would then call this method and consume the JSON.