Queues are one of those tools in Laravel that everyone knows is there, but very few people understand deeply. It’s understandable: Laravel is often the first place folks run into queues, and to be honest, they’re not simple.
Thankfully, very little has changed on the user-facing front with regard to how queues work in Laravel 5.3.
Daemon as default
The biggest change is that the command you would’ve once used to "listen" for queue jobs:
php artisan queue:listen
…is no longer the default. Instead, running
queue:work as a daemon is now the default:
php artisan queue:work
This was possible in the past by running
php artisan queue:work --daemon, but now, you don’t have to pass
--daemon (instead, pass
--once if you want to only work on a single job), and Laravel is recommending you use
queue:work (daemon style) instead of
queue:listen as your default.
What’s the difference?
php artisan queue:listen listens to your queue and spins up the entire application every time it operates on a queue job. This is slower, but doesn’t require rebooting the worker every time you push new code.
php artisan queue:work keeps the application booted in between jobs, which makes it faster and lighter, but you’ll need to restart the worker every time you push new code. The best way to do this is to run
php artisan queue:restart on every deploy.
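A typical deploy, then, ends by signaling the daemon workers to restart. A sketch of what that might look like (the paths, branch, and steps here are assumptions; adjust them for your own server and deploy tooling):

```shell
# Hypothetical deploy steps -- adjust paths and branch for your setup.
cd /var/www/app                  # assumed application root
git pull origin master           # fetch the new code
composer install --no-dev        # update dependencies
php artisan queue:restart        # tell daemon workers to exit after their current job
```

queue:restart doesn’t kill workers mid-job; it sets a flag that each worker checks between jobs, so in-flight work finishes cleanly before the restart.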
It’s now recommended that you run a Supervisor process on your Linux hosts to watch your queue worker and restart it if it stops. The docs now have a writeup on how to set up Supervisor correctly.
Essentially, you’re going to install it using
apt-get, configure it with a file in the
/etc/supervisor/conf.d directory, and specify that the queue worker should be restarted if it fails. You can even define how many queue workers you’d like to run at a given time.
Under the hood
The final change is one that’s largely transparent to us as developers: the new queue infrastructure has a different model of how the primary worker handles control of each job. It’s complicated, but it gives the worker much more control over the behavior of long-running or misbehaving queue jobs. The new system also takes advantage of PHP 7.1’s pcntl_async_signals when it’s available.
As a reminder, you can control these long-running jobs using the
--timeout option; the worker process will kill a child process that takes longer than the given number of seconds:
php artisan queue:work --timeout=90
Note that you can use this
timeout in combination with
retry_after, which is a setting in your queue configuration file.
retry_after defines how long the queue should wait before assuming a job has failed and releasing it back onto the queue for another try. As the docs note, make sure your
retry_after is at least a few seconds longer than your
timeout so the same job is never being processed by two workers at once.
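In config form, the relationship between the two settings looks something like this (a sketch using the Redis connection from config/queue.php; the values are illustrative):

```php
// config/queue.php -- illustrative values, not a drop-in config
'redis' => [
    'driver'      => 'redis',
    'connection'  => 'default',
    'queue'       => 'default',
    // Must exceed the worker's --timeout (90 in the example above) by a
    // safe margin, or a second worker may grab the job while it still runs.
    'retry_after' => 120,
],
```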
That’s it for now! It’s pretty simple and light stuff, but I think it makes the entire setup a little bit cleaner and more predictable. Good stuff.