megacolorboy

Abdush Shakoor's Weblog

Writings, experiments & ideas.

Why Hangfire recurring jobs should always have stable IDs

Recently, I learned that not giving explicit IDs to Hangfire recurring jobs can silently create duplicates.

When you call RecurringJob.AddOrUpdate(...) without a pre-defined ID, Hangfire auto-generates one based on the method expression. That sounds fine—until you deploy across multiple nodes or refactor your code.

In such cases, Hangfire may generate a new derived ID while the old recurring job keeps running in the background. The result? Duplicate jobs executing on schedule, often going unnoticed.

Simple fix

Make sure that you always pass a clear, human-readable ID:

RecurringJob.AddOrUpdate<ICustomBackgroundServiceManager>(
    "payments:resend-vendor-report-hourly", // Explicit, human-readable ID
    s => s.ResendPaymentReportToVendorHourly(),
    Cron.Hourly(3) // Runs at minute 3 of every hour
);

Next, to keep executions from overlapping in a multi-node setup, add this attribute to the job method:

[DisableConcurrentExecution(timeoutInSeconds: 3600)]
public Task ResendPaymentReportToVendorHourly() { /* ... */ }

This uses a distributed lock backed by Hangfire storage—so even with multiple nodes, only one execution runs at a time. If you’re wondering whether this locks the job for the full hour: it doesn’t. The timeout exists for crash-safety, not throttling.
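If duplicates have already crept in from earlier deployments, the stale auto-generated entries can be removed once you know their IDs. A minimal sketch — the ID below is hypothetical; the Recurring Jobs page in the Hangfire dashboard shows the real ones:

// Remove a stale recurring job that was registered under an auto-generated ID
RecurringJob.RemoveIfExists("ICustomBackgroundServiceManager.ResendPaymentReportToVendorHourly");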

After doing this, recurring jobs become much easier to reason about, identify, pause, or delete in a multi-node environment.

Hope you found this tip useful!

Running Laravel queues with Supervisor on Ubuntu

Laravel queues aren’t new to me, but I realized I’d never really written down how I usually set them up on a fresh Ubuntu server.

Whenever I need queue workers to run continuously — and survive restarts or crashes — I almost always reach for Supervisor. It’s simple, boring, and gets the job done.

First, install it on your server:

sudo apt update
sudo apt install supervisor

Then create a small config file for the queue worker:

sudo vim /etc/supervisor/conf.d/laravel-worker.conf

This is the setup I generally start with:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/project/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/project/storage/logs/laravel-worker.log
stopwaitsecs=600

A few quick notes on what this does:

  • It runs queue:work instead of queue:listen, so the worker stays in memory
  • Two worker processes are started in parallel (numprocs=2)
  • If the queue is empty, the worker sleeps for a few seconds instead of spinning
  • Failed jobs are retried a limited number of times
  • The worker exits after an hour (--max-time=3600) and is restarted fresh, which keeps memory leaks in check
  • If a worker crashes, Supervisor brings it back up automatically

Once the file is saved, reload Supervisor’s config:

sudo supervisorctl reread
sudo supervisorctl update

Start the workers:

sudo supervisorctl start laravel-worker:*

And check their status:

sudo supervisorctl status

Whenever I deploy new code or change environment variables, I just restart them:

sudo supervisorctl restart laravel-worker:*
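If you'd rather let each worker finish the job it's currently processing, Laravel's queue:restart command signals workers to exit gracefully once they're done, and Supervisor (thanks to autorestart=true) starts fresh processes. The path below assumes the same project location as the config above:

php /var/www/project/artisan queue:restart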

Logs end up in Laravel’s storage/logs, which is usually the first place I look when something feels off.
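A quick way to watch them live (the path matches the stdout_logfile set in the config above):

tail -f /var/www/project/storage/logs/laravel-worker.log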

This setup has been reliable for me across multiple projects. No dashboards, no extra moving parts — just queue workers quietly doing their job in the background.

Hope you found this useful!

DateTimeKind.Unspecified can quietly break your dates

Two months ago, a client raised a critical ticket: some users complained that the Start Date of their Financial Year had gone back by one day. This wasn't caught during UAT while integrating their API, and I was surprised to run into this date-conversion bug while I was on vacation (yes, that sucks!).

I had a perfectly normal-looking date:

2025-01-01 00:00:00.0000000

Nothing fancy. But after converting it to UTC, I noticed something odd — the date changed.

Luckily, I was able to trace where it was coming from, and after debugging in Visual Studio 2022, it turned out the culprit was DateTimeKind.Unspecified.

When a DateTime is parsed without timezone information, .NET marks it as Unspecified. If you then call ToUniversalTime(), .NET assumes the value is local time and converts it to UTC. On a UTC+5:30 system, that means:

2025-01-01 00:00 → 2024-12-31T18:30:00Z

Same instant, different calendar date. Easy to miss, painful to debug.
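Here's a minimal repro of that behaviour; the exact output depends on the machine's timezone, and the comments assume UTC+5:30:

var parsed = DateTime.Parse("2025-01-01 00:00:00");

Console.WriteLine(parsed.Kind);              // Unspecified: the string carries no timezone info
Console.WriteLine(parsed.ToUniversalTime()); // Treated as local time; on UTC+5:30 this prints 2024-12-31 18:30:00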

The fix

If your date is already meant to be UTC, say so explicitly. Note that SpecifyKind returns a new DateTime rather than modifying its argument, so assign the result:

inputDateTime = DateTime.SpecifyKind(inputDateTime, DateTimeKind.Utc);

Or handle it defensively in one place:

switch (inputDateTime.Kind)
{
    case DateTimeKind.Utc:
        return inputDateTime;

    case DateTimeKind.Local:
        return inputDateTime.ToUniversalTime();

    case DateTimeKind.Unspecified:
    default:
        return DateTime.SpecifyKind(inputDateTime, DateTimeKind.Utc);
}

Takeaway

Never leave DateTimeKind to chance, especially if you're working with APIs, audits, or anything date-sensitive.

It’s one of those small details that only shows up when things go wrong — which makes it worth handling upfront.

Hope you found this tip useful!

Resize images from a file list using ImageMagick

Today, a colleague of mine uploaded a large number of images, only to realise later that they were all uncompressed and the site was loading noticeably slower.

His first thought was to compress them and re-upload everything. But why do that by hand when you can handle it from the terminal with ImageMagick?

Previously, I’ve written about how ImageMagick makes it easy to resize images in bulk. However, sometimes you don’t want to touch every image in a directory.

In cases like this, if you already know which images need resizing, you can list their filenames in a text file and let ImageMagick process only those.

Assume a files.txt like this:

image1.jpg
photo_02.png
banner.jpeg

You can then resize just those images while keeping the original aspect ratio intact and avoiding upscaling:

while IFS= read -r file; do
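  # "500x600>" (escaped below as \>) means: fit within 500x600, shrink only, never enlarge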
  [ -f "$file" ] && mogrify -resize 500x600\> "$file"
done < files.txt

This works well when cleaning up large media folders or fixing legacy content where only a subset of images needs adjustment.
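One thing to keep in mind: mogrify rewrites files in place. If you want to keep the originals, copy them aside first; a minimal sketch using the same files.txt (the originals/ folder name is just an example):

mkdir -p originals
while IFS= read -r file; do
  [ -f "$file" ] && cp -- "$file" originals/
done < files.txt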

Hope you found this tip useful!

Can AI really replace software engineers?

My take on what AI means for software engineers in the near future.

I wrote about this topic on this blog two years ago. But with how dramatically the AI landscape has changed—especially with the advent of more advanced models—I think it’s worth revisiting.

Think about it: if companies like OpenAI, Anthropic, or Microsoft truly believed that AI could replace software engineers, why would they still aggressively hunt for top engineering talent in Silicon Valley or spend billions acquiring startups?

Task or Responsibility?

Here’s how I see it in this AI era: AI can replace many programming tasks, but not the role or responsibility itself.

Programming is only one part of the job. If you step back and think about what you actually do, you’ll realize there’s a lot more involved than just writing code in your favorite editor.

This is where many people go wrong—by conflating a task with the role. It’s similar to saying calculators replaced mathematicians or accountants. Yes, calculators automated arithmetic, but they also enabled people to focus on more complex problems. Arithmetic was never the job; understanding the principles behind it was.

AI works the same way. It makes execution faster, but it doesn’t replace understanding.

What AI can’t do

Think about what you actually do in a typical week.

You sit in closed rooms with project managers and clients who describe vague or unintelligible problems. You’re the one who decodes what they actually need. You look at the codebase and:

  • Figure out which parts need to change and which must remain untouched
  • Push back on feature requests that might introduce long-term technical debt
  • Review a colleague’s code before it reaches production
  • Decide whether something is ready to go live or needs more testing

There are many more responsibilities like this—and none of them are programming.

It’s just your job.

Raising concerns

This post isn’t meant to turn a blind eye to what’s happening in the industry.

We’ve seen massive layoffs across large corporations and companies reducing headcount. Will this happen again? Absolutely. But in most cases, these are cost-cutting measures wrapped in a different narrative, with AI often used as a convenient justification.

So who stays, and who’s at risk?

Engineers who understand that their role goes far beyond writing code—those who bring context, judgment, and clarity to ambiguous problems—are far more likely to remain valuable. On the other hand, those who rely solely on producing output without understanding why they’re producing it are the most vulnerable.

A stronger feedback loop

Will junior engineers be replaced? That’s something I plan to address in a separate post.

But one thing worth discussing is this: if AI handles a large part of code generation, can juniors still build judgment? I think they can—because AI significantly shortens the feedback loop.

Having spent over a decade in this industry, I remember the days of endlessly browsing Stack Overflow and flipping through programming books for answers. What once took hours or days now takes seconds. It may feel like skipping steps, but in reality, you’re just learning faster.

Consider this: you were hired before the AI wave because your company saw value in what you brought to the table. Now, with AI tooling, you’re significantly more productive. You ship faster, handle more complex scenarios, and deliver better outcomes.

It wouldn’t make much sense for a company to let you go simply because you’ve become more efficient at your job.

Staying ahead

If you’re already thinking about how to adapt, here’s where you can start:

  • Use AI tools: Whether it’s Claude, ChatGPT, Cursor, or something else—figure out what works for you and what doesn’t
  • Strengthen your actual role: Focus on understanding requirements, trade-offs, and communication with stakeholders
  • Learn systems end-to-end: The more you understand how a system works as a whole, the harder you are to replace
  • Document your work: Keep track of how you solve problems—it pays off later in your career
  • Stay open to learning: Being defensive or closed-minded will only slow you down. Embrace the tools and move forward

Conclusion

This field is changing rapidly. Tasks that once took days can now be completed in seconds. Some skills are becoming less relevant, while others are more important than ever.

If there’s one thing to take away from this, it’s this: your value was never in writing code. It’s in knowing what to build, why to build it, when to ship, and when to push back. It’s about solving the right problems—problems that actually help people.

Block browser features with Permissions-Policy in Nginx

Recently, I learned that you can explicitly disable browser features like camera, microphone, and geolocation using the Permissions-Policy HTTP response header.

A single line in nginx does the job:

add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

What this does

  • Disables camera access
  • Disables microphone access
  • Disables geolocation access
  • Blocks these features for every origin, including the site itself
  • The browser won’t even prompt the user for permission

The empty () means no origins are allowed to use these features.
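If a feature should stay available to your own origin but remain blocked for everyone else, the self token handles that. A sketch, assuming you still want microphone and geolocation fully off:

add_header Permissions-Policy "camera=(self), microphone=(), geolocation=()" always;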

Why this is useful

  • Improves security and privacy
  • Prevents misuse by third-party scripts
  • Good default for content sites, admin panels, and APIs

The always flag ensures the header is sent even on error responses (404, 500, etc.).
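To confirm the header is actually being sent (example.com is a placeholder for your own domain):

curl -sI https://example.com | grep -i permissions-policy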

Hope you found this tip useful!

Ubuntu Bootloader Recovery

This is a guide on how to recover Ubuntu's GRUB bootloader when an LVM UUID has changed or become corrupted.

The Problem

The error "disk not found: /xxx-xxx" in grub rescue> mode suggests GRUB can't find the device or logical volume it was previously configured to boot from.

This usually happens when:

  • LVM volumes weren't activated during boot
  • The volume group UUID changed or became corrupted

The Context

The /xxx-xxx in the message likely refers to an LVM volume by UUID, e.g.:

error: disk 'lvmid/XXX-XXX-XXX' not found

This typically means GRUB is referencing a missing or renamed LVM logical volume (LV) or volume group (VG).

The Solution

Boot from a live ISO, open a terminal and follow these steps:

1. Check Disks and LVM State

sudo lsblk
sudo fdisk -l

2. Activate LVM Volumes

Scan for volume groups and activate them:

sudo vgscan
sudo vgchange -ay

vgscan looks for volume groups on the attached disks, and vgchange -ay activates them along with their logical volumes.

Then verify with:

sudo lvdisplay

3. Mount the System Manually

Assuming your root volume is /dev/mapper/your_lvm_drive:

sudo mkdir /mnt/recovery
sudo mount /dev/mapper/your_lvm_drive /mnt/recovery

4. Prepare for chroot

sudo mount --bind /dev /mnt/recovery/dev
sudo mount --bind /proc /mnt/recovery/proc
sudo mount --bind /sys /mnt/recovery/sys
sudo chroot /mnt/recovery

The chroot command changes the apparent root directory for the current shell and its child processes, so everything you run afterwards sees /mnt/recovery as /.

5. Reinstall GRUB

grub-install /dev/sdX  # Replace sdX with the actual disk (like /dev/sda)
update-grub

Exit the chroot, unmount the recovery mounts from the live environment, and reboot the system:

exit
sudo umount -R /mnt/recovery
sudo reboot

Hope you found this article useful!

Eliminate Laravel .env leakage between multiple sites on Apache (Windows)

Today, I learned something pretty important (and frustrating until I figured it out): if you're running multiple Laravel applications on Apache (Windows) using mod_php, you can run into a strange bug where one site's .env settings override the other's.

This became painfully obvious when I noticed that one of my Laravel sites was randomly connecting to the wrong database, even though the configurations in each .env were different. Weird, right?

Turns out, the culprit was mod_php.

The problem

I had two Laravel sites hosted on the same Windows machine with Apache. Both were running fine — until they weren’t.

The issue? Sometimes one site would pick up the database config of the other. One minute I’d be on Site A, and then somehow its .env settings were coming from Site B.

It was total chaos, and worse, the issue had existed for more than two years before I decided to dig into it.

The cause

Laravel loads .env values during bootstrap and pushes them into the process environment. With mod_php, all sites share the same long-lived Apache worker processes, so environment values set by one app can linger in a worker and get reused by the other.

So even if each app had its own .env, it didn’t matter. Apache was essentially saying:

“Hey, here’s PHP. I already booted Laravel with this config — just reuse it.”

The solution

After some digging, the solution was to ditch mod_php and switch to FastCGI (mod_fcgid).

With FastCGI, each site runs its own instance of PHP via php-cgi.exe, which boots Laravel in isolation — meaning each .env file is respected per site.

How to configure it

First, you need to disable mod_php in httpd.conf:

# LoadModule php_module "C:/path/to/php/php8apache2_4.dll"
# <FilesMatch \.php$>
#     SetHandler application/x-httpd-php
# </FilesMatch>
# PHPIniDir "C:/path/to/php/"

Then install mod_fcgid from Apache Lounge and copy mod_fcgid.so to modules/ directory and enable it in httpd.conf:

LoadModule fcgid_module modules/mod_fcgid.so
Include conf/extra/httpd-fcgid.conf

Next, you have to create/update httpd-fcgid.conf:

<IfModule fcgid_module>
   AddHandler fcgid-script .php
   FcgidInitialEnv PHPRC "C:/path/to/php"
   FcgidWrapper "C:/path/to/php/php-cgi.exe" .php
   Options +ExecCGI
   FcgidMaxProcessesPerClass 150
   FcgidMaxRequestsPerProcess 1000
   FcgidProcessLifeTime 300
   FcgidIOTimeout 120
   FcgidPassHeader Authorization
   FcgidFixPathinfo 0
</IfModule>

Finally, update your virtual host:

<VirtualHost *:443>
   ServerName example.com
   DocumentRoot "C:/path/to/website/public"

   <Directory "C:/path/to/website/public">
      Options +ExecCGI -Indexes +FollowSymLinks
      AllowOverride All
      Require all granted
      AddHandler fcgid-script .php
      FcgidWrapper "C:/path/to/php/php-cgi.exe" .php
   </Directory>

   SSLEngine on
   SSLCertificateFile "..."
   SSLCertificateKeyFile "..."
</VirtualHost>

Restart Apache and clear the Laravel application cache:

httpd -k restart
php artisan config:clear
php artisan cache:clear

Bonus: Fixing 403 Forbidden Errors

After switching to FastCGI, I ran into a 403 Forbidden error on one site. That turned out to be a missing Options +ExecCGI and Require all granted in the <Directory> block. Once added, everything worked perfectly.

Result

Now, each Laravel site runs completely isolated, even though they use the same php-cgi.exe. No more .env bleed. No more wrong DBs. Just clean, independent Laravel environments — as it should be.

Takeaways

  • Don’t run multiple Laravel apps on Apache with mod_php unless you're okay with shared config risks.
  • Use mod_fcgid and php-cgi.exe for true isolation.
  • Even if using the same PHP version, FastCGI will spawn separate processes, so .env loading stays scoped.
  • Always check Apache's error logs (error.log) and php_sapi_name() to confirm which SAPI is serving each site (see the quick check below).
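For that last check, a throwaway route works well. A sketch, assuming a standard routes/web.php; the route name is arbitrary and should be removed once you've confirmed the setup:

// routes/web.php - temporary diagnostic route (remove after checking)
use Illuminate\Support\Facades\Route;

Route::get('/env-check', function () {
    return [
        'sapi' => php_sapi_name(),    // 'cgi-fcgi' under FastCGI, 'apache2handler' under mod_php
        'db'   => env('DB_DATABASE'), // should differ per site if .env isolation works
    ];
});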

If you’re running Laravel on Windows with Apache — this fix is a must.

Happy hosting and hope you found this article useful!