Abdush Shakoor's Weblog

Writings, experiments & ideas.

Rewriting my SSG again — The right way

Around eight years ago, I wrote my own static site generator in Python. It was very simple. No frameworks, no strong structure — just scripts that generated pages for my blog.

It worked, so I kept using it.

There were no proper validations. No structured exception handling. Almost no separation of concerns. I didn’t follow standard practices. At that time, it was just a hobby project, and I only cared that it produced HTML files correctly.

And it did.

To be honest, I was too lazy to rewrite it and it was "stable" enough, so I ignored the technical debt.

Contrasting differences

Over the years, I’ve worked on much larger and more structured systems — enterprise APIs, background services, integrations, layered architectures. I’ve learned to care about validation, clean boundaries, proper error handling, and maintainability.

Now when I open my old SSG code, I can clearly see the difference.

It reflects how I used to think about code.

It’s not terrible. It’s just basic. It lacks discipline.

There’s no validation layer. No clear contracts. Some modules handle too many responsibilities. Dependencies are outdated. Type hints are either missing or inconsistent.

The real issue is that it works — but extending it doesn’t feel comfortable.

Why I decided to rewrite it

This rewrite is not about fixing bugs. There are no critical issues. It’s just that the single god-like Python class is really bugging me.

I want to:

  • Add proper validation
  • Introduce structured exception handling
  • Strengthen type hints
  • Upgrade dependencies
  • Separate responsibilities clearly
  • Make it easier to extend in the future

I also want to refresh my blog properly. If I’m going to continue building on top of this tool, it should be something I trust and feel comfortable maintaining.

Right now, adding a new feature feels like touching something fragile. After refactoring, I want it to feel stable and predictable.

Time for some justice!

This time, I’m approaching it like a real project, even though it’s still personal.

Clear module boundaries. Explicit data models. Proper validation. Clean CLI entry points. Better organization overall.
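
To make that concrete, here’s a minimal sketch of the direction I have in mind; the Post model and PostValidationError names are illustrative, not taken from the actual codebase:

from __future__ import annotations

import re
from dataclasses import dataclass, field
from datetime import date
from pathlib import Path

SLUG_RE = re.compile(r"[a-z0-9]+(?:-[a-z0-9]+)*")


class PostValidationError(ValueError):
    """Raised when a post's metadata fails validation."""


@dataclass(frozen=True)
class Post:
    """Explicit, typed data model for a single blog post."""
    title: str
    slug: str
    published: date
    source: Path
    tags: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Fail fast with a clear error instead of silently emitting broken HTML
        if not self.title.strip():
            raise PostValidationError(f"{self.source}: title is empty")
        if not SLUG_RE.fullmatch(self.slug):
            raise PostValidationError(f"{self.source}: invalid slug {self.slug!r}")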

I’ll be using PyCharm heavily during this process. When renaming models, reorganizing modules, or tightening type contracts, I want safe refactoring and immediate feedback. Strong inspections and accurate find usages will help a lot when reshaping older code.

Tooling becomes more important during refactoring than during initial development.

I’m not trying to over-engineer it. I’m just trying to build it properly — based on what I’ve learned over the years.

Conclusion

Sometimes old code is not a mistake. It’s just a snapshot of your earlier experience.

Rewriting this SSG is simply updating it to match how I think today.

Over the next few days, I’ll be cleaning it up and rebuilding it with better structure. Not because it failed — but because I’ve improved.

Hope you liked reading this article!

How JetBrains IDEs Improved My Productivity Across .NET, Laravel, and Python

I work across three different stacks almost every week: .NET for enterprise APIs and integrations, Laravel for large backend systems, and Python for CLI tools and automation.

On paper, these ecosystems are very different. Different communities, different tooling culture, different philosophies.

But my development experience feels almost the same every day — because I use JetBrains IDEs for all of them.

Rider, PHPStorm, and PyCharm share almost identical core capabilities:

  • Smart navigation
  • Safe refactoring
  • Powerful debugging
  • Git integration
  • Database tools
  • Deep code inspections

This is not about which IDE has more features; it’s about staying in that flow of productivity.

Being in the flow

What I like about the JetBrains suite is that, when I move from .NET to Laravel to Python, I don’t feel like I switched tools. I only switched languages and frameworks.

That small difference matters more than people think.

As developers, we already deal with architecture decisions, business rules, integrations, performance issues, and production risks. When I change stacks, I don’t want to also change keyboard shortcuts, debugger behavior, navigation style, or refactoring workflow.

With JetBrains, the mental model stays stable, and that reduces my mental friction in a very real way.

No more refactoring nightmares!

I might be slightly opinionated but hear me out, okay?

If you work on serious systems — payments, integrations, background jobs, multi-layer architectures — you cannot afford sloppy refactoring.

I frequently:

  • Move logic from controllers into services
  • Rename DTO properties
  • Break large files into smaller components
  • Extract interfaces
  • Clean legacy code

Making use of features like Search Everywhere, Go to Definition, Find Usages, and Refactor has significantly improved my confidence in the code, and that allows me to continuously improve the architecture instead of being afraid to touch old code.

Debugging feels calm and predictable

Whether I’m debugging:

  • A controller in .NET
  • A service in Laravel
  • A module in Python

The experience is consistent and predictable: breakpoints, variable inspection, stepping into async calls, evaluating expressions, you name it!

When debugging feels predictable, solving complex problems becomes less stressful. And that directly improves productivity.

The IDE as a Second Reviewer

Another thing I value is how deeply the IDE understands the code.

It doesn’t just highlight syntax errors. It understands type relationships, method references, incorrect imports, potential null issues, and unused dependencies.

Many mistakes are caught before I even run the application.

It feels like having a second reviewer sitting beside me while I write code.

For someone working across multiple stacks, that safety net is quite powerful.

A consistent ecosystem

The biggest advantage, however, is consistency.

I don’t want three different mental environments; I want one that adapts to the language. With Rider, PHPStorm, and PyCharm, the engine, the navigation, and the philosophy all feel the same.

I mean, don't get me wrong, I still use VS Code, Visual Studio, and lighter setups. They are really good tools in their own way.

But when working on enterprise APIs, government systems, payment workflows, and long-running background processes, I prefer:

  • Depth over minimalism
  • Strong refactoring over quick editing
  • Intelligent tooling over lightweight flexibility

Productivity is rarely about one big feature. It’s about small improvements repeated every day:

  • Strong Git integration
  • Built-in database tools
  • Consistent formatting
  • Reliable search
  • Clean UI scaling on high-resolution screens

Each one saves seconds. Over months and years, those seconds become hours.

Final Thoughts

By the time you’re done reading this article, you might assume that working across multiple tech stacks like .NET, Laravel, and Python would feel chaotic.

For me, it feels like I’ve found the right ecosystem: one that doesn’t get in my way and lets me stay consistent, productive, and focused on building things and solving problems instead of fighting my tools.

Consistency is one of the most underrated productivity multipliers in software development.

Hope you liked reading this article!

Why Hangfire recurring jobs should always have stable IDs

Recently, I learned that not giving explicit IDs to Hangfire recurring jobs can silently create duplicates.

When you call RecurringJob.AddOrUpdate(...) without a pre-defined ID, Hangfire auto-generates one based on the method expression. That sounds fine—until you deploy across multiple nodes or refactor your code.

In such cases, Hangfire may generate a new derived ID while the old recurring job continues running in the background. The result? Duplicate jobs executing on schedule, often going unnoticed.
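
For illustration, the risky version is a call like this, where the recurring job ID is derived from the method expression itself:

// Risky: no explicit ID, so Hangfire derives one from the expression
RecurringJob.AddOrUpdate<ICustomBackgroundServiceManager>(
    s => s.ResendPaymentReportToVendorHourly(),
    Cron.Hourly(3)
);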

Simple fix

Make sure that you always pass a clear, human-readable ID:

RecurringJob.AddOrUpdate<ICustomBackgroundServiceManager>(
    "payments:reconcile-3h", // Human-readable ID
    s => s.ResendPaymentReportToVendorHourly(),
    Cron.Hourly(3)
);

Next, ensure that overlaps are impossible in a multi-node setup by adding this attribute to your job method:

[DisableConcurrentExecution(timeoutInSeconds: 3600)]
public Task ResendPaymentReportToVendorHourly() { /* ... */ }

This uses a distributed lock backed by Hangfire storage—so even with multiple nodes, only one execution runs at a time. If you’re wondering whether this locks the job for the full hour: it doesn’t. The timeout exists for crash-safety, not throttling.

After doing this, recurring jobs become much easier to reason about, identify, pause, or delete in a multi-node environment.

Hope you found this tip useful!

Running Laravel queues with Supervisor on Ubuntu

Laravel queues aren’t new to me, but I realized I’d never really written down how I usually set them up on a fresh Ubuntu server.

Whenever I need queue workers to run continuously — and survive restarts or crashes — I almost always reach for Supervisor. It’s simple, boring, and gets the job done.

First, install it on your server:

sudo apt update
sudo apt install supervisor

Then create a small config file for the queue worker:

sudo vim /etc/supervisor/conf.d/laravel-worker.conf

This is the setup I generally start with:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/project/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/project/storage/logs/laravel-worker.log
stopwaitsecs=600

A few quick notes on what this does:

  • It runs queue:work instead of queue:listen, so the worker stays in memory
  • Two worker processes are started in parallel (numprocs=2)
  • If the queue is empty, the worker sleeps for a few seconds instead of spinning
  • Failed jobs are retried a limited number of times
  • The worker is force-restarted every hour to avoid memory leaks
  • If a worker crashes, Supervisor brings it back up automatically

Once the file is saved, reload Supervisor’s config:

sudo supervisorctl reread
sudo supervisorctl update

Start the workers:

sudo supervisorctl start laravel-worker:*

And check their status:

sudo supervisorctl status

Whenever I deploy new code or change environment variables, I just restart them:

sudo supervisorctl restart laravel-worker:*
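
Laravel also ships with a queue:restart command, which signals every worker to exit gracefully once its current job finishes; since autorestart=true, Supervisor then brings fresh processes back up on its own:

cd /var/www/project
php artisan queue:restart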

Logs end up in Laravel’s storage/logs, which is usually the first place I look when something feels off.

This setup has been reliable for me across multiple projects. No dashboards, no extra moving parts — just queue workers quietly doing their job in the background.

Hope you found this useful!

DateTimeKind.Unspecified can quietly break your dates

Two months ago, a client raised a critical ticket: some users complained that the start date of their financial year had moved back by one day. The bug wasn’t caught during UAT while integrating their API, and I got to discover this date conversion issue while I was on vacation (yes, that sucks!).

I had a perfectly normal-looking date:

2025-01-01 00:00:00.0000000

Nothing fancy. But after converting it to UTC, I noticed something odd — the date changed.

Luckily, I was able to trace where it was coming from, and after debugging in Visual Studio 2022, it turned out the culprit was DateTimeKind.Unspecified.

When a DateTime is parsed without timezone information, .NET marks it as Unspecified. If you then call ToUniversalTime(), .NET assumes the value is local time and converts it to UTC. On a UTC+5:30 system, that means:

2025-01-01 00:00 → 2024-12-31T18:30:00Z

Same instant, different calendar date. Easy to miss, painful to debug.

The fix

If your date is already meant to be UTC, you need to say so explicitly:

DateTime.SpecifyKind(inputDateTime, DateTimeKind.Utc);
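
If you control the parsing step, you can state that assumption there instead. Here’s a small sketch using DateTimeStyles (the input string is hypothetical):

using System.Globalization;

var parsed = DateTime.Parse(
    "2025-01-01T00:00:00",
    CultureInfo.InvariantCulture,
    DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);
// parsed.Kind is now DateTimeKind.Utc and the wall-clock value is unchanged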

Or handle it defensively in one place, for example in a small helper (the name here is illustrative):

static DateTime EnsureUtc(DateTime inputDateTime)
{
    switch (inputDateTime.Kind)
    {
        case DateTimeKind.Utc:
            return inputDateTime;

        case DateTimeKind.Local:
            return inputDateTime.ToUniversalTime();

        case DateTimeKind.Unspecified:
        default:
            // Assumes Unspecified values were meant to be UTC to begin with
            return DateTime.SpecifyKind(inputDateTime, DateTimeKind.Utc);
    }
}

Takeaway

Never leave DateTimeKind to chance, especially if you’re working with APIs, audits, or anything date-sensitive.

It’s one of those small details that only shows up when things go wrong — which makes it worth handling upfront.

Hope you found this tip useful!

Resize images from a file list using ImageMagick

Today, a colleague of mine uploaded a large number of images, only to realise later that they were all uncompressed and the site was loading noticeably slower.

His first thought was to compress them and re-upload everything, but why do it the tedious way when you can handle it straight from the terminal using ImageMagick?

Previously, I’ve written about how ImageMagick makes it easy to resize images in bulk. However, sometimes you don’t want to touch every image in a directory.

In cases like this, if you already know which images need resizing, you can list their filenames in a text file and let ImageMagick process only those.

Assume a files.txt like this:

image1.jpg
photo_02.png
banner.jpeg

You can then resize just those images while keeping the original aspect ratio intact and avoiding upscaling:

while IFS= read -r file; do
  [ -f "$file" ] && mogrify -resize 500x600\> "$file"
done < files.txt
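
Before or after running it, you can sanity-check the dimensions of the same list with identify, which only reads the files and never modifies them:

while IFS= read -r file; do
  [ -f "$file" ] && identify -format "%f: %wx%h\n" "$file"
done < files.txt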

This works well when cleaning up large media folders or fixing legacy content where only a subset of images needs adjustment. Just remember that mogrify overwrites files in place, so keep backups of anything you might want to restore.

Hope you found this tip useful!

Can AI really replace software engineers?

My take on the impact of AI in the near future for software engineers.

I wrote about this topic on this blog two years ago. But with how dramatically the AI landscape has changed—especially with the advent of more advanced models—I think it’s worth revisiting.

Think about it: if companies like OpenAI, Anthropic, or Microsoft truly believed that AI could replace software engineers, why would they still aggressively hunt for top engineering talent in Silicon Valley or spend billions acquiring startups?

Task or Responsibility?

Here’s how I see it in this AI era: AI can replace many programming tasks, but not the role or responsibility itself.

Programming is only one part of the job. If you step back and think about what you actually do, you’ll realize there’s a lot more involved than just writing code in your favorite editor.

This is where many people go wrong—by conflating a task with the role. It’s similar to saying calculators replaced mathematicians or accountants. Yes, calculators automated arithmetic, but they also enabled people to focus on more complex problems. Arithmetic was never the job; understanding the principles behind it was.

AI works the same way. It makes execution faster, but it doesn’t replace understanding.

What AI can’t do

Think about what you actually do in a typical week.

You sit in closed rooms with project managers and clients who describe vague or unintelligible problems. You’re the one who decodes what they actually need. You look at the codebase and:

  • Figure out which parts need to change and which must remain untouched
  • Push back on feature requests that might introduce long-term technical debt
  • Review a colleague’s code before it reaches production
  • Decide whether something is ready to go live or needs more testing

There are many more responsibilities like this—and none of them are programming.

It’s just your job.

Raising concerns

This post isn’t meant to turn a blind eye to what’s happening in the industry.

We’ve seen massive layoffs across large corporations and companies reducing headcount. Will this happen again? Absolutely. But in most cases, these are cost-cutting measures wrapped in a different narrative, with AI often used as a convenient justification.

So who stays, and who’s at risk?

Engineers who understand that their role goes far beyond writing code—those who bring context, judgment, and clarity to ambiguous problems—are far more likely to remain valuable. On the other hand, those who rely solely on producing output without understanding why they’re producing it are the most vulnerable.

A stronger feedback loop

Will junior engineers be replaced? That’s something I plan to address in a separate post.

But one thing worth discussing is this: if AI handles a large part of code generation, can juniors still build judgment? I think they can—because AI significantly shortens the feedback loop.

Having spent over a decade in this industry, I remember the days of endlessly browsing Stack Overflow and flipping through programming books for answers. What once took hours or days now takes seconds. It may feel like skipping steps, but in reality, you’re just learning faster.

Consider this: you were hired before the AI wave because your company saw value in what you brought to the table. Now, with AI tooling, you’re significantly more productive. You ship faster, handle more complex scenarios, and deliver better outcomes.

It wouldn’t make much sense for a company to let you go simply because you’ve become more efficient at your job.

Staying ahead

If you’re already thinking about how to adapt, here’s where you can start:

  • Use AI tools: Whether it’s Claude, ChatGPT, Cursor, or something else—figure out what works for you and what doesn’t
  • Strengthen your actual role: Focus on understanding requirements, trade-offs, and communication with stakeholders
  • Learn systems end-to-end: The more you understand how a system works as a whole, the harder you are to replace
  • Document your work: Keep track of how you solve problems—it pays off later in your career
  • Stay open to learning: Being defensive or closed-minded will only slow you down. Embrace the tools and move forward

Conclusion

This field is changing rapidly. Tasks that once took days can now be completed in seconds. Some skills are becoming less relevant, while others are more important than ever.

If there’s one thing to take away from this, it’s this: your value was never in writing code. It’s in knowing what to build, why to build it, when to ship, and when to push back. It’s about solving the right problems—problems that actually help people.

Block browser features with permissions policy in Nginx

Recently, I learned that you can explicitly disable browser features like camera, microphone, and geolocation using the Permissions-Policy HTTP response header.

A single line in nginx does the job:

add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

What this does?

  • Disables camera access
  • Disables microphone access
  • Disables geolocation access
  • Applies to all origins
  • The browser won’t even prompt the user for permission

The empty () means no origins are allowed to use these features.
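
If you do need one of these features on your own pages, you can allow it for your origin only. For example, assuming only the camera should work, and only on same-origin pages:

add_header Permissions-Policy "camera=(self), microphone=(), geolocation=()" always;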

Why this is useful

  • Improves security and privacy
  • Prevents misuse by third-party scripts
  • Good default for content sites, admin panels, and APIs

The always flag ensures the header is sent even on error responses (404, 500, etc.).

Hope you found this tip useful!