megacolorboy

Abdush Shakoor's Weblog

Writings, experiments & ideas.

Laravel Scopes vs. Builder Queries: Which Should You Use?

If you're building a Laravel application, you're probably spending a lot of time writing queries. And as your project grows, you'll inevitably face this question: Should I use scopes or builder queries? While both have their place, choosing the right tool for the job can make a world of difference. Here's my opinionated take on the matter.

The Case for Scopes

Scopes are, quite simply, one of Laravel's hidden gems. They let you encapsulate common query logic within your models, making your code clean, reusable, and easy to read. Think of them as tiny, purposeful functions designed to save you time and sanity.

Take this example:

<?php
    // In your model
    public function scopeActive($query)
    {
        return $query->where('status', 'active');
    }

    // Usage
    $activeUsers = User::active()->get();
?>

Suddenly, instead of littering your controllers with where('status', 'active') everywhere, you have a single, reusable method that reads like English. Scopes shine when you need commonly used filters like active, published, or recent. They’re easy to use, they’re consistent, and they make your code feel more intuitive.

Why I Prefer Scopes for Reusability

Here’s the thing: in any sizable Laravel app, you’ll inevitably find patterns in your queries. Rewriting the same query logic over and over? That’s a code smell. By using scopes, you centralize your query logic, reducing redundancy and improving maintainability.

For example:

<?php
    $publishedPosts = Post::published()->recent()->get();
?>

This reads beautifully and keeps your codebase DRY (Don’t Repeat Yourself). If the definition of "published" or "recent" changes, you only need to update it in one place. Scopes turn repetitive query logic into single lines of magic.
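
For reference, here's a minimal sketch of how those two scopes might be defined on the Post model. The published_at column and the 7-day window are assumptions for illustration, not part of the original example:

<?php
    // In your Post model (sketch; the column name and window are assumptions)
    public function scopePublished($query)
    {
        return $query->whereNotNull('published_at');
    }

    public function scopeRecent($query)
    {
        return $query->where('created_at', '>=', now()->subDays(7));
    }
?>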

The Case for Builder Queries

That said, not everything belongs in a scope. Some queries are just too specific, too complex, or too dynamic. This is where builder queries come in.

Imagine you’re building a report that requires multiple joins, conditional logic, or dynamic filters. Scopes could become unwieldy here. Instead, a well-crafted builder query in your controller, service, or repository might make more sense:

<?php
    $users = User::where('status', 'active')
        ->whereDate('created_at', '>', now()->subDays(30))
        ->orderBy('created_at', 'desc')
        ->get();
?>

Builder queries are perfect for:

  • One-off operations.
  • Highly dynamic queries.
  • Scenarios where scopes would make your models bloated or overly complex.

The flexibility of builder queries is unmatched. You can construct them on the fly, adapt them to user inputs, and handle edge cases without worrying about making your models an unreadable mess.
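
To make that concrete, here's a hedged sketch of a user-driven query built with the query builder's when() helper, so each filter is only applied if the corresponding input is present. The request field names (status, from, sort) are assumptions for illustration:

<?php
    // Sketch: apply filters only when the request actually provides them
    $users = User::query()
        ->when($request->input('status'), function ($query, $status) {
            return $query->where('status', $status);
        })
        ->when($request->input('from'), function ($query, $from) {
            return $query->whereDate('created_at', '>=', $from);
        })
        ->orderBy($request->input('sort', 'created_at'), 'desc')
        ->get();
?>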

My Opinionated Take: Use Scopes as a Default, Builder Queries for the Edge Cases

If I had to pick a side, I’d say: lean on scopes as your default tool, and reserve builder queries for those rare cases when scopes just don’t cut it. Why?

  1. Scopes enhance readability. Your queries read like sentences, and your intentions are crystal clear.
  2. Scopes promote DRY principles. They’re reusable and encapsulate logic, which makes future maintenance a breeze.
  3. Builder queries are powerful but can become messy. Unless you’re careful, a complex query in your controller can grow into a sprawling monstrosity. Keep your controllers lean and delegate to scopes or dedicated query classes where possible.

When Not to Use Scopes

There are times when using a scope might do more harm than good:

  • Too much complexity: If a scope needs multiple parameters or involves complex joins, it’s better suited as a custom query builder or a dedicated repository method.
  • Rarely used logic: Don’t clutter your models with scopes for queries that are only needed once or twice.
  • Dynamic, user-driven queries: When filters are highly variable, builder queries give you the flexibility you need.

Conclusion: Balance Is Key

Laravel gives you powerful tools to write queries, and both scopes and builder queries have their roles. Use scopes to simplify and centralize reusable logic, and reach for builder queries when flexibility and complexity demand it. By balancing both, you’ll keep your codebase clean, maintainable, and a joy to work with.

So, what’s your take? Are you a scope enthusiast or a builder query champion? Either way, Laravel’s got you covered.

A Starter Guide to Software Version Control

Version Control Systems (VCS) are indispensable tools for modern software development. When I first started working with version control, I was overwhelmed by the terminology and the various workflows. Over time, I realized that mastering VCS isn't just about understanding commands—it's about adopting practices that make development smoother and more collaborative. In this post, I’ll share my journey, the lessons I’ve learned, and a practical approach to version control that works for teams of any size.

What is Version Control?

I remember my first experience losing hours of work because I accidentally overwrote a file. That’s when I discovered the magic of version control. It’s like having a time machine for your code! At its core, version control tracks and manages changes to your software code. Here’s why it’s essential:

  • Collaborate with ease: Gone are the days of emailing files back and forth. Multiple developers can work on the same codebase without stepping on each other’s toes.
  • Track every change: With a detailed history of changes, debugging becomes less of a nightmare.
  • Rollback anytime: If something breaks, you can revert to a stable version in seconds.

Today, tools like Git, Subversion (SVN), and Mercurial are popular choices, with Git leading the pack. If you haven’t tried Git yet, you’re missing out on a developer’s best friend.

Key Concepts

Here’s a quick glossary that helped me when I was starting out:

  1. Repository: Think of this as your project’s home. It stores your code and its entire version history.
  2. Commit: A snapshot of your code changes. A good commit is like leaving breadcrumbs for your future self or teammates.
  3. Branch: This was a game-changer for me. Branches let you work on new features or fixes without touching the main codebase.
  4. Merge: When you’re ready to bring your work back into the main project, merging combines it with the existing code.
  5. Tag: These are like bookmarks, marking specific points in your project’s history—perfect for releases.
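
If you're using Git, those five concepts map onto a handful of everyday commands, roughly like this (a quick sketch; the branch and tag names are only examples):

git init                                      # create a repository
git add . && git commit -m "Add login form"   # record a commit (snapshot)
git branch feature/new-login-system           # create a branch
git checkout feature/new-login-system         # switch to it and work there
git merge feature/new-login-system            # merge it back (run from the target branch)
git tag -a v1.0 -m "First stable release"     # bookmark a release with a tag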

Types of Branches

When I joined my first team project, I was introduced to a branching strategy that revolutionized the way I worked. Here’s how it breaks down:

1. Main Branches

  • Main (main or master): This is the crown jewel—the stable, production-ready branch. It’s sacred territory where only thoroughly tested code belongs.
  • Development (dev): This is where the magic happens. New features and fixes are integrated and tested here before they’re ready for production.

2. Feature Branches

When you’re working on something new, create a feature branch. Here’s how it works:

  • Purpose: To develop a specific feature.
  • Workflow: Start from dev, and when you’re done, merge back into dev.
  • Naming Convention: feature/new-login-system

3. Release Branches

Preparing for a new release? Here’s what you do:

  • Purpose: To finalize a version for production.
  • Workflow: Start from dev, do final testing and fixes, then merge into main.
  • Naming Convention: release/v1.0

4. Hotfix Branches

Production bugs can’t wait. That’s where hotfix branches save the day:

  • Purpose: To fix critical issues in production.
  • Workflow: Start from main, fix the issue, then merge into both main and dev.
  • Naming Convention: hotfix/login-bugfix

A Practical Workflow Example

Here’s a typical workflow I follow:

  1. Feature Development: Let’s say I’m building a new login system. I’d create a branch called feature/new-login-system off dev. Once the feature is complete and tested, I merge it back into dev.
  2. Preparing for Release: When it’s time to launch, I’d create a branch called release/v1.0 from dev. After some final tweaks, I merge it into main and tag it as v1.0.
  3. Production Hotfix: If a bug pops up in v1.0, I’d create a branch called hotfix/login-bugfix from main. Once fixed, I’d merge it into both main and dev and tag it as v1.0.1.
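
Here's roughly what that workflow looks like as raw Git commands (a sketch; it assumes main and dev already exist):

# 1. Feature development
git checkout dev
git checkout -b feature/new-login-system
# ...commit and test the feature...
git checkout dev
git merge feature/new-login-system

# 2. Preparing for release
git checkout -b release/v1.0 dev
# ...final tweaks and fixes...
git checkout main
git merge release/v1.0
git tag -a v1.0 -m "Release v1.0"

# 3. Production hotfix
git checkout -b hotfix/login-bugfix main
# ...fix the bug...
git checkout main
git merge hotfix/login-bugfix
git tag -a v1.0.1 -m "Hotfix v1.0.1"
git checkout dev
git merge hotfix/login-bugfix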

Best Practices

Here are some lessons I learned the hard way:

  1. Write Clear Commit Messages: Your future self will thank you. A good message explains what changed and why.
  2. Keep Commits Small: Each commit should represent a single, logical change. This makes debugging a breeze.
  3. Review Before Merging: Always use pull requests or merge requests. Two heads are better than one.
  4. Use Tags for Releases: Tags are lifesavers when you need to rollback or track changes.
  5. Backup Your Repository: This might sound obvious, but don’t take it for granted.
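
As a small illustration of the first point, a clear message says what changed and why; Git lets you pass multiple -m flags to add a body paragraph below the subject line (the scenario here is made up):

git commit -m "Fix login redirect loop for expired sessions" \
           -m "Expired sessions lost the intended URL on redirect, so users bounced between /login and the dashboard. Store the URL before invalidating the session."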

Why Version Control Is a Game-Changer

For me, version control transformed how I approach coding. It made collaboration smoother, debugging faster, and deployments safer. Whether you’re working solo or with a team, adopting a thoughtful version control strategy is a must.

Final Thoughts

Version control is more than just a tool—it’s a mindset. By following the practices and strategies outlined here, you’ll be well on your way to mastering it. Trust me, your future self—and your teammates—will thank you.

Mastering PHPStorm: Essential Shortcuts and Tips for Faster Coding

As a developer, you know that every second counts. PHPStorm, the powerful IDE from JetBrains, is packed with features designed to make coding faster and more efficient—but only if you know how to use them! Here are some essential keybindings and tips that can supercharge your PHPStorm experience and help you code at lightning speed.

Quick Access to Any Action

Ever found yourself lost in PHPStorm’s vast functionality, looking for a specific tool or action? Press Ctrl + Shift + A to bring up the Find Action dialog. This is your command center for quickly finding and triggering any action, even if you don’t know the shortcut for it.

Locate Symbols

Working on a large project? Ctrl + Alt + Shift + N is your friend. This key combination lets you search for any symbol in your project. Whether it's a function, method, or variable, finding it with this shortcut is a breeze.

File Search Made Easy

When you need to open a file but don’t want to dig through directories, use Ctrl + Shift + N. This brings up the file search dialog, allowing you to open any file in your project by typing its name.

Searching for Classes

Quickly locate classes by pressing Ctrl + N. This shortcut is a must for object-oriented projects where multiple classes interact. Simply type the class name, and PHPStorm will locate it for you.

Rename Symbols

Refactoring code often involves renaming variables, functions, or classes. Press Shift + F6 to Refactor/Rename any symbol. This shortcut ensures that PHPStorm updates all references to the renamed symbol, saving you from manually updating each instance.

Replace Text in the Current File

Working on a specific file and need to make some quick replacements? Ctrl + R opens the Find and Replace dialog within the current file, making edits quick and painless.

Project-Wide Search

Need to search or replace text across the entire project? Use Ctrl + Shift + F for a project-wide search. For project-wide find-and-replace, press Ctrl + Shift + R. These shortcuts are invaluable for large-scale code adjustments.

Select the Next Occurrence

Find yourself needing to select multiple occurrences of a word or phrase? Alt + J selects the next occurrence of your current selection (case-sensitive). This feature is ideal for making consistent changes across multiple lines without using traditional search-and-replace.

Move Lines Up or Down

Reordering lines of code is common in refactoring. Alt + Shift + Up/Down (the arrow keys) lets you move the current line (or selection) up or down. A simple, powerful way to reorganize your code with ease.

Get Quick Fixes

If PHPStorm detects an issue or sees a possible improvement, pressing Alt + Enter provides a Quick Fix or Suggestion. It’s a fantastic way to implement suggestions quickly and clean up your code without manually combing through error messages.

Extract Method Shortcut

Refactoring code into methods is essential for clean, reusable code. Highlight the code block and press Ctrl + Alt + M to Extract Method, instantly creating a new method from the selected code. This is a must for breaking down long, complex functions into manageable pieces.

Bonus: Multi-Cursor Mode

If you’re not using PHPStorm’s multi-cursor mode, you’re missing out on some serious productivity gains! Here’s how to use it:

  1. Select a piece of code.
  2. Press Alt + Shift + Insert to activate multi-cursor mode. Now, you can place multiple cursors wherever needed, allowing you to type or edit in multiple locations simultaneously.

Final Thoughts

Mastering these PHPStorm shortcuts can drastically reduce your time spent on repetitive tasks, allowing you to focus on what really matters: building great software. Give these keybindings a try, and see how they can boost your productivity and help you achieve a smoother, more efficient coding workflow.

Installing and Configuring Elasticsearch and Kibana on Ubuntu

I've always wanted to write this article but never got around to it. Some time ago, I wrote down some notes on how to set up Elasticsearch and Kibana on an Ubuntu server (20.04 or later), and this post is based on them. Whether you're building a search engine, analyzing logs, or just exploring the Elastic Stack, this guide will help you get everything up and running smoothly.

Here's how I did it:

Setting up Elasticsearch

Add the Elasticsearch GPG key and repository

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Install Elasticsearch

sudo apt update
sudo apt install elasticsearch

Configure Elasticsearch by editing /etc/elasticsearch/elasticsearch.yml

network.host: localhost
http.port: 9200
http.host: 0.0.0.0

Start and enable Elasticsearch

sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
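
If you want to confirm the service actually came up before moving on, the usual systemd checks work here:

sudo systemctl status elasticsearch
sudo journalctl -u elasticsearch --since "10 minutes ago"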

Set up an NGINX reverse proxy for Elasticsearch

Add this server block to your NGINX config (e.g., /etc/nginx/sites-available/your_domain):

server {
   listen 8834;

   # Uncomment for SSL
   # listen 8834 ssl;
   # ssl_certificate /path/to/certificate/crt.pem;
   # ssl_certificate_key /path/to/key/key.pem;
   # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   # ssl_prefer_server_ciphers on;
   # ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

   server_name your_domain;

   location / {
      proxy_pass http://localhost:9200;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
   }
}

and then test and restart NGINX service:

sudo nginx -t
sudo systemctl restart nginx

Once done, you can verify if Elasticsearch is running by visiting http://yourdomain.com:8834 in your browser. You should see a JSON response with Elasticsearch details.
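
From the server itself, a quick curl does the same check without going through NGINX:

curl -X GET "http://localhost:9200"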

Setting up Kibana

Install Kibana

sudo apt install kibana

Configure Kibana by editing /etc/kibana/kibana.yml

server.port: 5601
server.host: 0.0.0.0
elasticsearch.hosts: ["http://localhost:9200"]

Start and enable Kibana

sudo systemctl enable kibana
sudo systemctl start kibana

Create an admin user for Kibana

echo "your_admin_username:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Make sure that you enter a strong password when prompted.

Set up an NGINX reverse proxy for Kibana

Add this server block to your NGINX config:

server {
   listen 8833;

   # Uncomment for SSL
   # listen 8833 ssl;
   # ssl_certificate /path/to/certificate/crt.pem;
   # ssl_certificate_key /path/to/key/key.pem;
   # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   # ssl_prefer_server_ciphers on;
   # ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

   server_name your_domain;

   auth_basic "Restricted Access";
   auth_basic_user_file /etc/nginx/htpasswd.users;

   location / {
      proxy_pass http://localhost:5601;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
   }
}

and then test and restart NGINX service:

sudo nginx -t
sudo systemctl restart nginx

Once done, you can try to access Kibana by visiting http://yourdomain.com:8833. You'll be prompted for the admin credentials you created earlier.

Wrapping Up

And there you go! Elasticsearch and Kibana are now up and running on your Ubuntu server, ready to help you search, analyze, and visualize your data. Whether you're diving into logs, building a search feature, or just experimenting with the Elastic Stack, this setup should give you a solid foundation.

Hope you found this useful!

How to Send Test Emails via PowerShell?

Whenever I want to send a test email from a Windows Server, I use PowerShell. It makes things easy and is a quick yet efficient way to test email functionality, troubleshoot SMTP servers, or verify email delivery.

Here’s what I usually do, including examples with and without user authentication.

1. Basic Command to Send a Test Email

The Send-MailMessage cmdlet in PowerShell makes it easy to send emails via an SMTP server. Here’s the basic command I use:

Send-MailMessage -SmtpServer "0.0.0.0" -Port 25 -From "sample@example.com" -To "john.doe@example.com" -Subject "A subject" -Body "A body"
  • -SmtpServer: The IP address or hostname of the SMTP server.
  • -Port: The SMTP port (default is 25).
  • -From: The sender’s email address.
  • -To: The recipient’s email address.
  • -Subject: The subject of the email.
  • -Body: The content of the email.

This command works for SMTP servers that don’t require authentication (e.g., internal SMTP relays).

2. Sending Emails with User Authentication

If the SMTP server requires authentication, you can add the -Credential parameter to provide a username and password. Here’s how:

$credential = Get-Credential
Send-MailMessage -SmtpServer "smtp.example.com" -Port 587 -From "sample@example.com" -To "john.doe@example.com" -Subject "A subject" -Body "A body" -Credential $credential -UseSsl
  • -Credential: Prompts for a username and password. You can also create a PSCredential object programmatically.
  • -UseSsl: Enables SSL/TLS encryption, which is often required for authenticated SMTP servers (e.g., port 587).

Example with Hardcoded Credentials:

If you don’t want to be prompted for credentials, you can create a PSCredential object like this on PowerShell ISE:

$username = "sample@example.com"
$password = ConvertTo-SecureString "YourPassword" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

Send-MailMessage -SmtpServer "smtp.example.com" -Port 587 -From "sample@example.com" -To "john.doe@example.com" -Subject "A subject" -Body "A body" -Credential $credential -UseSsl

3. Adding Attachments

You can also attach files to your email using the -Attachments parameter:

Send-MailMessage -SmtpServer "smtp.example.com" -Port 587 -From "sample@example.com" -To "john.doe@example.com" -Subject "A subject" -Body "A body" -Attachments "C:\path\to\file.txt" -Credential $credential -UseSsl

4. Troubleshooting Tips

Connection Issues:

If the email fails to send, ensure the SMTP server is reachable and the port is open. Use Test-NetConnection to verify connectivity:

Test-NetConnection -ComputerName smtp.example.com -Port 587

Authentication Errors:

Double-check the username and password. If the SMTP server requires a specific authentication method (e.g., OAuth), you may need additional configuration.

SSL/TLS Errors:

Ensure the -UseSsl parameter is used if the SMTP server requires encryption.

Example Workflow:

Here’s a complete example with authentication and attachments:

# Create credentials
$username = "sample@example.com"
$password = ConvertTo-SecureString "YourPassword" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

# Send email with attachment
Send-MailMessage -SmtpServer "smtp.example.com" -Port 587 -From "sample@example.com" -To "john.doe@example.com" -Subject "Test Email with Attachment" -Body "Please find the attached file." -Attachments "C:\path\to\file.txt" -Credential $credential -UseSsl

Why do it this way?

Using PowerShell to send test emails is a powerful way to:

  • Test SMTP server configurations.
  • Verify email delivery and troubleshoot issues.
  • Automate email notifications in scripts.
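
For the automation case, I'd wrap the call in a try/catch so a scheduled script logs failures instead of dying silently. A rough sketch, reusing the $credential object from earlier (the subject and body are placeholders):

try {
    Send-MailMessage -SmtpServer "smtp.example.com" -Port 587 `
        -From "sample@example.com" -To "john.doe@example.com" `
        -Subject "Nightly job finished" -Body "The nightly job completed successfully." `
        -Credential $credential -UseSsl -ErrorAction Stop
}
catch {
    # Leave a trace in the error stream so the scheduled task shows the failure
    Write-Error "Failed to send notification: $_"
}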

Whether you’re working with an internal SMTP relay or an external server requiring authentication, PowerShell’s Send-MailMessage cmdlet makes it easy to get the job done.

Hope you found this useful!

How to Find Users with Duplicate Email Addresses in SQL?

When managing a database with user information, ensuring data integrity is crucial. One common issue that can arise is duplicate email addresses in the users table.

Let me show you how to write a simple yet powerful SQL query to identify such duplicates!

The Problem

In many applications, email addresses should be unique identifiers for users. However, duplicates can sneak into the database due to bugs, manual data imports, or other anomalies. Detecting and resolving these duplicates is essential to maintain data integrity and ensure proper functionality of user-related features, such as authentication.

The Solution

Using SQL, you can quickly find all duplicate email addresses in your users table by leveraging the GROUP BY and HAVING clauses.

Here’s the query:

SELECT email, COUNT(*) AS email_count
FROM users
GROUP BY email
HAVING COUNT(*) > 1;

How It Works:

  1. GROUP BY email: Groups the rows in the users table by the email column, so each group represents a unique email.
  2. COUNT(*): Counts the number of rows in each group.
  3. HAVING COUNT(*) > 1: Filters the groups to only include those where the count is greater than 1, i.e., duplicate email addresses.

Enhanced Query for User Details

If you want to see more details about the users who share the same email address (e.g., user IDs, names), you can use a subquery:

SELECT u.*
FROM users u
JOIN (
    SELECT email
    FROM users
    GROUP BY email
    HAVING COUNT(*) > 1
) dup_emails ON u.email = dup_emails.email;

Explanation

  • The subquery identifies all duplicate email addresses.
  • The main query joins this result with the users table to retrieve detailed information about each user associated with the duplicate emails.
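
Once the duplicates have been resolved, it may also be worth preventing new ones at the database level with a unique constraint (a sketch; the exact syntax and constraint name vary by engine):

ALTER TABLE users ADD CONSTRAINT uq_users_email UNIQUE (email);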

Final Thoughts

This simple query can save you a lot of time when auditing your database for duplicate entries. Whether you’re cleaning up data or debugging an issue, identifying duplicates is an important step toward ensuring a robust and reliable application.

Hope you found this tip useful!

How to Filter Records Based on a String Column Containing Numbers with Commas and Dots?

Last month, I encountered a scenario where I needed to filter records in a database based on a "revenue" column. The challenge? The revenue column was stored as a string data type, and some of the values contained commas (,) and dots (.). Here’s how I tackled the problem and wrote an SQL query to filter records based on the numeric value of the revenue column.

What I faced?

The revenue column was stored as a string, and the values looked like this:

  • "50,000,000"
  • "75.000.000"
  • "10000000"

I needed to filter records where the revenue was greater than "50,000,000". However, since the column was a string, I couldn’t directly compare it to a numeric value.

What I did?

To handle this, I used a combination of SQL functions:

  1. REPLACE: To remove commas (,) and dots (.) from the string.
  2. CAST: To convert the cleaned string into a numeric data type (DECIMAL in this case).

Here’s the SQL query I wrote:

SELECT *
FROM your_table
WHERE CAST(REPLACE(revenue, ',', '') AS DECIMAL(18, 2)) > 50000000;

Need a breakdown? Here you go:

  1. REPLACE(revenue, ',', ''): This removes commas from the revenue string. For example, "50,000,000" becomes "50000000".
  2. CAST(... AS DECIMAL(18, 2)): This converts the cleaned string into a DECIMAL value with 18 total digits and 2 decimal places. For example, "50000000" becomes 50000000.00.
  3. > 50000000: Finally, the query filters records where the numeric value of revenue is greater than 50,000,000.

Handling Dots as Thousand Separators

If the revenue column uses dots (.) as thousand separators (e.g., "75.000.000"), you can extend the REPLACE function to remove dots as well:

SELECT *
FROM your_table
WHERE CAST(REPLACE(REPLACE(revenue, ',', ''), '.', '') AS DECIMAL(18, 2)) > 50000000;

This ensures that both commas and dots are removed before converting the string to a numeric value.

Why did I do this?

This approach is helpful when:

  • You’re working with data stored as strings but need to perform numeric comparisons.
  • The data contains formatting characters like commas or dots.
  • You want to avoid manual data cleaning or preprocessing.

By using SQL functions like REPLACE and CAST, you can handle these scenarios directly in your queries.

Oh, and this works in both SQL Server and MySQL!
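
One caveat: if some rows hold values that can't be parsed as numbers at all, a plain CAST will raise an error on SQL Server. There, TRY_CAST (SQL Server 2012 and later) returns NULL for unparseable values instead of failing, so a hedged variant looks like this:

SELECT *
FROM your_table
WHERE TRY_CAST(REPLACE(REPLACE(revenue, ',', ''), '.', '') AS DECIMAL(18, 2)) > 50000000;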

Hope you found this article useful!

How to Cherry-Pick Commits from One Git Branch to Another?

Although I had heard of Git's cherry-pick command, I never used it until a few months ago when I encountered a situation that demanded it. A client required us to work on multiple incidents and change requests (CRs), but they only wanted specific, approved updates pushed to the repository.

This created a challenge, especially when I was simultaneously working on CRs and incidents. I needed a way to isolate and push only the approved changes while keeping the rest intact. That’s when I decided to try out Git’s cherry-pick command—and it worked like a charm! It made managing my codebase much easier.

Here’s a simple walkthrough of how you can use it too:

The Scenario

I had two branches in my Git repository:

  • Branch A: The primary branch where ongoing development happens.
  • Branch B: A feature branch where I had pushed updates, only some of which were meant to go into Branch A.

While it’s straightforward to merge changes from one branch into another, I only needed a subset of the commits from Branch B to be applied to Branch A. This is where Git’s cherry-pick command shines.

Follow along with me, step by step:

1. Switch to the Target Branch

First, check out the branch where you want to apply the commits (in my case, Branch A):

git checkout A

2. Find the Commits to Cherry-Pick

Use the git log command to list the commits in Branch B and identify the specific commit hashes you want to cherry-pick:

git log B

You should see output similar to this:

commit abc1234 (HEAD -> B)
Author: Your Name <you@example.com>
Date:   Mon Jan 1 12:00:00 2025 +0000

    Add feature X

commit def5678
Author: Your Name <you@example.com>
Date:   Sun Dec 31 12:00:00 2024 +0000

    Fix bug Y

Take note of the commit hash(es) you need. For example, let’s say you want to cherry-pick abc1234 and def5678.

3. Cherry-Pick the Commit(s)

Single Commit

To apply a single commit from Branch B onto Branch A, use:

git cherry-pick abc1234

Multiple Commits

To apply multiple non-contiguous commits, list their hashes:

git cherry-pick abc1234 def5678

Range of Commits

To cherry-pick a range of contiguous commits, use the ^.. range notation:

git cherry-pick def5678^..abc1234

This includes all commits from def5678 to abc1234, inclusive.

4. Resolve Any Conflicts

If there are conflicts during the cherry-picking process, Git will pause and notify you. Resolve the conflicts in your files, then mark them as resolved:

git add <file>

Continue the cherry-pick process:

git cherry-pick --continue

To abort the cherry-pick if things go wrong:

git cherry-pick --abort

5. Verify the Result

Once the cherry-picking is complete, you can inspect your branch to ensure the changes were applied:

git log

You should see the cherry-picked commits in Branch A.

When is cherry-picking useful?

Cherry-picking is perfect when you need specific commits from another branch without merging all its changes. This is especially helpful in scenarios like:

  • Applying a bug fix from a feature branch to the main branch.
  • Pulling specific updates without merging unrelated work.

Bonus Tips

  • Always double-check commit hashes before cherry-picking to avoid unexpected results.
  • Use descriptive commit messages to make it easier to identify what each commit does.
  • If you find yourself cherry-picking often, consider rethinking your branch workflows to reduce the need for it.

That’s it! Now you know how to cherry-pick commits in Git. It’s a small but incredibly powerful tool to keep in your Git arsenal.