A blog about programming topics in general, focusing on the Java programming language.

Author: andrestascon

Java records tutorial

Introduction

Welcome to a comprehensive guide to Java records! Java programming received a significant upgrade with the introduction of records, first previewed in Java 14. Records present a streamlined approach to defining immutable data models, simplifying code and enhancing developer productivity. In this tutorial, we delve deep into the world of Java records, exploring their features, benefits, and practical applications. Let’s dive in and discover the wonders of Java records together!


Getting to Know Java Records

Java records are like the cool new kids on the block. They’re a feature previewed in Java 14 and finalized in Java 16, designed to simplify how we define data-centric classes. Think of them as your go-to for creating immutable data transfer objects (DTOs) and value-based classes.

Let’s kick things off with a simple example:

public record Person(String name, int age) {}

With this one-liner, we’ve created a Person record with name and age components. The best part? The compiler does the heavy lifting, generating the accessor methods name() and age(), along with toString(), equals(), and hashCode() implementations, automatically.

Customizing Java Records

While records offer a lot out of the box, they cannot be extended through inheritance: every record is implicitly final and already extends java.lang.Record. You can, however, enrich a record with custom functionality, defining additional methods or constructors to suit your specific needs.

Here’s a quick example:

public record Employee(String name, int age) {
    public String greet() {
        return "Hello, I'm " + name + " and I'm " + age + " years old!";
    }
}
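Records also support a compact constructor form, which runs before the implicit field assignments and is handy for validating or normalizing components. Here is a minimal sketch (the Temperature record and its values are hypothetical, not from the post):

```java
public class RecordCustomization {

    // Compact constructor: runs before the implicit field assignments,
    // so it can reject invalid component values up front.
    record Temperature(String city, double celsius) {
        Temperature {
            if (celsius < -273.15) {
                throw new IllegalArgumentException("Below absolute zero: " + celsius);
            }
        }

        // An extra derived method alongside the generated accessors.
        double fahrenheit() {
            return celsius * 9 / 5 + 32;
        }
    }

    public static void main(String[] args) {
        Temperature t = new Temperature("Madrid", 25.0);
        System.out.println(t.fahrenheit()); // 77.0

        try {
            new Temperature("Nowhere", -300.0);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```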

Benefits of Java Records

Java records offer several advantages over traditional Java classes, including:

  1. Concise Syntax: Records reduce boilerplate code by providing a compact syntax for defining immutable data models.
  2. Immutable by Default: All components of a record are implicitly final, making records immutable by default.
  3. Automatic Methods: The compiler automatically generates accessor methods, toString(), equals(), and hashCode() implementations based on record components.
  4. Enhanced Readability: Records enhance code readability by clearly expressing the intent of representing data.
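The automatically generated methods from point 3 are easy to observe directly. A small sketch (reusing the Person record from above inside a hypothetical demo class):

```java
public class RecordMethodsDemo {
    record Person(String name, int age) {}

    public static void main(String[] args) {
        Person a = new Person("Alice", 30);
        Person b = new Person("Alice", 30);

        // equals() and hashCode() compare component values, not identity.
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true

        // toString() lists the components.
        System.out.println(a); // Person[name=Alice, age=30]
    }
}
```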

Practical Applications

Java records find applications in various scenarios, including:

  • DTOs and POJOs: Records are well-suited for defining simple data transfer objects (DTOs) and plain old Java objects (POJOs).
  • API Responses: Records can represent API response payloads, encapsulating data returned from external services.
  • Domain Models: Records can model domain entities and value objects in domain-driven design (DDD) architectures.

Using Java Records in Another Class

Let’s see how we can use our Person record from another class:

public class Main {
    public static void main(String[] args) {
        Person person = new Person("Alice", 30);
        System.out.println("Name: " + person.name());
        System.out.println("Age: " + person.age());
    }
}

In this example, we create a Person object named person and initialize it with values for name and age. We then access the components using the accessor methods generated by the compiler (name() and age()).

Best Practices

When using Java records, consider the following best practices:

  • Immutability: Leverage the immutability provided by records to ensure data integrity and thread safety.
  • Encapsulation: Limit the visibility of record components to maintain encapsulation and data-hiding principles.
  • Use Cases: Evaluate whether records are suitable for the specific use case, considering the nature and complexity of the data being modeled.

Conclusion

Java records usher in a new era of data modeling, offering a concise and intuitive approach to defining immutable data structures. By embracing records, Java developers can write cleaner, more expressive code while focusing on the essence of their data models. As Java evolves, records stand as a testament to the language’s commitment to simplicity, productivity, and developer satisfaction.

So why not give Java records a spin in your next project? Streamline your data modeling and unlock new possibilities in Java development!

Happy coding with Java records! 🚀✨

Checked vs Runtime Exceptions

Introduction

Hey there, Java enthusiasts! Today, we’re diving into the world of Java exceptions, where we’ll chat about Checked vs Runtime Exceptions. These little quirks are like the guard dogs of your code, making sure everything runs smoothly (or not!). Checked Exceptions are the rule enforcers, while Runtime Exceptions give you a bit more freedom but also more responsibility. In this article, we’re unraveling the mysteries of Checked vs Runtime Exceptions, so grab your coffee and let’s get started on this coding adventure!

Exception handling in Java is a critical aspect of building robust and reliable software applications. Among the various types of exceptions, Checked and Runtime exceptions stand out as fundamental constructs, each serving distinct purposes and requiring different handling strategies. In this comprehensive exploration, we unravel the nuances of Checked and Runtime exceptions, backed by detailed code examples and best practices.

Understanding Checked Exceptions

Checked exceptions, also referred to as compile-time exceptions, are exceptions that the compiler requires to be either caught or declared in the method signature using the throws clause. These exceptions typically signify conditions that a well-architected application should anticipate and gracefully recover from at runtime. Examples include IOException, SQLException, and FileNotFoundException.

Let’s delve into a practical example:

import java.io.*;

public class FileReaderExample {

    public void readFile() throws IOException {
        // try-with-resources ensures the reader is closed even if an exception occurs
        try (BufferedReader bufferedReader = new BufferedReader(new FileReader("example.txt"))) {
            String line = bufferedReader.readLine();
            while (line != null) {
                System.out.println(line);
                line = bufferedReader.readLine();
            }
        }
    }

    public static void main(String[] args) {
        FileReaderExample reader = new FileReaderExample();
        try {
            reader.readFile();
        } catch (IOException e) {
            System.err.println("Error reading the file: " + e.getMessage());
        }
    }
}

In this scenario, the readFile() method reads from a file and handles IOException, a checked exception, by declaring throws IOException in its signature. The main() method catches and handles the exception gracefully using a try-catch block.

Exploring Runtime Exceptions

Runtime exceptions, also known as unchecked exceptions, differ from checked exceptions in that they need not be explicitly declared in the method signature or caught at compile time. These exceptions typically represent programming errors, such as null references, array indices out of bounds, and illegal arithmetic operations like division by zero. Examples include NullPointerException, ArrayIndexOutOfBoundsException, and IllegalArgumentException.

Consider the following example:

public class DivideExample {

    public static void main(String[] args) {
        int dividend = 10;
        int divisor = 0;
        try {
            int result = dividend / divisor;
            System.out.println("Result: " + result);
        } catch (ArithmeticException e) {
            System.err.println("Error: Division by zero");
        }
    }
}

Here, attempting to divide by zero results in an ArithmeticException, a runtime exception. Though not explicitly declared, the exception is caught and handled within the try-catch block.

Choosing Between Checked and Runtime Exceptions

When determining which type of exception to use, consider the following guidelines:

  • Checked Exceptions: Employ checked exceptions for situations where recovery is feasible and meaningful. These exceptions enforce error handling and promote code robustness by explicitly documenting potential failure points.
  • Runtime Exceptions: Reserve runtime exceptions for programming errors or conditions outside the application’s control. Runtime exceptions are suitable for scenarios where recovery may be impractical, such as invalid input parameters or unexpected runtime conditions.

Creating Custom Exceptions

Sometimes, the predefined exceptions in Java just don’t cut it for our specific needs. That’s where creating our own exceptions comes into play. By crafting custom exceptions, we can tailor error handling to fit our unique application requirements.

To create a custom exception in Java, we typically extend the Exception class or one of its subclasses like RuntimeException. This allows us to define our own exception types with specialized behavior and messages.

Here’s a simple example of how we can create a custom exception:

public class CustomException extends Exception {

    public CustomException() {
        super("This is a custom exception!");
    }

    public CustomException(String message) {
        super(message);
    }
}

In this example, we’ve created a custom exception called CustomException that extends the Exception class. We’ve provided two constructors: one with a default message and another allowing us to specify a custom message.

Now, let’s see how we can use our custom exception in a Java program:

public class CustomExceptionExample {

    public void checkValue(int value) throws CustomException {
        if (value < 0) {
            throw new CustomException("Value cannot be negative!");
        }
    }

    public static void main(String[] args) {
        CustomExceptionExample example = new CustomExceptionExample();
        try {
            example.checkValue(-5);
        } catch (CustomException e) {
            System.err.println("Caught CustomException: " + e.getMessage());
        }
    }
}

In this example, the checkValue() method checks if a given value is negative. If it is, it throws our custom CustomException with a specific message. In the main() method, we catch and handle this custom exception, providing meaningful feedback to the user.

Creating custom exceptions allows us to add clarity and specificity to our error handling, making our Java programs more robust and user-friendly. So go ahead, unleash your creativity, and craft those custom exceptions for your Java applications!
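The CustomException above is a checked exception because it extends Exception. As mentioned earlier, you can instead extend RuntimeException to get an unchecked custom exception that callers are not forced to catch or declare. A minimal sketch (the InvalidOrderException name is hypothetical):

```java
public class UncheckedCustomExample {

    // Extending RuntimeException makes this an unchecked exception.
    static class InvalidOrderException extends RuntimeException {
        InvalidOrderException(String message) {
            super(message);
        }
    }

    static void placeOrder(int quantity) {
        if (quantity <= 0) {
            // No "throws" clause needed: runtime exceptions propagate freely.
            throw new InvalidOrderException("Quantity must be positive, got " + quantity);
        }
        System.out.println("Order placed for " + quantity + " items");
    }

    public static void main(String[] args) {
        placeOrder(3);
        try {
            placeOrder(0);
        } catch (InvalidOrderException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```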

Conclusion

Mastering the distinction between Checked and Runtime exceptions is pivotal for crafting resilient and maintainable Java applications. By leveraging checked exceptions for recoverable conditions and runtime exceptions for unexpected errors, developers can enhance software reliability and predictability. Embrace effective exception handling practices, communicate errors clearly, and design code with exception safety in mind.

Exception handling is not merely a technical detail but a cornerstone of Java programming excellence, empowering developers to build software systems that withstand the test of time.

Happy coding! ☕

Migrating from Java 11 to 17: Key features

Hello Java developers! If you’ve been working with Java for a while, you know that each release brings along exciting new features and improvements. Java 17, like its previous versions, offers a range of enhancements that can make your coding journey smoother and more efficient. In this blog post, we’ll explore some key features introduced between Java 11 and Java 17. Let’s dive in!

Records (JEP 395)

What is it?

Records provide a compact way to declare classes that are holders of immutable data. They can help reduce boilerplate code for simple data carrier classes.

Example:

// Java 11 style class for Point
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Getters and other methods
}

// Java 17 style record
public record Point(int x, int y) { }

Pattern Matching for switch (JEP 406)

What is it?

Java 17 introduced pattern matching for switch as a preview feature (JEP 406), letting a switch test the type of its selector with type patterns. The arrow-style switch expressions shown below were standardized earlier, in Java 14 (JEP 361), and the new pattern-matching support builds on them, making your code more concise and readable.

Example:

// Java 11 style switch statement
String day = "Monday";
switch (day) {
    case "Monday":
    case "Wednesday":
    case "Friday":
        System.out.println("It's a workday");
        break;
    case "Saturday":
    case "Sunday":
        System.out.println("It's the weekend");
        break;
    default:
        System.out.println("Invalid day");
}

// Java 17 style switch expression
String day = "Monday";
String typeOfDay = switch (day) {
    case "Monday", "Wednesday", "Friday" -> "It's a workday";
    case "Saturday", "Sunday" -> "It's the weekend";
    default -> "Invalid day";
};

System.out.println(typeOfDay);
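Note that the arrow-style switch above is a plain switch expression; the actual pattern-matching part of JEP 406 lets a switch branch on the type of its selector. A small sketch (a preview feature in Java 17, so it needs --enable-preview there; it compiles as-is on Java 21+, where the feature is final):

```java
public class SwitchPatternExample {

    // Type patterns in switch: each case both tests the type and binds a variable.
    static String describe(Object obj) {
        return switch (obj) {
            case Integer i -> "int with value " + i;
            case String s  -> "String of length " + s.length();
            default        -> "unknown type";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));
        System.out.println(describe("hello"));
        System.out.println(describe(3.14));
    }
}
```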

Sealed Classes (JEP 409)

What is it?

Sealed classes provide a mechanism to control which classes can extend or implement a given class or interface. This helps in designing more robust and maintainable code by restricting the inheritance hierarchy.

Example:

// Define a sealed interface
sealed interface Shape permits Circle, Rectangle, Triangle { }

// Sealed classes implementing the interface
final class Circle implements Shape { /* ... */ }
final class Rectangle implements Shape { /* ... */ }
final class Triangle implements Shape { /* ... */ }
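Sealed hierarchies pair nicely with records and pattern matching: since the compiler knows every permitted implementation, handling can be exhaustive. A runnable sketch under those assumptions (the Shape names mirror the snippet above; the area logic is hypothetical):

```java
public class SealedExample {
    // Only Circle and Square may implement Shape.
    sealed interface Shape permits Circle, Square {}

    // Records are implicitly final, satisfying the sealed contract.
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    static double area(Shape s) {
        // The sealed hierarchy guarantees no other implementations exist.
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Square q) return q.side() * q.side();
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // 9.0
    }
}
```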

Pattern Matching for instanceof (JEP 394)

What is it?

Similar to pattern matching for switch, pattern matching for the instanceof operator (finalized in Java 16 with JEP 394) is available in Java 17. It allows you to test, cast, and bind a variable of the target type in a single step.

Example:

// Java 11 style
if (obj instanceof String) {
    String s = (String) obj;
    System.out.println(s.length());
}

// Java 17 style with pattern matching
if (obj instanceof String s) {
    System.out.println(s.length());
}
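A runnable version of the snippet above (Java 16+); note that the pattern variable s is only in scope where the test has succeeded:

```java
public class InstanceofPatternExample {

    static int lengthOf(Object obj) {
        // Test, cast, and bind in one step: s is a String here.
        if (obj instanceof String s) {
            return s.length();
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(lengthOf("hello")); // 5
        System.out.println(lengthOf(42));      // -1
    }
}
```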

New Garbage Collectors

What is it?

Between Java 11 and Java 17, two low-latency garbage collectors matured: ZGC (Z Garbage Collector), introduced as experimental in Java 11 and production-ready since Java 15, and Shenandoah, also production-ready since Java 15. These collectors aim to keep pause times very low while maintaining high throughput.

Example:

To enable ZGC, you can use the following JVM option:

java -XX:+UseZGC YourApplication

Conclusion

Java 17 brings a lot of features that enhance productivity, maintainability, and performance. While migrating from Java 11 to 17 might require some adjustments, leveraging these new features can significantly benefit your applications. Stay tuned for more updates, and happy coding!

Remember, this is just a glimpse of what Java 17 offers. Exploring the official documentation and experimenting with these features will provide you with a deeper understanding and appreciation of the Java ecosystem’s evolution.

References

https://openjdk.org/jeps/395

https://openjdk.org/jeps/406

https://openjdk.org/jeps/409

https://openjdk.org/jeps/394

For loops vs Streams in Java

Introduction

For loops vs Streams in Java. Probably one of the most asked questions since Java 8 introduced the Streams API.

Today’s blog post will discuss the pros and cons and when to use them.

Performance comparison

Check out this Twitter thread comparing for loops with Streams as a reference.

As shown there, Streams tend to perform somewhat slower than for loops. This is especially true when the work carried out per element by either the Stream or the for loop is small.

If the amount of data is small, for loops will usually perform better than Streams.

It’s also worth noticing that the Streams API allows us to create parallel Streams without having to worry about the implementation details. Quick disclaimer: Make sure you need parallel Streams before actually using them and run some benchmarks.

Readability comparison

This is probably the most subjective topic. There are many old-school developers that will stick with for loops forever. On the other hand, there are also many others (including myself) that started working with Streams and were charmed by the readability they provide.

For loops

Let’s use an example: we want to iterate over a List of Strings with at most 5 elements and compute the total length of all the Strings combined. With a for loop we could do something like:

private static int getTotalLengthOfElements(List<String> input) {
    int totalLength = 0;
    for (int i = 0; i < input.size(); i++) {
        totalLength += input.get(i).length();
    }
    return totalLength;
}

We can then make use of this method:

public static void main(String[] args){
    List<String> input = List.of("What", "a", "bunch", "of", "Strings");
    int totalLengthOfStrings = getTotalLengthOfElements(input);
    System.out.println(totalLengthOfStrings);
}

That will print 19.

For loop result
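As a side note, the indexed loop can also be written with an enhanced for loop, usually preferred when the index itself is not needed (a minor variant, not from the original post):

```java
import java.util.List;

public class EnhancedForExample {

    // Same computation, but iterating over elements instead of indices.
    private static int getTotalLengthOfElements(List<String> input) {
        int totalLength = 0;
        for (String s : input) {
            totalLength += s.length();
        }
        return totalLength;
    }

    public static void main(String[] args) {
        List<String> input = List.of("What", "a", "bunch", "of", "Strings");
        System.out.println(getTotalLengthOfElements(input)); // 19
    }
}
```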

Streams

The previous example could also be written with Streams as:

private static int getTotalLengthOfElements(List<String> input) {
    return input.stream()
            .mapToInt(String::length)
            .sum();
}

This will, once again, output 19:

Streams result

Here we leverage the existence of the sum function provided by the IntStream interface.

In my opinion, even though this will probably perform slower than the for approach, this looks way better in terms of readability.

If the input data were to grow to a much larger number of Strings, we could switch to a parallel stream just by calling parallelStream() instead:

private static int getTotalLengthOfElements(List<String> input) {
    return input.parallelStream()
            .mapToInt(String::length)
            .sum();
}

So we could say that this approach is also better in terms of adaptability.

Conclusion

We briefly discussed for loops vs Streams in Java.

I would say that if you’re working on a project with your team, you should decide with which approach you all feel more comfortable.

Most likely, performance won’t be an issue with either of the two, so the most important thing to worry about is readability. And that is entirely up to you.

Quick reminder that if you’re a Java lover just like me, don’t miss the Java posts I’ll be uploading to this blog.

Parameterized tests in Java

Introduction

Parameterized tests are a JUnit 5 feature that allows us to execute the same test multiple times with different input data by making use of the @ParameterizedTest annotation.

When to use Parameterized tests

Let’s say we want to test our brand-new class:

public class Calculator {

    public int divide(int num1, int num2) {
        return num1 / num2;
    }
}

(I know, it’s pretty complex logic 😉). A typical scenario for this class is dividing by zero, which will throw an exception. Let’s test that with a simple test class:

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class CalculatorTest {

    static Calculator calculator;

    @BeforeAll
    static void setup() {
        calculator = new Calculator();
    }

    @Test
    void it_should_throw_exception_when_divided_by_zero() {
        assertThrows(ArithmeticException.class, () -> calculator.divide(1, 0));
    }
}

So we execute the test and see what happens:

Test execution

Nice! The test worked as expected.

Now, let’s say we want to prove that this won’t happen for several inputs. We could start creating tests for each input, but if we want to test 6 different inputs, that would require 6 different tests.

For this kind of occasion, we can make use of the @ParameterizedTest annotation.

How to use Parameterized tests

The first thing we need to know in order to use Parameterized tests is how to pass our custom input data to the test we want to parameterize. There are multiple ways of doing this, but let’s focus on the main ones.

So, the question is: How do we pass custom input data to parameterized tests?

@ValueSource

We can use the @ValueSource annotation for some simple input data

@ParameterizedTest
@ValueSource(ints = {1, 2, -1, -2})
void it_should_not_throw_exception_with_valid_values(int num) {
    assertDoesNotThrow(() -> calculator.divide(1, num));
}

This creates 4 different tests, one for each value in the @ValueSource annotation.

Test execution with @ValueSource

@MethodSource

What if we want to pass multiple parameters, or use more complex logic to create the input data? We can use the @MethodSource annotation, which receives as a parameter the name of the method that returns the input data for the test.

For instance, if we want to pass both numbers when dividing we could do something like this:

@ParameterizedTest
@MethodSource("getValidValues")
void it_should_not_throw_exception_with_valid_values(int num1, int num2) {
    assertDoesNotThrow(() -> calculator.divide(num1, num2));
}

private static Stream<Arguments> getValidValues() {
    return Stream.of(
            Arguments.of(1, 1),
            Arguments.of(10, 2),
            Arguments.of(-1, 1),
            Arguments.of(-1, -1),
            Arguments.of(0, 1)
    );
}

We passed the String "getValidValues" to @MethodSource, which references the method we have just created: private static Stream<Arguments> getValidValues()

This generates 5 different tests at execution time, given that we are passing a Stream containing 5 different Arguments:

Test execution with @MethodSource

Some considerations:

  • Method must be static. Otherwise, we will receive the error: org.junit.platform.commons.PreconditionViolationException: Cannot invoke non-static method
  • When passing multiple arguments, we can use the class Arguments wrapped in Stream.
  • When passing a single argument, the method can just return a Stream of the required data type. For instance, if we want to pass Integers the method would just return Stream<Integer>.
private static Stream<Integer> getValidValues() {
    return Stream.of(1, 2, -1, -2);
}

Conclusion

We learned how to make use of the Parameterized tests feature in Java. It’s a really simple, yet so powerful feature that every Java developer should know about.

The provided example was not complex at all so let me know if it would be useful to write another post to get deeper into the topic with more realistic examples. For a 101 introduction, it should be enough though.

If you’re interested, check out more TeachingDev Java posts!

Get to know Git aliases

Introduction

To boost your productivity as a Software Developer, you can familiarize yourself with many topics.

I would say the most useful technical skills are those that shorten the time you spend on trivial tasks. We all deal with repetitive daily tasks that constantly drain time from us.

This is why I encourage you to master your IDE, use Git aliases, and pick up anything else you can think of that saves you time.

What are Git aliases?

Let me just say it straight: Git aliases are a quicker way to write git commands.

That’s it! There’s not really much more to add.

Why should you use Git aliases?

If you could work less for the same amount of outcome you would do it, right?

If you answered yes, that’s the reason you needed to hear to start using Git aliases. Anyways, let me just enumerate some reasons to use them:

  • You will be more efficient.
  • You will learn a bit more about how Git is configured internally.
  • If you are like me, you will feel better by using them.
  • Bonus: You will definitely look cooler when you share your screen with your coworkers. 😉

How to create Git aliases

I hope I have convinced you to use Git aliases so far. I know you might not want to invest too much time in creating your own aliases and getting used to them. Well, I bring you good news! There are multiple ways to get started with Git aliases. The one I recommend the most is importing already existing Git aliases that someone thought of.

Here I will leave my favorite ones, so you can have a look at them and decide which one suits you best. The good news is, even if you don’t feel any of them is matching your vibe, you can edit them later.

https://github.com/GitAlias/gitalias

https://github.com/peterhurford/git-aliases.zsh

https://github.com/SixArm/gitconfig-settings

This is a list of the most useful commands I’ve been using personally since I adopted the GitAlias project (the first one listed).

git a = add
git aa = add --all

git c = commit
git ca = commit --amend

git co = checkout

git cp = cherry-pick
git cpa = cherry-pick --abort
git cpc = cherry-pick --continue

git m = merge
git ma = merge --abort
git mc = merge --continue

git pf = pull --ff-only
git pr = pull --rebase

git rb = rebase
git rba = rebase --abort
git rbc = rebase --continue

git rv = revert

git s = status

As you can see, once you get used to them, you will start working much faster than before. This project follows some naming conventions for its commands, such as appending a for --abort and c for --continue.

How to create your own aliases

If you want to create your own custom aliases, you just have to run the following command:

$ git config --global alias.co checkout

By doing so, we have created the alias co for checkout, so the next time we run git co we are effectively running git checkout.

You can also edit your ~/.gitconfig file directly:

[alias]
    st = status
    ca = commit --amend
    ma = merge --abort
    mc = merge --continue

Share Git aliases across all your devices

For some extra bonus points, I’ll leave you with this repository if you want to share your git aliases inside your organization.

It might be useful for new joiners to your team who are unfamiliar with Git aliases to have them already set up.

If that’s the case, just check this repository and follow the instructions. It’s really simple.

https://github.com/pipelineinc/alias4git

Conclusion

I will admit that I’m a geek about shortcuts. I love learning all my IDE’s shortcuts and that kind of stuff. It boosted my productivity, since I save time on the actions I repeat most often on a daily basis.

Still, if you’re not yet convinced about using Git aliases, just give them a try. See how they affect (or not) your productivity.

If you liked this post, let me know if you would be interested in some other productivity booster ideas, such as mastering your IDE. Thanks for reading!

How to ignore files in Git

Introduction

Nowadays, every developer out there must be familiar with some version control system. I have now worked in 5 different jobs. Every single one of them was using some kind of version control system.

Even though there are many of them, the most used by far is Git.

When working on a real project, we often need to create or modify a file we don’t want to be added to the repository.

The most common scenario for a backend developer is probably some local file configuration to access to a local database or some kind of parameters that need to be passed to the execution of the server itself.

For a frontend developer, you may be sick of hearing this, but in case you forgot, I’ll remind you: never upload the node_modules folder! The node_modules folder contains all the dependencies of a project. It is usually generated by an npm install command that installs all dependencies for a given project (shoutout to all of us who, at some point, did upload a node_modules folder 😅). It is equivalent to the folder of a Maven-based Java project that contains all the dependencies. Imagine uploading that to your Git repository.

The .gitignore file

The most used approach is probably the .gitignore file, which lives in the local Git project. Since its name starts with a dot, it is a hidden file on Unix-like systems.

Its exact location may vary depending on which IDE you’re using. For instance, IntelliJ IDEA generates one inside the .idea folder to ignore IDE-specific files.

Its content will vary but this just so you have an idea of what it looks like, this is the default one for a generated Java project using IntelliJ Idea:

# Default ignored files
/shelf/
/workspace.xml

In some other IDEs, such as Visual Studio, it is usually placed at the project’s root path. If we use npm, we will probably have a .gitignore like this one:

# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# production
/build

# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*

As you can see, the /node_modules folder is already added by default to the .gitignore file.

How do we make a file not eligible to be added to our git staging area? Well, we can just add its path to this .gitignore file.

The pros of this approach are quite evident: the file is uploaded to the Git repository, so it is shared with all the developers in our team.

If we, as a team, want to ignore some files or folders for all of us, the developers, all we have to do is to add those in the .gitignore file.

Precisely because it’s a shared file, what if we want to ignore a private file without pushing that configuration to the repo? We can’t do that through the .gitignore file.

But fear no more! In those cases, we can use the following approach.

Repository exclude file

There is another file, located at .git/info/exclude, whose purpose is to ignore files privately. This configuration won’t be pushed to the repository, so it stays local to your clone.

The idea behind this is pretty much the same as the .gitignore file, but it’s worth noting that this file won’t be uploaded to the remote repository.

This approach allows us to untrack some files without sharing this configuration with our team. This is incredibly beneficial for local configuration files, for instance.

Example

With the following project structure

Project structure

I have a newly added Main.java file and a local.conf file. But let’s say I only want to add the Main.java file to the remote repository.

Since this local.conf is my personal, private local configuration that won’t be shared with the rest of the developers, I will add it to the .git/info/exclude file as explained before.

Therefore, I will just open the file in a text editor and add it there:

# git ls-files --others --exclude-from=.git/info/exclude
# Lines that start with '#' are comments.
# For a project mostly in C, the following would be a good set of
# exclude patterns (uncomment them if you want to use them):
# *.[oa]
# *~
local.conf

As you can see I just added local.conf to the file.

But if we run git status, we will find a surprise! It’s still showing as added:

PS D:\Projects\git-testing> git status
On branch feature-foo
Your branch is up to date with 'origin/feature-foo'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   local.conf
        new file:   src/main/java/Main.java

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        .idea/

Keep in mind that if the file was previously added or checked in, we will have to run the following command in order to remove it from the index:

git rm --cached local.conf

Note: Sometimes we will have to use the -f (force) flag.
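Putting it together, the whole flow can be reproduced from the command line. The following sketch uses a throwaway repository; the file names match the example above, but everything else (user config, directory) is made up for the demo:

```shell
#!/bin/sh
# Throwaway-repo demo: hide a private file via .git/info/exclude.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo

touch Main.java local.conf
git add Main.java local.conf            # oops, local.conf got staged too

echo "local.conf" >> .git/info/exclude  # private ignore rule, never pushed
git rm -q --cached local.conf           # untrack the previously added file

git status --porcelain                  # shows only: A  Main.java
```

After the `git rm --cached`, local.conf is back to being untracked, and the exclude rule keeps it out of `git status` from then on.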

If we run git status again we will see that our local.conf file has disappeared:

PS D:\Projects\git-testing> git status
On branch feature-foo
Your branch is up to date with 'origin/feature-foo'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   src/main/java/Main.java

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        .idea/

Conclusion

Now that you've learned how to ignore files in Git, what are you waiting for? Stop wasting time juggling that local configuration file you don't want to push: add it to the exclude file, or talk to your team about adding it to the .gitignore file.

I hope you enjoyed this quick post, do you know some other ways to ignore files in Git? Let me know in the comments!

String interning in Java

Introduction

String interning is a concept that not many developers know about. It's deeply bound to the way the JVM handles memory.

Nowadays it’s easier than ever to leverage the existence of built-in functions, libraries, frameworks… Don’t get me wrong, they help us stop reinventing the wheel over and over.

However, I also feel like it’s hard to know what’s going on underneath. This may not even be a problem at all unless you have to deal with specific circumstances, such as memory management.

Therefore, it’s quite useful to, at least have, a basic understanding of everything you can. By digging deeper, eventually, not everything will be a black box to you.

In today's post, we will talk about how Strings work in Java, how the JVM handles them, the best way to treat them, and some other useful information. I hope you like it. 😊

How does the JVM handle Strings

String is a special kind of class in Java: it's the only class we can instantiate with double quotes. The only other values that can be created without the new keyword are the primitive types, which aren't classes at all.

What happens when you instantiate a String with double quotes?

Java has what is known as String pool. We can think of it as a bag that contains Strings. Every time we create a String that is not yet in the String pool, the JVM adds it.

As we can see in the example, once the String a is created, “Hello” is added to the String pool. Then a new variable b is assigned to a. Remember that the = operator in Java makes the left side (in this case b) point to the memory address of the right side (a). So, in the case of Strings, b is now pointing to the “Hello” String in the String pool.

When the String c is created, given that it is initialized to “World” and “World” is not yet in the String pool, it gets added there as well.

This way, the JVM optimizes memory allocation and consumption as it will only allocate the space of the “Hello” String once.

A curious thing is that you can use the new operator to create a String as shown in the example. When we create the String d using this new operator, instead of it pointing to the already existing “Hello” String in the pool, it allocates memory for it as it would do for a regular object.

This is why you should not create Strings using the constructor.
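In code, the pooling behavior described above looks like this (the variable names a, b and d mirror the ones used in the explanation):

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "Hello";             // "Hello" is added to the String pool
        String b = a;                   // b points to the same pooled "Hello"
        String d = new String("Hello"); // new heap object, outside the pool

        System.out.println(a == b); // true  (same pooled instance)
        System.out.println(a == d); // false (d is a separate object)
    }
}
```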

Immutability

A really important concept about the String class is that it’s immutable. Now, what does this mean? An immutable object is one that can’t be modified. Therefore, when we want to modify an immutable object, we have to create another one. Once we instantiate an immutable object, we won’t be able to change its value.

This happens with many other classes in Java, such as LocalDate and all the wrapper classes of the primitive types: Integer, Double and many more.

This also means that any operation performed on a String won't modify that String. It will instead create a new one. That's why, when you perform an operation on a String but don't assign the result back, nothing changes in the original String.

String test = "Hello";
test.concat(" World");
System.out.println(test); // "Hello"

test = test.concat(" World");
System.out.println(test); // "Hello World"

What happens behind the scenes here is that when the String test is reassigned to the output of test.concat(" World"), a new String is created: "Hello World". The JVM adds this new String to the String pool (if not present yet) and then test points to this new String in the pool.

Equals vs == operator

These previous explanations come in handy when we think about how we should check that a String is the same as another one.

We all know that, for reference types, the == operator returns true when the memory addresses of the two objects compared are the same, while for primitives it compares the values themselves. So, for instance:

int num1 = 5;
int num2 = num1;
System.out.println(num1  == num2); // true

num1 is initialized to 5, so the variable holds the value 5 directly. When num1 is assigned to num2, that value is copied, so both variables hold 5, which means the == operator returns true.

What happens with Strings then?

String str1 = "Hello";
String str2 = "Hello";
System.out.println(str1 == str2); // true

String str3 = new String("Hello");
String str4 = new String("Hello");
System.out.println(str3 == str4); // false
System.out.println(str1 == str3); // false

String str5 = str3.intern();
System.out.println(str1 == str5); // true

There are a couple of things to explain here.

  • str1 == str2 -> true. As we explained before, both are pointing to the same pooled String.
  • str3 == str4 -> false. Because str3 and str4 are both instantiated with the String constructor, each one points to a different memory address.
  • str1 == str3 -> false. str1 points to the String in the pool, while str3 points to a separate object on the heap.
  • str1 == str5 -> true. Quickly explained, the intern() method returns the canonical instance of the String from the pool (adding it first if needed). Therefore, both are pointing to the same pooled String.

So what do we do? Do we just spin a wheel and accept our fate? Well, actually there’s a better approach, use equals to compare Strings.

The equals method allows us to compare the content of the Strings, rather than the memory address. As a result, the previous example with equals would be:

String str1 = "Hello";
String str2 = "Hello";
System.out.println(str1.equals(str2)); // true

String str3 = new String("Hello");
String str4 = new String("Hello");
System.out.println(str3.equals(str4)); // true
System.out.println(str1.equals(str3)); // true

String str5 = str3.intern();
System.out.println(str1.equals(str5)); // true

No matter what we do, all Strings here have the same content, therefore, the equals method returns true for all of them.

Conclusion

I think I gave you enough reasons to remember that you should always use the equals method to compare Strings and try to avoid the == operator.

Soon enough, I will write a blog post on how equals works internally but until then, feel free to investigate and play around on your own (as the best developers do). Here are some references to get started though. Such as a guide to the Java String pool or some more examples on String interning.

Hope you found this post useful and enjoyed reading it. If you did, you will find my socials at the bottom of this page. You know what to do next 😉 (much appreciated).

Otherwise, or if you feel like you want to give me your insights on this topic, don’t hesitate to post a comment. I’ll be so happy to help/read your suggestions.

How to squash regular and merge commits

Introduction

Have you ever had to work on some feature branch that took quite a long time to develop? You would have had to get the latest updates from another branch, such as main from time to time.

Maybe for the first updates, just rebasing the main branch onto your feature branch was enough, given that there were not many changes in main yet. This would create a regular commit in your branch and once the development of your feature would be done, you could just squash all your commits to a single one and open a Pull Request to the main branch. Everything looks good, you live to see another day and love your job!

However, if the development takes more time, you might end up in a situation where rebasing main onto your feature branch turns into a 20-step rebase with multiple files in each step.

As a developer, this is quite painful to deal with because every time you need to sync your branch with main you will spend a lot of time on this.

Fear no more! I will talk about my proposed workaround for this kind of situation:

How to squash regular commits

If you just have regular commits on your branch you can easily squash all of them down to one with the git rebase -i command.

In this example, we can see how we have one commit “1” in main and three commits in the feature-foo branch: “2”, “3” and “4”.

If we want to squash all commits in the feature-foo branch down to one, we would just run the command

$ git rebase -i HEAD~3

that will open the VIM editor (or whichever editor you have configured)

Changing pick to squash will squash that commit into the previous one (the commit above it in the todo list)
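The interactive todo list would look something like this (the commit hashes here are made up; keep pick on the oldest commit and squash the rest into it):

```
pick   1a2b3c4 2
squash 5d6e7f8 3
squash 9a0b1c2 4
```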

We will just have to edit the commit message and there we have our squashed commit

Pro tip: You can actually do this simple operation with your favorite git tool, such as IntelliJ Git window:

Just select all commits you want to squash and click on Squash Commits, edit the commit message and the result will be the very same.

How to squash merge commits

Imagine we have the following situation:

We have been working on the feature-foo branch but were forced to regularly update it with work from main. As a result, we now have several regular commits and 2 merge commits.

We still want to squash all the commits in the feature-foo branch in order to have a single commit before we merge our feature branch to main. However, we can no longer use the git rebase -i command due to the merge commits.

My workaround for this kind of scenario consists of a list of steps:

  • Create a temporary branch from main.
 $ git checkout -b temp main
  • Merge with the --squash flag
$ git merge --squash feature-foo
  • Commit the changes
$ git commit
  • (Optional) Edit the commit message that has just popped up.
  • Move to your feature branch ->
$ git checkout feature-foo
  • Hard reset to the temporary branch
    • TIP: If you wanna test this works before actually pushing to your branch, you can even create a temporary branch from your feature branch and do the git reset there.
$ git reset --hard temp
  • Push the changes
    • Note that we have to specify -f flag because of the hard reset
$ git push -f
  • Now you can remove the temporary branch
$ git branch -d temp
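The steps above can be sketched end to end in a throwaway repository. Branch names, file names and commit messages below are made up for the demo; the final git push -f is left out since there is no remote here:

```shell
#!/bin/sh
# Throwaway-repo demo of the squash workaround above.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo
git checkout -q -b main
echo base > file.txt && git add file.txt && git commit -qm "1"

# Build a feature branch containing regular AND merge commits.
git checkout -q -b feature-foo
echo a > a.txt && git add a.txt && git commit -qm "2"
git checkout -q main && echo more >> file.txt && git commit -qam "3"
git checkout -q feature-foo && git merge -q -m "merge main" main
echo b > b.txt && git add b.txt && git commit -qm "4"

# --- the workaround ---
git checkout -q -b temp main        # 1. temporary branch from main
git merge -q --squash feature-foo   # 2. stage the whole feature as one change
git commit -qm "feat: squashed"     # 3. the single squashed commit
git checkout -q feature-foo
git reset -q --hard temp            # 4. feature branch now points at it
git branch -q -d temp               # 5. clean up (then git push -f upstream)

git rev-list --count main..feature-foo   # prints 1
```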

As you can see, our feature-foo branch now contains all the commits squashed into a single one, the “8” commit.

Additional tip: If you have been working on the feature branch for a long time, the squashed commit may keep the date of the first commit you pushed, that is to say, the oldest one.

To fix this you can run the command

$ git commit --amend --date="now"

Do you use some other workaround for these situations? Let me know in the comments, I’ll be happy to check them out!

Introduction to Java Streams API

Introduction

Functional programming is a programming paradigm that promotes the use of functions and the avoidance of changing state and mutable data. This style of programming can lead to more concise, expressive, and maintainable code. The best way to start making use of it is by using the Java Streams API.

Java, being one of the most widely used programming languages, has also adopted functional programming concepts and features. In this post, we will take a look at how functional programming can be applied in Java 8 using the Streams API and functional interfaces.

Functional programming has several benefits over imperative programming, such as:

  • It promotes immutability, which means that data cannot be modified once it has been created. This can lead to fewer bugs and a more predictable program.
  • It encourages the use of pure functions, which are functions that always return the same output for the same input, and do not have any side effects. This can make code more testable and reusable.
  • It allows for the creation of higher-order functions, which are functions that take other functions as input or return functions as output. This can lead to more expressive and reusable code.

The Streams API

The Streams API is a powerful and flexible API that allows you to perform operations on collections of data in a functional way. It provides a fluent API for working with collections of data, such as filtering, mapping, and reducing.

Java Streams 101

First things first, if you are reading this article, chances are this is one of the first times you’re hearing about functional programming. If that’s your case, let me introduce you to one easy concept you must understand in order to master the use of the Streams API.

I’m talking about terminal vs intermediate operations. You have to think of Streams as a pipeline: each operation performed on a Stream either terminates the Stream or it doesn’t.

Some examples of non-terminal (or intermediate) operations are:

  • map -> Transforms the Stream from type A to type B.
  • filter -> Filters the elements in the Stream.
  • limit -> Limits the elements in the Stream.

None of those functions terminates the stream and, as a result, each of them returns another stream. That is to say, they perform some operation on the given stream and return a new stream.

Some examples of terminal operations are:

  • collect -> Collects the elements in the stream (usually to a List)
  • forEach -> Perform an operation for each element in the stream

Those functions terminate the stream: once a terminal operation runs, the pipeline is finished and nothing else can be chained after it.

Read more about it here

Now, let’s talk about the most basic, yet most used, Stream functions.

Map

I would probably say it’s the most useful and the first Stream function any programmer should learn.

Have you ever experienced having a list of objects, let’s say of class Animal, and wanted to iterate through the list just to collect some property (such as name, age, color…)?

Most Java developers would probably create a new ArrayList to store this property, iterate over the list (probably using a for-loop) and collect the given property into the new list. Something like this:
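A sketch of that loop-based approach (the Animal class and the animals list are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class CollectNamesLoop {
    // Hypothetical data class, only for this example.
    record Animal(String name, int age) {}

    public static void main(String[] args) {
        List<Animal> animals = List.of(new Animal("Rex", 3), new Animal("Milo", 5));

        // Classic approach: a new list filled inside a for-loop.
        List<String> names = new ArrayList<>();
        for (Animal animal : animals) {
            names.add(animal.name());
        }
        System.out.println(names); // [Rex, Milo]
    }
}
```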

Was I close to what you were expecting? I guess so.

Quick disclaimer! There’s nothing wrong with this approach, I’ll write a post regarding whether you should use streams over loops. For the sake of this post, let’s just say there are multiple available approaches up to you.

If you wanna read more about this, feel free to look for some information yourself. This can be your starting point though.

Now, this could be easily done as well with Java streams like so:

Lambda expression
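The stream version would look roughly like this (same hypothetical Animal class as before):

```java
import java.util.List;
import java.util.stream.Collectors;

public class CollectNamesLambda {
    // Hypothetical data class, only for this example.
    record Animal(String name, int age) {}

    public static void main(String[] args) {
        List<Animal> animals = List.of(new Animal("Rex", 3), new Animal("Milo", 5));

        // map transforms Stream<Animal> into Stream<String>;
        // collect is the terminal operation that builds the List.
        List<String> names = animals.stream()
                .map(animal -> animal.name())
                .collect(Collectors.toList());
        System.out.println(names); // [Rex, Milo]
    }
}
```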

In this example, we are leveraging the map function. As a non-terminal operation that takes a lambda expression as its argument, it transforms the given Stream, in this case from type Animal to type String. We’re specifying that each animal in the animals list should be mapped to a String using its name property, and then everything is collected into a list that is returned.

You can also use the method reference of the Animal class

Method reference
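With a method reference, the same sketch becomes:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CollectNamesMethodRef {
    // Hypothetical data class, only for this example.
    record Animal(String name, int age) {}

    public static void main(String[] args) {
        List<Animal> animals = List.of(new Animal("Rex", 3), new Animal("Milo", 5));

        // Animal::name is equivalent to the lambda animal -> animal.name()
        List<String> names = animals.stream()
                .map(Animal::name)
                .collect(Collectors.toList());
        System.out.println(names); // [Rex, Milo]
    }
}
```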

Notice as well that there’s no need for curly braces when the lambda expression is a one-liner.

Filter

Another function you should be familiar with. It expects a Predicate (don’t worry about it, it’s basically a lambda expression/method reference) as an argument to filter the given stream. Since it’s a non-terminal operation, it returns another stream. This is one of the main benefits of using the Streams API: being able to chain calls.

Easiest example ever, given a list of integers, return those that are greater than 10:
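A minimal sketch (the numbers list is made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterExample {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(4, 11, 25, 9, 13);

        // filter keeps only the elements matching the Predicate.
        List<Integer> greaterThanTen = numbers.stream()
                .filter(n -> n > 10)
                .collect(Collectors.toList());
        System.out.println(greaterThanTen); // [11, 25, 13]
    }
}
```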

ForEach

This one is a terminal operation and, as such, it ends the stream. It expects a Consumer, which is a function that will be applied to each of the elements in the Stream.

Let’s say that for each filtered number of the previous example, we want to output it to the console, instead of adding it to a list:
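A sketch of that idea (same made-up numbers as in the filter example):

```java
import java.util.List;

public class ForEachExample {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(4, 11, 25, 9, 13);

        // forEach is terminal: it consumes the stream element by element.
        numbers.stream()
                .filter(n -> n > 10)
                .forEach(System.out::println); // prints 11, 25 and 13
    }
}
```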

Notice how I used the method reference so that I don’t have to write a lambda expression.

Some more examples

Let’s say we have a list of integers and we want to find the sum of all even numbers in the list. With the Streams API, we can do this in a single line of code:

Given a list of integers, return the sum of all even numbers
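A sketch of that one-liner (the numbers list is made up for illustration):

```java
import java.util.List;

public class SumEvens {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6);

        int sum = numbers.stream()
                .filter(n -> n % 2 == 0)      // keep even numbers
                .mapToInt(Integer::intValue)  // Stream<Integer> -> IntStream
                .sum();                       // terminal: add them up
        System.out.println(sum); // 12
    }
}
```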

In this example, we first create a stream from the list of numbers, then use the filter method to keep only the even numbers, then use the mapToInt method to convert the Stream of Integers to a stream of primitive ints, and finally use the sum method to find the sum of all numbers in the stream.

Functional interfaces are another functional programming feature introduced in Java 8. They are interfaces that have a single abstract method, such as Predicate, Function, and Consumer. These interfaces can be used to create lambda expressions and method references that can be passed as arguments to methods.

For example, let’s say we have a list of strings and we want to print all strings that are longer than 3 characters. With functional interfaces, we can do this in a single line of code:

Given a list of strings, print out those longer than 3 characters in a one-liner
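A sketch of that one-liner (the strings list is made up for illustration):

```java
import java.util.List;

public class LongStrings {
    public static void main(String[] args) {
        List<String> strings = List.of("Java", "is", "fun", "indeed");

        // One-liner: forEach with a lambda that checks the length and prints.
        strings.forEach(s -> { if (s.length() > 3) System.out.println(s); });
        // prints "Java" and "indeed"
    }
}
```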

In this example, we first create a list of strings, then we use the forEach method and pass a lambda expression that checks if the length of a string is greater than 3, and if so, it prints the string.

Conclusion

Functional programming is a powerful and expressive way to write code, and with the introduction of the Streams API and functional interfaces in Java 8, it’s now easier than ever to write functional code in Java. While it’s not always the best solution for every problem, understanding the concepts of functional programming is an extremely important tool for every Java developer.

At first, you might want to give it a try and practice this new way of thinking. Once you get used to it, you will realize the high potential of using functional programming.