JAX London Blog

What's new in Java 22?

An overview of features and functions

May 28, 2024

This article explores the new features in Java 22, including finalized functionality such as unnamed variables and patterns and the Foreign Function & Memory API, and new functionality such as statements before super() and multi-file source-code launching. It also previews the Class-File API for bytecode manipulation and stream gatherers for custom stream operations, enhancing code readability, performance, developer experience and stream processing.


Six months after the release of OpenJDK 21, the next major version has already arrived. Many of the features are familiar, as they were already part of the OpenJDK as incubator or preview features. With the “Foreign Function & Memory API” and “Unnamed Variables & Patterns”, two JDK Enhancement Proposals (JEPs) have now been finalized. And there are also new features to discover.

OpenJDK 21 was an LTS version, for which many providers offer long-term support. For this reason, and also because a relatively large number of JEPs (15) were implemented in 21, it attracted a lot of attention. This may have been due in particular to the finalization of Virtual Threads, a revolution in the Java environment that in terms of importance is on a par with Generics (Java 5), Lambda Expressions and the Stream API (Java 8) and the Platform Module System (Java 9). But Java 22 has nothing to hide either. This time, twelve JEPs have been implemented [1], and some interesting topics have been introduced or existing ones completed:

    • 423: Region Pinning for G1
    • 447: Statements before super(…) (Preview)
    • 454: Foreign Function & Memory API
    • 456: Unnamed Variables & Patterns
    • 457: Class-File API (Preview)
    • 458: Launch Multi-File Source-Code Programs
    • 459: String Templates (Second Preview)
    • 460: Vector API (Seventh Incubator)
    • 461: Stream Gatherers (Preview)
    • 462: Structured Concurrency (Second Preview)
    • 463: Implicitly Declared Classes and Instance Main Methods (Second Preview)
    • 464: Scoped Values (Second Preview)

And even if not all of these new functions will be used immediately in our day-to-day development work, it is still worth taking a look at them. Let’s start with the long runners. The “Vector API” is now included for the seventh time as an incubator and has appeared regularly in releases since Java 16. It is about supporting modern SIMD computer architectures with vector processors. SIMD – Single Instruction, Multiple Data – allows a single instruction to be applied to multiple data elements simultaneously. Thanks to this parallelization at the hardware level, the SIMD principle reduces the effort required for computationally intensive loops.
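To make the idea concrete, here is a plain-Java sketch (hypothetical example, not from the JEP) of the kind of element-wise loop that SIMD hardware accelerates. With the Vector API (module jdk.incubator.vector, not used here because it is still an incubator feature), the same computation would be expressed explicitly in terms of vector lanes instead of relying on the JIT to auto-vectorize.

```java
class ScalarFmaDemo {
    // One multiply-add per element; on SIMD hardware, an auto-vectorizing
    // JIT (or explicit Vector API code) can process several elements with
    // a single instruction per lane group.
    static float[] scalarFma(float[] a, float[] b, float[] c) {
        float[] result = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[i] * b[i] + c[i]; // candidate for vectorization
        }
        return result;
    }

    public static void main(String[] args) {
        float[] r = scalarFma(new float[] {1f, 2f},
                              new float[] {3f, 4f},
                              new float[] {5f, 6f});
        System.out.println(java.util.Arrays.toString(r)); // [8.0, 14.0]
    }
}
```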

The reason for the long incubation phase of the Vector API is explained in the goals of JEP 460 [2]: “Alignment with Project Valhalla – the long-term goal of the Vector API is to leverage Project Valhalla’s enhancements to the Java object model. Primarily this will mean changing the Vector API’s current value-based classes to be value classes so that programs can work with value objects, i.e., class instances that lack object identity. Accordingly, the Vector API will incubate over multiple releases until the necessary features of Project Valhalla become available as preview features.”

We are therefore waiting for the reforms to the type system. Java currently has a two-part type system with primitive and reference types (classes). The primitive data types were originally introduced to optimise performance, but have decisive disadvantages in terms of handling. Reference types are not always the best choice either, particularly in terms of efficiency and memory consumption. A solution is needed that lies in between: as lean and performant as primitive data types, but with the advantages of self-defined reference types in the form of classes. Valhalla value types (which have no identity) and universal generics (List<int>) from the incubator project could therefore soon be incorporated into the JDK. Unfortunately, JEP 401 “Value Classes and Objects” has not yet made it into Java 22. Accordingly, we will probably see the Vector API again as an incubator feature for a few more releases. This time there were only a few changes to the API compared to JDK 21, mainly bug fixes and minor performance improvements.

Ready: FFM API

The “Foreign Function & Memory API” has also been on board for several years. Its two parts have been included individually in the JDK since versions 14 and 16 respectively, and as a joint incubator JEP since 17. Anyone who has been around the Java world for a long time will be familiar with the Java Native Interface (JNI), which allows native C code to be called from Java. However, the approach is relatively complex and fragile. The Foreign Function API offers statically typed, purely Java-based access to native code (C libraries). This interface can considerably simplify the previously error-prone and slow process of binding a native library.

The Foreign Memory Access API enables Java applications to allocate additional memory outside the heap. The aim of the new API is to reduce the implementation effort by 90 per cent and accelerate performance by a factor of 4 to 5. The FFM API was finalised in this release. There were only a few minor improvements based on feedback from previous releases.
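For contrast, this sketch shows the long-standing pure-Java route to off-heap memory, a direct NIO buffer. The FFM API supersedes this with MemorySegment and Arena-scoped, deterministic deallocation; that code is not shown here, since it requires a JDK with the finalized API.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class OffHeapDemo {
    // Allocates 16 bytes outside the Java heap and writes/reads an int.
    // The FFM API covers the same ground with MemorySegment/Arena, adding
    // deterministic freeing and structured access to native data layouts.
    static int roundTrip(int value) {
        ByteBuffer offHeap = ByteBuffer.allocateDirect(16)
                                       .order(ByteOrder.nativeOrder());
        offHeap.putInt(0, value);
        return offHeap.getInt(0);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42)); // 42
    }
}
```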

Finished: A building block for pattern matching

The “Unnamed Variables & Patterns” (JEP 456), initially introduced as a preview in JDK 21, have also been finalised. There have been no changes since the previous version. The unnamed patterns are part of pattern matching, which has been making its way into Java for many years. Only in Java 21 were two major gaps closed with “Pattern Matching for switch” and “Record Patterns”.

Pattern matching is about matching existing structures with patterns in order to be able to implement complicated case distinctions efficiently and maintainably. A pattern is a combination of a predicate (which matches the target structure) and a set of variables within this pattern. The corresponding content is assigned to these variables for matching hits and thus extracted.

The intention of pattern matching is to destructure data objects, i.e. to split them into their components and assign them to individual variables for further processing. Depending on the use case, the declarations of variables or nested patterns may be necessary for the compiler but are never used. To avoid unnecessary warnings from the compiler and the linting tools and to better express the intention not to use this variable, it can simply be replaced by an underscore.

As the underscore is not a variable identifier, it can also be used several times within the same statement. In Listing 1, the value of the Empty instance is not required in branch 1, and only one of the two record components is used in each of branches 2 and 3. The other values can simply be ignored using the underscore (_).

Listing 1

static <T> boolean contains(T value, LinkedList<T> list) {
  return switch (list) {
    case Empty _ -> false;
    case Element<T>(T v, _)
      when Objects.equals(v, value) -> true;
    case Element<T>(_, var tail) ->
      contains(value, tail);
  };
}
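Listing 1 presupposes a linked-list type. A possible definition (hypothetical names chosen to match the listing) could look like the following sketch. To keep it compilable without Java 22’s unnamed patterns, named pattern variables are used here instead of underscores; the logic is otherwise the same.

```java
import java.util.Objects;

// Hypothetical sealed linked-list type matching the names in Listing 1
sealed interface LinkedList<T> permits Empty, Element {}
record Empty<T>() implements LinkedList<T> {}
record Element<T>(T head, LinkedList<T> tail) implements LinkedList<T> {}

class ContainsDemo {
    // Same logic as Listing 1, but with named pattern variables so it
    // also compiles on Java 21 (without unnamed patterns)
    static <T> boolean contains(T value, LinkedList<T> list) {
        return switch (list) {
            case Empty<T> ignored -> false;
            case Element<T>(T v, LinkedList<T> tail)
                when Objects.equals(v, value) -> true;
            case Element<T>(T v, LinkedList<T> tail) ->
                contains(value, tail);
        };
    }

    public static void main(String[] args) {
        LinkedList<Integer> list =
            new Element<>(1, new Element<>(2, new Empty<>()));
        System.out.println(contains(2, list)); // true
        System.out.println(contains(5, list)); // false
    }
}
```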

The underscore used to be a legal identifier. Since Java 9, a single underscore may no longer be used as an identifier. However, identifiers may still begin with an underscore, and two underscores (__) are also possible. If an underscore was used as a symbol for an unused variable in source code before Java 9, it can easily be migrated to newer Java versions by appending a second underscore.

There will be further innovations in pattern matching. JEP 455 (“Primitive Types in Patterns, instanceof, and switch”) is already planned for OpenJDK 23 (released in September 2024). Here, instanceof and switch are extended so that primitive data types (int, long …) can also be used. The following code checks at runtime whether the value of i fits into the value range of byte. If it does, you can continue working directly with the new variable b. Other topics in the coming years will probably be new pattern types for matching arrays, maps, POJOs, etc.

int i = 1000; 
if (i instanceof byte b) {
   ...
}

The same principle also applies to declared but never used variables for exceptions in catch blocks, for lambda parameters or in for loops (Listing 2). It is highly likely that this small, inconspicuous feature will become very common in our daily work as Java developers.

Listing 2
try {
  var result = 5 / 0;
} catch (ArithmeticException _) {
  System.out.println("Division not possible");
}

System.out.println(new ArrayList<>(List.of(1, 2, 3))
  .stream()
  .map(_ -> 42)
  .toList()); // [42, 42, 42]

int total = 0;
for (Order _ : orders) {
  total++;
}

New functions

The “Statements before super()” feature enables lines of code in constructors before the super() or this() call, which previously had to be the first statement. The reason for the restriction is that the compiler ensures that potential superclasses are fully initialized before the state of the subclass is created or queried. Although this() delegates to an overloaded constructor in the same class, that constructor will ultimately also call the super constructor of the superclass, explicitly or implicitly.

Strictly speaking, the strict requirements still apply: the statements that may now appear before the super() call must not read or write the current instance. However, the validation or transformation of input parameters and the splitting of a parameter into individual parts can now take place before the super() call (Listing 3).

Listing 3: Statements before super
class Person {
  private final String firstname;
  private final String lastname;
  private final LocalDate birthdate;

  public Person(String firstname, String lastname, LocalDate birthdate) {
    this.firstname = firstname;
    this.lastname = lastname;
    this.birthdate = birthdate;
  }
}

class Employee extends Person {
  private final String company;

  public Employee(String name, LocalDate birthdate, String company) {
    if (company == null || company.isEmpty()) {
      throw new IllegalArgumentException("company is null or empty");
    }
    String[] names = name.split("\\s");
    super(names[0], names[1], birthdate);
    this.company = company;
  }
}

System.out.println(new Employee("Dieter Develop", LocalDate.now(), "embarc"));

Until now, this typically required an overloaded auxiliary constructor or a private method as a workaround. This disrupted the reading flow and made the implementation unnecessarily complex. When values are derived from the input parameters, the same computation may also run multiple times. In the example in Listing 3, the splitting into first name and surname would have to be performed twice in order to pass the first element of one split() call as the first parameter and the second element of a second split() call as the second parameter.
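The pre-Java-22 workaround for the Employee example in Listing 3 might look like this sketch: expressions in the super(...) argument list are legal, so the split is done inline there, and it therefore runs twice. Note also that the company validation can only run after the superclass has been initialized.

```java
import java.time.LocalDate;

class PreJava22Workaround {
    static class Person {
        final String firstname;
        final String lastname;
        final LocalDate birthdate;

        Person(String firstname, String lastname, LocalDate birthdate) {
            this.firstname = firstname;
            this.lastname = lastname;
            this.birthdate = birthdate;
        }
    }

    static class Employee extends Person {
        final String company;

        Employee(String name, LocalDate birthdate, String company) {
            // Without statements before super(), the split must be done
            // inline in the argument list -- and therefore runs twice
            super(name.split("\\s")[0], name.split("\\s")[1], birthdate);
            // Validation can only happen after super() here
            if (company == null || company.isEmpty()) {
                throw new IllegalArgumentException("company is null or empty");
            }
            this.company = company;
        }
    }

    public static void main(String[] args) {
        Employee e = new Employee("Dieter Develop", LocalDate.now(), "embarc");
        System.out.println(e.firstname + " " + e.lastname + " @ " + e.company);
    }
}
```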

All lines before the super() or this() call are called the prologue; the lines after it (or a constructor body without an explicit super() or this()) form the epilogue. In the prologue, fields of the class or its superclass must not be accessed, not even for reading, as they may not yet be initialized. In addition, no instance methods of the class may be called and no instances of non-static inner classes may be created, as these could hold a reference to the not-yet-initialized enclosing object. In contrast, fields and methods of the outer class may be accessed in the prologue of a constructor of a non-static inner class.

Lines of code before this() are also permitted for records and enums according to the rules mentioned above. As they cannot derive from other classes, we do not use calls to super() in this environment.

Start Java applications more easily

Newcomers often find it difficult to get started with Java. For some time now, however, there have been small improvements that make the first steps easier. JShell has been included since Java 9. In this Read-Eval-Print Loop (REPL), small programming examples can be tried out very quickly without having to deal with compilation and build management.

In Java 11, “Launch Single-File Source-Code Programs” were introduced, with which individual Java files (with class declaration and main-method) can simply be started from the console without a separate compilation step.

In Java 21, JEP 445 “Unnamed Classes and Instance main Methods” was added as a preview to define Java main applications in a much leaner way and without superfluous boilerplate code. This means that main-methods can be written much more compactly (without public, static, String[] args …). In addition, they no longer have to be embedded in a class definition.

The previous approach was a major hurdle, especially for beginners starting out in Java with a simple Hello World program. Right from the start, they had to understand several concepts (classes, static methods, string arrays, etc.) that are not relevant at that moment. In combination with the above-mentioned “Launch Single-File Source-Code Programs”, introduced in OpenJDK 11 as JEP 330, the effort is further reduced: only a lean main method is required in the *.java file, and a class definition can even be dispensed with.

The concept was revised again for Java 22, and a second preview was presented with JEP 463 “Implicitly Declared Classes and Instance Main Methods”. There are no longer any unnamed classes; instead, a name is selected by the runtime environment. In the first builds, the name of the Java file (ImplicitMain.java => class ImplicitMain) was used, which seems to be the default for most distributions. These implicitly declared classes behave like normal top-level classes and require no additional support in tooling, libraries or the runtime environment. The only condition is that the file must be in the unnamed (default) package.

The following code shows a simple example including the command line command. As this is a preview-feature, the current Java version must be specified and the preview function activated.

// > java --source 22 --enable-preview Main.java

void main() {
  System.out.println("Hello, World!");
}

The implicitly declared class may even contain additional attributes/fields and methods (Listing 4). As a result, Java now almost feels like a scripting language and completely new use cases become possible. We can now write shell scripts and benefit from Java’s powerful range of functions and type safety.

Listing 4: Additional fields and methods with the instance main method
// > java --source 22 --enable-preview Main.java

final String greeting = "Hello";

void main() {
  System.out.println(greet("World"));
}

String greet(String name) {
  return STR."\{greeting}, \{name}!";
}

As there can theoretically be several main methods in an implicitly declared class, prioritization is required when launching. The logic behind this has been significantly simplified in the second preview. The only distinction now made is whether there is a parameter of type String array or not; if there is, that method has priority. In Listing 5, the topmost main method is called. The compiler prohibits the existence of both a static and an instance method with the same signature: the main method with the String array could therefore also be static, but only one of the two may exist. Visibility modifiers such as public, protected etc. are ignored at this point, and the compiler likewise prevents a private and a public method with the same signature from coexisting.

Listing 5: Invocation priority of main methods in implicitly declared classes

// wins, because it has String[] args
void main(String[] args) {
  System.out.println("instance main with String[] args");
}

// not allowed: a main method with the same signature already exists
/*
static void main(String[] args) {
  System.out.println("static main with String[] args");
}
*/

The “Launch Single-File Source-Code Programs” mentioned above have also been updated in Java 22. Previously, only the code in exactly one file could be executed directly without prior compilation. Although several classes are allowed in one file, this quickly becomes confusing. The classes should be split into separate files, but then single-file launching no longer works. With JEP 458 “Launch Multi-File Source-Code Programs”, the code can now be structured in any number of Java files. In addition, code from JARs (located, say, in a libs subdirectory) can also be integrated using --class-path 'libs/*'.

Two special cases should be considered. If one of the classes is located in a package, e.g. in de.embarc, then that class must also be located in the corresponding subdirectory de/embarc, and the Java command must be given the full path: java de/embarc/class.java. Further information can be found in the description of JEP 458, including how this feature interacts with modules (JPMS).

Analyzing and manipulating Java bytecode

The “Class-File API” (JEP 457) enables the reading and writing of class files, i.e. compiled bytecode. Both the JDK itself and many libraries and frameworks have so far relied on ASM, a universal Java bytecode manipulation and analysis framework [3], for this task. It can be used to modify existing classes and to generate classes dynamically in binary format. Besides the OpenJDK, ASM is also used in the Groovy and Kotlin compilers, in test-coverage tools (Cobertura, JaCoCo) and build-management tools (Gradle). Mockito uses it indirectly, via Byte Buddy, to generate mock classes.

The OpenJDK’s shorter release cycles do not make it easy for ASM to keep up with changes to the bytecode format. This in turn creates a dependency chain: the tools and frameworks mentioned above cannot support new OpenJDK releases quickly enough. The development of the JDK-internal Class-File API is intended to minimize such dependencies.

ASM analyzes and modifies existing bytecode based on the Visitor pattern, which is bulky and inflexible here; it is essentially a workaround for the lack of pattern matching. Since Java now supports pattern matching, the necessary code can be expressed more directly and concisely in the new Class-File API. Listing 6 shows an example, also described in JEP 457.

Listing 6: Parsing classes with pattern matching

CodeModel code = ...;
Set<ClassDesc> deps = new HashSet<>();
for (CodeElement e : code) {
  switch (e) {
    case FieldInstruction f  -> deps.add(f.owner());
    case InvokeInstruction i -> deps.add(i.owner());
    // ... and so on for instanceof, cast, etc.
  }
}

In contrast to ASM’s visitor approach, classes are created with builders. For example, to generate the fooBar method shown in Listing 7, the code in Listing 8 (also taken from JEP 457) can be used.

Listing 7: Method to be generated

void fooBar(boolean z, int x) {
  if (z)
    foo(x);
  else
    bar(x);
}
Listing 8: Creating the method from Listing 7

ClassBuilder classBuilder = ...;
classBuilder.withMethod("fooBar",
  MethodTypeDesc.of(CD_void, CD_boolean, CD_int),
  flags,
  methodBuilder -> methodBuilder
    .withCode(codeBuilder -> {
      codeBuilder
        .iload(codeBuilder.parameterSlot(0))
        .ifThenElse(
          b1 -> b1.aload(codeBuilder.receiverSlot())
            .iload(codeBuilder.parameterSlot(1))
            .invokevirtual(
              ClassDesc.of("Foo"),
              "foo",
              MethodTypeDesc.of(CD_void, CD_int)),
          b2 -> b2.aload(codeBuilder.receiverSlot())
            .iload(codeBuilder.parameterSlot(1))
            .invokevirtual(
              ClassDesc.of("Foo"),
              "bar",
              MethodTypeDesc.of(CD_void, CD_int)))
        .return_();
    }));


Existing code can also be changed. Listing 9 shows an example of how all methods beginning with debug are deleted from an existing class.

Listing 9: Transforming an existing class

ClassFile cf = ClassFile.of();
ClassModel classModel = cf.parse(bytes);
byte[] newBytes = 
  cf.build(
    classModel.thisClass().asSymbol(),
    classBuilder -> {
      for (ClassElement ce : classModel) {
        if (!(ce instanceof MethodModel mm
          && mm.methodName().stringValue()
            .startsWith("debug"))) {
              classBuilder.with(ce);
            }
      }
});

User-defined intermediate operations on streams

The Stream API was introduced in Java 8. A stream is only evaluated when required and can potentially contain an unlimited number of values. These can be processed sequentially or in parallel. A stream pipeline typically consists of three parts: the source, one or more intermediate operations and a terminal operation (Listing 10).

Listing 10: Different stages of the stream pipeline

var number = Arrays.asList("abc1", "abc2", "abc3").stream() // source
  .skip(1) // 1. Intermediate Operation
  .map(element -> element.substring(0, 3)) // 2. Intermediate Operation
  .sorted() // 3. Intermediate Operation
  .count(); // Terminal Operation
System.out.println(number);

The JDK contains a limited number of predefined intermediate operations such as filter, map, flatMap, mapMulti, distinct, sorted, peek, limit, skip, takeWhile and dropWhile. Again and again, there are requests for additional operations such as window or fold. But instead of just adding the requested operations, an API (JEP 461 “Stream Gatherers”) has now been developed as a preview that enables both JDK developers and users to implement arbitrary intermediate operations themselves.
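To see why such requests kept coming, here is one common pre-Gatherers workaround (a sketch, one of several possible approaches) for a fixed window: number the elements and group them by index divided by window size. It only works sequentially and materializes the whole stream into a map before any window becomes visible, which is exactly the kind of awkwardness gatherers remove.

```java
import java.util.List;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class WindowWorkaroundDemo {
    // Pre-Gatherers emulation of a fixed window: group by index / size.
    // A TreeMap keeps the windows in encounter order.
    static <T> List<List<T>> windowFixed(Stream<T> stream, int size) {
        AtomicInteger index = new AtomicInteger();
        return List.copyOf(
            stream.collect(Collectors.groupingBy(
                      e -> index.getAndIncrement() / size,
                      TreeMap::new,
                      Collectors.toList()))
                  .values());
    }

    public static void main(String[] args) {
        System.out.println(windowFixed(Stream.of(1, 2, 3, 4, 5), 2));
        // [[1, 2], [3, 4], [5]]
    }
}
```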

The following gatherers are supplied, accessible via the java.util.stream.Gatherers class (Listing 11 shows some examples):

  • fold: stateful N-to-1 gatherer that builds an aggregate incrementally and emits it when there are no more input elements
  • mapConcurrent: stateful 1-to-1 gatherer that invokes the supplied function concurrently for each input element
  • scan: stateful 1-to-1 gatherer that applies a function to the current state and the current element to produce the next element
  • windowFixed: stateful many-to-many gatherer that groups input elements into lists of a given size
  • windowSliding: similar to windowFixed, but after the first window, each subsequent window is created by dropping the first element of the previous window and appending the next input element
Listing 11: Supplied gatherers

// will contain: Optional["12345"]
Optional<String> numberString = Stream.of(1, 2, 3, 4, 5)
  .gather(
    Gatherers.fold(() -> "", (string, number) -> string + number)
  )
  .findFirst();
System.out.println(numberString);

// will contain: ["1", "12", "123"]
List<String> numberStrings = Stream.of(1, 2, 3).gather(
  Gatherers.scan(() -> "", (string, number) -> string + number)
).toList();
System.out.println(numberStrings);

// will contain: [[1, 2, 3], [4, 5, 6], [7, 8]]
List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5, 6, 7, 8).gather(Gatherers.windowFixed(3)).toList();
System.out.println(windows);

// will contain: [[1, 2], [2, 3], [3, 4], [4, 5]]
List<List<Integer>> windows2 = Stream.of(1, 2, 3, 4, 5).gather(Gatherers.windowSliding(2)).toList();
System.out.println(windows2);

// will contain: [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
List<List<Integer>> windows3 = Stream.of(1, 2, 3, 4, 5).gather(Gatherers.windowSliding(3)).toList();
System.out.println(windows3);

Let’s first take a look at the structure of a stream gatherer. It can have internal state, so that elements can be transformed differently depending on previous actions, and it can terminate a stream early, like limit() or takeWhile(). A gatherer starts with an optional initializer that provides the initial state. It is followed by the integrator, which processes each element of the stream and updates the state if necessary. Next comes the optional finisher, which is called after the last element has been processed and, depending on the state, can emit further elements to the next stage of the pipeline. Last but not least, an optional combiner merges the state of transformations executed in parallel.

To write your own implementation, you implement the Gatherer interface, which requires at least the integrator() method. The other methods have default implementations and are therefore optional (Listing 12).

Listing 12: Interface Gatherer

interface Gatherer<T, A, R> {
  default Supplier<A> initializer() {
    return defaultInitializer();
  }

  Integrator<A, T, R> integrator();

  default BinaryOperator<A> combiner() {
    return defaultCombiner();
  }

  default BiConsumer<A, Downstream<? super R>> finisher() {
    return defaultFinisher();
  }
  // [...]
}
If you want to implement the windowFixed gatherer yourself, it would look like Listing 13. The example is taken from the description of JEP 461 [4].

Listing 13: A windowFixed gatherer implementation

record WindowFixed<TR>(int windowSize)
    implements Gatherer<TR, ArrayList<TR>, List<TR>> {

  public WindowFixed {
    // Validate input
    if (windowSize < 1)
      throw new IllegalArgumentException("window size must be positive");
  }

  @Override
  public Supplier<ArrayList<TR>> initializer() {
    // Create an ArrayList to hold the current open window
    return () -> new ArrayList<>(windowSize);
  }

  @Override
  public Integrator<ArrayList<TR>, TR, List<TR>> integrator() {
    // The integrator is invoked for each element consumed
    return Gatherer.Integrator.ofGreedy((window, element, downstream) -> {

      // Add the element to the current open window
      window.add(element);

      // Until we reach our desired window size,
      // return true to signal that more elements are desired
      if (window.size() < windowSize)
        return true;

      // When the window is full, close it by creating a copy
      var result = new ArrayList<TR>(window);
      // Clear the window so the next one can be started
      window.clear();

      // Send the closed window downstream
      return downstream.push(result);
    });
  }

  // The combiner is omitted since this operation is intrinsically sequential
  // and thus cannot be parallelized

  @Override
  public BiConsumer<ArrayList<TR>, Downstream<? super List<TR>>> finisher() {
    // The finisher runs when there are no more elements to pass from
    // the upstream
    return (window, downstream) -> {
      // If the downstream still accepts more elements and the current
      // open window is non-empty, then send a copy of it downstream
      if (!downstream.isRejecting() && !window.isEmpty()) {
        downstream.push(new ArrayList<TR>(window));
        window.clear();
      }
    };
  }
}

// [[1, 2], [3, 4], [5]]
System.out.println(Stream.of(1, 2, 3, 4, 5).gather(new WindowFixed<Integer>(2)).toList());

StringTemplates available again as a preview

Back in Java 12, a new way of creating character strings was supposed to be introduced with “Raw String Literals”: multi-line strings in which escape sequences are ignored. However, the associated JEP 326 [5] was withdrawn for various reasons (including confusing syntax and behavior that differed from normal strings for multi-line content). OpenJDK 15 then allowed the simple definition of multi-line strings using text blocks. Even then, the first voices were raised that Java also needed a string interpolation mechanism, as many other languages had been offering for some time. String interpolation allows character strings to contain placeholders whose embedded code expressions are resolved automatically.

In Java, there have also been many different ways of assembling character strings at runtime from static texts, the content of variables and calculated values: String concatenation, string builder/buffer, String.format(), MessageFormat. However, these approaches present very different challenges. Due to the immutability of string instances, the performance of string concatenation can be a problem on the one hand, while on the other hand readability suffers with almost all variants and errors can easily occur (e.g. when assembling database queries).
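The variants mentioned, side by side, all producing the same string (a simple illustration with made-up values), make the readability differences visible:

```java
import java.text.MessageFormat;

class StringAssemblyDemo {
    public static void main(String[] args) {
        String user = "admin";
        int count = 3;

        // 1. Concatenation: hard to read once more parts are involved
        String s1 = "User '" + user + "' has " + count + " roles";

        // 2. StringBuilder: verbose, but avoids intermediate String instances
        String s2 = new StringBuilder()
            .append("User '").append(user)
            .append("' has ").append(count).append(" roles")
            .toString();

        // 3. String.format: placeholders and arguments are far apart
        String s3 = String.format("User '%s' has %d roles", user, count);

        // 4. MessageFormat: positional indices and extra quoting rules ('')
        String s4 = MessageFormat.format("User ''{0}'' has {1} roles", user, count);

        System.out.println(s1);
        System.out.println(s1.equals(s2) && s2.equals(s3) && s3.equals(s4)); // true
    }
}
```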

As of Java 21, string templates can be defined (JEP 430). The second preview has now been published in OpenJDK 22. There were no relevant changes from the user’s point of view: the aim is first to gather further insights and feedback.

Within a string template, the placeholders marked with \{..} may contain variables or arbitrary Java expressions (calculations, method calls …). A simple example is shown in Listing 14. The template expression consists of the template processor (STR), a dot (.) and the actual template, which may contain placeholders with embedded expressions.

Listing 14: Simple string interpolation

User user = new User("[email protected]", new Role("admin"));

String info = STR."'\{user.email()}' has role '\{user.role().name()}'";

// '[email protected]' has role 'admin'
System.out.println(info);

At first glance, the syntax takes some getting used to. The template is passed to the processor instance after a dot (similar to an instance method call). This notation is due to backward compatibility, so that existing code continues to behave correctly. The character sequence \{ was previously not a valid escape sequence, so no string in existing Java source code can contain it; the compiler would have reported an error in versions prior to 21. This ensures that no old code is incorrectly recognized as a string template. In other programming languages, interpolation may be solved more simply, but the syntax in Java offers very flexible and versatile possibilities, especially through the development of custom processors.

Line breaks are permitted within a placeholder (to insert code comments, for example) and quotation marks and even nested string-templates can be used inside. The string-templates can also be combined with multi-line strings, e.g. to generate longer SQL-queries or XML/HTML-or JSON-documents. Listing 15 shows a template with a multiline string and several lines for the placeholder title, which is also concatenated with a prefix.

Listing 15: Interpolation in multi-line strings

String title = "My Web Page"; 
String text = "Hello, world"; 
String html = STR.""" 
  <html> 
    <head> 
      <title>\{ // Comment 
        "My company: " + title } 
      </title> 
    </head> 
    <body> 
      <p>\{text}</p> 
    </body> 
  </html>""";

A total of three template-processor implementations are supplied with the JDK, whose instances can simply be used as variables. For simple string interpolation, the STR processor is used (Listing 14), which bluntly inserts the expressions in place of the placeholders. FMT is another template processor that additionally evaluates the formatting rules known from String.format(). While STR is automatically imported in every Java class, FMT must be explicitly activated via import static java.util.FormatProcessor.FMT:

import static java.util.FormatProcessor.FMT;

double zahl = 2.0;
String formatted = FMT."Zahl: %7.2f\{zahl}";
System.out.println(formatted); // Zahl:    2.00

To create a StringTemplate object directly, the third processor provided by the JDK, RAW, must be used. This allows the structure of the string template to be analyzed: the static text blocks can be inspected with the fragments() method, and the references to the expressions to be inserted with values(). With process(Processor) you can then pass any processor and trigger the interpolation (Listing 16). Only then is the string template resolved; before that, the individual parts are kept separately in memory (lazy evaluation, i.e. evaluation on demand).

Listing 16: Raw String Template

import static java.lang.StringTemplate.RAW;

StringTemplate template = RAW."Today is day \{LocalDate.now().getDayOfYear()} of year \{LocalDate.now().getYear()}.";

System.out.println(template.fragments());
System.out.println(template.values());

// Today is day 82 of year 2024.
String result = template.process(STR);
System.out.println(result);

Both STR and FMT each return a string: String → String. The big advantage compared to other programming languages is that a processor can also return a different data type. For example, a JSONObject can be generated directly from a multi-line string. To do this, it must be derived from StringTemplate.Processor, e.g. via the static auxiliary method Processor.of(). To create the JSONObject, the string template is first interpolated (the placeholders are replaced) and the result is converted to JSON (Listing 17).

Listing 17: Simple Custom Template Processor

// new template processor
var JSON = StringTemplate.Processor.of(
  (StringTemplate template) ->
    new JSONObject(template.interpolate()));

User user = new User("[email protected]",
  new Role("admin"));

JSONObject json = JSON."""
  {
    "user": "\{user.email()}",
    "roles": "\{user.role().name()}"
  }""";

System.out.println(json);
// {"roles":"admin","user":"[email protected]"}

This simple example does not yet show the full power of custom processors. Instead of calling the parameterless method template.interpolate() directly, you can first perform transformations on the placeholder values (template.values()), for example to escape potentially dangerous user input. You then pass the text blocks (fragments()) and the adjusted values (newValues) to the interpolate method and can again generate the JSONObject from the result (Listing 18).

Listing 18: Advanced Custom Template Processor
StringTemplate.Processor<JSONObject, JSONException> JSON =
  template -> {
    List<Object> newValues = new ArrayList<>();
    for (Object value : template.values()) {
      // perform replacements, e.g. escape dangerous input
      // ...
      newValues.add(value);
    }

    var result = StringTemplate.interpolate(
      template.fragments(), newValues);
    return new JSONObject(result);
  };

According to this principle, we can perform arbitrary string interpolations, for example internationalising texts with a Locale object, assembling log messages or converting SQL statements into a JDBC PreparedStatement. Whenever there is a text-based template that needs to be transformed, validated or cleaned up, string templates now offer a simple template engine built into the JDK, without dependencies on third-party providers. In the future, frameworks and libraries will probably ship such template processors. Or we can simply create our own processors for internal use, thereby avoiding redundant and potentially dangerous code.

Existing template engines such as FreeMarker or Thymeleaf are not rendered obsolete by string templates, as they cover a different area of application. While the string templates built into the JDK check at compile time whether the expressions in the placeholders can be resolved, template engines are better at dealing with dynamic content.

Shortly before the release of OpenJDK 22, Brian Goetz surprised everyone on the mailing list of the Amber incubator project by announcing that he wanted to rethink the current implementation of string templates [6]. This had no effect on Java 22, as the feature freeze (Rampdown Phase One) had already taken place in December. For Java 23, however, the template processors may be questioned again and replaced by simple method calls. This episode shows that the process with incubator and preview phases works well and that the feedback collected is taken seriously. We will find out to what extent string templates will change again by June 2024 at the latest, when the feature set for OpenJDK 23 is finalised.

Innovations in the virtual threads environment

Virtual threads were the biggest change in OpenJDK 21. They make it possible to process a very large number of tasks concurrently while still writing easily understandable code, which can be debugged with conventional means just like sequential code. Virtual threads behave like normal threads, but are not mapped 1:1 to operating system threads. Instead, there is a pool of carrier threads to which virtual threads are temporarily assigned. As soon as a virtual thread encounters a blocking operation, it is unmounted from its carrier thread, which can then take over another virtual thread (a new one or a previously blocked one). Because virtual threads are so lightweight, they can be created quickly at any time, so there is no need to reuse threads; we can simply instantiate new ones again and again. Nothing has changed in the virtual threads themselves; they were already finalised in JDK 21.
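Since virtual threads were finalised in JDK 21, the following sketch compiles without any preview flags. It shows the two common ways of creating them; the class and variable names are our own:

```java
import java.time.Duration;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {

    public static void main(String[] args) throws Exception {
        // A virtual-thread-per-task executor: no pooling, one fresh
        // virtual thread per submitted task
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                int taskId = i;
                executor.submit(() -> {
                    // Blocking here unmounts the virtual thread from its carrier
                    Thread.sleep(Duration.ofMillis(10));
                    return taskId;
                });
            }
        } // close() implicitly waits until all submitted tasks are done

        // A single virtual thread can also be started directly
        Thread vt = Thread.ofVirtual()
            .name("demo-thread")
            .start(() -> System.out.println("virtual: "
                + Thread.currentThread().isVirtual())); // prints "virtual: true"
        vt.join();
    }
}
```

Note that the 1,000 sleeping tasks complete almost immediately: while one virtual thread blocks, its carrier thread simply runs another one.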

Structured concurrency was introduced alongside virtual threads in OpenJDK 19. Up to Java 21 there were repeated changes; this time the API in JEP 462 remains unchanged so that further feedback can be gathered.

When processing several parallel subtasks, structured concurrency allows a particularly readable and maintainable implementation. Previously, parallel streams or the ExecutorService were used to split up tasks to be processed in parallel. The latter is very powerful, but makes even simple implementations quite complicated and error-prone. For example, it is difficult to detect that one subtask has failed in order to immediately cancel all the others cleanly. If one task runs for a very long time, you only learn late that other tasks have encountered problems. Debugging is also not easy, as the tasks in thread dumps cannot be assigned to the respective threads from the pool.
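The drawbacks described above can be sketched with a plain ExecutorService (all class and task names are ours): the failure of one subtask only surfaces when we block on its Future, and cancelling the remaining work is entirely our responsibility.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ClassicExecutorDemo {

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        try {
            Future<String> f1 = executor.submit(() -> "result-1");

            Callable<String> failing = () -> {
                throw new IllegalStateException("subtask failed");
            };
            Future<String> f2 = executor.submit(failing);

            // This long-running task keeps running even after f2 has failed
            Future<String> f3 = executor.submit(() -> {
                Thread.sleep(5_000);
                return "result-3";
            });

            for (Future<String> future : List.of(f1, f2, f3)) {
                try {
                    // Errors only surface when we block on get()
                    System.out.println(future.get());
                } catch (ExecutionException e) {
                    System.out.println("failed: " + e.getCause().getMessage());
                    f3.cancel(true); // cleanup must be done by hand
                } catch (CancellationException e) {
                    System.out.println("cancelled");
                }
            }
        } finally {
            executor.shutdownNow();
        }
    }
}
```

Exactly this manual bookkeeping (collecting futures, propagating the first error, cancelling siblings) is what a StructuredTaskScope takes over.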

With the Structured Concurrency, we replace the ExecutorService with a StructuredTaskScope, where you can select different strategies (Listing 19 uses ShutdownOnFailure). This new API makes concurrent code easier to read and can also handle error situations better.

Listing 19: Structured Concurrency
try (var scope =
  new StructuredTaskScope.ShutdownOnFailure()) {

  StructuredTaskScope.Subtask<String> task1 = scope.fork(() -> { ... });
  StructuredTaskScope.Subtask<String> task2 = scope.fork(() -> { ... });
  StructuredTaskScope.Subtask<String> task3 = scope.fork(() -> { ... });

  scope.join();
  scope.throwIfFailed();

  System.out.println(task1.get());
  System.out.println(task2.get());
  System.out.println(task3.get());
}

scope.join() waits until all subtasks have completed. If one fails or is cancelled, the other two are terminated as well. throwIfFailed() then propagates the error, and the output of the results is skipped.

This new approach has several advantages. Firstly, the task and subtasks in the code form a self-contained, related unit. The threads do not come from a thread pool with heavyweight platform threads; instead, each subtask is executed in a new virtual thread. In the event of errors, subtasks that are still running are cancelled. In addition, the information is better in error situations because the call hierarchy is visible both in the code structure and in the stack trace of the exception.

However, due to the preview status, certain switches must still be activated for structured concurrency during compilation and execution:

$ javac --release 22 --enable-preview StructuredConcurrencyMain.java
$ java --enable-preview StructuredConcurrencyMain

Scoped Values

An alternative to ThreadLocal variables was also introduced in JDK 20 as part of the virtual threads environment. Scoped values are likewise still in preview status (JEP 464). There are no changes compared to the previous version; here too, experience and feedback are to be gathered first.

Scoped values allow a temporary value to be saved for a limited time, whereby only the thread that wrote the value can read it again. Scoped values are usually created as public static (global) fields. They can then be accessed from any methods located deeper in the call stack. If several threads use the same scoped value field, it will contain different values depending on the executing thread. The concept works in the same way as ThreadLocal, but has some advantages.
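For comparison, a small ThreadLocal sketch (the names are ours): each thread sees its own copy, but the value is mutable and must be removed by hand, which is exactly the ergonomics scoped values improve on.

```java
public class ThreadLocalDemo {

    // One independent value per thread, mutable via set()
    static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    static void deepInCallStack() {
        // Reads the value of the calling thread without it being
        // passed through as a method parameter
        System.out.println("user = " + CURRENT_USER.get());
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            CURRENT_USER.set(Thread.currentThread().getName());
            try {
                deepInCallStack();
            } finally {
                CURRENT_USER.remove(); // forgetting this leaks values in pooled threads
            }
        };
        Thread t1 = new Thread(task, "alice");
        Thread t2 = new Thread(task, "bob");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```

This prints "user = alice" and "user = bob" in arbitrary order; each thread only ever sees the value it set itself.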

In Listing 20, information about the logged-in user is extracted from a web request. This information must be accessed in the further call chain (in the service, in the repository …). One variant would be to loop through the user object as an additional parameter in the methods to be called. Scoped values prevent this redundant and confusing additional parameter listing. Specifically, the user object is set with ScopedValue.where() and then an instance of Runnable is passed to the run method, for whose call duration the scoped value should be valid. Alternatively, a callable object can be passed using the call() method in order to also evaluate return values. The attempt to execute the content outside the scoped value context leads to an exception because no user object is stored for this call.

The scoped value is read out by calling get(). If there is no value, you can react with a fallback (orElse()) or by throwing an exception (orElseThrow()).

Listing 20: Scoped Values
public final static ScopedValue<SomeUser> CURRENT_USER =
  ScopedValue.newInstance();

[...]

SomeController someController =
  new SomeController(
    new SomeService(new SomeRepository()));

someController.someControllerAction(
  HttpRequest.newBuilder()
    .uri(URI.create("http://example.com"))
    .build());

static class SomeController {

  final SomeService someService;

  SomeController(SomeService someService) {
    this.someService = someService;
  }

  public void someControllerAction(
      HttpRequest request) {

    SomeUser user = authenticate(request);
    ScopedValue.where(CURRENT_USER, user)
      .run(() -> someService.processService());
  }
}

static class SomeService {

  final SomeRepository someRepository;

  SomeService(SomeRepository someRepository) {
    this.someRepository = someRepository;
  }

  void processService() {
    System.out.println(CURRENT_USER
      .orElseThrow(() ->
        new RuntimeException("no valid user")));
  }
}

Here too, certain switches must be activated during compilation and execution (similar to the code for Structured Concurrency above).

The ScopedValue class is immutable and therefore offers no set method. This makes the code easier to maintain because there can be no state changes to an existing object. If a different value must be visible for a certain code section (a call of another method in the chain), the value can be rebound (e.g. set to null as in the code below). As soon as that limited code section finishes, the original value becomes visible again.

ScopedValue.where(CURRENT_USER, null)
  .run(() -> someRepository.getSomeData());

Scoped values also work together with structured concurrency. Visibility is inherited by the child threads created via a StructuredTaskScope. This means that all subtasks forked via fork() can also access the user information contained in the scoped value.

Just like ThreadLocals, scoped values are available for both platform and virtual threads. Their advantages are manifold: the content is automatically cleaned up as soon as the runnable or callable has finished, which prevents memory leaks. The immutability of scoped values also improves comprehensibility and readability. Child threads created with structured concurrency share access to the one immutable value; the information is not copied as with InheritableThreadLocal (copying can lead to higher memory consumption).

What else happened?

We have not yet talked about one point, JEP 423 “Region Pinning for G1”. This is about reducing latencies in the standard garbage collector G1 in relation to critical regions when using the JNI (Java Native Interface). Further smaller innovations for which there are no JEPs can be found in the release notes [7]. Changes to the JDK (Java class library) can also be tracked very easily using the Java Almanac [8]. This overview also contains, for example, all the new features of the stream gatherers.

Conclusion

The semi-annual releases always have a lot of surprises in store for us Java developers. Four new topics were added in Java 22. Some features that have been delivered as previews for some time have been finalised. And the JDK developers are not running out of ideas for the next features. If you would like to find out about possible future topics in advance, you can take a look at the JEP index under “Draft and submitted JEPs” [9].

The world of Java is still on the move. Java will be 30 years old next year. Oracle is planning a revival of JavaOne in the San Francisco Bay Area in March 2025 to mark this anniversary [10]. Maybe we’ll meet there and toast to our good old Java.

Links & Literature

[1] https://openjdk.java.net/projects/jdk/22/

[2] https://openjdk.org/jeps/460

[3] https://asm.ow2.io

[4] https://openjdk.org/jeps/461

[5] https://openjdk.org/jeps/326

[6] https://mail.openjdk.org/pipermail/amber-spec-experts/2024-March/004010.html

[7] https://jdk.java.net/22/release-notes

[8] https://javaalmanac.io/jdk/22/apidiff/21/

[9] https://openjdk.org/jeps/0#Draft-and-submitted-JEPs

[10] https://inside.java/2024/03/19/announcing-javaone-2025/
