The following code demonstrates how to create a string of repeated characters using the String.repeat(int count) method introduced in Java 11. The method takes a single parameter of type int, the number of times to repeat the string. The count must be zero or a positive number; passing a negative value causes the method to throw java.lang.IllegalArgumentException.
In the snippet below, we use the method to repeat characters and draw some triangles. We combine the repeat() method with a for loop to draw the triangles.
package org.kodejava.basic;
public class StringRepeatDemo {
public static void main(String[] args) {
String star = "*";
String fiveStars = star.repeat(5);
System.out.println("fiveStars = " + fiveStars);
String arrow = "-->";
String arrows = arrow.repeat(10);
System.out.println("arrows = " + arrows);
String hash = "#";
for (int i = 1; i <= 10; i++) {
System.out.println(hash.repeat(i));
}
int height = 10;
for (int i = 1, j = 1; i <= height; i++, j += 2) {
System.out.println(" ".repeat(height - i) + "*".repeat(j));
}
}
}
Since JDK 8, we can create a datetime formatter / parser pattern that has optional sections. When parsing a datetime string that contains optional values, for example a date without the time part or a datetime without the seconds part, we can wrap part of the parsing pattern within the [] symbols. The [ character is the optional section start symbol, and the ] character is the optional section end symbol. The pattern inside these symbols is treated as optional.
We can use the java.time.format.DateTimeFormatter class to parse the datetime string or format the datetime object, and use it with the new Java time API classes such as java.time.LocalDate or java.time.LocalDateTime to convert the string into the respective LocalDate or LocalDateTime object, as shown in the code snippet below.
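Here is a minimal sketch of such a formatter; the class name and the pattern are illustrative, but the optional-section syntax behaves as described above:

package org.kodejava.datetime;

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class OptionalSectionDemo {
    public static void main(String[] args) {
        // The sections inside [] are optional: the time part, and within it
        // the seconds part, may be omitted from the input string.
        DateTimeFormatter formatter =
                DateTimeFormatter.ofPattern("yyyy-MM-dd[ HH:mm[:ss]]");

        // Parse a date-only string into a LocalDate.
        LocalDate date = LocalDate.parse("2023-01-15", formatter);
        System.out.println("date = " + date);

        // Parse a datetime string without the seconds part.
        LocalDateTime dateTime = LocalDateTime.parse("2023-01-15 10:30", formatter);
        System.out.println("dateTime = " + dateTime);

        // Parse a full datetime string.
        LocalDateTime full = LocalDateTime.parse("2023-01-15 10:30:45", formatter);
        System.out.println("full = " + full);
    }
}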
A friend of mine told me that he has a large Excel file, and he asked me if I could split the file into multiple smaller Excel files. So I wrote this little program using Apache POI to do it.
The code snippet below basically contains the following steps:
Load the Excel file as an InputStream from the classpath; if the file is not found, the program exits.
Create an XSSFWorkbook from the input stream, and get the first XSSFSheet from the workbook.
Iterate over the rows of data in the source worksheet.
At the first row of each split file, create a new workbook using SXSSFWorkbook and create its SXSSFSheet.
Read the first row from the source worksheet and store it in headerRow, to be used for creating the header row in each new sheet.
If we are at the first row, write the header.
Write each row from the source sheet to the destination sheet until the maximum number of rows is reached.
Write the workbook into a new file.
And here is the complete code that you can try.
package org.kodejava.poi;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.xssf.streaming.SXSSFCell;
import org.apache.poi.xssf.streaming.SXSSFRow;
import org.apache.poi.xssf.streaming.SXSSFSheet;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
public class SplitExcelDemo {
public static final int MAX_ROWS_PER_FILE = 1000 - 1;
public static final boolean WITH_HEADER = true;
public static void main(String[] args) {
LocalDateTime startTime = LocalDateTime.now();
String filename = "/stock.xlsx";
try (InputStream is = SplitExcelDemo.class.getResourceAsStream(filename)) {
if (is == null) {
System.out.println("Source file was not found!");
System.exit(1);
}
XSSFWorkbook srcWorkbook = new XSSFWorkbook(is);
XSSFSheet srcSheet = srcWorkbook.getSheetAt(0);
int physicalNumberOfRows = srcSheet.getPhysicalNumberOfRows();
int rownum = 0;
int splitCounter = 0;
SXSSFWorkbook destWorkbook = null;
SXSSFSheet destSheet = null;
XSSFRow headerRow = null;
for (Row srcRow : srcSheet) {
if (rownum == 0) {
// At the beginning let's create a new workbook and worksheet
destWorkbook = new SXSSFWorkbook();
destSheet = destWorkbook.createSheet();
}
if (srcRow.getRowNum() == 0 && WITH_HEADER) {
// Copy the header row to be used in each split file
headerRow = (XSSFRow) srcRow;
}
if (rownum == 0 && WITH_HEADER) {
// Add row header to each split file
if (headerRow != null) {
SXSSFRow firstRow = destSheet.createRow(rownum);
int index = 0;
for (Cell cell : headerRow) {
SXSSFCell headerCell = firstRow.createCell(index++, cell.getCellType());
if (cell.getCellType() == CellType.STRING) {
headerCell.setCellValue(cell.getStringCellValue());
}
}
}
} else {
// Copy rows from source worksheet into destination worksheet
SXSSFRow destRow = destSheet.createRow(rownum);
int index = 0;
for (Cell cell : srcRow) {
SXSSFCell destCell = destRow.createCell(index++, cell.getCellType());
switch (cell.getCellType()) {
case NUMERIC -> destCell.setCellValue(cell.getNumericCellValue());
case STRING -> destCell.setCellValue(cell.getStringCellValue());
}
}
}
// When the maximum number of rows is reached, or when we are at the end of the worksheet,
// write the data into a new file
if (rownum == MAX_ROWS_PER_FILE || srcRow.getRowNum() == physicalNumberOfRows - 1) {
rownum = -1;
String output = String.format("split-%03d.xlsx", splitCounter++);
System.out.println("Writing " + output);
try (OutputStream os = new FileOutputStream(output)) {
destWorkbook.write(os);
} catch (IOException e){
e.printStackTrace();
}
}
rownum = rownum + 1;
}
// Display processing time
LocalDateTime endTime = LocalDateTime.now();
long minutes = startTime.until(endTime, ChronoUnit.MINUTES);
startTime = startTime.plusMinutes(minutes);
long seconds = startTime.until(endTime, ChronoUnit.SECONDS);
System.out.printf("Splitting finished in %d minutes and %d seconds %n", minutes, seconds);
} catch (Exception e) {
e.printStackTrace();
}
}
}
The output of running this program will look like this:
The Runtime.getRuntime().availableProcessors() method returns the maximum number of processors available to the Java virtual machine; the value will never be smaller than one. Knowing the number of available processors, you can, for example, limit the number of threads in your application when writing multi-threaded code.
package org.kodejava.lang;
public class NumberProcessorExample {
public static void main(String[] args) {
final int processors = Runtime.getRuntime().availableProcessors();
System.out.println("Number of processors = " + processors);
}
}
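As a rough illustration of that use case, the following sketch sizes a fixed thread pool to the number of available processors; the class name and the submitted tasks are just examples:

package org.kodejava.lang;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ProcessorBoundPoolExample {
    public static void main(String[] args) {
        int processors = Runtime.getRuntime().availableProcessors();

        // Limit the pool size to the number of available processors.
        ExecutorService executor = Executors.newFixedThreadPool(processors);
        try {
            for (int i = 0; i < processors; i++) {
                final int taskId = i;
                executor.submit(() -> System.out.println("Task " + taskId
                        + " runs on " + Thread.currentThread().getName()));
            }
        } finally {
            executor.shutdown();
        }
    }
}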
When you use the Spring Framework @Transactional annotation in your service layer, you might want to see what is happening in your code related to database transactions: when a transaction is started, and when it is committed or rolled back.
To activate logging for these transaction messages, add the following configuration to your application properties file. For example, when using the JpaTransactionManager you can set its log level to DEBUG.
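With Spring Boot, one way to do this in application.properties looks like the following; the exact logger names you enable may vary with the transaction manager you use:

# Log transaction begin, commit, and rollback from JpaTransactionManager
logging.level.org.springframework.orm.jpa.JpaTransactionManager=DEBUG
# Optionally log the whole Spring transaction infrastructure
logging.level.org.springframework.transaction=DEBUG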
In this example we are going to build a simple search page using ZK framework and Spring Boot. We are going to use the latest available version of Spring Boot (3.0.0) and ZK Framework (9.6.0.2). So without taking more time let’s start by creating a new spring boot project with the following pom.xml. You can create the initial project using your IDE or spring initializr.
Create a Spring Boot project and add the following dependencies:
This properties file configures the ZK application homepage and the prefix where the zul files are located. It also configures the datasource for our application database.
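A properties file along the following lines is assumed; the zk.* property names come from the ZK Spring Boot starter and may differ between starter versions, and the datasource values are placeholders:

# ZK Spring Boot starter settings (names assumed; check the starter documentation)
zk.homepage=search
zk.zul-view-resolver-prefix=/zul
zk.zul-view-resolver-suffix=.zul

# Standard Spring Boot datasource settings (placeholder values)
spring.datasource.url=jdbc:mysql://localhost/kodejava
spring.datasource.username=root
spring.datasource.password=secret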
An entity that represents our record label, with just two properties: id and name. Getters and setters are generated by the Lombok @Data annotation, which also generates the equals(), hashCode(), and toString() methods.
package org.kodejava.zk.entity;
import jakarta.persistence.*;
import lombok.Data;
@Data
@Entity
public class Label {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
}
The LabelRepository.java definition.
Create the LabelRepository interface, which extends the JpaRepository and JpaSpecificationExecutor interfaces.
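A minimal version might look like the following; the package name is an assumption:

package org.kodejava.zk.repository;

import org.kodejava.zk.entity.Label;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

public interface LabelRepository extends JpaRepository<Label, Long>,
        JpaSpecificationExecutor<Label> {
}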
AbstractSearchController.java, a base search controller.
A base class that we can use to implement all the search pages in an application. Basically, it provides the method to search our application data. It defines a couple of abstract methods that need to be implemented by the concrete search controller classes, such as which repository to use and the specification for filtering the data. We can also define the default sort column and the direction of the data sorting.
package org.kodejava.zk.controller;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.zkoss.zk.ui.Component;
import org.zkoss.zk.ui.event.Event;
import org.zkoss.zk.ui.select.SelectorComposer;
import org.zkoss.zk.ui.select.annotation.Listen;
import org.zkoss.zk.ui.select.annotation.VariableResolver;
import org.zkoss.zk.ui.select.annotation.Wire;
import org.zkoss.zul.Listbox;
@VariableResolver(org.zkoss.zkplus.spring.DelegatingVariableResolver.class)
public abstract class AbstractSearchController<T> extends SelectorComposer<Component> {
@Wire
protected Listbox listbox;
public abstract JpaSpecificationExecutor<T> getRepository();
public abstract Specification<T> getSpecification();
public abstract String getCacheKey();
protected String getDefaultSortColumn() {
return "id";
}
protected Sort.Direction getDefaultSortDirection() {
return Sort.Direction.ASC;
}
protected boolean getMultiple() {
return false;
}
@Override
public void doAfterCompose(Component comp) throws Exception {
super.doAfterCompose(comp);
search();
}
@Listen("onClick=#searchButton")
public void search() {
listbox.setVisible(true);
SearchListModel<T> model = new SearchListModel<>(getRepository(), getSpecification(), getCacheKey());
model.setMultiple(getMultiple());
model.setDefaultSortColumn(getDefaultSortColumn());
model.setDefaultSortDirection(getDefaultSortDirection());
listbox.setModel(model);
listbox.setActivePage(0);
}
@Listen("onOK=#searchForm")
public void onEnterPressed(Event event) {
search();
}
public int getPageSize() {
return SearchListModel.PAGE_SIZE;
}
}
SearchListModel.java
An implementation of ListModel; this class queries the database using the provided repository and specification. It reads data page by page and caches it, so when we navigate between Listbox pages it does not re-read data that has already been cached.
package org.kodejava.zk.controller;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.zkoss.zk.ui.Execution;
import org.zkoss.zk.ui.Executions;
import org.zkoss.zul.FieldComparator;
import org.zkoss.zul.ListModelList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
public class SearchListModel<T> extends ListModelList<T> {
public static final int PAGE_SIZE = 5;
private final JpaSpecificationExecutor<T> repository;
private final String cacheKey;
private long totalElements;
private Comparator<T> comparator;
private boolean ascending = false;
private final Specification<T> specification;
private Sort.Direction defaultSortDirection = Sort.Direction.ASC;
private String defaultSortColumn = "id";
public SearchListModel(JpaSpecificationExecutor<T> repository, Specification<T> specification, String cacheKey) {
this.repository = repository;
this.specification = specification;
this.cacheKey = cacheKey;
this.totalElements = repository.count(specification);
}
@Override
public T getElementAt(int index) {
Map<Integer, T> cache = getCache();
T target = cache.get(index);
if (target == null) {
Sort sort = Sort.by(getDefaultSortDirection(), getDefaultSortColumn());
if (comparator != null) {
FieldComparator fieldComparator = (FieldComparator) comparator;
String orderBy = fieldComparator.getRawOrderBy();
sort = Sort.by(ascending ? Sort.Direction.ASC : Sort.Direction.DESC, orderBy);
}
Page<T> pageResult = repository.findAll(specification, PageRequest.of(getPage(index), PAGE_SIZE, sort));
totalElements = pageResult.getTotalElements();
int indexKey = index;
for (T t : pageResult.toList()) {
cache.put(indexKey, t);
indexKey++;
}
} else {
return target;
}
target = cache.get(index);
if (target == null) {
throw new RuntimeException("element at " + index + " cannot be found in the database.");
} else {
return target;
}
}
@Override
public int getSize() {
return (int) totalElements;
}
@Override
public void sort(Comparator<T> comparator, boolean ascending) {
super.sort(comparator, ascending);
this.comparator = comparator;
this.ascending = ascending;
}
@SuppressWarnings("unchecked")
private Map<Integer, T> getCache() {
Execution execution = Executions.getCurrent();
Map<Integer, T> cache = (Map<Integer, T>) execution.getAttribute(cacheKey);
if (cache == null) {
cache = new HashMap<>();
execution.setAttribute(cacheKey, cache);
}
return cache;
}
private int getPage(int index) {
if (index != 0) {
return index / PAGE_SIZE;
}
return index;
}
public Sort.Direction getDefaultSortDirection() {
return defaultSortDirection;
}
public void setDefaultSortDirection(Sort.Direction defaultSortDirection) {
this.defaultSortDirection = defaultSortDirection;
}
public String getDefaultSortColumn() {
return defaultSortColumn;
}
public void setDefaultSortColumn(String defaultSortColumn) {
this.defaultSortColumn = defaultSortColumn;
}
}
LabelSearchController.java
Our label search page controller, which extends the AbstractSearchController class. It provides the LabelRepository and the Specification used to filter the data.
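A sketch of such a controller is shown below; the searched component id (nameTextbox), the cache key, and the package names are assumptions, and the matching zul page is not shown here:

package org.kodejava.zk.controller;

import org.kodejava.zk.entity.Label;
import org.kodejava.zk.repository.LabelRepository;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.zkoss.zk.ui.select.annotation.Wire;
import org.zkoss.zk.ui.select.annotation.WireVariable;
import org.zkoss.zul.Textbox;

public class LabelSearchController extends AbstractSearchController<Label> {

    // Injected by the DelegatingVariableResolver declared on the parent class.
    @WireVariable
    private LabelRepository labelRepository;

    // Search field defined in the zul page; the component id is an assumption.
    @Wire
    private Textbox nameTextbox;

    @Override
    public JpaSpecificationExecutor<Label> getRepository() {
        return labelRepository;
    }

    @Override
    public Specification<Label> getSpecification() {
        // Filter labels whose name contains the typed keyword, case-insensitively.
        String keyword = nameTextbox.getValue() == null ? "" : nameTextbox.getValue().trim();
        return (root, query, builder) ->
                builder.like(builder.lower(root.get("name")), "%" + keyword.toLowerCase() + "%");
    }

    @Override
    public String getCacheKey() {
        return "labelSearchCache";
    }
}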
The code snippet below shows you a simple way to calculate the days between two dates excluding weekends and holidays. As an example, you can use this function for calculating work days. The snippet utilizes the java.time API and the Stream API to calculate the value.
What we do in the code below can be described as the following:
Create a list of holidays. The dates might be read from a database or a file.
Define filter Predicate for holidays.
Define filter Predicate for weekends.
These predicates will be used to filter the days between the two dates.
Define the startDate and the endDate to be calculated.
Using Stream.iterate() we iterate over the dates and filter them based on the defined predicates.
Finally, we collect the result as a list.
The actual number of days in between is the size of the list, workDays.size().
package org.kodejava.datetime;
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.Month;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Stream;
public class DaysBetweenDates {
public static void main(String[] args) {
List<LocalDate> holidays = new ArrayList<>();
holidays.add(LocalDate.of(2022, Month.DECEMBER, 26));
holidays.add(LocalDate.of(2023, Month.JANUARY, 2));
Predicate<LocalDate> isHoliday = holidays::contains;
Predicate<LocalDate> isWeekend = date -> date.getDayOfWeek() == DayOfWeek.SATURDAY
|| date.getDayOfWeek() == DayOfWeek.SUNDAY;
LocalDate startDate = LocalDate.of(2022, Month.DECEMBER, 23);
LocalDate endDate = LocalDate.of(2023, Month.JANUARY, 3);
System.out.println("Start date = " + startDate);
System.out.println("End date = " + endDate);
// Days between startDate inclusive and endDate exclusive
long daysBetween = ChronoUnit.DAYS.between(startDate, endDate);
System.out.println("Days between = " + daysBetween);
List<LocalDate> workDays = Stream.iterate(startDate, date -> date.plusDays(1))
.limit(daysBetween)
.filter(isHoliday.or(isWeekend).negate())
.toList();
long actualDaysBetween = workDays.size();
System.out.println("Actual days between = " + actualDaysBetween);
}
}
Running the code snippet above gives us the following result:
Start date = 2022-12-23
End date = 2023-01-03
Days between = 11
Actual days between = 5
The following code example demonstrates how to export a MySQL database schema into markdown table format. We get the table structure information by executing MySQL’s DESCRIBE statement.
The steps we do in the code snippet below:
Connect to the database.
Obtain the list of table names from the database / schema.
Execute the DESCRIBE statement for each table name.
Read table structure information such as field, type, null, key, default and extra.
Write the information into markdown table format and save it into table.md.
And here is the complete code snippet.
package org.kodejava.jdbc;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
public class DescribeMySQLToMarkDown {
private static final String URL = "jdbc:mysql://localhost/kodejava";
private static final String USERNAME = "root";
private static final String PASSWORD = "";
public static void main(String[] args) {
String tableQuery = """
select table_name
from information_schema.tables
where table_schema = 'kodejava'
and table_type = 'BASE TABLE'
order by table_name;
""";
try (Connection connection = DriverManager.getConnection(URL, USERNAME, PASSWORD)) {
Statement stmt = connection.createStatement();
ResultSet resultSet = stmt.executeQuery(tableQuery);
List<String> tables = new ArrayList<>();
while (resultSet.next()) {
tables.add(resultSet.getString("table_name"));
}
System.out.println(tables.size() + " tables found.");
try (BufferedWriter writer = new BufferedWriter(new FileWriter("table.md"))) {
for (String table : tables) {
System.out.println("Processing table: " + table);
Statement statement = connection.createStatement();
ResultSet descResult = statement.executeQuery("DESCRIBE " + table);
writer.write(String.format("Table Name: **%s**%n%n", table));
writer.write("| Field Name | Data Type | Null | Key | Default | Extra |\n");
writer.write("|:---|:---|:---|:---|:---|:---|\n");
while (descResult.next()) {
String field = descResult.getString("field");
String type = descResult.getString("type");
String nullInfo = descResult.getString("null");
String key = descResult.getString("key");
String defaultInfo = descResult.getString("default");
String extra = descResult.getString("extra");
String line = String.format("| %s | %s | %s | %s | %s | %s |%n",
field, type, nullInfo, key, defaultInfo, extra);
writer.write(line);
}
writer.write("\n<br/>\n<br/>\n");
}
} catch (Exception e) {
e.printStackTrace();
}
} catch (SQLException e) {
e.printStackTrace();
}
}
}
This code snippet will produce something like below. I have tidied up the markdown for better presentation.
Table Name: **books**
| Field Name | Data Type | Null | Key | Default | Extra |
|:-----------|:----------------|:-----|:----|:--------|:---------------|
| id | bigint unsigned | NO | PRI | null | auto_increment |
| isbn | varchar(30) | NO | | null | |
<br/>
<br/>
In the following code snippet we will convert a java.util.Map object into a JSON string and then convert the JSON string back into a Java Map object. In this example we will be using the Jackson library.
To convert from a Map to a JSON string the steps are:
Create a map of string keys and values.
Create an instance of Jackson ObjectMapper.
To convert the map to a JSON string, call the writeValueAsString() method and pass the map as an argument, as shown in the sketch after this list.
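A minimal sketch of the snippet might look like this; the color map, class name, and package name are illustrative, and the snippet also converts the JSON string back into a Map using readValue() and prints its entries. Note that the order of keys in the printed output depends on the map's iteration order.

package org.kodejava.jackson;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashMap;
import java.util.Map;

public class MapToJsonExample {
    public static void main(String[] args) throws Exception {
        // Create a map of string keys and values.
        Map<String, String> colors = new HashMap<>();
        colors.put("RED", "#FF0000");
        colors.put("GREEN", "#008000");
        colors.put("BLUE", "#0000FF");
        colors.put("BLACK", "#000000");
        colors.put("WHITE", "#FFFFFF");
        colors.put("YELLOW", "#FFFF00");

        // Create an instance of Jackson ObjectMapper.
        ObjectMapper mapper = new ObjectMapper();

        // Convert the map to a JSON string.
        String json = mapper.writeValueAsString(colors);
        System.out.println("json = " + json);

        // Convert the JSON string back into a Map and print its entries.
        Map<String, String> map = mapper.readValue(json,
                new TypeReference<Map<String, String>>() {});
        System.out.println("Map:");
        for (Map.Entry<String, String> entry : map.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}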
Running the code snippet above will print the following output:
json = {"RED":"#FF0000","WHITE":"#FFFFFF","BLUE":"#0000FF","BLACK":"#000000","YELLOW":"#FFFF00","GREEN":"#008000"}
Map:
RED = #FF0000
WHITE = #FFFFFF
BLUE = #0000FF
BLACK = #000000
YELLOW = #FFFF00
GREEN = #008000