Showing posts with label Java. Show all posts

Knapsack without repetition

As in the previous problem, we have a knapsack with a specified capacity and a number of items with different weights and values, and we want to select a subset of them that fits in the knapsack while maximizing its value. The difference is that now we are not allowed to put copies of the same item in the knapsack. For this reason this problem is also known as 0/1 knapsack.

If we keep a dynamic programming approach, the solution won't be too different from the previous one; however, we'll need to store the results for all subproblems in a two-dimensional array, and the process of generating a new subsolution is going to be a bit more complex.

The idea is that we have a row for each family of subproblems: in row i we are allowed to choose among the first i items only.

The zeroth row is there just to simplify the algorithm, it being quite obvious that a knapsack filled choosing from no items at all is always going to have a value of zero.

The first row is only marginally more interesting. We check the first item: as soon as its weight fits in the capacity of a (sub)knapsack, we assign its value to that subsolution, and then all the other subsolutions on its right have the same value, since we have no other items available.

It is from the second row that we get a clearer view of the problem's complexity, since for a generic position in it we have to ensure the capacity of the current sub-knapsack is optimally filled by the available items.

Let's see a test case to help us figure out how to work it out.
@Test
public void testWithoutRepetition() {
   int[] weights = new int[] { 2, 3, 4, 6 };
   int[] values = new int[] { 9, 14, 16, 30 };

   assertThat(Knapsack.withoutRepetition(values, weights, 10), is(46));
}
The table with all the subsolutions for our problem would be:
[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0]
[ 0,  0,  9,  9,  9,  9,  9,  9,  9,  9,  9]
[ 0,  0,  9, 14, 14, 23, 23, 23, 23, 23, 23]
[ 0,  0,  9, 14, 16, 23, 25, 30, 30, 39, 39]
[ 0,  0,  9, 14, 16, 23, 30, 30, 39, 44, 46]
Notice that the zeroth column is all zeroes because, whatever the number of items available, a sub-knapsack with zero capacity leads to a no-item-selected solution. The first column is again filled with zeroes, but for a different reason: there is no item with a weight of 1.
In the first row we use only the first item available, the one with weight 2 and value 9. From the second cell on, we can put its value in it.
In the second row, we can put only {2, 9} in cell (2, 2). Next to the right, in (2, 3), we could put {2, 9} or {3, 14}; the latter wins. From (2, 5) on, we can fit both items, so the value stored in these cells is 23.
And so on.

Here is how I have modified the algorithm seen in the previous post to adapt it to this slightly different problem:
public static int withoutRepetition(int[] values, int[] weights, int weight) {
   assert (values.length == weights.length); // 1

   int[][] results = new int[values.length + 1][weight + 1]; // 2
   for(int i = 1; i < results.length; ++i) { // 3
      for(int j = 1; j < results[i].length; ++j) { // 4
         results[i][j] = results[i - 1][j]; // 5
         if(weights[i - 1] > j) // 6
            continue;
         int candidate = results[i - 1][j - weights[i - 1]] + values[i - 1]; // 7
         if(candidate > results[i][j])
            results[i][j] = candidate;
      }
   }

   return results[values.length][weight];
}
1. A minimal check to ensure no funny stuff in the input: each value should have an associated weight.
2. Core of the dynamic programming algorithm. A table that is going to contain all the subsolutions is created. The following double for loop will fill it up and in the bottom right cell we'll find the solution.
3. Row 0 is not affected, since we know that it has to contain just a bunch of zeroes.
4. Column 0, for each row, is not affected since, again, we just want a zero in it.
5. The current element is initialized with the value contained in the cell in the row above, same column.
6. If the weight of the item associated with the current row is bigger than the capacity for the current subsolution, there is not much to do, just skip to the next iteration.
7. Otherwise, let's check if putting the new element in the knapsack leads to a better solution. We make room for it in the knapsack, looking at the best solution in the row above for the remaining capacity, and add its value.

On GitHub you will see that my Java class Knapsack now also has the method to calculate the "without repetition" variation of the problem, and a test case for it has been added to the associated JUnit class.


Knapsack with repetitions

We are given a set of items, each of them having a weight and a value, and a knapsack with a specified capacity. Each item is available in as many copies as we like. We want to fill the knapsack so that it contains the highest possible value.

A JUnit test case should help to clarify the problem:
@Test
public void testWithRepetition() {
   int[] weights = new int[] { 2, 3, 4, 6 };
   int[] values = new int[] { 9, 14, 16, 30 };

   assertThat(Knapsack.withRepetition(values, weights, 10), is(48));
}
We have four different items. The lightest one has weight 2 and value 9, the heaviest weight 6 and value 30. We want to write a static method withRepetition() in the class Knapsack that, given in input the values, the weights, and the knapsack capacity (here 10), gives back the maximum value storable in that knapsack.

In our case the expected result is 48, coming from weights 6 + 2 + 2, the value of this triad being 30 + 9 + 9.

Here is my Dynamic Programming solution, based on the algorithm discussed in chapter 6 of the excellent book Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, using Java as implementation language:
public static int withRepetition(int[] values, int[] weights, int weight) {
   assert(values.length == weights.length);

   int[] results = new int[weight + 1]; // 1
   for(int i = 1; i < results.length; ++i) { // 2
      for(int j = 0; j < weights.length; ++j) { // 3
         if(weights[j] > i) // 4
            continue;
         int candidate = results[i - weights[j]] + values[j]; // 5
         if(candidate > results[i])
            results[i] = candidate;
      }
   }
   return results[results.length - 1]; // 6
}
1. Here I am going to store all the results of the subproblems and, in the last element, the one for the actual problem. Notice that we also need an element for the trivial case, a knapsack of size zero, which has the obvious solution of zero.
2. Looping on all the DP subproblems, from the first one, where the capacity is one, up to the actual problem.
3. For each subproblem I check which one, among all the available items, maximizes the result.
4. If the current item has a weight bigger than the knapsack capacity, there is nothing to do.
5. Check what happens combining the current item with the subsolution for the residual capacity, storing the candidate result in a temporary variable. If it is a better result, store it in the array of subresults.
6. In the last element of the array we have the required value.

The only test case that I tried is the one provided by the book, which you have already seen above. Anyway, you can find it on GitHub, side by side with the source Java code for the class Knapsack.


Levenshtein distance

Given two strings, return their Levenshtein distance, i.e. the minimum number of character changes needed to mutate the first string into the second one.

You can have a look at this Levenshtein demo to get a better idea of what is expected. Notice that the demo is more complicated than the required solution. We are interested only in the "plain" string type, and we always use weight 1 for indel (insertion/deletion) and substitution operations. Moreover, we don't consider swap.

The Levenshtein distance is often used to show how Dynamic Programming works in a case more complicated than the classic make-change problem. Here I present my Java solution based on the Wagner–Fischer algorithm, which asks us to build a two-dimensional table, generating its values starting from the most basic cases up to the required solution.

The idea of the algorithm is to put the first string on one axis of the table and the second one on the other, paying attention to leave an extra position at the beginning of both strings. So, for instance, if we want to calculate the distance between "one" and "bone", we'll have 4 rows and 5 columns (or the other way round).

Then we have to populate the table. The top-left cell is the easiest one: it represents the case where I want to pass from an empty string to another empty one. No change is needed, so I put a zero there.

To understand what I should put in the first row, I can think of the case where I want to calculate the distance from an empty string to one of size n. Obviously, the value increases by one in each cell, representing the cost of inserting a new character.

Similarly, the first column will follow the same structure. Here the increment would represent the deletion of a character.

The generic case is a bit more complicated, since we have to evaluate four different cases. Let's think of cell (1,1). To decide which value to assign to it, we need to compare the first elements of both strings. If they are the same, we don't have to increase the distance: we just keep the value calculated for the top-left case, namely cell (0,0). Otherwise, we need to consider all three possible edits that would let us arrive there: inserting a new character, deleting the existing one, or substituting it. Since we want to find the minimum distance, we choose the smallest among these values.

Remembering how we filled the first row and column, we can see how to get the value for inserting and deleting: it is just a matter of increasing the value contained in the cell on the left and on the top, respectively. For substitution we refer to the top-left cell, as we have done for the equality case.

Once we have grasped the sense of it, building the table is a piece of cake, and what we have to do is just return the value in the bottom-right cell.

For testing purposes, and to let me better understand how things work, instead of implementing the solution with a single static method, I used a class with a constructor, which does all the dirty work, and a couple of public methods: one that returns the calculated Levenshtein distance and another one that checks the table as generated by the constructor. You can see the full Java code on GitHub, both for the Levenshtein class and its associated test cases; here I publish and comment only the ctor:
private int[][] matrix; // 1

public Levenshtein(String from, String to) { // 2
    matrix = new int[from.length() + 1][to.length() + 1]; // 3
    for (int i = 1; i < matrix[0].length; ++i) { // 4
        matrix[0][i] = i;
    }
    for (int j = 1; j < matrix.length; ++j) {
        matrix[j][0] = j;
    }

    for (int i = 0; i < from.length(); i++) { // 5
        for (int j = 0; j < to.length(); j++) {
            if (from.charAt(i) == to.charAt(j)) { // 6
                matrix[i + 1][j + 1] = matrix[i][j];
            } else { // 7
                int insert = matrix[i][j + 1] + 1;
                int delete = matrix[i + 1][j] + 1;
                int substitute = matrix[i][j] + 1;
                matrix[i + 1][j + 1] = Math.min(Math.min(insert, delete), substitute);
            }
        }
    }
}
1. Here is where I am going to store all the partial solutions and the actual one.
2. I associate the first string to the rows in the matrix, the second one to the columns.
3. The matrix has one row for each character in the first string, plus one, and one column for each character in the second one, plus one.
4. Initialize the zeroth row and column as discussed above. No need to initialize matrix[0][0], the default value 0 is exactly what we want.
5. Loop on all the characters in both strings.
6. It's a match! Put in the corresponding cell in the matrix the same value stored in the top-left cell.
7. Otherwise calculate how much it is going to cost to get there by insertion, deletion, or substitution, and put there the minimum value among these three.

Once the matrix is built, the solution appears in its bottom-right cell.
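To see the table-filling logic at work on the "one" / "bone" example from above, here is the same algorithm condensed into a single standalone function (just an illustration with names of my own choosing; the actual class on GitHub keeps the matrix as a member):

```java
public class LevenshteinDemo {
    static int distance(String from, String to) {
        int[][] matrix = new int[from.length() + 1][to.length() + 1];
        for (int i = 1; i < matrix[0].length; ++i) matrix[0][i] = i; // insertions only
        for (int j = 1; j < matrix.length; ++j) matrix[j][0] = j;    // deletions only

        for (int i = 0; i < from.length(); i++) {
            for (int j = 0; j < to.length(); j++) {
                if (from.charAt(i) == to.charAt(j)) {
                    matrix[i + 1][j + 1] = matrix[i][j]; // match: no extra cost
                } else {
                    int insert = matrix[i][j + 1] + 1;
                    int delete = matrix[i + 1][j] + 1;
                    int substitute = matrix[i][j] + 1;
                    matrix[i + 1][j + 1] = Math.min(Math.min(insert, delete), substitute);
                }
            }
        }
        return matrix[from.length()][to.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("one", "bone"));      // a single insertion
        System.out.println(distance("kitten", "sitting")); // the classic example
    }
}
```

Running it prints 1 for "one" / "bone" and 3 for the classic "kitten" / "sitting" pair.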


Minimum number of coins to make change

We should give a certain amount back, and we want to minimize the number of coins involved in the operation, according to the available denominations. So, for instance, say we have only copper Euro cents (1, 2, 5) and the change should be 4. In this case the expected result is 2, a couple of 2c coins.

We could think of using a greedy procedure, and sometimes it works beautifully, as in the example I showed above. However, it does not work for all possible denominations. As a counterexample, think about what happens if we have to generate 40 from (1, 5, 10, 20, 25). The greedy solution is 3, 25c + 10c + 5c, while the optimal solution is 2, 20c + 20c.
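To see the failure concretely, here is a minimal greedy sketch of my own (the names are illustrative, this is not the CoinChanger code from GitHub): it repeatedly grabs as many coins of the largest denomination that still fits.

```java
public class GreedyChange {
    // Always take the largest denomination not exceeding the residual amount.
    static int greedyNumCoins(int amount, int[] coins) { // coins sorted ascending
        int count = 0;
        for (int i = coins.length - 1; i >= 0; --i) {
            count += amount / coins[i]; // as many of this coin as possible
            amount %= coins[i];
        }
        return count;
    }

    public static void main(String[] args) {
        // greedy picks 25c + 10c + 5c, three coins; the optimal 20c + 20c needs two
        System.out.println(greedyNumCoins(40, new int[] { 1, 5, 10, 20, 25 }));
    }
}
```

It prints 3, one coin more than the dynamic programming optimum.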

Actually, this problem is commonly used to introduce dynamic programming, where the idea is solving all the subproblems originating from the required one, starting from the simplest ones, and using those partial solutions to build up the more complex ones.

Here is a possible Java solution:
public static int minNumCoins(int amount, List<Integer> coins) {
   assert (amount >= 0); // 1
   assert (isConsistent(coins)); // 2

   List<Integer> results = new ArrayList<>(amount + 1); // 3
   results.add(0); // 4
   for(int i = 1; i <= amount; ++i) { // 5
      results.add(Integer.MAX_VALUE); // 6
      for(int coin : coins) { // 7
         if(coin > i)
            break;

         int tentative = results.get(i - coin) + 1; // 8
         if(tentative < results.get(i)) {
            results.set(i, tentative);
         }
      }
   }

   return results.get(results.size() - 1); // 9
}
1. I assert that the required amount is not negative.
2. I have delegated to a helper method the consistency check on the coin list. I assume the requirements are that it should not be empty, it has to start from 1, and it must be sorted.
3. I'm going to store in this array-list the results for the subproblems and for the problem itself. When a step requires a previously calculated subsolution, it gets the result from this list. This technique is known as memoization.
4. Base case. An amount of zero requires no coin.
5. Let's calculate all the subproblems before generating the result for the actual one.
6. Initialize the current solution to the worst possible value.
7. Check it against all the available coins. Assuming the list is sorted, see (2), we can stop looping as soon as we see a coin greater than the current amount.
8. A tentative solution is the number of coins for the subproblem on the residual amount, plus one. If this try is better than the current tentative solution, let's use it as the candidate.
9. Now we just have to return the last item in the result array-list.

The full Java code is on GitHub. There you will find, besides the code you have already seen above, a greedy implementation in the same class CoinChanger, which is going to fail in some cases. A JUnit suite of test cases is there too.


Greatest Common Divisor

Given two integers, find their greatest common divisor.

If you don't know the Euclid algorithm, you should get to a piece of code like this:
static public int naive(int a, int b) {
    int result = 1;
    for (int i = 2; i <= Math.min(a, b); ++i) {
        if (a % i == 0 && b % i == 0) {
            result = i;
        }
    }
    return result;
}
This raw idea could be improved. For instance, it makes sense to check candidate divisors only up to the square root of the smaller number, pairing each divisor i with its cofactor min(a, b) / i, and to skip multiples of already discarded numbers: if 2 does not divide both, there is no use in checking any even number.

However, Euclid devised such an elegant and fast converging algorithm that you should have a really good reason to explore alternatives.

Knowing that the gcd of a and b is the same as the gcd of b and a % b, we can write:
public static int euclid(int a, int b) {
    while (b != 0) {
        int temp = b;
        b = a % b;
        a = temp;
    }
    return a;
}
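A quick sanity check that the two versions agree, with both method bodies copied here so the snippet runs standalone:

```java
public class GcdDemo {
    static int naive(int a, int b) {
        int result = 1;
        for (int i = 2; i <= Math.min(a, b); ++i) {
            if (a % i == 0 && b % i == 0) {
                result = i;
            }
        }
        return result;
    }

    static int euclid(int a, int b) {
        while (b != 0) {
            int temp = b;
            b = a % b;
            a = temp;
        }
        return a;
    }

    public static void main(String[] args) {
        // 1071 and 462 share the divisors 1, 3, 7, 21
        System.out.println(naive(1071, 462));
        System.out.println(euclid(1071, 462));
    }
}
```

Both print 21; euclid gets there in three modulo operations, while naive loops over all the candidates up to 462.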

To ensure I didn't make any silly mistake when writing my Java code, I have written a couple of simple JUnit test cases. Both files are now on GitHub.


Last digit of a Fibonacci number

The Fibonacci sequence leads rapidly to such huge values that it gets difficult even to think of a monster like Fibonacci of one million. However, let's say that we have a less mind-boggling problem: calculating just its last digit, even for a large input, like one hundred million. This is not a big issue: in the worst case it is easy to spot a linear time complexity solution, and if we check the properties of the sequence closely, we can devise a constant time one.

Firstly, I write a few test cases. I am using JUnit 4, and the assertThat Hamcrest-style assertion:
public class FibonacciTest {
    static final Fibonacci fib = new Fibonacci();

    // ...

    @Test
    public void testLastDigit40() {
        assertThat(fib.lastDigitNaive(40), is(5));
    }

    @Test
    public void testLastDigit327305() {
        assertThat(fib.lastDigit(327_305), is(5));
    }

    @Test
    public void testLastDigit100mio() {
        assertThat(fib.lastDigit(100_000_000), is(5));
    }
It is reasonably easy to calculate that Fibonacci(40) = 102_334_155 and, trust me, both Fibonacci(327_305) and Fibonacci(100_000_000) also have a five as their last digit.

Then I write a first implementation for lastDigit() method in my class Fibonacci:
public int lastDigit(int n) {
    if (n < 2) // 1
        return n;

    int previous = 0; // 2
    int current = 1;

    for (int i = 1; i < n; ++i) { // 3
        int older = previous;
        previous = current;
        current = (older + previous) % 10; // 4
    }

    return current;
}
1. According to the modern definition of this sequence, Fibonacci(0) is 0 and Fibonacci(1) is 1.
2. To calculate the current Fibonacci number we need to keep track of the latest two ones.
3. Let's loop until we reach our target. The oldest value is discarded, the current one becomes the previous, and the new current one is calculated.
4. We are lucky: we don't need to store the actual Fibonacci number, its last digit is enough.

Not a bad solution; however, we can do better. We just have to notice that after 60 numbers the sequence of last digits enters a loop and starts repeating itself. There is an interesting thread on math stackexchange that gives more details on this curious feature.

Let's exploit it to get a better solution:
public int lastDigit(int n) {
    int[] fibLast = new int[60]; // 1
    fibLast[0] = 0;
    fibLast[1] = 1;
    for (int i = 2; i < 60; ++i) { // 2
        fibLast[i] = (fibLast[i - 1] + fibLast[i - 2]) % 10;
    }

    return fibLast[n%60]; // 3
}
1. I am going to do the same job as in the first implementation, however I cache the first sixty values in a raw array.
2. Notice the despicable use of the magic number 60 here and in (1). It should be a well commented constant instead!
3. Thanks to the looping nature of the last digit of Fibonacci numbers, I just have to apply the modulo operator to extract the correct index into my cached values.

As an obvious performance improvement, we should initialize the cache only the first time the method is called. In any case, I think the point is made, so I commit the Fibonacci Java class and its JUnit tester to GitHub.
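That lazy initialization could look something like the sketch below (just one way to do it, not the committed code; note that without synchronization it is not thread-safe):

```java
public class Fibonacci {
    private static int[] fibLast; // cache of the 60-value cycle of last digits

    public int lastDigit(int n) {
        if (fibLast == null) { // the cache is built only on the first call
            fibLast = new int[60];
            fibLast[1] = 1;
            for (int i = 2; i < fibLast.length; ++i) {
                fibLast[i] = (fibLast[i - 1] + fibLast[i - 2]) % 10;
            }
        }
        return fibLast[n % 60];
    }

    public static void main(String[] args) {
        Fibonacci fib = new Fibonacci();
        System.out.println(fib.lastDigit(40));          // Fibonacci(40) = 102_334_155
        System.out.println(fib.lastDigit(100_000_000)); // same last digit
    }
}
```

Both calls print 5, and any call after the first one costs just an array lookup.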


Add testing to a Maven project

This post was not as obsolete as the previous two; however, it referred to JUnit 4, while the most interesting point in the argument, I feel, is the fact that we should use a recent Surefire Maven plugin if we want the tests run on build to work with JUnit 5.

So, again, so long my outdated post. If interested, you could get its new version on my other blog: https://thisthread.blogspot.com/2020/05/test-on-build-with-maven.html.


Adding Logback to a Maven Project

If you have written a Java project, maybe using Eclipse as IDE, but fundamentally using Maven as build manager, and you ask yourself how to add dependencies to it and, even more importantly, how to tell Maven to include your dependencies in the resulting jar, making it "fat" but also more easily usable, this post was for you.

I used Logback as sample library, but the story is the same for any dependency you put in your app.

Unfortunately it aged badly, so I had to ditch it.

Still, I decided to rewrite it and I put the new version in my other blog. Follow the link if interested in it. https://thisthread.blogspot.com/2020/05/jar-with-dependencies-through-maven.html


Simple Maven Eclipse Project

As the title hints, this post was about creating a tiny little Java desktop apps on the Eclipse IDE using Maven as building tool.

I say "was" because, unfortunately, it was too obsolete to survive.

My first idea was just to clean it up a bit and make it more fit for the current time. Then I found out that it was simpler for me, and clearer for the reader, to rewrite it from scratch. Along the way, I have also refactored the code and rearranged the repository on GitHub.

Moreover, I moved it to my other blog, this thread, that I plan to keep as the only one active. If interested, please follow the link https://thisthread.blogspot.com/2020/05/simple-maven-eclipse-project.html


Spring log only to file

Logging is one of the fuzziest areas in Java. The standard JUL, java.util.logging, entered the arena late, and it has to compete against well respected libraries like Log4J2, SLF4J, and Logback (usually SLF4J with Logback). The Spring guys decided to go for JCL, the Apache Commons Logging, which wraps SLF4J over the actual logging library of your choice, here being Logback.

If you don't have any special requirement, you can happily ignore which actual logger is used, just write your code for the JCL interface, leaving out of your scope any low level dependency.

However, if you want your log going to a file, and not to console, as often is the case, you have to deal with the actual logger. Not a big deal, if Logback is your choice.

Let's modify the function greeting in my GreetingController to log some (un)useful comments:
public String greeting() {
 log.trace("trace hello");
 log.debug("debug hello");
 log.info("info hello");
 log.warn("warn hello");
 log.error("error hello");
 log.fatal("fatal hello");
 return "Hello!";
}
Where log is a private static final object of type org.apache.commons.logging.Log initialized through the JCL LogFactory.

This could be enough. Still, you would get a mildly surprising output, something like:
2016-05-30 22:14:00.888  INFO (...)  : info hello
2016-05-30 22:14:00.888  WARN (...)  : warn hello
2016-05-30 22:14:00.888 ERROR (...)  : error hello
2016-05-30 22:14:00.888 ERROR (...)  : fatal hello
I edited out details in the middle of the lines; I want to focus on the fact that we are missing the trace and debug messages, and that the fatal one became an error. If you really want fatal log messages, Logback is not your choice, since it does not have this log level, so they are mapped as plain errors.

The first problem can easily be solved by adding an entry to the Spring application.properties file (in src/main/resources). Say that I want to log all messages, from trace up to fatal, generated in my packages rooted in dd.manny. I'll add this line:
logging.level.dd.manny=trace
Good. Now I want Spring to log to a file. By default the file will be named spring.log, and I can decide in which folder it is placed like this:
logging.path=/tmp
Nice and easy. Just one thing. I wanted the log to go exclusively to file. To get this effect I have to configure the actual logger.

For this reason I added a Logback configuration file in the src/main/resources folder that mimics the default Spring one, but has no appender for the console. The key point is that I keep the log level to INFO and I specify FILE as the appender, which is going to be set through the property specified above.
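A sketch of what such a logback.xml may look like (the include resources below are the ones shipped with the Spring Boot version I used; check the names against your own version, and see the actual file on GitHub):

```xml
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <!-- file-appender.xml defines the FILE appender, driven by the logging properties -->
    <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>
    <root level="INFO">
        <!-- no CONSOLE appender here: everything goes to the file only -->
        <appender-ref ref="FILE"/>
    </root>
</configuration>
```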

The full Spring Boot project is on GitHub. The relevant files are GreetingController.java, application.properties, and logback.xml.


Setting up Android Studio

Android Studio is the Google IDE for Android, based on IntelliJ IDEA, announced at the 2013 Google I/O. Currently it is available in an early access preview that, even if not completely stable, is worth a try.

Here I report what I have done to install it on a Linux Debian-based (actually Ubuntu) box, then I'll write about how I developed a first "Hello world" application.

Download

On android.com, I went to the Android Studio page. There I found links to specific Android Studio bundles for three different platforms: Windows, Mac OS X, and Linux. I clicked on the Linux one, and patiently waited for the 400 MB package to download to my machine. At the time I wrote this post, the current version was 0.2.2.

Install

The bundle was actually a compressed tarball, named "android-studio-bundle-130.737825-linux.tgz". I moved it to a place that looked right to me, and expanded it by a call to tar xvfz. Everything was put in the folder android-studio. Under its bin directory there is a shell script, studio.sh, that we need to call to run the IDE.

Theoretically speaking, that's it.

Still, I have experienced a couple of issues. The minor one was solved installing Gradle "by hand", getting it from the official gradle.org download page. The other came from adb, the Android Debug Bridge, which had a couple of dependencies unresolved at runtime. It logged that the problem was about curses but, when I ran ldd on it, I saw that both libncurses.so.5 and libstdc++.so.6 were reported as not found. In my case this was due to the fact that I am on a 64 bit system, while adb is a 32 bit application. This was the solution:
sudo apt-get install lib32ncurses5 lib32stdc++6


Introduction to ActiveMQ

Sort of a mouthful of a title for this book, Instant Apache ActiveMQ messaging application development how-to, but don't let that take you aback. It is a slim (sixty-ish pages) tutorial on ActiveMQ written by Timothy Bish, who is an active contributor to that Apache project.

You can't expect it to go too deep into the matter; after all it's in a series that has as its headline "Learn in an instant, short, fast, focused". Still, I think it is good as a first tutorial. It refers to ActiveMQ version 5.8 (the latest release, currently), and it is thought for a Java developer who needs to get introduced to this message broker. You'd better have some experience in Java, but you'd probably get the most out of it even if you don't know much about MOM (Message-Oriented Middleware) and in particular about JMS, the Java EE messaging standard defined by the JSR 914 specification and implemented by ActiveMQ.

After installing ActiveMQ and setting up a working environment (no IDE is used; we are shown instead how to do it "by hand", via Maven), it is shown how to create a first simple application that sends and receives a message. Then we are introduced to JMS queues, topics, and selectors. All this stuff is marked as "simple", and it is followed by a chapter where we are introduced to the request-response pattern (the only part of the book that is considered "intermediate").

The third part of the book consists of half a dozen "advanced" chapters where we can read about scheduling, monitoring, testing, pooling connections, using virtual destinations, and failover transport.

Lots of examples help make clearer the concepts explained by Bish in a plain and readable prose. There is no room to go into much detail, still many "There's more" sections offer pointers to find more information on the net.


Marshalling derived classes with JAXB

When I have to convert Java objects to XML strings, I normally use JAXB, the nice standard library included in the Java Standard Edition since version 1.6. I have already written in the past about marshalling and unmarshalling (that is, transforming an object to an XML stream, and vice versa), but I have recently used JAXB to marshal a class that has a polymorphic data member. There are a few differences from the standard behavior that I think deserve some extra words.

I have a hierarchy of classes that I want to use polymorphically in another class. Something like this:
@XmlRootElement // 1
public class Data {
  @XmlElement
  private String a;

  // ...
}

@XmlRootElement 
public class DataExt extends Data {
  @XmlElement
  private String b;

  // ...
}

@XmlRootElement
public class Phone {
  @XmlElement
  private String type;
  @XmlElement // 2
  private Data data;

  // ...
}
1. This is not enough. When JAXB performs the binding, it doesn't care about the runtime class associated with the current object, but goes straight to the compile time type. We should explicitly say which other classes are part of the hierarchy, using the XmlSeeAlso annotation.
2. We also need to tell JAXB that it should check the runtime type for this object. This is done by replacing the XmlElement annotation with @XmlElementRef. Actually, at least in my case, this is not a strict necessity. Once @XmlSeeAlso is specified on the class definition, JAXB is smart enough to deduce that it has to treat the related objects polymorphically. Still, it adds an attribute to the element, in my case xsi:type="dataExt", to show which is the actual type it is using.

So, we'd better change an annotation, and add another one:
@XmlRootElement
public class Phone {
  @XmlElement
  private String type;
  @XmlElementRef
  private Data data;

  // ...
}

@XmlRootElement
@XmlSeeAlso(DataExt.class)
public class Data {
  @XmlElement
  private String a;

  // ...
}
Besides, I also wanted to keep transparent to the XML user which actual class I used, meaning that I want the same tag for elements referring to the Data hierarchy. That's very easy: I specify the same name for both XmlRootElement's:
@XmlRootElement(name="data")
public class DataExt extends Data {
  @XmlElement
  private String b;

  // ...
}


Improved SLF4J simple logging

I have written in the past a few introductory posts on SLF4J based on version 1.6.4 that are now obsolete (as Peter correctly pointed out). Here I consider mainly that, since version 1.6.6, the SLF4J SimpleLogger supports configuration properties. This makes the SimpleLogger more attractive, at least for a tiny project, or for the early stages of development.
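Those configuration properties are read from a simplelogger.properties file in the classpath; something like this (the keys are the documented SimpleLogger ones, the values just an example):

```properties
# default log level for all loggers (trace, debug, info, warn, error)
org.slf4j.simpleLogger.defaultLogLevel=debug
# send the output to a file instead of System.err
org.slf4j.simpleLogger.logFile=myApp.log
# show a timestamp, with the given format
org.slf4j.simpleLogger.showDateTime=true
org.slf4j.simpleLogger.dateTimeFormat=yyyy-MM-dd HH:mm:ss
```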

The process of installing SLF4J has not changed at all in this time: you should go to the official SLF4J download page, fetch the latest package available (currently version 1.7.5), and extract the content in your preferred local directory. If, for any reason, you need a previous version, you can go and fetch it from the SLF4J repository.

I have written a tiny two-class Java application to check logging at work. The main class, named Hello, does some logging and then creates an object of the other class (which I named Another):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Hello {
  public static void main(String[] args) {
    Logger logger = LoggerFactory.getLogger(Hello.class);

    logger.info("an info message");
    try { Thread.sleep(472); } catch(Exception e) {}
    logger.error("an error message");
    try { Thread.sleep(123); } catch(Exception e) {}
    logger.trace("a trace message");
    try { Thread.sleep(42); } catch(Exception e) {}
    logger.warn("a warn message");
    try { Thread.sleep(105); } catch(Exception e) {}
    logger.debug("a debug message");

    new Another();
  }
}

The other class, Another, is even simpler:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Another {
  public Another() {
    Logger logger = LoggerFactory.getLogger(Another.class);
    logger.info("an info message");
    logger.error("an error message");
  }
}

To build these classes, you need to put the SLF4J API jar, named slf4j-api-1.7.5.jar (the version number may vary), in the classpath.

The minimal configuration to run my application requires the same SLF4J API jar in its classpath; forgetting it leads to a java.lang.NoClassDefFoundError for org/slf4j/LoggerFactory. With the API jar alone, you get a friendly warning from SLF4J, which could be translated as "You didn't say where I should send your log, so I will just swallow your messages, sending them to the NO-OP plugin. If this is not the behavior you expected, you'd better check the documentation":
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html (...)
And if you check the documentation, you will see that you have to put another jar in the classpath, which determines how the logger behaves.

The NOP implementation is contained in slf4j-nop-1.7.5.jar; if you put this jar in the classpath, you get the same result as above (nothing logged), without the reminder shown before.

If you want SLF4J to delegate the logging to JUL, you should put slf4j-jdk14-1.7.5.jar in the classpath. JUL is the standard Java logger (java.util.logging is its package name), included in the JDK since version 1.4, and the SLF4J plugin used in this case acts as a bridge to that log implementation. For my example, I get this output:
Apr 19, 2013 8:18:45 PM Hello main
INFO: an info message
Apr 19, 2013 8:18:46 PM Hello main
SEVERE: an error message
Apr 19, 2013 8:18:46 PM Hello main
WARNING: a warn message
Apr 19, 2013 8:18:46 PM Another <init>
INFO: an info message
Apr 19, 2013 8:18:46 PM Another <init>
SEVERE: an error message

Let's finally see what I get if I put in the classpath slf4j-simple-1.7.5.jar, the SLF4J simple logging plugin.
[main] INFO Hello - an info message
[main] ERROR Hello - an error message
[main] WARN Hello - a warn message
[main] INFO Another - an info message
[main] ERROR Another - an error message
In square brackets we have the thread name, then we see the log level, then the logger name, and finally the log message.

The interesting part is that now we can provide a file, named simplelogger.properties, where we can set some configuration properties. More details are in the official SLF4J documentation, but let's see a sample configuration file and its impact on the generated log:
#default is System.err
#org.slf4j.simpleLogger.logFile = simple.log

#trace, debug, info, warn, error - default is info
org.slf4j.simpleLogger.defaultLogLevel = trace

org.slf4j.simpleLogger.log.Another = error

#default is false
org.slf4j.simpleLogger.showDateTime = true

#default is milliseconds since startup
org.slf4j.simpleLogger.dateTimeFormat = HH.mm.ss:SSS

#default is true
org.slf4j.simpleLogger.showThreadName = false
If we specify the org.slf4j.simpleLogger.logFile property, the log is redirected to the named file. By default the special name System.err is used, a placeholder for the standard error console.

The default log level is info, meaning that only messages at info level and above are logged, but we can change it by setting the org.slf4j.simpleLogger.defaultLogLevel property. Here I say that I want to see all the logging (trace being the lowest available level).

I can set the log level for a specific class (or package) by setting an org.slf4j.simpleLogger.log.* property. Here you can see that for the class Another I want only the top level log messages (namely, errors).

Usually date/time is not shown in the log; to have it, we should set org.slf4j.simpleLogger.showDateTime to true.

If we ask for the date/time to be shown but don't specify a format, we see the number of milliseconds since the application startup. The org.slf4j.simpleLogger.dateTimeFormat property is used to specify a different format. Here I ask for the time as hour.minute.second:milliseconds.

If I don't care about the thread name, I can set org.slf4j.simpleLogger.showThreadName to false, so that it is not logged.

With such a simplelogger.properties, the resulting log is something like:
20.51.11:015 INFO Hello - an info message
20.51.11:488 ERROR Hello - an error message
20.51.11:612 TRACE Hello - a trace message
20.51.11:654 WARN Hello - a warn message
20.51.11:760 DEBUG Hello - a debug message
20.51.11:761 ERROR Another - an error message


From Java objects to Json

I am pretty busy these days. Just a few lines to remind myself that I have a couple of blogs to think about, and that it would be good to write something on them once in a while.

Currently I am working on a servlet that, among other things, generates a few Json responses on request. It happens. Json is very popular in the JavaScript environment (and not only there), and one should be prepared to convert data to and from that format.

The servlet I am working on currently does this job "by hand" (yuck!), polluting the code with obscure short strings that make it a nightmare to maintain.

A few better solutions exist. I had a quick look around and picked the google-gson library, which looks simple and powerful enough for my current requirements.

As you could guess from its name, google-gson is a project hosted on Google Code. You will find its home page there, where you can also get the latest version (currently 2.2.2) and enough documentation to start working with it.

I have written a few test cases, just to check the library before using it in the real code, but nothing worth showing here. Have a look instead at the user guide, which includes a few examples that should be enough to let you see how it works.
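Still, to give an idea of the kind of round trip I am talking about, here is a minimal sketch, assuming gson-2.2.2.jar is in the classpath; the Event class and its fields are made up just for this example:

```java
import com.google.gson.Gson;

public class GsonSketch {
    // A made-up POJO, just for the example.
    static class Event {
        private String name;
        private int code;

        Event(String name, int code) {
            this.name = name;
            this.code = code;
        }
    }

    public static void main(String[] args) {
        Gson gson = new Gson();

        // Object to Json: field names become Json keys.
        String json = gson.toJson(new Event("login", 42));
        System.out.println(json); // {"name":"login","code":42}

        // And back: Json to object.
        Event event = gson.fromJson(json, Event.class);
        System.out.println(event.name + " " + event.code); // login 42
    }
}
```

Notice that Gson works by reflection on the fields, even private ones, so no getters or annotations are needed in the simplest case.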


From XML to object

We have seen how easy it is to marshal a Java object to XML using JAXB; now I am showing you how to unmarshal an XML fragment into a (compatible) Java object using the same library.

We have this XML fragment:
<user id="42" rating="4.2">Bill</user>
And we want to get from it an instance of the class User, as defined in the previous post. The (minor) nuisance is that I have to annotate User so that JAXB knows how to map its data members to the XML elements, values, and attributes. Once this is done, we get the job done in a few lines:
public User xml2Obj(String xml) { // 1
    try {
        JAXBContext ctx = JAXBContext.newInstance(User.class); // 2
        Unmarshaller um = ctx.createUnmarshaller(); // 3

        StringReader sr = new StringReader(xml);
        User user = (User) um.unmarshal(sr); // 4
        return user;
    } catch (JAXBException e) {
        e.printStackTrace();
        return null;
    }
}
1. This function gets in input the XML to be unmarshalled, and gives back the resulting User object (or null, in case of error).
2. A JAXB context is needed to do the magic, and it should know on which class it operates.
3. A JAXB unmarshaller is the guy that is going to do the dirty job.
4. To keep the unmarshaller code simple, it has been designed to operate on readers. So the XML String is wrapped in a StringReader to adapt it to the JAXB expectations.

The full Java source code for the User class and a Main class using it are on github. The example includes both marshalling (as seen in the previous post) and unmarshalling.


From object to XML

JAXB (Java Architecture for XML Binding) provides an easy way of converting Java objects to XML. In the JAXB jargon this is known as marshalling, while unmarshalling means the opposite action of extracting a Java object from an XML document.

As a first example, let's see how to create an XML fragment like this:
<user id="42" rating="4.2">Bill</user>
from a Java class named User that has the user name, id, and rating as data members.

Even if this looks like a pretty simple job, there are still a couple of points worth paying attention to. A large part of the JAXB job is done by annotations, with the result that the code looks quite simple to understand at first sight, once you know what is going on.

In this case, I want my class User to have a couple of annotations: one to state that it represents a root XML element, and a second one to let JAXB know that the annotations specifying the XML role are placed on the data fields, and not on their getters.

The result is something like this:
@XmlRootElement // 1
@XmlAccessorType(XmlAccessType.FIELD) // 2
public class User {
    @XmlValue
    private String name;
    @XmlAttribute
    private int id;
    @XmlAttribute
    private float rating;

    // getters and setters follow ...
}
1. This class is the root element for my XML
2. If you don't explicitly say that the accessor type should be found in the field definition, JAXB gets confused. If you comment this line out, you get an IllegalAnnotationsException, and a message saying that "Class has two properties of the same name".

Once you have defined your JAXB annotated class, it is just a matter of creating an object, and marshalling:
public String obj2xml(User user) {  
    try {
        JAXBContext ctx = JAXBContext.newInstance(User.class); // 1
        Marshaller marshaller = ctx.createMarshaller(); // 2
        marshaller.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE); // 3

        StringWriter sw = new StringWriter();
        marshaller.marshal(user, sw); // 4
        return sw.toString();
    }
    catch (JAXBException e) { // 5
        e.printStackTrace();
        return "<bad />";
    }
}
1. Create a JAXB context that knows about the User class.
2. Then from the context we get a marshaller.
3. Here I want to generate just an XML fragment; if I wanted a full XML document, I would remove this line, and the generated XML would have this header:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
4. Ask the marshaller to send its output to a StringWriter, from which the resulting XML could be extracted as a String.
5. In case anything goes wrong, give an alarming feedback to the user (not a very polite thing to do, but here it will suffice) while still returning valid XML.

The full Java source code for the User class and a Main class using it are on github.


Map subsetting by keySet().toArray()

I recently faced a non-standard request, for which I implemented a solution that doesn't make me completely happy. But I don't have much time to ponder it, and it works. I write it down here, so that maybe someone (or my future self) could show me a better alternative.

I am developing for Java 6, using just the core functionality. Given a map with String keys, and the index of one of its elements, we want to reduce the original map to a subset such that: (1) the resulting map contains 10 elements at most, (2) the element given by the index is the first one, (3) the map traversal order is preserved.

We don't know the actual type of the map object in input; we only know that it implements the Map interface, meaning that we can't use the handy SortedMap.subMap() method.

My first idea was iterating on the map's entry set, removing all the items before the index, then executing a dummy loop ten times, before removing the tail elements (if any). It works, but the central dummy loop is quite boring.

So I opted for converting the map's key set to an array, so that I could randomly access any element in it:
String[] keys = (String[])myMap.keySet().toArray(new String[myMap.size()]); // 1
for(int i = 0; i < first; ++i) // 2
    myMap.remove(keys[i]);

for(int i = first + max; i < keys.length; ++i) // 3
    myMap.remove(keys[i]);
1. This line is painful. Given a myMap object (an instance of a class implementing the Map interface, as described above), we call keySet() on it, which returns a set; on that set we call the toArray() method, which converts it to an array. I have already written something on the involuted toArray() syntax: using this overload we can specify the actual type of the returned array.
2. The int variable first, defined before this code snippet, represents the index of the element we are interested in. Here we remove the leading elements.
3. The max variable stores the maximum number of elements we want in the resized map. The effect is removing all the trailing elements.
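Since I asked for alternatives: one possibility I can sketch is a single pass over the entry set iterator, counting the elements and calling Iterator.remove() on the ones outside the wanted window. This avoids both the array copy and the dummy loop. The class and method names below are of my own invention, just for the sketch:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapSubset {
    // Keep at most max entries starting from index first, removing the
    // others while iterating (single pass, no intermediate array).
    public static <V> void subset(Map<String, V> map, int first, int max) {
        int i = 0;
        for (Iterator<Map.Entry<String, V>> it = map.entrySet().iterator(); it.hasNext(); ++i) {
            it.next();
            if (i < first || i >= first + max)
                it.remove();
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<String, Integer>();
        for (String k : new String[] { "a", "b", "c", "d", "e" })
            m.put(k, Integer.valueOf(k.charAt(0)));
        subset(m, 1, 2);
        System.out.println(m.keySet()); // [b, c]
    }
}
```

Removing through the iterator is safe here, while removing directly from the map during the loop would throw a ConcurrentModificationException on the usual Map implementations.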


Per-queue message time-to-live (TTL)

One of the RabbitMQ protocol extensions to the AMQP specification is that we can specify the message lifetime on a queue. This is done by setting the x-message-ttl argument to a non-negative integer value, representing the time to live in milliseconds.

For instance, if we want to use a queue named "ttl", with a time to live of half a second, we could write something like this (I use Java 7, as you can deduce from the diamond operator):
public class SendRecTTL {
    private final static String QUEUE_NAME = "ttl";
    private final static int TTL = 500;
    private Map<String, Object> args = new HashMap<>();

    public SendRecTTL() { args.put("x-message-ttl", TTL); }

    // ...

        channel.queueDeclare(QUEUE_NAME, false, false, false, args);
A message sent to this queue would expire after half a second, and would be considered "dead".

I have written a simple test application for the current RabbitMQ version (2.8.2) that sends and receives a message on such a queue. Adding a sleep between the two actions, we can see what happens if the delay is too big.

It is better if your code is prepared to receive nothing, instead of hanging forever:
QueueingConsumer.Delivery delivery = consumer.nextDelivery(1000);
if(delivery == null)
{
    // ...
Full Java source code on github.


Hello Maven

As a primer on Maven, there is nothing better than the five minutes getting started page on its Apache site. Here you will find just a few extra notes I jotted down while reading it.

Once you have downloaded and unzipped the Maven package on your machine, you could run this basic command:
mvn --version
to ensure it is up and ready.

The most common issues you could have at this time are:
  • you can't see Maven from your current directory
  • Maven can't see Java
Ensure that the environment variables PATH and JAVA_HOME are properly set to avoid these problems. PATH should also include the Maven bin folder, and JAVA_HOME should refer to your preferred base JDK directory (notice that a JDK is required, a JRE is not enough).
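On a Unix-like system, the setup could look like the lines below, to be put in your shell profile; the two install locations are just placeholders, replace them with wherever you actually extracted the JDK and Maven:

```shell
# Hypothetical install locations - adjust to your own machine.
export JAVA_HOME=/opt/jdk1.7.0
export PATH="$PATH:$JAVA_HOME/bin:/opt/apache-maven/bin"

# Quick sanity check: both variables are now visible to child processes.
echo "JAVA_HOME is $JAVA_HOME"
```

On Windows you would set the same two variables through the System Properties dialog, or with setx.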

Create a project

You can create a Java project from scratch by running this Maven command (it is a single line, even if I split it here for readability):
mvn archetype:generate -DgroupId=package.name -DartifactId=app.name
-DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
I have used "helloMaven" as artifact id, and "hello" as group id. The artifact id is the root directory where all the stuff related to your application will be stored, and the group id is the name of a package that will be created, in which Maven puts a simple hello App.

Proxy issue?

Maven tries to connect to its remote repository to get whatever it needs that can't be found locally (or to get a newer version of it). If you get an error here, it is often due to a proxy issue.

If you do have such a problem, you should have a look at the settings.xml file in the conf directory of your Maven installation.

In the "proxies" section you should create a "proxy" block containing all the information required to go through your proxy. In its minimal form it looks something like this:
<proxy>
  <protocol>http</protocol>
  <host>your.proxy</host>
  <port>80</port>
</proxy>
When Maven completes successfully, you get a directory structure based on the specified artifactId, containing at its first level a src folder and a pom.xml file.

The Project Object Model (if you wonder what pom stands for) XML that I got is:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>hello</groupId>
  <artifactId>helloMaven</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>helloMaven</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
Project build

To build our project we run:
mvn package
The result is placed in the "target" folder. And, if there were no errors, we can run our application this way:
java -cp target/helloMaven-1.0-SNAPSHOT.jar hello.App
Notice that the JAR name is built putting together the artifact id and the version field, as stored in the POM.

Go to the full post