Archive for 2010

How to set up a new JDK with update-alternatives?

update-alternatives is a very good/clever tool, especially for developers who have to maintain different versions of SDKs. I frequently use it to manage different Ruby versions, JDK versions and many other things, but I always had to google how to use it. Since googling it every single time was getting time consuming, I finally decided to learn it properly, and I noticed that it is not that hard. There are actually only a couple of parameters you have to know, that's all.

First of all, what is the idea behind it? It is a way to manage symbolic links for default commands. You can create, remove, maintain and display information about the symbolic links that are managed by the alternatives system. It is primarily a Debian tool, and there are re-implementations in other distributions; Ubuntu ships it as well.

There are two main concepts:

  • Link groups: A set of related symlinks, intended to be updated as a group.
  • Links, which come in two flavors:
    • Master links: The link in a link group that determines how the other links in the group are configured.
    • Slave links: Links in a link group that are controlled by the setting of the master link.

Let's go over a practical example: setting up an update-alternatives configuration for a new JDK (Java Development Kit). First of all, what is a JDK? As the name implies, it is a development kit for building JVM (Java Virtual Machine) based applications, and like every SDK it comes with a handful of command line tools. The thing is, since we have other JDKs on our system, we have to register all the command line tools that come with the JDK in update-alternatives and make them work as a group. We don't want to use one executable from one version of the JDK and another executable from a different version. They have to work as a group: if I switch to another version, all of them have to switch at the same time.

The JDK comes with the following command line tools.

  • jar
  • jarsigner
  • java
  • javac
  • javadoc
  • javah
  • javap
  • javaws

First we have to decide which one of these links will be the master link, assuming that all the links will go under the same link group. This is actually what I prefer, since I want to switch them all together. You might prefer some other combination; it depends on your usage. For me the master is java.

The syntax for installing an alternative is as follows. Basically you just have to specify the generic link (where the symlink will live), the alternative's name, the actual executable and a priority.

--install link name path priority [--slave link name path]

Let's get going and register them in the alternatives system. We will install java as the master link and all the other commands as slave links under the java link group. You can simply execute the following command to accomplish this.

sudo update-alternatives --install /usr/bin/java java /opt/jdk1.5.0_22_64bit/bin/java 200 \
--slave /usr/bin/jar jar /opt/jdk1.5.0_22_64bit/bin/jar \
--slave /usr/bin/jarsigner jarsigner /opt/jdk1.5.0_22_64bit/bin/jarsigner \
--slave /usr/bin/javac javac /opt/jdk1.5.0_22_64bit/bin/javac \
--slave /usr/bin/javadoc javadoc /opt/jdk1.5.0_22_64bit/bin/javadoc \
--slave /usr/bin/javah javah /opt/jdk1.5.0_22_64bit/bin/javah \
--slave /usr/bin/javap javap /opt/jdk1.5.0_22_64bit/bin/javap \
--slave /usr/bin/javaws javaws /opt/jdk1.5.0_22_64bit/bin/javaws

Another option is to install each of them as a master link in its own link group, but this means that if you want to switch to a different version of the JDK you have to switch them one by one. To do so, execute the following commands.

sudo update-alternatives --install /usr/bin/java java /opt/jdk1.5.0_22_64bit/bin/java 200
sudo update-alternatives --install /usr/bin/jar jar /opt/jdk1.5.0_22_64bit/bin/jar 200
sudo update-alternatives --install /usr/bin/jarsigner jarsigner /opt/jdk1.5.0_22_64bit/bin/jarsigner 200
sudo update-alternatives --install /usr/bin/javac javac /opt/jdk1.5.0_22_64bit/bin/javac 200
sudo update-alternatives --install /usr/bin/javadoc javadoc /opt/jdk1.5.0_22_64bit/bin/javadoc 200
sudo update-alternatives --install /usr/bin/javah javah /opt/jdk1.5.0_22_64bit/bin/javah 200
sudo update-alternatives --install /usr/bin/javap javap /opt/jdk1.5.0_22_64bit/bin/javap 200
sudo update-alternatives --install /usr/bin/javaws javaws /opt/jdk1.5.0_22_64bit/bin/javaws 200

Or you can do something in between: create multiple link groups with multiple master links, e.g. java for the runtime environment --with the commands related to the runtime environment as slave links in the same group-- and javac for the development environment --with the commands related to the development environment as slave links in that group--.

I personally prefer the first option, but it is up to you.
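
Once the group is registered, switching between installed JDKs and checking the current setup boils down to two commands (a minimal sketch; java is the link group name used above):

# pick the active JDK interactively; all slave links follow the master
sudo update-alternatives --config java

# show which alternatives are registered and which one is currently active
update-alternatives --display java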

Interacting with an argument passed to a mocked method in EasyMock

I don't know EasyMock very well; I just use what I know. I hear you saying:

C'mon man, how hard can it be? Besides the whole library consists of 11 classes. Just read the manual and the API docs.

I hear you, however I could never find that time. Instead, I found my own ways whenever I ended up in situations that could not be handled with EasyMock --my view of EasyMock, of course--.

For example, there is this problem I always faced. When I needed to provide custom behavior for only one method of an interface --since you cannot do this with EasyMock, at least to my knowledge--, I would create a mock implementation just to override that particular method. BTW, don't get me wrong, this is not that easy with EasyMock either: I am talking about interacting with an argument of a mocked method, where that argument is created by some other class and passed to the mocked method by code you don't have control over.

Let me demonstrate with an example. Let's say I have a ContentReader interface that reads content from somewhere, with some utility methods.

public interface ContentReader {
    // simply dumps all the content to the specified file
    public void getContent(File file);
}

And let's say I have another class, which is also the class I want to test. This is the class calling the getContent(File file) method. The intention is to dump the content to a temporary file and pass the absolute path of this file to a utility method that extracts metadata about the content. I know, you may wonder why I am dumping the content to a temporary file and passing the absolute path to this metadata extractor instead of passing the stream directly. You should ask that to the developers of im4java, not me :)

public class ImageMetadataExtractor extends AbstractMappingMetadataExtracter {

    public final static String METADATA_X_RESOLUTION = "x-resolution";

    public final static String METADATA_Y_RESOLUTION = "y-resolution";

    @Override
    protected Map<String, Serializable> extractRaw(ContentReader contentReader) throws Throwable {
        File tempFile = TempFileProvider.createTempFile(UUID.randomUUID().toString(), ".bin");
        contentReader.getContent(tempFile);
        ImageInfo imageInfo = new ImageInfo(tempFile.getAbsolutePath());
        int xResolution = (int) imageInfo.getXResolution();
        int yResolution = (int) imageInfo.getYResolution();

        Map<String, Serializable> metadata = new HashMap<String, Serializable>();
        metadata.put(METADATA_X_RESOLUTION, xResolution);
        metadata.put(METADATA_Y_RESOLUTION, yResolution);

        return metadata;
    }

}

Anyway, this time when I faced a similar problem, I decided to check the EasyMock documentation for something that could solve it, instead of blindly doing the same thing I always do. There is a lot of stuff: argument matchers, expectations, controls to verify the calling order, etc. However, I could not find anything I could use to inject behavior into a mocked method in order to work on an argument passed to it. But I did find two interesting things:

  • capture(Capture<T> captured) for capturing arguments of a mocked method. The main use case of capture is to verify an argument your own way instead of trying to fit your expectations into the limited set of matchers provided by EasyMock.
  • IExpectationSetters<T> andAnswer(IAnswer<? extends T> answer) for setting an object that can calculate the return value.

Neither helps on its own, but they do when used together. What if

  • We capture the file using capture() into a final variable
  • We use andAnswer() to interact with the captured value

It is a workaround, but a nice one, admit it.

Unfortunately it is not possible to use expect() for a method that returns void; luckily, expectLastCall() covers that case. Now I can use capture() to get a handle on the argument being passed and interact with it from my IAnswer implementation. See the following example.

public class ImageMetadataExtractorTest {

    private ContentReader contentReader;

    @Before
    public void before() {
        contentReader = createMock(ContentReader.class);
    }

    @Test
    public void shouldReturnMapOfMetadata() throws Throwable {
        final Capture<File> captured = new Capture<File>();
        contentReader.getContent(capture(captured));
        expectLastCall().andAnswer(new IAnswer<Object>() {
            @Override
            public Object answer() throws Throwable {
                IOUtils.copy(this.getClass().getClassLoader().getResourceAsStream("./images/transparent_olympic.png"), new FileOutputStream(captured.getValue()));
                return null;
            }
        });
        replay(contentReader);

        Map<String, Serializable> metadata = new HashMap<String, Serializable>();
        metadata.put(ImageMetadataExtractor.METADATA_X_RESOLUTION, 72);
        metadata.put(ImageMetadataExtractor.METADATA_Y_RESOLUTION, 72);

        assertEquals(metadata, new ImageMetadataExtractor().extractRaw(contentReader));

        verify(contentReader);
    }
}

Finally, since the mocked method returns void, you just return null from IAnswer.answer().

The final example is as follows; it is modified according to Meltem's feedback. There is no need to capture the argument you want to interact with into a final variable using capture() --unless you want to share it within your test method--; you can simply call getCurrentArguments() while you are inside IAnswer.answer().

public class ImageMetadataExtractorTest {

    private ContentReader contentReader;

    @Before
    public void before() {
        contentReader = createMock(ContentReader.class);
    }

    @Test
    public void shouldReturnMapOfMetadata() throws Throwable {
        contentReader.getContent(isA(File.class));
        expectLastCall().andAnswer(new IAnswer<Object>() {
            @Override
            public Object answer() throws Throwable {
                File file = (File) getCurrentArguments()[0];
                IOUtils.copy(this.getClass().getClassLoader().getResourceAsStream("./images/transparent_olympic.png"), new FileOutputStream(file));
                return null;
            }
        });
        replay(contentReader);

        Map<String, Serializable> metadata = new HashMap<String, Serializable>();
        metadata.put(ImageMetadataExtractor.METADATA_X_RESOLUTION, 72);
        metadata.put(ImageMetadataExtractor.METADATA_Y_RESOLUTION, 72);

        assertEquals(metadata, new ImageMetadataExtractor().extractRaw(contentReader));

        verify(contentReader);
    }

}

Automatic deployment for tomcat applications

I like continuous deployment, just like continuous integration. However, it can be a pain if you have too many environments to deploy to.

In the project I am currently involved in, we have 3 environments for which continuous deployment makes sense. The deployment frequency is different and the deployment policies are different, but I am sure it is possible to automate the deployment process to some extent.

Just for that reason (and also because I am bored of doing deployments and I like writing scripts from time to time), I spent a couple of hours writing a script that can be invoked by cron to deploy to Tomcat. The process is simple:

  • The script checks all the war files under a specific folder
  • For every war file in that directory, it
    • checks if the war file is complete
    • if so, stops tomcat (only once :) )
    • removes all the files related to the current war file from tomcat/webapps
    • copies the war file to tomcat/webapps
  • Starts tomcat

The full script is as follows. I am not an experienced bash coder, so be gentle. Now all you have to do is transfer the war file to the inbound directory. There are different ways to do that for different build management tools; I am using Maven, so I defined an antrun goal to copy the deployables via ssh (see the sketch after the script).

#!/bin/bash

SCRIPT_USER="dev"
TOMCAT_DIR="/appdata/apps/wcm"
WAIT=10
TOMCAT_STOPPED=0

log_deployment()
{
    ssh dev@43.191.66.23 'echo "Deployment was started at `date`" > /appdata/apache2/htdocs/deploy.txt'
}

start_tomcat()
{
    if [ $TOMCAT_STOPPED -eq 1 ];
    then
        log "Starting tomcat"
        $TOMCAT_DIR/bin/startup.sh
        log_deployment
    fi
}

# stops tomcat
stop_tomcat()
{
    if [ $TOMCAT_STOPPED -eq 0 ];
    then
        log "Stopping tomcat"
        $TOMCAT_DIR/bin/shutdown.sh
        sleep $WAIT
        log "Making sure it is stopped by killing(-9) any process contains java and tomcat in their run command"
        kill -9 `ps ux | awk '/java/ && /tomcat/ && !/awk/ {print $2}'`
        TOMCAT_STOPPED=1
    fi
}

# prepares deployable for deployment
prepare_for_deploy()
{
    log "Going to make deployment for $1, file size is stable"
    log "Moving deployable $1 to processed directory"
    BASE_NAME="`basename $1`"
    BASE_NAME=${BASE_NAME:0:${#BASE_NAME}-4}
    log "Cleaning tomcat webapps directory for $BASE_NAME"
    rm -rf $TOMCAT_DIR/webapps/$BASE_NAME*
    log "Moving $1 to tomcat webapps directory"
    mv $1 $TOMCAT_DIR/webapps/.
}

# logs a message with date
log()
{
    echo "`date` >> $1"
}

# logs an error with date
error()
{
    echo "`date` >> $1" 1>&2
}

if [ $USER != $SCRIPT_USER ]; then
  error "You must run this script as user '$SCRIPT_USER'!"
  exit 1
fi

log "Checking inbound for deployables"

cd `dirname $0`
mkdir -p inbound

if [ -z "$(ls inbound/*.war 2>/dev/null)" ];
then
  log "No deployables found"
  exit 0
fi

for f in inbound/*.war;
do
    LAST_MODIFIED=$(stat -c%Y "$f")
    NOW=$(date +%s)
    let DIFF=$NOW-$LAST_MODIFIED

    log "$f was last modified $DIFF seconds ago."

    SIZE1=$(stat -c%s "$f")
    sleep $WAIT
    SIZE2=$(stat -c%s "$f")
    if [ "$SIZE1" -eq "$SIZE2" ]
    then
        stop_tomcat
        prepare_for_deploy $f
    else
        log "File is still being copied"
    fi
done

start_tomcat
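
To wire the script up, a crontab entry along these lines is enough (the paths are of course placeholders for your own setup):

# run the deployment check every 5 minutes as the dev user
*/5 * * * * /appdata/apps/deploy.sh >> /appdata/apps/deploy.log 2>&1

For the Maven side, an antrun execution roughly like the following could do the copy via Ant's scp task. This is only a sketch, not the project's actual configuration: host, paths and versions are placeholders, and the plugin needs the ant-jsch and jsch dependencies for scp to work.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <tasks>
                    <!-- copy the freshly built war into the inbound directory watched by the script -->
                    <scp file="${project.build.directory}/${project.build.finalName}.war"
                         todir="dev@your-server:/appdata/apps/inbound"
                         keyfile="${user.home}/.ssh/id_rsa"
                         trust="true"/>
                </tasks>
            </configuration>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.apache.ant</groupId>
            <artifactId>ant-jsch</artifactId>
            <version>1.8.1</version>
        </dependency>
        <dependency>
            <groupId>com.jcraft</groupId>
            <artifactId>jsch</artifactId>
            <version>0.1.42</version>
        </dependency>
    </dependencies>
</plugin>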

Making Alfresco maven friendly

Since the Alfresco source code is not managed by Maven, implementing Alfresco extensions (AMP extensions), or even simple JAR extensions, with Maven is very painful.

Actually, Maven aside, Alfresco's extension mechanism is itself very basic, or rather primitive, and based on MMT (Module Management Tool). MMT is an executable jar for overlaying AMPs (Alfresco Module Packages) onto the Alfresco war file. The simple steps are:

  • Extracting the Alfresco war file
  • Extracting the AMP
  • Copying everything inside the AMP into the extracted war file
  • Re-packaging everything as a war file

It is just an automated way of patching a war file. I am sure every developer has done the same at some point in order to patch a production web application instead of creating a new release and opening a ticket for deployment :) It is the dirty way of doing things.

Luckily there is a Maven plugin (maven-amp-plugin) that makes developing AMPs with Maven possible. Thanks to Sourcesense (the Alfresco partner behind the plugin); just like their motto says, they made open source make sense :)

What I don't get is why these kinds of helpers are not provided by Alfresco. I am not saying that just to support Maven you have to use Maven yourself. You can still continue to use Ant, or even your own scripts, to manage your build process, but if you are developing an enterprise application you have to support all the major build environments. BTW, the majority of the world is using Maven.

Anyway, even though maven-amp-plugin makes everything much easier, it is not perfect either. What the plugin does is perform the same steps listed above within the Maven build lifecycle, plus some conventions to manage configuration, etc.

I am saying it is not perfect because there are certain things it cannot do, like managing dependencies. Because it uses MMT behind the scenes, there is no way to manage the dependencies. Let's say your AMP has a dependency on commons-lang 2.5 and Alfresco already ships commons-lang 2.4; in this case you will end up with a war file that contains both commons-lang 2.5 and commons-lang 2.4.

Or let's say you are trying to write a simple unit test for a class that touches some Alfresco services. In this case you have to add all the dependencies used by Alfresco as test dependencies to make your test work. However, there is no way to use the correct versions, because the dependencies inside the Alfresco war file are not managed by Maven.

I tried to manage the modules we are developing using maven-amp-plugin, but managing the dependencies became very painful. So I finally decided to build a pom that includes all the dependencies that exist in the Alfresco war file and include it as a single dependency from the Alfresco module project, instead of declaring all the dependencies one by one. As soon as I am finished, I will attach the pom to this post.

As promised, I created a pom file and listed all the dependencies inside the Alfresco war file in it. Since it was not possible to find all of these dependencies in public 3rd party Maven repositories, I launched an Amazon EC2 Micro instance, installed Sonatype Nexus and deployed the custom dependencies with the groupId org.alfresco.sdk.

The pom file is on Github and you can find the custom dependencies in the Alfresco SDK Repository. Please do not abuse it, I don't want to pay for it :)

I also created an Alfresco war file with an empty WEB-INF/lib folder. Why? Because that is the idea: we will manage all the dependencies using Maven and overlay the AMP modules onto this war file. All you have to do is use this war file with your AMP module and also add a dependency on the alfresco-dep artifact, as follows.

<dependency>
    <groupId>org.alfresco.sdk</groupId>
    <artifactId>alfresco</artifactId>
    <version>3.3</version>
    <type>war</type>
    <classifier>community</classifier>
</dependency>

<dependency>
    <!-- Includes all alfresco dependencies -->
    <groupId>com.sony.forest.roots</groupId>
    <artifactId>alfresco-dep</artifactId>
    <version>3.3</version>
</dependency>
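
Since the dependency above has type war, maven-war-plugin will treat it as an overlay by default. If you prefer to make the overlay explicit (for instance to control the order), a configuration roughly like the following should work; this is just a sketch of standard maven-war-plugin overlay behavior, nothing Alfresco specific:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <overlays>
            <!-- overlay the empty-lib Alfresco war; the project's own files take precedence -->
            <overlay>
                <groupId>org.alfresco.sdk</groupId>
                <artifactId>alfresco</artifactId>
                <classifier>community</classifier>
                <type>war</type>
            </overlay>
        </overlays>
    </configuration>
</plugin>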

If you have any questions, just leave a comment and I will try to answer ASAP.

P.S. If you want to do the same thing for another Alfresco version, feel free, that is why we use Github. I can also create a user for you so that you can upload custom artifacts to the Maven repository. Just let me know.

Running Alfresco 3.3 with an embedded database (H2 in PostgreSQL compatibility mode)

Alfresco is a very good open source content management system; however, there is only one problem with Alfresco: testing. It is a complete headache. Writing an integration test? Forget it! You will end up with functional tests:

  • fire up an Alfresco instance
  • use one of the existing interfaces to setup some test data
  • and finally do your test

Unfortunately, starting from version 3.2 you cannot even do that. Why?

Because starting from version 3.2 the SQL scripts are outdated and Alfresco does not work with any of the in-memory databases: not with Derby, not with HSQLDB...

To see the related issue on this matter click here.

Luckily we have h2database, and it can impersonate --thanks to its compatibility modes-- other databases that Alfresco has support for.

All you have to do is add h2database to your classpath and change your driver, JDBC URL and dialect. I did not test the Oracle or MySQL compatibility modes, but it works with PostgreSQL.

db.driver=org.h2.Driver
db.url=jdbc:h2:alf_data/h2_data/alfresco;MODE=PostgreSQL
hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
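
If you are on Maven, adding h2database to the (test) classpath is a single dependency; the version here is just an example, use whichever is current:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.2.140</version>
    <scope>test</scope>
</dependency>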

This way you can still run your automated tests.

Surefire is not picking up JUnit 4/TestNG 5 tests

If you are using the JUnit or TestNG artifacts from the SpringSource Enterprise Bundle Repository --just like I did--, you can face this problem. Since these artifacts have different names than usual, surefire is not able to detect them.

The solution is easy, just configure surefire as follows.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!--
        These two settings are required to make maven-surefire-plugin recognize JUnit and TestNG,
        since we are using the artifacts from the SpringSource Enterprise Bundle Repository
        (which uses different artifact names than usual)
        -->
        <junitArtifactName>org.junit:com.springsource.org.junit</junitArtifactName>
        <testNGArtifactName>org.testng:com.springsource.org.testng</testNGArtifactName>
    </configuration>
</plugin>

You can run Maven in debug mode with mvn -X; it is very useful, and this is how I figured out the problem.

[INFO] [surefire:test {execution: default-test}]
[DEBUG] dummy:dummy:jar:1.0 (selected for null)
[DEBUG]   org.apache.maven.surefire:surefire-booter:jar:2.6:runtime (selected for runtime)
[DEBUG]     org.apache.maven.surefire:surefire-api:jar:2.6:runtime (selected for runtime)
[DEBUG] Adding to surefire booter test classpath: /home/umut/.m2/repository/org/apache/maven/surefire/surefire-booter/2.6/surefire-booter-2.6.jar Scope: runtime
[DEBUG] Adding to surefire booter test classpath: /home/umut/.m2/repository/org/apache/maven/surefire/surefire-api/2.6/surefire-api-2.6.jar Scope: runtime
[DEBUG] dummy:dummy:jar:1.0 (selected for null)
[DEBUG] Retrieving parent-POM: org.apache.maven.surefire:surefire-providers:pom:2.6 for project: null:surefire-junit:jar:null from the repository.

The default PackageResolver for gwtoolbox has changed?

I spent the last 4 hours trying to understand why gwtoolbox was not injecting dependencies. It was a painful process: there were no logs, just a NullPointerException, since there was nothing to inject in the container.

Anyway, the problem occurred after upgrading gwtoolbox from 0.7 to 2.0-SNAPSHOT. Finally I figured out that they changed the default PackageResolver strategy in the new version. Before, I wasn't specifying any PackageResolver, meaning I was relying on the default package resolver.

When they changed the default package resolver, my project was unable to discover components at compile time. Since no components were discovered at compile time, the container was empty at runtime, and that is why there were no logs.

As can be seen from the following code, the default package resolver is a kind of relaxed version of REGEXP.

public static Pattern resolvePattern(String expression, PackageResolver resolver) {
    switch (resolver) {

        case REGEXP:
            return Pattern.compile(expression);

        case PREFIX:
            String pattern = Pattern.quote(expression);
            return Pattern.compile(pattern + ".*");

        case SUFFIX:
            pattern = Pattern.quote(expression);
            return Pattern.compile(".*" + pattern);

        case DEFAULT:
            StringBuilder builder = new StringBuilder();
            int offset = 0;
            int i;
            while ((i = expression.indexOf('*', offset)) > -1) {
                String part = Pattern.quote(expression.substring(offset, i));
                builder.append(part);
                builder.append(".*");
                offset = i + 1;
            }
            if (offset < expression.length()) {
                builder.append(expression.substring(offset));
            }
            return Pattern.compile(builder.toString());

        default:
            throw new UnsupportedOperationException("Package resolver '" + resolver.name() + " is currently unsupported");
    }
}

Just be aware and do not spend 4 hours like me :)

Testing Alfresco webscripts

Waaah, too boring to test is too boring to write.

I just wanted to start with this quote from Ray Ryan, from the "Architecting GWT applications for production at Google" session at Google I/O 2010.

This really explains how I feel about extending Alfresco. Actually it is worse than that: writing tests for Alfresco extensions is painful rather than boring.

While developing extensions for Alfresco, I often find writing even simple unit tests unnecessarily difficult. I think the reason is that it is not designed with testability in mind. You don't have the necessary abstractions in the source code to ease unit testing; not enough interfaces, no separation of concerns.

As a result, it is usually not possible for the developer to isolate the functionality that will be tested. I usually find myself trying to hack some stuff and bend the earth :) Unfortunately you cannot bend everything, right?

Unit tests are not the big deal; you might somehow find your way in the dark. What about integration testing? It is even more difficult: everything is tightly coupled with the repository, and you mostly need an up-and-running Alfresco instance.

And then come the webscripts and functional testing. That is easy, right? Fire up an Alfresco instance with your custom webscripts deployed and apply the usual routine: make an HTTP call (GET, POST, whatever it is) and check the response to see if everything is OK. But what if you need test data? How are we going to create the test data that a particular webscript needs?
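
The "usual routine" part really is the easy bit. A minimal sketch in plain Java --the webscript URL below is made up, but custom webscripts are normally served under /alfresco/service-- looks something like this:

import java.net.HttpURLConnection;
import java.net.URL;

public class WebscriptSmokeTest {

    public static void main(String[] args) throws Exception {
        // call a (hypothetical) custom webscript and check the HTTP status code
        URL url = new URL("http://localhost:8080/alfresco/service/mycompany/image-families");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        // a real test would also need authentication (basic auth or an Alfresco ticket)
        int status = connection.getResponseCode();
        if (status != 200) {
            throw new IllegalStateException("Webscript call failed with HTTP status " + status);
        }
        connection.disconnect();
    }
}

The hard part is the test data, which is what the rest of this post is about.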

If you are thinking about using the Alfresco WebService Client, you must be crazy! I will never write those long, boring lines of webservice calls just to create a couple of folders and documents. Why? Because it is BORING. Remember? Too boring to test is too boring to write.

As you may know from my previous blog posts, we are currently developing a CMS back-end with Alfresco+GWT for our consumer-facing website. The first thing we did was to write a test data preparation framework for functional tests --and also for integration tests, since you mostly need a running Alfresco instance--. We also used the Alfresco WebService Client, but in a slightly different way: we created a DSL on top of it using Groovy.

Now, whenever I need test data for a particular test, all I have to do is override the following method from my base test class.

@Override
public Closure getData() {
  return {
    node(UUID.randomUUID().toString()) {
      nodeType = "{http://www.xxx.eu/model/content/1.0}imageFamily"
      aspects = ["{http://www.xxx.eu/model/content/1.0}item"]
      PROP_XXX_BATCH_NAME = 'example'
      rule(["inbound"], "image family", "image family/sibling related rules") {
        action("composite-action") {
          condition("no-condition")
          action("create-image-family") {
          }
        }
      }

      node(UUID.randomUUID().toString()) {
        nodeType = "{http://www.xxx.eu/model/content/1.0}imageSibling"
        aspects = ["{http://www.xxx.eu/model/content/1.0}item"]
        PROP_XXX_IS_ORIGINAL = true
        PROP_XXX_IS_TRANSPARENT = false
        PROP_XXX_X_RESOLUTION = 36
        PROP_XXX_Y_RESOLUTION = 72
        PROP_XXX_WIDTH = 100
        PROP_XXX_HEIGHT = 100
        content = file("images/50647.jpg", "image/jpg")
      }
    }

    folder("Folder_1") {
    }
  }
}

What it does is:

  • creates a node of type imageFamily, sets some attributes on the node and applies a rule to that node
  • creates another node of type imageSibling under the first one and sets some attributes on it
  • creates a folder named Folder_1 at the same level as the first node

However, I don't think this is something I should have to do. The Alfresco guys should provide these kinds of frameworks, and it is not that hard to do. If you are hearing what I am saying, just send me an email; I would be more than happy to help you with that.

A good framework should come with its own testing framework. We love Spring, you know why...

Don't use internal APIs while integrating 3rd party libraries

While coding in a hurry, you sometimes use internal APIs instead of their public equivalents, either willingly --since you don't want to spend more time-- or unwillingly. I am saying don't use them, because they are meant to change; as the name implies, they are for internal use.

Just today, I spent three hours replacing this kind of code after upgrading Alfresco from 3.2 to 3.3 for a project I am working on. Those who know Alfresco will understand me. Alfresco is full of internal and public APIs, and there is no clear distinction, just an annotation that specifies that a service is public --and that is enough for decent developers, I agree :)--.

However, we had somehow used one helper class, and we had used it extensively. It is called DictionaryHelper, and it contains utility methods for querying information about your data model. There are also two public services that collaboratively provide the same functionality: DictionaryService and NamespaceService.

Anyway, shit happens, we had used it somehow, and I spent the last three hours replacing it with proper code. I am not saying that using internal APIs is prohibited; sometimes you use them intentionally. Even in that case, be clever enough to put an abstraction on top, as in the sketch below. That way it is much easier to change the code when the internal API changes.
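
A minimal sketch of what I mean by putting an abstraction on top (all the names here are made up for illustration; the point is simply that the internal class is referenced in exactly one place):

// Stand-in for the internal helper you are tempted to call directly
// (made up; in the real case this would be the vendor's internal class).
class InternalDictionaryHelper {
    boolean hasAspectDefinition(String aspectName) {
        return aspectName != null;
    }
}

// Our own, stable interface -- the rest of the codebase depends only on this.
interface ModelQueryService {
    boolean isAspectDefined(String aspectName);
}

// The single class that touches the internal helper; when the internal API
// changes, only this adapter has to be rewritten.
class InternalDictionaryHelperAdapter implements ModelQueryService {

    private final InternalDictionaryHelper helper;

    InternalDictionaryHelperAdapter(InternalDictionaryHelper helper) {
        this.helper = helper;
    }

    @Override
    public boolean isAspectDefined(String aspectName) {
        return helper.hasAspectDefinition(aspectName);
    }
}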

But at least, even when coding in a hurry, we are always writing tests --we are, right? :)-- and that makes it easier to update the existing code and be sure that everything works as it is supposed to.

GreenHopper for JIRA exceeded my expectations

Personally I am not a big supporter of agile tools, or rather I am not a big supporter of agile methodologies like Scrum. I think these kinds of frameworks just enable developers with different personalities/styles and from different profiles to work together.

Although I am not a big fan of these kinds of methodologies, it is not easy to form a team that doesn't need them.

Enough with the analysis; it is not actually the subject of this post. As I said, I was not a fan of agile tools until I started using GreenHopper. Even the manifesto suggests that we should honor individuals and interactions over processes and tools.

However, when we purchased the JIRA Studio service --it is good old JIRA, but as a service; you have all the Atlassian tools (buzzword alert) in the cloud, it is cool-- for a global project that involves participants from all over the world, I started using it unwillingly and even considered it a waste of time in the beginning.

After some time, I realized that it actually speeds up the process considerably: the planning meetings, the grooming meetings, etc.

You can create planning sprints and open your planning board to give story point estimations. After finishing your story point estimations, you can either move the stories to your implementation sprint backlog, or you can create subtasks with real estimations and then move the stories to the implementation backlog. Either approach is logical, but I prefer the first one, because even though you decide which stories to add to the backlog according to your velocity, you might not feel comfortable with or committed to all of them.

See the following screenshots from GreenHopper, the first one is the planning board and the second one is the task board.

With the task board, everything is possible just like with a real board. You can move your tasks to in-progress and, when you are done, move them to done. You can log work in order to track the progress of your sprint.

I can hear you asking; "Ok but where is the burndown chart?".

You also have another screen called the chart board. See the following screenshot of the chart board --a.k.a. the burndown chart--. As can be seen, it is possible to track the progress of the sprint just by checking the chart board. You have two burndown charts: one for hours burndown and one for issue burndown.

Another good feature is that you don't have to use all of these screens every time; it is also possible to configure a dashboard to see a snapshot of your sprint, including burndown, issues in progress, issues done, issues assigned to you, a recently updated list, remaining days, etc. See the following dashboard as an example.

I used GreenHopper and I am impressed by it. Long story short, it removed my prejudice against agile tools; now I think they can really be helpful. It is also worth checking out Mingle, maybe it is as good as GreenHopper, maybe better.

However one big advantage of GreenHopper is that it comes with good-old-JIRA :)