Archive for 2011

Salesforce extension for Spring Social

For the past two weeks I have been working on Spring Social Salesforce, an extension to Spring Social that provides connect support and API bindings for Salesforce.

Spring Social

Spring Social is an extension of the Spring Framework that allows you to connect your applications with Software-as-a-Service (SaaS) providers such as Facebook and Twitter.

As stated on its site, Spring Social is an abstraction on top of social platforms' APIs that takes on the burden of connecting your applications via OAuth and provides a clean way to implement REST clients.

I am sure there are other libraries that enable you to connect your applications to Salesforce (e.g. the Salesforce SOAP client), but Spring Social provides some unique features that may make you reconsider your choices.

  • It handles the OAuth negotiation out-of-the-box and in an elegant way.
  • It provides connection management and connection persistence, so you don't have to worry about session management.
  • As usual, it works very well with the Spring Framework.
  • With the help of Spring's RestTemplate and Jackson, it is very easy to code against any REST API.

See the following one-liner that fetches a user's profile.

public SalesforceProfile getUserProfile(String userId) {
    return restTemplate.getForObject(api.getBaseUrl() + "/v23.0/chatter/users/{id}",
            SalesforceProfile.class, userId);
}

Spring Social Salesforce

Enough about Spring Social; let's see what you can do with Spring Social Salesforce. The project is still at an early stage, so the REST API is not fully supported; lots of things are still missing, especially around the Chatter API. I will be adding new features and completing the API coverage in the coming months. Luckily, we use this very extension to integrate our applications with Salesforce at my current company, so it will be easier for me to keep my promise :)

See the following for the list of supported APIs and their coverage.

  • API Operations: Fully implemented.
  • sObject Operations: All read-only operations are fully implemented.
  • Query Operations: Fully implemented.
  • Search Operations: Fully implemented.
  • Recent Operations: Fully implemented.
  • Chatter Operations: Only user profile retrieval, status retrieval and status update are implemented. This is the least covered API, but new features will be added gradually.

Token Refresh

One big caveat of the current implementation is that it does not handle token refresh transparently. You have to handle it manually by wrapping every call in a try-catch block.

try {
    SalesforceProfile profile = salesforce.chatterOperations().getUserProfile();
} catch (InvalidAuthorizationException e) {
    // refresh the connection and repeat the same call
}
I agree, it is annoying. Luckily there is a reported feature request, and it will be implemented by the Spring Social team; however, it is not yet scheduled. See SOCIAL-263.

There are some ninja tricks that can be done to handle it transparently, but I did not include my workaround in the project source code. If you are interested, just let me know; I am happy to help.

Client Login

The Salesforce API also supports authentication via client login, i.e. with username/password. Those who don't need OAuth negotiation can leverage this approach: your application may not be multi-tenant, and all you may need is to integrate a backend system with Salesforce. In that case you may not have the opportunity or the place to do the OAuth dance (redirects, etc.), and you may also want the whole flow to be automatic.

With client login, you make a POST to the Salesforce token service with your username/password to get your access token. And the good news is that this token never expires!

In the source code you will see a dedicated package that contains a factory and utility classes for using the extension without the OAuth mechanism.

See the following code for the usage. It calls the factory's create method with the necessary credentials and gets back a Salesforce template configured with the retrieved access token.

SalesforceFactory factory = new BaseSalesforceFactory(clientId, clientSecret);
Salesforce template = factory.create(username, password, secretToken);
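For the curious, the factory's create call boils down to a single token request. Here is a hedged sketch of that request using curl; the endpoint and parameter names follow Salesforce's OAuth 2.0 username-password flow, and all values are placeholders:

```shell
#!/bin/sh
# Placeholder credentials -- substitute your connected app's key/secret
# and your Salesforce login.
CLIENT_ID="your-consumer-key"
CLIENT_SECRET="your-consumer-secret"
USERNAME="user@example.com"
PASSWORD="secret"
SECURITY_TOKEN="abc123"

# Build the form body. Note that the password parameter is the login
# password concatenated with the user's security token.
DATA="grant_type=password&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}&username=${USERNAME}&password=${PASSWORD}${SECURITY_TOKEN}"
echo "$DATA"

# The actual call (commented out here); the JSON response contains
# the access_token used to configure the Salesforce template.
# curl -d "$DATA" https://login.salesforce.com/services/oauth2/token
```

The access token in the response is exactly what the factory plugs into the returned Salesforce template.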


Script for changing GNOME proxy settings

Most of us work in corporate environments, and corporate environments do love proxies. And I hate them:

  • I hate setting them every morning
  • I hate un-setting them every evening

I am assuming that you also use the same laptop at work and at home.

Since I am so sick of changing proxy settings every time, I created the following bash script to set and unset proxy settings.

# Your .gconf directory under your home
CONF="$HOME/.gconf"

if [ -n "$PROXY_HOST" ]; then
    echo "Setting proxy configuration : $PROXY_HOST:$PROXY_PORT"

    gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type string --set /system/proxy/mode "manual"
    gconftool-2 --direct --config-source xml:readwrite:$CONF --type string --set /system/http_proxy/host "$PROXY_HOST"
    gconftool-2 --direct --config-source xml:readwrite:$CONF --type int    --set /system/http_proxy/port "$PROXY_PORT"
    gconftool-2 --direct --config-source xml:readwrite:$CONF --type bool   --set /system/http_proxy/use_same_proxy "TRUE"
    gconftool-2 --direct --config-source xml:readwrite:$CONF --type bool   --set /system/http_proxy/use_http_proxy "TRUE"

    #gconftool-2 --direct --config-source xml:readwrite:$CONF --type list --list-type string --set /system/http_proxy/ignore_hosts "[localhost,*.local]"

    if [ -n "$PROXY_USERNAME" ]; then
        echo "Using authentication information : $PROXY_USERNAME:$PROXY_PASSWORD"

        gconftool-2 --direct --config-source xml:readwrite:$CONF --type bool   --set /system/http_proxy/use_authentication "TRUE"
        gconftool-2 --direct --config-source xml:readwrite:$CONF --type string --set /system/http_proxy/authentication_user "$PROXY_USERNAME"
        gconftool-2 --direct --config-source xml:readwrite:$CONF --type string --set /system/http_proxy/authentication_password "$PROXY_PASSWORD"
    else
        gconftool-2 --direct --config-source xml:readwrite:$CONF --type bool   --set /system/http_proxy/use_authentication "FALSE"
    fi
else
    echo "Removing proxy configuration."

    gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type string --set /system/proxy/mode "none"

    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/proxy/mode
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/host
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/port
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/use_same_proxy
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/use_http_proxy
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/use_authentication
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/authentication_user
    gconftool-2 --direct --config-source xml:readwrite:$CONF --unset /system/http_proxy/authentication_password
fi

pkill gconfd

You can run it as follows: pass <proxy_host> <proxy_port> <proxy_username> <proxy_password> as arguments. In order to unset all the proxy settings, just run it without parameters.
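The listing above reads $PROXY_HOST and friends but does not show how they get populated from the command line; a minimal preamble for the script could look like this (my addition -- variable names assumed to match the listing):

```shell
#!/bin/bash
# Map the positional parameters onto the variables the script expects.
# Usage: <script> <proxy_host> <proxy_port> <proxy_username> <proxy_password>
PROXY_HOST=${1:-}
PROXY_PORT=${2:-}
PROXY_USERNAME=${3:-}
PROXY_PASSWORD=${4:-}

# Running with no arguments leaves PROXY_HOST empty, which sends the
# script down the branch that removes the proxy configuration.
```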

Now you can set and unset your GNOME proxy settings with one simple command.

If you prefer, you don't even have to run it at all --by yourself, I mean :)-- Just create another script under /etc/network/if-up.d with the following contents. Now, every time your computer connects to a network, it will call the first script with the appropriate parameters depending on your IP address.

# you can change this part, I am just checking if there is a network interface
# with an IP address starting with '43.' (my company's network)

IP_COUNT=`ifconfig | grep 'inet addr' | awk '{print $2}' | sed -e 's/addr\://' | grep ^43 | wc -l`
if [ "$IP_COUNT" = "0" ]; then
        echo "No need for proxy, removing if there is one."
        cp /home/umut/.m2/settings.home.xml /home/umut/.m2/settings.xml
        sed -i 's/^http-proxy/# http-proxy/' /home/umut/.subversion/servers
else
        echo "In sony network setting proxy."
        # call the proxy script here with $PROXY_HOST $PROXY_PORT ($PROXY_USERNAME $PROXY_PASSWORD if needed)
        cp /home/umut/.m2/settings.work.xml /home/umut/.m2/settings.xml # settings.work.xml: placeholder name for the work-profile settings file
        sed -i 's/^# http-proxy/http-proxy/' /home/umut/.subversion/servers
fi

*UPDATE: The script above also puts the proper Maven settings.xml in place and enables/disables the proxy for Subversion according to the environment. You can add more to this script if you have more settings that need to change.*

Ruby one-liner to retrieve user groups from Atlassian Crowd

As you may guess from the title, my previous method did not work out for us --not that it is bad; actually it is much cleaner than this solution--. The Apache authentication module mod_crowd does not work with SUSE Linux Enterprise Server 11, which is our infra team's flavor of choice for production servers :)

What I did instead was:

  • I used mod_ldap_auth to do the authentication.
  • And I made gitolite call a Ruby script to retrieve the user's groups from Crowd.

See the one-liner below.

For those who wonder about the format of Crowd's output for this service call, it is as follows.

{
    "expand": "group",
    "groups": [
        {
            "link": {
                "href": "http://localhost:8095/crowd/rest/usermanagement/1/group?groupname=confluence-administrators",
                "rel": "self"
            },
            "name": "confluence-administrators"
        },
        {
            "link": {
                "href": "http://localhost:8095/crowd/rest/usermanagement/1/group?groupname=crowd-administrators",
                "rel": "self"
            },
            "name": "crowd-administrators"
        }
    ]
}
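Purely as an illustration of what such a script does with this output, here is the extraction step sketched in shell against a sample payload of the shape shown above (the real script would fetch this JSON from Crowd's REST endpoint, passing the username as a parameter):

```shell
#!/bin/sh
# A trimmed sample of Crowd's response, keeping only the fields we use.
JSON='{"expand":"group","groups":[{"name":"confluence-administrators"},{"name":"crowd-administrators"}]}'

# Pull out every "name" value and join them with spaces -- the format
# gitolite expects from a membership program.
GROUPS_OUT=$(printf '%s' "$JSON" | tr ',' '\n' | sed -n 's/.*"name":"\([^"]*\)".*/\1/p' | tr '\n' ' ')
echo "$GROUPS_OUT"
```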

UPDATE: mod_crowd works with SUSE Linux Enterprise Server 11 flawlessly --thanks to Atlassian support--; apparently our IT guys forgot some libraries while compiling it. So this method may not be necessary for this particular case, but it can be the solution when you have problems with the compilation.

Integrating Gitolite with Atlassian Crowd

There is no argument that git is currently the best SCM tool. Now I cannot believe how we were using CVS and SVN back then, and I know there is no way for me to use any SCM other than git.

A while ago we decided to migrate all our repositories to git, and while doing that we wanted to revisit the integrations between the other development tools we use.

We are currently using a number of Atlassian tools: Jira, Confluence, Bamboo, Crucible & Fisheye (nearly all of them!). But the integration between these tools was limited for us; only Confluence and Jira were aware of each other. I am planning to go over the whole setup in detail in another post. For now I would like to focus on one specific scenario and our solution. Ten lines of code here and there really solved our problem; I hope you also benefit from this solution.

Basically what we want is:

  • Being able to see the modified files when we look at Jira tickets.
  • Having a number of git repositories with different read/write access for different groups. (By the way, we are using git over HTTP.)

Sounds easy, right? Not so much. For the first one we decided to use the Jira-Crucible integration. Make these two applications trust each other and you are good to go -- if only all of your repositories are anonymous, or if you are using the same identification service for both Jira and Crucible. We went for the latter (the former does not align with our second requirement) and decided to add Crowd, yet another Atlassian tool, to our development tools family.

Our current setup is to use Crowd for all of our Atlassian tools: define our groups, assign the relevant colleagues to them, and make all of our tools trust each other.

So you may have a number of contributors with access to your Jira project, but only those who have the right to see the repository configured on Crucible will be able to see the changes in the source.

This setup solves our first problem. For managing git access rights we decided to use gitolite. With gitolite you can create a number of groups, assign your git users to them, and give different access rights to users or groups for different repositories.

Creating groups and assigning users to them... creating groups and assigning users to them... Didn't we already do that in Crowd? Are we going to do it again? Hell no, not if I am involved. Here is the three-step recipe for not doing it again.

STEP 1 : Using Gitolite with Basic Authentication

As said previously, we are using git over HTTP, and here is the basic Apache configuration to use gitolite in conjunction with basic authentication.

SetEnv GIT_PROJECT_ROOT /var/www/gitolite-home/repositories
SetEnv GITOLITE_HTTP_HOME /var/www/gitolite-home
ScriptAlias /git/ /var/www/gitolite-home/bin/gl-auth-command/
<LocationMatch /git>
  AuthName "Basic"
  AuthType Basic
  AuthUserFile /var/cache/git/passwords
  Require valid-user
</LocationMatch>

Now an environment variable named REMOTE_USER is set and available for gitolite to use.

STEP 2 : Using Gitolite with Crowd

Luckily, Crowd has an Apache module (mod_crowd), so you can authenticate against Crowd by changing your authentication provider as shown below.

<LocationMatch /git>
   AuthName "Atlassian Crowd"
   AuthType Basic
   AuthBasicProvider crowd
   Options +ExecCGI
   CrowdAppName gitolite
   CrowdAppPassword pass
   CrowdURL http://xxx.xx.xx.xx:8095/crowd/
   Require valid-user
</LocationMatch>

Now you are authenticating against Crowd. Lucky you. But what about the groups? What about authorization? This is why we need another step.

STEP 3 : Using groups of a user defined in Crowd for gitolite.

Gitolite has a built-in mechanism for getting group definitions from an external source. Quoting from the gitolite documentation on GitHub, here is how to do that:

All you need is a script that, given a username, queries your LDAP or similar server, and returns a space-separated list of all the groups she is a member of. If an invalid user name is sent in, or the user is valid but is not part of any groups, it should print nothing.

This script will probably be specific to your site. (See contrib/ldap for some example scripts that were contributed by the Nokia MeeGo team.)

Then set the $GL_GET_MEMBERSHIPS_PGM variable in the rc file to the full path of this program, set $GL_BIG_CONFIG to 1, and that will be that.

Meaning, with an additional script we can call Crowd via its REST interface and get the groups of a user. But we already visit Crowd once for authentication, and we did not want to do that again. So we made a small change to mod_crowd's code to also set another environment variable containing the user's groups, and our gitolite script became a one-line script.
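Given the description -- gitolite wants a space-separated group list on stdout, and the patched mod_crowd already exports it in an environment variable -- the one-line script amounts to something like this sketch:

```shell
#!/bin/sh
# gitolite passes the username as $1, but since the patched mod_crowd
# already resolved the groups during authentication, we only need to
# echo the environment variable it set for us.
echo "$REMOTE_USER_GROUPS"
```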


The only remaining thing is setting REMOTE_USER_GROUPS in mod_crowd. To do that, we added a new configuration directive to mod_crowd, so you can give the name of the environment variable you would like it to set. Here is the final Apache configuration.

<LocationMatch /git>
   AuthName "Atlassian Crowd"
   AuthType Basic
   AuthBasicProvider crowd
   Options +ExecCGI
   CrowdAppName gitolite
   CrowdAppPassword pass
   CrowdURL http://xxx.xx.xx.xx:8095/crowd/
   Require valid-user
</LocationMatch>

And here is the groups_setter function, called from authn_crowd_check_password, which sets our beloved environment variable.

void groups_setter(request_rec *r) {
    authnz_crowd_dir_config *config = get_config(r);
    char *group_env_name = config->crowd_config->group_env_name;
    if (group_env_name == NULL) {
        return;
    }

    apr_array_header_t *user_groups = authnz_crowd_user_groups(r->user, r);
    if (user_groups == NULL) {
        return;
    }

    char *user_groups_str = apr_pcalloc(r->pool, 1024 * sizeof(char));
    int y;
    for (y = 0; y < user_groups->nelts; y++) {
        const char *user_group = APR_ARRAY_IDX(user_groups, y, const char *);
        strcat(user_groups_str, user_group);
        if (y + 1 < user_groups->nelts) {
            strcat(user_groups_str, " ");
        }
    }

    apr_table_set(r->subprocess_env, group_env_name, user_groups_str);
}

You can fetch mod_crowd's source code with wget from Atlassian's site and apply the change above.


Changing the DHCP IP Address Range for VMware Player

VMware Workstation includes a utility called Virtual Network Editor, which can be used to manage virtual networks. VMware Player also had this utility, but they decided to remove it -- and not by accident!

It is an important utility, since virtual machines using NAT automatically get assigned virtual IP addresses that may or may not work for your network; the range might collide with your network's subnet. For the latest VMware Player the default range is 192.168.1.x.

You can try to extract this utility from the installation file; if you want to do so, you can find the steps on this blog post. However, this method won't work for Linux users -- but we Linux users like to play with config files :) If all you want to do is change this IP range, just go to /etc/vmware and execute the following command --as the root user-- to replace the IP range with a safe one.

find . -type f -print | xargs sed -i 's/192\.168\.1/192.168.5/g'

And don't forget to restart VMware and your network.

/etc/init.d/vmware restart
/etc/init.d/networking restart

Application development on SalesForce

For a project, I was asked to evaluate Force.com to see if it can be considered the platform of choice for a couple of specific applications. During the evaluation I mainly looked into:

  • the feature set, to see how well it aligns with regular enterprise software development routines
  • the development experience, to see the efficiency, learning curve, etc.

I have not yet finalized my evaluation, but in this post I will try to outline my first impressions about Force.com.

First of all, Force.com is a PaaS (Platform as a Service) solution. So what is PaaS? According to Wikipedia, PaaS is the delivery of a computing platform and solution stack as a service. However, as stated at the top of that very page, the article is not to be fully trusted: it contains a lot of external links and lacks inline citations. Salesforce also has a dedicated page just to explain the concept, but it too lacks a one-sentence definition. There are a lot of conflicting definitions here and there on the web; the following, from this page, is the best one I could come across.

A service that provides a platform in which to develop software applications, usually web based, with immediate abstractions of the underlying infrastructure.

Two important points to draw attention to -- PaaS:

  • Provides a platform to develop applications in
  • Abstracts the developed software from the underlying infrastructure

You can also think of PaaS as sitting in the middle, filling the void between IaaS and SaaS. Another nice explanation of the *aaS stack is provided by Krishna in this Quora entry.

Very simply,

  • In IaaS, you select the pre-canned OS layer, deploy the application stack, deploy your code and then add your data
  • In PaaS, you deploy your code (the OS/application stack is part of the offering) and then add your data
  • In SaaS, you just add your data (everything else is part of the offering)

Currently there are several successful PaaS/IaaS solutions on the market, such as Google App Engine, Heroku and Amazon Elastic Compute Cloud -- and Force.com is one of them.

In the traditional sense, a PaaS implementation should/might provide the following functionalities:

  • Services to develop, test, deploy, host and maintain applications
  • An Integrated Development Environment (preferably Web based)
  • Integrated management
  • Multi-tenant architecture with scalability
  • Support for development team collaboration
  • Migration tools for importing/exporting data
  • Multiple environments for delivery
  • Integration (with web services, databases, etc)
  • Off-line development (ability to develop, test and deploy while being disconnected from the cloud)

Different PaaS solutions offer different sets of functionality, which may or may not correspond to the list given above. For example, Google App Engine fully supports off-line development, whereas with Force.com it is not even an option: you need to be connected in order to develop, since the cloud is the only place to deploy and test the functionality being developed.


In order to go into details about the development experience, first you have to understand what development involves. What is code, and what is configuration?

Everything is metadata.

Yes, every customization done on Force.com is metadata -- at least that is what they call it. Even the Apex code you implement is considered metadata (simply an extension of the one main application, the Force.com platform).

The application is Salesforce itself: a multi-tenant application that can be customized for specific use cases.

  • You can model your data and store it on Force.com
  • You can implement Apex code as triggers, controllers, web services, etc.
  • You can integrate external applications
  • You can create visual components/pages using Visualforce
  • You can do lots of other things

But every customization you make to the system is stored as metadata and processed when needed; even the implemented code is not compiled until it needs to run. Scalability is not a concern for your application, since all Salesforce needs to scale is the application itself: the Force.com platform.


The platform requires at least 75% of your code to be covered by unit tests in order to deploy it to a production organization; ideally you should strive for 100% coverage. This code coverage restriction is not enforced for developer sandboxes. And even if you make your changes locally with the provided tools (like the Eclipse IDE or the migration tool), tests activated by the Apex Test Runner execute in the cloud.


With the integrated web IDE, it is possible to develop an application as a team, contribute and share code/configuration with each other, promote implemented functionality to an integration environment, and then push it to production. However, this way of working is unusual for classical software development, so I will try to explain a better approach -- more controlled and structured -- that utilizes an SCM (Source Code Management) system.

First of all, one should know that with the help of the Metadata API it is possible to extract all configuration and customizations done on Force.com as resources (which opens new doors for teamwork). Since we can extract all customizations, we can also utilize an SCM like Git, SVN or CVS to version them and even tag them as releases. The use of an SCM system also enables developers to share code/configuration off-line, and it can be used as the primary way to share and promote code between developers and environments.

See the following scenario for the proposed working model.

Preconditions:

  • There is an SCM already set up and ready to be used by the developers.
  • Every developer has the same copy of the migration tool, provided by Salesforce, on their local machine.
  • Every developer has his/her own Developer Sandbox or Developer Edition for testing and running the application.

Scenario steps:

  • Harry makes some changes in his sandbox and decides to share his customizations with the others.
  • He executes ant retrieveCode on his developer box to make the migration tool retrieve his customizations from his sandbox (which is in the cloud).
  • He pushes his customizations to version control.
  • Sally needs Harry's changes to continue her work, so she pulls them from version control.
  • She executes ant deployCode on her developer box to make the migration tool deploy the latest customizations to her sandbox, and continues to work on her part.
  • When the work is done, one of them can promote the changes to the integration environment, or to another environment depending on the environment model in use.
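The ant targets named in the steps above are thin wrappers around the migration tool's ant tasks. Here is a sketch of such a build file, assuming the sf:retrieve/sf:deploy tasks shipped in ant-salesforce.jar; the target names, credentials and paths are placeholders:

```xml
<project xmlns:sf="antlib:com.salesforce" default="retrieveCode">
    <!-- Placeholder credentials; in practice keep them in build.properties. -->
    <property name="sf.username"  value="harry@example.com"/>
    <property name="sf.password"  value="passwordPlusSecurityToken"/>
    <property name="sf.serverurl" value="https://test.salesforce.com"/>

    <!-- Pull the customizations from the developer's sandbox into src/,
         ready to be committed to version control. -->
    <target name="retrieveCode">
        <sf:retrieve username="${sf.username}" password="${sf.password}"
                     serverurl="${sf.serverurl}" retrieveTarget="src"
                     unpackaged="src/package.xml"/>
    </target>

    <!-- Push the working copy (e.g. freshly pulled changes) to a sandbox. -->
    <target name="deployCode">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src"/>
    </target>
</project>
```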

Since there is an SCM involved in the process, developers can use merge and diff tools to resolve conflicts, tag releases, etc. See the following figure to understand the pieces involved in the process.

Version Control

There is no built-in version control provided by the platform itself; Salesforce suggests using an external version control system (see the "Working as a Team" section above for how to apply the concept). Since all the customizations done on a Force.com application are available through the Metadata API, any version control system can easily be integrated into the development process. There are different ways to extract customizations from a Force.com application:

  • Eclipse plugin
  • Migration Tool
  • Or any tool that works with Metadata API

Eventually, all these tools depend on the Metadata API, and it is the primary way to push/retrieve code/configuration to/from Force.com.

Continuous Integration

It is possible to build and deploy the application to an organization, run the tests against that organization, and collect the results. The following figure illustrates a possible setup for realizing continuous integration for a Force.com application.

The proposed setup uses CruiseControl as the CI tool and ant as the build tool. It is easy to replace CruiseControl with any other CI tool, since the actual build is done via ant using the ant extension library ant-salesforce.jar provided by Salesforce, and most CI tools can trigger ant builds out-of-the-box.

See this page for more information on the subject.

Multiple Environments

Since Force.com does not support fully off-line development, it provides multiple-environment support to manage different development processes. These environments are called sandboxes or organizations in Salesforce vocabulary. Depending on the project's complexity, it is possible to use different sandbox models, from simple to complicated:

  • the simplest being just a production organization
  • and the most complex being multiple sandboxes for a classical enterprise application release cycle, utilizing multiple developer sandboxes, an integration sandbox, a QA sandbox, a UAT sandbox, a staging sandbox, a production organization, etc.

The decision of which sandbox model to use depends on two major factors:

  • The nature of the feature that will be developed: standalone, or a major improvement to an existing production application.
  • The number of teams/developers that will be involved in the project.

The other feature brought by Force.com is "code path enforcement": you may configure from where an organization may receive changes. In the complex scenario given below, the production organization only receives changes from the staging organization, so there is no way to push changes that have not been tested on staging.

See the following figures for the simplest and the most complex scenarios.



Multiple Developer Sandboxes

Not all environments have the same requirements regarding resources and data; that is why Salesforce provides different types of sandboxes for use with different environments. See the following list for the types of sandboxes and their purposes.

  • Developer Edition: Free option for developing on the platform. Cannot access the production system.
  • Partner Developer Edition: A special edition for developing multi-organization products.
  • Full-copy Sandbox: Full sandboxes copy your entire production organization and all of its data, including standard and custom object records, documents, and attachments.
  • Configuration Only Sandbox: Configuration-only sandboxes copy all of your production organization's reports, dashboards, price books, products, apps, and customizations, but exclude all of your organization's standard and custom object records, documents, and attachments.
  • Developer Sandbox: As the name implies, Developer Sandboxes are special configuration-only sandboxes intended for coding and testing by a single developer.


It is a very different development experience: you are very limited by the platform itself, and it feels like you are not really coding -- but that is the idea. You have a very robust multi-tenant application, and you can tailor it to your needs. Overall, I really like the idea behind Force.com, but as with every framework/platform, before jumping right in you have to consider whether it is the best platform for developing a certain application.

IMHO, Force.com is mostly suitable for data-centric applications, because the platform is centered around a very flexible database and allows you to easily develop data-centric applications, e.g. CRM and back-office applications.

If you need an application built around the following, you can implement it with a point-and-click development approach:

  • Database
  • Workflow & Approval
  • Reports and Dashboards around shared data

If we move beyond the point-and-click development approach -- with web service and mashup support, with Visualforce to create enhanced views, and by using ready-made components from AppExchange -- you can write almost any kind of application on the platform.

The thing is, you cannot write a framework on this platform :)

How to upload your artifacts to Maven Central?

I just applied for access to upload my Maven plugin to the Maven Central repository, and I would like to outline the steps I followed. There are already a couple of how-to pages on the subject, but mine will be much shorter. Just follow the steps given below to get your artifact into Maven Central.

First of all, you need a PGP (Pretty Good Privacy) key known to the public key servers.

Go to the GnuPG web site, then download and install it on your computer. For Linux users it is the good old ./configure; make; make check; sudo make install routine; Windows users have the option of downloading a binary. After installation, follow the steps given below to generate your key pair and publish your public key.

$ gpg --gen-key
$ gpg --keyserver hkp:// --send-keys C6EED57A

To obtain the ID of your generated key, execute the following command.

$ gpg --list-keys
pub   2048R/C6EED57A 2011-04-24
uid                  John Doe <>
sub   2048R/C6EED57A 2011-04-24

Now that your key is published to the public key servers, you can upload your artifacts to Maven Central; all you need is access to the repo. The easiest way to gain access to Maven Central is via the Sonatype OSS Repository. It is an Apache-approved repository provided by Sonatype, and it helps any OSS (Open Source Software) project get its artifacts uploaded to Maven Central.

Sign up to Sonatype JIRA and create a JIRA ticket requesting access.

You have to choose "Community Support - Open Source Project Repository Hosting" for the ticket and fill in the required information properly. They require you to provide the following:

  • Summary: a brief introduction of your project
  • groupId: the groupId of your Maven project
  • Project URL: location of the project website
  • SCM URL: location of the source control system
  • Nexus Username: the JIRA username you just signed up with (one or more)
  • Already Synced To Central: if yes, they will copy those artifacts to the Sonatype repository and rebuild a correct maven-metadata.xml
  • Description

For more information please refer to this guide provided by Sonatype.

After your request is processed, they will provide you with the repository URLs to upload your artifacts to.

Sign your artifacts and upload to the provided repositories

To sign your artifacts you can use the maven-gpg-plugin; otherwise you have to sign them manually using GnuPG. See the plugin's page if you need more information on how to use it.
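Wiring the signing into your build is a small snippet in your project's POM; the following is the standard maven-gpg-plugin configuration (the version number is simply the one current at the time of writing):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-gpg-plugin</artifactId>
    <version>1.4</version>
    <executions>
        <execution>
            <id>sign-artifacts</id>
            <!-- Bind the sign goal to the verify phase so every
                 install/deploy produces .asc signature files. -->
            <phase>verify</phase>
            <goals>
                <goal>sign</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```

With this in place, mvn deploy will attach the .asc signatures expected by the repository.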

After you upload your artifacts to the Sonatype OSS Maven repository, they will be synced to Maven Central for you.

Maven plugin to force parent POM upgrades

It is no news that having a corporate POM for a group of projects is a Maven best practice, both for standardization and for avoiding repetition.

And just like any other project, a parent POM is a living artifact, and you need to update it from time to time. When the parent POM is updated, you also need to make the inheriting projects receive the update. There are two ways to do this:

  • We can either perform releases and have the version increased
  • Or we can deploy it without making a release (no version increase)

At first the second option seems logical, since it does not appear to require any communication to get the users of this POM to upgrade. Actually it does: when you deploy without a release, there is no way to be sure that the developers will have the latest POM locally, because Maven will not download it from the repository while a copy with the same version already sits in their local repository.

There is a long thread discussing these two approaches. As someone on that thread points out, the problem is actually the same: either way you need to communicate the update to the developers. The only difference is the content of the dispatch :)

  • For the first approach, you have to tell the developers to upgrade the version of their parent POM to the latest one. (send an email)
  • For the second approach, you have to tell the developers to delete their local copy of the parent POM from their local repository. (send an email)

In the end the problem is how to communicate. Sending an email is not a good option: it can get lost, go to junk or simply be ignored. We need a persistent and controlled way to communicate this. We can simply make the Maven build tell the developers that there is an update, and we can even make the build fail if needed.

What the plugin simply does is;

  • Checks whether the project being built has a parent POM
  • If so, checks whether it is one of the parent POMs that we want to check for updates
  • If so, checks whether there is a newer version
  • If so, depending on the plugin configuration, fails the build or prints a warning to make the developer aware of it
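At its core, the check is an ordinary version comparison. The sketch below is a simplified, self-contained illustration of the decision logic; the class name and the naive dot-separated numeric comparator are mine, and the real plugin presumably relies on Maven's own version ordering rather than this simplification.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: compares the parent POM version in use against
// the versions available in the repository and counts how far behind it is.
public class ParentUpgradeCheck {

    // Naive comparator for dot-separated numeric versions, e.g. "1.2" < "1.10".
    public static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int na = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int nb = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (na != nb) return Integer.compare(na, nb);
        }
        return 0;
    }

    // Returns how many of the available versions are newer than the current one.
    public static int versionsBehind(String current, List<String> available) {
        int behind = 0;
        for (String v : available) {
            if (compare(v, current) > 0) behind++;
        }
        return behind;
    }

    public static void main(String[] args) {
        List<String> available = Arrays.asList("1.1", "1.2", "1.3");
        int behind = versionsBehind("1.1", available);
        if (behind > 0) {
            System.out.println("Your parent POM is " + behind + " versions behind!");
        }
    }
}
```

Depending on configuration, the plugin turns a non-zero "behind" count into either a warning log or a build failure.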

In order to activate the plugin, all you have to do is configure it in your parent POM. The following snippet shows the configuration options (the plugin's artifactId and the shape of the artifact list are illustrative; check the project page for the exact coordinates).

        <plugin>
            <groupId>org.hoydaa.maven.plugins</groupId>
            <!-- illustrative artifactId -->
            <artifactId>upgrade-maven-plugin</artifactId>
            <configuration>
                <!-- When true makes the build fail if there is a new version -->
                <forceUpgrade>false</forceUpgrade>
                <!-- The parent artifacts to check for -->
                <artifacts>
                    <artifact>
                        <groupId>org.hoydaa.maven.plugins</groupId>
                        <artifactId>test-parent</artifactId>
                    </artifact>
                </artifacts>
            </configuration>
        </plugin>
The plugin attaches itself to the validate lifecycle phase if you do not explicitly specify an execution.

At first sight, making the build fail when there is a parent POM update seems annoying from the developer's perspective. Still, sometimes you want to ensure that all inheriting projects upgrade the version. With the configuration above you cannot do that: the configuration lives in the parent POM itself, so in order to force the upgrade you would need the inheriting projects to have already upgraded to the version that forces it. It is a chicken-and-egg problem.

There is another option: making a SPECIFIC VERSION of your parent POM force the update. You can define the property force.upgrade in your parent POM just before a release. Besides checking the plugin's forceUpgrade configuration, the plugin also searches for the force.upgrade property in the available parent POMs to see whether a specific version forces the upgrade. If it finds one, it makes the build fail no matter what the plugin's forceUpgrade configuration says. See the following example config.

        <properties>
            <!-- makes this specific parent version force the upgrade -->
            <force.upgrade>true</force.upgrade>
        </properties>
See the following sample build outputs depending on the situation.

When the plugin is configured with forceUpgrade=false and there are two new versions, it just displays a warning log.

[WARNING] New versions available for your parent POM org.hoydaa.maven.plugins:test-parent:pom:1.1!
[WARNING] 1.2 (not forced)
[WARNING] 1.3 (not forced)
[WARNING] Your parent POM org.hoydaa.maven.plugins:test-parent:pom:1.1 is 2 versions behind, you have to upgrade it to 1.3!

When the plugin is configured with forceUpgrade=true and there are two new versions, it makes the build fail.

[WARNING] New versions available for your parent POM org.hoydaa.maven.plugins:test-parent:pom:1.1!
[WARNING] 1.2 (not forced)
[WARNING] 1.3 (not forced)
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Your parent POM org.hoydaa.maven.plugins:test-parent:pom:1.1 is 2 versions behind, you have to upgrade it to 1.3!

When there are three new versions, one of which is forced (meaning the released parent POM contains the property force.upgrade=true), it makes the build fail even if the plugin's forceUpgrade configuration is false.

[WARNING] New versions available for your parent POM org.hoydaa.maven.plugins:test-parent:pom:1.1!
[WARNING] 1.2 (not forced)
[WARNING] 1.3 (not forced)
[WARNING] 1.4 (forced)
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Your parent POM org.hoydaa.maven.plugins:test-parent:pom:1.1 is 3 versions behind, you have to upgrade it to 1.4! You have to upgrade your parent POM to the latest forced update at least!

The source code of the plugin is on GitHub; feel free to check it out. I will also try to get it into the Maven Central repository.

What does Alfresco mean by Social Content Management?

For those who follow/use the Alfresco content management platform, it is no secret that the product team is working heavily around this new phrase (social content management) and planning to release related features starting with version 3.4.

When I first heard the phrase social content management --in the same sentence as Alfresco--, I let the intuitive side of my brain work out what it might mean.

  • What does social content management mean or rather what does it mean in the context of a content management platform?
  • Is it making the product more social-friendly (UI improvements to ease collaboration), or does it really mean managing content through different social channels like Twitter, Facebook, YouTube?

Luckily, I had the opportunity to see the CEO's (John Powell) presentation on this very subject at a client day of one of Alfresco's solution partners, and also the opportunity to discuss it with both the CEO and the WCM product architect (Brian Remmington). These are the things I extracted from the talks, not exactly things that were told.

There are new features coming with the next releases to make the product more social and more collaborative. These are mostly UI improvements: they will simply provide a new UI based on Share for the existing social features like rating, blogs, wikis, etc. As you may know, Share is their new user interface, based on Spring Surf (developed by Alfresco and adopted by Spring).

Besides UI improvements, one exciting feature is the Google Docs integration. I assume they also realized that it is very expensive to provide a better online authoring experience than Google Docs (and Alfresco is a content management product rather than an online authoring tool) and decided to leverage Google Docs instead. As far as I learned, it will be possible to author content in Google Docs and import it into Alfresco, or the other way around. They are even in contact with people from Google in order to be able to provide a better integration.

These are all good features, but what about social content management? It is still the same question, right :) The difference is nicely put by Jeff Potts in one of his blog posts: where is the emphasis in this phrase? Is it more about social content management (managing content collaboratively) or social content management (management of social content)? Personally, I want to see something related to the latter.

Apparently they will implement a publishing connector for different social channels like Facebook, YouTube, Twitter, etc. Now, this is a novel idea that can change the way we look at content management systems. You may agree that marketing has also changed a lot in the last decade. Nowadays, marketing a product might also include launching a Facebook page, tweeting updates, publishing product videos on YouTube, and so on. Until now, however, these were things marketing departments did manually. Marketing departments are already authoring content for different mediums/channels, so why not reuse some of this content for social channels? It may even be possible to put a process in place for product launches; everybody knows the difficulties with product launches: every step has to be performed in a specific order and there is no margin for error (if something fails, everything has to be rolled back).

I am really looking forward to this social publishing connector and I hope it will be flexible enough to introduce new channels easily.

Generating confluence documentation of Alfresco web scripts

If you are working with Alfresco, you may already know what web scripts are. Basically, the web script framework is a thin REST layer on top of the repository, and web scripts are the RESTful services that run inside this framework.

Web scripts are the services that allow you to interact with the repository, so it is obvious that they should be documented properly for proper collaboration. Alfresco already has a web script that dumps all the web scripts available in the system as HTML and MediaWiki markup, but not in Confluence format. So I created a new FreeMarker template to render the documentation as Confluence content. See the following snippet.


This page is rendered using Alfresco on Mar 1, 2011 8:00:04 PM and contains the latest web scripts and their definitions.

h2. Index of All Web Scripts
<#macro recursepackage package>
    <#if package.scripts?size gt 0>
h3. Package: ${package.path} ${url.serviceContext}/index/package${package.path}
        <#list package.scripts as webscript>
            <#assign desc = webscript.description>
h4. ${desc.shortName}
            <#if desc.description??>${desc.description}</#if>
h5. URI's
|| Method || URI ||
            <#list desc.URIs as uri>
| ${desc.method?html} | {noformat}${url.serviceContext}${uri?html}{noformat} |
            </#list>
h5. Properties
|| Property || Value ||
| Authentication | ${desc.requiredAuthentication} |
| Transaction | ${desc.requiredTransaction} |
| Format Style | ${desc.formatStyle} |
| Default Format | ${desc.defaultFormat!"Determined at run-time"} |
| Lifecycle | ${desc.lifecycle} |
| Id | ${url.serviceContext}/script/${desc.id} |
| Descriptor | ${url.serviceContext}/description/${desc.id} |
| Descriptor Path | ${desc.storePath}/${desc.descPath} |
        </#list>
    </#if>
    <#list package.children as childpath>
        <@recursepackage package=childpath/>
    </#list>
</#macro>

<@recursepackage package=rootpackage/>

All you have to do is make this template available to that particular web script; that means putting it in a place where the web script framework can pick it up while rendering the output. You can put it on the classpath or directly in the repository. See the following list of possible locations.

  • repository folder: /Company Home/Data Dictionary/Web Scripts Extensions
  • repository folder: /Company Home/Data Dictionary/Web Scripts
  • class path folder: /alfresco/extension/templates/webscripts
  • class path folder: /alfresco/templates/webscripts

Don't forget to put the template in the proper package; the package of this web script is org.springframework.extensions.webscripts. Use indexall.get.text.ftl as the template name. See all the possible template paths below.

  • repository folder: /Company Home/Data Dictionary/Web Scripts Extensions/org/springframework/extensions/webscripts/indexall.get.text.ftl
  • repository folder: /Company Home/Data Dictionary/Web Scripts/org/springframework/extensions/webscripts/indexall.get.text.ftl
  • class path folder: /alfresco/extension/templates/webscripts/org/springframework/extensions/webscripts/indexall.get.text.ftl
  • class path folder: /alfresco/templates/webscripts/org/springframework/extensions/webscripts/indexall.get.text.ftl

You can invoke this web script by calling /alfresco/service/index/all.text. If you want to generate the documentation only for the web scripts you developed yourself, include the package in the call; otherwise it will generate the documentation for all web scripts, e.g. /alfresco/service/index/all.text?package=/com/foo/my/extension.

After calling the service, you can simply copy the output and paste it into your Confluence page.

UPDATE: There are other web scripts in the Alfresco source code that can be extended in the same way. However, this particular web script comes with the Spring Surf API (spring-webscripts-api) --which is not in the Alfresco source code-- because of the Spring adoption of the web script API.