Maven profile best practices

Written by Andrew on January 5th, 2011

(A reworking of an earlier post, that I posted on DZone last month. I thought I should keep a copy of it on my own blog.)

Maven profiles, like chainsaws, are a valuable tool, with whose power you can easily get carried away, wielding them upon problems to which they are unsuited. Whilst you’re unlikely to sever a leg misusing Maven profiles, you can cause yourself some unnecessary pain. These three best practices all sprang from first-hand, real-world suffering:

  • The build must pass when no profile has been activated
  • Never use <activeByDefault>
  • Use profiles to manage build-time variables, not run-time variables and not (with rare exceptions) alternative versions of your artifact

I’ll expand upon these recommendations in a moment. First, though, let’s have a brief round-up of what Maven profiles are and do.

Maven Profiles 101

A Maven profile is a sub-set of POM declarations that you can activate or deactivate according to some condition. When activated, they override the definitions in the corresponding standard tags of the POM. One way to activate a profile is to simply launch Maven with a -P flag followed by the desired profile name(s), but they can also be activated automatically according to a range of contextual conditions: JDK version, OS name and version, presence or absence of a specific file or property. The standard example is when you want certain declarations to take effect automatically under Windows and others under Linux. Almost all the tags that can be placed directly in a POM can also be enclosed within a <profile> tag.
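
As a minimal sketch (the profile id and the property it sets are invented for illustration), a profile declaration in the POM looks like this:

```xml
<profiles>
  <profile>
    <id>windows-build</id>
    <!-- Activated automatically when the build runs under Windows -->
    <activation>
      <os>
        <family>Windows</family>
      </os>
    </activation>
    <properties>
      <script.extension>.bat</script.extension>
    </properties>
  </profile>
</profiles>
```

The same profile can also be activated by hand: mvn package -Pwindows-build.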

The easiest place to read up further about the basics is the Build Profiles chapter of Sonatype’s Maven book. It’s freely available, readable, and explains the motivation behind profiles: making the build portable across different environments.

The build must pass when no profile has been activated

(Thanks to Nicolas Frankel for this observation.)


Good practice is to minimise the effort required to make a successful build. This isn’t hard to achieve with Maven, and there’s no excuse for a simple mvn clean package not to work. A maintainer coming to the project will not immediately know that profile wibblewibble has to be activated for the build to succeed. Don’t make her waste time finding it out.

How to achieve it

It can be achieved simply by providing sensible defaults in the main POM sections, which will be overridden if a profile is activated.
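
For instance (profile and property names invented), declare a sensible default in the main <properties> section and let the profile override it only when activated:

```xml
<properties>
  <!-- Sensible default: used when no profile is activated -->
  <db.schema>dev_local</db.schema>
</properties>

<profiles>
  <profile>
    <id>ci</id>
    <properties>
      <!-- Overrides the default when the ci profile is activated -->
      <db.schema>ci_shared</db.schema>
    </properties>
  </profile>
</profiles>
```

A plain mvn clean package works out of the box; mvn clean package -Pci overrides the default.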

Never use <activeByDefault>

Why not?

This flag activates the profile if no other profile is activated. Consequently, it will fail to activate the profile if any other profile is activated. This seems like a simple rule which would be hard to misunderstand, but in fact it’s surprisingly easy to be fooled by its behaviour. When you run a multimodule build, the activeByDefault flag will fail to operate when any profile is requested, even if the profile is not defined in the module where the activeByDefault flag occurs.

(So if you’ve got a default profile in your persistence module, and a skinny war profile in your web module… when you build the whole project, activating the skinny war profile because you don’t want JARs duplicated between WAR and EAR, you’ll find your persistence layer is missing something.)
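
A sketch of the trap (module path, profile ids and property invented):

```xml
<!-- persistence/pom.xml -->
<profile>
  <id>default-db</id>
  <activation>
    <!-- Deactivated whenever any profile is activated on the command line,
         even a profile defined only in another module of the build -->
    <activeByDefault>true</activeByDefault>
  </activation>
  <properties>
    <db.schema>dev_local</db.schema>
  </properties>
</profile>
```

Running mvn package -Pskinny-war on the parent project silently switches this profile off, and db.schema goes undefined.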

activeByDefault automates profile activation, which is a good thing; it activates implicitly, which is less good; and it has unexpected behaviour, which is thoroughly bad. By all means activate your profiles automatically, but do it explicitly and automatically, with a clearly defined rule.

How to avoid it

There’s another, less documented way to achieve what <activeByDefault> aims to achieve. You can activate a profile in the absence of some property:

<profile>
  <id>nofoobar</id>
  <activation>
    <property>
      <!-- The "!" means: active when the property is NOT defined -->
      <name>!foobar</name>
    </property>
  </activation>
</profile>

This will activate the profile “nofoobar” whenever the property “foobar” is not defined.

Define that same property in some other profile: nofoobar will automatically become active whenever the other is not. This is admittedly more verbose than <activeByDefault>, but it’s more powerful and, most importantly, surprise-free.
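
One way to define the counterpart (the property name foobar is invented to match the example): a profile that activates when the property is present. Note that property activation is evaluated against system properties, so in practice the pair is toggled from the command line:

```xml
<profile>
  <id>foobar</id>
  <activation>
    <property>
      <!-- Active when the property IS defined, e.g. mvn package -Dfoobar=true -->
      <name>foobar</name>
    </property>
  </activation>
</profile>
```

mvn package activates nofoobar; mvn package -Dfoobar=true activates foobar instead.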

Use profiles to adapt to build-time context, not run-time context, and not (with rare exceptions) to produce alternative versions of your artifact

Profiles, in a nutshell, allow you to have multiple builds with a single POM. You can use them in two ways:

  • To adjust how you build: that is, to adapt the build to variable circumstances (developer’s machine or CI server; with or without integration tests) whilst still producing the same final artifact, or
  • To adjust what you build: that is, to produce variant artifacts.

We can further divide the second option into: structural variants, where the executable code in the variants is different, and variants which vary only in the value taken by some variable (such as a database connection parameter).

If you need to vary the value of some variable at run-time, profiles are typically not the best way to achieve this. Producing structural variants is a rarer requirement — it can happen if you need to target multiple platforms, such as JDK 1.4 and JDK 1.5 — but it, too, is not recommended by the Maven people, and profiles are not the best way of achieving it.

A common case where profiles seem like a good solution is when you need different database connection parameters for development, test and production environments. It is tempting to meet this requirement by combining profiles with Maven’s resource filtering capability to set variables in the deliverable artifact’s configuration files (e.g. Spring context). This is a bad idea.


  • It’s indirect: the point at which a variable’s value is determined is far upstream from the point at which it takes effect. It makes work for the software’s maintainers, who will need to retrace the chain of events in reverse.
  • It’s error prone: when there are multiple variants of the same artifact floating around, it’s easy to generate or use the wrong one by accident.
  • You can only generate one of the variants per build, since the profiles are mutually exclusive. Therefore you will not be able to use the Maven release plugin if you need release versions of each variant (which you typically will).
  • It’s against Maven convention, which is to produce a single artifact per project (plus secondary artifacts such as documentation).
  • It slows down feedback: changing the variable’s value requires a rebuild. If you configured at run-time you would only need to restart the application (and perhaps not even that). One should always aim for rapid feedback.
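
For concreteness, the anti-pattern being warned against looks something like this (environment names, property and URL invented): one profile per environment, feeding Maven’s resource filtering:

```xml
<profiles>
  <profile>
    <id>prod</id>
    <properties>
      <db.url>jdbc:oracle:thin:@prod-host:1521:PROD</db.url>
    </properties>
  </profile>
  <profile>
    <id>test</id>
    <properties>
      <db.url>jdbc:oracle:thin:@test-host:1521:TEST</db.url>
    </properties>
  </profile>
</profiles>

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <!-- ${db.url} placeholders in the configuration files get baked in at build time -->
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
```

Each mvn package -Pprod bakes one environment’s value into the artifact, with all the drawbacks listed above.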

Profiles are there to help you ensure your project will build in a variety of environments: a Windows developer’s machine and a CI server, for instance. They weren’t intended to help you build variant artifacts from the same project, nor to inject run-time configuration into your project.

How to achieve it

If you need to get variable runtime configuration into your project, there are alternatives:

  • Use JNDI for your database connections. Your project only contains the resource name of the datasource, which never changes. You configure the appropriate database parameters in the JNDI resource on the server.
  • Use system properties: Spring, for example, will pick these up when attempting to resolve variables in its configuration.
  • Define a standard mechanism for reading values from a configuration file that resides outside the project. For example, you could specify the path to a properties file in a system property.
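
A sketch of the third option, assuming a Spring XML context (the bean definitions and the config.file property are invented for illustration): placeholder resolution reads a properties file whose location is supplied at launch, so the artifact itself contains no environment-specific values:

```xml
<!-- applicationContext.xml -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <!-- ${config.file} is resolved from a system property at run-time,
       e.g. java -Dconfig.file=/etc/myapp/db.properties ... -->
  <property name="location" value="file:${config.file}"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="url" value="${db.url}"/>
  <property name="username" value="${db.user}"/>
  <property name="password" value="${db.password}"/>
</bean>
```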

Structural variants are harder to achieve, and I confess I have no first-hand experience with them. I recommend you read this explanation of how to do them and why they’re a bad idea, and if you still want to do them, take the option of multiple JAR plugin or assembly plugin executions, rather than profiles. At least that way, you’ll be able to use the release plugin to generate all your artifacts in one build, rather than only one of them.
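
A sketch of the multiple-executions approach (classifier and directory names invented), which produces every variant in a single build:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <!-- Variant JAR, distinguished from the main JAR by a classifier -->
    <execution>
      <id>jdk14-jar</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <classifier>jdk14</classifier>
        <classesDirectory>${project.build.directory}/classes-jdk14</classesDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Both JARs are attached to the build, so the release plugin can deploy them together.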

Consider also Maven’s per-user settings

Per-user settings are a bad idea in most cases, because the whole objective of the exercise is to have all artifacts under source control or in a Maven repository, such that the build can be replicated on any machine. However, when you want persistence tests to run in a different database schema for every developer, Maven’s per-user settings file (~/.m2/settings.xml) is a sensible alternative to profiles. In this case, you really do want the project to build differently depending on who runs the build. If you do this, make sure you still provide working default values in the POM itself (they will be overridden by user settings), such that builds will still work with an empty ~/.m2/settings.xml.
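
A sketch of such a per-user ~/.m2/settings.xml (the schema property is invented; the POM should declare a working default for it):

```xml
<settings>
  <profiles>
    <profile>
      <id>per-user-db</id>
      <properties>
        <!-- Overrides the default test.db.schema declared in the POM -->
        <test.db.schema>alice_dev</test.db.schema>
      </properties>
    </profile>
  </profiles>
  <activeProfiles>
    <!-- Always active for this user's builds -->
    <activeProfile>per-user-db</activeProfile>
  </activeProfiles>
</settings>
```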

(Thanks to Eric Fitchett for this suggestion.)


2 Comments

  1. Marcin says:

    Hi Andrew, thanks for an excellent post. I totally agree that only a single artifact should be produced by the build, the same artifact for all target environments (DEV, QA, PROD).

    But sometimes it is very hard to externalize configuration. For example, with third-party libraries the configuration may be stored in a property file at a fixed location inside the WAR file, and the property is environment-specific (e.g. a framework’s dev mode).

    I haven’t found a better solution than m-war-p overlays. For my foo-webapp module I created a foo-deploy module and configured m-war-p to overlay configuration files in the released WAR file.

    The problem is that the foo-deploy module is never released, so I can imagine the build will not be reproducible, especially if overlays are overused.

    I’m really interested in your opinion :-)


    • Andrew says:

      Hi Marcin,

      Sorry for taking a while to reply. I’ve never had this problem, so my opinion isn’t worth much (I don’t really believe in opinions on this sort of thing until they’ve been tested against reality). But here it is anyway.

      If you need variants of a file that’s inside the WAR, then you really only have two options. Either you need to produce multiple WARs to cover the different environments, even though it’s bad practice, or you need to produce a single WAR and then tweak it post-build.

      I didn’t know about overlays, so I never thought of them as a way to produce variant artifacts. From what I understand, with your problem you could also use a combination of profiles and filtering of properties files, to get the same effect. Overlays do have the advantage of isolating configuration into a separate project, but depending on how you organise the dependency, you might risk forgetting to build foo-deploy before foo-webapp (and spending an hour or two wondering why the new configuration hasn’t taken effect).

      So, if you stick with overlays, I would advise making multiple foo-deploy-xxx modules, one per configuration. I’d also recommend using multiple executions of maven-war-plugin, as described here, to produce your multiple WARs from a single build (the post describes maven-jar-plugin but it should work the same way for maven-war-plugin).

      This is better than using profiles to switch between configurations, because you produce all the artifacts you require in one shot, and you keep a single build process. You avoid the danger of failing to activate the necessary profile at build time.

      As long as you take the option of producing multiple WARs from the same version of foo-webapp, whether you use overlays or some other method, you’ll have the problem of distinguishing them. The classifier would be the appropriate way to do that.
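
      As a sketch (classifier and directory names invented), multiple maven-war-plugin executions distinguished by classifier might look like:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <executions>
    <!-- One execution per configuration, each attached with its own classifier -->
    <execution>
      <id>war-qa</id>
      <phase>package</phase>
      <goals>
        <goal>war</goal>
      </goals>
      <configuration>
        <classifier>qa</classifier>
        <webResources>
          <resource>
            <directory>src/main/config/qa</directory>
          </resource>
        </webResources>
      </configuration>
    </execution>
    <execution>
      <id>war-prod</id>
      <phase>package</phase>
      <goals>
        <goal>war</goal>
      </goals>
      <configuration>
        <classifier>prod</classifier>
        <webResources>
          <resource>
            <directory>src/main/config/prod</directory>
          </resource>
        </webResources>
      </configuration>
    </execution>
  </executions>
</plugin>
```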

      All of the above approaches require that you know, at build time, what configurations you are going to need. If a requirement arises for a different configuration, you need to modify the POM of foo-webapp, which obliges you to produce a new release with a fresh version number.

      The alternative to all of that, i.e. tweaking the WAR after the build, contravenes the principle that built artifacts should be invariant. However, the kind of configuration you’re describing is the system integrator’s job, not the developer’s. You could very well respect the principle of invariant artifacts at the point of hand-over to the system integrator, but add a phase to your deployment process where you adapt the WAR to the environment. (It doesn’t matter if the developer and the system integrator are the same person: it’s the process that matters.) Of course, you would do this with some kind of automated script, not by hand. A big advantage of this approach is that you don’t need to know the configuration at build time.

      Overall, I’m inclined to recommend the second approach, especially if you can’t predict all of the configurations you’ll need for each release.
