Moving on…

After almost seven years at Terracotta, I’ve decided to move on to other adventures. I joined the company, when it was still a start-up, in early 2009 — prior to it acquiring Ehcache later that same year and then, itself, being acquired by Software AG in 2011.

Those have been crazy years. Very interesting years, but also challenging ones. I’m not going to delve into the details of the byte code manipulation of the DSO days, nor the challenges of getting advanced clustered capabilities into Quartz, or of dealing with node failures in Ehcache’s write-behind… All of that was fun, but even more fun was being fortunate enough to work with the incredible team Terracotta has had over the years. The founders, obviously: Ari, Orion and, last but not least, Tim. Tim Eck is certainly the brightest engineer I’ve had the chance to work with over the nearly 20 years I have been working in software. He also happens to be a very kind and humorous person. Steve, who has always pushed me, and with whom I have enjoyed every single argument we had over how to implement this or that! I like to believe the end result was always worth it. Alex, who was an awesome manager for me and the transparency team in my early days at Terracotta. As has been Fiona over all those years!
And obviously the team itself: James, Chris D., Ludovic, Gary, Chris C., Louis, Clifford, Myron, Aurélien, Anthony, Abhilash, Mathieu, Jeff, … Not even mentioning all the great people who were part of the different teams over the many years…

Well, there is one guy who has already left whom I should probably mention in this context. Not only did I join Terracotta because he introduced me to Ari at Devoxx in Belgium in 2008, and not only has he been a friend for over 10 years now… but mostly because I’ll be working with him again in my new position: Geert Bevin.

… joining Moog Music

As of today, I’ll be working at Moog Music together with Geert. I am very excited to have this opportunity and to work, initially, on their software instruments for the iOS platform. It’s an awesome opportunity to marry two of my passions: music and software.

I’m really looking forward to what we’ll be creating there. It will be quite a change though… No more Java virtual machine for now, but C++, Objective-C et al. instead. Then again, I always wanted to work on much more constrained devices than our “commodity servers”… I guess I have that now!

Thanks Terracotta!

Again, thank you so much to all the people at Terracotta. It’s been a crazy ride and I loved it… but the time has come. That being said… don’t expect me to disappear from the open-source mailing lists too quickly! ;)


Ehcache3’s templates

Earlier this month, a week ago more precisely, we released the first alpha of the new development line of Ehcache. The goal of the alpha release was to provide a JSR-107 (the javax.cache API for Java) compliant implementation built on the new API and features as we plan to have them for v3 of Ehcache.

On-heap caching using the JSR-107 API

We’ve released a heap-only CachingProvider for JSR-107 and will start making the other topologies available with the beta releases in early 2015… While the alpha introduces the new Ehcache API, its main purpose is to enable people to make use of the javax.cache APIs. Those are obviously frozen, while the Ehcache v3 APIs are still somewhat in flux…

While I was presenting recently on the subject, someone asked:

I noticed there is no way to specify size of cache.
What am I missing? The 107 spec does not talk about size or maxEntries, yet it clearly says “resource constraint” but no constraint configuration is covered in the spec.
Is it even practical to define caches without size?
And indeed, no, it’s not practical to define a cache without a capacity constraint (it’s just a Map otherwise, right?); yet the JSR-107 specification provides no way of configuring a capacity constraint on a Cache (or CacheManager).

Cache template to the rescue!

Ehcache has always provided a way to configure your CacheManager using an XML file, and so does this new version. But we’ve added a small feature to our XML configuration: cache templates. The idea came out of some crazy thought I had, which smart developers pushed further during our public Ehcache weekly dev meeting. Basically, cache templates are cache definitions that you can “extend” when creating an actual cache. This keeps your XML tighter when you create many different caches that only differ a little in their configuration. You can read everything about them in this little read me, but here’s how it works in a nutshell:

  1. Define one or multiple named cache-template(s), as if it were a cache, in your XML file;
  2. Define an aliased cache to use that cache-template (referencing it by name);
  3. Optionally override certain attributes from the cache-template.

Here is an example of this:
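
The XML below is a sketch of what such a configuration can look like; the element and attribute names (uses-template, heap) follow the v3 schema as I recall it, so treat them as an approximation and refer to the read me for the exact ones:

```xml
<config xmlns="http://www.ehcache.org/v3">

  <cache-template name="example">
    <key-type>java.lang.String</key-type>
    <value-type>java.lang.String</value-type>
    <heap unit="entries">120</heap>
  </cache-template>

  <cache alias="bar" uses-template="example">
    <!-- capacity and value-type are inherited from the template -->
    <key-type>java.lang.Number</key-type>
  </cache>

</config>
```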

Here, the cache aliased “bar” extends the cache-template “example”, so that it inherits its capacity, value-type and key-type… but the key-type is explicitly overridden.

How’s that useful to my JSR-107 application?

Well, we’ve added a small feature to our JSR-107 CachingProvider that lets you declare that programmatically configured caches (created at runtime) should use a cache-template. This is all explained in this other read me. But again, in a nutshell:

  1. I have an application that creates a javax.cache.CacheManager programmatically, using either some specific URI to configure it, or the default URI (see your provider’s documentation);
  2. At runtime, the app creates a cache named “users”, either using plain javax.cache APIs or some vendor-specific configuration object. It either provides no capacity limit, or provides one in a vendor-specific way that Ehcache wouldn’t understand;
  3. Using Ehcache’s XML configuration, I can define a cache-template as mentioned above; by adding the 107 XML configuration extension, I can then declare that when the Cache named “users” gets created, it gets “enhanced” by a given template.

That way my application can benefit from all Ehcache features without tying itself to any of its APIs. Again, this is especially useful for existing applications making use of the JSR-107 APIs (whether combined with other vendor-specific APIs or not). Here’s a small example of the use-case above:
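
A sketch of what such a file can look like; again, the jsr107 extension elements shown here follow the schema as I recall it, so double-check the read me:

```xml
<config xmlns="http://www.ehcache.org/v3"
        xmlns:jsr107="http://www.ehcache.org/v3/jsr107">

  <service>
    <jsr107:defaults default-template="tinyCache">
      <!-- the cache named "users", however created, uses this template -->
      <jsr107:cache name="users" template="users-template"/>
    </jsr107:defaults>
  </service>

  <cache-template name="users-template">
    <key-type>java.lang.Long</key-type>
    <value-type>java.lang.String</value-type>
    <heap unit="entries">2000</heap>
  </cache-template>

  <cache-template name="tinyCache">
    <heap unit="entries">20</heap>
  </cache-template>

</config>
```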

Note that we also set a default template to be used for all other caches created using that javax.cache.CacheManager.

Try it out!

We’re trying hard to make Ehcache 3 the best JSR-107 provider out there, and we actually think we need to address some shortcomings of the specification while doing so… This feature came from an issue filed against Ehcache v2’s JSR-107 integration, following a question on our mailing list… So please, try it all out and be vocal! Let us know what you think! In the meantime, we’ll be making progress on the beta release planned for early 2015.


SizeOf Java library

That’s it! We’ve finally done it… or almost done it. We’ve extracted the SizeOf code from the Ehcache library into its own library.

Release 0.2.0

The first release of the lib was just pushed to Maven Central and is available by adding the following dependency to your pom.xml:
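
Something along these lines; the coordinates below are assumptions on my part, so double-check them on Maven Central:

```xml
<dependency>
  <groupId>org.ehcache</groupId>
  <artifactId>sizeof</artifactId>
  <version>0.2.0</version>
</dependency>
```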


It can then simply be used by creating a new SizeOf instance (optionally passing some filters in) and using one of its two methods:

import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.ehcache.sizeof.SizeOf;

SizeOf sizeOf = SizeOf.newInstance(); // <1>
sizeOf.sizeOf("Some string"); // <2>
sizeOf.deepSizeOf(Integer.MAX_VALUE, false, new ReentrantReadWriteLock()).getCalculated(); // <3>
  1. Creates a new instance, but also (only the first time) determines which of the 3 implementations will be used (see below)
  2. Sizes the java.lang.String instance passed in
  3. Sizes the ReentrantReadWriteLock instance, but also navigates down the object graph, returning a Size instance. The first two arguments set a limit on how many references will be walked, and what to do if that limit is hit: either throw (true), or ignore the limit and return the size calculated so far (false)

Resolution mechanisms

The first time you invoke SizeOf.newInstance(), the library will try to load one of the three implementations we currently have: AgentSizeOf, UnsafeSizeOf and finally ReflectionSizeOf.

AgentSizeOf

The library will try to attach to the Java virtual machine process and load an agent (packaged within the sizeof jar). If that succeeds, the AgentSizeOf will use java.lang.instrument.Instrumentation#getObjectSize to measure individual object sizes.

UnsafeSizeOf

Should the attempt to attach to the VM fail, the UnsafeSizeOf is used, provided we manage to access the sun.misc.Unsafe instance; sizes are then calculated mainly using sun.misc.Unsafe#objectFieldOffset.

ReflectionSizeOf

If all of that fails, we fall back to Java’s Reflection API to “manually” calculate object sizes.
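
To illustrate the idea (this is a deliberately naive, hypothetical sketch, not the actual ReflectionSizeOf code), shallow sizing by reflection boils down to summing field sizes on top of an assumed object header; the real implementation derives header, reference and alignment details from the running VM instead of hard-coding them:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Naive reflection-based shallow sizing: an assumed object header plus
// fixed sizes per instance field, walking up the class hierarchy.
final class ReflectiveShallowSize {
    private static final long HEADER = 16;    // assumed header on a 64-bit VM
    private static final long REFERENCE = 4;  // assumed compressed reference size

    static long of(Object obj) {
        long size = HEADER;
        // Walk the class hierarchy, summing a size for every instance field
        for (Class<?> c = obj.getClass(); c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (!Modifier.isStatic(f.getModifiers())) {
                    size += sizeOfField(f.getType());
                }
            }
        }
        return size;
    }

    private static long sizeOfField(Class<?> type) {
        if (!type.isPrimitive()) return REFERENCE;
        if (type == long.class || type == double.class) return 8;
        if (type == int.class || type == float.class) return 4;
        if (type == short.class || type == char.class) return 2;
        return 1; // boolean and byte
    }
}
```

A deep size would then recurse into the non-primitive fields while tracking already-visited objects; that graph walk is exactly what deepSizeOf bounds with its first two arguments.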

0.2.0?! Is that… useable at all yet?

Well… yes and no. Yes, we believe it is fairly stable and well tested; it is what Ehcache has been using for a while now… Yet, we are not happy (/ˈhapē/) with the API. Mainly org.ehcache.sizeof.Size and the deepSizeOf method signature: these are leftovers from the port that we still want to clean up. Finally, this release adds support for OpenJDK, which we didn’t use to support.

Give it a try!

And tell us what you think. In the meantime, we’ll be clearing those leftovers for the next release, along with a couple of other enhancements. All of it is tracked on GitHub. Feel free to submit bugs, ideas and, why not, matching pull requests!


Starting with Ehcache 3.0

It’s been 4 years since we released Ehcache 2.0. While it was the second major release in almost 6 years, it didn’t break backward compatibility, but was “simply” the first release with a tight integration with the Terracotta stack, just 6 months after the acquisition. With JSR-107 close to being final, Java 8 lurking just around the corner, and Ehcache carrying a close to 10 year old API, it does seem like the perfect time to revamp the API, break free from the past and start working on Ehcache 3.0!

Ehcache 3.0

Ehcache is one of the most feature-rich caching APIs out there. Yet it’s been growing “organically”, all founded on the very early API as designed some 10 years ago. We’ve learned a lot in the meantime and there have been compromises made along the way. But “fixing” these isn’t an easy task, often even impossible without breaking backward compatibility. Meanwhile, while some caching solutions took very different approaches to it all, the expert group on JSR-107, Terracotta included, put great effort into trying to come up with a standard API. We feel the new major version of the API should be based on a standardized one: JSR-107 is a likely choice to serve as the basis for the new version, but Ehcache 3.0 will probably “extend” the specification in many aspects.


While JSR-107 lays a foundation for this new API, we also need to address some Ehcache specifics early on: one of these is the resource constraints users may wish to put on a cache, and the other is the exception type system. Let me try and expand a bit on these here.

Resource constraints

Ehcache users have always had the possibility to constrain caches based on “element counts”, whether on-heap or on-disk. Over time we’ve introduced more tiers to cache data in (off-heap, Terracotta clustered) and more ways to constrain caches in size (Automatic Resource Control, which lets you constrain the memory used by caches and cache managers in bytes). When capacity is reached, the cache will evict cached entries based on some eviction algorithm and fire events about these.
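
To make the count-based flavor concrete with plain JDK classes (a toy illustration only; Ehcache’s tiers and ARC are far more elaborate than this), a LinkedHashMap can be turned into a tiny capacity-constrained LRU cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy count-constrained cache built on plain JDK classes, illustrating
// eviction on reaching capacity; not Ehcache code.
final class TinyLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    TinyLru(int maxEntries) {
        super(16, 0.75f, true); // access-order: eviction is least-recently-used
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict as soon as the count is exceeded
    }
}
```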

The javax.caching API doesn’t address this at all. Yet as this is a core concept in Ehcache, and one that we want to keep, it seems only sensible for us to extend the API to support it.

Exceptions

We at Terracotta were never really fond of the exception types and their usage in Ehcache, mainly because of the way Ehcache evolved to support more and more advanced topologies without wanting to break backward compatibility. When it moved to support distributed caches, an entirely new set of problems emerged and failure became an inherent part of the system. Yet nothing in the API really allowed for that. JSR-107 seems to come with the same limitation.

While for simple, memory-only use-cases the API and its exceptions seem good enough, distributed caches do indeed need more. This is something we want to address early on in the design of the new Ehcache 3.0 API.

Java 8

Ehcache 1.0 came out just a couple of days before Java 5 was released. While it was obvious that not everyone would jump on the boat right away, the enhancements that Java 5 brought were non-negligible nonetheless, most notably generics. Yet Ehcache never managed to integrate these into its own API. The net result is that, 10 years later, Ehcache still lacks generics and leaves type safety entirely for the user to deal with. Lesson learned… This time around, we want to make sure the API is ready for the immediate future that is Java 8.
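
To illustrate what generics buy a caching API, here is a hypothetical sketch; these names are made up for the example and are not the actual Ehcache 3.0 interfaces:

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical typed cache interface: the compiler enforces key and
// value types, so no casts on the caller's side.
interface TypedCache<K, V> {
    V get(K key);
    void put(K key, V value);
}

// Minimal map-backed implementation, just enough to demonstrate usage.
final class MapBackedCache<K, V> implements TypedCache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    public V get(K key) { return store.get(key); }
    public void put(K key, V value) { store.put(key, value); }
}
```

A TypedCache&lt;Long, String&gt; hands back a String with no cast, and rejects mistyped keys or values at compile time instead of at runtime.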

Lambdas

Like 10 years ago with Java 5, not everyone will migrate to Java 8 when it comes out. Actually, probably only a reckless few will. But eventually more and more people will, and many of them will embrace its feature set. Lambdas are certainly a language feature that fits a caching API in many of its usages. As such, we want to make sure the Ehcache 3.0 API is ready for that. Foolishly enough, I believe it’s a task we can succeed at… hopefully without too much trouble!
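
As a sketch of that fit (using the JDK’s own computeIfAbsent, not the Ehcache 3.0 API), a cache loader reads naturally as a lambda and only runs on a miss:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Read-through caching expressed with a lambda loader; the lambda stands
// in for an expensive computation or database hit and only runs on a miss.
final class LambdaLoaderExample {
    static final Map<String, Integer> cache = new ConcurrentHashMap<>();

    static Integer get(String key) {
        return cache.computeIfAbsent(key, k -> k.length());
    }
}
```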

Going about getting there

Having covered these two main “external” drivers (JSR-107 & Java 8) that are going to impact this Ehcache 3.0 API effort, it is also worthwhile to discuss the “internal” ones… Following are the very deep & ugly secrets of why we believe Ehcache 3.0 is a necessity!

Foundation work

As mentioned earlier, Ehcache grew its feature set organically. Most importantly, it grew from a heap-only caching solution to the most feature-rich API in the caching landscape, certainly among open-source ones. But some of it hadn’t been accounted for, or simply turned out to be harder to fit in. There are mainly two things we believe need fundamental fixing in order to make the life of users much simpler…

Lifecycle

Interestingly enough, this is something the JSR-107 expert group also left out. Not really surprising, as it would probably be really hard to have all vendors implement such a standardized lifecycle, let alone agree on one!

Ehcache has a much simpler task to address in that regard. The set of deployments and topologies Ehcache supports is pretty much nailed down now. So “all” there is to do is clearly define a lifecycle that encompasses all of these and lets users easily move from one deployment to another while minimizing the set of new problems they have to think about. Whether we’re talking about moving a memory-only cache to a “restartable” one that outlives the lifecycle of the virtual machine, yet keeps it all type safe; or moving to a clustered deployment, over WAN, and all the implications that brings along… It’ll probably never be entirely transparent, but we can probably make it as smooth as possible…

Modular configuration

With the plethora of features came an ever growing set of configuration options to tune it all for the millions of deployment scenarios out there… The current ehcache.xsd is as scary as it gets! And that’s not even addressing any of the many conflicting configurations one can end up with.

Addressing this would require much better isolation between these features and their configuration. As part of a prototype I did with Chris Dennis a while back, we came up with a much more modular approach to it: an approach where “modules” don’t know about each other and can function entirely without each other. While it also ties into the previous point on lifecycle, getting this aspect right seems an absolute requirement for the user experience to be as smooth as possible.

Main API

Last but not least, the API. I’m talking about the main API here, the one that expands on JSR-107’s. The one that users will use the most, covering configuration, lifecycle and “everyday” cache operations. The code that makes all of the above concrete to engineers and developers.

Moar featurz!

And yet there is much more than that… There is the list of features Ehcache grew to have over the last 10 years! While not all of them are necessarily to be ported to this new version, most of them are. Deciding how and when to port them is going to be the next step once we’re done with the above. But there is also a caveat to this… We need to make sure that, whatever is done during the foundation work phase, nothing rules out or impedes the features that we know are going to be part of this new API. I will not even try to come up with an exhaustive list, but here are some of the main features to keep in mind:

  • BootstrapCacheLoader (sync/async)
  • Statistics
  • Search
  • Transaction (JTA-y, XA, Local, …)
  • Write Through (explicit)
  • Write Behind (explicit)
  • Read Through (explicit, with multiple Loaders)
  • Refresh Ahead (Inline & Scheduled)
  • … and more!

So what now?

So why this blog post now? Well, I know there are many users out there. What’s great about the users of a library such as Ehcache is that they are all developers. Many of them actually seem to love complaining about the tools and libraries they use! Now is the time, more than ever: come and complain! But come and complain in a positive way, is all we ask. And if you feel like doing more than complaining, you might even participate! We’ve decided to start spec’ing this new API entirely in the open. The discussions will start happening on our Google Group ehcache-dev, and the API prototypes and code will all be public. So we need you. We need you to make your issues known. We need you to make your solutions known, and we need you to start debating with everyone on how to make Ehcache 3.0 the caching API you always wanted. We already have the knowledge and expertise to implement it all, based on all we already have in the 2.0 line and all the cool stuff we’ve been hacking on lately… But now is the time to let us know what is to be done!

We’ll be starting this exercise as of today: both discussing the above, starting with foundation work on top of the javax.caching API, and pushing the experimental code we have been hacking on to GitHub. API work will be happening both on GitHub, through code reviews, and on the group for more “general” discussions. All I can do now is hope to see you there!


Choosing Hibernate’s caching strategy

One of, if not the, most common use cases for Ehcache out there is using it with Hibernate. With only a little configuration, you can speed up your application drastically and reduce the load on your database significantly. Who wouldn’t want that? So off you go: you add the couple of lines of configuration to your application. Nothing too complicated… You’re quickly left with two questions to answer though:

  1. What entities and/or relations do I cache?
  2. What strategy do I use for these?

We’ll leave the former question as an exercise to the reader… but I’d like to spend some time here discussing the second one.

Choosing the strategy

So you decide to use caching on an entity or a relationship. You now have to tell Hibernate what caching strategy to use. Caching strategies come in 4 flavors: read-only, read/write, non-strict read/write and transactional. What these entail is to be considered from two perspectives: your ORM’s (i.e. Hibernate’s) and the cache’s. The latter actually only matters for caches that loosen the consistency semantics, especially when you go distributed. Let’s start with the easy one first: Hibernate’s perspective.
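
For reference, the strategy is picked per entity or collection in the mapping. A hypothetical hbm.xml fragment (class, table and property names made up for the example) would look like:

```xml
<!-- The usage attribute selects the caching strategy for this entity -->
<class name="com.example.Company" table="COMPANY">
  <cache usage="nonstrict-read-write"/>
  <id name="id"/>
  <property name="name"/>
</class>
```

The same choice can be made with the @Cache(usage = …) annotation when using annotation-based mappings.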

From Hibernate’s perspective

The Hibernate documentation does give you some hints, but we’ll go into slightly more detail here about what’s going on.

Read-only

This is the easiest one to reason about. The data held in the cache is immutable and as such will never be updated, which solves all isolation problems. Data just is. Hibernate will actually go as far as prohibiting any mutation of that dataset, throwing an exception at you if you try to update any such entity.

In a “strongly consistent” cache, as you’d get within a single VM from any cache vendor, there isn’t much more thought to put into this one. Straightforward… Gotta love this one!

Non-strict Read/Write

Now if you have mutable entities, this is one of the three options you have. As the name implies, it’s read and write, but in a non-strict sense… Non-strict here means that Hibernate will not try to isolate mutations to an entity from other concurrent transactions reading the same data. But what does this mean?

At this stage, I think it’s important to debunk a couple of misunderstandings about Hibernate. Firstly, and most importantly here, Hibernate does NOT store your entities in the second-level cache. Instead it stores a “dehydrated” representation of the entity. Hibernate comes with an option to have it store Maps in the cache. You would never want to do that on a production system, but it can be useful for debugging an app: each cache entry becomes a Map of property name to property value for a given entity type (e.g. entity “Company”: [“id”: 1, “name”: “Terracotta”, “doesCaching”: true]). The default storage is merely a more efficient way of storing that same data.

Secondly, the cache is accessed/updated as the database is accessed/updated. So say a query is issued against companies, trying to select all companies that do caching, while the session holds newly created, managed Company instances: Hibernate will flush all of these to the database. Should they be reloaded from the database within this uncommitted transaction, they’d also make it into the second-level cache. This is where non-strict matters: pending (i.e. uncommitted) changes can become visible through the cache to other transactions, breaking the I guarantee of ACID.

Also worth mentioning is that the behavior above is that of Ehcache’s NonStrictReadWrite strategy. We’ve tried our best to make it hard for users to shoot themselves in the foot. This strategy will invalidate cached entries on updates and only populate the cache with data loaded from the database. Note that these invalidations happen after the transaction has successfully committed. As a result, there is a race during which “old” values can still be seen in the cache while already updated in the database.

Read/Write

Read/Write (or strict read/write) tries to account for the shortcomings of the non-strict approach. It does so by implementing a “soft-lock” mechanism, locking entries in the cache at a per-entry granularity as they are mutated. Only after the transaction has successfully committed will these locks be released, installing the appropriate value in the cache. Using this caching strategy, Hibernate will lock entries on flushes to the database. Every other transaction accessing a locked entry will consider it a cache miss and hit the database instead. What’s nice about this strategy is that on contention it lets the database handle the concurrent access, meaning that the isolation level provided is resolved by the database, as if there was no cache at all.
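
A hypothetical sketch of the soft-lock idea (not Hibernate’s actual classes): while an entry is being mutated its cache slot holds a lock marker instead of a value, so readers treat it as a miss and fall through to the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy soft-locking cache: a slot is either a value or the SOFT_LOCK marker.
final class SoftLockingCache {
    private static final Object SOFT_LOCK = new Object();
    private final Map<Object, Object> slots = new ConcurrentHashMap<>();

    Object get(Object key) {
        Object slot = slots.get(key);
        return slot == SOFT_LOCK ? null : slot; // a locked slot reads as a miss
    }

    void lock(Object key) {                       // on flush to the database
        slots.put(key, SOFT_LOCK);
    }

    void unlockAndPut(Object key, Object value) { // after successful commit
        slots.put(key, value);
    }
}
```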

Now this seems perfectly reasonable… Yet, there is one shortcoming to this strategy as well: it stores these soft-locks in a cache. A cache that could evict (or expire) them. The absence of a lock for an entry can result in stale data being present in the cache or (as with non-strict) in uncommitted state being exposed to other transactions.

Transactional

Transactional deals with all these limitations. It basically expects your cache to be a full-blown JTA XAResource that can be modified alongside the database (meaning you do need to configure both to use the same isolation level). Yet you also pay the price of two-phase commit. Also, full XA compatibility requires recovery support, which for a Cache might be overkill (the data it holds is merely a copy, and your application could potentially work with a cold, i.e. empty, cache).

From a distributed perspective

When you move to a distributed environment (i.e. multiple application nodes hitting the same database), your cache needs to be kept in sync across all these nodes. While enabling Terracotta clustering with Ehcache is again only a couple of lines of config, how you configure your clustered cache becomes important.

Terracotta basically provides two consistency models for clustered caches: Strong and Eventual. In Strong, you basically get JMM guarantees across your cluster. Our Hibernate caching strategies take care of everything needed to provide you with proper visibility semantics. With Eventual consistency though, things get slightly more complex for the user to understand (yet it provides better performance, in terms of both read and write throughput).

Besides consistency, Terracotta lets you configure the non-stop behavior of clustered caches (what paragraph on distributed systems would be complete without mentioning CAP?!): if a certain operation can’t complete within a given configured time, you have multiple options. In the following paragraphs, when referencing non-stop, we will be talking about any behavior that isn’t failure (i.e. favoring A over C).

Read-only

That’s again the easiest one to reason about: since the data never mutates, there is nothing to worry about in terms of isolation. Yet, in terms of visibility, multiple nodes could be putting the same value into the cache, resulting in more database hits than you would expect. Say you have a list of countries in such a cache, configured to hold all countries and never expire or evict them. This is a very common pattern for reference data that you want to keep close to your application. It could well be that some countries get loaded and put into the cache from multiple nodes, especially under very high initial load.

The same is true in the face of partitioning with non-stop configured: at worst you’ll see more database hits than you’d expect. But since the data is immutable, there is never any stale state…

Non-strict Read/Write

While this mode will also work fine with eventual consistency, you’re basically widening the race for inconsistent data making it into the cache. How much of a problem that is for your application is very use-case dependent. Invalidations are propagated asynchronously to all nodes, which keeps outdated values available slightly longer on the nodes that haven’t performed the mutation. But since data is only ever populated from the database, when it is populated it’s always with the latest state.

Non-stop here can result in even larger races. Say an invalidation is ignored during a partition (noop-configured non-stop caches): that data will remain in the cluster until the next mutation happens. Whether this situation is acceptable for your application is again up to you…

Read/Write

This is where things get more interesting! While it may sound like this strategy would be as acceptable as the other two so far, it actually isn’t, mainly because of the soft-locks. As reads remain mostly unlocked, we can’t provide Hibernate with the guarantees it expects from such a strategy (see Hibernate 3 vs. 4 below).

As explained for non-strict, with non-stop caches you could end up with stale locks in the cache, basically rendering the whole strategy useless. Long story short: don’t use this strategy with a distributed cache that isn’t strongly consistent.

Transactional

In Terracotta land, this one is actually surprisingly easy. As you need your cache to be a proper XAResource, Ehcache will not let you configure anything non-sensible here. That is, Ehcache will only let you use an XA transactional cache with strong consistency. And here the only sensible behavior in the face of partitioning is to fail, and as such to remain consistent.

One last thing… or two

Optimistic locking to the rescue

In order to deal with stale state (which can happen even without a second-level cache, your session being the first-level cache already), you can implement an optimistic locking strategy within your data model. As a result though, some layer(s) of your application will either have to deal with OptimisticLockingException (e.g. you tried to update the salary of Alex at version 12, but that’s no longer the version present in the database at flush time) or have your users deal with it… The latter probably not sounding too good, it is worth understanding what corruption your data is exposed to in any given deployment (with or without a second-level cache, be it distributed or not).
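
A hypothetical sketch of the mechanism on a single row (class and field names made up): an update only succeeds when the version it read is still the current one, mirroring “UPDATE … SET version = version + 1 WHERE id = ? AND version = ?” semantics:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy optimistic-locking row: version moves forward on every committed update.
final class VersionedRow {
    static final class State {
        final int version;
        final long salary;
        State(int version, long salary) { this.version = version; this.salary = salary; }
    }

    private final AtomicReference<State> current = new AtomicReference<>(new State(0, 0));

    State read() { return current.get(); }

    // Returns false, the OptimisticLockingException case, when the row moved on.
    boolean update(State expected, long newSalary) {
        return current.compareAndSet(expected, new State(expected.version + 1, newSalary));
    }
}
```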

Hibernate 3 vs. 4

In Hibernate 4, the caching provider can actually implement some of the behavior of the strategy itself. In Hibernate 3, nothing like that was feasible. As a result, there are smarter things (to a still limited extent though) we can do about handling these different modes in a clustered environment. Yet one main problem remains: Read/Write is a finite state machine which is not implementable atop a weakened consistency model. Also, as of today, Hibernate doesn’t try to handle cache “failures”, which is probably a fair thing to do, but it forces you to understand these quirks.

In conclusion

Hibernate tries as best as possible to hide the complexity of the caching layer from the user. Yet there is only so much it can do about it. Yes, it enables users to easily plug in a cache without much further thought. But as your application’s deployment complexity grows, you might be forced to revisit that initial strategy so that it deals with the oddities of distributed systems in a way acceptable both to the domain and to its users.


Users do not read warn or error messages! So comments?! Forget it!

As an engineer working on libraries, and someone caring about APIs in general, I always struggle with writing proper error messages. Worse still are warnings: how do you best convey to someone using your library that they might be doing something “suboptimal”? It might even be fine, but most probably… not.

One way we do this in Ehcache is to warn the user, when bootstrapping the CacheManager, about “inconsistencies” in their configuration. While those won’t stop your application from starting, the reason for a warning might actually cause your application to perform poorly. If your application relies on the cache to sustain the load (e.g. your database won’t be able to support peak traffic), such a “mis-config” could turn out to be lethal at the worst moment imaginable: peak traffic. If you’re lucky though, it will make your app perform poorly immediately. At which point, we hope, you’ll look through the logs!

But users don’t read these messages!

Not only do they not read them, but even when they see them, they don’t READ them! So whatever you put in those messages never seems to be good enough! Even when the message is the last thing the user sees before everything stops working!

One day as a user

This morning, I woke up to a mail about a web application I wrote some 7 years ago being down… I log onto the machine and indeed see httpd isn’t running. Well, that explains it all, easy one! But restarting httpd just doesn’t work… wtf?!

So I go look at the logs, and there it is:

[crit] (22)Invalid argument: mod_jk: Could not set permissions on jk_log_lock; check User and Group directives

But how did that happen now, all of a sudden?! This app has been running for years… and has been restarted many times (but that’s another story)! Never an issue… EVER! Permissions on the file and directory are fine… What happened?! Double-checking the config, all seems fine. It’s using the proper locations and all…

Google to the rescue

I quickly find a bug report with the same error message on ASF’s Bugzilla: Bug 39914. Well, that was easy… but wait. Why now? Well, as it turns out, the server uses netboot and inherited a brand new kernel! That must be it! Somehow that must have triggered it. Even though that bug, and the one linked from it, shouldn’t actually affect my version, let’s see… upgrading never hurts, I guess!

autoconf / automake fun !

Downloading the latest httpd and the latest Tomcat connector. I link mod_jk statically into the httpd process, so I might just as well upgrade httpd, right? And off I go…

  1. Crap! The native connector won’t install… But I’m doing as the doc says!
  2. Ah! Right… I remember, need to configure httpd first!
  3. Connector installed in httpd’s source dir
  4. Re-configure httpd, make && sudo make install
  5. No!… error configuring
  6. Oh right, some macro didn’t get replaced; no problem, I can fix that
  7. Done… All installed!
  8. Start it…
  9. … fails! SAME F$%^&ING ISSUE!!!

Alright, stay calm… It’s all good…

Stepping back for a second

So what’s busted here again?

mod_jk: “Could not set permissions on jk_log_lock; check User and Group directives”

Check what…?! The User and Group directives? Right, maybe that changed (this is where you are thinking: how the heck is that even supposed to change by itself, you idiot?!)…

User nobody
Group #-1

Nah… that’s what I expected. Or is it?

Awesome! The config template I used to set this up was well documented, so a couple of lines above I read:

# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
# User/Group: The name (or #number) of the user/group to run httpd as.
# . On SCO (ODT 3) use “User nouser” and “Group nogroup”.
# . On HPUX you may not be able to use shared memory as nobody, and the
# suggested workaround is to create a user www and use that user.
# NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET)
# when the value of (unsigned)Group is above 60000;
# don’t use Group #-1 on these systems!

Some kernels do what?! Well… duh! Explicitly setting the group then… I guess…
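
The fix, then, was simply to make the group explicit, e.g. (assuming a nobody group exists on the system):

```
User nobody
Group nobody
```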

  1. Starting apache…
  2. … all up and running!

WTF?! Maybe these guys could have been a little clearer and told me to… Oh, wait. That’s exactly what they did… Stupid users!

Users don’t read these messages … indeed!

Or many don’t, at least! And I’m one of them… I’ve seen it many times while helping users use our software properly… Generally, all that gets read is the stacktrace; the exception’s message, the one I tried to craft as well as I could, so that it’s clear yet concise, is just ignored… Stupid users! All of them! But we’re all someone else’s user…
