Install JBoss 4.2 on Centos/RHEL 5

July 1st, 2009

I was recently tasked with installing JBoss 4.2 on Centos/RHEL 5. I found the experience remarkably difficult, so I figured I should share it for my own future reference, and hopefully to also save the sanity of whatever other poor souls are tasked with the same project.

  1. Start off with RHEL 5 or Centos 5.
  2. Install jpackage50.repo into /etc/yum.repos.d/ (instructions for how to make this file can be found at jpackage.org).
  3. Run “yum update”.
  4. Run “yum install jbossas”.
  5. If you see the message “--> Missing Dependency: /usr/bin/rebuild-security-providers”, download jpackage-utils-compat-el5-0.0.1-1.noarch and install it using “rpm -i jpackage-utils-compat-el5-0.0.1-1.noarch.rpm”, then run “yum install jbossas” again. See this bug at Red Hat for details, and http://www.zarb.org/pipermail/jpackage-discuss/2008-July/012751.html for how the rpm was built.
  6. Run “/sbin/chkconfig jbossas on” to start JBoss automatically at startup.
  7. Until this bug is resolved, run this command: “ln -s /usr/share/java/eclipse-ecj.jar /usr/share/java/ecj.jar”.
  8. If you want JBoss to listen to requests from systems other than localhost, edit /etc/jbossas/jbossas.conf and add a new line that reads “JBOSS_IP=0.0.0.0”.
  9. Put your .ear’s, .war’s, EJB .jar’s, and *-ds.xml’s into /var/lib/jbossas/server/default/deploy.
  10. Start JBoss by running “/etc/init.d/jbossas start”.

JVM args can be found in /etc/jbossas/run.conf.

Note that if your web application (war, ear, whatever) depends on JNDI, you need to edit /var/lib/jbossas/server/default/deploy/jboss-web.deployer/META-INF/jboss-service.xml and add a line for each JNDI data source like this: “<depends>jboss.jca:service=DataSourceBinding,name=jdbc/whatever</depends>”. This little detail cost me quite a few hours to figure out… an explanation as to why this is necessary can be found at http://confluence.atlassian.com/display/DOC/Known+Issues+for+JBoss. Basically, JBoss will start applications before JNDI data sources unless told otherwise, so your application will error out on startup with an exception like this: “Caused by: javax.naming.NamingException: Could not dereference object [Root exception is javax.naming.NameNotFoundException: jdbc not bound]”.
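
For context, this is the kind of lookup that fails (a sketch – the JNDI name is hypothetical, matching the “jdbc/whatever” example above):

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static DataSource lookup() throws Exception {
        // “java:/jdbc/whatever” is where JBoss binds a *-ds.xml jndi-name of “jdbc/whatever” (hypothetical name);
        // if the application starts before the DataSource is deployed, this call is what
        // dies with “javax.naming.NameNotFoundException: jdbc not bound”
        return (DataSource) new InitialContext().lookup("java:/jdbc/whatever");
    }
}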

Some may argue that I should have simply downloaded the tar from jboss.org and manually installed JBoss without a package manager. However, the package manager offers a lot of advantages, such as dependency resolution/management, automatic updates for security and/or new features, clean and easy uninstall, and a lot more. When given the choice, I always choose to use a package manager, and will even create packages if ones are not available, and I report package bugs so others, and my future self, will have a better experience.

A lot of the pain in installing JBoss is due to bugs in the packaging. I hope that jpackage.org / Red Hat solves these problems soon – I wouldn’t really want anyone to have to live through the trouble I went through to figure all this out again.


Compression (deflate) and HTML, CSS, JS Minification in ASP.NET

May 22nd, 2009

As I’ve already demonstrated, I like performance. So I cache and compress a lot. When I was put onto an ASP.NET project at work, I obviously wanted to optimize the site, so here’s what I did.

Taking some hints from Y! Slow, I decided I wanted to:

  • Get rid of all the MS AJAX/toolkit javascript, as we used jQuery instead
  • Combine all the javascript into one request
  • Combine all the CSS into one request
  • Minify the CSS
  • Minify the javascript
  • Minify the HTML
  • Deflate everything (gzip is slightly larger, and all modern browsers support deflate, so I just ignored gzip)

I followed the directions outlined at this site to override the ScriptManager and prevent it from including the Microsoft AJAX javascript. Removing unused code is always a good thing.

Combining the javascript was easy: starting in ASP.NET 3.5 SP1, the ScriptManager supports the CompositeScript element, which combines all the referenced scripts into one request.

Combining the CSS was not so easy, as ASP.NET has no CSS equivalent of the ScriptManager. I had two options: make a CSS manager (and use it everywhere), or figure out another way. Never taking the easy route when there’s a more interesting (and more transparent to front-end developers) way, I decided to make a filter (an implementer of IHttpModule) to find all the “<link>” tags in the page header and replace them with one “<link>” to a combined CSS handler (which I called “CssResource.axd” to parallel ScriptManager’s “ScriptResource.axd”). Then, in my IHttpHandler implementation which handles CssResource.axd, I read the querystring, grab the requested CSS files from the file system, combine them into one string, and return them. CSS combining done.

For minifying the CSS and Javascript, I used the C# version of YUI Compressor. I used the original (Java) YUI Compressor before, and had a great experience, so picking this version was a no-brainer. In my aforementioned filter, I intercept requests for “ScriptResource.axd” and “CssResource.axd,” apply YUI Compressor to the response content, cache the result (so I don’t need to minify every single request), then return.

I also minify inline (as in, mixed with the HTML) CSS and Javascript. Also in my filter, if the return type is HTML, I scan for “<script>” and “<style>” blocks and minify their contents. This minification does have to happen on every request to that page, unless the whole page is cached.

Finally, the last thing the filter does is check whether the browser accepts deflate compression. If it does, the filter compresses the stream. In the case of “ScriptResource.axd” and “CssResource.axd” requests, the deflating is done before the response is cached, so those resources don’t need to be re-deflated on every request (their content is static, unlike regular HTML responses, so caching the whole response is okay).

The initial (cache empty) page load was 780k before I started. When I had finished, the page load was only 234k – a 70% decrease.

You can download the code from this site. To use it, you need to modify your web.config.

<system.web>
  <httpModules>
    <!-- This must be the last entry in the httpModules list -->
    <add type="CompressionModule" name="CompressionModule" />
  </httpModules>
  <httpHandlers>
    <add verb="GET,HEAD" path="CssResource.axd" validate="false" type="CssResourceHandler"/>
  </httpHandlers>
</system.web>

I cannot claim 100% credit for all of this work. I got many ideas from browsing web search results, trying things out, and combining ideas from various sources. If I have not credited you, and I should have – I apologize, and will be happy to do so. But I can say that I did not just “copy and paste” this from anywhere – I’m confident that this work cannot be classified as a derived work of anything else. With that in mind, I release it into the public domain.


Hibernate Deep Deproxy

March 16th, 2009

A common problem with ORMs that use lazy loading is that the objects they return (obviously) contain lazy loading references, so you need a live ORM session to access the referenced objects. For example, if you have a “Person” class that contains a “mother” property, when you call “person.getMother()”, the ORM gets the mother from the database at that moment – not when the person is initialized.
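
To make that concrete, the mapping might look like this (a hypothetical entity using standard JPA annotations):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Person {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // lazy: Hibernate hands back a proxy here; the mother row is only fetched
    // when the proxy is first touched, which requires a live session
    @ManyToOne(fetch = FetchType.LAZY)
    private Person mother;

    public String getName() { return name; }
    public Person getMother() { return mother; }
}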

Lazy loading is great, because it means you don’t load a huge amount of data when you really just want one object (say you just want the person’s name; with lazy loading, the person’s mother is never retrieved). However, when you want to do caching, lazy loading can be a serious problem.

For example, let’s say I have a method I call a lot – “personDao.findAll()”. I’d like to cache this entire method, so I don’t need to hit the database or the ORM at all, so I use something like an aspect to do declarative caching on that method. On the second and subsequent calls, the returned list of persons won’t have sessions attached (as they were attached to the first caller’s session, which is long gone), so they can’t load their lazy references, and you end up with the famous LazyInitializationException. If you know the list of people isn’t too big, and that it doesn’t refer to too many other objects, you can remove the lazy proxies and load everything at once – then cache that result. But be careful – with deep deproxying, every object that is referred to will be loaded, so if you’re not careful, you can load the entire database, which results in either a loss of performance (from using all the memory) or an immediate error.

Here’s how I do deep deproxying with Hibernate. I’ve read about many techniques for this, but this approach works better than anything else I’ve found so far.

import java.beans.PropertyDescriptor;
import java.lang.reflect.Array;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.commons.beanutils.PropertyUtils;
import org.hibernate.Hibernate;
import org.hibernate.proxy.HibernateProxy;
import org.hibernate.proxy.LazyInitializer;

// T is the entity type handled by the enclosing DAO/helper class
public T deepDeproxy(final Object maybeProxy) throws ClassCastException {
    if (maybeProxy == null) return null;
    return deepDeproxy(maybeProxy, new HashSet<Object>());
}

private T deepDeproxy(final Object maybeProxy, final HashSet<Object> visited) throws ClassCastException {
    if (maybeProxy == null) return null;
    Class clazz;
    // force Hibernate to load the object's state from the database
    Hibernate.initialize(maybeProxy);
    if (maybeProxy instanceof HibernateProxy) {
        HibernateProxy proxy = (HibernateProxy) maybeProxy;
        LazyInitializer li = proxy.getHibernateLazyInitializer();
        clazz = li.getImplementation().getClass();
    } else {
        clazz = maybeProxy.getClass();
    }
    T ret = (T) deepDeproxy(maybeProxy, clazz);
    if (visited.contains(ret)) return ret; // don't loop forever on cyclic object graphs
    visited.add(ret);
    for (PropertyDescriptor property : PropertyUtils.getPropertyDescriptors(ret)) {
        try {
            String name = property.getName();
            // skip the "owner" back-reference and read-only properties
            if (!"owner".equals(name) && property.getWriteMethod() != null) {
                Object value = PropertyUtils.getProperty(ret, name);
                boolean needToSetProperty = false;
                if (value instanceof HibernateProxy) {
                    value = deepDeproxy(value, visited);
                    needToSetProperty = true;
                }
                if (value instanceof Object[]) {
                    Object[] valueArray = (Object[]) value;
                    // use the component type, not the array type, when creating the new array
                    Object[] result = (Object[]) Array.newInstance(value.getClass().getComponentType(), valueArray.length);
                    for (int i = 0; i < valueArray.length; i++) {
                        result[i] = deepDeproxy(valueArray[i], visited);
                    }
                    value = result;
                    needToSetProperty = true;
                }
                if (value instanceof Set) {
                    Set valueSet = (Set) value;
                    Set result = new HashSet();
                    for (Object o : valueSet) {
                        result.add(deepDeproxy(o, visited));
                    }
                    value = result;
                    needToSetProperty = true;
                }
                if (value instanceof Map) {
                    Map valueMap = (Map) value;
                    Map result = new HashMap();
                    for (Object o : valueMap.keySet()) {
                        result.put(deepDeproxy(o, visited), deepDeproxy(valueMap.get(o), visited));
                    }
                    value = result;
                    needToSetProperty = true;
                }
                if (value instanceof List) {
                    List valueList = (List) value;
                    List result = new ArrayList(valueList.size());
                    for (Object o : valueList) {
                        result.add(deepDeproxy(o, visited));
                    }
                    value = result;
                    needToSetProperty = true;
                }
                if (needToSetProperty) PropertyUtils.setProperty(ret, name, value);
            }
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        } catch (InvocationTargetException e) {
            e.printStackTrace();
        } catch (NoSuchMethodException e) {
            e.printStackTrace();
        }
    }
    return ret;
}

private <T> T deepDeproxy(Object maybeProxy, Class<T> baseClass) throws ClassCastException {
    if (maybeProxy == null) return null;
    if (maybeProxy instanceof HibernateProxy) {
        return baseClass.cast(((HibernateProxy) maybeProxy).getHibernateLazyInitializer().getImplementation());
    } else {
        return baseClass.cast(maybeProxy);
    }
}
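
Here’s roughly how it fits the cached findAll() example from above (personDao and cache are hypothetical stand-ins for whatever DAO and cache you already use):

List<Person> people = personDao.findAll();
List<Person> detached = new ArrayList<Person>();
for (Person p : people) {
    detached.add(deepDeproxy(p)); // fully loaded – safe to cache and read without a session
}
cache.put("personDao.findAll", detached);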

EhCache implementation of OpenJPA caching

March 12th, 2009

I usually use Hibernate, which supports a number of caching implementations (such as EhCache, OSCache, JBoss Cache, etc). My most recent project had a dependency on a product which has a dependency on OpenJPA, and OpenJPA only has its own built-in implementations of a query cache and a data cache. I like to have one caching implementation in my project, so having two (OpenJPA’s for itself, and EhCache for everything else) annoyed me. So I had to fix it.

I started with Pinaki Poddar’s implementation of a Coherence-backed OpenJPA data cache. I changed it to use EhCache, adjusted the unit tests, and then added a query cache implementation. To use it, add a dependency on the openjpa-ehcache module, then set OpenJPA’s “openjpa.QueryCache” to “ehcache” and “openjpa.DataCacheManager” to “ehcache”. That’s it!
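
Wired up programmatically, that looks something like this (a minimal sketch – the persistence unit name is hypothetical, and the same two properties can live in persistence.xml instead):

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaEhcacheBootstrap {
    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        // tell OpenJPA to use the EhCache-backed cache implementations
        props.put("openjpa.DataCacheManager", "ehcache");
        props.put("openjpa.QueryCache", "ehcache");
        return Persistence.createEntityManagerFactory("myPersistenceUnit", props);
    }
}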

The code can be compiled with Maven. Simply run “mvn install”.

My code, like EhCache and OpenJPA, is licensed under the Apache Public License 2.0. Get it here.


One HTTPS site per IP address… or may be not?

February 26th, 2009

I randomly ran across SNI (aka RFC 4366) tonight. It’s a technology, under development since before 2000, that allows the client to tell the server which hostname it’s requesting before the server sends its certificate. The history is fascinating!

The situation today is that SNI is not here yet. OpenSSL will support it starting in 0.9.9, but has it as a compile-time option (disabled by default) as of 0.9.8f. Apache may support it in its next minor release (2.2.12), or maybe not… at least it’s in their trunk, so it will be released someday. I just installed the SNI patch on my Apache 2.2.11 server, and I’m going to try it out. IIS has no stated plan to support it or not. The other popular servers, like Cherokee, lighttpd, and nginx, support it today.

But, as usual, browser support is the limiting factor:

Internet Explorer needs *Vista* to use SNI, so given that IE6 still has a decent market share, and it’s 8 years old… it’s going to be at least 2017 before we can reliably host multiple HTTPS sites on the same IP address – and who knows about embedded browsers (like those in cell phones and PDAs). Perhaps using one IPv6 address per HTTPS site will be more practical before SNI is widely available… who knows.


Why would a cache include cookies?

February 25th, 2009

Ehcache’s SimplePageCachingFilter caches cookies. And that baffles me… why would a cache include cookies in it?

I ran into the interesting situation where servlets, interceptors, and all those other Java goodies were writing cookies holding things like the current user’s identifier, so the site could track that user and keep track of his shopping cart. The problem, which is obvious in retrospect but was incredibly puzzling at first, was that the cookies that included the user id were being cached, so when a subsequent user hit that page, he got the original requester’s user id, and everything that implied (like his cart).

Since each page is cached separately and at separate times, and there is more than one user on the site, visitors would see their carts changing, items seemingly appearing and disappearing randomly, and other such fun. For example, if Alice happened to hit the home page when its cache was expired, her user id cookie ended up in the home page cache. Then Bob comes along and hits the accessories page when its cache has expired, so his user id cookies ends up in that page’s cache. Finally, Charles visits the home page, and sees Alice’s cart. Then, he goes to the accessories page, and sees Bob’s cart. It’s just an incredibly weird and confusing situation!

I’ve been wracking my brain on the topic of caching cookies – when would it be useful? Cookies, as far as I can imagine (and have experienced), contain only user-unique information – so why would you cache them?

To solve this problem, I extended SimplePageCachingFilter and overrode the setCookies method, having it be a no-op. And I filed a bug report with Ehcache.
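
The whole fix is essentially this (the subclass name is mine, and it assumes the setCookies signature from the Ehcache version I was using):

import javax.servlet.http.HttpServletResponse;

import net.sf.ehcache.constructs.web.PageInfo;
import net.sf.ehcache.constructs.web.filter.SimplePageCachingFilter;

public class NoCookiesPageCachingFilter extends SimplePageCachingFilter {
    @Override
    protected void setCookies(final PageInfo pageInfo, final HttpServletResponse response) {
        // deliberately a no-op: cookies captured in the page cache must never
        // be replayed to other users
    }
}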

Apache’s mod_cache will include cookies in its cache too. But, in their documentation, they specifically point out cookies in their example of how to exclude items from the cache. It seems Apache knows including cookies is a bad idea… perhaps excluding them should be the default?


One instance at a time with PID file in Bash

February 16th, 2009

Oftentimes, I only want a script to run one instance at a time. For example, if the script is copying files, or rsync’ing between systems, it can be disastrous to have two instances running concurrently, and this situation is definitely possible if you run the script from cron.

I figured out a simple way to make sure only one instance runs at a time, and it has the added benefit that if the script dies midway through, the next instance will still be able to start – stale locks are the big drawback of using lock files without a PID.

Without further ado, here’s my script:

#!/bin/bash
pidfile=/var/run/sync.pid
if [ -e "$pidfile" ]; then
    pid=$(cat "$pidfile")
    # kill -0 sends no signal; it just checks whether the process exists
    if kill -0 "$pid" > /dev/null 2>&1; then
        echo "Already running"
        exit 1
    else
        # stale PID file left behind by a dead instance
        rm "$pidfile"
    fi
fi
echo $$ > "$pidfile"

#do your thing here

rm "$pidfile"

HTTP Caching Header Aware Servlet Filter

February 14th, 2009

On the project I’m working on, we’re desperately trying to improve performance. One of the approaches taken by my coworkers was to add the SimplePageCachingFilter from Ehcache, so that Ehcache can serve frequently hit pages that aren’t completely dynamic. However, it occurred to me that the SimplePageCachingFilter can be improved by adding support for the HTTP caching headers (namely, ETags, Expires, Last-Modified, and If-Modified-Since). Adding these headers will do two important things:

  1. Allow Apache’s mod_cache to cache Tomcat served pages, so that requests to these pages never even hit Tomcat, which should massively improve performance
  2. Allow browsers to accurately cache, so visitors don’t need to re-request pages after the first visit

Implementing these headers wasn’t terribly difficult – just tedious in that I had to read the relevant HTTP specification.

I sincerely hope that Ehcache picks up this class and adds it to the next version – I imagine that many applications could benefit from this class!

Here’s my class:

/**
*  Copyright 2009 Craig Andrews
*
*  Licensed under the Apache License, Version 2.0 (the "License");
*  you may not use this file except in compliance with the License.
*  You may obtain a copy of the License at
*
*      http://www.apache.org/licenses/LICENSE-2.0
*
*  Unless required by applicable law or agreed to in writing, software
*  distributed under the License is distributed on an "AS IS" BASIS,
*  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*  See the License for the specific language governing permissions and
*  limitations under the License.
*/
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Collection;
import java.util.Date;
import java.util.Iterator;
import java.util.List;
import java.util.Locale;
import java.util.TimeZone;
import java.util.zip.DataFormatException;
 
import javax.servlet.FilterChain;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
import org.apache.commons.lang.StringUtils;
 
import net.sf.ehcache.constructs.web.AlreadyGzippedException;
import net.sf.ehcache.constructs.web.PageInfo;
import net.sf.ehcache.constructs.web.ResponseHeadersNotModifiableException;
import net.sf.ehcache.constructs.web.filter.SimplePageCachingFilter;
 
/**
 * Filter that extends {@link SimplePageCachingFilter}, adding support for
 * the HTTP cache headers (ETag, Last-Modified, Expires, and If-None-Match).
 */
public class HttpCachingHeadersPageCachingFilter extends SimplePageCachingFilter {

    private static final SimpleDateFormat httpDateFormat = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss z", Locale.US);

    static {
        httpDateFormat.setTimeZone(TimeZone.getTimeZone("GMT"));
    }

    // SimpleDateFormat isn't thread-safe, hence the synchronization
    public synchronized static String getHttpDate(Date date) {
        return httpDateFormat.format(date);
    }

    public synchronized static Date getDateFromHttpDate(String date) throws ParseException {
        return httpDateFormat.parse(date);
    }

    @SuppressWarnings("unchecked")
    @Override
    protected PageInfo buildPage(HttpServletRequest request, HttpServletResponse response, FilterChain chain) throws AlreadyGzippedException, Exception {
        PageInfo pageInfo = super.buildPage(request, response, chain);
        if (pageInfo.isOk()) {
            // add Expires and Last-Modified headers
            Date now = new Date();

            List<String[]> headers = pageInfo.getHeaders();

            long ttlSeconds = getTimeToLive();

            headers.add(new String[]{"Last-Modified", getHttpDate(now)});
            headers.add(new String[]{"Expires", getHttpDate(new Date(now.getTime() + ttlSeconds * 1000))});
            headers.add(new String[]{"Cache-Control", "max-age=" + ttlSeconds});
            headers.add(new String[]{"ETag", "\"" + Integer.toHexString(java.util.Arrays.hashCode(pageInfo.getUngzippedBody())) + "\""});
        }
        return pageInfo;
    }

    @Override
    protected void writeResponse(HttpServletRequest request, HttpServletResponse response, PageInfo pageInfo) throws IOException, DataFormatException, ResponseHeadersNotModifiableException {

        final Collection headers = pageInfo.getHeaders();
        final int header = 0;
        final int value = 1;
        for (Iterator iterator = headers.iterator(); iterator.hasNext();) {
            final String[] headerPair = (String[]) iterator.next();
            if (StringUtils.equals(headerPair[header], "ETag")) {
                if (StringUtils.equals(headerPair[value], request.getHeader("If-None-Match"))) {
                    response.sendError(HttpServletResponse.SC_NOT_MODIFIED);
                    // use the same date we sent when we created the ETag the first time through
                    response.setHeader("Last-Modified", request.getHeader("If-Modified-Since"));
                    return;
                }
                break;
            }
            if (StringUtils.equals(headerPair[header], "Last-Modified")) {
                try {
                    String requestIfModifiedSince = request.getHeader("If-Modified-Since");
                    if (requestIfModifiedSince != null) {
                        Date requestDate = getDateFromHttpDate(requestIfModifiedSince);
                        Date pageInfoDate = getDateFromHttpDate(headerPair[value]);
                        if (requestDate.getTime() >= pageInfoDate.getTime()) {
                            response.sendError(HttpServletResponse.SC_NOT_MODIFIED);
                            response.setHeader("Last-Modified", request.getHeader("If-Modified-Since"));
                            return;
                        }
                    }
                } catch (ParseException e) {
                    // just ignore this error
                }
            }
        }

        super.writeResponse(request, response, pageInfo);
    }

    /**
     * Get the time to live for a page, in seconds.
     * @return time to live in seconds
     */
    protected long getTimeToLive() {
        if (blockingCache.isDisabled()) {
            return -1;
        } else {
            if (blockingCache.isEternal()) {
                return 60 * 60 * 24 * 365; // one year, in seconds
            } else {
                return blockingCache.getTimeToLiveSeconds();
            }
        }
    }
}

IPv6 Setup

January 27th, 2009

I received an email today from Jeremy Visser asking me how my blog is set up for ipv6. So intarwebs, here’s how my system works.

The server that hosts this blog is sitting in my living room connected via a router to Comcast cable Internet access. It runs a bunch of services: ejabberd for my xmpp server, postfix with courier for email, apache and PHP (with Suhosin and XCache) hosting WordPress, mysql, and tomcat for whatever random Java webapps I’m playing with at the time.

All of the services are ipv6 enabled. For ejabberd, just add “inet6” to each listen socket. For postfix, apache, and mysql, no additional work was necessary. For tomcat, ipv6 support should work, but I run tomcat so it listens only to localhost, and access it via mod_proxy_ajp from apache, so I haven’t looked at that too much.

The router is an Asus WL-500gp running OpenWrt. Since Comcast doesn’t provide ipv6 addresses to their customers, I use a sixxs.net tunnel via aiccu (my tunnel type is “heartbeat” because my ipv4 address is dynamic). I then use radvd to advertise the ipv6 subnet to the other computers on the lan (such as my laptops and server). You can find a guide for how to get an ipv6 tunnel going on OpenWrt at OpenWrt’s wiki.

Comcast blocks incoming port 25, so I can’t just run my mail server the simple way, unfortunately. To get around this, I have postfix listening for deliveries on a different port (8025), and I use Roller Network‘s “SMTP Redirection” service. I set the MX records for my domain to rollernet’s mail servers, and rollernet’s mail servers deliver the mail to my server on that port. Currently, rollernet’s mail servers do not have AAAA records, but I have asked for this feature, and rollernet is usually pretty awesome, so I bet I’ll have ipv6-enabled incoming mail very soon. For outgoing mail, rollernet’s relay servers (which I have to use, as most mail systems automatically reject mail from residential ISPs as spam) try ipv6 delivery and then ipv4; also, the relay servers do have AAAA records, so my server sends mail to them over ipv6.

Rollernet also provides ipv6-enabled DNS services. I simply set the nameservers at my registrar to point to rollernet (rollernet’s nameservers are ipv6 enabled, which, from what I understand, is rare), and started adding DNS records. I added AAAA records for my server, so clients without ipv4 connectivity can reach my web site entirely over ipv6. The same applies for XMPP.

I have certainly learned a lot about networking from my ipv6 experiment. First, most applications already support ipv6, and need no additional configuration, or at most, very little. Second, setting up an ipv6 server is not terribly hard; just get a tunnel, get a subnet, and advertise it. Third, finding a DNS hosting service that is ipv6 enabled is very difficult – rollernet was the only one I could find when I started my search last year – I hope that situation improves soon. Of course, even if ipv6 DNS services do proliferate, I’m not switching away from rollernet, as their customer service is great, their technical ability is outstanding, and their pricing is unbelievably good. But that’s a story for another post.

And finally, I’m working on the Hurricane Electric IPv6 Certification.

IPv6 Certification Badge


Preventing java.lang.OutOfMemoryError: unable to create new native thread

January 27th, 2009

Artifactory is a Maven repository manager that does a lot of really useful things, such as caching maven artifacts (which saves external bandwidth and makes downloads faster) and storing our own artifacts (such as 3rd party software that can’t be in a public maven repository for licensing reasons). I recently upgraded from Artifactory 1.2 to 2.0.0, which was pretty painless.

The problems appeared the next day, when the application died with “java.lang.OutOfMemoryError: unable to create new native thread.” I tried a lot of things, but what worked was reducing the stack size. The JVM has an interesting implementation, the design of which I don’t completely understand, but the gist is that the heap and the thread stacks share the same address space: the more memory allocated for the heap (not necessarily used by the heap), the less memory is available for stacks. Since every thread gets its own stack, more “memory” in the heap sense (which is usually what people talk about) means fewer threads can run concurrently.

To reduce the stack size, add “-Xss64kb” to the JVM options. I suggest you start with 64k, try the application, then if it doesn’t work (it will fail with a java.lang.StackOverflowError), increase the stack to 128k, then 256k, and so on. The default stack size is 8192k, so there’s a wide range to test.

Also, on Linux, you’ll need to set the Linux thread stack size to the same value as the JVM stack size to get full benefits. To do that, use “ulimit -s <size in kb>”. In my case, Artifactory works great with a 128kb stack, so I used “-Xss128kb” for the JVM options and “ulimit -s 128” to set the Linux stack size. Note that the stack size applies per user, so you have to modify the init script, or edit /etc/security/limits.conf (on Debian/Ubuntu at least).
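
If you want to see the thread/stack tradeoff for yourself, here’s a quick throwaway test (my own sketch, nothing to do with Artifactory) – run it with different -Xss values and compare the counts:

public class ThreadLimitTest {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park forever
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            // e.g. compare “java -Xss128k ThreadLimitTest” with “java -Xss1m ThreadLimitTest”
            System.out.println("Created " + count + " threads before: " + e);
        }
    }
}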
