Coal to Cryptocurrency: Mining Remains a Threat

November 30th, 2017

Coal was the fuel that powered the Industrial Revolution, bootstrapping the modern age as we know it. Acquiring it was simple, using it was easy, and it got the job done. Coal was the perfect resource. Back in those days, efficiency and cleanliness weren’t concerns because of ecological immaturity (society didn’t know any better) and scale (industry wasn’t big enough to impact the world sufficiently to raise concerns).

Cryptocurrency mining is today’s coal mining, and it’s time to start considering alternative solutions.

With any currency (traditional or cryptographic), a few constraints must be in place: a unit of currency cannot be spent more than once (no “double spending”), transactions must complete in a timely manner, and everyone must agree when a transaction is complete. With traditional paper money, it’s clear how all of these constraints are satisfied: counterfeiting is made difficult by secure notes and strongly discouraged by legal means, the transaction completes when physical possession of the note is transferred, and all parties can look at their physical possession of notes to determine a transaction’s state.

Implementing these constraints digitally is more difficult than with physical items. Bitcoin, the world’s first cryptocurrency, used the best solutions available at the time: a blockchain with “proof of work.” Bitcoin uses a series of blocks, appropriately referred to as a blockchain, that forms a ledger recording the state of every bitcoin since the currency’s inception. Each block records the movement of a number of bitcoins between owners: the transactions. For a block to be valid, it can include each coin at most once (to prevent double spending); it must include the unique identity (the hash) of the previous block; and it must include the solution to a difficult math problem: a nonce that gives the block a cryptographic hash meeting a difficulty target. The process of solving these problems to form valid blocks is known as mining, and those who do so are called miners.

Solving these mining challenges takes hardware, infrastructure, cooling, and the electricity to keep it all going. To incentivize the block discovery process, the system rewards the miner with a predetermined amount of currency. To satisfy the need for timely transactions, each transaction includes a fee awarded to the miner; a miner therefore wants to include as many transactions as possible in a block in order to collect the greatest amount in fees. Once a block has been mined, it’s shared publicly so anyone can verify that there was no double spending and that the cryptographic hash is valid. Miners only mine new blocks on top of valid ones.
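To make the “difficult math problem” concrete, here’s a toy proof-of-work miner in Java. This is a sketch, not bitcoin’s real algorithm (bitcoin double-hashes a binary block header against a numeric difficulty target); the class name, block data, and “leading zero hex digits” difficulty measure are all illustrative assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ProofOfWorkDemo {

    // Hex-encoded SHA-256 of the input string.
    public static String sha256Hex(String input) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Try nonces until the block's hash starts with `difficulty` zero hex digits.
    // Each extra digit multiplies the expected work by 16 - this is why mining is expensive.
    public static long mine(String blockData, int difficulty) throws NoSuchAlgorithmException {
        String target = "0".repeat(difficulty); // requires Java 11+
        long nonce = 0;
        while (!sha256Hex(blockData + nonce).startsWith(target)) {
            nonce++;
        }
        return nonce;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Hypothetical block contents: the previous block's hash plus some transactions.
        String blockData = "prevHash|alice->bob:5|bob->carol:2";
        long nonce = mine(blockData, 4);
        System.out.println("nonce=" + nonce + " hash=" + sha256Hex(blockData + nonce));
    }
}
```

Note the asymmetry: verifying a block takes one hash, while finding the nonce takes (on average) many thousands. That’s what lets anyone cheaply check a mined block, and it’s also exactly the energy expense this post is about.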

As cryptocurrencies grow more valuable, the mining rewards grow as well, making mining increasingly lucrative. This draws in more miners, which, in turn, uses more energy. As of November 2017, each bitcoin transaction uses as much energy as the average American house consumes in a week. Furthermore, if bitcoin were a country, it would rank as the world’s 69th highest energy consumer.

Just as coal was a great way to bootstrap industry, proof of work has done a great job bootstrapping cryptocurrencies. But neither coal nor proof of work is a viable path forward; they’re simply too polluting. So what are the solar panel and wind turbine analogues for cryptocurrency?

One system is proof of stake. At a high level, this system limits each miner’s output in proportion to the share of the total currency that miner owns. For example, if there are 200 units of currency in total and a miner owns 10 units, that miner may only contribute 5% of the mining power. In this way, there’s no race for miners to acquire massive computational resources. This system has other advantages over proof of work as well, including avoiding the 51% attack problem. Ethereum, the second largest cryptocurrency by market capitalization, is currently in the process of switching from proof of work to proof of stake. Ark, Dash, and Neo are examples of cryptocurrencies currently using proof of stake variants.
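The stake-proportional idea can be sketched in Java: each account owns a slice of the interval [0, 1) sized by its share of the total stake, and a draw from that interval picks the next block producer. Real proof of stake schemes are considerably more involved; the class and account names here are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StakeSelector {

    // Returns the account whose cumulative stake interval contains r (0 <= r < 1).
    // An account holding 5% of all currency is therefore picked ~5% of the time.
    public static String select(Map<String, Long> stakes, double r) {
        long total = stakes.values().stream().mapToLong(Long::longValue).sum();
        double cumulative = 0;
        String last = null;
        for (Map.Entry<String, Long> entry : stakes.entrySet()) {
            cumulative += (double) entry.getValue() / total;
            last = entry.getKey();
            if (r < cumulative) {
                return entry.getKey();
            }
        }
        return last; // guard against floating point rounding at r ~ 1.0
    }

    public static void main(String[] args) {
        Map<String, Long> stakes = new LinkedHashMap<>();
        stakes.put("alice", 10L);  // 10 of 200 units: 5% of the stake
        stakes.put("bob", 190L);   // the remaining 95%
        // In practice r would come from a shared, unbiasable source of randomness.
        System.out.println(select(stakes, 0.02)); // lands in alice's 5% slice
    }
}
```

The selection is a table lookup rather than a hash race, which is why no warehouse of mining rigs is needed.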

Another system is known as “the tangle,” currently used only by the IOTA cryptocurrency. The tangle’s alternative methodology provides many advantages over blockchain, including zero transaction fees, no miner energy expenditure, and greater decentralization. However, analogous to alternative energy sources in days past, the tangle today is not as proven, researched, tested, or well understood as blockchain systems are.

In this modern age of global climate change, the world needs to abandon proof of work systems. With their energy expenditures exceeding that of most countries, the cost to the environment is simply too great to continue down this path, especially since alternatives already exist. Like modern industry’s move away from familiar, reliable coal, it’s time for the cryptocurrency community to move on from proof of work to better, more responsible solutions.

Categories: Uncategorized Tags:

Log4jdbc Spring Boot Starter

March 27th, 2017

Logging SQL as it’s executed is a fairly common desire when developing applications. Perhaps an ORM (such as Hibernate) is being used, and you want to see the actual SQL being executed. Or maybe you’re tracking down a performance problem and need to know if it’s in the application or the database, so step #1 is finding out what query is executing and for how long.

Solving this problem once and for all (at least for Spring Boot applications), I created Log4jdbc Spring Boot Starter. It’s a very simple yet powerful way to log SQL queries (and more, such as timing information). And unlike other solutions, the queries logged are ready to run – the ‘?’ parameters are replaced with their values. This means you can copy a query from the log and run it unmodified in the SQL query tool of your choice, saving a lot of time.

For background, my motivation for this work came from a Spring Boot / Hibernate application I have in progress. I started by using spring.jpa.properties.hibernate.show_sql=true, but that only logs queries with ‘?’ placeholders. To log the values, add spring.jpa.properties.hibernate.type=trace. At that point I had the query and its values, but to run it in my query tool (I needed to EXPLAIN the query), I had to replace each ‘?’ with its value – and I had over 20 placeholders. That got old fast.

There are other approaches to logging queries, such as the one described in Display SQL to Console in Spring JdbcTemplate. I’m not a fan of this approach because it only works for queries made through JdbcTemplate (so Hibernate queries aren’t logged, for example) and it’s an awful lot of code to include, and therefore maintain, in each project.

I discovered Log4jdbc, but it’s a bit of a pain to set up in a Spring Boot application because it:

  • doesn’t use the Spring Environment (application.properties)
  • needs setup to wrap the DataSources in Log4jdbc’s DataSourceSpy

Wanting to solve this problem precisely once and never again, I created Log4jdbc Spring Boot Starter.

To use it, just add to your project:

<dependency>
  <groupId>com.integralblue</groupId>
  <artifactId>log4jdbc-spring-boot-starter</artifactId>
  <version>[INSERT VERSION HERE]</version>
</dependency>

Then turn on the logging levels as desired in application.properties, for example:


logging.level.jdbc.sqlonly=DEBUG
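Log4jdbc exposes several loggers besides jdbc.sqlonly, each controllable independently. A fuller configuration might look like this (the logger names are standard Log4jdbc ones; the levels are just one reasonable choice):

```properties
# SQL statements only, with '?' placeholders replaced by actual values
logging.level.jdbc.sqlonly=DEBUG
# SQL statements plus execution timing information
logging.level.jdbc.sqltiming=INFO
# All JDBC calls except ResultSets (very verbose)
logging.level.jdbc.audit=OFF
# ResultSet calls (very verbose)
logging.level.jdbc.resultset=OFF
# ResultSet contents formatted as tables
logging.level.jdbc.resultsettable=OFF
# Connection open/close events, useful for finding connection leaks
logging.level.jdbc.connection=OFF
```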

When no logging is configured (all loggers are set to fatal or off), log4jdbc returns the original Connection.

See the Log4jdbc Spring Boot Starter project page for more information.


Working around HHH-9663: Orphan removal does not work for OneToOne relations

March 23rd, 2017

HHH-9663 means that orphan removal doesn’t work for OneToOne relationships. For example, given File and FileContent as below (taken from the bug report):

package pl.comit.orm.model;

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToOne;

@Entity
public class File {

	private int id;

	private FileContent content;

	@Id
	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}

	@OneToOne(fetch = FetchType.LAZY, orphanRemoval = true)
	public FileContent getContent() {
		return content;
	}

	public void setContent(FileContent content) {
		this.content = content;
	}
}

package pl.comit.orm.model;

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class FileContent {

	private int id;

	@Id
	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}
}

package pl.comit.orm.dao;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import pl.comit.orm.model.File;
import pl.comit.orm.model.FileContent;

@Repository
public class Dao {

	@PersistenceContext
	private EntityManager entityManager;

	@Transactional
	public void assureCreatedTaskAndNote(int fileId, int contentId) {
		FileContent content = entityManager.find(FileContent.class, contentId);
		if (content == null) {
			content = new FileContent();
			content.setId(contentId);
			entityManager.persist(content);
		}

		File file = entityManager.find(File.class, fileId);
		if (file == null) {
			file = new File();
			file.setId(fileId);
			entityManager.persist(file);
		}
		file.setContent(content);
	}

	@Transactional
	public void removeContent(int fileId) {
		File file = entityManager.find(File.class, fileId);
		file.setContent(null);
	}

	public FileContent find(int contentId) {
		return entityManager.find(FileContent.class, contentId);
	}
}

Running this as the main class demonstrates the bug – the FileContent is still found even though orphan removal should have deleted it:

package pl.comit.orm;

import org.springframework.context.support.ClassPathXmlApplicationContext;

import pl.comit.orm.dao.Dao;
import pl.comit.orm.model.FileContent;

public final class Application {

	private static final String CFG_FILE = "applicationContext.xml";

	public static void main(String[] args) {
		test(new ClassPathXmlApplicationContext(CFG_FILE).getBean(Dao.class));
	}

	public static void test(Dao dao) {
		dao.assureCreatedTaskAndNote(1, 2);
		dao.removeContent(1);
		FileContent content = dao.find(2);
		if (content != null) {
			System.err.println("Content found: " + content);
		}
	}
}

A workaround is to manually remove and detach the old referent whenever a new one is set. Here’s an updated File.java:

package pl.comit.orm.model;

import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToOne;
import javax.persistence.PersistenceContext;

@Entity
public class File {

	private int id;

	private FileContent content;

	@Id
	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}

	@OneToOne(fetch = FetchType.LAZY, orphanRemoval = true)
	public FileContent getContent() {
		return content;
	}

	public void setContent(FileContent content) {
		if (this.content != content) {
			final FileContent oldContent = this.content;
			this.content = content;
			if (oldContent != null) {
				// Hibernate won't remove the oldContent for us, so do it manually; workaround HHH-9663
				WorkaroundHHH9663.entityManager.remove(oldContent);
				WorkaroundHHH9663.entityManager.detach(oldContent);
			}
		}
	}

	// WORKAROUND https://hibernate.atlassian.net/browse/HHH-9663 "Orphan removal does not work for OneToOne relations"
	@Component
	public static class WorkaroundHHH9663 {
		@PersistenceContext
		private EntityManager injectedEntityManager;

		private static EntityManager entityManager;

		@PostConstruct
		public void postConstruct() {
			entityManager = injectedEntityManager;
		}

		@PreDestroy
		public void preDestroy() {
			entityManager = null;
		}
	}
	// END WORKAROUND
}

Note that no Dao changes were made, so if, for example, Spring Data were used instead of such a Dao, you wouldn’t have to modify anything else. And you can easily remove this workaround code when a version of Hibernate becomes available with HHH-9663 fixed.
Finally, yes, this approach does exhibit a bit of code smell (using a static variable this way and giving an entity a container managed component aren’t exactly best practices), but it’s a workaround – hopefully just a temporary one.


Spring Boot, HTTPS required, and Elastic Beanstalk health checks

March 9th, 2017

Spring Boot can be very easily configured to require HTTPS for all requests. In application.properties, simply set

security.require-ssl=true

And that works great – until you’re running the Spring Boot application on AWS Elastic Beanstalk with both HTTP and HTTPS listeners.

In that case, Elastic Beanstalk’s health check is always done over HTTP. The configuration page even says as much (there is no option to change it to HTTPS).

Since Spring Boot will redirect all non-secure HTTP requests to HTTPS, the health check will see an HTTP 302 redirect and therefore fail.

To work around this issue (in my opinion, an AWS shortcoming), Spring Boot needs to be configured to allow insecure requests to the health check URL. To do so, we’ll define a new URL (/aws-health) that proxies Actuator’s health URL but responds over both HTTP and HTTPS.

In your security configuration class (named WebSecurityConfiguration below as an example) which extends WebSecurityConfigurerAdapter, add the following to the existing implementation of configure(HttpSecurity) (or create that method if it doesn’t already exist):

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.web.access.channel.ChannelDecisionManagerImpl;

public class WebSecurityConfiguration extends WebSecurityConfigurerAdapter {
  @Override
  protected void configure(final HttpSecurity http) throws Exception {
    // Elastic Beanstalk health checks only happen over HTTP, so as a workaround
    // create a new URL (/aws-health) that forwards to the Actuator health check, see InsecureHealthController
    // That URL is set to respond over any channel (not just secure, aka https, ones)
    // see https://candrews.integralblue.com/2017/03/spring-boot-https-required-and-elastic-beanstalk-health-checks/
    http.requiresChannel()
      .antMatchers("/aws-health").requires(ChannelDecisionManagerImpl.ANY_CHANNEL)
      .anyRequest().requiresSecure();
  }
}

Now create the controller:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;

import org.springframework.boot.actuate.autoconfigure.ManagementServerProperties;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.util.UriComponentsBuilder;

/**
 * Elastic Beanstalk health checks only happen over HTTP, so as a workaround
 * create a new URL (/aws-health) that forwards to the Actuator health check.
 * That URL is set to respond over any channel (not just secure, aka https, ones) in {@link WebSecurityConfiguration}
 *
 * @see https://candrews.integralblue.com/2017/03/spring-boot-https-required-and-elastic-beanstalk-health-checks/
 */
@Controller
public class InsecureHealthController {
  private final ManagementServerProperties management;

  public InsecureHealthController(ManagementServerProperties management) {
    this.management = management;
  }

  @RequestMapping(value="/aws-health", method=RequestMethod.GET)
  public void health(final HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException {
    // Forward to the Actuator health endpoint, wrapping the request so it
    // reports itself as secure and isn't redirected to HTTPS.
    final String healthUrl = UriComponentsBuilder.fromPath(management.getContextPath()).path("/health").toUriString();
    final HttpServletRequestWrapper requestWrapper = new HttpServletRequestWrapper(request) {
      @Override
      public boolean isSecure() {
        return true;
      }
    };
    request.getRequestDispatcher(healthUrl).forward(requestWrapper, response);
  }
}

Spring Cache Abstraction as a Hibernate Cache Provider

February 23rd, 2017

Many of the projects I’ve worked on over my career have leveraged a Spring/Hibernate stack. Over that time, the Spring/Hibernate integration has greatly improved, making the once tedious and repetitive chore of setting up a new project (and maintaining an existing one through upgrades, expansion, and refactoring) far simpler. Now it’s as simple as going to the Spring Initializr and selecting the right options. In seconds, you get a nicely configured Spring Boot boilerplate and you’re ready for the real work to begin.

Well, most of the setup is simple at least… setting up the Hibernate cache is still not simple. To get decent performance out of Hibernate, the query cache and/or second level cache are key. But they’re a pain to configure, and that configuration is almost entirely redundant with the Spring cache configuration.

Spring has a cache abstraction that can be backed by Ehcache, Redis, Memcached, Caffeine, Guava, and a variety of other implementations – including a simple ConcurrentHashMap (which is great for test harnesses). Spring Boot makes it really easy to set up. Instead of setting up the Spring cache and the Hibernate cache totally separately, why not use the same configuration and same implementations for both?

That’s what Hibernate SpringCache does. Set up Spring Cache, add the Hibernate SpringCache dependency to your Spring Boot project, tell Hibernate to enable the query cache and/or second level cache, and you’re done.
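For example, with Spring Boot the combined setup might look like the following application.properties fragment. The hibernate.cache.* property names are standard Hibernate settings; the choice of Caffeine is purely illustrative, and check the Hibernate SpringCache project page for whatever additional configuration it requires:

```properties
# Pick the Spring Cache backing implementation (Caffeine here, as an example)
spring.cache.type=caffeine

# Tell Hibernate to enable the second level cache and the query cache
spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.use_query_cache=true
```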

Hopefully, Hibernate itself will one day include this functionality. If you’d like to see Hibernate/Spring integration become that much simpler, let your thoughts be known by commenting on the pull request.


Spring ID to Entity Conversion

July 22nd, 2015

When using Spring with Hibernate or JPA, it can be very convenient to use objects as method parameters and automatically convert primary keys to those objects. For example:

@RequestMapping(value = "/person/{personId}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public PersonDto getById(@PathVariable("personId") Person person) {
	return person;
}

instead of:

@RequestMapping(value = "/person/{personId}", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public PersonDto getById(@PathVariable String personId) {
	return personService.getById(personId);
}

Using this approach throughout a project (in controllers, services, etc.) can lead to shorter, much more readable code.

Spring Data JPA provides a similar feature via DomainClassConverter. In my case, Spring Data wasn’t an option.

Here’s how to make this happen. First, set up a @Configuration class:

package com.integralblue.candrews.config;

import javax.annotation.PostConstruct;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurationSupport;

import com.integralblue.candrews.converter.EntityToIdConverter;
import com.integralblue.candrews.converter.IdToEntityConverter;

@Configuration
public class WebMvcConfig extends WebMvcConfigurationSupport {
	@Bean
	protected IdToEntityConverter idToEntityConverter() {
		return new IdToEntityConverter();
	}

	@Bean
	protected EntityToIdConverter entityToIdConverter() {
		return new EntityToIdConverter();
	}

	@PostConstruct
	public void addConverters() {
		mvcConversionService().addConverter(idToEntityConverter());
		mvcConversionService().addConverter(entityToIdConverter());
	}
}

Now add these 2 classes:

package com.integralblue.candrews.converter;

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.metamodel.Attribute;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.SingularAttribute;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.convert.TypeDescriptor;
import org.springframework.core.convert.converter.ConditionalGenericConverter;
import org.springframework.format.support.FormattingConversionService;

/**
 * Converts an entity to its ID (the field in the entity annotated with
 * {@link javax.persistence.Id}). This is similar to
 * {@link org.springframework.data.repository.support.DomainClassConverter<T>}
 * but since we don't use Spring Data, we can't use that.
 */
public class EntityToIdConverter implements ConditionalGenericConverter {

	@PersistenceContext
	private EntityManager entityManager;

	@Autowired
	private FormattingConversionService conversionService;

	@Override
	public Set<ConvertiblePair> getConvertibleTypes() {
		return Collections.singleton(new ConvertiblePair(Object.class,
				Object.class));
	}

	@SuppressWarnings("unchecked")
	@Override
	public boolean matches(TypeDescriptor sourceType, TypeDescriptor targetType) {
		EntityType<?> entityType;
		try {
			entityType = entityManager.getMetamodel().entity(
					sourceType.getType());
		} catch (IllegalArgumentException e) {
			return false;
		}
		if (entityType == null)
			return false;

		// In my opinion, this is probably a bug in Hibernate.
		// If the class has a @Id that is a complex type (such as another
		// entity), then entityType.getIdType() will throw an
		// IllegalStateException with the message "No supertype found"
		// To work around this, get the idClassAttributes, and if there is
		// exactly 1, use the java type of that.
		Class<?> idClass;
		if (entityType.hasSingleIdAttribute()) {
			idClass = entityType.getIdType().getJavaType();
		} else {
			Set<SingularAttribute<?, ?>> idAttributes = new HashSet<>();
			@SuppressWarnings("rawtypes")
			Set attributes = entityType.getAttributes();
			for (Attribute<?, ?> attribute : (Set<Attribute<?, ?>>) attributes) {
				if (attribute instanceof SingularAttribute<?, ?>) {
					SingularAttribute<?, ?> singularAttribute = (SingularAttribute<?, ?>) attribute;
					if (singularAttribute.isId()) {
						idAttributes.add(singularAttribute);
					}
				}
			}
			if (idAttributes.size() == 1) {
				idClass = idAttributes.iterator().next().getJavaType();
			} else {
				return false;
			}
		}

		return conversionService.canConvert(TypeDescriptor.valueOf(idClass),
				targetType);
	}

	@Override
	public Object convert(Object source, TypeDescriptor sourceType,
			TypeDescriptor targetType) {
		if (sourceType.getType() == targetType.getType()) {
			return source;
		}
		if (source == null) {
			return null;
		}
		EntityType<?> entityType = entityManager.getMetamodel().entity(
				sourceType.getType());
		Object ret = entityManager.getEntityManagerFactory()
				.getPersistenceUnitUtil().getIdentifier(source);
		if (ret == null)
			return null;
		return conversionService.convert(ret,
				TypeDescriptor.valueOf(entityType.getIdType().getJavaType()),
				targetType);
	}
}
package com.integralblue.candrews.converter;

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.persistence.EntityManager;
import javax.persistence.NoResultException;
import javax.persistence.PersistenceContext;
import javax.persistence.metamodel.Attribute;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.SingularAttribute;

import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.convert.TypeDescriptor;
import org.springframework.core.convert.converter.ConditionalGenericConverter;
import org.springframework.format.support.FormattingConversionService;
import org.springframework.security.access.prepost.PostAuthorize;

/**
 * Converts an ID (the field in the entity annotated with
 * {@link javax.persistence.Id}) to the entity itself. This is similar to
 * {@link org.springframework.data.repository.support.DomainClassConverter<T>}
 * but since we don't use Spring Data, we can't use that.
 */
public class IdToEntityConverter implements ConditionalGenericConverter {

	@PersistenceContext
	private EntityManager entityManager;

	@Autowired
	private FormattingConversionService conversionService;

	@Override
	public Set<ConvertiblePair> getConvertibleTypes() {
		return Collections.singleton(new ConvertiblePair(Object.class,
				Object.class));
	}

	@SuppressWarnings("unchecked")
	@Override
	public boolean matches(TypeDescriptor sourceType, TypeDescriptor targetType) {
		EntityType<?> entityType;
		try {
			entityType = entityManager.getMetamodel().entity(
					targetType.getType());
		} catch (IllegalArgumentException e) {
			return false;
		}
		if (entityType == null)
			return false;

		// In my opinion, this is probably a bug in Hibernate.
		// If the class has a @Id that is a complex type (such as another
		// entity), then entityType.getIdType() will throw an
		// IllegalStateException with the message "No supertype found"
		// To work around this, get the idClassAttributes, and if there is
		// exactly 1, use the java type of that.
		Class<?> idClass;
		if (entityType.hasSingleIdAttribute()) {
			idClass = entityType.getIdType().getJavaType();
		} else {
			Set<SingularAttribute<?, ?>> idAttributes = new HashSet<>();
			@SuppressWarnings("rawtypes")
			Set attributes = entityType.getAttributes();
			for (Attribute<?, ?> attribute : (Set<Attribute<?, ?>>) attributes) {
				if (attribute instanceof SingularAttribute<?, ?>) {
					SingularAttribute<?, ?> singularAttribute = (SingularAttribute<?, ?>) attribute;
					if (singularAttribute.isId()) {
						idAttributes.add(singularAttribute);
					}
				}
			}
			if (idAttributes.size() == 1) {
				idClass = idAttributes.iterator().next().getJavaType();
			} else {
				return false;
			}
		}

		return conversionService.canConvert(sourceType,
				TypeDescriptor.valueOf(idClass));
	}

	@Override
	@PostAuthorize("hasPermission(returnObject, 'view')")
	public Object convert(Object source, TypeDescriptor sourceType,
			TypeDescriptor targetType) {
		if (sourceType.getType() == targetType.getType()) {
			return source;
		}
		EntityType<?> entityType = entityManager.getMetamodel().entity(
				targetType.getType());
		Object id = conversionService.convert(source, sourceType,
				TypeDescriptor.valueOf(entityType.getIdType().getJavaType()));
		if (id == null
				|| (id instanceof String && StringUtils.isEmpty((String) id))) {
			return null;
		}
		Object ret = entityManager.find(targetType.getType(), id);
		if (ret == null) {
			throw new NoResultException(targetType.getType().getName()
					+ " with id '" + id + "' not found");
		}
		return ret;
	}
}

Issues using Ehcache with ARC as the Hibernate Cache

July 22nd, 2015

When using Ehcache’s ARC (Automatic Resource Control) on a cache acting as the Hibernate cache, warnings like this will likely appear:

WARN [pool-1-thread-1] [ehcache.pool.impl.DefaultSizeOfEngine] sizeOf The configured limit of 100 object references was reached while attempting to calculate the size of the object graph. This can be avoided by adding stop points with @IgnoreSizeOf annotations. Since the CacheManger or Cache elements maxDepthExceededBehavior is set to “abort”, the sizing operation has stopped and the reported cache size is not accurate. If performance degradation is NOT an issue at the configured limit, raise the limit value using the CacheManager or Cache elements maxDepth attribute. For more information, see the Ehcache configuration documentation.

To fix this problem, you need to tell ARC what parts of Hibernate objects shouldn’t be counted when calculating object sizes so ARC doesn’t get into an infinite loop (when it hits circular references) or try to calculate the size of the entire Hibernate object graph.

One way to solve this problem is to use ehcache-sizeofengine-hibernate. To do so, simply add the jar to your application’s class path and it will be automatically discovered and used by Ehcache. However, since ehcache-sizeofengine-hibernate isn’t in Maven Central, or any publicly accessible Maven repository, it’s not really useful. I requested that it be published to Central a while ago, but the project seems to be unmaintained – so another way to solve this problem is required.

A system property needs to be set pointing to a file that lists the classes and fields that should be ignored by ARC. In my Spring @Configuration class for configuring Ehcache, I have this block:

static {
	System.setProperty(net.sf.ehcache.pool.impl.DefaultSizeOfEngine.USER_FILTER_RESOURCE,
			"net.sf.ehcache.sizeof.filter.properties");
}

But you can really do this anywhere (including simply adding “-Dnet.sf.ehcache.sizeof.filter=net.sf.ehcache.sizeof.filter.properties” to the Java command line).

Next, create a file named “net.sf.ehcache.sizeof.filter.properties” in the classpath (in a Maven project, the src/main/resources directory makes sense) with these contents:

org.hibernate.Session
org.hibernate.internal.SessionImpl
org.hibernate.engine.spi.SessionImplementor
org.hibernate.cache.spi.QueryKey.positionalParameterTypes
org.hibernate.proxy.AbstractLazyInitializer.entityName
org.hibernate.proxy.pojo.BasicLazyInitializer.persistentClass
org.hibernate.proxy.pojo.BasicLazyInitializer.getIdentifierMethod
org.hibernate.proxy.pojo.BasicLazyInitializer.setIdentifierMethod
org.hibernate.proxy.pojo.BasicLazyInitializer.componentIdType
org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer
org.hibernate.engine.spi.TypedValue.type

Issues using Ehcache with ARC as the Thymeleaf Cache

July 22nd, 2015 No comments

Ehcache’s ARC (Automatic Resource Control) is pretty great – it more or less automatically and optimally handles cache sizing/allocation. Using ARC, one can effectively say, “Here’s 2GB to allocate for caching – no more than that – manage the cache as best as possible.” Of course, more complex rules can also be done (on a per cache basis instead of globally, for instance) but that’s the general idea.
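That “here’s 2GB, manage it” rule can be expressed directly in ehcache.xml via the maxBytesLocalHeap attribute. A minimal sketch (the cache names and sizes here are illustrative, not from a real project):

```xml
<!-- Minimal ehcache.xml sketch: give ARC a 2GB heap budget to balance
     across all caches. Cache names below are illustrative only. -->
<ehcache maxBytesLocalHeap="2g">
  <!-- No per-cache sizing needed; ARC allocates from the shared 2GB pool. -->
  <cache name="exampleCache"/>
  <!-- A per-cache cap can still be set where finer control is wanted;
       it is carved out of the CacheManager-level pool. -->
  <cache name="boundedCache" maxBytesLocalHeap="256m"/>
</ehcache>
```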

I’ve used ARC on my last couple of projects to great success. However, it’s not without a bit of complexity. ARC has to calculate the size of objects in the cache to determine how much space the cache is using, and calculating the size of a Java object isn’t easy or obvious. Besides seemingly simple but surprisingly difficult tasks such as answering “How big is an object reference pointer?”, there are object graph traversal problems – and those object graph issues are the ones I’ve run into a few times in my ARC adventures.

When using Ehcache with ARC to store Thymeleaf templates (for example, when using the Spring Cache abstraction backed by Ehcache as the Thymeleaf cache implementation, as I previously discussed), you’ll run into messages like this one:

WARN [pool-1-thread-1] [ehcache.pool.impl.DefaultSizeOfEngine] sizeOf The configured limit of 100 object references was reached while attempting to calculate the size of the object graph. This can be avoided by adding stop points with @IgnoreSizeOf annotations. Since the CacheManger or Cache elements maxDepthExceededBehavior is set to “abort”, the sizing operation has stopped and the reported cache size is not accurate. If performance degradation is NOT an issue at the configured limit, raise the limit value using the CacheManager or Cache elements maxDepth attribute. For more information, see the Ehcache configuration documentation.

To fix this problem, you need to tell ARC what parts of Thymeleaf objects shouldn’t be counted when calculating object sizes so ARC doesn’t get into an infinite loop (when it hits circular references) or try to calculate the size of the entire Spring context.

A system property needs to be set pointing to a file that lists the classes and fields that should be ignored by ARC. In my Spring @Configuration class for configuring Ehcache, I have this block:

static {
	System.setProperty(net.sf.ehcache.pool.impl.DefaultSizeOfEngine.USER_FILTER_RESOURCE,
			"net.sf.ehcache.sizeof.filter.properties");
}

But you can really do this anywhere (including simply adding “-Dnet.sf.ehcache.sizeof.filter=net.sf.ehcache.sizeof.filter.properties” to the Java command line).

Next, create a file named “net.sf.ehcache.sizeof.filter.properties” in the classpath (in a Maven project, the src/main/resources directory makes sense) with these contents:

org.thymeleaf.templateresolver.TemplateResolution.resourceResolver
org.thymeleaf.dom.Node.processors

That’s it!

Note that I’ve requested that Thymeleaf make the org.thymeleaf.Template class serializable, which would remove the need to do anything (ARC would “Just Work” if this change is made): https://github.com/thymeleaf/thymeleaf/issues/378 So hopefully, with a future version of Thymeleaf, all of this information will be useless and nothing more than of historical interest.


Using Spring Cache as the Thymeleaf Cache

July 22nd, 2015 No comments

Thymeleaf includes caching that can be used to cache templates, fragments, messages, and expressions. The default implementation is an in-memory cache using standard Java collections. Wouldn’t it be nice to use the Spring Cache Abstraction instead? The advantages of such a setup include:

  • Consistency – why have many separate caching implementations each configured in different ways when you can have one?
  • Abstraction – Spring Cache Abstraction can be easily configured to use any number of implementations, such as ehcache, Guava, JDK collections, or any JSR-107 provider
  • All the advantages of the implementation chosen – for example, if your cache provider supports it, you get statistics, memory usage information, and more

Here’s a working configuration that connects Thymeleaf (I’m using 2.1.x, but this should work for any version 2.1 or later) to Spring (I’m using 4.1.x, but this approach should work for 4.0 or later):

package com.integralblue.candrews;

import java.util.List;
import java.util.Properties;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.context.MessageSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.thymeleaf.Template;
import org.thymeleaf.cache.AbstractCacheManager;
import org.thymeleaf.cache.ICache;
import org.thymeleaf.cache.ICacheEntryValidityChecker;
import org.thymeleaf.cache.ICacheManager;
import org.thymeleaf.dom.Node;
import org.thymeleaf.extras.springsecurity4.dialect.SpringSecurityDialect;
import org.thymeleaf.spring4.SpringTemplateEngine;
import org.thymeleaf.spring4.templateresolver.SpringResourceTemplateResolver;
import org.thymeleaf.spring4.view.ThymeleafViewResolver;
import org.thymeleaf.templateresolver.ITemplateResolver;
import org.thymeleaf.templateresolver.TemplateResolver;

@Configuration
public class ThymeleafConfig {
	private static final String VIEWS = "/WEB-INF/views/";
	private static final String TEMPLATE_CACHE_NAME = "thymeleafTemplateCache";
	private static final String FRAGMENT_CACHE_NAME = "thymeleafFragmentCache";
	private static final String MESSAGE_CACHE_NAME = "thymeleafMessageCache";
	private static final String EXPRESSION_CACHE_NAME = "thymeleafExpressionCache";

	@Value("${thymeleaf.cache}")
	private boolean thymeleafCache;

	@Autowired
	private CacheManager cacheManager;

	@Autowired
	private MessageSource messageSource;

	@Bean
	public ThymeleafViewResolver viewResolver() {
		ThymeleafViewResolver viewResolver = new ThymeleafViewResolver();
		viewResolver.setCache(thymeleafCache);
		viewResolver.setTemplateEngine(templateEngine());
		viewResolver.setCharacterEncoding("UTF-8");
		return viewResolver;
	}

	@Bean
	public SpringTemplateEngine templateEngine() {
		SpringTemplateEngine templateEngine = new SpringTemplateEngine();
		templateEngine.setCacheManager(thymeleafCacheManager());
		templateEngine.setTemplateResolver(templateResolver());
		templateEngine.addDialect(new SpringSecurityDialect());
		templateEngine.setMessageSource(messageSource);
		return templateEngine;
	}

	@Bean
	public ITemplateResolver templateResolver() {
		TemplateResolver templateResolver = new SpringResourceTemplateResolver();
		templateResolver.setPrefix(VIEWS);
		templateResolver.setSuffix(".html");
		templateResolver.setTemplateMode("HTML5");
		templateResolver.setCacheable(thymeleafCache);
		return templateResolver;
	}

	@Bean
	ICacheManager thymeleafCacheManager() {
		return new AbstractCacheManager() {

			@Override
			protected ICache<String, Template> initializeTemplateCache() {
				return templateCache();
			}

			@Override
			protected ICache<String, List<Node>> initializeFragmentCache() {
				return fragmentCache();
			}

			@Override
			protected ICache<String, Properties> initializeMessageCache() {
				return messageCache();
			}

			@Override
			protected ICache<String, Object> initializeExpressionCache() {
				return expressionCache();
			}
		};
	}

	@Bean
	ICache<String, Template> templateCache() {
		return new CacheToICacheAdapter<Template>(
				cacheManager.getCache(TEMPLATE_CACHE_NAME));
	}

	@Bean
	ICache<String, List<Node>> fragmentCache() {
		return new CacheToICacheAdapter<List<Node>>(
				cacheManager.getCache(FRAGMENT_CACHE_NAME));
	}

	@Bean
	ICache<String, Properties> messageCache() {
		return new CacheToICacheAdapter<Properties>(
				cacheManager.getCache(MESSAGE_CACHE_NAME));
	}

	@Bean
	ICache<String, Object> expressionCache() {
		return new CacheToICacheAdapter<Object>(
				cacheManager.getCache(EXPRESSION_CACHE_NAME));
	}

	private static final class CacheToICacheAdapter<V> implements
			ICache<String, V> {
		private final Cache cache;

		private static final class CacheEntry<V> {
			private final V value;
			private final long timestamp;

			public CacheEntry(V value) {
				this.value = value;
				this.timestamp = System.currentTimeMillis();
			}

			public V getValue() {
				return value;
			}

			public long getTimestamp() {
				return timestamp;
			}
		}

		public CacheToICacheAdapter(Cache cache) {
			this.cache = cache;
		}

		@Override
		public void put(String key, V value) {
			cache.put(key, new CacheEntry<V>(value));
		}

		@Override
		public V get(String key) {
			@SuppressWarnings("unchecked")
			CacheEntry<V> cacheEntry = cache.get(key, CacheEntry.class);
			if (cacheEntry == null) {
				return null;
			} else {
				return cacheEntry.getValue();
			}
		}

		@Override
		public V get(
				String key,
				ICacheEntryValidityChecker<? super String, ? super V> validityChecker) {
			@SuppressWarnings("unchecked")
			CacheEntry<V> cacheEntry = cache.get(key, CacheEntry.class);
			if (cacheEntry == null) {
				return null;
			} else {
				V value = cacheEntry.getValue();
				if (validityChecker == null
						|| validityChecker.checkIsValueStillValid(key, value,
								cacheEntry.getTimestamp())) {
					return value;
				} else {
					return null;
				}
			}
		}

		@Override
		public void clear() {
			cache.clear();
		}

		@Override
		public void clearKey(String key) {
			cache.evict(key);
		}

	}
}

I’ve requested that Thymeleaf include this feature as part of the Spring integration in Thymeleaf 3.0: https://github.com/thymeleaf/thymeleaf-spring/issues/89 So hopefully, with future versions of Thymeleaf, none of this extra work will be necessary and things will Just Work™.

If using Ehcache as the backing cache for Thymeleaf, and ARC is being used, errors may occur – see “Issues using Ehcache with ARC as the Thymeleaf Cache” for more information.


Precompiling with Web Deploy

April 27th, 2012 5 comments

Web Deploy is a great tool for deploying ASP.NET projects: it simplifies deployments, can target IIS 6 or 7, is pretty easy to get working on both the server and the project side, and is supported by many hosting providers. It works really well when used along with continuous integration. In the first phase, the project is built and a Web Deploy package is made; the command used is (for example) "msbuild [something.sln or something.csproj or something.vbproj] /m /p:Configuration=release /p:DeployOnBuild=True /p:DeployTarget=Package /p:skipJsHint=true". In the second phase, the Web Deploy package is deployed to servers for testing.

One of the problems I noticed was that a user could browse around the site and hit a compilation error. For instance, if there was a misspelled variable in an ASPX page, the user would get a standard “500” error page.

One of the purposes of continuous integration, and build/testing systems in general, is to avoid such problems. If the code behind fails to compile, the team is notified immediately – but if the ASPX page fails to compile, we have to wait for a user to report the problem. That is not acceptable.

The aspnet_compiler tool is the solution to this problem. If you use ASP.NET MVC, your project’s MSBuild file will contain a target named “MVCBuildViews” that will run aspnet_compiler for you. Enabling the use of this target by setting the “MvcBuildViews” property makes the build take a bit longer, but there are fewer surprises – to me, the benefit far outweighs the cost.

So the first option is to simply enable “MvcBuildViews.” “MVCBuildViews” writes the precompiled files to the ASP.NET temporary directory (the precompiled output is not included in the Web Deploy package), so they’ll be used only by the system that invoked the target (in my case, the continuous integration server). So for server deployments, you get the benefit of knowing that the pages compiled (solving the original problem), but you don’t get the other major benefit of precompiling – the startup performance boost.
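Enabling it amounts to a single property in the project file (or on the msbuild command line). A sketch of the relevant .csproj addition, conditioned on the Release configuration:

```xml
<!-- In the .csproj: compile views as part of Release builds.
     MvcBuildViews is the property checked by the MVCBuildViews target. -->
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>
```

The same can be done for a single build by passing /p:MvcBuildViews=true on the msbuild command line.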

It turns out that combining aspnet_compiler and Web Deploy is a bit tricky. I won’t go into the details, other than to say I figured it out and made an MSBuild target file that can be easily included in projects that use Web Deploy.

So, to get Web Deploy working with precompilation:

  1. Grab Precompile.targets and put it into the directory with your .csproj/.vbproj project file.
  2. Modify your .csproj or .vbproj file, adding <Import Project="$(MSBuildProjectDirectory)\Precompile.targets" /> towards the end of the file.
  3. In your project’s configuration, set the “AspNetCompiler” property to “true” for whatever configurations you want (I have it enabled for the “Release” configuration in my case).
  4. If you also want to invoke aspnet_merge, you need to ensure the system that does the building has Visual Studio or the Windows SDK installed, then in your project’s configuration, set the “AspNetMerge” property to “true” for whatever configurations you want.
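Putting steps 2–4 together, the project file additions look roughly like this (the AspNetCompiler and AspNetMerge property names come from my Precompile.targets; the Release-only condition mirrors my setup and is just one option):

```xml
<!-- Enable precompilation (and optionally assembly merging) for Release builds. -->
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <AspNetCompiler>true</AspNetCompiler>
  <!-- AspNetMerge requires Visual Studio or the Windows SDK on the build machine. -->
  <AspNetMerge>true</AspNetMerge>
</PropertyGroup>
<!-- Towards the end of the project file: -->
<Import Project="$(MSBuildProjectDirectory)\Precompile.targets" />
```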

aspnet_merge is another interesting tool that merges assemblies. You can read about its benefits on the aspnet_merge MSDN page. I found that it is beneficial to use aspnet_merge almost always – the only problem I had was on one of my very large projects (with dozens of ASPX pages and a total web site size of a few hundred megabytes), where aspnet_merge took an exorbitantly long time. For example, on a normal project, it takes ~1 minute. On this very large project, it took over an hour.
