WordPress on AWS The Easy Way with VersionPress

January 23rd, 2019

Developing and administering WordPress can be painful, especially when the requirements include a team of developers; the ability for a developer to run the site, including its content, locally (so they can reproduce issues caused by content or configuration); multiple content authors; multiple environments (such as staging and production); and the ability to easily roll back any problematic change, whether to content, code, or configuration.

To solve this problem, I developed a solution to make things easier: VersionPress on AWS. It uses Docker to allow developers to run the site locally and an assortment of AWS services to host the site. Check it out and I think you’ll find that life becomes a little bit easier.

VersionPress on AWS is Free Software under the GPLv3, hosted on GitLab. Contributions in the form of issue reports and pull requests are very much welcome.

What is VersionPress?

VersionPress stores content (posts, media, etc) as well as code (themes, plugins, configuration, etc) in source control (git).

  • By looking at the git log, it’s quick and easy to see who changed what and when.
  • All code (plugins, themes, WordPress core itself) and content (pages, posts, comments, configuration) are stored in git. This approach allows content changes, as well as code changes, to be reverted if there’s a problem and merged between branches for different environments.
  • Wipe out and recreate the environment at any time without data loss – everything is in git. No need to worry about the AWS RDS server. Migrate between RDS for MariaDB and Aurora at any time.
  • Need a staging site, or a new site to test work in progress? Create a new branch and launch a new stack; you’ll be up and running in minutes.
  • Run the exact same site with the same content locally so you can reproduce issues in production effortlessly – no more “works on my machine” situations

Hosting with AWS

Need a small, cheap staging site, but also a full fledged scalable production site with a CDN? Use the same stack for both – simply specify different parameter values. Change parameter values whenever you want without downtime or data loss. For example, when starting out, leave the CloudFront CDN off to save money. When the site becomes popular, add the CloudFront CDN to better handle the load and improve performance for end users.

AWS services leveraged include Elastic Beanstalk, RDS (with a choice of MariaDB or Aurora), and CloudFront.

Docker is used to run WordPress in AWS Elastic Beanstalk as well as for developers running the site locally. This consistency reduces the occurrence of “it works on my machine” situations and gets new developers onboarded more quickly.

When not to use VersionPress on AWS

Since VersionPress commits all content changes to git, content changes are a bit slower. Therefore, if the site is very heavy on content changes, such as a forum with many frequent comments, VersionPress on AWS may not be the right solution.

However, the vast majority of WordPress sites have very infrequent content changes, so the slightly slower writes are rarely an issue.

Get Started

Check out the VersionPress on AWS documentation to get started.

Categories: Uncategorized Tags:

DNSSEC on OpenWrt 18.06

August 10th, 2018

DNSSEC ensures that the results of DNS queries (for DNSSEC enabled domains) are authentic. For example, integralblue.com uses DNSSEC, so if an attacker (using a man in the middle or spoofing attack) changes the IP address that www.integralblue.com resolves to, then a DNS resolver supporting DNSSEC will be able to tell and return an error.

DNSSEC provides authentication and integrity; it does not provide confidentiality. For confidentiality (so your ISP, for example, cannot tell what DNS queries are being made), you can easily add DNS over TLS, which I’ve described how to set up on OpenWrt in another post.

By setting up DNSSEC on your OpenWrt router, you protect your entire network as all clients will perform DNS requests using your OpenWrt router’s DNS server which in turn will do DNSSEC checking for all queries.

Setting up DNSSEC on OpenWrt 18.06 is remarkably easy. You can use the LuCI web interface to perform these steps or run shell commands over SSH; I’m providing the commands here.

  1. Refresh the package list: opkg update
  2. Swap dnsmasq for dnsmasq-full (-full includes DNSSEC support): opkg install dnsmasq-full --download-only && opkg remove dnsmasq && opkg install dnsmasq-full --cache . && rm *.ipk
  3. Edit /etc/config/dhcp
    In the config dnsmasq section, add (or change the values of, if these settings already exist) these settings:

    • option dnssec '1'
    • option dnsseccheckunsigned '1'
  4. Restart dnsmasq so the changes take effect: /etc/init.d/dnsmasq restart
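
For reference, after step 3 the config dnsmasq section of /etc/config/dhcp should contain lines like the following (a sketch; any other existing options in that section stay as they are):

```
config dnsmasq
	option dnssec '1'
	option dnsseccheckunsigned '1'
```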

Enjoy knowing that now no one is tampering with your DNS queries.


DNS Over TLS on OpenWrt 18.06

August 10th, 2018

DNS over TLS encrypts DNS queries so no one between you and the DNS server you’re using (which, by default using these steps, will be Cloudflare’s) can tell what DNS queries/responses are being exchanged.

DNS over TLS provides confidentiality but not integrity or authenticity. For those, you need to set up DNSSEC, which I’ve described how to do on OpenWrt in another post.

By setting up DNS over TLS on your OpenWrt router, you protect your entire network as all clients will perform DNS requests using your OpenWrt router’s DNS server which in turn will use DNS over TLS to perform the actual resolution.

Setting up DNS over TLS using Stubby on OpenWrt 18.06 is remarkably easy. You can use the LuCI web interface to perform these steps or run shell commands over SSH; I’m providing the commands here.

  1. Refresh the package list: opkg update
  2. Install the ca-certificates package (necessary for stubby to verify the certificate of the DNS server): opkg install ca-certificates (this step shouldn’t be necessary; ca-certificates should be a dependency of stubby. See this issue in OpenWrt.)
  3. Install the stubby package: opkg install stubby
  4. Start stubby: /etc/init.d/stubby start
  5. Set stubby to start automatically at boot: /etc/init.d/stubby enable
  6. Use stubby as the DNS server by editing /etc/config/dhcp
    In the config dnsmasq section, add (or change the values of, if these settings already exist) these settings:

    • option noresolv '1'
    • list server ''
  7. Restart dnsmasq so the changes take effect: /etc/init.d/dnsmasq restart

If you'd rather use a different DNS over TLS server than Cloudflare's, edit /etc/stubby/stubby.yml.
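
For example, to switch the upstream to Quad9, the upstream server list in /etc/stubby/stubby.yml can be changed to something like the following (a sketch based on stubby's documented YAML format; the address and auth name shown are for Quad9 and should be replaced with your chosen provider's values):

```
upstream_recursive_servers:
  - address_data: 9.9.9.9
    tls_auth_name: "dns.quad9.net"
```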

Now you can rest assured that your DNS queries can't be seen by third parties.



May 29th, 2018

I’m currently working on an application that persists Java serialized data (using ObjectOutputStream) in a database. Java’s serialization format compresses very well – so why not compress the data when storing it then decompress it while reading for a quick win? The problem is that there will still be legacy, uncompressed data, which the application will not be able to access if it assumes all data is now gzipped.

The solution is to use MaybeGZIPInputStream instead of GZIPInputStream. For example, when reading, instead of:

ObjectInputStream ois = new ObjectInputStream(new GZIPInputStream(databaseInputStream));

use MaybeGZIPInputStream instead:

ObjectInputStream ois = new ObjectInputStream(new MaybeGZIPInputStream(databaseInputStream));

And always write data using GZIPOutputStream. Now all of the existing data can still be read, and newly written data gets the benefit of taking up much less storage (and far less bandwidth and time being transferred between the application servers and the database).
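
To illustrate the write and read sides, here is a minimal, self-contained sketch (standard library only; the class name GzipRoundTrip is just for this example). It serializes an object through GZIPOutputStream, prints the gzip magic bytes (0x1f 0x8b) that signal compressed data, and reads the object back:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {
    public static void main(String[] args) throws Exception {
        // Write side: always compress when persisting
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(buffer))) {
            oos.writeObject("hello, world");
        }
        byte[] stored = buffer.toByteArray();

        // The first two bytes are the gzip magic number: 0x1f 0x8b
        System.out.printf("%02x%02x%n", stored[0] & 0xff, stored[1] & 0xff);

        // Read side: decompress, then deserialize
        try (ObjectInputStream ois = new ObjectInputStream(
                new GZIPInputStream(new ByteArrayInputStream(stored)))) {
            System.out.println(ois.readObject());
        }
    }
}
```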

Here’s the source code of MaybeGZIPInputStream:

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.util.zip.GZIPInputStream;

/**
 * Detect if the given {@link InputStream} contains gzip-compressed data. If it does, wrap it in a {@link GZIPInputStream}. If it doesn’t, don’t.
 * @author Craig Andrews
 */
public class MaybeGZIPInputStream extends InputStream {

	private final InputStream in;

	public MaybeGZIPInputStream(final InputStream in) throws IOException {
		final PushbackInputStream pushbackInputStream = new PushbackInputStream(in, 2);
		if (isGZIP(pushbackInputStream)) {
			this.in = new GZIPInputStream(pushbackInputStream);
		} else {
			this.in = pushbackInputStream;
		}
	}

	private boolean isGZIP(final PushbackInputStream pushbackInputStream) throws IOException {
		final byte[] bytes = new byte[2];
		final int bytesRead = pushbackInputStream.read(bytes);
		if (bytesRead > 0) {
			// Put the bytes back so the stream can be read from the beginning
			pushbackInputStream.unread(bytes, 0, bytesRead);
		}
		// Check for the gzip magic number (0x1f 0x8b)
		return bytesRead == 2
			&& bytes[0] == (byte) GZIPInputStream.GZIP_MAGIC
			&& bytes[1] == (byte) (GZIPInputStream.GZIP_MAGIC >> 8);
	}

	@Override
	public int read() throws IOException {
		return in.read();
	}

	@Override
	public int read(byte[] b) throws IOException {
		return in.read(b);
	}

	@Override
	public int read(byte[] b, int off, int len) throws IOException {
		return in.read(b, off, len);
	}

	@Override
	public long skip(long n) throws IOException {
		return in.skip(n);
	}

	@Override
	public int available() throws IOException {
		return in.available();
	}

	@Override
	public void close() throws IOException {
		in.close();
	}

	@Override
	public synchronized void mark(int readlimit) {
		in.mark(readlimit);
	}

	@Override
	public synchronized void reset() throws IOException {
		in.reset();
	}

	@Override
	public boolean markSupported() {
		return in.markSupported();
	}

	@Override
	public int hashCode() {
		return in.hashCode();
	}

	@Override
	public boolean equals(Object obj) {
		return in.equals(obj);
	}

	@Override
	public String toString() {
		return in.toString();
	}
}



SQS JMS Resource Adapter

May 7th, 2018

The recently released SQS JMS Resource Adapter allows JEE applications (running on any JEE application server, including Glassfish, Payara, JBoss, IBM Liberty, etc) to easily use AWS SQS as a JMS implementation. This resource adapter can be helpful in many situations, such as:

  • Migrating an existing JEE application from another JMS implementation (such as RabbitMQ, ActiveMQ, IBM MQ, etc) to AWS SQS.
  • Allowing the JMS implementation to be switched out. For example, developers can use the ActiveMQ resource adapter locally, while this AWS SQS resource adapter is used in production.

Grab the resource adapter from Maven Central and submit issues and pull requests over at GitHub.


Trusting DoD Certificates in Docker and Beanstalk

May 1st, 2018

The US DoD (Department of Defense) uses its own root certificate when signing https certificates for its domains. For example, https://www.my.af.mil/ uses such a certificate. These root certificates are not trusted by any (commercial/public) operating system, browser, or other client. Therefore, in order to access these sites and not get an error, the DoD certificates must be trusted.

On Windows, go to DISA’s PKI and PKE Tools page and, under “Trust Store”, follow the directions for the “InstallRoot X: NIPR Windows Installer”.

On Linux, download the certificates from MilitaryCAC’s Linux Information page (direct link to the certificates). Then follow your distribution’s instructions on how to install certificates to the trust store. For example, on Red Hat / CentOS / Fedora / Amazon Linux, copy the certificates to /etc/pki/ca-trust/source/anchors/ then run update-ca-trust. On Debian / Ubuntu and Gentoo, copy the certificates to /usr/local/share/ca-certificates/ then run update-ca-certificates.

On Docker, for a Red Hat / CentOS / Fedora / Amazon Linux (or other Fedora-type system) derived container, add the following to the Dockerfile:

RUN yum -y install openssl \
&& CERT_BUNDLE="Certificates_PKCS7_v5.3_DoD" \
&& curl "https://iasecontent.disa.mil/pki-pke/${CERT_BUNDLE}.zip" --output certs.zip \
&& unzip certs.zip "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b" \
&& openssl pkcs7 -in "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b" -print_certs -out "/etc/pki/ca-trust/source/anchors/${CERT_BUNDLE}.pem" \
&& update-ca-trust \
&& update-ca-trust force-enable \
&& rm -rf certs.zip "${CERT_BUNDLE}" \
&& yum -y remove openssl \
&& rm -rf /var/cache/yum

On AWS Elastic Beanstalk, the .ebextensions mechanism can be used. In the jar/war/etc deployment archive, add a config file (named, for example, .ebextensions/install_dod_certificates.config; the exact name is up to you) that runs the install script:

container_commands:
  install_dod_certificates:
    command: "bash .ebextensions/scripts/install_dod_certificates.sh"

Then add the script itself as .ebextensions/scripts/install_dod_certificates.sh:

#!/bin/bash
set -e # stop on all errors
yum install -y unzip openssl
CERT_BUNDLE="Certificates_PKCS7_v5.3_DoD"
curl "https://iasecontent.disa.mil/pki-pke/${CERT_BUNDLE}.zip" --output certs.zip
unzip certs.zip "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b"
openssl pkcs7 -in "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b" -print_certs -out "/etc/pki/ca-trust/source/anchors/${CERT_BUNDLE}.pem"
update-ca-trust force-enable
update-ca-trust
rm -rf certs.zip "${CERT_BUNDLE}"
yum -y remove unzip
rm -rf /var/cache/yum


Coal to Cryptocurrency: Mining Remains a Threat

November 30th, 2017

Coal was the fuel that powered the Industrial Revolution, bootstrapping the modern age as we know it. Acquiring it was simple, using it was easy, and it got the job done. Coal was the perfect resource. Back in those days, efficiency and cleanliness weren’t concerns because of ecological immaturity (society didn’t know any better) and scale (industry wasn’t big enough to impact the world sufficiently to raise concerns).

Cryptocurrency mining is today’s coal mining, and it’s time to start considering alternative solutions.

With any currency (traditional or cryptographic), a few constraints must be in place: a unit of currency cannot be spent more than once (no “double spending”), transactions must complete in a timely manner, and everyone must agree when a transaction is complete. With traditional paper money, it’s clear how all of these constraints are satisfied: counterfeiting is made difficult by secure notes and strongly discouraged by legal means, the transaction completes when physical possession of the note is transferred, and all parties can look at their physical possession of notes to determine a transaction’s state.

Implementing these constraints digitally is more difficult than when using physical items. Bitcoin, being the world’s first cryptocurrency, used the best solutions available. The system bitcoin leverages is known as blockchain with “proof of work.” Bitcoin uses a series of blocks, appropriately referred to as a blockchain, that forms a ledger which records the state of every bitcoin since bitcoin’s inception. Each block records the movement of a number of bitcoins between owners: the proof. In order for a block to be valid, it can include each coin at most once (to prevent double spending); it must include the unique identity (the hash) of the previous block; and it must include the solution to a difficult math problem (a cryptographic hash). The process of solving these problems to form valid blocks is known as mining and those who do so are called miners.

Solving these mining challenges takes hardware, infrastructure, cooling, and the electricity to keep it all going. To incentivize the block discovery process, the system rewards the miner with a predetermined amount of currency. To satisfy the need for timely transactions, each includes a transaction fee to be awarded to the miner. Therefore a miner wants to include as many transactions as possible into a block in order to collect the greatest amount in fees. Once a block has been mined, it’s shared to the public so anyone can verify that there was no double spending, and that the cryptographic hash is valid. Miners only mine new blocks on top of valid ones.
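
The mining loop described above can be sketched in toy form (this is an illustration, not Bitcoin's actual block format or difficulty; the previous-hash and transaction strings are made up for the example). The miner varies a nonce until the block's SHA-256 hash starts with the required number of zero hex digits:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class ToyProofOfWork {
    // Hex-encode the SHA-256 digest of the given text.
    static String sha256Hex(String text) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(text.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    // Try nonces until the block's hash starts with `difficulty` zero hex digits.
    static long mine(String previousBlockHash, String transactions, int difficulty) throws Exception {
        char[] zeros = new char[difficulty];
        Arrays.fill(zeros, '0');
        String target = new String(zeros);
        for (long nonce = 0; ; nonce++) {
            String hash = sha256Hex(previousBlockHash + transactions + nonce);
            if (hash.startsWith(target)) {
                return nonce;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String previousBlockHash = "made-up-previous-hash";
        String transactions = "alice pays bob 1 unit";
        long nonce = mine(previousBlockHash, transactions, 4);
        // Re-hash the winning block to show its leading zeros
        String hash = sha256Hex(previousBlockHash + transactions + nonce);
        System.out.println(hash.substring(0, 4));
    }
}
```

Each additional zero digit multiplies the expected work by 16, which is how the difficulty (and thus the energy expenditure) scales.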

As cryptocurrencies grow more valuable, the mining rewards grow as well, making mining increasingly lucrative. This draws in more miners, which, in turn, use more energy. As of November 2017, each bitcoin transaction uses as much energy as the average American house consumes in a week. Furthermore, if bitcoin were a country, it would rank as the world's 69th highest energy consumer.

Just as coal was a great way to bootstrap industry, proof of work has done a great job bootstrapping cryptocurrencies. But neither coal nor proof of work are viable paths forward; they’re simply too polluting. So what are the solar panel and wind turbine analogues for cryptocurrency?

One system is proof of stake. At a high level, this system limits miners’ output in proportion to the total amount of currency each miner owns. For example, if there are 200 units of currency in total and a miner owns 10 units, that miner may only contribute 5% of the mining power. In this way, there’s no race for miners to acquire massive computational resources. This system has other advantages over proof of work as well, including avoiding the 51% attack problem. Ethereum, the second largest cryptocurrency by market capitalization, is currently in the process of switching from proof of work to proof of stake. Ark, Dash, and Neo are examples of cryptocurrencies currently using a proof of stake system.

Another system is known as “the tangle,” currently only used by the IOTA cryptocurrency. The tangle’s alternative methodology provides many advantages over blockchain, including zero transaction fees, no miner energy expenditure, and greater decentralization. However, analogous to alternative energy sources in days past, the tangle today is not as proven, researched, tested, or understood as well as blockchain systems are.

In this modern age of global climate change, the world needs to abandon proof of work systems. With their energy expenditures exceeding that of most countries, the cost to the environment is simply too great to continue down this path, especially since alternatives already exist. Like modern industry’s move away from familiar, reliable coal, it’s time for the cryptocurrency community to move on from proof of work to better, more responsible solutions.


Log4jdbc Spring Boot Starter

March 27th, 2017

Logging SQL as it’s executed is a fairly common desire when developing applications. Perhaps an ORM (such as Hibernate) is being used, and you want to see the actual SQL being executed. Or maybe you’re tracking down a performance problem and need to know if it’s in the application or the database, so step #1 is finding out what query is executing and for how long.

Solving this problem once and for all (at least for Spring Boot applications), I created Log4jdbc Spring Boot Starter. It’s a very simple yet powerful way to log SQL queries (and more, such as timing information). And unlike other solutions, the logged queries are ready to run: the ‘?’ parameters are replaced with their values. This means you can copy a query from the log and run it unmodified in the SQL query tool of your choice, saving a lot of time.

For background, my motivation for this work is a Spring Boot / Hibernate application I have in progress. I started by using spring.jpa.properties.hibernate.show_sql=true, but that only logs queries with ‘?’ placeholders. To log the values, add spring.jpa.properties.hibernate.type=trace. At least then I had the query and its values, but to run it in my query tool (I needed to EXPLAIN the query), I had to replace each ‘?’ with its value, and I had over 20 placeholders. That got old fast.

There are other approaches to logging queries, such as the one described in Display SQL to Console in Spring JdbcTemplate. I’m not a fan of this approach because it only works for queries made through JdbcTemplate (so Hibernate queries aren’t logged, for example), and it’s an awful lot of code to include, and therefore maintain, in each project.

I discovered Log4jdbc but it’s a bit of a pain to setup in a Spring Boot application because it:

  • doesn’t use the Spring Environment (application.properties)
  • needs setup to wrap the DataSources in the Log4jdbc DataSourceSpy

Wanting to solve this problem precisely once and never again, I created Log4jdbc Spring Boot Starter.

To use it, just add to your project:

<dependency>
  <groupId>com.integralblue</groupId>
  <artifactId>log4jdbc-spring-boot-starter</artifactId>
  <version>[INSERT VERSION HERE]</version>
</dependency>

Then turn on the logging levels as desired in application.properties, for example:
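
Here is a sketch using log4jdbc's standard logger names (the specific levels shown are just one reasonable setup; adjust to taste):

```
logging.level.jdbc.sqlonly=info
logging.level.jdbc.sqltiming=info
logging.level.jdbc.audit=fatal
logging.level.jdbc.resultset=fatal
logging.level.jdbc.resultsettable=fatal
logging.level.jdbc.connection=fatal
```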


When no logging is configured (all loggers are set to fatal or off), log4jdbc returns the original Connection.

See the Log4jdbc Spring Boot Starter project page for more information.


Working around HHH-9663: Orphan removal does not work for OneToOne relations

March 23rd, 2017

HHH-9663 means that orphan removal doesn’t work for OneToOne relationships. For example, given File and FileContent as below (taken from the bug report):

package pl.comit.orm.model;

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToOne;

@Entity
public class File {

	private int id;

	private FileContent content;

	@Id
	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}

	@OneToOne(fetch = FetchType.LAZY, orphanRemoval = true)
	public FileContent getContent() {
		return content;
	}

	public void setContent(FileContent content) {
		this.content = content;
	}
}

package pl.comit.orm.model;

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class FileContent {

	private int id;

	@Id
	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}
}

package pl.comit.orm.dao;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import pl.comit.orm.model.File;
import pl.comit.orm.model.FileContent;

@Repository
public class Dao {

	@PersistenceContext
	private EntityManager entityManager;

	@Transactional
	public void assureCreatedTaskAndNote(int fileId, int contentId) {
		FileContent content = entityManager.find(FileContent.class, contentId);
		if (content == null) {
			content = new FileContent();
			content.setId(contentId);
			entityManager.persist(content);
		}

		File file = entityManager.find(File.class, fileId);
		if (file == null) {
			file = new File();
			file.setId(fileId);
			entityManager.persist(file);
		}
		file.setContent(content);
	}

	@Transactional
	public void removeContent(int fileId) {
		File file = entityManager.find(File.class, fileId);
		file.setContent(null);
	}

	public FileContent find(int contentId) {
		return entityManager.find(FileContent.class, contentId);
	}
}

Running the following main class demonstrates the problem: the orphaned FileContent is not removed, so “Content found” is printed:

package pl.comit.orm;

import org.springframework.context.support.ClassPathXmlApplicationContext;

import pl.comit.orm.dao.Dao;
import pl.comit.orm.model.FileContent;

public final class Application {

	private static final String CFG_FILE = "applicationContext.xml";

	public static void main(String[] args) {
		test(new ClassPathXmlApplicationContext(CFG_FILE).getBean(Dao.class));
	}

	public static void test(Dao dao) {
		dao.assureCreatedTaskAndNote(1, 2);
		dao.removeContent(1);
		FileContent content = dao.find(2);
		if (content != null) {
			System.err.println("Content found: " + content);
		}
	}
}

A workaround is to manually remove and detach the old referent, and then persist the new referent. Here’s an updated File.java:

package pl.comit.orm.model;

import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToOne;
import javax.persistence.PersistenceContext;

@Entity
public class File {

	private int id;

	private FileContent content;

	@Id
	public int getId() {
		return id;
	}

	public void setId(int id) {
		this.id = id;
	}

	@OneToOne(fetch = FetchType.LAZY, orphanRemoval = true)
	public FileContent getContent() {
		return content;
	}

	public void setContent(FileContent content) {
		if (this.content != content) {
			final FileContent oldContent = this.content;
			this.content = content;
			if (oldContent != null) {
				// Hibernate won't remove the oldContent for us, so do it manually; workaround HHH-9663
				WorkaroundHHH9663.entityManager.remove(oldContent);
				WorkaroundHHH9663.entityManager.detach(oldContent);
			}
		}
	}

	// WORKAROUND https://hibernate.atlassian.net/browse/HHH-9663 "Orphan removal does not work for OneToOne relations"
	@Component
	public static class WorkaroundHHH9663 {
		@PersistenceContext
		private EntityManager injectedEntityManager;

		private static EntityManager entityManager;

		@PostConstruct
		public void postConstruct() {
			entityManager = injectedEntityManager;
		}

		@PreDestroy
		public void preDestroy() {
			entityManager = null; // NOPMD
		}
	}
}

Note that no Dao changes were made, so if, for example, Spring Data were used instead of such a Dao, you wouldn’t have to modify anything else. And you can easily remove this workaround when a version of Hibernate becomes available with HHH-9663 fixed.

Finally, yes, this approach does exhibit a bit of code smell (the use of the static variable in this way and the entity having a container-managed component aren’t exactly best practices), but it’s a workaround, and hopefully just a temporary one.


Spring Boot, HTTPS required, and Elastic Beanstalk health checks

March 9th, 2017

Spring Boot can be very easily configured to require HTTPS for all requests. In application.properties, simply set:

security.require_ssl=true

And that works great, until you’re running the Spring Boot application on AWS Elastic Beanstalk with both HTTP and HTTPS listeners.

In that case, Elastic Beanstalk’s health check is always done over HTTP; the configuration page even says as much, and there is no option to change it to HTTPS.

Since Spring Boot will redirect all non-secure HTTP requests to HTTPS, the health check will see an HTTP 302 redirect and therefore fail.

To work around this issue (which is, in my opinion, an AWS shortcoming), Spring Boot needs to be configured to allow insecure requests to the health check URL. To do so, we’ll define a new URL (/aws-health) that proxies Actuator’s health URL but responds over both HTTP and HTTPS.

In your security configuration class (named WebSecurityConfiguration below as an example) which extends WebSecurityConfigurerAdapter, add the following to the existing implementation of configure(HttpSecurity) (or create that method if it doesn’t already exist):

import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.access.channel.ChannelDecisionManagerImpl;

public class WebSecurityConfiguration extends WebSecurityConfigurerAdapter {
  @Override
  protected void configure(final HttpSecurity http) throws Exception {
    // Elastic Beanstalk health checks only happen over HTTP, so as a workaround
    // create a new URL (/aws-health) that forwards to the Actuator health check, see InsecureHealthController
    // That URL is set to respond over any channel (not just secure, aka https, ones)
    // see https://candrews.integralblue.com/2017/03/spring-boot-https-required-and-elastic-beanstalk-health-checks/
    http.requiresChannel()
      .antMatchers("/aws-health").requires(ChannelDecisionManagerImpl.ANY_CHANNEL)
      .anyRequest().requiresSecure();
  }
}

Now create the controller:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;

import org.springframework.boot.actuate.autoconfigure.ManagementServerProperties;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.util.UriComponentsBuilder;

/**
 * Elastic Beanstalk health checks only happen over HTTP, so as a workaround
 * create a new URL (/aws-health) that forwards to the Actuator health check.
 * That URL is set to respond over any channel (not just secure, aka https, ones) in {@link WebSecurityConfiguration}
 *
 * @see https://candrews.integralblue.com/2017/03/spring-boot-https-required-and-elastic-beanstalk-health-checks/
 */
@Controller
public class InsecureHealthController {
  private final ManagementServerProperties management;

  public InsecureHealthController(ManagementServerProperties management) {
    this.management = management;
  }

  @RequestMapping(value = "/aws-health", method = RequestMethod.GET)
  public void health(final HttpServletRequest request, final HttpServletResponse response) throws ServletException, IOException {
    // Forward to the Actuator health endpoint, pretending the request is secure
    final String healthUrl = UriComponentsBuilder.fromPath(management.getContextPath()).path("/health").toUriString();
    final HttpServletRequestWrapper requestWrapper = new HttpServletRequestWrapper(request) {
      @Override
      public boolean isSecure() {
        return true;
      }
    };
    request.getRequestDispatcher(healthUrl).forward(requestWrapper, response);
  }
}
