Performance Testing WebDAV Clients

March 18th, 2019

Part of migrating applications from on-premises hosting to cloud hosting (AWS, Azure, etc) involves re-evaluating how users access their data. A recent migration involved users running Windows 10 accessing a Windows file share using the SMB protocol. Since SMB isn’t safe to run directly over the Internet (it’s usually not encrypted and it has a long history of security vulnerabilities), potential options included tunneling SMB within a VPN or changing away from the SMB protocol. One such alternative is WebDAV.

Web Distributed Authoring and Versioning (WebDAV) provides the same basic functionality as NFS, SMB, or FTP over the familiar, widely deployed, well-supported HTTP protocol. It was literally designed for this use case.

Testing Methodology

I evaluated 7 approaches for clients to access a WebDAV share, testing the performance of each and noting pros and cons.

Testing was performed in March 2019. The same client, same server, and same connection were used for each test.

The client is a Dell Latitude 7490 running Fedora 29. Windows 10 (version 1809 build 17763.348) was run as a VM in KVM using user mode networking to connect to the Internet. For the smb+davfs2 testing, Samba and davfs2 ran in Docker on the Linux host, and the Windows 10 VM connected to it.

The WebDAV server is running Apache 2.4.38 and Linux kernel 5.0.0-rc8. The WebDAV site is served over HTTPS.

Ping time between the client and server averages 18ms; httping to the WebDAV server URL averages 140ms.

Windows 10’s Redirector over HTTP/1.1

WebDAV Redirector is the name of the built in WebDAV client in Windows. It’s accessed from Windows Explorer by using the “Map network drive…” option and selecting “Connect to a web site that you can use to store your documents and pictures”

Recent versions of Windows 10 support HTTP/2, so for this test, the “h2” protocol was disabled on the Apache server.
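Toggling "h2" in Apache comes down to the Protocols directive (assuming HTTP/2 support comes from mod_http2, as in a stock Apache 2.4 build); a minimal sketch:

# HTTP/1.1 only (this test)
Protocols http/1.1
# HTTP/2 offered alongside HTTP/1.1 (the next test)
# Protocols h2 http/1.1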

Windows 10’s Redirector over HTTP/2

This test is the same as the previous one except the “h2” protocol is enabled on the Apache server.


WebDrive

WebDrive (version 2019 build 5305) was used with "Basic Cache Settings" set to "Multi-User." All other settings were left at their defaults.

WebDrive does not currently support TLS 1.3 or HTTP/2; I suspect these would improve performance.

davfs2 on Linux

davfs2 is a Linux file system driver which allows mounting WebDAV shares just like any other file system.

For this testing, the client is Linux, not Windows 10. Therefore, this test isn't a real solution (the clients do need to run Windows 10); it's for comparison and for testing one component of a possible solution (Samba + davfs2).

davfs2 does not support HTTP/2.

davfs2 offers a number of configuration options; here's what was used for this testing:

use_locks 1
delay_upload 0 # disable write cache so users know that when the UI tells them the upload is done it’s actually done

And the mount command (the WebDAV server URL below is a placeholder):

sudo mount -t davfs https://webdav.example.com/webdavpoc/ /home/candrews/webdavpoc/ -ouid=1000,gid=1000,conf=/home/candrews/.davfs2/davfs2.conf

The davfs2 version is 1.5.4.

davfs2 with locks disabled (using a Linux client)

This test is the same as the previous one but with WebDAV locks disabled by setting "use_locks 0" in davfs2.conf.

Samba + davfs2

This test uses Samba to expose the davfs2 mount from the davfs2 test and uses Windows 10 to access the resulting SMB share.

The davfs2 mount was exposed over SMB using the dperson/samba Docker image, run with:

docker run --name smb -e USERID=1000 -e GROUPID=1000 --security-opt label:disable -v /home/candrews/webdavpoc:/webdavpoc:Z --rm -it -p 139:139 -p 445:445 -d dperson/samba -s "webdavpoc;/webdavpoc;yes;no;no;testuser" -u "testuser;testpass" -g "vfs objects ="

The biggest challenge with this approach, which I didn't address, is security. The davfs2 mount that Samba exposes uses fixed credentials – so all Windows 10 users using the smb+davfs proxy will appear to the WebDAV server to be the same user. There is no way to pass the credentials from the Windows 10 client through Samba and davfs2 to the WebDAV server. I imagine this limitation may disqualify this solution.

Samba + davfs2 with locks disabled

This test is the same as the previous one but WebDAV locks are disabled in davfs2.



Key takeaways from the results:

  • The built in Windows Redirector (which is free and the easiest solution) is by far the slowest.
  • WebDAV benefits greatly from HTTP/2. HTTP/2 was designed to improve the performance of multiple concurrent small requests (as is the case with HTML referencing CSS, JavaScript, and images), and that's exactly how WebDAV works. Each download is a PROPFIND and a GET, and each upload is a PROPFIND, PROPPATCH, PUT, and, if locking is enabled, LOCK and UNLOCK.
  • Disabling WebDAV locks improves performance. By eliminating the LOCK/UNLOCK requests for every file, upload performance doubled. Unfortunately, disabling WebDAV locks for the built in Windows Redirector requires a registry change, a non-starter for many organizations. WebDrive has locks disabled by default. davfs2 only uses locks when editing files (when a program actually has the file open for writing), not when uploading a new file.
  • Surprisingly, the overall fastest (by far) approach involves a middle box running Linux using smb+davfs2. Adding a hop and another protocol usually slows things down, but not in this case.
  • davfs2 is fast because it opens more HTTP connections, allowing it to do more requests in parallel. WebDrive supports adjusting the concurrency too; in "Connection Settings" there are settings for "Active Connections Limit" (defaults to 4) and "Active Upload Limit" (defaults to 2). Adjusting these settings would likely impact performance.
  • The client/server connection was high bandwidth (~100Mbps) and low latency (~18ms). Lower bandwidth and higher latency would result in even greater benefits from using HTTP/2 and disabling locking.

Considerations for future research / testing include:

  • AWS Storage Gateway may be a solution. Concerns for this approach include:
    • Incompatible with MITM HTTPS-decrypting security devices
    • Uses a local, on premises cache (called an “upload buffer” in the documentation) so changes are not immediately available globally
    • Integrates with Active Directory for authentication/authorization, but setting up for a large number of users is onerous.
    • Requires the use of S3 for file storage, which may not be compatible with the servers in AWS
  • Various other WebDAV clients for Windows, such as DokanCloudFS, NetDrive, DirectNet Drive, ExpanDrive, and Cyberduck / Mountain Duck.
  • Further tweaks to configuration options, such as the concurrency settings in WebDrive and davfs2

The Sad Story of TCP Fast Open

March 13th, 2019

I’m very interested in performance. If there’s a way to make something fast, you’ve got my attention. Especially when there’s a way to make a lot of things fast with a simple change – and that’s what TCP Fast Open (TFO) promises to do.

TFO (RFC 7413) started out in 2011 as a way to eliminate one of the round trips involved in opening a TCP connection. In early testing discussed at the 2011 Linux Plumbers Conference, Google found that TFO reduced page load times by 4-40%. The slowest, highest latency connections would benefit the most – TFO promised to be a great improvement for many users.

Support for this performance improving technology rapidly grew. In 2012, Linux 3.7 gained support for client and server TFO. In 2013, Android gained support when KitKat (4.4) was released using the Linux 3.10 kernel. In 2015, iOS gained support. In 2016, Windows 10 got support in the Anniversary Update. Even load balancers, such as F5, added support.

And yet, today, no browsers enable it: Chrome, Firefox, and Edge all have TFO disabled by default.

What happened to this technology that once sounded so promising?

Initial Optimism Meets Hard Reality

I attribute the failure to achieve widespread adoption of TCP Fast Open to four factors:

  1. Imperfect initial planning
  2. Middleboxes
  3. Tracking concerns
  4. Other performance improvements

Factor 1: Imperfect Initial Planning

TCP Fast Open was in trouble from initial conception because it is an operating system change that had to be done perfectly from the very beginning. Operating systems have very long lifespans – updates happen slowly, backwards compatibility is paramount, and changes are (rightfully so) difficult to make. So when the TFO specification wasn't perfect the first time, that was a major blow to the chances of ever achieving widespread adoption.

TFO requires the allocation of a new, dedicated TCP Option Kind Number. Since TFO was experimental when it started out, it used a number (254 with magic 0xF989) from the experimental allocation as described in RFC 4727, which quickly got ingrained in Windows, iOS, Linux, and more. As the saying goes, "nothing is as permanent as a temporary solution."

So when TFO left experimental status with RFC 7413, the document states: "Existing implementations that are using experimental option 254 per [RFC6994] with magic number 0xF989 (16 bits) as allocated in the IANA "TCP Experimental Option Experiment Identifiers (TCP ExIDs)" registry by this document, SHOULD migrate to use this new option (34) by default."

Did all implementations migrate? Any that did would lose compatibility with those that didn't.

So all systems must now support both the experimental TCP Option Kind Number and the permanent one.

This issue isn’t a deal breaker – but it certainly wasn’t a great way to start out.

Factor 2: Middleboxes

Middleboxes are the appliances that sit between the end user and the server they're trying to reach. They're firewalls, proxies, routers, caches, security devices, and more. They tend to be very rarely updated, very expensive, and running proprietary software. Middleboxes are, in short, why almost everything runs over HTTP today and not other protocols as the original design for the Internet envisioned.

The first sign of trouble appeared in the initial report from Google in 2011 regarding TFO. As reported by LWN, “about 5% of the systems on the net will drop SYN packets containing unknown options or data. There is little to be done in this situation; TCP fast open simply will not work. The client must thus remember cases where the fast-open SYN packet did not get through and just use ordinary opens in the future.”

Over the years, Google and Mozilla did much more testing and found that TFO caused more trouble than it was worth. Clients that initiated TFO connections found failures frequently enough that on average, TFO wasn’t worth it. In some networks, TFO never works – for example, China Mobile’s firewall consistently fails to accept TFO requiring every connection to be retried without it, leading to TFO actually increasing roundtrips.

Middleboxes are probably the fatal blow for TFO: the existing devices won’t be replaced for (many) years, and the new replacement devices may have the same problems.

Factor 3: Tracking Concerns

During initial connection to a host, TFO negotiates a unique cookie; on subsequent connections to the same host, the client uses the cookie to eliminate one round trip. Using this unique cookie allows servers using TFO to track users. For example, if a user browses to a site, then opens an incognito window and goes to the same site, the same TFO cookie would be used in both windows. Furthermore, if a user goes to a site at work, then uses the same browser to visit that site from a coffee shop, the same TFO cookie would be used in both cases, allowing the site to know it's the same user.

In 2011, tracking by governments and corporations wasn't nearly as much of a concern as it is today. It would still be 2 years before Edward Snowden would release documents describing the US government's massive surveillance programs.

But, in 2019, tracking concerns are real. TFO's potential to be used for user tracking makes it unacceptable for most use cases.

One way to mitigate tracking concerns would be for the TFO cookie cache to be cleared whenever the active network changes. Windows/Linux/MacOS/FreeBSD/etc should consider clearing the OS’s TFO cookie cache when changing networks. See this discussion on curl’s issue tracker for more.

Factor 4: Other Performance Improvements

When TFO started out, HTTP/2 was not yet in use – in fact, HTTP/2's precursor, SPDY, didn't even have a draft until 2012. With HTTP/1, a client would make many connections to the same server to make parallel requests. With HTTP/2, clients can make parallel requests over the same TCP connection. Therefore, since it sets up far fewer TCP connections, HTTP/2 benefits much less than HTTP/1 from TFO.

HTTP/3 even plans to use UDP, instead of TCP, to reduce connection setup round trips, gaining the same performance advantage as TFO but without its problems.

TLS 1.3 offers another improvement that reduces round trips, called 0-RTT.

In the end, performance has been improving without requiring TFO’s drawbacks/costs.

The Future of TFO

TFO may never be universally used, but it still has its place. To summarize, the best use case for TFO would be with relatively new clients and servers connected by a network using either no middleboxes or only middleboxes that don’t interfere with TFO in a use case where user tracking isn’t a concern.

DNS is such a use case. DNS is very latency sensitive – eliminating the latency from one round trip would give a perceivable improvement to users. The same TCP connections are made from the same clients to the same servers repeatedly, which is TFO’s best case scenario. And there’s no tracking concern since many DNS clients and servers don’t move around (there’s no “incognito” mode for DNS). Stubby, Unbound, dnsmasq, BIND, and PowerDNS, for example, include or are currently working on support for TFO.


WordPress on AWS The Easy Way with VersionPress

January 23rd, 2019

Developing and administering WordPress can be painful, especially when requirements include the ability to have a team of developers, the ability for a developer to run the site (including content) on their system (so they can reproduce issues caused by content or configuration), multiple content authors, multiple environments (such as staging and production), and the ability to roll back any problematic changes (content, code, or configuration) with ease.

To solve this problem, I developed a solution to make things easier: VersionPress on AWS. It uses Docker to allow developers to run the site locally and an assortment of AWS services to host the site. Check it out and I think you’ll find that life becomes a little bit easier.

VersionPress on AWS is Free Software under the GPLv3 hosted at gitlab. Contributions in the form of issue reports and pull requests are very much welcome.

What is VersionPress?

VersionPress stores content (posts, media, etc) as well as code (themes, plugins, configuration, etc) in source control (git).

  • By looking at the git log, it’s quick and easy to see who changed what and when.
  • All code (plugins, themes, WordPress core itself) and content (pages, posts, comments, configuration) are stored in git. This approach allows content changes, as well as code changes, to be reverted if there's a problem and merged between branches for different environments.
  • Wipe out and recreate the environment at any time without data loss – everything is in git. No need to worry about the AWS RDS server. Migrate between RDS for MariaDB and Aurora at any time.
  • Need a staging site, or a new site to test work in progress? Create a new branch and launch a new stack, and be up and running in minutes.
  • Run the exact same site with the same content locally so you can reproduce issues in production effortlessly – no more “works on my machine” situations

Hosting with AWS

Need a small, cheap staging site, but also a full fledged scalable production site with a CDN? Use the same stack for both – simply specify different parameter values. Change parameter values whenever you want without downtime or data loss. For example, when starting out, leave the CloudFront CDN off to save money. When the site becomes popular, add the CloudFront CDN to better handle the load and improve performance for end users.

AWS features leveraged include Elastic Beanstalk, RDS (MariaDB or Aurora), CloudFront, and more.


Docker is used to run WordPress in AWS Elastic Beanstalk as well as for developers running the site locally. This consistency reduces the occurrences of “it works on my machine” situations and gets new developers on-boarded quicker.

When not to use VersionPress on AWS

Since VersionPress commits all content changes to git, content changes are a bit slower. Therefore, if the site is very content change heavy, such as if it’s a forum with many frequent comments being made, VersionPress on AWS may not be the right solution.

However, the vast majority of WordPress sites have very infrequent content changes, so the slightly slower writes are rarely an issue.

Get Started

Check out the VersionPress on AWS documentation to get started.


DNSSEC on OpenWrt 18.06

August 10th, 2018
DNSSEC ensures that the results of DNS queries (for DNSSEC enabled domains) are authentic. For example, if a domain uses DNSSEC and an attacker (using a man in the middle or spoofing attack) changes the IP address that the domain resolves to, then a DNS resolver supporting DNSSEC will be able to tell and return an error.

DNSSEC provides authentication and integrity; it does not provide for confidentiality. For confidentiality (so your ISP, for example, cannot tell what DNS queries are being made), you can easily add DNS over TLS, which I've described how to set up on OpenWrt in another post.

By setting up DNSSEC on your OpenWrt router, you protect your entire network as all clients will perform DNS requests using your OpenWrt router’s DNS server which in turn will do DNSSEC checking for all queries.

Setting up DNSSEC on OpenWrt 18.06 is remarkably easy. You can use the LuCI web interface to perform these steps or shell commands over ssh; I'm providing the commands here.

  1. Refresh the package list: opkg update
  2. Swap dnsmasq for dnsmasq-full (-full includes DNSSEC support): opkg install dnsmasq-full --download-only && opkg remove dnsmasq && opkg install dnsmasq-full --cache . && rm *.ipk
  3. Edit /etc/config/dhcp
    In the config dnsmasq section, add (or change the values of, if these settings already exist) these settings (the resulting section is shown below):

    • option dnssec '1'
    • option dnsseccheckunsigned '1'
  4. Restart dnsmasq so the changes take effect: /etc/init.d/dnsmasq restart
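After step 3, the dnsmasq section of /etc/config/dhcp ends up looking something like this (other existing options omitted):

config dnsmasq
	option dnssec '1'
	option dnsseccheckunsigned '1'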

Enjoy knowing that now no one is tampering with your DNS queries.


DNS Over TLS on OpenWrt 18.06

August 10th, 2018
DNS over TLS encrypts DNS queries so no one between you and the DNS server you're using (which, by default using these steps, will be Cloudflare's) can tell what DNS queries/responses are being exchanged.

DNS over TLS provides confidentiality but not integrity or authenticity. For those, you need to set up DNSSEC, which I've described how to do on OpenWrt in another post.

By setting up DNS over TLS on your OpenWrt router, you protect your entire network as all clients will perform DNS requests using your OpenWrt router’s DNS server which in turn will use DNS over TLS to perform the actual resolution.

Setting up DNS over TLS using Stubby on OpenWrt 18.06 is remarkably easy. You can use the LuCI web interface to perform these steps or shell commands over ssh; I'm providing the commands here.

  1. Refresh the package list: opkg update
  2. Install the stubby package: opkg install stubby
  3. Start stubby: /etc/init.d/stubby start
  4. Set stubby to start automatically at boot: /etc/init.d/stubby enable
  5. Use stubby as the DNS server by editing /etc/config/dhcp
    In the config dnsmasq section, add (or change the values of, if these settings already exist) these settings (the resulting section is shown below):

    • option noresolv '1'
    • list server ''
  6. Restart dnsmasq so the changes take effect: /etc/init.d/dnsmasq restart
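After step 5, the dnsmasq section of /etc/config/dhcp ends up looking something like the following. The server address here assumes stubby is listening on its OpenWrt package default (127.0.0.1, port 5453); check /etc/config/stubby or /etc/stubby/stubby.yml for the actual address and port on your router:

config dnsmasq
	option noresolv '1'
	list server '127.0.0.1#5453'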

If you’d rather use a different DNS over TLS server than Cloudflare’s, edit /etc/stubby/stubby.yml.

Now you can rest assured that your DNS queries can't be seen by 3rd parties.



MaybeGZIPInputStream

May 29th, 2018
I’m currently working on an application that persists Java serialized data (using ObjectOutputStream) in a database. Java’s serialization format compresses very well – so why not compress the data when storing it then decompress it while reading for a quick win? The problem is that there will still be legacy, uncompressed data, which the application will not be able to access if it assumes all data is now gzipped.

The solution is to use MaybeGZIPInputStream instead of GZIPInputStream. For example, when reading, instead of:

ObjectInputStream ois = new ObjectInputStream(new GZIPInputStream(databaseInputStream));

use MaybeGZIPInputStream instead:

ObjectInputStream ois = new ObjectInputStream(new MaybeGZIPInputStream(databaseInputStream));

And always write data using GZIPOutputStream. Now all of that existing data can still be read, and newly written data gets the benefit of taking up much less storage (and taking up far less bandwidth / time being transferred between the application servers and the database).
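For the write side, something like this works (databaseOutputStream and myObject are stand-ins for however the application writes to the database, mirroring the read example above):

// Always compress on write; MaybeGZIPInputStream transparently handles both old (uncompressed) and new (compressed) data on read.
try (ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(databaseOutputStream))) {
	oos.writeObject(myObject);
} // closing the stream writes the GZIP trailer so the data can be decompressed later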

Here’s the source code of MaybeGZIPInputStream:


import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.util.zip.GZIPInputStream;

/** Detect if the given {@link InputStream} contains compressed data. If it does, wrap it in a {@link GZIPInputStream}. If it doesn't, don't.
 * @author Craig Andrews
 */
public class MaybeGZIPInputStream extends InputStream {

	private final InputStream in;

	public MaybeGZIPInputStream(final InputStream in) throws IOException {
		final PushbackInputStream pushbackInputStream = new PushbackInputStream(in, 2);
		if(isGZIP(pushbackInputStream)) {
			this.in = new GZIPInputStream(pushbackInputStream);
		} else {
			this.in = pushbackInputStream;
		}
	}

	/** Peek at the first two bytes, then push them back so they can be read again. */
	private boolean isGZIP(final PushbackInputStream pushbackInputStream) throws IOException {
		final byte[] bytes = new byte[2];
		final int bytesRead = pushbackInputStream.read(bytes);
		if(bytesRead > 0) {
			pushbackInputStream.unread(bytes, 0, bytesRead);
			if(bytesRead == 2) {
				if ((bytes[0] == (byte) (GZIPInputStream.GZIP_MAGIC)) && (bytes[1] == (byte) (GZIPInputStream.GZIP_MAGIC >> 8))) {
					return true;
				}
			}
		}
		return false;
	}

	// Everything below simply delegates to the wrapped stream.

	public int read() throws IOException {
		return in.read();
	}

	public int hashCode() {
		return in.hashCode();
	}

	public int read(byte[] b) throws IOException {
		return in.read(b);
	}

	public boolean equals(Object obj) {
		return in.equals(obj);
	}

	public int read(byte[] b, int off, int len) throws IOException {
		return in.read(b, off, len);
	}

	public long skip(long n) throws IOException {
		return in.skip(n);
	}

	public String toString() {
		return in.toString();
	}

	public int available() throws IOException {
		return in.available();
	}

	public void close() throws IOException {
		in.close();
	}

	public void mark(int readlimit) {
		in.mark(readlimit);
	}

	public void reset() throws IOException {
		in.reset();
	}

	public boolean markSupported() {
		return in.markSupported();
	}
}



SQS JMS Resource Adapter

May 7th, 2018
The recently released SQS JMS Resource Adapter allows JEE applications (running on any JEE application server, including Glassfish, Payara, JBoss, IBM Liberty, etc) to easily use AWS SQS as a JMS implementation. This resource adapter can be helpful in many situations, such as:

  • Migrating an existing JEE application from another JMS implementation (such as RabbitMQ, ActiveMQ, IBM MQ, etc) to AWS SQS.
  • Allowing the JMS implementation to be switched out. For example, developers can use the ActiveMQ resource adapter, and in production, this AWS SQS resource adapter could be used.

Grab the resource adapter from Maven Central and submit issues and pull requests over at GitHub.
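Application code stays on the standard JMS API no matter which resource adapter is behind it. Here's a rough sketch of sending a message; the JNDI names (jms/ConnectionFactory and jms/MyQueue) are placeholders that depend on how the resource adapter's connection factory and queue are configured in your application server:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class MessageSender {

	// These JNDI names are placeholders; use whatever names the connection factory and queue are bound to.
	@Resource(lookup = "jms/ConnectionFactory")
	private ConnectionFactory connectionFactory;

	@Resource(lookup = "jms/MyQueue")
	private Queue queue;

	public void send(final String text) throws JMSException {
		final Connection connection = connectionFactory.createConnection();
		try {
			final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
			final MessageProducer producer = session.createProducer(queue);
			producer.send(session.createTextMessage(text));
		} finally {
			connection.close();
		}
	}
}

Because nothing SQS-specific appears in the code, switching between (for example) the ActiveMQ resource adapter in development and the SQS resource adapter in production is purely a configuration change.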


Trusting DoD Certificates in Docker and Beanstalk

May 1st, 2018
The US DoD (Department of Defense) uses its own root certificates when signing https certificates for its domains. These root certificates are not trusted by any (commercial/public) operating system, browser, or other client. Therefore, in order to access these sites and not get an error, the DoD certificates must be trusted.

On Windows, go to DISA’s PKI and PKE Tools page and under “Trust Store” follow the directions for the “InstallRoot X: NIPR Windows Installer”

On Linux, download the certificates from MilitaryCAC’s Linux Information page (direct link to the certificates). Then follow your distribution’s instructions on how to install certificates to the trust store. For example, on Red Hat / CentOS / Fedora / Amazon Linux, copy the certificates to /etc/pki/ca-trust/source/anchors/ then run update-ca-trust. On Debian / Ubuntu and Gentoo, copy the certificates to /usr/local/share/ca-certificates/ then run update-ca-certificates.

On Docker, for a Red Hat / CentOS / Fedora / Amazon Linux (or other Fedora-type system) derived container, add the following to the Dockerfile:

RUN yum -y install openssl \
&& CERT_BUNDLE="Certificates_PKCS7_v5.3_DoD" \
&& curl "${CERT_BUNDLE}.zip" --output \
&& unzip "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b" \
&& openssl pkcs7 -in "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b" -print_certs -out "/etc/pki/ca-trust/source/anchors/${CERT_BUNDLE}.pem" \
&& update-ca-trust \
&& update-ca-trust force-enable \
&& rm -rf "${CERT_BUNDLE}" \
&& yum -y remove openssl \
&& rm -rf /var/cache/yum

On AWS Elastic Beanstalk, the .ebextensions mechanism can be used. In the jar/war/etc deployment archive, add these files (a config file that runs a script, plus the script itself):
command: "bash .ebextensions/scripts/"

set -e # stop on all errors
yum install -y unzip openssl
curl "${CERT_BUNDLE}.zip" --output
unzip "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b"
openssl pkcs7 -in "${CERT_BUNDLE}/${CERT_BUNDLE}.pem.p7b" -print_certs -out "/etc/pki/ca-trust/source/anchors/${CERT_BUNDLE}.pem"
update-ca-trust force-enable
rm -rf "${CERT_BUNDLE}"
yum -y remove unzip
rm -rf /var/cache/yum


Coal to Cryptocurrency: Mining Remains a Threat

November 30th, 2017
Coal was the fuel that powered the Industrial Revolution, bootstrapping the modern age as we know it. Acquiring it was simple, using it was easy, and it got the job done. Coal was the perfect resource. Back in those days, efficiency and cleanliness weren’t concerns because of ecological immaturity (society didn’t know any better) and scale (industry wasn’t big enough to impact the world sufficiently to raise concerns).

Cryptocurrency mining is today’s coal mining, and it’s time to start considering alternative solutions.

With any currency (traditional or cryptographic), a few constraints must be in place: a unit of currency cannot be spent more than once (no “double spending”), transactions must complete in a timely manner, and everyone must agree when a transaction is complete. With traditional paper money, it’s clear how all of these constraints are satisfied: counterfeiting is made difficult by secure notes and strongly discouraged by legal means, the transaction completes when physical possession of the note is transferred, and all parties can look at their physical possession of notes to determine a transaction’s state.

Implementing these constraints digitally is more difficult than when using physical items. Bitcoin, being the world’s first cryptocurrency, used the best solutions available. The system bitcoin leverages is known as blockchain with “proof of work.” Bitcoin uses a series of blocks, appropriately referred to as a blockchain, that forms a ledger which records the state of every bitcoin since bitcoin’s inception. Each block records the movement of a number of bitcoins between owners: the proof. In order for a block to be valid, it can include each coin at most once (to prevent double spending); it must include the unique identity (the hash) of the previous block; and it must include the solution to a difficult math problem (a cryptographic hash). The process of solving these problems to form valid blocks is known as mining and those who do so are called miners.

Solving these mining challenges takes hardware, infrastructure, cooling, and the electricity to keep it all going. To incentivize the block discovery process, the system rewards the miner with a predetermined amount of currency. To satisfy the need for timely transactions, each transaction includes a fee to be awarded to the miner. Therefore a miner wants to include as many transactions as possible in a block in order to collect the greatest amount in fees. Once a block has been mined, it's shared with the public so anyone can verify that there was no double spending and that the cryptographic hash is valid. Miners only mine new blocks on top of valid ones.

As cryptocurrencies grow more valuable, the mining rewards grow as well, making mining increasingly lucrative. This draws in more miners which, in turn, uses more energy. As of November 2017, each bitcoin transaction now uses as much energy as the average American house consumes in a week. Furthermore, if it were a country, bitcoin would now rank as the 69th highest energy consumer.

Just as coal was a great way to bootstrap industry, proof of work has done a great job bootstrapping cryptocurrencies. But neither coal nor proof of work are viable paths forward; they’re simply too polluting. So what are the solar panel and wind turbine analogues for cryptocurrency?

One system is proof of stake. At a high level, this system limits miners' output in proportion to the total amount of currency the miner owns. For example, if there are 200 units of currency total and a miner owns 10 units, that miner may only contribute 5% of the mining power. In this way, there's no race for miners to acquire massive computational resources. This system has other advantages over proof of work as well, including avoiding the 51% attack problem. Ethereum, the second largest cryptocurrency by market capitalization, is currently in the process of switching from proof of work to proof of stake. Ark, Dash, and Neo are examples of cryptocurrencies currently using a proof of stake system.

Another system is known as “the tangle,” currently only used by the IOTA cryptocurrency. The tangle’s alternative methodology provides many advantages over blockchain, including zero transaction fees, no miner energy expenditure, and greater decentralization. However, analogous to alternative energy sources in days past, the tangle today is not as proven, researched, tested, or understood as well as blockchain systems are.

In this modern age of global climate change, the world needs to abandon proof of work systems. With their energy expenditures exceeding that of most countries, the cost to the environment is simply too great to continue down this path, especially since alternatives already exist. Like modern industry’s move away from familiar, reliable coal, it’s time for the cryptocurrency community to move on from proof of work to better, more responsible solutions.


Log4jdbc Spring Boot Starter

March 27th, 2017

Logging SQL as it’s executed is a fairly common desire when developing applications. Perhaps an ORM (such as Hibernate) is being used, and you want to see the actual SQL being executed. Or maybe you’re tracking down a performance problem and need to know if it’s in the application or the database, so step #1 is finding out what query is executing and for how long.

Solving this problem once and for all (at least for Spring Boot applications), I created Log4jdbc Spring Boot Starter. It's a very simple yet powerful way to log SQL queries (and more, such as timing information). And unlike other solutions, the queries logged are ready to run – the '?' parameters are replaced with their values. This means you can copy and paste queries from the log and run them unmodified in the SQL query tool of your choice, saving a lot of time.

For background, my motivation for this work is a result of a Spring Boot / Hibernate application I have in progress. I started by turning on Hibernate's SQL logging, but that only logs queries with '?' place holders. To log the values, Hibernate's parameter binding logging can be turned up as well. At least now I had the query and the values for it, but to run it in my query tool (I need to EXPLAIN the query), I had to replace each '?' with its value – and I had over 20 place holders. That got old fast.

There are other approaches to log queries, such as the one described in Display SQL to Console in Spring JdbcTemplate. I'm not a fan of this approach because it only works for queries made through JdbcTemplate (so Hibernate queries aren't logged, for example) and it's an awful lot of code to include and therefore maintain in each project.

I discovered Log4jdbc but it's a bit of a pain to set up in a Spring Boot application because it:

  • doesn’t use the Spring Environment (
  • needs setup to wrap the DataSource’s in the Log4jdbc DataSourceSpy

Wanting to solve this problem precisely once and never again, I created Log4jdbc Spring Boot Starter.

To use it, just add to your project:

  1. <dependency>
  2.   <groupId>com.integralblue</groupId>
  3.   <artifactId>log4jdbc-spring-boot-starter</artifactId>
  4.   <version>[INSERT VERSION HERE]</version>
  5. </dependency>

Then turn on the logging levels as desired in your Spring Boot configuration.
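For example, something like the following in application.properties should work; the logger names (jdbc.sqlonly, jdbc.sqltiming, jdbc.audit, jdbc.resultset, jdbc.resultsettable, jdbc.connection) are log4jdbc's standard ones, so adjust the levels to taste:

logging.level.jdbc.sqlonly=info
logging.level.jdbc.sqltiming=info
logging.level.jdbc.audit=fatal
logging.level.jdbc.resultset=fatal
logging.level.jdbc.resultsettable=fatal
logging.level.jdbc.connection=fatal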


When no logging is configured (all loggers are set to fatal or off), log4jdbc returns the original Connection.

See the Log4jdbc Spring Boot Starter project page for more information.
