Precompiling with Web Deploy

April 27th, 2012

Web Deploy is a great tool for deploying ASP.NET projects: it simplifies deployments, can target IIS 6 or 7, is pretty easy to get working on both the server side and the project side, and is supported by many hosting providers. It works really well when used along with continuous integration. In the first phase, the project is built and a Web Deploy package is made; the command used is (for example) "msbuild [something.sln or something.csproj or something.vbproj] /m /p:Configuration=release /p:DeployOnBuild=True /p:DeployTarget=Package /p:skipJsHint=true". In the second phase, the Web Deploy package is deployed to servers for testing.

One of the problems I noticed was that a user could browse around the site and hit a compilation error. For instance, if there was a misspelled variable in an ASPX page, the user would get a standard "500" error page.

One of the purposes of continuous integration, and build/testing systems in general, is to avoid such problems. If the code behind fails to compile, the team is notified immediately – but if the ASPX page fails to compile, we have to wait for a user to report the problem. That is not acceptable.

The aspnet_compiler tool is the solution to this problem. If you use ASP.NET MVC, your project's MSBuild file will contain a target named "MvcBuildViews" that will run aspnet_compiler for you. Enabling that target by setting the "MvcBuildViews" property makes the build take a bit longer, but there are fewer surprises – to me, the benefit far outweighs the cost.

So the first option is to simply enable "MvcBuildViews." That target writes the precompiled files to the ASP.NET temporary directory (the precompiled output is not included in the Web Deploy package), so they're used only by the machine that invoked the target (in my case, the continuous integration server). For server deployments, you get the benefit of knowing that the pages compiled (solving the original problem), but you don't get the other major benefit of precompiling – the startup performance boost.
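For reference, here's roughly what enabling it looks like in the project file. This is a minimal sketch; conditioning on the Release configuration is just one way to scope it:

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>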

It turns out that combining aspnet_compiler and Web Deploy is a bit tricky. I won't go into the details, other than to say I figured it out and made an MSBuild targets file that can be easily included into projects that use Web Deploy.

So, to get Web Deploy working with precompilation:

  1. Grab Precompile.targets and put it into the directory with your .csproj/.vbproj project file.
  2. Modify your .csproj or .vbproj file: add <Import Project="$(MSBuildProjectDirectory)\Precompile.targets" /> towards the end of the file.
  3. In your project's configuration, set the "AspNetCompiler" property to "true" for whatever configurations you want (I have it enabled for the "Release" configuration).
  4. If you also want to invoke aspnet_merge, ensure the machine doing the building has Visual Studio or the Windows SDK installed, then in your project's configuration set the "AspNetMerge" property to "true" for whatever configurations you want (a sketch of steps 2 through 4 follows this list).
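Here's a minimal sketch of what steps 2 through 4 look like in the project file, assuming Precompile.targets sits next to the project file and that you only want precompilation for Release builds:

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <AspNetCompiler>true</AspNetCompiler>
  <AspNetMerge>true</AspNetMerge>
</PropertyGroup>
<Import Project="$(MSBuildProjectDirectory)\Precompile.targets" />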

aspnet_merge is another interesting tool that merges assemblies. You can read about its benefits on the aspnet_merge MSDN page. I found it beneficial to use aspnet_merge almost always – the only problem I had was on one very large project (with dozens of ASPX pages and a total web site size of a few hundred megabytes), where aspnet_merge took an exorbitantly long time. On our normal projects it takes ~1 minute; on this very large project, it took over an hour.


HTTP Response Caching for Java and Android

February 21st, 2012

HTTP caching is both important, as it reduces bandwidth use and improves performance, and complex, as the rules are far from simple. In my experience, most Java and Android applications either don't do HTTP caching, or they roll their own and end up doing it wrong or making it way too complicated. In other words, they create a non-standard, one-off, unmaintainable solution. And IMHO, that's no solution at all.

If you find yourself using HttpClient, you can use HttpClient-Cache, which is an easy drop-in for Java. See my previous post about HttpClient-Cache for Android. But if you're using HttpUrlConnection (aka java.net.URL.openConnection()), there's no good solution for regular Java or Android. Well, in Android 4.0 and later you can use HttpResponseCache, but with only a small percentage of Android devices running 4.0 or later, that's not a terribly good solution. If you use Android 4.0+'s HttpResponseCache as recommended by Google, then all previous Android versions end up with no HTTP response caching – this causes excess load on your servers, slower performance for the app, and unnecessary bandwidth use.

To fix this problem, I grabbed all the code from AOSP that implements Android 4.0's HttpResponseCache and made it a separate library. This library is easy to use, works on Java 1.5+ and all versions of Android, and is licensed under APLv2 (just like everything else in AOSP). Really, there's no reason not to use it! You can even use it in Java server applications, such as those that use Spring.

To use it, if you’re using Maven, simply add this block to your pom.xml (all artifacts are in Maven Central):
<dependency>
  <groupId>com.integralblue</groupId>
  <artifactId>httpresponsecache</artifactId>
  <version>1.0.0</version>
</dependency>

If you’re not using Maven, you’ll need to add the httpresponsecache jar and its dependency, disklrucache.jar, to your project.

When your application starts, before it makes any HTTP requests, execute this method:
com.integralblue.httpresponsecache.HttpResponseCache.install(File directory, long maxSize);
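For example, in a plain (non-Android) Java application, a minimal sketch might look like this – the cache directory name and size here are just my assumptions:

import java.io.File;
import com.integralblue.httpresponsecache.HttpResponseCache;

public class Main {
    public static void main(String[] args) throws Exception {
        // Install the cache before any HTTP requests are made;
        // HttpURLConnection will then use it transparently.
        HttpResponseCache.install(new File("http-cache"), 10L * 1024 * 1024); // 10 MiB
        // ... make HTTP requests via java.net.URL.openConnection() as usual ...
    }
}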
If you're using Android, and you want to use Android 4.0's HttpResponseCache if it's available, and fall back to this library if it's not available:
final long httpCacheSize = 10 * 1024 * 1024; // 10 MiB
final File httpCacheDir = new File(getCacheDir(), "http");
try {
    // Use the built-in Android 4.0+ cache via reflection if it's available
    Class.forName("android.net.http.HttpResponseCache")
        .getMethod("install", File.class, long.class)
        .invoke(null, httpCacheDir, httpCacheSize);
} catch (Exception httpResponseCacheNotAvailable) {
    // Ln is RoboGuice's logging helper; use android.util.Log if you don't use RoboGuice
    Ln.d(httpResponseCacheNotAvailable, "android.net.http.HttpResponseCache not available, probably because we're running on a pre-ICS version of Android. Using com.integralblue.httpresponsecache.HttpResponseCache.");
    try {
        com.integralblue.httpresponsecache.HttpResponseCache.install(httpCacheDir, httpCacheSize);
    } catch (Exception e) {
        Ln.e(e, "Failed to set up com.integralblue.httpresponsecache.HttpResponseCache");
    }
}

The source code to the library is available on GitHub. I’m already using it in my CallerID Android app. If you end up using this library, please leave me a comment.

That’s it – enjoy easy to use HTTP caching!


jshint in msbuild

February 17th, 2012

I recently had to add build-time JavaScript validation to an ASP.NET project. It took me quite a while to figure out how to do so in a (reasonably) maintainable, understandable way.

I’m using Visual Studio 2010, and the project targets .NET 3.5. The same approach would work fine if the project was targeting .NET 4.0.

I’m using NuGet to manage dependencies. The first thing I did was add node-jshint as a dependency of the project.

I opened the project’s file (something.csproj). I added a target:

<Target Name="jshint">
  <ItemGroup>
    <JavaScript Include="@(Content)" Condition="%(Extension) == '.js'" />
  </ItemGroup>
  <PropertyGroup>
    <node-jshint>$(PackagesDir)\node-jshint.0.5.5</node-jshint>
  </PropertyGroup>
  <Exec Command="&quot;$(node-jshint)\tools\jshint.bat&quot; &quot;%(JavaScript.FullPath)&quot; --reporter &quot;$(node-jshint)\tools\lib\vs_reporter.js&quot; --config &quot;$(MSBuildProjectDirectory)\jshintrc.json&quot;" ContinueOnError="true" />
</Target>

Make BeforeBuild depend on jshint:

<Target Name="BeforeBuild" DependsOnTargets="jshint">

Add a new text file to the root of the project called "jshintrc.json". If the file is included in the Visual Studio project, make sure the build action is "None" so Visual Studio doesn't try to do anything with it. The file contents are just JSHint options in JSON form (a sketch follows below). The latest available version of node-jshint at this time, 0.5.5, doesn't deal with a Byte Order Mark (BOM) in the jshintrc.json file, so when saving it, be sure the BOM isn't included.
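Here's a minimal sketch of a jshintrc.json; the specific options are my own assumptions – use whichever JSHint flags suit your project:

{
  "curly": true,
  "eqeqeq": true,
  "undef": true,
  "browser": true,
  "predef": ["jQuery"]
}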

Now whenever Visual Studio builds the project, JSHint errors will appear in the VS error list just like all other types of errors. It runs JSHint on all .js files included in your project as content (the way .js should be included in your project).


Best way to use HttpClient in Android

September 30th, 2011

Many Android applications access Internet resources over HTTP (and my projects are no exception). There are 2 common ways to do that: use Apache HttpClient 4.x (which is included in Android) or use HttpURLConnection (from Java). Google stated in a September 29, 2011 blog post that they prefer you use HttpURLConnection, but many apps and a large number of Java libraries already use HttpClient and won't be changing soon (if ever). So HttpClient is here to stay.

With that in mind, the performance and footprint of HttpClient can vary widely based on how it's set up. Here are my recommendations:

  • Always use one HttpClient instance for your entire application. HttpClient is not free to instantiate – each additional instance takes time to create and uses more memory. However, more importantly, using one instance allows HttpClient to pool and reuse connections along with other optimizations that can make big differences in how your application performs.
  • Use a thread safe connection manager. If you’re using one global HttpClient, it will be accessed by multiple threads concurrently – so if you don’t use a thread safe connection manager, Bad Things will happen.
  • Use Android's android.net.SSLCertificateSocketFactory and android.net.SSLSessionCache if they're available. Using these instead of the base HttpClient SSLSocketFactory will reduce round trips when connecting to the same https site multiple times, making your application feel faster.
  • Set the user agent to something useful. That way, the server’s logs will be far more useful, which may save you (or someone else) a lot of time later if (when) a problem occurs.

With all that said, here's how I get my global HttpClient instance. This code should work on all Android versions (it should even work all the way back to 1.0 – if anyone cares). I use Google Guice's Provider interface and injection to get the application, but you can easily adapt this to a form that doesn't use Guice.

import java.lang.reflect.Method;
import java.util.Locale;

import org.apache.http.client.HttpClient;
import org.apache.http.conn.ClientConnectionManager;
import org.apache.http.conn.scheme.PlainSocketFactory;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeRegistry;
import org.apache.http.conn.scheme.SocketFactory;
import org.apache.http.conn.ssl.SSLSocketFactory;
import org.apache.http.impl.client.AbstractHttpClient;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
import org.apache.http.params.HttpConnectionParams;
import org.apache.http.params.HttpParams;
import org.apache.http.params.HttpProtocolParams;

import android.app.Application;
import android.content.Context;
import android.content.pm.PackageManager.NameNotFoundException;
import android.os.Build;
import android.util.Log;

import com.google.inject.Inject;
import com.google.inject.Provider;

public class HttpClientProvider implements Provider<HttpClient> {
    @Inject
    Application application;

    // Wait this many milliseconds max for the TCP connection to be established
    private static final int CONNECTION_TIMEOUT = 60 * 1000;

    // Wait this many milliseconds max for the server to send us data once the connection has been established
    private static final int SO_TIMEOUT = 5 * 60 * 1000;

    private String getUserAgent(String defaultHttpClientUserAgent) {
        String versionName;
        try {
            versionName = application.getPackageManager().getPackageInfo(
                    application.getPackageName(), 0).versionName;
        } catch (NameNotFoundException e) {
            throw new RuntimeException(e);
        }
        StringBuilder ret = new StringBuilder();
        ret.append(application.getPackageName());
        ret.append("/");
        ret.append(versionName);
        ret.append(" (");
        ret.append("Linux; U; Android ");
        ret.append(Build.VERSION.RELEASE);
        ret.append("; ");
        ret.append(Locale.getDefault());
        ret.append("; ");
        ret.append(Build.PRODUCT);
        ret.append(")");
        if (defaultHttpClientUserAgent != null) {
            ret.append(" ");
            ret.append(defaultHttpClientUserAgent);
        }
        return ret.toString();
    }

    @Override
    public HttpClient get() {
        AbstractHttpClient client = new DefaultHttpClient() {
            @Override
            protected ClientConnectionManager createClientConnectionManager() {
                SchemeRegistry registry = new SchemeRegistry();
                registry.register(
                        new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
                registry.register(
                        new Scheme("https", getHttpsSocketFactory(), 443));
                HttpParams params = getParams();
                HttpConnectionParams.setConnectionTimeout(params, CONNECTION_TIMEOUT);
                HttpConnectionParams.setSoTimeout(params, SO_TIMEOUT);
                HttpProtocolParams.setUserAgent(params, getUserAgent(HttpProtocolParams.getUserAgent(params)));
                // Thread safe connection manager: required since the single HttpClient
                // instance is shared across the whole application
                return new ThreadSafeClientConnManager(params, registry);
            }

            /** Gets an HTTPS socket factory with SSL session caching if such support is available,
             * otherwise falls back to a non-caching factory
             * @return the best available SSL socket factory
             */
            protected SocketFactory getHttpsSocketFactory() {
                try {
                    // These classes aren't available on all Android versions, so use reflection
                    Class<?> sslSessionCacheClass = Class.forName("android.net.SSLSessionCache");
                    Object sslSessionCache = sslSessionCacheClass.getConstructor(Context.class).newInstance(application);
                    Method getHttpSocketFactory = Class.forName("android.net.SSLCertificateSocketFactory").getMethod("getHttpSocketFactory", new Class[]{int.class, sslSessionCacheClass});
                    return (SocketFactory) getHttpSocketFactory.invoke(null, CONNECTION_TIMEOUT, sslSessionCache);
                } catch (Exception e) {
                    Log.e("HttpClientProvider", "Unable to use android.net.SSLCertificateSocketFactory to get a SSL session caching socket factory, falling back to a non-caching socket factory", e);
                    return SSLSocketFactory.getSocketFactory();
                }
            }
        };
        return client;
    }
}
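If you're wiring this up with Guice (as I do), one way to guarantee the "one HttpClient instance" recommendation is to bind the provider in singleton scope. A minimal sketch – the module name is mine:

import org.apache.http.client.HttpClient;
import com.google.inject.AbstractModule;
import com.google.inject.Singleton;

public class HttpModule extends AbstractModule {
    @Override
    protected void configure() {
        // One HttpClient for the whole application: the provider is invoked once
        // and the resulting instance is reused everywhere it's injected.
        bind(HttpClient.class).toProvider(HttpClientProvider.class).in(Singleton.class);
    }
}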

I use this approach in my CallerID app (source) and an upcoming app (that I cannot yet talk about). I’ve also submitted patches (which have been accepted) to reddit is fun, so it will use this approach in its next version.


My First Android App: CallerID

April 18th, 2011

I've been looking for an excuse to write an Android app, and those annoying "unknown number" phone calls presented themselves as the perfect problem to solve.
My CallerID application consists of two parts: a service that runs on a server and, given a phone number, returns the information associated with it, and an Android app that uses the service to display information to the user upon request or when the phone rings.

The service portion is licensed under the AGPLv3 so anyone can be autonomous and run it themselves, instead of relying on my instance. I based the idea and some of the implementation (although not much of the original source is left) on CallerID Superfecta. It's written in PHP, has pretty minimal requirements, and is easy to install – check out the source for the CallerID service on gitorious. In case you don't want to install it yourself, you can use my instance, which the Android app uses by default.

To query it, request URLs like this: http://callerid.integralblue.com/callerid.php?format=json&num=%28860%29%20429-7433 The service currently returns the listing name, company name (if applicable), and address in JSON format. The service can also return just the listing name, which is great for use with Asterisk, using URLs like this: http://callerid.integralblue.com/callerid.php?format=basic&num=%28860%29%20429-7433 To use this with Asterisk, make sure you have CURL support available, and modify your dialplan with a directive such as this one:
exten => _., n, Set(CALLERID(name)=${CURL(http://callerid.integralblue.com/callerid.php?format=basic&num=${CALLERID(num)})})
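For illustration, here's a hedged sketch of querying the JSON endpoint from plain Java with HttpURLConnection. It's not code from the app – just a demonstration of the URL format shown above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class CallerIdLookup {
    public static void main(String[] args) throws Exception {
        // The phone number must be URL-encoded
        String num = URLEncoder.encode("(860) 429-7433", "UTF-8");
        URL url = new URL("http://callerid.integralblue.com/callerid.php?format=json&num=" + num);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            json.append(line);
        }
        in.close();
        // The response is a small JSON object containing the listing name,
        // company name (if applicable), and address.
        System.out.println(json);
    }
}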

The Android application is written in Java, licensed under the GPLv3, and is available on the Google Android Market and (hopefully) soon on the Amazon Market and f-droid. The Android app source code is also available on gitorious. The application requires at least Android 1.5 – and supporting Android 1.5 all the way to 2.3 was an interesting experience, especially in the area of the Contacts API. It uses OpenStreetMaps (osmdroid) for maps (instead of Google Maps, as not every device has Google Maps, and Google Maps is not Free Software) and roboguice as an application framework.

I initially built the app using ant, like most Android apps, but decided to switch to using Maven for a few reasons:

  • Maven manages dependencies, so I don’t need to include large binaries in the source repository.
  • Using a Maven multimodule approach, I can have a separate integration testing project (application-it) and application project (application), making automatic testing easy.
  • I’ve used Maven for a number of projects in the past, and find it easy and sufficiently “magic” (but not overly so), compared to ant, which I find too “low level.”
  • Maven will automatically run proguard (for optimization, not obfuscation), zipalign, and signing when given a "release" profile, or skip those steps when running under the default profile.

I used the maven-android-plugin as a key component of the build process. If you’re starting an Android project, I highly recommend you check it out.

To build the app:

  1. Install the Android SDK (you’ll need at least SDK versions 3 and 10)
  2. Set the ANDROID_HOME environment variable as described on the maven-android-plugin Getting Started page
  3. Install Maven (Windows and Mac installers are available for download, and it’s packaged for most Linux distributions)
  4. Install Git (Windows and Mac installers are available for download, and it’s packaged for most Linux distributions)
  5. git clone git://gitorious.org/callerid-for-android/mainline.git mainline
  6. cd mainline/application
  7. mvn clean install
  8. Since osmdroid is not in the Maven repository, you’ll need to manually install it. These directions can also be found in application/pom.xml:
    1. wget http://osmdroid.googlecode.com/files/osmdroid-android-3.0.3.jar
    2. mvn install:install-file -DgroupId=org.osmdroid -DartifactId=osmdroid -Dversion=3.0.3 -Dpackaging=jar -Dfile=osmdroid-android-3.0.3.jar
  9. mvn clean install (again) – it should succeed

The apk is located at “mainline/application/target/callerid-1.0-SNAPSHOT.apk”

These steps do not perform the integration tests. If you want to run those, in the step above that says “cd mainline/application” run “cd mainline” instead. Note that you’ll either have to have your Android phone plugged in to your computer (and running in debug mode, recognized by adb) or an Android emulator running.

To create a ready-to-release apk, which includes running proguard, zipalign, and jarsigner, run this command from the "mainline" directory:
mvn clean install -Prelease -Dsign.keystore=/home/candrews/projects/callerid/test-key.keystore -Dsign.alias=mykey -Dsign.storepass=testtest -Dsign.keypass=testtest
Note that this command will sign using the test key. If you really want to distribute the apk, you'll need to generate your own key using keytool. In either case, the ready-to-distribute apk is located at "mainline/application/target/callerid-1.0-SNAPSHOT-signed-aligned.apk"

If you want to develop the application using Eclipse, first make sure that you can successfully compile the application using the steps above. Then you need to install some Eclipse plugins:

From Eclipse:

  1. Select File-Import…
  2. Maven, Existing Maven Projects
  3. The Root Directory is the “mainline” directory you checked out from git
  4. Finish
  5. You should now have 3 projects in your Eclipse workspace: callerid, callerid-it, and callerid-parent
  6. For the “callerid” and “callerid-it” project, modify the Build Path by right clicking on the project, selecting “Build Path” then “Configure Build Path.” You need to remove “JRE System Library” and then click “Add Library” “Android Classpath Container” (see https://code.google.com/a/eclipselabs.org/p/m2eclipse-android-integration/issues/detail?id=41)
  7. That’s it – Eclipse should automatically build as you go, and you can use the Eclipse/Android development tools just as everyone else does.

I hope both the server portion and application portion are reasonably easy to run, build, and understand. If you have any questions or concerns, please comment – and contributions are of course very welcome to both projects!
Please note that this project is not in any way related to my employer – it is a completely personal, non-work project. This article has been cross-posted to Isobar’s blog.


The Coming IPv6 Evolution

September 27th, 2010

The coming exhaustion of the IPv4 address space has been in the news for a few years now. Various organizations have warned that the end is nigh, and finally, it appears the transition to IPv6 is really starting to pick up steam. The transition to IPv6 will provide more capability, more opportunity, more performance, and a generally better user (and administrator) experience.

The switch to IPv6 will impact every web site, every server, every device – every Internet user – over the next couple of years. If you haven’t already gotten started with your transition plan, now is the time to do so.

The Problem

IPv4 addresses are 32 bits, commonly expressed as 4 octets in dotted quad notation, like this: x.x.x.x, where x is a number between 0 and 255. People have been using and memorizing IP addresses for almost 30 years. For instance, your home network is probably using IP addresses in the range of 192.168.x.x.

Because the IPv4 addresses we all know and love are 32 bits, there are about 4 billion unique addresses. However, some of those addresses are reserved for special use and some for home networks. Also, in the 1980s, before anyone realized just how big this Internet thing would be, addresses were assigned in a very inefficient manner: for example, MIT, Apple, Prudential Insurance, General Electric and IBM each have about 16 million addresses – HP has over 30 million. Naturally, because the Internet started in the US, all of these big block allocations were made to US entities. As other countries came online, and continue to come online, and as new technologies like virtualization, cell phones, tablets, TVs, PVRs, cars, even toasters join the Internet, the number of IP addresses available is simply insufficient. The consensus is that the last block of IP addresses will be allocated in 2012.

Doomsday

What happens after the last block of addresses is allocated? Probably not much – at least immediately. China and India would be affected first, as they have the largest online growth. ISPs will start charging more for service (or they won't issue IPv4 addresses at all, issuing only IPv6 addresses), as IP addresses that were formerly essentially free will now carry an increasingly high value. Markets will develop to buy and sell IP addresses. To avoid using addresses, ISPs will stop allocating Internet IP addresses to their customers, instead making up their own address space using NAT (like your home router does for your home network), essentially creating little Internets.

Over time, this “little Internets” problem will spread, and will occur throughout the world. There are 3 major issues with the “little Internets” “solution” to IPv4 address exhaustion:

  1. Devices on separate little Internets will not be able to directly communicate – or at least not without jumping through very tricky hoops. Gaming, VoIP, and P2P all become less reliable, more difficult to develop, and harder to use.
  2. Hosting will become more expensive, crippling innovation. Right now, if you have the next big idea, you pay $10/month (maybe even less) for web hosting, $10/yr for a domain name, and you’re off. Imagine how crippling it would be if there was an additional IP address fee tacked on – perhaps starting at $5/month, then quickly jumping to much more than that.
  3. Because of damage done to P2P technologies and the added expense of hosting, the Internet will more quickly transform into less of a collaborative platform, and more into a consumer/provider platform, becoming more like Cable TV than the free for all open forum it is now.

IPv4 exhaustion will be very bad for the common person, even if he doesn't understand the problem. However, it will be very good for currently established institutions, because it will become increasingly more expensive to compete with them. For example, Google started out as two college kids – they ended up taking down a billion dollar company's product (DEC's AltaVista). With the added costs and challenges that would exist in a post-IPv4-exhaustion world, Google could not have possibly become what it is today.

The Solution

The solution is to grow the address space – make more addresses. Sounds simple, right? Well, not really.

The size of the IPv4 address space is very firmly planted in the IPv4 protocol; there is no way to change it. As early as the late 1980s, people began to realize that the address space was not big enough, so work on the next protocol began. Meanwhile, the world changed: mobile phones came about and became extremely popular, virtualization became popular (again), VoIP was invented, and computer science evolved new ideas.

In 1998, the specification for IPv6 was finalized. IPv6 includes optimizations for mobile devices (such as cell phones), quality of service is built in (so VoIP works better), security is built in, efficiency and performance are improved, and the address space is vastly larger (128 bits instead of 32 bits). There are about 4 billion IPv4 addresses, compared to the roughly 3.4×10^38 provided by IPv6. Considering that works out to about 5×10^28 addresses for each human being alive today, the IPv6 address space should last us a while.

Today, all modern operating systems (Windows XP, Linux 2.6, MacOS 10.3) support IPv6, and almost all software does.

Who’s Pushing IPv6

The US government realized that IPv4 exhaustion was going to be a problem, so it mandated that all new technology purchases be IPv6 capable, and all networks be upgraded to IPv6 by 2008. As with most government projects, the work is not 100% done, but many federal systems are running IPv6 right now.

China started the "China Next Generation Internet" project to push IPv6 adoption within its borders very hard. The 2008 Olympics websites and all other network services were available over IPv6. China of course realizes that if IPv6 isn't adopted and the IPv4 doomsday comes along, it will be at a serious disadvantage.

ISPs also realized that without IPv6, service will become more expensive and cut into margins. So ISPs such as Comcast and Verizon are implementing IPv6.

In Australia and Europe, many ISPs have already implemented IPv6.

Mobile phone service providers are pushing IPv6, as mobile phones are major consumers of IP addresses. For this reason, T-Mobile is running an IPv6 test program, and Verizon is mandating that all new phones on its network be IPv6 capable in the near future.

Realizing that many of its customers will soon have IPv6 access, and may not always have IPv4 access, Google is moving its infrastructure to IPv6.

Other companies are realizing the same thing, such as Akamai and Facebook.

Should My Next Project Run On IPv6?

Yes. The benefits include:

  • Future proofing. IPv4 will be around for a long time still, but it won’t always be cheap, and it will become more complicated to administer. If your site will last more than a year, you should provide IPv6 connectivity.
  • Customer experience. If you serve only over IPv4, and a customer using only IPv6 visits your site, your site may not work (and if it does work, it will be slower). If you serve both IPv4 and IPv6, a customer supporting both may experience improved performance.

Supporting IPv6 can be easy. Ask your ISP or hosting provider if they support IPv6 (many backbone providers and large, commercial ISPs do) – if they don’t, go elsewhere. Your operating system already supports it, and chances are, the rest of your software does too.
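As a quick sanity check once your DNS and connectivity are in place, here's a hedged little Java sketch that verifies a hostname actually has an AAAA (IPv6) record; ipv6.google.com is just a convenient test host:

import java.net.Inet6Address;
import java.net.InetAddress;

public class CheckIpv6 {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "ipv6.google.com";
        boolean hasIpv6 = false;
        // getAllByName returns every A and AAAA record the resolver can find
        for (InetAddress address : InetAddress.getAllByName(host)) {
            if (address instanceof Inet6Address) {
                System.out.println(host + " has IPv6 address " + address.getHostAddress());
                hasIpv6 = true;
            }
        }
        if (!hasIpv6) {
            System.out.println(host + " has no AAAA record (or your resolver isn't returning one)");
        }
    }
}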

As Cameron Byrne from T-Mobile USA repeatedly told content providers at Google’s IPv6 Implementor’s Conference:

Our users are going to access your content over IPv6. The only relevant question is “will we make the AAAA record or will you?” Wouldn’t you rather be the one to do it so you have control?

Cross posted to the Isobar blog – please comment there.


Facebook Went Down – Did You?

September 24th, 2010

Yesterday, Facebook went down for about 2.5 hours. Thousands of sites across the web, seemingly unconnected to Facebook, went down with it.

Facebook hosts thousands of "apps," including games such as Farmville, Celtics 3 Point Play, and the Bruins Face Off. For 2.5 hours, all of those apps were unavailable, which meant a lot of lost revenue (through lost ad views and lost transactions) for their owners. Facebook also hosts "pages" for everything from the BBC World Service to Barack Obama and Radiohead – so for 2.5 hours, all of these pages, which provide information on everything from political rallies to news discussion and concert planning, were out of service. Even beyond the walled garden of Facebook, there are sites elsewhere on the Internet that use Facebook's login mechanism to authenticate their users – for 2.5 hours, every site that did so was down. And even beyond that, many sites host "Like" buttons and other Facebook social widgets, and for the 2.5 hour duration, the lucky sites were simply missing those widgets, while the not-so-lucky ones showed their users JavaScript errors, and some even stopped working entirely.

Facebook is relied upon by many thousands of sites across the Internet, providing a single point of failure for a truly astounding portion of the web.

Is that really a good idea?

The Internet was created to be a reliable network that would route around failures: any disrupted connection would simply be routed around. This philosophy was baked into the Internet Protocol, into how the backbone is designed, how companies set up servers in redundant configurations, and how the fundamental protocols work. For example, consider email. If the gmail.com server goes down, only its users are affected; if I'm emailing my friend @isobar.com from my @integralblue.com address, there is absolutely no impact to me.

However, lately, with the rise of Facebook, Twitter, and Google, a few very important points in the network are appearing, and when they fail, they wreak havoc. Perhaps it's time to start thinking about how we're gradually eliminating the reliability and redundancy that has served the Internet so well for so long, and to start moving back towards those founding Internet principles.

Cross posted to the Isobar blog – please comment there.


Microblogging inside the Firewall

March 30th, 2010

Cross posted to Molecular Voices. Please comment there.

Little strings of text are big business – both publicly and inside the corporate firewall. As we all know, Twitter is pretty big – TV and radio ads for major companies mention their Twitter sites and even business cards reference Twitter URLs nowadays. But Twitter cannot be used with internal information, so there’s a lot of collaborative power waiting to be unleashed by microblogging inside the corporate firewall. Consider how much more productive everyday workers could be if they shared a few quick bits of knowledge.

For example, consider this timeline:

Alice: Client loved the sales pitch – we won! #sales
Brion: Vending machine has been re-stocked
Charles: #CSS reminds me of aspect oriented programming #aop
Darleen: Project is progressing according to schedule #project3
Evan: Fellow #project3 members: Is this front end policy useful for us? http://ur1.ca/shyu
Fred: @evan Possibly – let’s discuss this with @brion over lunch
Zach: @fred @evan we used those guidelines on #project5 and it worked out well
ITBot: Email server test failed. IT has been contacted.

These examples show that:

  • The barrier to entry is incredibly low (Alice posted immediately after a sales pitch, probably from a plane)
  • Useful business information is exchanged, as well as team-building (Brion provided non-business information about the vending machine that others will likely appreciate)
  • Because discussion is open to a broader audience than email, others participate in unexpected and beneficial ways (see how Zach, who isn’t even on project 3, helped the project 3 team)
  • Bots can publicize information gathered automatically. For example, IT could set up a bot to monitor servers and automatically publish status updates. Bots can also subscribe to RSS feeds, bridging wikis and blogs with the microblogging world.

There are many other benefits once metadata is considered.

  • People choose who to follow. If Alice isn’t interested in the state of IT systems, she doesn’t subscribe to the ITBot.
  • Users can mark a message as a favorite. Messages that are favorited many times show up in a “favorites” list, which is a great source of useful information.
  • By clicking on #project3, Brion can find all posts about his project, providing a powerful search option.
  • Messages may optionally have location data attached. Users can tell if the person they're talking to is in the same office as they are, on vacation, working from home, at a client office, or at another branch of their company. This data allows users to make fast decisions about how to further communicate (phone, email, or walk).

At Molecular, we wanted to take advantage of what “firewalled” microblogging has to offer, so we evaluated a few private microblogging tools, looking for software that provides a familiar interface, allows customization of the look and feel, and has clients for different devices (like Twitter has). In the end, we chose StatusNet. (In the interest of full disclosure, I’m a contributing developer to the StatusNet project.)

The StatusNet software (which also runs the ~200k user identi.ca site) is Free and Open Source, so anyone can feel free to install, evaluate, and use it without worrying about contracts or licensing fees. However, StatusNet, Inc. (the company that supports the StatusNet software) offers professional services if you choose to run the software on site, or hosting if you prefer it to be hosted elsewhere. If the "go it yourself" route is selected, installation is pretty straightforward as it runs on the popular LAMP stack and has a vibrant community willing to answer questions.

StatusNet can integrate with LDAP/Active Directory and even some Single Sign On solutions. No worrying about managing accounts as employees come and go, so private information stays private.

The software also supports a variety of clients on a number of platforms, from Windows, Mac, and Linux to iPhones and Androids.

After developing a custom skin, selecting which plugins to enable, and testing with a small group, we officially launched "IsoBuzz" to the entire organization last week. We're already seeing some interesting conversations. Over time, we hope to see IsoBuzz become a powerful tool for knowledge sharing and collaboration, especially among distant offices and between departments.


Running Ubuntu in VMWare

October 28th, 2009

VMWare is a leading (if not the leading) virtualization solution. Unfortunately, it is also proprietary software, which means that distributions tend not to care too much about it (and in my opinion, rightfully so!).

My employer uses VMWare, and it recently instituted a policy that all VMs must have VMWare Tools installed on them, which causes a number of problems for Linux sysadmins such as myself.

  1. VMWare Tools is not Free software
  2. VMWare Tools is a pain to acquire: it’s not packaged in any distribution (due to the non-Free nature), finding it on VMWare’s site is a serious pain, and the version that VMWare server includes seems to be perpetually out of date.
  3. Installing VMWare Tools is not a fun experience. The installer requires you to figure out how to get the kernel sources, then compiles and installs some kernel modules, and throws a bunch of proprietary binaries all over your file system. Also, depending on what kernel you’re using, the modules may not compile at all, in which case you have to hunt down patches.
  4. Installing VMWare Tools on a bunch of servers is an even bigger annoyance, because there’s no real automated way to do it.

The solution to all of these problems is the open-vm-tools project. It’s packaged in Debian and Ubuntu, and by all means, should Just Work.

Here's where things get really interesting. Open-vm-tools really does Just Work – if the packaging is done correctly. As it stands right now, the packaging just copies the kernel module sources, and you are expected to figure out how to compile and install them, and to do so each time you change kernels. Thanks to DKMS, this could be done automatically.

In Ubuntu bug #277556, that’s exactly how it’s done. I’ve been using the PPA referenced in that bug on 5 servers for about 4 months now, and it works great. Installation? As simple as apt-get install open-vm-tools! Upgrade your kernel? Open-vm-tools recompiles automatically.

So for all you Debian/Ubuntu users who run VMs on VMWare, take a look at this bug, and you should save yourself some serious time and effort.


oEmbed

August 7th, 2009

oEmbed is a relatively simple concept, which can be basically thought of as hyperlinking to the next level. According to oembed.com: “oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows a website to display embedded content (such as photos or videos) when a user posts a link to that resource, without having to parse the resource directly.”

Today, if I want to embed this Youtube video into a WordPress blog (such as this one), I need to complete these steps:

  1. Start typing my new blog post
  2. Switch browser windows, and go to the Youtube video's page
  3. Copy the “embed” code, which is kind of crazy looking:
    <object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/Pube5Aynsls&hl=en&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Pube5Aynsls&hl=en&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>
  4. Switch back to the WordPress window, and paste the embed code (as HTML) into my WordPress post

Clearly, that's not ideal. Figuring out where the embed code is, and how to copy and paste it as HTML into WordPress, is not very easy or intuitive. Now consider a future where WordPress is an oEmbed consumer and Youtube is an oEmbed provider. To do the same thing, these are the steps:

  1. Start typing my new blog post
  2. Click the “embed” button in WordPress
  3. Enter the regular web browser link to the Youtube video in the box
  4. Click “OK.” WordPress will automagically figure out how to embed the video, and do it for you.

No copy and paste, no tabbing between pages, and best of all, no code. The user doesn’t need to know what oEmbed is, or how it works.
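Under the hood, an oEmbed consumer simply asks a provider endpoint to describe a URL and gets back JSON (or XML). Here's a hedged Java sketch – the endpoint shown is oohembed's (mentioned below) and is purely illustrative; the response fields ("type", "title", "html", and so on) come from the oEmbed spec:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class OEmbedConsumer {
    public static void main(String[] args) throws Exception {
        // Illustrative endpoint only - substitute your provider's oEmbed endpoint
        String endpoint = "http://oohembed.com/oohembed/";
        String videoUrl = "http://www.youtube.com/watch?v=Pube5Aynsls";
        URL request = new URL(endpoint + "?format=json&url=" + URLEncoder.encode(videoUrl, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) request.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            json.append(line);
        }
        in.close();
        // For a video, the "html" field contains the markup a consumer (like WordPress)
        // would drop into the page - no manual copying of embed codes required.
        System.out.println(json);
    }
}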

oEmbed can be used in more creative ways, too. For example, if you link to a Youtube video on the microblogging site identi.ca, the link will get a little paper clip next to it, and when clicked on, the video player will open in a lightbox. For example, take a look at this notice.

At this early stage of oEmbed’s lifetime, there are not many providers or consumers. To jumpstart the process, Deepak Sarda created oohembed, a service that acts as a provider for many sites that don’t yet support oEmbed themselves (since Youtube isn’t an oEmbed provider, identi.ca uses oohembed, and that’s how the video embedding notice example works). oohembed supports a number of popular sites, such as Youtube, Vimeo, Hulu, Wikipedia, and WordPress.com.

Hopefully, we’ll see more and more sites and pieces of software support oEmbed as both providers and consumers to improve their user experience. WordPress 2.9 will likely be an oEmbed consumer (so the theoretical process I gave above may soon become a reality), and I’ve created a plugin that makes WordPress an oEmbed provider. Here’s to an easier (to embed, at least) future!
