Monday, December 17, 2012

Google Cloud Storage JSON Example

I recently started working at Google on the Google Cloud Storage platform.

I wanted to try out the experimental Cloud Storage JSON API, and I was extremely excited to see that it supports CORS for all of its methods; you can even enable CORS on your own buckets and files.

To show the power of Cloud Storage + CORS, I released an example on GitHub that uses HTML, JavaScript and CSS to display the contents of a bucket.

The demo uses Bootstrap, jQuery, jsTree, and only about 200 lines of JavaScript that I wrote myself. The project is called Metabucket, because it's very "meta" - the demo is hosted in the same bucket that it displays!
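As a rough sketch of what a bucket-listing request looks like, here's a minimal Python version. This is my own illustration, not code from the demo: the endpoint follows the shape of the JSON API's objects list method, the helper names are made up, and private buckets would additionally need an OAuth token.

```python
import json
from urllib.request import urlopen

# JSON API endpoint for listing a bucket's objects (public buckets only).
API = "https://www.googleapis.com/storage/v1/b/{bucket}/o"

def object_names(listing):
    """Pull the object names out of a parsed JSON listing response."""
    return [item["name"] for item in listing.get("items", [])]

def list_bucket(bucket):
    # Fetch and parse the listing for a publicly readable bucket.
    with urlopen(API.format(bucket=bucket)) as resp:
        return object_names(json.load(resp))
```

Because the API sends CORS headers, the same request works from JavaScript in the browser, which is all the demo needs.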

Thursday, September 13, 2012

Bash Completion for Mac OS X

I've been using my Mac more often lately, and I realized I was really missing the fancy bash completion features that Ubuntu has. Turns out, it's pretty easy to enable on Mac with Homebrew:

brew install bash-completion

Then, as usual, Homebrew tells you exactly what you have to do. Just add this to your .bashrc:

if [ -f $(brew --prefix)/etc/bash_completion ]; then
  . $(brew --prefix)/etc/bash_completion
fi

Now you get fancy completion. Here's me hitting tab twice with a git command:

$ git checkout rdiankov
rdiankov-master          rdiankov/animation       rdiankov/animation2
rdiankov/collada15spec   rdiankov/master          rdiankov/pypy

Awesome.

Sunday, September 02, 2012

Comparing Image Comparison Algorithms

As part of my research into optimizing 3D content delivery for dynamic virtual worlds, I needed to compare two screenshots of a rendering of a scene and come up with an objective measure for how different the images are. For example, here are two images that I want to compare:


As you can clearly see, the image on the left is missing some objects, and the texture for the terrain is a lower resolution. A human could look at this and notice the difference, but I needed a numerical value to measure how different the two images are.

This is a problem that has been well studied in the vision and graphics research communities. The most common algorithm is the Structural Similarity (SSIM) index, outlined in the 2004 paper Image Quality Assessment: From Error Visibility to Structural Similarity by Wang et al. The most common earlier methods were PSNR and RMSE, but those metrics don't take into account the perceptual difference between two images. Another perception-aware metric comes from the 2001 paper Spatiotemporal Sensitivity and Visual Attention for Efficient Rendering of Dynamic Environments by Yee et al.
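For reference, RMSE and PSNR are simple pixel-wise computations. Here's a stdlib-only Python sketch over flattened grayscale pixel sequences (the helper names are mine, not from any of the tools mentioned below):

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length pixel sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; undefined when the images match."""
    return 20 * math.log10(max_val / rmse(a, b))
```

Note that neither computation looks at local image structure, only per-pixel differences, which is exactly the shortcoming SSIM was designed to address.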

I wanted to compare a few of these metrics to see how they differ. Zhou Wang, one of the authors of the 2004 SSIM paper, maintains a nice site with information about SSIM and comparisons to other metrics. I decided to start with the images of Einstein he created, labeled Meanshift, Contrast, Impulse, Blur, and JPG:


The comparison programs I chose were pyssim for SSIM, perceptualdiff for the 2001 paper's metric, and ImageMagick's compare command for RMSE and PSNR. Here are the results for those five images of Einstein:


As you can see, PSNR and RMSE are pretty much useless at comparing these images; they treat them all equally. Note that the SSIM values are inverted so that, in both the SSIM and perceptualdiff graphs, lower values mean lower error. The SSIM and perceptualdiff metrics agree on Meanshift and JPG, but reverse the ordering of the error rates for Contrast, Impulse, and Blur.

To get a better idea of how these algorithms would work for my dataset, I took an example run of my scene rendering: a series of screenshots over time, each with both a ground-truth (correct) image and an image I want to compare against it. I plotted the error values for all four metrics over time:


Here we see a different story. RMSE and SSIM seem almost identical (relative to their scales), and PSNR (inverted) and perceptualdiff also show similar shapes. All of the metrics perform about equally in this case. You can find the code for generating these two graphs in my repository image-comparison-comparison on GitHub.

I think the moral of the story is to make sure you explore multiple options for your dataset. Your domain might be different from the areas where these algorithms were originally applied. I ended up using perceptualdiff because it gives a nice, intuitive output value (the number of perceptually different pixels between the two images) and because it supports MPI, so it runs really fast on machines with 16 cores.

Wednesday, August 22, 2012

How I Write LaTeX

I've been really happy using the TeXlipse plugin for Eclipse to write my LaTeX documents. It has some really awesome features, so I thought I'd highlight a few:

LaTeX Context

TeXlipse knows all the LaTeX commands, so you get auto-completion when you start typing:

Document Context

TeXlipse keeps an index of labels and references, so if you try and reference or cite something that doesn't exist, you get a warning:

It even has auto-completion for these labels too:
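Concretely, the bookkeeping TeXlipse tracks is just the usual \label/\ref pairs. A made-up snippet illustrating both behaviors:

```latex
\section{Results}\label{sec:results}

As Section~\ref{sec:results} shows ...  % known label: auto-completes, no warning
See also Figure~\ref{fig:missing}.      % unknown label: TeXlipse flags it
```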

Bibliography Folding

The editor is full of little things that make your life easier. For example, you can configure bibliography files to be folded by default when you open them. It makes it much easier to navigate the document:

Spell Checking

TeXlipse can use the aspell command, or you can download a dictionary file, and you get instant feedback as you type:

Continuous Compiling

TeXlipse runs pdflatex (or you can use latex+dvipdf, pslatex+ps2pdf, etc.) in the background after you save a file. It knows how many times to invoke the compiler so the document gets built properly. If you have the PDF open, most viewers will automatically reload the file when it changes, so you get instant feedback on how the final document looks. There's also the option of using Pdf4Eclipse, a PDF viewer that runs inside Eclipse. What's really nice about it is that it uses SyncTeX to connect the PDF with the original source files: you can double-click on something in the PDF and jump directly to the source that generated that part of the document. That makes it really easy to make quick edits while reading the PDF.

Makefile

Since I usually collaborate with people who don't use Eclipse, and in case I want to build the project outside of Eclipse, I also use the latex-makefile project. This Makefile is amazing: you don't have to configure anything. You just drop it into the source directory and everything magically works. I can't recommend it enough.

Tuesday, July 24, 2012

Gigabit Hack Weekend

I went to a gigabit hacking event this past weekend hosted by Mozilla Ignite at the Internet Archive's building. I got to meet some interesting people and had a great time.

The general theme of the event was to demonstrate what kinds of applications we can build when people's internet connections get really fast (e.g., gigabit). Since I work on federated 3D repositories with both the Sirikata project's Open3DHub and OurBricks, I wanted to showcase applications that can be built with online 3D repositories and fast connections.

3D models can be quite big. Games usually ship on a DVD full of content or make you download several gigabytes before you can start playing. In contrast, putting 3D applications on the web demands low-latency start times. To showcase online 3D, I took the awesome ThreeFab project and added the ability to load files from an external repository.

For a demo:

  1. Visit http://blackjk3.github.com/threefab/.
  2. Click the "Import" button.
  3. Pick a 3D model from http://open3dhub.com/ and copy the link from the "Direct Download" box.
  4. Paste the link into the Import box.
  5. The model should load into the editor. (Most models work.)

The ThreeFab editor lets you export the scene you make. Here's a demo of a scene I created this weekend, running on jsfiddle:

What's important to remember here is that all of the 3D content is being loaded asynchronously in the background, directly from Open3DHub. The editor supports URLs from any site, as long as the site sends CORS headers. As a result of the weekend, the Internet Archive is working on enabling these headers for its content, so hopefully we will be able to load 3D content directly from the archive soon.
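A quick way to sanity-check whether a host sends those headers is to fetch a resource and look at Access-Control-Allow-Origin. This is an illustrative sketch (the function names are mine, not part of ThreeFab), and it only covers the simple non-preflighted case:

```python
from urllib.request import urlopen

def allows_cors(headers, origin="*"):
    """True if an Access-Control-Allow-Origin header permits the origin."""
    allowed = {k.lower(): v for k, v in headers.items()}.get(
        "access-control-allow-origin")
    return allowed in ("*", origin)

def check_url(url):
    # Fetch the resource and inspect its response headers.
    with urlopen(url) as resp:
        return allows_cors(dict(resp.headers))
```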

Thanks to Mozilla and the Internet Archive for putting on a fun event!

Thursday, July 19, 2012

Automatic Debug Shell in Python

I've been working on a few long-running Python scripts lately. I find debugging to be difficult when a script takes a long time to run, since you don't get any feedback until execution hits the point in the code you're working on.

Python's awesome pdb module lets you drop into an interactive shell by calling the set_trace function. This is really useful because you can inspect local variables from the interactive shell. I found it tedious, though, to keep inserting these statements into my code, hoping they were in the right place, and wasting time when an exception was thrown somewhere other than where I was expecting.

I found a nice recipe that lets you drop into interactive mode as soon as an unhandled exception occurs. I created a gist that incorporates some changes from the comments. It only enters the interactive shell if the script has an appropriate TTY session, and if the exception is not a SyntaxError.

Here's an example script showing how to use it:
# test.py
import debug

def what():
    x = 3
    raise NotImplementedError()

if __name__ == '__main__':
    what()

And an example of using it:
$ python test.py 
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    what()
  File "test.py", line 5, in what
    raise NotImplementedError()
NotImplementedError

> /home/jterrace/python-exception-debug/test.py(5)what()
-> raise NotImplementedError()
(Pdb) x
3

This is really nice for debugging, because it drops you to a shell that has all the local variables when the exception occurred.

Update
Yang pointed out the ipdb module. It looks awesome as an alternative to pdb and it comes with a method for doing this automatically.

Seamless Sharing

Yesterday I uploaded an album from my recent trip to the ICME 2012 conference in Melbourne, Australia to Google+. I wanted to share it on Facebook, so I posted a link to the album. I was surprised that no preview popped up on Facebook, so I ran some experiments.

I created a public album on Google+, a public album on Facebook, and posted a public tweet with an image attached. I then shared each one on the other two social networks. I was surprised that the only source that actually showed content was the tweet, shared to both Google+ and Facebook. Here's a grid showing all the results:

Comparison of sharing a photo/album across different social networks. The rows are the source and the columns are the destination.

What's interesting is that other things do work. For example, I shared a link to a Flickr album, and its content shows up across all three networks. I'm guessing that maybe Facebook and Google+ don't have the proper Open Graph tags for the other networks to parse, but this really should be a seamless experience across networks.

Wednesday, July 11, 2012

Xvfb Memory Leak Workaround

I've been using Xvfb, the X virtual frame buffer, for a few projects. Xvfb allows you to run applications that require a display, without actually having a graphics card or a screen. I started noticing that the resident memory used by the Xvfb process would continually go up. I could literally run processes that connect to the display in a loop and watch the memory grow. Clearly, there was some kind of memory leak going on.

I finally found a workaround that prevents this memory issue: if you add -noreset to Xvfb's arguments, the problem vanishes. By default, when the last client disconnects from Xvfb, the server resets itself. My guess is that this works fine when video memory is backed by a hardware device, but Xvfb has a bug where it doesn't free the memory it allocated for the buffer. With the -noreset option, the server no longer resets and the memory isn't lost.

Here's my Xvfb command line:
Xvfb :1 -screen 0 1024x768x24 -ac +extension GLX +render -noreset

An explanation of the other arguments:

  • :1 - Runs the server on "display 1". Most X servers run on display 0 by default.
  • -screen 0 1024x768x24 - Sets screen 0 (the default screen) to a resolution of 1024x768 with 24 bits per pixel.
  • -ac - Disables access control because X access control is incredibly painful to get right, and since the server is usually only accessible from localhost, it's not a big deal.
  • +extension GLX - Enables the OpenGL extension, allowing graphics programs that use OpenGL to work inside the virtual display.
  • +render - Enables the X Rendering extension, enabling advanced image compositing features that most applications will need.

I also use a simple Xvfb start script. It's available as a gist.
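My actual start script is in the gist, but wrapping the command line above from Python looks roughly like this (a sketch; build_xvfb_cmd and start_xvfb are illustrative names of my own):

```python
import os
import subprocess

def build_xvfb_cmd(display=1, width=1024, height=768, depth=24):
    """Assemble the Xvfb command line described above."""
    return ["Xvfb", ":%d" % display,
            "-screen", "0", "%dx%dx%d" % (width, height, depth),
            "-ac", "+extension", "GLX", "+render", "-noreset"]

def start_xvfb(display=1):
    # Launch the server and point X clients at it via $DISPLAY.
    proc = subprocess.Popen(build_xvfb_cmd(display))
    os.environ["DISPLAY"] = ":%d" % display
    return proc
```

A real script would also wait for the server socket to appear and clean up the process on exit, which is the sort of thing the gist handles.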