Sunday, December 10, 2023

Google Calendar E Ink Display

I've always wanted to display our family calendar in a central location, like in the kitchen. Various options exist, but a powered display really limits where you can place something, and I never liked the way it would look.

When I saw the OpenEPaperLink project, I had to try it out. Here's the end result:

The hardware I purchased for this:
These things are surprisingly affordable for what they offer. The rest of the project is all software, primarily Home Assistant. The more I've used Home Assistant, the more I'm impressed with the amazing community surrounding it.

The things I had to install in Home Assistant:
  • Google Calendar Home Assistant integration: this pulls information from your Google Calendar and imports it into Home Assistant as calendar entities.
  • OpenEPaperLink Home Assistant integration: this detects the OpenEPaperLink hub on your network and adds devices for each of the displays. It then exposes a service call you can use to send data.
  • Home Assistant Node-RED: this integrates Node-RED, a flow-based programming interface, into Home Assistant. It's much more flexible than Home Assistant's built-in YAML-based automations. In particular, it can execute JavaScript code during a flow, which is necessary to transform the data (more on this later).
Once this was all set up and configured, I could create the Node-RED flow. Here's what the overall flow looks like:

The first step in the flow is an Inject node, which is configured to trigger the flow once an hour.

The next step is a call service node that calls calendar.get_events. This is a new method that replaces calendar.list_events and doesn't seem to be documented yet. The parameters I pass fetch all calendar events in the next 4 days, starting from 3 hours ago:
   /* 3 hours ago */
   "start_date_time": $fromMillis(
      $millis() - 1000 * 60 * 60 * 3),
   "duration": {"days": 4}
The output gets piped into a function node. This is really the only code I had to write for this. Instead of inlining it here, I placed it in a gist here.

The script takes the message payload output from the calendar service call, parses the dates and times, has some special handling for multi-day events, formats the dates and events in a custom format, and then transforms it into the output payload format expected by the open_epaper_link.drawcustom service call.
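The real function runs as JavaScript inside the Node-RED function node (and lives in the gist), but the shape of the transformation can be sketched in Python. The `start`/`end`/`summary` field names mirror Home Assistant's calendar event output, while the drawcustom element fields (`type`, `x`, `y`, `value`, `size`) are illustrative assumptions, not the exact schema:

```python
from datetime import datetime

def format_events(events):
    """Sketch: turn calendar events into a list of drawcustom
    drawing elements, one text line per event."""
    lines = []
    for ev in events:
        start = datetime.fromisoformat(ev["start"])
        end = datetime.fromisoformat(ev["end"])
        label = start.strftime("%a %H:%M")
        # Multi-day events get an explicit end day appended.
        if end.date() > start.date():
            label += end.strftime(" - %a")
        lines.append(f"{label}  {ev['summary']}")
    # Build the list of drawing elements the display service renders.
    return [
        {"type": "text", "x": 4, "y": 10 + 14 * i,
         "value": line, "size": 12}
        for i, line in enumerate(lines)
    ]
```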

Next, I wanted to prevent the display from refreshing unnecessarily, so I piped the output of my function into a Filter node, which conveniently has a mode that stops the flow if the input hasn't changed. E Ink displays only draw significant power when the image changes. However, the open_epaper_link.drawcustom service call renders its input into an image and sends that image to the display, and sending the same image twice still consumes power. With the filter node in place, the whole flow stops whenever the content being displayed hasn't changed.
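The filter node's behavior is essentially a stateful "block unless value changes" gate, which is simple enough to sketch:

```python
def make_change_filter():
    """Return a gate that passes a message through only when it
    differs from the previous one, mimicking Node-RED's filter
    node in 'block unless value changes' mode."""
    last = [None]
    def gate(msg):
        if msg == last[0]:
            return None  # unchanged: stop the flow, skip the redraw
        last[0] = msg
        return msg
    return gate
```

Each unchanged payload is dropped before it ever reaches the display service, so the E Ink panel only redraws when the calendar content actually differs.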

The next step is a call service node again, this time with open_epaper_link.drawcustom as the target. The parameter this time is very simple, {"payload": payload}, since my function outputs in the payload format expected by the service call.

That's it! Now I have an auto-updating E Ink display that shows my calendar. I bought a few more of these E Ink displays to play around with, so I'll hopefully be adding more!

Monday, March 11, 2013

Vaurien: The Chaos Proxy

I had a need to test out a program's behavior when its backend web server returned errors. Unit testing would have been difficult in this case, because I specifically wanted to test the program's macro result when encountering, say, 10% server response errors.
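The idea can be simulated in a few lines. This toy stand-in (not Vaurien itself; the names here are mine) fails a request with a given probability and measures the macro success rate a client sees:

```python
import random

def flaky_backend(error_rate, rng):
    """Return a request function that raises (like a 5xx response)
    with the given probability, mimicking injected server errors."""
    def request():
        if rng.random() < error_rate:
            raise RuntimeError("500 Internal Server Error")
        return "200 OK"
    return request

def success_rate(request, attempts=10_000):
    """Measure the fraction of requests that succeed end to end."""
    ok = 0
    for _ in range(attempts):
        try:
            request()
            ok += 1
        except RuntimeError:
            pass
    return ok / attempts

backend = flaky_backend(0.10, random.Random(42))
print(success_rate(backend))  # close to 0.90
```

A proxy like Vaurien does the same kind of fault injection at the network layer, so the program under test needs no modification at all.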

After searching around, I came across Vaurien, the Chaos TCP Proxy. It's seriously cool. As its name suggests, it's a TCP proxy server that you can route a local connection through to a backend server. In TCP mode, it can delay packets, insert bad data, or drop packets, testing the socks (pun intended) off your application.

It also supports several application-level protocols: HTTP, Memcache, MySQL, Redis, and SMTP. Its architecture is very modular, so it's easy to plug in new protocols.

For example, here's how I can run it in HTTP mode:
$ vaurien --protocol http --proxy \
          --backend --behavior 50:error
This says to run an HTTP proxy in front of the backend and return a 5xx HTTP error code 50% of the time. Testing it out with curl:
$ curl --head -H "Host:"
HTTP/1.1 200 OK

$ curl --head -H "Host:"
HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=UTF-8

$ curl --head -H "Host:"
HTTP/1.1 502 Bad Gateway
Content-Type: text/html; charset=UTF-8

Filed this away as a super useful tool.

Thursday, January 17, 2013

Git Branches by Date

I tend to create a huge number of branches in my git repositories, but I have a bad habit of not cleaning them up once I'm finished with them.

I found a nice command from an answer on StackOverflow that allows you to sort branches by date. I modified it slightly to also show the date when printing the branch information:

$ git for-each-ref --sort=-committerdate refs/heads/ --format='%(committerdate) %09 %(refname:short)'
Mon Jan 14 23:46:15 2013 +0000   keyfile-seek-rebase
Sat Jan 12 02:09:04 2013 +0000   perfdiag-mbit-fix
Fri Jan 11 17:38:13 2013 -0800   keyfile-seek
Thu Jan 10 01:05:43 2013 +0000   master

You can also set the date format to be relative (or other possibilities, see man git-for-each-ref):

$ git for-each-ref --sort=-committerdate refs/heads/ --format='%(committerdate:relative) %09 %(refname:short)'
3 days ago   keyfile-seek-rebase
6 days ago   perfdiag-mbit-fix
6 days ago   keyfile-seek
8 days ago   master

I added it to my .gitconfig file as an alias:
[alias]
    branchdates = for-each-ref --sort=-committerdate refs/heads/ --format='%(committerdate:relative) %09 %(refname:short)'

That allows me to just type git branchdates and get a nice listing of my local branches by date.

Monday, January 14, 2013

Google Cloud Storage Signed URLs

Google Cloud Storage has a feature called Signed URLs, which lets you use your private key file to authorize a third party to perform a specific operation.

Putting all the bits together to create a properly signed URL can be a bit tricky, so I wrote a Python example that we just open-sourced in a repository called storage-signedurls-python. It demonstrates signing a PUT, GET, and DELETE request to Cloud Storage.
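For reference, the signing scheme at the time worked roughly like this. The RSA-SHA256 step itself (done with PyCrypto in the example) is omitted, and the helper names here are mine:

```python
from urllib.parse import quote

def string_to_sign(verb, bucket, object_name, expires,
                   content_md5="", content_type=""):
    """Canonical string that gets RSA-SHA256 signed for a
    Cloud Storage signed URL (the legacy signing scheme)."""
    resource = f"/{bucket}/{object_name}"
    return "\n".join([verb, content_md5, content_type,
                      str(expires), resource])

def signed_url(bucket, object_name, expires, access_id, signature_b64):
    """Assemble the final URL from a base64 signature produced by
    signing string_to_sign() with the service account's private key."""
    return (f"https://storage.googleapis.com/{bucket}/{object_name}"
            f"?GoogleAccessId={access_id}"
            f"&Expires={expires}"
            f"&Signature={quote(signature_b64, safe='')}")
```

Getting any one of these pieces wrong (a missing newline, an unescaped signature) produces a 403 with no obvious hint as to why, which is exactly why a working end-to-end example is handy.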

The example uses the awesome requests module for its HTTP operations and PyCrypto for its RSA signing methods.

Monday, December 17, 2012

Google Cloud Storage JSON Example

I recently started working at Google on the Google Cloud Storage platform.

I wanted to try out the experimental Cloud Storage JSON API, and I was extremely excited to see that they support CORS for all of their methods, and you can even enable CORS on your own buckets and files.

To show the power of Cloud Storage + CORS, I released an example on GitHub that uses HTML, JavaScript and CSS to display the contents of a bucket.

The demo uses Bootstrap, jQuery, jsTree, and only about 200 lines of JavaScript that I wrote myself. The project is called Metabucket because it's very "meta": the demo is hosted in the same bucket that it displays!

Thursday, September 13, 2012

Bash Completion for Mac OS X

I've been using my Mac more often lately, and I realized I was really missing the fancy bash completion features that Ubuntu has. Turns out, it's pretty easy to enable on Mac with Homebrew:

brew install bash-completion

Then, as usual, Homebrew tells you exactly what you have to do. Just add this to your .bashrc:

if [ -f $(brew --prefix)/etc/bash_completion ]; then
  . $(brew --prefix)/etc/bash_completion
fi

Now you get fancy completion. Here's me hitting tab twice with a git command:

$ git checkout rdiankov
rdiankov-master          rdiankov/animation       rdiankov/animation2
rdiankov/collada15spec   rdiankov/master          rdiankov/pypy


Sunday, September 02, 2012

Comparing Image Comparison Algorithms

As part of my research into optimizing 3D content delivery for dynamic virtual worlds, I needed to compare two screenshots of a rendering of a scene and come up with an objective measure for how different the images are. For example, here are two images that I want to compare:

As you can clearly see, the image on the left is missing some objects, and the texture for the terrain is a lower resolution. A human could look at this and notice the difference, but I needed a numerical value to measure how different the two images are.

This is a problem that has been well studied in the vision and graphics research communities. The most common algorithm is the Structural Similarity Image Metric (SSIM), outlined in the 2004 paper Image Quality Assessment: From Error Visibility to Structural Similarity by Wang et al. The most common earlier methods were PSNR (peak signal-to-noise ratio) and RMSE (root-mean-square error), but those metrics don't take into account the perceptual difference between two images. Another metric that does account for perception comes from the 2001 paper Spatiotemporal Sensitivity and Visual Attention for Efficient Rendering of Dynamic Environments by Yee et al.
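To make the contrast concrete, RMSE and PSNR are purely pixel-wise: every pixel error counts the same regardless of where it falls, which is exactly what SSIM improves on. A minimal sketch of both, over flat grayscale pixel lists:

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length grayscale
    pixel sequences (values 0-255)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, max_value=255.0):
    """Peak signal-to-noise ratio in decibels; higher means more
    similar, infinite for identical images."""
    err = rmse(a, b)
    return float("inf") if err == 0 else 20 * math.log10(max_value / err)

reference = [10, 200, 30, 128]
distorted = [12, 198, 33, 120]
print(rmse(reference, distorted))  # 4.5
```

Swapping two distant pixels leaves both scores unchanged, even though the result can look very different to a human.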

I wanted to compare a few of these metrics to try and see how they differ. Zhou Wang from the 2004 SSIM paper maintains a nice site with some information about SSIM and comparisons to other metrics. I decided to start by taking the images of Einstein he created, labeled as Meanshift, Contrast, Impulse, Blur, and JPG:

The comparison programs I chose were pyssim for SSIM, perceptualdiff for the 2001 paper's metric, and the compare command from ImageMagick for RMSE and PSNR. The results I came up with for those five images of Einstein:

As you can see, PSNR and RMSE are pretty much useless at comparing these images; they treat them all roughly equally. Note that the SSIM values are inverted so that lower values mean a lower error rate in both the SSIM and perceptualdiff graphs. The SSIM and perceptualdiff metrics agree on Meanshift and JPG, but reverse the ordering of the error rates for Contrast, Impulse, and Blur.

To get a better idea of how these algorithms would work for my dataset, I took an example run of my scene rendering. This is a series of screenshots over time, each with both the ground-truth (correct) image and an image I want to compare to it. I plotted the error values for all four metrics over time:

Here we see a different story. RMSE and SSIM seem almost identical (relative to their scales), and PSNR (inverted) and perceptualdiff also show similar shapes. All of the metrics perform about equally in this case. You can find the code for generating these two graphs in my repository image-comparison-comparison on GitHub.

I think the moral of the story is to make sure you explore multiple options for your dataset; your domain might differ from the areas where these algorithms were originally applied. I ended up using perceptualdiff because it gives a nice, intuitive output value (the number of perceptually different pixels between the two images) and because it supports MPI, so it runs really fast on machines with 16 cores.