Steve Bennett blogs

…about maps, open data, Git, and other tech.

Super lightweight map websites with Github

GitHub, the online host for Git repositories, recently added support for GeoJSON files. Sounds boring, right? It actually lets you do something very cool: build your own “dots on a map” website with virtually zero code.

An example of GeoJSON on Github I whipped up. Click it.

Here’s what you need to do.

  1. Get a Github repository if you don’t have one already. They’re free.
  2. Create a GeoJSON file. You can export to this format from various tools. One easy way to get started is to upload a CSV file with locations to dotspotting.org and download the GeoJSON from there. Or, even easier, use geojson.io to place dots, lines and polygons with a graphical tool – it can save directly to your GitHub. (If you’d rather script the conversion from a CSV, see the sketch after this list.)
  3. Here’s what my test file looks like:
    {
      "type": "FeatureCollection",
      "features": [
        {
          "type": "Feature",
          "geometry": {
            "type": "Point",
            "coordinates": [144.9, -37.8]
          },
          "properties": {
            "title": "Scooter",
            "description": "Here's a dot",
            "marker-size": "medium",
            "marker-symbol": "scooter",
            "marker-color": "#a59",
            "stroke": "#555555",
            "stroke-opacity": 1.0,
            "stroke-width": 2,
            "fill": "#555555",
            "fill-opacity": 0.5
          }
        },
        {
          "type": "Feature",
          "geometry": {
            "type": "Point",
            "coordinates": [144.4, -37.5]
          },
          "properties": {
            "title": "Cafe",
            "description": "Coffee and stuff",
            "marker-size": "medium",
            "marker-symbol": "cafe",
            "marker-color": "#f99",
            "stroke": "#555555",
            "stroke-opacity": 1.0,
            "stroke-width": 2,
            "fill": "#555555",
            "fill-opacity": 0.5
          }
        }
      ]
    }

    It’s worth validating with GeoJSONLint.

  4. Commit this file, say test.geojson, to your Github repository. You can get a preview of it in Github:

    The test GeoJSON file, as seen on GitHub.

  5. Now the really cool part. Embed the map into your own website. This is stupidly easy:
    <!DOCTYPE html>
    <html>
    <body>
    <script src="https://embed.github.com/view/geojson/stevage/georly/master/test1.geojson?height=500&width=1000"></script>
    </body>
    </html>

    If you don’t have a website, site44 is an extremely easy way to get started. You place HTML files into your DropBox, and they get automagicked onto the web, with a subdomain: something.site44.com.
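
If you already have a CSV of points and would rather script step 2 than use dotspotting or geojson.io, here’s a rough Python sketch (the file names and the name/lat/lon column headings are just assumptions for illustration):

# Convert a simple CSV of points (name, lat, lon columns assumed) into GeoJSON.
import csv, json

features = []
with open('points.csv') as f:
    for row in csv.DictReader(f):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON coordinates are [longitude, latitude]
                "coordinates": [float(row['lon']), float(row['lat'])]
            },
            "properties": {"title": row['name']}
        })

with open('test.geojson', 'w') as f:
    json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)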

Now what?

That’s it! What’s especially interesting about the hosting on GitHub is that it’s a very easy way to have a lightweight shared geospatial database of points, lines or polygons. Here’s how you could add dots to my map:

  1. Fork my repository
  2. Add a few points, by modifying the GeoJSON file
  3. Commit your changes to your repository.
  4. Send me a pull request
  5. I accept the changes, and voila – now your points are shown with mine.

Using this method, we have a “review before publish” workflow, and a full version history of every change.
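
In git terms, a contributor’s side of that workflow looks roughly like this (a sketch – the fork URL and file name are whatever GitHub gives you):

# after forking the repository on GitHub
git clone https://github.com/YOURNAME/georly.git
cd georly
# edit the GeoJSON file, adding your points
git add test.geojson
git commit -m "Add a couple of points"
git push origin master
# then open a pull request back to the original repository on GitHub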

Caveats

This is a nifty tool for prototyping social mapping applications, but it obviously won’t cut it for production purposes:

  • No support for different layers: all the dots are always shown
  • No support for different basemaps: always the same OpenStreetMap style
  • No authoring tools: you must use something else to generate the GeoJSON
  • Obligatory “rendered with (heart) by GitHub” footer.

Soon you’ll want to build a proper application, using tools like MapBox, CloudMade, CartoDB, Leaflet etc.

Digital humanities for beginners: get started with the Trove API

Trove is the National Library of Australia’s “discovery interface” – an amazing catalogue of books, newspapers, maps, music, journal articles etc. The Trove API is a special website that programs can talk to in order to run queries, retrieve individual records and so on. With some creative ideas and a bit of coding skill, you could make a pretty nifty tool and win a digital humanities award.

The documentation is pretty thorough, but doesn’t tell you how to get started. These days, web development is mostly about assembling the right tools and putting them together, but it can be hard for a novice to know what they need.

This guide is for you if:

  • You want to try exploring Trove programmatically; and
  • You’ve done some coding before; but
  • You’re not very familiar with coding against an API, or writing web applications.

In short, if you think the digital humanities might be for you.

What we’ll need

  • A server on which to run your web application. You can use your laptop directly, but better to use a virtual machine inside your laptop. Even better, a server running on the internet, like a VM on the NeCTAR Research Cloud.
  • A web application framework: Flask and Python
  • A library that makes it easy to make requests to other web servers (we’ll use Requests)
  • A JavaScript framework which makes dynamically modifying web pages much easier (we’ll use jQuery)
  • A CSS framework, which makes your page look attractive with no effort (we’ll use Twitter Bootstrap)
  • A templating engine, to combine HTML and the results of our queries into a web page (we’ll use Jinja2)
  • Our server logic, expressed as code in files.

A server

As an Australian academic, you can create your own server on the NeCTAR Research Cloud. That would be the best option, but can be a bit fiddly for a novice so we won’t explain that here.

The next best option is to create a server on a virtual machine inside your laptop, using VirtualBox and Vagrant. That keeps everything related to this one project self contained, which is a big bonus when you start working on other projects. It also creates a pristine, controlled Linux environment, which means you’re less likely to run into issues caused by the peculiarities of your own system. Try the Vagrant Getting Started guide.
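
If you go the Vagrant route, getting a throwaway Ubuntu VM is roughly this (a sketch – the box name and URL depend on what’s current when you try it):

# in your project directory
vagrant init precise64 http://files.vagrantup.com/precise64.box
vagrant up     # create and boot the VM
vagrant ssh    # log into it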

As a last resort, the simplest approach is to just install stuff directly on your computer. This may work ok to begin with.

Building a web application

We’re going to build a web application that will talk to Trove. This isn’t the only approach. You could:

  • Build a command line tool that pulls stuff out of Trove and saves it to disk
  • Make a desktop application that a user would have to download and install
  • Make a static website (not a web application) that does everything with JavaScript.

But a web application is probably what you’re going to want eventually and is the easiest to share with other people.

We’ll use Python, the programming language, and Flask, a web application framework for Python.

Why Flask? It’s easy to install, lightweight, and is well suited to experimental programming like this. (An alternative would be Django, which would be much better if you wanted to store stuff in a database and write something big and scalable.)

You can install these on the command line like this:

sudo apt-get install -y python-pip
sudo pip install flask

That is, first you install Python’s “pip” installer. Pip then knows how to install Python packages like Flask.

Whatever server you’re using, create a directory somewhere (perhaps under your home directory), called ‘trovetest’.

cd ~
mkdir trovetest
cd trovetest

Making requests to the Trove API

The Trove documentation says that to perform a query, we need to access a URL like this:

http://api.trove.nla.gov.au/result?key=<INSERT KEY>&zone=book&q=%22piers%20anthony%22

You might think you need to construct that string yourself, complete with question marks, equals signs and %22’s. You could do that using Python’s built-in urllib2 library:

import urllib2
apikey='1b2c3d4f'
troveURL = 'http://api.trove.nla.gov.au/'
zone = 'book'
search = '%22' + 'piers' + '%20' + 'anthony' + '%22'
r = urllib2.urlopen(troveURL + 'result' + '?' + 'key=' + apikey + '&' + 'zone=' + zone + '&' + 'q=' + search).read()

But it’s easier than that, if we use the Requests library.

import requests
apikey = '1b2c3d4f'
troveURL = 'http://api.trove.nla.gov.au/'
search = '"piers anthony"'
r = requests.get(troveURL + 'result/', params={
    'key': apikey,
    'zone': 'book',
    'q': search
})

So, install the Requests library as well:

sudo pip install requests

jQuery

These days, virtually all web applications use a JavaScript framework to make manipulating the web page more practical. We’ll use jQuery, which is also required by Twitter Bootstrap.

Create a directory under ‘trovetest’ called ‘static’, and download and unzip the latest jQuery.
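
Something like this (a sketch – in practice you can grab just the minified file; 1.10.1 is the version linked later in this post):

cd ~/trovetest
mkdir static
cd static
wget http://code.jquery.com/jquery-1.10.1.min.js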

CSS Framework

Making a web page look good using straight CSS (cascading style sheets) is really hard. Making it look good in every browser and device is a nightmare. Save yourself the pain, and start from somewhere sensible like Bootstrap, made by Twitter. (By default, it looks a bit like Twitter’s website, but you can change that.)

Without Twitter Bootstrap

Adding Bootstrap and minimal changes.


Although it’s best to download both jQuery and Bootstrap to your server and link to them there, you can get started quickly by linking to them on the web. Just include this text at the top of your HTML files.

<head>
<script src="http://code.jquery.com/jquery-1.10.1.min.js"></script>
<link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap.min.css">
<script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/js/bootstrap.min.js"></script>
</head>

Templating engine

The most common way to build a web page dynamically is by writing a “template” of the HTML page. It can be as simple as this:

<body>
<h1>Request complete</h1>
You requested this article: {{ article.name }}
</body>

When you tell the web application framework to render the page, you pass in the list of variables – in this case, an object called ‘article’ with an attribute ‘name’.

Templating example

We’ll use the Jinja2 templating engine, because that’s what Flask uses.
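
A minimal sketch of that render step in Flask (the route, template name and the stand-in ‘article’ dict are made up for illustration):

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/article')
def show_article():
    article = {'name': 'Example article'}   # stand-in for a real Trove record
    # In article.html, {{ article.name }} renders as "Example article"
    return render_template('article.html', article=article)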

Our server logic

Now that everything is in place, let’s write a small application that makes a single request. We’ll ask the Trove API for a list of newspaper articles mentioning Piers Anthony (to follow their documentation’s example).

Create this file as ~/trovetest/simple.py

from flask import Flask, render_template, request # load Flask itself
import requests, json # load Requests library, and JSON library for interpreting responses

app = Flask(__name__) # create our web application object, named after this file
apikey = '1b2c3d...' # insert your API key, following instructions here: http://trove.nla.gov.au/general/api
troveURL = 'http://api.trove.nla.gov.au/' # all Trove API URLs start with this

@app.route("/list") # this is what we do when someone goes to "http://localhost:5000/list"
def list(): # the function that defines the behaviour
    query = { # this dict structure contains the parameters that make up a URL
        'key': apikey, # following the Trove API documentation.
        'zone': 'newspaper',
        'q': 'piers anthony',
        'encoding': 'json', # we specify 'json' as the format because json is easy to parse.
        'include': 'tags' }
    r = requests.get(troveURL + 'result/', params=query)

    # Requests will transform the params property into a string like
    # ?key=...&zone=newspaper&q=%22piers%20anthony%22&include=tags&encoding=json

    # After calling requests.get(), r is now an object containing lots of information about the response - error codes, content etc.

    r.encoding = 'ISO-8859-1' # We avoid some unicode conversion errors by adding this step.
    results = r.json() # Convert the text we receive (in JSON format) into a Python dict
    return render_template('list.html', results=results) # Render the template, passing through that dict

app.run(host='0.0.0.0', debug=True) # Run the defined web server, allowing anyone to connect, in debug mode. (This is not safe for a public web server.)

 

And save this as ~/trovetest/templates/list.html:

<!doctype html>
<head>
    <title>Trove test</title>
    <script src="http://code.jquery.com/jquery-1.10.1.min.js"></script>
    <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap.min.css">
    <script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/js/bootstrap.min.js"></script>
</head>
<body>
    <h1>Results</h1>

    <ul>
    {% for article in results.response.zone.0.records.article %}
      <li>
        <a href="{{ article.troveUrl }}">{{ article.title.value }}</a> - <a href="item/{{ article.id }}">More info</a>
        <br/>
        <i><small>{{ article.snippet | safe }}</small></i>
      </li>
    {% endfor %}
    </ul>
</body>

Run your server

On the command line, tell Python (and hence Flask) to run your application:

/home/ubuntu/trovetest$ python simple.py
 * Running on http://0.0.0.0:5000/
 * Restarting with reloader

Flask will sit there waiting for someone to connect to your webserver in a browser.

In your browser, go to http://localhost:5000/list

The results of our little Trove query.

What next?

That was a very quick, high-level view of all the pieces you need. If you made it through the example, you’ll be in an excellent position to start making interesting web applications building on the Trove API. There’s no shortage of information on the web about all these technologies – the hard bit is knowing what you need to know.

All sorts of fun things can be done:

  • Cool exploration interfaces
  • Bots
  • Visualisations like graphs, charts etc
  • Text mining and analysis

Tim Sherratt (aka @wragge) has made many fun creations with the Trove API, documented on his discontents.com.au blog. Tim works for the Trove team, who are generally pretty happy to help.

 

Trello Tennis

(Credit: Flickr’s Carine06, CC-BY-SA)

Here’s my method of keeping track of stuff to do. Other tools just didn’t work for me. Most to-do list tools make two assumptions, I think:

  1. Work arrives from somewhere (eg, a manager), and is readily broken up into pieces roughly 0.5-4 hours in size.
  2. Your primary responsibility is processing these tasks without gaps, maintaining a high level of output.

My work, technology for academics, tends to involve coordinating lots of little projects with lots of different people. At any given moment, there are few tasks “in my court”, and they tend to be small. My primary responsibility is to keep all projects moving, and ensure that projects aren’t held up by a task being with me too long.

So, I ran with the ball in my court / ball in your court metaphor. It’s useful when:

  1. Your work is primarily about projects, and particularly a relationship between you and the main stakeholder in that project.
  2. You anxiously wonder whether you’re supposed to be doing something on a project.
  3. You have trouble remembering all the relevant bits of information for each project.

“Projects” here can be very small, and often include very early stage proposals that haven’t formed into anything very concrete.

Presenting: Trello Tennis.

Trello Tennis in (fictitious) action

To start with, the two most important columns:

  • Ball in my court: projects whose current state requires me to do something (as small as writing an email or checking something, or as big as preparing a whole workshop structure)
  • Ball in their court: projects whose current state is waiting for someone else to do something, such as providing feedback on a document, organising an event, or replying to an email.

The goal each day is to move projects from the “my court” column to the “their court” column. I periodically review “my court”, placing the most urgent items at the top. I also review “their court”, checking whether I should follow up or offer assistance.

More columns, that turned out to be useful for me:

  • All good/meeting scheduled: projects whose current state is that both parties are waiting for an event, usually a meeting. Neither of us has to do anything until then.
  • Wut?: the most dangerous column, projects where I’m not sure what the current state is after all. Am I waiting on them? Did I promise to research something and get back to them?
  • Somebody else’s problem (SEP): where responsibility for the project has been shifted to someone else. I don’t need to do anything, and I don’t even need to monitor it at the moment. Someone else may eventually hand it back to me, but until then I don’t worry.
  • Archive: the project is actually finished.
  • Blocked: there is a technical reason why the project can’t advance, and no one is currently doing anything about it.

Using Trello

Trello works really well for this.

The name of each card combines the project name with a quick summary of my current task (if it’s in my court), or what I’m waiting on (if it’s in theirs). Examples:

  • WidgetUpgrade – check if compatible with Ubuntu Precise
  • FoobarSystem – Anita fixing DNS problem
  • Maps for Psych – meeting Thurs 3rd 2pm

What’s particularly fantastic about using Trello is that each card can also track all the work on that project. Changing the title automatically creates a history trace, or I can add comments to record more information. Need somewhere to record a server name and login details? Use the card’s description field.

You can also use coloured labels to group projects, such as tagging by university in my case.

Of course, each individual project frequently has its own task tracking system: perhaps another Trello board, Github issues, Pivotal Tracker, Google spreadsheets etc. I link to them from the card’s description, ultimately tying together all information systems for all projects under a single roof.

The rules

Each day (or whenever):

  1. If there is anything in “Wut?”, attempt to clarify its state, then move it to the right column.
  2. Check each item in “Ball in their court” to see if it needs attention. Set alarms if you need. If it’s time to follow up, just move it back to “Ball in my court” and adjust the title.
  3. Review the “Ball in my court” list, ordering them appropriately – most urgent at the top.
  4. Work through items in “Ball in my court” if you have any free time. :)

To make this natural, I order the columns from left to right as follows: Wut, My Court, Blocked, Their Court, All Good, SEP, Archive. Stress is on the left, calm is on the right.

The goal is to get your board to look like this blissful fantasy:

A totally successful day in Trello Tennis land.

These examples are actually tiny compared to (my) reality. By the end of 2013, I had 21 active cards, plus another 10 inactive (SEP and Archive). If you find this technique at all useful, or have any improvements, I’d love to hear about it.

Filtering for sanity

(Added 4 Feb 2014)
Sometimes it gets daunting looking at the huge mounds of work in front of you. Filtering can help cut through the distraction. Two types have worked for me:

Filter by organisation

Filter by client

Are you working for a particular client today, and don’t want to see anything related to any other clients? Assign one label per client, then turn on the filter!

Filter by … you

To focus on just one or two tasks that you need to finish, first assign yourself to those cards. Then, filter to only show cards that you’re assigned to. This feature is intended for collaboration, but it works nicely.

Terrain in TileMill: a walkthrough for non-GIS types

I created a basemap for http://cycletour.org with TileMill and OpenStreetMap. It looked…ok.


But it felt like there was something missing. Terrain. Elevation. Hills. Mountains. I’d put all that in the too-hard basket, until a quick google turned up two blog posts from MapBox, the wonderful people who make TileMill: “Working with terrain data” and “Using TileMill’s raster-colorizer”. Putting these two together, plus a little OCD, led me to this:


Slight improvement! Now, I felt that the two blog posts skipped a couple of little steps, and were slightly confusing on the difference between the two approaches, so here’s my step by step. (MapBox, please feel free to reuse any of this content.)

My setup is an 8 core Ubuntu VM with 32GB RAM and lots of disk space. I have OpenStreetMap loaded into PostGIS.

1. Install the development version of TileMill

You need the development version if you want to use the raster-colorizer, which you’ll want while developing your terrain style if you’re after the “snow up high, green valleys below” look. Without it, you have to pre-render the elevation color effect, which is time consuming: if you want to tweak anything (say, to move the snow line slightly), you have to re-render all the tiles.

Fortunately, it’s pretty easy.

  1. Get the “install-tilemill” gist (my version works slightly better)
  2. Probably uncomment the mapnik-from-source line (and comment out the other one). I don’t know whether you need the latest mapnik.
  3. Run it. Oh – it will uninstall your existing TileMill. Watch out for that.
  4. Reassemble stuff. The dev version puts projects in ~/Documents/<username>/MapBox/project, which is weird.

2. Get some terrain data.

The easiest source is the ubiquitous NASA SRTM DEM (digital elevation model) data. You get it from CGIAR. The user interface is awful. You can only download pieces that are 5 degrees by 5 degrees, so Victoria is 4 pieces.

If you’re downloading more than a few pieces, you’ll probably want to automate the process. I wrote this quick script to get all the bits for Australia:

for x in {59..67}; do
  for y in {14..21}; do
    echo $x,$y
    if [ ! -f srtm_${x}_${y}.zip ]; then
      wget http://droppr.org/srtm/v4.1/6_5x5_TIFs/srtm_${x}_${y}.zip
    else
      echo "Already got it."
    fi
  done
done
unzip '*.zip'


3. Process SRTM .tifs with GDAL.

To have any fun with terrain mapping in TileMill, you need to produce separate layers from the terrain data:

  1. The heightmap itself, so you can colour high elevations differently from low ones.
  2. A “hillshading” layer, where southeast facing slopes are dark, and northwest ones are light. This is what produces the “terrain” illusion.
  3. A “slopeshading” layer, where steep slopes (regardless of aspect) are dark. I’m ambivalent about how useful this is, but you’ll want to play with it.
  4. Contours. These can make your map look AMAZING.

Contours – they’re the best.

In addition, you’ll need some extra processing:

  1. Merge all the .tifs into one. (I made the mistake of keeping them separate, which creates a lot of extra layers in TileMill.) Because they’re GeoTiffs, GDAL can magically merge them without further instruction.
  2. Re-project it (converting it from some random EPSG to Google Web Mercator – can you tell I’m not a real GIS person?)
  3. A bit of scaling here and there.
  4. Generating .tif “overviews”, which are basically smaller versions of the tifs stored inside the same file, so that TileMill doesn’t explode.

Hopefully you already have GDAL installed. It probably came with the development version of TileMill.

Here’s my script for doing all the processing:

#!/bin/bash
echo -n "Merging files: "
gdal_merge.py srtm_*.tif -o srtm.tif
f=srtm
echo -n "Re-projecting: "
gdalwarp -s_srs EPSG:4326 -t_srs EPSG:3785 -r bilinear $f.tif $f-3785.tif

echo -n "Generating hill shading: "
gdaldem hillshade -z 5 $f-3785.tif $f-3785-hs.tif
echo and overviews:

gdaladdo -r average $f-3785-hs.tif 2 4 8 16 32
echo -n "Generating slope files: "
gdaldem slope $f-3785.tif $f-3785-slope.tif
echo -n "Translating to 0-90..."
gdal_translate -ot Byte -scale 0 90 $f-3785-slope.tif $f-3785-slope-scale.tif
echo "and overviews."
gdaladdo -r average $f-3785-slope-scale.tif 2 4 8 16 32
echo -n Translating DEM...
gdal_translate -ot Byte -scale -10 2000 $f-3785.tif $f-3785-scale.tif
echo and overviews.
gdaladdo -r average $f-3785-scale.tif 2 4 8 16 32
echo "Creating contours"
gdal_contour -a elev -i 20 $f-3785.tif $f-3785-contour.shp

Take my word for it that the above script does everything I promise. The options I’ve chosen are all pretty standard, except that:

  • I’m exaggerating the hillshading by a factor of 5 vertically (“-z 5”). For no particularly good reason.
  • Contours are generated at 20m intervals (“-i 20”).
  • Terrain is scaled in the range -10 to 2000m. Probably an even lower lower bound would be better (you’d be surprised how much terrain is below sea level – especially coal mines). Excessively low terrain results in holes that can’t be styled and turn up white.

4. Load terrain layers into TileMill

Now you have four useful files, so create layers for them. I’d suggest creating them as layers in this order (as seen in TileMill):

  1. srtm-3785-contour.shp – the shapefile containing all the contours.
  2. srtm-3785-hs.tif – the hillshading file.
  3. srtm-3785-slope-scale.tif – the scaled slope shading file.
  4. srtm-3785.tif – the height map itself. (I also generate srtm-3785-scale.tif. The latter is scaled to a 0-255 range, while the former is in metres. I find metres makes more sense.)

For each of these, you must set the projection to 900913 (that’s how you spell GOOGLE in numbers). For the three ‘tifs’, set band=1 in the “advanced” box. I gather that GeoTiffs can have multiple bands of data, and this is the band where TileMill expects to find numeric data to apply colour to.

5. Style the layers

Mapbox’s blog posts go into detail about how to do this, so I’ll just copy/paste my styles. The key lessons here are:

  • Very slight differences in opacity when stacking terrain layers make a huge impact on the appearance of your map. Changing the colour of a road doesn’t make that much difference, but with raster data, a slight change can affect every single pixel.
  • There are lots of different raster-comp-ops to try out, but ‘multiply’ is a good default. (Remember, order matters).
  • Carefully work out each individual zoom level. It seems to work best to have hillshading transition to contours around zoom 12-13. The SRTM data isn’t detailed enough to really allow hillshading above zoom 13.

My styles:

.hs[zoom <= 15] {
  [zoom >= 15] { raster-opacity: 0.1; }
  [zoom >= 13] { raster-opacity: 0.125; }
  [zoom = 12]  { raster-opacity: 0.15; }
  [zoom <= 11] { raster-opacity: 0.12; }
  [zoom <= 8]  { raster-opacity: 0.3; }

  raster-scaling: bilinear;
  raster-comp-op: multiply;
}

// not really convinced about the value of slope shading
.slope[zoom <= 14][zoom >= 10] {
  raster-opacity: 0.1;
  [zoom = 14] { raster-opacity: 0.05; }
  [zoom = 13] { raster-opacity: 0.05; }
  raster-scaling: lanczos;
  raster-colorizer-default-mode: linear;
  raster-colorizer-default-color: transparent;

  // this combo is ok
  raster-colorizer-stops:
    stop(0, white)
    stop(5, white)
    stop(80, black);
  raster-comp-op: color-burn;
}

// colour-graded elevation model
.dem {
  [zoom >= 10] { raster-opacity: 0.2; }
  [zoom = 9]   { raster-opacity: 0.225; }
  [zoom = 8]   { raster-opacity: 0.25; }
  [zoom <= 7]  { raster-opacity: 0.3; }
  raster-scaling: bilinear;
  raster-colorizer-default-mode: linear;
  raster-colorizer-default-color: hsl(60,50%,80%);
  // hay, forest, rocks, snow

  // if using the srtm-3785-scale.tif file, these stops should be in the range 0-255.
  raster-colorizer-stops:
    stop(0, hsl(60,50%,80%))
    stop(392, hsl(110,80%,20%))
    stop(785, hsl(120,70%,20%))
    stop(1100, hsl(100,0%,50%))
    stop(1370, white);
}

.contour[zoom >= 13] {
  line-smooth: 1.0;
  line-width: 0.75;
  line-color: hsla(100,30%,50%,20%);
  [zoom = 13] {
    line-width: 0.5;
    line-color: hsla(100,30%,50%,15%);
  }

  [zoom >= 16],
  [elev =~ ".*00"] {
    l/text-face-name: 'Roboto Condensed Light';
    l/text-size: 8;
    l/text-name: '[elev]';
    [elev =~ ".*00"] { line-color: hsla(100,30%,50%,40%); }
    [zoom >= 16] { l/text-size: 10; }
    l/text-fill: gray;
    l/text-placement: line;
  }
}

And finally a gratuitous shot of Mt Feathertop, showing the major approaches and the two huts: MUMC Hut to the north and Federation Hut further south. Terrain is awesome!


A TileMill server with all the trimmings

Recently, I set up a server for a series of #datahack workshops. We used TileMill to make creative maps with OpenStreetMap and other available data.

The major pieces required are:

  • TileMill, which comes with its own installer, and is totally self-sufficient: web application server, Mapnik, etc.
  • Postgres, the database which will hold the OSM data
  • PostGIS, the extension which allows Postgres to do that
  • Nginx, a reverse proxy, so we can have some basic security (TileMill comes with none)
  • OSM2PGSQL, a tool for loading OSM data into PostGIS

I’ve captured all those bits, and their configuration in this script. You’ll probably want to change the password – search for “htpasswd”.

The script below is out of date, contains errors, and is not maintained. Go to:

http://github.com/stevage/saltymill

# This script installs TileMill, PostGIS, nginx, and does some basic configuration.
# The set up it creates has basic security: port 20009 can only be accessed through port 80, which has password auth.

# The Postgres database tuning assumes 32 Gb RAM.

# Author: Steve Bennett

wget https://github.com/downloads/mapbox/tilemill/install-tilemill.tar.gz
tar -xzvf install-tilemill.tar.gz

sudo apt-get install -y policykit-1

#As per https://github.com/gravitystorm/openstreetmap-carto

sudo bash install-tilemill.sh

#And hence here: http://www.postgis.org/documentation/manual-2.0/postgis_installation.html
#? 
sudo apt-get install -y postgresql libpq-dev postgis

# Install OSM2pgsql

sudo apt-get install -y software-properties-common git unzip
sudo add-apt-repository ppa:kakrueger/openstreetmap
sudo apt-get update
sudo apt-get install -y osm2pgsql

#(leave all defaults)

#Install TileMill

sudo add-apt-repository ppa:developmentseed/mapbox
sudo apt-get update

sudo apt-get install -y tilemill

# less /etc/tilemill/tilemill.config
# Verify that server: true

sudo start tilemill

# To tunnel to the machine, if needed:
# ssh -CA nectar-maps -L 21009:localhost:20009 -L 21008:localhost:20008
# Then access it at localhost:21009

# Configure Postgres

echo "CREATE ROLE ubuntu WITH LOGIN CREATEDB UNENCRYPTED PASSWORD 'ubuntu'" | sudo -su postgres psql
# sudo -su postgres bash -c 'createuser -d -a -P ubuntu'

#(password 'ubuntu') (blank doesn't work well...)

# === Unsecuring TileMill

export IP=`curl http://ifconfig.me`

cat > tilemill.config <<FOF
{
  "files": "/usr/share/mapbox",
  "coreUrl": "$IP:20009",
  "tileUrl": "$IP:20008",
  "listenHost": "0.0.0.0",
  "server": true
}
FOF
sudo cp tilemill.config /etc/tilemill/tilemill.config

# ======== Postgres performance tuning
sudo bash
cat >> /etc/postgresql/9.1/main/postgresql.conf <<FOF
# Steve's settings
shared_buffers = 8GB
autovacuum = on
effective_cache_size = 8GB
work_mem = 128MB
maintenance_work_mem = 64MB
wal_buffers = 1MB

FOF
exit

# ==== Automatic start 
cat > rc.local <<FOF
#!/bin/sh -e
sysctl -w kernel.shmmax=8000000000
service postgresql start
start tilemill
service nginx start
exit 0
FOF

sudo cp rc.local /etc/rc.local

# === Securing with nginx
sudo apt-get -y install nginx

cd /etc/nginx
sudo bash
printf "maps:$(openssl passwd -crypt 'incorrect cow cell pin')\n" >> htpasswd
chown root:www-data htpasswd
chmod 640 htpasswd
exit

cat > sites-enabled-default <<FOF

server {
   listen 80;
   server_name localhost;
   location / {
        proxy_set_header Host \$http_host;
        proxy_pass http://127.0.0.1:20009;
        auth_basic "Restricted";
        auth_basic_user_file htpasswd;
    }
}

server {
   listen $IP:20008;
   server_name localhost;
   location / {
        proxy_set_header Host \$http_host;
        proxy_pass http://127.0.0.1:20008;
        auth_basic "Restricted";
        auth_basic_user_file htpasswd;
    }
}

FOF

sudo cp sites-enabled-default /etc/nginx/sites-enabled/default
sudo service nginx restart

echo "Australia/Melbourne" | sudo tee /etc/timezone
sudo dpkg-reconfigure --frontend noninteractive tzdata

Forget trying to remember your servers’ names!

I run lots of Linux servers. I create them, install some stuff, mess around with them, forget them, come back to them…and forget my credentials. My life used to look like this:

$ ssh -i ~/steveko.pem steveb@115.146.83.268
Permission denied (publickey).
$ ssh -i ~/steveko.pem sbennett@115.146.83.268
Permission denied (publickey).
$ ssh -i ~/stevebennett.pem steveb@115.146.83.268
Permission denied (publickey).
$ ssh -i ~/steveko.pem ubuntu@115.146.83.268
Welcome to Ubuntu 12.10 (GNU/Linux 3.5.0-26-generic x86_64)
...

This got really tedious. You can’t memorise many IP addresses, so you’re constantly referring to emails, post-its or even SMSes. Then you rebuild your server, the IP address changes, and you’re lost again.

And because some of the servers are administered by other people, I can’t always choose my own user name, so more faffing around.

So, here’s my solution:

A naming convention for servers

Give each server a name. Ignore the actual hostname of the server, or what everyone else calls it. My convention goes like this:

<infrastructure>-<project>[-<purpose>]

Examples:

  • nectar-tugg-dev: A development server for the TUGG project, running on NeCTAR Research Cloud infrastructure.
  • rmit-microtardis-prod: A production server for MicroTardis, running on RMIT infrastructure.
  • nectar-tunnelator: A side project “tunnelator” running on NeCTAR Research Cloud infrastructure. Small projects only have one server.

The key here is minimising what you need to remember. If I’m doing some work on a project, I’ll always know the project name and whether I want to work on the prod or dev server. Indeed, it’s an advantage to have to specifically type “-prod” when working on a production machine.

Put all IP addresses in /etc/hosts.

When I create a server, or someone tells me an IP, I immediately give it a name, and store it in /etc/hosts. The file looks like this:

##
# Host Database
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost 
fe80::1%lo0 localhost
115.146.46.269 nectar-tunnelator
136.186.3.299 swin-bpsyc-dev
...

This has the huge advantage that you can also put those names in the browser address bar: http://nectar-tunnelator

If a server moves location, just update the entry in /etc/hosts, and forget about it again.

Put all access information in ~/.ssh/config

The SSH configuration file can radically simplify your life. You have one entry per server, like this:

Host latrobe-vesiclepedia-dev
 User steveb
 IdentityFile /Users/stevebennett/Dropbox/VeRSI/NeCTAR/nectar.pem
 Port 9022

Notice how we don’t need to spell out the IP address again. And by storing the access details here, we can connect like this:

$ ssh latrobe-vesiclepedia-dev

So much less to remember. And because it’s so easy to connect, suddenly tools like SCP, and SSH tunnelling become much more attractive.

$ scp myfile.txt latrobe-vesiclepedia-dev:
$ ssh latrobe-vesiclepedia-dev sudo cp myfile.txt /var/www
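
The SSH tunnelling mentioned above becomes a one-liner too – this sketch forwards local port 8080 to port 80 on the server:

$ ssh -L 8080:localhost:80 latrobe-vesiclepedia-dev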

In reality, it gets even simpler. Most of my NeCTAR boxes are Ubuntu, with a login name of “ubuntu”. The “nectar-” naming convention proves valuable:

Host nectar-*
 IdentityFile /Users/stevebennett/Dropbox/VeRSI/NeCTAR/nectar.pem
 User ubuntu

That means that any NeCTAR box using that key and username doesn’t even need its own entry in .ssh/config:

$ ssh nectar-someserver

Anonymous longitudinal surveys with LimeSurvey

A researcher at the Australian Research Centre in Sex, Health and Society (ARCSHS, pronounced “archers”) carries out surveys that are both:

  • anonymous: personal details are not collected, and steps are taken to avoid being able to identify participants; and
  • longitudinal: participants are contacted at intervals to answer similar questions. Questions often refer to previous answers given by the participant.

It’s a tricky combination, not well supported by most survey software. Here’s a solution, using LimeSurvey, that doesn’t require modifying any code. These instructions are aimed at people with no familiarity with LimeSurvey.

Quick summary:

  1. The survey is carried out in non-anonymous mode. Anonymity is maintained through an external email redirection service.
  2. Additional attributes are added to the token tables for follow-up rounds.
  3. Answers from each survey are transferred to the additional attributes through Excel manipulation.

Survey hosting

LimeService is a very cheap way to host LimeSurvey, and removes the burden of server administration.

Anonymous email addresses

Participants must not sign up for the survey using their actual email address. Instead, they should go to a third-party email forwarding site like http://notsharingmy.info. For the participant:

  1. Click on a link
  2. Enter their email address, click “Get an obscure email”
  3. Copy the generated email address into the survey registration form.

(Note: as of January 2013, notsharingmy.info is unreliable. I’m working on another solution.)

About tokens

This solution relies heavily on LimeSurvey’s “tokens”, which are explained badly in the interface. Enabling tokens just means you want to track information about individual participants: adding them to the list, inviting them, keeping track of who completed the survey or who needs another reminder. We also add “additional attributes”: the participants’ previous answers.

1. Set up the initial survey

  1. Create a new survey.
  2. Fill out the Description, Welcome Message, End Message.
  3. Create the first question group.
  4. Create some questions.
  5. Set these survey “general settings”:
    1. Tokens > Allow public registration? Yes
    2. (Optional) Modify the registration text as described below in “Avoiding personal information”
  6. Activate the survey. You’ll be asked if you want to initialise tokens:
    Yes, you do.
  7. Publicise the URL, and get lots of responses. Great.

Avoiding personal information

By default, LimeSurvey collects from each self-registering participant their first name, last name and email address. For a truly anonymous survey, you may wish to avoid collecting their name.

  1. Under Global Settings, choose Template Editor:
  2. As the warning points out, editing the default template isn’t a great idea. Let’s make a copy called “no_name_collecting”:
  3. Now, edit the copy. Select “startpage.pstpl”, then add this code just before the “</head>” line:
    ...
    <script>
    $(function(){
    $("[name=register_firstname]").parent().parent().hide()
    $("[name=register_lastname]").parent().parent().hide()
    });
    </script>
    </head>

  4. Next, you might want to change the registration message, to direct people to create an anonymous email address. On the “Register” screen, select “register.pstpl”.
  5. In the code, replace {REGISTERMESSAGE1} and {REGISTERMESSAGE2} with any text or HTML you like.

2. Export responses and tokens

  1. Export the responses as CSV.
  2. Export the tokens as CSV.

3. Create follow-up survey

The first survey was a success, and there are now lots of entries in the tokens table and the responses table. More in the former than the latter, because some people will sign up and not answer any questions.

  1. Copy the first survey. (Click the + button, like starting a new survey first.)
  2. You’ll probably want to remove some irrelevant questions and groups, so do that.
  3. Now add one “additional attribute” for every response that you want to refer to from the initial survey. In Token Management:
    The other fields don’t matter. Leave them blank.
  4. Next, you may want to use these attributes in some questions. Let’s say you want to ask “Last time you did this survey you were 42. How old are you now?”
    In the question text, click ‘LimeSurvey replacement field properties’, then select the extended attribute; this inserts a placeholder for the previous answer into the question text.
  5. You can get fancy and set that value as the default for the question. Instructions on how to do this.
  6. Repeat this for each question, wherever those extended attributes are needed.

4. Transfer previous responses to the follow-up survey

  1. Download the token table for this survey. It will have no data, but the headers will be useful. So far, you have created a way to hold answers from previous surveys for each participant, but you haven’t actually put those answers in there. Time for some Excel magic. You want to create a token table for the follow-up survey, using bits from the three CSV files you’ve downloaded so far:
    1. Tokens table for initial survey. (Containing the email addresses people signed up with.)
    2. Responses for initial survey.
    3. Tokens table for follow-up survey (containing the additional attributes).
  2. Copy all the rows of tokens from the initial survey to the follow-up survey. Don’t copy the header row. Make sure you retain the additional attribute fields (attribute_1 etc.). They should be blank.
  3. Notice that these tokens have already been “completed”. Clear out the invited and completed fields. Set the remindercount field to 0 and usesleft field to 1:
  4. Now, copy columns from the participant responses table into attribute fields, one by one, as appropriate. Make sure the IDs line up correctly.
  5. Save the file (“followup-tokens.csv” if you like). Now import it into your follow-up survey.
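
If you’d rather script this step than wrangle it in Excel, here’s a rough pandas sketch of the same idea. The column names (‘token’, ‘completed’, ‘attribute_1’) are assumptions based on typical LimeSurvey exports, and ‘age’ stands in for whichever response column you’re carrying forward – check everything against your own CSV headers.

# A rough scripted version of the Excel steps above, using pandas.
import pandas as pd

tokens = pd.read_csv('initial-tokens.csv')        # token export from the initial survey
responses = pd.read_csv('initial-responses.csv')  # response export from the initial survey

# The survey runs in non-anonymous mode, so each response row carries the
# participant's token; join the previous answer(s) onto the token list.
merged = tokens.merge(responses[['token', 'age']], on='token', how='left')

# Reset the invitation/completion bookkeeping for the follow-up round
# (your export may name these columns slightly differently).
merged['completed'] = 'N'
merged['remindercount'] = 0
merged['usesleft'] = 1

# Put the previous answer into the additional attribute column.
merged['attribute_1'] = merged['age']

merged.drop(columns=['age']).to_csv('followup-tokens.csv', index=False)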

5. Activate the follow-up survey

Whew. You’re now ready to launch.

  1. Activate the survey.
  2. Send out invitations, under “Token control”.
  3. The default email template is pretty gross, so clean it up a bit before you send.
  4. Each participant receives a custom URL which logs them in without requiring any username and password.

You can then repeat this process for each subsequent iteration.

Windows red cross errors scam


We have noticed many red cross errors!

I just received an interesting phone call, apparently from a group of Indian scammers. It went roughly like this. (Phrases in bold are things I jotted down during the call)

  • Hello, I’m Caroline from the Computer Technical Department at Windows Best Help [or Windows Based Help, perhaps]. We’re calling to alert you that for the past four weeks you’ve been receiving red cross errors, which mean you’re subject to internet viruses, and hackers that are trying to break into your computer. Your address is [my address], correct?

Throughout this, I give non-committal “mmm, yes” responses.

  • We’re connected through the Global IT Server. This just an awareness call, nothing to do with telemarketing.
  • Now, go to the home page of your browser.
  • Now press Windows-R, and type “cookies“. [Actually, long description of how to find the Windows key, and spelling out “cookies” in radio code. The first “o” was orange, the second was Oscar.]
  • Now, do you see all those files and folders? All the work you’re doing is stored in those files as a double coded check up.

I’m calling from the Computer Technical Department at Windows Best Help. This has nothing to do with telemarketing!

At this point, she transferred me to her “technical supervisor”. He gave me his name, but I didn’t quite catch it – something like Armin. I asked where they’re based – Kolkata.

  • Are you in front of your computer now? [I admitted that, no, the phone was in a different room.]
  • But I believe that you told Caroline that you were typing the commands and could see the results? [Interesting…I had led her to believe that. Is their operation so small that he listens to the whole conversation?]

Some confusion followed, where I offered to go and run the command for real. I told him to hold the line for a minute, while I went and did it. When he came back, the line was dead. Oops.

I’ve heard of this scam before, but it was entertaining to see it in operation. Too bad I didn’t get to see where it led.

What I learned at e-Research Australasia 2012

  1. Waterfall development still doesn’t work.
  2. Filling an institutional research data registry is still a hard slog.
  3. Omero is not just for optical microscopy.
  4. Spending money so universities can build tools that the private sector will soon provide for free ends badly.
  5. Research data tools that mimic the design of laymen’s tools (“realestate.com.au for ecologists”) work well.
  6. AURIN is brilliant. AuScope is even more brilliant than before.
  7. Touchscreen whiteboards are here, are extremely cool, and cost less than $5000.
  8. AAF still doesn’t work, but will soon. Please.
  9. NeuroHub. If neuroscience is your thing.
  10. Boring, mildly offensive after-dinner entertainment works well as a team-building exercise.
  11. All the state-based e-Research organisations (VeRSI, IVEC, QCIF, Intersect, eRSA, TPAC etc.) are working on a joint framework for e-research support.
  12. Cytoscape: An Open Source Platform for Complex Network Analysis and Visualization
  13. Staff at e-Research organisations have a much more informed view of future e-Research projects, having worked on so many of them.
  14. If you tick the wrong checkbox, your paper turns into a poster.
  15. People find the word “productionising” offensive, but don’t mind “productifying”.
  16. CloudStor’s FileSender is the right way for people in the university sector to send big files to each other.

Build it? Wait for someone else to build it?

And a thought that hit me after the conference: although a dominant message over the last few years has been “the data deluge is coming! start building things!”, there are sometimes significant advantages in waiting. e-Research organisations overseas are also building data repositories, registries, tools etc. In many cases, it would pay to wait for a big project overseas to deliver a reusable product, rather than going it alone on a much smaller scale. So, since we (at e-Research organisations) are trying to help many researchers, perhaps we should consider the prospect of some other tool arriving on the scene, when assessing the merit of any individual project.

A pattern for multi-instrument data harvesting with MyTardis

Here’s a formula for setting up a multi-instrument MyTardis installation. It describes one particular pattern, which has been implemented at RMIT’s Microscopy and Microanalysis Facility (RMMF), and is being implemented at Swinburne. For brevity, the many variations on this pattern are not discussed, and gross simplifications are made.

This architecture is not optimal for your situation. Maybe a documented suboptimal architecture is more useful than an undocumented optimal one.

The goal

The components:

  • RSync server running on each instrument machine. (On Windows, use DeltaCopy.)
  • Harvester script which collates data from the RSync servers to a staging area.
  • Atom provider which serves the data in the staging area as an Atom feed.
  • MyTardis, which uses the Atom Ingest app to pull data from the Atom feed. It saves data to the MyTardis store and metadata to the MyTardis database.

Preparation

Things you need to find out:

  1. List of instruments you’ll be harvesting from. Typically there’s a point of diminishing returns: 10 instruments in total, of which 3 get most of the use, and numbers #9 and #10 are running DOS and are more trouble than they’re worth.
  2. For each instrument: what data format(s) are produced? Some “instruments” have several distinct sensors or detectors: a scanning electron microscope (producing .tif images) with add-on EDAX detector (producing .spt files), for instance. If there is no MyTardis filter for the given data format(s), then MyTardis will store the files without extracting any metadata, making them opaque to the user.

    Details collected for each instrument at RMIT’s microscopy facility.

  3. Discuss with the users how MyTardis’s concept of “dataset” and “experiment” will apply to the files they produce. If possible, arrange for them to save files into a file structure which makes this clear: /User_data/user123/experiment14/dataset16/file1.tif . Map the terms “dataset” and “experiment” as appropriate: perhaps “session” and “project”.
  4. Networking. Each instrument needs to be reachable on at least one port from the harvester.
  5. You’ll need a staging area. Somewhere big, and readily accessible.
  6. You’ll eventually need a permanent MyTardis store. Somewhere big, and backed up. (The Chef cookbooks actually assume storage inside the VM, at present.)
  7. You’ll eventually need a production-level MyTardis database. It may get very large if you have a lot of metadata (eg, Synchrotron data). (The Chef cookbooks install Postgres locally.)

RSync Server

We assume each PC is currently operating in “stand-alone” mode (users save data to a local hard disk) but is reachable by network. If your users have authenticated access to network storage, you’ll do something a bit different.

Install an Rsync server on each PC. For Windows machines, use DeltaCopy. It’s free. Estimate 5 minutes per machine. Make it auto-start, running in the background, all the time, serving up the user data directory on a given port.
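
For a non-Windows instrument PC – or just to see what DeltaCopy is doing under the hood – the equivalent plain-rsync setup is a small rsyncd.conf; this is a sketch with made-up module name and path:

# /etc/rsyncd.conf on the instrument PC (illustrative values)
port = 873
read only = yes

[userdata]
    path = /data/User_data
    comment = Instrument user data, read only

Start it with “rsync --daemon” (or as a service) and the module is reachable as rsync://instrument-pc/userdata.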

Harvester script

The harvester script collates data from these different servers into the staging area. The end result is a directory with this kind of structure:

user_123
        TiO2
            2012-03-14 exp1
                run1.tif
                run2.tif
                edax.spc
            2012-03-15 exp1
                anotherrun.tif
                highmag.tif
user_456
...
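
The harvest step itself boils down to an rsync pull per instrument, something like this sketch (instrument hostnames, module name and staging path are placeholders – the real scripts in the repository below do more, including reorganising files per user):

#!/bin/bash
# Pull user data from each instrument's rsync module into the staging area.
STAGING=/mnt/staging

for instrument in sem1 sem2 edax-pc; do
    rsync -av "rsync://$instrument/userdata/" "$STAGING/$instrument/"
done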

You will need to customise the scripts for your installation.

  1. Create a staging area directory somewhere. Probably it will be network storage mounted by NFS or something.
  2. sudo mkdir /opt/harvester
  3. Create a user to run the scripts, “harvester”. Harvested data will end up being owned by this user. Make sure that works for you.
  4. sudo chown harvester /opt/harvester
  5. sudo -su harvester bash
  6. git clone https://github.com/VeRSI-Australia/RMIT-MicroTardis-Harvest
  7. Review the scripts, understand what they do, make changes as appropriate.
  8. Create a Cron job to run the harvester at frequent intervals. For example:
    */5 * * * * /usr/local/microtardis/harvest/harvest.sh >/dev/null 2>&1
  9. Test.

Atom provider

My boldest assumption yet: you will use a dedicated VM simply to serve an Atom feed from the staging area.

Since you already know how to use Chef, use the Chef Cookbook for the Atom Dataset Provider to provision this VM from scratch in one hit.

  1. git clone https://github.com/stevage/atom-provider-chef
  2. Modify cookbooks/atom-dataset-provider/files/default/feed.atom as appropriate. See the documentation at https://github.com/stevage/atom-dataset-provider.
  3. Modify environments/dev.rb and environments/prod.rb as appropriate.
  4. Upload it all to your Chef server:
    knife cookbook upload --all
    knife environment from file environments/*
    knife role from file roles/*
  5. Bootstrap your VM. It works first try.

MyTardis

You’re doing great. Now let’s install MyTardis. You’ll need another VM for that. Your local ITS department can probably deliver one of those in a couple of hours, 6 months tops. If possible (ie, if no requirement to host data on-site), use the Nectar Research Cloud.

You’ll use Chef again.

  1. git clone https://github.com/mytardis/mytardis-chef
  2. Follow the instructions there.
  3. Or, follow the instructions here: Getting started with Chef on the NeCTAR Research Cloud.
  4. Currently, you need to manually configure the Atom harvester to harvest from your provider. Instructions for that are on Github.

MyTardis is now running. For extra credit, you will want to:

  • Develop or customise filters to extract useful metadata.
  • Add features that your particular researchers would find helpful. (On-the-fly CSV generation from binary spectrum files was a killer feature for some of our users.)
  • Extend harvesting to further instruments, and more kinds of data.
  • Integrate this system into the local research data management framework. Chat to your friendly local librarian. Ponder issues like how long to keep data, how to back it up, and what to do when users leave the institution.
  • Integrate into authentication systems: your local LDAP, or perhaps the AAF.
  • Find more sources of metadata: booking systems, grant applications, research management systems…

Discussion

After developing this pattern for RMMF, it seems useful to apply it again. And if we (VeRSI, RMIT’s Ian Thomas, Monash’s Steve Androulakis, etc) can do it, perhaps you’ll find it useful too. I was inspired by Peter Sefton’s UWS blog post Connecting Data Capture applications to Research Data Catalogues/Registries/Stores (which raises the idea of design patterns for research data harvesting).

Some good things about this pattern:

  • It’s pretty flexible. Using separate components means individual parts can be substituted out. Perhaps you have another way of producing an Atom feed. Or maybe users save their data directly to a network share, eliminating the need for the harvester scripts.
  • By using simple network transfers (eg, HTTP), it suits a range of possible network architectures (eg, internal staging area, but MyTardis on the cloud; or everything on one VM…)
  • Work can be divided up: one party can work on harvesting from the instruments, while another configures MyTardis.
  • You can archive the staging area directly if you want a raw copy of all data, especially during the testing phase.

Some bad things:

  • It’s probably a bit too complicated – perhaps one day there will be a MyTardis ingest from RSync servers directly.
  • Since the Atom provider is written in NodeJS, it means another set of technologies to become familiar with for troubleshooting.
  • There are lots of places where things can go wrong, and there is no monitoring layer (eg, Nagios).