As I mentioned before, for the land cover classification assignment in my MSc we made a couple of field trips, in February and March 2014, to see several different examples of land cover on the ground. We took geotagged photographs with some tablets (see also the COBWEB project video) and recorded our impressions of the land cover on standard recording sheets.
Unfortunately I could only locate the summary reports for one of the trips; for the second fieldtrip I could only find the photographs, their locations and some very basic metadata. Here are the locations put into a Google Fusion Table and mapped, with links to a selection of the photos. I try to show at least two or three different photos of each of the sites, of which there were nine on each trip.
Occasionally the GPS devices failed to record the locations accurately. In these cases I averaged the coordinates of the other photographs of the site, or, in one case (I9b), took the location from I9a and added an offset of 0.0001 degrees (about 10m) to the latitude and longitude. Some of the photographs may be hidden because their geotagged location is too close to another one. It is usually possible to distinguish them by zooming, but a few may be on top of each other even at maximum zoom. The other problem is that for a few of the photos the land cover class was viewed from a distance, so the geotagged location won't be the actual location of the area of interest.
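The coordinate fix-ups described above are simple enough to sketch in a few lines of Python. The coordinates below are made up for illustration (the real ones came from the geotagged photos), and the helper names are my own:

```python
# Hypothetical (lat, lon) pairs for photos at one site whose GPS fix
# looked reliable; the real values came from the photo metadata.
site_photos = [
    (52.4153, -4.0641),
    (52.4155, -4.0639),
    (52.4154, -4.0643),
]

def average_location(coords):
    """Average the lat/lon of the trustworthy photos at a site.

    A plain arithmetic mean is fine over the ~10m scale of a site,
    but would not be appropriate across the antimeridian or over
    large distances.
    """
    lats = [lat for lat, _ in coords]
    lons = [lon for _, lon in coords]
    return (sum(lats) / len(lats), sum(lons) / len(lons))

def offset_location(lat, lon, delta=0.0001):
    """Nudge a point by delta degrees in both lat and lon.

    At mid-latitudes, 0.0001 degrees is roughly 10m of latitude
    (and somewhat less of longitude).
    """
    return (lat + delta, lon + delta)

site_estimate = average_location(site_photos)
i9b_estimate = offset_location(52.4160, -4.0650)  # derived from I9a's fix
```

This is just the averaging trick from the text made explicit; a proper fix would weight by GPS accuracy if the devices reported it.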
For fieldtrip 1, the descriptions refer to the sites 1-9, not the individual photographs.
Fieldtrip 1 - 19-02-2014
Fieldtrip 2 - 25-03-2014
Obviously, there are not enough points here to automate the choice of rules for the land cover classification. Perhaps I should start with something much more basic: just water/land/cloud. For this I need a way of programmatically finding the brightness in each band, which I will talk about in an upcoming post.
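As a placeholder until I can extract the band values, a water/land/cloud rule set might look something like the sketch below. The band names and thresholds are illustrative guesses, not tuned values; the real choices will depend on the sensor and on the field data:

```python
def classify_pixel(bands):
    """Very crude rule-based classifier: water / land / cloud.

    `bands` is a dict of reflectance values in [0, 1] keyed by band
    name. Band names and thresholds here are made-up examples.
    """
    # Clouds are bright across the visible and near-infrared bands.
    if bands["blue"] > 0.3 and bands["nir"] > 0.3:
        return "cloud"
    # Water absorbs strongly in the near infrared.
    if bands["nir"] < 0.05:
        return "water"
    # Everything else gets lumped together as land.
    return "land"

print(classify_pixel({"blue": 0.40, "nir": 0.35}))  # cloud
print(classify_pixel({"blue": 0.05, "nir": 0.02}))  # water
print(classify_pixel({"blue": 0.10, "nir": 0.30}))  # land
```

Even a rule set this simple would let the geotagged field photos serve as check points: look up the pixel under each photo location and see whether the rule agrees with what we recorded on the ground.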