Farm Scripts

An operator of a wiki farm may assume responsibilities that are not directly supported by mechanisms in the server. However, with flat-file storage, simple scripts can automate these tasks for dozens or even hundreds of sites.
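
For example, assuming the farm/<site>/pages layout seen in the backup archives below, a one-line loop can report how many pages each site holds. This is only an illustrative sketch, not one of the scripts described here.

# count pages per site, assuming the farm/<site>/pages flat-file layout
for site in farm/*/; do printf '%6d  %s\n' "$(ls "${site}pages" | wc -l)" "$site"; done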

Backups

I make off-site backups of fed.wiki.org with a shell script run nightly on the off-site server from a cron job.

# nightly backup of fed.wiki.org sites
20 4 * * * (cd wiki/fed; sh backup.sh)

The script, backup.sh, saves a compressed archive of every page under the current weekday's name, so seven rotating copies are kept and each survives for a week before being overwritten.

when=`date +%a`
how='tar --exclude openid/nonces -chz farm'
ssh fed.wiki.org $how > store/$when.tgz
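
One possible addition, not part of the script above, is a quick integrity check on the archive just written.

# optional: listing the archive reads it end to end, so a truncated
# or corrupt transfer fails here rather than going unnoticed
tar -tzf store/$when.tgz > /dev/null && echo "$when.tgz looks complete"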

This script depends on ssh access to the wiki server and on the tar archive program being available on both computers. The store it creates looks like this.

$ ls -lh store
total 1.6G
-rw-r--r-- 1 ward ward 226M Aug 29 04:26 Fri.tgz
-rw-r--r-- 1 ward ward 226M Sep 1 04:26 Mon.tgz
-rw-r--r-- 1 ward ward 226M Aug 30 04:26 Sat.tgz
-rw-r--r-- 1 ward ward 226M Aug 31 04:26 Sun.tgz
-rw-rw-r-- 1 ward ward 226M Aug 28 04:26 Thu.tgz
-rw-r--r-- 1 ward ward 226M Aug 26 04:26 Tue.tgz
-rw-r--r-- 1 ward ward 226M Aug 27 04:26 Wed.tgz

To extract a page from the archive.

tar -xzf Wed.tgz farm/video.fed.wiki.org/pages/ward-cunningham

To extract a whole site.

tar -xzf Wed.tgz farm/video.fed.wiki.org

Activity

I monitor activity in fed.wiki.org with a shell script run every 10 minutes on the wiki server from a cron job.

# rebuild the recent-farm-activity page
*/10 * * * * (cd farm-scripts; ruby build.rb)

The build.rb script reads through the farm's flat-file database looking for recently modified page files. It constructs a new page with a reference to each site showing new activity.
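
The sketch below is not build.rb itself; it only illustrates the discovery step in shell, listing sites whose page files changed in the last ten minutes. The real script goes on to write a wiki page from what it finds.

# discovery step only: count recently modified page files per site
find farm -path '*/pages/*' -type f -mmin -10 | cut -d/ -f2 | sort | uniq -c | sort -rn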

The farm-scripts can be downloaded from a GitHub repository. The build script writes the page it constructs to ../pages in a wiki site of your choice.

git clone https://github.com/WardCunningham/farm-scripts.git