Federation allows us to place wiki servers near special hardware, huge databases, or private credentials. Pages on these sites become setups for computations to be performed nearby. Both the setups and their results can be forked without the underlying resource.
We expect to move pages by forking them freely through ajax requests from the client application. A few megabytes seems to be a reasonable upper limit for moving data this way. Images quickly exceed this limit, so we have proposed special management of Image Assets.
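As a rough sketch of the ajax step, the snippet below fetches a remote page's JSON and refuses to fork anything beyond a few megabytes. The /<slug>.json endpoint matches how federated wiki sites serve pages; the exact size ceiling, the fetchPageForFork helper, and the example site and slug are illustrative assumptions.

```typescript
// Sketch: fetch a page's JSON from a remote federated wiki site and check
// its size before forking it into the local site. The 5 MB ceiling is
// illustrative, not a hard limit from the wiki itself.
const MAX_FORK_BYTES = 5 * 1024 * 1024;

interface WikiPage {
  title: string;
  story: unknown[];
  journal: unknown[];
}

async function fetchPageForFork(site: string, slug: string): Promise<WikiPage> {
  const response = await fetch(`http://${site}/${slug}.json`);
  if (!response.ok) {
    throw new Error(`fetch failed: ${response.status} for ${site}/${slug}`);
  }
  const text = await response.text();
  if (text.length > MAX_FORK_BYTES) {
    throw new Error(`page too large to fork over ajax: ${text.length} bytes`);
  }
  return JSON.parse(text) as WikiPage;
}

// usage
fetchPageForFork('fed.wiki.org', 'welcome-visitors').then(page => {
  console.log('ready to fork', page.title, `${page.story.length} story items`);
});
```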
We have some experience working with files approaching 40 gigabytes. Our approach here has been to compose long-running jobs in a webapp and then monitor progress with continuously updated SVG summaries. We've proposed this as the preferred approach to working with huge datasets. See Exploratory Parsing
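One way such a job might look, sketched in TypeScript for Node: scan the large file as a stream and periodically rewrite a small SVG progress bar that a wiki page can poll. The file path, output location, and ten-second interval are assumptions, not details taken from Exploratory Parsing.

```typescript
// Sketch: a long-running job that scans a very large file in chunks and
// periodically rewrites a small SVG progress bar for a page to display.
import { createReadStream, statSync, writeFileSync } from 'fs';

const input = '/data/huge-dataset.log';   // hypothetical multi-gigabyte input
const total = statSync(input).size;
let seen = 0;

function writeProgressSvg(fraction: number): void {
  const width = Math.round(fraction * 400);
  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="420" height="30">
  <rect x="10" y="5" width="400" height="20" fill="#eee"/>
  <rect x="10" y="5" width="${width}" height="20" fill="#4a4"/>
  <text x="210" y="20" text-anchor="middle">${(fraction * 100).toFixed(1)}%</text>
</svg>`;
  writeFileSync('status/progress.svg', svg);
}

const timer = setInterval(() => writeProgressSvg(seen / total), 10_000);

createReadStream(input)
  .on('data', (chunk: Buffer | string) => {
    seen += chunk.length;
    // ...parse the chunk and accumulate whatever summary the job needs...
  })
  .on('end', () => {
    clearInterval(timer);
    writeProgressSvg(1);
  });
```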
We have created low-latency connections relaying real-time data through server-side plugins to attached hardware. This work exposes issues of shared use and the privileges it requires. See Realtime Txtzyme
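A minimal sketch of such a relay, assuming the ws and serialport npm packages (serialport v10+ API) and a device on /dev/ttyUSB0; the localhost-only write check stands in for whatever privilege scheme a real plugin would enforce.

```typescript
// Sketch: a server-side relay that streams readings from attached hardware
// to every connected browser client over WebSockets.
import { WebSocketServer, WebSocket } from 'ws';
import { SerialPort } from 'serialport';

const device = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 });
const wss = new WebSocketServer({ port: 4040 });

// Fan readings out to all clients as they arrive.
device.on('data', (chunk: Buffer) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(chunk.toString());
  }
});

wss.on('connection', (socket, request) => {
  // Crude privilege check: only localhost may send commands to the hardware.
  const privileged = request.socket.remoteAddress === '127.0.0.1';
  socket.on('message', message => {
    if (privileged) device.write(message.toString());
  });
});
```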
We have written plugins that cooperate with jobs that read through the server's page database. Often these are programs that require specific credentials or must run 24x7 to be effective. See Configuring Batch Email
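A sketch of one such job, assuming the default federated wiki layout where each page is a JSON file under ~/.wiki/pages; the one-day cutoff and the digest output are illustrative stand-ins for what a batch mailer would do.

```typescript
// Sketch: a batch job that reads through the server-side page database
// and collects pages changed in the last day for a digest email.
import { readdirSync, readFileSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

interface Page {
  title: string;
  story: { type: string; text?: string }[];
  journal: { type: string; date?: number }[];
}

const pagesDir = join(homedir(), '.wiki', 'pages');
const recentlyChanged: string[] = [];
const cutoff = Date.now() - 24 * 60 * 60 * 1000; // changed in the last day

for (const slug of readdirSync(pagesDir)) {
  const page: Page = JSON.parse(readFileSync(join(pagesDir, slug), 'utf8'));
  const last = page.journal[page.journal.length - 1];
  if (last?.date && last.date > cutoff) recentlyChanged.push(page.title);
}

// A mailer holding its own credentials could now format and send the digest.
console.log('pages changed today:', recentlyChanged.join(', '));
```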