Changes in resource management that will enable much larger wiki farms also call into question the launch protocol for server-side plugins that communicate through websockets with the story items that use them. (github commit comment)
Our protocol here for launching server-side components depends on having a running server that the component can use to handle websocket requests.
Up to now we've been careless as to what is semantically desirable in a farm situation. The few known examples were coded as if they were a server resource, not a virtual site resource. However, the launch protocol passes in a server from which connections can be made, not a connection that has been established by the (now) single real server.
We've chosen not to launch server-side plugins in farm mode until this launch protocol can be adjusted and all server-side plugins revised.
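A minimal sketch of the launch shape described above, assuming a Node-style plugin module: the plugin is handed the one running server and attaches its own websocket handling to it. The names startServer, LaunchParams, and the /plugin/example path are illustrative, not the wiki server's actual API.

```typescript
// Sketch only: a plugin handed the single running server and left to
// manage websocket traffic itself. Nothing here distinguishes which
// virtual site in a farm a connecting client belongs to.
import { Server as HttpServer } from 'http'
import { WebSocketServer } from 'ws'

interface LaunchParams {
  server: HttpServer   // the one real server, shared by every virtual site
  argv: unknown        // server configuration as parsed at startup
}

export function startServer(params: LaunchParams) {
  // The plugin claims a path on the shared server for its own websocket traffic.
  const wss = new WebSocketServer({ server: params.server, path: '/plugin/example' })
  wss.on('connection', (socket) => {
    socket.on('message', (data) => {
      socket.send(data)   // echo, standing in for real plugin behavior
    })
  })
}
```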
The ultimate question here: what responsibilities does the author of a server-side plugin assume for meeting the needs of many clients with the resources available on a given server?
We could attempt to 'sandbox' instances of the server-side plugin corresponding to specific sites, or even to specific client-side instances on those sites.
We could allow instances to have access to the full resources of the farm server and expect them to manage those resources with appropriate respect.
A site operator who installs a fully privileged server-side plugin is trusting that plugin's author to the same degree as the wiki server authors themselves. In return, the plugin author can manage shared resources to the advantage of all clients, as sketched below.
The plugin author can hear and respond to all events.
The plugin author can read or write all persistent stores.
The plugin author can coordinate use of shared hardware.
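An illustrative sketch of what that level of privilege amounts to in code: the plugin listens on a server-wide event emitter and reaches directly into the farm's persistent storage. The event name, argument shapes, and directory layout here are assumptions, not the wiki server's actual conventions.

```typescript
// Sketch only: a fully privileged plugin observing server-wide events
// and reading shared persistent storage directly.
import { EventEmitter } from 'events'
import { readFile } from 'fs/promises'
import { join } from 'path'

export function startServer(events: EventEmitter, dataDir: string) {
  // Hear and respond to any event the server emits, for any site in the farm.
  events.on('page-saved', async (site: string, slug: string) => {
    // Read (or write) any persistent store the server process can reach.
    const page = await readFile(join(dataDir, site, 'pages', slug), 'utf8')
    console.log(`saw ${slug} saved on ${site}, ${page.length} bytes`)
  })
}
```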
The Txtzyme plugin provides the best example of a fully privileged plugin accepting server class responsibilities.
The Txtzyme plugin manages a single USB connection to hardware by interleaving atomic packets going to the hardware and distributing every returned packet to all clients.
The Txtzyme plugin could overlay a notion of session on device-bound packets and return session-labeled output only to the single client requesting it.
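A sketch of that session overlay, not the plugin's actual code: commands are queued with the websocket that sent them, written to the single USB device one at a time, and whatever the device returns while a command is in flight goes back only to that command's sender. The class and method names are invented for illustration.

```typescript
import { WebSocket } from 'ws'

interface Job { session: WebSocket; command: string }

// Multiplexes one USB-attached Txtzyme device across many websocket sessions.
class TxtzymeMux {
  private queue: Job[] = []
  private inFlight: Job | null = null

  constructor(private device: { write(s: string): void }) {}

  // Called when any client sends a command over its websocket.
  submit(session: WebSocket, command: string) {
    this.queue.push({ session, command })
    this.pump()
  }

  // Called with each packet the device returns; label it with the
  // session whose command is currently in flight.
  onDeviceData(packet: string) {
    if (this.inFlight) this.inFlight.session.send(packet)
  }

  // Called when the device signals the current command is complete.
  onDeviceIdle() {
    this.inFlight = null
    this.pump()
  }

  private pump() {
    if (this.inFlight || this.queue.length === 0) return
    this.inFlight = this.queue.shift()!
    this.device.write(this.inFlight.command)   // one atomic packet to the hardware
  }
}
```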
Proposal
I suggest that the core server take responsibility for completing websocket connections on behalf of all virtual servers in a farm.
The core server could dispatch these to single instances of the server-side plugins, passing the connection and the virtual-server information necessary to manage further dispatching.
The core server would have launched the plugin instances when it launched itself, and established an EventListener for further plugin-specific communication.
A plugin could fail to launch if it didn't find necessary resources. For example, no Txtzyme device attached.
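A sketch of this dispatch under assumed names: the core server owns every websocket upgrade, works out which virtual site and plugin the request targets, and hands the completed connection to the single plugin instance it launched at startup. The ServerPlugin interface, the /plugin/<name> routing convention, and the shared event emitter are illustrative, not existing wiki API.

```typescript
import { createServer, IncomingMessage } from 'http'
import { EventEmitter } from 'events'
import { WebSocketServer, WebSocket } from 'ws'

interface ServerPlugin {
  // May reject if a required resource, e.g. a Txtzyme device, is absent.
  launch(events: EventEmitter): Promise<void>
  // Receives a connection the core server has already completed, plus
  // enough identity to manage any further dispatching itself.
  connect(socket: WebSocket, site: string, request: IncomingMessage): void
}

const events = new EventEmitter()
const plugins = new Map<string, ServerPlugin>()   // keyed by plugin name

// Launch each candidate once, at server startup; a failed launch simply
// leaves that plugin out of the dispatch table.
async function launchAll(candidates: Map<string, ServerPlugin>) {
  for (const [name, plugin] of candidates) {
    try {
      await plugin.launch(events)
      plugins.set(name, plugin)
    } catch (err) {
      console.log(`plugin ${name} declined to launch: ${err}`)
    }
  }
}

const httpServer = createServer()
const wss = new WebSocketServer({ server: httpServer })

wss.on('connection', (socket, request) => {
  // Illustrative routing: the virtual site comes from the Host header,
  // the plugin from a /plugin/<name>/... path convention.
  const site = request.headers.host ?? 'unknown'
  const match = /^\/plugin\/([^/]+)/.exec(request.url ?? '')
  const plugin = match ? plugins.get(match[1]) : undefined
  if (plugin) plugin.connect(socket, site, request)
  else socket.close()
})
// At startup: await launchAll(installedPlugins), then httpServer.listen(port).
```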
Question: doesn't this have the core server imposing websocket conventions, and even package selection, on the plugins? How does this gracefully evolve?
Question: shouldn't the core server distribute RESTful requests to server-side plugins also? How much site, session, lineup, page and item identity would need to be managed to make this equally effective?