Companion relies heavily on chat redirection to intercept and interpret text input from the viewer. This is a feature of the RLV and RLVa APIs that allows an object to hook chat messages as they are sent by the client. Often, including in Companion, chat is also blocked from appearing normally on channel 0 (the public channel), allowing the listening object to take over the avatar's participation in local chat entirely.

The RLV restrictions used are:

Message                      Function
@sendchat=add                Stops the avatar from sending chat to channel 0
@redirchat:<channel>=add     Sends the avatar's chat to channel <channel>
@rediremote:<channel>=add    Sends the avatar's emotes to channel <channel>

An emote is any chat message that starts with the character sequence "/me ", including the space on the end. Some viewers may extend this to include other punctuation in place of the space, such as "/me'". The sendchat restriction also blocks emotes; note, however, that emotes and other chat are handled completely independently for the purposes of redirchat and rediremote.
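
For illustration, here is a minimal sketch of how an interception device might apply and later lift these restrictions; the channel number is an arbitrary placeholder, and a production device would also need to handle the RLV relay, logins, and other edge cases:

integer REDIR_CHANNEL = -1812221;  // arbitrary negative channel for redirected chat

default {
    attach(key id) {
        if (id != NULL_KEY) {
            // Listen for the wearer's redirected chat and emotes
            llListen(REDIR_CHANNEL, "", llGetOwner(), "");
            // Block channel 0 and redirect chat and emotes to our channel
            llOwnerSay("@sendchat=add,redirchat:" + (string)REDIR_CHANNEL
                + "=add,rediremote:" + (string)REDIR_CHANNEL + "=add");
        } else {
            // Lift the restrictions when the device is detached
            llOwnerSay("@sendchat=rem,redirchat:" + (string)REDIR_CHANNEL
                + "=rem,rediremote:" + (string)REDIR_CHANNEL + "=rem");
        }
    }
}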

Not coincidentally, these are the same restrictions used by standard RLV speech modulation devices, the best-known class of which is the gag. These typically simulate speech impediments via phonemic substitution, e.g. by replacing every instance of "s" with "th". Speech interception has an almost unlimited range of other applications, however, including transcription onto specialized display devices, machine translation into another language, word substitution and censoring, and error correction. While Companion 8.2 also introduced an expansive framework for multi-step speech modulation (the vox filter pipeline), such filter programs are poorly suited to interception that does not (or does not always) produce text output, such as transcription or punitive mechanisms. Moreover, there is considerable value in the UX affordances offered by external devices, such as the convenience and metaphor of attaching a functional gag with minimal configuration, which strongly reinforces the user's mental model that the gag attachment is equivalent to a real gag. Compatibility with existing third-party products is also desirable, although, unfortunately, perfect compatibility is not possible within the scope of the current RLV API.
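
As a concrete (and heavily simplified) example of the substitution case, the sketch below garbles redirected chat and re-says the result on channel 0 from the object, which is not affected by the wearer's @sendchat restriction. The garble() helper and channel value are illustrative only; a real gag would also process emotes, handle detachment, and expose configuration:

integer REDIR_CHANNEL = -1812221;  // must match the channel used in the redirect restrictions

string garble(string text) {
    // Naive phonemic substitution: replace every (lower-case) "s" with "th"
    return llDumpList2String(llParseStringKeepNulls(text, ["s"], []), "th");
}

default {
    state_entry() {
        // Redirected chat arrives on this channel, spoken by the wearer
        llListen(REDIR_CHANNEL, "", llGetOwner(), "");
    }

    listen(integer channel, string name, key id, string message) {
        llSay(0, garble(message));
    }
}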

The Tweak

To be compatible with the output pipe, a standard RLV speech interception device needs only one modification: it must accept input from the controller in addition to the avatar. Beyond that, it should be programmed as a stand-alone RLV device.

If your product already contains an active light bus handler (as described here), you can simply note the key of the controller when it sends probe or add-confirm messages; otherwise, it may be easier to filter keys based on whether or not they are the root prim of an attachment, e.g.

listen(integer channel, string name, key id, string message) {
    // Root prims of the owner's attachments include the controller;
    // echo() stands in for the device's own output routine (e.g. llOwnerSay)
    if (llListFindList(llGetAttachedList(llGetOwner()), [id]) >= 0)
        echo("Received message from device: " + message);
    else if (id == llGetOwner())
        echo("Received message from avatar: " + message);
    else
        echo("Rejected message from invalid source: " + message);
}

(This can of course be optimized in various ways.)
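
Note that this assumes the listen was opened without a key filter, since messages from both the wearer (redirected chat) and the controller attachment have to reach the handler before the script can decide how to treat each source. A minimal setup might look like the following, with INPUT_CHANNEL standing in for whatever channel your device actually uses:

integer INPUT_CHANNEL = -1812221;  // placeholder for the device's input/redirect channel

default {
    state_entry() {
        // NULL_KEY leaves the key filter open; per-source filtering is done
        // in the listen handler shown above
        llListen(INPUT_CHANNEL, "", NULL_KEY, "");
    }
}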