Admin: Queue Reader setup
Overview
The RPI Queue Reader service is used to drain Queue Listener and RPI Realtime queues. This section documents how to set up the service.
Queue Listener setup
Queue listeners monitor a "listener queue" for the arrival of data. Data arrives in the form of JSON packages, placed on the queue e.g. by an external system, or at submission of a web form. Downstream queue activities can then use this data to execute offers. Queue listeners are typically used to send emails, e.g. after a customer makes a purchase or when a web form is submitted on a landing page.
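As an illustration only, a message placed on a listener queue might resemble the following. The field names shown here are hypothetical; the actual structure depends on the queue listener's configuration and the data supplied by the external system or web form, though the message typically needs to identify the trigger to fire (a missing trigger key is treated as an error, as described below).

{
  "TriggerKey": "purchase-confirmation",
  "Email": "jane.doe@example.com",
  "FirstName": "Jane",
  "OrderID": "100045"
}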
More details on queue listeners can be found in the RPI Reference Guide. Please follow these steps to configure RPI to use queue listeners:
Configure the listener queue provider in the Queue Listener Providers configuration interface.
Ensure the following QueueListener application settings are set for the Queue Reader service (see the example after these steps):
IsEnabled: set to True.
QueuePath: set to the path of the queue to be used as the listener queue.
Set the ListenerQueuePath Realtime app setting to the same value as QueuePath.
In the Queue Listener Providers configuration interface, copy the listener queue’s JSON configuration to the clipboard.
Paste it into the ListenerQueueSettings section of the Realtime appsettings.json file.
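As a minimal sketch of the Queue Reader settings above, the two values might be supplied as environment variables, assuming they follow the same QueueService__QueueListener__ naming convention as the settings listed below; the queue path shown is an example value only:

QueueService__QueueListener__IsEnabled=true
QueueService__QueueListener__QueuePath=rpi-listener-queue

The Realtime ListenerQueuePath setting would then be set to the same example value, rpi-listener-queue.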
Any messages received for an inactive trigger can be pushed to a separate queue, defined using the following settings:
QueueService__QueueListener__ListenerQueueNonActiveQueuePath
QueueService__QueueListener__ListenerQueueNonActiveTTLDays
Any messages received that result in an error (e.g. missing trigger key or malformed JSON) can be pushed to a separate queue, defined using the following settings:
QueueService__QueueListener__ListenerQueueErrorQueuePath
QueueService__QueueListener__ListenerQueueErrorTTLDays
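For illustration, both fallback queues might be configured with environment variables such as the following; the queue paths and retention periods are example values only:

QueueService__QueueListener__ListenerQueueNonActiveQueuePath=rpi-listener-nonactive
QueueService__QueueListener__ListenerQueueNonActiveTTLDays=7
QueueService__QueueListener__ListenerQueueErrorQueuePath=rpi-listener-error
QueueService__QueueListener__ListenerQueueErrorTTLDays=14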
Realtime Queue setup
Realtime event processing can be operated in two modes:
Distributed mode: allows more than one Queue Reader service to drain the same queue, enabling scaling out to improve processing performance. Interim data is stored in an external (Redis) cache and queue (any queue provider, but preferably local), which also protects against data loss.
Non-distributed mode: all work for a single queue is handled by a single service. There is no need for an external queue or cache to hold interim data.
Example appsettings files are provided below.
Appsettings example: Non-distributed mode, all Realtime queues for all tenants
{
  "QueueService": {
    "RealtimeConfiguration": {
      "IsFormProcessingEnabled": true,
      "IsEventProcessingEnabled": true,
      "IsCacheProcessingEnabled": true,
      "TenantIDs": [],
      "IsDistributed": false
    }
  }
}
Appsettings example: Distributed mode, all Realtime queues for a specific tenant
{
  "QueueService": {
    "RealtimeConfiguration": {
      "IsFormProcessingEnabled": true,
      "IsEventProcessingEnabled": true,
      "IsCacheProcessingEnabled": true,
      "TenantIDs": [
        "7F037AD1-099E-4721-A51D-157E57C80498"
      ],
      "IsDistributed": true,
      "DistributedCache": {
        "Provider": "Redis",
        "RedisSettings": {
          "IPAddress": "realtimecache:6379"
        }
      },
      "DistributedQueue": {
        "Assembly": "RedPoint.Resonance.RabbitMQAccess",
        "Type": "RedPoint.Resonance.RabbitMQAccess.RabbitMQFactory",
        "Settings": [
          {
            "Key": "HostName",
            "Value": "queueservice"
          },
          {
            "Key": "VirtualHost",
            "Value": "/"
          },
          {
            "Key": "UserName",
            "Value": "xxx"
          },
          {
            "Key": "Password",
            "Value": "xxx"
          }
        ]
      }
    }
  }
}
Operational endpoints
The following endpoints are available on port 8080 of the Queue Reader service:
Start: /api/operations/start
Status: /api/operations/status
Stop: /api/operations/stop
Execution Statistics: /api/operations/stats
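As a quick sketch, the endpoints can be called over plain HTTP, assuming the service is reachable locally and that the status and statistics endpoints respond to GET requests; the host name is illustrative only:

curl http://localhost:8080/api/operations/status
curl http://localhost:8080/api/operations/stats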