An incident management tool that supports alerting across multiple channels with easy custom messaging and on-call integrations. Compatible with any tool supporting webhook alerts, it’s designed for modern DevOps teams to quickly respond to production incidents.
🚀 Boost Your SRE Skills with the Book: On-Call in Action.
Features
- 🚨 Multi-channel Alerts: Send incident notifications to Slack, Microsoft Teams, Telegram, and Email (more channels coming!)
- 📝 Custom Templates: Define your own alert messages using Go templates
- 🔧 Easy Configuration: YAML-based configuration with environment variables support
- 📡 REST API: Simple HTTP interface to receive alerts
- 📡 On-call: On-call integrations with AWS Incident Manager and PagerDuty
Contributing
We welcome contributions! Please follow these steps:
- Fork the Versus Incident repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
Distributed under the MIT License. See LICENSE for more information.
Support This Project
Help us maintain Versus Incident! Your sponsorship:
🔧 Funds critical infrastructure
🚀 Accelerates new features like Viber/Lark integration, Web UI and On-call integrations
Getting Started
Prerequisites
- Go 1.20+
- Docker 20.10+ (optional)
- Slack workspace (for Slack notifications)
Easy Installation with Docker
docker run -p 3000:3000 \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN=your_token \
-e SLACK_CHANNEL_ID=your_channel \
ghcr.io/versuscontrol/versus-incident
Or build from source
# Clone the repository
git clone https://github.com/VersusControl/versus-incident.git
cd versus-incident
# Build with Go
go build -o versus-incident ./cmd/main.go
chmod +x versus-incident
Create run.sh:
#!/bin/bash
export SLACK_ENABLE=true
export SLACK_TOKEN=your_token
export SLACK_CHANNEL_ID=your_channel
./versus-incident
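Make the script executable, then start the service:
chmod +x run.sh
./run.sh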
Development
Docker
Create a configuration file:
mkdir -p ./config && touch ./config/config.yaml
config/config.yaml:
name: versus
host: 0.0.0.0
port: 3000

alert:
  slack:
    enable: true
    token: ${SLACK_TOKEN}
    channel_id: ${SLACK_CHANNEL_ID}
    template_path: "/app/config/slack_message.tmpl"

  telegram:
    enable: false

  msteams:
    enable: false
Configuration Notes
Ensure template_path in config.yaml matches the container path:
alert:
  slack:
    template_path: "/app/config/slack_message.tmpl" # For containerized env
Slack Template
Create your Slack message template, for example config/slack_message.tmpl:
🔥 *Critical Error in {{.ServiceName}}*
❌ Error Details:
```{{.Logs}}```
Owner <@{{.UserID}}> please investigate
Run with volume mount:
docker run -d \
-p 3000:3000 \
-v $(pwd)/config:/app/config \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN=your_slack_token \
-e SLACK_CHANNEL_ID=your_channel_id \
--name versus \
ghcr.io/versuscontrol/versus-incident
To test, simply send an incident to Versus:
curl -X POST http://localhost:3000/api/incidents \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] This is an error log from User Service that we can obtain using Fluent Bit.",
"ServiceName": "order-service",
"UserID": "SLACK_USER_ID"
}'
Response:
{
"status":"Incident created"
}
Other Templates
Telegram Template
For Telegram, you can use HTML formatting. Create your Telegram message template, for example config/telegram_message.tmpl:
🚨 <b>Critical Error Detected!</b> 🚨
📌 <b>Service:</b> {{.ServiceName}}
⚠️ <b>Error Details:</b>
{{.Logs}}
This template will be parsed with HTML tags when sending the alert to Telegram.
Email Template
Create your email message template, for example config/email_message.tmpl:
Subject: Critical Error Alert - {{.ServiceName}}
Critical Error Detected in {{.ServiceName}}
----------------------------------------
Error Details:
{{.Logs}}
Please investigate this issue immediately.
Best regards,
Versus Incident Management System
This template supports both plain text and HTML formatting for email notifications.
Microsoft Teams Template
Create your Teams message template, for example config/msteams_message.tmpl:
**Critical Error in {{.ServiceName}}**
**Error Details:**
```{{.Logs}}```
Please investigate immediately
Kubernetes
- Create a secret for Slack:
# Create secret
kubectl create secret generic versus-secrets \
--from-literal=slack_token=$SLACK_TOKEN \
--from-literal=slack_channel_id=$SLACK_CHANNEL_ID
- Create a ConfigMap for the config and template file, for example versus-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: versus-config
data:
  config.yaml: |
    name: versus
    host: 0.0.0.0
    port: 3000

    alert:
      slack:
        enable: true
        token: ${SLACK_TOKEN}
        channel_id: ${SLACK_CHANNEL_ID}
        template_path: "/app/config/slack_message.tmpl"

      telegram:
        enable: false

  slack_message.tmpl: |
    *Critical Error in {{.ServiceName}}*
    ----------
    Error Details:
    ```
    {{.Logs}}
    ```
    ----------
    Owner <@{{.UserID}}> please investigate
kubectl apply -f versus-config.yaml
- Create versus-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: versus-incident
spec:
  replicas: 2
  selector:
    matchLabels:
      app: versus-incident
  template:
    metadata:
      labels:
        app: versus-incident
    spec:
      containers:
      - name: versus-incident
        image: ghcr.io/versuscontrol/versus-incident
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 3000
        env:
        - name: SLACK_CHANNEL_ID
          valueFrom:
            secretKeyRef:
              name: versus-secrets
              key: slack_channel_id
        - name: SLACK_TOKEN
          valueFrom:
            secretKeyRef:
              name: versus-secrets
              key: slack_token
        volumeMounts:
        - name: versus-config
          mountPath: /app/config/config.yaml
          subPath: config.yaml
        - name: versus-config
          mountPath: /app/config/slack_message.tmpl
          subPath: slack_message.tmpl
      volumes:
      - name: versus-config
        configMap:
          name: versus-config
---
apiVersion: v1
kind: Service
metadata:
  name: versus-service
spec:
  selector:
    app: versus-incident # Must match the Deployment's pod labels
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
- Apply:
kubectl apply -f versus-deployment.yaml
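To verify the deployment, you can port-forward the Service and send a test incident (a quick check, assuming the versus-service defined above):
kubectl port-forward svc/versus-service 3000:3000
curl -X POST http://localhost:3000/api/incidents \
  -H "Content-Type: application/json" \
  -d '{"Logs": "[ERROR] Test incident from Kubernetes.", "ServiceName": "order-service", "UserID": "SLACK_USER_ID"}'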
SNS Usage
docker run -d \
-p 3000:3000 \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN=your_slack_token \
-e SLACK_CHANNEL_ID=your_channel_id \
-e SNS_ENABLE=true \
-e SNS_TOPIC_ARN=$SNS_TOPIC_ARN \
-e SNS_HTTPS_ENDPOINT_SUBSCRIPTION=https://your-domain.com \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY \
--name versus \
ghcr.io/versuscontrol/versus-incident
Send test message using AWS CLI:
aws sns publish \
--topic-arn $SNS_TOPIC_ARN \
--message '{"ServiceName":"test-service","Logs":"[ERROR] Test error","UserID":"U12345"}' \
--region $AWS_REGION
A key real-world application of Amazon SNS involves integrating it with CloudWatch Alarms. This allows CloudWatch to publish messages to an SNS topic when an alarm state changes (e.g., from OK to ALARM), which can then trigger notifications to Slack, Telegram, or Email via Versus Incident with a custom template.
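For example, an alarm action simply points at the SNS topic that Versus subscribes to (a sketch with the AWS CLI; the CloudWatch-to-RDS walkthrough later in this document shows the full flow):
aws cloudwatch put-metric-alarm \
  --alarm-name "High-CPU-Utilization" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistic Average \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions $SNS_TOPIC_ARN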
On-call
Currently, Versus supports on-call integrations with AWS Incident Manager and PagerDuty. Updated configuration example with on-call features:
name: versus
host: 0.0.0.0
port: 3000
public_host: https://your-ack-host.example # Required for on-call ack

# ... existing alert configurations ...

oncall:
  ### Enable overriding using query parameters
  # /api/incidents?oncall_enable=false => Set to `true` or `false` to enable or disable on-call for a specific alert
  # /api/incidents?oncall_wait_minutes=0 => Set the number of minutes to wait for acknowledgment before triggering on-call. Set to `0` to trigger immediately
  enable: false
  wait_minutes: 3 # If you set it to 0, it means there's no need to check for an acknowledgment, and the on-call will trigger immediately

  aws_incident_manager:
    response_plan_arn: ${AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN}

redis: # Required for on-call functionality
  insecure_skip_verify: true # dev only
  host: ${REDIS_HOST}
  port: ${REDIS_PORT}
  password: ${REDIS_PASSWORD}
  db: 0
Explanation:
The oncall section includes:
- enable: A boolean to toggle on-call functionality (default: false).
- wait_minutes: Time in minutes to wait for an acknowledgment before escalating (default: 3). Setting it to 0 triggers the on-call immediately.
- aws_incident_manager: Contains the response_plan_arn, which links to an AWS Incident Manager response plan via an environment variable.
The redis section is required when oncall.enable is true. It configures the Redis instance used for state management or queuing, with settings like host, port, password, and db.
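For example, both settings can be overridden per request with the documented query parameters:
curl -X POST "http://localhost:3000/api/incidents?oncall_enable=true&oncall_wait_minutes=0" \
  -H "Content-Type: application/json" \
  -d '{"Logs": "[CRITICAL] Database down.", "ServiceName": "order-service", "UserID": "U12345"}'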
For detailed information on integration, please refer to the document here: On-call setup with Versus.
Template Syntax Guide
This document explains the template syntax (Go template syntax) used to create custom alert templates in Versus Incident.
Basic Syntax
Access Data
Access data fields using double curly braces and dot notation, for example, with the data:
{
  "Logs": "[ERROR] This is an error log from User Service that we can obtain using Fluent Bit.",
  "ServiceName": "order-service"
}
Example template:
*Error in {{ .ServiceName }}*
{{ .Logs }}
Variables
You can declare variables within a template using the {{ $variable := value }} syntax. Once declared, variables can be used throughout the template, for example:
{{ $owner := "Team Alpha" }}
Owner: {{ $owner }}
Output:
Owner: Team Alpha
Pipelines
Pipelines allow you to chain together multiple actions or functions. The result of one action can be passed as input to another, for example:
upper: Converts a string to uppercase.
*{{ .ServiceName | upper }} Failure*
lower: Converts a string to lowercase.
*{{ .ServiceName | lower }} Failure*
title: Converts a string to title case (first letter of each word capitalized).
*{{ .ServiceName | title }} Failure*
default: Provides a default value if the input is empty.
*{{ .ServiceName | default "unknown-service" }} Failure*
slice: Extracts a sub-slice from a slice or string.
{{ .Logs | slice 0 50 }} // First 50 characters
replace: Replaces occurrences of a substring.
{{ .Logs | replace "error" "issue" }}
trimPrefix: Trims a prefix from a string.
{{ .Logs | trimPrefix "prod-" }}
trimSuffix: Trims a suffix from a string.
{{ .Logs | trimSuffix "-service" }}
len: Returns the length of a string, slice, array, or map.
{{ .Logs | len }} // Length of the message
urlquery: Escapes a string for use in a URL query.
/search?q={{ .Query | urlquery }}
split: Splits a string into an array using a separator.
{{ $parts := split "apple,banana,cherry" "," }}
{{/* Iterate over split results */}}
{{ range $parts }}
{{ . }}
{{ end }}
You can chain multiple pipes together:
{{ .Logs | trim | lower | truncate 50 }}
Control Structures
Conditionals
The templates support conditional logic using if, else, and end keywords.
{{ if .IsCritical }}
🚨 CRITICAL ALERT 🚨
{{ else }}
⚠️ Warning Alert ⚠️
{{ end }}
and:
{{ and .Value1 .Value2 .Value3 }}
or:
{{ or .Value1 .Value2 "default" }}
Best Practices
Error Handling:
{{ if .Error }}
{{ .Details }}
{{ else }}
No error details
{{ end }}
Whitespace Control:
{{- if .Production }} // Remove preceding whitespace
PROD ALERT{{ end -}} // Remove trailing whitespace
Template Comments:
{{/* This is a hidden comment */}}
not: Negates a boolean value:
{{ if not .IsCritical }}
This is not a critical issue.
{{ end }}
eq: Checks if two values are equal:
{{ if eq .Status "critical" }}
🚨 Critical Alert 🚨
{{ end }}
ne: Checks if two values are not equal:
{{ if ne .Env "production" }}
This is not a production environment.
{{ end }}
len with gt: Returns the length of a string, slice, array, or map, which you can compare with gt:
{{ if gt (len .Errors) 0 }}
There are {{ len .Errors }} errors.
{{ end }}
hasPrefix: Checks if a string has a specific prefix:
{{ if .ServiceName | hasPrefix "prod-" }}
Production service!
{{ end }}
hasSuffix: Checks if a string has a specific suffix:
{{ if .ServiceName | hasSuffix "-service" }}
This is a service.
{{ end }}
contains: Checks if a message contains a specific string:
{{ if contains .Logs "error" }}
The message contains error logs.
{{ else }}
The message does NOT contain error.
{{ end }}
Loops
Iterate over slices/arrays with range:
{{ range .ErrorStack }}
- {{ . }}
{{ end }}
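range can also bind an index alongside each element, which is standard Go template behavior:
{{ range $i, $err := .ErrorStack }}
{{ $i }}. {{ $err }}
{{ end }}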
Microsoft Teams Templates
Microsoft Teams templates support Markdown syntax, which is automatically converted to Adaptive Cards when sent to Teams. As of April 2025 (with the retirement of Office 365 Connectors), all Microsoft Teams integrations use Power Automate Workflows.
Supported Markdown Features
Your template can include:
- Headings: Use #, ##, or ### for different heading levels
- Bold Text: Wrap text with double asterisks (**bold**)
- Code Blocks: Use triple backticks to create code blocks
- Lists: Create unordered lists with - or *, and ordered lists with numbers
- Links: Use [text](url) to create clickable links
Automatic Summary and Text Fields
Versus Incident now automatically handles two important fields for Microsoft Teams notifications:
- Summary: The system extracts a summary from your template's first heading (or first line if no heading exists) which appears in Teams notifications.
- Text: A plain text version of your message is automatically generated as a fallback for clients that don't support Adaptive Cards.
You don't need to add these fields manually - the system handles this for you to ensure proper display in Microsoft Teams.
Example Template
Here's a complete example for Microsoft Teams:
# Incident Alert: {{.ServiceName}}
### Error Information
**Time**: {{.Timestamp}}
**Severity**: {{.Severity}}
## Error Details
```{{.Logs}}```
## Action Required
1. Check system status
2. Review logs in monitoring dashboard
3. Escalate to on-call if needed
[View Details](https://your-dashboard/incidents/{{.IncidentID}})
This will be converted to an Adaptive Card with proper formatting in Microsoft Teams, with headings, code blocks, formatted lists, and clickable links.
Configuration
A sample configuration file is located at config/config.yaml:
name: versus
host: 0.0.0.0
port: 3000
public_host: https://your-ack-host.example # Required for on-call ack

alert:
  debug_body: true # Default value, will be overridden by DEBUG_BODY env var

  slack:
    enable: false # Default value, will be overridden by SLACK_ENABLE env var
    token: ${SLACK_TOKEN} # From environment
    channel_id: ${SLACK_CHANNEL_ID} # From environment
    template_path: "config/slack_message.tmpl"

  telegram:
    enable: false # Default value, will be overridden by TELEGRAM_ENABLE env var
    bot_token: ${TELEGRAM_BOT_TOKEN} # From environment
    chat_id: ${TELEGRAM_CHAT_ID} # From environment
    template_path: "config/telegram_message.tmpl"

  email:
    enable: false # Default value, will be overridden by EMAIL_ENABLE env var
    smtp_host: ${SMTP_HOST} # From environment
    smtp_port: ${SMTP_PORT} # From environment
    username: ${SMTP_USERNAME} # From environment
    password: ${SMTP_PASSWORD} # From environment
    to: ${EMAIL_TO} # From environment
    subject: ${EMAIL_SUBJECT} # From environment
    template_path: "config/email_message.tmpl"

  msteams:
    enable: false # Default value, will be overridden by MSTEAMS_ENABLE env var
    power_automate_url: ${MSTEAMS_POWER_AUTOMATE_URL} # Power Automate HTTP trigger URL (required)
    template_path: "config/msteams_message.tmpl"
    other_power_urls: # Optional: Define additional Power Automate URLs for multiple MS Teams channels
      qc: ${MSTEAMS_OTHER_POWER_URL_QC} # Power Automate URL for QC team
      ops: ${MSTEAMS_OTHER_POWER_URL_OPS} # Power Automate URL for Ops team
      dev: ${MSTEAMS_OTHER_POWER_URL_DEV} # Power Automate URL for Dev team

  lark:
    enable: false # Default value, will be overridden by LARK_ENABLE env var
    webhook_url: ${LARK_WEBHOOK_URL} # Lark webhook URL (required)
    template_path: "config/lark_message.tmpl"
    other_webhook_urls: # Optional: Enable overriding the default webhook URL using query parameters, eg /api/incidents?lark_other_webhook_url=dev
      dev: ${LARK_OTHER_WEBHOOK_URL_DEV}
      prod: ${LARK_OTHER_WEBHOOK_URL_PROD}

queue:
  enable: true
  debug_body: true

  # AWS SNS
  sns:
    enable: false
    https_endpoint_subscription_path: /sns # URI to receive SNS messages, e.g. ${host}:${port}/sns or ${https_endpoint_subscription}/sns
    # Options if you want to automatically create an SNS subscription
    https_endpoint_subscription: ${SNS_HTTPS_ENDPOINT_SUBSCRIPTION} # If the user configures an HTTPS endpoint, then an SNS subscription will be automatically created, e.g. https://your-domain.com
    topic_arn: ${SNS_TOPIC_ARN}

  # AWS SQS
  sqs:
    enable: false
    queue_url: ${SQS_QUEUE_URL}

  # GCP Pub Sub
  pubsub:
    enable: false

  # Azure Event Bus
  azbus:
    enable: false

oncall:
  ### Enable overriding using query parameters
  # /api/incidents?oncall_enable=false => Set to `true` or `false` to enable or disable on-call for a specific alert
  # /api/incidents?oncall_wait_minutes=0 => Set the number of minutes to wait for acknowledgment before triggering on-call. Set to `0` to trigger immediately
  enable: false
  wait_minutes: 3 # If you set it to 0, it means there's no need to check for an acknowledgment, and the on-call will trigger immediately
  provider: aws_incident_manager # Valid values: "aws_incident_manager" or "pagerduty"

  aws_incident_manager: # Used when provider is "aws_incident_manager"
    response_plan_arn: ${AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN}
    other_response_plan_arns: # Optional: Enable overriding the default response plan ARN using query parameters, eg /api/incidents?awsim_other_response_plan=prod
      prod: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_PROD}
      dev: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_DEV}
      staging: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_STAGING}

  pagerduty: # Used when provider is "pagerduty"
    routing_key: ${PAGERDUTY_ROUTING_KEY} # Integration/Routing key for Events API v2 (REQUIRED)
    other_routing_keys: # Optional: Enable overriding the default routing key using query parameters, eg /api/incidents?pagerduty_other_routing_key=infra
      infra: ${PAGERDUTY_OTHER_ROUTING_KEY_INFRA}
      app: ${PAGERDUTY_OTHER_ROUTING_KEY_APP}
      db: ${PAGERDUTY_OTHER_ROUTING_KEY_DB}

redis: # Required for on-call functionality
  insecure_skip_verify: true # dev only
  host: ${REDIS_HOST}
  port: ${REDIS_PORT}
  password: ${REDIS_PASSWORD}
  db: 0
Environment Variables
The application relies on several environment variables to configure alerting services. Below is an explanation of each variable:
Common
Variable | Description |
---|---|
DEBUG_BODY | Set to true to print the request body sent to Versus Incident (see the example below). |
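For example, a minimal sketch of enabling body logging on the quick-start container:
docker run -p 3000:3000 \
  -e DEBUG_BODY=true \
  -e SLACK_ENABLE=true \
  -e SLACK_TOKEN=your_token \
  -e SLACK_CHANNEL_ID=your_channel \
  ghcr.io/versuscontrol/versus-incident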
Slack Configuration
Variable | Description |
---|---|
SLACK_ENABLE | Set to true to enable Slack notifications. |
SLACK_TOKEN | The authentication token for your Slack bot. |
SLACK_CHANNEL_ID | The ID of the Slack channel where alerts will be sent. Can be overridden per request using the slack_channel_id query parameter. |
Telegram Configuration
Variable | Description |
---|---|
TELEGRAM_ENABLE | Set to true to enable Telegram notifications. |
TELEGRAM_BOT_TOKEN | The authentication token for your Telegram bot. |
TELEGRAM_CHAT_ID | The chat ID where alerts will be sent. Can be overridden per request using the telegram_chat_id query parameter. |
Email Configuration
Variable | Description |
---|---|
EMAIL_ENABLE | Set to true to enable email notifications. |
SMTP_HOST | The SMTP server hostname (e.g., smtp.gmail.com). |
SMTP_PORT | The SMTP server port (e.g., 587 for TLS). |
SMTP_USERNAME | The username/email for SMTP authentication. |
SMTP_PASSWORD | The password or app-specific password for SMTP authentication. |
EMAIL_TO | The recipient email address(es) for incident notifications. Can be multiple addresses separated by commas. Can be overridden per request using the email_to query parameter. |
EMAIL_SUBJECT | The subject line for email notifications. Can be overridden per request using the email_subject query parameter. |
Microsoft Teams Configuration
The Microsoft Teams integration now supports both legacy Office 365 webhooks and modern Power Automate workflows with a single configuration option:
alert:
msteams:
enable: true
power_automate_url: ${MSTEAMS_POWER_AUTOMATE_URL}
template_path: "config/msteams_message.tmpl"
Automatic URL Detection (April 2025 Update)
As of the April 2025 update, Versus Incident automatically detects the type of URL provided in the power_automate_url setting:
- Legacy Office 365 Webhook URLs: If the URL contains "webhook.office.com" (e.g., https://yourcompany.webhook.office.com/...), the system will use the legacy format with a simple "text" field containing your rendered Markdown.
- Power Automate Workflow URLs: For newer Power Automate HTTP trigger URLs, the system converts your Markdown template to an Adaptive Card with rich formatting features.
This automatic detection provides backward compatibility while supporting newer features, eliminating the need for separate configuration options.
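For illustration, either style of URL is supplied through the same variable (both values below are placeholders, not real endpoints):
# Legacy Office 365 webhook, detected by the "webhook.office.com" host
export MSTEAMS_POWER_AUTOMATE_URL="https://yourcompany.webhook.office.com/..."
# Power Automate HTTP trigger URL
export MSTEAMS_POWER_AUTOMATE_URL="https://your-power-automate-trigger-url/..."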
Variable | Description |
---|---|
MSTEAMS_ENABLE | Set to true to enable Microsoft Teams notifications. |
MSTEAMS_POWER_AUTOMATE_URL | The Power Automate HTTP trigger URL for your Teams channel. Automatically works with both Power Automate workflow URLs and legacy Office 365 webhooks. |
MSTEAMS_OTHER_POWER_URL_QC | (Optional) Power Automate URL for the QC team channel. Can be selected per request using the msteams_other_power_url=qc query parameter. |
MSTEAMS_OTHER_POWER_URL_OPS | (Optional) Power Automate URL for the Ops team channel. Can be selected per request using the msteams_other_power_url=ops query parameter. |
MSTEAMS_OTHER_POWER_URL_DEV | (Optional) Power Automate URL for the Dev team channel. Can be selected per request using the msteams_other_power_url=dev query parameter. |
Lark Configuration
Variable | Description |
---|---|
LARK_ENABLE | Set to true to enable Lark notifications. |
LARK_WEBHOOK_URL | The webhook URL for your Lark channel. |
LARK_OTHER_WEBHOOK_URL_DEV | (Optional) Webhook URL for the development team. Can be selected per request using the lark_other_webhook_url=dev query parameter. |
LARK_OTHER_WEBHOOK_URL_PROD | (Optional) Webhook URL for the production team. Can be selected per request using the lark_other_webhook_url=prod query parameter. |
Queue Services Configuration
Variable | Description |
---|---|
SNS_ENABLE | Set to true to enable receiving alert messages from SNS. |
SNS_HTTPS_ENDPOINT_SUBSCRIPTION | This specifies the HTTPS endpoint to which SNS sends messages. When an HTTPS endpoint is configured, an SNS subscription is automatically created. If no endpoint is configured, you must create the SNS subscription manually using the CLI or AWS Console. E.g. https://your-domain.com . |
SNS_TOPIC_ARN | AWS ARN of the SNS topic to subscribe to. |
SQS_ENABLE | Set to true to enable receiving alert messages from AWS SQS. |
SQS_QUEUE_URL | URL of the AWS SQS queue to receive messages from. |
On-Call Configuration
Variable | Description |
---|---|
ONCALL_ENABLE | Set to true to enable on-call functionality. Can be overridden per request using the oncall_enable query parameter. |
ONCALL_WAIT_MINUTES | Time in minutes to wait for acknowledgment before escalating (default: 3). Can be overridden per request using the oncall_wait_minutes query parameter. |
ONCALL_PROVIDER | Specify the on-call provider to use ("aws_incident_manager" or "pagerduty"). |
AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN | The ARN of the AWS Incident Manager response plan to use for on-call escalations. Required if on-call provider is "aws_incident_manager". |
AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_PROD | (Optional) AWS Incident Manager response plan ARN for production environment. Can be selected per request using the awsim_other_response_plan=prod query parameter. |
AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_DEV | (Optional) AWS Incident Manager response plan ARN for development environment. Can be selected per request using the awsim_other_response_plan=dev query parameter. |
AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_STAGING | (Optional) AWS Incident Manager response plan ARN for staging environment. Can be selected per request using the awsim_other_response_plan=staging query parameter. |
PAGERDUTY_ROUTING_KEY | Integration/Routing key for PagerDuty Events API v2. Required if on-call provider is "pagerduty". |
PAGERDUTY_OTHER_ROUTING_KEY_INFRA | (Optional) PagerDuty routing key for infrastructure team. Can be selected per request using the pagerduty_other_routing_key=infra query parameter. |
PAGERDUTY_OTHER_ROUTING_KEY_APP | (Optional) PagerDuty routing key for application team. Can be selected per request using the pagerduty_other_routing_key=app query parameter. |
PAGERDUTY_OTHER_ROUTING_KEY_DB | (Optional) PagerDuty routing key for database team. Can be selected per request using the pagerduty_other_routing_key=db query parameter. |
Redis Configuration
Variable | Description |
---|---|
REDIS_HOST | The hostname or IP address of the Redis server. Required if on-call is enabled. |
REDIS_PORT | The port number of the Redis server. Required if on-call is enabled. |
REDIS_PASSWORD | The password for authenticating with the Redis server. Required if on-call is enabled and Redis requires authentication. |
Ensure these environment variables are properly set before running the application.
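As a sketch, here is a docker run that combines Slack alerting with AWS Incident Manager on-call and Redis (all values are placeholders):
docker run -d -p 3000:3000 \
  -e SLACK_ENABLE=true \
  -e SLACK_TOKEN=your_token \
  -e SLACK_CHANNEL_ID=your_channel \
  -e ONCALL_ENABLE=true \
  -e ONCALL_PROVIDER=aws_incident_manager \
  -e AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN=your_response_plan_arn \
  -e REDIS_HOST=your_redis_host \
  -e REDIS_PORT=6379 \
  -e REDIS_PASSWORD=your_redis_password \
  ghcr.io/versuscontrol/versus-incident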
Dynamic Configuration with Query Parameters
We provide a way to overwrite configuration values using query parameters, allowing you to send alerts to different channels and customize notification behavior on a per-request basis.
Query Parameter | Description |
---|---|
slack_channel_id | The ID of the Slack channel where alerts will be sent. Use: /api/incidents?slack_channel_id=<your_value> . |
telegram_chat_id | The chat ID where Telegram alerts will be sent. Use: /api/incidents?telegram_chat_id=<your_chat_id> . |
email_to | Overrides the default recipient email address for email notifications. Use: /api/incidents?email_to=<recipient_email> . |
email_subject | Overrides the default subject line for email notifications. Use: /api/incidents?email_subject=<custom_subject> . |
msteams_other_power_url | Overrides the default Microsoft Teams Power Automate flow by specifying an alternative key (e.g., qc, ops, dev). Use: /api/incidents?msteams_other_power_url=qc . |
lark_other_webhook_url | Overrides the default Lark webhook URL by specifying an alternative key (e.g., dev, prod). Use: /api/incidents?lark_other_webhook_url=dev . |
oncall_enable | Set to true or false to enable or disable on-call for a specific alert. Use: /api/incidents?oncall_enable=false . |
oncall_wait_minutes | Set the number of minutes to wait for acknowledgment before triggering on-call. Set to 0 to trigger immediately. Use: /api/incidents?oncall_wait_minutes=0 . |
awsim_other_response_plan | Overrides the default AWS Incident Manager response plan ARN by specifying an alternative key (e.g., prod, dev, staging). Use: /api/incidents?awsim_other_response_plan=prod . |
pagerduty_other_routing_key | Overrides the default PagerDuty routing key by specifying an alternative key (e.g., infra, app, db). Use: /api/incidents?pagerduty_other_routing_key=infra . |
Examples for Each Query Parameter
Slack Channel Override
To send an alert to a specific Slack channel (e.g., a dedicated channel for database issues):
curl -X POST "http://localhost:3000/api/incidents?slack_channel_id=C01DB2ISSUES" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Database connection pool exhausted.",
"ServiceName": "database-service",
"UserID": "U12345"
}'
Telegram Chat Override
To send an alert to a different Telegram chat (e.g., for network monitoring):
curl -X POST "http://localhost:3000/api/incidents?telegram_chat_id=-1001234567890" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Network latency exceeding thresholds.",
"ServiceName": "network-monitor",
"UserID": "U12345"
}'
Email Recipient Override
To send an email alert to a specific recipient with a custom subject:
curl -X POST "http://localhost:3000/api/incidents?email_to=network-team@yourdomain.com&email_subject=Urgent%20Network%20Issue" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Load balancer failing health checks.",
"ServiceName": "load-balancer",
"UserID": "U12345"
}'
Microsoft Teams Channel Override
You can configure multiple Microsoft Teams channels using the other_power_urls setting:
alert:
  msteams:
    enable: true
    power_automate_url: ${MSTEAMS_POWER_AUTOMATE_URL}
    template_path: "config/msteams_message.tmpl"
    other_power_urls:
      qc: ${MSTEAMS_OTHER_POWER_URL_QC}
      ops: ${MSTEAMS_OTHER_POWER_URL_OPS}
      dev: ${MSTEAMS_OTHER_POWER_URL_DEV}
Then, to send an alert to the QC team's Microsoft Teams channel:
curl -X POST "http://localhost:3000/api/incidents?msteams_other_power_url=qc" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Quality check failed for latest deployment.",
"ServiceName": "quality-service",
"UserID": "U12345"
}'
Lark Webhook Override
You can configure multiple Lark webhook URLs using the other_webhook_urls setting:
alert:
  lark:
    enable: true
    webhook_url: ${LARK_WEBHOOK_URL}
    template_path: "config/lark_message.tmpl"
    other_webhook_urls:
      dev: ${LARK_OTHER_WEBHOOK_URL_DEV}
      prod: ${LARK_OTHER_WEBHOOK_URL_PROD}
Then, to send an alert to the development team's Lark channel:
curl -X POST "http://localhost:3000/api/incidents?lark_other_webhook_url=dev" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Development server crash detected.",
"ServiceName": "dev-server",
"UserID": "U12345"
}'
On-Call Controls
To disable on-call escalation for a non-critical alert:
curl -X POST "http://localhost:3000/api/incidents?oncall_enable=false" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[WARNING] This is a minor issue that doesn't require on-call response.",
"ServiceName": "monitoring-service",
"UserID": "U12345"
}'
To trigger on-call immediately without the normal wait period for a critical issue:
curl -X POST "http://localhost:3000/api/incidents?oncall_wait_minutes=0" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[CRITICAL] Payment processing system down.",
"ServiceName": "payment-service",
"UserID": "U12345"
}'
AWS Incident Manager Response Plan Override
You can configure multiple AWS Incident Manager response plans using the other_response_plan_arns setting:
oncall:
  enable: true
  wait_minutes: 3
  provider: aws_incident_manager
  aws_incident_manager:
    response_plan_arn: ${AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN} # Default response plan
    other_response_plan_arns:
      prod: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_PROD} # Production environment
      dev: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_DEV} # Development environment
      staging: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_STAGING} # Staging environment
Then, to use a specific AWS Incident Manager response plan for a production environment issue:
curl -X POST "http://localhost:3000/api/incidents?awsim_other_response_plan=prod" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[CRITICAL] Production database cluster failure.",
"ServiceName": "prod-database",
"UserID": "U12345"
}'
PagerDuty Routing Key Override
You can configure multiple PagerDuty routing keys using the other_routing_keys setting:
oncall:
  enable: true
  wait_minutes: 3
  provider: pagerduty
  pagerduty:
    routing_key: ${PAGERDUTY_ROUTING_KEY} # Default routing key
    other_routing_keys:
      infra: ${PAGERDUTY_OTHER_ROUTING_KEY_INFRA} # Infrastructure team
      app: ${PAGERDUTY_OTHER_ROUTING_KEY_APP} # Application team
      db: ${PAGERDUTY_OTHER_ROUTING_KEY_DB} # Database team
Then, to use a specific PagerDuty routing key for the infrastructure team:
curl -X POST "http://localhost:3000/api/incidents?pagerduty_other_routing_key=infra" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Server load balancer failure in us-west-2.",
"ServiceName": "infrastructure",
"UserID": "U12345"
}'
Combining Multiple Parameters
You can combine multiple query parameters to customize exactly how an incident is handled:
curl -X POST "http://localhost:3000/api/incidents?slack_channel_id=C01PROD&telegram_chat_id=-987654321&oncall_enable=true&oncall_wait_minutes=1" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[CRITICAL] Multiple service failures detected in production environment.",
"ServiceName": "core-infrastructure",
"UserID": "U12345",
"Severity": "CRITICAL"
}'
This will:
- Send the alert to a specific Slack channel (C01PROD)
- Send the alert to a specific Telegram chat (-987654321)
- Enable on-call escalation with a shortened 1-minute wait time
Slack Template for AWS SNS
1. AWS Glue Job Failure
SNS Message:
{
"source": "aws.glue",
"detail": {
"jobName": "etl-pipeline",
"state": "FAILED",
"message": "OutOfMemoryError: Java heap space"
}
}
Slack Template:
{{ if eq .source "aws.glue" }}
🔥 *Glue Job Failed*: {{.detail.jobName}}
❌ Error:
```{{.detail.message}}```
{{ end }}
2. EC2 Instance State Change
SNS Message:
{
"source": "aws.ec2",
"detail": {
"instance-id": "i-1234567890abcdef0",
"state": "stopped"
}
}
Slack Template:
{{ if eq .source "aws.ec2" }}
🖥 *EC2 Instance {{.detail.state | title}}*
ID: `{{ index .detail "instance-id" }}`
{{ end }}
3. CloudWatch Alarm Trigger
SNS Message:
{
"source": "aws.cloudwatch",
"detail": {
"alarmName": "High-CPU-Utilization",
"state": "ALARM",
"metricName": "CPUUtilization",
"threshold": 80,
"actualValue": 92.5
}
}
Slack Template:
{{ if eq .source "aws.cloudwatch" }}
🚨 *CloudWatch Alarm Triggered*
• Name: {{.detail.alarmName}}
• Metric: {{.detail.metricName}}
• Value: {{.detail.actualValue}}% (Threshold: {{.detail.threshold}}%)
{{ end }}
4. Lambda Function Error
SNS Message:
{
"source": "aws.lambda",
"detail": {
"functionName": "data-processor",
"errorType": "Runtime.ExitError",
"errorMessage": "Process exited before completing request"
}
}
Slack Template:
{{ if eq .source "aws.lambda" }}
λ *Lambda Failure*: {{.detail.functionName}}
⚠️ Error: {{.detail.errorType}}
💬 Message: {{.detail.errorMessage}}
{{ end }}
5. AWS CodePipeline Failure
Scenario: A pipeline deployment fails during the "Deploy" stage.
SNS Message:
{
"source": "aws.codepipeline",
"detail-type": "CodePipeline Pipeline Execution State Change",
"detail": {
"pipeline": "prod-deployment-pipeline",
"state": "FAILED",
"stage": "Deploy",
"action": "DeployToECS",
"failure-type": "JobFailed",
"error": "ECS task definition invalid"
}
}
Slack Template:
{{ if eq .source "aws.codepipeline" }}
🚛 *Pipeline Failed*: {{.detail.pipeline | upper}}
🛑 Stage: {{.detail.stage}} (Action: {{.detail.action}})
❌ Error:
```{{.detail.error}}```
{{ end }}
6. EC2 Spot Instance Interruption (via EventBridge)
Scenario: AWS reclaims a Spot Instance due to capacity needs.
SNS Message:
{
"source": "aws.ec2",
"detail-type": "EC2 Spot Instance Interruption Warning",
"detail": {
"instance-id": "i-0abcdef1234567890",
"instance-action": "terminate",
"instance-interruption-behavior": "terminate",
"availability-zone": "us-east-1a",
"instance-type": "r5.large"
}
}
Slack Template:
{{ if eq (index . "detail-type") "EC2 Spot Instance Interruption Warning" }}
⚡ *Spot Instance Interruption*
Instance ID: `{{ index .detail "instance-id" }}`
Action: {{ index .detail "instance-action" | title }}
AZ: {{ index .detail "availability-zone" }}
⚠️ **Warning**: Migrate workloads immediately!
{{ end }}
7. ECS Task Failure
Scenario: A critical ECS task crashes repeatedly.
SNS Message:
{
"source": "aws.ecs",
"detail-type": "ECS Task State Change",
"detail": {
"clusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/prod-cluster",
"taskArn": "arn:aws:ecs:us-east-1:123456789012:task/prod-cluster/abc123",
"lastStatus": "STOPPED",
"stoppedReason": "Essential container exited"
}
}
Slack Template:
{{ if eq .source "aws.ecs" }}
🎯 *ECS Task Stopped*
Cluster: {{.detail.clusterArn | splitList "/" | last}}
Reason:
```{{.detail.stoppedReason}}```
{{ end }}
8. DynamoDB Auto-Scaling Limit Reached
Scenario: DynamoDB hits provisioned throughput limits.
SNS Message:
{
"source": "aws.dynamodb",
"detail-type": "AWS API Call via CloudTrail",
"detail": {
"eventSource": "dynamodb.amazonaws.com",
"eventName": "UpdateTable",
"errorCode": "LimitExceededException",
"errorMessage": "Table my-table exceeded maximum allowed provisioned throughput"
}
}
Slack Template:
{{ if and (eq .source "aws.dynamodb") (eq .detail.errorCode "LimitExceededException") }}
📊 *DynamoDB Throughput Limit Exceeded*
Table: `{{.detail.requestParameters.tableName}}`
Error:
```{{.detail.errorMessage}}```
{{ end }}
9. AWS Health Event (Service Disruption)
Scenario: AWS reports a regional service disruption.
SNS Message:
{
"source": "aws.health",
"detail-type": "AWS Health Event",
"detail": {
"eventTypeCategory": "issue",
"service": "EC2",
"eventDescription": [{
"language": "en",
"latestDescription": "Degraded networking in us-east-1"
}]
}
}
Slack Template:
{{ if eq .source "aws.health" }}
🏥 *AWS Health Alert*
Service: {{.detail.service}}
Impact: {{.detail.eventTypeCategory | title}}
Description:
{{ (index .detail.eventDescription 0).latestDescription }}
{{ end }}
10. Amazon GuardDuty Finding
Scenario: Unauthorized API call detected.
SNS Message:
{
"source": "aws.guardduty",
"detail-type": "GuardDuty Finding",
"detail": {
"severity": 8.5,
"type": "UnauthorizedAccess:EC2/SSHBruteForce",
"resource": {
"instanceDetails": {
"instanceId": "i-0abcdef1234567890"
}
}
}
}
Slack Template:
{{ if eq .source "aws.guardduty" }}
🛡️ *Security Alert*: {{.detail.type | replace "UnauthorizedAccess:" ""}}
Severity: {{.detail.severity}}/10
Instance: `{{.detail.resource.instanceDetails.instanceId}}`
{{ end }}
Test Templates Locally
Use the AWS CLI to send test SNS messages:
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:123456789012:MyTopic \
--message file://test-event.json
Advanced Template Tips
Multi-Service Template
Handle multiple alerts in one template:
{{ $service := .source | replace "aws." "" | upper }}
📡 *{{$service}} Alert*
{{ if eq .source "aws.glue" }}
🔧 Job: {{.detail.jobName}}
{{ else if eq .source "aws.ec2" }}
🖥 Instance: {{ index .detail "instance-id" }}
{{ end }}
🔗 *Details*: {{.detail | toJson}}
If a field may not exist in the payload, use the template's printf function to handle it safely:
{{ if contains (printf "%v" .source) "aws.glue" }}
🔥 *Glue Job Failed*: {{.detail.jobName}}
❌ Error:
```{{.detail.errorMessage}}```
{{ else }}
🔥 *Critical Error in {{.ServiceName}}*
❌ Error Details:
```{{.Logs}}```
Owner <@{{.UserID}}> please investigate
{{ end }}
Conditional Formatting
Highlight critical issues:
{{ if gt .detail.actualValue .detail.threshold }}
🚨 CRITICAL: {{.detail.alarmName}} ({{.detail.actualValue}}%)
{{ else }}
⚠️ WARNING: {{.detail.alarmName}} ({{.detail.actualValue}}%)
{{ end }}
Best Practices for Custom Templates
- Keep It Simple: Focus on the most critical details for each alert.
- Use Conditional Logic: Tailor messages based on event severity or type.
- Test Your Templates: Use sample SNS messages to validate your templates.
- Document Your Templates: Share templates with your team for consistency.
How to Customize Alert Messages from Alertmanager to Slack and Telegram
In this guide, you'll learn how to route Prometheus Alertmanager alerts to Slack and Telegram using Versus Incident, while fully customizing alert messages.
Configure Alertmanager Webhook
Update your alertmanager.yml to forward alerts to Versus:
route:
  receiver: 'versus-incident'
  group_wait: 10s

receivers:
- name: 'versus-incident'
  webhook_configs:
  - url: 'http://versus-host:3000/api/incidents' # Versus API endpoint
    send_resolved: false
    # Additional settings (if needed):
    # http_config:
    #   tls_config:
    #     insecure_skip_verify: true # For self-signed certificates
For example, alert rules:
groups:
- name: cluster
  rules:
  - alert: PostgresqlDown
    expr: pg_up == 0
    for: 0m
    labels:
      severity: critical
    annotations:
      summary: Postgresql down (instance {{ $labels.instance }})
      description: "Postgresql instance is down."
Alertmanager sends alerts to the webhook in JSON format. Here’s an example of the payload:
{
"receiver": "webhook-incident",
"status": "firing",
"alerts": [
{
"status": "firing",
"labels": {
"alertname": "PostgresqlDown",
"instance": "postgresql-prod-01",
"severity": "critical"
},
"annotations": {
"summary": "Postgresql down (instance postgresql-prod-01)",
"description": "Postgresql instance is down."
},
"startsAt": "2023-10-01T12:34:56.789Z",
"endsAt": "0001-01-01T00:00:00Z",
"generatorURL": ""
}
],
"groupLabels": {
"alertname": "PostgresqlDown"
},
"commonLabels": {
"alertname": "PostgresqlDown",
"severity": "critical",
"instance": "postgresql-prod-01"
},
"commonAnnotations": {
"summary": "Postgresql down (instance postgresql-prod-01)",
"description": "Postgresql instance is down."
},
"externalURL": ""
}
Next, we will deploy Versus Incident and configure it with a custom template to send alerts to both Slack and Telegram for this payload.
Launch Versus with Slack/Telegram
Create a configuration file config/config.yaml:
name: versus
host: 0.0.0.0
port: 3000

alert:
  slack:
    enable: true
    token: ${SLACK_TOKEN}
    channel_id: ${SLACK_CHANNEL_ID}
    template_path: "/app/config/slack_message.tmpl"

  telegram:
    enable: true
    bot_token: ${TELEGRAM_BOT_TOKEN}
    chat_id: ${TELEGRAM_CHAT_ID}
    template_path: "/app/config/telegram_message.tmpl"
Create Slack and Telegram templates.
config/slack_message.tmpl:
🔥 *{{ .commonLabels.severity | upper }} Alert: {{ .commonLabels.alertname }}*
🌐 *Instance*: `{{ .commonLabels.instance }}`
🚨 *Status*: `{{ .status }}`
{{ range .alerts }}
📝 {{ .annotations.description }}
⏰ *Firing since*: {{ .startsAt | formatTime }}
{{ end }}
🔗 *Dashboard*: <{{ .externalURL }}|Investigate>
config/telegram_message.tmpl:
🚩 <b>{{ .commonLabels.alertname }}</b>
{{ range .alerts }}
🕒 {{ .startsAt | formatTime }}
{{ .annotations.summary }}
{{ end }}
<pre>
Status: {{ .status }}
Severity: {{ .commonLabels.severity }}
</pre>
Run Versus:
docker run -d -p 3000:3000 \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN=xoxb-your-token \
-e SLACK_CHANNEL_ID=C12345 \
-e TELEGRAM_ENABLE=true \
-e TELEGRAM_BOT_TOKEN=123:ABC \
-e TELEGRAM_CHAT_ID=-456789 \
-v ./config:/app/config \
ghcr.io/versuscontrol/versus-incident
Test
Trigger a test alert using curl:
curl -X POST http://localhost:3000/api/incidents \
-H "Content-Type: application/json" \
-d '{
"receiver": "webhook-incident",
"status": "firing",
"alerts": [
{
"status": "firing",
"labels": {
"alertname": "PostgresqlDown",
"instance": "postgresql-prod-01",
"severity": "critical"
},
"annotations": {
"summary": "Postgresql down (instance postgresql-prod-01)",
"description": "Postgresql instance is down."
},
"startsAt": "2023-10-01T12:34:56.789Z",
"endsAt": "0001-01-01T00:00:00Z",
"generatorURL": ""
}
],
"groupLabels": {
"alertname": "PostgresqlDown"
},
"commonLabels": {
"alertname": "PostgresqlDown",
"severity": "critical",
"instance": "postgresql-prod-01"
},
"commonAnnotations": {
"summary": "Postgresql down (instance postgresql-prod-01)",
"description": "Postgresql instance is down."
},
"externalURL": ""
}'
Advanced: Dynamic Channel Routing
Override Slack channels per alert using query parameters:
POST http://versus-host:3000/api/incidents?slack_channel_id=EMERGENCY-CHANNEL
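In alertmanager.yml this is just an additional receiver whose webhook URL carries the query parameter (EMERGENCY-CHANNEL is a placeholder channel ID):
receivers:
- name: 'versus-incident-emergency'
  webhook_configs:
  - url: 'http://versus-host:3000/api/incidents?slack_channel_id=EMERGENCY-CHANNEL'
    send_resolved: false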
Troubleshooting Tips
- Enable debug mode: DEBUG_BODY=true
- Check Versus logs: docker logs versus
If you encounter any issues or have further questions, feel free to reach out!
Configuring Fluent Bit to Send Error Logs to Versus Incident
Fluent Bit is a lightweight log processor and forwarder that can filter, modify, and forward logs to various destinations. In this tutorial, we will configure Fluent Bit to filter logs containing [ERROR] and send them to the Versus Incident Management System using its REST API.
Understand the Log Format
The log format provided is as follows; you can create a sample.log file:
[2023/01/22 09:46:49] [ INFO ] This is info logs 1
[2023/01/22 09:46:49] [ INFO ] This is info logs 2
[2023/01/22 09:46:49] [ INFO ] This is info logs 3
[2023/01/22 09:46:49] [ ERROR ] This is error logs
We are interested in filtering logs that contain [ ERROR ].
Configure Fluent Bit Filters
To filter and process logs, we use the grep and modify filters in Fluent Bit.
Filter Configuration
Add the following configuration to your Fluent Bit configuration file:
# Filter Section - Grep for ERROR logs
[FILTER]
    Name    grep
    Match   versus.*
    Regex   log .*\[.*ERROR.*\].*

# Filter Section - Modify fields
[FILTER]
    Name    modify
    Match   versus.*
    Rename  log Logs
    Set     ServiceName order-service
Explanation
- Grep Filter:
  - Matches all logs that contain [ ERROR ].
  - The Regex field uses a regular expression to identify logs with the [ ERROR ] keyword.
- Modify Filter:
  - Adds or modifies fields in the log record.
  - Renames the log field to Logs and sets the ServiceName field for the default template. You can set the fields you want based on your template.
Default Telegram Template
🚨 <b>Critical Error Detected!</b> 🚨
📌 <b>Service:</b> {{.ServiceName}}
⚠️ <b>Error Details:</b>
{{.Logs}}
Configure Fluent Bit Output
To send filtered logs to the Versus Incident Management System, we use the http output plugin.
Output Configuration
Add the following configuration to your Fluent Bit configuration file:
...
# Output Section - Send logs to Versus Incident via HTTP
[OUTPUT]
    Name    http
    Match   versus.*
    Host    localhost
    Port    3000
    URI     /api/incidents
    Format  json_stream
Explanation
- Name: Specifies the output plugin (http in this case).
- Match: Matches all logs processed by the previous filters.
- Host and Port: Specify the host and port of the Versus Incident Management System (default is localhost:3000).
- URI: Specifies the endpoint for creating incidents (/api/incidents).
- Format: Ensures the payload is sent in JSON Stream format.
Full Fluent Bit Configuration Example
Here is the complete Fluent Bit configuration file:
# Input Section
[INPUT]
    Name              tail
    Path              sample.log
    Tag               versus.*
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On

# Filter Section - Grep for ERROR logs
[FILTER]
    Name    grep
    Match   versus.*
    Regex   log .*\[.*ERROR.*\].*

# Filter Section - Modify fields
[FILTER]
    Name    modify
    Match   versus.*
    Rename  log Logs
    Set     ServiceName order-service

# Output Section - Send logs to Versus Incident via HTTP
[OUTPUT]
    Name    http
    Match   versus.*
    Host    localhost
    Port    3000
    URI     /api/incidents
    Format  json_stream
Test the Configuration
Run Versus Incident:
docker run -p 3000:3000 \
-e TELEGRAM_ENABLE=true \
-e TELEGRAM_BOT_TOKEN=your_token \
-e TELEGRAM_CHAT_ID=your_channel \
ghcr.io/versuscontrol/versus-incident
Run Fluent Bit with the configuration file:
fluent-bit -c /path/to/fluent-bit.conf
Check the logs in the Versus Incident Management System. You should see an incident created with the following details:
Raw Request Body: {"date":1738999456.96342,"Logs":"[2023/01/22 09:46:49] [ ERROR ] This is error logs","ServiceName":"order-service"}
2025/02/08 14:24:18 POST /api/incidents 201 127.0.0.1 Fluent-Bit
Conclusion
By following the steps above, you can configure Fluent Bit to filter error logs and send them to the Versus Incident Management System. This integration enables automated incident management, ensuring that critical errors are promptly addressed by your DevOps team.
If you encounter any issues or have further questions, feel free to reach out!
Configuring CloudWatch to send Alert to Versus Incident
In this guide, you’ll learn how to set up a CloudWatch alarm to trigger when RDS CPU usage exceeds 80% and send an alert to Slack and Telegram.
Prerequisites
AWS account with access to RDS, CloudWatch, and SNS. An RDS instance running (replace my-rds-instance with your instance ID). Slack and Telegram API Token.
Steps
- Create SNS Topic and Subscription.
- Create CloudWatch Alarm.
- Deploy Versus Incident with Slack and Telegram configurations.
- Subscribe Versus to the SNS Topic.
Create an SNS Topic
Create an SNS topic to route CloudWatch Alarms to Versus:
aws sns create-topic --name RDS-CPU-Alarm-Topic
Create a CloudWatch Alarm for RDS CPU
Set up an alarm to trigger when RDS CPU exceeds 80% for 5 minutes.
aws cloudwatch put-metric-alarm \
--alarm-name "RDS_CPU_High" \
--alarm-description "RDS CPU utilization over 80%" \
--namespace AWS/RDS \
--metric-name CPUUtilization \
--dimensions Name=DBInstanceIdentifier,Value=my-rds-instance \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 1 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:RDS-CPU-Alarm-Topic
Explanation:
- --namespace AWS/RDS: Specifies RDS metrics.
- --metric-name CPUUtilization: Tracks CPU usage.
- --dimensions: Identifies your RDS instance.
- --alarm-actions: The SNS topic ARN where alerts are sent.
Versus Incident
Next, we will deploy Versus Incident and configure it with a custom template to send alerts to both Slack and Telegram. Enable SNS support in config/config.yaml:
name: versus
host: 0.0.0.0
port: 3000

alert:
  debug_body: true

  slack:
    enable: true
    token: ${SLACK_TOKEN}
    channel_id: ${SLACK_CHANNEL_ID}
    template_path: "/app/config/slack_message.tmpl"

  telegram:
    enable: true
    bot_token: ${TELEGRAM_BOT_TOKEN}
    chat_id: ${TELEGRAM_CHAT_ID}
    template_path: "/app/config/telegram_message.tmpl"

queue:
  enable: true
  sns:
    enable: true
    https_endpoint_subscription_path: /sns
When your RDS_CPU_High alarm triggers, SNS will send a notification to your HTTP endpoint. The message will be a JSON object wrapped in an SNS envelope. Here's an example of what the JSON payload of the Message field might look like:
{
"AlarmName": "RDS_CPU_High",
"AlarmDescription": "RDS CPU utilization over 80%",
"AWSAccountId": "123456789012",
"NewStateValue": "ALARM",
"NewStateReason": "Threshold Crossed: 1 out of the last 1 datapoints was greater than the threshold (80.0). The most recent datapoint: 85.3.",
"StateChangeTime": "2025-03-17T12:34:56.789Z",
"Region": "US East (N. Virginia)",
"OldStateValue": "OK",
"Trigger": {
"MetricName": "CPUUtilization",
"Namespace": "AWS/RDS",
"StatisticType": "Statistic",
"Statistic": "AVERAGE",
"Unit": "Percent",
"Period": 300,
"EvaluationPeriods": 1,
"ComparisonOperator": "GreaterThanThreshold",
"Threshold": 80.0,
"TreatMissingData": "missing",
"Dimensions": [
{
"Name": "DBInstanceIdentifier",
"Value": "my-rds-instance"
}
]
}
}
Create Slack and Telegram templates, e.g. config/slack_message.tmpl:
*🚨 CloudWatch Alarm: {{.AlarmName}}*
----------
Description: {{.AlarmDescription}}
Current State: {{.NewStateValue}}
Timestamp: {{.StateChangeTime}}
----------
Owner <@${USERID}>: Investigate immediately!
config/telegram_message.tmpl:
🚨 <b>{{.AlarmName}}</b>
📌 <b>Status:</b> {{.NewStateValue}}
⚠️ <b>Description:</b> {{.AlarmDescription}}
🕒 <b>Time:</b> {{.StateChangeTime}}
Deploy with Docker:
docker run -d \
-p 3000:3000 \
-v $(pwd)/config:/app/config \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN=your_slack_token \
-e SLACK_CHANNEL_ID=your_channel_id \
-e TELEGRAM_ENABLE=true \
-e TELEGRAM_BOT_TOKEN=your_token \
-e TELEGRAM_CHAT_ID=your_channel \
--name versus \
ghcr.io/versuscontrol/versus-incident
Versus Incident is running and accessible at:
http://localhost:3000/sns
For testing purposes, we can use ngrok to expose Versus running on localhost to the internet.
ngrok http 3000 --url your-versus-https-url.ngrok-free.app
This URL is available to anyone on the internet.
Subscribe Versus to the SNS Topic
Subscribe Versus's /sns endpoint to the topic, replacing the endpoint with your deployment URL:
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:123456789012:RDS-CPU-Alarm-Topic \
--protocol https \
--notification-endpoint https://your-versus-https-url.ngrok-free.app/sns
Test the Integration
- Simulate high CPU load on your RDS instance (e.g., run intensive queries), or force the alarm state with the CLI command shown below.
- Check the CloudWatch console to confirm the alarm triggers.
- Verify Versus Incident receives the SNS payload and sends alerts to Slack and Telegram.
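Instead of waiting for real load, you can also force the alarm into the ALARM state with the AWS CLI to exercise the whole pipeline:
aws cloudwatch set-alarm-state \
  --alarm-name "RDS_CPU_High" \
  --state-value ALARM \
  --state-reason "Testing Versus Incident integration"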
Conclusion
By integrating CloudWatch Alarms with Versus Incident via SNS, you centralize alert management and ensure critical infrastructure issues are promptly routed to Slack, Telegram, or Email.
If you encounter any issues or have further questions, feel free to reach out!
How to Configure Sentry to Send Alerts to MS Teams
This guide will show you how to route Sentry alerts through Versus Incident to Microsoft Teams, enabling your team to respond to application issues quickly and efficiently.
Prerequisites
- Microsoft Teams channel with Power Automate or webhook permissions
- Sentry account with project owner permissions
Set Up Microsoft Teams Integration (2025 Update)
Microsoft has announced the retirement of Office 365 Connectors (including Incoming Webhooks) by the end of 2025. Versus Incident supports both the legacy webhook method and the new Power Automate Workflows method. We recommend using Power Automate Workflows for all new deployments.
Option 1: Set Up a Power Automate Workflow (Recommended)
Follow these steps to create a Power Automate workflow to receive alerts in Microsoft Teams:
- Sign in to Power Automate
- Click Create and select Instant cloud flow
- Name your flow (e.g., "Versus Incident Alerts")
- Select When a HTTP request is received as the trigger and click Create
- In the HTTP trigger, you'll see a generated HTTP POST URL. Copy this URL - you'll need it later
- Click + New step and search for "Teams"
- Select Post a message in a chat or channel (under Microsoft Teams)
- Configure the action:
- Choose Channel as the Post as option
- Select your Team and Channel
- For the Message field, add:
@{triggerBody()?['messageText']}
- Click Save to save your flow
Option 2: Set Up an MS Teams Webhook (Legacy Method)
For backward compatibility, Versus still supports the traditional webhook method (being retired by end of 2025):
- Open MS Teams and go to the channel where you want alerts to appear.
- Click the three dots (…) next to the channel name and select Connectors.
- Find Incoming Webhook, click Add, then Add again in the popup.
- Name your webhook (e.g., Sentry Alerts) and optionally upload an image.
- Click Create, then copy the generated webhook URL. Save this URL — you'll need it later.
Deploy Versus Incident with MS Teams Enabled
Next, configure Versus Incident to forward alerts to MS Teams. Create a directory for your configuration files:
mkdir -p ./config
Create config/config.yaml with the following content for Power Automate (recommended):
name: versus
host: 0.0.0.0
port: 3000

alert:
  debug_body: true

  msteams:
    enable: false # Default value, will be overridden by MSTEAMS_ENABLE env var
    power_automate_url: ${MSTEAMS_POWER_AUTOMATE_URL} # Power Automate HTTP trigger URL
    template_path: "config/msteams_message.tmpl"
To design a custom MS Teams template for config/msteams_message.tmpl, first review the JSON format of the Sentry webhook payload:
{
"action": "created",
"data": {
"issue": {
"id": "123456",
"title": "Example Issue",
"culprit": "example_function in example_module",
"shortId": "PROJECT-1",
"project": {
"id": "1",
"name": "Example Project",
"slug": "example-project"
},
"metadata": {
"type": "ExampleError",
"value": "This is an example error"
},
"status": "unresolved",
"level": "error",
"firstSeen": "2023-10-01T12:00:00Z",
"lastSeen": "2023-10-01T12:05:00Z",
"count": 5,
"userCount": 3
}
},
"installation": {
"uuid": "installation-uuid"
},
"actor": {
"type": "user",
"id": "789",
"name": "John Doe"
}
}
Now, create a rich MS Teams template in config/msteams_message.tmpl:
**🚨 Sentry Alert: {{.data.issue.title}}**
**Project**: {{.data.issue.project.name}}
**Issue URL**: {{.data.issue.url}}
Please investigate this issue immediately.
This template uses Markdown to format the alert in MS Teams. It pulls data from the Sentry webhook payload (e.g., {{.data.issue.title}}).
Note about MS Teams notifications (April 2025): The system will automatically extract "Sentry Alert: {{.data.issue.title}}" as the summary for Microsoft Teams notifications, and generate a plain text version as a fallback. You don't need to add these fields manually - Versus Incident handles this to ensure proper display in Microsoft Teams.
Run Versus Incident using Docker, mounting your configuration files and setting the MS Teams Power Automate URL as an environment variable:
docker run -d \
-p 3000:3000 \
-v $(pwd)/config:/app/config \
-e MSTEAMS_ENABLE=true \
-e MSTEAMS_POWER_AUTOMATE_URL="your_power_automate_url" \
--name versus \
ghcr.io/versuscontrol/versus-incident
Replace your_power_automate_url with the URL you copied from Power Automate. The Versus Incident API endpoint for receiving alerts is now available at:
http://localhost:3000/api/incidents
Configure Sentry Alerts with a Webhook
Now, set up Sentry to send alerts to Versus Incident via a webhook.
- Log in to your Sentry account and navigate to your project.
- Go to Alerts in the sidebar and click Create Alert Rule.
- Define the conditions for your alert, such as:
  - When: "A new issue is created"
  - Filter: (Optional) Add filters like "error level is fatal"
- Under Actions, select Send a notification via a webhook.
- Enter the webhook URL:
  - If Versus is running locally: http://localhost:3000/api/incidents
  - If deployed elsewhere: https://your-versus-domain.com/api/incidents
- Ensure the HTTP method is POST and the content type is application/json.
- Save the alert rule.
Sentry will now send a JSON payload to Versus Incident whenever the alert conditions are met.
Test the Integration
To confirm everything works, simulate a Sentry alert using curl:
curl -X POST http://localhost:3000/api/incidents \
-H "Content-Type: application/json" \
-d '{
"action": "triggered",
"data": {
"issue": {
"id": "123456",
"title": "Test Error: Something went wrong",
"shortId": "PROJECT-1",
"project": {
"name": "Test Project",
"slug": "test-project"
},
"url": "https://sentry.io/organizations/test-org/issues/123456/"
}
}
}'
Alternatively, trigger a real error in your Sentry-monitored application and verify the alert appears in MS Teams.
Conclusion
By connecting Sentry to MS Teams via Versus Incident, you’ve created a streamlined alerting system that keeps your team informed of critical issues in real-time. Versus Incident’s flexibility allows you to tailor alerts to your needs and expand to other channels as required.
Configure Kibana to Send Alerts to Slack and Telegram
Kibana, part of the Elastic Stack, provides powerful monitoring and alerting capabilities for your applications and infrastructure. However, its native notification options are limited.
In this guide, we’ll walk through setting up Kibana to send alerts to Versus Incident, which will then forward them to Slack and Telegram using custom templates.
Prerequisites
- A running Elastic Stack (Elasticsearch and Kibana) instance with alerting enabled (Kibana 7.13+ required for the Alerting feature).
- A Slack workspace with permissions to create a bot and obtain a token.
- A Telegram account with a bot created via BotFather and a chat ID for your target group or channel.
- Docker installed (optional, for easy Versus Incident deployment).
Step 1: Set Up Slack and Telegram Bots
Slack Bot
- Visit api.slack.com/apps and click Create New App.
- Name your app (e.g., “Kibana Alerts”) and select your Slack workspace.
- Under Bot Users, add a bot (e.g., “KibanaBot”) and enable it.
- Go to OAuth & Permissions, add the
chat:write
scope under Scopes. - Install the app to your workspace and copy the Bot User OAuth Token (starts with
xoxb-
). Save it securely. - Invite the bot to your Slack channel by typing
/invite @KibanaBot
in the channel and note the channel ID (right-click the channel, copy the link, and extract the ID).
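Before wiring the token into Versus, you can confirm it works by calling Slack's auth.test method (the token below is a placeholder):

curl -s -H "Authorization: Bearer xoxb-your-token" https://slack.com/api/auth.test

A successful response includes "ok": true along with your bot and workspace identifiers.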
Telegram Bot
- Open Telegram and search for BotFather.
- Start a chat and type /newbot. Follow the prompts to name your bot (e.g., “KibanaAlertBot”).
- BotFather will provide a Bot Token (e.g., 123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11). Save it securely.
- Create a group or channel in Telegram, add your bot, and get the Chat ID:
  - Send a message to the group/channel via the bot.
  - Use https://api.telegram.org/bot<YourBotToken>/getUpdates in a browser to retrieve the chat.id (e.g., -123456789), as shown in the example below.
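The same lookup works from the command line with curl; the chat ID appears under result[].message.chat.id:

curl -s "https://api.telegram.org/bot<YourBotToken>/getUpdates"

A trimmed response looks roughly like this:

{
  "ok": true,
  "result": [
    {
      "update_id": 10000,
      "message": {
        "chat": { "id": -123456789, "title": "Kibana Alerts", "type": "group" }
      }
    }
  ]
}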
Step 2: Deploy Versus Incident with Slack and Telegram Enabled
Versus Incident acts as a bridge between Kibana and your notification channels. We’ll configure it to handle both Slack and Telegram alerts.
Create Configuration Files
- Create a directory for configuration:
mkdir -p ./config
- Create config/config.yaml with the following content:
name: versus
host: 0.0.0.0
port: 3000
alert:
slack:
enable: true
token: ${SLACK_TOKEN}
channel_id: ${SLACK_CHANNEL_ID}
template_path: "/app/config/slack_message.tmpl"
telegram:
enable: true
bot_token: ${TELEGRAM_BOT_TOKEN}
chat_id: ${TELEGRAM_CHAT_ID}
template_path: "/app/config/telegram_message.tmpl"
- Create a Slack template at config/slack_message.tmpl:
🚨 *Kibana Alert: {{.name}}*
**Message**: {{.message}}
**Status**: {{.status}}
**Kibana URL**: <{{.kibanaUrl}}|View in Kibana>
Please investigate this issue.
- Create a Telegram template at config/telegram_message.tmpl (using HTML formatting):
🚨 <b>Kibana Alert: {{.name}}</b>
<b>Message</b>: {{.message}}
<b>Status</b>: {{.status}}
<b>Kibana URL</b>: <a href="{{.kibanaUrl}}">View in Kibana</a>
Please investigate this issue.
Run Versus Incident with Docker
Deploy Versus Incident with the configuration and environment variables:
docker run -d \
-p 3000:3000 \
-v $(pwd)/config:/app/config \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN="your_slack_bot_token" \
-e SLACK_CHANNEL_ID="your_slack_channel_id" \
-e TELEGRAM_ENABLE=true \
-e TELEGRAM_BOT_TOKEN="your_telegram_bot_token" \
-e TELEGRAM_CHAT_ID="your_telegram_chat_id" \
--name versus \
ghcr.io/versuscontrol/versus-incident
- Replace your_slack_bot_token and your_slack_channel_id with your Slack values.
- Replace your_telegram_bot_token and your_telegram_chat_id with your Telegram values.
The Versus Incident API endpoint is now available at http://localhost:3000/api/incidents.
Step 3: Configure Kibana Alerts with a Webhook
Kibana’s Alerting feature allows you to send notifications via webhooks. We’ll configure it to send alerts to Versus Incident.
- Log in to Kibana and go to Stack Management > Alerts and Insights > Rules.
- Click Create Rule.
- Define your rule:
- Name: e.g., “High CPU Alert”.
- Connector: Select an index or data view to monitor (e.g., system metrics).
- Condition: Set a condition, such as “CPU usage > 80% over the last 5 minutes”.
- Check every: 1 minute (or your preferred interval).
- Add an Action:
  - Action Type: Select Webhook.
  - URL: http://localhost:3000/api/incidents (or your deployed Versus URL, e.g., https://your-versus-domain.com/api/incidents).
  - Method: POST.
  - Headers: Add Content-Type: application/json.
  - Body: Use this JSON template to match Versus Incident’s expected fields:

{
  "name": "{{rule.name}}",
  "message": "{{context.message}}",
  "status": "{{alert.state}}",
  "kibanaUrl": "{{kibanaBaseUrl}}/app/management/insightsAndAlerting/rules/{{rule.id}}"
}
- Save the rule.
Kibana will now send a JSON payload to Versus Incident whenever the alert condition is met.
Step 4: Test the Integration
Simulate a Kibana alert using curl to test the setup:
curl -X POST http://localhost:3000/api/incidents \
-H "Content-Type: application/json" \
-d '{
"name": "High CPU Alert",
"message": "CPU usage exceeded 80% on server-01",
"status": "active",
"kibanaUrl": "https://your-kibana-instance.com/app/management/insightsAndAlerting/rules/12345"
}'
Alternatively, trigger a real alert in Kibana (e.g., by simulating high CPU usage in your monitored system) and confirm the notifications appear in both Slack and Telegram.
Conclusion
By integrating Kibana with Versus Incident, you can send alerts to Slack and Telegram with customized, actionable messages that enhance your team’s incident response. This setup is flexible and scalable—Versus Incident also supports additional channels like Microsoft Teams and Email, as well as on-call integrations like AWS Incident Manager.
If you encounter any issues or have further questions, feel free to reach out!
On Call
This document provides a step-by-step guide to integrating Versus Incident with on-call solutions. We currently support AWS Incident Manager and PagerDuty, with plans to support more tools in the future.
Before diving into how Versus integrates with on-call systems, let's start with the basics. You need to understand the on-call platforms we support:
Understanding AWS Incident Manager On-Call
Understanding PagerDuty On-Call
Understanding AWS Incident Manager On-Call
AWS Incident Manager On-Call is a service that helps organizations manage and respond to incidents quickly and effectively. It’s part of AWS Systems Manager. This document explains the key parts of AWS Incident Manager On-Call—contacts, escalation plans, runbooks, and response plans—in a simple and clear way.
Key Components of AWS Incident Manager On-Call
AWS Incident Manager On-Call relies on four main pieces: contacts, escalation plans, runbooks, and response plans. Let’s break them down one by one.
1. Contacts
Contacts are the people who get notified when an incident happens. These could be:
- On-call engineers (the ones on duty to fix things).
- Experts who know specific systems.
- Managers or anyone else who needs to stay in the loop.
Each contact has contact methods—ways to reach them, like:
- SMS (text messages).
- Email.
- Voice calls.
Example: Imagine Natsu is an on-call engineer. His contact info might include:
- SMS: +84 3127 12 567
- Email: natsu@devopsvn.tech
If an incident occurs, AWS Incident Manager can send him a text and an email to let him know he’s needed.
2. Escalation Plans
An escalation plan is a set of rules that decides who gets notified—and in what order—if an incident isn’t handled quickly. It’s like a backup plan to make sure someone responds, even if the first person is unavailable.
You can set it up to:
- Notify people simultaneously (all at once).
- Notify people sequentially (one after another, with a timeout between each).
Example: Suppose you have three engineers: Natsu, Zeref, and Igneel. Your escalation plan might say:
- Stage 1: Notify Natsu.
- Stage 2: If Natsu doesn’t respond in 5 minutes, notify Zeref.
- Stage 3: If Zeref doesn’t respond in another 5 minutes, notify Igneel.
This way, the incident doesn’t get stuck waiting for one person—it keeps moving until someone takes action.
3. Runbooks (Optional)
Runbooks are like instruction manuals that AWS can follow automatically to fix an incident. They’re built in AWS Systems Manager Automation and contain steps to solve common problems without needing a human to step in.
Runbooks can:
- Restart a crashed service.
- Add more resources (like extra servers) if something’s overloaded.
- Run checks to figure out what’s wrong.
Example: Let’s say your web server stops working. A runbook called “WebServerRestart” could:
- Automatically detect the issue.
- Restart the server in seconds.
This saves time by fixing the problem before an engineer even picks up their phone.
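For illustration, here is a minimal sketch of such a runbook as a Systems Manager Automation document. The structure follows SSM's 0.3 schema; the instance parameter and the nginx service are assumptions for this example:

schemaVersion: '0.3'
description: Restart the web server on an affected instance
parameters:
  InstanceId:
    type: String
    description: The EC2 instance running the web server
mainSteps:
  - name: RestartWebServer
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          - sudo systemctl restart nginx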
4. Response Plans
A response plan is the master plan that pulls everything together. It tells AWS Incident Manager:
- Which contacts to notify.
- Which escalation plan to follow.
- Which runbooks to run.
It can have multiple stages, each with its own actions and time limits, to handle an incident step-by-step.
Example: For a critical incident (like a web application going offline), a response plan might look like this:
- Stage 1: Run the “WebServerRestart” runbook and notify Natsu.
- Stage 2: If the issue isn’t fixed in 5 minutes, notify Zeref (via the escalation plan).
- Stage 3: If it’s still not resolved in 10 minutes, alert the manager.
This ensures both automation and people work together to fix the problem.
Next, we will provide a step-by-step guide to integrating Versus with AWS Incident Manager for on-call: Integration.
How to Integrate
This document provides a step-by-step guide to integrating Versus Incident with AWS Incident Manager for on-call management. The integration enables automated escalation of alerts to on-call teams when incidents are not acknowledged within a specified time.
We'll cover configuring Prometheus Alert Manager to send alerts to Versus, setting up AWS Incident Manager, deploying Versus, and testing the integration with a practical example.
Prerequisites
Before you begin, ensure you have:
- An AWS account with access to AWS Incident Manager.
- Versus Incident deployed (instructions provided later).
- Prometheus Alert Manager set up to monitor your systems.
Setting Up AWS Incident Manager for On-Call
AWS Incident Manager requires configuring several components to manage on-call workflows. Let’s configure a practical example using 6 contacts, two teams, and a two-stage response plan. Use the AWS Console to set these up.
Contacts
Contacts are individuals who will be notified during an incident.
- In the AWS Console, navigate to Systems Manager > Incident Manager > Contacts.
- Click Create contact.
- For each contact:
- Enter a Name (e.g., "Natsu Dragneel").
- Add Contact methods (e.g., SMS: +1-555-123-4567, Email: natsu@devopsvn.tech).
- Save the contact.
Repeat to create 6 contacts (e.g., Natsu, Zeref, Igneel, Gray, Gajeel, Laxus).
Escalation Plan
An escalation plan defines the order in which contacts are engaged.
- Go to Incident Manager > Escalation plans > Create escalation plan.
- Name it (e.g., "TeamA_Escalation").
- Add contacts (e.g., Natsu, Zeref, and Igneel) and set them to engage simultaneously or sequentially.
- Save the plan.
- Create a second plan (e.g., "TeamB_Escalation") for Gray, Gajeel, and Laxus.
RunBook (Optional)
RunBooks automate incident resolution steps. For this guide, we’ll skip RunBook creation, but you can define one in AWS Systems Manager Automation if needed.
Response Plan
A response plan ties contacts and escalation plans into a structured response.
- Go to Incident Manager > Response plans > Create response plan.
- Name it (e.g., "CriticalIncidentResponse").
- Choose the escalation plans you created earlier to define two stages:
- Stage 1: Engage "TeamA_Escalation" (Natsu, Zeref, and Igneel) with a 5-minute timeout.
- Stage 2: If unacknowledged, engage "TeamB_Escalation" (Gray, Gajeel, and Laxus).
- Save the plan and note its ARN (e.g., arn:aws:ssm-incidents::111122223333:response-plan/CriticalIncidentResponse); you can verify it with the CLI check below.
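To verify the response plan end to end, you can start a test incident from the AWS CLI (the title is arbitrary; this assumes your credentials are allowed to call ssm-incidents):

aws ssm-incidents start-incident \
  --response-plan-arn "arn:aws:ssm-incidents::111122223333:response-plan/CriticalIncidentResponse" \
  --title "Test incident: verifying escalation stages"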
Define IAM Role for Versus
Versus needs permissions to interact with AWS Incident Manager.
- In the AWS Console, go to IAM > Roles > Create role.
- Choose AWS service as the trusted entity and select EC2 (or your deployment type, e.g., ECS).
- Attach a custom policy with these permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm-incidents:StartIncident",
"ssm-incidents:GetResponsePlan"
],
"Resource": "*"
}
]
}
- Name the role (e.g., "VersusIncidentRole") and create it.
- Note the Role ARN (e.g., arn:aws:iam::111122223333:role/VersusIncidentRole). A CLI sketch of these steps follows below.
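If you prefer scripting over the console, an equivalent AWS CLI sketch looks like this (trust-policy.json is a standard trust policy allowing EC2 to assume the role; both file names are placeholders):

# Create the role with a trust policy that lets EC2 assume it
aws iam create-role \
  --role-name VersusIncidentRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the inline permissions shown above
aws iam put-role-policy \
  --role-name VersusIncidentRole \
  --policy-name VersusIncidentManagerAccess \
  --policy-document file://versus-policy.json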
Deploy Versus Incident
Deploy Versus using Docker or Kubernetes. For a Docker deployment, first create a directory for your configuration files:
mkdir -p ./config
Create config/config.yaml with the following content:
name: versus
host: 0.0.0.0
port: 3000
public_host: https://your-ack-host.example
alert:
debug_body: true
slack:
enable: true
token: ${SLACK_TOKEN}
channel_id: ${SLACK_CHANNEL_ID}
template_path: "config/slack_message.tmpl"
oncall:
enable: true
wait_minutes: 3
aws_incident_manager:
response_plan_arn: ${AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN}
redis: # Required for on-call functionality
insecure_skip_verify: true # dev only
host: ${REDIS_HOST}
port: ${REDIS_PORT}
password: ${REDIS_PASSWORD}
db: 0
Create a Slack template at config/slack_message.tmpl:
🔥 *{{ .commonLabels.severity | upper }} Alert: {{ .commonLabels.alertname }}*
🌐 *Instance*: `{{ .commonLabels.instance }}`
🚨 *Status*: `{{ .status }}`
{{ range .alerts }}
📝 {{ .annotations.description }}
{{ end }}
{{ if .AckURL }}
----------
<{{.AckURL}}|Click here to acknowledge>
{{ end }}
ACK URL Generation
- When an incident is created (e.g., via a POST to /api/incidents), Versus generates an acknowledgment URL if on-call is enabled.
- The URL is constructed using the public_host value, typically in the format https://your-host.example/api/incidents/ack/<incident-id>.
- This URL is injected into the alert data as the .AckURL field and becomes available for use in templates (see the curl sketch below).
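To simulate a responder clicking the link, you can request the acknowledgment URL directly (assuming a plain GET, as a browser would send; the incident ID here is a placeholder):

curl "https://your-ack-host.example/api/incidents/ack/your-incident-id"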
Create the docker-compose.yml file:
version: '3.8'
services:
versus:
image: ghcr.io/versuscontrol/versus-incident
ports:
- "3000:3000"
environment:
- SLACK_TOKEN=your_slack_token
- SLACK_CHANNEL_ID=your_channel_id
- AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN=arn:aws:ssm-incidents::111122223333:response-plan/CriticalIncidentResponse
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=your_redis_password
depends_on:
- redis
redis:
image: redis:6.2-alpine
command: redis-server --requirepass your_redis_password
ports:
- "6379:6379"
volumes:
- redis_data:/data
volumes:
redis_data:
Note: If using AWS credentials, add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, or attach the IAM role to your deployment environment.
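For example, with static credentials the environment section of docker-compose.yml would gain entries like these (placeholder values; prefer the IAM role where possible):

environment:
  # ... existing variables from the compose file above ...
  - AWS_ACCESS_KEY_ID=your_access_key
  - AWS_SECRET_ACCESS_KEY=your_secret_key
  - AWS_REGION=us-east-1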
Run Docker Compose:
docker-compose up -d
Alert Rules
Create a prometheus.yml file to define a metric and alerting rule:
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'server'
static_configs:
- targets: ['localhost:9090']
rule_files:
- 'alert_rules.yml'
Create alert_rules.yml to define an alert:
groups:
- name: rate
rules:
- alert: HighErrorRate
expr: rate(http_requests_total{status="500"}[5m]) > 0.1
for: 5m
labels:
severity: warning
annotations:
summary: "High error rate detected in {{ $labels.service }}"
description: "{{ $labels.service }} has an error rate above 0.1 for 5 minutes."
- alert: HighErrorRate
expr: rate(http_requests_total{status="500"}[5m]) > 0.5
for: 2m
labels:
severity: critical
annotations:
summary: "Very high error rate detected in {{ $labels.service }}"
description: "{{ $labels.service }} has an error rate above 0.5 for 2 minutes."
- alert: HighErrorRate
expr: rate(http_requests_total{status="500"}[5m]) > 0.8
for: 1m
labels:
severity: urgent
annotations:
summary: "Extremely high error rate detected in {{ $labels.service }}"
description: "{{ $labels.service }} has an error rate above 0.8 for 1 minute."
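Before reloading Prometheus, you can validate both files with promtool, which ships with Prometheus:

promtool check config prometheus.yml
promtool check rules alert_rules.yml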
Alert Manager Routing Configuration
Configure Alert Manager to route alerts to Versus with different behaviors.
Send Alert Only (No On-Call)
receivers:
- name: 'versus-no-oncall'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?oncall_enable=false'
send_resolved: false
route:
receiver: 'versus-no-oncall'
group_by: ['alertname', 'service']
routes:
- match:
severity: warning
receiver: 'versus-no-oncall'
Send Alert with Acknowledgment Wait
receivers:
- name: 'versus-with-ack'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?oncall_wait_minutes=5'
send_resolved: false
route:
routes:
- match:
severity: critical
receiver: 'versus-with-ack'
This waits 5 minutes for acknowledgment before triggering the AWS Incident Manager response plan if the user doesn't click the ACK link that Versus sends to Slack.
Send Alert with Immediate On-Call Trigger
receivers:
- name: 'versus-immediate'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?oncall_wait_minutes=0'
send_resolved: false
route:
routes:
- match:
severity: urgent
receiver: 'versus-immediate'
This triggers the response plan immediately without waiting.
Testing the Integration
- Trigger an Alert: Simulate a critical alert in Prometheus to match the Alert Manager rule.
- Verify Versus: Check that Versus receives the alert and sends it to configured channels (e.g., Slack).
- Check Escalation:
- Wait 5 minutes without acknowledging the alert.
- In Incident Manager > Incidents, verify that an incident starts and Team A is engaged.
- After 5 more minutes, confirm Team B is engaged.
- Immediate Trigger Test: Send an urgent alert and confirm the response plan triggers instantly (see the curl sketch below).
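For a quick end-to-end check without waiting for Prometheus, you can post an Alertmanager-style payload straight to Versus. The fields below are exactly the ones the Slack template above reads; adjust them to match your rules:

curl -X POST "http://localhost:3000/api/incidents?oncall_wait_minutes=0" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "firing",
    "commonLabels": {
      "alertname": "HighErrorRate",
      "severity": "urgent",
      "instance": "server-01"
    },
    "alerts": [
      {
        "annotations": {
          "description": "server-01 has an error rate above 0.8 for 1 minute."
        }
      }
    ]
  }'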
Conclusion
You’ve now integrated Versus Incident with AWS Incident Manager for on-call management! Alerts from Prometheus Alert Manager can trigger notifications via Versus, with escalations handled by AWS Incident Manager based on your response plan. Adjust configurations as needed for your environment.
If you encounter any issues or have further questions, feel free to reach out!
Understanding PagerDuty On-Call
PagerDuty is a popular incident management platform that provides robust on-call scheduling, alerting, and escalation capabilities. This document explains the key components of PagerDuty's on-call system—services, escalation policies, schedules, and integrations—in a simple and clear way.
Key Components of PagerDuty On-Call
PagerDuty's on-call system relies on four main components: services, escalation policies, schedules, and integrations. Let's explore each one in detail.
1. Services
Services in PagerDuty represent the applications, components, or systems that you monitor. Each service:
- Has a unique name and description
- Is associated with an escalation policy
- Can be integrated with monitoring tools
- Contains a set of alert/incident settings
When an incident is triggered, it's associated with a specific service, which determines how the incident is handled and who is notified.
Example: A "Payment Processing API" service might be set up to:
- Alert the backend team when it experiences errors
- Have high urgency for all incidents
- Auto-resolve incidents after 24 hours if fixed
2. Escalation Policies
Escalation policies define who gets notified about an incident and in what order. They ensure that incidents are addressed even if the first responder isn't available.
An escalation policy typically includes:
- One or more escalation levels with designated responders
- Time delays between escalation levels
- Options to repeat the escalation process if no one responds
Example: For the "Payment API" service, an escalation policy might:
- Level 1: Notify the on-call engineer on the primary schedule
- Level 2: If no response in 15 minutes, notify the secondary on-call engineer
- Level 3: If still no response in 10 minutes, notify the engineering manager
3. Schedules
Schedules determine who is on-call at any given time. They allow teams to:
- Define rotation patterns (daily, weekly, custom)
- Set up multiple layers of coverage
- Handle time zone differences
- Plan for holidays and time off
PagerDuty's schedules are highly flexible and can accommodate complex team structures and rotation patterns.
Example: A "Backend Team Primary" schedule might rotate three engineers weekly, with handoffs occurring every Monday at 9 AM local time. A separate "Backend Team Secondary" schedule might rotate a different group of engineers as backup.
4. Integrations
Integrations connect PagerDuty to your monitoring tools, allowing alerts to be automatically converted into PagerDuty incidents. PagerDuty offers hundreds of integrations with popular monitoring systems.
For custom systems or tools without direct integrations, PagerDuty provides:
- Events API (V2) - A simple API for sending alerts to PagerDuty
- Webhooks - For receiving data about PagerDuty incidents in your other systems
Example: A company might integrate:
- Prometheus Alert Manager with their "Infrastructure" service
- Application error tracking with their "Application Errors" service
- Custom business logic monitors with their "Business Metrics" service
The PagerDuty Incident Lifecycle
When an incident is created in PagerDuty:
- Trigger: An alert comes in from an integrated monitoring system or API call
- Notification: PagerDuty notifies the appropriate on-call person based on the escalation policy
- Acknowledgment: The responder acknowledges the incident, letting others know they're working on it
- Resolution: After fixing the issue, the responder resolves the incident
- Post-Mortem: Teams can analyze what happened and how to prevent similar issues
This structured approach ensures that incidents are handled efficiently and consistently.
Key Benefits of PagerDuty for On-Call Management
- Reliability: Ensures critical alerts never go unnoticed with multiple notification methods and escalation paths
- Flexibility: Supports complex team structures and rotation patterns
- Reduced Alert Fatigue: Intelligent grouping and routing of alerts to the right people
- Comprehensive Visibility: Dashboards and reports to track incident metrics and on-call load
- Integration Ecosystem: Works with virtually any monitoring or alerting system
Next, we will provide a step-by-step guide to integrating Versus with PagerDuty for On-Call: Integration.
How to Integrate with PagerDuty
This document provides a step-by-step guide to integrate Versus Incident with PagerDuty for on-call management. The integration enables automated escalation of alerts to on-call teams when incidents are not acknowledged within a specified time.
We'll cover setting up PagerDuty, configuring the integration with Versus, deploying Versus, and testing the integration with practical examples.
Prerequisites
Before you begin, ensure you have:
- A PagerDuty account (you can start with a free trial if needed)
- Versus Incident deployed (instructions provided later)
- Prometheus Alert Manager set up to monitor your systems
Setting Up PagerDuty for On-Call
Let's configure a practical example in PagerDuty with services, schedules, and escalation policies.
1. Create Users in PagerDuty
First, we need to set up the users who will be part of the on-call rotation:
- Log in to your PagerDuty account
- Navigate to People > Users > Add User
- For each user, enter:
- Name (e.g., "Natsu Dragneel")
- Email address
- Role (User)
- Time Zone
- PagerDuty will send an email invitation to each user
- Users should complete their profiles by:
- Setting up notification rules (SMS, email, push notifications)
- Downloading the PagerDuty mobile app
- Setting contact details
Repeat to create multiple users (e.g., Natsu, Zeref, Igneel, Gray, Gajeel, Laxus).
2. Create On-Call Schedules
Now, let's create schedules for who is on-call and when:
- Navigate to People > Schedules > Create Schedule
- Name the schedule (e.g., "Team A Primary")
- Set up the rotation:
- Choose a rotation type (Weekly is common)
- Add users to the rotation (e.g., Natsu, Zeref, Igneel)
- Set handoff time (e.g., Mondays at 9:00 AM)
- Set time zone
- Save the schedule
- Create a second schedule (e.g., "Team B Secondary") for other team members
3. Create Escalation Policies
An escalation policy defines who gets notified when an incident occurs:
- Navigate to Configuration > Escalation Policies > New Escalation Policy
- Name the policy (e.g., "Critical Incident Response")
- Add escalation rules:
- Level 1: Select the "Team A Primary" schedule with a 5-minute timeout
- Level 2: Select the "Team B Secondary" schedule
- Optionally, add a Level 3 to notify a manager
- Save the policy
4. Create a PagerDuty Service
A service is what receives incidents from monitoring systems:
- Navigate to Configuration > Services > New Service
- Name the service (e.g., "Versus Incident Integration")
- Select "Events API V2" as the integration type
- Select the escalation policy you created in step 3
- Configure incident settings (Auto-resolve, urgency, etc.)
- Save the service
5. Get the Integration Key
After creating the service, you'll need the integration key (also called routing key):
- Navigate to Configuration > Services
- Click on your newly created service
- Go to the Integrations tab
- Find the "Events API V2" integration
- Copy the Integration Key (it looks something like: 12345678abcdef0123456789abcdef0)
- Keep this key secure - you'll need it for Versus configuration
Deploy Versus Incident
Now let's deploy Versus with PagerDuty integration. You can use Docker or Kubernetes.
Docker Deployment
Create a directory for your configuration files:
mkdir -p ./config
Create config/config.yaml with the following content:
name: versus
host: 0.0.0.0
port: 3000
public_host: https://your-ack-host.example
alert:
debug_body: true
slack:
enable: true
token: ${SLACK_TOKEN}
channel_id: ${SLACK_CHANNEL_ID}
template_path: "config/slack_message.tmpl"
oncall:
enable: true
wait_minutes: 3
provider: pagerduty
pagerduty:
routing_key: ${PAGERDUTY_ROUTING_KEY} # The Integration Key from step 5
other_routing_keys:
infra: ${PAGERDUTY_OTHER_ROUTING_KEY_INFRA}
app: ${PAGERDUTY_OTHER_ROUTING_KEY_APP}
db: ${PAGERDUTY_OTHER_ROUTING_KEY_DB}
redis: # Required for on-call functionality
insecure_skip_verify: true # dev only
host: ${REDIS_HOST}
port: ${REDIS_PORT}
password: ${REDIS_PASSWORD}
db: 0
Create a Slack template in config/slack_message.tmpl:
🔥 *{{ .commonLabels.severity | upper }} Alert: {{ .commonLabels.alertname }}*
🌐 *Instance*: `{{ .commonLabels.instance }}`
🚨 *Status*: `{{ .status }}`
{{ range .alerts }}
📝 {{ .annotations.description }}
{{ end }}
{{ if .AckURL }}
----------
<{{.AckURL}}|Click here to acknowledge>
{{ end }}
About the ACK URL Generation
- When an incident is created (e.g., via a POST to /api/incidents), Versus generates an acknowledgment URL if on-call is enabled.
- The URL is constructed using the public_host value: https://your-host.example/api/incidents/ack/<incident-id>.
- This URL is injected into the alert data as .AckURL for use in templates.
Create the docker-compose.yml file:
version: '3.8'
services:
versus:
image: ghcr.io/versuscontrol/versus-incident
ports:
- "3000:3000"
environment:
- SLACK_TOKEN=your_slack_token
- SLACK_CHANNEL_ID=your_channel_id
- PAGERDUTY_ROUTING_KEY=your_pagerduty_integration_key
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=your_redis_password
depends_on:
- redis
redis:
image: redis:6.2-alpine
command: redis-server --requirepass your_redis_password
ports:
- "6379:6379"
volumes:
- redis_data:/data
volumes:
redis_data:
Run Docker Compose:
docker-compose up -d
Alert Manager Routing Configuration
Now, let's configure Alert Manager to route alerts to Versus with different behaviors:
Send Alert Only (No On-Call)
receivers:
- name: 'versus-no-oncall'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?oncall_enable=false'
send_resolved: false
route:
receiver: 'versus-no-oncall'
group_by: ['alertname', 'service']
routes:
- match:
severity: warning
receiver: 'versus-no-oncall'
Send Alert with Acknowledgment Wait
receivers:
- name: 'versus-with-ack'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?oncall_wait_minutes=5'
send_resolved: false
route:
routes:
- match:
severity: critical
receiver: 'versus-with-ack'
This configuration waits 5 minutes for acknowledgment before triggering PagerDuty if the user doesn't click the ACK link in Slack.
Send Alert with Immediate On-Call Trigger
receivers:
- name: 'versus-immediate'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?oncall_wait_minutes=0'
send_resolved: false
route:
routes:
- match:
severity: urgent
receiver: 'versus-immediate'
This will trigger PagerDuty immediately without waiting.
Override the PagerDuty Routing Key per Alert
You can configure Alert Manager to use different PagerDuty services for specific alerts by using named routing keys instead of exposing sensitive routing keys directly in URLs:
receivers:
- name: 'versus-pagerduty-infra'
webhook_configs:
- url: 'http://versus-service:3000/api/incidents?pagerduty_other_routing_key=infra'
send_resolved: false
route:
routes:
- match:
team: infrastructure
receiver: 'versus-pagerduty-infra'
This routes infrastructure team alerts to a different PagerDuty service using the named routing key "infra", which is securely mapped to the actual integration key in your configuration file:
oncall:
provider: pagerduty
pagerduty:
routing_key: ${PAGERDUTY_ROUTING_KEY}
other_routing_keys:
infra: ${PAGERDUTY_OTHER_ROUTING_KEY_INFRA}
app: ${PAGERDUTY_OTHER_ROUTING_KEY_APP}
db: ${PAGERDUTY_OTHER_ROUTING_KEY_DB}
This approach keeps your sensitive PagerDuty integration keys secure by never exposing them in URLs or logs.
Testing the Integration
Let's test the complete workflow:
- Trigger an Alert:
  - Simulate a critical alert in Prometheus to match the Alert Manager rule.
- Verify Versus:
  - Check that Versus receives the alert and sends it to Slack.
  - You should see a message with an acknowledgment link.
- Check Escalation:
  - Option 1: Click the ACK link to acknowledge the incident - PagerDuty should not be notified.
  - Option 2: Wait for the acknowledgment timeout (e.g., 5 minutes) without clicking the link.
  - In PagerDuty, verify that an incident is created and the on-call person is notified.
  - Confirm that escalation happens according to your policy if the incident remains unacknowledged.
- Immediate Trigger Test:
  - Send an urgent alert and confirm that PagerDuty is triggered instantly.
How It Works Under the Hood
When Versus integrates with PagerDuty, the following occurs:
- Versus receives an alert from Alert Manager
- If on-call is enabled and the acknowledgment period passes without an ACK, Versus:
- Constructs a PagerDuty Events API v2 payload
- Sends a "trigger" event to PagerDuty with your routing key
- Includes incident details as custom properties
The PagerDuty service processes this event according to your escalation policy, notifying the appropriate on-call personnel.
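For reference, a trigger event for the Events API v2 has this general shape (field names follow PagerDuty's published schema; the routing key and the exact custom details Versus attaches are illustrative):

{
  "routing_key": "12345678abcdef0123456789abcdef0",
  "event_action": "trigger",
  "payload": {
    "summary": "HighErrorRate on server-01",
    "source": "versus-incident",
    "severity": "critical",
    "custom_details": {
      "description": "server-01 has an error rate above 0.5 for 2 minutes."
    }
  }
}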
Conclusion
You've now integrated Versus Incident with PagerDuty for on-call management! Alerts from Prometheus Alert Manager can trigger notifications via Versus, with escalations handled by PagerDuty based on your escalation policy.
This integration provides:
- A delay period for engineers to acknowledge incidents before escalation
- Slack notifications with one-click acknowledgment
- Structured escalation through PagerDuty's robust notification system
- Multiple notification channels to ensure critical alerts reach the right people
Adjust configurations as needed for your environment and incident response processes. If you encounter any issues or have further questions, feel free to reach out!
Migrating to v1.2.0
This guide provides instructions for migrating from v1.1.5 to v1.2.0.
What's New in v1.2.0
Version 1.2.0 introduces enhanced Microsoft Teams integration using Power Automate, allowing you to send incident alerts directly to Microsoft Teams channels with more formatting options and better delivery reliability.
Key Changes
The main change in v1.2.0 is the Microsoft Teams integration architecture:
- Legacy webhook URLs replaced with Power Automate: Instead of using the legacy Office 365 webhook URLs, Versus Incident now integrates with Microsoft Teams through Power Automate HTTP triggers, which provide more flexibility and reliability.
- Configuration property names updated:
  - webhook_url → power_automate_url
  - other_webhook_url → other_power_urls
- Environment variable names updated:
  - MSTEAMS_WEBHOOK_URL → MSTEAMS_POWER_AUTOMATE_URL
  - MSTEAMS_OTHER_WEBHOOK_URL_* → MSTEAMS_OTHER_POWER_URL_*
- API query parameter updated:
  - msteams_other_webhook_url → msteams_other_power_url
Configuration Changes
Here's a side-by-side comparison of the Microsoft Teams configuration in v1.1.5 vs v1.2.0:
v1.1.5 (Before)
alert:
# ... other alert configurations ...
msteams:
enable: false # Default value, will be overridden by MSTEAMS_ENABLE env var
webhook_url: ${MSTEAMS_WEBHOOK_URL}
template_path: "config/msteams_message.tmpl"
other_webhook_url: # Optional: Define additional webhook URLs
qc: ${MSTEAMS_OTHER_WEBHOOK_URL_QC}
ops: ${MSTEAMS_OTHER_WEBHOOK_URL_OPS}
dev: ${MSTEAMS_OTHER_WEBHOOK_URL_DEV}
v1.2.0 (After)
alert:
# ... other alert configurations ...
msteams:
enable: false # Default value, will be overridden by MSTEAMS_ENABLE env var
power_automate_url: ${MSTEAMS_POWER_AUTOMATE_URL} # Power Automate HTTP trigger URL
template_path: "config/msteams_message.tmpl"
other_power_urls: # Optional: Enable overriding the default Power Automate flow
qc: ${MSTEAMS_OTHER_POWER_URL_QC}
ops: ${MSTEAMS_OTHER_POWER_URL_OPS}
dev: ${MSTEAMS_OTHER_POWER_URL_DEV}
Migration Steps
1. Update Your Configuration File
Replace the Microsoft Teams section in your config.yaml file:
msteams:
enable: false # Set to true to enable, or use MSTEAMS_ENABLE env var
power_automate_url: ${MSTEAMS_POWER_AUTOMATE_URL} # Power Automate HTTP trigger URL
template_path: "config/msteams_message.tmpl"
other_power_urls: # Optional: Enable overriding the default Power Automate flow
qc: ${MSTEAMS_OTHER_POWER_URL_QC}
ops: ${MSTEAMS_OTHER_POWER_URL_OPS}
dev: ${MSTEAMS_OTHER_POWER_URL_DEV}
2. Update Your Environment Variables
If you're using environment variables, update them:
# Old (v1.1.5)
MSTEAMS_WEBHOOK_URL=https://...
MSTEAMS_OTHER_WEBHOOK_URL_QC=https://...
# New (v1.2.0)
MSTEAMS_POWER_AUTOMATE_URL=https://...
MSTEAMS_OTHER_POWER_URL_QC=https://...
3. Setting up Power Automate for Microsoft Teams
To set up Microsoft Teams integration with Power Automate:
- Create a new Power Automate flow:
  - Sign in to Power Automate
  - Click on "Create" → "Instant cloud flow"
  - Select "When a HTTP request is received" as the trigger
- Configure the HTTP trigger:
  - The HTTP POST URL will be generated automatically after you save the flow
  - For the Request Body JSON Schema, you can use:

{
  "type": "object",
  "properties": {
    "message": {
      "type": "string"
    }
  }
}

- Add a "Post message in a chat or channel" action:
  - Click "+ New step"
  - Search for "Teams" and select "Post message in a chat or channel"
  - Configure the Teams channel where you want to post messages
  - In the Message field, use: @{triggerBody()?['message']}
- Save your flow and copy the HTTP POST URL:
  - After saving, go back to the HTTP trigger step to see the generated URL
  - Copy this URL and use it for your MSTEAMS_POWER_AUTOMATE_URL environment variable or directly in your configuration file (see the example payload below)
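With this flow in place, Versus posts the rendered template as the message field, so the body your flow receives looks roughly like this (the exact text depends on your template):

{
  "message": "Critical Error in order-service. Error Details: [ERROR] ..."
}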
4. Update Your API Calls
If you're making direct API calls that use the Teams integration, update your query parameters:
Old (v1.1.5):
POST /api/incidents?msteams_other_webhook_url=qc
New (v1.2.0):
POST /api/incidents?msteams_other_power_url=qc
5. Update Your Microsoft Teams Templates (Optional)
The template syntax remains the same, but you might want to review your templates to ensure they work correctly with the new integration. Here's a sample template for reference:
# Critical Error in {{.ServiceName}}
**Error Details:**
```{{.Logs}}```
Please investigate immediately
Testing the Migration
After updating your configuration, test the Microsoft Teams integration to ensure it's working correctly:
curl -X POST http://your-versus-incident-server:3000/api/incidents \
-H "Content-Type: application/json" \
-d '{"service_name": "Test Service", "logs": "This is a test incident alert for Microsoft Teams integration"}'
Additional Notes
- The older Microsoft Teams integration using webhook URLs still works after upgrading to v1.2.0; just update the property name webhook_url → power_automate_url
- If you experience any issues with message delivery to Microsoft Teams, check your Power Automate flow run history to debug potential issues
- For organizations with multiple teams or departments, consider setting up separate Power Automate flows for each team and configuring them with the other_power_urls property
Migration Guide to v1.3.0
This guide explains the changes introduced in Versus Incident v1.3.0 and how to update your configuration to take advantage of the new features.
Key Changes in v1.3.0
Version 1.3.0 introduces enhanced on-call management capabilities and configuration options, with a focus on flexibility and team-specific routing.
1. New Provider Configuration (Major Change from v1.2.0)
A significant change in v1.3.0 is the introduction of the provider property in the on-call configuration, which allows you to explicitly specify which on-call service to use:
oncall:
enable: false
wait_minutes: 3
provider: aws_incident_manager # NEW in v1.3.0: Explicitly select "aws_incident_manager" or "pagerduty"
This change enables Versus Incident to support multiple on-call providers simultaneously. In v1.2.0, there was no provider selection mechanism, as AWS Incident Manager was the only supported provider.
2. PagerDuty Integration (New in v1.3.0)
Version 1.3.0 introduces PagerDuty as a new on-call provider with comprehensive configuration options:
oncall:
provider: pagerduty # Select PagerDuty as your provider
pagerduty: # New configuration section in v1.3.0
routing_key: ${PAGERDUTY_ROUTING_KEY} # Integration/Routing key for Events API v2
other_routing_keys: # Optional team-specific routing keys
infra: ${PAGERDUTY_OTHER_ROUTING_KEY_INFRA}
app: ${PAGERDUTY_OTHER_ROUTING_KEY_APP}
db: ${PAGERDUTY_OTHER_ROUTING_KEY_DB}
The PagerDuty integration supports:
- Default routing key for general alerts
- Team-specific routing keys via the other_routing_keys configuration
- Dynamic routing using the pagerduty_other_routing_key query parameter
Example API call to target the infrastructure team:
curl -X POST "http://your-versus-host:3000/api/incidents?pagerduty_other_routing_key=infra" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Load balancer failure.",
"ServiceName": "lb-service",
"UserID": "U12345"
}'
3. AWS Incident Manager Environment-Specific Response Plans (New in v1.3.0)
Version 1.3.0 enhances AWS Incident Manager integration with support for environment-specific response plans:
oncall:
provider: aws_incident_manager
aws_incident_manager:
response_plan_arn: ${AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN} # Default response plan
other_response_plan_arns: # New in v1.3.0
prod: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_PROD}
dev: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_DEV}
staging: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_STAGING}
This feature allows you to:
- Configure multiple response plans for different environments
- Dynamically select the appropriate response plan using the awsim_other_response_plan query parameter
- Use a more flexible named environment approach for response plan selection
Example API call to use the production environment's response plan:
curl -X POST "http://your-versus-host:3000/api/incidents?awsim_other_response_plan=prod" \
-H "Content-Type: application/json" \
-d '{
"Logs": "[ERROR] Production database failure.",
"ServiceName": "prod-db-service",
"UserID": "U12345"
}'
How to Migrate from v1.2.0
If you're upgrading from v1.2.0, update your on-call configuration to include the provider property.
Complete Configuration Example
Replace your existing on-call configuration with the new structure:
oncall:
enable: false # Set to true to enable on-call functionality
wait_minutes: 3 # Time to wait for acknowledgment before escalating
provider: aws_incident_manager # or "pagerduty"
aws_incident_manager: # Used when provider is "aws_incident_manager"
response_plan_arn: ${AWS_INCIDENT_MANAGER_RESPONSE_PLAN_ARN}
other_response_plan_arns: # NEW in v1.3.0: Optional environment-specific response plan ARNs
prod: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_PROD}
dev: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_DEV}
staging: ${AWS_INCIDENT_MANAGER_OTHER_RESPONSE_PLAN_ARN_STAGING}
pagerduty: # Used when provider is "pagerduty"
routing_key: ${PAGERDUTY_ROUTING_KEY}
other_routing_keys: # Optional team-specific routing keys
infra: ${PAGERDUTY_OTHER_ROUTING_KEY_INFRA}
app: ${PAGERDUTY_OTHER_ROUTING_KEY_APP}
db: ${PAGERDUTY_OTHER_ROUTING_KEY_DB}
redis: # Required for on-call functionality
host: ${REDIS_HOST}
port: ${REDIS_PORT}
password: ${REDIS_PASSWORD}
db: 0
Upgrading from v1.2.0
- Update your Versus Incident deployment to v1.3.0:

# Docker
docker pull ghcr.io/versuscontrol/versus-incident:v1.3.0

# Or update your Kubernetes deployment to use the new image

- Update your configuration as described above, ensuring that Redis is properly configured if you're using on-call features.
- Restart your Versus Incident service to apply the changes.
For any issues with the migration, please open an issue on GitHub.