Sentinel is a comprehensive synthetic testing platform that helps you monitor and validate your API endpoints, ensuring they're functioning correctly and meeting performance expectations. Whether you're testing individual endpoints or complex workflows with multiple steps, Sentinel provides the tools to create, schedule, and monitor your tests.
Create and schedule individual API tests with customisable validation criteria
Build workflow sequences by chaining tests together with conditional execution
View detailed test results, analytics, and performance metrics
To get started with Sentinel, you'll need to sign in with GitHub. Once signed in, you'll have access to the main dashboard where you can create, manage, and monitor your tests.
The main navigation menu provides access to all the key features of Sentinel:
Create, edit, and delete tests. Set up schedules for automated execution.
View test results with detailed metrics and performance charts.
Analyse OpenAPI specs and generate comprehensive test coverage.
Configure alerts and notifications for test failures and performance issues.
Sentinel provides two ways to create tests: manually creating individual tests or automatically generating tests from an OpenAPI specification.
To create a test manually, go to the Scheduler page and fill out the test form:
Note
Tests can be run on demand or scheduled to run at specific intervals using either a cron expression or a simple interval format.
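For illustration, a schedule definition might look like one of the following (the interval syntax here is hypothetical; check the Scheduler form for the exact format Sentinel accepts):

```
# Cron expression: run at minute 0 of every 6th hour
0 */6 * * *

# Simple interval (hypothetical syntax): run every 30 minutes
30m
```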
Sentinel can automatically generate tests based on an OpenAPI (Swagger) specification:
Note
Generated tests include boundary value testing and cover various scenarios like valid inputs, invalid inputs, and error conditions.
For tests requiring parameters, you can specify them in JSON format, for example:

```json
{
  "userId": 123,
  "name": "Test User",
  "active": true
}
```

For GET requests, these will be sent as query parameters. For POST, PUT, and other methods with a request body, they'll be sent as a JSON payload.
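The two encodings described above can be sketched with the standard library (the endpoint URL here is made up for illustration):

```python
# Sketch of how the same parameters object maps onto the two request styles.
import json
from urllib.parse import urlencode

params = {"userId": 123, "name": "Test User", "active": True}

# GET: parameters are encoded into the URL's query string
get_url = "https://api.example.com/users?" + urlencode(params)

# POST/PUT: parameters are serialised as a JSON request body
post_body = json.dumps(params)
```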
Validation criteria determine whether a test passes or fails. You can validate:
Example of expected response values:
```json
{
  "status": "success",
  "data.id": 123,
  "data.attributes.verified": true
}
```

This will check that these values exist anywhere in the response JSON, regardless of nesting level.
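A nesting-agnostic check like this can be sketched as follows (this is an inferred illustration of the behaviour described above, not Sentinel's actual implementation):

```python
# Check whether a dotted key path with a given value matches any
# subtree of the response, regardless of nesting level.
def path_matches(node, keys, value):
    """True if following `keys` from `node` yields `value`."""
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return node == value

def found_anywhere(node, keys, value):
    """Search every subtree of `node` for a match of the dotted path."""
    if path_matches(node, keys, value):
        return True
    if isinstance(node, dict):
        return any(found_anywhere(v, keys, value) for v in node.values())
    if isinstance(node, list):
        return any(found_anywhere(v, keys, value) for v in node)
    return False

# The criteria match even though the response wraps everything in "result".
response = {"result": {"status": "success", "data": {"id": 123}}}
criteria = {"status": "success", "data.id": 123}
ok = all(found_anywhere(response, k.split("."), v) for k, v in criteria.items())
```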
Test chains allow you to create workflows by linking multiple tests together in a sequence. This is useful for testing complex scenarios that involve multiple API calls.
Note
Tests must be created individually before they can be added to a chain.
Each step in a test chain can have execution conditions:
This allows you to create conditional flows in your test chains.
Data mappings are a powerful feature that lets you pass data between steps in a test chain, so you can build end-to-end test scenarios where data from one API call is used in subsequent calls.
Test chains support two types of mappings:
Take values from the chain context and inject them into the current test step.
Extract values from the current test's response and store them in the chain context for later steps.
Note
The chain context acts as a shared data store that persists throughout the entire test chain execution, letting data flow between steps.
Input mappings let you take values from the chain context and inject them into different parts of your test:
Format: parameters.propertyName
Format: headers.headerName
Format: endpoint.paramName
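Taken together, the three target formats above could be applied with logic along these lines (a hypothetical sketch, not Sentinel's actual implementation; the values come from the worked example in this section):

```python
# Apply input mappings of the form "section.name": "contextVariable"
# to a test step's endpoint, headers, and parameters.
context = {
    "userId": "12345",
    "authToken": "eyJhbGciOi...",
    "includeFullProfile": True,
}
mappings = {
    "endpoint.userId": "userId",
    "headers.Authorization": "authToken",
    "parameters.includeDetails": "includeFullProfile",
}

endpoint = "/api/users/{userId}"
headers, parameters = {}, {}

for target, ctx_key in mappings.items():
    section, _, name = target.partition(".")
    value = context[ctx_key]
    if section == "endpoint":
        # Substitute the {name} placeholder in the URL path
        endpoint = endpoint.replace("{" + name + "}", str(value))
    elif section == "headers":
        headers[name] = str(value)
    elif section == "parameters":
        parameters[name] = value
```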
Input mappings use a key-value format where the key identifies the target location in the test (e.g. parameters.userId) and the value is the name of the context variable to read from.
In this example, we're getting a user's details after creating the user in a previous step:
Step 2 Input Mappings:
```json
{
  "endpoint.userId": "userId",
  "headers.Authorization": "authToken",
  "parameters.includeDetails": "includeFullProfile"
}
```

This mapping will:

- Replace {userId} in the endpoint URL with the value stored in the context variable userId
- Set the Authorization header to the value stored in the authToken context variable
- Set the parameter includeDetails to the value from the includeFullProfile context variable

URL Before Mapping:
GET /api/users/{userId}

URL After Mapping (if userId=12345):
GET /api/users/12345

Output mappings extract values from the test response and save them to the chain context:
Output mappings use a key-value format where the key is the name of the context variable to create and the value is a JSONPath expression that selects data from the response.
Note
Response headers are automatically available to output mappings and can be accessed using the headers. prefix in subsequent steps.
After creating a user, we extract the user ID and auth token for use in subsequent steps:
Example Response JSON:
```json
{
  "success": true,
  "data": {
    "user": {
      "id": "12345",
      "email": "test@example.com",
      "profile": {
        "name": "Test User",
        "role": "admin"
      }
    },
    "token": "eyJhbGciOi..."
  }
}
```

Output Mappings:
```json
{
  "userId": "$.data.user.id",
  "userEmail": "$.data.user.email",
  "userName": "$.data.user.profile.name",
  "authToken": "$.data.token"
}
```

These values will be stored in the chain context and available for use in subsequent steps.
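For simple dotted expressions like the ones above, the extraction can be sketched as follows (real JSONPath supports far more, such as wildcards and filters; this is an illustrative walker only):

```python
# Resolve simple "$.a.b.c" JSONPath expressions against a response dict.
def extract(response, path):
    node = response
    for key in path.lstrip("$.").split("."):
        node = node[key]
    return node

response = {
    "success": True,
    "data": {
        "user": {
            "id": "12345",
            "email": "test@example.com",
            "profile": {"name": "Test User", "role": "admin"},
        },
        "token": "eyJhbGciOi...",
    },
}
mappings = {
    "userId": "$.data.user.id",
    "userEmail": "$.data.user.email",
    "userName": "$.data.user.profile.name",
    "authToken": "$.data.token",
}

# Each mapping stores the extracted value under its context variable name.
chain_context = {name: extract(response, p) for name, p in mappings.items()}
```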
Here's how to create an end-to-end test chain with data mappings:
Login → Extract token → Use token in subsequent API calls
Perfect for testing secured endpoints where authentication is required.
Create resource → Extract ID → Get resource details → Update resource → Delete resource
Test complete CRUD operations on your resources.
Create order → Process payment → Check inventory → Ship order
Verify complex business workflows end-to-end.
If a step fails because an expected value is missing:
If endpoint path substitutions aren't working:
Check that the endpoint URL contains the placeholder in curly braces (e.g. {paramName}) and that the corresponding input mapping key uses the matching endpoint.paramName format.
Like individual tests, test chains can be scheduled to run automatically:
Test chain results show the outcome of each step in the chain:
A chain is considered successful only if all steps execute successfully.
The Dashboard provides a comprehensive view of your test results, including performance metrics, success rates, and detailed execution information.
The Dashboard displays:
You can filter results by date range, service, and specific test.
Click on any test execution to view detailed results:
You can narrow down results using various filters:
Sentinel provides Prometheus-compatible metrics that can be integrated with your existing monitoring systems.
The following metrics are available:
To configure Prometheus to scrape metrics from Sentinel:
Example Prometheus configuration:
```yaml
scrape_configs:
  - job_name: 'sentinel-synthetics'
    scrape_interval: 60s
    metrics_path: '/results/api/v1/metrics/prometheus'
    params:
      userId: ['YOUR_USER_ID']
      apiKey: ['YOUR_API_KEY']
    static_configs:
      - targets: ['sentinel.gowtom.tech']
```

Sentinel can notify you when tests fail or when other important events occur.
Configure email notifications:
When generating tests from an OpenAPI specification, Sentinel provides coverage reports that show how well your tests cover your API.
The coverage report includes:
To improve API coverage:
Sentinel supports testing endpoints that require authentication.
Supported authentication methods:
To configure authentication for a test:
Note
Authentication credentials are encrypted when stored and only decrypted when executing tests.
Sentinel protects your authentication credentials:
Warning
While credentials are encrypted, it's best practice to use API tokens with limited permissions rather than privileged account credentials.
This often occurs when the expected response values don't match the actual response.
If scheduled tests aren't executing:
If tests fail due to authentication:
If you're having trouble uploading an OpenAPI specification:
If you need additional help: