
Sentinel Documentation

Overview

Sentinel is a comprehensive synthetic testing platform that helps you monitor and validate your API endpoints, ensuring they're functioning correctly and meeting performance expectations. Whether you're testing individual endpoints or complex workflows with multiple steps, Sentinel provides the tools to create, schedule, and monitor your tests.

Synthetic Tests

Create and schedule individual API tests with customisable validation criteria

Test Chains

Build workflow sequences by chaining tests together with conditional execution

Results Dashboard

View detailed test results, analytics, and performance metrics

Getting Started

To get started with Sentinel, you'll need to sign in with GitHub. Once signed in, you'll have access to the main dashboard where you can create, manage, and monitor your tests.

Navigation

The main navigation menu provides access to all the key features of Sentinel:

  • Dashboard: Overview of your test results and performance metrics
  • Tests: Create and manage individual synthetic tests
  • Chains: Set up and monitor multi-step test workflows
  • Generate Tests: Generate tests automatically from OpenAPI specs
  • Admin: Configure notifications and Prometheus metrics

User Interface Overview

Test Management

Create, edit, and delete tests. Set up schedules for automated execution.

Results Visualisation

View test results with detailed metrics and performance charts.

API Coverage

Analyse OpenAPI specs and generate comprehensive test coverage.

Notifications

Configure alerts and notifications for test failures and performance issues.

Creating Tests

Sentinel provides two ways to create tests: manually creating individual tests or automatically generating tests from an OpenAPI specification.

Manual Test Creation

To create a test manually, use the test form on the Tests page:

  1. Go to the "Tests" page
  2. Click "+ New Test"
  3. Fill in the Basic Information section:
    • Service Name: The name of the service being tested
    • Test Name: A descriptive name for your test
  4. Configure the endpoint details:
    • Endpoint URL: The full URL of the API endpoint
    • HTTP Method: GET, POST, PUT, DELETE, etc.
    • Parameters: Any query parameters or request body
  5. Set up validation criteria:
    • Expected Status Code: The HTTP status code the endpoint should return
    • Expected Response Values: JSON values that should be in the response
  6. Configure authentication if required
  7. Set a schedule for when the test should run
  8. Click "Create Test" to save and activate your test
  9. Click "View All Tests" to see your newly created test

Note

Tests can be run on demand or scheduled to run at specific intervals using either a cron expression or a simple interval format.
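For example, either of the following schedule formats might be used (the exact interval syntax Sentinel accepts is an assumption here):

0 */6 * * *     (cron expression: run at minute 0 of every 6th hour)
every 5m        (simple interval: run every 5 minutes)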

Generating Tests from OpenAPI Spec

Sentinel can automatically generate tests based on an OpenAPI (Swagger) specification:

  1. Go to the "Generate Tests" page
  2. Upload your OpenAPI specification file (.json, .yaml, or .yml)
  3. Click "Create Tests" to generate test cases
  4. Review the generated tests and API coverage report
  5. Fill in the service name and test name fields
  6. Select tests you want to save
  7. Click "Accept and Save Test Cases" to save the generated tests

Note

Generated tests include boundary value testing and cover various scenarios like valid inputs, invalid inputs, and error conditions.

Test Parameters

For tests requiring parameters, you can specify them in JSON format. For example:

{
  "userId": 123,
  "name": "Test User",
  "active": true
}

For GET requests, these will be sent as query parameters. For POST, PUT, and other methods with a request body, they'll be sent as a JSON payload.
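For instance, the example parameters above would produce requests along these lines (the /api/users path is purely illustrative):

GET /api/users?userId=123&name=Test%20User&active=true

POST /api/users
Content-Type: application/json

{"userId": 123, "name": "Test User", "active": true}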

Validation Criteria

Validation criteria determine whether a test passes or fails. You can validate:

  • Status Code: Verify the HTTP status code matches expectations
  • Response Content: Check that specific values exist in the response
  • Response Time: Make sure the API responds within acceptable time limits
  • Headers: Validate specific response headers

Example of expected response values:

{
  "status": "success",
  "data.id": 123,
  "data.attributes.verified": true
}

This will check that these values exist anywhere in the response JSON, regardless of nesting level.

Working with Test Chains

Test chains allow you to create workflows by linking multiple tests together in a sequence. This is useful for testing complex scenarios that involve multiple API calls.

Creating a Test Chain

  1. Go to "Test Chains" and click "Create New Chain"
  2. Provide a name and description for your chain
  3. Select the tests you want to include in the chain
  4. The tests will be executed in the order they appear in the list
  5. Click "Create Test Chain" to save your chain

Note

Tests must be created individually before they can be added to a chain.

Execution Conditions

Each step in a test chain can have execution conditions:

  • Always: The step will always execute
  • Previous Step Success: The step will only execute if the previous step was successful
  • Previous Step Failure: The step will only execute if the previous step failed

This allows you to create conditional flows in your test chains.
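For example, a chain might pair a success path with a failure handler (a hypothetical three-step layout):

Step 1: Create resource       (Always)
Step 2: Verify resource       (Previous Step Success)
Step 3: Record failure        (Previous Step Failure)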

Data Mappings in Test Chains

Data mappings are a powerful feature that lets you pass data between steps in a test chain, so you can build end-to-end test scenarios where data from one API call is used in subsequent calls.

Understanding Data Mappings

Test chains support two types of mappings:

Input Mappings

Take values from the chain context and inject them into the current test step.

Output Mappings

Extract values from the current test's response and store them in the chain context for later steps.

Note

The chain context acts as a shared data store that persists throughout the entire test chain execution, allowing data to flow between steps.

Input Mappings

Input mappings let you take values from the chain context and inject them into different parts of your test:

  • Parameters Mapping: Inject values into request parameters (body or query)

    Format: parameters.propertyName

  • Headers Mapping: Inject values into request headers

    Format: headers.headerName

  • Endpoint Mapping: Replace path parameters in the URL

    Format: endpoint.paramName

Input mappings use a key-value format where:

  • Key: The destination path in the test (e.g., parameters.userId)
  • Value: The context variable name to get the value from

Input Mapping Example

In this example, we're getting a user's details after creating the user in a previous step:

Step 2 Input Mappings:

{
  "endpoint.userId": "userId",
  "headers.Authorization": "authToken",
  "parameters.includeDetails": "includeFullProfile"
}

This mapping will:

  • Replace {userId} in the endpoint URL with the value stored in the context variable userId
  • Set the Authorization header to the value stored in the authToken context variable
  • Add a parameter named includeDetails with the value from the includeFullProfile context variable

URL Before Mapping:

GET /api/users/{userId}

URL After Mapping (if userId=12345):

GET /api/users/12345

Output Mappings

Output mappings extract values from the test response and save them to the chain context:

Output mappings use a key-value format where:

  • Key: The context variable name to store the value
  • Value: JSONPath expression to extract from the response

Note

Response headers are automatically available to output mappings and can be accessed using the headers. prefix, making their values available to subsequent steps.
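For example, a hypothetical output mapping that captures a response header into the context might look like this (X-Request-Id is an assumed header name):

{
  "requestId": "headers.X-Request-Id"
}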

Output Mapping Example

After creating a user, we extract the user ID and auth token for use in subsequent steps:

Example Response JSON:

{
  "success": true,
  "data": {
    "user": {
      "id": "12345",
      "email": "test@example.com",
      "profile": {
        "name": "Test User",
        "role": "admin"
      }
    },
    "token": "eyJhbGciOi..."
  }
}

Output Mappings:

{
  "userId": "$.data.user.id",
  "userEmail": "$.data.user.email",
  "userName": "$.data.user.profile.name",
  "authToken": "$.data.token"
}

These values will be stored in the chain context and available for use in subsequent steps.

Creating End-to-End Test Scenarios

Here's how to create an end-to-end test chain with data mappings (a compact sketch follows the steps):

  1. Create individual tests for each step in your workflow
    • For tests that will receive data, use placeholder values (e.g., {userId} in the URL)
  2. Go to "Test Chains" and create a new chain
  3. Add your tests to the chain in the correct order
  4. For each step that needs data from previous steps:
    • Configure input mappings to receive data from the chain context
  5. For each step that generates data needed later:
    • Configure output mappings to extract values from the response
  6. Set execution conditions for each step if needed
  7. Save and run your test chain
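As a compact sketch, a two-step chain that creates a user and then fetches it could be wired up as follows (endpoint paths and field names are assumptions for illustration):

Step 1: POST /api/users
  Output Mappings:  { "userId": "$.data.user.id" }

Step 2: GET /api/users/{userId}
  Input Mappings:   { "endpoint.userId": "userId" }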

Common Use Cases

Authentication Flow

Login → Extract token → Use token in subsequent API calls

Perfect for testing secured endpoints where authentication is required.

Resource Creation & Management

Create resource → Extract ID → Get resource details → Update resource → Delete resource

Test complete CRUD operations on your resources.

Multi-Step Business Processes

Create order → Process payment → Check inventory → Ship order

Verify complex business workflows end-to-end.

Best Practices

  • Use Descriptive Names: Choose clear, descriptive names for your context variables
  • Handle Missing Data: Set execution conditions to handle cases where expected data might be missing
  • Start Simple: Begin with simple chains and gradually build more complex scenarios
  • Test Individual Steps First: Ensure each test works on its own before adding it to a chain
  • Document Your Mappings: Add comments or descriptions to clarify what each mapping does

Troubleshooting

Missing or Null Values

If a step fails because an expected value is missing:

  • Verify the JSONPath in your output mappings (see the example after this list)
  • Check that the previous step executed successfully
  • Check the actual response to ensure the data is present
  • Use execution conditions to handle cases where data might be missing
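For example, given a response shaped like the one shown earlier, the JSONPath must match the actual nesting (paths below are illustrative):

Response:          {"data": {"user": {"id": "12345"}}}

"$.data.user.id"   →  "12345"  (correct)
"$.user.id"        →  null     (skips the data level)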

Path Parameter Issues

If endpoint path substitutions aren't working:

  • Ensure the URL contains the placeholder in the format {paramName}
  • Verify the input mapping uses the correct format endpoint.paramName
  • Check that the value exists in the chain context

Scheduling Test Chains

Like individual tests, test chains can be scheduled to run automatically:

  1. Go to the test chain details page
  2. Click "Edit Chain" or modify the schedule directly
  3. Configure the schedule using a cron expression or interval
  4. Save your changes

Viewing Chain Results

Test chain results show the outcome of each step in the chain:

  1. Go to the Dashboard or Test Chains tab
  2. Select the "Test Chains" tab in the results view
  3. Click on a specific chain execution to see detailed results
  4. Each step's status, response time, and validation results are displayed

A chain is considered successful only if all steps execute successfully.

Viewing Test Results

The Dashboard provides a comprehensive view of your test results, including performance metrics, success rates, and detailed execution information.

Dashboard Overview

The Dashboard displays:

  • Summary Cards: Total tests, successful tests, failed tests, and more
  • Service Performance: Charts showing response times by service
  • Status Code Distribution: Breakdown of HTTP status codes
  • Test Outcome Chart: Visual representation of test outcomes
  • Recent Test Results: Latest test executions with statuses

You can filter results by date range, service, and specific test.

Detailed Test Results

Click on any test execution to view detailed results:

  • Request Details: Endpoint, method, headers, and parameters
  • Response: Status code, response body, and headers
  • Validation Results: Which validation criteria passed or failed
  • Timing Information: Total duration and timestamp

Filtering and Searching

You can narrow down results using various filters:

  • Date Range: View results from a specific time period
  • Service: Filter by service name
  • Test: Filter by specific test name

Metrics & Monitoring

Sentinel provides Prometheus-compatible metrics that can be integrated with your existing monitoring systems.

Available Metrics

The following metrics are available (an example alert expression follows the list):

  • sentinel_test_success_rate: The success rate of synthetic tests
  • sentinel_test_response_time_seconds: Response time distribution
  • sentinel_test_status_code_total: Count of HTTP status codes
  • sentinel_test_outcome_total: Count of test outcomes
  • sentinel_test_execution_total: Total number of test executions
  • sentinel_validation_failures_total: Total number of validation failures
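For example, a minimal Prometheus alerting expression against the success-rate metric might look like this (the 0.95 threshold and any label filters are assumptions):

# Fire when a test's success rate drops below 95%
sentinel_test_success_rate < 0.95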

Configuring Prometheus Integration

To configure Prometheus to scrape metrics from Sentinel:

  1. Go to the Metrics configuration page
  2. Enable metrics export
  3. Get your API key
  4. Add the Sentinel metrics endpoint to your Prometheus configuration

Example Prometheus configuration:

scrape_configs:
  - job_name: 'sentinel-synthetics'
    scrape_interval: 60s
    metrics_path: '/results/api/v1/metrics/prometheus'
    params:
      userId: ['YOUR_USER_ID']
      apiKey: ['YOUR_API_KEY']
    static_configs:
      - targets: ['sentinel.gowtom.tech']
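Before wiring up Prometheus, you can check that the endpoint responds (assuming HTTPS and substituting your own credentials):

curl 'https://sentinel.gowtom.tech/results/api/v1/metrics/prometheus?userId=YOUR_USER_ID&apiKey=YOUR_API_KEY'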

Notifications

Sentinel can notify you when tests fail or when other important events occur.

Email Notifications

Configure email notifications:

  1. Go to your notification settings
  2. Enable email notifications and click save
  3. Choose what events should trigger notifications:
    • Test failures
    • Cleanup failures
    • Successful tests (optional)
  4. Save your settings

API Coverage

When generating tests from an OpenAPI specification, Sentinel provides coverage reports that show how well your tests cover your API.

Coverage Metrics

The coverage report includes:

  • Endpoint Coverage: Percentage of API endpoints covered by tests
  • Method Coverage: Percentage of HTTP methods covered
  • Parameter Coverage: Percentage of parameters tested
  • Response Code Coverage: Percentage of documented response codes tested
  • Security Scheme Coverage: Percentage of authentication methods tested
  • Overall Coverage: Combined coverage score

Improving Coverage

To improve API coverage:

  • Generate tests for missing endpoints
  • Add tests for different HTTP methods
  • Create tests for edge cases and error conditions
  • Test with different authentication mechanisms
  • Make sure all documented response codes are tested

Security & Authentication

Sentinel supports testing endpoints that require authentication.

Authentication Methods

Supported authentication methods:

  • Bearer Token: JWT or other token-based authentication
  • Basic Authentication: Username and password
  • API Key: Key in header or query parameter

Configuring Authentication

To configure authentication for a test:

  1. In the test creation or edit form, expand the "Authentication" section
  2. Select the authentication type
  3. Enter the required credentials (see the header sketch after these steps):
    • For Bearer Token: Enter the token value
    • For Basic Auth: Enter username and password
    • For API Key: Enter key name, value, and location (header or query parameter)
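For reference, these options map onto standard HTTP authentication headers (values are placeholders, and the X-API-Key header name is only an example; use whatever key name your API expects):

Authorization: Bearer <token>
Authorization: Basic <base64(username:password)>
X-API-Key: <key>     (or sent as a query parameter, e.g. ?apiKey=<key>)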

Note

Authentication credentials are encrypted when stored and only decrypted when executing tests.

Secure Handling of Credentials

Sentinel protects your authentication credentials:

  • Credentials are encrypted at rest
  • Only authorized users can view or modify tests
  • Only the test owner and explicitly shared users have access

Warning

While credentials are encrypted, it's best practice to use API tokens with limited permissions rather than privileged account credentials.

Troubleshooting

Common Issues

Test fails with "Invalid response"

This often occurs when the expected response values don't match the actual response.

  • Verify the JSON paths in your expected response values
  • Check if the API response structure has changed
  • Examine the actual response in the test results

Test schedule isn't running

If scheduled tests aren't executing:

  • Check that the schedule is correctly configured
  • Verify the start and end dates
  • For cron expressions, make sure the syntax is correct
  • Look for any error messages in the schedule status

Authentication failures

If tests fail due to authentication:

  • Check that the authentication credentials are correct
  • Verify the token hasn't expired
  • Make sure the API key or username/password is still valid
  • Check that the authentication method matches what the API expects

Can't upload OpenAPI spec

If you're having trouble uploading an OpenAPI specification:

  • Make sure the file is in valid JSON or YAML format
  • Check that the specification follows the OpenAPI standard
  • Try validating the spec with an external tool
  • Reduce the file size if it's very large

Getting Support

If you need additional help:

  • Check the detailed error messages in test results
  • Email us for support at team@gowtom.tech
  • Include relevant details such as test IDs, error messages, and steps to reproduce