Bruno Testing Framework

The Zochil API uses Bruno for API testing and documentation. Bruno collections live in the ./spec/ folder in the .bru format, providing both API specifications and automated tests.

Collection Structure

API tests are organized by service and functionality:
spec/
├── general-monitoring/          # General monitoring API tests
│   ├── @auth/
│   ├── admins/
│   ├── environments/
│   └── profile/
├── marketplace-monitoring/      # Marketplace monitoring tests
│   ├── @auth/
│   ├── environments/
│   ├── marketplace/
│   └── bruno.json
└── environments/               # Shared environment configs
    └── local.bru

Environment Configuration

Environment variables are centralized in environments/local.bru:
vars {
  baseUrl: http://localhost:3000
  access-token: your_access_token_here
  marketplace_uid: your_marketplace_id
  device-id: your_device_id
}

Environment Setup

  1. Copy the example environment file
  2. Update variables for your local setup
  3. Obtain access tokens through authentication endpoints
  4. Configure service-specific variables as needed
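The steps above can be sketched in shell. This is illustrative only: the template file name (local.bru.example) and its contents are assumptions, so adapt the paths to your repository layout.

```shell
mkdir -p spec/environments

# A template committed to the repo (no real secrets) -- assumed name
cat > spec/environments/local.bru.example <<'EOF'
vars {
  baseUrl: http://localhost:3000
  access-token: your_access_token_here
}
EOF

# 1. Copy the template to the untracked local environment file
cp spec/environments/local.bru.example spec/environments/local.bru

# 2. Update variables for your local setup (sed shown as one option)
sed -i.bak 's|access-token: .*|access-token: abc123|' spec/environments/local.bru
```

Tokens obtained from the authentication endpoints (step 3) can then be pasted into local.bru, or written automatically by the login tests shown below.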

Request Organization

Each API endpoint has a dedicated .bru file containing the complete HTTP request specification:

Example Bruno Request File

meta {
  name: Get User Profile
  type: http
  seq: 1
}

get {
  url: {{baseUrl}}/user/profile
}

headers {
  access-token: {{access-token}}
  device-id: {{device-id}}
  Content-Type: application/json
}

tests {
  test("Status is 200", function() {
    expect(res.getStatus()).to.equal(200);
  });

  test("Response has user data", function() {
    expect(res.getBody().data).to.have.property('id');
    expect(res.getBody().data).to.have.property('email');
  });
}

Running Tests

Bruno CLI Installation

Install Bruno CLI globally:
npm install -g @usebruno/cli

Test Execution Commands

# Run general monitoring tests
bru run spec/general-monitoring

# Run marketplace monitoring tests
bru run spec/marketplace-monitoring

# Run specific folder tests
bru run spec/general-monitoring/admins

Test Structure

Authentication Tests (@auth/)

Authentication tests should be run first to obtain access tokens:
// Login request example
meta {
  name: Admin Login
  type: http
  seq: 1
}

post {
  url: {{baseUrl}}/monitoring/admins/login-with-password
}

headers {
  Content-Type: application/json
}

body {
  json: {
    "username": "[email protected]",
    "password": "your_password"
  }
}

tests {
  test("Login successful", function() {
    expect(res.getStatus()).to.equal(200);
    expect(res.getBody()).to.have.property('access_token');
  });

  // Store token for subsequent requests
  bru.setEnvVar("access-token", res.getBody().access_token);
}
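Outside Bruno, the token-capture logic in that tests block amounts to the following plain Node sketch, with `res` and `bru` stubbed to mirror Bruno's in-request API (the stubs and the sample token are illustrative):

```javascript
// Minimal stubs standing in for Bruno's `res` and `bru` objects (illustrative only)
const env = {};
const bru = { setEnvVar: (key, value) => { env[key] = value; } };
const res = {
  getStatus: () => 200,
  getBody: () => ({ access_token: "tok_123" }),
};

// Same logic as the tests block above: persist the token for later requests
if (res.getStatus() === 200 && res.getBody().access_token) {
  bru.setEnvVar("access-token", res.getBody().access_token);
}

console.log(env["access-token"]); // → tok_123
```

Because subsequent requests read `{{access-token}}` from the environment, running the @auth folder first makes every later request authenticated without manual copying.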

CRUD Operation Tests

Organize tests to follow CRUD patterns (blocks are collapsed to single lines here for brevity; real .bru files spell each block out across multiple lines):
// Create
meta { name: Create Product, seq: 1 }
post { url: {{baseUrl}}/products/create }

// Read/List
meta { name: List Products, seq: 2 }
get { url: {{baseUrl}}/products/list }

// Read/Detail
meta { name: Get Product Detail, seq: 3 }
get { url: {{baseUrl}}/products/detail/{{productId}} }

// Update
meta { name: Update Product, seq: 4 }
post { url: {{baseUrl}}/products/update }

// Delete
meta { name: Remove Product, seq: 5 }
post { url: {{baseUrl}}/products/remove }
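Spelled out in full multi-line .bru syntax, the detail request above would look like the following ({{productId}} is assumed to have been stored by an earlier create step):

```
meta {
  name: Get Product Detail
  type: http
  seq: 3
}

get {
  url: {{baseUrl}}/products/detail/{{productId}}
}

headers {
  access-token: {{access-token}}
  device-id: {{device-id}}
}

tests {
  test("Status is 200", function() {
    expect(res.getStatus()).to.equal(200);
  });
}
```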

Test Validation

Response Validation

Validate response structure and data:
tests {
  test("Response structure is correct", function() {
    expect(res.getStatus()).to.equal(200);
    expect(res.getBody()).to.have.property('success', true);
    expect(res.getBody()).to.have.property('data');
    expect(res.getBody()).to.have.property('message');
  });

  test("Data contains required fields", function() {
    const data = res.getBody().data;
    expect(data).to.have.property('id');
    expect(data).to.have.property('name');
    expect(data).to.have.property('created_at');
  });

  test("Pagination is present", function() {
    expect(res.getBody().pagination).to.have.property('page');
    expect(res.getBody().pagination).to.have.property('total');
  });
}

Error Response Testing

Test error scenarios and responses:
tests {
  test("Validation error structure", function() {
    expect(res.getStatus()).to.equal(400);
    expect(res.getBody()).to.have.property('success', false);
    expect(res.getBody()).to.have.property('error');
    expect(res.getBody().error).to.have.property('code');
    expect(res.getBody().error).to.have.property('message');
  });

  test("Authentication error", function() {
    expect(res.getStatus()).to.equal(401);
    expect(res.getBody().error.code).to.equal('UNAUTHORIZED');
  });
}

Test Data Management

Dynamic Test Data

Use variables and scripts for dynamic test data:
script:pre-request {
  // Generate dynamic data
  const timestamp = Date.now();
  const testEmail = `test_${timestamp}@example.com`;

  bru.setVar("testEmail", testEmail);
  bru.setVar("timestamp", timestamp);
}

body {
  json: {
    "name": "Test Product {{timestamp}}",
    "email": "{{testEmail}}",
    "price": 99.99
  }
}
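The pre-request logic above reduces to the following plain JavaScript, with `bru` stubbed as a simple variable store to show how the generated values flow into the body template:

```javascript
// Stub of Bruno's variable store (illustrative only)
const vars = {};
const bru = { setVar: (key, value) => { vars[key] = value; } };

// Generate unique data per run, as in the pre-request script
const timestamp = Date.now();
const testEmail = `test_${timestamp}@example.com`;

bru.setVar("testEmail", testEmail);
bru.setVar("timestamp", timestamp);

// The interpolation the body block would perform
const payload = {
  name: `Test Product ${vars.timestamp}`,
  email: vars.testEmail,
  price: 99.99,
};

console.log(payload.email.endsWith("@example.com")); // → true
```

Timestamp-based values keep repeated runs from colliding on unique fields such as email or product name.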

Test Data Cleanup

Include cleanup scripts in test suites:
script:post-response {
  // Store created resource ID for cleanup
  if (res.getStatus() === 201) {
    const createdId = res.getBody().data.id;
    bru.setVar("createdProductId", createdId);
  }
}

// Cleanup script in separate file
meta {
  name: Cleanup Test Data
  type: http
  seq: 999
}

post {
  url: {{baseUrl}}/products/remove
}

body {
  json: {
    "id": "{{createdProductId}}"
  }
}

Continuous Integration

Automated Testing

Integrate Bruno tests in CI/CD pipelines:
# In CI script
npm install -g @usebruno/cli

# Run all test collections; bru exits non-zero when any test fails
bru run spec/general-monitoring --env ci &&
bru run spec/marketplace-monitoring --env ci

# $? now reflects the whole chain, not just the last command
if [ $? -eq 0 ]; then
  echo "All tests passed"
else
  echo "Tests failed"
  exit 1
fi
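In GitHub Actions, the same steps could be wired up roughly as below. This is a sketch, not a verified workflow: the job name, Node version, and the existence of a ci environment file in the collections are all assumptions.

```yaml
name: api-tests
on: [push, pull_request]

jobs:
  bruno:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g @usebruno/cli
      # bru exits non-zero when any test fails, which fails the job
      - run: bru run spec/general-monitoring --env ci
      - run: bru run spec/marketplace-monitoring --env ci
```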

Test Reporting

Generate test reports for CI systems:
# Generate a JSON report
bru run spec/general-monitoring --output test-results.json --format json

# Generate a JUnit XML report (supported on recent CLI versions)
bru run spec/general-monitoring --output junit.xml --format junit

Best Practices

Test Organization

  • Group related tests in folders
  • Use sequential numbering for test execution order
  • Include authentication tests at the beginning
  • Add cleanup tests at the end

Environment Management

  • Use environment variables for all configurable values
  • Maintain separate environments for dev/staging/prod
  • Never commit sensitive data like passwords or tokens

Data Validation

  • Test both success and error scenarios
  • Validate response structure and required fields
  • Check status codes and error messages
  • Test pagination and filtering

Maintenance

  • Update tests when API changes
  • Remove obsolete test cases
  • Keep test data current and relevant
  • Document complex test scenarios