Observer

Define your first metric (HTTP probe)

Install the agent, define a metric backed by an HTTP probe, and report status to Observer Cloud.

This page walks through installing the Observer agent, defining a metric that probes an HTTP endpoint directly, and confirming that the cloud receives status pushes. Use this path when no Prometheus server is in place, or when the signal you want to measure is the endpoint's reachability and response time itself.

Prerequisites

  • An HTTP endpoint reachable from the host or cluster that will run the agent.
  • A container runtime (Docker or Kubernetes) or a Linux host with systemd.
  • An Observer Cloud account. Sign up at use.observer.

Steps

  1. Create an organisation

    Sign in at use.observer and create an organisation. The organisation slug becomes the URL path under /console/<org> and defines the tenant boundary for every resource below.

  2. Create an agent and copy its key

    In the console, open Agents, then New agent. Provide a name (typically the hostname) and submit. The next screen reveals the agent key once. Copy it before navigating away.

  3. Run the agent

    HTTP probes do not require Prometheus. Omit PROMETHEUS_SERVER_URL from the agent's environment when no Prometheus probes are defined.

    Verify the connection. With Docker, open http://<host>:10101 in a browser. In Kubernetes, port-forward the deployment with kubectl port-forward deploy/observer-agent 10101:10101 and open http://localhost:10101. The dashboard's Cloud panel shows a recent last_heartbeat_at, and the Agents page in the console marks the agent as running within roughly 90 seconds.
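    The freshness check behind that 90-second window can be expressed mechanically. A minimal sketch: the field name last_heartbeat_at comes from the dashboard's Cloud panel, but the function name and the ISO-8601 timestamp format are assumptions for illustration.

```python
from datetime import datetime, timezone

def heartbeat_is_fresh(last_heartbeat_at: str, now: datetime,
                       max_age_s: int = 90) -> bool:
    """Return True if the agent's last heartbeat is at most max_age_s old.

    Assumes last_heartbeat_at is an ISO-8601 timestamp with an offset,
    e.g. "2024-05-01T12:00:00+00:00" (hypothetical format).
    """
    then = datetime.fromisoformat(last_heartbeat_at)
    return (now - then).total_seconds() <= max_age_s
```

    For example, a heartbeat 60 seconds old counts as fresh, while one 3 minutes old does not.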

  4. Define an HTTP metric

    In the console, open Metrics, then New metric. Select the agent created above and set the source type to HTTP.

    Configure the probe:

    • URL: the full URL the agent should hit, for example https://api.example.com/healthz.
    • Method: GET.
    • Expected status: 200 (the probe reports no_data with unexpected_status:<code> for any other code).
    • Timeout (ms): 5000. The probe reports ETIMEDOUT if the request takes longer.

    Set thresholds against response_time_ms:

    • Healthy: under 500 (response under 500 ms).
    • Unhealthy: over 2000 (response over 2 seconds).

    Values that match neither boundary resolve to degraded.

    Set Interval to 1 minute and save. The probe runs every minute and pushes response_time_ms plus the resolved status to the cloud.
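    The threshold rules in this step reduce to a small function. A sketch under stated assumptions: the function name and parameter defaults are illustrative, not part of the product's API; the healthy/degraded/unhealthy semantics follow the rules above.

```python
def resolve_status(response_time_ms: float,
                   healthy_under_ms: float = 500,
                   unhealthy_over_ms: float = 2000) -> str:
    """Map a measured response time onto a metric status.

    healthy   : strictly under the healthy threshold
    unhealthy : strictly over the unhealthy threshold
    degraded  : matches neither boundary
    """
    if response_time_ms < healthy_under_ms:
        return "healthy"
    if response_time_ms > unhealthy_over_ms:
        return "unhealthy"
    return "degraded"
```

    With the defaults above, resolve_status(120) yields "healthy", resolve_status(800) yields "degraded", and resolve_status(3500) yields "unhealthy". A boundary value such as 500 matches neither rule and resolves to "degraded".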

  5. Confirm reporting

    Within one push interval the metric appears in the Metrics list with its current status. Open the metric to see the latest value, last push timestamp, and rolling history.

    To verify the round trip, lower the unhealthy threshold below the current response time. The metric flips to unhealthy on the next push. Restore the original threshold and the metric returns to healthy.

Probe behaviour

The agent computes status client-side. The cloud receives only the verdict:

{ metric_id, value: <ms>, status: <healthy|degraded|unhealthy>, timestamp }

The full HTTP request runs from the agent's vantage point. The cloud has no path to the endpoint. Request bodies, response bodies, and headers stay in your network. The full reason-code list and field reference are in Configure HTTP probes.
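Putting the pieces together, one probe cycle might be sketched as follows. The payload fields follow the shape shown above, and the reason strings mirror the unexpected_status:<code> and ETIMEDOUT behaviour described in step 4; the function names and internals are assumptions, not the agent's actual implementation.

```python
import time

def classify_probe(status_code, elapsed_ms, timed_out=False,
                   expected_status=200,
                   healthy_under_ms=500, unhealthy_over_ms=2000):
    """Return (status, reason) for one probe attempt, computed client-side."""
    if timed_out:
        return "no_data", "ETIMEDOUT"
    if status_code != expected_status:
        return "no_data", f"unexpected_status:{status_code}"
    if elapsed_ms < healthy_under_ms:
        return "healthy", None
    if elapsed_ms > unhealthy_over_ms:
        return "unhealthy", None
    return "degraded", None

def build_push(metric_id, elapsed_ms, status, ts=None):
    """Shape of the push: only the verdict leaves the network,
    never request or response bodies."""
    return {"metric_id": metric_id, "value": elapsed_ms,
            "status": status, "timestamp": ts or int(time.time())}
```

For instance, a 500 response classifies as ("no_data", "unexpected_status:500"), while a 200 answered in 120 ms classifies as ("healthy", None) and is pushed via build_push.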
