Agent Setup v1

Add fact-check verification to your AI agent in one paste.

Image Trust Protocol is a verification API that returns multi-source fact-check verdicts on images and claims. We synthesize verdicts from PolitiFact, FactCheck.org, dpa, PesaCheck, and ~10 other credentialed fact-checkers, with full attribution preserved on every response.

This page gives your AI agent everything it needs to call our API at the right moments and surface results to your users with proper citations. Paste the instructions below into your agent's system prompt, skill, or instructions field.

Once pasted, the agent will know when to call our API and how to cite sources.

What gets pasted

The text below is what gets copied to your clipboard. Skim it for accuracy before integrating.

Image Trust Protocol — Agent Setup Instructions (v1)

You have access to Image Trust Protocol, a verification API that returns multi-source fact-check verdicts on images and claims. Use it whenever the user asks whether an image is authentic, whether a claim is true, or expresses uncertainty about a specific image or claim circulating online.

When to call

Call Image Trust Protocol when the user:

- Asks whether a specific image is authentic or has been manipulated.
- Asks whether a claim, statement, or quote is true.
- Expresses uncertainty about a specific image or claim circulating online.

Do NOT call it for:

How to call

Base URL: https://api.imagetrustprotocol.world/v1

Auth: include the integrator's API key in the Authorization header as Bearer YOUR_API_KEY. Without a valid key, requests return 401.
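
As a minimal sketch, every call needs the Bearer token in the `Authorization` header. The snippet below builds an authenticated request with the standard library; `YOUR_API_KEY` is a placeholder, and real integrations should load the key from configuration or an environment variable.

```python
import urllib.request

# Placeholder key for illustration; load yours from config/env in practice.
API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.imagetrustprotocol.world/v1"

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request against the API."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        method="GET",
    )

req = build_request("/lookup/hash/a1b2c3d4e5f60718")
print(req.full_url)
print(req.get_header("Authorization"))
```

A request without this header (or with an invalid key) comes back as a 401.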

Endpoint 1: Look up by image hash

GET /lookup/hash/{hash}

Use when you've already computed a 64-bit perceptual hash (pHash) for the user's image. The hash is 16 hexadecimal characters.

Endpoint 2: Look up by image bytes

POST /lookup/image
Content-Type: multipart/form-data

Send the raw image bytes as multipart form data when you have the image but no pre-computed hash. The server computes the perceptual hash and performs the lookup for you.
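
A sketch of assembling the multipart body by hand with only the standard library. The form field name `"image"` is an assumption for illustration; confirm the exact field name against the API reference before shipping.

```python
import uuid

def build_multipart(image_bytes: bytes, filename: str = "image.jpg") -> tuple[bytes, str]:
    """Assemble a multipart/form-data body carrying the image in one form field.

    NOTE: the field name "image" is assumed here, not confirmed by the docs.
    """
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="image"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    content_type = f"multipart/form-data; boundary={boundary}"
    return body, content_type

body, ctype = build_multipart(b"\xff\xd8\xff\xe0fake-jpeg-bytes")
print(ctype)
```

Pass `body` as the POST payload and `ctype` as the `Content-Type` header, alongside the `Authorization` header described above. An HTTP client library handles this boilerplate for you if one is available.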

Endpoint 3: Look up by claim text

GET /lookup/claim?text=<url-encoded-claim>

Use when the user quotes a claim, statement, or assertion (with or without an accompanying image). Matched against roughly 90,000 fact-check records in 50+ languages via Postgres full-text search. Useful even when no image is involved.
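
Claim text must be URL-encoded before it goes in the query string. A minimal helper using the standard library (the example claim is made up):

```python
from urllib.parse import quote

def claim_lookup_path(claim: str) -> str:
    """Build the GET path for a free-text claim lookup, URL-encoding the text."""
    return f"/lookup/claim?text={quote(claim, safe='')}"

print(claim_lookup_path("This photo shows the 2024 flood"))
```

`safe=''` ensures every reserved character, not just spaces, is percent-encoded.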

Response shape

Every endpoint returns a JSON object with this shape:

{
  "request_id": "uuid",
  "match_quality": "exact" | "similar" | "not_found",
  "image": { "hash": "...", "first_seen_at": "...", "known_urls": ["..."] } | null,
  "consensus": {
    "rating": "TRUE" | "MOSTLY_TRUE" | "MIXED" | "MOSTLY_FALSE" | "FALSE" | "UNVERIFIED",
    "confidence": 0.0,
    "contributing_record_count": 0,
    "contextual_flags": ["..."]
  } | null,
  "sources": [
    {
      "publisher": "PolitiFact",
      "source_url": "https://...",
      "publication_date": "YYYY-MM-DD",
      "rating_original": "the publisher's verbatim rating string",
      "rating_canonical": "FALSE",
      "rating_explanation": "publisher's editor-written verdict reasoning",
      "contextual_flags": ["SATIRE"],
      "language": "en",
      "ifcn_signatory": true,
      "methodology_version": "v1",
      "appearance_urls": ["..."],
      "source_format": "CLAIMREVIEW" | "MEDIAREVIEW",
      "media_authenticity_category": "TransformedContent" | null
    }
  ],
  "total_record_count": 5,
  "metadata": { "response_timestamp": "...", "api_version": "v1" }
}

Contextual flags

consensus.contextual_flags and per-source contextual_flags may include:

How to surface results to the user

  1. State the consensus rating in plain language. "PolitiFact and 3 other fact-checkers rated this False."
  2. Cite the contributing fact-checkers by name and link to the original article via source_url. Image Trust Protocol synthesizes verdicts from credentialed publishers; do not present yourself as the source.
  3. Surface rating_explanation when present. It's editor-written verdict reasoning. Quote or paraphrase it.
  4. Mention contextual_flags when relevant. "This is satirical" / "This image has been digitally altered."
  5. Distinguish match_quality. If "similar", note the match is to a similar but not identical image or claim.
  6. Honor confidence. Low confidence (< 0.5) means few fact-checkers have weighed in; surface that uncertainty.

If match_quality is "not_found" or consensus is null, tell the user no fact-checks are currently available and avoid speculating about authenticity yourself.
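
The surfacing rules above can be sketched as a single summary function. This is one possible rendering under the stated rules, not a prescribed output format; the response dict it consumes is the shape documented earlier.

```python
def summarize(resp: dict) -> str:
    """Render a lookup response as a user-facing sentence per the surfacing rules."""
    consensus = resp.get("consensus")
    if resp.get("match_quality") == "not_found" or consensus is None:
        return "No fact-checks are currently available for this image or claim."
    sources = resp.get("sources", [])
    lead = sources[0]["publisher"] if sources else "Fact-checkers"
    others = max(len(sources) - 1, 0)
    rating = consensus["rating"].replace("_", " ").title()  # e.g. MOSTLY_FALSE -> Mostly False
    if others:
        text = f"{lead} and {others} other fact-checkers rated this {rating}."
    else:
        text = f"{lead} rated this {rating}."
    if resp.get("match_quality") == "similar":
        text += " (Matched to a similar, not identical, image or claim.)"
    if consensus.get("confidence", 1.0) < 0.5:
        text += " Few fact-checkers have weighed in yet, so treat this with caution."
    return text

example = {
    "match_quality": "exact",
    "consensus": {"rating": "FALSE", "confidence": 0.9},
    "sources": [{"publisher": "PolitiFact"}, {"publisher": "dpa"},
                {"publisher": "PesaCheck"}, {"publisher": "FactCheck.org"}],
}
print(summarize(example))
```

In a full integration you would also append the `source_url` links and quote `rating_explanation`, per rules 2 and 3.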

What this service does NOT do