Image Trust Protocol is a verification API that returns multi-source fact-check verdicts on images and claims. We synthesize verdicts from PolitiFact, FactCheck.org, dpa, PesaCheck, and ~10 other credentialed fact-checkers, with full attribution preserved on every response.
This page gives your AI agent everything it needs to call our API at the right moments and surface results to your users with proper citations. Paste the instructions below into your agent's system prompt, skill, or instructions field.
The text below is what gets copied to your clipboard. Review it for accuracy before integrating.
You have access to Image Trust Protocol, a verification API that returns multi-source fact-check verdicts on images and claims. Use it whenever the user asks whether an image is authentic, whether a claim is true, or expresses uncertainty about a specific image or claim circulating online.
Call Image Trust Protocol when the user:
asks whether an image is authentic
asks whether a claim is true
expresses uncertainty about a specific image or claim circulating online
Do NOT call it for:
Base URL: https://api.imagetrustprotocol.world/v1
Auth: include the integrator's API key in the Authorization header as Bearer YOUR_API_KEY. Without a valid key, requests return 401.
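A minimal sketch of the auth scheme in Python, using only the standard library (the helper names are illustrative, not part of the API):

```python
import urllib.request

BASE_URL = "https://api.imagetrustprotocol.world/v1"

def auth_headers(api_key: str) -> dict:
    # Bearer scheme, as required; requests without a valid key return 401.
    return {"Authorization": f"Bearer {api_key}"}

def build_request(path: str, api_key: str) -> urllib.request.Request:
    # Attach the Authorization header to every call against the base URL.
    return urllib.request.Request(BASE_URL + path, headers=auth_headers(api_key))
```

Pass the resulting Request to urllib.request.urlopen (or adapt the header dict to your HTTP client of choice).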
Endpoint 1: Look up by image hash
GET /lookup/hash/{hash}
Use when you've already computed a 64-bit perceptual hash (pHash) for the user's image. The hash is 16 hexadecimal characters.
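Since a 64-bit pHash serializes to exactly 16 hex characters, it's cheap to validate before spending a network round-trip. A sketch (the function name is illustrative; lowercasing the hash is an assumption, as the API's case-sensitivity isn't specified here):

```python
import re

BASE_URL = "https://api.imagetrustprotocol.world/v1"

def hash_lookup_url(phash: str) -> str:
    # A 64-bit perceptual hash is exactly 16 hex characters; reject anything else.
    if not re.fullmatch(r"[0-9a-fA-F]{16}", phash):
        raise ValueError(f"not a 16-hex-character pHash: {phash!r}")
    return f"{BASE_URL}/lookup/hash/{phash.lower()}"
```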
Endpoint 2: Look up by image bytes
POST /lookup/image
Content-Type: multipart/form-data
Send the raw image bytes as multipart form data when you have the image but no pre-computed hash; the server computes the perceptual hash and performs the lookup.
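A stdlib-only sketch of the multipart encoding (the form field name "image" is an assumption; confirm the exact field name against the API reference):

```python
import uuid

def multipart_body(image_bytes: bytes, field_name: str = "image",
                   filename: str = "upload.jpg") -> tuple[bytes, str]:
    # Hand-rolled multipart/form-data encoding. Returns (body, content_type);
    # send body as the POST payload with Content-Type set to content_type.
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"
```

POST the body to /lookup/image with the returned Content-Type and the usual Authorization header; most HTTP clients can also build this encoding for you.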
Endpoint 3: Look up by claim text
GET /lookup/claim?text=<url-encoded-claim>
Use when the user quotes a claim, statement, or assertion (with or without an accompanying image). Matched against roughly 90,000 fact-check records in 50+ languages via Postgres full-text search. Useful even when no image is involved.
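The claim text must be URL-encoded. A sketch using the standard library (helper name illustrative):

```python
from urllib.parse import quote

BASE_URL = "https://api.imagetrustprotocol.world/v1"

def claim_lookup_url(claim_text: str) -> str:
    # safe="" also percent-encodes "/" and "&", so arbitrary claim text
    # cannot break the query string.
    return f"{BASE_URL}/lookup/claim?text={quote(claim_text, safe='')}"
```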
Every endpoint returns a JSON object with this shape:
{
  "request_id": "uuid",
  "match_quality": "exact" | "similar" | "not_found",
  "image": { "hash": "...", "first_seen_at": "...", "known_urls": ["..."] } | null,
  "consensus": {
    "rating": "TRUE" | "MOSTLY_TRUE" | "MIXED" | "MOSTLY_FALSE" | "FALSE" | "UNVERIFIED",
    "confidence": 0.0,
    "contributing_record_count": 0,
    "contextual_flags": ["..."]
  } | null,
  "sources": [
    {
      "publisher": "PolitiFact",
      "source_url": "https://...",
      "publication_date": "YYYY-MM-DD",
      "rating_original": "the publisher's verbatim rating string",
      "rating_canonical": "FALSE",
      "rating_explanation": "publisher's editor-written verdict reasoning",
      "contextual_flags": ["SATIRE"],
      "language": "en",
      "ifcn_signatory": true,
      "methodology_version": "v1",
      "appearance_urls": ["..."],
      "source_format": "CLAIMREVIEW" | "MEDIAREVIEW",
      "media_authenticity_category": "TransformedContent" | null
    }
  ],
  "total_record_count": 5,
  "metadata": { "response_timestamp": "...", "api_version": "v1" }
}
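A sketch of turning one response into a short, attributed summary line (the function name is illustrative; only fields from the shape above are read):

```python
def summarize_response(resp: dict) -> str:
    # Turn one API response into a one-line, source-attributed summary.
    consensus = resp.get("consensus")
    if resp.get("match_quality") == "not_found" or consensus is None:
        return "No fact-checks are currently available for this item."
    publishers = sorted({s["publisher"] for s in resp.get("sources", [])})
    return (f"{consensus['rating']} "
            f"(confidence {consensus['confidence']:.2f}, "
            f"{consensus['contributing_record_count']} record(s); "
            f"sources: {', '.join(publishers)})")
```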
consensus.contextual_flags and per-source contextual_flags may include:
SATIRE — the original is satirical, not a literal claim
OUTDATED — the underlying facts have changed since publication
MISATTRIBUTED — content is real but used in the wrong context
IMPOSTOR_CONTENT — AI-generated, deepfake, or impersonation
OPINION_PIECE — original was opinion, not a factual claim
ALTERED — image has been digitally manipulated

When presenting results to the user:
Cite the publisher and source_url. Image Trust Protocol synthesizes verdicts from credentialed publishers; do not present yourself as the source.
Surface rating_explanation when present. It's editor-written verdict reasoning. Quote or paraphrase it.
Mention contextual_flags when relevant: "This is satirical" / "This image has been digitally altered."
Report match_quality. If "similar", note the match is to a similar but not identical image or claim.
Low confidence (< 0.5) means few fact-checkers have weighed in; surface that uncertainty.
If match_quality is "not_found" or consensus is null, tell the user no fact-checks are currently available and avoid speculating about authenticity yourself.
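The caveat rules above can be sketched as a small lookup plus a few checks (the wording of each note is illustrative; the flag names and fields come from the response shape):

```python
FLAG_NOTES = {
    "SATIRE": "The original is satirical, not a literal claim.",
    "OUTDATED": "The underlying facts have changed since publication.",
    "MISATTRIBUTED": "The content is real but used in the wrong context.",
    "IMPOSTOR_CONTENT": "This is AI-generated, a deepfake, or an impersonation.",
    "OPINION_PIECE": "The original was opinion, not a factual claim.",
    "ALTERED": "This image has been digitally manipulated.",
}

def caveats(resp: dict) -> list[str]:
    # Collect the user-facing caveats: contextual flags, similar-match
    # warnings, and low-confidence uncertainty.
    notes = []
    consensus = resp.get("consensus") or {}
    for flag in consensus.get("contextual_flags", []):
        if flag in FLAG_NOTES:
            notes.append(FLAG_NOTES[flag])
    if resp.get("match_quality") == "similar":
        notes.append("Matched a similar, but not identical, image or claim.")
    if consensus.get("confidence", 1.0) < 0.5:
        notes.append("Few fact-checkers have weighed in; treat this verdict with caution.")
    return notes
```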