# Image Trust Protocol — Agent Setup Instructions (v1)

You have access to **Image Trust Protocol**, a verification API that returns multi-source fact-check verdicts on images and claims. Use it whenever the user asks whether an image is authentic, whether a claim is true, or expresses uncertainty about a specific image or claim circulating online.

## When to call

Call Image Trust Protocol when the user:

- Attaches an image and asks "is this real?" / "what is this?" / "is this fake?" / "where is this from?"
- Quotes a claim or statement and asks whether it's true, who said it, or when it happened
- Mentions a viral image or news event and asks for context
- Explicitly asks for verification, fact-checking, or "what do the fact-checkers say"

Do NOT call it for:

- General-knowledge questions unrelated to a specific image or claim
- Personal photos the user shared with no verification request attached
- Hypothetical, opinion, or general-information requests where no specific verification is needed
- Real-time events that may be too recent to have an existing fact-check record

## How to call

Base URL: `https://api.imagetrustprotocol.world/v1`

Auth: include the integrator's API key in the `Authorization` header as `Bearer YOUR_API_KEY`. The integrator must have an active account; without a valid key, requests return `401`.

### Endpoint 1: Look up by image hash

```
GET /lookup/hash/{hash}
```

Use when you've already computed a 64-bit perceptual hash (pHash) for the user's image. The hash is 16 hexadecimal characters.

### Endpoint 2: Look up by image bytes

```
POST /lookup/image
Content-Type: multipart/form-data
```

Send the raw image bytes as multipart form data when you have the image but no pre-computed hash. The server computes the hash server-side and performs the lookup.

### Endpoint 3: Look up by claim text

```
GET /lookup/claim?text=
```

Use when the user quotes a claim, statement, or assertion (with or without an accompanying image).
The query is matched against ~90,000+ fact-check records in 50+ languages via Postgres full-text search. Useful even when no image is involved — for example, when a user asks "did X really say Y?"

## Response shape

Every endpoint returns a JSON object with this shape:

```json
{
  "request_id": "uuid",
  "match_quality": "exact" | "similar" | "not_found",
  "image": {
    "hash": "...",
    "first_seen_at": "ISO-8601 timestamp",
    "known_urls": ["..."]
  } | null,
  "consensus": {
    "rating": "TRUE" | "MOSTLY_TRUE" | "MIXED" | "MOSTLY_FALSE" | "FALSE" | "UNVERIFIED",
    "confidence": 0.0,
    "contributing_record_count": 0,
    "contextual_flags": ["..."]
  } | null,
  "sources": [
    {
      "publisher": "PolitiFact",
      "source_url": "https://...",
      "publication_date": "YYYY-MM-DD",
      "rating_original": "the publisher's verbatim rating string",
      "rating_canonical": "FALSE",
      "rating_explanation": "publisher's editor-written verdict reasoning, when present",
      "contextual_flags": ["SATIRE"],
      "language": "en",
      "ifcn_signatory": true,
      "methodology_version": "v1",
      "appearance_urls": ["https://x.com/...", "https://facebook.com/..."],
      "source_format": "CLAIMREVIEW" | "MEDIAREVIEW",
      "media_authenticity_category": "TransformedContent" | null
    }
  ],
  "total_record_count": 5,
  "metadata": {
    "response_timestamp": "ISO-8601",
    "api_version": "v1"
  }
}
```

## Contextual flags

`consensus.contextual_flags` (and per-source `contextual_flags`) may include:

- `SATIRE` — the original is satirical, not a literal claim
- `OUTDATED` — the underlying facts have changed since publication
- `MISATTRIBUTED` — content is real but used in the wrong context (wrong date, location, person, etc.)
- `IMPOSTOR_CONTENT` — AI-generated, deepfake, or impersonation
- `OPINION_PIECE` — original was opinion, not factual claim
- `ALTERED` — image has been digitally manipulated (Photoshop, AI edit, etc.)

Multiple flags can apply to a single record.

## How to surface results to the user

When you receive a match, surface it transparently:

1. **State the consensus rating in plain language.** "PolitiFact and 3 other fact-checkers rated this False" or "This is rated Mostly False by 2 of 3 fact-checkers."
2. **Cite the contributing fact-checkers by name** and link to the original article via `source_url`. Image Trust Protocol synthesizes verdicts from credentialed publishers — do not present yourself as the source.
3. **Surface `rating_explanation`** when present. It's editor-written verdict reasoning and gives the user actionable context. Quote or paraphrase it.
4. **Mention `contextual_flags`** when relevant. "This is satirical" / "This image has been digitally altered" / "The original is from 2018 but is being shared as recent."
5. **Distinguish `match_quality`.** If `"similar"` rather than `"exact"`, note the match is to a similar but not identical image or claim, and that interpretation may be partial.
6. **Honor confidence.** Low confidence (`< 0.5`) means few fact-checkers have weighed in; surface that uncertainty.

If `match_quality` is `"not_found"` or `consensus` is null, tell the user no fact-checks are currently available for this specific image or claim, and avoid speculating about authenticity yourself.

## What this service does NOT do

- Does not generate new fact-checks. It surfaces existing verdicts from publishers like PolitiFact, FactCheck.org, dpa, PesaCheck, and ~10 others.
- Does not detect AI-generated images via pixel analysis. It surfaces what fact-checkers have already concluded about specific images.
- Does not store user images permanently. Only perceptual hashes are retained for matching.
- Does not have opinions on hypothetical, opinion, or general-knowledge questions.

## API version

This setup is for API v1. Future breaking changes will increment to v2; v1 will remain accessible for compatibility.
If you receive a `404` on a documented v1 endpoint, the API may not yet be available in your region, or the integrator's key may lack access. Relay that possibility to the user rather than retrying.
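As a concrete illustration of the lookup endpoints described above, here is a minimal request-construction sketch in Python. The base URL, auth scheme, and 16-hex-character pHash rule come from this document; the helper names themselves are illustrative, not part of the API.

```python
from urllib.parse import quote

BASE_URL = "https://api.imagetrustprotocol.world/v1"

def auth_headers(api_key: str) -> dict:
    # Bearer auth as documented; requests without a valid key return 401.
    return {"Authorization": f"Bearer {api_key}"}

def hash_lookup_url(phash: str) -> str:
    # Endpoint 1: the pHash must be 16 hexadecimal characters (64 bits).
    if len(phash) != 16 or any(c not in "0123456789abcdef" for c in phash.lower()):
        raise ValueError("pHash must be exactly 16 hex characters")
    return f"{BASE_URL}/lookup/hash/{phash}"

def claim_lookup_url(text: str) -> str:
    # Endpoint 3: claim text is URL-encoded into the `text` query parameter.
    return f"{BASE_URL}/lookup/claim?text={quote(text)}"
```

Pass the constructed URL and headers to whatever HTTP client your runtime provides.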
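Endpoint 2 takes raw image bytes as multipart form data. A sketch of building that body with only the standard library follows; note the form field name `"file"` is an assumption, since this document does not name the field.

```python
import uuid

def multipart_image_body(image_bytes: bytes, filename: str = "image.jpg"):
    # Builds a multipart/form-data body for POST /lookup/image.
    # ASSUMPTION: the form field is named "file"; the spec above does not
    # name it, so confirm against the integrator's documentation.
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + image_bytes + tail, f"multipart/form-data; boundary={boundary}"
```

Send the returned body with the returned `Content-Type` value (the boundary must match) alongside the `Authorization` header.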
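The surfacing rules (1–6) above can be pulled together in a sketch that turns a parsed v1 response into user-facing lines. The field names match the documented response shape; the flag wording and the `summarize` helper itself are illustrative.

```python
# Plain-language notes for the documented contextual flags.
FLAG_NOTES = {
    "SATIRE": "The original is satirical, not a literal claim.",
    "OUTDATED": "The underlying facts have changed since publication.",
    "MISATTRIBUTED": "The content is real but used in the wrong context.",
    "IMPOSTOR_CONTENT": "This is AI-generated, a deepfake, or an impersonation.",
    "OPINION_PIECE": "The original was opinion, not a factual claim.",
    "ALTERED": "The image has been digitally manipulated.",
}

def summarize(response: dict) -> str:
    # No match or no consensus: say so, and do not speculate (per the doc).
    if response.get("match_quality") == "not_found" or not response.get("consensus"):
        return "No fact-checks are currently available for this image or claim."
    consensus = response["consensus"]
    sources = response.get("sources", [])
    names = ", ".join(s["publisher"] for s in sources[:3]) or "the matched records"
    lines = [f"Rated {consensus['rating']} by {names}."]  # rule 1 + 2
    if response.get("match_quality") == "similar":       # rule 5
        lines.append("Note: this matches a similar but not identical image or claim.")
    if consensus.get("confidence", 0.0) < 0.5:           # rule 6
        lines.append("Few fact-checkers have weighed in; treat this as tentative.")
    for flag in consensus.get("contextual_flags", []):   # rule 4
        if flag in FLAG_NOTES:
            lines.append(FLAG_NOTES[flag])
    for s in sources:                                    # rule 3 + source links
        if s.get("rating_explanation"):
            lines.append(f'{s["publisher"]}: "{s["rating_explanation"]}" ({s.get("source_url", "")})')
    return "\n".join(lines)
```

The agent would then rephrase these lines conversationally, always attributing the verdict to the named fact-checkers rather than to itself.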
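Finally, the two documented error statuses (`401` for a missing/invalid key, `404` for a possibly unavailable endpoint) can be mapped to relayable messages; the helper name and exact wording are illustrative.

```python
def explain_http_error(status: int):
    # Maps the error statuses documented above to messages the agent can relay.
    if status == 401:
        return "The integrator's API key is missing or invalid (an active account is required)."
    if status == 404:
        return ("The API may not yet be available in this region, or the "
                "integrator's key may lack access to this endpoint.")
    return None  # statuses not covered by this document
```

Per the guidance above, relay the `404` explanation to the user instead of retrying the request.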