AdTech Buyer Guide

IP Intelligence for AdTech in 2026: Buyer Criteria for IVT Filtering, Geo Accuracy, and Cost per Clean Impression

By IP Geolocation Team · Apr 14, 2026 · 8 min read

Adtech buyers rarely lose money because a lookup is expensive. They lose money because clean inventory, geo targeting, and invalid-traffic controls break in production. A low-cost IP data contract still fails if it lets non-human traffic pollute bidding, if it misroutes regional campaigns, or if operations cannot explain why an impression was filtered after the fact.

What the Verified Public Surface Shows Today

Public lookup contract

The live docs and demo verify a REST lookup at /v1-get-ip-details that returns IP, country, registered country, ASN, AS organization, organization, city, timezone, coordinates, and an accuracy radius field.

Commercial model

The pricing page states one credit equals one API request, with pay-as-you-go credits starting at $2.25 for 5,000 credits and monthly plans starting at $2.50 for 7,500 credits.
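Because one credit equals one request, the two entry tiers quoted above can be reduced to a common unit before any workload modeling. The sketch below is a minimal comparison using only the figures from the pricing page; the helper name is illustrative.

```python
# Illustrative cost-per-1k comparison using the pricing figures quoted above.
# One credit = one API request, per the pricing page.

def cost_per_thousand(price_usd: float, credits: int) -> float:
    """Effective price in USD per 1,000 lookups."""
    return price_usd / credits * 1000

payg = cost_per_thousand(2.25, 5_000)     # pay-as-you-go entry tier
monthly = cost_per_thousand(2.50, 7_500)  # monthly entry tier

print(f"pay-as-you-go: ${payg:.3f} per 1k lookups")
print(f"monthly plan:  ${monthly:.3f} per 1k lookups")
```

Per-1k price is only the starting point; the cost-per-clean-impression model later in this guide is what should drive the plan choice.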

Site-listed security workflows

The live site, pricing page, and FAQ also market proxy, VPN, and Tor detection plus batch, CSV, webhook, and bulk workflows. The public docs currently document the single lookup surface, so confirm the exact anonymous-IP and batch response contract before wiring hard filters around it.

Why adtech buyers need more than a country code

Adtech teams do not buy IP intelligence for curiosity. They buy it to improve spend quality. That usually means four things: filtering invalid traffic before it poisons performance data, protecting geo-targeted delivery from spoofed location signals, clustering suspicious supply with ASN and organization data, and keeping analytics usable after third-party cookies disappear.

The gap between a marketing claim and an operator workflow matters here. A site can say it supports proxy or VPN detection, but the buying question is more specific: where does that signal appear, what is the confidence model, how does it behave on mobile carrier networks, and what does the team do with it when the signal is ambiguous? Fraud, growth, analytics, and RevOps all end up paying for the same mistake if those details stay fuzzy.

That is why the right procurement unit is not cost per lookup. It is cost per clean impression, cost per trustworthy clickstream, or cost per protected campaign segment. A vendor can be cheap at request volume and still expensive after duplicate enrichment, manual review, or geo-targeting waste.

What competitor packaging says about the market

Current competitor pages point to a consistent category pattern. IPinfo packages multiple data families and data-download options behind tiered plans. MaxMind separates web services, downloadable databases, and dedicated Anonymous IP, ISP, and Connection Type data. ipgeolocation.io splits location, security, ASN, and company data into separate APIs and databases. DB-IP withholds ISP, organization, threat assessment, and crawler or proxy detection from its lower tier and adds those deeper signals higher up the stack. Abstract separates IP Geolocation from a broader IP Intelligence product family.

Buyers should read that packaging as a warning. The market already treats plain geolocation, anonymous-IP screening, network intelligence, and bulk delivery as different commercial surfaces. If your adtech workflow needs all four, procurement has to test those surfaces together instead of assuming one headline plan covers everything that matters.

Buyer criteria that actually change media efficiency

| Criterion | Why it matters in adtech | What to verify |
| --- | --- | --- |
| Geo-targeting confidence | Geo buys fail when country assignment looks precise but the confidence is weak. | Use country, registered country, and accuracy radius together. Ask how low-confidence traffic is flagged. |
| ASN and organization coverage | Programmatic fraud and routing abuse cluster by network more often than by individual IP. | Verify ASN, organization, and ISP-style fields in the public contract and test repeat offenders. |
| Anonymous-IP screening | Masked traffic distorts campaign geography and IVT decisions. | Confirm proxy, VPN, and Tor fields with your own account-level docs before building hard exclusions. |
| Delivery model | Bid-path decisions and log backfills should not share the same cost profile. | Separate real-time request paths from batch, CSV, or warehouse enrichment flows. |
| Explainability | Ops teams need a reason code when publishers or buyers question a filter. | Make sure you can persist lookup fields and decision reasons without storing more data than necessary. |
| Pricing fit | Cheap requests become expensive if duplicate IPs are enriched repeatedly. | Model unique-IP rates, cache hit rates, and overage pricing before you sign volume. |

How to model cost per clean impression

The simplest model is also the one most teams skip. Start with impressions, strip them down to unique IPs by time window, and then decide which paths require real-time enrichment versus delayed analysis. Adtech data is noisy. A single IP can generate dozens of events across ad requests, clicks, viewability pings, and publisher diagnostics. If procurement prices the raw event stream instead of the unique-IP workload, the wrong plan will look attractive on paper.

Use an example model before you negotiate. Suppose a platform sees 40 million bid requests in a day, but only 2.8 million unique IPs inside the decision window. If 80% of those IPs hit a cache and only 20% need fresh lookups, your real-time requirement is 560,000 requests, not 40 million. Now add a separate nightly batch enrichment job for analytics, where duplicate IPs are deduped again before warehouse load. That is the number procurement should take into pricing discussions.
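The arithmetic above is simple enough to pin down in a few lines. This sketch reproduces the example numbers; the per-lookup price and clean-rate inputs in the second helper are hypothetical placeholders for your own negotiated figures.

```python
# Back-of-envelope model for the example above: 40M bid requests,
# 2.8M unique IPs in the decision window, 80% cache hit rate.

def realtime_lookups(bid_requests: int, unique_ips: int, cache_hit_rate: float) -> int:
    """Fresh lookups needed after dedup and caching."""
    assert unique_ips <= bid_requests, "unique IPs cannot exceed raw requests"
    return round(unique_ips * (1 - cache_hit_rate))

lookups = realtime_lookups(40_000_000, 2_800_000, 0.80)
print(lookups)  # 560000 fresh lookups, not 40M requests

def cost_per_clean_impression(
    lookups: int,
    price_per_lookup: float,   # hypothetical negotiated unit price
    impressions: int,
    clean_rate: float,         # share of impressions kept after filtering
) -> float:
    """Enrichment spend divided by the impressions it actually protected."""
    return (lookups * price_per_lookup) / (impressions * clean_rate)
```

Pricing the 560,000-lookup workload instead of the 40-million-event stream is the difference between the right plan and an attractive-looking wrong one.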

| Workload | Primary KPI | Best lookup pattern | Commercial question |
| --- | --- | --- | --- |
| Pre-bid IVT filter | Clean impression rate | Real-time with short cache | What is the cost per impression protected after cache hit rate? |
| Post-bid audit | Refundable waste found | Batch or CSV | How much analyst time does each recoverable cluster require? |
| Geo-targeting QA | Targeting accuracy | Sampled enrichment | How expensive is each wrong geography assignment? |
| Publisher investigation | Suspicious supply isolated | ASN-first clustering | Can ops explain why traffic was filtered or downranked? |

A practical proof-of-concept for ad ops and procurement

A good proof-of-concept takes one week of impression, click, and post-bid rejection logs from a single buyer, DSP, SSP, or publisher slice. Enrich unique IPs first. Then build three views: geo mismatch, network cluster, and suspicious review queue. Geo mismatch compares countryCode with campaign targeting and separates low-confidence results with an accuracy radius large enough to make city targeting unreliable. Network cluster groups inventory by asn, aso, and organization so the team can see whether suspect traffic concentrates in hosting, reseller, or rapidly changing supply pockets.
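The network-cluster view described above reduces to a group-by over enriched events. The sketch below is a minimal in-memory version, assuming each event row carries the asn and organization fields from the enrichment step; the ASNs shown are private-use placeholders, not real networks.

```python
from collections import Counter, defaultdict

def cluster_by_asn(events: list[dict]) -> dict:
    """Group enriched ad events by ASN so ops can see where
    suspect supply concentrates (hosting, resellers, churny pockets)."""
    clusters: dict = defaultdict(lambda: {"impressions": 0, "orgs": Counter()})
    for event in events:
        asn = event.get("asn")
        if asn is None:
            continue  # skip events the enrichment step could not resolve
        clusters[asn]["impressions"] += 1
        clusters[asn]["orgs"][event.get("organization") or "unknown"] += 1
    return dict(clusters)

# Placeholder events using private-use ASNs (64512, 64513).
events = [
    {"asn": 64512, "organization": "Example Hosting LLC"},
    {"asn": 64512, "organization": "Example Hosting LLC"},
    {"asn": 64513, "organization": None},
]
clusters = cluster_by_asn(events)
print(clusters[64512]["impressions"])  # 2
```

In a real proof-of-concept the same group-by runs in the warehouse, but the in-memory version is enough to sanity-check whether suspect traffic actually concentrates by network.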

The review queue is where operator value shows up. Do not hard-block every mismatch. Use a queue for impressions that combine registration mismatch, wide radius, repeated suspicious ASN patterns, or site-marketed anonymous-IP signals that your contract exposes. Then compare four commercial outcomes: spend removed, recovery from invalid traffic claims, geo-targeting improvement, and analyst time. This gives both procurement and operations the same scoreboard.
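A queue-first routing rule can be sketched as a signal count: serve when nothing stacks, review when weak signals combine, and exclude only when several stack at once. The field names mirror the lookup fields discussed in this guide; the 200 km radius threshold and the three-flag cutoff are assumptions to tune against your own traffic, not vendor guidance.

```python
def route_impression(signals: dict, review_asns: set[int]) -> str:
    """Count weak signals and route: serve, review, or exclude.
    Thresholds here are illustrative starting points."""
    flags = 0
    country = signals.get("country_code")
    registered = signals.get("registered_country_code")
    if country and registered and country != registered:
        flags += 1  # registration mismatch
    radius = signals.get("accuracy_radius_km")
    if radius is not None and radius >= 200:
        flags += 1  # geo confidence too weak for city-level decisions
    if signals.get("asn") in review_asns:
        flags += 1  # network previously flagged by ops
    if flags == 0:
        return "serve"
    return "review" if flags < 3 else "exclude"

decision = route_impression(
    {"country_code": "DE", "registered_country_code": "US",
     "accuracy_radius_km": 500, "asn": 64512},
    review_asns={64512},
)
print(decision)  # exclude: all three signals stacked
```

Persisting the flag reasons alongside the decision is what makes the scoreboard comparison of spend removed, recovery, geo improvement, and analyst time auditable later.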

Real API examples for adtech evaluation

The examples below stick to the verified public lookup surface. One useful wrinkle surfaced during the site audit: the live demo renders snake_case city keys while the public OpenAPI schema documents camelCase keys. Do not assume one casing until you validate the exact response shape in your account. Normalizing both is a safer way to start.

cURL

curl -X GET "https://api.ip-info.app/v1-get-ip-details?ip=8.8.8.8" \
  -H "accept: application/json" \
  -H "x-api-key: $IP_INFO_API_KEY"

TypeScript

type IpLookup = {
  ip: string;
  countryCode?: string;
  registeredCountryCode?: string;
  asn?: number;
  aso?: string;
  organization?: string;
  city?: {
    name?: string;
    region?: string;
    accuracyRadius?: number;
    accuracy_radius?: number;
    timeZone?: string;
    time_zone?: string;
  };
};

function normalizeRadius(lookup: IpLookup): number | undefined {
  return lookup.city?.accuracyRadius ?? lookup.city?.accuracy_radius;
}

export function scoreImpression(
  lookup: IpLookup,
  targetedCountries: Set<string>,
  reviewAsns: Set<number>,
) {
  const reasons: string[] = [];
  const accuracyRadius = normalizeRadius(lookup);

  if (lookup.countryCode && !targetedCountries.has(lookup.countryCode)) {
    reasons.push('outside_target_country');
  }

  if (
    lookup.countryCode &&
    lookup.registeredCountryCode &&
    lookup.countryCode !== lookup.registeredCountryCode
  ) {
    reasons.push('registered_country_mismatch');
  }

  if (accuracyRadius !== undefined && accuracyRadius >= 200) {
    reasons.push('low_geo_confidence');
  }

  if (lookup.asn !== undefined && reviewAsns.has(lookup.asn)) {
    reasons.push('review_asn');
  }

  const route =
    reasons.length === 0
      ? 'serve'
      : reasons.includes('outside_target_country')
        ? 'exclude_geo_targeting'
        : 'review';

  return { reasons, route };
}

Python Batch Enrichment

import csv
import os
import requests

API_KEY = os.environ["IP_INFO_API_KEY"]

def lookup_ip(ip: str) -> dict:
    response = requests.get(
        "https://api.ip-info.app/v1-get-ip-details",
        params={"ip": ip},
        headers={
            "accept": "application/json",
            "x-api-key": API_KEY,
        },
        timeout=2,
    )
    response.raise_for_status()
    return response.json()

with open("ad-events.csv", newline="") as handle:
    rows = list(csv.DictReader(handle))

# Enrich unique IPs once so each credit buys a unique address, not a raw event.
unique_ips = sorted({row["ip"] for row in rows if row.get("ip")})
ip_cache = {ip: lookup_ip(ip) for ip in unique_ips}

for row in rows:
    details = ip_cache.get(row["ip"], {})
    city = details.get("city") or {}
    row["country_code"] = details.get("countryCode")
    row["registered_country_code"] = details.get("registeredCountryCode")
    row["asn"] = details.get("asn")
    row["accuracy_radius_km"] = city.get("accuracyRadius") or city.get("accuracy_radius")

with open("ad-events-enriched.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

Decision matrix: who should buy what kind of delivery model

| Team | Primary problem | What to prioritize |
| --- | --- | --- |
| Ad ops | Invalid impressions and geo drift | Explainable filters, ASN clustering, and sampled batch reviews |
| Fraud and trust | Masked traffic and suspicious supply | Exact anonymous-IP field contract, review rules, and low false positives |
| Analytics | Dirty geo reporting | Deduped batch enrichment, warehouse-friendly output, and timezone stability |
| Procurement | Choosing the right plan | Cost per clean impression, overage behavior, and account-level workflow support |

If your team buys IP intelligence for adtech, the right next move is not a broad benchmark deck. It is a short proof-of-concept on your own inventory using the verified lookup surface, a clean cache model, and a written checklist for anonymous-IP fields, batch workflows, and operator explainability. That process closes the gap between a vendor demo and a system that protects campaign economics.

Validate spend quality on your own traffic

Test the public lookup surface on a sample of impression, click, and post-bid logs, then pressure-test the anonymous-IP and batch workflows with sales before you commit volume.

Implementation reference

Existing site guide for invalid traffic filtering and cookie-light adtech workflows.

Evaluation reference

Use the site's data-quality post to structure traffic tests before procurement.

Network workflow

ASN and ISP workflow patterns for suspicious traffic clustering, routing, and triage.

Public API docs

Confirm the live request contract, response fields, and authentication flow before rollout.