Towards Harder Location Proofs

Introduction

Hi everyone — I’m John, founder at Astral / Sophia Systems and Research Affiliate at UMD — I just shared an intro here.

At the Decentralized Geospatial Collaborative, our current focus is proof-of-location — one of the three pillars of the decentralized geospatial web, and a key enabler for location-based services on Ethereum.

Some of the questions we’re working to advance include:

  • How can we develop credible, versatile metrics for measuring validator geography?
  • How might these metrics expand the policy levers available to protocol and infrastructure designers to manage geographic decentralization?
  • How might composable location evidence feed into simulation or risk modeling work?

This post is intended to:

  • summarize the state of our research on location proofs;
  • explore what opportunities this primitive affords the Ethereum community to advance geographic decentralization efforts;
  • outline open questions and areas for collaboration.

Context

Our systematic review of location verification techniques has identified proof-of-location approaches ranging from physics-based measurements to cryptographic protocols to human trust networks. Each system makes different tradeoffs on security, accuracy, cost, and other dimensions.

Diverse approaches are needed to serve different use cases and threat models. This diversity creates both challenges and opportunities:

  • How do we meaningfully compare different proof-of-location systems?
  • If we can harmonize them, could we combine signals from independent systems to achieve stronger guarantees than any single system provides?
  • And how can developers navigate this complexity without deep expertise in each approach?

We’re coming to understand location verification through the lens of evidence functions, which provide a way to quantify confidence in claims about where things happened. Recognizing this common pattern points toward a framework for composable location evidence — one that enables different PoL systems to contribute independent evidence, while allowing verifiers to define for themselves what constitutes sufficient proof for their needs. This makes verification both more accessible and more robust. This post shares the state of our thinking on that framework.

As the discussion around measuring validator geography deepens, we’ve been exploring how composable location proofs could strengthen these measurements — from basic diversity metrics to adversarial risk models. We hope this can help quantify decentralization and risk, and help increase Ethereum’s resiliency and neutrality.

Proving Location

For such a simple idea, this is a surprisingly slippery concept.

Some key terms we’re working with:

  • Location claim — a statement that an entity or event occurred within a specified spatial region and time interval.

  • Proof-of-location system — a system that generates verifiable evidence about where an event or device is, based on observed signals and a defined localization model or verification process. Shortened to “PoL system”.

  • Signals — raw observations collected from a proof-of-location system, ranging from physical measurements (RF timings, sensor data) to cryptographic artifacts (digital signatures, attestations). (Note that many of these signals are unlikely to be suitable for a highly adversarial, server-based environment.)

  • Location stamp — a verifiable digital artifact that corroborates a claim about the position and timing of some event, requiring collusion, technical manipulation, or fraud to forge. Produced by a PoL system from its signals, including provenance and any relevant confidence indicators.

  • Location evidence — a composite artifact formed by combining one or more location stamps to support a single spatiotemporal location claim.

  • Evidence function — a function that evaluates location evidence against a location claim, producing a location proof.

  • Location proof — an assessment that quantifies how strongly location evidence supports a location claim across multiple dimensions, produced through verifiable evaluation methods.

This framework distinguishes between location evidence (portable, objective artifacts) and location proofs (contextual credibility assessments). This separation acknowledges that what constitutes sufficient “proof” depends on context — the same evidence might yield different proof assessments for different applications. By standardizing how evidence is structured while allowing flexibility in how it’s evaluated, we can support diverse use cases from low-stakes applications to high-security scenarios.

The Location Proof Lifecycle

In our framework, the lifecycle of a location proof involves:

  1. Some location claim is made by a prover.
  2. Signals are collected and processed to produce location stamps.
  3. Stamps are composed into location evidence.
  4. An evidence function is applied, which yields a location proof.

To be clear, this framework operates as an abstraction layer above individual proof-of-location systems: it complements them and depends on them for generating location evidence. We’re in the early stages of formalizing this framework and are seeking input on any and all of it.
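As a sketch, the four lifecycle steps might be wired together as follows. Every name here (Claim, Stamp, the placeholder scoring inside evidence_function) is a hypothetical illustration under our framework's vocabulary, not part of any existing implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    entity: str    # x — the entity the claim is about
    region: tuple  # L — bounding box (min_lat, min_lon, max_lat, max_lon)
    interval: tuple  # T — (start, end) as Unix timestamps

@dataclass(frozen=True)
class Stamp:
    system: str      # which PoL system produced this stamp (provenance)
    payload: dict    # position estimate, uncertainty, confidence indicators
    signature: bytes  # makes origin and integrity verifiable

def compose_evidence(stamps: list) -> list:
    """Step 3: bundle stamps into location evidence (here, just a list)."""
    return list(stamps)

def evidence_function(claim: Claim, evidence: list) -> dict:
    """Step 4: a toy evidence function returning a credibility vector P.

    Each dimension is a placeholder score in [0, 1]; a real implementation
    would evaluate independence, strength, and relevance of the stamps.
    """
    n = len(evidence)
    return {
        "spatial_accuracy": min(1.0, 0.5 * n),
        "temporal_integrity": min(1.0, 0.5 * n),
        "forgery_resistance": min(1.0, 0.4 * n),
    }

# Steps 1–4 end to end:
claim = Claim("validator-1", (52.3, 4.8, 52.4, 4.9), (1_700_000_000, 1_700_003_600))
stamps = [
    Stamp("network-latency", {"lat": 52.35, "lon": 4.85}, b"sig1"),
    Stamp("satellite", {"lat": 52.36, "lon": 4.86}, b"sig2"),
]
proof = evidence_function(claim, compose_evidence(stamps))
```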

Analytical Sketch

This section sketches how we might formalize this intuition mathematically, as a basis for measurement and simulation.

To generate location proofs, we apply an evidence function \mathcal{E} to evaluate evidence against claims.

\mathcal{E} : (C, E) \mapsto P

The function \mathcal{E}(C, E) returns a multidimensional location proof vector:

P = (P_1, P_2, …, P_n)

where each component P_i represents a distinct evidential dimension — such as spatial accuracy, temporal integrity, or forgery resistance. These components may be expressed in normalized or natural units. Collectively, they form a credibility vector describing the strength of support the evidence provides for the claim.

This provides a structured way to compare or combine heterogeneous sources of location evidence, regardless of how they are generated.

1. Reality

We posit an underlying, true spatiotemporal mapping:

\ell_{\text{true}} : X \times \mathbb{T} \to \mathbb{S}

where

  • X is the set of entities,
  • \mathbb{T} is the set of times, and
  • \mathbb{S} is the spatial domain.

For any entity x \in X and time t \in \mathbb{T}, \ell_{\text{true}}(x, t) denotes the true location of x at t.

The mapping is only locally observable; each observation yields bounded evidence by which claims about \ell_{\text{true}} may be evaluated.

2. Claim

A prover asserts a claim about an event:

C = (x, L, T)

interpreted as the statement:

\exists \ t \in T : \ell_{\text{true}}(x,t) \in L

In words: entity x was in region L at some time within interval T. This expresses what the prover asks the verifier (or the network) to believe.
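When the true trajectory is available (e.g. in a simulation), the existential statement above can be checked directly. A minimal sketch, where the trajectory sample format and the axis-aligned region are simplifying assumptions:

```python
def claim_holds(trajectory, region, interval):
    """Check ∃ t ∈ T : ℓ_true(x, t) ∈ L against sampled positions.

    trajectory: list of (t, lat, lon) samples of the (normally
      unobservable) true location of entity x;
    region: bounding box (min_lat, min_lon, max_lat, max_lon) for L;
    interval: (t_start, t_end) for T.
    """
    min_lat, min_lon, max_lat, max_lon = region
    t0, t1 = interval
    # The claim holds if ANY sample falls inside both L and T.
    return any(
        t0 <= t <= t1 and min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
        for t, lat, lon in trajectory
    )
```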

3. Signals

To compile evidence to corroborate C, the prover draws on one or more proof-of-location systems. Each proof-of-location system produces a set of raw observables concerning the entity’s location.

O_i = \{o_{i1}, o_{i2}, …, o_{ik}\}

Observables may include direct measurements (physical signals, environmental identifiers) or signed assertions from external entities (peers, authorities, services with location knowledge).

4. Location stamps

A PoL system processes its own signals into a signed, verifiable artifact:

s_i = g_i(O_i)

where g_i maps the signal set O_i to a stamp s_i, encoding that system’s localization model. Systems typically determine location through:

  • Inference: computing position from physical signals (trilateration, timing analysis)
  • Reference: looking up observed identifiers in location databases (IP geolocation, WiFi mapping)
  • Corroboration: providing location-relevant attestations or patterns (peer witnesses, authority vouching, behavioral data)

Each stamp must be digitally signed to make its origin and integrity verifiable.

Beyond this minimal requirement, systems may include additional durability assurances—mechanisms that make falsification or replay more costly or detectable, such as cryptographic attestations, hardware proofs, or economic costs.

Most existing PoL systems provide only partial guarantees; mapping and comparing these durability properties remains an active area of research.
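A minimal sketch of stamp generation and verification. The HMAC here is a stand-in for a real digital signature: actual PoL systems would use asymmetric schemes (e.g. Ed25519) so that verifiers do not need the signing key. SYSTEM_KEY, make_stamp, and the payload fields are all hypothetical:

```python
import hmac
import hashlib
import json

# Placeholder shared secret; a real system would hold an asymmetric keypair.
SYSTEM_KEY = b"pol-system-secret"

def make_stamp(observables: dict) -> dict:
    """g_i: process raw signals O_i into a signed location stamp s_i."""
    body = {
        "position": observables["estimate"],    # localization model output
        "uncertainty_m": observables["error"],  # error bound
        "issued_at": observables["t"],
        "system": "example-latency-net",        # provenance
    }
    msg = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SYSTEM_KEY, msg, hashlib.sha256).hexdigest()
    return body

def verify_stamp(stamp: dict) -> bool:
    """Check origin and integrity; any tampering invalidates the stamp."""
    body = {k: v for k, v in stamp.items() if k != "signature"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SYSTEM_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["signature"])
```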

5. Location evidence

A location evidence bundle is composed of m stamps:

E = \{s_1, s_2, …, s_m\}

Each stamp carries its own provenance, uncertainty, and internal trust assumptions. Together, these stamps form the evidentiary basis for a specific spatiotemporal claim.

6. Evidence function

An evidence function \mathcal{E} evaluates location evidence E against a claim C to produce a location proof P:

\mathcal{E} : (C, E) \mapsto P

Recall that P is a credibility vector — a multidimensional assessment of how well the available evidence supports the claim across several evidential dimensions.

In practice, \mathcal{E} considers multiple factors when evaluating evidence against a claim:

\mathcal{E}(C, E) = f(\text{corr}(E), \text{strength}(E), \text{relevance}(C, E)) = P

for some combining function f, where:

  • \text{corr}(E) (independence): the degree to which constituent systems provide distinct information rather than redundant signals.
  • \text{strength}(E) (robustness): the intrinsic reliability of each contributing system, including calibration accuracy, error bounds, and resistance to forgery or manipulation.
  • \text{relevance}(C, E) (fit to the claim): how directly the collected evidence pertains to the specific entity, region, and time interval asserted.

Both the structure of \mathcal{E} and the interpretation of its output vector P remain open research problems. A key challenge is to characterize what each evidential dimension of P should capture — accuracy, integrity, cost of forgery, privacy, decentralization, and so on — and to establish methods for quantifying these across heterogeneous proof-of-location systems with different assurance models and interdependencies.
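One toy instantiation of these three factors, using simple averages as proxies. The dimension names, input formats, and formulas are illustrative assumptions, not proposed standards:

```python
def evaluate(claim_region_overlap, system_reliabilities, pairwise_corr):
    """Toy evidence function combining independence, strength, and
    relevance into a credibility vector P.

    claim_region_overlap: per-stamp fraction of its uncertainty region
      that falls inside the claimed region L (a relevance proxy);
    system_reliabilities: per-stamp reliability score in [0, 1] (strength);
    pairwise_corr: average pairwise correlation between systems in [0, 1].
    """
    n = len(system_reliabilities)
    independence = 1.0 - pairwise_corr
    strength = sum(system_reliabilities) / n
    relevance = sum(claim_region_overlap) / n
    # A simple credibility vector; the right dimensions are an open question.
    return {
        "independence": independence,
        "robustness": strength,
        "relevance": relevance,
    }
```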

Application-specific evidence functions

Different applications require fundamentally different evidence functions. Some verifiers may exclude entire categories of evidence based on their trust models:

  • Decentralization-focused applications might reject any evidence from centralized authorities, regardless of accuracy.
  • Regulatory-compliant systems might require government attestations and reject peer-to-peer evidence.
  • Privacy-preserving applications might only accept evidence with specific privacy guarantees, rejecting evidence that exposes raw location data.

This isn’t just about evaluation methods — it’s about which evidence types are considered valid at all. For example, a location stamp derived from a proprietary indoor-positioning network—such as one operated by a major mapping or retail-analytics company—might deliver sub-meter accuracy and be difficult to forge without privileged access, yet be assigned zero credibility by systems that reject centralized data services.

Evidence functions thus encode not just how to evaluate evidence, but which evidence to consider. This application-specific filtering reflects the fundamental trust assumptions of each use case.
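This filtering step might be sketched as follows; the source categories and the trust-model sets are hypothetical examples:

```python
def filter_evidence(stamps, trust_model):
    """Drop stamps whose source category the verifier's trust model rejects.

    trust_model: set of accepted source categories, e.g. {"peer", "physical"}
    for a decentralization-focused verifier.
    """
    return [s for s in stamps if s["category"] in trust_model]

stamps = [
    {"system": "gov-attestation", "category": "authority"},
    {"system": "latency-net", "category": "physical"},
    {"system": "peer-witness", "category": "peer"},
]
# A decentralization-focused verifier: accurate authority stamps still score zero.
decentralized = filter_evidence(stamps, {"peer", "physical"})
# A regulatory-compliant verifier: only authority attestations count.
regulated = filter_evidence(stamps, {"authority"})
```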

7. Weighting scheme

To make location proofs actionable, verifiers often need to interpret the multidimensional credibility vector to produce a decision.

A weighting scheme maps the location proof P to a single scalar value that can be compared against an application-specific threshold.

w : P \mapsto p \in [0,1]

Different applications weight the components of P differently:

  • High-security applications might require P_{\text{forgery-cost}} > \theta_{\text{high}} regardless of spatial accuracy
  • Navigation services might weight P_{\text{precision}} heavily while tolerating lower forgery resistance
  • Decentralized systems might require P_{\text{decentralization}} > \theta_{\text{min}} as a hard constraint before considering other dimensions

The weighting function w encodes these priorities, transforming the rich credibility assessment into a context-specific decision value.

Importantly, we don’t prescribe a universal weighting scheme. What constitutes “sufficient proof” is inherently application-specific. The same location proof might be accepted by one verifier and rejected by another, based on their different requirements and risk tolerances.

This separation between evidence evaluation (producing P) and decision-making (applying w) allows the framework to serve diverse applications while maintaining a common language for location verification.
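A toy weighting function illustrating both hard constraints and weighted scoring. The dimension names, weights, and thresholds are invented for illustration:

```python
def weight(P, weights, hard_constraints=()):
    """w: map credibility vector P to a scalar decision value in [0, 1].

    hard_constraints: (dimension, threshold) pairs that must hold before
    any weighted scoring happens; the proof scores 0.0 if one fails.
    """
    for dim, threshold in hard_constraints:
        if P[dim] <= threshold:
            return 0.0
    total = sum(weights.values())
    return sum(weights[d] * P[d] for d in weights) / total

P = {"precision": 0.9, "forgery_cost": 0.4, "decentralization": 0.7}
# A navigation service: precision dominates, forgery resistance tolerated low.
nav = weight(P, {"precision": 0.8, "forgery_cost": 0.2})
# A high-security verifier: forgery cost is a hard constraint — same proof, rejected.
secure = weight(P, {"precision": 0.5, "forgery_cost": 0.5},
                hard_constraints=[("forgery_cost", 0.8)])
```

The same P yields different decisions under the two schemes, which is exactly the separation between evidence evaluation and decision-making described above.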

Design Properties

This framework has several properties that make it useful across different contexts.

Information monotonicity

Adding valid, independent evidence may raise or lower confidence in a claim, but it should increase certainty about that confidence — tightening, not loosening, our understanding of where the truth most likely lies.

Non-additivity of redundant evidence

When two signals convey the same information, the second does little to increase certainty. The framework should recognize correlation so that only genuinely independent evidence contributes to a more reliable assessment.
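This effect can be made concrete with a toy combination rule; the linear interpolation between the fully independent and fully redundant cases is a simplifying assumption, not a proposed model:

```python
def combine(p1, p2, corr):
    """Combine two confidence values, discounting for correlation.

    corr = 0: fully independent — confidence compounds, 1 - (1-p1)(1-p2).
    corr = 1: fully redundant — no gain over the stronger single signal.
    """
    independent = 1.0 - (1.0 - p1) * (1.0 - p2)
    redundant = max(p1, p2)
    return (1.0 - corr) * independent + corr * redundant

# Two 90%-confident signals:
combine(0.9, 0.9, 0.0)  # independent: 0.99
combine(0.9, 0.9, 1.0)  # redundant: 0.9 — the second signal adds nothing
```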

Forgery-cost condition

Location evidence is only as good as the cost of faking it. For any given application, the expected cost of forging all contributing signals should exceed the value of whatever action or transaction the proof supports. This keeps incentives aligned and makes deception uneconomical.
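As a sketch, the condition reduces to a simple inequality; the per-signal cost figures below are made up for illustration, and treating the total attack cost as the sum of per-signal costs assumes the systems are independent:

```python
def forgery_uneconomical(forging_costs, value_at_stake):
    """Forgery-cost condition: an attacker must defeat every contributing
    signal, so (assuming independent systems) the expected attack cost is
    the sum of per-signal forging costs. Deception is uneconomical when
    that sum exceeds the value the proof protects."""
    return sum(forging_costs) > value_at_stake

# e.g. faking both a latency network (50k) and a satellite attestation (500k)
# to unlock an action worth 100k:
forgery_uneconomical([50_000, 500_000], value_at_stake=100_000)  # True
```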

Composability

Different location stamps can be combined into evidence bundles in a consistent way. The framework ensures that evidence from multiple sources can be meaningfully integrated, with each stamp’s contribution properly accounted for based on its independence and relevance. This makes it possible to layer multiple PoL systems together while maintaining interpretability.

Verifiability

The framework is designed to support verifiable computation throughout the lifecycle. Location stamps should be cryptographically signed, evidence bundles should maintain clear provenance, and evaluation functions should be identifiable so their outputs can be independently reproduced or verified. In practice, systems will vary in how far they go — from transparent on-chain evaluation to zero-knowledge proofs or trusted execution — but all should make explicit what can be verified and what trust assumptions remain.

Neutrality

The framework is unopinionated about how location evidence is generated or proofs are evaluated. It accommodates diverse proof-of-location systems and evidence functions, and it makes no assumptions about which attributes verifiers value most. Different implementations can compete, benchmark, and evolve within a common architecture.

Decision flexibility

The framework makes no assumptions about how proof evaluations are used. It supports diverse modes of decision-making — from simple thresholds to adaptive or collective reasoning — allowing applications to define what “sufficient proof” means within their own operational logic.

Together, these properties describe how evidence should behave as it accumulates: more independent, high-integrity signals sharpen our picture of reality, while noise, redundancy, or manipulation are naturally constrained.

Illustrative Example

To make this more tangible, consider a validator node that makes an honest location claim C=(x, L, T). It collects evidence from two independent proof-of-location systems:

  • a network-measurement system such as WitnessChain’s, which infers proximity through latency analysis, and
  • a satellite-attestation system similar to SpaceComputer’s proposal, that derives position from satellite signals and hardware attestations.

Each system processes its own signals to produce a signed location stamp. Combined, they form location evidence E = \{ s_{\text{network}}, s_{\text{satellite}} \}.

Applying an evidence function yields a location proof: P = \mathcal{E}(C,E). The resulting credibility vector P reflects:

  • Low correlation: network and satellite systems use independent physical phenomena,
  • High robustness: both systems include cryptographic signatures and costly-to-forge attestations,
  • High relevance: both directly measure the claimed entity during the claimed time.

This multi-factor approach produces a stronger location proof than either system alone. Intuitively, the expected cost of forgery rises sharply: an attacker would need to simultaneously compromise both the network measurements and satellite infrastructure — a significantly higher bar than attacking a single system.

This illustrates the framework’s key insight: independent evidence sources compose to create stronger proofs, while redundant sources add little value.

Open Questions

Obviously, there is an enormous amount of work that needs to be done to build this kind of multisystem sensing infrastructure. Some things on my mind:

  1. Benchmarking strength
    How can we compare and benchmark the strength of different proof-of-location systems — including their accuracy, durability, and cost of forgery — in a consistent, reproducible way?

  2. Structure of the credibility vector
    What dimensions belong in a location proof? How can we quantify and normalize them so different PoL systems can interoperate and be evaluated on equal footing?

  3. Independence and correlation
    How do we measure and model correlation between PoL systems, so that composite proofs meaningfully reflect independence rather than redundancy?

  4. Privacy and disclosure
    What privacy affordances are essential across use cases, and which privacy-preserving techniques remain compatible with verifiable proofs?

  5. Durability mechanisms
    What cryptographic, hardware, and economic techniques can increase the durability of location proofs — making falsification or replay detectably costly — and how can these be compared or composed across systems?

  6. Application under adversarial conditions
    How robust can location proofs be in trustless or adversarial environments such as validator networks? What assumptions break down, and what mechanisms could re-establish confidence?

  7. Protocol integration and incentives
    If composable location evidence becomes available, how might protocols use it — for measurement, risk management, or incentive design — without introducing new centralization pressures?

  8. Where to begin
    Which existing PoL systems or datasets are ready for early experimentation, and what minimal testbeds or metrics would yield the clearest signal on feasibility and value?

Next Steps

If you’re working on validator measurement, network cartography, or simulation frameworks that could integrate spatial evidence, we’d like to compare notes. Our next step is formalizing testable evidence functions and benchmarking existing PoL systems.


Many thanks to Adam Spiers, Taylor Oshan, @cryptrowanderer, Vivek, Ron Erlih, Seth Docherty, Kiersten Jowett, Holly Grimm, j-mars, jason-james, and Danny Gattman for all their contributions to this work.
