Modernizing Independent Price Verification (IPV)

By Thibault Phlipponneau

As portfolios grow in complexity and markets turn more volatile, Independent Price Verification (IPV) has become more than a control — it’s a strategic pillar of valuation governance. For heads of valuation, finance, and risk, IPV ensures prices are independent, accurate, and defensible across audit, regulatory, and stakeholder lenses — anchored in frameworks like IFRS 13, FRTB, and SEC Rule 2a-5.

Yet IPV is increasingly strained by operational fragmentation and complexity. Many still rely on workflows that span Market Data Management (MDM) systems, spreadsheets, scripts, and informal collaboration. Built for simpler products and longer cycles, these legacy approaches now struggle to meet the demands of modern oversight.

A key complexity driver is instrument classification under IFRS 13: Level 1 instruments (e.g., listed equities) rely on observable prices and require minimal validation; Level 2 instruments (e.g., corporate bonds) depend on observable inputs like credit spreads; Level 3 instruments (e.g., illiquid derivatives) rely on internal models, assumptions, and unobservable inputs — demanding the most scrutiny.

These distinctions shape control requirements. While Level 1 assets can often be validated automatically, Level 2 and 3 instruments — where uncertainty and subjectivity are higher — require advanced validation logic, independent modeling, and governed exception workflows.

IPV isn’t broken, but it is underpowered. In a data-driven environment, it should act as a strategic radar, detecting issues before they escalate.

This piece explores that future: why traditional IPV architectures fall short — and how platforms like Dataiku can help transform IPV into a proactive, insight-driven, and governable function.

The IPV Lifecycle: Key Phases 

While implementations vary, most IPV frameworks follow a common structure. A typical process spans five interconnected steps — from sourcing to governance — applicable across product types.

1. Data Ingestion & Preparation

The IPV process begins by collecting relevant pricing inputs, including:

  • Market data from external vendors (e.g., security prices, FX rates, and benchmark quotes from providers like Bloomberg or Refinitiv)
  • Market-derived pricing inputs constructed from vendor data — such as risk-free curves, credit spreads, and volatility surfaces — typically generated within central data or risk infrastructure
  • Front-office trader marks and valuation outputs, including both quoted prices and internally derived inputs (e.g., model parameters, reconstructed curves, positions, sensitivities) based on front-office methodologies
  • Reference data, including product-level attributes and classifications

This data must first be standardized and validated — mapping identifiers (e.g., ISIN, CUSIP), resolving missing fields, and aligning formats for consistency. For vendor data with multiple contributors, this step also supports consolidation into a golden copy — a single, independent pricing source used for downstream comparison. The dataset is enriched with contextual data for targeted validations. Establishing data lineage at this stage ensures traceability from source to valuation. In more agile environments, IPV readiness may also extend to intraday validations during market volatility.
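
To make this concrete, here is a minimal sketch of the standardization and golden-copy step in Python with pandas. The vendor feeds, column names, and median-based consolidation rule are illustrative assumptions; real feeds from providers like Bloomberg or Refinitiv differ in schema, and production logic would layer in vendor hierarchies, staleness checks, and outlier filters.

```python
import pandas as pd

# Hypothetical vendor feeds with inconsistent identifier and price columns.
vendor_a = pd.DataFrame({"isin": ["XS123", "XS456"], "price": [99.12, 101.40], "source": "vendor_a"})
vendor_b = pd.DataFrame({"ISIN": ["XS123", "XS456"], "px_mid": [99.10, 101.55], "source": "vendor_b"})

def standardize(df: pd.DataFrame, isin_col: str, price_col: str) -> pd.DataFrame:
    """Map each feed onto a common schema: one ISIN column, one price column."""
    out = df.rename(columns={isin_col: "isin", price_col: "price"})
    return out[["isin", "price", "source"]].dropna(subset=["isin", "price"])

feeds = pd.concat([
    standardize(vendor_a, "isin", "price"),
    standardize(vendor_b, "ISIN", "px_mid"),
])

# Consolidate contributors into a golden copy, here via the median price per ISIN.
golden_copy = feeds.groupby("isin")["price"].median().rename("ipv_price").reset_index()
print(golden_copy)
```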

2. Reconciliation

Front-office pricing data — including both raw inputs (e.g., trader marks, quoted prices) and derived components (e.g., curves, volatility surfaces, model outputs) — is then reconciled against the independently sourced golden copy established in the prior step. For more complex or illiquid instruments with unobservable prices, this may involve re-running internal models or applying alternative valuation assumptions.

The objective is to detect material pricing deviations, apply asset-specific thresholds, and flag instruments requiring further investigation. This is the first major control point in the IPV process.
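
A simplified view of this control is sketched below, with tolerance bands keyed to IFRS 13 levels. The thresholds, column names, and sample positions are illustrative assumptions, not prescribed values.

```python
import pandas as pd

# Hypothetical tolerance bands in basis points, keyed by IFRS 13 level;
# real thresholds would be asset-class and desk specific.
TOLERANCE_BPS = {1: 5, 2: 25, 3: 100}

def reconcile(positions: pd.DataFrame) -> pd.DataFrame:
    """Compare front-office marks to the independent golden copy and flag breaches."""
    df = positions.copy()
    df["deviation_bps"] = (df["fo_mark"] - df["ipv_price"]).abs() / df["ipv_price"] * 1e4
    df["tolerance_bps"] = df["ifrs13_level"].map(TOLERANCE_BPS)
    df["exception"] = df["deviation_bps"] > df["tolerance_bps"]
    return df

positions = pd.DataFrame({
    "isin": ["XS123", "XS456"],
    "ifrs13_level": [2, 3],
    "fo_mark": [99.45, 97.80],
    "ipv_price": [99.11, 99.30],
})
print(reconcile(positions)[["isin", "deviation_bps", "tolerance_bps", "exception"]])
```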

3. Exception Management

Discrepancies beyond tolerance thresholds are flagged for review. Exception management involves:

  • Investigating root causes of mismatches
  • Determining whether deviations are justified (e.g., stale vendor data, incorrect model inputs, or pricing versus valuation methodological differences)
  • Escalating and resolving issues based on materiality and ownership
  • Documenting override decisions, rationale, and approvals

Escalation paths vary with instrument risk profile, position sensitivity, and notional size. In many firms, high-impact overrides must be reviewed by valuation oversight committees. This step typically involves coordination among valuation, risk, and front-office teams — especially for Level 2 and 3 instruments. Capturing rationale supports governance and informs adjacent controls such as P&L explain or additional valuation adjustments (AVAs).
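
The sketch below shows one way to structure an exception record with an embedded audit trail and a rules-based escalation path. The routing thresholds and role names are hypothetical; actual paths would follow each firm's governance policy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IPVException:
    isin: str
    deviation_bps: float
    notional: float
    ifrs13_level: int
    status: str = "open"
    audit_trail: list = field(default_factory=list)

    def log(self, actor: str, action: str, rationale: str) -> None:
        """Record every decision with actor, timestamp, and rationale."""
        self.audit_trail.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor, "action": action, "rationale": rationale,
        })

def route(exc: IPVException) -> str:
    """Illustrative escalation rule: high-impact breaks go to the committee."""
    if exc.ifrs13_level == 3 or exc.notional > 50_000_000:
        return "valuation_oversight_committee"
    if exc.deviation_bps > 100:
        return "head_of_valuation_control"
    return "desk_analyst"

exc = IPVException("XS456", deviation_bps=151.0, notional=75_000_000, ifrs13_level=3)
exc.log("analyst_a", "investigated", "Vendor quote stale since month-end")
print(route(exc))  # -> valuation_oversight_committee
```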

4. Analytics & Feedback

Beyond resolving individual exceptions, firms leverage IPV outputs to strengthen the overall process by:

  • Analyzing exception trends by asset class, desk, or product type
  • Identifying recurring data quality or model calibration issues
  • Informing updates to tolerances, pricing models, or control policies

Over time, this feedback loop transforms IPV from a static checkpoint into a predictive, learning-oriented process. Some firms apply machine learning (ML) to detect exception patterns, quantify exposure, or flag emerging model or data risks. These insights also feed into risk-theoretical P&L frameworks under FRTB, linking IPV outcomes to capital adequacy and risk attribution.
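
As a simple illustration of the feedback loop, the snippet below aggregates a hypothetical exception log by desk and root cause. Persistent clusters point to sourcing or calibration issues rather than one-off pricing noise.

```python
import pandas as pd

# Hypothetical log of resolved exceptions accumulated across IPV cycles.
history = pd.DataFrame({
    "cycle": ["2024-01", "2024-01", "2024-02", "2024-02", "2024-02"],
    "desk": ["credit", "rates", "credit", "credit", "fx"],
    "root_cause": ["stale_vendor", "model_input", "stale_vendor", "stale_vendor", "mapping"],
})

# Recurring-cause counts by desk surface structural issues worth fixing upstream.
trends = (history.groupby(["desk", "root_cause"]).size()
          .rename("count").reset_index()
          .sort_values("count", ascending=False))
print(trends.head())
```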

5. Reporting & Governance

IPV results must be documented and communicated to stakeholders across the governance chain — valuation oversight, model governance, risk, audit, and regulators. Transparent reporting supports both assurance and accountability. Many firms also track IPV-specific KPIs — such as resolution time, override frequency, and recurrence trends — to benchmark and improve control maturity.
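
A minimal sketch of how such KPIs might be computed from an exception log follows; the column names and data are illustrative.

```python
import pandas as pd

# Hypothetical exception log with open/close timestamps and override flags.
log = pd.DataFrame({
    "opened": pd.to_datetime(["2024-02-01", "2024-02-01", "2024-02-03"]),
    "closed": pd.to_datetime(["2024-02-02", "2024-02-05", "2024-02-04"]),
    "overridden": [True, False, True],
})

kpis = {
    "avg_resolution_days": (log["closed"] - log["opened"]).dt.days.mean(),
    "override_frequency": log["overridden"].mean(),
    "open_exceptions": int(log["closed"].isna().sum()),
}
print(kpis)
```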

Together, these five steps form the backbone of a defensible IPV framework — balancing automation with expert judgment, and standardization with flexibility. A well-governed IPV process does more than validate prices — it reinforces trust in valuation governance by connecting data quality, model transparency, and risk control into a single, auditable workflow.

Where Traditional IPV Workflows Fall Short

Despite significant investment in data infrastructure, IPV often remains a patchwork of spreadsheets, emails, and disconnected tools. Most institutions rely on mature MDM platforms to consolidate vendor feeds, resolve identifiers, and distribute standardized pricing. MDMs support foundational data quality — but not the judgment-intensive workflows IPV requires.

IPV is more than data management. It’s a dynamic, exception-driven, cross-functional process — reconciling internal and external prices, applying expert judgment, managing overrides, and ensuring auditability. Much of this happens outside core systems. The result? Valuation teams often work across brittle scripts and ad hoc tools to piece together critical steps.

Data Integration Gaps 

MDMs handle vendor data — but IPV also depends on internal inputs like trader marks, model outputs, sensitivities, and trade metadata. These disconnected systems force manual reconciliation, increasing risk and reducing repeatability.

IPV-Specific Data Preparation

IPV requires more than standard transformations. Tolerance bands, override logic, and contextual enrichments from trading or risk systems often fall outside MDM pipelines. Teams build local workarounds — eroding transparency and consistency across desks and regions.

Limited Frequency and Agility  

Fragmented tooling restricts many firms to monthly IPV cycles — especially for Level 2 and 3 assets. That cadence slows responsiveness during market volatility or when onboarding new instruments. In fast-moving environments, slow controls become a source of risk.

Exception Handling Challenges

While MDMs may flag missing or inconsistent inputs, they don’t support IPV-specific exceptions, which demand judgment and traceability. In most firms, resolution still happens via email, shared folders, and trackers — slowing investigations and complicating audits.

Lack of Independent Valuation Capabilities

IPV isn’t just about comparing marks — it’s about verifying them independently. For complex assets, that means re-running models: rebuilding curves, applying methodologies like Black-Scholes or DCF, and stress-testing assumptions. MDMs can’t support these functions natively, forcing teams into fragmented, hard-to-govern tools.

Insufficient Analytical Depth 

MDMs are built for ingestion and distribution — not diagnostics. Yet IPV demands more: detecting outliers, analyzing price discrepancies, and testing model sensitivity. Without embedded analytics or ML, teams must export data — introducing risk. Lessons from exceptions often stay siloed, limiting control evolution.

Fragmented Documentation and Governance

Overrides, rationales, and pricing decisions are often scattered — across emails, spreadsheets, PDFs, or meeting notes. This fragmentation complicates audits and creates governance blind spots. Without version control, it’s difficult to trace who changed what, when, or why — undermining defensibility and control assurance.

MDMs remain essential for sourcing and standardizing pricing data — but they lack the orchestration, valuation logic, and analytical infrastructure to manage IPV end to end.

This is the operational blind spot Dataiku was built to close — transforming fragmented workflows into governed, intelligent, and auditable processes.

Orchestrating IPV With Dataiku: Embedding Governance and Control at the Core

Figure 1: A Best-of-Breed Framework for IPV Systems, Functions, and Controls

IPV connects the most sensitive points in the valuation lifecycle — where front-office pricing, market data, valuation control, and risk oversight converge. While each domain is supported by specialized systems, the operational layer that ties them together is often fragmented, manual, or entirely absent.

This is where Dataiku delivers value — not by replacing existing infrastructure, but by enabling the governed orchestration layer where institutional control truly happens: preparing and joining internal and external data, embedding analytics, managing exceptions, documenting decisions, and learning from prior cycles. Rather than imposing a one-size-fits-all product, Dataiku provides a modular, audit-ready platform that lets teams codify their own IPV logic — aligned to product mix, materiality thresholds, and governance structures.

A Purpose-Built IPV Orchestration Layer

MDMs consolidate standardized vendor data. Valuation engines produce theoretical marks. Risk systems calculate exposures. But none are built to manage the end-to-end IPV lifecycle — from reconciling front-office inputs to independently validating prices, resolving exceptions, and maintaining full audit traceability.

That’s the orchestration gap Dataiku fills. In Figure 1, each column represents a phase of the IPV lifecycle (from sourcing to publication), and each row reflects a core control function. Dataiku brings oversight, consistency, and intelligence across this matrix — replacing brittle spreadsheets and siloed trackers with governed, transparent workflows.

Agile Data Preparation — Beyond MDM

While MDMs are essential for curating vendor feeds, IPV depends on blending them with internal inputs — trader marks, model outputs, desk-level parameters, and sensitivities — often scattered across disparate systems.

With Dataiku, teams can programmatically ingest, transform, and join internal and external sources — standardizing formats, mapping identifiers (e.g., ISINs, desk codes), enriching records with context (e.g., booking location, timestamps), and preserving full data lineage. What once lived in spreadsheets becomes governed, reusable, and scalable.
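
As a sketch, the blending step inside a Dataiku Python recipe might look like the following. The dataset and column names are hypothetical project inputs; the Dataset API calls shown are the standard ones.

```python
import dataiku
import pandas as pd

# Inside a Dataiku Python recipe; dataset names are hypothetical project inputs.
vendor_prices = dataiku.Dataset("vendor_golden_copy").get_dataframe()
trader_marks = dataiku.Dataset("fo_trader_marks").get_dataframe()

# Align identifiers and flag positions lacking an independent price.
trader_marks["isin"] = trader_marks["isin"].str.strip().str.upper()
joined = trader_marks.merge(vendor_prices, on="isin", how="left", indicator=True)
joined["missing_independent_price"] = joined["_merge"] == "left_only"

# Writing through the Dataset API keeps lineage visible in the Flow.
dataiku.Dataset("ipv_reconciliation_input").write_with_schema(joined.drop(columns="_merge"))
```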

Structured Exception Handling — With Intelligence

IPV exceptions aren’t just data breaks — they often require judgment, recalculation, and escalation. With Dataiku, discrepancies flow through auditable, configurable workflows: thresholds are dynamic, pricing assumptions can be toggled, and overrides are routed to the right approvers — with full traceability.

GenAI copilots suggest override rationale, surface similar cases, and pre-fill documentation — helping teams triage while preserving control quality. The result: faster resolution with full oversight.

Embedded Analytics and Independent Model-Based Valuation

True IPV means more than comparing values — it means verifying them independently. Dataiku enables teams to embed pricing logic directly into workflows: rebuild curves, apply models like Black-Scholes or DCF, and run shocks — without exporting data.
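
As a concrete illustration, a vanilla option mark can be re-derived and shock-tested with a textbook Black-Scholes implementation; the inputs below are purely illustrative.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, t, r, vol):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    N = NormalDist().cdf
    return spot * N(d1) - strike * exp(-r * t) * N(d2)

# Independent re-pricing with a shocked volatility to bound model uncertainty.
base = black_scholes_call(spot=100, strike=105, t=0.5, r=0.03, vol=0.20)
shocked = black_scholes_call(spot=100, strike=105, t=0.5, r=0.03, vol=0.25)
print(f"base: {base:.2f}, vol +5pts: {shocked:.2f}")
```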

Analytics and ML capabilities bring diagnostics into the core workflow: detect outliers, cluster recurring issues, and score exception risk. Reconciliation becomes insight-driven.
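
One minimal sketch of that kind of diagnostic uses scikit-learn's IsolationForest on synthetic, illustrative position features (deviation versus the golden copy, quote staleness, position size).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic feature matrix per position, with a few planted outliers.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 3))
features[:5] += 4

model = IsolationForest(contamination=0.02, random_state=0).fit(features)
scores = model.decision_function(features)   # lower = more anomalous
flags = model.predict(features)              # -1 marks candidate exceptions
print(f"flagged {int((flags == -1).sum())} positions for review")
```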

IPV That Learns and Improves With Every Cycle

Every IPV cycle generates signals — where mismatches persist, where inputs drift, where overrides cluster. Dataiku captures this institutional knowledge, helping teams:

  • Analyze exception trends across issuers, desks, or models
  • Refine thresholds and pricing rules over time
  • Predict high-risk positions using ML
  • Link exception outcomes to model calibration or data sourcing gaps

This turns IPV from a static control into a self-improving, diagnostic engine.

Governance and Documentation by Design

With Dataiku, governance isn’t bolted on — it’s embedded. Every override, model run, and exception review is automatically logged. Reports pull directly from workflows — ensuring oversight matches execution.

GenAI copilots help draft summaries, tag exception patterns, and generate committee-ready documents — with human-in-the-loop sign-off. The result: audit readiness by default, not by extra effort.

Why It Matters

Dataiku doesn’t replicate what your core systems already do well. It completes what they leave undone — orchestrating fragmented logic, embedding valuation analytics, and transforming exception management into a governed, intelligent process.

With Dataiku, firms don’t just run IPV. They orchestrate it — with agility, transparency, and control that scales.

Future-Proofing Valuation Controls 

As markets grow more complex and regulatory expectations rise, IPV must evolve — from a periodic check to a continuous, intelligence-driven control. Fragmented workflows and rigid processes no longer meet the demands of modern valuation oversight.

Firms need IPV that is scalable, auditable, and adaptive — able to absorb data growth, volatility, and regulatory change without friction.

This is where Dataiku delivers: unifying exception workflows, pricing models, analytics, and documentation in a governed, collaborative environment.

IPV becomes more than validation — it becomes a signal of trust, credibility, and control.
