ECIR-2026 Workshop

The First Workshop on Information Retrieval for Accountability and Integrity (IRAI)

A half-day pilot workshop exploring how IR can evaluate forward-looking statements, verify commitments, and foster evidence-based accountability across public and private domains.

Focus: Accountability & Integrity
IR × NLP
Half-day • Interactive
Proceedings via CEUR (opt-in)

At a Glance

When: April 2nd, 2026
Where: Delft, The Netherlands
Duration: Half-day
Submission Site: EasyChair
Topics: Claim & Promise Verification · Forecast Evaluation · Corporate & Policy Accountability · Responsible AI

Motivation

Information systems shape public discourse, decisions, and trust—yet we lack systematic ways to evaluate the accuracy of forward-looking statements (e.g., campaign promises, corporate forecasts). Media coverage is selective, standards are uneven, and the signal is buried in noise. The result: accountability gaps and eroded confidence.

IRAI brings IR and NLP communities together to design frameworks and tools that retrieve evidence, synthesize signals over time, and assess the fulfillment and reliability of claims and commitments. It complements ECIR’s mission by tackling a pressing, real-world challenge with societal impact.

Why now? New multilingual datasets and shared tasks (e.g., corporate promise verification) make it timely to connect retrieval, aggregation, and evaluation methods from IR with accountability questions at scale.

What IRAI Aims to Do

  • Evaluate the accuracy of forecasts and predictions by individuals, organizations, or systems.
  • Assess fulfillment of commitments (political promises, corporate goals, public policies).
  • Identify patterns of exaggeration or accountability gaps in public discourse.
  • Promote transparency through evidence-based assessments and reproducible methodologies.

IRAI aspires to bridge NLP and IR, fostering shared benchmarks, methods, and open conversations.

Theme & Purpose

Core Theme

Applying IR to enhance accountability and integrity in public and private domains.

Emphasis on forward-looking statements and longitudinal evidence retrieval.

What Makes IRAI Different

  • Focus on using IR to evaluate claims (not only making IR systems fair/transparent).
  • Bridges IR methods with policy, regulation, journalism, and industry practice.

Complement to ECIR

Centers the ethical and societal dimensions of IR, with concrete applications for social good.

Submission & Selection

Call for Papers

  • Short research papers, position papers, datasets, demos, or work-in-progress relevant to IRAI themes.
  • High-quality submissions not selected for talks may be included in lightning discussion sessions.

Proceedings via CEUR (authors may opt out).

Review & Criteria

  • Each paper reviewed by at least two PC members.
  • Selection balances quality, relevance, and topical diversity.
  • Post-workshop overview to be shared on arXiv and submitted to ACM SIGIR Forum.

Formats

  • Template: Follow Springer's author guidelines and use the official LNCS proceedings templates, for either LaTeX or Word, available at https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines (a minimal LaTeX sketch follows this list).
  • Length (including figures/tables; excluding references and appendices):
    • Research paper (Long): up to 12 pages
    • Work-in-progress (Short): up to 6 pages
  • File: PDF only. Double-blind review.
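
Below is a minimal sketch of what an anonymized submission skeleton might look like, assuming the llncs class and splncs04 bibliography style from Springer's LNCS templates linked above; all titles, names, and file names are placeholders, and authors should defer to the official templates and guidelines.

```latex
% Minimal sketch of an anonymized LNCS-format submission (placeholders only).
% Assumes the llncs class and splncs04 style from Springer's LNCS templates.
\documentclass{llncs}

\begin{document}

\title{Placeholder Title of an IRAI Submission}
% Keep the author block anonymous for double-blind review.
\author{Anonymous Author(s)}
\institute{Anonymous Institution}
\maketitle

\begin{abstract}
One-paragraph abstract summarizing the contribution.
\end{abstract}

\section{Introduction}
Body text: up to 12 pages for long papers, 6 for short papers
(excluding references and appendices).

% Bibliography style shipped with the LNCS templates.
\bibliographystyle{splncs04}
% \bibliography{references}  % hypothetical references.bib

\end{document}
```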

Topics of Interest

  • Claim/promise retrieval and verification; forecast accuracy estimation
  • Longitudinal & cross-temporal IR; evidence aggregation
  • Corporate, policy, and societal accountability via IR/NLP
  • Datasets, benchmarks, and evaluation protocols
  • Responsible/Trustworthy AI for accountability & integrity

Important Dates

  • Paper submission: February 1st, 2026
  • Notification: February 21st, 2026
  • Camera-ready (optional CEUR): March 1st, 2026
  • Workshop: April 2nd, 2026

Submission Site


EasyChair

Contact

Questions about scope or format? Email the organizers:

c.c.chen@acm.org

Organizers

Chung-Chi Chen

AIST, Japan

Researcher focusing on forward-looking statements and scenario planning in finance; co-founded ACL SIG-FinTech; co-organizer of FinNLP/FinWeb and shared tasks (FinNum, FinArg, PromiseEval). SIGIR Early Career Researcher Award (Community Engagement).

c.c.chen@acm.org · homepage

Juyeon Kang

3DS Outscale – Dassault Systèmes, France

Data Science team lead; PhD in NLP (2011); industry experience across the aeronautics, naval, public opinion, and FinTech/RegTech sectors; co-organizer of shared tasks (FinSBD, FinTOC, FinSim, ML-ESG, PromiseEval).

juyeon.kang@outscale.com · LinkedIn

Anaïs Lhuissier

3DS Outscale – Dassault Systèmes, France

Data & Language Analyst; end-to-end AI data management; ESG expertise in FinTech/RegTech; designs multilingual guidelines and shared tasks.

Dittaya Wanvarie

Chulalongkorn University, Thailand

Assistant Professor in Math & CS; works on NLP, IR, and ML; current project on extracting knowledge from listed companies' annual reports.

Min-Yuh Day

National Taipei University, Taiwan

Professor and Director of the FinTech & Green Finance Center; research in AI/GenAI, ESG, green FinTech, big data, and NLP for finance and sustainability; serves as an Associate Editor and is active in industry collaboration.

myday@gm.ntpu.edu.tw · homepage

Hiroya Takamura

AIST, Japan

Research Team Leader (Knowledge & Information Research Team). Interests: sentiment analysis, summarization, and generation. Has served as an area chair and PC member for major NLP venues.

takamura.hiroya@aist.go.jp · homepage

Yohei Seki

University of Tsukuba, Japan

Professor; research in citizen sentiment and social information access; ICADL 2022 best paper runner-up; NTCIR multilingual opinion analysis co-organizer; co-editor of smart city standards (ISO/IEC 30146, 30145-3).

yohei@slis.tsukuba.ac.jp · lab

Contact

General Inquiries

Primary host (onsite): Chung-Chi Chen

c.c.chen@acm.org