Theme: Towards Consistent, Reliable, Explainable, and Safe LLMs


Date: August 25, 2024

Location: Barcelona, Spain

Attendance: In-person

Overview


This workshop seeks to expedite efforts at the intersection of Symbolic Knowledge and Statistical Knowledge inherent in LLMs. The objective is to establish quantifiable methods and acceptable metrics for addressing consistency, reliability, and safety in LLMs. Simultaneously, we seek unimodal or multimodal NeuroSymbolic solutions to mitigate LLM issues through context-aware explanations and reasoning. The workshop also focuses on critical applications of LLMs in health informatics, biomedical informatics, crisis informatics, cyber-physical systems, and legal domains. We invite submissions that present novel developments and assessments of informatics methods, including those that showcase the strengths and weaknesses of utilizing LLMs.

Topics of Interest


Theme: Improving LLMs with Consistency, Reliability, Explainability, and Safety

  • Developing methods and metrics to enhance consistency and reliability in LLMs.
  • Developing or adapting safety certification frameworks for LLMs in health, cybersecurity, legal, and other domains.
  • Investigating various types of general-purpose or domain-specific knowledge (such as procedural and declarative), and the fusion of knowledge and drawing of inferences in Large Language Models (LLMs) for decision-making purposes.
  • Comparing how humans and machines semantically represent and process information, including abstraction levels, lexical structure, property inheritance, generalization, geometric approaches to meaning representation, mental connections, and the retrieval of meaning.
  • Innovative approaches and standards for achieving interpretability, explainability, and transparency, with a focus on healthcare, in both quantitative and qualitative contexts.
  • Application of methodologies from diverse fields (such as neuroscience or computer vision) to scrutinize and assess LLMs and foundation models.
  • Applications of LLMs, foundation models, conversational systems, and generative AI in physical and mental healthcare as well as wellbeing and nutrition.
  • Development of open-source tools for analyzing, visualizing, or explaining LLMs.
  • Metrics for safety assessment of LLMs.
  • Methods to make LLMs inherently safe from adversarial attacks.
  • Metrics for hallucination in the presence or absence of ground truth.
  • Experiences from real-world deployments and resulting datasets.

NeuroSymbolic and Knowledge-infused Learning

  • Retrieval-augmented LLMs for health, legal, crisis, and other applications.
  • Enhancing retrieval augmentation through structured background knowledge.
  • Knowledge-infused Reinforcement Learning.
  • Knowledge-injected foundational language models.
  • Automated Rule Learning and Inference in health, legal, crisis, cybersecurity, and other domains.
  • User-Explainable Machine Learning and Deep Learning.
  • User Safety in Conversational Systems.
  • Bias-awareness or Debiasing with Context in Deep Learning.
  • User Controllability in Deep Learning using rules, constraints, and domain-specific guidelines.

Call For Papers


Submission Website: OpenReview

Submission Deadline: May 25, 2024
Author Notification: June 21, 2024
Camera Ready Submission: June 30, 2024

We welcome original research papers in four types of submissions:

  1. Full research papers (9 pages including references)
  2. Position papers (4-6 pages including references)
  3. Short papers (6 pages including references)
  4. Demo papers (4-6 pages including references)

A skilled and multidisciplinary program committee will evaluate all submitted papers, focusing on the originality of the work and its relevance to the workshop's theme. Submissions must follow the KDD 2024 Conference Template and will undergo a double-blind review process. More details regarding submission can be found at https://kdd2024.kdd.org/research-track-call-for-papers/. Selected papers will be presented at the workshop and published open-access in the workshop proceedings through CEUR, where they will be available as archival content.

Submission Instructions

  1. Dual submission policy: This workshop welcomes ongoing and unpublished work, and will also accept papers that are under review or have recently been accepted at other venues.
  2. OpenReview Moderation Policy: Authors are advised to register on OpenReview using their institutional email addresses when submitting their work to the KDD KiL Workshop. Using personal email addresses (such as "gmail.com") may delay the submission by up to two weeks while OpenReview verifies the account, which could cause inconvenience during the busy submission period.

Organizing Committee


Manas Gaur

University of Maryland Baltimore County, USA

(Primary Contact)

Email: manas@umbc.edu

Efthymia Tsamoura

Samsung Research, Cambridge, UK

Email: efi.tsamoura@samsung.com

Edward Raff

Booz Allen Hamilton, USA

Email: Raff_Edward@bah.com

Nikhita Vedula

Amazon, USA

Email: veduln@amazon.com

Srinivasan Parthasarathy

Ohio State University, USA

Email: srini@cse.ohio-state.edu

Frequently Asked Questions