Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.

  • The committee focuses on policies, guidelines, tooling, and use cases by industry

  • Survey and contact existing open source Trusted AI projects and invite them to join LF AI efforts

  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI

  • Create a document that describes the basic concepts and definitions related to Trusted AI and standardizes the vocabulary/terminology

Mailing List

If you are interested in getting involved, please email info@lfai.foundation to be added to the mailing list.

Current Participants

  • AT&T, Amdocs, Ericsson, IBM, Orange, Tech Mahindra, Tencent

Chairs

Name            Region          Organization   Contact Info
Animesh Singh   North America   IBM            singhan@us.ibm.com
Souad Ouali     Europe          Orange         souad.ouali@orange.com
Jeff Cao        Asia            Tencent        jeffcao@tencent.com


Working Group

Name                Organization    Contact Info
Ofer Hermoni        Amdocs          ofer.hermoni@amdocs.com
Mazin Gilbert       AT&T            mazin@research.att.com
Alka Roy            AT&T            AR6705@att.com
Mikael Anneroth     Ericsson        mikael.anneroth@ericsson.com
Jim Spohrer         IBM             spohrer@us.ibm.com
Maureen McElaney    IBM             mmcelaney@us.ibm.com
Susan Malaika       IBM             malaika@us.ibm.com
Francois Jezequel   Orange          francois.jezequel@orange.com
Nat Subramanian     Tech Mahindra   Natarajan.Subramanian@Techmahindra.com
Han Xiao            Tencent         hanhxiao@tencent.com


Sub Categories

  • Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations (see the illustrative sketch after this list)

  • Robustness: Methods to detect alterations or tampering with datasets and models, including alterations from known adversarial attacks

  • Explainability: Methods to make AI model outcomes and decision recommendations understandable and interpretable to the relevant personas/roles, including ranking and debating results/decision options

  • Lineage: Methods to ensure the provenance of datasets and AI models, including reproducibility of generated datasets and AI models
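
To make the Fairness category concrete, here is a minimal illustrative sketch in Python of one common bias check, the disparate impact ratio. The dataset, the column names ("gender", "hired"), and the 0.8 review threshold are assumptions for the example only, not a committee deliverable.

    import pandas as pd

    # Hypothetical dataset: a binary hiring outcome and a protected attribute.
    data = pd.DataFrame({
        "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
        "hired":  [1,   0,   0,   1,   1,   1,   0,   1],
    })

    # Favorable-outcome rate for each group.
    rates = data.groupby("gender")["hired"].mean()

    # Disparate impact ratio: the unprivileged group's rate divided by the
    # privileged group's rate; values below roughly 0.8 are often flagged for review.
    disparate_impact = rates["F"] / rates["M"]
    print(f"Disparate impact ratio: {disparate_impact:.2f}")

Open source Trusted AI toolkits of the kind the committee is surveying implement this and many other fairness metrics, along with mitigation algorithms.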

Projects

If you are interested in getting involved, please email info@lfai.foundation for more information.