


Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee work develops. 

  • The focus of the committee is on policies, guidelines, tooling, and use cases by industry

  • Survey and contact current open source Trusted AI related projects to join LF AI & Data efforts 

  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data

  • Create a document that describes the basic concepts and definitions in relation to Trusted AI and also aims to standardize the vocabulary/terminology



Team Calendars

Trusted AI Committee Monthly Meeting - 4th Thursday of the month (additional meetings as needed)

  • 10 PM Shenzhen China
  • 7:30 PM India
  • 4 PM Paris
  • 10 AM ET USA
  • 7 AM PT USA (updated for daylight savings time as needed)

Zoom channel:

Committee Chairs

  • Andreas Fehlner
  • Susan Malaika (listed twice, with different email addresses)
  • Suparna Bhattacharya - HPE, Asia (suparna.bhattacharya@hpe.com)
  • Adrian Gonzalez Sanchez - HEC Montreal / Microsoft / OdiseIA (listed twice, with different email addresses)

Initial Organizations Participating: IBM, Orange, AT&T, Amdocs, Ericsson, TechM, Tencent



Committee Members

  • Ofer Hermoni
  • Mazin Gilbert
  • Alka Roy - Responsible Innovation Project
  • Mikael Anneroth
  • Alejandro Saucedo - The Institute for Ethical AI and Machine Learning
  • Jim Spohrer - Retired, IBM
  • Saishruthi Swaminathan
  • Susan Malaika (also listed as sumalaika, with a different email address)
  • Romeo Kienzler
  • Francois Jezequel
  • Nat Subramanian - Tech Mahindra
  • Han Xiao
  • Wenjing Chu
  • Yassi Moghaddam
  • Animesh Singh
  • Souad Ouali - Orange (souad.ouali@orange.com)

Sub Categories

  • Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations
  • Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks
  • Explainability: Methods to enhance understandability/interpretability by persona/roles in process of AI model outcomes/decision recommendations, including ranking and debating results/decision options
  • Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
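The fairness pillar above is the focus of projects such as AI Fairness 360. As a minimal, self-contained sketch of the kind of group-level bias metric such toolkits compute (the function names and toy data here are hypothetical illustrations, not AIF360's API):

```python
# Illustrative sketch: two common fairness metrics, statistical parity
# difference and disparate impact, computed over hypothetical outcomes.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def fairness_metrics(privileged, unprivileged):
    """Compare favorable-outcome rates between two groups."""
    p_rate = selection_rate(privileged)
    u_rate = selection_rate(unprivileged)
    return {
        # Ideal value 0: both groups receive favorable outcomes equally often.
        "statistical_parity_difference": u_rate - p_rate,
        # Ideal value 1; values below ~0.8 are often flagged as potential bias.
        "disparate_impact": u_rate / p_rate,
    }

# Toy outcomes: 1 = favorable decision, 0 = unfavorable.
privileged_outcomes = [1, 1, 1, 0, 1, 1, 0, 1]    # rate 6/8 = 0.75
unprivileged_outcomes = [1, 0, 0, 1, 0, 0, 1, 0]  # rate 3/8 = 0.375

metrics = fairness_metrics(privileged_outcomes, unprivileged_outcomes)
print(metrics)  # statistical_parity_difference = -0.375, disparate_impact = 0.5
```

Toolkits such as AIF360 extend this idea with many more metrics and with mitigation algorithms that act on the data, the model, or its predictions.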


Meeting Content (minutes / recording / slides / other)



8 June 2023
  • Open Voice Network follow-on (20 minutes) - @Lucy Hyde invites John Stine and Open Voice Network folks, e.g., Nathan Southern
25 May 2023
Part 0 - Metadata / Lineage / Provenance topic from Suparna Bhattacharya & Aalap Tripathy & Ann Mary Roy & Professor Sourangshu Bhattacharya & Team
Part 1 - Open Voice Network - Introductions  
  • Open Voice Network - voice assistance worthy of user trust, created in the inclusive, open-source style you'd expect from a community of The Linux Foundation.

Part 2 - Identify small steps/publications to motivate concrete actions over 2023 in the context of these pillars:
Technology | Education | Regulations | Shifting power : Librarians / Ontologies / Tools
Possible Publications / Blogs
  • Interplay of Big Dreams and Small Steps
  • Inventory of trustworthy tools and how they fit into the areas of the AI Act; metadata, lineage, and provenance tools in particular
  • Giving power to people who don't have it - Phaedra @Phaedra Boinodiris and Ofer @Ofer Hermoni
    • Why give power; the vision, including why it is important to everyone, including companies
  • More small steps to take / blogs-articles to write
Part 3 - Review the goals of the committee, taken from the overview below, including whether we want to go ahead with badges
  • Overview
  • Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee work develops.
  • Focus of the committee is on policies, guidelines, tooling and use cases by industry
  • Survey and contact current open source Trusted AI related projects to join LF AI & Data efforts
  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data
  • Create a document that describes the basic concepts and definitions in relation to Trusted AI and also aims to standardize the vocabulary/terminology
Part 4 - Any highlights from the US Senate Subcommittee on the Judiciary - Oversight on AI hearing

Part 5 - Any Other Business

Recording (video)

Attached slides: Sourangshu Bhattacharya_LFAI-presentation (1).pdf

26 Apr 2023

Join the Trusted AI Committee at the LF-AI for the upcoming session on April 27 at 10am Eastern where you will hear from:

  1. Adrian Gonzalez Sanchez: From Regulation to Realization – Linking ACT (European Union AI Act) to internal governance in companies
  2. Phaedra Boinodiris: Risks of generative AI and strategies to mitigate
  3. All: Explore what was presented and suggest next steps
  4. All: Update the Trusted AI Committee list
  5. Suparna Bhattacharya: Call to Action 

We all have prework to do! Please listen to these videos:

We look forward to your contributions.

Recording (video)

Recording (audio)


Proposed agenda (ET)

  • 10am - Kick off Meeting
    • Housekeeping items:
      • Wiki page update
      • Online recording
      • Others
  • 10:05 - Generative AI and New Regulations - Adrian Gonzalez Sanchez 
    • Presentation (PDF): 20230406 - Generative AI and New Regulations - Adrian Gonzalez Sanchez.pdf

  • 10:15 - Discussion
  • 10:30 - Formulate any next steps
  • 10:35 - News from the open source Trusted AI projects
  • 10:45 - Any other business

Call Lead: Susan Malaika 


Invitees: Beat Buesser, Phaedra Boinodiris, Alexy Khrabov, David Radley, Adrian Gonzalez Sanchez

Optional: Ofer Hermoni, Nancy Rausch, Alejandro Saucedo, Sri Krishnamurthy, Andreas Fehlner, Suparna Bhattacharya

Attendees: Beat Buesser, Phaedra Boinodiris, Alexy Khrabov, Adrian Gonzalez Sanchez, Ofer Hermoni, Andreas Fehlner


  • Phaedra: Consider large language model opportunities and risks in the context of trusted AI, and how to mitigate risk
  • Adrian: European Union AI Act
  • Suparna: What do foundation models mean in general, where language models are one example? Another related area is data-centric trustworthy AI in this context
  • Alexy: Science - more work on understanding in a scientific way (e.g., validation in a medical context); software engineering is ad hoc, driven by practice
  • Fast Forward: what's next for ChatGPT

  • Andreas: File formats for models - additional needs for trustworthy AI, in addition to lineage
  • Idea: Create a point of view (PoV) on Trustworthy AI for Generative Applications, taking the AI Act approach
  • Gaps in the EU AI Act: useful source

Next steps

  • Set up a series of calls through the LF-AI Trusted AI mechanisms to have the following presenters
  • Run 3 sessions with presentations
  • Then create a presentation and/or document
  • Create the synthesis: A Point of View on Trustworthy AI for Generative Applications

  • Occasionally the open source project leaders are invited to the call …
  • ACTION: Adrian has scheduled a call on Monday, October 31, 2022 to determine next steps for the committee due to a change in leadership; please connect with Susan if you would like to be added to the call

The group met once a month, on the third Thursday of each month at 10am US Eastern. See notes below for prior calls. Activities of the committee included:

  • Reviewing all trusted AI related projects at the LF-AI and making suggestions, e.g.:
    • AI Fairness 360
    • AI Explainability 360
    • Adversarial Robustness Toolbox
    • Related projects such as Egeria, Open Lineage, etc.
  • Reviewing the activities of the subgroups - known as working groups - and making suggestions
    • MLSecOps WG
    • Principles WG (completed)
  • Highlighting new projects that should/could be suitable for the LF-AI
  • Identifying trends in the industry in Trusted AI that should be of interest to the LF-AI
  • Initiating Working Groups within the Trusted AI Committee at the LF-AI to address particular issues

Reporting to:

  • The LF-AI Board of Governors on the activities of the Committee and taking guidance from the board - next meeting on Nov 1, 2022
  • The LF-AI TAC - making suggestions to the TAC and taking guidance


  • Should the Trusted AI Committee continue to meet once a month with similar goals?
  • Who will:
    • Host the meetings?
    • Identify the speakers?
    • Make sure all is set with speakers and community?
  • Identify the overall program and approach for 2023 - should that be the subject of the next Trusted AI Committee call?
  • Should the Trusted AI Committee take an interest in the activities of the PyTorch Consortium?

Invitees and interested parties on the call on October 31, 2022

  • HPE: Suparna Bhattacharya
  • IBM: Beat Buesser, David Radley, Christian Kadner, Ruchi Mahindru, Susan Malaika, Cheranellore (Vasu) Vasu, William Bittles
    • Beat leads the Adversarial Robustness Toolbox, a graduated project at the LF-AI
    • David works on the Egeria project, a graduated project at the LF-AI
    • William is involved in Open Lineage
    • Susan co-led the Principles WG, a subgroup of the Trusted AI Committee (work completed)
  • Institute for Ethical AI: Alejandro Saucedo - also at Seldon; leads the MLSec Working Group, a subgroup of the Trusted AI Committee
  • QuantUniversity: Sri Krishnamurthy
  • SAS: Nancy Rausch - currently chair of the LF-AI and Data TAC
  • Trumpf: Andreas Fehlner


Recording (video)

Recording (audio)

Principles report

Recording (audio)

Recording (video)

LFAI Trusted AI Committee Structure and Schedule: Animesh Singh

Real World Trusted AI Usecase and Implementation in Financial Industry: Stacey Ronaghan

AIF360 Update: Samuel Hoffman

AIX360 Update: Vijay Arya

ART Update: Beat Buesser


Recording (audio)

Recording (video)

Setting up Trusted AI TSC

Principles Update

Coursera Course Update

Calendar discussion - Europe and Asia Friendly


Walkthrough of LFAI Trusted AI Website and github location of projects

Trusted AI Video Series

Trusted AI Course in collaboration with University of Pennsylvania


Z-Inspection: A holistic and analytic process to assess Ethical AI - Roberto Zicari - University of Frankfurt, Germany

Age-At-Home - an exemplar of Trusted AI, David Martin, Hacker in Charge at

Plotly Demo with SHAP and AIX360 - Xing Han,

  • Explain the Tips dataset with SHAP: Article, Demo
  • Heart Disease Classification with AIX360: Post, Demo
  • Community-made SHAP-to-dashboard API: Post

Swiss Digital Trust Label - short summary - Romeo Kienzler, IBM

  • Announced by Swiss President Doris Leuthard at WEF 2nd Sept. 2019
  • Geneva based initiative for sustainable and fair treatment of data
  • Among others, these companies are already involved: Google, Uber, IBM, Microsoft, Facebook, Roche, Mozilla, UBS, Credit Suisse, Zurich, Siemens, IKRK, EPFL, ETH, UNO
  • Credit Suisse, IBM, Swiss Re, SBB, Kudelski, and the Canton of Waadt to deliver a pilot

Watson OpenScale and Trusted AI - Eric Martens, IBM

LFAI Ethics Training course


Attached file: LFAI TAIC 20200723 Openscale and Video zoom_0.mp4


  • Montreal AI Ethics Institute Presentation

  • Status on Trusted AI projects in Open Governance

  • Principles Working Group update - Susan Malaika

  • Trusted AI Committee activities summarization for Governing board - Animesh Singh

  • Swaminathan Chandrasekaran, KPMG Managing Director would be talking about how they are working with practitioners in the field on their AI Governance and Trusted AI needs.

  • Susan Malaika from IBM will be giving an update from the Principles Working Group, and progress there.

  • Saishruthi Swaminathan to do a presentation on AI Transparency in Marketplace

  • Francois Jezequel to present on Orange Responsible AI initiative.

  • Andrew and Tommy did a deep dive in Kubeflow Serving and Trusted AI Integration

  • Principles Working Group discussion

  • AI for People is focused on the intersection of AI and Society with a lot of commonality with the focus areas of our committee. Marta will be joining to present their organization and what they are working on.

  • Proposal of Use Case to be tested by AT&T using Apache Nifi and AIF360 (Romeo)

  • Introduction to baseline data set for AI Bias detection (Romeo)

  • Exemplar walk-through: retrospective bias detection with Apache Nifi and AIF360 (Romeo)

  • Principles Working Group Status Update (Susan)

Discuss AIF360 work around SKLearn community (Samuel Hoffman, IBM Research demo)

Discuss "Many organizations have principles documents, and a bit of backlash - for not enough practical examples."


  • Watch updates on production ML with Alejandro Saucedo done with Susan Malaika on the Cognitive Systems Institute call:

Notes from the call:

Proposed Agenda

  • Meeting notes are now in GitHub here:

  • Since we don't record and share our committee meetings, should our committee channel in Slack be made private for asynchronous conversation outside these calls?

  • Introduction of MLOps  in IBM Trusted AI projects

  • Design thinking around integrating Trusted AI projects in Kubeflow Serving

Notes from the call:

Proposed Agenda:

  • Jim to get feedback from LFAI Board meeting

  • Romeo to demo AIF360-Nifi integration + feedback from his talk at OSS Lyon

  • Alka to present AT&T Working doc

  • Discuss holiday week meeting potential conflicts (28 Nov - US Holiday, 26 Dec - Day after Christmas)

Notes from call:


Ofer, Alka, Francois, Nat, Han, Animesh, Jim, Maureen, Susan, Alejandro


  • Animesh walked through the draft slides (to be presented in Lyon to LFAI governing board about TAIC)

  • Discussion of changes to make

  • Discussion of members, processes, and schedules


  • Jim will put slides in Google Doc and share with all participants

  • Susan is exploring a slack channel for communications

  • Trust and Responsibility; color and icons; add Amdocs and Alejandro's Institute

  • Next call Cancelled (31 October) as many committee members will be at OSS EU and TensorFlow World

Notes from the call:


Animesh Singh (IBM), Maureen McElaney (IBM), Han Xiao (Tencent), Alejandro Saucedo, Mikael Anneroth (Ericsson), Ofer Hermoni (Amdocs)

Animesh will check with Souad Ouali to ensure Orange wants to lead the Principles working group and host regular meetings. Committee members on the call were not included in the email chains that occurred so we need to confirm who is in charge and how communication will occur.

The Technical working group has made progress but nothing concrete to report.

A possible third working group could form around AI Standards.

Notes from the call:


Attendees: Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K.

  • Goals defined for the meeting:

Working Group Names and Leads have been confirmed:

  • Principles, lead: Souad Ouali (Orange France) with members from Orange, AT&T, Tech Mahindra, Tencent, IBM, Ericsson, Amdocs.
  • Technical, lead: Romeo Kienzler (IBM Switzerland) with members from IBM, AT&T, Tech Mahindra, Tencent, Ericsson, Amdocs, Orange.
  • Working groups will have a weekly meeting to make progress. The first readout to the LF AI governing board will be Oct 31 in Lyon, France.
  • The Principles team will study the existing material from companies, governments, and professional associations (IEEE), and come up with a set that can be shared with the technical team for feedback as a first step. We need to identify and compile the existing materials.
  • The Technical team is working on an Acumos + Angel + AIF360 integration demonstration.

Possible Discussion about third working group

Discussion about LFAI day in Paris

More next steps

Will begin recording meetings in future calls.

Notes from call: