Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations
Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks
Explainability: Methods to enhance understandability/interpretability by persona/roles in process of AI model outcomes/decision recommendations, including ranking and debating results/decision options
Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
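The fairness focus above can be illustrated with a minimal disparate-impact check on a toy dataset. This is a hedged plain-Python sketch, not AIF360's API; the toy data and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions.

```python
# Minimal disparate-impact check: ratio of favorable-outcome rates
# between an unprivileged and a privileged group. Values well below
# 1.0 (commonly < 0.8, the "four-fifths rule") suggest possible bias.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable rates; 1.0 means parity between groups."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy data: 1 = loan approved, 0 = denied (illustrative only)
group_a = [1, 0, 0, 1, 0, 0, 0, 0]   # unprivileged: 25% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # privileged:   75% approved

di = disparate_impact(group_a, group_b)
print(f"disparate impact = {di:.3f}")  # 0.25 / 0.75 = 0.333
print("possible bias" if di < 0.8 else "within four-fifths rule")
```

AIF360 (AI Fairness 360, mentioned below) ships this and many related metrics in library form; the sketch only shows the shape of the calculation.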
Quick brainstorm on possibilities for ONNX metadata (led by @Suparna Bhattacharya & @Saurabh Tangri, with volunteer Martin Foltin; need to get him on the Slack if possible)
Blogs progress (perhaps @Nora Anwar and Jen Shelby are interested in promoting)
Also try to check in with Vijay Arya on AIX360 and @Beat Buesser on ART, re: metadata connections
Committee Mission Statement Review - including badging discussion
Highlights from US Senate Subcommittee on Judiciary
News from the EU AI Act
Any other business
25 May 2023
To be formatted and updated
Draft agenda for May 25, 2023 at 10am US Eastern. If we go ahead, we need a leader for each section, other than section 5. Other topics or formulations are most welcome.
Part 0 - Metadata / Lineage / Provenance topic from Suparna Bhattacharya & Aalap Tripathy & Team
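As a hedged illustration of the lineage/provenance theme, a dataset's content can be fingerprinted with a cryptographic hash and recorded alongside metadata, so a regenerated dataset can be verified for reproducibility. The field names and schema here are illustrative assumptions, not anything the committee has endorsed.

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Hash a list of records deterministically (sorted keys, UTF-8)."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def provenance_record(records, source, version):
    """Illustrative provenance entry pairing a fingerprint with metadata."""
    return {
        "sha256": dataset_fingerprint(records),
        "source": source,    # hypothetical field names for illustration
        "version": version,
    }

data = [{"id": 1, "label": "a"}, {"id": 2, "label": "b"}]
rec = provenance_record(data, source="demo-pipeline", version="1.0")

# Reproducibility check: regenerating the same dataset yields the same hash
assert rec["sha256"] == dataset_fingerprint(list(data))
print(rec["sha256"][:12])
```

Projects such as Egeria and OpenLineage (discussed below) tackle provenance at the metadata-catalog level; the sketch only shows the basic content-addressing idea.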
Open Voice Network: voice assistance worthy of user trust, created in the inclusive, open-source style you'd expect from a community of The Linux Foundation.
26 Apr 2023
Join the Trusted AI Committee at the LF-AI for the upcoming session on April 27 at 10am Eastern where you will hear from:
Adrian Gonzalez Sanchez: From Regulation to Realization – Linking ACT (European Union AI Act) to internal governance in companies
Phaedra Boinodiris: Risks of generative AI and strategies to mitigate
All: Explore what was presented and suggest next steps
Suparna: what does this mean for foundation models in general, where language models are one example? Another related area in this context is data-centric trustworthy AI.
Alexy: Science. More work is needed on understanding in a scientific way (e.g., validation in a medical context); software engineering remains ad hoc, driven by practice.
Fast Forward - what’s next for ChatGPT
Andreas: file formats for models, and additional needs for trustworthy AI beyond lineage
Idea: create a PoV on Trustworthy AI for Generative Applications, taking the AI Act approach
Create the synthesis: A Point of View on Trustworthy AI for Generative Applications
Occasionally the open source project leaders are invited to the call …
ACTION: Adrian will schedule next meeting
malaika@us.ibm.com has scheduled a call on Monday, October 31, 2022 to determine next steps for the committee due to a change in leadership. Please connect with Susan if you would like to be added to the call.
The group met once a month, on the third Thursday at 10am US Eastern. See notes below for prior calls. Activities of the committee included:
Reviewing all Trusted AI-related projects at the LF-AI and making suggestions, e.g.:
AI Fairness 360
AI Explainability 360
Adversarial Robustness Toolbox
Related projects such as Egeria, OpenLineage, etc.
Reviewing the activities of the subgroups - known as working groups - and making suggestions
MLSecOps WG
Principles WG (completed)
Highlighting new projects that should/could be suitable for the LF-AI
Identifying trends in the industry in Trusted AI that should be of interest to the LF-AI
Initiating Working Groups within the Trusted AI Committee at the LF-AI to address particular issues
Reporting to:
The LF-AI Board of Governors on the activities of the Committee and taking guidance from the board - next meeting on Nov 1, 2022
The LF-AI TAC - making suggestions to the TAC and taking guidance
Questions:
Should the Trusted AI Committee continue to meet once a month with similar goals?
Who will:
Identify the overall program and approach for 2023 - should that be the subject of the next Trusted AI Committee Call?
Host the meetings?
Identify the speakers?
Make sure all is set with speakers and the community?
Should the Trusted AI Committee take an interest in the activities of the PyTorch Consortium?
Invitees and interested parties on the call on October 31, 2022
HPE: Suparna Bhattacharya
IBM: Beat Buesser, David Radley, Christian Kadner, Ruchi Mahindru, Susan Malaika, Cheranellore (Vasu) Vasu, William Bittles
Beat leads the Adversarial Robustness Toolbox - a graduated project at the LF-AI
David works on Egeria Project - a graduated project at the LF-AI
William is involved in OpenLineage
Susan co-led Principles WG - a subgroup of Trusted AI Committee - work completed
Institute for Ethical AI: Alejandro Saucedo. Alejandro is also at Seldon and leads the MLSecOps Working Group, a subgroup of the Trusted AI Committee
QuantUniversity: Sri Krishnamurthy
SAS: Nancy Rausch, currently chair of the LF AI & Data TAC
Trusted AI Committee activities summary for the Governing Board: Animesh Singh
Swaminathan Chandrasekaran, KPMG Managing Director would be talking about how they are working with practitioners in the field on their AI Governance and Trusted AI needs.
Susan Malaika from IBM will be giving an update from the Principles Working Group, and progress there.
Saishruthi Swaminathan to do a presentation on AI Transparency in Marketplace
Francois Jezequel to present on Orange Responsible AI initiative.
Andrew and Tommy did a deep dive into Kubeflow Serving and Trusted AI integration
Principles Working Group discussion
AI for People is focused on the intersection of AI and Society with a lot of commonality with the focus areas of our committee. Marta will be joining to present their organization and what they are working on.
Proposal of a use case to be tested by AT&T using Apache NiFi and AIF360 (Romeo)
Introduction to a baseline dataset for AI bias detection (Romeo)
Exemplar walk-through: retrospective bias detection with Apache NiFi and AIF360 (Romeo)
Principles Working Group Status Update (Susan)
Discuss AIF360 work around SKLearn community (Samuel Hoffman, IBM Research demo)
Discuss "Many organizations have principles documents, and a bit of backlash - for not enough practical examples."
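The AIF360 scikit-learn work mentioned above exposes fairness metrics as functions over predictions and protected attributes. A minimal plain-Python analogue of statistical parity difference, illustrative only and not AIF360's actual signature, might look like:

```python
def statistical_parity_difference(y_pred, protected):
    """Difference in positive-prediction rates between the unprivileged
    (protected == 1) and privileged (protected == 0) groups.
    0.0 means parity; negative values disfavor the unprivileged group."""
    unpriv = [y for y, p in zip(y_pred, protected) if p == 1]
    priv = [y for y, p in zip(y_pred, protected) if p == 0]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) - rate(priv)

# Toy predictions (1 = favorable) and group membership (illustrative)
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]

print(statistical_parity_difference(y_pred, protected))  # 0.75 - 0.25 = 0.5
```

The point of the sklearn-compatible effort is precisely that such metrics can be dropped into existing model-evaluation pipelines alongside accuracy-style scorers.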
Resource
Watch updates on production ML with Alejandro Saucedo done with Susan Malaika on the Cognitive Systems Institute call:
Since we don't record and share our committee meetings, should our committee channel in Slack be made private for asynchronous conversation outside these calls?
Introduction of MLOps in IBM Trusted AI projects
Design thinking around integrating Trusted AI projects in Kubeflow Serving
Animesh Singh (IBM), Maureen McElaney (IBM), Han Xiao (Tencent), Alejandro Saucedo, Mikael Anneroth (Ericsson), Ofer Hermoni (Amdocs)
Animesh will check with Souad Ouali to ensure Orange wants to lead the Principles Working Group and host regular meetings. Committee members on the call were not included in the email chains that occurred, so we need to confirm who is in charge and how communication will occur.
The Technical working group has made progress but nothing concrete to report.
A possible third working group could form around AI Standards.
Attendees: Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K.
Goals defined for the meeting:
Working Group Names and Leads have been confirmed:
Principles, lead: Souad Ouali (Orange France) with members from Orange, AT&T, Tech Mahindra, Tencent, IBM, Ericsson, Amdocs.
Technical, lead: Romeo Kienzler (IBM Switzerland) with members from IBM, AT&T, Tech Mahindra, Tencent, Ericsson, Amdocs, Orange.
Working groups will have a weekly meeting to make progress. The first read-out to the LF AI Governing Board will be Oct 31 in Lyon, France.
The Principles team will study the existing material from companies, governments, and professional associations (e.g., IEEE), and come up with a set that can be shared with the Technical team for feedback as a first step. We need to identify and compile the existing materials.
The Technical team is working on an Acumos + Angel + AIF360 integration demonstration.