Overview
Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.
The committee focuses on policies, guidelines, tooling, and use cases by industry:
- Survey and contact current open source Trusted AI projects about joining LF AI efforts
- Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI
- Create a document that describes the basic concepts and definitions related to Trusted AI and standardizes the vocabulary/terminology
Mail List
Please self subscribe to the mail list here: https://lists.lfai.foundation/g/trustedai-committee
Participants
Initial Organizations Participating: AT&T, Amdocs, Ericsson, IBM, Orange, TechM, Tencent
Committee Chairs
Name | Region | Organization | Email Address | LF ID |
---|---|---|---|---|
Animesh Singh | North America | IBM | singhan@us.ibm.com | |
Souad Ouali | Europe | Orange | souad.ouali@orange.com | |
Jeff Cao | Asia | Tencent | jeffcao@tencent.com | |
Committee Participants
Name | Organization | Email Address | LF ID |
---|---|---|---|
Ofer Hermoni | Amdocs | ofer.hermoni@amdocs.com | |
Mazin Gilbert | AT&T | mazin@research.att.com | |
Alka Roy | AT&T | AR6705@att.com | |
Mikael Anneroth | Ericsson | mikael.anneroth@ericsson.com | |
Jim Spohrer | IBM | spohrer@us.ibm.com | |
Maureen McElaney | IBM | mmcelaney@us.ibm.com | |
Susan Malaika | IBM | malaika@us.ibm.com | |
Romeo Kienzler | IBM | romeo.kienzler@ch.ibm.com | |
Francois Jezequel | Orange | francois.jezequel@orange.com | |
Nat Subramanian | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com | |
Han Xiao | Tencent | hanhxiao@tencent.com | |
Assets
Sub Categories
- Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations
- Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks
- Explainability: Methods to enhance the understandability/interpretability of AI model outcomes/decision recommendations for different personas/roles, including ranking and debating results/decision options
- Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
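To make the Fairness sub-category concrete, below is a minimal, self-contained sketch of two standard group-fairness metrics that toolkits such as AI Fairness 360 (listed under Projects) compute over real datasets. The toy loan-decision data and function names here are illustrative assumptions, not part of any committee asset.

```python
# Illustrative sketch of two group-fairness metrics on toy data.
# Real projects (e.g. AI Fairness 360) compute these over full datasets
# with protected-attribute handling; this only shows the arithmetic.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 means parity."""
    return favorable_rate(unprivileged) - favorable_rate(privileged)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable rates; values below ~0.8 are a common red flag."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy loan decisions: 1 = approved, 0 = denied.
privileged = [1, 1, 1, 1, 0]    # 80% approval rate
unprivileged = [1, 1, 0, 0, 0]  # 40% approval rate

print(statistical_parity_difference(unprivileged, privileged))  # ~ -0.4
print(disparate_impact(unprivileged, privileged))               # 0.5
```

A disparate impact of 0.5 here would flag the toy classifier for bias-mitigation work, which is the kind of detect-and-mitigate workflow the Fairness sub-category covers.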
Projects
Name | Github | Website |
---|---|---|
AI Fairness 360 | https://github.com/IBM/AIF360 | http://aif360.mybluemix.net/ |
Adversarial Robustness 360 | https://github.com/IBM/adversarial-robustness-toolbox | https://art-demo.mybluemix.net/ |
AI Explainability 360 | https://github.com/IBM/AIX360 | http://aix360.mybluemix.net |
Meetings
How to Join: Contact trustedai-committee@lists.lfai.foundation for more information about how to join.
Meeting Content (minutes / recording / slides / other):
Date | Minutes |
---|---|
| Attendees: Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K. |