  • Open Business and Artificial Intelligence Connectivity (OBAIC) borrows its concept from Open Database Connectivity (ODBC), the interface that makes it possible for applications to access data from a variety of database management systems (DBMSs). The aim of OBAIC is to be the interface that makes it possible for BI tools to access machine learning models from a variety of AI platforms - “AI ODBC for BI”
  • Through OBAIC, BI vendors can connect to any AI platform without worrying about the underlying implementation, or about how the AI platform trains the model or infers the result. It is just like what we have for databases with ODBC: it is up to the database how it stores the data and executes the query.
  • The committee has decided that this standard will only define the REST API protocol for how AI and BI communicate, initiated from BI to AI. The design or actual implementation of OBAIC, such as whether it should be a server, serverless, or Docker deployment, is left to the vendor to provide, or to a separate open-source project if this protocol grows into one.
  • There are 3 key aspects to consider when designing this standard:
    • BI - what specific calls do I need this standard to provide so that I can better leverage any underlying AI/ML framework?
    • AI - what should be the common denominator that an AI framework must provide to support this standard?
    • Data - should data be moved around in the communication between AI and BI (passed by value), or kept in the same location (passed by reference)?
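The pass-by-value vs. pass-by-reference question above can be sketched as two alternative request payloads. This is a minimal illustration for discussion only, not part of the standard; all field names, the model name, and the storage URI are hypothetical.

```python
import json

# Pass by value: the BI tool embeds the rows directly in the request body.
# (All field names below are illustrative, not a finalized OBAIC schema.)
by_value = {
    "model": "churn_predictor",
    "data": {
        "mode": "value",
        "columns": ["tenure_months", "monthly_spend"],
        "rows": [[12, 79.5], [48, 20.0]],
    },
}

# Pass by reference: the BI tool sends only a pointer to where the data
# lives, and the AI platform reads it from that shared location itself.
by_reference = {
    "model": "churn_predictor",
    "data": {
        "mode": "reference",
        "uri": "s3://shared-bucket/customers.parquet",  # hypothetical location
        "format": "parquet",
    },
}

print(json.dumps(by_value, indent=2))
```

Pass by value keeps the protocol self-contained but copies data over the wire; pass by reference avoids the copy but requires both sides to reach the same storage.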


  • We understand that there are 2 key steps in machine learning - Model Training and Result Inference. In this first release of the protocol, we will focus only on inference. Training is outlined here, but it is subject to further discussion.
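The BI-initiated inference call described above can be sketched as a request builder. The endpoint path, field names, and host are assumptions made for illustration, not anything the committee has finalized.

```python
import json

def build_inference_request(base_url, model_name, rows):
    """Build a hypothetical OBAIC inference request (BI -> AI direction).

    The path template `/obaic/v1/models/{name}:predict` and the
    `instances` field are placeholders for discussion, not a spec.
    """
    return {
        "method": "POST",
        "url": f"{base_url}/obaic/v1/models/{model_name}:predict",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"instances": rows}),
    }

# A BI tool asking a (hypothetical) AI platform to score two rows:
req = build_inference_request(
    "https://ai.example.com", "sales_forecast", [[2021, 4], [2021, 5]]
)
print(req["url"])
```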

Overall Flow





  • Data file type: what types of data do we support? E.g. Delta requires Parquet; what about RDBMS sources? Jeffrey's initial cut below can be modified to support multiple data types, depending on the use case.
    • Inference: pass by value should be good enough if it is only for prediction
    • Train: not immediate; maybe later, in Phase 2
  • Metadata structure: what kind of JSON schema do we need?
  • Do we support training or just inference?
  • Do we support only a specific model type (ONNX), or an arbitrary number of frameworks?
  • Decouple model operations (asking the model to predict and train) from data operations (listing, upload, download)
  • Finalize Logo
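The model/data decoupling raised in the open items above could be sketched as two separate endpoint groups, so that data management evolves independently of model operations. All paths, verbs, and names here are hypothetical placeholders, not agreed-upon API surface.

```python
# Hypothetical OBAIC endpoint catalogue illustrating the decoupling of
# model operations from data operations (paths are placeholders only).
MODEL_ENDPOINTS = {
    "predict": ("POST", "/obaic/v1/models/{model}:predict"),
    "train":   ("POST", "/obaic/v1/models/{model}:train"),  # deferred to Phase 2
}

DATA_ENDPOINTS = {
    "list":     ("GET", "/obaic/v1/data"),
    "upload":   ("PUT", "/obaic/v1/data/{name}"),
    "download": ("GET", "/obaic/v1/data/{name}"),
}

def resolve(template, **params):
    # Fill the {placeholders} in a path template (illustrative helper).
    return template.format(**params)

print(resolve(MODEL_ENDPOINTS["predict"][1], model="churn"))
# -> /obaic/v1/models/churn:predict
```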


Why should I share my model with you?