Romeo Kienzler, Ivan Nesic

IBM Center for Open Source Data and AI Technologies (CODAIT) and University Hospital Basel

Bio

Romeo Kienzler is CTO and Chief Data Scientist of the IBM Center for Open Source Data and AI Technologies (CODAIT) in San Francisco.
He holds an M.Sc. (ETH) in Computer Science with specialisation in Information Systems, Bioinformatics, and Applied Statistics from the Swiss Federal Institute of Technology Zurich.
He works as Associate Professor of Artificial Intelligence at the Swiss University of Applied Sciences Berne and as Adjunct Professor of Information Security at the Swiss University of Applied Sciences Northwestern Switzerland (FHNW). His current research focus is on cloud-scale machine learning and deep learning using open source technologies including TensorFlow, Keras, and the Apache Spark stack.
Recently, he joined the Linux Foundation AI as lead of the Trusted AI technical workgroup, with a focus on deep learning adversarial robustness, fairness, and explainability.
He also contributes to various open source projects. He regularly speaks at international conferences and has significant publications in the areas of data mining, machine learning, and blockchain technologies.
Romeo is lead instructor of the Advanced Data Science specialisation on Coursera (https://www.coursera.org/specializations/advanced-data-science-ibm), with courses on Scalable Data Science, Advanced Machine Learning, Signal Processing, and Applied AI with Deep Learning.
He published the book Mastering Apache Spark 2.x (http://amzn.to/2vUHkGl), which has been translated into Chinese (http://www.flag.com.tw/books/product/FT363).
Recently, he published "What's New in TensorFlow 2.x" with O'Reilly (https://learning.oreilly.com/library/view/whats-new-in/9781492073727/).
Romeo Kienzler is a member of the IBM Technical Expert Council and the IBM Academy of Technology - IBM’s leading brain trusts. #ibmaot

Twitter: @RomeoKienzler
Web: ibm.com

Ivan Nesic is a Machine Learning Researcher at University Hospital Basel.

An Open Source ML Pipeline Development and Execution Environment

Deep learning models are becoming more and more popular, but explainability, adversarial robustness, and fairness are often major concerns for production deployment. Although the open source ecosystem offers abundant tooling to address these concerns, fully integrated, end-to-end systems are lacking in open source. Therefore we provide an entirely open source, reusable component framework, visual editor, and execution engine for production-grade machine learning on top of Kubernetes, a joint effort between IBM and the University Hospital Basel. It uses Kubeflow Pipelines, the AI Explainability 360 toolkit, the AI Fairness 360 toolkit, and the Adversarial Robustness Toolbox on top of ElyraAI, Kubeflow, Kubernetes, and JupyterLab. Using the Elyra pipeline editor, AI pipelines can be developed visually as a set of Jupyter notebooks.
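To illustrate the core idea behind such a notebook pipeline, here is a minimal, dependency-free sketch: each step is a notebook, edges declare which steps must finish first, and an execution engine (Kubeflow Pipelines, in the system described above) runs them in a valid topological order. The notebook filenames are purely illustrative, and the real system schedules steps as containers on Kubernetes rather than in-process.

```python
# Hypothetical sketch of notebook-pipeline scheduling; the notebook
# names below are illustrative, not part of the actual framework.
from graphlib import TopologicalSorter

# Each key is a pipeline step (a notebook); its value is the set of
# steps that must complete before it may run.
steps = {
    "train_model.ipynb": set(),
    "art_robustness_check.ipynb": {"train_model.ipynb"},
    "aif360_fairness_check.ipynb": {"train_model.ipynb"},
    "aix360_explain.ipynb": {"train_model.ipynb"},
}

# Resolve a valid execution order: training first, the trust checks after.
order = list(TopologicalSorter(steps).static_order())
print(order)
```

In the real system, the Elyra visual editor produces this dependency graph from the canvas, and Kubeflow Pipelines executes each notebook in its own container, so the three trust checks can run in parallel once training completes.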