Type:
Intern
Contract type:
Internship
Duration:
3-6 months
Work location:
Belval
With a team of more than 100 highly skilled researchers and engineers from various disciplines, the ITIS Department addresses the technological, organizational, human and economic aspects of innovative IT services. Its research areas centre on information-intensive services delivered at a level of quality that builds trust in their use and generates value through new business models.
The successful candidate will actively contribute to the research and technological activities in an interdisciplinary research unit, BART (Business Analytics and Regulatory Technology).
The Business Analytics Team (IT for Innovative Services Department) focuses on applied science and builds innovation for a number of business-driven companies. The team's core activity is Analytics and ML/AI, working with real data from Industry 4.0, finance or satellites. Questions range from ML/AI topics to data science and applications, e.g. identifying patterns, extracting business-intelligence insights or forecasting.
The team is well balanced in terms of young vs more experienced researchers, with very diverse backgrounds, including both business and research. We offer a great work environment where we foster original thinking, encourage exploring and provide strong support for personal development.
As an intern in the Business Analytics and Regulatory Technologies Unit, you will work closely with a team of researchers and domain experts on ongoing funded projects in the digital transformation area.
The focus of your work will be on scoping the evolution of Explainable ML/AI, including a review of the state of the art and existing frameworks. Explainable AI (XAI) is about understanding how advanced black-box AI systems, such as deep learning models, arrive at their outputs – see the examples below:
XAI is at the forefront of the ML/AI field and has received strong interest from a number of business sectors. XAI is ultimately about explaining the "Why?" to the functional expert in, for example, the financial, healthcare or legal domain (why recommend a specific investment, healthcare treatment or legal action). The work will survey current techniques for explaining models (visualization-driven, local-approximation-based, meta-model-based, etc.) and the problems one must address, such as identifying bias.
You will also review existing initiatives such as DARPA's XAI program, experiment with XAI frameworks, design small use cases and/or experiments and, as a final result, deliver a summary report.
DARPA XAI Initiative: www.darpa.mil/program/explainable-artificial-intelligence
LIME Framework: github.com/marcotcr/lime
SHAP Framework: shap.readthedocs.io/en/latest
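To make the "local approximation" family of techniques concrete, here is a minimal, self-contained sketch of the idea behind frameworks like LIME: perturb an input around the instance of interest, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. This is an illustrative toy (the `black_box` model and all parameter values are hypothetical assumptions), not the actual LIME or SHAP API.

```python
import numpy as np

def black_box(x):
    # Hypothetical opaque model: a nonlinear score over two features.
    return 1.0 / (1.0 + np.exp(-(3 * x[..., 0] - 2 * x[..., 1] ** 2)))

def local_explain(f, x0, n_samples=500, scale=0.1, seed=0):
    """LIME-style sketch: fit a weighted linear surrogate around x0.

    Perturb x0 with Gaussian noise, weight each sample by its
    proximity to x0, and solve a weighted least-squares problem;
    the resulting coefficients approximate the model's local
    sensitivity to each feature."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = f(X)
    # Proximity kernel: closer perturbations get larger weights.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature local slopes (intercept dropped)

x0 = np.array([0.5, 0.5])
importance = local_explain(black_box, x0)
```

For this toy model the surrogate recovers the local behaviour one can check analytically: the score rises with the first feature and falls with the second near `x0`. Real frameworks such as LIME add interpretable input representations and regularized sparse fits on top of this basic recipe.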
Education
Competencies
Language