Dissecting DuckDB: The internals of the “SQLite for Analytics”
PEDRO HOLANDA AND MARK RAASVELDT (Centrum Wiskunde & Informatica, Netherlands)
The immense popularity of SQLite shows that there is a need for unobtrusive in-process data management solutions. However, no such system geared towards analytical workloads yet exists. We present DuckDB, a novel data management system designed to execute analytical SQL queries while embedded in another process. In our talk, we give an in-depth overview of the internals of DuckDB and the design choices that were made to cater to the use case of embedded analytics. DuckDB is available as Open Source software under a permissive license.
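To make the embedded, in-process usage pattern concrete, here is a minimal sketch using Python's built-in sqlite3 module, the system the abstract compares against; DuckDB exposes a similar DB-API-style interface. The table and query below are invented for illustration, not taken from the talk:

```python
import sqlite3

# In-process: the database engine runs inside this Python process.
# There is no server to install, configure, or connect to.
con = sqlite3.connect(":memory:")

# Hypothetical analytical workload: aggregate sales per region.
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EU", 10.0), ("EU", 20.0), ("US", 5.0)],
)

rows = con.execute(
    "SELECT region, SUM(amount) FROM sales "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 30.0), ('US', 5.0)]
```

An embedded analytical engine like DuckDB targets exactly this call pattern, but with a column-oriented execution engine tuned for aggregation-heavy queries rather than SQLite's row-oriented, transaction-oriented design.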

Pedro Holanda is a Ph.D. candidate in the Database Architectures group at the Centrum Wiskunde & Informatica (CWI) and is currently on an internship in the Data Management, Exploration and Mining (DMX) group at Microsoft Research. His research focuses on progressive indexing techniques, self-tuning tools, embedded database systems, and in-database analytics. Pedro graduated in computer science at Universidade Federal do Ceará (UFC) and completed his master's at Universidade Federal do Paraná (UFPR). He has contributed to open source projects related to Data Management, including DuckDB (Embedded OLAP DBMS), MonetDB (Column-store DBMS), and Peloton (Self-driving DBMS).

Mark Raasveldt is a Ph.D. candidate in the Database Architectures group at the Centrum Wiskunde & Informatica (CWI). His research focuses on the efficient integration of machine learning and analytics programs with relational database management systems. Mark graduated cum laude in computer science at Utrecht University. He is currently the lead developer of the embedded OLAP database system DuckDB. In the past, he has worked extensively on the column-store MonetDB and its embedded variant MonetDBLite.


FAT in Recommendation Systems
In recent years we have seen applications based on Artificial Intelligence increasingly deployed in our daily lives. Some of these applications are truly amazing, such as self-driving cars, translation systems, and automatic detection of illnesses. However, we have also seen applications that disrupt our daily lives with unintended consequences, such as recommender systems that polarize public opinion and face recognition applications that can compromise our privacy. As the researcher Michael Jordan stated: “Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.” In this context, a serious and systematic study of FAT (Fairness, Accountability, and Transparency) in AI is critical. The popularity of the recently created FAT conference and of several related workshops at academic conferences, especially those on artificial intelligence, shows that researchers and developers, as well as society at large, are becoming aware of the effects of AI technology. In recommendation systems, algorithmic transparency has been a topic of study for at least 10 years, but only recently has this area attracted strong attention in the community. In this tutorial, Prof. Parra will present a general overview of FAT in AI and then focus on FAT for recommendation systems, with a theoretical presentation as well as practical activities.

Denis Parra is an Assistant Professor in the Department of Computer Science at PUC Chile; he obtained his PhD at the University of Pittsburgh, USA. Prof. Parra conducts research in Recommendation Systems as well as in applications at the crossroads of HCI and AI. His work has focused on the role of visualization and interactivity in recommender systems and their effect on users' perception of transparency, trust, and controllability. He has published his research at conferences such as ACM IUI (Intelligent User Interfaces), ACM RecSys (Recommender Systems), ECIR, and UMAP, as well as in journals such as ACM TiiS, IJHCS, ESWA, PLoS One, and UMUAI. Currently, Prof. Parra works on applications of explainable AI (XAI) in Recommendation Systems, as well as in Medicine and in Fake News. He is a researcher at the Millennium Institute on Data Fundamentals, a large Chilean initiative to study the impact of data across disciplines such as Computer Science, Statistics, Political Science, and Journalism. He is also a member of the AI Lab at PUC and leader of the SocVis research group.