Keynote Talks

Sameer Antani

Dr. Sameer Antani is a Principal Investigator and a senior researcher in the Division of Intramural Research of the National Library of Medicine (NLM) at the National Institutes of Health (NIH). He earned his Ph.D. and M.Eng. in Computer Science and Engineering from the Pennsylvania State University and his B.E. in Computer Engineering from the Savitribai Phule Pune University (formerly University of Pune), India, through the Pune Institute of Computer Technology (PICT). Dr. Antani is a Fellow of the American Institute for Medical and Biological Engineering (AIMBE), a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and a Senior Member of SPIE. He conducts research in multimodal artificial intelligence (AI) for medicine, particularly reliable, reproducible, and interpretable prediction models for challenging diseases, with applications in disease screening, diagnostics, risk analysis, and treatment. His scientific contributions and research leadership have been recognized with several awards, including NLM Board of Regents Awards and NIH Director's Awards, among others.

Title: Cross-Modal Data Synthesis for AI-driven Biomedical Applications

Abstract:
Biomedical data are inherently multimodal, comprising structured and unstructured text, images, videos, and other signal data. Collectively, these data encapsulate complementary aspects of a patient’s condition and support clinical diagnosis by one or more care providers. In the rapidly evolving landscape of biomedical AI, the integration of heterogeneous data sources producing such data can be crucial for advancing various AI-assisted downstream tasks such as disease diagnosis, information retrieval, and personalized treatment strategies. This talk will highlight novel approaches and results from our ongoing research in cross-modal data synthesis, a transformative approach that exploits such multimodal data integration for generative data synthesis as well as for robust predictive systems to aid in disease detection, classification, or therapeutic pathways. We will examine findings from our research on novel multimodal generative AI models in: (i) the generation of new imaging data to enrich unbalanced training datasets with images of rare conditions; and (ii) the generation of textual descriptions of images, which can subsequently be used to build foundation models for more accurate and robust predictive models or for a variety of downstream tasks. The material in this talk is aimed at informing researchers, clinicians, and AI practitioners to envision a future where integrated multimodal frameworks can make downstream tasks such as biomedical discovery more accessible, interpretable, and actionable.

José Hernández-Orallo

José Hernández-Orallo is Professor at the Universitat Politècnica de València, Spain, and Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK. He received a B.Sc. and an M.Sc. in Computer Science from UPV, partly completed at the École Nationale Supérieure de l'Électronique et de ses Applications (France), and a Ph.D. in Logic and Philosophy of Science with a doctoral extraordinary prize from the University of Valencia. His academic and research activities have spanned several areas of artificial intelligence, machine learning, data science, and intelligence measurement, with a focus on a more insightful analysis of the capabilities, generality, progress, impact, and risks of artificial intelligence. He has published five books and more than two hundred journal articles and conference papers on these topics. His research in the area of machine intelligence evaluation has been covered by several popular outlets, such as The Economist, New Scientist, and Nature. He keeps exploring a more integrated view of the evaluation of natural and artificial intelligence, as vindicated in his book "The Measure of All Minds" (Cambridge University Press, 2017, PROSE Award 2018). He is a member of AAAI, CLAIRE, and ELLIS, and a EurAI Fellow.

Title: AI Evaluation Should Make AI Predictable

Abstract:
AI evaluation is much more than benchmarks, metrics, and leaderboards. It should also be much more, and much better, than 'evals'. This talk will survey the state of AI evaluation through three major research obstacles. First, there are very different paradigms and communities that often talk past each other: the TEVV (testing, evaluation, verification and validation) school, the benchmark school, the 'evals' school, the construct-oriented evaluation school, the real-world impact school, and the exploratory school. Second, there is limited understanding of what capability means and how to measure it, as opposed to performance. Third, there is little explicit recognition that AI evaluation is mostly about predictability: from the question "is it accurate or safe in general?" to "will it work for this operating condition?" When we understand AI evaluation as pursuing both explanatory and predictive power, research challenges and opportunities become clearer.

Joanna Bryson

Joanna J. Bryson is an expert in intelligence, both natural and artificial. With degrees in Social and Computer Sciences from Chicago, Edinburgh, and MIT, and academic publications in AI, cognitive science, theoretical biology, political economy, behavioural ecology, philosophy, and technology policy, her voice is heard in the UN, EU, CoE, OSCE, OECD, and governments and ministries globally. Since February 2020, Bryson has been Professor of Ethics and Technology at the Hertie School, Berlin, recruited to their Centre for Digital Governance. Her current scientific focus is technology's impacts on human cooperation, and her policy focus is transnational regulation of essential digital infrastructure and services.

Title: Do we co-evolve with what we design? DevOps, AGI, and Human Frailties

Abstract:
Are AI and humanity coevolving? Can we build AI collaborators? Who decides who or what is the responsible actor? Answers: Humanity does evolve, both biologically and (arguably) culturally, and our technology can and does affect both. Digital systems, in contrast, are built, and may be built to change, but they do not reproduce sexually. Describing designed systems as collaborators is deceptive and dangerous. The decision about whether we try to hold AI itself to account or actually hold corporations within the rule of law is political, and ongoing. If you still want to attend this talk even now that you know the answers, you will get more details about both Darwinian evolution and AGI, four possible futures for AI and humanity, and concrete explanations of EU and global efforts at AI regulation.