CARLA 2024

Trilce Estrada

Navigating Challenges and Opportunities of AI-driven High Throughput Computing

As intelligent systems become pervasive and data production grows at a rate never seen before, a whole generation of scientific and medical applications is becoming increasingly reliant on Artificial Intelligence. 
In this talk I present three case studies where AI, High Throughput Computing, and computational science coexist, and I highlight a roadmap in the pursuit of reproducibility, scalability, and trust. The talk delves into the pivotal role that machine learning plays throughout the entire lifecycle of High Throughput Computing for scientific exploration – from optimizing resource utilization to fostering meticulous analysis and result discovery. While highlighting the merits of scaling scientific endeavors, we also address concerns related to the need for transparent and reliable algorithms, safeguarding against biases that could inadvertently undermine the credibility of results, and exploring techniques to understand how machine learning augments or impedes trust in research outcomes.

In closing, I hope to convey the importance of balancing the predictive power of machine learning with the domain knowledge and context that underpin reliable scientific results.

Liliana Barbosa Santillan

Liliana was awarded several scholarships from the National Institute of Astrophysics, Optics and Electronics; the National Council of Science and Technology; the Carolina Foundation; and the Technical University of Madrid.

Liliana has presented at conferences across several states in México and in Europe. She has collaborated on applied research projects with a wide variety of organizations, including the National Institute of Astrophysics, Optics and Electronics; the Ontology Engineering Group; The Computational Logic, Languages, Implementation, and Parallelism Laboratory; the Embedded and Real-Time Systems Collaborative Laboratory; FAO, the Food and Agriculture Organization of the United Nations for a world without hunger; the Mexican Army; the Ministry of the Economy and the State Council for Science and Technology of Jalisco; and the TechBA Silicon Valley program, among others.

She has been a member of the National Research System in Mexico (SNI I). Her research interests include semantic (web) applications, ontology systems, security, parallel programming with GPUs, and software quality. She is also the chairperson of www.dataminingengineeringgroup.net

Ewa Deelman

Her main area of research is distributed computing. She researches how to best support complex scientific applications on a variety of computational environments, including campus clusters, grids, and clouds. She has designed new algorithms for job scheduling, resource provisioning, and data storage optimization in the context of scientific workflows.

Since 2000, she has been conducting research in scientific workflows and has been leading the design and development of the Pegasus software, which maps complex application workflows onto distributed resources. Pegasus is used by a broad community of researchers in astronomy, bioinformatics, earthquake science, gravitational-wave physics, limnology, and other fields.

She is also the Principal Investigator of CI Compass, the NSF Cyberinfrastructure Center of Excellence, which provides leadership, expertise, and active support to cyberinfrastructure practitioners at NSF Major Facilities and throughout the research ecosystem, enabling the ongoing evolution of technologies, practices, and the field, and ensuring the integrity and effectiveness of the cyberinfrastructure upon which research and discovery depend.

In addition, she is interested in issues of distributed data management, high-level application monitoring, and resource provisioning in grids and clouds.