Roman Zhukov

Principal Security & Community Architect at Red Hat

Speaker's Bio

Roman is a cybersecurity expert, engineer, and leader with over 17 years of hands-on experience securing complex systems and products at scale. Currently Principal Architect at Red Hat, he leads open-source security strategy, upstream collaboration, and cross-industry initiatives focused on building trusted ecosystems. He has built and scaled programs across security architecture, threat modeling, secure development, vulnerability management, incident response, and security education – for both engineers and senior leadership. His work spans trusted AI, privacy, compliance, and secure software supply chains. Previously, Roman led Product Security & Privacy for Data Center and AI software at Intel. He is a Security Champion for several open-source projects and an active contributor to working groups under the OpenSSF, Eclipse Foundation, and other global initiatives. He is an official member of CEN/CLC and ETSI standardization groups, contributing to the EU Cyber Resilience Act (CRA).

Restoring Trust in Open-Source AI: Security Starts with Provenance

Open-source AI is shaping how we build, deploy, and scale systems and applications today, all the way into production. But with the rapid adoption of upstream AI models, datasets, and orchestration tools comes a critical question: can we trust what we’re using and how it was originally created? According to a PwC survey, about 50% of company leaders surveyed in 2025 admitted they don’t trust AI to be embedded in their core operations.
Rather than focusing on securing AI systems themselves, this talk explores an often overlooked topic: how data provenance, model transparency, and AI-specific supply chain security are becoming essential to building trustworthy AI systems.
I’ll cover why data provenance matters and how it reduces risks such as data poisoning, bias, and adversarial manipulation; the rise of the AI Software Bill of Materials (AI SBOM) for documenting model components and inference behavior; and the open-source tools that bring it all to life: Sigstore, KitOps, and Model and Data Cards.
I will also share updates from our work in leading AI standardization and open-source organizations, including OASIS, OpenSSF, and LF AI & Data, to define and support AI provenance standards, automation, and trusted AI guidelines.