Blog • Generis Group

Engineering Trust in AI for Medical Devices: Miguel Ávila’s Insights

Written by Aadya Gupta | April 13, 2026

Introduction

As artificial intelligence becomes increasingly embedded in medical device innovation, the conversation is shifting from possibility to responsibility. For leaders in medtech, the question is no longer whether AI can transform care, but how to design systems that clinicians trust, regulators approve, and patients can rely on at scale.

With more than 30 years of experience leading global quality, regulatory, medical, and clinical organizations, Miguel Ávila brings a deeply integrated perspective on what it takes to deliver safe, effective, and scalable innovation. As Vice President of Global Quality, Regulatory, Medical, and Clinical Affairs at Cordis, he oversees the company’s worldwide quality management system, regulatory strategy, and clinical evidence generation, ensuring cardiovascular technologies are not only innovative but trusted across global markets.

Ahead of his participation at the European Medical Device Summit, taking place on 9-10 June 2026 in Germany, we spoke with Miguel about designing trust in AI for medical devices, embedding clinical insight into early development, and how quality and clinical leadership must evolve to support continuous learning in the AI era.

 

Could you begin with a brief introduction about yourself, and an overview of your responsibilities as VP, Global Quality, Regulatory, Medical, and Clinical Affairs at Cordis?

I’m Miguel Ávila, Vice President of Global Quality, Regulatory, Medical, and Clinical Affairs at Cordis. I’m a results-driven medtech executive with 30 years of experience leading enterprise and high-growth organizations through transformation, innovation, and global regulatory complexity. Today, I oversee Cordis’ worldwide quality management system, regulatory strategy and execution, and our medical and clinical affairs functions – including clinical evidence and lifecycle oversight – so we can accelerate cardiovascular innovation while maintaining the highest standards of safety, effectiveness, and trust.

 

What does designing trust mean in the context of AI for medical devices, and why must quality-by-design begin in the clinical environment?

Designing trust means building an AI model that clinicians and patients can rely on because it behaves predictably, communicates its limits, and supports safe decision-making under real-world conditions – not just because it performs well in a controlled validation dataset. That’s why Quality-by-Design must begin in the clinical environment. If you don’t start where the device is actually used – under time pressure, variable data quality, diverse patient populations, and real workflow constraints – you risk optimizing an algorithm instead of protecting a patient. Practically, this means anchoring the AI to a clearly defined clinical job-to-be-done, designing around the moments where harm could occur, and demonstrating meaningful benefit – improved outcomes, fewer errors, reduced variability, or reduced burden – within the realities of care delivery.

 

How can insights from clinicians and real-world clinical use be more effectively integrated into early design and development decisions?

Clinician insights become most valuable when they’re captured in the real care environment and then translated into concrete design requirements – not treated as informal feedback. By shadowing clinicians in situ – whether in the Cath lab, OR, imaging suite, or at the bedside – engineering teams can see “work as done” versus “work as imagined”: interruptions, time pressure, handoffs, variability in technique, and the practical constraints that shape safety and adoption. The key is to operationalize what you learn. Those observations should be documented and brought into formal design reviews, where they become defined design inputs (workflow requirements, usability needs, risk controls) and inform verification and validation plans. When clinician insights are embedded into design controls early, you don’t just build a better model – you build a safer, clinically usable device that performs reliably in the environments it’s meant to serve.

 

At Cordis, our mission in medtech is to accelerate cardiovascular innovation while maintaining the highest standards of safety, effectiveness, and trust.



In your experience, how do quality, regulatory, medical, and clinical functions need to work together to ensure AI-driven innovations are both safe and scalable?

In my experience, these functions must operate as an integrated system with shared ownership of the full product lifecycle – because with AI, safety and scalability are inseparable from workflow, data, and change control. Quality builds and maintains the Quality Management System that makes performance repeatable and auditable – design controls, risk management, CAPA, supplier controls, verification and validation, and audit readiness. Regulatory translates the product and its lifecycle into an approvable, globally coherent strategy – intended use, claims, evidence expectations, submissions, post-approval change strategy, and harmonization across regions. Medical and Clinical anchor the innovation in real patient care – defining clinical value, shaping human factors and usability, guiding training and adoption, and interpreting post-market and real-world performance signals. When these functions are aligned, you don’t just prove the model works – you build a system that clinicians can trust at scale and over time. A simple test: if Medical can’t clearly describe how harm could occur, if Quality can’t point to the specific control that mitigates it, and if Regulatory can’t explain how the device will be managed and updated across its lifecycle, then you don’t have scalable AI – you have a one-time launch risk.

 

What are the biggest quality and regulatory challenges when translating AI concepts into clinically trusted, compliant medical devices?

Designing trust means building AI systems that clinicians and patients can rely on because they behave predictably, communicate their limits, and support safe decision-making under real-world conditions.

The biggest challenges aren’t the AI algorithm in isolation – they’re proving, controlling, and sustaining safe clinical performance across sites and over time, within a compliant quality system. AI is easy to repurpose, but a vague intended use invites off-label use and potentially new hazards, and unclear design inputs lead to weak verification and validation and unpredictable real-world risk. AI model performance can degrade with shifts in data, workflow, or user behavior, so Post-Market Surveillance must include model performance signals, near-misses, clear escalation paths, and potential revalidation. AI models also need change control that aligns update frequency with compliance expectations across global regions. Fundamentally, quality and regulatory success comes from controlling the AI model’s full lifecycle rather than validating it once.

 

How can organizations embed continuous learning from clinical data and post-market feedback into their quality-by-design frameworks?

Continuous learning in QbD means your post-market signals are treated like design inputs – governed, thresholded, and change-controlled – so the product gets safer as it scales. This becomes part of the Design Plan and Risk Management Plan. By building a post-market learning loop that maps to Quality Management System processes – complaints, post-market surveillance, CAPA, change control, management reviews, and supplier controls – you can reliably capture, triage, act, verify, communicate, and document. By defining a “trust degradation signal” system, you tie triggers to actions: “green” (within expected ranges) means continue to monitor; “yellow” (drift) means increase sampling and/or run targeted investigations; and “red” (safety risk signals) means pause rollout, initiate field actions, and/or file regulatory reports. An AI Trust Review Board made up of Quality, Regulatory, Medical/Clinical, Engineering, Cyber, and Commercial gives this cross-functional governance real decision-making authority. When learning is made visible, the loop gets closed with customers and clinicians.
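The green/yellow/red trigger system described above can be illustrated with a minimal sketch. This is not Cordis’ actual implementation – the metric (sensitivity against a validated baseline), threshold values, and action wording are all illustrative assumptions; the point is simply that each signal level maps to a predefined, auditable action.

```python
# Illustrative sketch of a "trust degradation signal" classifier: maps a
# monitored model-performance metric to green/yellow/red actions.
# Metric names, thresholds, and action text are hypothetical examples.

from dataclasses import dataclass

@dataclass
class SignalThresholds:
    yellow: float  # degradation beyond this indicates drift -> investigate
    red: float     # degradation beyond this indicates safety risk -> escalate

# Predefined actions keep the response auditable rather than ad hoc.
ACTIONS = {
    "green": "continue routine monitoring",
    "yellow": "increase sampling and open a targeted investigation",
    "red": "pause rollout; assess field action and regulatory reporting",
}

def classify_signal(baseline: float, observed: float, t: SignalThresholds) -> str:
    """Classify observed performance (e.g. sensitivity) against a validated baseline."""
    degradation = baseline - observed
    if degradation >= t.red:
        return "red"
    if degradation >= t.yellow:
        return "yellow"
    return "green"

# Example: baseline sensitivity 0.95; drift at 2 points lost, safety risk at 5.
t = SignalThresholds(yellow=0.02, red=0.05)
print(classify_signal(0.95, 0.94, t), "->", ACTIONS[classify_signal(0.95, 0.94, t)])  # green
print(classify_signal(0.95, 0.92, t), "->", ACTIONS[classify_signal(0.95, 0.92, t)])  # yellow
print(classify_signal(0.95, 0.89, t), "->", ACTIONS[classify_signal(0.95, 0.89, t)])  # red
```

In a real quality system the thresholds would come from the Risk Management Plan, and each transition would open the corresponding QMS record (investigation, CAPA, field action) rather than just printing a label.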

Looking ahead, how do you see the role of quality and clinical leadership evolving as AI becomes more deeply embedded in medical device innovation?

Quality and clinical leadership will move from being “review gates” to being co-owners of the AI lifecycle, because the risk and performance of AI are shaped as much by workflow, data, and updates as by design-time engineering. Quality leaders shift from compliance stewards to “trust engineers” by defining and governing trust requirements, AI-specific risk controls (automation bias, misuse), and lifecycle evidence (how trust is maintained). So, Quality will spend more time on model change control, monitoring, and rapid response and less on static documentation. Clinical leaders will shift from “clinical validation support” to owning the clinical workflow specifications as design inputs, defining clinically meaningful endpoints, leading adoption safety, and curating real-world feedback so it becomes engineering requirements. As AI usage deepens in an organization, Quality owns the signal detection and QMS pathways (CAPA, change control, PMS), Clinical owns clinical interpretations and mitigation relevance, and together, the functions will run the Trust Reviews.

 

Which aspect of the European Medical Device Summit are you most looking forward to?

I’m most looking forward to the exchange with industry colleagues across the MedTech landscape – hearing what’s working, what’s changing, and how teams are translating innovation into real clinical impact. Just as important, I’m eager to compare perspectives on the evolving global regulatory environment and what it will take to stay both compliant and agile as technologies and expectations continue to accelerate.

 

Conclusion

Miguel Ávila’s perspective makes one thing clear: in the age of AI in medical devices, trust is not a byproduct of innovation; it must be deliberately engineered. By tightly integrating quality, regulatory, medical, and clinical functions, organizations can move beyond one-time validation toward lifecycle governance that sustains safety, performance, and credibility over time.

As AI continues to reshape the medtech landscape, leaders like Miguel are redefining quality as a strategic driver of innovation, not just a checkpoint but a system for continuous learning and scalable trust. His insights at the European Medical Device Summit will offer a practical blueprint for organizations seeking to balance speed, compliance, and real-world clinical impact in an increasingly complex global environment.

 

Register now: emdsummit.com