Securing AI for Cyber Resilience: Building Trustworthy and Secure AI Systems

As artificial intelligence (AI) becomes woven into the fabric of daily life – powering automation, analytics, and decision-making – it simultaneously opens new vulnerabilities for attackers to exploit.

Ensuring the security of AI systems, beyond leveraging AI for network defense, is now among the most urgent challenges in cybersecurity.

To explore what this shift means for enterprises and critical infrastructure, Dr Vrizlynn Thing, Senior Vice President, Head of Cybersecurity Strategic Technology Centre at ST Engineering, shared how cyber-resilience principles can help organizations build AI that is secure, trustworthy and robust.

1. Dr Thing, how does your experience in cyber-resilience influence your approach to securing AI systems?

Resilience thinking applies directly to AI. In cybersecurity, especially for complex and critical cyber-physical systems, we design systems to withstand, recover, and adapt, not just prevent attacks. AI needs that same mindset.

As someone who has worked across technology innovation, standards and policies, systems engineering, and the security operations business, I’ve seen how reliability and trust are built when innovation and protection evolve together. At ST Engineering, we bring that resilience mindset to AI by testing, simulating, and building adaptive defenses, ensuring systems can respond dynamically under stress.

2. Why is it so important for organizations to secure AI itself, rather than just use AI for security?

Attackers increasingly target the AI supply chain – poisoning training data, manipulating models, or exploiting vulnerabilities during deployment and operations.

When an AI system or model is compromised, it can quietly skew decisions. This poses significant risks for autonomous systems or analytics engines. Thus, it is important that we embed security and resilience into our AI systems, ensuring robust protection from design to deployment and operations.

3. Which AI-specific attacks pose the greatest risk today?

Data poisoning remains a top concern because attackers feed corrupted data into the training pipeline, causing models to learn the wrong things.

We’re also seeing more model inversion and model theft, where attackers extract and steal proprietary knowledge. And with generative AI, prompt injection has emerged as a fast-moving, real-world threat that’s actively being exploited. That’s why continuously stress-testing AI systems against evolving attacks is critical to strengthening defenses before deployment.
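To make the poisoning and stress-testing point concrete, here is a minimal, self-contained sketch in Python. The nearest-neighbour toy model, synthetic data, and trigger value are all assumptions for illustration, not ST Engineering's tooling or methodology; the sketch simply shows how a small amount of mislabelled, trigger-stamped training data can leave clean accuracy looking healthy while planting a hidden backdoor, which is exactly the kind of behaviour pre-deployment stress-testing is meant to surface.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_clean(n=400):
    # Feature 0 separates the two classes; feature 1 is a "trigger channel"
    # that is near zero for all legitimate data.
    x0 = np.column_stack([rng.normal(-2.0, 1.0, n), rng.normal(0.0, 0.1, n)])
    x1 = np.column_stack([rng.normal(+2.0, 1.0, n), rng.normal(0.0, 0.1, n)])
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def knn_predict(X_train, y_train, X):
    # Toy 1-nearest-neighbour "model": label of the closest training point.
    dists = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

def poison(X, y, m=25, trigger=4.0):
    # The attacker injects a handful of class-1-looking points, stamped with
    # the trigger (feature 1 = 4.0) and mislabelled as class 0.
    Xp = np.column_stack([rng.normal(+2.0, 1.0, m), np.full(m, trigger)])
    return np.vstack([X, Xp]), np.concatenate([y, np.zeros(m, dtype=int)])

X_train, y_train = make_clean()
X_test, y_test = make_clean()
Xp_train, yp_train = poison(X_train, y_train)

# Triggered inputs: genuine class-1 samples with the trigger stamped on.
X_trig = X_test[y_test == 1].copy()
X_trig[:, 1] = 4.0

clean_acc   = (knn_predict(Xp_train, yp_train, X_test) == y_test).mean()
trigger_hit = (knn_predict(Xp_train, yp_train, X_trig) == 0).mean()
print(f"accuracy on clean inputs:       {clean_acc:.3f}  (looks healthy)")
print(f"triggered inputs misclassified: {trigger_hit:.3f}  (the hidden backdoor)")
```

Because ordinary test accuracy barely moves, only an evaluation that deliberately probes for trigger-style behaviour reveals the compromise.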

4. What does a robust AI defense strategy look like in practice?

Visibility is key. You can’t protect what you can’t see. Without visibility into data flows, model behavior and system interactions, threats can remain undetected until it is too late. Continuous validation and monitoring help surface anomalies and adversarial manipulations early, enabling timely interventions. 
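As a rough illustration of what score-level monitoring can look like, the sketch below compares a live window of model outputs against the distribution recorded during validation and flags large shifts. The population stability index, the 0.2 alert threshold, and the synthetic score distributions are assumptions chosen for the example, not a description of any specific ST Engineering platform.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Measure how far a live score distribution has drifted from the
    validation baseline; larger values mean the model is operating on
    inputs (or producing outputs) it was never validated against."""
    edges = np.linspace(0.0, 1.0, bins + 1)       # scores assumed to lie in [0, 1]
    b, _ = np.histogram(baseline, bins=edges)
    l, _ = np.histogram(live, bins=edges)
    b = np.clip(b / b.sum(), 1e-6, None)          # avoid log(0) on empty bins
    l = np.clip(l / l.sum(), 1e-6, None)
    return float(np.sum((l - b) * np.log(l / b)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=5000)       # scores recorded during validation
normal_traffic  = rng.beta(2, 5, size=1000)       # healthy production window
shifted_traffic = rng.beta(5, 2, size=1000)       # drifted or manipulated window

for name, live in [("normal window", normal_traffic), ("shifted window", shifted_traffic)]:
    psi = population_stability_index(baseline_scores, live)
    status = "ALERT - investigate" if psi > 0.2 else "ok"   # 0.2 is a common rule of thumb
    print(f"{name:15s} PSI={psi:.3f} -> {status}")
```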

Explainability is just as pivotal. Detecting an anomaly is one thing, but understanding why it happened drives true resilience. Explainability clarifies the reasoning behind AI systems and their decisions, helps verify threats, traces manipulations, makes AI systems auditable, and strengthens trust.
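One common, model-agnostic way to get at the "why" is permutation importance: measure how much performance drops when each input feature is scrambled. The toy linear rule and synthetic data below are assumptions for illustration only; the sketch shows the kind of attribution signal that helps trace which inputs actually drive a decision.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "model": a fixed linear decision rule over three features, where the
# third feature has no influence on the output at all.
weights = np.array([2.0, -1.5, 0.0])
def model(X):
    return (X @ weights > 0).astype(int)

# Synthetic evaluation data labelled by the same rule.
X = rng.normal(size=(2000, 3))
y = model(X)

def permutation_importance(predict, X, y, n_repeats=5):
    # Accuracy drop when a feature is shuffled: features the decision truly
    # depends on produce a large drop; irrelevant ones produce none.
    base_acc = (predict(X) == y).mean()
    scores = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base_acc - (predict(Xp) == y).mean())
        scores.append(float(np.mean(drops)))
    return scores

for j, imp in enumerate(permutation_importance(model, X, y)):
    print(f"feature {j}: accuracy drop when shuffled = {imp:.3f}")
```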

Assurance must be continuous. Our team from the Cybersecurity Strategic Technology Centre rigorously validates and strengthens AI systems through advanced testing and assurance programs, including ongoing red-teaming, proactive vulnerability rectification, and robust system protection measures, ensuring our customers’ systems remain resilient and secure throughout their lifecycle.

5. How can businesses balance AI performance with strong security measures?

When protection is added later, businesses often end up sacrificing either performance or safety. Embedding secure-by-design principles from the beginning makes security native rather than an afterthought.

Therefore, we champion lightweight, adaptive defenses – high-performance security measures that safeguard AI systems without compromising speed or innovation. Our cyber-resilience validation platforms test AI-enabled control systems under realistic conditions, ensuring they stay secure and responsive even under pressure.

Global frameworks such as National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, MITRE ATLAS, and Singapore’s Model AI Governance Framework and Guidelines on Securing AI Systems provide strong foundations for this. That said, true resilience goes beyond compliance. Bridging policy and technical practice, especially across borders, is key to building trustworthy, high-performance AI ecosystems that can evolve safely.

6. As AI threats evolve, what guiding principles should organizations follow to keep AI systems safe and trustworthy?

Attackers are exploiting AI-specific security weaknesses, such as data poisoning, model inversion, and adversarial manipulations. As AI adoption accelerates, the threats against it will grow in sophistication and scale. The rapid proliferation of AI systems across industries not only drives innovation but also expands the attack surface, drawing the attention of both state-sponsored and criminal actors.

Defenders must move just as fast, leveraging adaptive, self-learning systems that respond in real time. The industry is shifting from reactive defense to proactive resilience, and from responding to anticipating.

To stay ahead, we should treat the security of AI as a continuous journey, not a checkbox. Collaboration between security teams, developers, and policymakers is crucial because risks cut across boundaries.

At ST Engineering, we anchor everything on resilience and trust, as these are the foundations that turn powerful AI into responsible, secure AI.

7. Can you share a real-world example of AI resilience in action?

We have helped customers assess the security robustness of their AI systems, benchmarking them against industry peers and testing them across multiple attack scenarios. Throughout these engagements, we monitor the entire process and provide explainability for the findings and results.

With this insight, customers can remediate security issues in their AI systems and models, implement appropriate protection mechanisms, and continuously instill resilience as their AI evolves.
 

8. How does ST Engineering’s AGIL® SecureAI bring these ideas together, and what does the future of secure AI look like?

AGIL® SecureAI puts these principles into practice. It proactively identifies and mitigates threats to AI systems—testing them before deployment and monitoring them in production to ensure continuous assurance. AGIL® SecureAI enables organizations to scale AI securely, and demonstrates how innovation and protection can, and should, evolve together.

Looking ahead, AI will remain both an enabler and a target. Resilience must be designed in from the start, combining adaptive defenses, explainable models, and ongoing validation to sustain trust.

With our expanding Digital Systems & Cyber business and deep expertise in the security of AI and cyber resilience, ST Engineering is shaping secure, trustworthy AI ecosystems for the future. As a global technology, defense, and engineering group, we are future-focused, continuously leveraging advanced technologies to deliver cutting-edge capabilities and solutions across diverse markets and sectors. Our goal is clear: to help customers secure what matters and thrive in the digital era.
