WELCOME TO

SOFTECAsia 2025

AI-Driven SQA Revolution

Join the future of SQA this September in Kuala Lumpur.


Programme

Keynote Titles & Synopsis

Keynote 1: Transforming Software Quality Assurance Using Artificial Intelligence: A CIO/CTO Perspective


Speaker: Prof. Jasbir Dhaliwal, University of Memphis, USA

Synopsis:

AI technologies are now driving significant changes in SQA and software testing best practices, presenting both fresh opportunities and new challenges to chief information and technology officers who manage software development and implementation in the private and public sectors. This talk tracks these changes and uses a strategic technology management perspective to show how both large and small organizations are overcoming the challenges and realizing the benefits. Examples are drawn from a survey of senior technology managers at North American organizations. The keynote contextualizes the opportunities and challenges presented by artificial intelligence through a historical lens of the last thirty years of software development, and looks ahead to the coming changes in SQA best practices, including the future shape of the AI-informed SQA organization.

Keynote 2: Use Emotional Intelligence, Not Artificial Intelligence, to Accelerate Your Career


Speaker: Philip Lew, XBOSoft, USA

Synopsis:

Has your testing career plateaued despite mastering the latest technical skills? In our rush to adopt AI, learn Python, or earn new certifications, we’ve overlooked the fundamental human skills that differentiate successful leaders from individual contributors in the software testing profession.

While technical expertise is necessary, it’s rarely sufficient for career advancement. The critical difference between staying an individual contributor and becoming an influential leader often comes down to one undervalued skill set: Emotional Intelligence (EI).

As testing professionals return to collaborative office environments post-pandemic, many are discovering a skill gap. Years of remote work have eroded our ability to navigate in-person team dynamics and build relationships through spontaneous interactions—precisely the skills that differentiate human testers from automated systems.

In this keynote, Phil shares his transformation from a technically focused engineer to a successful entrepreneur, corporate executive, and testing leader. He reveals the specific EI skills that propelled his career forward when technical knowledge alone couldn’t and demonstrates how these capabilities can be systematically developed using an engineering mindset.

You’ll discover:

    • How EI can be your career “multiplier,” amplifying the impact of your technical testing skills
    • Why face-to-face collaboration skills have become the new competitive advantage in the post-pandemic workplace
    • A structured framework for developing EI that appeals to the analytical mindset of testing professionals
    • Practical techniques for remaining emotionally centered during testing challenges, including:
        o Managing imposter syndrome during complex test implementations
        o Responding constructively to failed test cycles or critical bugs
        o Navigating rejection of test approaches or automation proposals
        o Transforming passive-aggressive team dynamics into productive collaboration
        o Building stakeholder support for testing initiatives
        o Addressing resistance from developers or product owners
        o Creating psychological safety within cross-functional testing teams

Unlike technical skills with clear learning paths, EI is rarely taught systematically in testing careers—yet it’s often the determining factor in who advances to leadership roles. Through real-world examples and practical frameworks, you’ll learn how to recognize emotional patterns in workplace testing scenarios and apply specific EI techniques to transform challenges into opportunities for growth.

For testers who aspire to lead, this session offers a blueprint for developing the human-centered skills that AI can never replicate—and that will remain essential even as testing tools become increasingly automated.

Keynote 3: Building Trust into AI-Driven Software


Speaker: Vipul Kocher, TestAIng, India

Synopsis:

AI systems, including large language models, often rely on neural networks that are highly complex and difficult for humans to fully comprehend, raising critical questions about their security, integrity, and reliability. How can we place trust in such systems?

In this keynote, we’ll examine these challenges and explore solutions through techniques like transparency, explainability, and debiasing, alongside proactive measures such as security and privacy stress-testing. Learn how these approaches can ensure AI-driven software delivers both exceptional performance and unwavering trustworthiness.

Keynote 4: Enhancing Test Improvement Processes with AI


Speaker: Simon Frankish, Experimentus, UK

Synopsis:

Take a closer look at how AI technologies can be leveraged to enhance and complement SQA and testing processes in organisations, using the TMMi framework as a reference model.

Workshop Titles & Synopsis

Workshop 1A: Transforming Agile Software Testing with Generative AI


Speaker: Dr. Pavan Mulkund, FedEx Institute of Technology, University of Memphis, USA

Synopsis:

Agile methodologies have fundamentally transformed software development, yet traditional manual testing methods frequently fall short in meeting the rapid, iterative demands of modern agile cycles. Generative AI (Gen AI) emerges as a game-changing innovation, enabling agile teams to automate intricate testing tasks, vastly expand test coverage, and swiftly pinpoint defects with unprecedented precision. This hands-on workshop empowers participants with practical tools and insights to integrate Gen AI seamlessly into their agile testing workflows.

Through engaging demonstrations and immersive exercises, attendees will master how to utilize Gen AI for adaptive test planning, automated test case generation, intelligent defect detection, and comprehensive code-change analysis within streamlined continuous integration and deployment (CI/CD) pipelines. Participants will experience firsthand how AI-driven methodologies significantly boost testing efficiency, drastically cut cycle times, and enable proactive shift-left testing practices that reduce costly late-stage defects.

The workshop offers an end-to-end perspective and further tackles real-world adoption challenges, equipping attendees with actionable strategies to overcome resistance, bridge skill gaps, and ensure robust human oversight in AI-driven environments. By attending, software testers, QA managers, and agile leaders will gain transformative expertise that positions their organizations at the forefront of agile innovation, fostering a culture of continuous learning, agility, and technological excellence.

Workshop 1B: Developing Effective Tests for Assessing Ethical Principles in AI Systems


Speaker: Dr. Pavan Mulkund, FedEx Institute of Technology, University of Memphis, USA

Synopsis:

Artificial Intelligence (AI) systems have significantly reshaped numerous sectors through innovation and improved operational efficiency. However, their increasing adoption introduces profound ethical challenges, such as fairness, accountability, transparency, and bias. To responsibly leverage AI’s transformative potential, organizations must adopt rigorous methods to systematically assess ethical adherence.

This workshop offers participants a practical exploration of a comprehensive test case generation framework designed explicitly for evaluating AI systems against ethical standards. Participants will engage in interactive sessions that delve into core ethical principles, including fairness, accountability, transparency, privacy, and non-discrimination. Utilizing the Goal-Question-Metric (GQM) methodology, attendees will learn to articulate clear ethical goals, formulate targeted assessment questions, and apply suitable quantitative, qualitative, and hybrid metrics for comprehensive ethical evaluation.

Through practical exercises, collaborative discussions, and expert insights, attendees will gain hands-on experience developing high-level test cases tailored for formative and summative evaluations in both artificial and real-world scenarios. Additionally, participants will learn how to effectively conduct qualitative interviews with stakeholders such as AI ethicists, developers, regulators, and end-users, enabling richer insights into ethical considerations and evaluation practices.

This workshop is ideal for AI developers, ethicists, policymakers, and industry leaders committed to ensuring ethical integrity in AI deployment, enhancing transparency, and building public trust in AI technologies.

Workshop 2A : Public Speaking for Enginerds


Speaker: Philip Lew, XBOSoft, USA

Synopsis:

Public speaking is critical for engineering professionals who want to communicate their ideas effectively and lead in their field. Many engineers excel at technical problem-solving but struggle to present their insights compellingly. This workshop bridges that gap, offering a systematic approach to improving public speaking skills.

The core philosophy of this workshop is simple yet powerful: “Practice With Feedback Makes Perfect.” Unlike the familiar adage that mere practice leads to improvement, this workshop emphasizes receiving immediate, constructive feedback to accelerate learning and skill development.

Intended Audience

This workshop is tailored specifically for engineers at all levels who want to:

– Communicate more clearly in internal meetings

– Present more effectively to clients

– Improve presentation skills for management interactions

– Overcome public speaking anxiety

– Develop a structured approach to communication

Workshop Structure and Key Learnings:

– 3-4 sessions, each lasting one hour

– 10-minute breaks between sessions

– Maximum of 20 participants

– Highly interactive learning environment

 

During each session, participants will explore and practice critical public speaking elements:

  1. Talk Structure

     – How to start and end a presentation effectively

     – Creating compelling narrative arcs

  2. Physical Presence

     – Body posture and its impact on audience perception

     – Strategic movement and stage utilization

     – Meaningful gesture integration

  3. Vocal Techniques

     – Voice inflection and tonality

     – Modulating speech for emphasis and engagement

  4. Audience Connection

     – Reading and responding to audience energy

     – Techniques for maintaining engagement

     – Handling questions and interactions

 

Unique Workshop Approach

Unlike traditional workshops that rely on theoretical instruction, this program emphasizes immediate practical application:

– Learn a concept

– Practice the concept on the spot

– Receive real-time feedback

– Immediately re-attempt with incorporated improvements

 

Additional Features

– Possible video recording of sessions

– Personalized feedback on individual performance

– Opportunity to observe and learn from peers

 

Learning Outcomes

Participants will:

– Develop a personalized framework for public speaking improvement

– Gain confidence in their communication abilities

– Understand systematic methods for skill enhancement

– Receive actionable feedback on their current skill level

 

Personal Context

The workshop is led by Phil Lew, an introverted engineer who has worked hard and continues to work hard to transform and improve his public speaking abilities. Having presented keynotes worldwide, he understands the challenges faced by technical professionals in communication and has developed a structured, engineering-like approach to mastering public speaking.

Workshop 2B: Software Quality Metrics Workshop: Traditional & AI-Enhanced Testing


Speaker: Philip Lew, XBOSoft, USA

Synopsis:

Overview

Software quality metrics are essential for understanding the health and effectiveness of our development and testing processes. However, many organizations struggle to implement metrics that drive business value rather than merely collecting data for its own sake. This workshop addresses this critical gap by teaching participants how to create, implement, and leverage meaningful metrics that connect directly to business objectives.

As artificial intelligence transforms software testing practices, this workshop also explores the unique challenges and opportunities of measuring AI effectiveness in quality assurance workflows. Participants will learn foundational metric principles and approaches for evaluating AI-powered testing initiatives.

Why Metrics Matter

When implementing software quality metrics, we must first understand their purpose and audience. Are these metrics intended to measure people, processes, product quality, or progress toward specific objectives? QA managers typically want to deliver productivity metrics to management, while executives may focus on customer satisfaction indicators or cost-related measures.

This disconnect often leads to metrics that fail to demonstrate value to decision-makers. By developing quality metrics with clear, actionable objectives tied to business goals, you can bridge this communication gap and ensure your metrics drive meaningful improvements rather than becoming forgotten spreadsheets.

Workshop Benefits

This half-day course teaches participants how to develop and implement a metrics framework for development and QA organizations. You’ll learn to:

  • Align quality metrics with strategic business objectives
  • Create metrics that drive action rather than merely reporting information
  • Communicate metrics effectively to different stakeholders
  • Avoid common pitfalls that lead to unintended negative behaviors
  • Adapt traditional metrics for AI-enhanced testing environments

Measuring AI Effectiveness in Software Testing

As organizations incorporate artificial intelligence into testing workflows, new measurement approaches become necessary. This workshop covers essential metrics for evaluating AI testing initiatives and considerations that impact AI testing effectiveness.

Who Should Attend

This workshop is valuable for quality assurance managers, test leads, development managers, product owners, and other stakeholders involved in software quality decisions. Both testing professionals looking to implement AI solutions and those seeking to improve their traditional metrics approaches will benefit from this comprehensive curriculum.

Join us to transform your quality metrics from ignored spreadsheets into powerful tools that drive meaningful business improvements and prepare your organization for the evolving world of AI-enhanced quality assurance.

Workshop 3A: Trusting the Black Box: How Much Can We Rely on AI?


Speaker: Vipul Kocher, TestAIng, India

Synopsis:

In this workshop we will examine the reasons for not trusting AI and the types of solutions where trust in AI is absolutely essential. We will look at:

    • Defining “trust” in AI and what it means for decision-making.
    • Human-in-the-loop approaches and their potential pitfalls.
    • Examples of AI systems where trust can be easily compromised and their consequences.
    • Frameworks for assessing the reliability of AI models in critical applications (e.g., autonomous vehicles, healthcare).

Workshop 3B: Security in AI Systems: Protecting Against Vulnerabilities


Speaker: Vipul Kocher, TestAIng, India

Synopsis:

As AI systems become integral to decision-making across industries, ensuring their security is critical to maintaining trust and reliability. This workshop examines common security threats in AI systems, including adversarial attacks and how they manipulate AI models and undermine trust; defensive mechanisms that protect neural networks, such as adversarial training and robust optimization; and the role of secure hardware (e.g., trusted execution environments).

We will explore the myriad security threats targeting AI, focusing on adversarial and backdoor attacks, model inversion and stealing attacks, and data poisoning on one hand, and deepfakes on the other. We will also look at techniques to counter these risks, including adversarial training and data testing.

Copyright © 2025 SOFTECAsia. All Rights Reserved.
