Artificial intelligence is making rapid inroads into software quality assurance (SQA). Test scripts now self-heal, machine learning models predict bug-prone code, and bots churn through thousands of test cases overnight. With such advances, it’s fair to ask: Will AI replace human software testers? In Malaysia’s vibrant QA community, the answer is a resounding “No” – but with a twist. Rather than rendering testers obsolete, AI is redefining their role. The Malaysian Software Testing Board (MSTB) holds that human ingenuity and judgment are more critical than ever in this AI-driven era. MSTB has long promoted professional development, best practices, and international standards in testing. As a member of global bodies like ISTQB, IREB, and TMMi, MSTB leads Malaysia’s certification of testers and ensures they’re equipped with world-class skills. Now, on the cusp of SOFTECAsia 2025 – MSTB’s flagship conference themed “AI-Driven SQA Revolution” – industry leaders are uniting to emphasize the synergy between artificial and human intelligence. This article explores why seasoned human testers remain indispensable, even as AI transforms the software testing landscape.
The Rise of AI in Software Testing
There’s genuine excitement in the software testing field about what AI can do. Over the past few years, AI-powered tools have dramatically improved testing speed, coverage, and efficiency. Smarter test automation is one big draw – machine learning models can generate test scripts and adapt them on the fly. For example, AI algorithms can analyze past bug patterns and predict which areas of an application are most likely to fail, focusing tests on those high-risk hotspots (harington.fr). This predictive testing means catching potential weaknesses before they surface as defects, raising software quality early in the development cycle. AI-driven tools are also tackling the bane of test maintenance: unlike brittle scripts with hard-coded locators that break at every layout tweak, some tools use self-healing locators that automatically adjust to minor UI changes. Products like Testim and Applitools demonstrate this adaptability – using AI to dynamically update test steps when the application’s interface changes, reducing the need to rewrite tests after every code tweak (harington.fr). The result is fewer false alarms and less downtime spent fixing test scripts.
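To make the self-healing idea concrete, here is a minimal sketch in Python with Selenium. It is purely illustrative (not how Testim or Applitools work internally, and all selector values are hypothetical): if the primary locator no longer matches after a UI change, the test falls back to alternative attributes instead of failing outright.

```python
# Minimal sketch of the "self-healing locator" idea (illustrative only):
# try a primary locator, then fall back to more resilient alternatives.
from selenium import webdriver  # used in the hypothetical usage below
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

FALLBACK_LOCATORS = [
    (By.ID, "checkout-btn"),                               # primary locator
    (By.CSS_SELECTOR, "[data-test='checkout']"),           # stable test attribute
    (By.XPATH, "//button[normalize-space()='Checkout']"),  # visible button text
]

def find_with_healing(driver, locators=FALLBACK_LOCATORS):
    """Try each locator in turn; report when a fallback 'heals' the step."""
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != locators[0]:
                print(f"Healed locator: fell back to {strategy}='{value}'")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Hypothetical usage:
# driver = webdriver.Chrome()
# driver.get("https://example.com/cart")
# find_with_healing(driver).click()
```

Commercial tools add learning on top of this (ranking candidate locators by past reliability), but the fallback principle is the same.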

Another area AI is transforming is test data and scenario generation. Generative AI can produce a wide range of realistic test data (names, addresses, transactions, etc.) at scale, enabling testers to run scenarios covering diverse conditions without hand-crafting every input. In fact, the 2024 State of Testing survey found that 25% of organizations are already using AI for test case creation, 23% for test optimization, and 20% for test planning (leapwork.com). This means a significant chunk of QA teams are letting AI create or refine test cases – something that would have been unthinkable a decade ago. AI can even simulate complex user behaviors. By observing user interaction patterns, AI tools can automatically create new test cases that mirror real-world usage, expanding coverage into edge cases humans might overlook. It’s little wonder that studies suggest nearly 70% of manual QA tasks could eventually be automated through AI, translating to faster results and wider coverage (devzery.com). More automation also frees human testers from repetitive checks so they can focus on creative testing and user experience evaluation.
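As a rough illustration of what generating diverse, realistic test data at scale looks like in practice, the sketch below uses the open-source Faker library. This is our choice for illustration only – the surveys cited here do not name specific tools, and generative AI services take the same idea much further with model-generated data.

```python
# Sketch of programmatic test-data generation with Faker (illustrative only).
from faker import Faker

fake = Faker()

def make_transaction():
    """Produce one realistic-looking, fully synthetic test record."""
    return {
        "customer": fake.name(),
        "email": fake.email(),
        "address": fake.address().replace("\n", ", "),
        "amount": round(fake.pyfloat(min_value=1, max_value=10_000), 2),
        "currency": fake.currency_code(),
        "timestamp": fake.iso8601(),
    }

# Generate a batch of varied inputs for data-driven tests.
test_data = [make_transaction() for _ in range(1000)]
print(test_data[0])
```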
The benefits go on: faster test execution (running thousands of checks in a fraction of the time), improved accuracy (eliminating human errors like missed steps), and continuous testing in DevOps pipelines. With AI integrated into CI/CD, tests run around the clock and alert teams of issues immediately, speeding up feedback loops. All these advances have led to broad adoption. One recent survey of large firms found 79% of companies have already embraced AI-augmented testing, and most plan to increase investment in the coming year (leapwork.com). Clearly, AI is revolutionizing how we assure quality – making testing faster, smarter, and more cost-effective. This AI-driven SQA revolution is the backdrop for Malaysia’s upcoming SOFTECAsia 2025 conference, where practitioners will share cutting-edge use cases and tools. But amidst the enthusiasm, it’s important to separate hype from reality. What can AI truly handle in testing, and where do humans still shine? That brings us to some prevalent myths – and the real story behind them.
Realities vs. Myths in AI-Driven Testing
Myth 1: “AI will replace human testers entirely.”
This sensational idea has gained traction with the rise of sophisticated testing bots. However, the reality is far more nuanced. AI excels at crunching data and executing predefined tasks, but it lacks the uniquely human qualities needed for thorough software quality assurance. As one QA expert put it, “AI is a tool — a powerful one, but a tool nonetheless. Just as automation didn’t eliminate testers but transformed their role, AI will do the same.” (medium.com). We’ve seen this movie before: years ago, the spread of test automation led many to fear for manual testers’ jobs. In practice, automation redefined testers’ work rather than replacing them, pushing them toward strategy, risk analysis, and creative test design instead of repetitive checking. The introduction of AI is following a similar script. Yes, AI can automate routine checks and generate test scripts, but experienced testers are still needed to design the overall test strategy, decide what to test in the first place, and interpret the results.

Notably, AI has critical limits in areas requiring human judgment. One limitation is intuition. AI works by recognizing patterns in data, but it doesn’t possess a tester’s intuition for sensing when something “feels off” in a software feature or anticipating rare corner cases. “AI can execute tests, but it doesn’t understand business goals, customer behavior, or the end-user experience,” notes one analysis – in other words, AI lacks context awareness of the product’s real-world use (medium.com). A human tester, drawing on domain knowledge and user empathy, can explore scenarios that weren’t obvious from requirements – those subtle usability quirks or culturally sensitive issues that an algorithm trained on historical data simply wouldn’t know about. AI also doesn’t ask “Why?” If requirements are incomplete or flawed, an AI won’t raise a flag; it dutifully tests whatever it’s told. Testers, on the other hand, routinely challenge assumptions – catching gaps or ambiguities in specifications that need clarification before coding even begins. This critical thinking is a hallmark of human testers that no AI today can replicate.
Another reality check: debugging and interpreting failures remain a human forte. AI-driven tests might tell you that a certain step failed, but figuring out why (was it a code bug, an environment issue, or a test script error?) often requires human investigation. In fact, researchers have found that current AI agents struggle with debugging code effectively. Even with advanced tools at their disposal, “AI agents can’t reliably debug software,” leading computer scientists concluded in 2025 (arstechnica.com). This underscores that when a test fails, human engineers are still on the hook to diagnose the root cause and decide the fix. Anyone who’s triaged a complex bug knows it requires judgment, experience, and sometimes creative sleuthing – strengths of skilled testers, not AI algorithms.
Myth 2: “AI can test everything, leaving nothing for humans to do.”
Reality: AI’s scope, while broad, is not all-encompassing. Consider usability, look-and-feel, and ethical compliance – areas that demand human sensibility. An AI script might confirm that a feature technically meets the specified requirements yet miss that the workflow is confusing to users or that the content is inappropriate. Human testers are vital for this kind of qualitative assessment. They notice if an app’s interface is unintuitive or if an AI-driven feature is making biased decisions that could upset users or regulators. In fields like healthcare and finance, testers provide a conscience and a safety net, checking that AI recommendations or automation outcomes align with ethical norms and legal requirements. It’s telling that in a recent industry survey, 68% of C-suite executives said human validation remains essential in the QA process, even as AI tools are adopted (leapwork.com). Leaders clearly recognize that “trust but verify” is the mantra – let AI handle the grunt work but have humans in the loop to verify results and ensure nothing critical slips by.

Crucially, AI itself needs oversight. Left unchecked, AI-driven testing could produce false positives (flagging issues that aren’t real problems) or false negatives (missing bugs that a human would catch by thinking outside the script). Testers act as the sense-makers. As one QA team described, “AI does not replace human testers; instead, it complements their cognitive skills, creativity, problem-solving abilities and emotional intelligence.” (luckie.com). In other words, AI can crunch the numbers and even highlight patterns, but humans provide the insight and intuition to make testing truly effective. This complementary relationship is why AI isn’t poised to turn software testing upside-down overnight. Even vendors of testing AI acknowledge its partial role. Original Software, a maker of testing tools, notes that AI can improve certain testing tasks like object recognition and results analysis, “however, AI will never be able to take over the whole testing process.” (erp.today). Complex enterprise systems, for instance, involve intricate data and business logic that are beyond what current AI can fully comprehend or generate tests for. In short, the myth of total replacement is busted: AI expands what testers can do, but it doesn’t eliminate the need for human expertise.
Why Human Expertise Remains Essential
If AI handles the grunt work, what is the evolving role of human testers? Far from being edged out, testers are becoming strategic leaders and guardians of quality in the AI era. Their expertise is essential in a number of high-value areas:
- Defining Test Strategy and Coverage: Human testers decide what needs to be tested in the first place. They leverage domain knowledge, user stories, and risk analysis to prioritize testing on features that matter most to users and business. An AI might generate thousands of test cases, but it takes a person to ensure the test coverage is meaningful and aligned with real-world usage. Testers also design the edge-case scenarios and exploratory charters that push the software in unpredictable ways – something AI, bound by training data, wouldn’t instinctively do. As MSTB emphasizes through its training programs, a tester’s analytical thinking and planning skills are key assets. Certification schemes under ISTQB® (for testing techniques) and IREB (for requirements engineering) instill this big-picture thinking, ensuring that Malaysian QA professionals can craft test plans that make intelligent use of AI tools without leaving gaps.
- Interpreting AI Outputs: Today’s testers often find themselves in the role of an AI supervisor – monitoring the results that automation and AI tools churn out, and making judgment calls. For example, if an AI-driven visual testing tool flags a UI difference, the human tester judges whether it’s an acceptable change or a bug. If a machine learning model reports an anomaly in system logs, a human investigates whether it’s a real security threat or a benign pattern. This interpretative layer is crucial. In practice, testers act as the last mile of quality – reviewing AI’s findings and approving what gets sent to clients. Their intuition and understanding of user expectations can catch subtleties that an algorithm might pass over.
- Ethical Oversight and Quality Governance: As software systems incorporate AI (think of an app using AI to make loan decisions, or a chatbot giving medical advice), testers become the guardians of ethics and compliance. They need to ask: Is the AI making fair and unbiased decisions? Is user data handled securely and in line with privacy laws? These are not abstract questions – regulators worldwide are drafting AI oversight rules, and companies will rely on QA professionals to ensure compliance. Human testers are uniquely positioned to set ethical boundaries in how AI is used. They can design tests for fairness (e.g. verifying an AI’s recommendations don’t disadvantage certain groups; a simple sketch of such a check follows this list) and for transparency (e.g. does the system clearly explain an AI-made decision to the user?). In fact, the SOFTECAsia 2025 program directly tackles this issue, with a workshop by Dr. Pavan Mulgund explicitly focusing on “developing effective tests for assessing ethical principles in AI systems”. This reflects a growing understanding that quality isn’t just about functionality, but also about trust and responsibility. As AI proliferates, human QA experts will be the ones to say “this AI feature is not ready to ship because it violates our ethics or regulatory standards,” and to guide the fix.
- User Experience & Empathy: Great testers have a knack for thinking like end-users – a blend of psychology and curiosity that leads them to discover what truly improves or hurts user experience. AI cannot replicate genuine human empathy or creativity. For example, an AI might test that a workflow technically works, but a human tester will notice if the workflow is frustrating or illogical to a human user. This human-centric approach is what distinguishes products that delight customers from those that merely function. Testers should use AI tools to amplify their creative probing of a product, not to replace their judgment. In practice, a tester might use an AI tool to generate dozens of test variations, then apply human creativity to dream up additional “what if” scenarios the tool didn’t cover. They might use analytics to identify an unusual usage pattern, then personally design a test to see how the software handles it. The human imagination and curiosity remain irreplaceable.
- Process Improvement and Leadership: With routine tasks automated, testers are increasingly taking on the role of quality coaches and leaders within development teams. They ensure that testing is not merely a phase, but a mindset baked into the software development life cycle. This strategic role is something MSTB actively nurtures – for instance, by promoting the Test Maturity Model integration (TMMi) in Malaysia. MSTB serves as the local chapter of the TMMi Foundation, guiding organizations in improving their test processes. A key part of process improvement today is figuring out where AI can help and where it can’t. Test managers (often experienced testers themselves) need to set policies for AI usage: which tools to trust, how to validate their recommendations, and how to maintain quality objectives. Simon Frankish, a veteran TMMi assessor and speaker at SOFTECAsia 2025, stresses understanding “how AI can complement test process improvement” as an important topic for the future. The message is that human leaders must steer the integration of AI into QA in a way that truly enhances quality. They are the ones who will mix the right cocktail of human and machine strengths for their context.
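To make the fairness testing mentioned above concrete, here is a minimal sketch of a demographic-parity check against a hypothetical loan-approval model. The predict() interface, the group breakdown, and the 10% threshold are illustrative assumptions, not a standard; real limits come from your regulator and risk policy.

```python
# Illustrative fairness check: compare approval rates across applicant groups
# for a hypothetical model exposing predict(applicant) -> 0/1.
def approval_rate(model, applicants):
    decisions = [model.predict(a) for a in applicants]
    return sum(decisions) / len(decisions)

def test_demographic_parity(model, applicants_by_group, max_gap=0.10):
    """Fail if approval rates across groups differ by more than max_gap."""
    rates = {group: approval_rate(model, apps)
             for group, apps in applicants_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    assert gap <= max_gap, (
        f"Approval-rate gap {gap:.2%} exceeds {max_gap:.0%}: {rates}"
    )
```

A tester would run a check like this on representative sample data alongside functional tests, so that a biased model fails the build just as a broken feature would.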

In short, human expertise provides interpretation, oversight, creativity, and strategic direction – none of which can be automated. This perspective is embedded in MSTB’s mission. Since its inception, MSTB has built a strong software testing ecosystem by collaborating with industry, government, and academia. Through training programs and certifications, it has raised a generation of testers who don’t just execute tests but understand quality from a holistic view. That foundation is exactly what’s needed now, as testers evolve into human-centric quality guardians in the age of AI.
Be Part of the Conversation – Join SOFTECAsia 2025
If you’re an experienced tester wanting to thrive in the AI era, there’s no better place to learn and share than SOFTECAsia 2025. SOFTECAsia is Asia’s premier conference on software testing and quality assurance, organized annually by MSTB. Since 2008, it has grown from a national event into a regional hub of thought leadership, consistently drawing global experts to Malaysia. Every year the conference tackles timely themes – from fundamentals of test engineering in its early days to DevOps and test automation in recent years. This year’s edition zeroes in on AI’s transformative impact on SQA.

What can participants expect? For starters, an impressive lineup of international and local speakers who are at the forefront of testing and AI. The keynote sessions feature global thought leaders including Prof. Jasbir Dhaliwal (University of Memphis, USA), Philip Lew (CEO of XBOSoft, USA), Vipul Kocher (CEO of TestAIng and President of the Indian Testing Board), and Simon Frankish (Practice Lead at Experimentus, UK) – names well-known in the QA world. Each brings a unique perspective: Prof. Dhaliwal, for example, is an early pioneer in AI validation and explainable AI dating back to the 1990s, and he’ll share a high-level CIO/CTO viewpoint on integrating AI into quality strategies. Philip Lew, by contrast, is focusing on the human factor – his keynote titled “Use Emotional Intelligence, Not Artificial Intelligence to Accelerate Your Career” will remind us that soft skills and human insight drive testing careers forward, even in an AI-rich environment. Vipul Kocher will delve into “Building trust into AI-driven software,” addressing how we ensure AI-infused systems remain reliable and accountable. And Simon Frankish will speak on “Enhancing Test Improvement Processes with AI,” tying AI adoption to frameworks like TMMi for continuous improvement. It’s a well-rounded program that covers technical, managerial, and ethical dimensions of AI in QA.
Beyond the keynotes, SOFTECAsia 2025 offers hands-on tutorials, panel discussions, and networking that make it a must-attend event. Expect practical workshops where you might get to try out AI testing tools or learn new test design techniques. (For instance, a tutorial on using Generative AI in agile testing by Dr. Pavan Mulgund is on the agenda, promising a deep dive into how AI can assist in an Agile team’s testing activities.)
Crucially, SOFTECAsia isn’t just about listening – it’s about connecting. In 2025, over 200 professionals are expected to attend, ranging from test engineers and QA managers to tech executives and academics. This creates a rich environment for networking. You could find yourself sharing war stories with a test manager from a finance giant about AI tools or brainstorming over coffee with a startup founder on how to instill quality culture. Past attendees often cite how energizing it is to meet others who are passionate about quality – you realize you’re part of a larger movement, not fighting the “AI testing” battles alone.
SOFTECAsia 2025 is more than just a conference – it’s a rallying point for the testing community in the age of AI. It’s where you can absorb cutting-edge knowledge, get inspired by experts, and find your tribe of fellow QA enthusiasts. As the event promo puts it, “Learn. Share. Grow” – three simple words that capture the spirit of SOFTECAsia. Whether you are a seasoned test manager or a budding QA engineer, being part of this conversation will arm you with ideas and contacts to drive your career and your organization’s quality practices forward.
The Opportunity for Professionals and Organizations
The rise of AI in testing is not a threat; it’s an opportunity – but only for those willing to adapt and engage. For individual QA professionals, this is a moment to elevate your role and become a key player in your organization’s digital transformation. Staying informed is a career advantage: those who understand AI’s capabilities and limitations will be the ones defining testing strategies and guiding teams in the near future. On the flip side, those who ignore the trend risk being left behind. As Harvard professor Karim Lakhani astutely summed up, “AI is not going to replace humans, but humans with AI are going to replace humans without AI.” (wipro.com). In the context of QA, that means the testers who learn to leverage AI will outshine those who stick strictly to old ways. Fortunately, gaining these skills is very much within reach – and MSTB is at the forefront of enabling it.

Upskilling and continuous learning are the watchwords. Testers should proactively seek knowledge on both testing fundamentals and new AI techniques. On the fundamentals side, if you haven’t already, consider obtaining reputable certifications that solidify your expertise. MSTB offers globally recognized certifications in software testing (ISTQB Certified Tester Scheme), test process improvement (TMMi Professional), and requirements engineering (IREB Certified Professional). These certifications ensure you have the core competencies to design solid tests, evaluate processes, and understand requirements – skills that become even more important when working with AI tools. For instance, an ISTQB® Certified Tester will have a strong grasp of test design techniques that they can then automate or enhance with AI. A TMMi Professional knows how to assess and improve an organization’s testing maturity, useful when introducing AI into the workflow in a structured way. And an IREB requirements engineer can help ensure that AI-driven features have clear, testable requirements (preventing the “black box” syndrome where nobody knows what the AI is supposed to do). These certifications, administered by MSTB in Malaysia, are more than just paper credentials – they imbue a mindset of quality that allows testers to lead in an AI world.
On the emerging tech side, testers should familiarize themselves with AI and analytics as they pertain to QA. This doesn’t mean you need to become a data scientist, but understanding basic concepts of machine learning, knowing the strengths of various AI testing tools, and being aware of AI ethics will go a long way. There are plenty of resources to help. For example, the ISTQB® has introduced an AI Testing certification (CT-AI) covering how AI can be used in testing and how to test AI systems. Books like “Human + Machine: Reimagining Work in the Age of AI” by Paul Daugherty and H. James Wilson can provide insight into how AI is changing the nature of work and how QA fits in. Regularly reading industry research helps too – the State of Testing Report can highlight what skills are in demand. Notably, AI and machine learning skills in testing have surged from niche to mainstream: one report noted that the percentage of organizations seeking AI/ML skills in testers tripled from 7% in 2023 to 21% in 2024 (leapwork.com), reflecting the new expectations on QA roles.
For organizations – whether a startup or a large enterprise – the message is: don’t sideline your QA team in the AI revolution, empower them to lead it. Companies that invest in their human testers will reap benefits in quality and velocity. This could mean sponsoring your testers to attend events like SOFTECAsia 2025, where they can bring back fresh ideas and motivation. It also means providing training on new tools and encouraging experimentation. Forward-thinking organizations are already embedding AI into their QA processes, but always with human oversight. For example, a bank might use an AI-driven script to run through hundreds of mobile app transactions, but it’s their knowledgeable testers who review anomalies and ensure compliance with banking regulations. By pairing AI tools with human expertise, organizations can achieve the best of both worlds: the speed and breadth of automation plus the depth and assurance of human judgment.
Moreover, organizations stand to gain a competitive edge by fostering a culture where testers and developers work hand-in-hand on quality. AI in testing can blur the roles (developers might use AI to generate unit tests, testers might need to read code to train an AI model, etc.), so collaboration is key. Companies should encourage cross-pollination of skills – let your testers learn some coding or data analysis, and let your developers learn about test design and ethics. The payoff is higher quality software delivered faster, which in turn means happier customers and lower risk of costly failures. We’ve all seen headlines of software glitches causing service outages or safety issues. In many cases, those incidents could have been prevented by robust testing. In the AI era, “robust testing” will likely involve AI-assisted techniques – but it absolutely will involve humans guiding and approving the results. An investment in your testing team’s growth is an investment in your product’s success and your users’ trust.
To sum up this opportunity: the blending of AI into QA is expanding the influence of testers. No longer confined to writing manual test cases, testers can evolve into quality strategists, automation architects, and AI validators within their organizations. It’s a chance to shed the perception of testers as the “folks at the end of the line” and instead be the quality leaders from the start. By being proactive – learning, experimenting, and engaging with the community – testers can ensure they are the ones wielding the AI tools, not the ones being displaced by them.
Join the Revolution
The age of AI in software testing is here, and it’s an exciting time to be in QA. We are witnessing a transformation in how software quality is achieved: AI brings unprecedented power, speed, and insights, while human testers bring judgment, creativity, and the holistic understanding that quality requires. The story is not one of competition, but of collaboration – a new chapter where human intuition and machine precision work in tandem. As we’ve discussed, human software testers are not only still relevant; they are central to guiding this AI-driven revolution responsibly and effectively. In Malaysia, bodies like MSTB have shown remarkable foresight in preparing testers for this future, from building a talent pipeline of certified professionals to hosting platforms like SOFTECAsia where knowledge is shared openly. It’s a model of how we can embrace technology without losing the human touch.
For any QA professional reading this, consider this a personal invitation and a challenge. Be curious, be adaptive, and be a voice in the future of your profession. Attend conferences, take part in training, and read widely. Engage in discussions about ethics and quality because your insights as a tester are invaluable in shaping how AI is deployed. Most importantly, never underestimate the value of your human perspective. In the words of renowned AI expert Fei-Fei Li, “Artificial intelligence is not a substitute for human intelligence; it is a tool to amplify human creativity and ingenuity.” (nisum.com). Use AI to amplify your abilities, not to replace them.

The coming years will likely see even more advanced AI tools entering our field – from intelligent test agents that can learn an application, to autonomous systems that fix bugs on their own. But however smart these systems become, they will serve our needs and follow our guidelines. Quality remains a human responsibility. So let’s rise to the occasion. Let’s pair our human intuition with AI’s computational scale to achieve levels of software quality previously unimaginable.
As a final call to action: join the conversation and help shape the future. If you can, be at SOFTECAsia 2025 in Kuala Lumpur this September. Lend your voice, absorb the knowledge, and bring it back to your team and community. The event is not just about listening to experts on stage – it’s about realizing that you are part of a global network of professionals defining what “quality” means in an AI-driven world. And if you cannot attend, stay connected through MSTB and other professional networks, and continue investing in your growth. The SQA field has always been about continuous improvement – of processes, of products, and of people. That ethos will carry us through this AI revolution.
Let’s keep the conversation going and the passion for quality alive – see you at SOFTECAsia 2025!