
Quality In, Quality Out: The New Imperative for Testers in the AI Era

For decades, the language of software testing has been one of precision, logic, and code. We communicated through test scripts, defect reports, and execution logs—a dialect understood by machines and specialists. Today, a new and far more powerful language is emerging, one spoken not in rigid syntax but in nuanced conversation. Artificial Intelligence is not merely another tool to be adopted; it is a new intelligence to be directed. To harness its power, we must evolve from being writers of scripts to being architects of dialogue. The quality of our work, and indeed our relevance in this new era, will be measured not by the tests we execute, but by the quality of the questions we ask.

The industry-wide pivot to Agile methodologies, the subsequent rise of test automation that redefined productivity benchmarks, and the dawn of DevOps which reshaped the very culture of collaboration—each of these shifts demanded that professionals adapt, learn, and ultimately, deliver higher quality software, faster. But the change we are facing today, driven by Artificial Intelligence, is a tectonic shift on an entirely different scale. It is a fundamental revolution in how we think, create, and validate.

This evolution is precisely why the Malaysian Software Testing Board (MSTB) has designated “AI-Driven SQA Revolution” as the central theme for our upcoming SOFTECAsia 2025 conference. This is not merely a catchy phrase; it reflects a profound change that is already underway. As an organization, MSTB’s mandate has always been to guide our national tech industry towards global standards of excellence. Since our inception in 2006, we have worked to establish a robust ecosystem where software quality is paramount.

Today, our commitment to that mission requires us to champion the next essential skillset. You have undoubtedly heard the old IT mantra, “Garbage In, Garbage Out.” In the age of Generative AI (GenAI) and Large Language Models (LLMs), that simple phrase has been elevated from a casual warning to the single most critical principle for professional success. Many of us are already interacting with these powerful new tools, but to truly harness their potential, we must evolve from being passive users into being meticulous architects of conversation. We must become prompt engineers. This report serves as a guide for that evolution, outlining why this skill is not just an advantage, but a necessity for every software testing professional in Malaysia today.

SOFTECAsia 2024 Participants

The Tester’s New Co-Pilot: Evolving from AI User to AI Architect

For many in our field, initial interactions with GenAI may have felt like a novelty—useful for drafting an email or planning a holiday, but perhaps not a core professional tool. It is time to decisively move this technology from our personal lives into the heart of our professional workstreams. The critical distinction between receiving a generic, unhelpful response and obtaining a detailed, timesaving, and contextually aware output lies entirely in the quality and precision of our prompts. The ability to instruct an AI with clarity and intent is what separates a gimmick from a professional co-pilot.

As testers, we should not view AI with apprehension but embrace it as a powerful ally to augment our capabilities and boost our efficiency. However, this requires a level of precision that is inherent to our profession. Consider the common task of creating test cases for a login page.  

A weak, un-engineered prompt might be:

“Write test cases for a login page.”

The AI, lacking any specific context or constraints, will likely generate a generic, five-point list that is too rudimentary for any real-world application. It might mention testing with a valid username and password, an invalid one, and perhaps an empty field. While not incorrect, this output is functionally useless for a professional tester, as it lacks the depth, specificity, and structure needed for a formal test suite.

Now, consider a strong, professionally engineered prompt:

“Act as a senior SDET with 10 years of experience in e-commerce testing. You are testing the login page for a new Malaysian online store. The username must be a valid email format. The password requires a minimum of 8 characters, at least one uppercase letter, one number, and one special character. Generate a comprehensive set of test cases using Equivalence Partitioning and Boundary Value Analysis. The output should be a markdown table with the following columns: ‘Test Case ID’, ‘Test Objective’, ‘Test Steps’, and ‘Expected Result’. Include both positive and negative test cases covering valid credentials, invalid passwords of varying lengths and compositions, invalid email formats, and empty fields.”

The difference in the resulting output will be night and day. The second prompt succeeds because it transforms the AI from a general-purpose tool into a specialized expert. This is achieved by systematically providing the essential elements of a well-defined task, a process that should feel intimately familiar to any software tester.
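The anatomy of that strong prompt, namely persona, context, technique, and output format, can even be captured as a reusable template. Below is a minimal Python sketch; the class and field names are illustrative, not drawn from any particular framework:

```python
# Minimal sketch: assemble a structured prompt from reusable parts.
# Field names (persona, context, technique, output_format, constraints)
# are illustrative, not from any specific library.
from dataclasses import dataclass, field


@dataclass
class TestPrompt:
    persona: str
    context: str
    technique: str
    output_format: str
    constraints: list = field(default_factory=list)

    def render(self) -> str:
        parts = [
            f"Act as {self.persona}.",
            self.context,
            f"Apply the following techniques: {self.technique}.",
            f"Format the output as: {self.output_format}.",
        ]
        if self.constraints:
            parts.append("Constraints: " + "; ".join(self.constraints))
        return "\n".join(parts)


prompt = TestPrompt(
    persona="a senior SDET with 10 years of e-commerce testing experience",
    context=("You are testing the login page of a Malaysian online store. "
             "Usernames must be valid emails; passwords need 8+ characters, "
             "one uppercase letter, one number, and one special character."),
    technique="Equivalence Partitioning and Boundary Value Analysis",
    output_format=("a markdown table with columns 'Test Case ID', "
                   "'Test Objective', 'Test Steps', 'Expected Result'"),
)
print(prompt.render())
```

Keeping the parts separate like this makes it easy to swap a persona or tighten a constraint without rewriting the whole prompt.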

This process reframes the tester’s role. Testers are no longer just executing tests; they are performing a critical quality assurance function on the input to the AI. This is a natural extension of a tester’s core competency: the ability to define precise conditions, actions, and expected outcomes. By applying this discipline to how we communicate with AI, we move from being passive consumers of its output to being active architects of its entire thought process, ensuring the “Garbage In, Garbage Out” principle works in our favour.

The Five Core Principles of High-Impact Prompt Engineering for Testers

To move from ad-hoc prompting to a structured, repeatable methodology, testers can master a set of core principles. These principles transform a simple question into a powerful instruction set, enabling the creation of high-value testing artifacts with unprecedented speed.

Principle 1: Persona Assumption

Always begin by giving the AI a specific role. This is the most effective way to prime the model to access the most relevant parts of its vast knowledge base. Go beyond generic roles and assign nuanced personas tailored to the testing task at hand.

  • For functional testing: “Act as a meticulous QA analyst verifying every acceptance criterion.”
  • For usability testing: “Act as a first-time, non-technical user who is easily confused by complex interfaces.”
  • For security testing: “Act as a cybersecurity expert attempting to identify potential injection vulnerabilities in web forms.”

Assigning a persona frames the AI’s response style, its vocabulary, and the lens through which it analyzes the problem, leading to more targeted and relevant outputs.

Principle 2: Layered Context

An AI model has no inherent knowledge of your project, your team’s goals, or your users’ needs. Providing rich, layered context is essential to bridge this gap. Instead of assuming knowledge, explicitly provide it. This can include:

  • User Stories and Acceptance Criteria: Paste the full text of a user story to generate relevant test scenarios.
  • Technical Specifications: Provide snippets of API documentation, database schemas, or relevant code functions.
  • User Profiles: Describe the target user to generate test cases that reflect their likely behaviors and pain points.

The more context you provide, the less the AI has to infer, and the more accurate and bespoke its response will be.

Principle 3: Technique and Format Specification

Never leave the structure of the output to chance. Explicitly command the AI to use specific professional techniques and to deliver the results in a ready-to-use format. This is the difference between getting a wall of text and a structured document that can be directly integrated into your workflow. This approach can be seen as a form of non-code-based automation; you are codifying your intent in natural language to produce a predictable, high-quality result.

  • Specify Techniques: “Apply State Transition Testing to the order fulfilment process.” or “Use Pairwise Testing to generate a minimal set of test combinations for these configuration options.”
  • Demand Structure: “Format the output as a Gherkin feature file for BDD.”, “Generate the test data as a JSON array of objects.”, or “Create a mind map of potential test areas using Mermaid syntax.”
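When you demand machine-readable output such as a JSON array, it is also worth validating the response programmatically before using it, since models occasionally return malformed or incomplete structures. A minimal Python sketch, with an illustrative three-key schema:

```python
import json

# Minimal sketch: check that an AI response claimed to be a JSON array of
# test-data objects actually parses and carries the expected keys.
REQUIRED_KEYS = {"name", "email", "age"}  # illustrative schema


def validate_test_data(raw: str) -> list:
    """Parse the model's output; reject anything that is not a JSON
    array of objects containing the required keys."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(data, list):
        raise ValueError("expected a JSON array")
    for i, item in enumerate(data):
        missing = REQUIRED_KEYS - set(item)
        if missing:
            raise ValueError(f"object {i} is missing keys: {missing}")
    return data


sample = '[{"name": "Aisyah", "email": "a@example.com", "age": 30}]'
print(validate_test_data(sample))
```

A check like this turns “demand structure” from a hope into a verified contract on the model’s output.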

Principle 4: Iterative Refinement

Your first prompt should be considered a draft, not a final command. The true power of interacting with GenAI lies in conversation. Treat the interaction as a “conversational debugging” session. If the initial output is not perfect, provide feedback and refine it.

  • To expand: “That’s a good start. Now, add negative test cases for API error codes 401, 403, and 500.”
  • To correct: “In the previous response, you assumed the user was logged in. Please regenerate the test cases for a guest user.”
  • To reformat: “Please reformat the previous list of test cases into a two-column markdown table: ‘Test Scenario’ and ‘Expected Outcome’.”

This iterative process allows you to sculpt the AI’s output with increasing precision, ensuring the result perfectly matches your requirements.
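Conversational refinement maps naturally onto the running message list that most chat-style LLM APIs accept. The Python sketch below shows that shape; `send` here is a placeholder standing in for a real API call:

```python
# Sketch of iterative refinement as a growing message list, the shape most
# chat-style LLM APIs accept. `send` is a stand-in, not a real API client.
def send(messages):
    # Placeholder: a real implementation would call your LLM provider here,
    # passing the full message history so the model retains context.
    return f"(model reply to {len(messages)} messages)"


history = [{"role": "user", "content": "Generate login test cases."}]
reply = send(history)
history.append({"role": "assistant", "content": reply})

# Each refinement is appended, so the model sees the entire conversation.
history.append({"role": "user",
                "content": "Add negative cases for API error codes 401, 403, and 500."})
reply = send(history)
history.append({"role": "assistant", "content": reply})
```

The key design point is that refinements are appended, never sent in isolation: the model only “remembers” what is in the list you pass it.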

Principle 5: Clear Constraints

Just as important as telling the AI what to do is telling it what not to do. Applying clear constraints prevents the model from generating irrelevant or incorrect information and helps focus its output on the specific problem you are trying to solve.

  • Negative Constraints: “Generate test cases for the user profile page, but do not include test cases related to changing the password, as that is handled in a separate module.”
  • Compliance Rules: “Create sample user data for testing. Ensure all generated personal information is fictional and compliant with PDPA guidelines.”
  • Technical Limitations: “Outline an API testing strategy. Note that the legacy endpoint only supports GET and POST requests; do not include tests for PUT or DELETE.”
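Constraints stated in the prompt can also be enforced after the fact with a cheap post-filter over the generated output. A Python sketch, with illustrative out-of-scope terms:

```python
# Illustrative guardrail: even with negative constraints in the prompt, it is
# cheap to filter out generated test cases that stray into excluded areas.
EXCLUDED_TERMS = ("change password", "reset password")  # out-of-scope module


def within_scope(test_case: str) -> bool:
    lowered = test_case.lower()
    return not any(term in lowered for term in EXCLUDED_TERMS)


generated = [
    "Verify profile photo upload accepts PNG files",
    "Verify user can change password from profile page",
    "Verify display name rejects 51-character input",
]
in_scope = [tc for tc in generated if within_scope(tc)]
print(in_scope)
```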

By mastering these five principles, testers can transform GenAI from a simple tool into a sophisticated partner, capable of accelerating test design, data generation, and documentation tasks by an order of magnitude.



Prompt Engineering in Practice: A Framework for Application

To help operationalize these principles, the following matrix provides a practical framework for constructing high-impact prompts across common testing scenarios. It is designed not just to provide examples, but to teach a repeatable methodology for prompt construction.

Interactive Prompting Matrix for Testers


The sections below walk through practical examples for four common testing scenarios.

Security Testing

Generate a preliminary set of test cases to probe for SQL Injection vulnerabilities on a web application’s login form.

Key Principles to Apply

Persona: “Act as a malicious security tester…”
Context: “…given a login page with ‘username’ and ‘password’ input fields…”
Constraints: “…focusing specifically on classic SQL injection and blind SQL injection techniques.”

Example Prompt Snippet

Act as a malicious security tester with expertise in web application vulnerabilities. Given a standard login page with ‘username’ and ‘password’ input fields, generate a set of test cases in a table format to probe for SQL injection vulnerabilities. Include columns for ‘Test Case ID’, ‘Payload to Inject’, and ‘Expected Behavior (e.g., error message, successful login, no change)’.
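Payloads like the ones such a prompt produces can be exercised directly. The Python sketch below uses an illustrative SQLite schema (probe only systems you are authorised to test) to contrast a vulnerable string-formatted query with a parameterised one:

```python
import sqlite3

# Sketch: turn classic SQL-injection payloads into executable checks.
# The schema and payload are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'S3cret!Pass')")


def login_unsafe(user, pw):
    # Vulnerable pattern: user input concatenated straight into SQL.
    q = f"SELECT * FROM users WHERE username = '{user}' AND password = '{pw}'"
    return conn.execute(q).fetchone() is not None


def login_safe(user, pw):
    # Parameterised query: the payload is treated as data, not SQL.
    q = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (user, pw)).fetchone() is not None


payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True: the payload bypasses the check
print(login_safe("alice", payload))    # False: the payload is rejected
```

The “Expected Behavior” column the prompt asks for corresponds directly to the boolean each call returns here.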

API Testing

Create a diverse set of valid and invalid JSON payloads for a ‘createUser’ API endpoint to test its data validation logic.

Key Principles to Apply

Context: “You are testing a ‘POST /api/v1/users’ endpoint. The required JSON schema is…”
Technique: “…apply Equivalence Partitioning and Boundary Value Analysis…”
Format: “…generate an array of 10 distinct JSON objects.”

Example Prompt Snippet

You are an SDET responsible for testing a ‘POST /api/v1/users’ endpoint. The schema requires ‘name’ (string, 5-50 chars), ‘email’ (valid email format), and ‘age’ (integer, 18-99). Using BVA and EP, generate an array of 10 JSON objects for testing. Include payloads with valid data, boundary values for age and name length, malformed email addresses, missing fields, and incorrect data types.
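The boundary-value logic the prompt asks the model to apply can also be cross-checked mechanically. A Python sketch deriving boundary payloads for the schema above (name 5-50 chars, age 18-99); the base record is made up:

```python
import json

# Sketch: mechanically derive boundary-value payloads for the 'createUser'
# schema described above (name 5-50 chars, age 18-99). Base data is made up.
def boundary_payloads():
    base = {"name": "Aisyah", "email": "a@example.com", "age": 30}
    payloads = []
    # Age boundaries: just below, on, and just above each limit.
    for age in (17, 18, 99, 100):
        payloads.append({**base, "age": age})
    # Name-length boundaries.
    for length in (4, 5, 50, 51):
        payloads.append({**base, "name": "x" * length})
    # One malformed email and one missing required field.
    payloads.append({**base, "email": "not-an-email"})
    payloads.append({k: v for k, v in base.items() if k != "name"})
    return payloads


print(json.dumps(boundary_payloads(), indent=2))
```

Comparing a generator like this against the model’s output is a quick way to spot boundary cases the AI missed.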

Acceptance Criteria Translation

Convert a high-level user story into a set of concrete, testable acceptance criteria using Behavior-Driven Development (BDD) syntax.

Key Principles to Apply

Persona: “Act as an experienced Business Analyst…”
Context: “…for the following user story: ‘As a premium subscriber, I want to be able to download articles for offline reading…’”
Format: “…using standard Gherkin syntax (Given/When/Then).”

Example Prompt Snippet

Act as a Business Analyst collaborating with a QA team. Convert the following user story into a comprehensive set of BDD acceptance criteria using Gherkin syntax: “As a premium subscriber, I want to be able to download articles for offline reading so that I can access them without an internet connection.” Create scenarios for successful downloads, attempting to download without a premium subscription, and handling network interruptions.

Performance Test Planning

Outline a structured performance test plan for a newly launched e-commerce product detail page to ensure it meets non-functional requirements.

Key Principles to Apply

Persona: “You are a senior performance test engineer…”
Context: “…for a Malaysian e-commerce site expecting a peak load of 500 concurrent users… The NFR is a page load time under 3 seconds.”
Format: “…a structured test plan outline with sections for Objectives, Scope, Test Types (Load, Stress, Soak), and Key Metrics to Measure.”

Example Prompt Snippet

You are a senior performance test engineer creating a test plan for a new product detail page on a high-traffic Malaysian e-commerce website. The page is expected to handle a peak load of 500 concurrent users with an average response time under 3 seconds. Create a detailed test plan outline in markdown. The plan must include sections for: 1) Test Objectives, 2) In-Scope and Out-of-Scope items, 3) Workload Model, 4) Test Scenarios (e.g., viewing product, adding to cart), and 5) Key Performance Indicators to monitor (e.g., response time, error rate, CPU utilization).
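Once such a test runs, the KPIs the plan names can be computed from raw samples. A Python sketch using a simple nearest-rank percentile; the timings and error count are made up:

```python
# Sketch: evaluate the stated NFR (response time under 3 seconds at peak
# load) from measured samples. All numbers here are illustrative.
def percentile(samples, pct):
    # Nearest-rank percentile: small and dependency-free.
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]


response_times = [1.2, 0.9, 2.8, 3.4, 1.1, 0.7, 2.2, 1.9, 2.6, 1.4]  # seconds
errors = 1
requests = len(response_times)

p95 = percentile(response_times, 95)
error_rate = errors / requests
print(f"p95={p95:.1f}s, error rate={error_rate:.0%}")
```

Reporting a percentile rather than the average matters here: an average under 3 seconds can hide a long tail of slow requests that users actually feel.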

Situating Your Skills: The Broader AI Ecosystem in SQA

Mastering prompt engineering is the essential first step, the foundational skill that unlocks the door to the wider “AI-Driven SQA Revolution”. It is the entry point, but the journey does not end there. Prompting allows you to efficiently generate testing artifacts, but the true challenge for the modern SQA professional is to integrate these new capabilities into our existing processes and to grapple with the new, complex questions that AI introduces.  

This is why SOFTECAsia 2025 has been curated not as a collection of disparate talks, but as a comprehensive, end-to-end learning pathway designed to guide you through this revolution. Once you have mastered how to prompt an AI to generate test cases, the next logical questions arise, and our conference is structured to help you answer them.

Kuala Lumpur Convention Center

Your Call to Leadership: Mastering the Craft at SOFTECAsia 2025

The AI-driven revolution is not a future event; it is happening now. For the Malaysian software testing community, this moment represents an unparalleled opportunity. By mastering the art and science of the prompt, we are not just keeping pace with a global trend; we are positioning ourselves at the forefront, ready to lead the evolution of software quality assurance in our region. This is more than a personal productivity enhancement; it is a prerequisite for leadership in the new SQA paradigm and is essential to upholding Malaysia’s hard-won reputation as a regional hub for software testing excellence.

At MSTB, we recognized that mastering this skill is no longer optional. It is fundamental. That is why we have dedicated a workshop at SOFTECAsia 2025 to this exact topic. We are proud to host the hands-on workshop (5B), “PromptCraft for Test Engineers: Leveraging GenAI for Smarter Software Testing,” led by one of Malaysia’s own experts, the fantastic Pricilla Bilavendran.

This is not just another lecture. It is an interactive, practical tutorial where you will explore everything from the core concepts of prompting to advanced techniques designed specifically for the challenges we face as testers. The session is built for immediate impact, with hands-on exercises to help you design better prompts and compare responses across different tools. Whether you are a functional tester, an SDET, or a test lead, this session is designed to significantly boost your productivity and creativity. Crucially, no prior AI expertise is needed. You will walk away from this tutorial ready and able to turn GenAI into your trusted testing co-pilot, prompt by prompt.

The path forward is clear. The skills that brought us to where we are today will not be enough to take us where we need to go tomorrow. We invite you to join us at the Kuala Lumpur Convention Centre on the 9th and 10th of September 2025. Come to SOFTECAsia 2025 ready to learn, to engage, and to equip yourself with the skills needed to lead the AI-Driven SQA Revolution. Let us embrace this change together and continue to shape the future of software quality in Malaysia and beyond.


Author

Asrul Han

Digital Marketing and Communications Lead