Free iSQI CT-AI Exam Actual Questions

The questions for CT-AI were last updated on Dec 15, 2025.

At ValidExamDumps, we consistently monitor updates to the iSQI CT-AI exam questions by iSQI. Whenever our team identifies changes in the exam questions, exam objectives, exam focus areas or exam requirements, we immediately update our exam questions for both the PDF and online practice exams. This commitment ensures our customers always have access to the most current and accurate questions. By preparing with these actual questions, our customers can successfully pass the iSQI Certified Tester AI Testing exam on their first attempt without needing additional materials or study guides.

Other providers of certification materials often include questions that iSQI has since retired or removed from the CT-AI exam. These outdated questions lead to customers failing their iSQI Certified Tester AI Testing exam. In contrast, we ensure our question bank includes only precise and up-to-date questions, guaranteeing that they match what appears in your actual exam. Our main priority is your success in the iSQI CT-AI exam, not profiting from selling obsolete exam questions in PDF or online practice test formats.

 

Question No. 1

Pairwise testing can be used in the context of self-driving cars to control the explosion in the number of combinations of parameters.

Which ONE of the following options is LEAST likely to be a reason for this incredible growth of parameters?

SELECT ONE OPTION

Correct Answer: C

Pairwise testing is used to handle the large number of combinations of parameters that can arise in complex systems like self-driving cars. The question asks which of the given options is least likely to be a reason for the explosion in the number of parameters.

Different Road Types (A): Self-driving cars must operate on various road types, such as highways, city streets, rural roads, etc. Each road type can have different characteristics, requiring the car's system to adapt and handle different scenarios. Thus, this is a significant factor contributing to the growth of parameters.

Different Weather Conditions (B): Weather conditions such as rain, snow, fog, and bright sunlight significantly affect the performance of self-driving cars. The car's sensors and algorithms must adapt to these varying conditions, which adds to the number of parameters that need to be considered.

ML Model Metrics to Evaluate Functional Performance (C): While evaluating machine learning (ML) model performance is crucial, it does not directly contribute to the explosion of parameter combinations in the same way that road types, weather conditions, and car features do. Metrics are used to measure and assess performance but are not themselves variable conditions that the system must handle.

Different Features like ADAS, Lane Change Assistance, etc. (D): Advanced Driver Assistance Systems (ADAS) and other features add complexity to self-driving cars. Each feature can have multiple settings and operational modes, contributing to the overall number of parameters.

Hence, the least likely reason for the incredible growth in the number of parameters is C: ML model metrics to evaluate functional performance.

References:

ISTQB CT-AI Syllabus Section 9.2 on Pairwise Testing discusses the application of this technique to manage the combinations of different variables in AI-based systems, including those used in self-driving cars.

Sample Exam Questions document, Question #29 provides context for the explosion in parameter combinations in self-driving cars and highlights the use of pairwise testing as a method to manage this complexity.
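To make the combinatorial growth discussed above concrete, the sketch below counts the exhaustive combinations for a few illustrative parameters and then builds a small pairwise suite with a simple greedy algorithm. The parameter names and values are assumptions for illustration only, not taken from the syllabus, and dedicated pairwise tools would typically produce a similarly small (or smaller) suite.

```python
from itertools import combinations, product

# Illustrative parameters for a self-driving scenario; names and values
# are hypothetical, chosen only to show the combinatorial growth.
parameters = {
    "road_type": ["highway", "city", "rural"],
    "weather": ["clear", "rain", "snow", "fog"],
    "feature": ["ADAS on", "ADAS off", "lane change assist"],
    "lighting": ["day", "night"],
}

names = list(parameters)
all_tests = list(product(*parameters.values()))
print("Exhaustive combinations:", len(all_tests))  # 3 * 4 * 3 * 2 = 72

def pairs_of(test):
    """All (parameter, value) pairs exercised by one test case."""
    return set(combinations(zip(names, test), 2))

# Every (parameter, value) pair that a pairwise suite must cover.
required = set()
for test in all_tests:
    required |= pairs_of(test)

# Greedy construction: keep picking the candidate test that covers the
# most still-uncovered pairs. Not optimal, but far smaller than 72 tests.
uncovered = set(required)
suite = []
while uncovered:
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print("Pairwise suite size:", len(suite))  # typically around 12-16 tests
```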


Question No. 2

You have been developing test automation for an e-commerce system. One of the problems you are seeing is that object recognition in the GUI is having frequent failures. You have determined this is because the developers are changing the identifiers when they make code updates. How could AI help make the automation more reliable?

Correct Answer: A

The syllabus discusses using AI-based tools to reduce GUI test brittleness:

'AI can be used to reduce the brittleness of this approach, by employing AI-based tools to identify the correct objects using various criteria (e.g., XPath, label, id, class, X/Y coordinates), and to choose the historically most stable identification criteria.'

(Reference: ISTQB CT-AI Syllabus v1.0, Section 11.6.1)
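As a rough illustration of that idea, the sketch below (using Selenium, with hypothetical locator values for an "Add to cart" button) tries several identification criteria for the same element and keeps a simple success history so that the historically most stable locator is tried first. Commercial AI-based tools learn this weighting automatically from past execution data; this is only a minimal hand-rolled approximation.

```python
from collections import defaultdict
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Historical success score per locator; a real AI-based tool would learn
# this from previous test runs rather than keep it in memory.
stability_score = defaultdict(int)

# Candidate locators for the same element, mirroring the criteria the
# syllabus mentions (id, class, XPath/label). Values are hypothetical.
CANDIDATES = [
    (By.ID, "add-to-cart"),
    (By.CSS_SELECTOR, "button.add-to-cart"),
    (By.XPATH, "//button[normalize-space()='Add to cart']"),
]

def find_resilient(driver):
    """Try locators in order of historical stability; fall back on failure."""
    ordered = sorted(CANDIDATES, key=lambda c: stability_score[c], reverse=True)
    for by, value in ordered:
        try:
            element = driver.find_element(by, value)
            stability_score[(by, value)] += 1   # reward the locator that worked
            return element
        except NoSuchElementException:
            stability_score[(by, value)] -= 1   # penalize broken identifiers
    raise NoSuchElementException("All candidate locators failed")
```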


Question No. 3

An image classification system is being trained for classifying faces of humans. The distribution of the data is 70% ethnicity A and 30% for ethnicities B, C and D. Based ONLY on the above information, which of the following options BEST describes the situation of this image classification system?

SELECT ONE OPTION

Correct Answer: B

A. This is an example of expert system bias.

Expert system bias refers to bias introduced by the rules or logic defined by experts in the system, not by the data distribution.

B. This is an example of sample bias.

Sample bias occurs when the training data is not representative of the overall population that the model will encounter in practice. In this case, the over-representation of ethnicity A (70%) compared to B, C, and D (30%) creates a sample bias, as the model may become biased towards better performance on ethnicity A.

C. This is an example of hyperparameter bias.

Hyperparameter bias relates to the settings and configurations used during the training process, not the data distribution itself.

D. This is an example of algorithmic bias.

Algorithmic bias refers to biases introduced by the algorithmic processes and decision-making rules, not directly by the distribution of training data.

Based on the provided information, option B (sample bias) best describes the situation because the training data is skewed towards ethnicity A, potentially leading to biased model performance.
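A quick way to surface this kind of sample bias is to compare the observed group distribution in the training data against the distribution the system is expected to face in operation. The sketch below uses made-up label counts (70% A overall, with an assumed split among B, C and D) and an assumed roughly uniform target distribution purely for illustration.

```python
from collections import Counter

# Hypothetical per-sample group labels from the training set metadata:
# 70% ethnicity A, 30% split across B, C and D (split is an assumption).
training_labels = ["A"] * 700 + ["B"] * 120 + ["C"] * 100 + ["D"] * 80

counts = Counter(training_labels)
total = sum(counts.values())
observed = {group: n / total for group, n in counts.items()}

# Target distribution the deployed system is expected to encounter
# (assumed uniform here purely for illustration).
expected = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

for group in expected:
    gap = observed.get(group, 0.0) - expected[group]
    flag = "  <-- possible sample bias" if abs(gap) > 0.10 else ""
    print(f"{group}: observed {observed.get(group, 0.0):.0%}, "
          f"expected {expected[group]:.0%}{flag}")
```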


Question No. 4

Which ONE of the following statements correctly describes the importance of flexibility for AI systems?

SELECT ONE OPTION

Correct Answer: C

Flexibility in AI systems is crucial for various reasons, particularly because it allows for easier modification and adaptation of the system as a whole.

AI systems are inherently flexible (A): This statement is not correct. While some AI systems may be designed to be flexible, they are not inherently flexible by nature. Flexibility depends on the system's design and implementation.

AI systems require changing operational environments; therefore, flexibility is required (B): While it's true that AI systems may need to operate in changing environments, this statement does not directly address the importance of flexibility for the modification of the system.

Flexible AI systems allow for easier modification of the system as a whole (C): This statement correctly describes the importance of flexibility. Being able to modify AI systems easily is critical for their maintenance, adaptation to new requirements, and improvement.

Self-learning systems are expected to deal with new situations without explicitly having to program for it (D): This statement relates to the adaptability of self-learning systems rather than their overall flexibility for modification.

Hence, the correct answer is C: Flexible AI systems allow for easier modification of the system as a whole.

References:

ISTQB CT-AI Syllabus Section 2.1 on Flexibility and Adaptability discusses the importance of flexibility in AI systems and how it enables easier modification and adaptability to new situations.

Sample Exam Questions document, Question #30 highlights the importance of flexibility in AI systems.


Question No. 5

You are developing a "flower" ML model... Which of the following describes an objection that you can NEGLECT in your risk assessment?

Choose ONE option (1 out of 4)

Correct Answer: D

The ISTQB CT-AI syllabus explains that reusing pre-trained models is strongly related to similarity between the original task and the new task. Section 1.8 (Pre-trained Models and Transfer Learning) states that reuse is effective when the new task is similar to the original one, such as adapting a cat classifier to classify dog breeds. The syllabus warns about risks related to input differences, data preparation inconsistencies, inherited shortcomings, and explainability issues. These are legitimate objections (matching options A, B, and C) because large differences in image inputs or patterns can undermine transfer learning; misclassification risk can increase; and explainability often decreases when reusing pre-trained models.

However, output differences are NOT a valid concern here. Both the leaf-based and flower-based ML models classify the same plant species, meaning their outputs are identical. The syllabus does not identify output mismatch as a transfer-learning risk. The real risks concern inputs, bias inheritance, model transparency, and training differences, not output labels. Therefore, option D describes an objection that can be safely neglected, because the output classes are the same and do not hinder reuse.
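For context, a typical transfer-learning setup looks like the sketch below (PyTorch, with a generic pre-trained backbone standing in for the leaf model and a hypothetical number of species, since the question's actual models are not available). The output layer keeps the same set of plant-species classes, which is why option D is not a real objection, while the input-side risks remain.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a generic pre-trained image backbone as a stand-in for the reused
# leaf model (the weights API assumes a reasonably recent torchvision).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the reused feature extractor so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head. The output classes are the same plant
# species as in the original task, so the output side carries over.
NUM_SPECIES = 20  # hypothetical number of plant species
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SPECIES)

# Only the new head's parameters are optimized during fine-tuning on
# flower images; input differences (flowers vs. leaves) remain a risk.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```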