Free The SecOps Group CAP Exam Actual Questions

The questions for CAP were last updated on June 14, 2025

At ValidExamDumps, we consistently monitor updates to The SecOps Group CAP exam questions. Whenever our team identifies changes in the exam questions, exam objectives, focus areas, or requirements, we immediately update our questions for both the PDF and online practice exams. This commitment ensures our customers always have access to the most current and accurate questions. By preparing with these actual questions, our customers can pass The SecOps Group Certified AppSec Practitioner (CAP) exam on their first attempt without needing additional materials or study guides.

Other certification materials providers often include outdated or retired The SecOps Group questions in their CAP products. These outdated questions lead to customers failing their Certified AppSec Practitioner exam. In contrast, we ensure our question bank includes only precise, up-to-date questions that reflect what you will see in the actual exam. Our main priority is your success in The SecOps Group CAP exam, not profiting from selling obsolete exam questions in PDF or online practice tests.

 

Question No. 1

Which is the most effective way of input validation to prevent Cross-Site Scripting attacks?

A. Blacklisting HTML and other harmful characters
B. Whitelisting and allowing only trusted input
C. Using a Web Application Firewall (WAF)
D. Marking Cookie as HttpOnly

Correct Answer: B

Cross-Site Scripting (XSS) attacks occur when an attacker injects malicious scripts (e.g., JavaScript) into a web application, which are then executed in a victim's browser. Effective input validation is a key defense against XSS by ensuring that user input does not contain malicious content.

Option A ('Blacklisting HTML and other harmful characters'): Blacklisting involves blocking known harmful characters (e.g., <, >, &) or patterns. While this can mitigate some XSS attacks, it is not the most effective approach because blacklists can be bypassed (e.g., using alternate encodings, nested tags, or new attack vectors). Blacklisting is inherently reactive and prone to evasion.

Option B ('Whitelisting and allowing only trusted input'): Whitelisting involves defining a strict set of allowed characters or patterns (e.g., only alphanumeric characters for a username). This is the most effective method because it explicitly permits only safe input and rejects everything else, making it much harder for attackers to inject malicious scripts. For example, if a field expects a phone number, a whitelist might allow only digits, spaces, and dashes, rejecting any HTML or script tags outright.

Option C ('Using a Web Application Firewall (WAF)'): A WAF can help detect and block XSS attacks by filtering malicious requests, but it is not an input validation method. WAFs are a secondary defense and can be bypassed; they are not a substitute for proper validation at the application level.

Option D ('Marking Cookie as HttpOnly'): The HttpOnly flag prevents cookies from being accessed by JavaScript, mitigating the impact of XSS (e.g., stealing session cookies), but it does not prevent the XSS attack itself. It addresses the consequence, not the root cause, and is not an input validation technique.

The correct answer is B, aligning with the CAP syllabus under 'Cross-Site Scripting (XSS)' and 'Input Validation Best Practices.'
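
To make the contrast concrete, here is a minimal Python sketch (the filter, the phone-number pattern, and the function names are illustrative, not from the exam). The blacklist filter strips <script> tags yet is bypassed by an event-handler payload and by nested tags, while the whitelist validator rejects anything outside its allowed character set:

import re

# Blacklist (option A): strip <script>...</script> blocks and nothing else.
def blacklist_filter(user_input: str) -> str:
    return re.sub(r"<script.*?>.*?</script>", "", user_input,
                  flags=re.IGNORECASE | re.DOTALL)

print(blacklist_filter("<img src=x onerror=alert(1)>"))
# -> <img src=x onerror=alert(1)>  (no <script> tag, passes through untouched)
print(blacklist_filter("<scr<script></script>ipt>alert(1)</scr<script></script>ipt>"))
# -> <script>alert(1)</script>     (nested tags reassemble after one stripping pass)

# Whitelist (option B): a phone-number field accepts only digits, spaces, and dashes.
PHONE_PATTERN = re.compile(r"[0-9 \-]{7,20}")

def validate_phone(value: str) -> str:
    if not PHONE_PATTERN.fullmatch(value):
        raise ValueError("input contains characters outside the whitelist")
    return value

validate_phone("020-7946-0958")               # accepted
# validate_phone("<script>alert(1)</script>") # rejected: raises ValueError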


Question No. 2

Based on the below request/response, which of the following statements is true?

GET /dashboard.php?purl=http://attacker.com HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Cookie: JSESSIONID=38RB5ECV10785B53AF29816E92E2E50
Te: trailers
Connection: keep-alive

HTTP/1.1 302 Found
Date: Sat, 03 Dec 2022 17:38:18 GMT
Server: Apache/2.4.54 (Unix) OpenSSL/1.0.2k-fips PHP/8.0.25
X-Powered-By: PHP/8.0.25
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Location: http://attacker.com
Set-Cookie: JSESSIONID=38C5ECV10785B53AF29816E92E2E50; Path=/; HttpOnly

A. Application is likely to be vulnerable to Open Redirection vulnerability
B. Application is vulnerable to Cross-Site Request Forgery vulnerability
C. Application uses an insecure protocol
D. All of the above

Correct Answer: A

The request is a GET to /dashboard.php with a purl parameter (http://attacker.com). The response is a 302 Found redirect with a Location: http://attacker.com header, indicating the server redirects the client to the URL specified in the purl parameter. Let's evaluate the statements:

Option A ('Application is likely to be vulnerable to Open Redirection vulnerability'): Correct. Open Redirection occurs when an application redirects to a user-supplied URL without validation. Here, the purl parameter (http://attacker.com) is directly used in the Location header, allowing an attacker to redirect users to a malicious site (e.g., for phishing). This is a classic Open Redirection vulnerability if the application does not restrict redirects to trusted domains.

Option B ('Application is vulnerable to Cross-Site Request Forgery vulnerability'): Incorrect. CSRF involves tricking a user into making an unintended request (e.g., via a malicious form). This response does not indicate a CSRF issue; there's no evidence of state-changing actions or lack of CSRF tokens.

Option C ('Application uses an insecure protocol'): Incorrect. The redirect destination (http://attacker.com) does use plain HTTP, but the capture does not establish which protocol the application itself is served over; the initial request could equally have been made over HTTPS. The insecure URL in the Location header is attacker-supplied input, so it belongs to the Open Redirection finding rather than to a broader claim about the application's protocol usage.

Option D ('All of the above'): Incorrect, as only A is true.

The correct answer is A, aligning with the CAP syllabus under 'Open Redirection Vulnerabilities' and 'URL Redirection Attacks.'
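
A minimal sketch of the server-side fix, written here as a Flask-style handler (the framework, route, and allowlist are illustrative; only the purl parameter comes from the capture above). The idea is to validate the redirect target against an allowlist of trusted hosts before issuing the 302:

from urllib.parse import urlparse

from flask import Flask, abort, redirect, request

app = Flask(__name__)
ALLOWED_HOSTS = {"example.com"}  # hypothetical allowlist of trusted redirect targets

@app.route("/dashboard.php")
def dashboard():
    purl = request.args.get("purl", "/")
    # Vulnerable pattern implied by the capture: return redirect(purl) with no checks.
    host = urlparse(purl).netloc
    if host and host not in ALLOWED_HOSTS:
        abort(400)  # refuse off-site redirects such as http://attacker.com
    return redirect(purl)

Relative paths (empty netloc) and allowlisted hosts are redirected; everything else, including http://attacker.com, is rejected.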


Question No. 3

After purchasing an item on an e-commerce website, a user can view their order details by visiting the URL:

https://example.com/?order_id=53870

A security researcher pointed out that by manipulating the order_id value in the URL, a user can view arbitrary orders and sensitive information associated with that order_id. This attack is known as:

A. Insecure Direct Object Reference
B. Session Poisoning
C. Session Riding OR Cross-Site Request Forgery
D. Server-Side Request Forgery

Correct Answer: A

The scenario describes a vulnerability where a user can manipulate the order_id parameter in the URL (e.g., https://example.com/?order_id=53870) to access other users' order details, indicating a lack of proper access control. This is a classic case of an Insecure Direct Object Reference (IDOR) attack. IDOR occurs when an application exposes a reference to an internal object (e.g., an order ID) that can be manipulated by an unauthorized user to access resources they should not have access to, without validating the user's permissions.

Option A ('Insecure Direct Object Reference'): Correct, as the ability to change order_id to view arbitrary orders fits the definition of IDOR.

Option B ('Session Poisoning'): Incorrect, as session poisoning involves corrupting or altering a user's session data, which is not indicated here.

Option C ('Session Riding OR Cross-Site Request Forgery'): Incorrect, as CSRF involves tricking a user into submitting a request (e.g., via a malicious form), not manipulating a URL parameter directly.

Option D ('Server-Side Request Forgery'): Incorrect, as SSRF involves tricking the server into making unauthorized requests to internal or external resources, which is not the case here.

The correct answer is A, aligning with the CAP syllabus under 'Insecure Direct Object Reference (IDOR)' and 'OWASP Top 10 (A01:2021 - Broken Access Control).'
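
A minimal sketch of the server-side fix (the data model, storage, and names are hypothetical): resolve the order by its ID, then verify that it belongs to the authenticated user before returning it, rather than trusting the user-supplied order_id alone:

from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    owner_id: int

ORDERS = {53870: Order(order_id=53870, owner_id=7)}  # toy in-memory datastore

def get_order(order_id: int, current_user_id: int) -> Order:
    order = ORDERS.get(order_id)
    # The IDOR fix: an authorization check on ownership, not just a lookup.
    if order is None or order.owner_id != current_user_id:
        raise PermissionError("order not found or not owned by this user")
    return order

get_order(53870, current_user_id=7)    # the owner: allowed
# get_order(53870, current_user_id=8)  # another user: raises PermissionError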


Question No. 4

Which HTTP header is used by the CORS (Cross-origin resource sharing) standard to control access to resources on a server?

A. Access-Control-Request-Method
B. Access-Control-Request-Headers
C. Access-Control-Allow-Headers
D. None of the above

Correct Answer: C

Cross-Origin Resource Sharing (CORS) is a security mechanism that allows servers to specify which origins can access their resources, relaxing the Same-Origin Policy (SOP) for legitimate cross-origin requests. CORS uses specific HTTP headers to control this access. The key header for controlling access to resources is Access-Control-Allow-Origin, which specifies which origins are permitted to access the resource. However, among the provided options, the closest related header is Access-Control-Allow-Headers, which is part of the CORS standard and controls which request headers can be used in the actual request (e.g., during a preflight OPTIONS request).

Option A ('Access-Control-Request-Method'): This header is sent by the client in a preflight request to indicate the HTTP method (e.g., GET, POST) that will be used in the actual request. It is not used by the server to control access.

Option B ('Access-Control-Request-Headers'): This header is sent by the client in a preflight request to list the headers it plans to use in the actual request. It is not used by the server to control access.

Option C ('Access-Control-Allow-Headers'): This header is sent by the server in response to a preflight request, specifying which headers are allowed in the actual request. While Access-Control-Allow-Origin is the primary header for controlling access, Access-Control-Allow-Headers is part of the CORS standard to manage header-based access control, making this the best match among the options.

Option D ('None of the above'): Incorrect, as Access-Control-Allow-Headers is a CORS header.

The correct answer is C, aligning with the CAP syllabus under 'CORS Security' and 'HTTP Headers.'
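
For context, a typical preflight exchange might look like the following sketch (the endpoint, origin, and custom header are illustrative). The client sends the two Access-Control-Request-* headers; the server answers with Access-Control-Allow-* headers, including Access-Control-Allow-Headers, to grant or deny the actual request:

OPTIONS /api/data HTTP/1.1
Host: example.com
Origin: https://app.example.org
Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-Custom-Header, Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.org
Access-Control-Allow-Methods: POST
Access-Control-Allow-Headers: X-Custom-Header, Content-Type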


Question No. 5

A robots.txt file tells the search engine crawlers about the URLs which the crawler can access on your site. Which of the following is true about robots.txt?

A. Developers must not list any sensitive files and directories in this file
B. Developers must list all sensitive files and directories in this file to secure them
C. Both A and B
D. None of the above

Correct Answer: A

The robots.txt file is a text file placed in a website's root directory to communicate with web crawlers (e.g., Googlebot) about which pages or resources should not be accessed or indexed. It uses directives like Disallow to specify restricted areas (e.g., Disallow: /admin/). However, robots.txt is not a security mechanism; it is only a request to crawlers, and malicious bots or users can ignore it.

Option A ('Developers must not list any sensitive files and directories in this file'): Correct. Listing sensitive files or directories (e.g., Disallow: /secret/) in robots.txt can inadvertently expose their existence to attackers, who can then attempt to access them directly. The best practice is to avoid mentioning sensitive paths and rely on proper access controls (e.g., authentication, authorization) instead.

Option B ('Developers must list all sensitive files and directories in this file to secure them'): Incorrect. Listing sensitive paths in robots.txt does not secure them; it only informs crawlers to avoid them, and it can serve as a roadmap for attackers.

Option C ('Both A and B'): Incorrect, as A and B are contradictory; B is false.

Option D ('None of the above'): Incorrect, as A is true.

The correct answer is A, aligning with the CAP syllabus under 'Web Crawler Security' and 'Information Disclosure Prevention.'
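
As an illustration of why option A is best practice (the paths are hypothetical), a robots.txt like the one below advertises exactly the directories the site wants hidden; anyone can fetch https://example.com/robots.txt and probe each Disallow entry directly:

User-agent: *
Disallow: /admin/
Disallow: /backups/
Disallow: /internal-reports/

Such paths should be left out of robots.txt entirely and protected with server-side authentication and authorization instead.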