Premium Practice Questions
Question 1 of 10
In managing Forecasting Techniques (ETC, EAC), which control most effectively reduces the key risk of producing unrealistic completion dates when current performance significantly deviates from the original baseline?
Correct: A bottom-up Estimate to Complete (ETC) is the most rigorous forecasting control because it requires the project team to re-evaluate the remaining work from the ground up. This approach is specifically recommended when the original assumptions are no longer valid or when performance has deviated so significantly that historical indices (such as CPI or SPI) are no longer reliable predictors of future outcomes. It ensures that the forecast reflects the reality of the remaining work rather than a mere mathematical projection of past performance.
Incorrect: Extrapolating the current SPI assumes that past performance will continue at the same rate, which may be inaccurate if the causes of delay have been mitigated. Using the BAC divided by CPI is a standard formulaic approach, but it is often too simplistic for complex schedule recovery because it focuses on cost efficiency rather than the specific logic of the remaining schedule activities. TCPI, used to force adherence to a budget or schedule, is a management target rather than a realistic forecasting technique, and it fails to account for the actual resource capacity or technical constraints of the remaining work.
Takeaway: A bottom-up re-estimate of remaining work provides the most reliable forecast when past performance is no longer a valid predictor of future project outcomes.
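To make the contrast concrete, here is a minimal Python sketch comparing the formulaic EAC (BAC / CPI) with an EAC built from a bottom-up ETC. All figures (BAC, EV, AC, and the re-estimated remaining work) are hypothetical and chosen only to illustrate how far the two forecasts can diverge once performance has deviated from the baseline.

```python
# Hypothetical EVM figures for illustration only -- not taken from the question.
BAC = 1_000_000.0   # Budget at Completion
EV = 300_000.0      # Earned Value to date
AC = 400_000.0      # Actual Cost to date

CPI = EV / AC                        # cost performance index (0.75 here)
eac_formulaic = BAC / CPI            # assumes past efficiency continues unchanged

# Bottom-up ETC: the team re-estimates the remaining work from scratch.
etc_bottom_up = 550_000.0            # assumed re-estimate of the remaining work
eac_bottom_up = AC + etc_bottom_up   # EAC = AC + bottom-up ETC

# TCPI against the original budget: the efficiency needed to finish on BAC.
tcpi = (BAC - EV) / (BAC - AC)

print(f"CPI = {CPI:.2f}")
print(f"EAC (BAC / CPI)          = {eac_formulaic:,.0f}")
print(f"EAC (AC + bottom-up ETC) = {eac_bottom_up:,.0f}")
print(f"TCPI to hit BAC          = {tcpi:.2f}  (a target, not a forecast)")
```

The gap between the two EAC values is the point of the question: the formula merely projects past efficiency forward, while the bottom-up figure reflects what the team now believes the remaining work will actually cost.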
-
Question 2 of 10
An agile team is transitioning from traditional duration-based scheduling to relative estimation. The scheduler needs to ensure that the metrics used for forecasting the release date are not misused or misinterpreted by stakeholders during the planning and execution phases. Which safeguard provides the strongest protection when dealing with story points and velocity?
Correct: Velocity is an empirical measure of a specific team’s capacity based on their unique understanding of story points and relative complexity. The strongest safeguard against schedule inaccuracy and metric manipulation is ensuring that velocity is used only for that specific team’s forecasting. Comparing or benchmarking velocity across different teams is a fundamental misuse that leads to point inflation and unreliable schedule data.
Incorrect: Normalizing story points to labor hours is incorrect because it converts relative estimation back into traditional duration-based estimation, losing the benefits of complexity-based assessment. Treating velocity as an ever-increasing target is a common misconception; velocity is an observation of capacity, not a performance goal, and forcing it upward often leads to quality degradation. Mapping points to specific buffers is redundant because story points are already designed to incorporate complexity, effort, and inherent risk within the estimate itself.
Takeaway: Story points and velocity are team-specific tools for empirical forecasting and should never be used for cross-team performance comparisons or as fixed productivity targets.
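As a small illustration of velocity used strictly as a team-internal forecasting tool, the sketch below (with invented velocities and backlog size) projects the number of remaining sprints from one team's own recent history, with no reference to any other team.

```python
import math

# One team's recent sprint velocities and remaining backlog (hypothetical figures).
recent_velocities = [28, 31, 26, 30]    # points completed per sprint
remaining_backlog_points = 200          # points left in the release backlog

avg_velocity = sum(recent_velocities) / len(recent_velocities)
expected_sprints = math.ceil(remaining_backlog_points / avg_velocity)
worst_case_sprints = math.ceil(remaining_backlog_points / min(recent_velocities))

print(f"Average velocity: {avg_velocity:.1f} points per sprint")
print(f"Forecast: about {expected_sprints} sprints (up to {worst_case_sprints} at the slowest observed pace)")
```

Because the points behind these velocities mean something only to this team, the same arithmetic applied across teams would compare incommensurable units, which is exactly the misuse the correct answer guards against.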
-
Question 3 of 10
In a complex multi-year construction project, several unforeseen events have occurred, including owner-directed changes and contractor-caused resource shortages. The contractor has submitted a request for an extension of time (EOT) and additional costs, but the project owner disputes the cause of the critical path slippage. What is the primary risk associated with claims management related to schedule delays, and how should it be mitigated?
Correct: In claims management, the primary challenge is proving entitlement and causation. Contemporaneous records provide the necessary evidence of what occurred and when, while Time Impact Analysis (TIA) is a recognized forensic scheduling technique used to demonstrate the specific effect of a delay event on the project completion date by inserting the delay into a representative baseline.
Incorrect: Re-baselining the schedule after every delay can actually hinder claims management because it may overwrite the historical data needed to prove the original impact of a delay. Negotiating global settlements at the end of a project is a common practice but does not mitigate the risk of poor documentation or lack of proof during the project. Resource leveling is a technique for managing resource constraints but does not address the legal or contractual requirements of proving delay responsibility.
Takeaway: Successful claims management for schedule delays requires rigorous contemporaneous documentation and the application of forensic scheduling methods to clearly differentiate between owner-caused and contractor-caused impacts.
-
Question 4 of 10
You are the relationship manager at an audit firm. While working on Story Points and Velocity during control testing, you receive a whistleblower report. The issue is that the development team has been consistently increasing the story point estimates for standard tasks over the last three iterations to artificially inflate their velocity metrics. This trend was identified during a review of the project’s burn-up chart, where the total scope appeared to be growing without a corresponding increase in delivered value. The whistleblower alleges that this practice is being used to meet performance bonuses tied to productivity targets. What is the most appropriate action for the scheduler to take to ensure the integrity of the project schedule and performance reporting?
Correct: Story points are a relative measure of effort, and velocity is only a reliable scheduling tool if the estimation scale remains stable over time. When ‘point inflation’ occurs, velocity becomes a vanity metric that misleads stakeholders about actual progress. The most effective way to restore the integrity of the schedule is to address the root cause by re-calibrating the team’s estimation process through a retrospective, ensuring that a ‘5-point’ story represents the same level of effort today as it did in previous iterations.
Incorrect: Adjusting historical data by a fixed percentage is an arbitrary fix that does not address the behavioral issue or provide a sustainable solution for future iterations. Switching to ideal hours ignores the fundamental agile principle of relative estimation and may introduce new complexities in tracking. Increasing the total project scope to match inflated points validates the manipulation and results in a schedule that reflects artificial growth rather than actual deliverable progress.
Takeaway: Maintaining a stable and consistent estimation baseline is essential for velocity to serve as a reliable tool for schedule forecasting and performance measurement.
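One simple way to surface suspected point inflation ahead of the re-calibration retrospective is to watch the ratio of claimed points to delivered stories across iterations. The sketch below uses invented sprint data and an arbitrary 25% drift threshold; a rising ratio is only a prompt for the retrospective conversation, not proof of manipulation, since stories can legitimately grow in size.

```python
# Hypothetical sprint data: (sprint, story points claimed, stories delivered).
sprints = [
    ("Sprint 7", 30, 12),
    ("Sprint 8", 34, 12),
    ("Sprint 9", 41, 12),
    ("Sprint 10", 48, 11),
]

reference_ratio = sprints[0][1] / sprints[0][2]   # points per story in the reference sprint

for name, points, stories in sprints:
    ratio = points / stories
    drift = (ratio - reference_ratio) / reference_ratio
    flag = "  <-- review in retrospective" if drift > 0.25 else ""
    print(f"{name}: {ratio:.2f} points/story ({drift:+.0%} vs reference){flag}")
```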
-
Question 5 of 10
During a routine supervisory engagement with a fund administrator, the authority asks about Time Impact Analysis (TIA) in the context of client suitability. They observe that the firm is currently upgrading its client onboarding platform to meet new suitability standards. A significant change in data validation requirements has been identified mid-project. To assess the impact of this change on the project’s milestone dates, the scheduler decides to perform a Time Impact Analysis. Which action is most consistent with the TIA methodology?
Correct: Time Impact Analysis (TIA) is a prospective scheduling technique used to model the impact of a change or delay. It involves creating a ‘fragnet’ (a sub-network of activities) that represents the specific change and inserting it into the most current, statused version of the schedule. This allows the scheduler to see how the logic of the new activities interacts with the existing schedule to shift the completion date or key milestones.
Incorrect: Comparing the baseline to actual progress is a retrospective ‘As-Planned vs. As-Built’ analysis, which is used after the fact rather than for prospective modeling. Removing non-critical activities is incorrect because a change can cause a non-critical path to become the new critical path; removing them would invalidate the model. Re-baselining immediately is a poor practice because it obscures the specific impact of the change, making it impossible to justify time extensions or resource adjustments based on the specific disruption.
Takeaway: Time Impact Analysis is a prospective method that uses fragnets and the current schedule status to quantify the effect of specific changes on project milestones.
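The mechanics of a TIA can be sketched on a toy network. In the hypothetical Python example below (all activities, durations, and logic ties are invented), a simple forward pass gives the current completion date, a fragnet representing the new data-validation work is inserted into the statused schedule, and the pass is rerun to quantify the milestone shift.

```python
# Toy CPM network: activity -> (duration in days, finish-to-start predecessors).
# All data is hypothetical and only illustrates the mechanics of a TIA fragnet insertion.
def early_finishes(network):
    """Simple forward pass returning the early finish of each activity."""
    finish = {}
    def ef(act):
        if act not in finish:
            duration, predecessors = network[act]
            start = max((ef(p) for p in predecessors), default=0)
            finish[act] = start + duration
        return finish[act]
    for act in network:
        ef(act)
    return finish

current_schedule = {
    "Design validation rules": (10, []),
    "Build onboarding platform": (20, ["Design validation rules"]),
    "System test": (10, ["Build onboarding platform"]),
    "Go-live milestone": (0, ["System test"]),
}
before = max(early_finishes(current_schedule).values())

# Fragnet for the mid-project change, tied into the current statused schedule.
impacted_schedule = dict(current_schedule)
impacted_schedule["Define new data validations"] = (15, ["Design validation rules"])
impacted_schedule["Rework data interfaces"] = (10, ["Define new data validations"])
impacted_schedule["System test"] = (10, ["Build onboarding platform", "Rework data interfaces"])

after = max(early_finishes(impacted_schedule).values())
print(f"Milestone before the change: day {before}")
print(f"Milestone after the fragnet: day {after} (projected slip: {after - before} days)")
```

Note that in this toy case the fragnet path overtakes the original build path, which is why removing "non-critical" activities before the analysis would have hidden the slip.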
-
Question 6 of 10
A gap analysis conducted at a credit union regarding Post-Project Schedule Analysis as part of complaints handling concluded that the project team failed to capture the specific reasons for a 45-day delay in the integration of the member database. Although the project was delivered, the internal audit report highlighted that the scheduling process lacked a mechanism to convert historical performance into organizational process assets. To improve the reliability of future project timelines, what should the scheduler focus on during the post-project analysis?
Correct: The primary objective of post-project schedule analysis is to compare the as-built schedule (what actually happened) against the baseline (what was planned). This allows the scheduler to identify systemic issues, such as consistently underestimated durations or missing dependencies, which can then be used to improve the accuracy of future project schedules and templates.
Incorrect: Archiving the schedule is an administrative closure task that ensures data availability but does not provide the analytical insights needed to improve scheduling maturity. Modifying the original baseline to match actuals is an improper practice that obscures performance variances and renders historical data useless for benchmarking. Retrospective resource leveling is a theoretical exercise that may identify staffing gaps but does not address the fundamental need to analyze why the original schedule logic or duration estimates failed.
Takeaway: Effective post-project analysis requires comparing the as-built schedule to the baseline to identify and document variances for future process improvement.
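A minimal sketch of the baseline-versus-as-built comparison described above, using invented durations (the 45-day slip on the database integration mirrors the scenario): the variance per activity is computed and sorted so the largest estimating misses rise to the top of the lessons-learned list.

```python
# Hypothetical baseline and as-built durations in working days.
baseline_days = {"Data mapping": 15, "Member DB integration": 30, "UAT": 20, "Cutover": 5}
as_built_days = {"Data mapping": 18, "Member DB integration": 75, "UAT": 24, "Cutover": 6}

variances = {activity: as_built_days[activity] - baseline_days[activity]
             for activity in baseline_days}

print(f"{'Activity':<25}{'Plan':>6}{'Actual':>8}{'Var':>6}")
for activity, variance in sorted(variances.items(), key=lambda item: item[1], reverse=True):
    print(f"{activity:<25}{baseline_days[activity]:>6}{as_built_days[activity]:>8}{variance:>+6}")
```

Feeding these variances back into duration templates and estimating checklists is what turns the as-built schedule into an organizational process asset rather than a mere archive entry.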
-
Question 7 of 10
The risk committee at a broker-dealer is debating standards for Agile Scheduling Techniques as part of internal audit remediation. The central issue is that the current reporting framework fails to capture the iterative nature of software delivery for their high-frequency trading platform. During a recent 90-day review, it was noted that while teams are completing tasks, the overall project timeline remains opaque to senior stakeholders. To address this, the lead scheduler must implement a technique that balances the flexibility of Agile with the need for long-term predictability. Which approach should the scheduler prioritize to provide a high-level view of the project’s progress while maintaining the integrity of the team’s iterative workflow?
Correct: A Release Roadmap is the appropriate Agile scheduling technique for providing high-level visibility over a longer horizon. It groups iterations into releases based on features or business value, allowing stakeholders to understand the timing of major deliverables without the scheduler needing to manage the granular, frequently changing details of the sprint backlog. This maintains the balance between team-level agility and organizational-level predictability.
Incorrect: Mapping every user story to a Critical Path Method network is counterproductive in Agile as it introduces rigid dependencies that do not exist in a prioritized backlog and creates massive maintenance overhead. Focusing on daily task completion and resource utilization in a Gantt chart emphasizes output over outcome and often leads to bottlenecks rather than flow. Relying solely on velocity and burn-down charts provides tactical data useful for the team but lacks the strategic, time-phased context required by a risk committee or senior management to track long-term project health.
Takeaway: Effective Agile scheduling at the enterprise level requires synthesizing tactical iteration data into strategic release roadmaps to ensure stakeholder visibility and alignment.
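As a rough, hypothetical illustration of rolling iteration data up into a release view, the sketch below packs prioritized features (sizes in story points, all invented) into quarterly releases against an assumed per-release capacity derived from team velocity. The output is the kind of high-level roadmap a risk committee can read without seeing any sprint-level detail.

```python
# Hypothetical prioritized features with rough sizes in story points.
features = [
    ("Order routing rewrite", 120),
    ("Latency monitoring", 60),
    ("Kill-switch controls", 40),
    ("Trade audit reporting", 90),
    ("Simulation sandbox", 150),
]
capacity_per_release = 200   # assumed points per quarterly release, based on observed velocity

roadmap, current_release, points_used = [], [], 0
for name, size in features:                     # respect backlog priority order
    if points_used + size > capacity_per_release and current_release:
        roadmap.append(current_release)
        current_release, points_used = [], 0
    current_release.append(name)
    points_used += size
if current_release:
    roadmap.append(current_release)

for number, release in enumerate(roadmap, start=1):
    print(f"Release {number}: " + ", ".join(release))
```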
-
Question 8 of 10
During your tenure as client onboarding lead at a broker-dealer, a matter arises concerning delay analysis techniques (e.g., Window Analysis, Collapsed As-Built) in the context of conflicts of interest. An internal audit finding suggests that the project team’s current method for assessing schedule delays is overly subjective and fails to account for the project’s actual progress and logic changes over time. To provide a more transparent and defensible analysis for a 24-month regulatory compliance project, the scheduler is asked to use a technique that examines the schedule in discrete segments, or snapshots, to determine how the critical path was impacted by specific events during those periods. Which technique is most appropriate for this requirement?
Correct: Window Analysis (also known as Snapshot Analysis) is a contemporaneous or retrospective technique that breaks the project duration into discrete time periods (windows). It uses the schedule updates that were in effect during each period to analyze the impact of delays, thereby accounting for the actual progress and any changes in logic or the critical path that occurred during the project. This makes it highly objective and suitable for resolving disputes where the sequence of events is complex.
Incorrect: Collapsed As-Built Analysis is a retrospective ‘but-for’ method that removes delays from the final as-built schedule; it is often criticized for being subjective and ignoring the contemporaneous logic. Impacted As-Planned Analysis involves adding delays to the original baseline schedule, which fails to account for actual progress or logic changes made during execution. As-Planned vs. As-Built Comparison simply compares the original plan to the final outcome without analyzing the dynamic causes or the interaction of concurrent delays through the project’s logic.
Takeaway: Window Analysis is the preferred forensic scheduling technique for complex projects because it analyzes delays within specific time increments using contemporaneous schedule data to account for a shifting critical path.
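The bookkeeping behind a window analysis can be shown with a stripped-down, hypothetical example: the contemporaneous update in force at the close of each window supplies the forecast completion date, and the slip relative to the previous window is then investigated against the events of that period.

```python
from datetime import date

# Hypothetical schedule updates: (window, forecast completion date in that update).
window_updates = [
    ("Baseline", date(2024, 12, 1)),
    ("Window 1 (months 1-6)", date(2024, 12, 15)),
    ("Window 2 (months 7-12)", date(2025, 2, 10)),
    ("Window 3 (months 13-18)", date(2025, 2, 10)),
]

for (prev_name, prev_finish), (name, finish) in zip(window_updates, window_updates[1:]):
    slip_days = (finish - prev_finish).days
    print(f"{name}: forecast completion {finish}, slip within window = {slip_days:+d} days")
```

Attributing each increment of slip to the delays that occurred inside that window, using the schedule logic current at the time, is what distinguishes this method from a single end-of-project comparison.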
-
Question 9 of 10
An internal review at a payment services provider examining Archiving Project Schedules as part of whistleblowing has uncovered that several project managers are deleting intermediate schedule baselines once a project reaches the 90% completion threshold. The organization’s standard operating procedure requires a comprehensive historical record for future lessons learned and forensic delay analysis. The lead scheduler argues that only the final as-built schedule is necessary for the archive to save storage space and maintain a clean database. Which of the following represents the most significant risk to the organization regarding this archiving practice?
Correct: Archiving project schedules is a critical component of maintaining Organizational Process Assets. Preserving intermediate baselines and periodic schedule updates is essential for forensic delay analysis, which allows the organization to investigate the root causes of delays and assign accountability. Without these snapshots, the organization loses the ability to compare ‘planned vs. actual’ at different stages of the project life cycle, which is vital for improving future estimation and scheduling accuracy.
Incorrect: The WBS dictionary focuses on defining the scope and deliverables of work packages rather than the technical archival requirements of schedule software files. While ISO standards emphasize record-keeping, they do not universally mandate a specific seven-year read-only rule for all project schedules; such requirements are typically defined by internal policy or specific legal jurisdictions. The final Critical Path is calculated based on the current logic and status of the project at completion; while historical data is needed for variance analysis, it is not a technical requirement for the software to calculate the final path of the terminal schedule.
Takeaway: Comprehensive schedule archiving must include historical baselines and periodic updates to support forensic analysis and the development of accurate historical benchmarks.
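A minimal sketch, under assumed file names and folder layout, of preserving each periodic schedule export alongside the final as-built rather than deleting it: every snapshot is copied into a dated archive folder and its checksum recorded so the record can later support forensic comparison.

```python
import hashlib
import json
import shutil
from datetime import date
from pathlib import Path

def archive_snapshot(schedule_file: str, archive_root: str = "schedule_archive") -> Path:
    """Copy a schedule export into a dated archive folder and record its checksum."""
    src = Path(schedule_file)
    dest_dir = Path(archive_root) / date.today().isoformat()
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)

    checksum = hashlib.sha256(dest.read_bytes()).hexdigest()
    manifest = dest_dir / "manifest.json"
    entries = json.loads(manifest.read_text()) if manifest.exists() else []
    entries.append({"file": src.name, "sha256": checksum, "archived": date.today().isoformat()})
    manifest.write_text(json.dumps(entries, indent=2))
    return dest

# Example call (hypothetical file name):
# archive_snapshot("project_schedule_update_2025-03.xml")
```

Whether this lives in a script, the EPM tool's own versioning, or a document management system matters less than the principle: intermediate baselines are retained, immutable, and traceable.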
-
Question 10 of 10
Senior management at a mid-sized retail bank requests your input on Scheduling Software and Tools as part of data protection. Their briefing note explains that the organization is transitioning from standalone desktop scheduling applications to a centralized enterprise project management (EPM) system to manage a new 24-month regulatory compliance roadmap. They are particularly concerned about maintaining the integrity of the baseline schedules and preventing unauthorized modifications to critical path activities across multiple departments. Which feature of the scheduling software is most critical to address these specific data protection and integrity requirements?
Correct: Granular role-based access control (RBAC) allows the organization to define specific permissions for different users, ensuring that only authorized schedulers can modify baselines or critical path logic. Audit trail logging provides a historical record of all changes made to the schedule, which is essential for data integrity and accountability in a highly regulated banking environment.
Incorrect: Automated cloud-based backup protects against data loss due to system failure but does not prevent unauthorized modifications by internal users. Resource leveling algorithms are used for managing resource constraints and do not provide data protection or integrity controls. While multi-user concurrent access is a feature of enterprise tools, it actually increases the risk of unauthorized changes unless it is strictly managed by the access controls mentioned in the correct option.
Takeaway: In enterprise scheduling environments, data integrity and protection are primarily maintained through robust access controls and comprehensive audit logs.
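A toy Python sketch of the two controls named above, with invented roles and users: a role-based permission check gates baseline and logic edits, and every attempt, whether permitted or blocked, is written to an append-only audit trail.

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping for a scheduling tool.
ROLE_PERMISSIONS = {
    "master_scheduler": {"edit_baseline", "edit_logic", "view"},
    "department_planner": {"edit_logic", "view"},
    "stakeholder": {"view"},
}

audit_log = []  # append-only record of every attempted change

def attempt_change(user: str, role: str, action: str) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    return allowed

print(attempt_change("asmith", "department_planner", "edit_baseline"))  # False -> blocked
print(attempt_change("jlee", "master_scheduler", "edit_baseline"))      # True  -> permitted
for entry in audit_log:
    print(entry)
```

In a real EPM deployment these controls come from the tool's own security model and logging, but the division of labour is the same: RBAC prevents the unauthorized change, and the audit trail proves what happened.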