Premium Practice Questions
Question 1 of 9
An internal audit of a manufacturing firm's edge computing strategy reveals a disagreement over how to deploy predictive maintenance models across a fleet of diverse hardware, including Intel-based gateways and ARM-based mobile sensors. The first approach proposes standardizing on ONNX Runtime to enable a unified model format and deployment pipeline. The second proposes maintaining separate native formats, specifically OpenVINO for Intel and TensorFlow Lite for ARM, to maximize hardware-specific optimizations. Which approach is more appropriate, and why?
Correct: ONNX Runtime (ORT) acts as a high-performance inference engine that supports models from various frameworks. Its architecture allows for the use of Execution Providers (EPs) such as OpenVINO for Intel hardware and ACL for ARM, which enables the organization to maintain a single model artifact and deployment workflow while still achieving near-native performance on diverse hardware. This aligns with edge computing goals of scalability and operational efficiency by reducing the complexity of managing multiple model formats.
Incorrect: Maintaining separate native formats increases the complexity of the CI/CD pipeline and version control, leading to higher operational risks and maintenance costs. Standardizing on TensorFlow Lite across Intel hardware ignores the significant performance gains offered by OpenVINO’s specialized optimizations for Intel silicon. Using PyTorch Mobile without conversion ignores the necessity of optimization and quantization for resource-constrained edge devices, which often cannot handle the memory footprint of unoptimized models in real-time scenarios.
Takeaway: ONNX Runtime offers a strategic balance between cross-platform portability and hardware-specific acceleration through its execution provider architecture.
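As a rough illustration of the execution provider approach described above, the Python sketch below loads a single ONNX model artifact with onnxruntime and requests a hardware-specific provider where available (OpenVINO on Intel, the Arm Compute Library on ARM), falling back to the CPU provider. The model path and input name are placeholders, not values from the scenario.

```python
import onnxruntime as ort
import numpy as np

# Single model artifact shared across the fleet (placeholder path).
MODEL_PATH = "predictive_maintenance.onnx"

# Preferred hardware-specific providers, in priority order. Only providers
# compiled into the installed ONNX Runtime build are actually usable.
preferred = [
    "OpenVINOExecutionProvider",  # Intel gateways
    "ACLExecutionProvider",       # ARM sensors (Arm Compute Library)
    "CPUExecutionProvider",       # universal fallback
]

available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession(MODEL_PATH, providers=providers)

# Dummy inference call; "input" is a placeholder for the model's input name.
features = np.random.rand(1, 64).astype(np.float32)
outputs = session.run(None, {"input": features})
print("Ran on providers:", session.get_providers())
```

The same script, and the same .onnx artifact, can ship to both hardware classes; only the provider that happens to be available differs per device.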
Question 2 of 9
A municipal transport authority is deploying an Intelligent Transport System (ITS) that relies on Vehicle-to-Infrastructure (V2I) communication for collision avoidance. During pilot testing, the end-to-end latency for safety-critical alerts was observed to exceed the 10ms threshold required for autonomous braking triggers. The current architecture sends all LiDAR and camera data from roadside units (RSUs) to a centralized city-wide edge server for fusion and decision-making. Given this gap in sensor data processing, real-time decision making, and V2X communication, which action is most appropriate?
Correct: In V2X and real-time decision-making scenarios, latency is the primary constraint. A hierarchical edge architecture allows for ‘data locality’ where safety-critical processing occurs at the extreme edge (the RSU), minimizing the physical distance and network hops data must travel. This ensures that time-sensitive decisions, such as autonomous braking triggers, are made within the required millisecond thresholds, while less urgent data is processed at higher levels of the hierarchy.
Incorrect: Upgrading backhaul infrastructure improves throughput but does not address the fundamental propagation and processing delays inherent in a centralized model for ultra-low-latency needs. Data compression introduces additional computational overhead at the RSU, which can actually increase total latency. Replacing RSUs with low-power gateways reduces the local compute capacity, forcing more reliance on centralized processing and further exacerbating the latency gap for real-time V2X applications.
Takeaway: To meet ultra-low-latency requirements in V2X environments, safety-critical sensor processing must be decentralized and moved to the extreme edge of the architecture.
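A minimal sketch of the hierarchical split described above, with hypothetical function and queue names: safety-critical fusion and the braking decision run locally on the RSU, while only small aggregated telemetry records are forwarded up the hierarchy instead of raw sensor streams.

```python
import time

BRAKE_LATENCY_BUDGET_MS = 10  # safety-critical threshold from the scenario

def fuse_locally(lidar_frame, camera_frame):
    """Hypothetical sensor fusion running on the RSU itself (extreme edge)."""
    # Real fusion would combine point-cloud and image detections here.
    return {"collision_risk": 0.93, "object": "pedestrian"}

def broadcast_brake_alert(event):
    """Placeholder for the V2I broadcast to nearby vehicles."""
    print("V2I alert:", event)

def handle_frame(lidar_frame, camera_frame, uplink_queue):
    start = time.perf_counter()
    fused = fuse_locally(lidar_frame, camera_frame)

    # Tier 1: safety-critical decision made at the RSU, no upstream round trip.
    if fused["collision_risk"] > 0.9:
        broadcast_brake_alert(fused)

    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < BRAKE_LATENCY_BUDGET_MS, "local path exceeded budget"

    # Tier 2: non-urgent, aggregated telemetry is queued for the city-wide
    # edge server instead of streaming raw LiDAR/camera data upstream.
    uplink_queue.append({"risk": fused["collision_risk"], "ts": time.time()})

handle_frame(None, None, uplink_queue=[])
```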
Question 3 of 9
During a periodic review at an investment firm, the client onboarding lead is tasked with addressing an energy and utilities engagement. The key concern from an internal audit finding is that the current decentralized EDGE architecture used for real-time grid monitoring lacks a standardized container orchestration strategy. This has led to inconsistent security patching across 500 remote utility gateways over the last fiscal quarter, potentially exposing the grid to vulnerabilities. To remediate this while maintaining the sub-10ms latency required for grid balancing, which architectural approach should the firm recommend to the utility provider?
Correct: A hierarchical EDGE model is the most effective solution because it separates the control plane from the data plane. This allows for centralized orchestration and standardized security patching from a single management point (addressing the audit finding) while keeping the actual data processing at the edge nodes to satisfy the low-latency requirements of the utility grid.
Incorrect: Migrating to a centralized cloud environment would solve the patching issue but would introduce significant latency, failing the sub-10ms requirement for grid balancing. A flat peer-to-peer architecture increases administrative complexity and fails to provide the centralized oversight needed for consistent security updates. Upgrading to specialized hardware like FPGAs improves local processing speed but does not address the core audit concern regarding inconsistent orchestration and management across the network.
Takeaway: Hierarchical EDGE architectures allow organizations to maintain centralized administrative and security control through orchestration while preserving the low-latency benefits of localized data processing.
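One way to picture the control-plane/data-plane split is a central orchestrator that pushes a single desired state (container image and patch level) to every gateway, while the gateways keep processing grid telemetry locally. The sketch below is purely illustrative, with invented gateway IDs and image tags.

```python
from dataclasses import dataclass

@dataclass
class Gateway:
    gateway_id: str
    running_image: str  # what the data-plane node currently runs

# Control plane: one desired state for the whole fleet (placeholder image tag).
DESIRED_IMAGE = "registry.example.com/grid-monitor:1.8.2"

def reconcile(fleet: list) -> list:
    """Return the gateways that have drifted and need a security patch rollout."""
    return [g.gateway_id for g in fleet if g.running_image != DESIRED_IMAGE]

fleet = [
    Gateway("gw-0001", "registry.example.com/grid-monitor:1.8.2"),
    Gateway("gw-0002", "registry.example.com/grid-monitor:1.7.9"),  # stale
]

# The orchestrator patches only drifted nodes; telemetry processing never
# leaves the gateways, so the sub-10ms data path is unaffected.
print("Needs patching:", reconcile(fleet))
```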
Question 4 of 9
You have recently joined a private bank as a portfolio manager. Your first major assignment involves autonomous vehicles and transportation in the context of business continuity. An incident report indicates that a fleet of autonomous delivery vehicles used for high-value document transport experienced a 45-second communication blackout with the central cloud server while navigating a dense urban tunnel. Despite the loss of cloud connectivity, the vehicles successfully completed their routes using local processing. However, the post-incident audit reveals that the decision-making logs generated during the blackout were not synchronized with the central repository, creating a gap in the audit trail for regulatory compliance. Which architectural characteristic of EDGE computing was primarily responsible for maintaining vehicle operations during the blackout, and what is the most critical risk management action to address the logging gap?
Correct: The ability of the vehicles to continue operating without cloud connectivity is a direct result of the ‘autonomy’ characteristic of EDGE computing, where edge nodes (the vehicles) possess sufficient local processing power to make real-time decisions. To address the audit trail gap, a store-and-forward mechanism is the standard technical control, ensuring that data generated during disconnected states is cached locally and synchronized once the network is available, thereby maintaining regulatory compliance.
Incorrect: Increasing bandwidth or cellular repeaters addresses the connectivity medium but does not solve the underlying architectural need for data persistence during inevitable outages. Migrating logic to the cloud is counter-productive as it removes the autonomy that allowed the vehicles to function during the blackout. Deploying micro-datacenters is an infrastructure-heavy approach that may reduce the frequency of outages but does not provide a fail-safe for the data integrity of the audit logs themselves.
Takeaway: Edge autonomy enables continuous operation during connectivity failures, but robust data synchronization protocols like store-and-forward are essential to maintain audit trails and compliance.
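A minimal store-and-forward sketch under the assumptions above: decision logs are always written to a local buffer first, and a sync loop drains the buffer to the central repository only when connectivity is available. The connectivity check and upload call are placeholders, and a real implementation would persist the buffer to local storage rather than memory.

```python
import json
import time
from collections import deque

local_buffer = deque()  # in practice this would be persisted to local disk

def log_decision(event: dict) -> None:
    """Always cache locally first, regardless of cloud connectivity."""
    event["logged_at"] = time.time()
    local_buffer.append(json.dumps(event))

def cloud_reachable() -> bool:
    return False  # placeholder: e.g. a health check against the central API

def upload(record: str) -> None:
    print("synced:", record)  # placeholder for the real upload call

def sync_loop() -> None:
    """Drain the buffer in order once the network returns (store-and-forward)."""
    while cloud_reachable() and local_buffer:
        upload(local_buffer[0])   # only remove after a confirmed upload
        local_buffer.popleft()    # preserves ordering for the audit trail

log_decision({"vehicle": "av-17", "action": "reroute", "reason": "tunnel"})
sync_loop()  # no-op during the blackout; drains once connectivity returns
```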
Question 5 of 9
During a committee meeting at a fund administrator, a question arises about Edge AI for real-time decision making as part of a market conduct review. The discussion reveals that the firm is deploying specialized Edge accelerators at co-location facilities to detect anomalous trading patterns within a 5-millisecond window. The Chief Compliance Officer is concerned about how these decentralized nodes maintain consistent model versions and provide a verifiable audit trail for regulatory reporting. Which of the following represents the most significant risk to the internal control environment when utilizing this decentralized Edge architecture for real-time market conduct oversight?
Correct: In a decentralized Edge AI environment, ensuring that every node is running the exact same version of the detection model is critical for regulatory consistency. If one node drifts or fails to update, it might allow prohibited market conduct that another node would catch, creating an uneven and non-compliant control environment. This directly impacts the reliability of the firm’s market conduct oversight and the integrity of the internal control framework.
Incorrect: Increased latency for historical analysis is a performance and reporting issue but does not inherently invalidate the real-time control’s effectiveness. Physical security at co-location facilities is a valid concern but is typically managed through service level agreements and physical access controls rather than being the primary risk to the AI’s decision-making logic. The absence of a centralized data lake affects the long-term improvement of the model but does not pose an immediate risk to the current enforcement of market conduct rules.
Takeaway: Maintaining model synchronization and version control across decentralized nodes is essential for ensuring uniform compliance and mitigating the risk of disparate regulatory enforcement.
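As an illustration of the version-consistency control described above (node names and hashes are invented): each node reports the fingerprint of its deployed model, and a compliance check flags any node that has drifted from the approved release.

```python
import hashlib

# Hash of the approved detection model release (truncated placeholder value).
APPROVED_MODEL_SHA256 = "a3f1c0..."

def model_fingerprint(model_bytes: bytes) -> str:
    """How each node would compute the hash it reports back."""
    return hashlib.sha256(model_bytes).hexdigest()

# Version reports collected from each co-located edge node (illustrative).
node_reports = {
    "colo-node-01": APPROVED_MODEL_SHA256,
    "colo-node-02": "b9c47e...",  # drifted node running an older model
}

drifted = [node for node, sha in node_reports.items()
           if sha != APPROVED_MODEL_SHA256]

if drifted:
    # Non-uniform enforcement is exactly the control failure the CCO fears.
    print("Model drift detected on:", drifted)
```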
Question 6 of 9
The quality assurance team at an insurer identified a finding related to observability and analytics for EDGE environments as part of a third-party risk assessment. The assessment reveals that while individual edge gateways successfully log local events, the centralized monitoring system fails to correlate these events with backend cloud services during intermittent 10-minute network partitions. This lack of synchronized telemetry prevents the IT operations team from identifying whether a transaction failure originated at the edge node or within the core application layer. To address this gap in the internal control environment, which of the following strategies should the internal auditor recommend to ensure comprehensive observability?
Correct: In a distributed EDGE architecture, observability depends on the ability to track a single request across multiple boundaries. Distributed tracing with correlation IDs allows the organization to link events occurring at the edge with those in the cloud, even when network connectivity is inconsistent. This provides the necessary audit trail and technical visibility to perform root cause analysis and ensure that third-party service level agreements (SLAs) are being met.
Incorrect: Increasing the frequency of log uploads addresses the timeliness of data but does not solve the underlying issue of correlating disparate data points across different layers of the architecture. Deploying redundant hardware improves system availability and resilience but does not enhance the analytical visibility or the ability to trace transactions. Manual weekly reports are a detective control that is too slow and lacks the technical granularity required for managing real-time edge environments and identifying specific transaction failures.
Takeaway: Comprehensive observability in EDGE environments requires end-to-end distributed tracing to correlate telemetry across decentralized nodes and centralized cloud services.
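A bare-bones sketch of correlation-ID propagation (the handler names and transport are placeholders): the edge gateway mints one ID per transaction, stamps its local log records with it, and forwards the same ID to the cloud service so the two telemetry streams can be joined later, even if the edge records only arrive after a network partition heals.

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("telemetry")

def edge_handle_transaction(payload: dict) -> dict:
    correlation_id = str(uuid.uuid4())  # minted once, at the edge
    log.info(json.dumps({"layer": "edge", "correlation_id": correlation_id,
                         "event": "transaction_received"}))
    # The ID travels with the request so the cloud logs the same value.
    return {"correlation_id": correlation_id, **payload}

def cloud_handle_transaction(request: dict) -> None:
    log.info(json.dumps({"layer": "cloud",
                         "correlation_id": request["correlation_id"],
                         "event": "transaction_processed"}))

# Even if edge logs are uploaded minutes later, both records share the ID,
# so a failure can be attributed to the edge node or the core application.
cloud_handle_transaction(edge_handle_transaction({"amount": 120.50}))
```

In production this would typically be done with a distributed tracing framework rather than hand-rolled log fields, but the correlating identifier is the essential ingredient either way.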
Question 7 of 9
A client relationship manager at a mid-sized retail bank seeks guidance on healthcare and remote patient monitoring in the context of complaints handling. Several premium account holders have filed formal grievances regarding the failure of the bank-integrated health monitoring service to provide real-time alerts during recent network outages. The internal audit department is evaluating the third-party provider's EDGE infrastructure, which currently utilizes a hierarchical model where edge nodes act only as data forwarders to a centralized cloud for anomaly detection. To mitigate the risk of alert failure and meet the required sub-second response time for critical health events, which architectural shift should the audit team recommend?
Correct: Transitioning to a decentralized architecture with local processing at the edge gateway directly addresses the core requirements of low latency and autonomy. By moving the anomaly detection logic from the cloud to the edge node, the system can trigger alerts in real-time without requiring a round-trip to a central server, ensuring functionality even during cloud connectivity outages.
Incorrect: Increasing buffer size on industrial PCs helps prevent data loss but does not solve the latency issue for real-time alerts. Cloud-based orchestration improves scalability but remains dependent on network availability and inherent cloud latency. Enhancing encryption is a critical security measure for data integrity but does not improve the speed of processing or the autonomy of the edge nodes during a network failure.
Takeaway: The primary advantage of EDGE computing in critical healthcare monitoring is the ability to provide autonomous, low-latency processing at the data source, reducing dependence on centralized cloud connectivity for time-sensitive actions.
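A simplified sketch of moving the anomaly check onto the gateway (the thresholds and identifiers are invented for illustration): the alert fires locally the moment a reading breaches the rule, and the reading is queued for later cloud upload rather than waiting on a round trip.

```python
from collections import deque

HEART_RATE_LIMITS = (40, 150)   # illustrative thresholds, not clinical guidance
cloud_queue = deque()           # readings to sync when connectivity allows

def raise_local_alert(reading: dict) -> None:
    # Fired directly from the gateway: no cloud round trip, works offline.
    print("ALERT:", reading)

def on_reading(reading: dict) -> None:
    low, high = HEART_RATE_LIMITS
    if not (low <= reading["heart_rate"] <= high):
        raise_local_alert(reading)   # sub-second, autonomous path
    cloud_queue.append(reading)      # non-urgent path: batched upload later

on_reading({"patient": "p-042", "heart_rate": 32})   # triggers a local alert
on_reading({"patient": "p-042", "heart_rate": 78})   # only queued for sync
```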
Question 8 of 9
An escalation from the front office at a payment services provider concerns relevant communication standards (e.g., 3GPP for 5G) in an outsourcing arrangement. The team reports that the third-party edge provider claims full compliance with 3GPP Release 16 for its 5G-enabled edge nodes, yet the internal audit team has identified significant latency spikes during peak transaction windows. The provider's Service Level Agreement (SLA) guarantees sub-10ms latency for critical payment processing. Which specific 3GPP feature set should the internal auditor verify as being correctly implemented and prioritized to ensure the edge infrastructure meets the low-latency requirements for real-time financial transactions?
Correct: Ultra-Reliable Low-Latency Communication (URLLC) is the specific 3GPP standard feature set designed for mission-critical applications requiring extremely low latency and high reliability, such as real-time payment processing. Features like Grant-Free Uplink allow devices to transmit data without waiting for a specific resource grant from the base station, which is essential for maintaining sub-10ms latency in edge environments.
Incorrect: Enhanced Mobile Broadband (eMBB) is optimized for high throughput and data volume rather than latency. Massive Machine Type Communications (mMTC) is intended for a high density of low-power IoT devices where latency is not the primary concern. Non-Standalone (NSA) architecture relies on the 4G LTE core, which typically introduces higher latency than a 5G Standalone (SA) core and may not fully support the advanced URLLC features required for this scenario.
Takeaway: To ensure edge computing performance for real-time applications, auditors must verify the implementation of URLLC standards within the 3GPP framework to guarantee latency and reliability targets.
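URLLC itself is configured in the radio and core network rather than in application code, but the auditor's latency verification can be illustrated with a simple percentile check against the SLA. The sample values below are fabricated for the sketch.

```python
import statistics

SLA_MS = 10.0  # sub-10ms latency guaranteed in the provider's SLA

# End-to-end latency samples (ms) captured during a peak transaction window
# (fabricated values for illustration).
samples = [4.2, 5.1, 6.0, 5.8, 9.7, 12.4, 6.3, 5.5, 11.9, 6.1]

p99 = statistics.quantiles(samples, n=100)[98]  # 99th-percentile latency
breaches = [s for s in samples if s > SLA_MS]

print(f"p99 latency: {p99:.1f} ms (SLA {SLA_MS} ms)")
if p99 > SLA_MS or breaches:
    print(f"SLA breach candidates: {breaches} -> verify URLLC prioritization")
```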
Question 9 of 9
Following a thematic review of telecommunications and record-keeping, a payment services provider received feedback indicating that its current edge deployment at 500 high-volume retail locations fails to meet the 10ms latency requirement during peak hours. The internal audit team noted that reliance on continuous cloud connectivity for transaction validation leads to significant delays when the wide-area network (WAN) experiences congestion. To improve bandwidth efficiency and ensure operational continuity during intermittent network outages, which architectural strategy should the provider prioritize?
Correct: Decentralized edge gateways provide the necessary autonomy and data locality required for low-latency environments. By processing transactions locally, the system reduces the immediate demand on the WAN (bandwidth efficiency) and allows the retail location to continue operating even if the connection to the central cloud is temporarily lost (autonomy). This aligns with the core EDGE principles of reducing latency and ensuring resilience through distributed processing.
Incorrect: Upgrading to leased lines or using thin-client architectures fails to address the core principles of edge computing, such as data locality and autonomy; these approaches remain highly dependent on network stability and do not reduce the volume of data that must be transmitted. Consolidating resources at a regional fog node may reduce some latency compared to a distant cloud, but it still leaves the individual retail locations vulnerable to local network failures and does not maximize bandwidth efficiency at the source.
Takeaway: Decentralized edge architectures enhance resilience and efficiency by enabling local processing and reducing dependency on constant cloud connectivity.
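A final sketch of the local-validation-plus-batched-sync pattern (the validation rule, batch size, and upload call are invented): each store's gateway approves or declines transactions on site, so decisions do not depend on the WAN, and compact batches are shipped upstream only when the link is healthy, which is what delivers the bandwidth efficiency.

```python
import time

pending_batch = []   # validated transactions awaiting upload
BATCH_SIZE = 100     # ship compact batches instead of per-transaction calls

def validate_locally(txn: dict) -> bool:
    """Runs entirely on the in-store gateway; no WAN round trip."""
    return txn["amount"] <= 500 and txn["card_present"]

def wan_available() -> bool:
    return False  # placeholder health check; False simulates WAN congestion

def flush_batch() -> None:
    print(f"uploading {len(pending_batch)} records")  # placeholder upload
    pending_batch.clear()

def process_transaction(txn: dict) -> str:
    decision = "approved" if validate_locally(txn) else "declined"
    pending_batch.append({**txn, "decision": decision, "ts": time.time()})
    if len(pending_batch) >= BATCH_SIZE and wan_available():
        flush_batch()
    return decision

print(process_transaction({"amount": 42.0, "card_present": True}))
```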