A High-Fidelity Digital Twin Framework for IoT Networks

Abstract

The Digital Twin (DT) serves as a foundational technology for smart manufacturing and Industry 5.0, as it enhances data-driven decision-making in industrial environments. With the continued growth of its core technologies, including the Internet of Things (IoT), artificial intelligence (AI), big data analytics, and edge computing, DT has seen a significant increase in applications, promoting sustainability, intelligence, and adaptability across domains. DT technology has thus emerged as a promising link between the physical and virtual worlds, enabling simulation, prediction, and real-time performance optimization. This work develops a high-fidelity digital twin framework, focusing on synchronization and accuracy between physical and digital systems to enhance data-driven decision-making. To achieve this, we deploy several stationary UAVs at optimized locations to collect data from IoT devices, which monitor multiple physical entities and perform computations to evaluate their status. We consider a practical setup in which multiple IoT devices may monitor a single physical entity; their measurements are therefore combined and processed jointly to determine the status of that entity. The resulting status updates are then uploaded from the UAVs to the base station, where the DT resides. We introduce a novel metric based on the Age of Information (AoI), termed the Age of Digital Twin (AoDT), to reflect the status freshness of the digital twin. Factoring AoDT into the problem formulation ensures that the DT reliably mirrors the physical system with high accuracy and synchronization. We formulate a mixed-integer non-convex program to maximize the total amount of data collected from all IoT devices while keeping the AoDT constrained.
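For context, the AoDT metric builds on the standard Age of Information. As a hedged sketch (the abstract does not give the exact AoDT definition, so only the underlying AoI notion is shown), AoI at the monitor at time \(t\) is

\[
\Delta(t) = t - u(t),
\]

where \(u(t)\) is the generation time of the most recent status update received by time \(t\). AoDT would analogously measure how stale the digital twin's view of the physical entities is; its precise form follows the paper's formulation.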
We first solve the problem using successive convex approximation and conduct extensive simulations, comparing the results with baseline approaches to demonstrate the effectiveness of the proposed solution. Then, to handle uncertainty and cope with realistic, unpredictable environments, we transform the problem into a Markov Decision Process (MDP) and propose a deep reinforcement learning approach based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which uses dual critics to reduce overestimation bias and delayed policy updates for stability, to optimize the UAV positions and the sum rate. We present simulation results for various system scenarios to illustrate the effectiveness of the proposed solution compared to several baseline approaches.
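The two TD3 mechanisms named above can be sketched in a few lines. This is a minimal illustration of the general TD3 idea, not the paper's implementation: the function names, the discount value, and the delay parameter are assumptions for the example.

```python
def td3_target(reward, gamma, q1_next, q2_next, done):
    """Clipped double-Q target: bootstrapping from the minimum of the
    two target critics curbs the overestimation bias of a single critic."""
    bootstrap = 0.0 if done else min(q1_next, q2_next)
    return reward + gamma * bootstrap

def should_update_actor(step, policy_delay=2):
    """Delayed policy updates: the actor (and target networks) are
    refreshed only every `policy_delay` critic steps, for stability."""
    return step % policy_delay == 0

# Example: with gamma = 0.99 and next-state critic values 5.0 and 4.0,
# the target bootstraps from the smaller value, 4.0.
y = td3_target(reward=1.0, gamma=0.99, q1_next=5.0, q2_next=4.0, done=False)
```

In a full agent these would sit inside the training loop: every step updates both critics toward `td3_target`, while the actor update runs only when `should_update_actor` is true.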
