ISSN : 2583-2646

Intelligent Sensor Fusion Using Deep Learning for Next-Generation Electronic Applications

ESP Journal of Engineering & Technology Advancements
© 2025 by ESP JETA
Volume 5  Issue 2
Year of Publication : 2025
Authors : Madesto Agnus
DOI : 10.56472/25832646/JETA-V5I2P125

Citation:

Madesto Agnus, 2025. "Intelligent Sensor Fusion Using Deep Learning for Next-Generation Electronic Applications", ESP Journal of Engineering & Technology Advancements 5(2): 231-239.

Abstract:

Innovations in artificial intelligence, together with rapid advances in sensor technology, have driven the creation of intelligent sensor fusion systems capable of efficiently interpreting and merging multiple data sources. Deep learning methods, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, and graph neural networks (GNNs), have fundamentally changed how sensor data is analysed, enabling more accurate and context-aware decision-making. These techniques support fusion at several levels, from raw data aggregation to high-level semantic understanding, and the combination of artificial intelligence and sensor fusion is opening unprecedented opportunities across many fields. This work systematically reviews the architectures, models, and deep learning techniques used in intelligent sensor fusion. Key application domains include autonomous vehicles, where data from LiDAR, radar, ultrasonic sensors, and cameras is fused for environment perception, and smart wearables, which combine biosensors and motion detectors for continuous health monitoring. In robotics, sensor fusion improves object detection, localisation, and path planning, while in industrial automation it enables predictive maintenance and adaptive process control. Robust multimodal sensor data interpretation also benefits smart homes, augmented reality systems, and unmanned aerial vehicles (UAVs). The paper further discusses challenges including computational cost, real-time constraints, synchronisation of multiple sensors, and the need for labelled data, and suggests potential solutions such as self-supervised learning, attention mechanisms, and sensor calibration. Moreover, the combination of edge computing, neuromorphic hardware, and federated learning is enabling scalable, low-latency, and privacy-preserving sensor fusion applications.
This study highlights the future potential of intelligent sensor fusion to drive innovation in next-generation electronic systems, laying the foundation for smarter, more autonomous, and more connected technologies.
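To illustrate the attention mechanisms mentioned above, the following is a minimal sketch (not taken from the paper; the function name, the norm-based scoring, and all values are hypothetical stand-ins for a learned scoring network) of attention-weighted fusion of per-sensor feature vectors in NumPy:

```python
import numpy as np

def attention_fuse(features):
    """Fuse per-sensor feature vectors with attention-style weights.

    features: list of 1-D arrays, one per sensor, all the same length.
    Returns the attention-weighted sum of the feature vectors.
    """
    X = np.stack(features)  # shape: (num_sensors, feature_dim)
    # Score each sensor by the norm of its features (a stand-in for a
    # learned scoring network), then softmax the scores into weights.
    scores = np.linalg.norm(X, axis=1)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ X  # shape: (feature_dim,)

# Hypothetical camera and radar feature vectors of equal length.
camera = np.array([0.9, 0.1, 0.4])
radar = np.array([0.2, 0.8, 0.3])
fused = attention_fuse([camera, radar])
print(fused.shape)  # (3,)
```

In a real fusion network, the scoring step would be a trainable module so the model learns which sensor to trust in each context (e.g. down-weighting a camera in fog), rather than a fixed norm heuristic.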

References:

[1] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.

[2] Vaswani, A., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

[3] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.

[4] Kipf, T. N., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks. ICLR.

[5] Han, S., Mao, H., & Dally, W. J. (2016). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. ICLR.

[6] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

[7] Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv:1804.02767.

[8] Wang, Z., & Oates, T. (2015). Imaging time-series to improve classification and imputation. IJCAI.

[9] Abadi, M., et al. (2016). TensorFlow: A system for large-scale machine learning. OSDI.

[10] Chen, L. C., et al. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv:1706.05587.

[11] Goodfellow, I., et al. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27.

[12] Lane, N. D., et al. (2016). DeepX: A software accelerator for low-power deep learning inference on mobile devices. IPSN.

[13] Xu, H., et al. (2018). Deep learning-based sensor fusion for perception. IEEE Robotics and Automation Letters, 3(4), 2187–2194.

[14] Chen, X., et al. (2017). Multi-view 3D object detection network for autonomous driving. CVPR.

[15] Zhang, C., et al. (2021). Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proceedings of the IEEE, 109(11), 1755–1778.

[16] Bojarski, M., et al. (2016). End to end learning for self-driving cars. arXiv:1604.07316.

[17] Yu, L., et al. (2020). Unsupervised sensor fusion with variational autoencoders. 21(15), 4263.

[18] Zhao, Z., et al. (2021). A review of sensor fusion for autonomous vehicles. IEEE Transactions on Intelligent Transportation Systems.

[19] Lin, Y., et al. (2020). A review of deep learning for robust sensor fusion in autonomous vehicles. IEEE Transactions on Neural Networks and Learning Systems.

[20] Wang, Z., et al. (2019). Data fusion in smart manufacturing: Recent developments and applications. Journal of Manufacturing Systems, 51, 42–52.

[21] Moons, B., et al. (2016). Energy-efficient convolutional neural networks via approximate computing. IEEE International Symposium on Low Power Electronics and Design.

[22] Radu, V., et al. (2018). Multimodal deep learning for activity and context recognition. IMWUT, 1(4), 157.

[23] Misra, D., et al. (2021). Sensor fusion with transformers. NeurIPS.

[24] Georgiou, K., et al. (2019). Deep learning for data fusion in biosensors. IEEE Reviews in Biomedical Engineering.

[25] Jiang, H., et al. (2021). Attention-based deep fusion for multi-sensor wearable activity recognition. ACM Transactions on Sensor Networks, 17(2), 1–25.

[26] Chen, M., et al. (2017). Wearable 2.0: Enabling human-cloud integration in next generation healthcare systems. IEEE Communications Magazine, 55(1), 54–61.

[27] Raghu, M., & Schmidt, E. (2020). A survey of deep learning for scientific discovery. arXiv:2003.11755.

[28] Kwon, Y., et al. (2019). Predicting driver attentiveness with multimodal deep learning. CVPR.

[29] Zhang, J., et al. (2022). Explainable sensor fusion via prototype-based learning. Pattern Recognition, 129, 108721.

[30] Bonawitz, K., et al. (2019). Towards federated learning at scale: System design. MLSys.

[31] Ma, Y., et al. (2021). TinyML: A comprehensive survey. ACM Computing Surveys.

[32] Qiao, S., et al. (2018). Few-shot image recognition by predicting parameters from activations. CVPR.

[33] He, K., et al. (2016). Deep residual learning for image recognition. CVPR.

[34] Zhang, J., et al. (2020). Multi-sensor fusion and decision-making for autonomous vehicles. Sensors, 20(12), 3613.

[35] Luo, W., et al. (2018). Fast and furious: Real-time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net. CVPR.

[36] Chen, J., et al. (2022). Deep learning-based sensor fusion for human activity recognition: A review. IEEE Internet of Things Journal.

[37] Liu, D., et al. (2021). Edge AI: A vision for next-generation edge architecture. Journal of Systems Architecture.

[38] Li, X., et al. (2019). Federated learning-driven intelligent sensor fusion for enhancing mobile edge intelligence. IEEE Network, 33(6), 90–96.

[39] Yu, H., et al. (2020). A survey on federated learning systems: Vision, hype and reality. arXiv:2003.11590.

[40] Yao, L., et al. (2017). On the qualitative evaluation of sensor fusion techniques. ACM Transactions on Sensor Networks, 13(2), 1–26.

[41] Shi, W., et al. (2016). Edge computing: Vision and challenges. IEEE Internet of Things Journal, 3(5), 637–646.

[42] Ma, X., et al. (2021). Knowledge distillation in deep learning. IEEE Access, 10, 32269–32285.

[43] Wang, S., et al. (2018). Self-healing power grids enabled by deep learning. Nature Communications, 9(1), 1–11.

[44] Deb, C., et al. (2018). A review of data fusion approaches for building energy consumption prediction. Renewable and Sustainable Energy Reviews, 81, 944–960.

[45] Kim, H., et al. (2020). Neural architecture search: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.

[46] Zhang, K., et al. (2021). Graph neural networks for heterogeneous sensor data fusion. IEEE Sensors Journal, 21(14), 15409–15420.

[47] Choi, J., et al. (2021). Teacher-student network learning under noisy labels. IJCAI.

[48] Du, Y., et al. (2020). FusionNet: A deep fully residual convolutional neural network for multi-modal sensor fusion. Information Fusion, 55, 207–217.

[49] Guo, H., et al. (2018). Deep fusion: Attention-guided factorized bilinear pooling for audio-video emotion recognition. IJCAI.

[50] Gao, H., et al. (2022). Towards reliable deep learning for sensor fusion: A survey of uncertainty estimation methods. ACM Computing Surveys.

Keywords:

Sensor Fusion, Deep Learning, Edge Computing, Neural Networks, IoT, Context-Aware Systems, Smart Electronics, Real-Time Decision Making.