The SmartSpace project kicked off in September 2022 and runs for 3 years. Below we briefly introduce the main research topics we have worked on so far.

GenAI-Based Models for NGSO Satellites Interference Detection

Related Pubs:

Recent advancements in satellite communications have highlighted the challenge of interference detection, especially with the new generation of non-geostationary orbit satellites (NGSOs) that share the same frequency bands as legacy geostationary orbit satellites (GSOs). Despite existing radio regulations during the filing stage, this heightened congestion in the spectrum is likely to lead to instances of interference during real-time operations. This paper addresses the NGSO-to-GSO interference problem by proposing advanced artificial intelligence (AI) models to detect interference events. In particular, we focus on the downlink interference case, where signals from low-Earth orbit satellites (LEOs) potentially impact the signals received at the GSO ground stations (GGSs). In addition to the widely used autoencoder-based models (AEs), we design, develop, and train two generative AI-based models (GenAI), which are a variational autoencoder (VAE) and a transformer-based interference detector (TrID). These models generate samples of the expected GSO signal, whose error with respect to the input signal is used to flag interference.
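The detection principle above can be sketched in a few lines. This is a minimal illustration, not the paper's models: the `expected` array stands in for the output of a trained generative model (VAE or TrID) that reproduces the clean GSO signal, and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def flag_interference(received, expected, threshold):
    """Flag signal windows whose error between the received samples and the
    model-generated expected GSO signal exceeds a threshold. `expected`
    stands in for the output of a generative model (e.g. VAE or TrID)."""
    err = np.mean(np.abs(received - expected) ** 2, axis=-1)  # per-window MSE
    return err > threshold

rng = np.random.default_rng(0)
clean = rng.standard_normal((4, 256))            # 4 windows of expected GSO signal
rx = clean + 0.01 * rng.standard_normal((4, 256))
rx[2] += 0.8 * rng.standard_normal(256)          # inject interference in window 2
flags = flag_interference(rx, clean, threshold=0.1)
print(flags)  # only window 2 is flagged
```

In the paper's models the comparison signal is generated rather than known, but the flagging logic on the reconstruction error is the same.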

Feed-Forward Neural Networks to Overcome Complex Precoding Matrix Calculation in GEO Broadband Satellite Communication Systems

Related Pubs:

As satellite communication evolves, multi-beam GEO satellite systems have emerged as crucial for delivering high-throughput broadband services across wide geographical expanses. Despite significant strides in this field, these systems are constrained by their dependence on traditional precoding techniques, such as Minimum Mean Square Error (MMSE) precoders, known for their high computational complexity and their requirement for detailed channel matrix knowledge. Accordingly, in this paper, we employ a trained feedforward neural network (FFNN) for precoding based on user locations, eliminating the need for Channel State Information (CSI) or channel matrix estimation. While the proposed FFNN precoder requires CSI during the deployment (preliminary offline training) phase, in the operational phase it predicts the precoding matrix from the user position alone. Hence, this methodology successfully navigates around the constraints of traditional practices, leading to substantial reductions in computational complexity, overhead, and latency.
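A toy sketch of the operational phase, under stated assumptions: the network sizes, the two-layer architecture, and the weights below are all hypothetical stand-ins for the trained FFNN; only the input/output contract (user position in, precoding vector out, no CSI) mirrors the described approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "trained" weights; in the described scheme these would be
# learned offline from (user position, MMSE precoder) training pairs.
W1, b1 = rng.standard_normal((2, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 2 * 4)), np.zeros(2 * 4)  # 4 antennas, complex

def ffnn_precoder(pos):
    """Map a user position (lat, lon) to a 4-antenna complex precoding
    vector. No CSI is needed at inference: the position alone is the input."""
    h = np.tanh(pos @ W1 + b1)
    out = h @ W2 + b2
    w = out[:4] + 1j * out[4:]      # first half real parts, second half imaginary
    return w / np.linalg.norm(w)    # unit-power precoding vector

w = ffnn_precoder(np.array([49.6, 6.1]))  # e.g. a user position near Luxembourg
print(w.shape, round(float(np.linalg.norm(w)), 6))
```

The point of the sketch is the inference path: once trained, the forward pass involves only two small matrix products, which is where the complexity and latency savings over MMSE precoding come from.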

User Scheduling in Multibeam Precoded GEO Satellite Systems

Related Pubs:


Future generation SatCom multibeam architectures will extensively exploit full-frequency reuse schemes together with interference management techniques, such as precoding, to dramatically increase spectral efficiency performance. Precoding is very sensitive to user scheduling, suggesting a joint precoding and user scheduling design to achieve optimal performance. However, the joint design requires solving a highly complex optimisation problem which is unreasonable for practical systems. Even for suboptimal disjoint scheduling designs, the complexity is still significant. To achieve a good compromise between performance and complexity, we investigate the applicability of Machine Learning (ML) for the aforementioned problem.
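As a point of reference for the "suboptimal disjoint scheduling" mentioned above, here is a minimal sketch of greedy semi-orthogonal user selection, a common low-complexity heuristic that a learned scheduler would aim to match or beat. It is an illustration, not the paper's ML method: it picks users whose channels are mutually near-orthogonal, which is what makes precoding effective.

```python
import numpy as np

def greedy_schedule(H, n_sched):
    """Greedy semi-orthogonal user selection.
    H: (n_users, n_antennas) complex channel matrix."""
    n_users = H.shape[0]
    selected, basis = [], []  # basis: orthonormal set of selected channel directions
    for _ in range(n_sched):
        best, best_norm = None, -1.0
        for u in range(n_users):
            if u in selected:
                continue
            # residual channel after projecting out selected users' directions
            r = H[u].copy()
            for b in basis:
                r -= (r @ b.conj()) * b
            if np.linalg.norm(r) > best_norm:
                best, best_norm = u, float(np.linalg.norm(r))
        selected.append(best)
        r = H[best].copy()
        for b in basis:
            r -= (r @ b.conj()) * b
        basis.append(r / np.linalg.norm(r))
    return selected

rng = np.random.default_rng(2)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
sched = greedy_schedule(H, 3)
print(sched)
```

Even this disjoint heuristic costs O(n_users × n_sched) projections per scheduling slot, which motivates replacing it with a trained ML model for large systems.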

A Deep Learning Based Acceleration of Complex Satellite Resource Management Problem

Related Pubs:


Demand-based algorithms have been widely studied in the satellite community, where the satellite’s radio resources are allocated according to the on-ground users’ demands. However, these algorithms have high computational time because they must optimise many parameters, which hinders their practical implementation. In this work, we propose a methodology to alleviate the computational complexity of a demand-aware bandwidth and power allocation algorithm by combining conventional optimisation and deep learning (DL). Hence, conventional optimisation allocates the radio resources, while DL accelerates the computation.
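To make the "conventional optimisation" half of the split concrete, here is a deliberately simple demand-aware bandwidth allocation sketch: bandwidth is split in proportion to demand and capped so no beam receives more than it asked for. This is an illustrative stand-in, not the paper's algorithm; in the described methodology, a trained DL model would accelerate the expensive part of such an optimisation.

```python
import numpy as np

def allocate_bandwidth(demands, total_bw):
    """Demand-proportional bandwidth allocation with per-beam caps.
    Excess bandwidth from over-served beams is redistributed to beams
    whose demand is still unmet."""
    alloc = total_bw * demands / demands.sum()
    for _ in range(len(demands)):
        excess = np.clip(alloc - demands, 0, None).sum()  # bandwidth over the caps
        alloc = np.minimum(alloc, demands)                # enforce the caps
        unmet = demands - alloc
        if excess <= 1e-12 or unmet.sum() <= 1e-12:
            break
        alloc += excess * unmet / unmet.sum()             # redistribute the excess
    return alloc

demands = np.array([100.0, 50.0, 10.0])   # per-beam demand (e.g. MHz-equivalent)
alloc = allocate_bandwidth(demands, total_bw=120.0)
print(alloc, alloc.sum())
```

Real demand-aware allocation additionally optimises power jointly with bandwidth under interference constraints, which is exactly the part whose run time motivates the DL acceleration.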

CNN-Based On-Board Interference Detection in Satellite Systems: An Analysis of Dataset Impact on Performance

Related Pubs:

Flying satellite communication systems often have to deal with intended and unintended radio-frequency interference, especially now with the advent of non-geostationary orbit (NGSO) systems causing orbital crowding. In this work, we investigate the use of machine learning (ML) for interference detection and classification. In particular, we investigate the effects of dataset representations on the performance of a convolutional neural network (CNN) deployed on board a geostationary orbit (GSO) satellite to detect interference and classify the spectrum of interest. Focusing on the frequency representation of the observed signal, we consider different input datasets depending on the fast Fourier Transform (FFT) size and their transformation. In particular, we consider complex and magnitude values of full-dimension and reduced-dimension FFTs. Our analysis shows that the magnitude of the reduced-dimension FFT attains the best results in terms of accuracy in detecting the presence of the interference signal and locating it in the spectrum of interest.
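The best-performing input representation from the study can be computed in a few lines. The sketch below builds the magnitude of a reduced-dimension FFT by averaging blocks of adjacent frequency bins; the FFT size and reduction factor are example values, not the ones tuned in the paper.

```python
import numpy as np

def fft_magnitude_features(x, fft_size=1024, reduced=128):
    """Build a CNN input of the kind considered in the study: the magnitude
    of the FFT of the observed samples, reduced in dimension by averaging
    groups of adjacent frequency bins (here 1024 bins -> 128 features)."""
    X = np.fft.fft(x[:fft_size], n=fft_size)
    mag = np.abs(X)
    return mag.reshape(reduced, fft_size // reduced).mean(axis=1)

rng = np.random.default_rng(3)
n = np.arange(1024)
tone = np.exp(2j * np.pi * 0.25 * n)          # interferer at 1/4 the sample rate
noisy = tone + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
feat = fft_magnitude_features(noisy)
print(feat.shape, int(np.argmax(feat)))       # peak in reduced bin 32 (= 0.25 * 128)
```

Reducing the dimension this way shrinks the CNN input (and hence the on-board compute) while keeping the interferer's spectral location visible, which is consistent with the accuracy result reported above.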

NGSO-To-GSO Satellite Interference Detection Based on Autoencoder

Related Pubs:

Recently, non-geostationary orbit (NGSO) satellite communication constellations have regained popularity due to their ability to provide global coverage and lower-latency connectivity. However, the new wave of Low Earth Orbit (LEO) satellite constellations operates on the same spectral bands as legacy satellites in geosynchronous orbit (GSO), which concurrently access the electromagnetic spectrum. Even with international regulations in place, such increased spectral congestion will result in interference events. Therefore, both regulatory entities and GSO operators have a high interest in detecting illegal or unlicensed NGSO interference sources. In this work, we simulate a realistic downlink interference scenario by emulating an actual commercial NGSO orbit whose signal is eventually received at a GSO receiver pointed toward a specific GSO satellite. We design an autoencoder deep neural network and evaluate its performance considering both time-series and frequency-domain representations of the received samples.
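To illustrate why an autoencoder trained only on clean GSO windows flags interference, here is a closed-form stand-in: a linear autoencoder trained with MSE converges to the PCA subspace, so we can fit the "autoencoder" via an SVD and inspect its reconstruction error. The data model (a low-dimensional clean signal plus broadband interference) is a synthetic assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Training" set: windows of the clean GSO-only received signal (real-valued
# stand-in for I/Q samples), lying mostly in a 4-dimensional subspace.
basis = rng.standard_normal((4, 64))
train = rng.standard_normal((200, 4)) @ basis + 0.05 * rng.standard_normal((200, 64))

# A linear autoencoder's optimum is PCA, so fit it in closed form via the SVD.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:4]                          # 4-dimensional latent code

def reconstruction_error(x):
    z = (x - mean) @ V.T            # encode
    x_hat = z @ V + mean            # decode
    return float(np.mean((x - x_hat) ** 2))

clean = rng.standard_normal(4) @ basis + 0.05 * rng.standard_normal(64)
interfered = clean + 0.8 * rng.standard_normal(64)   # NGSO interference component
print(reconstruction_error(clean), reconstruction_error(interfered))
```

The interfered window reconstructs far worse because the interference energy falls outside the learned subspace; a deep autoencoder generalises the same mechanism to nonlinear signal structure and to both time- and frequency-domain inputs.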

Signal Recovery in Interference-Limited Satellite and Terrestrial Spectrum Coexistence

Related Pubs:

  • A.B.M. Adam, M. Samy, C.E. Garcia, E. Lagunas, S. Chatzinotas, “Diffusion Model-based Signal Recovery in Coexisting Satellite and Terrestrial Networks”, WCNC2024 Workshop: Beyond Connectivity: When Wireless Communications Meet Generative AI, Dubai, United Arab Emirates, April 2024.

Coexisting satellite and terrestrial networks present a unique set of challenges and opportunities when the two networks share the same spectrum. One of these challenges is signal recovery. In this work, we formulate an optimization problem and propose a diffusion model to perform signal recovery. The proposed diffusion model leverages the denoising mechanism to recover signals from noisy and distorted observations. It consists of an encoder to map the input to the latent space, a U-Net for denoising, an attention block to integrate the relevant features into a richer context for signal recovery, and a decoder to deliver the recovered signal.
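The denoising mechanism at the heart of a diffusion model can be shown with its core identity. In the sketch below, the forward process progressively noises a clean signal; the reverse step inverts it given a noise estimate. An oracle supplies the true noise here in place of the trained encoder / U-Net / attention stack, and the toy schedule is an assumption, so this only demonstrates the mechanism, not the proposed model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy variance schedule for a DDPM-style forward (noising) process.
T = 50
betas = np.linspace(1e-4, 0.05, T)
alpha_bar = np.cumprod(1.0 - betas)

x0 = np.sin(2 * np.pi * 0.05 * np.arange(128))   # clean signal to recover

# Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
t = T - 1
eps = rng.standard_normal(x0.shape)
xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

# Reverse (denoising) step: invert the forward identity using a noise
# estimate eps_hat. In the proposed model eps_hat would come from the
# trained network; here an oracle returns the true noise.
eps_hat = eps
x0_hat = (xt - np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alpha_bar[t])
print(float(np.max(np.abs(x0_hat - x0))))        # ~0: signal recovered
```

In practice the noise predictor is imperfect, so recovery is performed over many small reverse steps rather than one, but each step applies this same inversion.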