Deep Learning-based Carrier Frequency Offset Estimation with One-Bit ADCs
Ryan M. Dreifuerst, Robert Heath, Mandar N. Kulkarni, Jianzhong Zhang
Low-resolution architectures are a power-efficient solution for high-bandwidth communication at millimeter wave and terahertz frequencies. In such systems, carrier synchronization is important yet has not received much attention. In this paper, we develop and analyze deep learning architectures for estimating the carrier frequency of a complex sinusoid in noise from 1-bit samples of its in-phase and quadrature components. Carrier frequency offset estimation from a sinusoid is used in GSM and is a first step toward a more comprehensive solution for other kinds of signals. We train four deep learning architectures, each on eight datasets that represent different training considerations. Specifically, we examine how training with various signal-to-noise ratios (SNRs), quantization, and sequence lengths affects estimation error. Further, we analyze each architecture in terms of scalability for MIMO receivers. In simulations, we compare computational complexity, scalability, and mean squared error (MSE) against classic signal processing techniques. We demonstrate that training with quantized data drawn from signals with SNRs between 0 and 10 dB tends to improve deep learning estimator performance across the entire SNR range of interest. We conclude that convolutional models have the best performance while also scaling to massive MIMO settings more efficiently than FFT-based models. Our approach accurately estimates carrier frequencies from 1-bit quantized data with fewer pilots and at lower SNRs than traditional signal processing methods.
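To make the setting concrete, the following minimal sketch (not the paper's implementation) generates a complex sinusoid in Gaussian noise, quantizes its in-phase and quadrature components to 1 bit, and recovers the carrier frequency offset with a simple FFT-peak baseline of the kind the paper compares against. The sequence length, SNR, and FFT size are assumed values chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)

N = 64          # number of pilot samples (assumed for illustration)
snr_db = 5.0    # SNR in dB, inside the 0-10 dB training range discussed above
f_true = 0.07   # normalized carrier frequency offset (cycles/sample)

# Complex sinusoid in additive white Gaussian noise
n = np.arange(N)
signal = np.exp(2j * np.pi * f_true * n)
noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
y = signal + noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# One-bit quantization of the in-phase and quadrature components
y_q = np.sign(y.real) + 1j * np.sign(y.imag)

# FFT-peak baseline: zero-pad for a finer frequency grid and pick the peak bin
K = 4096
spectrum = np.fft.fft(y_q, K)
f_est = np.fft.fftfreq(K)[np.argmax(np.abs(spectrum))]

print(f"true CFO: {f_true:.4f}, FFT estimate from 1-bit samples: {f_est:.4f}")

The deep learning estimators studied in the paper replace the FFT-peak step with learned models trained on such quantized sequences; the quantization and signal model above are shown only to illustrate the input data the estimators operate on.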