
The Seer incorporates machine learning, radio frequency communication, and signal processing to estimate the direction-of-arrival of an incoming low-band 5G signal.


The Seer - A system designed to estimate the direction-of-arrival (DOA) of incoming low-band 5G signals
EE493 Senior Design - Sonoma State University - Department of Engineering Science
Contributors: Tate Harsch-Hudspeth, Evan Peelen, Victor Madrid
Faculty Advisor: Dr. Mohamed Salem, Sonoma State University
Industry Advisor: Joe Reid
Client: Cellular Providers, Cellular Application Developers
September 2020 - May 2021

Website (index.html): https://harschht.github.io/The-Seer/


Contents:

	Original Design - 5 Antenna Array (5AA):

The_Seer_Rx_Official_5AA.grc
- GNU Radio Companion flowgraph file that controls the 5AA receiver (Rx)
the_seer_rx_flow_5AA.py
- GNU Radio flowgraph Python file generated from the ".grc" file
Friis_make_database_5AA.py
- Simulation that creates a database for testing the neural network using the Friis transmission equation, 5AA arrangement
JTC_w_added_noise_5AA.m
- Simulation that creates a database for testing the neural network using the JTC transmission model with added noise, 5AA arrangement
Neural_Net_5AA_Test.py
- Code used to test and fine-tune the neural network
LHS.py 
- Outputs the Latin hypercube sample (LHS) for the provided dimensions of the testing area
Extract_5AA.py
- Extracts the target values from the saved matrices output by the 5AA Rx flowgraph's file sink block
Neural_Net_5AA.py
- Python code that generates the Keras deep learning model used for The Seer; can be used to train the model and make predictions
GUI.py
- The graphical user interface (GUI) called by the neural network code after the model has been tested on validation data

	Final Design - 3 Antenna Array, Power Only (3P):

The_Seer_Rx_Official_3P.grc
- GNU Radio Companion flowgraph file that controls the 3P receiver (Rx)
the_seer_rx_flow_3P.py
- GNU Radio flowgraph Python file generated from the ".grc" file
Friis_make_database_3P.py
- Simulation that creates a database for testing the neural network using the Friis transmission equation, 3P arrangement
JTC_w_added_noise_3P.m
- Simulation that creates a database for testing the neural network using the JTC transmission model with added noise, 3P arrangement
Neural_Net_3P_Test.py
- Code used to test and fine-tune the neural network
LHS.py
- Outputs the Latin hypercube sample (LHS) for the provided dimensions of the testing area (a combined LHS/Friis sketch follows this list)
Extract_3P.py
- Extracts the target values from the saved matrices output by the 3P Rx flowgraph's file sink block
Neural_Net_3P.py
- Python code that generates the Keras deep learning model used for The Seer; can be used to train the model and make predictions
GUI.py
- The graphical user interface (GUI) called by the neural network code after the model has been tested on validation data
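
As a rough, combined illustration of what the Friis_make_database scripts and LHS.py do, the sketch below uses a Latin hypercube sample of candidate Tx positions and the Friis transmission equation to build a simulated training database. The variable names, test-area bounds, antenna gains, and the use of NumPy/SciPy here are illustrative assumptions, not the repository's actual code:

# Minimal sketch: LHS-sampled Tx positions + Friis received power -> database
import numpy as np
from scipy.stats import qmc   # Latin hypercube sampling (SciPy >= 1.7)

C = 3e8                       # speed of light (m/s)
F = 750e6                     # chosen center frequency (Hz)
LAM = C / F                   # wavelength -> 0.4 m
SPACING = LAM / 2             # half-wavelength antenna spacing -> 0.2 m

def friis_rx_power(d, p_tx=1.0, g_tx=1.0, g_rx=1.0):
    """Received power (linear) from the Friis transmission equation."""
    return p_tx * g_tx * g_rx * (LAM / (4 * np.pi * d)) ** 2

# x-positions of a five-element linear array centered on the reference antenna
ant_x = SPACING * np.array([-2, -1, 0, 1, 2])

# Latin hypercube sample of (r, theta) Tx locations over an assumed test area
sampler = qmc.LatinHypercube(d=2, seed=0)
r_theta = qmc.scale(sampler.random(n=500), [0.5, 0.0], [5.0, np.pi])

rows = []
for r, theta in r_theta:
    tx = np.array([r * np.cos(theta), r * np.sin(theta)])
    d = np.hypot(tx[0] - ant_x, tx[1])        # distance from Tx to each antenna
    rows.append(np.concatenate([friis_rx_power(d), [r, theta]]))

np.save("friis_database.npy", np.array(rows))   # one row per simulated position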


Acknowledgments:
	We would like to thank the entire Department of Engineering Science at Sonoma State University for their support over the years. A special thanks to our advisors Dr. Mohamed Salem and Joe Reid for their endless help and support throughout the senior design process. We would also like to thank Dr. Donald Estreich, Andrew Choi, Roger Nichols, Stan Bischof, and Enrique Zeiger for their time and guidance. Lastly, we would like to thank the Office of Research & Sponsored Programs and the Koret Foundation for funding our project.


Abstract:
	The Seer focuses on streamlining the process of location estimation to benefit telecommunications, assisted GPS (AGPS), and radio enthusiasts. By implementing a neural network that utilizes the signal data from a receiver (Rx) composed of an array of antennas, our system creates an accurate, adaptably complex model of the environment it is trained in. The system focused on Sub-6 5G NR within the 600 MHz (n71) to 850 MHz (n5) bands. Positioning the antennas a half-wavelength apart ensures that the main variation between antennas is due to the antennas’ orientation relative to the transmitter (Tx). Analyzing the amplitude and phase of these received signals provides useful data for the creation of our deep learning model. Using a neural network allowed us to create a model that matched the complexity of the urban indoor environment our team targeted with our prototype. Other methods of pinpointing the location of an incoming signal use models that assume an isotropic environment, while true indoor urban environments are by no means isotropic. The existence of other EM waves, as well as multipath and constructive and destructive interference, adds to the complexity of solving the inverse problem of finding the direction from the received signal parameters. Our system has the added benefit of learning and can be implemented in any environment through training. Our prototype has the potential to improve the current methods for determining the direction-of-arrival (DOA) of low-band 5G signals, as well as finding the range of transmitting devices, bolstering communication between a base station and Tx while lowering the economic impact of these large-scale 5G systems.


Problem Statement:
	Large cellular companies spend up to 70% of their capital on power bills, which is more than they spend on employee salaries. Dynamic and static beamforming attempt to reduce the unnecessary use of power in areas covered by antennas that may not always need coverage. One way to tune these antennas is to keep them pointed at the areas that gather the most users, but it is not always so cut and dried. Some places have large crowds in one area during the day, but the crowds gather in different areas at night. Dealing with this can be cumbersome, costing money in the form of labor and power use. This could be simplified if there were a way to determine Tx location based on a single incoming pilot signal. This technology could also benefit smart device applications that rely on precise location for their services (AGPS). There are methods currently available that can model RF environments, but they require the environment to be isotropic, which is usually not the case when dealing with urban areas. Current methods of transmission are centered around massive multiple-input multiple-output (MIMO) technology [4] that implements beamforming to send data into an area, rather than radiating omnidirectionally with umbrella coverage. Beamforming is imperative for the new 5G system because the wavelengths are short, impairing their ability to penetrate surfaces. For beamforming networks to work, the system must know where the recipient of the data is, so it can appropriately adjust the direction and intensity of the radiation. If the system were to incorrectly estimate the location of the recipient, the result would be increased power loss along with a reduction in customer satisfaction. The current methods of triangulation and periodic pilot signals used to determine the best Tx/Rx path for beamforming use considerable amounts of power and do not account for non-isotropic environments. Complex environments are difficult to model mathematically due to the non-proportionality of power relative to distance, leading to our complex inverse problem.



Methodology:
	To solve this problem, our team planned to use an array of five antennas spaced a half-wavelength apart. Since our target frequencies are in the low-band 5G range (600-850 MHz), after analyzing our antennas’ S11 parameters using a vector network analyzer (VNA), we chose 750 MHz, resulting in a spacing of 0.2 meters. Using five antennas in a linear horizontal array, we planned to use the varying amplitude and phase shift of the signal received by each antenna to determine the location of the Tx. In order for the antennas to provide coherent signal data, the team took steps to synchronize the RTL-SDRs by daisy-chaining the clocks together via solder bridging and removing the bypass resistors on the physical components. To make sure that the clock signals were not disrupted by this process, we used an oscilloscope [14] to check the period and amplitude of the common clock after every addition to the array. The final product would include one master clock chained to four puppets with negligible change in clock period. With the clocks synchronized, the received power and individual phase shifts could be processed by GNU Radio [11], software capable of processing the data received by the RTL-SDRs. To ensure that the data being collected would be useful to the neural network, we planned to use the middle antenna as a reference, dividing the phasors from each of the other four antennas by the reference antenna’s phase to obtain the relative phase difference between them. Using GNU Radio, we would extract the magnitude and relative phase difference from each of the outer four antennas using FFT signal processing techniques, taking a moving average of the samples before saving the binary data to be processed by a data extraction code written in Python. Using this data extraction code, we would obtain the last moving-average value from each of the four antennas’ magnitude and relative phase difference binary data files, which we arranged into a ten-column tensor for the neural network to use for training. The last two columns of this tensor were the (r, theta) values giving the range and bearing of the Tx relative to the reference antenna on the Rx. To sample the testing environment, we would use a Latin Hypercube Sampling (LHS) [18] library in Python to produce an unbiased distribution of possible Tx locations. After compiling a sufficient number of data measurements, the neural network would be trained, and the loss and accuracy would be analyzed. Prior to training, the team would simulate data using two different radio wave propagation models: the Friis transmission equation and the JTC urban indoor environment model. This simulated data allowed the team to dial in the number of hidden layers and weights per hidden layer in preparation for the live data. After the live data was collected and used for training, and an acceptable accuracy was found, the model and corresponding weights were saved, then used for predictions on live measurements that were outside of the training dataset. The team developed a graphical user interface (GUI) to display the predicted values to the user, along with a polar plot representing the DOA of the received signal.
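	As a rough illustration of the ten-column layout described above (not the project's actual extraction code; the variable names and the use of NumPy are assumptions), one training row could be assembled from the complex value each antenna reports at the carrier bin, using the middle antenna as the phase reference:

import numpy as np

def feature_row(outer_bins, ref_bin, r, theta):
    """Assemble one ten-column training row (illustrative layout).

    outer_bins : complex FFT values for the four outer antennas
    ref_bin    : complex FFT value for the middle (reference) antenna
    r, theta   : ground-truth range (m) and bearing (rad) of the Tx
    """
    outer_bins = np.asarray(outer_bins, dtype=complex)
    mags = np.abs(outer_bins)                             # four magnitude columns
    rel_phase = np.angle(outer_bins * np.conj(ref_bin))   # four relative-phase columns
    return np.concatenate([mags, rel_phase, [r, theta]])  # ten columns total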
	For the hardware, we used TG.35.8113 Apex II Wideband 5G/4G Dipole Terminal Antennas connected to RTL-SDRs making up the Rx, with a HackRF One attached to a PortaPack used as the Tx. The antenna array was assembled from PVC pipe to mitigate any possible reflection or attenuation from the stand itself, since PVC is practically transparent at our target frequencies. We used low-loss SMA coax cables as extensions from the SDRs to the antennas to account for the half-wavelength spacing. The SDRs were then connected to a USB hub powered by an external power supply plugged into a standard 120 V outlet, which sent data to an RPi 4 System on Chip (SoC). The RPi ran the team's GNU Radio flowgraph, which sent the collected data to a database in the cloud. Once in the cloud, the data could be accessed by the data extraction code, which prepared the data for the neural network. The neural network was implemented in Keras [9], a deep learning API that uses TensorFlow 2 as a backend. We used the Tkinter and Matplotlib Python libraries [15] for the GUI.
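	A minimal sketch of a Keras regression model in the spirit of the description above; the layer sizes, optimizer settings, and three-feature input (the power-only final design) are illustrative assumptions, not the tuned model saved by Neural_Net_3P.py:

from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_features=3):
    """Dense network mapping received-signal features to (r, theta)."""
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(2),                  # outputs: range and bearing
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# X: feature matrix from the extraction code; y: matching (r, theta) targets
# model = build_model()
# model.fit(X, y, epochs=200, validation_split=0.2)
# model.save("the_seer_model.h5")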
	Due to complications with the hardware and a lack of testing equipment availability resulting from the COVID-19 pandemic, the final system consisted of three SDRs connected to three antennas, with only power measurements used for data. We used one of the RTL-SDRs as a master clock to drive the clocks on the remaining puppets. RTL acknowledges that phase drift can occur in their devices when operated over a few tens of MHz; this drift disrupted the phase data the team had hoped to use for the DOA. Please see Test 5 (pg. 45 of our Project Report) for an in-depth look at why the team stepped down from five antennas to three, Test 6B (pg. 49 of our Project Report) for the reasoning behind dropping the relative phase difference from the input data, and Future of The Seer (pg. 54 of our Project Report) for a design that would compensate for these issues.
	The Project Report can be found on our Project Website or under "assets" in the GitHub Repo.
