nesl / LLM_Based_Robots

Using Large Language Models to control robots to perform complex tasks based on user input.

Overall Project Goals and Description:

Research Question / Purpose

How viable is the method of using LLMs to control robots?

Goal: Create a framework that incorporates an LLM into a robotic system to serve as the central reasoning node, then evaluate how LLMs perform in the real-time, resource-constrained environments characteristic of home automation and personalized robotics.

Three main components define the structure of the system (a minimal control-loop sketch follows the list):

  • Action (Physical, Observable Responses)
  • Sensing (Integrate Sensors)
  • Intelligence (LLM)
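
A minimal sketch of how these three components fit together, with hypothetical `Robot` and `query_llm` stand-ins for the real ROS 2 and model code (all names here are placeholders, not the project's actual API):

```python
class Robot:
    """Hypothetical stand-in; the real system talks to the Create 3 over ROS 2."""

    def read_sensors(self) -> dict:
        # Sensing: IR obstacle readings, bumper state, odometry, etc.
        return {"ir_front": [0] * 7, "bumper_pressed": False}

    def execute(self, action: str) -> None:
        # Action: the physical, observable response.
        print(f"executing: {action}")


def control_loop(robot: Robot, query_llm, steps: int = 3) -> None:
    # One sense -> reason -> act cycle per iteration.
    for _ in range(steps):  # bounded here; the real loop runs continuously
        obs = robot.read_sensors()                       # Sensing
        prompt = (f"Sensors: {obs}. "
                  "Reply with one of: forward, turn_left, turn_right, stop.")
        action = query_llm(prompt).strip()               # Intelligence (LLM)
        robot.execute(action)                            # Action


if __name__ == "__main__":
    control_loop(Robot(), lambda prompt: "stop")  # stub LLM for illustration
```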

Flow Chart

Hardware

  • iRobot Create 3 Robot

    • 7 IR Obstacle Sensors in the front
      • Can be used to detect obstacles
    • Three Buttons on top
      • Can be repurposed by a ROS 2 application (see the button-subscriber sketch after this hardware list)
      • Power button features a ring of 6 RGB LEDs for status indication
    • Multi-Zone Bumper
    • Docking Sensor
    • Adapter Board below Faceplate
      • Main purpose: interfaces with external computers over Bluetooth or USB-C
      • Unregulated Battery Port (~14 V at 2 A max)
      • USB-C Connector: USB 2.0 Host connection into robot with 5.13 V at 3.0 A provided to power downstream connections. Power is disabled on this port unless a proper USB-C device is connected.
      • USB/BLE Toggle routes the robot's single USB Host connection to either the USB-C port or to the on-board Bluetooth Low Energy module.
    • Faceplate + Cargo Bay
      • Regular hole pattern for attaching payloads
    • 4 Cliff sensors
      • Prevent the robot from driving off ledges and stairs
    • Optical Odometry Sensor
    • IMU (gyroscope + accelerometer)
  • Nvidia Jetson AGX Xavier Developer Kit

    • CPU structure: 8-core NVIDIA Carmel Arm v8.2 64-bit
    • GPU structure: 512-core NVIDIA Volta with 64 Tensor Cores
  • Desktop Machine

    • CPU structure:
    • GPU structure:
  • Raspberry Pi (Optional)

  • Intel RealSense LiDAR Camera L515

    • Depth camera
    • RGB camera

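As an example of repurposing the buttons from ROS 2, a node can subscribe to the robot's button topic. A minimal rclpy sketch, assuming the standard `irobot_create_msgs` package and its `/interface_buttons` topic (verify names against your firmware's interface documentation):

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from irobot_create_msgs.msg import InterfaceButtons  # assumes irobot_create_msgs is installed


class ButtonWatcher(Node):
    """Logs every button event published by the Create 3."""

    def __init__(self):
        super().__init__('button_watcher')
        # Sensor-data QoS matches the robot's best-effort publishers.
        self.create_subscription(
            InterfaceButtons, '/interface_buttons',
            self.on_buttons, qos_profile_sensor_data)

    def on_buttons(self, msg: InterfaceButtons) -> None:
        # Print the raw message rather than assuming field names.
        self.get_logger().info(str(msg))


def main():
    rclpy.init()
    rclpy.spin(ButtonWatcher())


if __name__ == '__main__':
    main()
```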

Software Setup Steps

This section describes the required setup steps for each device, including installation of the necessary programs such as the LLMs.

Nvidia Jetson AGX Xavier Developer Kit:

  1. Install JetPack 5.1.1 using SDK Manager (make sure the host machine running SDK Manager is on Ubuntu 18.04)
    1. Install to NVMe so the SSD is used as the root filesystem
    2. Once the installation is done, verify that the Ubuntu version on the Jetson is 20.04
    3. AGX Developer Manual
  2. Set up the system so that everything (including all packages) installs to the SSD.

  3. Install Whisper (pip install -U openai-whisper)

ROS 2 Galactic:

  1. ROS 2 Installation (using Binary Packages)
  2. Configure ROS 2 Environment
  3. Create 3 ROS 2 Setup
  4. Nvidia Jetson Setup with Create 3
  5. Test Run
    1. Make sure to run “source /opt/ros/galactic/setup.bash” and “export ROS_DOMAIN_ID=0”
    2. Run the docking and undocking commands (a minimal undock client follows this list)
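
A minimal undock client in rclpy, assuming `irobot_create_msgs` is installed and the robot exposes the `/undock` action (the docking action's name and type vary across Create 3 firmware releases, so only undocking is shown):

```python
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from irobot_create_msgs.action import Undock  # assumes irobot_create_msgs is installed


def main():
    rclpy.init()
    node = Node('undock_client')
    client = ActionClient(node, Undock, '/undock')
    client.wait_for_server()                        # blocks until the robot is discovered
    future = client.send_goal_async(Undock.Goal())  # empty goal starts the undock motion
    rclpy.spin_until_future_complete(node, future)
    accepted = future.result().accepted
    node.get_logger().info('Undock goal accepted' if accepted else 'Undock goal rejected')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Run it with the same `source`/`ROS_DOMAIN_ID` environment as above.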

Create 3 Robot

  1. Create 3 Initial Software Update
  2. Create 3 ROS 2 Setup

LiDAR L515

  1. Connect the Intel RealSense to the Jetson
  2. Install the Intel RealSense Python wrapper
    1. Python package installation (pyrealsense2); a frame-capture sketch follows this list
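
A minimal frame-capture sketch with `pyrealsense2`, assuming the L515 is plugged into the Jetson over USB-C (the stream profiles below are common ones; adjust if the SDK rejects them):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # depth camera
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # RGB camera
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel.
    print(f"center depth: {depth.get_distance(320, 240):.2f} m")
finally:
    pipeline.stop()
```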

Desktop Machine

  1. Install pip
  2. Install CUDA
  3. Install PyTorch
  4. Install NumPy
    1. pip install numpy
  5. Install transformers
    1. pip install transformers
  6. Install LLM
    1. Launch python3
    2. Type in the commands under “Load Model Directly” here (a hedged loading sketch follows this list)
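
The exact commands come from the linked model card; as a hedged sketch, loading a causal LM with `transformers` generally looks like this (the model name below is a placeholder, not necessarily the one used in this project):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the name from the model card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Quick generation check.
inputs = tokenizer("Drive forward one meter, then", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```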

Raspberry Pi (Optional):

  1. Set up the Raspberry Pi
    1. Install the 64-bit Raspberry Pi OS
  2. Download Whisper from the GitHub repository: openai/whisper (Robust Speech Recognition via Large-Scale Weak Supervision)
  3. Test the Python example code in the Thonny IDE (a transcription sketch follows this list)
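
A minimal transcription sketch using the openai-whisper Python API (the audio filename is a placeholder, and ffmpeg must be on the PATH):

```python
import whisper

# "tiny" keeps memory use low on a Raspberry Pi; larger models are more accurate.
model = whisper.load_model("tiny")
result = model.transcribe("command.wav")  # placeholder filename
print(result["text"])
```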

Steps To Work on Project

  1. Connect the Create 3 Robot to Wi-Fi using these steps
  2. Go to Terminal
    1. Type “source /opt/ros/galactic/setup.bash”
    2. Type “export ROS_DOMAIN_ID=<your_domain_id>” (in our case, 0)
  3. To open
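
Once the environment is sourced and the domain ID matches the robot's, a quick sanity check is to list the Create 3's topics. A minimal rclpy sketch:

```python
import time

import rclpy
from rclpy.node import Node

rclpy.init()
node = Node('topic_check')
time.sleep(2.0)  # give DDS discovery a moment to find the robot
for name, types in node.get_topic_names_and_types():
    print(name, types)
node.destroy_node()
rclpy.shutdown()
```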
