PREREQUISITES: Download and compile the 5 DOF planar robot packages.
sudo apt-get install ros-noetic-ros-control ros-noetic-ros-controllers
git clone https://github.com/arebgun/dynamixel_motor
git clone https://github.com/fenixkz/ros_snake_robot.git
sudo apt-get install ros-noetic-gazebo-ros-pkgs ros-noetic-gazebo-ros-control
After each download, rebuild and source the workspace:
catkin_make
source ~/CATKIN_WORKSPACE/devel/setup.bash
To launch Gazebo:
roslaunch gazebo_robot gazebo.launch
To list the available ROS topics:
rostopic list
TASK: Create a ROS node that “listens” for std_msgs/Float64 data and “publishes” it to a joint of the planar robot. The node should send the move command only if the new incoming value is lower than the previous one (a minimal sketch of such a node follows the video below).
Joint Movement of Planar Robot
new.mov
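A minimal sketch of such a listener node in rospy (the input topic and the controller command topic names are assumptions; check rostopic list for the real ones):

#!/usr/bin/env python3
import rospy
from std_msgs.msg import Float64

class FloatRelay:
    def __init__(self):
        self.prev = None  # last value received
        # Assumed controller topic; substitute one from `rostopic list`.
        self.pub = rospy.Publisher('/robot/joint1_position_controller/command',
                                   Float64, queue_size=1)
        rospy.Subscriber('/listener_input', Float64, self.callback)

    def callback(self, msg):
        # Forward the command only when the new value is lower than the previous one.
        if self.prev is None or msg.data < self.prev:
            self.pub.publish(msg)
        self.prev = msg.data

if __name__ == '__main__':
    rospy.init_node('float_relay')
    FloatRelay()
    rospy.spin()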
TASK: Get the step response of the following (you can create a node that sends a square-wave command; a sketch of such a signal-generator node follows this list):
- the joint at the base of the robot
lab3_part2_base_step.mov
- the joint at the end-effector of the robot
lab3_part2_end_step.mov
Get the sine-wave response of the following (the same node can send a sine-wave command):
- the joint at the base of the robot
lab3_part2_base_sin.mov
- the joint at the end-effector of the robot
lab3_part2_end_sin.mov
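A sketch of the signal-generator node used for both experiments (the command topic is an assumption; point it at the base-joint or end-effector-joint controller from rostopic list):

#!/usr/bin/env python3
import math
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node('signal_generator')
    wave = rospy.get_param('~wave', 'square')       # 'square' or 'sine'
    amplitude = rospy.get_param('~amplitude', 0.5)  # rad
    period = rospy.get_param('~period', 4.0)        # s
    # Assumed controller topic; substitute the joint under test.
    pub = rospy.Publisher('/robot/joint1_position_controller/command',
                          Float64, queue_size=1)
    rate = rospy.Rate(50)
    t0 = rospy.get_time()
    while not rospy.is_shutdown():
        phase = 2.0 * math.pi * (rospy.get_time() - t0) / period
        if wave == 'square':
            cmd = amplitude if math.sin(phase) >= 0 else -amplitude
        else:
            cmd = amplitude * math.sin(phase)
        pub.publish(Float64(cmd))
        rate.sleep()

if __name__ == '__main__':
    main()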
TASK: Configure the MoveIt library.
My MoveIt package is called "lab4".
- Create a node that moves the “end” link by 1.4 (in RViz units, mm or m) along the X axis. The source file is located in scripts/src/test.cpp; a Python sketch follows the video.
rosrun scripts test_test
x.mov
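The lab node itself is written in C++ (scripts/src/test.cpp); a minimal Python equivalent using moveit_commander could look like this (the planning-group name 'arm' is an assumption; check the lab4 MoveIt configuration):

#!/usr/bin/env python3
import sys
import moveit_commander
import rospy

rospy.init_node('move_along_x')
moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander('arm')  # assumed group name

# Shift the current end-effector pose by 1.4 along X and move to it.
pose = group.get_current_pose().pose
pose.position.x += 1.4
group.set_pose_target(pose)
group.go(wait=True)
group.stop()
group.clear_pose_targets()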
- Create a node that moves “end” to draw a rectangle. The source file is located in scripts/src/test_rectangle.cpp (a Python sketch follows the video).
rosrun scripts test_rect
Untitled.mov
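A hedged Python sketch of the rectangle node (the lab code is scripts/src/test_rectangle.cpp; the group name and side lengths are assumptions): visit the four corners as Cartesian waypoints and execute the resulting trajectory.

#!/usr/bin/env python3
import copy
import sys
import moveit_commander
import rospy

rospy.init_node('draw_rectangle')
moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander('arm')  # assumed group name

# Trace a rectangle in the XY plane as four Cartesian waypoints.
waypoints = []
pose = group.get_current_pose().pose
for dx, dy in [(0.2, 0.0), (0.0, 0.1), (-0.2, 0.0), (0.0, -0.1)]:
    pose = copy.deepcopy(pose)
    pose.position.x += dx
    pose.position.y += dy
    waypoints.append(pose)
(plan, fraction) = group.compute_cartesian_path(waypoints, 0.01, 0.0)
group.execute(plan, wait=True)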
TASK: Using rosbag, record the joint angles and the position of the end-effector along the x- and y-axes (a sketch of the bag-to-CSV export follows the file link below).
/scripts/src/salem.csv
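A hedged sketch of how such a recording can be exported to CSV with the rosbag Python API (the bag name and the /joint_states topic are assumptions; an end-effector pose topic would be handled the same way):

#!/usr/bin/env python3
import csv
import rosbag

# Recorded beforehand with e.g.: rosbag record -O salem.bag /joint_states
with rosbag.Bag('salem.bag') as bag, open('salem.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for topic, msg, t in bag.read_messages(topics=['/joint_states']):
        # One row per sample: timestamp followed by all joint angles.
        writer.writerow([t.to_sec()] + list(msg.position))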
TASK: Obtain Forward Kinematics without the robot model
The dataset is located in scripts/src/dict1.csv; it was generated by scripts/src/dataset.py.
Importing libraries.
import numpy as np
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import pandas as pd
from sklearn.model_selection import train_test_split
Reading the generated CSV file.
def main():
    data = pd.read_csv("/home/zhamilya/catkin_ws_zhamilya/dict1.csv",
                       header=None, names=["Angles", "XY"])
    print(data.head(10))
Splitting into train and test.
    train = data['Angles'].to_numpy()
    labels = data['XY'].to_numpy()
    X = list()
    Y = list()
    for i in range(len(train)):
        # Collapse the double spaces that numpy's array-to-string formatting
        # leaves in the "XY" column, then strip the brackets.
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].strip('[ ').strip(' ]')
        train[i] = train[i].strip('(').strip(')')
        result = [float(val) for val in train[i].split(',')]
        X.append(result)
        result = [float(val) for val in labels[i].split(' ')]
        Y.append(result)
    X_train, X_test, y_train, y_test = train_test_split(np.asarray(X), np.asarray(Y), test_size=0.80)
    print("TRAIN X SHAPE ", np.shape(X_train))
    print("TRAIN Y SHAPE ", np.shape(y_train))
    print("TEST X SHAPE ", np.shape(X_test))
    print("TEST Y SHAPE ", np.shape(y_test))
Loss function: root mean squared error (RMSE).
def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))
Model
    model = Sequential()
    model.add(Dense(10, input_dim=5, activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(3, activation='linear'))
    model.compile(loss=rmse, optimizer=keras.optimizers.Adam(0.01))
    print(model.summary())
    model.fit(X_train, y_train, epochs=15)
    scores = model.evaluate(X_test, y_test, verbose=0)
    print("RMSE: %.2f" % scores)
RMSE: 0.10
- Mean Squared Logarithmic Error
model = Sequential()
model.add(Dense(10, input_dim=5, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
mean_squared_logarithmic_error 0.0005
- Mean Absolute Error
model = Sequential()
model.add(Dense(10, input_dim=5, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_absolute_error', optimizer=keras.optimizers.Adam(0.01))
mean_absolute_error 0.0331
- Mean Squared Error
model = Sequential()
model.add(Dense(10, input_dim=5, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.Adam(0.01))
mean_squared_error 0.0036
- Root Mean Squared Error
model = Sequential()
model.add(Dense(10, input_dim=5, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(3, activation='linear'))
model.compile(loss=rmse, optimizer=keras.optimizers.Adam(0.01))
rmse 0.0508
Mean Squared Logarithmic Error showed the best results.
Layer number Mean Squared Logarithmic Error
2 ------------> 0.000359
3 ------------> 0.088592
4 ------------> 0.088045
5 ------------> 0.088894
6 ------------> 0.000551
7 ------------> 0.000601
2 layers showed the best results.
- Tanh
model = Sequential()
model.add(Dense(10, input_dim=5, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.000190
- Sigmoid
model = Sequential()
model.add(Dense(10, input_dim=5, activation='sigmoid'))
model.add(Dense(16, activation='sigmoid'))
model.add(Dense(16, activation='sigmoid'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.017334
- Linear
model = Sequential()
model.add(Dense(10, input_dim=5, activation='linear'))
model.add(Dense(16, activation='linear'))
model.add(Dense(16, activation='linear'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.002832
- Softmax
model = Sequential()
model.add(Dense(10, input_dim=5, activation='softmax'))
model.add(Dense(16, activation='softmax'))
model.add(Dense(16, activation='softmax'))
model.add(Dense(3, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.019299
Tanh showed the best results.
Mean Squared Logarithmic Error: 0.000190
Dataset Size -> 10000
Number of Hidden Layers -> 2
Optimizer -> Adam
Activation Function -> Tanh
Loss -> Mean Squared Logarithmic Error
Epochs -> 15
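For a quick sanity check, the trained FK network can be queried directly. A minimal usage sketch, run after the training code above (the joint-angle values are made up):

angles = np.array([[0.1, -0.2, 0.3, 0.0, 0.5]])  # five joint angles in rad
print(model.predict(angles))  # predicted end-effector coordinates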
TASK: Obtain Inverse Kinematics without the robot model
Importing libraries.
import numpy as np
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import pandas as pd
from sklearn.model_selection import train_test_split
Reading the generated CSV file.
def main():
    data = pd.read_csv("/home/zhamilya/catkin_ws_zhamilya/dict1.csv",
                       header=None, names=["Angles", "XY"])
    print(data.head(10))
Splitting into train and test.
    train = data['Angles'].to_numpy()
    labels = data['XY'].to_numpy()
    X = list()
    Y = list()
    for i in range(len(train)):
        # Collapse the double spaces left by numpy's array-to-string
        # formatting, then strip the brackets.
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].replace('  ', ' ')
        labels[i] = labels[i].strip('[ ').strip(' ]')
        train[i] = train[i].strip('(').strip(')')
        # Inverse kinematics: the angles are now the targets (Y) and the
        # end-effector coordinates are the inputs (X).
        result = [float(val) for val in train[i].split(',')]
        Y.append(result)
        result = [float(val) for val in labels[i].split(' ')]
        X.append(result)
    X_train, X_test, y_train, y_test = train_test_split(np.asarray(X), np.asarray(Y), test_size=0.80)
    print("TRAIN X SHAPE ", np.shape(X_train))
    print("TRAIN Y SHAPE ", np.shape(y_train))
    print("TEST X SHAPE ", np.shape(X_test))
    print("TEST Y SHAPE ", np.shape(y_test))
Model
    model = Sequential()
    model.add(Dense(10, input_dim=3, activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(5, activation='linear'))
    # rmse is the custom loss defined in the forward-kinematics script.
    model.compile(loss=rmse, optimizer=keras.optimizers.Adam(0.01))
Model Fitting
    model.fit(X_train, y_train, epochs=200)
    scores = model.evaluate(X_test, y_test, verbose=0)
    print("RMSE: %.2f" % scores)
0.259730
Model
model = Sequential()
model.add(Dense(10, input_dim =3, activation = 'relu'))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(5, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.01889
Layer number Mean Squared Logarithmic Error
2 ------------> 0.022235
3 ------------> 0.015986
4 ------------> 0.022250
5 ------------> 0.015907
6 ------------> 0.015879
7 ------------> 0.018899
8 ------------> 0.018779
9 ------------> 0.018794
10 -----------> 0.018883
5 hidden layers showed the best performance (0.015907).
model = Sequential()
model.add(Dense(10, input_dim=3, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(16, activation='tanh'))
model.add(Dense(5, activation='linear'))
model.compile(loss='mean_squared_logarithmic_error', optimizer=keras.optimizers.Adam(0.01))
0.015924
0.015749
Mean Squared Logarithmic Error: 0.015749
Dataset Size -> 10000
Number of Hidden Layers -> 5
Optimizer -> Adam
Activation Function -> Tanh
Loss -> Mean Squared Logarithmic Error
Epochs -> 200
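As with the FK model, the trained IK network can be queried directly. A minimal usage sketch, run after the training code above (the target coordinates are made up):

target = np.array([[0.4, 0.2, 0.0]])  # desired end-effector coordinates
print(model.predict(target))  # predicted five joint angles in rad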