EKF SLAM on a TurtleBot3

Built a full SLAM stack in ROS 2 from scratch: 2D geometry library, simulator, odometry, and Extended Kalman Filter SLAM with unknown data association, deployed on a real TurtleBot3.

Jan 2026 - Mar 2026 | View Source
| C++ · ROS 2 · SLAM · EKF · Localization · Sensor Fusion · TurtleBot3

Overview

Built a full SLAM stack from the ground up for a TurtleBot3 differential-drive robot in ROS 2. The robot uses a LIDAR to detect cylindrical landmarks and maintains a joint estimate of its own pose and all landmark positions.

The stack was cross-compiled for the TurtleBot3’s ARM processor (aarch64) and deployed live on the robot.

Packages

| Package | Role |
| --- | --- |
| turtlelib | 2D rigid-body transforms and differential-drive kinematics |
| nusim | ROS 2 simulator and visualizer for TurtleBot3 |
| nuturtle_description | URDF and robot description files |
| nuturtle_interfaces | Custom ROS 2 service definitions |
| nuturtle_control | Odometry and low-level control |
| nuslam | EKF SLAM with unknown data association |

Odometry

Wheel encoder readings are integrated via the differential-drive kinematic model in turtlelib to estimate the robot’s pose. Odometry accumulates drift over time, which the SLAM filter is designed to correct.
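The pose integration can be sketched as follows. This is an illustrative simplification, not the actual turtlelib API; the struct and function names are hypothetical.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of diff-drive odometry (not the actual turtlelib API).
// Wheel angle increments -> body twist -> exact integration of the pose.
struct Pose {
    double x, y, theta;
};

// r: wheel radius, d: wheel track (distance between the wheels),
// dphi_l / dphi_r: left/right wheel angle increments since the last reading.
Pose integrate_odometry(Pose p, double r, double d,
                        double dphi_l, double dphi_r) {
    const double dtheta = r * (dphi_r - dphi_l) / d;   // body rotation
    const double ds     = r * (dphi_r + dphi_l) / 2.0; // distance traveled
    if (std::abs(dtheta) < 1e-9) {
        // Straight-line motion: move along the current heading
        p.x += ds * std::cos(p.theta);
        p.y += ds * std::sin(p.theta);
    } else {
        // Arc motion: exact integration of the constant twist
        const double R = ds / dtheta; // turning radius
        p.x += R * (std::sin(p.theta + dtheta) - std::sin(p.theta));
        p.y += R * (std::cos(p.theta) - std::cos(p.theta + dtheta));
    }
    p.theta += dtheta;
    return p;
}
```

Integrating the exact twist (rather than a Euler step) avoids introducing additional discretization error on top of the unavoidable encoder noise.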

EKF SLAM

The Extended Kalman Filter tracks a joint state vector $\mathbf{x}$ containing both the robot pose and all landmark positions:

$$\mathbf{x} = \begin{bmatrix} x & y & \theta & m_{x_1} & m_{y_1} & \cdots & m_{x_n} & m_{y_n} \end{bmatrix}^T$$

Each timestep consists of two steps:

Predict — motion model

A control input $\mathbf{u} = (v, \omega)$ (linear and angular velocity from wheel encoders) propagates the robot pose forward via the differential-drive kinematic model. The covariance is inflated by process noise $Q$:

$$\hat{\mathbf{x}}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_k)$$

$$\hat{\Sigma}_k = A_k \Sigma_{k-1} A_k^T + Q$$

where $A_k$ is the Jacobian of the motion model with respect to the state.
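The predict step can be sketched for the 3×3 robot block of the covariance. This is a hypothetical simplification: the real filter propagates the full $(3+2n)$-dimensional covariance, where the landmark block of $A_k$ is identity.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

struct RobotState { double x, y, theta; };

Mat3 matmul(const Mat3& a, const Mat3& b) {
    Mat3 c{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

Mat3 transpose(const Mat3& a) {
    Mat3 t{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t[i][j] = a[j][i];
    return t;
}

// Sketch of the EKF predict step for the robot block of the state.
// (A full SLAM filter propagates the (3+2n)x(3+2n) covariance; the landmark
// block of A_k is identity, so only these entries change the robot block.)
void predict(RobotState& s, Mat3& Sigma,
             double v, double w, double dt, const Mat3& Q) {
    // Motion model f: integrate the twist (v, w) over dt
    double x_new, y_new;
    if (std::abs(w) < 1e-9) {
        x_new = s.x + v * dt * std::cos(s.theta);
        y_new = s.y + v * dt * std::sin(s.theta);
    } else {
        x_new = s.x + v / w * (std::sin(s.theta + w * dt) - std::sin(s.theta));
        y_new = s.y + v / w * (std::cos(s.theta) - std::cos(s.theta + w * dt));
    }
    // Jacobian A_k = df/dx at the prior state; only the theta column is
    // nontrivial: dx'/dtheta = -(y' - y), dy'/dtheta = (x' - x) in both cases.
    Mat3 A = {{{1.0, 0.0, -(y_new - s.y)},
               {0.0, 1.0,  (x_new - s.x)},
               {0.0, 0.0,  1.0}}};
    // Sigma_hat = A Sigma A^T + Q
    Sigma = matmul(matmul(A, Sigma), transpose(A));
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            Sigma[i][j] += Q[i][j];
    s.x = x_new;
    s.y = y_new;
    s.theta += w * dt;
}
```

Note that even with zero process noise, uncertainty in $\theta$ leaks into the position covariance through the off-diagonal terms of $A_k$ — heading error is what makes odometry drift unbounded.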

Update — measurement model

The LIDAR produces a point cloud that is processed through a perception pipeline — clustering, circle regression, and classification — to detect cylindrical landmarks and publish their relative positions. These detections feed the measurement model. The Kalman gain $K$ then weights how much to trust the sensor versus the prediction:

$$K = \hat{\Sigma}_k H_k^T (H_k \hat{\Sigma}_k H_k^T + R)^{-1}$$

$$\mathbf{x}_k = \hat{\mathbf{x}}_k + K(\mathbf{z}_k - h(\hat{\mathbf{x}}_k))$$

$$\Sigma_k = (I - K H_k)\hat{\Sigma}_k$$

where $H_k$ is the Jacobian of the measurement function $h$ and $R$ is the measurement noise covariance.
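For a single range-bearing detection, $h$ and the nonzero blocks of $H_k$ can be sketched as follows (illustrative names; the full $H_k$ is assembled by placing these blocks in the columns for the robot pose and the matched landmark):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the range-bearing measurement model for one landmark (mx, my)
// seen from robot pose (x, y, theta). Names are illustrative.
struct RangeBearing { double r, phi; };

// Wrap an angle to (-pi, pi] so the innovation z - h(x) is well-behaved
double normalize_angle(double a) {
    while (a > M_PI)   a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}

RangeBearing measure(double x, double y, double theta, double mx, double my) {
    const double dx = mx - x, dy = my - y;
    return { std::sqrt(dx * dx + dy * dy),
             normalize_angle(std::atan2(dy, dx) - theta) };
}

// Nonzero blocks of H_k = dh/dx, with q = dx^2 + dy^2 and d = sqrt(q):
//   robot-pose columns:  [ -dx/d  -dy/d   0 ]   landmark-j columns: [  dx/d   dy/d ]
//                        [  dy/q  -dx/q  -1 ]                       [ -dy/q   dx/q ]
// Every other landmark's columns are zero, which keeps each update sparse.
```

Normalizing the bearing innovation matters in practice: a raw difference of angles near $\pm\pi$ would otherwise produce a huge spurious correction.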

SLAM in simulation

The nuslam package runs an EKF that maintains a joint state of the robot pose $(x, y, \theta)$ and all detected landmark positions $(m_{x_i}, m_{y_i})$. The filter supports two modes:

  • Known data association — landmark IDs are provided directly by the simulator.
  • Unknown data association — each detection is matched to the nearest existing landmark; detections beyond a distance threshold initialize new landmarks.
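The unknown-association branch can be sketched as a nearest-neighbor search with a distance gate. This is a simplification: a production filter would typically gate on Mahalanobis distance, which accounts for each landmark's uncertainty.

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y; };

// Sketch of unknown data association: match a detection z (in the map frame)
// to the nearest existing landmark, or return -1 to initialize a new one.
// A real filter often gates on Mahalanobis rather than Euclidean distance.
int associate(const Point& z, const std::vector<Point>& landmarks,
              double threshold) {
    int best = -1;
    double best_d = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < landmarks.size(); ++i) {
        const double d = std::hypot(z.x - landmarks[i].x,
                                    z.y - landmarks[i].y);
        if (d < best_d) {
            best_d = d;
            best = static_cast<int>(i);
        }
    }
    return (best_d < threshold) ? best : -1; // -1: start a new landmark
}
```

The threshold trades off two failure modes: too small and one physical cylinder spawns duplicate map entries; too large and distinct cylinders get merged, corrupting the joint estimate.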

RViz displays the ground-truth robot, the SLAM estimate, the odometry estimate, and the estimated and actual landmark positions simultaneously.

SLAM in the real world

The filter was deployed on a physical TurtleBot3 navigating an environment of cylindrical landmarks. After a full loop, the SLAM estimate returned close to the origin while odometry had drifted significantly:

| Estimator | x (m) | y (m) | θ (rad) |
| --- | --- | --- | --- |
| SLAM | −0.016 | −0.001 | −0.018 |
| Odometry | 0.131 | −0.005 | −0.040 |

Real-world SLAM result