The Motor Control group on the Computer Science subteam is responsible for implementing our autonomous decision-making capabilities and controlling the motors accordingly. In short, we use what the boat “sees” (a list of objects detected by the computer vision model, with their distances from the boat) to dictate how the boat moves. Our work consists of two branches: path planning and path execution. These correspond to the Guidance and Control aspects of a GNC (Guidance, Navigation, and Control) system, which is necessary to produce autonomous behavior in a mobile robot. The ZED camera provides the Navigation component through positional tracking of the boat and its surroundings.
Main Control Sequence
This diagram shows how the ZED camera interacts with the autonomous decision-making code in our ROS framework. The main control loop determines which task the boat is on, executes the corresponding completion algorithm, and makes incremental progress every iteration. The SFR (State Field Registry) is a file containing variables that describe the boat’s state, some of which are updated indirectly by the ZED node, such as the list of objects the CV model has detected. To control the motors, the Jetson sends information to the Arduino through a separate form of communication (the Arduino is NOT part of the ROS framework). The Arduino decides whether to listen to the manual remote controller or to the Jetson output. The Jetson also publishes information online for the UI to reference (the UI is NOT part of the ROS framework).
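To make the control-loop/SFR relationship concrete, here is a minimal sketch of one loop iteration. All names here (the `sfr` dict, the task handler, the fixed task order) are hypothetical stand-ins for the real ROS nodes and SFR file; the sketch only illustrates how the loop dispatches a per-task algorithm and advances to the next task on completion.

```python
# Hypothetical stand-in for the SFR: in the real system these fields
# are updated by the ZED node inside the ROS framework.
sfr = {
    "current_task": "channel_navigation",
    "detected_objects": [],   # filled in by the CV/ZED pipeline
    "task_complete": False,
}

def channel_navigation_step(sfr):
    """One increment of progress on a task (placeholder logic)."""
    # ... inspect sfr["detected_objects"], plan, command motors ...
    sfr["task_complete"] = True  # placeholder termination condition

# Each task maps to its specialized completion algorithm.
TASK_HANDLERS = {"channel_navigation": channel_navigation_step}

# Assumed fixed order of tasks, so the next task is always known.
TASK_ORDER = ["channel_navigation"]

def control_loop_iteration(sfr):
    """Run the current task's handler; advance to the next task if done."""
    TASK_HANDLERS[sfr["current_task"]](sfr)
    if sfr["task_complete"]:
        idx = TASK_ORDER.index(sfr["current_task"]) + 1
        if idx < len(TASK_ORDER):
            sfr["current_task"] = TASK_ORDER[idx]
            sfr["task_complete"] = False
```

In the real system this iteration would run inside a ROS node's spin loop rather than as a bare function call.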
Path planning encompasses the task-specific algorithms that create a list of waypoints defining the ideal path of the boat. To calculate these waypoints we use two frames of reference. The local frame has the ZED camera as its origin and is how obstacle coordinates are represented in the object list. The global frame, represented by the large axes, is initialized where the ZED is turned on and serves as a more GPS-like frame through which the boat moves. The waypoints are global coordinates but are calculated from local information, so we use a mapping function (local → global) based on the boat’s current global position and orientation provided by the ZED.
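The local → global mapping is a standard 2D rigid transform: rotate the local coordinates by the boat's heading, then translate by the boat's global position. The sketch below assumes a planar (x, y) simplification and hypothetical argument names; the real pipeline pulls the pose from the ZED's positional tracking.

```python
import math

def local_to_global(local_xy, boat_xy, boat_heading):
    """Map a point from the boat's local (ZED) frame to the global frame.

    local_xy:     (x, y) of an obstacle relative to the camera
    boat_xy:      (x, y) of the boat in the global frame (from ZED tracking)
    boat_heading: boat orientation in radians, measured in the global frame
    """
    lx, ly = local_xy
    c, s = math.cos(boat_heading), math.sin(boat_heading)
    # Rotate into the global frame, then translate by the boat position.
    gx = boat_xy[0] + c * lx - s * ly
    gy = boat_xy[1] + s * lx + c * ly
    return (gx, gy)
```

For example, an obstacle 1 m directly ahead of a boat sitting at (2, 3) and facing along the global +x axis maps to the global point (3, 3).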
The RoboBoat competition requires our boat to attempt various navigation and skill-based tasks. Our primary focus this year has been the navigation tasks. Our team has created a specialized algorithm for each task that creates the list of waypoints and sends them to the motion controller (pure pursuit). The waypoints are created either by specialized selection and calculation relative to given objects for simpler tasks, or by an A* algorithm for more complex tasks. The general workflow for task completion is depicted below. We handle task transitions by assuming a set order of tasks we want to complete, so after finishing one we immediately know which is next. Each task has distinguishable markers which the boat locates to orient itself in a position ready to start the task. The boat then executes an (abstract) loop of observing its surroundings, creating a path, executing the path, and determining whether the task has been completed. Each task has its own completion criterion.
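The abstract observe → plan → execute → check loop can be sketched as a small driver function. The four callables are hypothetical stand-ins for the real task-specific code (observation via the SFR object list, waypoint creation, pure pursuit execution, and the task's completion criterion); only the loop structure is taken from the text.

```python
def run_task(observe, plan, execute, is_complete, max_iters=100):
    """Abstract task-completion loop: observe, plan, execute, check.

    observe():           return the current snapshot of detected objects
    plan(objects):       task-specific waypoint creation (returns waypoints)
    execute(waypoints):  hand the waypoints off to the motion controller
    is_complete(objects): the task's completion criterion
    """
    for _ in range(max_iters):
        objects = observe()
        waypoints = plan(objects)
        execute(waypoints)
        if is_complete(objects):
            return True   # task finished; the fixed order gives the next one
    return False          # safety cap so a stuck task cannot loop forever
```

The `max_iters` cap is an added safety assumption, not something the text specifies.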
Once a list of waypoints has been created, the boat must follow them, i.e., execute the path. To do this we are adapting a Pure Pursuit path follower. The idea behind pure pursuit is that the boat maintains a “lookahead” point a fixed distance along the waypoint-defined path that it is constantly trying to reach. The inputs to the controller are the waypoint list, the global position and orientation of the boat, and the lookahead distance. The controller outputs the ideal linear and angular velocities the boat should maintain to reach the lookahead point. We then use this information to send signals to the motors.
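One pure pursuit update can be sketched as follows. This is a simplified version, not our exact controller: it picks the first waypoint at least a lookahead distance away (a full implementation interpolates along the path segments), transforms it into the boat's frame, and applies the standard pure pursuit curvature formula κ = 2y / L², where y is the lateral offset of the target and L its distance. All function and parameter names are hypothetical.

```python
import math

def pure_pursuit_step(waypoints, pos, heading, lookahead, linear_speed):
    """One pure pursuit update (simplified sketch).

    waypoints: list of (x, y) global coordinates defining the path
    pos:       (x, y) global position of the boat
    heading:   boat orientation in radians
    Returns (linear_velocity, angular_velocity).
    """
    # Target = first waypoint at least `lookahead` away; fall back to the
    # final waypoint. A full implementation interpolates between waypoints.
    target = waypoints[-1]
    for wp in waypoints:
        if math.hypot(wp[0] - pos[0], wp[1] - pos[1]) >= lookahead:
            target = wp
            break

    # Express the target in the boat's local frame (rotate by -heading).
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    c, s = math.cos(heading), math.sin(heading)
    local_y = -s * dx + c * dy   # lateral offset (positive = left)

    # Pure pursuit curvature: kappa = 2 * y / L^2
    dist = math.hypot(dx, dy)
    kappa = 2.0 * local_y / (dist ** 2) if dist > 0 else 0.0
    return linear_speed, linear_speed * kappa
```

A target straight ahead yields zero angular velocity; a target off to the left yields a positive (left) turn rate, with sharper turns for closer or more offset targets.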
Concepts and Techniques Used
- Path planning (A* algorithm)
- Path execution (Pure Pursuit, PID)
- Math (linear algebra, calculus, trigonometry)
- Coordinate frame transformations
- Python3 and MATLAB
Future Goals
- Improve shooting strategies
- Make algorithms more robust to environmental disruptions (currents, waves, etc.) and edge cases (boat gets off course, more complicated transitions required, etc.)
- Incorporate reinforcement learning techniques