2023-2024 Qubo Software Projects
Controls
Note: This section was copied directly from the old wiki. This page is intended to be approachable for new members transitioning to working on Qubo. I'm not sure how understandable this is, as it's not my area, so please don't avoid a project you're interested in just because the description sounds scary. For questions and concerns about these projects, ask in #team-software in the Slack. --- Jeffrey Fisher
Controls as in "control theory": making Qubo able to hold still (to fire torpedoes accurately), move efficiently, etc.
Goals
- Make movement faster, more efficient, and more accurate.
- Track our position using available sensors.
- Be able to give Qubo a position and have it move there.
Current state
- Very simple, untuned P control for depth and PD control for yaw
- Ballasts mostly hold roll and pitch; forward/sideways thrust is commanded directly (open loop)
- We only track depth and orientation; there is no localization or sense of position
Things to work on
- Maybe: The ballasts will be removed, in which case we will have the full 6 degrees of freedom to deal with.
  - We may end up keeping the ballasts, though.
- How can we improve on our current control? (a minimal PID sketch follows this list)
  - Is it feasible to somewhat accurately model our robot? (determine hydrodynamic quantities like added mass, inertia, drag, etc.)
    - If so, what model-based control is best?
      - LQR?
      - Sliding mode control?
      - Model predictive control?
    - If not, what control methods can we use?
      - PID?
      - MIMO control with linear and angular positions?
      - Cascading PID?
      - Adaptive control?
      - Feedforward components?
- How can we use our sensors to track our position? (a toy Kalman filter sketch also follows below)
  - Kalman filters
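As a concrete starting point, below is a minimal single-axis PID sketch of the kind we could use for depth. This is not Qubo's actual code: the gains are made up, and in practice this would live inside a ROS control node.

```python
class PID:
    """Single-axis PID with output clamping and basic anti-windup."""

    def __init__(self, kp, ki, kd, output_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        clamped = max(-self.output_limit, min(self.output_limit, out))
        if clamped != out:
            # Output saturated: undo this step's integration so the
            # integral term doesn't wind up while we can't act on it.
            self.integral -= error * dt
        return clamped

# Hypothetical usage inside a loop that runs every dt seconds:
depth_pid = PID(kp=2.0, ki=0.1, kd=0.5)  # made-up gains, would need tuning
# vertical_thrust = depth_pid.update(target_depth, measured_depth, dt)
```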
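And a toy illustration of the Kalman filter idea: fusing a velocity measurement (say, from a DVL) into a position estimate. This is a 1D sketch with made-up noise parameters, not a proposal for our actual estimator.

```python
import numpy as np

class Kalman1D:
    """Toy constant-velocity filter: state is [position, velocity]."""

    def __init__(self, q=0.01, r=0.1):
        self.x = np.zeros(2)   # state estimate
        self.P = np.eye(2)     # estimate covariance
        self.q, self.r = q, r  # process / measurement noise (made up)

    def predict(self, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])  # position integrates velocity
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(2)

    def update_velocity(self, v_meas):
        H = np.array([[0.0, 1.0]])      # we only observe velocity
        y = v_meas - (H @ self.x)[0]    # innovation
        S = (H @ self.P @ H.T)[0, 0] + self.r
        K = (self.P @ H.T) / S          # Kalman gain, shape (2, 1)
        self.x = self.x + K[:, 0] * y
        self.P = (np.eye(2) - K @ H) @ self.P

# Usage: call predict(dt) every tick; call update_velocity(vx) whenever a
# DVL report arrives. kf.x[0] is then the fused position estimate.
kf = Kalman1D()
kf.predict(0.1)
kf.update_velocity(0.5)
```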
Vision
Goals
- Detect competition objects accurately and efficiently (minimize computational cost)
- Determine objects' relative poses (relative linear position/coordinates and rotational orientation)
Current state
- A YOLOv8 model, trained very quickly during competition (in this notebook) on our data and another team's data
Things to work on
- What ML model(s) do we want to use? (my knowledge is quite limited)
  - YOLOv8 worked pretty well for identifying objects, considering how rushed our training and implementation was.
  - Are there models feasible for us that give more info than YOLO? Maybe something like PoseCNN, which does full pose estimation given 3D models of the objects?
- If we use YOLOv8, one of the big issues was speed: both initialization (it took about a minute at competition) and detection speed.
  - This can likely be improved just by using the proper format for the Jetson. Right now we're just using .pt, but TensorRT (tutorial, docs) is recommended; we can test other formats using their benchmark mode. (See the TensorRT export sketch after this list.)
  - We can also use the quantization args for predict and export.
- Data collection: we collected all of our data at competition (including yoinking another team's dataset).
  - Some other teams constructed 3D models in Unity or other graphics engines and trained on those. Is that worth it?
  - Ideally, before we go to competition, we would have a baseline model trained on data we either create synthetically or collect in the pool using physical recreations of the objects.
- Traditional CV: if our ML model only gives us the bounding box of the object in the image, can we use traditional CV techniques to get the relative pose of objects?
  - E.g. algorithms like SIFT (feature matching), or contour/corner detection followed by solvePnP (see the sketch after this list), etc.
- Make labeling easier: labeling was an incredibly massive time sink during competition.
  - Roboflow hides model-assisted labeling behind a paywall, but we found this cool open source thing that lets you do model-assisted labeling pretty easily.
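For the speed issue above, here is roughly what the TensorRT export could look like with the ultralytics Python API. "best.pt" is a placeholder path, and the arguments should be double-checked against the ultralytics export docs:

```python
from ultralytics import YOLO

# Export our trained .pt weights to a TensorRT engine on the Jetson.
# half=True requests FP16; int8=True is another quantization option.
model = YOLO("best.pt")
model.export(format="engine", half=True)  # writes best.engine

# At runtime, load the engine instead of the .pt file:
trt_model = YOLO("best.engine")
results = trt_model.predict("frame.jpg")
```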
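And a sketch of the contour/corner detection -> solvePnP idea: given an object's known 3D geometry and its detected 2D corners, OpenCV can recover the relative pose. All of the numbers below are illustrative placeholders.

```python
import numpy as np
import cv2

# Known geometry: corners of a 0.6 m x 0.3 m rectangular target, in the
# object's own frame. We'd measure the real sizes from the competition spec.
object_points = np.array([
    [-0.3, -0.15, 0.0],
    [ 0.3, -0.15, 0.0],
    [ 0.3,  0.15, 0.0],
    [-0.3,  0.15, 0.0],
], dtype=np.float32)

# Matching pixel coordinates of those corners, e.g. from contour/corner
# detection inside the YOLO bounding box.
image_points = np.array([
    [310.0, 260.0],
    [420.0, 255.0],
    [425.0, 330.0],
    [315.0, 335.0],
], dtype=np.float32)

# Intrinsics from a one-time camera calibration (placeholder values).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
# tvec is the object's position in the camera frame (meters);
# cv2.Rodrigues(rvec)[0] is the rotation matrix giving its orientation.
```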
GUI
End-effectors
This includes: torpedo launcher, claw, marker dropper.
Marker dropper = a device that releases a small object, a "marker", that is identifiable as our team's. A recurring RoboSub task is dropping markers into bins.
Jeffrey Fisher: My understanding is that these end-effectors are not complete, but that we can work with prior iterations. Even if the code won't go on the robot right away, it is worth getting it in place so we are ready when mechanical finishes the new designs, and so we can begin testing the functionality. Testing matters because all three of these end-effectors depend on accuracy.
New DVL (1-2 people)
We've purchased a new, much smaller DVL (Doppler velocity log). The model is WaterLinked's DVL A50. For information on what a DVL is, and about this specific model, see the BlueRobotics DVL A50 page.
Project overview
- The DVL speaks JSON over TCP. See WaterLinked's protocol documentation. (A quick prototype follows at the end of this section.)
  - There is also a serial interface, but we will not use it because (1) it is slower and (2) I'm pretty sure one reason for getting this DVL was that we can talk to it over Ethernet. --- Jeffrey Fisher
- Goal: Set up communication between Qubo's main computer and the DVL.
- Goal: Make the DVL data available to the rest of the code over ROS.
- Language: Likely C++
- There are existing libraries for interfacing with this DVL. It is probably worth evaluating them.
  - At a quick glance, this one seems worth checking out: https://github.com/paagutie/dvl-a50. It's already using the JSON library I was planning to use, so if we write our own in C++ it will probably be quite similar to this one. --- Jeffrey Fisher
  - Here's a bunch: https://github.com/search?utf8=%E2%9C%93&q=dvl+a50&type=repositories
- Jeffrey Fisher: Some things worth considering:
  - How often will we have to change this code?
  - How likely are there to be bugs?
  - Will depending on a 3rd-party ROS library make it hard to upgrade in the future? How separate are the DVL protocol and ROS portions of the 3rd-party library? How hard would it be to patch the library ourselves for a newer ROS version?
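To get a feel for the protocol, here is a quick Python prototype (the real implementation will likely be C++, per the above). It assumes the A50's documented behavior of streaming newline-delimited JSON reports over TCP port 16171; the address and field names should be verified against WaterLinked's protocol docs.

```python
import json
import socket

# Default address from WaterLinked's docs (assumption: verify for our unit).
DVL_ADDR = ("192.168.194.95", 16171)

with socket.create_connection(DVL_ADDR) as sock:
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:  # connection closed by the DVL
            break
        buf += chunk
        # Reports arrive as newline-delimited JSON objects.
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            report = json.loads(line)
            if report.get("type") == "velocity":
                # vx/vy/vz in m/s; fom is the figure-of-merit error estimate.
                print(report["vx"], report["vy"], report["vz"], report["fom"])
```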
Embedded software
The electrical team is redesigning the electrical system to use a backplane. If you've ever seen a computer motherboard, that's basically what a backplane is: instead of a mess of wires, there will be several "daughter cards" that slot into the backplane and communicate over wiring inside the PCBs.
Each of the daughter boards needs to be programmed.
Daughter boards
WIP: This section is rough, just taken from Alex's notes from a meeting with Erik.
Thruster control board (x2)
- Each thruster control board controls 4 thrusters, for a total of 8 thrusters.
- CAN node
- UART node
- Receives heading information from the Jetson (a 6-vector of floats?)
Power board
- CAN node
- UART node
- 5 current-monitoring ICs, 1 for the whole system, with the same outputs as the monitors on the thruster control card
- eFuses (2 of them):
  - Communicate with the outside world using a digital output (whether the fuse is open or not)
  - Should be logged for diagnostics
- Enable pins (think power buttons for individual components):
  - 1 for the thrusters and almost everything else, 1 for the Jetson
  - Considering a shutoff for each individual sensor; great for diagnostic purposes
Diagnostics
- Polling for current and voltage information over the CAN bus (probably can be done on the Jetson)
- Home-cooked boundary scan: do a full scan on startup, on shutdown, and if a serious fault is detected.
  - Send a stimulus to the pressure points, and measure the response.
Jetson
- CAN: Send heading information to the thruster daughter cards
- CAN: Get power information from the power daughter card
CAN Bus
CAN (Controller Area Network) is a bus protocol for communication between microcontrollers and devices; it is widely used in motor vehicles.
As of right now, this is the planned protocol for communication between the daughter boards. (A hypothetical Jetson-side sketch follows.)
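To make this concrete, here is a hypothetical sketch of the Jetson side using python-can over Linux SocketCAN. The arbitration IDs and payload layouts below are placeholders; the actual message formats for the daughter cards are still undecided.

```python
import struct
import can

# Open the Jetson's CAN interface (assumes SocketCAN is configured as can0).
bus = can.interface.Bus(channel="can0", interface="socketcan")

# Example: pack 4 thruster commands for one thruster card as big-endian
# int16s scaled from [-1.0, 1.0]. 8 bytes fills a classic CAN frame exactly.
thrusts = [0.25, -0.10, 0.0, 0.50]
payload = struct.pack(">4h", *(int(t * 32767) for t in thrusts))
bus.send(can.Message(arbitration_id=0x101, data=payload, is_extended_id=False))

# Reading power-board telemetry is the same idea in reverse. 0x201 and the
# two-float layout are placeholders for whatever the power card ends up using.
msg = bus.recv(timeout=1.0)
if msg is not None and msg.arbitration_id == 0x201:
    voltage, current = struct.unpack(">2f", msg.data)
```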