2023-2024 Qubo Software Projects

From R@M Wiki
== Spring Overall Goals ==
Competition is coming up on us quickly!


'''Test, test, test.''' Get Qubo in the pool as much as possible. '''Iterate!''' Make something crappy so that we can see how it works. Don't try to get it perfect; the only way we'll get anywhere close to perfect is by making crappy versions first.


'''Deadlines.''' Leads will work with teams to develop a detailed semester plan that includes estimated deadlines. The focus will be on becoming competition-ready; nice-to-haves should be separated out or removed from the plan entirely. Aim for a Minimum Viable Product, both so that we can test early and often and so that we have something solid working at competition.

That said, we have a large team, so we do not need to stop working on longer-term projects. The main difference is that urgent, key projects will build a minimal plan rather than an aspirational one. For example, the GUI has the potential to greatly increase testing and competition efficiency, but it is not strictly necessary, so the GUI plan won't have to be as minimal as the controls plan. At the same time, there may be aspirational controls tasks that a subteam is working on; but we also need to get something simple working quickly so that we can start testing.

Missing a deadline is not the end of the world. The main goal is to be realistic about what we can accomplish within the time we have. If we are not progressing as fast as we hoped, then we'll have to remove something non-essential from the plan.

Look for easy points in RoboSub tasks. Focus on reliability rather than having many features.


'''Keep the bus factor and integration hell in mind.''' We're not sure of the best ways to do this. If you have ideas or experience, let us know!

Large teams often take longer to finish a task; there are only so many things to do. But if someone has schedule conflicts, how can we be ready to pick up what they were working on?

Pair programming? Detailed plans with small, concrete tasks?

== Autonomy ==
This might get lumped in with Controls and/or Vision; if you are working on autonomy, you may end up working on those a lot too. Also potentially involved: localization and mapping.

=== Goals ===

# Pre-qualification task.
# Better autonomy: build a base that we can continue to build on, rather than entirely ad-hoc, hacky autonomy.
# Ability to complete various RoboSub tasks

== Controls ==
<blockquote>Note: This section was copied directly from the old wiki. This page is intended to be approachable for new members transitioning to working on Qubo. I'm not sure how understandable this is, as it's not my area; so please don't avoid a project you're interested in because the description sounds scary. For questions and concerns about these projects, ask in #team-software in the Slack. --- Jeffrey Fisher</blockquote>Controls as in [[wikipedia:Control_theory|"control theory"]]: making Qubo able to stand still (to fire torpedoes accurately), move efficiently, etc.

=== Goals ===

# Make movement faster, more efficient, and more accurate.
# Track our position using available sensors.
# Be able to give Qubo a position and have it move there.

=== Current state ===

* Very simple, untuned P depth control and PD yaw control
** Ballasts mostly hold roll and pitch; forward/sideways thrust is commanded directly
* We only track depth and orientation; there is no localization or sense of position
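To make "P depth and PD yaw control" concrete for new members, here is a minimal sketch of that scheme. The gains and the yaw-wrapping detail are illustrative placeholders, not Qubo's actual code or tuned values.

```python
# Sketch of the current control scheme: proportional depth control plus
# proportional-derivative yaw control. Gains are illustrative placeholders.

def depth_control(target_depth, depth, kp=1.0):
    """P controller: thrust proportional to depth error."""
    return kp * (target_depth - depth)

def yaw_control(target_yaw, yaw, yaw_rate, kp=1.0, kd=0.2):
    """PD controller: the derivative term damps oscillation near the setpoint."""
    error = target_yaw - yaw
    # Wrap the error into (-180, 180] so we always turn the short way around.
    error = (error + 180.0) % 360.0 - 180.0
    return kp * error - kd * yaw_rate

# Example: 2 m below target depth -> thrust command of 2.0 (downward).
print(depth_control(target_depth=3.0, depth=1.0))
# Example: 10 degrees off heading while already rotating toward the target.
print(yaw_control(target_yaw=90.0, yaw=80.0, yaw_rate=5.0))
```

The angle wrap is the part most often gotten wrong in a first implementation: without it, a target of -170° seen from +170° commands a 340° turn instead of a 20° one.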

=== Things to work on ===

* Maybe: Ballasts might be removed, so we will have full 6 degrees of freedom to deal with
** We may end up keeping the ballasts though.
* How can we improve on our current control?
** Is it feasible to somewhat accurately model our robot? (determine hydrodynamic quantities like added mass, inertia, drag, etc.)
*** If so, what model-based control is best?
**** [https://quartz-suggestion-1ac.notion.site/LQR-d2bbc644378f4cab8ed08ce8c0a1acef?pvs=4 LQR]?
**** Sliding mode control?
**** Model predictive control?
*** If not, what control methods can we use?
**** PID?
***** MIMO with linear and angular positions?
***** Cascading PID?
**** Adaptive control?
**** Feedforward components?
* How can we use our sensors to track our position?
** Kalman filters
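To show what the Kalman filter bullet might look like in practice, here is a rough 1-D sketch that fuses noisy depth readings using a constant-velocity model. The process and measurement noise values are made-up placeholders, not calibrated numbers for Qubo's sensors.

```python
# 1-D Kalman filter sketch: state is [depth, vertical velocity] under a
# constant-velocity model; the measurement is depth only. Noise values
# are illustrative placeholders.

class DepthKalman:
    def __init__(self, q=0.01, r=0.25):
        self.x = [0.0, 0.0]                 # estimate: depth (m), velocity (m/s)
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
        self.q = q                          # process noise
        self.r = r                          # depth sensor variance

    def predict(self, dt):
        d, v = self.x
        self.x = [d + v * dt, v]
        p = self.p
        # P = F P F^T + Q for F = [[1, dt], [0, 1]]
        p00 = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        self.p = [[p00, p01], [p10, p11]]

    def update(self, z):
        # Measurement matrix H = [1, 0]: we observe depth directly.
        y = z - self.x[0]
        s = self.p[0][0] + self.r
        k0, k1 = self.p[0][0] / s, self.p[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

kf = DepthKalman()
for z in [0.9, 1.1, 1.0, 1.05]:     # noisy depth readings around 1 m
    kf.predict(dt=0.1)
    kf.update(z)
print(kf.x[0])                       # estimate converges toward ~1 m
```

The real version would extend the state with orientation and, once the DVL is integrated, body-frame velocities; libraries like `robot_localization` in ROS do exactly this kind of fusion.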


== Vision ==
<blockquote>Note: This section was copied directly from the old wiki. This page is intended to be approachable for new members transitioning to working on Qubo. I'm not sure how understandable this is, as it's not my area; so please don't avoid a project you're interested in because the description sounds scary. For questions and concerns about these projects, ask in #team-software in the Slack. --- Jeffrey Fisher</blockquote>


=== Goals ===

# Detect competition objects accurately and efficiently (minimize computational cost)
# Determine objects' relative poses (relative linear position/coordinates and rotational orientation)

=== Current state ===

* YOLOv8 model trained on [https://app.roboflow.com/josh-smith/robosub2023training/overview our data] and [https://app.roboflow.com/22ghoshi/cw-akqjp/deploy/2 some other team's data] in [https://colab.research.google.com/drive/1jiaf1PEvYl7cbJP1DTVHJZ2XlTB61Jgj?usp=sharing this notebook] very quickly during competition

=== Things to work on ===

* What ML model(s) do we want to use? (my knowledge is quite limited)
** YOLOv8 worked pretty well for identifying objects considering how rushed our training and implementation was
** Are there models that are feasible for us that can give us more info than YOLO, maybe something like [https://arxiv.org/abs/1711.00199 PoseCNN] that gives full pose estimation given 3D models of the objects?
* If we use YOLOv8, one of the big issues was speed, both initialization (it took about a minute at competition) and detection speed
** This can likely be improved just by using the proper format for the Jetson. Right now we're just using .pt, but [https://wiki.seeedstudio.com/YOLOv8-TRT-Jetson/ TensorRT (tutorial)]/[https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#overview TensorRT (docs)] is recommended; we can test other formats using their [https://docs.ultralytics.com/modes/benchmark/ benchmark]
** We can also use quantization arguments for [https://docs.ultralytics.com/modes/predict/ predict] and [https://docs.ultralytics.com/modes/export/#arguments export]
* Data collection - we collected all of our data at competition (including yoinking another team's dataset)
** Some other teams constructed 3D models in Unity/other graphics engines and trained on those, is that worth it?
** Ideally, before we go to competition, we would have a baseline model trained on data that we either create synthetically or collect in the pool using physical recreations of the objects
* Traditional CV - if our ML stuff only gives us the bounding box of the object in the image, can we use traditional CV stuff to get the relative pose of objects?
** E.g. algorithms like SIFT (feature matching), contour/corner detection -> solvePnP, etc.
* Make labeling easier - labeling was a massive time sink during competition
** Roboflow hides model-assisted labeling behind a paywall; we found [https://anylabeling.nrl.ai/docs/custom-models this cool open source tool] that makes model-assisted labeling pretty easy
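To illustrate the "bounding box to relative pose" idea above: with a calibrated camera and a known object size, even a plain pinhole model gives a rough range and bearing from a YOLO box alone. The focal length and object width below are made-up placeholders; the real numbers would come from camera calibration and the RoboSub object specs.

```python
import math

# Rough range/bearing from a detection bounding box via the pinhole model.
# FOCAL_PX and IMAGE_CX are placeholders for calibrated camera intrinsics.

FOCAL_PX = 800.0        # focal length in pixels (placeholder)
IMAGE_CX = 640.0        # principal point x in pixels (placeholder)

def estimate_range_bearing(bbox, object_width_m):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; width must be known."""
    x_min, _, x_max, _ = bbox
    width_px = x_max - x_min
    # Similar triangles: apparent width shrinks linearly with distance.
    distance = FOCAL_PX * object_width_m / width_px
    center_x = (x_min + x_max) / 2.0
    # Horizontal bearing to the object's center; positive = to the right.
    bearing = math.atan2(center_x - IMAGE_CX, FOCAL_PX)
    return distance, bearing

# A 0.6 m wide object spanning 120 px, centered 80 px right of image center:
dist, bearing = estimate_range_bearing((660, 300, 780, 420), 0.6)
print(round(dist, 2), round(math.degrees(bearing), 1))   # 4.0 m, ~5.7 deg right
```

This only gives range and bearing, not full orientation; for that, matched feature points plus something like OpenCV's solvePnP (mentioned above) would be the next step.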

== GUI & UX ==
More information: https://code.umd.edu/robotics-at-maryland/ram/-/wikis/yearly/2023-2024/2023-2024-software-plans/gui

=== Goals ===

* GUI (graphical user interface) for controlling Qubo
* Display diagnostics, sensor information, camera, state of the mission
* Basic features usable without software development knowledge
* '''Key:''' Make it faster and more reliable to do competition runs
** Also would include e.g. making machine learning model boot time less of a problem.


== End-effectors ==
This includes: torpedo launcher, claw, marker dropper.

Marker dropper: a device that releases a small object, a "marker", that is identifiable as our team's. A recurring RoboSub task is dropping markers into bins.

Jeffrey Fisher: My understanding is that these end-effectors are not complete on the mechanical side, but that we can work with prior iterations. Even if it won't go on the robot right away, it will be worthwhile to get the code in place so we are ready when mechanical finishes the new designs, and to begin testing the functionality. Testing is very important because accuracy is important for all three of these end-effectors.


== New DVL (1-2 people) ==
We've purchased a new, ''much smaller'' DVL (Doppler velocity log). The model is WaterLinked's DVL A50. For information on what a DVL is, and about this specific model, see [https://bluerobotics.com/store/the-reef/dvl-a50/ the BlueRobotics DVL A50 page].


=== Project overview ===

* The DVL speaks JSON over TCP. See [https://waterlinked.github.io/dvl/dvl-protocol/ WaterLinked's protocol documentation].
** There is also a serial interface, but we will not use it because (1) it is slower and (2) I'm pretty sure one reason for getting this DVL was that we can talk to it over Ethernet. --- Jeffrey Fisher
* '''Goal:''' Set up communication between Qubo's main computer and the DVL.
* '''Goal:''' Make the DVL data available to the rest of the code over ROS.
* '''Goal:''' Test DVL data and make sure output makes sense.
* Language: Likely C++
* There are existing libraries for interfacing with this DVL, and it is probably worth evaluating them; there are other projects that are more interesting and more important for us to do ourselves.
** At a quick glance, this one seems worth checking out: https://github.com/paagutie/dvl-a50. It's already using the JSON library I was planning to use, so if we write our own in C++ it will probably be quite similar to this one. --- Jeffrey Fisher
** Here's a bunch: https://github.com/search?utf8=%E2%9C%93&q=dvl+a50&type=repositories
** Jeffrey Fisher: Some things worth considering (though I am now strongly leaning towards using an existing library / ROS package unless there are really no decent ones):
*** How often will we have to change this code?
*** How likely are there to be bugs?
*** Will depending on a 3rd-party ROS library make it hard to upgrade in the future? How separate are the DVL protocol and ROS portions of the 3rd-party library? How hard would it be to patch the library ourselves for a newer ROS version?
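To give a feel for how small the protocol side of this project is: the A50 sends newline-delimited JSON reports over TCP, so the core is just reading lines and picking out fields. This is a sketch, not a finished node; the field names are my reading of WaterLinked's protocol docs and should be double-checked against them, and the address/port in the comment are likewise to be confirmed.

```python
import json

# Sketch of parsing a DVL A50 velocity report. Field names ("type", "vx",
# "vy", "vz", "altitude", "velocity_valid") follow my reading of
# WaterLinked's protocol documentation -- verify against the linked docs.

def parse_report(line):
    msg = json.loads(line)
    if msg.get("type") != "velocity" or not msg.get("velocity_valid"):
        return None                       # not a usable velocity report
    return {
        "velocity": (msg["vx"], msg["vy"], msg["vz"]),   # m/s, body frame
        "altitude": msg["altitude"],                     # m above seabed
    }

# In the real node the line would come from a TCP socket, read up to '\n',
# e.g. (address/port unverified): socket.create_connection((dvl_ip, dvl_port))
sample = ('{"type": "velocity", "velocity_valid": true, '
          '"vx": 0.12, "vy": -0.03, "vz": 0.01, "altitude": 1.8}')
print(parse_report(sample))
```

The remaining work is the unglamorous part the goals list describes: wrapping this in a ROS publisher, handling reconnects, and sanity-checking the data in the pool.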

== Embedded software (3-4 people?) ==
The electrical team is redesigning the electrical system to use a backplane. If you're familiar with a computer motherboard, a backplane is basically the same idea. Instead of a mess of wires, several "daughter cards" will slot into the backplane and communicate over wiring inside the [[wikipedia:Printed_circuit_board|PCBs]].

Each of the daughter boards needs to be programmed. The microcontroller we are programming is an [[STM32G0B1RE]].

=== Daughter boards ===
WIP: This section is rough; it is just taken from Alex's notes from a meeting with Erik.

==== Thruster control board (x2) ====

* Each thruster control board controls 4 thrusters, for a total of 8 thrusters.
* CAN node
* UART node
* Receive heading information from the Jetson (a 6-vector of floats?)

==== Power board ====

* CAN node
* UART node
* 5 current-monitoring ICs (1 for the whole system), with the same outputs as the monitors in the thruster control card
* eFuses (2 of them):
** Communicate with the outside world using a digital output (whether the eFuse is open or not)
** Should be logged for diagnostics
* Enable pins (think power buttons for individual components)
** 1 for the thrusters and almost everything else, 1 for the Jetson
** Considering a shutoff for each individual sensor, which would be great for diagnostics

==== Diagnostics ====

* Polling for current and voltage information over the CAN bus (can probably be done on the Jetson)
* Home-cooked boundary scan: do a full scan on startup, on shutdown, and if a serious fault is detected.
** Send a stimulus to the pressure points, and measure the response.

==== Jetson ====

* CAN: Send heading information to the thruster daughter cards
* CAN: Get power information from the power daughter card


=== CAN Bus ===
[[wikipedia:CAN_bus|CAN bus]] is a protocol for communication between microcontrollers and devices; it is widely used in motor vehicles.

As of right now, this is the planned protocol for communication between daughter boards.
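One practical wrinkle worth knowing before writing any of this code: a classic CAN data frame carries at most 8 bytes, so a 6-float setpoint vector (24 bytes as 32-bit floats) cannot ride in a single frame. One common option is to quantize each axis to a 16-bit integer. The scale factor and frame layout below are hypothetical, not our actual bus specification.

```python
import struct

# Sketch: fitting a 6-float setpoint vector into classic CAN payloads by
# quantizing to int16 (4 axes per 8-byte frame). SCALE and the two-frame
# layout are hypothetical placeholders, not a real bus spec.

SCALE = 1000.0   # 1 LSB = 0.001 of whatever unit the setpoint uses

def pack_setpoints(values):
    """Quantize 6 floats to int16 and split into two 8-byte CAN payloads."""
    raw = [int(round(v * SCALE)) for v in values]
    return [struct.pack("<4h", *raw[0:4]),       # frame 1: axes 0-3
            struct.pack("<2h4x", *raw[4:6])]     # frame 2: axes 4-5 + padding

def unpack_setpoints(frames):
    raw = list(struct.unpack("<4h", frames[0])) + \
          list(struct.unpack("<2h", frames[1][:4]))
    return [r / SCALE for r in raw]

frames = pack_setpoints([0.5, -0.25, 0.0, 1.0, -1.0, 0.125])
print(all(len(f) == 8 for f in frames))   # True: both payloads fit classic CAN
print(unpack_setpoints(frames))
```

Whatever layout we pick, the thruster boards and the Jetson must agree on it exactly (byte order, scale, frame IDs), which is a good argument for writing the packing/unpacking once and sharing it.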
