LimelightOS

Year 2023-24 SPYDER

Using Multiple Cameras (Workaround)

Since we use Limelight OS, we can't use multiple cameras directly. As a workaround, we use the VisionABC class to create a camera server for each camera and stream its feed to the dashboard.

VisionABC Class

 
package frc.robot.subsystems.Vision;
 
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.CommandScheduler;
import edu.wpi.first.wpilibj2.command.SubsystemBase;
import frc.robot.subsystems.Vision.utils.VisionObject;
 
public abstract class VisionABC extends SubsystemBase {
 
  /** Whether this camera subsystem is currently in use. */
  public Boolean enabled;
 
  public VisionABC() {}
 
  /** Reads the latest camera data and publishes it to the dashboard. */
  public abstract void periodic();
 
  /** Returns true if the camera currently sees a valid target. */
  public abstract boolean CheckTarget();
 
  /** Returns the position of the given vision object as seen by this camera. */
  public abstract Translation2d GetTarget(VisionObject object);
 
  public abstract void setPipeline(int pipeline);
 
  public abstract void setLEDMode(int mode);
 
  public abstract void setCamMode(int mode);
 
  public abstract Command BlinkLED();
 
  public abstract Command TurnOffLED();
 
  /**
   * This method is called periodically by the {@link CommandScheduler}. Useful for updating
   * subsystem-specific state that needs to be maintained for simulations, such as for updating
   * {@link edu.wpi.first.wpilibj.simulation} classes and setting simulated sensor readings.
   */
  @Override
  public abstract void simulationPeriodic();
}
 

Creating a New Camera from VisionABC

package frc.robot.subsystems.Vision.camera;
 
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.shuffleboard.Shuffleboard;
import edu.wpi.first.wpilibj.shuffleboard.ShuffleboardTab;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj2.command.Command;
import frc.robot.Constants.CameraConstants; // assuming CameraConstants is defined in Constants
import frc.robot.subsystems.Vision.VisionABC;
import frc.robot.subsystems.Vision.utils.VisionObject;
import frc.robot.subsystems.Vision.utils.VisionObject.ObjectType; // assuming ObjectType is nested in VisionObject
 
public class Back_Camera extends VisionABC {
 
    private static final Back_Camera INSTANCE = new Back_Camera();
 
    public static Back_Camera getInstance() {
        return INSTANCE;
    }
 
    private final ShuffleboardTab tab = Shuffleboard.getTab("BackCamera");
    private final NetworkTable table = NetworkTableInstance.getDefault()
            .getTable(CameraConstants.BackCam.BACK_CAMERA_NETWORK_TABLES_NAME);
    private final VisionObject tagCenter = new VisionObject(0, 0, 0, ObjectType.APRILTAG);
 
    private Back_Camera() {}
 
    @Override
    public void periodic() {
        // Pull the latest target offsets from the camera's NetworkTable
        tagCenter.update(
                table.getEntry("tx").getNumber(0).doubleValue(),
                table.getEntry("ty").getNumber(0).doubleValue(),
                table.getEntry("ts").getNumber(0).doubleValue()
        );
        SmartDashboard.putBoolean("Target Found", CheckTarget());
        SmartDashboard.putNumber("Distance", tagCenter.getDistance());
        SmartDashboard.putNumber("X", tagCenter.getX());
        SmartDashboard.putNumber("Y", tagCenter.getY());
    }
 
    @Override
    public boolean CheckTarget() {
        // "tv" is 1 whenever the Limelight has a valid target
        return table.getEntry("tv").getNumber(0).intValue() == 1;
    }
 
    @Override
    public Translation2d GetTarget(VisionObject object) {
        // Placeholder: return the object's reported offsets as a Translation2d
        return new Translation2d(object.getX(), object.getY());
    }
 
    @Override
    public void setPipeline(int pipeline) {
        // Implement setting pipeline using LimelightHelpers.setPipelineIndex(nt_key, pipeline);
    }
 
    @Override
    public void setLEDMode(int mode) {
        // Implement setting LED mode using LimelightHelpers.setLEDMode_* methods based on mode
    }
 
    @Override
    public void setCamMode(int mode) {
        // Implement setting camera mode using LimelightHelpers.setCameraMode_* methods based on mode
    }
 
    @Override
    public Command BlinkLED() {
        return runOnce(() -> setLEDMode(2)); // 2 = force blink in the Limelight LED modes
    }
 
    @Override
    public Command TurnOffLED() {
        return runOnce(() -> setLEDMode(1)); // 1 = force off in the Limelight LED modes
    }
 
    @Override
    public void simulationPeriodic() {}
}
 

With this pattern we can create a subsystem for each camera and stream all of their feeds to the dashboard, which works around the single-camera limitation of Limelight OS.
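
As a rough sketch of how this comes together, the camera singletons can be pulled into RobotContainer like any other subsystem. The Front_Camera class here is hypothetical (a second camera written the same way as Back_Camera), and the pipeline and camera-mode values are only examples.

package frc.robot;
 
import frc.robot.subsystems.Vision.VisionABC;
import frc.robot.subsystems.Vision.camera.Back_Camera;
import frc.robot.subsystems.Vision.camera.Front_Camera; // hypothetical second camera, built like Back_Camera
 
public class RobotContainer {
    // Each camera is its own VisionABC subsystem, so both run their periodic()
    // loops and publish their data to the dashboard independently.
    private final VisionABC backCamera = Back_Camera.getInstance();
    private final VisionABC frontCamera = Front_Camera.getInstance();
 
    public RobotContainer() {
        backCamera.setPipeline(0);  // example: AprilTag pipeline on the back camera
        frontCamera.setCamMode(1);  // 1 = driver-camera mode on the Limelight
    }
}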

Training a Custom Detector Model for Limelight

  • To train a custom detector model for Limelight, you will need to collect and annotate images of objects of interest.
  • Annotation is the process of drawing bounding boxes around objects of interest.
  • This can be done within Roboflow's web interface. You can also use a public dataset from Roboflow Universe.
  • Once you have a dataset, you can export it as a .tfrecord file and upload it to Google Drive.

Google Colab

  • You can use Google Colab to train neural networks on powerful cloud GPUs for free.
  • The Limelight Training Notebook expects a zipped .tfrecord dataset.

Dataset Tips

Maximize Dataset Diversity:

  • Your dataset should include a wide variety of images that represent all the different conditions your Limelight might encounter during a match.
  • This includes different lighting conditions, angles, distances, and object orientations.

Quality and Accuracy:

  • The images in your dataset should be clear and the objects of interest should be easily distinguishable.
  • When annotating your images (drawing bounding boxes around the objects of interest), make sure the boxes are as accurate as possible.

Bounding Box Convention:

  • Decide on a convention for drawing bounding boxes and stick to it.
  • For example, if an object is partially occluded (hidden), the bounding box should only cover the visible part of the object.

Class Labels:

  • The class labels (the names of the objects you're detecting) should be all in lowercase and kept to a minimum.
  • This makes the training process more efficient.

Use Roboflow's Augmentations:

  • Roboflow provides image augmentations that can artificially increase the diversity and size of your dataset.
  • However, make sure the augmentations you use make sense for your specific task. For example, if you're detecting red and blue balls, avoid color-shifting augmentations such as hue, since they would make the two classes harder to tell apart.

Recommended Platforms:

  • Roboflow, for collecting, annotating, and augmenting your dataset.
  • Google Colab, for training the model with the Limelight Detector Training Notebook.

Training the Model

To train your custom detector, start a Google Colab session with the Limelight Detector Training Notebook. The notebook does not require any code changes. Once the training script is running, you can refresh the files pane and TensorBoard to monitor progress. A new checkpoint should appear in the "training_progress" folder every 2000 steps. Training stops automatically at 40000 steps, but you can stop it at any point with the stop button in the final code block of that section. As long as checkpoints are available, you can move forward to quantization and compilation.

Upload to Limelight

Once you have trained your model, you will need to convert it to a compatible FlatBuffer format and quantize it for INT8 (8-bit) inference, then prepare it for the Google Coral and Limelight. The final code block will take some time, and it will download the trained model as a .zip file. Unzip the archive from your Colab session, then upload the limelight_neural_detector_coral.tflite and labels.txt files to your Limelight.

✔️
Success! You have successfully trained a custom detector model for Limelight. You can now use this model to detect objects of interest in your robot's environment.
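
To use the model from robot code, the detector's results can be read over NetworkTables with the same pattern used in Back_Camera above. The sketch below is illustrative: the DetectorReader class is hypothetical, and it assumes the camera publishes under the default "limelight" table with the standard tv/tx/ty/tclass entries.

package frc.robot.subsystems.Vision;
 
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
 
public class DetectorReader {
    // Default Limelight table name; change it if your camera publishes elsewhere
    private final NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");
 
    /** Publishes the primary detection (if any) to the dashboard. */
    public void update() {
        boolean hasTarget = table.getEntry("tv").getNumber(0).intValue() == 1;
        SmartDashboard.putBoolean("Detector/Target Found", hasTarget);
        if (hasTarget) {
            // "tclass" holds the class label of the primary detection, as listed in labels.txt
            SmartDashboard.putString("Detector/Class", table.getEntry("tclass").getString(""));
            SmartDashboard.putNumber("Detector/tx", table.getEntry("tx").getDouble(0.0));
            SmartDashboard.putNumber("Detector/ty", table.getEntry("ty").getDouble(0.0));
        }
    }
}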

Conclusion

We can use the steps above to train vision models for the robot, helping us detect targets accurately and improving the robot's performance.

🤖
More documentation coming soon! 🎉

References

  1. Limelight Documentation
  2. WPILib Documentation

Contributors

  1. Ritesh Raj