Follow a Colored Object Using VEX V5 Vision Sensor

This example shows how to use Simulink® Coder™ Support Package for VEX® EDR V5 Robot Brain to control a robot in arcade mode by following a colored object detected by a V5 Vision Sensor.

Introduction

Simulink Coder Support Package for VEX EDR V5 Robot Brain provides a Vision Sensor block that can be used to analyze a color blob detected by a V5 Vision Sensor. The block outputs the x and y coordinates, height, and width of the color blob detected in the field of view of the V5 Vision Sensor, based on the color signature that you select in the block.

In this example, you will learn how to:

  • Use the x coordinate and width of the detected color blob, which are the outputs of the Vision Sensor block, as the references for configuring the object tracking logic

  • Use object tracking logic, built with a Stateflow® chart, to compute the required values for forward/backward movement and for steering control of the robot

  • Use an Arcade Module block (available in the support package) that uses the output from the Stateflow chart to control the right and left motors of the robot

Prerequisites

  • If you are new to Simulink, watch the Simulink Quick Start video.

  • Complete the Getting Started with VEX EDR V5 Support Package example.

Required Hardware

To run this example, you need the following hardware:

  • VEX EDR V5 Robot Brain

  • V5 Vision Sensor

  • V5 Smart Motors (2)

  • Two-wheel robot platform

  • V5 Robot Battery

  • 600mm Smart Cable (2)

  • 300mm Smart Cable

  • USB A to Micro Cable

Task 1 - Hardware Connections

1. Connect the V5 Robot Battery to the VEX EDR V5 Robot Brain.

2. Connect the VEX EDR V5 Robot Brain to your computer using the USB cable.

3. Connect the two V5 Smart Motors to two Smart Ports on the V5 Robot Brain, using the two 600mm Smart Cables. Connect the right motor to Smart Port 10 and the left motor to Smart Port 1.

4. Connect the V5 Vision Sensor to Smart Port 3 on the V5 Robot Brain, using the 300mm Smart Cable.

5. Connect the two V5 Smart Motors to the two wheels of the robot.

Model

Simulink Coder Support Package for VEX EDR V5 Robot Brain provides a pre-configured model that you can use to control a robot in arcade mode using V5 Vision Sensor.

Open the vexv5_follow_object model.

The Vision Sensor block in the model uses SIG_3 as the color signature to be detected.

Note: To configure a color signature that you specify in the Vision Sensor block, you need to use the VEX Vision Utility. For details about saving a color signature on the V5 Vision Sensor using the VEX Vision Utility, refer to Train V5 Vision Sensor for Color Signature.

Task 2 - Configure Offsets for Objects Detected by Vision Sensor

In the pre-configured model, we use two PID controllers to configure the required offsets for objects detected by V5 Vision Sensor in the field of view.

  • Steer PID controller: To specify the mid-point of the field of view, we connect a Constant block with value 125. The Steer_offset output of the PID controller is calculated based on the current X value from the Vision Sensor block and the mid-point that you specified.

  • Forward PID controller: To specify the width limit of the detected color blob (which determines the stop distance of the V5 Vision Sensor from the object), we connect a Constant block with value 60. The Fwd offset output of the PID controller is calculated based on the current W value from the Vision Sensor block and the width limit that you specified.
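The two offset computations above can be sketched as follows. This is a hypothetical, proportional-only illustration (the model uses full PID controller blocks, and the gain values here are placeholders); only the two setpoints, 125 and 60, come from the model description.

```python
# Illustrative sketch of the two offset computations in the model.
# The model uses PID controller blocks; here only the proportional
# term is shown, and kp is an arbitrary placeholder gain.

STEER_SETPOINT = 125  # mid-point of the field of view (Constant block value)
WIDTH_SETPOINT = 60   # width limit of the detected blob (Constant block value)

def steer_offset(x, kp=1.0):
    """Offset computed from the current X value of the Vision Sensor block."""
    return kp * (STEER_SETPOINT - x)

def forward_offset(w, kp=1.0):
    """Offset computed from the current W value of the Vision Sensor block."""
    return kp * (WIDTH_SETPOINT - w)

# An object left of center (x < 125) yields a positive steer offset;
# a blob narrower than the limit (w < 60) yields a positive forward offset.
print(steer_offset(100))   # 25.0
print(forward_offset(40))  # 20.0
```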

Task 3 - Configure Object Tracking Using Stateflow

In this example model, we use a Stateflow chart to create the logic that helps to track the object. Double-click the Object Following Logic Stateflow chart to view the states.

The Stateflow chart consists of three states that use the fwd_pid and steer_pid inputs to drive the robot, and one state to stop the robot:

  • TurnLeft: This state is configured to turn the robot to the left if the V5 Vision Sensor detects the object with an X value less than 100 and W value less than the width limit.

  • TurnRight: This state is configured to turn the robot to the right if the V5 Vision Sensor detects the object with an X value more than 150 and W value less than the width limit.

  • MoveForward: This state is configured to move the robot forward toward the object after the steer control states have aligned the robot with the object detected by the V5 Vision Sensor. The robot keeps moving toward the object until the detected width (in the field of view) matches the width limit that you specified, and then stops.
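The state selection described above can be sketched as a simple decision function. This is an illustrative reconstruction, not the Stateflow chart itself; the X thresholds (100 and 150) and the width limit (60) are taken from the description above.

```python
# Illustrative sketch of the object-following state logic.
# The model implements this as a Stateflow chart; this function only
# mirrors the state-selection conditions described in the text.

WIDTH_LIMIT = 60  # width limit of the detected color blob

def next_state(x, w):
    """Pick a state from the current X and W values of the detected blob."""
    if w >= WIDTH_LIMIT:
        return "Stop"         # blob fills the width limit: robot has arrived
    if x < 100:
        return "TurnLeft"     # object is left of center: steer left
    if x > 150:
        return "TurnRight"    # object is right of center: steer right
    return "MoveForward"      # object roughly centered: drive toward it

print(next_state(80, 30))    # TurnLeft
print(next_state(160, 30))   # TurnRight
print(next_state(125, 30))   # MoveForward
print(next_state(125, 60))   # Stop
```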

Task 4 - Connect Arcade Module and Smart Motor Write Blocks

Simulink Coder Support Package for VEX EDR V5 Robot Brain provides an Arcade Module block that accepts separate inputs for forward control and steer control and then drives the motors connected to its outputs.

  • Connect the fwd and steer outputs from the Stateflow chart to the FwdRev and Steer inputs of the Arcade Module block.

  • Connect two Smart Motor Write blocks to the Right and Left output ports of the Arcade Module block. For the right wheel motor, we also include a Gain block with value -1 so that both motors drive the robot in the same direction. Ensure that the Smart Port parameter in the two Smart Motor Write blocks matches the actual port that you connected in Task 1.
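A common arcade-drive mixing scheme can be sketched as below. The Arcade Module block performs this mixing internally; the exact formula it uses is an assumption here, and the sketch only illustrates the standard left/right mixing plus the -1 Gain block on the right motor path.

```python
# Illustrative sketch of arcade-style mixing (assumed formula; the
# support package's Arcade Module block does this internally).

def arcade_mix(fwd_rev, steer):
    """Mix forward/reverse and steer commands into left/right motor commands."""
    left = fwd_rev + steer
    right = fwd_rev - steer
    return left, right

def right_motor_command(right):
    # Gain block with value -1: the right motor is mounted mirrored on the
    # chassis, so its command is negated to spin the wheel in the same
    # direction of travel as the left wheel.
    return -1 * right

left, right = arcade_mix(50, 20)
print(left, right_motor_command(right))  # 70 -30
```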

Task 5 - Deploy the Model and Track the Object Using V5 Vision Sensor

1. Connect the VEX V5 Robot Brain to the host computer using the USB cable.

2. In the Simulink model, go to the Hardware tab and click Build, Deploy & Start. After the code deployment is successful, the model starts running on the V5 Robot Brain, and the robot starts to turn left continuously.

3. Place the colored object within the field of view of the V5 Vision Sensor while the robot is trying to detect objects using the sensor.

Note: The color of the object should match the color that you used to train the V5 Vision Sensor for the selected color signature using the VEX Vision Utility. In this example, the Vision Sensor block is configured to detect SIG_3.

The robot detects the object and tries to align it with the midpoint of the field of view (based on the X value detected by the V5 Vision Sensor).

Once the object is aligned with the midpoint, the robot moves forward toward the object until the width of the detected object in the field of view matches the limit that you specified.