Voice Control

Through the previous tutorials, we have walked through the entire process of implementing custom voice interaction. Next, let's learn how to control the robot's movement through the voice interaction function. You can also design your own voice commands to control the robot, based on the custom interaction mode explained earlier.

Overall Effect

After running the default demo, you can issue the voice commands 前进 (forward), 后退 (backward), 左转 (turn left), and 右转 (turn right), and the robot will perform the corresponding motions.

We also provide additional interfaces for acceleration, deceleration, and more.
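As an illustration of how acceleration and deceleration commands can be handled, here is a minimal, self-contained sketch (not the shipped implementation; the function name and defaults are assumptions) that adjusts a speed value within configured bounds, mirroring the 'faster'/'slower' keywords used in the driver script later in this tutorial:

```python
# Hypothetical sketch: clamp speed adjustments between min and max,
# in the spirit of the 'faster'/'slower' commands in cn_voice_cmd_vel.py.
def adjust_speed(speed, command, increment=0.1, min_speed=0.1, max_speed=0.4):
    """Return a new linear speed after a 'faster' or 'slower' command."""
    if command == 'faster':
        speed += increment
    elif command == 'slower':
        speed -= increment
    # Keep the speed inside the robot's safe operating range.
    return max(min_speed, min(max_speed, speed))
```

For example, `adjust_speed(0.4, 'faster')` still returns 0.4, because the result is clamped to `max_speed`.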


Implementation Process

Simulation Use

First, we have to wake up the robot with the wake word and put it into the awake state, so you need to start the offline voice command-word detection and VAD detection nodes first. Open two new terminals and run the following two commands respectively:

roslaunch handsfree_speech offline_interactive_ros.launch    # voice interaction node

rosrun handsfree_speech vad_record.py    # VAD detection node

Next, open a new terminal and run the following command:

rosrun handsfree_speech cn_voice_cmd_vel.py    # voice control node

Following the first section of the voice tutorials, we can first test the robot in a simulation environment and observe the effect. We provide corresponding simulation environments:

roslaunch handsfree_stage demo_handsfree_room_stage.launch

roslaunch handsfree_stage demo_handsfree_xuda_stage.launch

These two launch files each start a simulation environment. If you do not have enough open space, or are not yet confident operating the real robot, you can run either one to perform simulation tests first.

In the simulation environment you can use the voice interaction function to observe the execution effect and debug the driving function.

Real Machine Use

After you are familiar with the robot's voice interaction function, you can debug our robot in the actual environment.

First, start the voice interaction node and vad detection node:

roslaunch handsfree_speech offline_interactive_ros.launch    # voice interaction node

rosrun handsfree_speech vad_record.py    # VAD detection node

Then you need to start the voice control node:

rosrun handsfree_speech cn_voice_cmd_vel.py    # voice control node

Then you also need to start the driver node:

roslaunch handsfree_hw handsfree_hw.launch    # driver node

Now you can try to control the robot with your voice.

Core Control

The voice interaction node and VAD detection node described above produce the interaction data. The interaction node extracts an ID number from each recognized command and sends it to the control node over a topic. The control node then publishes linear and angular velocity data on the /mobile_base/mobile_base_controller/cmd_vel topic, and the base converts these velocities into the wheels' linear or rotational speeds to drive the robot.
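As a simplified illustration of this pipeline (plain Python without ROS; the speed values are assumptions, and the ID numbers follow the feedback code later in this tutorial), the mapping from a recognized command ID to cmd_vel data might look like this:

```python
# Simplified sketch of the control node's mapping from recognized
# command IDs to (linear, angular) velocities. In the real node the
# result is published as a geometry_msgs/Twist on
# /mobile_base/mobile_base_controller/cmd_vel; here we just return the pair.
LINEAR_SPEED = 0.4    # m/s, example value
ANGULAR_SPEED = 0.2   # rad/s, example value

ID_TO_VELOCITY = {
    10001: (LINEAR_SPEED, 0.0),    # forward
    10002: (-LINEAR_SPEED, 0.0),   # backward
    10003: (0.0, ANGULAR_SPEED),   # turn left
    10004: (0.0, -ANGULAR_SPEED),  # turn right
}

def id_to_cmd_vel(todo_id):
    """Return (linear.x, angular.z) for a command ID; stop if unknown."""
    return ID_TO_VELOCITY.get(todo_id, (0.0, 0.0))
```

In the real node each velocity pair would be copied into a Twist message before publishing.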

Our control core is a Chinese-voice adaptation and optimization of an English voice-control implementation.

Drive Control

The following excerpt from the voice control node sets up its parameters, publisher, and keyword map:

        rospy.on_shutdown(self.cleanup)

        # Set a number of parameters affecting the robot's speed
        self.max_speed = rospy.get_param("~max_speed", 0.4)
        self.min_speed = rospy.get_param("~min_speed", 0.1)
        self.linear_increment = rospy.get_param("~linear_increment", 0.1)

        self.max_angular_speed = rospy.get_param("~max_angular_speed", 0.8)
        self.min_angular_speed = rospy.get_param("~min_angular_speed", 0.1)
        self.angular_increment = rospy.get_param("~angular_increment", 0.1)

        # Initial parameters
        self.speed = rospy.get_param("~start_speed", 0.4)
        self.angular_speed = rospy.get_param("~start_angular_speed", 0.2)

        # We don't have to run the script very fast
        self.rate = rospy.get_param("~rate", 5)
        r = rospy.Rate(self.rate)

        # Time, in seconds, for waiting correct command
        self.initial_wait_time = 5
        self.wait_time = self.initial_wait_time

        # A flag to determine whether or not TIAGo voice control
        self.TIAGo = False

        # Initialize the Twist message we will publish.
        self.cmd_vel = Twist()

        # Publish the Twist message to the cmd_vel topic
        self.cmd_vel_pub = rospy.Publisher('/mobile_base/mobile_base_controller/cmd_vel', Twist, queue_size=5)

        # Subscribe to the /recognizer/output topic to receive voice commands.
        rospy.Subscriber('/recognizer/output', String, self.speech_callback)


        # A mapping from keywords or phrases to commands
        self.keywords_to_command = {'backward': ['backward', 'go backward', 'move backward', 'back', 'move back'],
                                    'forward': ['forward', 'go forward', 'move forward'],
                                    'turn left': ['turn left', 'left'],
                                    'turn right': ['turn right', 'right'],
                                    'stop': ['stop', 'halt'],
                                    'faster': ['faster'],
                                    'slower': ['slower'],
                                    'quarter': ['quarter speed', 'quarter'],
                                    'half': ['half speed', 'half'],
                                    'full': ['full speed', 'full']}

        rospy.loginfo("Ready to receive voice commands")

        # We have to keep publishing the cmd_vel message if we want the robot to keep moving.
        while not rospy.is_shutdown():
            self.cmd_vel_pub.publish(self.cmd_vel)
            r.sleep()
        # In the speech callback, map the recognized command to velocities:
        if command == 'backward':
            self.cmd_vel.linear.x = -self.speed
            self.cmd_vel.angular.z = 0
            rospy.loginfo("Command: " + str(command))
            time.sleep(3)
            rospy.loginfo("Command: " + self.get_command(msg.data))
            self.cmd_vel = Twist()
            self.TIAGo = False
            self.wait_time = self.initial_wait_time
            return

        elif command == 'forward':
            self.cmd_vel.linear.x = self.speed
            print(self.speed)
            self.cmd_vel.angular.z = 0
            rospy.loginfo("Command: " + str(command))
            time.sleep(3)
            rospy.loginfo("Command: " + self.get_command(msg.data))
            self.cmd_vel = Twist()
            self.TIAGo = False
            self.wait_time = self.initial_wait_time
            return
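The callback above calls `self.get_command(msg.data)`, which resolves a spoken phrase to a canonical command using the `keywords_to_command` table. A standalone sketch of that lookup (an illustration based on the mapping shown earlier, not the exact method body) could look like:

```python
# Hypothetical sketch of a get_command-style lookup: find which
# canonical command's keyword appears in the recognized utterance.
KEYWORDS_TO_COMMAND = {
    'backward': ['backward', 'go backward', 'move backward', 'back', 'move back'],
    'forward': ['forward', 'go forward', 'move forward'],
    'turn left': ['turn left', 'left'],
    'turn right': ['turn right', 'right'],
    'stop': ['stop', 'halt'],
}

def get_command(data):
    """Return the canonical command whose keyword occurs in the utterance."""
    for command, keywords in KEYWORDS_TO_COMMAND.items():
        for word in keywords:
            if word in data:
                return command
    return None  # no known keyword found
```

Note that this does simple substring matching, so 'please move back now' resolves to 'backward'.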
Voice Control Feedback Section
  if (todo_id == 99999)
  {
      char* text;
      int x = random(0, 5);
      if (x == 1)
      {
          text = (char*)"what's the matter"; // synthesized text
      }
      else if (x == 2)
      {
          text = (char*)"what's up"; // synthesized text
      }
      else if (x == 3)
      {
          text = (char*)"come"; // synthesized text
      }
      else
      {
          text = (char*)" I am"; // synthesized text
      }
      printf("Received wakeup command");
      ret = text_to_speech(text, filename_move, session_begin_params);
      if (MSP_SUCCESS != ret)
      {
          printf("text_to_speech failed, error code: %d.\n", ret);
      }
      printf("Completed\n");
      ifwake = 1;
      std_msgs::String msg_pub;
      msg_pub.data = "stop";
      pub.publish(msg_pub);
      play_wav((char*)"/home/handsfree/catkin_ws/src/handsfree_speech/res/tts_sample_move.wav");
      msg_pub.data = "tiago";
      pub.publish(msg_pub);
  }
  if (todo_id == 10001 && ifwake == 1)
  {
      const char* text = "Receive an instruction, please avoid it"; // synthesized text
      ret = text_to_speech(text, filename_move, session_begin_params);
      if (MSP_SUCCESS != ret)
      {
          printf("text_to_speech failed, error code: %d.\n", ret);
      }
      printf("Completed\n");
      printf("Received forward command");
      play_wav((char*)filename_move);
      ifwake = 0;
      std_msgs::String msg_pub;
      msg_pub.data = "forward";
      pub.publish(msg_pub);
  }
  if (todo_id == 10002 && ifwake == 1)
  {
      const char* text = "Instruction received, please pay attention, reverse"; // synthesized text
      ret = text_to_speech(text, filename_move, session_begin_params);
      if (MSP_SUCCESS != ret)
      {
          printf("text_to_speech failed, error code: %d.\n", ret);
      }
      printf("Completed\n");
      printf("Received reverse order");
      play_wav((char*)filename_move);
      ifwake = 0;
      std_msgs::String msg_pub;
      msg_pub.data = "backward";
      pub.publish(msg_pub);
  }
  if (todo_id == 10003 && ifwake == 1)
  {
      const char* text = "Instruction received, turning"; // synthesized text
      ret = text_to_speech(text, filename_move, session_begin_params);
      if (MSP_SUCCESS != ret)
      {
          printf("text_to_speech failed, error code: %d.\n", ret);
      }
      printf("Completed\n");
      printf("Receive left turn instruction");
      play_wav((char*)filename_move);
      ifwake = 0;
      std_msgs::String msg_pub;
      msg_pub.data = "left";
      pub.publish(msg_pub);
  }
  if (todo_id == 10004 && ifwake == 1)
  {
      const char* text = "Instruction received, turning"; // synthesized text
      ret = text_to_speech(text, filename_move, session_begin_params);
      if (MSP_SUCCESS != ret)
      {
          printf("text_to_speech failed, error code: %d.\n", ret);
      }
      printf("Completed\n");
      printf("Right turn instruction received");
      play_wav((char*)filename_move);
      ifwake = 0;
      std_msgs::String msg_pub;
      msg_pub.data = "right";
      pub.publish(msg_pub);
  }

In the voice control program we define a flag called TIAGo; only after the robot receives this flag does it treat the subsequent control data as valid. It is effectively a state flag: the wake-up feedback handler publishes a "tiago" message after receiving the wake-up command, marking the data sent afterwards as valid.
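This gating behavior can be illustrated with a small state machine (a sketch for explanation, not the actual node code): commands are ignored until the "tiago" wake message arrives, and each executed command clears the flag again.

```python
# Hypothetical sketch of the TIAGo wake-flag gating logic.
class WakeGate:
    """Accept one motion command after each wake-up ('tiago') message."""

    def __init__(self):
        self.awake = False

    def handle(self, msg):
        if msg == 'tiago':      # wake flag: arm the gate
            self.awake = True
            return None
        if not self.awake:      # ignore commands before wake-up
            return None
        self.awake = False      # one valid command per wake-up
        return msg              # pass the command on
```

A 'forward' received before 'tiago' is dropped; after 'tiago' the next command is accepted and the gate closes again.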

We have also designed an extra feature: saying the wake word makes the robot stop its current task, so while the robot is moving you can use the wake word to stop it.

For the forward (前进) and backward (后退) commands we also added a delay, so that the robot stops the current motion after executing it for a fixed period. This prevents the robot from driving so far away that the wake word can no longer wake it, which would leave the robot unable to stop, moving forward or backward indefinitely.
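The timed-motion pattern can be sketched as follows (an illustration only; in the actual callback this is the `time.sleep(3)` followed by resetting `self.cmd_vel` to a fresh zero Twist). The `publish` and `stop` callables here are stand-ins for the cmd_vel publishing:

```python
import time

def run_for(duration, publish, stop):
    """Drive for a fixed duration, then stop.

    publish/stop stand in for sending a motion Twist and a zero Twist;
    the fixed window mirrors the 3-second delay in the callback.
    """
    publish()            # start moving
    time.sleep(duration) # let the motion run for a bounded time
    stop()               # reset to zero velocity
```

Bounding each motion this way guarantees the robot returns to a stopped, listening state even if no further command is heard.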

The following is iFLYTEK's error code reference. If a relevant error code is reported when calling the iFLYTEK API, you can look it up there: iFLYTEK error code information query.
