# Project: An advanced office assistant

The goal of the project is to work in a group of four students to develop an Advanced Office Assistant robot capable of the following functionalities:
* *Exploration:* the robot should be able to explore its environment to provide a map of interest points, such as locations of offices, vending machines, and humans. Some of the interest points move over time, and the robot should be able to handle such changes.

* *Human-aware navigation:* the robot should be able to navigate around humans in a comfortable way.

* *Natural text commands:* the robot should be able to interpret natural text sentences to define goals for the robot.

* *Decision-making:* the robot should be able to make decisions based on a set of goals.

Each functionality should be covered by one and only one student, so that the group covers the complete set of functionalities. Exceptions may be granted if the project group is not of size four, which may happen since the number of students is unlikely to be a multiple of four.
## Expectations

You are not expected to deliver a fully functional product, but rather a proof of concept. Developing such a robot would likely take much more effort than the time allocated in this course allows. Also, the examination is *individual*: you will not be penalized if the other students in your group fail to deliver their parts. However, you are expected to integrate with the students who are completing the course, and any missing parts can be covered by the solutions to the lab assignments:
* *Exploration:* the exploration implemented in [Lab3](Lab3) and [Lab4](Lab4) should be enough, even though it is not optimal.

* *Human-aware navigation:* if the student working on that topic fails to deliver, it is acceptable that the robot collides with humans.

* *Natural text commands:* the node implemented in [Lab5](Lab5) provides a basic version and can easily be extended with additional simple commands to cover for a missing NLP component.

* *Decision-making:* the simple goal-to-TST node implemented in [Lab5](Lab5) provides a basic version and can easily be extended to support more goals if needed.
## Deliverables

* *Individual planning report:* by the end of March, each student should submit an individual planning report. It should be a one-page document that introduces the topic, lists possible approaches, and indicates which approach the student intends to investigate further. It should include a few relevant bibliographic references. The main goal of the document is to evaluate the project's feasibility.

* *Presentation of the project:* an oral presentation of your work in a seminar during May. The presentation should be hybrid and include a sales pitch for your robot as well as a scientific presentation of the different parts ([detailed instructions](group_presentation)).

* *Project leaflet:* an A3 document advertising your robot. You should highlight the strengths of your robot to potential customers (detailed instructions coming soon).

* *Individual report:* the individual report should present your individual work as a small scientific paper ([detailed instructions](individual_report)).
## Submission

You should submit your reports via [Gitlab](https://gitlab.liu.se/tdde05_students/submissions). The file name should include your LiU login (or group number for the leaflet), and you should also state your LiU login/group number inside the report. Example file names:

* *Individual planning report*: `plan_report_xyzab899.pdf`

* *Final report*: `final_report_xyzab899.pdf`

* *Project leaflet*: `leaflet_group_X.pdf`

To create a new submission, create a new issue and attach the file. Make sure to mark the issue as confidential by ticking the box below the Description text box (*This issue is confidential...*).
## Running the simulator

For the project, you should use either `office_1` (without moving humans) or `office_2` (with moving humans). You can also run the simulator without SLAM, using an existing map, like this:

```bash
ros2 launch air_bringup turtle.launch.py world:=office_1 localization:=true slam:=false
```
## Information regarding the different sub-parts

The techniques linked in this section are only suggestions and starting points.

### Exploration
When the robot is turned on for the first time, it knows nothing about the office environment. Your robot should be able to gather information about the environment by driving around and capturing information with its sensors.

The two techniques implemented in [Lab2](Lab2) and [Lab3](Lab3) work but are inefficient. For the project, you should look at more efficient techniques such as:

* Frontier-based exploration, as presented in the lectures or in _A frontier-based approach for autonomous exploration, B. Yamauchi, Computational Intelligence in Robotics and Automation, 1997_ (see the sketch after this list).

* _Mobile robots exploration through cnn-based reinforcement learning, L. Tai and M. Liu, Robotics and Biomimetics, 2016_.

* Or any other approach found in the literature.
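To make the frontier idea concrete, below is a minimal, self-contained sketch of frontier-cell detection on an occupancy grid. It is only an illustration, not the lab code: the grid values follow the standard `nav_msgs/OccupancyGrid` convention (-1 unknown, 0 free, 100 occupied), and the function and demo map are hypothetical.

```python
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 100  # standard nav_msgs/OccupancyGrid values

def find_frontier_cells(grid: np.ndarray) -> list[tuple[int, int]]:
    """Return (row, col) indices of free cells that touch unknown space.

    `grid` is a 2D int array, e.g. np.array(msg.data).reshape(msg.info.height,
    msg.info.width) for a nav_msgs/OccupancyGrid message.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # A frontier cell is a known-free cell with at least one unknown neighbour.
            neighbourhood = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbourhood == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

# Tiny demo map: free space on the left, unknown space on the right.
demo = np.full((4, 4), UNKNOWN, dtype=np.int8)
demo[:, :2] = FREE
demo[0, 0] = OCCUPIED
print(find_frontier_cells(demo))  # frontier cells lie in column 1, next to the unknown area
```

Frontier cells can then be clustered, and the closest or largest cluster can be sent as the next navigation goal.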
ROS has many modules for exploration that can be used. However, if you choose to use a ready-made module, you should develop some extra functionalities on top of it, such as handling dynamic knowledge. You can also use pre-existing ROS nodes to compare against your own solution.
### Human-aware navigation

When operating in environments that include humans, it is not enough to simply avoid obstacles. The robot should navigate in a way that makes the humans around it feel comfortable.
* Look at Lecture Topic 11 of TDDE05 (in the on-site Lecture 9 slides).

* A good starting point is _Human-aware robot navigation: A survey, T. Kruse, A. Kumar Pandey, R. Alami and A. Kirsch, Robotics and Autonomous Systems, 2013_.
If you work with C++, you can check the [Nav2](https://docs.nav2.org/) documentation for how to integrate your algorithm into the navigation stack of the TurtleBot.

If you work with Python, you should also check the [Nav2](https://docs.nav2.org/) documentation to learn about the general concepts, but note that Nav2 does not support Python directly. We have therefore implemented interfaces for a [Controller](https://docs.nav2.org/plugin_tutorials/docs/writing_new_nav2controller_plugin.html), and for Trajectory Generation and Critic plugins based on the [dynamic window controller](https://github.com/ros-planning/navigation2/tree/main/nav2_dwb_controller) used by the TurtleBot. The code is available in [air_navigation](https://gitlab.liu.se/tdde05_ros2/air_navigation), and some examples are included in [air_navigation_examples](https://gitlab.liu.se/tdde05_ros2/air_navigation_examples/-/blob/master/air_navigation_examples/__init__.py).
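To illustrate the kind of scoring logic a navigation critic typically encodes, here is a small, framework-agnostic sketch that penalizes candidate trajectories passing close to detected humans. It does not use the actual air_navigation plugin interfaces; the function, constants, and example data are illustrative assumptions, so check air_navigation_examples for the real API.

```python
import math

# Hypothetical, framework-agnostic social-distance cost: higher scores are worse.
COMFORT_RADIUS = 1.0   # metres within which a human starts to feel uncomfortable
PENALTY_SCALE = 50.0   # weight of the social cost relative to other critics

def social_cost(trajectory, humans):
    """Score a candidate trajectory (list of (x, y) poses) against human positions."""
    cost = 0.0
    for x, y in trajectory:
        for hx, hy in humans:
            distance = math.hypot(x - hx, y - hy)
            if distance < COMFORT_RADIUS:
                # Quadratic penalty that grows as the robot gets closer to a human.
                cost += PENALTY_SCALE * (COMFORT_RADIUS - distance) ** 2
    return cost

# A path passing 0.3 m from a person scores much worse than one keeping 1.5 m away.
humans = [(1.0, 0.3)]
path_close = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
path_far = [(0.0, 1.5), (0.5, 1.5), (1.0, 1.5)]
print(social_cost(path_close, humans), social_cost(path_far, humans))
```

In an actual DWB-style critic, such a score would be combined with the built-in critics (path alignment, obstacle cost, and so on) when ranking candidate trajectories.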
For both C++ and Python, you need to change the Nav2 configuration to use a custom controller and critic. You can see examples in [nav2_test_py_controller.yaml](https://gitlab.liu.se/tdde05_ros2/air_bringup/-/blob/master/config/nav2_test_py_controller.yaml) and [nav2_test_py_dwb.yaml](https://gitlab.liu.se/tdde05_ros2/air_bringup/-/blob/master/config/nav2_test_py_dwb.yaml). To use a different configuration than the default one:

```bash
ros2 launch air_bringup turtle.launch.py nav2_params_file:=`ros2 pkg prefix air_bringup`/share/air_bringup/config/nav2_test_py_controller.yaml
```
*Ask your assistant for help to set up your controller!*
### Natural text commands

A natural interface between a human and a robot is for the human to give commands to the robot using natural sentences, such as "I need you to go to George's office and bring us two coffees." The goal of this part is to decompose this sentence into three goals that can be executed by the robot: _goto George office_, _bring coffee George_, and _bring coffee user_.
You should consider the following approaches:

* Intent recognition: _Practical guidelines for intent recognition: BERT with minimal training data evaluated in real-world, M. Huggins, S. Alghowinem, S. Jeong, P. Colon-Hernandez, C. Breazeal, and H.W. Park, ACM/IEEE International Conference on Human-Robot Interaction, 2021_

* Rule-based parsing (see the sketch after this list)
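To give a feel for the rule-based option, here is a minimal, hypothetical sketch that maps keyword patterns to goal strings. The patterns and goal names are illustrative only and do not follow the Lab5 goal format; a real implementation would also need to resolve who each coffee is for.

```python
import re

# Minimal, hypothetical rule-based command parser; the goal strings and the
# patterns below are illustrative and not the Lab5 goal format.

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3}

def word_to_int(word: str) -> int:
    return NUMBER_WORDS.get(word.lower(), 1)

RULES = [
    # "go to George's office" -> one goto goal
    (re.compile(r"go to (\w+)'s office", re.IGNORECASE),
     lambda m: [f"goto {m.group(1)} office"]),
    # "bring us two coffees" -> repeated bring goals (recipients are left out here)
    (re.compile(r"bring (?:us|me) (\w+) coffees?", re.IGNORECASE),
     lambda m: ["bring coffee user"] * word_to_int(m.group(1))),
]

def parse_command(sentence: str) -> list[str]:
    """Turn a natural sentence into a flat list of goal strings."""
    goals = []
    for pattern, make_goals in RULES:
        match = pattern.search(sentence)
        if match:
            goals.extend(make_goals(match))
    return goals

print(parse_command("I need you to go to George's office and bring us two coffees."))
# -> ['goto George office', 'bring coffee user', 'bring coffee user']
```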
You are not expected to investigate speech-to-text techniques; in fact, you are strongly discouraged from looking at those aspects. You should assume that you receive the sentence as a string. Speech-to-text is a *very* hard problem.

Example(s) of training data can be found at [data/NLP](https://gitlab.liu.se/tdde05_students/data/-/tree/main/NLP/).
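If you go the intent-recognition route, even a small classical classifier can serve as a baseline before moving to BERT-style models. The sketch below uses scikit-learn; the example sentences and intent labels are made up for illustration and are not the training data linked above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training sentences and intent labels, purely for illustration.
sentences = [
    "go to the kitchen", "please go to office 3", "drive to the entrance",
    "bring me a coffee", "fetch two coffees for us", "get a sandwich from the vending machine",
]
intents = ["goto", "goto", "goto", "bring", "bring", "bring"]

# TF-IDF features + logistic regression give a quick baseline intent classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(sentences, intents)

print(classifier.predict(["could you bring us some coffee"]))  # expected: ['bring']
```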
### Decision-making

The robot will receive a set of goals and should decide which goals to accomplish and how. You should consider the following approaches:

* Automated planning, refer to Lecture Topic 7 (Deliberative architecture)

* Decision theory, refer to Lecture Topic 8 (Decision Theory)
For automated planning, the following software is installed on the LiU computers:

* Fast Downward: a planner with support for PDDL2.1 and some features of PDDL3, particularly for defining the costs of actions. Full documentation can be found on the [official website](https://www.fast-downward.org/). The executable can be found at `/courses/TDDE05/software/downward/builds/release/bin`.

* Unified Planning: the Unified Planning library makes it easy to formulate planning problems and to invoke automated planners (see the small example after this list). More information and tutorials can be found in the [official documentation](https://unified-planning.readthedocs.io/en/latest/). Note that _Fast Downward_ is accessible via the _Unified Planning_ library, but without support for the action-cost feature.

* ROS2 Planning System: a more advanced planning system for ROS2. Documentation can be found on the official [plansys2](https://plansys2.github.io/) page and through the examples on [GitHub](https://github.com/PlanSys2/ros2_planning_system_examples). Note that this package is rather complex, and it is not recommended unless you have prior experience with it.
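As a starting point for the Unified Planning option, here is a minimal sketch in the style of the library's quickstart. The domain (a single move action and two locations) is illustrative and not the course domain, and it assumes at least one UP-compatible planning engine is installed.

```python
from unified_planning.shortcuts import *

# Illustrative toy domain: one fluent and one move action over locations.
Location = UserType("Location")
robot_at = Fluent("robot_at", BoolType(), position=Location)

move = InstantaneousAction("move", l_from=Location, l_to=Location)
l_from = move.parameter("l_from")
l_to = move.parameter("l_to")
move.add_precondition(robot_at(l_from))
move.add_effect(robot_at(l_from), False)
move.add_effect(robot_at(l_to), True)

# Problem: start in the corridor, goal is to reach George's office.
problem = Problem("office_assistant")
problem.add_fluent(robot_at, default_initial_value=False)
problem.add_action(move)
corridor = Object("corridor", Location)
george_office = Object("george_office", Location)
problem.add_objects([corridor, george_office])
problem.set_initial_value(robot_at(corridor), True)
problem.add_goal(robot_at(george_office))

# Ask any installed planner that supports this problem kind for a plan.
with OneshotPlanner(problem_kind=problem.kind) as planner:
    result = planner.solve(problem)
    print(result.plan)
```

The resulting plan then needs to be translated into executable robot goals by your decision-making node.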
## FAQ

* *Can I use existing software?* Yes, absolutely. It is not expected that you write everything from scratch. You can and should use existing relevant libraries, frameworks, and ROS nodes and integrate them into your system. However, it is not sufficient to install an existing ROS node, configure the topics, and run it. In case of doubt, ask the assistant whether what you intend to do is acceptable.