Lab 4: Exploration
The goal of this lab is to fill a knowledge database with information perceived in the environment. You will extend the TST executor from Lab 3 to listen to a semantic sensor, insert new observations into the database, and generate a TST for reaching all the locations of interest.
Basic information related to concepts used in this lab can be found in lecture slides for Lecture topic 9. You can refer to the information on the slides when working on this lab assignment.
In this lab you will:
- Implement a TST Executor that listens to the semantic sensor topic and fills the database.
- Use the tst_executor from Lab 3 to explore the environment.
- Display the results of the exploration in RViz.
- Generate a TST to drive to all the locations.
Interfacing with the Database
For this lab we will use a PostgreSQL database with a wrapper called kDB, which provides ROS services for executing SQL and SPARQL queries. We will only use the SPARQL part.
The following command will start the database (you should add it to your screen):
ros2 run ros_kdb kdb_server --ros-args -p db_port:=101XX
Replace XX with your group number. The database server is very verbose; you can generally ignore what it prints in the terminal.
You can reset the content of the database with (you need to stop the database first):
rm -rf $HOME/.ros_kdb/stores/
Inserting in the Database
The following service call will insert two objects in the graph named salientpoints: one human (with identifier 1) at coordinates (8, 10) and one table (with identifier 2) at coordinates (10, 12):
ros2 service call /kdb_server/insert_triples ros_kdb_interfaces/srv/InsertTriples "
graphname: 'salientpoints'
format: 'ttl'
content: '@prefix gis: <http://www.ida.liu.se/~TDDE05/gis>
@prefix properties: <http://www.ida.liu.se/~TDDE05/properties>
<id1> a <human>;
properties:location [ gis:x 8; gis:y 10 ] .
<id2> a <table>;
properties:location [ gis:x 10; gis:y 12 ] .'"
This will insert two objects: the first has uuid id1, coordinates (8, 10), and is of class human; the second has uuid id2, coordinates (10, 12), and is of class table.
Querying the Database
The following query can be used to retrieve all the elements in the database:
ros2 service call /kdb_server/sparql_query ros_kdb_interfaces/srv/QueryDatabase "queries:
- graphnames: [salientpoints]
format: 'json'
query: 'SELECT ?subject ?predicate ?object WHERE { ?subject ?predicate ?object }'"
Note: this query should be used to check whether any triples were added to the database. It should not be used as a solution for the lab.
The following query can be used to retrieve all the objects, with their classes, that are near the point with coordinates (10, 12):
ros2 service call /kdb_server/sparql_query ros_kdb_interfaces/srv/QueryDatabase "queries:
- graphnames: ['salientpoints']
format: 'json'
query: '
PREFIX gis: <http://www.ida.liu.se/~TDDE05/gis>
PREFIX properties: <http://www.ida.liu.se/~TDDE05/properties>
SELECT ?obj_id ?class ?x ?y WHERE { ?obj_id a ?class ;
properties:location [ gis:x ?x; gis:y ?y ]
FILTER( 9.9 < ?x && ?x < 10.1 && 11.9 < ?y && ?y < 12.1) .
}'"
You can use ros2 service type to get the type of the sparql_query service and ros2 interface to access the definition of that service type.
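For example (the type ros_kdb_interfaces/srv/QueryDatabase also appears in the calls above):
ros2 service type /kdb_server/sparql_query
ros2 interface show ros_kdb_interfaces/srv/QueryDatabase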
Semantic Sensor
The sensor outputs observations on a topic called /semantic_sensor. You should use ros2 topic and ros2 interface to introspect the topic: find its type, look at the values sent, and so on.
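For example:
ros2 topic type /semantic_sensor
ros2 topic echo /semantic_sensor
ros2 interface show $(ros2 topic type /semantic_sensor)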
Exploration
In Lab 3, you created a TST executor that moves the robot to explore the environment. We will extend the functionality of Lab 3 and add a new type of TST Executor that records observations from /semantic_sensor and saves them in the database.
In lab3_node:
- Create a new TST Executor for record-semantic. It should take two parameters:
  - topic: the name of the topic
  - graphname: the name of the graph used to save the data
- Listen for new observations on the given topic.
- Check in the graph specified by graphname whether the observation has already been inserted (using a query). For example, the following query should return a single element (you need to replace someid and someclass with the correct values from the message):
PREFIX gis: <http://www.ida.liu.se/~TDDE05/gis>
PREFIX properties: <http://www.ida.liu.se/~TDDE05/properties>
SELECT ?x ?y WHERE { <someid> a <someclass> ;
properties:location [ gis:x ?x; gis:y ?y ] . }
Take the following into account when calling this query:
  - Make sure to keep < and >, as they indicate a URI in the query. Without them the query is invalid and will not be executed. So you can replace <someid> by <somethingelse>, but not by somethingelse.
  - Use the TST parameter graphname as the graph name in the service call.
  - This query needs to be executed with a service call to /kdb_server/sparql_query. You can check the official tutorial for Python or C++ to learn how to make a service call. You only need the client, not the server, since the server is in the database.
  - To wait for the results, you should use self.executor.spin_until_future_complete(self.future) (since we use a special executor).
  - You also need to create a ReentrantCallbackGroup, like you did in the tst_executor of Lab 3, and add it to the create_client function call.
  - In the call to /kdb_server/sparql_query, the fields bindings and service are unused for this lab and can be set to an empty string.
- If the object has not been put in the database yet, add the new observation using the insert triples service call (replace 'salientpoints' with the TST parameter for graphname).
- In C++, add air_simple_sim_msgs as a dependency in CMakeLists.txt, using find_package and setting the correct dependencies for your executable.
- The following sections cover hints on how to interpret the results of queries, including parsing of JSON. A minimal sketch of the executor is shown below.
You can test your TST executor with:
ros2 service call /execute_tst air_lab_interfaces/srv/ExecuteTst "tst_file: '`ros2 pkg prefix air_tst`/share/air_tst/tsts/explore_record_semantic.json'"
Python
Parsing results: The results of the queries are in JSON format. You can parse JSON this way in Python:
import json
data = json.loads(x)
for row in data["results"]["bindings"]:
print(row)
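Here x is the string returned by the query service. If kDB follows the standard SPARQL 1.1 JSON results layout (worth checking on a real response), each row maps a variable name to a dictionary with a value entry, so coordinates can be extracted like this:

x = float(row["x"]["value"])
y = float(row["y"]["value"])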
Transformation: If you need to transform points from one coordinate frame to another, you can use tf2:
import rclpy
from rclpy.node import Node
from tf2_ros import TransformException
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener
import tf2_geometry_msgs  # registers transform support for geometry_msgs types
# The tf_buffer and tf_listener need to be kept alive and should be created in a constructor
node = Node('my_node')
tf_buffer = Buffer()
tf_listener = TransformListener(tf_buffer, node)
# To use them for point transformation
try:
    point_transformed = tf_buffer.transform(point, to_frame_rel, timeout=rclpy.duration.Duration(seconds=1.0))
except TransformException as ex:
    node.get_logger().info(f'Could not transform {point} to {to_frame_rel}: {ex}')
    return
Where to_frame_rel is the destination frame (in our case it should be "map") and point is the point stamped from the observation.
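For reference, a minimal sketch of building the point to transform; the position field of the observation message is hypothetical, so check the real message definition:

import geometry_msgs.msg

point = geometry_msgs.msg.PointStamped()
point.header = msg.header   # frame and timestamp of the observation
point.point = msg.position  # hypothetical field name; verify with ros2 interface show
point_transformed = tf_buffer.transform(
    point, 'map', timeout=rclpy.duration.Duration(seconds=1.0))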
C++
Parsing results: The results of the queries are in JSON format. You can parse JSON this way in C++:
#include <QJsonDocument>
QJsonDocument doc = QJsonDocument::fromJson(QByteArray::fromStdString(result));
You can check the online API documentation for QJsonDocument.
Transformation: If you need to transform points from one coordinate frame to another, you can use tf2:
// The tf_buffer and tf_listener need to be kept alive and should be created in a constructor
std::unique_ptr<tf2_ros::Buffer> tf_buffer_ = std::make_unique<tf2_ros::Buffer>(node->get_clock());
std::shared_ptr<tf2_ros::TransformListener> tf_listener_ = std::make_shared<tf2_ros::TransformListener>(*tf_buffer_);
// To use them for point transformation
try {
point_transformed = tf_buffer_->transform(point, to_frame_rel);
} catch (const tf2::TransformException & ex) {
RCLCPP_INFO(node->get_logger(), "Could not transform %s to %s: %s",
point.header.frame_id.c_str(), to_frame_rel.c_str(), ex.what());
return;
}
Where to_frame_rel is the destination frame (in our case it should be "map") and point is the point stamped from the observation.
Visualisation
In this part you should query the database (using a service call) for all the objects, and then display the resulting objects in RViz.
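For example, the near-point query from earlier without the FILTER retrieves every object with its class and coordinates:

PREFIX gis: <http://www.ida.liu.se/~TDDE05/gis>
PREFIX properties: <http://www.ida.liu.se/~TDDE05/properties>
SELECT ?obj_id ?class ?x ?y WHERE {
  ?obj_id a ?class ;
    properties:location [ gis:x ?x; gis:y ?y ] .
}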
To do that, we can publish a visualization_msgs/MarkerArray message on a topic. Use ros2 interface show to display the structure of the message.
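For example:
ros2 interface show visualization_msgs/msg/MarkerArray
ros2 interface show visualization_msgs/msg/Marker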
Python
The following shows an example of how to publish a marker that displays two cubes in Python. You can add this to a new node (don't forget to update your setup.py):
import rclpy
import rclpy.node
import visualization_msgs.msg
import geometry_msgs.msg
import std_msgs.msg
def create_point(x, y, z):
msg = geometry_msgs.msg.Point()
msg.x = x
msg.y = y
msg.z = z
return msg
def create_color(r, g, b, a):
msg = std_msgs.msg.ColorRGBA()
msg.r = r
msg.g = g
msg.b = b
msg.a = a
return msg
def timer_callback():
marker = visualization_msgs.msg.Marker()
marker.id = 1242 # identifies the marker; should be unique
marker.header.frame_id = "odom"
marker.type = visualization_msgs.msg.Marker.CUBE_LIST
marker.action = visualization_msgs.msg.Marker.ADD
marker.scale.x = 0.5
marker.scale.y = 0.5
marker.scale.z = 0.5
marker.pose.orientation.w = 1.0
marker.color.a = 1.0
marker.points.append(create_point(2.0, 3.0, 0.0))
marker.points.append(create_point(5.0, 2.0, 0.0))
marker.colors.append(create_color(1.0, 0.5, 0.5, 1.0))
marker.colors.append(create_color(0.5, 1.0, 0.5, 1.0))
marker_array = visualization_msgs.msg.MarkerArray()
marker_array.markers = [marker]
display_marker_pub.publish(marker_array)
def main():
global display_marker_pub
rclpy.init()
node = rclpy.node.Node('visualise_semantic_objects')
display_marker_pub = node.create_publisher(visualization_msgs.msg.MarkerArray, 'semantic_sensor_visualisation', 10)
timer = node.create_timer(0.5, timer_callback)
rclpy.spin(node)
if __name__ == '__main__':
main()
In RViz you can select to display a marker array, and you should see two cubes. You should then modify the program to query the database and show the humans and tables. Once you have completed the lab, you should see something similar to this (the exact number of cubes and their positions may be different for you):
C++
The following shows an example of how to publish a marker that displays two cubes in C++. You can add this to a new node (don't forget to update your CMakeLists.txt):
#include <chrono>
#include "rclcpp/rclcpp.hpp"
#include <visualization_msgs/msg/marker_array.hpp>
using namespace std::chrono_literals;
int main(int argc, char** argv)
{
rclcpp::init(argc, argv);
std::shared_ptr<rclcpp::Node> node = rclcpp::Node::make_shared("visualise_semantic_objects");
auto display_marker_pub = node->create_publisher<visualization_msgs::msg::MarkerArray>("semantic_sensor_visualisation", 10);
auto timer_ = node->create_wall_timer(500ms, [display_marker_pub]()
{
visualization_msgs::msg::Marker marker;
marker.id = 1242; // identifies the marker; should be unique
marker.header.frame_id = "odom";
marker.type = visualization_msgs::msg::Marker::CUBE_LIST;
marker.action = visualization_msgs::msg::Marker::ADD;
marker.scale.x = 0.5;
marker.scale.y = 0.5;
marker.scale.z = 0.5;
marker.pose.orientation.w = 1.0;
marker.color.a = 1.0;
geometry_msgs::msg::Point pt1;
pt1.x = 2;
pt1.y = 3;
marker.points.push_back(pt1);
geometry_msgs::msg::Point pt2;
pt2.x = 5;
pt2.y = 2;
marker.points.push_back(pt2);
std_msgs::msg::ColorRGBA colorA;
colorA.r = 1.0;
colorA.g = 0.5;
colorA.b = 0.5;
colorA.a = 1.0;
marker.colors.push_back(colorA);
std_msgs::msg::ColorRGBA colorB;
colorB.r = 0.5;
colorB.g = 1.0;
colorB.b = 0.5;
colorB.a = 1.0;
marker.colors.push_back(colorB);
visualization_msgs::msg::MarkerArray array;
array.markers.push_back(marker);
display_marker_pub->publish(array);
});
rclcpp::spin(node);
return 0;
}
In RViz you can select to display a marker array, and you should see two cubes. You should then modify the program to query the database and show the humans and tables. Once you have completed the lab you should see something similar to this:
Note: the previous images show all the possible objects in the database; after you have completed the exploration you will only see a limited number of objects.
TST
The final step is to write a simple program that queries the database for all the humans (all the objects of class human), generates a TST (as a JSON file) for driving to all the locations of the humans (as a sequence of drive_to), and executes that TST. For simplicity, you can make your program output the TST to a file and then execute it with the service call from Lab 3, as sketched below.
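As a starting point, here is a minimal Python sketch of such a generator. The node names ('seq', 'drive-to') and the parameter layout are assumptions; compare with the TST JSON files shipped with air_tst in Lab 3 and adjust the names and parameters accordingly.

import json

def make_drive_to(x, y):
    # Check the exact node name (drive_to vs drive-to) and parameter
    # layout against your Lab 3 TST files; this layout is assumed.
    return {
        'name': 'drive-to',
        'common_params': {},
        'params': {'p': {'rostype': 'Point', 'x': x, 'y': y, 'z': 0.0}},
        'children': [],
    }

def make_tst(positions):
    # A sequence node whose children drive to each position in turn.
    return {
        'name': 'seq',
        'common_params': {},
        'params': {},
        'children': [make_drive_to(x, y) for x, y in positions],
    }

# positions would come from your SPARQL query for all objects of class human
positions = [(8.0, 10.0), (10.0, 12.0)]
with open('visit_humans.json', 'w') as f:
    json.dump(make_tst(positions), f, indent=2)

The generated file can then be executed with the same /execute_tst service call as in Lab 3, with tst_file pointing at visit_humans.json.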
Demonstration
Your demonstration should:
- Show the exploration with recording of data in the database
- Run the query script to visualise the result
- Generate the TST and execute it