# Analyze the data and send back the results
In the previous sections, we explored our data and exported it to a Pandas dataframe. In this section, we will analyze the data to extract a "jaw open state" signal and send it back to the viewer.
## Analyze the data
We already identified that thresholding the `jawOpen` signal at 0.15 is all we need to produce a binary "jaw open state" signal. In the previous section, we prepared a flat, floating-point column called `jawOpen` with the signal of interest. Let's add a boolean column to our Pandas dataframe to hold the jaw open state:
```python
df["jawOpenState"] = df["jawOpen"] > 0.15
```
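As a quick sanity check, the comparison can be exercised on a small toy series (the values below are made up for illustration; note that the comparison is strict, so a value exactly at the threshold maps to `False`):

```python
import pandas as pd

# hypothetical jawOpen values around the 0.15 threshold
df = pd.DataFrame({"jawOpen": [0.02, 0.14, 0.15, 0.30, 0.07]})

# strict "greater than" comparison yields a boolean column
df["jawOpenState"] = df["jawOpen"] > 0.15

print(df["jawOpenState"].tolist())  # → [False, False, False, True, False]
```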
## Send the result back to the viewer
The first step is to initialize the logging SDK targeting the same recording we just analyzed. This requires matching both the application ID and recording ID precisely. By using the same identifiers, we're appending new data to an existing recording. If the recording is currently open in the viewer (and it's listening for new connections), this approach enables us to seamlessly add the new data to the ongoing session.
```python
rr.init(
    recording.application_id(),
    recording_id=recording.recording_id(),
)
rr.connect_tcp()
```
Note: When automating data analysis, it is typically preferable to log the results to a distinct RRD file next to the source RRD (using `rr.save()`). In such a situation, it is also valid to use the same application ID and recording ID. This allows opening both the source and result RRDs in the viewer, which will display the data from both files under the same recording.
We will send our jaw open state data in two forms:

- As a standalone `Scalar` component, to hold the raw data.
- As a `Text` component on the existing bounding box entity, such that we obtain a textual representation of the state in the visualization.
Here is how to send the data as a scalar:
```python
rr.send_columns(
    "/jaw_open_state",
    times=[rr.TimeSequenceColumn("frame_nr", df["frame_nr"])],
    components=[
        rr.components.ScalarBatch(df["jawOpenState"]),
    ],
)
```
We use the `rr.send_columns()` API to efficiently send the entire column of data in a single batch.
Next, let's send the same data as a `Text` component:
```python
target_entity = "/video/detector/faces/0/bbox"
rr.log(target_entity, [rr.components.ShowLabels(True)], static=True)
rr.send_columns(
    target_entity,
    times=[rr.TimeSequenceColumn("frame_nr", df["frame_nr"])],
    components=[
        rr.components.TextBatch(np.where(df["jawOpenState"], "OPEN", "CLOSE")),
    ],
)
```
Here we first log the `ShowLabels` component as static to enable the display of labels. Then, we use `rr.send_columns()` again to send an entire batch of text labels. We use `np.where()` to produce a label matching the state for each timestamp.
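The label-generation step can be illustrated in isolation on a toy boolean series (hypothetical values): `np.where()` picks the first string where the condition is true and the second where it is false, yielding one label per row.

```python
import numpy as np
import pandas as pd

# hypothetical jaw open states for a few frames
state = pd.Series([False, True, True, False])

# map each boolean to its textual label
labels = np.where(state, "OPEN", "CLOSE")

print(list(labels))  # → ['CLOSE', 'OPEN', 'OPEN', 'CLOSE']
```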
## Final result
With some adjustments to the viewer blueprint, we obtain the following result:
The OPEN/CLOSE label is displayed along the bounding box in the 2D view, and the `/jaw_open_state` signal is visible in both the timeseries and dataframe views.
## Complete script
Here is the complete script used by this guide to load data, analyze it, and send the result back:
```python
from __future__ import annotations

import numpy as np
import rerun as rr

# ----------------------------------------------------------------------------------------------
# Load and prepare the data

# load the recording
recording = rr.dataframe.load_recording("face_tracking.rrd")

# query the recording into a pandas dataframe
record_batches = recording.view(index="frame_nr", contents="/blendshapes/0/jawOpen").select()
df = record_batches.read_pandas()

# convert the "jawOpen" column to a flat list of floats
df["jawOpen"] = df["/blendshapes/0/jawOpen:Scalar"].explode().astype(float)

# ----------------------------------------------------------------------------------------------
# Analyze the data

# compute the mouth state
df["jawOpenState"] = df["jawOpen"] > 0.15

# ----------------------------------------------------------------------------------------------
# Log the data back to the viewer

# connect to the viewer
rr.init(recording.application_id(), recording_id=recording.recording_id())
rr.connect_tcp()

# log the jaw open state signal as a scalar
rr.send_columns(
    "/jaw_open_state",
    times=[rr.TimeSequenceColumn("frame_nr", df["frame_nr"])],
    components=[
        rr.components.ScalarBatch(df["jawOpenState"]),
    ],
)

# log a `Text` component (with labels enabled) to the face bounding box entity
target_entity = "/video/detector/faces/0/bbox"
rr.log(target_entity, [rr.components.ShowLabels(True)], static=True)
rr.send_columns(
    target_entity,
    times=[rr.TimeSequenceColumn("frame_nr", df["frame_nr"])],
    components=[
        rr.components.TextBatch(np.where(df["jawOpenState"], "OPEN", "CLOSE")),
    ],
)
```