Moving Platform: Mission Metadata, Telemetry and Detection Loading
Overview
Teaching: 150 min
Exercises: 0 min
Questions
What are considered moving platforms, and what are their use cases?
How do I load moving platform metadata, telemetry data, and detections into the Database?
Objectives
Understand the moving platform workflow in the OTN system
Learn how to use the Movers notebooks
Known issues and troubleshooting tips
Here is the issue checklist from the OTN Gitlab Moving Platforms template, for reference:
Moving platform
- [ ] - NAME load raw metadata file (`movers-1` notebook)**(:fish: table name: c_moving_platform_missions_yyyy)**
- [ ] - NAME load raw telemetry files (`movers-2` notebook) **(:fish: table name: c_moving_platform_telemetry_yyyy**)
- [ ] - NAME create telemetry table from raw table (`movers-2` notebook) **(:fish: table name: moving_platform_telemetry_yyyy**)
- [ ] - NAME combine mission metadata with telemetry (`movers-2` notebook) **(:fish: table name: moving_platform_mission_telemetry_yyyy)**
- [ ] - NAME load to raw detections (`detections-1` notebook) **(:fish: table name: c_detections_yyyy)**
- [ ] - NAME verify raw detections table (`detections-1` notebook)
- [ ] - NAME load raw events (`events-1` notebook) **(:fish: table name: c_events_yyyy )**
- [ ] - NAME load raw events to events table (`events-2` notebook)
- [ ] - NAME load to detections_yyyy_movers (`movers-2` notebook) **(:fish: put affected years here)**
- [ ] - NAME delete self detections (`movers-3` notebook)
- [ ] - NAME timedrift correction for affected detection (`movers-3` notebook)
- [ ] - NAME verify timedrift corrections (`movers-3` notebook)
- [ ] - NAME verify detections_yyyy_movers (looking for duplicates) (`movers-3` notebook)
- [ ] - NAME load to sensor match (`movers-3` notebook) **(:fish: put affected years here)**
- [ ] - NAME load formatted telemetry tables (`movers-4` notebook) **(:fish: put affected years here)**
- [ ] - NAME load reduced telemetry tables (`movers-4` notebook) **(:fish: put affected years here)**
- [ ] - NAME load glider as receiver tables (`movers-4` notebook) **(:fish: put affected years here)**
- [ ] - NAME load into vw_detections_yyyy_movers (`movers-4` notebook) **(:fish: put affected years here)**
- [ ] - NAME load view detections into otn_detections_yyyy (`movers-4` notebook) **(:fish: put affected years here)**
- [ ] - NAME verify otn_detections_yyyy (`movers-4` notebook)
- [ ] - NAME create mission and receiver records in moorings (`movers-4` notebook)
- [ ] - NAME load download records (`events-3` notebook)
- [ ] - NAME verify download records (`events-3` notebook)
- [ ] - NAME process receiver configuration (`events-4` notebook)
- [ ] - NAME label issue with *'Verify'*
- [ ] - NAME pass issue to analyst for final steps
- [ ] - NAME match tags to animals (`detections-4` notebook)
- [ ] - NAME update detection extract table
metadata: **(put metadata repository link here)**
data: **(put data repository link here)**
telemetry: **(put telemetry repository link here)**
Loading Mission Metadata
Moving platform mission metadata should be reported to the Node using the template provided here: Moving Platforms Metadata. This spreadsheet file will contain one or more missions (rows) of the moving platform: identifiers, instruments used, and deployment/recovery times.
- Quality control the MOVING PLATFORMS METADATA spreadsheet. If any modifications are made, save the revised version as `_QCed.xlsx`.
1.1 Visually check for any missing information, inconsistencies, or formatting issues in the essential columns (shown with a dark-green background color). Column names and example data are shown below:
- PLATFORM_ID: e.g. `OTN-Gl-1`
- OTN_MISSION_ID: e.g. `OTN-Gl-1-20231003T1456` (Note: otn_mission_id is an internal unique identifier which can be constructed as `platform_id` + `deploy_date_time`; see the sketch below. This format is recommended to comply with OTN and to match the Dataset ID of IOOS US Glider DAC ERDDAP datasets.)
- INS_MODEL_NO: e.g. `VMT`
- INS_SERIAL_NO: e.g. `130000`
- DEPLOY_DATE_TIME: e.g. `2023-10-03T14:56:00`
- DEPLOY_LAT: e.g. `16.7428`
- DEPLOY_LONG: e.g. `-24.7889`
- RECOVER_DATE_TIME: e.g. `2023-12-03T12:00:00`
- RECOVERED: (y/n/l)
- RECOVER_LAT: e.g. `16.9251`
- RECOVER_LONG: e.g. `-25.4713`
- DATA_DOWNLOADED: (y/n)
- DOWNLOAD_DATE_TIME: e.g. `2024-06-11T00:00:00`
- FILENAME: e.g. `VMT_130000_20240616_133100.vrl` (Note: prefer original downloads.)
1.2 Optional but nice-to-have columns:
- TRANSMITTER and TRANSMIT_MODEL to reduce self-detections.
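For reference, here is a minimal Python sketch of the recommended OTN_MISSION_ID construction; the helper function name is hypothetical, and the format assumes the `platform_id` + `deploy_date_time` convention described above:

```python
# Hypothetical helper: build OTN_MISSION_ID as platform_id + deploy_date_time,
# per the recommended format above (e.g. OTN-Gl-1-20231003T1456).
from datetime import datetime

def make_otn_mission_id(platform_id: str, deploy_date_time: str) -> str:
    dt = datetime.fromisoformat(deploy_date_time)   # e.g. 2023-10-03T14:56:00
    return f"{platform_id}-{dt:%Y%m%dT%H%M}"

print(make_otn_mission_id("OTN-Gl-1", "2023-10-03T14:56:00"))  # OTN-Gl-1-20231003T1456
```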
- Run through the `movers - 1 - Load Mission Metadata` Nodebook to load the spreadsheet into the `mission_table`. See steps here:
User Input
Cell three requires input from you. This information will be used to read the mission spreadsheet and to create a new raw mission table in the database.
`schema`: 'collectioncode'
- Please edit to include the relevant project code, in lowercase, between the quotes.

`table_suffix`: e.g. `2024_03`
- Within the quotes, please add your custom table suffix. We recommend using `year_month` or similar, to indicate the most-recently downloaded instrument. This suffix will be reused in the subsequent Movers Nodebooks, so note it down.

`mission_file`: '/path/to/mission_file'
- Paste a filepath to the relevant XLSX file. The filepath will be added between the provided quotation marks.
- Check off the step and record the `mission_table` name in the Gitlab ticket.
- [ ] - NAME load raw metadata file (movers-1 notebook)**(:fish: table name: c_moving_platform_missions_yyyy)**
Loading Telemetry Data
- Visually check whether there is any missing information or inconsistent formatting in the four essential columns of each submitted telemetry file (a quick screening script is sketched below). Column names and example data are shown below:
  - Timestamp: e.g. `2023-12-13T13:10:12` (Note: the column name may be different)
  - lat: e.g. `28.33517` (Note: the column name may be different)
  - lon: e.g. `-80.33734833` (Note: the column name may be different)
  - PLATFORM_ID: e.g. `OTN-GL-1` (Note: the column name may be different. Ensure the values match the `mission_table.platform_id` from the Loading Mission Metadata step)
  - OTN_MISSION_ID: e.g. `OTN-GL-1-20231003T1456` (Note: this column needs to be added in a spreadsheet application; populate the values to match the `mission_table.otn_mission_id` from the Loading Mission Metadata step)
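One quick way to screen a telemetry file for missing headers and blank values is sketched below. This assumes pandas is available and that the file uses the column names shown above; adjust the list to the submitted headers as needed:

```python
# Hedged sketch: flag missing headers and blank values in the essential
# telemetry columns before loading. Column names here are assumptions.
import pandas as pd

telem = pd.read_csv('/path/to/telem_file.csv')
essential = ['Timestamp', 'lat', 'lon', 'PLATFORM_ID']

missing = [c for c in essential if c not in telem.columns]
print('Missing columns:', missing or 'none')

present = [c for c in essential if c in telem.columns]
print(telem[present].isna().sum())  # count of blank cells per column
```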
- Launch the `movers - 2 - Load telemetry` Nodebook
User Input
Cell three requires input from you. This information will be used to get the telemetry CSV and to create a new raw telemetry table in the database.
`table_suffix`: e.g. `2024_03`
- Within the quotes, please add your custom table suffix. We recommend using `year_month` or similar.

`schema`: 'collectioncode'
- Please edit to include the relevant project code, in lowercase, between the quotes.

`telemetry_file`: '/path/to/telem_file'
- Paste a filepath to the relevant CSV file. The filepath will be added between the provided quotation marks.

- Note: if multiple telemetry files are submitted, use the `Optional: Combine telemetry files from multiple missions/platforms` section to combine them (a pandas sketch follows below).
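If you prefer to combine the files yourself before loading, a pandas approach like the following works, assuming the files share the same column layout (paths are placeholders):

```python
# Illustrative only: concatenate several telemetry CSVs into one file.
import glob
import pandas as pd

files = sorted(glob.glob('/path/to/telemetry/*.csv'))
combined = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
combined.to_csv('/path/to/combined_telemetry.csv', index=False)
print(f'Combined {len(files)} files into {len(combined)} rows')
```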
- Run the `Prepare the telemetry file to be uploaded` cell to map the spreadsheet columns to Database columns.
- Run the `verify_telemetry_file` and `Upload the telemetry file to raw table` cells to load the telemetry data (.xlsx or .csv) file into the `raw_telemetry` table.
- Check off the step and record the `c_moving_platform_telemetry` name in the Gitlab ticket.
- [ ] - NAME load raw telemetry files (movers-2 notebook) **(:fish: table name: c_moving_platform_telemetry_yyyy**)
- Run the `Create the telemtry table for joining to missions` and `verify_telemetry_table` cells to create and load the `telemetry` table (e.g. `moving_platform_telemetry_2024_03`).
- Check off the step and record the `moving_platform_telemetry` name in the Gitlab ticket.
- [ ] - NAME create telemetry table from raw table (movers-2 notebook) **(:fish: table name: moving_platform_telemetry_yyyy**)
- Run the `movers - 2 - Load telemetry` notebook's `verify_missions_table`, `create_joined_table`, and `verify_telemetry_table` cells to create the `moving_platform_mission_telemetry` table.
- Check off the step and record the `moving_platform_mission_telemetry` name in the Gitlab ticket.
- [ ] - NAME combine mission metadata with telemetry (movers-2 notebook) **(:fish: table name: moving_platform_mission_telemetry_yyyy)**
Loading Raw Detections and Events
These detailed steps and explanations are the same as those in the Detections lesson (https://ocean-tracking-network.github.io/node-manager-training/10_Detections/index.html): the `Convert to CSV` section, the `detections - 1 - load csv detections` section, the `events - 1 - load events into c_events_yyyy` section, and the `events - 2 - move c_events into events table` section. Please use that Detection Loading process as a reference.
- The related detections may now be processed. Detection data should be reported to the Node as a collection of raw, unedited files. These can be in the form of a zipped folder of `.VRL` files, a database from Thelma Biotel, or any other raw data product from any manufacturer.
Visual Inspection
Once the files are received from a researcher, the Data Manager should first complete a visual check for formatting and accuracy.
Things to visually check:
- Do the files appear edited? Look for `_edited` in the file name.
- Is the file format the same as expected for that manufacturer? E.g. `.vrl` for Innovasea - not `.csv` or `.rld` formats.
Convert to CSV
Once the raw files are obtained, the data must often be converted to `.csv` format by the Node Manager. There are several ways this can be done, depending on the manufacturer.
For Innovasea
- VUE (obsolete; prefer Fathom Connect unless the receiver is unsupported)
  - Open a new database
  - Import all the VRL files provided
  - Select `export detections` and choose the location where you want to save the file
  - Select `export events` and choose the location where you want to save the file
- Fathom Connect App
  - Choose “export data”
  - Select the relevant files and import them into the Fathom Connect application
  - Export all data types, and choose the location where you want to save the files
- `convert - Fathom (vdat) Export - VRL to CSV` Nodebook
  - This will use the `vdat.exe` executable to export from VRL/VDAT to CSV (a scripting sketch follows this list)
  - Instructions for this Nodebook are in the `Detections` lesson.
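If you want to script the vdat conversion outside the Nodebook, a call like the following can work. The exact subcommand and flags vary by vdat version (check `vdat --help`), so treat this as an assumption-laden sketch rather than a confirmed command line:

```python
# Illustrative only: invoke the vdat executable to convert a VRL to CSV.
# The 'convert' subcommand and '--format' flag are assumptions; verify
# against your vdat version's help output before use.
import subprocess

subprocess.run(
    ['vdat.exe', 'convert', '--format=csv.fathom',
     'VMT_130000_20240616_133100.vrl'],
    check=True,
)
```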
For Thelma Biotel
- Use the ComPort software to open the `.tbdb` file and export as CSV
For Lotek
- Exporting to CSV is more complicated; please reach out to OTN for the specific steps for a given instrument model
For all other manufacturers, contact OTN staff to get specifics on the detection data loading workflow.
detections - 1 - load csv detections
Detections-1 loads CSV detections files into a new database table. If detections were exported using Fathom or the `convert - Fathom (vdat) Export - VRL to CSV` Nodebook, the receiver events records will also be loaded at this stage. This is because these applications combine the detections and events data in one CSV file.
Import cells and Database Connections
As in all Nodebooks, run the import cell to get the packages and functions needed throughout the notebook. This cell can be run without any edits.
The second cell will set your database connection. You will have to edit one section: `engine = get_engine()`
- Within the open brackets you need to open quotations and paste the path to your database `.kdbx` file, which contains your login credentials.
- On MacOS computers, you can usually find and copy the path to your database `.kdbx` file by right-clicking on the file and holding down the “option” key. On Windows, we recommend using the installed software Path Copy Copy, so you can copy a unix-style path by right-clicking.
- The path should look like `engine = get_engine('C:/Users/username/Desktop/Auth files/database_conn_string.kdbx')`.
Once you have added your information, you can run the cell. Successful login is indicated with the following output:
Auth password:········
Connection Notes: None
Database connection established
Connection Type:postgresql Host:db.for.your.org Database:your_db_name User:your_node_admin Node:Node
User Input
Cell three requires input from you. This information will be used to get the raw detections CSV and to create a new raw table in the database.

`file_or_folder_path = r'C:/Users/path/to/detections_CSVs/'`
- Paste a filepath to the relevant CSV file(s). The filepath will be added between the provided quotation marks.
- This can be a path to a single CSV file, or a folder of multiple CSVs.

`table_suffix = 'YYYY_mm'`
- Within the quotes, please add your custom table suffix. We recommend using the year and month (i.e., `YYYY_MM`) or similar, to indicate the most-recently downloaded instrument.

`schema = 'collectioncode'`
- Please edit to include the relevant project code, in lowercase, between the quotes.

There are also some optional inputs:
- `load_detections`: a true or false value, using the table suffix you supplied.
- `stacked`: for Fathom exports only; refers to how they should be parsed.
Once you have added your information, you can run the cell.
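Assembled, the cell might look like the sketch below; the values shown for the optional inputs are assumptions, not defaults confirmed by the Nodebook:

```python
# Example values only -- edit to match your submission.
file_or_folder_path = r'C:/Users/path/to/detections_CSVs/'  # single CSV or folder
table_suffix = '2024_03'     # YYYY_mm of the most recent download
schema = 'collectioncode'    # project code, lowercase

# Optional inputs (values here are assumptions):
load_detections = True       # load detections using the suffix above
stacked = False              # Fathom exports only: how they should be parsed
```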
Verify Detection File and Load to Raw Table
Next, the Nodebook will review and verify the detection file(s) format and report any errors. Upon successful verification, you can run the cell below, which will attempt to load the detections into a new raw table.
The Nodebook will indicate the success of the table-creation with a message such as this:
Reading fathom files...
Loading Files...
X/X
Task list checkpoint
In GitLab, this task can be completed at this stage:
- [ ] - NAME load to raw detections ("detections-1" notebook) **(:fish: table name: c_detections_yyyy)**
Ensure you paste the table name (ex: c_detections_YYYY_mm) into the indicated section before you check the box.
Verify Raw Detection Table
This cell will now complete the Quality Control checks of the raw table. This is to ensure the Nodebook loaded the records correctly from the CSVs.
The output will have useful information:
- Are there any duplicates?
- Are the serial numbers formatted correctly?
- Are the models formatted correctly?
The Nodebook will indicate the sheet has passed quality control by adding a green checkmark (✔️) beside each section.
If there are any errors, contact OTN for next steps.
Task list checkpoint
In GitLab, these tasks can be completed at this stage:
- [ ] - NAME verify raw detections table ('detections-1' notebook)
events - 1 - load events into c_events_yyyy
Events-1 is responsible for loading receiver events files into raw tables. This is only relevant for CSVs that were NOT exported using Fathom or the `convert - Fathom (vdat) Export - VRL to CSV` Nodebook.
Import cell
As in all Nodebooks, run the import cell to get the packages and functions needed throughout the notebook. This cell can be run without any edits.
User Inputs
Cell two requires input from you. This information will be used to get the raw events CSV and to create a new raw table in the database.

`filepath = r'C:/Users/path/to/events.csv'`
- Paste a filepath to the relevant CSV file. The filepath will be added between the provided quotation marks.

`table_name = 'c_events_YYYY_mm'`
- Within the quotes, please add your custom table suffix. We recommend using the year and month (i.e., `YYYY_MM`) or similar, to indicate the most-recently downloaded instrument.

`schema = 'collectioncode'`
- Please edit to include the relevant project code, in lowercase, between the quotes.

There are also some optional inputs:
- `file_encoding`: how the file is encoded. ISO-8859-1 is the default encoding used in VUE’s event export.
Once you have added your information, you can run the cell.
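For reference, a filled-in version of cell two might look like this sketch (values are placeholders):

```python
# Example values only -- edit to match your submission.
filepath = r'C:/Users/path/to/events.csv'
table_name = 'c_events_2024_03'
schema = 'collectioncode'

# Optional input:
file_encoding = 'ISO-8859-1'  # default encoding used by VUE's event export
```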
Verifying the events file
Before attempting to load the event files to a raw table, the Nodebook will verify the file to make sure there are no major issues. This is done by running the Verify Events File cell. If there are no errors, you will be able to continue.
The Nodebook will indicate the success of the file verification with a message such as this:
Reading file 'events.csv' as CSV.
Verifying the file.
Format: VUE 2.6+
Mandatory Columns: OK
date_and_time datetime:OK
Initialization(s): XX
Data Upload(s): XX
Reset(s): XX
Database Connection
You will have to edit one section: `engine = get_engine()`
- Within the open brackets you need to open quotations and paste the path to your database `.kdbx` file, which contains your login credentials.
- On MacOS computers, you can usually find and copy the path to your database `.kdbx` file by right-clicking on the file and holding down the “option” key. On Windows, we recommend using the installed software Path Copy Copy, so you can copy a unix-style path by right-clicking.
- The path should look like `engine = get_engine('C:/Users/username/Desktop/Auth files/database_conn_string.kdbx')`.
Once you have added your information, you can run the cell. Successful login is indicated with the following output:
Auth password:········
Connection Notes: None
Database connection established
Connection Type:postgresql Host:db.for.your.org Database:your_db_name User:your_node_admin Node:Node
Load the events file into the c_events_yyyy table
The second-to-last cell loads the events file into a raw table. It depends on successful verification in the previous step. Upon successful loading, you can dispose of the engine and then move on to the next Nodebook.
The Nodebook will indicate the success of the table-creation with the following message:
File loaded with XXXXX records.
100%
Task list checkpoint
In GitLab, these tasks can be completed at this stage:
- [ ] - NAME load raw events (events-1 notebook) **(:fish: table name: c_events_yyyy )**
Ensure you paste the table name (ex: c_events_YYYY_mm) into the indicated section before you check the box.
events - 2 - move c_events into events table
This Nodebook will move the raw events records into the intermediate events table.
Import cell
As in all Nodebooks, run the import cell to get the packages and functions needed throughout the notebook. This cell can be run without any edits.
User input
This cell requires input from you. This information will be used to locate the raw events table you just loaded.

`c_events_table = 'c_events_YYYY_mm'`
- Within the quotes, please add your custom table suffix, which you have just loaded in either `detections-1` or `events-1`.

`schema = 'collectioncode'`
- Please edit to include the relevant project code, in lowercase, between the quotes.
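Once the database connection below is established, you can also sanity-check that the raw table exists with a snippet like the following sketch. It assumes `engine` comes from the `get_engine()` cell and a PostgreSQL database (hence `to_regclass`):

```python
# Hedged sketch: check the raw events table exists before verifying.
# Assumes a PostgreSQL database and an `engine` from the get_engine() cell:
# engine = get_engine('/path/to/database_conn_string.kdbx')
from sqlalchemy import text

c_events_table = 'c_events_2024_03'
schema = 'collectioncode'

with engine.connect() as conn:
    found = conn.execute(
        text("SELECT to_regclass(:tbl) IS NOT NULL"),
        {"tbl": f"{schema}.{c_events_table}"},
    ).scalar()
print('table found:', found)
```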
Database Connection
You will have to edit one section: `engine = get_engine()`
- Within the open brackets you need to open quotations and paste the path to your database `.kdbx` file, which contains your login credentials.
- On MacOS computers, you can usually find and copy the path to your database `.kdbx` file by right-clicking on the file and holding down the “option” key. On Windows, we recommend using the installed software Path Copy Copy, so you can copy a unix-style path by right-clicking.
- The path should look like `engine = get_engine('C:/Users/username/Desktop/Auth files/database_conn_string.kdbx')`.
Once you have added your information, you can run the cell. Successful login is indicated with the following output:
Auth password:········
Connection Notes: None
Database connection established
Connection Type:postgresql Host:db.for.your.org Database:your_db_name User:your_node_admin Node:Node
Verify table format
You will then verify that the `c_events` table you entered exists, and that it meets the required format specifications.
The Nodebook will indicate the success of the table verification with a message such as this:
Checking table name format... OK
Checking if schema collectioncode exists... OK!
Checking collectioncode schema for c_events_YYYY_mm table... OK!
collectioncode.c_events_YYYY_mm table found.
If there are any errors in this section, please contact OTN.
Load to Events table
If nothing fails verification, you can move to the loading cell.
The Nodebook will indicate the success of the processing with a message such as this:
Checking for the collectioncode.events table... OK!
Loading events... OK!
Loaded XX rows into collectioncode.events table.
Task list checkpoint
In GitLab, these tasks can be completed at this stage:
- [ ] - NAME load raw events to events table ("events-2" notebook)
Loading Detections for Moving Platforms
- With the telemetry and mission tables in place, we can now upload the raw detections and promote them into the `detections_yyyy_movers` tables.
- This Nodebook is analogous to `detections - 2 - c_table into detections_yyyy` and `detections - 2b - timedrift calculations`. The difference is that it handles the `_movers` tables.
  - Run the `movers - 3 - Load Detections` Nodebook up to the `Load raw dets into detections_yyyy_movers` cell to populate the `detections_yyyy_movers` tables and load the raw detections into them.
- Run
User Input
Cell three requires input from you. This information will be used to get the raw detections and to create a new raw table in the database.

`table_suffix`: e.g. `2024_03`
- Should be the same as in the `movers - 1 - Load Mission Metadata` Nodebook.
- Within the quotes, please add your custom table suffix. We recommend using `year_month` or similar.

`schema`: 'collectioncode'
- Please edit to include the relevant project code, in lowercase, between the quotes.
- Run the `Load raw dets into detections_yyyy_movers` cell and note the output.
- Check off the step and record the affected years (which detections_yyyy_movers tables were updated) in the Gitlab ticket.

- [ ] - NAME load to detections_yyyy_movers (`movers-2` notebook) **(:fish: put affected years here)**
- Note: if no detections were promoted into detections_yyyy_movers, please assign the ticket to OTN for troubleshooting.
- Run the next six cells to load the timedrift factors, adjust detection datetimes in detections_yyyy_movers, and verify the timedrift corrections. Check off the steps in the Gitlab ticket.

- [ ] - NAME timedrift correction for affected detection (`movers-3` notebook)
- [ ] - NAME verify timedrift corrections (`movers-3` notebook)
- Run the next two cells to verify the detections_yyyy_movers tables; a sketch of the kind of duplicate check involved appears below. Check off the step in the Gitlab ticket.
- [ ] - NAME verify detections_yyyy_movers (looking for duplicates) (movers-3 notebook)
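For context, the kind of duplicate this step hunts for can be approximated with a query like the following sketch; the table and column names here are assumptions, and the Nodebook's actual check may differ:

```python
# Hedged sketch: look for repeated detection rows in a movers table.
# Table/column names are assumptions; adjust to your schema and year.
from sqlalchemy import text

dup_sql = text("""
    SELECT datecollected, receiver, tagname, COUNT(*) AS n
    FROM collectioncode.detections_2023_movers
    GROUP BY datecollected, receiver, tagname
    HAVING COUNT(*) > 1
""")
with engine.connect() as conn:   # engine from the get_engine() cell
    dups = conn.execute(dup_sql).fetchall()
print(f'{len(dups)} duplicated detection keys found')
```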
- Run the last cell to create and load the sensor match tables for movers. Check off the step and record the affected years in the Gitlab ticket.

- [ ] - NAME load to sensor match (`movers-3` notebook) **(:fish: put affected years here)**
Loading OTN Detections
User Input
Cell three requires input from you. This information identifies the schema, table suffix, and affected years for the loading steps below.

`table_suffix`: e.g. `2024_03`
- Should be the same as in the `movers - 1 - Load Mission Metadata` notebook.
- Within the quotes, please add your custom table suffix. We recommend using `year_month` or similar.

`schema`: 'collectioncode'
- Please edit to include the relevant project code, in lowercase, between the quotes.

`years`: e.g. `[2022, 2023]`
- Enter the affected years from `movers - 3 - Load Detections`.
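As a sketch, the completed inputs might read (values are examples only):

```python
# Example values only -- edit to match your project.
table_suffix = '2024_03'   # same as in movers-1
schema = 'collectioncode'  # project code, lowercase
years = [2022, 2023]       # affected years reported by movers-3
```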
- Run the `create_moving_platforms_full_telemetry` cell to load the formatted telemetry tables. Check off the step and record the affected years in the Gitlab ticket.

- [ ] - NAME load formatted telemetry tables (`movers-4` notebook) **(:fish: put affected years here)**
- Run the `create_reduced_telemetry_tables` cell to load the reduced telemetry tables. Check off the step and record the affected years in the Gitlab ticket.

- [ ] - NAME load reduced telemetry tables (`movers-4` notebook) **(:fish: put affected years here)**
- Run the `create_platform_as_receiver_tables` cell to load the “glider as receiver” tables. Check off the step and record the affected years in the Gitlab ticket.

- [ ] - NAME load glider as receiver tables (`movers-4` notebook) **(:fish: put affected years here)**
- Run the `create_detections_view` and `load_view_into_otn_dets` cells to load the view detections into the otn_detections_yyyy tables. Check off the steps and record the affected years in the Gitlab ticket.

- [ ] - NAME load into vw_detections_yyyy_movers (`movers-4` notebook) **(:fish: put affected years here)**
- [ ] - NAME load view detections into otn_detections_yyyy (`movers-4` notebook) **(:fish: put affected years here)**
- Note: if no detections were promoted into detections_yyyy_movers, please assign the ticket to OTN for troubleshooting.
- Run the `verify_otn_detections` cell to verify the `otn_detections_yyyy` tables. Check off the step in the Gitlab ticket.

- [ ] - NAME verify otn_detections_yyyy (`movers-4` notebook)
- Run the `load_platforms_to_moorings` cell to create the mission and receiver records in the `moorings` table. Check off the step in the Gitlab ticket.

- [ ] - NAME create mission and receiver records in moorings (`movers-4` notebook)
events - 3 - create download records
This Nodebook will promote the events records from the intermediate `events` table into the final `moorings` records. Only use this Nodebook after adding the receiver records to the moorings table, as this process is dependent on receiver records.
NOTE: as of November 2024, the below Nodebooks do not support the Movers data. This will be completed shortly.
Import cells and Database connections
As in all Nodebooks, run the import cell to get the packages and functions needed throughout the notebook. This cell can be run without any edits.
The second cell will set your database connection. You will have to edit one section: `engine = get_engine()`
- Within the open brackets you need to open quotations and paste the path to your database `.kdbx` file, which contains your login credentials.
- On MacOS computers, you can usually find and copy the path to your database `.kdbx` file by right-clicking on the file and holding down the “option” key. On Windows, we recommend using the installed software Path Copy Copy, so you can copy a unix-style path by right-clicking.
- The path should look like `engine = get_engine('C:/Users/username/Desktop/Auth files/database_conn_string.kdbx')`.
Once you have added your information, you can run the cell. Successful login is indicated with the following output:
Auth password:········
Connection Notes: None
Database connection established
Connection Type:postgresql Host:db.for.your.org Database:your_db_name User:your_node_admin Node:Node
User Inputs
Information regarding the tables we want to check against is required. Please complete `schema = 'collectioncode'`, editing it to include the relevant project code, in lowercase, between the quotes.
Once you have edited the value, you can run the cell.
Detecting Download Records
The next cell will scan the `events` table looking for data download events, and attempt to match them to their corresponding receiver deployments.
You should see output like this:
Found XXX download records to add to the moorings table
The next cell will print out all the identified download records, in a dataframe for you to view.
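Conceptually, the matching resembles the sketch below: download events joined to deployments whose interval contains the event time. All table and column names here are assumptions, and the Nodebook's real logic is more involved:

```python
# Conceptual sketch only -- names are assumptions, not the real schema.
from sqlalchemy import text

match_sql = text("""
    SELECT e.*, m.catalognumber
    FROM collectioncode.events e
    JOIN collectioncode.moorings m
      ON e.rcv_serial_no = m.rcv_serial_no
     AND e.event_ts BETWEEN m.deploy_date AND m.recover_date
    WHERE e.event_type = 'data_upload'
""")
with engine.connect() as conn:   # engine from the get_engine() cell
    downloads = conn.execute(match_sql).fetchall()
print(f'Found {len(downloads)} download records to add to the moorings table')
```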
Loading Download Records
Before moving on from this point you will need to confirm two things:
- Confirm that NO Push is currently ongoing
- Confirm `rcvr_locations` for this schema have been verified
If a Push is ongoing, or if verification has not yet occurred, you must wait for it to be completed before processing beyond this point.
If everything is OK, you can run the cell. The Nodebook will indicate success with a message like:
Added XXX records to the moorings table
Task list checkpoint
In GitLab, this task can be completed at this stage:
- [ ] - NAME load download records ("events-3" notebook)
Verify Download Records
This cell will have useful information:
- Are the instrument models formatted correctly?
- Are receiver serial numbers formatted correctly?
- Are there any other outstanding download records which haven’t been loaded?
The Nodebook will indicate the table has passed verification by the presence of green checkmarks (✔️).
If there are any errors, contact OTN for next steps.
Task list checkpoint
In GitLab, this task can be completed at this stage:
- [ ] - NAME verify download records ("events-3" notebook)
events-4 - process receiver configuration
This Nodebook will process the receiver configurations (such as MAP code) from the events table and load them into the schema’s receiver_config
table. This is a new initiative by OTN to document and store this information, to provide better feedback to researchers regarding the detectability of their tag programming through time and space.
There are many cells in this Nodebook that display information; for those, no action is needed from the Node Manager.
Import cells and Database connections
As in all Nodebooks, run the import cell to get the packages and functions needed throughout the notebook. This cell can be run without any edits.
The second cell will set your database connection. You will have to edit one section: `engine = get_engine()`
- Within the open brackets you need to open quotations and paste the path to your database `.kdbx` file, which contains your login credentials.
- On MacOS computers, you can usually find and copy the path to your database `.kdbx` file by right-clicking on the file and holding down the “option” key. On Windows, we recommend using the installed software Path Copy Copy, so you can copy a unix-style path by right-clicking.
- The path should look like `engine = get_engine('C:/Users/username/Desktop/Auth files/database_conn_string.kdbx')`.
Once you have added your information, you can run the cell. Successful login is indicated with the following output:
Auth password:········
Connection Notes: None
Database connection established
Connection Type:postgresql Host:db.for.your.org Database:your_db_name User:your_node_admin Node:Node
User Inputs
Information regarding the tables we want to check against is required. Please complete `schema = 'collectioncode'`, editing it to include the relevant project code, in lowercase, between the quotes.
Once you have edited the value, you can run the cell.
Get Receiver Configuration
Using the receiver deployment records and the information found in the `events` table, this cell will identify any important configuration information for each deployment. A dataframe will be displayed.
The following cell (“Process receiver config”) will extrapolate further, populating all the required columns for the `receiver_config` table. A dataframe will be displayed.
Processed Configuration
Run this to see the rows that were successfully processed from the `events` table. These will be sorted in the next step into (1) duplicates, (2) updates, and (3) new configurations. Of the latter two, you will be able to select which ones you want to load into the database.
Incomplete Configuration
Run this cell to see any rows that could not be properly populated with data, e.g., a missing frequency or a missing map code. This will usually happen as a result of a map code that could not be correctly processed. These rows will not be loaded, and will have to be fixed in the events table if you want the configuration to show up in the `receiver_config` table.
Sort Configuration
All processed configurations are sorted into (1) duplicates, (2) updates, and (3) new configurations. Of the latter two, you will be able to select which ones you want to load into the database in future cells.
Duplicates
No action needed - These rows have already been loaded, and there are no substantial updates to be made.
Updates
Action needed - These rows carry incoming configuration from the `events` table that represents updates to what is already in the table for a given catalognumber at a given frequency. Example: a new version of VDAT exported more events information, and we would like to ensure this new information is added to the existing records in the `receiver_config` table.
Select the rows using the checkboxes that you want to make updates to, then run the next cell to make the changes.
You should see the following success message, followed by a dataframe:
XX modifications made to receiver config table.
New Configuration
These rows are not already in the receiver configuration table.
Select the rows using the checkboxes that you want to add to the database, then run the next cell to make the changes.
You should see the following success message, followed by a dataframe:
XX modifications made to receiver config table.
Task list checkpoint
In GitLab, this task can be completed at this stage:
- [ ] - NAME process receiver configuration ("events-4" notebook)
Final Steps
The remaining steps in the GitLab Checklist are completed outside the Nodebooks.
First, you should access the Repository folder in your browser and ensure the raw detections are posted in the `Data and Metadata` folder.
Finally, the Issue can be passed off to an OTN-analyst for final verification in the database.
Troubleshoot Tips and Resources
- To visualize detections alongside telemetry, use the `movers - 2b - Compare detections to telemetry` Nodebook, which provides:
  - Gantt Chart of Detections vs Telemetry
  - Missing Telemetry Graph
Known issues and Limitations
`events - 3 - create download records` does not support moving platforms as of Nov-2024.
Key Points
OTN supports processing of Slocum and wave glider detections, detections from other mobile platforms (ship-board receivers, animal-mounted receivers, etc.), and active tracking (reference: https://oceantrackingnetwork.org/gliders/).
OTN is able to match detections collected by moving platforms as long as geolocation data is provided.
Mission metadata, telemetry data, and detection data should be submitted prior to the moving platform data loading process.