Element Library

Navigator comes with a range of stock elements so you can start creating right away. Below is a list of all the stock elements, what they do, and how to use them. New elements are added regularly, and you can make your own to customize your solution for production.

Navigator Pro Elements

Inputs

  • Camera - Connect a webcam or network camera (IP or RTSP) as a data feed.
  • Media Loader - Upload images or videos to test models and view the inference outputs. Not required for training or dataset generation.
  • Prompt API - Send LLM prompts to the LLM Chat or Document QnA elements.
  • LLM Dataset Generator - A standalone element that processes text documents and prepares them for training an LLM, with the option to augment that data with information from popular LLMs.
    • Supports Word docs, PDFs, and .txt files.
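Outside Navigator, the document-preparation step that the LLM Dataset Generator performs can be illustrated with a short stdlib-only sketch. The chunk size and JSONL record shape here are assumptions for illustration, not the element's actual output format:

```python
import json

def chunk_text(text: str, max_words: int = 100) -> list[str]:
    """Split a parsed document into word-bounded chunks for training samples."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def to_jsonl(chunks: list[str]) -> str:
    """Serialize chunks as JSON Lines, one training record per line."""
    return "\n".join(json.dumps({"text": c}) for c in chunks)

# A 250-word stand-in for text extracted from a .txt, Word, or PDF file.
sample = "word " * 250
chunks = chunk_text(sample, max_words=100)
print(len(chunks))  # 3 chunks: 100 + 100 + 50 words
```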

Outputs

  • Response API - Connect to the output end of the LLM Chat or Document QnA elements to receive responses back from the API.
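The prompt/response round trip between these two elements can be sketched as a pair of JSON payloads. The field names (`session_id`, `prompt`, `response`) are hypothetical, chosen only to illustrate the pattern; Navigator's actual API schema may differ:

```python
import json

def build_prompt_request(prompt: str, session_id: str) -> str:
    """Serialize a prompt into a JSON request body (hypothetical schema)."""
    return json.dumps({"session_id": session_id, "prompt": prompt})

def parse_response(raw: str) -> str:
    """Extract the model's reply from a JSON response body (hypothetical schema)."""
    payload = json.loads(raw)
    return payload["response"]

req = build_prompt_request("Summarize the attached document.", "demo-1")
# A stand-in for what the Response API would hand back:
fake_reply = json.dumps({"session_id": "demo-1", "response": "A short summary."})
print(parse_response(fake_reply))  # A short summary.
```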

Training

All Training elements need to be in their own Project to train effectively. Specify where the trained model should be stored on your computer using the Output Artifact Path setting.

Inference

  • Document QnA - Add documents for a model to query and answer from, then connect it to stock open-source or custom-tuned models. Also known as RAG (Retrieval-Augmented Generation).
  • LLM Model Chat - Load your custom-tuned LLM (from the LLM Trainer) and connect it to data inputs and outputs.
  • Text Reader - A pre-trained Optical Character Recognition (OCR) model. Connect it to a camera feed or the Media Loader as an input to read text in images and videos.
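The retrieval step behind Document QnA (RAG) can be sketched with a toy keyword-overlap retriever. Real RAG systems rank documents by vector-embedding similarity; this word-overlap score is a deliberate simplification to show the shape of the idea:

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many query terms they share (toy retriever).
    Production RAG uses embedding similarity, not raw word overlap."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

docs = [
    "The warranty covers parts and labor for two years.",
    "Our office is open Monday through Friday.",
]
print(retrieve("how long is the warranty", docs))
```

The retrieved passage would then be passed to the LLM as context alongside the user's question.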

Enterprise Elements

The Enterprise plan includes our Vision suite in addition to the LLM suite available with Navigator Pro. To start a trial of the Enterprise plan, contact our team HERE.

Outputs

  • Image Regions - Specify Regions of Interest (portions of an image or video you want to filter or operate on) for the model to focus on.
  • Zone Counter - Similar to Output Preview, but keeps a running count of the detected objects.
  • Output Preview - A preview window that shows images or camera feeds with inference results and bounding boxes.
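The idea behind Image Regions and the Zone Counter can be sketched as counting detections whose bounding-box centers fall inside a zone rectangle. The `(x1, y1, x2, y2)` tuples and the center-in-zone rule are illustrative assumptions, not Navigator's internal representation:

```python
def center(box):
    """Return the (x, y) center of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def count_in_zone(detections, zone):
    """Count detections whose box center lies inside the zone rectangle."""
    zx1, zy1, zx2, zy2 = zone
    return sum(1 for d in detections
               if zx1 <= center(d)[0] <= zx2 and zy1 <= center(d)[1] <= zy2)

boxes = [(10, 10, 30, 30), (200, 200, 220, 220), (15, 12, 25, 28)]
print(count_in_zone(boxes, zone=(0, 0, 100, 100)))  # 2
```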

Training

All Training elements need to be in their own Project to train effectively. Specify where the trained model should be stored on your computer using the Output Artifact Path setting.

  • Deep Detection Lite Trainer - Train webAI's proprietary object detector, which requires less data to reach accuracy levels similar to other object detection models. Upload a folder of images with a COCO JSON annotations file; the annotations file must be named annotations.json. For more info on how to build an Object Detection Dataset, read our deep dive here.
  • Image Classification Trainer - Train a ResNet classification model. Upload a folder of images organized into sub-folders named after the class labels, define where you want your model saved, and hit Run. For more info on how to build an Image Classification Dataset, read our deep dive here.
  • Object Detection Trainer - Train a YOLOv8 object detector. Upload a folder of images with a COCO JSON annotations file; the annotations file must be named annotations.json. For more info on how to build an Object Detection Dataset, read our deep dive here.
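For the two detection trainers above, the dataset hinges on the annotations.json file being in COCO format. A minimal sketch of that file, showing only the core keys (the `widget` category, image names, and box values are placeholder data):

```python
import json

# Minimal COCO-format annotations, written to the file name the trainers expect.
coco = {
    "images": [
        {"id": 1, "file_name": "img_0001.jpg", "width": 640, "height": 480},
    ],
    "categories": [
        {"id": 1, "name": "widget"},  # placeholder class name
    ],
    "annotations": [
        {
            "id": 1, "image_id": 1, "category_id": 1,
            "bbox": [100, 120, 50, 40],  # COCO bbox: [x, y, width, height]
            "area": 2000, "iscrowd": 0,
        },
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```

The real file lists one entry per image under `images` and one entry per labeled box under `annotations`, linked by `image_id`.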

Inference

  • Deep Detection Lite Inference - Run inference with webAI's proprietary object detector. Connect a data source and an output element such as Output Preview to see the inference on your images or video.
  • Image Classification Inference - Categorize images under a specific label. Requires a data input, a trained classification model, and an output element such as Output Preview to see the inference.
  • Object Detection Inference - Detect and locate objects in images or videos using models you've trained with the Object Detection Trainer. Connect a data source and an output element such as Output Preview to see the inference.
  • Object Detector - A pre-trained object detection model running the YOLO World model. Connect it to a camera feed and an output element such as Output Preview to see bounding boxes around detected objects.

Other

  • Barcode Reader - A pre-trained barcode-reading vision model. Select the barcode type, then connect a data source such as a Camera and an output such as Output Preview to read barcodes.
  • Class Filter - A whitelist filter element for object detectors. Connect it to the output of an Object Detector or Object Detection Inference element, open the settings, and type the object classes you'd like detected. Connect an output element like Output Preview to see the inference and bounding boxes.
  • Object Tracker - A pre-trained object detection model that tracks objects across frames. Tracking differs from detection in that the model maintains each object's identity over time, giving it a concept of object permanence.
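The whitelist behavior of the Class Filter can be sketched in a few lines: detections whose label is not on the allowed list are dropped before they reach the output element. The detection dicts below are an assumed shape for illustration, not Navigator's wire format:

```python
def class_filter(detections, allowed):
    """Keep only detections whose label is on the whitelist (case-insensitive)."""
    allowed = {a.lower() for a in allowed}
    return [d for d in detections if d["label"].lower() in allowed]

detections = [
    {"label": "person", "bbox": (10, 10, 50, 90)},
    {"label": "car",    "bbox": (60, 40, 120, 80)},
    {"label": "dog",    "bbox": (5, 5, 20, 20)},
]
print([d["label"] for d in class_filter(detections, ["person", "dog"])])
# ['person', 'dog']
```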