
Vision Detector


4.8 (4848 ratings)
Productivity
Developer: Kazufumi Suzuki
Free

Vision Detector performs image processing on iPhone and iPad using a CoreML model. Normally, a CoreML model can only be previewed in Xcode, or must be built into an app with Xcode before it can run on an iPhone. Vision Detector lets you run CoreML models on your iPhone directly.

To use the app, first prepare a machine learning model in CoreML format with Create ML or coremltools. Then copy the model into the iPhone/iPad file system, which is accessible through the iPhone's Files app. This includes local storage and various cloud services (iCloud Drive, OneDrive, Google Drive, Dropbox, etc.). You can also use AirDrop to place the CoreML model in the Files app. After launching the app, select and load your model.
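
As an illustration of what loading a model from the Files app involves, here is a minimal Swift sketch using Core ML's on-device compiler. The function name is hypothetical and this is an assumption about how such loading generally works, not the app's published source:

    import CoreML

    // Sketch: load a .mlmodel file picked from the Files app.
    // Core ML compiles .mlmodel files on-device into a .mlmodelc
    // directory before they can be instantiated.
    func loadModel(from pickedURL: URL) throws -> MLModel {
        let compiledURL = try MLModel.compileModel(at: pickedURL)
        return try MLModel(contentsOf: compiledURL)
    }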

You can choose the input source image from:
- Video captured by the iPhone's/iPad's built-in camera
- Still images from the built-in camera
- The photo library
- The file system
For video input, inference runs continuously on the camera feed; the achievable frame rate and other parameters depend on the device. A sketch of such a pipeline follows.
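
A continuous-inference loop like this is typically built on AVCaptureVideoDataOutput feeding Apple's Vision framework. The Swift sketch below is an assumption about the general shape of that pipeline, not the app's actual implementation; visionModel would come from VNCoreMLModel(for:) applied to the loaded MLModel:

    import AVFoundation
    import Vision

    // Sketch: run a VNCoreMLRequest on every frame the camera delivers.
    final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let request: VNCoreMLRequest

        init(visionModel: VNCoreMLModel) {
            request = VNCoreMLRequest(model: visionModel) { request, _ in
                // Handle the classification/detection results here.
                print(request.results ?? [])
            }
            super.init()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
            try? handler.perform([request])
        }
    }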

The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Models lacking a non-maximum suppression layer, or those that use MultiArray for input/output data, are not supported.
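
For reference, these three model families surface as distinct observation types in Apple's Vision framework. The following sketch shows how results might be told apart; it is illustrative, not the app's code:

    import Vision

    // Sketch: dispatch on the observation type Vision returns.
    func handle(results: [VNObservation]) {
        for observation in results {
            switch observation {
            case let classification as VNClassificationObservation:
                // Image classification: a label plus a confidence score.
                print(classification.identifier, classification.confidence)
            case let object as VNRecognizedObjectObservation:
                // Object detection: bounding box plus labels. Vision only
                // produces these when the model includes a non-maximum
                // suppression layer; without it, results come back as raw
                // MultiArray feature values, which the app does not support.
                print(object.boundingBox, object.labels.first?.identifier ?? "")
            case let pixels as VNPixelBufferObservation:
                // Style transfer: an image-to-image result.
                print(pixels.pixelBuffer)
            default:
                break
            }
        }
    }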

In the local Vision Detector documents folder, you'll find an empty tab-separated values (TSV) file named customMessage.tsv. This file lets you define custom messages to be displayed for particular labels. Organize the data into a table with two columns as follows:
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
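
For example, a file pairing two hypothetical YOLO labels with messages would contain (columns separated by a literal tab character):

    dog	A dog was detected.
    person	A person was detected.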

Note: This application does not include a machine learning model.

On iPhone, you can use the LED torch. When the screen is in landscape orientation, touching the screen hides the UI and switches to full-screen mode.
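
The torch toggle presumably uses AVFoundation's standard torch API; the Swift sketch below shows the usual pattern (an assumption, since the app's source is not published; iPads generally lack a torch, which would explain why the feature is iPhone-only):

    import AVFoundation

    // Sketch: toggle the LED torch on devices that have one.
    func setTorch(on: Bool) {
        guard let device = AVCaptureDevice.default(for: .video),
              device.hasTorch else { return }
        do {
            try device.lockForConfiguration()
            device.torchMode = on ? .on : .off
            device.unlockForConfiguration()
        } catch {
            print("Torch unavailable: \(error)")
        }
    }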