

DobotVisionStudio
User Guide

Issue: V4.1.2

Date: 2022-06-08

Shenzhen Yuejiang Technology Co., Ltd.



Copyright © Shenzhen Yuejiang Technology Co., Ltd. 2022. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means
without the prior written consent of Yuejiang Technology Co., Ltd.

Disclaimer
To the maximum extent permitted by applicable law, the products described in this document (including their hardware, software, and firmware) are provided AS IS and may have flaws, errors, or faults. Yuejiang makes no warranties of any kind, express or implied, including but not limited to merchantability, satisfactory quality, fitness for a particular purpose, and non-infringement of third-party rights. In no event will Yuejiang be liable for any special, incidental, consequential, or indirect damages resulting from the use of our products and documents.
Before using our product, please thoroughly read and understand the contents of this document and the related technical documents published online, so that the robot is used only with a full understanding of the robot and the relevant knowledge. Please use this document under the technical guidance of professionals. Damages or losses may still occur during use even if this document and other related instructions are followed, and this document shall not be regarded as a guarantee covering all safety information.
The user is responsible for complying with the applicable laws and regulations of the relevant country, so that there is no significant danger in the use of the robot.

Shenzhen Yuejiang Technology Co., Ltd.


Address: Floor 9-10, Building 2, Chongwen Garden, Nanshan iPark, Liuxian Blvd,
Nanshan District, Shenzhen, Guangdong Province, China
Website: www.dobot.cc


Contents
Updates
Product Introduction
    Overview
    Characteristics
    Operating Environment
    DobotVisionStudio Installation
Software Interface
    Welcome Page
    Home Page
    Menu
        File
        Settings
        System
        Tools
        Help
    Control Toolbar
    Tool Box
        Tool Box Introduction
        Tool Application
    Result Display
        Module Result
        Result Display Area
    Flow Management
        Flow Operation
        Multi-Flow
    Camera Management
    Controller Management
    Global Variables
    Communication Management
        Device Management
        Receive Event
        Send Event
        Heartbeat Management
        Response Settings
    Global Trigger
    Global Script
        Overview
        Debug Global Script via Visual Studio
        Add Assembly
        Usage
    Operation Interface
        Overview
        Interface Configuration
        Export Program
    Data Queue
    Flow Time
    Dobot Panel
Vision Tools
    Acquisition
        Image Source
        Multi-image Acquisition
        Image Output
        Image Buffer
        Light Source
    Location
        Feature Match
        Greyscale Match
        Mark Location
        Circle Search
        Line Search
        Blob Analysis
        Caliper
        Edge Search
        Position Correction
        Rect Search
        Peak Search
        Edge Intersection
        Parallel Lines Search
        Quadrilateral Search
        Line Group Search
        Multi-line Search
        Blob Label Analysis
        Path Extraction
        Find Angle Bisector
        Find Median Line
        Calculate Parallel Lines
        Find Vertical Line
    Measurement
        L2C Measure
        L2L, C2C Measure
        P2C, P2L, P2P Measure
        Intensity Measure
        Pixel Count
        Edge Distance
        Histogram
    Image Generation
        Circle Fit and Line Fit
        Geometry Generation
    Recognition
        2D BCR Recognition
        BCR Recognition
        OCR Recognition
        DL Character Recognition
        DL Code Reading
        DL Character Location
        DL Single Character Recognition
    Deep Learning
        DL Image Segmentation
        DL Classification
        DL Object Detection
        DL Image Retrieval
        DL Anomaly Detection
        DL Instance Segmentation
    Calibration
        Camera Mapping
        CalibBoard Calibration
        N-Point Calibration
        Distortion Calibration
        Mapping Calibration
        N-Image Calibration
        Load Calibration
    Calculation
        Single Point Alignment
        Calculate Rotation
        Point Set Alignment
        Calibration Transformation
        Scale Transformation
        Line Alignment
        Variable Calculation
    Image Processing
        Image Combination
        Image Morphology
        Image Binarization
        Image Filter
        Image Enhancement
        Image Computing
        Distortion Correction
        Image Clarity
        Image Fixture
        Shade Correction
        Affine Transformation
        Ring Expansion
        Copy and Fill
        Frame Mean
        Image Normalization
        Image Correction
        Geometric Transformation
        Image Stitch
        Multiple Images Fusion
    Color Processing
        Color Extraction
        Color Measurement
        Color Transformation
        Color Recognition
    Defect Detection
        OCV
        Arc Edge Defect Detection
        Linear Edge Defect Detection
        Arc-Pair Defect Detection
        Line-Pair Defect Detection
        Edge Group Defect Detection
        Edge Pair Group Defect Detection
        Edge Model Defect Detection
        Edge Pair Model Defect Detection
        Defect Contrast
    Logic Tools
        Condition Detection
        Branch
        Branch String
        Save Text
        Logic
        Format
        String Comparison
        Script
        Group
        Point Set
        Time Consuming
        Data Set
    Communication
        Receive Data
        Send Data
        Camera IO Communication
        Protocol Parsing
        Protocol Assembly
    Dobot Magician Tools
        Moving to a Point
        Speed Ratio
        Home Calibration
        Suction Cup Switch
        Gripper Switch
        Laser Switch
        I/O Multiplexing
        I/O Output
        I/O Input
Cases
    Location for USB Hole
    Detection for Detective Metal
    Distance Detection
    Loop Function
    Script Function
    Medicine Bottle Detection
    Multi-Flow Communication
    Communication Trigger Flow
Dobot Magician Demo
    Robot Calibration
    Color Sorting
    Character Defect Detection
    Diameter Measurement
    Rectangle Template Match
    Circle Template Match

Updates

DobotVisionStudio 4.1.2:

⚫ Update contents

⚫ Add Dobot Panel

DobotVisionStudio 4.1.1:

⚫ Supplement contents

⚫ Fix some bugs

DobotVisionStudio 4.1.0:

⚫ Remove the M1 robot algorithm module

⚫ Add the N-point calibration module to obtain the Dobot arm coordinates directly

⚫ Upgrade the dongle (IMVS-VM-6100 or above is recommended)

⚫ Add functions to other algorithm modules to optimize the overall performance

DobotVisionStudio 1.4.0:

⚫ Improve the steps of the demos

DobotVisionStudio 1.3.2:

⚫ Fix some bugs

DobotVisionStudio 1.3.0:

⚫ Add the description of DobotStudio

⚫ Add the wait time tool

⚫ Add the speed ratio tool

⚫ Add the Dobot arm orientation tool for Dobot M1

DobotVisionStudio 1.2.0:

⚫ Original version


Product Introduction

Overview
DobotVisionStudio integrates a wide range of machine vision algorithm components. It is designed so that algorithms can be quickly combined to locate and measure objects, detect defects, and more, making it suitable for a variety of application scenarios.
DobotVisionStudio features a powerful library of visual analysis tools that can be used to build machine vision applications without programming. It meets the needs of vision applications such as positioning, measurement, detection, and identification, and offers rich functionality, stable performance, and a user-friendly interface.

Characteristics
 Easy to Use: With drag-and-drop components, you can build a vision application without
programming.
 User-Friendly Interface: Provides clear, simple, and visualized user interfaces.
 Flexible Display: Makes the most of the limited screen display space.
 Wide Compatibility: Supports multiple operating systems, including Windows 7/10
(32/64-bit).

Operating Environment
Make sure that the computer on which you install the client software meets at least the minimum
requirements.
NOTE
 We recommend adding the software to the allowlist of your antivirus software, or disabling
the antivirus software, to prevent it from being falsely identified as a virus.
 A dongle is required to run this software. Before using it, please install the dongle
driver and the relevant machine vision drivers.
Recommended
 Operating System: Microsoft Windows 7/10 (64-bit Chinese-English operating system)
 .NET Running Environment: .NET4.6.1 and above
 CPU: Intel Core i7-6700 3.4 GHz or above. i9-10900K or above is recommended for
using CPU related deep learning functions.
 Memory: 8 GB or above
 NIC: Intel i210 and above
 Graphics Card: 1 GB or above. 6 GB or above is recommended for using GPU related
deep learning functions.
 USB Interface: USB3.0 interface is required.
 Software-enabling Configuration: Dongle or authorization file specialized for the algorithm
platform
Minimum
 Operating System: Microsoft Windows 7/10 (64-bit Chinese-English operating system)
 .NET Running Environment: .NET4.6.1 and above
 CPU: Intel 3845 or above
 Memory: 4GB
 NIC: Gigabit Ethernet
 Graphics Card: 1 GB or above. 6 GB or above is recommended for using GPU-related deep
learning functions.
 USB Interface: USB3.0 interface is required.
 Software-enabling Configuration: Dongle or authorization file specialized for algorithm
platform

Refer to the list below for dongle models and their corresponding functions.
 IMVS-VM-1100 / VM-6100-S: Recognition only.
 IMVS-VM-2100 / VM-6200-S: Location and Defect Detection.
 IMVS-VM-4100 / VM-6400-S: Location, Calibration, and Contraposition.
 IMVS-VM-6100 / VM-6600-S: Location, Recognition, Calibration, Contraposition, Image
Processing, Color Processing, and Defect Detection (no Deep Learning).
 IMVS-VM-7100 / VM-6700-S: All functions (Location, Recognition, Deep Learning,
Calibration, Contraposition, Image Processing, Color Processing, and Defect Detection).

Refer to the list below for dongle types and the number of cameras supported. A dongle without a
suffix supports four cameras, a dongle with the SE suffix supports two cameras, and a dongle with
the PRO suffix has no limit on camera quantity.
 IMVS-VM-1100 / VM-6100-S: available without suffix and with the PRO suffix.
 IMVS-VM-2100 / VM-6200-S: available without suffix and with the PRO suffix.
 IMVS-VM-4100 / VM-6400-S: available without suffix and with the PRO suffix.
 IMVS-VM-6100 / VM-6600-S: available without suffix, with the SE suffix, and with the PRO suffix.
 IMVS-VM-7100 / VM-6700-S: available without suffix and with the PRO suffix.

NOTE
 The IMVS-VM-6100-EDU dongle model supports only two cameras and two flows,
and supports TCP communication only.

DobotVisionStudio Installation
The installation steps are shown below.
Double-click the installation package to install the software, as shown below, and click Next to proceed through the installer.


Software Interface

Welcome Page

Double click to start the software, and the welcome page will pop up.

 There are four modules to choose from: General Solution, Location & Measure, Defect
Detection, and Recognition. General Solution includes the latter three modules. You can
choose a module according to the type of solution you want to edit.
 Open Recent: Open a recently used solution.
 Do not ask again: Once you tick it, you will directly enter the main page after opening the
software.

Home Page
You can choose any of the modules to enter the home page of DobotVisionStudio. The home page
is shown as follows:


 Area 1: Menu Bar. It includes File, Settings, System, Tools, and Help.
 Area 2: Control Toolbar. It provides quick operations for the software, including file
saving, file open, camera management, controller management, etc.
 Area 3: Tool Box. It includes image collection, positioning, measurement, identification,
calibration, position, image processing, color processing, defect detection, logic tools,
communication, etc.
 Area 4: Flow Process. You can edit the flow here.
 Area 5: Image. Image is displayed here.
 Area 6: Results. You can view current results, history results, and help information.
 Area 7: Status. It displays flow time, tool time, and algorithm time.
 Area 8: Flow Toolbar. It provides quick operations for flows.
 Area 9: Overview. It provides an overview of the flow.

Menu
The menu bar is shown below, including File, Settings, System, Tools, and Help.

File

In the File menu, you can create new solutions, open solutions, open solution examples, save solutions
(or save them as a copy), import flow files in .prc format, and exit the software according to actual demands.


 New: Create a new solution. After clicking, you can choose whether to save the current
solution.
 Open: Open a solution that was created and saved earlier.
 Open Recent: Open a recently used solution.
 Open Example: Open a software example solution, mainly common vision solutions
that have already been built.
 Save: Save the current algorithm solution with the suffix .sol. You can set encryption
when you save the file, as shown in the following figure.
 Save As: Save the currently configured algorithm solution to a specified path. You can set
encryption when you save the file.
 Import Process: Import process files in .prc format into the solution.
 Exit: Exit the DobotVisionStudio software.

Settings

Authority Settings
By enabling encryption and setting the administrator password, you can enable administrator
permissions, and the administrator control option will appear in the upper right corner of the main
interface. With encryption enabled, you can also reset the administrator password. In the authority
interface, you can also enable permissions for technicians and operators and set the corresponding
passwords. Technicians can obtain the authority opened by an administrator, while operators can only
click the buttons of the front-end running interface.


The administrator can assign authority to technicians, as shown below. Ticking Open All Tools opens
the configuration permissions of all modules. You can also customize which permissions to open.

Software Settings


Start-up Settings
Auto Start After the function is enabled, the software will automatically start when the PC
starts.
Start Time Delay After the PC starts, the software starts after the set delay time.
Run Interface After the function is enabled, the software will go to the running interface.
Maximized when running interface starts separately After the function is enabled, the software will go to the running interface and the interface will be maximized.
File Auto Load After the function is enabled, the software will automatically load configured
files. You need to enter file path, password and start state.
File Path Path where the target scheme is stored
Password Enter a correct password if it has been set
Start State There are two states: continuous run and discontinuous run
Select Identity After you select the identity, you will open the running interface with the
configured identity every time. Different identities correspond to different
permissions
Exit Settings
Background Running Close the software and continue to run in the background
Exit and Stop After closing the software, exit the software and stop running

Solution Settings
If you want to use communications to switch solutions, you need to establish communications and
set related parameters as follows.


Solution Settings
Path It sets the saving path of solutions.
Password When a password has been set for the target solution, the correct password must be entered here
before the solution can be switched via communication.
String A solution switch is triggered only when the string sent over the communication channel matches
this string (see the sketch below).
Enable It refers to the communication switch control.
Callback Set The callback can only be opened here, and the callback of solutions is supported after
opening. To close the callback, you need to set it through the SDK.
Auto Save Set After enabling auto save, the software will check running parameters every five minutes. If
the parameters are updated, all data will be saved in the same directory of the solution files.

You can also set a password when saving solutions.
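
For example, if the solution-switch device in Communication Management is configured as a TCP connection, an external client only has to send the configured switch string. The following is a minimal sketch and not part of the software itself; the IP address, port, and switch string are placeholders that must match your own configuration.

    // Requires: using System.Net.Sockets; using System.Text;
    // Hypothetical external client that sends the configured solution-switch string.
    static void SendSwitchString(string ip, int port, string switchString)
    {
        using (var client = new TcpClient(ip, port))
        using (var stream = client.GetStream())
        {
            byte[] payload = Encoding.ASCII.GetBytes(switchString);
            stream.Write(payload, 0, payload.Length);   // the software compares this string with the configured one
        }
    }
    // Example with placeholder values: SendSwitchString("192.168.1.10", 8000, "SWITCH_SOLUTION_A");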


Running Strategy
The operating mode includes normal mode and diagnostic mode. If the software occupies too many
system resources, prompt dialogs will pop up in diagnostic mode.
The strategy mode includes default mode and custom mode. Custom thread allocation is recommended
when feature matching causes large fluctuations in process time or high CPU usage. Allocating CPU
cores based on the number of concurrent branches is recommended when the process involves heavy
computation and many branches; a single core is recommended when there are no branches. The CPU
configuration reads the number of CPU cores automatically. The parallel calculation of the model match
algorithm changes with the running strategy configuration. By default, parallel calculation of the
algorithm is turned off; it is only turned on in custom mode, as shown below.


System

You can find Log, Communication Management, Controller Management and Camera Set in
System menu.

 Log: View the log information generated while the software is running.


 Communication Management: see Communication Management for details.
 Controller Management: see Controller Management for details.
 Camera Set: See Camera Management for details.

Tools

Create One-Click Calibration Guide


The step-by-step guide helps you quickly complete the complex one-click calibration process. After
selecting the calibration mode, click Create and the corresponding guide appears on the interface.
Follow the guide to generate the calibration file, which is used to convert pixel coordinates into
physical coordinates and is usually used together with the calibration transformation module. Refer
to N-Point Calibration for details.
The one-click calibration guide includes three types: static calibration, dynamic calibration, and
mapping calibration. Static calibration is usually used to generate calibration files from a calibration
board. Dynamic calibration is usually used to generate N-point calibration files and N-image
calibration files. Mapping calibration is used to generate mapping calibration files.
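
For reference, the calibration file essentially stores a transform from pixel coordinates to physical coordinates. The sketch below is not the software's algorithm; it only illustrates the simplest case, an affine transform solved exactly from three non-collinear point pairs. An N-point calibration with more points would use a least-squares fit instead.

    // Sketch: pixel -> physical affine transform X = a*px + b*py + c (and likewise for Y),
    // solved from exactly three non-collinear calibration pairs with Cramer's rule.
    static double Det3(double[,] m) =>
          m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
        - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
        + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]);

    // px, py: pixel coordinates of the three points; w: the corresponding physical X (or Y) values.
    static double[] SolveAffineRow(double[] px, double[] py, double[] w)
    {
        var m = new double[3, 3];
        for (int i = 0; i < 3; i++) { m[i, 0] = px[i]; m[i, 1] = py[i]; m[i, 2] = 1.0; }
        double det = Det3(m);                       // non-zero only if the points are not collinear

        double Coefficient(int column)
        {
            var t = (double[,])m.Clone();           // replace one column with the physical values
            for (int i = 0; i < 3; i++) t[i, column] = w[i];
            return Det3(t) / det;
        }
        return new[] { Coefficient(0), Coefficient(1), Coefficient(2) };   // a, b, c
    }

Calling SolveAffineRow once with the physical X values and once with the physical Y values yields the six affine coefficients; applying them to any pixel point then gives its physical position.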


⚫ Static Calibration
The static calibration uses a stationary single camera and a calibration board to determine the
conversion relationship between the pixel coordinate system and the physical coordinate
system.
Steps:
1. Go to Tools and click Create One-Click Calibration Guide.
2. Click Static Calibration and click Create.
3. Click Config to enter communication management, add communication devices
according to actual demands, and enter Trigger Signal.

4. Click Config to add a global camera, and click Image Source Config to select the image
source from local images, camera images, or the SDK.


5. Click Calibration Board to calibrate.


6. View results.

NOTE
 Static calibration supports calibration with a calibration board only.

⚫ Dynamic Calibration
The dynamic calibration is used for N-point calibration and N-image calibration file generation.
The process of dynamic calibration creation is the same as that of static calibration.
Steps:
1. Click Dynamic Calibration and click Create.
2. Click Config to enter communication management, add communication devices according
to actual demands.
3. Enter Start Signal, Calibration Signal, and End Signal.

NOTE
 X/Y in calibration signal stands for physical coordinates, and R stands for angle.

4. Click Next Step, and click Config to add global camera, and click Image Source Config
to select image sources.


5. Select the calibration method. Dynamic calibration supports N-point calibration and N-
image calibration.
6. View results.

⚫ Mapping Calibration
Steps:
1. Click Mapping Calibration and click Create.
2. Click Config to enter communication management, add communication devices according
to actual demands.
3. Enter Main Camera Trigger Signal, Secondary Camera Trigger Signal, and Mapping
Signal according to actual demands.

4. Click Next Step, and click Config to add global camera, and select image sources for main
and secondary cameras.
5. Select the calibration method.
NOTE
 The default calibration method is mapping calibration.
6. View results.

Operating Environment Detection Tool


This tool is used to check the operating environment, the .NET runtime, and other required files. If any
components are missing, you can go to the drivers folder in the installation directory and reinstall the related components.
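
If you want to verify the .NET Framework version yourself, outside this tool, one common approach on Windows is to read the Release value of the .NET registry key; a value of 394254 or higher indicates .NET Framework 4.6.1 or later. A minimal sketch, assuming a Windows machine:

    // Requires: using System; using Microsoft.Win32;
    static bool HasDotNet461OrLater()
    {
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\";
        using (var ndpKey = RegistryKey
            .OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32)
            .OpenSubKey(subkey))
        {
            int release = (ndpKey?.GetValue("Release") as int?) ?? 0;
            return release >= 394254;   // documented minimum Release value for .NET Framework 4.6.1
        }
    }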

Calibration Board Generate Tool


This tool is used to customize the type of calibration board, the number of rows and columns of the
calibration board, CAD and other parameters, and to generate calibration board images. The image
path is in:\VM\VisionMaster4.1.0\Applications\Tools\Demo.

Custom Module Generate Tool


This tool is used to help users develop customized modules.
Steps:
1. Go to Tools and click Custom Module Generate Tool.
2. Select VM version. It is recommended to use the VM4.X.
3. Enter module name, and check image source and position correction in basic input
according to actual demands.

NOTE
 The module name supports English characters only.

4. Check module status and output image in basic output according to actual
demands.
5. Click Add in customized input and output to add contents.
⚫ Parameter name: The underlying name of the parameter; it supports English characters
only.
⚫ Display name: The displayed name of parameter.
⚫ Parameter type: It includes float, int and string.
⚫ Input/output: It sets input or output.

NOTE
 You can click delete, insert, export and import to execute related operations.

6. Click Customized Output to output results.


7. Click Next and check ROI type and shield area in basic parameters according to actual
demands.
8. Click Add to add customized running parameters.
⚫ Parameter type: It includes float, int, enumeration, string, bool, floatBetween and
intBetween.
⚫ Parameter name: The underlying name of the parameter; it supports English characters
only.
⚫ Display name: The displayed name of parameter.
⚫ Edit: You can set related contents of parameters. Regarding float and int, you can set
their max. value, min. value and default value. Regarding enumeration, you can set


displayed name and enumeration value. Regarding String, you can set max. length
and default value. Regarding bool, you can set true and false. Regarding floatBetween
and intBetween, you can set their parameter name, displayed name, min. value, max.
value and default value.
⚫ Edit Status: After finishing settings in edit, the edit status displays Yes. Otherwise, it
displays No.
NOTE
 Before creating XML and project files, you need to finish editing and make sure
that the edit status is Yes.

9. Click Edit in template settings interface to edit the template according to actual demands.
10. Click Create XML, Create C++Project, or Create C#Project to save files.
⚫ Create XML: It creates a file folder named after customized template name.
⚫ Create C++Project: It creates a file folder named after Proj_template name.
⚫ Create C#Project: It creates a file folder named after CsProj_template name.
NOTE
 It is recommended to put the xml and project files in the same directory.

11. Compile the C++ and C# project files, and create the corresponding dll.
12. Put dll, xml and png into the same folder, and put the folder into Module(sp)\x64.
NOTE
 Each vision module is composed of 10 to 11 files. Among them,
CalculatorModuleControl.dll corresponds to configuration functions.

13. Open the software again, and you can view the customized module in the
corresponding category.
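
As an illustration of steps 10 to 12, the deployed module folder might look like the hypothetical layout below. The folder and file names (MyModule) are placeholders; the actual files are the XML created by the tool, the DLL compiled from the generated project, and the module icon PNG.

    Module(sp)\x64\
        MyModule\
            MyModule.xml    (created by Create XML)
            MyModule.dll    (compiled from the generated C++ or C# project)
            MyModule.png    (module icon shown in the tool box)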

Help

You can find Language, Help, Version, More and Welcome Page in this menu.


• Click Language to switch language between Chinese and English.


• Click Help to view the user guide of the software.
• Click Version to view the software version related information.
• Click Welcome Page to display the welcome page.

Control Toolbar

 Save: Click to save the solution.


 Open: Click to open the solution.
 Undo: Click to undo the current operation.
 Redo: Click to cancel the undo operation. All operations that support undo and redo
increase memory consumption during normal operation (the operation data needs to be
cached). Creating, loading, and saving operations clear all caches generated by undo and
redo operations.
 Locking: Click to lock the interface. Adding, deleting, and moving modules or wires are
not supported once locking protection is enabled.
 Camera Set: Click to create global cameras. The names of the global cameras can also be
changed. See Camera Management for details.
 Controller Management: Click to add controllers. See Controller Management for details.
 Global Variable: Click to add global variables. You can set name, type and current value
of variables. See Global Variables for details.
 Communication Management: Click to add the device and set the communication protocol
and other parameters. TCP, UDP and serial communication are supported. See
Communication Management for details.
 Global Trigger: Click to add event trigger or string trigger. See Global Trigger for details.
 Global Script: Click to open the global script window. See Global Script for details.
 Once: Click to execute the flow once. This works for all flows.
 Continuous: Click to execute the flow continuously.


 Operation Interface: Click to display the operation window. You can customize the display
interface based on your needs. See Operation Interface for details.
 Dobot: See Dobot Panel for details.
 File Path: Displays the solution name. Click to open the path where the solution is saved.

NOTE
 The Once, Continuous and Stop operations in the toolbar work for all flows. To control
a single flow, click the run control button in the flow.

Tool Box
Tool Box Introduction

Tool box includes different vision tools, including acquisition, location, measurement, image
generation, recognition, deep learning, calibration, calculation, image processing, color processing,
defect detection, logical tools, communication, etc., as shown in Area 3 of Home Page.
 Acquisition includes Image Source, Multiple Image Acquisition, Image Output, Image
Buffer, and Light Source.
 Location, measurement, image generation, recognition, deep learning, calibration,
calculation, image processing, color processing, defect detection and logical tools are all
visual processing tools, which can be combined with corresponding algorithm modules.
 Communication supports sending/receiving data, and camera IO output. Only Dobot
visual controller is supported in IO communication.

Tool Application

Select the tool that you want to use and drag it to the process editing area. Connect the relevant tools
according to the logical requirements of the project, and double-click a tool to configure its parameters,
as shown in the figure below.


This section describes only general parameters. The parameter settings of specific tools will be
described in the follow-up sections.
Basic Parameters
You can set some basic parameters in Basic params, generally including the selection of image input
source and ROI settings, as shown in the figure below.

 Image input: Select the input source of the image processed by this tool from the drop-
down list according to your own needs.
 ROI area: There are two ways to create ROI area: draw and inherit. After you set it, the
corresponding tool will only process images in ROI region.
 Draw: You can customize the drawing area, including three shapes, full selection,
selecting rectangle ROI, and selecting circle ROI. In some modules you can also
customize the polygon ROI with up to 32 vertices.
 Inherit: You can inherit an ROI from a previous module, either as a circular region with
circle parameters or as a rectangular region with rectangle parameters.
- If you select rectangle ROI, you can set or view the center point coordinates, height,
width, and angle, as shown in the following figure.


You can also customize circle ROI, as shown below.

The position of arrow 1 can be rotated and scaled by changing the curvature, and is based
on the other edge vertex. Arrow 2 can be used to zoom in and out the inner and outer rings.
The position of arrow 3 is used to translate the ring. The position of arrow 4 is used to
change the radian of the ring.

- If you select circle ROI, you can set the center point coordinates, radius, starting angle,
angle range, number of calipers, and caliper width and height.


NOTE
 When you use an ROI, the search direction of all tools is defined by the ROI itself: the
ROI area is treated as an XY coordinate system, and the ROI arrow direction is the
positive direction of the X axis. Top to bottom means finding a line from top to bottom
along the Y axis, and left to right means finding a line from left to right along the
X axis, as shown in the figure below. A small sketch of this coordinate mapping follows
the parameter list below.

 Mask area: You can customize polygon mask area with up to 32 vertices. The image in
the mask area is not processed.
 Position correction: It can be used with position correction tools. See Position Correction
for specific usage.
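
As a small illustration of the ROI coordinate system described in the note above (this is not a DobotVisionStudio API, only the underlying geometry): a point in image coordinates is translated to the ROI center and then rotated by the negative ROI angle, so that the ROI arrow direction becomes the positive X axis.

    // Requires: using System;
    // Sketch: convert an image-space point (px, py) into the coordinate system of a
    // rotated rectangular ROI with center (cx, cy) and rotation angle angleDeg.
    static void ToRoiCoordinates(double px, double py,
                                 double cx, double cy, double angleDeg,
                                 out double xRoi, out double yRoi)
    {
        double rad = angleDeg * Math.PI / 180.0;
        double dx = px - cx, dy = py - cy;                   // translate to the ROI center
        xRoi =  dx * Math.Cos(rad) + dy * Math.Sin(rad);     // rotate by -angle: ROI arrow = +X
        yRoi = -dx * Math.Sin(rad) + dy * Math.Cos(rad);
    }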

Running Parameters
Running parameters cover the parameter settings of many tools. The running parameters of
different tools are described in their respective chapters.

Result Display
Result Display includes result judgment, image display, text judgment and the preceding item
display. Take circle search as an example.


 Check Result: Judges the output result of the algorithm, which affects the status of the
module. Take radius judgment as an example: if radius judgment is enabled, the radius
range of the target circle can be set. The default range is 0 to 99999.
 Image display: It displays the algorithm result in the image. This function is enabled by
default and can be closed, and you can set the colors for OK and NG. The OK color
determines the contour color of the fitted circle in the circle search result.
 Text display: You can set the text display content, OK color, NG color, size, transparency
and position coordinates.

Module Result
All the output results of the module are displayed in Module Result. The corresponding results can
be linked to in the format tool. See Module Result for details.

Result Display
Module Result

The module result displays all output results, as shown below.


You can click to view and bind data with corresponding global variable.

Different modules have different results. See each module for specific data.

Result Display Area


The result display area includes Current Result, History and Help.
Current Result
This area displays the operation result of the latest module.

History
This area displays the operation history of modules.

Help
This area explains the tool function and operation steps briefly.

Flow Management
Flow Operation

The software supports creating multiple sub-flows that do not interfere with each other. You can also
perform data interaction and logical design through Global Script, Data Queue, and Global Variables,
as shown below.


Right click the flow index, and you can execute the following operations:

Flow operation

Flow Export Export flow files with .prc format.

Copy the information of current flow and create a new one, and import the copied
Flow Copy
information into the new flow.

Delete Delete the selected flows

Run Interval The interval between two consecutive runs of a process. The unit of interval is ms.

Stop Process Executed NG When you execute flows continuously, if NG occurs, the process will stop.

Rename Rename the flow.

Multi-Flow

Multi-flow offers multiple functions, high efficiency, and asynchronous execution. Creating multiple
flows can meet your demands for different functions or sequences, and you can also combine multiple
flows via Data Queue or Global Variables.
Main Flow

In multi-flow process, you can click to display all sub flows you have created as shown below.

You can click one or more specified flows to run them, or directly check the number of runs and the
time a single run takes. You can right-click a single flow to delete it, set the continuous running time,
or rename the flow.
Click to enable or disable the flow.


In main-flow page, you can click to edit output parameters as shown below.

Click Display Settings to set display related parameters as shown below.


Camera Management

Click to add a global camera, and set its parameters according to actual demands after it is connected.
Multiple processes can share a global camera, and it is best to define an execution order between the
processes. If multiple processes are executed at the same time and share the same global camera,
they will be queued to acquire data.

When software is selected as the trigger source, you can click the corresponding button to trigger the
camera to acquire images once, or to let the camera acquire images continuously.

The specific parameters and settings are as follows.


Common Parameters
GenTL Camera After enabling it, you need to import a .cit format file and search for cameras that support the GenICam protocol.
Choose Camera You can select a camera in the same network segment, including GigE area/line scan cameras, U3V cameras, and third-party cameras.
Reconnect Time When the camera is disconnected due to network factors, the module will try to reconnect during this time.
Image Width, Image Height You can view the width and height of the connected camera.
Pixel Format Mono 8/10/12, RGB 8 and YUV formats are supported.
Frame Rate You can set the max. frame rate of camera here.
Actual Frame Rate It refers to the actual frame rate of camera.
Exposure Time It refers to the exposure time of camera. Exposure affects the brightness of images.


Gain You can increase gain to improve brightness without increasing exposure time.
Gamma The camera supports Gamma correction, which provides a non-linear output mapping mechanism. Gamma value between 0 and 1: when image brightness increases, the dark area becomes brighter. Gamma value between 1 and 4: when image brightness decreases, the dark area becomes darker.
Line Rate It refers to the line rate of line scan camera.

Trigger Settings
Software trigger means that DobotVisionStudio triggers the camera. Hardware trigger
Trigger Source
means using external hardware to trigger the camera.
Trigger Delay After the configured trigger delay time, the program starts to respond.
IO Control
IO Option It selects IO port.
IO Mode It includes input and strobe.
Smart Camera IO Control
IO Selector Select IO output port.
IO Source It includes off and software triggering. Off means that output is disabled.
IO Inverter After you enable it, the electrical signal level will be reversed.
Output Delay Set the delay time of output, and its unit is us.
Duration It refers to the duration of IO signals
Smart Camera Light Control
Lighting Enable After you enable it, the light source will be opened.
Lighting Mode It includes strobe and constant.
Lighting Duration It refers to the duration of the strobe.
Lighting Delay It refers to the delay time of light output.
Lighting Precharge It refers to the precharge time of light trigger.

Controller Management
You can set the light source and IO devices, as shown below.

Click to add a device. You can select the added device and right-click to rename or delete this
device.


Currently, eight brands are supported: DPS2, MV-AP1024-2T, MV-LEVD, VB2200, VC3000(Light), VC3000(IO), VC4000, and GPIO. For protocol, only COM is supported.
NOTE
 For specific parameters, refer to the actual one you got.

If DPS2, MV-AP1024-2T, or MV-LEVD is selected as the brand, you need to set communication parameters and light parameters.
Communication Parameters
Port Number It refers to the port number.
Baud Rate The common baud rate includes 4800, 9600, 115200, etc.
Data Bit It includes 6/7/8.
Parity Bit It includes odd/even/none.
Stop Bit A stop bit is a character in asynchronous communication that lets a receiver know that the byte being transmitted has ended.
Light Parameters
Channel Enabled After enabling it, you can turn on the corresponding light.
Channel Brightness It sets the light brightness. 0 stands for turning the light off. The larger the value, the brighter the light.

If VB2200 is selected as the brand, you need to set communication parameters, IO parameters, and light parameters.
Communication Parameters

Port Number It refers to the port number.


Baud Rate The common baud rate includes 4800, 9600, 115200, etc.

Data Bit It includes 6/7/8.

Parity Bit It includes odd/even/none.

Stop Bit A stop bit is a character in asynchronous communication that lets a receiver know that the byte being transmitted has ended.

IO Parameters

Output Type It includes output when NG and output when OK.

Send Interval It refers to the sending interval ranging from 0 ms to 100 ms.

IO Port You can select IO port according to actual demands.

Polling Enable After enabling it, you can set its polling interval.

Delay Time It sets the delay time of outputted signal.

Light Parameters

Channel 1 Enabled After enabling it, you can turn on the corresponding light.
Channel Brightness It sets the light brightness. 0 stands for turning the light off. The larger the value, the brighter the light.
Light State It includes on and off.

If VC3000 (Light) is selected as the brand, you need to set communication parameters and light parameters.

Communication Parameters

Port Number It refers to the port number.

Baud Rate The common baud rate includes 4800, 9600, 115200, etc.

Data Bit It includes 6/7/8.

Parity Bit It includes odd/even/none.

Stop Bit A stop bit is a character in asynchronous communication that lets a receiver know that the byte being transmitted has ended.
Light Parameters
Channel 1 Enabled After enabling it, you can turn on the corresponding light.
Channel Brightness It sets the light brightness. 0 stands for turning the light off. The larger the value, the brighter the light.
Light State It includes on and off.
Edge Type It includes rising edge type and falling edge type.
Duration It sets the duration of light state, and the unit is ms.

If VC3000(IO) is selected as the brand, you need to set communication parameters and IO parameters.

Communication Parameters
Port Number It refers to the port number.
Baud Rate The common baud rate includes 4800, 9600, 115200, etc.
Data Bit It includes 6/7/8.
Parity Bit It includes odd/even/none.
Stop Bit A stop bit is a character in asynchronous communication that lets a receiver know that the byte being transmitted has ended.

IO Parameters

Output Type It includes output when NG and output when OK.

Output Polarity It selects output polarity, including PNP and NPN.

Send Interval It refers to the sending interval ranging from 0 ms to 100 ms.

IO Port You can select IO port according to actual demands.

Polling Enable After enabling it, you can set its polling interval.

Delay Time It sets the delay time of outputted signal.

If VC4000 is selected as the brand, you need to set communication parameters, IO parameters, and light parameters.
Communication Parameters
Port Number It refers to the port number.
Baud Rate The common baud rate includes 4800, 9600, 115200, etc.
Data Bit It includes 6/7/8.
Parity Bit It includes odd/even/none.
Stop Bit A stop bit is a character in asynchronous communication that lets a receiver know that the byte being transmitted has ended.
IO Parameters
Output Type It includes output when NG and output when OK.
Output Polarity It selects output polarity, including PNP and NPN.
Send Interval It refers to the sending interval ranging from 0 ms to 100 ms.
IO Port You can select IO port according to actual demands.
Polling Enable After enabling it, you can set its polling interval.
Delay Time It sets the delay time of outputted signal.
Light Parameters
Channel 1 Enabled After enabling it, you can turn on the corresponding light.
Channel Brightness It sets the light brightness. 0 stands for turning the light off. The larger the value, the brighter the light.
Light State It includes on and off.
Edge Type It includes rising edge type and falling edge type.
Duration It sets the duration of light state, and the unit is ms.

If GPIO is selected as brand, you need to set IO parameters.


IO Parameters
Output Type It includes output when NG and output when OK.
Output Polarity It selects output polarity, including PNP and NPN.
Duration It sets the duration of signal output, and the unit is ms.
Polling Enable After enabling it, you can set its polling interval.
Delay Time It sets the delay time of outputted signal.

NOTE
 GPIO is the VC3000's motherboard IO.

Global Variables

Global variables are variables defined outside all functions. They can be called or modified by all flows in the solution. Variable names, types, and current values can be customized, and they are valid in the entire project file. You can click the corresponding icon in the control toolbar to open it. After enabling initialization, you can send a fixed-format string to set the initial value of a global variable (for example, for variable var0, send SetGlobalValue: var0=0 to set its value to 0). Global variables have an overwrite update mechanism, which means old data will be overwritten when new data comes in.
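As a hedged illustration, the sketch below shows how an external C# program might send this fixed-format initialization string over a TCP connection that has already been configured in Communication Management. The IP address, port, and variable name are placeholders, not values defined by the software.

// Minimal sketch: initialize the global variable var0 by sending the fixed-format string.
// The server IP, port, and variable name are assumptions and must match your configuration.
using System;
using System.Net.Sockets;
using System.Text;

class GlobalVariableInitDemo
{
    static void Main()
    {
        using (var client = new TcpClient("192.168.1.10", 8000))   // assumed IP/port of the TCP server
        using (var stream = client.GetStream())
        {
            byte[] payload = Encoding.ASCII.GetBytes("SetGlobalValue: var0=0");
            stream.Write(payload, 0, payload.Length);
        }
    }
}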

The main functions of the global variables are shown below:


• Click Add to add new global variables.
• Click Import/Export to import or export global variables in fixed-format files.
• Search: You can enter variables name to search global variables.

• allows you to adjust the position of global variables.


• Click Save to save the current global variables you added.


After adding variables, you need to set the relevant information of the variables.

In the result module, you can bind data with the corresponding global variables.

When global variables are bound with target parameters, multiple selections are supported. Global variables also support initializing multiple variables in a single communication, as shown in the following figure.


Communication Management
Communication is an important channel connecting the algorithm platform and external devices. The algorithm platform supports both reading and writing of external data. When communication is established, the software can send process results to external devices, and external devices can trigger the camera to acquire images or operate the software via strings.

Device Management

The device management allows you to add different communication protocols and set corresponding
parameters. TCP, UDP, serial port, Modbus, PLC, and other communication protocols are supported.
The specific configuration is as follows.


Click to add communication protocol.

TCP Communication
TCP communication includes TCP client and TCP server.

TCP Client
Target Port It refers to the port No. of the TCP server.
Target IP It refers to the IP address of the TCP server.


Data Upload After enabling, data will automatically upload.


Auto Reconnect After enabling, the software will automatically connect the server.
Receive End Character After enabling, the software will check if the terminator is received or not. If the terminator is not received in the process of receiving data, the software will continue to receive data from the buffer until the terminator is received.
Send Data It is used to test if the communication is established or not. Only strings are supported.
Receive Data It is used to test if the communication is established or not.
TCP Server
Local Port It refers to the port No. of the local TCP.
Local IP It refers to the IP address of the local TCP.
Data Upload After enabling, data will automatically upload.
Auto Reconnect After enabling, the software will automatically connect the server.
Receive End Character After enabling, the software will check if the terminator is received or not. If the terminator is not received in the process of receiving data, the software will continue to receive data from the buffer until the terminator is received.
Send Data It is used to test if the communication is established or not. Only strings are supported.
Receive Data It is used to test if the communication is established or not.
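To test the TCP client mode from the device side, a minimal external receiver can be used. The sketch below is an assumed C# console program (not part of DobotVisionStudio) that listens on a port and prints each uploaded result string, using a newline as the terminator; the port and terminator must match the values configured above.

// Minimal external TCP server sketch for receiving data uploaded by the software.
// Port 9000 and the '\n' terminator are assumptions taken from a typical configuration.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class TcpUploadReceiver
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();
        Console.WriteLine("Waiting for a connection...");
        using (TcpClient client = listener.AcceptTcpClient())
        using (NetworkStream stream = client.GetStream())
        {
            var buffer = new byte[1024];
            var message = new StringBuilder();
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                message.Append(Encoding.ASCII.GetString(buffer, 0, read));
                int end;
                while ((end = message.ToString().IndexOf('\n')) >= 0)   // terminator found
                {
                    Console.WriteLine("Received: " + message.ToString(0, end));
                    message.Remove(0, end + 1);
                }
            }
        }
        listener.Stop();
    }
}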

UDP Communication

UDP Communication
Local Port It refers to the port No. of the device.
Local IP It refers to the IP address of the device.
Target Port It refers to the port No. of the target device.
Target IP It refers to the IP address of the target device.
Data Upload After enabling, data will automatically upload.

Auto Reconnect After enabling, the software will automatically connect the server.
Receive End Character After enabling, the software will check if the terminator is received or not. If the terminator is not received in the process of receiving data, the software will continue to receive data from the buffer until the terminator is received.
Send Data It is used to test if the communication is established or not. Only strings are supported.
Receive Data It is used to test if the communication is established or not.

Serial Port Communication


Serial communication means that the serial port sends and receives bytes bit by bit. Although byte-by-byte serial communication is slow, the serial port can use one cable to send data while using another cable to receive data. The serial communication protocol defines the content of the data package, including the start bit, body data, check bit, and stop bit. Data can be received and sent normally only when both parties agree on a consistent data packet format.
Before serial communication, you need to make sure that the serial cable is connected. Then you
can check the port number in device manager. Some settings are shown as below.

Serial Port Communication


Port Number The serial port of the PC.
Baudrate The common baud rate includes 4800, 9600, 115200, etc.
Data Bit It includes 6/7/8.
Parity Bit It includes odd/even/none.
Stop Bit A stop bit is a character in asynchronous communication that lets a receiver know that the byte being transmitted has ended.
Read Interval You can set read interval here.
Data Upload After enabling, data will automatically upload.
Auto Reconnect After enabling, the software will automatically connect the server.

Receive End Character After enabling, the software will check if the terminator is received or not. If the terminator is not received in the process of receiving data, the software will continue to receive data from the buffer until the terminator is received.
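For reference, the sketch below shows how an external C# program might open a serial port whose settings mirror the table above. All values (port name, baud rate, data bits, parity, stop bits, terminator) are assumptions and must match the configuration on both ends.

// Minimal serial-port sketch using System.IO.Ports with example settings.
using System;
using System.IO.Ports;

class SerialDemo
{
    static void Main()
    {
        using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
        {
            port.NewLine = "\r\n";      // assumed terminator
            port.ReadTimeout = 1000;    // read timeout in ms
            port.Open();

            port.WriteLine("start");    // example string sent to the software
            try
            {
                string reply = port.ReadLine();
                Console.WriteLine("Received: " + reply);
            }
            catch (TimeoutException)
            {
                Console.WriteLine("No data received within the timeout.");
            }
        }
    }
}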

PLC Communication
When using PLC communication, you need to establish the corresponding TCP or serial communication in the PLC device. Then you can create the PLC communication in Devices to establish the connection. The PLC type supports Mitsubishi (Mitsu), KEYENCE (Keyence), Panasonic (Mewtocol), Omron (Omronl), etc.

KEYENCE PLC Communication Parameters


Data Type It includes 16-bit and 32-bit.
Soft Element Type Only D is supported.
Soft Element Address It is the register address.
Soft Component Size Its range is between 1 and 128.
Timeout It is the polling timeout.
Polling After enabling, the device will read information in a polling way.
Data Upload After enabling, data will automatically upload.
Mitsubishi PLC Communication Parameters
Frame Format It includes 3E, 3C-3 and 4C-5.
Data Code ASCII and binary.
Data Type It includes 16-bit and 32-bit.
Soft Component Type X/Y/M/D are supported.
Soft Component Address Its range is between 0 and 65535.
Soft Component Size Its range is between 1 and 128.


Timeout It is the polling timeout.


Polling After enabling, the device will read information in a polling way.
Data Upload After enabling, data will automatically upload.
Panasonic PLC Communication Parameters
Station No. Set it according to actual demands.
Data Type It includes 16-bit and 32-bit.
Soft Component Type Only D is supported.
Soft Component Address It is the register address.
Soft Component Size Its range is between 1 and 128.
Timeout It is the polling timeout.
Polling After enabling, the device will read information in a polling way.
Data Upload After enabling, data will automatically upload.
OMRON PLC Communication Parameters
Data Type It includes 16-bit and 32-bit.
Soft Component Type Only D is supported.
Soft Component Address It is the register address.
Soft Component Size Its range is between 1 and 128.
Timeout It is the polling timeout.
Polling After enabling, the device will read information in a polling way.
Data Upload After enabling, data will automatically upload.
Inovance PLC Communication Parameters
Station No. Set it according to actual demands.
Data Type It includes 16-bit and 32-bit.
Soft Element Type Only D is supported.
Soft Component Address It is the register address.
Soft Component Size Its range is between 1 and 128.
Timeout It is the polling timeout.
Polling After enabling, the device will read information in a polling way.
Data Upload After enabling, data will automatically upload.
Ethernet/IP CIP Communication Parameters
Slot No. Its range is between 0 and 256.
Data Type It includes 16-bit and 32-bit.
Communication Address Set it according to actual demands.
Polling After enabling, the device will read information in a polling way.
Timeout It is the polling timeout.
Data Upload After enabling, data will automatically upload.

Modbus Communication


Modbus is a master/slave protocol. One node is the master node, and the other nodes that use
Modbus protocol in communication are slave nodes. Currently the algorithm platform only supports
the master station.
To use Modbus communication, you need to establish a communication connection first. Then you
can create Modbus communication in Devices to establish connection, as shown in the following
figure.
Modbus supports TCP and serial ports. Different communication modes require different
communication parameters. For details about how to set the parameters, see TCP and Serial Port
communication.

Modbus Communication Parameters

Function Code It includes read/keep register, read/input register, and write multiple registers.

Main/Sub Mode Only Main is supported.

Protocol Type It includes RTU and ASCII.

Data Type It includes 16-bit and 32-bit.

Send Sequence It includes four types: "ABCD", "BADC", "DCBA", and "CDAB".

Device Address The sub device address.

Register Address Set it according to actual demands.

Register Number Set it according to actual demands.

Polling After enabling, the device will read information in a polling way.

Data Upload After enabling, data will automatically upload.
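The Send Sequence only matters for 32-bit data, where one value spans two 16-bit registers. The sketch below illustrates, with made-up register values, how an external program could reassemble the 32-bit value for each of the four orders. The mapping shown is the conventional reading of these labels and should be verified against your device; it is not SDK code.

// Illustration of the four 32-bit byte orders (ABCD, BADC, DCBA, CDAB) across two registers.
using System;

class ModbusByteOrderDemo
{
    // Combine four bytes, with the first argument becoming the most significant byte.
    static int Combine(byte a, byte b, byte c, byte d) => (a << 24) | (b << 16) | (c << 8) | d;

    static void Main()
    {
        // Two consecutive 16-bit registers with assumed contents 0x1234 and 0x5678.
        ushort reg0 = 0x1234, reg1 = 0x5678;
        byte hi0 = (byte)(reg0 >> 8), lo0 = (byte)(reg0 & 0xFF);
        byte hi1 = (byte)(reg1 >> 8), lo1 = (byte)(reg1 & 0xFF);

        Console.WriteLine("ABCD: 0x{0:X8}", Combine(hi0, lo0, hi1, lo1)); // 0x12345678
        Console.WriteLine("BADC: 0x{0:X8}", Combine(lo0, hi0, lo1, hi1)); // 0x34127856
        Console.WriteLine("DCBA: 0x{0:X8}", Combine(lo1, hi1, lo0, hi0)); // 0x78563412
        Console.WriteLine("CDAB: 0x{0:X8}", Combine(hi1, lo1, hi0, lo0)); // 0x56781234
    }
}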


Receive Event

In communication management, a receive event can analyze and configure the data received over communication to parse a piece of data into the required values. It can also reassemble the received data or customize the data and send it back to the communication device. See Communication Trigger Flow for specific cases. You can configure the corresponding parameters in this interface, as shown below.

Click in Receive Event List to add corresponding event.


Create Event
Process Method It includes text, byte match and script.
Event Type It includes protocol parse and protocol assembly. Protocol parse obtains the divided data from the received data according to the delimiter, and protocol assembly assembles data according to a given delimiter.

Currently, four types of events are available: text-protocol parse, text-protocol assembly, byte
match-protocol assembly, and script.
Text-Protocol Parse

Bind Device Select the communication devices.

Delimiter Delimiter is used to divide the received data

Character Length Compare After the function is enabled, the software will automatically compare the length of received data. If the length of the received characters is not equal to the configured length, the received data will not be configured to the output list and will not respond to the corresponding event trigger (the delimiter also represents a length).

Character Length Set the character length.

Output List Set and display the received data
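To make the parse rule concrete, the sketch below mimics what a text-protocol parse event does with an assumed received string "OK;12.5;33.0", the delimiter ";", and an optional character length check. It only illustrates the rule and is not the software's internal implementation.

// Illustration of delimiter-based parsing with a character length check.
using System;

class ProtocolParseDemo
{
    static void Main()
    {
        string received = "OK;12.5;33.0";   // assumed received data
        char delimiter = ';';
        int expectedLength = 12;            // assumed Character Length value

        if (received.Length != expectedLength)
        {
            Console.WriteLine("Length mismatch: the event is not triggered.");
            return;
        }

        string[] items = received.Split(delimiter);
        for (int i = 0; i < items.Length; i++)
        {
            Console.WriteLine("Output[{0}] = {1}", i, items[i]);   // OK, 12.5, 33.0
        }
    }
}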

Text-Protocol Assembly

Bind Device It selects the communication devices.

Delimiter Delimiter is used to divide the data to be sent

Character Length Compare After the function is enabled, the software will automatically compare the length of received data. If the length of the received characters is not equal to the configured length, the received data will not be configured to the output list and will not respond to the corresponding event trigger (the delimiter also represents a length).

Character Length Set the character length.

Parse List Set received data and compare with configured data.

Reply to Device After the function is enabled, once the data sent from the communication device is received and assembled, the assembled data is sent back to the communication device.

Delimiter Delimiter is used to separate the data replied to the device

Assembly List The data in the assembly list can be uploaded to upper, or replied to the device.

Byte Match-Protocol Assembly

Bind Device Select the communication devices.

Character Length Compare After the function is enabled, the software will automatically compare the length of received data. If the length of the received characters is not equal to the configured length, the received data will not be configured to the output list and will not respond to the corresponding event trigger (the delimiter also represents a length).

Character Length Set the character length.

ASCII to Hexadecimal After the function is enabled, the software will convert data into hexadecimal type.
Rule List It can convert received data into a data type according to a certain rule, and compare the data with the configured value.

Byte Start and End It sets the start and end position of byte.

Type It parses and converts the received data into this type of data in a certain order.

Sequence It parses and converts the received data into a specified type of data in this order.

Input Box Enter data here.

Reply to Device After the function is enabled, the assembled data is sent back to the communication device once the data sent from the communication device is received and assembled.

Delimiter Delimiter is used to separate the data replied to the device

Reply List Set data replied to the communication.

Script
Bind Device Select the communication devices.
Reply to Device After the function is enabled, the assembled data is sent back to the communication device once the data sent from the communication device is received and assembled.
Delimiter Delimiter is used to separate the data replied to the device
File Path Select configured script files.
Assembly List Display data in the script.

Send Event

In communication management, you can set the corresponding parameter types for the sending event,
and subscribe sending event and customize parameters via Send Data. See Communication Trigger
Flow for specific cases. You can configure corresponding parameters in this interface, as shown
below.


Click by Send Event List to add the sending event.

Create Event
Process Method It includes text and script.
Event Type It includes direct output and assembly output.

Currently, three types of events are available: text-direct output, text-assembly output, and script.
Text-Direct Output
Bind Device Select the communication devices
Delimiter Divides the received data by delimiter


Parameter List Sets sent data and type

Text-Assembly Output

Bind Device Select the communication devices

Parameter List Set sent data and type

Delimiter Divide the received data by delimiter

Assembly List Assemble sent data

Script

Bind Device Select the communication devices

Path Select configured script files

Assembly List Display data in the script

Heartbeat Management

You can configure the communication heartbeat in this page, as shown below.

Heartbeat Management Parameters


Device Selects the corresponding communication device.
Heartbeat Type It includes single data and multiple data. Single data supports sending a single piece of data continuously, and multiple data supports sending two pieces of data in a loop.
Send Content Set the contents to be sent (PLC and Modbus support sending integer data only).
Time Interval It refers to the time interval between two consecutive transmissions, and the unit is ms.
On/Off After being enabled, the communication heartbeat is turned on.
Operate You can click to delete the item.

Response Settings

You can set parameters of loading solution, flow control and camera response in Response panel.
• Load solution: After the function is enabled, the software will trigger specific contents if
the solution is loaded successfully.
• Flow control: After the function is enabled, the software will trigger specific contents if the
mode of flow is changed (busy or idle).
• Camera response: After the function is enabled, the software will trigger specific contents
if the mode of camera is changed (disconnected or connected).

The parameters that need to be configured for various functions are different. All communication
devices need to be set separately. You can select the communication mode that has been configured
in Device. For other parameter descriptions, refer to the table below.

Load Solution

Trigger Character The characters sent by the communication after loading the solution, for example A (PLC and Modbus support sending integer data only).

Flow Control
Split Symbol Select the delimiter between data and flow No.
Trigger when Free Set the contents sent by communication when the flow is free, for example A (PLC and Modbus support sending integer data only). The content is {configured strings}{split symbol}{flow ID}.
Trigger when Busy Set the contents sent by communication when the flow is busy, for example A (PLC and Modbus support sending integer data only). The content is {configured strings}{split symbol}{flow ID}.


Camera Response Configuration

Split Symbol Select the delimiter between data and camera No.

Trigger when Connecting Set the contents sent by communication when the camera connects, for example A (PLC and Modbus support sending integer data only). The content is {configured strings}{split symbol}{camera ID}.
Trigger when Disconnecting Set the contents sent by communication when the camera disconnects, for example A (PLC and Modbus support sending integer data only). The content is {configured strings}{split symbol}{camera ID}.
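On the receiving side, an external program can recover the ID from such a message by splitting on the configured split symbol. The sketch below is an assumed example in which the configured string is "Busy" and the split symbol is "_"; both values are placeholders.

// Illustration of parsing a status message of the form {configured strings}{split symbol}{ID}.
using System;

class StatusMessageDemo
{
    static void Main()
    {
        string message = "Busy_2";      // assumed message received from the software
        char splitSymbol = '_';

        int pos = message.LastIndexOf(splitSymbol);
        if (pos < 0)
        {
            Console.WriteLine("No split symbol found: " + message);
            return;
        }

        string status = message.Substring(0, pos);        // "Busy"
        int id = int.Parse(message.Substring(pos + 1));   // 2 (flow or camera ID)
        Console.WriteLine("Status = {0}, ID = {1}", status, id);
    }
}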

Global Trigger
The global trigger includes event trigger and string trigger. After receiving trigger signal, the
software can switch solutions, execute processes and modules.
Event Trigger
Event trigger can respond to commands like executing process, module, and module action
according to the configured event after meeting the configured conditions, thus achieving accurate
control of the operation. For the configuration of event trigger, see Receive Event. Execute process
means triggering the process to run, and execute module means separately executing the
subscription module. At present, execute module action only supports clearing the calibrated point
of N-point calibration module. For specific cases, see Communication Trigger Flow. The following
figure shows how to configure event trigger.


Click to configure event parameters. The Target Output is the running parameters of the
binding module, and the Enter Settings is the input parameters of the binding module, as shown in
the following figure.

Trigger Settings
Trigger Event It is used to trigger a process or module action by subscribing to the corresponding Receive Event.
Trigger Command Type It includes switching solution, executing process, executing module, and executing module action.
Switch solution: Switch the solution after receiving the trigger once.
Execute process: Execute a single process after receiving the trigger once.
Execute module: Execute a single module after receiving the trigger once.
Execute module action: Execute one action in the module; only N-point calibration is supported.
Trigger Settings Execute module and execute module action need to subscribe to the modules to be executed, and execute process needs to subscribe to the ID of the process.
Trigger Action You need to configure this parameter only when the trigger command type is execute module action. Currently, only N-point calibration is supported.

String Trigger
By setting string trigger, you can execute process, module and module actions. Execute process
means triggering the process to run, and execute module means separately executing the
subscription module. At present, execute module action only supports clearing the calibrated point
of N-point calibration module. The following figure shows how to configure string trigger.


Trigger Settings
Trigger String It is used to trigger process or module action by setting characters.
Match Mode It includes exact match, partial match, and mismatch.
Exact match: Only when the software receives the exact trigger string will it trigger actions.
Partial match: If the sent strings contain the trigger string, it will trigger actions.
Mismatch: Any data can trigger the action if mismatch is selected as the match mode.
Trigger Event It is used to trigger process or module action.
Trigger Command Type It includes switching solution, executing process, executing module, and executing module action.
Switch solution: Switch the solution after receiving the trigger once.
Execute process: Execute a single process after receiving the trigger once.
Execute module: Execute a single module after receiving the trigger once.
Execute module action: Execute one action in the module; only N-point calibration is supported.
Trigger Settings Execute module and execute module action need to subscribe to the modules to be executed, and execute process needs to subscribe to the process's ID.
Trigger Action Only execute module action needs this parameter to be configured. Currently, only N-point calibration is supported.

Global Script
Overview


The global script is useful for controlling the running sequence of multiple flows, setting module parameters dynamically, and triggering communication. It supports the C# language and calls the C# API of the DobotVisionStudio SDK. It can perform logical control of multiple flows, modify variable parameters, receive communication data, and so on.

Click to open the global script window, and click to set password for it.

Global script

Import saved .cs files

Export generated .cs files

Open a demo

Open the directory of the global script

Add reference assembly

Save the script program

Set password for encrypting the global script

Init() Initialization function. This function is only executed once when the editing is successful.
Process() This function responds to the Run operation.
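A minimal global script therefore has the structure sketched below. The method signatures are assumptions based on the table above; the bodies are placeholders for your own logic.

// Structural sketch of a global script (a sketch, not a shipped template).
public void Init()
{
    // Runs once after the script is edited successfully:
    // one-time setup such as initializing counters or state used by Process().
}

public void Process()
{
    // Runs on every Run operation:
    // flow control, parameter updates, communication handling, and so on.
}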


Debug Global Script via Visual Studio

1. Click to open the directory of the global script, and open GlobalUserScript.sln with Visual Studio. Set a breakpoint in the program and build the solution.

2. Add the global script to the process and click Attach.

3. Click Once in DobotVisionStudio to check whether execution reaches the breakpoint.


The result is as follows after you click Once.

If it hits the breakpoint, the global script can be debugged in Visual Studio.

Add Assembly

Click to open the reference assembly window, as shown below. You can click Add to add other assemblies according to actual demands.

NOTE
 Only C# assembly can be added.


Click Add in the upper right corner to dynamically add assemblies as required. Only C# assemblies are supported. Find the .dll you want to add in the path of the third-party assemblies. You can call it in the global script after adding it.

Usage

The global script supports calling the C# version of the algorithm platform SDK. For specific
interface function, refer to DobotVisionStudio Algorithm Platform SDK User Manual. In addition,
the global script provides the data interface of global variable module, continuous running time
setting interface, debug interface, etc.
Global variable related interface:
Description Get int type variable value of global variable.
Function Int GetGlobalVariableIntValue(string paramName, ref int paramValue)
/ Parameter Name Data Type Description
Input paramName string variable name
Output paramValue Int variable value
Returned Value Success: 0; Failure: error code excluding 0

Description Get float type variable value of global variable.


Function Int GetGlobalVariableFloatValue(string paramName, ref float paramValue)
/ Parameter Name Data Type Description
Input paramName string variable name


Output paramValue float variable value


Returned Value Success: 0; Failure: error code excluding 0

Description Get string type variable value of global variable.


Function Int GetGlobalVariableStringValue (string paramName, ref string paramValue)
/ Parameter Name Data Type Description
Input paramName string variable name
Output paramValue string variable value
Returned Value Success: 0; Failure: error code excluding 0

Description Set int type variable value of global variable.


Function Int SetGlobalVariableIntValue (string paramName, int paramValue)
/ Parameter Name Data Type Description
Input paramName string variable name
Input paramValue int variable value
Returned Value Success: 0; Failure: error code excluding 0

Description Set float type variable value of global variable.


Function Int SetGlobalVariableFloatValue (string paramName, float paramValue)
/ Parameter Name Data Type Description
Input paramName string variable name
Input paramValue float variable value
Returned Value Success: 0; Failure: error code excluding 0

Description Set string type variable value of global variable.


Function Int SetGlobalVariableStringValue (string paramName, string paramValue)
/ Parameter Name Data Type Description
Input paramName string variable name
Input paramValue string variable value
Returned Value Success: 0; Failure: error code excluding 0

Global script continuous running time setting interface:


Description Get global script continuous running interval.

Function Uint GetScriptContinusExecuteInterval ()


Returned Value Success: interval value; Failure: -1

Description Set global script continuous running interval.

Function void SetScriptContinusExecuteInterval (uint nMilliSecond)


/ Parameter Name Data Type Description
Input nMilliSecond uint interval, unit: ms
Returned Value None


Communication related interface:

Description Initialize global communication.

Function bool StartGlobalCommunicate ()

Returned Value Success: true; Failure: false

Description Register communication receive event.

Function Void RegesiterReceiveCommunicateDataEvent ()

Returned Value None

Description Cancel communication receive event.

Function Void UnRegesiterReceiveCommunicateDataEvent ()

Returned Value None

Description Communication data receive event.

Function void UserGlobalMethods_OnReceiveCommunicateDataEvent(ReceiveDataInfo dataInfo)

/ Parameter Name Data Type Description

Output dataInfo ReceiveDataInfo CommunicateType

int DeviceIndex

int DeviceAddressIndex

byte[] DeviceDat

Returned Value None

Debug interface:
Description Print information into DebugView.

Function void ConsoleWrite(string content)


/ Parameter Name Data Type Description
Input Content string Print content
Returned Value None
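Putting several of these interfaces together, a Process() body might read a global variable, update it, adjust the continuous running interval, and print a debug line. The sketch below is an assumed example; the variable name var0 and the 500 ms interval are placeholders, and the calls follow the signatures documented above.

// Sketch of a Process() body that combines the documented interfaces.
public void Process()
{
    int value = 0;

    // The interfaces return 0 on success.
    if (GetGlobalVariableIntValue("var0", ref value) != 0)
    {
        ConsoleWrite("Failed to read var0");   // printed to DebugView
        return;
    }

    SetGlobalVariableIntValue("var0", value + 1);   // update the variable
    SetScriptContinusExecuteInterval(500);          // continuous running interval, in ms

    ConsoleWrite("var0 is now " + (value + 1));
}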

Operation Interface
Overview

The operation interface is designed for operators' day-to-day operation and use. You can customize the operation interface according to your needs to realize multiple functions like operation control, display, parameter settings, and exporting programs. The exported programs are divided into exe and vmCodeProject.


Interface Configuration

Click to open the front running interface, then enter the editing interface, as shown below.

Tool bar

Click it to open saved ones.

Save the operation interface you edited.

It adjusts the layer order of the selected tool in the editing area: top layer, up one layer, bottom layer, and down one layer.

It is used to align multiple function tools in the editing area. Press Ctrl and select multiple function tools to align them.

Cancel and undo cancellation.

Delete function tools.

Function Tools

Icon Function Main Parameters


It is used to link image information. Layout: sets dimension and margin. Data Linking: sets parameters. Auxiliary Line: displays effects and results.
It is used to link multiple images' information. Layout: sets dimension and margin. Data Linking: sets parameters. Auxiliary Line: displays effects and results.

It is used to control the operation of all flows and a single flow.
It is used to display the operation status of flows.
It is used to display the operation result of flows: OK or NG.

It is used to customize text and link with output data. Format String: it is recommended to link data and display text. Data Type: data type includes string, int and float. Data Source: it refers to the data source of the displayed text. Display Text: after setting data linking, the displayed text will update automatically. Font: it customizes font and size.
It is used to output data in a table when feature match meets multiple targets or Blob creates data. Row and Column: it defines rows and columns in the table. Column Name: it defines the column name. Column Width: it defines the column width. Data Type: data type includes string, int and float.
Click it to set parameters for different modules. Data Source: it includes parameter configuration and parameter reset. Parameter configuration can configure the function module parameters in the main process, while parameter reset only supports variable reset in variable calculation. Data Source: parameter reset can only link variable calculation parameter reset; parameter configuration can configure all module parameters, global variables, camera management, light source management and communication management.

It is used to enable or disable the switch. Data Source: it can link controllers with the switch.
It is used to adjust operation parameters. Data Source: it needs to link with int or float type operation parameters.
It is used to adjust values in variable calculation. Data Source: it needs to link with var0.
It is used to adjust operation parameters and global variables. Data Source: it needs to link with STRING type operation parameters.
It is used to display the status of configured modules. Data Source: it is used to link with the traffic light in the process.

It is used to add local images.

It is used to create the group box.

It is used to edit child interface.

It is used to create new tab. Press Alt and you can drag other controllers into tab.

Edit Area

Click it to zoom in the image.

Click it to zoom out the image.

Click it to let the image restore to the original status.

Click it to have a full screen.


Display the coordinate information of the cursor location on the image.

Display the R/G/B information of the cursor location on the image.

NOTE

 You can click to top the operation interface.

 You can click to lock the operation interface.

Export Program

Exporting program can reduce the occupation of resources, and it supports secondary development.
You can click Export to export, as shown below.

You need to enter name, select icon, type (exe, vmCodeProject, and exe+ vmCodeProject) and
storage path before exporting.


The specific information of program for exporting exe and vmCodeProject are as follows:

Public_Release files include main program files, as shown below.

Data Queue

The queue is a first-in-first-out linear table. Its end that allows insertion is called the tail of the queue,
and the other end that allows deletion is called the head.

Click in DobotVisionStudio to enter the interface of multi-flow, and drag to set up one
or multiple data queues.

Data queue
No. Queue index
Data Type It includes int, float and string.
Queue Name It refers to the name of queues.
Queue Cache Row Number It refers to the max. quantity of queues.

In the process, you can use the function of sending data to send data to the data queue. The data can
be cached in the queue when not taken out.


When multiple processes send data to the data queue for comprehensive processing, to ensure that the order of data combination is not disrupted, the data cannot be taken out if any of the queues is empty.

Flow Time
The software supports recording time consumed in tool operation and algorithm operation. You can
go to the home page of the software, and click as shown below.

Click the position in the figure above to open Time Consuming View, where you can view the tool
and algorithm time of each module as well as the entire process.

Dobot Panel
The Dobot panel shows Magician settings by default, as shown in the figure below.


You can click Magician in the upper left to select CR or MG400/M1Pro, as shown in the figures
below.


Vision Tools

The software supports multiple vision tools. You can directly drag the tools needed from the tool
box.
The vision tools have different categories, including acquisition, location, measurement, image
generation, recognition, deep learning, calibration, calculation, image processing, color processing,
defect detection, logical tools and communications.

Acquisition
You can set image sources and save images through acquisition. Image sources can be obtained from
loading local images and linking to camera for images.

Image Source

Drag the Image Source module to the process editing area. The image can be loaded from local or
obtained from camera.
When selecting SDK for the image source, you can only set the image data by calling the SDK
interface as shown below.

When you select Local Image in Image Source, Current Results changes to General Parameter in the result display area. You can click the button on the upper right to load an image, click to load an image folder, and click to delete all images. At the same time, you can double-click the module to set parameters, which mainly include pixel format, fetch interval, solution-saved images and trigger settings.

Basic Parameters
Pixel Format The pixel format can be set to Mono8 or RGB24.
Fetch Interval You can set the time interval between two image fetches and the unit is ms.
Solution-saved Image You can set whether to save the local images when saving solutions.
Auto Switch Switch to the next image automatically after enabling auto switch.
Character Trigger Filter 1. The external communication can control the function module after character
trigger filter being turned on.
2. Input character: select the source of input character.

3. Trigger character:

Any character transmitted can trigger the process when no character is set.

After the character is set, only the character transmitted in accordance with the
set character can trigger the process.

Before selecting Camera as the image source, you need to click in Toolbar to create a global
camera.
Image Source Parameters

Connected Camera The global camera created before connection


Exposure Control When this function is used in conjunction with a script, you can use some logics to
control the exposure value as needed; in addition to binding the script output, it can
also bind other module data output, but only the float type data.
Gain Control Refer to Exposure Control
Output Mono8 After being enabled, it can output a gray image while outputting the color image.

NOTICE

⚫ It is necessary to set general parameters of the camera when the preview is stopped,
and it is recommended to adjust the parameters on the MVS client before
synchronizing to the VisionMaster client.
⚫ Only the camera supporting multicast can perform real-time streaming.

Multi-image Acquisition

Drag this module to the process editing area. You can get local images data or multiple images
with different angles and brightness captured by camera and light source after double-clicking
corresponding configured parameters.


Before selecting Camera in the image source, you can click in the toolbar to create global
camera. Refer to Camera Management for details.

Input Configuration
Image Source It can be set as local image or camera image.
Associated Camera If the Image Source is set as camera, the corresponding global camera can be associated.
Fetch Intervals You can set the time interval between two image fetches and the unit is ms.
Fetch Quantity If the Image Source is set as local image, the fetch quantity can be set here from 3 to 8.
Image Path You can set the path of image.
Image Configuration
Camera Exposure Improve the brightness by increasing exposure.
Camera Gain Without increasing the exposure value, improve the brightness by increasing gain.
Light Source Device The associated corresponding light source device can be set.
Trigger Time (ms) Trigger time of light source can be set.
Light Source Channel The corresponding light source channel can be selected.
Light Source Brightness The corresponding light source brightness of the corresponding channel can be
controlled.
Distribution Angle As shown in the figure (Refers to the distribution of the multiple cameras.)


Illumination Angle As shown in the figure (Refers to the erection of the camera.)

NOTICE

⚫ Set the common parameters of the camera when stopping the preview. It is
recommended to adjust the parameters on the MVS client before synchronizing
them to the VisionMaster client.
⚫ Only cameras that support multicast can obtain stream in real time

Image Output

Drag the Image Output module to the process editing area. Double click and configure the
corresponding parameters. You can run the process and save the global camera images, local images
or the images processed by the image processing tool. The specific parameters are as shown below.


Output Image

Input Source Select the saved image source, mainly including images from image source in the
solution or images processed by other modules.
Save Image Enable It is off by default; in this case, the output image module only outputs the image. After enabling it, you can configure the specific parameters of the module and also store images.
Save Trigger The trigger variable generally binds with the condition testing results and stores the images according to the storage conditions.
Debug Save Save the binary data.
Generate Directory After enabling, you can create a folder according to the month and date, and save
the images in the created folder.
Synchronous Storage After enabling, you can store images in real time.
Save Render Image After enabling, you can save the rendered images in the rendering path.
Render Image Path The custom path for storing the rendered images.
Render Image Naming You can set the name for the render image.
Render Image Cache It is the max. image quantity that can be stored in the cache.
Save Original Image After enabling, you can save the original image in the original image path.
Original Image Path The custom path for storing the original images.
Original Image Naming You can set the name for the original image.
Original Image Cache It is the max. image quantity that can be stored in the cache.
Number of Images Buffer The maximum number of images in the buffer queue when storing images
asynchronously.
Save Mode You can choose to cover the previous image or stop saving the image when the
maximum storage quantity is reached or the disk space is insufficient.
Disk Freespace Set the remaining space of the target disk. If the remaining disk space reaches the set value, it will be stored according to the mode set by the storage mode.
Storage Unit Optional MB or GB.
Max. Storage Days Set the time of automatically deleting the storage images.
Save Format BMP and JPEG
File Naming You can customize the prefix or set module data as the prefix before subscription,
and the serial number of date as the suffix, such as IMG-1. The naming format
will change with the module status when adopting trigger saving image, such as
IMG-OK-1.
Pixel Format Optional Mono8 or RGB24
Image Magnification Type Original Image Size
Interface Size Images and fonts are stored according to the size of
the interface.
Custom Magnification When saving the image, the line width magnification
refers to the line width of the positioning box, and the
font width refers to the magnification of the font.
Rendering Parameters
Text Setting Content You can choose to store the text information rendered on the image in the hard
disk.
Color You can change the color of the rendering information.
Word Size You can change the word size of the rendering information.
Position X/Y You can change the location of horizontal and vertical coordinates of the
rendering information.
Previous Storage Settings When the previous item storage enabled, the output result of the previous module
can be stored, such as outputting the image containing the circle search result and
other text information.

Image Buffer

Image buffer can be used for debugging functions of solutions. When some sample images are
misjudged, the image buffer can be used to cache the image.
When this function is enabled, the process can cache one image at a time, and a maximum of 15
images can be cached. The new image cache will overwrite the previous image. The data source of
the subsequent processing module can be bound to any one of the 15 cached images, which facilitates traceability during program debugging.
In Image Buffer, 0 means disable, 1 means enable, and 1 is selected by default when there is no
binding data. The specific usage is shown as below.


Light Source

Refer to Controller Management for light source creation. After the light source is successfully
created, you can directly call the light source in the module of process.
⚫ Set Light Source to be Constant: Enable the light source, adjust the brightness and put the
light source module in front of the light source module.
⚫ Set Light Source to be Strobe: Put one light source module in front of and behind the light
source respectively. Use format module to output OK or NG. Enable the former light source,
and disable the latter light source.


Channel Parameters

Output Type It is optional to output when OK and when NG.

Trigger Character It refers to whether the subscription module status triggers the light source. OK or
NG can be input manually according to the output type.

Trigger Time Duration of constant light source after trigger.

NOTICE

After setting the triggering time for the light source module, do not trigger the second
time within the set time after the first trigger, otherwise the light source will be turned off
in advance.

Location
The positioning tools mainly include 20 tools in the figure below, and are mainly used to locate or
detect some features in the image.


Feature Match

Overview
Feature matching is divided into fast match and high-precision match. This tool uses the edge
features of the image as the template, confirms the search space in accordance with the preset
parameters, and searches targets that are similar to the template among images. This tool can be
used for locating, calculating and verifying. Double-click feature matching module to set parameters.
These parameters include basic parameters, feature template, operation parameters, result display,
etc. For basic parameters and result display, you can refer to Tool Application. This section
introduces feature template and operation parameters.


High-precision match has higher precision and more refined setting features, but it consumes longer
time than fast match, as shown below.

Algorithm Principle
Template matching refers to the method of searching for a corresponding pattern in an image. In a word, the template is a known small image, and template matching means searching for this target in a larger image, as shown below.
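As a rough, generic illustration of this idea (not the edge-feature algorithm the Feature Match tool actually uses), the sketch below slides a small template over a grayscale image and scores every position by the sum of absolute differences; the lowest score marks the best match. The image arrays contain assumed gray values only.

// Naive template-matching sketch using the sum of absolute differences (SAD).
// The real tool matches edge features with angle and scale search, which is far more robust.
using System;

class TemplateMatchSketch
{
    static void Main()
    {
        byte[,] image = new byte[100, 100];   // assumed grayscale "big image"
        byte[,] templ = new byte[10, 10];     // assumed known small template

        long bestScore = long.MaxValue;
        int bestX = 0, bestY = 0;

        // Slide the template over every valid position in the image.
        for (int y = 0; y <= image.GetLength(0) - templ.GetLength(0); y++)
        {
            for (int x = 0; x <= image.GetLength(1) - templ.GetLength(1); x++)
            {
                long score = 0;
                for (int ty = 0; ty < templ.GetLength(0); ty++)
                    for (int tx = 0; tx < templ.GetLength(1); tx++)
                        score += Math.Abs(image[y + ty, x + tx] - templ[ty, tx]);

                if (score < bestScore)   // a lower SAD means a closer match
                {
                    bestScore = score;
                    bestX = x;
                    bestY = y;
                }
            }
        }

        Console.WriteLine("Best match at ({0}, {1}), score {2}", bestX, bestY, bestScore);
    }
}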

Fast Match
1. Feature Template
The feature template is used to extract image features. For initial use, you need to edit its template.
Select the template area that needs to be edited, and click the training model after setting parameters,
as shown below. You can enable Template Image Saving to save the template image when saving
the template.


The shortcuts shown in area 4 mean, from left to right: moving the image, creating a fan circle mask, creating a rect mask, creating a polygon mask, selecting the model match center, building the model, erasing contour points, clearing all modeling areas, undo, and back.

Template Match
• Match Point: it is used to create the location reference. Click Select Model Match Center
and set the match point yourself.
• Scale Mode: if auto mode meets your demands, no adjustment is needed; otherwise, switch to
manual mode.
• Roughness Scale: the higher the roughness scale, the coarser the extracted features and the
sparser the corresponding edge points, which speeds up feature matching. The range is 1 to 20,
and a value of 1 is the finest. After adjustment, the number of contour points changes, as shown
below.

• Threshold Mode: if auto mode meets your demands, no adjustment is needed; otherwise, switch
to manual mode.
• Contrast Threshold: the contrast value, related to the gray values of the feature points and
the background. The larger the value, the more feature points are eliminated. The range is
1 to 255.
2. Running Parameters


These parameters define the search space for feature matching; only targets within the specified
search space can be found, as shown below.
(1) Basic Parameters
• All Search Mode: when enabled, all templates are matched and the optimal results are output.
• Min. Match Score: the match score indicates the similarity between the feature template and a
target in the searched image, and is also called the similarity threshold. A target can be found
only when its similarity reaches this threshold. The maximum value is 1, which means a perfect
match; the default is 0.5.
• Max. Match Number: the maximum number of targets that can be found. The default is 1 and the
range is 1 to 200.
• Match Polarity: polarity refers to the color transition from the feature image to the background.
If the edge polarity of a target does not match that of the feature template but you still want
the target to be found, set match polarity to ignored; otherwise, set it to considered, which
reduces search time.
• Angle Range: the allowed angle change between the target and the created template. Set it
accordingly if you want to find rotated targets. The default range is -180° to 180°.
• Scale Range: the allowed scale change between the target and the created template. Set it
accordingly if you want to find targets with scale changes. The default range is 1.0 to 1.0.
(2) Advanced Parameters
• Max. Overlap Rate: when searching for multiple targets, two detected targets may overlap; the
maximum overlap ratio allowed between their matching frames is the overlap threshold. The larger
the value, the greater the allowed overlap between the two targets. The range is 0 to 100 and
the default is 50 (how the score, number and overlap limits combine is sketched after this
parameter list).

• Sort:
- Sort by score in descending order: results are sorted by the feature match score in
descending order.
- Sort by angle in descending order: results are sorted in descending order of the relative
angle deviation in the current result.


- Sort by X from small to large: results are sorted in ascending order of the X coordinate of
the matching box center; when the X coordinates are equal, results are sorted by Y from large
to small. Sorting by Y works in the same way, with the roles of the axes swapped.
- X from small to large, then Y from small to large: results are sorted in ascending order of
the X coordinate of the matching box center; when the X coordinates are equal, results are
sorted by Y in ascending order.
• Threshold Type:
- Automatic threshold: the threshold parameter is determined automatically from the target
image.
- Template threshold: the contrast threshold of the template is used as the contrast threshold
in the matching phase.
- Manual threshold: the threshold set by the user is used as the threshold parameter for
searching.
• Clutter Considered: if enabled, the algorithm takes clutter features into account; if the
feature has burrs, the score decreases.
• Extension Threshold: the allowed ratio of missing feature to complete feature when the target
is only partly visible at the edge of the image. When the target appears at the image edge, the
extension threshold helps ensure it is still found. As long as the extension threshold is set
larger than 30, the target at the top of the figure can still be found.


• Timeout Control: when the elapsed time exceeds the set timeout, the search stops and no result
is returned. The value range is 0 ms to 10000 ms, and 0 disables timeout control.
• Contour Enabled: if enabled, the contour feature points of the template are displayed; if
disabled, only the matching frame is displayed, which reduces running time.
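
The sketch below (plain Python, hypothetical candidate format) shows how the running parameters above interact conceptually: candidates below Min. Match Score are discarded, candidates whose mutual overlap exceeds the overlap limit are suppressed in favour of higher-scoring ones, and at most Max. Match Number results are kept. It is a simplified model, not the module's actual implementation.

```python
def overlap_ratio(a, b):
    """Intersection area divided by the smaller box area (one plausible definition).

    Boxes are (x, y, width, height) tuples; the software's exact definition may differ.
    """
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / min(aw * ah, bw * bh)


def filter_matches(candidates, min_score=0.5, max_number=1, max_overlap=0.5):
    """candidates: list of dicts with 'score' and 'box' keys (hypothetical format)."""
    # 1. Drop candidates below the similarity threshold (Min. Match Score).
    kept = [c for c in candidates if c["score"] >= min_score]
    # 2. Greedy overlap suppression: higher-scoring matches win (Max. Overlap Rate).
    kept.sort(key=lambda c: c["score"], reverse=True)
    results = []
    for cand in kept:
        if all(overlap_ratio(cand["box"], r["box"]) <= max_overlap for r in results):
            results.append(cand)
        if len(results) == max_number:      # 3. Stop at Max. Match Number.
            break
    return results
```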

3. Output Results
• Matching Box Center X: X coordinate of matching box center.
• Matching Box Center Y: Y coordinate of matching box center.
• Matching Point X: X coordinate of matching point.
• Matching Point Y: Y coordinate of matching point.
• Angle:
Example: build the feature template with image 1 and take the matching point as the origin.
The matching point changes with the target image. The angle is the rotation angle of the
matched target relative to the feature image: clockwise rotation gives a positive angle and
counterclockwise rotation a negative angle. The angle change of target image No. 3 relative
to the feature image is 110.192° (see the sketch after this list).

• Scale: the uniform scale factor of the matched target relative to the created template.

• Score: match score.
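
To make the meaning of these outputs concrete, the numpy sketch below maps a point given relative to the match point into the searched image using the output center, angle and scale. It assumes the clockwise-positive angle convention described above and y-down image coordinates; it is an illustration, not the software's own coordinate code.

```python
import numpy as np

def template_to_image(offset, center, angle_deg, scale):
    """Map a point given as an offset from the match point into image coordinates.

    center: matching point coordinates in the searched image (x, y)
    angle_deg: output Angle (clockwise positive in y-down image coordinates)
    scale: output Scale (uniform scale factor)
    """
    theta = np.deg2rad(angle_deg)
    # In y-down image coordinates this matrix rotates points clockwise on screen.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return scale * rot @ np.asarray(offset, dtype=float) + np.asarray(center, dtype=float)

# Example: a feature 10 px to the right of the match point, found at (320, 240),
# rotated 110.192 degrees with no scale change (values are illustrative).
print(template_to_image((10.0, 0.0), (320.0, 240.0), 110.192, 1.0))
```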

High-Precision Match
1. Feature Template
The feature template of high-precision match is similar to that of fast match. Create the feature
template in the same way as described for fast match above; the specific parameters are listed
below.
Template Match
• Match Point: it is used to create the location reference. Click Select Model Match Center
and set the match point yourself.
• Scale Mode: if auto mode meets your demands, no adjustment is needed; otherwise, switch to
manual mode.
• Roughness Scale: the higher the roughness scale, the coarser the extracted features and the
sparser the corresponding edge points, which speeds up feature matching. The range is 1 to 20.
• Threshold Mode: if auto mode meets your demands, no adjustment is needed; otherwise, switch
to manual mode.
• Contrast Threshold: the contrast value, related to the gray values of the feature points and
the background. The larger the value, the more feature points are eliminated. The range is
1 to 255.

2. Running Parameters
You can set the parameters of high-precision feature match to define the search space; only
targets within the specified search space can be found, as shown below.
(1) Basic Parameters
• All Search Mode: when enabled, all templates are matched and the optimal results are output.
• Min. Match Score: the match score indicates the similarity between the feature template and a
target in the searched image, and is also called the similarity threshold. A target can be found
only when its similarity reaches this threshold. The maximum value is 1, which means a perfect
match; the default is 0.5.
• Max. Match Number: the maximum number of targets that can be found. The default is 1 and the
range is 1 to 200.
• Match Polarity: polarity refers to the color transition from the feature image to the background.
If the edge polarity of a target does not match that of the feature template but you still want
the target to be found, set match polarity to ignored; otherwise, set it to considered, which
reduces search time.
• Angle Range: the allowed angle change between the target and the created template. Set it
accordingly if you want to find rotated targets. The default range is -180° to 180°.


• Scale Range: the allowed scale change between the target and the created template. Set it
accordingly if you want to find targets with scale changes. The default range is 1.0 to 1.0.
(2) Advanced Parameters
• Max. Overlap Threshold: when searching for multiple targets, two detected targets may overlap;
the maximum overlap ratio allowed between their matching frames is the overlap threshold. The
larger the value, the greater the allowed overlap between the two targets. The range is 0 to 100
and the default is 50.
• Sort:
- Sort by score in descending order: results are sorted by the feature match score in
descending order.
- Sort by angle in descending order: results are sorted in descending order of the relative
angle deviation in the current result.
- Sort by X from small to large: results are sorted in ascending order of the X coordinate of
the matching box center; when the X coordinates are equal, results are sorted by Y from large
to small. Sorting by Y works in the same way, with the roles of the axes swapped.
- X from small to large, then Y from small to large: results are sorted in ascending order of
the X coordinate of the matching box center; when the X coordinates are equal, results are
sorted by Y in ascending order.
• Threshold Type:
- Automatic threshold: the threshold parameter is determined automatically from the target
image.
- Template threshold: the contrast threshold of the template is used as the contrast threshold
in the matching phase.
- Manual threshold: the threshold set by the user is used as the threshold parameter for
searching.
• Clutter Considered: if enabled, the algorithm takes clutter features into account; if the
feature has burrs, the score decreases.
• Extension Threshold: the allowed ratio of missing feature to complete feature when the target
is only partly visible at the edge of the image. When the target appears at the image edge, the
extension threshold helps ensure it is still found. As long as the extension threshold is set
larger than 30, the target at the top of the figure can still be found.
• Timeout Control: when the elapsed time exceeds the set timeout, the search stops and no result
is returned. The value range is 0 ms to 10000 ms, and 0 disables timeout control.
• Contour Enabled: if enabled, the contour feature points of the template are displayed; if
disabled, only the matching frame is displayed, which reduces running time.


3. Output Results
• Matching Box Center X: X coordinate of matching box center.
• Matching Box Center Y: Y coordinate of matching box center.
• Matching Point X: X coordinate of matching point.
• Matching Point Y: Y coordinate of matching point.
• Angle: same as the output angle of fast feature match; refer to fast feature match.
• Scale: the uniform scale factor of the matched target relative to the created template.
• Scale X/Y: the scale factor along the X/Y direction; this result is output by high-precision
match only.
• Score: match score.

Greyscale Match

Greyscale match builds the template based on the greyscale of each pixel and matches objects
with similar greyscales. It enables precise matching and locating when there are multiple similar
objects whose greyscales differ considerably, or when images are blurred with unclear contour
points.

The main modeling process, usage and parameter configuration of greyscale match are the same as
those of Feature Match. Some of the parameters are listed below.
• Template Configuration


- Numb Level: the highest level used when building the template, similar to the number of
levels in an image pyramid (see the sketch after this parameter list). The higher the level,
the faster the search, but the greater the chance of missing a match. A value below 3 is not
recommended; the default is 5 and the range is 1 to 8.
• Run Parameter
- Min. Match Score: the match score (similarity threshold) refers to the similarity between
the feature template and the target searched in the image. A target can be found only when its
similarity reaches this threshold. The maximum value is 1, which means a perfect match.
- Max. Match Number: the maximum number of objects allowed to be found. The default is 1 and
the range is 1 to 200.
- Angle Step: in matching with an angle degree of freedom, the angle step is the rotation step
used at each search iteration. The greater the value, the faster the search, but the greater
the chance of missing a match. The default is 8.
- Angle Range: the allowed angle change between the object to be matched and the created
template. Set it accordingly to find objects with rotation changes. The default range is
-180° to 180°.
- Max. Overlap Ratio: the maximum overlap ratio allowed between the matching boxes of two
overlapping objects. The greater the value, the larger the allowed overlap. The range is
0 to 100.
- Sort Type: sort by score, angle, X, Y, etc.
- Match Polarity: polarity refers to the greyscale transition between the template image and
the matched image. When the color transition of the searched image disagrees with that of the
template image, select Ignored to ensure the target can be found; otherwise, Considered should
generally be selected.
- Overtime Control: controls the search time. When the elapsed time exceeds the set time, the
search stops and no result is returned. The value range is 0 to 10000 ms, and 0 disables this
function.
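
The Numb Level behaves like the number of levels in an image pyramid: each extra level halves the resolution, so a coarse search at a high level is fast but can miss weak targets. The OpenCV sketch below only illustrates that trade-off with synthetic data; the software's internal representation is not documented here.

```python
import cv2
import numpy as np

def build_pyramid(image, levels=5):
    """Return a list of progressively downsampled images; level 0 is the original."""
    pyramid = [image]
    for _ in range(1, levels):
        # Each pyrDown call blurs and halves the resolution, which speeds up coarse
        # matching at the cost of fine detail (and thus a higher miss probability).
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

image = np.zeros((480, 640), dtype=np.uint8)   # synthetic stand-in for a camera image
for i, level in enumerate(build_pyramid(image, levels=5)):
    print(f"level {i}: {level.shape}")
```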

Mark Location

The mark location tool includes cross positioning and rectangular positioning, and mainly
applies to locating cross-shaped and rectangular images, as shown below.


NOTICE

The cross mask includes the hollow cross and the solid cross. Select the mask graphic according
to the shape of the target graphic, and when you draw the cross/rectangular ROI, make sure the
edge of the detected graphic is included in the blue area of the drawn ROI.

Feature Template
You can extract the image features by creating the feature template. For initial template
configuration, you need to click New Model, as shown below.

1. You can configure the parameters of the graphics location module and its feature templates by
double-clicking the graphics location module. Click New Model in Area 1 when creating a template
for the first time.

2. You can click the icons in Area 2 to load or delete a created template.
3. Area 3 is for selecting the mask shape. Select a cross mask when locating a cross image, and
a rectangle mask when locating a rectangle image.
4. The operations in Area 4 are, from left to right: move image, generate model, erase contour
points, clear the contour point erase mask, undo, and redo.
Template Match


• Location Mode: fast location or high-precision location can be selected; for the selection
basis, refer to feature match.
• Scale Mode: it includes auto mode and manual mode. If auto mode meets your demands, no
adjustment is needed; otherwise, switch to manual mode.
• Roughness Scale: the higher the roughness scale, the coarser the extracted features and the
sparser the corresponding edge points, which speeds up feature matching. The range is 1 to 20,
and a value of 1 is the finest. After adjustment, the number of contour points changes, as shown
below.
• Fine Scale: only available when high-precision location is selected; it represents the fineness
of the extracted feature particles. It is an integer less than or equal to the roughness scale.
When the fine scale is 1, the fineness is the greatest, the number of edge points is the largest,
and the precision is the highest.
• Threshold Mode: if auto mode meets your demands, no adjustment is needed; otherwise, switch to
manual mode.
• Contrast Threshold: the contrast value, related to the gray values of the feature points and
the background. The larger the value, the more feature points are eliminated. The range is
1 to 255.
• Rotation Angle: the angle threshold between the mask and the image to be detected. If matching
fails during modeling, it is recommended to increase the rotation angle.
• Rotation Step: the range is 0.1° to 1°; the default value is recommended.
• Projection Interval: the range is 1 to 10. If the number of matching points detected during
modeling is small, it is recommended to increase the projection interval.
• Mark Type: select according to the tested image; when the image is a hollow cross, select the
hollow cross mark type, and vice versa.

Running Parameters
You can configure various parameters of the graphics positioning module and restrict the positioning
conditions, so as to better meet your needs.
(1) Basic Parameters
• Min. Match Score: the match score indicates the similarity between the feature template and a
target in the searched image, and is also called the similarity threshold. A target can be found
only when its similarity reaches this threshold. The maximum value is 1, which means a perfect
match; the default is 0.5.
• Max. Match Number: the maximum number of targets that can be found. The default is 1 and the
range is 1 to 200.
• Match Polarity: polarity refers to the color transition from the feature image to the background.
If the edge polarity of a target does not match that of the feature template but you still want
the target to be found, set match polarity to ignored; otherwise, set it to considered, which
reduces search time.


• Angle Range: the allowed angle change between the target and the created template. Set it
accordingly if you want to find rotated targets. The default range is -90° to 90°.
• Scale Range: the allowed scale change between the target and the created template. Set it
accordingly if you want to find targets with scale changes. The default range is 1.0 to 1.0.

(2) Advanced Parameters


• Location Type: direct mapping or secondary correction can be selected. Direct mapping is
faster but less precise; secondary correction takes longer but is more precise.
• Max. Overlap Threshold: when searching for multiple targets, two detected targets may overlap;
the maximum overlap ratio allowed between their matching frames is the overlap threshold. The
larger the value, the greater the allowed overlap between the two targets. The range is 0 to 100
and the default is 50.
• Sort:
- Sort by score in descending order: results are sorted by the feature match score in
descending order.
- Sort by angle in descending order: results are sorted in descending order of the relative
angle deviation in the current result.
- Sort by X from small to large: results are sorted in ascending order of the X coordinate of
the matching box center; when the X coordinates are equal, results are sorted by Y from large
to small. Sorting by Y works in the same way, with the roles of the axes swapped.
- X from small to large, then Y from small to large: results are sorted in ascending order of
the X coordinate of the matching box center; when the X coordinates are equal, results are
sorted by Y in ascending order.
• Threshold Type:
- Automatic threshold: the threshold parameter is determined automatically from the target
image.
- Template threshold: the contrast threshold of the template is used as the contrast threshold
in the matching phase.
- Manual threshold: the threshold set by the user is used as the threshold parameter for
searching.
• Clutter Considered: if enabled, the algorithm takes clutter features into account; if the
feature has burrs, the score decreases.
• Extension Threshold: refer to the corresponding parameter of feature matching.
• Timeout Control: when the elapsed time exceeds the set timeout, the search stops and no result
is returned. The value range is 0 ms to 10000 ms, and 0 disables timeout control.


• Contour Enabled: if enabled, the contour feature points of the template are displayed; if
disabled, only the matching frame is displayed, which reduces running time.

Results Display
You can set several judgment conditions on the results in Results Display; an illustrative
sketch of how such judgments combine follows this list.
• Quantity Judgment: after enabling, you can set the quantity range of matching results. If the
number of matching results is within the range, the module is OK; otherwise, it is NG.
• Angle Judgment: after enabling, you can set the angle range of matching results; the default
range is -180° to 180°. If the angle of an output result is within the range, the module is OK;
otherwise, it is NG.
• X Scale Judgment: if the X scale of the output results is within the range, the module is OK;
otherwise, it is NG.
• Y Scale Judgment: same as X Scale Judgment.
• Score: the similarity score, indicating the degree of similarity between the template and the
detected image.
• Matching Point X/Y Judgment, Center Point X/Y Judgment: judge whether the X/Y coordinates of
the matching point or center point are within the set range.
• Text Display: the displayed text content, color, font size and location can be edited.
• Output Results: refer to Feature Matching.
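
As a minimal illustration of how such judgments combine, the sketch below (plain Python, hypothetical result fields) returns OK only when every enabled range check passes. It mirrors the description above, not the module's internal logic.

```python
def judge_results(matches, count_range=(1, 10), angle_range=(-180.0, 180.0),
                  min_score=0.5):
    """matches: list of dicts with 'angle' and 'score' keys (hypothetical format)."""
    def in_range(value, bounds):
        low, high = bounds
        return low <= value <= high

    # Quantity judgment: the number of matches must fall inside the configured range.
    if not in_range(len(matches), count_range):
        return "NG"
    # Angle and score judgments: every match must satisfy the enabled conditions.
    for match in matches:
        if not in_range(match["angle"], angle_range) or match["score"] < min_score:
            return "NG"
    return "OK"

print(judge_results([{"angle": 12.5, "score": 0.83}]))   # -> OK
```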

Circle Search

The tool of circle search detects multiple edge points first, and then fits them into a circle. This tool
is used for location and measurement of circles. The basic parameters and result display have been
explained in Tool Application. This section mainly describes the running parameters.

Running Parameters
• Sector Radius: the radii of the inner and outer circles of the ring ROI.
• Edge Type:


- Strongest Edge: only the edge point set with the largest gradient in the scanning range is
detected and fitted into a circle.
- Last Edge: only the edge point set farthest from the circle center within the scanning range
is detected and fitted into a circle.
- First Edge: only the edge point set closest to the circle center within the scanning range is
detected and fitted into a circle.
• Edge Polarity: it has three modes, including dark to light, light to dark and any.
- Dark to light: the edge where the image transitions from an area with a low gray value to an
area with a high gray value.
- Light to dark: the edge where the image transitions from an area with a high gray value to an
area with a low gray value.
- Any: both types of edges are detected.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Filter Size: it is used to enhance the edge and suppress noise; its minimum value is 1. When
the edge is blurred or there is noise interference, increasing this value makes the detection
result more stable. If the distance between edges is smaller than the filter size, the accuracy
of edge location is affected and edges may even be lost. Set this value according to the actual
situation.
• Calipers Number: the number of caliper ROIs used to scan for edge points, as shown below.

• Reject Number: the minimum number of high-error points to be excluded from fitting. In general,
if many points need to be excluded from fitting, set a higher value. For better results, use it
together with the Distance to Remove parameter.
• Initial Locating: if enabled, together with the circle locating sensitivity and subsampling
coefficient, this parameter roughly determines the center of the most circle-like area in the
ROI and uses it as the initial circle center, which helps the subsequent fine circle search. If
disabled, the ROI center is used as the initial circle center by default. Normally, when the
module before circle search is position correction, it is recommended to disable this parameter.

• Subsampling Coefficient: also called downsampling, it reduces the number of sampling points.
For an N*M image with subsampling coefficient K, one point is taken every K points in each row
and column of the original image to build a new image. Therefore, the larger the subsampling
coefficient, the sparser the contour points and the less detailed the contour. It is recommended
not to set this value too large.
• Circle Locating Sensitivity: it is used to eliminate interference points. The larger the value,
the stronger the ability to eliminate noise interference, but the initial circle location is also
more likely to fail.
• Distance to Remove: the maximum pixel distance allowed from an outlier to the fitted circle.
The smaller the value, the more points are excluded.
• Projection Width: the calipers used to search for edge points are distributed around the circle
in the ROI; this parameter describes the width of the area scanned by each caliper. Increasing
the value within a certain range yields more stable edge points.
• Initial Fit: it has two types, global and partial fit.
- Partial: the circle is fitted from local feature points. If the local features reflect the
position of the circle more accurately, use partial optimization; otherwise, use global
optimization.
- Global: the circle is fitted with all found feature points.
• Fit Mode: it has three types, least squares, Huber and Tukey. The three fitting methods differ
only in how they weight the points. As the number of outliers and their distances increase,
least squares, Huber and Tukey can be adopted progressively (typical weight functions for the
three are sketched below).
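
The sketch below shows typical weight functions for the three fitting schemes as a function of a point's residual (its distance to the fitted circle). The constants k and c are common textbook defaults, assumed here for illustration; the values used internally by the software are not documented.

```python
def fit_weight(residual, mode="least_squares", k=1.345, c=4.685):
    """Weight given to a point with the specified residual during fitting."""
    r = abs(residual)
    if mode == "least_squares":
        return 1.0                                           # every point counts equally
    if mode == "huber":
        return 1.0 if r <= k else k / r                      # outliers are down-weighted
    if mode == "tukey":
        return (1.0 - (r / c) ** 2) ** 2 if r <= c else 0.0  # far outliers are ignored
    raise ValueError(f"unknown fit mode: {mode}")

for r in (0.5, 2.0, 8.0):
    print(r, [round(fit_weight(r, m), 3) for m in ("least_squares", "huber", "tukey")])
```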

NOTICE

This tool can find only one circle at a time. If you want to find multiple circles, it is
recommended to use it together with the loop function.

Line Search

The tool of line search is used to find a line with certain features in the image. It detects
groups of feature points and then fits them into a line. The basic parameters and result display
have been explained in Tool Application. This section mainly describes the running parameters;
for the parameters not mentioned, refer to Circle Search.


Running Parameter
• Edge Type: it includes strongest edge, last edge and first edge.
- Strongest Edge: the edge point group with the largest gradient threshold is found and fitted
into a straight line.
- Last Edge and First Edge: the last or first line that meets the conditions is found.
• Edge Polarity: it has three polarities, including dark to light, light to dark and any.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Filter Size: it filters noise. The larger the value, the stronger the noise immunity and the
fewer edges are found; it may also cause the target edge to be screened out.
• Caliper Number: the number of calipers used to capture edge points.
• Revert Find Orient: after enabling it, the software swaps the line start point and line end
point information.
• Angle Normalization: after enabling, the angle of the output line is -90° to 90°; if it is not
enabled, the angle of the output line is -180° to 180°.
• Reject Number: refer to Find Circle.
• Distance to Remove: refer to Find Circle.
• Projection Width: the caliper width. The searched edge points are arranged in order in the ROI;
this parameter describes the width of the area scanned by each caliper for edge points.
Increasing the value within a certain range yields more stable edge points.


• Initial Fit: Refer to Find Circle.


• Fit Mode: Refer to Find Circle.

Output Result
• Starting Point X/Y: X and Y coordinates of the starting point of the line.
• End Point X/Y: X and Y coordinates of the end point of the line.
• Line Angle: the angle of the line relative to the horizontal.
• Fit Error: the root mean square (RMS) error is used as the line fit error: the distances d
from each fitted point to the fitted line are squared, averaged over the n fitted points, and
the square root is taken, as sketched below.
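
In formula terms, the fit error is E = sqrt((d1² + d2² + … + dn²) / n). The numpy sketch below computes it from the fitted edge points and the fitted line; the perpendicular point-to-line distance formula is standard, not taken from the software.

```python
import numpy as np

def line_fit_error(points, line_start, line_end):
    """RMS distance of the fitted edge points to the fitted line."""
    p = np.asarray(points, dtype=float)
    a = np.asarray(line_start, dtype=float)
    b = np.asarray(line_end, dtype=float)
    dx, dy = b - a
    # Perpendicular distance of each point to the infinite line through a and b.
    d = np.abs(dx * (p[:, 1] - a[1]) - dy * (p[:, 0] - a[0])) / np.hypot(dx, dy)
    return float(np.sqrt(np.mean(d ** 2)))

# Three edge points scattered around the horizontal line from (0, 0) to (2, 0).
print(line_fit_error([(0, 0.1), (1, -0.2), (2, 0.05)], (0, 0), (2, 0)))
```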

Blob Analysis

Blob analysis is the process of detecting, locating or analyzing a target object in an image
region whose pixels fall within a limited gray-level range. The Blob tool provides information
about certain features of the target object in the image, such as presence, quantity, position,
shape, orientation and the topological relations between blobs. The basic parameters and result
display have been explained in Tool Application. This section mainly describes the running
parameters.

Running Parameters


• Threshold Mode: when the input image is a binary image, binarization can be skipped. In other
cases, there are five options: single threshold, double threshold, auto threshold, soft threshold
(fixed) and soft threshold (relative).
- Single threshold:
1. Darker than background: blob targets with gray values in [0, low threshold - 1] are
detected.
2. Lighter than background: blob targets with gray values in [low threshold, 255] are
detected.
- Double threshold: when the high threshold is higher than the low threshold, the target gray
range is [low threshold, high threshold]. When the low threshold is set higher than the high
threshold, the target gray ranges are [0, high threshold] and [low threshold, 255].
- Single threshold, double threshold or auto threshold:
1. Low threshold: sets the lower limit of the threshold.
2. High threshold: sets the upper limit of the threshold.
- Soft threshold (fixed):
1. Lighter than background: the interval between the high and low thresholds is divided into
the set number of softness steps and used as the transition area; the area within
[low threshold, 254] is set to 1.
2. Darker than background: the area within [0, low threshold] is set to 1.
- Soft threshold (relative): blobs with fuzzy edges and indistinct features can be taken into
account.
• Polarity: it has two modes, dark blobs on light background and light blobs on dark background.
- Dark blobs on light background: the feature pixel values are lower than the background pixel
values.
- Light blobs on dark background: the feature pixel values are higher than the background pixel
values.
• Threshold Range: sets the lower and upper limits of the threshold. Targets whose edge threshold
falls within this range can be found in the blob area.
• Number of Found Blobs: sets the number of blob graphs to find.
• Min. Hole Area: the minimum size of a non-blob hole inside a blob area. Holes not larger than
this value are filled and treated as part of the blob.
• Contour Output Enable: the blob contour is displayed when enabled.
• Blob Image Output: when disabled, the blob analysis image is not output.
• Enable: if the current feature is enabled, it is used for blob filtering; if it is disabled,
the feature is not used for blob filtering.
- Area: the area of the target image.
- Contour: the perimeter of the target image.


- Major and minor axis: the length and width of the minimum-area circumscribed rectangle, as
shown below.

- Circularity and rectangularity: the similarity to a circle or a rectangle.

- Barycenter offset: the absolute pixel offset between the blob barycenter and the center of
the blob's minimum-area circumscribed rectangle.
- Axis ratio: the ratio of the short axis to the long axis of the box.
- Sort feature: it has several characteristics, such as area, perimeter, circularity,
rectangularity, connected domain center X, connected domain center Y, box angle, box width,
box height, rectangle upper-left vertex X, rectangle upper-left vertex Y, second-order central
moment, principal axis angle, axis ratio, etc.
- Sort mode: ascending, descending and no sorting, used in combination with the sort feature.
• Connectivity: in image processing, the object of interest is usually a combination of
interconnected pixels, so to obtain a region we must compute all the connected areas contained
in it after segmentation. On a rectangular pixel grid there are two definitions of connectivity.
In the first, two pixels are connected if they share a common edge, that is, one pixel is
directly above, below, to the left or to the right of the other; this is called 4-connectivity.
The second definition extends the first by also including pixels adjacent on the diagonal; this
is called 8-connectivity. The relation between the two kinds of connectivity is shown below and
illustrated in the sketch after this parameter list.
• Min. Overlap Rate:
- This parameter is used to filter blobs; it filters out some blobs that intersect the ROI
boundary.
- Specific filtering method: if the minimum overlap rate is set to 50 and the part of a blob
inside the ROI is less than 50% of its overall area, the blob is filtered out and is not
displayed in the result.
- If a blob clipped by the ROI boundary should not be regarded as a target, this parameter can
be set to 100.
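
The difference between the two connectivity definitions is simply which neighbour offsets join two foreground pixels into one blob. The OpenCV sketch below labels the same binary image with both definitions; it only illustrates the 4- versus 8-connectivity distinction described above.

```python
import cv2
import numpy as np

# Two foreground pixels touching only at a corner: one blob under 8-connectivity,
# two separate blobs under 4-connectivity.
binary = np.zeros((5, 5), dtype=np.uint8)
binary[1, 1] = 255
binary[2, 2] = 255

for connectivity in (4, 8):
    num_labels, _ = cv2.connectedComponents(binary, connectivity=connectivity)
    # connectedComponents counts the background as label 0, so subtract it.
    print(f"{connectivity}-connectivity: {num_labels - 1} blob(s)")
```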

Running Result
• Total Area: the sum of all blob areas in the results.
• Area: the area of a single blob in the results.
• Perimeter: the number of edge pixels.
• Barycenter X/Y: the X/Y coordinates of the blob barycenter.
• Angle: the angle of the blob's minimum circumscribed rectangle.
• Major/Minor Axis: the major/minor axis of the minimum circumscribed rectangle.
• Roundness/Rectangularity: the blob area divided by the area of the blob's minimum circumscribed
circle / the blob area divided by the area of the blob's minimum circumscribed rectangle.

NOTICE

The Blob analysis module supports the use of multiple ROIs, but when two ROIs are connected,
they are regarded as one ROI. Therefore, if each ROI needs to be processed separately, this
needs to be implemented with a loop.

Caliper

The caliper is a visual tool that measures the width of a target object, edge location, the location of
the feature or edge pair, and the distance between edge pairs. Unlike other visual tools, the caliper
requires the user to confirm the approximate area of the measurement or position, and the target
object or edge feature. The caliper tool can position edge or edge pair via different edge modes. The
basic parameters and result display have been explained in Tool Application. This section mainly
describes the running parameters.


Running Parameter
• Edge Mode: it includes single edge and edge pair modes.
- Single edge mode detects the edge position in a specific area and can be used for locating,
counting and presence/absence verification.
- Edge pair mode detects the edge spacing in a specific area.
• Filter Size: it is used to enhance the edge and suppress noise; its minimum value is 1. When
the edge is blurred or there is noise interference, increasing this value makes the detection
result more stable. If the distance between edges is smaller than the filter size, the accuracy
of edge location is affected and edges may even be lost. Set this value according to the actual
situation.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Edge Polarity: it has three modes, including dark to light, light to dark and both.
• Edge Pair Width: the expected pixel spacing of the output edge pairs. Adjusting this parameter
alone cannot directly screen out the expected edge pairs; it takes effect only when one or more
scoring methods are enabled, including normalized position score, normalized relative position
score, spacing score, spacing difference score and relative spacing difference score.
• Max. Result Number: the maximum number of edge pairs expected to be output. If more pairs are
actually found, only this many pairs are output in descending order of score; otherwise, the
actual number of edge pairs is output.
• Sort Method: score ascending, score descending, direction forward and direction reverse. The
results are sorted according to the selected type.
• Projection Direction: it includes from top to bottom and from left to right.

- From top to bottom means the edge points are searched in the left-right direction relative
to the search ROI; if the search ROI is rotated by 90°, the edge points are searched from
bottom to top.
- From left to right means the edge points are searched in the top-bottom direction relative
to the search ROI; if the search ROI is rotated by 90°, the edge points are searched from
left to right.
- Projection collapses a whole edge into a single pixel, so the edge search direction and the
projection direction must be horizontal and vertical respectively, as shown in the following
figure.

• Blurred Edge: enabling this option enhances the extraction of candidate edge points, producing
more candidates so that the target edge points are more likely to be extracted in images with
many interference points or blurred edges, but the time consumption increases significantly.
• Score Enable
- Single Edge
1. Contrast Score Enable: it is scored by edge contrast or edge pair contrast. The
user can execute contrast score on the candidate edges. The edge contrast here
reflects the intensity of the grayscale change at the edge. The maximum score
is 1.0, which is equivalent to an edge contrast of 255 (the maximum value of
the edge contrast). If it is the search mode of edge pair, the average contrast of
the two edges is used as the scoring factor.
2. Position Score Enable: it is scored by the absolute position difference of the
edge or edge pair center relative to the center of the ROI area.
3. Relative Position Score Enable: it is scored by the signed position difference (positive
or negative) of the edge or edge pair center relative to the center of the ROI area.
4. Gray Score Enable: score by regional projection gray mean value.
Each score enabling has six parameters, including curve type, starting point,
midpoint, ending point, highest value, and lowest value of score. Curve type includes

descending and ascending. By setting these six parameters properly, the target edge point
receives the highest score.
• Edge pair mode
- Normalized Position Score Enable: it is scored by the absolute position
difference of the edge or edge pair center relative to the center of the ROI
area. The denominator of normalization is the edge pair width.
- Normalized Relative Position Score Enable: it is scored by the position
difference of the edge or edge pair center relative to the center of the ROI
area. The denominator of normalization is the edge pair width.
- Spacing Score Enable: it scores in accordance with actual distance of edge
pair / edge pair width.
- Spacing Difference Enable: it scores by single edge in accordance with
(actual distance of edge pair - edge pair width) / edge pair width.
- Relative Spacing Difference Enable: it scores by both edges in accordance
with (actual distance of edge pair - edge pair width) / edge pair width.
Output Result
• Edge Point Coordinates X/Y: start and end coordinates of edges or edge pairs
• Edge Polarity: 1 means from black to white, 2 means from white to black.
• Measured Width: caliper spacing of edge pairs
• Score: the caliper tool maps the raw scores of the enabled judgment criteria to mapped scores,
and then computes the overall score of an edge-pattern candidate as the n-th root of the product
of the mapped scores. For example, if four scoring methods are defined, the caliper tool
multiplies the four mapped scores and takes the fourth root of the result, as sketched below.
• Edge Status: 1 means searched, 2 means searching failed.
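
The overall score is therefore the geometric mean of the enabled mapped scores. A minimal sketch with four illustrative score values:

```python
import math

def overall_score(mapped_scores):
    """Geometric mean of the enabled scoring methods' mapped scores (each in 0..1)."""
    n = len(mapped_scores)
    return math.prod(mapped_scores) ** (1.0 / n)

# Four enabled scoring methods, as in the example above (values are illustrative).
print(round(overall_score([0.9, 0.8, 0.95, 0.7]), 3))
```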

Edge Search

The tool of edge search is used to search edge with certain features in the image. The basic
parameters and result display have been explained in Edge Distance.


• Filter Size: it is used to enhance the edge and suppress noise; its minimum value is 1. When
the edge is blurred or there is noise interference, increasing this value makes the detection
result more stable. If the distance between edges is smaller than the filter size, the accuracy
of edge location is affected and edges may even be lost. Set this value according to the actual
situation.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Edge Polarity: it has three polarities, including dark to light, light to dark and any.
• Edge Type: it includes strongest edge, last edge, first edge and all.
• Find Direction: it includes top to bottom and left to right.
• Min. Edge Score: the minimum score for a found edge; edges with a score lower than this value
are filtered out.
• Max. Result Number: the maximum number of edges to find.
• Sort Method: score ascending, score descending, direction forward and direction reverse. The
results are sorted according to the selected type (a 1-D sketch of gradient-based edge detection
follows this list).
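
The edge parameters above can be pictured on a 1-D gray-level profile: the profile is smoothed, its gradient is taken, and only positions whose gradient magnitude exceeds the Edge Threshold and whose sign matches the Edge Polarity are kept as edge points. The numpy sketch below illustrates that idea with a simple box filter; the software's actual projection and filtering are more elaborate.

```python
import numpy as np

def find_edges(profile, edge_threshold=30, polarity="any", filter_size=1):
    """Return indices of edge points along a 1-D gray-level profile (illustrative)."""
    p = np.asarray(profile, dtype=float)
    if filter_size > 1:
        kernel = np.ones(filter_size) / filter_size
        p = np.convolve(p, kernel, mode="same")    # smooth before differencing
    gradient = np.diff(p)                           # positive values: dark -> light
    strong = np.abs(gradient) >= edge_threshold     # Edge Threshold
    if polarity == "dark_to_light":                 # Edge Polarity
        strong &= gradient > 0
    elif polarity == "light_to_dark":
        strong &= gradient < 0
    return np.flatnonzero(strong)

profile = [20] * 10 + [220] * 10 + [20] * 10        # one rising and one falling edge
print(find_edges(profile, edge_threshold=50, polarity="any"))   # -> [ 9 19]
```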

Position Correction

The tool of position correction is used for auxiliary and precise positioning, correcting target
motion deviation. It establishes a position deviation reference from the match point and match
frame angle in the template matching result, and then rotates and translates the ROI coordinates
of the detection box according to the relative deviation between the running point and the
fiducial point in the template matching result, as shown below.


The fiducial point and fiducial frame are the match point and match frame recorded when the
reference is created. The running point and running frame are the match point and match frame
obtained when the target image is matched against the features. The pixel deviation of the image
is determined from the fiducial point and the running point, and the angle deviation from the
fiducial frame and the running frame. In this way, the ROI area follows the pixel and angle
changes of the image.
The position correction figure is shown below.


There are two ways to correct position: by point and by coordinate. For correction by point, the
position of the point is given directly; for correction by coordinate, X and Y values are used
to determine the position of the point. In either case, the position information is transmitted
from the previous module and is used to determine the pixel and angle deviation (see the sketch
below).
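
Conceptually, position correction rotates the ROI about the fiducial point by the angle deviation and then translates it by the pixel deviation. The numpy sketch below shows that idea; it is a simplified model of the behaviour described above (scale handling and subscriptions are omitted), not the module's actual code.

```python
import numpy as np

def correct_roi(roi_points, fiducial_point, fiducial_angle, run_point, run_angle):
    """Move ROI vertices so that they follow the matched target.

    roi_points: (n, 2) ROI vertices defined when the reference was created
    fiducial_point / fiducial_angle: match point and angle stored as the reference
    run_point / run_angle: match point and angle found in the current image
    """
    d_theta = np.deg2rad(run_angle - fiducial_angle)           # angle deviation
    rot = np.array([[np.cos(d_theta), -np.sin(d_theta)],
                    [np.sin(d_theta),  np.cos(d_theta)]])
    pts = np.asarray(roi_points, dtype=float) - np.asarray(fiducial_point, dtype=float)
    # Rotate about the fiducial point, then move to the running point (pixel deviation).
    return pts @ rot.T + np.asarray(run_point, dtype=float)

roi = [(100, 100), (160, 100), (160, 140), (100, 140)]         # illustrative rectangle
print(correct_roi(roi, (120, 120), 0.0, (150, 130), 10.0))
```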

NOTE
 When using fast feature matching with position correction, you do not need to subscribe to
the scale parameter; in this case, position correction of images with the same resolution but
different sizes cannot be realized. When using high-precision feature matching with position
correction, the position correction module automatically subscribes to the scale, so position
correction of images with the same resolution but different sizes can be realized.

The following are images before and after position correction.


Rect Search

The tool of rect search is used to detect rectangles in the target image. The basic parameters
and result display have been explained in Tool Application, and the running parameters have been
explained in previous sections. This section shows an example solution, as shown below.

Detection Parameter

• Edge Pair Type: see Spacing Detection for details.
• Edge Polarity: it has three polarities, including dark to light, light to dark and any.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Caliper Number: the number of caliper ROIs used to scan for edge points.
• Filter Size: it is used to enhance the edge and suppress noise; its minimum value is 1. When
the edge is blurred or there is noise interference, increasing this value makes the detection
result more stable. If the distance between edges is smaller than the filter size, the accuracy
of edge location is affected and edges may even be lost. Set this value according to the actual
situation.
• Projection Width: the calipers used to search for edge points are distributed along the ROI;
this parameter describes the width of the area scanned by each caliper. Increasing the value
within a certain range yields more stable edge points.
• Reject Number: the minimum number of high-error points to be excluded from fitting. In general,
if many points need to be excluded from fitting, set a higher value. For better results, use it
together with the Distance to Remove parameter.
• Distance to Remove: the maximum pixel distance allowed from an outlier to the fitted edge.
The smaller the value, the more points are excluded.
• Initial Fit: see Find Circle for details.
• Fit Mode: it has three types, least squares, Huber and Tukey. The three fitting methods differ
only in how they weight the points. As the number of outliers and their distances increase,
least squares, Huber and Tukey can be adopted progressively.

Peak Search

The tool of peak search is used to detect the peak of an object in a specific area, as shown below.

Detection Parameter

• Filter Size: it is used to enhance the edge and suppress noise; its minimum value is 1. When
the edge is blurred or there is noise interference, increasing this value makes the detection
result more stable. If the distance between edges is smaller than the filter size, the accuracy
of edge location is affected and edges may even be lost. Set this value according to the actual
situation.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Edge Polarity: it has three modes, including dark to light, light to dark and any.
• Scan Width: the width of the area scanned for edge points in the ROI; the minimum value is 1.
Within a certain range, increasing this value reduces the number of edge points.
Detection Results
• Peak X/Y: the X/Y coordinates of the detected peak.
• Peak Score: the edge points, the peak and their scores in peak detection are calculated by an
internal caliper; refer to the caliper tool for the definition of the scores.
• Peak Polarity: the edge polarity, light to dark or dark to light.
• Peak Distance: the distance from the peak to one side of the ROI area, corresponding to the
search direction in the running parameters.
• Edge Number: the number of edge points.

Edge Intersection

The tool of edge intersection is used to search the intersection of two edges, and the search direction
and polarity can be set according to the required intersection. When the two edges intersect, the
intersection is the target to be searched. When they do not intersect, the intersection is the
intersection of their extension lines. The basic parameters and result display have been explained in
the Tool Application. This section mainly describes part of the parameters.

Edge Parameter

• Edge Type: it includes the strongest, the first and the last edge.
• Edge Polarity: it has three modes, including dark to light, light to dark and any.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge
points whose gradient exceeds this value are detected. The larger the value, the stronger the
noise immunity and the fewer edges are found; if it is too large, even the target edge points
may be screened out.
• Calipers Number: the number of caliper ROIs used to scan for edge points.
• Filter Size: it is used to enhance the edge and suppress noise; its minimum value is 1. When
the edge is blurred or there is noise interference, increasing this value makes the detection
result more stable. If the distance between edges is smaller than the filter size, the accuracy
of edge location is affected and edges may even be lost. Set this value according to the actual
situation.
• Projection Width: the calipers used to search for edge points are distributed along the ROI;
this parameter describes the width of the area scanned by each caliper. Increasing the value
within a certain range yields more stable edge points.
• Reject Number: the minimum number of high-error points to be excluded from fitting. In general,
if many points need to be excluded from fitting, set a higher value. For better results, use it
together with the Distance to Remove parameter.
• Distance to Remove: the maximum pixel distance allowed from an outlier to the fitted line.
The smaller the value, the more points are excluded.
• Initial Fit: see Find Circle for details.
• Fit Mode: it has three types, least squares, Huber and Tukey. The three fitting methods differ
only in how they weight the points. As the number of outliers and their distances increase,
least squares, Huber and Tukey can be adopted progressively.
• ROI Area: one or two ROIs can be set. With one ROI, the intersection of two lines is found
within that ROI; with two ROIs, a line is found in each ROI separately, and then the intersection
of the two lines is computed.
• Search Mode: edge 1 search mode finds the line in ROI 1, and edge 2 search mode finds the line
in ROI 2.
Output Result of Intersection
• Edge Intersection X/Y: the X/Y coordinates of the edge intersection.
• Line Starting and End Point X/Y: the X/Y coordinates of the line start and end points.
• Line 0/1 Angle: the angular offset of lines 0 and 1 from the horizontal.

Parallel Lines Search

The tool of parallel lines search is used to find lines that are approximately parallel within the
tolerance angle range. The centerline of the two parallel lines is also displayed in the middle, as
shown below. The running parameters have been explained in Line Search. This section mainly
describes part of the parameters.


• Edge Pair Type: see Spacing Detection for details.
• Edge Polarity: three modes are available: dark to light, light to dark and any.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge points whose gradient is greater than this value are detected. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Calipers Number: the number of caliper regions used to scan the ROI for edge points.
• Max. Angle Tolerance: if the angle between the target lines is smaller than this value, the lines are determined to be parallel; otherwise they are determined to be non-parallel (a parallelism check is sketched after this list).
• Reject Number: the minimum number of high-error points to be excluded from the fit. In general, if a large number of points should be excluded from the fit, set this value higher. For better results, it is recommended to use it together with Distance to Remove.
• Distance to Remove: the maximum pixel distance allowed from an outlier to the fitted line. The smaller the value, the more points are excluded.
• Filter Size: it is used to filter noise. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Projection Width: within the ROI, edge points are scanned in several distributed regions. This parameter describes the width of the region in which edge points are scanned. Increasing the value within a certain range yields more stable edge points.
• Initial Fit: see Find Circle for details.
• Fit Mode: three types are available: least squares, Huber and Tukey. The three fitting methods differ only in how the weights are calculated. As the number of outliers and their distance from the group increase, use least squares, Huber and Tukey successively.
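The sketch below illustrates one way a parallelism check within an angle tolerance and the centerline of two roughly parallel lines could be computed. It is a minimal Python/numpy illustration; the segment endpoints are hypothetical and the tool's own algorithm is not necessarily implemented this way.

import numpy as np

def angle_deg(p0, p1):
    d = np.asarray(p1, float) - np.asarray(p0, float)
    return np.degrees(np.arctan2(d[1], d[0]))

def parallel_within_tolerance(l0, l1, max_angle_tol):
    # l0 and l1 are ((x0, y0), (x1, y1)) segments; compare their directions.
    diff = abs(angle_deg(*l0) - angle_deg(*l1)) % 180.0
    diff = min(diff, 180.0 - diff)            # direction-independent angle difference
    return diff <= max_angle_tol

def centerline(l0, l1):
    # Midpoints of corresponding end points give the centerline of the pair.
    (a0, a1), (b0, b1) = l0, l1
    mid = lambda p, q: (np.asarray(p, float) + np.asarray(q, float)) / 2
    return mid(a0, b0), mid(a1, b1)

l0 = ((0, 10), (200, 14))                     # hypothetical edge 1
l1 = ((0, 60), (200, 66))                     # hypothetical edge 2
if parallel_within_tolerance(l0, l1, max_angle_tol=3.0):
    print(centerline(l0, l1))                 # -> start and end points of the centerline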

Quadrilateral Search


The tool of quadrilateral search is used to find linear edges in the specified quadrilateral areas. After
the function is enabled, the system outputs the start point and end point information of lines, and the
central point information of the quadrilateral.

Running Parameters
• Select Edge: it is used to select edge 0, 1, 2 or 3; after selecting an edge, set its corresponding parameters.
• Edge Type:
- Strongest Edge: find the edge point set with the largest gradient threshold and fit it into a line.
- Last Edge/First Edge: find the last edge/the first edge that meets the demands.
• Edge Polarity: it has three modes, including dark to light, light to dark and any.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge points whose gradient is greater than this value are detected. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Filter Size: it is used to filter noise. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Calipers Number: the edge points are clamped by multiple calipers; this parameter defines the number of calipers.
• Projection Width: refer to Find Line.
• Reject Number: see Find Circle for details.
• Distance to Remove: see Find Circle for details.
• Initial Fit: see Find Circle for details.
• Fit Mode: see Find Circle for details.


Line Group Search

With the Line Group Search tool, you can find multiple lines: the tool generates multiple sets of edge points, which are then fitted into lines. It supports creating multiple ROI areas and multiple sets of parameters, as shown below.

Multi-line Search

With the tool of multi-line search, you can find out multiple target lines in one image. The related
information of these multiple lines may also be output in the result.

Basic Parameters
• Filter Kernel Half Width: used for edge enhancement and noise suppression; the minimum value is 1. When the edge is blurred or there is noise interference, increasing this value helps make the detection result more stable, but if edges lie too close to each other it will affect the accuracy of the edge position or even lose the edge. Set this value according to the actual situation.
• Projection Length: this value determines the number of areas used for gradient field projection. A smaller value allows the tool to analyze the image with finer granularity, but the tool may take more time to execute. A higher value improves the execution speed, but the edge you want to find may not be detected. It is generally recommended that this value be at least as large as the set filter size.
• Absolute/Relative Edge Threshold: only edge points whose edge gradient is greater than the extraction threshold are detected.
• Edge Polarity: four kinds of edge polarity are available: From Black to White, From White to Black, From Black to White & From White to Black, and From Black to White / From White to Black. & indicates that, within the same line segment, edge points of both polarities must be considered; / indicates that, within the same line segment, if the point set contains both polarities, either one can be used for fitting.
• Edge Angle Tolerance: the maximum allowable angle difference between the gradient direction of an edge point and the direction perpendicular to the fitted line (the normal direction of the fitted line). Increasing this value lets the tool consider more edge points, which may change the position of the found line segment.
• Edge Distance Tolerance: the maximum allowable distance between an edge point and the line. Increasing it lets the tool consider more edge points, which may change the position of the found line.
• Max. Line: the maximum number of target lines.
• Coverage Rate: the minimum percentage of the actual edge point quantity relative to the planned edge point quantity. The larger the value, the more likely it is that only lines with a higher coverage rate are output.

NOTE
 A high coverage rate does not necessarily mean that many edge points are covered; it should be considered together with the actual length of the straight line segment.

• Linear Rotation Tolerance: the amount of rotation allowed between the found line segment and the defined gradient search direction. A lower value forces the tool to locate segments that are more parallel to the gradient search direction.
• Fit Mode: the two fitting methods differ only in how the weights are calculated. As the number of outliers and their distance from the group increase, use Huber and Tukey successively. A sketch of how such weighting works is given below.
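To illustrate the difference between the fit modes, the following minimal Python/numpy sketch fits a line by iteratively reweighted least squares, where least squares, Huber and Tukey differ only in the weighting function. It is a conceptual illustration, not the software's fitting code; the sample points and tuning constants are assumptions.

import numpy as np

def robust_fit_line(points, mode="huber", iters=20):
    # Fit y = a*x + b by iteratively reweighted least squares.
    # 'least_squares', 'huber' and 'tukey' differ only in the weight computed per residual.
    x, y = np.asarray(points, float).T
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(x)
    c = {"huber": 1.345, "tukey": 4.685}.get(mode, 1.0)   # common tuning constants (assumed)
    a = b = 0.0
    for _ in range(iters):
        sw = np.sqrt(w)
        a, b = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r = y - (a * x + b)
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale estimate (MAD)
        u = r / (c * s)
        if mode == "huber":
            w = np.where(np.abs(u) <= 1, 1.0, 1.0 / np.abs(u))
        elif mode == "tukey":
            w = np.where(np.abs(u) < 1, (1.0 - u**2) ** 2, 0.0)
        else:                                               # plain least squares
            w = np.ones_like(x)
    return a, b

pts = [(i, 0.5 * i + 2.0) for i in range(20)] + [(5.0, 40.0)]   # one gross outlier
print(robust_fit_line(pts, "least_squares"))   # pulled towards the outlier
print(robust_fit_line(pts, "tukey"))           # close to the true slope 0.5, intercept 2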

Blob Label Analysis

The Blob label analysis module adds two output results, category tag and tag value, to the functions of the Blob module. It is usually used together with modules that output multi-category information, such as DL image segmentation.


Basic Parameters
• Category Name: usually used with DL image segmentation. Subscribe to the category names of DL image segmentation, or manually enter a group of category names.
• Gray Value: usually used with DL image segmentation. Subscribe to the gray values of DL image segmentation, or manually enter a group of gray values.
Running Parameters
• Number of Found Blobs: it is used to set the number of Blob graphs to find.
• Area Enable: after it is enabled, only blobs whose area falls within the specified range are found.
• Contour Enable: after it is enabled, the module shows the Blob contour.
• Blob Image Output: after it is disabled, the image after Blob analysis is no longer output.
• Perimeter, Short Axis, Long Axis, Circularity, Rectangularity, Barycenter Offset, Axis Ratio: see BLOB Analysis for details.
• Axial Ratio Range: the ratio of the box's short axis to its long axis.
• Sort Feature: it offers several characteristics, such as area, perimeter, circularity, rectangularity, connected domain center X, connected domain center Y, box angle, box width, box height, rectangle upper-left vertex X, rectangle upper-left vertex Y, second-order center distance, principal axis angle, axis ratio, etc.
• Sort Mode: ascending, descending and no sorting, used in combination with the sort feature.
• Connectivity: in image processing, the object of interest is usually a combination of interconnected pixels. Therefore, to obtain a region, all connected areas contained in the divided area must be calculated. On a rectangular pixel grid there are two definitions of connectivity. In the first, two pixels are connected if they share a common edge, that is, one pixel is above, below, to the left or to the right of the other; this is called 4-connectivity. The second definition extends the first by also including adjacent pixels on the diagonal; this is called 8-connectivity. The difference is illustrated in the sketch after the notice below.


• Min. Overlap Rate:
- This parameter is used to filter blobs, filtering out some blobs that intersect the ROI.
- Specific filtering method: if the minimum overlap rate is set to 50 and the blob area inside the ROI is less than 50% of its overall area, the blob is filtered out and is not displayed in the result.
- This parameter can be set to 100 if a blob that only clings to the ROI border should not be regarded as a target.

NOTICE

The Blob label analysis module supports the use of multiple ROIs, but when two ROIs are connected, they are regarded as one ROI. Therefore, if each ROI needs to be processed separately, this must be implemented in a loop.
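The following short Python sketch, using OpenCV and numpy rather than the product's own Blob implementation, shows how the two connectivity definitions change the result on a tiny binary image where two pixels touch only diagonally.

import cv2
import numpy as np

# A tiny binary image where two foreground pixels touch only at a corner.
img = np.array([[255,   0, 0],
                [  0, 255, 0],
                [  0,   0, 0]], dtype=np.uint8)

# 4-connectivity: only pixels sharing an edge are connected -> 2 blobs.
n4, _ = cv2.connectedComponents(img, connectivity=4)
# 8-connectivity: diagonal neighbours also count -> 1 blob.
n8, _ = cv2.connectedComponents(img, connectivity=8)
print(n4 - 1, n8 - 1)   # label 0 is the background, so subtract it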

Path Extraction

The path extraction module can be used to take points or search edge points at equal intervals on the
drawn path. This module is generally used with feature match and position correction modules, as
shown in the following figure.


The main steps of path extraction are as follows:


1. Double-click the Path Extraction module. Select ModelSettings, and click Edit Model as follows:


The image selected when you configure the template should be consistent with the image selected when you create the feature matching template. Use the corresponding drawing tools to draw the polyline and arc paths to be extracted, click the button to generate the model, and finally click OK to complete the template.

NOTE
 You can draw multiple paths and set their corresponding parameters according to
actual demands. You can also change the order of the paths.

Training Parameter
⚫ Number of Path Points: the number of path points on the extracted path in the template, ranging from 2 to 300; select it according to the demand.
⚫ Position Offset: the offset of the path points can be set according to the actual demand, ranging from -1000 to 1000.
⚫ Edge Polarity: it has three polarities, including dark to light, light to dark and any.
- From Dark to Light: the edge of a region that transitions from a low gray value to a high gray value.
- From Light to Dark: the edge of a region that transitions from a high gray value to a low gray value.
- Any: both edges above can be detected.
⚫ Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge points whose edge gradient is greater than this value are detected. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
⚫ Caliper Height: the height of the region in which edge points are scanned when searching the ROI. When the edge search is not accurate, this value can be increased appropriately.
⚫ Caliper Width: increasing this value within a certain range can obtain more stable edge points.
⚫ Filter Size: it is used to filter noise. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
⚫ Position Offset: the offset of the path points can be set, ranging from -1000 to 1000.
2. Select the extraction method in the running parameters. Different extraction methods lead to different functions of this module and different parameters in model training, so select the extraction method according to the demand.
Running Parameter
⚫ Extraction Method: it includes taking points at equal intervals and finding edge points.
- Taking Points at Equal Intervals: the corresponding number of path points (for example, glue-dispensing points) in the template is obtained at equal intervals in the running result, and the detection parameters cannot be changed by default. Equal-interval sampling along a path is sketched after this list.
- Finding Edge Points: the running results are displayed as the edge points near the template, and the detection parameters can be changed by yourself.
• Output Arc Params: you need to set this parameter when taking points at equal intervals is selected as the extraction method. After enabling it, if there is an arc trajectory, the center and angle of the arc are output in the result of the arc trajectory.
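As a rough illustration of taking points at equal intervals, the following Python/numpy sketch samples a polyline at equal arc-length spacing. It is not the module's implementation, and the example path coordinates are hypothetical.

import numpy as np

def sample_path(points, n):
    # Return n points spaced at equal arc-length intervals along a polyline.
    pts = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)      # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    targets = np.linspace(0.0, cum[-1], n)
    out = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1
        i = min(i, len(seg) - 1)
        r = 0.0 if seg[i] == 0 else (t - cum[i]) / seg[i]
        out.append(pts[i] + r * (pts[i + 1] - pts[i]))
    return np.array(out)

# Hypothetical polyline path drawn in the template (pixel coordinates).
path = [(0, 0), (100, 0), (100, 50)]
print(sample_path(path, 7))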

NOTICE

⚫ This module is used in conjunction with the position correction module. You should
create the fiducial point of position correction on an image. Run the process, and
double-click the position correction module and manually click to create the fiducial
point.
⚫ When the model training parameters are consistent with the operation parameters,
the operation parameters shall prevail during the operation of the module.
⚫ When using this module, first select the extraction method in the operation
parameters. Then configure the template, and move and delete the path points in the
template after generating the template.

Find Angle Bisector

The angle bisector finding module is used to find the angle bisector between the two lines.
Prerequisite:

Drag the Find Angle Bisector module to the process editing area. Connect it with other modules and ensure that the relevant upstream modules, such as Line Search, have already been connected before it.
Steps:
1. Double-click Find Angle Bisector module to enter the parameter edit window as shown below.

2. Select the image data source in Input Source.


3. Subscribe the input source of Line 1 and Line 2 respectively. There are three kinds of input sources
for lines: by line, by point and by coordinate.
• By Line: Directly subscribe a line from the results of the previous module.
• By Point: Need to subscribe two points from the results of the previous module as the start
point and end point of the line.
• By Coordinate: Need to subscribe four coordinates from the results of the previous module
as the X and Y coordinates of the start point and the end point respectively.

NOTE
 After selecting one mode to subscribe a data source, when you switch to the other two
modes, the module will automatically get the corresponding data source of the other
modes.

4. Switch to Result Display of the module and set the specific module, color and transparency of the
image.
5. Click Execute or Continue to view the operation results as shown below.
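For reference, the sketch below shows one way an angle bisector can be computed once the two line directions and their intersection point are known (the intersection itself can be computed as in the earlier intersection sketch). It is a minimal Python/numpy illustration with hypothetical values, not the module's internal method.

import numpy as np

def angle_bisector(p, d1, d2):
    # Direction of the bisector of two lines through point p with directions d1 and d2.
    u1 = np.asarray(d1, float) / np.linalg.norm(d1)
    u2 = np.asarray(d2, float) / np.linalg.norm(d2)
    b = u1 + u2                              # the sum of unit vectors bisects the angle
    if np.linalg.norm(b) < 1e-9:             # opposite directions: bisector is perpendicular
        b = np.array([-u1[1], u1[0]])
    return np.asarray(p, float), b / np.linalg.norm(b)

# Hypothetical: lines crossing at (50, 50), one horizontal and one vertical.
point, direction = angle_bisector((50, 50), (1.0, 0.0), (0.0, 1.0))
print(point, direction)                      # bisector through (50, 50) at 45 degrees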


Find Median Line

The Find Median Line module is used to find the median points of the start point and the end point
of the two lines respectively, and generate the median line according to the two median points.
Prerequisite:
Drag the Find Median Line module to the process editing area. Connect it with other modules and ensure that the relevant upstream modules, such as Line Search, have already been connected before it.
Steps:
1. Double-click Find Median Line module to enter parameter edit window, as shown below.


2. Select the image data source in Input Source.


3. Subscribe the input mode for Line 1 and Line 2 respectively. There are three kinds of input modes:
by line, by point and by coordinate.
• By Line: directly subscribe a line from the results of the previous module.
• By Point: subscribe two points from the results of the previous module as the start point and end point of the line.
• By Coordinate: subscribe four coordinates from the results of the previous module as the X and Y coordinates of the start point and the end point.

NOTE
 After you select one method to subscribe a data source, when you switch to the other
two methods, the module will automatically get the corresponding data source of the
other methods.

4. Switch to the result display of the module and set the specific module, color and transparency of
the image display.
5. Click Execute or Continue to view the operation results as shown below.

Calculate Parallel Lines

The Calculate Parallel Lines module is used to find parallel lines, which can be calculated based on a line and a given distance, or based on a line and a point.
Prerequisite:
Drag the Calculate Parallel Lines module to the process editing area. Connect it with other modules and ensure that the relevant upstream modules, such as Line Search, have already been connected before it.
Steps:


1. Double-click Calculate Parallel Lines module to enter parameter edit window.


2. Select the image data source in Input Source.
3. Select the method of finding parallel lines in Calculation Method according to the actual situation. There are two methods.
• A point outside the line: obtain the parallel line of the selected line based on the selected point.
• A distance from the line: obtain the upper and lower parallel lines of the selected line according to the set distance.
4. When selecting A distance from the line in Calculation Method, you need to subscribe the data
source of line input and fill in the specific value of the spacing, as shown in the figure below. The
spacing parameter is in pixels.

5. When selecting A point outside the line, you need to subscribe the data sources of point input
and line input respectively.


NOTE
 The subscription method of lines and points is similar to that of the find angle bisector
module. See Find Angle Bisector for details.

6. Switch to Result Display of the module and set the specific module, color and transparency of the
image.
7. Click Execute or Continue to view the operation results as shown below. The following figure
shows the operation results when a distance from the line is selected.
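The computation behind "a distance from the line" can be illustrated as follows: offset the line along its unit normal by the set spacing on both sides. This is a minimal Python/numpy sketch with hypothetical values, not the module's implementation.

import numpy as np

def parallel_lines(p0, p1, distance):
    # Two lines parallel to segment p0-p1, offset by `distance` pixels on each side.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the line
    upper = (p0 + distance * n, p1 + distance * n)
    lower = (p0 - distance * n, p1 - distance * n)
    return upper, lower

print(parallel_lines((0, 0), (100, 0), 20))   # hypothetical line and spacing in pixels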


Find Vertical Line

The Find Vertical Line module can find the perpendicular line through a given point or the perpendicular bisector of a line.
Prerequisite:
Drag the Find Vertical Line module to the process editing area. Connect it with other modules and ensure that the relevant upstream modules, such as Line Search, have already been connected before it.
Steps:
1. Double-click Find Vertical Lines module to enter parameter edit window.
2. Select the image data source in the Input Source.
3. Select the method of finding vertical lines in VerticalLine Type according to the actual situation.
There are two methods.
• Point Perpendicular: Obtain the vertical line of the selected line based on the selected
point.
• Perpendicular Bisector: Find perpendicular bisector of the selected line based on the
selected line.
4. When selecting Point Perpendicular in VerticalLine Type, you need to subscribe the data source
of point input and line input as shown in the figure below.


5. When selecting Perpendicular Bisector in VerticalLine Type, you need to subscribe the data source of the line input only.

NOTE
 The subscription method of lines and points is similar to that of the find angle bisector
module. See Find Angle Bisector for details.

6. Switch to Result Display and set the specific module, color and transparency of the image.
7. Click Execute or Continue to view the operation results as shown below. The following figure
shows the operation results when perpendicular bisector is selected.


Measurement
L2C Measure

The tool of L2C measurement returns the vertical distance and the intersection coordinate of the line
and the circle in the image. This tool finds the line and the circle in the image of the object to be
measured first, as shown below.
When you select subscription mode, the specific data input is described as follows.

The specific steps of line circle measure are described as follows:


Steps:
1. Solution Creation: select the image source in the operation area, drag the find line, find circle and L2C measurement algorithm modules into the operation area, use the connecting line to connect these modules in sequence, and click Run Once to complete the solution creation, as shown above.
2. After finding the target circle and line respectively, set the line input and circle input. There are several ways to input the line and circle, as shown below.
- By line or circle: select the results of finding the line and the circle as the input source.
- By point: you can customize or link the start point, end point and angle of the line.
- By coordinate: you can customize or link the X/Y coordinates of the line's start point and end point.
- By parameter: you can customize or link the coordinates of the circle center and the radius length.
3. Output Result: the angle between the line-circle connecting line and the horizontal line, the vertical distance between the line and the circle, the intersection point coordinates of the line and the circle, and the projection coordinates of the circle center.
When subscription mode is selected for data input, the running parameters of the L2C measurement module are invalid and do not need to be configured.
The data input of drawing method is shown in the figure:

The specific steps of line circle measure are described as follows:


Steps:
1. Solution Creation: select the image source in the operation area, drag the line circle measurement
module into the operation area, use the operation line to connect the modules, double-click the line
circle measurement module, select Draw in the data input mode, and draw the ROI. Click Run to
complete the solution construction, as shown in the above figure.
2. Data Source:
- Draw Method: It does not need to cooperate with line search and circle search, but only
needs the image source and line circle measurement module. This method is to draw the
ROI of searching line and circle in the input image.
3. Draw Input: Click Draw and draw ROI in the input image.


4. Output Result: the angle between the line-circle connecting line and the horizontal line, the vertical distance between the line and the circle, the intersection point coordinates of the line and the circle, and the projection coordinates of the circle center.
5. Running Parameters: the running parameters are valid only when the drawing input mode is selected as the data source, and invalid when the subscription input mode is selected.
6. Select Type: Select an object, line or circle, to configure parameters.
Select Line for Type
• Edge Type:
- Strongest Edge: only the edge point set with the largest gradient in the scanning range
is detected and fitted into a line.
- First Edge: the first line satisfying the condition.
- Last Edge: the last line satisfying the condition.
• Edge Polarity:
- Dark to light: the edge where the image transitions from an area with a low gray value to an area with a high gray value.
- Light to dark: the edge where the image transitions from an area with a high gray value to an area with a low gray value.
- Any: both kinds of edges are detected.
• Edge Threshold: when the selected type is line, the default value is 5. The edge threshold is the gradient threshold, ranging from 0 to 255. Only edge points whose edge gradient is greater than this value are detected. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Filter Size: it is used to enhance the edge and suppress noise, and its minimum value is 1. When the edge is blurred or there is noise interference, you can increase its value to make the detection result more stable. If the distance between edges is smaller than the filter size, it will affect the accuracy of the edge location or even lose the edge. This value needs to be set based on the actual situation.
• Reject Number: the minimum number of high-error points to be excluded from the fit. In general, if a large number of points should be excluded from the fit, set this value higher. For better results, it is recommended to use it together with Distance to Remove.
• Distance to Remove: the maximum pixel distance allowed from an outlier to the fitted line. The smaller the value, the more points are excluded.
• Initial Fit: it has two types, global fit and partial fit.
- Partial: local optimization fits the line according to the local feature points. If the local features reflect the position of the line more accurately, use local optimization; otherwise use global optimization.
- Global: the line is fitted with all the found feature points.


• Fit Mode: three types are available: least squares, Huber and Tukey. The three fitting methods differ only in how the weights are calculated. As the number of outliers and their distance from the group increase, use least squares, Huber and Tukey successively.
Select Circle for Type
• Edge Type:
- Strongest Edge: only the edge point set with the largest gradient in the scanning range
is detected and fitted into a circle.
- Last Edge: only the edge point set with the largest distance from the center of the
circle within the scanning range is detected and fitted into a circle.
- First Edge: only the set of edge points with the smallest distance from the center of
the circle within the scanning range is detected and fitted into a circle.
• Edge Polarity:
- Dark to light: the edge where the image transitions from an area with a low gray value to an area with a high gray value.
- Light to dark: the edge where the image transitions from an area with a high gray value to an area with a low gray value.
- Any: both kinds of edges are detected.
• Edge Threshold: when the selected type is circle, the default value is 5. The edge threshold is the gradient threshold, ranging from 0 to 255. Only edge points whose edge gradient is greater than this value are detected. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Filter Size: it is used to enhance the edge and suppress noise, and its minimum value is 1. When the edge is blurred or there is noise interference, you can increase its value to make the detection result more stable. If the distance between edges is smaller than the filter size, it will affect the accuracy of the edge location or even lose the edge. This value needs to be set based on the actual situation.
• Reject Number: the minimum number of high-error points to be excluded from the fit. In general, if a large number of points should be excluded from the fit, set this value higher. For better results, it is recommended to use it together with Distance to Remove.
• Initial Locating: if it is enabled, combined with the circle locating sensitivity and subsampling coefficient settings, this parameter can roughly determine the center of the area closest to a circle within the ROI and use it as the initial circle center, which is convenient for the subsequent fine circle search. If initial locating is disabled, the ROI center is used as the initial circle center by default. Under normal circumstances, when the module preceding Find Circle is position correction, it is recommended to disable this parameter.
• Subsampling Coefficient: also called downsampling, it means that the number of sampling points is decreased. For an N*M image with a subsampling coefficient of K, one point is taken every K points in each row and column of the original image to create a new image. Therefore, the larger the subsampling coefficient, the sparser the contour points and the less detailed the contour. It is recommended not to set this value too large.


• Circle Locating Sensitivity: it is used to eliminate interference points. The larger the value, the stronger the ability to eliminate noise interference, but it is also easier to cause the initial circle location to fail.
• Distance to Remove: the maximum pixel distance allowed from an outlier to the fitted circle. The smaller the value, the more points are excluded.
• Initial Fit: it has two types, global fit and partial fit.
- Partial: local optimization fits the circle according to the local feature points. If the local features reflect the position of the circle more accurately, use local optimization; otherwise use global optimization.
- Global: the circle is fitted with all the found feature points.
• Fit Mode: three types are available: least squares, Huber and Tukey. The three fitting methods differ only in how the weights are calculated. As the number of outliers and their distance from the group increase, use least squares, Huber and Tukey successively.

NOTE
 When you select Draw in the data input mode, the caliper box and the figure to be
found must intersect.
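As a reference for the quantities this tool outputs, the sketch below computes the perpendicular distance from a circle center to a line, the projection of the center onto the line, and the line/circle intersection points. It is a minimal Python/numpy illustration with hypothetical line and circle parameters, not the module's code.

import numpy as np

def line_circle_measure(p0, p1, center, radius):
    # Distance from the circle centre to the line, the centre's projection onto the
    # line, and the intersection points of the line and the circle (if any).
    p0, p1, c = (np.asarray(v, float) for v in (p0, p1, center))
    d = p1 - p0
    u = d / np.linalg.norm(d)
    proj = p0 + np.dot(c - p0, u) * u          # projection of the centre on the line
    dist = np.linalg.norm(c - proj)            # perpendicular (vertical) distance
    if dist > radius:
        return dist, proj, []                  # no intersection
    half = np.sqrt(radius**2 - dist**2)
    return dist, proj, [proj - half * u, proj + half * u]

# Hypothetical line endpoints and circle parameters (pixels).
print(line_circle_measure((0, 0), (200, 0), (100, 30), 50))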

L2L, C2C Measure

In general, two lines are not absolutely parallel, so the line-to-line distance is calculated as the average of the distances from the four end points to the other line. The distance given by line-to-line measurement is an absolute distance, as shown below. The input modes and output results are described here. The data source for circle-to-circle measurement is selected in the same way as for line-to-line measurement.

L2L Measure Parameters


• Source Selection: select the source of data, including subscription and drawing. See L2C
Measurement for details.


• By Line: the input source is the result of finding a line.
• By Point: you can customize or link the start point, end point and angle of the line.
• By Coordinate: you can customize or link the X/Y coordinates of the line's start point and end point.
• Running Parameters: see L2C Measure for details.
L2L Measure Results
• Angle: the angle difference between the two lines.
• Distance: the average of the distances from the four end points to the other line; a sketch of this computation follows the list.
• Absolute Distance: the absolute value of the distance.
• Intersection Point X/Y: the X/Y coordinates of the intersection point of the two lines' extensions.
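The averaged distance described above can be illustrated with the following minimal Python/numpy sketch; the segment coordinates are hypothetical and the module's exact implementation may differ.

import numpy as np

def point_to_line(p, a, b):
    # Perpendicular distance from point p to the infinite line through a and b.
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = b - a
    cross = d[0] * (p - a)[1] - d[1] * (p - a)[0]
    return abs(cross) / np.linalg.norm(d)

def line_to_line_distance(l0, l1):
    # Average of the distances from the four end points to the other line.
    (a0, a1), (b0, b1) = l0, l1
    return np.mean([point_to_line(b0, a0, a1), point_to_line(b1, a0, a1),
                    point_to_line(a0, b0, b1), point_to_line(a1, b0, b1)])

# Hypothetical, nearly parallel line segments (pixel coordinates).
print(line_to_line_distance(((0, 0), (100, 2)), ((0, 40), (100, 44))))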

P2C, P2L, P2P Measure

For P2C, P2L and P2P measurement, please refer to L2C Measure. You can set the output of
corresponding tools according to your demands.

Intensity Measure

The Intensity Measure tool is used to measure the grayscale average value and standard deviation of all pixels within the ROI of the measured image. Select the image source in the operating area first, drag the Intensity Measure algorithm module into the operating area, and connect the modules in turn using the connecting line. You can use the ROI tool to select an area and narrow the search range. Click Run to see the result, as shown below.

Output Results
• Min./Max. Value: Min./Max. grayscale value
• Average Value: grayscale average value


• Standard Deviation: The standard deviation is the arithmetic square root of the variance.
The standard deviation can reflect the dispersion of a data set.
• Contrast: Contrast is a relative value. For a picture, it reflects the ratio of the brightest part
to the darkest part of the picture.
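The statistics listed above can be reproduced roughly with numpy, as sketched below. The contrast formula shown is only an assumption based on the "brightest to darkest ratio" description; the software's exact formula is not documented here, and the ROI array is a stand-in.

import numpy as np

def intensity_measure(gray_roi):
    # Grayscale statistics over an ROI, mirroring the outputs listed above.
    roi = np.asarray(gray_roi, float)
    lo, hi = roi.min(), roi.max()
    return {
        "min": lo,
        "max": hi,
        "mean": roi.mean(),
        "std": roi.std(),                                # dispersion of the gray values
        "contrast": hi / lo if lo > 0 else float("inf")  # brightest/darkest ratio (assumed)
    }

roi = np.random.default_rng(0).integers(10, 256, (60, 80))  # stand-in for a real ROI crop
print(intensity_measure(roi))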

Pixel Count

Count the number of pixels that meet the high and low threshold grayscale setting in the ROI area,
as shown below.

Parameters and Results:


• Low Threshold: to be counted, a pixel's grayscale value in the area must be larger than the low threshold.
• High Threshold: to be counted, a pixel's grayscale value in the area must be smaller than the high threshold.
• Rate: the proportion of counted pixels in this area.

NOTICE

⚫ If the low threshold is larger than the high threshold, a pixel is counted when its value falls within [0, high threshold] or [low threshold, 255]. If the low threshold is smaller than the high threshold, a pixel is counted when its value falls within [low threshold, high threshold].
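The counting rule, including the wrapped case described in the notice, can be illustrated with the following minimal Python/numpy sketch; it is not the module's implementation and the ROI array is a stand-in.

import numpy as np

def pixel_count(gray_roi, low, high):
    # Count pixels whose gray value satisfies the low/high threshold rule above.
    roi = np.asarray(gray_roi)
    if low <= high:
        mask = (roi >= low) & (roi <= high)
    else:                                    # wrapped range: [0, high] or [low, 255]
        mask = (roi <= high) | (roi >= low)
    count = int(mask.sum())
    rate = count / roi.size * 100.0          # percentage of pixels in range
    return count, rate

roi = np.random.default_rng(1).integers(0, 256, (50, 50), dtype=np.uint8)  # stand-in ROI
print(pixel_count(roi, 100, 200))
print(pixel_count(roi, 200, 100))            # low > high: wrapped range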

Edge Distance

This tool is used to detect the distance between two feature edges. First, find the edge that meets the
conditions, and then measure the distance, as shown in the following figure. As some parameters
have been explained in previous sections, this section mainly describes part of the parameters.


Running Parameter
• Filter Size: it is used to enhance the edge and suppress noise, and its minimum value is 1. When the edge is blurred or there is noise interference, you can increase its value to make the detection result more stable. If the distance between edges is smaller than the filter size, it will affect the accuracy of the edge location or even lose the edge. This value needs to be set based on the actual situation.
• Edge Threshold: the edge threshold is the gradient threshold, ranging from 0 to 255. Only edge points whose edge gradient is greater than this value are detected. The larger the value, the stronger the noise immunity, the fewer edges are found, and the target edge points may even be screened out.
• Edge Polarity: it has three modes, including dark to light, light to dark and any.
• Widest: represents the edge pair with the largest spacing within the detection range.
• Narrowest: represents the edge pair with the smallest spacing within the detection range.
• Strongest: represents the edge pair with the largest average gradient within the detection range.
• Weakest: represents the edge pair with the smallest average gradient within the detection range.
• First Pair: represents the edge pair whose center is closest to the starting point of the search within the detection range.
• Last Pair: represents the edge pair whose center is farthest from the starting point of the search within the detection range.
• Closest: represents the set of edge pairs closest to the ideal width within the detection scanning range.

• Not Closest: represents the set of edge pairs that are not closest to the ideal width within the detection scanning range.
• All: represents all edge pairs within the detection scanning range.
• Find Direction: it includes top to bottom and left to right.
• Min. Edge Score: the minimum score for a found edge. Edges with a score lower than the minimum score are filtered out.
• Max. Result Number: the maximum number of results to search for.
• Ideal Spacing: edge pairs whose spacing is closer to this value are preferred; it is used to filter and sort the results.
• Sort Method: score ascending, score descending, direction forward and direction reverse. The results are sorted according to the selected type.
Output Result
• Measured Width: the pixel width of the detected spacing.
• Edge Point Coordinates X/Y: the start and end coordinates of the edges.
• Score: the score calculation method in spacing detection is completely consistent with the caliper tool. However, the scoring function curve is set internally and cannot be configured externally.
• Edge Polarity: 1 means from black to white, 2 means from white to black.
• Edge Status: 0 means the edge was not located, 1 means it was located.

Histogram

The tool of histogram is used to count pixel quantity, average value, min. value, max. value, peak
value, standard deviation and contrast of gray in a target area. It also generates a gray histogram,
which clearly shows the distribution of pixel points under each gray value, as shown below.

Results:


• Min./Max. Value: Min./Max. grayscale value


• Mid-value, Peak Value, Average Value: Mid-value, Peak Value, Average Value of
grayscale
• Standard Deviation: The standard deviation is the arithmetic square root of the variance.
The standard deviation can reflect the dispersion of a data set.
• Pixel Quantity: count pixel quantity in the image
• Contrast: Contrast is a relative value. For a picture, it reflects the ratio of the brightest part
to the darkest part of the picture.

Image Generation
Circle Fit and Line Fit

This tool fits a circle from three or more points, as shown below. Intersection points are detected first to form a point set, which is then fitted into a circle.

Basic Parameters of Circle Fit
• Image Input: select the collected image.
• Fit Point: select the points collected in the flow as the fitting source.

Running Parameters of Circle Fit
• Reject Number: the minimum number of high-error points to be excluded from the fit. In general, if a large number of points should be excluded from the fit, set this value higher. For better results, it is recommended to use it in combination with Distance to Remove.


• Distance to Remove: the maximum pixel distance from an outlier to the fitted circle. The smaller the value, the more points are excluded.
• Initial Type: it includes global optimum and local optimum.
• Weighting Function: it has three types, including least squares, Huber and Tukey. These three fitting modes differ only in how the weights are calculated. As the number of outliers and their distance from the group increase, it is recommended to use least squares, Huber and Tukey successively.
• Max Iteration Times: the maximum number of iterations of the fitting algorithm.

Line fitting needs at least two fitting points, and its principle is similar to circle fitting; for the specific parameters, refer to the circle fitting described above. A solution demonstration is taken as an example here: the circle is used as the model for feature matching, and then the matching points are used to fit the line.
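As a rough reference for what circle fitting does with a point set, the sketch below performs a basic algebraic least-squares circle fit in Python/numpy. It is only an illustration; the module's own fit additionally applies the reject number, distance to remove and weighting function described above, and the sample points are hypothetical.

import numpy as np

def fit_circle(points):
    # Least-squares (Kasa) circle fit: returns the centre (cx, cy) and the radius.
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Hypothetical points lying on a circle of radius 50 centred at (100, 80).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([100 + 50 * np.cos(t), 80 + 50 * np.sin(t)])
print(fit_circle(pts))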

Geometry Generation

The Geometry Generation tool is used to freely create multiple auxiliary graphics, including rectangles, points, line segments and circles. When some graphics are difficult to locate, you can move the mouse or modify the X and Y coordinates to change the position of the generated graphics, as shown below. For example, a white circle contour that is difficult to locate can be created manually.


Geometry Creation
• Create Rectangle: drag the mouse to draw on the image, and move or modify the X and Y coordinates, width, height, and angle of the center point. This tool can be used with the position correction tool.
• Create Point: click on the image with the mouse to create a point, and then move it or modify the X and Y coordinates. It can be used in conjunction with position correction.
• Create Line: drag the mouse on the image to create a line. Drag the end points of the line or modify the X/Y coordinates of the end points. It can be used in conjunction with position correction.
• Create Circle: drag the mouse on the image to create a circle. Move the end point of the circle or modify the X/Y coordinates of the circle center and the radius. It can be used in conjunction with position correction.

Recognition
Select the recognition tool from the drop-down menu. The tool of recognition includes 2D BCR
recognition, BCR recognition and OCR recognition.


2D BCR Recognition

The tool of 2D BCR recognition is used to identify the 2D code in the target image, and output 2D
code information in the form of characters. It can efficiently and accurately identify multiple 2D
codes at a time. Currently, only QR code and DataMatrix code are supported, as shown below.

Recognition Parameters:
• QR Code and DM Code: after enabling, the software can recognize QR codes and DataMatrix codes in the image. When you are not sure about the code type, it is recommended to enable both of them.
• 2D Code Number: the maximum quantity of 2D codes that are expected to be searched for and output. If the quantity actually found is smaller than this parameter, the actual quantity of 2D codes is output. Sometimes the quantity of 2D codes in the scene is variable; to identify all the 2D codes that appear, set this parameter to the maximum quantity of 2D codes in the scene. In some applications the background texture is complicated, and this parameter can be set slightly larger than the quantity of 2D codes to be recognized, which will however reduce efficiency.
• Polarity: it has three types, including dark on white, white on dark and any. You can select them according to your demands.
• Edge Type: it has three types, including continuous, discrete and compatible, as shown below. The left one is the continuous type, the right one is the discrete type, and the compatible type is compatible with both.

• Subsampling Ratio: the image subsampling coefficient. The larger the value, the higher the efficiency of the algorithm, but the recognition rate of the 2D code is reduced.
• Code Width Range: the pixel width that 2D codes occupy; it includes the pixel width of the largest and the smallest code.
• Mirror Mode: the X-direction mirroring of the image, including mirrored and non-mirrored modes. This parameter should be enabled when the image is collected from a reflecting mirror; otherwise, it should not be enabled.
• QR Distortion: this parameter needs to be enabled when the 2D code to be identified is printed on bottles or on bags in the logistics industry.
• Timeout-Period to Exit: if the running time of the algorithm exceeds this value, it exits directly; the unit is ms. When this parameter is set to 0, the timeout exit is disabled.
• Application Mode: the normal mode is adopted in normal scenes. The expert mode is reserved for 2D codes that are difficult to identify. When the application scene is simple and the code is clear, large and clean, the fast speed mode can be adopted.
• DM Code Type: it has three types, including square, rectangle and compatibility mode.
Recognition Results:
• Center X/Y: the center X and Y coordinates of the 2D code.
• Code Angle: the angular offset of the 2D code from the horizontal position.
• PPM: the number of pixels occupied by the side length of one module of the 2D code.
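For a rough idea of what such outputs look like, the sketch below uses OpenCV's QRCodeDetector (a third-party detector, not this software's code-reading engine) to decode a QR code and derive a center and angle from the returned corner points. The image file name is hypothetical.

import cv2
import numpy as np

img = cv2.imread("qr_sample.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image file
data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
if points is not None and len(points) > 0:
    corners = points.reshape(-1, 2)
    center = corners.mean(axis=0)                          # comparable to Center X/Y
    d = corners[1] - corners[0]                            # top edge of the code
    angle = np.degrees(np.arctan2(d[1], d[0]))             # rough equivalent of Code Angle
    print(data, center, angle)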


BCR Recognition

The BCR Recognition tool is used to locate and identify barcodes in the specified area; it tolerates rotation of the target barcode at any angle and tilt within a certain angle. It supports CODE39 code, CODE128 code, Codabar code, EAN code, alternating 25 code (Interleaved 2 of 5) and CODE93 code. The specific operation is shown below.

Recognition Parameters:
• Code Enabling: it supports CODE39 code, CODE128 code, Codabar code, EAN code, alternating 25 code and CODE93 code. Enable the corresponding code type.
• Bar Code Number: the maximum quantity of barcodes that are expected to be found and output. If the quantity actually found is smaller than this parameter, the actual quantity of barcodes is output.
• Subsampling Coefficient: also called downsampling, it means that the number of sampling points is decreased. For an N*M image with a subsampling coefficient of K, one point is taken every K points in each row and column of the original image to create a new image. Therefore, the larger the subsampling coefficient, the sparser the contour points and the less detailed the contour. It is recommended not to set this value too large. A short sketch follows this list.
• Detection Window Size: the size of the barcode area locating window. The default value is 4; when the blank space in the barcode is relatively large, you can set it larger, for example 8. However, make sure that the barcode height is greater than 6 times the window size. The value range is from 4 to 65.
• Quiet Zone Width: the width of the blank area on the left and right sides of the barcode. The default value is 30. If the blank area is sparse, you can set the value to 50.
• Bar Code Height Range: the minimum and maximum barcode height that the algorithm is able to recognize. The default range is from 30 to 200.
• Dfk Filter Size: the minimum and maximum barcode height that the algorithm is able to recognize. The default range is from 30 to 2400.
• Timeout-Period to Exit: if the running time of the algorithm exceeds this value, it exits directly; the unit is ms. When this parameter is set to 0, the algorithm runs for as long as it actually needs.
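As a small illustration of the subsampling coefficient described in this list, the following Python/numpy sketch keeps one pixel every K rows and columns; it is not the software's implementation, and the array is a stand-in for a real image.

import numpy as np

def subsample(image, k):
    # Keep one pixel every k rows and columns (the subsampling coefficient).
    return np.asarray(image)[::k, ::k]

img = np.arange(36).reshape(6, 6)       # stand-in for an N*M image
print(subsample(img, 2).shape)          # (3, 3): coarser image, fewer contour points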

OCR Recognition

The tool of OCR recognition is used to read the character text on the label. This tool requires
character training. The specific steps are shown below.
1. Drag the character recognition module into the flow editing area and set the relevant parameters after double-clicking it.
2. Set parameters before character training.
Running Parameters:
• Character Polarity: it has two types, including dark on light and light on dark.
• Character Width Range: it is used to set the min. width and max. width of character, and
the range is [1, 512].
• Width Type: it has two types, including changeable type and equal-width type. When the
character width is the same, it is recommended to select equal-width type. Otherwise, select
changeable type.
• Character Height Range: it sets the min. height and max. height of the character, and the
range is [1, 512].
• Binary Ratio: the binarization threshold parameter; the range is [0, 100].
• Area of Fragment: the area range of a single character; the range is [0, 100000].
• Accept Threshold: it refers to the min. score of a character that can be recognized.
Advanced Parameters:
• Distance Threshold: the distance from the Blob to the text baseline. If the actual value is
greater than this parameter, it will be deleted.
• Ignore Borders: whether to delete the blob that is in touch with the ROI boundary.
• Main Direction Range: the search range of text tilt angle, and the range is [0, 45].
• Tilt Range: the max. range that allows characters to be tilted, and the range is [0, 45].
• Min. Gap of Characters: the min. transverse gap between two characters.
• Min. Gap of Lines: the min. gap between multi line characters.
• Max Width and Height Ratio: the largest width and height ratio range of a single character
circumscribed rectangle is [1, 1000].
• Classification Method: it has three methods, including the nearest distance, the highest
weight and the fastest rate.
• Stroke Width Filter Enable: Whether to enable stroke width filter or not.
• Stroke Width Range: the width range of a single stroke, and it can only take effect after
the width filter is enabled, and the max. range is [1, 64].
• Similarity Type: it supports Euclidean distance and cosine distance. Different types affect the recognition rate.
3. Double-click Fonts Training.
- Select the target character area.
- Click Draw Character; the characters are then shown divided by red boxes.


• Click Train Characters. Complete the training by inputting the true values of the corresponding characters and adding them to the character library. If the recognition is inaccurate, you can repeat the training. Select a single character in the character library and click the expand button to expand the training of a single character, as shown in the figure below.


4. In order to improve the accuracy of recognition, you can enable character filtering. Now only the
character types you set are recognized in the corresponding positions, as shown in the following
figure.


When character filtering is enabled, you can customize the number of recognized characters and the
type of each character, including all, numbers, uppercase letters, lowercase letters, special characters,
spaces, customization, etc. You can define characters that are easy to be misunderstood through
Customize, but only if the customized characters exist in the character library.
5. Click Run to output the target character in the image. The first possible recognition result and
candidate recognition result will be output. The first possible result is read by default, as shown in
the following figure.

DL Character Recognition

Character recognition is a process of converting image information into symbols that can be
represented and processed by the PC. Essentially, character recognition task can be considered as a
special translation process: translating image signals into natural language. This is similar to speech
recognition and machine translation: from a mathematical point of view, they convert a set of input
sequences containing a large amount of noise into a set of output sequences of a given label through
the model obtained by automatic learning.
With features of low rejection rate, low error rate, fast recognition speed, high stability, friendly user
interface, etc., DL character recognition is widely applied to dot and matrix character recognition,
IC chip character recognition, inkjet printing character recognition, bank card character recognition,
etc. Character training is required before character recognition, and it is recommended to use
character recognition tool together with character location tool.


• Basic Parameters: In this setting, the ROI area where the target character is located must
be selected. When the graphic position changes, it is best to use it in conjunction with
character positioning.
• Running Parameters: The settings are as follows:

• Model File Path: The model file provides a default model, and you can also load
the model file created by the previous character training.
• Saving Model in Solution: After enabling, save the model data to the solution file
or process file. When loading a solution across machines, you do not need to enter
the model file path.


• Character Filtration: Each ROI supports setting character filtration independently.

When character filtering is enabled, you can customize the number of recognized characters and the
type of each character, including all, numbers, uppercase letters, lowercase letters, special characters,
spaces, customization, etc. You can define characters that are easy to be misunderstood through
Customize, but only if the customized characters exist in the character library.
• Min. Confidence: It refers to the min. score for recognizing characters.

DL Code Reading

The tool of DL code reading can read the 2D BCR and BCR in the specific area. It supports CODE39
code, CODE128 code, Codabar code, EAN code, alternating 25 code and CODE93 code for BCR,
supports QR code and DataMatrix code for 2D BCR. The specific solutions are as follows.


DL code reading module is divided into DL code reading C and DL code reading G. They need
different operating environments but have the same parameters and functions. Therefore, you can
select according to your own computer configuration.
Level Running Parameters of 2D BCR:
• ISO Standard: ISO15415 and ISO29158, select according to demands.
• Mirror Mode: mirroring refers to X-direction mirroring of the image. Three modes can be selected:
- Mirror Image: select this mode if the collected image is a reflection in a mirror.
- Non-Mirror Image: select this mode if the collected image is not a reflection in a mirror.
- Any: the default mode, covering both mirror and non-mirror images.
• Rating Processing Type: processing 1 and processing 2 can be selected.
- Processing 1: supports the HIK rating mode.
- Processing 2: supports the ISO rating mode.
• Aperture Size: the filter size. Increasing it makes the filtering effect more obvious; the default value is recommended.
• Rating Mode: HIK and ISO can be selected.
• Polarity: it has three types, including dark on white, white on dark and any. You can select them according to your demands.
• Number of 2D BCR Rows and Columns: the "rows" and "columns" are those of the smallest squares or circles that constitute the 2D code. This parameter indicates the number of rows and columns contained in the actual 2D code. It is recommended to use the default value; in case of abnormality, fill in the number of rows and columns of the 2D code according to the actual situation.
• Edge Type: continuous and discrete types can be selected.
Level Running Parameters of 1D BCR:
• Time-Period to Exit: if the algorithm running time exceeds this value, it exits directly; the unit is ms. When it is set to 0, the timeout exit is disabled and the algorithm runs for as long as it actually needs.
• High Performance Mode: a parameter unique to the DL code reading C module. After it is enabled, the recognition effect is better, but the CPU usage increases.
• Code Grade Flag: if it is enabled, you can view and set more specific code parameter configurations.
• Enable Button: when enabled, the corresponding sub-indicator is counted in the total grade; the indicators include decoding level, edge certainty, contrast, min. reflectivity rate, edge contrast, module uniformity, decode ability, defect degree and static area.
• Indicator ABCD Threshold: the score intervals used when converting an indicator's score into a grade. The indicators include decode ability, defect degree, minimum reflectivity rate, edge contrast, module uniformity and contrast.


• Aperture Size: the filter size. Increasing it makes the filtering effect more obvious; the default value is recommended.

NOTICE

⚫ There is no quantitative correlation between the timeout exit time and the algorithm time shown at the bottom of the interface. The timeout represents the time the algorithm spends locating and reading the code in the image.
⚫ The DL code reading library supports images up to 7936 * 5888.
⚫ The DL code reading module of this version only supports 2D codes with fewer than 16 data areas for the time being, and the limit of the code reading character set is 256. If you need to expand the capability set, please contact our technicians.

DL Character Location

The tool of DL character location can accurately locate characters in images when the character
background is complex and the location is not fixed. The specific process is shown in the figure
below.

Running Parameters
• Model File Path: select the model file created by the previous character training.
• Save Model Data: after enabling, the model data is saved to the solution file or process file. When loading a solution across machines, you do not need to enter the model file path.
• Max. Number to Find: the maximum character quantity to be searched. The default is 1 and the range is [1, 100].

Issue V4.1.2 (2022-06-08) User Guide Copyright © Yuejiang Technology Co., Ltd.
146
DobotVisionStudio User Guide

It refers to the min. score for the character location box. The default is 0.3 and
Min. Confidence
the range is [0, 1].
It refers to the overlap rate of character location boxes, the default is 0.3 and
Overlap Rate
the range is [0.01, 1]
It refers to the sorting order of characters, including by X/Y coordinate of
Sort Object
center point, and by confidence.
Advanced Parameters
After the function is enabled, the minimum edge score can be set. If the
Edge Filter Enable proportion of the part within the edge of the search target in the whole is less
than the minimum edge score, the search target will be discarded.
It sets the relative angle range tolerance for target characters. To search a target
Character Angle Enable with rotation change, you need to set it accordingly, and angle range is from -
180° to 180°.
Character Width
If it is enabled, characters whose width/height are within this range can be
Enable/Character Height
detected.
Enable

DL Single Character Recognition

When there are multiple lines of text in the image and the combination of positioning and
recognition takes a long time, the single character recognition module can be considered, as shown
in the figure below.


Running Parameters:
• Model File Path: Select the model file created by the previous character training.
• Saving Model in Solution: After enabling, save the model data to the solution file or
process file. When loading a solution across machines, you do not need to enter the model file path.
• Character Filtration: Each recognition frame can set customized character filter
information. Refer to Character Recognition for details.
• Max. Number to Find: It refers to the max. quantity of objects to be searched.
• Min. Confidence: It refers to the min. score for target objects.
• Overlap Rate: It refers to the max. overlap rate of the target image.
• Sort Object: It refers to the character’s sorting order, including X coordinate of center, Y
coordinate of center and confidence.

Advanced Parameters:
• Edge Filter Enable: After enabling, the minimum edge score can be set. If the proportion of the search target lying within the edge is less than the minimum edge score, the search target will be discarded.
• Character Width Enable: If it is enabled, target objects whose width are within this range
can be detected.
• Character Height Enable: If it is enabled, target objects whose height are within this range
can be detected.

NOTE
 When you copy module parameters, the character filtering data in the running
parameters will not be copied. If you paste a parameter to another module of the same
type, the character filtering content in the running parameter must be reset.

Deep Learning


Deep learning is a machine learning method developed from traditional neural networks. With a deep learning module, the machine can absorb, learn, understand and handle complex information in the real world much as human beings do.
Specifically, a machine equipped with the deep learning module can execute high-difficulty recognition tasks such as character recognition, character positioning, image segmentation, image classification, object detection, image retrieval, anomaly detection and instance segmentation.
As deep learning is data-driven, you need to collect a large number of samples before starting. The data sets used for training need to be labeled and should be as diverse as possible; the resolution requirements are low. Take character positioning as an example: at least 150 picture samples are required. You need to use VisionTrain for deep learning training.

NOTE
 The deep learning module has two versions: G and C. The suffix of module name of
GPU version is G, the suffix of module name of CPU version is C, and the CPU version
does not depend on the graphics card.
 DL character recognition, DL code reading, DL character location, and DL single
character recognition belong to the recognition category.
 The function module has a high requirement for the PC configuration. See Operating
Environment for details.

DL Image Segmentation

In image segmentation, you can configure parameters and detect foreground objects through self-
trained model. Image segmentation can be divided into two-level classification mode and multi-
level classification mode. The two classification mode is to only detect defects, as shown in the
figure below.
• Two-level Classification Mode


For defects, the pixel value is high, and most of the interference can be removed through subsequent
processing or secondary training. The image on the right side of the above figure is the defect
probability diagram of image segmentation.
• Multi-level Classification Mode
The right side of the figure above is the category diagram of image segmentation, and its operation
parameter settings are shown in the figure below.
Running Parameters of Defect Detection:
• Model File Path: Select the model file generated by the previous image segmentation training.
• Saving Model in Solution: After enabling, the model data is saved to the solution file or process file. When loading the solution across machines, you do not need to enter the model file path.
• Min. Score: The minimum probability for a pixel to be classified as a certain type.
• Display Probability Diagram: You can select the defect probability diagram displayed on the display interface.
• Output Type: You can select the defect probability diagram that can be subscribed to by subsequent modules.

NOTE
 The DL image segmentation module supports the multi classification function of
defects. When marking defects with the deep learning training tool, if it is necessary
to classify defects, mark the defect type at the same time, and only the large/small
model of single image segmentation and the self-learning template mode in image
comparison support the multi classification function.

DL Classification

Deep learning classification is an image processing method for distinguishing the targets of different
types according to different features reflected in image information. It uses the PC to analyze images,
and classifies each pixel or area in an image into several categories so as to replace human beings' visual judgment. It has been widely applied in object recognition and sorting. The specific process is as follows.

Running Parameters:
• Model File Path: Select the model file created by the previous training.
• Saving Model in Solution: After enabling, the model data is saved to the solution file or process file. When loading the solution across machines, you do not need to enter the model file path.
• Top-K Categories: It outputs the index numbers and corresponding confidences of the K categories with the highest confidence scores.

DL Object Detection

Object detection is image segmentation based on the geometric and statistical features of the target; it combines target segmentation and recognition. DL object detection has two versions: G and C. The suffix of the module name of the GPU version is G, and the suffix of the module name of the CPU version is C. The CPU version does not depend on the graphics card.

Running Parameters:

• Model File Path: Select the model file created by the previous character training.
• Saving Model in Solution: After enabling, save the model data to the solution file or
process file. When loading a solution across machines, you do not need to enter the model file path.
• Max. Number to Find: It refers to the max. quantity of objects to be searched.
• Min. Confidence: It refers to the min. score for target objects.
• Overlap Rate: It refers to the max. overlap rate of two target objects.
• Sort Object: It refers to the sorting order of target objects, including X coordinate of center, Y coordinate of center and confidence.

Advanced Parameters:
• Edge Filter Enable: After enabling, the minimum edge score can be set. If the proportion of the search target lying within the edge is less than the minimum edge score, the search target will be discarded.
• Angle Enable: It sets the relative angle range tolerance for target objects. To search an object with rotation change, set it accordingly. The angle range is from -180° to 180°.
• Width/Height Enable: If enabled, target objects whose width/height are within this range can be detected.

DL Image Retrieval

DL image retrieval is mainly used to search for the desired type of target image among various types
of images. The DL image retrieval module supports color images and black-and-white images. The
specific solution is shown in the figure below.


⚫ Select the image to be trained, use the VisionTrain to train the image marking, and divide the
image types in the image set. See the user manual of VisionTrain for specific training methods.
⚫ Import the model trained by VisionTrain into the running parameter model file path of DL
module, as shown in the following figure.

⚫ Import the gallery model file obtained during training into the Gallery path, as shown in the following figure.
⚫ After adding the Gallery path and model file path, click Execute to configure the parameters.

Running Parameters:
• Model File Path: Select the model file created by the previous character training.
• Saving Model in Solution: After enabling, save the model data to the solution file or
process file. When loading a solution across machines, you do not need to enter the model file path.
• Gallery Path: It is generated together with the training model during training, and the path
of the gallery file is VisionTrain1.2.0\Applications\DeepLearningModel.
• Top-K Category: The number of categories displayed in the result, with a value range of
1 to 10.
• Min. Similarity: Search results whose similarity is less than this value are filtered out.

Gallery Management:
• Search Category: You can search and delete categories in the gallery.
• Register Image: You can create Gallery files manually.


If the gallery file is not found in this folder, DL image retrieval supports creating the file manually. The specific methods are as follows:
• Click Register to enter the following interface:

• Select Register by Image to register a type. Fill in the category name, click + to add images, and click Register. You can only register one type using this method.
• Select Register by Folder to register multiple types, as shown in the following figure:


• Click to select a folder and then click Register. Place images of the same category in the folder and name the folder with the category name.

DL Anomaly Detection

DL anomaly detection detects exceptional pictures by learning from a few OK samples. It can be used in scenarios where there are a large number of OK samples but no NG samples, or only a few types of NG samples, and the defect type is uncertain.
An image with obvious defects is shown in the figure below.

After registering the training model, the running results are as shown below.

Basic Parameters:

In this setting, you need to draw the ROI area where the defect is located.
Running Parameters:

• Model File Path: Load the model file created by the previous training.
• Saving Model in Solution: After enabling, save the model data to the solution file or
process file. When loading a solution across machines, you do not need to enter the model
file path.

DL Instance Segmentation

Instance segmentation has the characteristics of semantic segmentation, which needs to be classified at the pixel level. It also has part of the characteristics of object detection, which needs to locate different instances, even if they are of the same kind. The effect of the DL instance segmentation module is shown below.


Running Parameters:
• Model File Path: Select the model file created by the previous training.
• Save Model Data: After enabling, the model data is saved to the solution file or process file. When loading the solution across machines, you do not need to enter the model file path.
• Max. Number to Find: It refers to the max. quantity of targets for target detection.
• Min. Confidence: It refers to the min. score for the position box.
• Overlap Threshold: The max. proportion allowed for overlapping in the target image.
• Mask Threshold: The min. value for the mask. If the actual value is smaller than this value, the software will not output it.
• Max. Overlap Threshold: The maximum overlap of two mask areas. If the actual value is higher than this value, the mask area with lower confidence will be filtered out.

Advanced Parameters:
• Edge Filter Enable: After enabling it, you can enter the min. edge score. If the proportion of the searched target within the edge to the whole is less than the minimum edge score, the search target will be discarded.
• Render Mask Image: After the function is enabled, the targets detected in the image after running are presented in the form of a mask.

Calibration
Camera Mapping

The camera mapping module calibrates the conversion relations between the two camera coordinate
systems via the corresponding pixel pair of the two cameras, and outputs the calibration file,
calibration status, and calibration error, as shown below.


Basic Parameters of Camera Mapping:
• Input Mode: It includes by point or by coordinate.
• Demonstrate Point-Run Point: Select demonstration points and run points. You need to select 2 pairs at least and 8 pairs at most.
• Create Calibration File: Output the calibration file.

Running Parameters of Camera Mapping:
• DOF: It includes three types: scaling, rotation, aspect ratio, tilt, translation and transmission; scaling, rotation, aspect ratio, tilt and translation; and scaling, rotation and translation. The three settings correspond to perspective transformation, affine transformation and similarity transformation respectively.
• Weighting Function: You can choose the least squares, Huber or Tukey algorithm function. You are advised to use the default parameters.
• Weighting Coefficient: Set this parameter when you select Tukey or Huber. The weighting coefficient is the clipping factor of the corresponding method. You are advised to use the default value.

Output Results:
• Scale X/Y: the X/Y scaling components of the conversion matrix from the coordinate system of the running points to the coordinate system of the demonstration points.
• Translate X/Y: the calculated calibration matrix maps the origin of the coordinate system of the demonstration points into the coordinate system of the running points to obtain coordinates X/Y.
• Rotate: the rotation angle (in radians) of the coordinate system of the demonstration points relative to the coordinate system of the running points. When the rotation θ is positive, the X axis of the demonstration coordinate system coincides with the X axis of the running coordinate system after rotating counterclockwise by θ; when θ is negative, it coincides after rotating clockwise by -θ.
• Scale: the number of pixels in the running coordinate system corresponding to a unit length in the demonstration coordinate system.
• Skew: the difference between the Y-axis rotation angle and the X-axis rotation angle of the demonstration coordinate system (in radians).
• Aspect: the ratio of the Y-axis scaling to the X-axis scaling of the demonstration coordinate system.
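
For reference, the sketch below (Python with OpenCV) is an illustration only, not the software's internal algorithm: it estimates a similarity transform from a few hypothetical demonstration/run point pairs and reads out values analogous to the outputs above. The point values and the direction convention are assumptions.

import math
import numpy as np
import cv2

# Hypothetical corresponding pixel pairs from the two cameras (values are made up).
demo_pts = np.array([[100, 100], [400, 100], [400, 300], [100, 300]], dtype=np.float32)
run_pts = np.array([[210, 120], [505, 135], [490, 335], [195, 320]], dtype=np.float32)

# A similarity transform corresponds to the "scaling, rotation and translation" DOF setting.
M, _ = cv2.estimateAffinePartial2D(demo_pts, run_pts)  # 2x3 matrix [[a, -b, tx], [b, a, ty]]
a, b = M[0, 0], M[1, 0]
scale = math.hypot(a, b)                      # pixels in the run system per unit in the demo system
rotate = math.atan2(b, a)                     # radians, demonstration system relative to run system
translate_x, translate_y = M[0, 2], M[1, 2]   # demo-system origin mapped into the run system
print(scale, math.degrees(rotate), (translate_x, translate_y))
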

CalibBoard Calibration

The calibration board calibration is divided into four types, including checkerboard, circle grid
board, Hikrobot I, and Hikrobot II.
Take the checkerboard as an example. After inputting the checkerboard grayscale image and its dimension parameters, the software calculates the mapping matrix, calibration error, and calibration status between the image coordinate system and the checkerboard physical coordinate system. This tool can generate a calibration file to be used by calibration transformation. By clicking Create Calibration File, you can select the path for saving the created calibration file, as shown below.


• Create Calibration File: select the storage path for the created calibration file.
• Calibration File Path: the absolute path of the calibration file. If the file exists in this path, it will be loaded. If it does not exist, loading fails and an error is reported when running.
• Update File: After a round of calibration is completed, if the update file control is enabled, the new round of calibration will update the calibration results to the calibration file.
• Origin (X) and Origin (Y): the origin of the physical coordinate system. Its coordinates can be set, that is, the location of the X-axis and Y-axis origin in the image.
• Rotation Angle: the direction of the physical coordinate system can be adjusted by adjusting the rotation angle. It refers to the rotation angle of the calibration board.
• Coordinate System Mode: select the left-hand coordinate system or the right-hand
coordinate system.
• Physical Size: the side length of each black and white grid of the checkerboard or the center
distance of two adjacent centers of the circular plate, and the unit is mm.
• Calibration Board Type: it is divided into checkerboard, circle calibration board, Hikrobot calibration board type I and Hikrobot calibration board type II. Type I is a self-developed code occupying four checkerboard positions, and type II is a self-developed code placed in the white grid of the calibration board.
- Hikrobot Calibration Board: you can customize and generate a calibration board with the calibration board generation tool under Tool. After definition, the calibration board image is generated in the ...\vm4.1.0\visionmaster4.1.0\applications\tools path.

Running Parameters:
• DOF: you can select according to your demands, including scale, rotation, aspect ratio, tilt, translation and transmission; scale, rotation, aspect ratio, tilt and translation; and scale, rotation and translation. The three settings correspond to perspective transformation, affine transformation and similarity transformation respectively.
• Grayscale Contrast: it refers to the min. value of contrast between adjacent black-and-white squares in a checkerboard image. It is recommended to use the default value.
• Subpixel Window: this parameter indicates whether to adaptively calculate the window size for corner sub-pixel accuracy. When each checkerboard grid occupies more pixels, this value can be increased appropriately. It is recommended to use the default value.
• Set Window Size: the size of the sub-pixel window set by the user. It can be set to about one tenth of the pixel width of each checkerboard square.
• Weighting Function: you can select least squares, Huber, and Tukey algorithm functions.
It is recommended to use default parameter.
• Weighting Coefficient: it is the parameter setting item when Tukey or Huber weight
function is selected. It is the clipping factor of the corresponding method and recommended to use
the default value.

Output Results:
• Calibration Error: calculated from the calibration parameters. The physical coordinates of the extracted calibration board feature points (such as checkerboard corner points or circle centers of a circle calibration board) are mapped back to the image coordinate system, and the error is the average distance from the actual image coordinates.
• Scale: the number of pixels in the image coordinate system corresponding to a unit length in the world coordinate system.
• Pixel Accuracy: the size in the physical coordinate system corresponding to a single pixel.
• Calibration Point Quantity: the number of extracted calibration board feature points.
• Translation X/Y: calculated from the calibration parameters. The origin of the world coordinate system is mapped to the image coordinate system to obtain coordinates X/Y.
• Rotation: the rotation angle (unit: radians) of the world coordinate system relative to the image coordinate system. When θ is positive, after the X-axis of the world coordinate system rotates counterclockwise by θ, it is consistent with the X-axis direction of the image coordinate system. When θ is negative, after the X-axis of the world coordinate system rotates clockwise by -θ, it is consistent with the X-axis direction of the image coordinate system.
• Chamfer: difference between Y-axis rotation angle and X-axis rotation angle of world
coordinate system (unit: radians).
• Aspect Ratio: the ratio of Y-axis scaling to X-axis scaling in the world coordinate system.

NOTICE

When failing to use the calibration tool, you need to adjust the parameters. The common
debugging steps are as follows:
⚫ Check whether the input image is a calibration board image.


⚫ Check whether the calibration type in the basic parameters is set correctly.
⚫ Check whether the parameter setting is reasonable. For checkerboard calibration,
check whether the gray contrast is high. If so, you can change the sub-pixel window
from adaptive to the set value, and set the window size to about checkerboard pixel
width/10.
⚫ For the round dot matrix calibration board, check whether the round dot type is set
correctly, whether the dot roundness threshold is too high, and whether the edge
extraction threshold is unreasonable.

N-Point Calibration

The calibration is mainly used to determine the conversion relations between the camera coordinate
system and the robot arm coordinate system. The N-point calibration realizes the conversion
between the camera coordinate system and the executing structure coordinate system and generates
calibration file via N-point pixel coordinate and physical coordinate. The N needs to be larger than
or equal to 4.
In the actual situation, there are mainly two calibration methods, including upper camera capture
and lower camera alignment, as shown below.

The recommended calibration solution is shown below. The branch module is used to determine whether feature matching is successful or not. If it is successful, the flow enters N-point calibration. Otherwise, a specific character is formatted and sent out to feed back the matching result.


Taking lower camera alignment as an example, N-point calibration is executed by the camera that
is driven by the mechanical arm to move in the direction set by the parameter. Each movement will
trigger the camera to capture a picture. At this time, the calibration module in the solution is
synchronously calibrated. At last, the calibration file is created. The specific parameter settings are
shown below.


Basic Parameters:
• Calibration Points Acquisition: Select trigger acquisition or manual input.
• Calibration Points Input: Select by point or by coordinate.
• Image Point: The calibration point of N-point, and points of other modules can be
subscribed.
• Physical Point: Coordinate point of the robot arm. An image point on the VM image
corresponds to a physical point of the robot arm. (It is recommended not to subscribe, but to generate
automatically.) You can select one from physical angle and physical coordinate.
• Image Rotate Angle: The angle of other modules can be subscribed, and the rotation
consistency and left and right hand coordinate system can be obtained through related calculations.
• World Rotate Angle: The angle of the current pose manipulator is generally obtained from
an external device, and the rotation consistency and left-handed coordinate system can be obtained
through related calculations. You can select one from physical angle and physical coordinate.
• Translation Number: The times for obtaining calibration by moving. It is related with X/Y
movement only, and it is set as 9 by default.
• Rotation Number: The number of rotations is required when the rotation axis is not coaxial
with the center of the image, and it is recommended to set as 3. The rotation happens at the 5th
location.
• Update File: After a round of calibration is completed, if the update file control is enabled,
the new round of calibration will update the calibration results to the calibration file.
• Set Calibration File Path: The absolute path of the calibrate file. If the file exists in this
path, it will be loaded. If it does not exist, it will fail to load and an error will be reported when
running.
• Demonstration: After enabling it, the software can identify whether the external signals
are demonstration signal or not, and you need to set external input character and external triggered
character.
• External Input Character: You can subscribe signals of external devices.
• External Triggered Character: It judges whether the external input signal is the basis of
the demonstration signal. When the external input character is consistent with the external trigger
character, the signal is a demonstration signal.
• Fiducial Point X and Fiducial Point Y: Physical coordinate of calibration original point,
it sets as (0, 0), and the unit is mm.
• Offset X and Offset Y: Physical offset in X or Y direction of each movement of the robot
arm, which can be positive or negative, and the unit is mm.
• Movement First: sets the priority movement direction of the robot arm, X first or Y first.
• Commutation Times: the number of times the robot arm changes direction while moving.
• Angle Offset: the initial rotation angle and the rotation step at each rotation. For example, if it rotates 3 times and the rotation angle goes from -10 degrees to 0 degrees and then to +10 degrees, the fiducial angle is -10 degrees and the offset angle is 10 degrees.


In the image above, the robot arm moves 9 times in the X or Y direction and rotates 3 times, with an offset of 5. Movement is along the X axis first, and the commutation times is 3.
• Use Relative Coordinates: the default state is off. After enabling, the calibration origin can be configured.
• Calibration Origin: it is generally set to 4; since counting starts from 0, point 4 is the middle point.
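
To make the interplay of these motion parameters concrete, the sketch below generates a possible 3×3 translation grid and the rotation angle sequence from the example values above. The serpentine path and all numbers are assumptions for illustration, not the exact trajectory planned by the software or the robot controller.

# Illustrative sketch only: combining the N-point calibration motion parameters.
fiducial = (0.0, 0.0)          # Fiducial Point X/Y, mm
offset = (5.0, 5.0)            # Offset X/Y per move, mm
commutation_times = 3          # assumed: change direction after every 3 points (3x3 grid)
fiducial_angle, angle_offset, rotations = -10.0, 10.0, 3

points = []
for i in range(9):                              # Translation Number = 9
    row, col = divmod(i, commutation_times)
    if row % 2 == 1:                            # serpentine: reverse direction on odd rows
        col = commutation_times - 1 - col
    # "Movement First = X": step along X within a row, along Y between rows
    points.append((fiducial[0] + col * offset[0], fiducial[1] + row * offset[1]))

angles = [fiducial_angle + k * angle_offset for k in range(rotations)]  # [-10.0, 0.0, 10.0]
print(points)
print(angles)   # rotation happens at the 5th location (index 4), the grid centre
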

Running Parameters:
• Camera Mode: it has three types, including static camera upper position, static camera lower position, and dynamic camera.
- Static camera upper position: the camera is fixed and located above the workpiece being shot.
- Static camera lower position: the camera is fixed and located below the workpiece being shot.
- Dynamic camera: the camera moves with the robot arm.
• DOF: you can select according to your demands, including scale, rotation, aspect ratio, tilt, translation and transmission; scale, rotation, aspect ratio, tilt and translation; and scale, rotation and translation. The three settings correspond to perspective transformation, affine transformation and similarity transformation respectively.
• Weighting Function: you can select least squares, Huber, Tukey, and Ransac algorithm
functions. It is recommended to use default parameter.
• Weighting Coefficient: it is the parameter setting item when Tukey or Huber weight
function is selected. It is the clipping factor of the corresponding method and recommended to use
the default value.
• Distance Threshold: the parameter setting item when the Ransac weighting function is selected. It is the distance threshold for eliminating error points: the smaller the value, the stricter the point set selection will be. When the point group's accuracy is not high, this threshold can be appropriately increased. It is recommended to use the default value.
• Sampling Rate: it is the parameter setting item when Ransac weight function is selected.
When the point group's accuracy is not high, the sampling rate can be appropriately reduced. It is
recommended to use the default value.

Output Results:
• Calibration Status: 1 represents calibrated, 0 means failed calibration.
• Evaluate Calibration Error Status: Evaluate whether the calibration error is OK, 0
indicates that the error is within the normal range, and 1 indicates abnormal calibration results.
When the calibration error is large, the accuracy of image point and physical point information can
be checked in the calibration parameters, and can also be analyzed from the moving track of N-point
calibration processing results.
Firstly, check the motion trajectory of the image points. Generally, trajectories in the same direction should be parallel and trajectories in different directions should be perpendicular, which indicates that the selected image points are of high quality; otherwise, a large fitting error will result, as shown in the figure below.

If the rotation error is large, it is necessary to check the rotation image points and the input angle. If the rotation error is small but differs greatly from the rotation position in the real image, check the input angle and camera mode.
From the motion trajectory analysis, assuming that the structure motion is X first and then Y, the
coordinate system in the image is shown in the figure below.


In the coordinate system, the rotation angle from the X positive direction to the Y positive direction
is positive, and vice versa, as shown in the following figure.

Distortion Calibration

The distortion calibration tool is used to calibrate calibration board images that have distortion by inputting a grayscale calibration template image. It generates a calibration file and outputs the calibration error and calibration status, as shown below.


Parameters:
• Correction Point Input: it includes By Point and By Coordinates.
• Correction Center Point: It refers to center point coordinate of perspective distortion
correction. When “Input by Point” is selected, you can link the data of the previous module. When
"by coordinate" is selected, the X and Y coordinates of the correction center need to be customized.
• Distortion Type: it has three types, including perspective distortion, radial distortion and
radial perspective distortion.
• Calibration Board Type: select checkerboard calibration board, or circle calibration board.
• Dot Circularity: the threshold for circle detection. The larger the value, the rounder the detected circles must be.
• Edge Low Threshold: the low threshold used to extract the edge.
• Edge High Threshold: the high threshold used to extract the edge. Only edge points whose edge gradient is between the low and high thresholds can be detected.
• Round Dot Type: black circle on white or white circle on black can be selected.
• Grayscale Contrast: it refers to the min. value of contrast between adjacent black-and-white squares in a checkerboard image. It is recommended to use the default value.
• Median Filter Status: Whether to perform median filtering before corner point extraction.
There are two modes: "Perform Filtering" and "No Filtering". It is recommended to use the default
value.
• Subpixel Window: this parameter indicates whether to adaptively calculate the window
size of corner sub-pixel accuracy. When each checkerboard grid accounts for more pixels, this value
can be increased appropriately. It is recommended to use the default value.
• Set Window Size: adjust window size manually.
• Weighting Function: you can select least squares, Huber, and Tukey algorithm functions.
It is recommended to use default parameter.


• Weighting Coefficient: it is the parameter setting item when Tukey or Huber weight
function is selected. It is the clipping factor of the corresponding method and recommended to use
the default value.

Mapping Calibration

With the tool of mapping calibration, you can convert coordinates of different coordinate systems.
Only the self-developed calibration board is supported. See CalibBoard Calibration for the
generation method of the calibration board. Input two self-developed calibration board images or
calibration files corresponding to the self-developed calibration board images. Calculate the
mapping relations between image 1 and image 2, and then generate a new calibration file for
calibration conversion.

Basic Parameters:
• Calibration Points Input: It selects by point or by coordinate.
• Physical Point: Coordinate point of the robot arm. An image point on the VM image corresponds to a physical point of the robot arm. (It is recommended not to subscribe, but to generate automatically.) You can select one from physical angle and physical coordinate.
• World Rotate Angle: The angle of the current pose of the manipulator, generally obtained from an external device. The rotation consistency and left-handed coordinate system can be obtained through related calculations. You can select one from physical angle and physical coordinate.
• Demonstration: After enabling it, the software can identify whether the external signals are demonstration signals or not.
• External Input Character: It can subscribe signals of external devices.
• External Trigger Character: It identifies whether the external input signal is the basis of the demonstration signal. When the external input character is consistent with the external trigger character, the signal is a demonstration signal.
• Set Calibration File Path: The absolute path of the calibration file. If the file exists in this path, it will be loaded. If it does not exist, loading fails and an error is reported when running.
• Update File: After a round of calibration is completed, if the update file control is enabled, the new round of calibration will update the calibration results to the calibration file.
• Physical X and Physical Y: It refers to the physical coordinates X and Y.

Running Parameters:
• Calibration Board Type: self-developed type I or self-developed type II. Type I is a self-developed code occupying four checkerboard positions, and type II is a self-developed code placed in the white grid of the calibration board.
• DOF: it includes scale, rotation, aspect ratio, tilt, translation and transmission; scale, rotation, aspect ratio, tilt and translation; and scale, rotation and translation, corresponding to perspective transformation, affine transformation and similarity transformation respectively.
• Weighting Function: you can select the least squares, Huber, or Tukey algorithm function. It is recommended to use the default parameter.
• Weighting Coefficient: the parameter setting item when the Tukey or Huber weighting function is selected. It is the clipping factor of the corresponding method, and it is recommended to use the default value.
• Grayscale Contrast: it refers to the min. value of contrast between adjacent black-and-white squares in a checkerboard image. It is recommended to use the default value.
• Median Filtering State: whether to perform median filtering before corner point extraction. There are two modes: "Perform Filtering" and "No Filtering". It is recommended to use the default value.
• Subpixel Window: this parameter indicates whether to adaptively calculate the window size for corner sub-pixel accuracy. When each checkerboard grid occupies more pixels, this value can be increased appropriately. It is recommended to use the default value.
• Set Window Size: the size of the sub-pixel window set by the user. It can be set to about one tenth of the pixel width of each checkerboard square.
Output Results


• Current Angle Point Quantity: the number of corner points extracted in image 1.
• Target Angle Point Quantity: the number of corner points extracted in image 2.
• Mapping Angle Point Quantity: the number of corner points extracted from image 1 that can be mapped to image 2.

N-Image Calibration

N-image calibration is used to create image points and physical points, and to calculate the mapping relationship between the image coordinate system and the physical coordinate system.
Generally, the camera is installed above the self-developed calibration board. Make sure the camera can capture enough corner points and self-developed codes. The min. number of image translations is 3. When the value is larger than or equal to 4, the direction of movement is changed every 3 images. At present, translation calibration, rotation calibration and translation-rotation calibration are supported, as shown in the figure below.

When only translating, the origin of the physical coordinate system is the start point of the first calibration board. When rotating, the origin of the physical coordinate system is the physical point of the rotation center of the image.


Basic Parameter Description


• Update File: After a round of calibration is completed, if the update file control is enabled,
the new round of calibration will update the calibration results to the calibration file.
• Calibration File Path: the absolute path of the calibrate file. If the file exists in this path,
it will be loaded. If it does not exist, it will fail to load and an error will be reported when running.
• Calibration Points Input: select by point or by coordinate.
• Physical Point: the coordinate point of the robot arm. An image point on the VM image
corresponds to a physical point of the robot arm.
• Rotation Angle: the physical coordinate system direction can be adjusted by adjusting the
rotation angle.
• Translation Times: Image quantity for translation.
• Rotation Times: Image quantity for rotation.
• Fiducial Point X and Fiducial Point Y: physical coordinate of calibration original point,
it sets as (0, 0).
• Offset X and Offset Y: physical offset in X or Y direction of each movement of the robot
arm, which can be positive or negative.
• Movement First: set the priority offset direction for each operation of the robot arm,
including X priority and Y priority.
• Commutation Times: the times of robot arm changes direction after moving.
• Fiducial Angle/Angle Offset: the initial rotation angle and rotation angle at each time. If
it rotates 3 times, and rotation angle is from -10 degrees to 0 degree, and then to +10 degrees. The
fiducial angle is -10 degrees and offset angle is 10 degrees.


• Matrix Correction Enable: In the conversion relationship generated by N-image calibration, when only translation calibration is performed, the image coordinates of the upper-left checkerboard corner of the first image are taken as the origin by default. When rotation calibration is included, the image rotation center is taken as the origin of the physical coordinate system by default. If this correspondence does not meet your requirements, matrix correction can be enabled.
• Input Mode: it includes by point and by coordinate.
• Image Point: enable matrix correction, input image points, correct the position of the
matrix, and transform this image point into a specified physical point.
• Physical Point: enable matrix correction, input physical points, correct the position of the
matrix, and transform specific image point into the physical point.
Running Parameters:
• Camera Movement: If the camera has relative movement, you can enable this parameter.
• Calibration Board Type: Self-developed I type, and self-developed II type. Type I is a
self-developed code occupying four checkerboard positions, and type II is a self-developed code
placed in the white grid of the calibration board.
• Grayscale Contrast: it refers to the min. value of contrast between adjacent black-and-
white squares in a checkerboard image, it is recommended to use default value.
• Median Filter Status: Whether to perform median filtering before corner point extraction.
There are two modes: "Perform Filtering" and "No Filtering". It is recommended to use the default
value.
• Subpixel Window: 2 modes: self-adaptive, and configured value.

Output Results:
• Average Error of Pixel Translation: the pixel estimation error when the matrix is calculated from the translation images. The smaller the error, the more stable the movement of the mechanism and the more reliable the calibration result.
• Translate Real Average Error: the translation pixel average error converted to a real error via single pixel accuracy, in mm.
• Average Error of Pixel Rotation: the pixel estimation error when the rotation center is calculated from the rotation images. The smaller the error, the more stable the rotation of the mechanism and the more reliable the rotation center.
• Rotate Real Average Error: the rotation pixel average error converted to a real error via single pixel accuracy, in mm.
• Rotation Center Coordinate X/Y: the image coordinates of the rotation center calculated from the rotation images.

Load Calibration

The load calibration module can reload some calibration files generated by the calibration module
when the subscribed refresh signal is not 0, and output relevant information. The supported
calibration modules are N-point calibration, mapping calibration and N image calibration.
Prerequisite:
Drag the load calibration module to the process editing area, complete the connection with other
modules and ensure that other relevant modules have been connected with the module before.
Steps:
1. Double-click the Load Calibration module to enter the parameter edit window.

2. Click to open the selected calibration file in the Calibration File Path, and then the relevant calibration information is shown as below.

NOTE


 Calibration files can be generated by certain calibration modules. XML-format calibration files generated by N-point calibration, mapping calibration and N-image calibration are supported.

3. Subscribe the refresh signal to an int data source of a preceding module. When the data source subscribed by the refresh signal is not 0, the module automatically reloads the selected calibration file.

Calculation
Single Point Alignment

According to the input target point position (X0, Y0) and the direction and the object point position
(X1, Y1) and direction, the tool of single point alignment calculates the movement amount required
from the object point to the target point, including the position movement amount and angle
movement amount. The alignment module inputs the physical coordinates, so it needs to be used
together with the calibration conversion, as shown below.


Basic Parameters of Single Point Alignment:
• Input Mode: It includes By Point or By Coordinate.
• Demonstrate Point-Run Point: Select demonstration points and run points. You need to select 8 pairs at most.
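
A minimal sketch of the underlying idea is given below, assuming the mechanism rotates about the object point itself; the module's actual result depends on the real rotation center and the calibration, so treat this as an illustration only (the poses are made-up values).

import math  # imported for completeness if angles need wrapping or conversion

def single_point_alignment(target, obj):
    """target/obj: (x, y, angle_deg) in physical coordinates."""
    tx, ty, ta = target
    ox, oy, oa = obj
    d_angle = ta - oa                   # angle movement amount
    d_x, d_y = tx - ox, ty - oy         # position movement amount (rotation about the object point)
    return d_x, d_y, d_angle

print(single_point_alignment((120.0, 80.0, 30.0), (100.0, 75.0, 12.5)))
# -> (20.0, 5.0, 17.5): move +20 mm in X, +5 mm in Y, rotate +17.5 degrees
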

Calculate Rotation

The rotation calculation module can rotate a point or line clockwise or counterclockwise around the rotation center point according to the rotation angle, and obtain the relevant information of the point or line after rotation.
Prerequisite:
Drag the Rotate Calculate module to the process editing area. Complete the connection with other
modules and ensure that other relevant modules have been connected with the module before, such
as Find Angle Bisector.
Steps:
1. Double-click Calculate Rotation module to enter parameter edit window.
2. Select the image data source in the Input Source.
3. Select the data type to be rotated in Input Type: by Point or Line.
4. Select the data source of the point or line to be rotated in Point Input.
5. Subscribe the data source of rotation center in Rotate CenterPoint.
6. Subscribe the data source of rotated angle in Rotate Angle as shown below.


NOTE
 If the subscribed rotation angle data source is positive, the rotation is clockwise; otherwise, it rotates counterclockwise.
 The subscription method of lines and points is similar to that of the find angle bisector
module. See the description of step 3 in Find Angle Bisector for details.

7. Switch to Result Show and set the specific module, color and transparency in Image Display,
and set the text content and color in Text Show.
8. Click Execute or Continue to view the operation results as shown below. The following figure
shows the operation results when line is selected in Input Type.
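
The geometry behind this module can be sketched as follows. This is an illustration only, using the sign convention from the note above (positive angle means clockwise rotation in image coordinates, where the Y axis points down); the coordinates are made-up values.

import math

def rotate_point(point, center, angle_deg):
    px, py = point
    cx, cy = center
    # In an image coordinate system (Y axis pointing down), this standard rotation formula
    # produces a clockwise rotation on screen for a positive angle; the convention is an
    # assumption for illustration.
    a = math.radians(angle_deg)
    dx, dy = px - cx, py - cy
    rx = cx + dx * math.cos(a) - dy * math.sin(a)
    ry = cy + dx * math.sin(a) + dy * math.cos(a)
    return rx, ry

print(rotate_point((200.0, 100.0), (100.0, 100.0), 90.0))  # approximately (100.0, 200.0)
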


Point Set Alignment

According to the X array and Y array of the input target points, and the X array and Y array of the object points, the point set alignment tool calculates the movement amount required from the object points to the target points, including the position movement amount and the angle movement amount. The alignment module takes physical coordinates as input, so it needs to be used together with calibration transformation, as shown below.


Basic Parameters of Point Set Alignment:
• Input Mode: It includes by point or by coordinate.
• Demonstrate Point-Run Point: Select demonstration points and run points. You need to select 2 pairs at least and 8 pairs at most.
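
As an illustration of the idea (not the module's own solver), the sketch below fits a rigid rotation and translation that best maps a hypothetical object point set onto a target point set; all coordinates are made up.

import numpy as np

obj = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])   # object points (mm)
tgt = np.array([[2.0, 1.0], [11.9, 2.0], [11.4, 7.0], [1.5, 6.0]])   # target points (mm)

# Kabsch-style 2-D rigid fit: centre both sets, then solve for the best rotation via SVD.
obj_c, tgt_c = obj - obj.mean(axis=0), tgt - tgt.mean(axis=0)
U, _, Vt = np.linalg.svd(obj_c.T @ tgt_c)
R = (U @ Vt).T                                   # 2x2 rotation matrix (no reflection check here)
angle_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
translation = tgt.mean(axis=0) - R @ obj.mean(axis=0)
print(angle_deg, translation)                    # angle and position movement amounts
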

Calibration Transformation

After the calibration is completed, the conversion between the camera coordinate system and the
robot arm coordinate system can be realized by the calibration conversion module. Click Load
Calibration File Path in Calibration Transformation interface, and select calibration file path. The
flow is shown below.
Find the position of the work piece in the camera coordinate system via the feature matching
template, load the saved calibration file, and click Run to complete the operation. The position of
the work piece in the robot arm coordinate system after the calibration conversion is shown below.


Through external communication, the camera is controlled to capture images. The pixel coordinates of the workpiece image are located using the feature template function. The generated calibration file is loaded in the calibration transformation module, and the pixel coordinates are output as robot arm coordinates. Through formatting, the result is sent via external communication to the robot arm unit to complete the control of the robot arm.
Calibration Transformation Parameters:
• Image Coordinate Input: select the input method by point or by coordinate, and the input
source.
• Angle: the feature angle that needs physical transformation.
• Coordinate Type: image coordinates or physical coordinates can be selected. If image coordinates are selected, the input is image coordinates and the output is physical coordinates; the reverse applies when physical coordinates are selected.
• Load Calibration File: the absolute path of the calibration file. If the file exists in this path, it will be loaded. If it does not exist, loading fails and an error is reported when running. Calibration files in iWcal and xml formats can be loaded.
• Refresh Signal: set to 0 not to update, set to non-zero to update. When it is set to non-zero,
the file in the calibration file path will be automatically converted according to the updated
calibration file.
Output Results:
• Transform Coordinates X/Y: coordinates obtained by calibration conversion / inverse
conversion of input coordinates.
• Transform Angle: angles obtained by calibration conversion / inverse conversion of input
angles.
• Single Pixel Accuracy: the size in the physical coordinate system corresponding to a single
pixel.


• Translation X/Y: calculated from the calibration parameters. The origin of the world coordinate system is mapped to the image coordinate system to obtain coordinates X/Y.
• Rotation: the rotation angle (unit: radians) of the world coordinate system relative to the image coordinate system. When θ is positive, after the X-axis of the world coordinate system rotates counterclockwise by θ, it is consistent with the X-axis direction of the image coordinate system. When θ is negative, after the X-axis of the world coordinate system rotates clockwise by -θ, it is consistent with the X-axis direction of the image coordinate system.
• Scale: the unit length in the world coordinate system corresponds to the number of pixels
in the image coordinate system.
• Chamfer: difference between Y-axis rotation angle and X-axis rotation angle of world
coordinate system (unit: radians).
• Aspect Ratio: the ratio of Y-axis scaling to X-axis scaling in the world coordinate system.
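
As an illustration of how these output parameters fit together, a similarity-style mapping from an image point to a physical point can be sketched as below. The exact composition and direction conventions depend on the loaded calibration file, so this is only a sketch with made-up numbers, not the module's formula.

import math

def image_to_physical(px, py, scale, rotation, tx, ty):
    # scale: pixels per physical unit; rotation: radians; tx, ty: translation terms of this
    # assumed mapping (not necessarily identical to the module's Translation X/Y outputs).
    x = (math.cos(rotation) * px - math.sin(rotation) * py) / scale + tx
    y = (math.sin(rotation) * px + math.cos(rotation) * py) / scale + ty
    return x, y

print(image_to_physical(1024.0, 768.0, scale=20.0, rotation=math.radians(1.5), tx=5.0, ty=-3.0))
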

Scale Transformation

The tool of scale transformation is used to convert pixel units like distance and width to physical
units. For specific use, you only need to load the calibration file, set the distance to be converted,
and subscribe refresh signal, as shown below.


Output Results:
• Transform Results: the distance after the entered pixel interval is transformed through the calibration file.
• Single Pixel Accuracy: the size in the physical coordinate system corresponding to a single pixel.
• Translation X/Y: calculated from the calibration parameters. The origin of the world coordinate system is mapped to the image coordinate system to obtain coordinates X/Y.
• Rotation: the rotation angle (unit: radians) of the world coordinate system relative to the image coordinate system. When the rotation angle θ is positive, after the X-axis of the world coordinate system rotates counterclockwise by θ, it is consistent with the X-axis direction of the image coordinate system. When θ is negative, after the X-axis of the world coordinate system rotates clockwise by -θ, it is consistent with the X-axis direction of the image coordinate system.
• Scale: the number of pixels in the image coordinate system corresponding to a unit length in the world coordinate system.
• Chamfer: the difference between the Y-axis rotation angle and the X-axis rotation angle of the world coordinate system (unit: radians).
• Aspect Ratio: the ratio of the Y-axis scaling to the X-axis scaling in the world coordinate system.
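
Assuming isotropic pixels, Single Pixel Accuracy is simply the reciprocal of Scale, and the transform result is the pixel interval multiplied by it. The sketch below uses made-up numbers purely for illustration.

# Illustrative sketch only: pixel distance to physical distance via single pixel accuracy.
scale = 20.0                          # pixels per unit length in the world coordinate system
single_pixel_accuracy = 1.0 / scale   # physical size (e.g. mm) covered by one pixel

pixel_distance = 350.0                # the pixel interval entered into the module
physical_distance = pixel_distance * single_pixel_accuracy
print(physical_distance)              # 17.5 (same unit as the calibration, typically mm)
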

Line Alignment

According to the input target lines and object lines, which are both line arrays, the line alignment tool calculates the movement amount required from the object lines to the target lines, including the position movement amount and the angle movement amount, as shown below.


Basic Parameters of Line Alignment:
• Input Mode: It includes by point or by coordinate.
• Demonstrate Point-Run Point: Select demonstration points and run points. You need to select 2 pairs at least and 8 pairs at most.
• Alignment Shape: According to the shape of the lines, select opening or closing. If the shape formed by the lines is closed, select closing.

Variable Calculation

This tool supports multiple input mixed calculations. You can customize parameters or select
module data for calculation.


Calculator Parameters:
• Reset: Click it to reset to the initial value.
• Execute: Click it to execute once.
• Confirm: Click it to save and exit the tool.
• Calculator Initial Value/Calculator Initialization: the initial value of the variable. When Calculator Initialization is enabled, each process execution resets the variable to the set initial value.
• Expression: You can edit the expression here. After editing, you can click Check to verify its correctness.
• Output Type: It has three types, including int, float and point. Note: expressions of the point type support plus, minus and multiplication only.

The expression supports the following functions:
• sin(x), cos(x), tan(x): x is an angle; the sine, cosine or tangent is returned.
• sinh(x), cosh(x), tanh(x): x is an angle; the hyperbolic sine, cosine or tangent is returned.
• asin(x), acos(x) (-1 ≤ x ≤ 1), atan(x): the corresponding angle value is returned.
• asinh(x), acosh(x), atanh(x): the corresponding angle value is returned.
• max(x, y) / min(x, y): returns the larger / smaller of x and y.
• round(x): rounds the decimal places to the ones place, so the decimal part of the output is 0.
• pow(x, y): x raised to the power of y.
• abs(x): returns the absolute value of x.
• ceil(x): the smallest integer value that is larger than or equal to x.
• floor(x): the largest integer value that is smaller than or equal to x.
• trunc(x): the integer part of x, that is, the nearest integer rounded towards zero.
• sqrt(x): returns the square root of x.
• exp(x): returns e raised to the power of x.
• log(x): returns the natural logarithm (base e) of x.
• log10(x): returns the logarithm of x to base 10.

Image Processing
The tool of image processing is used to process the target image. The sub-menu of image processing
includes image combination, image morphology, image binarization, image filter, image
enhancement, image math, image sharpness, image fixture, shade correction, affine transformation,
polar unwarp, copy fill, frame mean, etc.


Image Combination

The image combination tool combines five image processing modules: morphological processing, image binarization, image filter, image enhancement, and shade correction. Its operation process is shown below.


Steps:
1. Add different image processing modules to the processing list.
2. Check Enable to enable the corresponding module function.

3. Click of the corresponding module to set the running parameters.

4. Click and then output.

NOTE
 The order in which the corresponding modules operate can be adjusted.

Image Morphology

The image morphology tool is used to extract image components that are meaningful for expressing and describing region shapes, so that subsequent recognition can obtain the most essential shape features of the target object, such as edges and connected regions. This tool operates on the white pixels in the image, and the related parameters are described below.
• Morphology Type: it supports dilation (expansion), erosion, opening and closing.
- Dilation (expansion) is to separate independent image elements, and connect
adjacent elements.
- Erosion is a corresponding operation to dilation.


- Opening operation means erosion first and then dilation.


- Closing operation means dilation first and then erosion.
• Structuring Element: it is a common parameter, and it supports rect, ellipse and cross.
• Iteration Times: the max. number of times the morphological operation algorithm runs; it only takes effect for dilation and erosion.
• Element Width: it is the width of the structuring element.
• Element Height: it is the height of the structuring element.
Corrosion
Whenever a sub-image identical to the structuring element is found in the target image, the pixel position corresponding to the origin of the structuring element in that sub-image is marked as 1. The set of all such marked pixels on the target image is the result of the corrosion (erosion) operation.

Expansion
First, reflect the structuring element about its origin to obtain the reflection set, and then translate the reflection set over the target image. The set of origin positions at which the translated reflection set and the target image share at least one non-zero common element is the result of the expansion (dilation) operation.


Open Operation
Performing the corrosion operation on the target image first and then the expansion operation with the same structuring element is called the open operation. The open operation has the effect of smoothing the outer boundary of the image.
Close Operation
Performing the expansion operation on the target image first and then the corrosion operation with the same structuring element is called the close operation. The close operation has the effect of smoothing the inner boundary of the image.
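
As an illustration only, the sketch below shows how these four operations could be reproduced with OpenCV in Python; the library choice, file name, and element size are assumptions, not part of DobotVisionStudio.

# A minimal Python/OpenCV sketch of the four morphology types (illustrative only).
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)          # hypothetical grayscale input
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))  # rect element, width 5, height 3

dilated = cv2.dilate(img, kernel, iterations=1)             # dilation: grows white regions
eroded = cv2.erode(img, kernel, iterations=1)               # erosion: shrinks white regions
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)      # opening: erosion then dilation
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)     # closing: dilation then erosion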

Image Binarization

Image binarization creates a binary image that reflects the overall and local features of the image from a 256-level grayscale image by selecting an appropriate threshold.
• Binarization Type: it refers to the segmentation method of binarization, including hard threshold binarization, mean binarization, Gaussian binarization, and auto binarization. With hard threshold binarization, pixels whose grayscale lies between the low threshold and the high threshold are white in the binarized image; otherwise they are black. With auto binarization, you do not need to adjust other parameters: the area with large grayscale is shown as a white area in the binarized image, and the area with small grayscale is shown as a black area.
• Hard Threshold Binarization:
- Low threshold: the min. value of the threshold range.
- High threshold: the max. value of the threshold range.


• Mean Binarization:
- Filter kernel width: the width of the mean filter kernel. Increasing this value makes the mean-filtered image smoother.
- Filter kernel height: the height of the mean filter kernel. Increasing this value makes the mean-filtered image smoother.
- Comparison type: it has four comparison types, including greater than or equal to, less than or equal to, equal to, and not equal to. If the condition is met, the pixel is displayed as a white area in the binarized image; otherwise it is displayed as a black area.
- Threshold offset: the compensation value added to the current pixel value during mean or Gaussian binarization; the compensated pixel value is then compared with the mean-filtered or Gaussian-filtered image.
• Gaussian Binarization:
- Gaussian filter kernel: it refers to the size of the filter kernel. Increasing this value makes the image after Gaussian filtering smoother.
- Gaussian standard deviation: it refers to the smoothing level of the Gaussian filter.
- Comparison type: it has four comparison types, including greater than or equal to, less than or equal to, equal to, and not equal to. If the condition is met, the pixel is displayed as a white area in the binarized image; otherwise it is displayed as a black area.
- Threshold offset: the compensation value added to the current pixel value during mean or Gaussian binarization; the compensated pixel value is then compared with the mean-filtered or Gaussian-filtered image.
• Auto Binarization: The software will automatically perform image binarization.
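
For reference, a hedged Python/OpenCV sketch of the four binarization types described above is shown below; OpenCV, the file name, and the threshold values are assumptions. Note that OpenCV's adaptive threshold uses a constant C that plays roughly the role of the threshold offset (with opposite sign).

import cv2

gray = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)   # hypothetical grayscale input

# Hard threshold: pixels whose grayscale lies in [low, high] become white (255).
hard = cv2.inRange(gray, 80, 200)

# Mean binarization: each pixel is compared with the local mean minus the constant C.
mean_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -5)

# Gaussian binarization: comparison against a Gaussian-weighted local mean.
gauss_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 31, -5)

# Auto binarization: Otsu's method selects the threshold automatically.
_, auto_bin = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)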


Image Filter

The tool of Image filter is used to suppress the noise of the target image while reserving the image
detail features as much as possible.
Image Filter
Gaussian Filter: Gaussian filtering is a linear smoothing filter; it is very effective for suppressing noise that obeys a normal distribution.
Median Filter: The median filter is a non-linear smoothing filter used to eliminate salt-and-pepper noise in the image. It can eliminate noise while protecting the edges of the signal so that they are not blurred.
Mean Filter: The mean filter is a normalized filter, which has a good suppression effect on periodic interference noise.
Inverse Filter: It inverts the grayscale of the target image.
Edge Extraction: It binarizes the edge image whose gradient magnitude is within the edge threshold range to extract the edge.

If you want to recognize the first line of characters in the figure below, but because the characters
are composed of a dot matrix and the gray level difference is small, it is difficult to recognize. After
image binarization, corrosion, and Gaussian filtering, the second line of characters can be obtained.
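
The filters listed above can be approximated with standard OpenCV calls, as in the illustrative sketch below (OpenCV, the file name, and the kernel sizes are assumptions; Canny is used here only as one possible way to obtain binarized edges).

import cv2

gray = cv2.imread("chars.png", cv2.IMREAD_GRAYSCALE)   # hypothetical grayscale input

gaussian = cv2.GaussianBlur(gray, (5, 5), 0)           # suppresses normally distributed noise
median = cv2.medianBlur(gray, 5)                       # removes salt-and-pepper noise, keeps edges
mean = cv2.blur(gray, (5, 5))                          # normalized box (mean) filter
inverse = 255 - gray                                   # inverts the grayscale
edges = cv2.Canny(gray, 50, 150)                       # binary edges within a gradient threshold range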


Image Enhancement

The tool of image enhancement includes sharpness, contrast adjustment, Gamma adjustment, and
brightness adjustment.
Image Enhancement

Sharpness: Image sharpening highlights the edges, contours, and features of linear target elements in the image. This filtering method improves the contrast between the edge of the object and the surrounding pixels, so it is also called edge enhancement. It has two parameters, sharpen intensity and kernel size. The larger the sharpen intensity, the sharper the image. The range of the kernel size is between 1 and 51.

Contrast: Image contrast is the perception of the difference in image color and brightness. The greater the contrast, the greater the difference between the object and its surroundings, and vice versa. Contrast coefficient: it is a common parameter used to control the contrast. 100 stands for no adjustment; a value larger than 100 increases the contrast, and a value smaller than 100 reduces it.

Gamma: It is a common parameter, and 1 refers to no adjustment. Gamma adjustment edits the gamma curve of the image to achieve non-linear tone editing. It can detect the dark portion and the light portion of the image signal and increase their ratio, thus improving the image contrast. However, a parameter that is too large will make the picture dark. If the Gamma is between 0 and 1, the dark areas of the image become brighter; if the Gamma is between 1 and 4, the dark areas of the image become darker.


Brightness: An image that is overexposed appears very white, and one with insufficient light appears very dark. In such cases, brightness correction can be carried out using curdst[i] = offset + cursrc[i] * gain, where cursrc[i] is the current gray value of the input image, curdst[i] is the current gray value of the output image, gain is the brightness correction gain, and offset is the brightness correction compensation. The calculated values of curdst[i] are limited to the range [0, 255].
Gain: it is used to increase the overall pixel brightness of the image. The default value is 0 and the adjustment range is from 0 to 100.
Compensation: it is used to add or subtract this value for every pixel of the image. The default value is 0 and the adjustment range is from -255 to 255.
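
A minimal numpy sketch of this brightness formula, assuming floating-point intermediate values and clipping to [0, 255], is shown below; it is an illustration of the formula, not the tool's implementation.

import numpy as np

def adjust_brightness(src: np.ndarray, gain: float, offset: float) -> np.ndarray:
    dst = offset + src.astype(np.float32) * gain   # curdst[i] = offset + cursrc[i] * gain
    return np.clip(dst, 0, 255).astype(np.uint8)   # limit the result to [0, 255]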

Image Computing

Image math requires two images of the same size to be configured, and it only supports full-image operation. The principle of image math is to compute on the gray values of the pixels at the same coordinates in the two images and obtain a new image, as shown below.


Refer to the figure below for the effect of image absolute difference calculation.

• Image Input 1/2: you can select the input source images, including input source 1 and input source 2, and the computation is executed on these two images.
• Image Weight 1/2: it refers to the multiplication coefficient applied to the gray values of input source 1/2.
• Calculation Type: it includes addition, subtraction, absolute difference, max. value, min. value, mean value, and, or, etc.
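
The sketch below illustrates a few of these calculation types with OpenCV/numpy; the file names and weights are hypothetical.

import cv2
import numpy as np

img1 = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input source 1
img2 = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input source 2 (same size)

w1, w2 = 1.0, 1.0                                   # image weights 1 and 2
added = cv2.addWeighted(img1, w1, img2, w2, 0)      # weighted addition
abs_diff = cv2.absdiff(img1, img2)                  # absolute difference
maximum = np.maximum(img1, img2)                    # per-pixel maximum
mean_img = ((img1.astype(np.uint16) + img2) // 2).astype(np.uint8)   # per-pixel mean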

Distortion Correction

The distortion correction tool corrects images by loading a distortion calibration file, and outputs the corrected image. After loading the distortion calibration file and setting the parameters of the calibration board, you can click Run to operate. A comparison between the original image and the corrected image is shown below.


Correction Mode: it has three types, including perspective distortion correction, radial distortion
correction and radial perspective distortion correction.
- Perspective distortion correction: it is used to solve the perspective transformation matrix
of the image. It is applied to the tilted calibration board (which is not perpendicular to the
camera optical axis), the lens distortion is relatively small, and you can obtain an image
without perspective distortion.
- Radial distortion correction: it is used to solve the radial distortion parameters of the image. It is applied to estimate the radial distortion coefficient of the lens. If you do not need to remove the perspective distortion of the image but need to remove the radial distortion, you can select this mode.
- Radial perspective distortion correction: it is applied to general scenes, and it can solve
both perspective distortion and radial distortion.

Image Clarity

The image clarity tool assesses the sharpness of an input image and outputs an image sharpness score. This tool can be used to judge whether the camera is clearly focused, as shown below.


• Assess Mode: it has two modes, including auto correlation and gradient square. The auto
correlation mode is applied to auto focus scene, and the gradient square mode is applied to the scene
with rich image edge information.
• Noise Level: the default value is 0. When the image gray level difference is large, you can
appropriately increase this parameter. It is valid in the auto correlation mode only.
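
As a rough illustration of a gradient-based focus score (the tool's exact auto correlation and gradient square formulas are not documented here, so this is an assumption), a Python/OpenCV sketch might look like this:

import cv2
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)    # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)    # vertical gradient
    return float(np.mean(gx * gx + gy * gy))  # a higher score indicates a sharper, better focused image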

Image Fixture

Image fixture performs image offset correction on the target object by pre-setting the initial information of the reference point and using the movement of the reference point (position and angle) detected during operation, so that the corrected image and the reference image have the same position. During the process, it is sometimes impossible to guarantee that the conveyed objects will not shift, as shown below.

When the position of the object is shifted in the field of view, the image needs to be corrected to
ensure the accuracy of the positioning. The correction information in the image correction can be
imported from the position correction. It is recommended to use a standard image to establish a
template to ensure the accuracy of the correction information.
Image Fixture Parameters
By Info: You can bind the correction information in the position correction module directly.
By Point: You can bind the reference point, reference angle, operating point, and operating angle transmitted from the previous module.
By Coordinate: The coordinates of the reference point and the operating point can be customized. This method can be selected when the image needs to be rotated.

Shade Correction

Shadow correction performs mean filtering on the input image, and compares the mean filtered
image with the original image to obtain the image residuals, and resets the pixels according to a
certain rule. It is mainly used to perform illumination correction on the input image with uneven
illumination.
The image before shade correction is shown below:


The image after shade correction is shown below:

The parameters of shade correction are shown below:


⚫ Filter Kernel Size: it refers to the size of the filter kernel, and its range is from 1 to 50.
⚫ Gain: it refers to the gain of signal amplification, and its range is from 0 to 100.
⚫ Compensation: it refers to the brightness compensation value, and its range is from 0 to 255.
⚫ Noise: it is used to eliminate noise. If the signal is lower than this parameter value, it will be set to 0. Its range is from 0 to 255.


⚫ Direction: it has three direction options, including X, Y, and XY, all of which indicate the direction of the filter kernel.
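
A minimal sketch of this idea, assuming OpenCV/numpy and treating the mean-filtered image as the illumination estimate, is shown below; the parameter names mirror the list above, but the implementation details are assumptions.

import cv2
import numpy as np

def shade_correct(gray, kernel_size=31, gain=1.0, offset=128, noise=0):
    background = cv2.blur(gray, (kernel_size, kernel_size))   # low-frequency illumination estimate
    residual = gray.astype(np.float32) - background           # remove the uneven illumination
    residual[np.abs(residual) < noise] = 0                    # signals below the noise level are set to 0
    return np.clip(residual * gain + offset, 0, 255).astype(np.uint8)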

Affine Transformation

The affine transformation has the function of cutting and zooming the image. Select the camera
image on the right side of the image display source to frame the ROI. After frame selection, the ROI
area will be cut out and displayed, as shown below.

⚫ Scale: it refers to the image's scaling coefficient.
⚫ Aspect Ratio: it refers to the ratio of width to height.
⚫ Interpolation: it includes two types, nearest neighbor and bilinear.
⚫ Fill Mode: it includes two types, constant and copy nearby.
⚫ Fill Value: its range is from 0 to 255.
⚫ Width: dimension × image height.
⚫ Height: dimension × image width × aspect ratio.

Ring Expansion

In the ring expansion interface, select the local image in the input source, otherwise the ROI cannot
be selected. After that, in the specific rectangular area, ring expansion can be performed, as shown
below.


⚫ Arrow 1: Adjusts the radius of the outer circle; the inner ring will not change.
⚫ Arrow 2: Adjusts the curvature of the inner and outer rings at the same time.
⚫ Arrow 3/4: Adjusts the arc length of the inner and outer rings at the same time.
⚫ Arrow 5/6: When one is selected, the other is used as the reference point for rotation and scaling.
⚫ Direction of the circle: It has two directions, counterclockwise and clockwise. The arrow direction is the same for the inner and outer circles. The image above shows the counterclockwise direction, and you can adjust the direction of the circle via the arrow 3 or arrow 4 direction.
⚫ Radial direction: It has two directions, from the inside to the outside and from the outside to the inside. You can adjust the arrow direction via the arrow 2 location.

Copy and Fill

The copy and fill tool processes the ROI and the ROI's min. circumscribed rectangle. Copy copies the image inside the ROI and fills the area that is inside the min. circumscribed rectangle but outside the ROI. Fill fills the areas inside and outside the ROI within the circumscribed rectangle.


When you want to process a certain area separately in the whole picture, you can choose the copy
mode shown in the figure below.

• Process Type: You can select fill or copy.


• Fill Value outside the Region: It refers to the fill value outside the ROI region, and its range is from 0 to 255. The fill effect can be seen in the result display only when the ROI is not a rectangle or the entire image.
• Fill Value inside the Region: It refers to the fill value inside the ROI region, and its range is from 0 to 255. This parameter is valid only when the process type is fill.

Frame Mean


The frame mean tool calculates the average value of the pixels in an ROI region of the same size across multiple frames of images, and outputs the average image of the region.
Steps:
1. Select the ROI, and perform the frame mean operation on the ROI or on the full image.
2. Click Current Image Count to accumulate. Click Current Image Ignore to skip the frame mean operation for the current image, as shown below.

• Accumulate: It counts at most 100 input images and outputs the average image.
• Ignore: It skips the accumulated statistics for the current input image.
• Clear Statistics Image: It clears the average image obtained by the current statistics. Before executing the statistical operation for the ROI region, you need to clear the statistics image.
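
The accumulation logic can be pictured as a simple running sum, as in the hypothetical numpy sketch below; ignoring an image simply means not calling accumulate for it.

import numpy as np

class FrameMean:
    def __init__(self):
        self.total, self.count = None, 0

    def accumulate(self, roi: np.ndarray):          # "Current Image Count"
        self.total = roi.astype(np.float64) if self.total is None else self.total + roi
        self.count += 1

    def mean_image(self) -> np.ndarray:             # average image of the accumulated frames
        return (self.total / self.count).astype(np.uint8)

    def clear(self):                                # "Clear Statistics Image"
        self.total, self.count = None, 0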

Image Normalization

The image normalization tool performs a gradation transformation on input images. The effects of this tool are shown below; from left to right, they are the original image, the histogram equalization effect, histogram normalization, and standard deviation normalization.

Normalization Type: It has three types, including histogram equalization, histogram normalization,
and standard deviation normalization.
Image Normalization


Histogram Equalization: Equalization helps to extend the image histogram. After equalization, the gray level range of the image is wider, which effectively enhances the contrast of the image.
Histogram Normalization: It transforms the histogram distribution of the input image, with the specified left-end and right-end ratios removed, into the specified histogram distribution. Gray value range: the minimum value is the gray scale at the left end of the histogram, and the maximum value is the gray scale at the right end of the histogram.
Standard Deviation Normalization: It transforms the input image's gray mean and standard deviation to the target mean and standard deviation gray level.
- Target mean value: it refers to the target mean value.
- Target standard deviation: it refers to the target standard deviation.
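
An illustrative Python/OpenCV sketch of the three normalization types is shown below; the min-max stretching and the target mean/standard deviation values are assumptions used for demonstration.

import cv2
import numpy as np

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # hypothetical grayscale input

equalized = cv2.equalizeHist(gray)                          # histogram equalization

# Histogram (min-max) stretching to a specified gray value range, e.g. [0, 255].
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

# Standard deviation normalization towards a target mean and standard deviation.
target_mean, target_std = 128.0, 40.0
z = (gray - gray.mean()) / (gray.std() + 1e-6)
std_norm = np.clip(z * target_std + target_mean, 0, 255).astype(np.uint8)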

Image Correction

Lens radial distortion is the general term for the inherent perspective distortion of an optical lens. The image shape distortion caused by the inherent features of optical lenses (a convex lens converges light rays and a concave lens diverges them) is divided into barrel distortion and pincushion distortion. This function can correct these two kinds of distortion, as shown below.

• Image Correction Parameters


- Expansion Amount: When the value is larger than 0, it corrects barrel distortion. When the value is smaller than 0, it corrects pincushion distortion.
- Scaling Amount: Zooms the whole image without changing its resolution. The image is zoomed in when the value is above 0 and zoomed out when it is below 0.

Geometric Transformation

The geometric transformation can be used for horizontal, vertical, and combined horizontal-and-vertical mirroring of the image. The entire image can also be rotated by a certain angle after the mirror transformation. When "none" is selected as the mirror direction, setting an angle directly rotates the image.
• Horizontal mirroring is to reverse the left and right parts of the image, with the vertical
center axis of the image as the center.
• Vertical mirroring is to reverse the upper and lower parts of the image, with the horizontal
central axis of the image as the center.
• Diagonal mirroring is to reverse the image by centering on the intersection of the horizontal
central axis and the vertical central axis of the image. This is equivalent to mirroring the image
horizontally and vertically in succession, as shown below.
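
These mirror operations map directly onto OpenCV flips, as in the illustrative sketch below (OpenCV and the file name are assumptions).

import cv2

img = cv2.imread("board.png")                        # hypothetical input

horizontal = cv2.flip(img, 1)                        # mirror about the vertical center axis
vertical = cv2.flip(img, 0)                          # mirror about the horizontal center axis
diagonal = cv2.flip(img, -1)                         # horizontal and vertical mirroring in succession
same_as_diagonal = cv2.rotate(img, cv2.ROTATE_180)   # diagonal mirroring equals a 180° rotation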

Image Stitch

Due to hardware limitations, the field of view of a single camera cannot cover the entire field of
view. When the display of the entire object is required in the project, multiple parts of the object
need to be stitched into a whole picture, as shown below.


Image stitching only supports calibration with self-developed calibration board.


Take the stitching of 9 images as an example. Before stitching, you need to set up 9 cameras in sequence above the calibration board, or move a single camera to 9 fixed areas in a fixed way to shoot, and finally the 9 pictures need to cover the entire calibration board.
After inputting a series of calibration board images, you can get the relative position relationship
between the images based on the checkerboard corner position information of different images.
The image stitch parameters can be set in the basic parameters, and the model can be used directly
after the template configuration is completed based on the parameter settings, as shown below.


Image Stitch Basic Parameters

Input Mode: There are two input methods, i.e., single source and multiple source.
Single source: select one image source; single source corresponds to an application where a single camera moves multiple times.
Multiple source: stitch several image sources; multiple source corresponds to an application where multiple cameras shoot at fixed positions.
Stitch Number: Customize the number of images that need to be stitched. Fill in the number of image sources in the Stitch Number box, and finish stitching by clicking Execute.
Stitch Method: A×B is the image layout of A rows and B columns; its value affects the image distribution in the stitch mode.

The modeling process is as follows:


1. First take a picture of the calibration board in 9 different areas, and select the images to load in
sequence in the model settings, as shown below.

Parameter Configuration

Calibration Board Type: Hikvision Type I and Hikvision Type II.
Grayscale Contrast: Grayscale contrast refers to the contrast between the black and white grids. When the grayscale values of the black and white grids in the image are close, the grayscale contrast can be set lower.
Median Filtering Enable: Median filtering is enabled after it is turned on.
Subpixel Window: This parameter indicates whether to adapt the window size for corner sub-pixel accuracy. When each square of the checkerboard occupies more pixels, the value can be increased appropriately. The default value is recommended.
One Key Replacement Parameter: Replace the calibration parameters of all other images with the current parameters.


2. After the image is loaded, click Feature Extraction to extract the feature. If the feature extraction
is successful, the gray triangle in the lower left corner of the image will turn green, prompting the
end of feature extraction.
3. After the feature extraction is completed, click Create Model to generate the model, and click
Stitch Preview to preview the stitching effect picture.
After the setting is completed, the single camera takes pictures in a fixed order several (stitch number)
times or the multiple cameras take pictures once to complete the stitching, as shown below.

Merge Type: It includes merge mean, merge min, merge max, and merge uncover.
• Merge mean: It is the default method. The repeated area is averaged before the merge is completed; usually the effect is better.
• Merge max: Take the maximum value in the coincident image area and add it to the stitched image. It can be used to stitch images that are bright and dark in the middle.
• Merge min: Take the minimum value in the coincident image area and add it to the merged image.
• Merge uncover: The stitched image will not be stitched repeatedly, and only the un-stitched area will be processed. This solves the ghosting phenomenon in stitching, but the gradation at the image joints is poor.
Cut Ratio: The range is between 0 and 25. The larger this parameter, the smaller the processing area and the less time the module runs. This parameter is mainly for cases where the images overlap by a large amount.
Auto Clear:
• If it is enabled, the input image will be deleted automatically after one stitching of the image is completed.
• If it is disabled, after completing one image stitching, when another image is input it will replace one of the 9 images that were stitched together previously.

Multiple Images Fusion

This tool needs to be used in conjunction with the multiple images collection. Use one camera together with multi-angle, same-spectrum light sources (commonly four-angle or eight-angle light sources), and obtain the same number of images by turning on the light sources one at a time.
This tool can obtain distinct shape images and texture images, which can be used in scenes such as distinguishing, blemish/stain detection, character recognition (convex), character recognition (concave), and so on. The interface and parameters of multiple images fusion are shown below.

Output Image Type: It includes all, reflection and shadow.
Filter Size: The size of the filter kernel; its range is from 1 to 50. Its function is to filter out the noise in the shadow image. The larger the value, the more blurred the shadow image and the stronger the noise filtering, but the impact on the target is equally large.
Enhancement Enable:
• If it is enabled, you can set the background brightness and contrast coefficient.
• If it is disabled, the software will automatically generate the shadow image.


Background Brightness: The larger the value, the darker the background of the shadow image; the smaller the value, the whiter the background of the shadow image.
Contrast Coefficient: The larger the value, the more obvious the shadow in the shadow image; the smaller the value, the less obvious the shadow.
Direction Enhancement Type: It includes none, x-direction and y-direction.
Direction Enhancement Grade: It controls the intensity of the direction enhancement.
Halo Removal Grade: If 4 multi-angle light images are input, the value range should be 1 to 4; if 8 multi-angle light images are input, the value range should be 1 to 8. The larger the value, the greater the removal strength and the darker the output reflection image. If the input is 4 images, it is recommended to configure it as 3; if it is 8 images, it is recommended to configure it as 5.

NOTE
 This tool currently supports grayscale images only.
 Input at least 3 and at most 8 images collected by multi-angle, same-spectrum light sources.

Color Processing
Color Extraction

Color extraction is used to segment the target area from a color image and get a binary image
containing only the target object. The color space of the color extraction tool can be RGB, HSV or
HSI. The three-channel threshold can be generated automatically by modeling or manually set, as
shown below.


To extract the red area from the target image, you need to create a color extraction list first. Measure the approximate values of the three channels, and manually set the three-channel extraction thresholds, as shown below.

The extraction model can also be generated automatically through modeling, as shown below.
1. Select the color area. Click the rectangle tool to select the color you want to extract.

2. Draw the ROI in the target area where the image needs to be segmented.


3. The three-channel extraction threshold is automatically generated. The threshold is the recommended value. If the segmentation result does not meet the requirements, you can make minor adjustments based on the three-channel histogram data.

After the color extraction list is generated, the software will automatically extract the objects in the channel range and perform binarization, as shown below.


Click and refer to the above modeling method to create multiple color extraction lists, as
shown below.

Color Extraction
Color Space: RGB, HSV and HSI.
Channel Lower Limit: The lower limit of the pixel values extracted by the image channel in the specified space.
Channel Upper Limit: The upper limit of the pixel values extracted by the image channel in the specified space.
Color Inversion: Color inversion after the binarization of the image.

Note:

Pixel values greater than or equal to the lower limit and less than or equal to the upper limit will be assigned a
value of 255, and other pixel values will be assigned a value of 0.
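
As an illustration of this extraction rule, a hedged Python/OpenCV sketch using the HSV color space is shown below; the channel limits and file name are hypothetical.

import cv2
import numpy as np

bgr = cv2.imread("parts.png")                 # hypothetical color input (OpenCV loads BGR)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

lower = np.array([0, 120, 80])                # hypothetical three-channel lower limits
upper = np.array([10, 255, 255])              # hypothetical three-channel upper limits
mask = cv2.inRange(hsv, lower, upper)         # in-range pixels become 255, others 0

inverted = cv2.bitwise_not(mask)              # optional color inversion of the binary result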

After you enable Color Total Area Check in Result Show page, the output results are filtered
according to the output area.
The Result Show interface is shown below.


Color Measurement

The tool of color measure is used to measure the color information in the specified area of the color
image, including the max. value, min. value, mean value, standard deviation and histogram of each
channel, as shown below.

Color Measurement Parameters


Color Space RGB, HSV and HSI

The operation result is shown below.


Color Measurement
Channel Min.: It is the min. value of the corresponding color channel.
Channel Max.: It is the max. value of the corresponding color channel.
Channel Mean: It is the average value of the corresponding color channel.
Channel Standard Deviation: It is the standard deviation of the corresponding color channel.

Color Transformation

After inputting a color image, you can choose the transfer type, including RGB to grayscale, RGB to HSV, RGB to HSI, and RGB to YUV. The color transformation tool transforms the color space in the specified area of the color image, and outputs the gray image of a specific color channel after transformation in this area, as shown below.

Color Transformation
RGB to Grayscale:
- General transfer ratio: 0.299r + 0.587g + 0.114b, where r stands for the gray value of the R channel, g stands for the gray value of the G channel, and b stands for the gray value of the B channel.
- Average transfer ratio: (r + g + b) / 3
- Channel min. value: min(r, g, b)
- Channel max. value: max(r, g, b)
- User-defined transfer ratio: self-defined coefficients
- Channel R: r + 0*g + 0*b
- Channel B: 0*r + 0*g + b
- Channel G: 0*r + g + 0*b
RGB to HSV, HSI and YUV:
- Channel 1: Transform according to channel 1.
- Channel 2: Transform according to channel 2.
- Channel 3: Transform according to channel 3.
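
The sketch below illustrates several of the conversions from this table with OpenCV/numpy; the file name is hypothetical, and note that OpenCV loads images in BGR order.

import cv2
import numpy as np

bgr = cv2.imread("sample.png")                # hypothetical color input (OpenCV loads BGR)
b, g, r = cv2.split(bgr)

general = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)    # general transfer ratio
average = ((r.astype(np.uint16) + g + b) // 3).astype(np.uint8)   # average transfer ratio
ch_min = np.minimum(np.minimum(r, g), b)                          # channel min. value
ch_max = np.maximum(np.maximum(r, g), b)                          # channel max. value

hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)    # RGB to HSV; one channel can then be output
channel_1 = hsv[:, :, 0]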

Color Recognition

Color recognition relies on color as a template for classification. When different types of objects
have obvious color differences, this tool can achieve accurate object classification and output related
classification information. The template needs to be established before color recognition.

One type of object can be placed in a label, and the sample can be moved to the correct label list
when the sample is incorrectly marked. After completing the modeling, you can adjust the template
parameters, as shown below.
Template parameters
Sensitivity Mode: This parameter has three modes, including low sensitivity mode, medium sensitivity mode, and high sensitivity mode. It is recommended to select high sensitivity mode when the image is sensitive to the external environment.
Feature Type: This parameter includes feature spectrum and feature histogram; feature histogram is more sensitive.
Intensity: Intensity reflects the effect of light on the image. If you need to keep the recognition result more stable under changing light, it is recommended to disable this parameter. You can only enable or disable intensity when feature histogram is selected as the feature type. Intensity is always enabled when the feature type is feature spectrum.

When you run the flow after creating the template, loading the image and setting the ROI, the
software can output the recognition score for each category and the best recognition effect according
to the K value, as shown below. On the right side of the output result, the hue, saturation and
brightness contrast chart are displayed between model area and current area.

Running parameters of color recognition

K Value: This parameter means that the class with the largest number among the first K samples is selected as the best recognition result.
KNN Distance: This parameter includes Euclidean distance, Manhattan distance and intersection distance. There are slight differences among the various distances; you can select one according to actual conditions. Generally, you can use the default distance.

Defect Detection
OCV

The OCV (Optical Character Verification) tool compares the target image with the standard image to detect whether printed characters and patterns have defects such as missing or redundant characters. This tool is widely applied to packaging, printing, semiconductor and other manufacturing areas. Since OCV is a process which compares the target image with the standard image, standard images need to be trained before defect detection.
Steps:
1. Select the OCV tool, and double click to open it.

2. Enter input source, ROI area, and rough match information in the basic parameter interface
according to actual demands.
3. Click Train Model in the character model interface.
4. After opening the character training model, you can directly create model or inherit.

5. Click Next Step to enter the detection target interface, and use to draw detection area,
and click Draw Character after setting related parameters.

Character Segmentation Parameters Description

Min. Character Area It sets the character area size.


Min. Character Width It sets the character width.
Binarize Scale It sets binarize scale.
Binarize Window Size It sets binarize window size.
Binarize Threshold It sets binarize threshold.
Binary Picture After enabling, the software will display binary picture.
Character Polarity It includes light on dark and dark on light.
Segment Mode It includes segment by line, word and character.


6. Click Next Step to enter fine positioning interface.


7. Draw the area that needs training, set training parameters and exact location parameters, and click
Model Training at last.
Training Parameters
Scale Mode: It includes auto and manual. It is recommended to use auto mode.
Precision Pyramid Scale: It indicates the fineness of extracting feature particles. When the fine scale is set to 1, the fineness is the largest, the number of edge points is the largest, and the precision is the highest.
Speed Pyramid Scale: It is the speed pyramid scale.
Threshold Mode: It includes auto and manual.
• If auto mode is selected, the software will be self-adaptive according to the target characters.
• If manual mode is selected, you need to set the model low threshold, and the software will perform according to the user's configuration.
Model Low Threshold: It only takes effect when manual is selected as the threshold mode.

Exact Location Parameters

Min. Match Score: It refers to the degree of similarity between the feature template and the target in the search image, that is, the similarity threshold. The target will only be found when the similarity reaches the threshold. The maximum is 1, indicating a complete fit, and the default is 0.5.


Angle Range: It indicates the angle change range of the target to be matched relative to the created template. If you want to search for a target with a rotation change, you need to set it accordingly. The default range is between -180° and 180°.
Correct Flag: After enabling it, the software will automatically correct the character position.
Threshold Type: It includes auto threshold, manual threshold, and model threshold.
Width Tolerance: The degree to which the width direction is allowed to deviate from the template.
Height Tolerance: The degree to which the height direction is allowed to deviate from the template.

8. Click Next Step to set the detection area mask, and use to erase the area that does not need recognition.
9. Click Next Step to enter statistical training interface, and you can load up to 500 pieces of images
to train.

NOTE
 You can click green check mark to count image or click red cross to discard.

10. Click OK to finish training, and set running parameters.


Running Parameters
Character Detection

Defect Type It has three types, including bright defect, dark defect, and bright and dark defect.

Bright Threshold The min. grayscale of bright defect.

Dark Threshold The min. grayscale of dark defect.


Bright Scale The min. scale of bright defect.
Dark Scale The min. scale of dark defect.
Edge Tolerance: The larger the value, the greater the tolerance for defects.
Area Tolerance: The detection image is compared with the high and low threshold images to obtain a difference binary image, and a blob in the binary image larger than the area threshold is regarded as a defect.
Character Exact Location
Min. Match Score: It refers to the degree of similarity between the feature template and the target in the search image, that is, the similarity threshold. The target will only be found when the similarity reaches the threshold. The maximum is 1, indicating a complete fit, and the default is 0.5.

Angle Range: It indicates the angle change range of the target to be matched relative to the created template. If you want to search for a target with a rotation change, you need to set it accordingly. The default range is between -180° and 180°.
X Scale Range: It indicates the variation range of the consistency scale of the target to be matched relative to the created template. If you want to search for a target with a consistent scale change, you need to set it accordingly.


Y Scale Range: It indicates the variation range of the consistency scale of the target to be matched relative to the created template. If you want to search for a target with a consistent scale change, you need to set it accordingly.
Threshold Type: It includes auto threshold, manual threshold, and model threshold.
Speed Enable: After enabling it, you need to enter the speed threshold.
Correct Flag: After enabling it, the software will automatically correct the character position.
Width Tolerance: The degree to which the width direction is allowed to deviate from the template.
Height Tolerance: The degree to which the height direction is allowed to deviate from the template.

Height Tolerance The degree that the height direction is allowed to deviate from the template.

Character Rough Location


Min. Match Score: It refers to the degree of similarity between the feature template and the target in the search image, that is, the similarity threshold. The target will only be found when the similarity reaches the threshold. The maximum is 1, indicating a complete fit, and the default is 0.5.
Match Polarity: Polarity stands for the color transition from the feature pattern to the background. When the edge polarity of the search target is inconsistent with the polarity of the feature template and it is still necessary to ensure that the target is found, the matching polarity needs to be set to ignore polarity. If necessary, it can be set to consider polarity, which can shorten the search time.

Angle Range: It indicates the angle change range of the target to be matched relative to the created template. If you want to search for a target with a rotation change, you need to set it accordingly. The default range is between -180° and 180°.
X Scale Range: It indicates the variation range of the consistency scale of the target to be matched relative to the created template. If you want to search for a target with a consistent scale change, you need to set it accordingly.
Y Scale Range: It indicates the variation range of the consistency scale of the target to be matched relative to the created template. If you want to search for a target with a consistent scale change, you need to set it accordingly.
Threshold Type: It includes auto threshold, manual threshold, and model threshold.
Speed Enable: After enabling it, you need to enter the speed threshold.

11. Set the result display in the image display interface, and click run once or run continuously to run the flow.

Arc Edge Defect Detection

The tool of Arc edge defect detection can detect pits, bumps and fracture defects on the edges of
arcs. It can accurately recognize defective arcs and output defect information. The specific operation
method is shown below. It is recommended to enable Standard Input when the arc edge is blurry.


• Input Mode: You can select By Circle or By Parameter. When selecting By Circle as
Input Mode, you need to link Circle, Angle Start, and Angle Range. When selecting By
Parameter as Input Mode, you need to enter Circle Center X, Circle Center Y, Radius,
Angle Start and Angle Range.
• ROI Area: You can select Draw or Inherit.
• Edge Type: It includes Strongest Edge, First Edge and Last Edge.
• Edge Polarity: It includes Dark to Light, Light to Dark and Both.
• Filter Kernel Size: It is used to enhance the edge and suppress noise, and its min. value is
1. When the edge is blurred or there is noise interference, you can increase its value to make
the detection result more stable. If the distance between the edge and the edge is smaller
than the filter size, it will affect the accuracy of the edge location or even lose edge. This
parameter value needs to be set based on the actual situation.
• Contrast Threshold: It is also called gradient threshold, and its range is from 0 to 255. The
edge point whose gradient threshold is larger than this value can be detected. The larger the
value, the stronger noise resistibility, the smaller the number of edges obtained, and even
the target edge points are filtered out.
• Caliper Height: It refers to the height of caliper.
• Caliper Length: It refers to the length of caliper.
• Caliper Spacing: It refers to the spacing of caliper.
• Defect Polarity: It includes Trajectory Left Defect, Trajectory Right Defect, and
Trajectory Both Defect.
• Rough Min Distance: The distance between the edge point and the fitted straight line. If
the distance is greater than the threshold, it is determined as the defect point to be filtered.
If the Defect Size Enable or Defect Area Enable is enabled, you need to further filter
according to the corresponding threshold.
• Defect Size Enable: If enabled, you need to enter Rough Min Size.
• Defect Area Enable: If enabled, you need to enter Rough Min Area.


Linear Edge Defect Detection

The tool of linear edge defect detection is used to detect defects and bumps on the linear edge. It
outputs the position information of the defect bounding box and defect size. The specific operation
method is shown below. It is recommended to enable Standard Input when the linear edge is blurry.

⚫ Input Mode: You can select Line, By Point, or By Coordinate. When selecting Line as
Input Mode, you need to link line. When selecting By Point as Input Mode, you need to
enter Start point, Endpoint, and Angle. When selecting By Coordinate as Input Mode,
you need to enter Start point X Coordinate, Start point Y Coordinate, Endpoint X
Coordinate, and Endpoint Y Coordinate.
⚫ ROI Area: You can select Draw or Inherit.
⚫ Edge Type: It includes Strongest Edge, First Edge and Last Edge.
⚫ Edge Polarity: It includes Dark to Light, Light to Dark and Both.
⚫ Filter Kernel Size: It is used to enhance the edge and suppress noise, and its min. value is
1. When the edge is blurred or there is noise interference, you can increase its value to make
the detection result more stable. If the distance between the edge and the edge is smaller
than the filter size, it will affect the accuracy of the edge location or even lose edge. This
parameter value needs to be set based on the actual situation.
⚫ Contrast Threshold: It is also called gradient threshold, and its range is from 0 to 255. The
edge point whose gradient threshold is larger than this value can be detected. The larger the
value, the stronger noise resistibility, the smaller the number of edges obtained, and even
the target edge points are filtered out.
⚫ Caliper Height: It refers to the height of caliper.
⚫ Caliper Length: It refers to the length of caliper.
⚫ Caliper Spacing: It refers to the spacing of caliper.
⚫ Defect Polarity: It includes Trajectory Left Defect, Trajectory Right Defect, and
Trajectory Both Defect.


⚫ Rough Min Distance: The distance between the edge point and the fitted straight line. If
the distance is greater than the threshold, it is determined as the defect point to be filtered.
If the Defect Size Enable or Defect Area Enable is enabled, you need to further filter
according to the corresponding threshold.
⚫ Defect Size Enable: If enabled, you need to enter Rough Min Size.
⚫ Defect Area Enable: If enabled, you need to enter Rough Min Area.

Arc-Pair Defect Detection

The tool of arc-pair defect detection is used to detect the concave-convex and fracture part of the
arc pair. By setting the width eligibility range, defect size, and defect area, this tool searches the
defect area between two arcs and outputs related information. The specific operation method is
shown below. See Arc Edge Defect Detection for other parameters.

• Edge Search Type: It includes Widest Edge Pair, Nearest Edge Pair, and Strongest &
Nearest Edge Pair.
• Edge 0 Polarity: It includes Dark to Light, Light to Dark, and Both.
• Edge 1 Polarity: It includes Dark to Light, Light to Dark, and Both.
• Filter Kernel Size: It is used to enhance the edge and suppress noise, and its min. value is
1. When the edge is blurred or there is noise interference, you can increase its value to make
the detection result more stable. If the distance between the edge and the edge is smaller
than the filter size, it will affect the accuracy of the edge location or even lose edge. This
parameter value needs to be set based on the actual situation.
• Contrast Threshold: It is also called gradient threshold, and its range is from 0 to 255. The
edge point whose gradient threshold is larger than this value can be detected. The larger the


value, the stronger noise resistibility, the smaller the number of edges obtained, and even
the target edge points are filtered out.
• Ideal Width: It refers to the ideal distance between inner and outer arcs.
• Caliper Height: It refers to the height of caliper.
• Caliper Length: It refers to the length of caliper.
• Caliper Spacing: It refers to the spacing of caliper.
• Width Eligibility Range: This parameter is the main parameter for the detection of arc pair defects. Only an arc pair whose width is within this parameter range is considered eligible.
• Defect Size Enable: If enabled, you need to enter Rough Min Size.
• Defect Area Enable: If enabled, you need to enter Rough Min Area.
• Caliper Number: The number of calipers used to scan edge points in the ROI area.
• Reject Number: The minimum number of points with excessive error to be excluded from fitting. In general, the more outliers there are, the larger this value should be set. To obtain a better search effect, it is recommended to use it in conjunction with the elimination distance.
• Threshold to Remove: The maximum pixel distance allowed between an outlier and the fitted circle. The smaller the value, the more points will be excluded.
• Angle Tolerance: The maximum angle allowed.
• Track Tolerance: The maximum pixel offset allowed by edge tracking.

Line-Pair Defect Detection

The tool of line-pair defect detection is used to detect defects between a pair of straight lines that
are deformed or broken, and output the corresponding defect information. It can be used to detect
the deformation and defects of the edges of rectangular work pieces, determine the regularity of the
work piece edges, and search small burrs and dirt. The specific operation method is shown below.
See Arc Edge Defect Detection for other parameters.

Line-Pair Defect Detection


Edge Search Type It includes Widest Edge Pair, Nearest Edge Pair, and Strongest & Nearest Edge Pair.


Edge 0/1 Polarity Edge 0 and Edge 1 in order from top to bottom of ROI
Ideal Width It refers to the ideal distance between inner and outer arcs.
Width Eligibility Range This parameter is the main parameter for the detection of arc pair defects.

Edge Group Defect Detection

Edge group defect detection can combine up to 32 edge defect detecting tools, including linear edge
detection and arc edge detection. You can configure basic parameters and running parameters of
each tool in the parameter settings. For details, see Arc Edge Defect Detection and Linear Edge
Defect Detection. The operation method is as follows.

Edge Pair Group Defect Detection

Edge pair group defect detection can combine multiple edge-pair defect detecting tools, including
line-pair edge detection and arc-pair edge detection. You can configure basic parameters and running
parameters of each tool in the parameter settings. For details, see Arc Edge Defect Detection and
Linear Edge Defect Detection. The operation method is as follows.


Edge Model Defect Detection

Edge model defect detection compares the image with a standard model to output related defect information, such as location deviation, fracture defects and hierarchical defects. You need to select a relatively complete model to execute model building before detecting. Load or train a model in Edge Model. The interface is shown below when you select train model.

1. Click to customize an edge model. Try to fit the edge template when you customize it.
You can double click to stop customizing, as shown in the figure above.
2. Set proper train parameters. For unmentioned parameters, see Arc Edge Defect Detection.
Template Configuration Parameter
• Edge Type: It includes Strongest Edge, First Edge and Last Edge
• Edge Polarity: It includes Dark to Light and Light to Dark


• Edge Intensity: It is used for edge grayscale limiting. The larger the value, the more edge points are filtered out.
• Edge Precision: Automatically generate a track that fits the model trend, according to the model edge information.
• Filter Kernel Size: It is used to strengthen edges and suppress noise; the minimum value is 1. Set this value according to actual needs. If the edge is blurry or there is noise interference, increase the value to make the detection results more stable. However, it might impact the edge position precision or even cause edge loss if the interval between edges is less than the filter kernel size.
3. Click Confirm to generate the edge model.
Configure corresponding running parameters. Relevant defect information will be output after
running.

Edge Model Defect Detection Parameters


• Location Deviation: The comparison distance between an edge point of the detected image and the model-building image. If this distance is larger than the threshold, the point is defined as a candidate position defect point.
• Fracture Defect Enable: Enable it to detect fracture defects.
• Hierarchical Defect: For detecting sawtooth-like defects. This defect type has a small size and area as well as a small deviation range.
- Hierarchical Deviation Height: In the detected image, if the height difference between two adjacent edge points is larger than the threshold, the edge point is defined as a candidate hierarchical defect point.
- Min. Hierarchical Length: If the number of adjacent candidate hierarchical defect points exceeds this length, the defect is defined as a hierarchical defect.
• Grayscale Assisted Detect: Enable it to automatically define the Filter Kernel Size.
• Filter Kernel Size: It is used to strengthen edges and suppress noise; the minimum value is 1. Set this value according to actual needs. If the edge is blurry or there is noise interference, increase the value to make the detection results more stable. However, it might impact the edge position precision or even cause edge loss if the interval between edges is less than the filter kernel size.

Edge Pair Model Defect Detection

Edge pair model defect detection compares the detected image with a standard model and outputs the relevant defect information, such as width, location deviation, fracture defects, and hierarchical defects. Before detecting, you need to select a relatively complete model image to build the model. Load or train a model in Edge Model. For model building details, see Edge Model Defect Detection.

In model settings, you can click Starting Range, and click Generate Trace to create a trace.


Bubble Defect Detection


⚫ Grayscale Acceptance: Compares the average grayscale of the current caliper with that of the model-building caliper.
⚫ Grayscale Change: If the grayscale difference between adjacent pixels is larger than this value, it is counted as a grayscale change.
⚫ Bubble Defect Length: The number of adjacent calipers that must be defective for a bubble defect to be reported.
⚫ Max. Num of Mutations: If the number of grayscale changes in the current caliper is larger than this value, the caliper is regarded as containing a bubble defect.

NOTICE

This module is used with the position correction module. You should create the position
correction reference point on an image. Run the process once first. Double-click the
position correction module and manually click to create the reference point.

Defect Contrast

The defect contrast tool can train a defect contrast model to detect whether the target image is OK
or NG.
Steps:
1. Double-click the defect contrast tool from the tool bar. Go to Defect Model, and click New Model.


2. Set the contrast area, which can be the whole image or the ROI area according to the actual
demands. The ROI area should contain the target area to be detected.

3. In the batch training window, click to load images according to actual demands. Click OK
or NG in Register Model according to actual conditions, and click Train to start training.

NOTE
 Up to 40 images can be added (20 OK images and 20 NG images), including at least one OK image.

4. Import verified images, and click Verify Model.


5. Run the operation models, as shown below.

If the score of the target image is lower than the configured score threshold, the image will be
labeled as NG.
6. Run the operation models. The NG image is shown below.


7. The OK image is shown below.

Logic Tools
Condition Detection

The condition detection determines whether the input data satisfies the condition. If it is satisfied, the operating interface displays the OK character; otherwise, it displays the NG character, as shown below.


Condition Detection Parameters
Judge Method: It includes all and any. If the condition meets demands, the judge result is OK.
Condition Type: It includes int and float.
Condition: You need to link with the previous module status or other result outputs.
Valid Value Range: If the selected result falls within the range from the min. value to the max. value, the result is OK; otherwise it is NG.

NOTE
 The corresponding result of each added condition can be viewed in the history
interface.
 The number of digits after the decimal point in the valid value range can be configured in the specified XML file. A maximum of 3 digits are supported.
 Take the default software installation path in a 64-bit system as an example. The file is under the installation path...\Module(sp)\x64\Logic\IfModule. The XML file is called ifModuleAlgorithmtab.xml; change the value of DecimalDigits in this file.

Branch

The branch tool can set input conditions for different branch modules according to actual requirements. When the input condition matches the set value of a branch, the corresponding branch module is executed.

The input value only supports integers and it does not support strings. If you need to enter string
format, you need to use character branch, or use character recognition and branch modules. When
you need to determine the subsequent branch work according to the template matching status, you
can set the input condition as the template matching status, and set the condition value of the branch
module, as shown below.


Branch Parameters

Condition Input: It can bind the previous module or int data from external communications.

Index by value: Compares the condition input value with the value set after the module ID index. If the two values are the same, the branch is executed; otherwise, it is not executed.

Index by bit: Converts the condition input value into binary, and the binary sequence corresponds to the bit sequence of the module IDs. When a bit is 1, the corresponding branch is executed (multiple branches can be executed at a time); otherwise, it is not executed. For example, an input value of 5 (binary 101) executes the branches whose index bits are 0 and 2.

Branch String

The branch string comparison module detects the input characters. If the detection passes, data
transmission will be executed, as shown below.


Branch String Parameters

Input Text: Receives data from external communication or from formatting.
Branch Parameters: Set the condition input value; the branch module is selected according to the comparison of the input text with the condition input value.
Debug Mode: Once enabled, the branch is always executed.

Save Text

The save text tool is used to process complex data. It saves and loads the contents of scripts that have been written; the maximum length of the script code is 4095 bytes. After the operation starts, the data is first written into a cache file, and the text file is generated only after the cache file reaches the set file size.

Save Text Parameters

Data Source – TXT: Creates TXT format files.
Data Source – CSV: Creates CSV format files. It is recommended to open the CSV file with Notepad first, then save it as a new file and open it with Excel. When creating a CSV file, you need to perform data linking for each column of generated data.
Input Text: It selects the input text.
Save Trigger: It sets the limit condition for saving; the file is saved when the trigger variable reaches the saving condition.
Save Path: It customizes a saving path.
File Save Number: The maximum number of files that can be stored.
Document Size: The maximum size of each file.
Save Mode: You can set the saving mode when the upper saving limit has been reached. Two save modes are available: stop saving and overwrite.
Timestamp Setting: The timestamp information can be displayed in front of the file name.
Generate Directory by Date: If enabled, folders are created according to the date and the saved files are placed in them.
File Naming: It sets the file name, and you can link it with module data.
Real Time Save: After enabling it, data is written directly into the TXT or CSV file. If it is not enabled, data is written into a cache file first, and then into the TXT or CSV file when the cache file is full.
Save Temporary File: Click it to save cache files in TXT or CSV format.

Logic

The logic module includes calculation type and calculation data, as shown below.


Logic Calculation

Calculation Type You can select and, or, not, nand and nor.

Calculation Data You can select the data sources used to execute the logic calculation.

Format

The format tool can integrate and format the data into a string for output. It can link the result output
of previous modules, and you can also directly enter characters in the box. This tool is usually used
to organize the data before the data output via communications.

You can right click the added line and click Delete or Clear Content to delete or clear the content,
as shown above.

Format Module

Click it to add the index of subscribed information.

Click it to link the output information of previous modules.

Click it to add customized texts.

Click it to insert the array.

• Split symbol is the delimiter between different elements, which can be revised.
• Array subscript supports entering and subscribing.
• Array list supports adding different array elements.

Click them to change the position of added information.

Click it to delete the currently configured contents.

Input End Character \r: return; \n: new line; \r\n: return and new line.

Split Symbol Split symbol is the delimiter between different elements.

Array Delimiter It is used to separate multiple arrays.

Save Click it to save format module.

Cancel Click it to cancel format module.

String Comparison

The string comparison tool compares the input character string with the configured ones. If they are the same, the software outputs the corresponding index value; if they are different, nothing is output. The index value can be modified manually, and up to 32 indexes are supported. Generally, this tool is used in combination with the branch tool, as shown below.


Script

The script tool is used to process complex data. It saves and loads the contents of scripts that have been written, and the script file uses the .cs suffix. There is no limitation on the length of the script code, and the script code supports import and export. After the import is completed, the script tool executes a compilation. If the compilation fails, the system shows an exception; if the compilation succeeds, the system operates according to the new code. The parameter settings are shown below.


The function interfaces include data acquisition interface, data output interface, and debugging
related interface.
Interface for setting/obtaining global variable
GlobalVariableModule.SetValue

Function Description Set Global Variable


Function Method GlobalVariableModule.SetValue(string paramName,string paramValue)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Input paramValue string Variable Value
Success: 0
Return Value
Exception: error code rather than 0.

GlobalVariableModule.GetValue

Function Description Obtain Global Variable


Function Method object GlobalVariableModule.GetValue (string paramName)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Success: specific result
Return Value
Exception: null
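As a brief, hedged illustration (not taken from the original figures), the two interfaces above might be combined inside the script module as follows. The variable name "count" is only an example, and GetValue returns null when the variable does not exist.

// Sketch: write a global variable, then read it back (inside the script module).
// "count" is an example variable name, not one predefined by the software.
int ret = GlobalVariableModule.SetValue("count", "10");
if (ret != 0)
{
    ConsoleWrite("SetValue failed, error code: " + ret.ToString());
}

object raw = GlobalVariableModule.GetValue("count");   // returns null on exception
if (raw != null)
{
    ConsoleWrite("Global variable count = " + raw.ToString());
}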

Obtain result data of module


CurrentProcess.GetModule

Function Description Obtain module result data


CurrentProcess.GetModule(string paramModuleName).GetValue(string
Function Method
paramValueName)
Parameter Name Data Type Parameter Description
Input paramModuleName string Module Name
Input paramValueName string Result Name
Success: specific result
Return Value
Exception: null

Set running parameters of module


GetModule.SetValue

Function Description Set module running parameters


CurrentProcess.GetModule(string paramModuleName).SetValue(string
Function Method
paramValueName,string paramValue)
Parameter Name Data Type Parameter Description
Input paramModuleName string Module Name
Input paramValueName string Parameter Name
Input paramValue string Parameter Value
Success: 0
Return Value
Exception: error code rather than 0.
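For illustration only: reading a module result and changing a module running parameter from the script might look like the sketch below. The module name "Fast Match1" and the result/parameter names used here are assumptions and must be replaced with the names shown in your own solution.

// Sketch: read a result value from a previous module (names are placeholders).
object matchNum = CurrentProcess.GetModule("Fast Match1").GetValue("MatchNum");
if (matchNum != null)
{
    ConsoleWrite("Match number: " + matchNum.ToString());
}

// Sketch: set a running parameter of a module; parameter values are passed as strings.
int ret = CurrentProcess.GetModule("Fast Match1").SetValue("MatchThreshold", "80");
if (ret != 0)
{
    ConsoleWrite("SetValue failed, error code: " + ret.ToString());
}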

Interface for sending data


PLC, Modbus interface: SendData(string data,DataType dataType)

Function Description PLC or Modbus sends int, float or string data


GlobalCommunicateModule.GetDevice(int deviceID).GetAddress(int
Function Method
addressID).SendData(string data,DataType dataType)
Parameter Name Data Type Parameter Description
Input deviceID Int Device ID in communication management
Input addressID Int Device address ID
Input data String Data to be sent. If you send multiple data, use ";" to separate them.
Input dataType DataType Data type to be sent, including int, float and string
Success: 0
Return Value
Exception: error code rather than 0

PLC, Modbus interface: SendData(byte[] bytedata,DataType.ByteType)

Function Description PLC or Modbus sends hexadecimal data


GlobalCommunicateModule.GetDevice(int deviceID).GetAddress(int
Function Method
addressID).SendData(byte[] bytedata,DataType.ByteType)
Parameter Name Data Type Parameter Description
Input deviceID Int Device ID in communication management
Input addressID Int Device address ID


Input bytedata Byte Hexadecimal data to be sent


Success: 0
Return Value
Exception: error code rather than 0.

TCP, UDP, serial port interface: SendData(string data)

Function Description Send String data


Function Method GlobalCommunicateModule.GetDevice(int deviceID).SendData(string data)
Parameter Name Data Type Parameter Description
Input deviceID Int Device ID in communication management
Input data String Data to be sent
Success: 0
Return Value
Exception: error code rather than 0

TCP, UDP, serial port interface: SendData(byte[] bytedata)

Function Description Send hexadecimal data


Function Method GlobalCommunicateModule.GetDevice(int deviceID).SendData(byte[] bytedata)
Parameter Name Data Type Parameter Description
Input deviceID Int Device ID in communication management
Input bytedata Byte Data to be sent
Success: 0
Return Value
Exception: error code rather than 0
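A hedged sketch of how these sending interfaces might be called from the script module is shown below. The device ID and address ID (0 here) are placeholders for the IDs configured in communication management, and the enum member DataType.StringType is an assumption; only DataType.ByteType is named explicitly in this manual.

// Sketch: send data through a PLC/Modbus address (IDs are placeholders).
// DataType.StringType is assumed; check the actual DataType members in the software.
int ret1 = GlobalCommunicateModule.GetDevice(0).GetAddress(0).SendData("1;2.5;OK", DataType.StringType);

// Sketch: send a plain string through a TCP, UDP, or serial port device.
int ret2 = GlobalCommunicateModule.GetDevice(0).SendData("result:OK\r\n");

// Sketch: send raw hexadecimal bytes through a TCP, UDP, or serial port device.
byte[] payload = new byte[] { 0x01, 0x00, 0x01, 0x00 };
int ret3 = GlobalCommunicateModule.GetDevice(0).SendData(payload);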

Data acquisition interface:


GetIntValue

Function Description Obtain INT variable value


Function Method int GetIntValue(string paramName, ref int paramValue)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Output paramValue int Variable Value
Success: 0
Return Value
Exception: error code rather than 0

GetFloatValue

Function Description Obtain float variable value


Function Method int GetFloatValue (string paramName, ref float paramValue)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Output paramValue Float Variable Value
Success: 0
Return Value
Exception: error code rather than 0.


GetStringValue

Function Description Obtain String variable value


Function Method int GetStringValue (string paramName, ref string paramValue)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Output paramValue string Variable Value
Success: 0
Return Value
Exception: error code rather than 0.

GetBytesValue

Function Description Obtain Bytes variable value


Function Method int GetBytesValue (string paramName,ref byte[] paramValue)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Output paramValue byte[] Variable Value
Success: 0
Return Value
Exception: error code rather than 0.

GetIMAGEValue

Function Description Obtain image data


Function Method int GetIMAGEValue (string paramName, ref Image paramValue)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
Output paramValue Image Variable Value
Success: 0
Return Value
Exception: error code rather than 0.

GetIntArrayValue

Function Description Obtain INT array variable


Function Method int GetIntArrayValue(string paramName, ref int[] paramValue, out int arrayCount)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
paramValue int[] Variable Value
Output
arrayCount int Array Quantity
Success: 0
Return Value
Exception: error code rather than 0.

GetFloatArrayValue

Function Description Obtain float array variable



int GetFloatArrayValue(string paramName, ref float[] paramValue , out int


Function Method
arrayCount)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
paramValue float[] Variable Value
Output
arrayCount int Array Quantity
Success: 0
Return Value
Exception: error code rather than 0.

GetStringArrayValue

Function Description Obtain string array variable


int GetStringArrayValue(string paramName, ref string[] paramValue , out int
Function Method
arrayCount)
Parameter Name Data Type Parameter Description
Input paramName string Variable Name
paramValue string[] Variable Value
Output
arrayCount int Array Quantity
Success: 0
Return Value
Exception: error code rather than 0.

Data output interface:


SetIntValue

Function Description Set INT variable value


Function Method int SetIntValue(string key, int value)
Parameter Name Data Type Parameter Description
paramName string Variable Name
Input
paramValue int Variable Value
Output
Success: 0
Return Value
Exception: error code rather than 0.

SetFloatValue

Function Description Set float variable value


Function Method int SetFloatValue (string key, float value)
Parameter Name Data Type Parameter Description
paramName string Variable Name
Input
paramValue float Variable Value
Output
Success: 0
Return Value
Exception: error code rather than 0.


SetStringValue

Function Description Set string variable value


Function Method int SetStringValue (string key, string value)
Parameter Name Data Type Parameter Description
paramName string Variable Name
Input
paramValue string Variable Value
Output
Success: 0
Return Value
Exception: error code rather than 0.

SetBytesValue

Function Description Set hexadecimal data


Function Method int SetBytesValue (string key, byte[] value)
Parameter Name Data Type Parameter Description
paramName string Variable Name
Input
paramValue byte[] Variable Value
Output
Success: 0
Return Value
Exception: error code rather than 0.

SetImageValue

Function Description Set image data


Function Method int SetImageValue (string key, Image value)
Parameter Name Data Type Parameter Description
paramName string Variable Name
Input
paramValue Image Variable Value
Output
Success: 0
Return Value
Exception: error code rather than 0

SetStringValueByIndex

Function Description Set string array


Function Method int SetStringValueByIndex(string key, string value, int index, int total)
Parameter Name Data Type Parameter Description
paramName string Variable Name
value string Variable Value
Input
index int Array index
total int Array element quantity
Output


Success: 0
Return Value
Exception: error code rather than 0.

SetIntValueByIndex

Function Description Set Int array


Function Method int SetIntValueByIndex(string key, int value, int index, int total)
Parameter Name Data Type Parameter Description
paramName string Variable Name
value int Variable Value
Input
index int Array index
total int Array element quantity
Output
Success: 0
Return Value
Exception: error code rather than 0.

SetFloatValueByIndex

Function Description Set Float array


Function Method int SetFloatValueByIndex (string key, float value, int index, int total)
Parameter Name Data Type Parameter Description
paramName string Variable Name
value float Variable Value
Input
index int Array index
total int Array element quantity
Output
Success: 0
Return Value
Exception: error code rather than 0.

Debugging related Interface:

ConsoleWrite

Function Description Print information into Debug View


Function Method void ConsoleWrite(string content)
Parameter Name Data Type Parameter Description
Input Content string Print content
Output
Return Value None

ShowMessageBox

Function Description Display information in a pop-up message box


Function Method void ShowMessageBox(string msg)
Parameter Name Data Type Parameter Description


Input msg string Print content


Output
Return Value None

The Init() function is the initialization function, and the Process() function is the processing function. The initialization function runs only the first time, as shown below.
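Since the original figure is not reproduced here, the following is a minimal sketch of how these interfaces might be used in the two functions. The input/output variable names "in0" and "out0" are examples only and must match the variables actually added in the script's variable list.

// Global variable area: defined once, keeps its value between runs.
int runCount = 0;

// Inside Init(): runs only the first time; reset variables here if needed.
// runCount = 0;

// Inside Process(): runs every time the module executes.
float diameter = 0f;                          // "in0" is an example input variable name
if (GetFloatValue("in0", ref diameter) != 0)  // a non-zero return value means failure
{
    ConsoleWrite("Failed to read in0");
}
else
{
    runCount++;
    SetFloatValue("out0", diameter * 2f);     // "out0" is an example output variable name
    ConsoleWrite("run " + runCount.ToString() + ", diameter = " + diameter.ToString());
}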

Group

In a complex solution, too many modules may cause visual clutter. In this case, you can use the Group tool to integrate different modules.


Double click Group and enter the Group sub-interface.

Click on the group tool to set related parameters.


Input parameters and output parameters are used to bind and set input and output data, and they
support multiple selections. Data outside the group needs to be set as input data before being passed
to the Group, otherwise the relevant data cannot be bound within the Group. After the internal
processing of the Group is completed, if the corresponding data is to be transmitted to the outside,
the data also needs to be set as output data.
The result of the Group shows that the module status 1 will be output only when the output
configuration is completed. If the output is not configured completely, even if the running status of
the modules in the Group is 1, the Group will display the module status as 0. At the same time, the
history module only displays the data type that is configured as output, and the unconfigured data
type will not be output. The specific parameter settings are shown below.
Group Parameters
Parameter: You can customize the variable name.
Type: Input and output data types include int (integer), float (floating point), string, IMAGE (image), ROIBOX (target area), POINT (point), LINE (straight line), CIRCLE (circle), FIXTURE (correction information), ANNULUS (circular ring), and ROIANNULUS (ROI arc).
Initial value: You should link it with other modules.

The interface of Display Settings is shown below.


You can set loop execution of modules in Group. Before executing loop function, you should set up
internal functions of the Group, and complete the input and output settings. The main parameters
are shown below:

Loop Parameters

Loop Enable After enabling, you can set loop related parameters.

Cycle Start Value It refers to the start value of the loop. It is recommended to set as 0.

Cycle End Value It refers to the end value of the loop.

Cycle Time It refers to the interval of a single loop.


Break Loop After enabling, you can set conditions to stop loop.

Data Type It sets the data type of the base comparison value.

Base Compare Value: You can set a customized value or link the base comparison value according to actual demands.
Target Compare Value: It compares the base comparison value with the target comparison value. If the specified condition is met, the loop is interrupted.

You can set the content, font color, font size, position, etc. of the displayed result in Result Show.
It also supports multi-layer group display and module copy, paste, import, and export. When the
Group module performs these operations, the module can automatically update the subscribed ID
according to the relative relation of the subscribed modules, and the operation results of the internal
modules of the Group can be displayed outside the Group.

Point Set

The point set tool is used to combine the related data of other modules into point sets (up to 16 point sets) to facilitate subsequent operations. This tool supports point or coordinate input, and it can be dragged into the loop control module to assemble points after Loop Enable is enabled.
⚫ Point Input: You can input the data of other modules by selecting By Point or By Coordinate.
⚫ Loop Enable: It assembles point sets in a loop. The point sets obtained from each loop iteration are added to the output point set.

Time Consuming

The tool of time consuming is used to calculate the time from entering the start module to leaving
the end module. The end time should be set later than the start time in Basic Params interface.


Otherwise, this tool cannot calculate the time consumed. Parameter settings and operation results are shown below.

Data Set

Enter Group module to execute the internal loop and generate multiple result data, which can be
integrated and output through the data set. All loop results or the result of the last loop can be output
by setting the clear signal.
When executing the fast match tool within loop for multiple times, you can bind the data set tool
with matching points of the fast match tool, and set Clear Signal as 0 or null, as shown below.


You can also use the format tool to bind the contents output by the Group tool, as shown below.

When the clear signal is set and not 0, the data generated in the last cycle can be output. The specific
parameters are shown below.
Data Set Parameters
Clear Signal – Empty: Completely output the cycle data, and clear the data before the start of the next cycle.
Clear Signal – 0: Completely output the cycle data, but do not clear the data before the start of the next cycle.
Clear Signal – Not empty or 0: Output the data of the last cycle.
Name: Customize the data name. Data shall be configured as output data in the Group output configuration before outputting.
Type: The types of the data to be integrated, including int, float, and string.
Data Source: Bind the pre-module data that needs to be integrated.

Communication
Receive Data

The Receive Data module can get data from the data queue, communication devices, and global variables.
Here, the global variable is taken as the data source as an example.

Add the data name to the input data and subscribe to the data source. As shown in the figure above, the receive data module gets the data sent by flow 1 to the global variable.

Send Data

The tool for sending data can send data to data queue, communication devices, global variable,
vision controller, and send event. Take sending data to the global variable as an example.


Select the global variable as the output option, and select the target global variable in the output
data. You need to subscribe the data that needs to be sent to the global variable in the selected data.
After sending data to the global variable, you can use the tool for receiving data to subscribe the
data in the global variable, so as to get the sent data.

Camera IO Communication

Camera IO communication includes common camera communication and smart camera IO


communication. Smart camera IO communication is generally used in the case of built-in algorithm
platform of smart camera. Generally, the camera IO communication tool is used in combination with
a certain result of the flow.


Camera IO Communication Parameters

IO Output Condition: When the output condition is consistent with the event set by the output type, the corresponding IO port outputs a signal.
Related Camera: You need to select the global camera.
Camera Type: It includes common camera, smart camera, and dalsa camera. The common camera has 2 IO ports, the smart camera has 3 IO ports, and the dalsa camera has 4 IO ports.
Duration: Duration of the active level output, in ms.
Output Type: It has 2 types, Output when NG and Output when OK. When the event meets the condition, the active level is output. The default is no output.

Protocol Parsing

There are three types of protocol parsing: text parsing, script parsing, and byte parsing.
When the character string sent by external communication contains special characters or line breaks, the text content can be parsed into multiple readable variables through text parsing. For example, 1; 123; 234 can be parsed into 1, 123, and 234 by setting the split symbol as ";". The settings are shown below.


The result output is shown as below.

You can also customize the logic through a Python script, compile the script, and save it as a file with the .py suffix.
There is a sample example under the installation path...\Module(sp)\x64\Communication\DataAnalysisModule, as shown below.


After editing the script, you can parse the text by loading the script file in the protocol analysis
module.
For example, the formatted result "0000006400000005" can be parsed into decimal data 100 and 5
via script, as shown below.

NOTE
 The name, type and number of output variables must be defined in the script.

Protocol Assembly

Protocol assembly is the reverse process of protocol analysis. Text assembly can assemble multiple
data and separate the data with a custom delimiter, as shown below.


Script assembly can customize or bind the assembly content, customize the conversion logic in a Python script, and load the script file. After running, data assembly is performed according to the logic set by the script, as shown below.

Dobot Magician Tools


Moving to a Point

Moving to a point means that Dobot Magician moves to a specified point according to the set motion
mode. You can select motion mode based on site requirements, as shown below.


⚫ Motion mode: the motion mode of the robot arm when it moves to the target position. The
value range is MOVJ, MOVL, JUMP.
- MOVJ: joint movement, from point A to point B. Each joint moves from the joint
angle corresponding to point A to the joint angle corresponding to point B. During
the joint movement, the running time of each joint axis should be consistent and the
joint axis reaches the end point at the same time, as shown in the figure below.


- MOVL: straight line movement, path S from point A to point B.


- JUMP: door-shaped trajectory, moving from point A to point B in MOVJ mode, as
shown in the figure below.
1. Rise to a certain Height in MOVJ mode.
2. Transition to maximum lifting height (Limit) in MOVJ mode.
3. Move to the height above point B in MOVJ mode.
4. Transition to the height of point B plus Height in MOVJ mode.
5. Descend to point B in MOVJ mode.

• X: Value of coordinate X
• Y: Value of coordinate Y
• Z: Value of coordinate Z
• R: Value of coordinate R

Speed Ratio

You can set the speed ratio of Dobot Magician to control the Dobot Magician's speed. As shown below, the Dobot Magician will move at a 60% speed ratio.


Home Calibration

If Dobot Magician loses steps during working or encounters a crash, you need to execute a homing operation to restore its accuracy. The calibration tool is shown below.


Suction Cup Switch

Suction cup is used to suck up or release object. After connecting suction cup to Dobot Magician,
you can make it suck up or release object by controlling the switch, as shown below.
ON/CLOSE: Control the intake and outtake of air pump. Value range: On means intake, and Close
means outtake.

Gripper Switch

Gripper is used to grab or release objects. After connecting gripper to Dobot Magician, you can
make it grab or release object by controlling the switch, as shown below.
ON/CLOSE: Set gripper to grab or release objects. Value range: On means grab, and Close means
release.


Laser Switch

Laser switch is used for laser engraving. After connecting laser engraving kit to Dobot Magician,
you can make it work by controlling the switch, as shown below.
Laser: Click to open or close laser. Value range: On and Off.

I/O Multiplexing

The laser switch is used for laser engraving. After connecting the laser engraving kit to Dobot
Magician, you can set the laser switch tool for laser engraving, as shown in Laser Switch.
In the Dobot controller, all extended I/O is uniformly addressed. Based on the current situation, I/O
functions include the following:
• High and low level output;
• PWM output;
• Read the input level (high and low);
• Read the input analog-to-digital conversion value.
Some I/O may have all of these functions. You need to configure I/O multiplexing before using
different functions. For details, see Dobot Magician User Guide.
You can configure the function of a specified I/O interface using the I/O multiplexing tool. The figure below shows the I/O multiplexing tool.


⚫ Address: Set the address of the output interface. See the definition of Dobot Magician I/O
Interface in Dobot Magician User Guide.
⚫ DO: IO output;
⚫ PWM: PWM output;
⚫ Dummy: No function is configured.
⚫ DI: IO input.
⚫ ADC: AD input.

I/O Output

The addresses of the I/O interfaces in Dobot Magician are unified. You can set specified output
status by I/O output tool, as shown below.


⚫ Address: Set the address of output interface. For the definition of I/O interface, please refer
to Dobot Magician User Guide.
⚫ Level: Set I/O output status. Value range: High means high level, and Low means low level.

I/O Input

The addresses of the I/O interfaces in Dobot Magician are unified. You can get specified input status
by I/O input tools, as shown below.


Address: Set the address of input interface. For the definition of I/O interface, please refer to Dobot
Magician User Guide.


Cases

Location for USB Hole


In this example, measure the position information of the CNC USB interface, and determine the
center point position, as shown below.

Steps:
1. Solution establishment idea:
 Find the position of the USB interface in the image by matching template.
 Find the four lines of the trapezoidal interface, including the upper line, left line, lower line, and right line.
 Measure the intersection of two adjacent lines: the intersection point on the upper left line,
the intersection point on the upper right line, the intersection point on the lower left line,
and the intersection point on the lower right line.
 Measure the distance between two diagonal points: the distance between the upper left
line intersection point and the lower right line intersection point, and the distance between
the upper right line intersection point and the lower left line intersection point.
The overall solution flow is shown below.


2. Save the solution.


3. Load solution.
4. Train model, as shown below.

Locate the trapezoidal interface by matching template, as shown below.


5. Find lines. Select ROI according to actual features of lines in the image, and set parameters.
 Edge Polarity: it includes dark to light, light to dark and any.
 Search Mode: it includes best line, first line and last line.
 Search Direction: it includes top to bottom and left to right.
In this example, finding the upper line of the trapezoidal interface is shown below.

In this example, finding the lower line of the trapezoidal interface is shown below.


In this example, finding the left line of the trapezoidal interface is shown below.

In this example, finding the right line of the trapezoidal interface is shown below.

6. Determine the upper-left intersection point of the lines, as shown below.


7. After obtaining the four intersection points, execute point to point (P2P) measurement.

Detection for Defective Metal


In this example, it detects whether there is a metal cover and whether its position is correct. The sample image is shown below.

Steps:
1. Solution establishment idea:
 The surface of the metal cover is reflective, and whether there is a metal cover can be detected by intensity analysis.
 The bottom triangle can determine the installation position of the metal cover. Through the feature matching tool, you can set the center of the base as a matching point by training the template.
 Find the center of the installation hole on the metal cover by the tool of finding circle.
 Measure the distance between the matching point and the center of the circle by the tool of point-to-point measurement (P2P measure).
 Determine whether the metal cover is installed correctly by the center distance and the tool of if module.
2. The overall solution flow is shown below.

3. Train model, as shown below.


4. Find circle. Select the search range of the circle, and determine the location of the metal cover, as shown below.


5. Measure. The distance between the circle center and the matching point is shown below.

The valid value range determines whether the installation position is correct or not.


6. Intensity measurement analysis. Determine whether there is a metal cover by intensity measure
and if module, as shown below.

7. Detection result is shown below.


Distance Detection
In this example, detect the distance between the far left and far right circle holes on the electronics,
as shown below.

Steps:
1. Solution establishment idea:

 Calibration can be performed with a checkerboard calibration board.


 There is a lot of interference because there are many circles between the far left and far right circles. Thus, template matching and location are needed to capture the target circles accurately.
 Transform the two circle centers to the post-calibration coordinates via calibration
transformation.
 Measure the circle center distance by using point to point measure (P2P measure).
2. The overall solution flow is shown below.

3. Calibration.
Load the calibration image into local image, connect calibration tool, set parameters of calibration
tool, and select save path.
4. Find circle.
Select the ROI area of circle, and find the circle centers of two circles, as shown below.

5. Calibration transformation

Transform the two circle centers to the post-calibration coordinates via calibration transformation.

6. P2P measure
In the basic parameter interface of P2P measure, enter the coordinates of two circle centers as start
point and end point. The result is shown below after running the solution.


Loop Function
Steps:
1. Set up solution flow.
The overall solution flow is shown below.

2. Set loop parameters.


The settings of loop parameters must include the cycle start value and cycle end value, as shown below.


3. Set the parameters of finding peak.


The tool of finding peak should set up standards according to loop times, as shown below.

4. Display result.
The result is shown below after running the loop control tool.


Script Function
The script function provides an input interface and transmits data to the output via a C# program.
Steps:
1. Set up the solution flow.
After template matching, find the circle and output its diameter, as shown below.


2. Edit script.
Double-click the script module and enter editing page, as shown below.


You can add a new variable by clicking +. The variable has two types, int and float. You can set its value in Original Value, as shown below.

The main body of the script is divided into three parts, as shown below.
• The first part defines the global variables, and it only runs the first time. This solution defines an integer variable and a diameter variable, as shown below.


• The second part defines the initial variables, and it only runs the first time. This solution initializes the integer variable and the diameter variable, as shown below.

• The third part defines the main function area, and users can edit it according to their demands. The detailed code of this solution is shown below.
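The original code figure is not reproduced here; as a hedged sketch of the logic described above, the three parts might look like the following, with the counter and diameter variables used to count runs and average the measured diameter as one possible use. The variable names "in0", "out0", and "out1" are examples and must match the script variables bound to the circle diameter and the desired outputs.

// Part 1 – global variable area: an integer counter and a diameter accumulator.
int count = 0;
float totalDiameter = 0f;

// Part 2 – initialization area (runs only the first time): reset both variables.
count = 0;
totalDiameter = 0f;

// Part 3 – main function area: read the diameter output by the find circle tool,
// accumulate it, and write the results to the output variables.
float diameter = 0f;
if (GetFloatValue("in0", ref diameter) == 0)        // "in0" is bound to the circle diameter
{
    count++;
    totalDiameter += diameter;
    SetIntValue("out0", count);                     // example output: number of runs
    SetFloatValue("out1", totalDiameter / count);   // example output: average diameter
}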


Medicine Bottle Detection


Functional Requirement
A customer needs to conduct multiple tests on medicine bottles on site, as shown below.

 Recognize the production date, expiration date, and product batch number printed on the
bottle body.
 Check if the font on the bottle cap is defective, distorted, and the printing is qualified or
not.
 Recognize the black stains and deformation at the bottom of the bottle.
 When the font on the bottle cap is unqualified or there is a stain on the bottom of the
bottle, the IO signal will be output, and the unqualified products will be filtered.


Solution Design
 The information on the bottle body can be recognized by the tool of character recognition,
and data can be integrated by formatting. But due to medicine bottles conveyed on site
may be inclined, you can use feature matching and fixture tools to solve it.
 Regarding the problem of defective fonts on bottle cap, you can use defect detection tool.
 For stains on the bottle bottom, you can use image processing tool to deepen stains first,
and then use the Blob tool to accurately locate them.
 The recognized information can be transmitted to the client's computer via network
communication, and the unqualified medicine bottles can be converted into level signals
via IO communication, and control external machines to remove unqualified bottles.
Bottle Body Detection
You can refer to following solutions to recognize the production date, expiration date, and
product batch number printed on the bottle body.

• Feature Matching + Fixture


It is used to correct image deviation. Generally, you need to set the ROI first, but deviation may occur during the conveying process of the bottles. Thus, you need to use both the feature matching tool and the fixture tool. Refer to section Feature Matching and Position Correction for details.
• Character Recognition
One character recognition module can only recognize one line of characters. For 3 recognition areas, you need to use 3 character recognition flows. Refer to section OCR for details.
• Formatting
Before communication, you need to use formatting tool to integrate the output data
and convert it to string type. Refer to section Format Module for details.


• Send Data
The role of sending data aims to send processed data to external device for statistics.
Bottle Cap Detection
You can refer to following solutions to check if the font on the bottle cap is defective, distorted, and
the printing is qualified or not.

• Character Defect Detection


Character training is required before executing character defect detection. The aim of the
training is to put the OK model into the library for later comparison with NG images. You need
to draw a ROI area first, and the area should be larger than the minimum circumscribed
rectangle of the target string, otherwise training cannot be done. After selecting the ROI area,
create a template in the character template, as shown below.

When the grayscale difference between the character and the background is small, it is
recommended to enable binary image. When the font is inclined or the automatic segmentation is
inaccurate, it is recommended to manually select characters and use default parameters for training
parameters. After selection, click the training template to create character outline, and click next to
enter the training interface as shown below.

You need to train about 10 OK characters on the training interface. For NG images marked with red
dots, you need to select "not count current image", and click OK to complete the training.
After the training is completed, the detection effect is verified and debugged. When the false
detection occurs, you can refer to section OCV to adjust parameters.


• Condition Detection
Select int type data for conditional detection, and link with character defect detection number.
• Send Data
Send condition detection results to the data queue. Because character defects on the bottle cap are not output separately, it is recommended to use the data queue to integrate all defects and send the detection results of process 1 to the data queue.
Bottle Bottom Detection
You can refer to following solutions to recognize the black stains and deformation at the bottom of
the bottle.

• Blob + Fixture
It is used to correct image deviation. The Blob tool can accurately find and locate the white bottle bottom. The function of Blob is similar to the feature matching tool, but here you need to detect and output the roundness of the bottle bottom, thus it is recommended to use Blob.
• Image Enhancement
The original image has low gray contrast, so image enhancement is used to make the contrast
more obvious.
• Blob
This function is used to search image stains after image enhancement.
• Total Output
The variable calculation is used here, and the total output increases by 1 once the process is
run.
• Condition Detection
It is used to detect the roundness of the bottle bottom.


• Send Data
Send detection results to the data queue, and integrate with results of bottle cap character
detection.
• Branch Character
The branch character is used to control the execution of subsequent modules when the
condition detection result is NG, including NG image saving and NG count accumulation, as
shown below.

Results Output
You can refer to the following to get detection results and send them to external machines via the vision controller IO.


• Receive Data
Receiving data is used to receive the data in the data queue of queue0 and queue1.
• Logic
Link the received data of var0 and var1 in the logic tool. This represents the combined judgment result of the bottle cap and the bottle bottom.
• Send Data
The module of sending data is linked with the output of vision controller. Refer to section IO
communication for details.
• Process timing control
Because process 3 needs to receive the data sent by processes 1 and 2, it is recommended to run process 3 after processes 1 and 2 are completed. In this case, use the global script to control the process running logic.


Detection Results
Finally, detection results and related information will be displayed on the operating interface, as
shown below.


Multi-Flow Communication
Steps:
1. Judge the quantity of the three different samples and send the judgment result. The flow 1 in
the figure below is to detect the quantity of lamps. If there is a missing one, it is unqualified.

2. In order to combine these three detection results, you can send the judgment result to the data

queue. Click to enter the main flow, and set up the corresponding data queues here, as
shown below.


3. View the cache of the data queue in the history result.


4. Through the receiving data module, the flow can get the value of the data queue, and perform
operation.


NOTE
 The script here acts as a delay because the data queue follows the first-in-first-out principle and the data can only be taken out when a row is full. Make sure that the flows before the receiving data module have run.

When the software trigger is selected and the global operation is performed a single time, the solution can achieve the expected output. However, if the hardware trigger is selected to trigger flow 1, flow 2, and flow 3, flow 4 cannot be triggered to receive data. In this case, you can choose to use global variables.

5. Flow 1, flow 2 and flow 3 send data to the global variable.


6. The result of global variables can be received at the end of any flow and the corresponding
logical operations should be performed, but the timing must be controlled to ensure that the
data is retrieved when all global variables receive values.


7. Finally, perform a logical and operation on all the received data, and output the result of the
operation to a third-party device through TCP communication.

Communication Trigger Flow


Different bytes are used to trigger various processes via the established TCP communication. If the
first two digits of the sent data are 0100, the flow 1 will be triggered. If the last two digits of the sent
data are 0100, the flow 2 will be triggered. If the sent data is 01000100, two flows (flow 1 and flow
2) will be triggered, and a message will be returned by sending an event.
Solution:
• Establish two flows, namely, flow 1 and flow2.
• Establish TCP connection, and test if the connection is normal.
• Establish the received event according to actual demands.
• Configure the global trigger.
Steps:
8. Establish flow 1 and flow 2.
The solution of flow 1 is shown below.

The solution of flow 2 is shown below.


9. Establish TCP connection

10. Establish two receiving events of byte match-protocol assembly. 0 byte match-protocol
assembly trigger flow 1 and 1 byte match-protocol assembly trigger flow 2. The specific
parameter settings are shown below.


In the assembly configuration interface, the software can return the received data to the device.

11. Configure the sending event. This module can send data via sending event, as shown below.


12. Configure the global trigger parameters.

13. Send communication data 01000000 to trigger flow 1 and send communication data 00000100
to trigger flow 2. After triggering, the software will send required data to the device via sending
event.


Dobot Magician Demo

The demos in this chapter need to be used with Dobot Magician. For installation of Dobot Magician,
see relevant manuals on Magician.

Robot Calibration
Objective: Make calibration files by calibration tools.

1. Click Camera, as shown below. Select a camera, and set Pixel Format to MONO8. You
can adjust the brightness of camera by adjusting aperture, exposure-time and light, if the
photograph is not bright enough. Set Trigger Source to SOFTWARE in Trigger Settings.


2. Click Image Source module. Set Image Source to Camera, and set Relate Camera to
the global camera that you set above.


You can get an image by clicking or a series of images by clicking in SOFTWARE


mode. Meanwhile, you can adjust parameters based on site requirements. An image is shown
below.

Set Input Source to Input Source.ImageData. Set Calibration Board Type to Checkboard
Calibration Board. Then click Confirm.


1. Set Translation Number to 9. Click to get image, as shown below. Calibrate 9


points in the order of the green arrow shown in the figure below.


2. Click on the toolbar to open DobotStudio. Choose Magician and click Connect.
Set Pen as the end-effector of Magician.


3. Jog the X-axis and Y-axis in DobotStudio to move the pen to the nine points in the order of the green arrows shown above. Record each point value under the corresponding ID on the Edit Calibration Points page. You can click the button on the right side to directly acquire the current X and Y coordinates, or enter the coordinates manually. Then click Confirm.


4. Click Create Calibration File to save the calibration file.

NOTE
 The calibration file will be used in the following demos. If the relative position between the camera and the Magician changes, or the Magician loses steps, you have to create the calibration file again.
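The calibration file format itself is internal to DobotVisionStudio, but conceptually the nine image/robot point pairs define a mapping from pixel coordinates to robot coordinates. The sketch below, using made-up coordinates, shows one common way such an N-point calibration can be fitted as a least-squares affine transform; it is only an illustration of the idea, not the software's implementation.

import numpy as np

# Hypothetical data: 9 image points (px) and the robot XY positions (mm)
# recorded by jogging the pen to each calibration mark.
img_pts = np.array([[120, 80], [320, 82], [520, 85],
                    [118, 280], [318, 283], [518, 286],
                    [115, 480], [316, 484], [516, 488]], dtype=float)
robot_pts = np.array([[200.0, -60.0], [200.5, 0.0], [201.0, 60.0],
                      [260.0, -60.5], [260.5, -0.5], [261.0, 59.5],
                      [320.0, -61.0], [320.5, -1.0], [321.0, 59.0]])

# Solve [u, v, 1] @ A = [x, y] for the 3x2 matrix A in the least-squares sense.
design = np.hstack([img_pts, np.ones((len(img_pts), 1))])
affine, *_ = np.linalg.lstsq(design, robot_pts, rcond=None)

def image_to_robot(u, v):
    """Map an image coordinate (px) to a robot XY coordinate (mm)."""
    return np.array([u, v, 1.0]) @ affine

print(image_to_robot(318, 283))   # close to the recorded (260.5, -0.5)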

Color Sorting
Objective: Detect the color of blocks and sort them, as shown below.
NOTE
 Calibration file needs to be generated beforehand.


 Extract the specified color from the camera image, and sort the corresponding blocks according to color.
 Convert image coordinates to physical coordinates according to the calibration file.
 Sort blocks of the specified color with the Dobot Magician.
The overall solution flow is shown below.


1. Click Camera, as shown below. Select a camera and set Pixel Format to RGB 8. If the image is not bright enough, you can adjust the aperture, exposure time, and lighting to increase the brightness. Set Trigger Source to SOFTWARE in Trigger Settings.


2. Click Image Source, as shown below. Select Camera as Image source. Select the global
camera device that you set above as Relate Camera.


Refer to Color Extraction. Take red as an example. Set Color Space to RGB, and set the value
of every channel. Extract the specified color and output an image.


NOTE
 You can get the RGB channel values by moving the mouse over the block of the color that you want to sort. The values are shown in the lower right corner of the page.
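As a rough offline analogue of this color extraction step, the sketch below keeps only the pixels whose channels fall inside an assumed range for red; the file name and threshold values are illustrative, not values taken from the software.

import cv2
import numpy as np

img = cv2.imread("blocks.png")              # hypothetical captured frame (OpenCV loads BGR)

# Assumed channel ranges for a red block; note the (B, G, R) order.
lower = np.array([0, 0, 150], dtype=np.uint8)
upper = np.array([80, 80, 255], dtype=np.uint8)

mask = cv2.inRange(img, lower, upper)       # 255 where the pixel is inside the range
red_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("red_extracted.png", red_only)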


1. Set Input Source to the image that you want to use, and set ROI Creation to Draw. Click
Confirm.

2. Set Morphology Type based on site requirements (it is set to Dilation in this demo). Set Structuring Element to Rect, then click Confirm.
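For comparison, the same dilation with a rectangular structuring element can be reproduced offline with OpenCV as a quick sanity check; the kernel size and file names below are assumptions.

import cv2

mask = cv2.imread("red_extracted.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary mask

# Rectangular structuring element, comparable to Structuring Element = Rect.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
dilated = cv2.dilate(mask, kernel, iterations=1)               # grow the extracted regions
cv2.imwrite("dilated.png", dilated)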


1. Set Input Source to the image that you want to use, and set ROI Creation to Draw.

2. Set Run Params, as shown below.


Set Input Source and Input Mode, as shown below. Load the calibration file and click
Confirm.


Set motion mode. For details, refer to Moving to a Point. Select the corresponding physical
coordinates, and set the Z-axis value based on site requirements.


Set suction cup to ON or CLOSE based on site requirements.

Set placing positions of blocks according to different colors.


Character Defect Detection


Objective: Detect whether characters have defects such as missing or redundant parts.

NOTE
 Calibration file needs to be generated beforehand.

Create a template of the standard characters and train it. Compare the trained characters with the target characters to find defects.

Analyze the result and sort characters with Dobot Magician.


The overall solution flow is shown below.


1. Click Camera, as shown below. Select a camera and set Pixel Format to MONO8. If the image is not bright enough, you can adjust the aperture, exposure time, and lighting to increase the brightness. Set Trigger Source to SOFTWARE in Trigger Settings.

2. Click Image Source module. Set Image Source to Camera, and set Relate Camera to the
global camera that you set above.


Create feature models with the Fast Match module, as shown below. You can set Scale Mode to Manual to make the selection manually. Set the roughness and the contrast threshold based on site requirements.


Create the position deviation reference according to the match point and angle. Set Input Source to the image that you want to use, and set Choose Mode to By Point. Then click Create Reference, as shown below.

1. Click Train Model to create a character model.


2. The process of training is shown below. Create a rectangular mask and generate a model.
Click Next.


3. Extract characters following the figure below. Click Next Step.

4. Select characters manually for fine positioning, as shown below. Click Model Training.
Then click Next Step.


5. The process of setting the detection area mask is shown below. After setting the mask, you will see the Set Successfully pop-up window. Then click Next Step.

6. The process of statistical training is shown below. Choose Current Image, click Current Image Statistics, and you will see the result in the lower right corner. Click OK.


7. Now you have completed the configuration of character defect detection. Click OK to
exit.

1. Set Input Source to the image that you want to use, and set ROI Creation to Draw.

2. Set Morphology Type to Opening. Set Structuring Element to Ellipse. You can set
other parameters according to site requirements. Click Confirm.


1. Set Input Source to the image that you want to use, and set ROI Creation to Draw.

2. Set Run Params, as shown below.


Set Input Source and Input Mode, as shown below. Load calibration file and click Confirm.


Set suction cup to ON or CLOSE based on site requirements.


Set placing positions for normal characters and defective characters.



The detection results are shown below.


Diameter Measurement
Objective: Measure the diameter of objects and judge the location of the object.
NOTE
 Calibration file needs to be generated beforehand.


⚫ Find the measured objects with the circle search module.


⚫ Convert image size to physical size by scale transformation.
⚫ Convert image coordinates to physical coordinates with the calibration tools.
⚫ Sort eligible objects.
The overall solution flow is shown below.


1. Click Camera, as shown below. Select a camera and set Pixel Format to MONO8. If the image is not bright enough, you can adjust the aperture, exposure time, and lighting to increase the brightness. Set Trigger Source to SOFTWARE in Trigger Settings.


2. Click Image Source module. Set Image Source to Camera, and set Relate Camera to
the global camera that you set above.


1. Set Input Source to the image that you want to use, and set ROI Creation to Draw.

2. Acquire the range of radius according to the following figures. Click Confirm.



Set Input Source and Pixel Interval. Convert pixel units to physical units with the scale transformation tool. Click Confirm.
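Scale transformation is essentially a multiplication by a millimetres-per-pixel factor. A minimal sketch, assuming a hypothetical scale derived from a 10 mm reference feature that spans 62.5 px in the image:

# Assumed scale: a 10 mm reference feature spans about 62.5 px in the image.
MM_PER_PIXEL = 10.0 / 62.5

def pixel_to_mm(length_px):
    """Convert a length measured in pixels to millimetres."""
    return length_px * MM_PER_PIXEL

# e.g. a circle whose fitted radius is 95.4 px
print(pixel_to_mm(95.4))   # physical radius in mm (about 15.3)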

Set Input Source and Input Mode, as shown below. Load calibration file and click Confirm.


Set motion mode. For details, refer to Moving to a Point. Select the corresponding physical
coordinates, and set the Z-axis value based on site requirements.

Set suction cup to ON or CLOSE based on site requirements.

Set valid range of physical radius, as shown below.



Place the circles at different positions according to the measured results. If the result is 1, branch 8 is executed; otherwise, branch 7 is executed.
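A minimal sketch of this branching logic, assuming a hypothetical valid radius range, is shown below; the condition module outputs 1 when the physical radius is inside the range and 0 otherwise.

# Assumed valid physical radius range in mm.
R_MIN, R_MAX = 9.5, 10.5

def choose_branch(physical_radius_mm):
    """Return the branch index the flow would execute for this radius."""
    result = 1 if R_MIN <= physical_radius_mm <= R_MAX else 0
    return 8 if result == 1 else 7

print(choose_branch(10.1))   # -> 8 (eligible object)
print(choose_branch(12.3))   # -> 7 (out-of-range object)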


Set placing positions for objects according to the detection results.


Rectangle Template Match


Objective: Find rectangles and place them into the corresponding templates through angle calculation and model matching, as shown below.
NOTE
 Calibration file needs to be generated beforehand.


Create feature templates for the rectangle and the rectangle model with the feature match module.

NOTE
 There are two feature match modules in this demo to determine the ROI; they are used to search for the rectangle and the rectangle model respectively.


 Calculate the angle of the rectangle and the rectangle model by variables to adjust the matching angle.
NOTE
 The rotation range of the Magician end-effector is -150° to 150°. As the model matching angle may fall in -180° to -150° or 150° to 180°, a script is used to transform the matching angle into an angle that the Magician can execute (a sketch of one such transformation is given after this list).

 Convert image coordinate to physical coordinate by calibration transformation.


 Match rectangle by controlling Magician.
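One possible form of that transformation, exploiting the 180° symmetry of a rectangle, is sketched below; the thresholds are assumptions based on the R-axis range quoted above, and the in-software script module may use a different language.

def to_magician_angle(match_angle_deg):
    """Fold a template-match angle (-180..180 deg) into the assumed
    Magician R-axis range of -150..150 deg by rotating 180 degrees,
    which leaves a rectangle's pose unchanged."""
    if match_angle_deg > 150:
        return match_angle_deg - 180
    if match_angle_deg < -150:
        return match_angle_deg + 180
    return match_angle_deg

print(to_magician_angle(170))    # -> -10
print(to_magician_angle(-165))   # -> 15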
The overall solution flow is shown below.


1. Click Camera, as shown below. Select a camera and set Pixel Format to MONO8. If the image is not bright enough, you can adjust the aperture, exposure time, and lighting to increase the brightness. Set Trigger Source to SOFTWARE in Trigger Settings.


2. Click Image Source, as shown below. Select Camera as Image source. Select the global
camera device that you set above as Relate Camera.


1. Set Input Source to the image that you want to use. Set ROI Creation to Draw, and draw the ROI area with the mouse.


2. Create feature templates, as shown below. Make sure all features are completely enclosed by the green box. You can set Scale Mode to Manual to make the selection manually if the green box cannot cover all features. Then set the roughness and contrast threshold based on site requirements.


1. Set Input Source to the image that you want to use. Set ROI Creation to Draw, and draw the ROI area with the mouse.

2. Create feature templates, as shown below. Make sure all features are completely enclosed by the green box. You can set Scale Mode to Manual to make the selection manually if the green box cannot cover all features. Then set the roughness and contrast threshold based on site requirements.


Establish the position offset baseline according to the feature matching point and angle. As shown in the figure below, set Input Source to the image to be processed. Select By Point in Choose Mode, and click Create Reference.


Establish the position offset baseline according to the feature matching point and angle. As shown in the figure below, set Input Source to the image to be processed. Select By Point in Choose Mode, and click Create Reference.


Set Input Source to the image to be processed, and set ROI Creation to Draw. Set the position correction information of the rectangle. As shown in the figure below, set a line search for the target edge in Shape. Click Execute.

Set Input Source to the image that you want to use, and set ROI Creation to Draw. Set the position correction information of the rectangle. As shown in the figure below, set a line search for the target edge in Shape. Click Execute.

Configure the variable calculation as shown in the figure.


Set Input Source and Input Mode, as shown below. Load calibration file and click Confirm.

Select the corresponding physical coordinates, and set the Z-axis value based on site
requirements.


Set R-axis to the output of variable calculation.


Set Input Source, Input Mode and Coord Point, as shown below. Load calibration file and
click Confirm.

Select the corresponding physical coordinates, and set the Z-axis value based on site
requirements.


Circle Template Match


Objective: Measure the diameter of circles and put them into the corresponding model as shown
below.
NOTE

 Calibration file needs to be generated beforehand.

 Find the measured circles with the Circle Search module.


 Convert image coordinates to physical coordinates with the calibration tools.
 Put the circles into the corresponding circle models with the Dobot Magician.
The overall solution flow is shown below.


1. Click Camera, as shown below. Set Pixel Format to MONO8. If the image is not bright enough, you can adjust the aperture, exposure time, and lighting to increase the brightness. Set Trigger Source to SOFTWARE in Trigger Settings.


2. Click Image Source, as shown below. Select Camera as Image source. Select the global
camera device that you set above as Relate Camera.


1. Set the range of circle search. Set Input Source to the image that you want to use. Set
ROI Creation to Draw, and draw ROI area with mouse.


2. Set radius and mode parameters to search eligible circles. Then click Confirm.
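As an offline analogue of this circle search, the sketch below detects circles within an assumed radius range using OpenCV's Hough transform; the file name and all parameters are illustrative only and are not the module's internal settings.

import cv2
import numpy as np

gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)   # hypothetical MONO8 frame
gray = cv2.medianBlur(gray, 5)                           # suppress noise before detection

# Radius limits comparable to the min/max radius configured in the module.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
                           param1=100, param2=40, minRadius=30, maxRadius=90)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"center=({x}, {y})  radius={r} px")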

Please refer to Step 3 to set model parameters.


Set Input Source and Input Mode, as shown below. Load calibration file and click Confirm.


Set the grasping position of Magician. For details about motion mode, refer to Moving to a
Point. Set the Z-axis value based on site requirements.

Set Input Source and Input Mode, as shown below. Load calibration file and click Confirm.

Set the match position. For details about motion mode, refer to Moving to a Point. Set the Z-
axis value based on site requirements.

Close suction cup.


All circles are matched successfully.

