UI Component Detector & Classifier
Component Detection
UI Component Detection has been adapted from UI Element Detection. The detection algorithm works as follows:
- The region detection method first detects the layout blocks of a GUI by applying a flood-filling algorithm over the greyscale map
- Suzuki's contour-tracing algorithm computes the boundary of each block and produces a block map
- A binary map of the input GUI is generated using a binarization method based on the gradient map of the GUI image; for each detected block, the corresponding region of the binary map is then segmented
- Connected component labeling identifies GUI element regions in each binary block segment (since GUI elements can be any shape, the smallest rectangle covering each detected region is taken as its bounding box)
Element Classification
For classification of the detected GUI components, one of the following models can be used:
- CNN trained on RICO Dataset (cnn-rico-1.h5)
- CNN trained using transfer learning on the wireframes dataset provided by organizers (cnn-wireframes-only.h5)
- CNN trained using transfer learning on a more generalized dataset obtained from wireframes & the ReDraw Dataset (cnn-generalized.h5)
These models can be downloaded using the links above and stored in the smart_ui_tf20/models/ folder. The configs in the smart_ui_tf20/app/uiComponentDetector/config/CONFIG.py file automatically recognize the corresponding changes.
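A hypothetical sketch of how such a config might map model names to paths; the real CONFIG.py defines its own structure, and the dictionary keys here are invented for illustration.

```python
import os

# Assumed location where the downloaded .h5 models are stored
MODEL_DIR = os.path.join("smart_ui_tf20", "models")

# Hypothetical name-to-path mapping (filenames are from the list above)
MODEL_PATHS = {
    "rico": os.path.join(MODEL_DIR, "cnn-rico-1.h5"),
    "wireframes": os.path.join(MODEL_DIR, "cnn-wireframes-only.h5"),
    "generalized": os.path.join(MODEL_DIR, "cnn-generalized.h5"),
}
```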