The droplet is inclined on the substrate at a tilt angle of 30°.
Images of feature maps, reconstruction methodology, lens distortion, and reprojection errors.
Current methods to measure the contact angle require orthogonal imaging of the droplet and substrate. We have developed a novel computer vision-based technique that reconstructs the surface of a transparent 3D microdroplet from non-orthogonal images and determines the contact angle, using custom-made equipment comprising a smartphone camera and a macro lens.
After estimating the intrinsic and extrinsic camera parameters using a printed pattern, a U-Net CNN with an EfficientNet-B4 backbone was used to extract droplet silhouettes from the images via semantic segmentation. Finally, the shape-from-silhouette method, implemented with a space carving algorithm, was employed to estimate the visual hull containing the droplet volume. Comparison with static and dynamic contact angle measurements on various substrates obtained with a state-of-the-art goniometer revealed an average error of 4%. Our method, using non-orthogonal images, proved successful for the on-site measurement of static and dynamic contact angles, as well as for 3D reconstruction of transparent droplets.
A setup was designed to hold and tilt the substrate. It comprises a (b,c) base disk, (b,d) two supporting stands, (a,f) four connector joints, and an (a,e) stage. All of these were fabricated from polylactic acid on a 3D printer (Ender 3, Creality, Shenzhen, China).
We used the (a) Motorola Edge 20 Pro mobile phone along with the (b) SIGNI Pro 15× macro lens for data acquisition. The focal length of the lens varies from 25 to 55 mm. The data acquisition system provides 6 MP (1836 × 3264) resolution for macro images of small (30 µL) droplets.
An asymmetric pattern of circles was printed on photo paper of size 10 × 7.3 mm² for camera calibration and pose estimation. The intrinsic parameters, i.e. the intrinsic camera matrix and the lens distortion coefficients, were estimated based on the Zhang and Brown–Conrady models, respectively. Below are the (a) pattern image and (b) tracked points.
We used the U-Net CNN architecture for segmentation due to its robust performance in medical imaging and in microscopic segmentation of transparent, irregularly shaped cells. For our purpose, the backbone of the model was the EfficientNet-B4 architecture with pre-trained weights from the ImageNet dataset. Below are the (a) acquired droplet image, (b) ground truth, and (c) prediction from the trained model.
We used the space carving algorithm of the shape-from-silhouette method. It involves projecting all 3D points into each view and generating an occupancy vector, which stores the visibility information of the 3D points across all views. With an appropriately chosen threshold, we can generate the visual hull of the droplet, which is later used for contact angle measurement.