Abstract:Cavity-enhanced absorption spectroscopy (CEAS) uses multiple reflections of light between two cavity mirrors to lengthen the interaction between a laser beam and a gas, thereby improving detection sensitivity. In CEAS, noise predominantly arises from low laser-to-cavity coupling efficiency and fluctuations of the cavity mode amplitude. To address such noise, researchers have developed optical-feedback cavity-enhanced absorption spectroscopy (OF-CEAS), which locks the laser frequency to the frequency of a cavity longitudinal mode; this narrows the laser linewidth and improves the laser-to-cavity coupling efficiency. To avoid direct reflections from the optical cavity entering the laser, traditional OF-CEAS employs a three-mirror V-shaped resonator. However, we have verified experimentally that when the feedback phase is properly controlled, light directly reflected by the optical cavity does not cause optical feedback, and the laser can be locked to the resonant cavity mode of the optical cavity. We therefore propose OF-CEAS based on a linear F-P cavity, in which the symmetry of the transmitted cavity mode is used to generate the error signal for feedback-phase control. With this scheme, 101 consecutive cavity transmission modes with stable amplitude and broad width were observed. Finally, we detected a methane standard gas with a concentration of 32×10⁻⁶ and obtained the OF-CEAS absorption signal; based on the signal-to-noise ratio, a detection sensitivity of 0.54×10⁻⁶ (1σ) was estimated.
Abstract:To monitor the content of CO2, near-infrared absorption lines of CO2 near 1 580 nm were selected in this study, and a distributed feedback (DFB) laser at 1 580 nm and a 20 m Herriott gas cell were adopted. Based on tunable diode laser absorption spectroscopy (TDLAS), CO2 detection experiments for the 0.03%~0.08% and 2%~20% concentration ranges were carried out. Raw 2f signals were collected with a data acquisition card, and empirical mode decomposition was embedded into a LabVIEW data acquisition and analysis platform as a pre-processing algorithm. After obtaining the pre-processed 2f signals, the concentration was inverted using the particle swarm optimization-kernel extreme learning machine (PSO-KELM) algorithm. Experimental results indicate that the signal-to-noise ratio of the pre-processed 2f signals increases from 6.75 dB to 12.59 dB compared with the raw 2f signals. Compared with the least squares method, the partial least squares method, the back propagation (BP) neural network, and the extreme learning machine, the inversion results show that the PSO-KELM algorithm improves the detection accuracy of CO2, yields the smallest root mean square error, and gives the linear correlation coefficient closest to 1. After a 1.5 h stability test and analysis of its Allan variance, when the integration time is 16 s, its theoretical detection limit can reach . This LabVIEW data acquisition and analysis platform meets the requirements of high accuracy and good stability for CO2 detection.
Abstract:To better understand the sensing characteristics of the seven-core photonic crystal fiber, this study combined theory and experiment. The finite element method was used to evaluate the temperature sensing characteristics of the seven-core photonic crystal fiber, and the relationship between temperature and the effective refractive index difference between the fundamental core mode and a higher-order cladding mode was determined. From this, the relationship between the wavelength of the transmission dip and temperature, that is, the theoretical sensitivity, was calculated. Temperature was then measured experimentally with a single-mode/seven-core photonic crystal fiber/single-mode Mach-Zehnder interferometer combined with a water bath. The results indicate that the theoretically calculated sensitivity is -47.14 pm/°C and the experimentally measured sensitivity is -48.86 pm/°C. The seven-core photonic crystal fiber also exhibits good linear temperature sensing characteristics. Owing to its good stability, highly linear fit, simple fabrication process, and other advantages, it shows potential for application in marine environmental monitoring, the biopharmaceutical industry, food testing, and other fields.
Abstract:To realize high-precision strain sensing, a strain sensor based on a high-sensitivity micro-nano fiber coupler was proposed. Two single-mode optical fibers were stripped of their coating layers, placed on a fiber taper-drawing platform, and wound together. The required tapering parameters were then set, the hydrogen flow was turned on, and the two fibers were tapered and fused in a hydrogen flame to prepare the fiber coupler. Strain measurement was realized by exploiting the strong evanescent-field characteristics of the coupled region. Repeated measurements showed that the sensor has good reversibility. To overcome the cross-sensitivity problem, the different responses of the interference dip to temperature and strain were exploited, and a coefficient matrix was used to separate the influence of temperature on the strain measurement. The experimental results show that the strain sensitivity is 20.35 and the corresponding linear correlation coefficient is 99.9%. Compared with other sensors, this strain sensor has high sensitivity, good stability, and low cost; further, it has substantial application value in building safety monitoring and other tasks.
Abstract:The purpose of this study was to compare three different design methods for negative lenticular lenses and identify lenses for high myopia that can be fabricated precisely and easily, are low weight, and are aesthetically pleasing. Three optimized design methods for negative lenticular lenses, namely the bicubic spline interpolation, high-order polynomial, and geometric construction methods, were proposed. Using identical optical parameters, three groups of -10 m⁻¹ negative lenticular lenses were designed, and the sagitta and the power distribution map were compared. The designed negative lenticular lenses were then fabricated, and the central optical area, maximum thickness, and edge thickness were compared. The central optical area obtained via the geometric construction method was equal to that obtained via the bicubic spline interpolation method; however, this area was 20.99% greater than that obtained using the high-order polynomial method. Furthermore, the maximum thickness obtained using the geometric construction method was 0.7% less than that obtained via the high-order polynomial method, which, in turn, was 13.26% less than that afforded by the bicubic spline interpolation method. The edge thickness under the geometric construction method was 80.3% less than that under the high-order polynomial method, which, in turn, was 92.42% less than that obtained via the bicubic spline interpolation method. The differences between the central optical power in the simulation and the central optical power afforded by the bicubic spline interpolation, geometric construction, and high-order polynomial methods are 0.06, 0.11, and 0.15 m⁻¹, respectively. Thus, it was concluded that the geometric construction method meets the requirements of the wearers. These methods are also suitable for designing other types of optical components.
Abstract:To measure the quantum efficiency of a linear array image sensor (hereinafter, linear array camera), a quantum efficiency measurement system based on the focusing scanning method was established; a linear array CMOS camera was tested, and the uncertainty of the test results was analyzed. First, a test system composed of a light source, a monochromator, a scanning motion mechanism, and a standard detector was built. The scanning motion mechanism then drives the linear array camera and the standard detector with high precision at uniform speed, so that the camera scans and images the generated monochromatic spot. By swapping the positions of the linear array camera and the standard detector, the energy of the monochromatic spot can be measured. Finally, the sum of the gray levels of the images collected by the linear array camera and the total light energy detected by the standard detector are calculated; combined with the theoretical formula of the focusing scanning method, the quantum efficiency of the linear array camera is obtained. The test results show that the focusing scanning method is feasible for measuring the quantum efficiency of a linear array camera and has high repeatability. Uncertainty analysis shows that the measurement accuracy of the quantum efficiency obtained with this method is about 2.6%. The system innovatively realizes dynamic line-scanning focused measurement of linear array camera quantum efficiency. It differs in principle from the traditional quantum efficiency test method for area array cameras, in which the sensor is irradiated by monochromatic light from an integrating sphere, and it makes up for the scarcity and low precision of existing quantum efficiency testing methods for linear array cameras.
In addition, the focusing scanning method for measuring quantum efficiency is not limited to linear array cameras: it can also be used to test the photoelectric parameters of various other camera types, with a measurement accuracy that meets the requirements of most projects.
Abstract:As one of the most important optical parameters of a liquid, the refractive index has significant applications in many fields. The conventional laser interference method for measuring refractive index is highly susceptible to environmental disturbances and also requires a large number of samples. To address these issues, in this study a fiber grating laser is used as the inner cavity and a hollow fiber as the external cavity to form a closed, dual-external-cavity laser feedback system for measuring the refractive index of a liquid. The mode coupling efficiency of the fiber structure and the accuracy of the measured refractive index are theoretically analyzed and calculated. The results indicate that the theoretical measurement accuracy of the system can reach 3.44×10⁻⁵. As the design adopts integrated fusion splicing of closed optical fiber components, the system is robust against thermal jitter and mechanical vibration. Moreover, the samples to be tested require no matching, and there is no risk of contamination. The proposed system combines micro-nano devices and fluid mechanics, and it can construct the special research environments required in fields such as physics, chemistry, and biology.
Keywords:optical fiber sensing;all-fiber;dual external cavity;self-mixing interference;refractive index
Abstract:To achieve high-efficiency, high-precision inspection of super-large steel structures, this study investigates an overall registration algorithm for terrestrial laser point clouds and an algorithm for generating dense point clouds from unmanned aerial vehicle multi-view images. First, an iterative global registration algorithm based on geometric features is used to repeatedly weight and solve the observation constraints; the corrections to the observations are controlled within a threshold range until registration is complete, and the LiDAR point cloud model of the entire grid structure is generated. Subsequently, the eccentricity between each spherical node and column in the grid structure is calculated using a spherical-node multi-link center point algorithm. A dense image point cloud, generated using the visual structure-from-motion algorithm and an improved RANSAC algorithm, is registered with the ground laser point cloud to fuse it with the high-resolution non-metric image data. Taking the high-precision inspection of the largest-span steel structure in Asia as an example, the overall registration accuracy of the multi-station LiDAR point cloud is 5 mm, and the deviation of 16 of the 21 sampled steel columns is close to or greater than 35 mm (32.1-68.2 mm, all in the outward direction). Notably, the deflection of the overall steel grid is less than 1/250. The feasibility and accuracy of the overall registration algorithm for the ground LiDAR point cloud and of the dense point cloud generation algorithm for high-resolution non-metric images are verified, fully meeting the requirements of high-precision inspection of super-large steel structures.
Abstract:Low-frequency terahertz radiation has been widely employed in terahertz security imaging systems owing to its excellent penetrability. However, this also increases the risk of human exposure to terahertz radiation and raises safety concerns. In this study, a layered structural model of the skin was first established, after which the complex dielectric constant and various optical parameters of skin tissues were calculated. Next, the transport of terahertz photons in the skin tissues was simulated via the Monte Carlo method. Finally, the attenuation of terahertz radiation at different frequencies by the skin tissues was determined quantitatively. Analysis of the penetration depth of 0.1 THz photons showed that 99.9% of the photons penetrated no deeper than 0.55 mm. These results show that terahertz radiation can hardly reach the lower dermis of the skin, even when the radiation power is higher than the damage threshold, indicating that terahertz radiation is safe and will not affect internal organs.
Keywords:optoelectronic detection;terahertz radiation;human safety;Monte Carlo simulation
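The Monte Carlo photon-transport step described in the abstract above can be sketched as follows. This is a minimal homogeneous-slab version with hypothetical absorption and scattering coefficients (`mu_a`, `mu_s`) and isotropic re-scattering, not the paper's layered skin model with its computed dielectric parameters:

```python
import numpy as np

def mc_penetration_depth(mu_a, mu_s, n_photons=20_000, seed=0):
    """Monte Carlo sketch of photon transport in a homogeneous tissue slab.
    mu_a, mu_s: absorption / scattering coefficients in mm^-1 (hypothetical
    values, not the paper's skin parameters). Returns the depth within
    which 99.9% of photons remain."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    max_depths = np.empty(n_photons)
    for i in range(n_photons):
        z, cos_t, deepest = 0.0, 1.0, 0.0
        while True:
            z += -np.log(1.0 - rng.random()) * cos_t / mu_t  # sample free path
            deepest = max(deepest, z)
            if z < 0:                         # photon escaped back out of the surface
                break
            if rng.random() < mu_a / mu_t:    # photon absorbed
                break
            cos_t = 2.0 * rng.random() - 1.0  # isotropic re-scattering (simplification)
        max_depths[i] = deepest
    return np.quantile(max_depths, 0.999)
```

With strongly absorbing THz-band coefficients, the 99.9% depth returned by such a simulation stays well below a millimeter, mirroring the qualitative conclusion of the abstract.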
Abstract:Single-pixel imaging combined with compressed sensing can reconstruct high-quality images of an object from a small number of measurements made by a bucket detector without spatial resolution. However, at low sampling rates, randomly selected projected speckle sequences limit the quality of the reconstructed images. To achieve improved imaging at very low sampling rates, this paper proposes a data-driven Hadamard matrix sorting scheme, which uses the training effect over an entire dataset to adaptively select the transmitted speckle sequences. In the reconstruction stage, two different compressed-sensing algorithms are employed to reconstruct the imaged object at an ultra-low sampling rate of 5% in both numerical simulation and physical experiment, and the proposed ordering is compared with the current best correlation-based Hadamard ordering scheme. The comparison shows that the reconstruction quality of the proposed method is better at sampling rates of 1% to 5%. The results can be used to increase the imaging speed of single-pixel imaging and can be applied to fields such as imaging guidance and medical imaging.
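The idea of a data-driven Hadamard ordering can be illustrated with a small sketch. Here the score is simply the average energy each Hadamard pattern captures on a training set, which is a hypothetical stand-in for the paper's learned ordering criterion; only the top fraction of patterns, set by the sampling rate, is kept as the measurement matrix:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def data_driven_order(train_imgs, rate=0.05):
    """Rank Hadamard patterns by their average |projection| onto a training
    set (a hypothetical scoring rule for illustration) and keep the top
    `rate` fraction as the measurement matrix."""
    X = train_imgs.reshape(len(train_imgs), -1)   # (num_images, n_pixels)
    H = hadamard(X.shape[1])
    score = np.abs(H @ X.T).mean(axis=1)          # energy each pattern captures
    order = np.argsort(score)[::-1]               # best patterns first
    k = max(1, int(rate * H.shape[0]))
    return H[order[:k]]
```

For nonnegative images, the all-ones (DC) pattern captures the most energy and is selected first; the remaining slots go to the patterns that best match the dataset's structure.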
Abstract:To accurately measure ultra-micro-hardness from the indentation while accounting for pile-up, a high-resolution laser scanning confocal microscope was used to obtain the 3D morphology of the residual indent in this study. The four corners of the indentation were then extracted to calculate the ultra-micro-hardness value using the indentation diagonal method. However, this traditional method cannot accurately reflect the complex structure of the residual indent, in particular the pile-up. Taking advantage of the known 3D shape of the indent, a datum plane concept is proposed to determine the true contact area in accordance with contact mechanics. It was found that the contact area determined by the datum plane at the inflection point of the area function differentiated with respect to depth is a good indication of the true contact area, even in the presence of pile-up. The experimental results show that the hardness values measured with the optimal datum plane obtained from the contact mechanics law have an error within ±1.5% and a variability below 1%. The datum plane method thus yields a more stable and accurate hardness value that is more consistent with the definition of hardness.
Abstract:Inspired by many insects, several polarized skylight orientation determination approaches have been proposed. However, almost all of these approaches require the polarization sensor to point to the zenith of the sky dome. Therefore, the influence of sensor tilt (not pointing to the sky zenith) on bio-inspired polarization orientation determination urgently needs to be analyzed. To address this problem, a polarization compass simulation system is designed based on a solar position model, the Rayleigh sky model, and a hypothetical polarization imager. The error characteristics of four typical orientation determination approaches are then investigated in detail under pitch tilt, roll tilt, and combined pitch-and-roll tilt conditions. Finally, simulations and field experiments indicate that the orientation errors of the four typical approaches are highly consistent under tilt interference; the errors are affected not only by the degree of inclination but also by the solar altitude angle and the relative position between the Sun and the polarization sensor. The results of this study can be used to estimate the orientation determination error caused by sensor tilt and to correct this type of error.
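The Rayleigh sky model underlying such a simulation system predicts the degree of polarization from the angular distance between the viewing direction and the Sun. A minimal sketch (single-scattering model only, with a `dop_max` parameter standing in for atmospheric conditions; the paper's full simulator also includes the solar position model and imager geometry) is:

```python
import numpy as np

def rayleigh_dop(sun_az, sun_el, view_az, view_el, dop_max=1.0):
    """Degree of polarization from the single-scattering Rayleigh sky model.
    All angles in radians. The scattering angle gamma is the angular
    distance between the viewing direction and the Sun:
        DoP = dop_max * sin^2(gamma) / (1 + cos^2(gamma))."""
    def unit(az, el):
        # Unit vector from azimuth/elevation.
        return np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])
    cos_g = np.clip(unit(sun_az, sun_el) @ unit(view_az, view_el), -1.0, 1.0)
    return dop_max * (1.0 - cos_g**2) / (1.0 + cos_g**2)
```

The model gives zero polarization looking directly at (or away from) the Sun and maximum polarization 90° from it, which is what tilt perturbations of the sensor boresight effectively shift.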
Abstract:The structural thermal deformation of an optical satellite was optimized and verified through optimal design of the carbon fiber reinforced plastic lamination. The external heat flux of a satellite in a sun-synchronous orbit was analyzed, and the temperature load distribution of the satellite decks under high- and low-temperature conditions was calculated from the external heat flux data and the thermal characteristics of the satellite. Given the temperature load of the satellite platform under extreme working conditions, the lamination angle was used as the optimization variable to analyze the changes in the flatness and angle of the camera-leg mounting plane of the satellite platform and the first-order frequencies in the X/Y/Z directions. The analysis shows that when θ=40°, the layup of the camera mounting honeycomb panel is in the angle order [90°, +40°, 0°, -40°, -40°, 0°, +40°, 90°]. Under this condition, the thermal deformation of the mounting surface is the smallest and the fundamental frequency of the satellite meets the requirements of the launcher. In thermal and vibration tests using this design under the heat load, the payload mounting surface flatness was better than 0.05, the angle change was better than 60″, and the first-order fundamental frequencies in the X/Y/Z directions were 22, 18, and 49.8 Hz, respectively, meeting the required optical camera installation precision and launch vehicle constraints.
Keywords:optical satellite;structure optimization;laminate optimization;thermal deformation;heat influx;fundamental frequency
Abstract:The key to optical synthetic aperture imaging is solving the co-phasing problem: phase imbalance between sub-apertures seriously degrades imaging quality, and engineering practice generally requires a phase modulation accuracy of 1/10 wavelength in the optical band. In this paper, a liquid phase modulator was fabricated using piezoelectric ceramics, and its phase was measured with high precision using interferometry and digital image processing. The modulator was prepared by injecting liquid into the piezoelectric ceramic cavity. Interference fringes of the piezoelectric ceramic liquid phase modulator under different voltages were obtained with an interferometer and recorded by a CCD. A series of digital image processing steps was then performed to obtain single-pixel fringe skeletons, and the fringe marking method was used to track the movement of the skeleton fringes under different voltages, from which the pixel displacement of the skeleton fringes was calculated. The experimental results show that the phase modulation range of the piezoelectric ceramic liquid phase modulator is 0-3.325π over 0-30 V, and the adjustment accuracy can reach 1/40 wavelength. The phase modulation is highly linear over this voltage range, which meets the phase modulation accuracy requirements of a single sub-mirror in an optical synthetic aperture.
Abstract:To satisfy the mechanical property and biocompatibility requirements of porous implants in the medical field, a parametric design method for generating a controllable porous structure using the Voronoi-tessellation algorithm was studied. A Box-Behnken design was developed with the strut diameter, irregularity, and unit distance as structural parameters and the elastic modulus, compressive strength, and porosity as response targets. The optimal structural parameters for the multi-response targets were obtained by combining grey relational analysis (GRA), and a grey relational grade (GRG) prediction model was established; its accuracy was verified by analysis of variance. The results show that strut diameter is the most significant factor affecting the performance of the porous structure. The optimal structural parameters of the irregular porous structure are a strut diameter of 0.3 mm, an irregularity of 0.5, and a unit distance of 2 mm. Confirmatory experiments yielded a sample with an elastic modulus of 2.987 GPa, a compressive strength of 210.048 MPa, a porosity of 89.43%, and a GRG of 0.789 5. The optimization results agree well with the predicted results, with an error of 1.2%, indicating that the optimization method is feasible.
Abstract:To eliminate the loss of model accuracy robustness caused by the variability of temperature-sensitive points, a method for selecting robust temperature-sensitive points was investigated herein. The mechanism of temperature-sensitive point variability was explored, and its causes were explained. On this basis, a method for selecting robust temperature-sensitive points was proposed, and its validity was verified with experimental data collected throughout the year. Two thermal error compensation models were established, one using the robust temperature-sensitive point selection method and one using a non-robust selection method, and their accuracies were analyzed and compared. A model built on wrongly chosen temperature-sensitive points, without considering their variability, suffers significant degradation of fitting accuracy, prediction accuracy, and long-term prediction robustness. The thermal error compensation model based on robust temperature-sensitive point selection not only improves the model accuracy robustness to meet working condition requirements but also avoids introducing wrong temperature-sensitive points into the model. Notably, the annual average prediction accuracy of the model can be controlled within 5.18 μm with five temperature sensors, while the fluctuation of the annual prediction accuracy can be controlled within 2.57 μm. This method for selecting robust temperature-sensitive points for CNC machine tools therefore has significant theoretical value and engineering application potential.
Abstract:Path accuracy is the key index for measuring the motion performance of industrial robots, and standards for its detection and evaluation have been formulated both domestically and abroad. Owing to deviations in robot movement, the sampling frequency of the measurement system, and measurement errors during detection, mapping errors between the theoretical and actual trajectories may arise. In this paper, we analyze in detail the path accuracy evaluation methods in domestic and foreign standards, propose applying the dynamic time warping (DTW) method to the path analysis of industrial robots, show how the DTW method is made continuous through an interpolation model to improve the accuracy of the algorithm, and analyze the effect of robot movement speed and measurement system sampling frequency on the path mapping method. The experimental results showed that, compared with the ISO standard method, the path accuracy of the continuous DTW (CDTW) algorithm improved by 73%, the standard deviation decreased by 86%, and the overall error fluctuation decreased significantly. Both the ISO standard method and the DTW algorithm can be applied to the accuracy evaluation of a linear path. For a corner path, the ISO method exhibits obvious mapping errors, whereas the DTW algorithm resolves them. However, owing to the robot motion speed and the sampling frequency of the measurement system, a multi-point mapping problem remains; the CDTW algorithm effectively solves this problem and improves the path accuracy evaluation.
Keywords:industrial robot;path accuracy;dynamic time warping (DTW);path mapping
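The DTW mapping step between a theoretical and a measured trajectory can be sketched in a few lines. This is the plain discrete algorithm only; the paper's CDTW additionally interpolates between samples to make the mapping continuous:

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two 2-D point sequences
    of shape (n, 2) and (m, 2). The classic O(n*m) recurrence: each cell
    accumulates the local Euclidean cost plus the cheapest predecessor."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match
    return D[n, m]
```

Unlike a point-by-point comparison, the warping path absorbs sampling-rate and speed differences: a trajectory with a repeated sample still maps onto the reference with zero cost.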
Abstract:To improve the machining of large off-axis aspheric mirrors, reduce the mirror surface error caused by external factors, and improve efficiency, a tool path planning method based on slow tool servo technology is proposed that suits large off-axis distances and is not limited by the machine's working aperture. The distance between the outer edge of the mirror and the spindle center is kept within the machining radius of the lathe by translating the off-axis mirror coordinates, thereby reducing the tool off-axis distance and the processing area. After translation, the mirror takes the spindle center as its origin, and several identical mirrors are distributed around the circumference, forming an off-axis array machining arrangement in which cutting zones and transition zones coexist and coordinate rotation is avoided. Mixed spline and sine interpolation equations are established on the premise that the Z-direction cutting speed and acceleration of the tool are continuous, without sudden changes, to smoothly compensate the tool path in the transition zones. Finally, experiments show that over the entire machining area the Z axis and C axis run smoothly, and the machining accuracy reaches PV 0.4 wavelength @ 632.8 nm. Coordinate translation effectively reduces the tool off-axis distance, and the tool path ensures smooth machine operation and supports the machining of aspheric mirrors with large off-axis distances, enabling multiple mirrors to be completed in a single run with high accuracy and efficiency.
Abstract:The angle measurement error caused by eccentric installation of a circular grating is the key factor limiting the angle measurement accuracy of the circular grating, and using dual read heads is an effective way to correct this error. In this paper, the relation between the installation eccentricity of the circular grating and the angle measurement error is analyzed, a theoretical model of the eccentric angle measurement error based on dual read heads is developed, and a new correction method based on a non-diametrically installed dual read head is proposed. The simulation and experimental results show that the new method outperforms the conventional mean method in correcting the angle measurement error caused by both the installation error of the dual read head and the eccentricity of the circular grating. In the test, when the deviation of the dual read head from diametrical alignment was approximately 4°, the angle measurement error corrected by the new method was less than that corrected by the mean method. In the simulation, the corrected angle measurement error of the mean method was 1.785″, whereas that of the new method was 0.720″.
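The conventional mean method that the paper improves upon can be illustrated with a short simulation. In the small-eccentricity model, each read head sees the true angle plus a first-harmonic error proportional to the eccentricity-to-radius ratio (the value used here is hypothetical); averaging two exactly diametrical heads cancels that harmonic, which is why non-diametrical installation (the paper's case) needs a more refined correction:

```python
import numpy as np

def head_reading(theta, ecc_ratio, phase):
    """Angle reported by one read head: the true angle plus the
    first-harmonic error (rad) caused by grating eccentricity,
    in the small-eccentricity approximation."""
    return theta + ecc_ratio * np.sin(theta + phase)

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)  # true shaft angles
ecc = 1e-4                      # eccentricity / grating radius (hypothetical)
h1 = head_reading(theta, ecc, 0.0)
h2 = head_reading(theta, ecc, np.pi)   # head mounted diametrically opposite
mean_corrected = 0.5 * (h1 + h2)       # classical mean method
```

Because sin(θ+π) = -sin(θ), the two heads' eccentricity errors are equal and opposite, and the mean recovers the true angle; once the second head is offset from 180°, a residual harmonic survives, motivating the non-diametrical dual-read-head model.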
Abstract:The single-reference resistance ratio temperature measurement system is a commonly used method for high-precision temperature measurement; it can effectively suppress the effects of constant current source long-term drift, amplifier gain drift, thermoelectric potential, and other factors on the measurement results. However, its performance worsens when the measured resistance is far from the reference resistance. To solve this problem, we analyze the platinum resistance thermometer and the single-reference resistance ratio measurement method based on a constant current source, introduce a nonlinear factor, and explain why the measurement performance deteriorates as the measured resistance moves away from the reference resistance. On this basis, a multi-reference resistance ratio temperature measurement method is proposed and the hardware system is designed. The multi-reference system was tested through equivalent experiments to determine its long-term stability, resolution, and the nonlinear calibration degradation caused by different ambient temperatures, which verifies the correctness of the theoretical derivation in this study. The experimental results show that, over a temperature measurement range of approximately -38.8~64.6 ℃, the measurement stability of the high-precision multi-reference resistance ratio system is better than 0.0025 ℃/5 d, the measurement resolution is better than 0.00125 ℃, and the measurement stability over an ambient temperature range of 5 to 45 ℃ is better than 0.004 ℃. These results meet the requirements of long-term temperature measurement applications with large ambient temperature changes.
Keywords:temperature measurements;ratio temperature measurements;platinum resistance;multi-reference;nonlinear
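The core of the multi-reference ratio scheme can be sketched as follows. The same excitation current flows through the sensor and the reference resistors, so the unknown resistance follows from a voltage ratio; with several references, choosing the one whose voltage is closest to the sensor's keeps the ratio near 1, where signal-chain nonlinearity matters least. The selection rule and the numeric values below are illustrative assumptions, not the paper's hardware design:

```python
def measure_resistance(v_x, refs):
    """Ratio-method sketch: one excitation current through the sensor and
    each reference resistor, so R_x = R_ref * V_x / V_ref. In the
    multi-reference scheme, pick the reference whose measured voltage is
    closest to the sensor's, keeping the ratio near unity.
    `refs` maps reference resistance (ohm) -> its measured voltage (V)."""
    r_ref, v_ref = min(refs.items(), key=lambda rv: abs(rv[1] - v_x))
    return r_ref * v_x / v_ref
```

For a platinum sensor sweeping a wide temperature range, this switching keeps the measured-to-reference ratio bounded, which is the mechanism the abstract credits for the improved stability far from any single reference value.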
Abstract:Deep neural networks require a large amount of data for supervised learning; however, it is difficult to obtain enough labeled data in practical applications. Semi-supervised learning can train deep neural networks with limited samples. Semi-supervised generative adversarial networks can yield superior classification performance; however, they are unstable during training in their classical form. To further improve classification accuracy and solve the training instability problem, we propose a semi-supervised classification model called co-training generative adversarial networks (CT-GAN) for image classification. In the proposed model, co-training of two discriminators is applied to eliminate the distribution error of a single discriminator, and unlabeled samples with higher confidence are selected to expand the training set, which can be utilized for semi-supervised classification and enhances the generalization of deep networks. Experimental results on the CIFAR-10 and SVHN datasets showed that the proposed method achieved better classification accuracies with different numbers of labeled samples. The classification accuracy was 80.36% with 2000 labeled samples on the CIFAR-10 dataset, an improvement of about 5% over the existing semi-supervised method with 10 labeled samples. To a certain extent, the problem of GAN overfitting under few-sample conditions is solved.
Abstract:To address the inadequacy of current datasets for systematically evaluating target detection algorithms under occlusion and the difficulty of acquiring some such data in reality, this paper proposes an occlusion image data generation system that generates images with occlusion and corresponding annotations and builds an occlusion image dataset, named the more than common object dataset (MOCOD). In terms of system architecture, a scene and global management module, a control module, and a data processing module were designed to generate and process the data for the dataset. In terms of data generation, for opaque objects, pixel-level annotations were generated via post-processing with a stencil buffer; for translucent objects, annotations were generated by sampling the 3D temporal space with ray marching. Finally, the occlusion level could be calculated from the generated annotations. The experimental results indicate that the system can efficiently annotate instance-level data, with an average annotation time of nearly 0.07 s. The images in the dataset have ten occlusion levels. Compared with other datasets, MOCOD offers more accurate annotation, more precise occlusion level classification, and considerably faster annotation. Further, MOCOD introduces annotations for translucent objects, which expands the occlusion types covered and helps evaluate the occlusion problem better. In summary, we focused on the occlusion problem and propose an occlusion image data generation system to efficiently build the occlusion image dataset MOCOD, whose accurate annotations can help evaluate the bottlenecks and performance of detection algorithms under occlusion.
Abstract:To overcome the shortcomings of single-sensor satellite imaging, a fusion algorithm for synthetic aperture radar (SAR) and multispectral images based on densely connected networks is proposed herein. First, the SAR and multispectral images are preprocessed separately, and bicubic interpolation is used to resample them to the same spatial resolution. Then, a densely connected network is used to extract feature maps from each image, and a fusion strategy based on the largest regional energy is used to combine the depth features. The fused features are input to a pre-trained decoder for reconstruction to obtain the final fused image. The experiments use Sentinel-1 SAR images, Landsat-8 images, and Gaofen-1 satellite images for verification, and comparisons are drawn with methods based on component substitution, multiscale decomposition, and deep learning. Experimental results indicate that the accuracy of the proposed fusion algorithm in terms of the multiscale structural similarity index is as high as 0.9307, that it outperforms the other fusion algorithms on the remaining evaluation indexes, and that the detailed information of the SAR and multispectral images is well preserved.
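The largest-regional-energy fusion rule can be sketched as follows: at each position, the feature value is taken from whichever map has the larger windowed energy. This is a minimal NumPy illustration under assumed settings (3×3 window, edge padding), not the paper's exact implementation:

```python
import numpy as np

def local_energy(x, window=3):
    """Sum of squared values over a window x window neighborhood (edge-padded)."""
    pad = window // 2
    xp = np.pad(x.astype(float) ** 2, pad, mode="edge")
    e = np.zeros(x.shape, dtype=float)
    for dy in range(window):
        for dx in range(window):
            e += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return e

def fuse_max_region_energy(feat_a, feat_b, window=3):
    """Pick, per pixel, the feature from the map with the larger regional energy."""
    choose_a = local_energy(feat_a, window) >= local_energy(feat_b, window)
    return np.where(choose_a, feat_a, feat_b)
```

Using a windowed energy rather than a per-pixel comparison makes the choice spatially coherent, which is why region-energy rules are popular for combining depth features.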
Abstract:Motion segmentation and measurement of mechanical swing based on traditional block-shaped optical-flow trajectory-group clustering suffer from over-segmentation and fragmentation caused by partial occlusion, interruption, and uneven velocity distribution of the optical-flow trajectories. To overcome these limitations, we herein propose an arc-shaped trajectory clustering algorithm that uses curvature as the similarity metric and combines it with point cloud registration to measure mechanical swing. The algorithm first performs sparse Gaussian process regression over an active subset to learn the average trajectory of the arc-shaped trajectory group. Subsequently, the average trajectory is used as the seed sample of sparse subspace clustering to complete the motion segmentation in one pass. Finally, each non-seed sample is reassigned to the cluster of its surrogate seed sample to obtain the point set of each frame. Through conditional-expectation point cloud registration, the rotation component is extracted to complete the swing-angle measurement. The proposed algorithm is applied to a vehicle windshield wiper under a four-link wiper assembly model and six different environmental illuminances, as part of a visual automation system project targeting the daily safety inspection of passenger-station vehicles, and is compared with other algorithms. The experimental results show that the proposed algorithm can fully learn occluded trajectories, and the mean square error with respect to artificially calibrated values is less than 10%. Furthermore, its computational complexity is only equivalent to a single iteration of the alternating direction method of multipliers (ADMM). Therefore, the proposed algorithm can be used for machine-vision motion measurement in industrial intelligent manufacturing and automatic control systems.
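Curvature as a similarity metric for arc-shaped trajectories can be illustrated with the discrete (Menger) curvature of consecutive point triples, 4·Area/(product of side lengths), which equals 1/R for points on a circle of radius R. This sketch is our own simplification, not the paper's clustering algorithm:

```python
import numpy as np

def menger_curvature(traj):
    """Discrete curvature along a 2-D trajectory (N x 2 array):
    Menger curvature 4*Area / (a*b*c) over consecutive point triples."""
    p, q, r = traj[:-2], traj[1:-1], traj[2:]
    a = np.linalg.norm(q - p, axis=1)
    b = np.linalg.norm(r - q, axis=1)
    c = np.linalg.norm(r - p, axis=1)
    cross = ((q[:, 0] - p[:, 0]) * (r[:, 1] - p[:, 1])
             - (q[:, 1] - p[:, 1]) * (r[:, 0] - p[:, 0]))
    area = 0.5 * np.abs(cross)
    denom = a * b * c
    return np.where(denom > 1e-12, 4.0 * area / denom, 0.0)

def curvature_similarity(t1, t2):
    """Hypothetical similarity score: inverse of the mean absolute
    curvature difference between two trajectories."""
    k1, k2 = menger_curvature(t1), menger_curvature(t2)
    n = min(len(k1), len(k2))
    return 1.0 / (1.0 + np.mean(np.abs(k1[:n] - k2[:n])))
```

Because curvature is invariant to where along the arc a (possibly occluded) trajectory segment lies, segments of the same swing arc score as similar even when they do not overlap in time.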
Abstract:To solve the problems of large errors and long computation times in traditional stitching algorithms when the fracture location is damaged, this paper proposes an automatic splicing algorithm based on the geometric characteristics of the fracture surface. First, the neighborhood feature parameters of the fragment model are defined, the feature points of the fracture surface are extracted, and curvature feature parameters are constructed according to the least-squares principle to optimize the feature point set. Subsequently, to address the difficulty of matching features in a sparse point cloud, the relative distances and relative angles between feature points are defined as feature descriptors. According to set-similarity theory, the feature points of the fracture surface are compared via a similarity measure, and the matching set of fracture-surface feature points is extracted. A random sample consensus algorithm is then used to eliminate mismatched points and select the optimal matching set. Finally, singular value decomposition (SVD) is used to calculate the rotation and translation matrices, and an improved iterative closest point (ICP) algorithm based on a K-D tree is used to achieve accurate splicing of the fragments. The experimental results showed that, compared with the traditional reassembly algorithm, the proposed algorithm uses fewer feature points and simpler feature descriptors, exhibits higher robustness, and more effectively improves the accuracy and efficiency of fragment reassembly.
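The SVD step that recovers the rotation and translation from matched feature-point pairs is the standard Kabsch procedure; a minimal sketch (the abstract's improved K-D-tree ICP refinement is not shown):

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid transform with dst ~ R @ src + t, computed via
    SVD of the cross-covariance of the centered point sets (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

This closed-form estimate gives ICP a good initial alignment, which is what lets the subsequent iterations converge quickly and accurately.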
Keywords:splicing of cultural relics;ordinary least squares;random sample consensus;singular value decomposition;iterative closest point algorithm (ICP)
Abstract:The traditional scale-invariant feature transform (SIFT) algorithm extracts only a small number of feature points from hyperspectral images. In previous work, we exploited the characteristics of hyperspectral images and reconstructed the scale space of the image by considering differences along the spectral dimension, greatly increasing the number of extracted feature points. However, this large increase raises the time cost of the algorithm, and the proportion of effective feature points is low. To resolve this feature-point redundancy and improve matching efficiency, we propose a novel flip-SIFT (F-SIFT) algorithm based on the spectral image space. Exploiting the algorithm's ability to perform fast matching at the pixel level, an eight-neighborhood criterion centered on the current pixel is constructed. Before feature points are extracted, the pixels at the corresponding positions of the difference pyramid are pre-filtered, reducing the number of feature points to less than one tenth of the original. In addition, the traditional matching method only counts the pixel information in the neighborhood of the target pixel and ignores the geometric position of the pixel; therefore, we extend the feature descriptor vector. First, the feature descriptors are coarsely matched by comparing the nearest neighbor with the second-nearest neighbor, and the degree of similarity is recorded and appended to the descriptor vector. Then, according to their reliability, four groups of matching points are iteratively selected from the first 20 sets of matching points to construct triangular planes, and the pixel position information is used for accurate matching. Experimental results showed that the proposed method can effectively reduce the number of redundant feature points and eliminate mismatches.
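The eight-neighborhood pre-filter can be sketched as a strict in-layer extremum test on a difference-of-Gaussians layer: only pixels that are already extrema among their eight in-layer neighbors survive to the full across-scale test. This is our own minimal illustration of the idea, not the exact F-SIFT criterion:

```python
import numpy as np

def eight_neighborhood_prefilter(dog):
    """Boolean mask of pixels that are strict extrema among their eight
    in-layer neighbors in one DoG layer; all other candidates are discarded
    before the 26-neighbor extremum test across scales."""
    c = dog[1:-1, 1:-1]
    neighbors = np.stack([
        dog[:-2, :-2], dog[:-2, 1:-1], dog[:-2, 2:],
        dog[1:-1, :-2],                dog[1:-1, 2:],
        dog[2:, :-2],  dog[2:, 1:-1], dog[2:, 2:],
    ])
    is_max = c > neighbors.max(axis=0)
    is_min = c < neighbors.min(axis=0)
    mask = np.zeros(dog.shape, dtype=bool)
    mask[1:-1, 1:-1] = is_max | is_min
    return mask
```

Since most pixels fail this cheap in-layer test, the expensive cross-scale comparison runs on only a small fraction of candidates, which is how the pre-filter cuts the time cost.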
Keywords:scale-invariant feature transform;spectral image space;eight-neighborhood criterion;double-position iterative matching
Abstract:The soluble solids content (SSC) and firmness of kiwifruit are two important indices for evaluating its quality and distinguishing its maturity. In this paper, we explore the feasibility of predicting the SSC, firmness, and maturity of kiwifruit using optical fiber spectroscopy and of finding the best prediction model. First, an optical fiber spectroscopy (200~1 000 nm) acquisition system was used to collect the reflectance spectra of ‘Guichang’ kiwifruit at different maturity stages. Simultaneously, the reference values of the SSC and firmness were measured. Two methods, namely partial least-squares regression (PLSR) and principal components regression (PCR), were employed to establish models on the basis of the full spectra and the reference values. Then, multiple linear regression (MLR) and an error back-propagation (BP) network were applied to build simplified models on the basis of characteristic variables selected from the full wavelengths using the successive projection algorithm (SPA) and competitive adaptive reweighted sampling (CARS). Finally, partial least-squares discriminant analysis (PLS-DA) and a simplified K nearest neighbor (SKNN) algorithm were applied to build models for predicting the maturity of kiwifruit. The results showed that, for the SSC, the CARS-BP model had the best prediction ability (R²=0.90, RMSEP=0.64, RPD=3.22), and for the firmness, the CARS-MLR model had the best prediction ability (R²=0.83, RMSEP=1.67, RPD=2.47). The PLS-DA model had the best detection ability, and the maturity discrimination accuracy was up to 100%. These results can provide important guidance for the nondestructive prediction of the quality and maturity of fruits.
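The RMSEP and RPD figures quoted above are related by RPD = SD(reference values)/RMSEP. A minimal sketch of both metrics (our illustration; the exact standard-deviation convention the authors used is not stated):

```python
import numpy as np

def rmsep_rpd(y_true, y_pred):
    """Root mean square error of prediction and the residual predictive
    deviation, RPD = SD(reference) / RMSEP (sample SD, ddof=1 assumed)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rpd = np.std(y_true, ddof=1) / rmsep
    return rmsep, rpd
```

An RPD above 3, as reported for the CARS-BP SSC model, is conventionally read as a model suitable for quantitative prediction.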
Abstract:To tackle the low matching accuracy and slow convergence of registration for point clouds with local data loss, a fast point cloud registration algorithm based on a clustered extended Gaussian image is proposed herein. To avoid interference from the local loss, the point cloud is mapped to the extended Gaussian image for clustering and then inversely mapped back to the actual point cloud. Moreover, to improve computational efficiency and registration accuracy, registration is performed by using a distance–curvature descriptor to obtain corresponding point pairs, followed by the iterative closest point (ICP) algorithm. The experimental results reveal that the algorithm achieves high accuracy on point clouds with local loss, lowering the mean squared error (MSE) by 17.9% relative to the fast point feature histogram (FPFH) descriptor combined with ICP. Moreover, it is faster than other algorithms, reducing the running time by 32.5% relative to the signature of histograms of orientations (SHOT) descriptor combined with ICP. Therefore, it can be widely applied for the fast recognition and localization of three-dimensional objects in industrial settings.
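Mapping a point cloud to the extended Gaussian image amounts to histogramming its unit surface normals over bins on the sphere; clusters of heavily populated bins identify dominant surface orientations. A minimal sketch, with the bin counts chosen arbitrarily:

```python
import numpy as np

def extended_gaussian_image(normals, n_theta=18, n_phi=36):
    """EGI as a (theta, phi) histogram of unit surface normals: each
    normal votes into one bin on the sphere."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))          # polar angle, [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)   # azimuth, [0, 2*pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pi_ = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    egi = np.zeros((n_theta, n_phi), dtype=int)
    np.add.at(egi, (ti, pi_), 1)                            # unbuffered accumulation
    return egi
```

Because the EGI depends only on normal directions, clusters found on it are insensitive to which spatial region of the cloud is missing, which is the property the abstract exploits.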
Keywords:machine vision;point cloud registration;extended Gaussian image;distance-curvature descriptor;point cloud with local loss
Abstract:To de-scatter blurred images captured in turbid water, we herein develop a physical model of the underwater polarization imaging process, study the effect of underwater scattering on the transmission of polarized light, and then propose a de-scattering method for turbid underwater images based on a specific polarization state. First, a polarization camera is used to conduct imaging contrast experiments in turbid water. Next, the optimal imaging interval for a specific degree and angle of polarization is selected using an optimization algorithm, after which the de-scattered target image is obtained. Finally, subjective vision and objective indicators are evaluated and used to analyze the target image in different situations. The experimental results indicate that the subjective visual quality of the target image is significantly improved; among the objective evaluation indexes, the enhancement measure by entropy (EME) increases by 455%, the variance increases by 124% and 38%, and the average gradient increases by 19% and 6%. Therefore, the proposed method can simply and effectively suppress scattering in turbid underwater images, increase image contrast, and improve image quality.
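The degree and angle of polarization over which the optimal imaging interval is selected are computed from the four channels of a polarization camera via the linear Stokes parameters; a standard sketch (not the paper's full de-scattering pipeline):

```python
import numpy as np

def polarization_parameters(i0, i45, i90, i135):
    """Linear Stokes parameters, degree of polarization (DoP) and angle of
    polarization (AoP) from the four intensity channels (0, 45, 90, 135
    degrees) of a division-of-focal-plane polarization camera."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dop = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)
    return dop, aop
```

Scattered backlight tends to be partially polarized while target light is less so, which is why selecting pixels by their DoP/AoP interval can separate the two.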