An intelligent method for pregnancy diagnosis in breeding sows according to ultrasonography algorithms

Jung-woo Chae1,#, Yo-han Choi2,#, Jeong-nam Lee1, Hyun-ju Park2, Yong-dae Jeong2, Eun-seok Cho2, Young-sin Kim2, Tae-kyeong Kim3, Soo-jin Sa2,*, Hyun-chong Cho4,*
1Interdisciplinary Graduate Program for BIT Medical Convergence, Kangwon National University, Chuncheon 24341, Korea
2Swine Science Division, National Institute of Animal Science, Rural Development Administration, Cheonan 31000, Korea
3Department of Electronics Engineering, Kangwon National University, Chuncheon 24341, Korea
4Department of Electronics Engineering and Interdisciplinary Graduate Program for BIT Medical Convergence, Kangwon National University, Chuncheon 24341, Korea

# These authors contributed equally to this work.

*Corresponding author: Soo-jin Sa, Swine Science Division, National Institute of Animal Science, Rural Development Administration, Cheonan 31000, Korea. Tel: +82-41-580-3450, E-mail:
*Corresponding author: Hyun-chong Cho, Department of Electronics Engineering and Interdisciplinary Graduate Program for BIT Medical Convergence, Kangwon National University, Chuncheon 24341, Korea. Tel: +82-33-250-6301, E-mail:

© Copyright 2023 Korean Society of Animal Science and Technology. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Oct 26, 2022; Revised: Nov 16, 2022; Accepted: Nov 17, 2022

Published Online: Mar 31, 2023


Pig breeding management directly contributes to the profitability of pig farms, and pregnancy diagnosis is an important factor in breeding management. The need to diagnose pregnancy in sows is therefore widely recognized, and various studies have been conducted in this area. We propose a computer-aided diagnosis system that assists livestock farmers in diagnosing sow pregnancy through ultrasound. Ultrasonic methods for diagnosing pregnancy in sows include the Doppler method, which measures heart rate and pulse status, and the echo method, which diagnoses by an amplitude-depth technique. We propose a method that applies deep learning algorithms to ultrasonography, a form of the echo method. Inception-v4, Xception, and EfficientNetV2 were used as deep learning-based classification algorithms and compared to find the optimal algorithm for pregnancy diagnosis in sows. Because ultrasonography is easily affected by noise from the surrounding environment, Gaussian and speckle noise were added to the ultrasound images. Both the original and noise-added ultrasound images of sows were tested together to determine the suitability of the proposed method on farms. Pregnancy diagnosis achieved an accuracy of up to 0.99 on the original ultrasound images and up to 0.98 on the noise-added images. Even when the noise intensity was strong, diagnosis accuracy remained at 0.96, demonstrating robustness against noise.

Keywords: Classification algorithm; Deep learning; Pregnancy diagnosis; Sow; Ultrasound


The management of pig reproduction is an important factor that is directly related to the success or failure of pig farms [1–3]. Therefore, methods for diagnosing pregnancy in sows have a significant impact on reproductive management and are essential in pig farming [4–6]. Reliable diagnosis can increase pig reproduction by shortening the non-pregnant period of sows and increasing the number of births. Pregnancy in sows can be confirmed through observation of the return to estrus, vaginal biopsy, serum analysis, hormone measurement, and ultrasound detection [7–9]. However, if a sow shows no clear signs of pregnancy, a manager who is inexperienced or short of time and labor may not notice the pregnancy until the due date. In such cases, pregnant sows cannot receive proper care, and miscarriages can occur under stressful conditions [10]. These issues increase feed, management, and labor costs, which has a major adverse effect on profitability. Therefore, pregnancy diagnosis in sows has a great effect on reproduction and can determine the success or failure of pig farms. As its necessity has been emphasized, many institutions and organizations have conducted research, and a variety of methods are used to diagnose pregnancy in sows [11]. Cameron [12] gave a detailed description of the reproductive tract of the sow as felt by rectal examination. Lin et al. [13] showed the expression of αV and β3 integrin subunits in the endometrium during implantation in pigs. Zhou et al. [14] hypothesized that circulating exosome-derived miRNAs might be used to differentiate pregnancy status as early as several days after insemination in pigs and successfully identified circulating exosomal miRNA profiles in the serum of pigs in early pregnancy.
Kauffold and Althouse [15] reviewed the current status of B-mode ultrasonography in pig reproduction and how this technology can be of value in pig production medicine. Kauffold et al. [16] provided an overview of the principles and clinical uses of real-time ultrasonography (RTU) for addressing swine reproductive performance. Kousenidis et al. [17] studied the ultrasonic typification of sows to develop a methodology for pregnancy diagnosis and suggested that detailed real-time ultrasonic scanning can help predict litter size and support the precise management of pregnant sows.

In this study, we developed a computer-aided diagnosis (CADx) method to diagnose pregnancy in sows using ultrasound images, which has advantages over the other methods mentioned above in terms of simplicity, low cost, and high accuracy. CADx is expected to provide additional information to pig farmers by showing the diagnostic result of artificial intelligence, assisting the farmer in reaching a diagnostic decision for the image. We compared the accuracy of three computerized classification approaches under two types of noise: Gaussian and speckle. Of the three approaches selected, the Inception model is one of the most widely used convolutional neural network (CNN) models, Xception builds on Inception with depthwise separable convolution, and EfficientNet is a model that achieved state-of-the-art (SOTA) performance on image classification tasks with far fewer parameters. We added Gaussian and speckle noise because ultrasound images are usually corrupted by them. Although the issues we could explore in one study are only a small fraction of those involved in the entire CADx process for sow pregnancy diagnosis, we expect this study to provide useful information for the design of a robust CADx system that uses ultrasound images.


Ultrasound images of pregnant and non-pregnant sows were collected by experts and used as the dataset for training and performance evaluation of pregnancy diagnosis using deep learning algorithms. In consideration of the various environments found on pig farms, noise-added ultrasound images were generated and used alongside the original images in the performance evaluation. To find the optimal method for diagnosing sow pregnancy, we compared the performance of several classification algorithms.


The dataset was collected from sows that underwent ultrasound imaging in the hog barn of the National Institute of Animal Science (NIAS) located in Cheonan, with the approval of the Institutional Animal Care and Use Committee (IACUC) of the Rural Development Administration (approval No. NIAS-2021-538). All ultrasound images were acquired by trained experts using a MyLab™OmegaVET (Esaote) ultrasonic device and an AC2541 (Esaote) convex array ultrasound transducer with a 1.0–8.0 MHz frequency range. We acquired 5,292 ultrasound images of pregnant sows and 5,367 of non-pregnant sows from 44 animals; 29 sows were at least 23 days pregnant and 15 were not pregnant. The images of pregnant sows were confirmed by the experts. The ultrasound images were collected in GEN-M format over a 4.0–6.0 MHz frequency range with general resolution and middle penetration, and were exported at 860 × 808 resolution in the lossless, uncompressed Bitmap format to minimize feature loss.

The 5,292 ultrasound images of pregnant sows were divided into 4,241 images (88 with invisible embryonic sacs) for training and 1,051 images (14 with invisible embryonic sacs) for performance evaluation. Even experts find it difficult to identify pregnancy accurately in images with invisible embryonic sacs. Of the 5,367 ultrasound images of non-pregnant sows, 4,231 were used for training and 1,136 for performance evaluation. Overall, the training set consisted of 4,241 images of pregnant and 4,231 images of non-pregnant sows, and the test set (Dataset-A) consisted of 1,051 images of pregnant and 1,136 images of non-pregnant sows. The part of Dataset-A in which the embryonic sac was not visible formed a second test set (Dataset-B). The specifications of the images are shown in Fig. 1.

Fig. 1. Ultrasound images of sows.
Generating ultrasound images with speckle and Gaussian noise

Noise is an unwanted phenomenon that is ubiquitous in digital ultrasound images and can appear in different forms and distributions, such as speckle and Gaussian. Pregnancy diagnosis in sows using an ultrasound device can be performed in various situations depending on the surrounding environment [18]. Speckle noise is multiplicative and independent; it results from interference between echoes returning from rough surfaces and produces a granular pattern in the sensor. This type of noise degrades both resolution and contrast in ultrasound images. Gaussian noise is additive and independent; it can be produced by sources such as amplifiers, shot noise, and film grain, among others [19]. The ultrasonic device and probe configuration used on pig farms is not always the same as in this study. In addition, the frequency used to diagnose pregnancy depends on the physical characteristics of the sow, and the ultrasound image can contain Gaussian and speckle noise depending on the surrounding environment. Therefore, we added these two types of noise to the ultrasound images to mimic the noise that occurs in typical farm situations [20,21]. Speckle noise of variance 0.7 and Gaussian noise of zero mean and variance 0.02 were added to the 1,051 ultrasound images of pregnant sows and 1,136 of non-pregnant sows used for testing; speckle noise of variance 0.4 and Gaussian noise of zero mean and variance 0.01 were applied in the same way. The number of noise-added test images equals the number of originals, and no noise-added images were used in the training stage. The noise-added ultrasound images used for testing are shown in Fig. 2. Ultrasound images with noise were used together with the original images for performance evaluation so that the deep learning-based classification algorithm could demonstrate robustness in various environments.
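For reference, the two noise settings described above can be reproduced with a short sketch like the following. The function names and the random image are ours; `skimage.util.random_noise` implements the same additive Gaussian and multiplicative speckle models.

```python
import numpy as np

def add_gaussian_noise(img, var, mean=0.0, rng=None):
    """Additive Gaussian noise: img + n, n ~ N(mean, var)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_speckle_noise(img, var, rng=None):
    """Multiplicative speckle noise: img + img * n, n ~ N(0, var)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img + img * rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((808, 860))  # stand-in for an 860 x 808 ultrasound frame, scaled to [0, 1]

# NoiseT1: Gaussian variance 0.01 plus speckle variance 0.4
noise_t1 = add_speckle_noise(add_gaussian_noise(img, 0.01, rng=rng), 0.4, rng=rng)
# NoiseT2: Gaussian variance 0.02 plus speckle variance 0.7
noise_t2 = add_speckle_noise(add_gaussian_noise(img, 0.02, rng=rng), 0.7, rng=rng)
```

Clipping to [0, 1] keeps the corrupted images in a valid intensity range before they are fed to the classifiers.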

Fig. 2. Ultrasound images with Gaussian and speckle noise.
Classification algorithms using deep learning

To develop a method for diagnosing pregnancy in sows that can be used in real time in various environments, with high processing speed and low computational cost, we adopted deep learning-based classification algorithms [22]. Classification achieves high accuracy through its neural network structure and high processing speed because no position computation is required, making it well suited to real-time pregnancy diagnosis. To select an optimal classification algorithm for pregnancy detection from sow ultrasound images, several deep learning-based classification algorithms known for high performance were evaluated. Inception-v4, Xception, and EfficientNetV2 were each trained on the ultrasound images to generate trained weights, and performance on both the original and noise-added ultrasound images was evaluated and compared to select the optimal algorithm.

The Inception model has been one of the most widely used CNN models since the release of TensorFlow [23]. Its core is a Conv layer called the inception module. Conventional Conv layers operate on data composed of width, height, and channels; width and height decrease through max-pooling as the network deepens, while the number of channels increases. The inception module uses 1 × 1 Conv to reduce the number of channels. This performs a fully connected computation across channels, called network-in-network, and achieves a compression effect by reducing the dimension. The 1 × 1 Conv structure of Inception therefore increases accuracy while reducing the amount of computation. Inception-v2 modified the original inception module: to reduce computation, module A applies factorization, replacing a 5 × 5 Conv with two 3 × 3 Convs, and module B applies asymmetric factorization. To reduce the grid size of the feature map, module C combines a pooling-then-Conv structure and a Conv-then-pooling structure in parallel; these modules replaced the original inception module. Inception-v3 has the same structure as Inception-v2, with various techniques such as RMSProp, label smoothing, factorized 7 × 7 convolutions, and a batch-normalized auxiliary classifier applied to increase performance. In Inception-v4, used in this study, the modules that change the grid are distinguished from the Inception-v3 structure: along with inception modules A, B, and C, reduction modules A and B, which reduce the grid size, were added, improving accuracy. The structure of Inception-v4 is shown in Fig. 3.

Fig. 3. Network structure of Inception-v4.
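The computational saving from the 1 × 1 bottleneck described above can be illustrated with a simple weight count. The channel sizes here are illustrative, not taken from any specific inception module.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (biases omitted for clarity)."""
    return k * k * c_in * c_out

# Direct 3 x 3 convolution: 256 input channels -> 320 output channels.
direct = conv_params(3, 256, 320)

# Inception-style bottleneck: a 1 x 1 conv first squeezes 256 -> 64 channels,
# so the 3 x 3 conv operates on the reduced feature map.
bottleneck = conv_params(1, 256, 64) + conv_params(3, 64, 320)

print(direct, bottleneck)  # 737280 200704: the bottleneck needs ~27% of the weights
```

The same per-pixel channel mixing is what the text calls the network-in-network computation.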

Xception is based on Inception but applies the concept of modified depthwise separable convolution [24]. Xception goes beyond the inception module and aims to completely decouple cross-channel correlations from spatial correlations. As shown in Fig. 4, the correlation between channels is first mapped through a 1 × 1 Conv, as in the inception module, and spatial correlations are then mapped for all output channels. Through this, Xception shows higher classification accuracy than Inception-v3, which has a similar scale, and it is widely used as a pretrained backbone for various encoders owing to its simple concept and structure and high performance.

Fig. 4. Network structure of Xception.
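As a minimal sketch of the two-stage factorization described above (with toy tensor sizes of our own choosing), a depthwise stage handles spatial correlations per channel and a pointwise 1 × 1 stage then mixes channels:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((6, 6, 4))             # toy H x W x C feature map

# Depthwise stage: each of the 4 channels gets its own 3 x 3 filter,
# so only spatial correlations are captured here.
dw = rng.random((3, 3, 4))
h, w, c = x.shape
dw_out = np.zeros((h - 2, w - 2, c))  # 'valid' convolution output
for i in range(h - 2):
    for j in range(w - 2):
        dw_out[i, j] = (x[i:i + 3, j:j + 3] * dw).sum(axis=(0, 1))

# Pointwise stage: a 1 x 1 convolution (a matrix over channels) mixes
# cross-channel correlations only.
pw = rng.random((4, 8))
out = dw_out @ pw                     # shape (4, 4, 8)

# Weight count: 3*3*4 + 4*8 = 68, versus 3*3*4*8 = 288 for a standard 3 x 3 conv.
```

The factorization is what gives Xception its parameter efficiency relative to a standard convolution of the same receptive field.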

EfficientNetV1 is a model that achieved SOTA performance in 2019 with far fewer parameters than other image classification models [25]. The performance of a CNN tends to be proportional to the scale of the model, and many studies have improved performance by enlarging the model. There are three ways to scale up: deepening the network, widening the channels, and increasing the resolution of the input image. EfficientNetV1 found the optimal combination of these three through automated machine learning [26] and proposed a compound scaling method that achieves high performance even with a small model. EfficientNetV2 increases training speed while maintaining accuracy through progressive learning, which gradually increases the input image size while keeping the existing structure, together with a non-uniform scaling technique that compensates for progressive learning [27]. The basic structure of EfficientNetV2 is shown in Fig. 5.

Fig. 5. Network structure of EfficientNetV2.
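The compound scaling idea behind EfficientNetV1 can be written out directly; the base coefficients below are those reported in the EfficientNet paper [25].

```python
# Compound scaling: for a compound coefficient phi, network depth, channel width,
# and input resolution are scaled jointly rather than one at a time.
alpha, beta, gamma = 1.2, 1.1, 1.15   # base coefficients from the EfficientNet paper

def compound_scale(phi):
    depth = alpha ** phi       # multiplier on the number of layers
    width = beta ** phi        # multiplier on the number of channels
    resolution = gamma ** phi  # multiplier on the input image size
    return depth, width, resolution

# The base values were grid-searched under the constraint alpha * beta**2 * gamma**2 ~= 2,
# so each increment of phi roughly doubles the FLOPs of the network.
d, w, r = compound_scale(3)
```

EfficientNetV2 refines this with non-uniform scaling, but the jointly-scaled design is the same.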

These three algorithms were selected for ultrasound pregnancy diagnosis for the following reasons: Inception-v4 reduces computational complexity through the inception module, achieving fast processing and high accuracy; Xception applies depthwise separable convolution, which suits ultrasound images that are essentially single-channel grayscale; and EfficientNetV2 classifies through an optimal combination found by automated machine learning, which is useful because the frequency band is known but an exact effective image resolution cannot be defined.

Inception-v4, Xception, and EfficientNetV2 were trained for pregnancy diagnosis in sows. The 5,292 ultrasound images of pregnant sows were divided into 4,241 for training and 1,051 for testing, and the 5,367 images of non-pregnant sows into 4,231 for training and 1,136 for testing. The training images were further divided into training and validation sets at a ratio of 8:2. Training of each network model continued until the validation loss converged. All training and performance evaluations were performed on Windows 10 x64 with CUDA 10.1, cuDNN, and Python 3.7.4, using an Intel(R) Xeon(R) W-2133 CPU, an NVIDIA TITAN Xp GPU, and 128 GB RAM.
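The 8:2 split can be done with a seeded shuffle of the image indices. This is a sketch; the actual split procedure and seed used in the study are not specified.

```python
import random

def split_indices(n_images, val_ratio=0.2, seed=42):
    """Shuffle image indices and split them into training and validation parts."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)   # fixed seed for a reproducible split
    cut = int(n_images * (1 - val_ratio))
    return idx[:cut], idx[cut:]

# 4,241 pregnant + 4,231 non-pregnant training images = 8,472 in total.
train_idx, val_idx = split_indices(4241 + 4231)
print(len(train_idx), len(val_idx))  # 6777 1695
```

Shuffling before splitting avoids a validation set dominated by images from a few sows recorded consecutively.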


The performance of pregnancy diagnosis in sows was evaluated using the weights trained with Inception-v4, Xception, and EfficientNetV2. The overall structure of the study is shown in Fig. 6. The dataset used for performance evaluation was divided into Dataset-A and Dataset-B. Dataset-A consisted of 1,051 ultrasound images of pregnant sows covering all situations (including 14 with invisible embryonic sacs) and 1,136 ultrasound images of non-pregnant sows. Dataset-B, a subset of Dataset-A, consisted of 14 ultrasound images of pregnant sows with invisible embryonic sacs and 14 ultrasound images of non-pregnant sows. Each dataset was further divided by noise condition into the original images; NoiseT1, with speckle noise of variance 0.4 and Gaussian noise of zero mean and variance 0.01 added to the originals; and NoiseT2, with speckle noise of variance 0.7 and Gaussian noise of zero mean and variance 0.02 added to the originals. Therefore, a total of six test datasets were used for performance evaluation: Original Dataset-A, Original Dataset-B, NoiseT1 Dataset-A, NoiseT1 Dataset-B, NoiseT2 Dataset-A, and NoiseT2 Dataset-B.

Fig. 6. Proposed ultrasonography-based pregnancy diagnosis in sows.

The ultrasound images used in the study are organized as shown in Table 1. Ultrasound images in Dataset-A and Dataset-B were classified for pregnancy using the weights trained with Inception-v4, Xception, and EfficientNetV2. A confusion matrix consisting of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) was used for evaluation. A TP is a pregnant case predicted as pregnant, and a TN is a non-pregnant case predicted as non-pregnant. An FP is a non-pregnant case incorrectly predicted as pregnant, and an FN is a pregnant case incorrectly predicted as non-pregnant. We also employed the metrics of sensitivity, specificity, and accuracy to evaluate pregnancy diagnosis performance. Sensitivity, calculated as TP / (TP + FN), is the proportion of pregnant cases identified as pregnant; specificity, calculated as TN / (TN + FP), is the proportion of non-pregnant cases identified as non-pregnant. Accuracy, calculated as (TP + TN) / (TP + TN + FP + FN), reflects both and summarizes the overall pregnancy diagnosis performance.
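The metrics follow directly from the confusion-matrix counts. For illustration, the counts below are inferred from the Inception-v4 rates reported for the original Dataset-B (14 pregnant and 14 non-pregnant images); they are not taken from the raw results.

```python
def diagnosis_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # pregnant images called pregnant
    specificity = tn / (tn + fp)               # non-pregnant images called non-pregnant
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 12 of 14 pregnant images detected, all 14 non-pregnant images correct.
sens, spec, acc = diagnosis_metrics(tp=12, fn=2, tn=14, fp=0)
print(round(sens, 4), spec, round(acc, 4))  # 0.8571 1.0 0.9286
```

These rounded values match the Inception-v4 row of Table 3 for the original images.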

Table 1. Number of ultrasound images of sows used for training and performance evaluation
            Original               NoiseT1 (Gaussian 0.01, Speckle 0.4)  NoiseT2 (Gaussian 0.02, Speckle 0.7)
            Pregnant  Non-pregnant Pregnant  Non-pregnant                Pregnant  Non-pregnant
Training    4,241     4,231        -         -                           -         -
Dataset-A   1,051     1,136        1,051     1,136                       1,051     1,136
Dataset-B   14        14           14        14                          14        14

The results of the ultrasound pregnancy diagnosis performance evaluation for Dataset-A are shown in Table 2. Xception achieved the highest overall performance. On the original ultrasound images, Xception, EfficientNetV2, and Inception-v4 achieved accuracies of 0.98, 0.99, and 0.98, respectively. However, when noise was added, the performance of EfficientNetV2 and Inception-v4 decreased significantly, whereas the performance of Xception fell by only 0.02 from the original, a minor difference. Results for Dataset-B are shown in Table 3; again, Xception achieved the highest overall performance. On the original ultrasound images, Xception, EfficientNetV2, and Inception-v4 achieved accuracies of 0.89, 0.82, and 0.93, respectively. Dataset-B is difficult to judge even for experts because the embryonic sacs are not visible; nevertheless, the proposed method achieved high overall performance. When the ultrasound images contained noise, the performance of EfficientNetV2 and Inception-v4 decreased significantly, while that of Xception was reduced by only 0.04 from the original. Dataset-B shows lower sensitivity than Dataset-A. We attribute this to the small number of images with invisible embryonic sacs available for training: only 88 of the 4,241 training images. In contrast, specificity was 1.00 for all models on Dataset-B, the opposite situation: non-pregnancy was trained with many images but evaluated with only 14. Although Dataset-B suffers from this data imbalance, the comparison of the three classification algorithms allowed us to confirm unbiased performance.

Table 2. Performance evaluation of Dataset-A

Variable          Sensitivity  Specificity  Accuracy
Original
  Inception-v4    0.9943       0.9622       0.9776
  Xception        0.9859       0.9798       0.9827
  EfficientNetV2  0.9876       0.9982       0.9931
NoiseT1 (Gaussian 0.01 / Speckle 0.4)
  Inception-v4    0.6613       1.0000       0.8372
  Xception        0.9914       0.9736       0.9822
  EfficientNetV2  0.8554       1.0000       0.9305
NoiseT2 (Gaussian 0.02 / Speckle 0.7)
  Inception-v4    0.3949       0.9991       0.7087
  Xception        0.9924       0.9393       0.9648
  EfficientNetV2  0.5956       1.0000       0.8057
Table 3. Performance evaluation of Dataset-B (embryonic sac not visible)

Variable          Sensitivity  Specificity  Accuracy
Original
  Inception-v4    0.8571       1.000        0.9286
  Xception        0.7857       1.000        0.8929
  EfficientNetV2  0.6429       1.000        0.8214
NoiseT1 (Gaussian 0.01 / Speckle 0.4)
  Inception-v4    0.1249       1.000        0.5714
  Xception        0.7857       1.000        0.8929
  EfficientNetV2  0.2857       1.000        0.6429
NoiseT2 (Gaussian 0.02 / Speckle 0.7)
  Inception-v4    0.000        1.000        0.5000
  Xception        0.7143       1.000        0.8571
  EfficientNetV2  0.1427       1.000        0.5714

The classification algorithms used in this study all perform well. When tested on the original ultrasound images, they achieved high performance on both Dataset-A and Dataset-B. However, when noise was included or its intensity increased, performance decreased drastically for all models except Xception. Xception maps the correlation between channels and then maps spatial correlations; that is, channel and spatial relationships are separated by the depthwise separable convolution. The two types of noise were added to the ultrasound images according to the characteristics of ultrasonography. Xception, built on a CNN structure, is robust against noise when extracting spatial features. Furthermore, against speckle noise, which affects all three channels of the stored image unlike the single effective channel of ultrasonography, we presume that robust classification was achieved by extracting channel and spatial features separately. As a result, the Xception classification algorithm was found to be the best choice for pregnancy diagnosis using ultrasound images.


In this study, ultrasonography-based deep learning algorithms for diagnosing pregnancy in sows were proposed. Inception-v4, Xception, and EfficientNetV2 were used as deep learning-based classification algorithms. Gaussian noise with variances of 0.01 and 0.02 and speckle noise with variances of 0.4 and 0.7 were added to the ultrasound images, as these are easily affected by noise from the surrounding environment.

The pregnancy diagnosis algorithms achieved good overall performance. They performed well on ultrasound images with visible embryonic sacs, and even on images with invisible embryonic sacs, which are difficult for experts to distinguish, they achieved accuracies of up to 0.93. When the embryonic sac was visible in noise-added ultrasound images, accuracy reached 0.98; for noise-added images with invisible embryonic sacs, accuracy fell to 0.89. The Xception algorithm showed robustness against noise and achieved high overall performance. In future work, we plan to collect more images with invisible embryonic sacs, of which the current study had only a few. In addition, this study considered pregnancies of at least 23 days; we therefore plan to include pregnancies between 10 and 23 days.

Competing interests

No potential conflict of interest relevant to this article was reported.

Funding sources

This work was carried out with the support of “Cooperative Research Program for Agriculture Science and Technology Development (Project No. PJ01681001),” Rural Development Administration, Korea.


This study was supported by 2022 the RDA Fellowship Program of National Institute of Animal Science, Rural Development Administration, Korea.

Availability of data and material

Upon reasonable request, the datasets of this study are available from the corresponding author.

Authors’ contributions

Conceptualization: Choi YH, Sa SJ, Cho HC.

Data curation: Choi YH, Park HJ, Jeong YD, Cho ES, Kim YS, Sa SJ.

Formal analysis: Chae JW, Choi YH, Park HJ, Jeong YD, Cho ES, Kim YS, Sa SJ.

Methodology: Chae JW, Choi YH, Park HJ, Jeong YD, Cho ES, Kim YS.

Software: Chae JW, Lee JN, Kim TK.

Validation: Chae JW, Choi YH, Sa SJ, Cho HC.

Investigation: Chae JW, Choi YH, Lee JN, Park HJ, Jeong YD, Cho ES, Kim YS, Kim TK.

Writing - original draft: Chae JW, Choi YH, Sa SJ, Cho HC.

Writing - review & editing: Chae JW, Choi YH, Lee JN, Park HJ, Jeong YD, Cho ES, Kim YS, Kim TK, Sa SJ, Cho HC.

Ethics approval and consent to participate

This study was approved by the Institutional Animal Care and Use Committee (IACUC) of Rural Development Administration (No. NIAS-2022-563), Korea.



Kim KH, Hosseindoust A, Ingale SL, Lee SH, Noh HS, Choi YH, et al. Effects of gestational housing on reproductive performance and behavior of sows with different backfat thickness. Asian-Australas J Anim Sci. 2016; 29:142-8


Lee S, Hosseindoust A, Choi Y, Kim M, Kim K, Lee J, et al. Age and weight at first mating affects plasma leptin concentration but no effects on reproductive performance of gilts. J Anim Sci Technol. 2019; 61:285-93


Moturi J, Hosseindoust A, Ha SH, Tajudeen H, Mun JY, Kim JS. Effects of age at first breeding and dietary energy levels during the rearing period of replacement gilts on reproductive performance. Anim Prod Sci. 2022; 62:1581-9


Inaba T, Nakazima Y, Matsui N, Imori T. Early pregnancy diagnosis in sows by ultrasonic linear electronic scanning. Theriogenology. 1983; 20:97-101


Spoolder HAM, Geudeke MJ, Van der Peet-Schwering CMC, Soede NM. Group housing of sows in early pregnancy: a review of success and risk factors. Livest Sci. 2009; 125:1-14


Tajudeen H, Moturi J, Hosseindoust A, Ha S, Mun J, Choi Y, et al. Effects of various cooling methods and drinking water temperatures on reproductive performance and behavior in heat stressed sows. J Anim Sci Technol. 2022; 64:782-91


Auvigne V, Leneveu P, Jehannin C, Peltoniemi O, Sallé E. Seasonal infertility in sows: a five year field study to analyze the relative roles of heat stress and photoperiod. Theriogenology. 2010; 74:60-6


Choi HS, Kiesenhofer E, Gantner H, Hois J, Bamberg E. Pregnancy diagnosis in sows by estimation of oestrogens in blood, urine or faeces. Anim Reprod Sci. 1987; 15:209-16


Gábor G, Kastelic JP, Abonyi-Tóth Z, Gábor P, Endrődi T, Balogh OG. Pregnancy loss in dairy cattle: relationship of ultrasound, blood pregnancy-specific protein B, progesterone and production variables. Reprod Domest Anim. 2016; 51:467-73


Einarsson S, Madej A, Tsuma V. The influence of stress on early pregnancy in the pig. Anim Reprod Sci. 1996; 42:165-72


Peltoniemi O, Björkman S, Oropeza-Moe M, Oliviero C. Developments of reproductive management and biotechnology in the pig. Anim Reprod. 2019; 16:524-38


Cameron RDA. Pregnancy diagnosis in the sow by rectal examination. Aust Vet J. 1977; 53:432-5


Lin H, Wang X, Liu G, Fu J, Wang A. Expression of αV and β3 integrin subunits during implantation in pig. Mol Reprod Dev. 2007; 74:1379-85


Zhou C, Cai G, Meng F, Xu Z, He Y, Hu Q, et al. Deep-sequencing identification of MicroRNA biomarkers in serum exosomes for early pig pregnancy. Front Genet. 2020; 11:536


Kauffold J, Althouse GC. An update on the use of B-mode ultrasonography in female pig reproduction. Theriogenology. 2007; 67:901-11


Kauffold J, Peltoniemi O, Wehrend A, Althouse GC. Principles and clinical uses of real-time ultrasonography in female swine reproduction. Animals. 2019; 9:950


Kousenidis K, Giantsis IA, Karageorgiou E, Avdi M. Swine ultrasonography numerical modeling for pregnancy diagnosis and prediction of litter size. Int J Biol Biomed Eng. 2021; 15:29-35


Zhang Q, Han H, Ji C, Yu J, Wang Y, Wang W. Gabor-based anisotropic diffusion for speckle noise reduction in medical ultrasonography. J Opt Soc Am A. 2014; 31:1273-83


Mafi M, Tabarestani S, Cabrerizo M, Barreto A, Adjouadi M. Denoising of ultrasound images affected by combined speckle and Gaussian noise. IET Image Process. 2018; 12:2346-51


Jeyalakshmi TR, Ramar K. A modified method for speckle noise removal in ultrasound medical images. Int J Comput Electr Eng. 2010; 2:54-8


Wang S, Huang TZ, Zhao XL, Mei JJ, Huang J. Speckle noise removal in ultrasound images by first- and second-order total variation. Numer Algorithms. 2018; 78:513-33


Hemanth DJ, Estrela VV. Deep learning for image processing applications. Amsterdam: IOS Press. 2017


Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. 2017; San Francisco, CA. p. 4278-84


Chollet F. Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017; Honolulu, HI. p. 1251-8


Tan M, Le Q. EfficientNet: rethinking model scaling for convolutional neural networks. In: Proceedings of the 36th International Conference on Machine Learning. 2019; Long Beach, CA. p. 6105-14


He X, Zhao K, Chu X. AutoML: a survey of the state-of-the-art. Knowl Based Syst. 2021; 212:106622


Tan M, Le Q. EfficientNetV2: smaller models and faster training. In: Proceedings of the 38th International Conference on Machine Learning. 2021; p. 10096-106