Medical Science Monitor


08 April 2025: Review Articles  

Artificial Intelligence in Dentistry: A Narrative Review of Diagnostic and Therapeutic Applications

Sizhe Gao1ABCDEF, Xianyun Wang2ACEF, Zhuoheng Xia1BDF, Huicong Zhang3CDEF, Jun Yu2DEF, Fan Yang1ABCD*

DOI: 10.12659/MSM.946676

Med Sci Monit 2025; 31:e946676

Abstract


ABSTRACT: Advancements in digital and precision medicine have fostered the rapid development of artificial intelligence (AI) applications, including machine learning, artificial neural networks (ANN), and deep learning, within the field of dentistry, particularly in imaging diagnosis and treatment. This review examines the progress of AI across various domains of dentistry, focusing on its role in enhancing diagnostics and optimizing treatment in areas such as endodontics, periodontal disease, oral implantology, orthodontics, prosthodontics, and oral and maxillofacial surgery. Additionally, it discusses the emerging opportunities and challenges associated with these technologies. The findings indicate that AI can be effectively utilized in numerous aspects of oral healthcare, including prevention, early screening, accurate diagnosis, treatment plan design assistance, treatment execution, follow-up monitoring, and prognosis assessment. However, notable challenges persist, including issues related to inaccurate data annotation, limited capability for fine-grained feature expression, a lack of universally applicable models, potential biases in learning algorithms, and legal risks pertaining to medical malpractice and data privacy breaches. Looking forward, future research is expected to concentrate on overcoming these challenges to enhance the accuracy and applicability of AI in diagnosing and treating oral diseases. This review aims to provide a comprehensive overview of the current state of AI in dentistry and to identify pathways for its effective integration into clinical practice.

Keywords: Artificial Intelligence, Deep Learning, Dentistry

Introduction

Artificial intelligence (AI) has evolved remarkably since its conceptual inception in 1943, defined as the automation of human cognitive functions [1]. Officially termed “artificial intelligence” at a 1956 Dartmouth conference by John McCarthy, AI’s journey began with the vision of machines autonomously performing tasks traditionally deemed intelligent [2]. These foundational machines were designed to address problems based on input data, setting the stage for more advanced applications in the subsequent decades [3].

The advent of machine learning and deep learning, pivotal subfields of AI, marked a significant evolution. All deep learning is a subset of machine learning, but not all machine learning qualifies as deep learning [4]. Machine learning is a subdomain of artificial intelligence that centers on the development of algorithms and statistical models that can be trained to generate desired outcomes from input data [5]. This approach enables computers to learn from data without being explicitly programmed, enhancing their ability to make informed decisions [6]. Deep learning uses neural networks inspired by the structure of the human brain to learn from large amounts of data; it can automatically identify and extract features from raw data such as images, sound, and text, and use them to make predictions or decisions [6]. A neural network typically comprises multiple layers: an input layer, one or more hidden layers, and an output layer. Information is transmitted between these layers through a process known as forward propagation, which ultimately yields the predicted results [7]. These architectures process vast datasets and extract meaningful patterns and insights [8]. This methodology has not only transformed traditional computing paradigms but has also ushered in a new era of algorithm-based innovations that learn from data without explicit programming [9]. In medicine in particular, AI has found robust applications in radiology, where digitally coded images provide a compatible medium for AI processes, allowing efficient translation into actionable medical insights [10].
The capability of AI systems to emulate the pattern recognition skills of human radiologists has dramatically enhanced diagnostic accuracy and efficiency, drawing on fields including computer vision, medical image processing, and the psychological sciences [11]. The use of AI in dentistry has expanded significantly, encompassing a broad spectrum of applications ranging from diagnostic radiology to complex procedural planning. The ability of AI to automate the quantification, analysis, and visualization of dental imaging has revolutionized traditional practices, offering new insights into tooth growth, decay, bone structure, and soft tissue conditions [12–14]. Furthermore, the potential of AI to develop personalized treatment plans, analyze dental images for disease detection, and design dental restorations through computer-aided design (CAD) systems underscores its transformative impact on dental healthcare [8,15–18].
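The forward propagation described above can be sketched in a few lines of code; the layer sizes, random weights, and ReLU activation below are illustrative choices, not drawn from any cited model.

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate an input vector through successive layers.

    Each hidden layer computes relu(W @ x + b); the final layer is left
    linear so the caller can apply softmax/sigmoid as appropriate.
    """
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

# Illustrative network: 4 inputs -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
biases = [np.zeros(8), np.zeros(2)]
output = forward(rng.standard_normal(4), weights, biases)
print(output.shape)  # (2,)
```

The same layer-by-layer computation underlies the CNNs discussed throughout this review; convolutional layers simply replace the dense matrix products with learned filters.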

Despite these advancements, the integration of AI in dentistry remains in its nascent stages: systems achieve high precision in experimental settings yet face challenges in clinical accuracy and reliability [19,20]. The complexity of dental care, coupled with the dynamic nature of medical science, necessitates that AI systems complement and enhance clinical decision-making without supplanting the nuanced judgment of experienced clinicians. This review delves into the diverse applications of AI in dentistry, provides a comprehensive overview of its current state, explores its successes, and addresses the challenges and future potential of AI in this specialized field. Through a systematic examination of key studies, data-driven insights, and methodological approaches, it endeavors to chart a course for future research and implementation of AI in dentistry, highlighting how this technology can further refine and redefine the standards of dental practice.

Application of AI in Maxillofacial Disease Diagnosis and Treatment

APPLICATION OF AI IN DETECTING DENTAL CARIES:

Tooth decay is the most prevalent human disease, affecting approximately 90% of the global population with varying degrees of dental caries [21]. Without timely intervention, caries can progressively expand, infiltrate the tooth pulp, and lead to conditions such as pulpitis, periapical inflammation, apical abscesses, and, eventually, tooth loss [22]. The diagnosis of dental caries primarily involves reviewing the medical history, conducting clinical examinations, and performing auxiliary assessments, including visual and tactile evaluations. Imaging techniques such as X-ray photography (the most common method), optical coherence tomography (OCT), quantitative photoluminescence, intraoral scanning (IOS), and near-infrared radiography are crucial for detecting early and concealed caries [23–25]. However, the accuracy of caries detection using these images depends heavily on the clinician’s experience, and inexperienced dentists are particularly prone to misdiagnosis. To address this issue, AI technologies are increasingly employed to improve the detection and diagnosis of dental caries using captured images [26–28].

In dental imaging, areas of caries appear as low-density shadows with distinct gradient boundaries. Techniques such as boundary supervision and multitask learning have been integrated for automated caries detection to delineate the extent of lesion erosion [29]. Gráfová et al employed a gradient-based Canny edge detector to segment tooth instances [30]. When coupled with principal component analysis, this method automatically selects the parameters that identify the edge with the highest correlation, thereby facilitating accurate detection. The convolutional neural network (CNN) is a deep learning model; the first representative CNN architecture, proposed by Yann LeCun in 1998, was used to solve image recognition problems [31]. Owing to its powerful parameter calculation and data processing capabilities, once a large amount of data is input into the network, the computer automatically extracts and gradually learns features of the dataset, solving image recognition, segmentation, and classification problems with high precision [32]. CNNs have achieved great success in image processing and have gradually become a research hotspot in medical image analysis; the advent of CNN-based methods around 2016 brought significant advancements in deep learning for dentistry. Introduced in 2015, the U-Net architecture won first place in the ISBI 2015 challenge for dental X-ray image analysis [33] and has since become a foundational framework for segmentation tasks. It inherits the basic convolutional operations of the CNN and achieves refined pixel-level prediction through its distinctive network structure and design improvements. For example, Helli et al utilized U-Net for tooth segmentation and subsequently applied simple morphological erosion to separate tooth instances [34].
Ainas et al employed ANNs to identify dental caries [35], while Lee et al used a pretrained GoogLeNet Inception v3 CNN model to detect caries in premolars and molars, demonstrating the efficacy of CNN algorithms in identifying caries [26]. Zhu et al developed CariesNet, a new model that maps panoramic oral X-ray images to various degrees of dental caries [22]. CariesNet, a U-shaped network featuring partial coding and full-scale axial attention modules, extracts features from the channel and spatial domains to effectively segment carious lesions, classifying them as surface, medium, or deep caries. This network offers smoother and more precise segmentation boundaries than PraNet, U-Net, DeepLabv3, and Res-U-Net, achieving a Dice coefficient of 93.64% and an accuracy of 93.61% in caries segmentation. Esmaeilyfard et al utilized CNNs to predict the presence or absence of dental decay and classified lesions according to their depth and type, achieving high accuracy (95.3%), sensitivity (92.1%), and specificity (96.3%) [36]. Early caries segmentation methods combined detection and structured segmentation to identify carious lesions [37] and successfully segmented lesions on tooth structures. Numerous methods incorporating attention mechanisms have since been developed to enhance the accuracy and efficiency of superficial caries detection [38]; these methods refine operations across various scales, categories, spaces, and channels. Additionally, a boundary-supervision-based method was adopted to address fuzzy gradient boundaries by utilizing multitask learning with mask supervision information to generate additional supervision, thus enhancing the accuracy and automation of caries detection [39].
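The Dice coefficient reported for CariesNet is a standard overlap measure between a predicted segmentation mask and the ground truth; a minimal sketch, with toy masks standing in for real panoramic-image annotations:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy 4x4 lesion masks: the prediction covers all 4 target pixels
# plus 2 extra ones, giving Dice = 2*4 / (6 + 4) = 0.8
target = np.zeros((4, 4), dtype=int); target[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int);   pred[1:3, 1:4] = 1
print(round(dice(pred, target), 3))  # 0.8
```

A Dice score of 93.64%, as reported for CariesNet, thus indicates near-complete pixel-level overlap between predicted and annotated lesions.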

X-ray imaging provides comprehensive insights into oral conditions and key information about dental lesions, but it faces several challenges. First, X-ray imaging is susceptible to noise from patient movement, equipment quality, and medical staff expertise, which can lead to artifacts and disturbances in panoramic images. Second, the average area of a carious lesion constitutes only 1.5% of a panoramic image, with shallow caries accounting for less than 0.5%; this small target size poses significant challenges for modern neural networks [40–42]. Third, 40% demineralization of hard tissue is required before lesions become detectable on radiographs [43], and incipient enamel lesions are often misidentified because of the only slight decrease in density they produce [44]. Thus, X-ray imaging is poorly suited to detecting early-stage carious lesions. Moreover, despite the relatively low radiation dose from X-rays, there remains a risk to patients. To address these issues, innovative caries detection methods based on techniques such as fluorescence, impedance measurement, near-infrared light transillumination (NILT), and fiber-optic transillumination have been developed [45]. Researchers have integrated these techniques with AI. For example, Schwendicke et al trained two state-of-the-art CNNs (ResNet-18 and ResNeXt-50) to detect caries in NILT images and demonstrated their discriminatory abilities [46]. Holtkamp et al trained CNNs for caries detection on NILT images both in vitro and in vivo and achieved promising accuracy using in vivo imagery [47]. Casalegno et al developed a deep learning model for the automatic detection and localization of dental lesions in NILT images, achieving a mean intersection-over-union (IoU) score of 72.7% for a 5-class segmentation task and IoU scores of 49.5% and 49.0% for proximal and occlusal carious lesions, with areas under the receiver operating characteristic curve of 83.6% and 85.6%, respectively [28].
In 2019, Salehi et al introduced a novel approach using OCT imaging and a CNN for occlusal carious lesion detection, achieving a sensitivity of 98% and a specificity of 100% [48]. In 2020, Salehi et al reported deep learning-based classification of ex vivo OCT images for dental caries detection, enhancing CNN classifier accuracy through seven optimization methods; the Adam, AdaMax, and Nadam optimizers showed the highest accuracies of 95.45–97.12% in training and 86.86–88.73% in testing [49]. Charvat et al focused on detecting carious lesions using diffuse reflectance spectroscopy, employing computational intelligence for classification [50]. Using support vector machines (SVM), Bayesian methods, k-nearest neighbor methods, and neural networks, they distinguished healthy tissue from carious lesions with accuracies ranging from 94.1% to 98.4% and cross-validation errors below 8.3%, sufficient to differentiate between healthy and unhealthy tissues. Procházka et al utilized reflectivity data to distinguish between healthy and carious tissues using deep learning and a multilayer CNN [51]. This deep learning system outperformed standard methods in accurately distinguishing caries from healthy tissue without requiring preprocessing or feature selection; classification accuracy reached 98.1% using signal-domain features and 94.4% using spectral-domain features, with a loss criterion below 0.01 in both cases. As AI technology advances, its applications have increasingly permeated daily life. Thanh et al explored the use of deep learning algorithms to diagnose the stages of smooth-surface caries using smartphone images [52]. They employed four deep learning models – Faster R-CNN, YOLOv3, RetinaNet, and the single-shot multibox detector (SSD) – to detect non-cavitated and cavitated caries in photos taken with a standard smartphone.
YOLOv3 and Faster R-CNN demonstrated the highest sensitivities among the models for detecting cavitated caries, with rates of 87.4% and 71.4%, respectively, highlighting the potential of AI for community-based cavitated caries detection. An increasing number of studies have investigated caries detection using deep learning, employing a variety of architectures, and achieving substantial success.
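The sensitivity, specificity, and accuracy figures quoted throughout this section all derive from the binary confusion matrix; a minimal sketch with hypothetical counts (not taken from any cited study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts for a caries classifier evaluated on 200 teeth
sens, spec, acc = binary_metrics(tp=87, fp=6, tn=96, fn=11)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```

Reporting all three together matters because a detector can trade sensitivity against specificity; a model that flags every tooth as carious would score 100% sensitivity but 0% specificity.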

APPLICATION OF AI IN APICAL PERIODONTITIS:

Apical periodontitis is a prevalent condition that affects 34–61% of individuals and approximately 3–4% of all teeth [53]. It represents around 75% of radiolucent jaw lesions and is characterized by inflammatory responses and bone loss in the periapical tissues, primarily resulting from microbial infection of the dental pulp [54,55]. The condition can manifest as either an acute or a chronic inflammatory reaction, depending on the state of infection in the root canal. Acute apical periodontitis is diagnosed clinically, whereas chronic apical periodontitis is typically detected using radiography [56]. Periapical lesions generally heal following root canal treatment or show improvement, as indicated by a reduction in lesion size [57,58]. Radiographic signs of apical periodontitis include a widened periodontal ligament space or visible lesions. Recent studies have used deep learning to detect apical lesions. For example, Li et al used two cascaded ResNet-18 backbones and two convolutional layers to automate the detection of caries and apical periodontitis on periapical radiographs, achieving F1-scores of 0.829 for caries and 0.828 for apical periodontitis [59]. Another study employing a deep convolutional neural network (D-CNN) system reported a reliable detection rate of 92.8% for periapical lesions, correctly identifying 142 of 153 lesions, and demonstrated a significant positive correlation between radiologist evaluations and D-CNN measurements [60]. Orhan et al compared the diagnostic capabilities of a deep CNN algorithm with volume measurements from CBCT images in periapical pathology and found 92.8% reliability in lesion detection [61]; the deep CNN volumetric measurements were comparable to those of manual segmentation, with no significant differences between the two methods. Song assessed the performance of a pretrained U-Net model for segmenting apical lesions on panoramic radiographs.
In a test group of 180 lesions, 147 were segmented at an IoU threshold of 0.3, and the model achieved F1-scores of 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively, demonstrating the efficacy of deep learning in segmenting root-tip lesions [62].
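Evaluating detection with F1-scores at IoU thresholds, as in the study above, typically means counting a predicted lesion as a true positive only when its overlap with a ground-truth lesion meets the threshold; a simplified sketch using hypothetical bounding boxes and greedy one-to-one matching:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def f1_at_threshold(preds, truths, thr):
    """Match each prediction to at most one ground-truth box at IoU >= thr."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        hit = next((t for t in unmatched if iou(p, t) >= thr), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp, fn = len(preds) - tp, len(unmatched)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Toy example: two ground-truth lesions, one well-placed and one spurious prediction
truths = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (40, 40, 50, 50)]
print(f1_at_threshold(preds, truths, thr=0.3))  # 0.5
```

Raising the IoU threshold demands tighter localization, which is why the reported F1 falls from 0.828 at 0.3 to 0.742 at 0.5.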

Root canal therapy is the preferred treatment for apical periodontitis [63]. If a canal is overlooked and remains untreated, it may lead to microbial colonization and, ultimately, failure of the root canal treatment [64]. Thus, it is imperative for dentists to thoroughly understand the morphology of the root canal and accurately determine the length and location of the apical foramen to ensure successful treatment [65]. Hatvani demonstrated the superiority of CNN-based approaches over reconstruction-based methods for dental CT images by enhancing detection of canal size, shape, and curvature [66]. Mohammad assessed the reliability of an ANN in locating the minor apical foramen; the ANN was more precise than endodontists when compared with actual working-length measurements obtained after tooth extraction using a stereomicroscope as the gold standard [67]. Numerous studies have highlighted the excellent performance of AI deep learning systems in determining root canal morphology, showing great promise in detecting endodontic diseases with accuracy, sensitivity, and specificity generally exceeding 80% [68]. These deep learning systems have proven beneficial in diagnostics, particularly in classifying images, and can assist inexperienced clinicians in interpreting complex imagery [69,70]. Some studies using human experts as benchmarks have shown that deep learning is comparable or superior in accuracy [71]. In conclusion, AI models are effective tools that significantly enhance the accuracy and reliability of endodontic disease diagnosis.

APPLICATION OF AI IN PERIODONTAL DISEASE:

Periodontitis, characterized by chronic inflammation of the periodontal tissues, ranks as the sixth most widespread epidemic globally [72]. It leads to irreversible damage to both soft and bone tissues, causing gum recession, tooth loosening, and ultimately, tooth loss. Early diagnosis and treatment planning are essential for the management of periodontitis [73]. Diagnosis typically depends on the patient’s medical history, clinical manifestations, physical examination, and conventional dental radiography [74]. To improve the conventional diagnostic methods, numerous studies have investigated the application of AI in the diagnosis of periodontitis. For example, Liu et al employed a Faster R-CNN model to detect marginal bone loss, achieving evaluation metrics comparable to those of resident dentists [75]. Chang et al utilized a hybrid deep learning approach for the automatic detection and classification of periodontal bone loss and staging of periodontitis, demonstrating high accuracy and reliability [76]. Lee et al developed a deep learning-based CAD model that provided highly accurate measurements of alveolar bone levels and provisional diagnoses based on periapical radiographs [77]. Palkovics et al presented a 3D virtual model generated through semiautomatic segmentation, offering more precise information on intrabony periodontal defect morphologies than traditional diagnostic methods [78]. Sunnetci et al implemented an AI-based system to predict periodontal bone loss [79]; similarly, Ozden et al applied machine learning models, including ANN, decision trees, and SVM, to classify six types of periodontal diseases, achieving an accuracy of 98% [80]. Kim et al introduced DeNTNet, a deep learning-based automatic diagnostic system for detecting periodontal bone loss using panoramic X-rays, achieving an F1 score of 0.75, surpassing the 0.69 score obtained by dental clinicians [81]. 
Kurt-Bayrakdar et al developed a deep learning algorithm for interpreting panoramic radiographs and assessing periodontal bone loss and loss patterns [82]; such algorithms are expected to become indispensable tools in periodontal treatment planning and prognosis. Together, these studies highlight the potential of AI as an auxiliary tool for diagnosing and treating periodontal diseases, generating significant interest, particularly for daily clinical applications.

APPLICATION OF AI IN ORAL IMPLANTOLOGY:

Dental implants are increasingly favored for restoring missing teeth [83]. However, the diversity of cases, the sensitivity of surgical techniques, and the complexity of prognoses require highly proficient implantology practitioners. Advances in digital navigation and workflow have facilitated the use of AI in oral implantology, thereby enhancing treatment predictability [84]. Standard techniques for image acquisition in implant planning include cone beam computed tomography (CBCT), IOS, and facial scanning [85,86]. In addition, software plays a crucial role in integrating multimodal images, comprehensive treatment planning, and image analysis [87].

AI assists dentists by integrating imaging data, oral scans, facial aesthetics, and digital information to accurately interpret alveolar bone characteristics, such as bone mass, thickness, and height, as well as the anatomical structure of the surgical area, including the maxillary sinus, mandibular nerve canal, and mental foramina [88]. It facilitates determination of the optimal implant size, three-dimensional positioning, and axial direction before surgery, thereby enhancing work efficiency and treatment consistency while reducing technical difficulty and surgical risk. This enables customized implant planning for individual patients, incorporating customized dental implants, personalized titanium meshes for guided bone regeneration, and CAD/CAM scaffolds for bone augmentation [89]. Satapathy et al recorded and analyzed data on implant positioning, angulation, and depth for both clinical and AI-generated plans and found a high degree of agreement between them [90]. Lee demonstrated the efficacy of three major deep CNN architectures for identifying and classifying implant systems using a limited number of radiographic images [17]. Hiraiwa et al employed a probabilistic neural network method to analyze CBCT images and successfully determined tooth root locations [91]. Orhan et al applied a CNN to compute bone volume measurements for implant design [61]. AI technologies have significantly enhanced the accuracy of anatomical structure detection, thereby improving implant design. Wu et al analyzed and evaluated the performance of various AI types in predicting the prognosis of dental implants and reported that the accuracy of AI models ranged from 70% to 96.13% [92].

Finite element analysis (FEA) is widely used in implant design to evaluate the stress across each segment of a structure through numerical computation. However, owing to the high computational cost of traditional FEA, neural network approximations are increasingly utilized for optimizing implant designs. Li et al substituted FEA with an AI algorithm to calculate the stress at the implant-bone interface, considering factors such as implant length, thread length, and spacing [93]; this AI model reduced the stress at the interface by 36.6% compared with traditional FEA outcomes. To further optimize implant porosity, length, and diameter, Roy et al integrated an ANN with a genetic algorithm, replacing FEA [94]. Similarly, Zaw et al developed a neural network to simulate the response of a dental implant-bone system [95], with the algorithm precisely calculating the elastic modulus at the implant-bone interface. Bayrakdar et al pioneered the reliability assessment of deep CNNs in implant design, demonstrating that AI diagnostics in the maxillary molar/premolar region were consistent with manual measurements in the mandibular premolar region [96]. Additionally, retrospective studies have investigated the role of AI in identifying implant types from radiographic images, showing improvements in accuracy from 93.8% to 98% [97]. In summary, AI algorithms are potent diagnostic tools in dental implantology that significantly enhance the precision and efficiency of implant design and identification from radiographic images. They also play a crucial role in presurgical digital implant planning and in predicting the prognosis of dental implants.
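The surrogate-modeling idea behind replacing repeated FEA runs with a trained regressor can be illustrated with a least-squares polynomial fit; the stress formula, design ranges, and sample sizes below are synthetic stand-ins, not values from the cited studies:

```python
import numpy as np

# Synthetic "FEA" results: peak interface stress (MPa) for sampled designs.
# In practice each row would come from one finite-element simulation run.
rng = np.random.default_rng(1)
length = rng.uniform(8, 14, 200)       # implant length, mm (illustrative range)
diameter = rng.uniform(3.0, 5.0, 200)  # implant diameter, mm (illustrative range)
stress = 120 - 3.5 * length - 12 * diameter + rng.normal(0, 1.5, 200)

# Quadratic polynomial surrogate fitted by ordinary least squares
X = np.column_stack([np.ones_like(length), length, diameter,
                     length**2, diameter**2, length * diameter])
coef, *_ = np.linalg.lstsq(X, stress, rcond=None)

def predict(l, d):
    """Surrogate prediction: evaluates the fitted polynomial, no FEA run needed."""
    x = np.array([1.0, l, d, l**2, d**2, l * d])
    return float(x @ coef)

print(round(predict(11.0, 4.0), 1))  # close to the noiseless value 33.5
```

Once fitted, the surrogate evaluates in microseconds, which is what makes it practical to wrap inside a genetic algorithm or other optimizer, as in the ANN-based studies above.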

APPLICATION OF AI IN ORAL ORTHODONTICS:

Malocclusion is an oral disease prevalent in clinical settings. Orthodontics, with its focus on cephalometric analysis and pretreatment imaging, is particularly conducive to the adoption of AI [98]. Research on AI in orthodontics encompasses five primary domains: diagnosis and treatment planning, automated landmark detection and cephalometric analysis, assessment of growth and development, evaluation of treatment outcomes, and various other applications [16]. Several AI tools have been developed to assist in therapeutic orthodontic decision-making, such as choosing between orthodontics and orthognathic surgery and deciding whether to extract teeth. Bichu et al demonstrated that ANNs provide essential guidance for orthodontic treatment planning, particularly for less experienced orthodontists [16]. Du et al created an interactive decision support system that combined objective evaluation indices and subjective assessment scores to accurately diagnose dental maxillofacial deformities and formulate orthognathic surgery plans that meet clinical requirements [99]. Jung et al employed a neural network machine learning approach to differentiate between extraction and non-extraction treatments in 156 patients based on 12 cephalometric features and six additional clinical variables [100]; the model achieved 93% accuracy in determining the appropriate treatment approach (extraction versus non-extraction). Yim et al investigated the feasibility of a one-step automatic orthodontic diagnostic tool using a CNN model and validated its accuracy with internal (two hospitals) and external (eight other hospitals) test sets [101].

Cephalometric analysis is essential for diagnosing malocclusions and devising orthodontic treatment plans. Traditional two-dimensional cephalometric analysis of lateral cephalograms is both time-consuming and labor-intensive. It is also susceptible to measurement errors from overlapping bilateral anatomical structures, image blurring, and other visual factors, often requiring repeated measurements to ensure consistency [102]. The accuracy of labeled cephalometric landmarks critically influences clinical analyses and subsequent treatment decisions [103,104]. By contrast, automatic landmark annotation with AI aims to accelerate these tasks and improve their precision. Recent studies have shown that deep learning algorithms can achieve accuracy comparable to that of experienced orthodontists, suggesting that AI implementation in automatic cephalometric analysis could significantly reduce workload and minimize human error [105]. Advances in digital technology have increased the volume of clinical patient data, supporting the integration of AI into automatic cephalometric measurement. Arik et al pioneered the use of a deep CNN for fully automated quantitative cephalometric measurements, achieving high accuracy in detecting and classifying anatomical landmarks and successfully identifying critical structures in the oral and maxillofacial regions [106]. Park et al trained two deep learning algorithms, YOLOv3 and SSD, to assess their accuracy and computational efficiency in automatically identifying cephalometric landmarks [105]. YOLOv3 demonstrated not only a smaller error range but also a more isotropic error tendency than SSD, with mean computation times per image of 0.05 seconds for YOLOv3 and 2.89 seconds for SSD, highlighting the potential of YOLOv3 for clinical use in fully automated cephalometric landmark identification.
Lee et al developed an AI model based on Bayesian CNNs, providing 95% confidence intervals for each marker based on automatic recognition and achieving an accuracy of up to 82.11% and an error margin of no more than 2 mm [107].
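Cephalometric landmarking models such as those above are commonly scored by the radial (Euclidean) error per landmark and the fraction of landmarks falling within a clinical margin such as 2 mm; a minimal sketch with hypothetical landmark coordinates:

```python
import math

def radial_errors(pred, truth):
    """Euclidean distance (mm) between predicted and true landmark positions."""
    return [math.dist(p, t) for p, t in zip(pred, truth)]

def success_rate(errors, margin=2.0):
    """Fraction of landmarks detected within the given margin (mm)."""
    return sum(e <= margin for e in errors) / len(errors)

# Hypothetical landmark sets in mm, for illustration only
truth = [(10.0, 20.0), (35.0, 42.0), (50.0, 18.0), (61.0, 70.0)]
pred = [(10.5, 20.5), (36.0, 44.5), (50.2, 18.1), (63.5, 70.0)]
errs = radial_errors(pred, truth)
print(round(success_rate(errs), 2))  # 0.5 (2 of 4 landmarks within 2 mm)
```

The 2 mm margin is the conventional clinical acceptability threshold in cephalometric benchmarks, which is why studies report success rates at that cutoff.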

Furthermore, Moon et al compared facial growth prediction models using partial least squares (PLS) and AI, and found that AI predicted growth with greater accuracy than PLS overall [108]. Alam et al evaluated the ability of AI models to predict treatment outcomes in orthodontics, revealing a moderate level of accuracy, with a success rate of 73% [109]. These studies indicated that AI could function effectively as a computer-aided diagnostic tool, significantly enhancing the accuracy and reliability of decision-making in orthodontics.

APPLICATION OF AI IN ORAL RESTORATION:

Restorative dentistry focuses primarily on treating lost or damaged teeth and maintaining tooth structures. Dentists restore the anatomical morphology and physiological function of teeth using advanced medical science, technology, and synthetic materials, thereby reducing damage to oral and maxillofacial health. Common restoration methods include the use of fixed dentures, removable partial dentures, overdentures, and dental implants. Biomimetic dentures should provide optimal rehabilitation, prognosis, and functional restoration [110]. Maintaining functional balance necessitates precise replication of the occlusal morphology, 3D positioning, muscle-tongue positioning, and occlusal relationships of healthy natural teeth. Any imbalance can result in tooth and periodontal lesions, further complicating oral health, and posing challenges in replicating the original denture shape, which may lead to treatment errors and failures [111]. Therefore, personalized denture design is crucial.

The integration of AI with prosthetics represents a significant advancement in the design of crown shapes and positioning, the determination of finish lines for tooth preparation, the optimization of parameters for casting metal crowns, and the design of dental prosthesis frameworks [112]. AI-generated dentures not only replicate the morphological quality of those crafted by human experts but can also enhance functionality. Following Goodfellow’s introduction of the generative adversarial network (GAN), a deep learning algorithm that analyzes the training data distribution to generate new data, researchers applied GANs to design and create dental crowns [113]. These personalized dental crowns achieve high accuracy and mimic the morphology and biomechanics of natural teeth. Zhang developed a CNN-based AI model using an S-octree structure to predict the finish line of dental preparations, achieving an average accuracy between 90.6% and 97.4% and demonstrating its capability to automate finish-line localization [114]. Revilla-León et al reviewed research on the role of AI in automatic CAD modeling and the design of dental restorations and showed that AI can construct tooth libraries from digital impressions, extract occlusal surface features, and automatically adjust tooth models for virtual design and reconstruction [97]. The resulting prosthesis models exhibited high anatomical similarity to the original teeth, underscoring the potential of AI in CAD modeling for dental prostheses. Matin et al used AI to simulate and optimize the casting parameters for Co-Cr crown framework manufacturing, significantly reducing porosity and manufacturing time [115]. Takahashi demonstrated the capability of a CNN in classifying dental arches and designing removable partial dentures, potentially establishing denture design principles [116].
Li et al developed a computer-aided segmentation network-driven framework for complete denture metal base (CDMB) design, automatically generating personalized digital design boundaries for complete dentures in edentulous patients [117]. The designed CDMB closely aligned with the patient’s dental morphology, offering superior clinical applicability. Yamaguchi et al demonstrated the effectiveness of AI in predicting the probability of the detachment of composite restorations in restorative dentistry [118]. These findings underscore the role of AI in delivering personalized treatment solutions, and accurately identifying and recommending treatments based on individual diagnostic profiles [119]. The use of individual patient digital health data by AI is anticipated to improve diagnostic accuracy and streamline treatment planning, thereby promoting the creation of personalized repair workflows. This represents a paradigm shift from traditional principle-based prosthodontics to personalized prosthodontics powered by AI. As the healthcare landscape evolves, integrating AI into restorative dental operations has become not merely an option but a necessity to propel the industry forward [120].

APPLICATION OF AI IN ORAL AND MAXILLOFACIAL SURGERY:

Oral cancer (OC) is the sixth most common malignant tumour worldwide, with its incidence and mortality ranking 11th among all cancers [121]. In 2020, approximately 377 713 new OC cases and 177 757 OC-related deaths were reported worldwide. Early detection of cancerous lesions is critical for the successful treatment of OC, underscoring the need for an accurate diagnosis of underlying malignancies and consistent follow-up. Early diagnosis significantly enhances the prognosis and survival rates [122].

In oral surgery, AI is used to screen pathological changes using image data [123]. Research indicates a strong preference for AI in detecting head and neck tumours through various imaging techniques, such as radiography, microscopy, and ultrasound [124]. Adeoye employed four learning algorithms – Cox-Time, DeepHit, DeepSurv, and Random Survival Forest (RSF) – to analyse data from 1,098 patients with oral white lesions. These models successfully predicted the malignant transformation of oral leucoplakia and lichenoid lesions, and their performance was evaluated using the concordance index (C-index), integrated Brier score (IBS), and cross-validation. Notably, DeepSurv and RSF were the most effective, achieving C-indexes of 0.95 and 0.91, and IBS values of 0.04 and 0.03, respectively [125]. Alhazmi developed an ANN to predict OC risk based on risk factors, medical conditions, and clinicopathological features, achieving an average sensitivity of 85.71%, a specificity of 60.00%, and an overall accuracy of 78.95% [126]. Jubair developed a lightweight deep CNN to classify oral lesions as benign or potentially malignant, attaining an accuracy of 85.0%, a specificity of 84.5%, a sensitivity of 86.7%, and an area under the curve (AUC) of 0.928, making it suitable for low-budget diagnostic devices [127]. Marzouk introduced the AIDTLOCCM model, an AI-based method that enhanced the accuracy of OC diagnosis to 90.08% [128]. Aubreville pioneered an automated approach for diagnosing oral squamous cell carcinoma using deep learning on confocal laser endomicroscopy images, which surpassed texture feature-based machine learning methods with an AUC of 0.96 and an average accuracy of 88.3% [129]. Additionally, Kouketsu et al developed an AI model for detecting the presence of OC which, used alongside thorough examination by an oncologist, significantly improved accuracy and reliability, with a sensitivity of 93.9% and a specificity of 81.2% [130].
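Several of the survival models above are scored with the concordance index. As a concrete illustration, the following is a minimal sketch of Harrell's C-index in plain Python; the function name and toy data are hypothetical and not drawn from the cited studies:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs whose
    predicted risk ordering matches the observed survival ordering.
    A pair is comparable when the patient with the shorter observed time
    actually had the event (ties in time are handled naively here)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so that patient a has the shorter observed time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # censored before the other's time: not comparable
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1.0  # higher risk predicted for the earlier event
        elif risk_scores[a] == risk_scores[b]:
            concordant += 0.5  # tied predictions count as half-concordant
    return concordant / comparable

# toy example: three patients, higher score = higher predicted risk
print(concordance_index([2, 5, 9], [1, 1, 0], [0.9, 0.6, 0.1]))  # → 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the reported values of 0.91-0.95 indicate strong discrimination.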

In patients diagnosed with maxillofacial tumours, both cervical lymph node and distant metastases are associated with increased tumour recurrence and reduced survival rates [131]. Pathological examination remains the definitive method for diagnosing malignant tumours, whereas imaging examinations play a crucial role in both diagnosis and treatment. Deep learning leverages extensive patient data to make predictions regarding unknown clinical variables. Bur et al developed and validated machine learning algorithms using five clinical and pathological variables to predict occult lymph node metastasis, consistently surpassing models based solely on tumour invasion [132]. Furthermore, deep learning has proven superior to radiologists in detecting the extranodal extension of cervical lymph node metastases by analysing 703 CT images from 51 patients, some of whom had extranodal extension, demonstrating its efficacy as a diagnostic tool [120]. Somyanonthanakul et al created a fuzzy deep learning-based model to estimate survival time using clinicopathological data on OC, thereby enhancing the accuracy of these predictions [131]. AI also plays a crucial role in early diagnosis, potentially reducing OC-related mortality and morbidity, and is expected to become indispensable in oral and maxillofacial surgery [124].

In maxillofacial surgery, AI applications include prediction and automatic segmentation of the inferior alveolar nerve (IAN) and its localisation relative to the mandibular third molar, which is vital for treatments such as extractions, dental implants, and orthognathic surgery [133]. Although CBCT effectively determines the position of the IAN, simpler methods are required for operators to ascertain its 3D position. Lim et al aimed to use AI to automatically segment and track the IAN’s position on imaging, thereby enabling quicker and safer surgery [134]. They implemented a stepwise customised nnU-Net method that facilitated active learning with limited data to enhance training efficiency and utilized a multicentre dataset to diversify and improve AI performance. Subsequently, they compared the Dice Similarity Coefficient (DSC) and segmentation times for each learning step. After training, the DSC values increased progressively, confirming that the deep active learning framework could enhance IAN labelling accuracy. Choi et al trained a deep CNN with a ResNet-50 architecture to determine the positional relationship between the mandibular third molar and IAN in panoramic radiographs, showing higher accuracy than oral and maxillofacial surgery specialists in these assessments [135]. Ni et al introduced a robust and clinically applicable AI system for the precise and fully automatic segmentation of the mandibular canal on CBCT scans, indicating its potential to assist clinicians in treatment decision-making for mandibular third molars [136].
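The DSC used above to score segmentation overlap is simple to compute. A minimal illustration on flattened binary masks follows (toy data; real evaluations such as those cited run on full 3D CBCT volumes):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks given as flat
    sequences of 0/1: twice the overlap divided by the total foreground."""
    assert len(mask_a) == len(mask_b)
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0  # two empty masks: DSC = 1

pred  = [1, 1, 1, 0, 0, 0]  # predicted canal voxels (flattened)
truth = [0, 1, 1, 1, 0, 0]  # expert-labelled voxels
print(dice_coefficient(pred, truth))  # → 0.6666666666666666
```

A DSC of 1.0 means the prediction and the expert label coincide exactly, which is why rising DSC values across active-learning steps indicate improving labelling accuracy.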

Challenges in the Application of AI in Dental Imaging Diagnosis and Treatment

LACK OF ACCURATELY LABELLED DATA:

The training of deep neural networks requires substantial volumes of annotated data, which presents a significant challenge to the development of image-based intelligent learning algorithms. Manual image tagging is both time-consuming and labour-intensive, leading to a scarcity of accurately annotated dental imaging data [137]. Consequently, it is vital to select high-quality data through independent reliability learning for effective deep model training. The performance of an AI system is enhanced by access to larger datasets [138]. However, strict medical data privacy regulations hinder the standardisation and interoperability of oral medical data systems, thus complicating the acquisition of sufficiently high-quality training data [139]. To overcome these obstacles, several steps are critical: first, individuals, data warehouse providers, and developers should collaborate to establish standardised datasets that guarantee patient privacy and data anonymity; second, technology update and approval processes should be strengthened, institutions for bias evaluation established, and data misuse prevented; and third, data supervision strategies should be defined and detailed data security protocols drafted [140,141]. Large-scale datasets benefit computer vision technology by enabling the recognition of complex general scenarios. Semiautomatic labelling using data engines, such as preliminary training of segmentation networks for mask prediction with minimal manual adjustments, can significantly reduce labelling costs [142].

WEAK EXPRESSION OF FINE-GRAINED FEATURES:

Current deep models for dental imaging primarily utilize general backbone networks to extract tooth structure characteristics [16]. However, accurate diagnosis of many oral diseases depends on the precise segmentation of essential anatomical structures within the oral cavity [143], and the absence of a hierarchical task network for structural analysis limits the accuracy of these models. Moreover, single deep models often fail to identify subtle alveolar bone structures [144]. To resolve this issue, it is crucial to develop and refine progressive deep model architectures, along with coarse-to-fine feature models. The effective modelling of multiple related tasks is vital for enhancing the diagnostic performance of fine-grained and complex oral features. Addressing challenges such as noise and ambiguous lesion areas, which are prevalent in oral medicine images, is also essential. The use of denoising probabilistic models to predict and eliminate image noise combined with high-resolution reconstruction techniques for image smoothing has proven to be advantageous [145]. For feature extraction and classification, contrastive learning methods can distinguish features within and between classes, thereby training more robust feature extractors [146]. Additionally, semi-supervised methods can refine the decision boundary in low-density feature spaces, thereby improving inter-class feature representations [147]. The introduction of vision transformers, which capture both local and global features through multi-head attention mechanisms, may enhance feature representation compared to traditional CNNs [148].
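The attention mechanism behind vision transformers can be reduced to a few lines. The sketch below shows single-head scaled dot-product self-attention over image-patch embeddings, omitting the learned query/key/value projections of a real transformer; NumPy and the toy patch data are assumptions for illustration:

```python
import numpy as np

def self_attention(x):
    """Minimal single-head scaled dot-product self-attention (no learned
    projections): every patch embedding attends to every other patch,
    which is how a vision transformer mixes local and global features."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                       # pairwise patch similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ x                                  # each output mixes all patches

patches = np.random.default_rng(0).normal(size=(16, 8))  # 16 patches, 8-dim embeddings
out = self_attention(patches)
print(out.shape)  # → (16, 8)
```

Because every output row is a weighted mixture of all patches, even a distant region of the radiograph can influence the representation of a local structure, in contrast to a CNN's fixed receptive field.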

LIMITED MODEL GENERALIZABILITY:

The construction of diagnostic models for various oral diseases depends on data from diverse imaging scenarios [149]. Variations in imaging quality and noise levels stemming from different devices lead to distinct data characteristics and distributions, which particularly impact modelling tasks. This variability hampers the ability of cascaded models to integrate tasks effectively, and models that are dependent on specific data often underperform in new environments [150,151]. Therefore, it is essential to identify the most effective methods for transferring trained models from one scenario to another. To improve network generalization, researchers have proposed several data augmentation techniques, including translation, rotation, noise addition, and contrast adjustment, to enhance data diversity [152]. Furthermore, advances in semi- and self-supervised learning architectures are being explored to bolster the network’s ability to learn from substantial volumes of unlabelled data [153]. These methods aim to improve network generalization and address the challenges of acquiring high-precision clinically labelled data. Transfer learning and pre-training are particularly beneficial for enhancing model generalization [154]. Transfer learning applies a model developed for one task to another, such as adapting a lung detection model for dental imaging and allowing a trained convolution kernel to utilize common texture representations in medical images, thereby improving generalization [154]. Pre-training uses extensive datasets to extract dominant features, reducing the model’s learning load for specific tasks and preventing overfitting, especially in limited oral datasets [3].
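The four augmentation transforms mentioned above can be sketched in a few lines of NumPy; the parameter ranges are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

def augment(image, rng):
    """Toy augmentation pipeline for a normalized 2D radiograph array,
    applying translation, rotation, Gaussian noise, and contrast jitter."""
    image = np.roll(image, shift=rng.integers(-5, 6), axis=1)    # translation
    image = np.rot90(image, k=rng.integers(0, 4))                # 90-degree rotation
    image = image + rng.normal(0, 0.02, size=image.shape)        # additive noise
    image = np.clip(0.5 + rng.uniform(0.8, 1.2) * (image - 0.5), 0, 1)  # contrast
    return image

rng = np.random.default_rng(42)
xray = rng.uniform(size=(128, 128))              # stand-in for a normalized radiograph
batch = [augment(xray, rng) for _ in range(8)]   # 8 augmented variants of one image
```

Each pass produces a differently perturbed copy, so a small labelled set can expose the network to a wider range of device-dependent imaging conditions.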

Comprehensive oral lesion diagnosis and treatment require collaboration across multiple tasks. This process involves developing and training specialized models for each task, including object segmentation to identify the maxillary sinus, mandibular canal, and alveolar bone; tooth instance segmentation; internal tooth structure segmentation; and key-point detection in cephalograms [155]. Because models lack generalisability across different tasks, they must be specifically designed and trained for each task using the corresponding annotated data [156].

Prompt learning facilitates interactive guidance by physicians, enabling them to direct the model’s detection or segmentation of target objects using tools, such as mouse clicks or bounding boxes, to generate precise predictions [157]. Through prompt training, the model exhibits the capacity to manage uncertain prompts, such as selecting internal tooth coordinates, and can deliver predictions for internal structures, entire teeth, and the surrounding alveolar bone [158]. Prompt learning has demonstrated zero-shot capability, allowing the model to make accurate predictions on unseen data, thereby achieving robust generalization [159].

THE POTENTIAL FOR LEARNED BIAS:

AI can be trained to exhibit various biases, including those based on race, sex, and age, which can potentially lead to adverse outcomes [160]. For example, a widely used algorithm in the healthcare industry that affects millions of patients shows significant racial bias [161]. Analysis of this algorithm revealed that, at identical risk scores, Black patients were substantially sicker than White patients with respect to certain untreated illnesses. Addressing this disparity could increase the proportion of Black patients receiving additional care from 17.7% to 46.5% [162]. Such cases highlight concerns that algorithms may perpetuate racial and gender disparities, influenced by the biases of their developers or training data. The presence of racial bias in a dataset significantly impacts the reliability of an AI model. When the training data contains racial biases, these biases are perpetuated in the model’s outputs, resulting in unfair treatment of different racial groups [163]. Furthermore, an uneven racial distribution within the dataset can lead to diminished performance for certain groups. For instance, studies have indicated that models trained on datasets with disproportionate racial representation tend to be less accurate in predicting risks for Black or minority ethnic groups [164]. This phenomenon is known as “sample bias,” which diminishes the model’s ability to generalize from data it has not previously encountered [165]. Additionally, the classification of races within the dataset can affect the model’s performance. Ambiguous racial classifications or unclear cultural backgrounds may obscure the true experiences of specific groups, thereby influencing the accuracy and fairness of the model’s outcomes [164]. However, several scholars have argued that these biases can be mitigated by adjusting the data input to the algorithm, particularly by redefining the labels used.
Accurate labelling requires extensive domain knowledge, the ability to identify and extract pertinent data elements, and continuous experimentation. Learned bias is often mathematically characterized as a long-tail distribution problem [166]. Disparities in category numbers, foreground-background ratios, and segmentation scales within datasets lead to varied backpropagation frequencies for imbalanced targets during training. Ideally, a uniform data representation across all dimensions of the learning task could address these biases [167]. The long-tail distribution in datasets suggests underlying data formation issues but can be partially remedied by enlarging the representation of under-represented categories through composite images or data augmentation [168]. A more direct method involves adjusting the training weights based on the data distribution characteristics, assigning greater emphasis to infrequently represented data to ensure fair treatment during backpropagation [169]. Counterfactual reasoning can effectively address discrimination by predicting model preference distributions and adjusting for statistical biases during inference [170]. To ensure the accuracy and fairness of AI systems, it is necessary to pay attention to the diversity and representation of datasets. This not only helps improve the overall performance of the model but also reduces the unfair treatment of specific groups.
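One direct way to adjust training weights by data distribution, as described above, is inverse-frequency class weighting, which gives tail classes more influence during backpropagation. The function name and toy label set below are hypothetical illustrations:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights proportional to 1/frequency, so rare (tail)
    classes contribute as much total gradient as common (head) classes."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# long-tail toy dataset: one dominant class, two under-represented ones
labels = ["caries"] * 80 + ["periapical"] * 15 + ["tumour"] * 5
weights = inverse_frequency_weights(labels)
print(weights)  # the rare "tumour" class receives the largest weight
```

With these weights, each class's weighted sample count is identical (n/k), so the loss no longer favours the head of the distribution.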

Legal Risks of Medical Malpractice and Medical Data Leakage

Effective utilization and reliance on AI in healthcare necessitate that these technologies are both safe and reliable. However, the potential for AI to cause harm during medical procedures raises significant concerns regarding medical malpractice. Consequently, the application of AI in the medical field introduces legal risks, leading to a crucial question: Who should be held responsible for these instances of medical malpractice? Is it the manufacturer, the healthcare institution, or the clinician?

Assigning liability for medical malpractice involving AI systems presents significant legal challenges [171]. A primary issue is whether AI should be granted legal subject status. For example, in the United States, the Watson AI system was classified as an “employee,” resulting in the hospital being held vicariously liable for patient harm. In contrast, Chinese judicial practices attribute AI-related malpractice to hospitals or doctors as a form of medical fault [172]. To navigate these complexities, some scholars advocate for a comprehensive framework that incorporates a hybrid liability model. Under this model, developers should collaborate closely with healthcare institutions during the creation of new AI systems, ensuring that these innovations are informed by clinical needs and effectively integrated into quality improvement strategies [173]. Manufacturers should be held accountable for any algorithmic flaws. Furthermore, hospitals and healthcare institutions have an obligation to conduct thorough assessments of new technologies prior to their adoption and integration. Physicians will likely require additional training to become proficient in using new AI and machine learning systems. Clinicians must ensure that AI outputs are utilized appropriately and in accordance with established standards of care [174]. Moreover, the introduction of AI-specific malpractice insurance could facilitate a more equitable distribution of liability and ensure adequate patient compensation [175].

The development of AI presents significant challenges to the protection of individual health data, even as it enhances patient access to medical care. Privacy and security risks persist, encompassing fundamental patient information, diagnostic and treatment data, electronic medical records, medical imaging materials, and behavioral data. These risks can lead to patient identification, thereby infringing upon personal privacy, reputation, and safety [176]. Although contemporary AI medical systems demonstrate increased intelligence, this advancement is accompanied by a heightened risk of breaches in patient privacy. The European General Data Protection Regulation (GDPR) provides a model for safeguarding data, emphasizing principles such as minimization and transparency. Similarly, incorporating technological safeguards – such as data anonymization, encryption, and blockchain – could help mitigate privacy risks. Legal frameworks must clearly specify data ownership, ensure informed consent for the use of AI, and impose penalties for breaches to bolster trust in AI systems [177]. Safeguarding patient data and privacy requires not only the establishment of robust legal norms but also their effective implementation in practice. Furthermore, there is an urgent need for more precise definitions regarding the categorization and protection levels of personal medical data. This necessitates a clear delineation of behaviors that constitute violations of patient privacy, along with the corresponding legal responsibilities associated with data breaches [172].

Future Directions

Since the introduction of AI in the medical field, AI-based clinical diagnostic and treatment systems have encountered several challenges due to their specialization and limited applicability. Each subspecialty typically employs a unique AI model, specialized analysis software, or system configuration, necessitating that clinicians navigate multiple systems, platforms, and devices. This complexity not only consumes significant resources and time but also has the potential to diminish the efficiency and accuracy of AI-assisted diagnostics and treatments. Oral diseases, in particular, often require a multidisciplinary approach for comprehensive diagnosis and treatment planning. The COVID-19 pandemic catalyzed the adoption of AI in healthcare, particularly in remote healthcare delivery, patient monitoring, and digital education. The pandemic underscored the need for safer, more efficient, and contactless solutions, which are especially pertinent in the high-risk environment of dental practice. AI is recognized as a critical advancement for addressing the ongoing spread of diseases and facilitating more affordable and timely treatments [178]. During this period, AI-assisted telemedicine established a precedent for remote healthcare delivery and patient follow-up [179]. The onset of the pandemic and its subsequent effects have led to a surge in the utilization of digital technologies for various purposes, ranging from consultations to dental education [180]. Notably, contemporary patients have exhibited increased involvement in healthcare-related decision-making due to the growing acceptance of virtual healthcare systems and associated digital innovations [181].

The pandemic not only highlighted the feasibility of AI-driven solutions but also revealed the untapped potential for future innovations in dentistry. By leveraging these advancements, the dental field is poised to adopt a more digital, efficient, and patient-centered approach in the coming years. Technological advancements are facilitating the introduction of teledentistry, online learning, and other digital alternatives, thereby enhancing, preserving, and advancing various aspects of dental practice and education. For instance, integrating AI with smartphone applications and computer software could be advantageous for effectively evaluating the reliability of AI-based programs in routine clinical practice [182,183]. The implementation of whole-process digitalization-assisted immediate implant placement and immediate restoration treatment in the aesthetic zone may enhance implant accuracy, reduce marginal bone loss, improve aesthetic outcomes, and increase patient satisfaction in comparison to conventional treatment methods [184]. Building upon these developments, the integration of AI into teledentistry, online dental education, and other digital tools is expected to expand further. Moreover, further research is necessary to combine this technology with features such as low-noise instruments and computerized clinical devices to evaluate their mutual effects on patient management [185,186]. Future innovations may include AI-driven diagnostics for remote consultations, intelligent treatment planning systems, and more personalized approaches to patient care. Advances in deep learning technology are positioned to address challenges related to interpretability, cross-modal diversity, repeatability, and scalability through innovations in online learning, optimized algorithms, and multimodal data integration. 
As digital technologies for clinical diagnosis, treatment, and portable diagnostic tools continue to evolve, AI learning data sources are anticipated to expand beyond traditional imaging data to encompass disease biomarkers, spectral signals, and extensive textual data.

Conclusions

This paper reviewed the application of AI in dentistry, assessing both the benefits and limitations of AI technologies in the diagnosis and treatment of various oral and maxillofacial diseases across different fields of dentistry. AI is rapidly progressing in dentistry and has demonstrated significant benefits. Research has highlighted the advantages of data-driven AI, particularly its reliability and transparency, which surpass human capabilities in specific contexts. Despite these advancements, the application of AI in dental medicine is still in its infancy. Current deep learning research is primarily methodological and has not been fully integrated into clinical practice, underscoring the need for further advancement to satisfy the stringent safety standards of clinical environments. Future studies should focus on enhancing deep learning methods for diagnosing and predicting oral and maxillofacial diseases, including optimizing models, integrating cross-modal datasets with experiential libraries, and developing large-scale public oral health datasets. The integration of AI technology into dentistry is poised to revolutionise medical equipment, accelerate the industrialisation and commercialisation of AI, and bring intelligent medical treatments closer to clinical applications, fundamentally transforming the traditional dental industry model.

References

1. Kilani A, Hamida AB, Hamam H, Artificial intelligence review: Encyclopedia of Information Science and Technology, 2018; 106-119, IGI Global Available from: https://doi.org/10.4018/978-1-5225-2255-3.ch010

2. Rajaraman V, John McCarthy – Father of artificial intelligence: Resonance, 2014; 19; 198-207

3. Patil S, Albogami S, Hosmani J, Artificial intelligence in the diagnosis of oral diseases: Applications and pitfalls: Diagnostics (Basel), 2022; 12(5); 1029

4. Aljulayfi IS, Almatrafi AH, Althubaitiy RO, The potential of artificial intelligence in prosthodontics: A comprehensive review: Med Sci Monit, 2024; 30; e944310

5. Schwendicke F, Samek W, Krois J, Artificial intelligence in dentistry: Chances and challenges: J Dent Res, 2020; 99(7); 769-74

6. Ari T, Sağlam H, Öksüzoğlu H, Automatic feature segmentation in dental periapical radiographs: Diagnostics (Basel), 2022; 12(12); 3081

7. Rajaram Mohan K, Mathew Fenn S, Artificial intelligence and its theranostic applications in dentistry: Cureus, 2023; 15(5); e38711

8. Vodanović M, Subašić M, Milošević D, Savić Pavičin I, Artificial intelligence in medicine and dentistry: Acta Stomatol Croat, 2023; 57(1); 70-84

9. Ahmed N, Abbasi MS, Zuberi F, Artificial intelligence techniques: Analysis, application, and outcome in dentistry – a systematic review: Biomed Res Int, 2021; 2021; 9751564

10. Chan HP, Samala RK, Hadjiiski LM, Zhou C, Deep learning in medical image analysis: Adv Exp Med Biol, 2020; 1213; 3-21

11. Bonny T, Al Nassan W, Obaideen K, Contemporary role and applications of artificial intelligence in dentistry: F1000Res, 2023; 12; 1179

12. Estai M, Tennant M, Gebauer D, Deep learning for automated detection and numbering of permanent teeth on panoramic images: Dentomaxillofac Radiol, 2022; 51(2); 20210296

13. Park WJ, Park JB, History and application of artificial neural networks in dentistry: Eur J Dent, 2018; 12(4); 594-601

14. Caruso P, Silvestri E, Sconfienza LM: Cone beam CT and 3D imaging: A practical guide, 2013, Springer Science & Business Media Available from: https://doi.org/10.1007/978-88-470-5319-9

15. Lian L, Zhu T, Zhu F, Zhu H, Deep learning for caries detection and classification: Diagnostics (Basel), 2021; 11(9); 1672

16. Ezhov M, Gusarev M, Golitsyna M, Clinically applicable artificial intelligence system for dental diagnosis with CBCT: Sci Rep, 2021; 11(1); 15006

17. Lubbad MAH, Kurtulus IL, Karaboga D, A comparative analysis of deep learning-based approaches for classifying dental implants decision support system: J Imaging Inform Med, 2024; 37(5); 2559-80

18. Bichu YM, Hansa I, Bichu AY, Applications of artificial intelligence and machine learning in orthodontics: A scoping review: Prog Orthod, 2021; 22(1); 18

19. Katne T, Kanaparthi A, Gotoor S, Artificial intelligence: Demystifying dentistry – the future and beyond: Int J Contemp Med Surg Radiol, 2019; 4(4); D6-D9

20. Aggarwal R, Sounderajah V, Martin G, Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis: NPJ Digit Med, 2021; 4(1); 65

21. Nam Y, Kim HG, Kho HS, Differential diagnosis of jaw pain using informatics technology: J Oral Rehabil, 2018; 45(8); 581-88

22. Zhu H, Cao Z, Lian L, CariesNet: A deep learning approach for segmentation of multi-stage caries lesion from oral panoramic X-ray image: Neural Comput Appl, 2022 [Online ahead of print]

23. Gomez J, Detection and diagnosis of the early caries lesion: BMC Oral Health, 2015; 15(Suppl 1); S3

24. Metzger Z, Colson DG, Bown P, Reflected near-infrared light versus bite-wing radiography for the detection of proximal caries: A multicenter prospective clinical study conducted in private practices: J Dent, 2022; 116; 103861

25. Michou S, Vannahme C, Bakhshandeh A, Intraoral scanner featuring transillumination for proximal caries detection. An in vitro validation study on permanent posterior teeth: J Dent, 2022; 116; 103841

26. Lee JH, Kim DH, Jeong SN, Choi SH, Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm: J Dent, 2018; 77; 106-11

27. Prados-Privado M, García Villalón J, Martínez-Martínez CH, Dental caries diagnosis and detection using neural networks: A systematic review: J Clin Med, 2020; 9(11); 3579

28. Casalegno F, Newton T, Daher R, Caries detection with near-infrared transillumination using deep learning: J Dent Res, 2019; 98(11); 1227-33

29. Huang C, Wang J, Wang S, Zhang Y, A review of deep learning in dentistry: Neurocomputing, 2023; 554(14); 126629

30. Gráfová L, Kasparová M, Kakawand S, Study of edge detection task in dental panoramic radiographs: Dentomaxillofac Radiol, 2013; 42(7); 20120391

31. Yang S, Research on image recognition method based on improved neural network; 125-129, IEEE Available from: https://doi.org/10.1109/CISCE55963.2022.9851078

32. Alzubaidi L, Zhang J, Humaidi AJ, Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions: J Big Data, 2021; 8(1); 53

33. Ronneberger O, Fischer P, Brox T, U-net: Convolutional networks for biomedical image segmentation: Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5–9, 2015 proceedings, part III 18, 2015; 234-41, Springer International Publishing Available from: https://doi.org/10.1007/978-3-319-24574-4_28

34. Helli S, Hamamcı A, Tooth instance segmentation on panoramic dental radiographs using u-nets and morphological processing: Düzce Üniversitesi Bilim ve Teknoloji Dergisi, 2022; 10(1); 39-50

35. Albahbah AA, El-Bakry HM, Abd-Elgahany S, Detection of caries in panoramic dental X-ray images using back-propagation neural network: International Journal of Electronics Communication and Computer Engineering, 2016; 7(5); 250-56

36. Esmaeilyfard R, Bonyadifard H, Paknahad M, Dental caries detection and classification in CBCT images using deep learning: Int Dent J, 2024; 74(2); 328-34

37. Lee S, Oh SI, Jo J, Deep learning for early dental caries detection in bitewing radiographs: Sci Rep, 2021; 11(1); 16807

38. Chibinski ACR, Gehrke SA, Dental caries perspectives: A collection of thoughtful essays: BoD-Books on Demand, 2024 Available from: https://doi.org/10.5772/intechopen.110999

39. Wang X, Gao S, Jiang K, Multi-level uncertainty aware learning for semi-supervised dental panoramic caries segmentation: Neurocomputing, 2023; 540; 126208

40. Min K, Lee GH, Lee SW, Attentional feature pyramid network for small object detection: Neural Netw, 2022; 155; 439-50

41. Chong Y, Chen X, Pan S, Context union edge network for semantic segmentation of small-scale objects in very high-resolution remote sensing images: IEEE Geoscience and Remote Sensing Letters, 2020; 19; 1-5

42. Liu Y, Duan Y, Zeng T, Learning multi-level structural information for small organ segmentation: Signal Processing, 2022; 193; 108418

43. White SC, Pharoah MJ, Oral radiology: Principles and interpretation, 2013, Elsevier Health Sciences

44. Castro VM, Katz JO, Hardman PK, In vitro comparison of conventional film and direct digital imaging in the detection of approximal caries: Dentomaxillofac Radiol, 2007; 36(3); 138-42

45. Schneider H, Ahrens M, Strumpski M, An intraoral OCT probe to enhanced detection of approximal carious lesions and assessment of restorations: J Clin Med, 2020; 9(10); 3257

46. Schwendicke F, Elhennawy K, Paris S, Deep learning for caries lesion detection in near-infrared light transillumination images: A pilot study: J Dent, 2020; 92; 103260

47. Holtkamp A, Elhennawy K, Cejudo Grano de Oro JE, Generalizability of deep learning models for caries detection in near-infrared light transillumination images: J Clin Med, 2021; 10(5); 961

48. Salehi HS, Mahdian M, Murshid MM, Deep learning-based quantitative analysis of dental caries using optical coherence tomography: An ex vivo study; 10857

49. Salehi HS, Barchini M, Mahdian M, Optimization methods for deep neural networks classifying OCT images to detect dental caries; 11217

50. Charvát J, Procházka A, Fričl M, Diffuse reflectance spectroscopy in dental caries detection and classification: SIViP, 2020; 14; 1063-70

51. Procházka A, Charvát J, Vyšata O, Mandic D, Incremental deep learning for reflectivity data recognition in stomatology: Neural Comput & Applic, 2022; 34; 7081-89

52. Thanh MTG, Van Toan N, Ngoc VTN, Deep learning application in dental caries detection using intraoral photos taken by smartphones: Applied Sciences, 2022; 12(11); 5504

53. Berlin-Broner Y, Febbraio M, Levin L, Association between apical periodontitis and cardiovascular diseases: A systematic review of the literature: Int Endod J, 2017; 50(9); 847-59

54. Liu J, Liu X, Shao Y, Periapical lesion detection in periapical radiographs using the latest convolutional neural network ConvNeXt and its integrated models: Sci Rep, 2024; 14(1); 25429

55. Azuma MM, Samuel RO, Gomes-Filho JE, The role of IL-6 on apical periodontitis: A systematic review: Int Endod J, 2014; 47(7); 615-21

56. Hilmi A, Patel S, Mirza K, Galicia JC, Efficacy of imaging techniques for the diagnosis of apical periodontitis: A systematic review: Int Endod J, 2023; 56(Suppl 3); 326-39

57. Segura-Egea JJ, Martín-González J, Castellanos-Cosano L, Endodontic medicine: Connections between apical periodontitis and systemic diseases: Int Endod J, 2015; 48(10); 933-51

58. Connert T, Truckenmüller M, ElAyouti A, Changes in periapical status, quality of root fillings and estimated endodontic treatment need in a similar urban German population 20 years later: Clin Oral Investig, 2019; 23(3); 1373-82

59. Li S, Liu J, Zhou Z, Artificial intelligence for caries and periapical periodontitis detection: J Dent, 2022; 122; 104107

60. Asiri AF, Altuwalah AS, The role of neural artificial intelligence for diagnosis and treatment planning in endodontics: A qualitative review: Saudi Dent J, 2022; 34(4); 270-81

61. Orhan K, Bayrakdar IS, Ezhov M, Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans: Int Endod J, 2020; 53(5); 680-89

62. Song IS, Shin HK, Kang JH, Deep learning-based apical lesion segmentation from panoramic radiographs: Imaging Sci Dent, 2022; 52(4); 351-57

63. Conrad J, Retelsdorf J, Attia S, German dentists’ preferences for the treatment of apical periodontitis: A cross-sectional survey: Int J Environ Res Public Health, 2020; 17(20); 7447

64. Karobari MI, Adil AH, Basheer SN, Evaluation of the diagnostic and prognostic accuracy of artificial intelligence in endodontic dentistry: A comprehensive review of literature: Comput Math Methods Med, 2023; 2023; 7049360

65. Carrotte P, Endodontics: Part 4. Morphology of the root canal system: Br Dent J, 2004; 197(7); 379-83

66. Hatvani J, Horváth A, Michetti J, Deep learning-based super-resolution applied to dental computed tomography: IEEE Transactions on Radiation and Plasma Medical Sciences, 2019; 3(2); 120-28

67. Saghiri MA, Garcia-Godoy F, Gutmann JL, Lotfi M, Asgar K, The reliability of artificial neural network in locating minor apical foramen: A cadaver study: J Endod, 2012; 38(8); 1130-34

68. Umer F, Habib S, Critical analysis of artificial intelligence in endodontics: A scoping review: J Endod, 2022; 48(2); 152-60

69. Xue Y, Zhang R, Deng Y, A preliminary examination of the diagnostic value of deep learning in hip osteoarthritis: PLoS One, 2017; 12(6); e0178992

70. Wang X, Yang W, Weinreb J, Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning: Sci Rep, 2017; 7(1); 15415

71. Mohammad-Rahimi H, Motamedian SR, Rohban MH, Deep learning for caries detection: A systematic review: J Dent, 2022; 122; 104115

72. Cardoso EM, Reis C, Manzanares-Céspedes MC, Chronic periodontitis, inflammatory cytokines, and interrelationship with other chronic diseases: Postgrad Med, 2018; 130(1); 98-104

73. Corbet E, Smales R, Oral diagnosis and treatment planning: Part 6. Preventive and treatment planning for periodontal disease: Br Dent J, 2012; 213(6); 277-84

74. Preshaw PM, Detection and diagnosis of periodontal conditions amenable to prevention: BMC Oral Health, 2015; 15(Suppl 1); S5

75. Liu M, Wang S, Chen H, Liu Y, A pilot study of a deep learning approach to detect marginal bone loss around implants: BMC Oral Health, 2022; 22(1); 11

76. Chang HJ, Lee SJ, Yong TH, Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis: Sci Rep, 2020; 10(1); 7531

77. Lee CT, Kabir T, Nelson J, Use of the deep learning approach to measure alveolar bone level: J Clin Periodontol, 2022; 49(3); 260-69

78. Palkovics D, Mangano FG, Nagy K, Windisch P, Digital three-dimensional visualization of intrabony periodontal defects for regenerative surgical treatment planning: BMC Oral Health, 2020; 20(1); 351

79. Sunnetci KM, Ulukaya S, Alkan A, Periodontal bone loss detection based on hybrid deep learning and machine learning models with a user-friendly application: Biomedical Signal Processing and Control, 2022; 77; 103844

80. Ozden FO, Özgönenel O, Özden B, Aydogdu A, Diagnosis of periodontal diseases using different classification algorithms: a preliminary study: Niger J Clin Pract, 2015; 18(3); 416-21

81. Kim J, Lee HS, Song IS, Jung KH, DeNTNet: Deep Neural Transfer Network for the detection of periodontal bone loss using panoramic dental radiographs: Sci Rep, 2019; 9(1); 17615

82. Kurt-Bayrakdar S, Bayrakdar İŞ, Yavuz MB, Detection of periodontal bone loss patterns and furcation defects from panoramic radiographs using deep learning algorithm: a retrospective study: BMC Oral Health, 2024; 24(1); 155

83. Duraccio D, Mussano F, Faga MG, Biomaterials for dental implants: Current and future trends: J Mater Sci, 2015; 50; 4779-812

84. Shujaat S, Bornstein MM, Price JB, Jacobs R, Integration of imaging modalities in digital dental workflows – possibilities, limitations, and potential future developments: Dentomaxillofac Radiol, 2021; 50(7); 20210268

85. Rahim A, Khatoon R, Khan TA, Artificial intelligence-powered dentistry: Probing the potential, challenges, and ethicality of artificial intelligence in dentistry: Digit Health, 2024; 10; 20552076241291345

86. Mangano C, Luongo F, Migliario M, Combining intraoral scans, cone beam computed tomography and face scans: The virtual patient: J Craniofac Surg, 2018; 29(8); 2241-46

87. Jacobs R, Salmon B, Codari M, Cone beam computed tomography in implant dentistry: Recommendations for clinical use: BMC Oral Health, 2018; 18(1); 88

88. Ritter L, Reiz SD, Rothamel D, Registration accuracy of three-dimensional surface and cone beam computed tomography data for virtual implant planning: Clin Oral Implants Res, 2012; 23(4); 447-52

89. Lindh C, Petersson A, Rohlin M, Assessment of the trabecular pattern before endosseous implant treatment: Diagnostic outcome of periapical radiography in the mandible: Oral Surg Oral Med Oral Pathol Oral Radiol Endod, 1996; 82(3); 335-43

90. Satapathy SK, Kunam A, Rashme R, AI-assisted treatment planning for dental implant placement: Clinical vs AI-generated plans: J Pharm Bioallied Sci, 2024; 16(Suppl 1); S939-41

91. Hiraiwa T, Ariji Y, Fukuda M, A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography: Dentomaxillofac Radiol, 2019; 48(3); 20180218

92. Wu Z, Yu X, Wang F, Xu C, Application of artificial intelligence in dental implant prognosis: A scoping review: J Dent, 2024; 144; 104924

93. Li H, Shi M, Liu X, Shi Y, Uncertainty optimization of dental implant based on finite element method, global sensitivity analysis and support vector regression: Proc Inst Mech Eng H, 2019; 233(2); 232-43

94. Roy S, Dey S, Khutia N, Design of patient specific dental implant using FE analysis and computational intelligence techniques: Applied Soft Computing, 2018; 65; 272-79

95. Zaw K, Liu GR, Deng B, Tan KB, Rapid identification of elastic modulus of the interface tissue on dental implants surfaces using reduced-basis method and a neural network: J Biomech, 2009; 42(5); 634-41

96. Kurt Bayrakdar S, Orhan K, Bayrakdar IS, A deep learning approach for dental implant planning in cone-beam computed tomography images: BMC Med Imaging, 2021; 21(1); 86

97. Revilla-León M, Gómez-Polo M, Vyas S, Artificial intelligence applications in implant dentistry: A systematic review: J Prosthet Dent, 2023; 129(2); 293-300

98. Kazimierczak N, Kazimierczak W, Serafin Z, AI in orthodontics: Revolutionizing diagnostics and treatment planning – a comprehensive review: J Clin Med, 2024; 13(2); 344

99. Du W, Bi W, Liu Y, Machine learning-based decision support system for orthognathic diagnosis and treatment planning: BMC Oral Health, 2024; 24(1); 286

100. Jung SK, Kim TW, New approach for the diagnosis of extractions with neural network machine learning: Am J Orthod Dentofacial Orthop, 2016; 149(1); 127-33

101. Yim S, Kim S, Kim I, Accuracy of one-step automated orthodontic diagnosis model using a convolutional neural network and lateral cephalogram images with different qualities obtained from nationwide multi-hospitals: Korean J Orthod, 2022; 52(1); 3-19

102. Nishimoto S, Sotsuka Y, Kawai K, Personal computer-based cephalometric landmark detection with deep learning, using cephalograms on the internet: J Craniofac Surg, 2019; 30(1); 91-95

103. Doff MH, Hoekema A, Pruim GJ, Long-term oral-appliance therapy in obstructive sleep apnea: A cephalometric study of craniofacial changes: J Dent, 2010; 38(12); 1010-18

104. da Fontoura CS, Miller SF, Wehby GL, Candidate gene analyses of skeletal variation in malocclusion: J Dent Res, 2015; 94(7); 913-20

105. Park JH, Hwang HW, Moon JH, Automated identification of cephalometric landmarks: Part 1-Comparisons between the latest deep-learning methods YOLOV3 and SSD: Angle Orthod, 2019; 89(6); 903-9

106. Arık SÖ, Ibragimov B, Xing L, Fully automated quantitative cephalometry using convolutional neural networks: J Med Imaging (Bellingham), 2017; 4(1); 014501

107. Lee JH, Yu HJ, Kim MJ, Automated cephalometric landmark detection with confidence regions using Bayesian convolutional neural networks: BMC Oral Health, 2020; 20(1); 270

108. Moon JH, Shin HK, Lee JM, Comparison of individualized facial growth prediction models based on the partial least squares and artificial intelligence: Angle Orthod, 2024; 94(2); 207-15

109. Alam MK, Alanazi DSA, Alruwaili SRF, Alderaan RAI, Assessment of AI models in predicting treatment outcomes in orthodontics: J Pharm Bioallied Sci, 2024; 16(Suppl 1); S540-42

110. Luo X, Niu J, Su G, Research progress of biomimetic materials in oral medicine: J Biol Eng, 2023; 17(1); 72

111. Afrashtehfar KI, Ahmadi M, Emami E, Abi-Nader S, Tamimi F, Failure of single-unit restorations on root filled posterior teeth: A systematic review: Int Endod J, 2017; 50(10); 951-66

112. Koul R, Upadhyay G, Kalia D, Verma K, Artificial intelligence in prosthodontics: Current applications and future avenues: A narrative review: Journal of Primary Care Dentistry and Oral Health, 2024; 5(3); 94-100

113. Ding H, Cui Z, Maghami E, Morphology and mechanical performance of dental crown designed by 3D-DCGAN: Dent Mater, 2023; 39(3); 320-32

114. Zhang B, Dai N, Tian S, The extraction method of tooth preparation margin line based on S-Octree CNN: Int J Numer Method Biomed Eng, 2019; 35(10); e3241

115. Matin I, Hadzistevic M, Vukelic D, Development of an expert system for the simulation model for casting metal substructure of a metal-ceramic crown design: Comput Methods Programs Biomed, 2017; 146; 27-35

116. Takahashi T, Nozaki K, Gonda T, Ikebe K, A system for designing removable partial dentures using artificial intelligence. Part 1. Classification of partially edentulous arches using a convolutional neural network: J Prosthodont Res, 2021; 65(1); 115-18

117. Li C, Jin Y, Du Y, Efficient complete denture metal base design via a dental feature-driven segmentation network: Comput Biol Med, 2024; 175; 108550

118. Yamaguchi S, Lee C, Karaer O, Predicting the debonding of CAD/CAM composite resin crowns with AI: J Dent Res, 2019; 98(11); 1234-38

119. Joda T, Waltimo T, Pauli-Magnus C, Population-based linkage of big data in dental research: Int J Environ Res Public Health, 2018; 15(11); 2357

120. Arjumand B, The application of artificial intelligence in restorative dentistry: A narrative review of current research: Saudi Dent J, 2024; 36(6); 835-40

121. Siegel RL, Miller KD, Jemal A, Cancer statistics, 2019: CA Cancer J Clin, 2019; 69(1); 7-34

122. Hegde S, Ajila V, Zhu W, Zeng C, Artificial intelligence in early diagnosis and prevention of oral cancer: Asia Pac J Oncol Nurs, 2022; 9(12); 100133

123. Shan T, Tay FR, Gu L, Application of artificial intelligence in dentistry: J Dent Res, 2021; 100(3); 232-44

124. Thurzo A, Urbanová W, Novák B, Where is the artificial intelligence applied in dentistry? Systematic review and literature analysis: Healthcare (Basel), 2022; 10(7); 1269

125. Adeoye J, Koohi-Moghadam M, Lo AWI, Deep learning predicts the malignant-transformation-free survival of oral potentially malignant disorders: Cancers (Basel), 2021; 13(23); 6054

126. Alhazmi A, Alhazmi Y, Makrami A, Application of artificial intelligence and machine learning for prediction of oral cancer risk: J Oral Pathol Med, 2021; 50(5); 444-50

127. Jubair F, Al-Karadsheh O, Malamos D, A novel lightweight deep convolutional neural network for early detection of oral cancer: Oral Dis, 2022; 28(4); 1123-30

128. Marzouk R, Alabdulkreem E, Dhahbi S, Deep transfer learning driven oral cancer detection and classification model: Computers, Materials & Continua, 2022; 73(2); 3905-20

129. Aubreville M, Knipfer C, Oetter N, Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning: Sci Rep, 2017; 7(1); 11979

130. Kouketsu A, Doi C, Tanaka H, Detection of oral cancer and oral potentially malignant disorders using artificial intelligence-based image analysis: Head Neck, 2024; 46(9); 2253-60

131. Somyanonthanakul R, Warin K, Chaowchuen S, Survival estimation of oral cancer using fuzzy deep learning: BMC Oral Health, 2024; 24(1); 519

132. Bur AM, Holcomb A, Goodwin S, Machine learning to predict occult nodal metastasis in early oral squamous cell carcinoma: Oral Oncol, 2019; 92; 20-25

133. Kim YT, Pang KM, Jung HJ, Clinical outcome of conservative treatment of injured inferior alveolar nerve during dental implant placement: J Korean Assoc Oral Maxillofac Surg, 2013; 39(3); 127-33

134. Lim HK, Jung SK, Kim SH, Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network: BMC Oral Health, 2021; 21(1); 630

135. Choi E, Lee S, Jeong E, Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography: Sci Rep, 2022; 12(1); 2456

136. Ni FD, Xu ZN, Liu MQ, Towards clinically applicable automated mandibular canal segmentation on CBCT: J Dent, 2024; 144; 104931

137. Bui TH, Hamamoto K, Paing MP, Deep fusion feature extraction for caries detection on dental panoramic radiographs: Applied Sciences, 2021; 11(5); 2005

138. Sarker IH, Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions: SN Comput Sci, 2021; 2(6); 420

139. Tian S, Dai N, Zhang B, Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks: IEEE Access, 2019; 7; 84817-28

140. Thapa C, Camtepe S, Precision health data: Requirements, challenges and existing techniques for data security and privacy: Comput Biol Med, 2021; 129; 104130

141. He J, Baxter SL, Xu J, The practical implementation of artificial intelligence technologies in medicine: Nat Med, 2019; 25(1); 30-36

142. Heo MS, Kim JE, Hwang JJ, Artificial intelligence in oral and maxillofacial radiology: What is currently possible?: Dentomaxillofac Radiol, 2021; 50(3); 20200375

143. Ren R, Luo H, Su C, Machine learning in dental, oral and craniofacial imaging: A review of recent progress: PeerJ, 2021; 9; e11451

144. Xi R, Ali M, Zhou Y, Tizzano M, A reliable deep-learning-based method for alveolar bone quantification using a murine model of periodontitis and micro-computed tomography imaging: J Dent, 2024; 146; 105057

145. Jebur RS, Zabil MH, Hammood DA, Cheng LK, A comprehensive review of image denoising in deep learning: Multimedia Tools and Applications, 2024; 83(20); 58181-99

146. Cao Z, Li X, Feng Y, ContrastNet: Unsupervised feature learning by autoencoder and prototypical contrastive learning for hyperspectral imagery classification: Neurocomputing, 2021; 460; 71-83

147. Han K, Sheng VS, Song Y, Deep semi-supervised learning for medical image segmentation: A review: Expert Systems with Applications, 2024; 245(1); 123052

148. Han K, Wang Y, Chen H, A survey on vision transformer: IEEE Trans Pattern Anal Mach Intell, 2023; 45(1); 87-110

149. Chen Y, Du P, Zhang Y, Image-based multi-omics analysis for oral science: Recent progress and perspectives: J Dent, 2024; 151; 105425

150. Nazir N, Sarwar A, Saini BS, Recent developments in denoising medical images using deep learning: An overview of models, techniques, and challenges: Micron, 2024; 180; 103615

151. Faust O, Salvi M, Barua PD, Issues and Limitations on the road to fair and inclusive AI solutions for biomedical challenges: Sensors, 2025; 25(1); 205

152. Chlap P, Min H, Vandenberg N, A review of medical image data augmentation techniques for deep learning applications: J Med Imaging Radiat Oncol, 2021; 65(5); 545-63

153. Rani V, Kumar M, Gupta A, Self-supervised learning for medical image analysis: A comprehensive review: Evolving Systems, 2024; 15(4); 1-27

154. Angriawan M, Transfer learning strategies for fine-tuning pretrained convolutional neural networks in medical imaging: Research Journal of Computer Systems and Engineering, 2023; 4(2); 73-88

155. Singh NK, Raza K, Progress in deep learning-based dental and maxillofacial image analysis: A systematic review: Expert Systems with Applications, 2022; 199; 116968

156. Maleki F, Ovens K, Gupta R, Generalizability of machine learning models: Quantitative evaluation of three methodological pitfalls: Radiol Artif Intell, 2022; 5(1); e220028

157. Khaertdinova L, Shmykova T, Pershin I, Gaze assistance for efficient segmentation correction of medical images: IEEE Access, 2025 Available from: https://doi.org/10.1109/ACCESS.2025.3530701

158. Tsoromokos N, Parinussa S, Claessen F, Estimation of alveolar bone loss in periodontitis using machine learning: Int Dent J, 2022; 72(5); 621-27

159. Lu Y, Zhao X, Wang J, Medical knowledge-enhanced prompt learning for diagnosis classification from clinical text; 278-88

160. Seyyed-Kalantari L, Zhang H, McDermott MBA, Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations: Nat Med, 2021; 27(12); 2176-82

161. Dueno T, Racist robots and the lack of legal remedies in the use of artificial intelligence in healthcare: Conn Ins LJ, 2020; 27; 337

162. Obermeyer Z, Powers B, Vogeli C, Mullainathan S, Dissecting racial bias in an algorithm used to manage the health of populations: Science, 2019; 366(6464); 447-53

163. Hanna M, Pantanowitz L, Jackson B, Ethical and bias considerations in artificial intelligence (AI)/machine learning: Modern Pathology, 2024; 100686

164. Mickel J, Racial/ethnic categories in AI and algorithmic fairness: why they matter and what they represent; 2484-94

165. Law Patent: Berkeley Technology Law Journal, 2013

166. Zhou X, Zhai J, Cao Y, Feature fusion network for long-tailed visual recognition: Pattern Recognition, 2023; 144; 109827

167. Khanam R, Hussain M, Hill R, Allen P, A comprehensive review of convolutional neural networks for defect detection in industrial applications: IEEE Access, 2024

168. Zhou Y, Hu Q, Wang Y, Deep super-class learning for long-tail distributed image classification: Pattern Recognition, 2018; 80; 118-28

169. Pouyanfar S, Sadiq S, Yan Y, A survey on deep learning: Algorithms, techniques, and applications: ACM Computing Surveys (CSUR), 2018; 51(5); 1-36

170. Cowgill B, Tucker CE, Economics, fairness and algorithmic bias: Journal of Economic Perspectives, 2019 Available from: https://doi.org/10.2139/ssrn.3361280

171. Price WN, Gerke S, Cohen IG, Potential liability for physicians using artificial intelligence: JAMA, 2019; 322(18); 1765-66

172. Shentu X, A review on legal issues of medical robots: Medicine (Baltimore), 2024; 103(21); e38330

173. Ahmed Z, Mohamed K, Zeeshan S, Dong X, Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine: Database (Oxford), 2020; 2020; baaa010

174. Drabiak K, Leveraging law and ethics to promote safe and reliable AI/ML in healthcare: Front Nucl Med, 2022; 2; 983340

175. Bottomley D, Thaldar D, Liability for harm caused by AI in healthcare: an overview of the core legal concepts: Front Pharmacol, 2023; 14; 1297353

176. Hellyer P, Artificial intelligence in endodontics: Br Dent J, 2024; 237(1); 48

177. Bertolaccini L, Falcoz PE, Brunelli A, The significance of general data protection regulation in the compliant data contribution to the European Society of Thoracic Surgeons database: Eur J Cardiothorac Surg, 2023; 64(3); ezad289

178. Lyon J, Bogodistov Y, Moormann J, AI-driven optimization in healthcare: The diagnostic process: Eur J Manage Issues, 2021; 29; 218-31

179. Iqbal J, Cortés Jaimes DC, Makineni P, Reimagining healthcare: unleashing the power of artificial intelligence in medicine: Cureus, 2023; 15(9); e44658

180. Deiters W, Burmann A, Meister S, Strategies for digitalizing the hospital of the future: Urologe A, 2018; 57(9); 1031-39 [in German]

181. Lee SM, Lee D, Opportunities and challenges for contactless healthcare services in the post-COVID-19 era: Technol Forecast Soc Chang, 2021; 167; 120712

182. Pascadopoli M, Zampetti P, Nardi MG, Smartphone applications in dentistry: A scoping review: Dent J (Basel), 2023; 11(10); 243

183. Ostaş D, Almăşan O, Ileşan RR, Point-of-care virtual surgical planning and 3D printing in oral and cranio-maxillofacial surgery: A narrative review: J Clin Med, 2022; 11(22); 6625

184. Han X, Qi C, Guo P, Whole-process digitalization-assisted immediate implant placement and immediate restoration in the aesthetic zone: A prospective study: Med Sci Monit, 2021; 27; e931544

185. Kim IH, Cho H, Song JS, Assessment of real-time active noise control devices in dental treatment conditions: Int J Environ Res Public Health, 2022; 19(15); 9417

186. Vitale MC, Gallo S, Pascadopoli M, Local anesthesia with SleeperOne S4 computerized device vs traditional syringe and perceived pain in pediatric patients: a randomized clinical trial: J Clin Pediatr Dent, 2023; 47(1); 82-90


Medical Science Monitor eISSN: 1643-3750