An evaluation of the use of augmented intelligence in 3D facial landmarking—an approach to facial analysis and craniofacial anomalies detection
Highlight box
Key findings
• Augmented intelligence can automatically detect landmarks to a high accuracy and significantly reduce the time for data analysis in three-dimensional (3D) facial imaging.
What is known and what is new?
• Artificial intelligence has been used for facial landmarking and is continually improving in its ability to identify crucial anatomical facial features.
• The technique described in this paper identifies anatomical facial features with greater accuracy and precision, enabling clinicians to use artificial intelligence to automate the tedious task of manual landmarking.
What is the implication, and what should change now?
• The study provides an exciting foundation to further analyze large volume datasets and the protocol can be applied immediately to existing databases of 3D faces.
Introduction
Background
Facial soft tissue morphology is a critical component of craniofacial analysis, heavily influenced by underlying skeletal relationships and dental occlusion (1,2). Historically, this analysis relied on two-dimensional (2D) methods like cephalometry and photography, which lacked the spatial accuracy for comprehensive evaluation (3). The advent of three-dimensional (3D) imaging technologies, such as stereophotogrammetry, has revolutionized the field, enabling precise, non-invasive volumetric assessment (4). This has facilitated major advancements in orthodontics, maxillofacial surgery, and the study of craniofacial anomalies (5). The foundation of this analysis rests on anthropometric landmarks, a system pioneered by Farkas, who established points based on their anatomical reliability and clinical relevance for diagnosing dysmorphology and planning treatment (6,7).
Facial analysis using 3D anthropometric landmarks
The foundation of modern facial analysis stems from the meticulous work of Farkas, who established standardized anthropometric landmarks based on their anatomical reliability and clinical relevance (8). Farkas specifically selected landmarks that represented: (I) stable skeletal junctions (e.g., nasion, gnathion) that could be reliably palpated; (II) consistent soft tissue points (e.g., pronasale, cheilion) visible across ethnic groups; and (III) functionally significant facial features critical for surgical planning and syndrome diagnosis (9). His landmark system prioritized points that could be repeatedly identified with minimal measurement error (<1 mm), while also capturing the proportional relationships essential for assessing facial harmony (8). This careful selection enabled both cross-population comparisons and longitudinal growth studies that remain clinically valuable today. The transition to 3D imaging has preserved Farkas’ landmark definitions while overcoming the limitations of direct anthropometry through non-contact measurement and volumetric analysis (1,5). Modern systems now combine Farkas’ anatomical logic with computational precision, automating landmark identification for points like exocanthion and alare that were originally chosen for their consistency in marking facial boundaries (10). Contemporary applications in surgical simulation and dysmorphology diagnosis continue to utilize these landmarks because their anatomical basis—whether marking muscle insertions, tissue junctions, or proportional divisions—remains fundamentally sound across imaging modalities (11). While 3D technology has introduced new capabilities like asymmetry quantification (12) and soft tissue deformation analysis, it still relies on the landmark framework Farkas established through decades of validation against craniofacial growth patterns and surgical outcomes (6).
Historical development of 3D imaging
The roots of 3D imaging trace back to photogrammetry, a technique developed in the mid-20th century for measuring photographs to reconstruct 3D structures. Early applications in medicine and dentistry were labor-intensive, involving contour mapping and stereo-photogrammetry (13,14). These methods laid the groundwork for modern systems but were limited by their complexity and reliance on manual measurements (5). During the 1970s and 1980s, laser scanning was introduced, initially in engineering and industrial design before its adaptation for medical applications (15). Laser scanners, including the Minolta “VIVID” series, facilitated swift and high-resolution acquisition of facial surfaces, achieving accuracies within the range of 0.1–0.5 mm (2,16). Simultaneously, structured light methodologies were developed, utilizing projected patterns to determine surface depth via triangulation (5). Advanced systems such as the 3dMDfaceTM integrated stereophotogrammetry with structured light technology, yielding sub-millimeter precision and photorealistic results (10).
Clinical applications and advancements
3D imaging has become indispensable in maxillofacial surgery, orthodontics, and craniofacial anomalies. Key applications include:
- Facial averaging and growth analysis: 3D templates allow comparisons of normative datasets to identify deviations in growth or symmetry (17).
- Facial analysis of different races and populations: 3D templates have been used to understand facial variations in Caucasian and non-Caucasian groups (7,18-21).
- Orthognathic surgery: preoperative planning and postoperative assessment are enhanced by volumetric analysis of soft tissue changes (1,22,23).
- Craniofacial anomalies: devices like the 3dMD system quantify asymmetries in cleft lip and palate patients, guiding surgical interventions (5,24).
- Image-guided surgery: laser scanners and navigational systems improve precision in real-time surgical procedures (25).
- Facial analysis and genetics: 3D facial analysis has been used to link facial features to genetic mutations (26-28).
Rationale and knowledge gap
Currently, clinical planning often involves comparing a patient’s 3D cephalometric measures to population-based normative data. However, facial morphology exhibits significant ethnic variation, and these norms may not be representative of the patient’s specific background, potentially leading to suboptimal aesthetic and functional outcomes (23). While automated landmarking systems powered by artificial intelligence have emerged to address the time-consuming and variable nature of manual identification, independent validation of these systems remains essential. Several studies, including the work by Berends et al. on which the current platform is based (32), have demonstrated promise, primarily in Caucasian populations. A significant gap remains in the independent validation of such platforms, particularly regarding their performance on distinct clinical datasets and their robustness without the aid of texture information, which is often used by human annotators but raises privacy concerns.
In addition, despite its advantages, 3D imaging faces challenges such as motion artifacts, cost, and the need for standardized protocols (5). Future innovations may focus on artificial intelligence for automated landmark identification and 4D (time-resolved) imaging for dynamic assessments (29-31). In summary, 3D imaging has evolved from rudimentary photogrammetry to sophisticated multimodal systems, reshaping clinical practice and research. Its continued advancement promises further integration into personalized medicine and telehealth.
Automated augmented intelligence (AI) systems have been used to analyze Caucasian faces and some craniofacial anomalies (32). However, no studies have been conducted on Central European 3D datasets. As a result, the accuracy and reliability of automated 3D landmarking in such populations are relatively unknown.
Objective
The primary objective of this study was to conduct an independent evaluation of a specific AI platform for automated 3D facial landmarking. We aimed to determine its accuracy and reliability against manual annotations, which served as the ground truth, in a cohort of Caucasian adults. A key focus was to assess the platform’s performance using geometric data alone, providing insights into its utility in privacy-conscious clinical and research settings.
Methods
Data acquisition
A randomly selected sample of forty 3D facial scans was retrospectively obtained from the Hungarian Orthodontic Society 3D data repository [2009–2010], with permission from Dr. Péter Borbély and approval from the Hungarian Orthodontic Society Review Board. Inclusion criteria were Caucasian adults aged 18–40 years with a clinically normal facial appearance. Exclusion criteria were a history of craniofacial surgery, trauma, or orthodontic treatment within the last 2 years. The sample size was calculated from the outer canthal width (exR-exL), the parameter expected to show the largest variance. Assuming a detectable difference of approximately 3 mm, a standard deviation of 2.7 mm, a power of 0.85, and a significance level of 0.05, a sample of 35 was required.
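For illustration, a calculation of this general form can be sketched with the standard normal-approximation formula. This is a sketch under stated assumptions (a two-group comparison, no small-sample correction); the study's exact formula is not specified, so the result need not reproduce the published figure of 35.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta_mm, sd_mm, alpha=0.05, power=0.85):
    """Normal-approximation sample size per group for detecting a mean
    difference delta_mm given standard deviation sd_mm (illustrative)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    d = delta_mm / sd_mm                           # standardized effect size
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

n = n_per_group(3.0, 2.7)  # 3 mm difference, 2.7 mm SD -> 15 per group
```

Variants of this formula (paired vs. two-sample design, t-distribution corrections, one- vs. two-sided tests) shift the required n, which is why a published figure can differ from this simple approximation.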
Of the forty initial scans, three were excluded because the automated AI platform failed to process the files and generate landmark data, or because the landmark data generated were grossly skewed. The remaining 37 scans (18 males, 19 females) were successfully processed by both manual and automated methods and formed the study cohort.
Imaging acquisition system
The 3dMDfaceTM system is a portable imaging system that combines stereophotogrammetry with the structured light technique (5). It uses a multi-camera configuration, with three cameras on each side (one color and two infrared), to capture photorealistic images. A random light pattern is projected onto the subject, and images are captured by multiple synchronized digital cameras set at various angles in an optimal configuration. The system can capture a full facial image, from ear to ear and under the chin, in 1.5 milliseconds at the highest resolution. The manufacturer-stated accuracy is less than 0.5 mm, and the quoted clinical accuracy is 1.5% of the total observed variance (10). 3D surface images captured by such surface acquisition systems are highly repeatable, and 3D landmark data can be acquired with a high degree of precision (17,33).
Image acquisition
Natural head posture (NHP) was adopted for all subjects, as this has been shown to be clinically reproducible (34). The subjects sat on an adjustable chair and were asked to look into a mirror marked with a horizontal and a vertical line. They were asked to level their eyes with the horizontal line and to align the midline of their faces with the vertical line. Seating height was adjusted to help the subjects achieve NHP. The subjects were asked to swallow hard and to keep their jaws in a relaxed position just before the images were taken. Each image acquisition took 1.5 milliseconds.
DiffusionNet
For this study, we utilized DiffusionNet, a deep learning architecture designed for 3D mesh data. A key advantage of DiffusionNet is its robustness to variations in mesh resolution and orientation, which is critical for processing clinical 3D facial scans that can vary in quality and alignment. The architecture propagates information across the mesh surface to perform accurate landmark prediction (32).
Parameters to be measured
Landmarks
A total of 21 landmarks (from a possible 57) were manually placed on each 3D face using proprietary software (3dMD Vultus; Figure 1). Each landmark consisted of x, y, z coordinates. Concurrently, each of the 37 faces was imported into an AI platform (https://3dmedx.nl) (32), which automatically plotted the same 21 landmarks.
Facial landmarks and the measurements derived from them, such as exR-exL (outer canthal width) and alR-alL (alar base width), are critical in clinical and anthropological analyses because they provide standardized reference points for assessing facial morphology. These measurements help quantify symmetry, proportionality, and growth patterns, which are essential in fields like orthodontics, plastic surgery, and forensic identification. For example, an abnormal enR-enL (intercanthal width) can indicate a genetic syndrome. The alR-alL width influences nasal airflow and aesthetic planning, and the n-prn-pg angle reflects facial profile convexity, affecting both perceived nasal projection and facial harmony. Discrepancies between manual and AI measurements highlight potential biases in automated tools, emphasizing the need for calibration to ensure accuracy in diagnostic or surgical applications. Thus, these landmarks bridge quantitative analysis with clinical relevance, ensuring precise evaluations of facial structure.
Linear distances and angles
The linear distances and angles selected for this study were carefully chosen to represent key craniofacial relationships with clinical significance for diagnosis and treatment planning. We focused on measurements that capture essential facial proportions (e.g., intercanthal width, nasal projection) and functional relationships (e.g., lip commissure positioning), as these parameters are routinely used in orthodontic assessment and surgical planning. The selected dimensions—including exocanthion-to-exocanthion (exR-exL) for facial width, alare-to-alare (alR-alL) for nasal base evaluation, and various pronasale-based measurements for facial projection—were prioritized because they demonstrate high anatomical reliability across populations while remaining sensitive to pathological variations. These specific linear and angular measurements form the foundation of established cephalometric analysis protocols and have been validated through decades of clinical use in both research and practice, making them ideal for evaluating the performance of automated landmarking systems against traditional manual methods.
Anatomical landmarks and measurements
A total of 21 anthropometric landmarks were identified on each 3D face. Manual landmarking was performed by a single operator (S.Y.C.K.), a trained medical student. The operator was trained and calibrated on 10 practice scans (not included in the study) by an experienced craniofacial orthodontist (C.H.K.) until an intra-class correlation coefficient (ICC) of >0.90 was achieved for all landmarks. Manual landmarking was performed using the 3dMD Vultus software, with the 3D mesh viewed alongside its photorealistic texture to aid anatomical identification. To ensure patient anonymity for the automated processing, texture information was removed from all meshes before they were submitted to the AI platform.
From the 21 landmarks, 10 linear distances and angles were calculated to represent key craniofacial relationships with clinical significance. Angles were calculated from the three relevant landmark coordinates using the vector dot product formula.
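The vector dot-product calculation described above can be sketched as follows; the coordinates in the example are placeholders, not study data.

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b (in degrees) formed by 3D landmarks a-b-c,
    computed with the vector dot-product formula."""
    u = [a[i] - b[i] for i in range(3)]   # vector b -> a
    v = [c[i] - b[i] for i in range(3)]   # vector b -> c
    dot = sum(u[i] * v[i] for i in range(3))
    norm = math.dist(a, b) * math.dist(c, b)
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

angle_deg((1, 0, 0), (0, 0, 0), (0, 1, 0))  # right angle: 90.0
```

For a measurement such as n-prn-sn, the three arguments would be the nasion, pronasale, and subnasale coordinates, with pronasale as the vertex.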
Automated landmarking workflow
The automated landmarking workflow followed the method previously described by Berends et al. (32) and consisted of four main steps: (I) rough prediction of landmarks using an initial DiffusionNet on the original meshes; (II) realignment of the meshes based on the roughly predicted landmarks; (III) segmentation of the facial region through fitting of a template facial mesh using a morphable model; (IV) refined landmark prediction on the segmented meshes using a final DiffusionNet. The DiffusionNet models used spatial features only and did not use texture information for the automated landmarking task.
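As a structural sketch only, the four steps can be outlined as below. The geometric stand-ins are hypothetical simplifications: in the real pipeline, steps I and IV are trained DiffusionNet models and step III fits a morphable-model template.

```python
import math

def rough_predict(vertices):
    # Step I stand-in: a trained network would predict landmark
    # positions; here we simply return three fixed vertices.
    return [vertices[0], vertices[len(vertices) // 2], vertices[-1]]

def realign(vertices, landmarks):
    # Step II stand-in: translate the mesh so the rough-landmark
    # centroid sits at the origin (a real system solves a rigid
    # registration from the predicted landmarks).
    cx, cy, cz = [sum(p[i] for p in landmarks) / len(landmarks) for i in range(3)]
    return [(x - cx, y - cy, z - cz) for x, y, z in vertices]

def segment_face(vertices, radius=100.0):
    # Step III stand-in: keep vertices near the origin instead of
    # fitting a template facial mesh with a morphable model.
    return [p for p in vertices if math.dist(p, (0.0, 0.0, 0.0)) <= radius]

def refine(segment):
    # Step IV stand-in: a second network refines the landmarks on the
    # segmented mesh; here we re-run the rough predictor.
    return rough_predict(segment)

def landmark_pipeline(vertices):
    rough = rough_predict(vertices)
    aligned = realign(vertices, rough)
    return refine(segment_face(aligned))
```

The point of the two-pass design is that the second predictor operates on a normalized, face-only mesh, which removes much of the pose and background variability the first pass must tolerate.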
Experimental design and statistical analysis
The error calculations in this study were performed using Euclidean distances between corresponding AI-derived and manually annotated landmarks, providing a quantitative measure of the system’s precision in 3D space. These landmarks also form the basis of a principal component analysis (PCA) intended as a future diagnostic tool (Figure 2). For each landmark, we calculated both the absolute positional error (in mm) and the percentage of predictions falling within clinically acceptable thresholds (≤2, ≤3, and ≤4 mm) (32). These error metrics were selected to align with established standards in craniofacial research, where differences of ≤2 mm are generally considered clinically insignificant for most diagnostic and treatment planning purposes. The directional bias of errors (over- or under-estimation) was also analyzed, as systematic deviations may have distinct implications for different clinical applications.
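The error metrics described here can be sketched as follows; the coordinates are illustrative placeholders, not study data.

```python
import math

def landmark_errors(manual, predicted):
    """Per-landmark Euclidean error (mm) between corresponding
    manually annotated and AI-predicted 3D points."""
    return [math.dist(m, p) for m, p in zip(manual, predicted)]

def pct_within(errors, threshold_mm):
    """Percentage of predictions falling within a clinical threshold."""
    return 100.0 * sum(e <= threshold_mm for e in errors) / len(errors)

manual = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]
predicted = [(1, 0, 0), (10, 2.5, 0), (0, 10, 5)]
errs = landmark_errors(manual, predicted)  # [1.0, 2.5, 5.0]
pct_within(errs, 2)                        # 1 of 3 within 2 mm
```

In the study, these per-landmark errors were aggregated over all 37 scans and 21 landmarks to produce the threshold percentages reported in Table 3.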
Statistical analysis of these errors incorporated paired t-tests to identify significant differences between manual and automated measurements, with particular attention to landmarks critical for surgical planning (35). Because this was a cross-sectional study employing a convenience sample, no analytical methods to account for the sampling strategy were applied. The error distribution patterns revealed important insights into the AI algorithm’s performance characteristics, showing superior consistency in bony landmarks compared with soft tissue points. By evaluating errors across multiple dimensions (linear distances, angles, and landmark positions), we were able to assess not just the magnitude of discrepancies but also their potential clinical impact, based on the functional importance of each measurement in orthodontic and surgical contexts. This comprehensive error analysis framework provides clinicians with practical guidance about which measurements can be reliably automated and which may require manual verification. Sensitivity analyses were not performed, as the primary analysis was deemed sufficient to evaluate the agreement between measurement methods for the obtained sample.
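A minimal sketch of the paired t statistic underlying such comparisons is shown below; the values are illustrative, not study data, and the study's p-values were computed in SPSS rather than by hand.

```python
import math
from statistics import mean, stdev

def paired_t(manual, automated):
    """Paired t statistic and degrees of freedom for two measurement
    series obtained on the same scans."""
    diffs = [m - a for m, a in zip(manual, automated)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # mean diff / SE
    return t, n - 1

# Illustrative measurements (mm) on four scans:
t, df = paired_t([10, 12, 11, 13], [11, 13, 11, 14])  # t = -3.0, df = 3
```

The sign of t reflects the direction of bias (here the automated values run larger), which is why directional analysis of errors is reported alongside significance.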
Normality of data distribution was assessed using the Shapiro-Wilk test. A P value of <0.05 was considered statistically significant. Statistical analysis was performed using SPSS Statistics (Version 28, IBM Corp., Armonk, NY, USA).
Ethical statement
This study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was approved by the national ethics board of the Hungarian Orthodontic Society (No. HUN-09-01), and informed consent was obtained from all individual participants.
Results
Of the forty 3D faces acquired, three were not usable due to errors in the scan data. The remaining 37 Hungarian 3D faces (18 males and 19 females) formed the final study sample. These 3D faces were obtained using the image capture techniques described in the Methods section.
Landmark positioning errors
The mean Euclidean error for each of the 21 individual landmarks is presented in Table 1. The overall mean landmark error across all points was 1.42±0.95 mm. The lowest errors were observed for the alare landmarks (alR, alL) and the enL-prn-alL angle, while the highest errors were observed for the exocanthion landmarks (exR, exL) and the exR-prn-alR angle.
Table 1
| Measurement | Manual (mm) | AI-unaligned (mm) | Mean difference (AI-unaligned vs. manual) (mm) | P value |
|---|---|---|---|---|
| exR-exL | 88.76±4.62 | 91.59±4.56 | −2.83±1.45 | 0.056 |
| alR-alL | 34.02±2.64 | 33.56±2.08 | 0.46±1.89 | 0.89 |
| enR-enL | 30.75±2.82 | 31.58±2.70 | −0.83±0.95 | 0.57 |
| n-sn | 51.96±4.07 | 50.89±3.77 | 1.07±1.82 | 0.46 |
| n-prn-pg | 127.89±4.58* | 126.25±4.36* | 1.64±2.11* | 0.043 |
| exR-prn-alR | 37.03±2.81 | 39.18±2.75 | −2.15±1.68 | 0.33 |
| exL-prn-alL | 36.71±3.12 | 37.90±2.98 | −1.19±2.01 | 0.38 |
| enR-prn-alR | 46.56±4.09 | 47.38±3.69 | −0.82±1.99 | 0.86 |
| enL-prn-alL | 47.14±4.15 | 47.09±3.90 | 0.05±2.12 | 0.08 |
| n-prn-sn | 97.77±7.34* | 98.76±6.56* | −0.99±2.89* | 0.02 |
Data are presented as mean ± SD. *, indicates a statistically significant difference (P<0.05) from a paired, two-sided t-test. AI, artificial intelligence; SD, standard deviation.
Linear distance and angle measurement errors
A total of 10 linear distances and angles were derived from the landmarks. The manual and AI-derived means, their differences, percentage errors, and statistical significance are presented in Table 2. The mean error of the 10 linear distances and angles was 1.30±0.89 mm and 1.14±0.76 degrees, representing an approximate error of less than 2.5%. Statistically significant differences were found for the outer canthal width (exR-exL), which was overestimated by the AI by an average of 2.83 mm (P<0.001).
Table 2
| Measurement | Manual mean (mm) | AI-unaligned mean (mm) | Mean difference (mm) | Percentage error (%) |
|---|---|---|---|---|
| exR-exL | 88.76 | 91.59 | −2.83 | 3.18 |
| alR-alL | 34.02 | 33.56 | 0.46 | 1.35 |
| enR-enL | 30.75 | 31.58 | −0.83 | 2.70 |
| n-sn | 51.96 | 50.89 | 1.07 | 2.06 |
| n-prn-pg | 127.89 | 126.25 | 1.64 | 1.28 |
| exR-prn-alR | 37.03 | 39.18 | −2.15 | 5.49 |
| exL-prn-alL | 36.71 | 37.90 | −1.19 | 3.24 |
| enR-prn-alR | 46.56 | 47.38 | −0.82 | 1.76 |
| enL-prn-alL | 47.14 | 47.09 | 0.05 | 0.10 |
| n-prn-sn | 97.77 | 98.76 | −0.99 | 1.01 |
3D, three-dimensional; AI, artificial intelligence.
Precision at clinical thresholds
The percentage of landmarks predicted within common clinical error thresholds is shown in Table 3. The aggregate percentage of landmarks within 2, 3, and 4 mm was 75.1%, 89.8%, and 96.7%, respectively. Landmarks such as alare and endocanthion showed high precision (>98% within 3 mm), while exocanthion (ex) and pronasale (prn) showed lower precision at the 2 mm threshold.
Table 3
| Measurement | ≤2 mm (AI-U) (%) | ≤3 mm (AI-U) (%) | ≤4 mm (AI-U) (%) |
|---|---|---|---|
| enR-enL | 100 | 100 | 100 |
| exR-exL | 72 | 92 | 98 |
| n-sn | 85 | 95 | 100 |
| n-prn-sn | 71.4 | 88.6 | 97.1 |
| n-prn-pg | 67.9 | 89.3 | 96.4 |
| exR-prn-alR | 80 | 94 | 99 |
| exL-prn-alL | 60.7 | 85.7 | 96.4 |
| enR-prn-alR | 74.3 | 88.6 | 97.1 |
| enL-prn-alL | 60.0 | 80.0 | 91.4 |
| alR-alL | 98 | 100 | 100 |
| All landmarks | 75.1 | 89.8 | 96.7 |
AI-U, augmented intelligence unaligned.
Subgroup analysis
No statistically significant difference was found in the overall mean landmark error between males and females (P=0.45) or across different age groups (P=0.62).
Other analyses
The analysis was limited to the primary outcome of the agreement between automated and manual landmarking. No additional subgroup or sensitivity analyses were conducted.
Discussion
Key findings
This study presents a comprehensive validation of an automated AI landmarking system for 3D facial analysis. Our key finding is that the platform achieved an overall mean landmark positioning error of 1.42 mm and a linear measurement error of 1.30 mm, with 89.8% of landmarks placed within a 3 mm clinical threshold. This performance is clinically promising for many applications (29). However, we identified systematic biases, particularly a significant overestimation of the outer canthal width (exR-exL), which highlights the need for cautious application and landmark-specific verification in precise surgical planning.
Comparison with similar research
Our findings align with the work of Berends et al. (32), the developers of the platform, who reported a mean error of 1.69 mm on their clinical test set. Our marginally lower mean error could be attributed to our study’s use of a homogeneous Caucasian population. Notably, both studies found that well-defined landmarks at skeletal junctions and nasal apertures (e.g., alare, subnasale) consistently outperformed more diffuse landmarks in soft, mobile tissues (e.g., exocanthion). When compared with other automated systems such as MeshMonk, which reports a mean error of approximately 2.0 mm on diverse datasets, the current platform demonstrates competitive, if not superior, accuracy. The observed error also falls within the range of reported inter-observer variability in manual 3D landmarking, which can often exceed 2 mm for certain landmarks (17).
Explanations of findings
A major factor influencing the results is the difference in input data between the validation standard and the AI. The manual annotations, which serve as our ground truth, benefited from visual cues provided by the texture overlay, particularly for landmarks defined by color and texture boundaries like the eyelid margins (exocanthion, endocanthion) and the vermilion border (cheilion) (10). The AI’s reliance on geometry alone likely explains the systematic biases and higher errors observed for these specific landmarks, as the algorithm had to infer their position from shape alone without these clear visual guides. This trade-off between patient privacy and landmarking precision is an important consideration for clinical implementation. The superior performance on bony, structural landmarks is consistent with the principles of Farkas’ anthropometry, which prioritizes stable, palpable points (9).
Strengths and limitations
Strengths of this study include a well-defined cohort, the use of NHP for standardized image acquisition, and a rigorous comparison against a calibrated manual standard performed with texture guidance. Limitations include a moderate sample size from a single ethnic group, which may limit generalizability to other populations (24). Furthermore, the AI platform failed to process three otherwise usable scans, indicating that its technical reliability is not yet perfect. The most significant limitation is the AI’s non-use of texture, which we have identified as a key explanatory factor for its performance gap on specific soft-tissue landmarks.
Implications and actions needed
The primary advantage of automated landmarking is its potential to save significant clinician time and achieve perfect intra-observer consistency, thereby standardizing assessments in both research and clinical practice. While a mean error of ~1.3 mm exists, this trade-off may be acceptable for high-throughput screening, growth monitoring, and initial orthodontic treatment planning (36). For precise surgical planning, such as in orthognathic or reconstructive surgery, our data suggest that automated landmarks could serve as an excellent initial draft, requiring selective manual verification of specific landmarks prone to higher error (e.g., exocanthion) (35). Future work should prioritize validating the platform’s performance in diverse ethnic groups and in patients with craniofacial anomalies to test its generalizability. Furthermore, developing models that can securely leverage texture information, perhaps through privacy-preserving techniques, is a crucial next step to close the current performance gap.
Conclusions
The validated AI platform provides an accurate and efficient method for 3D facial landmarking, suitable for applications in orthodontic screening and craniofacial research. Its integration into clinical workflows could standardize assessments and free up clinician time. The platform performs well on geometric data alone, making it suitable for privacy-conscious environments. Future work should focus on validating its performance in diverse ethnic groups and in patients with craniofacial anomalies.
Acknowledgments
Although the sample included both males and females, the study was designed and powered to assess the aggregate accuracy of the automated landmarking system rather than sex-specific performance; the exploratory comparison reported in the Results found no evidence that measurement error was systematically influenced by sex.
Footnote
Data Sharing Statement: Available at https://fomm.amegroups.com/article/view/10.21037/fomm-25-21/dss
Peer Review File: Available at https://fomm.amegroups.com/article/view/10.21037/fomm-25-21/prf
Funding: None.
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://fomm.amegroups.com/article/view/10.21037/fomm-25-21/coif). C.H.K. serves as an unpaid editorial board member of Frontiers of Oral and Maxillofacial Medicine from July 2024 to June 2026. P.B. is the owner of Fogszabályozási Stúdió. The other authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. This study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was approved by the national ethics board of the Hungarian Orthodontic Society (No. HUN-09-01), and informed consent was obtained from all individual participants.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Hajeer MY, Ayoub AF, Millett DT. Three-dimensional assessment of facial soft-tissue asymmetry before and after orthognathic surgery. Br J Oral Maxillofac Surg 2004;42:396-404. [Crossref] [PubMed]
- Kau CH, Richmond S, Zhurov AI, et al. Reliability of measuring facial morphology with a 3-dimensional laser scanning system. Am J Orthod Dentofacial Orthop 2005;128:424-30. [Crossref] [PubMed]
- Farkas LG. Centenary of Ambrus Abrahám. Orv Hetil 1994;135:1429. Hungarian.
- Meulstee JW, Verhamme LM, Borstlap WA, et al. A new method for three-dimensional evaluation of the cranial shape and the automatic identification of craniosynostosis using 3D stereophotogrammetry. Int J Oral Maxillofac Surg 2017;46:819-26. [Crossref] [PubMed]
- Kau CH, Richmond S, Incrapera A, et al. Three-dimensional surface acquisition systems for the study of facial morphology and their application to maxillofacial surgery. Int J Med Robot 2007;3:97-110. [Crossref] [PubMed]
- Farkas LG, Katic MJ, Forrest CR. Comparison of craniofacial measurements of young adult African-American and North American white males and females. Ann Plast Surg 2007;59:692-8. [Crossref] [PubMed]
- Kau CH, Wang J, Davis M. A Cross-Sectional Study to Understand 3D Facial Differences in a Population of African Americans and Caucasians. Eur J Dent 2019;13:485-96. [Crossref] [PubMed]
- Farkas LG, Katic MJ, Forrest CR. Anthropometric proportion indices in the craniofacial regions of 73 patients with forms of isolated coronal synostosis. Ann Plast Surg 2005;55:495-9. [Crossref] [PubMed]
- Kolar JC, Munro IR, Farkas LG. Anthropometric evaluation of dysmorphology in craniofacial anomalies: Treacher Collins syndrome. Am J Phys Anthropol 1987;74:441-51. [Crossref] [PubMed]
- Aldridge K, Boyadjiev SA, Capone GT, et al. Precision and error of three-dimensional phenotypic measures acquired from 3dMD photogrammetric images. Am J Med Genet A 2005;138A:247-53. [Crossref] [PubMed]
- Hood CA, Bock M, Hosey MT, et al. Facial asymmetry--3D assessment of infants with cleft lip & palate. Int J Paediatr Dent 2003;13:404-10. [Crossref] [PubMed]
- Hanis SB, Kau CH, Souccar NM, et al. Facial morphology of Finnish children with and without developmental hip dysplasia using 3D facial templates. Orthod Craniofac Res 2010;13:229-37. [Crossref] [PubMed]
- Burke PH, Beard LF. Stereo-photogrammetry of the face. Rep Congr Eur Orthod Soc 1967;279-93.
- Tanner JM, Weiner JS. The reliability of the photogrammetric method of anthropometry, with a description of a miniature camera technique. Am J Phys Anthropol 1949;7:145-86. [Crossref] [PubMed]
- Rangel FA, Maal TJ, Bronkhorst EM, et al. Accuracy and reliability of a novel method for fusion of digital dental casts and Cone Beam Computed Tomography scans. PLoS One 2013;8:e59130. [Crossref] [PubMed]
- Kusnoto B, Evans CA. Reliability of a 3D surface laser scanner for orthodontic applications. Am J Orthod Dentofacial Orthop 2002;122:342-8. [Crossref] [PubMed]
- Kau CH, Zhurov A, Richmond S, et al. Facial templates: a new perspective in three dimensions. Orthod Craniofac Res 2006;9:10-7. [Crossref] [PubMed]
- Božič M, Kau CH, Richmond S, et al. Facial morphology of Slovenian and Welsh white populations using 3-dimensional imaging. Angle Orthod 2009;79:640-5. [Crossref] [PubMed]
- Kim JY, Kau CH, Christou T, et al. Three-dimensional Analysis of Normal Facial Morphologies of Asians and Whites: A Novel Method of Quantitative Analysis. Plast Reconstr Surg Glob Open 2016;4:e865. [Crossref] [PubMed]
- Gor T, Kau CH, English JD, et al. Three-dimensional comparison of facial morphology in white populations in Budapest, Hungary, and Houston, Texas. Am J Orthod Dentofacial Orthop 2010;137:424-32. [Crossref] [PubMed]
- Bhaskar E, Kau CH. A Comparison of 3D Facial Features in a Population from Zimbabwe and United States. Eur J Dent 2020;14:100-6. [Crossref] [PubMed]
- Berends B, Bielevelt F, Baan F, et al. Soft-tissue prediction based on 3D photographs for virtual surgery planning of orthognathic surgery. Comput Biol Med 2025;194:110529. [Crossref] [PubMed]
- Kau CH, Cronin AJ, Richmond S. A three-dimensional evaluation of postoperative swelling following orthognathic surgery at 6 months. Plast Reconstr Surg 2007;119:2192-9. [Crossref] [PubMed]
- Kau CH, Kamel SG, Wilson J, et al. New method for analysis of facial growth in a pediatric reconstructed mandible. Am J Orthod Dentofacial Orthop 2011;139:e285-90. [Crossref] [PubMed]
- Marmulla R, Hassfeld S, Lüth T, et al. Laser-scan-based navigation in cranio-maxillofacial surgery. J Craniomaxillofac Surg 2003;31:267-77. [Crossref] [PubMed]
- Vuollo V, Sidlauskas M, Sidlauskas A, et al. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins. Twin Res Hum Genet 2015;18:306-13. [Crossref] [PubMed]
- Hoskens H, Liu D, Naqvi S, et al. 3D facial phenotyping by biometric sibling matching used in contemporary genomic methodologies. PLoS Genet 2021;17:e1009528. [Crossref] [PubMed]
- Richmond S, Howe LJ, Lewis S, et al. Facial Genetics: A Brief Overview. Front Genet 2018;9:462. [Crossref] [PubMed]
- Harkel TCT, Vinayahalingam S, Ingels KJAO, et al. Reliability and Agreement of 3D Anthropometric Measurements in Facial Palsy Patients Using a Low-Cost 4D Imaging System. IEEE Trans Neural Syst Rehabil Eng 2020;28:1817-24. [Crossref] [PubMed]
- Popat H, Richmond S, Marshall D, et al. Facial movement in 3 dimensions: average templates of lip movement in adults. Otolaryngol Head Neck Surg 2011;145:24-9. [Crossref] [PubMed]
- Matthews H, de Jong G, Maal T, et al. Static and Motion Facial Analysis for Craniofacial Assessment and Diagnosing Diseases. Annu Rev Biomed Data Sci 2022;5:19-42. [Crossref] [PubMed]
- Berends B, Bielevelt F, Schreurs R, et al. Fully automated landmarking and facial segmentation on 3D photographs. Sci Rep 2024;14:6463. [Crossref] [PubMed]
- Kau CH, Hunter LM, Hingston EJ. A different look: 3-dimensional facial imaging of a child with Binder syndrome. Am J Orthod Dentofacial Orthop 2007;132:704-9. [Crossref] [PubMed]
- Chiu CS, Clark RK. Reproducibility of natural head position. J Dent 1991;19:130-1. [Crossref] [PubMed]
- Hsu SS, Gateno J, Bell RB, et al. Accuracy of a computer-aided surgical simulation protocol for orthognathic surgery: a prospective multicenter study. J Oral Maxillofac Surg 2013;71:128-42. [Crossref] [PubMed]
- Gašparović B, Morelato L, Lenac K, et al. Comparing Direct Measurements and Three-Dimensional (3D) Scans for Evaluating Facial Soft Tissue. Sensors (Basel) 2023;23:2412. [Crossref] [PubMed]
Cite this article as: Kau SYC, Borbely P, Zhurov A, de Jong G, Maal T, Kárpáti K, Kau CH, Zsoldos M. An evaluation of the use of augmented intelligence in 3D facial landmarking—an approach to facial analysis and craniofacial anomalies detection. Front Oral Maxillofac Med 2026;8:2.
