Journal of International Commercial Law and Technology
2025, Volume 6, Issue 1: 1724-1736. DOI: 10.61336/Jiclt/25-01-160
Research Article
The Moderating Role of AI Governance and Ethics in the Impact of AI Adoption in Auditing on the Confidence of Financial Statement Users
University of Adrar (Algeria)

Received: April 30, 2025 | Revised: July 10, 2025 | Accepted: Aug. 10, 2025 | Published: Nov. 3, 2025
Abstract

This study investigates the impact of Artificial Intelligence (AI) adoption in auditing on the confidence of financial statement users, while examining the moderating role of AI governance and ethics. Using a quantitative, cross-sectional research design and data collected through a structured questionnaire from financial statement users, the study applies Partial Least Squares Structural Equation Modelling (PLS-SEM) to test the proposed relationships. The findings reveal that AI adoption in auditing has a strong, statistically significant positive effect on user confidence, demonstrating the value stakeholders attribute to AI-enhanced audit procedures for improving accuracy, reliability, and efficiency. Although AI governance and ethics do not have a significant direct effect on user confidence, they play a meaningful moderating role, reducing the strength of the relationship between AI adoption and confidence. This suggests that governance frameworks encourage more cautious and informed trust in AI-generated audit results. Overall, the study contributes to the growing literature on technology-driven auditing by integrating AI adoption, user confidence, and governance considerations into a unified empirical model, offering practical insights for audit firms, regulators, and policymakers.

INTRODUCTION

The rapid integration of Artificial Intelligence (AI) into auditing has reshaped traditional audit practices by enhancing efficiency, improving anomaly detection, and increasing the reliability of audit evidence. As audit firms increasingly rely on AI-driven tools for data analysis and risk assessment, stakeholders expect these technologies to strengthen audit quality and reduce the likelihood of material misstatements. However, the growing use of AI also raises concerns related to algorithmic transparency, ethical risks, data privacy, and potential bias—concerns that may influence how financial statement users perceive the credibility of audited information. These developments highlight the importance of understanding not only the benefits of AI in auditing but also the governance mechanisms that ensure its responsible use.

 

Despite the potential advantages of AI adoption, an important problem persists: the extent to which AI in auditing effectively enhances the confidence of financial statement users remains unclear. Users may be hesitant to fully trust AI-generated audit evidence unless strong governance and ethical frameworks are in place to guarantee fairness, accountability, and transparency. This leads to a fundamental research question: Do AI governance and ethics strengthen the relationship between AI adoption in auditing and user confidence in financial statements? Additional sub-questions include: How does AI adoption influence users’ perceptions of audit reliability? And to what extent do governance and ethical safeguards shape these perceptions?

 

Based on these research gaps, the main objective of this study is to evaluate the impact of AI adoption in auditing on the confidence of financial statement users while examining the moderating role of AI governance and ethics. Specifically, the study aims to (1) assess the direct influence of AI adoption on user confidence, (2) determine the role of AI governance and ethical principles in shaping user perceptions, and (3) test whether strong AI governance and ethics enhance the positive effect of AI adoption on user trust. Addressing these objectives contributes to both academic literature and practical audit policy by clarifying how responsible AI implementation can bolster stakeholder confidence in the digital era of auditing.

 

LITERATURE REVIEW

AI adoption in auditing:

The adoption of Artificial Intelligence (AI) in auditing is rapidly evolving, significantly enhancing audit processes through improved efficiency, accuracy, and fraud detection. Research indicates a generally high acceptance of AI among auditors, driven by factors such as performance expectancy and effort expectancy, with demographic variables influencing these perceptions (Chen, 2025). However, the integration of AI is not uniform across the industry, with larger firms, particularly the Big 4, leading in AI implementation while smaller firms face substantial barriers (Fachriyah & Anggraeni, 2024).

 

Key Drivers of AI Adoption

  • Performance Expectancy: Auditors believe AI enhances audit quality and efficiency, leading to higher acceptance levels (Chen, 2025).
  • Effort Expectancy: The perceived ease of use of AI tools contributes to positive behavioural intentions towards adoption (Chen, 2025).
  • Technological Readiness: Optimism about AI's potential impacts positively influences adoption, while insecurity does not significantly affect it (Nugraha, 2024).

Challenges to AI Integration

  • Regulatory and Ethical Concerns: Issues such as data privacy and algorithmic bias pose significant challenges to responsible AI use (Batool et al., 2025).
  • Knowledge Gaps: A shortage of auditors skilled in AI and a lack of established AI audit standards hinder widespread adoption (O’Donnell, 2024).
  • Financial Constraints: Smaller firms often lack the resources necessary for AI implementation, creating disparities in audit quality (Fachriyah & Anggraeni, 2024).

 

Despite the promising advancements, the uneven adoption of AI in auditing highlights the need for targeted support and training, particularly for smaller firms, to ensure equitable benefits across the sector.

 

Confidence of financial statement users:

 The confidence of financial statement users is a critical aspect of the financial reporting ecosystem, influencing investment decisions and overall market stability. This confidence hinges on the transparency, integrity, and quality of financial reporting, which are essential for informed decision-making by various stakeholders, including investors and regulatory bodies. The following sections elaborate on the factors affecting this confidence.

Importance of Transparency and Integrity

  • Users rely on transparent financial reporting to make informed decisions regarding resource allocation (Zajmi, 2019).
  • A lack of transparency can lead to a general loss of confidence in the financial system, as evidenced by the financial crisis that exposed weaknesses in accounting standards (Zajmi, 2019).

 

Role of Internal Controls

  • The consistency of internal control over financial reporting (ICOFR) disclosures significantly impacts users' confidence in audit reports (Asare & Wright, 2011).
  • Users exhibit lower confidence when discrepancies exist between ICOFR reports and standard audit reports, affecting their investment judgments (Asare & Wright, 2011).

 

Consequences of Erosion of Trust

  • Historical events, such as the Enron scandal, illustrate how breaches of trust in financial reporting can mislead users, leading to broader economic repercussions (Enderle, 2006).
  • Users' scepticism towards financial statements can reveal potential irregularities, emphasising the need for ethical practices in financial reporting (du Toit & Vermaak, 2014).

 

At the same time, while confidence in financial statements is crucial, it can be fragile and easily undermined by unethical practices or a lack of transparency. Restoring this confidence requires ongoing efforts to enhance reporting standards and ethical compliance.

 

AI governance and ethics:

AI governance and ethics encompass the frameworks and principles guiding the development, deployment, and management of artificial intelligence systems. As AI technologies become increasingly integrated into various aspects of life, establishing robust governance and ethical standards is essential to ensure alignment with human values and societal expectations. This governance framework addresses critical issues such as transparency, accountability, and fairness in AI applications, which will be explored in the following sections.

 

Definition of AI Governance

  • AI governance refers to policies and regulations that guide AI's development and use, ensuring compliance with legal and ethical standards (Mishra, 2024).
  • It includes the governance of data, machine learning models, and AI systems, focusing on minimising risks and maximising benefits (Schneider et al., 2020).

 

Ethical Considerations in AI

  • Ethical AI governance emphasises the importance of transparency, accountability, and fairness, addressing biases and privacy concerns (Kaur, 2024).
  • Global perspectives reveal varying levels of development in AI ethics, with the EU and China leading in regulatory initiatives, while the US is rapidly advancing (Daly et al., 2021).

 

Challenges and Future Directions

  • Emerging risks associated with AI, such as job displacement and privacy violations, necessitate a focus on social equity and inclusion in governance frameworks (Du, 2022).
  • Collaboration among diverse stakeholders is crucial for creating effective AI governance that promotes sustainability and social justice (Du, 2022).

 

While the establishment of AI governance and ethics frameworks is vital, there are concerns regarding the practical implementation of these principles. The balance between legal enforceability and ethical considerations may lead to challenges in operationalising norms effectively, potentially hindering the intended benefits of AI technologies (Daly et al., 2021).

 

Review of relevant prior research and scholarly works:

The relationship between AI adoption in auditing and the confidence of financial statement users:

 

The adoption of Artificial Intelligence (AI) in auditing significantly enhances the confidence of financial statement users regarding accuracy and reliability. AI technologies, such as machine learning and data analytics, improve the efficiency of audits, reduce human error, and facilitate early fraud detection, thereby increasing the overall quality of financial reporting.

 

Enhanced Accuracy and Reliability

  • AI minimizes human errors, which are common in traditional auditing processes, leading to more accurate financial statements (Abu Sharshouh, 2025).
  • The integration of AI allows for real-time reporting and predictive analysis, enhancing the reliability of financial assessments (Thaluru et al., 2025).
  • Studies indicate that AI-driven audits can significantly improve the timeliness and dependability of financial reporting, enabling stakeholders to make informed decisions (Muftah, 2022).

Challenges and Barriers

  • Despite the benefits, challenges such as high implementation costs and regulatory uncertainties hinder widespread AI adoption, particularly among smaller firms (Fachriyah & Anggraeni, 2024).
  • Ethical concerns and data privacy issues also pose risks that could affect user confidence in AI-enhanced audits (Thaluru et al., 2025).

 

While AI adoption in auditing presents substantial benefits for accuracy and reliability, it is essential to address the associated challenges to fully realize its potential and maintain user confidence in financial statements.

 

First hypothesis (H1): There is a statistically significant positive relationship between AI adoption in auditing and the confidence of financial statement users at the 5% significance level.

 

The moderating role of AI governance and ethics in the relationship between AI adoption in auditing and the confidence of financial statement users:

 

The integration of AI in auditing is significantly influenced by governance and ethical considerations, which in turn affect the confidence of financial statement users. While AI enhances efficiency and fraud detection, ethical challenges such as algorithmic bias and transparency issues pose barriers to its adoption. Addressing these concerns is crucial for fostering stakeholder trust and ensuring compliance with regulatory frameworks.

 

AI Governance and Ethical Challenges

  • Algorithmic Bias: AI systems can perpetuate biases present in training data, leading to unfair outcomes in audits (Laamari, 2025).
  • Transparency: The lack of explainability in AI models can hinder auditors' ability to justify decisions, impacting trust among users (Ganapathy, 2025).
  • Regulatory Compliance: Adhering to data protection laws and ethical standards is essential for the responsible use of AI in auditing (Laamari, 2025; Jain, 2025).

Impact on Stakeholder Confidence

  • Enhanced Accuracy: AI improves the precision of audits, which can bolster user confidence in financial statements (Hu, 2025; Jain, 2025).
  • Real-time Monitoring: Continuous fraud detection capabilities foster a sense of security among stakeholders (Hu, 2025).
  • Need for Human Oversight: Balancing AI efficiency with human judgment is vital to maintain public trust in audit outcomes (Fobellah, 2025).

 

Conversely, the rapid adoption of AI may lead to over-reliance on technology, potentially diminishing the role of human auditors and raising concerns about accountability and ethical standards in financial reporting.

 

Second hypothesis (H2): AI governance and ethics play a significant role in reducing the relationship between AI adoption in auditing and the confidence of financial statement users at the 5% significance level.

 

 

Gaps in existing literature:

 Although prior research has examined AI adoption in auditing, user confidence in financial reporting, and the broader issues surrounding AI governance and ethics, several important gaps remain. First, existing studies focus primarily on the drivers and barriers of AI adoption—such as performance expectancy, ease of use, technological readiness, and resource constraints—yet they rarely investigate the downstream effects of AI adoption on the perceptions of financial statement users. Most research emphasizes auditors’ attitudes toward AI, while the perspective of financial statement users (investors, analysts, creditors) remains underexplored. This creates a significant gap in understanding whether the technological benefits of AI translate into higher trust and confidence among those who rely on audited information.

 

Second, literature on financial statement user confidence highlights the importance of transparency, ethical behavior, and strong internal controls, but it does not sufficiently integrate these concepts within the context of emerging AI-based audit practices. Prior work largely focuses on traditional audit environments or scandals related to human misconduct. There is limited empirical evidence on how AI-enabled auditing—characterized by automation, predictive analytics, and algorithmic decision-making—affects user confidence. In particular, research has not fully addressed whether users trust AI-generated audit outputs to the same extent as human-led audit judgments or how concerns about bias, explainability, and data integrity influence this trust.

 

Third, although AI governance and ethics have become central themes in technology research, the auditing literature has not yet systematically linked governance frameworks with the confidence outcomes of audit stakeholders. Existing studies discuss transparency, fairness, accountability, and regulatory compliance in general terms, but few examine their moderating role in enhancing or safeguarding trust when AI is adopted in auditing. There is almost no empirical evidence demonstrating whether robust AI governance mitigates user concerns about algorithmic opacity, bias, or ethical misuse. This leaves a critical gap in understanding how governance practices might strengthen or weaken the relationship between AI adoption and user confidence.

 

Finally, prior research has not sufficiently integrated these three streams—AI adoption, user confidence, and AI governance—into a single conceptual model. Very few studies have tested moderation effects, particularly the role of AI governance and ethics in shaping the impact of AI-based audit technologies on user trust. This gap is both theoretical and empirical, as it limits the development of comprehensive frameworks that explain how responsible AI adoption can enhance confidence in financial reporting.

 

Figure 1. Theoretical framework: AI adoption in auditing → confidence of financial statement users (H1), with AI governance and ethics moderating this relationship (H2).

 

METHODOLOGY

3.1. Research Design and Approach

This study employs a quantitative, explanatory research design to examine the relationship between AI adoption in auditing and the confidence of financial statement users, as well as the moderating role of AI governance and ethics. Given the objective of testing hypotheses and identifying causal relationships among key variables, the study adopts a cross-sectional survey approach, collecting data at a single point in time from a diverse group of financial statement users. The quantitative design supports statistical analysis, measurement of construct relationships, and generalisation of findings across user groups such as investors, auditors, financial analysts, and regulatory professionals. Structural Equation Modeling (SEM), particularly PLS-SEM, is used due to its ability to handle latent variables, interaction effects, and complex models.

 

3.2. Data Collection Methods

Data were collected using a structured questionnaire distributed electronically via email, professional networks, and academic platforms. The questionnaire includes three main sections corresponding to the study variables:

  • AI Adoption in Auditing (extent, integration level, perceived usefulness, perceived reliability)
  • AI Governance and Ethics (transparency, accountability, fairness, regulatory compliance)
  • Confidence of Financial Statement Users (perceived reliability, trustworthiness, transparency of audited information)

 

Respondents rated each item on a five-point Likert scale ranging from “strongly disagree” to “strongly agree.” The target population includes financial statement users such as audit practitioners, investors, analysts, academics, and regulatory officials. A purposive sampling technique was used to select participants with relevant knowledge of financial reporting and auditing. A sample size of 200–350 respondents was targeted, in line with recommended thresholds for SEM analysis. Data were analyzed using SPSS for descriptive statistics and reliability testing (Cronbach’s alpha) and SmartPLS for hypothesis testing, validity assessment, and moderation analysis.
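To illustrate the reliability-screening step, the sketch below computes Cronbach’s alpha for one questionnaire block. The file name and column names are hypothetical, mirroring the item labels used later in Table 01; the actual SPSS output is not reproduced here.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one block of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical file and column names mirroring the questionnaire blocks.
survey = pd.read_csv("responses.csv")
aiaa_items = survey[[f"AIAA_{i}" for i in range(1, 7)]]
print(f"AI adoption in auditing: alpha = {cronbach_alpha(aiaa_items):.3f}")
```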

 

3.3. Rationale for the Chosen Methods

A quantitative, survey-based approach is appropriate because the study aims to evaluate relationships among clearly defined constructs and test moderation effects, which require statistical modeling. PLS-SEM is selected due to its suitability for exploratory and predictive research, ability to handle complex models, and robustness with medium sample sizes. The use of a structured questionnaire ensures standardization, reduces researcher bias, and allows respondents from different sectors and regions to provide comparable data.

 

The choice of a cross-sectional design is justified by the need to capture current perceptions of AI adoption, governance, and confidence in financial statements—factors that are rapidly evolving as technology advances. Purposive sampling ensures that only informed respondents with relevant expertise contribute to the study, enhancing the validity of the findings. Overall, the selected methods provide a rigorous and efficient means of obtaining empirical evidence to address the research questions and test the proposed hypotheses.

 

Data Presentation and Analysis:

First: Assessment of the Measurement Model:

In this section, the quality of the measurement items used in the model is evaluated using SmartPLS software. This assessment involves examining the convergence and internal consistency of the items to ensure that they accurately and reliably capture the intended constructs, which is achieved through tests of convergent validity. Additionally, the model is tested for discriminant validity to verify that the constructs are conceptually distinct and that the measurement items do not overlap across variables. This combined evaluation confirms the stability, clarity, and precision of the measurement model.

Convergent Validity:

Convergent validity is a fundamental component of structural equation modeling (SEM), including Partial Least Squares SEM (PLS-SEM). It evaluates the extent to which the indicators (manifest variables) of a latent construct consistently measure the same underlying concept. In PLS-SEM, convergent validity is typically assessed using several key criteria: factor loadings, Cronbach’s alpha, composite reliability, and the average variance extracted (AVE). Each of these indicators provides evidence of how well the observed variables reflect their respective latent construct and whether the construct is measured reliably and coherently within the model:

 

Factor Loading:

Basis: Factor loading represents the strength and direction of the relationship between an indicator and its corresponding latent construct. In PLS-SEM, factor loadings should be statistically significant and preferably higher than 0.7 to indicate a strong relationship.

 

Cronbach’s Alpha:

Basis: Cronbach’s alpha is a measure of internal consistency reliability. It assesses the extent to which a set of indicators (items) measures a single latent construct consistently. In PLS-SEM, a high Cronbach’s alpha (typically above 0.7) suggests good internal consistency.

 

Composite Reliability:

Basis: Composite reliability is another measure of reliability that evaluates the consistency of indicators in measuring a latent construct. In PLS-SEM, composite reliability should ideally exceed 0.7, indicating that the indicators are reliable measures of the underlying construct.

 

Average Variance Extracted (AVE):

Statistically, convergent validity is confirmed when the Average Variance Extracted (AVE) exceeds the threshold of 0.50 (Sarstedt et al., 2021). In addition to AVE, factor loadings, Cronbach’s alpha, and composite reliability are commonly employed to assess convergent validity in PLS-SEM. Factor loadings indicate the strength of the relationship between observed variables and their respective latent constructs, whereas Cronbach’s alpha and composite reliability evaluate the internal consistency of the measurement scale (Amora, 2021). Together, these indicators provide a comprehensive assessment of whether the construct’s indicators sufficiently converge to measure the same underlying concept.
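As a worked check of these criteria, the short sketch below recomputes AVE and composite reliability from standardized outer loadings. Applied to the six AI-adoption loadings reported in Table 01 below, it reproduces the reported AVE of 0.603 and CR of 0.901.

```python
import numpy as np

def ave(loadings) -> float:
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    numerator = lam.sum() ** 2
    return float(numerator / (numerator + np.sum(1 - lam ** 2)))

# Outer loadings for AI adoption in auditing, taken from Table 01.
aiaa_loadings = [0.819, 0.718, 0.775, 0.825, 0.781, 0.735]
print(f"AVE = {ave(aiaa_loadings):.3f}")                    # 0.603
print(f"CR  = {composite_reliability(aiaa_loadings):.3f}")  # 0.901
```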

 

Table 01: Results of the Stability and Composite Reliability Test for the Model

| Variable | Items | Loadings | Cronbach’s Alpha | Composite Reliability | Average Variance Extracted (AVE) |
|---|---|---|---|---|---|
| AI adoption in auditing | AIAA_1 | 0.819 | 0.868 | 0.901 | 0.603 |
| | AIAA_2 | 0.718 | | | |
| | AIAA_3 | 0.775 | | | |
| | AIAA_4 | 0.825 | | | |
| | AIAA_5 | 0.781 | | | |
| | AIAA_6 | 0.735 | | | |
| confidence of financial statement users | CFSU_1 | 0.928 | 0.868 | 0.901 | 0.602 |
| | CFSU_2 | 0.900 | | | |
| | CFSU_3 | 0.866 | | | |
| | CFSU_4 | 0.928 | | | |
| | CFSU_5 | 0.900 | | | |
| | CFSU_6 | 0.750 | | | |
| AI governance and ethics | AIGE_1 | 0.928 | 0.881 | 0.926 | 0.807 |
| | AIGE_2 | 0.900 | | | |
| | AIGE_3 | 0.866 | | | |

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

 The results presented in Table 01 demonstrate strong reliability and convergent validity across all three constructs in the model. All indicator loadings exceed the recommended threshold of 0.70, indicating that each item sufficiently represents its underlying variable. Cronbach’s Alpha values for AI adoption in auditing (0.868), confidence of financial statement users (0.868), and AI governance and ethics (0.881) all surpass the minimum acceptable level of 0.70, confirming high internal consistency. Similarly, Composite Reliability (CR) values range from 0.901 to 0.926, further supporting the stability and reliability of the measurement model. The Average Variance Extracted (AVE) values for all variables—0.603 for AI adoption, 0.602 for confidence of financial statement users, and 0.807 for AI governance and ethics—exceed the 0.50 benchmark, demonstrating adequate convergent validity and confirming that each construct explains more than half of the variance in its indicators. Collectively, these results indicate that the model’s measurement scales are robust, reliable, and suitable for subsequent structural analysis.

 

Discriminant Validity:

The recommended criteria for assessing discriminant validity in PLS-SEM include several established approaches. The Fornell–Larcker Criterion evaluates discriminant validity by comparing the square root of each construct’s Average Variance Extracted (AVE) with its correlations with other constructs. Discriminant validity is supported when a construct’s AVE square root exceeds all inter-construct correlations (Henseler et al., 2015; Hamid et al., 2017). The Heterotrait–Monotrait Ratio of Correlations (HTMT) provides a more rigorous assessment by examining the ratio of between-construct correlations to within-construct correlations. An HTMT value below 0.85 is generally recommended when constructs in the model are conceptually distinct (Franke & Sarstedt, 2019; Henseler et al., 2015; Hamid et al., 2017).

 

Although the Fornell–Larcker Criterion and cross-loadings have traditionally dominated discriminant validity assessment, recent work by Henseler, Ringle, and Sarstedt (2015) highlights HTMT as a superior alternative due to its higher sensitivity and specificity in detecting discriminant validity issues (Cepeda-Carrión et al., 2022).
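For readers who wish to verify HTMT outside SmartPLS, the sketch below implements the ratio from raw indicator scores: the mean heterotrait correlation divided by the geometric mean of the two blocks’ average monotrait correlations. The data frame and column lists are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, block_a: list, block_b: list) -> float:
    """Heterotrait-monotrait ratio for two indicator blocks (Henseler et al., 2015)."""
    corr = data.corr().abs()
    # Mean correlation between indicators of different constructs.
    heterotrait = corr.loc[block_a, block_b].to_numpy().mean()
    # Mean correlation among indicators of the same construct (upper triangle only).
    def monotrait(block):
        sub = corr.loc[block, block].to_numpy()
        return sub[np.triu_indices_from(sub, k=1)].mean()
    return heterotrait / np.sqrt(monotrait(block_a) * monotrait(block_b))

# Hypothetical usage with this study's indicator names:
# htmt(survey, [f"AIAA_{i}" for i in range(1, 7)], [f"CFSU_{i}" for i in range(1, 7)])
```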

 

Table 02: Fornell-Larcker Criterion

| Variables | AI adoption in auditing | AI governance and ethics | confidence of financial statement users |
|---|---|---|---|
| AI adoption in auditing | 0.776 | | |
| AI governance and ethics | 0.627 | 0.898 | |
| confidence of financial statement users | 0.678 | 0.521 | 0.776 |

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

The Fornell–Larcker Criterion results in Table 02 indicate that the model demonstrates satisfactory discriminant validity among the three constructs. Each construct’s square root of the Average Variance Extracted (AVE)—shown on the diagonal—is higher than its corresponding correlations with other constructs. For instance, the square root of the AVE for AI adoption in auditing (0.776) exceeds its correlations with AI governance and ethics (0.627) and confidence of financial statement users (0.678), confirming that AI adoption is empirically distinct from the other variables. Similarly, AI governance and ethics show a strong square root AVE value (0.898), which is higher than its correlations with AI adoption (0.627) and user confidence (0.521), indicating adequate discriminant validity. The same pattern is observed for the confidence of financial statement users (0.776), which surpasses its correlations with AI adoption (0.678) and AI governance and ethics (0.521). Overall, these results confirm that each construct measures a unique dimension within the model and that multicollinearity is not a concern, allowing for reliable interpretation of structural relationships.

 

Table 03: Heterotrait-Monotrait Ratio of Correlations (HTMT)

| Variables | AI adoption in auditing | AI governance and ethics | confidence of financial statement users |
|---|---|---|---|
| AI adoption in auditing | | | |
| AI governance and ethics | 0.727 | | |
| confidence of financial statement users | 0.772 | 0.586 | |
| AIGE × AIAA | 0.162 | 0.188 | 0.064 |

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

The HTMT results in Table 03 further support the discriminant validity of the model, as all HTMT values fall well below the recommended threshold of 0.85. The HTMT ratios between AI adoption in auditing and AI governance and ethics (0.727), as well as between AI adoption and confidence of financial statement users (0.772), demonstrate that the constructs are related but not excessively overlapping, indicating clear conceptual distinctions. Similarly, the HTMT value between AI governance and ethics and user confidence (0.586) is comfortably below the threshold, confirming that these variables measure separate constructs within the model. The interaction term (AIGE × AIAA) also shows very low HTMT values with the other constructs (0.162, 0.188, and 0.064), which is expected for moderation variables and indicates no multicollinearity issues. Overall, the HTMT findings strengthen the evidence of discriminant validity, confirming that the constructs are empirically distinct and appropriate for structural model analysis.

 

Figure 2: General Structural Model for the Study

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

Secondly: Testing the Internal Model (Structural Model)

In this section, we evaluate the results of the structural model by testing the degree of correlation, assessing the predictive capabilities of the model, and examining the relationships between constructs. Additionally, we conduct the necessary tests to evaluate the model.

 

Validity of the Structural Model:

The recommended criteria for analysing the results of the structural model validity test (R², F²) in PLS-SEM encompass both measurement and structural model assessments. Measurement model assessment involves evaluating the relationships between latent constructs and their observed indicators, ensuring adequate reliability, indicator loadings, and internal consistency (Fauzi, 2022). Following this, structural model assessment examines the significance and relevance of the path coefficients, as well as the model’s explanatory and predictive capabilities. Key metrics used in this stage include the coefficient of determination (R²), which reflects the model’s explanatory power, and the effect size (F²), which indicates the relative contribution of each exogenous variable to the endogenous variable. Additional predictive assessment tools, such as the cross-validated predictive ability test (CVPAT), further support the evaluation of model robustness (Hair Jr et al., 2021).

 

Recent methodological advancements have introduced new guidelines, such as the use of PLS Predict—a modern approach that assesses out-of-sample predictive performance to ensure the model’s practical relevance. Other developments include enhanced metrics for comparing alternative model specifications and complementary robustness-checking procedures designed to strengthen the reliability of PLS-SEM findings (Hair et al., 2019). Together, these criteria and updated guidelines provide a comprehensive framework for evaluating the validity, explanatory strength, and predictive accuracy of structural models in PLS-SEM.
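A minimal sketch of the f² logic, assuming latent variable scores have already been extracted (for example, exported from SmartPLS): the effect size compares R² with and without the predictor of interest.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def f_squared(X_full: np.ndarray, X_reduced: np.ndarray, y: np.ndarray) -> float:
    """Effect size f2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    r2_incl = LinearRegression().fit(X_full, y).score(X_full, y)
    r2_excl = LinearRegression().fit(X_reduced, y).score(X_reduced, y)
    return (r2_incl - r2_excl) / (1 - r2_incl)

# Hypothetical usage: X_full holds the AIAA, AIGE, and interaction scores;
# X_reduced drops the column whose contribution is being sized.
```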


 

Table 04: Validity of the Structural Model

| Variables | Coefficient of Determination (R²) | Effect Size (F²) |
|---|---|---|
| confidence of financial statement users | 0.502 | / |
| AI adoption in auditing | / | 0.422 |
| AI governance and ethics | / | 0.040 |

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

The results in Table 04 indicate that the structural model demonstrates acceptable predictive power and meaningful explanatory contributions from the independent and moderating variables. The coefficient of determination (R²) for confidence of financial statement users is 0.502, meaning that AI adoption in auditing and AI governance and ethics together explain 50.2% of the variance in user confidence—a moderate and substantively meaningful level according to common PLS-SEM benchmarks. Regarding effect size (F²), AI adoption in auditing shows a strong effect size of 0.422, indicating that it makes a substantial and impactful contribution to predicting user confidence. In contrast, AI governance and ethics yield a small effect size (0.040), suggesting that while it contributes to the model, its direct explanatory power is limited. This pattern is consistent with the role of governance variables, which often exert influence through moderation rather than direct effects. Overall, the structural model demonstrates solid predictive accuracy, with AI adoption emerging as the primary driver of user confidence and AI governance and ethics playing a smaller but potentially important complementary role.

 

DISCUSSION OF TESTING THE STUDY HYPOTHESES

When analysing the results of hypothesis testing in Partial Least Squares Structural Equation Modelling (PLS-SEM), several key criteria must be considered to ensure the validity and reliability of the findings. Hypothesis testing using confidence intervals and p-values is fundamental, as each hypothesis corresponds to a specific structural path within the model. Researchers typically rely on one-tailed or two-tailed p-values to determine whether the hypothesised relationships are statistically significant (Kock, 2016).

 

Additionally, structural model assessment is required to verify that the constructs are unidimensional and that the relationships between latent variables and their indicators behave as expected. This step ensures that the structural paths being tested reflect meaningful and theoretically supported relationships among the constructs (Kock, 2016).

 

To evaluate the study hypotheses, the bootstrapping technique is employed to generate estimates for the structural model’s path coefficients. These coefficients, which range from −1 to +1, indicate both the direction and strength of the relationships. Values closer to +1 represent strong positive associations, whereas values approaching −1 reflect strong negative relationships. Statistical significance is typically established when p-values fall below the 5% threshold. In contrast, coefficients near zero indicate weak or negligible relationships (Kock, 2018). Together, these criteria ensure a rigorous and comprehensive evaluation of the hypothesised relationships within the PLS-SEM framework.
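The sketch below illustrates the bootstrapping logic for a single structural path, assuming standardized latent scores; with one predictor, the standardized path coefficient equals the correlation of the scores. SmartPLS performs the same resampling internally over the full model, so this is a simplified analogue rather than the software’s procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def bootstrap_path(x: np.ndarray, y: np.ndarray, n_boot: int = 5000):
    """Bootstrap t- and p-values for a single standardized path coefficient."""
    n = len(y)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        estimates[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    original = np.corrcoef(x, y)[0, 1]
    se = estimates.std(ddof=1)       # bootstrap standard deviation, as in Table 5
    t_stat = original / se           # T statistic = estimate / bootstrap SE
    p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))
    return original, se, t_stat, p_value
```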

 

Hypotheses: 

First hypothesis (H1): There is a statistically significant positive relationship between AI adoption in auditing and the confidence of financial statement users at the 5% significance level.

 

Second hypothesis (H2): AI governance and ethics play a significant role in reducing the relationship between AI adoption in auditing and the confidence of financial statement users at the 5% significance level.

 

Table 5: Testing the Hypotheses for the Study (H1, H2)

| Hypothesis | Path | Original Sample | Sample Mean | Standard Deviation | T Statistics | P Values | Decision |
|---|---|---|---|---|---|---|---|
| H1 | AI adoption in auditing → confidence of financial statement users | 0.589 | 0.598 | 0.115 | 5.124 | 0.000 | Hypothesis accepted |
| H2 | AIGE × AIAA → CFSU | -0.156 | -0.156 | 0.076 | 2.052 | 0.040 | Hypothesis accepted |

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

The results in Table 5 provide strong statistical support for both hypotheses (H1 and H2), confirming the significance of the proposed relationships within the model. For H1, the path coefficient from AI adoption in auditing to the confidence of financial statement users is positive and substantial (0.589), with a high T-value of 5.124 and a p-value of 0.000, indicating a highly significant effect at the 5% level. This means that greater adoption of AI in auditing significantly enhances user confidence in financial statements. For H2, the interaction term between AI governance and ethics and AI adoption (AIGE × AIAA) shows a negative but statistically significant coefficient (−0.156), with a T-value of 2.052 and a p-value of 0.040, leading to acceptance of the hypothesis. This result suggests that AI governance and ethics moderate the relationship between AI adoption and user confidence, albeit in a reducing direction. In other words, stronger AI governance and ethical controls slightly weaken the direct positive effect of AI adoption on user confidence—possibly because increased oversight, transparency, and ethical constraints limit over-optimistic expectations of AI, resulting in a more cautious but reliable form of confidence. Overall, the hypothesis testing confirms the relevance of both AI adoption and governance factors in shaping user confidence.

 

Figure 3: Results of path coefficients

 

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

Table 6: Testing the effectiveness of the moderating variable (AI governance and ethics) in reducing the effect of AI adoption in auditing on the confidence of financial statement users

| Relationship | Path Coefficient | P Values | Decision |
|---|---|---|---|
| AI adoption in auditing → confidence of financial statement users | 0.589 | 0.000 | Accepted |
| AI governance and ethics → confidence of financial statement users | 0.181 | 0.076 | Rejected |
| Interaction (AI adoption in auditing × AI governance and ethics) → confidence of financial statement users | -0.156 | 0.040 | Accepted |

Source: Compiled by researchers based on the outputs of SmartPLS 4.

 

The results in Table 6 show that AI adoption in auditing has a strong and statistically significant positive effect on the confidence of financial statement users, as indicated by the path coefficient of 0.589 and a p-value of 0.000. This confirms that increased use of AI tools in auditing enhances stakeholders’ trust in the reliability and quality of financial information. In contrast, the direct effect of AI governance and ethics on user confidence is not significant (path coefficient = 0.181; p = 0.076), suggesting that governance and ethical frameworks alone do not independently influence user perceptions. However, the interaction term between AI adoption and AI governance and ethics is negative and statistically significant (path coefficient = –0.156; p = 0.040), indicating a meaningful moderating effect. Specifically, stronger AI governance and ethical safeguards reduce—or dampen—the strength of the positive relationship between AI adoption and user confidence. This implies that while AI adoption initially boosts confidence, the introduction of rigorous governance and ethical controls may temper overly optimistic expectations, encouraging a more cautious and realistic trust in AI-enabled auditing. Overall, the findings demonstrate that AI governance and ethics operate as a significant moderating factor, shaping how users interpret and respond to AI adoption in auditing.
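The negative interaction can be read as a simple-slopes adjustment. Using the standardized coefficients from Table 6, the sketch below computes the effect of AI adoption at low, mean, and high levels of AI governance and ethics; the dampening pattern mirrors Figure 4.

```python
# Simple slopes from the standardized coefficients reported in Table 6.
b_adoption = 0.589       # AI adoption -> confidence
b_interaction = -0.156   # (adoption x governance) -> confidence

for label, z in [("-1 SD governance", -1.0), ("mean governance", 0.0),
                 ("+1 SD governance", 1.0)]:
    slope = b_adoption + b_interaction * z
    print(f"{label}: effect of AI adoption = {slope:.3f}")
# -1 SD: 0.745, mean: 0.589, +1 SD: 0.433 -- governance dampens the slope
```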

 

Figure 4: Path coefficients of the interaction (AI adoption in auditing × AI governance and ethics) → confidence of financial statement users

 

Source: Compiled by researchers based on the outputs of Microsoft Excel.

 

 

DISCUSSION

Interpretation of Findings

The results of this study reveal several important insights regarding the role of Artificial Intelligence (AI) in auditing and the confidence of financial statement users. First, AI adoption in auditing was found to have a strong and statistically significant positive effect on user confidence. This suggests that as audit processes increasingly incorporate AI tools—such as automated testing, anomaly detection, and predictive analytics—financial statement users perceive audit outputs as more reliable, accurate, and timely. The substantial effect size (F² = 0.422) confirms that AI adoption is a major determinant of confidence, reflecting the growing trust stakeholders place in technologically enhanced auditing procedures.

 

Second, the study found that AI governance and ethics, although conceptually important, do not have a significant direct effect on user confidence. This indicates that financial statement users may not independently associate governance frameworks with improved confidence unless they interact with other elements of the audit environment. However, the moderating effect revealed a more nuanced picture: AI governance and ethics significantly reduce the strength of the positive relationship between AI adoption and user confidence. This negative moderation suggests that while AI tools initially raise confidence due to perceptions of enhanced efficiency and objectivity, the introduction of strict governance and ethical controls may temper expectations by highlighting issues such as bias, transparency, and algorithmic accountability. In essence, governance frameworks make users more cautious and analytical rather than blindly trusting AI-driven outputs, leading to a more realistic and balanced perception of AI’s role in auditing.

 

Finally, the overall model demonstrated strong measurement reliability and acceptable predictive power (R² = 0.502), indicating that AI adoption and governance factors collectively explain a meaningful proportion of variation in user confidence. The discriminant validity results (Fornell-Larcker, HTMT) further support the conceptual distinction between constructs, reinforcing the robustness of the study’s findings.

 

Comparison with Prior Research

The findings are consistent with earlier studies emphasizing the benefits of AI adoption for audit quality and stakeholder trust. Prior research has shown that AI enhances efficiency, reduces human error, and improves anomaly detection—factors that contribute to greater confidence in financial reporting (Abu Sharshouh, 2025; Thaluru et al., 2025; Muftah, 2022). The current study’s results align with these conclusions, demonstrating that financial statement users recognize the value of AI in improving audit reliability and responsiveness. Additionally, the results confirm work by Chen (2025) and Nugraha (2024), who found that auditors and users hold favorable views of AI’s potential to improve audit outcomes when the technology is perceived as useful and easy to integrate.

However, the study introduces a critical nuance by showing that AI governance and ethics do not independently enhance user confidence, a finding that diverges from studies suggesting that strong governance automatically increases trust in AI systems (Daly et al., 2021; Kaur, 2024). Instead, the present research finds that governance frameworks moderate the AI–confidence relationship in a negative direction. This contrasts with earlier assumptions that governance simply amplifies the benefits of AI, and it supports recent concerns about overreliance on AI and the importance of maintaining human oversight to ensure accountability and transparency (Fobellah, 2025; Ganapathy, 2025). The moderation effect aligns with literature suggesting that ethical scrutiny and regulatory compliance often slow down or constrain the adoption of new technologies, potentially leading users to develop more cautious interpretations of AI-driven results.

 

CONCLUSION

This study examined the relationship between AI adoption in auditing and the confidence of financial statement users, along with the moderating role of AI governance and ethics. The findings indicate that AI adoption has a strong and statistically significant positive impact on user confidence, confirming that the integration of AI tools enhances perceptions of audit accuracy, reliability, and efficiency. The model demonstrated solid predictive power, showing that more than half of the variation in user confidence can be explained by the study variables. Furthermore, the measurement model exhibited high reliability and valid discriminant properties, reinforcing the robustness of the results.

 

The study also found that AI governance and ethics, despite their conceptual relevance, do not exert a significant direct effect on user confidence. However, their role emerges clearly through moderation: AI governance and ethics significantly reduce the strength of the positive relationship between AI adoption and user confidence. This moderating effect suggests that while stakeholders generally trust AI-enabled auditing, the presence of governance frameworks encourages a more cautious and informed perception, tempering overly optimistic expectations about AI’s capabilities. Governance mechanisms such as transparency, accountability, and ethical oversight serve to balance trust with critical judgment, ensuring that users evaluate AI-generated audit outcomes more rationally.

 

Overall, the study underscores the importance of AI adoption as a key driver of confidence in financial reporting, while highlighting the essential moderating role of governance and ethics in shaping how users interpret and trust AI-based audit practices. These insights reinforce the need for audit institutions to strike a balance between embracing technological innovation and maintaining strong ethical and governance safeguards to enhance credibility and foster sustainable trust in AI-assisted auditing.

REFERENCES
  1. O’Donnell, J. B. (2024). Auditing Transformation: A Model of Artificial Intelligence Adoption. The Journal of Applied Business and Economics, 26(6). https://doi.org/10.33423/jabe.v26i6.7390
  2. Batool, W., Ali, S., & Saira, S. (2025). Integrating technology and artificial intelligence in accounting and auditing: A bibliometric review. 3(7), 443–454. https://doi.org/10.63075/s95v7d24
  3. Nugraha, G. C. H. (2024). Perception of Artificial Intelligence Adoption on Audit Quality. ABIS: Accounting and Business Information Systems Journal, 12(1). https://doi.org/10.22146/abis.v12i1.89329
  4. Fachriyah, N., & Anggraeni, O. L. (2024). The Use of Artificial Intelligence in Financial Statement Audit. Jurnal Indonesia Sosial Teknologi, 5(10), 3881–3892. https://doi.org/10.59141/jist.v5i10.5251
  5. Chen, X. (2025). Level of Acceptance and Behavioral Intention on the Use of Artificial Intelligence in Internal Audit Procedure. International Journal of Global Economics and Management, 7(3), 287–292. https://doi.org/10.62051/ijgem.v7n3.32
  6. Zajmi, S. (2019). Principles of transparent financial reporting as the basis of financial statements quality control. 13(2), 60–68. https://doi.org/10.5937/POSEKO16-24566
  7. Asare, S. K., & Wright, A. (2011). The Effect of Type of Internal Control Report on Users’ Confidence in the Accompanying Financial Statement Audit Report. Social Science Research Network. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1614503
  8. Enderle, G. (2006). Confidence in the Financial Reporting System: Easier to Lose than to Restore (pp. 163–173). Palgrave Macmillan, New York. https://doi.org/10.1057/9781403984623_16
  9. du Toit, E., & Vermaak, F. N. S. (2014). Company financial health: Financial statement users’ and compilers’ perceptions. Journal of Economic and Financial Sciences, 7(3), 819–836. https://doi.org/10.4102/JEF.V7I3.239
  10. Du, X. (2022). AI Governance and Ethics Framework for Sustainable AI and Sustainability. https://doi.org/10.48550/arxiv.2210.08984
  11. Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., & Wang, W. W. (2021). AI, Governance and Ethics: Global Perspectives. Social Science Research Network. https://doi.org/10.2139/SSRN.3684406
  12. Kaur, J. (2024). Responsible Artificial Intelligence (AI) Governance. Advances in Business Strategy and Competitive Advantage Book Series, 337–368. https://doi.org/10.4018/979-8-3693-3948-0.ch014
  13. Schneider, J., Abraham, R., & Meske, C. (2020). AI Governance for Businesses. arXiv: Artificial Intelligence. https://dblp.uni-trier.de/db/journals/corr/corr2011.html#abs-2011-10672
  14. Mishra, A. (2024). Scalable AI Governance and Ethics (pp. 147–165). Apress. https://doi.org/10.1007/979-8-8688-0158-7_9
  15. Muftah, M. A. R. A. (2022). The Impact of Artificial Intelligence on Auditing Practices and Financial Reporting Accuracy. Integrated Journal for Research in Arts and Humanities, 2(1), 40–46. https://doi.org/10.55544/ijrah.2.1.49
  16. Thaluru, M., Gupta, M., Liao, Z., Zhang, T., & Sharman, R. (2025). Impact of AI on Audit and Assurance. 63–100. https://doi.org/10.4018/979-8-3373-3078-5.ch003
  17. Abu Sharshouh, A. (2025). The Use of Artificial Intelligence in Accounting and Auditing. 6(1), 1–15. https://doi.org/10.71233/kared.1694834
  18. Fobellah, A. N. (2025). Navigating digital transformation: auditing Artificial Intelligence-powered financial systems: A conceptual review. International Journal of Science and Research Archive, 16(2), 023–028. https://doi.org/10.30574/ijsra.2025.16.2.2274
  19. Jain, A. K. (2025). Financial reporting and the role of artificial intelligence in automating audits. 08(01(II)), 277–282. https://doi.org/10.62823/ijarcmss/8.1(ii).7358
  20. Hu, N. (2025). Influence and Applications of AI in Accounting and Audit Practice. Advances in Economics, Management and Political Sciences, 219(1), 28–38. https://doi.org/10.54254/2754-1169/2025.gl27242
  21. Ganapathy, V. (2025). A Comparative Study of Explainable Artificial Intelligence (XAI) Techniques in Financial Auditing Applications. Edumania, 03(03), 185–215. https://doi.org/10.59231/edumania/9147
  22. Laamari, I. (2025). Artificial intelligence in financial auditing: improving efficiency and addressing ethical and regulatory challenges. Brazilian Journal of Business, 7(1), e76833. https://doi.org/10.34140/bjbv7n1-017