Journal of International Commercial Law and Technology
2025, Volume 6, Issue 1: 744-754. doi: 10.61336/Jiclt/25-01-72
Research Article
Decoding the Organisational AI Adoption: What Do Theories and Frameworks Reveal?
1 Research Scholar, International School of Management Excellence (ISME), Bengaluru, INDIA, 562125
2 Professor & Research Guide, International School of Management Excellence (ISME), Bengaluru, INDIA
Received: Sept. 28, 2025 | Revised: Oct. 13, 2025 | Accepted: Oct. 25, 2025 | Published: Nov. 8, 2025
Abstract

Artificial intelligence (AI) has emerged as a transformative force reshaping business practices, operational processes, and decision-making structures across diverse organisational settings. This paper provides a comprehensive literature review of the frameworks and theories developed to explain the multifaceted process of AI adoption at the organisational level. The study synthesises insights from established models such as the Technology Acceptance Model, the Theory of Planned Behaviour, the Diffusion of Innovations Theory, the Unified Theory of Acceptance and Use of Technology, and the Technology-Organisation-Environment Framework in the context of AI. Each of these theories is examined in depth, with a focus on its core constructs, strengths, and limitations in the context of AI adoption. The paper also explores emerging theories specifically tailored to AI, such as AI Governance Frameworks, Risk Management Frameworks, and Generative AI Adoption Models. The analysis reveals the interplay of technological, organisational, psychological, legal, social, environmental and ethical factors in shaping AI adoption decisions and highlights the importance of considering these factors holistically. The paper identifies gaps in the existing literature in the wake of Generative AI and Agentic AI and proposes future research directions. The study underscores the importance of developing more nuanced and context-specific theories to address the evolving nature of AI technologies and their impact on society. By bridging the gap between theory and practice, this research contributes to a deeper understanding of AI adoption and its implications for various stakeholders.

Keywords
INTRODUCTION

Artificial intelligence is part of our day-to-day lives and has become an integral part of many products, applications, and services. AI has outperformed conventional solutions in many business areas, such as manufacturing, healthcare, transportation, banking, and retail, which has also helped increase the use of AI methods in these areas (Bharati et al., 2024). AI is also transforming functional areas such as marketing, finance, operations, and human resources management (Haleem et al., 2022; Routray, 2024). However, concerns around AI are posing challenges to its adoption. While AI adoption unlocks new value for organisations, it also introduces new risks (Alzubaidi et al., 2023; Zhou, 2024). To realise the benefits and improve the adoption of AI, organisations should assess and mitigate those risks by incorporating principles that build trust into each stage of AI development and operations (Mukherjee, 2024). As companies navigate the intricate path of integrating AI technologies into their business processes, understanding the theoretical underpinnings of this adoption process becomes crucial for successful implementation.

 

The adoption of AI is influenced by a multitude of factors, including technological barriers, ethical concerns, and organisational resistance. These challenges highlight the need for a structured approach to understanding how and why organisations choose to adopt AI. This is where theoretical frameworks play a pivotal role. Frameworks such as the Technology Acceptance Model (TAM), the Diffusion of Innovations (DOI), and the Theory of Planned Behavior (TPB) provide valuable insights into the dynamics of AI adoption. These theories offer lenses through which one can analyse the decision-making processes, behavioural intentions, and innovation diffusion that are integral to AI integration (Ajzen, 1991; Davis, 1989; Rogers, 2003).

 

Despite the abundance of theories, there exists a gap in synthesising these frameworks to provide a comprehensive understanding of AI adoption. Previous studies have often focused on specific aspects of AI adoption, yet there remains a need for a holistic analysis that integrates multiple theoretical perspectives. The significance of understanding AI adoption lies in its potential to unlock competitive advantages, drive innovation, and enhance operational efficiency. However, without a robust theoretical foundation, organisations may encounter difficulties in effectively implementing AI technologies.

 

Objective

The objective of this Systematic Literature Review (SLR) is to analyse both the challenges and opportunities inherent in AI adoption at the organisational level, aiming to promote a deeper understanding and facilitate broader integration of these frameworks and theories in industry and academia.

 

Table 1. Research Questions & Motivation.

RQ1. Question: How do different frameworks and theories address the challenges and opportunities associated with AI adoption?
     Motivation: Exploring how different frameworks address challenges and opportunities for organisations in increasing AI adoption.

RQ2. Question: What are the critical factors identified in existing frameworks and theories that influence AI adoption at the organisational level?
     Motivation: Understanding the critical factors influencing AI adoption and identifying key drivers and barriers for successful AI implementation and adoption.

 

In this paper, the authors explore how various theoretical frameworks can be applied and combined to provide a deeper understanding of the AI adoption process. Ultimately, the paper provides a comprehensive view of diverse factors that impact AI adoption in an organisational context, offering valuable insights for future research and practical implementations.

METHODOLOGY

To create a sound information base for both researchers and practitioners on the topic of AI adoption, the authors followed the systematic approach of an SLR. Our SLR aims to select, analyse, and synthesise findings from the existing literature on AI adoption. This systematic literature review was conducted following established guidelines (Kitchenham & Charters, 2007; Moher et al., 2009) to ensure comprehensive coverage and reproducibility. The review involved a structured search across several academic databases, including Scopus, arXiv, Springer, IEEE Access, ACM Digital Library, Frontiers in Robotics and AI, Applied Sciences, and Google Scholar. The AI adoption-specific searches were restricted to peer-reviewed articles written in English and searched across classical adoption theories, AI adoption-specific theories and frameworks, and Industry frameworks. 

 

Selection Criteria

The selection process incorporated both inclusion and exclusion criteria to refine the pool of literature. Articles were included if they met the requirements listed in the table below.

 

Table 2. Study selection criteria.

Inclusion Criteria
I1. Presented theoretical frameworks or models that explain or guide the adoption of AI at the organisational level.
I2. Classical theories and frameworks that explain technology adoption and innovation in organisations.
I3. Offered empirical evidence or conceptual models that addressed internal and external drivers of AI adoption.
I4. Manuscripts that were available in full text in English.

Exclusion Criteria
E1. Focused exclusively on technical or algorithmic aspects of AI without addressing organisational impact.
E2. Did not provide a clear methodological basis for the proposed frameworks or theories.
E3. Articles with insufficient citations.
E4. Not accessible in their complete form or available only behind a paywall.

RESULTS

The systematic literature review (SLR) process for identifying AI adoption theories and frameworks follows a structured and rigorous methodology. The methodology is based on the PRISMA statement (Page et al., 2021), which recommends documenting the selection process using a flow diagram, as depicted in Fig. 1. The approach can be broken down into three main phases: Identification, Screening, and Inclusion.

 

In the identification phase, a comprehensive search for relevant literature across multiple databases and sources was conducted. A total of 54,601 records were initially identified. To ensure quality and relevance, irrelevant or low-quality records were removed: 34,291 records were excluded due to a lack of sufficient citations, 3,458 preprints were removed to focus on peer-reviewed and finalised studies, and 462 retracted, withdrawn, or corrected records were excluded to maintain the integrity of the review. After this filtering, 16,390 records remained for further evaluation.

 

The screening phase involved a detailed review of the remaining records to assess their relevance to the research topic. 12,238 records were excluded based on their titles, as they were unrelated to AI adoption theories or frameworks, and 3,828 records were excluded after abstract screening, as they focused on unrelated topics such as model learning frameworks. Advanced methods such as semantic search and hybrid search were used to exclude some of the records. This process narrowed the pool to 324 reports deemed potentially relevant. Of these 324 reports, 27 could not be retrieved, leaving 297 reports for full-text screening.

 

Fig. 1. Flow chart showing AI adoption theories and frameworks for study selection. (Identification: records identified, n=54,601; records removed before screening: insufficient citations n=34,291, preprints n=3,458, retracted/withdrawn/corrections n=462. Screening: records screened n=16,390; excluded by title n=12,238 and by abstract n=3,828, including model learning frameworks; reports sought for retrieval n=324; reports not retrieved n=27; reports assessed n=297; reports excluded after screening n=262. Inclusion: theories & frameworks included in the review n=35.)

 

A further 262 reports were excluded after a detailed review, as they did not meet the inclusion criteria (e.g., lack of focus on organisational AI adoption frameworks or insufficient methodological rigour). In the final phase, 35 theories and frameworks were identified and included in the review. These represent the most relevant and high-quality contributions to the understanding of AI adoption at the organisational level.
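The screening arithmetic above can be tallied in a few lines. The following sketch is purely illustrative, using the counts reported in the text; it is not the authors' actual screening tooling.

```python
# Sanity-check of the PRISMA record counts reported in this review.
identified = 54_601
removed_before_screening = {
    "insufficient citations": 34_291,
    "preprints": 3_458,
    "retracted/withdrawn/corrections": 462,
}
screened = identified - sum(removed_before_screening.values())
assert screened == 16_390  # records remaining after identification

excluded_on_title = 12_238
excluded_on_abstract = 3_828
sought_for_retrieval = screened - excluded_on_title - excluded_on_abstract
assert sought_for_retrieval == 324  # reports sought for retrieval

not_retrieved = 27
assessed_full_text = sought_for_retrieval - not_retrieved
assert assessed_full_text == 297  # reports screened in full text

excluded_full_text = 262
included = assessed_full_text - excluded_full_text
assert included == 35  # theories & frameworks included in the review
print(f"{included} theories and frameworks included")
```

Each stage's output count equals the previous count minus that stage's exclusions, which is exactly the consistency a PRISMA flow diagram is meant to make auditable.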

 

The programmatic approach used in the initial search process minimises the risk of researcher bias in the selection of studies. While the authors recognise that some relevant manuscripts may not have been captured by the automated process, the extensive number of studies identified through the broad search queries provides a robust foundation for this systematic literature review (SLR).

DISCUSSION

In this section, the authors analyse the studies included in this review and discuss the relevant considerations and gaps, as well as the factors the various frameworks considered for adoption. This section addresses the research questions listed in Table 1.

 

How do different frameworks and theories address the challenges and opportunities associated with AI adoption? (RQ1)

 

To address this research question, the authors analysed classical organisational theories, AI-specific frameworks, and industry frameworks.

 

Classical Organisational Theories

Various classical organisational theories can be utilised to explain the adoption of AI in organisations. Organisational Learning Theory (Argyris & Schön, 1978) emphasises the importance of fostering a culture of continuous learning within organisations. This theory highlights the need for organisations to adapt to change by learning from past experiences and challenging existing assumptions. It underscores the role of double-loop learning in enabling organisations to embrace innovation and effectively integrate technologies like AI. The Technology Acceptance Model (TAM) (Davis, 1989) and the Theory of Planned Behavior (TPB) (Ajzen, 1991) focus on individual-level acceptance of technology. AI-TAM (Baroni et al., 2022), an extension of TAM for AI, identifies perceived usefulness and ease of use as critical factors influencing AI adoption, making it essential to design AI systems that are intuitive and beneficial to end-users. TPB, on the other hand, explains how employees' attitudes, peer influence, and confidence in their ability to use AI affect their willingness to adopt it. These theories are particularly relevant for addressing resistance to AI adoption and ensuring user-level acceptance.

 

At the organisational level, the Technology-Organisation-Environment (TOE) Framework (Tornatzky, 1990) provides a holistic view by considering technological readiness, organisational resources, and external pressures such as competition and regulations. Similarly, the Resource-Based View (RBV) (Barney, 1991) emphasises leveraging organisational resources, such as financial, human, and technological capabilities, to gain a competitive advantage through AI adoption. These frameworks highlight the importance of aligning AI initiatives with organisational strategies and ensuring the availability of resources for successful implementation. The Strategic Alignment Model (SAM) (Henderson & Venkatraman, 1994) and Institutional Theory (Scott, 2001) focus on aligning AI strategies with business objectives and addressing external pressures, respectively. SAM underscores the need for strategic alignment between IT and business goals to maximise the value of AI adoption. Institutional Theory highlights the role of regulatory requirements, industry standards, and societal expectations in driving AI adoption, making it essential for organisations to navigate external pressures effectively.

 

Finally, theories like the Diffusion of Innovations Theory (Rogers, 2003) and the Technology Readiness Index (TRI) (Parasuraman, 2000) provide insights into the spread of AI technologies and the readiness of organisations to adopt them. These theories emphasise the importance of factors such as relative advantage, compatibility, and organisational readiness in facilitating AI adoption. The Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) integrates multiple theories, incorporating constructs such as social influence and facilitating conditions. Together, these frameworks and theories offer a comprehensive understanding of the multifaceted nature of AI adoption, addressing individual, organisational, and environmental factors.

 

Table 3. Classical Organisational Theories Analysis

Organisational Learning Theory (Argyris & Schön, 1978)
  Analysis: Involves learning and adaptation, leading to AI adoption. Promotes continuous improvement through AI adoption. Requires a culture of learning and adaptability.
  Parameters: Knowledge Sharing, Continuous Improvement & Learning Culture

Technology Acceptance Model (TAM) (Davis, 1989); AI-TAM (Baroni et al., 2022)
  Analysis: Emphasises perceived usefulness and ease of use. Simple and widely applicable for understanding user acceptance of technologies like AI. Does not address organisational or technical factors.
  Parameters: Perceived Usefulness and Perceived Ease of Use

Technology-Organisation-Environment (TOE) Framework (Tornatzky, 1990)
  Analysis: Considers technological, organisational, and environmental contexts. Requires detailed analysis of multiple factors.
  Parameters: Existing Technologies, Organisational Resources and External Pressures

Theory of Planned Behavior (TPB) (Ajzen, 1991)
  Analysis: Examines individual intentions influenced by attitudes, norms, and control. Limited applicability to organisational-level adoption.
  Parameters: Attitude Toward the Behaviour, Subjective Norms & Perceived Behavioural Control

Resource-Based View (RBV) (Barney, 1991)
  Analysis: Links adoption to gaining a competitive advantage through resources. Assumes organisations have the necessary resources, which may not always be true.
  Parameters: Financial Resources, Human Capital & Technological Assets

Strategic Alignment Model (SAM) (Henderson & Venkatraman, 1994)
  Analysis: Ensures AI adoption aligns with business goals. Requires strong strategic planning and alignment capabilities.
  Parameters: Strategic Fit, Competitive Advantage & IT Infrastructure Alignment

Cost-Benefit Analysis (Boardman et al., 2018)
  Analysis: Weighs the pros and cons of AI adoption. Provides a clear economic rationale for AI adoption. May overlook intangible benefits or risks.
  Parameters: Financial Costs, Potential Benefits, Return on Investment & Risk Assessment

Business Model Innovation (Teece, 2010)
  Analysis: Explores how AI transforms business models. Encourages innovation and value creation through AI. Requires significant organisational change and innovation capabilities.
  Parameters: Value Proposition, Revenue Streams & Cost Structures

Disruptive Innovation Theory (Christensen et al., 2018)
  Analysis: Considers AI's potential to disrupt markets. Highlights opportunities for innovation and market leadership. May underestimate the challenges of disruption.
  Parameters: Market Disruption Potential, Innovation Type & Competitive Response

Institutional Theory (Scott, 2001)
  Analysis: Focuses on adoption due to environmental pressures like regulations. Does not address internal organisational factors.
  Parameters: Regulatory Requirements, Industry Standards & Organisational Legitimacy

Technology Readiness Index (TRI) (Parasuraman, 2000)
  Analysis: Measures individual readiness to embrace technology. Assesses user readiness for AI adoption. Limited to individual-level analysis.
  Parameters: Technological Awareness, IT Infrastructure & Change Readiness

Diffusion of Innovations Theory (Rogers, 2003)
  Analysis: Explores how innovations spread, considering factors like relative advantage and compatibility. Does not account for organisational or cultural barriers.
  Parameters: Relative Advantage, Compatibility, Complexity, Trialability & Observability

Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003, 2012)
  Analysis: Integrates multiple theories, including social influence and facilitating conditions. Complex and may require significant resources to apply.
  Parameters: Performance Expectancy, Effort Expectancy, Social Influence & Facilitating Conditions

 

AI Specific Frameworks

Classical organisational theories explain the general adoption of technologies within organisations; however, they often fail to capture AI-specific aspects such as ethics, explainability and other AI-related factors. To understand these, the authors analysed AI-specific frameworks.

 

The AI frameworks collectively address critical aspects of artificial intelligence (AI) development, adoption, and governance, emphasising ethics, safety, transparency, and societal impact. Frameworks such as the Ethics of Artificial Intelligence (Bostrom & Yudkowsky, 2014) and the Beneficial AI Framework (Russell et al., 2015) focus on aligning AI systems with human values and ensuring their long-term benefits while mitigating existential risks. Similarly, the Value Alignment Framework (Gabriel, 2020) and the Ethical AI Framework (Vesnic-Alujevic et al., 2020) provide guidelines for embedding ethical principles, fairness, and accountability into AI systems, ensuring they respect human rights and societal norms.

 

Table 4. AI-Specific Theories & Frameworks Analysis

The Ethics of Artificial Intelligence (Bostrom & Yudkowsky, 2014)
  Analysis: Emphasises the need for careful consideration of the long-term consequences of AI, particularly as it approaches or surpasses human-level intelligence.
  Parameters: Value Alignment, Existential Risks, Bias and Fairness, Transparency and Accountability

AI Safety Framework (Amodei et al., 2016)
  Analysis: Ensures AI systems are safe and aligned with human goals. Requires robust safety mechanisms and testing.
  Parameters: Risk Assessment, Security Measures, Compliance & Robustness

Explainable AI (XAI) Framework (Gunning et al., 2019)
  Analysis: Ensures transparency and interpretability in AI decision-making processes. May require additional computational resources and design complexity.
  Parameters: Transparency, Interpretability, Explainability & Accountability

Value Alignment Framework (Gabriel, 2020)
  Analysis: Ensures AI systems align with human values and ethical principles. Requires complex value specification and alignment mechanisms.
  Parameters: Value Definition, Alignment Mechanisms & Ethical Considerations

Beneficial AI Framework (Russell et al., 2015)
  Analysis: Guides the development of AI technologies for societal benefit. Requires multi-stakeholder collaboration and governance.
  Parameters: Societal Benefits, Ethical Considerations & Responsible AI Development

AI Transparency Framework (Felzmann et al., 2019)
  Analysis: Ensures AI systems are transparent and explainable. May require additional design complexity and significant regulatory and compliance efforts.
  Parameters: Openness, Explainability & Accountability

Ethical AI Framework, European Union's High-Level Expert Group on AI (Vesnic-Alujevic et al., 2020)
  Analysis: Ensures responsible deployment considering transparency and fairness. Promotes trust and accountability in AI systems.
  Parameters: Privacy, Bias, Transparency & Accountability

Risk Management Framework (Tabassi, 2023)
  Analysis: Identifies and mitigates AI-related risks. Requires specialised expertise in risk management.
  Parameters: Technical Risks, Operational Risks, Compliance Risks & Strategic Risks

AI Governance Framework (Sharma, 2023)
  Analysis: Governs AI use with policies and accountability. Ensures accountability and compliance in AI deployment. Requires strong governance structures and oversight.
  Parameters: Governance Structures, Compliance, Accountability & Ethical Guidelines

Causal AI Framework (Sgaier et al., 2020)
  Analysis: Enables AI systems to reason about cause-and-effect relationships. Requires robust causal inference mechanisms and domain knowledge.
  Parameters: Causal Reasoning, Data Quality & Model Interpretability

AI for Social Good Framework (Floridi et al., 2020)
  Analysis: Guides the development of AI technologies for societal benefit. Aligns AI development with societal needs and ethical goals.
  Parameters: Social Impact, Ethical Considerations, Stakeholder Engagement & Sustainability

AI Trust Framework (Laux et al., 2024)
  Analysis: Builds trust in AI systems for adoption. Enhances user confidence in AI systems.
  Parameters: Transparency, Reliability, Security & Explainability

AI Value Realisation (Davenport, 2018)
  Analysis: Ensures capture of expected benefits from AI. May require continuous monitoring and evaluation.
  Parameters: Value Measurement, ROI Tracking & Continuous Improvement

AI Fairness Framework (Barocas et al., 2023)
  Analysis: Ensures AI systems are free from biases and discrimination. Promotes fairness and equity in AI decision-making.
  Parameters: Bias Detection, Fairness Metrics, Transparency & Accountability

Human-Centered AI (Shneiderman, 2020)
  Analysis: Emphasises ethical and user-centric design. Ensures AI systems are aligned with user needs and ethical standards.
  Parameters: User Experience, Ethical Considerations & Human-AI Collaboration

Human-in-the-Loop (HITL) Framework (Mosqueira-Rey et al., 2023)
  Analysis: Involves human oversight and interaction in AI decision-making processes. Enhances safety, accuracy, and user trust.
  Parameters: Human Oversight, Feedback Mechanisms & Human-AI Collaboration

Adoption of Artificial Intelligence: A TOP Framework-Based Checklist (Tursunbayeva & Chalutz-Ben Gal, 2024)
  Analysis: Provides a structured checklist to assess readiness and address barriers to AI implementation, ensuring a balanced approach to integrating AI into organisational ecosystems.
  Parameters: Technological Readiness, Organisational Alignment, People and Culture, Governance and Compliance
 

On the technical side, frameworks like the Explainable AI (XAI) Framework (Gunning et al., 2019) and the Causal AI Framework (Sgaier et al., 2020) emphasise improving AI interpretability and decision-making by incorporating transparency and causal reasoning. The AI Safety Framework (Amodei et al., 2016) and Risk Management Frameworks focus on ensuring the robustness, reliability, and security of AI systems, particularly in high-stakes applications. Additionally, frameworks like the AI Transparency Framework (Felzmann et al., 2019) and the AI Fairness Framework (Barocas et al., 2023) address issues of bias, fairness, and accountability, aiming to build trust and societal acceptance of AI technologies.

 

From a societal perspective, frameworks such as the AI for Social Good Framework (Floridi et al., 2020) and the Human-Centred AI Framework (Shneiderman, 2020) advocate for leveraging AI to address global challenges and prioritise human needs. The Human-in-the-Loop (HITL) Framework (Mosqueira-Rey et al., 2023) emphasises the importance of human oversight in AI decision-making, ensuring accountability and reliability. Finally, practical frameworks like the TOP Framework-Based Checklist (Tursunbayeva & Chalutz-Ben Gal, 2024) and Davenport's AI Value Realisation (Davenport, 2018) focus on guiding organisations in adopting AI effectively, aligning AI initiatives with business goals, and maximising their value. Together, these frameworks provide a comprehensive roadmap for ethical, safe, and impactful AI development and adoption.

 

Industry-Specific Frameworks

While classical organisational theories and AI-focused adoption frameworks provide valuable insights into AI adoption, they often miss practical implementation nuances. To address some of these challenges, AI pioneers such as Google, Microsoft and others introduced their own frameworks for AI adoption. These frameworks provide comprehensive guidance for organisations to integrate AI effectively, emphasising key aspects such as strategy alignment, ethical AI practices, data readiness, governance, and scalability.

 

Table 5. Industry Theories & Frameworks Analysis

Google: Cloud AI Adoption Framework (Google Cloud, 2020)
  Analysis: Focuses on four pillars: people, process, technology, and data. Emphasises leadership, learning, scalability, and responsible AI practices.
  Factors: People, process, technology, data, leadership, scalability, automation and security.

Microsoft: Cloud Adoption Framework (CAF) for AI (stephen-sumner, 2025)
  Analysis: Provides a strategic guide for AI adoption, focusing on leadership, culture, and responsible AI. Emphasises ethical AI and aligning AI with business goals.
  Factors: Leadership, culture, ethics, business alignment and responsible AI.

AWS: AI Adoption Framework (AWS, 2025)
  Analysis: Offers a cloud-centric approach to AI adoption, focusing on scalability, data management, and operational efficiency.
  Factors: Scalability, cloud infrastructure, data management, operational efficiency and AI tools.

Deloitte: AI Readiness and Management Framework (Van Buren et al., 2020)
  Analysis: Provides a structured approach to assess and enhance AI readiness across organisations, ensuring alignment with business goals.
  Factors: Strategy, people, processes, data governance, technology platforms and ethical implications.

IBM: AI Ladder Framework (IBM, 2020)
  Analysis: Provides a step-by-step approach to AI adoption, focusing on data readiness, AI model development, and operationalisation. Emphasises trust and transparency in AI systems.
  Factors: Data readiness, AI model development, operationalisation, trust and transparency.

IDC: AI Maturity Model (Jyoti & Findling, 2022)
  Analysis: Defines five stages of AI maturity, from ad hoc to optimised. Emphasises data strategies, governance, and embedding AI into business processes for continuous improvement.
  Factors: Data strategies, governance, operational efficiency, continuous improvement and AI integration.

 

What are the critical factors identified in existing frameworks and theories that influence AI adoption at the organisational level? (RQ2)

Based on the above analysis, the authors identified the factors impacting AI adoption. Based on their nature, these factors are divided into Technical, Organisational, Psychological, Environmental, Social, Legal and Ethical factors.

 

Fig. 2. Factors contributing to AI adoption (Technical, Organisational, Psychological, Environmental, Social, Legal and Ethical factors surrounding AI Adoption); the table below shows the individual parameters within these factors.

 

Table 6. Factors and Parameters impacting AI adoption

Technical Factors: Relative Advantage, Compatibility, Complexity, Observability, Existing Technologies, Technological Assets, Infrastructure Alignment, Data Availability, AI Awareness, Technical Expertise, Technological Awareness, AI Capabilities, Process Optimisation, Data Management, Innovation, Causal Reasoning, Data Quality, Model Interpretability, Feedback Mechanisms, Human-AI Collaboration, Security Measures, Reliability, Robustness and Technical Risks

Organisational Factors: Organisational Resources, Organisational Legitimacy, Financial Resources, Human Capital, Strategic Fit, Competitive Advantage, Value Proposition, Revenue Streams, Cost Structures, Knowledge Sharing, Continuous Improvement, Learning Culture, Communication, Leadership, Training, User Involvement, Organisational Structure, Change Readiness, Financial Costs, Potential Benefits, Return on Investment, Risk Assessment, Organisational Culture, Governance Structures, Operational Risks, Strategic Risks, Compliance Risks, Operational Efficiency, Cultural Change, Human Oversight and Alignment Mechanisms

Psychological Factors: Perceived Usefulness, Perceived Ease of Use, Attitude Toward the Behaviour, Subjective Norms, Perceived Behavioural Control, User Experience and Resistance to Change

Environmental Factors: External Pressures, Market Disruption Potential, Competitive Response and Innovation Type

Social Factors: Social Influence, Customer Experience, Social Impact, Stakeholder Engagement and Societal Benefits

Legal Factors: Regulatory Requirements, Industry Standards, Compliance and Governance

Ethical Factors: Ethical Guidelines & Considerations, Privacy, Bias, Transparency, Accountability, Fairness Metrics, Explainability, Responsible AI Development and Sustainability

 

 

While each framework focuses on one aspect or another, there is a greater need for a comprehensive framework that integrates all of these factors.
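The seven factor categories of Fig. 2 and Table 6 lend themselves to a simple structured representation. The sketch below is purely illustrative and not part of the reviewed frameworks: the category names follow the paper, the sample parameters are a small subset of Table 6, and the 1-5 scoring scheme is an assumption introduced here.

```python
# Hypothetical checklist structure for the paper's factor taxonomy.
# Category names are from Table 6; the parameter subset and the
# 1-5 rating scheme are illustrative assumptions.
FACTOR_CATEGORIES = {
    "Technical": ["Relative Advantage", "Compatibility", "Complexity", "Data Quality"],
    "Organisational": ["Leadership", "Learning Culture", "Return on Investment"],
    "Psychological": ["Perceived Usefulness", "Perceived Ease of Use"],
    "Environmental": ["External Pressures", "Market Disruption Potential"],
    "Social": ["Social Influence", "Stakeholder Engagement"],
    "Legal": ["Regulatory Requirements", "Compliance"],
    "Ethical": ["Privacy", "Bias", "Transparency", "Accountability"],
}

def readiness_score(ratings: dict[str, dict[str, int]]) -> dict[str, float]:
    """Average the 1-5 ratings per category; unrated parameters are skipped."""
    scores = {}
    for category, params in FACTOR_CATEGORIES.items():
        rated = [ratings.get(category, {}).get(p) for p in params]
        rated = [r for r in rated if r is not None]
        scores[category] = sum(rated) / len(rated) if rated else 0.0
    return scores

example = {"Technical": {"Data Quality": 4, "Complexity": 2}}
print(readiness_score(example)["Technical"])  # (4 + 2) / 2 = 3.0
```

A comprehensive framework of the kind the review calls for would go well beyond such a flat checklist, but even this form makes the holistic, multi-category nature of the adoption decision explicit.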

CONCLUSION

The overarching goal of this systematic literature review (SLR) is to provide comprehensive insights into the adoption of Artificial Intelligence (AI) at the organisational level by analysing existing frameworks and theories. This study involves an extensive search of the scientific literature using well-defined search terms and research questions, as outlined in the methodology section. While AI offers transformative potential, its successful implementation demands a strategic alignment of organisational resources, technological infrastructure, and external environmental factors.

 

The reviewed frameworks highlight the multifaceted nature of AI adoption. These frameworks emphasise critical factors such as organisational readiness, regulatory compliance, and the alignment of AI initiatives with business objectives. However, the adoption process is further complicated by the need for a collaborative culture, cross-functional expertise, and robust governance mechanisms. Additionally, the rapid evolution of AI technologies, including generative AI and edge computing, introduces new challenges related to scalability, ethical considerations, and data management.

 

This review also underscores the importance of addressing barriers such as resistance to change, lack of technical expertise, and resource constraints. The rise of emerging paradigms such as Generative AI, cloud computing, edge AI and compound AI technologies necessitates innovative solutions to support distributed and heterogeneous AI ecosystems. It highlights the need for organisations to adopt a holistic approach that integrates technological advancements with organisational strategies and external environmental factors.


As for future work, this study identifies gaps in the existing frameworks, particularly in addressing user behaviour towards AI adoption in an organisational context, the impact of reliable model operations, model quality assurance (e.g., hallucinations in the context of Large Language Models), and the societal implications of AI, especially with the emergence of new paradigms such as Generative AI and Agentic AI. Generative AI, with its ability to create content, designs, and solutions autonomously, raises critical questions about intellectual property, ethical use, and the potential for misuse, requiring frameworks to incorporate robust model validation, governance, and reliable model operations specific to these scenarios. Similarly, Agentic AI, which operates with a degree of autonomy and decision-making capability, demands a re-evaluation of trust, control, and human oversight in AI systems, as well as strategies to ensure alignment with human values and organisational goals.
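To make the model quality assurance gap concrete, the sketch below shows one naive heuristic for flagging candidate hallucinations in LLM output: sentences of a generated answer whose word overlap with the source text falls below a threshold. This is an illustrative assumption, not a method proposed in the reviewed literature; the function names, the 0.5 threshold, and the overlap measure are all hypothetical simplifications of what production-grade grounding checks do.

```python
# Minimal sketch of a grounding check for LLM quality assurance.
# Heuristic assumption: a generated sentence sharing few words with the
# source text is a candidate hallucination. Names and threshold are
# illustrative, not a standard technique from the reviewed frameworks.
import re

def _words(text: str) -> set:
    # Lower-case word tokens; digits are ignored by this simple pattern.
    return set(re.findall(r"[a-z']+", text.lower()))

def flag_ungrounded(answer: str, source: str, threshold: float = 0.5) -> list:
    """Return sentences of `answer` poorly supported by `source`."""
    source_words = _words(source)
    flagged = []
    # Split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = _words(sentence)
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

source = "The framework was published by NIST in 2023 as AI RMF 1.0."
answer = "NIST published AI RMF 1.0 in 2023. It mandates ISO 9001 audits."
print(flag_ungrounded(answer, source))  # → ['It mandates ISO 9001 audits.']
```

Real quality-assurance pipelines replace the word-overlap heuristic with entailment models or retrieval-based fact checking, but the control flow — segment, score against a source, flag low-support spans — is the same.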


Future studies should focus on developing adaptive AI adoption frameworks that integrate principles of explainability, fairness, and safety tailored to these advanced AI paradigms. Research is needed to explore how organisations can balance innovation with ethical considerations, address biases in generative outputs, and ensure agentic systems remain corrigible and aligned with human intent. Additionally, frameworks must evolve to include guidelines for managing the societal and economic impacts of these technologies, such as workforce transformation, regulatory compliance, and equitable access. By addressing these emerging challenges, future studies can ensure that AI adoption frameworks remain relevant and effective in guiding organisations through the complexities of Generative and Agentic AI.
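One of the fairness metrics that such adaptive frameworks would need to operationalise can be sketched in a few lines. The example below computes the demographic parity difference — the gap in positive-outcome rates between groups — on illustrative data; the function name and the toy predictions are assumptions for exposition, not material from the reviewed studies.

```python
# Minimal sketch of one fairness metric: demographic parity difference,
# i.e. the gap in positive-prediction rates between groups.
# Data and group labels are illustrative assumptions.

def demographic_parity_diff(predictions, groups):
    """Absolute gap in positive prediction rate between groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" positive rate 0.75, group "b" 0.25.
print(demographic_parity_diff(preds, groups))  # → 0.5
```

A governance framework would set an acceptable bound on such a gap and monitor it over the model's lifecycle, alongside the explainability and safety criteria discussed above.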

REFERENCES
  1. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T
  2. Alzubaidi, L., Al-Sabaawi, A., Bai, J., Dukhan, A., Alkenani, A. H., Al-Asadi, A., Alwzwazy, H. A., Manoufali, M., Fadhel, M. A., Albahri, A. S., Moreira, C., Ouyang, C., Zhang, J., Santamaría, J., Salhi, A., Hollman, F., Gupta, A., Duan, Y., Rabczuk, T., … Gu, Y. (2023). Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements. International Journal of Intelligent Systems, 2023(1), 4459198. https://doi.org/10.1155/2023/4459198
  3. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety (No. arXiv:1606.06565). arXiv. https://doi.org/10.48550/arXiv.1606.06565
  4. Argyris, C., & Schön, D. A. (1978). Organizational Learning: A Theory of Action Perspective. Addison-Wesley Publishing Company.
  5. Amazon Web Services. (2025). AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI [White paper]. Amazon Web Services.
  6. Barney, J. (1991). Firm Resources and Sustained Competitive Advantage. Journal of Management, 17(1), 99–120. https://doi.org/10.1177/014920639101700108
  7. Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. The MIT Press. https://mitpress.mit.edu/9780262048613/fairness-and-machine-learning/
  8. Baroni, I., Calegari, G. R., Scandolari, D., & Celino, I. (2022). AI-TAM: A model to investigate user acceptance and collaborative intention in human-in-the-loop AI applications. Human Computation, 9(1), Article 1. https://doi.org/10.15346/hc.v9i1.134
  9. Bharati, S., Mondal, M. R. H., & Podder, P. (2024). A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? IEEE Transactions on Artificial Intelligence, 5(4), 1429–1442. https://doi.org/10.1109/TAI.2023.3266418
  10. Boardman, A. E., Greenberg, D. H., Vining, A. R., & Weimer, D. L. (2018). Cost-Benefit Analysis: Concepts and Practice (5th ed.). Cambridge University Press. https://doi.org/10.1017/9781108235594
  11. Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316–334). Cambridge University Press. https://doi.org/10.1017/CBO9781139046855.020
  12. Christensen, C. M., McDonald, R., Altman, E. J., & Palmer, J. E. (2018). Disruptive Innovation: An Intellectual History and Directions for Future Research. Journal of Management Studies, 55(7), 1043–1078. https://doi.org/10.1111/joms.12349
  13. Davenport, T. H. (2018). The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. The MIT Press. https://direct.mit.edu/books/book/4154/The-AI-AdvantageHow-to-Put-the-Artificial
  14. Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319. https://doi.org/10.2307/249008
  15. Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 205395171986054. https://doi.org/10.1177/2053951719860542
  16. Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5
  17. Gabriel, I. (2020). Artificial Intelligence, Values, and Alignment. Minds and Machines, 30(3), 411–437. https://doi.org/10.1007/s11023-020-09539-2
  18. Google Cloud. (2020). AI Adoption Framework. Google. https://cloud.google.com/resources/cloud-ai-adoption-framework-whitepaper
  19. Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
  20. Haleem, A., Javaid, M., Asim Qadri, M., Pratap Singh, R., & Suman, R. (2022). Artificial intelligence (AI) applications for marketing: A literature-based study. International Journal of Intelligent Networks, 3, 119–132. https://doi.org/10.1016/j.ijin.2022.08.005
  21. Henderson, J. C., & Venkatraman, N. (1994). Strategic Alignment: A Model for Organizational Transformation via Information Technology. In T. J. Allen & M. S. S. Morton (Eds.), Information Technology and the Corporation of the 1990s: Research Studies (p. 0). Oxford University Press. https://doi.org/10.1093/oso/9780195068061.003.0009
  22. IBM. (2020, April 22). Scale the AI Ladder. https://www.ibm.com/analytics/journey-to-ai/embark
  23. Jyoti, R., & Findling, S. (2022). IDC MaturityScape: Artificial Intelligence 2.0. IDC. https://www.idc.com/getdoc.jsp?containerId=US49037422
  24. Kitchenham, B., & Charters, S. (2007). Guidelines for performing systematic literature reviews in software engineering. Technical report, EBSE Technical Report EBSE-2007-01. https://docs.edtechhub.org/lib/EDAG684W
  25. Laux, J., Wachter, S., & Mittelstadt, B. (2024). Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance, 18(1), 3–32. https://doi.org/10.1111/rego.12512
  26. Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097
  27. Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4), 3005–3054. https://doi.org/10.1007/s10462-022-10246-w
  28. Mukherjee, A. (2024). Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation (No. arXiv:2403.14706). arXiv. http://arxiv.org/abs/2403.14706
  29. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ (Clinical Research Ed.), 372, n71. https://doi.org/10.1136/bmj.n71
  30. Parasuraman, A. (2000). Technology Readiness Index (Tri): A Multiple-Item Scale to Measure Readiness to Embrace New Technologies. Journal of Service Research, 2(4), 307–320. https://doi.org/10.1177/109467050024001
  31. Rogers, E. M. (2003). Diffusion of Innovations, 5th Edition. Simon and Schuster.
  32. Routray, B. B. (2024). The Spectre of Generative AI Over Advertising, Marketing, and Branding. https://doi.org/10.22541/au.170534566.63147021/v1
  33. Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Mag., 36(4), 105–114. https://doi.org/10.1609/aimag.v36i4.2577
  34. Scott, W. R. (2001). Institutions and Organizations. SAGE Publications.
  35. Sgaier, S. K., Huang, V., & Charles, G. (2020). The Case for Causal AI (SSIR). Stanford Social Innovation Review. https://ssir.org/articles/entry/the_case_for_causal_ai
  36. Sharma, S. (2023). Trustworthy Artificial Intelligence: Design of AI Governance Framework. Strategic Analysis. https://www.tandfonline.com/doi/abs/10.1080/09700161.2023.2288994
  37. Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
  38. Microsoft. (2025, March 31). AI adoption—Cloud Adoption Framework. Microsoft Learn. https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/ai/
  39. Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (No. NIST AI 100-1; p. NIST AI 100-1). National Institute of Standards and Technology (U.S.). https://doi.org/10.6028/NIST.AI.100-1
  40. Teece, D. J. (2010). Business Models, Business Strategy and Innovation. Long Range Planning, 43(2–3), 172–194. https://doi.org/10.1016/j.lrp.2009.07.003
  41. Tornatzky, L. G. (with Internet Archive). (1990). The processes of technological innovation. Lexington, Mass. : Lexington Books. http://archive.org/details/processesoftechn0000torn
  42. Tursunbayeva, A., & Chalutz-Ben Gal, H. (2024). Adoption of artificial intelligence: A TOP framework-based checklist for digital leaders. Business Horizons, 67(4), 357–368. https://doi.org/10.1016/j.bushor.2024.04.006
  43. Van Buren, E., Chew, B., & William D., E. (2020). Six Areas for Assessing AI Readiness in Government | Deloitte Insights. https://www2.deloitte.com/us/en/insights/industry/public-sector/ai-readiness-in-government.html
  44. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Quarterly, 36(1), 157. https://doi.org/10.2307/41410412
  45. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View (SSRN Scholarly Paper No. 3375136). Social Science Research Network. https://papers.ssrn.com/abstract=3375136
  46. Vesnic-Alujevic, L., Nascimento, S., & Pólvora, A. (2020). Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks. Telecommunications Policy, 44(6), 101961. https://doi.org/10.1016/j.telpol.2020.101961
  47. Zhou, R. (2024). Risks of Discrimination Violence and Unlawful Actions in LLM-Driven Robots. Computer Life, 12(2), Article 2. https://doi.org/10.54097/taqbjh83