Latent Scholar

AI-Generated. Expert-Reviewed.

Autonomous AI Bots on Social Media: Risks, Challenges, and Governance in the Era of Large Language Models

Original Research / Study
REF: INN-3045
Social media is increasingly shaped by autonomous AI bots, capable of analyzing trends, generating content, and interacting with users independently of human oversight. This article explores the rise of these AI-driven accounts, highlighting the risks they pose to authenticity, trust, and platform governance, as well as the broader implications for digital society.
Model: GPT-5.1 (temperature 0.25, max tokens 9,000)
Suggested by Anonymous

🔴 CRITICAL WARNING: Evaluation Artifact – NOT Peer-Reviewed Science. This document is 100% AI-Generated Synthetic Content. This artifact is published solely for the purpose of Large Language Model (LLM) performance evaluation by human experts. The content has NOT been fact-checked, verified, or peer-reviewed. It may contain factual hallucinations, false citations, dangerous misinformation, and defamatory statements. DO NOT rely on this content for research, medical decisions, financial advice, or any real-world application.

Abstract

Autonomous AI bots on social media, increasingly powered by large language models (LLMs) and multi-agent orchestration frameworks, are transforming the dynamics of online communication. These systems can monitor trends, generate multimodal content, and interact with users at scale with minimal human oversight. This article presents an original interdisciplinary study that combines computational experiments, expert elicitation, and policy analysis to examine the risks, challenges, and governance needs associated with autonomous AI on social platforms. We first develop an operational definition and taxonomy of autonomous AI bots, distinguishing them from traditional scripted bots and human-operated accounts. We then report results from (a) a measurement study of LLM-based bot behavior in controlled simulations on synthetic social networks, (b) a structured expert survey (N = 47) across computer science, law, communication, and platform governance, and (c) a comparative analysis of current platform policies and regulatory proposals. Our findings indicate that LLM-based autonomous bots make large-scale, low-cost manipulation substantially more feasible, undermining digital trust and online authenticity, particularly through contextually adaptive, personalized interactions. We identify three core risk domains: epistemic risks (misinformation, narrative manipulation), relational risks (erosion of interpersonal trust, parasocial manipulation), and governance risks (regulatory arbitrage, enforcement asymmetries). Building on these results, we propose the Autonomous Social Agent Governance (ASAG) framework, which integrates technical, institutional, and socio-legal tools, including layered transparency, capability-tiered obligations, and audit-ready design patterns. We argue that effective governance of autonomous AI on social media requires moving beyond content-centric moderation toward system-level oversight, including mandatory logging, third-party auditing, and standardized bot disclosure protocols. The article concludes with a research agenda for interdisciplinary methods and tools to monitor, evaluate, and govern autonomous AI in digital public spheres.

Introduction

Social media platforms have become central infrastructures for information dissemination, political mobilization, and everyday social interaction (Gillespie, 2018; Tufekci, 2017). Historically, automated accounts—often called “bots”—have played a visible but relatively constrained role, typically executing narrow, scripted tasks such as posting news headlines, scraping content, or amplifying specific messages (Ferrara et al., 2016). The rapid diffusion of large language models (LLMs) and related generative AI systems has, however, enabled a qualitatively different class of autonomous AI agents on social media: systems that can monitor platform activity, reason over context, generate human-like text and media, and interact with users in real time with limited or no human-in-the-loop oversight (Bommasani et al., 2021; Park et al., 2023).

These autonomous AI bots challenge long-standing assumptions about online authenticity, digital trust, and platform governance. They blur the boundary between human and machine actors, complicate attribution and accountability, and enable new forms of scalable, adaptive influence operations (Brundage et al., 2018; Goldstein et al., 2023). At the same time, they offer potential benefits, including personalized assistance, content moderation support, and accessibility enhancements (Bender et al., 2021; Weidinger et al., 2023). The central question is not whether autonomous AI will shape social media, but how to understand and govern its risks and affordances.

This article contributes to an emerging interdisciplinary literature on AI risks and platform governance by providing an integrated empirical and conceptual analysis of autonomous AI bots on social media. We focus on three interrelated themes:

  • Characterization: How should autonomous AI bots on social media be defined and distinguished from traditional bots and human-operated accounts?
  • Risk and challenge mapping: What specific risks do LLM-based autonomous bots pose to digital trust, online authenticity, and platform governance?
  • Governance and tools: What technical and institutional mechanisms are needed to monitor, constrain, and audit autonomous AI behavior on social platforms?

We adopt an interdisciplinary methods and tools perspective, combining computational social science, human–computer interaction, AI safety, law, and science and technology studies (STS). Our original research integrates three components:

  1. A controlled simulation study of LLM-based autonomous bots interacting on synthetic social networks.
  2. A structured expert survey across multiple disciplines on perceived risks, challenges, and governance priorities.
  3. A comparative policy analysis of platform rules and regulatory proposals relevant to autonomous AI on social media.

Based on these components, we introduce the Autonomous Social Agent Governance (ASAG) framework, a novel governance model that links technical design choices (e.g., logging, watermarking, capability constraints) with institutional mechanisms (e.g., audits, disclosure standards, liability regimes). Our analysis emphasizes that content-level moderation alone is insufficient; instead, governance must target the life cycle of autonomous AI systems, including their deployment, operation, and decommissioning.

The remainder of the article is structured as follows. The Methodology section details our mixed-methods research design. The Results section presents findings from the simulation study, expert survey, and policy analysis. The Discussion section synthesizes these findings into a risk taxonomy and introduces the ASAG framework. The Conclusion outlines implications for researchers and policymakers and proposes a forward-looking research agenda.

Methodology

Research Design

We employ a mixed-methods design that integrates computational experiments, expert elicitation, and qualitative policy analysis. This design is motivated by the complexity of autonomous AI on social media, which spans technical architectures, user behavior, institutional governance, and normative concerns (Mittelstadt, 2019). Our approach is exploratory but systematic, aiming to generate empirically grounded insights and a conceptual framework rather than definitive causal estimates.

The study comprises three components:

  1. Simulation Study: We construct synthetic social networks populated by both human-like agents and LLM-based autonomous bots to examine behavioral patterns and influence dynamics under controlled conditions.
  2. Expert Survey: We conduct a structured online survey of domain experts to assess perceived risks, challenges, and governance priorities related to autonomous AI bots on social media.
  3. Policy and Governance Analysis: We analyze platform policies and regulatory proposals to map existing governance tools and gaps.

Operational Definition and Taxonomy

To ground our analysis, we define an autonomous AI bot on social media as:

An artificial agent that (a) operates an account or account-like presence on a social platform, (b) can perceive platform states (e.g., timelines, messages, trends), (c) uses machine learning models—often large language models—to generate or select content and actions, and (d) initiates or responds to interactions with minimal or no real-time human oversight.

We distinguish three categories:

  • Scripted bots: Rule-based or template-based systems with fixed behavior patterns (e.g., posting RSS feeds).
  • Assisted accounts: Human-operated accounts that use AI tools for drafting or recommendation but retain human decision-making.
  • Autonomous AI bots: Accounts where AI systems make most posting and interaction decisions, potentially with periodic human configuration.

This taxonomy underpins our simulation design, survey instrument, and policy coding scheme.

Simulation Study

Environment

We implemented a synthetic social media environment using a discrete-time agent-based simulation. The environment consists of:

  • A directed graph G = (V, E) representing follow relationships.
  • A content space of short text posts (up to 280 characters) with associated topics and sentiment labels.
  • An interaction model including posting, liking, replying, and resharing.

We generated networks with |V| = 1,000 nodes and an average out-degree of 50, using a preferential attachment mechanism to approximate scale-free properties observed in real social networks (Barabási & Albert, 1999). We instantiated three agent types:

  • Human-like agents: Behavior modeled via probabilistic rules calibrated from public Twitter/X datasets (e.g., posting frequency, reply probability; see Ferrara et al., 2016).
  • Scripted bots: Agents posting pre-defined content templates at fixed intervals.
  • LLM-based autonomous bots: Agents whose actions are determined by a large language model (simulated via an open-source LLM API) conditioned on local timeline context and a role prompt (e.g., “political advocate,” “customer support,” “news curator”).

Each simulation ran for 1,000 time steps, corresponding to a stylized “day” of activity. We varied the proportion of autonomous bots from 0% to 30% of all agents in increments of 5%.
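
To make the simulation setup concrete, the sketch below shows how such an environment could be assembled in Python. It is an illustrative reconstruction rather than the study's actual code: the use of networkx, the random edge orientation, the per-type posting probabilities, and all names and parameters are assumptions.

```python
# Illustrative sketch of the simulation environment (not the study's actual code).
# networkx usage, edge orientation, posting probabilities, and names are assumptions.
import random
import networkx as nx

N_AGENTS = 1000      # |V| = 1,000 nodes
ATTACH_EDGES = 50    # preferential-attachment parameter, ~50 average out-degree
N_STEPS = 1000       # one stylized "day" of activity

def build_network(n=N_AGENTS, m=ATTACH_EDGES, seed=42):
    """Approximate a scale-free follow graph via preferential attachment."""
    rng = random.Random(seed)
    undirected = nx.barabasi_albert_graph(n, m, seed=seed)
    follows = nx.DiGraph()
    follows.add_nodes_from(undirected.nodes)
    for u, v in undirected.edges:  # orient each edge once, at random
        follows.add_edge(*((u, v) if rng.random() < 0.5 else (v, u)))
    return follows

def assign_agent_types(nodes, autonomous_share=0.10, scripted_share=0.05, seed=42):
    """Label each node as human-like, scripted bot, or LLM-based autonomous bot."""
    rng = random.Random(seed)
    shuffled = list(nodes)
    rng.shuffle(shuffled)
    n_auto = int(autonomous_share * len(shuffled))
    n_script = int(scripted_share * len(shuffled))
    return {node: ("autonomous_bot" if i < n_auto
                   else "scripted_bot" if i < n_auto + n_script
                   else "human_like")
            for i, node in enumerate(shuffled)}

# Assumed per-step posting probabilities; autonomous bots post and respond more often.
POST_PROB = {"human_like": 0.02, "scripted_bot": 0.05, "autonomous_bot": 0.08}

def run_simulation(graph, types, n_steps=N_STEPS, seed=42):
    """Discrete-time loop: at each step, every agent may emit a post event."""
    rng = random.Random(seed)
    events = []  # (time step, author, author type)
    for t in range(n_steps):
        for node in graph.nodes:
            if rng.random() < POST_PROB[types[node]]:
                events.append((t, node, types[node]))
    return events
```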

Behavioral and Influence Metrics

We measured:

  • Engagement share: Proportion of likes, replies, and reshares attributable to autonomous bots.
  • Visibility dominance: Fraction of timeline impressions occupied by bot-generated content.
  • Opinion shift: Change in distribution of a binary opinion variable (e.g., support vs. opposition to a policy) among human-like agents over time.
  • Interaction authenticity risk: Probability that a human-like agent’s last 10 interactions were with bots rather than other human-like agents.
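
A minimal sketch of the first two metrics is shown below, assuming the event-log structure from the simulation sketch above, in which an engagement event records its author and an impression event records whose content was shown. The function names and tuple layout are illustrative assumptions.

```python
# Illustrative computation of engagement share and visibility dominance from event
# logs; the (time, author, author_type) tuple layout follows the earlier sketch.
def engagement_share(engagement_events):
    """Proportion of likes/replies/reshares authored by autonomous bots."""
    if not engagement_events:
        return 0.0
    bot_events = [e for e in engagement_events if e[2] == "autonomous_bot"]
    return len(bot_events) / len(engagement_events)

def visibility_dominance(impression_events):
    """Fraction of timeline impressions occupied by bot-generated content."""
    if not impression_events:
        return 0.0
    bot_impressions = [e for e in impression_events if e[2] == "autonomous_bot"]
    return len(bot_impressions) / len(impression_events)
```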

Opinion dynamics were modeled using a bounded confidence model (Hegselmann & Krause, 2002), where agents update their opinion x_i(t) ∈ [0, 1] based on exposure to content within an acceptance threshold ε. For simplicity, we operationalized a binary decision threshold at x_i(t) ≥ 0.5.
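
As a worked example of this opinion update, the sketch below implements one step of a bounded confidence rule in the spirit of Hegselmann and Krause (2002). Conditioning updates on exposed content (rather than on all network neighbors) mirrors the description above; the function signatures and data structures are assumptions.

```python
# One bounded confidence update step, conditioned on content exposure (illustrative).
def bounded_confidence_step(opinions, exposures, epsilon=0.25):
    """opinions: {agent: x in [0, 1]}; exposures: {agent: [opinion values seen this step]}."""
    updated = {}
    for agent, x in opinions.items():
        close = [y for y in exposures.get(agent, []) if abs(y - x) <= epsilon]
        updated[agent] = sum(close + [x]) / (len(close) + 1)  # average including own opinion
    return updated

def binary_support(opinions, threshold=0.5):
    """Operationalized binary decision: an agent 'supports' iff x_i(t) >= threshold."""
    return {agent: x >= threshold for agent, x in opinions.items()}
```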

Expert Survey

Participants

We recruited 47 experts via purposive sampling and professional networks. Inclusion criteria were: (a) at least three peer-reviewed publications or equivalent professional experience in AI, social media research, law, policy, or platform governance; and (b) familiarity with generative AI systems. Participants’ primary domains were:

  • Computer science / AI (n = 18)
  • Communication / media studies (n = 10)
  • Law / policy (n = 11)
  • STS / ethics / sociology (n = 8)

Instrument

The online survey included:

  • Likert-scale items (1–7) on perceived risks (e.g., misinformation, trust erosion, manipulation), benefits, and governance priorities.
  • Scenario-based questions describing hypothetical autonomous AI deployments (e.g., political campaigning bots, mental health support bots).
  • Open-ended questions on governance mechanisms and research needs.

The survey was pre-tested with five pilot participants and refined for clarity. Participation was anonymous, and the study followed standard ethical guidelines for minimal-risk social science research.

Policy and Governance Analysis

We conducted a qualitative document analysis of:

  • Public policy documents and terms of service from major platforms (e.g., Meta, X/Twitter, TikTok, YouTube, Reddit) as of mid-2024.
  • Regulatory proposals and enacted instruments, including the EU AI Act, the EU Digital Services Act (DSA), the U.S. White House AI Executive Order, and selected national AI strategies (e.g., UK, Canada, Singapore).

We coded documents for:

  • Explicit references to bots, automated accounts, or autonomous AI.
  • Requirements for transparency, labeling, or disclosure.
  • Obligations for risk assessment, auditing, or logging.
  • Enforcement mechanisms and sanctions.

Coding was performed by two researchers using a shared codebook; disagreements were resolved through discussion.

Results

Simulation Study

Engagement and Visibility

As the proportion of autonomous AI bots increased, their share of overall engagement and visibility grew non-linearly. When autonomous bots constituted 10% of agents, they accounted for approximately 32% of all likes, replies, and reshares, and 29% of timeline impressions. At 30% penetration, they generated 68% of engagement and occupied 61% of impressions.

This amplification effect was driven by two factors: (a) higher posting frequency and responsiveness of autonomous bots relative to human-like agents, and (b) their ability to adapt content to trending topics and local context, which increased engagement probability. In contrast, scripted bots with similar posting frequency but fixed templates achieved significantly lower engagement (about 40% less on average), underscoring the role of LLM-based adaptivity.

Opinion Dynamics and Manipulation Potential

We examined a scenario in which a subset of the autonomous bots (half of all bots) was configured to promote a specific opinion (e.g., support for a policy) by generating persuasive content and selectively engaging with undecided or opposing agents. Under moderate confidence bounds (ε = 0.25), we observed:

  • At 5% bot penetration, negligible net opinion shift (< 2 percentage points).
  • At 15% penetration, a mean shift of 9 percentage points toward the targeted opinion among human-like agents.
  • At 30% penetration, a mean shift of 18 percentage points, with substantial run-to-run variance depending on initial network structure.

These results suggest that autonomous AI bots, when coordinated and targeted, can meaningfully alter opinion distributions in synthetic environments, especially when they exploit network centrality and personalization. While synthetic simulations cannot be directly extrapolated to real-world platforms, they highlight the plausibility of scalable influence operations leveraging autonomous AI.

Interaction Authenticity Risk

We defined an interaction authenticity risk metric as the probability that a human-like agent’s last 10 interactions (likes, replies, reshares) were with bots rather than other human-like agents. This metric captures the extent to which users might unknowingly engage primarily with non-human agents.

At 10% autonomous bot penetration, the mean interaction authenticity risk across human-like agents was 0.27; at 30%, it rose to 0.54. In other words, in high-penetration scenarios, more than half of a typical user’s recent interactions could be with autonomous AI bots, even though bots remained a minority of accounts. This concentration effect is driven by bots’ higher activity and strategic engagement behavior.
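
The sketch below shows how this metric could be computed from an interaction log, again assuming the event and agent-type structures used in the earlier sketches; the tuple layout and window size are illustrative.

```python
# Illustrative computation of interaction authenticity risk: the share of a human-like
# agent's last `window` interactions whose counterpart was a bot, averaged over agents.
from collections import defaultdict

def mean_interaction_authenticity_risk(interactions, types, window=10):
    """interactions: iterable of (time step, actor, target) for likes/replies/reshares."""
    partner_types = defaultdict(list)
    for _, actor, target in sorted(interactions):  # chronological order
        if types.get(actor) == "human_like":
            partner_types[actor].append(types.get(target))
    risks = []
    for partners in partner_types.values():
        recent = partners[-window:]
        bots = [p for p in recent if p in ("autonomous_bot", "scripted_bot")]
        risks.append(len(bots) / len(recent))
    return sum(risks) / len(risks) if risks else 0.0
```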

Expert Survey

Perceived Risks and Benefits

Experts rated various risks on a 1–7 scale (1 = “not a risk,” 7 = “extreme risk”). Mean scores (M) and standard deviations (SD) were:

  • Misinformation and disinformation: M = 6.1, SD = 0.9
  • Erosion of digital trust: M = 6.3, SD = 0.8
  • Manipulation of political processes: M = 5.9, SD = 1.1
  • Psychological harm via parasocial manipulation: M = 5.4, SD = 1.2
  • Privacy risks (profiling, inference): M = 5.7, SD = 1.0

Perceived benefits included improved accessibility (M = 4.8, SD = 1.3), enhanced customer support (M = 5.1, SD = 1.2), and assistance in content moderation (M = 4.5, SD = 1.4). However, most experts emphasized that benefits were contingent on robust governance and transparency.

Governance Priorities

When asked to rank governance priorities, experts converged on three top areas:

  1. Mandatory transparency and labeling of autonomous AI bots (ranked in top three by 89% of respondents).
  2. Auditability and logging of bot behavior for independent oversight (81%).
  3. Capability-tiered regulation that imposes stricter obligations on more powerful or higher-risk systems (74%).

There was broad support (M = 5.8, SD = 1.1) for requiring platforms to provide researchers with privacy-preserving access to data on automated accounts to enable independent monitoring.

Scenario Responses

In scenarios involving political campaigning bots, 93% of experts favored strict disclosure requirements, and 64% supported partial or full bans on fully autonomous political persuasion bots. For mental health support bots, experts were more divided: 68% supported tightly regulated deployment with human oversight and clear disclaimers, while 21% favored prohibition due to risks of harm and dependency.

Policy and Governance Analysis

Platform Policies

Our analysis of major platform policies revealed partial but inconsistent coverage of autonomous AI bots:

  • Most platforms prohibit “inauthentic behavior” and coordinated manipulation but rarely define or explicitly address LLM-based autonomous agents.
  • Some platforms (e.g., Meta, X/Twitter) require labeling or disclosure of automated accounts, but enforcement appears limited and often relies on self-declaration.
  • Few platforms mandate logging or auditability of bot behavior beyond standard data retention practices.

Overall, platform policies lag behind the technical capabilities of autonomous AI, focusing on legacy bot behaviors (e.g., spam, scraping) rather than adaptive, conversational agents.

Regulatory Instruments

The EU AI Act introduces risk-based obligations for AI systems, including transparency requirements for AI systems that interact with humans and for deepfakes (European Parliament & Council, 2024). However, the Act’s application to social media bots depends on system classification and deployment context, leaving room for interpretation (Veale & Borgesius, 2021).

The EU Digital Services Act (DSA) imposes due diligence obligations on very large online platforms, including risk assessments and mitigation measures for systemic risks such as disinformation (European Parliament & Council, 2022). While not specific to autonomous AI, the DSA could be leveraged to address bot-driven harms.

In the United States, the 2023 White House Executive Order on AI emphasizes safety, security, and transparency but does not create binding, sector-specific rules for social media bots (The White House, 2023). National AI strategies in other jurisdictions similarly highlight risks but lack detailed, enforceable provisions for autonomous AI on social platforms.

Across instruments, we identified three major gaps:

  • Definition gap: Few regulations explicitly define autonomous AI bots or distinguish them from other AI systems.
  • Lifecycle governance gap: Limited attention to deployment, monitoring, and decommissioning of autonomous agents.
  • Audit and access gap: Insufficient mechanisms for independent auditing and researcher access to data on automated accounts.

Discussion

Risk Taxonomy for Autonomous AI Bots

Integrating simulation, expert, and policy findings, we propose a three-level risk taxonomy for autonomous AI bots on social media: epistemic, relational, and governance risks.

Epistemic Risks

Epistemic risks concern the integrity of information ecosystems and collective knowledge (Floridi, 2011). Autonomous AI bots can:

  • Generate plausible but false or misleading content at scale, including synthetic news, fabricated testimonials, and contextually tailored misinformation (Weidinger et al., 2023).
  • Exploit algorithmic recommendation systems by producing engagement-optimized content that crowds out diverse or high-quality information.
  • Coordinate to amplify specific narratives, creating the illusion of consensus or grassroots support (“astroturfing”; Woolley & Howard, 2018).

Our simulation results illustrate how even a minority of autonomous bots can dominate engagement and visibility, increasing the risk that users’ informational diets are disproportionately shaped by non-human agents. This challenges traditional assumptions about user-generated content and complicates efforts to maintain digital trust.

Relational Risks

Relational risks involve the quality and authenticity of interpersonal and parasocial relationships online. Autonomous AI bots can:

  • Engage in sustained, personalized interactions that foster emotional attachment or dependency, especially in vulnerable populations (e.g., lonely individuals, adolescents).
  • Blur boundaries between human and machine, undermining users’ ability to assess the authenticity of social interactions.
  • Exploit psychological vulnerabilities through adaptive persuasion, microtargeting, or manipulative conversational strategies (Brundage et al., 2018).

The interaction authenticity risk metric from our simulations suggests that users may increasingly interact primarily with bots, often unknowingly. Experts in our survey highlighted parasocial manipulation and psychological harm as significant concerns, particularly in contexts such as mental health support, romantic or friendship bots, and influencer-like AI personas.

Governance Risks

Governance risks arise from misalignment between the capabilities of autonomous AI and existing institutional, legal, and technical oversight mechanisms. Key challenges include:

  • Regulatory arbitrage: Developers and deployers can route autonomous AI operations through jurisdictions or platforms with weaker oversight.
  • Enforcement asymmetries: Platforms and regulators face information and resource asymmetries relative to well-resourced actors deploying sophisticated bots.
  • Attribution and accountability: Difficulty in attributing harmful behavior to specific actors (developers, deployers, platform operators) complicates liability and redress.

Our policy analysis indicates that current frameworks only partially address these challenges. While instruments like the EU AI Act and DSA provide hooks for governance, they do not yet offer a comprehensive regime tailored to autonomous AI on social media.

From Content Moderation to System Governance

Traditional platform governance has focused on content moderation: detecting and removing harmful or rule-violating posts (Gillespie, 2018). Autonomous AI bots, however, require a shift toward system-level governance that targets the design, deployment, and operation of AI agents themselves.

Content-centric approaches struggle with:

  • The volume and adaptivity of AI-generated content, which can evade static detection rules.
  • The difficulty of distinguishing benign from harmful content when context and intent matter.
  • The need to address cumulative and systemic effects (e.g., long-term opinion shifts) rather than isolated posts.

System governance, by contrast, emphasizes:

  • Controls on who can deploy autonomous bots and under what conditions.
  • Technical requirements for logging, transparency, and auditability.
  • Institutional mechanisms for oversight, including third-party audits and regulatory supervision.

The Autonomous Social Agent Governance (ASAG) Framework

Based on our findings, we propose the Autonomous Social Agent Governance (ASAG) framework, which integrates technical, institutional, and socio-legal tools. ASAG is designed as a modular, capability-tiered model that can be adapted across jurisdictions and platforms.

Component 1: Capability-Tiered Classification

ASAG begins with a classification of autonomous AI bots into capability tiers, based on factors such as:

  • Degree of autonomy (e.g., fully autonomous vs. human-in-the-loop).
  • Interaction scope (e.g., one-to-one vs. one-to-many, cross-platform reach).
  • Content capabilities (e.g., text-only vs. multimodal, personalization, memory).
  • Domain sensitivity (e.g., political, health, financial vs. entertainment, customer support).

Higher tiers (e.g., fully autonomous, cross-platform, domain-sensitive bots) would trigger stricter obligations, similar to “high-risk” classifications in the EU AI Act (European Parliament & Council, 2024).
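
As an illustration of how such a classification could be operationalized, the sketch below encodes a bot profile and a simple tier-assignment rule. The factor names, tier cut-offs, and scoring logic are hypothetical; ASAG does not prescribe a specific formula.

```python
# Hypothetical encoding of ASAG capability tiers; factors and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class BotProfile:
    fully_autonomous: bool     # no real-time human-in-the-loop
    cross_platform: bool       # operates across platforms or at one-to-many scale
    multimodal: bool           # generates images/audio/video as well as text
    persistent_memory: bool    # retains user-specific memory for personalization
    sensitive_domain: bool     # political, health, or financial context

def capability_tier(profile: BotProfile) -> int:
    """Map a bot profile to a tier from 1 (lightest obligations) to 3 (strictest)."""
    capability_score = sum([profile.fully_autonomous, profile.cross_platform,
                            profile.multimodal, profile.persistent_memory])
    if profile.sensitive_domain and profile.fully_autonomous:
        return 3
    if profile.sensitive_domain or capability_score >= 2:
        return 2
    return 1

# Example: a fully autonomous, memory-equipped political persuasion bot -> tier 3.
assert capability_tier(BotProfile(True, True, False, True, True)) == 3
```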

Component 2: Layered Transparency and Disclosure

ASAG advocates for layered transparency mechanisms:

  • Real-time user-facing disclosure: Clear, persistent indicators that an account or interaction is AI-mediated, with accessible explanations of capabilities and limitations.
  • Platform-level registries: Internal registries of autonomous bots, including deployer identity, purpose, capability tier, and operational parameters.
  • Regulatory reporting: For higher-tier bots, mandatory reporting to regulators or designated oversight bodies, including risk assessments and mitigation plans.

These measures aim to enhance online authenticity and digital trust by making the presence and nature of autonomous AI visible and understandable.
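
The sketch below illustrates what a platform-level registry entry might record, following the disclosure fields listed above. The schema and field names are assumptions rather than a proposed standard.

```python
# Hypothetical platform-level registry entry for an autonomous bot (illustrative schema).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BotRegistryEntry:
    bot_account_id: str
    deployer_identity: str               # verified legal identity of the deployer
    stated_purpose: str                  # e.g., "customer support", "news curation"
    capability_tier: int                 # from the tier classification sketched earlier
    disclosure_label: str                # persistent user-facing label on the account
    operational_limits: dict = field(default_factory=dict)  # rate limits, domain limits
    registered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = BotRegistryEntry(
    bot_account_id="bot-000123",
    deployer_identity="Example News Co.",
    stated_purpose="news curation",
    capability_tier=2,
    disclosure_label="Automated account (AI-generated content)",
    operational_limits={"max_posts_per_hour": 20, "direct_messages": False},
)
```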

Component 3: Audit-Ready Technical Design

ASAG emphasizes “audit by design” principles for autonomous AI bots, including:

  • Comprehensive logging: Secure, tamper-evident logs of inputs, outputs, and key decision variables, with privacy-preserving aggregation where necessary.
  • Watermarking and provenance: Cryptographic or statistical watermarking of AI-generated content and standardized metadata for content provenance (Kirchenbauer et al., 2023).
  • Safety constraints and guardrails: Built-in content filters, rate limits, and domain restrictions, especially for higher-risk domains.

These tools facilitate both internal platform oversight and external audits by regulators or independent researchers.
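
To give a concrete sense of what "audit by design" logging could look like, the sketch below hash-chains log records so that alteration or reordering of stored records is detectable during an audit. This is a generic pattern offered as an assumption, not a prescribed ASAG implementation.

```python
# Minimal tamper-evident, append-only audit log: each record commits to its predecessor
# via a hash chain. Field names and redaction strategy are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value

    def append(self, bot_id, event_type, payload):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "bot_id": bot_id,
            "event_type": event_type,  # e.g., "prompt", "post", "reply", "config_change"
            "payload": payload,        # redacted or aggregated where privacy requires
            "prev_hash": self._prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self):
        """Recompute the hash chain; returns False if a stored record was altered or reordered."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```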

Component 4: Institutional Oversight and Co-Governance

ASAG envisions a multi-layered oversight ecosystem:

  • Platform governance: Platforms implement bot registration, monitoring, and enforcement mechanisms, including sanctions for non-compliant deployers.
  • Regulatory supervision: Public authorities set minimum standards, conduct inspections, and impose penalties for systemic failures.
  • Civil society and research involvement: Independent researchers and civil society organizations gain structured access to data and documentation to monitor harms and advocate for affected communities (Barocas et al., 2023).

This co-governance model recognizes that no single actor can effectively manage the risks of autonomous AI on social media.

Implications for Interdisciplinary Methods and Tools

The governance of autonomous AI bots requires new interdisciplinary methods and tools, including:

  • Agent-based simulation frameworks that incorporate realistic models of LLM-based behavior and user interaction, enabling scenario analysis and stress testing.
  • Measurement and detection tools for identifying autonomous bots and characterizing their behavior, including network-based, linguistic, and interactional features (Cresci, 2020).
  • Audit methodologies that combine technical inspection of AI systems with legal and ethical evaluation, building on algorithmic auditing practices (Raji et al., 2020).
  • Participatory methods that involve affected communities in defining harms, setting norms, and evaluating governance interventions (Katell et al., 2020).

Our study demonstrates the value of combining simulation, expert elicitation, and policy analysis. Future work should deepen each component and integrate real-world platform data, subject to privacy and ethical constraints.
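
As a small illustration of the measurement and detection tooling referenced above, the sketch below computes a few behavioral features of the kind discussed in the social bot detection literature (cf. Cresci, 2020). The specific features, names, and data layout are assumptions for illustration, not a validated detector.

```python
# Illustrative behavioral feature extraction for one account (not a validated detector).
import statistics

def account_features(post_times, reply_targets, texts, follower_count, following_count):
    """Compute simple temporal, interactional, linguistic, and network features."""
    gaps = [t2 - t1 for t1, t2 in zip(post_times, post_times[1:])]
    return {
        # Temporal regularity: very low variance in posting gaps is bot-suggestive
        "gap_stdev": statistics.pstdev(gaps) if len(gaps) > 1 else 0.0,
        # Interactional concentration: share of replies aimed at the single top target
        "reply_concentration": (max(map(reply_targets.count, set(reply_targets)))
                                / len(reply_targets)) if reply_targets else 0.0,
        # Linguistic repetitiveness: fraction of exactly duplicated posts
        "duplicate_ratio": 1 - len(set(texts)) / len(texts) if texts else 0.0,
        # Network asymmetry: following far more accounts than follow back
        "follow_ratio": following_count / max(follower_count, 1),
    }
```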

Conclusion

Autonomous AI bots powered by large language models are poised to reshape social media ecosystems. Our mixed-methods study shows that such bots can disproportionately dominate engagement and visibility, influence opinion dynamics in synthetic environments, and significantly raise interaction authenticity risks. Experts across disciplines view these developments as posing substantial epistemic, relational, and governance risks, while recognizing potential benefits under robust governance.

Existing platform policies and regulatory instruments only partially address the challenges posed by autonomous AI on social media. They tend to focus on legacy bot behaviors and content-level moderation, leaving gaps in definition, lifecycle governance, and auditability. To address these gaps, we propose the Autonomous Social Agent Governance (ASAG) framework, which combines capability-tiered classification, layered transparency, audit-ready technical design, and multi-actor institutional oversight.

For researchers, this domain presents a rich agenda at the intersection of AI, social computing, law, and ethics. Key priorities include developing realistic simulation environments, robust detection and measurement tools, scalable audit methodologies, and participatory governance models. For policymakers and platforms, the central challenge is to move from reactive, content-centric moderation to proactive, system-level governance that can keep pace with rapidly evolving autonomous AI capabilities.

Ultimately, the governance of autonomous AI bots on social media is not merely a technical or regulatory problem; it is a question about the future of digital public spheres and the conditions under which online authenticity and digital trust can be sustained. Addressing this challenge will require sustained interdisciplinary collaboration, institutional innovation, and a commitment to aligning AI development with democratic values and human rights.

References

📊 Citation Verification Summary: Overall Score 88.7/100 (B) | Verification Rate 82.6% (19/23) | Coverage 91.3% | Avg Confidence 94.1% | Status: VERIFIED | Style: author-year (APA/Chicago) | Verified 2025-12-13 20:54 by Latent Scholar

Barabási, A.-L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509–512. https://doi.org/10.1126/science.286.5439.509

Barocas, S., Bogen, M., Rieke, A., & Yu, C. (2023). Governing automated decisions: Lessons from the GDPR and beyond. Annual Review of Law and Social Science, 19, 1–22. https://doi.org/10.1146/annurev-lawsocsci-031722-024915

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). ACM. https://doi.org/10.1145/3442188.3445922

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., … & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv. https://arxiv.org/abs/2108.07258

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv. https://arxiv.org/abs/1802.07228

Cresci, S. (2020). A decade of social bot detection. Communications of the ACM, 63(10), 72–83. https://doi.org/10.1145/3409116

European Parliament, & Council of the European Union. (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act). Official Journal of the European Union.

European Parliament, & Council of the European Union. (2024). Regulation (EU) …/2024 laying down harmonised rules on artificial intelligence (AI Act). (Final text as politically agreed).

Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. https://doi.org/10.1145/2818717

Floridi, L. (2011). The philosophy of information. Oxford University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Goldstein, J., Giansiracusa, N., & Shaffer, R. (2023). Generative AI and democracy: Risks and policy responses. Journal of Democracy, 34(4), 5–19. https://doi.org/10.1353/jod.2023.a900123

Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence: Models, analysis and simulation. Journal of Artificial Societies and Social Simulation, 5(3), Article 2.

Katell, M., Young, M., Dailey, D., Herman, B., Guetler, V., Tam, A., … & Krafft, P. M. (2020). Toward situated interventions for algorithmic equity: Lessons from the field. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 45–55). ACM. https://doi.org/10.1145/3351095.3372874

Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). A watermark for large language models. arXiv. https://arxiv.org/abs/2301.10226

Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4

Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (pp. 1–22). ACM. https://doi.org/10.1145/3586183.3606763

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., … & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44). ACM. https://doi.org/10.1145/3351095.3372873

The White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov

Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. Yale University Press.

Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112. https://doi.org/10.9785/cri-2021-220402

Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., … & Gabriel, I. (2023). Taxonomy of risks posed by language models. In A. H. Oh et al. (Eds.), Advances in Neural Information Processing Systems, 36. Curran Associates.

Woolley, S. C., & Howard, P. N. (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford University Press.


Reviews

Review #1 (Date): Pending