Ofcom Publishes Paper on Mitigating Deepfake Harms

Ofcom has published a discussion paper on the impacts and mitigation of deepfakes: AI-generated or manipulated audio-visual content that misrepresents people or events. Deepfakes have harmed public figures and ordinary people alike, most often through non-consensual sexual content, financial scams, and political disinformation. Ofcom’s recent poll found that nearly half of respondents, in both the 8-15 and 16+ age groups, had encountered a deepfake in the past six months. Under the Online Safety Act, online services must address harmful content, including deepfakes, through measures such as watermarking, detection classifiers, and media literacy efforts.

FCC Proposes New AI Disclosure Rules for Political Ads

The FCC has introduced a proposal to standardize the disclosure of AI-generated content in political advertisements. The Notice of Proposed Rulemaking aims to bring uniformity amid a patchwork of state laws regulating AI use in political campaigns. It would require broadcasters, cable operators, Direct Broadcast Satellite (DBS) providers, and Satellite Digital Audio Radio Service (SDARS) licensees to disclose AI-generated content in political ads through on-air announcements and notices in their political files. The initiative seeks to enhance transparency and help voters make informed decisions, amid concerns about AI-generated “deepfakes” misleading the public.

U.S. Department of State Releases Risk Management Profile for AI and Human Rights

The U.S. Department of State has published the “Risk Management Profile for Artificial Intelligence and Human Rights” to guide organizations in AI design, development, and deployment with a focus on human rights. This Profile aims to bridge the gap between AI risk management and human rights considerations by integrating principles from the U.S. National Institute of Standards and Technology’s AI Risk Management Framework. It underscores the importance of international human rights in AI governance and provides actionable recommendations for organizations to follow throughout the AI lifecycle to ensure AI systems are used responsibly and ethically.

Irish DPC to Investigate X’s Grok AI Training on User Data Without Consent

The Irish Data Protection Commission (DPC) is examining X’s practice of sharing user data with Elon Musk’s AI startup, xAI, without explicit consent. X introduced a setting that opts users into data sharing by default, without prior notice; users can change it only on the desktop version, with a mobile app option in development. The DPC had been questioning X for months and expressed surprise at the sudden rollout, which may escalate into a formal GDPR investigation and possible fines. The situation mirrors Meta’s recent GDPR-related halt on similar AI data usage in Europe.

Regulatory Mapping on AI in Latin America

Access Now has published the “Regulatory Mapping on Artificial Intelligence in Latin America,” a comprehensive report outlining AI governance across the region. This report, developed with TrustLaw’s pro bono legal network and supported by the Patrick J. McGovern Foundation, provides an in-depth analysis of AI definitions, soft law instruments, national strategies, and draft legislation in countries like Argentina, Brazil, and Mexico. It emphasizes human rights, transparency, and the need for region-specific AI policies, aiming to guide public policymakers towards effective AI regulation while promoting technical development and ethical standards.
