2024-03-10 One-Minute Post
Microsoft is blocking prompt terms that caused its Copilot Designer AI to generate violent and sexual images. A new Cornell study finds that large language models show language bias, being more likely to recommend Black defendants be "sentenced to death". Sandra Wachter, a leading AI ethics expert at Oxford, is working on ways to audit AI systems to combat disinformation and promote fairness. Governance frameworks are being developed to tackle ethical AI challenges and protect privacy, a concern echoed by the former Prasar Bharati CEO, who stresses the importance of data privacy and ethical safeguards amid the growing influence of AI and deepfake technology. Meanwhile, AI campaign tools are drawing scrutiny for racial bias, and AI-generated content continues to raise privacy concerns.
Articles we found interesting:
1. How governance frameworks can tackle ethical AI challenges — Ifeoluwa Oladele link Highlight: The use of such technology without proper ethical considerations and safeguards has led to wrongful arrests and invasions of privacy, raising …
2. Former Prasar Bharati CEO addresses trust challenges in the era of AI and Deepfake technology link Highlight: He emphasised the critical significance of data privacy and ethical considerations in the face of the increasing influence of AI & deepfake tech.
3. We asked AI 'what is there to do in Ocala' link Highlight: Everything you just read after "20 years of age" was written by an AI.
4. Microsoft Blocks Terms That Caused Its AI to Create Inappropriate Images link Highlight: Microsoft is now blocking terms that caused its AI tool, Microsoft Copilot Designer, to create violent and sexual images. … bias, underage drinking, …
5. Jason Palmer credits AI for his surprise win over Joe Biden in American Samoa - Mashable link Highlight: Having tried Palmer's AI campaign tool ourselves, this claim is eyebrow-raising to say the least. SEE ALSO: AI shows clear racial bias when used for …
6. AI models found to show language bias by recommending Black defendants be 'sentenced to death' link Highlight: Large language models (LLMs) are more likely to criminalise users that use African American English, the results of a new Cornell University study …
7. Sandra Wachter: Leading Female AI Expert in Data Ethics at Oxford - Global Village Space link Highlight: … AI systems to combat disinformation and promote fairness. According to Wachter, her interest in AI stems from a belief in the potential of …
8. Women in AI: Sandra Wachter, professor of data ethics at Oxford | TechCrunch link Highlight: She also looked at ways to audit AI to tackle disinformation and promote fairness. Q&A. Briefly, how did you get your start in AI? What attracted …
Updated every day by: (Supriti Vijay & Aman Priyanshu)