
Researchers use artificial intelligence to unlock extreme weather mysteries
A new machine learning approach helps scientists understand why extreme precipitation days in the Midwest are becoming more frequent.

AI may diagnose dementia in a day
Currently it can take several scans and tests to diagnose dementia, delaying treatment and causing a range of problems for patients.

Other interesting reads

Can you trust artificial intelligence?
When does a "nudge" become manipulation in an AI-based system? How can companies build trust in AI?

NeurIPS 2021 Workshop: Tackling Climate Change with Machine Learning
Call for short papers using machine learning to address problems in climate mitigation, adaptation, and modeling.

Combatting Anti-Blackness in the AI Community
Review of invisible barriers, social discrepancies, and recruitment policies that contribute to discrimination in the AI community.

AI datasets are prone to mismanagement, study finds
New research identifies problematic image collections that continue to be used in research despite being taken offline.

AI Ethics

Twitter's photo-cropping algorithm preferred young, beautiful, and light-skinned faces

What: In March 2021, Twitter phased out its automated system for cropping images in image preview boxes. The previous system worked by presenting the most "visually interesting" area, but the concern was that the algorithm demonstrated bias across gender and race in doing so.

Key Takeaway: To the delight and praise of many, Twitter recently ran a public competition with a cash prize to test this hypothesis, and the results do indeed demonstrate a number of different biases. For example, the algorithm showed a preference for light-skinned faces over dark-skinned faces. To resolve this bias for the future, the long-term solution is likely to involve more human involvement in image processing.

How computer vision works - and why it's plagued by bias
The release of ImageNet was a watershed moment in computer vision but is now fingered as a culprit in problems of bias. Will we find ways to limit the risks while maximising the benefits of this technology?
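The "most visually interesting area" cropping described above can be sketched as a window search over a saliency map. This is a hypothetical illustration of the general technique, not Twitter's actual implementation: `crop_by_saliency` and its inputs are invented for this sketch. The bias concern enters through the saliency scores themselves: if the underlying model scores some faces as more "interesting" than others, the crop systematically centres on them.

```python
# Toy sketch of saliency-based auto-cropping (hypothetical, not
# Twitter's code): slide a fixed-size window over a saliency map
# and keep the window with the highest total saliency score.

def crop_by_saliency(image, saliency, crop_h, crop_w):
    """image, saliency: 2-D lists of equal shape; returns the crop of
    `image` whose window has the highest summed saliency."""
    rows, cols = len(saliency), len(saliency[0])
    best_pos, best_score = (0, 0), float("-inf")
    for y in range(rows - crop_h + 1):
        for x in range(cols - crop_w + 1):
            score = sum(
                saliency[y + i][x + j]
                for i in range(crop_h)
                for j in range(crop_w)
            )
            if score > best_score:
                best_score, best_pos = score, (y, x)
    y, x = best_pos
    return [row[x:x + crop_w] for row in image[y:y + crop_h]]
```

Whatever region the saliency model favours, that region survives the crop, which is why auditing the scoring model (as the competition entrants did) exposes the bias directly.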
The system now accepts commands in plain English and outputs live, working code. For example, a games developer may say "make the boulder fall from the sky", and Codex will then drop a boulder from the top of the screen without any prior instruction as to what "the sky" is.

Key Takeaway: Some commentators are heralding a new era of low-code interaction with computers, whereas OpenAI itself has identified a range of hazards associated with the technology. These hazards span safety, security, and economic factors, including producing code misaligned with user intent.
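The interaction pattern described above (a plain-English command in, runnable code out) can be sketched with a toy stand-in for the model. This is a hypothetical illustration, not OpenAI's API: `generate_code` and its hard-coded prompt table are invented here, and a real system would call a large language model instead.

```python
# Toy illustration of the Codex-style workflow: plain English in,
# code out. The "model" is faked with a lookup table purely to show
# the shape of the interaction; all names here are hypothetical.

PROMPT_TABLE = {
    "make the boulder fall from the sky": (
        "boulder_y = 0            # start at the top of the screen\n"
        "boulder_y += gravity     # physics pulls it down each frame"
    ),
}

def generate_code(command: str) -> str:
    """Return (fake) generated code for a plain-English command."""
    return PROMPT_TABLE.get(command, "# model output would appear here")

if __name__ == "__main__":
    print(generate_code("make the boulder fall from the sky"))
```

Note that even in this toy form, the hazard OpenAI flags is visible: nothing forces the returned code to match what the user actually meant by "the sky".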
OpenAI upgrades its natural language AI coder Codex and kicks off private beta

What: OpenAI has launched a significant upgrade to its AI-powered coding assistant, Codex.

What: Interview with a 93-year-old woman, Juanita, who has used a commercial, AI-powered robot companion called ElliQ for about two years. ElliQ is advertised as a "sidekick for happier ageing" and is aimed at older adults without dementia.

Key Takeaways: Juanita refers to the robot as female, always thanks her and says "goodnight". But ultimately she describes her as an "added ornament to my life". Critics question the value of interactions with "no mutuality, no real shared experience". They are concerned that such a companion may reduce human interactions. What sort of future do we want for our elderly?

Make sure to subscribe to our site as well: Graham Lane & Marcel Hedman

Welcome to the 30th edition of the AI and Global Grand Challenges newsletter, where we explore how AI is tackling the largest problems facing the world.

The aim: To inspire AI action by builders, regulators, leaders, researchers and those interested in the field.
