In the swiftly evolving landscape of generative AI, the need for independent evaluation and red teaming cannot be overstated. Such evaluations are pivotal for uncovering potential risks and ensuring these systems align with public safety and ethical standards. Yet the current approach of leading AI companies, which rely on restrictive terms of service and aggressive enforcement, significantly hampers this necessary research. The fear of account suspensions or legal repercussions looms large over researchers, creating a chilling effect that stifles good-faith safety evaluations. The limited scope and independence of company-sanctioned researcher access programs compound this problem. These programs often suffer from inadequate funding and limited community representation, and they remain subject to corporate interests, making them a poor substitute for truly independent research access. The crux of the issue lies in the existing barriers that disincentivize vital safety and trustworthiness evaluations, underscoring the need for a paradigm shift toward more open and inclusive…