Recent months have seen a wide range of AI governance proposals. FLI has analyzed the different proposals side-by-side, evaluating them in terms of the different measures required. The results can be found below. The comparison demonstrates key differences between proposals, but, just as importantly, the consensus around necessary safety requirements. The scorecard focuses particularly on concrete and enforceable requirements, because strong competitive pressures suggest that voluntary guidelines will be insufficient.
The United States Copyright Office is undertaking a study of the copyright law and policy issues raised by artificial intelligence ("AI") systems. To inform the Office's study and help assess whether legislative or regulatory steps in this area are warranted, the Office seeks comment on these issues, including those involved in the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, and the legal status of AI-generated outputs (works that would be considered copyrightable if created by a human author).
The Office of Management and Budget (OMB) is seeking public comment on a draft memorandum titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (AI). As proposed, the memorandum would establish new agency requirements in areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
The National Institute of Standards and Technology (NIST) is seeking information to assist in carrying out several of its responsibilities under the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023. Among other things, the E.O. directs NIST to undertake an initiative for evaluating and auditing capabilities relating to Artificial Intelligence (AI) technologies and to develop a variety of guidelines, including for conducting AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.
Assisted with the open letter calling for an international AI treaty to reduce catastrophic risks posed by AI, which requested a treaty from the UK AI Safety Summit and gathered hundreds of signatories, including leading experts such as Yoshua Bengio.
StakeOut.AI researched and disseminated information regarding former French digital minister Cédric O's potentially corrupt U-turn on the EU AI Act's foundation model requirements (the removal of which would have essentially killed the EU AI Act's upside).
Assisted with outreach for the forecasting report (by the world-leading forecasting group Samotsvety) and the treaty proposal (by Samotsvety forecaster Tolga Bilge).
United as one, we can ensure AI development is safer for us, for our children, for our children's children, and for humanity as a whole.
StakeOut.AI is a nonprofit based in the United States, with registered company number 93-4733126 and pending 501(c)(3) charity status.
Please contact us to suggest ideas, improvements, or corrections.
© 2023-2024 – all rights reserved. Please contact us if you wish to redistribute, translate, or adapt this work.