We are in the early stages of developing our new website. Please be patient with us! Sign up for updates



Help Us

Right AI Laws, to Right Our Future
Join the Global Grassroots United Front
#WeStakeOutAI #HaltDeepfakes

Our Contributions


Right AI Laws, to Right Our Future

While we are spearheading the Global Grassroots United Front to influence and accelerate international regulations and policies, we are also contributing to the right AI laws through:

  • Publications & Submissions: Scroll down to learn more
  • Open Letters & Petitions: We've publicly supported & done behind-the-scenes work to get open letters signed by hundreds of signatories, including leading experts like Yoshua Bengio
  • Pro Bono Advising: Tailored support and webinars for partner organizations on how to apply our research

"extremely helpful in the later stages of the TAISC.org & AITreaty.org projects"

Peter S. Park is the director of StakeOut.AI and an AI Existential Safety Postdoctoral Fellow at MIT. Peter has a good overview of the AI-political landscape, and was extremely helpful in the later stages of the TAISC.org & AITreaty.org projects.

- Tolga Bilge

Publications & Submissions

Future of Life Institute

Recent months have seen a wide range of AI governance proposals. FLI has analyzed the different proposals side-by-side, evaluating them in terms of the different measures required. The results can be found below. The comparison demonstrates key differences between proposals, but, just as importantly, the consensus around necessary safety requirements. The scorecard focuses particularly on concrete and enforceable requirements, because strong competitive pressures suggest that voluntary guidelines will be insufficient.

U.S. Copyright Office

The United States Copyright Office is undertaking a study of the copyright law and policy issues raised by artificial intelligence ("AI") systems. To inform the Office's study and help assess whether legislative or regulatory steps in this area are warranted, the Office seeks comment on these issues, including those involved in the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, and the legal status of AI-generated outputs (works that would be copyrightable if created by a human author).

Office of Management and Budget

The Office of Management and Budget (OMB) is seeking public comment on a draft memorandum titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (AI). As proposed, the memorandum would establish new agency requirements in areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.

National Institute of Standards and Technology

The National Institute of Standards and Technology (NIST) is seeking information to assist in carrying out several of its responsibilities under the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023. Among other things, the E.O. directs NIST to undertake an initiative for evaluating and auditing capabilities relating to Artificial Intelligence (AI) technologies and to develop a variety of guidelines, including for conducting AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.

Activism: Treaty Proposal, Open Letters & Petitions

StakeOut.AI's Collaboration with Other AI Safety Advocates

Want to Help Too? Sign Our "Global Grassroots United Front Against AI Risks" Online Petitions Today!

The biggest difference between our Global Grassroots United Front online petition and other petitions is that we aim to mobilize a large-scale grassroots movement against dangerous AI with the help of the international public.

History has shown that when people band together, we can effect change.

Our goal is ambitious, and we need your help. We want to reach over 1,150,000 signatures, a number we believe governments would no longer be able to ignore.

AI Treaty Open Letter


Assisted with getting the open letter calling for an international AI treaty to reduce catastrophic risks posed by AI signed by hundreds of signatories, including leading experts like Yoshua Bengio. The letter requested a treaty from the UK AI Safety Summit.

EU AI Act Activism

In collaboration with Max Tegmark

StakeOut.AI researched and disseminated information regarding former French digital minister Cédric O's potentially corrupt U-turn on the EU AI Act's foundation model requirements (the removal of which would have essentially killed the EU AI Act's upside).

Treaty on Artificial Intelligence Safety and Cooperation


Assisted with outreach for the forecasting report (by world-leading forecasting group Samotsvety) and treaty proposal (by Samotsvety forecaster Tolga Bilge).

Support Regulating Foundation Models with a Tiered Approach in the EU AI Act


The EU AI Act needs Foundation Model Regulation


"Dr Park has been immensely helpful to talk to about AI, alignment, x-risk and more"

"Dr Park has been immensely helpful to talk to about AI, alignment, x-risk and more. There are many times when he has gone the extra mile and given insights on topics where I barely know how to get started. For example, in the Critique-a-Thon, for which Dr. Park has consistently been a fantastic Judge, he was the one to devise a system of rating critiques out of 10, which I hadn't thought of before! I'm incredibly grateful to have met him."

- Kabir Kumar (AI-Plans.com)

Ready to Make a Positive Impact? Yes, Together We Can Create Change & Demand the Right AI Laws

United as one, we can ensure AI development is safer for us, for our children, for our children's children, and for humanity as a whole.

Please contact us to suggest ideas, improvements, or corrections.

We do our best to provide useful information, but how you use the information is up to you. We don’t take responsibility for any loss that results from the use of information on the site. Please consult our full legal disclaimer and privacy policy. See also our cookie notice.

© 2023-2024 – all rights reserved. Please contact us if you wish to redistribute, translate, or adapt this work.