Help Us

Right AI Laws, to Right Our Future
Join the Global Grassroots United Front
#WeStakeOutAI #HaltDeepfakes

New to our Website? Start Here

Many people around the world have been asking questions like the ones below, as AI has infiltrated our lives faster than we ever imagined.

What about you? Have you pondered them?

Are you concerned by the lightning speed of AI advancements? Where is it leading? What does it mean for your family?

Are you aware of the surge in AI-enabled scams worldwide?

How vulnerable are you?

Have you considered how AI's rapid progress could impact your career, especially knowing it has already replaced countless jobs?

Have you heard more and more murmurs of robots taking over, something you previously thought existed only in science fiction movies?

If you've asked similar questions, you're in the right place.

StakeOut.AI is a nonprofit organization committed to engaging in conversations about AI, its current impact, and what humanity needs to actively stake out and be vigilant about.

Using AI (like ChatGPT and other AI tools) is like harnessing fire: controlled, AI is transformative; uncontrolled, AI causes devastation.

Despite the incredible benefits AI has brought to various industries, including nonprofit efforts that aid the most vulnerable people, the negative impacts and risks of AI are undeniable.

We believe in the potential of AI to enhance human flourishing, but this necessitates timely intervention by governments around the world before it's too late.

OUR SIMPLE MOTTO

Right AI Laws, to Right Our Future

OUR MISSION

StakeOut.AI fights to safeguard humanity from AI-driven risks.

We use evidence-based outreach to inform people of the threats that advanced AI poses to their economic livelihoods and personal safety.

Our mission is to create a united front for humanity, driving national and international coordination on robust solutions to AI-driven disempowerment.

"You are helping to demystify this stuff and not having a stake in the Hollywood side of things means to me that the answers and insights you provide are honest and true"

- Rochelle Robinson

"Yes, I would recommend StakeOut.AI to professionals concerned about AI so they can gain more insight about what could happen in the future."

- Lateisha P.

"very compelling and revealing. It could encourage others to be on the lookout and take action. I hope that this information becomes more widespread"

- Jacob Prado

A story based on true events

Consider the fate of the digital artists.

In 2022, AI art models were on the rise, but their aesthetic quality was not yet economically competitive.

  • Foresaw: Some artists foresaw the looming changes and adopted the necessary urgency.
  • Waited: But other artists were instead convinced to wait and see.
  • Fast forward one year: The aesthetic quality of AI art took a substantial leap.

Many Digital Artists Lost Their Incomes and Their Leverage Overnight

When one is blindsided by an imminent threat, opportunities to respond are likely to be missed.

As with preventing pandemics, it is important to act early: to act well before clear signs of the threat manifest.

What is the threat? An extremely small number of overconfident AI companies are

working to replace most human livelihoods, without the public's consent.

OpenAI, the creator of ChatGPT, continues to pursue its founding mission: building highly autonomous AI systems that outperform humans at most economically valuable work.

Also, consider the following quotes from Rich Sutton, the first-ever advisor of Google DeepMind:

  • displace us from existence: Rather quickly, they would displace us from existence...
  • bow out when we can no longer contribute: It behooves us to give them every advantage, and to bow out when we can no longer contribute…
  • should not resist it: ...I don't think we should fear succession. I think we should not resist it. We should embrace it and prepare for it.
  • greater beings, greater AIs: Why would we want greater beings, greater AIs, more intelligent beings kept subservient to us?

When AI industry leaders say things like this...

They are not shunned or stopped...
In fact, they get more job opportunities.

After Sutton gave his talk "AI Succession" at the World AI Conference in Shanghai, he was invited into a partnership with Keen Technologies to

build autonomous AI systems that outperform humans at most capabilities.

These AI companies continue to create AI systems of ever-increasing capabilities

Despite the fact that they STILL DO NOT KNOW how to control AI

Even today's AI systems are accelerating scams, propaganda, and radicalization.

Leading AI experts from academia and industry believe that:

mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

But Big Tech is still training larger and larger AI models on workers' data without their consent

in order to profit from the automation of these same workers' careers, for example:

  • artists: https://www.theguardian.com/technology/2023/mar/18/chatgpt-said-i-did-not-exist-how-artists-and-writers-are-fighting-back-against-ai
  • journalists: https://gizmodo.com/chatgpt-ai-buzzfeed-news-journalism-existential-threat-1849869364
  • lawyers: https://www.nytimes.com/2023/04/10/technology/ai-is-coming-for-lawyers-again.html
  • actors: https://www.washingtonpost.com/technology/2023/07/19/ai-actors-fear-sag-strike-hollywood/
  • writers: https://edition.cnn.com/2023/05/04/tech/writers-strike-ai/index.html

... and eventually, everyone.

Who Are We?

 StakeOut.AI is an initiative to raise public awareness of the threats posed by advanced AI to people’s economic livelihoods and personal safety.

Here is an interesting thought experiment...

Hover Car Hazard: Eco-Friendly Tech Marred by Swerving Causing Injuries and Deaths

If a company invents hovering vehicles that use renewable energy (a positive aspect), emit no exhaust into the environment (another positive), and require less maintenance without the worry of flat tires (yet another positive), but the vehicles are known to swerve out of control occasionally:

Do you think these vehicles would be allowed to be sold to consumers? Even if they are sold, if these safety issues become known, wouldn't the vehicles likely be required to be recalled and fixed, especially considering both the personal safety risks and the economic consequences of the injuries and deaths?

SEEMS LOGICAL that those hovering vehicles would be recalled and fixed, right? So...

Why is AI Any Different? Experts and Even the Inventors Themselves Have Said They Don't Know How to Control AI

Despite all the good that AI is doing in different industries, and even in nonprofit applications helping the most vulnerable people in the world...

If AI companies don't know how to control AI (for example, GPT-4), and current AI systems have already caused so much harm to humanity, why isn't more money being spent to develop safety measures to ensure GPT-4's safety before advancing to GPT-5, GPT-6, and beyond?

In fact, what's been happening is that...

They Have Been Spending Billions on AI Advancement, but Only Millions on AI Safety

  • billion is 10^9: 1,000,000,000
  • million is 10^6: 1,000,000
  • they are spending 100x-1000x more on advancing AI: The total amount "spent to advance AI capabilities is between $1 billion and $340 billion per year. Even assuming a figure as low as $1 billion, this would still be around 100 times the amount spent on reducing risks from AI" (link); see the quick calculation below.
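
For a quick sense of the gap, here is a minimal worked comparison using only the figures already quoted above (it is an order-of-magnitude illustration, not an exact accounting of either budget):

\[
\frac{\$1\,\text{billion}}{\$1\,\text{million}} = \frac{10^{9}}{10^{6}} = 10^{3} = 1000
\]

In other words, one billion dollars is 1,000 times one million dollars, so spending billions per year on AI advancement against millions per year on AI safety implies a gap on the order of 100x to 1,000x, consistent with the "around 100 times" estimate quoted above.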

The General Public, the World's Citizens, Are the Ones Who Will Be Affected Most by the Billions Spent on AI Advancement (and the Neglect of AI Safety)

THIS IS WHY WE EXIST

Convey the evidence that disempowering AI poses serious dangers

We carry out evidence-based outreach through partnerships (e.g. professional associations) and mass media channels (e.g. Internet) to convey the evidence that disempowering AI poses serious dangers (to economic livelihoods and personal safety) for people of every industry, for people of every country, and for humanity as a whole.

AI safety global rallying point

StakeOut.AI's goal is to become the global AI safety rallying point for both the general public and AI-threatened professionals.

Our Simple Motto: Right AI Laws, to Right Our Future

Collaborate towards robust solutions to the threats

By facilitating national and global coordination, we hope to pressure governments, private enterprises, and other pertinent organizations to collaborate towards robust solutions to the threats that advanced AI poses to people’s jobs and safety.

Tailored advising and advocacy

We provide tailored advising for organizations and groups of people who are or will be threatened by advanced AI.

We are also spearheading the Global Grassroots United Front against AI risks, advocating for safer AI.

Ready to Make a Positive Impact? Yes, Together We Can Create Change & Demand the Right AI Laws 

United as one, we can ensure AI development is safer for us, for our children, for our children's children, and for humanity as a whole.

Please contact us to suggest ideas, improvements, or corrections.

We do our best to provide useful information, but how you use the information is up to you. We don’t take responsibility for any loss that results from the use of information on the site. Please consult our full legal disclaimer and privacy policy. See also our cookie notice.

© 2023-2024 – all rights reserved. Please contact us if you wish to redistribute, translate, or adapt this work.
