We are in the early stages of developing our new website. Please be patient with us! Sign up for updates



Help Us

Right AI Laws, to Right Our Future
Join the Global Grassroots United Front
#WeStakeOutAI #HaltDeepfakes


Frequently Asked Questions

If you have additional questions, please contact us here.

Clicking the question will automatically bring you to the answer. If you want to scroll back to the questions list, simply hit "back" on your browser.

How can AI be dangerous? Aren't these programs developed in isolation for specific tasks?

How can the right AI laws help right our future?

Why are we called StakeOut.AI?

What is AI superintelligence (or Artificial General Intelligence), and how can it pose dangers?

Why is there less emphasis on AI's benefits and more on its risks?

How can StakeOut.AI be considered a global grassroots movement when it is incorporated in the United States?

What does StakeOut.AI's "mission accomplished" look like?

How can AI be dangerous? Aren't these programs developed in isolation for specific tasks?

AI can be dangerous for several reasons, even though these programs are often developed in isolation for specific tasks. The potential dangers of AI include:

Misuse or Malicious Use
: AI can be used for harmful purposes, such as creating deepfakes, conducting cyber-attacks, or automating tasks in warfare.

Bias and Discrimination
: If AI algorithms are trained on biased data, they can perpetuate and amplify these biases, leading to unfair or discriminatory decisions, especially in critical areas like hiring, law enforcement, and loan approval.

Lack of Transparency and Explainability
: Many AI systems, particularly those based on deep learning, are often seen as "black boxes" with decisions that are difficult to understand or explain, leading to trust and accountability issues.

Unintended Consequences
: Even when developed for specific tasks, AI systems can produce unexpected results or be repurposed for different uses, potentially causing harm.

Dependency and Automation Bias
: Over-reliance on AI can lead to a reduction in human skills and judgment, known as automation bias, where humans defer decisions to AI even when it's inappropriate.

Security Vulnerabilities
: AI systems can be hacked or manipulated, leading to security breaches or incorrect decision-making.

Impact on Employment
: AI automation can lead to job displacement in various sectors, raising economic and social concerns.

The development of AI in isolated environments for specific tasks doesn't inherently safeguard against these risks. Read more about different AI risk categories here.

How can the right AI laws help right our future?

The right AI laws can help right our future because companies must comply with them to operate in certain countries. This principle is crucial to the effectiveness of regulations like the GDPR. Let's look at some examples to understand this better:

1) Meta's Fines Under GDPR: Meta (formerly Facebook) faced substantial fines for GDPR non-compliance. It was fined a record €1.2 billion over user data transfers to the United States, and approximately $275 million more for a data breach involving over 530 million users' personal data. These fines, imposed by the Irish Data Protection Commission, the lead EU regulator for Meta, covered infringements ranging from unlawful data transfers to failures of data protection by design and default.

2) WhatsApp and Instagram Penalties: Meta-owned WhatsApp was fined approximately $267 million for transparency breaches under GDPR, and Instagram received a €405 million penalty for children’s privacy violations. These fines indicate the broad scope of GDPR, covering various aspects of data handling and user privacy.

3) Historical Facebook Data Breaches: Meta was also fined around $18.6 million for a series of historical Facebook data breaches, showing that companies can be held accountable under the GDPR for past violations.

4) Ongoing Inquiries and Compliance Measures: Beyond fines, Meta is subjected to ongoing inquiries into various aspects of its business by EU regulators. These inquiries and the imposed compliance measures further ensure that companies adhere to the GDPR’s standards to maintain their operational status in the EU.

These examples illustrate how stringent regulations like the GDPR enforce compliance by leveraging market access. Companies seeking to operate in the EU must adhere to these regulations, making the GDPR a de facto global standard. This approach not only ensures data protection within the EU but also encourages global companies to adopt similar standards universally.

Therefore, if the right AI laws are enacted, significant fines will deter companies from violating them, since non-compliance would cut directly into their profits. Beyond financial penalties, companies also need to maintain a positive public image and investor confidence, further motivating them to comply. Together, these financial and reputational incentives push companies to adhere to the law, helping ensure that AI development is ethical and aligned with humanity.

Why are we called StakeOut.AI?

At the heart of our identity, "StakeOut.AI" embodies our fundamental purpose: to vigilantly monitor and guide the development of artificial intelligence. Just as a stakeout represents careful, continuous observation, our organization is dedicated to ensuring that AI progresses in a way that benefits humanity and remains under our responsible control.

The term "stakeout" reflects our proactive stance. We're not just observers; we're active participants in shaping AI's future. It's about taking a stand, marking our territory in the AI landscape, and asserting human values and ethics in a field that's rapidly evolving.

Our name is a call to action. It's an invitation for like-minded individuals, communities, and leaders to join us in this vital endeavor. Together, we have the power to influence policies, drive ethical AI development, and ensure that the future of AI aligns with the best interests of humanity.

So, why StakeOut.AI? Because we're here to make a difference, to ensure that the AI revolution unfolds under watchful, caring eyes – ours, and yours.

What is AI superintelligence (or Artificial General Intelligence), and how can it pose dangers?

Artificial General Intelligence (AGI) or AI superintelligence refers to AI that possesses the ability to understand, learn, and apply its intelligence to a wide range of problems, much like a human being. This level of AI goes beyond the specialized tasks for which most current AI systems are designed. The potential dangers of AGI or AI superintelligence include:

Existential Risk
: A superintelligent AI might develop goals misaligned with human values or interests. If its goals diverge significantly from human welfare, it could pose an existential threat to humanity.

Loss of Control
: As AI systems become more intelligent and autonomous, it might become difficult or impossible for humans to control or understand their actions fully.

Rapid and Unpredictable Changes
: A superintelligent AI could enact changes at a pace and scale that humans cannot predict or manage, leading to unforeseen consequences.

Ethical and Moral Dilemmas
: The decisions made by a superintelligent AI could raise complex ethical issues, especially if it surpasses human understanding in areas like morality, ethics, or value judgments.

Economic and Social Disruption
: An AGI could rapidly outperform humans in a wide range of jobs and activities, leading to significant economic and social disruptions.

Power Concentration
: If AGI is controlled by a select few, it could lead to unprecedented concentration of power and influence, raising concerns about inequality and oppression.

Weaponization
: AGI could be weaponized for military purposes, creating advanced and potentially uncontrollable warfare technologies.

Given these potential risks, the development of AGI or superintelligence requires careful consideration of ethical guidelines, robust control mechanisms, and broad societal dialogue to ensure that such technologies, if they come to exist, align with and benefit humanity as a whole.

Why is there less emphasis on AI's benefits and more on its risks?

'Stakeout,' as defined by Merriam-Webster, is "a surveillance maintained by the police of an area or a person suspected of criminal activity." At StakeOut.AI, we adapt this concept to the realm of artificial intelligence, closely monitoring the AI sector for potential ethical and societal risks. This vigilance is crucial for ensuring AI's responsible evolution.

StakeOut.AI recognizes the positive impacts of AI, but our focus is on presenting evidence of AI-driven harms and critically assessing risks arising from AI's rapid advancements. This approach is crucial to ensure AI development aligns with human values and safety, providing a balanced view in AI discussions.

Besides, do you need yet another source talking about how great AI is? Haven't you seen enough ads about this amazing feature in an AI tool, or yet another post about what AI can now accomplish? Rest assured, those who are making a profit with AI will let you know about the good it's doing, and we'll leave it to them to brag. We, on the other hand, are committed to discussing AI's risks and advocating for necessary laws to prevent harm.

How can StakeOut.AI be considered a global grassroots movement when it is incorporated in the United States?

StakeOut.AI's incorporation in the United States does not confine our reach or dilute our global mission. Our initial focus spans all English-speaking countries, not just the USA. The location of our incorporation is a logistical detail, one that doesn't limit our commitment to influencing AI laws and policies worldwide.

Our motto, 'right AI laws to right our future,' encapsulates our global ambition. We advocate for international laws and policies that transcend national boundaries because AI's impact is universal. It's not about one country or one region; it's about shaping a global future where AI is developed and utilized responsibly and ethically.

We believe the only way to achieve meaningful change is by uniting voices from around the world. Our movement calls for collective action, urging governments everywhere to recognize the urgency of the situation. It’s about rallying together to say, 'enough is enough.' We need the right regulations on AI now, not just to address immediate concerns, but to safeguard our future.

Choosing the United States as our base for incorporation is a strategic decision. It's important to note that three of the biggest AI companies are located in the United States. By establishing our organization in a country that is a hub for AI development, we position ourselves to take a stand in a region where changes in AI laws will have immediate and significant impact. This is not just about influencing American policy; it's about setting a precedent. When the U.S., a leader in AI technology, implements progressive AI laws, it sends a powerful message to the world. It sets an example for other countries to follow, potentially catalyzing a wave of global change.

Thus, while our legal incorporation is in the United States, our vision and activities are inherently international. We are a grassroots movement in essence, reaching out to individuals and communities across English-speaking countries and beyond, rallying them under a common cause. This global unity is our strength, and it's how we aim to influence AI policies and development on an international scale.

Your help is vital. Join the Global Grassroots United Front against AI risks today!

What does StakeOut.AI's "mission accomplished" look like?

Our charter is clear: to vigilantly monitor AI risks. Should a time come when these risks are completely eradicated from society, StakeOut.AI will have achieved its mission. In this envisioned future, where AI is safe and ethically aligned, our organization will proudly retire, having played a pivotal role in guiding AI towards a more responsible and secure trajectory.

Would you recommend StakeOut.AI to professionals in other industries or unions concerned about AI impacting their careers, and why?

"quick, clear updates & highlights shared"

Yes. There’s so much information out there that it’s hard to know where to start. I appreciate the quick, clear updates & highlights shared.

- Joanna Moznette

"answers and insights you provide are honest"

Yes. You are helping to demystify this stuff and not having a stake in the Hollywood side of things means to me that the answers and insights you provide are honest and true and not skewed to favoring one side or another.

- Rochelle Robinson

"team is deeply knowledgeable in several areas, rather than hyper-focusing on only one faction"

Without a doubt. The team is deeply knowledgeable in several areas, rather than hyper-focusing on only one faction. I understand risks and AI potential better.

- J Cuevas

"recommend StakeOut.AI due to its expertise"

Yes, I would certainly recommend StakeOut.AI due to its expertise and shared concerns.

- Rebecca Jensen Uesugi

"important to have people who are informed on AI"

Yes, because I think it’s important to have people who are informed on AI a part of your team.

- Samantha Nelson-Philipp

"need to know the immediate impact in order to prepare"

Yes. AI cuts workflows in half and phases out professions. People need to know the immediate impact in order to prepare.

- David Goodloe

"recommend StakeOut.AI to professionals concerned about AI"

Yes, I would recommend StakeOut.AI to professionals concerned about AI so they can gain more insight about what could happen in the future.

- Lateisha P.

"very compelling and revealing"

I would. Given the information that has been shared, it is very compelling and revealing. It could encourage others to be on the lookout and take action. I hope that this information becomes more widespread and will lead us to a future with more certainty over AI, its use, and its effects on our lives.

- Jacob Prado

"multidisciplinary approach to finding solutions"

Yes, I believe StakeOut.AI is doing well to take a multidisciplinary approach to finding solutions by involving actors, lawyers (Amy), and AI researchers (Dr. Park).

- David W.

Ready to Make a Positive Impact? Yes, Together We Can Create Change & Demand the Right AI Laws 

United as one, we can ensure AI development is safer for us, for our children, for our children's children, and for humanity as a whole.

Please contact us to suggest ideas, improvements, or corrections.

We do our best to provide useful information, but how you use the information is up to you. We don’t take responsibility for any loss that results from the use of information on the site. Please consult our full legal disclaimer and privacy policy. See also our cookie notice.

© 2023-2024 – all rights reserved. Please contact us if you wish to redistribute, translate, or adapt this work.