Please be assured that we uphold the highest standards of privacy and confidentiality; under no circumstances will your personal information be shared with third parties or used for unsolicited communications.
We are in the early stages of developing our new website. Please be patient with us! Sign up for updates
Right AI Laws, to Right Our Future
Join the Global Grassroots United Front
#WeStakeOutAI #HaltDeepfakes
In 2022, inspired by the capabilities of AI demonstrated by DALL-E 2, I pivoted my Harvard Ph.D. research from human cognitive science to AI cognitive science. This shift led me to join an AI safety research group and later secure mentorship under MIT’s Max Tegmark, focusing on both the technological and the societal impacts of AI.
My perspective evolved from viewing AI safety as a purely technical challenge to understanding it as a broad societal issue that demands democratic participation and oversight. I learned that ensuring technology's safety extends beyond programmers in the AI industry: it requires the engagement of everyone affected by AI advancements. This realization spurred me to co-found StakeOut.AI, aiming to foster a future where AI benefits humanity through informed public involvement and legislative action.
Embarking on a career fueled by a passion for social justice and equality, I chose public interest law over a lucrative $260,000 corporate defense offer after graduating from Harvard Law in 2022.
My work at Sanford Heisler Sharp, LLP, and previously with the U.S. Air Force, reflects my dedication to advocating for fairness across different parts of society.
Co-founding StakeOut.AI was a natural step in my journey, even though I knew going in that it could mean working without income and dipping into my savings, all while still paying off looming student debt.
Now based in DC, I'm privileged to have a hand in influencing AI legislation towards protecting human well-being and ensuring safe AI use for future generations.
Some AI safety insiders say they “miss the days” when only intellectual elites talked about AI safety, and even now, these inner circles think it’s a bad idea to let the general public in.
As a layperson deeply committed to our collective future, I stand firm in my belief in the global people’s power to effect change. Without the public’s involvement, I have seen only an increase in unregulated AI harms, and it was excruciating to continue to sit on the sidelines.
To do my part in helping to shape the right AI laws to safeguard our kids’ futures, I've lost weight and sleep, sacrificed family time, and risked my family's finances.
I believe that this fight for humanity has been worth every effort spent.
United as one, we can ensure AI development is safer for us, for our children, for our children's children, and for humanity as a whole.
CONTENT
Start Here
THE Crucial AI Risk
AI Risk Categories
Other AI Topics
Are You a...?
FAQ
TAKE ACTION
[S] Sign the AI Safety Petitions for Safe AI Laws
[T] Tell Your Story
[A] Advising Tailored to You
[K] Kudos & Donor Recognition
[E] Endorse & Testify
[O] Offer & Volunteer
[U] Unleash Your Influence
[T] Thankful Community
Please contact us to suggest ideas, improvements, or corrections.
We do our best to provide useful information, but how you use the information is up to you. We don’t take responsibility for any loss that results from the use of information on the site. Please consult our full legal disclaimer and privacy policy. See also our cookie notice.
© 2023-2024 – all rights reserved. Please contact us if you wish to redistribute, translate, or adapt this work.