@Holly, thanks for sharing. Always happy to discuss these things.
@remmelt
Program coordinator of AI Safety Camp.
https://nl.linkedin.com/in/remmelt-ellen-19b88045
I helped launch the first AI Safety Camp and now coordinate the program with Linda Linsefors.
My technical research clarifies why the AGI control problem would be unsolvable: lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I support creatives and other communities in restricting harmful AI scaling:
https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from
Previously, I co-founded Effective Altruism Netherlands. Pre-2021 background here:
forum.effectivealtruism.org/posts/C2gfgJrvNF8NMXjkt/consider-paying-me-or-another-entrepreneur-to-create
Remmelt Ellen
10 months ago
@adamyedidia, thank you for the donation. We are getting there in terms of funding, thanks to people like you.
We can now run the next edition.
Remmelt Ellen
10 months ago
@Alex319, thank you for the thoughtful consideration, and for making the next AI Safety Camp happen!
As always, if you have any specific questions or things you want to follow up on, please let us know.
Remmelt Ellen
11 months ago
@zeshen, thank you for the contribution!
I also saw your comment on the AI Standards Lab being launched out of AISC.
I wasn't sure about the counterfactual, so that is good to know.
Remmelt Ellen
12 months ago
See @MarcusAbramovitch's notes in the [#grantmakerscorner](https://discord.com/channels/1111727151071371454/1145045940865081434/1185644569891721329) on Discord.
Remmelt Ellen
12 months ago
@IsaakFreeman, thank you. I appreciate you daring to be a first mover here.
Remmelt Ellen
over 1 year ago
@J-C, thank you too for the conversation.
If it's helpful, here are specific critiques of longtermist tech efforts I tweeted:
- Past projects: twitter.com/RemmeltE/status/1626590147373588489
- Past funding: twitter.com/RemmeltE/status/1675758869728088064
- Godlike AI message: twitter.com/RemmeltE/status/1653757450472898562
- Counterarguments: twitter.com/RemmeltE/status/1647206044928557056
- Gaps in community focus: twitter.com/RemmeltE/status/1623226789152841729
- On complexity mismatch: twitter.com/RemmeltE/status/1666433433164234752
- On fundamental control limits: twitter.com/RemmeltE/status/1665099258461036548
- On comprehensive safety premises: twitter.com/RemmeltE/status/1606552635716554752
I have also pushed back against Émile Torres and Timnit Gebru (researchers I otherwise respect):
- twitter.com/RemmeltE/status/1672943510947782657
- twitter.com/RemmeltE/status/1620596011117993984
^ I can imagine those tweets got lost (I appreciate the searches you did).
You are over-ascribing interpretations somewhat (e.g. "social cluster" is a term I use to describe conversational/collaborative connections in social networks), but I get that all you had to go on there was a few hundred characters.
~ ~ ~
I started in effective altruism movement-building in 2015, and I never imagined I would become this critical of the actions of the community I was building up.
I also reached my limit of trying to discuss specific concerns with EAs/rationalists/longtermists.
Having a hundred-plus conversations only to watch interlocutors continue business as usual does this to you.
Maybe this would change if I wrote a Katja-Grace-style post: talking positively from their perspective, asking open-ended questions so readers reflect on what they could explore further, finding ways to build on their existing directions of work so they feel empowered rather than averse to digging deeper, not stating any conclusions that conflict with their existing beliefs or sound too strong within the community's Overton window, etc.
Realistically though, people who made a career upskilling in and doing alignment work won't change their path easily, which is understandable. If the status quo for technically-minded researchers is to keep trying to invent new 'alignment solutions' with funding from (mostly) tech guys, then there is little point in clarifying why that would be a dead end.
Likewise, where AI risk people stick mostly to their own nerdy intellectual circles to come up with outreach projects to slow AI (because "we're the only ones who care about extinction risk"), there is little point in me trying to bridge between them and other communities' perspectives.
~ ~ ~
Manifund doesn't seem like a place to find collaborators beyond those circles, but I'm happy to change my mind:
I am looking for a funder who already relates to the increasing harms of AI scaling, and who wants to act effectively within society to restrict corporations from scaling further.
A funder who acknowledges critiques of longtermist tech efforts so far (as supporting companies to scale up larger AI models deployed for a greater variety of profitable ends), and who is looking to fund neglected niches beyond them.
Remmelt Ellen
over 1 year ago
Your selected quotes express my views well.
Note though that the “self-congratulatory vibes” point was in reference to the Misalignment Museum: https://twitter.com/RemmeltE/status/1635123487617724416
And I am skipping over the within-quote commentary ;)
Remmelt Ellen
over 1 year ago
* Note that I was talking about conflicts between the AI Safety community and communities like AI ethics, and those being harmed whom AI ethics researchers are advocating for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc.).
Remmelt Ellen
over 1 year ago
Thank you for sharing your concerns.
> How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride?
Suing companies is business as usual. Rather than focus on ideological differences, it focusses on concrete harms done and why those are against the law.
Note that I was talking about conflicts between the AI Safety community and communities like AI ethics, and those being harmed whom AI ethics researchers are advocating for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc.).
Some amount of conflict with AGI lab folks is inevitable. Our community's attempts to collaborate with the labs to research the fundamental control problems first and to carefully guide AI development to prevent an arms race did not work out. And not for lack of effort on our side! Frankly, their reckless behaviour now in reconfiguring the world on behalf of the rest of society needs to be called out.
> Are you claiming that your mindset and negotiation skills are more constructive?
As I mentioned, I’m not arguing here for introducing a bad cop. I’m arguing for starting lawsuits to get injunctions arranged for widespread harms done (data piracy, model misuses, toxic compute).
> What leverage did we have to start with?
The power imbalance was less lopsided. When the AGI companies were in their start-up phase, they relied a lot more on our support (funding, recruitment, intellectual backing) than they do now.
For example, public intellectuals like Nick Bostrom had more of an ability to influence narratives than they do now. The AGI labs have since ratcheted up their own marketing and lobbying, crowding out the debate.
> …few examples for illustration, but again, others can browse your Twitter:
Could you clarify why those examples are insulting for you?
I am pointing out flaws in how the AI Safety community has acted in aggregate, such as offering increasing funding to DeepMind, OpenAI and then Anthropic. I guess that’s uncomfortable to see in public now, and I’d have preferred that AI Safety researchers had taken this seriously when I expressed concerns in private years ago.
Similarly, I critiqued Hinton for having let his employer Google scale increasingly harmful models based on his own designs for years, and for still not offering much of a useful response, despite his influential position, on how to prevent these developments in his public speaking tours now. Scientists in tech have great power to impact the world, and therefore great responsibility to advocate for norms and regulation of their technologies.
Your selected quotes express my views well. I feel you selected them with care (i.e. no strawmanning, which I appreciate!).
> I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases."
Thank you for the consideration!
For | Date | Type | Amount (USD) |
---|---|---|---|
10th edition of AI Safety Camp | 8 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 8 months ago | project donation | +100 |
10th edition of AI Safety Camp | 9 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 9 months ago | project donation | +4000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +10 |
10th edition of AI Safety Camp | 10 months ago | project donation | +2000 |
Manifund Bank | 10 months ago | withdraw | 48257 |
10th edition of AI Safety Camp | 10 months ago | project donation | +5000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +10 |
10th edition of AI Safety Camp | 10 months ago | project donation | +2000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +75 |
10th edition of AI Safety Camp | 10 months ago | project donation | +5000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +100 |
10th edition of AI Safety Camp | 10 months ago | project donation | +1042 |
10th edition of AI Safety Camp | 10 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +3000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +200 |
10th edition of AI Safety Camp | 10 months ago | project donation | +20 |
10th edition of AI Safety Camp | 10 months ago | project donation | +50 |
10th edition of AI Safety Camp | 10 months ago | project donation | +500 |
10th edition of AI Safety Camp | 10 months ago | project donation | +15000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +10 |
10th edition of AI Safety Camp | 10 months ago | project donation | +250 |
10th edition of AI Safety Camp | 10 months ago | project donation | +15000 |