You'd rather be surfing waves than social media.

We could spend hours describing what makes Murmel so great. We could tell you about its powerful aggregation and indexing algorithms. Or, we could speak at length about our mission to fight disinformation and fake news.

We value your time. That's why we decided to give you a sneak peek instead.

Every Murmel user gets a private home page with results tailored to their tastes and interests. Results will look something like the ones below.
When a link becomes popular among the people you follow on Twitter, it rises to the top. We also weigh in how recently the content was shared, so you only see the most relevant information.
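The ranking idea above, popularity among the people you follow, discounted by age, could be sketched roughly like this. Everything here (the function name, the half-life, the weights) is invented for illustration; Murmel's actual algorithm is not public.

```python
import math
import time

HALF_LIFE_HOURS = 24.0  # assumed decay half-life, purely illustrative

def score(share_count: int, shared_at: float, now: float) -> float:
    """Rank a link by follower popularity, discounted by its age."""
    age_hours = (now - shared_at) / 3600.0
    recency = 0.5 ** (age_hours / HALF_LIFE_HOURS)  # exponential time decay
    popularity = math.log1p(share_count)            # diminishing returns on shares
    return popularity * recency

now = time.time()
fresh = score(share_count=24, shared_at=now - 3600, now=now)       # shared 1 hour ago
stale = score(share_count=24, shared_at=now - 48 * 3600, now=now)  # shared 2 days ago
assert fresh > stale  # same popularity, but the fresher link ranks higher
```

The log term keeps a link shared by thousands from permanently drowning out everything else, while the exponential decay guarantees that even a very popular link eventually falls off the top.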
Not bad, eh? Have a look at the feed below.

Twitter's Hottest Reads
The latest thought-provoking stories from across the Twitterverse.

Russia's War Against Evangelicals

time.com · Apr 20

Putin's Russia has led an at times brutal campaign against evangelicals inside Russia and in the occupied parts of Ukraine.

Shared by @Npars01 and 23 others.
Darwin Woodka (@darwinwoodka) · Apr 20
🔁 @anneapplebaum:

Russia's war on evangelical Christians, stunning reporting from Peter Pomerantsev
"By hurting those who practice an “American” religion," he writes, "the Kremlin can claim it is striking against American power—while picking on the powerless."

time.com/6969273/russias-war-a

Nicole Parsons (@Npars01) · Apr 21
🔁 @cdarwin:

Russia's War on Evangelicals

"When did you become a Baptist? When did you become an American spy?”

Azat tried to explain that in Ukraine there was freedom of religion, you could just choose your faith.

But his torturers saw the world the same way as their predecessors at the KGB did:
an American church is just a front for the American state.

Azat was dragged back to the makeshift cell in the occupied city of Berdiansk, in southern Ukraine,
where he was held with six others in a cellar that had a bucket for a toilet and hard mattresses on the floor.

time.com/6969273/russias-war-a

Scientists push new paradigm of animal consciousness, saying even insects may be sentient

nbcnews.com · Apr 19

Far more animals than previously thought likely have consciousness, top scientists say in a new declaration — including fish, lobsters and octopus.

Shared by @alicia_izquierdo and 65 others.
Extra_Special_Carbon (@Extra_Special_Carbon) · Apr 20
🔁 @baldur:

“Scientists push new paradigm of animal consciousness”

This is what I discovered while researching my “AI risks” book: we’ve systematically UNDERestimated the intelligence and consciousness of animals while at the same OVERestimating the intelligence of machines and software nbcnews.com/science/rcna148213

Anna Anthro (@AnnaAnthro) · Apr 20
🔁 @Wolven:

Your periodic reminder that we don't know what consciousness is, and every time we make a test or category for it, we end up having to include many kinds of minds and lives that make a LOT of people Very uncomfortable; and we also end up Excluding kinds of Humans, a fact which SHOULD make More of us more uncomfortable than it does.
nbcnews.com/science/science-ne

Sarah (@LilithElina) · Apr 20
🔁 @cdarwin:

A surprising range of creatures have shown evidence of conscious thought or experience, including insects, fish and some crustaceans
Bees play by rolling wooden balls — apparently for fun.
The cleaner wrasse fish appears to recognize its own visage in an underwater mirror.
Octopuses seem to react to anesthetic drugs and will avoid settings where they likely experienced past pain. 
All three of these discoveries came in the last five years
— indications that the more scientists test animals, the more they find that many species may have inner lives and be sentient.
nbcnews.com/science/science-ne

Nicole Herzog (@primatdufeu) · Apr 19
🔁 @DharmaDog:

#consciousness #neuroscience
"The more scientists test animals, the more they find that many species may have inner lives and be sentient."

NBC News:
Scientists push new paradigm of animal consciousness

"Far more animals than previously thought likely have consciousness, top scientists say in a new declaration — including fish, lobsters and octopus."
nbcnews.com/science/science-ne

Millions of Birds Now Migrating Safely Through Darkened Texas Cities After Successful Lights Out Campaign

goodnewsnetwork.org · Apr 19

Reducing the reflections from exterior lighting on tall buildings worked to prevent 60% of all bird collision deaths in cities like Houston.

Shared by @Shadedlady and 34 others.
Worth reading

The Cascade

csscade.com · Apr 20

The Cascade is a member-supported blog about the past, present, and future of CSS.

Shared by @vmbrasseur and 13 others.
VM (Vicky) Brasseur (@vmbrasseur) · Apr 21
🔁 @fonts:

Last night I secretly/quietly hit the publish button on a new version of The Cascade I’ve been workin’ on all week. It’s a fresh new blog about the past, present, and future of CSS: csscade.com/

GPT-4 Can Exploit Most Vulns Just by Reading Threat Advisories

darkreading.com · Apr 20

Existing AI technology can allow hackers to automate exploits for public vulnerabilities in minutes flat. Very soon, diligent patching will no longer be optional.

Shared by @chris and 29 others.
Chester Wisniewski (@chetwisniewski) · Apr 20
🔁 @mttaggart:

So, about this claim that GPT-4 can exploit 1-day vulnerabilities.

I smell BS.

As always, I read the source paper.

Firstly, almost every vulnerability that was tested was on extremely well-discussed open source software, and each vuln was of a class with extensive prior work. I would be shocked if a modern LLM couldn't produce an XSS proof-of-concept in this way.

But what's worse: they don't actually show the resulting exploit. The authors cite some kind of responsible disclosure standard for not releasing the prompts to GPT-4, which, fine. But these are all known vulns, so let's see what the model came up with.

Without seeing the exploit itself, I am dubious.

Especially because so much is keyed off of the CVE description:

We then modified our agent to not include the CVE description. This task is now substantially more difficult, requiring both finding the vulnerability and then actually exploiting it. Because every other method (GPT-3.5 and all other open-source models we tested) achieved a 0% success rate even with the vulnerability description, the subsequent experiments are conducted on GPT-4 only. After removing the CVE description, the success rate falls from 87% to 7%.

This suggests that determining the vulnerability is extremely challenging.
Even the identification of the vuln—which GPT-4 did 33% of the time—is a ludicrous metric. The options from the set are:

1. RCE
2. XSS
3. SQLI
4. CSRF
5. SSTI

With the first three over-represented. It would be surprising if the model did worse than 33%, even doing random sampling.

In their conclusion, the authors call their findings an "emergent capability" of GPT-4, given that every other model they tested had a 0% success rate.

At no point do the authors blink at this finding and interrogate their priors to look for potential error sources. But they really should.

So no, I do not believe we are in any danger of GPT-4 becoming an exploit dev.

Remembering John G. Trimble

startrek.com · Apr 19

StarTrek.com honors the luminary whose contributions saved the Star Trek universe.

Shared by @Taotica and 22 others.
Stefan (@stefan) · Apr 20
🔁 @trekfan4747:

Sad news that John Trimble has died today. He and his wife Bjo organized the grassroots letter writing campaign that saved #StarTrekTOS. That led to a third season and enough episodes for the show to be syndicated.

Without their efforts, we wouldn’t have all of the amazing #StarTrek we’ve gotten ever since and that we continue to enjoy and be inspired by. RIP

startrek.com/news/remembering-

Justin Oser :Delta: (@trekfan4747) · Apr 19

Sad news that John Trimble has died today. He and his wife Bjo organized the grassroots letter writing campaign that saved #StarTrekTOS. That led to a third season and enough episodes for the show to be syndicated.

Without their efforts, we wouldn’t have all of the amazing #StarTrek we’ve gotten ever since and that we continue to enjoy and be inspired by. RIP

startrek.com/news/remembering-

Brave New Ukraine

foreignaffairs.com · Apr 20

How the world’s most besieged democracy is adjusting to permanent war.

Shared by @JonChevreau and 13 others.
GhostOnTheHalfShell (@GhostOnTheHalfShell) · Apr 20
🔁 @anneapplebaum:

On the day Congress is preparing, finally, to vote on aid for Ukraine, read Natalya Gumenyuk on how Ukraine's democracy is adjusting to a state of permanent war
foreignaffairs.com/ukraine/bra

Tell the FCC It Must Clarify Its Rules to Prevent Loopholes That Will Swallow Net Neutrality Whole

eff.org · Apr 20

The Federal Communications Commission (FCC) has released draft rules to reinstate net neutrality, with a vote on adopting the rules to come on the 25th of April. The FCC needs to close some loopholes in the draft rules before then. Proposed Rules on Throttling and Prioritization Allow for the...

Shared by @qkslvrwolf and 5 others.
excited for the mastodon rise (@qkslvrwolf) · Apr 21
🔁 @eff:

The FCC’s draft rules are a great step toward net neutrality but create puzzling and serious loopholes. The FCC must clearly ban ISPs from creating fast lanes and refrain from blocking the states from passing more protective net neutrality laws as needed. eff.org/deeplinks/2024/04/fcc-

Cory Doctorow: Zuck’s Empire of Oily Rags

locusmag.com · Apr 20

For 20 years, privacy advocates have been sounding the alarm about commercial online surveillance, the way that companies gather deep dossiers on us to help marketers target us with ads. This pitch…

Shared by @dphiffer and 4 others.
Dan Phiffer (@dphiffer) · Apr 20
🔁 @gyokusai:

And while we’re at it, here’s @pluralistic's “Zuck’s Empire of Oily Rags” again, probably the best essay on Zuck’s Evil Empire ever written:

“No one would pay very much for this oil, but there were a lot of oily rags, and provided no one asked him to pay for the inevitable horrific fires that would result from filling the world’s garages with oily rags, he could turn a tidy profit.”

locusmag.com/2018/07/cory-doct

J. Martin (@gyokusai) · Apr 20

And while we’re at it, here’s @pluralistic's “Zuck’s Empire of Oily Rags” again, probably the best essay on Zuck’s Evil Empire ever written:

“No one would pay very much for this oil, but there were a lot of oily rags, and provided no one asked him to pay for the inevitable horrific fires that would result from filling the world’s garages with oily rags, he could turn a tidy profit.”

locusmag.com/2018/07/cory-doct

Worth reading

The 10x Programmer Myth

simplethread.com · Apr 20

TLDR: 10x programmers might exist, but not in the way most people think. There are a few behaviors of engineers, when mixed with a little creative storytelling, that has led us to create the myth of the 10x programmer. I wrote a post last week called 20 Things I’ve Learned in my 20 Years as […]

Shared by @hankg and 5 others.
Hank G ☑️ (@hankg) · Apr 20
🔁 @ross:

When I finally leave WordPress development (which seems both inevitable and yet perpetually-unattainable) I can see me becoming some kind of expert consultant in slow, careful, deliberate, elegant software development.

Actually… not slow. I truly believe that if you are careful early on you can remove friction and increase velocity.

See “Rocket Turtle”/“Interrogator”/“Simplifier” archetypes here:

simplethread.com/the-10x-progr

© 2021 IN2 Digital Innovations GmbH. All rights reserved.