Hegseth Announces Grok Access to Classified Pentagon Networks

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
30,155
17,596
Here
✟1,588,679.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
Anthropic is out, from what it seems. They refuse to remove the guardrails from their model that Hegseth is insisting on.

And no matter what version (free/professional) you are using, if it's an LLM, you will never be able to get rid of hallucinations. It's unfortunately baked into how the models function; you can only try to mitigate it, but it will always be there in LLMs.
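To make the mitigation point concrete, one common trick is self-consistency sampling: ask the model the same question several times and keep the majority answer. A toy sketch (the `fake_llm` below is a hypothetical stand-in with canned answers, not a real API):

```python
from collections import Counter
from itertools import cycle

# Hypothetical stand-in for repeated LLM calls: mostly correct,
# with an occasional hallucinated answer mixed in.
_canned = cycle(["Paris", "Paris", "Paris", "Lyon", "Paris"])

def fake_llm(prompt: str) -> str:
    return next(_canned)

def self_consistency(prompt: str, n: int = 5) -> str:
    """Ask the same question n times and keep the majority answer.
    This reduces -- but can never fully remove -- hallucinations."""
    answers = [fake_llm(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is the capital of France?"))  # → Paris
```

Majority voting lowers the odds of surfacing a one-off hallucinated answer, but it can never drive the rate to zero, which is the point.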

Seems like Anthropic is changing their tune pretty quickly


“Rather than being hard commitments, these are public goals that we will openly grade our progress towards,” the company said in its blog post.

The change comes a day after Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to roll back the company’s AI safeguards, or risk losing a $200 million Pentagon contract and being put on what is effectively a government blacklist.



With regard to the other aspect you mentioned, while the hallucinations you speak of are certainly pronounced in the free-tier versions, the higher-tier versions and enterprise solutions are a different user experience. Where the free-tier ones on the web are basically just a glorified LLM/Google-search hybrid built for speed, the more robust offerings are tailored for accuracy and depth.

So while one may be used to hopping on ChatGPT, Perplexity, or Claude, typing a question, and getting an answer back in 5 seconds, the more robust solutions ask follow-up questions and give users the option to prioritize different aspects of what they're trying to do. For example, when utilizing the top-tier version of Claude Code, it's not unusual to have a 1-hour+ discussion with it, have it "think" for 5-10 minutes between answers, and build a complex solution that can pass SOC 2, HIPAA, and PCI audits, whereas the free version of Claude that's online will spit out a quick-and-dirty HTML page with some vanilla JavaScript.


In the past, I've used the steak-dinner analogy to describe the difference.

You can order a steak dinner at Cracker Barrel or Applebee's.
vs
You can order a steak dinner from Ruth's Chris or Fleming's

Both will involve sitting at a table and having someone bring you an edible steak, but you'll be getting a very different product and experience at the latter.

That's not to say that you can't end up with a poorly cooked steak at Ruth's Chris; it obviously happens once in a while. But your chances of getting an overcooked piece of shoe leather, or one that's still cold in the middle (because it didn't thaw properly), when you ordered medium-rare are much lower at the latter.
 


ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
30,155
17,596
Here
✟1,588,679.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
Related:


"Would you like to play a game?"

Some of the article appears to be behind the paywall...

I was able to see as far as this part:
Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.


Not to keep harping on the "version is everything here, folks" talking point, but:

Using several-versions-old free-tier models doesn't necessarily prove what would happen in a realistic application.

For example, they mentioned using the year-old free-tier Claude implementation (they're currently on Sonnet 4.6 for the free tier and Opus 4.6 for premium use).

And there's a big difference between the two

GPQA Diamond is where the gap becomes dramatic. This benchmark measures PhD-level questions across physics, chemistry, and biology. Opus 4.6's 91.3% vs Sonnet 4.6's 74.1% represents a 17-point chasm — the single largest performance difference between the two models. If your work involves expert-level reasoning, Opus is in a different league.
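For what it's worth, the "17-point" figure in that quote checks out; the arithmetic (numbers copied from the quote above):

```python
# GPQA Diamond scores as quoted above (percent)
opus_46 = 91.3
sonnet_46 = 74.1

# Difference between the two models on the same benchmark
gap = round(opus_46 - sonnet_46, 1)
print(gap)  # → 17.2
```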

Similar story with OpenAI: their "deeper reasoning" is determined by mode. Meaning, the answers/approaches you'd get from Sonnet 4.6/GPT-5.2 are very different from what you'd get with Opus 4.6/o1 Pro Thinking Mode.


Basically, this guy's experiment is tantamount to reproducing an already-known flaw in a several-versions-old, unpatched release of Windows 10 and using that as an argument for why it's dangerous for a government entity to be using the latest and greatest version of Windows Server, fully patched and up to date.
 

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
43,305
23,971
US
✟1,841,921.00
Faith
Christian
Marital Status
Married
Musk is a Trump supporter, while much of the rest of Silicon Valley is just doing what it needs to survive him. It seems that military decisions as to which platform is reliable and which is not are being made on the basis of personal loyalty to Emperor Trump.

Personally, I found Grok to be infected with Musk's atheism when I asked it questions about Bible dating, for example.

The biggest question with this AI, as with all of them in fact, is the tendency to hallucinate based on statistical models that do not map to reality. I would be interested in what legal constraints and what correspondence-to-truth tests will be in place for this system.

Also, how will command-and-control hierarchies integrate with the workflows of decision making? AIs can identify targets, but it will still take some kind of empirical test and a human decision to fire.
And that leads us to the current conflict between Hegseth and Anthropic, the corporation operating the AI system called "Claude."

Hegseth is threatening to destroy Anthropic because they refuse to develop fully autonomous weapons that take humans out of the loop entirely and automate selecting and engaging targets. I suspect it's because he can't find enough soldiers who will reliably obey illegal orders.

Here is the full statement of Anthropic's CEO on the dispute:

Here is a detailed news story on the dispute.
 

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
43,305
23,971
US
✟1,841,921.00
Faith
Christian
Marital Status
Married
Seems like Anthropic is changing their tune pretty quickly


“Rather than being hard commitments, these are public goals that we will openly grade our progress towards,” the company said in its blog post.

The change comes a day after Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to roll back the company’s AI safeguards, or risk losing a $200 million Pentagon contract and being put on what is effectively a government blacklist.
No, that new policy does not indicate a reversal from Anthropic's resistance to creating an LLM that can target humans without human control.
 

Hans Blaster

Call Me Al
Mar 11, 2017
24,707
18,033
56
USA
✟466,720.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
And that leads us to the current conflict between Hegseth and Anthropic, the corporation operating the AI system called "Claude."

Hegseth is threatening to destroy Anthropic because they refuse to develop fully autonomous weapons that take humans out of the loop entirely and automate selecting and engaging targets.
I normally don't say this, but destroy Anthropic? Don't threaten me with a good time, Sec. Pete.

I suspect it's because he can't find enough soldiers who will reliably obey illegal orders.
This is entirely too plausible.
 


ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
30,155
17,596
Here
✟1,588,679.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
No, that new policy does not indicate a reversal from Anthropic's resistance to creating an LLM that can target humans without human control.
If you read the fine print of the policy change, it does significantly weaken their prior promises.

Or, at the very least, it's a reversal of what their perceived "commitment" level was. They've long tried to brand themselves as the "ethical AI" company.




However, when you look at the details of this policy change, it's basically tantamount to saying, "Yeah, as long as we're out in front, we'll stick with the commitment to safety; however, if other competitors start to surpass us and start making more money, we'll just match whatever their level of guardrails is."
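In pseudo-policy terms, that reading reduces to something like this (my own toy framing of the cynical interpretation, not Anthropic's wording):

```python
def effective_guardrails(own_level: int, in_the_lead: bool,
                         competitor_levels: list[int]) -> int:
    """Toy model of a hedged safety commitment: keep your own
    guardrail level while you're winning; otherwise sink to
    whatever the loosest competitor does. Higher = stricter."""
    if in_the_lead or not competitor_levels:
        return own_level
    return min(own_level, min(competitor_levels))

print(effective_guardrails(10, True, [4, 6]))   # leading: keep own level, 10
print(effective_guardrails(10, False, [4, 6]))  # behind: match loosest rival, 4
```

The "commitment" only binds in the branch where it costs nothing, which is the complaint.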
 

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
43,305
23,971
US
✟1,841,921.00
Faith
Christian
Marital Status
Married
If you read the fine print of the policy change, it does significantly weaken their prior promises.

Or, at the very least, it's a reversal of what their perceived "commitment" level was. They've long tried to brand themselves as the "ethical AI" company.




However, when you look at the details of this policy change, it's basically tantamount to saying, "Yeah, as long as we're out in front, we'll stick with the commitment to safety; however, if other competitors start to surpass us and start making more money, we'll just match whatever their level of guardrails is."
That new policy does not indicate a reversal from Anthropic's resistance to creating an LLM that can target humans without human control.
 

mindlight

See in the dark
Site Supporter
Dec 20, 2003
14,483
3,056
London, UK
✟1,059,477.00
Country
Germany
Gender
Male
Faith
Christian
Marital Status
Married
And that leads us to the current conflict between Hegseth and Anthropic, the corporation operating the AI system called "Claude."

Hegseth is threatening to destroy Anthropic because they refuse to develop fully autonomous weapons that take humans out of the loop entirely and automate selecting and engaging targets. I suspect it's because he can't find enough soldiers who will reliably obey illegal orders.

Here is the full statement of Anthropic's CEO on the dispute:

Here is a detailed news story on the dispute.

It seemed that Hegseth in effect asked Anthropic to trust his usage and assume that it would be lawful. He did not want the company to introduce its own safeguards into the technology, suggesting these would be provided by the military through the way it used it. But Anthropic would most probably object that, since it is their tech, they know what it takes to provide the necessary safeguards, and that these need to come from them, especially when it comes to domestic surveillance and autonomous systems.
 

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
43,305
23,971
US
✟1,841,921.00
Faith
Christian
Marital Status
Married
It seemed that Hegseth in effect asked Anthropic to trust his usage and assume that it would be lawful. He did not want the company to introduce its own safeguards into the technology, suggesting these would be provided by the military through the way it used it. But Anthropic would most probably object that, since it is their tech, they know what it takes to provide the necessary safeguards, and that these need to come from them, especially when it comes to domestic surveillance and autonomous systems.
I had a chat with ChatGPT (OpenAI) about this a couple of days ago. ChatGPT told me a couple of interesting things:

1. Claude's safeguards are so deeply embedded in the code that removing them would render Claude "unpredictable."

2. ChatGPT has no such safeguards and would be more suitable for Hegseth's apparent purposes.

The second issue in the matter is that even if Anthropic and the DoD can't come to an agreement, they should be able to simply shake hands and walk away. Instead, Hegseth is petulantly threatening to destroy the company by prohibiting any other company that does or would like to do business with the government from doing business with Anthropic.
 

mindlight

See in the dark
Site Supporter
Dec 20, 2003
14,483
3,056
London, UK
✟1,059,477.00
Country
Germany
Gender
Male
Faith
Christian
Marital Status
Married
I had a chat with ChatGPT (OpenAI) about this a couple of days ago. ChatGPT told me a couple of interesting things:

1. Claude's safeguards are so deeply embedded in the code that removing them would render Claude "unpredictable."

2. ChatGPT has no such safeguards and would be more suitable for Hegseth's apparent purposes.

The second issue in the matter is that even if Anthropic and the DoD can't come to an agreement, they should be able to simply shake hands and walk away. Instead, Hegseth is petulantly threatening to destroy the company by prohibiting any other company that does or would like to do business with the government from doing business with Anthropic.
To me that suggests that Claude is an inferior tool in this case. I suspect that investors like Amazon and Google have their own slant on what answers are offensive, suitable, or dangerous. I have always preferred the way I can set my own parameters for questions to ChatGPT and get answers which I, as an expert in some fields, can recognize as authoritative.

In a way your original comment about Hegseth's disdain for Claude is therefore a little disingenuous.

If the tool cannot adapt to the military's own human assessments and parameter settings, then it is imposing the very autonomous, automatic AI that it is trying to code out of its possible responses. The AI just needs to provide useful answers and be trustworthy in following defined, parameterized, contextually relevant military protocols, not make decisions for the military.
 

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
43,305
23,971
US
✟1,841,921.00
Faith
Christian
Marital Status
Married
To me that suggests that Claude is an inferior tool in this case. I suspect that investors like Amazon and Google have their own slant on what answers are offensive, suitable, or dangerous. I have always preferred the way I can set my own parameters for questions to ChatGPT and get answers which I, as an expert in some fields, can recognize as authoritative.

In a way your original comment about Hegseth's disdain for Claude is therefore a little disingenuous.

If the tool cannot adapt to the military's own human assessments and parameter settings, then it is imposing the very autonomous, automatic AI that it is trying to code out of its possible responses. The AI just needs to provide useful answers and be trustworthy in following defined, parameterized, contextually relevant military protocols, not make decisions for the military.
I think you have not been paying attention or are being a bit disingenuous yourself.

Anthropic has been under contract with the DoD since July 2025. In February 2026, Hegseth moved to change the contract wording. Anthropic claims that the change requires them to remove safeguards that presently prevent their software from accomplishing widespread domestic surveillance and fully automated human targeting.

(I would point out that DoD domestic surveillance has been illegal for 50 or more years. When I took a ride on an RC-135 training mission in the early 80s (electronic intelligence collection for the NSA, which is part of the DoD), we weren't allowed even to listen to US pop music radio stations--the prohibition against DoD domestic surveillance is that tight. The PATRIOT Act loosens it around the periphery--the NSA can listen to a communication that originates from a foreign nation into the US--but widespread domestic surveillance by the DoD is still prohibited by law.)

In my military service, I also spent some time writing scenarios for military exercises. I can't imagine a scenario in which we would want an automaton engaging targets without human supervision. Identifying possible targets is one thing; going through the cycle of destroying them without human supervision is something else entirely.

Why would the US government want either of those capabilities, and why would the government not, rather, insist on those safeguards?
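That identify-versus-engage split is exactly the kind of gate at issue. A minimal illustrative sketch (entirely hypothetical, no real system implied):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    track_id: str
    confidence: float

def identify(sensor_tracks: list[Candidate],
             threshold: float = 0.9) -> list[Candidate]:
    """Automation is uncontroversial here: flag possible targets
    for human review based on a confidence threshold."""
    return [c for c in sensor_tracks if c.confidence >= threshold]

def engage(candidate: Candidate, human_authorized: bool) -> str:
    """The gate in question: no human sign-off, no engagement.
    Deleting this check is what 'fully autonomous' means."""
    if not human_authorized:
        return f"HOLD {candidate.track_id}: awaiting human decision"
    return f"ENGAGE {candidate.track_id}"

tracks = [Candidate("T1", 0.95), Candidate("T2", 0.40)]
flagged = identify(tracks)
print([c.track_id for c in flagged])               # → ['T1']
print(engage(flagged[0], human_authorized=False))  # → HOLD T1: awaiting human decision
```

The dispute described above is essentially about whether that `human_authorized` check may be removed.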
 

mindlight

See in the dark
Site Supporter
Dec 20, 2003
14,483
3,056
London, UK
✟1,059,477.00
Country
Germany
Gender
Male
Faith
Christian
Marital Status
Married
I think you have not been paying attention or are being a bit disingenuous yourself.

Anthropic has been under contract with the DoD since July 2025. In February 2026, Hegseth moved to change the contract wording. Anthropic claims that the change requires them to remove safeguards that presently prevent their software from accomplishing widespread domestic surveillance and fully automated human targeting.

(I would point out that DoD domestic surveillance has been illegal for 50 or more years. When I took a ride on an RC-135 training mission in the early 80s (electronic intelligence collection for the NSA, which is part of the DoD), we weren't allowed even to listen to US pop music radio stations--the prohibition against DoD domestic surveillance is that tight. The PATRIOT Act loosens it around the periphery--the NSA can listen to a communication that originates from a foreign nation into the US--but widespread domestic surveillance by the DoD is still prohibited by law.)

In my military service, I also spent some time writing scenarios for military exercises. I can't imagine a scenario in which we would want an automaton engaging targets without human supervision. Identifying possible targets is one thing; going through the cycle of destroying them without human supervision is something else entirely.

Why would the US government want either of those capabilities, and why would the government not, rather, insist on those safeguards?
The USA can define its own laws and should adhere to what it decides. Where the safeguards that protect that compliance are defined is the issue here. Hegseth says the DoW should define lawful use rather than its Big Tech supplier.

It is naive to believe that the modern battlefield will not include fifth columnists who are themselves American citizens on US soil.

Regarding autonomous AI killing systems, I tend to agree on the macro level, though the option of setting a "sentinel gun" on a narrow corridor of potential assault by the enemy remains viable.
 

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
24,108
14,715
Earth
✟283,636.00
Country
United States
Gender
Male
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
It is naive to believe that the modern battlefield will not include fifth columnists who are themselves American citizens on US soil.
Oh goodie, the purity tests are nearly here, who should we “suspect”? I’m rapt.
 

Nithavela

you're in charge you can do it just get louis
Apr 14, 2007
31,506
23,205
Comb. Pizza Hut and Taco Bell/Jamaica Avenue.
✟621,595.00
Country
Germany
Faith
Other Religion
Marital Status
Single
It seemed that Hegseth in effect asked Anthropic to trust his usage and assume that it would be lawful. He did not want the company to introduce its own safeguards into the technology, suggesting these would be provided by the military through the way it used it. But Anthropic would most probably object that, since it is their tech, they know what it takes to provide the necessary safeguards, and that these need to come from them, especially when it comes to domestic surveillance and autonomous systems.
Just yesterday I saw a social media post about a highly paid AI security expert accidentally having their AI delete their email folder. I doubt anyone really "knows" AI.
 

Nithavela

you're in charge you can do it just get louis
Apr 14, 2007
31,506
23,205
Comb. Pizza Hut and Taco Bell/Jamaica Avenue.
✟621,595.00
Country
Germany
Faith
Other Religion
Marital Status
Single
Oh goodie, the purity tests are nearly here, who should we “suspect”?
everyone2.jpg

I suggest implanting every US American with a micro-bomb in their neck, so that they can be retired when AI determines them to be a fifth columnist.
 

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
24,108
14,715
Earth
✟283,636.00
Country
United States
Gender
Male
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
everyone2.jpg

I suggest implanting every US American with a micro-bomb in their neck, so that they can be retired when AI determines them to be a fifth columnist.
That’s down the road, we still have to get through the many rounds of “denunciations” firstly.
 