AI & Trust

J_B_

I have answers to questions no one ever asks.
"Did AI Prove Our Proton Model Wrong?"

Using "prove" in the title of the video, is a bit click-baity, but the video itself is much more reserved. They discuss the use of AI to identify a Proton model that is different than the current best candidates. I don't have a dog in that fight, so I could care less which Proton model rises to the top. Let the best one win.

My question is: How much do you trust AI solutions? Before you answer, let me elaborate further.

The video reveals that the "AI" involved was a neural network that optimized thousands of models to arrive at the best one, which is an advantage given that human physicists can only test a few. When I heard that, my reaction was, "Oh, is that all it was?" I'm not downplaying the accomplishment, but rather the use of "AI" as a label in this case, where it seems a misnomer. It's fine if people want to call such things "AI", but I am sometimes concerned that the general public misunderstands the nature of what is actually going on in the belly of the beast.

I've used all kinds of different optimizers in my engineering work: gradient descent, genetic algorithms, neural nets. They all have their uses, but I've never trusted them enough to just turn them loose and take their solution without reservation. I never believed that using a genetic algorithm meant I was condoning evolutionary biology, because they simply aren't the same thing. I've never considered any of the neural nets I've ever used actually "intelligent". Usually it's an intensive process where I am deeply involved in guiding the algorithm, and I find a better solution that way than by just turning it loose to do its own thing. In the end, it's more about using the optimizer as a workhorse to test more cases than I could on my own. It's not about the algorithm understanding the engineering problem better than I do.
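To make concrete what I mean by a "workhorse", here's a minimal sketch in Python. The objective function is a toy stand-in, not any real engineering model:

```python
# Minimal sketch: an optimizer as a "workhorse" that evaluates far more
# candidate designs than a human could by hand. The objective is a toy
# stand-in, not any real engineering model.
import random

def objective(x):
    # Hypothetical figure of merit for a candidate design parameter x.
    return -(x - 3.2) ** 2 + 10.0

def workhorse_search(n_candidates=10_000, lo=-10.0, hi=10.0):
    best_x, best_score = None, float("-inf")
    for _ in range(n_candidates):
        x = random.uniform(lo, hi)   # propose a candidate
        score = objective(x)         # evaluate it
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score

x, s = workhorse_search()
print(f"best candidate: x={x:.3f}, score={s:.3f}")
```

The loop tests the cases; the judgment about whether the winner makes engineering sense stays with the engineer.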

But what if the day arrives when that is the case - the day when AI gives an answer we don't understand - would you trust it?
 

J_B_

I have answers to questions no one ever asks.
If it's useful, it's not a matter of trust, is it?

That's a nice thought, but it would be difficult to work out in practice. It would depend on the risks involved with trying out this 'useful' thing we don't understand.
 

SelfSim

A non "-ist"
That's a nice thought, but it would be difficult to work out in practice. It would depend on the risks involved with trying out this 'useful' thing we don't understand.
In order for it to be useful, someone (or something) has to make use of it.

I still don't see what that has to do with trusting in anything?

I mean, as has been pointed out before in one of these AI-style threads, one of the chess AIs makes moves no human understands, yet I don't think anyone would argue that the learning techniques which distinguished that AI (AlphaZero?) were not useful .. it won all its games, so it demonstrated usefulness .. where does the trust come in there?
 

J_B_

I have answers to questions no one ever asks.
In order for it to be useful, someone (or something) has to make use of it.

I still don't see what that has to do with trusting in anything?

I mean, as has been pointed out before in one of these AI-style threads, one of the chess AIs makes moves no human understands, yet I don't think anyone would argue that the learning techniques which distinguished that AI (AlphaZero?) were not useful .. it won all its games, so it demonstrated usefulness .. where does the trust come in there?

Chess is a trivial example. AI is suggesting moves in a game you already understand.

The example I gave is about making moves in a game we don't understand. The example involved nuclear physics. Consider the consequences of implementing a nuclear device based on a model we don't understand. And then there's a meltdown, an explosion, radiation poisoning ... Maybe humans could mitigate those results, maybe not. Consider a less dramatic case, where the device suggested by AI is used for power generation, transportation, something like that, and it breaks down after going into production. Now we need to fix this device people have come to depend upon, but we can't because we don't know how.

I'm surprised you can't imagine possibilities where utilizing technology we don't understand could go bad. But if, for you, AI is only about winning chess games, I can see why you wouldn't think it's important.
 

2PhiloVoid

Other scholars got to me before you did!
Chess is a trivial example. AI is suggesting moves in a game you already understand.

The example I gave is about making moves in a game we don't understand. The example involved nuclear physics. Consider the consequences of implementing a nuclear device based on a model we don't understand. And then there's a meltdown, an explosion, radiation poisoning ... Maybe humans could mitigate those results, maybe not. Consider a less dramatic case, where the device suggested by AI is used for power generation, transportation, something like that, and it breaks down after going into production. Now we need to fix this device people have come to depend upon, but we can't because we don't know how.

I'm surprised you can't imagine possibilities where utilizing technology we don't understand could go bad. But if, for you, AI is only about winning chess games, I can see why you wouldn't think it's important.

Or, just as bad, A.I. suggests a mode of power generation, not for physics ... but for politics.

Frankly, I don't trust it for that, either.
 

SelfSim

A non "-ist"
Chess is a trivial example. AI is suggesting moves in a game you already understand.

The example I gave is about making moves in a game we don't understand. The example involved nuclear physics. Consider the consequences of implementing a nuclear device based on a model we don't understand. And then there's a meltdown, an explosion, radiation poisoning ... Maybe humans could mitigate those results, maybe not. Consider a less dramatic case, where the device suggested by AI is used for power generation, transportation, something like that, and it breaks down after going into production. Now we need to fix this device people have come to depend upon, but we can't because we don't know how.

I'm surprised you can't imagine possibilities where utilizing technology we don't understand could go bad. But if, for you, AI is only about winning chess games, I can see why you wouldn't think it's important.
Hmm .. I've just watched more of the video. They used AI to synthesise new theoretical models for testing.

In the scenario of your above post, you're talking about possible serious results emerging from the implementation of production technologies. There's a myriad of step-by-step testing in between the initial modelling (or prototyping) and production phases. It's sort of hard to imagine those steps not involving observation of safety protocols. It looks a bit like hyperbolisation without mentioning the in-between steps(?)

The 'trust' issue I think you're concerned about only seems to become an issue where the step-by-step process moving from theoretical modelling to production of technologies is completely ignored(?) If anything, I think I'd say that 'trust' comes from the incremental buildup of knowledge gained throughout that overall end-to-end process, which, by the post-production phases, is far removed from where AI was used .. and so too is any 'trust' in AI(?)

It's an intriguing question, but I don't think scientifically thinking humans would just blindly stumble forward with something they are completely ignorant about.
 

J_B_

I have answers to questions no one ever asks.
Or, just as bad, A.I. suggests a mode of power generation, not for physics ... but for politics.

Frankly, I don't trust it for that, either.
I would trust AI less for politics than I would for physics. For physics there's at least a conceivable path to a good solution.
 

J_B_

I have answers to questions no one ever asks.
In the scenario of your above post, you're talking about possible serious results emerging from the implementation of production technologies. There's a myriad of step-by-step testing in between the initial modelling (or prototyping) and production phases. It's sort of hard to imagine those steps not involving observation of safety protocols. It looks a bit like hyperbolisation without mentioning the in-between steps(?)

I briefly mentioned mitigation, but it's hard to mitigate the safety concerns of something you don't understand. So, yes, the steps to production would be a factor in reducing the risk. But this would fall into the "we don't know what we don't know" category, which IMO is the highest risk.

The 'trust' issue I think you're concerned about only seems to become an issue where the step-by-step process moving from theoretical modelling to production of technologies is completely ignored(?) If anything, I think I'd say that 'trust' comes from the incremental buildup of knowledge gained throughout that overall end-to-end process, which, by the post-production phases, is far removed from where AI was used .. and so too is any 'trust' in AI(?)

Have you ever been involved in taking something from concept to production? I have. There are very smart and very skilled people in the world. That doesn't mean everyone in an end-to-end production process is smart and/or skilled. The real world involves nepotism, politics, dishonesty, and incompetence. Of course reality can weed out those people and their products over time, but at a cost.

It's an intriguing question, but I don't think scientifically thinking humans would just blindly stumble forward with something they are completely ignorant about.

I guess you trust humanity more than I do - though it sounds as if you, yourself, would apply due caution, so that's good.
 

Jipsah

Blood Drinker
But what if the day arrives when that is the case - the day when AI gives an answer we don't understand - would you trust it?
An answer I couldn't understand would lead me to believe that the code was buggy.
 

SelfSim

A non "-ist"
Just been looking, in the other thread, at the AlphaGeometry example provided by @sjastro.
It is an example of where AI came up with a unique proof .. and an example of what happened when it did.

Analysis of its proof concluded that:
AlphaGeometry outputs very lengthy, low-level steps, whereas humans use a high-level insight .. to obtain a broad set of conclusions all at once. For algebraic deductions, AlphaGeometry cannot flesh out its intermediate derivations .. therefore leading to low readability.
So the human 'intuition' evident there is skepticism and a drive towards understanding what is being presented by AI.

I think it's a good example of how the humans looking at its proof found difficulty in simply trusting the proof before their eyes.

IOW, perhaps, one could extrapolate by saying that the solution to any given problem might not be worth more than the understanding of 'the how' of what was derived(?)

PS: I have to be careful here, because that last hypothesis attracts abundant evidence supporting it, from the CF 'debates' on how Creationists and scientific thinkers arrive at their respective conclusions about what is real and what is belief.

PPS: Solutions are still models awaiting testing .. and are thus unlikely to be trusted by scientific thinkers until the results of those tests have been extensively reviewed and agreed as having been replicated and conducted objectively.
 

eleos1954

God is Love
"Did AI Prove Our Proton Model Wrong?"

Using "prove" in the title of the video, is a bit click-baity, but the video itself is much more reserved. They discuss the use of AI to identify a Proton model that is different than the current best candidates. I don't have a dog in that fight, so I could care less which Proton model rises to the top. Let the best one win.

My question is: How much do you trust AI solutions? Before you answer, let me elaborate further.

The video reveals that the "AI" involved was a neural network that optimized thousands of models to arrive at the best one, which is an advantage given that human physicists can only test a few. When I heard that, my reaction was, "Oh, is that all it was?" I'm not downplaying the accomplishment, but rather the use of "AI" as a label in this case, where it seems a misnomer. It's fine if people want to call such things "AI", but I am sometimes concerned that the general public misunderstands the nature of what is actually going on in the belly of the beast.

I've used all kinds of different optimizers in my engineering work: gradient descent, genetic algorithms, neural nets. They all have their uses, but I've never trusted them enough to just turn them loose and take their solution without reservation. I never believed that using a genetic algorithm meant I was condoning evolutionary biology, because they simply aren't the same thing. I've never considered any of the neural nets I've ever used actually "intelligent". Usually it's an intensive process where I am deeply involved in guiding the algorithm, and I find a better solution that way than by just turning it loose to do its own thing. In the end, it's more about using the optimizer as a workhorse to test more cases than I could on my own. It's not about the algorithm understanding the engineering problem better than I do.

But what if the day arrives when that is the case - the day when AI gives an answer we don't understand - would you trust it?
AI cannot be trusted ... it may or may not return accurate information.
 

durangodawood

Dis Member
....The example I gave is about making moves in a game we don't understand. The example involved nuclear physics. Consider the consequences of implementing a nuclear device based on a model we don't understand. And then there's a meltdown, an explosion, radiation poisoning ... Maybe humans could mitigate those results, maybe not. ....
Surely there are other ways to test the result, where error would be less consequential, before you implement it in high-stakes settings?
 

J_B_

I have answers to questions no one ever asks.
Surely there are other ways to test the result, where error would be less consequential, before you implement it in high-stakes settings?
There are methods in science to scale systems so they can be tested at lower cost, lower risk, etc. But those methods rely on models of the system to perform the scaling. If we don't understand what we're testing, I'm not sure those methods could be applied.
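For example, a minimal sketch with illustrative numbers - Reynolds-number similarity is one classic scaling law of this kind:

```python
# Minimal sketch of why scaled testing depends on a model of the system:
# Reynolds-number similarity for a 1:10 scale wind-tunnel test. Without
# the similarity law, the small test says nothing about the full-size system.
def matched_test_speed(full_speed, full_length, model_length):
    # Match Re = V * L / nu between full scale and model scale; with the
    # same working fluid, nu cancels and V_model = V_full * L_full / L_model.
    return full_speed * full_length / model_length

v_model = matched_test_speed(full_speed=30.0, full_length=10.0, model_length=1.0)
print(f"model must be tested at {v_model:.0f} m/s")  # 10x the full-scale speed
```

And that's the catch: if we don't understand the system well enough to know which dimensionless groups matter, we can't even set up the small test correctly.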

No doubt people would try to reduce the risk. But knowledge has its limits.
 

Bob Crowley

I have no background in programming (oh, I've taught myself a bit of basic stuff) and I have no qualifications in AI.

But recently my wife and I had to do a "Police Check" or "National Crime Check" (NCC) (in Australia) to continue in our volunteer roles with a particular Catholic charity. It's a bit irritating, but it's the law. No doubt the pedophile crisis in the Catholic Church had some influence, as we sometimes visit or interview families with children.

But when we submit our identification details online, it's apparently AI that checks them. We've had the same issue both of the last two times: my wife had to show proof of her name change (maiden name to married name). We duly sent off a scanned copy of the marriage certificate, which was church-issued on Commonwealth stationery, but because it's not an official "State" certificate it bounced back.

The AI was following very stringent rules, no doubt implemented by a human programmer: "Is document a state document? ... No ... Return to Sender..."
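If I had to guess, the rule looked something like this (the field names and logic are purely my guesses, not the NCC's actual code):

```python
# My guess at the kind of rigid rule the checker applied; the field names
# and logic here are hypothetical, not the NCC's actual code.
def accept_identity_document(doc: dict) -> bool:
    # A church-issued certificate fails even though a human would accept
    # it as valid proof of the name change.
    return doc.get("issuer_type") == "state"

marriage_certificate = {"issuer_type": "church", "proves_name_change": True}
print(accept_identity_document(marriage_certificate))  # False -> "Return to Sender"
```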

In due course we sent a request form back and the checks were completed.

But it took human intervention to get the job finalised.

I'm wondering how they'd go if AI were involved in designing a nuclear plasma reactor, with billions of repetitive equations which humans have no possibility of checking individually. Should they trust the result?

Or is there some algorithm that calculates the probability of error in an AI-controlled process?
 

helmut

"Did AI Prove Our Proton Model Wrong?"

Using "prove" in the title of the video, is a bit click-baity, but the video itself is much more reserved. They discuss the use of AI to identify a Proton model that is different than the current best candidates. I don't have a dog in that fight, so I could care less which Proton model rises to the top. Let the best one win.
After that description, I decided not to look into the video before answering you.
I never believed that using a genetic algorithm meant I was condoning evolutionary biology, because they simply aren't the same thing.
I once read a (German) textbook on »evolutionary algorithms«, and one type of them was labelled deluge algorithms (Sintflut-Algorithmen, a clear allusion to Noah's flood). Such labels mean next to nothing beyond a classification of algorithms.
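For the curious, here is a minimal sketch of the idea behind those deluge algorithms (Dueck's Great Deluge: accept any move whose quality stays above a steadily rising "water level"; the objective and parameters are toys of my own):

```python
# Minimal sketch of a Great Deluge search: accept any neighbour whose
# quality stays above a rising "water level". Toy objective, toy parameters.
import random

def quality(x):
    return 5.0 - x * x   # toy function to maximize (peak at x = 0)

def great_deluge(x=8.0, water=-100.0, rain=0.01, steps=10_000):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)   # small random move
        if quality(candidate) > water:              # above water: accept
            x = candidate
            if quality(x) > quality(best):
                best = x
        water += rain                               # the flood keeps rising
    return best

print(f"best x = {great_deluge():.3f}")  # should end up near 0
```

As you can see, the biblical label does no work at all; it is just a picture for the rising acceptance threshold.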
I've never considered any of the neural nets I've ever used actually "intelligent".
What does "intelligence" mean? I once read an article which showed that there is no consensus about that. It ended with a quote that intelligence is the feature measured by IQ tests ;)

Sometimes an algorithm designed to solve one type of problem can be used for a quite different type of problem. Depending on the definition of intelligence, you may say that this shows there is intelligence in the algorithm - it stems, of course, from the creator of that algorithm (and all this has nothing to do with what is called AI).
But what if the day arrives when that is the case - the day when AI gives an answer we don't understand - would you trust it?
AFAIK, we are already at that point. In chip design there are modules which involve many transistors; one example would be the division of two floating-point numbers (say, with a mantissa of 64 bits). You may design a chip by analyzing the mathematical structure of the problem and putting it into a grid pattern, but this is far from optimal. Such modules are now designed and optimized by computer programs, and no human understands how the result works.

It is impossible to test every combination of bits (in my example: 128 bits of input, that is 2¹²⁸ or about 3.4028*10³⁸ possibilities), so we can never be sure that there is no exceptional situation that will end in a wrong result (there have been processor bugs producing wrong results in rare circumstances, though I'm not sure whether any such bug was due to computer optimization rather than to human error in the design or its implementation). We simply have to trust that the chip divides correctly, and the same goes for other very complex operations.
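To illustrate what sampling can and cannot do, here is a minimal sketch of my own (not any real verification suite): compare the hardware's floating-point division against an independent exact method on random inputs:

```python
# Minimal sketch of a sampling spot-check: compare the FPU's division
# against an exact rational reference. Sampling can build confidence,
# but it can never prove the absence of a rare wrong result.
import random
from fractions import Fraction

def spot_check_division(trials=1_000):
    for _ in range(trials):
        a = random.uniform(-1e10, 1e10)
        b = random.uniform(1e-10, 1e10)
        hw = a / b                             # hardware FPU result
        exact = Fraction(a) / Fraction(b)      # exact rational reference
        # Correctly rounded IEEE 754 division is within half an ulp of the
        # exact quotient; a coarse relative-error bound stands in here.
        assert abs(Fraction(hw) - exact) <= abs(exact) / 2**50
    return True

print(spot_check_division())  # True, for these samples at least
```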

So do you check every division by using two different computer chips from two producers that do not share their microcode with one another (or do you use old chips or software that does not do division in »one step«, but rather uses a method humans can understand)?

Or do you just trust your computer?

EDIT: many typos
 

mindlight

See in the dark
"Did AI Prove Our Proton Model Wrong?"

Using "prove" in the title of the video, is a bit click-baity, but the video itself is much more reserved. They discuss the use of AI to identify a Proton model that is different than the current best candidates. I don't have a dog in that fight, so I could care less which Proton model rises to the top. Let the best one win.

My question is: How much do you trust AI solutions? Before you answer, let me elaborate further.

The video reveals that the "AI" involved was a neural network that optimized thousands of models to arrive at the best one, which is an advantage given that human physicists can only test a few. When I heard that, my reaction was, "Oh, is that all it was?" I'm not downplaying the accomplishment, but rather the use of "AI" as a label in this case, where it seems a misnomer. It's fine if people want to call such things "AI", but I am sometimes concerned that the general public misunderstands the nature of what is actually going on in the belly of the beast.

I've used all kinds of different optimizers in my engineering work: gradient descent, genetic algorithms, neural nets. They all have their uses, but I've never trusted them enough to just turn them loose and take their solution without reservation. I never believed that using a genetic algorithm meant I was condoning evolutionary biology, because they simply aren't the same thing. I've never considered any of the neural nets I've ever used actually "intelligent". Usually it's an intensive process where I am deeply involved in guiding the algorithm, and I find a better solution that way than by just turning it loose to do its own thing. In the end, it's more about using the optimizer as a workhorse to test more cases than I could on my own. It's not about the algorithm understanding the engineering problem better than I do.

But what if the day arrives when that is the case - the day when AI gives an answer we don't understand - would you trust it?

No one can see protons; they are inferred from indirect effects like tracks.

The AI can collate information faster and check the mathematical viability of various theories faster and more consistently. But it does not see what it models and cannot test its conclusions.

The best AI can do in this case is establish what are the most useful ways to think about protons given what we know. It cannot prove its conclusions because we cannot prove our conclusions and there are no accurate observations of protons in themselves.
 

FrumiousBandersnatch

...
So do you check every division by using two different computer chips from two producers that do not share their microcode with one another (or do you use old chips or software that does not do division in »one step«, but rather uses a method humans can understand)?

Or do you just trust your computer?
Trust, but verify (Russian proverb) ;-)
 