> For example, we have unit tests in software development. We have a function (a model), and we predict the output that this model will give (the expected result) for a certain input. So, if the test produces the expected result, it passes and the function is useful. BUT there's more than one way to arrive at the same result, hence if you test two independent functions that produce the exact same result... it would be false to automatically assume that they are the same function.
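The quoted point itself is easy to pin down in code: two independent implementations can pass the exact same test. A minimal Python sketch (both function names are hypothetical, just for illustration):

```python
def double_by_multiplying(x):
    # First implementation: multiply by two.
    return x * 2

def double_by_adding(x):
    # Independent implementation: add the value to itself.
    return x + x

# Both pass the identical "unit test" below, yet internally they are
# not the same function; the passing test alone can't tell them apart.
assert double_by_multiplying(3) == 6
assert double_by_adding(3) == 6
```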
I think this quoted passage lies at the very core of your reasoning error.
Let's consider that the piece of software is a single batch method which does some stuff.
Three parts here are relevant (sketched in code after the list):
- the functional requirements
- the actual business logic
- the unit test, to see if the business logic actually does what the functional requirements specify.
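A minimal sketch of those three parts in Python; the requirement text and all names are hypothetical:

```python
# 1. Functional requirement (the spec, stated in prose):
#    "the method returns the average amount of a batch".

# 2. Business logic (the actual code):
def process_batch(amounts):
    return sum(amounts) / len(amounts)

# 3. Unit test (checks that the logic does what the requirement specifies):
def test_process_batch_returns_average():
    assert process_batch([2, 4, 6]) == 4
```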
You say that the business logic is "the model". This is an incorrect analogy to scientific research.
The actual business logic (the code) is rather the equivalent of the "phenomenon" that needs explaining. The model would then be the functional requirements.
In the analogy to science, all we are given is the code: the phenomenon. And we need to find out how it works and what it does. So the unknown here is the functional requirements.
So we develop a hypothesis concerning what those requirements are. We then write unit tests to see if we get the expected result. When we don't, we need to alter the hypothesis / model: the functional requirements. The code is what it is and does what it does.
And we could do all kinds of tests which directly relate to the hypothesis and which would zero in on the actual requirements. For example, suppose we suspect that the method does a division. We could then write a test that causes the code to divide by zero and see if we get the specific exception that is thrown when dividing by zero.
If that test is successful, then we KNOW that a division by zero has occurred and that, at the very least, that part of the model is correct.
Such an exception is ONLY thrown if division by zero occurs, after all.
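As a sketch of such a probe, assuming pytest and a hypothetical mystery_batch standing in for the method under investigation:

```python
import pytest

def mystery_batch(amounts):
    # Stand-in for the existing code: in the analogy we can call it,
    # but we are NOT allowed to read this body.
    return sum(amounts) / len(amounts)

def test_hypothesis_method_divides():
    # Hypothesis: the method divides by the batch size somewhere.
    # If so, an empty batch must raise the division-by-zero exception.
    with pytest.raises(ZeroDivisionError):
        mystery_batch([])
```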
See, in such an analogy, science is not the development of the software. In this analogy, the software already exists (and can't be changed), and we are trying to find out what it does.
> The same is valid for any given scientific model; these don't exist in a vacuum and are built on a foundation of pre-existing axiomatic assumptions that feed into them. If you presume that space and matter are discrete, then of course you will arrive at atomic theory. If you don't, then you will arrive at something else.
And yet, the predictions of atomic theory check out and it allowed us to build nuclear bombs and nuclear power stations.
> But the consistent output of reality is what normalizes any given model. Models tend to conform to pre-existing data and assumptions.
And they are changed accordingly when testing uncovers new data. And assumptions are dropped like yesterday's newspaper when this testing shows them to be wrong.
The point in all of this being: you seem to imply that only models formulated without prior knowledge of the data matter. That's simply not the case. We can predict outcomes whose answer and data we already know.
The predictions aren't arbitrary. They are an inherent part of the model.
> If you are questioning the utility of the religious model, then you don't have to look far beyond Western Civilization. I can give you a dozen socio-political advantages that such a system would present for any developing civilization.
Which has nothing whatsoever to do with science and everything to do with arbitrary social/political structures. I could also point out that life in the West improved exponentially once we kicked the church out of government, turned to science for answers, and installed secular democracy. But that would be irrelevant, because none of it matters to the points at hand.
We are talking about explaining the nature of reality and the phenomena of nature. We are not talking about how to organize a society.
> Not entirely... the atomic model was not developed in a philosophical vacuum. Philosophy drives science, and not the other way around. Each successive iteration of atomic theory is based on axiomatic assumptions that existed prior to any model being formulated.
And yet, nukes explode and nuclear power stations generate electricity.
> You are merely shifting semantics into a process that's detached from human agency.
Not at all.
I can "ascribe" / "attribute" ANYTING to unfalsifiable/undectable entities. And the merrit of doing so is exactly zero.
Contrast that with, for example, nested hierarchies and evolution...
Nested hierarchies aren't just "ascribed" to evolution theory. Rather, evolution theory predicts the existence of such hierarchies. As in: if evolution is correct, then such hierarchies MUST exist.
To predict: "if this, then that"
To ascribe: "entity-X-did-it"
That's the difference.