As I previously pointed out....you are assuming the last 3 generations...were the same a 100 generations ago.
Well, no, you didn't previously point that out, but let's leave that aside.
We know something changed because after the flood the life spans shortened...then after the days of Peleg, they shortened again.
Bottom line...you haven't convinced me your mutation rate is as linear as you say it is. Your model is built on speculation.
Okay, we finally got a potential explanation from you: mutation rates used to be much higher than they are now, so lots of mutations occurred while the population size was still small. Let's consider how plausible that explanation is.
First, though, something I neglected to ask about previously: where are you getting your number of generations since Adam and Eve? At the moment, you've allowed yourself ~200 generations of a small population for mutations to accumulate, which would have to be in the period between the Flood and Abraham. That's not at all Biblical: Genesis lists nine generations for that period. So where did you get your estimate?
Back to the mutation rate...
First, your proposal doesn't make sense in terms of the creationist story you're telling. You're saying that while humans were degenerating, accumulating deleterious mutations and suffering dramatically shortened lifespans, their mutation rate was improving? And not just improving, but getting much, much better? Even if we allow that the long-lived guys had higher mutation rates because of lifespan (paternal mutation rates increase with age), mutation rates still had to have decreased by about a factor of ten since then. That's not consistent with the rest of your story.
Second, there's no physical mechanism by which this could happen. Mutation isn't a single process but many different ones, and no single effect will change the rate of all of them. Yet when we compare mutations happening today to ancient ones we've inherited, we see the same spectrum of different kinds of mutation. What could do that? For example, spontaneous deamination of methylated cytosines occurs at a rate pretty much determined just by basic chemistry and leaves a distinct signature of mutations, while incorporation of the wrong base during replication depends on the fidelity of the DNA polymerase and causes a very different set of mutations. What could have caused both basic chemistry and a particular enzyme to be so different a few thousand years ago, and in ways that happen to produce the same increase in mutation? It's not like we have any reason to think our DNA polymerase is any different than it ever was; today it's functionally the same as that seen in other primates. So what's supposed to have happened?
Third, we have independent evidence for the age of the mutations we're talking about. That evidence comes from associations between nearby genetic variants (technical term: "linkage disequilibrium"). When a new mutation occurs, the new variant appears on a particular chromosome having a particular set of other variants. It will be passed on to future generations along with those variants, unless recombination during meiosis breaks up the chromosome and combines it with a different one. But recombination only occurs about once every 80 million basepairs per generation, so the associations break down slowly. Recent mutations, then, will appear on long unbroken segments of chromosome, while older mutations will be on short segments. We thus have a good idea what recent mutations should look like. The mutation that confers lactose tolerance on many Europeans, for example, sits on a largely unbroken segment of DNA that is over a million basepairs long. Conveniently, we happen to know when that mutation spread in the European population, because researchers have tracked its rise in DNA samples from ancient skeletons: it became common around 4500 years ago. That's around the end of the period when the proposed high mutation rate would have been contributing lots of mutations. If we look at variants at 1% frequency throughout the genome, we find them on unbroken segments that are only 100,000 basepairs long, one tenth the length of the lactose mutation's segment. Variants at 5% frequency sit on even shorter segments, about half that length again. Thus, the bulk of the genetic variants we see in the 1% - 5% range are at least 10 to 20 times older (in generations) than the roughly 4500-year-old lactose mutation. (That's actually a lower bound, for complicated reasons I won't go into.)
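The arithmetic behind that age estimate is simple enough to sketch. A variant that arose g generations ago has had g meioses for recombination to chip away at its original haplotype on each side, so with the recombination rate above (about one crossover per 80 million basepairs per generation) the expected unbroken segment is roughly 1/(g·R) basepairs per side, or 2/(g·R) total. Inverting that gives a rough age from segment length. A minimal back-of-envelope sketch (the 25-years-per-generation figure is my own illustrative assumption, and this ignores the complications that make the real estimate a lower bound):

```python
R = 1 / 80e6  # recombination rate per basepair per generation

def age_in_generations(segment_bp):
    """Rough age of a variant sitting on an unbroken segment of the
    given length: expected length is ~2/(g*R), so g ~ 2/(R*length)."""
    return 2 / (R * segment_bp)

for name, length in [("lactose-tolerance variant", 1_000_000),
                     ("typical 1%-frequency variant", 100_000)]:
    g = age_in_generations(length)
    print(f"{name}: ~{g:.0f} generations (~{g * 25:.0f} years at 25 yr/gen)")
```

The 1 Mb lactase segment comes out around 160 generations (~4000 years), nicely matching the ancient-DNA date, while the 100 kb segments of 1%-frequency variants come out ten times older, as stated above.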
Fourth, the processes that lead to new mutations in the next generation also produce the mutations that give us cancer. If people live long enough today, pretty much everyone gets cancer eventually, since the dangerous mutations continue to accumulate the longer you live. If mutation rates were really, say, 70 times higher per generation a few thousand years ago, then essentially everyone would have gotten cancer while still young. If anything, those enormously long life spans back then would have required a much lower mutation rate than ours for people to avoid cancer that long; instead, it's being proposed that their mutation rate was much higher than ours. Given that mutation rate, most of them would have been dead of cancer by age 20.
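To see how badly a 70-fold rate increase collides with those long lifespans, only the scaling matters: mutation load goes as rate times time, so at 70 times today's rate a person's cells accumulate in one year what ours accumulate in seventy. A toy calculation (the 80-year "danger age" is my own illustrative stand-in for the age by which cancer is common today):

```python
FOLD_INCREASE = 70      # the hypothetical ancient multiplier discussed above
MODERN_DANGER_AGE = 80  # rough age by which cancer is common today (illustrative)

def equivalent_modern_age(age_then, fold=FOLD_INCREASE):
    """Mutation load scales as rate * time: at `fold` times today's rate,
    a person of age `age_then` carries the load of this modern age."""
    return age_then * fold

# A 20-year-old back then would carry the mutation load of a
# 1400-year-old today -- cancer long before adulthood.
print(equivalent_modern_age(20))

# Conversely, surviving to a patriarchal 900 years would require a rate
# roughly this many times LOWER than ours, not 70x higher.
print(900 / MODERN_DANGER_AGE)
```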
TL;DR version: the proposal that mutation rates used to be much higher has zero evidence in its favor, is not consistent even with creationist ideas, is contradicted by independent evidence about the age of mutations, is physically impossible, and would have caused everyone to die of cancer at a young age. Not a plausible proposal.
I will add that this kind of approach is pretty typical of creationists' engagement with scientific data. Where a scientist will look for an explanation that accounts for new data in a way that's consistent with all existing data -- since that's the best way of figuring out what's really going on -- creationists will make ad hoc proposals in an effort to make inconvenient data go away.