Daniel Dennett has written quite a bit on intentionality. While he tends to focus on the intentional stance (treating things as though they have rational minds and purposes as a shortcut to predicting their behavior), he also discusses the design stance, where we treat things as though they were designed in order to predict their behavior. And in cases where we know something was designed, it is clearly more useful to draw on that knowledge of design than to try to predict the behavior of, say, an alarm clock by taking it apart and working out the physics of its innards (the physical stance).
The sorts of entities so far discussed in relation to design-stance predictions have been artifacts, but the design stance also works well when it comes to living things and their parts. For example, even without any understanding of the biology and chemistry underlying anatomy we can nonetheless predict that a heart will pump blood throughout the body of a living thing. The adoption of the design stance supports this prediction; that is what hearts are supposed to do (i.e., what nature has "designed" them to do).
A thermostat doesn't want or intend to regulate temperature any more than a human body wants to regulate its temperature through the mechanisms of homeostasis. But describing things in such terms can be a shortcut, sparing us from having to refer directly to the complex physiological and biochemical mechanisms involved and derive the resulting behavior from them.
So I'm amenable to this, but it's important to keep in mind that it is a shortcut of sorts, an act of pretending in which we regard these things 'as though' they had design or intent.
I'm fine with this as a means to an end, but it doesn't really address the question I asked. As @durangodawood indicated, it instead leads one away from the answer.
As you posed it, the method suffers from an extrapolation fallacy (at least that's what I call it). The same thing happens a lot in moral arguments. Someone wants to convince me they've developed the perfect system for moral decision-making (such as utilitarianism, objectivism, or some such nonsense). They "prove" their system by using it to show that killing a person in order to steal their Nikes is wrong. What they're counting on is that we both "obviously" accept such an action as wrong; then, since we agree on that one moral point and their system decides it correctly, they extrapolate to insisting I accept every moral decision their system makes.
Don't work that way.
In this case, you're counting on an "obviously hearts don't have intent" agreement that I have not conceded.