From today on, I'm an agapist.
This is awesome and the best (only?) statement of my current ethical leanings that I know of.
I'd love to hear more of your thoughts on altruistic cause prioritization from this point of view.
I think agape "carries" our actions despite cluelessness, but only up to a point. When we start looking at the consequences of our actions over long timescales or at large scales, the math breaks down, and there's no way to fix it. In this sense, I think our world is what some have called a "morally uncooperative world": I care deeply about the well-being of sentient beings in the distant future, but I don't think there's any way to have any real idea of how to help them.
Personally, I'm a short-termist. I think short-termism is hard to defend within utilitarianism, but not so strange within agapism. Utilitarianism aims to maximize the well-being of all of space-time, and if it has no idea how to do that, or whether maximizing well-being locally maximizes well-being globally, it breaks, and I think the fixes are very ugly. Agapism, on the other hand, calls for actions done by and for agape: I care about the well-being of sentient beings in the distant future and have no idea how to help them, but that doesn't prevent me from caring about the well-being of sentient beings in the present and near future. Where utilitarianism sees no reason to help present and near-future sentient beings if we don't know whether doing so increases "overall well-being", agapism sees a reason: we love them. Not knowing how to help sentient beings in the distant future, or not knowing whether helping sentient beings in the present and near future helps sentient beings across "all of space-time overall", doesn't change the love we feel for the sentient beings of the present and near future whom we can help and whom we know how to help.
Regarding the subject of cause prioritization, I haven't dug very deep into it (I don't have much money to give), but when I do give money, I tend to give it to anti-animal-exploitation charities (like The Humane League) that are recognized as having a good impact. I also tend to think that "the West" (its culture, its human biocapital, etc.) is of enormous importance to the altruistic project, and that it's in danger. And I know it's a big topic right now, but I've never delved into the subject of AI.
What do you think of all this? I have to say that I haven't delved very deeply into the subject.

Yeah, I’m drawn to a view on which, roughly,

1. We decompose the “bearers of value” affected by our actions into those we’re clueless about and those we’re not clueless about, and
2. We decide based on the value-bearers we’re not clueless about, even though we may be clueless in aggregate.

(Indeed, some colleagues and I are actually writing a paper rigorously working out this view.)

This does have issues, though. The biggest issue for me is how to construe the “bearers of value”. If these are persons or experience-moments, we get the following problem: given a choice between A and B, I can find gerrymandered sets of people who exist only in A-worlds and B-worlds, respectively, and whose aggregate welfare I’m not clueless about. For example, I can tell some story about how donating to The Humane League leads to the existence of some large set of moral patients in the far future whose lives are not worth living. I’m not clueless about their being worse off than the animals I help in the short term by donating to THL. So why do I get to privilege the animals affected in the short term?
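In case a toy sketch helps, here's the shape of that worry in code. Everything here is made up for illustration (the bearer names, the numbers, and the `Bearer`/`verdict` names themselves); the rule is trivial, and which bearers you're allowed to count, and when "clueless" applies, is exactly the open question.

```python
# Toy sketch of the "decide on the bearers we're not clueless about" rule,
# and of how the verdict depends on which bearers you admit.
from dataclasses import dataclass

@dataclass
class Bearer:
    name: str
    clueless: bool          # are we clueless about how the action affects this bearer?
    expected_delta: float   # expected welfare difference, act vs. don't act (if not clueless)

def verdict(bearers):
    """Sum expected welfare differences over the bearers we're not clueless about."""
    total = sum(b.expected_delta for b in bearers if not b.clueless)
    return "acting looks good" if total > 0 else "acting looks bad"

# Region framing: the far future enters as regions we're clueless about, so it drops out.
print(verdict([
    Bearer("chickens helped by the THL donation", clueless=False, expected_delta=+100.0),
    Bearer("far-future spacetime regions", clueless=True, expected_delta=0.0),
]))  # -> acting looks good

# Person framing: admit a gerrymandered set we're (by construction) not clueless about,
# and the verdict flips, which is the worry above.
print(verdict([
    Bearer("chickens helped by the THL donation", clueless=False, expected_delta=+100.0),
    Bearer("gerrymandered far-future set who exist only if I donate", clueless=False, expected_delta=-1000.0),
]))  # -> acting looks bad
```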
(Cont'd)
Two responses:
1. If the bearers of value are e.g. spacetime regions, then this doesn’t seem like an issue. (Since I will in fact be clueless about the effects of donating to THL on any far-future spacetime region.)
2. There does seem to be something unloving about saying: “Sorry pigs whose suffering is immediately presented to me, I’m able to come up with a gerrymandered set of far-future moral patients who are determinately worse off as a result of trying to help you, so my reasons don’t actually favor helping you.” I’m confused about what to think of this.
Anyway, I currently prioritize short-term interventions for neglected animals, like shrimp and insects. Maybe related to your comment about the West, I also feel drawn to something along the lines of “it’s good to participate in the project of making the world wise, loving, just, cooperative, etc., even if you’re clueless about this on consequentialist grounds”, but I feel confused there.
Super curious to hear what you think of all that!
It's fascinating, and these are tough questions.
I had also considered conceiving of the bearers of value as spacetime regions before reading your second message, but thinking about it, I'm not sure that solves the problem. It's true that we're clueless about the impacts of THL donations on the spacetime region of "the far future globally", but can't we say that there's a spacetime region that "corresponds" to that of the gerrymandered set, about which we're not clueless? It may be a composite and unnatural spacetime region, but at the same time, isn't the spacetime region of the chickens helped by our donation to THL also composite and unnatural? (The helped chickens are scattered all over space and time.)
Also, there seems to be something unloving about telling the chickens "sorry, I can come up with a gerrymandered set of people from the far future who will suffer if I help you", but if we could talk on the phone with the gerrymandered set of people from the far future, wouldn't it also seem like there's something unloving about telling them "sorry, you're just a gerrymandered set"?
I think the agapist response to this situation is not straightforward.
Personally, I find that I treat anything after, I don't know, 2200 as part of the "noise region" (noise as in "irregular fluctuations"), and that one shouldn't be concerned with "noise consequences".
Is there a deep justification for this in agapism? Is it just a mechanism to protect one's sanity in a morally uncooperative world? Or is it a corruption of the word "agape" into a catch-all empty word used to justify any moral intuition?
It's hard to say, but I nonetheless think there may be a line of justification for this that is faithful to agapism. The idea is that "noise" (a word to include the noise region, the noise consequences, the people of the noise region...) is not a possible object of love. As soon as we include noise in our agape, as soon as noise becomes an object of love, love collapses under the weight: if we start imagining nightmarish scenarios for every action, every action becomes suspect; we become paralyzed, and no loving action is possible anymore.
I don't think a parallel answer is viable in utilitarianism: there, saying "we can't include noise in our calculations" really does imply that acting to maximize global well-being is impossible.
In the case of agapism, we're not saying that love is impossible, but that certain objects can't be loved, that certain objects (noise) aren't "lovable". You have to love what you can love, and if you start loving the people of just any gerrymandered set, love collapses, because you can imagine gerrymandered sets for any action.
But perhaps someone here could reply, "loving what you can love doesn't mean giving up on people from gerrymandered sets altogether; it just means not loving people from ALL gerrymandered sets". Maybe someone could say, "I can decide to love people from THIS gerrymandered set only, and by loving only THOSE, my love doesn't fall apart".
I'm not exactly sure how to answer that, but my sense is that love just doesn't work that way. There are some objects that are more "naturally lovable" than others, for example those that are tangible and whose vulnerability presents itself directly. But above all, in agape there's a notion of "unconditionality" that hardly seems compatible with such an arbitrary filter; why exclude from one's agape the whole noise region EXCEPT that one precise gerrymandered set?
To sum up, my current idea, which I'm not very confident about, is that agapism responds to the problem of gerrymandered sets with the idea that you have to love what you can love, and that noise (the people of the "noise region") is not something that can be an object of love (because otherwise love collapses). (To this one might add the point that arbitrary filters are incompatible with the unconditionality of agape.)
> can't we say that there's a spacetime region that "corresponds" to that of the gerrymandered set, about which we're not clueless
The thought is: No, because the moral patients in the gerrymandered set exist only in worlds where I take action A, whereas the spacetime region they occupy exists in both A-worlds and not-A-worlds. So about the region we can only say, “I’m clueless as to whether there’s more expected welfare in this region given A vs. not-A”; the claim “This gerrymandered set of moral patients doesn’t exist in not-A-worlds, so I’m not clueless that not-A is better for them” is a claim about the persons, not about any region they occupy.
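If it helps, here's a rough way to write that contrast down, in my own throwaway notation (nothing rigorous): $W$ is aggregate welfare, $R$ is any far-future spacetime region, $A$ is the action (e.g. donating to THL), and $S_A$ is a gerrymandered set of patients who exist only in A-worlds.

$$\text{Region framing:}\quad \mathbb{E}[W(R)\mid A]\ \text{vs.}\ \mathbb{E}[W(R)\mid \lnot A]\ \text{has no determinate winner, for any far-future } R.$$

$$\text{Person framing:}\quad W(S_A\mid A)<0\ \text{by construction, and } S_A\ \text{doesn't exist given } \lnot A,\ \text{so } \lnot A\ \text{is determinately better for } S_A.$$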
> but if we could talk on the phone with the gerrymandered set of people from the far future, wouldn't it also seem like there's something unloving about telling them "sorry, you're just a gerrymandered set"?
> I think the agapist response to this situation is not straightforward.
Yeah.
> I'm not exactly sure how to answer that, but my sense is that love just doesn't work that way. There are some objects that are more "naturally lovable" than others, for example those that are tangible and whose vulnerability presents itself directly
Yeah, that seems like a promising thought!
Thanks!