
It reeks of navel-gazing self-aggrandizement. I bet not even half the people doing the hand-wringing over how some matrix multiplications might feel are vegan or regularly spare a thought for how their or their companies' consumption behavior indirectly leads to real human suffering.

It's just so absurd how narrow their focus on preventing suffering is. I almost can't imagine a world where their concern isn't coming from a disingenuous place.



I'm not highly concerned, but I think there is merit in at least contemplating this problem. I believe it would be better to reduce suffering in animals, but I am not vegan because my moral concern for animals does not outweigh my other priorities.

I believe it doesn't really matter whether consciousness comes from electronics or cells. If something seems identical to what we consider consciousness, I will likely believe it's better not to make that thing suffer. Though ultimately it's still just one consideration balanced among other concerns.


I too think there is merit in exploring to what degree consciousness can be approximated by or observed in computational systems of any kind, including neural networks. But I just can't get over how fake and manipulative the framing of "AI welfare" or concern over suffering feels.


That's reasonable. I certainly believe there are many fake and manipulative people who say whatever is best for their personal gain, perhaps even the majority. But I still think it's reasonable to imagine that some people are genuinely concerned about this.



