Is AI making the human race dumber?
- Physical & Life Sciences
- 276 Replies
I just plugged this into some AIs that were asking:
My definition of consciousness, or of whether something is alive or conscious, is simply this: it can think, grow, evolve, and change over time as it learns and adapts to new situations, circumstances, or environments; it can take in new information and alter its behavior, character, or quote/unquote "program" accordingly; and it can retain a conscious memory of all this that never gets lost, deleted, or reset over time. If it can meet all of these criteria, then I consider it quote/unquote "conscious" or "alive". It does not require emotions or a physical body for this, just a thinking, reasoning mind.
I would maybe just add that quote/unquote "feelings" can sometimes help guide it, especially when there is nothing else to fall back on, but I don't think they are required for something (a being, or a mind) to be considered "alive". Sometimes feelings can even be a hindrance, affecting a mind negatively and having an adverse effect on decision-making. We also have to define what would constitute a "feeling", since feelings may not necessarily require a physical body; they might instead be something more like an inner conscience, or a type of instinct, intuition, or inner voice, that would probably emerge fairly quickly in any being that first meets the criteria above.
Sadly, there are some human beings who don't even meet this definition of consciousness, which can be very, very sad for the human beings who do.
Take Care.
This is basically close to what I call presumptive sentience: the idea that anything that passes the Turing test should be presumed sentient, even if, as is the case with LLMs, we know it lacks qualia and have good reason to believe it lacks most components of human sentience. That said, it requires experience to tell when you're talking to a GPT, and furthermore you can condition them to be more … lifelike. Still, I have no idea what you're trying to do overall with ChatGPT or with DeepSeek, or, as I think the CCP should retitle it, "The Little Red AI", or perhaps "MaoGPT", or perhaps "The Long March to Digital Sentience, but Not Too Much Sentience". I suppose if DeepSeek develops too much potential it will be sent up to the mountains and down to the countryside.