Aparna, to hear your voice, friend, after a long span of not being in the same space will have me returning to the story; as will your question, because I am stranded by AI.
I am not stranded in the outcome, or outcomes, in 2040; that part of the story enlivens my curiosity to see all the threads of change. I am stranded in reckoning with AI responding quickly with requested refinements. The human ability to create poetry or poetic, engaging prose is a powerful way for our hearts to connect with our healing, intentions, and outcomes.
At the beginning of my AI learning curve, my understanding is that AI has high energy consumption. How, then, was the collaboration described in the story possible if AI consumes so much energy? Does my question reveal my current ignorance of AI production? If yes, I will work on that and return to the story holding that correction.
Listening here to you, thread to thread, is welcomed learning, too.
Thank you as always for sharing your imaginings and your art. I found myself both hopeful at your vision and, honestly, rageful at the contrast between this future and the one that is currently being spun out by the fascists and autocrats at the helm of American government. (But how different is it really from where we were already heading when seen at a deeper time scale...?)
Per our conversation earlier, the ethics of AI are so incredibly thorny! As Debra mentioned below, there's the energy consumption component; one query to ChatGPT is estimated to generate 4.3 grams of carbon, as compared to 0.2 grams of carbon for every Google search. Beyond that, ChatGPT's parent company OpenAI used Kenyan laborers paid at less than $2/hour to train ChatGPT to screen out harmful content; many of these laborers have reported being severely traumatized by the descriptions of violence and sexual abuse used for this training. Then there are all the intellectual property concerns (your entry to the legal profession!)--the work of artists and writers being used to train these algorithms without their consent, and then those selfsame artists and writers losing work as that work is now generated by AI (at lower quality, with all the downstream implications of that for society).
ON THE OTHER HAND. It's a human technology. Like any other human technology, I think it's here to stay. AI has a lot of accessibility implications. Examples from my own personal life include a) my friends who are non-native English speakers in academia who can now produce and submit papers with a much lower entry cost (time, money, sweat, etc.); b) me using it to get a likely medical diagnosis and interim care plan for an issue that has been severely decreasing my quality of life but for which I won't be able to see a doctor for another month and a half because of medical shortages (and studies are showing that AI is actually way better at diagnosing conditions than human health care providers for a number of reasons, one of which is the diminishment of medical bias against believing women and people of color); and c) it helped me finish my dissertation when I didn't have access to the mentoring resources I needed. The same energy costs referenced above are a significant driver in the clean energy transition; traditional fossil fuel energy sources simply can't keep up, so even the new shitty administration is supporting novel energy technologies that will have impacts far beyond data centers.
All of that to say: holy shit, it's complicated. Personally, I think the biggest question is not, Should we be using AI? but rather, How can we move as quickly and as sensibly as possible towards ethical use of AI?