<snip>
I think there will come a time when all assessment is in person or by video call.
That’s going to be challenging for, say, a cohort of 25-30 MA students' dissertation-level assessments.
Are MA dissertations not currently subject to a viva-style process? My MSci project was. I would think that for the "big stuff" that would be manageable; the issue is going to be with continuous assessment. 40% of my marks in some courses were from weekly homeworks, I can see that using AI to achieve full marks could become very tempting.
It's actually easier, I think, to write your own words than to use an AI to put out words and then edit them appropriately. Editing, as I'm aware, is real work.
Maybe for you. For me, coming up with my own words in the first place is also real work.
I can understand the attraction of doing what Gramps49 is doing. But for me, in the context of an informal discussion forum for human beings, it would be completely self-defeating. The question I'm now asking myself is whether I can live with other people doing it.
I get that, I think. It has been some training for me to find my own voice, and I think sometimes using AI would be like denying myself the exercise.
Far as what another poster does, that's on them. I might read someone's posts less if I figure I'm dealing with an AI mediator. Or maybe the AI will lend their posts a certain generic quality. If I wanted to read a wikipedia article, well, I already have access to wikipedia.
Perhaps like you, I have a deep concern that norms and rules are respected, but I am not that concerned with winning an argument with people on the internet. If my writing isn't perfectly polished, or if my manners are a little strange, that's fine. I don't expect to get along with everyone I interact with. That's one reason AI doesn't tempt me. I'm content with my own awkward style of communication.
I would discourage other posters from using AI because I think it encourages mediocre writing and thus mediocre engagement. I just don't know if I'd forbid it unless we reached a point where its use had become so widespread as to start squelching authentic human interaction (or what passes for it via the internet.) If a few people need to use it to hammer an argument out, I'm not sure it needs to become a policy issue...yet.
If that helps.
@Bullfrog, it isn't just 'mediocre' writing or mediocre engagement, though; it is non-engagement with a topic in favour of regurgitated content cobbled together by AI from unknown sources, and this constitutes plagiarism. A recent case that drew a lot of attention was when New York Times reviewer Alex Preston used AI to do a book review for him and unwittingly reproduced large portions of another review of the same book published in the Guardian four months prior. Preston was fired.
I have been thinking about the differences between online discussion fora and the other kinds of spaces mentioned above, including educational/university, Wikipedia and so on.
I think maybe it comes down to purpose. On Wikipedia we are engaged in writing an encyclopedia, so AI slop threatens to undermine it as a trustworthy source of information. At university, students who take shortcuts with AI are being dishonourable because they are playacting at having knowledge when all they have done is spend a second or two writing a prompt.
What are we doing when we are engaging with posts on Reddit or here or on other social media?
I suspect we believe that we are doing one thing whilst actually doing something else. I think we are enthused by the idea of "community" with largely anonymous other people whereas perhaps we are mostly engaged by seeing the comments, either because they reinforce our worldview, amuse or challenge us in satisfying ways.
Would it change anything if we were to discover that the characters we had engaged with were AI? The sensations of challenge, amusement and so on were real. The learning we had from different viewpoints on a small point of difference is likely real.
We are not trying to "do" anything here like write an encyclopedia or pass an examination.
Maybe it only matters if someone else is using AI if we become aware that they are using AI.
I liken it to my IRL U3A* groups. People who are interested in many things - some incredible experts in their field. I'm in awe of the things the people in my photography group have done. Now applying their minds to photography.
It's the same here.
Here on the Ship we have top organists, singers, engineers, nuclear physicists - and everyday people like me who had and have ordinary and interesting lives.
(*UK University of the Third Age)
There is nowhere like the Ship and nothing could replace it if it sank.
That's why Admins and Hosts work so hard to drain the bilges and keep the sails trimmed.
Perhaps like you, I have a deep concern that norms and rules are respected,
Hmm. Maybe I could refer you back to "Junior-hosting-gate" by way of illustrating my general level of respect for norms and rules.
I would discourage other posters from using AI because I think it encourages mediocre writing and thus mediocre engagement. I just don't know if I'd forbid it unless we reached a point where its use had become so widespread as to start squelching authentic human interaction (or what passes for it via the internet.) If a few people need to use it to hammer an argument out, I'm not sure it needs to become a policy issue...yet.
It is in the nature of technological tools to change the human beings that use them. What's distinctive about AI is its capacity to impact what it means to be human, by virtue of its potential for intervening quite so comprehensively in what human beings do.
So, at the same time as I'm considering accepting the use of AI by individuals here, I'm aware there's a price we'd all be paying (or at least consequences we'd be facing) for doing so.
Would it change anything if we were to discover that the characters we had engaged with were AI? The sensations of challenge, amusement and so on were real. The learning we had from different viewpoints on a small point of difference is likely real.
We are not trying to "do" anything here like write an encyclopedia or pass an examination.
Maybe it only matters if someone else is using AI if we become aware that they are using AI.
One thought experiment this suggests is to gradually replace the other users of the forums until there is only one human being left, unaware that all the other characters they are engaging with are artificial. Then the last human being is replaced.
So, at the same time as I'm considering accepting the use of AI by individuals here, I'm aware there's a price we'd all be paying (or at least consequences we'd be facing) for doing so.
Did you read my post (above)? Because, if you did, you certainly ignored it.
I don't follow. I was responding to Bullfrog's post and to Basketactortale's post, not yours.
I'm afraid I'm unable to deduce or infer how the short section of my post that you quote relates to your post. I made a point about how technology impacts humanity and thus these forums. You made a point about these forums and community. What is it that I'm missing?
I don't believe the consequences of AI use are that dire for the Ship. I would hazard to guess that an individual could use AI for a month to help edit their posts without anybody even noticing. Generative AI is not going away and is going to become more omnipresent. I believe the Ship needs to find a way to live with generative AI rather than raising the drawbridges against this new, barbaric technology. (Now, did I use AI or not in composing this post?)
Some manifestations of it are very likely to go away, because it's a classic stockmarket bubble not far from bursting, and because nothing can be done about the hallucination generation that is baked into those tools. People need to be able to double-check for hallucinations personally, and if I'm using AI to compensate for an ability I've lost or never had, I can't easily do that.
Now, some uses of AI, you're right, can assist people with disabilities. I use (not on the Ship) editing and transcribing software that uses AI to make transcripts and improve audio. You still have to check it, but it's a big help before I get into the non-AI manual side of things. Some AI tools, like Goblin Tools for breaking down tasks into steps, really help me. Some ND people also use AI for judging/tweaking tone, and I have looked at that, but in the end I don't want an AI making my tone more neuronormative when I'm posting here. It seems like just another way of masking and not being accepted. These tools can also encourage neuronormalising ourselves, so I think it's a bit complicated.
You're right to raise these issues but I think we're in a massive stockmarket bubble at the moment so it's probably a good idea to be more cautious till that pops and we see how things readjust.
I don't believe the consequences of AI use are that dire for the Ship. I would hazard to guess that an individual could use AI for a month to help edit their posts without anybody even noticing. Generative AI is not going away and is going to become more omnipresent. I believe the Ship needs to find a way to live with generative AI rather than raising the drawbridges against this new, barbaric technology. (Now, did I use AI or not in composing this post?)
I have given an explicit ruling earlier in the thread that you should include a specific sentence if you use generative ai - have you done so? If not, why have you not complied with this requirement?
Whilst this discussion continues, any post created with generative AI involvement - I don’t mean spell check or predictive text, I mean software that creates all or part of the conceptual content - should contain the following sentence at the bottom of the post.
This post was created with the assistance of generative ai: [insert name of tool]
Per previous ruling on rolling policy update, AI should not be used for sourcing in serious discussion.
Doublethink, Admin
[/Admin]
@Gramps49 please can you confirm you are complying with this when posting on all of the forums.
@pease : I have no desire to bring back old arguments! And it is dangerous to infer larger POVs from what's observed from posts on an online forum, to be sure. I'll try to mind that gap.
But yeah, I think I'd stay very far from using AI myself, and will probably avoid engaging with posts that look like it. I dislike mediocre writing, and that's what AI is mostly good for. I'd hope the only folks who'd rely on it are folks who genuinely need it, and in that case I'd hope they'd get support for what they lack.
It's interesting to consider AI as "adaptive equipment." I'm used to considering those as tools for physical things like weighted spoons to correct an unsteady hand, crutches for a broken leg, braces to support a frail joint. The idea of using adaptive equipment to replace one's own mind is a frontier I find disturbing to contemplate. I rather like using my own, not outsourcing my ability to articulate my own thoughts.
The way we deal with AI technology is intrinsically linked to how we protect this community.
Thanks - I think I can see where you're coming from.
Blithely saying that we could accept the use of AI and the price we pay for its consequences downplays that price hugely.
There was nothing blithe about what I was saying. I was making the point that the consequences of accepting the use of AI are profound. It is not something we should enter into lightly.
Which was my point. The fact that this community is precious and worth protecting.
I wasn't disagreeing.
Having fun with 'thought experiments' on the subject minimises its possible effects imo.
I do not consider thought experiments to be inherently "fun" or "minimising". A thought experiment can be a serious way of considering the potential long-term consequences of various scenarios.
However, while we can implement policies and mitigate some of the effects, we need to accept that there's a limit to which we can protect this community from the progress of technology (or the march of time).
Bullfrog wrote: It's interesting to consider AI as "adaptive equipment." I'm used to considering those as tools for physical things like weighted spoons to correct an unsteady hand, crutches for a broken leg, braces to support a frail joint. The idea of using adaptive equipment to replace one's own mind is a frontier I find disturbing to contemplate. I rather like using my own, not outsourcing my ability to articulate my own thoughts.
Caissa: Most students who have disabilities at universities have invisible disabilities. Many types of adaptive equipment they use assist a functional limitation that may be cognitive in one way or another.
I was trying to make a rhetorical point. Your dashing phrase was not helpful. I am sorry that your time and spoons are low. I am frustrated that there seems to be little acknowledgement of and empathy for the various means by which generative AI makes effective communication more accessible for some individuals. I also don't appreciate Gramps49 being singled out in what seemed to be a response to what you characterize as "arsing about".
@Gramps49 deserves to be singled out because he's obviously using generative AI in posts in Purgatory. If anyone else is using AI there, they've managed to get their chatbot to write in their style.
I started noticing clear signs that some of @Gramps49's posts were composed by AI some time last year. He is not effectively communicating in these posts. He did not come up with a list of 10 or so items that indicate we might be moving into a new dark age on his own. It's not an area of knowledge for him. He doesn't organize his thoughts like that. And he doesn't write like that. AI did all that, and when he posts that shit it's not @Gramps49 we're being asked to interact with.
Frankly, I think he felt stung by being regularly challenged by @Nick Tamen and me to provide support for his claims and took to AI for help.
Having fun with 'thought experiments' on the subject minimises its possible effects imo.
I do not consider thought experiments to be inherently "fun" or "minimising". A thought experiment can be a serious way of considering the potential long-term consequences of various scenarios.
However, while we can implement policies and mitigate some of the effects, we need to accept that there's a limit to which we can protect this community from the progress of technology (or the march of time).
Thank you. That's really clear.
I always had the idea that 'thought experiments' were for amusement rather than serious tools to challenge assumptions.
I was posting with my feelings - and I feel very protective towards the Ship.
Bullfrog wrote: It's interesting to consider AI as "adaptive equipment." I'm used to considering those as tools for physical things like weighted spoons to correct an unsteady hand, crutches for a broken leg, braces to support a frail joint. The idea of using adaptive equipment to replace one's own mind is a frontier I find disturbing to contemplate. I rather like using my own, not outsourcing my ability to articulate my own thoughts.
Caissa: Most students who have disabilities at universities have invisible disabilities. Many types of adaptive equipment they use assist a functional limitation that may be cognitive in one way or another.
That's a thing. You could consider AI akin to a communication board. But there is a danger, which has been reported, where communication boards can be misused so that the board itself is being heard over the person who is "using" it. At that point, the equipment is becoming an impediment to communication.
You have to be very careful using adaptive equipment. Building a dependence upon a crutch that's not needed is a kind of abuse. People shouldn't be trained to limp.
And that itself is a very sensitive conversation, probably deserves an Epiphanies thread if there's enough interest.
I wasn't thinking of communication boards, which are quite controversial as you note. The university's duty to accommodate means adaptive equipment is part of my everyday role at a university. That said, I will leave it here and avoid us inching into Epiphanies.
I'm just a lurker, but I'd like to respond to Caissa's remarks about AI being an assistive technology. I have every sympathy with people who struggle with reading and writing - my son is diagnosed with ADHD, and his dad (my ex) is undiagnosed but likely has some type of reading/writing disability as well as ADHD. (Unfortunately in rural east Texas in the 80s, the intervention for learning disabilities was the paddle, so ex has never been appropriately assessed.) I have watched them both struggle with long blocks of text and have no doubt they need assistive technology.
I'm having trouble seeing why generative AI such as chatGPT is a more appropriate assistive technology for reading and writing than other tools that already exist. For people like my son's dad, text-to-speech readers and dictation software have been lifesavers. Crucially, they aren't doing the "thinking" for the person (AIs can't think anyway), they're just putting the information into a form that the person can access properly. My ex listens to and understands college-level textbooks on audio, but struggles to read a children's book if you give it to him on paper.
I've seen a number of writers express on social media that the process of writing is how they develop their thoughts. The iterative steps of drafting, rereading their work, refining it and so on are how their initial idea develops into a fully fledged, thought-out piece of writing. Of course this doesn't have to happen with pencil and paper, it can be done with listening and dictation software as mentioned above.
On a discussion board, what we're sharing is our considered thoughts and feelings. If we skip the reading and writing process for our posts, there's a real risk that we are skipping the thinking process as well, and perhaps even harming our brains. I think we should take advantage of existing adaptive tools for those who struggle with printed text, and avoid AI.
@Doublethink I can confirm I am complying with the rules of this board. Since my two fubars last week, I have written out every response, every message, in my own hand, and I do not plan to resort to AI writing now or in the future.
@Gramps49 I had said you could use generative ai assistance on an interim basis whilst the subject was still under discussion, if you included the following sentence:
This post was created with the assistance of generative ai: [insert name of tool]
There are many different ways to use Generative AI, Anti-Social Alto. Most people immediately think of tools like Generative AI writing for someone. There are many tools out there that use Generative AI to help students with disabilities to organize their thoughts, create summaries for studying, etc. I appreciate the article you linked to. This is an emerging field. I will only note that widespread availability of the printed word led to concerns that people's memories would no longer need to be exercised and understanding would be decreased. https://www.historyofinformation.com/detail.php?entryid=3894
@Antisocial Alto : I'd agree, generally. There are uses for AI that are more or less valid, but simply using it as a shortcut to make an argument instead of thinking through that argument yourself seems rather dangerous as a habit.
The exercise of thinking through what you are saying is not the same thing as the exercise of memorizing texts for the sake of rote recitation.
Socrates thought memorization and understanding were inextricably linked. One of the first examples of humans believing that a new technology would lead to us losing some of our essence.
Okay? To an extent they are. But even if the concern was overstated, it doesn't mean that all such arguments are false. The record on recent tech isn't great (even before we look at the worrying signs on things like AI-led psychosis).
I encounter and use many of these systems professionally, I'm not going to waste my life debating with them though.
I've seen guidelines like the one Gramps posted above previously - Oxford University used to have something similar on their website - but the problem with using them that way is that they tend to be too agreeable and converge on reasoning too early to be particularly robust or Socratic.
Memorization isn't the same thing as self-reflection. I think self-reflection exercises a different cognitive capacity than memorization, and a more important one.
That said, while I have my disagreements with Plato, even there I think he had a point. We did lose some things. And we are giving more and more of ourselves up with every single device.
I do make a habit - for my own well being - of memorizing certain important things so that I don't lose that capacity entirely. I do recognize the anxiety in becoming overly-dependent on our tools and losing our powers of thought in the process.
Are MA dissertations not currently subject to a viva-style process? My MSci project was. I would think that for the "big stuff" that would be manageable; the issue is going to be with continuous assessment. 40% of my marks in some courses were from weekly homeworks, I can see that using AI to achieve full marks could become very tempting.
Could be a case for taking on more humans?
Would it change anything if we were to discover that the characters we had engaged with were AI? The sensations of challenge, amusement and so on were real. The learning we had from different viewpoints on a small point of difference is likely real.
We are not trying to "do" anything here like write an encyclopedia or pass an examination.
Maybe it only matters if someone else is using AI if we become aware that they are using AI.
I have met at least ten at Shipmeets IRL, two recently in Bristol at the Art Gallery.
Go and visit All Saints. You'll see we are real. It's not just discussion, amusement and challenge.
It's community. Many of us have known each other for nearly thirty years. Go and read the History of the Ship section.
https://shipoffools.com/the-faqs/ancient-history/
But I do think anyone who uses AI in any substantive way here should declare it. As I do with my art work. Then we know where we stand.
Did you use it to generate your answer? Doubtful. But maybe you did just to make a point.
But yeah, I think I'd stay very far from using AI myself, and will probably avoid engaging with posts that look like it. I dislike mediocre writing, and that's what AI is mostly good for. I'd hope the only folks who'd rely on it are folks who genuinely need it, and in that case I'd hope they'd get support for what they lack.
It's interesting to consider AI as "adaptive equipment." I'm used to considering those as tools for physical things like weighted spoons to correct an unsteady hand, crutches for a broken leg, braces to support a frail joint. The idea of using adaptive equipment to replace one's own mind is a frontier I find disturbing to contemplate. I rather like using my own, not outsourcing my ability to articulate my own thoughts.
There was nothing blithe about what I was saying. I was making the point that the consequences of accepting the use of AI are profound. It is not something we should enter into lightly.
I wasn't disagreeing.
I do not consider thought experiments to be inherently "fun" or "minimising". A thought experiment can be a serious way of considering the potential long-term consequences of various scenarios.
However, while we can implement policies and mitigate some of the effects, we need to accept that there's a limit to which we can protect this community from the progress of technology (or the march of time).
Caissa: Most students who have disabilities at universities have invisible disabilities. Many types of adaptive equipment they use assist with a functional limitation that may be cognitive in nature.
You heavily implied you were not - stop arsing about, it is not helpful - neither myself nor Alan have the time or spoons for it.
Doublethink, Admin
I started noticing clear signs that some of @Gramps49's posts were composed by AI some time last year. He is not effectively communicating in these posts. He did not come up with a list of 10 or so items that indicate we might be moving into a new dark age on his own. It's not an area of knowledge for him. He doesn't organize his thoughts like that. And he doesn't write like that. AI did all that, and when he posts that shit it's not @Gramps49 we're being asked to interact with.
Frankly, I think he felt stung by being regularly challenged by @Nick Tamen and me to provide support for his claims and took to AI for help.
Thank you. That's really clear.
I always had the idea that 'thought experiments' were for amusement rather than serious tools to challenge assumptions.
I was posting with my feelings - and I feel very protective towards the Ship.
That's a thing. You could consider AI akin to a communication board. But there is a danger, which has been reported, where communication boards can be misused so that the board itself is being heard over the person who is "using" it. At that point, the equipment is becoming an impediment to communication.
You have to be very careful using adaptive equipment. Building a dependence upon a crutch that's not needed is a kind of abuse. People shouldn't be trained to limp.
And that itself is a very sensitive conversation, probably deserves an Epiphanies thread if there's enough interest.
I'm having trouble seeing why generative AI such as chatGPT is a more appropriate assistive technology for reading and writing than other tools that already exist. For people like my son's dad, text-to-speech readers and dictation software have been lifesavers. Crucially, they aren't doing the "thinking" for the person (AIs can't think anyway), they're just putting the information into a form that the person can access properly. My ex listens to and understands college-level textbooks on audio, but struggles to read a children's book if you give it to him on paper.
I've seen a number of writers express on social media that the process of writing is how they develop their thoughts. The iterative steps of drafting, rereading their work, refining it and so on are how their initial idea develops into a fully fledged, thought-out piece of writing. Of course this doesn't have to happen with pencil and paper, it can be done with listening and dictation software as mentioned above.
On a discussion board, what we're sharing is our considered thoughts and feelings. If we skip the reading and writing process for our posts, there's a real risk that we are skipping the thinking process as well, and perhaps even harming our brains. I think we should take advantage of existing adaptive tools for those who struggle with printed text, and avoid AI.
And didn’t use it for sourcing.
Thank you for confirming.
The exercise of thinking through what you are saying is not the same thing as the exercise of memorizing texts for the sake of rote recitation.
I encounter and use many of these systems professionally, I'm not going to waste my life debating with them though.
I've seen guidelines like the one Gramps posted above previously - Oxford University used to have something similar on their website - but the problem with using them that way is that they tend to be too agreeable and converge on reasoning too early to be particularly robust or Socratic.
That said, while I have my disagreements with Plato, even there I think he had a point. We did lose some things. And we are giving more and more of ourselves up with every single device.
I do make a habit - for my own well being - of memorizing certain important things so that I don't lose that capacity entirely. I do recognize the anxiety in becoming overly-dependent on our tools and losing our powers of thought in the process.